The world-renowned philosopher argues that our inner worlds and religious ideas can all be explained as evolutionary functions of the brain
Daniel Dennett isn’t bashful about speaking his mind. A philosopher and cognitive scientist whose thinking is rooted in evolutionary theory, he is known around the world for his rousing defense of atheism and his detailed analysis of how the physical structures of the brain give rise to consciousness and what some might call our souls. His scholarly books are bestsellers and his TED talks garner millions of views.
He’s also been a professor of philosophy at Tufts since 1971, a University Professor since 2000, and longtime co-director of the university’s Center for Cognitive Studies. Many scholars of his renown might focus on a select group of graduate students, but Dennett has taught undergraduates his whole career, and enjoys that connection. “The undergraduates are great at asking good questions—they are bold as brass,” he said. He coauthored a book with one of his former undergraduate students, Matthew Hurley, A06—Inside Jokes: Using Humor to Reverse-Engineer the Mind.
Dennett’s first bestseller was Consciousness Explained (1991), which set the stage for a number of other works, the latest being From Bacteria to Bach and Back. “His special focus is the creation of the human mind,” Joshua Rothman wrote in a 2017 profile in The New Yorker. “Into his own, he has crammed nearly every related discipline: evolutionary biology, neuroscience, psychology, linguistics, artificial intelligence.”
He wholeheartedly subscribes to Darwin’s idea that all of human creation—from Michelangelo’s art to your own thoughts as you watch TV—has “created itself, not in a miraculous, instantaneous whoosh, but slowly, slowly,” through natural selection. And while some thinkers want to impute human consciousness to a soul, Dennett will have none of that. In his view, consciousness is a brain function and the brain is wholly material.
One of the so-called Four Horsemen of the New Atheism—along with Richard Dawkins, Sam Harris, and Christopher Hitchens—Dennett explored the idea of religion as a natural phenomenon in his 2006 book, Breaking the Spell. One staunchly Catholic critic compared Dennett’s take on religion to that of a “tone-deaf music scholar,” but Dennett takes such criticism in stride.
Artificial intelligence, or AI, is another focus for Dennett. It’s something that worries him—and doesn’t.
“I think it’s absolutely possible that we can have conscious robots—conscious AI,” he said. “But I don’t think it’s desirable. We don’t need artificial colleagues, because if we really succeeded, then they would be precisely as autonomous as we are. And we are very dangerous.” On the plus side, he doesn’t think that’s coming anytime soon, even within a half century.
In person, Dennett is remarkably down to earth—with his white beard and hand-whittled walking stick, he looks like a gentleman farmer. Indeed, for many years he and his wife maintained a farm in Maine, where he made his own apple brandy and blueberry wine. He is also a jazz pianist and an avid sailor.
Tufts Now caught up with Dennett to learn more about what makes our minds tick, what’s wrong with advanced AI, and how culture is subject to evolutionary pressures, just like everything else.
Tufts Now: How do you define consciousness?
Daniel Dennett: One of the problems with defining consciousness is that it’s several different things—and it means different things to different people. There is a sense of consciousness in which starfish are conscious. Worms are conscious. Even trees, even bacteria are conscious. But there’s one dividing line between the amoeba and the professor in the consciousness department, which really marks off a huge difference: language.
It’s language that enables us to ask ourselves questions and reflect on our own experience. Frogs notice all kinds of things in their world, but I don’t think they can notice their noticings. I don’t think they can dwell on them. That whole sort of stairway of curiosity that we have built-in doesn’t exist for any species but us, and that makes a huge difference. Human consciousness is to animal consciousness roughly as language is to birdsong. Birdsong communicates, but not much.
Consciousness is not a part of the brain. It’s what brains can do. Particularly it’s what human brains can do when they are mature and well-designed, and a lot of that design is not innate. It’s not in your genes—in the same way German or English isn’t in your genes. Many of the features of consciousness are things that basically get installed culturally, at mother’s knee. It’s part of your rearing, your upbringing, your learning of languages. Culture is what you learn language in, and in learning language you learn a lot of consciousness, too.
What’s the evolutionary benefit of consciousness?
I might ask a rhetorical question: who says it’s got a function? Maybe consciousness is just an affliction. Maybe it just evolved as a sort of burden that we have to carry. Maybe it’s something else that benefits us, and consciousness is what we carry around as the price for having this other, better thing.
But what’s it for, then?
Consciousness is for control. In engineering control theory, we talk about degrees of freedom—how many degrees of freedom does that robot have in its arm? One or five—each one has to be controlled.
A degree of freedom is an opportunity for control. You have millions of degrees of freedom, because it’s not just “where do I put my arms now, or my feet?” There’s “what do I think about now?” And you can think about anything. You’re the ultimate chameleon when it comes to having degrees of freedom, because you can think about things here, or far away in the past and the future. You can plot schemes, write novels.
And there’s no algorithm for controlling them all. This is very unlike traditional AI, good old-fashioned AI, where you have a controller at the top that parcels out tasks and says, “Now you do this. Now you do that.” There’s a sort of traffic cop that’s running the show. But there is no traffic cop in your head.
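To make the control-theory notion concrete, here is a minimal Python sketch (an illustration, not anything from the interview) of a toy robot arm in which each joint is one degree of freedom: one independent dimension that some controller has to set.

```python
# A toy arm whose state is one angle per joint. Each joint is a degree of
# freedom: an independent dimension, and thus an opportunity for control.

class ToyArm:
    def __init__(self, num_joints: int):
        self.angles = [0.0] * num_joints  # one controllable angle per joint

    def apply_controls(self, commands: list[float]) -> None:
        # Every degree of freedom needs its own control signal; omit one
        # and that joint simply goes uncontrolled.
        assert len(commands) == len(self.angles)
        self.angles = [a + c for a, c in zip(self.angles, commands)]

arm = ToyArm(num_joints=5)                       # five degrees of freedom...
arm.apply_controls([0.1, 0.0, -0.2, 0.3, 0.0])   # ...five control decisions
print(arm.angles)
```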
Then who, or what, is in charge? We have billions of neurons in our brains, each independent and yet somehow working together, but there’s no boss neuron.
That’s very, very important. That’s one of the things that I’ve been working on with Tufts biologist Michael Levin [A92] and a few colleagues—how to theorize about the collectivity, the cooperation and competition among individual cells, that makes human consciousness possible. In my latest book, From Bacteria to Bach and Back, I give the example of the Australian termite castle. Millions of termites build this fantastic castle, without a leader.
Antoni Gaudí’s Sagrada Família basilica in Barcelona looks something like that termite castle—but it was designed from the top, by a creative genius. Gaudí knew what he was doing, and he ordered his subordinates around, who in turn ordered their subordinates.
Now here’s the puzzle. Gaudí’s brain is more like a termite colony than you might expect. How do you get an intelligent designer like Gaudí or Shakespeare or Einstein or Turing? How do you get a brain that can do that out of a lot of neurons—each of which is individually myopic and more clueless than termites or ants? That’s what we’re working on.
Why do people have such a hard time with the idea that our brain is basically a piece of meat, and all this great stuff comes out of it?
If you took a cellphone and gave it to some hunter-gatherers in the Amazon, they’d think it was magic. They’d be utterly unable to make sense of how that was possible. In the seventeenth century, the great scientist Descartes wondered if there could be a mechanical mind, and said no. That’s because the only mechanisms he could imagine were things with, let’s say, 10,000 moving parts or a million moving parts.
The real beauty of the twenty-first century is we now have tools to help us imagine how machines with trillions of moving parts do what they do. Descartes didn’t have that, so it’s no wonder he and others thought there had to be a sort of magical mystery thing in there somewhere—the res cogitans, the thinking thing, which is like an immortal soul. We’re learning how to pull those questions back into the real world and ask them and answer them. It’s only now that we’re beginning to be able to even frame hypotheses about how it works.
So if our brains don’t have a top-down boss making decisions and giving orders, how do they work?
Every bit of control—from how you walk to what you think about to when you decide to eat or whether you court that lady—is fundamentally a matter of opponent processes. They’re pulling in opposite directions, and sometimes you have what my late dear friend Oliver Selfridge called a pandemonium, where you have all these little demons saying, “Me! Me! I want to do the job! Let me do it!” All these volunteers crowding around, ready to do the job, and they sort of duke it out, and the decider is not some wise judge that understands.
Instead, it’s sort of an internal micro-political process where one side wins and the other loses, and the one that wins gets to steer the ship for a little bit. This is going on all the time, and there’s no captain. There just seems to be a captain. The self is itself a virtual governor, not an actual place in the brain where the governor sits.
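As a rough sketch of what a Selfridge-style pandemonium looks like in miniature (the demons and their scores here are invented for illustration, not drawn from Selfridge’s or Dennett’s work), each demon shouts with a strength reflecting how well it fits the current situation, and the loudest shout simply wins; no judge understands the options.

```python
import random

def shout_strength(demon: str, situation: dict) -> float:
    # Stand-in for how well a demon matches the situation: a noisy score.
    return situation.get(demon, 0.0) + random.gauss(0, 0.1)

def pandemonium(demons: list, situation: dict) -> str:
    # Every demon shouts at once; whoever is loudest steers the ship.
    # There is no central judge that understands: winning IS the decision.
    return max(demons, key=lambda d: shout_strength(d, situation))

# Hypothetical demons competing for control of the next moment:
demons = ["eat", "work", "daydream"]
situation = {"eat": 0.4, "work": 0.6, "daydream": 0.3}
print(pandemonium(demons, situation))   # usually "work", but not always
```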
Sometimes I think I shouldn’t have sweets and then I just eat them—but other times I have more of what we call self-control and I don’t. How does that work?
Consciousness is all about control. Humans have developed a whole lot of techniques for self-control, and sometimes they work and sometimes they don’t. George Ainslie calls it inter-temporal bargaining. We all know it. One of the best examples is Ulysses, who has himself tied to the mast and puts wax in the ears of his rowers so he can listen to the Sirens sing their songs. You plan ahead, and you disable yourself in a certain way to get yourself past an otherwise irresistible urge.
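Ainslie models this bargaining with hyperbolic discounting, in which a reward’s present value is its amount divided by (1 + k × delay). A quick worked example with made-up numbers shows the preference reversal that makes precommitment rational: viewed from a distance, the larger, later reward wins; at the moment of temptation, the smaller, sooner one overtakes it.

```python
def discounted(amount: float, delay: float, k: float = 1.0) -> float:
    # Hyperbolic discounting: value falls off as 1 / (1 + k * delay).
    return amount / (1 + k * delay)

small_soon = 10.0    # say, the sweets
large_later = 30.0   # say, the health payoff

# A week before the temptation, the later reward still looks better --
# this is the moment to tie yourself to the mast:
print(discounted(small_soon, 7), discounted(large_later, 14))   # 1.25 vs 2.0

# At the moment of temptation, the ranking flips:
print(discounted(small_soon, 0), discounted(large_later, 7))    # 10.0 vs 3.75
```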
There’s an old Maine joke that sums it up. A farmer is in the outhouse, and when he pulls up his pants, a quarter rolls out of his pocket and falls down the hole. He swears and pulls out his wallet and throws down a $5 bill. Someone asks him later, “Why’d you do that?” He replied, “You don’t think I’m going down there for a quarter, do you?” He’s manipulating himself.
As AI becomes more and more sophisticated, is there a possibility that machines one day will have or could have consciousness?
Yes. I think it’s absolutely possible that we can have conscious robots, conscious AI. But I don’t think it’s desirable. Smart tools, yes. But we don’t need artificial colleagues—because if we really succeeded, then they would be precisely as autonomous as we are. And we are very dangerous. Unless we could be sure that they would have the same opportunities for learning how to behave and what to value in culture, it would be reckless for us to give them that much autonomy.
Is that possibility of conscious AI coming soon?
Given that I’m pessimistic about the safety of conscious AIs, I’m also optimistic about the difficulty of getting there. I don’t think we’re anywhere close. I don’t think it’s coming in ten years, or twenty years, or fifty years—and I don’t think that the wonderful successes of deep learning—machine learning, the latest wave of enthusiasm in AI—are giving us the architectures of conscious agency.
It’s giving us tissues that can do amazing things in the way of gathering information, sorting it, and discriminating it, but that’s not making an agent that’s reflective and that can carry on a real conversation, meaning what it means and knowing what you mean when you talk to it.
Cultural evolution is something else you reference. What does it mean?
A lot of species have elements of culture. Chimpanzees have some grooming rituals and the like. But only one species has explosively cumulative culture, and that’s us. Of course, that depends on our having language. All of technology and civilization, art, science, religion, all these are the fruits of human culture. How did it get that way?
It evolves the same way our bodies evolved, by natural selection. There is natural selection of culture, where cultural items—Richard Dawkins calls them memes—compete for residency, if you like, in human brains, and some win and some lose.
Every word in our language is such a competitor. It’s beaten out the competition. It has a history. It has a genealogy. It’s an evolved thing. Studying cultural evolution from a Darwinian perspective comes very unnaturally to a lot of people, especially in the humanities. They still want to think of culture as, in effect, just the product of human genius.
But they should remember first of all that human culture—from high culture to all the cultural junk, of which there’s tons—has to be explained. These are things that spread unnoticed and unbeknownst to us—in the same way that words go extinct, new words appear and the language we speak today is not the language our grandfathers spoke.
That wasn’t done deliberately—it’s just like the way cheetahs’ legs have been optimized for running. Human culture is optimized for spreading, taking hold in people’s heads, and influencing behavior.
How do you study that in an objective way, since it seems to be happening in the background, not spread intentionally by people?
We have wonderful new technologies for studying the evolution of culture, because so much of culture now is represented on the internet. It’s easy to design large-scale data-mining programs that can study the evolution of language in particular. Words are my favorite example of cultural items, and we can track them, we can count them. We can see how they change over time—looking at subtle shifts in pronunciation, in meaning, in popularity.
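As a minimal sketch of that kind of tracking (the corpus and words here are invented for illustration), one can count a word’s relative frequency across dated texts and watch it rise or fall:

```python
from collections import Counter

def frequency_by_year(corpus, word):
    # Relative frequency of `word` per year: occurrences / total tokens.
    counts, totals = Counter(), Counter()
    for year, text in corpus:
        tokens = text.lower().split()
        counts[year] += tokens.count(word.lower())
        totals[year] += len(tokens)
    return {y: counts[y] / totals[y] for y in sorted(totals)}

# A hypothetical mini-corpus of (year, text) pairs:
corpus = [
    (1900, "the telegraph was a marvel of the age"),
    (2000, "the wireless network in the cafe was slow"),
    (2020, "everything is wireless now even the doorbell is wireless"),
]
print(frequency_by_year(corpus, "wireless"))    # rises over time
print(frequency_by_year(corpus, "telegraph"))   # fades away
```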
Taylor McNeil can be reached at taylor.mcneil@tufts.edu.