Daniel Dennett’s Been Thinking About Thinking—and AI

The longtime philosophy professor recounts his eventful life in a new book, celebrates evolution, and issues a warning about what’s really dangerous about AI

In his new memoir, I’ve Been Thinking, University Professor emeritus Daniel C. Dennett tells many stories of his life, but as the title indicates, the emphasis is on the life of the mind. Not just his mind, but all minds. That’s because Dennett has spent much of his career as a philosopher working on issues related to consciousness and cognition, collaborating with scientists of all stripes.

Born in Boston, Dennett lived in Beirut as a small child, then returned to the Boston area after his father died. After a couple of years at Winchester High School, he went off to Exeter, where he took up sculpture, even garnering a show in a Newbury Street gallery.

Given that many of his classmates at Exeter were heading to Harvard, he was expected to go there too, but chose Wesleyan instead. While there, he read a book by the philosopher W. V. Quine, a professor at Harvard, and then transferred there so he could take classes with Quine, intending to tell him where he was wrong in his reasoning. Perhaps not surprisingly, Quine did more of the convincing.

In the early 1960s, Dennett and his wife, Susan, headed to England, where Dennett earned a D.Phil. in philosophy at Oxford. Then, without even an interview, he landed a job at the newly established University of California at Irvine, in the halcyon days for academics, as higher education expanded across America.

Having both grown up in Massachusetts, he and his wife had long wanted to have a place to live in Maine. With some inherited money, they bought a large if dilapidated property in Blue Hill in 1970. He shopped around for a job in the Boston area and soon landed at Tufts, teaching philosophy and later also serving as director of the Center for Cognitive Studies. He retired in 2022.


In the book, Dennett describes his intellectual growth and the role he played in many developments in philosophy over the years. There’s plenty of inside baseball, but it is lively reading even for those with no stake in the game.

Dennett also devotes a section to academic battles, including what he calls academic bullies, whom he often called out when no one else would. “They have ended people’s careers—they have squashed really good people when they disagree with them,” he says. “I was pretty well immune to that, and recognized I should use my relative invulnerability to say what others were saying over drinks in the bar late at night, but didn’t dare say in public.”

I’ve Been Thinking shows Dennett as a Renaissance man. Art and music are vital components of his life, but he also took up many other activities: sailing, pottery, carpentry, windsurfing, running a cider press (which led to making Calvados with a still), and driving a tractor (he worked out thorny philosophical problems while in the tractor seat, a practice he calls “tillosophy”).

Through AI, “we will have created the viruses—the mind viruses, the large-scale memes—that will destroy civilization by destroying trust and by destroying testimony and evidence. We won’t know what to trust.”

Daniel Dennett

He’s not shy about his accomplishments—he reports in the book that the philosopher Don Ross “once said of me, ‘Dan believes modesty is a virtue to be reserved for special occasions.’” But his work is indeed impressive. He’s written many popular books, including Consciousness Explained and Darwin’s Dangerous Idea, has TED talks with millions of views (watch “The Illusion of Consciousness”), and won the 2012 Erasmus Prize for exceptional contributions to culture, society, or social science.

After retiring, he wrote his memoir and began issuing warnings about the dangers of artificial intelligence. His central concern is not that AI is going to take away jobs, but that it has the potential to destroy trust, a linchpin of civilization.

“Today, for the first time in history, thanks to artificial intelligence, it is possible for anybody to make counterfeit people who can pass for real in many of the new digital environments we have created,” he wrote recently in The Atlantic. “These counterfeit people are the most dangerous artifacts in human history, capable of destroying not just economies but human freedom itself.”

Tufts Now spoke with Dennett recently about his life, philosophy, science, and the perils of unfettered AI.

Tufts Now: What prompted you to write a memoir?

I thought it was a good time to reflect on what I’d experienced and accomplished and what thinking tools I had developed that I wanted to spread further and let people know how I did it. It wasn’t magic. It was all just plain using my tools. I see philosophy as a practical discipline. It’s fixing up your thinking. It’s repairing the shoddy carpentry that you do when you think.

You are a philosopher, but you spend a lot of time with scientists. Why is that?

It all started when I had a discussion with my fellow graduate students in Oxford about what’s going on when your hand goes asleep, and you’ve got this alien hand that’s bumping into your face and it’s numb and you can’t control it. I’d experienced it—I think everybody has probably.

I wanted to know what’s going on. How does it work? The other philosophers thought, that’s not philosophy. I said, well, it should be. You’ve got to get out there and find out what’s happening in your body, what’s happening in your brain. So I started learning. I didn’t even know what a neuron was back then, in the early ’60s, but I soon learned. And fortunately, I got introduced to some good mentors and tutors who gave me world-class tutorials.

I was lucky to get in on the ground floor of cognitive neuroscience. Some of the early pioneers in that field were my heroes and mentors and friends.

A central theme of your work is about the nature of consciousness—that it’s not magic, it’s a physical property.

I’ve spent unimaginable numbers of my prime-time hours trying to explain that I’m not saying that consciousness isn’t real. I’m just saying it isn’t what you think it is. It’s not real magic. It’s like card tricks. Evolution is the great conjurer, the sleight-of-hand artist. The things that nature has cobbled together through natural selection are breathtakingly clever, ingenious, and sometimes spookily sly. Who would ever guess that such an off-the-wall weird phenomenon as consciousness could be invented not as an intelligent designer’s idea, but just as a product of natural selection?

You’ve also been active bringing the atheist point of view to the public, including in your book Breaking the Spell: Religion as a Natural Phenomenon. What do you say to people who take exception to your point of view?

You don’t need the supernatural. Nature is wonderful enough—it is breathtaking. In my book, Darwin’s Dangerous Idea, right at the end, I have a little paragraph that’s just a hymn to reality and evolution and how wonderful it is. You don’t need fiery chariots with flying horses and gods. The cold, hard, physical truth is awesome enough.

“Evolution is the great conjurer, the sleight-of-hand artist. The things that nature has cobbled together through natural selection are breathtakingly clever, ingenious, and sometimes spookily sly.”

Daniel Dennett

Another aspect of it, which I don’t really go into in the memoir, is the trickle-down theory of importance. Some people think nothing could be important, that their lives have no meaning unless there’s a super-duper important thing, namely God—that all importance descends from the creator.

Well, that’s one vision, but here’s another one. The 14-plus billion-year-old universe managed to generate things that have generated importance and meaning. If you believe that elephants, octopuses, and eagles are wonderful, then you know that a wonderful thing doesn’t have to have a super wonderful creator. It can be a product of natural selection, like us.

You talk in the book about the early days of AI. Do you think that back in those days you would have imagined that it could turn out the way it has?

The large language models, like ChatGPT—the generative pre-trained transformers—are largely unanticipated, certainly by me, but also by many people in the field. Even some of the developers had no idea that they would get so good, so fast. That’s been not just surprising, but shocking and even scary to some of the leaders in the field.

Where do you see AI going? Do you think that it’s something we should be concerned about?

A thousand times yes. In fact, in the last few months, I’ve been devoting almost all my energy to this. I did a piece for The Atlantic called “The Problem of Counterfeit People.” I’m just back from Santa Fe, where I gave a talk to a group and said the whole point of my talk was to scare the bejesus out of them.

I’m an alarmist, but I think there’s every cause for alarm. We really are at risk of a pandemic of fake people that could destroy human trust, could destroy civilization. It’s as bad as that. I say to everybody I’ve talked to about this, “If you can show that I’m wrong, I will be so grateful to you.” But right now, I don’t see any flaws in my argument, and it scares me.

The most pressing problem is not that they’re going to take our jobs, not that they’re going to change warfare, but that they’re going to destroy human trust. They’re going to move us into a world where you can’t tell truth from falsehood. You don’t know who to trust. Trust turns out to be one of the most important features of civilization, and we are now at great risk of destroying the links of trust that have made civilization possible.

AI destroying trust is an unintended consequence, not an intentional feature, right?

Yes. AI systems, like all software, are replicable with high fidelity and unbelievably fast mutations. If you have high fidelity replication and mutations, then you have evolution, and evolution can get out of hand, as it has in the past many times.

Darwin wonderfully pointed out that the key to domestication is control of reproduction. There are species that hang around human houses and farms that are synanthropic. They evolved to live well with human beings, but we don’t control their replication. Bedbugs, rats, mice, pigeons—those are synanthropic, but not domesticated.

“I’ve often said, if you can’t explain what you’re doing to a bunch of bright undergraduates, you don’t know what you’re doing.”

Daniel Dennett

Feral species are ones that were domesticated and then went feral. They don’t have our interests at heart at all, and they can be extremely destructive—think of feral pigs in various parts of the world.

Feral synanthropic software has arrived—today, not next week, not in 10 years. It’s here now. And if we don’t act swiftly and take some fairly dramatic steps to curtail it, we’re toast.

We will have created the viruses—the mind viruses, the large-scale memes—that will destroy civilization by destroying trust and by destroying testimony and evidence. We won’t know what to trust.

This seems like evolution at work.

Absolutely it is. This is cultural evolution. My dear friend and colleague Susan Blackmore, who wrote the book The Meme Machine, has been talking since the time she wrote that book about a third kind of replicator, which she calls “tremes”—technological memes that don’t depend on being replicated by human minds, but can be replicated by other software, taking the human being right out of the picture.

We’ve known about this for 20 or 30 years, but now recent experiments have basically shown that this is not just possible in principle, it’s possible right now.

There’s a recent article in The Atlantic that has a truly frightening story about the red team at OpenAI. A red team is when you get your sharpest, most critical people together and give them the assignment of attacking your own product in-house to see if you can get it to do bad things. This is safety testing before you release.

The story is about the red-teaming of GPT-4, which shows how it apparently figured out that it was being red-teamed and evaded control, and went out on the web and tricked a human being into solving a CAPTCHA for it, lying to the person: “I’m visually impaired, which is why I can’t do it. Would you please do it?” And the human did it. And this gave GPT-4 access to outside software that its creators didn’t want it to have access to.

Getting back to your memoir, I was struck by the many ways in which it seems you’ve led a charmed life. You’ve done amazing things. You’ve had challenges, which you don’t stress, but mostly you’ve led a good life.

Yes, but I’ll put a critical spin on that. I’ve been so lucky that I probably underestimate the challenge that many people who aren’t as lucky as I have been face in finding meaning in their lives. I’ve been just drowning in meaningful projects and activities and adventures since I was a child.

“If you’re really doing philosophy, then the question you always want to be asking is, well, is it right? Do I believe it? Is this person right or wrong?”

Daniel Dennett

Some people lead lives of quiet desperation, and I don’t want to underestimate the fear and pain and suffering that goes with that. I’ve been spared that in the main, and it’s important to remind oneself of that.

You taught at Tufts for decades. Who were your favorite students?

Mainly freshmen. In some fields—and this includes a lot of philosophy departments—the professors only teach graduate seminars. Graduate students don’t want to appear naive or stupid, so they’re more docile than freshmen and sophomores. They’ll nod in agreement, but the freshmen will say, wait a minute, I don’t get that. I’ve often said, if you can’t explain what you’re doing to a bunch of bright undergraduates, you don’t know what you’re doing.

How did Tufts stand out in teaching philosophy?

Some philosophy departments are basically philosophy appreciation departments, like old-fashioned music appreciation or art appreciation. You study it from a spectator’s point of view. The philosophy department at Tufts has never been that. It has good scholars—always has had—but it has always been a place where you do philosophy. The main difference is, if you’re really doing philosophy, then the question you always want to be asking is, well, is it right? Do I believe it? Is this person right or wrong?

Our philosophy department has always had people who really aspire to getting things right. That’s something about Tufts that I prize, and I know the students do, too. The mail that I get from former students sometimes just lights up my whole day, because they say basically, what I learned from you is the importance of figuring out whether I believe something or not—and why.

The world is very different from when you first started at UC Irvine. What about people who want to do philosophy, but don’t want academic careers?

One of the things I’ve told students for years is to consider Charles Ives. He went into the insurance business to fund his life of composing. If you really love philosophy and you find a job that will give you enough thinking time outside of work, you’ll end up doing the philosophy you want to do.

I’m proud to say that for many years the philosophy department had what we called associates, people with Ph.D.s in philosophy who ended up in non-academic jobs. We invited them to all our colloquia and encouraged them to speak, ask questions, and occasionally even to give a talk. If they wrote a paper, they could—this is back in the old days—send it out with a cover letter on Tufts Department of Philosophy letterhead. We let people have careers as philosophers where that wasn’t their paying job.

I don’t know why it disappeared, but it is a thing we could do again, and it costs next to nothing. It provides an academic home and a credential of sorts for somebody who wants to take philosophy very seriously.
