Can AI Make Us Smarter?

Artificial Intelligence can be a force for good—or the opposite, says author and Fletcher School alum Olaf Groth in a Tufts podcast

Artificial intelligence has the potential to enrich our lives. But it can also drive people apart and cause tremendous harm. Olaf Groth, a professor at Hult International Business School and CEO of the Cambrian Group, explores how this technology is reshaping societies in his new book, Solomon’s Code: Humanity in a World of Thinking Machines. He co-authored the volume with Mark Nitzberg, who heads the UC Berkeley Center for Human-Compatible Artificial Intelligence.

In this episode of Tell Me More, Groth—an alumnus of Tufts’ Fletcher School of Law and Diplomacy—talks with Bhaskar Chakravorti, dean of global business at The Fletcher School, about artificial intelligence—both its promise and its perils. 

BHASKAR CHAKRAVORTI: Your book is called Solomon’s Code. Could you decode the title?

OLAF GROTH: Yeah, I’d be happy to, Bhaskar. So, King Solomon was a Biblical figure who was known for making very smart, very intelligent, and often wise decisions and amassing great wealth—but then also making some decisions that were tricky and problematic, and, in the process, losing his wealth and losing his country, or his son’s country, as it were.

And we’re saying in the book that we ought to be smart, we ought to be wise, and we ought to think ahead about the kinds of problems that we could be generating in this new data-driven economy. So, we do not want to end up like King Solomon, but rather get wise ahead of time and exercise some foresight so that we don’t fall into the same pitfalls and make the same mistakes. That was the meaning of the title.

CHAKRAVORTI: What I really like about your book is you help us think simultaneously about the algorithmic future and the humanist future. And in many ways, the second part of your title, “Humanity in a World of Thinking Machines,” underscores that theme in your book. Could you say a little bit more about humanity and thinking machines? Is there a tension between the two? Can the two live and coexist simultaneously?

GROTH: So, our relationships are changing based on technology. Our social glue is changing. How we look at groups in society other than the ones that we belong to is changing, thanks to smarter and smarter algorithms that figure out who we are and who they are and then play us into these opposing camps.

All of those things are, at once, very satisfying, because these algorithms have figured out who we are and they play to our sense of satisfaction. And they are also problematic, because they tell us, “Well, you in your satisfaction and your worldview are different from these people over in this other corner who are really not very likable to you.” And so, it keeps feeding you news in these different camps, drifting you apart and dividing you ever further. And those are very destructive patterns right now.

In essence, what I’m saying is technology has two sides to it. There is a wonderful positive transformational side. You can enrich lives; you can make people see things they haven’t seen before. You can use technology for learning, for education, for health care, for greater convenience. And you can use technology to drive people apart and do harm. And so, the book is essentially about the potential as well as the pitfalls, and those two are currently unfolding quite dramatically.

CHAKRAVORTI: What’s the one—I’m sure there are many examples—but one example of potential that excites you the most, that makes you feel most optimistic about how thinking machines are encroaching on our lives? And then the second part of this question would be, what’s the one pitfall that worries you the most, Olaf?

GROTH: Yeah. So, there are quite provocative, positively provocative ventures in the health-care field. For the book, we interviewed about 100 entrepreneurs, policymakers, experts, academics, media professionals—and unearthed some wonderful ventures such as BrainQ in Israel. BrainQ has developed AI algorithms—machine-learning algorithms—that can figure out which nodes in the brain are no longer firing at each other, which neurons are no longer firing at each other, thereby incapacitating human beings who can no longer, let’s say, use a limb as they used to because those neurons aren’t firing.

CHAKRAVORTI: I’d definitely like the address of that company from you after we are done with this interview.

GROTH: Yes.

CHAKRAVORTI: I can feel so many parts of my brain that have stopped functioning. We need to figure out which ones are still functioning.

GROTH: Exactly. And it leads down this path of asking, first of all, how can I get that? Because you feel the potential for human enhancement, and yet you can easily see how that’s also problematic. Who has access to it? People like you and me who are, I’d say, compared to most of the rest of the world, reasonably privileged, or is it really all of humanity that will be taking advantage of these technologies? It’s wonderfully promising for people with trauma because they can regain physical function, cognitive function—but it does have this darker side.

Can you imagine hackers now hacking into these algorithms, which would then allow them to change these wavelengths and manipulate human brain functioning? So, there are lots and lots of questions here that are sociological, ethical, and security- and safety-related in nature. So, that’s one.

There are other medical technologies that are very promising, such as a venture out of Stanford University that enables doctors who are consulting with patients suffering from depression or Alzheimer’s or things of that nature to discern slight deviations in patterns in how they swipe screens or type on virtual keyboards, to then see if somebody is about to relapse into another phase of depression or drug addiction and intervene before they can do harm to themselves. That, too, is incredibly promising. And as you can imagine, if it’s not safeguarded for privacy and security reasons, it can also wreak havoc on people’s lives. There are lots and lots of those types of applications.
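Groth doesn’t describe how that Stanford venture’s screening actually works, so the following is only a minimal illustrative sketch, assuming the signal of interest is typing rhythm: it compares a patient’s recent inter-keystroke intervals against that same person’s historical baseline and flags large deviations for a clinician to review. The function names, data, and threshold here are hypothetical, not the venture’s method.

```python
from statistics import mean, stdev

def keystroke_intervals(timestamps):
    """Turn keystroke timestamps (in seconds) into inter-key intervals."""
    return [b - a for a, b in zip(timestamps, timestamps[1:])]

def relapse_risk_flag(baseline_sessions, recent_session, z_threshold=2.5):
    """Flag a session whose mean typing interval deviates sharply from the
    patient's own baseline. Purely illustrative; a real system would use far
    richer behavioral features and clinically validated models."""
    baseline_means = [mean(keystroke_intervals(s)) for s in baseline_sessions]
    mu, sigma = mean(baseline_means), stdev(baseline_means)
    recent_mean = mean(keystroke_intervals(recent_session))
    z = (recent_mean - mu) / sigma if sigma > 0 else 0.0
    return abs(z) > z_threshold, z

# Example: three ordinary sessions, then one with markedly slower typing.
baseline = [
    [0.0, 0.22, 0.41, 0.65, 0.88],
    [0.0, 0.20, 0.44, 0.63, 0.85],
    [0.0, 0.25, 0.47, 0.70, 0.93],
]
recent = [0.0, 0.55, 1.10, 1.70, 2.30]
flagged, z_score = relapse_risk_flag(baseline, recent)
print(f"flag for clinician review: {flagged} (z = {z_score:.1f})")
```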

CHAKRAVORTI: So, there are good and bad aspects of having that kind of technology surrounding us. And there’s been a lot of discussion lately about the role of the state in having access to this kind of data, and what that might mean for the future of human freedoms. And is it opening the door to state overreach?

And this specifically relates to some issues that are now coming to the forefront about China and China’s plan to have a social credit score for every individual. Is this an area where you worry about a state—an all-powerful state—having access to this kind of information? And then the second part of the question is, do you worry about private actors instead of the state, a combination of the technology giants, having this kind of information in liberal democracies, potentially leading to a similar kind of overreach?

GROTH: Yeah, no, I definitely worry about this, Bhaskar. And it’s the shadow side of the bright side, which is that we want the global digital platforms to know us well—to make life more convenient, to enable us to have one-stop shopping. You subscribe once, they gather data, they understand who you are and what you want, and they serve that up. And that’s tremendously enriching and convenient. And we can do things today that we could never do before. Just think about car sharing and all the wonderful things that Amazon and others do.

And the same goes for some of the Chinese players. And yet there is a tremendous power that comes with that kind of knowledge that I think needs to be discussed. And it is slowly but steadily coming to the fore. So, I am worried about these large digital corporations on both sides of the Pacific. I would be worried about European corporations if there were any, but there aren’t any, other than maybe SAP, and that’s really B2B. So, it’s primarily about the Chinese and…

CHAKRAVORTI: Spotify.

GROTH: Yeah, Spotify. That’s right. And if that grows, then certainly it would fall into the same category. So, I am worried about that, because we are seeing a lack of oversight, of proper governance, about what these companies do with our data. Frankly, I think the latest showing on the part of the U.S. Congress and the European Parliament in interviewing Mr. Zuckerberg is evidence of that. We need to find ways to really scrutinize and understand what’s happening in all layers of the software stack. And our political institutions are not equipped to do that very well.

I am, of course, equally worried about government, and to be fair, not just the Chinese government—any government. And I’m happy to give you some examples. The Chinese government is, of course, now known for its social credit system, which is allegedly going to be mandatory by 2020. We’ll see whether that actually happens, and in what instantiation it comes to fruition. But we have been researching this quite extensively, and currently there are three systems in use that examine the life of anybody living in China fairly closely and assign points on, say, a scale of zero to 800. And depending on where you are on that scale, you have more or less privilege in society.

The Chinese authorities have in fact blocked individuals with lower scores from purchasing tickets for 11 million train rides and more than 4.4 million international flights. So, the Chinese system has teeth, and they very readily publicize those teeth because they want to enforce stability in society. Now, that’s a problem for anybody who believes in civil liberties.

And before we point fingers at China too much, let’s also be honest about what’s happening in the United States. In the United States, we use facial recognition and other monitoring and scoring systems to support, let’s say, a sheriff’s department in California in tracking individuals who might have been close to a crime scene.

We’re using neural-network AI-type technologies in courtrooms to assess the size of defendants’ bail. And we’re using predictive policing in New York to understand in which neighborhoods crime is going to surface next. And all of these technologies make sense on one side. The sheriff’s department, for instance, at the hearing in Sacramento, California, where I was speaking, said, “Well, it can’t be that you want us to keep citizens safe, yet the bad guys have these technologies and we are not supposed to use them.”

So, there is this creative tension, an honest creative tension, in law enforcement and intelligence about how much of these technologies is good versus bad. So, it’s happening, like I said, on both sides of the Pacific, of course with different intentions and to different degrees. I’m not trying to say China and the U.S. are in the same bucket, but I think we need to monitor big, powerful players that amass a lot of smart technologies and datasets alike, no matter where we are.

CHAKRAVORTI: There is a notion that, regardless of the model, as we continue to move into this era of thinking machines penetrating more and more of our lives, whoever has accumulated the maximum amount of data is going to “win.” So, do you see the accumulation of data as being equivalent to the accumulation of oil or the accumulation of bullets or weaponry or all the things that we have spent our time accumulating in the past, in the twentieth century? Is the accumulation of data going to be the next thing? And if that’s the case, are we headed to a future where the Chinas and the Indias, the parts of the world that have large populations with access to data-generating devices, might tilt the balance of power in that direction?

GROTH: Yeah. We believe that data and artificial intelligence, data generated by AI or data that is being made sense of through artificial intelligence, is, in fact, much trickier and potentially much more dangerous than nuclear weapons or any kind of physical arms dealing around the world. And the reasons are manifold.

Firstly, data and artificial intelligence can always be used for very good, sober, beneficial purposes, whether it’s in consumer life or, as we said, in education, health care, finance, transportation, to make our world better and safer. But there’s always that simultaneous dark side. There is almost a dual use ingrained in very harmless applications, which could possibly turn those applications into surveillance tools or allow them to be weaponized. So, that’s number one.

Number two is that many of these applications are quite cloaked. They’re not overt to the eye of the beholder. So, the monitorability, as it were, of some of these algorithms and data is much lower than, let’s say, the physical movement of centrifuges in the nuclear instance, or of bullets or weapons into foreign markets. It’s easier to trace those.

And then thirdly, the speed with which this code is being proliferated is amazing. With one push of a button, you can literally spread code into a billion households around the world. And so that makes it a lot more potent than physical weapons. So, we do believe that that is the dark underbelly of AI and data. And that needs to be monitored. So yes, that is a concern.

CHAKRAVORTI: So let me take you out of the dark underbelly and go to a different part of the anatomy, and it’s unclear whether it’s dark or rosy, as it were. These thinking machines and the algorithms fed by data are going to encroach on every aspect of our lives, and they already are. They will prepare our coffee in the morning. And they might put you in a vehicle where you might continue to drink your coffee and read the news and be driven from point A to point B.

But point B may not be a place where you work. You may not need to work, because machines are doing much of the work. So, that’s a long-winded way of asking the question: how do you see the balance between human convenience and human ingenuity and human labor being governed, modulated, displaced, enhanced by all this data and these thinking machines? Is it going to be a net positive or a net negative, or is it going to be a bit of a mix?

GROTH: I believe eventually it’s going to be a net positive, but it will take us some time to get there. And it’s the intervening period that I’m concerned about, because policymakers and even large corporate executives from the C-suite to human resources are not devoting a lot of time and energy to defining new job profiles and new types of tasks that are innately human as we compete with machines for productive time.

Because I think that this digital technology can enhance who we are. It can give us greater purpose. It can focus us on more creativity, imagination. There are so many skills, so many capabilities that humans have that machines cannot replicate right now. I just mentioned a few. Creativity’s a little bit iffy—hard to define—and imagination is hard to define too, but it is an inherently human capability as well. Visioning, theory development, planning, even strategy, which, in some sense, is theory development about how to get from point A to point B.

Humans can do that on multiple levels, on multiple timelines, all at the same time. A machine cannot develop theory or strategy; it can simply calculate much faster than we can. Things like motivating, inspiring, coaching, mentoring, teaching, counseling, all of those are activities that humans are able to do with other humans because we share a large portion of genetic code; we literally crawled out of the primordial slime together, as it were, or at least our ancestors did. And we have that genetic commonality ingrained in us.

So, we understand when somebody experiences success and failure, love and heartbreak, because it’s part of our common and shared human condition. And machines don’t yet have that capability. Now, you can project forward and ask, in fifty or seventy years, will we have a Blade Runner-type world in which there are replicants that can actually experience memories and feelings and the like? Most scientists will tell you that’s a very, very long-range vision; we’re nowhere close to that, and we also shouldn’t let it blind us to the shorter to medium term.

In the short to medium term, the skills that I just named help us be more defensible, help us integrate with machines that have very linear capabilities of analyzing, pattern-recognizing, making recommendations that are fundamentally different from how the human brain functions. A good friend of mine whom we interviewed for the book, Patrick Wolff, a U.S. chess grandmaster who has competed at the top level around the world, has said, “The era of humans beating chess computers is over; it will never come back. The machine will always be faster and will eventually win.”

You have probably seen AlphaGo Zero go up against not just the top Go player in the world, but the top fifty all at the same time, beating each one of them in parallel. But he says, “When you pair a Go player or a chess player with a chess computer, that’s like watching gods play.” Because the machine’s raw processing power and its capability to learn from what it sees empirically and then predict promising moves, integrated with the human capability of visioning and of developing these theories about how to create a winning position on the board, is a deadly combination that he thought was never really possible. He sees moves that he never thought possible or reasonable.

And I think that’s the future of humanity: integrating with machines, letting them play to their strengths and integrating them with our strengths, not competing head-to-head. So, that’s where we need to go. In the meantime, I’m worried, because I see a lot of people talking about this and very few people actually getting on top of it, and that includes the United States, right?

CHAKRAVORTI: You interact with students on a regular basis because you’re a professor. And you are preparing people who are going to be going off into the world, and you are also a futurist. So, in many ways, you help them think about the world that they are likely to inhabit and shape. So, 2035—a few years out—how would you describe it to your students?

GROTH: I would describe it to my students by saying, “Undertake some backcasting. Imagine the future the way you want it to unfold. And then chart a path that will make that future happen.” So, if you say to yourself, “This is a world in which X, Y, Z; my life functions in this way, society works in this way, politics works in this way,” then decide what path into the future will actually make that happen, and pursue that path collectively. Get together in the form of what we called a multi-stakeholder forum, a Cambrian Congress; chart things like a digital Magna Carta, but also chart governance regimes that will actually help you shape the world you want to see.

CHAKRAVORTI: Olaf Groth, such a pleasure talking to you.

GROTH: Pleasure’s all mine. Thank you, Bhaskar. Always great to be back.

HOST: Thanks for listening to this episode of Tell Me More. Please subscribe and rate and review us wherever you get your podcasts—and to be the first to hear about new episodes, please follow Tufts University on Twitter, Facebook, and Instagram. We’d also welcome your thoughts on the series. You can reach us at tellmemore@tufts.edu. That’s T-U-F-T-S dot E-D-U. Tell Me More is produced by Steffan Hacker, Anna Miller, Dave Nuscher, and Katie McLeod Strollo. Anna Miller edited this episode and Heather Stephenson wrote the introduction. Web production and editing support provided by Taylor McNeil.  Special thanks to the Fink Distinguished Alumni Speakers Series, Shelli Corcoran, and Brad Macomber. Our theme music is sourced from De Wolfe Music. And my name is Patrick Collins. Until next time—be well.
