Three Questions About AI

Fletcher graduate Olaf Groth explores technology’s promise and perils in his new book
“Data that is being made sense of through artificial intelligence is potentially much more dangerous than nuclear or physical arms dealing around the world,” said Olaf Groth. Photo: Shutterstock
October 3, 2019


Olaf Groth, F95, F97, a professor at Hult International Business School and CEO of the Cambrian Group, co-authored the new book Solomon’s Code: Humanity in a World of Thinking Machines. Tufts Now spoke with him about technology’s dual nature and governing artificial intelligence.

Tufts Now: Why the title Solomon’s Code?

Olaf Groth: King Solomon was a biblical figure known for making very wise decisions and amassing great wealth, but then also making problematic decisions and losing his wealth and his kingdom, or his son’s kingdom, as it were. We’re saying that we ought to be wise about the kinds of problems that we could be generating in this new data-driven economy. We do not want to end up like Solomon, but rather exercise some foresight so that we don’t make the same mistakes.

Olaf Groth, F95, F97. Photo: Alonso Nichols

Is today’s technology a threat to humanity?

Technology has two sides to it. There is a wonderful, positive, transformational side. You can enrich lives. You can use technology for learning, for education, for health care, for greater convenience. But you can also use technology to drive people apart and do harm. And so, the book is essentially about the potential as well as the pitfalls. And those two are currently unfolding quite dramatically.

We need to monitor the big, powerful players, both private companies like Facebook and governments like China and the U.S., that amass a lot of smart technologies and data sets. Data that is being made sense of through artificial intelligence is potentially much more dangerous than nuclear or physical arms dealing around the world. There is always that simultaneous dark side that could turn beneficial applications into surveillance, or could weaponize them.

Are you optimistic about the future?

I think the end state is a positive one if we govern it and if we manage to catch up and get ahead of this thing. Because this digital technology can enhance who we are. It can give us greater purpose. It can focus us on more creativity and imagination. There are so many skills, so many capabilities that humans have that machines cannot replicate right now. The future of humanity is integrating with machines, letting them play to their strengths while we play to ours, not competing head to head. That’s where we need to go. If we govern this well, I do see a bright future for AI. The biggest risk of a laissez-faire approach is that we make large-scale mistakes, and public backlash against AI across borders forecloses all that potential.

Learn more in “Can AI Make Us Smarter?,” an episode of Tufts’ Tell Me More podcast with Olaf Groth.