A new program in human-robot interaction, the first in the country, examines everything from the mechanics to the ethics of robots and artificial intelligence
Imagine this: you are the first on your street to bring home a robot that helps out with chores like laundry and cleaning. But one evening, after eating alone, you feel a pressing need to talk. Your robot, it turns out, is a phenomenal listener. As you expound on the state of the world, you begin to see your robot not just as a helper, but also as a friend.
That scenario, in which people blur the line between robots as mere machines and robots as machines imbued with human qualities, is one of many that Matthias Scheutz, a professor of computer science and cognitive science in the School of Engineering, sees as inevitable. And it may come sooner rather than later: just this past week, news accounts suggested that Amazon is already working on developing a home robot.
“There is no doubt in my mind that we will have a human-robot society, with robots embedded in every aspect of our lives,” he said. “That means we have to think about what we want that future to look like. What does it mean for you to have a robot at home and treat it as a social other, even though it was not intended for that?”
Those questions and concerns are at the core of Tufts’ new graduate degree program in human-robot interaction, the first in the nation. The multidisciplinary Ph.D. and master’s programs span engineering fields, including computer science, electrical engineering, and mechanical engineering, as well as the social and behavioral sciences. The Ph.D. program launched this past fall, while the M.S. program starts in September.
Scheutz said the programs offer a unique opportunity to look closely at the “human side of artificial intelligence (AI) and robotics,” and most importantly, “to become part of the movement that is trying to use AI for the good of all of us.”
“We believe it’s absolutely critical that the engineers coming out of this program not only understand the technical aspects, but also the societal implications of AI and robotics,” said Scheutz, who is also director of the Human Robot Interaction Laboratory at Tufts. “Technology is very powerful, but it’s also potentially dangerous; that’s why we need to understand the whole picture. We can incorporate AI and robots into our lives, but it must be done in a way that is beneficial to us and that preserves human nature and human dignity.”
Scheutz, who came to Tufts to develop the university’s Cognitive Science Program, said he’s always seen a close connection between AI and human cognition. He’s intrigued by fundamental questions about how humans think and learn: What does it take to build a mind? Why do we have minds? How does the mind work with the body?
“We’re at a point where we can do really interesting experiments to answer some of these questions,” Scheutz said. “Recently one of my students developed a computational system to model language acquisition in children. We have embedded this program in a robot to evaluate whether the robot can now learn words in a way similar to how a child does.”
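The article doesn’t describe the student’s model, but one widely used approach to modeling early word learning is cross-situational statistics: the learner tallies how often each word co-occurs with each perceived object across many ambiguous scenes and keeps the strongest association. A minimal, hypothetical Python sketch of that idea (the class and field names are illustrative, not taken from the Tufts system):

```python
from collections import defaultdict

class CrossSituationalLearner:
    """Toy cross-situational word learner: associates words with
    co-occurring percepts across many ambiguous scenes."""

    def __init__(self):
        # counts[word][referent] = number of scenes where both occurred
        self.counts = defaultdict(lambda: defaultdict(int))

    def observe(self, words, referents):
        """One 'scene': an utterance (words) heard while objects
        (referents) are in view. Every pairing gets credit."""
        for word in words:
            for referent in referents:
                self.counts[word][referent] += 1

    def best_meaning(self, word):
        """Guess the referent most strongly associated with a word."""
        candidates = self.counts.get(word)
        if not candidates:
            return None
        return max(candidates, key=candidates.get)

learner = CrossSituationalLearner()
learner.observe(["look", "a", "ball"], ["ball", "table"])
learner.observe(["the", "ball", "rolls"], ["ball", "floor"])
print(learner.best_meaning("ball"))  # -> "ball"
```

Because each scene is ambiguous on its own, it is only the statistics across scenes that let the learner converge on the right word-object pairing, which is one hypothesis about how children resolve the same ambiguity.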
The new Human-Robot Interaction program is in part a response to AI’s increasing pervasiveness, from devices like the Amazon Echo to Paro, a soft, interactive robot shaped like a harp seal that’s proved beneficial to dementia patients. Japan, with a rapidly aging population, has led the world in the development of autonomous humanoid robots known as carebots, which are particularly in demand in elder care facilities.
Researchers are focusing on “making sure that we develop robot technology that is useful and beneficial for humanity,” said Scheutz. “And to ensure that it is, we are specifically working on the ethical foundations of the software system—what we need to put into robots so they have a concept of a norm, of social obligation.”
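The article doesn’t say how such norms are represented; one simple way to picture it is as explicit, machine-checkable rules that the robot consults before carrying out an action. A minimal, hypothetical Python sketch (the Norm class and the example rules are illustrative assumptions, not the lab’s actual software):

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Norm:
    """An explicit, inspectable rule: a description of the obligation
    plus a test for whether a proposed action would violate it."""
    description: str
    violated_by: Callable[[dict], bool]

# Hypothetical norms for a home-assistance robot.
norms = [
    Norm("Do not cause physical harm",
         lambda action: action.get("expected_harm", 0) > 0),
    Norm("Respect household privacy",
         lambda action: action.get("records_private_area", False)),
]

def check_action(action: dict) -> list[str]:
    """Return descriptions of any norms the action would violate."""
    return [n.description for n in norms if n.violated_by(action)]

print(check_action({"name": "vacuum_bedroom",
                    "records_private_area": True}))
# -> ['Respect household privacy']
```

Keeping norms as inspectable objects, rather than burying them in control code, is what would let a robot explain which obligation a proposed action runs up against.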
Human-like robots are not far off; several so-called “androids” created in recent years look eerily like humans and can hold simple conversations. And while the International Federation of Robotics reports that the main drivers of growth in robotic technology are the electronics and automotive industries, Scheutz predicts that the largest growth area will soon be social robotics.
This new generation of robots will need “to be aware of people, to perceive things, and make sense out of their environment,” he said. “If you have a robot that does physical therapy, that robot might have to grab and bend your arm to increase its range of motion. It’s touching you, so it has to be gentle and not hurt you.”
Human-like robots are also likely to evolve in ways that test social norms, provoke ethical debates, and make the case for ground rules. For instance, Scheutz said, “no day passes without an article on sex robots.” Recent research on the topic, he said, reveals a prevailing attitude that “they ought to be regulated.” “It’s up to us to decide what we think is a good or bad application of a social robot, and that will require not only an understanding of the technology, but of social processes and a potential legal framework.”
All students in the Human-Robot Interaction program are required to take core courses that include human-robot interaction, robot design and control, robot programming, modeling for engineering systems, and ethics. The ethics class in particular is important because it speaks to Tufts’ engagement with the ethical ramifications of AI, said Scheutz. Tufts is one of a handful of universities that have joined industry leaders in the Partnership on AI, an organization that serves as a clearinghouse for ideas and support on issues such as security and privacy, trustworthiness, reliability, and safety.
Scheutz and his colleagues are also examining how robots might be made capable of distinguishing between right and wrong. “We are looking specifically at ways in which the robot would not just blindly follow an order, and at how we can make the robot reject or refuse that order,” he said, if carrying it out would lead to a bad outcome. “It’s going to be increasingly important as social robots become more autonomous and part of society.”
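One way to picture this kind of reasoning is as a series of explicit acceptance checks the robot runs before acting, refusing with a stated reason when a check fails. A minimal, hypothetical Python sketch (the check names and command fields are invented for illustration, not drawn from the lab’s system):

```python
def evaluate_command(command: dict) -> tuple[bool, str]:
    """Run a command through explicit acceptance checks; refuse
    with a stated reason if any check fails."""
    checks = [
        (command.get("within_capability", True),
         "I don't know how to do that."),
        (not command.get("violates_norm", False),
         "That would violate a norm I must follow."),
        (command.get("issuer_authorized", True),
         "You are not authorized to give me that order."),
        (command.get("expected_harm", 0) == 0,
         "Doing that could cause harm."),
    ]
    for passed, reason in checks:
        if not passed:
            return False, reason
    return True, "OK, doing it."

print(evaluate_command({"action": "walk_forward",
                        "expected_harm": 1}))
# -> (False, 'Doing that could cause harm.')
```

Making each condition explicit means the robot can report which check failed, rather than silently refusing or blindly complying, which is precisely the kind of transparency Scheutz argues a human-robot society will demand.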