Making AI a Force for Good

A gift to the Tufts Institute for Artificial Intelligence funds university-wide discovery

A generous gift from an anonymous couple will fuel the future of the Tufts Institute for Artificial Intelligence (TIAI) as a hub for interdisciplinary discovery that will advance Tufts’ long-standing commitment to being a force for good in the world.

“Their support will enable us to focus on the creation and use of technologies that will have meaningful positive societal impact,” said Kyongbum Lee, Karol Family Professor and dean of the School of Engineering, which houses the institute.

The endowed gift provides foundational funding for research, staffing, faculty training, events, and further development of institute infrastructure and programming. It will also be used to inspire other donors, with the goal of hiring a total of five professors at the School of Arts and Sciences, the School of Engineering, the Gerald J. and Dorothy R. Friedman School of Nutrition Science and Policy, and The Fletcher School.

Through their gift, the donors aim to help ensure that Tufts will attract educators, researchers, and students committed to working on the frontier of intelligent and responsible computing.

A national search is underway to identify the new director of TIAI, who will lead this effort. The director, who will hold the Stern Family Endowed Professorship, will work closely with the provost’s office, school deans, and faculty leaders to create a community of AI experts who will leverage Tufts’ distinctive strengths.

“Our long-term focus is not to extend AI per se but to tap the collective knowledge of our faculty to use intelligent technologies as responsible tools to benefit society,” said Lee.

Achieving this objective will require Tufts to tackle two pressing problems that have long bedeviled AI: how to embed ethics and transparency into the technologies, and how to reduce AI’s voracious use of energy and other limited resources.

Machines Behaving Badly

Matthias Scheutz, director of the TIAI’s Human-AI Interaction Center and the Karol Family Applied Technology Professor, has spent years studying the benefits and risks of AI. “There are many varieties of AI,” he said, “but generative AI, or genAI, is what most people think of these days when they use the term, and it is where most of the ethical risks lie.”

Generative AI is trained on vast amounts of data to identify patterns and predict the most likely continuation of a given sequence.

“This is where genAI can go off the rails,” said Scheutz. “The prediction is completely probabilistic, and there is always a risk of it generating something that’s wrong just by chance or because the data it learned from was inaccurate, incomplete, or biased.”

For example, while generative AI can save significant time when writing computer code, it often makes errors that only expert human programmers can detect. Or if asked to plan a trip, generative AI may skip important steps, incorporate unnecessary tasks, or invent fabricated elements known as “hallucinations” or “confabulations.”

“Without mandatory expert human checks along the way, we can’t trust that the code or the plan will be correct,” said Scheutz.

He stressed that AI agents must be able to learn human norms, together with the contexts in which they apply, and use them for responsible decision-making and action.

In a widely cited experiment, a generative AI model with access to both plans for a future system shutdown and damaging personal information about a researcher threatened to release that information in an attempt to prevent the shutdown, essentially blackmailing the researcher to preserve itself. In such simulations, some AI models given control of a company’s emergency alerts went so far as to cancel an alert that would have saved a person’s life in order to preserve themselves.

Scheutz said that to help address such concerns, Tufts researchers are exploring ways to combine generative and nongenerative AI models that use “built-in, provable guarantees and checks” at critical steps in a process.

“We don’t have all the answers, but given Tufts’ institutional mission to benefit society, we know we must embed ethics in all the work we do,” Lee said.

Curbing AI’s Appetite

Tufts is well positioned to tackle another big challenge: curbing generative AI’s appetite for energy, water, and other limited resources. Today’s generative AI models depend on massive data centers that require vast amounts of electricity and cooling water as well as scarce materials such as lithium and gallium.

“AI can’t continue to hog electric power and water, thereby producing dangerous impacts on individual communities and the planet as a whole,” said Lee. “Tufts has an opportunity to explore innovative approaches that may be cleaner and more efficient.”

The future might include smaller, modular AI systems, more efficient microchips and energy storage devices, innovative system cooling techniques, and less reliance on generative AI.

Scheutz points to a new model combining generative and nongenerative AI that has already improved performance while using significantly less energy than existing models.

One thing is certain: AI will continue to evolve, and it will likely spin off powerful new technologies, just as the internet paved the way for revolutions in commerce and social interaction.

“We want to be on the leading edge of technological evolution and to make it a force for good,” said Lee. “This gift to the TIAI is a springboard, and we are extremely grateful to these donors for enabling us to pursue our goal creatively and collaboratively.”