Faculty test out when AI can make learning more effective, and when it gets in the way
“They ‘hallucinate,’ or make stuff up, quite a bit. They don’t ‘know’ things so much as they’re good at sounding like someone who does.”
Those are just some of the drawbacks of having a conversation with a large language model such as ChatGPT, said Gregory Marton, who teaches a course at the ExCollege called Who Wrote This? ChatGPT, LLMs, and the Future of Learning. Yet Marton is optimistic about the ways such AI models, even with their flaws, can be used in higher education.
For example, chatbots can make good tutors, as Marton found when he asked ChatGPT for help with an algebra puzzle. (He didn’t really need the help, but go with it.) Rather than prompting the bot for an answer, he aimed for a hint: “I just want you to ask me a single question that might help me get started in the right direction.” The bot’s question, and subsequent ones, nudged Marton toward finding his own solution.
Instead of trying to weed out AI cheating, which Marton says is too hard to detect anyway, some Tufts faculty members are exploring AI in their classrooms, designing or adapting assignments to give students a chance to kick the tires on these emerging technologies. They are figuring out together when large language models can make learning more effective and when they get in the way.
Knowing that students learn better when homework connects to their personal lives, Erica Kemmerling, an assistant teaching professor of mechanical engineering, tries to craft some assignments based on students’ interests. Last semester, she had AI do the crafting. She had each student converse with ChatGPT about things they care about and then ask it to come up with an individualized homework assignment that uses skills they were learning in class.
For one student, who loves video games, the assignment was on the nose: Design a heat sink system for a gaming computer by doing a thermal analysis and looking at air flow and materials.
“It really gets at something that he might want to do in real life,” Kemmerling said.
For another student, who has an interest in nuclear fission and is also a musician, the mash-up was problematic, “basically suggesting she put a thermal reactor inside her viola and do some computations around that,” Kemmerling said.
About 20% of the AI-generated assignments were similarly ridiculous. But even the silly results got Kemmerling partway to her goal of having students understand whether AI is suited to a particular engineering task.
Kemmerling said some of her students have reported feeling less stress over homework now that they have AI tools. A bot can act as a study buddy, a Socratic opponent, or quiz-maker at a moment’s notice, even if students are starting an assignment “at 1:30 a.m. on a Saturday when no TAs are available and I’m not checking my email,” she said.
Kemmerling is not that worried about students using AI to cheat in her classes, as asking a bot to do problems from her exams would be as useful as consulting “a duck,” she said.
“[ChatGPT] is just not that good at engineering yet,” she said. But for tasks like brainstorming, “it’s really good at helping [students] get started.”
In that way, a bot’s questions may be more useful than its answers, as they can help students get their creative juices flowing in new directions.
“They are reasonably good at suggesting things that you might ask next,” Marton said. “It is not useful to think about bots as thinking or knowing. Think of them as an improv partner. If you tell them that something is true, they will say ‘Yes, and …’”
Bots can also summarize, simplify, patiently explain a text part by part, edit your writing, and give other perspectives on your ideas. “And most important of all, they don’t judge,” Marton said. “You can ask them things first and feel more confident when you follow up with peers or teachers.”
Meera Gatlin, an assistant teaching professor at Cummings School of Veterinary Medicine, used AI in her class on public health for veterinary professionals last fall. She instructed her students to take advantage of AI in various ways, from asking AI to explain statistics terms they weren’t familiar with—or needed a refresher on—to pairing up with a bot to brainstorm which diseases should be prioritized for eradication.
She asked her students to critique the AI’s output as well as the questions they were putting to it. “A lot of times the AI would provide answers that are not relevant particularly to what we’re talking about, so how do you refine that?” she asked. She referred to the computer science adage “garbage in, garbage out,” noting that how you ask a bot to do something is crucial to the usefulness of what you get back.
“I think there’s value in asking students ‘What prompts did you use and how did you refine those prompts to shape your learning and the AI output?’” Gatlin said.
Marton recommends that students go beyond accepting a chatbot’s first answer: “Give it more of yourself, your thought process, your own reactions to what it said. Tell it to ask you for more, and craft a response together.”
Students, understandably, want to know which AI uses are off limits, Gatlin said. “Having some sort of policy incorporated into a syllabus or your course content in some manner is going to be essential.”
Rebecca Shakespeare, a lecturer in the Department of Urban and Environmental Policy and Planning, was clear in the syllabus for one of her graduate-level classes that AI could be used for “brainstorming, searching, synthesizing, organizing information, and/or getting started with writing” but not for completing writing assignments.
A class exercise she created showed students some of the reasons why. Typically, Shakespeare asks a group of students to write up a synopsis of their most recent class and then pose discussion questions for other students to respond to on an online bulletin board.
One week, Shakespeare had the class ask ChatGPT to respond to the discussion questions as if it were a UEP graduate student. The class then critiqued the AI’s response, considering whose views the AI—trained on a body of existing ideas—was drawing from and whether those views were innovative.
One of the students concluded: “Having this AI do our work … won’t let us be the radicals that we are.”
“And I think that really encapsulates some of the findings that we got,” Shakespeare said, “which … is that [AI] can do something for us, but it isn’t necessarily going to be infused with our passion and our motivation and our direction.”
“In order to re-think how we plan for a more equitable and just future, we have to think outside of the box … to address the problems entrenched by the past,” she said.
In December, Gatlin, Kemmerling, and Shakespeare shared their AI experiences in a faculty panel organized by Tufts’ Center for the Enhancement of Learning and Teaching, which is supporting faculty in taking a scholarly approach to experimenting with AI.
Part of that approach is helping students learn what they can do that generative AI cannot. Rather than “futilely focusing on detection,” said Freedom Baird, a senior educational technology specialist, faculty are “centering the human connections between teachers and learners, and helping our students build uniquely human skills.”