When Is Technology Unethical?

In his new computer science course, Marty Allen challenges students to ponder the bigger and messier questions raised by technology

Marty Allen comes to the study of ethics and technology by way of philosophy, which he studied as an undergraduate and graduate student. His scholarly interests took a new, although not unrelated, direction as he grew more intrigued by the intersection of humans and technology—especially when so-called technological advances cross swords with ethics, raising doubts about their true benefit to society. He went on to earn a master’s and a PhD from the University of Massachusetts Amherst, where he worked in artificial intelligence and theoretical computing.

Allen joined Tufts last year from the University of Wisconsin-La Crosse as associate teaching professor in the department of computer science in the School of Engineering, and director of online computer science programs, including the Master of Science in Computer Science. He is also the architect of a new course, Ethical Issues in Computer Science and Technology, an example of how faculty are peeling back the hidden and troubling biases that can be embedded in technology.

It’s a course that not only fits well with his own interdisciplinary interests, but also responds to student demand. When the computer science department set up a suggestion box this spring for student input on issues of inclusion and diversity, students overwhelmingly wanted “more discussion of societal impact in their classes,” Kathleen Fisher, professor and department chair, told Tufts Now.

Allen meets twice weekly online with about 25 undergraduate and graduate students from both the School of Engineering and the School of Arts and Sciences. Assignments include reading original philosophical literature—writings by John Stuart Mill, Bernard Williams, Immanuel Kant, Philippa Foot and Alasdair MacIntyre—as well as discussions around case studies, all with the intent of examining theories of what is or is not ethical, along with justifications for ethical behavior. Tufts Now recently spoke with Allen about the course.

Tufts Now: What are you aiming to accomplish with the course?

Marty Allen: I am aiming to give students the tools that allow them to unpack ethical dilemmas that they may face in technology, in their careers, or as members of society. The idea is that they come out of the class with the ability to say, OK, let’s think about this issue from an ethical point of view. 

My job is not to teach the knowledge of what is right or wrong but to give students the tools to respond to ethical issues and to understand what ethical behavior means to them. Everybody needs to be able to take a tricky situation and have some kind of ability to argue that through in their own head, or with somebody else—and by argue, I mean in a civil discourse. This is something our department wants as a whole: We want students who, once they leave Tufts, are guided by more than just the demands of the marketplace.

Why is the philosophical text important?

What a philosophical education did for me was give me the ability to approach an argument for or against a case. It gives you concepts you can apply to life. I’ll pair Mill’s classic utilitarianism—that which serves the greater good is the best moral action—with a modern critic of utilitarianism. That’s how philosophy has worked for millennia: a back-and-forth dialogue between two opposing views.

Can you share an example of where those tools come into play?

One problematic case is happening in Britain around the issue of facial recognition surveillance. Law enforcement has a practice of taking mug shots of people who are arrested and entering them into a searchable database. But now you introduce facial recognition software, and law enforcement can search against that database to see if a person has had prior run-ins with the law.

The issue that has arisen is that you may never have been convicted—and the vast majority of people in that database are people who have never been convicted of any crime. But because they’re in the database, they can start showing up in lineup books, where they become vulnerable to false identification.

I frame this technological advance as an ethical case study. On the one hand, to remove those people would be tremendously expensive because someone has to go in manually and remove them. If you’re a database administrator, should you devote the resources to removing those photos? The utilitarian would ask, is the greatest good measured by the resources of law enforcement being spent wisely? Maybe facial recognition software leads to a few false positives—but that’s not a problem, he reasons, because it may prevent more crimes, and so serve the greater good. The opposing viewpoint could argue that the investment of time and money is worthwhile because of the potentially negative impact on the lives of innocent people.

You also offer students opportunities to look at the pervasive and harmful impact of technology bias. Your reading list includes a 2016 article published by ProPublica that looks at software used across the country to predict future criminals. The algorithms generate risk assessment scores that estimate the likelihood a criminal will be a repeat offender. The forecasting has huge implications because it informs decisions during sentencing. The journalists show how it’s baking racial disparities into the criminal justice system—clearly it is biased against Black people.

What are the key takeaways from that reading?

The journalists did intensive analysis, and multiple significant racial disparities surfaced. They concluded that the algorithms in question were highly problematic and that their application was leading to unjust outcomes. I want students to see how the implications of bias in algorithms could be reinforcing structural racism. At the same time, there have been debates about the conclusions drawn in the study. An assignment for the students is to analyze cases like this, breaking down the arguments for and against charges of biased or unethical uses of computing. I want them to forge their own ethical judgments, rather than always approaching computing from a values-neutral, technical point of view.

Tufts has set out to be an anti-racist university. It’s a context that implies responsibility for change. Does computer science need to become an anti-racist field?

That’s a complicated question. On the most superficial level: we don’t tolerate any overt racism in the field. This is our most basic responsibility.

On a deeper level, we need to be asking serious questions about bias in algorithms and their deployment. Any decent machine-learning practitioner or data scientist cares about bias in algorithms—it’s just that we tend to think about bias in a relatively neutral sense, as a form of technical data skew, without paying enough attention to how data biases can arise due to histories of human biases, based upon all the categories one can imagine: race, religion, gender, and more. The solutions to these sorts of problems may be at least partly technical—get better data, write better algorithms—but many nontechnical things contribute just as much or more. Having more diverse teams of researchers and developers can help, for instance, since sometimes biased outcomes are less recognizable to those who have experienced less bias in their day-to-day lives.
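To make that notion of data skew a bit more concrete, here is a minimal, purely illustrative sketch of the kind of check a practitioner might run: comparing false positive rates of a binary “high risk” prediction across demographic groups. The group labels and records below are invented for illustration, not drawn from the course or from any real dataset.

```python
# A hypothetical disparity check: compare false positive rates of a binary
# "high risk" prediction across demographic groups. All names and records
# below are invented for illustration only.
from collections import defaultdict

# Each record: (group, predicted_high_risk, actually_reoffended) -- toy data.
records = [
    ("group_a", True,  False),
    ("group_a", False, False),
    ("group_a", True,  True),
    ("group_b", True,  False),
    ("group_b", True,  False),
    ("group_b", False, True),
]

false_positives = defaultdict(int)  # flagged high risk, did not reoffend
true_negatives = defaultdict(int)   # flagged low risk, did not reoffend

for group, predicted, actual in records:
    if not actual:  # only non-reoffenders enter the false-positive-rate denominator
        if predicted:
            false_positives[group] += 1
        else:
            true_negatives[group] += 1

for group in sorted(set(false_positives) | set(true_negatives)):
    negatives = false_positives[group] + true_negatives[group]
    fpr = false_positives[group] / negatives if negatives else float("nan")
    print(f"{group}: false positive rate = {fpr:.2f}")
```

A gap in false positive rates between groups was one of the central disparities the ProPublica journalists reported; in practice a check like this would run over real case data rather than a handful of hand-written records.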

So there are a lot of places where we can make the technical outcomes better by making the field better in deeper ways related to equity. We have started to confront the fact that it is important to ask ethics- and equity-related questions about the use of the algorithmic tools we are building, recognizing that the technical details and decisions matter and have real moral consequences.

And on the deepest level, we have in fact found that algorithms can sometimes show less bias than humans. There are cases where predictive and analysis algorithms can outperform even experts. So, an ideal we might shoot for is that computer science can make contributions that will actually combat racism by cutting it out of decision-making channels.
