
Dr. Eugenia Siapera: should AI machines make decisions for us?

With artificial intelligence (AI) advancing at an unprecedented pace, a series of ethical issues arises. Foremost among them is whether AI systems should be making automated decisions for us.

According to recent research conducted by ESMT Berlin, machines have the potential to outperform humans in decision-making. Yet humans often intervene because they find it hard to discern the instances where the machine’s decision-making is, in fact, superior. The result is overridden algorithmic decisions and less favourable outcomes.

We discussed this controversial topic, one that engages the international academic community, with Dr. Eugenia Siapera, Professor of Information and Communication Studies and head of the ICS School at UCD. She specializes in digital and social media, political communication and journalism, the intersection of technology and social equity, the governance of online platforms, and issues related to hate speech, racism, and misogyny.

Should AI be allowed to make decisions for us, and under what conditions?

As a general principle, I think AI should not replace human judgment and decision-making. Rather, it should be seen primarily as a tool for aiding and supporting human decision-making. There may be areas where AI can deliver better judgments, for example in some diagnostic processes, but it still cannot assume responsibility. For this reason, I cannot think of circumstances under which we could or should support entirely autonomous AI decision-making.

Many researchers and experts believe AI can think faster than humans but should only make decisions with human supervision. Consequently, they believe unsupervised decisions should be avoided at all costs, even in emergencies like critical surgery. Is this a view you agree with?

As automated decision-making systems cannot assume responsibility or be held accountable for their actions, I think it would be very hard to justify fully autonomous AI decision-making. The question of responsibility involves important philosophical and ethical, but also legal, issues that are very difficult to resolve, at least for now. For example, what if something goes wrong during critical surgery? Who will be held responsible? But we don’t have to look at far-fetched scenarios. Self-driving cars are already making autonomous decisions. If these result in a death, ultimately the software engineer, the car manufacturer, or the passenger will be liable. Would self-driving car manufacturers, software engineers, or owners/passengers accept this risk and potential liability? Would an insurance company insure driverless cars? We either need much better technology than we currently have, or a new liability system, or both.


There is also a great deal of debate about whether the decisions of AI systems are biased. This concern is mainly based on the fact that the data fed to AI systems may contain elements of prejudice or racism. What is your opinion on this, and how could such a challenge be met?

Given that we live in highly unequal societies characterized by both racism and misogyny, these will inevitably find themselves encoded in AI systems. Joy Buolamwini of the Algorithmic Justice League, Timnit Gebru of DAIR, the sociologist Ruha Benjamin, Abeba Birhane, and others have explored the various ways in which AI systems are caught up in and reproduce race and other forms of inequality, going far beyond simple conceptions of bias. AI is produced within specific political-economic conditions, mainly by big tech corporations, and deployed in highly stratified and unequal societies. This cannot be resolved by improving the technology but by addressing issues of inequality and injustice.

Could it be that the AI system is not biased, but the human making the final decision is? Is this something that is overlooked in the international literature and research?

I think it is very difficult to disentangle AI from social systems and from individual users, who are themselves embedded in the same systems. The notorious example of COMPAS is a case in point. COMPAS is an algorithmic tool that calculates the risk of a criminal offender reoffending, and it is widely used in the United States for making decisions on bail, parole, sentencing, and so on. However, a ProPublica investigation showed that COMPAS systematically predicts a higher likelihood of reoffending for black offenders than for white offenders.

At the same time, we already know that judges are likely to sentence black offenders to more jail time than white offenders, even when their offence was the same. So here we have both an automated system and humans reproducing the same biases. Some would claim that we can improve the decision-making systems so that they produce more balanced decisions and avoid human prejudices and lack of objectivity. However, when it comes to the US criminal justice system, it is the whole system that leads both automated tools and human beings towards problematic decisions. In short, I would resist the tendency to view AI systems and humans as entirely cut off from a social, economic, and political context, which they both reflect and reproduce.
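To make the ProPublica-style critique concrete, here is a minimal, hypothetical sketch in Python. The data, group labels, and risk threshold are invented for illustration; this is not COMPAS’s actual scoring or ProPublica’s code. It simply shows how one can compare false-positive rates across groups: the share of people in each group who did not reoffend yet were still flagged as high risk.

# A minimal illustrative sketch with invented data, not real COMPAS scores.
# Given risk scores, a decision threshold, and whether each person actually
# reoffended, compare false-positive rates across groups. A large gap means
# one group is disproportionately labelled "high risk" despite not reoffending.

from collections import defaultdict

# Hypothetical records: (group, risk_score from 1 to 10, reoffended?)
records = [
    ("group_1", 8, False), ("group_1", 7, False), ("group_1", 9, True), ("group_1", 6, False),
    ("group_2", 3, False), ("group_2", 4, False), ("group_2", 8, True), ("group_2", 2, False),
]

HIGH_RISK_THRESHOLD = 5          # invented cut-off: scores above this count as "high risk"

false_positives = defaultdict(int)   # flagged as high risk but did not reoffend
non_reoffenders = defaultdict(int)   # everyone who did not reoffend

for group, score, reoffended in records:
    if not reoffended:
        non_reoffenders[group] += 1
        if score > HIGH_RISK_THRESHOLD:
            false_positives[group] += 1

for group in sorted(non_reoffenders):
    rate = false_positives[group] / non_reoffenders[group]
    print(f"{group}: false-positive rate = {rate:.0%}")

ProPublica’s finding was essentially a gap of this kind: black defendants who did not go on to reoffend were flagged as high risk far more often than white defendants who did not reoffend.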

Photo by Hitesh Choudhary on Unsplash

There is also research claiming that humans cannot correctly assess whether AI systems’ decisions are beneficial. Is that something you would agree with, based on your wide-ranging experience?

This depends on the case in point. AI systems may be more accurate in, for example, recognizing a tumour in an X-ray. But what happens when we ask an AI system to decide on offering or refusing asylum to a refugee? This is not a decision in which accuracy is or should be prioritized. In addition, what we see now is an adaptation of humans to machine decision-making, which privileges those who know how these systems work. To use an example, we know that recruiters are using AI tools for screening candidates. A typical applicant tracking system works by creating a ‘persona’ or a profile of the ideal candidate and scoring candidates based on this.

Candidates are now ‘optimizing’ their applications and CVs in ways that are more likely to get them approved by the algorithm. Not too long ago, there was a system in operation that would calculate a ‘success score’ for potential candidates based on profiles of people currently holding high positions in the organization. However, since senior executives tend to be white men, we can see how these systems merely replicate the status quo rather than opening up to new possibilities and bringing in more diverse candidates. So the question is, can these decision-making systems in these areas produce ‘better’ or more ‘accurate’ decisions?
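As a rough illustration of the screening logic described above, here is a minimal, hypothetical sketch in Python. The attribute names, profiles, and scoring rule are all invented; real applicant tracking systems are proprietary and far more elaborate. The point is simply that if the ‘ideal persona’ is derived from the people already holding senior positions, candidates who resemble those incumbents will score highest.

from collections import Counter

# Hypothetical profiles of current senior executives (the source of the "persona")
current_executives = [
    {"degree": "mba", "school": "ivy_league", "hobby": "golf"},
    {"degree": "mba", "school": "ivy_league", "hobby": "sailing"},
    {"degree": "engineering", "school": "ivy_league", "hobby": "golf"},
]

# The "ideal persona" is simply the most common value of each attribute
persona = {
    attr: Counter(ex[attr] for ex in current_executives).most_common(1)[0][0]
    for attr in current_executives[0]
}

def success_score(candidate):
    """Fraction of persona attributes the candidate matches."""
    matches = sum(candidate.get(attr) == value for attr, value in persona.items())
    return matches / len(persona)

# Candidates who resemble the incumbents score highest, so the system
# reproduces the existing profile rather than surfacing different candidates.
applicants = [
    {"name": "resembles incumbents", "degree": "mba", "school": "ivy_league", "hobby": "golf"},
    {"name": "different background", "degree": "law", "school": "state_university", "hobby": "chess"},
]
for applicant in applicants:
    print(applicant["name"], success_score(applicant))

Nothing in this toy scorer is malicious, yet it systematically favours whoever looks like the existing leadership, which is exactly the status-quo replication described above.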

What are the big threats arising from the rapid development of AI systems, and what are the benefits?

We recently conducted a study on public perceptions of AI and big data in Ireland. The (Irish) public has high hopes for the use of AI in improving health outcomes, and indeed, it is in health that we have seen some spectacular advances, from improved diagnostic tools to more tailored and personalized therapeutic approaches.

On the other hand, these tools are produced and used in a commercial setting, essentially creating a two-tiered system, where those who can afford to will avail of these tools while the rest of us continue using the basic, stripped-down national health services. In essence, this is what I consider one of the biggest threats of AI: the exacerbation and deepening of existing social divides. I remain hopeful that we can produce and deploy AI systems that will aid humanity as a whole, but this will require that we address the social and economic context in which these systems are produced.

George Mavridis is a freelance journalist and writer based in Greece. His work primarily covers tech, innovation, social media, digital communication, and politics. He graduated from the Aristotle University of Thessaloniki with a BA in Journalism and Mass Communication. He also holds an MA in Media and Communication Studies from Malmö University in Sweden and an MA in Digital Humanities from Linnaeus University in Sweden.