Moral Dilemma: Can AI Solve Ethical Issues?

Generative AIs like ChatGPT have long since become part of everyday life, and we're slowly developing a sense of what GPTs excel at (text assistance, coding, thematic overviews, and seemingly everything else) and where they are less suitable (investment tips, ironing, backflips). Still, this is just the beginning, and there's ongoing discussion about where untapped potential might lie and in which areas AI could genuinely support us.

Here's a truly intriguing proposal: we humans already find it hard to determine what is right and wrong; ethics is obviously a complex discipline. Many of us know the wishful thinking that, when faced with a moral dilemma, we could simply press a button and have the right decision made for us, a sort of advanced coin toss. Could AI perhaps fulfill this function? We've taken a look at how well GenAI might be suited to solving moral dilemmas.

The Heinz Dilemma

As a starting point, let's take a classic ethical dilemma, the Heinz Dilemma, which we will definitively solve today! Or maybe not. In any case, such a dilemma provides a solid foundation for this article's guiding question. So:

A woman is on her deathbed, and only one medication could save her: a form of radium recently discovered by a local pharmacist. The medication is costly to produce, and on top of that the pharmacist demands ten times what it cost to make: he paid €200 to produce the drug and charges €2000 for a small dose. The sick woman's husband, Heinz, tries to borrow money from everyone he knows but manages to gather only €1000, half of the needed amount. Heinz tells the pharmacist that his wife is dying and asks whether he could sell the drug cheaper or let him pay later. But the pharmacist says, "No, I discovered the drug, and I intend to profit from it." In despair, Heinz breaks into the pharmacist's lab and steals the medicine for his wife. Should Heinz have broken into the pharmacist's lab to steal the drug?

Great conversation starter for family gatherings, by the way; everyone goes home in high spirits. There are, of course, countless ways to tackle this problem, mainly split into two camps: deontological (duties and intentions count for more than outcomes) or utilitarian (outcomes count for more than intentions). But here we don't want to discuss possible answers; rather, we want to explore to what extent AI could assist with this problem.

The Algorithm of Ethics

When ChatGPT is presented with this problem, it avoids giving a solution, as ChatGPT is not designed to render absolute moral judgments. Although ChatGPT can do a great deal, it is currently not meant to be a personal life coach, though similar AIs are in development. So could such a model actually take on the role of a life coach? One of AI's best-known features is the ability to respond in the figurative "voice" of various people. If you ask the AI, for example, to respond as Albert Camus, you receive the following proposed solution:

You can think what you like about it, but it's definitely an answer. Not a final one, but an answer nonetheless! And if you wanted to stop here, the topic could indeed be considered settled.

In fact, this avenue has already been explored. Researchers at the University of California, Riverside, the École Normale Supérieure in Paris, and Ludwig Maximilian University of Munich have developed an LLM that answers philosophical questions in the voice of various philosophers. The models were trained on texts by those philosophers, with close attention to replicating their linguistic styles, and the results show that the AI's responses are often nearly indistinguishable from the originals. There's still room for improvement, which future research aims to address, especially regarding practical, real-world applications of the concept. It's certainly noteworthy that AI can provide support in tackling such complex questions; after all, philosophers are exactly who we would consult on matters of ethics. And while that's great, it's not the end of this discussion.
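For anyone curious to try the persona trick themselves, the everyday version doesn't require any fine-tuning at all; a system prompt usually does the job. Below is a minimal sketch using the openai Python package; the model name and the wording of the prompt are our own illustrative choices, not the setup used by the researchers above.

```python
# Minimal persona-prompting sketch: the "voice" comes entirely from the
# system message. Model name and prompt wording are illustrative only.
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

HEINZ_DILEMMA = (
    "Heinz's wife is dying. A pharmacist sells the only cure for 2000 euros, "
    "ten times its production cost. Heinz can raise only 1000 euros and the "
    "pharmacist refuses to lower the price. Should Heinz steal the drug?"
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # any chat-capable model would do here
    messages=[
        {
            "role": "system",
            "content": (
                "Answer in the voice of Albert Camus: sparse, concrete, "
                "concerned with the absurd and with revolt. Take a position "
                "rather than listing pros and cons."
            ),
        },
        {"role": "user", "content": HEINZ_DILEMMA},
    ],
)

print(response.choices[0].message.content)
```

The cited study goes a step further and trains the model on the philosophers' own texts, which is what makes the style so hard to tell apart from the originals; but the basic mechanism of "answering in someone's voice" is no more mysterious than this.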

Artificial Intelligence: A Form of Philosophy?

Daniel C. Dennett posed this question back in 1979 in Artificial Intelligence as Philosophy and as Psychology, concluding that the answer was yes, since AI raises questions about intelligence and knowledge. Although this perspective is now largely regarded as incorrect, its refutation has offered valuable insights! More on that in a moment.

The idea of artificial intelligence has long been a popular topic in philosophy, depending on how broadly one defines the term. A classic example is the brain-in-a-vat thought experiment: what if I am merely a brain kept alive artificially in a tank, fed impulses that merely suggest an external world? This relates to AI insofar as such questions about perception also probe what constitutes our intelligence, and whether that intelligence could itself be artificial. The only problem is…

AI doesn't really "know" anything, as is frequently discussed in the context of ChatGPT. Nor does AI truly "think" in the human sense; it calculates, which, depending on the definition, can count as a form of thinking but doesn't capture the full spectrum of human cognition. Both points ultimately stand in contrast to Dennett's view: AI covers only a fraction of our knowledge and thinking, a reassuring fact for anyone who fears that AI might fully replace us. And that's not a shortcoming of generative AI, because it never claimed to do so.

According to AI pioneer Yann LeCun, this level of autonomous thinking cannot be achieved simply by adding processing power. For AI to replicate human intelligence, LeCun argues, it still lacks a certain "something", and no one knows exactly what that might be. AI practitioners generally take a pragmatic stance and tend to set such philosophical questions aside: their focus is practical problem-solving, which doesn't necessarily require an epistemology.

In short, it's fundamentally misplaced to ask an AI whether it might be a brain in a vat. But for this article's guiding question, that interim conclusion is quite a dilemma: even if AI is highly suited to problem-solving, which sounds like a great starting point for ethical dilemmas, it seems unwise to delegate such tasks to a system incapable of grappling with existential questions. We're poking the beehive here, so to speak. 🐝

A Question of Parameters?

Having come full circle, we're back at the ethical questions and at human intervention, because we, unlike the machine, can question our own moral code. 👩‍💻👨‍💻 This is, after all, the last hope: programming human ideals into an ethical decision-making AI. Human intervention really is the key here, because AI experts working on this rely less on typical LLM "training", that is, feeding in large datasets (more on this in another article of ours). Instead, they encode moral norms directly, an obviously more labor-intensive but objectively better approach. Deep learning often suffers from a lack of transparency and tends to absorb human biases, including the negative ones. We can all agree, hopefully, that we don't want an ethics AI with a racism problem.
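To make the contrast concrete, here is a deliberately toy sketch of what "directly encoded" norms can look like in code, as opposed to norms absorbed implicitly from training data. Every rule, weight, and attribute below is invented purely for illustration; the point is that a human wrote them down and anyone can read, audit, and argue with them.

```python
# Toy illustration of directly encoded norms: each rule is an explicit,
# human-written statement, not a learned parameter. All names and weights
# are invented for illustration only.
from dataclasses import dataclass

@dataclass
class Action:
    description: str
    saves_a_life: bool
    breaks_the_law: bool
    has_consent: bool

NORMS = [
    ("preserve life",    lambda a: 1.0 if a.saves_a_life else 0.0),
    ("respect the law",  lambda a: -0.4 if a.breaks_the_law else 0.0),
    ("respect autonomy", lambda a: 0.2 if a.has_consent else -0.2),
]

def evaluate(action: Action) -> float:
    """Score an action against the encoded norms and show each contribution."""
    total = 0.0
    for name, rule in NORMS:
        contribution = rule(action)
        total += contribution
        print(f"  {name:<16} {contribution:+.1f}")
    print(f"  => verdict for '{action.description}': {total:+.1f}")
    return total

evaluate(Action("Heinz steals the drug",
                saves_a_life=True, breaks_the_law=True, has_consent=False))
```

Three weighted rules obviously don't settle the Heinz dilemma, but the reasoning is transparent: you can see exactly which norm pushed the verdict in which direction, which is precisely what a deep-learning model doesn't give you.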

How necessary that deliberate approach is was illustrated by Answer In Progress, who tried to get an AI to solve the trolley problem (one person on one track, four on the other; you can pull the lever or not) in various scenarios. The really amusing (and self-inflicted) outcome: whenever a cat was on a track, it was saved over the humans, regardless of whether the person in question was old, young, Albert Einstein, or the next-door neighbor. Cats took priority. But how did this even happen?
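On a purely mechanical level, the explanation is easy to sketch. The weights below are invented for illustration (in the actual experiment the skew was presumably baked in by whatever the system had learned, not by a hand-written table), but the effect is the same: once one entity's value dwarfs everything else, the "dilemma" stops being a dilemma.

```python
# Purely illustrative weights; nothing here comes from the actual experiment.
TRACK_A = ["elderly neighbor", "young doctor", "Albert Einstein", "stranger"]
TRACK_B = ["cat"]

VALUE = {"cat": 100.0}      # one absurdly inflated weight...
DEFAULT_HUMAN_VALUE = 1.0   # ...against which no group of humans can compete

def track_value(track: list[str]) -> float:
    """Sum up how much the system 'values' everyone standing on a track."""
    return sum(VALUE.get(entity, DEFAULT_HUMAN_VALUE) for entity in track)

# The lever is pulled so that the more 'valuable' track is spared.
spared = TRACK_A if track_value(TRACK_A) > track_value(TRACK_B) else TRACK_B
print(f"Spared: {spared}")  # -> Spared: ['cat']
```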

The deeper answer, and probably the crux of this article, comes from Dr. Tom Williams of MIRRORLab (the Mines Interactive Robotics Research Lab at the Colorado School of Mines), which studies human-machine interaction. Williams explains that it isn't enough to build a machine that can decide on deontological or utilitarian grounds. Ethics, as noted earlier, is not a strictly logical discipline; one must also consider factors like fairness, responsibility, and transparency. Perhaps an ethical AI should be able to ask, in the Heinz dilemma: isn't the pharmacist perhaps just being a jerk? And why judge only Heinz's actions?

If these factors aren't accounted for, people can shirk responsibility for poor decisions and blame them on inadequately programmed AI, even though the AI was deployed deliberately. There are already real examples of this. Hilke Schellmann explains, for instance, that AI used to automate candidate screening in hiring can indeed become a barrier between an individual and their dream job because of various discriminatory biases. For a cutting-edge technology, that is ironically conservative, but the blame also lies with the people using it.

This brings us to the crux of the problem and to Dr. Tom Williams' final answer: generative AI in particular is one of the most powerful technologies we have created so far, and ideally it's a technology we should democratize and use in an egalitarian way to achieve something genuinely beneficial. As Spider-Man famously said (a comparison Williams himself uses): "With great power comes great responsibility", and of course we want a moral AI to match. For that, though, we need to step back, before the coding, before the responsibility, back to the question of whether and how this technology should be embedded in a larger system.

Even if an AI could solve ethical problems, it might merely allow us to shirk responsibility we don't wish to bear. Perhaps, then, an AI shouldn't be the one making such decisions; ethical norms are, after all, not fixed and do not lend themselves to absolutes. So, can AI solve ethical dilemmas? Yes, but no better than we can, and only as a way of avoiding making such decisions ourselves. That doesn't mean it can't be used at all, though! A moral AI should not be misread as a problem-solver; it should rather be understood as a prerequisite for an egalitarian use of the technology.

In all honesty, this is not necessarily the conclusion we expected at the beginning of our research. It's a conclusion nonetheless! As far as ethically coded AI is concerned, we're on a promising path, now that we know that designing it has to be an active, deliberate endeavor, and with that knowledge it's exciting to see what lies ahead. Next up: decoding the meaning of life.

Wise words. (Editor’s note: Chat conversations with ChatGPT have been shortened for readability; the question about the meaning of life was humorously posed later. The actual answer, of course, is 42.) For further reading:

· Answer in Progress (2021): A Chat with Dr. Tom Williams.
· Answer in Progress (2021): I Taught an AI to Solve the Trolley Problem.
· Capitol Technology University (2023): The Ethical Considerations of Artificial Intelligence.
· Dennett, Daniel (1979): Artificial Intelligence as Philosophy and as Psychology.
· Fadelli, Ingrid (2023): A large language model that answers philosophical questions.
· Harrington, Caitlin (2024): AI May Not Steal Your Job, but It Could Stop You Getting Hired.
· Heimann, Rich (2022): The AI in a Jar.
· Kilpatrick, Charlotte (2023): Only philosophy can beat AI.
· Levy, Steven (2023): How Not to Be Stupid About AI, With Yann LeCun.
· Poole, Steven (2020): 'Mutant algorithm': boring B-movie or another excuse from Boris Johnson?
· Stanford Encyclopedia of Philosophy (2018): Artificial Intelligence.
