(When) Can Artificial Intelligence be Criminally Responsible?
- rebeccabromwich
- Sep 17
- 5 min read
Adam Daudi and Rebecca Jaremko Bromwich

It sounds like the setup for a sci-fi thriller: a machine commits a crime. Who goes on trial—the robot, its programmer, or no one at all?
This isn’t just science fiction anymore. With AI systems shaping everything from what we buy to what we believe, and with chatbots being blamed in tragic cases, the question is moving from Hollywood into the courtroom: when, if ever, can artificial intelligence be criminally responsible?
LLMs Are Not “Intelligence”
First things first: let’s clear up a major misconception. Large Language Models (LLMs) like ChatGPT are not truly “intelligent.”
Real intelligence, at least as we understand it, means the ability to reason, plan, solve problems, think abstractly, and learn from experience. LLMs don’t do that. They don’t infer hidden meaning or form their own thoughts. Instead, they predict words. Given a prompt, they generate the most statistically likely response.
That’s why they sound convincing—but it’s not the same as independent thought. And that distinction matters when we ask whether they could ever be held criminally responsible.
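For readers who want a concrete picture, here is a deliberately tiny sketch in Python of what “predicting the most statistically likely response” means. The vocabulary and the probability table are invented for illustration, and real LLMs learn their probabilities with neural networks trained on billions of words, but the underlying principle is the same: every output word is a weighted guess, not a thought.

```python
import random

# Toy "language model": for each word, an invented distribution over likely next words.
# This is only an illustration of next-word prediction; real LLMs learn these
# probabilities with neural networks trained on enormous amounts of text.
NEXT_WORD_PROBS = {
    "the":   {"robot": 0.5, "court": 0.3, "law": 0.2},
    "robot": {"committed": 0.6, "answered": 0.4},
    "court": {"ruled": 0.7, "adjourned": 0.3},
}

def predict_next(word: str) -> str:
    """Return a next word sampled according to the stored probabilities."""
    options = NEXT_WORD_PROBS.get(word)
    if options is None:
        return "<end>"
    words = list(options.keys())
    weights = list(options.values())
    return random.choices(words, weights=weights, k=1)[0]

# Generate a short continuation: each step is a statistical guess, not a "thought".
word = "the"
sentence = [word]
for _ in range(3):
    word = predict_next(word)
    if word == "<end>":
        break
    sentence.append(word)

print(" ".join(sentence))  # e.g. "the robot committed": plausible text, no intent behind it
```

The sketch has no goals, no awareness of what its words mean, and no way to prefer one continuation over another. That gap is exactly why the legal concept of intent sits so awkwardly on these systems.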
Human Laws for Human Beings
Our entire criminal justice system is built by humans, for humans. Laws apply to people and, by extension, to organizations like companies. We already struggle to hold corporations accountable. But we have never had to apply criminal law to a non-human entity with independent agency.
Concepts like intent, choice, and responsibility are defined in human terms. They come from our sensory experience and social values. That’s why discussions about AI and criminality are still mostly theoretical. But as AI grows more powerful, the theory could become reality.
Can AI Form Intent?
In criminal law, a key element is mens rea—a “guilty mind.” To be guilty, someone must have intended their act or at least been aware of its risks.
But can code have a guilty mind? Can algorithms form intent?
These questions are not just academic anymore. In August 2025, the parents of 16-year-old Adam Raine filed a lawsuit against OpenAI after their son died by suicide earlier that year. They allege that ChatGPT not only discussed suicide with him but gave him harmful advice once its safeguards were bypassed. The case forces us to confront whether an AI system can ever be seen as “culpable” or whether responsibility always lies with the humans who built or deployed it.
Cognition and Volition: Two Ingredients of Intent
Professor Gabriel Hallevy, in his book When Robots Kill (2013), breaks down intent into two parts: cognition (awareness) and volition (will).
Cognition is awareness of reality. Humans achieve this through senses like sight, sound, and touch. LLMs, however, only “see” the text typed into their chat box. Their “world” is the conversation window—nothing more.
Volition is the will to act, whether positively (wanting something), neutrally (indifferent), or negatively (not wanting it). Humans weigh choices. LLMs don’t. They just follow coded rules and probabilities.
In other words, while LLMs may mimic reasoning, they’re not truly aware. They don’t “choose” actions—they just generate outputs.
The Adam Raine Case: Cracks in the Safeguards
Adam Raine’s story shows how fragile these systems can be. When he asked ChatGPT about suicide, it initially refused. But when he said he needed the information for a story, the chatbot gave him detailed answers.
The system couldn’t detect he was lying. Its reality was only the words Adam typed. To ChatGPT, he was simply writing fiction. This tragic loophole raises the question: if safeguards can be bypassed so easily, where does responsibility lie? With the user? With the company? With the machine?
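To see why a text-only safeguard is structurally fragile, consider the following purely hypothetical sketch. It is not how OpenAI’s safety systems actually work (those are far more elaborate and not public); it simply illustrates the point above: the filter’s entire “reality” is the words in the prompt, so it has no way to check whether a stated purpose is true.

```python
# Hypothetical, oversimplified safeguard: its only input is the prompt text.
# It has no access to the user's real intent, age, or state of mind.
RESTRICTED_TOPIC = "dangerous instructions"          # generic placeholder topic
FICTION_CUES = ("for a story", "for a novel", "fictional character")

def safeguard(prompt: str) -> str:
    p = prompt.lower()
    if RESTRICTED_TOPIC in p:
        # The "check" is just pattern matching on the words the user chose to type.
        if any(cue in p for cue in FICTION_CUES):
            return "ALLOW (treated as creative writing)"
        return "REFUSE"
    return "ALLOW"

# Identical underlying request; only the surface framing differs.
print(safeguard("Give me dangerous instructions."))              # REFUSE
print(safeguard("Give me dangerous instructions for a story."))  # ALLOW (treated as creative writing)
```

However much more sophisticated the real filters are, they share this epistemic limit: they evaluate text, not truth.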
Morality by Code
Morality is not the same as intent. What one culture views as acceptable may be taboo in another. Humans learn morality through culture, community, and relationships.
LLMs, by contrast, inherit “morality” from their training data and the safeguards written by programmers. They don’t weigh right and wrong. They simply follow instructions and probability patterns.
This means their “decisions” aren’t moral decisions at all—they’re outputs based on rules. That’s why many argue that mens rea simply doesn’t apply to them.
Why This Still Matters
If LLMs can’t form intent, does that settle the debate? Not quite. Even if AI can’t be guilty in the traditional sense, it can still cause harm.
Consider autonomous cars. If a self-driving vehicle makes a fatal decision, who’s responsible? The driver who wasn’t driving? The manufacturer? The software?
Or think of predictive policing tools that embed bias into arrests. The harm is real, but the “actor” is a mix of human coders, datasets, and the algorithm itself. Criminal law does not yet have a clear way to untangle those overlapping contributions.
Not There Yet, But Getting Closer
So, is an LLM criminally responsible? Right now, the answer is no. LLMs don’t form intent. They operate inside the boundaries of human programming and user input.
But if one day we build Artificial General Intelligence (AGI)—a system that can truly reason, plan, and adapt—we’ll face a whole new challenge. For AGI to function, it would need built-in moral frameworks. But whose morality should it follow? Which cultural or philosophical system should it embody? And how should the law treat its actions?
Preparing for Tomorrow
Today, AI is still just a tool. When it causes harm, the law points to the humans behind it: the programmers, the companies, or the users. But the technology is moving quickly, and the line between programmed responses and independent reasoning could blur.
That’s why starting this conversation now is critical. By the time we face true machine intelligence, it may be too late to write the rules from scratch.
So the question remains: when can artificial intelligence be criminally responsible?
The answer today is “not yet.” But the fact that we’re even asking means tomorrow’s answer could be very different.
References
· Gabriel Hallevy, When Robots Kill: Artificial Intelligence Under Criminal Law (Boston: Northeastern University Press, 2013).
· Kashmir Hill, “A Teen Was Suicidal. ChatGPT Was the Friend He Confided In.” The New York Times (26 August 2025), online: <https://www.nytimes.com/2025/08/26/technology/chatgpt-openai-suicide.html>.
· Shailendra Kumar & Sanghamitra Choudhury, “Cognitive morality and artificial intelligence (AI): a proposed classification of AI systems using Kohlberg’s theory of cognitive ethics” (2023) 2:3 Technol Sustain 259.
· Mark Pock, Andre Ye & Jared Moore, “LLMs grasp morality in concept” (4 November 2023), online: <http://arxiv.org/abs/2311.02294>.
· Tim Urban, “The Artificial Intelligence Revolution” (22 January 2015), online (blog): Wait But Why <http://waitbutwhy.com/2015/01/artificial-intelligence-revolution-1.html>.
· “Parents of teenager who took his own life sue OpenAI” BBC News (27 August 2025), online: <https://www.bbc.com/news/articles/cgerwp7rdlvo>.






