
Can AI Be Protected Against and Managed Through Restorative Justice? | Punishing AI with Protection


A humanoid robot stands behind bars, symbolizing the constraints and ethical dilemmas surrounding artificial intelligence.

There have been rapid developments in Artificial Intelligence (AI) within the last decade, not just in the computing space but also in the legal space. Concisely, AI refers to computers being able to make independent decisions without human input and to act upon those decisions.[1] While AI's increasing autonomy has major advantages in many industries, there is a cautionary concern: if AI is given excessive autonomy, it may make poor decisions that harm others and commit criminal offences. This caution is necessary to ensure responsible AI development.[2] Artificial Intelligence cannot be deterred or rehabilitated, but that does not mean it cannot be held criminally responsible for its acts for the purposes of protection and restorative justice. This piece will explore three purposes of criminal sanctions and how they can or cannot be applied to artificial intelligence.[3]


Firstly, we will examine why AI cannot be deterred or rehabilitated, as it has no morals, no sense of self and no pain receptors. Secondly, we will look at how society can use criminal responsibility to protect itself through incapacitation. Finally, we will look at how criminal responsibility can support restorative justice, helping put victims back in their original position. While it is difficult to hold AI criminally responsible, some approaches still accommodate the non-human nature of AI while protecting against criminal offences. Therefore, we must not dismiss the idea of AI culpability; instead, we must embrace it and focus on the outcomes rather than the processes themselves to ensure that criminal responsibility can be upheld.[4]



Can AI be deterred?

 

The idea of deterrence is to discourage a person from committing a criminal act by showing that they will face the same consequences as someone who has already committed that act.[5] There are two forms of deterrence: general (deterring all community members) and specific (deterring the individual themselves). AI cannot be deterred for several reasons. Firstly, there is a lack of consciousness and intent. For deterrence to succeed, individuals need to be able to put themselves in the shoes of a person who has committed the same offence and anticipate the sanction they would receive if they committed it. AI systems operate through algorithms and data inputs. Unlike humans, AI lacks the capacity for moral reasoning or fear of consequences, as it does not experience pain. It has been argued that if an AI sees another AI being punished, it may be deterred.[6] That argument is flawed: because AI has no sense of self, it cannot relate to the other AI being punished and, therefore, cannot form a deterrent link. Secondly, AI cannot form intent to commit crimes and, thus, cannot be deterred from doing something it is not planning to do.[7]



Can AI be rehabilitated?


Similarly, AI cannot be rehabilitated. Rehabilitation as a criminal justice objective is the idea that people who commit criminal acts can be “repaired” or “fixed” to become functioning members of society through programs such as counselling and education. This reduces their risk of reoffending and helps them transition back into the community to fit into the social cohesion model.[8] AI has no intrinsic motivation and no capacity for moral growth. It cannot understand, in the traditional sense, that it has done something wrong, only that it has made a mistake; it cannot attribute blame to itself.[9] Likewise, while you can retrain an AI or send it an update, that process lacks the mental transformation required to understand what it has done wrong in a broader context and to change its behaviour. While updates and changes can be made, calling them rehabilitation stretches the jurisprudence on what rehabilitation is. Deterrence and rehabilitation, however, are only two purposes of criminal law, and their absence does not mean we should refuse outright to hold AI criminally liable.



Can we protect ourselves from AI?

 

While AI cannot be deterred or rehabilitated, the public can be protected from it through incapacitation, a core objective of criminal law.[10] Protection as a criminal law purpose focuses on shielding the public from dangerous crimes by imposing barriers between a perpetrator and the public, such as imprisonment.[11] This is a future-focused measure that aims to stop further harm, and a similar approach can be taken with AI. AI can be turned off, unplugged, or isolated and limited in its operation. If an AI commits a criminal offence and is found criminally responsible, we can disable it and thereby stop it from committing further crimes, which upholds the purpose of protection.[12]


Without the possibility of criminal liability, legal mechanisms to restrict harmful AI use may be weakened, as existing regulatory frameworks may not adequately address AI accountability. Hallevy argues, “Society must take from the artificial intelligence system its physical capabilities to commit further offences.”[13] This highlights that society has a crucial role in protecting itself from AI and that we must use criminal sanctions to incapacitate these tools when needed. It is essential to ensure that AI can be held criminally responsible for its actions so that we have grounds to enforce incapacitation and protect the public from future harm.


“Society must take from the artificial intelligence system its physical capabilities to commit further offences.” - Hallevy


Can we achieve restorative justice with AI?


As well as protecting the community, we can pursue restorative justice, which is about repairing the harm done to victims. For justice to be achieved, someone must be held accountable for the wrongdoing.[14] Here the focus shifts from the AI to its human operator or controller and to putting victims in the position they were in before the wrong occurred, a core purpose of criminal responsibility.[15] One possible solution is mandatory insurance or a victim compensation fund from which payments can be made to victims. One model, seen in the Automated Vehicles Act,[16] is that when a fault occurs and the matter is taken to court, the insurer covers its customer's loss. This could be adapted so that an AI insurer is required to provide funds and compensation.


Another possibility is that, while AI may not have money, it could repay the loss in working hours as a form of currency, which can put the victim back into their original position.[17] Some may argue that civil liability is preferable to criminal liability. The concern with civil liability is that victims may not feel that justice has been served, and it limits the range of available sanctions (such as incapacitation). Civil liability also fails to express the societal condemnation and accountability expected for these offences.[18] Overall, holding AI criminally responsible is required to restore losses after the criminal conduct, and it shows that restorative justice is possible if AI tools are held criminally accountable for their actions.



Criminal liability for AI is a live topic of discussion among scholars and academics. Criminal sanctions come with many overarching purposes and goals that we use as justifications for sanctioning individuals. While AI cannot be deterred or rehabilitated, as it has no morals, cannot feel pain and has no sense of self, it can still be incapacitated and can assist with restorative justice for victims. Of the extensive range of purposes that criminal sanctions serve, incapacitation and restorative justice alone provide grounds to hold AI criminally responsible.

 

 



 


[1] Shiona McCallum, Chris Vallance, Tom Gerken, and Jennifer Clarke, ‘What Is AI, How Does It Work and What Can It Be Used For?’ (BBC News, 13 May 2024) https://www.bbc.com/news/technology-65545864 accessed 18 February 2025.

[2] Elina Nerantzi and Giovanni Sartor, ‘Hard AI Crime: The Deterrence Turn’ (2024) 44(3) OJLS 673 https://doi.org/10.1093/ojls/gqae018.

[3] Ryan Abbott and Alex Sarch, ‘Punishing Artificial Intelligence: Legal Fiction or Science Fiction’ (2019) 53 UC Davis L Rev 323, 334.

[4] Ibid, 374.

[5] Cambridge Dictionary, ‘Deterrence’ (Cambridge University Press, 2024) https://dictionary.cambridge.org/dictionary/english/deterrence.

[6] European University Institute, ‘Deterring AI from Committing Crimes: An Interview with Elina Nerantzi’ (EUI News Hub, 2024) https://www.eui.eu/news-hub?id=deterring-ai-from-committing-crimes-an-interview-with-elina-nerantzi&lang=en-GB.

[7] Simon Chesterman, ‘Artificial Intelligence and the Limits of Legal Personality’ (2020) 69(4) ICLQ 1025.

[8] UK Ministry of Justice, ‘Evidence to Reduce Reoffending’ (UK Government, 2018) 28 https://assets.publishing.service.gov.uk/media/5a7565a8e5274a1baf95e408/evidence-reduce-reoffending.pdf.

[9] Simon Chesterman, ‘Artificial Intelligence and the Limits of Legal Personality’ (2020) 69(4) International and Comparative Law Quarterly 819 https://doi.org/10.1017/S0020589320000366 accessed 18 February 2025.

[10] Gabriel Hallevy, Liability for Crimes Involving Artificial Intelligence Systems (Springer 2015) 112.

[12] Hallevy (n 10) 211.

[13] Ibid, 211.

[14] Ibid.

[15] Nerantzi and Sartor (n 2).

[16] Automated Vehicles Act 2024.

[17] Hallevy (n 10) 227.

[18] American Public University, ‘What Is Criminal Law and Why Does It Matter?’ (APU, 2024) https://www.apu.apus.edu/area-of-study/security-and-global-studies/resources/what-is-criminal-law-and-why-does-it-matter/.


This piece was originally submitted by William Cook to the University of Exeter.
