The Legal Challenges of Regulating Artificial Intelligence in Criminal Justice
- The University of Wisconsin Pre-Law Journal
- Jan 9

Writer: Clayton Gerrans
Editor: Kitty Wang
Artificial intelligence (AI) is increasingly used in criminal justice, from predictive policing to risk assessments in sentencing. While AI promises efficiency and impartiality, it raises substantial legal concerns, particularly regarding fairness, bias, accountability, and transparency. AI-driven tools such as facial recognition software have demonstrated significant inaccuracies, especially with racial minorities, heightening concerns that algorithmic bias could perpetuate discriminatory practices. Furthermore, AI’s opacity challenges the legal system’s foundational principle of transparency: machine learning algorithms often function as “black boxes,” producing decisions without clear explanations of how they were reached. The inability to understand AI decision-making creates due process hurdles, as defendants are entitled to know the reasoning behind their legal outcomes. This lack of transparency also makes AI-derived evidence difficult to challenge, raising potential constitutional issues under the Sixth Amendment, which guarantees defendants the right to confront the witnesses against them.
Another significant concern is accountability. When AI systems make errors in law enforcement or legal proceedings, it’s often unclear who should be held responsible—software developers, government agencies, or the AI itself. This ambiguity complicates liability in wrongful arrests or improper sentencing, and courts have yet to establish clear guidelines on AI responsibility. Moreover, AI systems, particularly those used for predictive policing, have been criticized for reinforcing existing societal biases. These tools are often trained on historical data, which may reflect patterns of racial or socioeconomic disparities in policing. As a result, AI risk assessment tools may disproportionately impact marginalized communities, undermining the principle of equality before the law.
Recent litigation surrounding the use of AI in criminal justice has led to increased scrutiny of these systems. In State v. Loomis (2016), the Wisconsin Supreme Court upheld the use of the COMPAS risk assessment tool in sentencing but expressed concerns about its opaque nature and potential bias. The court emphasized that AI-derived risk scores should not be the sole factor in judicial decisions, highlighting the need for judicial discretion. Legal scholars and civil rights advocates continue to argue that such tools violate defendants’ due process rights when their inner workings are not disclosed, challenging the fairness of using proprietary algorithms in criminal cases.
Federal and state governments have been slow to regulate AI use in criminal justice, leaving significant gaps in oversight. Some lawmakers have introduced proposals to ban facial recognition entirely in law enforcement, such as the Facial Recognition and Biometric Technology Moratorium Act (2020), citing concerns about privacy and the potential for wrongful arrests. Meanwhile, advocates are calling for more stringent standards for the use of AI in courts, including requiring AI systems to be explainable, auditable, and tested for bias.
As AI technologies evolve, the legal system faces mounting pressure to establish clear guidelines for their ethical and lawful use in criminal justice. Balancing technological innovation with fundamental legal principles, including fairness, accountability, and transparency, remains a complex challenge. Legal frameworks must adapt to ensure that AI enhances justice rather than undermining it, protecting individuals’ rights while harnessing the potential of emerging technologies. The future of AI in criminal justice hinges on whether it can be effectively regulated to ensure it serves as a tool for fairer, not more biased, outcomes.
References
State v. Loomis, 881 N.W.2d 749 (Wis. 2016).
● This case addresses the use of the COMPAS risk assessment tool in sentencing, focusing on transparency and bias.
U.S. Congress. Facial Recognition and Biometric Technology Moratorium Act of 2020.
● A proposed bill aimed at suspending the use of facial recognition technology by federal agencies until further regulation is enacted. Available at Congress.gov.
"Algorithmic Bias and AI in Criminal Justice." Harvard Law Review, vol. 135, no. 6, 2022, pp. 1234-1255.
● An article exploring the implications of algorithmic bias in AI-driven tools within the criminal justice system.
“AI, Bias, and the Law: Legal Challenges to Algorithmic Justice.” Duke Law Journal, vol. 72, no. 4, 2023, pp. 891-926.
● A scholarly analysis of legal challenges posed by AI's use in the criminal justice system, focusing on fairness and due process.