Playing With Fire: AI in Criminal Law

Advocating Prudent Partnership with AI Technology
Rebecca Jaremko Bromwich and Rida Zakir
In 2023, Sundar Pichai, CEO of Google, remarked that artificial intelligence (AI) may prove more transformative to humanity than even the discovery of fire. While bold, this statement underscores the profound and evolving impact of AI across all sectors—including law. As legal professionals, we believe the future should centre on partnership with vetted AI platforms as powerful research and drafting assistants—not blind reliance or delegation of legal responsibility.
AI and Criminal Law: A Cautious Embrace
Criminal law, like many fields, is grappling with the promises and perils of AI. The use of AI in prosecutions and legal proceedings raises essential questions of reliability, accountability, and fairness. While AI can support legal research and procedural efficiency, its unchecked use in the justice system—particularly in criminal matters—can have devastating consequences.
We advocate a cautious, critically supervised adoption of AI, with human oversight at every step. This means not only proper vetting of AI tools but also transparency in their application, strict adherence to ethical standards, and active involvement by legal regulators.
Risks of AI in Prosecution and Policing
There are real dangers when AI is used without adequate safeguards. Misuse can lead to wrongful convictions, violations of constitutional rights, and erosion of public trust in legal institutions. These risks are not theoretical—recent cases across multiple jurisdictions have revealed the high stakes involved:
Canadian Caution
In Zhang v Chen, 2024 BCSC 285, a family law lawyer was sanctioned for filing court materials containing fictitious case citations generated by ChatGPT. The court ordered the lawyer to pay costs and emphasized that while AI can be a tool, it cannot replace the professional expertise expected of legal counsel. The judge stressed that 'competence in the selection and use of any technology tools...is critical. The integrity of the justice system requires no less.'
In response to similar concerns, both the Court of King’s Bench of Manitoba and the Supreme Court of Yukon issued practice directions in June 2023, mandating that parties disclose how AI was used in preparing legal materials. These directives reflect growing institutional awareness about the fallibility of AI-generated content and the need for transparency.
U.S. and International Incidents
Multiple American cases further highlight these concerns:
- In Woodruff v. City of Detroit, No. 5:23-cv-11886 (E.D. Mich. Aug. 3, 2023), Porcha Woodruff, a pregnant woman, was wrongfully arrested due to a false facial recognition match. She was held for 11 hours before being released. The City of Detroit lacked any formal policy or officer training governing the use of facial recognition, illustrating a dangerous gap in oversight.
- In Williams v. City of Detroit, stemming from the 2020 wrongful arrest of Robert Williams, facial recognition software misidentified him in a shoplifting investigation. The ACLU sued on his behalf, and the case settled in 2024, demonstrating the civil rights risks of flawed AI tools.
- State v. Black, 2024-Ohio-1206, involved Cybercheck, an AI program used in criminal investigations. After its reports were found to rely on unverifiable data, charges in several jurisdictions were dropped, again underscoring the perils of opaque algorithmic evidence.
- In State v. Tolbert (2025), a judge excluded AI-based facial recognition evidence in a murder trial due to concerns over accuracy and bias. The court criticized law enforcement’s failure to independently verify the AI's findings before executing a search warrant, raising issues of due process and evidentiary integrity (Forbes, 2025).
Judicial Rebuttals to AI Defences
Even when accused persons attempt to invoke AI in their favour, courts remain sceptical. In R v Chaudhry, 2019 ONCJ 639, the defendant argued that his car’s AI features reduced the risk of harm while he was impaired behind the wheel. Justice Kastner flatly rejected this argument, calling it 'nonsensical' and emphasizing that even high-tech vehicles do not absolve impaired drivers of responsibility.
Ethical Challenges in Legal Research
AI’s integration into legal research also raises ethical flags. In both Australia and the United States, lawyers have faced disciplinary action for submitting court documents containing fake citations generated by AI:
- In 2024, an Australian lawyer was referred to a professional oversight body for including fabricated case law in a family court submission.
- In 2025, a U.S. legal team in a lawsuit against Walmart admitted to filing AI-generated citations to non-existent cases, prompting withdrawal of the filing and renewed scrutiny of lawyers' ethical obligations (Reuters, 2025).
Conclusion: A Role for AI—With Supervision and Regulation
Sundar Pichai’s analogy between AI and fire is apt. Like fire, artificial intelligence can transform human endeavour across many facets of our lives, but just as fire burns when left unattended, AI is dangerous without close human supervision. Properly deployed, artificial intelligence holds significant potential as a secondary tool to support legal professionals, especially in research and document drafting. But it must not be mistaken for legal judgment, due process, or ethical responsibility.
As courts, law societies, and legal educators adapt to these emerging technologies, it is vital to prioritize human oversight, policy development, and professional accountability. Only then can we ensure that AI complements, rather than compromises, the pursuit of justice.