
"AI is reshaping Canadian Criminal Law" - a podcast

  • Writer: Robson Crim
  • 6 min read

Produced by third-year Robson Hall law students Jayden and Andreas for Robson Crim, this episode looks at how AI is reshaping Canadian criminal law from both directions at once: the state’s growing use of AI in policing and surveillance, and criminals’ use of AI to scale fraud and identity theft. The first half sketches the privacy and Charter backdrop, then walks through tools like predictive policing and facial recognition, raising concerns about bias, “feedback loops,” and the way automated systems can blur accountability even when intentions are good. The second half turns to AI-enabled fraud, explaining how automation, voice cloning, and deepfakes have made scams more targeted, believable, and harder to detect, with major financial and social fallout. Throughout, the podcast keeps returning to one big question: when the same powerful technology is used by both governments and criminals, what should worry us most, and what trade-offs are we willing to accept between safety, privacy, and freedom?



This podcast, produced for Robson Crim by third-year law students Jayden and Andreas, explores how artificial intelligence is reshaping criminal law in Canada from two perspectives: how the state is using AI in policing and surveillance, and how criminals are using AI to commit fraud and identity theft. Through this dual lens, the discussion raises a central question: when the same powerful technology is in the hands of both governments and criminals, who should concern us more? 


The first half of the podcast examines how the state is deploying AI in criminal justice. Before addressing specific tools, the discussion outlines the legal privacy framework meant to constrain government overreach. In Canada, privacy is recognized as a legal right. Manitoba, for example, has a statutory tort for invasion of privacy under The Privacy Act, allowing individuals to sue where their privacy is substantially and unreasonably violated. The province also amended its Intimate Images Protection Act in 2024 to address AI-generated intimate images. Federally, PIPEDA governs how private organizations handle personal information; however, Canada currently lacks a comprehensive federal law specifically regulating artificial intelligence. The proposed Artificial Intelligence and Data Act (AIDA), introduced in 2022, was intended to provide oversight and enforcement for AI systems, yet it stalled in the legislative process and ultimately died when Parliament dissolved. 

In the criminal law context, s. 8 of the Canadian Charter of Rights and Freedoms protects individuals against unreasonable search and seizure. The Supreme Court of Canada has reinforced strong privacy protections, particularly within the home (R. v. Silveira) and in digital communications (R. v. Morelli; R. v. Marakah). Intercepting private communications without consent is also criminalized under s. 184 of the Criminal Code. Although these legal safeguards exist, the podcast questions whether they can keep pace with the rapid evolution of AI technologies. 


One major example of the use of AI by police is in predictive policing. Police services in Canada are already using algorithms to analyze historical crime data and forecast where crimes are likely to occur. Vancouver’s GeoDASH system, for instance, analyzes patterns of property crime and identifies geographic “hot spots” for increased patrol presence. During its pilot phase, property crime dropped significantly in targeted areas. Similarly, Saskatoon recently partnered with the University of Saskatchewan to establish Canada’s first Predictive Analytics Lab, which analyzes patterns in property crime, assaults, suspicious activity, and even missing persons cases. 


Supporters have argued that predictive policing enhances deterrence and resource allocation without targeting individuals directly. However, critics warn that these systems rely on historical data that may reflect existing biases. If certain neighborhoods have historically been over-policed, the data may reinforce those patterns, creating a feedback loop. For instance, more patrols may lead to more recorded incidents, which in turn justify continued surveillance. Rather than predicting crime itself, algorithms may simply predict where police have previously focused attention. This raises concerns about digital profiling and systemic discrimination. 
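The feedback loop described above can be made concrete with a toy simulation. This sketch is purely illustrative and not from the podcast: the crime rates, patrol counts, and starting records are invented numbers chosen only to show the mechanism by which records of past policing, rather than underlying crime, can drive future deployment.

```python
import random

# Illustrative simulation (hypothetical numbers): two neighbourhoods with the
# SAME underlying crime rate, but neighbourhood A begins with more recorded
# incidents because it was historically patrolled more heavily.
random.seed(1)

true_rate = 0.10                  # identical true offence rate in both areas
recorded = {"A": 30, "B": 10}     # historical records, skewed by past patrols

for year in range(10):
    total = sum(recorded.values())
    # The "algorithm" allocates 100 patrols in proportion to past records.
    patrols = {n: round(100 * recorded[n] / total) for n in recorded}
    for n in recorded:
        # More patrols mean more incidents observed, even at equal true rates.
        recorded[n] += sum(random.random() < true_rate for _ in range(patrols[n]))

print(recorded)  # A's recorded lead over B keeps widening
```

Because patrols follow records and records follow patrols, neighbourhood A's apparent "crime problem" grows each year even though both areas offend at exactly the same rate, which is the feedback loop critics describe.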


The podcast then turns to facial recognition technology, which presents even more significant privacy implications. In 2019, the RCMP used Clearview AI, a company that scraped billions of images from social media platforms without consent to build a facial recognition database. In 2021, the Privacy Commissioner ruled that Clearview’s practices violated Canadian privacy law, and the RCMP discontinued its use. However, other police forces have adopted more controlled systems that rely solely on lawfully obtained photos under the Identification of Criminals Act. York and Peel Regional Police, for instance, partnered with IDEMIA, while Toronto Police now use NEC’s NeoFace Reveal. Toronto Police has classified this as high-risk technology. 

Despite improved safeguards, facial recognition remains probabilistic rather than definitive: the software reports the likelihood of a match, not a certainty. Errors can result in wrongful arrests, reputational harm, and infringements on liberty. Cases in other jurisdictions demonstrate that misidentifications are not merely hypothetical risks. The broader philosophical question becomes whether society is comfortable normalizing large-scale facial scanning, even if it is used for serious crimes. Comparisons to widespread surveillance systems in countries like China highlight the tension between security and freedom. While such systems may enhance order and efficiency, they significantly reduce anonymity and expand state monitoring capacity. 
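Why a probabilistic matcher produces misidentifications at scale comes down to simple base-rate arithmetic. The figures below are hypothetical, not drawn from the podcast or any vendor specification, but they show how even a very low per-comparison error rate adds up across a large database.

```python
# Back-of-envelope base-rate arithmetic (illustrative numbers): even an
# accurate face matcher generates many false hits when one probe image is
# compared against a large database.
false_match_rate = 0.001    # hypothetical 0.1% chance of matching the wrong face
database_size = 1_000_000   # hypothetical number of faces scanned per probe
true_matches = 1            # at most one genuine match in the database

expected_false_matches = false_match_rate * (database_size - true_matches)
print(round(expected_false_matches))  # roughly 1,000 innocent people flagged
```

The point is not the specific numbers but the structure: as databases grow, a fixed error rate translates into a growing absolute number of innocent people flagged, which is why probabilistic matches demand human verification before arrest.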


The first half of the podcast concludes by arguing that while police intentions may not be malicious, the risk lies in unexamined delegation. When algorithms determine patrol zones, investigative priorities, or risk assessments, human judgment is partially replaced by automated prediction. Criminal law has traditionally depended on clear attribution of responsibility. AI complicates that structure by diffusing accountability into opaque systems that courts and defence counsel may struggle to interrogate, especially when proprietary algorithms function as “black boxes.” 


The second half of the podcast shifts focus to how criminals are using AI. Fraud and identity theft are longstanding offences under sections 380 and 402.2 of the Criminal Code, but AI has transformed the scale and sophistication of these crimes. The Canadian Anti-Fraud Centre reported over $600 million in fraud losses in 2024, a dramatic increase since 2020, with actual losses likely far higher due to underreporting. Identity theft is closely linked to fraud, and in 2024 the highest proportion of fraud victims were individuals whose identities had been stolen. Seniors and other vulnerable populations are disproportionately affected, facing consequences ranging from lost retirement savings to long-term credit damage. 

AI’s primary advantage for criminals is that it removes friction. Tasks that once required significant effort, such as crafting convincing emails, mimicking tone, and conducting persuasive phone calls, can now be automated and scaled instantly. Criminals can generate thousands of personalized messages in seconds, clone voices from short audio clips, and create deepfake videos that convincingly replicate real individuals. Fraud is no longer limited to poorly written spam emails; it is now targeted, emotionally persuasive, and difficult to detect. 


Deepfakes represent a particularly alarming development. AI-generated audio and video can convincingly simulate real people in real time. In one widely reported case in Hong Kong, a financial executive was deceived during a video call in which all participants were deepfakes, leading to a massive fraudulent transfer. Beyond financial loss, such incidents also erode trust in digital communication. When voices and faces can no longer be trusted, the reliability of identity verification systems is fundamentally undermined. Experts predict that traditional authentication methods will increasingly fail when confronted with sophisticated deepfake technologies. 


Canadian criminal law, though robust, was not drafted with AI automation in mind. This is because the law on fraud and identity theft assumes direct human deception. AI complicates issues of responsibility, intent, and attribution. Questions arise about who bears liability when AI tools are used to facilitate crime. Further, how can legislation keep pace with technologies that evolve faster than regulatory processes? The proposed AIDA legislation acknowledged these gaps by attempting to create AI-specific offences and regulatory oversight, but it never became law. 


Ultimately, the podcast concludes with the message that AI itself is not inherently harmful; rather, it magnifies human intent. In criminal hands, it can accelerate fraud and identity theft, making them more scalable and convincing. In state hands, it enhances surveillance and predictive capabilities while raising profound concerns about privacy, bias, and accountability. The asymmetry lies in permanence: individual criminals can be prosecuted for episodic wrongdoing, but state AI systems become embedded infrastructure. Once normalized, they are unlikely to disappear. 


The central tension, therefore, is not simply about innovation versus regulation. It is about how much freedom, privacy, and human judgment society is willing to trade for efficiency and security in an age where artificial intelligence amplifies both protection and harm. 


© 2023 Jochelson, Trask

