
The Dangers of AI – Implications of Deepfakes in Admission of Evidence - Scott Groot


Introduction

Video evidence can be incredibly useful when deciding a case. A video can clearly depict a series of events and vividly show who was involved in the incident in question. Despite these benefits, some individuals may use video footage for malicious purposes, particularly with the aid of artificial intelligence and deepfakes. Deepfakes pose a danger to the law of evidence because of the requirements for admitting video footage into evidence.


Admission of Video Footage Into Evidence

In R v Bulldog, the court examined what is required to admit video footage into evidence. The case arose from a fight that broke out in a maximum security prison in Edmonton between four inmates, three of whom attacked the victim in the mini yard.[1] The issue at trial was identifying the attackers. Security cameras positioned above the mini yard captured the fight.[2]


At trial, the videotape was admitted into evidence after satisfying the requirements established in R v Nikolovski. The two criteria the trial judge examined were whether the video accurately depicted the scene of the crime and whether the recording had been altered or changed in any way.[3] Both criteria were met: the video matched the testimony of the witnessing officers, and the judge was satisfied that the recording had not been altered.[4]


On appeal, the defendant argued that the trial judge had erred in admitting the video into evidence. The Court of Appeal, however, confirmed the trial judge’s finding and upheld the admission of the recording. In reaching its decision, the court noted that the party seeking to admit a video into evidence is not required to prove that the recording was not altered. The party need only prove that the recording is substantially accurate and a fair representation of what it is purported to show. Alterations will therefore not render a video inadmissible unless they are shown to undermine the accuracy of the recording.[5] Further, the standard applied to admit the evidence is the civil standard: the video must be relevant, and its probative value must outweigh its prejudicial effect.[6]


Age of Artificial Intelligence

As computer software becomes increasingly capable of creating nearly flawless images and videos of fake events, the potential for these programs to be used maliciously grows more serious, particularly through deepfakes. Deepfake videos are created with video-editing software that relies on artificial intelligence to swap one person’s face onto another’s. The program continuously learns and improves the video by mimicking the individual’s facial expressions, gestures, voice, and vocal variations, making the result look incredibly realistic.[7] Additionally, when the creator has sufficient audio and video of the person being targeted, the program can both create the fake video and make the person appear to say things that were never actually said.[8]


Deepfakes have already been used in malicious ways. The technology came to public attention when fake pornographic videos were created of celebrities such as Emma Watson and Natalie Portman, with their faces edited onto the bodies of others.[9] The victims of deepfake videos can experience significant emotional harm, as these videos can be very convincing to viewers who do not know they were made using artificial intelligence. Victims can suffer real consequences such as reputational harm, loss of employment, and harassment.[10] This illustrates the danger that artificial intelligence poses to video evidence, a danger compounded by the fact that these programs are becoming easier for the everyday user to learn and exploit.[11]


Issues With AI and Video Evidence

As discussed above, the court in Bulldog moved away from the two-criteria framework established in Nikolovski: the party seeking to admit a video into evidence need only show that the recording is substantially accurate and a fair representation of what it is purported to show. Removing the requirement to demonstrate that the video was not altered or changed in any way lowers the threshold for admitting a video into evidence. If a party is willing to tender a recording that has been altered by artificial intelligence or deepfake technology, it would not be difficult for that party to also put forward a story corroborating the fake video in order to have it admitted into evidence.

By comparison, under the Nikolovski framework a party wishing to admit a fake video must prove that it was not altered or changed, which makes admission more difficult. A determined party may still overcome this hurdle, but the framework nonetheless imposes more barriers than the criteria set out in Bulldog, because the party must prove the video was not altered. With deepfakes becoming easier to create, legislators and courts should consider updating the rules of evidence or reverting to the two-criteria framework set out in Nikolovski to protect the public from the potential malicious uses of artificial intelligence. The increasing ease of creating deepfake videos does not automatically mean they will be used more often for malicious purposes in a legal setting, but it creates the potential for this to happen. Technology has made the process substantially easier for those inclined to attempt it, and it will become increasingly difficult for courts to determine what is real and what is fake.

[1] R v Bulldog, 2015 ABCA 251 at paras 3-5.
[2] Ibid at para 6.
[3] Ibid at para 12.
[4] Ibid at paras 13-15.
[5] Ibid at para 24.
[6] Ibid at para 39.
[7] Marie-Helen Maras & Alex Alexandrou, “Determining Authenticity of Video Evidence in the Age of Artificial Intelligence and in the Wake of Deepfake Videos” (2019) 23:3 Intl J Evidence & Proof 255 at 255.
[8] Ibid.
[9] Emily Laidlaw & Hilary Young, “Creating a Revenge Porn Tort for Canada” (2020) 96 SCLR 147 at para 46.
[10] Ibid at para 47.
[11] Supra note 7 at 261.

 

The views and opinions expressed in the blogs are the views of their authors, and do not represent the views of the Faculty of Law, or the University of Manitoba. Academic Members of the University of Manitoba are entitled to academic freedom in the context of a respectful working and learning environment.
