
Deep Fake Pornography and Section 162.1 - Breanna Sheppard

In Canada, “revenge porn” laws, that is, laws against the publication of an intimate image without consent, were introduced in the Protecting Canadians from Online Crime Act (“the Act”) in 2014. The Act amended the Criminal Code (“the Code”) to provide for a new offence of non-consensual distribution of intimate images. As well, the Act provided for complementary amendments that would:

  • authorize the removal of such images from the Internet;

  • facilitate the recovery of expenses incurred to remove such images;

  • permit the forfeiture of property used in the commission of the offence; and

  • permit a recognizance order to be issued to prevent the future distribution of such images and to restrict a convicted offender’s use of a computer or the Internet.

Under the Code, the offence is stated as follows:


Publication, etc., of an intimate image without consent


162.1 (1) Everyone who knowingly publishes, distributes, transmits, sells, makes available or advertises an intimate image of a person knowing that the person depicted in the image did not give their consent to that conduct, or being reckless as to whether or not that person gave their consent to that conduct, is guilty

(a) of an indictable offence and liable to imprisonment for a term of not more than five years; or

(b) of an offence punishable on summary conviction.


Under section 162.1, an intimate image means “a visual recording of a person made by any means including a photographic, film or video recording” in which the person is nude, is exposing their genital organs, anal region or breasts, or is engaged in explicit sexual activity. Further, at the time of the recording, there must have been circumstances that gave rise to a reasonable expectation of privacy, and the person depicted must have retained a reasonable expectation of privacy at the time the offence was committed.


The offence itself is worded reasonably enough, but with advances in technology and the accessibility of editing software, new issues have arisen in just the past few years. Notably, those who fall victim to photo manipulation and deepfake technology have no clear recourse under the provision.


Deepfake technology rests on the generative adversarial network, a machine learning framework introduced in 2014 by Ian Goodfellow, then a PhD student. Without getting too technical, a generative adversarial network pits two models against one another: one generates new, synthetic data modelled on real examples, while the other tries to tell the fakes apart from the real thing, and each improves by competing against the other (a minimal sketch of this setup appears after the list below). In layperson’s terms, it can use a photo to make a new photo of the same person that looks superficially authentic to human observers; it is an increasingly sophisticated way to face-swap photos as well as videos. The technology has since advanced to the point that, using a smartphone, you can graft your own face onto actors in famous movie scenes or even make your own satirical video and audio of politicians. However, this technology is just as easily used for nefarious purposes. The Canadian Parliamentary Review has already raised concerns about the increase in fake news, and in April 2019 the Library of Parliament in Ottawa published an article on deepfaking. Outside the criminal law, deepfakes can be addressed by:

  • making a copyright infringement claim;

  • pursuing an action in defamation;

  • using privacy legislation where personal information has been exposed; or

  • advancing an appropriation of personality claim,

but each of these avenues is limited in its own way.
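
For readers curious about the mechanics behind the technology, below is a minimal sketch of the adversarial setup described above, written in Python with the PyTorch library. Every detail here (the layer sizes, the stand-in batch of “real” images, the training loop) is an illustrative assumption chosen for brevity; real deepfake pipelines are far more elaborate.

```python
# Minimal generative adversarial network (GAN) sketch in PyTorch.
# All shapes and hyperparameters are illustrative assumptions.
import torch
import torch.nn as nn

# Generator: turns random noise into a fake "image" (a flattened vector here).
generator = nn.Sequential(
    nn.Linear(64, 128), nn.ReLU(),
    nn.Linear(128, 784), nn.Tanh(),
)

# Discriminator: scores how likely an input is a real image (1) rather than a fake (0).
discriminator = nn.Sequential(
    nn.Linear(784, 128), nn.LeakyReLU(0.2),
    nn.Linear(128, 1), nn.Sigmoid(),
)

criterion = nn.BCELoss()
g_opt = torch.optim.Adam(generator.parameters(), lr=2e-4)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=2e-4)

# Stand-in for a batch of real photos, scaled to the generator's Tanh range [-1, 1].
real_images = torch.rand(32, 784) * 2 - 1

for step in range(200):
    # 1. Train the discriminator to tell real images from generated ones.
    fakes = generator(torch.randn(32, 64)).detach()
    d_loss = (criterion(discriminator(real_images), torch.ones(32, 1)) +
              criterion(discriminator(fakes), torch.zeros(32, 1)))
    d_opt.zero_grad()
    d_loss.backward()
    d_opt.step()

    # 2. Train the generator to fool the discriminator into scoring its fakes as real.
    fakes = generator(torch.randn(32, 64))
    g_loss = criterion(discriminator(fakes), torch.ones(32, 1))
    g_opt.zero_grad()
    g_loss.backward()
    g_opt.step()
```

The point relevant to the legal discussion is that the two networks train each other: as the discriminator gets better at spotting fakes, the generator gets better at producing images that pass for real, which is precisely why the resulting imagery can be mistaken for authentic footage.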


But deepfakes have also become an increasingly common problem with regard to pornography. While the Code may provide some relief, it clearly does so only if the individual depicted is under 18. It remains unclear whether a successful prosecution could be made out under the provisions addressing extortion, fraud, and criminal harassment, or under the newer non-consensual distribution of intimate images provision. Deepfake pornography has become more prevalent both as a means of harassing and humiliating women and as a way of profiting off celebrities’ images by uploading or distributing them for financial gain.


Scarlett Johansson has spoken about this issue, having had her image superimposed onto graphic sex scenes, with one video having been uploaded as real, leaked footage. That video was viewed more than 1.5 million times. While private companies have been attempting to address deepfake pornography, the Code has not yet been updated. There is some uncertainty as to whether an image created by deepfake technology would be captured by the provision, considering that while the face would be that of the complainant, everything else would be fabricated. Google has updated its policy to include “involuntary synthetic pornographic imagery”, which allows individuals to request that Google block any results that depict them “nude or in a sexually explicit situation”. A similar provision or definition could be added to address the current gap within section 162.1, closing the loophole that allows one to make ‘fake’ revenge porn.


As R v Haines-Matthews explains, section 162.1 is directed at preventing the non-consensual distribution of intimate images, which harms the person depicted. The harm Parliament sought to prevent flows from the distribution itself and exists regardless of the offender's motivation. Intimate images distributed electronically circulate with little to no control over who may access them, where they may end up, or how long they will remain accessible on some internet site, and that uncontrollable distribution is one aspect of the harm that is unaffected and unabated by the offender's motivation.


The Supreme Court of Canada (“the Court”) has already acknowledged that with “a mere swipe on a smart phone's screen, one can immediately become a director, producer, cameraman and sometimes an actor in an explicit short film” and that even the least tech-savvy user can effortlessly do so and share it. Not only that, but lives can be ruined with seeming ease. As R v AB states, with the non-consensual distribution of intimate images, “reputations are ruined, self-esteem is shattered, feelings are hurt, privacy is irreparably violated” and “once a photo or video is posted online, it is wishful thinking to hope that it can be removed”.


The reasons for expanding the provision to address involuntary synthetic pornographic imagery, or pornographic imagery created by deepfakes, are the same reasons the provision was enacted. These images, regardless of whether they are “authentic” or generated with technology, can be used for humiliating cyberbullying attacks, and because they spread quickly and often uncontrollably, they result in a significant violation of the depicted person's privacy. The distribution of deepfake images is likely to be just as embarrassing, humiliating, harassing, and degrading to that person, regardless of whether the original image is truly of them. The concern with intimate images is not just that they are intimate, but that they are identifiable and traceable back to the individual. The impact of the invasion is equivalent whether the image is real or the product of a deepfake.


At the time of this blog post, no cases involving the use of deepfake technology to manufacture pornography of a complainant have been prosecuted under section 162.1. However, policy under the Code, particularly for offences that rely on technology, should proactively address advances in technology so that when these cases arise in court the provisions are clear. Keeping the legislation up to date with technological change provides clarity not only for the judiciary but also for everyone else in the legal system, from enforcement to prosecution, and for the citizens to whom the laws apply.


When courts have had to deal with new technology, the resulting cases have often consumed significant court resources and time for both the accused and the complainant, as most do not stop at the trial level but proceed through all available appeals. Failing to take a proactive approach to updating legislation is a missed opportunity not only to strengthen the regulation of virtual spaces, which is largely left up to private companies, but also to better safeguard the individuals in those spaces.

