
Artificial Intelligence in Policing - Mackenzie Cardinal

Recently, policing has gradually adopted artificial intelligence (“AI”) and data analysis as tools to fight crime. In essence, this practice involves using AI to sift through large databases of accumulated information in order to recognize patterns. The end goal is that the patterns recognized will allow police to “predict” where crime is likely to occur. While police have analyzed data for quite some time, new developments enable AI to analyze massive amounts of data more efficiently than ever before. This blog provides a brief description of how AI technology is used in policing today and the potential implications that this practice may have for Canadians’ section 8 and 9 rights as outlined in the Canadian Charter of Rights and Freedoms (“the Charter”). It is worth noting that this post relies greatly on the impressive study from the Law Commission of Ontario written by Robertson, Khoo, and Song. As Silver and Christian describe it, that work provides “…one of the most comprehensive discussions of the use of AI and algorithmic technologies in the criminal justice system to date”.


How law enforcement utilizes artificial intelligence


Generally, two types of predictive or algorithmic policing practices are recognized. Robertson, Khoo, and Song describe these two methods as “location-focused algorithmic policing” and “person-focused algorithmic policing”. In addition to these two relatively well-established practices, Robertson, Khoo, and Song themselves outline a third model of data-driven policing, which they coin “algorithmic surveillance policing technologies”.

In brief, location-focused algorithmic policing involves police using their data on a specific area or location to predict where crime is likely to occur. In the United Kingdom, predictive policing based on geographic data is the most common method of AI-focused policing. In the United States, a number of different location-focused systems are in use, each relying on its own unique combination of data points. These data points typically include factors such as crime type, location, date, time, type of buildings, census data, and even certain types of housing. In Canada, the Vancouver Police Department has adopted a location-focused algorithmic system. This system relies on four inputs: type of crime, geographic coordinates, date, and time. These inputs are used to predict precisely when and where break and enters are likely to occur.
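To make this concrete, below is a minimal, purely illustrative sketch in Python of what a location-focused approach built on those four inputs might look like: it simply counts past incidents per map grid cell and hour of day. All records, names, and cell sizes are invented for illustration; this is a toy sketch, not the Vancouver system or any vendor's actual model.

```python
# Illustrative sketch only: a toy grid-based "hotspot" scorer built on the
# four input types described above (crime type, coordinates, date, time).
from collections import Counter
from datetime import datetime

# Hypothetical historical records: (crime_type, latitude, longitude, timestamp)
records = [
    ("break_and_enter", 49.2820, -123.1171, datetime(2020, 3, 1, 2, 15)),
    ("break_and_enter", 49.2825, -123.1169, datetime(2020, 3, 8, 3, 40)),
    ("theft",           49.2600, -123.1140, datetime(2020, 3, 2, 14, 5)),
]

def grid_cell(lat: float, lon: float, size: float = 0.005) -> tuple:
    """Bucket coordinates into a coarse map grid cell."""
    return (round(lat / size), round(lon / size))

def hotspot_scores(records, crime_type: str) -> Counter:
    """Count past incidents of one crime type per (cell, hour-of-day) bucket."""
    counts = Counter()
    for ctype, lat, lon, ts in records:
        if ctype == crime_type:
            counts[(grid_cell(lat, lon), ts.hour)] += 1
    return counts

# Cells and hours with the highest historical counts are flagged as "hotspots".
print(hotspot_scores(records, "break_and_enter").most_common(3))
```

Even this toy version makes the core design choice visible: the predictions are only as good as the historical records fed into the system, a point that becomes important in the discussion of bias below.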

In addition to this, there is algorithmic policing that focuses on people rather than locations. The goal of this type of policing is essentially to “predict” whether a person is likely to commit a crime or to become a victim of one. The data used to formulate these predictions includes information about a person’s family, friends, or associates (including gang affiliation), the person’s social media activity, criminal records, and appearances in other criminal databases. Police departments can use these predictions to increase monitoring of the individuals identified, or even to contact these persons in order to deter them from activity that police believe they may commit in the future. For instance, the Chicago Police Department’s person-centric predictive policing technology relies on a number of these factors to rank individuals on the likelihood that they will be involved in a shooting or murder, as either a victim or a subject. As well, this data can be used to create a sort of social network web, allowing police to map criminal connections in order to identify criminal leaders or organizers. The Calgary Police Service (“CPS”), for example, stores information on all individuals who interact with police, including victims and witnesses. This information includes relationships, physical characteristics, and “possible involved activities”. If someone is suspected of committing a crime, data about other characteristics, such as religious beliefs, is also included. This data is used to “inform officers’ and analysts’ understanding of individuals’ relationships and behaviours”.
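Again purely for illustration, a person-focused system of the kind described above might combine weighted risk factors into a score and use a web of recorded associations to flag “central” individuals. Every factor name, weight, and record below is invented; this is a hedged sketch of the general idea, not the Chicago or Calgary systems.

```python
# Illustrative sketch only: scoring individuals on invented risk factors and
# counting recorded associations as a crude proxy for "organizer" status.

# Hypothetical per-person factors (associates, prior records, database hits)
people = {
    "A": {"prior_records": 2, "flagged_associates": 3, "database_hits": 1},
    "B": {"prior_records": 0, "flagged_associates": 1, "database_hits": 0},
    "C": {"prior_records": 1, "flagged_associates": 4, "database_hits": 2},
}
weights = {"prior_records": 2.0, "flagged_associates": 1.5, "database_hits": 1.0}

def risk_score(factors: dict) -> float:
    """Weighted sum of risk factors, a stand-in for a real ranking model."""
    return sum(weights[k] * v for k, v in factors.items())

# A "social network web": edges are recorded associations between people.
edges = [("A", "B"), ("A", "C"), ("B", "C")]

def degree_centrality(edges, person: str) -> int:
    """Number of recorded associations involving this person."""
    return sum(person in edge for edge in edges)

for person, factors in people.items():
    print(person, risk_score(factors), degree_centrality(edges, person))
```

The privacy-relevant point survives even in this toy form: a person’s score is computed partly from data about their associates, not only from data about the person themselves.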

Lastly, there are “algorithmic surveillance policing technologies”. Robertson, Khoo, and Song differentiate this type of policing from the two methods above on the basis that it involves no “predictive element”. Rather, this characterization refers to how police use AI software to collect and process large amounts of data; any predictive element occurs later. The impact of collecting and processing data in this way is discussed in more depth in the analysis of the section 8 Charter implications below.
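As a rough sketch of this “collect and process” stage (with invented data and terms, not any police force’s actual software), such a pipeline might filter a stream of public posts for watched terms before any downstream predictive analysis is run:

```python
# Illustrative sketch only: filtering collected public posts by watched terms.
posts = [
    {"user": "u1", "text": "meet at the usual spot tonight"},
    {"user": "u2", "text": "lovely weather in Winnipeg today"},
]
watch_terms = {"meet", "spot"}  # invented terms of interest

def flag_posts(posts, terms):
    """Keep posts containing any watched term; later analysis runs on these."""
    return [p for p in posts if terms & set(p["text"].lower().split())]

print(flag_posts(posts, watch_terms))
```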


Acting on data: potential Charter violations under section 9


As Robertson, Khoo, and Song point out, using AI to predict where crime is likely to occur may run afoul of section 9 of the Charter and the right not to be arbitrarily detained. The question is whether predictions made by AI technology can constitute reasonable grounds for detention. Robertson, Khoo, and Song emphasize that, because predictive policing is predicated on “generalized inferences”, detentions made by law enforcement in reliance on predictive policing technology may violate section 9 of the Charter. The reason is that these “generalized inferences” amount to nothing more than a generalized suspicion, and, as the Supreme Court of Canada (“Court”) has established throughout its section 9 jurisprudence, a generalized suspicion is not sufficient grounds for someone to be detained.

Moreover, predictive policing carries inherent biases: the inputs used to fuel predictive policing software may themselves be the fruits of bias, producing predictions that essentially reaffirm those biases. As aptly put by Lum and Isaac, “[b]ecause this data is collected as a by‐product of police activity, predictions made on the basis of patterns learned from this data do not pertain to future instances of crime on the whole. They pertain to future instances of crime that becomes known to police (emphasis in original)”.
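Lum and Isaac’s point can be illustrated with a toy simulation. All assumptions below are invented: two neighbourhoods with identical true crime rates, with patrols allocated in proportion to past recorded crime. Because crime is only recorded where police happen to be, the historically over-policed area keeps generating more records, which the “prediction” then reaffirms.

```python
# Toy simulation of the feedback loop Lum and Isaac describe. All numbers are
# invented. Both neighbourhoods have the SAME true crime rate, but "north"
# starts with more recorded crime due to past over-policing.
import random

random.seed(0)
true_rate = {"north": 0.3, "south": 0.3}  # identical underlying crime rates
recorded = {"north": 10, "south": 5}      # historical bias in the records

for day in range(100):
    total = sum(recorded.values())
    for area in recorded:
        # Patrols are allocated in proportion to past *recorded* crime...
        patrols = round(10 * recorded[area] / total)
        # ...and crime is only recorded where a patrol is present to see it.
        recorded[area] += sum(random.random() < true_rate[area]
                              for _ in range(patrols))

# The roughly 2:1 gap in recorded crime persists and compounds, even though
# the true rates are identical: the data "confirms" the original bias.
print(recorded)
```

The simulation is crude, but it captures why the predictions pertain only to crime that becomes known to police, rather than to crime on the whole.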

One of the ways in which bias can occur is on racial grounds. For example, in Oakland, California, algorithmic policing software was found to reinforce racial biases by targeting Black citizens at approximately twice the rate of their white counterparts. The possibility of biases being built into predictive policing software carries potential section 9 Charter issues in and of itself. Various courts have held that racial profiling cannot be used as a basis for detention. Robertson, Khoo, and Song cite the 2014 Ontario Superior Court case of R v Neyazi to further add that explicit bias on the part of a law enforcement official need not be proven in order to find that a detention was based on race; rather, it can be established from the other circumstances surrounding the case.


Collecting data: potential violations under section 8 of the Charter

Another significant legal issue arising from reliance on AI in policing is privacy. In Canada, privacy is protected from unjustified state intrusion under section 8 of the Charter. As we have seen so far, predictive policing often relies on enormous amounts of data in order to spot patterns and consequently predict crime. As one could reasonably expect, collecting large amounts of data naturally carries the potential for conflict with section 8, especially in regard to how law enforcement both collects and processes data. As mentioned briefly above, Robertson, Khoo, and Song describe the police practice of using technology to collect and process data as “algorithmic surveillance policing technology”.

Robertson, Khoo, and Song also make clear that the Court has concluded that Canadians “may reasonably expect that the information [which they have shared] will not be divulged further to (or collected by) law enforcement”. Nevertheless, police in Canada, such as the Royal Canadian Mounted Police (“the RCMP”), have used software to extract large amounts of data from public sources (such as social media) in order to draw inferences about certain individuals and their broader networks. In the case of the RCMP, the data collected was then used as part of a predictive policing initiative which sought to “…[analyze] social media content in the course of active investigations as well as in a systematic and proactive manner to attempt to predict crime that might happen in the future”. The RCMP itself appears to be aware that this practice rests on uncertain legal ground. For instance, Robertson, Khoo, and Song note that the Canadian Broadcasting Corporation uncovered in November 2019 that the RCMP had begun an internal audit of its social media monitoring and collection practices to determine their legality.

Moreover, police can also collect data to fuel their predictive policing technology through information-sharing partnerships with private companies. For instance, police software can incorporate third-party information such as financial records and telecommunications data in order to substantiate its data. This raises the legal question of whether consumers have consented to having their information shared by private businesses with police. In the United States, for example, the decision in United States v Miller established that there is no reasonable expectation of privacy under the Fourth Amendment (the comparator to section 8 of the Charter) in personal information held by a third party. Things are different in Canada, however: as Robertson, Khoo, and Song point out, the Court in Spencer held that the Personal Information Protection and Electronic Documents Act prohibits information from being arbitrarily disclosed to law enforcement without consent. The authors add that the only exception is where law enforcement has lawful authority to access that information.


The future of AI in policing


There is still much to be seen (and potentially litigated) regarding how far police can take predictive policing technology in Canada. As this blog has attempted to show, this technology can pose a novel threat to Canadians’ Charter rights, especially in the way police can build a picture of a person from the public information they collect. It will be interesting to see how issues over data-driven policing play out in the future.
