Facial recognition. It is a topic that has been widely reported on and discussed in recent months, with people taking a range of positions. Some see it as the greatest threat to democracy and privacy rights, while others view it as the next step toward a simpler world. In the Canadian context, the major concerns with facial recognition have been two-fold, centring on fears of a growing surveillance state on the one hand and the possibility of foreign malfeasance on the other.
Facial Recognition and Policing: Beginning of the surveillance state?
In Canada, it was originally believed that the introduction of facial recognition would be a jurisdictional issue, with federal agencies such as the RCMP and the Canada Border Services Agency unable to fully leverage these technologies because of the federal Privacy Act, which regulates how police use and store personal data, including biometrics. However, recent reporting indicates that the RCMP used Clearview AI’s facial recognition technology despite the existence of the Privacy Act, raising the possibility that more stringent regulation is necessary. The situation at the provincial and municipal level is even more concerning, because the only existing legislative framework does not apply to those police forces.
This lack of a framework has been the subject of concern for security and privacy experts across Canada and has led Ontario’s Privacy Commissioner to publicly express his concern about the topic. Reporting has found that a number of large police forces in Canada employ this technology, including the Ontario Provincial Police (OPP), the Toronto Police Service, the Calgary Police Service and the Edmonton Police Service. The OPP has also refused to disclose the circumstances in which it deploys the technology, whereas municipal agencies such as Calgary’s have said they use it to compare mug shots. In mid-February, the Toronto Police finally admitted that they had used Clearview AI’s controversial facial recognition software but had since stopped all use of it. Likewise, on March 1st the OPP confirmed its use of Clearview AI’s technology.
Clearview AI first appeared in public discourse earlier this year when a New York Times exclusive investigated the company. The investigation found that the American company had scraped billions of faces from social media sites and was using them as a searchable database for client organizations, mainly police forces. More concerning, however, was that the company would link agencies back to where it found each picture. This meant that police forces could not only search for your face but could be linked, in the same search, to your Facebook, Twitter or YouTube account. Moreover, it has been discovered that the technology was designed to eventually be used with ‘smart glasses’, which would allow police agencies to deploy facial recognition in a far larger capacity than they do currently. This raises questions about possible misuse or abuse.
There are also significant security risks associated with Clearview’s technology and facial recognition more widely. The New York Times found that Clearview AI was able to view the searches that different police agencies ran. This means that Clearview AI can not only sell your information but can also monitor police behaviour and investigations, creating a serious security and surveillance issue.
Another security concern is the risk of breaches, in which hostile organizations or states might steal this sensitive data. That concern was recently realized when Clearview AI reported a breach in late February 2020. Among the information stolen was the company’s client list. While some might not be overly concerned by this breach, given what was taken, it should be noted that the majority of those clients are police or intelligence agencies, which makes the theft of the list a genuine security risk. Moreover, while the company insists that it has patched any weak spots, it also admits that breaches are a new reality facing everyone, raising concerns about keeping so much sensitive data in one place.
Skynet in Canada?
While concerning, Clearview AI is not the only threat when it comes to facial recognition technologies; many other applications and companies focus on facial recognition. In Canada, these technologies have been employed in a number of ways. As mentioned above, the Toronto Police have used facial recognition, including in the recent Bruce McArthur investigation. They claim that several safety protocols have been implemented to prevent misuse: use of the technology is restricted to a small group of investigators who have been specifically trained on the software, searches are limited to comparing mug shots, and the technology may not be used in any real-time situations.
However, despite efforts by the Toronto Police to assure people of the restrictions placed on the technology, a number of serious concerns remain. First, the fact that police can use body-camera pictures raises the possibility of serious misuse. While they try to assure the public that such images can only be used if they show a crime being committed, the loose definition of crime leaves the door open to misuse and abuse. Moreover, the degree of oversight of the technology’s use has also come under scrutiny. Earlier this year the Toronto Police assured the public that they did not use Clearview AI’s technology; however, they were later forced to admit that they had, and that Toronto Police Chief Mark Saunders had been unaware of its usage.
It is not just the police
While Clearview’s technology is almost exclusively a policing tool, facial recognition is moving beyond that. A recent National Post article on the use of facial recognition in everyday consumer activity notes that a Canadian grocery chain is considering introducing a facial recognition payment system in stores in both B.C. and Ontario, something that has already been introduced in other countries. Major payment platforms such as WeChat Pay and Alipay have already rolled out facial payment systems, but the technology has so far remained largely limited in North America.
While the grocery chain’s plans raise concerns about the future prevalence of facial recognition and surveillance technology, they also raise questions because the technology the company is considering is of Chinese origin. This prompts the question of who might be able to access the data being collected and whether the technology could be used for espionage by a foreign nation.
What is happening now?
Since the exposé on Clearview AI emerged, privacy watchdogs have come together to call for greater accountability and transparency. Privacy Commissioners at the federal and provincial levels have decided to investigate the use of Clearview AI by Canadian entities; however, no changes to regulation or new restrictions have yet been applied. Some corporations have stepped up, such as Apple, which has suspended Clearview AI’s developer credentials, effectively banning the app for now. Yet in the absence of true regulation, even large tech companies, like Amazon, are warning of the dangers of these technologies. We know what is happening. Now we need to act, before it is too late.
Photo: Facial Recognition (2018), by Mike MacKenzie via Flickr
<p style="line-height: 1.8; font-size: 17px; font-family: Philosopher; text-align: left;"><em>Disclaimer: Any views or opinions expressed in articles are solely those of the authors and do not necessarily represent the views of the NATO Association of Canada.</em></p>