The Case for a Ban on Facial Recognition Surveillance in Canada

Opinion

Tim McSorley
International Civil Liberties Monitoring Group, Canada
nationalcoordination@iclmg.ca

A central argument for the move to “smart cities”—and smart technology in general—is the supposed
improvement of security and safety. This ranges from increasing the safety of our home appliances, for
example by allowing us to monitor them remotely, to increasing the safety of pedestrians by better managing
traffic. But it also includes the “law enforcement” side of safety and security: tools that allow homeowners
to monitor their property (and, not so incidentally, that of their neighbours) or that allow police officers to
more easily “deter” or “predict” crime and to apprehend suspects.

While efforts by law enforcement agencies to engage in “smart” policing, analyzing statistics to predict
crime “hot spots” and thereby prevent or minimize crime, have long been documented (and debunked), the
securitization of the city has grown in leaps and bounds over the past two decades. This has been driven in
part by developments in technology but also by the security
crackdown in many western countries, including Canada, since the events of 9/11 and the start of the “War
on Terror.” Expanded powers, expanded budgets, and expanded integration of national, regional, and local
police and intelligence agencies have meant increased surveillance and criminalization.

While these agencies now hold multiple technological tools in their tool belt, facial recognition has emerged
as a particularly troubling and controversial tool for surveillance by law enforcement.

In Canada, it was revealed in January 2020 that the Royal Canadian Mounted Police (RCMP) had been
using Clearview AI’s controversial facial recognition app since October 2019—even though the force had
denied ever having used it. It was further revealed that the RCMP has, in fact, used facial recognition
technology in one form or another since 2002.

Clearview AI is under investigation in multiple jurisdictions for its practice of scraping billions of images
from social media sites without users’ consent to populate its facial recognition database. The Office of the
Privacy Commissioner of Canada is currently investigating the company’s activities in Canada, including
its use by the RCMP and other law enforcement agencies.

In response to the investigation, the company has pulled out of Canada completely.

The problem isn’t limited to Clearview AI or to the RCMP, though. Police forces in Calgary, Edmonton,
Vancouver, Halifax, Windsor, Ottawa, Toronto, and multiple other police departments across Ontario have
all admitted to using facial recognition technology, with many initially staying silent or even denying their
use of it. Moreover, it’s also likely that facial recognition is being used even more broadly: police and
security agencies are not required to disclose their use of new technology and will often refuse to answer
any disclosure request under the guise of protecting sensitive operational information.

McSorley, Tim. 2021. The Case for a Ban on Facial Recognition in Canada. Surveillance & Society 19(2): 250-254.
https://ojs.library.queensu.ca/index.php/surveillance-and-society/index | ISSN: 1477-7487
© The author(s), 2021 | Licensed to the Surveillance Studies Network under a Creative Commons Attribution Non-Commercial No Derivatives license

We have followed these issues closely at the International Civil Liberties Monitoring Group (ICLMG) and
believe law enforcement should be restricted in its use of facial recognition technology, especially when it
comes to surveillance, and that the widespread use of this technology necessitates public debate and new
legislation. This summer, along with thirty organizations and forty-six civil liberties advocates, we issued a
call for a ban on the use of facial recognition surveillance technology by federal law enforcement and
intelligence agencies in Canada, including the RCMP, CBSA, and CSIS. While our call was limited to the
federal government because of our coalition’s mandate, our concerns apply to local and regional police as
well.

We believe that there are four compelling reasons why we need to have a formal public consultation into
the overall use of facial recognition technology, and why facial recognition surveillance should never be
allowed for law enforcement or intelligence agencies.

1. Facial Recognition Violates Our Rights.
Facial recognition is aptly described by the Canadian Civil Liberties Association (CCLA) (2019) as allowing
for the “mass, indiscriminate, disproportionate, unnecessary, and warrantless search of innocent people
without reasonable and probable cause.” The technology renders all of us walking ID cards; it’s basically
carding by a faulty algorithm. As the CCLA further writes, “It’s like police fingerprinting and DNA
swabbing everyone in downtown Toronto during rush hour, then processing it in a broken database
processor.”

The other problem is where these photos come from. Even if you’ve never heard of Clearview AI, you likely
have an online presence—maybe a friend or a relative has posted a photo of you to Facebook—which means
you probably are in the company’s database.

Clearview’s CEO and co-founder, Cam-Hoan Ton-That, and his associates chose to violate social media
platforms’ anti-scraping policies on a massive scale to build an image warehouse seven times larger than the FBI’s
photo database. As an aside, exclusive documents obtained by The Huffington Post recently revealed that
Ton-That, as well as several people who have done work for the company, have deep, longstanding ties to
the far-right (O’Brien 2020).

Clearview AI’s tool is also indiscriminate: it can be used to identify activists at a protest, strangers in a bar,
or visitors to any office or residential building. Users could discover not just names but where individuals
live, what they do, and whom they know. Law enforcement isn’t the only customer: the company has also
licensed the app to companies for security purposes.

Eric Goldman, the co-director of the High Tech Law Institute at Santa Clara University, has said: “The
weaponization possibilities of this are endless. Imagine a rogue law enforcement officer who wants to stalk
potential romantic partners, or a foreign government using this to dig up secrets about people to blackmail
them or throw them in jail” (qtd. in Hill 2020).

2. Facial Recognition Technology is Inaccurate and Biased.
One of the most significant problems with facial recognition technology is that it remains both inaccurate
and biased, producing especially poor results for people of colour and women. As reported by Wired
magazine, many top systems have been found to misidentify the faces of women and people with darker
skin five to ten times more often than those of white men (Simonite 2019).

Another study from the National Institute of Standards and Technology found that facial recognition
technology falsely identified African-American and Asian faces ten to one hundred times more often than
white faces and that, among databases used by law enforcement, the highest error rates came in identifying
Native Americans (Singer and Metz 2019).

Detroit has regulated the use of facial recognition. But, according to the Detroit Police Department’s own
statistics, in the first six months of 2020, it had been used almost exclusively against black people, and it
misidentified people 96 percent of the time (Koebler 2020).

Last year, Robert Julian-Borchak Williams, a black man, was arrested in Detroit after being falsely identified
by facial recognition technology. He was released, but the damage was done. The ACLU of Michigan filed
a complaint against the Detroit Police Department asking that police stop using the software in
investigations (Allyn 2020).

Some facial recognition systems used by law enforcement draw on mugshot databases. This is a problem
because many of these databases include images of people who were arrested but later had their charges
dropped or were acquitted. Because people of colour are disproportionately targeted by police, their images
are more likely to be included in these databases indefinitely, which in turn increases the likelihood of their
misidentification.

These errors can lead already marginalized communities to face even more profiling, harassment, and
violations of their fundamental rights. This is especially concerning when we consider the technology’s use
in situations where biases are common: protests against government policies and actions, travel and border
crossings, criminal investigations, national security operations, and the pursuit of the so-called “War on
Terror.”

3. Facial Recognition's Use by Law Enforcement and Intelligence Agencies Remains
Unregulated.
As mentioned earlier, the Canadian public recently learned that the RCMP had been covertly using facial
recognition technology for almost two decades, and that the force lied about its use of Clearview AI’s
technology, only admitting to it after a leak of the company’s client list.

The Canada Border Services Agency (CBSA) has officially been silent about its use of facial recognition,
although in private meetings Public Safety Canada officials have told us that CBSA does not currently use
facial recognition for surveillance purposes. We do know, however, that Canada’s border police ran a pilot
project in 2016 to use real-time facial recognition surveillance at Canada’s airports. Canadian intelligence
agencies like CSIS have refused to acknowledge at all whether or not they use facial recognition technology.

Across the country, police forces have admitted, under pressure from the media and critics, that they failed
to disclose their use of facial recognition. In other cases, officers have stated that they began using the new
technology without the knowledge or approval of their superiors. The Privacy Commissioner of Canada was
not consulted by the RCMP before the force began using Clearview AI’s technology, and the RCMP’s
Privacy Impact Assessments on the use of facial recognition are nowhere to be found.

US cities and one state are having democratic debates around facial recognition bans. This hasn’t happened
in Canada. For that reason, according to the CCLA, “Canadian governments at all levels who have used the
technology may be liable for damages, and criminal prosecutions touched in any way by facial recognition
are in jeopardy. Further use without regulation is knowingly reckless” (Canadian Civil Liberties Association
2019).

4. Facial Recognition is a Slippery Slope.
Limited use of facial recognition by law enforcement in other countries has typically led to much broader
rollouts of the technology. In the United States, former president Donald Trump issued an executive
order requiring facial recognition identification for 100% of international travellers in the top twenty US
airports by 2021 (Alba 2019).

In the UK, facial recognition is already being used at sports matches, street festivals, protests, and even on
the streets to constantly monitor passersby.

Some of the world’s most extreme examples come from the Xinjiang region of China, where the government
spies on millions of Uyghurs with facial recognition technology, controlling access to all areas of public life
including parks, public transportation, malls, and city boundaries.

Half of adults in the United States are now in facial recognition databases. Canada won’t be far behind if
we don’t slam the brakes on the spread of this technology.

In the absence of meaningful policy or regulation governing its use, facial recognition surveillance
cannot be considered safe for use in Canada.

Portland, OR, San Francisco and Oakland, CA, and Boston and Somerville, MA, have all banned the use of
facial recognition by law enforcement. There are several bills currently before the US Congress calling for
a moratorium, greater regulations, and even a nationwide ban.

Vermont is suing Clearview AI for unlawfully acquiring data from consumers and businesses in violation
of multiple state laws, and the American Civil Liberties Union and more than one hundred leading
organizations on privacy and civil liberties—including the ICLMG—have signed a call for an international
moratorium.

In August 2020, the Court of Appeal of England and Wales set a global precedent by finding the use of
facial recognition by South Wales Police to be unlawful (Sabbagh 2020). The force had been using
surveillance cameras and facial recognition technology to spy on the public at venues ranging from malls to
soccer matches to concert halls, in an attempt to match faces to photos of suspected criminals. The court
found that the police operation breached privacy, data protection, and equality regulations.

In Canada, more than 20,000 people have signed a call from OpenMedia for a moratorium on facial
recognition technology in general and a ban on the use of facial recognition technology by federal police
and intelligence agencies in Canada.

The House of Commons Standing Committee on Access to Information, Privacy and Ethics has voted to
study the impact of facial recognition technology, and the Privacy Commissioner of Canada and the privacy
and information commissioners of Quebec, Alberta, and British Columbia are investigating the use of facial
recognition technology.

Even though Clearview AI is gone from Canada—for now—there are many other facial recognition tools
that can be used by Canadian police. Some police forces have publicly stated that they are moving forward
with new contracts for facial recognition technology in the coming months.

Given all of this, we believe the time for a ban on the use of facial recognition surveillance by law
enforcement is now. In particular, we believe the federal government must take three actions:

    1. Ban the use of facial recognition surveillance by federal law enforcement and intelligence
       agencies;
    2. Initiate a meaningful, public consultation on all aspects of facial recognition technology in
       Canada; and
    3. Establish clear and transparent policies and laws regulating the use of facial recognition in
       Canada, including reforms to PIPEDA and the Privacy Act.

While these recommendations are directed at the federal government, they can also serve as a template for
action at the municipal, regional, and provincial levels, each of which has authority over its own law
enforcement and security agencies: ban facial recognition surveillance, and hold inquiries into the overall
use of the technology in order to craft better laws and to better protect our rights.

References
Alba, Davey. 2019. The US Government Will Be Scanning Your Face at 20 Top Airports, Documents Show. Buzzfeed, March 11.
      https://www.buzzfeednews.com/article/daveyalba/these-documents-reveal-the-governments-detailed-plan-for [accessed May
      31, 2021].
Allyn, Bobby. 2020. “The Computer Got It Wrong”: How Facial Recognition Led To False Arrest Of Black Man. NPR, June 24.
      https://www.npr.org/2020/06/24/882683463/the-computer-got-it-wrong-how-facial-recognition-led-to-a-false-arrest-in-
      michig [accessed May 31, 2021].
Canadian Civil Liberties Association. 2019. Deputation on Facial Recognition Technology Used by Toronto. May 30.
      http://ccla.org/cclanewsite/wp-content/uploads/2019/05/toronto-police-board-deputation.pdf.
Hill, Kashmir. 2020. The Secretive Company That Might End Privacy as We Know It. The New York Times, January 18.
      https://www.nytimes.com/2020/01/18/technology/clearview-privacy-facial-recognition.html [accessed May 31, 2021].
Koebler, Jason. 2020. Detroit Police Chief: Facial Recognition Software Misidentifies 96% of the Time. Vice, June 29.
      https://www.vice.com/en_us/article/dyzykz/detroit-police-chief-facial-recognition-software-misidentifies-96-of-the-time
      [accessed May 31, 2021].
O’Brien, Luke. 2020. The Far-Right Helped Create the World’s Most Powerful Facial Recognition Technology. The Huffington
      Post, April 7. https://www.huffingtonpost.ca/entry/clearview-ai-facial-recognition-alt-right_n_5e7d028bc5b6cb08a92a5c48?ri18n=true
      [accessed May 31, 2021].
Sabbagh, Dan. 2020. South Wales Police Lose Landmark Facial Recognition Case. The Guardian, August 11.
      https://www.theguardian.com/technology/2020/aug/11/south-wales-police-lose-landmark-facial-recognition-case [accessed
      May 31, 2021].
Simonite, Tom. 2019. The Best Algorithms Struggle to Recognize Black Faces Equally. Wired, July 22.
      https://www.wired.com/story/best-algorithms-struggle-recognize-black-faces-equally/ [accessed May 31, 2021].
Singer, Natasha, and Cade Metz. 2019. Many Facial-Recognition Systems Are Biased, Says U.S. Study. The New York Times,
      December 19. https://www.nytimes.com/2019/12/19/technology/facial-recognition-bias.html [accessed May 31, 2021].
