Ethical Concerns of Combating Crime with AI Surveillance and Facial Recognition Technology

Two prominent concerns have emerged in the debate over using AI to fight crime: authoritarian governments exploiting AI surveillance, and biases in facial recognition technology.


Introduction

Artificial intelligence (AI)¹ has been growing rapidly worldwide, with new applications being discovered every day. While AI has applications across many sectors, one area where it is commonly utilized is surveillance and facial recognition technology to combat crime. As of 2019, at least seventy-five countries were actively using AI technologies for surveillance purposes, including smart city/safe city platforms, facial recognition systems, and smart policing initiatives (Feldstein 2019: 1). However, the widespread use of AI in the name of combating crime does not come without a cost; multiple ethical concerns have arisen in recent years that call into question the feasibility of implementing AI technology to fight crime. This article examines two prominent ethical concerns regarding AI in crime fighting: biases in facial recognition technology, and authoritarian governments exploiting AI surveillance in the name of public safety.

Biases in Facial Recognition Technology


Fueled by new research in AI, facial recognition technology has become more popular than ever; however, it is not always accurate. According to a recent study conducted by the National Institute of Standards and Technology (NIST), facial recognition software exhibits biases with respect to race, age, and sex. Patrick Grother, a NIST computer scientist, headed this first-of-its-kind study. Grother and his team evaluated 189 software algorithms from 99 developers to measure whether they exhibit "demographic differentials", a term describing whether an algorithm's ability to match images differs across demographic groups (NIST 2019). Using four collections of photographs containing 18.27 million images of 8.49 million people, provided by various government agencies, the team evaluated each algorithm's matching accuracy across demographic factors. The results were striking: though the level of inaccuracy varied between algorithms, most exhibited demographic differentials. In particular, Grother points out that Asian, African American, and Native American faces were 10 to 100 times more likely to be falsely matched than Caucasian faces. The algorithms also struggled more with identifying women than men, and older adults than middle-aged adults (NIST 2019; Grother, Ngan and Hanaoka 2019). These findings are critical because they expose biases in facial recognition systems that hinder the safe deployment of these technologies. "One false match can lead to missed flights, lengthy interrogations, watch list placements, tense police encounters, false arrests or worse," said Jay Stanley, an analyst at the American Civil Liberties Union (Stanley in Singer and Metz 2019). This widespread demographic differential in facial recognition systems remains a paramount ethical issue that needs to be addressed.
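
To make "demographic differential" concrete, the sketch below shows one way such a differential could be measured: comparing false-positive rates across groups in one-to-one verification trials. This is an illustration only, not NIST's evaluation code; the data, group labels, and column names are all invented.

```python
import pandas as pd

# Hypothetical one-to-one verification results. Each row compares two images
# of *different* people, so any declared match is a false positive.
# Groups and outcomes are invented for illustration.
trials = pd.DataFrame({
    "group":   ["A", "A", "A", "B", "B", "B", "B", "C", "C"],
    "matched": [False, True, False, True, True, False, True, False, True],
})

# False-positive rate per group: the share of impostor comparisons
# that the algorithm wrongly declared a match.
fpr = trials.groupby("group")["matched"].mean()

# A simple demographic differential: each group's false-positive rate
# relative to the best-served group. Ratios far above 1 mean members
# of that group are misidentified far more often.
differential = fpr / fpr.min()
print(differential)
```

In NIST's actual evaluation, the same idea is applied to millions of comparisons, which is what allows differentials as large as 10x to 100x to be detected reliably.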

Flawed Technology Leads to Injustice

Unfortunately, biases in facial recognition technology have already led to injustices in the United States. The first known example is the case of Robert Williams, an African American man arrested after a facial recognition system mistakenly matched his photo to that of a thief (Porter 2020). Williams had his mug shot, fingerprints, and DNA taken, and was held overnight (Porter 2020). When a detective showed him an image from the surveillance video, Williams said, "No, this is not me. You think all black men look alike?" (Hill 2020). While Williams was eventually released, the experience was traumatic, and those around him, including his five-year-old daughter, can never unsee him being handcuffed and taken away (Porter 2020). Robert Williams's story is a powerful testament to the harm that flawed facial recognition technology can do to society.

Authoritarian Governments Exploiting AI Surveillance


While liberal democracies such as the US struggle to use AI to safeguard society, ethical concerns also arise from authoritarian governments exploiting AI surveillance in the name of combating crime. One such country is China; through the "New Generation Artificial Intelligence Development Plan" (AIDP), the nation set an overarching goal of making China the world leader in AI. The AIDP indicates China's intention to use AI for defence, social welfare, and the development of ethical standards (Roberts et al. 2020: 1–2). However, given China's political culture of corruption and repression, some observers, such as Ross Anderson, a deputy editor of The Atlantic, argue that China's pronouncements on AI have a sinister edge. Anderson believes that China wants to use AI to build an all-seeing digital system of social control, pushing the country to the cutting edge of surveillance (Anderson 2020). The possibility of such an all-knowing, AI-fueled surveillance system presents ethical concerns because it grants governments absolute control at the expense of civil liberties.

Using AI Surveillance For Racial Profiling


Anderson's worries about an all-knowing digital system used for government control are not without cause. According to Paul Mozur, a Hong Kong-based correspondent for The New York Times, the Chinese government uses AI surveillance to profile the Uighurs, a mostly Muslim minority group in China (Mozur 2019). This, Mozur writes, is the first known example of a government intentionally using AI for racial profiling. Through AI surveillance, the government looks exclusively for Uighurs based on their appearance and keeps records of their daily movements, information used to keep tabs on China's 11 million Uighurs in Xinjiang province. Alongside this widespread deployment of AI technology, authorities have put as many as a million Uighurs in detention camps on suspicion of terrorism and other alleged crimes (Mozur 2019). Clare Garvie, an associate at the Center on Privacy and Technology at Georgetown Law, points out that people will use the riskiest parts of AI technology: "If you make a technology that can classify people by an ethnicity, someone will use it to repress that ethnicity" (Garvie in Mozur 2019). Mass AI surveillance of this kind remains an urgent ethical crisis for human rights activists and leaders worldwide.

Benefits of AI Technology


While it is critical to address ethical concerns about incorporating AI into crime fighting, it is also essential to acknowledge the benefits AI brings to the table. According to the Oliver Wyman Risk Journal, AI is already used to detect crimes such as employee theft, cyber fraud, fake invoices, money laundering, and terrorist financing (Quest et al. 2018). These applications have proved effective against financial crime: banks using AI-driven tools have reduced false alerts by 50% while successfully tracking criminals (Quest et al. 2018). Furthermore, the scope of AI's application is broad if utilized correctly. Future applications include detecting and tracking illegal goods, terrorist activities, and human trafficking: delivery companies can use AI to flag parcels that may contain illegal goods, shops can use it to identify abnormal purchases, and law enforcement can use it to combat human trafficking (Quest et al. 2018). All of these applications show promise for enhancing public safety across the globe.
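
As a rough illustration of how such AI-driven screening can work, the sketch below trains an unsupervised anomaly detector on transaction features and flags outliers for human review. This is a generic sketch, not the systems described by Quest et al.; the features, values, and contamination rate are invented.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(seed=0)

# Invented transaction features: [amount, hour of day, transactions that day].
# Most rows mimic routine activity; a handful mimic suspicious behaviour.
normal = rng.normal(loc=[50.0, 14.0, 3.0], scale=[20.0, 4.0, 1.0], size=(1000, 3))
suspicious = rng.normal(loc=[5000.0, 3.0, 40.0], scale=[500.0, 1.0, 5.0], size=(5, 3))
transactions = np.vstack([normal, suspicious])

# An Isolation Forest learns what "typical" transactions look like and
# labels easily isolated outliers as -1.
detector = IsolationForest(contamination=0.01, random_state=0)
labels = detector.fit_predict(transactions)

flagged = np.where(labels == -1)[0]
print(f"{len(flagged)} of {len(transactions)} transactions flagged for review")
```

Because such a detector only raises candidates for investigation, keeping a human analyst in the loop is what reduces false alerts rather than automating decisions outright.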

Conclusion

The ethics of employing AI technology to combat crime will remain a critical issue debated among researchers, government authorities, and the general population. While AI has real potential to fight crime and increase citizens' safety across the globe, there are undeniable ethical concerns about its implementation, chief among them totalitarian regimes' abuse of AI surveillance and any government's use of fundamentally biased facial recognition systems. In response to these emerging concerns, multiple sets of guidelines have been published in recent years. One, by Dr. David Leslie at the Alan Turing Institute, stresses the significance of AI ethics and explores platforms for the responsible delivery of AI technologies (Leslie 2019: 3). As AI becomes a gatekeeper technology, humankind can ultimately choose which direction it takes: the exponential advancement of human well-being, or the possibility of significant risks (Leslie 2019: 73). The increasing global integration of AI inevitably raises major ethical concerns, but if leaders and researchers are willing to crack down on unethical behaviours and follow appropriate guidelines, AI can be a powerful force towards a better future.

[1]: “AI is the ability of a digital computer or computer-controlled robot to perform tasks commonly associated with intelligent beings” (Copeland 2020).

References

Anderson, R. (2020, September 9). The panopticon is already here. The Atlantic. https://www.theatlantic.com/magazine/archive/2020/09/china-ai-surveillance/614197/

Artificial intelligence (AI) coined at Dartmouth. (n.d.). Dartmouth. https://250.dartmouth.edu/highlights/artificial-intelligence-ai-coined-dartmouth

Copeland, B. J. (n.d.). Artificial intelligence (AI). Encyclopedia Britannica School. Retrieved December 23, 2020, from https://school.eb.com/levels/high/article/artificial-intelligence/9711

Feldstein, S. (2019). The global expansion of AI surveillance. Carnegie Endowment for International Peace. JSTOR. https://www.jstor.org/stable/resrep20995.1

Grother, P., Ngan, M., & Hanaoka, K. (2019). Face recognition vendor test (FRVT) part 3: Demographic effects. National Institute of Standards and Technology. https://doi.org/10.6028/NIST.IR.8280

Hill, K. (2020, June 24). Wrongfully accused by an algorithm. The New York Times. https://www.nytimes.com/2020/06/24/technology/facial-recognition-arrest.html

Leslie, D. (2019). Understanding artificial intelligence ethics and safety: A guide for the responsible design and implementation of AI systems in the public sector. The Alan Turing Institute. https://doi.org/10.5281/zenodo.3240529

Mozur, P. (2019, April 14). One month, 500,000 face scans: How China is using A.I. to profile a minority. The New York Times. https://www.nytimes.com/2019/04/14/technology/china-surveillance-artificial-intelligence-racial-profiling.html

NIST study evaluates effects of race, age, sex on face recognition software. (2019). National Institute of Standards and Technology. https://www.nist.gov/news-events/news/2019/12/nist-study-evaluates-effects-race-age-sex-face-recognition-software

Porter, J. (2020, June 24). A black man was wrongfully arrested because of facial recognition. The Verge. https://www.theverge.com/2020/6/24/21301759/facial-recognition-detroit-police-wrongful-arrest-robert-williams-artificial-intelligence

Quest, L., Charrie, A., & Roy, S. (2018). The risks and benefits of using AI to detect crime. Oliver Wyman Risk Journal, 8. https://www.oliverwyman.com/our-expertise/insights/2018/dec/risk-journal-vol-8/rethinking-tactics/the-risks-and-benefits-of-using-ai-to-detect-crime.html

Roberts, H., Cowls, J., Morley, J., Taddeo, M., Wang, V., & Floridi, L. (2020). The Chinese approach to artificial intelligence: an analysis of policy, ethics, and regulation. AI and Society. https://doi.org/10.1007/s00146-020-00992-2

Singer, N., & Metz, C. (2019, December 19). Many facial-recognition systems are biased, says U.S. study. The New York Times. https://www.nytimes.com/2019/12/19/technology/facial-recognition-bias.html
