The Hitchhiker’s Guide to AI Ethics Part 3: What AI Does & Its Impact

A 3-part series exploring ethics issues in Artificial Intelligence

B Nalini
Towards Data Science


“I don’t know what I’m doing here, do you?” (Image by Rock’n Roll Monkey on Unsplash)

Story So Far

In Part 1, I explored the what and why of the ethics of AI. Part 2 looked at the ethics of what AI is. In Part 3, I wrap up the series with an exploration of the ethics of what AI does and what AI impacts. Across the topics, be it safety or civil rights, effects on human behavior or the risk of malicious use, a common theme emerges — the need to revisit the role of technologists in dealing with the effects of what they build; going beyond broad tenets like ‘avoid harm’ and ‘respect privacy’ to establishing cause-and-effect, and identifying the vantage points we uniquely hold, or don’t.

I had a sense early on that Part 3 would be tough to do justice to in a sub-10-minute post. But three parts was just right and anything over 10 minutes too long; so bear with me while I try to prove that intuition wrong, and fail! Let us explore.

What AI Does

AI capabilities will improve with time and AI applications will flood our world. This is not necessarily a bad thing, but it does create an urgent need to evaluate what AI does and how that affects humans; from our safety, to our interactions with robots, to our privacy and agency.

So what does AI do? AI uses lots of computation and some rules to analyse, identify, classify, predict, recommend and when allowed, make decisions for us. Making decisions that can alter the course of a human life permanently is a huge responsibility. Is AI ready for it? Are we? In the absence of an inbuilt ethical bias, an AI system can be used to help us or harm us. How do we ensure AI is not causing or enabling harm?

Distilling the Harms of Automated Decision-Making (Future of Privacy Forum Report)

Safety

Safety in AI can be understood as “AI must not cause accidents, or exhibit unintended or harmful behavior”. Bodily harm is an obvious concern, and the safety issues with autonomous vehicles and drone deliveries are well known. But how does one model and enable safety in an autonomous system?

In a rules-based system, where a given input always yields the same outcome, safety concerns can be addressed through rigorous testing and operating procedures. This approach only goes so far with AI.

Autonomous decision making requires automating the ability to evaluate safety under uncertainty to predictably prevent harm.

Let us unpack this. Humans do not make decisions in a vacuum. Our actions are determined not just by external triggers, but also by our intentions, norms, values and biases. What we consider safe also changes with time and context. Consider weaving in and out of traffic to rush someone to the hospital. Would you do it? I’m guessing you said, “it depends”.

For a hardware-software composite to make the right calls, it must be responsive to contexts as they arise, able to model the uncertainty in its environment, and aligned on what is “right”. Alignment to the “right” goal, aka value alignment, is a key theme of safety in AI. The question is: how can autonomous systems pursue goals aligned with human values? More importantly, given that humanity can hold conflicting values, whose values do these systems align to?
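To make that contrast concrete, here is a minimal sketch, entirely my own illustration and not drawn from any real autonomous system: a fixed rule maps the same input to the same action every time and can be tested exhaustively, while a learned policy acts on an uncertain estimate of harm and has to be handed a threshold for how much risk is acceptable. The function names and the 5% threshold below are assumptions made up for illustration.

```python
# A toy contrast between a fixed rule and a decision under uncertainty.
# All names, numbers and thresholds here are illustrative only.

def rule_based_brake(obstacle_distance_m: float) -> str:
    """Deterministic rule: the same input always yields the same action,
    so it can be tested exhaustively against a specification."""
    return "BRAKE" if obstacle_distance_m < 10.0 else "CONTINUE"

def estimate_risk_of_harm(sensor_features: list) -> float:
    """Stand-in for a trained model; a real estimate carries both model
    error and irreducible uncertainty about the environment."""
    return min(1.0, max(0.0, sum(sensor_features) / len(sensor_features)))

def learned_brake(sensor_features: list, risk_threshold: float = 0.05) -> str:
    """Learned policy: acts on an *estimated* probability of harm.
    Choosing risk_threshold is a value judgement, not an engineering fact;
    this is where the value-alignment question lives."""
    p_harm = estimate_risk_of_harm(sensor_features)
    return "BRAKE" if p_harm > risk_threshold else "CONTINUE"

print(rule_based_brake(8.0))              # BRAKE, every time
print(learned_brake([0.02, 0.04, 0.12]))  # depends on the estimate and the threshold
```

The interesting part is not the code but the constant: who decides that a 5% estimated risk is acceptable, and does that answer change with context, say, when the passenger is being rushed to a hospital?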

Cyber-security and Malicious Use

Building ethics into machines. Yes/No? Image credit: Iyad Rahwan

While AI is increasingly finding use in enabling cyber-security by detecting and preventing intrusions, it is itself susceptible to gaming and malicious use. In a data-driven, highly networked, always-online world, the risks are significant. In addition to classical threats, AI systems can be gamed by poisoning the input data or modifying the objective function to cause harm.
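To give a rough feel for the data-poisoning idea, here is a hedged sketch on synthetic data using scikit-learn; it is not a real attack, and the 10% random label flips shown here are far cruder than targeted poisoning, so the accuracy drop may be modest, but the mechanism is the same: corrupt the training data and the model learns the corruption.

```python
# Minimal illustration of label-flipping data poisoning on synthetic data.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Model trained on clean labels
clean_model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Model trained after an 'attacker' flips labels on 10% of the training set
rng = np.random.default_rng(0)
flip_idx = rng.choice(len(y_train), size=len(y_train) // 10, replace=False)
poisoned_labels = y_train.copy()
poisoned_labels[flip_idx] = 1 - poisoned_labels[flip_idx]
poisoned_model = LogisticRegression(max_iter=1000).fit(X_train, poisoned_labels)

print("clean test accuracy:   ", clean_model.score(X_test, y_test))
print("poisoned test accuracy:", poisoned_model.score(X_test, y_test))
```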

Decreasing costs and advancing audio/video/image/text generation capabilities also fuel the gaming of AI systems and social engineering. While tech does tech, who bears the burden of this misuse? OpenAI’s decision not to release their text-generation model GPT-2, for fear of malicious use, was met with strong reactions from AI researchers. Across Twitter, blogs (e.g. 1, e.g. 2) and debates, it was clear that determining the “right thing to do” is hard and that AI researchers are yet to converge on a way forward. Meanwhile, black-hat researchers and bad actors have no such conflicts and continue unabated.

Privacy, Control and Surveillance

Along the lines of harms and misuse of tech is AI’s ability to be repurposed, or even intentionally designed, for surveillance. What should be the ethical considerations for building such tools? Should they be built at all? Let us take a step back and understand why this matters.

Ask people to describe privacy and you’ll get multiple definitions. This is reasonable given that privacy is a social construct, evolving with time and influenced by cultural norms. While defining privacy is hard, identifying its violation is intuitive. When you’re singled out for a pat-down at the airport, you feel violated. You have nothing to hide and yet you feel violated! Because behind the many definitions is something fundamentally human — dignity and control.

In the case described, you understand the violation and proceed to comply, giving up privacy for a bigger benefit (public safety). We do this all the time. Now consider a digital, big-data, AI world, where privacy violations are neither immediate nor obvious and the risks of giving up privacy come gift-wrapped in convenience and personalisation. Notions of personal, private, secure, open and consented all get muddled in ways that work against the average user. It is here that technologists hold a vantage point and can play a role in defending privacy.

Consider Facial Recognition Technology, by far the most virulent form of privacy-violation-made-easy-by-tech. Seemingly innocuous tech like CCTV, Snapchat/Instagram stories and Facebook Live all promote a culture where recording other people feels normal. Businesses continue to push the “convenient and personal” pitch while there is money to be made. Selfie to check in, thumb to pay, blink to unlock, DNA for ancestral tours: all make it easier to collect and coalesce the information that makes you, you. Meanwhile, AI can do facial analysis, skin texture analysis, gait recognition, speech recognition and emotion recognition, all without permission or cooperation from the individual. Add all this up and you disproportionately strengthen the state and the corporation over the individual. While China’s surveillance regime sounds extreme, the belief that Facial Recognition is essential to law enforcement and public safety is common. Despite its many biases, the US also routinely uses facial recognition for law enforcement, everywhere except in its tech capital. In fact, the lure of “security” is so strong that AI-based tracking, including facial recognition, is now being used on children, despite the harmful effects of false positives and constant monitoring on young minds. Where should we, as technologists, start drawing the line?

Human-AI Interaction

A few months after we got a Google Home, my 4-year-old loudly proclaimed “Google knows everything”. Needless to say, a long conversation ensued on how Google Home knows what it knows, and that it is definitely not everything! To my dismay he didn’t look very convinced. The human voice responding to “Hey Google, where is my cupcake”, “Hey Google, did you brush your teeth today”, “Hey Google, tell me a joke” is all too real for a child his age; while terms like machine and program and training feel, what shall we call it, artificial.

The role AI is playing, good or bad, in my child’s life cannot be overstated. Most parents use technology, including smart speakers, without knowing how it works or when it doesn’t. But here’s the thing: they shouldn’t be expected to know. Again, think technologists, effects and vantage points.

The impact of algorithms, positive and negative, on our mental and emotional wellbeing is also cause for concern.

I recently shared a story of someone saved by a mental health app notification; in other cases, algorithms have pushed people towards self-harm; meanwhile, there are also examples of reliance on Alexa to combat loneliness, a fluffy robotic seal for therapy, or a glass-tube figurine for companionship.

This dependence on AI to rescue us from what is essentially a failure of community, and in some cases of medical care, scares me and saddens me in equal measure.

What AI Impacts

AI increasingly impacts everything, but it is important to highlight the second and third order effects. Unintended or not, these effects are complex, multi-dimensional and significant at scale. Understanding them requires time and expertise, but being aware is a valuable first step, and my goal here.

Automation, Job Loss, Labor Trends

The news cycles around AI have alternated between “AI will save us” and “AI will replace us”. Stories of factory workers being replaced by robots, of AI creating millions of jobs, and of the perils of invisible labor in AI all paint conflicting pictures of the future of work in the age of AI. Regardless, as this Brookings report suggests, “the fact that all these major studies report significant workforce disruptions should be taken seriously”.

When it comes to humanity, I am an optimist — I believe collectively we can survive almost anything if we want to. The question with AI triggered job loss is, will we do it quickly enough? Will those most at risk find the means and resources needed to survive? And is mere survival sufficient? What about the sense of purpose, productivity and dignity? Will AI provide these to all or merely to those privileged enough to pursue it? Is AI going to fuel the vicious cycle of haves having more as a result of having?

It is clear that the landscape of labor will be disrupted by AI. Reports from the Partnership on AI, Brookings and the Obama White House provide useful insights on who will be affected and how. But it is not entirely clear how fast this change will occur, or whether we are doing all we can to prepare for it.

Democracy and Civil Rights

“Power always learns, and powerful tools always fall into its hands.” — Zeynep Tufekci, in MIT Technology Review

The effects of AI in the hands of the powerful are already visible, be it China’s mass surveillance or the systematic hijack of public discourse. While AI is not their sole cause, the unique ways in which it can propel the powerful is something AI researchers must contend with.

The Internet, and especially the for-profit companies driving its growth, has enabled a culture of fakery. Fake people, fake conversations: at one point in 2013, half of YouTube’s traffic was bots posing as real people. HALF. Year after year, less than 60% of web traffic comes from humans. While companies like YouTube and Facebook claim to be neutral to the content on their “platforms”, in reality they maximise consumption, which results in some content getting served up more than others. When bots or bad actors generate content customised for virality, the platforms oblige. What does this mean for how we consume and process information, who holds power over us, whom we trust and how we act? danah boyd, founder of Data & Society, says this manipulation of AI-based recommendation engines results in the fragmentation of truth and an eventual loss of trust and of community. This loss of informed, trusting, local communities dents the strength of democracies. As democracies suffer and structural biases are amplified, the free exercise of civil rights no longer remains uniformly available to all.
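As a toy sketch of that dynamic (all items and scores below are invented for illustration): if a feed is ordered purely by predicted engagement, content engineered for virality rises to the top by construction, with no term in the objective for truth, provenance or long-term harm.

```python
# Toy 'platform' that ranks content purely by predicted engagement.
# Items and scores are invented for illustration.
items = [
    {"title": "local news report",        "predicted_engagement": 0.02, "from_bot": False},
    {"title": "nuanced policy explainer", "predicted_engagement": 0.01, "from_bot": False},
    {"title": "outrage bait",             "predicted_engagement": 0.09, "from_bot": True},
    {"title": "conspiracy clip",          "predicted_engagement": 0.07, "from_bot": True},
]

# The objective sees only consumption, so it serves whatever it predicts
# will be consumed most, regardless of who made it or why.
feed = sorted(items, key=lambda item: item["predicted_engagement"], reverse=True)

for item in feed:
    print(f"{item['predicted_engagement']:.2f}  bot={item['from_bot']}  {item['title']}")
```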

Human-Human Interaction

How AI reshapes human interactions matters for our individual and collective wellbeing. And early indications are troubling: gendered AI promotes stereotypes and discrimination, natural-language AI leads to a loss of courtesy, and trust erodes when AI mediates interactions. This leads to a more fundamental question: how is present-day narrow AI, and at some point AGI, going to impact our ability to love and empathise, to trust and to belong?

An experiment by Yale professor Nicholas Christakis showed that group dynamics in humans can be altered by introducing human-like bots. A group that cooperates to maximise collective returns ceases to cooperate altogether when selfish, free-riding bots join. The reduced trust in the environment alters how we build connections and cooperate.

Humanity is Human Interdependency (image src: deposit photos)

Nicholas Christakis says, “As AI permeates our lives, we must confront the possibility that it will stunt our emotions and inhibit deep human connections, leaving our relationships with one another less reciprocal, or shallower, or more narcissistic.” This stunting extends to morality as well. Just as muscles unused are muscles wasted, moral muscles need real-world interactions to build strength. What happens then if decisions typical to a society are made by computations hiding behind data that’s divorced from its sources? Do we lose our ability to empathise? Do we become desensitised to unfairness? Are we able to practice moral judgement often enough to gain practical wisdom? Shannon Vallor, professor of Philosophy at Santa Clara University, calls this Moral Deskilling (elaborated here). This deskilling makes the few decisions that humans will need to take, often in more critical and conflicting situations (as a juror, for example), that much harder.

Continuous Functions, Discontinuous Humans

I need to caution readers that I am wrapping up this series in a deeply reflective state, and the summary ahead reflects that. From a survey of the ethics landscape of AI in Part 1, to deeper dives into what AI is in Part 2 and what AI does and impacts here in Part 3, to my readings across psychology, sociology, anthropology and technology, I come away painfully aware of how utterly insufficient our understanding of humans and humanity is.

Perhaps, to be human is to walk in another’s shoes, to understand the gravity of what they are going through, and to find within your capacities the best possible way to help them. To be humanity is to trust that others will do the same for you. As technologists, we build products and believe they will help people; we define metrics to show that they do, but we often fail to grasp the gravity of when they don’t.

The ethics of building AI for humans requires understanding humans, and all of their discontinuities. This is not a metrics-driven time-bound high-returns high-growth venture, but a deeply valuable one nonetheless.

This is Part 3 of a 3-part series exploring the ethics of AI. Click here for Part 1. Click here for Part 2. Many thanks to Rachel Thomas, Karthik Duraisamy and Sriram Karra for their feedback on the early drafts.
