TOWARDS RESPONSIBLE AI (PART 1)

Five Views of AI Risk: Understanding the darker side of AI

Anand S. Rao
Towards Data Science
10 min read · Nov 29, 2020


Get started on your journey towards Responsible AI

Source: Photo by Loic Leray on Unsplash

Thirty years from now, how will we look back at 2020? Will we remember it as the year when AI discriminated against minority groups; when disinformation propagated by special interest groups, and amplified by AI-based personalization, caused political instability; and when deep fakes and other AI-supported security infringements rendered AI untrustworthy and propelled us into yet another AI winter? Or will we look upon 2020 as the year that gave world bodies, corporates, and individuals the impetus to come together to ban autonomous weapons systems; to assess, monitor, and govern sensitive AI technologies such as deep fakes and facial recognition; and to create reliable, robust, and responsible AI that is beneficial to all humans? History will be the best judge of whether we turn this crisis into an opportunity or squander the chance to build a responsible approach to AI.

The risks of AI have been documented quite extensively in a number of articles. AI Now maintains a timeline of the key mishaps. These mishaps are not always 'errors of omission', negligence, or even ignorance; sometimes they are deliberate and malicious. The report on the malicious use of AI details the key aspects of such systems.

Source: AI in 2018: A year in Review

In this and a series of subsequent articles, we want to delve into how we should look at the risks of AI and how we can mitigate some of these risks through its responsible use. We will draw on our own work in this area as well as that of an emerging and active community of academics, policy makers, and practitioners.

Five Views of AI Risk

Risk is defined as the "possibility of loss or injury". Within AI (or related areas such as data, analytics, and automation; see my article here on how they are linked), the risks can be viewed from five different perspectives.

First is the time dimension. AI has been around for a while, and so have its risks. However, the widespread use of AI in society and some of its recent successes have heightened those risks. AI risks have progressively increased and will likely continue to increase as we build and use more sophisticated algorithms.

Second is the stakeholder dimension. AI does not impact every person or every institution in the same way; the risk differs for each stakeholder, and we examine it for each in turn.

Third is the sector dimension. Different sectors have adopted AI in different ways and are subject to different types of risk. We examine some of the key sectors exposed to AI risks and what those risks are.

Fourth is the use case dimension. A number of high-profile AI use cases have received a great deal of attention because of the potential harm they could cause to individuals and society. We will examine some of these use cases and the risks AI poses in them.

Fifth is the socio-technical dimension. This is perhaps the most common framing, in which risks are categorized by the functional properties desired of the AI system, e.g., safety, security, and fairness.

A detailed analysis of these five dimensions can help us understand these risks better and enable us to mitigate them.

Time: Three time horizons of AI

The quest for AI has changed over the more than six decades since the term was coined in 1956. The early pioneers wanted to achieve Artificial General Intelligence (AGI); however, as the complexity and enormity of the endeavor became apparent, the ambition shifted to the more modest Artificial Narrow Intelligence (ANI). More recently, there have also been philosophical discussions around Artificial Super Intelligence (ASI). So as we look at the risks of AI, we should reconcile those risks with the relevant time horizon.

Artificial Narrow Intelligence (ANI), as its name implies, is focused on systems that achieve specific objectives, e.g., deciding whether a loan should be approved or denied, or identifying a specific object in an image. The techniques used to achieve these objectives are also specialized and cannot be applied to other objectives not originally envisaged by the designer. Almost all AI systems today fall under this category. A number of the AI risks we discuss below pertain to ANI, and they pose a clear and present risk to us, not a hypothetical risk in the future.

Artificial General Intelligence (AGI) is still a quest, one that dates back to the Logic Theorist of 1956, which proved 38 of the first 52 theorems of the Principia Mathematica and was generalized into the General Problem Solver in 1959. Multiple attempts since then, including research on commonsense reasoning and knowledge as well as deep learning, continue the quest. If and when we achieve AGI, we will have artificial systems that perform at a human level of intelligence. This will open up an entire array of additional risks. We will look at these risks below, but they lie further in the future.

Artificial Super Intelligence (ASI) refers to artificial systems whose intelligence far surpasses the best human intelligence and that can learn faster than humans. As a number of luminaries have noted, such a super intelligence could pose an existential threat to humanity. Although some AI researchers dismiss such futuristic threats, it makes sense to address them rigorously. Stuart Russell's Provably Beneficial AI lays out a foundation for doing so.

Stakeholders: Four types of stakeholders

AI, like the Internet and mobile technology before it, is a general-purpose technology. It impacts individuals, corporates, social groups, and nations, and the impact on each of these stakeholders is somewhat different.

At the individual level, AI could pose a risk to our safety, security, reputation, liberty, and equality. Poor performance of AI could result in physical harm (e.g., accidents involving autonomous vehicles), emotional harm (e.g., mis-categorizing our emotions), financial harm (e.g., certain minority groups being flagged more often for 'suspicious' financial activity), and medical harm (e.g., certain minority groups not receiving adequate treatment for cancer). Distinct from the impact on individuals, AI could also discriminate against specific groups, e.g., women, minority groups, age groups, or socio-economic groups. Adversarial examples (e.g., strategically placed stickers on a Stop sign can fool a deep learning model into treating it as a 30 mph speed limit sign), trojan poisoning (e.g., hidden triggers embedded in neural networks that make them act erratically and maliciously), and model inversion (e.g., reverse engineering a trained model to recover information about its training data) are some examples of security risks. The rise of deep fakes can destroy the reputation of individuals. Surveillance cameras coupled with AI could rob individuals of their liberty and freedom. AI algorithms could institutionalize our historical biases (e.g., predicting that Black defendants pose a higher risk of recidivism than they actually do) and exacerbate inequality.
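To make the adversarial-example risk concrete, here is a minimal sketch of the classic fast gradient sign method (FGSM) in PyTorch: a perturbation that is imperceptible to a human but chosen to push a classifier's loss upward. The `model`, `image`, and `label` objects are hypothetical placeholders and the epsilon value is arbitrary; this is an illustrative sketch, not a description of any specific attack mentioned above.

```python
# Minimal FGSM-style adversarial perturbation sketch (assumes a hypothetical
# pretrained classifier `model`, an input tensor `image` in [0, 1], and a
# ground-truth class index tensor `label`).
import torch
import torch.nn.functional as F

def fgsm_perturb(model, image, label, epsilon=0.03):
    """Return an adversarially perturbed copy of `image`.

    A small step is taken in the direction that increases the model's loss,
    which is often enough to flip the predicted class even though the change
    is barely visible to humans.
    """
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    # Move each pixel by +/- epsilon according to the sign of the loss gradient
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0, 1).detach()
```

Physical-world attacks such as stickers on a stop sign exploit the same underlying sensitivity, realized through printed patches rather than per-pixel changes.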

At the corporate level, AI could pose financial, operational, reputational, and compliance risks. A deep-fake voice of a company's CEO was recently used to defraud the company of over $240,000, causing it both financial and reputational harm. AI going rogue, or model drift that is not monitored appropriately, could result in significant operational risk. Compliance risks and fines for corporates will likely increase as more countries enact specific regulations for algorithmic accountability.
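As one illustration of how model drift can be monitored in practice, the sketch below computes the Population Stability Index (PSI) between a model's scores at build time and in production. The variable names, the synthetic data, and the 0.2 alert threshold are assumptions (the threshold is a commonly cited rule of thumb), not prescriptions from this article.

```python
# Minimal model-drift monitoring sketch using the Population Stability Index (PSI).
import numpy as np

def population_stability_index(expected, actual, bins=10):
    """Compare the distribution of a model score (or input feature) at
    training time (`expected`) with its distribution in production (`actual`)."""
    cuts = np.quantile(expected, np.linspace(0, 1, bins + 1))
    cuts[0], cuts[-1] = -np.inf, np.inf  # catch values outside the training range
    e_frac = np.clip(np.histogram(expected, cuts)[0] / len(expected), 1e-6, None)
    a_frac = np.clip(np.histogram(actual, cuts)[0] / len(actual), 1e-6, None)
    return float(np.sum((a_frac - e_frac) * np.log(a_frac / e_frac)))

# Example: alert when drift exceeds a chosen threshold (synthetic data)
train_scores = np.random.normal(0.0, 1.0, 10_000)  # scores at model build time
live_scores = np.random.normal(0.4, 1.2, 10_000)   # scores observed in production
if population_stability_index(train_scores, live_scores) > 0.2:
    print("Significant drift detected: review or retrain the model")
```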

At the national level, AI could pose national security threats, threaten political stability, increase economic disparity, and raise the prospects of military conflict. Automated decision making, intelligent malware, data-diet vulnerability, and a number of other factors associated with AI can pose national security threats. With hyper-personalization, AI can create 'echo chambers', resulting in increased polarization of views within a country. This, coupled with automated disinformation, could quickly threaten a country's political stability. AI-induced automation can result in significant job losses for roles made up predominantly of repetitive manual and cognitive tasks, which could lead to greater unemployment and economic disparity. According to the Brookings report:

Most evident to date have been machine-driven dynamics that amplify the ability of skilled workers to add value, substitute for rote work, and inject winner-take-most — or “superstar” — dynamics into markets

Finally, the increased use of AI in military systems, including autonomous weapons systems, robot soldiers, micro drones, and other technologies, creates ethical, legal, operational, and strategic threats.

Sectors: Multiple industry sectors

The risks of AI can also be analyzed based on the sector in which it is used: financial services, healthcare, manufacturing and heavy industries, retail, technology, media, and telecommunications. In financial services, the opacity of models (the inability to explain decisions), potential bias in decision making, and the impact of AI on jobs are major risks. In healthcare, errors or performance risks in models, bias in decision making, professional realignment, and privacy concerns are some of the key AI risks. The Brookings report says:

Even if AI systems learn from accurate, representative data, there can still be problems if that information reflects underlying biases and inequalities in the health system.

In manufacturing, chemicals, mining, and other heavy industries, the risks of AI manifest in terms of the physical safety of individuals and potential security vulnerabilities. In the technology, media, telecommunications, and retail sectors, the key AI risks relate to privacy concerns, bias in decision making, opaqueness in decision making, deep fakes, and misinformation.

Use Cases: Thousands of use cases

There are thousands of AI use cases across the functional areas of each industry sector, and the risks do not manifest equally in all of them. The frequency and severity of the risks depend on how AI is used in each case. AI can be used as automated intelligence to replace humans; as assisted intelligence to help humans make decisions or take actions; as augmented intelligence to augment human capabilities; or as autonomous intelligence with no human intervention but with full agency for the AI (see my article Ten human abilities and four intelligences to exploit human-centered AI for more details). Of these thousands of use cases, some have gained particular prominence.

Autonomous weapons systems, facial-recognition systems, bias in recruitment, and deep fakes are some of the use cases receiving increased attention from civil rights groups, policy makers, companies, and regulators. Human Rights Watch details the actions countries have taken on autonomous weapons systems and killer robots. The National Institute of Standards and Technology (NIST), the Pew Research Center, the Center for Strategic and International Studies, the World Economic Forum, and a number of other bodies are evaluating facial recognition systems across use cases for accuracy, bias, privacy, and other risks. AI-based hiring algorithms and their potential bias risks are surfacing across a number of jurisdictions and bodies, including New York State law, the American Bar Association, and the UK's Information Commissioner's Office (ICO).

Socio-Technical Systems: Complexity of the functional requirements

The interplay between AI technology and how society uses that technology is one of the key dimensions for categorizing risk. The risk categories are grouped into six areas:

  • Performance risks: Risks associated with how the model performs are grouped under this category. Risks from model errors, bias in the data or in the models trained on that data, lack of interpretability or explainability, and potential brittleness or instability of model results are all examples of performance risks. Their impact includes physical, emotional, financial, and medical harm to individuals or society, and the resulting financial, reputational, and operational harm to the corporates that own these models (a minimal illustrative bias check follows below).
  • Security risks: Risks associated with model security are grouped under this category. The adversarial attacks, trojan poisoning, model inversion, and deep fakes discussed earlier are all good examples. Security risks impact individuals, societies, corporates, and nations.
  • Control risks: Risks arising from the inability to control an AI system when it malfunctions fall under this category. The examples cited earlier, AI going rogue, model drift, and lack of human agency in AI-driven processes, are control risks that need to be monitored and intervened upon before they can cause harm to humans or lead to wrong decisions.
  • Economic risks: The economic consequences of AI, including job losses, increased economic disparity, and winner-takes-most dynamics, are economic risks. These risks extend beyond individuals or companies and affect entire nations or regions.
  • Societal risks: Risks to political stability, polarization of views, and deep fakes that damage the reputation of individuals, corporates, and society are grouped as societal risks.
  • Ethical risks: Risks associated with value alignment, goal alignment, and broader risks of AGI and ASI are grouped under this category.
Six Categories of AI Risks (Created by Author)
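As a small illustration of the performance-risk category above, the sketch below checks a single, widely used fairness heuristic, the "80% rule" for disparate impact, on a binary decision. The column and group names are hypothetical, and this one ratio is only one of many possible bias metrics; in practice a model would be checked against several (equal opportunity, calibration, and so on), since no single number captures fairness.

```python
# Minimal disparate-impact ("80% rule") check on a binary decision.
import pandas as pd

def disparate_impact_ratio(df, group_col, outcome_col, privileged, unprivileged):
    """Ratio of favorable-outcome rates between unprivileged and privileged groups.

    A value below roughly 0.8 is often used as a flag for potential bias.
    """
    rate = df.groupby(group_col)[outcome_col].mean()
    return rate[unprivileged] / rate[privileged]

# Hypothetical loan-approval decisions by group (illustrative data only)
decisions = pd.DataFrame({
    "group":    ["A", "A", "A", "B", "B", "B", "B", "A"],
    "approved": [1,   1,   0,   1,   0,   0,   0,   1],
})
print(disparate_impact_ratio(decisions, "group", "approved",
                             privileged="A", unprivileged="B"))
```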

In this article we have examined different views of AI risk: how risks can be categorized, whom they impact, and when they might materialize. What we have not discussed is how to mitigate and manage these risks, and what frameworks, tools, and governance mechanisms might be useful. These are topics for future articles.

Are there other dimensions of AI risks that would be useful to analyze? Have you been a victim of any of these “AI abuses”?

Authors: Anand S. Rao and Ilana Golbin


Global AI lead for PwC; researching, building, and advising clients on AI. Focused on the intersection of AI innovation, policy, economics, and application.