The State of Artificial Intelligence Ethics

Xeno Acharya
Towards Data Science
7 min read · Mar 1, 2020

--

Some clear signals amid the cacophony

Tools using AI/ML such as facial recognition could turn into mechanisms for control and oppression. [Photo by Oscar Chan from Pexels]

In his blog article ‘Face recognition and the ethics of AI’, Benedict Evans describes how, in the 1970s and early 1980s, people had two fears about relational databases much like those we have about AI systems today: first, that these databases would contain bad data or bad assumptions (as we worry about our biases being baked into AI systems), and second, that they would be deliberately used to build bad things that hurt people (as we worry about nefarious applications of AI systems). Worried if it worked, worried if it didn’t.

Our fears about AI systems differ from our fears about databases in two important ways: first, AI systems pick up signals that humans did not know existed (intractable complexity), and second, advanced AI systems can provide right-sounding answers that are wrong in non-obvious ways (unintentional deception). It has become increasingly clear to people in the field that we need to address both of these behaviors.

Cautious optimism is a good thing. There have been legitimate efforts from organizations and governments to understand these fears as well as to address them. The past few years have seen a proliferation of write-ups on AI ethics from seemingly every organization involved with technology, be they national governments or giant tech firms. These have come in the form of guidelines, policy papers, principles, or strategies, usually accompanied by a committee or board to oversee implementation. A decent paper produced by researchers at Harvard’s Berkman Klein Center, summarizing analyses of 36 such documents, can be found here. Eight key themes emerged: privacy, accountability, safety and security, transparency and explainability, fairness and non-discrimination, human control of technology, professional responsibility, and promotion of human values. It is notable that human control of technology and promotion of human values are the least featured principles in these 36 documents. The authors conclude with a word of caution: “there’s a wide and thorny gap between the articulation of these high-level concepts and their actual achievement in the real world.” In other words, writing policy papers on ethics is all well and good, but how is it being translated into action?

Well, it is hard. Ethics is localized, and trying to impose one’s local ethical views on a phenomenon that is global poses challenges. Still, there have been some well-meaning efforts to translate AI ethics principles into action. I have grouped them broadly into technical, policy, people, and regulatory buckets, although all four are intricately interlinked.

Technical

Technical efforts involve implementing tools that allow for better machine learning interpretability (for example, Local Interpretable Model-Agnostic Explanations, or LIME, a model-agnostic tool for explaining why a classifier predicts what it does; or Deep Learning Important FeaTures, or DeepLIFT, a method that uses backpropagation to identify which inputs to a neural network matter most for a given output); algorithmic explainability (using Layer-Wise Relevance Propagation, or LRP, or similar); and algorithmic debiasing (for example, through adversarial debiasing, variational autoencoders, dynamic upsampling of training data, or distributionally robust optimization). Technical measures alone are of limited use for instituting ethical AI, as societal biases are far more profound and systemic; they therefore need to be tied closely to policy, people, and regulation.
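
To make the interpretability piece concrete, here is a minimal sketch of using the open-source lime package to explain a single prediction from a scikit-learn classifier. The dataset and model are illustrative placeholders, not part of the original discussion:

```python
# Minimal LIME sketch: explain one tabular prediction.
# Assumes `pip install lime scikit-learn`; dataset/model are placeholders.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from lime.lime_tabular import LimeTabularExplainer

data = load_breast_cancer()
X_train, X_test, y_train, y_test = train_test_split(
    data.data, data.target, random_state=0)

model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)

# LIME perturbs the instance and fits a simple local surrogate model,
# so it only needs access to predict_proba; it is model-agnostic.
explainer = LimeTabularExplainer(
    X_train,
    feature_names=list(data.feature_names),
    class_names=list(data.target_names),
    discretize_continuous=True)

explanation = explainer.explain_instance(
    X_test[0], model.predict_proba, num_features=5)

# Each pair is (feature condition, weight): the feature's local
# contribution for or against the predicted class.
for feature, weight in explanation.as_list():
    print(f"{feature}: {weight:+.3f}")
```

The same pattern of explaining locally and inspecting the weights applies whether the underlying model is a random forest or a deep network, which is what makes model-agnostic tooling attractive.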

Policy

Having policies that advocate for algorithmic fairness by design (built on accountability, transparency, and responsibility) is an important milestone towards implementing AI ethics principles. Accountability means the ability to justify one’s design decisions, which requires delving into the moral values and societal norms surrounding the AI system’s operations; it covers both a guiding function (we are choosing to do this, not that) and an explaining function (here is why we designed the system this way). Accountability may be owed to others within the organization, to external bodies such as regulators, or to the individuals and groups impacted by the AI system. Creating a transparent AI system means being able to describe what the system does, replicate its behavior, explain how it makes decisions and adapts to its environment, and demonstrate the mechanisms in place for governing the data it uses or creates. Responsibility means being able to name a human owner (or group of human owners) for the design decisions that make up the AI system.
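
One lightweight way to give accountability its guiding and explaining functions in practice is to log design decisions as structured, auditable records. The sketch below is hypothetical; the class and field names are illustrative, not an established standard:

```python
# Hypothetical design-decision record; names and fields are illustrative.
from dataclasses import dataclass, field
from datetime import date
from typing import List

@dataclass
class DesignDecision:
    decision: str               # guiding function: "we are choosing to do this"
    rationale: str              # explaining function: "here is why"
    owner: str                  # responsibility: a named human or team
    affected_groups: List[str]  # who is impacted by the decision
    decided_on: date = field(default_factory=date.today)

decision_log = [
    DesignDecision(
        decision="Exclude ZIP code as a model feature",
        rationale="Acts as a proxy for protected attributes",
        owner="ml-platform-team",
        affected_groups=["loan applicants"]),
]

# An internal reviewer or external regulator can walk the log entry by entry.
for d in decision_log:
    print(f"{d.decided_on} | {d.owner}: {d.decision} ({d.rationale})")
```

However minimal, a record like this gives internal reviewers and external regulators something concrete to audit.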

People

Understanding who makes design decisions and who is impacted by them is important; so is knowing their capabilities and limitations. Teams building AI systems need to be carefully selected: it is important that they are diverse and have good exposure to the problem(s) the AI solution targets. When building an AI system, people’s preferences, prejudices, and historical biases should be discussed and at least carefully acknowledged, if not avoided (because avoiding them is sometimes neither feasible nor appropriate). Development should be accompanied by a fair and transparent consultation that seeks input from all stakeholders, people involved in the end-to-end process need to be empowered to provide adequate oversight, and the impact on and benefit to each stakeholder should be carefully studied.

Regulatory

Regulations can’t stop mistakes in AI implementation; these mistakes will inevitably happen. They can, however, mandate an audit process that catches mistakes and helps course-correct through penalties or bans. Regulations must also be enforced at the right level of focus: too broad, and they risk being totalitarian. Mandating that developers follow software engineering best practices, with documented accountability, means we will need regulatory bodies with the expertise to examine those practices. Human rights should be treated as core to ethical AI systems, and these bodies should be able to provide protection against illegal discrimination, unfair practices, loss of liberty, increased surveillance, stereotype reinforcement, dignitary harms, and social stigmatization, among other things. If governance is fundamentally about redistribution, then it is the human application of technology that we need to govern.

Caution about AI ethics needs to translate into a robust regulatory framework. [Photo by Fernando Arcos from Pexels]

Many workplaces have, over the past few decades, become aware of biases within their recruitment systems. They have taken systematic steps to minimize these biases: building internal awareness around hiring among their staff, using validated assessments, standardizing their interview processes, prioritizing diversity and inclusion, and varying advertising channels and outreach. Apart from the fact that homogeneous workplaces are less productive than diverse ones, there are also severe legal implications for hiring discrimination. In the US, Title VII of the Civil Rights Act of 1964 and associated laws protect against discrimination based on age, sex/gender, sexual orientation, ethnicity, religion, disability, pregnancy (or other medical history), and genetic information. Similarly, in the EU, the Employment Equality Framework Directive and the Racial Equality Directive provide the legal basis for maintaining a fair and unbiased recruitment process. Structures exist, and we need not start from scratch.

The question is: if recruitment is automated using an AI system, does bias (and therefore the act of discrimination) become harder to detect? The recruitment algorithm is designed, developed, trained, tested, and deployed by humans; if discrimination is determined, we need to hold accountable the people who built, own, and operate the system, not throw our hands in the air and claim ‘the AI did it’. There is also a counterargument: machine learning algorithms will evolve, so the debiasing tools that work for one generation of AI systems will quickly become obsolete as new algorithms emerge. Today’s algorithms are in their infancy. As algorithms evolve to mimic human-level thinking more closely, the inherent biases we transfer onto them will also grow more sophisticated, reflecting existing societal bias.

Through our regulatory process, we can mandate an audit process that acknowledges and avoids these biases. However, I think we have to move beyond bias. Algorithms are biased because people are biased; bias is not a special feature of AI but an inherent part of human nature. When we transfer our knowledge onto an AI system, we also transfer onto it our preferences and stereotypes. These preferences and stereotypes are reflected in our social constructs, some of which we recognize need to change and have collectively agreed to change. Norms, policies, and laws are the negotiated instruments of such change: they make clear which actions are considered bad and what the consequences are for committing them. These norms, policies, and laws need to apply to the humans who design AI systems just as they apply to the humans who design drugs. Even corporations, though legal entities, are an extension of human intent, and their actions can be (and are) tied to their human representatives. Creating an AI system requires humans to make design decisions. These decisions can be documented, and what is documented can be audited. Standards for such designs and intent can be set and enforced. All human activity occurs in the context of some regulatory framework; most AI systems are therefore already regulated, given that the vast majority of AI is, and will likely remain, driven by commerce.

Among the eight themes I mentioned at the beginning, promotion of human values has featured least in the discourse on AI ethics. Bringing it to center stage could tip the balance in favor of an ethically driven era of AI adoption.
