A Typology of AI Ethics Tools, Methods and Research

How Can We Translate Principles into Practices?

Alex Moltzau
Towards Data Science

--

In December 2019 the 33rd NeurIPS took place. NeurIPS is the annual Conference on Neural Information Processing Systems, traditionally organised in Vancouver, Canada. I would recommend scrolling through the discussions on Twitter, reading the papers, or watching a few of the videos from the conference. In this article I will post a few pictures of quotes out of context, and then talk about the paper called A Typology of AI Ethics Tools, Methods and Research to Translate Principles into Practices. This paper won the ‘social good’ track at NeurIPS, so it may be interesting to check out. However, I would very much like to look at as many of the submissions in this category as possible later if I get the chance.

Slides and Quotes at NeurIPS

Before I dive into a summary of the aforementioned paper, I thought it would be fun and easy to post a few pictures of a selection of different slides posted publicly on Twitter.

Presentation by @riakall — phd studying AI/ML at stanford, thinking about the concepts inside machine learning models + dreaming up more radical ai // posted by @math_rachel

Welcome to the land of quotes:

Posted by — @raghavgoyal14

And more quotes:

Posted by — @celestekidd

And again more quotes:

Posted by — @math_rachel
Quotes from 1984 — posted by @JIALIN_LU_1996
Photo by — @IanOsband

However, there was of course more than quotes at the conference. Still, I want to focus on the paper from the ‘social good’ track.

A Typology of AI Ethics Tools, Methods and Research

This paper was written by Jessica Morley and Luciano Floridi from the Oxford Internet Institute, University of Oxford, UK, together with Libby Kinsey and Anat Elhalal from Digital Catapult, UK.

In short, it proposes that there is a: “…gap between aspiration and viability, and between principle and practice.” Therefore we need to understand what tools are available to developers at the different parts of the process.

The goal is to map a typology that may help practically minded developers ‘apply ethics’ at each stage of the AI development pipeline.

It also serves as a signal, showing areas where further research is needed. The authors found that there is an: “…uneven distribution of effort in the applied AI ethics space, and that the stage of maturity (readiness for widespread use) of the identified tools is mostly low.”

According to the paper, this approach is inspired by Saltz and Dewar (2019), who produced a framework meant to help data scientists consider ethical issues at each stage of a project. This was done with a grid, with ‘ethical principles’ on one axis and the stages of the ‘AI application lifecycle’ on the other.

A recent review had identified 84 ethical AI documents with recurring themes. According to the paper, these are themes that ‘define’ ethically aligned AI as that which is

(a) beneficial to, and respectful of, people and the environment (beneficence);
(b) robust and secure (non-maleficence);
(c) respectful of human values (autonomy);
(d) fair (justice); and
(e) explainable, accountable and understandable (explicability). Accordingly, these are the principles used in the typology.

The seven stages are

  1. business and use-case development,
  2. design phase,
  3. training and test data procurement,
  4. building,
  5. testing,
  6. deployment and
  7. monitoring.
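To make the structure of the typology concrete, here is a minimal sketch in Python of a grid with the five principles on one axis and the seven lifecycle stages on the other, where each cell holds the tools and methods that apply. The example entry is purely hypothetical, not an item from the paper's actual typology.

```python
# The five ethical principles and seven lifecycle stages from the paper.
PRINCIPLES = ["beneficence", "non-maleficence", "autonomy", "justice", "explicability"]
STAGES = [
    "business and use-case development",
    "design phase",
    "training and test data procurement",
    "building",
    "testing",
    "deployment",
    "monitoring",
]

# The typology maps each (principle, stage) cell to a list of tools/methods.
typology = {(p, s): [] for p in PRINCIPLES for s in STAGES}

# Hypothetical example entry: a post-hoc explanation method used during testing,
# the cell the paper found to be most densely populated.
typology[("explicability", "testing")].append("post-hoc explanation method")

# Counting populated cells gives a rough view of how unevenly effort is distributed.
populated = sum(1 for tools in typology.values() if tools)
print(f"{populated} of {len(typology)} cells populated")
```

A grid like this makes the paper's ‘skew’ finding easy to see: the tools cluster in a few cells rather than spreading across all 35.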

In this paper, a total of 425 sources were reviewed that provide a practical or theoretical contribution to answering the question: ‘how to develop an ethical algorithmic system?’

The fully populated typology can be found at:
http://tinyurl.com/appliedAIethics

The link above contains a wealth of information that I would recommend you check out.

The paper includes a table with the applied AI ethics typology and illustrative examples of where different tools and methods are plotted.

The paper presents three inter-related findings:

  1. An over-reliance on ‘explicability’. “The most obvious observation is that the availability of tools and methods is not evenly distributed across the typology either in terms of the ethical principles or in terms of the stages in the application lifecycle. The most noticeable ‘skew’ is towards post-hoc ‘explanations’ with individuals seeking to meet the principle of explicability during the testing phase having the greatest range of tools and methods to choose from.”
  2. A focus on the need to ‘protect’ the individual over the collective. “The next observation of note is that few of the available tools surveyed provide meaningful ways to assess, and respond to, the impact that the data-processing involved in an AI algorithm has on an individual, and even less on the impact on society as a whole.”
  3. A lack of usability. The vast majority of categorised tools and methods are not actionable, as they offer little help on how to use them in practice. Even when there are open-source code libraries available, documentation is often limited and the skill level required for use is high.

The stated goal of the paper was to give a snapshot of what tools are currently available to AI developers, to encourage the progression of ethical AI from principles to practice, and to signal clearly, to the ‘ethical AI’ community at large, where further work is needed.

“Constructive patience needs to be exercised, by society and by the ethical AI community, because such progress on the question of ‘how’ to meet the ‘what’ will not be quick, and there will definitely be mistakes along the way. Only by accepting this can society be positive about seizing the opportunities presented by AI, whilst remaining mindful of the potential costs to be avoided.”

I hope you liked this short summary; however, I would of course recommend reading the original paper. The information I have provided is only meant as encouragement to read further.

This is #500daysofAI and you are reading article 196. I write one new article about or related to artificial intelligence every day for 500 days.
