If you are a fan of Little Britain, you are probably familiar with the title of this post, thanks to the sketches in which a character types some data into a desktop PC until, well, the “computer says no”.

I am currently using some of these short videos as a light-hearted way to introduce one of the main challenges we face today when working with Artificial Intelligence systems: algorithmic transparency.

everis AI ethics framework. All rights reserved.

Explainable AI, or XAI, is an essential requirement for Machine Learning models if we are to understand, trust and manage automated decision systems.

Source: https://www.darpa.mil/program/explainable-artificial-intelligence

According to DARPA, through XAI “new machine-learning systems will have the ability to explain their rationale, characterize their strengths and weaknesses, and convey an understanding of how they will behave in the future. The strategy for achieving that goal is to develop new or modified machine-learning techniques that will produce more explainable models. These models will be combined with state-of-the-art human-computer interface techniques capable of translating models into understandable and useful explanation dialogues for the end user”.

For any critical system in which AI could have a significant impact on humans, AI should allow for:

  • Reproducing the results of any decision
  • Knowing the logic and the data with which the model was trained
  • Ensuring traceability and auditability of automated decisions
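The three requirements above can be made concrete with a small sketch. Assuming a deliberately simple, hypothetical linear scoring model (the model name, weights and feature names below are all illustrative, not from the original article), each decision can be emitted together with a per-feature explanation and an audit record identifying the model version and training data that produced it:

```python
import json
import hashlib
from datetime import datetime, timezone

# Hypothetical interpretable model: a linear rule whose weights and
# training-data fingerprint are recorded alongside every decision.
MODEL = {
    "version": "credit-score-v1",
    "weights": {"income": 0.4, "debt": -0.6, "history": 0.3},
    "bias": 0.1,
    "training_data_hash": hashlib.sha256(b"training-set-2023-01").hexdigest(),
}


def decide(features: dict) -> dict:
    """Score an input and return the decision with a full audit record."""
    # Explainability: the contribution of each feature to the final score
    contributions = {
        name: MODEL["weights"][name] * features[name] for name in MODEL["weights"]
    }
    score = MODEL["bias"] + sum(contributions.values())
    return {
        "decision": "approve" if score >= 0 else "reject",
        "score": round(score, 4),
        "explanation": {k: round(v, 4) for k, v in contributions.items()},
        # Traceability: which model and which data produced this decision, and when
        "model_version": MODEL["version"],
        "training_data_hash": MODEL["training_data_hash"],
        "inputs": features,
        "timestamp": datetime.now(timezone.utc).isoformat(),
    }


record = decide({"income": 1.0, "debt": 0.5, "history": 0.8})
print(json.dumps(record, indent=2))
```

Because the inputs, weights, model version and data fingerprint are all logged, the same decision can be reproduced later, audited, and explained feature by feature. Real systems would replace the linear rule with tools such as SHAP or LIME on top of a trained model, but the audit-record pattern stays the same.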

If our AI systems comply with these requirements, we ensure not only explainability but also traceability. That, in turn, allows us to establish feedback loops that improve our AI systems while protecting them against other challenges such as bias, moving us from explainable AI to traceable and transparent AI.

Finally, in order to increase trust in AI systems, we will need to communicate with our users and stakeholders in a clear, precise and actionable way. To do that, it will be key to make design principles and design teams a fundamental part of our AI product lifecycle.

Trust is without doubt the most valuable business commodity, so we cannot just let a computer say no for us without understanding why, and without effectively communicating to our customers the decisions it is making on behalf of our company.

Written by

Head of Data & Intelligence for Europe at everis, an NTT Data company. All opinions are my own. https://www.linkedin.com/in/dpereirapaz/
