Inside AI

Ethics, the new frontier of technology

The Road to Trusted AI

Olivier Penel
Towards Data Science
7 min read · Apr 24, 2019


Are we creating monsters?

As artificial intelligence (AI) and machine learning (ML) applications weave into more and more aspects of our lives, voices are rising to express concerns about the ethical implications, the potential discrimination fueled by algorithmic bias (“Algorithms, the Illusion of Neutrality”), and the lack of transparency and explainability of black box models (“X-AI, Black Boxes and Crystal Balls”).

We are building systems that are beyond our intellectual ability to comprehend. Who can seriously pretend that they understand the hundreds of millions of lines of code used in a self-driving car?

AI is rapidly evolving towards more autonomy and human-like cognitive activities such as natural language processing and computer vision. Algorithms need less and less supervision to function. In some cases, they are even starting to rewrite bits of their own code. Those “genetic algorithms” evolve, just as organisms do naturally. No wonder some academic research labs are now looking for ways to understand algorithms by treating them like animals in the wild, observing their behaviors in the world.
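
To make the idea of evolving algorithms a little more concrete, here is a minimal, purely illustrative sketch of a genetic algorithm in Python: candidate solutions compete, the fittest survive, and random mutation does the rest. Every name and parameter below is an assumption made up for the example, not taken from any real system.

```python
import random

# Toy genetic algorithm (OneMax): candidate "programs" are bit strings,
# fitness is the number of 1s, and the population improves by selection
# and mutation rather than by explicit programming.

TARGET_LEN = 20
POP_SIZE = 30
GENERATIONS = 50
MUTATION_RATE = 0.05  # illustrative assumption

def fitness(candidate):
    return sum(candidate)

def mutate(candidate):
    return [bit ^ 1 if random.random() < MUTATION_RATE else bit for bit in candidate]

population = [[random.randint(0, 1) for _ in range(TARGET_LEN)] for _ in range(POP_SIZE)]

for _ in range(GENERATIONS):
    # Keep the fittest half, then refill the population with mutated copies.
    population.sort(key=fitness, reverse=True)
    survivors = population[: POP_SIZE // 2]
    population = survivors + [mutate(random.choice(survivors)) for _ in range(POP_SIZE - len(survivors))]

best = max(population, key=fitness)
print(f"best fitness after {GENERATIONS} generations: {fitness(best)}/{TARGET_LEN}")
```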

Does this mean we are creating monsters?

Photo by freestocks on Unsplash

The myth of Frankenstein’s creature escaping our control, and the fear of killer robots eradicating humans, are fueling the hype. There is no shortage of fear-mongering and press articles trying to grab our attention on this topic. I think, however, that the reality is quite different. As with so many technological breakthroughs in the past, the technology has moved faster than society’s ability to adapt. As awareness builds up, practices will change; codes of conduct, best practices, and safeguards will develop; and the use of AI will become more (self-)regulated.

Algorithms do not have ethics, morals, values, or ideologies, but people do. Questions about the ethics of AI are questions about the ethics of the people who make it and put it to use.

Taking responsibility

As a matter of fact, organizations are already gearing up for more ethical and responsible use of AI. The importance of trustworthy AI was clearly established in a recent piece of research from SAS, Accenture and Intel conducted by Forbes Insights among more than 300 C-level executives globally. Cultural challenges, and in particular the lack of trust, were deemed to be the main obstacles preventing broader and faster adoption of AI.

Most organizations seem to be taking action, implementing governance and oversight systems to monitor the output of AI applications. For instance, more than half reported reviewing the output of AI systems at least weekly; the figure rose to 74% among the most successful and mature organizations.
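
To give a flavor of what such a weekly review might look like in practice, here is a minimal, illustrative sketch that compares the distribution of a model’s decisions against a baseline week and flags shifts that warrant human attention. The data layout and the alert threshold are assumptions made for the example, not a description of any particular governance system.

```python
from collections import Counter

# Hypothetical weekly review: compare the distribution of model decisions
# against a baseline week and flag any shift that warrants human review.
# The 10-percentage-point threshold is an illustrative assumption.

def decision_rates(decisions):
    counts = Counter(decisions)
    total = sum(counts.values())
    return {label: count / total for label, count in counts.items()}

def review(baseline_decisions, current_decisions, threshold=0.10):
    baseline = decision_rates(baseline_decisions)
    current = decision_rates(current_decisions)
    alerts = []
    for label in set(baseline) | set(current):
        shift = abs(current.get(label, 0.0) - baseline.get(label, 0.0))
        if shift > threshold:
            alerts.append(f"decision '{label}' shifted by {shift:.0%} vs. baseline")
    return alerts

# Example: decisions logged last week vs. this week.
last_week = ["approve"] * 70 + ["reject"] * 30
this_week = ["approve"] * 55 + ["reject"] * 45
for alert in review(last_week, this_week):
    print("REVIEW NEEDED:", alert)
```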

The question of ethics is now uppermost in the minds of innovators, technologists and business leaders. Many have introduced ethics committees to review the use of AI and provide ethics training for their staff.

Some engineers are rightly worried about the possible uses of the technology they are developing. Recently, researchers from the non-profit AI organization OpenAI created a text-generating system that can write page-long responses to prompts, mimicking everything from fantasy prose to fake celebrity news stories and homework assignments. OpenAI usually releases its projects to the public. In this instance, however, it decided not to make the technology publicly available because of concerns about possible malicious use.

Photo by Lianhao Qu on Unsplash

Industry communities have put in place a number of initiatives and frameworks to address this issue. For example, the F.A.T.E. (Fairness, Accountability, Transparency, Explainability) community has created a set of principles to help organizations bring ethics into their use of AI, tackle the challenge of bias and consistently explain model outcomes. Big players like Google have decided to publicly advertise their ethical principles for the use of AI. In May 2018, Google CEO Sundar Pichai announced a set of principles, including a commitment not to use AI in applications related to weapons or surveillance, and to build AI applications that are “socially beneficial”, avoid creating or reinforcing bias, and are accountable to people. The laudable effort to work for the greater good also fitted nicely into a PR exercise from a company that invests a lot in AI. But the trend is obvious and cannot be ignored.
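
One concrete way teams operationalize the fairness part of such principles is to check whether favorable model outcomes are distributed evenly across groups. Below is a minimal, illustrative sketch of a demographic-parity check; the group labels, outcomes and the four-fifths threshold are assumptions chosen for the example, not a prescribed standard.

```python
# Minimal demographic-parity check: compare the rate of favorable
# model outcomes across groups defined by a sensitive attribute.
# Group names and the 0.8 "four-fifths" threshold are illustrative.

def favorable_rate(outcomes):
    return sum(outcomes) / len(outcomes)

def demographic_parity_ratio(outcomes_by_group):
    rates = {group: favorable_rate(o) for group, o in outcomes_by_group.items()}
    return min(rates.values()) / max(rates.values()), rates

outcomes_by_group = {
    "group_a": [1, 1, 0, 1, 1, 0, 1, 1],  # 1 = favorable outcome (e.g. loan approved)
    "group_b": [1, 0, 0, 1, 0, 0, 1, 0],
}

ratio, rates = demographic_parity_ratio(outcomes_by_group)
print("favorable rates:", rates)
if ratio < 0.8:
    print(f"potential disparate impact: parity ratio = {ratio:.2f}")
```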

Should Data Science be a regulated profession?

Today, data scientists are often focused on the fundamental science of data analytics and algorithms. They love to experiment with new languages and new techniques, to try things out and be “on the edge”. But the business of taking AI out of the lab and into the real world takes more than science. The mathematics is actually the easy part. Others also need to be involved:

  • Business leaders must set the vision and define the desired outcome, in business terms. They must also be involved in the iterative process of exploring the data and insights, refining the business objectives along the way.
  • Domain experts are those who understand the data and can guide data scientists in making the right assumptions, selecting the right data sets, asking the right questions, and interpreting the results.
  • IT teams have to provide a controlled environment in which to manage the models developed by data scientists. They are responsible for taking those models and embedding them into applications for use. Depending on the business objectives and technical constraints, models must be deployed wherever they are needed: in the cloud, in-database, on the edge, or in a data stream, for example. This process requires strong collaboration between the engineering and data science teams, as sketched below.
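
As an illustration of that hand-off between data science and engineering, the sketch below trains a toy model, serializes it to a file, and loads it back inside a scoring function that an application could call. It assumes scikit-learn is available; the file name, features and figures are invented for the example, and real deployments (cloud, in-database, edge, streaming) would add packaging, versioning and monitoring on top.

```python
import pickle
from sklearn.linear_model import LogisticRegression

# Illustrative hand-off: the data science side trains and serializes a model;
# the engineering side loads that artifact inside an application and scores
# incoming records. All names and numbers are assumptions for the example.

# --- Data science side: train and export ---
X_train = [[25, 1], [40, 0], [35, 1], [50, 0], [23, 1], [60, 0]]
y_train = [1, 0, 1, 0, 1, 0]
model = LogisticRegression().fit(X_train, y_train)
with open("churn_model.pkl", "wb") as f:
    pickle.dump(model, f)

# --- Engineering side: load once and embed in an application ---
with open("churn_model.pkl", "rb") as f:
    deployed_model = pickle.load(f)

def score(record):
    # Probability of the positive class for one incoming record.
    return deployed_model.predict_proba([record])[0][1]

print(f"probability of positive class: {score([30, 1]):.2f}")
```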

All these people have a role in making sure that AI applications are trustworthy and ethical. However, when everybody is responsible, perhaps nobody is responsible. I would argue that data scientists should perhaps take broader responsibility for overseeing the end-to-end analytics life cycle, from data to discovery to deployment. In their collaboration with engineering teams, domain experts, and business leaders, should they act as the guardians of good practice?

This raises the question of whether Data Science should be a regulated profession. Should we establish a code of professional conduct, with rigorous evaluation of data scientists, accountability and liability?

Photo by MD Duran on Unsplash

Establishing standards for the ethical development and use of AI is even more critical if we consider the current shortage of data science talent. Many new data scientists are entering the market without the awareness and experience to handle the ethical aspects of the job. Surely, this shortage will not be fixed with $40 online courses, and it will take time before the role gets fully professionalized.

Evolving regulatory environment for AI

There is also an argument, however, that this guardianship should be the responsibility of a new C-level role. In the same way that the EU mandated the creation of a Data Protection Officer (DPO) in some cases, I would not be surprised to see the emergence of a new role in the next few years, or perhaps new legal responsibilities for an existing role such as the Chief Analytics Officer (CAO). This would provide the oversight and accountability needed for the development and use of AI technologies.

AI is clearly on the agenda of both business leaders and policy makers. Many governments and private organizations have started issuing guidelines and best practices to provide safeguards around the way AI is used. It will take some time, but proper regulations will follow, very much in the same vein as the EU GDPR.

The GDPR has already established strong rules to protect personal data and individuals’ fundamental right to data privacy. There has been much talk about whether or not the GDPR includes a “right to explanation”. Under the GDPR, data subjects have a right to:

  • be informed about automatic decision-making;
  • contest the output of an automated system; and
  • get meaningful information about the data used and the logic involved in reaching a particular decision, if the decision has a significant impact on them (one way to provide such information is sketched below).
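
The third right is the hardest one to honor with complex models. As a minimal illustration of what “meaningful information about the logic involved” could look like for a simple, interpretable model, the sketch below breaks a linear credit score into per-feature contributions; every feature name, weight and threshold in it is an assumption made up for the example.

```python
# One possible way to give "meaningful information about the logic involved":
# decompose a linear credit-scoring model's decision into per-feature
# contributions. Feature names, weights, and the threshold are illustrative.

weights = {"income": 0.8, "existing_debt": -1.2, "years_at_address": 0.3}
bias = -0.5
threshold = 0.0

applicant = {"income": 1.1, "existing_debt": 0.9, "years_at_address": 0.2}

contributions = {name: weights[name] * applicant[name] for name in weights}
score = bias + sum(contributions.values())
decision = "approved" if score > threshold else "declined"

print(f"decision: {decision} (score {score:.2f})")
for name, value in sorted(contributions.items(), key=lambda kv: abs(kv[1]), reverse=True):
    print(f"  {name}: contributed {value:+.2f}")
```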

This is just a start! More is cooking in Brussels…

Photo by Guillaume Périgois on Unsplash

The European Commission has already set up the High-Level Expert Group on Artificial Intelligence, with representatives from academia, civil society and industry. The goal of this multi-disciplinary group is to support the implementation of the European strategy on AI, notably by drafting ethics guidelines and policy recommendations for trustworthy AI.

The legal framework is therefore busy catching up with innovation and new practices around the use of AI. In the meantime, it is becoming obvious that the ability to produce trustworthy and responsible AI is a necessary condition for delivering both ethics and business value. Two sides of the same coin!

For more, read my two other blogs on this topic: “Algorithms, the Illusion of Neutrality” and “X-AI, Black Boxes and Crystal Balls”.
