Artificial Intelligence Ethics and the 10 Commandments?

Mapping ethical AI issues to the 10 Commandments

Tobi Sam
Towards Data Science


The School of Athens

This article explores how we might bring ideas from different backgrounds together to help reduce the bias we are seeing in the industry today. By the end of this write-up, I will have mapped some ethical AI issues to the 10 Commandments to see how they stack up.

Maybe in the process of recognising our bias and teaching machines about our common values, we may improve more than AI. We might just improve ourselves. — IBM

For a while now, I have had this thought about how the Ten Commandments from the Bible could be used as a good starting point for building an ethical framework for Artificial Intelligence principles. I brushed this off because I thought it was a bit odd. However, after seeing the unusual partnership between Microsoft, IBM and the Roman Catholic Church on AI ethics in February 2020, I realised my idea wasn’t so crazy after all!

Firstly, let us define what ethics is:

Ethics is defined as a set of moral principles that govern a person’s behaviour or the conducting of an activity. — Wikipedia

Today, ethics plays a fundamental role in society and culture. It helps determine what is legal or illegal in a society and usually serves as the basis for the law and order of a land. When exploring ethics and morals in the West, for example, you will discover that religion, and specifically Christianity, played a major role in providing guidelines. Although Western civilisation has largely emancipated itself from its religious roots, the importance of Christianity in the formation of Western civilisation can hardly be denied. The focus of this article, however, will not be on that history.

Why AI Ethics?

AI can provide extraordinary benefits, but it can also have negative impacts unless it is built and used responsibly. Deep Learning (a subset of Machine Learning) models now exceed human-level performance in some applications.


Take AlphaGo, for example: a Deep Learning computer program developed by DeepMind (a company acquired by Google in 2014) that learnt how to play Go. AlphaGo didn't have the rules of Go explicitly programmed into it like typical game programs. Instead, it learnt from thousands of games played by amateur and professional players, then improved by playing against itself, guided by a reward system, somewhat like how humans learn. This eventually led it to become the first computer program to defeat a professional Go player. That is very impressive, given that Go is known as the most challenging classical game for artificial intelligence: the vast number of possible positions makes strategy in the game enormously complex.
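
To make the reward-driven idea concrete, here is a toy, hypothetical sketch of reward-based learning (tabular value estimation with an epsilon-greedy policy). It is nothing like AlphaGo's actual architecture of deep networks and tree search, but it shows the same core loop: act, observe a reward, and update the strategy.

```python
import random

# Toy reward-driven learning: the agent repeatedly picks a move,
# observes a reward, and nudges its value estimates toward moves
# that pay off. AlphaGo's real training uses deep networks and
# self-play with tree search, but the feedback loop is similar.

ACTIONS = ["a", "b", "c"]                      # hypothetical moves in a toy game
true_reward = {"a": 0.2, "b": 0.8, "c": 0.5}   # hidden payoff probabilities

q_values = {action: 0.0 for action in ACTIONS}  # learned value estimates
learning_rate = 0.1
epsilon = 0.1                                   # exploration probability

for episode in range(10_000):
    # Explore occasionally; otherwise exploit the best-known move.
    if random.random() < epsilon:
        action = random.choice(ACTIONS)
    else:
        action = max(q_values, key=q_values.get)

    # Reward signal: 1 for a win, 0 for a loss (simulated here).
    reward = 1.0 if random.random() < true_reward[action] else 0.0

    # Move the estimate a small step toward the observed reward.
    q_values[action] += learning_rate * (reward - q_values[action])

print(q_values)  # the estimate for "b" should approach 0.8
```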

It’s easy to see how this technique can be used to build on the ingenuity of human knowledge for the greater good. However, to prevent the infamous dystopia called the Technological Singularity — where a computer program running Artificial General Intelligence self-improves into a super-intelligent agent that surpasses human intelligence — it is important that we build an ethical framework to prevent unwanted and unforeseen consequences.

I, Robot (movie)

What are the Ten Commandments?

The Ten Commandments, also known as the Decalogue, are a set of biblical principles relating to ethics and worship, which play a fundamental role in the Abrahamic religions — Wikipedia

Below, you will find the Ten Commandments written out, and we will map each of them to a common Artificial Intelligence ethics issue we face today. (Note that this isn't an exhaustive list, but a proposed set of starting blocks for building the ethical principles of your AI system. Obviously, the ethics of any AI system will largely depend on the ethics of its creator.)


Bridging the gap

  1. You shall have no other gods before Me (AKA: Accountability/Traceability): One issue facing AI systems today is accountability and transparency. Any particular AI system needs to be accountable to one authority — the creator of the system — which could be an individual or an organisation. For example, Amazon developed a recruiting algorithm that was meant to hire top candidates from around the world. The program worked, but the algorithm had learnt to exclude women from the candidate pool, down-scoring CVs that included the word ‘women’s’. Thanks to accountability, Amazon took responsibility and scrapped the program before it was rolled out to larger groups. Here’s another scenario: if an autonomous vehicle (AV) hits and kills someone, who should be held responsible? The AI-powered AV, or the creator of the system? I love how the Engineering and Physical Sciences Research Council (EPSRC) put it in their five ethical rules of robotics (the original source has been archived, but a copy is still available). The 5th rule states that “it should be possible to find out who is responsible for any robot (AI in our case)”. An AI should not be legally responsible for its decisions; it is a tool, and the creator should bear sole responsibility. Traceability to the creator is therefore key to trustworthy AI systems (a minimal logging sketch of this idea follows point 3 below).
  2. You shall not make idols: This is very similar to the 1st point, so we can skip this one for now.
  3. You shall not take the name of the Lord your God in vain (AKA: Abuse of Power): This command was set in place to prevent the abuse or misuse of the name of God. In essence, an AI system should not abuse the power it has been entrusted with. This can be a difficult problem to detect, seeing as most AI systems do not provide an explanation for their decisions. Let me give you an example. ToTok, an Emirati messaging app that has been downloaded millions of times, was allegedly being used by the government to track the conversations, locations and other data of its users. ToTok denies this on its website and, hopefully, the allegation isn’t true. Nevertheless, AI systems (and the institutions behind them) should not abuse or exploit the power and trust given to them by their users.
ToTok warning message on the Google Play Store
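
As a concrete illustration of the traceability idea in point 1, here is a minimal, hypothetical sketch that logs every model decision together with the model version and the party accountable for it, so any output can be traced back to its creator. The field names and metadata are assumptions, not a standard.

```python
import json
import logging
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO)
audit_log = logging.getLogger("model_audit")

# Hypothetical metadata identifying who is accountable for this model.
MODEL_INFO = {
    "model_id": "cv-screener",
    "version": "1.4.2",
    "responsible_party": "Example Corp, ML Platform Team",
}

def predict_with_audit(model, features):
    """Run a prediction and record a traceable audit entry."""
    prediction = model(features)
    audit_log.info(json.dumps({
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "prediction": prediction,
        **MODEL_INFO,
    }))
    return prediction

# Usage with a stand-in model:
score = predict_with_audit(lambda f: 0.73, {"years_experience": 5})
```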

4. Remember the Sabbath day and keep it holy (AKA: Maintenance/Reliability): The purpose of the Sabbath day has been debated among people practising the Abrahamic religions, but it is generally agreed to be a day of rest. In light of this, it is important to set aside time to regularly maintain an AI system, making sure it is free from bias, bugs, faults and security vulnerabilities, and to ensure optimal performance. The system should be tested consistently throughout its life-cycle to guarantee its reliability.
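
As a small, hypothetical example of what such regular maintenance can look like in practice, the sketch below re-evaluates a model on a held-out test set and raises an alert when accuracy falls below a threshold. The threshold and the toy data are assumptions you would replace in a real system.

```python
# Hypothetical periodic health check: re-evaluate the model on a
# held-out set and raise an alert if performance has degraded.
ACCURACY_THRESHOLD = 0.90  # assumed acceptable floor; tune per system

def evaluate(model, test_set):
    """Fraction of held-out examples the model gets right."""
    correct = sum(1 for features, label in test_set if model(features) == label)
    return correct / len(test_set)

def health_check(model, test_set):
    accuracy = evaluate(model, test_set)
    if accuracy < ACCURACY_THRESHOLD:
        # In production this might page an on-call engineer or
        # roll back to the last known-good model version.
        raise RuntimeError(f"Model accuracy degraded to {accuracy:.2%}")
    return accuracy

# Usage with a stand-in model and toy data:
toy_model = lambda features: features > 0
toy_test_set = [(1, True), (2, True), (-1, False), (3, True)]
print(f"accuracy: {health_check(toy_model, toy_test_set):.2%}")
```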

5. Honour your father and mother (AKA: If required, allow human intervention and enforce compliance with human laws): An AI system should comply with existing laws and fundamental rights and freedoms, including privacy. These systems should also allow humans to take over when required. To return to the case of an autonomous vehicle (AV): in a scenario where an accident is inevitable, a group of scientists designed a way to put the decision in the hands of the human passenger, calling it an “ethical knob”. This allows the passenger to decide what moral choice the AV should make in that instance. Accommodating flexibility like this can add another layer of trust to your system.
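
One common way to allow human intervention is a confidence gate: when the model is unsure, the decision is deferred to a person. Here is a minimal sketch, with an assumed confidence threshold and stand-in functions:

```python
CONFIDENCE_THRESHOLD = 0.95  # assumed cut-off; below it, a human decides

def decide(model, situation, ask_human):
    """Let the model act only when it is confident; otherwise defer."""
    action, confidence = model(situation)
    if confidence >= CONFIDENCE_THRESHOLD:
        return action
    # Hand the decision to a human operator (the "ethical knob" idea:
    # the person, not the machine, makes the moral call).
    return ask_human(situation)

# Usage with stand-in callables:
toy_model = lambda s: ("brake", 0.62)
human = lambda s: input(f"Model unsure about {s!r}. Your decision: ")
# decide(toy_model, "obstacle ahead", human)  # would prompt the operator
```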

6. You shall not murder (AKA: AI should not kill): This is one of those rules that is almost universally accepted around the world as morally wrong, except in cases such as self-defence. Generally, if humans have the right to life, then morality says, “you must not kill a human being”. It can then be argued that allowing a machine to make a judgement about taking a human life without human intervention can be detrimental to society. For example, in 2007, a military cannon robot malfunctioned, opened fire, killed 9 soldiers and injured 14 others. This is tragic, but thankfully, many countries and organisations have signed a pledge to ban the use of Artificial Intelligence in Lethal Autonomous Weapons. The decision to take a human life should not be given to an AI system in any situation; it should always rest with a human being.

7. You shall not commit adultery (AKA: Loyalty/Security): This was a funny one to map, but as AI systems become more powerful, it is important to ensure that they (especially mission-critical systems like armed drones or state-wide energy management systems) don’t fall into the wrong hands and get used for malicious purposes. This means the system should have robust access control measures in place to prevent unauthorised access. Cyber security is, and will continue to be, critical to the reliability of AI solutions, so make sure security is built into your program from day one.
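
To make “robust access control” slightly more concrete, below is a minimal, hypothetical role-based check guarding a sensitive operation. A production system should rely on an established identity and authorisation service rather than a hand-rolled table like this one.

```python
# Hypothetical role-based access control around a sensitive AI action.
# Real deployments should use a vetted identity/authorisation service;
# this only illustrates the principle.
PERMISSIONS = {
    "operator":  {"view_status"},
    "commander": {"view_status", "authorise_mission"},
}

class AccessDenied(Exception):
    pass

def require_permission(user_role, permission):
    if permission not in PERMISSIONS.get(user_role, set()):
        raise AccessDenied(f"{user_role!r} may not {permission!r}")

def authorise_mission(user_role):
    require_permission(user_role, "authorise_mission")
    return "mission authorised"

print(authorise_mission("commander"))   # allowed
# authorise_mission("operator")         # raises AccessDenied
```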

8. You shall not steal (AKA: Data Protection): To quote the Universal Declaration of Human Rights, “no one shall be arbitrarily deprived of his property”, and that includes his or her data. An AI system should therefore not steal or deceptively collect users’ data without their clear consent. Regulations like GDPR are already helping in this regard. The Cambridge Analytica and Facebook scandal is an example: Facebook allowed a third-party developer to engineer an application for the sole purpose of gathering data, and the developer was able to exploit a loophole to gather information not only on people who used the app, but on all their friends — without them knowing. Practices like this should be avoided at all costs. The action point is to ensure that your system complies with GDPR or any similar regulation in every region where it will operate.
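
In code, “clear consent” can be enforced as a hard precondition on data collection: if there is no recorded opt-in for a given purpose, no data is stored. A minimal, hypothetical sketch:

```python
from datetime import datetime, timezone

# Hypothetical consent registry mapping user -> purposes they opted into.
consent_registry = {
    "user_123": {"analytics"},   # consented to analytics only
}

class ConsentError(Exception):
    pass

def collect(user_id, purpose, payload, store):
    """Store user data only if the user consented to this purpose."""
    if purpose not in consent_registry.get(user_id, set()):
        raise ConsentError(f"No consent from {user_id} for {purpose!r}")
    store.append({
        "user": user_id,
        "purpose": purpose,
        "collected_at": datetime.now(timezone.utc).isoformat(),
        "data": payload,
    })

store = []
collect("user_123", "analytics", {"page": "/home"}, store)   # allowed
# collect("user_123", "ad_targeting", {}, store)             # raises ConsentError
```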


9. You shall not bear false witness against your neighbour (AKA: Authenticity/Deception, e.g. deepfakes): The harms of deepfakes are troubling, and it can be hard to see the “good” in them, but in the special-effects industry, for instance, they can be very beneficial. Check out this video of David Beckham speaking 9 languages, thanks to the power of deepfake synthesis.

That being said, it is crucial to make sure AI is not used to exploit users by creating an illusion of truth. Your AI should give users a way to verify the authenticity of the information it presents.
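
One practical building block for authenticity is cryptographic signing: the publisher signs the content, and anyone holding the key can verify it has not been tampered with. Below is a minimal sketch using HMAC from Python’s standard library; it assumes a shared secret, whereas real provenance schemes (such as public-key content credentials) are more involved.

```python
import hmac
import hashlib

SECRET_KEY = b"replace-with-a-real-secret"  # assumed shared secret

def sign(content: bytes) -> str:
    """Produce a tag the publisher attaches to the content."""
    return hmac.new(SECRET_KEY, content, hashlib.sha256).hexdigest()

def verify(content: bytes, tag: str) -> bool:
    """Check the content still matches the publisher's tag."""
    return hmac.compare_digest(sign(content), tag)

video_bytes = b"...original media bytes..."
tag = sign(video_bytes)
print(verify(video_bytes, tag))              # True: authentic
print(verify(b"...tampered media...", tag))  # False: altered or fake
```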

10. You shall not covet (AKA: Job loss and wealth inequality): According to a McKinsey Global Institute report, by the year 2030, up to 800 million people could lose their jobs to AI-driven automation. These numbers are worrisome, and 2030 is just 10 years away! To plan for this, consider the idea of planting more trees for every tree that is cut down: think about the creative new jobs and skills that can emerge as a result of the AI system being developed. For example, Machine Learning engineers could teach radiologists how to re-train cancer-detecting AI programs. This would give radiologists new skills and ensure the AI system is a tool that augments their work rather than one that takes it away.

That’s it! I hope you found some of these points helpful and practical for your next AI project.
