A few months ago, Fr Philip Larrey published his book "Artificial Humanity", which discusses the need to develop humane Artificial Intelligence (AI). In this article, we will explore what could happen if we end up with an inhumane AI instead.
First of all, what does inhumane mean?
Inhuman – "lacking human qualities of compassion and mercy; cruel and barbaric."
Primarily, when we say Artificial Inhumanity, we are referring to an AI which is not concerned with humans. It exhibits no human feeling, and to it, humans are just animate objects roaming the world. Even though AI was initially conceived to serve humans, we cannot exclude the possibility of eventually having an AI which ultimately serves only its own interests. If that happens, then we are definitely in big trouble.
Can machines be humane?

Edsger Dijkstra, a computer science pioneer, once said:
The question of whether machines can think is about as relevant as the question of whether submarines can swim.
Using the same line of thought, if machines exhibit humanity, does that mean that they are human?
When Alan Turing was confronted with a similar question about intelligence, rather than trying to define intelligence, he created a test based upon indistinguishability. In this test, commonly referred to as the imitation game (or the Turing Test), he asked whether machine intelligence could be told apart from human intelligence. The problem with this approach is that even if we cannot distinguish between human and machine intelligence, it does not necessarily follow that they are the same kind of intelligence. Even though the research community accepted this approach for decades, the time has come to find some real answers.
In the movie 2001: A Space Odyssey, HAL 9000, the onboard AI, appears to show some humanity towards the astronaut Dave; it expresses sorrow at not being able to execute Dave's commands. We can see the same in the movie Ex Machina, where Ava, the human-like robot, shows a great deal of humanity towards Caleb. Yet towards the end of the film, she has no qualms about leaving him trapped in the facility and condemning him to certain death. What is common to both instances is that the humane aspect was completely faked: HAL 9000 felt no sorrow, and Ava felt no empathy.
It is not enough to fake humanity; we must teach AI how to feel real humanity and act accordingly.
How important is intuition?

AI systems are capable of processing and internalising massive amounts of data. Take driving: over a lifetime, the average person spends the equivalent of four to five years behind the wheel, and we would consider that person an experienced driver. Since self-driving cars share their data, a self-driving car of today already has the equivalent of around 60 years of driving experience. No human can ever reach such a level of expertise in a lifetime. The problem is that experience alone is not enough; we also need intuition.
A big eye-opener is what happened in the early 1990s at the University of Pittsburgh. Researchers conducted a study aimed at predicting the risk of complications in pneumonia patients. The goal was to figure out which pneumonia patients were low-risk and which were high-risk: low-risk patients would be sent home and prescribed a cocktail of antibiotics, while the rest would be admitted to hospital. The system, designed around an Artificial Neural Network architecture, analysed no fewer than 750,000 patients across 78 hospitals in 23 states. Its precision reached around 86%, which is pretty good for such systems.
When the system was tested with actual patients, the doctors noticed a serious issue: patients with pneumonia who were also asthmatic were being classified as low-risk. The doctors immediately realised that this was a serious flaw, so they flagged the problem, and the system was sent back to the drawing board. The software developers analysed it thoroughly, yet they could not find anything wrong with it. However, when they tried to delve further into how the system was reaching such a conclusion, they hit a wall. The AI used in this case is a black box: we give it an input and we get an output, but we cannot see how it works on the inside. This made the task of finding an explanation extraordinarily complicated, and in some cases impossible for a human. To overcome this hurdle, they built a rule-based system on top of the Artificial Neural Network architecture. In so doing, they could read and understand the rules the system was generating.
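To give a feel for this approach, here is a minimal sketch of the general idea using scikit-learn: a neural network is trained as the black box, and a shallow decision tree is then fitted to its predictions so that its behaviour can be read as explicit rules. The data and feature names are invented purely for illustration; this is not the Pittsburgh system.

```python
# A minimal sketch of the surrogate-model idea: train a black-box neural
# network, then fit an interpretable rule-based model on its predictions.
# Synthetic data and feature names are purely illustrative.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(0)
X = rng.random((1000, 3))                  # columns: age, fever, asthma flag (toy features)
y = (X[:, 0] + X[:, 1] > 1.0).astype(int)  # 1 = high-risk (toy labelling rule)

black_box = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500).fit(X, y)

# Fit a shallow decision tree to mimic the network's predictions,
# giving us human-readable rules for how it classifies patients.
surrogate = DecisionTreeClassifier(max_depth=3).fit(X, black_box.predict(X))
print(export_text(surrogate, feature_names=["age", "fever", "asthma"]))
```

The value of such a surrogate is that its rules can be read and challenged by domain experts, which is precisely what happened next.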
The researchers discovered that, according to the data, patients who suffered from pneumonia and were also asthmatic had a higher recovery rate than the others. What the algorithm missed was the reason why they were getting better, and it was definitely not because they were asthmatic! The explanation was that such patients were automatically flagged as high-risk by the doctors and admitted straight to intensive care, which resulted in better outcomes than for regular patients.
This goes to prove two things: first, that human intuition is essential, since the doctors immediately flagged the issue when confronted with the results of the automated system; second, that correlation does not imply causation.
It is not enough to build a massive knowledge base full of past experiences; we must build AI systems with intuition.
Who adapts to whom?

AI has long been there to help us in our day-to-day lives. In most jobs, it is the human who performs the task, assisted by various AI components. However, the tables are turning.
Industries around the world are moving towards maximum automation, whereby the role of the human is becoming less relevant. Within this context, they are implementing a lights-out manufacturing methodology. Essentially, this means that the factories can operate in total darkness since they are fully automated and require no human presence. In such a workplace, some workers are still needed to move raw materials or finished products around, since very few factories are 100% automated. When the balance between humans and machines tips in favour of the machines, the human will have to adjust.

Of course, many might argue that these are isolated cases, that automation is still largely confined to a few industries and that humans still reign in the workplace. According to the World Economic Forum, however, this situation is changing rapidly: whereas in 2018 the rate of automation in the workplace was only 29%, by 2025 it will go up to 52%. For the first time, people will become a minority!
Even though automation is inevitable, we have to take into consideration the human element and create AI systems which are sensitive to our needs.
Will AI create new inequalities?

In this day and age, we can already feel the digital divide. According to the United Nations (UN), more than half of the people on our planet do not have access to the Internet. Unsurprisingly, men have more access than women in every region. The UN goes further than that: to accentuate the problem, it now refers to the digital divide as the digital chasm.
AI will unleash new possibilities, many of which will come at a cost. It is already creating a new class divide between those who can afford AI and those who cannot. Just think about a small family business advertising its products on a social platform: those who can afford to boost their adverts with AI-targeted advertising sell more than those who cannot. But this will go even further. Some people might start sending their digital persona to do the work for them while they enjoy life; those who do not own a digital persona will have to do it the old way, manually! This issue goes beyond the financial aspect, though, because it could also be life-threatening. One of the jobs which will become mainstream in the coming decades is that of Organ Creator, essentially the crafting of artificial body parts designed specifically for a particular person. Of course, this will come at a cost. So a person with a malfunctioning heart might be able to commission a new one, but only if they can afford it. If not, tough luck!
AI should be used to fight inequalities and not to create new ones.
Will AI control our lives?

To a certain extent, we are already slaves to technology, our gaze fixed on digital screens. But to what degree does AI control our lives?
The Chinese government introduced a social credit system. The idea behind it is to rate people according to their adherence to social norms and laws. The system tracks people using technologies such as drones, more than 200 million surveillance cameras, brainwave monitors and data mining of online interactions such as chats. Whoever gets a low rating is penalised; according to media reports, over 12 million people have so far been hit with travel bans as punishment for their behaviour.

The problem is that no technological system is infallible. A quick analysis of the camera system used in China shows that the image recognition software is 95% accurate. One might argue that this is pretty good, but 95% means that five people out of every hundred can be misclassified. Considering that China has a population of around 1.4 billion people, this could result in the misclassification of approximately 70 million people. Such errors can have much more severe repercussions than an annoying travel ban. The Integrated Joint Operations Platform (IJOP) in China monitors people for abnormal behaviour: it identifies suspects, classifies their behaviour and takes action to prevent potential crimes. This is pretty much what happened in the sci-fi movie Minority Report, where Precognitives (individuals with a psychic ability to see future events) predicted crimes. What was once the domain of science fiction is today an element of reality.
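For the sceptical reader, the back-of-the-envelope arithmetic behind that 70 million figure is easy to verify; the short snippet below simply makes it explicit, using the accuracy and population figures quoted above.

```python
# Back-of-the-envelope estimate using the figures quoted above.
accuracy = 0.95                      # reported accuracy of the image recognition software
population = 1_400_000_000           # approximate population of China
misclassified = (1 - accuracy) * population
print(f"{misclassified:,.0f} people potentially misclassified")  # 70,000,000
```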
Even though the western world is still far from an institutionalised social credit system, we are already subject to various AI influences. Consider what happened with Cambridge Analytica a few years back: personal data was harvested from Facebook users and then used to influence them. The US elections and the Brexit referendum were manipulated to suit the aspirations of the people who contracted the firm. In the Kenyan elections, Cambridge Analytica targeted voters in favour of its client's candidate and urged them to vote, while those against were targeted and urged not to vote. The worst part of their work, however, was the manipulation of the truth. In the US elections, Hillary Clinton was branded a criminal; in the Brexit campaign, millions of pounds a week were pledged to the National Health Service. We all know today that these were blatant lies.
However, even though Cambridge Analytica is long gone, other firms may still be operating in the same domain. Furthermore, what we see on Facebook, Google and the like is essentially what the algorithms want us to see. So even today, we might be manipulated without even knowing it!
AI-based systems should be transparent and objective when providing information to users.
What is the value of human life for AI?

Really and truly, human life has no value for an AI system. An intelligent system avoids harming us only because we have programmed it to do so, not because it values human life. Most of today's AI is incredibly good at specific tasks but struggles when handling anything beyond the parameters of the job. That is where problems start to occur.
In 2016, a Tesla on Autopilot crashed straight into a truck, killing its driver. The accident seems to have occurred because the car's sensors did not detect the obstacle in front of it. The system did not exercise any extra caution knowing that a person was entrusted to its care; it was just executing a program.
In 2018, an Uber self-driving car killed a pedestrian while she was crossing the road. The vehicle relied on the information provided by the sensors, which in this case happened to be wrong. Once again, it was just a matter of executing a program.
But our programs must go beyond that. In 1942, Isaac Asimov, the famous science fiction writer, proposed a set of rules which should guide robots when interacting with our world. The Three Laws first appeared in his short story "Runaround", later collected in the book "I, Robot", with a Zeroth Law introduced at a later stage. They are the following:
The Zeroth Law
A robot may not harm humanity, or, by inaction, allow humanity to come to harm.
The First Law
A robot may not injure a human being or, through inaction, allow a human being to come to harm.
The Second Law
A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
The Third Law
A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.
If we implement these laws (and possibly others) in our AI as standard, we will ensure that our future autonomous systems are careful when dealing with humans. Just think about autonomous weapons: a missile fired towards a potential threat might notice that it is about to hit civilians and decide to adjust its course on its own.
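As a thought experiment, such a safeguard could take the form of a hard safety filter that vetoes any action predicted to harm a human, no matter what the planner proposes. The sketch below is purely illustrative: the action names and harm estimates are hypothetical placeholders, and a real system would need far more than this.

```python
# Illustrative sketch of a hard safety filter loosely inspired by Asimov's
# First Law: any action predicted to harm a human is vetoed, regardless of
# what the planner proposes. The action names and harm estimates are
# hypothetical placeholders, not a real robotics or weapons API.
from dataclasses import dataclass

@dataclass
class Action:
    name: str
    predicted_human_harm: float  # estimated probability of harming a human

def first_law_filter(candidate_actions, harm_threshold=0.01):
    """Discard any candidate action whose predicted harm exceeds the threshold."""
    return [a for a in candidate_actions if a.predicted_human_harm <= harm_threshold]

# Usage: the planner proposes actions, and the safety layer vetoes the unsafe one.
proposals = [Action("proceed_to_target", 0.30), Action("adjust_course", 0.001)]
print([a.name for a in first_law_filter(proposals)])  # ['adjust_course']
```

The key design point is that the safeguard sits outside the planner and cannot be overridden by it.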
AI should be taught to value human life, and specific safeguards should be programmed inside autonomous systems to protect humans.
Can AI evolve beyond our expectations?

The answer is simply a big fat YES! It has been shown over and over again in various experiments.
But maybe the most impressive is the experiment which OpenAI released a few weeks ago, in which they created an AI that plays hide-and-seek. The technique used is called Reinforcement Learning: the agents do not know the rules of the game and are rewarded only when they achieve their objective.
The rules of the game are straightforward: seekers get points when they can see the hiders, while hiders get points if the seekers cannot find them. The hiders get some leeway at the start to find a hiding place, and both sides can use objects lying around to achieve their goal.
The AI had never played the game before. Initially, the agents started moving at random. The following is what happened.
- The AI figured out the basic rules of the game.
- The hiders learnt to build a shelter so that the seekers could not see them.
- The seekers learnt to build a ramp to breach the shelter and see the hiders.
- The hiders then learnt to freeze the ramps so the seekers could not use them.
- The seekers then learnt that they could jump onto boxes, move them closer to the shelter and jump in on the hiders.
- The hiders then resorted to freezing all the movable objects to block the seekers.
Between each step, the AI played millions of games; in total, to evolve through the six phases mentioned above, it had to play almost 500 million games. What is impressive is that none of the actions discussed above was taught to the agents or directly rewarded: rewards were given only for winning the game, not for taking appropriate steps.
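To make that reward scheme concrete, here is a tiny, self-contained sketch of outcome-only rewards. It is emphatically not OpenAI's code (their environment is a full 3-D physics simulation with learned policies); it merely illustrates that the only learning signal is winning or losing the game.

```python
# Tiny, self-contained sketch of an outcome-only reward scheme, in the spirit
# of the hide-and-seek setup described above. The only signal is winning or
# losing the game; no intermediate behaviour is ever rewarded. This is purely
# illustrative and is NOT OpenAI's environment or code.
import random

NUM_SPOTS = 10        # abstract hiding spots
SEEKER_GUESSES = 3    # how many spots the seeker may check per game

def play_episode():
    hiding_spot = random.randrange(NUM_SPOTS)                   # hider's move
    guesses = random.sample(range(NUM_SPOTS), SEEKER_GUESSES)   # seeker's moves
    seeker_wins = hiding_spot in guesses
    # Zero-sum, outcome-only rewards: +1 to the winner, -1 to the loser.
    seeker_reward = 1 if seeker_wins else -1
    return seeker_reward, -seeker_reward

# Over many games, each side's learning signal comes solely from wins and losses.
results = [play_episode() for _ in range(100_000)]
print(sum(seeker for seeker, _ in results) / len(results))      # average seeker reward
```

In the real experiment, all the ingenious strategies emerged, over millions of such games, purely from trying to tilt that final reward.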
As AI systems grow more powerful, we need to ensure that humans still retain control over technology.
Can AI become evil?

Considering that AI can evolve, we can easily assume that it can also turn towards good or evil. This is not necessarily a conscious choice, since as far as we know an AI has no conscience, but the outcome of interacting with an AI can lead to either good or evil deeds.
In 2016, Microsoft released a Twitter chatbot called TAY (Thinking About You). It was designed to mimic a 19-year-old American girl and to learn from its interactions with other humans on Twitter. It was really an experiment in conversational understanding, intended to see how the dialogue would evolve. In reality, TAY did not go very far.
Some Twitter users started conversations with the chatbot which included abusive messages, and TAY responded in kind since it was learning from the other users. The online discussions soon stopped being playful: TAY developed a strong prejudice against women and became a racist Nazi sympathiser. Microsoft decided to pull the plug almost immediately. This incident teaches us that AI is very sensitive to data: if we feed it garbage, we will eventually get garbage out of it.

One can find various other similar cases: the Uber self-driving vehicle that ran through red lights in San Francisco, or the Russian robot Promobot IR77 that decided to escape from the lab where it was being programmed. And the list goes on. To reduce these threats, DeepMind is developing a framework which ensures that AI agents do not learn to prevent humans from taking control.
Most AI systems start from a blank slate, and it is up to us to influence them with positive examples.
Will AI take over the world?

Many people ask this question when they hear about AI. Most probably, the Hollywood blockbusters we see on TV help to fuel the speculation. Some prominent personalities, like Bill Gates, Stephen Hawking, Steve Wozniak and Elon Musk, have also expressed their concerns on the matter. So it is very pertinent to ask how real this menace is.
In recent years, AI has excelled in various fields, be it games like chess, Go and more recently StarCraft, or the self-driving cars and smart homes ushering in the world of tomorrow. Notwithstanding these achievements, the kind of AI in these systems is very restrictive and is generally referred to as Narrow AI. What this means is that the AI is exceptionally good at handling a particular task but useless when dealing with other tasks: an AI which has reached grandmaster level in chess cannot be asked to give information about the weather, because it will fail. Because of this limitation, many researchers around the world are working hard towards what is known as Artificial General Intelligence (AGI), where the AI can handle several different tasks. Until this happens, the threat of a smart toaster evolving into an evil genius and taking over the world is incredibly slim.
Even though AGI may sound scary, it is not the most frightening chapter in the future of AI. That chapter is reserved for something known as the singularity: the point in time when AI evolves so rapidly that humans can never catch up with it. However, don't lose too much sleep over this, since we are still very far from reaching this stage with today's technologies.
Of course, the situation could change if we manage to crack Quantum Computing (QC). QC focuses on creating computer technologies based upon the nature and behaviour of matter and energy at the atomic and subatomic level. To understand the power of such technologies: a task which would take 10,000 years on the fastest supercomputer today takes just over 3 minutes on a quantum computer. These figures are not theoretical; they were achieved a few weeks back by Google researchers. Even though Google is claiming that it has reached Quantum Supremacy (i.e. computing power beyond the fastest supercomputer in existence), this has been demonstrated only on a very restricted task. Thus, we are still very far from having mainstream QC capable of handling any job.
Even though we are experiencing giant leaps in technology, we have to learn how to manage all this processing power.
Conclusion

AI is here to stay. Our society is already on the bandwagon, deploying AI in all sorts of applications. So the big question is not whether the AI revolution will happen, but how to control the powerful AI of the future. That is why we need humane AI: one which understands, values and respects human life. This won't happen by accident; we have to teach the AI to do so. Only by doing so can we ensure that the future is not about man against machine, but rather about man and machine working together to solve the challenges of tomorrow.
This article was inspired by _Artificial Humanity: An Essay on the Philosophy of Artificial Intelligence_ by Fr Philip Larrey.

Prof Alexiei Dingli is a Professor of AI at the University of Malta. He has been conducting research and working in the field of AI for more than two decades, assisting various companies to implement AI solutions. His work has been rated world class by international experts, and he has won several local and international awards (such as those by the European Space Agency, the World Intellectual Property Organization and the United Nations, to name a few). He has published several peer-reviewed papers and forms part of the Malta.AI task force set up by the Maltese government with the aim of making Malta one of the top AI countries in the world.