Thou Shalt Not Fear Automatons

In this article, I will show that, contrary to what the pundits would like you to think, it’s our ignorance we should fear, not the intelligence of machines.

Mikko
Towards Data Science


TL;DR

The imminent danger of Artificial Intelligence has nothing to do with machines becoming too intelligent. It has to do with machines inheriting the ignorance of people.

Background

Whom should we believe about the magnitude of change that comes with Artificial Intelligence? Andrew Ng when he says AI is the next electricity, or Francois Chollet when he says it’s more like the invention of human language? Should we take it seriously when Elon Musk says that we need to fear machines enslaving us, and that the solution is to embed machines into our brains? How about the RAND Corporation talking about AI as the next nuclear armament, or Vladimir Putin arguing that whoever has the best AI will rule the world? How about Roko’s Basilisk: if I did not mention it here, would I have paved my way to eternal suffering? Who is right? Should we be hopeful, afraid, or both?

To answer these questions, I will establish three simple arguments in easy-to-understand terms:

  1. While we can’t know for sure how intelligent machines can be, we know for sure how stupid humans can be
  2. Talking about the ethics of Artificial Intelligence is less important than talking about the ethics of human beings
  3. Artificial Intelligence is and will continue (at least for some decades) to be about process automation, not about autonomy

As a bonus, I will show how we are already, to some extent, ruled by incredibly dumb computer systems: systems created by poorly motivated junior engineers and wannabe data scientists, working under the influence of short-sighted salespeople and venture capitalists.

Humans Suck at Making (Good) Decisions

While we can’t know for sure how intelligent machines can be, we know for sure how stupid humans can be.

That’s right. We suck at making decisions. Consider Daniel Kahneman’s work on System 1 and our understanding of the vagus nerve. The vagus nerve is an ancient part of our nervous system; it has been around for several hundred million years at least. Ever seen a startled lizard dart away? That’s this ancient circuitry in action. In a nutshell, Kahneman shows in his Nobel Prize-winning work that regardless of how rational we think our decision-making process is, when the moment of decision arrives we often disregard the fruit of all our deliberation (even if it took years) and go for whichever option “feels” right. Kahneman calls this “thinking fast,” or System 1, in contrast to how we think we are thinking, which he calls “thinking slow,” or System 2. One way of understanding this is that in the more superficial parts of our brain we have a lot of capacity for rational processing of information, but very little authority to use that information for making decisions. In the very deep part of the brain where the vagus nerve originates, we have no rational capacity, but a lot of authority.

And it’s not just Kahneman; the field of psychology is full of theories just like his. Indeed, you would have a hard time finding theories that make a strong case for humans being good at making decisions. This takes us to one of the critical risks related to AI: humans making decisions about artificial-intelligence-based decision-making systems.

How Artificial Intelligence Works

Artificial intelligence solutions can be reduced to two aspects: the data inputs, and the means by which those inputs are processed. Very simply put, the result coming out of a given system is entirely determined by the inputs and processes associated with that system. Fundamentally speaking, there is always an algorithm underpinning a decision a computer system makes. This is true for all kinds of computer systems. Consider a simple example in Python code:

1 + 1

In this case, based on the algorithm, the system decides to output the integer 2. Every computer system, regardless of how “artificially intelligent” it is, boils down to this fundamental principle which can be summarized as:

input + process = decision

Computers are not at all like humans in the way they make decisions. Theirs is stone-cold rationality, a perfect marriage of capacity and authority. In other words, capacity and authority are inseparable within a computer system.
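To make the principle concrete, here is a minimal sketch in Python (the data and the rule are entirely made up) of a “learned” decision: the process is a threshold derived from example data rather than a hand-written rule, but the decision is still fully determined by the input and the process.

# A minimal sketch (hypothetical names and data) of "input + process = decision"
# where the process is learned from examples instead of being hand-written.

def learn_threshold(examples):
    """Derive the 'process': the largest amount that was approved in the past."""
    approved = [amount for amount, ok in examples if ok]
    return max(approved)

def decide(amount, threshold):
    """input (amount) + process (threshold comparison) = decision."""
    return "approve" if amount <= threshold else "reject"

history = [(100, True), (250, True), (900, False)]  # made-up past decisions
threshold = learn_threshold(history)                # the learned "process"

print(decide(200, threshold))  # -> approve
print(decide(500, threshold))  # -> reject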

Transfer Ignorance

One significant difference between the above example and a modern deep learning-based machine intelligence system is that humans can’t audit the latter in the way we can audit the former. This is a big deal when we consider the shortcomings we humans have.

Why is that a big deal? We could call this problem “transfer ignorance.” We introduce our irrationality to a system that is expected to make rational decisions, and guess what: the system starts making irrational decisions. The real issue arises when we don’t know that this is the case; inside a computer system, irrationality is easily disguised as rationality. Because the output of a typical deep learning-based automated decision system is something we can’t validate (the way we can validate 1 + 1 = 2), and we don’t know why we get the result we get, we can end up victims of our own irrationality. Framing the problem of AI this way is an important starting point, because the issue then becomes known and manageable: checks and balances can be put in place to avoid passing our negative qualities on to the machine. The question here is simple:

How do we regulate our own behavior in a way that minimizes the transfer of our irrationality to systems that are expected to make rational decisions?

Here it’s very important to understand that the fact that we can’t precisely know why something in a deep learning model works the way it does is exactly what we want. That is how we go beyond what we humans can do on our own. For a system to be something we can fully audit and explain, it has to be simple enough for us to audit and explain, and that is not what we want, as it would severely limit our ability to use the system to extend our current capabilities.
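As a purely illustrative sketch of transfer ignorance, consider the toy example below. The “historical” decisions are invented and deliberately skewed; the point is that the learning code never mentions the irrelevant attribute, yet the resulting model reproduces the skew, and nothing in its learned parameters announces that fact.

from collections import defaultdict

# Invented historical decisions: (qualification score, irrelevant attribute, hired?)
history = [
    (8, "group_a", True), (7, "group_a", True), (5, "group_a", True),
    (8, "group_b", False), (7, "group_b", False), (5, "group_b", False),
]

# "Learning": record the outcome of every identical past case.
outcomes = defaultdict(list)
for score, attribute, hired in history:
    outcomes[(score, attribute)].append(hired)

def predict(score, attribute):
    """Decide by the average outcome of identical past cases."""
    past = outcomes.get((score, attribute), [])
    return sum(past) / len(past) > 0.5 if past else None

print(predict(8, "group_a"))  # True  -> inherited preference
print(predict(8, "group_b"))  # False -> inherited rejection, same qualification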

Thou Shalt Not Fear Automation

Next, consider the process of building the pyramids. By some estimates, it took up to 200,000 workers, with thousands or tens of thousands of casualties resulting from the building process. It was a process fueled, and made possible, by human suffering and loss of life. Then consider the city of Dubai, home to many of the most ambitious construction projects in the history of the world. Most projects are completed with zero casualties, and many go without accidents for hundreds of days. The Burj Khalifa, a tower more than 800 meters high, reportedly involved the death of a single construction worker.

Many construction sites in Dubai go hundreds of days without any accidents

How come? It is because, over the past several millennia, we were not afraid to go beyond what we humans can do at a given point in time. We were not afraid of extending our capabilities.

Artificial intelligence is not so much something separate from us as it is something that extends our capabilities. It would be quite useful to be able to think about AI like that: simply as something that is part of us, comes from us, and has the capability of extending our power. When we understand that it is about extending our capabilities, our hopes become more sensible and our fears less of a distraction. For example, we understand that there is a real risk of transferring our ignorance into the systems that we create and, as a result, ending up with a kind of Artificial Ignorance instead of intelligence.

Thou Shalt Not Fear Automation

When something new is invented, or something that was invented a long time ago starts to take off, we often end up with surprising results. The greater the degree to which we are unable to understand (or audit) a given system, the greater the potential for surprising results. Sometimes the surprises are unpleasant ones. Let’s consider the field of healthcare as an example.

Healthcare, like any other field of practice, is a web of interconnected behaviors. For example, if you introduce a metric focused on total intensive care patient days, you will end up with more deaths, as the system will optimize towards getting patients out of the intensive care unit earlier. That is precisely what happened in many Western hospitals when game-theory-based process optimization was introduced by management consultants in the 1980s. I do not doubt that the people behind these ideas sincerely believed they would work, yet the results were what they were: many more people suffered and died.
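Here is a toy simulation, with entirely invented numbers, of the trap described above: the optimizer can only see the measured metric (intensive care patient days), so it pushes towards ever-earlier discharge while the unmeasured harm quietly grows.

def simulate(discharge_day):
    """Return (measured ICU patient days, hidden adverse outcomes) for 100 patients."""
    icu_days = 100 * discharge_day
    # Invented relationship: discharging before day 5 increases harm.
    adverse_outcomes = 100 * max(0, 5 - discharge_day) * 0.04
    return icu_days, adverse_outcomes

# "Optimization" that can only see the measured metric.
chosen_day = min(range(1, 8), key=lambda day: simulate(day)[0])

print("chosen discharge day:", chosen_day)                  # -> 1 (minimizes measured days)
print("hidden adverse outcomes:", simulate(chosen_day)[1])  # -> 16.0 (invisible to the metric)
print("outcome at day 5:", simulate(5))                     # -> (500, 0.0)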

Algorithms and Zero-Sum Game Theory

Algorithm design is heavily influenced by the principles of the zero-sum side of game theory.

Let’s consider a nation-state with a national universal healthcare platform and a debt-ridden economy. In such countries, there are a lot of time-, resource-, and objective-pressed decision-makers. In other words, there are many people in positions of power who do not have enough time, money, and other vital resources, but have too many objectives to meet.

By now, the decision-makers in those countries have had their share of the AI hype. Here we come to witness the problem with believing things like “AI is the next electricity.” Hearing something like this makes us think about electricity and all the benefits we’ve gotten from it. If you’re working in healthcare, the risk is then reacting by thinking “look at how much electricity changed healthcare” and having that become a major influence on your attitude towards Artificial Intelligence. That is a recipe for disaster. Artificial Intelligence could not be further from electricity in the sense of what it enables. In one way, it enables both positive and negative outcomes so far beyond the scope of electricity that I doubt there is anyone in the world who really “gets it.” The other way to look at it is that artificial intelligence is just another electricity-enabled thing. Either way, it is completely wrong, and dangerously misleading, to say that AI is the new electricity. The new electricity would be something that provides light, heat, motion, and so forth.

Back to the national healthcare example…

The individuals, and the committees formed by individuals, who are right now making decisions about the use of machine intelligence in national healthcare systems are under tremendous pressure to deliver. Generally speaking, they are under pressure to perform against economic goals and national health outcomes. A typical career politician barely understands healthcare (or economics), so it will not come as a surprise that, with very few exceptions, they are utterly lost when it comes to artificial intelligence. What are they influenced by? That’s right: the hype. It is difficult to see how, under such conditions, non-experts bombarded with endless hype by self-proclaimed (and sometimes other-proclaimed) artificial intelligence experts could make the right decisions. On the other hand, it’s straightforward to see how they end up making all the wrong ones.

Regarding the given example, there is something vitally important to understand about systems in general: when you change something, the adverse effects might appear 10, 20, or 100 years later. Generally speaking, the more complex the system and the more ambitious the change project, the more this holds true. Instead of focusing on worst-case scenarios such as the annihilation of the human race by a superintelligence, we should be very worried about the subtle, often hidden systemic effects of countless dumb machines.

Morality and Ethics in Artificial Intelligence

Talking about the ethics of Artificial Intelligence is less important than talking about the ethics of human beings

In the same way that the machine intelligence solutions we design inherit our irrationality and stupidity, they inherit our immorality and lack of ethics. As it stands, we run a real risk of creating Artificial Immorality.

One way of looking at the topic of morality is through two compatible works: Immanuel Kant’s Categorical Imperative and Kohlberg’s model of moral development. Kant formulates the Categorical Imperative as “acting only according to that maxim whereby you can at the same time will that it should become a universal law.”

In Kohlberg’s work, we can appreciate how difficult it is to live up to the standard of Kant’s imperative. It can be particularly tricky in an organizational setting (where decisions about Artificial Intelligence will mostly be made) to rise above the 4th stage, the law-and-order orientation, with occasional glimpses of the 5th. Unfortunately, many choices are formed on the premise of the 1st stage: if I do this, will something negative happen to me personally?

There is a critical intersection between irrationality and immorality: they amplify each other’s harmful effects. To address this, and to formally train themselves in dispelling both, the wise of the world used to place a significant focus on the study of philosophy: most importantly morality, ontology (the study of being), epistemology (the study of knowing), and logic. Such an approach would be a useful companion for artificial intelligence researchers and developers.

Leibniz famously said:

Without mathematics we cannot penetrate deeply into philosophy. Without philosophy we cannot penetrate deeply into mathematics. Without both we cannot penetrate deeply into anything.

And the statesman John Adams said:

I must study politics and war that my sons may have liberty to study mathematics and philosophy.

In the healthcare context, neither the government and healthcare professionals nor the computer and data scientists involved are trained in morality, and they often lack even the most basic understanding of ontology, epistemology, and formal logic. Moreover, this is not just an AI-related problem; in physics there is an increasingly vocal debate on the role of philosophy and whether it has any significance at all. Here we find the second important thing we should be afraid of: the risks that come from disconnecting the discovery of new ideas from philosophy, for philosophy is the science of discovery and ideas.

We’re already seeing the results of human immorality being transferred to decision-making systems. We have seen, through examples in financial markets, online advertising, and other early adopters of algorithmic decision-making, how we tend to end up with so-called greedy algorithms that ruthlessly optimize towards a given end without caring about anything else. Unlike, for example, healthcare professionals, such algorithms are not afraid of losing their livelihood and reputation as a result of making the wrong decision, decisions that may end up hurting people. It is this type of risk we should be worried about, as opposed to a superintelligence that will want to dominate or wipe out the human race. Some of the wildest fantasies go as far as trying to assure us that it is just a matter of time before we see a superintelligence that will first wipe out humans and then expand throughout the galaxy.

The Autonomy Fallacy

Artificial Intelligence is and will continue (at least for some decades) to be about process automation, not about autonomy

At the core of AI fear-mongering is the confused idea of an autonomous superintelligence. In this regard, we have been severely mistaken in two important ways: we have started to use the word autonomous in place of automated, and we have forgotten how computer programs work. As a point of reference, take a look at the current state of the deep learning community. Most of the work focuses on simple process automation and on creating ever more complex approaches to improve the state of the art of those automations. While it’s pretty incredible what one can do with a few lines of deep learning code today, there is as much human-like intelligence in the resulting predictive model as there is in a toaster. There is, however, a far greater degree of automation in a deep learning model than there is in a toaster. It seems fair to say that the ways in which deep learning models can extend human capabilities go far beyond what a toaster can do. But is there an inherent property of superintelligence in a deep learning model that is not there in the toaster? There is not.
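To see why, consider what a trained deep learning model actually is at prediction time: a fixed sequence of arithmetic over fixed numbers. The weights below are made up, and a real model has millions of them, but the character of the computation is the same; it is automation, through and through.

import math

# Invented, "frozen" weights; in a real model there would be millions of them.
W1 = [[0.2, -0.5], [0.8, 0.1]]
W2 = [0.6, -0.3]

def predict(x):
    """One tiny forward pass: weighted sums, a ReLU, and a sigmoid. Nothing else."""
    hidden = [max(0.0, sum(w * xi for w, xi in zip(row, x))) for row in W1]
    logit = sum(w * h for w, h in zip(W2, hidden))
    return 1.0 / (1.0 + math.exp(-logit))

print(predict([1.0, 2.0]))  # same input, same output, every single time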

As a tongue-in-cheek example of how close to superintelligence we might be, here is what a state-of-the-art deep learning-based spell checker thinks about deep learning.

Deep learning might be a seed with the propensity for much more, even for some form of intelligence we can’t yet imagine, but it is just a propensity. It is also possible that it does not contain such a propensity. Deep learning, after all, is a track of development that formally began in the late 1950s.

Superintelligence

First, it is important to understand that the only way to achieve superintelligence is through autonomy. It cannot happen through mere automation.

Go back to the previous point about the possibility of the current track containing the propensity for autonomy, and therefore superintelligence. Just like an apple seed needs soil, moisture, nutrients, and light before it has a chance of growing into a sprout, a superintelligence would need the causes and conditions that allow it to manifest. Today, nobody can convincingly explain what those causes and conditions would be. For example, beating humans at a game by spending tens or hundreds of millions just to achieve that has nothing to do with it. Spending hundreds of millions or billions on brute-force techniques for training massive language models also has nothing to do with it. Nothing that we’ve seen or heard so far has anything to do with it.

The best way to understand superintelligence is as the science-fiction worst-case scenario for computational intelligence. Putting aside technical feasibility, and whether the propensity for superintelligence is even there in the track human society is currently on, how often do worst-case scenarios manifest? Think about nuclear weapons, where the worst-case scenario is a nuclear holocaust. Can you find some other technological advancement where the worst-case scenario actually materialized?

When it comes to the topic of superintelligence, I have to agree with Marc Andreessen on his comment:

What we do know is that we have an identifiable and addressable issue with our ignorance and our immorality. Furthermore, we have clear evidence of our tendency to transfer both into the systems we create. That is what we should be worried about, and we should be very worried. As it stands, we don’t even know how to ask the right questions. Here, I wholeheartedly agree with Francois Chollet when he says:

The most important questions we need to ask ourselves related to AI are not even related to AI. More than anything, we have to ask questions about systemic effects. Going back to healthcare, we might think of an allergic reaction to a medicine as a good example of an adverse effect, but many adverse effects take years, decades, or even centuries to arise. Because of the difficulty of causal analysis, we generally can’t precisely know what contributed to a certain outcome. This means that in some cases it would be almost impossible to know whether a given AI solution is contributing positively in the long term, or just driving short-term efficiency while causing damage in the long term. The analysis of causality seems to be one of the more interesting, and less discussed, uses of AI, and a mission-critical requirement for the systems surrounding such solutions.
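Here is a toy illustration, with invented counts, of why causal analysis is so hard. Suppose a hypothetical “AI triage tool” is used mostly on the sickest patients: the naive comparison makes the tool look harmful, while within each severity group it is associated with fewer deaths. Observational numbers alone cannot tell us which reading reflects the system’s real contribution.

# (severity, tool used, number of patients, deaths) -- all counts invented
data = [
    ("mild",   True,  10,  1), ("mild",   False, 90, 18),
    ("severe", True,  90, 45), ("severe", False, 10,  6),
]

def death_rate(rows):
    patients = sum(n for _, _, n, _ in rows)
    deaths = sum(d for _, _, _, d in rows)
    return deaths / patients

with_tool = [row for row in data if row[1]]
without_tool = [row for row in data if not row[1]]

# Naive comparison: the tool looks harmful (0.46 vs 0.24).
print("naive:", death_rate(with_tool), "vs", death_rate(without_tool))

# Stratified by severity: the tool is associated with fewer deaths in both groups.
for severity in ("mild", "severe"):
    print(severity,
          death_rate([r for r in with_tool if r[0] == severity]), "vs",
          death_rate([r for r in without_tool if r[0] == severity]))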

Death By a Thousand Algorithms

Instead of Skynet, we’re rapidly moving towards a world in which legions of incredibly stupid machines rule us. These machines are created, maintained, and optimized by poorly motivated junior employees, engineers and would-be data scientists, who work under tremendous pressure and are influenced by the short-sightedness of salespeople and venture capitalists. To make things worse, they tend to change jobs every two years and generally have neither the concern nor the ability to understand the systemic changes resulting from their work.

Those algorithms are already everywhere. More than half of all financial market transactions are based on the decisions of simple zero-sum algorithms with names like “bottom feeder,” whose sole purpose might be to create noise in the market without adding any value to it. Within a few years, more than half of advertising will be traded by algorithms that compete with each other against metrics such as “cost per impression,” at the expense of internet users. What we see in our social streams, internet searches, video recommendations, related news sections, and shopping sites is entirely governed by algorithms called “yield optimizers.”

In all of these examples, the algorithms are simple automatons that serve the short-sighted, mostly monetary, interests of those who created and control them. Using the “yield optimizer” as an example, the goal of the algorithm is to capture as much of the internet user’s attention and time as possible and convert it into advertising revenue. In a sense, it is a form of cognitive harvesting: a process in which very dumb algorithms harvest the cognitive energy of human beings so that a company can make money. In other words, due to our ignorance and immorality, we have created ignorant and immoral algorithms that work towards making us even more ignorant.
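As a deliberately crude, hypothetical sketch of the “yield optimizer” pattern, the snippet below scores items purely by the attention they are expected to capture, converted into advertising revenue, and recommends greedily on that score alone. The catalog and the scoring are made up; only the shape of the logic matters.

# Hypothetical catalog and entirely made-up scoring.
catalog = [
    {"title": "calm documentary",     "expected_minutes": 4,  "ad_slots": 1},
    {"title": "outrage thread",       "expected_minutes": 11, "ad_slots": 3},
    {"title": "autoplay rabbit hole", "expected_minutes": 26, "ad_slots": 7},
]

def expected_yield(item, revenue_per_slot=0.02):
    """Score = predicted attention captured, converted into advertising revenue."""
    return item["expected_minutes"] * item["ad_slots"] * revenue_per_slot

def recommend(items, k=2):
    """Greedy selection: surface whatever harvests the most attention."""
    return sorted(items, key=expected_yield, reverse=True)[:k]

for item in recommend(catalog):
    print(item["title"], round(expected_yield(item), 2))
# -> autoplay rabbit hole 3.64, outrage thread 0.66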

Read about Automated Research Agents here.

PS. Thanks for your time! If you have a few seconds more, please share.

PS. If you are interested in the work I’m doing with AI, check out https://github.com/autonomio/talos

