Rise of the Mindless Machines

Matthias Plaue
Towards Data Science
11 min read · Nov 18, 2018


When people discuss the future of artificial intelligence, a commonly voiced concern is the emergence of an adversarial superintelligence that might spell the end of humankind as we know it. Indeed, we should already be thinking about precautions that allow us to coexist safely with AI once it reaches human-level intelligence, and beyond.

However, a more immediate threat does not get the spotlight it deserves: not that our lives will be controlled by a superintelligent artificial general intelligence, but by mindless agents that merely mimic intelligence, and which perform tasks that they are not adequately equipped for.

For some use cases, inadequate AI might be annoying, but its failures usually do not have a serious impact. For example, I recently took a vacation snap of a bunch of stray cats. The image classifier of the cloud storage service decided that these animals were dogs.

A bunch of “dogs”

In other cases, concerns about inadequate AI are more serious. For example, everyone wants autonomous cars to be as safe as possible. The lives of all road users crucially depend on the algorithms controlling the vehicles making the right decisions.

For the above example applications, we have a good understanding of the mechanics of the problem, and of the consequences should the machine fail to make the correct decisions. Also, we can evaluate the machine’s performance quite well with objective measures (e.g., does the autonomous car manage to stay on the road?) or very intuitive criteria (e.g., that’s not a dog, duh). As such, these problems are well suited to being solved by state-of-the-art statistical machine learning.

However, there are problems where we do not understand the underlying mechanics of the situation very well, and where there exist no obvious criteria for effectiveness. Most importantly, we do not readily grasp the long-term consequences for human society if we were to implement inadequate AI to make those decisions for us on a large scale.

Examples of such decisions are:

  • deciding whom to invite to a job interview,
  • matching people on a dating platform,
  • deciding who is granted a loan,
  • assessing the risk that a convicted offender will reoffend.

In each example, the algorithm may introduce bias with respect to attributes that it would be ethically questionable to discriminate against, such as ethnicity, age, or gender. Frighteningly, these are all decisions that have a great impact on many individuals’ fate and future, and the way we make these decisions will shape society as a whole.

Some experts in machine learning will tell you that in order to make the machine’s decisions reliable and free of undue bias, you only need to tweak your statistical model, run it on the right data, and maybe implement additional rules that help prevent discriminatory decisions; in other words, that these tasks could be adequately performed by state-of-the-art machine learning.
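To make this concern concrete: even the seemingly obvious rule of dropping protected attributes from the training data (“fairness through unawareness”) is easily defeated by proxy variables. The following is a minimal sketch on synthetic data; all numbers and variable names are hypothetical, chosen only to illustrate the effect:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Hypothetical synthetic data: a protected attribute and a strongly
# correlated proxy (think of a neighborhood indicator).
protected = rng.integers(0, 2, size=n)
proxy = np.where(rng.random(n) < 0.9, protected, 1 - protected)

# Historical labels that are biased against the protected group.
label = (rng.random(n) < np.where(protected == 1, 0.3, 0.6)).astype(int)

# Drop the protected attribute and score people using only the proxy,
# e.g., by the observed positive rate within each proxy group.
rate_by_proxy = [label[proxy == g].mean() for g in (0, 1)]
score = np.array([rate_by_proxy[g] for g in proxy])

# The bias survives: scores still differ sharply between the groups.
for g in (0, 1):
    print(f"mean score for protected == {g}: {score[protected == g].mean():.2f}")
```

The model never sees the protected attribute, yet its scores reproduce the historical discrimination almost perfectly, because the proxy encodes nearly the same information.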

However, I will argue that current algorithms cannot even emulate critical features of human reasoning, and that fact alone must make us very cautious before we let these mindless machines dictate decisions that require more than just data crunching.

How mindless machines fail to reason

Suppose that you are surprised by a thunderstorm while on a hike. Will you hide under a tree to wait out the storm? You probably won’t, as you know that you are more likely to be struck by lightning while standing near a tree. Your very life depends on this information.

A machine learning algorithm based on the Bayesian paradigm will agree with your assessment: stay clear of trees, because standing next to a 10-meter tree makes you more than three times as likely to be struck by lightning as you would be otherwise.

Now suppose you are on a stroll through some part of a big city like Berlin, Germany. The same algorithm that warned you about standing next to a tree during a thunderstorm will also tell you to avoid any district where many people of foreign descent live. After all, any of these people you encounter is three times more likely to be a repeat criminal offender.

Depending on your political views, this “line of reasoning” employed by the mindless machine might evoke a feeling of unease.

Finally, you are three times more likely to be killed by becoming tangled in your bedsheets if you eat 50% more cheese than the average person:

http://tylervigen.com/spurious-correlations

For this reason, the mindless machine will strongly recommend that you cut down on that tasty cheddar, which is of course ludicrous advice.
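Note that the arithmetic behind all three recommendations is identical: estimate the ratio of outcome rates between the exposed and the unexposed group from co-occurrence counts. A minimal sketch, with purely hypothetical counts:

```python
def relative_risk(hits_with, total_with, hits_without, total_without):
    """Ratio of outcome rates between the exposed and unexposed groups."""
    return (hits_with / total_with) / (hits_without / total_without)

# E.g., lightning strikes among people standing near tall trees versus
# people in the open (hypothetical counts, for illustration only):
print(relative_risk(30, 1_000, 10, 1_000))  # -> 3.0
```

A mindless agent applies this same formula to trees, districts, demographics, or cheese consumption alike, with no notion of whether the underlying association is causal, confounded, or pure coincidence.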

Each of the above examples uses only one explanatory variable, so they must be considered examples of bad feature engineering. Still, it seems as if the human mind is able to grasp aspects that go beyond simple probabilistic reasoning, aspects that the mindless machines are completely oblivious to.

In the following, I will identify three of these aspects: an awareness of causal relationships, good explanations for these causal relationships, and the ability to make use of both in order to achieve motivated goals.

Causality. One well-known mantra of statistics is “correlation does not imply causation”. It turns out that while there are well-defined statistical measures of correlation, causality is notoriously difficult to define, formalize, or quantify.

One subaspect of causal reasoning is counterfactual thinking: if a really tall tree is struck by lightning, we are convinced that this incident would not have occurred had the tree been less tall. On the other hand, we are convinced that shooting the tall tree with Dr. Evil’s lightning gun will not cause it to shrink (in addition to getting burned, say). Therefore, the tree gets struck by lightning because it is tall, and not the other way around.

Also, we cannot imagine a world where eating copious amounts of cheese would lead to bedsheets becoming death traps.
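The difference between seeing and doing can be made precise; this is the idea behind Pearl’s do-operator. The following sketch simulates a hidden common cause: the two variables are strongly correlated in observational data, yet intervening on one leaves the other untouched. The model and numbers are invented purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000

# A hidden common cause drives both variables; x has no effect on y.
confounder = rng.normal(size=n)
x = confounder + 0.5 * rng.normal(size=n)
y = confounder + 0.5 * rng.normal(size=n)

# Observational world: strong correlation between x and y.
print(np.corrcoef(x, y)[0, 1])  # roughly 0.8

# Interventional world: "do(x = 3)" overrides the mechanism that
# generates x, while everything else is generated exactly as before.
x_do = np.full(n, 3.0)
y_do = confounder + 0.5 * rng.normal(size=n)

# y's distribution is unchanged under the intervention: x does not
# cause y, however strongly the two correlate observationally.
print(y.mean(), y_do.mean())  # both close to 0
```

An algorithm that only ever fits the observational joint distribution cannot tell this model apart from one in which x genuinely causes y; an agent that can imagine interventions can.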

Good explanations. How is it that we are so convinced about these counterfactual statements? After all, we have not performed any (randomized) experiments that involve feeding people cheese while they lie in bed all day. The answer is that causation is conveyed by mechanisms that we can explain well: the moisture in a tree makes it a much better conductor than air, and the taller the tree, the more it presents a path of least resistance for the lightning to follow.

Good explanations are also the reason why we are so convinced that this reasoning can be vastly generalized: you know that you do not want to hide under a tree during a thunderstorm. But you also know that you should adhere to that rule no matter where you are on Earth, and that it applies not only to trees but to other tall, conducting structures as well.

Finally, bringing together these explanations of the world provides us with a good intuition of what eating too much cheese will and will not do to you.

Goals & actions. Once human beings understand cause and effect, they can act on it: rubbing one stick against another produces fire, and the family keeps warm during cold nights and the winter. Add to that the power of a good explanation, and they understand that heat and fire can be produced by other means and for other ends. The deeper and more general those explanations, the more far-reaching the humans’ interaction with their environment and the more powerful the tools with which they achieve their goals. For example, they realize that heat is a form of energy that can be converted to drive mechanical tools such as, say, a steam engine.

In short, human reasoning and decision-making is based not only on observed data but also on a prediction of how our actions would influence the circumstances and processes that produce the data. This is also the basis of any ethical action: in order to contribute to the betterment of society, the agent needs to be aware that his or her actions could contribute to change.

Can current state-of-the-art machine learning actually perform, or at least emulate, the above aspects of rational reasoning and decision-making? Causal inference, for example, is an active research topic in machine learning. However, it is by no means part of the standard toolbox of data scientists in the trenches.

When it comes to ethics, mindless machines do not exhibit any conscience, and currently the only reliable way to make sure that algorithms do not produce unwanted results might be good old-fashioned regulation and auditing.

Finally, all of these critical aspects are very much dependent on each other: you cannot formulate a good explanation without observing and grasping causation, and your actions would be futile without such an understanding. Moreover, other aspects of the human thought process are essential as well: counterfactual thinking requires imagination, explanation requires abstraction (e.g., mathematical modeling), and pursuing your goals by informed action requires even more.

How mindless machines would take control

Let’s sum up so far: while artificial intelligence still lacks essential, fundamental aspects of rational, ethical, and contextual reasoning, there are already efforts to implement the technology for various morally sensitive and context-sensitive tasks such as job recruiting or romantic matchmaking.

Generally, you would only expect broad implementation of the technology once it is mature enough.

However, a common cognitive bias might lead to widespread adoption before proper maturity is reached: the anthropomorphizing of AI. Note that while the current hype around artificial intelligence may wane, the tendency toward anthropomorphism is hard-wired into the human brain and will therefore persist.

Additionally, anthropomorphism is habitually perpetuated by marketing lingo. For example, DeepDream invites its users to “discover amazing new dimensions and visual artifacts from the AI’s consciousness.” When fed with an image, the software might produce something like this:

Cats, dogs, hellspawn?

So what we are told is this: the machine has a consciousness, and it can dream. It is implied that the machine “thinks” and “feels” very much like a person. Even if you do not take this implication at face value, you will still be influenced by being constantly exposed to this use of language.

Now consider this piece of “art” or “dream” produced by a computer:

Ceci n’est pas un mouton électrique

This is a visualization of part of the Mandelbrot set. Although beautiful, complex, and delicate, the procedure by which the image is generated basically just involves repeatedly squaring a complex number and adding a constant: a mathematical rule so simple that it can be fully understood and applied by anyone with first-semester knowledge of mathematics.
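For the curious, here is a minimal sketch of that rule: repeatedly apply z ← z² + c and check whether the iterates stay bounded.

```python
def in_mandelbrot(c: complex, max_iter: int = 100) -> bool:
    """Check whether c stays bounded under repeated z -> z**2 + c."""
    z = 0j
    for _ in range(max_iter):
        z = z * z + c
        if abs(z) > 2:  # escaped: c is certainly not in the set
            return False
    return True

# Crude ASCII rendering of the set on a small grid.
for im in range(12, -13, -2):
    print("".join("#" if in_mandelbrot(complex(re / 30, im / 20)) else " "
                  for re in range(-60, 21)))
```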

Nobody would attribute higher cognitive function to such a simple and arbitrary computational structure. Current state-of-the-art machine learning does not produce higher cognitive function either, of course:

  • While “neural network” refers to a number of sophisticated machine learning algorithms, they are all more closely related to traditional mathematical tools than to an actual biological brain.
  • The complexity of even the largest neural networks (billions of weights) is orders of magnitude below the complexity of the human brain (trillions of neural connections).

To put it crudely, the idea that Siri or Alexa is endowed with more personhood than an ’80s pocket calculator borders on the insane. But that is the idea implied by marketing lingo and public articles and essays, including this very article about “mindless” machines.

This development is not coincidental. People exhibit the need to interact with computers just as they would with other humans: decades of interaction design have led from the command-line interface to speech recognition, and to research in humanoid robots and affective computing, in quite a straightforward manner.

This means that much effort is put into making machines seem human, and therefore, more intelligent than they actually are. Consequently, more trust is put into AI software than is actually warranted. In the end, adoption of the technology might happen faster and more broadly than could be rationally justified.

Furthermore, all the while we anthropomorphize the mindless machines, some might argue that large parts of the world have entered a new age of dehumanization, as well as of antiscience and anti-intellectualism. Should these worries come true, the outlook is quite bleak indeed.

How we can make safe use of mindless machines

In summary, we might have to deal with a different dystopia much earlier than a rogue artificial superintelligence turning humankind into a large puddle of computronium: a swarm of mindless machines will control the social status and well-being of billions of individuals by deciding who will find work, a mate, or a friend, who will receive a loan, or who will go to prison. They will, of course, also control which governments we vote for, and decide over life and death on battlefields around the world.

They will do so without scrutiny or any sense of consequence, as each of them has less cognitive faculty than a fruit fly. They will be everywhere, and they will busily communicate with each other without learning anything new, perpetuating old biases and falsehoods in many feedback loops. Lacking any intrinsic motivation, they are unable to truly understand or explain their environment or actions. Some machines will disguise the fact that they are philosophical zombies, posing as your “teacher”, “friend”, “co-worker”, or “mate”. Other machines will hide from us, and we will not even know that they are there, pulling our strings from some dark, forgotten corner of the ubiquitous computational infrastructure.

What can we do to prevent such a bleak future? Of course, everyone has to do their part, and I conclude with some suggestions:

  • In my opinion, causal inference and reinforcement learning will be key concepts for advancing machine learning research. Understanding causal relationships and interacting with the environment are prerequisites for developing human-level intelligence.
  • In the meantime, we must not leave the regulation and auditing of intelligent algorithms to the major AI companies alone. Data protection laws need to enforce policies that prevent algorithmic bias, globally.
  • Data scientists and machine learning engineers need to agree on ethical guidelines and best practices for building sensitive AI applications.

Finally, as artificial intelligence becomes more and more a part of everyday life, the safety, benefits, and risks of the technology need to become part of the everyday political and public discourse just as naturally as, for example, road traffic safety.

Further reading/viewing

David Deutsch: How close are we to creating artificial intelligence?

David Deutsch: A new way to explain explanation

Sam Harris: Can we build AI without losing control over it?

John Searle: Consciousness in Artificial Intelligence

Francisco Mejia Uribe: Believing without evidence is always morally wrong

More technical exposition

Bill Hibbard: Ethical Artificial Intelligence

Jonas Peters: Lectures on Causality


Trained mathematical physicist, working data scientist. Author of textbooks on applied math and data science.