
5 Reasons Why I Left the AI Industry

I worked for 3 years at an AI company. Now I’ve decided to leave the industry indefinitely.

ARTIFICIAL INTELLIGENCE | DEEP LEARNING | FUTURE

Source: Pixabay

3 years ago the words Artificial Intelligence evoked powerful sensations in me. Entering that world felt like taking a step into the mysteries and secrets of the future. I was mind-blown by the promises of intelligent machines, capable of solving tasks we thought were forever reserved for us. I was diving into the wonders of the mind through the familiar passages of technology.

I had just finished my bachelor’s in aerospace engineering and wanted to leap towards AI. It was late 2017 when I discovered the great Geoffrey Hinton and Andrew Ng. Their lectures on Coursera were the open door that led me to my first job at an AI startup in 2018.

AI promised a lot. So many movies of robots dominating the world and machines enhancing us into demigods. But it didn’t deliver for me. Maybe I was too naive. I believed the mask and never saw the real AI hiding behind it.

After 3 years working in AI, I’ve fallen out of love. And I don’t think I’ll be coming back soon. These are the 5 reasons I left the industry.


AI may not live up to the hype

"AI has gone through a number of AI winters because people claimed things they couldn’t deliver."

  • Yann LeCun, Chief AI scientist at Facebook

Can AI save the world? Can AI solve its most pressing problems? Way before AI spread everywhere, some already thought it would change our lives radically. In 1984, computer scientist Frederick Hayes-Roth predicted AI would "replace experts in law, medicine, finance and other professions."

But it didn’t. Throughout its history, AI has lived through many hype cycles. The downturns that follow them are known as AI winters: AI would fail to meet expectations, provoking a wave of disbelief that eventually caused the withdrawal of research funding.

Since the deep learning revolution in 2012, we’ve seen increased interest in the field. Some still think that AI will change the future. But the question remains: will AI ever live up to its hype? Gary Marcus, AI researcher at New York University, said in 2020 that "by the end of the decade there was a growing realisation that current techniques can only carry us so far." In the words of Geoffrey Hinton, the ‘Godfather of AI’:

"My view is: throw it all away and start again."

I entered the world of AI moved by its promises of intelligent machines and artificial general intelligence around the corner. But it’s not going to happen anytime soon.

Holding expectations that don’t match reality is a recipe for discontent and frustration. And I don’t want that.


AI loses its magic when you look from the inside

When you talk about AI with an outsider, they immediately think of movies such as Terminator or The Matrix and books such as I, Robot or 2001: A Space Odyssey. All depicting scenarios in which AI has amazing abilities and power.

The term Artificial Intelligence was coined by John McCarthy, who didn’t actually like the term. He knew it was a marketing tool to draw public and private attention. Devin Coldewey writes for TechCrunch that AI is "a marketing term used to create the perception of competence, because most people can’t conceive of an incompetent AI." Now that AI has become mainstream, many companies take advantage of the status behind the name.

AI has gone from aiming at uncovering the mysteries of human intelligence in the form of silicon-based entities, to being a buzzword that companies use in their AI-powered products and services. AI has lost, to a large extent, its ambition.

Not long ago AI was a synonym for human-like robots. Machines capable of incredible feats. Able to mimic human emotion and creativity. Capable of planning and displaying common sense. Now it’s a synonym for data. If you work in AI you are most likely collecting data, cleaning data, labeling data, splitting data, training with data, evaluating with data. Data, data, data. All for a model to say: It’s a cat.
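
In practice, the daily loop looks roughly like the minimal sketch below (scikit-learn with a synthetic dataset standing in for the real one; on a real project, the collecting, cleaning, and labeling steps are where most of the time actually goes):

```python
# A caricature of the day-to-day workflow: almost every line handles data,
# and the "AI" is a single fit() call near the end.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Collecting data (here: synthesizing a toy dataset)
X, y = make_classification(n_samples=1_000, n_features=20, random_state=0)

# Splitting data
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0
)

# Training with data
model = LogisticRegression(max_iter=1_000)
model.fit(X_train, y_train)

# Evaluating with data
accuracy = accuracy_score(y_test, model.predict(X_test))
print(f"Accuracy: {accuracy:.2f}")  # ...all so the model can say "it's a cat"
```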

The marketing power of AI is such that many companies use it without knowing why. Everyone wanted to get on the AI bandwagon. I liked the magical world AI promised, and what I found was a shadow of what could have been.

We’re not even aiming at creating general intelligence anymore. We’ve settled for stupid software that knows how to do extremely specific tasks very well. I hope AI recovers its ambition, so it doesn’t disappoint those who come looking for magic and end up with just fancy math.


Everyone can do AI now

That heading is hyperbole, of course. I can still find more people who don’t know a thing about AI than people who know it’s everywhere – which amazes me. However, when you look within the realm of computer tech, AI is everywhere.

Not long ago, AI was a general, broad term that encompassed many areas. One of those was machine learning (ML), which in turn was divided into different branches including deep learning (DL). Now, I can safely say that, for most, AI = ML = DL.

Deep learning has taken over the world of tech and computer science. Why? Because of its unreasonable effectiveness. Neural nets are good at what they do. They do it so well that everyone is trying to grab a slice of the pie.

"Now that neural nets work, industry and government have started calling neural nets AI. And the people in AI who spent all their life mocking neural nets and saying they’d never do anything are now happy to call them AI and try and get some of the money."

  • Geoffrey Hinton

The popularization of AI has made every software-related graduate dream of being the next Andrew Ng. And the apparent ease with which you can get a powerful DL model running in the cloud, with huge datasets to learn from, has let many enjoy the reward of seeing results fast and easily.

AI is within reach of almost anyone. You can use TensorFlow or Keras to create a working model in a month, without any computer science (or programming) knowledge whatsoever. But let me ask you this: Is that what you want? Does it fulfill your hunger for discovering something new? Is it interesting? Even if it works, have you actually learned anything?
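
To give a sense of how low the bar has become, here is roughly what a complete, working classifier looks like in Keras – a standard MNIST sketch, not any particular project, with arbitrary hyperparameters:

```python
# A complete digit classifier in about a dozen lines of Keras.
import tensorflow as tf

# Load and normalize the MNIST dataset (bundled with TensorFlow)
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0

# Define a small fully connected network
model = tf.keras.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28)),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(10, activation="softmax"),
])

# Train and evaluate
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(x_train, y_train, epochs=5)
model.evaluate(x_test, y_test)
```

It runs, it reaches a respectable accuracy, and it demands almost no understanding of what happens behind the scenes – which is exactly the point.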

It seems to me that AI has become an end in itself. Most don’t use AI to achieve something beyond it. They use AI just for the sake of it, without understanding anything that happens behind the scenes. That doesn’t satisfy me at all.


We may never achieve artificial general intelligence

I’ve mentioned already the term artificial general intelligence (AGI). For decades, AGI has been the main goal driving AI forward. The world will change in unimaginable ways when we create AGI. Or should I say if?

How close are we to creating human-level intelligent machines? Some argue that it’ll happen within decades. Many expect to see AGI within our lifetimes. And then there are the skeptics. Hubert Dreyfus, one of the leading critics, argued that "computers, who have no body, no childhood and no cultural practice, could not acquire intelligence at all."

For now, it seems that research in AI isn’t even going in the right direction to achieve AGI. Yann LeCun, Geoffrey Hinton, and Yoshua Bengio, winners of the 2018 Turing Award – the Nobel Prize of computing – say we need to imbue these systems with common sense, and we’re not close to that yet. They say machines need to learn without labels, as kids do, using self-supervised learning (also called unsupervised learning).

That’d be the first step. However, there’s too much we don’t understand about the brain yet to try to build AGI. Some say we don’t need to create conscious machines to equal human intelligence. However, can we really separate human intelligence from the human subjective experience of the world? We don’t know yet, and we may never know.

Superintelligent machines may remain forever in the realm of science fiction. Here’s a short, terrifying story from Fredric Brown – a retelling of his classic "Answer" – about what could happen if we ever create something above our understanding:

The world was waiting with expectation. One of the leading scientists was about to connect the switch. A super machine, powered by computers from every corner of the universe, condensing the knowledge of all the galaxies.

The machine was on. One of the scientists said to another, "The honors of asking the first question are yours."

"I’ll ask what no one else has ever been able to answer." He turned to the machine and asked, "is there a God?"

The mighty voice answered without hesitation: "Yes, now there’s a God."

The scientist felt the terror running down his back. He leaped to grab the switch when a bolt of lightning from the cloudless sky struck him down and fused the switch shut.

We fear what we don’t understand. And, by definition, we won’t understand AGI. But we can remain calm. Even if AGI seems close when we observe AI from the outside, it won’t be happening any time soon. Now that I know we’re not even close, my interest in AI has notably diminished. I may come back to AI when I see AGI on the horizon. If it ever happens.


The future of AI will include the brain

AI appeared officially in the 1950s as a serious endeavor to disentangle the mysteries of human thought. After research in neurology found that the brain was composed of electrical networks of neurons that fired all-or-nothing pulses, the idea of building an electronic brain didn’t seem so far-fetched.

Today, we seem to have forgotten the brain. DL works nothing like it. Computer vision and convolutional neural nets don’t work like our visual system. Supervised learning models (which dominate AI right now) need to learn from labeled data, but humans learn from sparse data thanks to innate biological structures. Computers need huge amounts of computing power and data to learn to recognize the simplest objects, whereas a kid only needs to see one dog to recognize every other dog.

There have been some attempts at closing the gap between AI and the brain. One example is neuromorphic computing. The main idea is to create hardware that resembles the structures in our brain.

There’s a big difference between biological and artificial neural nets: a neuron in the brain carries information in the timing and frequency of its spikes, whereas the strength (voltage) of each spike is roughly constant. Artificial neurons are the exact opposite. They carry information only in the strength (magnitude) of their signals, not in their timing or frequency.
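
To make the contrast concrete, here is a toy sketch of a standard artificial neuron next to a crude leaky integrate-and-fire spiking neuron (illustrative only; the constants are arbitrary and the spiking model is a heavy simplification of real biology):

```python
import numpy as np

def artificial_neuron(x, w, b):
    """Information lives in the strength of a single graded output value."""
    return np.maximum(0.0, np.dot(w, x) + b)  # weighted sum + ReLU

def lif_neuron(input_current, threshold=1.0, leak=0.9, n_steps=100):
    """Information lives in the timing/frequency of all-or-nothing spikes."""
    v = 0.0
    spike_times = []
    for t in range(n_steps):
        v = leak * v + input_current   # integrate the input, leaking a bit
        if v >= threshold:             # fire an all-or-nothing spike...
            spike_times.append(t)
            v = 0.0                    # ...and reset the membrane potential
    return spike_times

x, w, b = np.array([0.5, 0.2]), np.array([0.8, -0.3]), 0.1
print("Artificial neuron output:", artificial_neuron(x, w, b))   # a magnitude
print("Spiking neuron output:", lif_neuron(input_current=0.12))  # spike times
```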

This difference between the brain and AI exists at such an elemental level that everything that’s built on it ends up diverging radically. Neuromorphic computing is trying to overcome these issues.

Some argue that AI, as it exists today, will reach its ceiling soon. If we want to continue growing towards actual intelligence, we’ll have to rethink everything we’ve been doing, shifting the path towards the only system we know that is intelligent enough to guide our efforts: our brain.

For me, the world of AI was a bridge to the human mind. I thought AI would teach me a lot about the brain, our thought processes, and our intelligence. What I found was that AI had long parted ways with neuroscience and had no intention of going back.

DL isn’t the future of AI. When the field resembles what was promised, I’ll be happy to combine my knowledge of AI and neuroscience to do my bit to bring that future closer. Until then, I’d rather work on understanding just a little more of the mind than build machines that "are still very, very stupid."

"I have always been convinced that the only way to get artificial intelligence to work is to do the computation in a way similar to the human brain. That is the goal I have been pursuing. We are making progress, though we still have lots to learn about how the brain actually works."

  • Geoffrey Hinton

Final thoughts

There are many good reasons to stay in the AI world. And even to enter it now. However, be sure that those reasons are the ones that move you.

In the world of AI, appearances lie. It’s not as fancy as they make it seem. It’s not going to radically change the world with human-like robots, as in I, Robot. You’ll be one of many in the same position. AI isn’t new, exclusive, or necessarily prestigious anymore. And don’t expect to see machines at the level of humans anytime soon.

Lastly, remember that if we want to find the holy grail of human intelligence and win this battle against Mother Nature, we should be looking at the only thing that has human-level intelligence: our brains.

