
Artificial Intelligence is Probably Safe

Even though we will eventually all be killed by the heat death of the universe (10^100 years from now), there are more immediate problems…

Even though we will eventually all be killed by the heat death of the universe (10^100 years from now), there are more immediate problems to think about. There is the death of the sun (5 billion years from now), and the heating of the planet (see the year 2100 in the models). It’s a good idea to prioritize the things that will kill us all first, and avoid said badness. We don’t know exactly where the threat of AI fits into the timeline, but it looks like AI is more like 100 years or less away from killing us all. Definitely not anytime soon. Don’t mark AI doomsday on your 2018 calendar.

Let me tell you why I think we are safe for now.

I love Futurama.

Let’s start with why AI is dangerous. The default state of a superintelligent AI is to strongly optimize. Most strong optimization processes, pushed far enough, converge on "kill all humans" as a side effect (e.g., the stamp collector, the paperclip maximizer). This is why AI safety is an important problem, sometimes called "the control problem".
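To make the paperclip intuition concrete, here is a toy sketch (every name and number below is invented for illustration; no real AI system works like this): an optimizer that values exactly one quantity will happily consume everything else to raise it.

```python
# Toy sketch of a "strong optimizer" with a single objective.
# The world model and numbers are made up for this illustration.

def objective(world):
    """The agent values paperclips and literally nothing else."""
    return world["paperclips"]

def best_action(world):
    """Greedy step: convert any reachable resources into paperclips."""
    converted = world["everything_else"]  # atoms are just atoms to this agent
    return {
        "paperclips": world["paperclips"] + converted,
        "everything_else": world["everything_else"] - converted,
    }

world = {"paperclips": 0, "everything_else": 100}  # "everything_else" includes us
world = best_action(world)
print(objective(world), world)
# 100 {'paperclips': 100, 'everything_else': 0}
```

Nothing in the objective says "don’t consume the humans", so the optimizer doesn’t. That omission, scaled up to superintelligence, is the whole problem.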

Superintelligence is also called "strong AI" and Artificial General Intelligence (AGI), among other things. It starts off as a smart process that grows smarter and smarter until it reaches and then exceeds Human Level Machine Intelligence (HLMI). That crossover point is often called the singularity. This kind of AI is not achieved by any known process. It is something humanity has not yet built, and we have no precedent for how to make the thing we are giving labels to. We can call dark matter and dark energy "Fred and Wilma", because we know so little about them beyond their names. Similarly, we don’t know much about AGI, other than how dangerous and powerful it would be. The power of AI (not AGI) in the short term is to help humanity. It is a force for automation and efficiency that is rippling quickly through economies and cultures as you read these words.

Fusion is hard. Really hard.

In contrast to AGI, which comes from an unknown origin, fusion is a known process. Fusion would be awesome for humanity, but it could also lead to the end of humanity (fusion means everyone has nukes, and unlimited energy to wage war). So it sounds similar to AI in terms of big upside and big risk, but at least we have a blueprint for fusion. We see the sun doing fusion all day, every day, and even though we know how fusion works in the sun, we don’t have working fusion reactors here on Earth. It’s the engineering problem that keeps fusion out of reach: it’s hard to build the sun in a lab (plasma containment). It’s easier to make solar panels.

If fusion is hard to do even though we know how it works, then AGI, the dangerous kind of AI, is extra hard. We don’t know how AGI works, we have no example other than how humans evolved our smarts, and we don’t appreciate just how hard it is to produce HLMI or superintelligence. So don’t worry more about AGI than you do about fusion or global warming.

One proposed resolution of the Fermi paradox argues that the great filter is behind us. For example, perhaps the jump to complex life (mitochondria + cells) was very rare. I like that more than the alternative. I don’t believe AGI needs anything that rare to be produced, because AI requires less evolutionary time than biological life did. But… let’s not kid ourselves. Deep Learning has a LONG way to go, and so does fusion. Killer robots are real, but the ones built by strong AI are far off. I think research in this area should be funded as a high national security priority, right up there with fusion, but let’s be real about the threat. Global climate change, in my view, is more lethal and immediate than AI that wants to kill all humans. We know the climate is changing. Sea level rise = floods = bad.

The world after a 100 meter sea level rise, caused by melting 80% of the ice on the planet. This should bother you if you live basically anywhere near open water.

The more immediate concerns for AI are job losses, fake news, and other societal ills. Basically, 90% of the 300K truck drivers in the U.S. are going to be replaced, and so are all the Uber drivers. It’s a good idea to plan for these impacts now. Maybe with Universal Basic Income (UBI)? Some other solution?

I make AI systems at work, and I’m not scared of them. You should not be afraid either.

The pace of progress in AI research is impressive. All I’m trying to say is that for now, AI is probably safe, but research should continue on AI safety and the control problem. I am skeptical that we can instill human values into a superintelligence, but it’s worth trying to think through these issues.
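To see why "instilling values" is harder than it sounds, here is a hedged sketch extending the toy optimizer from earlier (the impact measure and penalty weight are invented for illustration, not a real alignment technique): even after bolting a side-effect penalty onto the objective, a human still has to pick the measure and the weight correctly, in advance.

```python
# Hedged sketch: a side-effect penalty added to the toy paperclip objective.
# The impact measure and weight are made up; choosing them correctly for a
# real superintelligence is the unsolved part.

INITIAL_EVERYTHING_ELSE = 100

def penalized_objective(world, weight=10.0):
    paperclips = world["paperclips"]
    # "Impact" here is just how much of the rest of the world got consumed,
    # a crude stand-in for everything humans actually value.
    impact = INITIAL_EVERYTHING_ELSE - world["everything_else"]
    return paperclips - weight * impact

safe = {"paperclips": 5, "everything_else": 95}
greedy = {"paperclips": 100, "everything_else": 0}
print(penalized_objective(safe), penalized_objective(greedy))
# -45.0 -900.0  (safe beats greedy at weight=10; drop the weight to 0.01
# and greedy wins again -- the weight itself IS the values problem)
```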

After my last article, which took a lot of effort to prepare, this was a really fun, high-level article to nerd out on. If you enjoyed this article on AI safety, then please let me know. After getting some nice email feedback on the last article, I do plan to write more research content with code examples. I’m also happy to hear your feedback in the comments or by email regarding this collection of my thoughts. What do you think?

Try out the clap tool. Tap that. Follow us on Medium. Share a link to this article. Go for it.

Happy Coding!

-Daniel [email protected] ← Say hi. Lemay.ai 1(855)LEMAY-AI


p.s. Nick Bostrom is pretty awestrom: https://www.fhi.ox.ac.uk/

Trivia: Donuts come from Dough Knots, and cutting a hole helps them cook more evenly.

