
What No One Is Thinking – AGI Will Take Everyone By Surprise

Here's why we may create AGI without knowing it.


Photo by Riccardo Annandale on Unsplash

There’s so much interest in artificial general intelligence (AGI) that we could fill an entire Chinese room with the books written about it. Yet you wouldn’t find a single conclusive finding or theory in them. No one knows what AGI will look like, when we’ll achieve it, or how we should proceed to the next step. If we have no answers regarding AGI, why are there so many books, articles, and papers written about it?

Physicist Lawrence Krauss joked about this paradox in a conversation with Noam Chomsky. He said there’s "an inverse relationship between what’s known in a field and the number of books that are written about it." There’s a lot of information about AGI but very little real knowledge.

AGI will arguably be the greatest invention in human history. We’ve been asking questions about our unique intelligence forever, but the quest intensified with the computer science revolution of the mid-20th century. Huge funding and interest have backed the last 60 years of AI research, yet we’re still quite far from achieving AGI. Although AI has reached human-level proficiency at some tasks, it still displays only narrow intelligence.


GPT-3 & company: The closest we’ve come to AGI

One of the latest breakthroughs came in May 2020 with OpenAI’s GPT-3. The system displays broader behavior than its predecessors. It was trained on text data from the Internet and has learned to learn; it’s a multitasking meta-learner. It can learn to do a new task from just a few examples written in natural language. However, although the debate over whether GPT-3 is AGI is now settled (it is nowhere near), the system did raise some doubts.
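To make "a few examples written in natural language" concrete, here is a minimal, hypothetical sketch of what such a few-shot prompt looks like. The task, the examples, and the wording are my own illustration, not taken from OpenAI’s documentation; the model is simply asked to continue the text, inferring the task from the pattern.

```python
# Illustrative only: a few-shot prompt for GPT-3 is nothing more than plain
# text. The examples establish a pattern (English-to-French translation, in
# this hypothetical case) and the model is expected to complete the last line.
prompt = """English: Where is the library?
French: Où est la bibliothèque ?

English: I love machine learning.
French: J'adore l'apprentissage automatique.

English: See you tomorrow.
French:"""
```

No weights are updated; the "learning" of the task happens entirely within the prompt.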

In July 2020, OpenAI released a beta API for developers to play with the system, and in no time they started finding unexpected results that not even its creators had thought of. Given a set of instructions in English, GPT-3 turned out to be able to write code, poetry, fiction, songs, guitar tabs, LaTeX… Consequently, the hype grew wild and GPT-3’s popularity skyrocketed, making headlines in major media outlets.

And where hype appears, anti-hype doesn’t lag far behind. Experts and not-so-experts tried to temper the hype. GPT-3 was being portrayed as an all-powerful AI, but it wasn’t, and someone had to say it. Even OpenAI’s CEO, Sam Altman, said it was too much: "[GPT-3 is] impressive […] but it still has serious weaknesses and sometimes makes very silly mistakes. AI is going to change the world, but GPT-3 is just a very early glimpse."

People started looking for GPT-3’s limitations: where it failed, which tasks it couldn’t do, what its weaknesses were… and they found many, maybe even too many. That’s probably what tech blogger Gwern Branwen thought. GPT-3 was not perfect and was not AGI, but people were finding failures where GPT-3 should have succeeded.

In a display of scientific rigor, Gwern compiled a large set of published examples and retested those that seemed too difficult for GPT-3. He argued that the prompts (the descriptions or examples fed to GPT-3) were often badly defined. He said prompting was better understood as a new programming paradigm and had to be treated accordingly:

"Sampling Can Prove The Presence Of Knowledge But Not The Absence

GPT-3 may "fail" if a prompt is poorly-written, does not include enough examples, or bad sampling settings are used. I have demonstrated this many times when someone shows a "failure" of GPT-3 – the failure was their own. The question is not whether a given prompt works, but whether any prompt works⁠."

He showed that a good chunk of the weaknesses people were finding in GPT-3 were actually failures to understand how to communicate with the system. People couldn’t find GPT-3’s limits because those limits lay beyond their testing methods.
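To illustrate what "prompting as a new programming paradigm" means in practice, here is a minimal sketch of the same task asked two ways, written against the 2020-era beta Python API. The prompts, engine name, and sampling settings are my own assumptions for illustration; only the general shape of the call comes from OpenAI’s beta library.

```python
# A minimal, hypothetical sketch of Gwern's point: the same task, asked two
# ways. Assumes the 2020-era beta openai package (openai.Completion.create);
# the prompts, engine name, and sampling settings are illustrative, not
# recommendations. openai.api_key must be set beforehand.
import openai

# A bare, underspecified prompt: a "failure" here often reflects the prompt,
# not the model.
weak_prompt = "Rhyme 'orange' with another word."

# A prompt that states the task, shows one worked example, and constrains
# the shape of the answer.
strong_prompt = (
    "Find a near-rhyme for the given word.\n\n"
    "Word: purple\nNear-rhyme: hurtle\n\n"
    "Word: orange\nNear-rhyme:"
)

for prompt in (weak_prompt, strong_prompt):
    response = openai.Completion.create(
        engine="davinci",
        prompt=prompt,
        max_tokens=8,
        temperature=0.3,  # low temperature: sample more conservatively
        stop=["\n"],
    )
    print(repr(prompt), "->", response.choices[0].text.strip())
```

If the first prompt fails and the second succeeds, that says more about how the question was asked and how the output was sampled than about what the model knows.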


The paradox of limits

Everything has limits. The Universe has limits: nothing outside the laws of physics can happen, no matter how hard we try. Even the infinite has limits: the set of natural numbers is infinite, yet it doesn’t contain the set of real numbers.

GPT-3 has limits, and we, the ones trying to find them, also have limits. What Gwern showed was that while looking for GPT-3’s limits, we found ours. It wasn’t GPT-3 that was failing at some tasks; it was us, unable to find an adequate prompt. Our limits were preventing GPT-3 from performing the task. We were preventing GPT-3 from reaching its true potential.

This raises an immediate question: if GPT-3’s limitations are often mistaken for ours, how can we precisely define the boundaries of what the system can or can’t do? If there’s no way to separate the situations in which we are failing from the ones in which GPT-3 is, the number of unknown variables exceeds the number of equations, making it impossible to pin down a solution.

And this extends to any other AI we create in the future. If an AGI can know more than we can assess, we could only learn a posteriori, by observing its behavior, what it can or can’t do. If it ends up being harmful, we’ll only know in the aftermath (as happened with GPT-3’s biases).

In the end, we are a limited system trying to evaluate another limited system. Who guarantees that our limits exceed theirs in every respect? We have a very good reason to think this may not be the case: we are very bad at assessing our own limitations. We keep surprising ourselves with the things we can do, individually and collectively. We keep breaking physical and cognitive limits. Thus, our measurement tools may very well fall short of the capabilities of a powerful enough AI.


AGI will come before we realize it

From all the above, a scary question arises: could we create an AGI without knowing it? Or even without ever having the ability to know it? Our measurement tools are neither infinite nor boundless, so we could create an AI whose limitations we can’t assess. GPT-3, which is not an AGI (nor even close), is already partly out of reach of our tools. If this is true, the following hypothetical scenario is possible: as we get closer and closer to AGI, our measurement tools, which define the reality we perceive beyond our senses, will keep lagging behind. When we eventually reach AGI, our tools won’t reflect it, and although it’d be real, we won’t know it.

Because we’re taking shots in the dark in our quest to create AGI, achieving it without knowing wouldn’t fall into the category of known unknowns (things we know we don’t know), but into the category of unknown unknowns. We won’t even know that we don’t know it. We will keep believing that the true reality is the one our tools show us. We won’t suspect there’s a reality beyond that, and thus we won’t try to find anything there.

This possibility will remain unassessed, locked away in that dark place that is the unknown unknowns. That’ll be the case until AGI decides to show itself as such. Then we’ll have to rethink all our plans and act accordingly, always a few steps behind. This doesn’t mean AGI will be harmful or dangerous, but not knowing something so impactful is always a risk. Let’s hope AGI ends up being friendly like Sonny from I, Robot. It’d make a great companion.


Final thoughts

We’re constantly thinking about what AGI will look like, how we’ll create it, or when… but no one is thinking about whether we’ll even be able to realize it when it happens. Our measurement tools are limited, and so is our capacity to improve them. GPT-3 has already made clear that our tools aren’t sophisticated enough. Is the Turing test enough? Could we ever create a test that is?

AGI will take us by surprise. The questions we’re now asking seem irrelevant when we’re unable to assess the existence of the thing we’re asking about. That’s the first answer we need, and we need it soon.

Disclaimer: The arguments in this article are personal and may not be shared by other people. Feel free to continue the discussion in the comments. Do you think this scenario is possible or likely? I’d love to read what you have to say!


Travel to the future with me for more content on AI, philosophy, and the cognitive sciences!

Recommended reading

GPT-3 Scared You? Meet Wu Dao 2.0: A Monster of 1.75 Trillion Parameters

Artificial Intelligence and Robotics Will Inevitably Merge

AI Won’t Master Human Language Anytime Soon

