ARTIFICIAL INTELLIGENCE | DATA SCIENCE
Here’s why programmers don’t need to panic.

OpenAI released GPT-3’s beta API in July 2020. Soon thereafter, developers started experimenting with the system and the hype went through the roof, with people making strong claims about GPT-3’s power. The system was described as "sentient," capable of "reasoning and understanding," or even a form of "general intelligence."
Frederik Bussler wrote an article that went viral on Towards Data Science in which he raised an important question: Could GPT-3 kill coding as we know it? For some years now, we’ve seen trends aimed at automating coding or, at the very least, reducing human involvement.
Bussler points to No-code and AutoML as the forces threatening the future of coding jobs. No-code is a category of design tools (e.g., WordPress) that let users build complex applications without programming. AutoML refers to AI-based, end-to-end solutions for machine learning problems. Both approaches put technology that would otherwise be out of reach into the hands of non-programmers.
GPT-3 points in the same direction. It can generate code from English instructions, which is the ultimate dream of non-programmers. There are reasons to keep an eye on this new generation of AIs, but there are stronger reasons not to panic. Let’s see what GPT-3 can do and why we can still befriend AI coders.
GPT-3’s coding skills
One of the most surprising use cases people found was GPT-3’s ability to code following a natural language prompt (a prompt is the chunk of text we feed into the system). Sharif Shameem created debuild.co, a code generator based on GPT-3, and showed how the system could build a simple program in HTML/CSS/JSX from a short set of instructions in English. Jordan Singer built Designer, a Figma plugin that can design for you. Amjad Masad showed how Replit could use GPT-3 to explain your code and even tell you how to improve it.
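To give a concrete feel for how these tools work, here is a minimal sketch of asking GPT-3 for code through OpenAI’s beta Python client. The few-shot prompt, the engine name, and the parameters are illustrative assumptions for this sketch, not the actual setup behind debuild.co or any of the demos above:

```python
import openai  # OpenAI's beta Python client, circa 2020

openai.api_key = "YOUR_API_KEY"  # placeholder

# A few-shot prompt: we show GPT-3 two examples of the task
# (English description -> HTML) and let it complete the third.
prompt = """description: a red button that says "Stop"
code: <button style="color: white; background-color: red;">Stop</button>

description: a heading that welcomes the user
code: <h1>Welcome!</h1>

description: a blue link to example.com labeled "Docs"
code:"""

response = openai.Completion.create(
    engine="davinci",   # the largest GPT-3 engine in the beta
    prompt=prompt,
    max_tokens=64,      # keep the completion short
    temperature=0.2,    # low temperature -> more deterministic output
    stop="\n\n",        # stop at the end of the generated snippet
)

print(response["choices"][0]["text"].strip())
```

Nothing here is programmed to "know HTML"; the examples in the prompt simply condition the model to continue the pattern.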
How can GPT-3 code from input in natural language? The reason is its multitask, meta-learning ability: it can learn to perform text tasks it hasn’t been trained on after seeing just a few examples. Sharif Shameem and company conditioned GPT-3 with a handful of examples so it would learn these tasks. Meta-learning is an impressive ability, but we tend to overestimate AIs that acquire skills we consider uniquely human, and GPT-3 is no different. It can code, but it can’t code everything. Here are three important limitations:
Small context window
GPT-3 has a short memory. It can only attend to a small text window into the past – roughly 2,000 tokens, on the order of 1,500 words, at the time of the beta – and nothing more. If you prompt it to learn to code, you can’t then make it learn poetry. And you could never ask it to continue a large program beyond a handful of lines. GPT-3 is highly impressive within its context window, but it simply can’t see beyond it.
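As a rough illustration of what that limit means in practice, here is a sketch that estimates whether a prompt fits in the window. GPT-3 uses essentially the same byte-pair encoding as GPT-2, so the Hugging Face GPT-2 tokenizer serves as a stand-in; the 2,048-token budget and the helper function are assumptions for the example:

```python
# Rough check of whether a prompt fits GPT-3's context window.
from transformers import GPT2TokenizerFast

CONTEXT_WINDOW = 2048  # assumed token budget shared by prompt + completion

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")

def fits_in_context(prompt: str, max_completion_tokens: int = 256) -> bool:
    """Return True if the prompt leaves room for the desired completion."""
    prompt_tokens = len(tokenizer.encode(prompt))
    return prompt_tokens + max_completion_tokens <= CONTEXT_WINDOW

long_source = "# imagine a very long source file\n" * 500
print(fits_in_context(long_source))  # False: GPT-3 simply cannot see all of it
```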
Lack of accountability
GPT-3 can do many things, but it can’t assess whether its answers are right or wrong – and it doesn’t care either. If you’re working on a problem you don’t know the answer to, sometimes using GPT-3 is as good as guessing. OpenAI advised against using the system in "high-stakes categories" because of this issue. GPT-3 isn’t trustworthy.
Sensitive to bad prompting
GPT-3 is only as good at learning as we are at prompting it. Tech blogger Gwern Branwen demonstrated the importance of good prompting and defended the idea that GPT-3’s potential can’t be measured by sampling alone (each time we prompt GPT-3 and get a result, we’re drawing a sample). If we don’t know how to talk to GPT-3, it won’t show its true knowledge and it’ll make mistakes.
"Sampling can prove the presence of knowledge but not the absence.
GPT-3 may "fail" if a prompt is poorly-written. […] The question is not whether a given prompt works, but whether any prompt works."
- Gwern Branwen
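To make Gwern’s point concrete, here is a small, invented illustration of a weak prompt versus a carefully structured one for the same task. Both prompts are assumptions made up for this sketch, and no real GPT-3 output is shown:

```python
# Two hypothetical prompts asking GPT-3 for the same thing:
# a Python function that reverses a string.

# A vague, zero-context prompt. If GPT-3 "fails" on this, the failure
# says little about whether it can do the task at all (Gwern's point).
weak_prompt = "reverse a string"

# A structured few-shot prompt: it fixes the language, shows the expected
# format with an example, and leaves an obvious slot for GPT-3 to fill.
strong_prompt = """Write a Python function for each description.

description: return the square of a number
code: def square(x): return x * x

description: reverse a string
code:"""
```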
The uncertainty of prompting GPT-3
When we prompt GPT-3 to create code, we’re writing in software 3.0. Prompting, Gwern says, should be understood as a new programming paradigm, different from traditional coding or neural networks.
When we write a program in Python, for instance, we’re using a formal language. There are many ways to arrive at the same solution, but each of them has to strictly follow the syntax rules of the language. There’s no room for uncertainty: you write a program and the computer behaves in a specific manner, with no loose interpretations.
Prompting GPT-3 to write code is drastically different. English – or any other spoken language – isn’t a formal language; it is a natural language. Natural languages aren’t designed. They evolved with us and are full of ambiguity. Most of the time, meaning is only complete when contextual information fills it in. Written natural language loses part of that context, and therefore it can be interpreted in different ways. This creates uncertainty. On top of that, we have to add the uncertainty stemming from GPT-3’s opaque inner workings. We can’t access the black box, let alone understand it.
Thus, when we input an English sentence into GPT-3 and it spits out something, there’s a chain of uncertainties that can easily cause a serious discrepancy between what we wanted and what we got. Prompting GPT-3 isn’t like coding in this sense. It can be used in some situations, but there’s no way it’ll replace all coding applications in the short term, simply because the two approaches are, by their nature, suited to solving different problems.
AI won’t kill coding entirely
I’ve tried to rebut some ideas about GPT-3’s threat to coding. Now I’ll extend the argument to AI in general. There are three strong reasons programmers don’t need to fear AI that much:
Other paradigms are better suited for some tasks
When I mentioned prompting as a new programming paradigm (software 3.0), I left the other two paradigms implicit: traditional coding (software 1.0) and neural networks (software 2.0). Andrej Karpathy published a viral post some years ago defending the idea that neural networks should be framed as a new form of software, and that they were better suited than traditional coding for some tasks.
I agree with him to some extent. Neural networks proved very successful in tackling some tasks at which traditional coding had always fallen short. In particular, neural networks are well-suited for vision and language. It was evident that for some problems, directly writing the behavior we wanted from a program was easier (software 1.0), but for others, collecting data as examples of the behavior we wanted to reproduce (software 2.0) was the go-to solution.
It’ll be the same with software 3.0. Prompting allows users to deal with tasks that are beyond the capabilities of previous software paradigms, but it won’t be well suited to other situations. Building an operating system, an office suite, a database, or a program that computes the factorial of a number will still be done with traditional coding.
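The factorial program mentioned above is a good illustration of why traditional coding isn’t going anywhere: a few unambiguous lines fully specify the behavior, with no prompt, no sampling, and no uncertainty. A minimal version in Python:

```python
def factorial(n: int) -> int:
    """Compute n! iteratively; the behavior is fully specified by the code itself."""
    if n < 0:
        raise ValueError("n must be non-negative")
    result = 1
    for i in range(2, n + 1):
        result *= i
    return result

print(factorial(5))  # 120, every single time, on every machine
```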
Other paradigms are less costly
Deep learning costs are often prohibitive. Many companies still rely on non-neural-network machine learning solutions because data manipulation, cleaning, and labeling alone would cost more than the rest of the project combined.
Even if newer techniques and technologies are faster or more precise, economic costs are always a limitation in the real world. Training GPT-3 cost OpenAI around $12 million. How many companies can afford it? Would you spend a few million to create an AI that writes JSX for you?
Even if the API is free for developers to use, there’s another cost to take into account: the environmental damage to the planet. GPT-3 is so big that training it generated roughly the same carbon footprint as "driving a car to the Moon and back." Sometimes bigger isn’t better.
Today’s AI has limitations it can’t overcome
Neural networks keep getting smarter each year, but there are tasks not even the smartest, most powerful neural network can manage. The uncertainty GPT-3 has to face when interpreting a written input is inevitable.
Disembodied AI – which comprises almost every AI to date – can’t access the meaning beyond the words. We can use context to interpret the world around us because we interact with it. We live in the world and that’s the reason we understand language. We can link form with meaning; we can link words with the subjective experience they convey.
Neural networks, no matter how powerful, won’t be able to master language as humans do. As Professor Ragnar Fjelland says, "as long as computers do not grow up, belong to a culture, and act in the world, they will never acquire human-like intelligence." And that isn’t happening anytime soon.
Final thoughts
It is undeniable that neural networks such as GPT-3 are an important milestone and will open doors for the next steps towards AGI. They’ll be able to tackle more complex tasks with each new upgrade. For instance, multimodal AIs are becoming the new normal (MUM and Wu Dao 2.0 are the latest examples).
Yet traditional methods and techniques will simply be a better option for some tasks. AI will eat chunks of problem space that were previously the domain of traditional coding, but that happens with every technology. New tech rarely makes the old obsolete in every sense: improving efficiency, cost, and usability all at the same time is the exception, not the rule. AI will touch every industry, but it won’t escape this rule either.
Subscribe for more content on AI, philosophy, and the cognitive sciences!
Recommended reading
4 Things GPT-4 Will Improve From GPT-3
GPT-3 Scared You? Meet Wu Dao 2.0: A Monster of 1.75 Trillion Parameters