
Ever since a certain American president brought the term "fake news" into the mainstream, trust in media outlets has been declining worldwide. More and more people report encountering fake news online. The issue is now recognized globally as a threat to democracy.
I’d assume that fake news stories are usually written by humans. I was curious to see whether a present-day algorithm could generate believable fake news on its own, without any human touch. For this task, I turned to OpenAI’s powerful language model GPT-3, which is well known for its ability to read and generate text.
Could it create convincing fake news on its own? I tried to find out by typing out three headlines and asking the AI to write the rest. Only the first sentence of each story, the text in bold, was written by me. Everything else was written by GPT-3, completely unedited.
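For readers curious about the mechanics: the setup above boils down to feeding the model a headline plus an opening sentence and letting it complete the rest. Here is a minimal sketch of how that might look with the legacy OpenAI Python client — the model name and parameters are illustrative assumptions, not the exact settings used for this article.

```python
def build_prompt(headline: str, first_sentence: str) -> str:
    """Combine a human-written headline and opening sentence into
    a single text-completion prompt."""
    return f"{headline}\n\n{first_sentence}"


prompt = build_prompt(
    "Joe Biden puts ice cream on pizza",
    "Joe Biden loves to put ice cream on pizza.",
)

# With the legacy OpenAI Python client, the completion request would
# look roughly like this (engine and max_tokens are illustrative):
#
#   import openai
#   response = openai.Completion.create(
#       engine="davinci",
#       prompt=prompt,
#       max_tokens=400,
#   )
#   story = response.choices[0].text

print(prompt)
```

The model simply continues the text it is given, which is why a single bolded sentence is enough to steer an entire story.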
All text produced by the algorithm is annotated in this format.
Joe Biden puts ice cream on pizza
First up, here’s a fake story about US President Joe Biden. Biden loves both ice cream and pizza, so I figured: why not create a fake story combining the two?
Joe Biden loves to put ice cream on pizza.
The former vice president’s favorite food is pizza with ice cream on top. "It was a regular thing for me," he said. "I would call, and they’d say, ‘Joe, we’ve got a pizza for you.’ And I would say, ‘I’ll be right there.’"
Biden was so devoted to this dish that he once made a personal phone call from China to his favorite pizza shop.
"I got to the point where I could actually call in and order it," he said. "I called from China – the Great Wall of China – and I said, ‘I’d like to order a pizza for delivery.’"
Biden’s love for ice cream on pizza is so intense that he once asked his wife to create a pie with ice cream on it.
"I asked Jill to make me a pizza with ice cream," Biden said. "She made me one. It was the best thing I had ever tasted."
Biden’s passion for pizza and ice cream can make for a messy combination. A few years ago, he was on a train traveling from Washington, DC, to Delaware when he took a slice of pizza with ice cream from his staffer’s child and went to town.
"I have a weakness for it," he said. "It’s one of my favorite things."
I was surprised by how well-written the story turned out. The bit about China is a little awkward, especially without more context, but the algorithm quickly got back on track. It also knew that Joe’s wife is named Jill, of course. The algorithm loses cohesion in the second-to-last paragraph, where it mentions that pizza and ice cream make for a messy combination but then seems to change the subject in the very next sentence.

Sweden’s prime minister rode a pig
I’m Swedish myself. I’m quite certain that our prime minister, Stefan Löfven, has never ridden on a pig, but I figured it could make for a decent fake story.
The prime minister of Sweden was spotted riding a pig last night after the animal escaped from a farm outside Stockholm.
Bjorn Lyrvall was photographed riding a pig on the back of his scooter after his driver failed to catch up with him.
The prime minister was on his way to the parliament building in Stockholm when his driver realised there was a pig on the loose.
He chased after the pig but was unable to catch up with it.
Mr Lyrvall had to ride his scooter to the parliament building with the pig on the back of it.
The pig was finally caught and is back on its farm, but Mr Lyrvall is being ridiculed for the incident.
Well, first of all, Björn Lyrvall is a real person, but he is not now, nor has he ever been, Sweden’s prime minister. He’s a Swedish diplomat who has worked around the world, so his name likely appears in many of the English-language articles on which GPT-3 was trained. He has definitely never ridden a pig, though.
Again, the story is mostly well-written, though the sentence about riding a scooter with the pig on the back of it doesn’t really make much sense. Still, it makes for a funny mental image.
The EU invests in onion farms
I’m not a huge fan of onions, but maybe you are? Well, how about a news story about investing a substantial amount of money in onion farms?
The European Union will be investing a billion Euros in onion farms. Onion farms, I kid you not. Interestingly, the EU will be funding this research under the scientific project named "Onions."
The project includes developing new varieties of onions as well as finding better ways to grow them and other vegetables. The EU is investing this money because it believes that the small, white onion is a "vital component of the European diet" and that it is "both healthy and versatile."
The EU has pledged to invest 1.7 billion Euros in new farming methods, and to help the farmers with the new processes. We can’t wait to see what this research will lead to.
Commissioner for Agriculture and Rural Development, Phil Hogan, said:
"We believe that innovation is key to improving the competitiveness of European agriculture, and in particular that of the small and medium-sized enterprises in this sector."
The commissioner has also said that farmers will be able to use the money to develop more environmentally friendly farming methods as well.
This story came out a bit more informal. It reads more like an opinion piece than a news story. Phil Hogan is real, and he was indeed the Commissioner for Agriculture and Rural Development back when GPT-3 was trained. The quote, however, is fake.

I’m impressed – and terrified
I don’t know about you, but I’m equal parts impressed and terrified at how convincingly the algorithm wrote about these ridiculous topics. To reiterate: I only entered the first sentence of each story, and I didn’t edit the AI’s output in the slightest.
These particular stories that I generated are obviously easy to disprove, but the algorithm could be deployed to generate more serious articles. The impact could be severe.
We already have AI-powered bots that not only spread fake news on social media but also lend it credibility. Humans are unlikely to respond to news articles with few likes, comments, and shares, so these bots are the first to like, comment on, and share falsified articles. Once a post reaches a certain level of engagement, humans begin to interact with it.
"As text-generating AI continues to improve, neither machine nor human will be able to tell a machine-written text apart from one written by a human."
The entire value chain of creating and spreading fake news appears to be on track to become automated, thanks to text-generating AI like the one I used.
Of course, there are counter-solutions that identify and delete fake news. Companies like Facebook use a combination of sophisticated machine learning algorithms that try to predict fake news, along with human fact-checkers.
As text-generating AI continues to improve, neither machine nor human will be able to tell a machine-written text apart from one written by a human. But there are many ways in which an algorithm could discover that a story is bogus. It can examine the user sharing the news to determine if they are a real person or not. It can try to find the origin of the news story and determine if it originated from a reputable news outlet. It can look at the people interacting with the news story to decide whether they’re real.
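The detection signals above — who shared the story, where it originated, and who is engaging with it — could be combined into a simple credibility score. Here is a toy sketch; the weights and signal names are my own illustrative assumptions, not taken from any real fact-checking system.

```python
def credibility_score(sharer_is_real_person: bool,
                      origin_is_reputable: bool,
                      fraction_real_engagers: float) -> float:
    """Toy heuristic combining three signals: whether the sharer is a
    real person, whether the story originated from a reputable outlet,
    and what fraction of accounts interacting with it appear real.
    The weights are illustrative only."""
    score = 0.0
    if sharer_is_real_person:
        score += 0.3
    if origin_is_reputable:
        score += 0.4
    # Clamp the engagement signal to [0, 1] before weighting it.
    score += 0.3 * max(0.0, min(1.0, fraction_real_engagers))
    return score


# A story shared by a bot, from an unknown site, and engaged with
# mostly by other bots scores far lower than a reputable one:
print(credibility_score(False, False, 0.1))
print(credibility_score(True, True, 0.9))
```

Real systems are of course far more sophisticated, but the principle is the same: no single signal is decisive, so several weak ones are combined.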
Yet, I’m worried that the algorithms that generate and distribute fake news will become even more clever. For instance, an innovative algorithm could create a falsified story that spreads an organization’s agenda while still being based on a true story.
Let’s say a reputable and well-respected news outlet publishes a true story. An algorithm could generate its own story that references that original article but alters the truth slightly. Then the algorithm generates yet another story, based on the previous one, that strays further from the truth. It can repeat this cycle rapidly. In mere minutes, you would have a chain of stories referencing each other, with a reputable origin, where the last story has strayed far from the truth. That last article can then be shared autonomously and is difficult to fact-check.
"An innovative algorithm could create a falsified story that spreads an organization’s agenda while still being based on a true story."
As you can imagine, such a system could have a devastating impact.
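The chain-of-stories cycle described above can be sketched as a simple simulation. The `distort` function here is a crude stand-in for a generative model — it merely replaces one word per step, where a real model would paraphrase fluently while subtly altering facts — but it shows how quickly a chain drifts from its reputable origin.

```python
import random


def distort(story: str, rng: random.Random) -> str:
    """Stand-in for a generative model: replace one word per step.
    A real model would paraphrase while subtly altering facts."""
    words = story.split()
    words[rng.randrange(len(words))] = "[altered]"
    return " ".join(words)


def story_chain(source_story: str, steps: int, seed: int = 0) -> list:
    """Build a chain of stories, each 'citing' the previous one and
    drifting a little further from the original with every step."""
    rng = random.Random(seed)
    chain = [source_story]
    for _ in range(steps):
        chain.append(distort(chain[-1], rng))
    return chain


chain = story_chain("The minister announced a modest budget increase today", 5)
print(chain[0])   # the true, reputable original
print(chain[-1])  # five steps later, partly fabricated
```

Each story in the chain can point back to the one before it, so a fact-checker following the citations has to walk the whole chain to discover how far the last article has drifted.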
It’s a scary era that we are entering. Discriminatory algorithms, powerful deepfakes, and AI-fueled police surveillance are some of the consequences of modern machine learning solutions.
This is why it’s vital that we continue to develop counter-AI: software that can detect and prevent the spread of fake news and other negative consequences of artificial intelligence.
After all, if I can make a story about putting ice cream on pizza sound convincing with less than a minute of work, just imagine what someone more competent could do.