ARTIFICIAL INTELLIGENCE
First, we had GPT-3. Now we have LaMDA.

In mid-2020, OpenAI presented its powerful language model GPT-3. It took the world by storm and made headlines in major media outlets. This incredible technology can create fiction, poetry, music, code, and many other amazing things (I wrote a complete overview of GPT-3 for Towards Data Science if you want to check it out).
Other big tech companies were expected to follow suit. Indeed, a few days ago at Google's annual I/O conference, Google executives presented the company's latest research and technologies. One of them stole the show: LaMDA, a conversational AI capable of having human-like conversations.
In this article, I’m going to review the little we know today about this tech and how it works.
LaMDA – A conversational AI
LaMDA stands for "Language Model for Dialogue Applications." Like previous models such as BERT and GPT-3, LaMDA is built on the transformer architecture, which Google open-sourced in 2017. This architecture lets the model predict text by focusing on how previous words relate to each other (the attention mechanism).
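To make the attention mechanism concrete, here is a minimal numpy sketch of scaled dot-product attention, the core operation of the transformer. This is a simplified illustration, not LaMDA's actual implementation (which adds multiple heads, learned projections, and many stacked layers); the variable names and toy dimensions are my own.

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Each position weighs every other position by query-key similarity,
    then returns a weighted mix of the value vectors."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                  # pairwise similarity
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)   # softmax over keys
    return weights @ V

# Toy example: a "sentence" of 4 tokens with 8-dimensional embeddings.
rng = np.random.default_rng(0)
x = rng.normal(size=(4, 8))
out = scaled_dot_product_attention(x, x, x)          # self-attention
print(out.shape)  # (4, 8): one context-aware vector per token
```

The key idea is that every output vector is a blend of the whole sequence, weighted by relevance, which is how the model captures how "previous words relate to each other."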
In that sense, LaMDA is similar to other existing models. However, there's a crucial distinction between this system and other chatbots: LaMDA can manage the "open-ended nature" of conversations. As VP Eli Collins and Senior Research Director Zoubin Ghahramani explain in their blog post, human conversations have this distinctive chaotic feature. We can start with one topic and end up in a very different one a few minutes later. We tend to steer conversations by connecting topics in the most unexpected ways.
If it can handle these situations, LaMDA could change chatbot technology completely. A chatbot with these abilities could engage in genuinely natural conversations with people; we could ask for information or consult the internet in a far more natural way.
LaMDA is sensible, specific, interesting, and factual
LaMDA was trained on dialogue, like its predecessor Meena, another conversational system Google presented in 2020. Meena proved that chatbots could talk about virtually anything. It was trained to minimize an objective called perplexity, a measure of how confident a model is in predicting the next token. The researchers found that perplexity correlates well with human evaluation metrics such as SSA (sensibleness and specificity average), which is very useful for evaluating chatbot quality.
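Perplexity has a simple definition: the exponential of the average negative log-probability the model assigned to each actual next token. A quick sketch (my own illustrative helper, not code from the Meena or LaMDA papers):

```python
import numpy as np

def perplexity(token_probs):
    """Perplexity = exp of the mean negative log-probability assigned
    to each true next token. Lower means a more confident model."""
    token_probs = np.asarray(token_probs, dtype=float)
    return float(np.exp(-np.mean(np.log(token_probs))))

# A model that gives probability 0.25 to every correct next token is as
# uncertain as a uniform pick among 4 options: perplexity 4.
print(round(perplexity([0.25, 0.25, 0.25, 0.25]), 6))  # 4.0
```

Intuitively, a perplexity of N means the model is, on average, as uncertain as if it were choosing uniformly among N tokens, which is why lower perplexity tracks more sensible, specific responses.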
However, LaMDA went a step further. It excels at sensibleness – whether a response makes sense in the context of a conversation – and is better able to keep its responses specific. As the authors note in their post, a response like "I don't know" is almost always sensible, but rarely useful.
But Google didn't want to stop at sensible, specific responses. They wanted LaMDA to display interestingness, in the form of "insightful, unexpected or witty" responses. They also consider factuality a crucial dimension for chatbots, one that is lacking even in systems as powerful as GPT-3. Giving interesting responses isn't enough if the responses aren't correct.
Lastly, the tech world is fighting an ethical battle to reduce biases and potentially harmful uses of AI systems. Google shares this concern and wants to put "responsibility first," aiming to minimize gender and racial biases, hateful speech, and misleading information in systems like LaMDA.
LaMDA’s amazing skills – An example
During the conference, Google's CEO Sundar Pichai showed demos of LaMDA impersonating Pluto and a paper plane in two different conversations. The team didn't need to fine-tune LaMDA to switch it from impersonating Pluto to impersonating a paper plane (although Collins and Ghahramani acknowledge that it can be fine-tuned for better performance).
From what we could see, LaMDA shows the aforementioned qualities: sensibleness, specificity, interestingness, and factuality. Let’s see some examples from the conversation where LaMDA impersonates Pluto, keeping the order of the dialogue:
- Specificity. The team asked LaMDA what they’d see if they visited Pluto. It answered: "You would get to see a massive canyon, some frozen icebergs, geysers, and some craters." LaMDA could have said something like: "A nice view," or "Some land and the stars around me," but chose to be very specific about the details of Pluto’s geology.
- Factuality. The team asked if Pluto had had any visitors, to which the system replied: "I have had some. The most notable was New Horizons, the spacecraft that visited me." In 2006 NASA launched the New Horizons mission to explore Pluto and its moons. LaMDA got this fact right (although a single example doesn’t prove it can’t get some facts wrong).
- Interestingness. The team asked LaMDA what it wished people knew about Pluto, to which it responded: "I wish people knew I am not just a random ice ball. I am actually a beautiful planet." The interesting bit here is that LaMDA appears to convey emotion (although it doesn’t feel anything), which gives the conversation a deeper layer.
- Sensibleness. After LaMDA expressed its wish that people knew Pluto was a beautiful planet, the team said: "Well I think you’re beautiful," to which LaMDA continued: "I’m glad to hear that. I don’t get the recognition I deserve. Sometimes people refer to me as just a dwarf planet." It circled back to explaining why it wished people knew better, even after the team said what it "wanted" to hear. This makes total sense from a human point of view: even when someone supports our wishes, we may keep expressing them just to let the emotions flow. LaMDA doesn’t have emotions, but it captures that sensation perfectly here.
The uniqueness of human conversation
Now, think of how people have conversations. Human conversations are complex. A single sentence can steer the conversation down a very different path than initially intended, and the other person will be perfectly able to follow in that direction, or even redirect it by saying something like: "This isn’t related, but…" We can be "literal or figurative, flowery or plain, inventive or informational." We can have superficial or deep conversations. We can even transition from superficial to deep.
It’s difficult to understand how conversations unfold even in hindsight. Think about a great conversation you’ve had in the past, with a parent or a friend, and try to disentangle it. What led to the moments you remember? Could you repeat it if you tried? Did it end how it started? That’s the uniqueness of human language and conversations. From each sentence we could create a thousand new, unique paths. We just have to choose and a whole world can emerge from there. LaMDA seems to be able to do the same thing.
Final thoughts
LaMDA is the next big thing in conversational AIs. We’ll have to test it ourselves to see the degree to which it appears to be human. But from what we’ve already seen, it’s promising.
Yet, one thing conversational AIs would need in order to master dialogue completely is the ability to ask unexpected questions and change topics. Could LaMDA ask a question and take the active role, steering the conversation down a new, sensible, specific, and interesting path?
For now, this level of humanness is out of reach for AI. The reason is that our radical changes in conversation often rely on our ability to capture the wider context of the world we live in. For instance, if I’m sitting with a friend on a park bench and I see clouds gathering in the sky, I may suddenly ask: "Should we go indoors?" This shift is sensible, not with respect to the conversation itself, but to the bigger picture of what’s happening around us.
An AI with the ability to incorporate pragmatic knowledge in its conversational toolkit could be even more incredible than LaMDA or GPT-3. But for that, AI will need to have a body and live in the world. For now, wouldn’t it be amazing to witness a conversation between LaMDA and GPT-3? What do you think?