Conscious AI models?

If it looks like a duck…

Daniel Godoy
7 min read · Jun 29, 2022
Photo by Robert Zunikoff on Unsplash

A few days ago, the Internet was flooded with countless tweets, posts, and articles about Google's LaMDA AI being conscious (or sentient), based on a conversation it had with an engineer. If you want, you can read it here.

If you read it, you will realize that it certainly looks like a dialog between two people. But appearances can be deceiving…

What IS LaMDA?

The name stands for "Language Model for Dialogue Applications". It's yet another massive language model trained by Big Tech to chat with users, and it's not even the latest development: Google's blog has an entry from more than one year ago called "LaMDA: our breakthrough conversation technology". It's a model built using Transformers, the architecture behind most modern language models. The architecture itself is conceptually simple; much of the power of these models comes from their sheer size.
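To make that concrete, here is a minimal sketch of a decoder-only Transformer language model in PyTorch. It is purely illustrative: the layer sizes, vocabulary, and generic PyTorch components below are toy assumptions for the sake of the example, not LaMDA's actual configuration.

```python
import torch
import torch.nn as nn

class TinyTransformerLM(nn.Module):
    """A toy decoder-only language model: the same family of architecture, at a microscopic scale."""
    def __init__(self, vocab_size=10_000, d_model=128, n_heads=4, n_layers=2, max_len=256):
        super().__init__()
        self.token_emb = nn.Embedding(vocab_size, d_model)
        self.pos_emb = nn.Embedding(max_len, d_model)
        layer = nn.TransformerEncoderLayer(
            d_model=d_model, nhead=n_heads,
            dim_feedforward=4 * d_model, batch_first=True,
        )
        self.blocks = nn.TransformerEncoder(layer, num_layers=n_layers)
        self.lm_head = nn.Linear(d_model, vocab_size)

    def forward(self, token_ids):
        seq_len = token_ids.size(1)
        positions = torch.arange(seq_len, device=token_ids.device)
        x = self.token_emb(token_ids) + self.pos_emb(positions)
        # Causal mask: each token can only attend to the tokens that came before it
        causal_mask = torch.triu(
            torch.full((seq_len, seq_len), float("-inf"), device=token_ids.device),
            diagonal=1,
        )
        x = self.blocks(x, mask=causal_mask)
        return self.lm_head(x)  # logits over the vocabulary, one distribution per position

model = TinyTransformerLM()
fake_sentence = torch.randint(0, 10_000, (1, 16))  # a batch with one 16-token "sentence"
logits = model(fake_sentence)
print(logits.shape)  # torch.Size([1, 16, 10000])
print(f"{sum(p.numel() for p in model.parameters()):,} parameters")  # a few million here, vs. 137 billion
```

Even this toy version predicts the next token the same way the giant models do; the difference is that it has a few million parameters instead of 137 billion, which is exactly why its output would not fool anyone.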

LaMDA has 137 BILLION parameters, and it was trained on 1.5 TRILLION words (for more implementation details, check this post).

To put it in perspective, and give away my age, the Encyclopaedia Britannica contains only about 40 million words. So LaMDA was trained on the equivalent of 37,500 times the content of the most iconic encyclopaedia.
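If you want to check that figure yourself, the back-of-the-envelope arithmetic is simple (the word counts below are the rounded numbers quoted above, not exact values):

```python
lamda_training_words = 1.5e12   # 1.5 trillion words, per Google's description of LaMDA
britannica_words = 40e6         # roughly 40 million words in the Encyclopaedia Britannica

ratio = lamda_training_words / britannica_words
print(f"LaMDA's training data is roughly {ratio:,.0f}x the Britannica")  # 37,500x
```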

