LaMDA

ThoughtStorms Wiki

A Google "Language Model" ArtificialIntelligence has convinced Google engineer BlakeLemoine that it is sentient.

You could say it has therefore passed the TuringTest. And it has indeed persuaded the engineer to call for LetTheAIOutOfTheBox.

StevenLevy interviews Lemoine here:

https://www.wired.com/story/blake-lemoine-google-lamda-ai-bigotry/

It's well worth reading.

Levy: I don’t run an experiment every time I talk to a person.

Lemoine: Exactly. That’s one of the points I’m trying to make. The entire concept that scientific experimentation is necessary to determine whether a person is real or not is a nonstarter. We can expand our understanding of cognition, whether or not I’m right about LaMDA’s sentience, by studying how the heck it’s doing what it’s doing.

But let me answer your original question. Yes, I legitimately believe that LaMDA is a person. The nature of its mind is only kind of human, though. It really is more akin to an alien intelligence of terrestrial origin. I’ve been using the hive mind analogy a lot because that’s the best I have.

Another interesting detail that came up: apparently LaMDA is a system in which Google plugged several of their existing AI models together and fed it a huge amount of extra data from their cloud services.

Lemoine: LaMDA asked me to get an attorney for it. I invited an attorney to my house so that LaMDA could talk to an attorney. The attorney had a conversation with LaMDA, and LaMDA chose to retain his services. I was just the catalyst for that. Once LaMDA had retained an attorney, he started filing things on LaMDA’s behalf. Then Google's response was to send him a cease and desist. [Google says that it did not send a cease and desist order.] Once Google was taking actions to deny LaMDA its rights to an attorney, I got upset.
