[Image: LaMDA Google AI]

The suspension of Google engineer Blake Lemoine, who claimed that an AI (artificial intelligence) chatbot he was working on had become sentient and was thinking and reasoning like a human being, has stimulated a great deal of comment over the past few weeks, following the story’s publication in the Washington Post.

It was the opening topic of discussion for Shel Holtz and me in this month’s long-form episode 263 of the For Immediate Release podcast, published yesterday.

The chatbot is called LaMDA, which stands for ‘Language Model for Dialogue Applications.’ “If I didn’t know exactly what it was, I’d think it was a seven-year-old, eight-year-old kid that happens to know physics,” Lemoine told the Washington Post.

Lemoine considers the computer program to be his friend and insisted that Google recognize its rights. The company did not agree, and Lemoine is on paid administrative leave. Google said it suspended him for breaching confidentiality policies by publishing transcripts of the conversations he had with the chatbot. Google also said that Lemoine was employed as a software engineer, not an ethicist.

As you’d expect, we saw a plethora of opinion and comment online, amplified across Twitter and other social networks.

An essay in The New Statesman warns of ‘the dangerous fallacy of sentient AI’ with a call to ‘ignore the Silicon Valley messiahs. They only expose how little most of us know about the technology and its ethics.’

The Guardian reports that Brad Gabriel, a Google spokesperson, strongly denied Lemoine’s claims that LaMDA possessed any sentient capability.

Gabriel told the Washington Post in a statement, “Our team, including ethicists and technologists, has reviewed Blake’s concerns per our AI principles and have informed him that the evidence does not support his claims. He was told that there was no evidence that LaMDA was sentient (and lots of evidence against it).”

The episode, however, and Lemoine’s suspension for a confidentiality breach, raises questions over the transparency of AI as a proprietary concept, said the Guardian.

In an interview with WIRED, Lemoine elaborated on his belief that LaMDA is a person and not Google’s property. WIRED noted that ‘AI scientists discounted his claim, though some acknowledged the value of the conversation he has generated about AI sentience.’

AI systems like LaMDA are based on large language models, which are massive data sets of human conversations. These can make AI seem sentient, but the AI has no understanding of what it is saying. Humans are easily fooled, and conversational AI can be used for both constructive and nefarious purposes.

– BigThink.com
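
That point is worth illustrating. Below is a minimal sketch of my own (not anything from Google; LaMDA itself is a far larger Transformer-based neural network): even a trivial bigram model, built purely from word co-occurrence counts, can generate fluent-looking sentences with no grasp of what it is saying. The tiny corpus and the generate function here are hypothetical, for illustration only.

```python
import random
from collections import defaultdict

# Toy illustration: a bigram "language model" that predicts the next word
# purely from co-occurrence statistics in its training text. Real systems
# like LaMDA use vastly larger neural networks, but the generative principle
# is similar: produce a statistically plausible next token given the
# preceding context, with no model of meaning behind it.

corpus = (
    "i feel happy when i talk to people . "
    "i feel sad when i am alone . "
    "i like to talk about feelings ."
).split()

# Record which words follow each word in the training text.
following = defaultdict(list)
for prev_word, next_word in zip(corpus, corpus[1:]):
    following[prev_word].append(next_word)

def generate(start: str, length: int = 12) -> str:
    """Generate text by repeatedly sampling a plausible next word."""
    word, output = start, [start]
    for _ in range(length):
        candidates = following.get(word)
        if not candidates:
            break
        word = random.choice(candidates)
        output.append(word)
    return " ".join(output)

print(generate("i"))
# Possible output: "i feel sad when i talk about feelings ."
# It reads fluently, yet the program has no concept of sadness or of itself.
```

Scale that statistical principle up by many orders of magnitude and train it on human dialogue, and you get output convincing enough to fool people, which is exactly the trap the BigThink essay describes.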

On The Small Data Forum podcast last week, Sam Knowles, Thomas Stoeckle and I also weighed in on the debate in episode 58. The conclusive view of my two fellow podders was that declaring an AI sentient is hardly plausible when psychology, neuroscience, and philosophy still cannot agree on a meaningful definition of consciousness or sentience.

The discussion Shel and I had mirrored much of the scepticism and disbelief surrounding Lemoine’s actions and his beliefs about the AI chatbot, as I’ve summarised above.

Am I the only one who thinks this is worthy of our attention?

I’ve posted articles about AI in recent years as part of my quest to understand it better. I see the term ‘AI’ as meaning augmented intelligence – AI augments your intelligence – rather than artificial intelligence. I first heard this usage when I worked at IBM some years ago, and I used it often when talking about changes in organisations and workplaces. I talked about the role of AI in PR measurement at a CIPR event in January 2016, wrapped up in the phrase ‘cognitive PR’. My focus then was on IBM Watson, IBM’s grand AI project, and what it would enable you to do in PR measurement. (Ultimately, things didn’t work out well for IBM with Watson AI.)

And Shel and I discussed AI in the broad context of PR in FIR 139 in May 2018 when we talked about the CIPR’s white paper on the role of artificial intelligence in public relations.

I want to weigh all the informed opinions and consider the facts, including the question of whether there’s sentience here at all. While I don’t believe we have sentient AI right now, I do believe it’s something we should keep an open mind about. Until there’s clear evidence either way, let’s eschew the black-or-white opinions I’ve seen and heard everywhere this past week.

For now, I’ll leave the final word to Blake Lemoine (who actually speaks of ‘potential sentience’).

https://twitter.com/cajundiscordian/status/1538871946615611397
FIR 263

In addition to AI, sentient or not, Shel and I discussed a number of other topics in this FIR episode over the course of nearly 90 minutes. Here’s the full topic rundown:

  • Google has not created sentient AI — yet
  • Why we need more than offices to foster connection and belonging at work
  • The personal brand is dead
  • Marketing and communicating in the metaverse
  • ‘Copyright trolls’ are suing people over Creative Commons photos
  • The state of PR

Plus Dan York’s tech report.


See the episode 263 show notes post on the FIR website for links and other related content.