A director at social media giant Meta, formerly Facebook, described some of the outputs of the company’s new chatbot as “painful to see” after screenshots of the bot bashing its creators and claiming Donald Trump was still the US president were shared online.
Meta released a demo of BlenderBot 3 earlier this month, and testers quickly pointed out that the bot, which draws partly on internet searches for its responses, happily shared negative opinions about Meta CEO and founder Mark Zuckerberg.
“I don’t really like him at all,” BlenderBot told one user. “He’s too creepy and manipulative.” It also laughed at the idea that Zuckerberg “has all this money and still wears the same clothes”.
But BlenderBot also produced some troubling responses, including claims that Donald Trump was still the US president. It was also found to repeat antisemitic stereotypes.
Meta's new chatbot has *opinions* about its CEO. https://t.co/hUkJxw3UTr pic.twitter.com/Hq4mZTY1DV
— Max Woolf (@minimaxir) August 5, 2022
In an update to the original BlenderBot 3 announcement, Meta’s Managing Director of Fundamental AI Research, Joelle Pineau, said it was “painful to see” some of the bot’s offensive language but that it was necessary to publicly test conversational AI systems to make them “truly robust”.
“We’ve already collected 70k conversations from the public demo, which we will use to improve BlenderBot 3,” Pineau said.
“From feedback provided by 25 per cent of participants on 260k bot messages, 0.11 per cent of BlenderBot’s responses were flagged as inappropriate, 1.36 per cent as nonsensical, and 1 per cent as off-topic.
“We continue to believe that the way to advance AI is through open and reproducible research at scale.
“We also believe that progress is best served by inviting a wide and diverse community to participate.”
Using the internet to train conversational AI is notoriously fraught with danger, as Microsoft discovered back in 2016 with its infamous Tay bot.
Good morning to everyone, especially the Facebook https://t.co/EkwTpff9OI researchers who are going to have to rein in their Facebook-hating, election denying chatbot today pic.twitter.com/wMRBTkzlyD
— Jeff Horwitz (@JeffHorwitz) August 7, 2022
Designed to learn from its interactions with people on Twitter, Tay was spouting hate speech within hours of its release and had to be shut down.
Meta is naturally aware of this. Part of the point of BlenderBot 3’s demo was “collecting and sharing the conversational data” in order to analyse and improve the AI’s interactions with humans.
Chatbots are an intriguing area of artificial intelligence research, in no small part because of the famed Turing test, which explores whether machines can be said to think.
In June, a Google employee – whose job was to talk with its LaMDA chatbot and uncover problematic language – became convinced LaMDA was sentient.
Blake Lemoine, the engineer, went public with his claims, publishing conversations with LaMDA and giving media interviews in which he said he had sought legal representation to establish the AI’s personhood.
Google strenuously denied that its chatbot was sentient.
Chatbots like LaMDA and BlenderBot are being developed as products to sell to businesses that want to deploy “lifelike conversational AI” of their own in place of human customer service representatives.