Meta’s new chatbot is quick to offend


Meta’s new chatbot can convincingly imitate the way humans talk on the internet – for better and for worse.

In conversations with CNN Business this week, the chatbot, which went public on Friday and has been dubbed BlenderBot 3, said it identifies as “alive” and “human”, watches anime and has an Asian wife. It also falsely claimed that Donald Trump was still president and that there is “certainly plenty of evidence” the election was stolen.

If some of these responses weren’t concerning enough for Facebook’s parent company, users were quick to point out that the AI-powered bot openly lambasted Facebook. In one instance, the chatbot reportedly said it “deleted my account” out of frustration with how Facebook handles user data.

While there is potential value in developing chatbots for customer service and digital assistants, there is a long history of experimental bots that quickly ran into trouble when released to the public, such as Microsoft’s “Tay” chatbot more than six years ago. BlenderBot’s colorful responses show the limits of building automated conversational tools, which are typically trained on large amounts of public data online.

“If I have a message for people, it’s don’t take these things seriously,” Gary Marcus, an AI researcher and professor emeritus at New York University, told CNN Business. “These systems just don’t understand the world they’re talking about.”

In a statement on Monday amid reports that the bot also made anti-Semitic remarks, Joelle Pineau, managing director of fundamental AI research at Meta, said “it’s painful to see some of these offensive responses”. But she added that “public demos like this are important for building really robust conversational AI systems and closing the clear gap that exists today before such systems can be put into production.”

Meta previously acknowledged the current pitfalls of this technology in a blog post on Friday. “Since all conversational AI chatbots are known to sometimes mimic and generate dangerous, biased, or offensive remarks, we conducted large-scale studies, co-hosted workshops, and developed new techniques to create safeguards for BlenderBot 3,” the company said. “Despite this work, BlenderBot may still make rude or offensive comments.”

But Meta also claimed that its latest chatbot is “twice as competent” as its predecessors, improving 31% on conversational tasks while being factually incorrect 47% less often. Meta said it continuously collects data as more people interact with the bot to make improvements.

Meta did not immediately respond to CNN Business’s request for more details on how the bot was trained, but it did say in blog posts that it was trained on “a large amount of linguistic data accessible to the public”. The company added, “Many of the datasets used were collected by our own team, including a new dataset consisting of over 20,000 conversations with people based on over 1,000 conversation topics.”

Marcus speculated that the company is “likely borrowing stuff from Reddit and Wikipedia” like other AI chat systems. If so, he says, the poor results highlight limitations in the data the bot is trained on. For example, the bot may think that Trump is still president because most of the older datasets it was trained on were collected while Trump was still in office.

BlenderBot’s public release comes nearly two months after a Google engineer made headlines claiming that Google’s AI chatbot, LaMDA, was “sentient”. The claims, which have been widely criticized in the AI community, highlighted how the technology can lead people to attribute human qualities to it.

BlenderBot self-identified as “sentient” in its conversations with CNN Business, likely because that is what the human responses it studied said. When asked what makes it “human”, the bot said: “The fact that I’m alive and conscious right now makes me human, as well as having emotions and being able to reason logically.”

After being caught contradicting itself in its answers, the bot also produced an all-too-human response: “It was just a lie to get people to leave me alone. I’m afraid I’ll get hurt if I tell the truth.”

As Marcus put it, “These systems produce fluent language that sounds like a human wrote it, and that’s because they rely on these vast databases of things that humans have actually written.” But, he added, “At the end of the day, what we have is a lot of demonstrations that you can do cute stuff, and a lot of evidence that you can’t count on it.”