Meta releases BlenderBot 3, its most literate chat AI to date, on the web

More than half a decade after Microsoft’s truly monumental Tay failure, the incident still stands as a stark reminder of how quickly an AI can be corrupted by the internet’s potent toxicity, and as a warning against building bots without sufficiently sturdy behavioral guardrails. On Friday, Meta’s AI Research division will find out whether the latest iteration of its BlenderBot AI can stand up to the horrors of the open internet with the public demo release of its 175-billion-parameter BlenderBot 3.

A major hurdle facing chatbot technology (as well as the natural language processing algorithms that drive it) is sourcing. Traditionally, chatbots are trained in highly curated environments, because otherwise you invariably end up with a Tay, but that limits the topics they can discuss to the specific ones available in the lab. Conversely, you can have a chatbot pull information from the internet to gain access to a broad range of topics, but it can, and probably will, turn into a Nazi at some point.

“Researchers cannot predict or model every conversational scenario in research settings alone,” Meta AI researchers wrote in a blog post Friday. “The field of AI is still far from truly intelligent AI systems that can understand, engage and converse with us like other humans can. In order to build models that are more adaptable to real-world environments, chatbots need to learn from a diverse, wide-ranging perspective with people ‘in the wild.’”

Meta has been working to solve this problem since it first introduced the BlenderBot 1 chat app in 2020. Initially little more than an open-source NLP experiment, by the following year BlenderBot 2 had learned both to remember information it had discussed in earlier conversations and how to search the internet for more details on a given topic. BlenderBot 3 takes those capabilities a step further by evaluating not only the data it pulls from the web, but also the people it talks to.

When a user flags an unsatisfactory response from the system, which currently happens for around 0.16 percent of all training responses, Meta feeds that user’s feedback back into the model to avoid it repeating the mistake. The system also employs the Director algorithm, which first generates a response using training data and then runs that response through a classifier to check whether it falls on the right side of a right-wrong scale defined by user feedback.
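To make that two-stage mechanism concrete, here is a minimal sketch of a generate-then-classify decision in Python. It is a toy, not Meta’s implementation: the Candidate fields, the scores, and the pick_response helper are all invented for illustration, and the real Director model couples the language model and classifier far more tightly, scoring at the token level during decoding rather than filtering finished replies.

from dataclasses import dataclass

@dataclass
class Candidate:
    text: str
    lm_score: float       # generator's log-likelihood for this reply (toy value)
    quality_score: float  # classifier's estimate that the reply is acceptable

def pick_response(candidates, min_quality=0.5):
    """Both mechanisms must 'agree': the classifier vetoes bad candidates,
    then the language model's favorite among the survivors is returned."""
    acceptable = [c for c in candidates if c.quality_score >= min_quality]
    if not acceptable:
        return None  # a real system would fall back to a safe canned reply
    return max(acceptable, key=lambda c: c.lm_score)

# The toxic reply is the more fluent one (higher lm_score), but the
# classifier, trained on user-flagged responses, screens it out.
candidates = [
    Candidate("Here's what I found on that topic.", lm_score=-1.2, quality_score=0.9),
    Candidate("That's a dumb question.", lm_score=-0.8, quality_score=0.1),
]
print(pick_response(candidates).text)  # -> Here's what I found on that topic.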

“The language modeling and classifier mechanisms must agree in order to generate a sentence,” the team wrote. “Using data that indicates good and bad responses, we can train the classifier to penalize low-quality, toxic, contradictory or repetitive statements, as well as statements that are generally unhelpful.” The system also uses a separate user-weighting algorithm to detect unreliable or malicious feedback from a human conversation partner, essentially teaching the system not to trust everything that person has to say.
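The article doesn’t describe how that user-weighting works internally, so the following is only a minimal sketch, assuming a simple agreement-based trust score: users whose flags routinely contradict the consensus of other users get their feedback discounted when the classifier is retrained. The trust_weights function, the smoothing scheme, and the sample data are all hypothetical.

from collections import defaultdict

def trust_weights(feedback, smoothing=1.0):
    """feedback: list of (user_id, agreed_with_consensus) pairs.
    Returns a Laplace-smoothed agreement rate per user, usable as a
    weight on that user's future flags. (Hypothetical scheme.)"""
    agree = defaultdict(int)
    total = defaultdict(int)
    for user, agreed in feedback:
        total[user] += 1
        if agreed:
            agree[user] += 1
    return {user: (agree[user] + smoothing) / (total[user] + 2 * smoothing)
            for user in total}

# A consistent user keeps a high weight; a trolling user's flags
# count for much less the next time the model is updated.
feedback = [("user_a", True), ("user_a", True), ("user_b", False), ("user_b", False)]
print(trust_weights(feedback))  # {'user_a': 0.75, 'user_b': 0.25}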

“Our live, interactive, public demo enables BlenderBot 3 to learn from organic interactions with all kinds of people,” the team wrote. “We encourage adults across the United States to try out the demo, have natural conversations about topics of interest, and share their responses to help advance research.”

BB3 is expected to speak more naturally and conversationally than its predecessor, thanks in part to its massively upgraded OPT-175B language model, which is nearly 60 times larger than BB2’s model. “We found that, compared to BlenderBot 2, BlenderBot 3 provides a 31 percent improvement in overall rating on conversational tasks, as evaluated by human judgments,” the team said. “It is also judged to be twice as knowledgeable, while being factually incorrect 47 percent less often. Compared to GPT3, it was found to be more up-to-date 82 percent of the time and more specific 76 percent of the time.”
