
Even Google Insiders Are Questioning Bard AI Chatbot's Usefulness – tech.slashdot.org


For months, Alphabet’s Google has run an invitation-only chat on Discord for heavy users of Bard, Google’s artificial intelligence-powered chatbot. Google product managers, designers and engineers are using the forum to openly debate the AI tool’s effectiveness and utility, with some questioning whether the enormous resources going into its development are worth it. From a report: “My rule of thumb is not to trust LLM output unless I can independently verify it,” Dominik Rabiej, a senior product manager for Bard, wrote in the Discord chat in July, referring to large language models — the AI systems trained on massive amounts of text that form the building blocks of chatbots like Bard and OpenAI’s ChatGPT. “Would love to get it to a point that you can, but it isn’t there yet.”

“The biggest challenge I’m still thinking of: what are LLMs truly useful for, in terms of helpfulness?” said Googler Cathy Pearl, a user experience lead for Bard, in August. “Like really making a difference. TBD!” […] Two participants in Google’s Bard community on the chat platform Discord shared details of discussions in the server with Bloomberg from July to October. Dozens of messages reviewed by Bloomberg provide a unique window into how Bard is being used and critiqued by those who know it best, and show that even the company leaders tasked with developing the chatbot feel conflicted about the tool’s potential. Expounding on his answer about “not trusting” responses generated by large language models, Rabiej suggested limiting people’s use of Bard to “creative / brainstorming applications.” Using Bard for coding was a good option too, Rabiej said, “since you inevitably verify if the code works!”
