Google has been warned by one of its engineers that the company is not in a position to win the artificial intelligence race and could lose out to commonly available AI technology.
A document from a Google engineer leaked online said the company had done “a lot of looking over our shoulders at OpenAI”, referring to the developer of the ChatGPT chatbot.
However, the worker, identified by Bloomberg as a senior software engineer, wrote that neither company was in a winning position.
“The uncomfortable truth is, we aren’t positioned to win this arms race and neither is OpenAI. While we’ve been squabbling, a third faction has been quietly eating our lunch,” the engineer wrote.
The engineer identified the “third faction” posing a competitive threat to Google and OpenAI as the open-source community.
Open-source developers do not keep their technology proprietary: they release their work for anyone to use, improve or adapt as they see fit. Historical examples of open-source software include the Linux operating system and LibreOffice, an alternative to Microsoft Office.
The Google engineer said open-source AI developers were “already lapping us”, citing examples including tools based on a large language model developed by Mark Zuckerberg’s Meta, which the company made available on a “noncommercial” and case-by-case basis in February but which leaked online shortly afterwards.
Since Meta’s LLaMA model became widely available, the document added, the barrier to entry for working on AI models has dropped “from the total output of a major research organization to one person, an evening, and a beefy laptop”.
The document also cited websites filled with open-source visual art generation models. By contrast, ChatGPT and Google’s Bard chatbot do not make their underlying models available to the public.
“While our models still hold a slight edge in terms of quality, the gap is closing astonishingly quickly. Open-source models are faster, more customisable, more private, and pound-for-pound more capable,” wrote the Google worker.
The engineer went on to warn that the company had “no secret sauce” and that “our best hope is to learn from and collaborate with what others are doing outside Google”, adding that people would not pay for a restricted AI model when “free, unrestricted alternatives are comparable in quality”.
However, the EU was warned this week that it must protect grassroots AI research in its planned AI bill or risk hampering the release of open source models. In an open letter coordinated by the German research group Large-scale AI Open Network (Laion), the European parliament was told that any rules requiring developers to monitor or control use of their work “could make it impossible to release open-source AI in Europe”.
The Laion letter said such restrictions would “entrench large firms” and “hamper efforts to improve transparency, reduce competition, limit academic freedom, and drive investment in AI overseas”.
On Thursday the UK’s competition watchdog launched a review of the AI market, focusing on the foundation models behind generative AI tools such as ChatGPT, Bard and the image generator Stable Diffusion. The Competition and Markets Authority said sustaining AI innovation would require “open, competitive markets”.
The document by the Google engineer was posted online by the consulting firm SemiAnalysis, which said it had “verified” its authenticity after it was shared on a public server on the Discord chat platform.
Google has been contacted for comment but it is understood that the document is not an official company memo. The Guardian has also contacted the engineer named by Bloomberg for comment.