As many of the top companies in the field seek to outdo each other by building ever-larger AI systems, Sakana, which takes its name from the Japanese word for fish, thinks it may be able to do more with less data. The startup plans to make multiple smaller AI models, the kind of technology that powers products like ChatGPT, and have them work together. The idea is that a “swarm” of programs could be just as smart as the massive undertakings from larger organizations.
Founded by two prominent industry researchers, former Google employees David Ha and Llion Jones, Sakana’s approach could potentially lead to AI that’s cheaper to train and use than existing technology. That includes generative AI, which has captivated Silicon Valley with its ability to spit out text and images in response to prompts. The new venture’s approach contrasts with that of companies like OpenAI, which might feed all their data into one large AI program rather than a series of smaller ones.
“Ants move around and dynamically form a bridge by themselves, which might not be the strongest bridge, but they can do it right away and adapt to the environments,” Ha said. “I think this sort of adaptation is one of the very powerful concepts that we see in natural algorithms.”
Ha and Jones are marquee names in the world of AI research. Jones, a Tokyo-based AI researcher, co-authored one of Google’s most influential papers in the field, “Attention Is All You Need,” which underpins many of today’s most popular AI products. Ha, also based in Tokyo, was previously Stability AI’s head of research. Before that, he focused on generative AI while working as a scientist at Alphabet Inc.’s Google Brain in Japan.
Sakana is still in the early stages: It hasn’t yet built an AI model and doesn’t have an office. The plan is to open one in Tokyo soon, Ha said. The company declined to comment about its fundraising status.
But the ideas Sakana is working with are more established. Near the end of his time at Google, Ha and a colleague launched a project dubbed “sensory neuron as a transformer,” deploying a fleet of small AI models to work together to play a game, rather than using one large model. Other researchers have also taken inspiration from the workings of the human brain. The term “artificial neural networks,” for example, refers to AI models programmed to process information in a way that’s roughly analogous to how people do, by using trial and error.

“The human brain still works better than our best AI,” Jones said. “So, clearly the human brain is doing something right that we haven’t quite caught onto yet.”
Jones and Ha sat near each other in Google’s Tokyo offices, and the pair stayed in touch after Ha left the company. Following their years at the internet behemoth, they eventually gravitated toward startups. Ha’s role at Stability AI meant he was spending a lot of time building research teams, he said, and he longed to return to conducting research. And Jones felt fenced in at Google.
“It’s unfortunately true to say I have so much more velocity outside of Google,” he said, noting that the need to secure approvals and resources could slow the process of working on innovative technology at a large company. When Ha suggested that they found a startup, he said, “It just made a lot of sense to me.”