
Meta's AI chief doesn't think AI superintelligence is coming anytime soon, and is skeptical of quantum computing


Yann LeCun, chief AI scientist at Meta, speaks at the Viva Tech conference in Paris, June 13, 2023.

Chesnot | Getty Images News | Getty Images

Meta’s chief AI scientist and deep learning pioneer Yann LeCun said he believes that current AI systems are decades away from reaching some semblance of sentience, equipped with the common sense that could push their abilities beyond merely summarizing mountains of text in creative ways.

His point of view stands in contrast to that of Nvidia CEO Jensen Huang, who recently said AI will be “fairly competitive” with humans in less than five years, besting people at a multitude of mentally intensive tasks.

“I know Jensen,” LeCun said at a recent event marking the 10th anniversary of the Facebook parent company’s Fundamental AI Research team. LeCun said the Nvidia CEO has much to gain from the AI craze. “There is an AI war, and he’s supplying the weapons.”

“[If] you think AGI is in, the more GPUs you have to buy,” LeCun said of technologists attempting to develop artificial general intelligence, the kind of AI on par with human-level intelligence. As long as researchers at firms such as OpenAI continue their pursuit of AGI, they will need more of Nvidia’s computer chips.

Society is more likely to get “cat-level” or “dog-level” AI years before human-level AI, LeCun said. And the technology industry’s current focus on language models and text data will not be enough to create the kinds of advanced human-like AI systems that researchers have been dreaming about for decades.


“Text is a very poor source of information,” LeCun said, explaining that it would likely take 20,000 years for a human to read the amount of text that has been used to train modern language models. “Train a system on the equivalent of 20,000 years of reading material, and they still don’t understand that if A is the same as B, then B is the same as A.”

“There’s a lot of really basic things about the world that they just don’t get through this kind of training,” LeCun said.
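LeCun’s figure is easy to sanity-check. Here is a back-of-envelope sketch, assuming a reading speed of about 250 words per minute for eight hours a day and a training corpus of roughly 1.4 trillion tokens, about the size reported for Meta’s original Llama model; all of these inputs are assumptions, not numbers from the event.

```python
# Back-of-envelope check on the "20,000 years of reading" figure.
# Every constant below is an assumption for illustration, not a number
# from the article.
WORDS_PER_MINUTE = 250      # typical adult reading speed
HOURS_PER_DAY = 8           # treating reading as a full-time job
TOKENS_PER_WORD = 1.3       # rough English tokenization ratio
CORPUS_TOKENS = 1.4e12      # ~1.4T tokens, about the Llama 1 corpus size

words_per_year = WORDS_PER_MINUTE * 60 * HOURS_PER_DAY * 365
corpus_words = CORPUS_TOKENS / TOKENS_PER_WORD
print(f"{corpus_words / words_per_year:,.0f} years")  # ~24,600 years
```

At those assumptions the answer lands near 25,000 years, the same order of magnitude as LeCun’s claim.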

Hence, LeCun and other Meta AI executives have been heavily researching how the so-called transformer models used to create apps such as ChatGPT could be tailored to work with a variety of data, including audio, image and video information. The more these AI systems can discover the likely billions of hidden correlations between these various kinds of data, the more fantastical the feats they could potentially perform, the thinking goes.
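As a rough illustration of the general idea, here is a minimal sketch of one common multimodal pattern; it is not Meta’s actual architecture, and every dimension and class name in it is invented for the example. Each modality’s features are projected into a shared embedding space, concatenated into a single token sequence, and fed to a standard transformer encoder so attention can run across modalities.

```python
import torch
import torch.nn as nn

class MultimodalEncoder(nn.Module):
    """Toy fusion model: one shared transformer over text, audio and image tokens."""
    def __init__(self, d_model=256, text_dim=768, audio_dim=128, image_dim=512):
        super().__init__()
        # Per-modality projections into one shared embedding width.
        self.text_proj = nn.Linear(text_dim, d_model)
        self.audio_proj = nn.Linear(audio_dim, d_model)
        self.image_proj = nn.Linear(image_dim, d_model)
        layer = nn.TransformerEncoderLayer(d_model, nhead=8, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=4)

    def forward(self, text, audio, image):
        # Each input: (batch, seq_len, modality_dim).
        tokens = torch.cat([
            self.text_proj(text),
            self.audio_proj(audio),
            self.image_proj(image),
        ], dim=1)                    # one joint token sequence
        return self.encoder(tokens)  # attention spans all modalities

# Random features stand in for the outputs of real per-modality encoders.
model = MultimodalEncoder()
out = model(torch.randn(2, 16, 768),   # 16 text tokens
            torch.randn(2, 10, 128),   # 10 audio frames
            torch.randn(2, 49, 512))   # 49 image patches
print(out.shape)  # torch.Size([2, 75, 256])
```

The cross-modal correlations LeCun describes would be learned in the encoder’s attention layers, where a text token can attend directly to an image patch or an audio frame.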

Some of Meta’s research includes software that can help teach people how to play tennis better while wearing the company’s Project Aria augmented reality glasses, which blend digital graphics into the real world. Executives showed a demo in which a person wearing the AR glasses while playing tennis was able to see visual cues teaching them how to properly hold their tennis racket and swing their arms in perfect form. The kinds of AI models needed to power this type of digital tennis assistant require a blend of three-dimensional visual data in addition to text and audio, in case the digital assistant needs to speak.


These so-called multimodal AI systems represent the next frontier, but their development won’t come cheap. And as more companies such as Meta and Google parent Alphabet research more advanced AI models, Nvidia could stand to gain even more of an edge, particularly if no other competition emerges.

The AI hardware of the future

Nvidia has been the biggest beneficiary of generative AI, with its pricey graphics processing units becoming the standard tool used to train massive language models. Meta relied on 16,000 Nvidia A100 GPUs to train its Llama AI software.

CNBC asked LeCun whether the tech industry will need more hardware providers as Meta and other researchers continue their work developing these kinds of sophisticated AI models.

“It doesn’t require it, but it would be nice,” LeCun said, adding that the GPU technology is still the gold standard when it comes to AI.

Still, the computer chips of the future may not be called GPUs, he said.

“What you’re going to see hopefully emerging are new chips that are not graphical processing units, they are just neural, deep learning accelerators,” LeCun said.

LeCun is also somewhat skeptical about quantum computing, which tech giants such as Microsoft, IBM, and Google have all poured resources into. Many researchers outside Meta believe quantum computing machines could supercharge advancements in data-intensive fields such as drug discovery, because so-called quantum bits, or qubits, can represent many states at once, unlike the conventional binary bits used in modern computing, which hold a single on-or-off value.
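The difference between bits and qubits comes down to state counting: n classical bits hold one of 2^n values at a time, while n qubits carry a complex amplitude for every one of the 2^n values at once. The sketch below simulates that state vector classically with NumPy, which also hints at the cost argument, since the classical memory needed grows as 2^n.

```python
import numpy as np

# Single-qubit Hadamard gate: maps |0> to an equal superposition.
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)

n = 10
state = np.array([1.0])                # n-qubit register, starting at |00...0>
for _ in range(n):
    state = np.kron(state, H[:, 0])    # add one qubit in superposition

print(state.shape)  # (1024,) -- 2**10 amplitudes tracked classically
print(state[0])     # 0.03125 = 1/sqrt(2**10), the same for every amplitude
```

Ten qubits already mean tracking 1,024 amplitudes; 50 would mean about 10^15, which is why classical simulation breaks down quickly and why the debate centers on whether useful physical machines can actually be built.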

But LeCun has his doubts.

“The number of problems you can solve with quantum computing, you can solve way more efficiently with classical computers,” LeCun said.

Readers Also Like:  Fitbit fans warn latest update leaves devices ‘useless’

“Quantum computing is a fascinating scientific topic,” LeCun said. What’s less clear, he said, is the “practical relevance and the possibility of actually fabricating quantum computers that are actually useful.”

Meta senior fellow and former tech chief Mike Schroepfer concurred, saying that he evaluates quantum technology every few years and believes that useful quantum machines “may come at some point, but it’s got such a long time horizon that it’s irrelevant to what we’re doing.”

“The reason we started an AI lab a decade ago was that it was very obvious that this technology is going to be commercializable within the next [few] years’ time frame,” Schroepfer said.

WATCH: Meta on the defensive amid reports of Instagram’s harm
