Yann LeCun, chief AI scientist at Meta, speaks at the Viva Tech conference in Paris, June 13, 2023.
Chesnot | Getty Images News | Getty Images
Meta’s chief scientist and deep learning pioneer Yann LeCun said he believes that current AI systems are many years away from reaching some semblance of sentience, equipped with common sense that may push their abilities beyond merely summarizing mountains of text in creative ways.
His viewpoint stands in contrast to that of Nvidia CEO Jensen Huang, who recently said AI will likely be "fairly competitive" with humans in less than five years, besting people at a multitude of mentally intensive tasks.
"I know Jensen," LeCun said at a recent event marking the 10-year anniversary of the Facebook parent company's Fundamental AI Research team. LeCun said the Nvidia CEO has a lot to gain from the AI craze. "There is an AI war, and he's supplying the weapons."
"[If] you think AGI is in, the more GPUs you have to buy," LeCun said, of technologists attempting to develop artificial general intelligence, the kind of AI on par with human-level intelligence. As long as researchers at firms such as OpenAI continue their pursuit of AGI, they will need more of Nvidia's computer chips.
Society is more likely to get "cat-level" or "dog-level" AI years before human-level AI, LeCun said. And the technology industry's current focus on language models and text data will not be enough to create the kinds of advanced, human-like AI systems that researchers have been dreaming about for decades.
"Text is a very poor source of information," LeCun said, explaining that it would likely take 20,000 years for a human to read the amount of text that has been used to train modern language models. "Train a system on the equivalent of 20,000 years of reading material, and they still don't understand that if A is the same as B, then B is the same as A."
"There's a lot of really basic things about the world that they just don't get through this kind of training," LeCun said.
Hence, LeCun and other Meta AI executives have been heavily researching how the so-called transformer models used to create apps such as ChatGPT could be tailored to work with a variety of data, including audio, image and video information. The more these AI systems can discover the likely billions of hidden correlations between these different kinds of data, the thinking goes, the more fantastical feats they could potentially perform.
Some of Meta's research includes software that can help teach people how to play tennis better while wearing the company's Project Aria augmented reality glasses, which blend digital graphics into the real world. Executives showed a demo in which a person wearing the AR glasses while playing tennis was able to see visual cues teaching them how to properly hold their tennis racket and swing their arm in perfect form. The kinds of AI models needed to power this sort of digital tennis assistant require a blend of three-dimensional visual data along with text and audio, in case the digital assistant needs to speak.
These so-called multimodal AI systems represent the next frontier, but their development won't come cheap. And as more companies such as Meta and Google parent Alphabet research more advanced AI models, Nvidia could stand to gain even more of an edge, particularly if no other competition emerges.
The AI hardware of the future
Nvidia has been the biggest beneficiary of generative AI, with its pricey graphics processing units becoming the standard tool used to train massive language models. Meta relied on 16,000 Nvidia A100 GPUs to train its Llama AI software.
CNBC asked LeCun whether the tech industry will need more hardware providers as Meta and other researchers continue their work developing these kinds of sophisticated AI models.
"It doesn't require it, but it would be nice," LeCun said, adding that GPU technology is still the gold standard when it comes to AI.
Still, the computer chips of the future may not be called GPUs, he said.
"What you're going to see hopefully emerging are new chips that are not graphical processing units, they are just neural, deep learning accelerators," LeCun said.
LeCun is also somewhat skeptical about quantum computing, which tech giants such as Microsoft, IBM and Google have all poured resources into. Many researchers outside Meta believe quantum computing machines could supercharge advancements in data-intensive fields such as drug discovery, as they're able to perform multiple calculations with so-called quantum bits, as opposed to the standard binary bits used in modern computing.
But LeCun has his doubts.
"The number of problems you can solve with quantum computing, you can solve much more efficiently with classical computers," LeCun said.
"Quantum computing is a fascinating scientific topic," LeCun said. What is less clear, he said, is its "practical relevance and the possibility of actually fabricating quantum computers that are actually useful."
Meta senior fellow and former technology chief Mike Schroepfer concurred, saying that he evaluates quantum technology every few years and believes that useful quantum machines "may come at some point, but it's got such a long time horizon that it's irrelevant to what we're doing."
"The reason we started an AI lab a decade ago was that it was very obvious that this technology is going to be commercializable within the next few years' time frame," Schroepfer said.