Fundamental problems elude many strains of deep learning, says LeCun, including the mystery of how to measure information.
"I think AI systems need to be able to reason," says Yann LeCun, Meta's chief AI scientist. Today's popular AI approaches such as Transformers, many of which build upon his own pioneering work in the field, will not be sufficient.
"You have to take a step back and say, Okay, we built this ladder, but we want to go to the moon, and there's no way this ladder is going to get us there," says LeCun.
Yann LeCun, chief AI scientist of Meta Platforms, owner of Facebook, Instagram, and WhatsApp, is likely to tick off a lot of people in his field.
With the posting in June of a think piece on the OpenReview server, LeCun offered a broad overview of an approach he thinks holds promise for achieving human-level intelligence in machines.
Implied if not articulated in the paper is the contention that most of today's big projects in AI will never be able to reach that human-level goal.
In a discussion this month with ZDNET via Zoom, LeCun made clear that he views with great skepticism many of the most successful avenues of research in deep learning at the moment.
"I think they're necessary but not sufficient," the Turing Award winner told ZDNET of his peers' pursuits.
Those include large language models such as the Transformer-based GPT-3 and their ilk. As LeCun characterizes it, the Transformer devotees believe,
"We tokenize everything, and train gigantic models to make discrete predictions, and somehow AI will emerge out of this."
"They're not wrong," he says, "in the sense that that may be a component of a future intelligent system, but I think it's missing essential pieces.
It's a startling critique of what appears to work, coming from the scholar who perfected the use of convolutional neural networks, a practical technique that has been incredibly productive in deep learning programs.
LeCun sees flaws and limitations in plenty of other highly successful areas of the discipline.
Reinforcement learning will also never be enough, he maintains.
Researchers such as David Silver of DeepMind, who developed the AlphaZero program that mastered Chess, Shogi and Go, are focusing on programs that are "very action-based," observes LeCun, but "most of the learning we do, we don't do it by actually taking actions, we do it by observing."
LeCun, 62, from a perspective of decades of achievement, nevertheless expresses an urgency to confront what he thinks are the blind alleys toward which many may be rushing, and to try to coax his field in the direction he thinks it should go.
"We see a lot of claims as to what should we do to push forward towards human-level AI," he says.
"And there are ideas which I think are misdirected.
"We're not to the point where our intelligent machines have as much common sense as a cat," observes Lecun. "So, why don't we start there?"
He has abandoned his prior faith in using generative networks in things such as predicting the next frame in a video.
"It has been a complete failure," he says. LeCun decries those he calls the "religious probabilists," who "think probability theory is the only framework that you can use to explain machine learning."
The purely statistical approach is intractable, he says.
"It's too much to ask for a world model to be completely probabilistic; we don't know how to do it.
"Not just the academics, but industrial AI needs a deep re-think, argues LeCun.
The self-driving car crowd, startups such as Wayve, have been "a little too optimistic," he says, by thinking they could "throw data at" large neural networks "and you can learn pretty much anything."
"You know, I think it's entirely possible that we'll have level-five autonomous cars without common sense," he says, referring to the "ADAS," advanced driver assistance system terms for self-driving, "but you're going to have to engineer the hell out of it."
Such over-engineered self-driving tech will be as creaky and brittle as all the computer vision programs that were made obsolete by deep learning, he believes.
"Ultimately, there's going to be a more satisfying and possibly better solution that involves systems that do a better job of understanding the way the world works."
Along the way, LeCun offers some withering views of his biggest critics, such as NYU professor Gary Marcus — "he has never contributed anything to AI" — and Jürgen Schmidhuber, co-director of the Dalle Molle Institute for Artificial Intelligence Research — "it's very easy to do flag-planting."
Beyond the critiques, the more important point made by LeCun is that certain fundamental problems confront all of AI, in particular, how to measure information.
"You have to take a step back and say, Okay, we built this ladder, but we want to go to the moon, and there's no way this ladder is going to get us there," says LeCun of his desire to prompt a rethinking of basic concepts.
"Basically, what I'm writing here is, we need to build rockets, I can't give you the details of how we build rockets, but here are the basic principles."
The paper, and LeCun's thoughts in the interview, can be better understood by reading LeCun's interview earlier this year with ZDNET in which he argues for energy-based self-supervised learning as a path forward for deep learning.
Those reflections give a sense of the core approach to what he hopes to build as an alternative to the things he claims will not make it to the finish line.
For more detail, the in-depth interview follows.