Yann LeCun, Meta’s Chief AI Scientist and Turing Award winner, just launched Advanced Machine Intelligence (AMI) Labs with a $3.5 billion valuation before building a single product. The startup, confirmed December 19, 2025, is raising $586 million to bet against the entire AI industry’s obsession with large language models. This isn’t a minor course correction—it’s the highest-profile challenge yet to OpenAI, Anthropic, and Google’s LLM-first approach.
What Are World Models?
LLMs predict the next word in a sentence. World models predict the next state of the world. The difference matters.
World models learn by watching video and simulation data, building internal representations of how physics works. They understand gravity, collisions, object persistence, and cause-and-effect. Instead of guessing words based on patterns, they simulate reality.
Think of it this way: LLMs are autocomplete on steroids. World models are mental rehearsal. They maintain “persistent memory”—a state of the world that updates as things change—and can run “what-if” scenarios before acting. This makes them useful for robotics, autonomous vehicles, and healthcare diagnostics: anything operating in physical reality rather than text.
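To make the contrast concrete, here is a minimal, hypothetical Python sketch of that loop. The transition function below is a hand-coded stand-in for a learned dynamics model, and every name in it (State, predict_next, rehearse, plan) is illustrative; it says nothing about AMI Labs' actual architecture.

```python
# Toy sketch of the world-model loop: predict the next state, "rehearse"
# candidate actions in imagination, then act. Illustrative only.

from dataclasses import dataclass

@dataclass
class State:
    position: float   # e.g., a ball's height in meters
    velocity: float   # meters per second

def predict_next(state: State, upward_force: float, dt: float = 0.1) -> State:
    # Stand-in for a learned dynamics model: simple gravity plus an applied force.
    GRAVITY = -9.8
    velocity = state.velocity + (GRAVITY + upward_force) * dt
    position = max(0.0, state.position + velocity * dt)
    return State(position, velocity)

def rehearse(state: State, upward_force: float, horizon: int = 20) -> float:
    # "What-if" rollout: simulate the future without touching the real world.
    for _ in range(horizon):
        state = predict_next(state, upward_force)
    return state.position  # score the imagined outcome

def plan(state: State, candidates: list[float]) -> float:
    # Pick the action whose imagined future keeps the ball highest.
    return max(candidates, key=lambda force: rehearse(state, force))

if __name__ == "__main__":
    print("chosen force:", plan(State(position=1.0, velocity=0.0), [0.0, 5.0, 12.0]))
```

An LLM, by contrast, maintains no explicit world state like this to update; it only conditions on the text it has produced so far.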
Why LeCun Hates LLMs
LeCun’s core technical argument: “Auto-Regressive LLMs are exponentially diverging diffusion processes.” Translation: every token an LLM generates has some probability of being wrong, and those errors compound. If e is the error probability per token, the probability that an n-token answer is entirely correct is (1-e)^n, so long answers become exponentially less reliable. The math is unforgiving.
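To see just how unforgiving, here is a quick back-of-the-envelope calculation in Python. It assumes a fixed, independent per-token error rate, which is a simplification for illustration rather than a measured property of any real model.

```python
# Illustrative only: plug sample numbers into the (1-e)^n formula above.
# Assumes each token fails independently with the same probability e.

def p_correct(error_per_token: float, length: int) -> float:
    """Probability that an answer of `length` tokens contains no errors."""
    return (1 - error_per_token) ** length

for n in (10, 100, 1000):
    print(f"e=1%, n={n:>4}: P(all tokens correct) = {p_correct(0.01, n):.3f}")
# e=1%, n=  10: P(all tokens correct) = 0.904
# e=1%, n= 100: P(all tokens correct) = 0.366
# e=1%, n=1000: P(all tokens correct) = 0.000
```

Under these toy assumptions, even a 1% per-token error rate makes a 1,000-token answer almost certainly flawed somewhere.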
Beyond the math, LeCun argues LLMs have fundamental limitations. “They don’t really understand the physical world. They don’t really have persistent memory. They can’t really reason and they certainly can’t plan,” he stated publicly on LinkedIn. Furthermore, this isn’t a fringe view from a contrarian—it’s a Turing Award winner’s decade-long conviction.
Whether he’s right remains hotly debated. OpenAI, Anthropic, and Google are betting billions that LLMs will scale to AGI. Meanwhile, LeCun is betting $3.5 billion they won’t.
The Players and Partnership
Alex LeBrun, former CEO of Nabla (medical AI transcription) and ex-Facebook AI leader, joins as CEO of AMI Labs. LeBrun previously sold Wit.ai to Facebook in 2015 and led their AI division. Additionally, Nabla gets exclusive first access to world models technology, targeting FDA-certifiable diagnostic AI systems. Nabla is already deployed at Kaiser Permanente, so this isn't vaporware; it's proven execution meeting moonshot research.
LeCun chose Paris over Silicon Valley deliberately. “Silicon Valley is completely hypnotised by generative models, and so you have to do this kind of work outside of Silicon Valley, in Paris,” he explained. The geography is symbolic: Europe positioning as the alternative AI hub while Silicon Valley doubles down on LLMs.
The $3.5B Question
Here’s the uncomfortable truth: AMI Labs has zero revenue, zero product, and zero customers at a $3.5 billion valuation. Moreover, LeCun has publicly stated world models will take a decade to develop. Critics call this “the weirdest technology market” where “researchers are getting rewarded with VC money to try what remains a science experiment.”
They’re not entirely wrong. However, they might not be entirely right either. LeCun isn’t a hype merchant—he’s been warning about LLM limitations since 2022 while the rest of the industry chased GPT. His track record building Meta’s AI research division from nothing to world-class earns him credibility. Therefore, the question isn’t whether he knows AI; it’s whether he’s right about this specific bet.
The timing amplifies both the risk and the opportunity. This launches right at peak LLM hype (GPT-5.2, Gemini 3 Flash, Claude 4.5 all shipping), even as concerns about an AI investment bubble grow. If LLMs plateau or hit fundamental limits, LeCun looks prescient. Conversely, if they break through to AGI, this becomes a $3.5 billion cautionary tale.
What This Means for Developers
Short-term: watch with interest, but don't change course. LLMs still dominate for most applications, and world models remain research-stage.
Medium-term (3-5 years): expect hybrid systems combining LLMs for language with world models for physical reasoning, along with new tools for robotics and medical diagnostics. In-demand skills will shift from prompt engineering toward physics simulation.
Long-term (5-10 years): if LeCun is right, we’re witnessing a paradigm shift. Alternatively, if he’s wrong, LLMs evolve and world models stay niche. Most likely both coexist, optimized for different use cases.
The real bet to watch isn’t just AMI Labs versus OpenAI. Rather, it’s whether AI’s next breakthrough comes from scaling current architectures or fundamentally rethinking them. Ultimately, LeCun’s placing $3.5 billion on rethinking. That’s a bet worth watching, whether or not you’d make it yourself.