Yann LeCun, the Turing Award-winning AI pioneer who spent over a decade as Meta’s Chief AI Scientist, has raised $1.03 billion for AMI Labs to build “world models”—AI systems that learn from physical reality rather than text. The March 2026 funding round, Europe’s largest seed round ever, values the Paris-based startup at $3.5 billion and brings backing from Jeff Bezos, Mark Cuban, Eric Schmidt, and Nvidia. LeCun left Meta in November 2025 after a strategic disagreement over whether large language models represent the path to artificial general intelligence.
The Contrarian Bet Against LLMs
LeCun’s departure from Meta and his billion-dollar bet on world models challenge the AI industry’s near-total focus on large language models. His argument: LLMs lack physical grounding and cannot achieve AGI because they are optimized to predict the next word, not the next state of the world. An LLM can explain gravity but cannot simulate it accurately, and it has no real notion of time: a model trained on data from 2000 and from 2023 cannot distinguish the two temporally or tell which information supersedes the other.
World models take a different approach. Built on LeCun’s Joint Embedding Predictive Architecture (JEPA), proposed in 2022, they learn from visual and sensor data (video, images, LiDAR) to understand causality, physics, and object permanence. Where LLMs predict tokens, world models predict states: they can simulate “what happens if I do X?” before taking action, maintain persistent memory of the environment, and reason about consequences. Nvidia CEO Jensen Huang captured the moment at GTC 2026: “The ChatGPT moment for robotics is here.”
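The “simulate before acting” loop described above can be sketched as a toy model-predictive planner. Everything here is illustrative and simplified: `predict` stands in for a learned world model (here a hand-written 1-D point mass, not a trained network), and the function and parameter names are assumptions for the sketch, not AMI Labs code.

```python
import random

def predict(state, action):
    # Stand-in for a learned world model: a toy 1-D point mass where
    # state = (position, velocity) and action = acceleration for one step.
    pos, vel = state
    return (pos + vel, vel + action)

def plan(state, goal, horizon=5, candidates=200):
    """Simulate random action sequences in the model and return the first
    action of the rollout that ends closest to the goal position."""
    best_cost, best_action = float("inf"), 0.0
    for _ in range(candidates):
        actions = [random.uniform(-1.0, 1.0) for _ in range(horizon)]
        s = state
        for a in actions:
            s = predict(s, a)          # imagine the consequence, don't act yet
        cost = abs(s[0] - goal)        # how far from the goal did we end up?
        if cost < best_cost:
            best_cost, best_action = cost, actions[0]
    return best_action
```

The key point is that every `predict` call happens inside the agent’s “imagination”; only the single best first action would be executed in the real environment, after which the agent replans from the new state.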
Why LLMs and World Models Diverge
The fundamental difference comes down to training objectives and data:
- LLMs: Trained on text sequences (tokens), excel at Q&A, writing, code generation, and summarization. Understand linguistic patterns but lack embodiment.
- World Models: Trained on visual/sensor data, understand physical dynamics like gravity and inertia. Built for robotics, planning, and control in environments where reliability matters.
JEPA’s innovation is prediction in abstract representation space rather than pixel space. Instead of reconstructing every detail of an image, it encodes essential features while dropping noise, then predicts the next abstract state. This “digital intuition” focuses on underlying physical rules, making it efficient for real-time control in robotics, autonomous systems, and industrial automation.
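A minimal numeric sketch of that idea, using hand-picked toy numbers instead of a trained network: the encoder compresses each “frame” to a small abstract state, and the loss compares predicted and actual embeddings rather than reconstructed pixels. All names, weights, and the one-line “dynamics” are illustrative assumptions, not the actual JEPA implementation.

```python
def encode(x, w):
    # Toy linear encoder: project a flattened "frame" to a low-dimensional
    # abstract representation, discarding fine pixel-level detail.
    return [sum(wi * xi for wi, xi in zip(row, x)) for row in w]

def embedding_loss(pred, target):
    # JEPA-style objective: squared error between predicted and actual
    # *embeddings* -- no pixel reconstruction anywhere.
    return sum((p - t) ** 2 for p, t in zip(pred, target))

# Two consecutive toy "frames" of a scene (every value shifts up by 1).
frame_t  = [1.0, 2.0, 3.0, 4.0]
frame_t1 = [2.0, 3.0, 4.0, 5.0]
w = [[0.5, 0.5, 0.0, 0.0],
     [0.0, 0.0, 0.5, 0.5]]          # fixed toy encoder weights

z_t  = encode(frame_t, w)           # abstract state at time t
z_t1 = encode(frame_t1, w)          # abstract state at time t+1

def predictor(z):
    # Hypothetical learned dynamics acting purely in representation space:
    # because the encoder is linear and every pixel rises by 1, the true
    # dynamics here is simply "+1" per embedding dimension.
    return [zi + 1.0 for zi in z]

loss = embedding_loss(predictor(z_t), z_t1)
```

In this toy case the predictor captures the scene’s dynamics exactly, so the embedding loss is zero even though the raw frames were never reconstructed; that separation of “what matters” from pixel detail is the efficiency argument the paragraph above makes.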
The Meta Divorce
LeCun’s departure followed months of organizational turmoil at Meta. In June 2025, CEO Mark Zuckerberg invested $14.5 billion in Scale AI to recruit 28-year-old Alexandr Wang as Meta’s Chief AI Officer. CNBC reported that FAIR (Facebook AI Research), the lab LeCun built, underwent restructuring and layoffs. Consequently, LeCun, who previously reported to Chief Product Officer Chris Cox, found himself reporting to Wang. A philosophical split emerged: LeCun championed open-source AI research, while Wang favored a closed approach amid competition with OpenAI and Google.
Meta’s Llama 4 model disappointed developers in late 2025, lending weight to LeCun’s concerns about the LLM-first strategy. His November departure statement made his mission clear: “Bringing about the next big revolution in AI: systems that understand the physical world, have persistent memory, can reason, and can plan complex action sequences.”
Europe’s Largest Seed Round
The $1.03 billion round, led by Cathay Innovation, Greycroft, HV Capital, and Bezos Expeditions, marks a turning point for European AI. Nvidia, Samsung, Toyota Ventures, and Singapore’s Temasek joined individual investors including Bezos, Cuban, Schmidt, and Web inventor Tim Berners-Lee. The pre-money valuation of $3.5 billion, set before the company has shipped any product, reflects confidence in LeCun’s contrarian vision.
AMI Labs CEO Alexandre LeBrun, LeCun’s former Meta colleague and Nabla healthtech co-founder, predicted “world models will be the next buzzword. Within six months, many companies will adopt the terminology to attract funding.” The deliberate leadership split puts LeCun on long-term scientific direction while LeBrun handles operations.
From Labs to Production
AMI Labs targets high-stakes industries where hallucinations aren’t acceptable: healthcare (surgical robots), industrial robotics (manufacturing, logistics), aerospace (autonomous navigation), and wearables requiring spatial awareness. LeBrun estimated “about one year to get the first things we can use in the product,” placing first applications in Q1 2027.
The timing aligns with the physical AI revolution gaining momentum in 2026. At Nvidia’s GTC conference in March, robotics leaders including ABB, FANUC, Figure, and Medtronic showcased systems built on world models and physical AI. Nvidia announced Cosmos 3, a world foundation model unifying synthetic world generation, vision reasoning, and action simulation.
The Debate: Did Meta Blink?
LeCun’s bet raises uncomfortable questions. Did Meta make a strategic error choosing LLMs over world models? Can world models coexist with LLMs, or will one paradigm dominate? The AI industry has poured over $100 billion into LLM development—OpenAI’s GPT-5.4, Anthropic’s Claude Sonnet 4.6, Google’s Gemini—while world models remained niche research.
LeCun’s track record matters. His pioneering work on convolutional neural networks earned him the 2018 Turing Award and underpins modern computer vision. If he’s right that LLMs hit fundamental scaling limits, AMI Labs could define the next decade of AI. If he’s wrong, the company has $1 billion and a year to prove world models work outside research labs. Either way, the contrarian bet is now official.

