Yann LeCun’s Advanced Machine Intelligence (AMI) Labs has raised $1.03 billion in seed funding, Europe’s largest seed round ever. Announced March 10, 2026, the raise backs LeCun’s claim that the AI industry’s $200 billion investment in large language models is fundamentally misguided. LeCun, a Turing Award winner and Meta’s former chief AI scientist, argues that LLMs cannot achieve true intelligence because they lack an understanding of the physical world. His alternative: world models built on the JEPA architecture that learn from reality itself (video, sensors, physical interactions) rather than from text descriptions.
The $1B Bet Against $200B
While OpenAI raised $110 billion in early March 2026 and Anthropic closed a $30 billion round in February, LeCun is betting that their entire LLM-based approach can’t deliver AGI. He’s not building a better language model; he’s arguing that LLMs are architecturally incapable of true intelligence. “There’s absolutely no way that autoregressive LLMs will reach human intelligence,” LeCun stated. His reasoning cuts deeper: “Most of human knowledge is actually not language.”
This is one of AI’s founding figures publicly declaring the industry wrong. If he’s right, $200 billion flows into a dead end. The timing amplifies the challenge—just as the industry doubles down on LLMs with record funding rounds, LeCun exits to commercialize an alternative architecture. He left Meta in November 2025, founded AMI Labs the same month, and closed $1.03 billion four months later.
Why LLMs Can’t Predict Consequences
World models learn from physical reality—video, sensors, spatial interactions—and build an internal simulation to answer “if I do X, what happens?” In contrast, LLMs learn from text descriptions and predict the next word. That difference matters for any AI that needs to understand physics and consequences. LeCun defines world models as “an abstract digital twin of reality that an AI can use to predict the consequences of its actions.”
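The “if I do X, what happens?” idea can be made concrete with a toy sketch. The functions below are hypothetical illustrations (not AMI Labs code): a two-variable physics rule stands in for a learned world model, and `imagine` rolls it forward to predict the consequences of a sequence of actions before any of them are taken.

```python
def world_model_step(state, action):
    # Toy dynamics: state is (position, velocity), action is an applied force.
    # A real world model would learn these dynamics from video and sensor data.
    pos, vel = state
    vel += action  # force changes velocity
    pos += vel     # velocity changes position
    return (pos, vel)

def imagine(state, actions):
    # Answer "if I do X, what happens?" by simulating forward
    # without acting in the real world.
    for a in actions:
        state = world_model_step(state, a)
    return state

# Push once, then coast for two steps: the model predicts where we end up.
final_state = imagine((0, 0), [1, 0, 0])
```

The point of the sketch is the `imagine` loop: an agent can evaluate candidate action sequences internally and pick one, which is exactly what next-word prediction over text does not provide.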
Research evidence supports LeCun’s criticism. A 2025 conference paper found that LLMs show “near-random accuracy when distinguishing motion trajectories.” LLMs excel at pattern matching in text but fail when asked to predict physical outcomes, and a robot that can’t predict whether a grip will work fails in the physical world. As LeCun puts it: “Agentic systems cannot exist without predicting consequences of actions, and LLMs cannot do this.”
JEPA: Learning in Abstract Space
JEPA (Joint Embedding Predictive Architecture) predicts abstract representations instead of raw pixels or tokens. The architecture focuses on essential features and ignores irrelevant details. LeCun proposed the framework in February 2022 as his blueprint for achieving human-like AI. Meta demonstrated I-JEPA in 2023 and V-JEPA in 2024, showing 6x efficiency gains over traditional approaches.
The technical mechanism works like this: take sequential frames, encode them into abstract representations, then train a predictor to forecast the next state. Crucially, the model learns to focus on essential features (motion direction, object identity, spatial relationships) while ignoring unpredictable noise. This makes JEPA far more data-efficient than models that predict every pixel.
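A minimal sketch of that mechanism, with heavy simplifications: the `encoder`, `predictor`, and `latent_loss` below are hypothetical stand-ins (the encoder keeps two coarse features instead of a learned embedding, and the predictor is an identity placeholder for a trained network). What the sketch preserves is the defining JEPA property: the loss is computed between representations, never between raw pixels.

```python
def encoder(frame):
    # Toy "abstract representation": keep coarse features (mean value,
    # value range) and discard per-pixel detail -- the JEPA idea of
    # abstracting away unpredictable noise.
    return (sum(frame) / len(frame), max(frame) - min(frame))

def predictor(rep):
    # Placeholder for a learned predictor: assume the scene's coarse
    # features carry over to the next frame.
    return rep

def latent_loss(pred, target):
    # Training signal lives in representation space; the model is
    # never asked to reconstruct raw pixels.
    return sum((p - t) ** 2 for p, t in zip(pred, target))

# Two consecutive "frames": the same scene plus pixel-level noise.
frame_t  = [0.2, 0.4, 0.6, 0.8]
frame_t1 = [0.21, 0.39, 0.61, 0.79]

loss = latent_loss(predictor(encoder(frame_t)), encoder(frame_t1))
```

Because the pixel noise barely moves the abstract features, the latent loss is tiny, which is the data-efficiency argument in miniature: a pixel-reconstruction model would spend capacity modeling that noise instead.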
Robotics First, Where Physics Matters
AMI Labs targets robotics and autonomous transport first because these domains expose LLM limitations immediately. A self-driving car must predict traffic physics and pedestrian motion; a manufacturing robot must simulate whether a grip will hold. Text-based reasoning can’t solve these problems: you need physics simulation.
The choice of initial domains is strategic. If AMI Labs succeeds in robotics, it demonstrates that world models solve problems LLMs cannot. Alexandre LeBrun, AMI Labs’ CEO, came from the medical AI company Nabla, where LLM hallucinations had “life-threatening repercussions.” Both executives are betting their careers on world models being commercially necessary, not just theoretically superior.
Nvidia Hedges Both Sides
Nvidia invested in OpenAI, Anthropic, and AMI Labs. The chip maker is backing both LLMs and world models because even hardware suppliers don’t know which architecture wins. When the companies selling the chips hedge their bets, it signals genuine uncertainty about AI’s future direction. Hybrid approaches that use LLMs for language understanding and world models for physics simulation may prove the safer path.
What Happens Next
LeCun is betting his reputation that the AI industry’s dominant approach is a dead end. The next two years will show whether the $200 billion invested in LLMs was well spent or whether world models deliver capabilities LLMs cannot. Either way, AMI Labs’ first robotics demonstrations will provide early signals.
For developers, the practical question is simpler: when do world models matter for your work? If you’re building robots or autonomous systems, world models may already be necessary. If you’re building text-based applications, LLMs remain the better fit. The architectures serve different domains.