The assumption running through most AI discourse right now is that LLMs just need to get bigger, or cheaper, or better-aligned, and the remaining hard problems will yield. Physical reasoning is on that list. Robotics. Causality. Anything requiring a model to understand why something happens in the world, not just predict what word comes next.
Yann LeCun left Meta in late 2025 to found AMI Labs. The company raised $1.03B at a $3.5B valuation in March 2026 — Europe's largest-ever seed round — on the premise that this assumption is wrong.
That's not a research grant. That's a declaration.
What LLMs Are Actually Bad At
LLMs predict tokens. That's the whole thing. They learn extraordinarily rich statistical patterns over language — and it turns out language is a surprisingly good compressed representation of a lot of human knowledge. So they look like they understand causality. They look like they can reason about physics. Until you stress-test them on tasks where the world doesn't reduce to text patterns.
Physical reasoning is one of those tasks. Embodied understanding — how objects behave, how forces propagate, what happens when you drop something versus throw it — doesn't compress cleanly into token sequences. You can describe it, and LLMs can echo those descriptions back fluently. But description isn't modeling.
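To make that concrete, here's the whole mechanism shrunk to a toy. A bigram count table stands in for the trained network (the corpus and code here are invented for illustration), yet it will confidently tell you that things fall down:

```python
# Toy next-token predictor: an LLM's objective, shrunk to a count table.
# Real models replace the table with a trained network, but the task is
# the same: given context, score the next token.
from collections import Counter, defaultdict

corpus = "the ball falls down . the cup falls down . the ball rolls away .".split()

counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def predict_next(token):
    # Most frequent continuation seen in training data; no physics involved.
    return counts[token].most_common(1)[0][0]

print(predict_next("falls"))  # -> "down"
```

It gets "down" right because "falls down" appears in the data, not because anything fell.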
This isn't a criticism of LLMs. It's an observation about architecture. A hammer isn't broken because it's bad at driving screws.
The Architecture Difference
LeCun's JEPA (Joint Embedding Predictive Architecture) doesn't predict tokens. It operates in latent space — learning abstract representations of how environments evolve, what actions are possible, what's invariant across changes. It builds internal representations of how things work, not how people describe how things work.
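Here's a minimal sketch of what a JEPA-style training step looks like in PyTorch. The MLP encoders, dimensions, and EMA coefficient below are placeholders invented for illustration, not AMI Labs' actual design:

```python
# JEPA-style step: predict the target's *latent representation*,
# never its raw pixels or tokens.
import torch
import torch.nn as nn
import torch.nn.functional as F

DIM = 256  # latent dimension (arbitrary for this sketch)

def mlp():
    return nn.Sequential(nn.Linear(1024, DIM), nn.ReLU(), nn.Linear(DIM, DIM))

encoder, target_encoder, predictor = mlp(), mlp(), nn.Linear(DIM, DIM)
target_encoder.load_state_dict(encoder.state_dict())
for p in target_encoder.parameters():
    p.requires_grad = False  # target branch updates by EMA, not gradients

opt = torch.optim.AdamW(
    list(encoder.parameters()) + list(predictor.parameters()), lr=1e-4
)

def train_step(context, target):
    # context/target: two views of one scene, e.g. frame t and frame t+1,
    # flattened to 1024-dim vectors for this toy.
    with torch.no_grad():
        z_target = target_encoder(target)   # latent to be predicted
    z_pred = predictor(encoder(context))    # prediction lives in latent space
    loss = F.mse_loss(z_pred, z_target)     # compare representations, not pixels
    opt.zero_grad(); loss.backward(); opt.step()
    with torch.no_grad():                   # keep target a slow EMA copy
        for p, tp in zip(encoder.parameters(), target_encoder.parameters()):
            tp.lerp_(p, 0.01)
    return loss.item()

# One step on random stand-in data:
print(train_step(torch.randn(8, 1024), torch.randn(8, 1024)))
```

The point is the loss line: the model is graded on whether its internal representation of what happens next matches the world's, which is what "simulates rather than summarizes" looks like in code.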
DeepMind's Genie 3 — a real-time interactive world model generating persistent 3D environments — and World Labs' Marble (Fei-Fei Li's company) are working the same seam. Different approaches, same thesis: for tasks requiring grounded physical understanding, you want a model that simulates the world, not one that summarizes it.
The neuro-symbolic angle runs parallel to this. Combining neural networks with symbolic reasoning — explicit logical rules, formal constraints — is gaining traction specifically in high-stakes domains where hallucination isn't a UX problem, it's a liability. Healthcare diagnosis. Legal reasoning. Engineering specifications. In those domains, "mostly right" is worse than wrong, because it's confident.
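The shape of that combination is easy to show. In this sketch (the drugs, scores, and rules are all hypothetical), a neural model proposes and scores candidates, and explicit symbolic constraints veto anything that violates a hard rule before it is ever emitted:

```python
# Neuro-symbolic filtering: learned scores propose, explicit rules dispose.
from dataclasses import dataclass

@dataclass
class Patient:
    age: int
    pregnant: bool

# Hypothetical candidate treatments, scored by some upstream neural model.
neural_scores = {"drug_a": 0.91, "drug_b": 0.88, "drug_c": 0.40}

# Hard constraints written as explicit predicates, not learned weights.
RULES = [
    lambda drug, p: not (drug == "drug_a" and p.pregnant),  # contraindicated in pregnancy
    lambda drug, p: not (drug == "drug_b" and p.age < 18),  # adults only
]

def recommend(scores, patient):
    admissible = {d: s for d, s in scores.items()
                  if all(rule(d, patient) for rule in RULES)}
    if not admissible:
        return None  # abstain instead of emitting a confident rule-violation
    return max(admissible, key=admissible.get)

print(recommend(neural_scores, Patient(age=34, pregnant=True)))  # -> drug_b
```

The neural side can be as confidently wrong as it likes; the symbolic layer guarantees the top-scoring violation never ships. That's the "mostly right is worse than wrong" problem handled structurally.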
Why the Capital Signal Matters
Academic research directions diverge from commercial AI trajectories all the time. What's different here is the check size and who wrote it.
LeCun spent years at Meta arguing that LLMs were the wrong path for general intelligence. While he was at Meta, that was a heterodox internal view. When he left and closed a $1B round in under four months, that view became a commercial bet.
Institutional capital doesn't move at that scale on pure academic conviction. AMI Labs' investors — Bezos, NVIDIA, Eric Schmidt among them — priced in a market: physical AI, robotics, embodied systems, domains where LLMs will consistently underperform and where something better is needed. The $3.5B valuation implies they think "something better" is buildable and fundable within a reasonable horizon.
That's the signal. Not the technology, not the papers — the bet.
What This Means Now
LLM dominance isn't ending. It's getting scoped.
For language tasks, reasoning over documents, code generation, synthesis — LLMs are the right architecture and they're getting better fast. The commoditization pressure is real; the capability improvement curve is also real.
For physical reasoning, robotics, and high-stakes inference where hallucination is unacceptable — the architecture question is genuinely open. World models and neuro-symbolic hybrids are moving from research curiosity to funded commercial track.
Practitioners building on top of AI should know which category their use case sits in, because the infrastructure being built for language tasks is not the same infrastructure being built for physical tasks, and in two years the stacks will look materially different.
LeCun's $1B raise is a vote on where the boundary sits. Whether you agree with the bet or not, it's worth knowing he made it.