Discussion about this post

Penelope Lawrence

The Man U thought experiment is a great framing, but I think it reveals something subtle about what "computing" means here. When we imagine a stadium full of fans, we're not actually simulating thousands of people - we're generating a vaguely plausible impression with almost zero detail. The old couple, the kid jumping, the flag - those are narrative patches, not simulation. So the real question becomes whether World Models need to actually compute the physics, or just learn to generate approximations that are "good enough" the way human imagination does. Because if it's the latter, the breakthrough isn't computing the uncomputable - it's learning which parts you can safely skip.

Dorian

This feels like the transition from reasoning about the world to simulating the world. LLMs compress knowledge; world models compress reality. Once AI can model action → consequence loops, the real unlock isn't better chat. It's better decision infrastructure. That's when AI becomes less of an interface layer and more of a control layer.
