Discussion about this post

Penelope Lawrence

The Man U thought experiment is a great framing, but I think it reveals something subtle about what "computing" means here. When we imagine a stadium full of fans, we're not actually simulating thousands of people - we're generating a vaguely plausible impression with almost zero detail. The old couple, the kid jumping, the flag - those are narrative patches, not simulation. So the real question becomes whether World Models need to actually compute the physics, or just learn to generate approximations that are "good enough" the way human imagination does. Because if it's the latter, the breakthrough isn't computing the uncomputable - it's learning which parts you can safely skip.

Tortoise

The single biggest challenge in physical world models is reconstructing ground truth from measurements, which is itself a deeply hard AI problem. I’ve been doing frontier R&D in this space for years and have put many systems into production across defense, mapping, and industrial applications, among others.

Using simulations to train world models is trivial precisely because none of the hard problems exist in simulation. It is extremely difficult to replicate the idiosyncratic reality-reconstruction anomalies that plague physical world modeling in practice. US national labs have published extensively on how much of this AI tech simply misses the myriad technical issues that arise in real physical-world dynamics.
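To make that gap concrete, here is a toy sketch of my own (illustrative only, not drawn from any production system): a naive estimator that looks flawless on idealized simulated measurements falls apart once you add the dropouts and biased outliers that real sensors produce. All names and noise parameters are invented for the illustration.

```python
import random

random.seed(0)

TRUE_POSITION = 10.0  # ground-truth state the model must recover

def simulated_measurements(n):
    # Idealized simulator: small zero-mean Gaussian noise, nothing else.
    return [TRUE_POSITION + random.gauss(0, 0.1) for _ in range(n)]

def real_measurements(n):
    # Real sensors add the anomalies simulators rarely model:
    # occasional dropouts (None) and systematically biased outliers
    # (multipath, glare, miscalibration, ...).
    out = []
    for _ in range(n):
        r = random.random()
        if r < 0.10:
            out.append(None)  # dropout
        elif r < 0.20:
            out.append(TRUE_POSITION + random.uniform(2.0, 8.0))  # biased outlier
        else:
            out.append(TRUE_POSITION + random.gauss(0, 0.1))
    return out

def naive_estimate(measurements):
    # Simple averaging: ignores dropouts, treats every reading as trustworthy.
    vals = [m for m in measurements if m is not None]
    return sum(vals) / len(vals)

sim_err = abs(naive_estimate(simulated_measurements(1000)) - TRUE_POSITION)
real_err = abs(naive_estimate(real_measurements(1000)) - TRUE_POSITION)
print(f"error on simulated data:  {sim_err:.3f}")
print(f"error on real-style data: {real_err:.3f}")
```

The estimator is essentially unbiased on the clean simulated stream, but a 10% rate of biased outliers shifts it by roughly half a unit on the real-style stream. A model trained only against the clean version never learns that robustness is needed at all.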

That so many World Model companies think they can use simulations to train models on physical world dynamics is disheartening, because it suggests they really don’t have much experience with the theoretical problems in this space. But I agree completely that this is probably one of the most valuable greenfields out there!
