The Next AI Revolution Could Start with “World Models”

A new Scientific American analysis argues that today's generative AI has a basic weakness: it does not maintain a stable, continuously updated understanding of the world across space and time, so it produces inconsistencies (a dog's collar vanishes; a loveseat turns into a sofa). "World models" aim to change that by giving AI an internal, evolving map of reality—often described as 4D modeling (3D + time)—so systems can stay consistent, remember what just happened, and plan what should happen next. (Scientific American)
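To make the idea concrete, here is a minimal toy sketch (not from the article; all names are illustrative) of a world model as persistent per-object state: each frame is rendered from the same stored state, so a detail observed once, such as a dog's collar, cannot silently vanish later.

```python
class WorldModel:
    """Toy persistent state: object id -> attribute dict, kept across frames."""

    def __init__(self):
        self.objects = {}

    def observe(self, obj_id, **attrs):
        # Merge new observations into stored state instead of
        # regenerating the object from scratch each frame.
        self.objects.setdefault(obj_id, {}).update(attrs)

    def render_frame(self):
        # Every frame is drawn from the same persistent state,
        # keeping details consistent over time.
        return {oid: dict(attrs) for oid, attrs in self.objects.items()}


model = WorldModel()
model.observe("dog", species="dog", collar="red")
model.observe("dog", pose="sitting")       # later frame adds new info only
frame = model.render_frame()
assert frame["dog"]["collar"] == "red"     # the collar persists
```

This is only a caricature of the research systems the article describes, which learn such state from data rather than storing it in a dictionary, but it shows the core contrast with frame-by-frame generation.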

The article highlights early research using world-model approaches to improve video generation and enable more reliable augmented reality, where virtual objects must stay anchored and obey occlusion rules (e.g., digital objects disappearing behind real ones). It also notes major implications for robotics and autonomous vehicles, where a learned world model could help machines anticipate outcomes and navigate safely. (Scientific American)
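The occlusion rule mentioned above reduces to a simple depth comparison, sketched here as an assumption-laden illustration (the function name and parameters are invented for this example, not taken from the article): a virtual object should be hidden at a pixel whenever a real surface lies closer to the camera along the same line of sight.

```python
def is_occluded(virtual_depth: float, real_depth: float) -> bool:
    """Hide the virtual object if the real-world surface at the same
    pixel is nearer to the camera (depths in meters from the camera)."""
    return real_depth < virtual_depth


# A virtual cup 2.0 m away, behind a real wall sensed at 1.5 m: hidden.
assert is_occluded(virtual_depth=2.0, real_depth=1.5)
# Same cup with the nearest real surface at 3.0 m: it stays visible.
assert not is_occluded(virtual_depth=2.0, real_depth=3.0)
```

Real AR pipelines run this test per pixel against a sensed depth map; the hard part, which world models aim to help with, is keeping that depth estimate stable and consistent as the scene changes.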

Beyond applications, the piece frames world models as a potential prerequisite for more general intelligence: large language models may encode broad “conceptual” knowledge, but they typically lack real-time physical updating and spatiotemporal memory—capabilities researchers argue are essential for AI that can act coherently in the real world. (Scientific American)