Today we shared five exciting announcements across both product and research that outline our vision for how AI will change how stories are told, how scientific progress is made and how the next frontiers of humanity are reached.
Learn more below about how we’re building AI to simulate the world.

First, we are excited to share a number of new updates to our frontier video generation model, Gen-4.5.
Soon you will be able to both generate and edit native audio with Gen-4.5 and edit video at arbitrary lengths with multi-shot editing. https://t.co/JOHYuQgl8T
Two years ago, to the day, we shared our vision for General World Models and our research towards them. Today, for our second announcement, we shared an early look at our first General World Model, GWM-1.
GWM-1 is built on top of Gen-4.5 but with one crucial difference: it’s autoregressive. It predicts frame by frame, based on what came before. At any point you can intervene with actions: depending on the application, that might mean moving around in space, controlling a robot arm or interacting with an agent. The model will then simulate what happens next.
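To make the autoregressive loop concrete, here is a minimal sketch of frame-by-frame rollout with action interventions. Every name here (`WorldModel`, `Action`, `predict_next_frame`) is a hypothetical illustration, not Runway’s actual GWM-1 API.

```python
# A minimal sketch of an autoregressive world-model rollout.
# All names (WorldModel, Action, predict_next_frame) are hypothetical
# illustrations, not Runway's actual GWM-1 API.
from dataclasses import dataclass, field


@dataclass
class Action:
    """A generic intervention: camera motion, a robot command, a chat turn."""
    kind: str
    params: dict = field(default_factory=dict)


class WorldModel:
    """Stand-in for a model that predicts each frame from what came before."""

    def __init__(self) -> None:
        self.history = []  # (frame, action) pairs form the conditioning context

    def predict_next_frame(self, action: Action | None = None) -> str:
        # A real model would run a neural network over self.history here;
        # we only record the context to show the control flow.
        frame = f"frame_{len(self.history)}" + (f"<-{action.kind}" if action else "")
        self.history.append((frame, action))
        return frame


model = WorldModel()
frames = [model.predict_next_frame() for _ in range(3)]             # free-running rollout
frames.append(model.predict_next_frame(Action("move", {"dx": 1})))  # intervene mid-rollout
print(frames)
```

The point of the loop is that the conditioning context grows one frame at a time, which is what makes mid-rollout intervention possible; a clip-at-once generator has no equivalent hook.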
We believe that GWM-1 marks a significant step towards the next frontier of intelligence.
Our third announcement is GWM Worlds, a world model for real-time environment simulation.
You give the model a static scene, and it generates an immersive, infinite, explorable space as you move through it, with geometry, lighting and physics. All in real time. You can travel to any place, real or imagined. You can become any agent: a person walking through a city, a drone flying over a snowy mountain or a robot navigating a warehouse.
GWM-1 can simulate environments, physics and motion. But one of the most complex things to simulate is human behavior – the way people look, move and respond in conversation in a way that feels natural and immersive. This is really difficult to get right.
We believe we’ve improved on this significantly with our fourth announcement: GWM Avatars.
GWM Avatars is an audio-driven interactive video generation model that simulates natural human motion and expression for arbitrary photorealistic or stylized characters.
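As a rough illustration of what “audio-driven” means here, the sketch below streams audio chunks through a model that emits one video frame per chunk. `AvatarModel`, `next_frame` and the 40 ms chunking scheme are assumptions for illustration, not the real GWM Avatars interface.

```python
# A rough sketch of audio-driven frame generation. AvatarModel,
# next_frame and the 40 ms chunking are assumptions, not the real API.
def audio_chunks(path: str, chunk_ms: int = 40):
    """Pretend to stream 40 ms audio chunks (one per frame at 25 fps)."""
    for i in range(5):
        yield f"{path}[{i * chunk_ms}-{(i + 1) * chunk_ms}ms]"


class AvatarModel:
    """Stand-in for a model mapping a reference character plus streaming
    audio to video frames with matching motion and expression."""

    def __init__(self, reference_image: str) -> None:
        self.reference = reference_image

    def next_frame(self, audio_chunk: str) -> str:
        return f"frame({self.reference}, driven_by={audio_chunk})"


model = AvatarModel("character.png")  # photorealistic or stylized reference
for chunk in audio_chunks("speech.wav"):
    print(model.next_frame(chunk))
```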
A few months ago we set out to understand exactly how our models can be used to accelerate progress in robotics and other AI systems that don’t just represent the world, but are actually able to interact with it. We’ve been collaborating with leading companies in the space and we’ve developed an initial approach that we're now opening up to accelerate development across the industry.
Our fifth announcement is GWM Robotics. GWM Robotics is a learned simulator that generates synthetic data for scalable robot training and policy evaluation, removing the bottlenecks of physical hardware.
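For a sense of how a learned simulator removes the hardware bottleneck, here is a toy version of the loop: a policy acts, a learned dynamics model (rather than a physical robot) predicts the next observation, and the transitions become synthetic training or evaluation data. `LearnedSimulator`, `Policy` and `rollout` are hypothetical names, not the GWM Robotics API.

```python
# A toy sketch of policy evaluation inside a learned simulator.
# LearnedSimulator, Policy and rollout are hypothetical, not GWM Robotics.
import random


class Policy:
    """Toy policy: drive the observation toward zero."""

    def act(self, obs: float) -> float:
        return -0.5 * obs


class LearnedSimulator:
    """Stand-in for a generative model predicting the next observation
    from the current observation and an action (learned dynamics + noise)."""

    def step(self, obs: float, action: float) -> float:
        return obs + action + random.gauss(0.0, 0.01)


def rollout(sim: LearnedSimulator, policy: Policy, steps: int = 50):
    """Collect one synthetic trajectory for training data or evaluation."""
    obs, trajectory = 1.0, []
    for _ in range(steps):
        action = policy.act(obs)
        next_obs = sim.step(obs, action)
        trajectory.append((obs, action, next_obs))
        obs = next_obs
    return trajectory


data = rollout(LearnedSimulator(), Policy())
print(f"collected {len(data)} synthetic transitions; final obs {data[-1][2]:.3f}")
```

Because every transition comes from a model rather than hardware, rollouts like this can be parallelized and run far faster than real time, which is the scalability gain described above.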
We believe that General World Models will play a critical role in the future of AI and that giving everyone access to their own world simulator will be one of the most important technology deployments of the next few years.
These simulators will only become more general over time. Today, they are trained on human-scale video, but we already have early indications that GWM-1 can be applied to observations from very different scales of space and time and will help drive progress in physics, life sciences and beyond.
Over the coming weeks we'll be releasing all these models in our web product and API.
Learn more at http://runwayml.com