Endless World: Real-Time 3D-Aware Long Video Generation
Paper: arXiv:2512.12430
Checkpoint for EndlessWorld, a streaming video diffusion model that produces unbounded-length, 3D-consistent videos in real time on a single GPU.
| File | Description |
|---|---|
| `model.pt` | DMD-distilled generator weights for the EndlessWorld causal Wan model (step 1000 of the `self_forcing_dmd_separate` SOTA run). |
This is the generator checkpoint only; running inference requires additional components beyond this file. See the GitHub README for the full setup.
EndlessWorld extends the Self-Forcing causal diffusion framework (Wan2.1 T2V-1.3B backbone) with a Global 3D-Aware Attention module that injects scene geometry, extracted on the fly by AnySplat, into the conditional embedding of every autoregressive chunk.
Three ingredients:

- `CrossAttentionFusion` + `To3D` modules ingest 3D Gaussian features produced by AnySplat and fuse them with the text embedding, giving the generator a persistent geometric memory of the world rendered so far (see the sketch after this list).
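Conceptually this is a cross-attention from text tokens to projected Gaussian features. The snippet below is only an illustrative sketch of that idea: the class name, dimensions, and wiring are assumptions, not the repo's actual `CrossAttentionFusion`/`To3D` implementation.

```python
import torch
import torch.nn as nn

class GeometryTextFusion(nn.Module):
    """Illustrative sketch: fuse AnySplat-style 3D Gaussian features into the
    text conditioning via cross-attention, so each autoregressive chunk sees a
    geometric summary of the scene generated so far."""

    def __init__(self, text_dim: int = 4096, gs_dim: int = 256, num_heads: int = 8):
        super().__init__()
        # Project per-Gaussian features into the text-embedding space ("To3D"-like step).
        self.to_3d = nn.Sequential(
            nn.Linear(gs_dim, text_dim), nn.GELU(), nn.Linear(text_dim, text_dim)
        )
        # Text tokens attend to the projected 3D tokens ("CrossAttentionFusion"-like step).
        self.cross_attn = nn.MultiheadAttention(text_dim, num_heads, batch_first=True)
        self.norm = nn.LayerNorm(text_dim)

    def forward(self, text_emb: torch.Tensor, gs_feats: torch.Tensor) -> torch.Tensor:
        # text_emb: (B, L_text, text_dim); gs_feats: (B, N_gaussians, gs_dim)
        geo_tokens = self.to_3d(gs_feats)
        fused, _ = self.cross_attn(query=text_emb, key=geo_tokens, value=geo_tokens)
        # Residual fusion keeps the original text conditioning intact.
        return self.norm(text_emb + fused)

# Toy example with small dimensions: 77 text tokens, 256 Gaussian features.
fusion = GeometryTextFusion(text_dim=512, gs_dim=64)
cond = fusion(torch.randn(1, 77, 512), torch.randn(1, 256, 64))
print(cond.shape)  # torch.Size([1, 77, 512])
```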
Quickstart:

```bash
git clone https://github.com/BWGZK-keke/EndlessWorld
cd EndlessWorld
pip install -r requirements.txt

# Download this checkpoint
huggingface-cli download BWGZK/EndlessWorld model.pt --local-dir checkpoints/

# Update configs/self_forcing_dmd.yaml -> generator_ckpt: checkpoints/model.pt
bash test.sh
```
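The config edit mentioned in the comment above is a single-key change; after the update, `configs/self_forcing_dmd.yaml` should contain (excerpt, other keys unchanged):

```yaml
# configs/self_forcing_dmd.yaml (excerpt)
generator_ckpt: checkpoints/model.pt
```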
Loading directly from Python:
```python
import torch
from huggingface_hub import hf_hub_download

ckpt = hf_hub_download(repo_id="BWGZK/EndlessWorld", filename="model.pt")
state_dict = torch.load(ckpt, map_location="cpu")
```
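The internal layout of `model.pt` is not documented here. One defensive way to peek at it before wiring it into the generator (the possible top-level `"generator"` key is a guess, not a documented fact):

```python
# The checkpoint may be a flat state dict, or may nest the generator weights
# under a key such as "generator" (a guess; verify against the repo code).
weights = state_dict.get("generator", state_dict) if isinstance(state_dict, dict) else state_dict
print(f"{len(weights)} entries")
print(list(weights)[:5])  # peek at the first few parameter names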
Training uses `train.py` as the entry point, together with `configs/self_forcing_dmd.yaml`.
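A hypothetical launch command for that entry point; the actual CLI flags and distributed setup are defined in `train.py` and may differ, so treat the flag name and process count below as placeholders:

```bash
# Hypothetical invocation; flag names are assumptions, not taken from the repo.
# Check train.py / the GitHub README for the real arguments.
torchrun --nproc_per_node=8 train.py --config_path configs/self_forcing_dmd.yaml
```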
Citation:

```bibtex
@article{zhang2025endlessworld,
title = {Endless World: Real-Time 3D-Aware Long Video Generation},
author = {Zhang, Ke and others},
journal = {arXiv preprint arXiv:2512.12430},
year = {2025}
}
```
Apache 2.0, the same license as the upstream Wan2.1 and Self-Forcing projects.
Base model: `Wan-AI/Wan2.1-T2V-1.3B`