# WorldDreamer
**Repository Path**: platinum-into/WorldDreamer
## Basic Information
- **Project Name**: WorldDreamer
- **Description**: No description available
- **Primary Language**: Unknown
- **License**: MIT
- **Default Branch**: main
- **Homepage**: None
- **GVP Project**: No
## Statistics
- **Stars**: 0
- **Forks**: 0
- **Created**: 2025-09-02
- **Last Updated**: 2025-09-02
## Categories & Tags
**Categories**: Uncategorized
**Tags**: None
## README
# WorldDreamer: Towards General World Models for Video Generation via Predicting Masked Tokens
## [Project Page](https://world-dreamer.github.io) | [Paper](https://arxiv.org/pdf/2401.09985.pdf)
# Abstract
World models play a crucial role in understanding and predicting the dynamics of the world, which is essential for video generation.
However, existing world models are confined to specific scenarios such as gaming or driving, limiting their ability to capture the
complexity of general world dynamics. We therefore introduce WorldDreamer, a pioneering world model that fosters a comprehensive
understanding of general world physics and motion, significantly enhancing the capabilities of video generation. Drawing inspiration
from the success of large language models, WorldDreamer frames world modeling as an unsupervised visual sequence modeling challenge:
visual inputs are mapped to discrete tokens, and the masked ones are predicted. During this process, we incorporate multi-modal prompts
to facilitate interaction within the world model. Our experiments show that WorldDreamer excels at generating videos across different
scenarios, including natural scenes and driving environments, and demonstrates versatility across tasks such as text-to-video conversion,
image-to-video synthesis, and video editing. These results underscore WorldDreamer's effectiveness in capturing dynamic elements within diverse general world environments.
# News
- **[2024/01/18]** Repository Initialization.
**WorldDreamer Framework**
# Bibtex
If this work is helpful for your research, please consider citing the following BibTeX entry.
```bibtex
@article{wang2023worlddreamer,
title={WorldDreamer: Towards General World Models for Video Generation via Predicting Masked Tokens},
author={Xiaofeng Wang and Zheng Zhu and Guan Huang and Boyuan Wang and Xinze Chen and Jiwen Lu},
journal={arXiv preprint arXiv:2401.09985},
year={2024}
}
```