# VBench
**Repository Path**: stfocus/VBench
## Basic Information
- **Project Name**: VBench
- **License**: Apache-2.0
- **Default Branch**: master
## Statistics
- **Stars**: 0
- **Forks**: 0
- **Created**: 2024-10-15
- **Last Updated**: 2025-09-17
## README

**How to Reach Us:**
- Code Issues: Please open an [issue](https://github.com/Vchitect/VBench/issues) in our GitHub repository for any problems or bugs.
- Evaluation Requests: To submit your sampled videos for evaluation, please complete this [Google Form](https://forms.gle/wHk1xe7ecvVNj7yAA).
- General Inquiries: **Check our [FAQ](https://github.com/Vchitect/VBench/blob/master/README-FAQ.md)** for common questions. For other questions, contact Ziqi Huang at ZIQI002 [at] e [dot] ntu [dot] edu [dot] sg.
- [VBench Paper (arXiv)](https://arxiv.org/abs/2311.17982)
- [VBench++ Paper (arXiv)](https://arxiv.org/abs/2411.13503)
- [VBench-2.0 Paper (arXiv)](https://arxiv.org/abs/2503.21755)
- [VBench Leaderboard](https://huggingface.co/spaces/Vchitect/VBench_Leaderboard)
- [VBench Video Arena](https://huggingface.co/spaces/Vchitect/VBench_Video_Arena)
- [VBench-2.0 Video Arena](https://huggingface.co/spaces/Vchitect/VBench2.0_Video_Arena)
- [VBench Project Page](https://vchitect.github.io/VBench-project/)
- [VBench-2.0 Project Page](https://vchitect.github.io/VBench-2.0-project/)
- [Sampled Videos (Google Drive)](https://drive.google.com/drive/folders/1on66fnZ8atRoLDimcAXMxSwRxqN8_0yS?usp=sharing)
- [PyPI Package](https://pypi.org/project/vbench/)
- [Video Overview (YouTube)](https://www.youtube.com/watch?v=7IhCC8Qqn8Y)
- [Video Overview (YouTube)](https://www.youtube.com/watch?v=kJrzKy9tgAc)
This repository contains the implementation of the following papers and their related serial works in progress. We evaluate video generative models!
> **VBench: Comprehensive Benchmark Suite for Video Generative Models**
> [Ziqi Huang](https://ziqihuangg.github.io/)∗, [Yinan He](https://github.com/yinanhe)∗, [Jiashuo Yu](https://scholar.google.com/citations?user=iH0Aq0YAAAAJ&hl=zh-CN)∗, [Fan Zhang](https://github.com/zhangfan-p)∗, [Chenyang Si](https://chenyangsi.top/), [Yuming Jiang](https://yumingj.github.io/), [Yuanhan Zhang](https://zhangyuanhan-ai.github.io/), [Tianxing Wu](https://tianxingwu.github.io/), [Qingyang Jin](https://github.com/Vchitect/VBench), [Nattapol Chanpaisit](https://nattapolchan.github.io/me), [Yaohui Wang](https://wyhsirius.github.io/), [Xinyuan Chen](https://scholar.google.com/citations?user=3fWSC8YAAAAJ), [Limin Wang](https://wanglimin.github.io), [Dahua Lin](http://dahua.site/)+, [Yu Qiao](http://mmlab.siat.ac.cn/yuqiao/index.html)+, [Ziwei Liu](https://liuziwei7.github.io/)+
> IEEE/CVF Conference on Computer Vision and Pattern Recognition (**CVPR**), 2024
> **VBench++: Comprehensive and Versatile Benchmark Suite for Video Generative Models**
> [Ziqi Huang](https://ziqihuangg.github.io/)∗, [Fan Zhang](https://github.com/zhangfan-p)∗, [Xiaojie Xu](https://github.com/xjxu21), [Yinan He](https://github.com/yinanhe), [Jiashuo Yu](https://scholar.google.com/citations?user=iH0Aq0YAAAAJ&hl=zh-CN), [Ziyue Dong](https://github.com/DZY-irene), [Qianli Ma](https://github.com/MqLeet), [Nattapol Chanpaisit](https://nattapolchan.github.io/me), [Chenyang Si](https://chenyangsi.top/), [Yuming Jiang](https://yumingj.github.io/), [Yaohui Wang](https://wyhsirius.github.io/), [Xinyuan Chen](https://scholar.google.com/citations?user=3fWSC8YAAAAJ), [Ying-Cong Chen](https://www.yingcong.me/), [Limin Wang](https://wanglimin.github.io), [Dahua Lin](http://dahua.site/)+, [Yu Qiao](http://mmlab.siat.ac.cn/yuqiao/index.html)+, [Ziwei Liu](https://liuziwei7.github.io/)+
> **VBench-2.0: Advancing Video Generation Benchmark Suite for Intrinsic Faithfulness**
> [Dian Zheng](https://zhengdian1.github.io/)∗, [Ziqi Huang](https://ziqihuangg.github.io/)∗, [Hongbo Liu](https://github.com/Alexios-hub), [Kai Zou](https://github.com/Jacky-hate), [Yinan He](https://github.com/yinanhe), [Fan Zhang](https://github.com/zhangfan-p), [Yuanhan Zhang](https://zhangyuanhan-ai.github.io/), [Jingwen He](https://scholar.google.com/citations?user=GUxrycUAAAAJ&hl=zh-CN), [Wei-Shi Zheng](https://www.isee-ai.cn/~zhwshi/)+, [Yu Qiao](http://mmlab.siat.ac.cn/yuqiao/index.html)+, [Ziwei Liu](https://liuziwei7.github.io/)+
### Table of Contents
- [Updates](#updates)
- [Overview](#overview)
- [Evaluation Results](#evaluation_results)
- [Video Generation Models Info](https://github.com/Vchitect/VBench/tree/master/sampled_videos#what-are-the-details-of-the-video-generation-models)
- [Installation](#installation)
- [Usage](#usage)
- [Prompt Suite](#prompt_suite)
- [Sampled Videos](#sampled_videos)
- [Evaluation Method Suite](#evaluation_method_suite)
- [Citation and Acknowledgement](#citation_and_acknowledgement)
## :fire: Updates
- [05/2025] We support **evaluating customized videos** for VBench-2.0! See [here](https://github.com/Vchitect/VBench/tree/master/VBench-2.0#new-evaluating-single-dimension-of-your-own-videos) for instructions.
- [04/2025] **[Human Anomaly Detection for AIGC Videos](https://github.com/Vchitect/VBench/tree/master/VBench-2.0/vbench2/third_party/ViTDetector):** We release the pipeline for evaluating human anatomical quality in AIGC videos, including a manually annotated human anomaly dataset covering both real and AIGC videos, and the training pipeline for anomaly detection.
- [03/2025] :fire: **Major Update! We released [VBench-2.0](https://github.com/Vchitect/VBench/tree/master/VBench-2.0)!** :fire: Video generative models have progressed from achieving *superficial faithfulness* in fundamental technical aspects such as pixel fidelity and basic prompt adherence, to addressing more complex challenges associated with *intrinsic faithfulness*, including commonsense reasoning, physics-based realism, human motion, and creative composition. While VBench primarily assessed early-stage technical quality, VBench-2.0 expands the benchmarking framework to evaluate these advanced capabilities, ensuring a more comprehensive assessment of next-generation models.
- [01/2025] **PyPI Update: v0.1.5** brings preprocessing bug fixes and torch>=2.0 support.
- [01/2025] **[VBench Video Arena](https://huggingface.co/spaces/Vchitect/VBench_Video_Arena)** released: view the generated videos and vote for your preferred video. This demo features over 180,000 generated videos; you can explore videos generated by your chosen models (40 models are currently supported) following your chosen text prompts.
- [11/2024] **[VBench++](https://arxiv.org/abs/2411.13503)** released.
- [09/2024] **VBench-Long Leaderboard** available: our VBench-Long leaderboard now includes 10 long video generation models, and the main [VBench Leaderboard](https://huggingface.co/spaces/Vchitect/VBench_Leaderboard) now covers 40 text-to-video models (both long and short). All video generative models are encouraged to participate!
- [09/2024] **PyPI Updates: PyPI package is updated to version [0.1.4](https://github.com/Vchitect/VBench/releases/tag/v0.1.4):** bug fixes and multi-gpu inference.
- [08/2024] **Longer and More Descriptive Prompts**: [Available Here](https://github.com/Vchitect/VBench/tree/master/prompts/gpt_enhanced_prompts)! We follow [CogVideoX](https://github.com/THUDM/CogVideo?tab=readme-ov-file#prompt-optimization)'s prompt optimization technique to enhance VBench prompts using GPT-4o, making them longer and more descriptive without altering their original meaning.
- [08/2024] **VBench Leaderboard** update: our [leaderboard](https://huggingface.co/spaces/Vchitect/VBench_Leaderboard) now includes 28 *T2V models* and 12 *I2V models*. All video generative models are encouraged to participate!
- [06/2024] :fire: **[VBench-Long](https://github.com/Vchitect/VBench/tree/master/vbench2_beta_long)** :fire: is ready to use for evaluating longer Sora-like videos!
- [06/2024] **Model Info Documentation**: Information on video generative models in our [VBench Leaderboard](https://huggingface.co/spaces/Vchitect/VBench_Leaderboard)
is documented [HERE](https://github.com/Vchitect/VBench/tree/master/sampled_videos#what-are-the-details-of-the-video-generation-models).
- [05/2024] **PyPI Update**: PyPI package `vbench` is updated to version 0.1.2. This includes changes in the preprocessing for high-resolution images/videos for `imaging_quality`, support for evaluating customized videos, and minor bug fixes.
- [04/2024] We release all the videos we sampled and used for VBench evaluation on [Google Drive](https://drive.google.com/drive/folders/13pH95aUN-hVgybUZJBx1e_08R6xhZs5X). See details [here](https://github.com/Vchitect/VBench/tree/master/sampled_videos).
- [03/2024] :fire: **[VBench-Trustworthiness](https://github.com/Vchitect/VBench/tree/master/vbench2_beta_trustworthiness)** :fire: We now support evaluating the **trustworthiness** (*e.g.*, culture, fairness, bias, safety) of video generative models.
- [03/2024] :fire: **[VBench-I2V](https://github.com/Vchitect/VBench/tree/master/vbench2_beta_i2v)** :fire: We now support evaluating **Image-to-Video (I2V)** models. We also provide [Image Suite](https://drive.google.com/drive/folders/1fdOZKQ7HWZtgutCKKA7CMzOhMFUGv4Zx?usp=sharing).
- [03/2024] We support **evaluating customized videos**! See [here](https://github.com/Vchitect/VBench/?tab=readme-ov-file#new-evaluate-your-own-videos) for instructions.
- [01/2024] The [`vbench`](https://pypi.org/project/vbench/) PyPI package is released! Simply `pip install vbench`.
- [12/2023] :fire: **[VBench](https://github.com/Vchitect/VBench?tab=readme-ov-file#usage)** :fire: Evaluation code released for 16 **Text-to-Video (T2V) evaluation** dimensions.
- `['subject_consistency', 'background_consistency', 'temporal_flickering', 'motion_smoothness', 'dynamic_degree', 'aesthetic_quality', 'imaging_quality', 'object_class', 'multiple_objects', 'human_action', 'color', 'spatial_relationship', 'scene', 'temporal_style', 'appearance_style', 'overall_consistency']`
- [11/2023] Prompt Suites released. (See prompt lists [here](https://github.com/Vchitect/VBench/tree/master/prompts))
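The 16 T2V dimension names listed in the 12/2023 update are the strings you pass to the evaluator when selecting what to measure. As a minimal, hypothetical sketch (pure Python, independent of the `vbench` package), a helper like the following can validate a requested subset against the supported list before launching a run; the function name `validate_dimensions` is illustrative, not part of the VBench API:

```python
# The 16 T2V evaluation dimensions listed in the 12/2023 update.
SUPPORTED_DIMENSIONS = {
    "subject_consistency", "background_consistency", "temporal_flickering",
    "motion_smoothness", "dynamic_degree", "aesthetic_quality",
    "imaging_quality", "object_class", "multiple_objects", "human_action",
    "color", "spatial_relationship", "scene", "temporal_style",
    "appearance_style", "overall_consistency",
}

def validate_dimensions(requested):
    """Return the requested dimensions unchanged, raising on any unknown name."""
    unknown = sorted(set(requested) - SUPPORTED_DIMENSIONS)
    if unknown:
        raise ValueError(f"Unsupported dimension(s): {unknown}")
    return list(requested)

# Example: check a subset before launching an evaluation run.
dims = validate_dimensions(["subject_consistency", "imaging_quality"])
```

Catching a typo such as `"colour"` up front avoids discovering it only after a long evaluation job has started.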
## :mega: Overview
### VBench-1.0

We propose **VBench**, a comprehensive benchmark suite for video generative models. We design a comprehensive and hierarchical Evaluation Dimension Suite that decomposes "video generation quality" into multiple well-defined dimensions to facilitate fine-grained and objective evaluation. For each dimension and each content category, we carefully design a Prompt Suite as test cases and sample Generated Videos from a set of video generation models. For each evaluation dimension, we specifically design an Evaluation Method Suite, which uses a carefully crafted method or designated pipeline for automatic objective evaluation. We also conduct Human Preference Annotation on the generated videos for each dimension, and show that VBench evaluation results are well aligned with human perception. VBench can provide valuable insights from multiple perspectives.

**VBench++** supports a wide range of video generation tasks, including text-to-video and image-to-video, with an adaptive Image Suite for fair evaluation across different settings. It evaluates not only technical quality but also the trustworthiness of generative models, offering a comprehensive view of model performance. We continually incorporate more video generative models into VBench to keep the community informed about the evolving landscape of video generation.
### VBench-2.0

Overview of VBench-2.0. (a) **Scope of VBench-2.0.** Video generative models have progressed from achieving superficial faithfulness in fundamental technical aspects, such as pixel fidelity and basic prompt adherence, to addressing more complex challenges associated with intrinsic faithfulness, including commonsense reasoning, physics-based realism, human motion, and creative composition. While VBench primarily assessed early-stage technical quality, VBench-2.0 expands the benchmarking framework to evaluate these advanced capabilities, ensuring a more comprehensive assessment of next-generation models. (b) **Evaluation Dimensions of VBench-2.0.** VBench-2.0 introduces a structured evaluation suite comprising five broad categories and 18 fine-grained capability dimensions.
## :mortar_board: Evaluation Results
***See our [leaderboard](https://huggingface.co/spaces/Vchitect/VBench_Leaderboard) for the most up-to-date rankings and numerical results (including models like Gen-3, Kling, and Pika).***