# VitaBench

**Repository Path**: hf-datasets/VitaBench

## Basic Information

- **Project Name**: VitaBench
- **Description**: Mirror of https://huggingface.co/datasets/meituan-longcat/VitaBench
- **Primary Language**: Unknown
- **License**: MIT
- **Default Branch**: main
- **Homepage**: None
- **GVP Project**: No

## Statistics

- **Stars**: 0
- **Forks**: 0
- **Created**: 2025-10-18
- **Last Updated**: 2025-10-18

## Categories & Tags

**Categories**: Uncategorized
**Tags**: None

## README

---
license: mit
language:
  - zh
tags:
  - agent
---

# 🌱 VitaBench: Benchmarking LLM Agents with Versatile Interactive Tasks

📃 Paper • 🌐 Website • 🏆 Leaderboard • 🛠️ Code • 🤗 Dataset
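
Since this repository mirrors the Hugging Face dataset, it should be loadable with the standard `datasets` library. The snippet below is only a hedged quickstart: it assumes the hosted files are in a format `load_dataset` can resolve automatically, and the available configs and splits are not documented here.

```python
# Hypothetical quickstart: assumes the mirrored files load directly via
# the Hugging Face `datasets` library; configs and splits are assumptions.
from datasets import load_dataset

dataset = load_dataset("meituan-longcat/VitaBench")
print(dataset)  # inspect the available splits and features
```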

## 🔔 News

- [2025-10] Our paper is released on arXiv: [VitaBench: Benchmarking LLM Agents with Versatile Interactive Tasks in Real-world Applications](https://arxiv.org/abs/2509.26490)
- [2025-10] The VitaBench suite is released, including the **codebase, dataset, and evaluation pipeline**! If you have any questions, feel free to raise issues and/or submit pull requests for new features or bug fixes.

## 📖 Introduction

In this paper, we introduce **VitaBench**, a challenging benchmark that evaluates agents on **v**ersatile **i**nteractive **ta**sks grounded in real-world settings. Drawing from daily applications in food delivery, in-store consumption, and online travel services, VitaBench presents agents with the most complex life-serving simulation environment to date, comprising **66 tools**. Through a framework that eliminates domain-specific policies, we enable flexible composition of these scenarios and tools, yielding **100 cross-scenario tasks (main results) and 300 single-scenario tasks**. Each task is derived from multiple real user requests and requires agents to reason across temporal and spatial dimensions, utilize complex tool sets, proactively clarify ambiguous instructions, and track shifting user intent throughout multi-turn conversations. Moreover, we propose a rubric-based sliding window evaluator, enabling robust assessment of diverse solution pathways in complex environments and stochastic interactions. Our comprehensive evaluation reveals that even the most advanced models achieve only a 30% success rate on cross-scenario tasks, and less than a 50% success rate on the single-scenario tasks. Overall, we believe VitaBench will serve as a valuable resource for advancing the development of AI agents in practical real-world applications.

> *The name “Vita” derives from the Latin word for “Life”, reflecting our focus on life-serving applications.*

![overall_performance](assets/overall_performance.png)
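
As described in the Benchmark Details section below, cross-scenario environments are composed by joining domain names with commas, which merges the tool sets and databases of the selected domains. The sketch below is a rough conceptual illustration of such merging; `Environment`, its fields, and `merge_environments` are assumed names, not the actual vitabench API.

```python
# Conceptual sketch only: Environment and merge_environments are assumed
# names for illustration, not the vitabench codebase's actual API.
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class Environment:
    tools: dict[str, Callable] = field(default_factory=dict)    # tool name -> callable
    databases: dict[str, list] = field(default_factory=dict)    # table name -> rows

def merge_environments(registry: dict[str, Environment], spec: str) -> Environment:
    """Merge the domains named in a comma-separated spec, e.g.
    "delivery,instore,ota", into one unified environment."""
    unified = Environment()
    for name in spec.split(","):
        env = registry[name.strip()]
        unified.tools.update(env.tools)            # union of tool sets
        for table, rows in env.databases.items():  # union of databases
            unified.databases.setdefault(table, []).extend(rows)
    return unified
```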
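
Likewise, the rubric-based sliding window evaluator is described above only at a high level. A minimal illustrative sketch follows, in which the rubric format, window size, and evidence check are all assumptions rather than the paper's implementation; in the real evaluator, the per-window judgment would presumably be made by an LLM judge rather than a string match.

```python
# Illustrative sketch only: rubric structure, window size, and the
# evidence check are assumptions, not VitaBench's actual evaluator.
from dataclasses import dataclass

@dataclass
class RubricItem:
    description: str        # e.g. "agent confirms the delivery address"
    satisfied: bool = False

def has_evidence(window: list[str], item: RubricItem) -> bool:
    # Placeholder judgment: a real evaluator would likely prompt an LLM
    # judge with the rubric item and this window of conversation turns.
    return item.description.lower() in " ".join(window).lower()

def evaluate_trajectory(turns: list[str], rubric: list[RubricItem],
                        window_size: int = 4) -> bool:
    """Slide a fixed-size window over the conversation; mark a rubric
    item satisfied once any window contains evidence for it."""
    for start in range(max(1, len(turns) - window_size + 1)):
        window = turns[start:start + window_size]
        for item in rubric:
            if not item.satisfied and has_evidence(window, item):
                item.satisfied = True
    # The task counts as successful only if every rubric item is met.
    return all(item.satisfied for item in rubric)
```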
## 🌱 Benchmark Details

VitaBench provides an evaluation framework that supports model evaluations on both single-domain and cross-domain tasks through flexible configuration. For cross-domain evaluation, simply connect multiple domain names with commas; this will automatically merge the environments of the specified domains into a unified environment.

Statistics of databases and environments:

|                                | Cross-Scenarios (All domains) | Delivery | In-store |  OTA  |
| :----------------------------- | :---------------------------: | :------: | :------: | :---: |
| **Databases**                  |                               |          |          |       |
| &nbsp;&nbsp;Service Providers  | 1,324                         | 410      | 611      | 1,437 |
| &nbsp;&nbsp;Products           | 6,946                         | 788      | 3,277    | 9,693 |
| &nbsp;&nbsp;Transactions       | 447                           | 48       | 28       | 154   |
| **API Tools**                  |                               |          |          |       |
| &nbsp;&nbsp;Write              | 27                            | 4        | 9        | 14    |
| &nbsp;&nbsp;Read               | 33                            | 10       | 10       | 19    |
| &nbsp;&nbsp;General            | 6                             | 6        | 5        | 5     |
| **Tasks**                      | 100                           | 100      | 100      | 100   |

## 🛠️ Environment

The evaluation framework uses the same flexible configuration for single-domain and cross-domain runs as described above. Please visit our GitHub repository [vitabench](https://github.com/meituan-longcat/vitabench) for more detailed instructions.

## 🔎 Citation

If you find our work helpful or relevant to your research, please kindly cite our paper:

```
@article{he2025vitabench,
  title={VitaBench: Benchmarking LLM Agents with Versatile Interactive Tasks in Real-world Applications},
  author={He, Wei and Sun, Yueqing and Hao, Hongyan and Hao, Xueyuan and Xia, Zhikang and Gu, Qi and Han, Chengcheng and Zhao, Dengchang and Su, Hui and Zhang, Kefeng and Gao, Man and Su, Xi and Cai, Xiaodong and Cai, Xunliang and Yang, Yu and Zhao, Yunke},
  journal={arXiv preprint arXiv:2509.26490},
  year={2025}
}
```

## 📜 License

This project is licensed under the MIT License; see the [LICENSE](./LICENSE) file for details.