# RAD
**Repository Path**: tj1652045/RAD
## Basic Information
- **Project Name**: RAD
- **Description**: No description available
- **Primary Language**: Unknown
- **License**: MIT
- **Default Branch**: main
- **Homepage**: None
- **GVP Project**: No
## Statistics
- **Stars**: 0
- **Forks**: 0
- **Created**: 2025-11-28
- **Last Updated**: 2025-11-28
## Categories & Tags
**Categories**: Uncategorized
**Tags**: None
## README
### RAD: Training an End-to-End Driving Policy via Large-Scale 3DGS-based Reinforcement Learning
[Hao Gao](https://scholar.google.com/citations?user=U4zbmyIAAAAJ&hl=zh-CN&oi=ao)1, [Shaoyu Chen](https://scholar.google.com/citations?user=PIeNN2gAAAAJ&hl=en&oi=sra)1,2,†, [Bo Jiang](https://scholar.google.com/citations?user=UlDxGP0AAAAJ&hl=zh-CN)1, [Bencheng Liao](https://scholar.google.com/citations?user=rUBdh_sAAAAJ&hl=zh-CN)1, [Yiang Shi](https://scholar.google.com/citations?user=AWZwS8AAAAAJ&hl=zh-CN&oi=ao)1, [Xiaoyang Guo](https://scholar.google.com/citations?hl=zh-CN&user=CrK4w4UAAAAJ&view_op=list_works&sortby=pubdate)2, [Yuechuan Pu](https://scholar.google.com/citations?user=xzpIROoAAAAJ&hl=zh-CN)2, Haoran Yin2, Xiangyu Li2, Xinbang Zhang2, Ying Zhang2, [Wenyu Liu](http://eic.hust.edu.cn/professor/liuwenyu/)1, [Qian Zhang](https://scholar.google.com/citations?user=pCY-bikAAAAJ&hl=zh-CN)2, [Xinggang Wang](https://xwcv.github.io/)1,📧
1 Huazhong University of Science and Technology,
2 Horizon Robotics,
† Project lead
📧 Corresponding author
[Project Page](https://hgao-cv.github.io/RAD/) | [arXiv Paper](https://arxiv.org/pdf/2502.13144) | [License](LICENSE)
## 📰 News
- **[2025.11.04]** Good news! 🎉 The **ReconDreamer-RL** team has now **open-sourced their reconstructed 3DGS environments** based on nuScenes. You can find the release here: 👉 [ReconDreamer-RL Environments](https://github.com/GigaAI-research/ReconDreamer-RL)
- **[2025.09.28]** We have released the core code for RL training.
- **[2025.09.18]** **RAD has been accepted by NeurIPS 2025!** 🎉🎉🎉
- **[2025.02.18]** We released our paper on arXiv. Code is coming soon. Please stay tuned! ☕️
## 📌 RAD Training Discussion & Reference
We have created a **central discussion issue** for RAD training details. You can view and participate in the discussion here: [RAD Training Details Issue](https://github.com/hustvl/RAD/issues/2). We also hope that the experiences and tips shared there will be helpful for your **own RL-related training**, not limited to RAD.
## 🎯 How to Use
- Project Structure
``` bash
.
├── data/ # Action anchors for planning/control
├── compute_advantage.py # Script for computing RL advantages and evaluation metrics
├── generate_action_anchor.py # Script for generating action anchors for planning/control
├── planning_head.py # Planning head module
├── utils.py # Utility functions for training and evaluation
└── README.md
```
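For orientation, below is a minimal PyTorch sketch of what a planning head over a fixed bank of action anchors can look like: it scores each anchor and predicts a per-anchor trajectory refinement. The class name `AnchorPlanningHead`, the feature dimension, and the anchor count are hypothetical illustrations; refer to `planning_head.py` for the actual module.

``` python
import torch
import torch.nn as nn

class AnchorPlanningHead(nn.Module):
    """Hypothetical planning head: scores a fixed bank of action anchors
    from a scene feature and predicts a per-anchor (x, y) refinement."""

    def __init__(self, feat_dim: int = 256, num_anchors: int = 32, horizon: int = 6):
        super().__init__()
        self.num_anchors = num_anchors
        self.horizon = horizon
        self.score = nn.Sequential(
            nn.Linear(feat_dim, 256), nn.ReLU(),
            nn.Linear(256, num_anchors),                 # one logit per anchor
        )
        self.offset = nn.Sequential(
            nn.Linear(feat_dim, 256), nn.ReLU(),
            nn.Linear(256, num_anchors * horizon * 2),   # (x, y) refinement per step
        )

    def forward(self, feat: torch.Tensor):
        logits = self.score(feat)                                        # (B, K)
        offsets = self.offset(feat).view(-1, self.num_anchors,
                                         self.horizon, 2)                # (B, K, T, 2)
        return logits, offsets

if __name__ == "__main__":
    head = AnchorPlanningHead()
    logits, offsets = head(torch.randn(4, 256))
    print(logits.shape, offsets.shape)  # torch.Size([4, 32]) torch.Size([4, 32, 6, 2])
```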
- Run Key Scripts
``` bash
# You can quickly test the core functionality by running the provided scripts.
# Generate action anchors
python generate_action_anchor.py
# Run the planning head module
python planning_head.py
# Compute advantage metrics
python compute_advantage.py
```
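Before diving into `generate_action_anchor.py`, the sketch below shows one common way an action-anchor bank is built: k-means clustering over flattened expert trajectories, implemented here with plain NumPy. The trajectory shapes, anchor count, and the choice of k-means are assumptions for illustration, not necessarily the exact procedure used in the script.

``` python
import numpy as np

def kmeans_anchors(trajectories: np.ndarray, num_anchors: int = 32,
                   iters: int = 50, seed: int = 0) -> np.ndarray:
    """Cluster flattened expert trajectories into a fixed set of action anchors.

    trajectories: (N, T, 2) array of future (x, y) waypoints per sample.
    Returns: (num_anchors, T, 2) array of anchor trajectories.
    """
    rng = np.random.default_rng(seed)
    n, t, d = trajectories.shape
    flat = trajectories.reshape(n, t * d).astype(np.float64)

    # Initialise centroids from randomly chosen samples.
    centroids = flat[rng.choice(n, size=num_anchors, replace=False)]

    for _ in range(iters):
        # Assign every trajectory to its nearest centroid.
        dists = np.linalg.norm(flat[:, None, :] - centroids[None, :, :], axis=-1)
        labels = dists.argmin(axis=1)
        # Recompute centroids; keep the old one if a cluster went empty.
        for k in range(num_anchors):
            members = flat[labels == k]
            if len(members) > 0:
                centroids[k] = members.mean(axis=0)

    return centroids.reshape(num_anchors, t, d)

if __name__ == "__main__":
    # Synthetic stand-in for expert future trajectories.
    demo = np.cumsum(np.random.randn(1000, 6, 2) * 0.5, axis=1)
    anchors = kmeans_anchors(demo, num_anchors=32)
    print("anchor bank shape:", anchors.shape)  # (32, 6, 2)
```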
- Using Your Own Data
> To integrate this project into your pipeline and use your own data, follow these steps:
>
> 1. **Replace the Planning Head**
>    Replace the head of your end-to-end model with the planning head defined in `planning_head.py`.
>
> 2. **Prepare the Closed-Loop Environment**
> Set up your closed-loop environment and collect closed-loop data.
>
> 3. **Compute Advantages and Train the Model**
>    Use `compute_advantage.py` to calculate advantage values from the collected data, then use them for model training; a generic sketch of the advantage computation follows below.
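As a rough reference for step 3, here is a generic sketch of advantage computation: discounted reward-to-go minus a value baseline, followed by per-rollout normalization. The shapes, function names, and the simple baseline are assumptions; `compute_advantage.py` may use a different estimator.

``` python
import numpy as np

def reward_to_go(rewards: np.ndarray, gamma: float = 0.99) -> np.ndarray:
    """Discounted return-to-go for one closed-loop rollout (shape (T,))."""
    returns = np.zeros_like(rewards, dtype=np.float64)
    running = 0.0
    for t in reversed(range(len(rewards))):
        running = rewards[t] + gamma * running
        returns[t] = running
    return returns

def compute_advantages(rewards: np.ndarray, values: np.ndarray,
                       gamma: float = 0.99, eps: float = 1e-8) -> np.ndarray:
    """Advantage = return-to-go minus a value baseline, then normalized."""
    adv = reward_to_go(rewards, gamma) - values
    return (adv - adv.mean()) / (adv.std() + eps)

if __name__ == "__main__":
    # Toy rollout: sparse penalty for a collision at the last step.
    rewards = np.array([0.0, 0.0, 0.0, 0.0, -1.0])
    values = np.zeros_like(rewards)
    print(compute_advantages(rewards, values))
```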
## 📚 Citation
If you find RAD useful in your research or applications, please consider giving us a star 🌟 and citing it with the following BibTeX entry.
```bibtex
@article{RAD,
title={RAD: Training an End-to-End Driving Policy via Large-Scale 3DGS-based Reinforcement Learning},
author={Gao, Hao and Chen, Shaoyu and Jiang, Bo and Liao, Bencheng and Shi, Yiang and Guo, Xiaoyang and Pu, Yuechuan and Yin, Haoran and Li, Xiangyu and Zhang, Xinbang and Zhang, Ying and Liu, Wenyu and Zhang, Qian and Wang, Xinggang},
journal={arXiv preprint arXiv:2502.13144},
year={2025}
}
```