# CoopTrack
**Repository Path**: airic-airfm/CoopTrack
## Basic Information
- **Project Name**: CoopTrack
- **Description**: [ICCV 2025] Cooperative perception aims to address the inherent limitations of single-vehicle autonomous driving systems through information exchange among multiple agents.
- **Primary Language**: Unknown
- **License**: Not specified
- **Default Branch**: main
- **Homepage**: None
- **GVP Project**: No
## Statistics
- **Stars**: 0
- **Forks**: 0
- **Created**: 2025-12-10
- **Last Updated**: 2025-12-10
## Categories & Tags
**Categories**: Uncategorized
**Tags**: None
## README
**CoopTrack: Exploring End-to-End Learning for Efficient Cooperative Sequential Perception**
[Jiaru Zhong](https://scholar.google.com/citations?hl=zh-CN&user=Q9KMoxkAAAAJ), Jiahao Wang, [Jiahui Xu](https://scholar.google.com/citations?hl=zh-CN&user=MHa9ts4AAAAJ), [Xiaofan Li](https://scholar.google.com/citations?hl=zh-CN&user=pjZdkO4AAAAJ&view_op=list_works&sortby=pubdate), [Zaiqing Nie](https://scholar.google.com/citations?user=Qg7T6vUAAAAJ)\*, [Haibao Yu](https://scholar.google.com/citations?user=JW4F5HoAAAAJ)\*
[arXiv Paper](https://arxiv.org/abs/2507.19239)
[Hugging Face Model](https://huggingface.co/zhongjiaru/CoopTrack)
## News
- **August 31, 2025:** The [code](https://github.com/zhongjiaru/CoopTrack) and [model](https://huggingface.co/zhongjiaru/CoopTrack) have been open-sourced.
- **July 25, 2025:** CoopTrack is now available on [arXiv](https://arxiv.org/abs/2507.19239), and it has been selected as a **Highlight**.
- **June 26, 2025:** CoopTrack has been accepted to ICCV 2025! We will release the paper and code soon!
## Table of Contents
- [Introduction](#introduction)
- [Getting Started](#getting-started)
- [Contact](#contact)
- [Citation](#citation)
- [Related Works](#related-works)
## Introduction
Cooperative perception aims to address the inherent limitations of single-vehicle autonomous driving systems through information exchange among multiple agents. Previous research has primarily focused on single-frame perception tasks. However, the more challenging cooperative sequential perception tasks, such as cooperative 3D multi-object tracking, have not been thoroughly investigated. Therefore, we propose CoopTrack, a fully instance-level end-to-end framework for cooperative tracking, featuring learnable instance association, which fundamentally differs from existing approaches. CoopTrack transmits sparse instance-level features that significantly enhance perception capabilities while maintaining low transmission costs. Furthermore, the framework comprises two key components: Multi-Dimensional Feature Extraction, and Cross-Agent Association and Aggregation, which collectively enable comprehensive instance representation with semantic and motion features, and adaptive cross-agent association and fusion based on a feature graph. Experiments on both the V2X-Seq and Griffin datasets demonstrate that CoopTrack achieves excellent performance. Specifically, it attains state-of-the-art results on V2X-Seq, with 39.0% mAP and 32.8% AMOTA.
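To illustrate the data flow only, here is a minimal PyTorch sketch of matching and fusing sparse instance-level features across two agents. Everything in it (`associate_and_fuse`, the cosine-similarity affinity, the greedy threshold matching, `sim_thresh`) is an illustrative assumption, not the actual CoopTrack implementation, which learns the association end-to-end over a feature graph.

```python
import torch
import torch.nn.functional as F

def associate_and_fuse(ego_feats: torch.Tensor,
                       infra_feats: torch.Tensor,
                       sim_thresh: float = 0.5) -> torch.Tensor:
    """Toy cross-agent association sketch (NOT the CoopTrack method).

    ego_feats:   (N, C) instance features from the ego vehicle
    infra_feats: (M, C) instance features received from the roadside agent
    Returns fused instance features of shape (N + unmatched_infra, C).
    """
    # Edge weights of a simple cross-agent feature graph:
    # cosine similarity between every ego/infra instance pair.
    ego_n = F.normalize(ego_feats, dim=-1)      # (N, C)
    infra_n = F.normalize(infra_feats, dim=-1)  # (M, C)
    affinity = ego_n @ infra_n.T                # (N, M)

    # Greedy matching: each ego instance independently takes its best
    # infra candidate if the similarity clears the threshold. A real
    # system would enforce one-to-one matching (e.g. Hungarian).
    best_sim, best_idx = affinity.max(dim=1)    # both (N,)
    matched = best_sim > sim_thresh

    # Fuse matched pairs by simple averaging; unmatched infra instances
    # are appended as objects the ego agent could not see on its own.
    fused = ego_feats.clone()
    fused[matched] = 0.5 * (ego_feats[matched] + infra_feats[best_idx[matched]])
    used = torch.zeros(infra_feats.shape[0], dtype=torch.bool)
    used[best_idx[matched]] = True
    return torch.cat([fused, infra_feats[~used]], dim=0)

if __name__ == "__main__":
    ego = torch.randn(5, 256)    # 5 ego instances, 256-dim features
    infra = torch.randn(7, 256)  # 7 infrastructure instances
    out = associate_and_fuse(ego, infra)
    print(out.shape)  # (5 + number of unmatched infra instances, 256)
```

The key design point this sketch gestures at is that only sparse per-instance feature vectors cross the agent boundary, not dense BEV maps, which is what keeps transmission costs low; see the guides under `docs/` for the real training and evaluation pipeline.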
## Getting Started
- [Installation](./docs/INSTALL.md)
- [Prepare Dataset](./docs/DATA_PREP.md)
- [Train/Val](./docs/TRAIN_EVAL.md)
## Contact
If you have any questions, please contact Jiaru Zhong via email (zhong.jiaru@outlook.com).
## Citation
If you find CoopTrack useful in your research or applications, please consider giving us a star 🌟 and citing it with the following BibTeX entry.
```bibtex
@article{zhong2025cooptrack,
  title={CoopTrack: Exploring End-to-End Learning for Efficient Cooperative Sequential Perception},
  author={Zhong, Jiaru and Wang, Jiahao and Xu, Jiahui and Li, Xiaofan and Nie, Zaiqing and Yu, Haibao},
  journal={arXiv preprint arXiv:2507.19239},
  year={2025}
}
```
## Related Works
We are deeply grateful to the following outstanding open-source works; without them, ours would not have been possible.
- [UniV2X](https://github.com/AIR-THU/UniV2X)
- [UniAD](https://github.com/OpenDriveLab/UniAD)
- [DAIR-V2X](https://github.com/AIR-THU/DAIR-V2X)
- [V2X-Seq](https://github.com/AIR-THU/DAIR-V2X-Seq)
- [PF-Track](https://github.com/TRI-ML/PF-Track)
- [ADA-Track](https://github.com/dsx0511/ADA-Track)
- [FFNET](https://github.com/haibao-yu/FFNet-VIC3D)