# Ground-Fusion2
**Repository Path**: beichenlee/Ground-Fusion2
## Basic Information
- **Project Name**: Ground-Fusion2
- **Description**: No description available
- **Primary Language**: Unknown
- **License**: MIT
- **Default Branch**: main
- **Homepage**: None
- **GVP Project**: No
## Statistics
- **Stars**: 0
- **Forks**: 0
- **Created**: 2025-08-27
- **Last Updated**: 2025-08-27
## Categories & Tags
**Categories**: Uncategorized
**Tags**: None
## README
# [IROS2025] Ground-Fusion++: A Resilient Modular Multi-Sensor Fusion SLAM Framework
Project Lead: [**Jie Yin 殷杰**](https://sjtuyinjie.github.io/)
[[Website]](https://sjtuyinjie.github.io/M3DGR-website/)
[[Paper]](https://arxiv.org/abs/2507.08364)
[[Dataset]](https://github.com/sjtuyinjie/M3DGR)
[[Pre Video]](TBD)
[[News]](https://mp.weixin.qq.com/s/2dVvuS3z6YDXbCG9-EOYuw)
**Core contributors:** Deteng Zhang, Junjie Zhang, Yihong Tian, Jie Yin* (Project Lead)
---
## Notice 📢
### News
**2025.06.16:** Our paper has been accepted to IROS 2025!
All datasets and code will be released soon; stay tuned!
### TODO
- [x] Release the camera-ready version of the IROS2025 paper. [[paper](https://arxiv.org/abs/2507.08364)]
- [x] Release 40 SLAM codes adapted for the M3DGR dataset. [[codes](https://github.com/sjtuyinjie/M3DGR?tab=readme-ov-file#6-supported-slam-algorithm-list)]
- [x] Release the Ground-Fusion++ code, with examples on M3DGR and M2DGR-plus. [[code](https://github.com/sjtuyinjie/Ground-Fusion2)]
- [x] Release most sequences used in the paper, together with GT and calibration files, so that all results can be reproduced. [[data](https://github.com/sjtuyinjie/M3DGR?tab=readme-ov-file#5-dataset-sequences)]
- [ ] Release long-term sequences upon our journal paper's acceptance.
- [ ] Release a much more competitive and robust SLAM system upon our journal paper's acceptance. Please look forward to our ongoing research!
> For those interested in accessing the unreleased M3DGR sequences in advance, we recommend first thoroughly evaluating your methods on the already released sequences. After that, feel free to contact us at **zhangjunjie587@gmail.com** to request early access for research purposes.
## 1. Introduction 🎯
This repository contains the official implementation of our **IROS 2025** paper:
> **"Towards Robust Sensor-Fusion Ground SLAM: A Comprehensive Benchmark and a Resilient Framework"**
In this work, we propose a complete solution for robust SLAM on ground robots operating under degraded conditions. Our key contributions are:
- **[M3DGR Benchmark](https://github.com/sjtuyinjie/M3DGR)**: A challenging multi-sensor, multi-scenario SLAM benchmark dataset with systematically induced degradation.
- **Ground-Fusion++ ([Link](https://github.com/sjtuyinjie/Ground-Fusion2))**: A resilient and modular SLAM framework integrating heterogeneous sensors for robust localization and high-quality mapping.
- **Comprehensive Evaluation ([Link](https://github.com/sjtuyinjie/M3DGR/tree/main/baseline_systems))**: An evaluation of over 40 cutting-edge SLAM methods on M3DGR.
## 2. Key Features 🔧
Our focus is **not on theoretical novelty**, but on building a **resilient and modular system that is simple yet effective**, serving as a practical and extensible baseline for real-world deployment and future research.
- **Multi-sensor Integration:** GNSS, RGB-D camera, IMU, wheel odometer, and LiDAR.
- **Robustness:** Combines state-of-the-art SLAM components to ensure accurate localization and mapping in large-scale, real-world scenarios.
- **Modularity:** Designed as an extensible baseline to support future research and practical deployments in complex environments.
## 3. Prerequisites and Installation
### 3.1 Ubuntu and ROS
Tested on Ubuntu 20.04 (with ROS Noetic and OpenCV 4).
### 3.2 Prerequisites
This package requires [Eigen 3.3.7](https://github.com/PX4/eigen), [Ceres 1.14](https://ceres-solver.googlesource.com/ceres-solver), [Sophus](https://github.com/strasdat/Sophus.git), Sophus_no_template, fmt, and Livox-SDK. We provide a thirdparty folder with all of these third-party libraries; you can [download](https://drive.google.com/file/d/1umjPqhYcBjMMFPogh5NlPa7JLTkwftNW/view?usp=drive_link) it and install:
~~~
cd thirdparty
sudo chmod +x ./install.sh
sudo ./install.sh
~~~
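If the script completes without errors, the headers and libraries should be discoverable afterwards. A minimal sanity check, assuming the libraries install under the usual `/usr/local` prefix (adjust if your `install.sh` uses a different one):
```
# check that the expected headers landed under /usr/local/include
ls /usr/local/include | grep -iE "eigen|ceres|sophus|fmt|livox"
# check that the shared libraries are registered with the dynamic loader
ldconfig -p | grep -iE "ceres|fmt"
```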
### 3.3 Build Ground-Fusion++
```
cd ~/catkin_ws/src
git clone https://github.com/sjtuyinjie/Ground-Fusion2.git
cd ../..
catkin_make
```
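If `catkin_make` cannot find ROS packages, make sure the ROS environment is sourced first; building in Release mode also helps the mapping thread keep up. A minimal sketch, assuming ROS Noetic at its default install path:
```
# source the ROS Noetic environment (default install path assumed)
source /opt/ros/noetic/setup.bash
cd ~/catkin_ws
# Release build for real-time performance
catkin_make -DCMAKE_BUILD_TYPE=Release
# overlay the freshly built workspace
source devel/setup.bash
```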
## 4. Running with Docker
We provide a Dockerfile so you can easily replicate our setup. Below are the steps to build the Docker image.
- Install Docker and nvidia-docker2. You can follow a tutorial like [this one](https://github.com/sjtuyinjie/Ground-Fusion2/blob/main/Ground-Fusion%2B%2B/docker/readme.md).
- Pull the ROS image in advance.
```
sudo systemctl start docker
sudo docker pull j3soon/ros-noetic-desktop-full
```
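Before pulling the larger ROS image, you can optionally confirm that the Docker daemon itself works using the standard hello-world image:
```
# quick smoke test of the Docker installation
sudo docker run --rm hello-world
```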
- Clone the repository and build the Docker image.
```
cd ~/catkin_ws/src
git clone https://github.com/sjtuyinjie/Ground-Fusion2.git
cp Ground-Fusion2/Ground-Fusion++/docker/{Dockerfile,ros.asc} ..
cd ..
wget -O thirdparty.zip 'https://drive.google.com/uc?id=1umjPqhYcBjMMFPogh5NlPa7JLTkwftNW&export=download' && unzip thirdparty.zip && rm thirdparty.zip
sudo docker build -t groundfusion2 .
```
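If the build succeeds, the image should show up in your local image list:
```
# verify that the groundfusion2 image was built
sudo docker images groundfusion2
```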
- Start the container and mount the data directory.
```
sudo xhost +local:docker
sudo docker run -it --rm \
--env="DISPLAY=$DISPLAY" \
--volume="/tmp/.X11-unix:/tmp/.X11-unix:rw" \
--device=/dev/dri \
-v ./data:/root/data \
groundfusion2 /bin/bash
```
Change `./data` to the path where your rosbags are actually stored (an absolute path is safest).
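If RViz later fails to open a window from inside the container, you can first verify that X11/GL forwarding works. A sketch, assuming `mesa-utils` is not already installed in the image:
```
# inside the container: install the GL demo tools and check the OpenGL version
apt-get update && apt-get install -y mesa-utils
glxinfo | grep "OpenGL version"
```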
- Run the system.
In one container terminal, type:
```
cd /root/ws
export MESA_GL_VERSION_OVERRIDE=3.3
source devel/setup.bash
roslaunch groundfusion2 run_m3dgr.launch
```
Open another container terminal:
```
cd /root/data
rosbag play Dynamic01.bag
```
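If playback outruns the system on your machine (see the Known Issues below), the standard `rosbag` tools let you inspect the bag and slow it down:
```
# inspect topics, duration, and message counts before playing
rosbag info Dynamic01.bag
# play at half speed if the mapping thread cannot keep up
rosbag play -r 0.5 Dynamic01.bag
```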
[Here](https://github.com/sjtuyinjie/Ground-Fusion2/blob/main/Ground-Fusion%2B%2B/docker/readme.md) you can find a more detailed docker build tutorial.
## 5. Run Examples
### 5.1 M3DGR dataset
Download the [M3DGR](https://github.com/sjtuyinjie/M3DGR) dataset and give it a star.
~~~
# [launch] open a terminal and type:
source devel/setup.bash
roslaunch groundfusion2 run_m3dgr.launch
# or, to use the AVIA LiDAR:
# roslaunch groundfusion2 run_m3dgr_avia.launch
~~~
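To compare the estimated trajectory against the released ground truth, one common option is the third-party [evo](https://github.com/MichaelGrupp/evo) toolkit (not part of this repo). A hedged sketch, with hypothetical file names:
```
pip install evo
# ATE on TUM-format trajectories; -a aligns the estimate to the ground truth
evo_ape tum ground_truth.txt estimated_trajectory.txt -a --plot
```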
### 5.2 M2DGR-Plus dataset
Download the [M2DGR-Plus](https://github.com/sjtuyinjie/M2DGR-plus) dataset and give it a star.
~~~
# [launch] open a terminal and type:
source devel/setup.bash
roslaunch groundfusion2 run_m2dgrp.launch
~~~
*Example sequence: Parking_01*
### 5.3 Using RViz to view trajectory switching
If you want to inspect the subsystem switching, or the mesh quality is poor, you can run the following command to check which subsystem has the problem. This happens especially often when using the AVIA LiDAR indoors.
**🔵 Blue** is the fused path of Ground-Fusion++, **🟢 Green** is the LIO submodule path, and **🔴 Red** is the VIO submodule path (*: this sequence comes from Table VIII of the [paper](https://arxiv.org/abs/2507.08364)).
~~~
# [launch] open a terminal and type:
source devel/setup.bash
rosrun rviz rviz -d $(rospack find groundfusion2)/launch/rviz.rviz
~~~
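Beyond RViz, you can also check on the command line whether each submodule is still publishing odometry. A hedged sketch; the topic name below is an assumption, so list the real ones first with `rostopic list`:
```
# list all odometry-like topics actually published by the system
rostopic list | grep -i odom
# then check the publish rate of a submodule's topic (hypothetical name)
rostopic hz /lio_odometry
```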
> ⚠️ **Known Issues**:
> - In most sequences, our provided configurations can directly reproduce the results reported in the paper. However, in certain cases, parameter fine-tuning may be required for optimal performance.
> - The mapping thread is relatively computationally intensive and may require a powerful machine to run smoothly.
> - 💡 Our team is actively working on a next-generation version of Ground-Fusion++. Stay tuned for updates and follow our latest research!
> - Open to collaboration! If you would like to help develop a robust and resilient SLAM system with highly advanced performance and contribute to the open-source community, please contact us at robot_yinjie@outlook.com.
## 6. Citation
```bibtex
@article{zhang2025towards,
title={Towards Robust Sensor-Fusion Ground SLAM: A Comprehensive Benchmark and A Resilient Framework},
author={Zhang, Deteng and Zhang, Junjie and Sun, Yan and Li, Tao and Yin, Hao and Xie, Hongzhao and Yin, Jie},
journal={arXiv preprint arXiv:2507.08364},
year={2025}
}
@inproceedings{yin2024ground,
title={Ground-fusion: A low-cost ground slam system robust to corner cases},
author={Yin, Jie and Li, Ang and Xi, Wei and Yu, Wenxian and Zou, Danping},
booktitle={2024 IEEE International Conference on Robotics and Automation (ICRA)},
pages={8603--8609},
year={2024},
organization={IEEE}
}
@article{yin2021m2dgr,
title={M2dgr: A multi-sensor and multi-scenario slam dataset for ground robots},
author={Yin, Jie and Li, Ang and Li, Tao and Yu, Wenxian and Zou, Danping},
journal={IEEE Robotics and Automation Letters},
volume={7},
number={2},
pages={2266--2273},
year={2021},
publisher={IEEE}
}
```
## 7. Star History ⭐️
[Star History Chart](https://star-history.com/#sjtuyinjie/Ground-Fusion2&Timeline)
## 8. Contributing 👷‍♀️
We appreciate all contributions to improving Ground-Fusion++.
## 9. Acknowledgements 🤝
Thanks to everyone for supporting this project.