# LV-DOT
**Repository Path**: xapples/LV-DOT
## Basic Information
- **Project Name**: LV-DOT
- **Description**: No description available
- **Primary Language**: Unknown
- **License**: MIT
- **Default Branch**: main
- **Homepage**: None
- **GVP Project**: No
## Statistics
- **Stars**: 0
- **Forks**: 0
- **Created**: 2025-03-01
- **Last Updated**: 2025-03-01
## Categories & Tags
**Categories**: Uncategorized
**Tags**: None
## README
# LV-DOT: LiDAR-Visual Dynamic Obstacle Detection and Tracking for Autonomous Robots
This repository implements the LiDAR-Visual Dynamic Obstacle Detection and Tracking (LV-DOT) algorithm, which detects and tracks dynamic obstacles for robots with extremely constrained computational resources.
For additional details, please refer to the related paper available here:
Zhefan Xu\*, Haoyu Shen\*, Xinming Han, Hanyu Jin, Kanlong Ye, and Kenji Shimada, "LV-DOT: LiDAR-visual dynamic obstacle detection and tracking for autonomous robot navigation", arXiv, 2025. [preprint] [\[video\]](https://youtu.be/rRvgTulWqvk)
\*These authors contributed equally.
## News
- **2025-02-28:** The code, video demos, and relevant papers for our LV-DOT framework are released on GitHub. The authors will actively maintain and update this repo!
## Table of Contents
- [Installation Guide](#installation-guide)
- [Run Demo](#run-demo)
- [Run on dataset](#a-run-on-dataset)
- [Run on your device](#b-run-on-your-device)
- [LV-DOT Framework and Results](#lv-dot-framework-and-results)
- [Citation and Reference](#citation-and-reference)
## Installation Guide
The system requirements for this repository are as follows. Please ensure your system meets these requirements:
- Ubuntu 18.04/20.04 LTS
- ROS Melodic/Noetic
This package has been tested on the following onboard computers:
- [NVIDIA Jetson Xavier NX](https://www.nvidia.com/en-us/autonomous-machines/embedded-systems/jetson-xavier-series/)
- [NVIDIA Jetson Orin NX](https://www.nvidia.com/en-us/autonomous-machines/embedded-systems/jetson-orin/)
- [Intel NUC](https://www.intel.com/content/www/us/en/products/details/nuc.html)
Please follow the instructions below to install this package.
```
# LV-DOT requires the ROS vision_msgs package
sudo apt install ros-noetic-vision-msgs   # use ros-melodic-vision-msgs on ROS Melodic

# Install the Python package required by the YOLOv11 detector
pip install ultralytics

# Clone and build the package in your catkin workspace
cd ~/catkin_ws/src
git clone https://github.com/Zhefan-Xu/LV-DOT.git
cd ..
catkin_make
source devel/setup.bash   # make the package visible to roslaunch
```
## Run Demo
### a. Run on dataset
Please download the rosbag file from this [link](https://cmu.box.com/s/cucvje5b9xfpdpe57ilh0jx702b3ks2p):
```
# Terminal 1: replay the demo rosbag in a loop
rosbag play -l corridor_demo.bag

# Terminal 2: launch the LV-DOT detector
roslaunch onboard_detector run_detector.launch
```
The perception results can be visualized in Rviz as follows:
https://github.com/user-attachments/assets/e640edab-d4f3-40d6-88dc-9e5014430732
### b. Run on your device
Please adjust the configuration file ```cfg/detector_param.yaml``` to match your LiDAR and camera devices. Also, change the color image topic name in ```scripts/yolo_detector/yolov11_detector.py```.
As listed in the parameter file, the algorithm expects the following data from the robot:
- LiDAR Point Cloud: ```/pointcloud```
- Depth image: ```/camera/depth/image_rect_raw```
- Color image: ```/camera/color/image_rect_raw```
- Robot pose: ```/mavros/local_position/pose```
- Robot odometry (alternative to robot pose): ```/mavros/local_position/odom```
Additionally, update the camera intrinsic parameters and the camera-LiDAR extrinsic parameters in the config file.
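Before launching the detector, you can verify that your sensor drivers actually publish the expected inputs. The helper below is a minimal sketch and is not part of this repository; it assumes the default topic names listed above, so adjust them to match your ```cfg/detector_param.yaml``` if they differ.
```python
#!/usr/bin/env python3
# Minimal input-topic check (illustrative helper, not part of LV-DOT).
# It waits briefly for one message on each topic the detector expects.
import rospy
from sensor_msgs.msg import Image, PointCloud2
from geometry_msgs.msg import PoseStamped
from nav_msgs.msg import Odometry

# Default topic names from cfg/detector_param.yaml; edit to match your setup.
EXPECTED_TOPICS = {
    "/pointcloud": PointCloud2,
    "/camera/depth/image_rect_raw": Image,
    "/camera/color/image_rect_raw": Image,
    "/mavros/local_position/pose": PoseStamped,
    "/mavros/local_position/odom": Odometry,   # alternative to the pose topic
}

if __name__ == "__main__":
    rospy.init_node("lvdot_input_check")
    for topic, msg_type in EXPECTED_TOPICS.items():
        try:
            rospy.wait_for_message(topic, msg_type, timeout=5.0)
            rospy.loginfo("OK: %s", topic)
        except rospy.ROSException:
            rospy.logwarn("No message on %s within 5 s", topic)
```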
Run the following command to launch dynamic obstacle detection and tracking.
```
# Launch your sensor drivers first and confirm the topics above are being published.
roslaunch onboard_detector run_detector.launch
```
LV-DOT can be directly used to assist mobile robot navigation and collision avoidance in dynamic environments, as demonstrated below:
## LV-DOT Framework and Results
The LV-DOT framework is shown below. Using onboard LiDAR, camera, and odometry inputs, the LiDAR and depth detection modules detect 3D obstacles, while the color detection module identifies 2D dynamic obstacles. The LiDAR-visual fusion module refines these detections, and the tracking module classifies obstacles as static or dynamic.
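As a rough illustration of this data flow (a conceptual sketch only, not the actual ```onboard_detector``` implementation; all names below are placeholders), one update cycle can be outlined as follows:
```python
# Conceptual sketch of the LV-DOT data flow described above.
# All names are illustrative placeholders, not the real onboard_detector API.
from dataclasses import dataclass
from typing import List

@dataclass
class Obstacle3D:
    center: tuple            # (x, y, z) in the odometry frame
    size: tuple              # (length, width, height)
    is_dynamic: bool = False

def lidar_detect(cloud) -> List[Obstacle3D]:
    return []                # placeholder: 3D boxes clustered from the LiDAR cloud

def depth_detect(depth_image) -> List[Obstacle3D]:
    return []                # placeholder: 3D boxes extracted from the depth image

def color_detect(color_image) -> List[dict]:
    return []                # placeholder: 2D dynamic-obstacle boxes (e.g., YOLOv11)

def lidar_visual_fusion(lidar_boxes, depth_boxes, color_boxes) -> List[Obstacle3D]:
    # placeholder: associate detections across modalities and refine the boxes
    return lidar_boxes + depth_boxes

def track(fused_boxes, previous_tracks) -> List[Obstacle3D]:
    # placeholder: temporal association; obstacles that move relative to the
    # odometry frame are classified as dynamic
    return fused_boxes

def lvdot_step(cloud, depth_image, color_image, previous_tracks) -> List[Obstacle3D]:
    fused = lidar_visual_fusion(lidar_detect(cloud),
                                depth_detect(depth_image),
                                color_detect(color_image))
    return track(fused, previous_tracks)
```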
Example qualitative perception results in various testing environments are shown below:
## Citation and Reference
If you find our work useful for your research, please consider citing our paper.
```
Coming soon
```