# Omni-Swarm-CIUS

**Repository Path**: myboyhood/omni-swarm-cius

## Basic Information

- **Project Name**: Omni-Swarm-CIUS
- **Description**: An annotated, CIUS-modified version of HKUST's Omni-swarm
- **Primary Language**: Unknown
- **License**: Not specified
- **Default Branch**: cius
- **Homepage**: None
- **GVP Project**: No

## Statistics

- **Stars**: 2
- **Forks**: 0
- **Created**: 2023-01-05
- **Last Updated**: 2025-02-26

## Categories & Tags

**Categories**: Uncategorized

**Tags**: None

## README

# Omni-swarm CIUS Note

## Usage

### HKUST dependencies

> Omni-swarm officially supports the TX2 with Ubuntu 18.04. For other hardware and system setups, you must convert the models to TensorRT yourself.
>
> [Here](https://www.dropbox.com/s/skq1vgfeawiw151/models.zip?dl=0) to download the CNN models for Omni-swarm; extract them to the swarm_loop folder.
>
> [Here](https://www.dropbox.com/sh/w5yagas06a9r14d/AACdKgMfCCg07M6jr6Ipmus1a?dl=0) to get the raw and preprocessed official omnidirectional and pinhole datasets.
>
> [swarm_msgs](https://github.com/HKUST-Swarm/swarm_msgs) and [inf_uwb_ros](https://github.com/HKUST-Swarm/inf_uwb_ros) are compulsory, and [swarm_detector](https://github.com/HKUST-Swarm/swarm_detector) if you want to use the detector.
>
> First, run the pinhole or fisheye version of [VINS-Fisheye](https://github.com/HKUST-Aerial-Robotics/VINS-Fisheye) (yes, VINS-Fisheye is pinhole-compatible and is essential for Omni-swarm).

Besides the dependencies listed in the Omni-swarm documentation itself, the following libraries and configuration are also required:

- libtorch
- TensorRT
- lcm
- libSGM
- yolo-tensorrt
- camera_models
- dw
- backward-cpp
- faiss
- graphviz

Notes on each:

1. `libtorch` and `TensorRT` are covered in this [boyhoodme blog post](https://blog.csdn.net/boyhoodme/article/details/128557384).
2. `lcm` is a UDP communication library, used here to transmit SuperPoint descriptors ([github link](https://github.com/lcm-proj/lcm)). After downloading it, follow the standard system-wide install procedure below.
3. `libSGM` is used to build depth maps in VINS-Fisheye ([github link](https://github.com/fixstars/libSGM)). After downloading it, edit its `CMakeLists.txt`:

   ```cmake
   # OFF -> ON
   option(BUILD_OPENCV_WRAPPER "Make library compatible with cv::Mat and cv::cuda::GpuMat of OpenCV" ON)
   # build position-independent code so it can be linked into shared libraries
   add_compile_options(-fPIC)
   ```

   then follow the standard system-wide install procedure below.
4. `yolo-tensorrt` is the YOLOv4 TensorRT port used to detect other drones ([github link](https://github.com/enazoe/yolo-tensorrt/tree/dev-yolov4)). Edit its `CMakeLists.txt`:

   ```cmake
   ## note: C++14 is required
   set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -std=c++14 -Wno-write-strings")
   set(CMAKE_EXE_LINKER_FLAGS "${CMAKE_EXE_LINKER_FLAGS} -Wl,-rpath -Wl,$ORIGIN")
   ## declare the libtorch and TensorRT paths
   set(Torch_DIR "$ENV{HOME}/3rdParty/libtorch/share/cmake/Torch")
   find_package(Torch REQUIRED)
   set(TENSORRT_ROOT $ENV{HOME}/3rdParty/TensorRT-7.1.3.4)
   include_directories("$ENV{HOME}/3rdParty/TensorRT-7.1.3.4/include")
   ```

5. `camera_models` is a subpackage of `VINS-Fisheye`.
6. `dw` is an Ubuntu package used for debugging: `sudo apt-get install libdw-dev`.
7. `backward-cpp` is a debugging helper that points to the offending line when the program crashes with `Segmentation Fault (core dumped)` ([github link](https://github.com/bombela/backward-cpp.git)).
8. `faiss` is Facebook's vector-similarity search library, used here for loop closure detection. The official install instructions are complicated and did not work for me; this [blog post](https://blog.csdn.net/bingningning/article/details/111465219) is recommended instead. It is fine if the math library tests fail; you can continue the installation anyway.
9. `graphviz`: run `sudo apt install graphviz graphviz-dev`.

### Standard system-wide install

```sh
mkdir build
cd build/
cmake ..
make -j4
sudo make install
```

## Build

Because the packages here depend heavily on each other, `catkin_make` builds them all together, so some header files are used by other packages before they have been generated. I therefore switched to `catkin build`, which builds each package separately, but found that `catkin build` could not link against `${OpenCV_LIBS}`, so I switched back to `catkin_make`.

## Using packages across workspaces

If packages in this workspace need to be used by packages in another workspace, sourcing multiple workspaces in `~/.bashrc` is not recommended. Instead, build an overlay workspace such as `catkin_overlay_ws`; see this [blog post](https://blog.csdn.net/dndxjj/article/details/90712809).

## Issues

- cudnn link errors: check `echo $LD_LIBRARY_PATH` and add the cudnn directory to it in `~/.zshrc`; on my machine it is `/usr/lib/x86_64-linux-gnu`.
- A CUDA error caused by the CUDA dependency or version used when building the stereo_depth library in VINS-Fisheye. It does not affect the Omni-swarm build, so the issue is left open here for now:

  ```
  /home/zph/ros_ws/vio_ws/devel/lib/libstereo_depth.so: undefined reference to `__cudaRegisterFatBinaryEnd'
  ```

---

# Omni-swarm

A Decentralized Omnidirectional Visual-Inertial-UWB State Estimation System for Aerial Swarm

![](./doc/gcs.png)

## Introduction

**Omni-swarm** is a decentralized omnidirectional visual-inertial-UWB state estimation system for aerial swarms. To solve the issues of observability, complicated initialization, insufficient accuracy and lack of global consistency, we introduce an omnidirectional perception system as the front-end of **Omni-swarm**, consisting of omnidirectional sensors, which include stereo fisheye cameras and ultra-wideband (UWB) sensors, and algorithms, which include fisheye visual-inertial odometry (VIO), multi-drone map-based localization and visual object detection. A graph-based optimization and forward propagation module works as the back-end of **Omni-swarm** to fuse the measurements from the front-end. According to the experimental results, the proposed decentralized state estimation method achieves centimeter-level relative state estimation accuracy while ensuring global consistency.
Moreover, supported by **Omni-swarm**, inter-drone collision avoidance can be accomplished in a fully decentralized scheme without any external device, demonstrating the potential of **Omni-swarm** to be the foundation of autonomous aerial swarm flights in different scenarios.

This is the code for __Omni-swarm: A Decentralized Omnidirectional Visual-Inertial-UWB State Estimation System for Aerial Swarms__. The manuscript has been accepted by IEEE Transactions on Robotics (T-RO); a preprint version is [here](https://arxiv.org/abs/2103.04131).

The structure of Omni-swarm is

![](./doc/structure.PNG)

The fused measurements of Omni-swarm:

![](./doc/measurements.PNG)

The detailed backend structure of state estimation of Omni-swarm:

![](./doc/backend.PNG)

## Usage

Omni-swarm officially supports the TX2 with Ubuntu 18.04. For other hardware and system setups, you must convert the models to TensorRT yourself.

[Here](https://www.dropbox.com/s/skq1vgfeawiw151/models.zip?dl=0) to download the CNN models for Omni-swarm; extract them to the swarm_loop folder.

[Here](https://www.dropbox.com/sh/w5yagas06a9r14d/AACdKgMfCCg07M6jr6Ipmus1a?dl=0) to get the raw and preprocessed official omnidirectional and pinhole datasets.

[swarm_msgs](https://github.com/HKUST-Swarm/swarm_msgs) and [inf_uwb_ros](https://github.com/HKUST-Swarm/inf_uwb_ros) are compulsory, and [swarm_detector](https://github.com/HKUST-Swarm/swarm_detector) if you want to use the detector.

First, run the pinhole or fisheye version of [VINS-Fisheye](https://github.com/HKUST-Aerial-Robotics/VINS-Fisheye) (yes, VINS-Fisheye is pinhole-compatible and is essential for Omni-swarm).
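The prerequisites above can be sketched as a workspace setup script. This is only a sketch: `WS_SRC` is a placeholder path (point it at your real catkin workspace `src` directory), and the clone and unzip steps are left commented out because the Dropbox download must be done manually.

```shell
# Placeholder workspace path; replace with your real catkin workspace src dir.
WS_SRC=${WS_SRC:-/tmp/catkin_ws/src}
mkdir -p "$WS_SRC"
cd "$WS_SRC"
# Compulsory packages (clone these; swarm_detector only if you use the detector):
#   git clone https://github.com/HKUST-Swarm/swarm_msgs.git
#   git clone https://github.com/HKUST-Swarm/inf_uwb_ros.git
#   git clone https://github.com/HKUST-Aerial-Robotics/VINS-Fisheye.git
#   git clone https://github.com/HKUST-Swarm/swarm_detector.git
# CNN models: download models.zip from the Dropbox link above, then extract it
# into the swarm_loop folder:
mkdir -p "$WS_SRC/Omni-swarm/swarm_loop"
# unzip models.zip -d "$WS_SRC/Omni-swarm/swarm_loop/"
```

After cloning and extracting, build the workspace as described in the Build section of the CIUS note above.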
Start map-based localization with

> roslaunch swarm_loop nodelet-sfisheye.launch

or the pinhole version

> roslaunch swarm_loop realsense.launch

Start the visual object detector with (not compulsory)

> roslaunch swarm_detector detector.launch

Start the UWB communication module with (supports the NoopLoop UWB module only)

> roslaunch localization_proxy uwb_comm.launch start_uwb_node:=true

If you don't have a UWB module, you may start the communication with a self id (starting from 1, different on each drone):

> roslaunch localization_proxy uwb_comm.launch start_uwb_node:=true enable_uwb:=false self_id:=1

Start state estimation with the visualizer by

> roslaunch swarm_localization loop-5-drone.launch bag_replay:=true viz:=true enable_distance:=false cgraph_path:=/home/your-name/output/graph.dot

You may enable/disable specific measurements by adding

> enable_distance:=false

or

> enable_detection:=false enable_loop:=true

To visualize the real-time estimation result, use __viz:=true__. Add __bag_replay:=true__ only when replaying a dataset; when evaluating a pre-processed dataset, you may launch only __loop-5-drone.launch__.

Some analysis tools are located in [DataAnalysis](swarm_localization/DataAnalysis).

![](./doc/ob-Traj2.png)

## Docker

To evaluate the program, the recommended way is to use Docker. We provide a runnable docker image on Docker Hub; the tag is xuhao1/swarm2020:pc. Due to the limitation of the TensorRT engine adopted in the frontend to accelerate the CNNs, the docker image is restricted to running on an RTX 3080 or a similar graphics card. We are working on migrating the CNN inference frontend to onnxruntime, along with a buildable Dockerfile, which will be released soon.

## LICENSE

GPLv3
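A sketch of pulling and running the evaluation image described above. Only the image tag `xuhao1/swarm2020:pc` comes from the text; the `--gpus all` flag (which requires Docker 19.03+ with the NVIDIA Container Toolkit for GPU access) and the interactive flags are assumptions, so adjust them to your setup.

```shell
IMAGE=xuhao1/swarm2020:pc
# --gpus all assumes the NVIDIA Container Toolkit is installed on the host.
RUN_CMD="docker run --rm -it --gpus all $IMAGE"
echo "$RUN_CMD"
# Uncomment to actually pull and run the image:
# docker pull "$IMAGE"
# $RUN_CMD
```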