# open_vins_cius
**Repository Path**: myboyhood/open_vins_cius
## Basic Information
- **Project Name**: open_vins_cius
- **Description**: No description available
- **Primary Language**: Unknown
- **License**: GPL-3.0
- **Default Branch**: super_v2.7
- **Homepage**: None
- **GVP Project**: No
## Statistics
- **Stars**: 3
- **Forks**: 1
- **Created**: 2021-09-16
- **Last Updated**: 2026-05-03
## Categories & Tags
**Categories**: Uncategorized
**Tags**: None
## README
# OpenVINS_Dynamic
## Framework


## Features:
* Uses [scnet in mmdetection](https://github.com/open-mmlab/mmdetection) to segment dynamic objects.
* The front_end abandons the original KLT approach and instead adopts descriptor matching with SuperPoint and SuperGlue, both accelerated by TensorRT.
* The back_end adopts OpenVINS.
## Demo
### **(Static Env)** OpenVINS + SuperPoint + SuperGlue in EUROC_MAV_dataset V1_01_Easy
---
### **(Dynamic Env)** OpenVINS + SuperPoint + SuperGlue in Kaist Urban32
---
### **(Dynamic Env)** OpenVINS + SuperPoint + SuperGlue + Segment in Kaist Urban32
---
### **(Dynamic Env)** OpenVINS + SuperPoint + SuperGlue + Segment in Kaist Urban32 (align the seg_mask and image)
---
## Usage
- **Run OpenVINS with the original KLT front_end**
To use KLT as the front_end, set `use_klt: true # origin is true` in `estimator_config.yaml`. Because the frame-to-frame disparity is small with KLT, `init_max_disparity: 1.5` is sufficient.
```sh
roslaunch ov_msckf subscribe.launch
```
- **Run OpenVINS with the SuperPoint and SuperGlue front_end**
To use Super as the front_end, set `set(USE_SUPER true)` in the `CMakeLists.txt` of both `./ov_core` and `./ov_msckf`. In addition, change `use_klt: false # origin is true` in `./config/kaist/estimator_config.yaml`. Setting `init_max_disparity: 7.5` is also required, since with SuperPoint and SuperGlue the disparity between the last and current frame is large; raising `init_max_disparity` helps the VIO system initialize (a rough sketch of this disparity quantity follows the command below). It is best to initialize while the system is static and let it start moving later.
Then we run the code:
```sh
roslaunch ov_msckf subscribe.launch
```
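For intuition, `init_max_disparity` bounds (roughly) the average pixel displacement of features matched between the last and current frame, which is why the sparser Super matches need a larger threshold than KLT. A minimal sketch of that quantity, assuming equal-length lists of matched points (illustrative only, not the exact check OpenVINS performs):
```cpp
#include <cmath>
#include <vector>
#include <opencv2/core.hpp>

// Sketch: average pixel displacement ("disparity") of features matched
// between the last and current frame. init_max_disparity thresholds a
// quantity like this during initialization (illustrative only; the exact
// check lives inside OpenVINS).
double average_disparity(const std::vector<cv::Point2f> &last_pts,
                         const std::vector<cv::Point2f> &curr_pts) {
  if (last_pts.empty() || last_pts.size() != curr_pts.size())
    return 0.0;
  double sum = 0.0;
  for (std::size_t i = 0; i < last_pts.size(); i++)
    sum += std::hypot(curr_pts[i].x - last_pts[i].x,
                      curr_pts[i].y - last_pts[i].y);
  return sum / static_cast<double>(last_pts.size());
}
```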
- **Run OpenVINS with the SuperPoint and SuperGlue front_end plus segmentation**
First, install the `seg_ros` package in your ROS workspace:
```sh
git clone https://gitee.com/myboyhood/seg_ros.git
```
After `catkin_make`, keypoints inside the segmentation mask can be excluded by setting `use_mask: true` in `./config/kaist/estimator_config.yaml`. Then run:
```sh
python seg_ros/scripts/seg_ros_node_all_in_one.py # run the segment code
roslaunch ov_msckf subscribe.launch # run OpenVINS
```
seg_ros publishes the rostopics `/seg_mask_left`, `/seg_mask_right`, `/seg_each_mask_left`, and `/seg_each_mask_right` to OpenVINS.
- **Visualize the result**
```sh
rviz -d ~/vins_ws/src/open_vins_cius/ov_msckf/launch/display_seg.rviz
```
Note: RViz must subscribe to the visualization topics for them to be published, because the code in `ROS1Visualizer.cpp` skips publishing a topic when it has no subscribers:
```cpp
// Check if we have subscribers
if (it_pub_tracks.getNumSubscribers() == 0)
return;
```
- **The parameters of SuperPoint and SuperGlue**
The SuperPoint and SuperGlue parameters live in `/ov_core/config/config_zph.yaml` and can be adjusted there. The engine files are in `/ov_core/weights/`. If a dataset uses a different image size, update `/ov_core/config/config_zph.yaml` to the correct image size and regenerate the engine file. The TensorRT helper files are in `/ov_core/3rdparty/tensorrtbuffer/`.
---
## News / Events
* **August 31, 2023** - In `ROS1Visualizer.cpp` we publish each mask region and the kpt_mask on it. We also publish the `kpt_last` and `kpt_curr` in the mask region, to help developers further estimate the trajectory and velocity of the masked object. We wrote `/ov_msckf/src/test/test_mask_kpts_rostopic.cpp` to receive the custom rostopic `/ov_msckf/mask_group` and show the mask region and keypoints. It works! (A generic probe for this custom topic is sketched after this list.)

* **August 30, 2023** - Using SuperPoint+SuperGlue as the frontend, with segmentation mask (scnet): we first run SuperGlue on all SuperPoints in the current and last frames, then classify the current SuperPoints into `kpt_mask` and `kpt_non_mask`. We construct a `database` to manage `kpt_non_mask` and feed those points to OpenVINS; in addition, we construct a `database_mask` to manage `kpt_mask`. (A classification sketch follows this list.)

* **August 29, 2023** - Using SuperPoint+SuperGlue as the frontend, with segmentation mask (scnet): we first classified the detected SuperPoints into `kpt_mask` and `kpt_non_mask`, then performed SuperGlue with only `kpt_non_mask`. The problem is that very few matches remain between the current and last frame. The `Active Tracks` view shows the SuperPoints detected in the non-mask region of the current frame; the `Track History` view shows the matches between the current and last frame, and we cannot observe any matches. The `seg left` and `seg right` views are produced by adding left_mask onto left_image and right_mask onto right_image. If we lazily apply left_mask to both left_image and right_image, the mask region on the right image cannot precisely describe the contours of the car, and the effect is quite poor.

* **August 29, 2023** - Using SuperPoint+SuperGlue as the frontend, without segmentation mask (scnet): OpenVINS runs as shown in the demo above.
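Since `/ov_msckf/mask_group` carries a custom message type defined in this repository, a generic ROS1 probe can confirm the exact type before writing a typed receiver like `test_mask_kpts_rostopic.cpp`. A minimal sketch using `topic_tools::ShapeShifter` (the node name is made up; this is not code from this repo):
```cpp
#include <ros/ros.h>
#include <topic_tools/shape_shifter.h>

// Print the message type and md5 actually published on the custom topic,
// without needing the message headers at compile time.
void on_msg(const topic_tools::ShapeShifter::ConstPtr &msg) {
  ROS_INFO_STREAM("type: " << msg->getDataType() << ", md5: " << msg->getMD5Sum());
}

int main(int argc, char **argv) {
  ros::init(argc, argv, "mask_group_probe"); // hypothetical node name
  ros::NodeHandle nh;
  ros::Subscriber sub = nh.subscribe("/ov_msckf/mask_group", 1, on_msg);
  ros::spin();
  return 0;
}
```
The mask/non-mask classification described in the August 30 entry amounts to a per-keypoint lookup in the segmentation mask. A minimal sketch, assuming a single-channel mask where nonzero pixels mark dynamic objects (the function name is illustrative, not the repository's API):
```cpp
#include <vector>
#include <opencv2/core.hpp>

// Split keypoints into those on a dynamic object (kpt_mask) and the rest
// (kpt_non_mask) by sampling the segmentation mask at each keypoint.
void split_by_mask(const std::vector<cv::Point2f> &kpts, const cv::Mat &mask,
                   std::vector<cv::Point2f> &kpt_mask,
                   std::vector<cv::Point2f> &kpt_non_mask) {
  for (const cv::Point2f &pt : kpts) {
    int r = cvRound(pt.y), c = cvRound(pt.x);
    bool in_mask = r >= 0 && r < mask.rows && c >= 0 && c < mask.cols &&
                   mask.at<uchar>(r, c) > 0;
    (in_mask ? kpt_mask : kpt_non_mask).push_back(pt);
  }
}
```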

---
## KAIST Dataset
- The images in the KAIST dataset are in Bayer format, but cv_bridge with `encoding::MONO8` can still convert the `sensor_msgs::Image` to a `cv::Mat` (see the sketch after this list).
- The parameters should be adjusted for this dataset.
- Line 79 in `ov_init/src/dynamic/DynamicInitializer.cpp` reads `if (_db->get_internal_data().size() < 0.3 * params.init_max_features)`. A factor of 0.3 is better, so that the car's start of driving is detected more sensitively.
- We should set `init_dyn_min_deg: 0.3 # traj is mostly straight line motion origin is 5.0` in `estimator_config.yaml` to make initialization more sensitive.
- The rosbag should start while the car is static, with motion beginning later. For example: `rosbag play -r 0.3 -s 68 Urban32`.
- The Urban32 rosbag can be downloaded from [BaiduNetDisk](https://pan.baidu.com/disk/main#/index?category=all&path=%2F%E7%8E%8B%E8%B5%B5%E6%98%A0%2Frosbag%2Fkaist). You can try `rosbag play /bag/merged.bag`; the bag contains only the `/stereo/left/image_raw`, `/stereo/right/image_raw`, `/imu/data_raw`, and `/gps/fix` topics.
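As a minimal sketch of the Bayer-to-mono conversion mentioned above (standard `cv_bridge` usage; `cv_bridge` performs the debayering internally when asked for `MONO8`):
```cpp
#include <cv_bridge/cv_bridge.h>
#include <opencv2/core.hpp>
#include <sensor_msgs/Image.h>
#include <sensor_msgs/image_encodings.h>

// Convert a Bayer-encoded sensor_msgs::Image into a grayscale cv::Mat;
// cv_bridge handles the Bayer -> mono conversion itself.
cv::Mat to_mono8(const sensor_msgs::ImageConstPtr &msg) {
  cv_bridge::CvImageConstPtr cv_img =
      cv_bridge::toCvCopy(msg, sensor_msgs::image_encodings::MONO8);
  return cv_img->image;
}
```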
---
Welcome to the OpenVINS project!
The OpenVINS project houses some core computer vision code along with a state-of-the-art filter-based visual-inertial
estimator. The core filter is an [Extended Kalman filter](https://en.wikipedia.org/wiki/Extended_Kalman_filter) which
fuses inertial information with sparse visual feature tracks. These visual feature tracks are fused leveraging
the [Multi-State Constraint Kalman Filter (MSCKF)](https://ieeexplore.ieee.org/document/4209642) sliding window
formulation which allows for 3D features to update the state estimate without directly estimating the feature states in
the filter. Inspired by graph-based optimization systems, the included filter has modularity allowing for convenient
covariance management with a proper type-based state system. Please take a look at the feature list below for full
details on what the system supports.
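For intuition, the MSCKF trick can be summarized in two lines (standard textbook notation, shown here as a sketch rather than the exact symbols used in the OpenVINS derivations):
```latex
% Linearized visual residual, with the feature error \tilde{f} NOT in the state:
%   r \approx H_x \tilde{x} + H_f \tilde{f} + n
% Choose N whose columns span the left null space of H_f, i.e. N^T H_f = 0.
% Projecting the residual removes the feature error entirely:
r \approx H_x \tilde{x} + H_f \tilde{f} + n, \qquad
N^\top H_f = 0 \;\Rightarrow\; N^\top r \approx (N^\top H_x)\,\tilde{x} + N^\top n
% so the EKF update can use the feature track without estimating the feature.
```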
* Github project page - https://github.com/rpng/open_vins
* Documentation - https://docs.openvins.com/
* Getting started guide - https://docs.openvins.com/getting-started.html
* Publication reference - https://pgeneva.com/downloads/papers/Geneva2020ICRA.pdf
## News / Events
* **May 11, 2023** - Inertial intrinsic support released as part of v2.7 along with a few bug fixes and improvements to stereo KLT tracking. Please check out the [release page](https://github.com/rpng/open_vins/releases/tag/v2.7) for details.
* **April 15, 2023** - Minor update to v2.6.3 to support incremental feature triangulation of active features for downstream applications, faster zero-velocity update, small bug fixes, some example realsense configurations, and cached fast state prediction. Please check out the [release page](https://github.com/rpng/open_vins/releases/tag/v2.6.3) for details.
* **April 3, 2023** - We have released a monocular plane-aided VINS, termed [ov_plane](https://github.com/rpng/ov_plane), which leverages the OpenVINS project. Both now support the released [Indoor AR Table](https://github.com/rpng/ar_table_dataset) dataset.
* **July 14, 2022** - Improved feature extraction logic for >100hz tracking, some bug fixes and updated scripts. See v2.6.1 [PR#259](https://github.com/rpng/open_vins/pull/259) and v2.6.2 [PR#264](https://github.com/rpng/open_vins/pull/264).
* **March 14, 2022** - Initial dynamic initialization open sourcing, asynchronous subscription to inertial readings and publishing of odometry, support for lower frequency feature tracking. See v2.6 [PR#232](https://github.com/rpng/open_vins/pull/232) for details.
* **December 13, 2021** - New YAML configuration system, ROS2 support, Docker images, robust static initialization based on disparity, internal logging system to reduce verbosity, image transport publishers, dynamic number of features support, and other small fixes. See
v2.5 [PR#209](https://github.com/rpng/open_vins/pull/209) for details.
* **July 19, 2021** - Camera classes, masking support, alignment utility, and other small fixes. See
v2.4 [PR#186](https://github.com/rpng/open_vins/pull/186) for details.
* **December 1, 2020** - Released improved memory management, active feature pointcloud publishing, limiting number of
features in update to bound compute, and other small fixes. See
v2.3 [PR#117](https://github.com/rpng/open_vins/pull/117) for details.
* **November 18, 2020** - Released groundtruth generation utility package, [vicon2gt](https://github.com/rpng/vicon2gt)
to enable creation of groundtruth trajectories in a motion capture room for evaluating VIO methods.
* **July 7, 2020** - Released zero velocity update for vehicle applications and direct initialization when standing
still. See [PR#79](https://github.com/rpng/open_vins/pull/79) for details.
* **May 18, 2020** - Released secondary pose graph example
repository [ov_secondary](https://github.com/rpng/ov_secondary) based
on [VINS-Fusion](https://github.com/HKUST-Aerial-Robotics/VINS-Fusion). OpenVINS now publishes marginalized feature
track, feature 3d position, and first camera intrinsics and extrinsics.
See [PR#66](https://github.com/rpng/open_vins/pull/66) for details and discussion.
* **April 3, 2020** - Released [v2.0](https://github.com/rpng/open_vins/releases/tag/v2.0) update to the codebase with
some key refactoring, ros-free building, improved dataset support, and single inverse depth feature representation.
Please check out the [release page](https://github.com/rpng/open_vins/releases/tag/v2.0) for details.
* **January 21, 2020** - Our paper has been accepted for presentation in [ICRA 2020](https://www.icra2020.org/). We look
forward to seeing everybody there! We have also added links to a few videos of the system running on different
datasets.
* **October 23, 2019** - OpenVINS placed first in the [IROS 2019 FPV Drone Racing VIO Competition
](http://rpg.ifi.uzh.ch/uzh-fpv.html). We will be giving a short presentation at
the [workshop](https://wp.nyu.edu/workshopiros2019mav/) at 12:45pm in Macau on November 8th.
* **October 1, 2019** - We will be presenting at the [Visual-Inertial Navigation: Challenges and Applications
](http://udel.edu/~ghuang/iros19-vins-workshop/index.html) workshop at [IROS 2019](https://www.iros2019.org/). The
submitted workshop paper can be found at [this](http://udel.edu/~ghuang/iros19-vins-workshop/papers/06.pdf) link.
* **August 21, 2019** - Open sourced [ov_maplab](https://github.com/rpng/ov_maplab) for interfacing OpenVINS with
the [maplab](https://github.com/ethz-asl/maplab) library.
* **August 15, 2019** - Initial release of OpenVINS repository and documentation website!
## Project Features
* Sliding window visual-inertial MSCKF
* Modular covariance type system
* Comprehensive documentation and derivations
* Extendable visual-inertial simulator
* On manifold SE(3) b-spline
* Arbitrary number of cameras
* Arbitrary sensor rate
* Automatic feature generation
* Six different feature representations
1. Global XYZ
2. Global inverse depth
3. Anchored XYZ
4. Anchored inverse depth
5. Anchored MSCKF inverse depth
6. Anchored single inverse depth
* Calibration of sensor intrinsics and extrinsics
* Camera to IMU transform
* Camera to IMU time offset
* Camera intrinsics
* Inertial intrinsics (including g-sensitivity)
* Environmental SLAM feature
* OpenCV ARUCO tag SLAM features
* Sparse feature SLAM features
* Visual tracking support
* Monocular camera
* Stereo camera (synchronized)
* Binocular cameras (synchronized)
* KLT or descriptor based
* Masked tracking
* Static and dynamic state initialization
* Zero velocity detection and updates
* Out of the box evaluation on EuRocMav, TUM-VI, UZH-FPV, KAIST Urban and other VIO datasets
* Extensive evaluation suite (ATE, RPE, NEES, RMSE, etc..)
## Codebase Extensions
* **[ov_plane](https://github.com/rpng/ov_plane)** - A real-time monocular visual-inertial odometry (VIO) system which leverages
environmental planes. At the core it presents an efficient robust monocular-based plane detection algorithm which does
not require additional sensing modalities such as a stereo, depth camera or neural network. The plane detection and tracking
algorithm enables real-time regularization of point features to environmental planes which are either maintained in the state
vector as long-lived planes, or marginalized for efficiency. Planar regularities are applied to both in-state SLAM and
out-of-state MSCKF point features, enabling long-term point-to-plane loop-closures due to the large spatial volume of planes.
* **[vicon2gt](https://github.com/rpng/vicon2gt)** - This utility was created to generate groundtruth trajectories using
a motion capture system (e.g. Vicon or OptiTrack) for use in evaluating visual-inertial estimation systems.
Specifically we calculate the inertial IMU state (full 15 dof) at camera frequency rate and generate a groundtruth
trajectory similar to those provided by the EurocMav datasets. It fuses inertial and motion capture
information and estimates all unknown spatial-temporal calibrations between the two sensors.
* **[ov_maplab](https://github.com/rpng/ov_maplab)** - This codebase contains the interface wrapper for exporting
visual-inertial runs from [OpenVINS](https://github.com/rpng/open_vins) into the ViMap structure taken
by [maplab](https://github.com/ethz-asl/maplab). The state estimates and raw images are appended to the ViMap as
OpenVINS runs through a dataset. After completion of the dataset, features are re-extracted and triangulated with
maplab's feature system. This can be used to merge multi-session maps, or to perform a batch optimization after first
running the data through OpenVINS. Some examples have been provided along with a helper script to export trajectories
into the standard groundtruth format.
* **[ov_secondary](https://github.com/rpng/ov_secondary)** - This is an example secondary thread which provides loop
closure in a loosely coupled manner for [OpenVINS](https://github.com/rpng/open_vins). This is a modification of the
code originally developed by the HKUST aerial robotics group and can be found in
their [VINS-Fusion](https://github.com/HKUST-Aerial-Robotics/VINS-Fusion) repository. Here we stress that this is a
loosely coupled method, thus no information is returned to the estimator to improve the underlying OpenVINS odometry.
This codebase has been modified in a few key areas including: exposing more loop closure parameters, subscribing to
camera intrinsics, simplifying configuration such that only topics need to be supplied, and some tweaks to the loop
closure detection to improve frequency.
## Demo Videos
## Credit / Licensing
This code was written by the [Robot Perception and Navigation Group (RPNG)](https://sites.udel.edu/robot/) at the
University of Delaware. If you have any issues with the code please open an issue on our github page with relevant
implementation details and references. For researchers that have leveraged or compared to this work, please cite the
following:
```txt
@Conference{Geneva2020ICRA,
Title = {{OpenVINS}: A Research Platform for Visual-Inertial Estimation},
Author = {Patrick Geneva and Kevin Eckenhoff and Woosik Lee and Yulin Yang and Guoquan Huang},
Booktitle = {Proc. of the IEEE International Conference on Robotics and Automation},
Year = {2020},
Address = {Paris, France},
Url = {\url{https://github.com/rpng/open_vins}}
}
```
The codebase and documentation is licensed under the [GNU General Public License v3 (GPL-3)](https://www.gnu.org/licenses/gpl-3.0.txt).
You must preserve the copyright and license notices in your derivative work and make available the complete source code with modifications under the same license ([see this](https://choosealicense.com/licenses/gpl-3.0/); this is not legal advice).