NOTE: This is a community fork of xdspacelab/openvslam. It was created to continue active development of OpenVSLAM.
OpenVSLAM is a monocular, stereo, and RGBD visual SLAM system.
OpenVSLAM is based on an indirect SLAM algorithm with sparse features, like ORB-SLAM, ProSLAM, and UcoSLAM. One of its noteworthy features is that the system can deal with various types of camera models, such as perspective, fisheye, and equirectangular. If needed, users can implement extra camera models (e.g. dual fisheye, catadioptric) with ease. For example, visual SLAM with equirectangular camera models (e.g. the RICOH THETA series, insta360 series, etc.) is shown above.
Some code snippets to understand the core functionalities of the system are provided. You can use these snippets in your own programs. Please see the `*.cc` files in the `./example` directory or check Simple Tutorial and Example.
We provide documentation for installation and tutorials. The repository for the ROS wrapper is openvslam_ros.
Visual SLAM is regarded as a next-generation technology for supporting industries such as automotive, robotics, and xR. We released OpenVSLAM as an open-source project with the aim of collaborating with people around the world to accelerate the development of this field. In return, we hope this project will bring safe and reliable technologies for a better society.
Please see Installation chapter in the documentation.
The instructions for Docker users are also provided.
Please see Simple Tutorial chapter in the documentation.
A sample ORB vocabulary file can be downloaded from here. Sample datasets are also provided here.
If you would like to run visual SLAM with standard benchmarking datasets (e.g. KITTI Odometry dataset), please see SLAM with standard datasets section in the documentation.
Please contact us via GitHub Discussions if you have any questions or find any bugs in the software.
Feedback, feature requests, and contributions are welcome!
2-clause BSD license (see LICENSE)
The following files are derived from third-party libraries.
- `./3rd/json` : nlohmann/json [v3.6.1] (MIT license)
- `./3rd/popl` : badaix/popl [v1.2.0] (MIT license)
- `./3rd/spdlog` : gabime/spdlog [v1.3.1] (MIT license)
- `./src/openvslam/solver/pnp_solver.cc` : part of laurentkneip/opengv (3-clause BSD license)
- `./src/openvslam/feature/orb_extractor.cc` : part of opencv/opencv (3-clause BSD license)
- `./src/openvslam/feature/orb_point_pairs.h` : part of opencv/opencv (3-clause BSD license)
- `./viewer/public/js/lib/dat.gui.min.js` : dataarts/dat.gui (Apache License 2.0)
- `./viewer/public/js/lib/protobuf.min.js` : protobufjs/protobuf.js (3-clause BSD license)
- `./viewer/public/js/lib/stats.min.js` : mrdoob/stats.js (MIT license)
- `./viewer/public/js/lib/three.min.js` : mrdoob/three.js (MIT license)

Please link g2o as a dynamic library, because the `csparse_extension` module of g2o is licensed under LGPLv3+.
OpenVSLAM won first place at ACM Multimedia 2019 Open Source Software Competition.
If OpenVSLAM helps your research, please cite the paper for OpenVSLAM. Here is a BibTeX entry:
@inproceedings{openvslam2019,
author = {Sumikura, Shinya and Shibuya, Mikiya and Sakurada, Ken},
title = {{OpenVSLAM: A Versatile Visual SLAM Framework}},
booktitle = {Proceedings of the 27th ACM International Conference on Multimedia},
series = {MM '19},
year = {2019},
isbn = {978-1-4503-6889-6},
location = {Nice, France},
pages = {2292--2295},
numpages = {4},
url = {http://doi.acm.org/10.1145/3343031.3350539},
doi = {10.1145/3343031.3350539},
acmid = {3350539},
publisher = {ACM},
address = {New York, NY, USA}
}
The preprint can be found here.