# LiDAR-Camera Calibration using 3D-3D Point correspondences

[Ankit Dhall](https://ankitdhall.github.io/ "Ankit Dhall"), [Kunal Chelani](http://www.chalmers.se/en/Staff/Pages/chelani.aspx "Kunal Chelani"), Vishnu Radhakrishnan, KM Krishna
---

## ROS package to calibrate a camera and a LiDAR

The package is used to calibrate a Velodyne LiDAR with a camera (works for both monocular and stereo). Specifically, a Point Grey Blackfly and a ZED camera have been successfully calibrated against a Velodyne VLP-16 using `lidar_camera_calibration`. Since the VLP-16 provides only 16 rings, we believe that the higher-end Velodyne models will also work well with this package. We show the accuracy of the proposed pipeline by fusing point clouds, with near perfection, from multiple cameras kept in various positions. See [Fusion using `lidar_camera_calibration`](#fusion-using-lidar_camera_calibration) for results of the point cloud fusion (videos).

The package finds a rotation and translation that transform all the points in the LiDAR frame to the (monocular) camera frame. Please see [Usage](#usage) for a video tutorial. The package uses `aruco_ros` and a slightly modified `aruco_mapping` as dependencies, both of which are available in the `dependencies` folder in this repository.

The `lidar_camera_calibration/pointcloud_fusion` directory provides a script to fuse point clouds obtained from two stereo cameras, both of which were extrinsically calibrated using a LiDAR and `lidar_camera_calibration`.

For more details please refer to our [paper](http://arxiv.org/abs/1705.09785).

### Citing `lidar_camera_calibration`

Please cite our work if `lidar_camera_calibration` and our approach help your research.

```
@article{2017arXiv170509785D,
  author        = {{Dhall}, A. and {Chelani}, K. and {Radhakrishnan}, V. and {Krishna}, K.~M.},
  title         = "{LiDAR-Camera Calibration using 3D-3D Point correspondences}",
  journal       = {ArXiv e-prints},
  archivePrefix = "arXiv",
  eprint        = {1705.09785},
  primaryClass  = "cs.RO",
  keywords      = {Computer Science - Robotics, Computer Science - Computer Vision and Pattern Recognition},
  year          = 2017,
  month         = may
}
```

## Contents

1. [Setup](#setup)
2. [Getting Started](#getting-started)
3. [Usage](#usage)
4. [Fusion using `lidar_camera_calibration`](#fusion-using-lidar_camera_calibration)
5. [Future Improvements](#future-improvements)

## Setup

Prerequisites:

* [ROS](http://www.ros.org/)
* [aruco_ros](https://github.com/pal-robotics/aruco_ros)
* a slightly modified [aruco_mapping](https://github.com/SmartRoboticSystems/aruco_mapping)
* ROS package for the camera and LiDAR you wish to calibrate

### Installation

Clone this repository to your machine. Put the cloned repository, `dependencies/aruco_ros` and `dependencies/aruco_mapping` folders in `path/to/your/ros/workspace/src` and run the following commands:

```bash
catkin_make -DCATKIN_WHITELIST_PACKAGES="aruco;aruco_ros;aruco_msgs"
catkin_make -DCATKIN_WHITELIST_PACKAGES="aruco_mapping;lidar_camera_calibration"
catkin_make -DCATKIN_WHITELIST_PACKAGES=""
```

Please note that `aruco_mapping` in the `dependencies` folder is a slightly modified version of the original [aruco_mapping](https://github.com/SmartRoboticSystems/aruco_mapping), so make sure to use the one provided here.

The aruco packages have to be installed before the main calibration package can be installed; installing everything together in one run can result in build errors, hence the separate `catkin_make` commands. Camera parameters will also be required by the package, so it is advised that you calibrate the camera beforehand.

## Getting Started
There are a couple of configuration files that need to be specified in order to calibrate the camera and the LiDAR. The config files are available in the `lidar_camera_calibration/conf` directory. The `find_transform.launch` file is available in the `lidar_camera_calibration/launch` directory.
### config_file.txt
>1280 720
>-2.5 2.5
>-4.0 4.0
>0.0 2.5
>0.05
>2
>0
>611.651245 0.0 642.388357 0.0
>0.0 688.443726 365.971718 0.0
>0.0 0.0 1.0 0.0
>1.57 -1.57 0.0
>0
The file contains specifications about the following:
>image_width image_height
>x- x+
>y- y+
>z- z+
>cloud_intensity_threshold
>number_of_markers
>use_camera_info_topic?
>fx 0 cx 0
>0 fy cy 0
>0 0 1 0
>MAX_ITERS
>initial_rot_x initial_rot_y initial_rot_z
>lidar_type
`x-` and `x+`, `y-` and `y+`, `z-` and `z+` are used to remove unwanted points in the cloud and are specified in meters. The filtered point cloud makes it easier to mark the board edges. The filtered point cloud contains all points
(x, y, z) such that,
x in [`x-`, `x+`]
y in [`y-`, `y+`]
z in [`z-`, `z+`]
The `cloud_intensity_threshold` is used to filter out points whose intensity is lower than the specified value. The default value of `0.05` works well in most cases. However, if points seem to be missing or sparse on the cardboard edges while marking, tweaking this value might help.
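For intuition, the combined effect of the `x`/`y`/`z` bounds and the `cloud_intensity_threshold` is a simple crop-and-threshold over the LiDAR cloud. The following is a minimal C++/PCL sketch of that filtering step; it is illustrative only and not the package's actual implementation (the function name `filterCloud` is made up here):

```cpp
#include <cstdint>
#include <pcl/point_cloud.h>
#include <pcl/point_types.h>

// Keep a point only if it lies inside the configured box and its intensity
// is at least the configured threshold (values taken from config_file.txt).
pcl::PointCloud<pcl::PointXYZI>::Ptr
filterCloud(const pcl::PointCloud<pcl::PointXYZI>& input,
            float x_min, float x_max,
            float y_min, float y_max,
            float z_min, float z_max,
            float intensity_threshold)
{
  pcl::PointCloud<pcl::PointXYZI>::Ptr filtered(new pcl::PointCloud<pcl::PointXYZI>);
  for (const auto& p : input.points)
  {
    const bool in_box = p.x >= x_min && p.x <= x_max &&
                        p.y >= y_min && p.y <= y_max &&
                        p.z >= z_min && p.z <= z_max;
    if (in_box && p.intensity >= intensity_threshold)
      filtered->points.push_back(p);
  }
  filtered->width  = static_cast<std::uint32_t>(filtered->points.size());
  filtered->height = 1;  // unorganized cloud
  return filtered;
}
```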
The `use_camera_info_topic?` field is a boolean flag and takes the values `1` or `0`. The `find_transform.launch` node uses camera parameters to process the points and display them for marking. If you wish to read the parameters from the `camera_info` topic, set this to `1`; otherwise the camera parameters explicitly provided in `config_file.txt` are used. (**Even though you can set it to `1` and use the `camera_info` topic, we strongly recommend setting it to `0` and using the calibration file, unless you are sure that the values on the `camera_info` topic are consistent with the calibration file, or differ from it only slightly; otherwise you will not get the result you want.**)
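For reference, the intrinsics in the matrix above (`fx`, `fy`, `cx`, `cy`) relate a 3D point expressed in the camera frame to a pixel. The sketch below shows the standard pinhole projection; it is generic textbook code, not taken from the package, and `projectToImage` is just an illustrative name:

```cpp
struct Pixel { double u; double v; };

// Project a 3D point (camera frame: x right, y down, z forward) to pixel
// coordinates using the pinhole model. Only valid for points with z > 0.
Pixel projectToImage(double x, double y, double z,
                     double fx, double fy, double cx, double cy)
{
  Pixel px;
  px.u = fx * x / z + cx;
  px.v = fy * y / z + cy;
  return px;
}

// e.g. with the sample intrinsics above:
// projectToImage(x, y, z, 611.651245, 688.443726, 642.388357, 365.971718);
```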
`MAX_ITERS` is the number of iterations you wish to run. The current pipeline assumes that the experimental setup is fixed: the boards are almost stationary and the camera and the LiDAR do not move. The node asks the user to mark the line segments (see the video tutorial in [Usage](#usage) on how to go about marking) only for the first iteration. Once the line segments for each board have been marked, the algorithm runs for `MAX_ITERS` iterations, collecting live data and producing n=`MAX_ITERS` sets of rotation and translation in the form of a 4x4 matrix. Since the marking is only done initially, the quadrilaterals should be drawn large enough that, if the boards move slightly in the iterations that follow (say, due to a gentle breeze), the edge points still fall inside their respective quadrilaterals. After running `MAX_ITERS` times, the node outputs an average translation vector (3x1) and an average rotation matrix (3x3). Averaging the translation vectors is trivial; the rotation matrices are converted to quaternions, averaged, and then converted back to a 3x3 rotation matrix.
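To make that averaging step concrete, here is a sketch of one common way to average rotations via quaternions using Eigen (sign-align, component-wise mean, normalize). It illustrates the idea described above but is not necessarily the package's exact implementation; `averageRotations` is an illustrative name:

```cpp
#include <vector>
#include <Eigen/Dense>
#include <Eigen/Geometry>

// Average a set of rotation matrices (assumed non-empty) by converting each
// to a quaternion, aligning signs, averaging, normalizing, and converting back.
Eigen::Matrix3d averageRotations(const std::vector<Eigen::Matrix3d>& rotations)
{
  Eigen::Vector4d sum = Eigen::Vector4d::Zero();
  const Eigen::Quaterniond reference(rotations.front());

  for (const Eigen::Matrix3d& R : rotations)
  {
    Eigen::Quaterniond q(R);
    // q and -q represent the same rotation; flip the sign so all quaternions
    // lie in the same hemisphere before averaging.
    if (q.dot(reference) < 0.0)
      q.coeffs() = -q.coeffs();
    sum += q.coeffs();
  }

  Eigen::Quaterniond mean;
  mean.coeffs() = sum / static_cast<double>(rotations.size());
  mean.normalize();
  return mean.toRotationMatrix();
}
```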
`initial_rot_x initial_rot_y initial_rot_z` is used to specify the initial orientation of the LiDAR with respect to the camera, in radians. The default values are for the case when both the LiDAR and the camera are pointing forward. The final transformation estimated by the package accounts for this initial rotation.
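If you need to reason about what these three angles do, the sketch below builds a rotation matrix from them with Eigen. The rotation order assumed here (about X, then Y, then Z) is only an assumption for illustration; consult the package source for the exact convention it uses:

```cpp
#include <Eigen/Dense>
#include <Eigen/Geometry>

// Illustrative only: compose the three initial_rot_* angles (radians) into a
// single rotation matrix, assuming X-then-Y-then-Z order.
Eigen::Matrix3d initialRotation(double rot_x, double rot_y, double rot_z)
{
  const Eigen::AngleAxisd Rx(rot_x, Eigen::Vector3d::UnitX());
  const Eigen::AngleAxisd Ry(rot_y, Eigen::Vector3d::UnitY());
  const Eigen::AngleAxisd Rz(rot_z, Eigen::Vector3d::UnitZ());
  return (Rz * Ry * Rx).toRotationMatrix();
}

// e.g. initialRotation(1.57, -1.57, 0.0) for the sample values shown above.
```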
`lidar_type` is used to specify the lidar type. `0` for Velodyne; `1` for Hesai-Pandar40P.
The Hesai driver by default **does not** publish wall time as time stamps. To solve this, modify the `lidarCallback` function in `/path/to/catkin_ws/src/HesaiLidar-ros/src/main.cc` so that the published cloud is stamped with wall time (`ros::Time::now()`). The snippet below sketches the modified callback; the exact surrounding code may differ slightly between driver versions:

```cpp
void lidarCallback(boost::shared_ptr<PPointCloud> cld, double timestamp)
{
    // Stamp the cloud with wall time instead of the sensor timestamp,
    // so it can be synchronized with the camera topics.
    pcl_conversions::toPCL(ros::Time::now(), cld->header.stamp);

    sensor_msgs::PointCloud2 output;
    pcl::toROSMsg(*cld, output);
    lidarPublisher.publish(output);
}
```
## Fusion using `lidar_camera_calibration`

The fused point cloud obtained when using manual measurements versus when using `lidar_camera_calibration` is shown in the video. Notice the large translation error, even when the two cameras are kept on a planar surface. Hallucinations of the markers, cupboards and the carton box (in the background) can be seen as a result of the two point clouds not being aligned properly.
On the other hand, the rotation and translation estimated by `lidar_camera_calibration` fuse the two individual point clouds almost perfectly. There is a very minute translation error (~1-2 cm) and almost no rotation error. The fused point cloud is aligned so well that one might believe it is a single point cloud, when it actually consists of two clouds fused using the extrinsic transformation between their sources (the stereo cameras).
The resultant fused point clouds from both manual and `lidar_camera_calibration` methods can be seen in [this video](https://youtu.be/AbjRDtHLdz0).
### Calibrating cameras kept at ~80 degrees
We also wanted to see the potential of this method, so we used it to calibrate cameras kept at ~80 degrees to each other with almost no overlapping field-of-view. In principle, with a properly designed experimental setup, our method can calibrate cameras with zero overlapping field-of-view.
However, to visualize the fusion, we needed some part of the scene to be common to both point clouds. We chose a large checkerboard visible in both FOVs, since it shows how well the point clouds have aligned, and, if the dimensions of the checkerboard squares are known, one can even estimate the translation error.
The setup for such an experiment looked something like this:
There is very little translation error, about 3-4 cm. Also, the ground planes align properly at all distances, near and far from the camera, implying that the estimated rotations are correct.
The resultant fused point clouds after extrinsic calibration of stereo cameras kept at ~80 degrees using `lidar_camera_calibration` can be seen in [this video](https://youtu.be/Om1SFPAZ5Lc).
We believe that better intrinsic calibration of the cameras can help drive the error down to about 1 centimeter or even less.
## Future improvements
- [x] iterative process with ~~weighted~~ average over multiple runs
- [ ] automate process of marking line-segments