These are packages for using Intel RealSense cameras (D400 series, SR300 camera and T265 Tracking Module) with ROS.
This version supports Kinetic, Melodic and Noetic distributions.
For running in ROS2 environment please switch to the ros2 branch.
LibRealSense2 supported version: v2.48.0 (see realsense2_camera release notes)
Ubuntu
realsense2_camera is available as a Debian package of the ROS distribution. It can be installed by typing:
sudo apt-get install ros-$ROS_DISTRO-realsense2-camera
This will install both realsense2_camera and its dependencies, including the librealsense2 library and matching udev rules.
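To verify the installation, you can, for example, list the installed RealSense-related Debian packages (a quick sanity check, not part of the official instructions):
dpkg -l | grep realsense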
Notice: realsense2_description, which contains the 3D models of the devices (see the Packages section below), is packaged separately and can be installed by typing:
sudo apt-get install ros-$ROS_DISTRO-realsense2-description
Windows
Chocolatey distribution: coming soon.
This option is demonstrated in the .travis.yml file. It basically summarizes the elaborate instructions into the following 2 steps:
Ubuntu
Install librealsense2 debian package:
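For example, assuming Intel's apt server has already been registered as described in the librealsense Linux installation guide, the following installs the runtime, utilities and development headers (package names taken from that guide):
sudo apt-get install librealsense2-dkms librealsense2-utils librealsense2-dev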
Windows
Install librealsense2 using vcpkg:
`vcpkg install realsense2:x64-windows`
mkdir -p ~/catkin_ws/src
cd ~/catkin_ws/src/
Windows
mkdir c:\catkin_ws\src
cd c:\catkin_ws\src
git clone https://github.com/IntelRealSense/realsense-ros.git
cd realsense-ros/
git checkout `git tag | sort -V | grep -P "^2.\d+\.\d+" | tail -1`
cd ..
catkin_init_workspace
cd ..
catkin_make clean
catkin_make -DCATKIN_ENABLE_TESTING=False -DCMAKE_BUILD_TYPE=Release
catkin_make install
Ubuntu
echo "source ~/catkin_ws/devel/setup.bash" >> ~/.bashrc
source ~/.bashrc
Windows
devel\setup.bat
To start the camera node in ROS:
roslaunch realsense2_camera rs_camera.launch
This will stream all camera sensors and publish on the appropriate ROS topics.
Other stream resolutions and frame rates can optionally be provided as parameters to the 'rs_camera.launch' file.
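For example, a possible invocation using the <stream_type>_width/_height/_fps parameters described in the parameters section below (the chosen profiles must be supported by the device):
roslaunch realsense2_camera rs_camera.launch depth_width:=640 depth_height:=480 depth_fps:=30 color_width:=1280 color_height:=720 color_fps:=30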
The published topics differ according to the device and parameters.
After running the above command with a D435i attached, the following list of topics will be available (this is a partial list; for the full one, type rostopic list):
Using an L515 device, the list differs a little: a 4-bit confidence grade is added (published as a mono8 image):
- /camera/confidence/camera_info
- /camera/confidence/image_rect_raw
It also replaces the 2 infrared topics with the single available one:
- /camera/infra/camera_info
- /camera/infra/image_raw
The "/camera" prefix is the default and can be changed. Check the rs_multiple_devices.launch file for an example. If using D435 or D415, the gyro and accel topics wont be available. Likewise, other topics will be available when using T265 (see below).
The following services can be used to reset the device or to toggle its sensors off:
rosservice call /camera/realsense2_camera/reset
rosservice call /camera/enable false
The following parameters are available by the wrapper:
serial_no: will attach to the device with the given serial number. By default, attaches to an available RealSense device at random.
usb_port_id: will attach to the device at the given USB port, e.g. 4-1, 4-2, etc. By default, the USB port is ignored when choosing a device.
device_type: will attach to a device whose name includes the given device_type regular-expression pattern. By default, the device type is ignored. For example, device_type:=d435 will match both d435 and d435i, while device_type:=d435(?!i) will match d435 but not d435i.
rosbag_filename: will publish topics from the given rosbag file instead of from a live device.
initial_reset: on occasion the device is not closed properly and, due to firmware issues, needs to be reset. If set to true, the device will reset prior to usage.
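For example, a sketch combining the parameters above to attach to any D435-family device and reset it on startup:
roslaunch realsense2_camera rs_camera.launch device_type:=d435 initial_reset:=true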
align_depth: if set to true, will publish additional topics for the "aligned depth to color" image: /camera/aligned_depth_to_color/image_raw and /camera/aligned_depth_to_color/camera_info. The pointcloud, if enabled, will be built based on the aligned_depth_to_color image.
filters: any of the following options, separated by commas:
- colorizer: will color the depth image. On the depth topic, an RGB image will be published instead of the 16-bit depth values.
- pointcloud: will add a pointcloud topic, /camera/depth/color/points. The texture of the pointcloud can be modified using the pointcloud_texture_stream and pointcloud_texture_index parameters. Run rqt_reconfigure to see the available values for these parameters. To include points that have no texture, set allow_no_texture_points to true. To publish an ordered pointcloud, set ordered_pc to true.
- hdr_merge: allows a depth image to be created by merging the information from 2 consecutive frames, taken with different exposure and gain values. The way to set exposure and gain values for each sequence at runtime is by first selecting the sequence id, using the rqt_reconfigure stereo_module/sequence_id parameter, and then modifying stereo_module/gain and stereo_module/exposure. To view the effect on the infrared image for each sequence id, use the sequence_id_filter/sequence_id parameter. To initialize these parameters at start time, use the following parameters: stereo_module/exposure/1, stereo_module/gain/1, stereo_module/exposure/2, stereo_module/gain/2.
* For an in-depth review of the subject, please read the accompanying white paper.
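For example, a sketch enabling the HDR merge filter together with the pointcloud filter (assuming a device and firmware that support HDR), after which the sequence parameters can be adjusted with rqt_reconfigure as described above:
roslaunch realsense2_camera rs_camera.launch filters:=hdr_merge,pointcloud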
The following filters have detailed descriptions in https://github.com/IntelRealSense/librealsense/blob/master/doc/post-processing-filters.md:
- disparity: convert depth to disparity before applying other filters, and back afterwards.
- spatial: filter the depth image spatially.
- temporal: filter the depth image temporally.
- hole_filling: apply a hole-filling filter.
- decimation: reduce depth scene complexity.
enable_sync: gathers the closest frames of different sensors (infrared, color and depth) to be sent with the same timetag. This happens automatically when filters such as pointcloud are enabled.
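For example, chaining several of these post-processing filters at launch:
roslaunch realsense2_camera rs_camera.launch filters:=disparity,spatial,temporal,decimation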
<stream_type>_width, <stream_type>_height, <stream_type>_fps: <stream_type> can be any of infra, color, fisheye, depth, gyro, accel, pose, confidence. Sets the required format of the device. If the specified combination of parameters is not available in the device, the stream will be replaced with the default for that stream. Setting a value to 0 will choose the first format in the internal list (i.e. consistent between runs but not defined). Note: for gyro, accel and pose, only the _fps option is meaningful.
enable_<stream_name>: chooses whether to enable the specified stream. Default is true for images and false for orientation streams. <stream_name> can be any of infra1, infra2, color, depth, fisheye, fisheye1, fisheye2, gyro, accel, pose, confidence.
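For example, a depth-only configuration that disables the color and infrared image streams:
roslaunch realsense2_camera rs_camera.launch enable_color:=false enable_infra1:=false enable_infra2:=false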
tf_prefix: by default, all frame ids have the same prefix: camera_. This parameter allows changing it per camera.
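For example, giving all of one camera's frames a distinct prefix (useful when running multiple cameras, as shown further below):
roslaunch realsense2_camera rs_camera.launch tf_prefix:=front_cam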
<stream_name>_frame_id, <stream_name>_optical_frame_id, aligned_depth_to_<stream_name>_frame_id: Specify the different frame_id for the different frames. Especially important when using multiple cameras.
base_frame_id: defines the frame_id that all static transformations refer to.
odom_frame_id: defines the origin coordinate system in ROS convention (X-Forward, Y-Left, Z-Up). The pose topic defines the pose relative to that system.
All the rest of the frame_ids can be found in the template launch file: nodelet.launch.xml
unite_imu_method: the D435i and T265 cameras have built-in IMU components which produce 2 unrelated streams, each with its own frequency: gyro, which shows angular velocity, and accel, which shows linear acceleration. By default, 2 corresponding topics are available, each with only the relevant fields of the sensor_msgs::Imu message filled out. Setting unite_imu_method creates a new topic, imu, that replaces the default gyro and accel topics. The imu topic is published at the rate of the gyro. All the fields of the Imu message under the imu topic are filled out.
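For example, assuming linear_interpolation is among the method values accepted by the launch file:
roslaunch realsense2_camera rs_camera.launch unite_imu_method:=linear_interpolation enable_gyro:=true enable_accel:=true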
clip_distance: removes from the depth image all values above a given value (meters). Disabled by giving a negative value (default).
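For example, discarding depth readings beyond 3 meters:
roslaunch realsense2_camera rs_camera.launch clip_distance:=3.0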
linear_accel_cov, angular_velocity_cov: set the variance given to the IMU readings. For the T265, these values are modified by the device's inner confidence value.
hold_back_imu_for_frames: image processing takes time. Therefore, there is a time gap between the moment an image arrives at the wrapper and the moment it is published to the ROS environment. During this time, IMU messages keep arriving, and a situation is created where an image with an earlier timestamp is published after an IMU message with a later timestamp. If that is a problem, setting hold_back_imu_for_frames to true will hold the IMU messages back while processing the images and then publish them all in a burst, thus keeping the order of publication the same as the order of arrival. Note that in either case, the timestamp in each message's header reflects the time of its origin.
topic_odom_in: For T265, add wheel odometry information through this topic. The code refers only to the twist.linear field in the message.
calib_odom_file: For the T265 to include odometry input, it must be given a configuration file. Explanations can be found here. The calibration is done in ROS coordinates system.
publish_tf: boolean, whether to publish TF at all. Defaults to True.
tf_publish_rate: double; positive values mean dynamic transform publication at the specified rate, all other values mean static transform publication. Defaults to 0.
publish_odom_tf: if True (default), publish the TF from odom_frame to pose_frame.
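For example, publishing dynamic transforms at 10 Hz:
roslaunch realsense2_camera rs_camera.launch tf_publish_rate:=10.0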
infra_rgb: when set to True (default: False), configures the infrared camera to stream in RGB (color) mode, thus enabling the use of an RGB image in the same frame as the depth image, potentially avoiding frame-transformation-related errors. When this feature is required, you must also set enable_infra:=true for the infrared stream to be enabled.
- Note: the infrared stream is enabled by enable_infra:=true, NOT enable_infra1:=true nor enable_infra2:=true.
- Note: enable_infra is independent of enable_depth.
- Note: this feature is only supported by RealSense devices whose infra cameras provide an RGB stream, which can be checked by observing the output of rs-enumerate-devices.
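For example, enabling the infrared camera in RGB mode (parameter names as described above; supported devices only):
roslaunch realsense2_camera rs_camera.launch infra_rgb:=true enable_infra:=true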
Here is an example of how to start the camera node and make it publish the point cloud using the pointcloud option.
roslaunch realsense2_camera rs_camera.launch filters:=pointcloud
Then open rviz to watch the pointcloud:
Here is an example of how to start the camera node and make it publish the aligned depth stream to other available streams such as color or infra-red.
roslaunch realsense2_camera rs_camera.launch align_depth:=true
The following command allows changing camera control values using rqt_reconfigure (http://wiki.ros.org/rqt_reconfigure):
rosrun rqt_reconfigure rqt_reconfigure
Important Notice: Launching multiple T265 cameras is currently not supported. This will be addressed in a later version.
Here is an example of how to start the camera node and stream from two cameras using rs_multiple_devices.launch:
roslaunch realsense2_camera rs_multiple_devices.launch serial_no_camera1:=<serial number of the first camera> serial_no_camera2:=<serial number of the second camera>
The camera serial numbers should be provided to the serial_no_camera1 and serial_no_camera2 parameters. One way to get a serial number is with the rs-enumerate-devices tool:
rs-enumerate-devices | grep Serial
Another way of obtaining the serial number is to connect the camera alone, run
roslaunch realsense2_camera rs_camera.launch
and look for the serial number in the log printed to the screen under "[INFO][...]Device Serial No:".
Another way to use multiple cameras is to run each from a different terminal. Make sure you set a different namespace for each camera using the "camera" argument:
roslaunch realsense2_camera rs_camera.launch camera:=cam_1 serial_no:=<serial number of the first camera>
roslaunch realsense2_camera rs_camera.launch camera:=cam_2 serial_no:=<serial number of the second camera>
...
To start the camera node in ROS:
roslaunch realsense2_camera rs_t265.launch
This will stream all camera sensors and publish on the appropriate ROS topics.
The T265 sets its USB unique ID during initialization, and without this parameter it won't be found. Once running, it will publish, among others, the following topics:
- /camera/odom/sample
- /camera/accel/sample
- /camera/gyro/sample
- /camera/fisheye1/image_raw
- /camera/fisheye2/image_raw
To visualize the pose output and frames in RViz, start:
roslaunch realsense2_camera demo_t265.launch
The wrapper publishes static transformations (TFs). The frame ids are divided into 3 groups:
For viewing included models, a separate package is included. For example:
roslaunch realsense2_description view_d415_model.launch
Unit-tests are based on bag files saved on an S3 server. These can be downloaded using the following commands:
cd catkin_ws
wget "https://librealsense.intel.com/rs-tests/TestData/outdoors.bag" -P "records/"
wget "https://librealsense.intel.com/rs-tests/D435i_Depth_and_IMU_Stands_still.bag" -P "records/"
Then, unit-tests can be run using the following command (use either python or python3):
python src/realsense/realsense2_camera/scripts/rs2_test.py --all
Title | Links |
---|---|
ROS Object Analytics | github / ROS Wiki |
Copyright 2018 Intel Corporation
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this project except in compliance with the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.
*Other names and brands may be claimed as the property of others.