# S3E: A Large-scale Multimodal Dataset for Collaborative SLAM

**Authors:** Dapeng Feng, Yuhua Qi*, Shipeng Zhong, Zhiqiang Chen, Yudu Jiao, Qiming Chen, Tao Jiang, Hongbo Chen

**License:** Apache-2.0

https://user-images.githubusercontent.com/18318646/197722969-0aaf8670-0783-48bb-a05d-d9056aae626c.mp4

The HD video is available on [Bilibili](https://www.bilibili.com/video/BV1Ze41137kx/?vd_source=78d041dc03a4aac231b5cac62feffc70).

## Abstract:

With the growing demand for teams of robots to perform tasks collaboratively, the research community has become increasingly interested in collaborative simultaneous localization and mapping (C-SLAM). Unfortunately, existing datasets are limited in the scale and variation of the collaborative trajectories they capture, even though generalization across the trajectories of different agents is crucial to the overall viability of collaborative tasks. To help align the research community's contributions with real-world multi-agent coordinated SLAM problems, we introduce S3E, a novel large-scale multimodal dataset captured by a fleet of unmanned ground vehicles along four designed collaborative trajectory paradigms. S3E consists of 7 outdoor and 5 indoor scenes, each exceeding 200 seconds, with well-synchronized and calibrated high-quality stereo camera, LiDAR, and high-frequency IMU data. Crucially, our effort exceeds previous attempts regarding dataset size, scene variability, and complexity. It has 4x the average recording time of the pioneering EuRoC dataset. We also provide careful dataset analysis as well as baselines for collaborative SLAM and its single-robot counterparts.

## Main Contributions:

- We collected a large-scale C-SLAM dataset with three ground robots. On top of each robot, we mounted a 16-beam 3D laser scanner, two high-resolution color cameras, a 9-axis IMU, and a dual-antenna RTK receiver.
- We recorded long-term sequences based on four well-designed trajectory categories covering different intra-/inter-robot loop closure situations. To our knowledge, it is the first C-SLAM dataset with LiDAR, visual, and inertial data in various indoor and outdoor environments.
- We evaluated state-of-the-art C-SLAM methods and their single-robot counterparts using different sensor recordings and proposed a comprehensive benchmark to analyze the characteristics of different methods.

## Sensors
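Each robot carries a 16-beam 3D LiDAR, a stereo pair of high-resolution color cameras, a 9-axis IMU, and a dual-antenna RTK receiver. As a minimal sketch of how one might iterate over a recorded sequence, assuming the data is distributed as ROS 1 bags (the topic names below are hypothetical placeholders, not necessarily the dataset's actual topics):

```python
# Minimal sketch: walk through one recorded sequence and print a
# per-topic message count, e.g. to sanity-check stream rates.
# Assumes ROS 1 bag distribution; topic names are placeholders.
from collections import Counter

import rosbag

TOPICS = [
    "/left_camera/image",   # placeholder: left stereo camera
    "/right_camera/image",  # placeholder: right stereo camera
    "/velodyne_points",     # placeholder: 16-beam LiDAR point clouds
    "/imu/data",            # placeholder: 9-axis IMU stream
    "/gnss/fix",            # placeholder: RTK/GNSS ground-truth fixes
]

counts = Counter()
with rosbag.Bag("sequence.bag") as bag:
    for topic, msg, t in bag.read_messages(topics=TOPICS):
        counts[topic] += 1

for topic, n in counts.items():
    print(f"{topic}: {n} messages")
```

Dividing each count by the bag duration gives the approximate rate of each stream, which is a quick way to verify that the camera, LiDAR, and IMU recordings are complete before running a SLAM baseline on them.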