# awesome-drone-papers

**Repository Path**: leonexu/awesome-drone-papers

## Basic Information

- **Project Name**: awesome-drone-papers
- **Description**: No description available
- **Primary Language**: Unknown
- **License**: Not specified
- **Default Branch**: main
- **Homepage**: None
- **GVP Project**: No

## Statistics

- **Stars**: 1
- **Forks**: 0
- **Created**: 2021-05-09
- **Last Updated**: 2021-05-25

## Categories & Tags

**Categories**: Uncategorized
**Tags**: None

## README

# Awesome Drone Research

A curated list of resources for drone-related research.

## Contents

- [Selected Papers](#selected-papers)
  - [Localization, Mapping and Replanning](#localization-mapping-and-replanning)
    - [State Estimation](#state-estimation)
    - [Dense Mapping](#dense-mapping)
    - [Trajectory Planner](#trajectory-planner)
  - [Drone-based Sensing](#drone-based-sensing)
    - [Object Detection](#object-detection)
    - [Object Tracking](#object-tracking)
    - [Radar](#radar)
    - [Others](#other-applications-in-vision)
- [Journal Papers](#journal-papers)
  - [T-RO and RA-L](#t-ro-and-ra-l)
  - [Others](#other-journals)
- [Conference Papers](#conference-papers)
  - 2021: [CVPR](#2021-cvpr), [Others](#2021-others)
  - 2020: [CVPR](#2020-cvpr), [ECCV](#2020-eccv), [IROS](#2020-iros), [ICRA](#2020-icra), [Others](#2020-others)
  - 2019: [CVPR](#2019-cvpr), [ICCV](#2019-iccv), [IROS](#2019-iros), [ICRA](#2019-icra), [Others](#2019-others)
  - 2018 and before: [CVPR](#cvpr), [ECCV](#eccv), [ICCV](#iccv), [IROS](#iros), [ICRA](#icra), [Others](#other-papers)
- [Datasets](#datasets)
- [Challenges](#challenges)
- [Other Related Resources](#other-related-resources)

\* indicates equal contribution

## Selected Papers

### Localization, Mapping and Replanning

#### State Estimation

##### • [2018 IROS] Online Temporal Calibration for Monocular Visual-Inertial Systems.
[\[PDF\]](https://ieeexplore.ieee.org/abstract/document/8593603) [\[Code\]](https://github.com/HKUST-Aerial-Robotics/VINS-Mono) (IROS 2018 Best Student Paper Award)

_Tong Qin, Shaojie Shen_

Appears to be an algorithm for estimating the vehicle's state: sensor fusion and calibration, with little AI content.

> Accurate state estimation is a fundamental module for various intelligent applications, such as robot navigation, autonomous driving, and virtual and augmented reality. Visual-inertial fusion has become a popular technology for 6-DOF state estimation in recent years. The time instants at which different sensors' measurements are recorded are of crucial importance to the system's robustness and accuracy. In practice, the timestamps of each sensor typically suffer from triggering and transmission delays, leading to temporal misalignment (time offsets) among different sensors. Such temporal offsets dramatically influence the performance of sensor fusion. To this end, we propose an online approach for calibrating the temporal offset between visual and inertial measurements. Our approach achieves temporal offset calibration by jointly optimizing the time offset, camera and IMU states, and feature locations in a SLAM system. Furthermore, the approach is a general model that can easily be employed in several feature-based optimization frameworks. Simulation and experimental results demonstrate the high accuracy of our calibration approach, even compared with state-of-the-art offline tools. The VIO comparison against other methods shows that online temporal calibration significantly benefits visual-inertial systems. The source code of the temporal calibration is integrated into our public project, VINS-Mono.

##### • [2018 ICRA] A Benchmark Comparison of Monocular Visual-Inertial Odometry Algorithms for Flying Robots.

[\[PDF\]](http://rpg.ifi.uzh.ch/docs/ICRA18_Delmerico.pdf)

_J. Delmerico, D. Scaramuzza_

"Visual-Inertial Odometry" (VIO) is a technique that estimates sensor state using one or more cameras together with one or more IMUs (Inertial Measurement Units). "State" here means physical quantities such as the pose and velocity of an agent (e.g., a drone) in particular degrees of freedom. Among the currently practical options for accurate state estimation, VIO is the only alternative to GPS-based and LiDAR-based odometry. Because cameras and IMUs are cheaper and lighter than other sensors, VIO is widely deployed on today's drones for state estimation. Little AI content so far.

##### • [2021 T-RO] Event-based Stereo Visual Odometry.

[\[PDF\]](https://arxiv.org/abs/2007.15548) [\[Code\]](https://github.com/HKUST-Aerial-Robotics/ESVO)

_Yi Zhou, Guillermo Gallego, Shaojie Shen_

In robotics and computer vision, visual odometry is the process of determining a robot's position and orientation by analyzing the associated camera images. Seems tied to chip performance. Vision-based rather than radar-based, so it may not fit my use case.

> Event-based cameras are bioinspired vision sensors whose pixels work independently from each other and respond asynchronously to brightness changes, with microsecond resolution. Their advantages make it possible to tackle challenging scenarios in robotics, such as high-speed and high-dynamic-range scenes. We present a solution to the problem of visual odometry from the data acquired by a stereo event-based camera rig. Our system follows a parallel tracking-and-mapping approach, where novel solutions to each subproblem (three-dimensional (3-D) reconstruction and camera pose estimation) are developed with two objectives in mind: being principled and efficient, for real-time operation with commodity hardware. To this end, we seek to maximize the spatio-temporal consistency of stereo event-based data while using a simple and efficient representation. Specifically, the mapping module builds a semidense 3-D map of the scene by fusing depth estimates from multiple viewpoints (obtained by spatio-temporal consistency) in a probabilistic fashion. The tracking module recovers the pose of the stereo rig by solving a registration problem that naturally arises due to the chosen map and event data representation. Experiments on publicly available datasets and on our own recordings demonstrate the versatility of the proposed method in natural scenes with general 6-DoF motion.
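As an aside on the "simple and efficient representation" mentioned above, event-camera data is often summarized per pixel as a time surface: an exponentially decayed map of the most recent event at each pixel. A minimal sketch, assuming a hypothetical list-of-tuples event format and made-up parameter values (this is not ESVO's actual code):

```python
import numpy as np

# Hypothetical event stream: (x, y, timestamp_seconds, polarity) tuples.
events = [(2, 3, 0.010, 1), (2, 3, 0.020, -1), (5, 1, 0.015, 1)]

def time_surface(events, width, height, t_ref, tau):
    """Build an exponentially decayed map of the latest event time per pixel.

    Pixels that fired recently (relative to t_ref) are close to 1.0;
    pixels that never fired decay to 0.0.
    """
    last_t = np.full((height, width), -np.inf)  # no events yet anywhere
    for x, y, t, _polarity in events:
        if t <= t_ref:  # only events up to the reference time
            last_t[y, x] = max(last_t[y, x], t)
    return np.exp((last_t - t_ref) / tau)

ts = time_surface(events, width=8, height=8, t_ref=0.02, tau=0.01)
```

Tracking and mapping pipelines can then treat such a surface like an ordinary grayscale image, e.g., for registration against a map.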
> The system successfully leverages the advantages of event-based cameras to perform visual odometry in challenging illumination conditions, such as low light and high dynamic range, while running in real time on a standard CPU. We release the software and dataset under an open-source license to foster research in the emerging topic of event-based simultaneous localization and mapping.

#### Dense Mapping

##### • [2019 ICRA] Real-time Scalable Dense Surfel Mapping.

[\[PDF\]](https://www.dropbox.com/s/h9bais2wnw1g9f0/root.pdf?dl=0) [\[Code\]](https://github.com/HKUST-Aerial-Robotics/DenseSurfelMapping)

_Kaixuan Wang, Fei Gao, and Shaojie Shen_

The authors propose a novel dense mapping system that scales well across different environments and requires only CPU computation. Using a sparse SLAM system to estimate camera poses, the proposed mapping system fuses intensity and depth images into a globally consistent model. Appears to be about 3-D reconstruction, so not very relevant to me, and not much AI.

##### • [2018 RA-L] maplab: An Open Framework for Research in Visual-inertial Mapping and Localization.

[\[PDF\]](https://arxiv.org/abs/1711.10250) [\[Code\]](https://github.com/ethz-asl/maplab)

_Thomas Schneider, Marcin Dymczyk, Marius Fehr, Kevin Egger, Simon Lynen, Igor Gilitschenski, Roland Siegwart_

#### Trajectory Planner

##### • Learning Pugachev's Cobra Maneuver for Tail-sitter UAVs Using Acceleration Model.

Mechanical/control work; little relation to AI.

##### • SwarmLab: A MATLAB Drone Swarm Simulator.

##### • [2020 T-RO] Teach-Repeat-Replan: A Complete and Robust System for Aggressive Flight in Complex Environments.

[\[PDF\]](https://ieeexplore.ieee.org/document/9102390) [\[Code\]](https://github.com/HKUST-Aerial-Robotics/Teach-Repeat-Replan) *

_Fei Gao, Luqi Wang, Boyu Zhou, Xin Zhou, Jie Pan, Shaojie Shen_

Converts human-taught trajectories into vehicle trajectories; similar to imitation learning?

##### • [2020 T-RO] RAPTOR: Robust and Perception-aware Trajectory Replanning for Quadrotor Fast Flight.

[\[PDF\]](https://arxiv.org/abs/2007.03465) [\[Code\]](https://github.com/HKUST-Aerial-Robotics/Fast-Planner)

_Boyu Zhou, Jie Pan, Fei Gao and Shaojie Shen_

Given obstacles, quickly computes a path to the goal that avoids them. Low-altitude scenarios, so I may not need it.

##### • [2021 ICRA] FUEL: Fast UAV Exploration Using Incremental Frontier Structure and Hierarchical Planning.
[\[PDF\]](https://www.dropbox.com/s/h9bais2wnw1g9f0/root.pdf?dl=0) [\[Code\]](https://github.com/HKUST-Aerial-Robotics/FUEL)

_Boyu Zhou, Yichen Zhang, Xinyi Chen, Shaojie Shen_

Accelerates exploration. At the algorithm level, reading a few representative works is enough.

##### • [2019 IROS] FASTER: Fast and Safe Trajectory Planner for Flights in Unknown Environments.

[\[PDF\]](https://arxiv.org/abs/1903.03558) [\[Code\]](https://github.com/mit-acl/faster)

_Jesus Tordesillas, Brett T. Lopez, Jonathan P. How_

Accelerates exploration. At the algorithm level, reading a few representative works is enough.

### Drone-based Sensing

##### • Dogfight: Detecting Drones from Drones Videos. *

Detects small targets in air-to-air scenarios. Visual perception may still not suit my needs.

#### Object Detection

##### • [2019 CoRR] SlimYOLOv3: Narrower, Faster and Better for Real-Time UAV Applications.

[\[PDF\]](https://arxiv.org/abs/1907.11093) [\[Code\]](https://github.com/PengyiZhang/SlimYOLOv3) *

_Pengyi Zhang, Yunxin Zhong, Xiaoqiong Li_

#### Object Tracking

##### • [2020 FUZZ-IEEE] DroTrack: High-speed Drone-based Object Tracking Under Uncertainty.

[\[PDF\]](https://doi.org/10.1109/FUZZ48607.2020.9177571) [\[Code\]](https://github.com/cruiseresearchgroup/DroTrack)

_Ali Hamdi, Flora Salim, Du Yong Kim_

Algorithm-level optimization.

##### • [2020 IEEE IV] Vehicle Position Estimation with Aerial Imagery from Unmanned Aerial Vehicles.

[\[PDF\]](https://arxiv.org/pdf/2004.08206) [\[Code\]](https://github.com/fkthi/OpenTrafficMonitoringPlus)

_Friedrich Kruber, Eduardo Sánchez Morales, Samarjit Chakraborty, Michael Botsch_

#### Radar

##### • [Deep Learning Methods Applied to Radar Signals](https://zhuanlan.zhihu.com/p/103155774) *

##### • A Novel Semi-Supervised Convolutional Neural Network Method for Synthetic Aperture Radar Image Recognition.

##### • [2020 IROS] Automatic Targetless Extrinsic Calibration of Multiple 3D LiDARs and Radars.

##### • [2020 IROS] Autonomous Obstacle Avoidance for UAV Based on Fusion of Radar and Monocular Camera.

#### Other Applications in Vision

##### • [2021 CVPR] UAV-Human: A Large Benchmark for Human Behavior Understanding with Unmanned Aerial Vehicles.
[\[PDF\]](https://arxiv.org/pdf/2104.00946.pdf) [\[Code\]](https://github.com/SUTDCV/UAV-Human)

_Tianjiao Li, Jun Liu, Wei Zhang, Yun Ni, Wenqian Wang, Zhiheng Li_

## Journal Papers

### T-RO and RA-L

### Other Journals

## Conference Papers

## Datasets

##### • Vision Meets Drones: Past, Present and Future.

[\[Project\]](http://aiskyeye.com/) [\[PDF\]](https://arxiv.org/pdf/2001.06303) [\[Code\]](https://github.com/VisDrone/VisDrone-Dataset)

_Pengfei Zhu, Longyin Wen, Dawei Du, Xiao Bian, Qinghua Hu, Haibin Ling_

##### • [2020 ACM MM] University-1652: A Multi-view Multi-source Benchmark for Drone-based Geo-localization.

[\[PDF\]](https://arxiv.org/abs/2002.12186) [\[Code\]](https://github.com/layumi/University1652-Baseline)

_Zhedong Zheng, Yunchao Wei, Yi Yang_

## Challenges

#### [1] [Vision Meets Drones: A Challenge](http://aiskyeye.com/)

#### [2] [Low Power Computer Vision Challenge](https://lpcv.ai/)

#### [3] [UAVision](https://sites.google.com/site/uavisionvisdrone2020/home)

## Other Related Resources

- [Detailed Introduction to the Engineering of Dronecraft](https://github.com/ntakouris/awesome-dronecraft#more-advanced-topics)
- [A Talk Given by Shaojie Shen at the RI Seminar of CMU](https://www.ri.cmu.edu/event/ri-seminar-shaojie-shen-hong-kong-university-of-science-technology-assistant-professor-2019-01-25/#)

[\[back to top\]](#contents)