# monocular-depth-estimation

**Repository Path**: xdqrshi/monocular-depth-estimation

## Basic Information
- **Project Name**: monocular-depth-estimation
- **Description**: No description available
- **Primary Language**: Unknown
- **License**: Not specified
- **Default Branch**: master
- **Homepage**: None
- **GVP Project**: No

## Statistics
- **Stars**: 0
- **Forks**: 0
- **Created**: 2021-10-21
- **Last Updated**: 2021-10-21

## Categories & Tags
**Categories**: Uncategorized
**Tags**: None

## README

# Mind Mapping for Depth Estimation



## VIO
* DeepVIO: Self-supervised Deep Learning of Monocular Visual Inertial Odometry using 3D Geometric Constraints
  + [paper](https://arxiv.org/pdf/1906.11435.pdf)
* DeepVO: A Deep Learning approach for Monocular Visual Odometry
  + [paper](https://arxiv.org/pdf/1611.06069.pdf)

## LiDAR
* DeepLiDAR: Deep Surface Normal Guided Depth Prediction for Outdoor Scene from Sparse LiDAR Data and Single Color Image
  + [paper](https://arxiv.org/pdf/1812.00488.pdf)
* Pseudo-LiDAR++: Accurate Depth for 3D Object Detection in Autonomous Driving
  + [paper](https://arxiv.org/pdf/1906.06310.pdf)
* LiStereo: Generate Dense Depth Maps from LIDAR and Stereo Imagery
  + [paper](https://arxiv.org/pdf/1905.02744.pdf)
* Self-Supervised Sparse-to-Dense: Self-Supervised Depth Completion from LiDAR and Monocular Camera
  + [paper](https://arxiv.org/pdf/1807.00275.pdf)
* RGB and LiDAR fusion based 3D Semantic Segmentation for Autonomous Driving
  + [paper](https://arxiv.org/pdf/1906.00208.pdf)

## 2020
* Don't Forget The Past: Recurrent Depth Estimation from Monocular Video
  + [paper](https://arxiv.org/pdf/2001.02613.pdf)
  + [code](https://github.com/wvangansbeke/Recurrent-Depth-Estimation)

## 2019
* 3D Packing for Self-Supervised Monocular Depth Estimation
  + [paper](https://arxiv.org/pdf/1905.02693.pdf)
  + [code](https://github.com/TRI-ML/packnet-sfm)
* Self-supervised Learning with Geometric Constraints in Monocular Video: Connecting Flow, Depth, and Camera
  + [paper](https://arxiv.org/pdf/1907.05820.pdf)
* SuperDepth: Self-Supervised, Super-Resolved Monocular Depth Estimation
  + [paper](https://arxiv.org/pdf/1810.01849.pdf)
* How Much Position Information Do Convolutional Neural Networks Encode?
  + [paper](https://openreview.net/pdf?id=rJeB36NKvB)
* Instance-wise Depth and Motion Learning from Monocular Videos
  + [paper](https://arxiv.org/pdf/1912.09351.pdf)
  + [code](https://github.com/SeokjuLee/Insta-DM)
* How do neural networks see depth in single images?
  + [paper](https://arxiv.org/pdf/1905.07005.pdf)
* Unsupervised Scale-consistent Depth and Ego-motion Learning from Monocular Video
  + [paper](https://arxiv.org/pdf/1908.10553.pdf)
  + [code](https://github.com/JiawangBian/SC-SfMLearner-Release)
  + [project](https://jwbian.net/sc-sfmlearner)
* Towards Robust Monocular Depth Estimation: Mixing Datasets for Zero-Shot Cross-Dataset Transfer
  + [paper](https://arxiv.org/pdf/1907.01341.pdf)
* SGANVO: Unsupervised Deep Visual Odometry and Depth Estimation with Stacked Generative Adversarial Networks (***)
  + [paper](https://arxiv.org/abs/1906.08889)
* Pixel-Accurate Depth Evaluation in Realistic Driving Scenarios
  + [paper](https://arxiv.org/abs/1906.08953)
* Deep Robust Single Image Depth Estimation Neural Network Using Scene Understanding
  + [paper](https://arxiv.org/abs/1906.03279)
* Digging Into Self-Supervised Monocular Depth Estimation
  + [paper](https://arxiv.org/abs/1806.01260)
  + [code](https://github.com/nianticlabs/monodepth2)
* Unsupervised Monocular Depth and Ego-motion Learning with Structure and Semantics
  + [paper](https://arxiv.org/abs/1906.05717)
  + [code](https://sites.google.com/corp/view/struct2depth)
* Pseudo-LiDAR++: Accurate Depth for 3D Object Detection in Autonomous Driving
  + [paper](https://arxiv.org/abs/1906.06310)
  + [code](https://github.com/mileyan/Pseudo_Lidar_V2)
* Monocular Depth Estimation Using Relative Depth Maps
  + [paper](http://openaccess.thecvf.com/content_CVPR_2019/papers/Lee_Monocular_Depth_Estimation_Using_Relative_Depth_Maps_CVPR_2019_paper.pdf)
* Connecting the Dots: Learning Representations for Active Monocular Depth Estimation
  + [paper](http://openaccess.thecvf.com/content_CVPR_2019/papers/Riegler_Connecting_the_Dots_Learning_Representations_for_Active_Monocular_Depth_Estimation_CVPR_2019_paper.pdf)
* Soft Labels for Ordinal Regression
  + [paper](http://openaccess.thecvf.com/content_CVPR_2019/papers/Diaz_Soft_Labels_for_Ordinal_Regression_CVPR_2019_paper.pdf)
* A General and Adaptive Robust Loss Function
  + [paper](http://openaccess.thecvf.com/content_CVPR_2019/papers/Barron_A_General_and_Adaptive_Robust_Loss_Function_CVPR_2019_paper.pdf)
* Adversarial Structure Matching for Structured Prediction Tasks
  + [paper](http://openaccess.thecvf.com/content_CVPR_2019/papers/Hwang_Adversarial_Structure_Matching_for_Structured_Prediction_Tasks_CVPR_2019_paper.pdf)
* Veritatem Dies Aperit - Temporally Consistent Depth Prediction Enabled by a Multi-Task Geometric and Semantic Scene Understanding Approach
  + [paper](http://openaccess.thecvf.com/content_CVPR_2019/papers/Atapour-Abarghouei_Veritatem_Dies_Aperit_-_Temporally_Consistent_Depth_Prediction_Enabled_by_CVPR_2019_paper.pdf)
* Towards Scene Understanding: Unsupervised Monocular Depth Estimation With Semantic-Aware Representation
  + [paper](http://openaccess.thecvf.com/content_CVPR_2019/papers/Chen_Towards_Scene_Understanding_Unsupervised_Monocular_Depth_Estimation_With_Semantic-Aware_Representation_CVPR_2019_paper.pdf)
* Pattern-Affinitive Propagation across Depth, Surface Normal and Semantic Segmentation
  + [paper](https://arxiv.org/pdf/1906.03525v1.pdf)
* Generating and Exploiting Probabilistic Monocular Depth Estimates
  + [paper](https://arxiv.org/pdf/1906.05739v1.pdf)
* Attention-based Context Aggregation Network for Monocular Depth Estimation
  + [paper](https://arxiv.org/pdf/1901.10137v1.pdf)
  + [code](https://github.com/miraiaroha/ACAN)
* Monocular 3D Object Detection with Pseudo-LiDAR Point Cloud
  + [paper](https://arxiv.org/pdf/1903.09847v3.pdf)
  + [code](https://github.com/xinshuoweng/mono3D_PLiDAR)
* Geometry-Aware Symmetric Domain Adaptation for Monocular Depth Estimation
  + [paper](https://arxiv.org/pdf/1904.01870v1.pdf)
  + [code](https://github.com/sshan-zhao/GASDA)
* Learning monocular depth estimation infusing traditional stereo knowledge
  + [paper](https://arxiv.org/pdf/1904.04144v1.pdf)
  + [code](https://github.com/fabiotosi92/monoResMatch-Tensorflow)
* Learn Stereo, Infer Mono: Siamese Networks for Self-Supervised, Monocular, Depth Estimation
  + [paper](https://arxiv.org/pdf/1905.00401v1.pdf)
  + [code](https://github.com/mtngld/lsim)
* Semi-Supervised Monocular Depth Estimation with Left-Right Consistency Using Deep Neural Network
  + [paper](https://arxiv.org/pdf/1905.07542v1.pdf)
  + [code](https://github.com/a-jahani/semiDepth)
* Sparsity Invariant CNNs
  + [paper](https://arxiv.org/pdf/1708.06500.pdf)
* SharpNet: Fast and Accurate Recovery of Occluding Contours in Monocular Depth Estimation
  + [paper](https://arxiv.org/pdf/1905.08598v1.pdf)
  + [code](https://github.com/MichaelRamamonjisoa/SharpNet)
* Learning the Depths of Moving People by Watching Frozen People
  + [paper](https://arxiv.org/abs/1904.11111)
  + [code](https://ai.googleblog.com/2019/05/moving-camera-moving-people-deep.html)
* Depth from Videos in the Wild: Unsupervised Monocular Depth Learning from Unknown Cameras
  + [paper](https://arxiv.org/abs/1904.04998v1)
* Real-time self-adaptive deep stereo
  + [paper](https://arxiv.org/pdf/1810.05424.pdf)
  + [code](https://github.com/CVLAB-Unibo/Real-time-self-adaptive-deep-stereo)
* Learning to Adapt for Stereo
  + [paper](https://arxiv.org/abs/1904.02957)
  + [code](https://github.com/CVLAB-Unibo/Learning2AdaptForStereo)
* Student Becoming the Master: Knowledge Amalgamation for Joint Scene Parsing, Depth Estimation, and More
  + [paper](https://arxiv.org/pdf/1904.10167.pdf)
* Web Stereo Video Supervision for Depth Prediction from Dynamic Scenes
  + [paper](https://arxiv.org/pdf/1904.11112.pdf)
* FastDepth: Fast Monocular Depth Estimation on Embedded Systems
  + [paper](https://arxiv.org/pdf/1903.03273.pdf)
  + [code](https://github.com/dwofk/fast-depth)
  + [webpage](http://fastdepth.mit.edu/)
* AMNet: Deep Atrous Multiscale Stereo Disparity Estimation Networks
  + [paper](https://arxiv.org/abs/1904.09099)
* Group-wise Correlation Stereo Network
  + [paper](https://arxiv.org/abs/1903.04025)
  + [code](https://github.com/xy-guo/GwcNet)
* Refine and Distill: Exploiting Cycle-Inconsistency and Knowledge Distillation for Unsupervised Monocular Depth Estimation (CVPR 2019)
  + [paper](https://arxiv.org/abs/1903.04202)
* Bilateral Cyclic Constraint and Adaptive Regularization for Unsupervised Monocular Depth Prediction
  + [paper](https://arxiv.org/abs/1903.07309)
* Deformable kernel networks for guided depth map upsampling
  + [paper](https://arxiv.org/abs/1903.11286?context=cs.CV)
  + [code](https://cvlab-yonsei.github.io/projects/DKN)
* Real-Time Joint Semantic Segmentation and Depth Estimation Using Asymmetric Annotations
  + [paper](https://arxiv.org/pdf/1809.04766.pdf)
  + [code](https://github.com/drsleep/multi-task-refinenet)
* Competitive Collaboration: Joint Unsupervised Learning of Depth, Camera Motion, Optical Flow and Motion Segmentation
  + [paper](https://arxiv.org/pdf/1805.09806.pdf)
  + [code](https://github.com/anuragranj/cc)
* Privacy Protection in Street-View Panoramas using Depth and Multi-View Imagery
  + [paper](https://arxiv.org/abs/1903.11532)
* DFineNet: Ego-Motion Estimation and Depth Refinement from Sparse, Noisy Depth Input with RGB Guidance
  + [paper](https://arxiv.org/abs/1903.06397)
* Anytime Stereo Image Depth Estimation on Mobile Devices (**)
  + [paper](https://arxiv.org/abs/1810.11408)
  + [code](https://github.com/mileyan/AnyNet)
* Self-supervised Learning for Single View Depth and Surface Normal Estimation
  + [paper](https://arxiv.org/abs/1903.00112)
* DeepLiDAR: Deep Surface Normal Guided Depth Prediction for Outdoor Scene from Sparse LiDAR Data and Single Color Image
  + [paper](https://arxiv.org/abs/1812.00488v1)
* A Motion Free Approach to Dense Depth Estimation in Complex Dynamic Scene
  + [paper](https://arxiv.org/abs/1902.03791)
* Self-supervised Learning for Dense Depth Estimation in Monocular Endoscopy
  + [paper](https://arxiv.org/abs/1902.07766?context=cs)
* Region Deformer Networks for Unsupervised Depth Estimation from Unconstrained Monocular Videos
  + [paper](https://arxiv.org/abs/1902.09907)
* Single Image Deblurring and Camera Motion Estimation with Depth Map
  + [paper](https://arxiv.org/abs/1903.00231)
* SweepNet: Wide-baseline Omnidirectional Depth Estimation
  + [paper](https://arxiv.org/abs/1902.10904)
* Recurrent MVSNet for High-resolution Multi-view Stereo Depth Inference
  + [paper](https://arxiv.org/abs/1902.10556)
* Multi-layer Depth and Epipolar Feature Transformers for 3D Scene Reconstruction
  + [paper](https://arxiv.org/abs/1902.06729)
* Depth-Map Generation using Pixel Matching in Stereoscopic Pair of Images
  + [paper](https://arxiv.org/abs/1902.03471)
* Unstructured Multi-View Depth Estimation Using Mask-Based
Multiplane Representation
  + [paper](https://arxiv.org/abs/1902.02166)
* DFuseNet: Deep Fusion of RGB and Sparse Depth Information for Image Guided Dense Depth Completion
  + [paper](https://arxiv.org/abs/1902.00761)
* Depth Prediction Without the Sensors: Leveraging Structure for Unsupervised Learning from Monocular Videos (reproduced)
  + [paper](https://arxiv.org/pdf/1811.06152.pdf)
  + [code](https://github.com/tensorflow/models/tree/master/research/struct2depth)
  + [webpage](https://sites.google.com/view/struct2depth)

## 2018
* Robust Depth Estimation from Auto Bracketed Images
  + [paper](https://arxiv.org/pdf/1803.07702.pdf)
* PWC-Net: CNNs for Optical Flow Using Pyramid, Warping, and Cost Volume
  + [paper](https://arxiv.org/pdf/1709.02371.pdf)
  + [code](https://github.com/NVlabs/PWC-Net)
* Single View Stereo Matching
  + [paper](https://arxiv.org/abs/1803.02612)
  + [code](https://github.com/lawy623/SVS)
* Geometry meets semantics for semi-supervised monocular depth estimation
  + [paper](https://arxiv.org/pdf/1810.04093v2.pdf)
  + [code](https://github.com/CVLAB-Unibo/Semantic-Mono-Depth)
* Learning Monocular Depth by Distilling Cross-domain Stereo Networks
  + [paper](https://arxiv.org/pdf/1808.06586v1.pdf)
  + [code](https://github.com/xy-guo/Learning-Monocular-Depth-by-Stereo)
* Revisiting Single Image Depth Estimation: Toward Higher Resolution Maps with Accurate Object Boundaries
  + [paper](https://arxiv.org/pdf/1803.08673v2.pdf)
  + [code](https://github.com/JunjH/Revisiting_Single_Depth_Estimation)
* Deep Ordinal Regression Network for Monocular Depth Estimation
  + [paper](https://arxiv.org/pdf/1806.02446v1.pdf)
  + [code](https://github.com/hufu6371/DORN)
* High Quality Monocular Depth Estimation via Transfer Learning
  + [paper](https://arxiv.org/pdf/1812.11941v2.pdf)
  + [code](https://github.com/ialhashim/DenseDepth)
* Digging Into Self-Supervised Monocular Depth Estimation
  + [paper](https://arxiv.org/pdf/1806.01260v3.pdf)
  + [code](https://github.com/nianticlabs/monodepth2)
  + [video](https://www.youtube.com/watch?v=sIN1Tp3wIbQ)
* On the Importance of Stereo for Accurate Depth Estimation: An Efficient Semi-Supervised Deep Neural Network Approach
  + [paper](https://arxiv.org/pdf/1803.09719v3.pdf)
  + [code](https://github.com/NVIDIA-AI-IOT/redtail)