# GDN-Pytorch
**Repository Path**: xiaoyuyi/GDN-Pytorch
## Basic Information
- **Project Name**: GDN-Pytorch
- **Description**: Depth Estimation From a Single Image Using Guided Deep Network
- **Primary Language**: Unknown
- **License**: Not specified
- **Default Branch**: master
- **Homepage**: None
- **GVP Project**: No
## Statistics
- **Stars**: 0
- **Forks**: 0
- **Created**: 2021-01-04
- **Last Updated**: 2021-01-04
## Categories & Tags
**Categories**: Uncategorized
**Tags**: None
## README
# GDN-Pytorch
This repository is a PyTorch implementation of the paper [**"Depth Estimation From a Single Image Using Guided Deep Network"**](https://ieeexplore.ieee.org/stamp/stamp.jsp?arnumber=8854079)
by Minsoo Song and [Wonjun Kim](https://sites.google.com/site/kudcvlab), published in IEEE Access.
When using this code in your research, please cite the following paper:
Minsoo Song and Wonjun Kim, **"Depth estimation from a single image using guided deep network,"** **IEEE Access**, vol. 7, pp. 142595-142606, Dec. 2019.
```bibtex
@ARTICLE{8854079,
  author={M. {Song} and W. {Kim}},
  journal={IEEE Access},
  title={Depth Estimation From a Single Image Using Guided Deep Network},
  year={2019},
  volume={7},
  pages={142595-142606},
  doi={10.1109/ACCESS.2019.2944937}
}
```

## Model architecture

## Requirements
* Python >= 3.5
* PyTorch 0.4.0
* Ubuntu 16.04
* CUDA 8 (if CUDA is available)
* cuDNN (if CUDA is available)
## Video
RGB input (left) and estimated depth map (right).
## Results



* 1st row: RGB input
* 2nd row: Ground truth
* 3rd row: Eigen et al.
* 4th row: Godard et al.
* 5th row: Kuznietsov et al.
* 6th row: Ours
## Pretrained models
You can download the pretrained color-to-depth model:
* [Trained with KITTI](https://drive.google.com/drive/folders/10wFzSDdRK7nlNZXhSP3GGWuWFK_Ap7ur?usp=sharing)
## Demo (Single Image Prediction)
Demo Command Line:
```bash
############### Example of argument usage #####################
python depth_extract.py --gpu_num 0,1 --model_dir your/model/path/model.pkl
## '--gpu_num' selects the indices of the GPUs to use, e.g. os.environ["CUDA_VISIBLE_DEVICES"] = args.gpu_num
```
### Try it on your own image!
1. Place your example images (PNG or JPG) in **GDN-pytorch/example/demo_input**.
(Since our model was trained at a 128 x 416 scale, we recommend resizing the images to that scale before running the demo.)
2. Specify the model directory, then run the demo.
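The resizing step above can be sketched as follows. This is a minimal NumPy illustration (nearest-neighbor resize and the usual 1x3xHxW PyTorch layout), not the repository's actual preprocessing code:

```python
import numpy as np

def preprocess(img, out_h=128, out_w=416):
    """Resize an HxWx3 uint8 image to the 128 x 416 training scale
    (nearest-neighbor, for illustration only) and convert it to a
    1x3xHxW float array in [0, 1], the usual PyTorch input layout."""
    h, w, _ = img.shape
    rows = np.arange(out_h) * h // out_h        # source row for each output row
    cols = np.arange(out_w) * w // out_w        # source column for each output column
    resized = img[rows][:, cols]                # nearest-neighbor resize
    x = resized.astype(np.float32) / 255.0      # scale pixel values to [0, 1]
    return x.transpose(2, 0, 1)[None]           # HWC -> 1x3xHxW (batch of one)
```

In practice the repository's demo script handles this internally; the sketch only shows the shape and scale the network expects.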
## Dataset
* [KITTI](http://www.cvlibs.net/datasets/kitti/eval_depth.php?benchmark=depth_prediction)
The official KITTI dataset is available at the link above.
We prepared the training data following the method described at [this link](https://github.com/josephdanielchang/unsupervised_learning_of_depth_PyTorch).
## Training
### Training method
Depth_to_depth network training -> Color_to_depth network training (using the pretrained depth_to_depth network as a guide)
* Depth_to_depth network training
```bash
python GDN_main.py ./your/dataset/path --epochs 50 --batch_size 20 --gpu_num 0,1,2,3 --mode DtoD
```
* Color_to_depth network training
```bash
python GDN_main.py ./your/dataset/path --epochs 50 --batch_size 20 --model_dir /your/pretrained/depth_to_depth/model/path --gpu_num 0,1,2,3 --mode RtoD
```
`--gpu_num` specifies the indices of the GPUs to use.
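Conceptually, the second stage combines a depth reconstruction term with a term that guides the color-to-depth (RtoD) features toward those of the pretrained depth-to-depth (DtoD) network. A minimal sketch of such an objective, where the function name, the feature-matching form, and the weight `lam` are illustrative assumptions rather than the paper's exact formulation:

```python
import numpy as np

def guided_loss(pred_depth, gt_depth, rtod_feats, dtod_feats, lam=0.1):
    """Hypothetical guided-training objective: an L1 depth reconstruction
    term plus a feature-matching term that pulls the RtoD network's
    intermediate features toward those of the frozen DtoD network."""
    recon = np.abs(pred_depth - gt_depth).mean()            # L1 depth error
    guide = sum(np.abs(fr - fd).mean()                      # feature matching
                for fr, fd in zip(rtod_feats, dtod_feats))
    return recon + lam * guide
```

The key design point is that the DtoD network is trained first and then frozen, so during RtoD training its features act as a fixed target.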
## Testing (Eigen split)
* Depth_to_depth network testing
```bash
python GDN_main.py /mnt/MS/AEdepth/data_backup --epochs 0 --batch_size 8 --evaluate --real_test --gpu_num 0,1,2,3 --model_dir /your/pretrained/depth_to_depth/model/path --mode DtoD_test --img_save
```
* Color_to_depth network testing
```bash
python GDN_main.py /mnt/MS/AEdepth/data_backup --epochs 0 --batch_size 8 --evaluate --real_test --gpu_num 0,1,2,3 --RtoD_model_dir /your/pretrained/color_to_depth/model/path --mode RtoD_test --img_save
```
`--gpu_num` specifies the indices of the GPUs to use; replace `/mnt/MS/AEdepth/data_backup` with the path to your own dataset.
If you want to save the test results as images, add the `--img_save` flag.
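For reference, Eigen-split evaluations conventionally report abs rel, sq rel, RMSE, RMSE log, and the delta < 1.25^k accuracies. A sketch of these standard metrics follows; the repository's own evaluation code may differ in details such as valid-pixel masking and depth capping:

```python
import numpy as np

def depth_metrics(gt, pred):
    """Standard monocular-depth metrics on flattened arrays of valid,
    strictly positive ground-truth and predicted depths (in meters)."""
    thresh = np.maximum(gt / pred, pred / gt)               # per-pixel ratio
    a1 = (thresh < 1.25).mean()                             # delta < 1.25
    a2 = (thresh < 1.25 ** 2).mean()                        # delta < 1.25^2
    a3 = (thresh < 1.25 ** 3).mean()                        # delta < 1.25^3
    abs_rel = np.mean(np.abs(gt - pred) / gt)
    sq_rel = np.mean(((gt - pred) ** 2) / gt)
    rmse = np.sqrt(np.mean((gt - pred) ** 2))
    rmse_log = np.sqrt(np.mean((np.log(gt) - np.log(pred)) ** 2))
    return dict(abs_rel=abs_rel, sq_rel=sq_rel, rmse=rmse,
                rmse_log=rmse_log, a1=a1, a2=a2, a3=a3)
```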