# Semantic_Scene_Completion
**Repository Path**: remvs/semantic_-scene_-completion
## Basic Information
- **Project Name**: Semantic_Scene_Completion
- **Description**: Collaborative depth map segmentation and inpainting
- **Primary Language**: Python
- **License**: MIT
- **Default Branch**: master
- **Homepage**: None
- **GVP Project**: No
## Statistics
- **Stars**: 0
- **Forks**: 0
- **Created**: 2021-10-24
- **Last Updated**: 2021-11-25
## Categories & Tags
**Categories**: Uncategorized
**Tags**: None
## README
# Collaborative Depth Map Segmentation and Inpainting
This project is funded by the **National Key R&D Program of China, "Cloud Computing and Big Data" key special project "Multi-source Data-driven Intelligent and Efficient Scene Modeling and Rendering Engine" (2017YFB1002601)**, as a sub-task of Topic 1, "Analysis, Fusion, and Joint Optimization of Multi-source Data in Complex Scenes".
## Algorithm Pipeline
- **Collaborative depth map segmentation and inpainting pipeline** [Paper link](https://arxiv.org/pdf/2003.14052.pdf)
## Quantitative Results
#### NYU
| Method | Resolution | Trained on | SC IoU | SSC mIoU |
| ------------------------- | ------------ | ---------- | -------- | -------- |
| SSCNet | (240, 60) | NYU | 55.1 | 24.7 |
| VVNetR-120 | (120, 60) | NYU+SUNCG | 61.1 | 32.9 |
| DDRNet | (240, 60) | NYU | 61.0 | 30.4 |
| ForkNet | (80, 80) | NYU | 63.4 | 37.1 |
| CCPNet | (240, 240) | NYU | 63.5 | 38.5 |
| **SketchAwareSSC (Ours)** | **(60, 60)** | **NYU** | **71.3** | **41.1** |
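In the table above, SC IoU scores scene completion as a binary occupied-vs-empty prediction, while SSC mIoU averages per-class IoU over the semantic labels. A minimal, self-contained sketch of both metrics on flattened voxel labels (illustrative only; the repository's own evaluation code computes these over the full NYU label set):

```python
from collections import defaultdict

def per_class_iou(pred, gt, num_classes, ignore_index=255):
    """IoU_c = TP_c / (TP_c + FP_c + FN_c) over flattened voxel labels."""
    tp, fp, fn = defaultdict(int), defaultdict(int), defaultdict(int)
    for p, g in zip(pred, gt):
        if g == ignore_index:   # skip unlabeled voxels
            continue
        if p == g:
            tp[g] += 1
        else:
            fp[p] += 1
            fn[g] += 1
    return {c: tp[c] / (tp[c] + fp[c] + fn[c])
            for c in range(num_classes) if tp[c] + fp[c] + fn[c] > 0}

def sc_iou(pred, gt, ignore_index=255):
    """Scene completion IoU: binary occupied (label > 0) vs. empty."""
    occ = per_class_iou(
        [int(p > 0) for p in pred],
        [int(g > 0) if g != ignore_index else ignore_index for g in gt],
        num_classes=2, ignore_index=ignore_index)
    return occ.get(1, 0.0)

# Toy volume with 3 labels (0 = empty space, 1-2 = semantic classes)
pred = [0, 1, 1, 2, 2, 0]
gt   = [0, 1, 2, 2, 2, 1]
ious = per_class_iou(pred, gt, num_classes=3)
ssc_miou = sum(ious.values()) / len(ious)   # 0.5 on this toy input
```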
## Data Preparation
#### Pretrained ResNet-50
Please download the pretrained ResNet-50 weights and place them in `./DATA/pytorch-weight`.
| Source | Link |
| :----------: | :--------------------------------------: |
| Baidu Cloud | Link: https://pan.baidu.com/s/1wS1TozvS3cBdutsXRWUmUw Key: 4g9u |
| Google Drive | https://drive.google.com/drive/folders/121yZXBZ8wV77WRXRur86YBA4ifJEhsJQ?usp=sharing |
#### NYU Depth V2
Please download the NYU Depth V2 dataset and place it in `./DATA/NYU`.
| Source | Link |
| :----------: | :--------------------------------------: |
| Baidu Cloud | Link: https://pan.baidu.com/s/1GfWqAbsfMp3NOjFcEnL54A Key: v5ta |
| Google Drive | https://drive.google.com/drive/folders/121yZXBZ8wV77WRXRur86YBA4ifJEhsJQ?usp=sharing |
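Before moving on, the data layout above can be sanity-checked with a short script (the paths are the defaults this README assumes; adjust them if you unpack the data elsewhere):

```python
from pathlib import Path

# Directories this README assumes after data preparation
EXPECTED_DIRS = [
    Path("DATA/pytorch-weight"),  # pretrained ResNet-50 weights
    Path("DATA/NYU"),             # NYU Depth V2 dataset
]

def missing_dirs(root="."):
    """Return the expected data directories that do not exist under `root`."""
    root = Path(root)
    return [str(d) for d in EXPECTED_DIRS if not (root / d).is_dir()]

if __name__ == "__main__":
    missing = missing_dirs()
    if missing:
        print("Missing data directories:", ", ".join(missing))
    else:
        print("Data layout looks good.")
```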
## Installation
The code is developed with Python 3.6 and PyTorch 1.0.0, and was developed and tested on 2 GPUs.
1. **Clone this repo.**
```shell
$ git clone https://github.com/charlesCXK/TorchSSC.git
$ cd TorchSSC
```
2. **Install dependencies.**
**(1) Create a conda environment:**
```shell
$ conda env create -f ssc.yaml
$ conda activate ssc
```
**(2) Install apex 0.1 (requires CUDA):**
```shell
$ cd ./furnace/apex
$ python setup.py install --cpp_ext --cuda_ext
```
## Training and Testing
#### Training
Training on NYU Depth V2:
```shell
$ cd ./model/sketch.nyu
$ export NGPUS=2
$ python -m torch.distributed.launch --nproc_per_node=$NGPUS train.py -p 10097
```
- `-p` sets the port used for distributed training. If you run more than one experiment on the same machine, assign each a different port.
- The TensorBoard logs are saved in the `sketch.nyu/log/tb/` directory.
#### Inference
Inference on NYU Depth V2:
```shell
$ cd ./model/sketch.nyu
$ python eval.py -e 200-250 -d 0-1 --save_path results
```
- Here, `200-250` means we evaluate every checkpoint whose epoch ID lies in [200, 250], such as `epoch-200.pth`, `epoch-249.pth`, etc.
- The SSC predictions are saved in `results/` and the sketch predictions in `results_sketch/`. Performance is written to `log/*.log`; you should expect `0.411@SSC mIoU` and `0.713@SC IoU`.
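As a rough illustration of how an `-e 200-250` range expands into checkpoint file names (a hypothetical helper for clarity; the actual argument parsing in `eval.py` may differ):

```python
def checkpoints_in_range(epoch_range):
    """Expand a range string like '200-250' into checkpoint file names."""
    start, end = (int(x) for x in epoch_range.split("-"))
    return [f"epoch-{e}.pth" for e in range(start, end + 1)]

names = checkpoints_in_range("200-250")
# names[0] is 'epoch-200.pth', names[-1] is 'epoch-250.pth' (51 checkpoints)
```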
## Qualitative Results
**RGB:**
**Depth map:**
**3D visible surface:**
**Semantic scene completion result:**
## 引用
If you find this work useful in your research, please consider citing:
```
@InProceedings{Chen_2020_SketchAwareSSC,
author = {Chen, Xiaokang and Lin, Kwan-Yee and Qian, Chen and Zeng, Gang and Li, Hongsheng},
title = {3D Sketch-aware Semantic Scene Completion via Semi-supervised Structure Prior},
booktitle = {The IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
month = {June},
year = {2020}
}
```