The objective of this repository is to implement the Panoptic-DeepLab model and training pipeline as presented in the paper. The code base is adapted from the pytorch-deeplab-xception repository. Much of the original code has been changed, so the name of the repo has changed to reflect the updated content; however, the original code has been kept to work together with the new additions. The base code implemented the semantic segmentation task using the DeepLabV3+ model. Panoptic segmentation is a unification of semantic segmentation and instance segmentation, so the added implementation extends the semantic segmentation task to include instance segmentation in the same model. A fusion of these two tasks gives the final panoptic output. See the Panoptic Segmentation paper for more details on this concept.
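As an illustration of the fusion idea (a sketch, not this repository's exact merging code), the panoptic output can be encoded Cityscapes-style: each pixel of a "stuff" class keeps its semantic ID, while each pixel of a "thing" class combines its semantic ID with an instance index:

```python
import numpy as np

def fuse_panoptic(semantic, instance, thing_classes, label_divisor=1000):
    """Combine a semantic map and an instance map into one panoptic map.

    Pixels of "stuff" classes keep their semantic ID; pixels of "thing"
    classes get semantic_id * label_divisor + instance_id (Cityscapes-style
    encoding). Illustrative sketch only, not the repository's fusion code.
    """
    panoptic = semantic.astype(np.int64).copy()
    for cls in thing_classes:
        mask = semantic == cls
        panoptic[mask] = cls * label_divisor + instance[mask]
    return panoptic

# Toy 2x2 example: class 26 (car) is a "thing", class 0 (road) is "stuff".
semantic = np.array([[0, 26], [26, 0]])
instance = np.array([[0, 1], [2, 0]])
print(fuse_panoptic(semantic, instance, thing_classes={26}))
# [[    0 26001]
#  [26002     0]]
```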
We already have a semantic segmentation branch from DeepLabV3+, so the rest of the pipeline is as follows:
Download trained models from the following links.
The environment.yml file includes the python packages needed to run the training and other scripts in the repository. This can be installed using Anaconda.
train_panoptic.py: The main script to train panoptic segmentation. Run python train_panoptic.py --help to see the script usage and all the training options.
inference_panoptic.py: The script for predicting instanceID images for the test dataset.
The Cityscapes dataset can be downloaded to a cloud platform as explained here.
See the original README for DeepLab Xception for Semantic Segmentation below:
Update on 2018/12/06. Provide a model trained on the VOC and SBD datasets.
Update on 2018/11/24. Release the newest version of the code, which fixes some previous issues and also adds support for new backbones and multi-GPU training. For the previous code, please see the previous branch.
Backbone | train/eval os | mIoU in val | Pretrained Model
---|---|---|---
ResNet | 16/16 | 78.43% | google drive
MobileNet | 16/16 | 70.81% | google drive
DRN | 16/16 | 78.87% | google drive
This is a PyTorch (0.4.1) implementation of DeepLab-V3-Plus. It can use Modified Aligned Xception and ResNet as the backbone. Currently, we train DeepLab-V3-Plus using the Pascal VOC 2012, SBD and Cityscapes datasets.
The code was tested with Anaconda and Python 3.6. After installing the Anaconda environment:
Clone the repo:
git clone https://github.com/jfzhang95/pytorch-deeplab-xception.git
cd pytorch-deeplab-xception
Install dependencies:
For PyTorch dependency, see pytorch.org for more details.
For custom dependencies:
pip install matplotlib pillow tensorboardX tqdm
Follow steps below to train your model:
Configure your dataset path in mypath.py.
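In the original repo, mypath.py exposes a Path.db_root_dir helper that maps a dataset name to its root folder. A sketch of its shape is below; the directory values are placeholders you would replace with your own paths:

```python
# mypath.py -- sketch of the dataset-path lookup. The directory strings are
# placeholders; point them at your local copies of the datasets.
class Path:
    @staticmethod
    def db_root_dir(dataset):
        if dataset == 'pascal':
            return '/path/to/datasets/VOCdevkit/VOC2012/'
        elif dataset == 'sbd':
            return '/path/to/datasets/benchmark_RELEASE/'
        elif dataset == 'cityscapes':
            return '/path/to/datasets/cityscapes/'
        elif dataset == 'coco':
            return '/path/to/datasets/coco/'
        else:
            raise NotImplementedError('Dataset {} not available.'.format(dataset))
```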
Input arguments (see the full list via python train.py --help):
usage: train.py [-h] [--backbone {resnet,xception,drn,mobilenet}]
[--out-stride OUT_STRIDE] [--dataset {pascal,coco,cityscapes}]
[--use-sbd] [--workers N] [--base-size BASE_SIZE]
[--crop-size CROP_SIZE] [--sync-bn SYNC_BN]
[--freeze-bn FREEZE_BN] [--loss-type {ce,focal}] [--epochs N]
[--start_epoch N] [--batch-size N] [--test-batch-size N]
[--use-balanced-weights] [--lr LR]
[--lr-scheduler {poly,step,cos}] [--momentum M]
[--weight-decay M] [--nesterov] [--no-cuda]
[--gpu-ids GPU_IDS] [--seed S] [--resume RESUME]
[--checkname CHECKNAME] [--ft] [--eval-interval EVAL_INTERVAL]
[--no-val]
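The --lr-scheduler poly option follows the usual DeepLab "poly" policy, where the learning rate decays as (1 - iter/max_iter)^power with power typically set to 0.9. A minimal sketch is below; the repository's own LR_Scheduler may differ in details such as warmup or per-parameter-group rates:

```python
def poly_lr(base_lr, cur_iter, max_iter, power=0.9):
    """Polynomial learning-rate decay used by the DeepLab family of models.

    Scales base_lr by (1 - cur_iter / max_iter) ** power; power=0.9 is the
    value used in the DeepLab papers. Sketch only -- the repository's own
    scheduler may handle warmup and parameter groups differently.
    """
    return base_lr * (1 - cur_iter / max_iter) ** power

print(poly_lr(0.007, 0, 1000))    # start of training: full base LR (0.007)
print(poly_lr(0.007, 500, 1000))  # halfway: roughly half the base LR
```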
To train deeplabv3+ using Pascal VOC dataset and ResNet as backbone:
bash train_voc.sh
To train deeplabv3+ using COCO dataset and ResNet as backbone:
bash train_coco.sh