Pytorch Panoptic DeepLab

The objective of this repository is to implement the Panoptic-DeepLab model and training pipeline as presented in the paper. The code base is adapted from the pytorch-deeplab-xception repository. Much of the original code has been changed, so the name of the repo has been changed to reflect the updated content; however, the original code has been kept and works alongside the new additions. The base code implemented the semantic segmentation task using the DeepLabV3+ model. Panoptic segmentation is a unification of semantic segmentation and instance segmentation, so the added implementation extends the semantic segmentation task to include instance segmentation in the same model. A fusion of these two tasks gives the final panoptic output; see the panoptic segmentation paper for more details on this concept.
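
As a rough illustration of that fusion step, the sketch below merges a predicted semantic map with predicted instance masks by majority-voting the semantic class inside each instance mask. This is a conceptual sketch of the idea from the paper, not the repository's actual fusion code; the function name and the label-encoding scheme are assumptions.

    import numpy as np

    def fuse_panoptic(semantic, instance_ids, thing_classes, label_divisor=1000):
        """Fuse a semantic map and an instance-ID map into a panoptic map.

        semantic:      (H, W) array of predicted semantic class IDs.
        instance_ids:  (H, W) array of predicted instance IDs (0 = no instance).
        thing_classes: set of class IDs treated as countable "thing" classes.
        Returns an (H, W) map encoded as class_id * label_divisor + instance_id.
        """
        panoptic = semantic.astype(np.int64) * label_divisor  # stuff pixels keep instance_id = 0
        for inst_id in np.unique(instance_ids):
            if inst_id == 0:
                continue
            mask = instance_ids == inst_id
            # Majority vote of the semantic predictions inside the instance mask.
            classes, counts = np.unique(semantic[mask], return_counts=True)
            cls = int(classes[np.argmax(counts)])
            if cls in thing_classes:
                panoptic[mask] = cls * label_divisor + inst_id
        return panoptic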

Solution Pipeline

We already have a semantic segmentation branch from DeepLabV3+, so the rest of the pipeline is as follows:

  • Add an instance decoder head
  • Encode ground-truth instance centers as a 2D Gaussian heatmap with a standard deviation of 8 pixels (see the sketch after this list)
  • Ground-truth instance center regression (per-pixel offsets from each pixel to its instance center)
  • Add a multi-task loss (criterion) for center prediction and center regression
  • A panoptic dataloader for the Cityscapes dataset
  • Test training with a slim backbone (MobileNet)
  • Training with the Xception backbone
  • Training with the COCO dataset
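
To make the center-prediction and center-regression targets concrete, here is a minimal sketch of how ground-truth instance centers can be encoded as a Gaussian heatmap (sigma = 8 pixels) together with per-pixel offset targets, plus a matching multi-task criterion. The function names are hypothetical, and the loss weights follow the values reported in the Panoptic-DeepLab paper (200 for the MSE center loss, 0.01 for the L1 offset loss), which may differ from this repository's defaults.

    import numpy as np
    import torch.nn as nn

    def encode_instance_targets(instance_ids, sigma=8):
        """Build center-heatmap and offset targets from a ground-truth instance-ID map.

        instance_ids: (H, W) integer array, 0 = background / stuff pixels.
        Returns:
            heatmap: (H, W) float32 array with a 2D Gaussian (sigma = 8 px) at each instance center.
            offsets: (2, H, W) float32 array with per-pixel (dy, dx) offsets to the instance center.
        """
        h, w = instance_ids.shape
        heatmap = np.zeros((h, w), dtype=np.float32)
        offsets = np.zeros((2, h, w), dtype=np.float32)
        yy, xx = np.mgrid[0:h, 0:w]
        for inst_id in np.unique(instance_ids):
            if inst_id == 0:
                continue
            mask = instance_ids == inst_id
            cy, cx = yy[mask].mean(), xx[mask].mean()  # center of mass of the instance
            gaussian = np.exp(-((yy - cy) ** 2 + (xx - cx) ** 2) / (2.0 * sigma ** 2))
            heatmap = np.maximum(heatmap, gaussian)    # keep the max over overlapping Gaussians
            offsets[0][mask] = cy - yy[mask]           # offsets are only defined on instance pixels
            offsets[1][mask] = cx - xx[mask]
        return heatmap, offsets

    class PanopticCriterion(nn.Module):
        """Multi-task loss: MSE on the center heatmap plus weighted L1 on the offsets."""

        def __init__(self, center_weight=200.0, offset_weight=0.01):
            super().__init__()
            self.center_loss = nn.MSELoss()
            self.offset_loss = nn.L1Loss()
            self.center_weight = center_weight
            self.offset_weight = offset_weight

        def forward(self, pred_center, gt_center, pred_offset, gt_offset):
            return (self.center_weight * self.center_loss(pred_center, gt_center)
                    + self.offset_weight * self.offset_loss(pred_offset, gt_offset))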

Trained models

Download trained models from the following links.

The environment.yml file lists the Python packages needed to run the training and other scripts in this repository. The environment can be created with Anaconda.
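
For example, using conda (the environment name is whatever environment.yml specifies):

    conda env create -f environment.yml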

Explanation of main scripts

  • train_panoptic.py: The main script to train panoptic segmentation. Run python train_panoptic.py --help to see the script usage and all the training options.

  • inference_panoptic.py: The script for predicting instanceID images for the test dataset.

Running on Google Cloud Platform

The Cityscapes dataset can be downloaded to a cloud platform as explained here.


See the original README for DeepLab Xception for Semantic Segmentation below:

pytorch-deeplab-xception

Update on 2018/12/06. Provide models trained on the VOC and SBD datasets.

Update on 2018/11/24. Release the newest version of the code, which fixes some previous issues and adds support for new backbones and multi-GPU training. For the previous code, please see the previous branch.

TODO

  • Support different backbones
  • Support VOC, SBD, Cityscapes and COCO datasets
  • Multi-GPU training

Introduction

This is a PyTorch (0.4.1) implementation of DeepLab-V3-Plus. It can use Modified Aligned Xception and ResNet as the backbone. Currently, we train DeepLab V3 Plus using the Pascal VOC 2012, SBD and Cityscapes datasets.

Results

| Backbone  | train/eval output stride | mIoU on val | Pretrained Model |
| --------- | ------------------------ | ----------- | ---------------- |
| ResNet    | 16/16                    | 78.43%      | google drive     |
| MobileNet | 16/16                    | 70.81%      | google drive     |
| DRN       | 16/16                    | 78.87%      | google drive     |

Installation

The code was tested with Anaconda and Python 3.6. After installing the Anaconda environment:

  1. Clone the repo:

    git clone https://github.com/jfzhang95/pytorch-deeplab-xception.git
    cd pytorch-deeplab-xception
    
  2. Install dependencies:

    For PyTorch dependency, see pytorch.org for more details.

    For custom dependencies:

    pip install matplotlib pillow tensorboardX tqdm
    

Training

Follow the steps below to train your model:

  1. Configure your dataset path in mypath.py (a sketch of its expected layout follows this list).

  2. Input arguments (see the full list of input arguments via python train.py --help):

    usage: train.py [-h] [--backbone {resnet,xception,drn,mobilenet}]
                [--out-stride OUT_STRIDE] [--dataset {pascal,coco,cityscapes}]
                [--use-sbd] [--workers N] [--base-size BASE_SIZE]
                [--crop-size CROP_SIZE] [--sync-bn SYNC_BN]
                [--freeze-bn FREEZE_BN] [--loss-type {ce,focal}] [--epochs N]
                [--start_epoch N] [--batch-size N] [--test-batch-size N]
                [--use-balanced-weights] [--lr LR]
                [--lr-scheduler {poly,step,cos}] [--momentum M]
                [--weight-decay M] [--nesterov] [--no-cuda]
                [--gpu-ids GPU_IDS] [--seed S] [--resume RESUME]
                [--checkname CHECKNAME] [--ft] [--eval-interval EVAL_INTERVAL]
                [--no-val]
    
    
  3. To train deeplabv3+ using Pascal VOC dataset and ResNet as backbone:

    bash train_voc.sh
    
  4. To train deeplabv3+ using COCO dataset and ResNet as backbone:

    bash train_coco.sh
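
For reference, mypath.py is expected to map each dataset name to its root directory. The sketch below shows the expected shape only; the class layout is an assumption based on the upstream pytorch-deeplab-xception repository, and the paths are placeholders to replace with your own.

    class Path(object):
        @staticmethod
        def db_root_dir(dataset):
            # Placeholder paths -- point these at your local dataset roots.
            if dataset == 'pascal':
                return '/path/to/VOCdevkit/VOC2012/'
            elif dataset == 'sbd':
                return '/path/to/benchmark_RELEASE/'
            elif dataset == 'cityscapes':
                return '/path/to/cityscapes/'      # folder containing leftImg8bit/ and gtFine/
            elif dataset == 'coco':
                return '/path/to/coco/'
            else:
                raise NotImplementedError('Dataset {} not available.'.format(dataset))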
    

Acknowledgement

PyTorch-Encoding

Synchronized-BatchNorm-PyTorch

drn

MIT License Copyright (c) 2018 Pyjcsx Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions: The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
