# PaddleSeg
**Repository Path**: VisionDeveloper/PaddleSeg
## Basic Information
- **Project Name**: PaddleSeg
- **Description**: End-to-End Image Segmentation Suite Based on PaddlePaddle. (『飞桨』图像分割开发套件)
- **Primary Language**: Python
- **License**: Apache-2.0
- **Default Branch**: release/2.6
- **Homepage**: None
- **GVP Project**: No
## Statistics
- **Stars**: 0
- **Forks**: 100
- **Created**: 2022-11-01
- **Last Updated**: 2022-11-01
## Categories & Tags
**Categories**: Uncategorized
**Tags**: None
## README
English | [简体中文](README_CN.md)
**A High-Efficient Development Toolkit for Image Segmentation based on [PaddlePaddle](https://github.com/paddlepaddle/paddle).**
## News
- [2022-07-20] :fire: PaddleSeg v2.6 is released! More details in Release Notes.
- Release PP-HumanSeg v2, an off-the-shelf human segmentation model. It achieves 64.26 FPS on mobile devices, 45.5% faster than the previous version.
- Release EISeg v1.0, the stable version of the semi-automatic annotation tool for image, video, and 3D slice data. It achieves "Once for All" (training once, labeling all) performance.
- Release PSSL, a novel pre-training method, together with a large dataset of 1.2M+ pseudo semantic segmentation labels covering the whole ImageNet training set. It boosts the performance of various models on all downstream tasks.
- Release the PP-Matting source code and pre-trained models. Also add five more machine-learning matting methods that can be used directly without training.
- Release the industrial model series: high-accuracy, lightweight, and super-lightweight models, to help developers pick the most suitable one.
- [2022-04-20] PaddleSeg v2.5 released PP-LiteSeg, a real-time semantic segmentation model; PP-Matting, a trimap-free image matting model; and MedicalSeg, an easy-to-use toolkit for 3D medical image segmentation.
- [2022-01-20] We released PaddleSeg v2.4 with EISeg v0.4 and PP-HumanSeg, including the open-sourced dataset PP-HumanSeg14K.
## Introduction
PaddleSeg is an end-to-end, high-efficiency development toolkit for image segmentation based on PaddlePaddle. It helps both developers and researchers through the whole workflow of designing segmentation models, training them, optimizing performance and inference speed, and deploying them. A large collection of well-trained models and various real-world applications from both industry and academia help users quickly build hands-on experience in image segmentation.
## Features
* **High-Performance Models**: Following state-of-the-art segmentation methods and using high-performance backbones trained with the semi-supervised label knowledge distillation scheme ([SSLD](https://paddleclas.readthedocs.io/zh_CN/latest/advanced_tutorials/distillation/distillation.html#ssld)), we provide 40+ models and 140+ high-quality pre-trained models, which outperform other open-source implementations.
* **High Efficiency**: PaddleSeg provides multi-process asynchronous I/O, multi-GPU parallel training and evaluation, and other acceleration strategies, combined with PaddlePaddle's memory optimization, which greatly reduce the training overhead of segmentation models and allow developers to train them at lower cost and higher efficiency.
* **Modular Design**: PaddleSeg follows a modular design philosophy. Based on the actual application scenario, developers can assemble diverse training configurations from *data augmentation strategies*, *segmentation models*, *backbone networks*, *loss functions*, and other components to meet different accuracy and performance requirements (see the API sketch after this list).
* **Complete Workflow**: PaddleSeg supports image labeling, model design, model training, model compression, and model deployment. With the help of PaddleSeg, developers can easily finish all of these tasks.
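
As a rough illustration of how these components come together, the sketch below assembles a model, transforms, a dataset, a loss, and an optimizer through the Python training API described in the API documentation. The dataset root, output directory, and hyper-parameters are placeholders rather than recommended settings, and module paths may differ slightly between releases.

```python
# A minimal training sketch with the PaddleSeg dynamic-graph API
# (paths and hyper-parameters below are placeholders, not tuned settings).
import paddle
import paddleseg.transforms as T
from paddleseg.models import BiSeNetV2
from paddleseg.models.losses import CrossEntropyLoss
from paddleseg.datasets import OpticDiscSeg
from paddleseg.core import train

# 1. Segmentation model (any model from paddleseg.models can be swapped in).
model = BiSeNetV2(num_classes=2)

# 2. Data augmentation pipeline assembled from paddleseg.transforms.
train_transforms = [
    T.ResizeStepScaling(min_scale_factor=0.5, max_scale_factor=2.0, scale_step_size=0.25),
    T.RandomPaddingCrop(crop_size=(512, 512)),
    T.RandomHorizontalFlip(),
    T.Normalize(),
]

# 3. Dataset (OpticDiscSeg is a small built-in demo dataset).
train_dataset = OpticDiscSeg(
    dataset_root="data/optic_disc_seg",  # placeholder path
    transforms=train_transforms,
    mode="train")

# 4. Loss configuration: one loss type and coefficient per model output
#    (BiSeNetV2 produces a main logit plus four auxiliary logits during training).
losses = {"types": [CrossEntropyLoss()] * 5, "coef": [1] * 5}

# 5. Optimizer with a polynomial learning-rate decay.
lr = paddle.optimizer.lr.PolynomialDecay(0.01, decay_steps=1000, end_lr=0)
optimizer = paddle.optimizer.Momentum(lr, parameters=model.parameters(), momentum=0.9)

# 6. Launch training.
train(
    model=model,
    train_dataset=train_dataset,
    optimizer=optimizer,
    losses=losses,
    iters=1000,
    batch_size=4,
    save_interval=200,
    save_dir="output")
```

The same structure applies to any of the models, transforms, and losses listed in the Overview below; only the component choices change, not the workflow.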
## Community
* If you have any questions, suggestions, or feature requests, please create an issue in [GitHub Issues](https://github.com/PaddlePaddle/PaddleSeg/issues).
* You are also welcome to scan the QR code and join the PaddleSeg WeChat group to communicate with us.
## Overview

**Models**
- Semantic Segmentation
- Interactive Segmentation
- Image Matting
- Panoptic Segmentation

**Components**
- Backbones
- Losses
- Metrics
  - mIoU
  - Accuracy
  - Kappa
  - Dice
  - AUC_ROC
- Datasets
- Data Augmentation
  - Flipping
  - Resize
  - ResizeByLong
  - ResizeByShort
  - LimitLong
  - ResizeRangeScaling
  - ResizeStepScaling
  - Normalize
  - Padding
  - PaddingByAspectRatio
  - RandomPaddingCrop
  - RandomCenterCrop
  - ScalePadding
  - RandomNoise
  - RandomBlur
  - RandomRotation
  - RandomScaleAspect
  - RandomDistort
  - RandomAffine

**Special Cases**
- Model Selection Tool
- Human Segmentation
- MedicalSeg
- Cityscapes SOTA Model
- CVPR Champion Model
- Domain Adaptation
## Industrial Segmentation Models
### High-Accuracy Semantic Segmentation Models
These models achieve high accuracy but have a higher inference cost, so they are intended for GPU and Jetson devices.
| Model | Backbone | Cityscapes mIoU(%) | V100 TRT Inference Speed(FPS) | Config File |
|:-------- |:--------:|:---------------------:|:-------------------------------:|:------------:|
| FCN | HRNet_W18 | 78.97 | 24.43 | [yml](./configs/fcn/) |
| FCN | HRNet_W48 | 80.70 | 10.16 | [yml](./configs/fcn/) |
| DeepLabV3 | ResNet50_OS8 | 79.90 | 4.56 | [yml](./configs/deeplabv3/) |
| DeepLabV3 | ResNet101_OS8 | 80.85 | 3.2 | [yml](./configs/deeplabv3/) |
| DeepLabV3P | ResNet50_OS8 | 80.36 | 6.58 | [yml](./configs/deeplabv3p/) |
| DeepLabV3P | ResNet101_OS8 | 81.10 | *3.94* | [yml](./configs/deeplabv3p/) |
| OCRNet :star2: | HRNet_W18 | 80.67 | 13.26 | [yml](./configs/ocrnet/) |
| OCRNet | HRNet_W48 | 82.15 | 6.17 | [yml](./configs/ocrnet/) |
| CCNet | ResNet101_OS8 | 80.95 | 3.24 | [yml](./configs/ccnet/) |
Note that:
* The inference speed is tested on an Nvidia V100 GPU with the Paddle Inference Python API, TensorRT enabled, FP32 precision, and an input shape of 1x3x1024x2048 (a configuration sketch follows).
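
For reference, a comparable FP32 TensorRT setup can be sketched with the Paddle Inference Python API as below. The exported model file paths are assumptions (model export is covered in the Tutorials section), and the TensorRT options other than FP32 precision are generic defaults, not the exact benchmark script.

```python
# A rough sketch of the FP32 TensorRT benchmark configuration described above
# (model/params paths are placeholders for an exported PaddleSeg inference model).
import numpy as np
from paddle.inference import Config, PrecisionType, create_predictor

config = Config("output/inference_model/model.pdmodel",
                "output/inference_model/model.pdiparams")
config.enable_use_gpu(100, 0)                  # 100 MB initial GPU memory pool, GPU id 0
config.enable_tensorrt_engine(
    workspace_size=1 << 30,
    max_batch_size=1,
    min_subgraph_size=3,
    precision_mode=PrecisionType.Float32,      # FP32, matching the tables in this section
    use_static=False,
    use_calib_mode=False)
predictor = create_predictor(config)

# Feed a dummy 1x3x1024x2048 input, the shape used for the speed tests.
data = np.random.rand(1, 3, 1024, 2048).astype("float32")
input_handle = predictor.get_input_handle(predictor.get_input_names()[0])
input_handle.reshape([1, 3, 1024, 2048])
input_handle.copy_from_cpu(data)
predictor.run()
output = predictor.get_output_handle(predictor.get_output_names()[0]).copy_to_cpu()
```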
### Lightweight Semantic Segmentation Models
These models offer a balance between segmentation accuracy and inference speed. They can be deployed on GPU, x86 CPU, and ARM CPU.
| Model | Backbone | Cityscapes mIoU(%) | V100 TRT Inference Speed(FPS) | Snapdragon 855 Inference Speed(FPS) | Config File |
|:-------- |:--------:|:---------------------:|:-------------------------------:|:-----------------:|:--------:|
| PP-LiteSeg :star2: | STDC1 | 77.04 | 69.82 | 17.22 | [yml](./configs/pp_liteseg/) |
| PP-LiteSeg :star2: | STDC2 | 79.04 | 54.53 | 11.75 | [yml](./configs/pp_liteseg/) |
| BiSeNetV1 | - | 75.19 | 14.67 | 1.53 |[yml](./configs/bisenetv1/) |
| BiSeNetV2 | - | 73.19 | 61.83 | 13.67 |[yml](./configs/bisenet/) |
| STDCSeg | STDC1 | 74.74 | 62.24 | 14.51 |[yml](./configs/stdcseg/) |
| STDCSeg | STDC2 | 77.60 | 51.15 | 10.95 |[yml](./configs/stdcseg/) |
| DDRNet_23 | - | 79.85 | 42.64 | 7.68 |[yml](./configs/ddrnet/) |
| HarDNet | - | 79.03 | 30.3 | 5.44 |[yml](./configs/hardnet/) |
| SFNet | ResNet18_OS8 | 78.72 | *10.72* | - | [yml](./configs/sfnet/) |
Note that:
* The inference speed on the Nvidia V100 GPU is tested with the Paddle Inference Python API, TensorRT enabled, FP32 precision, and an input shape of 1x3x1024x2048.
* The inference speed on the Snapdragon 855 is tested with the Paddle Lite C++ API, a single thread, and an input shape of 1x3x256x256.
### Super-Lightweight Semantic Segmentation Models
These super-lightweight semantic segmentation models are designed for x86 CPU and ARM CPU.
| Model | Backbone | Cityscapes mIoU(%) | V100 TRT Inference Speed(FPS) | Snapdragon 855 Inference Speed(FPS) | Config File |
|:-------- |:--------:|:---------------------:|:-------------------------------:|:-----------------------------------:|:-----------:|
| MobileSeg | MobileNetV2 | 73.94 | 67.57 | 27.01 | [yml](./configs/mobileseg/) |
| MobileSeg :star2: | MobileNetV3 | 73.47 | 67.39 | 32.90 | [yml](./configs/mobileseg/) |
| MobileSeg | Lite_HRNet_18 | 70.75 | *10.5* | 13.05 | [yml](./configs/mobileseg/) |
| MobileSeg | ShuffleNetV2_x1_0 | 69.46 | *37.09* | 39.61 | [yml](./configs/mobileseg/) |
| MobileSeg | GhostNet_x1_0 | 71.88 | *35.58* | 38.74 | [yml](./configs/mobileseg/) |
Note that:
* The inference speed on the Nvidia V100 GPU is tested with the Paddle Inference Python API, TensorRT enabled, FP32 precision, and an input shape of 1x3x1024x2048.
* The inference speed on the Snapdragon 855 is tested with the Paddle Lite C++ API, a single thread, and an input shape of 1x3x256x256.
## Tutorials
**Tutorials**
* [Quick Start](./docs/quick_start.md)
* [A 20-Minute Blitz to Learn PaddleSeg](./docs/whole_process.md)
**Docs**
* [Installation](./docs/install.md)
* Data Preparation
* [Prepare Public Dataset](./docs/data/pre_data.md)
* [Prepare Customized Dataset](./docs/data/marker/marker.md)
* [Label Data with EISeg](./EISeg)
* [Model Training](./docs/train/train.md)
* [Model Evaluation](./docs/evaluation/evaluate.md)
* [Prediction](./docs/predict/predict.md) (see the sketch after this list)
* Model Export
* [Export Inference Model](./docs/model_export.md)
* [Export ONNX Model](./docs/model_export_onnx.md)
* Model Deploy
* [Paddle Inference (Python)](./docs/deployment/inference/python_inference.md)
* [Paddle Inference (C++)](./docs/deployment/inference/cpp_inference.md)
* [Paddle Lite](./docs/deployment/lite/lite.md)
* [Paddle Serving](./docs/deployment/serving/serving.md)
* [Paddle JS](./docs/deployment/web/web.md)
* [Benchmark](./docs/deployment/inference/infer_benchmark.md)
* Model Compression
* [Quantization](./docs/slim/quant/quant.md)
* [Distillation](./docs/slim/distill/distill.md)
* [Prune](./docs/slim/prune/prune.md)
* [FAQ](./docs/faq/faq/faq.md)
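
To complement the training and prediction docs above, here is a minimal sketch of programmatic prediction with the `predict` helper from `paddleseg.core`. The checkpoint path and image list are placeholders, and the transforms should match those used to evaluate the chosen model.

```python
# A minimal prediction sketch with a trained checkpoint (paths are placeholders).
import paddleseg.transforms as T
from paddleseg.models import BiSeNetV2
from paddleseg.core import predict

# The model definition must match the checkpoint being loaded.
model = BiSeNetV2(num_classes=2)

# Evaluation-time transforms (no random augmentation).
transforms = T.Compose([T.Resize(target_size=(512, 512)), T.Normalize()])

predict(
    model,
    model_path="output/best_model/model.pdparams",  # placeholder checkpoint path
    transforms=transforms,
    image_list=["images/demo.jpg"],                 # placeholder input images
    save_dir="output/result")                       # visualized results are written here
```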
**Welcome to Contribute**
* [API Documentation](./docs/apis)
* Advanced Development
* [Detailed Configuration File](./docs/design/use/use.md)
* [Create Your Own Model](./docs/design/create/add_new_model.md)
* Pull Request
* [PR Tutorial](./docs/pr/pr/pr.md)
* [PR Style](./docs/pr/pr/style_cn.md)
## Practical Projects
* [Interactive Segmentation](./EISeg)
* [Image Matting](./Matting)
* [PP-HumanSeg](./contrib/PP-HumanSeg)
* [3D Medical Segmentation](./contrib/MedicalSeg)
* [Cityscapes SOTA](./contrib/CityscapesSOTA)
* [Panoptic Segmentation](./contrib/PanopticDeepLab)
* [CVPR Champion Solution](./contrib/AutoNUE)
* [Domain Adaptation](./contrib/DomainAdaptation)
## AI Studio Tutorials
* [Learn PaddleSeg in 10 Mins](https://aistudio.baidu.com/aistudio/projectdetail/1672610?channelType=0&channel=0)
* [Use PaddleSeg in Human Segmentation](https://aistudio.baidu.com/aistudio/projectdetail/2189481?channelType=0&channel=0)
* [Use PaddleSeg in Mini-dataset Spine Segmentation](https://aistudio.baidu.com/aistudio/projectdetail/3878920)
* [Use PaddleSeg in Lane Segmentation](https://aistudio.baidu.com/aistudio/projectdetail/1752986?channelType=0&channel=0)
* [PaddleSeg in APIs](https://aistudio.baidu.com/aistudio/projectdetail/1339458?channelType=0&channel=0)
## License
PaddleSeg is released under the [Apache 2.0 license](LICENSE).
## Acknowledgement
* Thanks [jm12138](https://github.com/jm12138) for contributing U2-Net.
* Thanks [zjhellofss](https://github.com/zjhellofss) (Fu Shenshen) for contributing Attention U-Net, and Dice Loss.
* Thanks [liuguoyu666](https://github.com/liguoyu666), [geoyee](https://github.com/geoyee) for contributing U-Net++ and U-Net3+.
* Thanks [yazheng0307](https://github.com/yazheng0307) (LIU Zheng) for contributing quick-start document.
* Thanks [CuberrChen](https://github.com/CuberrChen) for contributing STDC (Rethinking BiSeNet), PointRend and DetailAggregateLoss.
* Thanks [stuartchen1949](https://github.com/stuartchen1949) for contributing SegNet.
* Thanks [justld](https://github.com/justld) (Lang Du) for contributing UPerNet, DDRNet, CCNet, ESPNetV2, DMNet, ENCNet, HRNet_W48_Contrast, FastFCN, BiSeNetV1, SECrossEntropyLoss and PixelContrastCrossEntropyLoss.
* Thanks [Herman-Hu-saber](https://github.com/Herman-Hu-saber) (Hu Huiming) for contributing ESPNetV2.
* Thanks [zhangjin12138](https://github.com/zhangjin12138) for contributing RandomCenterCrop.
* Thanks [simuler](https://github.com/simuler) for contributing ESPNetV1.
* Thanks [ETTR123](https://github.com/ETTR123) (Zhang Kai) for contributing ENet, PFPNNet.
## Citation
If you find our project useful in your research, please consider citing:
```latex
@misc{liu2021paddleseg,
      title={PaddleSeg: A High-Efficient Development Toolkit for Image Segmentation},
      author={Yi Liu and Lutao Chu and Guowei Chen and Zewu Wu and Zeyu Chen and Baohua Lai and Yuying Hao},
      year={2021},
      eprint={2101.06175},
      archivePrefix={arXiv},
      primaryClass={cs.CV}
}

@misc{paddleseg2019,
      title={PaddleSeg, End-to-end image segmentation kit based on PaddlePaddle},
      author={PaddlePaddle Contributors},
      howpublished={\url{https://github.com/PaddlePaddle/PaddleSeg}},
      year={2019}
}
```