# Deep-Image-Matting-PyTorch

**Repository Path**: trichoderma-flavum/Deep-Image-Matting-PyTorch

## Basic Information

- **Project Name**: Deep-Image-Matting-PyTorch
- **Description**: Deep Image Matting implementation in PyTorch
- **Primary Language**: Unknown
- **License**: MIT
- **Default Branch**: master
- **Homepage**: None
- **GVP Project**: No

## Statistics

- **Stars**: 0
- **Forks**: 0
- **Created**: 2021-02-18
- **Last Updated**: 2021-06-24

## Categories & Tags

**Categories**: Uncategorized

**Tags**: None

## README

# Deep Image Matting

Deep Image Matting [paper](https://arxiv.org/abs/1703.03872) implementation in PyTorch.

## Differences

1. "fc6" is dropped.
2. Indices pooling.
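The second difference, indices pooling, can be sketched as below. This is a minimal illustration of index-preserving pooling/unpooling in PyTorch; the channel and spatial sizes are illustrative assumptions, not the repository's actual encoder-decoder dimensions.

```python
import torch
import torch.nn as nn

# Max-pool while recording the argmax indices, then use those indices
# to place values back at their original positions when upsampling.
# Sizes here are illustrative only.
pool = nn.MaxPool2d(kernel_size=2, stride=2, return_indices=True)
unpool = nn.MaxUnpool2d(kernel_size=2, stride=2)

x = torch.randn(1, 64, 32, 32)      # a hypothetical encoder feature map
pooled, indices = pool(x)           # downsampled map plus argmax indices
restored = unpool(pooled, indices)  # upsampled map, sparse except at max positions
```

Reusing the pooling indices in the decoder avoids learning a separate upsampling layer and keeps fine spatial detail, which matters for thin alpha-matte structures like hair.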
"fc6" is clumsy: at over 100 million parameters, it makes the model hard to converge. I suspect this is why the model in the paper had to be trained stage-wise.

## Performance

- The Composition-1k testing dataset.
- Evaluated on whole images.
- SAD normalized by 1000.
- Input images are normalized with mean=[0.485, 0.456, 0.406] and std=[0.229, 0.224, 0.225].
- Trimaps are generated with both erosion and dilation.

|Models|SAD|MSE|Download|
|---|---|---|---|
|paper-stage0|59.6|0.019||
|paper-stage1|54.6|0.017||
|paper-stage3|50.4|0.014||
|my-stage0|66.8|0.024|[Link](https://github.com/foamliu/Deep-Image-Matting-PyTorch/releases/download/v1.0/BEST_checkpoint.tar)|

## Dependencies

- Python 3.5.2
- PyTorch 1.1.0

## Dataset

### Adobe Deep Image Matting Dataset

Follow the [instructions](https://sites.google.com/view/deepimagematting) to contact the authors for the dataset.

### MSCOCO

Go to [MSCOCO](http://cocodataset.org/#download) to download:

* [2014 Train images](http://images.cocodataset.org/zips/train2014.zip)

### PASCAL VOC

Go to [PASCAL VOC](http://host.robots.ox.ac.uk/pascal/VOC/) to download:

* VOC challenge 2008 [training/validation data](http://host.robots.ox.ac.uk/pascal/VOC/voc2008/VOCtrainval_14-Jul-2008.tar)
* The test data for the VOC2008 challenge

## Usage

### Data Pre-processing

Extract the training images:

```bash
$ python pre_process.py
```

### Train

```bash
$ python train.py
```

To visualize progress during training, run in your terminal:

```bash
$ tensorboard --logdir runs
```

## Experimental results

### The Composition-1k testing dataset

1. Test:

   ```bash
   $ python test.py
   ```

   It prints the average SAD and MSE errors when finished.

### The alphamatting.com dataset

1. Download the evaluation datasets: go to the [Datasets page](http://www.alphamatting.com/datasets.php) and download the evaluation datasets. Make sure you pick the low-resolution dataset.

2. Extract the evaluation images:

   ```bash
   $ python extract.py
   ```

3. Evaluate:

   ```bash
   $ python eval.py
   ```

Result gallery (Image | Trimap1 | Trimap2 | Trimap3): images omitted here.

### Demo

Download the pre-trained Deep Image Matting model ([Link](https://github.com/foamliu/Deep-Image-Matting-PyTorch/releases/download/v1.0/BEST_checkpoint.tar)), then run:

```bash
$ python demo.py
```

Result gallery (Image/Trimap | Output/GT | New BG/Compose): images omitted here.

## Sponsorship
If this project helps you, a small sponsorship is appreciated~
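The erode/dilate trimap generation mentioned under Performance can be sketched as below. This is a plain-NumPy illustration, not the repository's actual pre-processing code; the band half-width `k` and the naive square structuring element are assumptions.

```python
import numpy as np

def binary_dilate(mask, k):
    """Naive dilation of a boolean mask by a (2k+1)x(2k+1) square element.
    Note: np.roll wraps around image borders; real code would pad instead."""
    out = mask.copy()
    for dy in range(-k, k + 1):
        for dx in range(-k, k + 1):
            out |= np.roll(np.roll(mask, dy, axis=0), dx, axis=1)
    return out

def binary_erode(mask, k):
    """Erosion is dilation of the complement."""
    return ~binary_dilate(~mask, k)

def gen_trimap(alpha, k=3):
    """Build a trimap from a ground-truth alpha matte:
    dilate the foreground outward and erode the definite foreground inward;
    everything in between becomes the unknown band (128)."""
    fg_any = alpha > 0                       # any foreground coverage
    dilated = binary_dilate(fg_any, k)
    eroded = binary_erode(alpha >= 255, k)   # certain foreground only
    trimap = np.zeros_like(alpha, dtype=np.uint8)
    trimap[dilated] = 128                    # unknown band (covers the whole dilation)
    trimap[eroded] = 255                     # certain foreground overwrites it
    return trimap
```

Dilating and eroding from the same matte gives an unknown band on both sides of the object boundary, which is where the network actually has to predict fractional alpha values.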