PyTorch implementation of Deformable ConvNets v2

This repository contains a PyTorch implementation of Deformable ConvNets v2 (Modulated Deformable Convolution), based on the paper Deformable ConvNets v2: More Deformable, Better Results. The deformable convolution implementation is based on ChunhuanLin/deform_conv_pytorch; thanks to ChunhuanLin.
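For reference, DCNv2 augments each kernel sampling location with both a learned offset and a learned modulation scalar. In the paper's notation, the output at location $p$ is

```latex
y(p) = \sum_{k=1}^{K} w_k \cdot x(p + p_k + \Delta p_k) \cdot \Delta m_k
```

where $p_k$ are the fixed kernel grid offsets, $\Delta p_k$ are the learned (fractional) offsets, and $\Delta m_k \in [0, 1]$ are the learned modulation scalars. Setting all $\Delta m_k = 1$ recovers DCNv1.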

  • Note: This version needs more GPU memory than the original. For example, if the original code ran on 2 GPUs with a batch size of 4, this version needs 2 GPUs with a batch size of 1; alternatively, add more GPUs.

TODO

  • Initialize weight of modulated deformable convolution based on paper
  • Learning rates of offset and modulation are set to different values from other layers
  • Results of ScaledMNIST experiments
  • Support different stride
  • Support deformable group
  • DeepLab + DCNv2
  • Results of VOC segmentation experiments

Requirements

  • Python 3.6
  • PyTorch 1.0

Usage

Replace regular convolution (the following model's conv2) with modulated deformable convolution:

import torch.nn as nn
from deform_conv_v2 import DeformConv2d  # provided by this repository

class ConvNet(nn.Module):
  def __init__(self):
    super(ConvNet, self).__init__()
    self.relu = nn.ReLU(inplace=True)
    self.pool = nn.MaxPool2d((2, 2))
    self.avg_pool = nn.AdaptiveAvgPool2d(1)

    self.conv1 = nn.Conv2d(3, 32, 3, padding=1)
    self.bn1 = nn.BatchNorm2d(32)
    # modulation=True selects DCNv2; modulation=False gives DCNv1
    self.conv2 = DeformConv2d(32, 64, 3, padding=1, modulation=True)
    self.bn2 = nn.BatchNorm2d(64)

    self.fc = nn.Linear(64, 10)

  def forward(self, x):
    x = self.relu(self.bn1(self.conv1(x)))
    x = self.pool(x)
    x = self.relu(self.bn2(self.conv2(x)))

    x = self.avg_pool(x)
    x = x.view(x.shape[0], -1)
    x = self.fc(x)

    return x
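The core operation inside a modulated deformable convolution is bilinear sampling at learned fractional offsets, with each sample scaled by its modulation scalar. A minimal NumPy sketch of that sampling step (function names are illustrative, not the repo's code):

```python
import numpy as np

def bilinear_sample(feat, y, x):
    """Sample a (H, W) feature map at fractional coordinates (y, x)
    via bilinear interpolation, clamping to the border."""
    H, W = feat.shape
    y0, x0 = int(np.floor(y)), int(np.floor(x))
    y1, x1 = min(y0 + 1, H - 1), min(x0 + 1, W - 1)
    y0, x0 = max(y0, 0), max(x0, 0)
    wy, wx = y - np.floor(y), x - np.floor(x)
    top = feat[y0, x0] * (1 - wx) + feat[y0, x1] * wx
    bot = feat[y1, x0] * (1 - wx) + feat[y1, x1] * wx
    return top * (1 - wy) + bot * wy

def modulated_sample(feat, y, x, dy, dx, m):
    """DCNv2 sampling: shift the sampling point by the learned offset
    (dy, dx), then scale the sampled value by the modulation m in [0, 1]."""
    return m * bilinear_sample(feat, y + dy, x + dx)
```

In the actual layer, one (dy, dx, m) triple is predicted per kernel position per output location by an extra convolution branch; DCNv1 is the special case m = 1.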

Training

ScaledMNIST

ScaledMNIST is randomly scaled MNIST.
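To make the task concrete, here is one way such random scaling could be generated with plain NumPy (a sketch; the repo's actual preprocessing may use a different resampling method or scale range):

```python
import numpy as np

def random_scale(img, rng, lo=0.5, hi=1.5):
    """Scale a square image by a random factor via nearest-neighbor
    resampling, then pad or center-crop back to the original size."""
    H, W = img.shape
    s = rng.uniform(lo, hi)
    new_h = max(1, int(round(H * s)))
    new_w = max(1, int(round(W * s)))
    ys = (np.arange(new_h) / s).astype(int).clip(0, H - 1)
    xs = (np.arange(new_w) / s).astype(int).clip(0, W - 1)
    scaled = img[np.ix_(ys, xs)]
    # center-crop if upscaled, center-paste if downscaled
    y0, x0 = max(0, (new_h - H) // 2), max(0, (new_w - W) // 2)
    crop = scaled[y0:y0 + H, x0:x0 + W]
    out = np.zeros_like(img)
    oy, ox = (H - crop.shape[0]) // 2, (W - crop.shape[1]) // 2
    out[oy:oy + crop.shape[0], ox:ox + crop.shape[1]] = crop
    return out
```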

Use modulated deformable convolution at conv3~4:

python train.py --arch ScaledMNISTNet --deform True --modulation True --min-deform-layer 3

Use deformable convolution at conv3~4:

python train.py --arch ScaledMNISTNet --deform True --modulation False --min-deform-layer 3

Use only regular convolution:

python train.py --arch ScaledMNISTNet --deform False --modulation False
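The three commands above toggle which layers are deformable and whether modulation (DCNv2) is enabled. A minimal argparse sketch of how such flags could be parsed (an assumption about train.py's interface, since boolean flags passed as `True`/`False` strings need explicit conversion):

```python
import argparse

def str2bool(v):
    # command-line "True"/"False" strings -> bool
    return str(v).lower() in ('true', '1', 'yes')

def parse_args(argv=None):
    parser = argparse.ArgumentParser(description='ScaledMNIST training (flag sketch)')
    parser.add_argument('--arch', default='ScaledMNISTNet')
    parser.add_argument('--deform', type=str2bool, default=False,
                        help='replace later convs with deformable convolution')
    parser.add_argument('--modulation', type=str2bool, default=False,
                        help='use DCNv2 (modulated) instead of DCNv1')
    parser.add_argument('--min-deform-layer', type=int, default=3,
                        help='first conv layer (1-indexed) to make deformable')
    return parser.parse_args(argv)
```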

Results

ScaledMNIST

Model              Accuracy (%)  Loss
w/o DCN            97.22         0.113
w/ DCN @conv4      98.60         0.049
w/ DCN @conv3~4    98.95         0.035
w/ DCNv2 @conv4    98.45         0.058
w/ DCNv2 @conv3~4  99.21         0.027
MIT License

Copyright (c) 2018 Takato Kimura

Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
