WS-DAN.PyTorch

A neat PyTorch implementation of WS-DAN (Weakly Supervised Data Augmentation Network) for FGVC (Fine-Grained Visual Classification). (Hu et al., "See Better Before Looking Closer: Weakly Supervised Data Augmentation Network for Fine-Grained Visual Classification", arXiv:1901.09891)

NOTICE: This is NOT an official implementation. The authors' official implementation is available at tau-yihouxiang/WS_DAN; another unofficial PyTorch version is wvinzh/WS_DAN_PyTorch.

Innovations

  1. Data Augmentation: Attention Cropping and Attention Dropping (Fig. 1)

  2. Bilinear Attention Pooling (BAP) for feature generation (Fig. 3)

  3. Training and testing pipelines (Fig. 2a, 2b)
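The two key operations above can be sketched in plain NumPy. This is an illustrative simplification, not the repo's implementation: the function names, the threshold value, and the assumption that attention maps have already been upsampled to image resolution are all mine.

```python
import numpy as np

def bilinear_attention_pooling(features, attentions):
    """Bilinear Attention Pooling (BAP): pool the feature maps with each
    attention map, then stack the resulting part features.
    features:   (C, H, W) feature maps
    attentions: (M, H, W) attention maps
    returns:    (M, C) feature matrix
    """
    # Element-wise multiply each attention map with every feature channel,
    # then global-average-pool over the spatial dimensions.
    h, w = features.shape[1:]
    return np.einsum('mhw,chw->mc', attentions, features) / (h * w)

def attention_crop(image, attention, threshold=0.5):
    """Attention cropping: crop the image to the bounding box where the
    attention exceeds `threshold` times its maximum."""
    mask = attention >= threshold * attention.max()
    ys, xs = np.where(mask)
    return image[ys.min():ys.max() + 1, xs.min():xs.max() + 1]

def attention_drop(image, attention, threshold=0.5):
    """Attention dropping: zero out the high-attention region, forcing the
    network to look for other discriminative parts."""
    mask = attention >= threshold * attention.max()
    return image * (~mask)[..., None]
```

In training, one attention map is randomly chosen per image; the cropped and dropped versions are fed back through the network as augmented inputs.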

Performance

  • PyTorch experiments were run on a Titan Xp GPU (batch_size = 12).

| Dataset | Object | Categories | Train | Test | Accuracy (Paper) | Accuracy (PyTorch) | Feature Net |
| --- | --- | --- | --- | --- | --- | --- | --- |
| FGVC-Aircraft | Aircraft | 100 | 6,667 | 3,333 | 93.0 | 93.28 | inception_mixed_6e |
| CUB-200-2011 | Bird | 200 | 5,994 | 5,794 | 89.4 | 88.28 | inception_mixed_6e |
| Stanford Cars | Car | 196 | 8,144 | 8,041 | 94.5 | 94.38 | inception_mixed_6e |
| Stanford Dogs | Dog | 120 | 12,000 | 8,580 | 92.2 | 89.66 | inception_mixed_7c |

Usage

WS-DAN

This repo implements WS-DAN with the following feature extractors: VGG19 ('vgg19', 'vgg19_bn'), ResNet-34/50/101/152 ('resnet34', 'resnet50', 'resnet101', 'resnet152'), and Inception v3 ('inception_mixed_6e', 'inception_mixed_7c'); see ./models/wsdan.py.

```python
net = WSDAN(num_classes=num_classes, M=num_attentions, net='inception_mixed_6e', pretrained=True)
net = WSDAN(num_classes=num_classes, M=num_attentions, net='inception_mixed_7c', pretrained=True)
net = WSDAN(num_classes=num_classes, M=num_attentions, net='vgg19_bn', pretrained=True)
net = WSDAN(num_classes=num_classes, M=num_attentions, net='resnet50', pretrained=True)
```

Dataset Directory

  • FGVC-Aircraft (Aircraft)

    -/FGVC-Aircraft/data/
                    └─── images
                            └─── 0034309.jpg
                            └─── 0034958.jpg
                            └─── ...
                    └─── variants.txt
                    └─── images_variant_trainval.txt
                    └─── images_variant_test.txt
  • CUB-200-2011 (Bird)

    -/CUB-200-2011
            └─── images.txt
            └─── image_class_labels.txt
            └─── train_test_split.txt
            └─── images
                    └─── 001.Black_footed_Albatross
                            └─── Black_Footed_Albatross_0001_796111.jpg
                            └─── ...
                    └─── 002.Laysan_Albatross
                    └─── ...
  • Stanford Cars (Car)

    -/StanfordCars
          └─── cars_test
                    └─── 00001.jpg
                    └─── 00002.jpg
                    └─── ...
          └─── cars_train
                    └─── 00001.jpg
                    └─── 00002.jpg
                    └─── ...
          └─── devkit
                    └─── cars_train_annos.mat
          └─── cars_test_annos_withlabels.mat
  • Stanford Dogs (Dog)

    -/StanfordDogs
          └─── Images
              └─── n02085620-Chihuahua
                      └─── n02085620_10074.jpg
                      └─── ...
              └─── n02085782-Japanese_spaniel
                      └─── ...
          └─── train_list.mat
          └─── test_list.mat

Run

  1. git clone this repo.

  2. Prepare data and modify DATAPATH in datasets/<abcd>_dataset.py.

  3. Set configurations in config.py (Training Config, Model Config, Dataset/Path Config):

    tag = 'aircraft'  # 'aircraft', 'bird', 'car', or 'dog'
  4. Run `$ nohup python3 train.py > progress.bar &` to train in the background.

  5. Run `$ tail -f progress.bar` to monitor training progress (the tqdm package is required; other logs are written to <config.save_dir>/train.log).

  6. Set configurations in config.py (Eval Config) and run $ python3 eval.py for evaluation and visualization.

Attention Maps Visualization

The code in eval.py also generates attention-map visualizations: the raw image, the attention heat map, and the image weighted by the attention map.
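A minimal sketch of how such an overlay can be produced, using NumPy only. The red/blue colorization, the function name, and the alpha value are assumptions for illustration, not the repo's eval.py code:

```python
import numpy as np

def overlay_attention(image, attention, alpha=0.5):
    """Blend a normalized attention map onto an RGB image.
    image:     (H, W, 3) float array in [0, 1]
    attention: (H, W) float array (any range), already resized to image size
    returns:   (H, W, 3) blended image in [0, 1]
    """
    # Normalize the attention map to [0, 1].
    att = attention - attention.min()
    att = att / (att.max() + 1e-8)
    # Simple red-hot / blue-cold colorization of the attention map.
    heat = np.stack([att, np.zeros_like(att), 1.0 - att], axis=-1)
    # Alpha-blend the heat map over the image.
    return (1 - alpha) * image + alpha * heat
```

In practice the raw attention maps are much smaller than the input image and must be upsampled (e.g. bilinearly) before blending.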

License

MIT License, Copyright (c) 2019 Yuchong Gu. See the LICENSE file for the full text.
