# Part-Aware Prototype-Aligned Interpretable Image Classification with Basic Feature Domain

Paper accepted at AJCAI 2023 (https://ajcai2023.org/index.html).

## Citation

If you use this repository, please cite:

```bibtex
@inproceedings{li2023part,
  title={Part-Aware Prototype-Aligned Interpretable Image Classification with Basic Feature Domain},
  author={Li, Liangping and Gong, Xun and Wang, Chenzhong and Kong, Weiji},
  booktitle={Australasian Joint Conference on Artificial Intelligence},
  pages={185--196},
  year={2023},
  organization={Springer}
}
```

## Getting Started

It is recommended to run PaProtoPNet on a multi-GPU machine to achieve better results.

Requirements: PyTorch, NumPy, OpenCV (cv2), Augmentor

We take the CUB-200-2011 dataset as an example.

This code package is based on ProtoPNet (https://github.com/cfchen-duke/ProtoPNet) and TesNet (https://github.com/JackeyWang96/TesNet).
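
To quickly confirm that the listed requirements are available before preprocessing or training, you can run a small check like the following (a convenience sketch, not part of the repository):

```python
# Sanity check for the packages listed under Requirements and for visible GPUs.
import torch
import numpy
import cv2
import Augmentor  # noqa: F401  (only needed for the augmentation step)

print("PyTorch:", torch.__version__)
print("NumPy:", numpy.__version__)
print("OpenCV:", cv2.__version__)
print("CUDA available:", torch.cuda.is_available(), "| GPUs:", torch.cuda.device_count())
```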

## Parameter Setting

1. Most parameters related to model training are defined in the class `DataConfiger` in `PaProtoPNet/settings_CUB.py`. You can change the running GPU devices, dataset location, batch sizes, learning rates, and more (see the optimizer sketch after this list):

```python
self.feature_type = "cluster"
# self.feature_type = "row"

...

if model_name == "resnet152":
    self.devices = [2, 3, 4, 5, 6, 7]
    self.train_batch_size = 30
    self.test_batch_size = 80
    self.train_push_batch_size = 75

...

self.joint_optimizer_lrs = {'features': 1e-4,
                            'add_on_layers': 3e-3,
                            'prototype_vectors': 3e-3}
self.joint_lr_step_size = 2

self.warm_optimizer_lrs = {'add_on_layers': 3e-3,
                           'prototype_vectors': 3e-3}

self.last_layer_optimizer_lr = 1e-4
```

2. Set the feature-aggregation mode via `self.feature_type` (`row` or `cluster`).

3. The `dataset_name` variable in `PaProtoPNet/settings_CUB.py` determines the dataset used for training:

```python
dataset_name = 'CUB'
# dataset_name = 'CUB_full'
# dataset_name = 'CAR'
```
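
For orientation, the sketch below shows how the three learning-rate groups above (joint, warm-up, and last-layer) are typically wired into optimizers in ProtoPNet-style code. The module/parameter names (`features`, `add_on_layers`, `prototype_vectors`, `last_layer`), the `gamma` value, and the omitted weight decays are assumptions; the actual construction in this repository may differ.

```python
import torch

def build_optimizers(model, cfg):
    # Joint stage: train the whole network with per-group learning rates.
    joint_optimizer = torch.optim.Adam([
        {"params": model.features.parameters(),
         "lr": cfg.joint_optimizer_lrs["features"]},
        {"params": model.add_on_layers.parameters(),
         "lr": cfg.joint_optimizer_lrs["add_on_layers"]},
        {"params": model.prototype_vectors,
         "lr": cfg.joint_optimizer_lrs["prototype_vectors"]},
    ])
    joint_lr_scheduler = torch.optim.lr_scheduler.StepLR(
        joint_optimizer, step_size=cfg.joint_lr_step_size, gamma=0.1)

    # Warm-up stage: only the add-on layers and prototype vectors are updated.
    warm_optimizer = torch.optim.Adam([
        {"params": model.add_on_layers.parameters(),
         "lr": cfg.warm_optimizer_lrs["add_on_layers"]},
        {"params": model.prototype_vectors,
         "lr": cfg.warm_optimizer_lrs["prototype_vectors"]},
    ])

    # Last-layer stage: fine-tune only the final classification layer.
    last_layer_optimizer = torch.optim.Adam([
        {"params": model.last_layer.parameters(),
         "lr": cfg.last_layer_optimizer_lr},
    ])
    return joint_optimizer, joint_lr_scheduler, warm_optimizer, last_layer_optimizer
```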

## Preprocess the datasets

1. Download the datasets from the following two links.
2. Crop the images and split the cropped images with `./preprocess_data/cropimages.py` (a sketch follows this list):
   - The cropped training images go in the directory `./datasets/cub200_cropped/train_cropped/`
   - The cropped test images go in the directory `./datasets/cub200_cropped/test_cropped/`
3. Augment the training set with `./preprocess_data/img_aug.py` (a sketch follows this list):
   - The augmented training set goes in the directory `./datasets/cub200_cropped/train_cropped_augmented/`
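
The following two sketches illustrate steps 2 and 3. They assume the standard CUB-200-2011 annotation files (`images.txt`, `bounding_boxes.txt`, `train_test_split.txt`) and an extraction path of `datasets/CUB_200_2011`; the actual scripts `cropimages.py` and `img_aug.py` may differ in paths, operations, and sample counts.

```python
import os
from PIL import Image

# Crop each image to its annotated bounding box and split into train/test
# folders (assumed behaviour of preprocess_data/cropimages.py).
cub_root = "datasets/CUB_200_2011"   # assumed location of the raw dataset
out_root = "datasets/cub200_cropped"

def read_fields(name):
    with open(os.path.join(cub_root, name)) as f:
        return [line.split() for line in f]

paths = {img_id: rel for img_id, rel in read_fields("images.txt")}
boxes = {img_id: [float(v) for v in box] for img_id, *box in read_fields("bounding_boxes.txt")}
splits = {img_id: flag for img_id, flag in read_fields("train_test_split.txt")}

for img_id, rel_path in paths.items():
    x, y, w, h = boxes[img_id]
    subset = "train_cropped" if splits[img_id] == "1" else "test_cropped"
    dst = os.path.join(out_root, subset, rel_path)
    os.makedirs(os.path.dirname(dst), exist_ok=True)
    img = Image.open(os.path.join(cub_root, "images", rel_path)).convert("RGB")
    img.crop((int(x), int(y), int(x + w), int(y + h))).save(dst)
```

```python
import os
import Augmentor

# Offline augmentation of the cropped training set with the Augmentor package
# (assumed behaviour of preprocess_data/img_aug.py; the operations and the
# per-class sample count are illustrative).
src_root = "datasets/cub200_cropped/train_cropped"
dst_root = "datasets/cub200_cropped/train_cropped_augmented"

for class_dir in sorted(os.listdir(src_root)):
    p = Augmentor.Pipeline(
        source_directory=os.path.join(src_root, class_dir),
        output_directory=os.path.abspath(os.path.join(dst_root, class_dir)),
    )
    p.rotate(probability=1.0, max_left_rotation=15, max_right_rotation=15)
    p.shear(probability=1.0, max_shear_left=10, max_shear_right=10)
    p.skew(probability=1.0, magnitude=0.2)
    p.flip_left_right(probability=0.5)
    p.sample(200)   # hypothetical number of augmented images per class
```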

## Pre-trained backbone networks

1. Download ResNet50 pretrained on iNaturalist2017 (filename on Google Drive: `BBN.iNaturalist2017.res50.180epoch.best_model.pth`) and place it in the folder `features/state_dicts` (a loading sketch follows this list).
2. The models pre-trained on ImageNet do not need to be downloaded manually; the code automatically downloads them to the folder `preprocess_data`.
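
A minimal sketch of loading that checkpoint into a torchvision ResNet50, assuming the checkpoint may wrap its weights in a `state_dict` entry and prefix keys with `module.`; the repository's feature-extractor code may remap keys differently.

```python
import torch
import torchvision

ckpt = torch.load(
    "features/state_dicts/BBN.iNaturalist2017.res50.180epoch.best_model.pth",
    map_location="cpu",
)
state_dict = ckpt.get("state_dict", ckpt)            # unwrap if the weights are nested

backbone = torchvision.models.resnet50(weights=None)  # use pretrained=False on older torchvision
cleaned = {k.replace("module.", ""): v for k, v in state_dict.items()}
missing, unexpected = backbone.load_state_dict(cleaned, strict=False)
print("missing keys:", len(missing), "| unexpected keys:", len(unexpected))
```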

## Start running

Run `PaProtoPNet/main_ProtoPNet.py`.
