Contents

BiMLP Description

This paper studies the problem of designing compact binary architectures for vision multi-layer perceptrons (MLPs). We provide an extensive analysis of the difficulty of binarizing vision MLPs and find that previous binarization methods perform poorly due to the limited capacity of binary MLPs. In contrast with traditional CNNs, which use convolutional operations with large kernel sizes, the fully-connected (FC) layers in MLPs can be treated as convolutional layers with kernel size 1×1. Thus, the representation ability of the FC layers is limited when they are binarized, which restricts the spatial-mixing and channel-mixing capability on the intermediate features. To this end, we propose to improve the performance of the binary MLP (BiMLP) model by enriching the representation ability of its binary FC layers. We design a novel binary block that contains multiple branches to merge a series of outputs from the same stage, as well as a universal shortcut connection that encourages information flow from the previous stage. The downsampling layers are also carefully designed to reduce computational complexity while maintaining classification performance. Experimental results on the benchmark dataset ImageNet-1k demonstrate the effectiveness of the proposed BiMLP models, which achieve state-of-the-art accuracy compared to prior binary CNNs.

Paper: Yixing Xu, Xinghao Chen, Yunhe Wang. BiMLP: Compact Binary Architectures for Vision Multi-Layer Perceptrons. NeurIPS 2022.
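The limited capacity of binarized FC layers described above can be illustrated with a small sketch. This is not code from this repository: `binarize`, `binary_fc`, and the XNOR-Net-style per-channel scale `alpha` are assumed names for illustration only.

```python
import numpy as np

def binarize(x):
    # Sign binarization: every value collapses to +1 or -1 (0 mapped to +1),
    # so a binarized tensor carries at most one bit per entry.
    return np.where(x >= 0, 1.0, -1.0)

def binary_fc(x, w, alpha):
    # Binary FC layer sketch: both activations and weights are binarized
    # before the matrix product; alpha is a per-output-channel scale
    # (XNOR-Net style) that recovers part of the lost dynamic range.
    return (binarize(x) @ binarize(w)) * alpha

rng = np.random.default_rng(0)
x = rng.normal(size=(2, 4))        # batch of 2 samples, 4 features
w = rng.normal(size=(4, 3))        # 4 inputs -> 3 outputs
alpha = np.abs(w).mean(axis=0)     # common scaling heuristic (an assumption here)
y = binary_fc(x, w, alpha)         # shape (2, 3)
```

Because each binarized input column can only take values in {-1, +1}, the pre-scale outputs of such a 1×1 (FC) layer take very few distinct values, which is the capacity bottleneck the BiMLP multi-branch block is designed to mitigate.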


Dataset

Dataset used: [ImageNet2012]

  • Dataset size: 224×224 color images in 1,000 classes
    • Train: 1,281,167 images
    • Test: 50,000 images
  • Data format: JPEG
    • Note: data will be processed in dataset.py
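As a rough sketch of the kind of preprocessing a 224×224 ImageNet evaluation pipeline performs (the actual pipeline lives in src/dataset.py; the helper names and the ImageNet channel statistics below are standard values assumed for illustration, not taken from this repository):

```python
import numpy as np

# Standard ImageNet channel statistics (assumed, not read from dataset.py).
MEAN = np.array([0.485, 0.456, 0.406])
STD = np.array([0.229, 0.224, 0.225])

def center_crop(img, size=224):
    # Crop the central size x size window of an HWC image.
    h, w, _ = img.shape
    top, left = (h - size) // 2, (w - size) // 2
    return img[top:top + size, left:left + size]

def preprocess(img_uint8):
    # Scale to [0, 1], then normalize per channel.
    img = center_crop(img_uint8).astype(np.float32) / 255.0
    return (img - MEAN) / STD

img = np.zeros((256, 256, 3), dtype=np.uint8)  # dummy decoded JPEG
out = preprocess(img)                          # shape (224, 224, 3)
```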

Environment Requirements

  • Hardware: GPU
  • Framework: MindSpore (the reported results were obtained with MindSpore 1.8.1)
  • For installation, see the MindSpore official website.

Script description

Script and sample code

├── BiMLP
  ├── Readme.md             # descriptions about BiMLP
  ├── src
  │   ├── quan_conv.py          # quantized (binary) convolution layers
  │   ├── dataset.py            # dataset creation
  │   ├── wavemlp_20_3.py       # BiMLP network architecture
  ├── eval.py               # evaluation script

Eval process

Usage

After installing MindSpore via the official website, you can start evaluation as follows:

Launch

# infer example
GPU: python eval.py --dataset_path dataset --platform GPU --checkpoint_path [CHECKPOINT_PATH] --checkpoint_nm BiMLP_M

The checkpoint can be produced during the training process.
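The reported metric is top-1 classification accuracy. As a minimal sketch of how such a score is computed (a hypothetical helper, not the code in eval.py):

```python
import numpy as np

def top1_accuracy(logits, labels):
    # Fraction of samples whose highest-scoring class matches the label.
    preds = np.argmax(logits, axis=1)
    return float((preds == labels).mean())

# Toy batch: 3 samples, 2 classes; the first two predictions are correct.
logits = np.array([[0.1, 0.9],
                   [0.8, 0.2],
                   [0.3, 0.7]])
labels = np.array([1, 0, 0])
acc = top1_accuracy(logits, labels)  # 2 of 3 correct
```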

Result

result: {'acc': 0.7155689820742638} ckpt= ./BiMLP_M.ckpt

Model Description

Performance

Evaluation Performance

| Parameters        | GPU                         |
| ----------------- | --------------------------- |
| Model Version     | BiMLP_M                     |
| Resource          | GPU                         |
| Uploaded Date     | 26/11/2022 (day/month/year) |
| MindSpore Version | 1.8.1                       |
| Dataset           | ImageNet2012                |
| batch_size        | 64                          |
| outputs           | probability                 |
| Accuracy          | 1pc: 71.56%                 |

Description of Random Situation

In dataset.py, the random seed is set inside the "create_dataset" function. A random seed is also used in train.py.
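Fixing seeds as described above makes runs reproducible. A minimal sketch of the idea (the helper name is assumed; with MindSpore installed one would additionally call mindspore.set_seed):

```python
import random

import numpy as np

def set_seed(seed=1):
    # Seed every random source the pipeline touches. With MindSpore
    # installed, mindspore.set_seed(seed) would also be called here.
    random.seed(seed)
    np.random.seed(seed)

# Two runs with the same seed produce identical "random" draws.
set_seed()
a = np.random.rand(3)
set_seed()
b = np.random.rand(3)
```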

ModelZoo Homepage

Please check the official homepage.
