SP-NAS is an efficient neural architecture search algorithm for the backbone networks of object detection and semantic segmentation models. Existing object detectors usually take a feature extraction network that was designed for and pre-trained on the image classification task as the backbone. We propose an efficient, flexible, and task-oriented search scheme based on NAS: a two-phase, serial-to-parallel search that avoids repeated ImageNet pre-training and long training from scratch.
This method has two phases:
- Swap-expand-reignite policy: growth starts from a small network to avoid repeated ImageNet pre-training (a minimal sketch of this loop is given after the list).
  - New candidate networks are obtained by repeatedly "swapping" or "expanding" the grown network.
  - Candidate networks are quickly trained and evaluated with inherited parameters.
  - When growth reaches a bottleneck, the network is re-ignited (re-trained on ImageNet); re-ignition occurs at most twice.
- Constrained optimal network: the serial network that achieves the maximum performance under limited network resources (latency, device memory usage, or complexity) is selected.
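The serial phase can be summarized as a simple search loop. The sketch below is only illustrative: the helper functions (`swap`, `expand`, `fast_train_with_inherited_params`, `reignite_on_imagenet`, `evaluate`) are hypothetical stand-ins for the corresponding steps and are not part of this repository.

```python
# Minimal, illustrative sketch of the serial "swap-expand-reignite" loop.
# All helpers below are hypothetical placeholders, not the actual implementation.
import random

def swap(net):                     # placeholder: change the type/position of one block
    return net + ["swapped"]

def expand(net):                   # placeholder: append a new block to the grown network
    return net + ["block"]

def fast_train_with_inherited_params(candidate, parent):
    pass                           # placeholder: short fine-tuning that reuses the parent's weights

def reignite_on_imagenet(net):
    pass                           # placeholder: re-train the grown network on ImageNet

def evaluate(net):
    return random.random()         # placeholder: proxy score of the candidate (e.g. mAP on a small split)

def serial_search(seed_net, rounds=20, patience=5, max_reignitions=2):
    best_net, best_score = seed_net, evaluate(seed_net)
    stall = reignitions = 0
    for _ in range(rounds):
        candidate = random.choice([swap, expand])(best_net)      # mutate the current best network
        fast_train_with_inherited_params(candidate, best_net)    # cheap: parameters are inherited
        score = evaluate(candidate)
        if score > best_score:
            best_net, best_score, stall = candidate, score, 0
        else:
            stall += 1
        if stall >= patience and reignitions < max_reignitions:  # growth bottleneck: re-ignite
            reignite_on_imagenet(best_net)
            stall, reignitions = 0, reignitions + 1
    return best_net

print(serial_search(["stem"]))
```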
Search space configuration:
- Block type: Basic Block, BottleNeck Block, and ResNeXt;
- Network depth: 8 to 60 blocks;
- Number of stages: 5 to 7;
- Width: positions in the block sequence where the channel count is doubled (see the constraint-check sketch after this list).
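For illustration, the snippet below checks a candidate serial backbone against these constraints. The encoding used here (a list of block-type names per stage plus the positions where the channel count doubles) is an assumption made for this example, not necessarily the encoding used by the actual code.

```python
# Illustrative check of a candidate against the search space above.
# The encoding (blocks grouped per stage + channel-doubling positions) is assumed for this example.
BLOCK_TYPES = {"BasicBlock", "BottleneckBlock", "ResNeXtBlock"}

def is_valid(stages, double_positions):
    """stages: list of lists of block-type names; double_positions: block indices where channels double."""
    depth = sum(len(s) for s in stages)
    if not 8 <= depth <= 60:                                    # network depth: 8 to 60 blocks
        return False
    if not 5 <= len(stages) <= 7:                               # number of stages: 5 to 7
        return False
    if any(b not in BLOCK_TYPES for s in stages for b in s):    # allowed block types only
        return False
    return all(0 <= p < depth for p in double_positions)        # width: valid doubling positions

candidate = [["BottleneckBlock"] * 3, ["BottleneckBlock"] * 4,
             ["BottleneckBlock"] * 6, ["BottleneckBlock"] * 3, ["BasicBlock"] * 2]
print(is_valid(candidate, double_positions=[3, 7, 13, 16]))     # True
```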
The benchmark dataset is COCO 2017, which can be downloaded from the official COCO website.
Prepare the hardware environment with Ascend processors.
For more information, please check the following resources: MindSpore Tutorials and MindSpore Python API.
```text
Spnas
├── eval.py                     # inference entry
├── train.py                    # pre-training entry
├── image
│   └── spnas.png               # the illustration of Spnas network
├── readme.md                   # Readme
├── scripts
│   └── run_distributed.sh      # pre-training script for all tasks
└── src
    ├── spnas.yml               # options/hyper-parameters of Spnas
    └── spnas_distributed.yml   # options/hyper-parameters of Spnas for distributed training
```
For details about hyperparameters, see src/spnas.yml.
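If you want to inspect the hyper-parameters programmatically before launching training, the configuration file is plain YAML and can be loaded as shown below. This is only a convenience sketch: it assumes the PyYAML package is installed, and the exact keys printed depend on your spnas.yml.

```python
# Quick inspection of the training configuration (assumes PyYAML is installed).
import yaml

with open("src/spnas.yml", "r") as f:
    config = yaml.safe_load(f)

for key, value in config.items():
    print(key, ":", value)
```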
python3 train.py --config_file=src/spnas.yml
Or one can run the following script for all tasks.
sh scripts/run_distributed.sh [RANK_TABLE_PATH]
Inference example:
Modify src/eval.yml:
models_folder: [CHECKPOINT_PATH] # /xxx/tasks/1013.135941.325/parallel/1/
python3 eval.py
The results are evaluated by AP (mean average precision) values, and the output format is as follows.
current valid perfs [mAP: 49.1, AP50: 67.1, AP_small: 31.0, AP_medium: 52.6, AP_large: 63.7]
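If you need these numbers in a script (for example, to track several runs), the printed line can be parsed into a dictionary; the snippet below assumes exactly the output format shown above.

```python
# Parse the printed perfs line into a dict of metric name -> value.
import re

line = "current valid perfs [mAP: 49.1, AP50: 67.1, AP_small: 31.0, AP_medium: 52.6, AP_large: 63.7]"
metrics = {name: float(value) for name, value in re.findall(r"(\w+): ([\d.]+)", line)}
print(metrics["mAP"])   # 49.1
```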
The results on detection tasks are listed below.
COCO results:
| Method    | mAP  | AP50 | AP_small | AP_medium | AP_large |
|-----------|------|------|----------|-----------|----------|
| SPNet     | 49.1 | 67.1 | 31.0     | 52.6      | 63.7     |
| AmoebaNet | 43.4 | -    | -        | -         | -        |
| NAS-FPN   | 48.0 | -    | -        | -         | -        |
Please check the official homepage.