
SP-NAS (Serial-to-Parallel Backbone Search for Object Detection)

Algorithm Introduction

SP-NAS is an efficient architecture search algorithm for backbone networks in object detection and semantic segmentation. Existing object detectors usually use a feature extraction network designed and pre-trained on an image classification task as the backbone. We propose an efficient, flexible, and task-oriented search scheme based on NAS: a two-phase, serial-to-parallel search solution that reduces repeated ImageNet pre-training and long training from scratch.

Algorithm Principles

This method has two phases:

  1. In the serial phase, the block sequence with the optimal scaling ratio and output channels is found using the "swap-expand-reignite" search policy. This policy guarantees that a newly searched architecture completely inherits the weights of the architecture it was morphed from.
  2. In the parallel phase, parallelized network structures are designed: sub-networks attached to different feature layers are searched to better fuse high-level and low-level semantic features. The following figure shows the search policy.

[Figure: SP-NAS serial-to-parallel search policy (image/spnas.png)]
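
Below is a minimal, runnable sketch of the serial-phase "swap-expand-reignite" loop, assuming a toy architecture encoding (a list of block widths) and a placeholder scoring function; the function names and encoding are illustrative stand-ins, not the actual Spnas API.

import random

def swap(arch):
    # "Swap": exchange two adjacent blocks; weights carry over (morphism).
    a = list(arch)
    i = random.randrange(len(a) - 1)
    a[i], a[i + 1] = a[i + 1], a[i]
    return a

def expand(arch):
    # "Expand": insert a copy of an existing block, inheriting its weights.
    a = list(arch)
    i = random.randrange(len(a))
    return a[:i + 1] + [a[i]] + a[i + 1:]

def fast_eval(arch):
    # Placeholder for a quick fine-tune + evaluation on inherited weights;
    # replace with a real proxy-task mAP measurement.
    return random.random()

def serial_search(seed, rounds=100, max_reignitions=2, patience=10):
    best, best_score = seed, fast_eval(seed)
    reignitions = stall = 0
    for _ in range(rounds):
        child = random.choice([swap, expand])(best)
        score = fast_eval(child)
        if score > best_score:
            best, best_score, stall = child, score, 0
        else:
            stall += 1
        if stall >= patience and reignitions < max_reignitions:
            # Growth bottleneck: "reignite" by re-training on ImageNet
            # (at most twice, per the policy below); stubbed as a re-score.
            best_score, stall = fast_eval(best), 0
            reignitions += 1
    return best

print(serial_search([64, 64, 128, 128, 256, 256, 512, 512]))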

Search Space and Search Policy

Serial-level

  • Swap-expand-reignite policy: growth starts from a small network to avoid repeated ImageNet pre-training.

  • A new candidate network is obtained by "swapping" or "expanding" the grown network multiple times.

  • Candidate networks are quickly trained and evaluated based on the inherited parameters.

  • When growth reaches a bottleneck, the network is re-trained on ImageNet ("reignition"); reignition is performed at most twice.

  • Constrained optimal network: the serial network that maximizes performance under limited network resources (latency, video memory usage, or complexity) is selected.

  • Search space configuration (a sampling sketch follows this list):

      • Block type: Basic Block, BottleNeck Block, and ResNeXt;

      • Network depth: 8 to 60 blocks;

      • Number of stages: 5 to 7;

      • Width: the positions in the block sequence at which the channel size is doubled.
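
As an illustration of this search space, the following sketch samples one serial candidate; the dictionary encoding is invented for this example and is not the Spnas configuration format.

import random

def sample_serial_arch():
    depth = random.randint(8, 60)    # network depth: 8 to 60 blocks
    stages = random.randint(5, 7)    # number of stages: 5 to 7
    # Width: choose the positions in the block sequence at which the
    # channel size doubles (one doubling per stage boundary).
    doublings = sorted(random.sample(range(1, depth), stages - 1))
    return {
        "block": random.choice(["BasicBlock", "BottleneckBlock", "ResNeXt"]),
        "depth": depth,
        "stages": stages,
        "channel_double_at": doublings,
    }

print(sample_serial_arch())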

Parallel-level

  • Based on the SerialNet found in the serial search phase (or an existing hand-crafted serial network such as the ResNet series), search for a parallel structure stacked on SerialNet to better utilize and fuse feature information of different resolutions from different feature layers.
  • Search policy: random sampling under the resource constraints; the probability of adding a sub-network is inversely proportional to its FLOPs (see the sketch after this list).
  • Search space: SerialNet is divided into L segments according to the number of feature layers, and K sub-networks are searched for in each phase.
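
A minimal sketch of this sampling rule follows: each candidate sub-network is drawn with probability inversely proportional to its FLOPs and kept only while the resource budget allows. The candidate list and FLOPs accessor are hypothetical placeholders, not the actual Spnas interfaces.

import random

def sample_parallel_subnets(candidates, flops_of, budget):
    # Weight each candidate by 1 / FLOPs, so cheaper sub-networks are
    # more likely to be added under the resource constraint.
    pool = list(candidates)
    weights = [1.0 / flops_of(c) for c in pool]
    chosen, used = [], 0.0
    while pool:
        c = random.choices(pool, weights=weights, k=1)[0]
        i = pool.index(c)
        pool.pop(i)
        weights.pop(i)
        if used + flops_of(c) <= budget:  # keep only if it fits the budget
            chosen.append(c)
            used += flops_of(c)
    return chosen

# Example with made-up sub-networks labeled (name, GFLOPs):
subnets = [("A", 2.0), ("B", 4.0), ("C", 8.0)]
print(sample_parallel_subnets(subnets, flops_of=lambda s: s[1], budget=6.0))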

Dataset

The benchmark dataset can be downloaded as follows:

COCO2017

Requirements

Hardware (Ascend)

Prepare a hardware environment with Ascend processors.

Framework

MindSpore

For more information, please check the resources below

MindSpore Tutorials
MindSpore Python API

Script Description

Scripts and Sample Code

Spnas
├── eval.py                   # inference entry
├── train.py                  # pre-training entry
├── requirements.txt          # dependencies
├── image
│   └── spnas.png             # illustration of the Spnas network
├── README.md                 # this file
├── scripts
│   └── run_distributed.sh    # pre-training script for all tasks
└── src
    ├── spnas.yml             # options/hyper-parameters of Spnas
    └── spnas_distributed.yml # options/hyper-parameters of distributed Spnas

Script Parameter

For details about hyperparameters, see src/spnas.yml.

Training Process

For training

python3 train.py --config_file=src/spnas.yml

Or one can run the following script for all tasks:

sh scripts/run_distributed.sh [RANK_TABLE_PATH]

Evaluation

Evaluation Process

Inference example:

Modify src/eval.yml to point models_folder at the checkpoint directory:

models_folder: [CHECKPOINT_PATH]  # e.g. /xxx/tasks/1013.135941.325/parallel/1/

Then run:

python3 eval.py

Evaluation Result

The results are evaluated by mAP (mean Average Precision) and related AP values, reported in the following format:

current valid perfs [mAP: 49.1, AP50: 67.1, AP_small: 31.0, AP_medium: 52.6, AP_large: 63.7]

Performance

Inference Performance

The results on detection tasks are listed below.

COCO results:

Method      mAP    AP50   AP_small   AP_medium   AP_large
SPNet       49.1   67.1   31.0       52.6        63.7
AmoebaNet   43.4   -      -          -           -
NAS-FPN     48.0   -      -          -           -

ModelZoo Homepage

Please check the official homepage.
