Convert an MXNet-Gluon model to Caffe. For example:
```python
from gluoncv.model_zoo import resnet18_v1
from convert import convert_model

net = resnet18_v1(pretrained=True)
text_net, binary_weights = convert_model(net, input_shape=(1, 3, 224, 224),
                                         softmax=False, to_bgr=True, merge_bn=True)
```
SSD models must be converted with the `convert_ssd_model` API, not `convert_model`.
```python
from convert import save_model

# Save the converted network and weights under the given prefix
save_model(text_net, binary_weights, prefix="tmp/resnet18_v1")
```
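The `merge_bn=True` option passed to `convert_model` folds each BatchNorm into the preceding Convolution. A minimal numpy sketch of that folding (names and shapes are illustrative, not the converter's internals; a 1x1 convolution is modelled as a plain matrix multiply):

```python
import numpy as np

rng = np.random.default_rng(0)
w = rng.standard_normal((4, 3))  # conv weight: (out_channels, in_channels)
b = np.zeros(4)                  # conv bias
gamma = rng.random(4) + 0.5      # BN scale
beta = rng.standard_normal(4)    # BN shift
mean = rng.standard_normal(4)    # BN running mean
var = rng.random(4) + 0.1        # BN running variance
eps = 1e-5

# Fold BN(conv(x)) = gamma * (Wx + b - mean) / sqrt(var + eps) + beta
# into a single convolution with adjusted weights and bias.
std = np.sqrt(var + eps)
w_folded = w * (gamma / std)[:, None]
b_folded = beta + (b - mean) * gamma / std

x = rng.standard_normal(3)
y_conv_bn = gamma * (w @ x + b - mean) / std + beta  # conv followed by BN
y_folded = w_folded @ x + b_folded                   # single folded conv
assert np.allclose(y_conv_bn, y_folded)
```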
Because caffe-ssd needs `PriorBox` and `DetectionOutput` layers, `convert_ssd_model` will extract them, together with the anchors, box_decoder and class_decoder, from the gluon net. The `step` parameter cannot be extracted from the anchors in the gluon net, so it is left at its Caffe default (step = img_size / layer_size, refer to caffe-ssd/prior_box_layer.cpp). To keep the anchors consistent with that default, define your network with `gluoncv.model_zoo.SSD` and train it as in gluoncv/scripts/detection/ssd/train_ssd.py; for example, ssd300_mobilenetv2:
```python
from gluoncv.model_zoo import SSD

image_size = 300
layer_sizes = (19, 10, 5, 3, 2, 1)
net = SSD(network="mobilenetv2_1.0",
          base_size=image_size,
          features=['features_linearbottleneck12_elemwise_add0_output',   # FeatureMap: 19x19
                    'features_linearbottleneck16_batchnorm2_fwd_output'], # FeatureMap: 10x10
          num_filters=[256, 256, 128, 128],  # Expand feature extractor with FeatureMaps: 5x5, 3x3, 2x2, 1x1 (stride=2)
          sizes=[21, 45, 99, 153, 207, 261, 315],
          ratios=[[1, 2, 0.5]] + [[1, 2, 0.5, 3, 1.0/3]] * 3 + [[1, 2, 0.5]] * 2,
          steps=[image_size / s for s in layer_sizes],  # Default setting of the caffe PriorBox layer
          classes=['A', 'B', 'C'],
          pretrained=True)
# ...train as in train_ssd.py
```
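The `steps` expression above just reproduces Caffe's default; a quick sanity check in plain Python, no gluoncv needed:

```python
# Default PriorBox step when it cannot be read from the gluon anchors:
# step = image_size / layer_size (see caffe-ssd/prior_box_layer.cpp).
image_size = 300
layer_sizes = (19, 10, 5, 3, 2, 1)
steps = [image_size / s for s in layer_sizes]
print(steps)  # -> [15.789..., 30.0, 60.0, 100.0, 150.0, 300.0]
```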
I've tested the SSD models converted from gluoncv on caffe-ssd and ncnn, and they work well.
`ReLU6` is one of the components of MobileNetV2; in Gluon it is implemented with a `clip` symbol with range [0, 6]. But Caffe does not support `clip`. Therefore, to convert MobileNetV2, the converter replaces the `clip` symbol with range [0, 6] by `Activation(relu)`. Of course, this introduces some error, especially for quantized models. However, as far as I know, some branches of Caffe and some platforms (such as ncnn) do support `ReLU6`; please reset the type of these activation layers manually if you want to deploy to such a branch or platform.
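The substitution is exact for activations in [0, 6] and only diverges above 6, which is where the conversion error comes from. A minimal illustration:

```python
def relu(x):
    """Caffe ReLU: what the converter emits in place of clip(0, 6)."""
    return max(x, 0.0)

def relu6(x):
    """ReLU6, i.e. clip(x, 0, 6), as used in MobileNetV2."""
    return min(max(x, 0.0), 6.0)

assert relu(-1.0) == relu6(-1.0) == 0.0  # identical below 0
assert relu(3.0) == relu6(3.0) == 3.0    # identical on [0, 6]
assert relu(8.0) == 8.0 and relu6(8.0) == 6.0  # diverge above 6
```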
Supported layer mappings:

| Gluon symbol | Caffe layer |
| --- | --- |
| Convolution | Convolution |
| BatchNorm | BatchNorm & Scale |
| Activation (relu only) | ReLU |
| Pooling | Pooling (MAX/AVG) |
| elemwise_add | Eltwise (ADD) |
| FullyConnected | InnerProduct |
| Flatten | Flatten |
| Concat | Concat |
| Dropout | Dropout |
| softmax | Softmax |
| transpose | Permute (caffe-ssd) |
| Reshape | Reshape (caffe-ssd) |
| ReLU6 | ReLU |
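For instance, a single Gluon `BatchNorm` comes out as a `BatchNorm` layer followed by a `Scale` layer in the generated prototxt, since Caffe's `BatchNorm` only normalizes and carries no learned scale/shift. A hand-written sketch of that pattern (layer names are illustrative, not the converter's exact output):

```
layer {
  name: "conv1_bn"
  type: "BatchNorm"
  bottom: "conv1"
  top: "conv1_bn"
  batch_norm_param { use_global_stats: true }
}
layer {
  name: "conv1_scale"
  type: "Scale"
  bottom: "conv1_bn"
  top: "conv1_bn"
  scale_param { bias_term: true }
}
```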