# FasterRCNN TensorFlow Offline Inference

This repository provides scripts and instructions for offline inference of the FasterRCNN TensorFlow model on NPU.

## Notice

This sample is provided only as a reference for learning the Ascend software stack and is not intended for commercial use.

Before you begin, check the compatibility conditions below. A mismatch may cause the run to fail.

| Conditions | Requirement |
| --- | --- |
| CANN version | >= 5.0.3 |
| Chip platform | Ascend310 / Ascend310P3 |
| Third-party dependencies | see `requirements.txt` |

## Quick Start Guide

### 1. Clone the code

```
git clone https://gitee.com/ascend/ModelZoo-TensorFlow.git
cd ModelZoo-TensorFlow/ACL/Official/cv/FasterRCNN_for_ACL
```

### 2. Download and preprocess the dataset

1. Go to the `datapreprocess` directory.
2. Download the COCO 2017 dataset and generate the TFRecords:

   ```
   bash download_and_preprocess_mscoco.sh <data_dir_path>
   ```

   Note: the data is downloaded, preprocessed into TFRecord format, and saved under the `<data_dir_path>` directory (on the host). If you have already downloaded and created the TFRecord files (generated with TensorFlow's official TPU scripts), skip this step. If you have already downloaded the COCO images, run the following command to convert them to TFRecord format:

   ```
   python3 object_detection/dataset_tools/create_coco_tf_record.py --include_masks=False --val_image_dir=/your/val_tfrecord_file/path --val_annotations_file=/your/val_annotations_file/path/instances_val2017.json --output_dir=/your/tfrecord_file/out/path
   ```

3. Convert the dataset to bin files:

   ```
   python3 data_2_bin.py --validation_file_pattern /your/val_tfrecord_file/path/val_file_prefix* --binfilepath /your/bin_file_out_path
   ```

4. Create two dataset folders: one (`your_data_path`) for the `image_info` and `images` files, and another (`your_datasourceid_path`) for the `source_ids` files. Move the bin files into the correct directories.
5. Copy `instances_val2017.json` to the `FasterRCNN_for_ACL/scripts` directory.
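The exact binary layout produced by `data_2_bin.py` is not documented here; assuming the `images` bins are raw float32 dumps matching the network's 1×1024×1024×3 input (the shape later passed to `atc`), a quick sanity check of the generated files might look like this sketch:

```python
import numpy as np

# Assumption (not stated in the README): each "images" bin holds one raw
# float32 tensor of shape (1, 1024, 1024, 3), matching the atc option
# --input_shape "image:1,1024,1024,3".
EXPECTED_SHAPE = (1, 1024, 1024, 3)

def check_image_bin(path):
    """Load a raw .bin file, verify its element count, and reshape it."""
    data = np.fromfile(path, dtype=np.float32)
    expected = int(np.prod(EXPECTED_SHAPE))
    if data.size != expected:
        raise ValueError(f"{path}: {data.size} floats, expected {expected}")
    return data.reshape(EXPECTED_SHAPE)

if __name__ == "__main__":
    # Round-trip a dummy tensor to show the raw layout np.tofile produces.
    np.zeros(EXPECTED_SHAPE, dtype=np.float32).tofile("images_demo.bin")
    print(check_image_bin("images_demo.bin").shape)
```

Running this check over `your_data_path` before inference catches truncated or wrongly typed bin files early, instead of failing inside the benchmark tool.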

### 3. Offline inference

Convert the pb model to an om model:

- Go to the `FasterRCNN_for_ACL` folder.

- Set the environment variables.

  Set the environment variables as described in the documentation.
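The required variables are supplied by the CANN toolkit itself. A common way to set them, assuming the default install path `/usr/local/Ascend` (adjust to your installation), is:

```shell
# Assumption: CANN toolkit installed under /usr/local/Ascend (the default).
# The toolkit ships a set_env.sh that exports PATH, LD_LIBRARY_PATH, etc.
source /usr/local/Ascend/ascend-toolkit/set_env.sh

# Verify that atc is now on PATH before converting the model.
which atc
```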

- Convert the pb model to an om model:

  ```
  atc --model=/your/pb/path/your_fast_pb_name.pb --framework=3 --output=your_fastom_name --output_type=FP32 --soc_version=Ascend310P3 --input_shape="image:1,1024,1024,3;image_info:1,5" --keep_dtype=./keeptype.cfg --precision_mode=force_fp16 --out_nodes="generate_detections/combined_non_max_suppression/CombinedNonMaxSuppression:3;generate_detections/denormalize_box/concat:0;generate_detections/add:0;generate_detections/combined_non_max_suppression/CombinedNonMaxSuppression:1"
  ```

  Note: replace the model, output, and environment-variable parameters with your own values.

- Build the program:

  ```
  bash build.sh
  ```

- Run the program:

  ```
  cd scripts
  chmod +x benchmark_tf.sh
  ./benchmark_tf.sh --batchSize=1 --modelType=fastrcnn16 --outputType=fp32 --deviceId=2 --modelPath=/your/fastom/path/your_fastom_name.om --dataPath=/your/data/path --innum=2 --suffix1=image_info.bin --suffix2=images.bin --imgType=raw --sourceidpath=/your/datasourceid/path
  ```

  Note: replace the `modelPath`, `dataPath`, and `sourceidpath` parameters with your own values. Use absolute paths.

## Accuracy

### Results

The results below were obtained by running the adapted inference scripts above. To reproduce them, follow the steps in the Quick Start Guide.

Inference accuracy:

| Model | Data | Bbox |
| --- | --- | --- |
| offline inference | 5000 images | 35.4% |