Model Deployment

This directory contains:

  1. The scripts that convert a fastreid model to Caffe/ONNX/TRT format.

  2. The examples that load an R50 baseline model in Caffe/ONNX/TRT and run inference.

Tutorial

Caffe Convert

Step-by-step pipeline for Caffe conversion

This is a tiny example of converting the fastreid baseline in meta_arch to a Caffe model. If you want to convert a more complex architecture, you will need to customize more things.

  1. Run caffe_export.py to get the converted Caffe model,

    python tools/deploy/caffe_export.py --config-file configs/market1501/bagtricks_R50/config.yml --name baseline_R50 --output caffe_R50_model --opts MODEL.WEIGHTS logs/market1501/bagtricks_R50/model_final.pth
    

    then you can check the Caffe model and prototxt in ./caffe_R50_model.

  2. Edit the prototxt following the next three steps (a small helper for the first edit is sketched at the end of this section):

    1. In baseline_R50.prototxt, modify the MaxPooling layers by deleting ceil_mode: false.

    2. Add avg_pooling in baseline_R50.prototxt

      layer {
          name: "avgpool1"
          type: "Pooling"
          bottom: "relu_blob49"
          top: "avgpool_blob1"
          pooling_param {
              pool: AVE
              global_pooling: true
          }
      }
      
    3. Change the last layer top name to output

      layer {
          name: "bn_scale54"
          type: "Scale"
          bottom: "batch_norm_blob54"
          top: "output" # bn_norm_blob54
          scale_param {
              bias_term: true
          }
      }
      
  3. (optional) You can open Netscope, then enter your network prototxt to visualize the network.

  4. Run caffe_inference.py to save Caffe model features with input images

     python caffe_inference.py --model-def outputs/caffe_model/baseline_R50.prototxt \
     --model-weights outputs/caffe_model/baseline_R50.caffemodel \
     --input test_data/*.jpg --output caffe_output
    
  5. Run demo/demo.py to get fastreid model features with the same input images, then verify that Caffe and PyTorch are computing the same value for the network (a fuller comparison sketch follows this list).

    np.testing.assert_allclose(torch_out, caffe_out, rtol=1e-3, atol=1e-6)
    
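For step 5, a minimal comparison sketch is shown below. It assumes both caffe_inference.py and demo/demo.py dump per-image features as .npy files; the file names here are hypothetical, so adjust them to however you saved the features.

    import numpy as np

    # Hypothetical paths: one feature file from demo/demo.py (PyTorch) and one
    # from caffe_inference.py, both for the same input image.
    torch_out = np.load("pytorch_output/0001.npy")
    caffe_out = np.load("caffe_output/0001.npy")

    # Same tolerance as suggested above.
    np.testing.assert_allclose(torch_out, caffe_out, rtol=1e-3, atol=1e-6)
    print("Caffe and PyTorch features match.")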

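As an aside, the first prototxt edit in step 2 (deleting every ceil_mode: false line) is mechanical enough to script. A tiny sketch, assuming the output directory from step 1:

    from pathlib import Path

    proto = Path("caffe_R50_model/baseline_R50.prototxt")
    lines = proto.read_text().splitlines(keepends=True)
    # Drop every `ceil_mode: false` line from the MaxPooling layers (step 2.1);
    # the avg_pooling and output-name edits in steps 2.2 and 2.3 remain manual.
    proto.write_text("".join(l for l in lines if "ceil_mode" not in l))
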
ONNX Convert

Step-by-step pipeline for ONNX conversion

This is a tiny example of converting the fastreid baseline in meta_arch to an ONNX model. As far as I know, ONNX supports most PyTorch operators; if some operators are not supported, you will need to customize them.
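For reference, this kind of export boils down to a single torch.onnx.export call. The sketch below only illustrates that pattern and is not the actual onnx_export.py implementation; the stand-in model, input size, and opset are assumptions.

    import torch
    import torchvision

    # Stand-in model so the sketch is self-contained; onnx_export.py builds the
    # fastreid baseline from the config file instead.
    model = torchvision.models.resnet50()
    model.eval()

    dummy_input = torch.randn(1, 3, 256, 128)  # NCHW; the baseline's default input size

    torch.onnx.export(
        model,
        dummy_input,
        "outputs/onnx_model/baseline_R50.onnx",
        input_names=["input"],
        output_names=["output"],
        opset_version=11,  # assumption; pick an opset your runtime supports
    )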

  1. Run onnx_export.py to get the converted ONNX model,

    python onnx_export.py --config-file root-path/bagtricks_R50/config.yml --name baseline_R50 --output outputs/onnx_model --opts MODEL.WEIGHTS root-path/logs/market1501/bagtricks_R50/model_final.pth
    

    then you can check the ONNX model in outputs/onnx_model.

  2. (optional) You can use Netron to visualize the network.

  3. Run onnx_inference.py to save ONNX model features with input images

     python onnx_inference.py --model-path outputs/onnx_model/baseline_R50.onnx \
     --input test_data/*.jpg --output onnx_output
    
  4. Run demo/demo.py to get fastreid model features with the same input images, then verify that ONNX Runtime and PyTorch are computing the same value for the network (a runnable sketch follows this list).

    np.testing.assert_allclose(torch_out, ort_out, rtol=1e-3, atol=1e-6)
    
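To make the check in step 4 concrete, the sketch below runs the exported model through ONNX Runtime and compares against a saved PyTorch feature. The preprocessing is deliberately simplified and the file paths are hypothetical; match whatever preprocessing demo/demo.py actually applies.

    import cv2
    import numpy as np
    import onnxruntime as ort

    sess = ort.InferenceSession("outputs/onnx_model/baseline_R50.onnx",
                                providers=["CPUExecutionProvider"])
    input_name = sess.get_inputs()[0].name

    img = cv2.imread("test_data/0001.jpg")       # BGR, HWC, uint8
    img = cv2.resize(img, (128, 256))            # (width, height) = (128, 256)
    blob = img[:, :, ::-1].astype(np.float32)    # BGR -> RGB
    blob = np.ascontiguousarray(blob.transpose(2, 0, 1))[None]  # HWC -> NCHW

    ort_out = sess.run(None, {input_name: blob})[0]
    torch_out = np.load("pytorch_output/0001.npy")  # hypothetical demo/demo.py dump
    np.testing.assert_allclose(torch_out, ort_out, rtol=1e-3, atol=1e-6)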

TensorRT Convert

Step-by-step pipeline for TensorRT conversion

This is a tiny example of converting the fastreid baseline in meta_arch to a TRT model.

First you need to convert the PyTorch model to ONNX format following ONNX Convert, and remember your output name. Then you can convert the ONNX model to TensorRT following the instructions below.
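Under the hood, the ONNX-to-TRT step is essentially the standard TensorRT builder flow. The sketch below is an assumption about what trt_export.py does internally (written against the TensorRT 8 Python API), not its actual code.

    import tensorrt as trt

    logger = trt.Logger(trt.Logger.WARNING)
    builder = trt.Builder(logger)
    network = builder.create_network(
        1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH))
    parser = trt.OnnxParser(network, logger)

    with open("outputs/onnx_model/baseline.onnx", "rb") as f:
        if not parser.parse(f.read()):
            for i in range(parser.num_errors):
                print(parser.get_error(i))
            raise RuntimeError("failed to parse the ONNX model")

    config = builder.create_builder_config()
    # config.set_flag(trt.BuilderFlag.FP16)  # roughly what --mode fp16 would enable

    engine_bytes = builder.build_serialized_network(network, config)
    with open("outputs/trt_model/baseline.engine", "wb") as f:
        f.write(engine_bytes)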

  1. Run command line below to get the converted TRT model from ONNX model,

    python trt_export.py --name baseline_R50 --output outputs/trt_model \
    --mode fp32 --batch-size 8 --height 256 --width 128 \
    --onnx-model outputs/onnx_model/baseline.onnx 
    

    then you can check the TRT model in outputs/trt_model.

  2. Run trt_inference.py to save TRT model features with input images (the engine-loading pattern is sketched after this list)

     python3 trt_inference.py --model-path outputs/trt_model/baseline.engine \
     --input test_data/*.jpg --batch-size 8 --height 256 --width 128 --output trt_output 
    
  3. Run demo/demo.py to get fastreid model features with the same input images, then verify that TensorRT and PyTorch are computing the same value for the network.

    np.testing.assert_allclose(torch_out, trt_out, rtol=1e-3, atol=1e-6)
    
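Loading the serialized engine back, as trt_inference.py does in step 2, follows the usual TensorRT runtime pattern. A minimal sketch, with device buffer allocation and the actual inference launch omitted for brevity:

    import tensorrt as trt

    logger = trt.Logger(trt.Logger.WARNING)
    with open("outputs/trt_model/baseline.engine", "rb") as f, trt.Runtime(logger) as runtime:
        engine = runtime.deserialize_cuda_engine(f.read())

    context = engine.create_execution_context()
    # From here, allocate input/output buffers on the GPU (e.g. with pycuda)
    # and run context.execute_v2(bindings) with pointers to that memory.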

Notice: int8 mode in the TensorRT runtime is not supported yet, and there are some bugs in the calibrator. Help wanted!

Acknowledgements

Thanks to CPFLAME, gcong18, YuxiangJohn and wiggin66 at the JDAI Model Acceleration Group for their help with PyTorch model conversion.
