Preprocessing Using Python Backend Example

This example shows how to preprocess inputs using the Python backend before they are passed to the TensorRT model for inference. The ensemble model chains an image preprocessing model (preprocess) and a TensorRT model (resnet50_trt) that performs the inference.
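
For orientation, the ensemble wiring lives in the ensemble model's config.pbtxt. A rough sketch of what such a config can look like (the tensor names, dims, and max_batch_size here are illustrative assumptions, not the example's actual config):

# Sketch of an ensemble config.pbtxt; names and dims are assumptions.
name: "ensemble_python_resnet50"
platform: "ensemble"
max_batch_size: 256
input [
  { name: "INPUT_0" data_type: TYPE_UINT8 dims: [ -1 ] }
]
output [
  { name: "OUTPUT_0" data_type: TYPE_FP32 dims: [ 1000 ] }
]
ensemble_scheduling {
  step [
    {
      model_name: "preprocess"
      model_version: -1
      input_map { key: "INPUT_0" value: "INPUT_0" }
      output_map { key: "OUTPUT_0" value: "preprocessed_image" }
    },
    {
      model_name: "resnet50_trt"
      model_version: -1
      input_map { key: "input" value: "preprocessed_image" }
      output_map { key: "output" value: "OUTPUT_0" }
    }
  ]
}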

1. Convert the PyTorch model to ONNX format:

Run onnx_exporter.py to convert the ResNet50 PyTorch model to ONNX format. The width and height dims are fixed at 224, but dynamic axes are used for the batch dimension so that dynamic batching works. The commands in steps 2 and 3 should be executed inside this Docker container.

$ docker run -it --gpus=all -v $(pwd):/workspace nvcr.io/nvidia/pytorch:xx.yy-py3 bash
$ pip install numpy pillow torchvision
$ python onnx_exporter.py --save model.onnx
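
For reference, a minimal sketch of what an exporter like onnx_exporter.py typically does (the argument handling and the "input"/"output" tensor names are assumptions, chosen to match the shapes passed to trtexec in step 3):

# Sketch of a ResNet50 -> ONNX export; details are illustrative.
import argparse

import torch
import torchvision.models as models

parser = argparse.ArgumentParser()
parser.add_argument("--save", default="model.onnx")
args = parser.parse_args()

model = models.resnet50(pretrained=True).eval()
dummy = torch.randn(1, 3, 224, 224)  # width and height fixed at 224

torch.onnx.export(
    model,
    dummy,
    args.save,
    input_names=["input"],
    output_names=["output"],
    # Mark the batch dimension dynamic so dynamic batching works.
    dynamic_axes={"input": {0: "batch"}, "output": {0: "batch"}},
)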

2. Create the model repository:

$ mkdir -p model_repository/ensemble_python_resnet50/1
$ mkdir -p model_repository/preprocess/1
$ mkdir -p model_repository/resnet50_trt/1

# Copy the Python model
$ cp model.py model_repository/preprocess/1
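
model.py implements the interface that Triton's Python backend expects: a TritonPythonModel class with initialize and execute methods. A minimal sketch of that structure (the tensor names and the exact transforms are assumptions; the shipped model.py is the authoritative version):

# Sketch of a Python backend preprocessing model; details are illustrative.
import io

import numpy as np
from PIL import Image
from torchvision import transforms

import triton_python_backend_utils as pb_utils


class TritonPythonModel:
    def initialize(self, args):
        # Standard ImageNet preprocessing for ResNet50.
        self.transform = transforms.Compose([
            transforms.Resize(256),
            transforms.CenterCrop(224),
            transforms.ToTensor(),
            transforms.Normalize(mean=[0.485, 0.456, 0.406],
                                 std=[0.229, 0.224, 0.225]),
        ])

    def execute(self, requests):
        responses = []
        for request in requests:
            # The raw encoded image arrives as a UINT8 tensor.
            raw = pb_utils.get_input_tensor_by_name(request, "INPUT_0")
            image = Image.open(io.BytesIO(raw.as_numpy().tobytes())).convert("RGB")
            # NCHW float32 tensor; batch handling depends on the model config.
            array = self.transform(image).numpy()[None, ...].astype(np.float32)
            out = pb_utils.Tensor("OUTPUT_0", array)
            responses.append(pb_utils.InferenceResponse(output_tensors=[out]))
        return responses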

3. Build a TensorRT engine for the ONNX model:

Pass --fp16 to enable FP16 precision. To enable dynamic shapes, use --minShapes, --optShapes, and --maxShapes together with --explicitBatch:

$ trtexec --onnx=model.onnx --saveEngine=./model_repository/resnet50_trt/1/model.plan --explicitBatch --minShapes=input:1x3x224x224 --optShapes=input:1x3x224x224 --maxShapes=input:256x3x224x224 --fp16
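
As an optional sanity check (an addition, not part of the example), the generated plan can be deserialized with the TensorRT Python API inside the same container:

# Verify that trtexec produced a loadable engine.
import tensorrt as trt

logger = trt.Logger(trt.Logger.WARNING)
with open("model_repository/resnet50_trt/1/model.plan", "rb") as f:
    engine = trt.Runtime(logger).deserialize_cuda_engine(f.read())
assert engine is not None, "engine failed to deserialize"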

4. Start the server container:

From the directory that contains model_repository, run this command to start the server Docker container:

$ docker run --gpus=all -it --rm -p8000:8000 -p8001:8001 -p8002:8002 -v$(pwd):/workspace/ -v$(pwd)/model_repository:/models nvcr.io/nvidia/tritonserver:xx.yy-py3 bash
$ pip install numpy pillow torchvision
$ tritonserver --model-repository=/models
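
Once the server is up, readiness can be checked from the host, for example with the tritonclient package (this check is an addition, not part of the example):

# pip install tritonclient[http]
import tritonclient.http as httpclient

client = httpclient.InferenceServerClient(url="localhost:8000")
assert client.is_server_ready()
assert client.is_model_ready("ensemble_python_resnet50")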

5. Start the client to test:

Under python_backend/examples/resnet50_trt, run the commands below to download a test image and run the client in a Docker container:

$ wget https://raw.githubusercontent.com/triton-inference-server/server/main/qa/images/mug.jpg -O "mug.jpg"
$ docker run --rm --net=host -v $(pwd):/workspace/ nvcr.io/nvidia/tritonserver:xx.yy-py3-sdk python client.py --image mug.jpg 
The result of classification is: COFFEE MUG

Since the input image is a mug and the inference result is "COFFEE MUG", the classification is correct.
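
For reference, a minimal sketch of a client along the lines of client.py (the tensor names, the raw-bytes input format, and the output handling are assumptions; the shipped client.py may differ):

# Sketch of an HTTP client for the ensemble; details are illustrative.
import argparse

import numpy as np
import tritonclient.http as httpclient

parser = argparse.ArgumentParser()
parser.add_argument("--image", required=True)
args = parser.parse_args()

# Send the raw encoded image bytes; the preprocess model decodes them.
data = np.fromfile(args.image, dtype=np.uint8)
data = np.expand_dims(data, axis=0)  # leading batch dim, per model config

client = httpclient.InferenceServerClient(url="localhost:8000")
inp = httpclient.InferInput("INPUT_0", list(data.shape), "UINT8")
inp.set_data_from_numpy(data)

result = client.infer("ensemble_python_resnet50", inputs=[inp])
scores = result.as_numpy("OUTPUT_0")
print("Top class index:", int(np.argmax(scores)))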
