This example shows how to preprocess your inputs using the Python backend before they are passed to the TensorRT model for inference. The ensemble consists of an image preprocessing model implemented with the Python backend (preprocess) and a TensorRT model (resnet50_trt) that performs the inference.
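The preprocessing logic lives in model.py, which implements the Triton Python backend's TritonPythonModel interface. A minimal sketch of such a model is shown below for orientation; the tensor names INPUT_0/OUTPUT_0 and the exact transform pipeline are illustrative assumptions, and the model.py shipped with this example is the authoritative version.

# Minimal sketch of a Python backend preprocessing model (illustrative only;
# see the model.py shipped with this example for the real implementation).
# Tensor names "INPUT_0"/"OUTPUT_0" are assumptions, not the example's names.
import io

import numpy as np
import torchvision.transforms as transforms
import triton_python_backend_utils as pb_utils
from PIL import Image


class TritonPythonModel:
    def initialize(self, args):
        # Standard ImageNet preprocessing for ResNet50.
        self.transform = transforms.Compose([
            transforms.Resize(256),
            transforms.CenterCrop(224),
            transforms.ToTensor(),
            transforms.Normalize(mean=[0.485, 0.456, 0.406],
                                 std=[0.229, 0.224, 0.225]),
        ])

    def execute(self, requests):
        responses = []
        for request in requests:
            # Raw encoded image bytes sent by the client.
            raw = pb_utils.get_input_tensor_by_name(request, "INPUT_0")
            image = Image.open(io.BytesIO(raw.as_numpy().tobytes())).convert("RGB")
            preprocessed = self.transform(image).numpy().astype(np.float32)
            out = pb_utils.Tensor("OUTPUT_0", preprocessed)
            responses.append(pb_utils.InferenceResponse(output_tensors=[out]))
        return responses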
1. Convert the PyTorch model to ONNX format:
Run onnx_exporter.py to convert the ResNet50 PyTorch model to ONNX format. The width and height dimensions are fixed at 224, but dynamic-axes arguments are passed so that the batch dimension supports dynamic batching. The commands in steps 2 and 3 must be executed inside this Docker container.
$ docker run -it --gpus=all -v $(pwd):/workspace nvcr.io/nvidia/pytorch:xx.yy-py3 bash
$ pip install numpy pillow torchvision
$ python onnx_exporter.py --save model.onnx
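For reference, the core of the export that onnx_exporter.py performs looks roughly like the sketch below. The input tensor name input matches what the trtexec command in step 3 expects; the output name and other details are assumptions.

# Rough sketch of the ONNX export; onnx_exporter.py is the authoritative version.
import torch
import torchvision.models as models

model = models.resnet50(pretrained=True).eval()
dummy_input = torch.randn(1, 3, 224, 224)  # width and height fixed at 224
torch.onnx.export(
    model,
    dummy_input,
    "model.onnx",
    input_names=["input"],
    output_names=["output"],  # output name is an assumption
    # Only the batch dimension is dynamic, which enables dynamic batching.
    dynamic_axes={"input": {0: "batch"}, "output": {0: "batch"}},
)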
2. Create the model repository:
$ mkdir -p model_repository/ensemble_python_resnet50/1
$ mkdir -p model_repository/preprocess/1
$ mkdir -p model_repository/resnet50_trt/1
# Copy the Python model
$ cp model.py model_repository/preprocess/1
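After this step (and after the engine is built in step 3), the repository should look roughly like the tree below; the config.pbtxt file for each of the three models ships with the example:

model_repository/
├── ensemble_python_resnet50/
│   ├── 1/                  # empty; the ensemble itself has no model file
│   └── config.pbtxt
├── preprocess/
│   ├── 1/
│   │   └── model.py
│   └── config.pbtxt
└── resnet50_trt/
    ├── 1/
    │   └── model.plan      # produced by trtexec in step 3
    └── config.pbtxt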
3. Build a TensorRT engine for the ONNX model:
Pass --fp16 to enable FP16 precision. To enable dynamic shapes, use --minShapes, --optShapes, and --maxShapes together with --explicitBatch:
$ trtexec --onnx=model.onnx --saveEngine=./model_repository/resnet50_trt/1/model.plan --explicitBatch --minShapes=input:1x3x224x224 --optShapes=input:1x3x224x224 --maxShapes=input:256x3x224x224 --fp16
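If you want to sanity-check the generated plan before serving it, an optional check with the TensorRT Python API (available inside the same PyTorch container) might look like this sketch:

# Optional sanity check: deserialize the plan to confirm it loads.
import tensorrt as trt

logger = trt.Logger(trt.Logger.WARNING)
with open("model_repository/resnet50_trt/1/model.plan", "rb") as f, \
        trt.Runtime(logger) as runtime:
    engine = runtime.deserialize_cuda_engine(f.read())
    assert engine is not None, "engine failed to deserialize"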
4. Start the Triton server:
From the directory that contains model_repository, run the commands below to start the server Docker container, install the Python dependencies inside it, and launch the server:
$ docker run --gpus=all -it --rm -p8000:8000 -p8001:8001 -p8002:8002 -v$(pwd):/workspace/ -v$(pwd)/model_repository:/models nvcr.io/nvidia/tritonserver:xx.yy-py3 bash
$ pip install numpy pillow torchvision
$ tritonserver --model-repository=/models
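Once the server logs show all three models in the READY state, you can optionally confirm readiness from the host with the tritonclient Python package (assumption: installed via pip install tritonclient[http]):

# Optional readiness probe from the host.
import tritonclient.http as httpclient

client = httpclient.InferenceServerClient(url="localhost:8000")
assert client.is_server_ready()
assert client.is_model_ready("ensemble_python_resnet50")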
5. Start the client to test:
Under python_backend/examples/resnet50_trt, run the commands below to download a test image and start the client Docker container:
$ wget https://raw.githubusercontent.com/triton-inference-server/server/main/qa/images/mug.jpg -O "mug.jpg"
$ docker run --rm --net=host -v $(pwd):/workspace/ nvcr.io/nvidia/tritonserver:xx.yy-py3-sdk python client.py --image mug.jpg
The result of classification is:COFFEE MUG
Since the input was an image of a mug and the inference result is "COFFEE MUG", the classification is correct.
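For reference, the core of what client.py does is sketched below; the input/output tensor names, the UINT8 datatype, and the batch handling are assumptions, and the shipped client.py is the authoritative version.

# Rough sketch of the client; client.py is the authoritative version.
import numpy as np
import tritonclient.http as httpclient

# Send the raw encoded image bytes; all preprocessing happens server-side.
with open("mug.jpg", "rb") as f:
    raw = np.frombuffer(f.read(), dtype=np.uint8)
raw = np.expand_dims(raw, axis=0)  # batch dimension (assumes batching is enabled)

client = httpclient.InferenceServerClient(url="localhost:8000")
inp = httpclient.InferInput("INPUT", list(raw.shape), "UINT8")
inp.set_data_from_numpy(raw)
result = client.infer("ensemble_python_resnet50", inputs=[inp])
scores = result.as_numpy("OUTPUT")
print("Predicted class index:", int(np.argmax(scores)))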