Use `docker pull` with any of the images and tags below to pull an image and try it for yourself. Note that the build-from-source (CPU), CUDA, and TensorRT images include additional dependencies such as Miniconda for compatibility with AzureML image deployment.

For example, to pull the latest released docker image with ONNX Runtime GPU, CUDA, and cuDNN support:

```shell
docker pull mcr.microsoft.com/azureml/onnxruntime:latest-cuda
```
| Build Flavor | Base Image | ONNX Runtime Docker Image tags | Latest |
|---|---|---|---|
| Source (CPU) | mcr.microsoft.com/azureml/onnxruntime | :v0.4.0, :v0.5.0 | :latest |
| CUDA (GPU) | mcr.microsoft.com/azureml/onnxruntime | :v0.4.0-cuda10.0-cudnn7, :v0.5.0-cuda10.1-cudnn7 | :latest-cuda |
| TensorRT (x86) | mcr.microsoft.com/azureml/onnxruntime | :v0.4.0-tensorrt19.03, :v0.5.0-tensorrt19.06 | :latest-tensorrt |
| OpenVino (VAD-M) | mcr.microsoft.com/azureml/onnxruntime | :v0.5.0-openvino-r1.1-vadm | :latest-openvino-vadm |
| OpenVino (MYRIAD) | mcr.microsoft.com/azureml/onnxruntime | :v0.5.0-openvino-r1.1-myriad | :latest-openvino-myriad |
| Server | mcr.microsoft.com/onnxruntime/server | :v0.4.0, :v0.5.0 | :latest |
```shell
# If you have a Linux machine, preface this command with "sudo"
docker build -t onnxruntime-source -f Dockerfile.source .
```

```shell
# If you have a Linux machine, preface this command with "sudo"
docker run -it onnxruntime-source
```
```shell
# If you have a Linux machine, preface this command with "sudo"
docker build -t onnxruntime-cuda -f Dockerfile.cuda .
```

```shell
# If you have a Linux machine, preface this command with "sudo"
docker run -it onnxruntime-cuda
```
```shell
# If you have a Linux machine, preface this command with "sudo"
docker build -t onnxruntime-ngraph -f Dockerfile.ngraph .
```

```shell
# If you have a Linux machine, preface this command with "sudo"
docker run -it onnxruntime-ngraph
```
```shell
# If you have a Linux machine, preface this command with "sudo"
docker build -t onnxruntime-trt -f Dockerfile.tensorrt .
```

```shell
# If you have a Linux machine, preface this command with "sudo"
docker run -it onnxruntime-trt
```
Build the onnxruntime image for one of the accelerators supported below.

Retrieve your docker image in one of the following ways.

```shell
docker build -t onnxruntime --build-arg DEVICE=$DEVICE .
```

DEVICE: Specifies the hardware target for building the OpenVINO Execution Provider. The options for the different Intel target devices are listed below.
| Device Option | Target Device |
|---|---|
| CPU_FP32 | Intel CPUs |
| GPU_FP32 | Intel Integrated Graphics |
| GPU_FP16 | Intel Integrated Graphics |
| MYRIAD_FP16 | Intel Movidius™ USB sticks |
| VAD-M_FP16 | Intel Vision Accelerator Design based on Movidius™ MyriadX VPUs |
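The DEVICE value from the table is passed straight through as a build argument, so a typo only surfaces late in the OpenVINO build. As a sketch, a small wrapper (hypothetical, not part of this repository) can validate the value first; it only echoes the build command rather than running it:

```shell
# Hypothetical helper: validate the OpenVINO DEVICE build argument
# against the supported options before composing the build command.
DEVICE="${1:-CPU_FP32}"
case "$DEVICE" in
  CPU_FP32|GPU_FP32|GPU_FP16|MYRIAD_FP16|VAD-M_FP16)
    # Compose (but do not execute) the docker build invocation.
    BUILD_CMD="docker build -t onnxruntime --build-arg DEVICE=$DEVICE --network host ."
    echo "$BUILD_CMD"
    ;;
  *)
    echo "Unsupported DEVICE: $DEVICE" >&2
    exit 1
    ;;
esac
```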
Retrieve your docker image in one of the following ways.

Build the docker image from the Dockerfile in this repository.

```shell
docker build -t onnxruntime-cpu --build-arg DEVICE=CPU_FP32 --network host .
```

Pull the official image from Docker Hub.

```shell
# Will be available with the next release
```

Run the docker image.

```shell
docker run -it onnxruntime-cpu
```
Retrieve your docker image in one of the following ways.

Build the docker image from the Dockerfile in this repository.

```shell
docker build -t onnxruntime-gpu --build-arg DEVICE=GPU_FP32 --network host .
```

Pull the official image from Docker Hub.

```shell
# Will be available with the next release
```

Run the docker image.

```shell
docker run -it --device /dev/dri:/dev/dri onnxruntime-gpu:latest
```
Retrieve your docker image in one of the following ways.

Build the docker image from the Dockerfile in this repository.

```shell
docker build -t onnxruntime-myriad --build-arg DEVICE=MYRIAD_FP16 --network host .
```

Pull the official image from Docker Hub.

```shell
# Will be available with the next release
```

Run the docker image.

```shell
docker run -it --network host --privileged -v /dev:/dev onnxruntime-myriad:latest
```
Build the docker image from the Dockerfile in this repository.

```shell
docker build -t onnxruntime-vadr --build-arg DEVICE=VAD-M_FP16 --network host .
```

Pull the official image from Docker Hub.

```shell
# Will be available with the next release
```

Run the docker image.

```shell
docker run -it --mount type=bind,source=/var/tmp,destination=/var/tmp --device /dev/ion:/dev/ion onnxruntime-vadr:latest
```
Build the docker image from the Dockerfile.server in this repository.

```shell
docker build -t {docker_image_name} -f Dockerfile.server .
```

Run the docker image, mounting your model folder and binding a local port to the server's port 8001.

```shell
docker run -v {localModelAbsoluteFolder}:{dockerModelAbsoluteFolder} -p {your_local_port}:8001 {docker_image_name} --model_path {dockerModelAbsolutePath}
```

Send HTTP requests to the docker container through the bound local port. See the full usage document for details.

```shell
curl -X POST -d "@request.json" -H "Content-Type: application/json" http://0.0.0.0:{your_local_port}/v1/models/mymodel/versions/3:predict
```
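The request body is the JSON mapping of the server's `PredictRequest` message: model inputs keyed by name, each a tensor in `TensorProto` JSON form. A minimal `request.json` sketch follows; the input name, dims, and payload below are hypothetical and depend entirely on your model (`dataType` 1 denotes FLOAT in the ONNX type enumeration):

```json
{
  "inputs": {
    "Input3": {
      "dims": ["1", "1", "28", "28"],
      "dataType": 1,
      "rawData": "<base64-encoded tensor data>"
    }
  }
}
```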
```shell
# If you have a Linux machine, preface this command with "sudo"
docker build -t onnxruntime-nuphar -f Dockerfile.nuphar .
```

```shell
# If you have a Linux machine, preface this command with "sudo"
docker run -it onnxruntime-nuphar
```