Dockerfiles
Published Microsoft Container Registry (MCR) Images
Use docker pull with any of the images and tags below to pull an image and try it for yourself. Note that the CPU, CUDA, and TensorRT images include additional dependencies like Miniconda for compatibility with AzureML image deployment.
Example: run docker pull mcr.microsoft.com/azureml/onnxruntime:latest-cuda to pull the latest released docker image with ONNX Runtime GPU, CUDA, and cuDNN support.
Build Flavor | Base Image | ONNX Runtime Docker Image tags | Latest |
---|---|---|---|
Source (CPU) | mcr.microsoft.com/azureml/onnxruntime | :v0.4.0, :v0.5.0, :v0.5.1, :v1.0.0 | :latest |
CUDA (GPU) | mcr.microsoft.com/azureml/onnxruntime | :v0.4.0-cuda10.0-cudnn7, :v0.5.0-cuda10.1-cudnn7, :v0.5.1-cuda10.1-cudnn7, :v1.0.0-cuda10.1-cudnn7 | :latest-cuda |
TensorRT (x86) | mcr.microsoft.com/azureml/onnxruntime | :v0.4.0-tensorrt19.03, :v0.5.0-tensorrt19.06, :v1.0.0-tensorrt19.09 | :latest-tensorrt |
OpenVINO (VAD-M) | mcr.microsoft.com/azureml/onnxruntime | :v0.5.0-openvino-r1.1-vadm | :latest-openvino-vadm |
OpenVINO (MYRIAD) | mcr.microsoft.com/azureml/onnxruntime | :v0.5.0-openvino-r1.1-myriad, :v1.0.0-openvino-r1.1-myriad | :latest-openvino-myriad |
OpenVINO (CPU) | mcr.microsoft.com/azureml/onnxruntime | :v1.0.0-openvino-r1.1-cpu | :latest-openvino-cpu |
nGraph | mcr.microsoft.com/azureml/onnxruntime | :v1.0.0-ngraph-v0.26.0 | :latest-ngraph |
Nuphar | mcr.microsoft.com/azureml/onnxruntime | | :latest-nuphar |
Server | mcr.microsoft.com/onnxruntime/server | :v0.4.0, :v0.5.0, :v0.5.1, :v1.0.0 | :latest |
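To actually try an image after pulling it, start an interactive container (a minimal sketch; that the default entrypoint drops you into a shell is an assumption about these images, not something the table guarantees):

```bash
# Pull the latest released CPU image and open an interactive session in it.
docker pull mcr.microsoft.com/azureml/onnxruntime:latest
docker run -it mcr.microsoft.com/azureml/onnxruntime:latest
```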
CPU
Ubuntu 16.04, CPU, Python Bindings
docker build -t onnxruntime-source -f Dockerfile.source .
docker run -it onnxruntime-source
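After the build, a one-line import check confirms the Python bindings work inside the image (a sketch; it assumes python3 is on the image's PATH with the onnxruntime wheel installed for it, which the Dockerfile.source build is expected to set up):

```bash
# Print the installed ONNX Runtime version from inside the container.
docker run --rm onnxruntime-source python3 -c "import onnxruntime as ort; print(ort.__version__)"
```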
CUDA
Ubuntu 16.04, CUDA 10.0, cuDNN 7
docker build -t onnxruntime-cuda -f Dockerfile.cuda .
docker run -it onnxruntime-cuda
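Note that the container only sees the GPU if you grant access explicitly; which flag to use depends on the host setup (both variants below assume the NVIDIA driver and container runtime are installed on the host):

```bash
# Docker 19.03+ with the NVIDIA Container Toolkit:
docker run -it --gpus all onnxruntime-cuda

# Older installations using nvidia-docker2:
docker run -it --runtime=nvidia onnxruntime-cuda
```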
nGraph (Public Preview)
Ubuntu 16.04, Python Bindings
docker build -t onnxruntime-ngraph -f Dockerfile.ngraph .
docker run -it onnxruntime-ngraph
TensorRT
Ubuntu 16.04, TensorRT 5.0.2
docker build -t onnxruntime-trt -f Dockerfile.tensorrt .
docker run -it onnxruntime-trt
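The TensorRT image needs GPU access at run time as well, under the same host assumptions as the CUDA note above:

```bash
docker run -it --gpus all onnxruntime-trt
```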
OpenVINO (Public Preview)
Ubuntu 16.04, Python Bindings
Build the onnxruntime image for one of the accelerators supported below:
docker build -t onnxruntime --build-arg DEVICE=$DEVICE .
DEVICE: specifies the hardware target for building the OpenVINO Execution Provider. The options for the different Intel target devices are listed below; a concrete example follows the table.
Device Option | Target Device |
---|---|
CPU_FP32 | Intel CPUs |
GPU_FP32 | Intel Integrated Graphics |
GPU_FP16 | Intel Integrated Graphics |
MYRIAD_FP16 | Intel Movidius™ USB sticks |
VAD-M_FP16 | Intel Vision Accelerator Design based on Movidius™ MyriadX VPUs |
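As a concrete instance of the generic build referenced above (the device choice is illustrative; any option from the table works):

```bash
# Target Intel Integrated Graphics at FP16 precision.
DEVICE=GPU_FP16
docker build -t onnxruntime --build-arg DEVICE=$DEVICE .
```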
OpenVINO on CPU
Retrieve your docker image in one of the following ways.
Build the docker image from the Dockerfile in this repository:
docker build -t onnxruntime-cpu --build-arg DEVICE=CPU_FP32 --network host .
Or pull the official image from DockerHub:
# Will be available with next release
Run the docker image:
docker run -it onnxruntime-cpu
OpenVINO on GPU
Retrieve your docker image in one of the following ways.
Build the docker image from the Dockerfile in this repository:
docker build -t onnxruntime-gpu --build-arg DEVICE=GPU_FP32 --network host .
Or pull the official image from DockerHub:
# Will be available with next release
Run the docker image:
docker run -it --device /dev/dri:/dev/dri onnxruntime-gpu:latest
OpenVINO on MYRIAD VPU
Retrieve your docker image in one of the following ways.
Build the docker image from the Dockerfile in this repository:
docker build -t onnxruntime-myriad --build-arg DEVICE=MYRIAD_FP16 --network host .
Or pull the official image from DockerHub:
# Will be available with next release
Run the docker image:
docker run -it --network host --privileged -v /dev:/dev onnxruntime-myriad:latest
OpenVINO on VAD-M
Retrieve your docker image in one of the following ways.
Build the docker image from the Dockerfile in this repository:
docker build -t onnxruntime-vadm --build-arg DEVICE=VAD-M_FP16 --network host .
Or pull the official image from DockerHub:
# Will be available with next release
Run the docker image:
docker run -it --mount type=bind,source=/var/tmp,destination=/var/tmp --device /dev/ion:/dev/ion onnxruntime-vadm:latest
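To confirm which execution providers actually ended up in a given image, a quick check like the following can help (a sketch; it assumes the image ships python3 with the onnxruntime Python bindings installed):

```bash
# Lists the registered execution providers; the OpenVINO provider should
# appear alongside the default CPU provider in these images.
docker run --rm onnxruntime-cpu python3 -c "import onnxruntime as ort; print(ort.get_available_providers())"
```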
ARM 32v7 (Public Preview)
The Dockerfile used in these instructions specifically targets Raspberry Pi 3/3+ running Raspbian Stretch. The same approach should work for other ARM devices, but may require some changes to the Dockerfile, such as choosing a different base image in the first line (FROM ...).
Install Docker CE on your development machine by following the instructions here
Create an empty local directory
mkdir onnx-build
cd onnx-build
Save the Dockerfile to your new directory
Run docker build
This builds all of the dependencies first, then builds ONNX Runtime and its Python bindings; expect it to take several hours.
docker build -t onnxruntime-arm32v7 -f Dockerfile.arm32v7 .
Note the full path of the .whl file from the # Build Output line. It should look like onnxruntime-0.3.0-cp35-cp35m-linux_armv7l.whl, but the version number may have changed. You'll use this path to extract the wheel file later.
Check that the build succeeded
Upon completion, you should see an image tagged onnxruntime-arm32v7 in your list of docker images:
docker images
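You can narrow the listing to just this image; the column layout in the comment is standard docker images output, with placeholder values:

```bash
docker images onnxruntime-arm32v7
# REPOSITORY            TAG      IMAGE ID     CREATED       SIZE
# onnxruntime-arm32v7   latest   <image id>   <timestamp>   <size>
```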
Extract the Python wheel file from the docker image (update the path/version of the .whl file with the one noted above)
docker create -ti --name onnxruntime_temp onnxruntime-arm32v7 bash
docker cp onnxruntime_temp:/code/onnxruntime/build/Linux/MinSizeRel/dist/onnxruntime-0.3.0-cp35-cp35m-linux_armv7l.whl .
docker rm -fv onnxruntime_temp
This will save a copy of the wheel file, onnxruntime-0.3.0-cp35-cp35m-linux_armv7l.whl, to your working directory on your host machine.
Copy the wheel file (onnxruntime-0.3.0-cp35-cp35m-linux_armv7l.whl) to your Raspberry Pi or other ARM device
On the device, install the ONNX Runtime wheel file
sudo apt-get update
sudo apt-get install -y python3 python3-pip
pip3 install numpy
# Install ONNX Runtime
# Important: Update path/version to match the name and location of your .whl file
pip3 install onnxruntime-0.3.0-cp35-cp35m-linux_armv7l.whl
Test installation by following the instructions here
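Before the full test, a one-line import is a quick sanity check that the wheel installed cleanly:

```bash
# Should print the version of the wheel you installed, e.g. 0.3.0.
python3 -c "import onnxruntime as ort; print(ort.__version__)"
```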
Nuphar (Public Preview)
Ubuntu 16.04, Python Bindings
docker build -t onnxruntime-nuphar -f Dockerfile.nuphar .
docker run -it onnxruntime-nuphar
ONNX Runtime Server (Public Preview)
Ubuntu 16.04
docker build -t {docker_image_name} -f Dockerfile.server .
docker run -v {localModelAbsoluteFolder}:{dockerModelAbsoluteFolder} -p {your_local_port}:8001 {docker_image_name} --model_path {dockerModelAbsolutePath}
Send HTTP requests to the docker container through the bound local port. See the full usage document for details; a filled-in example follows the curl command below.
curl -X POST -d "@request.json" -H "Content-Type: application/json" http://0.0.0.0:{your_local_port}/v1/models/mymodel/versions/3:predict
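As a filled-in instance of the two commands above (the host path, port, image name, and model version are illustrative; substitute your own):

```bash
# Mount a host model folder into the container and bind local port 9001
# to the server's port 8001.
docker run -v /home/user/models:/models -p 9001:8001 onnxruntime-server --model_path /models/mymodel.onnx

# Send a scoring request to the bound local port.
curl -X POST -d "@request.json" -H "Content-Type: application/json" \
  http://0.0.0.0:9001/v1/models/mymodel/versions/3:predict
```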