Models trained with MindSpore can be used for inference on different hardware platforms. This document introduces the inference process on each platform.
Inference on the Ascend 910 AI processor
MindSpore provides the model.eval API for model validation. You only need to pass in the validation dataset, which is processed in the same way as the training dataset. For the complete code, see https://gitee.com/mindspore/mindspore/blob/r0.5/model_zoo/lenet/eval.py.
res = model.eval(dataset)
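For orientation, the broader validation flow looks roughly like the sketch below. LeNet5, the checkpoint file name, and the create_dataset helper are assumptions modeled on the LeNet example, not fixed names.

# Minimal validation sketch; LeNet5, create_dataset, and the checkpoint name are assumptions.
from mindspore import nn
from mindspore.nn.metrics import Accuracy
from mindspore.train import Model
from mindspore.train.serialization import load_checkpoint, load_param_into_net

network = LeNet5()                                             # hypothetical network definition
loss = nn.SoftmaxCrossEntropyWithLogits(is_grad=False, sparse=True, reduction="mean")
model = Model(network, loss, metrics={"Accuracy": Accuracy()})

# Load the trained parameters into the network before evaluation.
load_param_into_net(network, load_checkpoint("checkpoint_lenet.ckpt"))

# The validation dataset is built in the same way as the training dataset.
dataset = create_dataset("./MNIST/test")                       # hypothetical data pipeline helper
res = model.eval(dataset)
print("Evaluation result:", res)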
In addition, the model.predict interface can be used for inference. For detailed usage, see the API description.
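As a rough illustration, model.predict runs a plain forward pass on a prepared input tensor and returns the network output; the model object and input shape below reuse the assumptions from the validation sketch above.

# Single forward pass with model.predict; the 1x1x32x32 input shape is an assumed LeNet-style input.
import numpy as np
from mindspore import Tensor

input_data = Tensor(np.random.rand(1, 1, 32, 32).astype(np.float32))
output = model.predict(input_data)    # forward pass only, no loss or metric computation
print(output.asnumpy().shape)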
Inference on the Ascend 310 AI processor
Export the ONNX or GEIR model by referring to the Export GEIR Model and ONNX Model.
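A minimal export sketch, under the same assumptions as above (a hypothetical LeNet5 network and checkpoint), is shown below: the export interface traces the graph with a dummy input and writes the model in the requested file_format.

# Export sketch; LeNet5 and the file names are assumptions.
import numpy as np
from mindspore import Tensor
from mindspore.train.serialization import export, load_checkpoint, load_param_into_net

network = LeNet5()                                             # hypothetical network definition
load_param_into_net(network, load_checkpoint("checkpoint_lenet.ckpt"))

# A dummy input with the network's expected shape drives the graph export.
input_data = Tensor(np.ones([1, 1, 32, 32]).astype(np.float32))
export(network, input_data, file_name="lenet.geir", file_format="GEIR")
# For ONNX, use file_format="ONNX" instead, e.g.:
# export(network, input_data, file_name="lenet.onnx", file_format="ONNX")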
For inference in a cloud environment, see the Ascend 910 training and Ascend 310 inference samples. For a bare-metal environment, where the Ascend 310 AI processor is deployed locally rather than in the cloud, see the description document of the Ascend 310 AI processor software package.
Inference on a GPU
Export the ONNX model by referring to the Export GEIR Model and ONNX Model.
Perform inference on an NVIDIA GPU by referring to TensorRT backend for ONNX.
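As a rough sketch of that path, the onnx_tensorrt backend loads the exported ONNX model, builds a TensorRT engine, and runs inference; the lenet.onnx file name and input shape are assumptions carried over from the export step above.

# Run the exported ONNX model with the ONNX-TensorRT backend; file name and shape are assumptions.
import numpy as np
import onnx
import onnx_tensorrt.backend as backend

model = onnx.load("lenet.onnx")
engine = backend.prepare(model, device="CUDA:0")

input_data = np.random.random((1, 1, 32, 32)).astype(np.float32)
output = engine.run(input_data)[0]
print(output)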
On-Device Inference
On-device inference is based on MindSpore Predict. For details, see the On-Device Inference Tutorial.