multi_platform_inference_ascend_310.md
宦晓玲 committed on 2023-07-21 14:56: modify the md links in 1.0

# Inference on the Ascend 310 AI Processor

`Linux` `Ascend` `Inference Application` `Beginner` `Intermediate` `Expert`


## Inference Using an ONNX or AIR File

The Ascend 310 AI processor is equipped with the ACL framework and supports only the OM model format; a model in ONNX or AIR format must first be converted to OM. To perform inference on the Ascend 310 AI processor, follow these steps:

  1. Generate a model in ONNX or AIR format on the training platform. For details, see Export AIR Model and Export ONNX Model.

  2. Convert the ONNX or AIR model file into an OM model file and perform inference.

    • For inference in the cloud environment (ModelArts), see the Ascend 910 training and Ascend 310 inference samples.
    • For inference in a local bare-metal environment where the Ascend 310 AI processor is deployed (as opposed to the cloud environment), see the documentation of the Ascend 310 AI processor software package.
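In a local environment, the ONNX/AIR-to-OM conversion in step 2 is typically performed with the ATC (Ascend Tensor Compiler) tool shipped with the Ascend 310 software package. The sketch below assumes the toolkit's environment variables are already set; the file names `model.air` and `model.onnx` are placeholders, and the exact flags may vary across toolkit versions.

```shell
# Convert an AIR model exported from MindSpore to OM
# (--framework=1 selects the MindSpore/AIR frontend).
atc --model=model.air \
    --framework=1 \
    --output=model_air \
    --soc_version=Ascend310

# Convert an ONNX model to OM
# (--framework=5 selects the ONNX frontend).
atc --model=model.onnx \
    --framework=5 \
    --output=model_onnx \
    --soc_version=Ascend310
```

On success, ATC writes `model_air.om` and `model_onnx.om`, which can then be loaded for inference through the ACL APIs on the Ascend 310.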