From b9fdba83d4ba38e54cdd9262cfc5d778d99a1fa9 Mon Sep 17 00:00:00 2001
From: Zhu Guodong
Date: Sat, 25 Oct 2025 17:26:04 +0800
Subject: [PATCH] [doc] fix wrong description of ms model proto in README

---
 README.md    | 12 ++++++------
 README_CN.md |  2 +-
 2 files changed, 7 insertions(+), 7 deletions(-)

diff --git a/README.md b/README.md
index 74c131e7..5ed850f7 100644
--- a/README.md
+++ b/README.md
@@ -56,23 +56,23 @@ MindSpore Lite achieves double the inference performance for AIGC, speech algori
 
 MindSpore Lite Architecture
 
-1. Terminal and Cloud one-stop inference deployment
+1. Device and Cloud one-stop inference deployment
 
-   - Provide end-to-end processes for model transformation optimization, deployment, and inference.
+   - Provides an end-to-end workflow for model conversion, optimization, deployment, and inference.
 
    - The unified IR realizes the device-cloud AI application integration.
 
 2. Lightweight
 
-   - Provides model compression, which could help to improve performance as well.
+   - Provides model compression, which also helps to improve performance.
 
-   - Provides the ultra-lightweight reasoning solution MindSpore Lite Micro to meet the deployment requirements in extreme environments such as smart watches and headphones.
+   - Provides the ultra-lightweight inference solution, MindSpore Lite Micro, to meet the deployment requirements in extreme environments such as smart watches and headphones.
 
 3. High-performance
 
-   - The built-in high-performance kernel computing library NNACL supports high-performance inference for dedicated chips such as CPU, NNRt, and Ascend, maximizing hardware computing power while minimizing inference latency and power consumption.
+   - The built-in kernel computing library NNACL supports high-performance inference on hardware backends such as CPU, NNRt, and Ascend, maximizing hardware computing power while minimizing inference latency and power consumption.
 
-   - Assembly code to improve performance of kernel operators. Supports CPU, GPU, and NPU.
+   - Uses assembly instructions to improve the performance of kernels.
 
 4. Versatility
 
diff --git a/README_CN.md b/README_CN.md
index 114034b7..4db8b2b5 100644
--- a/README_CN.md
+++ b/README_CN.md
@@ -29,7 +29,7 @@ For AIGC, speech algorithms, and CV model inference, MindSpore Lite achieves inf
 
   - `.mindir` models: used for inference on server-side devices, with better compatibility with model structures exported by the MindSpore training framework; mainly targets Ascend cards and X86/Arm CPU hardware. For the detailed conversion steps, see the [converter tool tutorial](https://www.mindspore.cn/lite/docs/zh-CN/master/mindir/converter.html).
 
-  - `.ms` models: mainly used for inference on device-side and edge devices, targeting terminal hardware such as the Kirin NPU and Arm-architecture CPUs. To reduce the model file size, `.ms` models are serialized and deserialized with protobuffer. For details on how to use the converter tool, see the [conversion tutorial](https://www.mindspore.cn/lite/docs/zh-CN/master/converter/converter_tool.html).
+  - `.ms` models: mainly used for inference on device-side and edge devices, targeting terminal hardware such as the Kirin NPU and Arm-architecture CPUs. To reduce the model file size, `.ms` models are serialized and deserialized with flatbuffers. For details on how to use the converter tool, see the [conversion tutorial](https://www.mindspore.cn/lite/docs/zh-CN/master/converter/converter_tool.html).
 
 3. Model inference
-- 
Gitee
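
For context on the `.ms` format this patch describes: below is a minimal sketch of invoking the converter tool referenced in the README_CN.md hunk to produce a flatbuffers-serialized `.ms` model. The flag names follow the linked converter_tool tutorial; the input model file name is hypothetical.

```shell
# Convert a TensorFlow Lite model to MindSpore Lite's .ms format.
# --fmk selects the input framework; --outputFile takes the output
# path without a suffix (the tool appends .ms itself).
./converter_lite --fmk=TFLITE --modelFile=mobilenet_v2.tflite --outputFile=mobilenet_v2
```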