From 619adfdd4a2b17dd03d371f9ef7265ebb90104f3 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?=E5=8F=B6=E6=88=90=E7=BE=8E?= Date: Wed, 17 Sep 2025 06:47:44 +0000 Subject: [PATCH 01/18] update cplusplus/level1_single_api/1_acl/4_blas/gemm/README_CN.md. MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit Signed-off-by: 叶成美 --- .../level1_single_api/1_acl/4_blas/gemm/README_CN.md | 12 ++++++------ 1 file changed, 6 insertions(+), 6 deletions(-) diff --git a/cplusplus/level1_single_api/1_acl/4_blas/gemm/README_CN.md b/cplusplus/level1_single_api/1_acl/4_blas/gemm/README_CN.md index 48cacb39f..4b497f1ed 100644 --- a/cplusplus/level1_single_api/1_acl/4_blas/gemm/README_CN.md +++ b/cplusplus/level1_single_api/1_acl/4_blas/gemm/README_CN.md @@ -72,16 +72,16 @@ 将矩阵乘算子的算子描述信息(\*.json文件)编译成适配昇腾AI处理器的离线模型(\*.om文件),运行矩阵乘算子时使用。 - 切换到样例目录,执行如下命令(以昇腾310 AI处理器为例): + 切换到样例目录,执行atc命令,命令示例如下,请根据参数说明调整参数值后,再执行命令: ``` - atc --singleop=run/out/test_data/config/gemm.json --soc_version=Ascend310 --output=run/out/op_models + atc --singleop=run/out/test_data/config/gemm.json --soc_version= --output=run/out/op_models ``` - - --singleop:单算子定义文件(\*.json文件)。 - - --soc\_version:昇腾AI处理器的版本。进入“CANN软件安装目录/compiler/data/platform_config”目录,".ini"文件的文件名即为昇腾AI处理器的版本,请根据实际情况选择。 - - - --output:生成的om文件必须放在“run/out/op\_models“目录下。 + - --singleop:单算子定义文件(\*.json文件)所在的路径。直接使用命令示例中的路径,无需修改。 + - --soc\_version:昇腾AI处理器的版本。需根据实际情况修改。 + 若无法确定当前设备的soc_version,可在安装昇腾AI处理器的服务器上执行npu-smi info命令进行查询,获取Name信息。实际配置值为AscendName,例如Name取值为xxxyy,实际配置值为Ascendxxxyy。 + - --output:生成的om文件必须放在“run/out/op\_models“目录下。直接使用命令示例中的路径,无需修改 ## 编译运行 -- Gitee From e8a7836548727e904acb04e3a14ffc648bac2a0a Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?=E5=8F=B6=E6=88=90=E7=BE=8E?= Date: Wed, 17 Sep 2025 06:50:30 +0000 Subject: [PATCH 02/18] update cplusplus/level1_single_api/1_acl/4_blas/gemm/README_CN.md. 
MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit Signed-off-by: 叶成美 --- cplusplus/level1_single_api/1_acl/4_blas/gemm/README_CN.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/cplusplus/level1_single_api/1_acl/4_blas/gemm/README_CN.md b/cplusplus/level1_single_api/1_acl/4_blas/gemm/README_CN.md index 4b497f1ed..9138aa484 100644 --- a/cplusplus/level1_single_api/1_acl/4_blas/gemm/README_CN.md +++ b/cplusplus/level1_single_api/1_acl/4_blas/gemm/README_CN.md @@ -80,7 +80,7 @@ - --singleop:单算子定义文件(\*.json文件)所在的路径。直接使用命令示例中的路径,无需修改。 - --soc\_version:昇腾AI处理器的版本。需根据实际情况修改。 - 若无法确定当前设备的soc_version,可在安装昇腾AI处理器的服务器上执行npu-smi info命令进行查询,获取Name信息。实际配置值为AscendName,例如Name取值为xxxyy,实际配置值为Ascendxxxyy。 + 若无法确定当前设备的soc_version,可在安装昇腾AI处理器的服务器上执行npu-smi info命令进行查询,获取Name信息,实际配置值为AscendName,例如Name取值为xxxyy,实际配置值为Ascendxxxyy。 - --output:生成的om文件必须放在“run/out/op\_models“目录下。直接使用命令示例中的路径,无需修改 -- Gitee From 8ba7e1af4cabf02df0554b1fe0880ec0a689c9b9 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?=E5=8F=B6=E6=88=90=E7=BE=8E?= Date: Wed, 17 Sep 2025 07:04:09 +0000 Subject: [PATCH 03/18] update cplusplus/level1_single_api/1_acl/4_blas/gemm/README.md. MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit Signed-off-by: 叶成美 --- cplusplus/level1_single_api/1_acl/4_blas/gemm/README.md | 7 ++++--- 1 file changed, 4 insertions(+), 3 deletions(-) diff --git a/cplusplus/level1_single_api/1_acl/4_blas/gemm/README.md b/cplusplus/level1_single_api/1_acl/4_blas/gemm/README.md index 19635647e..30ec45451 100644 --- a/cplusplus/level1_single_api/1_acl/4_blas/gemm/README.md +++ b/cplusplus/level1_single_api/1_acl/4_blas/gemm/README.md @@ -75,14 +75,15 @@ The sample directory is organized as follows: Build the operator description information \(.json file\) of the matrix-matrix multiplication operator into an offline model \(.om file\) that adapts to the Ascend AI Processor. 
- Run the following command in the **acl\_execute\_gemm** directory (take Ascend310 as an example): + Switch to the **acl\_execute\_gemm** sample directory and execute the atc command. The command example is as follows. Please adjust the parameter values according to the parameter descriptions before executing the command. + ``` - atc --singleop=run/out/test_data/config/gemm.json --soc_version=Ascend310 --output=run/out/op_models + atc --singleop=run/out/test_data/config/gemm.json --soc_version= --output=run/out/op_models ``` - **--singleop**: directory of the single-operator definition file \(.json\) - - **--soc\_version**: Version of the Ascend AI processor. Go to the CANN software installation directory/compiler/data/platform_config directory. The name of the .ini file is the version of the Ascend AI processor. Select the version as required. + - **--soc\_version**: Version of the Ascend AI processor. Modify the value as required. If the value cannot be determined, please run the **npu-smi info** command on the server where the Ascend AI Processor is installed to obtain the Chip Name information. The actual value is Ascend+Name. For example, if Chip Name is xxxyy, the actual value is Ascendxxxyy. - **--output**: directory for storing the generated .om file, that is, the **run/out/op\_models** directory. -- Gitee From cbd01d12ea850fec0afec352e60517ac4151b176 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?=E5=8F=B6=E6=88=90=E7=BE=8E?= Date: Wed, 17 Sep 2025 07:10:53 +0000 Subject: [PATCH 04/18] update level2_simple_inference/1_classification/resnet50_imagenet_classification/README_CN.md. 
MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit Signed-off-by: 叶成美 --- .../README_CN.md | 21 +++++++++---------- 1 file changed, 10 insertions(+), 11 deletions(-) diff --git a/cplusplus/level2_simple_inference/1_classification/resnet50_imagenet_classification/README_CN.md b/cplusplus/level2_simple_inference/1_classification/resnet50_imagenet_classification/README_CN.md index 2e7a615de..4a5add489 100644 --- a/cplusplus/level2_simple_inference/1_classification/resnet50_imagenet_classification/README_CN.md +++ b/cplusplus/level2_simple_inference/1_classification/resnet50_imagenet_classification/README_CN.md @@ -78,21 +78,20 @@ 2. 将ResNet-50原始模型转换为适配昇腾AI处理器的离线模型(\*.om文件)。 - 切换到样例目录,执行如下命令(以昇腾310 AI处理器为例): + 切换到样例目录,执行atc命令,命令示例如下,请根据参数说明调整参数值后,再执行命令: ``` - atc --model=caffe_model/resnet50.prototxt --weight=caffe_model/resnet50.caffemodel --framework=0 --output=model/resnet50 --soc_version=Ascend310 --input_format=NCHW --input_fp16_nodes=data --output_type=FP32 --out_nodes=prob:0 + atc --model=caffe_model/resnet50.prototxt --weight=caffe_model/resnet50.caffemodel --framework=0 --output=model/resnet50 --soc_version= --input_format=NCHW --input_fp16_nodes=data --output_type=FP32 --out_nodes=prob:0 ``` - - --model:原始模型文件路径。 - - --weight:权重文件路径。 - - --framework:原始框架类型。0:表示Caffe;1:表示MindSpore;3:表示TensorFlow;5:表示ONNX。 - - --soc\_version:昇腾AI处理器的版本。进入“CANN软件安装目录/compiler/data/platform_config”目录,".ini"文件的文件名即为昇腾AI处理器的版本,请根据实际情况选择。 - - - --input\_format:输入数据的Format。 - - --input\_fp16\_nodes:指定输入数据类型为FP16的输入节点名称。 - - --output\_type和--out\_nodes:这2个参数配合使用,指定prob节点的第一个输出的数据类型为float32。 - - --output:生成的resnet50.om文件存放在“样例目录/model“目录下。建议使用命令中的默认设置,否则在编译代码前,您还需要修改sample\_process.cpp中的omModelPath参数值。 + - --model:原始模型文件路径。直接使用命令示例中的路径,无需修改。 + - --weight:权重文件路径。直接使用命令示例中的路径,无需修改。 + - --framework:原始框架类型。0:表示Caffe;1:表示MindSpore;3:表示TensorFlow;5:表示ONNX。直接使用命令示例中的参数值,无需修改。 + - --soc\_version:昇腾AI处理器的版本。需根据实际情况修改。 
若无法确定当前设备的soc_version,可在安装昇腾AI处理器的服务器上执行npu-smi info命令进行查询,获取Name信息,实际配置值为AscendName,例如Name取值为xxxyy,实际配置值为Ascendxxxyy。 + - --input\_format:输入数据的Format。直接使用命令示例中的参数值,无需修改。 + - --input\_fp16\_nodes:指定输入数据类型为FP16的输入节点名称。直接使用命令示例中的参数值,无需修改。 + - --output\_type和--out\_nodes:这2个参数配合使用,指定prob节点的第一个输出的数据类型为float32。直接使用命令示例中的参数值,无需修改。 + - --output:生成的resnet50.om文件存放在“样例目录/model“目录下。建议使用命令示例中的路径,否则在编译代码前,您还需要修改sample\_process.cpp中的omModelPath参数值。 ``` const char* omModelPath = "../model/resnet50.om"; ``` -- Gitee From b318d8cfeeff280ca85c1b84bd12e89152899892 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?=E5=8F=B6=E6=88=90=E7=BE=8E?= Date: Wed, 17 Sep 2025 07:13:02 +0000 Subject: [PATCH 05/18] update level2_simple_inference/1_classification/resnet50_imagenet_classification/README.md. MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit Signed-off-by: 叶成美 --- .../resnet50_imagenet_classification/README.md | 7 ++++--- 1 file changed, 4 insertions(+), 3 deletions(-) diff --git a/cplusplus/level2_simple_inference/1_classification/resnet50_imagenet_classification/README.md b/cplusplus/level2_simple_inference/1_classification/resnet50_imagenet_classification/README.md index fb8b77c79..968f6986e 100644 --- a/cplusplus/level2_simple_inference/1_classification/resnet50_imagenet_classification/README.md +++ b/cplusplus/level2_simple_inference/1_classification/resnet50_imagenet_classification/README.md @@ -75,16 +75,17 @@ The sample directory is organized as follows: 2. Convert the ResNet-50 network to an .om offline model adapted to Ascend AI ProcessorHiSilicon SoC. - Go to the sample directory and run the following command (take Ascend310 as an example): + Go to the sample directory and execute the atc command. The command example is as follows. Please adjust the parameter values according to the parameter descriptions before executing the command. 
+ ``` - atc --model=caffe_model/resnet50.prototxt --weight=caffe_model/resnet50.caffemodel --framework=0 --output=model/resnet50 --soc_version=Ascend310 --input_format=NCHW --input_fp16_nodes=data -output_type=FP32 --out_nodes=prob:0 + atc --model=caffe_model/resnet50.prototxt --weight=caffe_model/resnet50.caffemodel --framework=0 --output=model/resnet50 --soc_version= --input_format=NCHW --input_fp16_nodes=data -output_type=FP32 --out_nodes=prob:0 ``` - **--model**: directory of the source model file. - **--weight**: directory of the weight file. - **--framework**: source framework type, selected from **0** \(Caffe\), **1** \(MindSpore\), **3** \(TensorFlow\), and **5** \(ONNX\). - - **--soc\_version**: Version of the Ascend AI processor. Go to the CANN software installation directory/compiler/data/platform_config directory. The name of the .ini file is the version of the Ascend AI processor. Select the version as required. + - **--soc\_version**: Version of the Ascend AI processor. Modify the value as required. If the value cannot be determined, please run the **npu-smi info** command on the server where the Ascend AI Processor is installed to obtain the Chip Name information. The actual value is Ascend+Name. For example, if Chip Name is xxxyy, the actual value is Ascendxxxyy. - **--input\_format**: input format. - **--input\_fp16\_nodes**: input nodes to specify as FP16 nodes. - **--output\_type** and **--out\_nodes**: specify the data type of the first output as float32. -- Gitee From 264327de256c9e9392d2e5d9a8bb052e8a8758f1 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?=E5=8F=B6=E6=88=90=E7=BE=8E?= Date: Wed, 17 Sep 2025 07:15:31 +0000 Subject: [PATCH 06/18] update level2_simple_inference/1_classification/resnet50_async_imagenet_classification/README_CN.md. 
MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit Signed-off-by: 叶成美 --- .../README_CN.md | 19 +++++++++---------- 1 file changed, 9 insertions(+), 10 deletions(-) diff --git a/cplusplus/level2_simple_inference/1_classification/resnet50_async_imagenet_classification/README_CN.md b/cplusplus/level2_simple_inference/1_classification/resnet50_async_imagenet_classification/README_CN.md index 9b5b6e731..53497e4ce 100644 --- a/cplusplus/level2_simple_inference/1_classification/resnet50_async_imagenet_classification/README_CN.md +++ b/cplusplus/level2_simple_inference/1_classification/resnet50_async_imagenet_classification/README_CN.md @@ -82,21 +82,20 @@ 2. 将ResNet-50网络转换为适配昇腾AI处理器的离线模型(\*.om文件)。 - 切换到样例目录,执行如下命令(以昇腾310 AI处理器为例): + 切换到样例目录,执行atc命令,命令示例如下,请根据参数说明调整参数值后,再执行命令: ``` atc --model=caffe_model/resnet50.prototxt --weight=caffe_model/resnet50.caffemodel --framework=0 --output=model/resnet50 --soc_version=Ascend310 --input_format=NCHW --input_fp16_nodes=data -output_type=FP32 --out_nodes=prob:0 ``` - - --model:原始模型文件路径。 - - --weight:权重文件路径。 - - --framework:原始框架类型。0:表示Caffe;1:表示MindSpore;3:表示TensorFlow;5:表示ONNX。 - - --soc\_version:昇腾AI处理器的版本。进入“CANN软件安装目录/compiler/data/platform_config”目录,".ini"文件的文件名即为昇腾AI处理器的版本,请根据实际情况选择。 - - - --input\_format:模型输入数据的Format。 - - --input\_fp16\_nodes:指定输入数据类型为FP16的输入节点名称。 - - --output\_type和--out\_nodes:这2个参数配合使用,指定prob节点的第一个输出的数据类型为float32。 - - --output:生成的resnet50.om文件存放在“样例目录/model“目录下。建议使用命令中的默认设置,否则在编译代码前,您还需要修改sample\_process.cpp中的omModelPath参数值。 + - --model:原始模型文件路径。直接使用命令示例中的路径,无需修改。 + - --weight:权重文件路径。直接使用命令示例中的路径,无需修改。 + - --framework:原始框架类型。0:表示Caffe;1:表示MindSpore;3:表示TensorFlow;5:表示ONNX。直接使用命令示例中的参数值,无需修改。 + - --soc\_version:昇腾AI处理器的版本。需根据实际情况修改。若无法确定当前设备的soc_version,可在安装昇腾AI处理器的服务器上执行npu-smi info命令进行查询,获取Name信息,实际配置值为AscendName,例如Name取值为xxxyy,实际配置值为Ascendxxxyy。 + - --input\_format:模型输入数据的Format。直接使用命令示例中的参数值,无需修改。 + - --input\_fp16\_nodes:指定输入数据类型为FP16的输入节点名称。直接使用命令示例中的参数值,无需修改。 
+ - --output\_type和--out\_nodes:这2个参数配合使用,指定prob节点的第一个输出的数据类型为float32。直接使用命令示例中的参数值,无需修改。 + - --output:生成的resnet50.om文件存放在“样例目录/model“目录下。建议使用命令示例中的参数值,否则在编译代码前,您还需要修改sample\_process.cpp中的omModelPath参数值。 ``` const char* omModelPath = "../model/resnet50.om"; -- Gitee From 6c917ab9b09651b227b455e806066552d056a3db Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?=E5=8F=B6=E6=88=90=E7=BE=8E?= Date: Wed, 17 Sep 2025 07:16:13 +0000 Subject: [PATCH 07/18] update level2_simple_inference/1_classification/resnet50_async_imagenet_classification/README_CN.md. MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit Signed-off-by: 叶成美 --- .../resnet50_async_imagenet_classification/README_CN.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/cplusplus/level2_simple_inference/1_classification/resnet50_async_imagenet_classification/README_CN.md b/cplusplus/level2_simple_inference/1_classification/resnet50_async_imagenet_classification/README_CN.md index 53497e4ce..7aa833001 100644 --- a/cplusplus/level2_simple_inference/1_classification/resnet50_async_imagenet_classification/README_CN.md +++ b/cplusplus/level2_simple_inference/1_classification/resnet50_async_imagenet_classification/README_CN.md @@ -85,7 +85,7 @@ 切换到样例目录,执行atc命令,命令示例如下,请根据参数说明调整参数值后,再执行命令: ``` - atc --model=caffe_model/resnet50.prototxt --weight=caffe_model/resnet50.caffemodel --framework=0 --output=model/resnet50 --soc_version=Ascend310 --input_format=NCHW --input_fp16_nodes=data -output_type=FP32 --out_nodes=prob:0 + atc --model=caffe_model/resnet50.prototxt --weight=caffe_model/resnet50.caffemodel --framework=0 --output=model/resnet50 --soc_version= --input_format=NCHW --input_fp16_nodes=data -output_type=FP32 --out_nodes=prob:0 ``` - --model:原始模型文件路径。直接使用命令示例中的路径,无需修改。 -- Gitee From 47badf06cd49e712dc96dda5fbd163087a51a20f Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?=E5=8F=B6=E6=88=90=E7=BE=8E?= Date: Wed, 17 Sep 2025 07:17:19 +0000 Subject: [PATCH 08/18] update 
level2_simple_inference/1_classification/resnet50_async_imagenet_classification/README.md. MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit Signed-off-by: 叶成美 --- .../resnet50_async_imagenet_classification/README.md | 6 +++--- 1 file changed, 3 insertions(+), 3 deletions(-) diff --git a/cplusplus/level2_simple_inference/1_classification/resnet50_async_imagenet_classification/README.md b/cplusplus/level2_simple_inference/1_classification/resnet50_async_imagenet_classification/README.md index da065f3e8..1afc83930 100644 --- a/cplusplus/level2_simple_inference/1_classification/resnet50_async_imagenet_classification/README.md +++ b/cplusplus/level2_simple_inference/1_classification/resnet50_async_imagenet_classification/README.md @@ -80,16 +80,16 @@ The sample directory is organized as follows: 2. Convert the ResNet-50 network into an offline model \(.om file\) that adapts to Ascend AI Processors. - Go to the sample directory and run the following command (take Ascend310 as an example): + Go to the sample directory and execute the atc command. The command example is as follows. Please adjust the parameter values according to the parameter descriptions before executing the command. ``` - atc --model=caffe_model/resnet50.prototxt --weight=caffe_model/resnet50.caffemodel --framework=0 --output=model/resnet50 --soc_version=Ascend310 --input_format=NCHW --input_fp16_nodes=data -output_type=FP32 --out_nodes=prob:0 + atc --model=caffe_model/resnet50.prototxt --weight=caffe_model/resnet50.caffemodel --framework=0 --output=model/resnet50 --soc_version= --input_format=NCHW --input_fp16_nodes=data -output_type=FP32 --out_nodes=prob:0 ``` - **--model**: directory of the source model file. - **--weight**: directory of the weight file. - **--framework**: source framework type, selected from **0** \(Caffe\), **1** \(MindSpore\), **3** \(TensorFlow\), and **5** \(ONNX\). - - **--soc\_version**: Version of the Ascend AI processor. 
Go to the CANN software installation directory/compiler/data/platform_config directory. The name of the .ini file is the version of the Ascend AI processor. Select the version as required. + - **--soc\_version**: Version of the Ascend AI processor. Modify the value as required. If the value cannot be determined, please run the **npu-smi info** command on the server where the Ascend AI Processor is installed to obtain the Chip Name information. The actual value is Ascend+Name. For example, if Chip Name is xxxyy, the actual value is Ascendxxxyy. - **--input\_format**: input format. - **--input\_fp16\_nodes**: input nodes to specify as FP16 nodes. - **--output\_type** and **--out\_nodes**: specify the data type of the first output as float32. -- Gitee From 40de8a0c166647602593a1379336804e84973b26 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?=E5=8F=B6=E6=88=90=E7=BE=8E?= Date: Wed, 17 Sep 2025 07:18:03 +0000 Subject: [PATCH 09/18] update /level2_simple_inference/1_classification/resnet50_imagenet_classification/README_CN.md. 
MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit Signed-off-by: 叶成美 --- .../resnet50_imagenet_classification/README_CN.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/cplusplus/level2_simple_inference/1_classification/resnet50_imagenet_classification/README_CN.md b/cplusplus/level2_simple_inference/1_classification/resnet50_imagenet_classification/README_CN.md index 4a5add489..cd668e87f 100644 --- a/cplusplus/level2_simple_inference/1_classification/resnet50_imagenet_classification/README_CN.md +++ b/cplusplus/level2_simple_inference/1_classification/resnet50_imagenet_classification/README_CN.md @@ -87,7 +87,7 @@ - --model:原始模型文件路径。直接使用命令示例中的路径,无需修改。 - --weight:权重文件路径。直接使用命令示例中的路径,无需修改。 - --framework:原始框架类型。0:表示Caffe;1:表示MindSpore;3:表示TensorFlow;5:表示ONNX。直接使用命令示例中的参数值,无需修改。 - - --soc\_version:昇腾AI处理器的版本。需根据实际情况修改。 若无法确定当前设备的soc_version,可在安装昇腾AI处理器的服务器上执行npu-smi info命令进行查询,获取Name信息,实际配置值为AscendName,例如Name取值为xxxyy,实际配置值为Ascendxxxyy。 + - --soc\_version:昇腾AI处理器的版本。需根据实际情况修改。若无法确定当前设备的soc_version,可在安装昇腾AI处理器的服务器上执行npu-smi info命令进行查询,获取Name信息,实际配置值为AscendName,例如Name取值为xxxyy,实际配置值为Ascendxxxyy。 - --input\_format:输入数据的Format。直接使用命令示例中的参数值,无需修改。 - --input\_fp16\_nodes:指定输入数据类型为FP16的输入节点名称。直接使用命令示例中的参数值,无需修改。 - --output\_type和--out\_nodes:这2个参数配合使用,指定prob节点的第一个输出的数据类型为float32。直接使用命令示例中的参数值,无需修改。 -- Gitee From e24d9ebf1c7311ac65bc6343036ae56f2ab8411d Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?=E5=8F=B6=E6=88=90=E7=BE=8E?= Date: Wed, 17 Sep 2025 07:20:27 +0000 Subject: [PATCH 10/18] update level2_simple_inference/1_classification/vpc_resnet50_imagenet_classification/README_CN.md. 
MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit Signed-off-by: 叶成美 --- .../README_CN.md | 16 ++++++++-------- 1 file changed, 8 insertions(+), 8 deletions(-) diff --git a/cplusplus/level2_simple_inference/1_classification/vpc_resnet50_imagenet_classification/README_CN.md b/cplusplus/level2_simple_inference/1_classification/vpc_resnet50_imagenet_classification/README_CN.md index 864e1a4a7..4c97d00e5 100644 --- a/cplusplus/level2_simple_inference/1_classification/vpc_resnet50_imagenet_classification/README_CN.md +++ b/cplusplus/level2_simple_inference/1_classification/vpc_resnet50_imagenet_classification/README_CN.md @@ -88,18 +88,18 @@ 2. 将ResNet-50原始模型转换为适配昇腾AI处理器的离线模型(\*.om文件),转换模型时,需配置色域转换参数,用于将YUV420SP格式的图片转换为RGB格式的图片。 - 切换到样例目录,执行如下命令(以昇腾310 AI处理器为例): + 切换到样例目录,执行atc命令,命令示例如下,请根据参数说明调整参数值后,再执行命令: ``` - atc --model=caffe_model/resnet50.prototxt --weight=caffe_model/resnet50.caffemodel --framework=0 --soc_version=Ascend310 --insert_op_conf=caffe_model/aipp.cfg --output=model/resnet50_aipp + atc --model=caffe_model/resnet50.prototxt --weight=caffe_model/resnet50.caffemodel --framework=0 --soc_version= --insert_op_conf=caffe_model/aipp.cfg --output=model/resnet50_aipp ``` - - --model:原始模型文件路径。 - - --weight:权重文件路径。 - - --framework:原始框架类型。0:表示Caffe;1:表示MindSpore;3:表示TensorFlow;5:表示ONNX。 - - --soc\_version:昇腾AI处理器的版本。进入“CANN软件安装目录/compiler/data/platform_config”目录,".ini"文件的文件名即为昇腾AI处理器的版本,请根据实际情况选择。 - - --insert\_op\_conf:插入AIPP(AI Preprocessing)算子的配置文件路径,用于在AI Core上完成图像预处理,包括改变图像尺寸、色域转换(转换图像格式)、减均值/乘系数(改变图像像素),数据处理之后再进行真正的模型推理。 - - --output:生成的resnet50\_aipp.om文件存放在“样例目录/model“目录下。建议使用命令中的默认设置,否则在编译代码前,您还需要修改sample\_process.cpp中的omModelPath参数值。 + - --model:原始模型文件路径。直接使用命令示例中的路径,无需修改。 + - --weight:权重文件路径。直接使用命令示例中的路径,无需修改。 + - --framework:原始框架类型。0:表示Caffe;1:表示MindSpore;3:表示TensorFlow;5:表示ONNX。直接使用命令示例中的参数值,无需修改。 + - --soc\_version:昇腾AI处理器的版本。需根据实际情况修改。 若无法确定当前设备的soc_version,可在安装昇腾AI处理器的服务器上执行npu-smi 
info命令进行查询,获取Name信息,实际配置值为AscendName,例如Name取值为xxxyy,实际配置值为Ascendxxxyy。 + - --insert\_op\_conf:插入AIPP(AI Preprocessing)算子的配置文件路径,用于在AI Core上完成图像预处理,包括改变图像尺寸、色域转换(转换图像格式)、减均值/乘系数(改变图像像素),数据处理之后再进行真正的模型推理。直接使用命令示例中的路径,无需修改。 + - --output:生成的resnet50\_aipp.om文件存放在“样例目录/model“目录下。建议使用命令示例中的路径,否则在编译代码前,您还需要修改sample\_process.cpp中的omModelPath参数值。 ``` const char* omModelPath = "../model/resnet50_aipp.om"; ``` -- Gitee From 7d0cddef605aea7ca1effb43b6ec4054d0952cc8 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?=E5=8F=B6=E6=88=90=E7=BE=8E?= Date: Wed, 17 Sep 2025 07:22:00 +0000 Subject: [PATCH 11/18] update level2_simple_inference/1_classification/vpc_resnet50_imagenet_classification/README.md. MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit Signed-off-by: 叶成美 --- .../vpc_resnet50_imagenet_classification/README.md | 6 +++--- 1 file changed, 3 insertions(+), 3 deletions(-) diff --git a/cplusplus/level2_simple_inference/1_classification/vpc_resnet50_imagenet_classification/README.md b/cplusplus/level2_simple_inference/1_classification/vpc_resnet50_imagenet_classification/README.md index 1927fa7c8..a8cf42fe1 100644 --- a/cplusplus/level2_simple_inference/1_classification/vpc_resnet50_imagenet_classification/README.md +++ b/cplusplus/level2_simple_inference/1_classification/vpc_resnet50_imagenet_classification/README.md @@ -83,16 +83,16 @@ The sample directory is organized as follows: 2. Convert the ResNet-50 network into an offline model \(.om file\) that adapts to Ascend AI Processors. During model conversion, you need to set CSC parameters to convert YUV420SP images to RGB images. - Go to the sample directory and run the following command (take Ascend310 as an example): + Go to the sample directory and execute the atc command. The command example is as follows. Please adjust the parameter values according to the parameter descriptions before executing the command. 
``` - atc --model=caffe_model/resnet50.prototxt --weight=caffe_model/resnet50.caffemodel --framework=0 --soc_version=Ascend310 --insert_op_conf=caffe_model/aipp.cfg --output=model/resnet50_aipp + atc --model=caffe_model/resnet50.prototxt --weight=caffe_model/resnet50.caffemodel --framework=0 --soc_version= --insert_op_conf=caffe_model/aipp.cfg --output=model/resnet50_aipp ``` - **--model**: directory of the source model file. - **--weight**: directory of the weight file. - **--framework**: source framework type, selected from **0** \(Caffe\), **1** \(MindSpore\), **3** \(TensorFlow\), and **5** \(ONNX\). - - **--soc\_version**: Version of the Ascend AI processor. Go to the CANN software installation directory/compiler/data/platform_config directory. The name of the .ini file is the version of the Ascend AI processor. Select the version as required. + - **--soc\_version**: Version of the Ascend AI processor. Modify the value as required. If the value cannot be determined, please run the **npu-smi info** command on the server where the Ascend AI Processor is installed to obtain the Chip Name information. The actual value is Ascend+Name. For example, if Chip Name is xxxyy, the actual value is Ascendxxxyy. - **--insert\_op\_conf**: path of the configuration file for inserting the AI Pre-Processing \(AIPP\) operator for AI Core–based image preprocessing including image resizing, CSC, and mean subtraction and factor multiplication \(for pixel changing\), prior to model inference. - **--output**: directory for storing the generated **resnet50\_aipp.om** file, that is, **/model** under the sample directory. The default path in the command example is recommended. To specify another path, you need to change the value of **omModelPath** in **sample\_process.cpp** before building the code. 
-- Gitee From cc9983285c8f7e755a82c087f1233cc1e9bb1b4d Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?=E5=8F=B6=E6=88=90=E7=BE=8E?= Date: Wed, 17 Sep 2025 07:28:12 +0000 Subject: [PATCH 12/18] update 1_classification/vpc_jpeg_resnet50_imagenet_classification/README_CN.md. MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit Signed-off-by: 叶成美 --- .../README_CN.md | 16 ++++++++-------- 1 file changed, 8 insertions(+), 8 deletions(-) diff --git a/cplusplus/level2_simple_inference/1_classification/vpc_jpeg_resnet50_imagenet_classification/README_CN.md b/cplusplus/level2_simple_inference/1_classification/vpc_jpeg_resnet50_imagenet_classification/README_CN.md index 8ba98cb9f..4b88f9250 100644 --- a/cplusplus/level2_simple_inference/1_classification/vpc_jpeg_resnet50_imagenet_classification/README_CN.md +++ b/cplusplus/level2_simple_inference/1_classification/vpc_jpeg_resnet50_imagenet_classification/README_CN.md @@ -85,19 +85,19 @@ 2. 将ResNet-50网络转换为适配昇腾AI处理器的离线模型(\*.om文件),转换模型时,需配置色域转换参数,用于将YUV420SP格式的图片转换为RGB格式的图片。 - 切换到样例目录,执行如下命令: + 切换到样例目录,执行atc命令,命令示例如下,请根据参数说明调整参数值后,再执行命令: ``` - atc --model=caffe_model/resnet50.prototxt --weight=caffe_model/resnet50.caffemodel --framework=0 --soc_version=Ascend310 --insert_op_conf=caffe_model/aipp.cfg --output=model/resnet50_aipp + atc --model=caffe_model/resnet50.prototxt --weight=caffe_model/resnet50.caffemodel --framework=0 --soc_version= --insert_op_conf=caffe_model/aipp.cfg --output=model/resnet50_aipp ``` - - --model:原始模型文件路径。 - - --weight:权重文件路径。 - - --framework:原始框架类型。0:表示Caffe;1:表示MindSpore;3:表示TensorFlow;5:表示ONNX。 - - --soc\_version:昇腾AI处理器的版本。进入“CANN软件安装目录/compiler/data/platform_config”目录,".ini"文件的文件名即为昇腾AI处理器的版本,请根据实际情况选择。 + - --model:原始模型文件路径。直接使用命令示例中的路径,无需修改。 + - --weight:权重文件路径。直接使用命令示例中的路径,无需修改。 + - --framework:原始框架类型。0:表示Caffe;1:表示MindSpore;3:表示TensorFlow;5:表示ONNX。直接使用命令示例中的参数值,无需修改。 + - --soc\_version:昇腾AI处理器的版本。需根据实际情况修改。 若无法确定当前设备的soc_version,可在安装昇腾AI处理器的服务器上执行npu-smi 
info命令进行查询,获取Name信息,实际配置值为AscendName,例如Name取值为xxxyy,实际配置值为Ascendxxxyy。 - - --insert\_op\_conf:插入AIPP(AI Preprocessing)算子的配置文件路径,用于在AI Core上完成图像预处理,包括改变图像尺寸、色域转换(转换图像格式)、减均值/乘系数(改变图像像素),数据处理之后再进行真正的模型推理。 - - --output:生成的resnet50\_aipp.om文件存放在“样例目录/model“目录下。建议使用命令中的默认设置,否则在编译代码前,您还需要修改sample\_process.cpp中的omModelPath参数值。 + - --insert\_op\_conf:插入AIPP(AI Preprocessing)算子的配置文件路径,用于在AI Core上完成图像预处理,包括改变图像尺寸、色域转换(转换图像格式)、减均值/乘系数(改变图像像素),数据处理之后再进行真正的模型推理。直接使用命令示例中的路径,无需修改。 + - --output:生成的resnet50\_aipp.om文件存放在“样例目录/model“目录下。建议使用命令示例中的路径,否则在编译代码前,您还需要修改sample\_process.cpp中的omModelPath参数值。 ``` const char* omModelPath = "../model/resnet50_aipp.om"; -- Gitee From 19d72e6d65b98175a3a8be9a1bf15a76e360973e Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?=E5=8F=B6=E6=88=90=E7=BE=8E?= Date: Wed, 17 Sep 2025 07:29:59 +0000 Subject: [PATCH 13/18] update /1_classification/vpc_jpeg_resnet50_imagenet_classification/README.md. MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit Signed-off-by: 叶成美 --- .../vpc_jpeg_resnet50_imagenet_classification/README.md | 6 +++--- 1 file changed, 3 insertions(+), 3 deletions(-) diff --git a/cplusplus/level2_simple_inference/1_classification/vpc_jpeg_resnet50_imagenet_classification/README.md b/cplusplus/level2_simple_inference/1_classification/vpc_jpeg_resnet50_imagenet_classification/README.md index 969ef5dc4..b12aa662a 100644 --- a/cplusplus/level2_simple_inference/1_classification/vpc_jpeg_resnet50_imagenet_classification/README.md +++ b/cplusplus/level2_simple_inference/1_classification/vpc_jpeg_resnet50_imagenet_classification/README.md @@ -83,16 +83,16 @@ The sample directory is organized as follows: 2. Convert the ResNet-50 network into an offline model \(.om file\) that adapts to Ascend AI Processors. During model conversion, you need to set CSC parameters to convert YUV420SP images to RGB images. 
- Go to the sample directory and run the following command (take Ascend310 as an example): + Go to the sample directory and execute the atc command. The command example is as follows. Please adjust the parameter values according to the parameter descriptions before executing the command. ``` - atc --model=caffe_model/resnet50.prototxt --weight=caffe_model/resnet50.caffemodel --framework=0 --soc_version=Ascend310 --insert_op_conf=caffe_model/aipp.cfg --output=model/resnet50_aipp + atc --model=caffe_model/resnet50.prototxt --weight=caffe_model/resnet50.caffemodel --framework=0 --soc_version= --insert_op_conf=caffe_model/aipp.cfg --output=model/resnet50_aipp ``` - **--model**: directory of the source model file. - **--weight**: directory of the weight file. - **--framework**: source framework type, selected from **0** \(Caffe\), **1** \(MindSpore\), **3** \(TensorFlow\), and **5** \(ONNX\). - - **--soc\_version**: Version of the Ascend AI processor. Go to the CANN software installation directory/compiler/data/platform_config directory. The name of the .ini file is the version of the Ascend AI processor. Select the version as required. + - **--soc\_version**: Version of the Ascend AI processor. Modify the value as required. If the value cannot be determined, please run the **npu-smi info** command on the server where the Ascend AI Processor is installed to obtain the Chip Name information. The actual value is Ascend+Name. For example, if Chip Name is xxxyy, the actual value is Ascendxxxyy. - **--insert\_op\_conf**: path of the configuration file for inserting the AI Pre-Processing \(AIPP\) operator for AI Core–based image preprocessing including image resizing, CSC, and mean subtraction and factor multiplication \(for pixel changing\), prior to model inference. - **--output**: directory for storing the generated **resnet50\_aipp.om** file, that is, **/model** under the sample directory. The default path in the command example is recommended. 
To specify another path, you need to change the value of **omModelPath** in **sample\_process.cpp** before building the code. -- Gitee From c8f11df94915e0f0f2c71ba3373995d1d43427fe Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?=E5=8F=B6=E6=88=90=E7=BE=8E?= Date: Wed, 17 Sep 2025 07:32:28 +0000 Subject: [PATCH 14/18] update cplusplus/level2_simple_inference/1_classification/vdec_resnet50_classification/README_CN.md. MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit Signed-off-by: 叶成美 --- .../vdec_resnet50_classification/README_CN.md | 16 ++++++++-------- 1 file changed, 8 insertions(+), 8 deletions(-) diff --git a/cplusplus/level2_simple_inference/1_classification/vdec_resnet50_classification/README_CN.md b/cplusplus/level2_simple_inference/1_classification/vdec_resnet50_classification/README_CN.md index 3514b5c4f..63f620acb 100644 --- a/cplusplus/level2_simple_inference/1_classification/vdec_resnet50_classification/README_CN.md +++ b/cplusplus/level2_simple_inference/1_classification/vdec_resnet50_classification/README_CN.md @@ -89,19 +89,19 @@ 2. 
将ResNet-50网络转换为适配昇腾AI处理器的离线模型(\*.om文件),转换模型时,需配置色域转换参数,用于将YUV420SP格式的图片转换为RGB格式的图片。 - 切换到样例目录,执行如下命令(以昇腾310 AI处理器为例): + 切换到样例目录,执行atc命令,命令示例如下,请根据参数说明调整参数值后,再执行命令: ``` - atc --model=caffe_model/resnet50.prototxt --weight=caffe_model/resnet50.caffemodel --framework=0 --soc_version=Ascend310 --insert_op_conf=caffe_model/aipp.cfg --output=model/resnet50_aipp + atc --model=caffe_model/resnet50.prototxt --weight=caffe_model/resnet50.caffemodel --framework=0 --soc_version=<soc_version> --insert_op_conf=caffe_model/aipp.cfg --output=model/resnet50_aipp ``` - - --model:原始模型文件路径。 - - --weight:权重文件路径。 - - --framework:原始框架类型。0:表示Caffe;1:表示MindSpore;3:表示TensorFlow;5:表示ONNX。 - - --soc\_version:昇腾AI处理器的版本。进入“CANN软件安装目录/compiler/data/platform_config”目录,".ini"文件的文件名即为昇腾AI处理器的版本,请根据实际情况选择。 + - --model:原始模型文件路径。直接使用命令示例中的路径,无需修改。 + - --weight:权重文件路径。直接使用命令示例中的路径,无需修改。 + - --framework:原始框架类型。0:表示Caffe;1:表示MindSpore;3:表示TensorFlow;5:表示ONNX。直接使用命令示例中的参数值,无需修改。 + - --soc\_version:昇腾AI处理器的版本。需根据实际情况修改。若无法确定当前设备的soc_version,可在安装昇腾AI处理器的服务器上执行npu-smi info命令进行查询,获取Name信息,实际配置值为Ascend+Name,例如Name取值为xxxyy,实际配置值为Ascendxxxyy。 - - --insert\_op\_conf:插入AIPP(AI Preprocessing)算子的配置文件路径,用于在AI Core上完成图像预处理,包括改变图像尺寸、色域转换(转换图像格式)、减均值/乘系数(改变图像像素),数据处理之后再进行真正的模型推理。 - - --output:生成的resnet50\_aipp.om文件存放在“样例目录/model“目录下。建议使用命令中的默认设置,否则在编译代码前,您还需要修改sample\_process.cpp中的omModelPath参数值。 + - --insert\_op\_conf:插入AIPP(AI Preprocessing)算子的配置文件路径,用于在AI Core上完成图像预处理,包括改变图像尺寸、色域转换(转换图像格式)、减均值/乘系数(改变图像像素),数据处理之后再进行真正的模型推理。直接使用命令示例中的路径,无需修改。 + - --output:生成的resnet50\_aipp.om文件存放在“样例目录/model”目录下。建议使用命令示例中的路径,否则在编译代码前,您还需要修改sample\_process.cpp中的omModelPath参数值。 ``` const char* omModelPath = "../model/resnet50_aipp.om"; -- Gitee From 518e23e1242db68441376ba608698e2590b43029 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?=E5=8F=B6=E6=88=90=E7=BE=8E?= Date: Wed, 17 Sep 2025 07:33:49 +0000 Subject: [PATCH 15/18] update cplusplus/level2_simple_inference/1_classification/vdec_resnet50_classification/README.md. 
MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit Signed-off-by: 叶成美 --- .../1_classification/vdec_resnet50_classification/README.md | 6 +++--- 1 file changed, 3 insertions(+), 3 deletions(-) diff --git a/cplusplus/level2_simple_inference/1_classification/vdec_resnet50_classification/README.md b/cplusplus/level2_simple_inference/1_classification/vdec_resnet50_classification/README.md index 677245a68..add09705c 100644 --- a/cplusplus/level2_simple_inference/1_classification/vdec_resnet50_classification/README.md +++ b/cplusplus/level2_simple_inference/1_classification/vdec_resnet50_classification/README.md @@ -76,16 +76,16 @@ The sample directory is organized as follows: 2. Convert the ResNet-50 network into an offline model \(.om file\) that adapts to Ascend AI Processors. During model conversion, you need to set CSC parameters to convert YUV420SP images to RGB images. - Go to the sample directory and run the following command (take Ascend310 as an example): + Go to the sample directory and execute the atc command. The command example is as follows. Please adjust the parameter values according to the parameter description before executing the command. ``` - atc --model=caffe_model/resnet50.prototxt --weight=caffe_model/resnet50.caffemodel --framework=0 --soc_version=Ascend310 --insert_op_conf=caffe_model/aipp.cfg --output=model/resnet50_aipp + atc --model=caffe_model/resnet50.prototxt --weight=caffe_model/resnet50.caffemodel --framework=0 --soc_version=<soc_version> --insert_op_conf=caffe_model/aipp.cfg --output=model/resnet50_aipp ``` - **--model**: directory of the source model file. - **--weight**: directory of the weight file. - **--framework**: source framework type, selected from **0** \(Caffe\), **1** \(MindSpore\), **3** \(TensorFlow\), and **5** \(ONNX\). - - **--soc\_version**: Version of the Ascend AI processor. Go to the CANN software installation directory/compiler/data/platform_config directory. 
The name of the .ini file is the version of the Ascend AI processor. Select the version as required. + - **--soc\_version**: Version of the Ascend AI processor. Modify the value as required. If the soc_version cannot be determined, please run the **npu-smi info** command on the server where the Ascend AI Processor is installed to obtain the Chip Name information. The actual value is Ascend+Name. For example, if Chip Name is xxxyy, the actual value is Ascendxxxyy. - **--insert\_op\_conf**: path of the configuration file for inserting the AI Pre-Processing \(AIPP\) operator for AI Core–based image preprocessing including image resizing, CSC, and mean subtraction and factor multiplication \(for pixel changing\), prior to model inference. - **--output**: directory for storing the generated **resnet50\_aipp.om** file, that is, **/model** under the sample directory. The default path in the command example is recommended. To specify another path, you need to change the value of **omModelPath** in **sample\_process.cpp** before building the code. -- Gitee From 5191bef278d8304f655c86ad0c79cb7738564683 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?=E5=8F=B6=E6=88=90=E7=BE=8E?= Date: Wed, 17 Sep 2025 07:36:59 +0000 Subject: [PATCH 16/18] update 2_object_detection/YOLOV3_dynamic_batch_detection_picture/README_CN.md. 
MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit Signed-off-by: 叶成美 --- .../README_CN.md | 26 +++++++++---------- 1 file changed, 13 insertions(+), 13 deletions(-) diff --git a/cplusplus/level2_simple_inference/2_object_detection/YOLOV3_dynamic_batch_detection_picture/README_CN.md b/cplusplus/level2_simple_inference/2_object_detection/YOLOV3_dynamic_batch_detection_picture/README_CN.md index 10943ae27..4772e1b09 100644 --- a/cplusplus/level2_simple_inference/2_object_detection/YOLOV3_dynamic_batch_detection_picture/README_CN.md +++ b/cplusplus/level2_simple_inference/2_object_detection/YOLOV3_dynamic_batch_detection_picture/README_CN.md @@ -65,27 +65,27 @@ 2. 切换到样例目录,将yolov3网络转换为适配昇腾AI处理器的离线模型(\*.om文件)。 - 如果模型推理的输入数据是动态Batch的,执行如下命令转换模型(以昇腾310 AI处理器为例): + 如果模型推理的输入数据是动态Batch的,命令示例如下,请根据参数说明调整参数值后,再执行命令: ``` - atc --model=caffe_model/yolov3.prototxt --weight=caffe_model/yolov3.caffemodel --framework=0 --input_shape="data:-1,3,416,416;img_info:-1,4" --input_format=NCHW --dynamic_batch_size="1,2,4,8" --soc_version=Ascend310 --output=model/yolov3_dynamic_batch + atc --model=caffe_model/yolov3.prototxt --weight=caffe_model/yolov3.caffemodel --framework=0 --input_shape="data:-1,3,416,416;img_info:-1,4" --input_format=NCHW --dynamic_batch_size="1,2,4,8" --soc_version=<soc_version> --output=model/yolov3_dynamic_batch ``` - 如果模型推理的输入数据是动态分辨率的,执行如下命令转换模型(以昇腾310 AI处理器为例): + 如果模型推理的输入数据是动态分辨率的,命令示例如下,请根据参数说明调整参数值后,再执行命令: ``` - atc --model=caffe_model/yolov3.prototxt --weight=caffe_model/yolov3.caffemodel --framework=0 --input_shape="data:1,3,-1,-1" --input_format=NCHW --dynamic_image_size="416,416;832,832;1248,1248" --soc_version=Ascend310 --output=model/yolov3_dynamic_hw + atc --model=caffe_model/yolov3.prototxt --weight=caffe_model/yolov3.caffemodel --framework=0 --input_shape="data:1,3,-1,-1" --input_format=NCHW --dynamic_image_size="416,416;832,832;1248,1248" --soc_version=<soc_version> --output=model/yolov3_dynamic_hw ``` - - --model:原始模型文件路径。 - - 
--weight:权重文件路径。 - - --framework:原始框架类型。0:表示Caffe;1:表示MindSpore;3:表示TensorFlow;5:表示ONNX。 - - --input\_shape:模型输入数据的Shape。 - - --input\_format:模型输入数据的Format。 - - --dynamic\_batch\_size:设置动态Batch档位参数,适用于执行推理时,每次处理图片数量不固定的场景。 - - --dynamic\_image\_size:设置输入图片的动态分辨率参数,适用于执行推理时,每次处理图片宽和高不固定的场景。 - - --soc\_version:昇腾AI处理器的版本。进入“CANN软件安装目录/compiler/data/platform_config”目录,".ini"文件的文件名即为昇腾AI处理器的版本,请根据实际情况选择。 - - --output:生成的yolov3\_dynamic\_batch.om或者yolov3\_dynamic\_hw.om文件存放在“样例目录/model“目录下。建议使用命令中的默认设置,否则在编译代码前,您还需要修改sample\_process.cpp中的omModelPath参数值。 + - --model:原始模型文件路径。直接使用命令示例中的路径,无需修改。 + - --weight:权重文件路径。直接使用命令示例中的路径,无需修改。 + - --framework:原始框架类型。0:表示Caffe;1:表示MindSpore;3:表示TensorFlow;5:表示ONNX。直接使用命令示例中的参数值,无需修改。 + - --input\_shape:模型输入数据的Shape。直接使用命令示例中的参数值,无需修改。 + - --input\_format:模型输入数据的Format。直接使用命令示例中的参数值,无需修改。 + - --dynamic\_batch\_size:设置动态Batch档位参数,适用于执行推理时,每次处理图片数量不固定的场景。直接使用命令示例中的参数值,无需修改。 + - --dynamic\_image\_size:设置输入图片的动态分辨率参数,适用于执行推理时,每次处理图片宽和高不固定的场景。直接使用命令示例中的参数值,无需修改。 + - --soc\_version:昇腾AI处理器的版本。需根据实际情况修改。若无法确定当前设备的soc_version,可在安装昇腾AI处理器的服务器上执行npu-smi info命令进行查询,获取Name信息,实际配置值为Ascend+Name,例如Name取值为xxxyy,实际配置值为Ascendxxxyy。 + - --output:生成的yolov3\_dynamic\_batch.om或者yolov3\_dynamic\_hw.om文件存放在“样例目录/model”目录下。建议使用命令示例中的路径,否则在编译代码前,您还需要修改sample\_process.cpp中的omModelPath参数值。 ``` string omModelPath = "../model/yolov3_dynamic_batch.om"; -- Gitee From d93f719c978535b6744f3b9664b4d6846bc0962f Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?=E5=8F=B6=E6=88=90=E7=BE=8E?= Date: Wed, 17 Sep 2025 07:39:11 +0000 Subject: [PATCH 17/18] update 2_object_detection/YOLOV3_dynamic_batch_detection_picture/README.md. 
MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit Signed-off-by: 叶成美 --- .../YOLOV3_dynamic_batch_detection_picture/README.md | 10 +++++----- 1 file changed, 5 insertions(+), 5 deletions(-) diff --git a/cplusplus/level2_simple_inference/2_object_detection/YOLOV3_dynamic_batch_detection_picture/README.md b/cplusplus/level2_simple_inference/2_object_detection/YOLOV3_dynamic_batch_detection_picture/README.md index b848f2205..ff603339c 100644 --- a/cplusplus/level2_simple_inference/2_object_detection/YOLOV3_dynamic_batch_detection_picture/README.md +++ b/cplusplus/level2_simple_inference/2_object_detection/YOLOV3_dynamic_batch_detection_picture/README.md @@ -69,16 +69,16 @@ The directory structure of the sample is as follows: 2. Switch to the sample directory and convert the YOLOv3 network into an OM offline model (**\*.om** file) that adapts to the Ascend AI Processor. - If the input for model inference allows a dynamic batch size, run the following command to convert the model (taking the Ascend 310 AI Processor as an example): + If the input for model inference allows a dynamic batch size, the command example is as follows. 
Please adjust the parameter values according to the parameter description before executing the command: ``` - atc --model=caffe_model/yolov3.prototxt --weight=caffe_model/yolov3.caffemodel --framework=0 --input_shape="data:-1,3,416,416;img_info:-1,4" --input_format=NCHW --dynamic_batch_size="1,2,4,8" --soc_version=Ascend310 --output=model/yolov3_dynamic_batch + atc --model=caffe_model/yolov3.prototxt --weight=caffe_model/yolov3.caffemodel --framework=0 --input_shape="data:-1,3,416,416;img_info:-1,4" --input_format=NCHW --dynamic_batch_size="1,2,4,8" --soc_version=<soc_version> --output=model/yolov3_dynamic_batch ``` - If the input for model inference allows a dynamic image size, run the following command to convert the model (taking the Ascend 310 AI Processor as an example): + If the input for model inference allows a dynamic image size, the command example is as follows. Please adjust the parameter values according to the parameter description before executing the command: ``` - atc --model=caffe_model/yolov3.prototxt --weight=caffe_model/yolov3.caffemodel --framework=0 --input_shape="data:1,3,-1,-1" --input_format=NCHW --dynamic_image_size="416,416;832,832;1248,1248" --soc_version=Ascend310 --output=model/yolov3_dynamic_hw + atc --model=caffe_model/yolov3.prototxt --weight=caffe_model/yolov3.caffemodel --framework=0 --input_shape="data:1,3,-1,-1" --input_format=NCHW --dynamic_image_size="416,416;832,832;1248,1248" --soc_version=<soc_version> --output=model/yolov3_dynamic_hw ``` - --**model**: directory of the source model file. @@ -88,7 +88,7 @@ The directory structure of the sample is as follows: - --**input\_format**: input format. - --**dynamic\_batch\_size**: dynamic batch size profiles. Applies to the scenario where image count per inference batch is unfixed. - --**dynamic\_image\_size**: dynamic image size profiles. Applies to the scenario where image size per inference batch is unfixed. - --**soc\_version**: Version of the Ascend AI processor. 
Go to the CANN software installation directory/compiler/data/platform_config directory. The name of the .ini file is the version of the Ascend AI processor. Select the version as required. + - --**soc\_version**: Version of the Ascend AI processor. Modify the value as required. If the soc_version cannot be determined, please run the **npu-smi info** command on the server where the Ascend AI Processor is installed to obtain the Chip Name information. The actual value is Ascend+Name. For example, if Chip Name is xxxyy, the actual value is Ascendxxxyy. - --**output**: directory for storing the generated **yolov3\_dynamic\_batch.om** or **yolov3\_dynamic\_hw.om** file, that is, **/model** under the sample directory. The default path in the command example is recommended. To specify another path, change the value of **omModelPath** in **sample\_process.cpp** before building the code. ``` -- Gitee From 4a760f48c98fb675e6ba29e19ac576820e4e8bad Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?=E5=8F=B6=E6=88=90=E7=BE=8E?= Date: Wed, 17 Sep 2025 07:43:10 +0000 Subject: [PATCH 18/18] update cplusplus/level2_simple_inference/1_classification/resnet50_firstapp/README.md. MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit Signed-off-by: 叶成美 --- .../1_classification/resnet50_firstapp/README.md | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/cplusplus/level2_simple_inference/1_classification/resnet50_firstapp/README.md b/cplusplus/level2_simple_inference/1_classification/resnet50_firstapp/README.md index a98268db9..e63c1a3e2 100644 --- a/cplusplus/level2_simple_inference/1_classification/resnet50_firstapp/README.md +++ b/cplusplus/level2_simple_inference/1_classification/resnet50_firstapp/README.md @@ -85,7 +85,7 @@ resnet50_firstapp 2. 
执行模型转换。 - 执行以下命令(以昇腾310 AI处理器为例),将原始模型转换为昇腾AI处理器能识别的\*.om模型文件。请注意,执行命令的用户需具有命令中相关路径的可读、可写权限。以下命令中的“******”请根据实际样例包的存放目录替换、“******”请根据实际昇腾AI处理器版本替换。 + 执行以下命令(以昇腾310 AI处理器为例),将原始模型转换为昇腾AI处理器能识别的\*.om模型文件。请注意,执行命令的用户需具有命令中相关路径的可读、可写权限。以下命令中的“******”请根据实际样例包的存放目录替换。 ``` cd /cplusplus/level2_simple_inference/1_classification/resnet50_firstapp/model ``` @@ -96,7 +96,7 @@ resnet50_firstapp - --model:ResNet-50网络的模型文件路径。 - --framework:原始框架类型。5表示ONNX。 - --output:resnet50.om模型文件的路径。若此处修改模型文件名及存储路径,则需要同步修改src/main.cpp中模型加载处的模型文件名及存储路径,即modelPath变量值,修改代码后需要重新编译。 - - --soc\_version:昇腾AI处理器的版本。 + - --soc\_version:昇腾AI处理器的版本。需根据实际情况修改。若无法确定当前设备的soc_version,可在安装昇腾AI处理器的服务器上执行npu-smi info命令进行查询,获取Name信息,实际配置值为Ascend+Name,例如Name取值为xxxyy,实际配置值为Ascendxxxyy。 关于各参数的详细解释,请参见[《ATC工具使用指南》](https://www.hiascend.com/document/redirect/CannCommunityAtc)。 -- Gitee
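Every patch in this series restates the same rule: the value configured for `--soc_version` is the literal string `Ascend` concatenated with the Name reported by `npu-smi info`. As a sanity check of that rule, here is a minimal shell sketch. It is illustrative only and not part of the patched READMEs: the chip name `310P3` is a made-up example of a Name column value, the `awk` extraction mentioned in the comment is an assumption about the `npu-smi info` table layout, and the atc invocation reuses the gemm sample's parameters from the first patch.

```shell
# Illustrative sketch: assemble --soc_version from the Name reported by
# `npu-smi info`. On a real server you would capture the live output, e.g.
#   chip_name="$(npu-smi info | awk '/^\| [0-9]/ {print $3; exit}')"
# (hypothetical parsing; check your npu-smi output format first).
chip_name="310P3"   # made-up example of a Name column value

# The configured value is "Ascend" concatenated with the Name value.
soc_version="Ascend${chip_name}"

# Plug the result into the atc command from the gemm sample.
atc_cmd="atc --singleop=run/out/test_data/config/gemm.json --soc_version=${soc_version} --output=run/out/op_models"

echo "${soc_version}"
echo "${atc_cmd}"
```

Replacing the hard-coded `chip_name` with the captured `npu-smi info` output yields the value to paste into `--soc_version` in any of the atc commands above.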