From 4e3f62e0f59fae7845915bcfe2ea1eb5fc8ca74e Mon Sep 17 00:00:00 2001 From: ComistryMo <12393292+comistrymo@user.noreply.gitee.com> Date: Fri, 11 Oct 2024 07:05:17 +0000 Subject: [PATCH 01/51] =?UTF-8?q?=E5=88=A0=E9=99=A4=E6=96=87=E4=BB=B6=20co?= =?UTF-8?q?ntrib/BertTextClassification/README.md?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- contrib/BertTextClassification/README.md | 222 ----------------------- 1 file changed, 222 deletions(-) delete mode 100644 contrib/BertTextClassification/README.md diff --git a/contrib/BertTextClassification/README.md b/contrib/BertTextClassification/README.md deleted file mode 100644 index 877a73156..000000000 --- a/contrib/BertTextClassification/README.md +++ /dev/null @@ -1,222 +0,0 @@ -# 文本分类 - -## 1. 介绍 - -文本分类插件基于 MindXSDK 开发,在晟腾芯片上进行文本分类,将分类结果保存。输入一段新闻,可以判断该新闻属于哪个类别。 -该模型支持5个新闻类别:体育、健康、军事、教育、汽车。 - -### 1.1 支持的产品 - -本项目以昇腾Atlas310卡为主要的硬件平台。 - -### 1.2 支持的版本 - -支持的SDK版本为2.0.4。 -支持的CANN版本为5.0.4。 - -### 1.3 软件方案介绍 - -基于MindX SDK的文本分类业务流程为:待分类文本通过预处理,将文本根据字典vocab.txt进行编码,组成numpy形式的向量,将向量通过 appsrc 插件输入,然后由模型推理插件mxpi_tensorinfer得到每种类别的得分,再通过后处理插件mxpi_classpostprocessor将模型输出的结果处理,最后得到该文本的类别。本系统的各模块及功能描述如表1.1所示: - - -表1.1 系统方案各子系统功能描述: - -| 序号 | 子系统 | 功能描述 | -| ---- | ------ | ------------ | -| 1 | 文本输入 | 读取输入文本 | -| 2 | 文本编码 | 根据字典对输入文本编码 | -| 3 | 模型推理 | 对文本编码后的张量进行推理 | -| 5 | 后处理 | 从模型推理结果中寻找对应的分类标签 | -| 7 | 保存结果 | 将分类结果保存文件| - -### 1.4 代码目录结构与说明 - -本工程名称为文本分类,工程目录如下图所示: - -``` -. -│ build.sh -│ README.md -│ tree.txt -│ -├─mxBase -│ │ build.sh -│ │ CMakeLists.txt -│ │ main.cpp -│ │ -│ ├─BertClassification -│ │ BertClassification.cpp -│ │ BertClassification.h -│ │ -│ ├─data -│ │ vocab.txt -│ │ -│ ├─model -│ │ bert_text_classification_labels.names -│ │ -│ ├─out -│ │ prediction_label.txt -│ │ -│ └─test -│ Test.cpp -│ Test.h -│ -└─sdk - │ build.sh - │ flowChart.png - │ main.py - │ run.sh - │ tokenizer.py - │ - ├─config - │ bert_text_classification_aipp_tf.cfg - │ bert_text_classification_labels.names - │ - ├─data - │ vocab.txt - │ - ├─model - │ bert_text_classification_aipp_tf.cfg - │ bert_text_classification_labels.names - │ model_conversion.sh - │ - ├─out - │ prediction_label.txt - │ - ├─pipeline - │ BertTextClassification.pipeline - │ - └─test - test.py - test.sh - test_input.py -``` -### 1.5 技术实现流程图 - -![image](sdk/flowChart.png) - - -## 2 环境依赖 - -推荐系统为ubuntu 18.04,环境依赖软件和版本如下表: - -| 软件名称 | 版本 | -| -------- | ------ | -| cmake | 3.10.2 | -| mxVision | 2.0.4 | -| python | 3.9.2 | - -确保环境中正确安装mxVision SDK。 - -在编译运行项目前,需要设置环境变量: - -``` -export MX_SDK_HOME=${SDK安装路径}/mxVision -export LD_LIBRARY_PATH=${MX_SDK_HOME}/lib:${MX_SDK_HOME}/opensource/lib:${MX_SDK_HOME}/opensource/lib64:/usr/local/Ascend/ascend-toolkit/latest/acllib/lib64:/usr/local/Ascend/driver/lib64:${LD_LIBRARY_PATH} -export PYTHONPATH=${MX_SDK_HOME}/python:${PYTHONPATH} - -export install_path=/usr/local/Ascend/ascend-toolkit/latest -export PATH=/usr/local/python3.9.2/bin:${install_path}/atc/ccec_compiler/bin:${install_path}/atc/bin:$PATH -export LD_LIBRARY_PATH=${install_path}/atc/lib64:$LD_LIBRARY_PATH -export ASCEND_OPP_PATH=${install_path}/opp -``` - -- 环境变量介绍 - -``` -MX_SDK_HOME:MindX SDK mxVision的根安装路径,用于包含MindX SDK提供的所有库和头文件。 -LD_LIBRARY_PATH:提供了MindX SDK已开发的插件和相关的库信息。 -install_path:ascend-toolkit的安装路径。 -PATH:添加python的执行路径和atc转换工具的执行路径。 -LD_LIBRARY_PATH:添加ascend-toolkit和MindX SDK提供的库目录路径。 -ASCEND_OPP_PATH:atc转换工具需要的目录。 -``` - -## 3 模型转换 - -**步骤1** 
请参考https://mindx.sdk.obs.myhuaweicloud.com/mindxsdk-referenceapps%20/contrib/BertTextClassification/bert_text_classification.pb -下载模型的pb文件,存放到开发环境普通用户下的任意目录,例如:$HOME/models/bert_text_classification。 - -**步骤2** 执行以下命令使用atc命令进行模型转换: - -cd $HOME/models/bert_text_classification - -atc --model=bert_text_classification.pb --framework=3 --input_format="ND" --output=bert_text_classification --input_shape="input_1:1,300;input_2:1,300" --out_nodes=dense_1/Softmax:0 --soc_version=Ascend310 --op_select_implmode="high_precision" - -**步骤3** 执行以下命令将转换好的模型复制到项目中model文件夹中: - -``` -cp ./bert_text_classification.om $HOME/sdk/model/ -cp ./bert_text_classification.om $HOME/mxBase/model/ -``` - -**步骤4** 执行成功后终端输出为: - -``` -ATC start working now, please wait for a moment. -ATC run success, welcome to the next use. -``` - -## 4 编译与运行 - -**步骤1** 从https://mindx.sdk.obs.myhuaweicloud.com/mindxsdk-referenceapps%20/contrib/BertTextClassification/data.zip下载测试数据并解压,解压后的sample.txt和test.csv文件放在项目的mxBase/data和sdk/data目录下。 - -**步骤2** 按照第 2 小节 环境依赖 中的步骤设置环境变量。 - -**步骤3** 按照第 3 小节 模型转换 中的步骤获得 om 模型文件。 - -**步骤4** 将本项目代码的文件路径中出现的 ${SDK目录} 替换成自己SDK的存放目录,下面是需要替换的代码。 - -``` -mxBase目录下的CMakeList.txt中的第13行代码: -set(MX_SDK_HOME ${SDK目录}) - -sdk/pipeline目录下BertTextClassification.pipeline文件中的第26行: -"postProcessLibPath": "${SDK目录}/lib/modelpostprocessors/libresnet50postprocess.so" -``` - -**步骤5** pipeline项目运行在sdk目录下执行命令: - -``` -python3 main.py -``` - -命令执行成功后在out目录下生成分类结果文件 prediction_label.txt,查看结果文件验证分类结果。 - -**步骤6** mxBase项目在mxBase目录中,执行以下代码进行编译。 - -``` -mkdir build -cd build -cmake .. -make -``` - -编译完成后,将可执行文件 mxBase_text_classification 移动到mxBase目录下,执行下面代码运行 - -``` -./mxBase_text_classification ./data/sample.txt -``` - -执行成功后在服务器的mxBase/out目录下生成分类结果文件 prediction_label.txt,查看结果文件验证分类结果。 - -## 5 精度测试 - -**步骤1** 已按照上一小节 编译与运行 的步骤将样例运行成功。 - -**步骤2** 下载[数据集](https://mindx.sdk.obs.myhuaweicloud.com/mindxsdk-referenceapps%20/contrib/BertTextClassification/data.zip)后解压,将解压后的test.csv文件分别放在sdk/data目录和mxBase/data目录。 - -**步骤3** pipeline项目中的精度测试文件为sdk/test目录下的test.py,将test.py移到sdk目录下,执行下面代码,得到pipeline的精度测试结果。 - -``` -python3 test.py -``` - -**步骤4** mxBase项目中,将mxBase目录下main.cpp中main方法的全部代码注释,替换为下面代码后执行(即main函数中仅包含以下代码),得到mxBase的精度测试结果。 - -``` -Test::test_accuracy(); -``` - -## 6 其他问题 -1.本项目的设计为限制输入样例为txt文件,其他文件如图片、音频输入则会报错。 \ No newline at end of file -- Gitee From 5eb59c934835cf448ab3a8aed13fd1ea37f1fe4d Mon Sep 17 00:00:00 2001 From: ComistryMo <12393292+comistrymo@user.noreply.gitee.com> Date: Fri, 11 Oct 2024 07:05:49 +0000 Subject: [PATCH 02/51] =?UTF-8?q?=E6=9B=B4=E6=96=B0README.md?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit Signed-off-by: ComistryMo <12393292+comistrymo@user.noreply.gitee.com> --- contrib/BertTextClassification/README.md | 223 +++++++++++++++++++++++ 1 file changed, 223 insertions(+) create mode 100644 contrib/BertTextClassification/README.md diff --git a/contrib/BertTextClassification/README.md b/contrib/BertTextClassification/README.md new file mode 100644 index 000000000..13562df60 --- /dev/null +++ b/contrib/BertTextClassification/README.md @@ -0,0 +1,223 @@ +# 文本分类 + +## 1. 
介绍 + +### 1.1 简介 +文本分类插件基于 MindXSDK 开发,在晟腾芯片上进行文本分类,将分类结果保存。输入一段新闻,可以判断该新闻属于哪个类别。 +该模型支持5个新闻类别:体育、健康、军事、教育、汽车。 + +### 1.2 支持的产品 + +本项目以昇腾Atlas310卡为主要的硬件平台。 + +### 1.3 支持的版本 + +推荐系统为ubuntu 18.04。 + +表1.1 环境依赖软件和版本: + +| 软件名称 | 版本 | +| -------- | ------ | +| cmake | 3.10.2 | +| mxVision | 2.0.4 | +| python | 3.9.2 | +| CANN | 5.0.4 | + +### 1.4 软件方案介绍 + +基于MindX SDK的文本分类业务流程为:待分类文本通过预处理,将文本根据字典vocab.txt进行编码,组成numpy形式的向量,将向量通过 appsrc 插件输入,然后由模型推理插件mxpi_tensorinfer得到每种类别的得分,再通过后处理插件mxpi_classpostprocessor将模型输出的结果处理,最后得到该文本的类别。本系统的各模块及功能描述如表1.2所示: + + +表1.2 系统方案各子系统功能描述: + +| 序号 | 子系统 | 功能描述 | +| ---- | ------ | ------------ | +| 1 | 文本输入 | 读取输入文本 | +| 2 | 文本编码 | 根据字典对输入文本编码 | +| 3 | 模型推理 | 对文本编码后的张量进行推理 | +| 4 | 后处理 | 从模型推理结果中寻找对应的分类标签 | +| 5 | 保存结果 | 将分类结果保存文件| + +### 1.5 代码目录结构与说明 + +本工程名称为文本分类,工程目录如下图所示: + +``` +. +│ build.sh +│ README.md +│ tree.txt +│ +├─mxBase +│ │ build.sh +│ │ CMakeLists.txt +│ │ main.cpp +│ │ +│ ├─BertClassification +│ │ BertClassification.cpp +│ │ BertClassification.h +│ │ +│ ├─data +│ │ vocab.txt +│ │ +│ ├─model +│ │ bert_text_classification_labels.names +│ │ +│ ├─out +│ │ prediction_label.txt +│ │ +│ └─test +│ Test.cpp +│ Test.h +│ +└─sdk + │ build.sh + │ flowChart.png + │ main.py + │ run.sh + │ tokenizer.py + │ + ├─config + │ bert_text_classification_aipp_tf.cfg + │ bert_text_classification_labels.names + │ + ├─data + │ vocab.txt + │ + ├─model + │ bert_text_classification_aipp_tf.cfg + │ bert_text_classification_labels.names + │ model_conversion.sh + │ + ├─out + │ prediction_label.txt + │ + ├─pipeline + │ BertTextClassification.pipeline + │ + └─test + test.py + test.sh + test_input.py +``` +### 1.5 技术实现流程图 + +![image](sdk/flowChart.png) + + +## 2 设置环境变量 + +确保环境中正确安装mxVision SDK。 + +在编译运行项目前,需要设置环境变量: + +``` +export MX_SDK_HOME=${SDK安装路径}/mxVision +export LD_LIBRARY_PATH=${MX_SDK_HOME}/lib:${MX_SDK_HOME}/opensource/lib:${MX_SDK_HOME}/opensource/lib64:/usr/local/Ascend/ascend-toolkit/latest/acllib/lib64:/usr/local/Ascend/driver/lib64:${LD_LIBRARY_PATH} +export PYTHONPATH=${MX_SDK_HOME}/python:${PYTHONPATH} + +export install_path=/usr/local/Ascend/ascend-toolkit/latest +export PATH=/usr/local/python3.9.2/bin:${install_path}/atc/ccec_compiler/bin:${install_path}/atc/bin:$PATH +export LD_LIBRARY_PATH=${install_path}/atc/lib64:$LD_LIBRARY_PATH +export ASCEND_OPP_PATH=${install_path}/opp +``` + +- 环境变量介绍 + +``` +MX_SDK_HOME:MindX SDK mxVision的根安装路径,用于包含MindX SDK提供的所有库和头文件。 +LD_LIBRARY_PATH:提供了MindX SDK已开发的插件和相关的库信息。 +install_path:ascend-toolkit的安装路径。 +PATH:添加python的执行路径和atc转换工具的执行路径。 +LD_LIBRARY_PATH:添加ascend-toolkit和MindX SDK提供的库目录路径。 +ASCEND_OPP_PATH:atc转换工具需要的目录。 +``` + +## 3 准备模型 + +**步骤1** 请参考https://mindx.sdk.obs.myhuaweicloud.com/mindxsdk-referenceapps%20/contrib/BertTextClassification/bert_text_classification.pb +下载模型的pb文件,存放到开发环境普通用户下的任意目录,例如:$HOME/models/bert_text_classification。 + +**步骤2** 执行以下命令使用atc命令进行模型转换: + +cd $HOME/models/bert_text_classification + +atc --model=bert_text_classification.pb --framework=3 --input_format="ND" --output=bert_text_classification --input_shape="input_1:1,300;input_2:1,300" --out_nodes=dense_1/Softmax:0 --soc_version=Ascend310 --op_select_implmode="high_precision" + +**步骤3** 执行以下命令将转换好的模型复制到项目中model文件夹中: + +``` +cp ./bert_text_classification.om $HOME/sdk/model/ +cp ./bert_text_classification.om $HOME/mxBase/model/ +``` + +**步骤4** 执行成功后终端输出为: + +``` +ATC start working now, please wait for a moment. +ATC run success, welcome to the next use. 
+``` + +## 4 编译与运行 + +**步骤1** 从https://mindx.sdk.obs.myhuaweicloud.com/mindxsdk-referenceapps%20/contrib/BertTextClassification/data.zip下载测试数据并解压,解压后的sample.txt和test.csv文件放在项目的mxBase/data和sdk/data目录下。 + +**步骤2** 按照第 2 小节 环境依赖 中的步骤设置环境变量。 + +**步骤3** 按照第 3 小节 模型转换 中的步骤获得 om 模型文件。 + +**步骤4** 将本项目代码的文件路径中出现的 ${SDK目录} 替换成自己SDK的存放目录,下面是需要替换的代码。 + +``` +mxBase目录下的CMakeList.txt中的第13行代码: +set(MX_SDK_HOME ${SDK目录}) + +sdk/pipeline目录下BertTextClassification.pipeline文件中的第26行: +"postProcessLibPath": "${SDK目录}/lib/modelpostprocessors/libresnet50postprocess.so" +``` + +**步骤5** pipeline项目运行在sdk目录下执行命令: + +``` +python3 main.py +``` + +命令执行成功后在out目录下生成分类结果文件 prediction_label.txt,查看结果文件验证分类结果。 + +**步骤6** mxBase项目在mxBase目录中,执行以下代码进行编译。 + +``` +mkdir build +cd build +cmake .. +make +``` + +编译完成后,将可执行文件 mxBase_text_classification 移动到mxBase目录下,执行下面代码运行 + +``` +./mxBase_text_classification ./data/sample.txt +``` + +执行成功后在服务器的mxBase/out目录下生成分类结果文件 prediction_label.txt,查看结果文件验证分类结果。 + +## 5 精度测试 + +**步骤1** 已按照上一小节 编译与运行 的步骤将样例运行成功。 + +**步骤2** 下载[数据集](https://mindx.sdk.obs.myhuaweicloud.com/mindxsdk-referenceapps%20/contrib/BertTextClassification/data.zip)后解压,将解压后的test.csv文件分别放在sdk/data目录和mxBase/data目录。 + +**步骤3** pipeline项目中的精度测试文件为sdk/test目录下的test.py,将test.py移到sdk目录下,执行下面代码,得到pipeline的精度测试结果。 + +``` +python3 test.py +``` + +**步骤4** mxBase项目中,将mxBase目录下main.cpp中main方法的全部代码注释,替换为下面代码(即main函数中仅包含以下代码),然后重新编译并运行,得到mxBase的精度测试结果。 + +``` +Test::test_accuracy(); +``` + +## 6 其他问题 +1.本项目的设计为限制输入样例为txt文件,其他文件如图片、音频输入则会报错。 \ No newline at end of file -- Gitee From aa170cfafb240d55d5f71ae01ba00fd9275089b3 Mon Sep 17 00:00:00 2001 From: ComistryMo <12393292+comistrymo@user.noreply.gitee.com> Date: Fri, 11 Oct 2024 07:09:15 +0000 Subject: [PATCH 03/51] =?UTF-8?q?=E5=88=A0=E9=99=A4=E6=96=87=E4=BB=B6=20co?= =?UTF-8?q?ntrib/BertTextClassification/README.md?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- contrib/BertTextClassification/README.md | 223 ----------------------- 1 file changed, 223 deletions(-) delete mode 100644 contrib/BertTextClassification/README.md diff --git a/contrib/BertTextClassification/README.md b/contrib/BertTextClassification/README.md deleted file mode 100644 index 13562df60..000000000 --- a/contrib/BertTextClassification/README.md +++ /dev/null @@ -1,223 +0,0 @@ -# 文本分类 - -## 1. 介绍 - -### 1.1 简介 -文本分类插件基于 MindXSDK 开发,在晟腾芯片上进行文本分类,将分类结果保存。输入一段新闻,可以判断该新闻属于哪个类别。 -该模型支持5个新闻类别:体育、健康、军事、教育、汽车。 - -### 1.2 支持的产品 - -本项目以昇腾Atlas310卡为主要的硬件平台。 - -### 1.3 支持的版本 - -推荐系统为ubuntu 18.04。 - -表1.1 环境依赖软件和版本: - -| 软件名称 | 版本 | -| -------- | ------ | -| cmake | 3.10.2 | -| mxVision | 2.0.4 | -| python | 3.9.2 | -| CANN | 5.0.4 | - -### 1.4 软件方案介绍 - -基于MindX SDK的文本分类业务流程为:待分类文本通过预处理,将文本根据字典vocab.txt进行编码,组成numpy形式的向量,将向量通过 appsrc 插件输入,然后由模型推理插件mxpi_tensorinfer得到每种类别的得分,再通过后处理插件mxpi_classpostprocessor将模型输出的结果处理,最后得到该文本的类别。本系统的各模块及功能描述如表1.2所示: - - -表1.2 系统方案各子系统功能描述: - -| 序号 | 子系统 | 功能描述 | -| ---- | ------ | ------------ | -| 1 | 文本输入 | 读取输入文本 | -| 2 | 文本编码 | 根据字典对输入文本编码 | -| 3 | 模型推理 | 对文本编码后的张量进行推理 | -| 4 | 后处理 | 从模型推理结果中寻找对应的分类标签 | -| 5 | 保存结果 | 将分类结果保存文件| - -### 1.5 代码目录结构与说明 - -本工程名称为文本分类,工程目录如下图所示: - -``` -. 
-│ build.sh -│ README.md -│ tree.txt -│ -├─mxBase -│ │ build.sh -│ │ CMakeLists.txt -│ │ main.cpp -│ │ -│ ├─BertClassification -│ │ BertClassification.cpp -│ │ BertClassification.h -│ │ -│ ├─data -│ │ vocab.txt -│ │ -│ ├─model -│ │ bert_text_classification_labels.names -│ │ -│ ├─out -│ │ prediction_label.txt -│ │ -│ └─test -│ Test.cpp -│ Test.h -│ -└─sdk - │ build.sh - │ flowChart.png - │ main.py - │ run.sh - │ tokenizer.py - │ - ├─config - │ bert_text_classification_aipp_tf.cfg - │ bert_text_classification_labels.names - │ - ├─data - │ vocab.txt - │ - ├─model - │ bert_text_classification_aipp_tf.cfg - │ bert_text_classification_labels.names - │ model_conversion.sh - │ - ├─out - │ prediction_label.txt - │ - ├─pipeline - │ BertTextClassification.pipeline - │ - └─test - test.py - test.sh - test_input.py -``` -### 1.5 技术实现流程图 - -![image](sdk/flowChart.png) - - -## 2 设置环境变量 - -确保环境中正确安装mxVision SDK。 - -在编译运行项目前,需要设置环境变量: - -``` -export MX_SDK_HOME=${SDK安装路径}/mxVision -export LD_LIBRARY_PATH=${MX_SDK_HOME}/lib:${MX_SDK_HOME}/opensource/lib:${MX_SDK_HOME}/opensource/lib64:/usr/local/Ascend/ascend-toolkit/latest/acllib/lib64:/usr/local/Ascend/driver/lib64:${LD_LIBRARY_PATH} -export PYTHONPATH=${MX_SDK_HOME}/python:${PYTHONPATH} - -export install_path=/usr/local/Ascend/ascend-toolkit/latest -export PATH=/usr/local/python3.9.2/bin:${install_path}/atc/ccec_compiler/bin:${install_path}/atc/bin:$PATH -export LD_LIBRARY_PATH=${install_path}/atc/lib64:$LD_LIBRARY_PATH -export ASCEND_OPP_PATH=${install_path}/opp -``` - -- 环境变量介绍 - -``` -MX_SDK_HOME:MindX SDK mxVision的根安装路径,用于包含MindX SDK提供的所有库和头文件。 -LD_LIBRARY_PATH:提供了MindX SDK已开发的插件和相关的库信息。 -install_path:ascend-toolkit的安装路径。 -PATH:添加python的执行路径和atc转换工具的执行路径。 -LD_LIBRARY_PATH:添加ascend-toolkit和MindX SDK提供的库目录路径。 -ASCEND_OPP_PATH:atc转换工具需要的目录。 -``` - -## 3 准备模型 - -**步骤1** 请参考https://mindx.sdk.obs.myhuaweicloud.com/mindxsdk-referenceapps%20/contrib/BertTextClassification/bert_text_classification.pb -下载模型的pb文件,存放到开发环境普通用户下的任意目录,例如:$HOME/models/bert_text_classification。 - -**步骤2** 执行以下命令使用atc命令进行模型转换: - -cd $HOME/models/bert_text_classification - -atc --model=bert_text_classification.pb --framework=3 --input_format="ND" --output=bert_text_classification --input_shape="input_1:1,300;input_2:1,300" --out_nodes=dense_1/Softmax:0 --soc_version=Ascend310 --op_select_implmode="high_precision" - -**步骤3** 执行以下命令将转换好的模型复制到项目中model文件夹中: - -``` -cp ./bert_text_classification.om $HOME/sdk/model/ -cp ./bert_text_classification.om $HOME/mxBase/model/ -``` - -**步骤4** 执行成功后终端输出为: - -``` -ATC start working now, please wait for a moment. -ATC run success, welcome to the next use. -``` - -## 4 编译与运行 - -**步骤1** 从https://mindx.sdk.obs.myhuaweicloud.com/mindxsdk-referenceapps%20/contrib/BertTextClassification/data.zip下载测试数据并解压,解压后的sample.txt和test.csv文件放在项目的mxBase/data和sdk/data目录下。 - -**步骤2** 按照第 2 小节 环境依赖 中的步骤设置环境变量。 - -**步骤3** 按照第 3 小节 模型转换 中的步骤获得 om 模型文件。 - -**步骤4** 将本项目代码的文件路径中出现的 ${SDK目录} 替换成自己SDK的存放目录,下面是需要替换的代码。 - -``` -mxBase目录下的CMakeList.txt中的第13行代码: -set(MX_SDK_HOME ${SDK目录}) - -sdk/pipeline目录下BertTextClassification.pipeline文件中的第26行: -"postProcessLibPath": "${SDK目录}/lib/modelpostprocessors/libresnet50postprocess.so" -``` - -**步骤5** pipeline项目运行在sdk目录下执行命令: - -``` -python3 main.py -``` - -命令执行成功后在out目录下生成分类结果文件 prediction_label.txt,查看结果文件验证分类结果。 - -**步骤6** mxBase项目在mxBase目录中,执行以下代码进行编译。 - -``` -mkdir build -cd build -cmake .. 
-make
-```
-
-编译完成后,将可执行文件 mxBase_text_classification 移动到mxBase目录下,执行下面代码运行
-
-```
-./mxBase_text_classification ./data/sample.txt
-```
-
-执行成功后在服务器的mxBase/out目录下生成分类结果文件 prediction_label.txt,查看结果文件验证分类结果。
-
-## 5 精度测试
-
-**步骤1** 已按照上一小节 编译与运行 的步骤将样例运行成功。
-
-**步骤2** 下载[数据集](https://mindx.sdk.obs.myhuaweicloud.com/mindxsdk-referenceapps%20/contrib/BertTextClassification/data.zip)后解压,将解压后的test.csv文件分别放在sdk/data目录和mxBase/data目录。
-
-**步骤3** pipeline项目中的精度测试文件为sdk/test目录下的test.py,将test.py移到sdk目录下,执行下面代码,得到pipeline的精度测试结果。
-
-```
-python3 test.py
-```
-
-**步骤4** mxBase项目中,将mxBase目录下main.cpp中main方法的全部代码注释,替换为下面代码(即main函数中仅包含以下代码),然后重新编译并运行,得到mxBase的精度测试结果。
-
-```
-Test::test_accuracy();
-```
-
-## 6 其他问题
-1.本项目的设计为限制输入样例为txt文件,其他文件如图片、音频输入则会报错。
\ No newline at end of file
-- 
Gitee


From 30494a121a5eefa1b7b0d5a1489fb69643b204a0 Mon Sep 17 00:00:00 2001
From: ComistryMo <12393292+comistrymo@user.noreply.gitee.com>
Date: Fri, 11 Oct 2024 07:10:13 +0000
Subject: [PATCH 04/51] =?UTF-8?q?=E6=9B=B4=E6=96=B0README.md?=
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

Signed-off-by: ComistryMo <12393292+comistrymo@user.noreply.gitee.com>
---
 contrib/BertTextClassification/README.md | 223 +++++++++++++++++++++++
 1 file changed, 223 insertions(+)
 create mode 100644 contrib/BertTextClassification/README.md

diff --git a/contrib/BertTextClassification/README.md b/contrib/BertTextClassification/README.md
new file mode 100644
index 000000000..119c12544
--- /dev/null
+++ b/contrib/BertTextClassification/README.md
@@ -0,0 +1,223 @@
+# Text Classification
+
+## 1. Introduction
+
+### 1.1 Overview
+The text classification plugin is developed on the MindX SDK and performs text classification on Ascend chips, saving the classification results. Given a piece of news as input, it determines which category the news belongs to.
+The model supports five news categories: sports, health, military, education, and automotive.
+
+### 1.2 Supported Products
+
+This project uses the Ascend Atlas 310 card as its primary hardware platform.
+
+### 1.3 Supported Versions
+
+Ubuntu 18.04 is the recommended operating system.
+
+Table 1.1 Software dependencies and versions:
+
+| Software | Version |
+| -------- | ------ |
+| cmake | 3.10.2 |
+| mxVision | 2.0.4 |
+| python | 3.9.2 |
+| CANN | 5.0.4 |
+
+### 1.4 Software Architecture
+
+The MindX SDK based text classification workflow is as follows: the text to be classified is preprocessed by encoding it against the dictionary vocab.txt into numpy vectors; the vectors are fed in through the appsrc plugin; the model inference plugin mxpi_tensorinfer then produces a score for each category; the post-processing plugin mxpi_classpostprocessor processes the model output; and finally the category of the text is obtained. The modules of the system and their functions are described in Table 1.2:
+
+
+Table 1.2 Subsystems and their functions:
+
+| No. | Subsystem | Function |
+| ---- | ------ | ------------ |
+| 1 | Text input | Read the input text |
+| 2 | Text encoding | Encode the input text against the dictionary |
+| 3 | Model inference | Run inference on the encoded tensors |
+| 4 | Post-processing | Map the inference output to the corresponding class label |
+| 5 | Save results | Save the classification result to a file |
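+
+The text-encoding subsystem follows the usual BERT-style convention: each character is mapped to an id through vocab.txt, and the sequence is padded to the fixed length of 300 that the model was converted with (input_shape "input_1:1,300;input_2:1,300"). The authoritative implementation is sdk/tokenizer.py; the following is only a minimal sketch of the idea, with illustrative helper names that are not the project's actual API, and the second model input is assumed to be the segment vector:
+
+```
+import numpy as np
+
+MAX_LEN = 300  # must match the atc input_shape "input_1:1,300;input_2:1,300"
+
+def load_vocab(path="data/vocab.txt"):
+    # one token per line; the line number is the token id
+    with open(path, encoding="utf-8") as f:
+        return {line.strip(): i for i, line in enumerate(f)}
+
+def encode(text, vocab):
+    # character-level tokens wrapped in the BERT special tokens
+    tokens = ["[CLS]"] + list(text)[:MAX_LEN - 2] + ["[SEP]"]
+    ids = [vocab.get(t, vocab["[UNK]"]) for t in tokens]
+    ids += [0] * (MAX_LEN - len(ids))   # zero-pad to the fixed length
+    segments = [0] * MAX_LEN            # assumed second input (segment ids): all zeros
+    return (np.array([ids], dtype=np.int32),
+            np.array([segments], dtype=np.int32))
+```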
+### 1.5 Code Directory Structure
+
+The project is named text classification; the directory layout is as follows:
+
+```
+.
+│  build.sh
+│  README.md
+│  tree.txt
+│
+├─mxBase
+│  │  build.sh
+│  │  CMakeLists.txt
+│  │  main.cpp
+│  │
+│  ├─BertClassification
+│  │      BertClassification.cpp
+│  │      BertClassification.h
+│  │
+│  ├─data
+│  │      vocab.txt
+│  │
+│  ├─model
+│  │      bert_text_classification_labels.names
+│  │
+│  ├─out
+│  │      prediction_label.txt
+│  │
+│  └─test
+│          Test.cpp
+│          Test.h
+│
+└─sdk
+    │  build.sh
+    │  flowChart.png
+    │  main.py
+    │  run.sh
+    │  tokenizer.py
+    │
+    ├─config
+    │      bert_text_classification_aipp_tf.cfg
+    │      bert_text_classification_labels.names
+    │
+    ├─data
+    │      vocab.txt
+    │
+    ├─model
+    │      bert_text_classification_aipp_tf.cfg
+    │      bert_text_classification_labels.names
+    │      model_conversion.sh
+    │
+    ├─out
+    │      prediction_label.txt
+    │
+    ├─pipeline
+    │      BertTextClassification.pipeline
+    │
+    └─test
+            test.py
+            test.sh
+            test_input.py
+```
+### 1.6 Technical Flowchart
+
+![image](sdk/flowChart.png)
+
+
+## 2 Setting Environment Variables
+
+Make sure the mxVision SDK is correctly installed in the environment.
+
+Before building and running the project, set the environment variables:
+
+```
+export MX_SDK_HOME=${SDK_INSTALL_PATH}/mxVision
+export LD_LIBRARY_PATH=${MX_SDK_HOME}/lib:${MX_SDK_HOME}/opensource/lib:${MX_SDK_HOME}/opensource/lib64:/usr/local/Ascend/ascend-toolkit/latest/acllib/lib64:/usr/local/Ascend/driver/lib64:${LD_LIBRARY_PATH}
+export PYTHONPATH=${MX_SDK_HOME}/python:${PYTHONPATH}
+
+export install_path=/usr/local/Ascend/ascend-toolkit/latest
+export PATH=/usr/local/python3.9.2/bin:${install_path}/atc/ccec_compiler/bin:${install_path}/atc/bin:$PATH
+export LD_LIBRARY_PATH=${install_path}/atc/lib64:$LD_LIBRARY_PATH
+export ASCEND_OPP_PATH=${install_path}/opp
+```
+
+- Environment variable notes
+
+```
+MX_SDK_HOME: root installation path of MindX SDK mxVision; provides all libraries and headers shipped with the SDK.
+LD_LIBRARY_PATH: locations of the plugins and libraries provided by the MindX SDK.
+install_path: installation path of the ascend-toolkit.
+PATH: adds the python executable path and the atc conversion tool path.
+LD_LIBRARY_PATH: adds the library directories of the ascend-toolkit and the MindX SDK.
+ASCEND_OPP_PATH: directory required by the atc conversion tool.
+```
+
+## 3 Preparing the Model
+
+**Step 1** Download the model's pb file from
+https://mindx.sdk.obs.myhuaweicloud.com/mindxsdk-referenceapps%20/contrib/BertTextClassification/bert_text_classification.pb
+and place it in any directory owned by a regular user on the development machine, e.g. $HOME/models/bert_text_classification.
+
+**Step 2** Convert the model with the atc tool:
+
+```
+cd $HOME/models/bert_text_classification
+
+atc --model=bert_text_classification.pb --framework=3 --input_format="ND" --output=bert_text_classification --input_shape="input_1:1,300;input_2:1,300" --out_nodes=dense_1/Softmax:0 --soc_version=Ascend310 --op_select_implmode="high_precision"
+```
+
+**Step 3** Copy the converted model into the project's model directories:
+
+```
+cp ./bert_text_classification.om $HOME/sdk/model/
+cp ./bert_text_classification.om $HOME/mxBase/model/
+```
+
+**Step 4** If the conversion succeeded, the terminal output ends with:
+
+```
+ATC start working now, please wait for a moment.
+ATC run success, welcome to the next use.
+```
+
+## 4 Building and Running
+
+**Step 1** Download the test data from https://mindx.sdk.obs.myhuaweicloud.com/mindxsdk-referenceapps%20/contrib/BertTextClassification/data.zip and unpack it; place the extracted sample.txt and test.csv in the project's mxBase/data and sdk/data directories.
+
+**Step 2** Set the environment variables as described in Section 2 "Setting Environment Variables".
+
+**Step 3** Obtain the om model file as described in Section 3 "Preparing the Model".
+
+**Step 4** Replace every occurrence of ${SDK_DIR} in the project files with your own SDK installation directory. The places to change are:
+
+```
+Line 13 of CMakeLists.txt in the mxBase directory:
+set(MX_SDK_HOME ${SDK_DIR})
+
+Line 26 of BertTextClassification.pipeline in the sdk/pipeline directory:
+"postProcessLibPath": "${SDK_DIR}/lib/modelpostprocessors/libresnet50postprocess.so"
+```
+
+**Step 5** To run the pipeline version, execute the following in the sdk directory:
+
+```
+python3 main.py
+```
+
+On success, the classification result file prediction_label.txt is generated in the out directory; inspect it to verify the classification results.
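+
+Under the hood, main.py pushes the two encoded vectors into the stream's appsrc plugins as MxpiTensorPackageList protobufs. Below is a minimal sketch of that send path based on the standard mxVision Python API; the stream name and appsrc plugin ids are assumptions and must match the project's pipeline file (see main.py for the authoritative code):
+
+```
+from StreamManagerApi import StreamManagerApi, MxProtobufIn, InProtobufVector
+import MxpiDataType_pb2 as MxpiDataType
+
+def send_tensor(stream_mgr, stream_name, plugin_id, array):
+    # wrap a numpy array into a tensor package understood by mxpi_tensorinfer
+    pkg_list = MxpiDataType.MxpiTensorPackageList()
+    tensor_vec = pkg_list.tensorPackageVec.add().tensorVec.add()
+    tensor_vec.deviceId = 0
+    tensor_vec.memType = 0
+    for dim in array.shape:
+        tensor_vec.tensorShape.append(dim)
+    tensor_vec.dataStr = array.tobytes()
+    tensor_vec.tensorDataSize = len(tensor_vec.dataStr)
+    proto = MxProtobufIn()
+    proto.key = b"appsrc%d" % plugin_id   # e.g. appsrc0 / appsrc1
+    proto.type = b"MxTools.MxpiTensorPackageList"
+    proto.protobuf = pkg_list.SerializeToString()
+    vec = InProtobufVector()
+    vec.push_back(proto)
+    return stream_mgr.SendProtobuf(stream_name, plugin_id, vec)
+```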
+**Step 6** To build the mxBase version, run the following in the mxBase directory:
+
+```
+mkdir build
+cd build
+cmake ..
+make
+```
+
+After the build completes, move the executable mxBase_text_classification into the mxBase directory and run:
+
+```
+./mxBase_text_classification ./data/sample.txt
+```
+
+On success, the classification result file prediction_label.txt is generated in mxBase/out on the server; inspect it to verify the classification results.
+
+## 5 Accuracy Test
+
+**Step 1** Make sure the sample runs successfully as described in Section 4 "Building and Running".
+
+**Step 2** Download the [dataset](https://mindx.sdk.obs.myhuaweicloud.com/mindxsdk-referenceapps%20/contrib/BertTextClassification/data.zip) and unpack it; place the extracted test.csv in both the sdk/data and mxBase/data directories.
+
+**Step 3** The accuracy test for the pipeline version is sdk/test/test.py; move test.py into the sdk directory and run the following to obtain the pipeline accuracy result:
+
+```
+python3 test.py
+```
+
+**Step 4** For the mxBase version, comment out the entire body of main() in mxBase/main.cpp and replace it with the code below (so that main() contains only this call), then rebuild and rerun as in Step 6 of Section 4 to obtain the mxBase accuracy result:
+
+```
+Test::test_accuracy();
+```
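+
+For reference, the accuracy computation on test.csv in both test variants amounts to comparing predicted and ground-truth labels row by row. A minimal sketch is shown below; it assumes test.csv rows carry the text in the first column and the label in the last, and a classify() callable wrapping the inference path above — the project's real loading logic is in sdk/test/test.py and mxBase/test/Test.cpp:
+
+```
+import csv
+
+def accuracy(csv_path, classify):
+    total = correct = 0
+    with open(csv_path, encoding="utf-8") as f:
+        for row in csv.reader(f):
+            text, label = row[0], row[-1]   # assumed column layout
+            total += 1
+            correct += int(classify(text) == label)
+    return correct / total
+```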
+
+## 6 FAQ
+1. The project is designed to accept only txt files as input samples; other inputs such as images or audio will cause an error.
\ No newline at end of file
-- 
Gitee


From aa170cfafb240d55d5f71ae01ba00fd9275089b3 Mon Sep 17 00:00:00 2001
From: ComistryMo <12393292+comistrymo@user.noreply.gitee.com>
Date: Sat, 12 Oct 2024 07:13:23 +0000
Subject: [PATCH 05/51] =?UTF-8?q?=E5=88=A0=E9=99=A4=E6=96=87=E4=BB=B6=20co?=
 =?UTF-8?q?ntrib/EdgeDetectionPicture/CMakeLists.txt?=
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

---
 contrib/EdgeDetectionPicture/CMakeLists.txt | 37 ---------------------
 1 file changed, 37 deletions(-)
 delete mode 100644 contrib/EdgeDetectionPicture/CMakeLists.txt

diff --git a/contrib/EdgeDetectionPicture/CMakeLists.txt b/contrib/EdgeDetectionPicture/CMakeLists.txt
deleted file mode 100644
index 3928c17d7..000000000
--- a/contrib/EdgeDetectionPicture/CMakeLists.txt
+++ /dev/null
@@ -1,37 +0,0 @@
-cmake_minimum_required(VERSION 3.10)
-project(edge_detection_picture)
-
-include_directories(./rcfPostProcess)
-include_directories(./rcfDetection)
-file(GLOB_RECURSE RCF_POSTPROCESS ${PROJECT_SOURCE_DIR}/rcfPostProcess/*cpp)
-file(GLOB_RECURSE RCF_DETECTION ${PROJECT_SOURCE_DIR}/rcfDetection/*cpp)
-set(TARGET edge_detection_picture)
-add_compile_options(-std=c++11 -fPIE -fstack-protector-all -fPIC -Wl,-z,relro,-z,now,-z,noexecstack -s -pie -Wall)
-add_definitions(-D_GLIBCXX_USE_CXX11_ABI=0 -Dgoogle=mindxsdk_private)
-
-set(MX_SDK_HOME "$ENV{MX_SDK_HOME}")
-
-include_directories(
-    ${MX_SDK_HOME}/include
-    ${MX_SDK_HOME}/opensource/include
-    ${MX_SDK_HOME}/opensource/include/opencv4
-)
-
-link_directories(
-    ${MX_SDK_HOME}/lib
-    ${MX_SDK_HOME}/opensource/lib
-    ${MX_SDK_HOME}/lib/modelpostprocessors
-    ${MX_SDK_HOME}/include/MxBase/postprocess/include
-    /usr/local/Ascend/ascend-toolkit/latest/acllib/lib64
-    /usr/local/Ascend/driver/lib64/
-)
-
-add_executable(edge_detection_picture main.cpp ${RCF_DETECTION} ${RCF_POSTPROCESS})
-target_link_libraries(edge_detection_picture
-        glog
-        mxbase
-        opencv_world
-        boost_system
-        boost_filesystem
-        )
-install(TARGETS ${TARGET} RUNTIME DESTINATION ${PROJECT_SOURCE_DIR}/)
-- 
Gitee


From 061a83867b246235cdfa347191776a192943782a Mon Sep 17 00:00:00 2001
From: ComistryMo <12393292+comistrymo@user.noreply.gitee.com>
Date: Sat, 12 Oct 2024 07:13:50 +0000
Subject: [PATCH 06/51] =?UTF-8?q?=E6=9B=B4=E6=96=B0CMake=E4=BD=BF=E5=85=B6?=
 =?UTF-8?q?=E9=80=82=E9=85=8D=E7=89=88=E6=9C=AC?=
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

Signed-off-by: ComistryMo <12393292+comistrymo@user.noreply.gitee.com>
---
 contrib/EdgeDetectionPicture/CMakeLists.txt | 45 +++++++++++++++++++++
 1 file changed, 45 insertions(+)
 create mode 100644 contrib/EdgeDetectionPicture/CMakeLists.txt

diff --git a/contrib/EdgeDetectionPicture/CMakeLists.txt b/contrib/EdgeDetectionPicture/CMakeLists.txt
new file mode 100644
index 000000000..c038f4269
--- /dev/null
+++ b/contrib/EdgeDetectionPicture/CMakeLists.txt
@@ -0,0 +1,45 @@
+cmake_minimum_required(VERSION 3.10)
+project(edge_detection_picture)
+
+include_directories(./rcfPostProcess)
+include_directories(./rcfDetection)
+file(GLOB_RECURSE RCF_POSTPROCESS ${PROJECT_SOURCE_DIR}/rcfPostProcess/*cpp)
+file(GLOB_RECURSE RCF_DETECTION ${PROJECT_SOURCE_DIR}/rcfDetection/*cpp)
+set(TARGET edge_detection_picture)
+add_compile_options(-std=c++11 -fPIE -fstack-protector-all -fPIC -Wl,-z,relro,-z,now,-z,noexecstack -s -pie -Wall)
+add_definitions(-D_GLIBCXX_USE_CXX11_ABI=0 -Dgoogle=mindxsdk_private)
+
+set(MX_SDK_HOME /root/SDK/mxVision)
+
+set(cpprest_DIR ${MX_SDK_HOME}/opensource/lib/libcpprest.so)
+if(EXISTS ${cpprest_DIR})
+    # cpprest ships with newer SDK releases: enable the 5.x code path and link it below
+    add_definitions(-DMX_VERSION_5)
+    list(APPEND EXTRA_LIBS cpprest)
+endif()
+
+include_directories(
+    ${MX_SDK_HOME}/include
+    ${MX_SDK_HOME}/opensource/include
+    ${MX_SDK_HOME}/opensource/include/opencv4
+)
+
+link_directories(
+    ${MX_SDK_HOME}/lib
+    ${MX_SDK_HOME}/opensource/lib
+    ${MX_SDK_HOME}/lib/modelpostprocessors
+    ${MX_SDK_HOME}/include/MxBase/postprocess/include
+    /usr/local/Ascend/ascend-toolkit/latest/acllib/lib64
+    /usr/local/Ascend/driver/lib64/
+)
+
+add_executable(edge_detection_picture main.cpp ${RCF_DETECTION} ${RCF_POSTPROCESS})
+target_link_libraries(edge_detection_picture
+    glog
+    mxbase
+    opencv_world
+    boost_system
+    boost_filesystem ${EXTRA_LIBS}
+    )
+install(TARGETS ${TARGET} RUNTIME DESTINATION ${PROJECT_SOURCE_DIR}/)
-- 
Gitee


From fb9df239a505356210a7f2c62c34abb014b5757e Mon Sep 17 00:00:00 2001
From: ComistryMo <12393292+comistrymo@user.noreply.gitee.com>
Date: Sat, 12 Oct 2024 07:14:00 +0000
Subject: [PATCH 07/51] =?UTF-8?q?=E5=88=A0=E9=99=A4=E6=96=87=E4=BB=B6=20co?=
 =?UTF-8?q?ntrib/EdgeDetectionPicture/rcfDetection?=
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

---
 .../rcfDetection/RcfDetection.cpp             | 281 ------------------
 .../rcfDetection/RcfDetection.h               |  61 ----
 2 files changed, 342 deletions(-)
 delete mode 100644 contrib/EdgeDetectionPicture/rcfDetection/RcfDetection.cpp
 delete mode 100644 contrib/EdgeDetectionPicture/rcfDetection/RcfDetection.h

diff --git a/contrib/EdgeDetectionPicture/rcfDetection/RcfDetection.cpp b/contrib/EdgeDetectionPicture/rcfDetection/RcfDetection.cpp
deleted file mode 100644
index a8a84460c..000000000
--- a/contrib/EdgeDetectionPicture/rcfDetection/RcfDetection.cpp
+++ /dev/null
@@ -1,281 +0,0 @@
-/**
- * Copyright(C) 2021. Huawei Technologies Co.,Ltd. All rights reserved.
- *
- * Licensed under the Apache License, Version 2.0 (the "License");
- * you may not use this file except in compliance with the License.
- * You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */ -#include "RcfDetection.h" -#include "MxBase/DeviceManager/DeviceManager.h" -#include "MxBase/Log/Log.h" -#include "../rcfPostProcess/RcfPostProcess.h" -#include -#include -#include "opencv2/opencv.hpp" - -namespace { - const uint32_t YUV_BYTE_NU = 3; - const uint32_t YUV_BYTE_DE = 2; - const uint32_t VPC_H_ALIGN = 2; - const uint32_t channel = 3; - const double ALPHA = 255.0 / 255; -} -void RcfDetection::SetRcfPostProcessConfig(const InitParam &initParam, - std::map &config) -{ - MxBase::ConfigData configData; - const std::string checkTensor = initParam.checkTensor ? "true" : "false"; - configData.SetJsonValue("OUTSIZE_NUM", std::to_string(initParam.outSizeNum)); - configData.SetJsonValue("OUTSIZE", initParam.outSize); - configData.SetJsonValue("RCF_TYPE", std::to_string(initParam.rcfType)); - configData.SetJsonValue("MODEL_TYPE", std::to_string(initParam.modelType)); - configData.SetJsonValue("INPUT_TYPE", std::to_string(initParam.inputType)); - configData.SetJsonValue("CHECK_MODEL", checkTensor); - auto jsonStr = configData.GetCfgJson().serialize(); - config["postProcessConfigContent"] = jsonStr; -} - -APP_ERROR RcfDetection::Init(const InitParam &initParam) -{ - deviceId_ = initParam.deviceId; - APP_ERROR ret = MxBase::DeviceManager::GetInstance()->InitDevices(); - if (ret != APP_ERR_OK) { - LogError << "Init devices failed, ret=" << ret << "."; - return ret; - } - ret = MxBase::TensorContext::GetInstance()->SetContext(initParam.deviceId); - if (ret != APP_ERR_OK) { - LogError << "Set context failed, ret=" << ret << "."; - return ret; - } - dvppWrapper_ = std::make_shared(); - ret = dvppWrapper_->Init(); - if (ret != APP_ERR_OK) { - LogError << "DvppWrapper init failed, ret=" << ret << "."; - return ret; - } - model_ = std::make_shared(); - ret = model_->Init(initParam.modelPath, modelDesc_); - if (ret != APP_ERR_OK) { - LogError << "ModelInferenceProcessor init failed, ret=" << ret << "."; - return ret; - } - - std::map config; - SetRcfPostProcessConfig(initParam, config); - post_ = std::make_shared(); - ret = post_->Init(config); - if (ret != APP_ERR_OK) { - LogError << "RcfPostProcess init failed, ret=" << ret << "."; - return ret; - } - if (ret != APP_ERR_OK) { - LogError << "Failed to load labels, ret=" << ret << "."; - return ret; - } - return APP_ERR_OK; -} - -APP_ERROR RcfDetection::DeInit() -{ - dvppWrapper_->DeInit(); - model_->DeInit(); - post_->DeInit(); - MxBase::DeviceManager::GetInstance()->DestroyDevices(); - return APP_ERR_OK; -} - -APP_ERROR RcfDetection::ReadImage(const std::string &imgPath, - MxBase::TensorBase &tensor) -{ - MxBase::DvppDataInfo output = {}; - APP_ERROR ret = dvppWrapper_->DvppJpegDecode(imgPath, output); - if (ret != APP_ERR_OK) { - LogError << "DvppWrapper DvppJpegDecode failed, ret=" << ret << "."; - return ret; - } - MxBase::MemoryData memoryData((void*)output.data, output.dataSize, - MxBase::MemoryData::MemoryType::MEMORY_DEVICE, deviceId_); - if (output.heightStride % VPC_H_ALIGN != 0) { - LogError << "Output data height(" << output.heightStride << ") can't be divided by " << VPC_H_ALIGN << "."; - MxBase::MemoryHelper::MxbsFree(memoryData); - return APP_ERR_COMM_INVALID_PARAM; - } - dvppHeightStride = output.heightStride; - dvppWidthStride = output.widthStride; - std::vector shape = {output.heightStride * YUV_BYTE_NU / YUV_BYTE_DE, output.widthStride}; - tensor = MxBase::TensorBase(memoryData, false, shape, MxBase::TENSOR_DTYPE_UINT8); - return APP_ERR_OK; -} -APP_ERROR RcfDetection::Resize(const MxBase::TensorBase &inputTensor, 
MxBase::TensorBase &outputTensor, - uint32_t resizeHeight, uint32_t resizeWidth) -{ - auto shape = inputTensor.GetShape(); - MxBase::DvppDataInfo input = {}; - input.height = (uint32_t)shape[0] * YUV_BYTE_DE / YUV_BYTE_NU; - input.width = shape[1]; - input.heightStride = (uint32_t)shape[0] * YUV_BYTE_DE / YUV_BYTE_NU; - input.widthStride = shape[1]; - input.dataSize = inputTensor.GetByteSize(); - input.data = (uint8_t*)inputTensor.GetBuffer(); - MxBase::ResizeConfig resize = {}; - resize.height = resizeHeight; - resize.width = resizeWidth; - MxBase::DvppDataInfo output = {}; - APP_ERROR ret = dvppWrapper_->VpcResize(input, output, resize); - if (ret != APP_ERR_OK) { - LogError << "VpcResize failed, ret=" << ret << "."; - return ret; - } - MxBase::MemoryData memoryData((void*)output.data, output.dataSize, - MxBase::MemoryData::MemoryType::MEMORY_DEVICE, deviceId_); - if (output.heightStride % VPC_H_ALIGN != 0) { - LogError << "Output data height(" << output.heightStride << ") can't be divided by " << VPC_H_ALIGN << "."; - MxBase::MemoryHelper::MxbsFree(memoryData); - return APP_ERR_COMM_INVALID_PARAM; - } - shape = {1, channel, output.heightStride, output.widthStride}; - outputTensor = MxBase::TensorBase(memoryData, false, shape, MxBase::TENSOR_DTYPE_UINT8); - return APP_ERR_OK; -} - -APP_ERROR RcfDetection::Inference(const std::vector &inputs, - std::vector &outputs) -{ - std::vector output={}; - auto dtypes = model_->GetOutputDataType(); - for (size_t i = 0; i < modelDesc_.outputTensors.size(); ++i) { - std::vector shape = {}; - for (size_t j = 0; j < modelDesc_.outputTensors[i].tensorDims.size(); ++j) { - shape.push_back((uint32_t)modelDesc_.outputTensors[i].tensorDims[j]); - } - MxBase::TensorBase tensor(shape, dtypes[i], MxBase::MemoryData::MemoryType::MEMORY_DEVICE, deviceId_); - APP_ERROR ret = MxBase::TensorBase::TensorBaseMalloc(tensor); - if (ret != APP_ERR_OK) { - LogError << "TensorBaseMalloc failed, ret=" << ret << "."; - return ret; - } - outputs.push_back(tensor); - } - MxBase::DynamicInfo dynamicInfo = {}; - dynamicInfo.dynamicType = MxBase::DynamicType::STATIC_BATCH; - auto startTime = std::chrono::high_resolution_clock::now(); - APP_ERROR ret = model_->ModelInference(inputs, outputs, dynamicInfo); - auto endTime = std::chrono::high_resolution_clock::now(); - double costMs = std::chrono::duration(endTime - startTime).count(); - LogInfo<< "costMs:"<< costMs; - if (ret != APP_ERR_OK) { - LogError << "ModelInference failed, ret=" << ret << "."; - return ret; - } - return APP_ERR_OK; -} - -APP_ERROR RcfDetection::PostProcess(const MxBase::TensorBase &tensor, - const std::vector &outputs, - std::vector &postProcessOutput) -{ - auto shape = tensor.GetShape(); - MxBase::ResizedImageInfo imgInfo; - imgInfo.widthOriginal = shape[1]; - imgInfo.heightOriginal = shape[0] * YUV_BYTE_DE; - uint32_t widthResize = 512; - uint32_t heightResize = 512; - imgInfo.widthResize = widthResize; - imgInfo.heightResize = heightResize; - imgInfo.resizeType = MxBase::RESIZER_STRETCHING; - std::vector imageInfoVec = {}; - imageInfoVec.push_back(imgInfo); - APP_ERROR ret = post_->Process(outputs, postProcessOutput); - if (ret != APP_ERR_OK) { - LogError << "Process failed, ret=" << ret << "."; - return ret; - } - ret = post_->DeInit(); - if (ret != APP_ERR_OK) { - LogError << "RcfDetection DeInit failed"; - return ret; - } - return APP_ERR_OK; -} - -APP_ERROR RcfDetection::WriteResult(MxBase::TensorBase &inferTensor, const std::string &imgPath) -{ - auto shape = inferTensor.GetShape(); - int dim2 = 2; - 
int dim3 = 3; - uint32_t height = shape[dim2]; - uint32_t width = shape[dim3]; - cv::Mat imgBgr = cv::imread(imgPath); - std::string fileName = imgPath.substr(imgPath.find_last_of("/") + 1); - uint32_t imageWidth = imgBgr.cols; - uint32_t imageHeight = imgBgr.rows; - cv::Mat modelOutput = cv::Mat(height, width, CV_32FC1, inferTensor.GetBuffer()); - cv::Mat grayMat; - cv::Mat resizedMat; - int crop = 5; - cv::Rect myROI(0, 0, imageWidth - crop, imageHeight); - resize(modelOutput, resizedMat, cv::Size(dvppWidthStride, dvppHeightStride), 0, 0, cv::INTER_LINEAR); - resizedMat.convertTo(grayMat, CV_8UC1, ALPHA); - cv::Mat croppedImage = grayMat(myROI); - resize(croppedImage, croppedImage, cv::Size(imageWidth, imageHeight), 0, 0, cv::INTER_LINEAR); - - std::string resultPathName = "result"; - if (access(resultPathName.c_str(), 0) != 0) { - int ret = mkdir(resultPathName.c_str(), S_IRUSR | S_IWUSR | S_IXUSR); - if (ret != 0) { - LogError << "Failed to create result directory: " << resultPathName << ", ret = " << ret; - return APP_ERR_COMM_FAILURE; - } - } - - cv::imwrite("./result/"+fileName, croppedImage); - return APP_ERR_OK; -} - -APP_ERROR RcfDetection::Process(const std::string &imgPath) -{ - MxBase::TensorBase inTensor; - APP_ERROR ret = ReadImage(imgPath, inTensor); - if (ret != APP_ERR_OK) { - LogError << "ReadImage failed, ret=" << ret << "."; - return ret; - } - MxBase::TensorBase outTensor; - uint32_t resizeHeight = 512; - uint32_t resizeWidth = 512; - ret = Resize(inTensor, outTensor, resizeHeight, resizeWidth); - if (ret != APP_ERR_OK) { - LogError << "Resize failed, ret=" << ret << "."; - return ret; - } - std::vector inputs = {}; - std::vector outputs = {}; - auto shape = outTensor.GetShape(); - inputs.push_back(outTensor); - ret = Inference(inputs, outputs); - if (ret != APP_ERR_OK) { - LogError << "Inference failed, ret=" << ret << "."; - return ret; - } - std::vector postProcessOutput={}; - ret = PostProcess(inTensor, outputs, postProcessOutput); - if (ret != APP_ERR_OK) { - LogError << "PostProcess failed, ret=" << ret << "."; - return ret; - } - ret = WriteResult(postProcessOutput[0], imgPath); - if (ret != APP_ERR_OK) { - LogError << "Save result failed, ret=" << ret << "."; - return ret; - } - return APP_ERR_OK; -} diff --git a/contrib/EdgeDetectionPicture/rcfDetection/RcfDetection.h b/contrib/EdgeDetectionPicture/rcfDetection/RcfDetection.h deleted file mode 100644 index 7c0ae6da4..000000000 --- a/contrib/EdgeDetectionPicture/rcfDetection/RcfDetection.h +++ /dev/null @@ -1,61 +0,0 @@ -/* - * Copyright(C) 2021. Huawei Technologies Co.,Ltd. All rights reserved. - * - * Licensed under the Apache License, Version 2.0 (the "License"); - * you may not use this file except in compliance with the License. - * You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, software - * distributed under the License is distributed on an "AS IS" BASIS, - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - * See the License for the specific language governing permissions and - * limitations under the License. 
- */ - -#ifndef MXBASE_RCFDETECTION_H -#define MXBASE_RCFDETECTION_H - -#include -#include -#include "MxBase/DvppWrapper/DvppWrapper.h" -#include "MxBase/ModelInfer/ModelInferenceProcessor.h" -#include "MxBase/Tensor/TensorContext/TensorContext.h" - -struct InitParam { - uint32_t deviceId = 0; - bool checkTensor = true; - std::string modelPath = ""; - uint32_t outSizeNum = 0; - std::string outSize = ""; - uint32_t rcfType = 0; - uint32_t modelType = 0; - uint32_t inputType = 0; -}; - -class RcfDetection { -public: - APP_ERROR Init(const InitParam &initParam); - APP_ERROR DeInit(); - APP_ERROR Inference(const std::vector &inputs, std::vector &outputs); - APP_ERROR PostProcess(const MxBase::TensorBase &tensor, const std::vector &outputs, - std::vector &postProcessOutput); - APP_ERROR Process(const std::string &imgPath); -protected: - APP_ERROR ReadImage(const std::string &imgPath, MxBase::TensorBase &tensor); - APP_ERROR Resize(const MxBase::TensorBase &inputTensor, MxBase::TensorBase &outputTensor, - uint32_t resizeHeight, uint32_t resizeWidth); - APP_ERROR WriteResult(MxBase::TensorBase &inferTensor, const std::string &imgPath); - void SetRcfPostProcessConfig(const InitParam &initParam, std::map &config); - -private: - std::shared_ptr dvppWrapper_; - std::shared_ptr model_; - std::shared_ptr post_; - MxBase::ModelDesc modelDesc_ = {}; - uint32_t deviceId_ = 0; - int dvppHeightStride = 0; - int dvppWidthStride = 0; -}; -#endif -- Gitee From 5a5c1e129e621eb7dd649e47bf5a9a9004c00f3c Mon Sep 17 00:00:00 2001 From: ComistryMo <12393292+comistrymo@user.noreply.gitee.com> Date: Sat, 12 Oct 2024 07:14:14 +0000 Subject: [PATCH 08/51] =?UTF-8?q?=E6=96=B0=E5=BB=BA=20rcfDetection?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- contrib/EdgeDetectionPicture/rcfDetection/.keep | 0 1 file changed, 0 insertions(+), 0 deletions(-) create mode 100644 contrib/EdgeDetectionPicture/rcfDetection/.keep diff --git a/contrib/EdgeDetectionPicture/rcfDetection/.keep b/contrib/EdgeDetectionPicture/rcfDetection/.keep new file mode 100644 index 000000000..e69de29bb -- Gitee From 9eaf6d186dd3a44d9db2e9cac335c85439f1a6b2 Mon Sep 17 00:00:00 2001 From: ComistryMo <12393292+comistrymo@user.noreply.gitee.com> Date: Sat, 12 Oct 2024 07:15:03 +0000 Subject: [PATCH 09/51] =?UTF-8?q?=E6=9B=B4=E6=96=B0RcfDetection=E4=BD=BF?= =?UTF-8?q?=E5=85=B6=E9=80=82=E9=85=8D=E7=89=88=E6=9C=AC?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit Signed-off-by: ComistryMo <12393292+comistrymo@user.noreply.gitee.com> --- contrib/EdgeDetectionPicture/RcfDetection.cpp | 286 ++++++++++++++++++ contrib/EdgeDetectionPicture/RcfDetection.h | 61 ++++ 2 files changed, 347 insertions(+) create mode 100644 contrib/EdgeDetectionPicture/RcfDetection.cpp create mode 100644 contrib/EdgeDetectionPicture/RcfDetection.h diff --git a/contrib/EdgeDetectionPicture/RcfDetection.cpp b/contrib/EdgeDetectionPicture/RcfDetection.cpp new file mode 100644 index 000000000..ab1796d72 --- /dev/null +++ b/contrib/EdgeDetectionPicture/RcfDetection.cpp @@ -0,0 +1,286 @@ +/** + * Copyright(C) 2021. Huawei Technologies Co.,Ltd. All rights reserved. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. 
+ * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +#include "RcfDetection.h" +#include "MxBase/DeviceManager/DeviceManager.h" +#include "MxBase/Log/Log.h" +#include "../rcfPostProcess/RcfPostProcess.h" +#include +#include +#include "opencv2/opencv.hpp" + +namespace { + const uint32_t YUV_BYTE_NU = 3; + const uint32_t YUV_BYTE_DE = 2; + const uint32_t VPC_H_ALIGN = 2; + const uint32_t channel = 3; + const double ALPHA = 255.0 / 255; +} +void RcfDetection::SetRcfPostProcessConfig(const InitParam &initParam, + std::map &config) +{ + MxBase::ConfigData configData; + const std::string checkTensor = initParam.checkTensor ? "true" : "false"; + configData.SetJsonValue("OUTSIZE_NUM", std::to_string(initParam.outSizeNum)); + configData.SetJsonValue("OUTSIZE", initParam.outSize); + configData.SetJsonValue("RCF_TYPE", std::to_string(initParam.rcfType)); + configData.SetJsonValue("MODEL_TYPE", std::to_string(initParam.modelType)); + configData.SetJsonValue("INPUT_TYPE", std::to_string(initParam.inputType)); + configData.SetJsonValue("CHECK_MODEL", checkTensor); + #ifdef MX_VERSION_5 + auto jsonStr = configData.GetCfgJson().serialize(); + config["postProcessConfigContent"] = jsonStr; + #else + auto jsonStr = configData.GetCfgJson(); + config["postProcessConfigContent"] = jsonStr; + #endif +} + +APP_ERROR RcfDetection::Init(const InitParam &initParam) +{ + deviceId_ = initParam.deviceId; + APP_ERROR ret = MxBase::DeviceManager::GetInstance()->InitDevices(); + if (ret != APP_ERR_OK) { + LogError << "Init devices failed, ret=" << ret << "."; + return ret; + } + ret = MxBase::TensorContext::GetInstance()->SetContext(initParam.deviceId); + if (ret != APP_ERR_OK) { + LogError << "Set context failed, ret=" << ret << "."; + return ret; + } + dvppWrapper_ = std::make_shared(); + ret = dvppWrapper_->Init(); + if (ret != APP_ERR_OK) { + LogError << "DvppWrapper init failed, ret=" << ret << "."; + return ret; + } + model_ = std::make_shared(); + ret = model_->Init(initParam.modelPath, modelDesc_); + if (ret != APP_ERR_OK) { + LogError << "ModelInferenceProcessor init failed, ret=" << ret << "."; + return ret; + } + + std::map config; + SetRcfPostProcessConfig(initParam, config); + post_ = std::make_shared(); + ret = post_->Init(config); + if (ret != APP_ERR_OK) { + LogError << "RcfPostProcess init failed, ret=" << ret << "."; + return ret; + } + if (ret != APP_ERR_OK) { + LogError << "Failed to load labels, ret=" << ret << "."; + return ret; + } + return APP_ERR_OK; +} + +APP_ERROR RcfDetection::DeInit() +{ + dvppWrapper_->DeInit(); + model_->DeInit(); + post_->DeInit(); + MxBase::DeviceManager::GetInstance()->DestroyDevices(); + return APP_ERR_OK; +} + +APP_ERROR RcfDetection::ReadImage(const std::string &imgPath, + MxBase::TensorBase &tensor) +{ + MxBase::DvppDataInfo output = {}; + APP_ERROR ret = dvppWrapper_->DvppJpegDecode(imgPath, output); + if (ret != APP_ERR_OK) { + LogError << "DvppWrapper DvppJpegDecode failed, ret=" << ret << "."; + return ret; + } + MxBase::MemoryData memoryData((void*)output.data, output.dataSize, + MxBase::MemoryData::MemoryType::MEMORY_DEVICE, deviceId_); + if (output.heightStride % VPC_H_ALIGN != 0) { + 
LogError << "Output data height(" << output.heightStride << ") can't be divided by " << VPC_H_ALIGN << "."; + MxBase::MemoryHelper::MxbsFree(memoryData); + return APP_ERR_COMM_INVALID_PARAM; + } + dvppHeightStride = output.heightStride; + dvppWidthStride = output.widthStride; + std::vector shape = {output.heightStride * YUV_BYTE_NU / YUV_BYTE_DE, output.widthStride}; + tensor = MxBase::TensorBase(memoryData, false, shape, MxBase::TENSOR_DTYPE_UINT8); + return APP_ERR_OK; +} +APP_ERROR RcfDetection::Resize(const MxBase::TensorBase &inputTensor, MxBase::TensorBase &outputTensor, + uint32_t resizeHeight, uint32_t resizeWidth) +{ + auto shape = inputTensor.GetShape(); + MxBase::DvppDataInfo input = {}; + input.height = (uint32_t)shape[0] * YUV_BYTE_DE / YUV_BYTE_NU; + input.width = shape[1]; + input.heightStride = (uint32_t)shape[0] * YUV_BYTE_DE / YUV_BYTE_NU; + input.widthStride = shape[1]; + input.dataSize = inputTensor.GetByteSize(); + input.data = (uint8_t*)inputTensor.GetBuffer(); + MxBase::ResizeConfig resize = {}; + resize.height = resizeHeight; + resize.width = resizeWidth; + MxBase::DvppDataInfo output = {}; + APP_ERROR ret = dvppWrapper_->VpcResize(input, output, resize); + if (ret != APP_ERR_OK) { + LogError << "VpcResize failed, ret=" << ret << "."; + return ret; + } + MxBase::MemoryData memoryData((void*)output.data, output.dataSize, + MxBase::MemoryData::MemoryType::MEMORY_DEVICE, deviceId_); + if (output.heightStride % VPC_H_ALIGN != 0) { + LogError << "Output data height(" << output.heightStride << ") can't be divided by " << VPC_H_ALIGN << "."; + MxBase::MemoryHelper::MxbsFree(memoryData); + return APP_ERR_COMM_INVALID_PARAM; + } + shape = {1, channel, output.heightStride, output.widthStride}; + outputTensor = MxBase::TensorBase(memoryData, false, shape, MxBase::TENSOR_DTYPE_UINT8); + return APP_ERR_OK; +} + +APP_ERROR RcfDetection::Inference(const std::vector &inputs, + std::vector &outputs) +{ + std::vector output={}; + auto dtypes = model_->GetOutputDataType(); + for (size_t i = 0; i < modelDesc_.outputTensors.size(); ++i) { + std::vector shape = {}; + for (size_t j = 0; j < modelDesc_.outputTensors[i].tensorDims.size(); ++j) { + shape.push_back((uint32_t)modelDesc_.outputTensors[i].tensorDims[j]); + } + MxBase::TensorBase tensor(shape, dtypes[i], MxBase::MemoryData::MemoryType::MEMORY_DEVICE, deviceId_); + APP_ERROR ret = MxBase::TensorBase::TensorBaseMalloc(tensor); + if (ret != APP_ERR_OK) { + LogError << "TensorBaseMalloc failed, ret=" << ret << "."; + return ret; + } + outputs.push_back(tensor); + } + MxBase::DynamicInfo dynamicInfo = {}; + dynamicInfo.dynamicType = MxBase::DynamicType::STATIC_BATCH; + auto startTime = std::chrono::high_resolution_clock::now(); + APP_ERROR ret = model_->ModelInference(inputs, outputs, dynamicInfo); + auto endTime = std::chrono::high_resolution_clock::now(); + double costMs = std::chrono::duration(endTime - startTime).count(); + LogInfo<< "costMs:"<< costMs; + if (ret != APP_ERR_OK) { + LogError << "ModelInference failed, ret=" << ret << "."; + return ret; + } + return APP_ERR_OK; +} + +APP_ERROR RcfDetection::PostProcess(const MxBase::TensorBase &tensor, + const std::vector &outputs, + std::vector &postProcessOutput) +{ + auto shape = tensor.GetShape(); + MxBase::ResizedImageInfo imgInfo; + imgInfo.widthOriginal = shape[1]; + imgInfo.heightOriginal = shape[0] * YUV_BYTE_DE; + uint32_t widthResize = 512; + uint32_t heightResize = 512; + imgInfo.widthResize = widthResize; + imgInfo.heightResize = heightResize; + imgInfo.resizeType = 
MxBase::RESIZER_STRETCHING; + std::vector imageInfoVec = {}; + imageInfoVec.push_back(imgInfo); + APP_ERROR ret = post_->Process(outputs, postProcessOutput); + if (ret != APP_ERR_OK) { + LogError << "Process failed, ret=" << ret << "."; + return ret; + } + ret = post_->DeInit(); + if (ret != APP_ERR_OK) { + LogError << "RcfDetection DeInit failed"; + return ret; + } + return APP_ERR_OK; +} + +APP_ERROR RcfDetection::WriteResult(MxBase::TensorBase &inferTensor, const std::string &imgPath) +{ + auto shape = inferTensor.GetShape(); + int dim2 = 2; + int dim3 = 3; + uint32_t height = shape[dim2]; + uint32_t width = shape[dim3]; + cv::Mat imgBgr = cv::imread(imgPath); + std::string fileName = imgPath.substr(imgPath.find_last_of("/") + 1); + uint32_t imageWidth = imgBgr.cols; + uint32_t imageHeight = imgBgr.rows; + cv::Mat modelOutput = cv::Mat(height, width, CV_32FC1, inferTensor.GetBuffer()); + cv::Mat grayMat; + cv::Mat resizedMat; + int crop = 5; + cv::Rect myROI(0, 0, imageWidth - crop, imageHeight); + resize(modelOutput, resizedMat, cv::Size(dvppWidthStride, dvppHeightStride), 0, 0, cv::INTER_LINEAR); + resizedMat.convertTo(grayMat, CV_8UC1, ALPHA); + cv::Mat croppedImage = grayMat(myROI); + resize(croppedImage, croppedImage, cv::Size(imageWidth, imageHeight), 0, 0, cv::INTER_LINEAR); + + std::string resultPathName = "result"; + if (access(resultPathName.c_str(), 0) != 0) { + int ret = mkdir(resultPathName.c_str(), S_IRUSR | S_IWUSR | S_IXUSR); + if (ret != 0) { + LogError << "Failed to create result directory: " << resultPathName << ", ret = " << ret; + return APP_ERR_COMM_FAILURE; + } + } + + cv::imwrite("./result/"+fileName, croppedImage); + return APP_ERR_OK; +} + +APP_ERROR RcfDetection::Process(const std::string &imgPath) +{ + MxBase::TensorBase inTensor; + APP_ERROR ret = ReadImage(imgPath, inTensor); + if (ret != APP_ERR_OK) { + LogError << "ReadImage failed, ret=" << ret << "."; + return ret; + } + MxBase::TensorBase outTensor; + uint32_t resizeHeight = 512; + uint32_t resizeWidth = 512; + ret = Resize(inTensor, outTensor, resizeHeight, resizeWidth); + if (ret != APP_ERR_OK) { + LogError << "Resize failed, ret=" << ret << "."; + return ret; + } + std::vector inputs = {}; + std::vector outputs = {}; + auto shape = outTensor.GetShape(); + inputs.push_back(outTensor); + ret = Inference(inputs, outputs); + if (ret != APP_ERR_OK) { + LogError << "Inference failed, ret=" << ret << "."; + return ret; + } + std::vector postProcessOutput={}; + ret = PostProcess(inTensor, outputs, postProcessOutput); + if (ret != APP_ERR_OK) { + LogError << "PostProcess failed, ret=" << ret << "."; + return ret; + } + ret = WriteResult(postProcessOutput[0], imgPath); + if (ret != APP_ERR_OK) { + LogError << "Save result failed, ret=" << ret << "."; + return ret; + } + return APP_ERR_OK; +} diff --git a/contrib/EdgeDetectionPicture/RcfDetection.h b/contrib/EdgeDetectionPicture/RcfDetection.h new file mode 100644 index 000000000..7c0ae6da4 --- /dev/null +++ b/contrib/EdgeDetectionPicture/RcfDetection.h @@ -0,0 +1,61 @@ +/* + * Copyright(C) 2021. Huawei Technologies Co.,Ltd. All rights reserved. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. 
+ * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ + +#ifndef MXBASE_RCFDETECTION_H +#define MXBASE_RCFDETECTION_H + +#include +#include +#include "MxBase/DvppWrapper/DvppWrapper.h" +#include "MxBase/ModelInfer/ModelInferenceProcessor.h" +#include "MxBase/Tensor/TensorContext/TensorContext.h" + +struct InitParam { + uint32_t deviceId = 0; + bool checkTensor = true; + std::string modelPath = ""; + uint32_t outSizeNum = 0; + std::string outSize = ""; + uint32_t rcfType = 0; + uint32_t modelType = 0; + uint32_t inputType = 0; +}; + +class RcfDetection { +public: + APP_ERROR Init(const InitParam &initParam); + APP_ERROR DeInit(); + APP_ERROR Inference(const std::vector &inputs, std::vector &outputs); + APP_ERROR PostProcess(const MxBase::TensorBase &tensor, const std::vector &outputs, + std::vector &postProcessOutput); + APP_ERROR Process(const std::string &imgPath); +protected: + APP_ERROR ReadImage(const std::string &imgPath, MxBase::TensorBase &tensor); + APP_ERROR Resize(const MxBase::TensorBase &inputTensor, MxBase::TensorBase &outputTensor, + uint32_t resizeHeight, uint32_t resizeWidth); + APP_ERROR WriteResult(MxBase::TensorBase &inferTensor, const std::string &imgPath); + void SetRcfPostProcessConfig(const InitParam &initParam, std::map &config); + +private: + std::shared_ptr dvppWrapper_; + std::shared_ptr model_; + std::shared_ptr post_; + MxBase::ModelDesc modelDesc_ = {}; + uint32_t deviceId_ = 0; + int dvppHeightStride = 0; + int dvppWidthStride = 0; +}; +#endif -- Gitee From d3e96ed7244023009439f4c714e0265842a3200f Mon Sep 17 00:00:00 2001 From: ComistryMo <12393292+comistrymo@user.noreply.gitee.com> Date: Sat, 12 Oct 2024 07:15:29 +0000 Subject: [PATCH 10/51] =?UTF-8?q?=E5=88=A0=E9=99=A4=E6=96=87=E4=BB=B6=20co?= =?UTF-8?q?ntrib/EdgeDetectionPicture/rcfDetection/.keep?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- contrib/EdgeDetectionPicture/rcfDetection/.keep | 0 1 file changed, 0 insertions(+), 0 deletions(-) delete mode 100644 contrib/EdgeDetectionPicture/rcfDetection/.keep diff --git a/contrib/EdgeDetectionPicture/rcfDetection/.keep b/contrib/EdgeDetectionPicture/rcfDetection/.keep deleted file mode 100644 index e69de29bb..000000000 -- Gitee From 2c3f8c16725c766066bccc8207571a1c5971ad70 Mon Sep 17 00:00:00 2001 From: ComistryMo <12393292+comistrymo@user.noreply.gitee.com> Date: Sat, 12 Oct 2024 07:16:18 +0000 Subject: [PATCH 11/51] =?UTF-8?q?=E5=88=A0=E9=99=A4=E6=96=87=E4=BB=B6=20co?= =?UTF-8?q?ntrib/EdgeDetectionPicture/README.md?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- contrib/EdgeDetectionPicture/README.md | 150 ------------------------- 1 file changed, 150 deletions(-) delete mode 100644 contrib/EdgeDetectionPicture/README.md diff --git a/contrib/EdgeDetectionPicture/README.md b/contrib/EdgeDetectionPicture/README.md deleted file mode 100644 index e00f69a20..000000000 --- a/contrib/EdgeDetectionPicture/README.md +++ /dev/null @@ -1,150 +0,0 @@ - -# RCF模型边缘检测 - -## 1 介绍 -本开发样例是基于mxBase开发的端到端推理的C++应用程序,可在昇腾芯片上进行 图像边缘提取,并把可视化结果保存到本地。 -其中包含Rcf模型的后处理模块开发。 主要处理流程为: Init > ReadImage >Resize > Inference 
>PostProcess >DeInit - -#### 1.1 支持的产品 -昇腾310(推理) - -#### 1.2 支持的版本 -本样例配套的CANN版本为7.0.0,MindX SDK版本为5.0.0 -MindX SDK安装前准备可参考《用户指南》,[安装教程](https://gitee.com/ascend/mindxsdk-referenceapps/blob/master/docs/quickStart/1-1%E5%AE%89%E8%A3%85SDK%E5%BC%80%E5%8F%91%E5%A5%97%E4%BB%B6.md) - -#### 1.3 代码目录结构与说明 -本sample工程名称为EdgeDetectionPicture,工程目录如下图所示: - -``` -. -├── model -│ ├── aipp.cfg // 模型转换aipp配置文件 -├── rcfDetection -│ ├── RcfDetection.cpp -│ └── RcfDetection.h -├── rcfPostProcess -│ ├── rcfPostProcess.cpp -│ └── rcfPostProcess.h -├── build.sh -├── main.cpp -├── README.md -├── CMakeLists.txt -└── License -``` - -## 2 环境依赖 -环境依赖软件和版本如下表: - - - -| 软件 | 版本 | 说明 | 获取方式 | -| ------------------- | ------------ | ----------------------------- | ------------------------------------------------------------ | -| mxVision | 5.0.0 | mxVision软件包 | [链接](https://www.hiascend.com/software/Mindx-sdk) | -| Ascend-CANN-toolkit | 7.0.0 | Ascend-cann-toolkit开发套件包 | [链接](https://www.hiascend.com/software/cann/commercial) | -| 操作系统 | Ubuntu 18.04 | 操作系统 | Ubuntu官网获取 | - -在编译运行项目前,需要设置环境变量: - -- 环境变量介绍 - - ``` - . {cann_install_path}/ascend-toolkit/set_env.sh - . {sdk_install_path}/mxVision/set_env.sh - - ``` - - - -## 3 模型转换 - -**步骤1** 模型获取 -下载RCF模型 。[下载地址](https://mindx.sdk.obs.cn-north-4.myhuaweicloud.com/mindxsdk-referenceapps%20/contrib/EdgeDetectionPicture/model.zip) - -**步骤2** 模型存放 -将获取到的RCF模型rcf.prototxt文件和rcf_bsds.caffemodel文件放在edge_detection_picture/model下 - -**步骤3** 执行模型转换命令 - -``` -atc --model=rcf.prototxt --weight=./rcf_bsds.caffemodel --framework=0 --output=rcf --soc_version=Ascend310 --insert_op_conf=./aipp.cfg --input_format=NCHW --output_type=FP32 -``` - -## 4 编译与运行 - -**步骤1** 执行如下编译命令: -bash build.sh - -**步骤2** 进行图像边缘检测 -请自行准备jpg格式的测试图像保存在文件夹中(例如 data/**.jpg)进行边缘检测 -``` -./edge_detection_picture ./data -``` -生成边缘检测图像 result/**.jpg - -## 5 精度测试 -下载开源数据集 BSDS500 [下载地址](https://mindx.sdk.obs.cn-north-4.myhuaweicloud.com/mindxsdk-referenceapps%20/contrib/EdgeDetectionPicture/data.zip), 使用 BSR/BSDS500/data/images/test数据进行测试 - - -(1) 下载开源代码 - -``` shell -git clone https://github.com/Walstruzz/edge_eval_python.git -cd edge_eval_python -``` -(2) 编译cxx - -``` shell -cd cxx/src -source build.sh -``` -(3) 将test中的图像经过边缘检测后的结果保存在result文件夹中 - -``` shell -./edge_detection_picture path/to/BSR/BSDS500/data/images/test/ - -``` - - -(4) 修改检测代码 - -vim mian.py -注释第17行代码 nms_process(model_name_list, result_dir, save_dir, key, file_format), -修改18行为 eval_edge(alg, model_name_list, result_dir, gt_dir, workers) - -vim /impl/edges_eval_dir.py -修改155行为 im = os.path.join(res_dir, "{}.jpg".format(i)) - -vim eval_edge.py -修改14行为 res_dir = result_dir - -(5) 测试精度 - -``` shell -python main.py --result_dir path/to/result --gt_dir paht/to/BSR/BSDS500/data/groundTruth/test - -``` -注: - result_dir: results directory - - gt_dir : ground truth directory - - -## 6常见问题 -### 6.1 精度测试脚本运行时, ModuleNotFoundError报错问题: -问题描述: -运行精度测试脚本过程中, 出现如下类似报错: -``` -********* -File "***/nms_process.py", line 6 in - from impl.toolbox import conv_tri, grad2 -ModuleNotFoundError: No module named 'impl.toolbox' -``` - -解决措施: -``` -方法一: -将环境变量PYTHONPATH里面的/usr/local/Ascend/ascend-toolkit/latest/opp/built-in/op_impl/ai_core/tbe去掉, 该路径根据cann实际安装位置不同可能会不一样,须自行确认 - -方法二: -执行命令: unset PYTHONPATH -``` \ No newline at end of file -- Gitee From 60e52abad429d3ce0ef7433007b1855297e474cd Mon Sep 17 00:00:00 2001 From: ComistryMo <12393292+comistrymo@user.noreply.gitee.com> Date: Sat, 12 Oct 2024 07:16:42 +0000 Subject: [PATCH 12/51] =?UTF-8?q?=E6=9B=B4=E6=96=B0README?= 
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
Signed-off-by: ComistryMo <12393292+comistrymo@user.noreply.gitee.com>
---
 contrib/EdgeDetectionPicture/README.md | 156 +++++++++++++++++++++++++
 1 file changed, 156 insertions(+)
 create mode 100644 contrib/EdgeDetectionPicture/README.md

diff --git a/contrib/EdgeDetectionPicture/README.md b/contrib/EdgeDetectionPicture/README.md
new file mode 100644
index 000000000..53642883f
--- /dev/null
+++ b/contrib/EdgeDetectionPicture/README.md
@@ -0,0 +1,156 @@
+
+# RCF模型边缘检测
+
+## 1 介绍
+
+#### 1.1 简介
+本开发样例是基于mxBase开发的端到端推理的C++应用程序,可在昇腾芯片上进行图像边缘提取,并把可视化结果保存到本地。
+其中包含Rcf模型的后处理模块开发。主要处理流程为:Init > ReadImage > Resize > Inference > PostProcess > DeInit
+
+#### 1.2 支持的产品
+昇腾310(推理)
+
+#### 1.3 支持的版本
+本样例配套的CANN版本为7.0.0,MindX SDK版本为5.0.0
+MindX SDK安装前准备可参考《用户指南》,[安装教程](https://gitee.com/ascend/mindxsdk-referenceapps/blob/master/docs/quickStart/1-1%E5%AE%89%E8%A3%85SDK%E5%BC%80%E5%8F%91%E5%A5%97%E4%BB%B6.md)
+| 软件 | 版本 | 说明 |
+| ------------------- | ------------ | ---------------------------- |
+| mxVision | 5.0.0 | mxVision软件包 |
+| Ascend-CANN-toolkit | 7.0.0 | Ascend-cann-toolkit开发套件包 |
+
+
+#### 1.4 代码目录结构与说明
+本sample工程名称为EdgeDetectionPicture,工程目录如下图所示:
+
+```
+.
+├── model
+│   ├── aipp.cfg // 模型转换aipp配置文件
+├── rcfDetection
+│   ├── RcfDetection.cpp
+│   └── RcfDetection.h
+├── rcfPostProcess
+│   ├── rcfPostProcess.cpp
+│   └── rcfPostProcess.h
+├── build.sh
+├── main.cpp
+├── README.md
+├── CMakeLists.txt
+└── License
+```
+
+## 2 设置环境变量
+
+在编译运行项目前,需要设置环境变量:
+
+- 环境变量介绍
+
+```
+. {cann_install_path}/ascend-toolkit/set_env.sh
+. {sdk_install_path}/mxVision/set_env.sh
+
+```
+
+
+
+## 3 准备模型
+
+**步骤1** 模型获取
+下载RCF模型。[下载地址](https://mindx.sdk.obs.cn-north-4.myhuaweicloud.com/mindxsdk-referenceapps%20/contrib/EdgeDetectionPicture/model.zip)
+
+**步骤2** 模型存放
+将获取到的RCF模型rcf.prototxt文件和rcf_bsds.caffemodel文件放在edge_detection_picture/model下
+
+**步骤3** 执行模型转换命令
+
+```
+atc --model=rcf.prototxt --weight=./rcf_bsds.caffemodel --framework=0 --output=rcf --soc_version=Ascend310 --insert_op_conf=./aipp.cfg --input_format=NCHW --output_type=FP32
+```
+
+## 4 编译与运行
+
+**步骤1** 执行如下编译命令:
+bash build.sh
+
+**步骤2** 进行图像边缘检测
+请自行准备jpg格式的测试图像保存在文件夹中(例如 data/**.jpg)进行边缘检测
+```
+./edge_detection_picture ./data
+```
+生成边缘检测图像 result/**.jpg
+
+## 5 精度测试
+下载开源数据集 BSDS500 [下载地址](https://mindx.sdk.obs.cn-north-4.myhuaweicloud.com/mindxsdk-referenceapps%20/contrib/EdgeDetectionPicture/data.zip), 使用 BSR/BSDS500/data/images/test数据进行测试
+
+
+(1) 下载开源代码
+
+``` shell
+git clone https://github.com/Walstruzz/edge_eval_python.git
+cd edge_eval_python
+```
+(2) 编译cxx
+
+``` shell
+cd cxx/src
+source build.sh
+```
+(3) 将test中的图像经过边缘检测后的结果保存在result文件夹中
+
+``` shell
+./edge_detection_picture path/to/BSR/BSDS500/data/images/test/
+
+```
+
+
+(4) 修改检测代码
+
+vim main.py
+注释第17行代码 nms_process(model_name_list, result_dir, save_dir, key, file_format),
+修改18行为 eval_edge(alg, model_name_list, result_dir, gt_dir, workers)
+
+vim impl/edges_eval_dir.py
+修改155行为 im = os.path.join(res_dir, "{}.jpg".format(i))
+
+vim eval_edge.py
+修改14行为 res_dir = result_dir
+
+(5) 测试精度
+
+``` shell
+python main.py --result_dir path/to/result --gt_dir path/to/BSR/BSDS500/data/groundTruth/test
+
+```
+注:
+ result_dir: results directory
+
+ gt_dir: ground truth directory
+
+
+## 6 常见问题
+### 6.1 精度测试脚本运行时, ModuleNotFoundError报错问题:
+问题描述:
+运行精度测试脚本过程中, 出现如下类似报错:
+```
+*********
+File "***/nms_process.py", line 6 in
+    from impl.toolbox import conv_tri, grad2
+ModuleNotFoundError: No module named 
'impl.toolbox' +``` + +解决措施: +``` +方法一: +将环境变量PYTHONPATH里面的/usr/local/Ascend/ascend-toolkit/latest/opp/built-in/op_impl/ai_core/tbe去掉, 该路径根据cann实际安装位置不同可能会不一样,须自行确认 + +方法二: +执行命令: unset PYTHONPATH +``` +### 6.2 检测代码无法修改问题: +问题描述: +修改检测代码中, 出现无法修改问题 + +解决措施: +``` +使用 sudo vim filename进行修改 +``` \ No newline at end of file -- Gitee From 242855048bcea8e0784a24395ed259f284cbd873 Mon Sep 17 00:00:00 2001 From: ComistryMo <12393292+comistrymo@user.noreply.gitee.com> Date: Sat, 12 Oct 2024 07:17:02 +0000 Subject: [PATCH 13/51] =?UTF-8?q?=E6=96=B0=E5=BB=BA=20rcfDetection?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- contrib/EdgeDetectionPicture/rcfDetection/.keep | 0 1 file changed, 0 insertions(+), 0 deletions(-) create mode 100644 contrib/EdgeDetectionPicture/rcfDetection/.keep diff --git a/contrib/EdgeDetectionPicture/rcfDetection/.keep b/contrib/EdgeDetectionPicture/rcfDetection/.keep new file mode 100644 index 000000000..e69de29bb -- Gitee From c38184b05b4a598af37167d258224b5a76718c8e Mon Sep 17 00:00:00 2001 From: ComistryMo <12393292+comistrymo@user.noreply.gitee.com> Date: Sat, 12 Oct 2024 07:17:31 +0000 Subject: [PATCH 14/51] =?UTF-8?q?=E6=9B=B4=E6=96=B0RcfDetection=E4=BD=BF?= =?UTF-8?q?=E5=85=B6=E9=80=82=E9=85=8D=E7=89=88=E6=9C=AC?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit Signed-off-by: ComistryMo <12393292+comistrymo@user.noreply.gitee.com> --- .../rcfDetection/RcfDetection.cpp | 286 ++++++++++++++++++ .../rcfDetection/RcfDetection.h | 61 ++++ 2 files changed, 347 insertions(+) create mode 100644 contrib/EdgeDetectionPicture/rcfDetection/RcfDetection.cpp create mode 100644 contrib/EdgeDetectionPicture/rcfDetection/RcfDetection.h diff --git a/contrib/EdgeDetectionPicture/rcfDetection/RcfDetection.cpp b/contrib/EdgeDetectionPicture/rcfDetection/RcfDetection.cpp new file mode 100644 index 000000000..ab1796d72 --- /dev/null +++ b/contrib/EdgeDetectionPicture/rcfDetection/RcfDetection.cpp @@ -0,0 +1,286 @@ +/** + * Copyright(C) 2021. Huawei Technologies Co.,Ltd. All rights reserved. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +#include "RcfDetection.h" +#include "MxBase/DeviceManager/DeviceManager.h" +#include "MxBase/Log/Log.h" +#include "../rcfPostProcess/RcfPostProcess.h" +#include +#include +#include "opencv2/opencv.hpp" + +namespace { + const uint32_t YUV_BYTE_NU = 3; + const uint32_t YUV_BYTE_DE = 2; + const uint32_t VPC_H_ALIGN = 2; + const uint32_t channel = 3; + const double ALPHA = 255.0 / 255; +} +void RcfDetection::SetRcfPostProcessConfig(const InitParam &initParam, + std::map &config) +{ + MxBase::ConfigData configData; + const std::string checkTensor = initParam.checkTensor ? 
"true" : "false"; + configData.SetJsonValue("OUTSIZE_NUM", std::to_string(initParam.outSizeNum)); + configData.SetJsonValue("OUTSIZE", initParam.outSize); + configData.SetJsonValue("RCF_TYPE", std::to_string(initParam.rcfType)); + configData.SetJsonValue("MODEL_TYPE", std::to_string(initParam.modelType)); + configData.SetJsonValue("INPUT_TYPE", std::to_string(initParam.inputType)); + configData.SetJsonValue("CHECK_MODEL", checkTensor); + #ifdef MX_VERSION_5 + auto jsonStr = configData.GetCfgJson().serialize(); + config["postProcessConfigContent"] = jsonStr; + #else + auto jsonStr = configData.GetCfgJson(); + config["postProcessConfigContent"] = jsonStr; + #endif +} + +APP_ERROR RcfDetection::Init(const InitParam &initParam) +{ + deviceId_ = initParam.deviceId; + APP_ERROR ret = MxBase::DeviceManager::GetInstance()->InitDevices(); + if (ret != APP_ERR_OK) { + LogError << "Init devices failed, ret=" << ret << "."; + return ret; + } + ret = MxBase::TensorContext::GetInstance()->SetContext(initParam.deviceId); + if (ret != APP_ERR_OK) { + LogError << "Set context failed, ret=" << ret << "."; + return ret; + } + dvppWrapper_ = std::make_shared(); + ret = dvppWrapper_->Init(); + if (ret != APP_ERR_OK) { + LogError << "DvppWrapper init failed, ret=" << ret << "."; + return ret; + } + model_ = std::make_shared(); + ret = model_->Init(initParam.modelPath, modelDesc_); + if (ret != APP_ERR_OK) { + LogError << "ModelInferenceProcessor init failed, ret=" << ret << "."; + return ret; + } + + std::map config; + SetRcfPostProcessConfig(initParam, config); + post_ = std::make_shared(); + ret = post_->Init(config); + if (ret != APP_ERR_OK) { + LogError << "RcfPostProcess init failed, ret=" << ret << "."; + return ret; + } + if (ret != APP_ERR_OK) { + LogError << "Failed to load labels, ret=" << ret << "."; + return ret; + } + return APP_ERR_OK; +} + +APP_ERROR RcfDetection::DeInit() +{ + dvppWrapper_->DeInit(); + model_->DeInit(); + post_->DeInit(); + MxBase::DeviceManager::GetInstance()->DestroyDevices(); + return APP_ERR_OK; +} + +APP_ERROR RcfDetection::ReadImage(const std::string &imgPath, + MxBase::TensorBase &tensor) +{ + MxBase::DvppDataInfo output = {}; + APP_ERROR ret = dvppWrapper_->DvppJpegDecode(imgPath, output); + if (ret != APP_ERR_OK) { + LogError << "DvppWrapper DvppJpegDecode failed, ret=" << ret << "."; + return ret; + } + MxBase::MemoryData memoryData((void*)output.data, output.dataSize, + MxBase::MemoryData::MemoryType::MEMORY_DEVICE, deviceId_); + if (output.heightStride % VPC_H_ALIGN != 0) { + LogError << "Output data height(" << output.heightStride << ") can't be divided by " << VPC_H_ALIGN << "."; + MxBase::MemoryHelper::MxbsFree(memoryData); + return APP_ERR_COMM_INVALID_PARAM; + } + dvppHeightStride = output.heightStride; + dvppWidthStride = output.widthStride; + std::vector shape = {output.heightStride * YUV_BYTE_NU / YUV_BYTE_DE, output.widthStride}; + tensor = MxBase::TensorBase(memoryData, false, shape, MxBase::TENSOR_DTYPE_UINT8); + return APP_ERR_OK; +} +APP_ERROR RcfDetection::Resize(const MxBase::TensorBase &inputTensor, MxBase::TensorBase &outputTensor, + uint32_t resizeHeight, uint32_t resizeWidth) +{ + auto shape = inputTensor.GetShape(); + MxBase::DvppDataInfo input = {}; + input.height = (uint32_t)shape[0] * YUV_BYTE_DE / YUV_BYTE_NU; + input.width = shape[1]; + input.heightStride = (uint32_t)shape[0] * YUV_BYTE_DE / YUV_BYTE_NU; + input.widthStride = shape[1]; + input.dataSize = inputTensor.GetByteSize(); + input.data = (uint8_t*)inputTensor.GetBuffer(); + 
MxBase::ResizeConfig resize = {}; + resize.height = resizeHeight; + resize.width = resizeWidth; + MxBase::DvppDataInfo output = {}; + APP_ERROR ret = dvppWrapper_->VpcResize(input, output, resize); + if (ret != APP_ERR_OK) { + LogError << "VpcResize failed, ret=" << ret << "."; + return ret; + } + MxBase::MemoryData memoryData((void*)output.data, output.dataSize, + MxBase::MemoryData::MemoryType::MEMORY_DEVICE, deviceId_); + if (output.heightStride % VPC_H_ALIGN != 0) { + LogError << "Output data height(" << output.heightStride << ") can't be divided by " << VPC_H_ALIGN << "."; + MxBase::MemoryHelper::MxbsFree(memoryData); + return APP_ERR_COMM_INVALID_PARAM; + } + shape = {1, channel, output.heightStride, output.widthStride}; + outputTensor = MxBase::TensorBase(memoryData, false, shape, MxBase::TENSOR_DTYPE_UINT8); + return APP_ERR_OK; +} + +APP_ERROR RcfDetection::Inference(const std::vector &inputs, + std::vector &outputs) +{ + std::vector output={}; + auto dtypes = model_->GetOutputDataType(); + for (size_t i = 0; i < modelDesc_.outputTensors.size(); ++i) { + std::vector shape = {}; + for (size_t j = 0; j < modelDesc_.outputTensors[i].tensorDims.size(); ++j) { + shape.push_back((uint32_t)modelDesc_.outputTensors[i].tensorDims[j]); + } + MxBase::TensorBase tensor(shape, dtypes[i], MxBase::MemoryData::MemoryType::MEMORY_DEVICE, deviceId_); + APP_ERROR ret = MxBase::TensorBase::TensorBaseMalloc(tensor); + if (ret != APP_ERR_OK) { + LogError << "TensorBaseMalloc failed, ret=" << ret << "."; + return ret; + } + outputs.push_back(tensor); + } + MxBase::DynamicInfo dynamicInfo = {}; + dynamicInfo.dynamicType = MxBase::DynamicType::STATIC_BATCH; + auto startTime = std::chrono::high_resolution_clock::now(); + APP_ERROR ret = model_->ModelInference(inputs, outputs, dynamicInfo); + auto endTime = std::chrono::high_resolution_clock::now(); + double costMs = std::chrono::duration(endTime - startTime).count(); + LogInfo<< "costMs:"<< costMs; + if (ret != APP_ERR_OK) { + LogError << "ModelInference failed, ret=" << ret << "."; + return ret; + } + return APP_ERR_OK; +} + +APP_ERROR RcfDetection::PostProcess(const MxBase::TensorBase &tensor, + const std::vector &outputs, + std::vector &postProcessOutput) +{ + auto shape = tensor.GetShape(); + MxBase::ResizedImageInfo imgInfo; + imgInfo.widthOriginal = shape[1]; + imgInfo.heightOriginal = shape[0] * YUV_BYTE_DE; + uint32_t widthResize = 512; + uint32_t heightResize = 512; + imgInfo.widthResize = widthResize; + imgInfo.heightResize = heightResize; + imgInfo.resizeType = MxBase::RESIZER_STRETCHING; + std::vector imageInfoVec = {}; + imageInfoVec.push_back(imgInfo); + APP_ERROR ret = post_->Process(outputs, postProcessOutput); + if (ret != APP_ERR_OK) { + LogError << "Process failed, ret=" << ret << "."; + return ret; + } + ret = post_->DeInit(); + if (ret != APP_ERR_OK) { + LogError << "RcfDetection DeInit failed"; + return ret; + } + return APP_ERR_OK; +} + +APP_ERROR RcfDetection::WriteResult(MxBase::TensorBase &inferTensor, const std::string &imgPath) +{ + auto shape = inferTensor.GetShape(); + int dim2 = 2; + int dim3 = 3; + uint32_t height = shape[dim2]; + uint32_t width = shape[dim3]; + cv::Mat imgBgr = cv::imread(imgPath); + std::string fileName = imgPath.substr(imgPath.find_last_of("/") + 1); + uint32_t imageWidth = imgBgr.cols; + uint32_t imageHeight = imgBgr.rows; + cv::Mat modelOutput = cv::Mat(height, width, CV_32FC1, inferTensor.GetBuffer()); + cv::Mat grayMat; + cv::Mat resizedMat; + int crop = 5; + cv::Rect myROI(0, 0, imageWidth - crop, 
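+                  // 注:ROI 宽度取 imageWidth - crop,其中 crop(5)为经验值,
+                  // 推测用于裁掉 DVPP 对齐补边缩放回原图后在右缘残留的伪影(此处为假设性说明)。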
imageHeight); + resize(modelOutput, resizedMat, cv::Size(dvppWidthStride, dvppHeightStride), 0, 0, cv::INTER_LINEAR); + resizedMat.convertTo(grayMat, CV_8UC1, ALPHA); + cv::Mat croppedImage = grayMat(myROI); + resize(croppedImage, croppedImage, cv::Size(imageWidth, imageHeight), 0, 0, cv::INTER_LINEAR); + + std::string resultPathName = "result"; + if (access(resultPathName.c_str(), 0) != 0) { + int ret = mkdir(resultPathName.c_str(), S_IRUSR | S_IWUSR | S_IXUSR); + if (ret != 0) { + LogError << "Failed to create result directory: " << resultPathName << ", ret = " << ret; + return APP_ERR_COMM_FAILURE; + } + } + + cv::imwrite("./result/"+fileName, croppedImage); + return APP_ERR_OK; +} + +APP_ERROR RcfDetection::Process(const std::string &imgPath) +{ + MxBase::TensorBase inTensor; + APP_ERROR ret = ReadImage(imgPath, inTensor); + if (ret != APP_ERR_OK) { + LogError << "ReadImage failed, ret=" << ret << "."; + return ret; + } + MxBase::TensorBase outTensor; + uint32_t resizeHeight = 512; + uint32_t resizeWidth = 512; + ret = Resize(inTensor, outTensor, resizeHeight, resizeWidth); + if (ret != APP_ERR_OK) { + LogError << "Resize failed, ret=" << ret << "."; + return ret; + } + std::vector inputs = {}; + std::vector outputs = {}; + auto shape = outTensor.GetShape(); + inputs.push_back(outTensor); + ret = Inference(inputs, outputs); + if (ret != APP_ERR_OK) { + LogError << "Inference failed, ret=" << ret << "."; + return ret; + } + std::vector postProcessOutput={}; + ret = PostProcess(inTensor, outputs, postProcessOutput); + if (ret != APP_ERR_OK) { + LogError << "PostProcess failed, ret=" << ret << "."; + return ret; + } + ret = WriteResult(postProcessOutput[0], imgPath); + if (ret != APP_ERR_OK) { + LogError << "Save result failed, ret=" << ret << "."; + return ret; + } + return APP_ERR_OK; +} diff --git a/contrib/EdgeDetectionPicture/rcfDetection/RcfDetection.h b/contrib/EdgeDetectionPicture/rcfDetection/RcfDetection.h new file mode 100644 index 000000000..7c0ae6da4 --- /dev/null +++ b/contrib/EdgeDetectionPicture/rcfDetection/RcfDetection.h @@ -0,0 +1,61 @@ +/* + * Copyright(C) 2021. Huawei Technologies Co.,Ltd. All rights reserved. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ + +#ifndef MXBASE_RCFDETECTION_H +#define MXBASE_RCFDETECTION_H + +#include +#include +#include "MxBase/DvppWrapper/DvppWrapper.h" +#include "MxBase/ModelInfer/ModelInferenceProcessor.h" +#include "MxBase/Tensor/TensorContext/TensorContext.h" + +struct InitParam { + uint32_t deviceId = 0; + bool checkTensor = true; + std::string modelPath = ""; + uint32_t outSizeNum = 0; + std::string outSize = ""; + uint32_t rcfType = 0; + uint32_t modelType = 0; + uint32_t inputType = 0; +}; + +class RcfDetection { +public: + APP_ERROR Init(const InitParam &initParam); + APP_ERROR DeInit(); + APP_ERROR Inference(const std::vector &inputs, std::vector &outputs); + APP_ERROR PostProcess(const MxBase::TensorBase &tensor, const std::vector &outputs, + std::vector &postProcessOutput); + APP_ERROR Process(const std::string &imgPath); +protected: + APP_ERROR ReadImage(const std::string &imgPath, MxBase::TensorBase &tensor); + APP_ERROR Resize(const MxBase::TensorBase &inputTensor, MxBase::TensorBase &outputTensor, + uint32_t resizeHeight, uint32_t resizeWidth); + APP_ERROR WriteResult(MxBase::TensorBase &inferTensor, const std::string &imgPath); + void SetRcfPostProcessConfig(const InitParam &initParam, std::map &config); + +private: + std::shared_ptr dvppWrapper_; + std::shared_ptr model_; + std::shared_ptr post_; + MxBase::ModelDesc modelDesc_ = {}; + uint32_t deviceId_ = 0; + int dvppHeightStride = 0; + int dvppWidthStride = 0; +}; +#endif -- Gitee From 6bbe86b9a7c771bb196f2ff04f28c24f1a99123d Mon Sep 17 00:00:00 2001 From: ComistryMo <12393292+comistrymo@user.noreply.gitee.com> Date: Sat, 12 Oct 2024 07:17:36 +0000 Subject: [PATCH 15/51] =?UTF-8?q?=E5=88=A0=E9=99=A4=E6=96=87=E4=BB=B6=20co?= =?UTF-8?q?ntrib/EdgeDetectionPicture/rcfDetection/.keep?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- contrib/EdgeDetectionPicture/rcfDetection/.keep | 0 1 file changed, 0 insertions(+), 0 deletions(-) delete mode 100644 contrib/EdgeDetectionPicture/rcfDetection/.keep diff --git a/contrib/EdgeDetectionPicture/rcfDetection/.keep b/contrib/EdgeDetectionPicture/rcfDetection/.keep deleted file mode 100644 index e69de29bb..000000000 -- Gitee From 6d94ab464e821fead4732ba4d0e45c6f0bd9bb50 Mon Sep 17 00:00:00 2001 From: ComistryMo <12393292+comistrymo@user.noreply.gitee.com> Date: Sat, 12 Oct 2024 07:17:45 +0000 Subject: [PATCH 16/51] =?UTF-8?q?=E5=88=A0=E9=99=A4=E6=96=87=E4=BB=B6=20co?= =?UTF-8?q?ntrib/EdgeDetectionPicture/RcfDetection.h?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- contrib/EdgeDetectionPicture/RcfDetection.h | 61 --------------------- 1 file changed, 61 deletions(-) delete mode 100644 contrib/EdgeDetectionPicture/RcfDetection.h diff --git a/contrib/EdgeDetectionPicture/RcfDetection.h b/contrib/EdgeDetectionPicture/RcfDetection.h deleted file mode 100644 index 7c0ae6da4..000000000 --- a/contrib/EdgeDetectionPicture/RcfDetection.h +++ /dev/null @@ -1,61 +0,0 @@ -/* - * Copyright(C) 2021. Huawei Technologies Co.,Ltd. All rights reserved. - * - * Licensed under the Apache License, Version 2.0 (the "License"); - * you may not use this file except in compliance with the License. - * You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, software - * distributed under the License is distributed on an "AS IS" BASIS, - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
- * See the License for the specific language governing permissions and - * limitations under the License. - */ - -#ifndef MXBASE_RCFDETECTION_H -#define MXBASE_RCFDETECTION_H - -#include -#include -#include "MxBase/DvppWrapper/DvppWrapper.h" -#include "MxBase/ModelInfer/ModelInferenceProcessor.h" -#include "MxBase/Tensor/TensorContext/TensorContext.h" - -struct InitParam { - uint32_t deviceId = 0; - bool checkTensor = true; - std::string modelPath = ""; - uint32_t outSizeNum = 0; - std::string outSize = ""; - uint32_t rcfType = 0; - uint32_t modelType = 0; - uint32_t inputType = 0; -}; - -class RcfDetection { -public: - APP_ERROR Init(const InitParam &initParam); - APP_ERROR DeInit(); - APP_ERROR Inference(const std::vector &inputs, std::vector &outputs); - APP_ERROR PostProcess(const MxBase::TensorBase &tensor, const std::vector &outputs, - std::vector &postProcessOutput); - APP_ERROR Process(const std::string &imgPath); -protected: - APP_ERROR ReadImage(const std::string &imgPath, MxBase::TensorBase &tensor); - APP_ERROR Resize(const MxBase::TensorBase &inputTensor, MxBase::TensorBase &outputTensor, - uint32_t resizeHeight, uint32_t resizeWidth); - APP_ERROR WriteResult(MxBase::TensorBase &inferTensor, const std::string &imgPath); - void SetRcfPostProcessConfig(const InitParam &initParam, std::map &config); - -private: - std::shared_ptr dvppWrapper_; - std::shared_ptr model_; - std::shared_ptr post_; - MxBase::ModelDesc modelDesc_ = {}; - uint32_t deviceId_ = 0; - int dvppHeightStride = 0; - int dvppWidthStride = 0; -}; -#endif -- Gitee From 092009ea6922a7d34c3d56db2ca0cdb7b8110205 Mon Sep 17 00:00:00 2001 From: ComistryMo <12393292+comistrymo@user.noreply.gitee.com> Date: Sat, 12 Oct 2024 07:17:51 +0000 Subject: [PATCH 17/51] =?UTF-8?q?=E5=88=A0=E9=99=A4=E6=96=87=E4=BB=B6=20co?= =?UTF-8?q?ntrib/EdgeDetectionPicture/RcfDetection.cpp?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- contrib/EdgeDetectionPicture/RcfDetection.cpp | 286 ------------------ 1 file changed, 286 deletions(-) delete mode 100644 contrib/EdgeDetectionPicture/RcfDetection.cpp diff --git a/contrib/EdgeDetectionPicture/RcfDetection.cpp b/contrib/EdgeDetectionPicture/RcfDetection.cpp deleted file mode 100644 index ab1796d72..000000000 --- a/contrib/EdgeDetectionPicture/RcfDetection.cpp +++ /dev/null @@ -1,286 +0,0 @@ -/** - * Copyright(C) 2021. Huawei Technologies Co.,Ltd. All rights reserved. - * - * Licensed under the Apache License, Version 2.0 (the "License"); - * you may not use this file except in compliance with the License. - * You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, software - * distributed under the License is distributed on an "AS IS" BASIS, - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - * See the License for the specific language governing permissions and - * limitations under the License. 
- */ -#include "RcfDetection.h" -#include "MxBase/DeviceManager/DeviceManager.h" -#include "MxBase/Log/Log.h" -#include "../rcfPostProcess/RcfPostProcess.h" -#include -#include -#include "opencv2/opencv.hpp" - -namespace { - const uint32_t YUV_BYTE_NU = 3; - const uint32_t YUV_BYTE_DE = 2; - const uint32_t VPC_H_ALIGN = 2; - const uint32_t channel = 3; - const double ALPHA = 255.0 / 255; -} -void RcfDetection::SetRcfPostProcessConfig(const InitParam &initParam, - std::map &config) -{ - MxBase::ConfigData configData; - const std::string checkTensor = initParam.checkTensor ? "true" : "false"; - configData.SetJsonValue("OUTSIZE_NUM", std::to_string(initParam.outSizeNum)); - configData.SetJsonValue("OUTSIZE", initParam.outSize); - configData.SetJsonValue("RCF_TYPE", std::to_string(initParam.rcfType)); - configData.SetJsonValue("MODEL_TYPE", std::to_string(initParam.modelType)); - configData.SetJsonValue("INPUT_TYPE", std::to_string(initParam.inputType)); - configData.SetJsonValue("CHECK_MODEL", checkTensor); - #ifdef MX_VERSION_5 - auto jsonStr = configData.GetCfgJson().serialize(); - config["postProcessConfigContent"] = jsonStr; - #else - auto jsonStr = configData.GetCfgJson(); - config["postProcessConfigContent"] = jsonStr; - #endif -} - -APP_ERROR RcfDetection::Init(const InitParam &initParam) -{ - deviceId_ = initParam.deviceId; - APP_ERROR ret = MxBase::DeviceManager::GetInstance()->InitDevices(); - if (ret != APP_ERR_OK) { - LogError << "Init devices failed, ret=" << ret << "."; - return ret; - } - ret = MxBase::TensorContext::GetInstance()->SetContext(initParam.deviceId); - if (ret != APP_ERR_OK) { - LogError << "Set context failed, ret=" << ret << "."; - return ret; - } - dvppWrapper_ = std::make_shared(); - ret = dvppWrapper_->Init(); - if (ret != APP_ERR_OK) { - LogError << "DvppWrapper init failed, ret=" << ret << "."; - return ret; - } - model_ = std::make_shared(); - ret = model_->Init(initParam.modelPath, modelDesc_); - if (ret != APP_ERR_OK) { - LogError << "ModelInferenceProcessor init failed, ret=" << ret << "."; - return ret; - } - - std::map config; - SetRcfPostProcessConfig(initParam, config); - post_ = std::make_shared(); - ret = post_->Init(config); - if (ret != APP_ERR_OK) { - LogError << "RcfPostProcess init failed, ret=" << ret << "."; - return ret; - } - if (ret != APP_ERR_OK) { - LogError << "Failed to load labels, ret=" << ret << "."; - return ret; - } - return APP_ERR_OK; -} - -APP_ERROR RcfDetection::DeInit() -{ - dvppWrapper_->DeInit(); - model_->DeInit(); - post_->DeInit(); - MxBase::DeviceManager::GetInstance()->DestroyDevices(); - return APP_ERR_OK; -} - -APP_ERROR RcfDetection::ReadImage(const std::string &imgPath, - MxBase::TensorBase &tensor) -{ - MxBase::DvppDataInfo output = {}; - APP_ERROR ret = dvppWrapper_->DvppJpegDecode(imgPath, output); - if (ret != APP_ERR_OK) { - LogError << "DvppWrapper DvppJpegDecode failed, ret=" << ret << "."; - return ret; - } - MxBase::MemoryData memoryData((void*)output.data, output.dataSize, - MxBase::MemoryData::MemoryType::MEMORY_DEVICE, deviceId_); - if (output.heightStride % VPC_H_ALIGN != 0) { - LogError << "Output data height(" << output.heightStride << ") can't be divided by " << VPC_H_ALIGN << "."; - MxBase::MemoryHelper::MxbsFree(memoryData); - return APP_ERR_COMM_INVALID_PARAM; - } - dvppHeightStride = output.heightStride; - dvppWidthStride = output.widthStride; - std::vector shape = {output.heightStride * YUV_BYTE_NU / YUV_BYTE_DE, output.widthStride}; - tensor = MxBase::TensorBase(memoryData, false, shape, 
MxBase::TENSOR_DTYPE_UINT8); - return APP_ERR_OK; -} -APP_ERROR RcfDetection::Resize(const MxBase::TensorBase &inputTensor, MxBase::TensorBase &outputTensor, - uint32_t resizeHeight, uint32_t resizeWidth) -{ - auto shape = inputTensor.GetShape(); - MxBase::DvppDataInfo input = {}; - input.height = (uint32_t)shape[0] * YUV_BYTE_DE / YUV_BYTE_NU; - input.width = shape[1]; - input.heightStride = (uint32_t)shape[0] * YUV_BYTE_DE / YUV_BYTE_NU; - input.widthStride = shape[1]; - input.dataSize = inputTensor.GetByteSize(); - input.data = (uint8_t*)inputTensor.GetBuffer(); - MxBase::ResizeConfig resize = {}; - resize.height = resizeHeight; - resize.width = resizeWidth; - MxBase::DvppDataInfo output = {}; - APP_ERROR ret = dvppWrapper_->VpcResize(input, output, resize); - if (ret != APP_ERR_OK) { - LogError << "VpcResize failed, ret=" << ret << "."; - return ret; - } - MxBase::MemoryData memoryData((void*)output.data, output.dataSize, - MxBase::MemoryData::MemoryType::MEMORY_DEVICE, deviceId_); - if (output.heightStride % VPC_H_ALIGN != 0) { - LogError << "Output data height(" << output.heightStride << ") can't be divided by " << VPC_H_ALIGN << "."; - MxBase::MemoryHelper::MxbsFree(memoryData); - return APP_ERR_COMM_INVALID_PARAM; - } - shape = {1, channel, output.heightStride, output.widthStride}; - outputTensor = MxBase::TensorBase(memoryData, false, shape, MxBase::TENSOR_DTYPE_UINT8); - return APP_ERR_OK; -} - -APP_ERROR RcfDetection::Inference(const std::vector &inputs, - std::vector &outputs) -{ - std::vector output={}; - auto dtypes = model_->GetOutputDataType(); - for (size_t i = 0; i < modelDesc_.outputTensors.size(); ++i) { - std::vector shape = {}; - for (size_t j = 0; j < modelDesc_.outputTensors[i].tensorDims.size(); ++j) { - shape.push_back((uint32_t)modelDesc_.outputTensors[i].tensorDims[j]); - } - MxBase::TensorBase tensor(shape, dtypes[i], MxBase::MemoryData::MemoryType::MEMORY_DEVICE, deviceId_); - APP_ERROR ret = MxBase::TensorBase::TensorBaseMalloc(tensor); - if (ret != APP_ERR_OK) { - LogError << "TensorBaseMalloc failed, ret=" << ret << "."; - return ret; - } - outputs.push_back(tensor); - } - MxBase::DynamicInfo dynamicInfo = {}; - dynamicInfo.dynamicType = MxBase::DynamicType::STATIC_BATCH; - auto startTime = std::chrono::high_resolution_clock::now(); - APP_ERROR ret = model_->ModelInference(inputs, outputs, dynamicInfo); - auto endTime = std::chrono::high_resolution_clock::now(); - double costMs = std::chrono::duration(endTime - startTime).count(); - LogInfo<< "costMs:"<< costMs; - if (ret != APP_ERR_OK) { - LogError << "ModelInference failed, ret=" << ret << "."; - return ret; - } - return APP_ERR_OK; -} - -APP_ERROR RcfDetection::PostProcess(const MxBase::TensorBase &tensor, - const std::vector &outputs, - std::vector &postProcessOutput) -{ - auto shape = tensor.GetShape(); - MxBase::ResizedImageInfo imgInfo; - imgInfo.widthOriginal = shape[1]; - imgInfo.heightOriginal = shape[0] * YUV_BYTE_DE; - uint32_t widthResize = 512; - uint32_t heightResize = 512; - imgInfo.widthResize = widthResize; - imgInfo.heightResize = heightResize; - imgInfo.resizeType = MxBase::RESIZER_STRETCHING; - std::vector imageInfoVec = {}; - imageInfoVec.push_back(imgInfo); - APP_ERROR ret = post_->Process(outputs, postProcessOutput); - if (ret != APP_ERR_OK) { - LogError << "Process failed, ret=" << ret << "."; - return ret; - } - ret = post_->DeInit(); - if (ret != APP_ERR_OK) { - LogError << "RcfDetection DeInit failed"; - return ret; - } - return APP_ERR_OK; -} - -APP_ERROR 
RcfDetection::WriteResult(MxBase::TensorBase &inferTensor, const std::string &imgPath) -{ - auto shape = inferTensor.GetShape(); - int dim2 = 2; - int dim3 = 3; - uint32_t height = shape[dim2]; - uint32_t width = shape[dim3]; - cv::Mat imgBgr = cv::imread(imgPath); - std::string fileName = imgPath.substr(imgPath.find_last_of("/") + 1); - uint32_t imageWidth = imgBgr.cols; - uint32_t imageHeight = imgBgr.rows; - cv::Mat modelOutput = cv::Mat(height, width, CV_32FC1, inferTensor.GetBuffer()); - cv::Mat grayMat; - cv::Mat resizedMat; - int crop = 5; - cv::Rect myROI(0, 0, imageWidth - crop, imageHeight); - resize(modelOutput, resizedMat, cv::Size(dvppWidthStride, dvppHeightStride), 0, 0, cv::INTER_LINEAR); - resizedMat.convertTo(grayMat, CV_8UC1, ALPHA); - cv::Mat croppedImage = grayMat(myROI); - resize(croppedImage, croppedImage, cv::Size(imageWidth, imageHeight), 0, 0, cv::INTER_LINEAR); - - std::string resultPathName = "result"; - if (access(resultPathName.c_str(), 0) != 0) { - int ret = mkdir(resultPathName.c_str(), S_IRUSR | S_IWUSR | S_IXUSR); - if (ret != 0) { - LogError << "Failed to create result directory: " << resultPathName << ", ret = " << ret; - return APP_ERR_COMM_FAILURE; - } - } - - cv::imwrite("./result/"+fileName, croppedImage); - return APP_ERR_OK; -} - -APP_ERROR RcfDetection::Process(const std::string &imgPath) -{ - MxBase::TensorBase inTensor; - APP_ERROR ret = ReadImage(imgPath, inTensor); - if (ret != APP_ERR_OK) { - LogError << "ReadImage failed, ret=" << ret << "."; - return ret; - } - MxBase::TensorBase outTensor; - uint32_t resizeHeight = 512; - uint32_t resizeWidth = 512; - ret = Resize(inTensor, outTensor, resizeHeight, resizeWidth); - if (ret != APP_ERR_OK) { - LogError << "Resize failed, ret=" << ret << "."; - return ret; - } - std::vector inputs = {}; - std::vector outputs = {}; - auto shape = outTensor.GetShape(); - inputs.push_back(outTensor); - ret = Inference(inputs, outputs); - if (ret != APP_ERR_OK) { - LogError << "Inference failed, ret=" << ret << "."; - return ret; - } - std::vector postProcessOutput={}; - ret = PostProcess(inTensor, outputs, postProcessOutput); - if (ret != APP_ERR_OK) { - LogError << "PostProcess failed, ret=" << ret << "."; - return ret; - } - ret = WriteResult(postProcessOutput[0], imgPath); - if (ret != APP_ERR_OK) { - LogError << "Save result failed, ret=" << ret << "."; - return ret; - } - return APP_ERR_OK; -} -- Gitee From 7a361e132d66248b60ff1a52f2a0473b351a51aa Mon Sep 17 00:00:00 2001 From: ComistryMo <12393292+comistrymo@user.noreply.gitee.com> Date: Sat, 12 Oct 2024 07:18:44 +0000 Subject: [PATCH 18/51] update contrib/EdgeDetectionPicture/README.md. Signed-off-by: ComistryMo <12393292+comistrymo@user.noreply.gitee.com> --- contrib/EdgeDetectionPicture/README.md | 3 +-- 1 file changed, 1 insertion(+), 2 deletions(-) diff --git a/contrib/EdgeDetectionPicture/README.md b/contrib/EdgeDetectionPicture/README.md index 53642883f..ec2f72b99 100644 --- a/contrib/EdgeDetectionPicture/README.md +++ b/contrib/EdgeDetectionPicture/README.md @@ -19,7 +19,7 @@ MindX SDK安装前准备可参考《用户指南》,[安装教程](https://git | Ascend-CANN-toolkit | 7.0.0 | Ascend-cann-toolkit开发套件包 | -#### 1.4 代码目录结构与说明 +#### 1.4 代码目录结构说明 本sample工程名称为EdgeDetectionPicture,工程目录如下图所示: ``` @@ -48,7 +48,6 @@ MindX SDK安装前准备可参考《用户指南》,[安装教程](https://git ``` . {cann_install_path}/ascend-toolkit/set_env.sh . 
{sdk_install_path}/mxVision/set_env.sh - ``` -- Gitee From 273fc27b82152c42cf8630a18a4f42db87589f94 Mon Sep 17 00:00:00 2001 From: ComistryMo <12393292+comistrymo@user.noreply.gitee.com> Date: Mon, 14 Oct 2024 01:50:47 +0000 Subject: [PATCH 19/51] =?UTF-8?q?=E5=88=A0=E9=99=A4=E6=96=87=E4=BB=B6=20co?= =?UTF-8?q?ntrib/TSM/README.md?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- contrib/TSM/README.md | 341 ------------------------------------------ 1 file changed, 341 deletions(-) delete mode 100644 contrib/TSM/README.md diff --git a/contrib/TSM/README.md b/contrib/TSM/README.md deleted file mode 100644 index 7356a5817..000000000 --- a/contrib/TSM/README.md +++ /dev/null @@ -1,341 +0,0 @@ -# TSM视频分类参考设计 - -## 1 介绍 -使用TSM模型,基于Kinetics-400数据集,在MindX SDK环境下实现视频分类功能。将测试视频传入脚本进行前处理,模型推理,后处理等功能,最终得到模型推理的精度和性能。 - -### 1.1 支持的产品 - -以昇腾Atlas310卡为主要的硬件平台 - -### 1.2 支持的版本 - -CANN:7.0.RC1 - -SDK:mxVision 5.0.RC3(可通过cat SDK目录下的 version.info 查看) - -### 1.3 软件方案介绍 - -项目主要由离线精度测试文件,在线功能测试文件,离线单视频推理性能测试文件,模型文件,测试数据集预处理文件组成。 - -### 1.4 代码目录结构与说明 - -```text -├── TSM - ├── README.md // 所有模型相关说明 - ├── model - ├── onnx2om.sh // 转om模型脚本 - ├── onnx2om1.sh // 在线模型转om模型脚本 - ├── label - ├── kinetics_val.csv // label文件 - ├── download_data - ├── k400_extractor.sh // 解压数据集脚本 - ├── offline.png // 离线推理技术实现流程 - ├── online.png // 在线推理技术实现流程 - ├── online_infer.py // 在线推理精度脚本 - ├── offline_infer.py // 离线推理精度脚本 - ├── speed.py // 离线单视频推理NPU性能脚本 - ├── speed_gpu.py // 离线单视频推理GPU性能脚本 -``` - -### 1.5技术实现流程 - -离线推理流程: - -![离线推理流程](./offline.png) - -在线推理流程: - -![离线推理流程](./online.png) - -### 1.6特性及适用场景 - -离线模型: - -本案例中的 TSM 模型适用于Kinetics数据集中的400类视频分类,并可以返回测试集视频的精度值及单视频识别的种类、性能。 - -在以下两种情况视频分类情况不太好:1. 视频长度过短(小于3s)。 2. 视频帧率过低。 - -在线模型: - -本案例中的在线模型适用于26中手势识别,并可以返回识别手势的名称。 - -## 2 环境依赖 - -推荐系统为ubuntu 18.04,环境依赖软件和版本如下表 - -| 软件名称 | 版本 | -|----------|--------| -| cmake | 3.5+ | -| mxVision | 5.0.RC3 | -| Python | 3.9 | -| torch | 1.10.0 | -| ffmpeg | 4.2.1 | - -- 环境变量搭建 - -在运行项目前,需要设置环境变量: - -MindSDK 环境变量: - -```Shell -. ${SDK-path}/set_env.sh -``` - -CANN 环境变量: - -```Shell -. ${ascend-toolkit-path}/set_env.sh -``` - -环境变量介绍 - -SDK-path: mxVision SDK 安装路径 - -ascend-toolkit-path: CANN 安装路径。 - -下载[ffmpeg](https://github.com/FFmpeg/FFmpeg/archive/n4.2.1.tar.gz),解压进入并执行以下命令安装: - -```Shell -./configure --prefix=/usr/local/ffmpeg --enable-shared -make -j -make install -``` - -安装完毕后导入环境变量 - -```Shell -export PATH=/usr/local/ffmpeg/bin:$PATH -export LD_LIBRARY_PATH=/usr/local/ffmpeg/lib:$LD_LIBRARY_PATH -``` - -## 3 离线推理 - -**步骤1** Kinetics-400数据集下载 - -参考[Kinetics-400 数据准备](https://github.com/PaddlePaddle/PaddleVideo/blob/develop/docs/zh-CN/dataset/k400.md#%E4%B8%8B%E8%BD%BDvideo%E6%95%B0%E6%8D%AE)中的脚本下载操作,在代码根目录的"download_data"目录下准备"download.sh"数据集下载脚本和"val_link.list"验证集链接列表文件。 - -```text -├── TSM - ├── download_data - ├── download.sh // 下载数据集脚本 - ├── k400_extractor.sh // 解压数据集脚本 - ├── val_link.list -``` - -进入代码根目录的"download_data"目录下,执行以下命令下载数据集压缩包val_part1.tar、val_part2.tar、val_part3.tar: - -```Shell -bash download.sh val_link.list -``` - -然后执行以下命令解压数据集到代码根目录下: - -```Shell -bash k400_extractor.sh -``` - -数据集结构如下: - -```text -├── TSM - ├── data - ├── abseiling - ├── air_drumming - ├── ... 
- ├── zumba -``` - -**步骤2** 数据集预处理 - -1、视频抽帧 - -在代码根目录执行以下命令创建所需目录: - -```Shell -mkdir tools -mkdir ops -``` - -下载[“temporal-shift-module-master.zip”](https://github.com/mit-han-lab/temporal-shift-module/tree/master)代码包并上传服务器解压,将代码包中"tools"目录下的"vid2img_kinetics.py"、"gen_label_kinetics.py"、"kinetics_label_map.txt"三个文件拷贝至参考设计代码根目录的“tools”目录下。 - -```text -├── TSM - ├── tools - ├── gen_label_kinetics.py // label生成脚本 - ├── vid2img_kinetics.py // 视频抽帧脚本 - ├── kinetics_label_map.txt -``` - -将代码包中"ops"目录下的"basic_ops.py"、"dataset.py"、"dataset_config.py"、"models.py"、"temporal_shift.py"、"transforms.py"六个文件拷贝至参考设计代码根目录的“ops”目录下。 - -```text - ├── ops - ├── basic_ops.py - ├── dataset.py // 数据集构建脚本 - ├── dataset_config.py // 数据集配置脚本 - ├── models.py // 模型搭建脚本 - ├── temporal_shift.py - ├── transforms.py -``` - -修改“tools”目录下的 vid2img_kinetics.py 内容,将77、78行注释。 - -```text - -77行 #class_name = 'test' -78行 #class_process(dir_path, dst_dir_path, class_name) - -``` - -在参考设计代码根目录下,执行以下命令对数据集视频进行抽帧并生成图片: - -```shell -mkdir dataset -cd ./tools -python3 vid2img_kinetics.py [video_path] [image_path] -e.g. -python3 vid2img_kinetics.py ../data ../dataset/ -``` - -修改“tools”目录下gen_label_kinetics.py 内容。 - -```text - -# 11行 dataset_path = '../dataset' # 放视频抽帧后的图片路径 -# 12行 label_path = '../label' # 存放label路径 -# 25行 files_input = ['kinetics_val.csv'] -# 26行 files_output = ['val_videofolder.txt'] -# 37行 folders.append(items[1]) -# 57行 output.append('%s %d %d'%(os.path.join('../dataset/',os.path.join(categories_list[i], curFolder)), len(dir_files), curIDX)) - -``` - -在“tools”目录下,执行以下命令生成标签文件: - -```shell -python3 gen_label_kinetics.py -``` - -**步骤3** 模型转换 - -下载[离线模型](https://mindx.sdk.obs.cn-north-4.myhuaweicloud.com/mindxsdk-referenceapps%20/contrib/TSM/offline_models.zip) TSM.onnx, 将下载好的模型放在“${TSM代码根目录}/model”目录下。 - -将模型转换为om模型,在“model”目录下,执行以下命令生成om模型 - -```shell -bash onnx2om.sh -``` - -**步骤4** 精度测试 - -修改${TSM代码根目录}/ops/dataset_config.py 脚本中参数root_data、filename_imglist_train和filename_imglist_val,若仅进行离线精度测试则可忽略filename_imglist_train设置。 - -```shell -import os - -ROOT_DATASET = './labels/' # 标签文件所在路径 - -... 
- -def return_kinetics(modality): - filename_categories = 400 - if modality == 'RGB': - root_data = ROOT_DATASET # 训练集根目录 - filename_imglist_train = 'train_videofolder.txt' # 训练数据集标签 - filename_imglist_val = 'val_videofolder.txt' # 测试数据集标签 - prefix = 'img_{:05d}.jpg' - else: - raise NotImplementedError('no such modality:' + modality) - return filename_categories, filename_imglist_train, filename_imglist_val, root_data, prefix -``` - -在参考设计代码根目录下,运行精度测试脚本 - -```shell -python3 offline_infer.py kinetics -``` - -原模型精度值为71.1%,实测精度值为71.01%,符合精度偏差范围,精度达标。 - -**步骤5** 性能测试 - -将用来测试的单视频放在参考设计代码根目录下,如视频“test_speed.mp4”,运行性能测试脚本 - -修改speed_gpu.py与speed.py参数,'./test_speed.mp4'为测试视频,测试视频类别需在Kinetics-400数据集的400个种类内且视频长度至少为3s。 - -```python -def main(): - cmd = 'ffmpeg -i \"{}\" -threads 1 -vf scale=-1:331 -q:v 0 \"{}/img_%05d.jpg\"'.format('./test_speed.mp4', './image') - subprocess.call(cmd, shell=True, - stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL) - files = os.listdir(r"./image/") -``` - -GPU性能(Tesla_V100S_PCIE_32GB) - -在参考设计代码根目录下,运行GPU性能测试脚本 - -```shell -python3 speed_gpu.py kinetics --test_segments=8 --test_crops=1 --batch_size=1 -``` - -注:speed_gpu.py脚本需在GPU环境上运行,NPU环境无法运行。 - -得到单视频纯推理性能为0.08sec/video - -SDK性能 - -在参考设计代码根目录下,运行SDK性能测试脚本 - -```shell -python3 speed.py -``` - -注:speed.py脚本需在NPU环境上运行。 - -得到单视频纯推理性能为0.189sec/video - -## 4 在线推理 - -**步骤1** 安装[视频流工具](https://gitee.com/ascend/docs-openmind/blob/master/guide/mindx/sdk/tutorials/reference_material/Live555%E7%A6%BB%E7%BA%BF%E8%A7%86%E9%A2%91%E8%BD%ACRTSP%E8%AF%B4%E6%98%8E%E6%96%87%E6%A1%A3.md) - -**步骤2** 生成视频流 - -根据提示当前只支持部分视频格式,并不支持.mp4后缀的文件,但可以通过ffmpeg转换生成[ffmpeg安装教程](https://gitee.com/ascend/docs-openmind/blob/master/guide/mindx/sdk/tutorials/reference_material/pc%E7%AB%AFffmpeg%E5%AE%89%E8%A3%85%E6%95%99%E7%A8%8B.md),如下所示为MP4转换为h.264命令: - -使用ffmpeg工具将带有手势的“jester.mp4”的mp4格式视频转换生成为“jester.264”的264格式视频: - -```shell -ffmpeg -i jester.mp4 -vcodec h264 -bf 0 -g 25 -r 10 -s 1280*720 -an -f h264 jester.264 - -//-bf B帧数目控制,-g 关键帧间隔控制,-s 分辨率控制 -an关闭音频, -r 指定帧率 -``` - -使用live555生成视频流。 - -**步骤3** 模型转换 - -下载[在线模型](https://mindx.sdk.obs.cn-north-4.myhuaweicloud.com/mindxsdk-referenceapps%20/contrib/TSM/online_models.zip) jester.onnx - -将下载好的模型放在参考设计代码根目录的“model”目录下。 - -将模型转换为om模型,在“model”目录下,运行脚本生成om模型 - -```shell -bash onnx2om1.sh -``` - -**步骤4** 程序测试 - -```shell -python3 online_infer.py -``` - -修改参数,'ip:port/jester.264'为测试视频流,其中ip为起流的机器ip地址,port为起流的机器端口地址,jester.264为测试视频jester.mp4通过ffmpeg转换后的视频。 - -```python -def video2img(): - cmd = 'ffmpeg -i \"{}\" -threads 1 -vf scale=-1:331 -q:v 0 \"{}/img_%05d.jpg\"'.format('rtsp://ip:port/jester.264', './image') - subprocess.call(cmd, shell=True, - stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL) -``` -- Gitee From b60f68bfb33db81fb87ceb3ffa6a27c3e8aa1683 Mon Sep 17 00:00:00 2001 From: ComistryMo <12393292+comistrymo@user.noreply.gitee.com> Date: Mon, 14 Oct 2024 01:51:20 +0000 Subject: [PATCH 20/51] =?UTF-8?q?=E6=9B=B4=E6=96=B0README?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit Signed-off-by: ComistryMo <12393292+comistrymo@user.noreply.gitee.com> --- contrib/TSM/README.md | 141 ++++++++++++++++++++++++++++++++++++++++++ 1 file changed, 141 insertions(+) create mode 100644 contrib/TSM/README.md diff --git a/contrib/TSM/README.md b/contrib/TSM/README.md new file mode 100644 index 000000000..cc2c0f52b --- /dev/null +++ b/contrib/TSM/README.md @@ -0,0 +1,141 @@ +# TSM视频分类参考设计 + +## 1 介绍 + +### 1.1 简介 +使用TSM模型,在MindX 
SDK环境下实现视频分类功能。将测试视频传入脚本,经过前处理、模型推理、后处理等步骤,最终得到模型推理的结果。
+
+### 1.2 支持的产品
+
+以昇腾Atlas310卡为主要的硬件平台
+
+### 1.3 支持的版本
+环境依赖软件和版本如下表:
+
+| 软件名称 | 版本 |
+|----------|--------|
+| cmake | 3.5+ |
+| mxVision | 5.0.RC3 |
+| Python | 3.9 |
+| torch | 1.10.0 |
+| ffmpeg | 4.2.1 |
+
+### 1.4 软件方案介绍
+
+项目主要由离线精度测试文件,在线功能测试文件,离线单视频推理性能测试文件,模型文件,测试数据集预处理文件组成。
+
+### 1.5 代码目录结构与说明
+
+```text
+├── TSM
+    ├── README.md               // 所有模型相关说明
+    ├── model
+        ├── onnx2om.sh          // 转om模型脚本
+        ├── onnx2om1.sh         // 在线模型转om模型脚本
+    ├── label
+        ├── kinetics_val.csv    // label文件
+    ├── download_data
+        ├── k400_extractor.sh   // 解压数据集脚本
+    ├── offline.png             // 离线推理技术实现流程
+    ├── online.png              // 在线推理技术实现流程
+    ├── online_infer.py         // 在线推理精度脚本
+    ├── offline_infer.py        // 离线推理精度脚本
+    ├── speed.py                // 离线单视频推理NPU性能脚本
+    ├── speed_gpu.py            // 离线单视频推理GPU性能脚本
+```
+
+### 1.6 技术实现流程
+
+在线推理流程:
+
+![在线推理流程](./online.png)
+
+### 1.7 特性及适用场景
+
+在线模型:
+
+本案例中的在线模型适用于26种手势的识别,并可以返回识别手势的名称。
+
+## 2 设置环境变量
+
+- 环境变量搭建
+
+在运行项目前,需要设置环境变量:
+
+MindX SDK 环境变量:
+
+```Shell
+. ${SDK-path}/set_env.sh
+```
+
+CANN 环境变量:
+
+```Shell
+. ${ascend-toolkit-path}/set_env.sh
+```
+
+环境变量介绍
+
+SDK-path: mxVision SDK 安装路径
+
+ascend-toolkit-path: CANN 安装路径。
+
+下载[ffmpeg](https://github.com/FFmpeg/FFmpeg/archive/n4.2.1.tar.gz),解压进入并执行以下命令安装:
+
+```Shell
+./configure --prefix=/usr/local/ffmpeg --enable-shared
+make -j
+make install
+```
+
+安装完毕后导入环境变量
+
+```Shell
+export PATH=/usr/local/ffmpeg/bin:$PATH
+export LD_LIBRARY_PATH=/usr/local/ffmpeg/lib:$LD_LIBRARY_PATH
+```
+
+## 3 在线推理
+
+**步骤1** 安装[视频流工具](https://gitee.com/ascend/docs-openmind/blob/master/guide/mindx/sdk/tutorials/reference_material/Live555%E7%A6%BB%E7%BA%BF%E8%A7%86%E9%A2%91%E8%BD%ACRTSP%E8%AF%B4%E6%98%8E%E6%96%87%E6%A1%A3.md)
+
+**步骤2** 生成视频流
+
+根据提示当前只支持部分视频格式,并不支持.mp4后缀的文件,但可以通过ffmpeg转换生成[ffmpeg安装教程](https://gitee.com/ascend/docs-openmind/blob/master/guide/mindx/sdk/tutorials/reference_material/pc%E7%AB%AFffmpeg%E5%AE%89%E8%A3%85%E6%95%99%E7%A8%8B.md),如下所示为MP4转换为h.264命令:
+
+使用ffmpeg工具将带有手势的“jester.mp4”的mp4格式视频转换生成为“jester.264”的264格式视频:
+
+```shell
+ffmpeg -i jester.mp4 -vcodec h264 -bf 0 -g 25 -r 10 -s 1280*720 -an -f h264 jester.264
+
+//-bf B帧数目控制,-g 关键帧间隔控制,-s 分辨率控制 -an关闭音频, -r 指定帧率
+```
+
+使用live555生成视频流。
+
+**步骤3** 模型转换
+
+下载[在线模型](https://mindx.sdk.obs.cn-north-4.myhuaweicloud.com/mindxsdk-referenceapps%20/contrib/TSM/online_models.zip) jester.onnx
+
+将下载好的模型放在参考设计代码根目录的“model”目录下。
+
+将模型转换为om模型,在“model”目录下,运行脚本生成om模型
+
+```shell
+bash onnx2om1.sh
+```
+
+**步骤4** 程序测试
+
+```shell
+python3 online_infer.py
+```
+
+修改参数,'ip:port/jester.264'为测试视频流,其中ip为起流的机器ip地址,port为起流的机器端口地址,jester.264为测试视频jester.mp4通过ffmpeg转换后的视频。
+
+```python
+def video2img():
+    cmd = 'ffmpeg -i \"{}\" -threads 1 -vf scale=-1:331 -q:v 0 \"{}/img_%05d.jpg\"'.format('rtsp://ip:port/jester.264', './image')
+    subprocess.call(cmd, shell=True,
+                    stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL)
+```
-- 
Gitee

From 315bd7775c74e898fb56a0a5ebb155b5fa7efebf Mon Sep 17 00:00:00 2001
From: ComistryMo <12393292+comistrymo@user.noreply.gitee.com>
Date: Thu, 17 Oct 2024 01:32:17 +0000
Subject: [PATCH 21/51] =?UTF-8?q?=E5=88=A0=E9=99=A4=E6=96=87=E4=BB=B6=20co?=
 =?UTF-8?q?ntrib/VCOD=5FSLTNet/README.md?=
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
---
 contrib/VCOD_SLTNet/README.md | 272 ----------------------------------
 1 file changed, 272 deletions(-)
 delete mode 100644 contrib/VCOD_SLTNet/README.md

diff --git a/contrib/VCOD_SLTNet/README.md b/contrib/VCOD_SLTNet/README.md
deleted file mode 100644
index e7d69da0c..000000000
--- 
a/contrib/VCOD_SLTNet/README.md +++ /dev/null @@ -1,272 +0,0 @@ -# 视频伪装物体检测 - -## 1 介绍 - -基于 MindX SDK 实现 SLT-Net 模型的推理,在 MoCA-Mask 数据集上 Sm 达到大于 0.6。输入连续几帧伪装物体的视频序列,输出伪装物体掩膜 Mask 图。 - - -### 1.1 支持的产品 - -支持昇腾310芯片 - - -### 1.2 支持的版本 - -CANN:7.0.RC1 - -SDK:mxVision 5.0.RC3(可通过cat SDK目录下的 version.info 查看) - - -### 1.3 软件方案介绍 - - -本方案中,先通过 `torch2onnx.py` 脚本将 PyTorch 版本的伪装视频物体检测模型 SLT-Net 转换为 onnx 模型;然后通过 `inference.py` 脚本调用晟腾om模型,将输入视频帧进行图像处理,最终生成视频伪装物体的掩膜 Mask 图。 - - -### 1.4 代码目录结构与说明 - -本sample工程名称为 VCOD_SLTNet,工程目录如下图所示: - -``` -──VCOD_SLTNet - ├── flowchart.jpeg - ├── inference.py # 推理文件 - ├── torch2onnx.py # 模型转换脚本 - └── README.md -``` - - -### 1.5 技术实现流程图 - -![Flowchart](./flowchart.jpeg) - -图1 视频伪装物体检测流程图 - - -### 1.6 特性及适用场景 - -对于伪装视频数据的分割任务均适用,输入视频需要转换为图片序列输入到模型中,具体可以参考 MoCA 数据格式与目录结构(如下所示),详见 [SLT-Net](https://xueliancheng.github.io/SLT-Net-project/) 与 [MoCA 数据集主页](https://www.robots.ox.ac.uk/~vgg/data/MoCA/)。 - - -``` ---data - └── TestDataset_per_sq # 测试数据集 - ├── flower_crab_spider_1 # 不同场景 - ├── GT # Ground Truth - ├── 00000.png - ├── ..... - └── Imgs # 输入图片序列 - ├── 00000.jpg - ├── ..... - ...... - -``` - - -## 2 环境依赖 - -环境依赖软件和版本如下表: - -| 软件名称 | 版本 | -| -------- | ------ | -| MindX SDK | 5.0.RC3 | -| Python | 3.9.2 | -| CANN | 7.0RC1 | -| PyTorch | 1.12.1 | -| numpy | 1.21.5 | -| imageio | 2.22.3| -| Pillow | 9.3.0 | -| cv2 | 4.5.5 | -| timm | 0.4.12 | -| tqdm | 4.64.1 | - - -## 3. 数据准备 - -### 3.1 准备相关文件 - -1、SLT-Net代码包准备 - -点击访问 [SLT-Net](https://github.com/XuelianCheng/SLT-Net) 并下载 SLT-Net-master.zip 代码压缩包,上传服务器并解压得到“SLT-Net-master”目录及文件; - -2、SLT-Net模型文件准备 - -方法一:通过访问 [SLT-Net 模型官方链接](https://drive.google.com/file/d/1_u4dEdxM4AKuuh6EcWHAlo8EtR7e8q5v/view) 下载模型压缩包 (注意,需要访问 Google Drive ),解压后将 `Net_epoch_MoCA_short_term_pseudo.pth` 模型拷贝至 `SLT-Net-master` 目录下; - -方法二:下载 [models.zip 备份模型压缩包](https://mindx.sdk.obs.cn-north-4.myhuaweicloud.com/mindxsdk-referenceapps%20/contrib/sltnet/models.zip) 并解压获得 `sltnet.pth`、`sltnet.onnx`、`sltnet.om` 三个模型文件,将 `sltnet.pth` 模型拷贝至 `SLT-Net-master` 目录下 - - -3、数据集准备 - -通过访问[MoCA官方链接](https://xueliancheng.github.io/SLT-Net-project/)下载 `MoCA_Video` 数据集,或者通过[数据集备份链接](https://mindx.sdk.obs.cn-north-4.myhuaweicloud.com/mindxsdk-referenceapps%20/contrib/sltnet/MoCA_Video.zip)来下载 `MoCA_Video.zip` 数据集压缩包并解压; - - -### 3.2 模型转换 - -1、SLT-Net代码预处理 - -进入 `SLT-Net-master/lib` 目录下,对 `__init__.py`、`short_term_model.py`、`pvtv2_afterTEM.py`三个文件做以下修改: - -1)`__init__.py`文件注释如下: - -``` -from .short_term_model import VideoModel as VideoModel_pvtv2 -# from .long_term_model import VideoModel as VideoModel_long_term -``` - -注:因为长期模型依赖 CUDA,并且需要在 CUDA 平台进行编译,而本项目基于 MindX SDK 实现,因此使用短期模型。并且,短期模型的评价指标满足预期。 - - -2)修改 `short_term_model.py` 文件中,如下代码行: - -修改 - -``` -def forward(self, x): - image1, image2, image3 = x[:, :3], x[:, 3:6], x[:, 6:] # 替换之前的 image1, image2, image3 = x[0],x[1],x[2] - fmap1=self.backbone.feat_net(image1) - fmap2=self.backbone.feat_net(image2) - fmap3=self.backbone.feat_net(image3) -``` - -修改 - -``` - def __init__(self, args): - super(ImageModel, self).__init__() - self.args = args - # self.backbone = Network(pvtv2_pretrained=self.args.pvtv2_pretrained, imgsize=self.args.trainsize) - self.backbone = Network(pvtv2_pretrained=self.args.pvtv2_pretrained, imgsize=352) # 指定图片大小 - - .... 
- - # self.backbone = Network(pvtv2_pretrained=False, imgsize=self.args.trainsize) - self.backbone = Network(pvtv2_pretrained=False, imgsize=352) # 指定图片大小 - if self.args.pretrained_cod10k is not None: - self.load_backbone(self.args.pretrained_cod10k ) -``` - - -删除 - -``` -if self.args.pretrained_cod10k is not None: - self.load_backbone(self.args.pretrained_cod10k ) -``` - - -3)`pvtv2_afterTEM.py` 文件注释如下: - -``` -from timm.models import create_model -#from mmseg.models import build_segmentor -#from mmcv import ConfigDict -import pdb -``` - - -修改“SLT-Net-master/mypath.py”文件如下: - -``` -elif dataset == 'MoCA': - return './dataset/MoCA-Mask/' # 将此处路径修改指定为“MoCA_Video”目录的相对路径 -``` - - -可参考已经完成修改的 [SLT_Net_MindXsdk_torch](https://github.com/shuowang-ai/SLT_Net_MindXsdk_torch),也可直接使用该项目进行下面的 onnx 模型转换操作,替代以上步骤。 - - -2、模型转换 - -步骤一、pth模型转onnx模型 - -将 `VCOD_SLTNet` 代码包中的 `torch2onnx.py` 脚本拷贝至 `SLT-Net-master` 目录下,并在 `SLT-Net-master` 目录下执行以下命令将 pth 模型转换成 onnx 模型: - -``` -python torch2onnx.py --pth_path ${pth模型文件路径} --onnx_path ./sltnet.onnx -``` - -参数说明: - -pth_path:pth模型文件名称及所在路径 - -onnx_path:生成输出的onnx模型文件 - - -注意,timm 的版本为 `0.4.12`,其他版本可能有兼容性问题。 - - -步骤二、简化onnx文件(可选操作) - -``` -python -m onnxsim --input-shape="1,9,352,352" --dynamic-input-shape sltnet.onnx sltnet_sim.onnx -``` - -步骤三、onnx模型转om模型 - -``` -atc --framework=5 --model=sltnet.onnx --output=sltnet --input_shape="image:1,9,352,352" --soc_version=Ascend310 --log=error -``` - -注意: - -1. 若想使用转换好的onnx模型或om模型,可通过下载 [models.zip备份模型压缩包](https://mindx.sdk.obs.cn-north-4.myhuaweicloud.com/mindxsdk-referenceapps%20/contrib/sltnet/models.zip) 解压获得转换好的 onnx 模型或 om 模型。 - -2. pth模型转onnx模型,onnx模型转om模型,均可能花费约1小时左右,视不同运行环境而定。如无报错,请耐心等待。 - - -## 4. 运行推理 - - -使用如下命令,运行 `inference.py` 脚本: - -``` -python inference.py --datapath ${MoCA_Video数据集路径} --save_root ./results/ --om_path ./sltnet.om --testsize 352 --device_id 0 -``` - -参数说明: - -datapath:下载数据以后,目录中 `TestDataset_per_sq` 的上一级目录, - -save_root:结果保存路径 - -om_path:om 模型路径 - -testsize:图片 resize 的大小,当前固定为 352 - -device_id:设备编号 - - -注意,该脚本无需放入修改的 SLT-Net 目录,在任意位置均可执行,只需设置好上述参数即可。 - - -运行输出如下: - -``` - 0%| | 0/713 [00:00 ./results/arctic_fox/Pred/00000.png - 0%|▏ | 1/713 [00:00<10:31, 1.13it/s]> ./results/arctic_fox/Pred/00005.png - 0%|▎ | 2/713 [00:01<09:01, 1.31it/s]> ./results/arctic_fox/Pred/00010.png - 0%|▍ | 3/713 [00:02<08:30, 1.39it/s]> ./results/arctic_fox/Pred/00015.png - 1%|▌ | 4/713 [00:02<08:13, 1.44it/s]> ./results/arctic_fox/Pred/00020.png -``` - -将展示剩余运行时间以及生成图片的路径。 - - -## 5. 精度评估 - -点击访问 [SLT_Net_MindXsdk_torch](https://github.com/shuowang-ai/SLT_Net_MindXsdk_torch) 并下载 `SLT_Net_MindXsdk_torch-master.zip` 代码压缩包,上传服务器并解压获得 `SLT_Net_MindXsdk_torch-master` 目录及相关文件; - -进入 `SLT_Net_MindXsdk_torch-master` 目录,修改 `eval_python/run_eval.py` 脚本中的 `gt_dir` 为本地的 `MoCA_Video/TestDataset_per_sq/` 目录的绝对路径,`pred_dir` 为预测结果目录的绝对路径,并执行以下命令进行精度评估: - -``` -python eval_python/run_eval.py -``` - -完成评估后的结果如下: - -{'Smeasure': 0.6539, 'wFmeasure': 0.3245, 'MAE': 0.0161, 'adpEm': 0.6329, 'meanEm': 0.7229, 'maxEm': 0.7554, 'adpFm': 0.3025, 'meanFm': 0.3577, 'maxFm': 0.3738} - -评测结果高于交付所要求的 Smeasure 0.6 的指标。 - -注:评估还可参考基于 基于 [MATLAB](https://github.com/XuelianCheng/SLT-Net/tree/master/eval) 的 SLT-Net 的评测代码或参考基于 Python 的 [PySODEvalToolkit](https://github.com/lartpang/PySODEvalToolkit) 的评测代码。 -- Gitee From 89d631d30c2c7252b8ad5668da3d091416cf7a6e Mon Sep 17 00:00:00 2001 From: ComistryMo <12393292+comistrymo@user.noreply.gitee.com> Date: Fri, 29 Nov 2024 08:30:33 +0000 Subject: [PATCH 22/51] update contrib/EfficientDet/README.md. 
Signed-off-by: ComistryMo <12393292+comistrymo@user.noreply.gitee.com> --- contrib/EfficientDet/README.md | 79 ++++++++++------------------------ 1 file changed, 23 insertions(+), 56 deletions(-) diff --git a/contrib/EfficientDet/README.md b/contrib/EfficientDet/README.md index 9186dc0ed..15d9b24b9 100644 --- a/contrib/EfficientDet/README.md +++ b/contrib/EfficientDet/README.md @@ -1,38 +1,29 @@ # EfficientDet 目标检测 ## 1. 介绍 -EfficientDet 目标检测后处理插件基于 MindXSDK 开发,对图片中的不同类目标进行检测,将检测得到的不同类的目标用不同颜色的矩形框标记。输入一幅图像,可以检测得到图像中大部分类别目标的位置。本方案使用在 COCO2017 数据集上训练得到的 EfficientDet 模型进行目标检测,数据集中共包含 90 个目标类,包括行人、自行车、公共汽车、手机、沙发、猫、狗等,可以对不同类别、不同角度、不同密集程度的目标进行检测。 - -### 1.1 支持的产品 - -本项目以昇腾Atlas310卡为主要的硬件平台。 - -### 1.2 支持的版本 - -CANN:7.0.0 - -SDK:mxVision 5.0.0(可通过cat SDK目录下的 version.info 查看) +### 1.1 简介 +EfficientDet 目标检测后处理插件基于 MindXSDK 开发,对图片中的不同类目标进行检测,将检测得到的不同类的目标用不同颜色的矩形框标记。输入一幅图像,可以检测得到图像中大部分类别目标的位置。本方案使用在 COCO2017 数据集上训练得到的 EfficientDet 模型进行目标检测,数据集中共包含 90 个目标类,包括行人、自行车、公共汽车、手机、沙发、猫、狗等,可以对不同类别、不同角度、不同密集程度的目标进行检测。 +### 1.2 支持的产品 -### 1.3 软件方案介绍 +本项目以昇腾Atlas 300I pro和 Atlas300V pro为主要的硬件平台。 -基于MindX SDK的目标检测业务流程为:待检测图片通过 appsrc 插件输入,然后使用图像解码插件 mxpi_imagedecoder 对图片进行解码,再通过图像缩放插件 mxpi_imageresize 将图像缩放至满足检测模型要求的输入图像大小要求,缩放后的图像输入模型推理插件 mxpi_tensorinfer 得到推理结果,推理结果输入 mxpi_objectpostprocessor 插件进行后处理,得到输入图片中所有的目标框位置和对应的置信度。最后通过输出插件 appsink 获取检测结果,并在外部进行可视化,将检测结果标记到原图上,本系统的各模块及功能描述如表1所示: -表1 系统方案各模块功能描述: +### 1.3 支持的版本 -| 序号 | 子系统 | 功能描述 | -| ---- | ------ | ------------ | -| 1 | 图片输入 | 获取 jpg 格式输入图片 | -| 2 | 图片解码 | 解码图片 | -| 3 | 图片缩放 | 将输入图片放缩到模型指定输入的尺寸大小 | -| 4 | 模型推理 | 对输入张量进行推理 | -| 5 | 目标检测后处理 | 从模型推理结果计算检测框的位置和置信度,并保留置信度大于指定阈值的检测框作为检测结果 | -| 6 | 结果输出 | 获取检测结果| -| 7 | 结果可视化 | 将检测结果标注在输入图片上| +| MxVision版本 | CANN版本 | Driver/Firmware版本 | + | --------- | ------------------ | -------------- | +| 6.0.RC3 | 8.0.RC3 | 24.1.RC3 | +### 1.4 三方依赖 +| 软件名称 | 版本 | +| -------- | ------ | +| cmake | 3.5+ | +| python | 3.9.2 | +| webcolors| 1.11.1 | -### 1.4 代码目录结构与说明 +### 1.5 代码目录结构与说明 本工程名称为 EfficientDet,工程目录如下所示: ``` @@ -87,41 +78,17 @@ SDK:mxVision 5.0.0(可通过cat SDK目录下的 version.info 查看) ``` +## 2 设置环境变量 -### 1.5 技术实现流程图 +在执行后续步骤前,需要设置环境变量: -EfficientDet 的后处理插件接收模型推理插件输出的两个特征图,位置回归特征图 R 和分类特征图 C,其中 R 的形状大小为 1 x n x 4, n 表示模型在输入图片上预设的 anchors 个数,4 分别表示检测结果矩形框左上角点坐标 x, y 相对预设 anchor 的位移,以及检测框的宽、高相对预设 anchor 的比例,C 的形状大小为 1 x n x 90,90 表示每个检测框属于每个类的置信度值,该值位于 0-1 之间。后处理插件继承自 MindXSDK 的目标检测后处理插件基类,后处理插件中可以获得图片缩放插件传递的图像尺寸缩放信息 ResizedImageInfo,包括缩放前图片宽、高和缩放后图片宽、高。 -后处理插件从模型推理输出 R、C 和图像尺寸缩放信息 ResizedImageInfo 计算检测结果的整体流程如下图所示: -
- -
-
-1. **计算预设 anchors。** 根据 ResizedImageInfo 计算不同宽高比、不同大小、在原图上不同位置的预设 anchors,anchors 的形状为 n x 4, 4 表示每个 anchor 的左上角坐标和宽、高。 - -2. **根据 R、anchors、ResizedImageInfo、C 计算每个检测框的位置、宽高、类别以及类别置信度。** R 中的每个 4 元向量和 anchors 中每个 4 元向量是对应的,根据坐标位移和宽高比例计算得到真实的检测框位置和宽、高,同时去除置信度小于指定阈值 CT 的检测跨框。 - -3. **NMS 去除冗余检测框。** 对步骤 2 中剩余的检测框进行筛选,首先按照置信度对保留的检测框排序,从置信度高的检测框开始,去除于其 IOU 值超过指定阈值 IT 的检测框,得到最终的检测结果。 - - -## 2 环境依赖 - -推荐系统为ubuntu 18.04,环境依赖软件和版本如下表: - -| 软件名称 | 版本 | -| -------- | ------ | -| cmake | 3.5+ | -| MindX SDK | 5.0.0 | -| CANN | 7.0.0 | -| python | 3.9.2 | -| webcolors| 1.11.1 | - -确保环境中正确安装mxVision SDK。 - -在编译运行项目前,需要设置环境变量: -``` -. {cann_install_path}/ascend-toolkit/set_env.sh -. {sdk_install_path}/mxVision/set_env.sh +```bash +# 执行环境变量脚本使环境变量生效 +. ${ascend-toolkit-path}/set_env.sh +. ${mxVision-path}/set_env.sh +# mxVision: mxVision安装路径 +# ascend-toolkit-path: CANN安装路径 ``` -- Gitee From c2ef87c4feef3e7771283ab4b5344640f0ca9094 Mon Sep 17 00:00:00 2001 From: ComistryMo <12393292+comistrymo@user.noreply.gitee.com> Date: Fri, 29 Nov 2024 08:46:00 +0000 Subject: [PATCH 23/51] update contrib/EfficientDet/README.md. Signed-off-by: ComistryMo <12393292+comistrymo@user.noreply.gitee.com> --- contrib/EfficientDet/README.md | 25 ++++++++++++++----------- 1 file changed, 14 insertions(+), 11 deletions(-) diff --git a/contrib/EfficientDet/README.md b/contrib/EfficientDet/README.md index 15d9b24b9..310d0b7ce 100644 --- a/contrib/EfficientDet/README.md +++ b/contrib/EfficientDet/README.md @@ -93,13 +93,16 @@ EfficientDet 目标检测后处理插件基于 MindXSDK 开发,对图片中的 -## 3. 模型转换 +## 3. 准备模型 +**步骤1:** 本项目中采用的模型是 EfficientDet 模型,参考实现代码:https://github.com/zylo117/Yet-Another-EfficientDet-Pytorch, 选用的模型是该 pytorch 项目中提供的模型 efficientdet-d0.pth,本项目运行前需要将 pytorch 模型转换为 onnx 模型,然后使用模型转换工具 ATC 将 onnx 模型转换为 om 模型,模型转换工具相关介绍参考链接:https://gitee.com/ascend/docs-openmind/blob/master/guide/mindx/sdk/tutorials/%E5%8F%82%E8%80%83%E8%B5%84%E6%96%99.md 。本项目中使用的 onnx 模型和 om 模型链接:https://mindx.sdk.obs.cn-north-4.myhuaweicloud.com/mindxsdk-referenceapps%20/contrib/EfficientDet/models.zip -自行转换模型步骤如下: -1. 从上述 onnx 模型下载链接中下载 onnx 模型 simplified-efficient-det-d0-mindxsdk-order.onnx 和 simplified-efficient-det-d6-mindxsdk-order.onnx 至 ``python/models/onnx-models`` 文件夹下。 -2. 进入 ``python/models/conversion-scripts`` 文件夹下依次执行命令: +**步骤2:** +从上述 onnx 模型下载链接中下载 onnx 模型 simplified-efficient-det-d0-mindxsdk-order.onnx 和 simplified-efficient-det-d6-mindxsdk-order.onnx 至 ``python/models/onnx-models`` 文件夹下。 + +**步骤3:** +进入 ``python/models/conversion-scripts`` 文件夹下依次执行命令: ``` bash model_conversion_d0.sh bash model_conversion_d6.sh @@ -114,21 +117,21 @@ ATC run success, welcome to the next use. ### 3.1 可选操作 上述方法使用提供的 onnx 模型转换得到 om 模型,该模型的输入尺寸是 (512, 512),若想转换得到其他输入尺寸的模型,或者想从 pytorch 模型转 onnx 模型,相关操作步骤如下: -1. 从上述参考实现代码链接下载 pytorch 项目文件,执行: +**步骤1:** 从上述参考实现代码链接下载 pytorch 项目文件,执行: ``` git clone https://github.com/zylo117/Yet-Another-EfficientDet-Pytorch.git ``` 或者下载 ZIP 压缩包再解压,在当前目录下得到 ``Yet-Another-EfficientDet-Pytorch-master`` 代码文件夹。 -2. 按照参考实现代码链接中的说明配置 pytorch 环境。 +**步骤2:** 按照参考实现代码链接中的说明配置 pytorch 环境。 -3. 将**本项目目录下**的 ``python/models/convert_to_onnx.py`` 文件复制到 ``Yet-Another-EfficientDet-Pytorch-master`` 目录下。 +**步骤3:** 将**本项目目录下**的 ``python/models/convert_to_onnx.py`` 文件复制到 ``Yet-Another-EfficientDet-Pytorch-master`` 目录下。 -4. 
因为源项目中的代码不支持直接从 pth 模型转换成 onnx 模型,参考链接 https://github.com/zylo117/Yet-Another-EfficientDet-Pytorch/issues/111 中的步骤修改相关代码文件。 +**步骤4:** 因为源项目中的代码不支持直接从 pth 模型转换成 onnx 模型,参考链接 https://github.com/zylo117/Yet-Another-EfficientDet-Pytorch/issues/111 中的步骤修改相关代码文件。 -5. 从上述 github 项目页面给出的模型权重表格中下载 pytorch 模型文件,如 EfficientDet-d0 模型对应 efficientdet-d0.pth, EfficientDet-d1 模型对应 efficientdet-d1.pth,下载好的权重文件放置在 ``Yet-Another-EfficientDet-Pytorch-master/weights`` 目录下。 +**步骤5:** 从上述 github 项目页面给出的模型权重表格中下载 pytorch 模型文件,如 EfficientDet-d0 模型对应 efficientdet-d0.pth, EfficientDet-d1 模型对应 efficientdet-d1.pth,下载好的权重文件放置在 ``Yet-Another-EfficientDet-Pytorch-master/weights`` 目录下。 -6. 在``Yet-Another-EfficientDet-Pytorch-master`` 目录下创建 ```onnx-models``` 目录,运行命令: +**步骤6:** 在``Yet-Another-EfficientDet-Pytorch-master`` 目录下创建 ```onnx-models``` 目录,运行命令: ``` python3 convert_to_onnx.py --compound_coef={compound_coef} --load_weights=weights/efficientdet-d{compound_coef}.pth --output-name=efficient-det-d{compound_coef}-mindxsdk-order.onnx ``` @@ -138,7 +141,7 @@ python3 convert_to_onnx.py --compound_coef=0 --load_weights=weights/efficientdet ``` 执行成功后会 ```onnx-models``` 目录下生成从 pytorch 模型转化得到的 onnx 模型,simplified-efficient-det-d{compound_coef}-mindxsdk-order.onnx -7. 成功转换得到 onnx 文件后,将 onnx 文件拷贝到**本项目目录下** 的``python/models/onnx-models`` 目录下,然后将其转换为 om 模型,转换步骤如下: +**步骤7:**成功转换得到 onnx 文件后,将 onnx 文件拷贝到**本项目目录下** 的``python/models/onnx-models`` 目录下,然后将其转换为 om 模型,转换步骤如下: - 进入 ``python/models/conversion-scripts`` 目录; - 执行命令: ``` -- Gitee From c35388d0c135264d4bb28cfda79c02f410b3b32e Mon Sep 17 00:00:00 2001 From: ComistryMo <12393292+comistrymo@user.noreply.gitee.com> Date: Mon, 2 Dec 2024 03:48:33 +0000 Subject: [PATCH 24/51] update contrib/EfficientDet/README.md. Signed-off-by: ComistryMo <12393292+comistrymo@user.noreply.gitee.com> --- contrib/EfficientDet/README.md | 13 +++++++------ 1 file changed, 7 insertions(+), 6 deletions(-) diff --git a/contrib/EfficientDet/README.md b/contrib/EfficientDet/README.md index 310d0b7ce..249fa9ad7 100644 --- a/contrib/EfficientDet/README.md +++ b/contrib/EfficientDet/README.md @@ -167,16 +167,17 @@ bash model_convertion_d{compound_coef}.sh cd python python3 main.py ``` -命令执行成功后在当前目录下生成检测结果文件 img_detect_result.jpg,查看结果文件验证检测结果。 -**步骤5** 精度测试。 +**步骤5**查看结果 **步骤4**执行成功后在当前目录下生成检测结果文件 img_detect_result.jpg,查看结果文件验证检测结果。 -1. 安装 python COCO 评测工具。执行命令: +## 5 精度验证 + +**步骤1** 安装 python COCO 评测工具。执行命令: ``` pip3 install pycocotools ``` -2. 下载 COCO VAL 2017 数据集和标注文件,下载链接:https://mindx.sdk.obs.cn-north-4.myhuaweicloud.com/mindxsdk-referenceapps%20/contrib/EfficientDet/data.zip, 在 ``python`` 目录下创建 ``dataset`` 目录,将数据集压缩文件和标注数据压缩文件都解压至 ``python/dataset`` 目录下。确保解压后的 python 目录结构为: +**步骤2** 下载 COCO VAL 2017 数据集和标注文件,下载链接:https://mindx.sdk.obs.cn-north-4.myhuaweicloud.com/mindxsdk-referenceapps%20/contrib/EfficientDet/data.zip, 在 ``python`` 目录下创建 ``dataset`` 目录,将数据集压缩文件和标注数据压缩文件都解压至 ``python/dataset`` 目录下。确保解压后的 python 目录结构为: ``` . ├── dataset @@ -223,7 +224,7 @@ pip3 install pycocotools ``` -3. 
执行验证 +**步骤3** 执行验证 **从 ```evaluate.py``` 中找到使用的 pipeline 文件路径,将其中 mxpi_objectpostprocessor0 插件的 postProcessLibPath 属性值改为具体路径值,** 然后执行命令: ``` @@ -257,7 +258,7 @@ python3 evaluate.py --pipeline=pipeline/EfficientDet-d0-previous-version.pipelin 其中圈出来的部分为模型在 COCO VAL 2017 数据集上,IOU 阈值为 0.50:0.05:0.95 时的精度值为 0.310。采用这种 pipeline 配置和模型转换方式得到的 om 模型评测指标会稍有下降,但相应的模型性能会有所提升。 -## 5 常见问题 +## 6 常见问题 ### 5.1 MindXSDK 版本问题 -- Gitee From e884027851f4d95d9a07f91e94becc2e87f4d33c Mon Sep 17 00:00:00 2001 From: ComistryMo <12393292+comistrymo@user.noreply.gitee.com> Date: Mon, 2 Dec 2024 06:41:12 +0000 Subject: [PATCH 25/51] update contrib/EfficientDet/README.md. Signed-off-by: ComistryMo <12393292+comistrymo@user.noreply.gitee.com> --- contrib/EfficientDet/README.md | 28 +++------------------------- 1 file changed, 3 insertions(+), 25 deletions(-) diff --git a/contrib/EfficientDet/README.md b/contrib/EfficientDet/README.md index 249fa9ad7..1ed82ae81 100644 --- a/contrib/EfficientDet/README.md +++ b/contrib/EfficientDet/README.md @@ -236,32 +236,10 @@ python3 evaluate.py --pipeline=pipeline/EfficientDet-d0.pipeline --output=val201
-其中圈出来的部分为模型在 COCO VAL 2017 数据集上,IOU 阈值为 0.50:0.05:0.95 时的精度值为 0.325。 - -该指标是基于 MindXSDK 2.0.2.1 版本的评测结果,此时 pipeline 的流程为将 ImageDecoder 插件的输出格式类型设置为 RGB,进入 ImageResize 插件,该插件要想接收 RGB 格式输入必须将 "cvProcessor" 属性设置为 "opencv",该版本支持同时将 "resizeType" 属性设置为 "Resizer_KeepAspectRatio_Fit",这样该插件可以实现接收 RGB 格式输入同时按照宽高比例缩放的功能,转换 om 模型时的 aippconfig 配置中 "input_format" 属性值设置为 "RGB888_U8" 即可。 - -如果项目环境是基于 MindXSDK 2.0.2 版本时,该版本下将 ImageResize 插件的 "cvProcessor" 属性设置为 "opencv" 时,无法实现将 "resizeType" 属性设置为 "Resizer_KeepAspectRatio_Fit", 报错信息参考第 5 节常见问题中的 5.1,这种情形下无法在 pipeline 中配置模型输入为 RGB 格式同时按照宽高比例缩放,只能在转换 om 模型时设置色域转换模式为 YUV420SP_U8 to RGB,使得模型输入为 RGB 格式,同时在 pipeline 中不设置 ImageResize 插件的 "cvProcessor" 属性值,只设置 "resizeType" 属性为 "Resizer_KeepAspectRatio_Fit",这样可以实现模型输入为 RGB 格式同时按照宽高比例缩放。这种情形下的模型转换和评测步骤为: -1. 转换模型。进入 ``python/models/conversion-scripts`` 文件夹下执行命令: -``` -bash model_convertion_d0_previous_version.sh -``` -执行成功后在 ``python/models`` 文件夹下生成 efficient-det-d0-mindxsdk-order-previous-version.om 模型文件。 - -2. 评测。将 ```python/pipeline/EfficientDet-d0-previous-version.pipeline``` 中 mxpi_objectpostprocessor0 插件的 postProcessLibPath 属性值中的 ${MX_SDK_HOME} 值改为具体路径值,然后执行命令: -``` -python3 evaluate.py --pipeline=pipeline/EfficientDet-d0-previous-version.pipeline --output=val2017_detection_result_d0_previous_version.json -``` -命令执行结束后输出 COCO 格式的评测结果,并生成 val2017_detection_result_d0_previous_version.json 检测结果文件。输出结果如下图所示: -
-(原文此处为 COCO 评测输出结果截图)
-
-其中圈出来的部分为模型在 COCO VAL 2017 数据集上,IOU 阈值为 0.50:0.05:0.95 时的精度值为 0.310。采用这种 pipeline 配置和模型转换方式得到的 om 模型评测指标会稍有下降,但相应的模型性能会有所提升。 - ## 6 常见问题 -### 5.1 MindXSDK 版本问题 +### 6.1 MindXSDK 版本问题 **问题描述:** @@ -275,7 +253,7 @@ python3 evaluate.py --pipeline=pipeline/EfficientDet-d0-previous-version.pipelin 确保 MindXSDK 版本至少为 2.0.2.1。 -### 5.2 未修改 pipeline 文件中的 ${MX_SDK_HOME} 值为具体值 +### 6.2 未修改 pipeline 文件中的 ${MX_SDK_HOME} 值为具体值 运行检测 demo 和评测时都需要将对应 pipeline 文件中 mxpi_objectpostprocessor0 插件的 postProcessLibPath 属性值中的 ${MX_SDK_HOME} 值改为具体路径值,否则会报错,如下图所示:
@@ -286,7 +264,7 @@ python3 evaluate.py --pipeline=pipeline/EfficientDet-d0-previous-version.pipelin 检测 main.py 和 evaluate.py 里所用的 pipeline 文件, 将文件中 mxpi_objectpostprocessor0 插件的 postProcessLibPath 属性值中的 ${MX_SDK_HOME} 值改为具体路径值。 -### 5.3 未修改模型文件或生成so的权限 +### 6.3 未修改模型文件或生成so的权限 SDK对运行库so和模型文件有要求,如出现以下报错提示请参考FASQ中相关内容使用chmod指定权限640 ```shell Check Owner permission failed: Current permission is 7, but required no greater than 6. -- Gitee From 8f3f3b881d5a92b774de7607b4e1a06edd4ebb17 Mon Sep 17 00:00:00 2001 From: ComistryMo <12393292+comistrymo@user.noreply.gitee.com> Date: Mon, 2 Dec 2024 06:46:05 +0000 Subject: [PATCH 26/51] update contrib/EfficientDet/README.md. Signed-off-by: ComistryMo <12393292+comistrymo@user.noreply.gitee.com> --- contrib/EfficientDet/README.md | 1 + 1 file changed, 1 insertion(+) diff --git a/contrib/EfficientDet/README.md b/contrib/EfficientDet/README.md index 1ed82ae81..62f9dfe21 100644 --- a/contrib/EfficientDet/README.md +++ b/contrib/EfficientDet/README.md @@ -236,6 +236,7 @@ python3 evaluate.py --pipeline=pipeline/EfficientDet-d0.pipeline --output=val201
+ ## 6 常见问题 -- Gitee From 0ceaa6f9c8ef0f7182166af6aaa8ac1fd0e6f5c9 Mon Sep 17 00:00:00 2001 From: ComistryMo <12393292+comistrymo@user.noreply.gitee.com> Date: Mon, 2 Dec 2024 06:47:46 +0000 Subject: [PATCH 27/51] update contrib/EfficientDet/postprocess/CMakeLists.txt. Signed-off-by: ComistryMo <12393292+comistrymo@user.noreply.gitee.com> --- contrib/EfficientDet/postprocess/CMakeLists.txt | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/contrib/EfficientDet/postprocess/CMakeLists.txt b/contrib/EfficientDet/postprocess/CMakeLists.txt index adb59cbbf..f8a596673 100644 --- a/contrib/EfficientDet/postprocess/CMakeLists.txt +++ b/contrib/EfficientDet/postprocess/CMakeLists.txt @@ -16,7 +16,7 @@ include_directories(${MX_SDK_HOME}/opensource/lib/glib-2.0/include) link_directories(${MX_SDK_HOME}/lib) link_directories(${MX_SDK_HOME}/opensource/lib) -add_compile_options(-std=c++11 -fPIC -fstack-protector-all -pie -Wno-deprecated-declarations) +add_compile_options(-std=c++14 -fPIC -fstack-protector-all -pie -Wno-deprecated-declarations) add_compile_options("-DPLUGIN_NAME=${PLUGIN_NAME}") add_definitions(-DENABLE_DVPP_INTERFACE) add_library(${TARGET_LIBRARY} SHARED EfficientdetPostProcess.cpp) -- Gitee From 194ccf5161990b7ba44303a116c8bec61261ec35 Mon Sep 17 00:00:00 2001 From: ComistryMo <12393292+comistrymo@user.noreply.gitee.com> Date: Mon, 2 Dec 2024 06:57:58 +0000 Subject: [PATCH 28/51] update contrib/EfficientDet/python/models/conversion-scripts/model_conversion_d0.sh. Signed-off-by: ComistryMo <12393292+comistrymo@user.noreply.gitee.com> --- .../python/models/conversion-scripts/model_conversion_d0.sh | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/contrib/EfficientDet/python/models/conversion-scripts/model_conversion_d0.sh b/contrib/EfficientDet/python/models/conversion-scripts/model_conversion_d0.sh index 22104eb80..32da5148c 100644 --- a/contrib/EfficientDet/python/models/conversion-scripts/model_conversion_d0.sh +++ b/contrib/EfficientDet/python/models/conversion-scripts/model_conversion_d0.sh @@ -31,7 +31,7 @@ export ASCEND_OPP_PATH=${install_path}/opp # 执行,转换 EfficientDet-d0 模型 # Execute, transform EfficientDet-d0 model. -atc --model=../onnx-models/simplified-efficient-det-d0-mindxsdk-order.onnx --framework=5 --output=../efficient-det-d0-mindxsdk-order --soc_version=Ascend310 --input_shape="input:1, 3, 512, 512" --input_format=NCHW --output_type=FP32 --out_nodes='Concat_10954:0;Sigmoid_13291:0' --log=error --insert_op_conf=../aipp-configs/insert_op_d0.cfg +atc --model=../onnx-models/simplified-efficient-det-d0-mindxsdk-order.onnx --framework=5 --output=../efficient-det-d0-mindxsdk-order --soc_version=Ascend310P3 --input_shape="input:1, 3, 512, 512" --input_format=NCHW --output_type=FP32 --out_nodes='Concat_10954:0;Sigmoid_13291:0' --log=error --insert_op_conf=../aipp-configs/insert_op_d0.cfg # 删除除 om 模型外额外生成的文件 # Remove miscellaneous -- Gitee From d40f3402478373e45eeaba577769bf346b141177 Mon Sep 17 00:00:00 2001 From: ComistryMo <12393292+comistrymo@user.noreply.gitee.com> Date: Mon, 2 Dec 2024 06:58:15 +0000 Subject: [PATCH 29/51] update contrib/EfficientDet/python/models/conversion-scripts/model_conversion_d6.sh. 
Signed-off-by: ComistryMo <12393292+comistrymo@user.noreply.gitee.com> --- .../python/models/conversion-scripts/model_conversion_d6.sh | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/contrib/EfficientDet/python/models/conversion-scripts/model_conversion_d6.sh b/contrib/EfficientDet/python/models/conversion-scripts/model_conversion_d6.sh index 744b63ac5..c76f7f3e8 100644 --- a/contrib/EfficientDet/python/models/conversion-scripts/model_conversion_d6.sh +++ b/contrib/EfficientDet/python/models/conversion-scripts/model_conversion_d6.sh @@ -31,7 +31,7 @@ export ASCEND_OPP_PATH=${install_path}/opp # 执行,转换 EfficientDet-d6 模型 # Execute, transform EfficientDet-d6 model. -atc --model=../onnx-models/simplified-efficient-det-d6-mindxsdk-order.onnx --framework=5 --output=../efficient-det-d6-mindxsdk-order --soc_version=Ascend310 --input_shape="input:1, 3, 1280, 1280" --input_format=NCHW --output_type=FP32 --out_nodes='Concat_25609:0;Sigmoid_29066:0' --log=error --insert_op_conf=../aipp-configs/insert_op_d6.cfg +atc --model=../onnx-models/simplified-efficient-det-d6-mindxsdk-order.onnx --framework=5 --output=../efficient-det-d6-mindxsdk-order --soc_version=Ascend310P3 --input_shape="input:1, 3, 1280, 1280" --input_format=NCHW --output_type=FP32 --out_nodes='Concat_25609:0;Sigmoid_29066:0' --log=error --insert_op_conf=../aipp-configs/insert_op_d6.cfg # 删除除 om 模型外额外生成的文件 # Remove miscellaneous -- Gitee From 7bad098de7e350224b9e9b0dbdd43fa4463a16e5 Mon Sep 17 00:00:00 2001 From: ComistryMo <12393292+comistrymo@user.noreply.gitee.com> Date: Mon, 2 Dec 2024 07:00:20 +0000 Subject: [PATCH 30/51] update model_conversion_d0_previous_version.sh. Signed-off-by: ComistryMo <12393292+comistrymo@user.noreply.gitee.com> --- .../conversion-scripts/model_conversion_d0_previous_version.sh | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/contrib/EfficientDet/python/models/conversion-scripts/model_conversion_d0_previous_version.sh b/contrib/EfficientDet/python/models/conversion-scripts/model_conversion_d0_previous_version.sh index cd2ebc825..ab437d494 100644 --- a/contrib/EfficientDet/python/models/conversion-scripts/model_conversion_d0_previous_version.sh +++ b/contrib/EfficientDet/python/models/conversion-scripts/model_conversion_d0_previous_version.sh @@ -31,7 +31,7 @@ export ASCEND_OPP_PATH=${install_path}/opp # 执行,转换 EfficientDet-d0 模型 # Execute, transform EfficientDet-d0 model. -atc --model=../onnx-models/simplified-efficient-det-d0-mindxsdk-order.onnx --framework=5 --output=../efficient-det-d0-mindxsdk-order-previous-version --soc_version=Ascend310 --input_shape="input:1, 3, 512, 512" --input_format=NCHW --output_type=FP32 --out_nodes='Concat_10954:0;Sigmoid_13291:0' --log=error --insert_op_conf=../aipp-configs/insert_op_d0_previous_version.cfg +atc --model=../onnx-models/simplified-efficient-det-d0-mindxsdk-order.onnx --framework=5 --output=../efficient-det-d0-mindxsdk-order-previous-version --soc_version=Ascend310P3 --input_shape="input:1, 3, 512, 512" --input_format=NCHW --output_type=FP32 --out_nodes='Concat_10954:0;Sigmoid_13291:0' --log=error --insert_op_conf=../aipp-configs/insert_op_d0_previous_version.cfg # 删除除 om 模型外额外生成的文件 # Remove miscellaneous -- Gitee From 48e8b210f8f6084067283afb29a72e1a0eaa476f Mon Sep 17 00:00:00 2001 From: ComistryMo <12393292+comistrymo@user.noreply.gitee.com> Date: Mon, 2 Dec 2024 07:00:34 +0000 Subject: [PATCH 31/51] update contrib/EfficientDet/python/models/conversion-scripts/model_conversion_d1.sh. 
Signed-off-by: ComistryMo <12393292+comistrymo@user.noreply.gitee.com> --- .../python/models/conversion-scripts/model_conversion_d1.sh | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/contrib/EfficientDet/python/models/conversion-scripts/model_conversion_d1.sh b/contrib/EfficientDet/python/models/conversion-scripts/model_conversion_d1.sh index 417240128..0bdc29bec 100644 --- a/contrib/EfficientDet/python/models/conversion-scripts/model_conversion_d1.sh +++ b/contrib/EfficientDet/python/models/conversion-scripts/model_conversion_d1.sh @@ -31,7 +31,7 @@ export ASCEND_OPP_PATH=${install_path}/opp # 执行,转换 EfficientDet-d1 模型 # Execute, transform EfficientDet-d1 model. -atc --model=../onnx-models/simplified-efficient-det-d1-mindxsdk-order.onnx --framework=5 --output=../efficient-det-d1-mindxsdk-order --soc_version=Ascend310 --input_shape="input:1, 3, 640, 640" --input_format=NCHW --output_type=FP32 --out_nodes='Concat_14124:0;Sigmoid_16461:0' --log=error --insert_op_conf=../aipp-configs/insert_op_d1.cfg +atc --model=../onnx-models/simplified-efficient-det-d1-mindxsdk-order.onnx --framework=5 --output=../efficient-det-d1-mindxsdk-order --soc_version=Ascend310P3 --input_shape="input:1, 3, 640, 640" --input_format=NCHW --output_type=FP32 --out_nodes='Concat_14124:0;Sigmoid_16461:0' --log=error --insert_op_conf=../aipp-configs/insert_op_d1.cfg # 删除除 om 模型外额外生成的文件 # Remove miscellaneous -- Gitee From d5b3b860b55083d0acb6f8d7bfc1e7e541c3d825 Mon Sep 17 00:00:00 2001 From: ComistryMo <12393292+comistrymo@user.noreply.gitee.com> Date: Mon, 2 Dec 2024 07:00:49 +0000 Subject: [PATCH 32/51] update contrib/EfficientDet/python/models/conversion-scripts/model_conversion_d2.sh. Signed-off-by: ComistryMo <12393292+comistrymo@user.noreply.gitee.com> --- .../python/models/conversion-scripts/model_conversion_d2.sh | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/contrib/EfficientDet/python/models/conversion-scripts/model_conversion_d2.sh b/contrib/EfficientDet/python/models/conversion-scripts/model_conversion_d2.sh index afb341415..bd7b2f0f3 100644 --- a/contrib/EfficientDet/python/models/conversion-scripts/model_conversion_d2.sh +++ b/contrib/EfficientDet/python/models/conversion-scripts/model_conversion_d2.sh @@ -31,7 +31,7 @@ export ASCEND_OPP_PATH=${install_path}/opp # 执行,转换 EfficientDet-d2 模型 # Execute, transform EfficientDet-d2 model. -atc --model=../onnx-models/simplified-efficient-det-d2-mindxsdk-order.onnx --framework=5 --output=../efficient-det-d2-mindxsdk-order --soc_version=Ascend310 --input_shape="input:1, 3, 768, 768" --input_format=NCHW --output_type=FP32 --out_nodes='Concat_15356:0;Sigmoid_17693:0' --log=error --insert_op_conf=../aipp-configs/insert_op_d2.cfg +atc --model=../onnx-models/simplified-efficient-det-d2-mindxsdk-order.onnx --framework=5 --output=../efficient-det-d2-mindxsdk-order --soc_version=Ascend310P3 --input_shape="input:1, 3, 768, 768" --input_format=NCHW --output_type=FP32 --out_nodes='Concat_15356:0;Sigmoid_17693:0' --log=error --insert_op_conf=../aipp-configs/insert_op_d2.cfg # 删除除 om 模型外额外生成的文件 # Remove miscellaneous -- Gitee From aa4981ef64d3a735c7b46843dbc141ba34b5f984 Mon Sep 17 00:00:00 2001 From: ComistryMo <12393292+comistrymo@user.noreply.gitee.com> Date: Mon, 2 Dec 2024 07:01:03 +0000 Subject: [PATCH 33/51] update contrib/EfficientDet/python/models/conversion-scripts/model_conversion_d3.sh. 
Signed-off-by: ComistryMo <12393292+comistrymo@user.noreply.gitee.com> --- .../python/models/conversion-scripts/model_conversion_d3.sh | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/contrib/EfficientDet/python/models/conversion-scripts/model_conversion_d3.sh b/contrib/EfficientDet/python/models/conversion-scripts/model_conversion_d3.sh index 096c509e1..a7c7cc6b3 100644 --- a/contrib/EfficientDet/python/models/conversion-scripts/model_conversion_d3.sh +++ b/contrib/EfficientDet/python/models/conversion-scripts/model_conversion_d3.sh @@ -31,7 +31,7 @@ export ASCEND_OPP_PATH=${install_path}/opp # 执行,转换 EfficientDet-d3 模型 # Execute, transform EfficientDet-d3 model. -atc --model=../onnx-models/simplified-efficient-det-d3-mindxsdk-order.onnx --framework=5 --output=../efficient-det-d3-mindxsdk-order --soc_version=Ascend310 --input_shape="input:1, 3, 896, 896" --input_format=NCHW --output_type=FP32 --out_nodes='Concat_18003:0;Sigmoid_20900:0' --log=error --insert_op_conf=../aipp-configs/insert_op_d3.cfg +atc --model=../onnx-models/simplified-efficient-det-d3-mindxsdk-order.onnx --framework=5 --output=../efficient-det-d3-mindxsdk-order --soc_version=Ascend310P3 --input_shape="input:1, 3, 896, 896" --input_format=NCHW --output_type=FP32 --out_nodes='Concat_18003:0;Sigmoid_20900:0' --log=error --insert_op_conf=../aipp-configs/insert_op_d3.cfg # 删除除 om 模型外额外生成的文件 # Remove miscellaneous -- Gitee From ae7e0c7236233157ac0f89ef7e84e6c7a4d519ea Mon Sep 17 00:00:00 2001 From: ComistryMo <12393292+comistrymo@user.noreply.gitee.com> Date: Mon, 2 Dec 2024 07:01:17 +0000 Subject: [PATCH 34/51] update contrib/EfficientDet/python/models/conversion-scripts/model_conversion_d4.sh. Signed-off-by: ComistryMo <12393292+comistrymo@user.noreply.gitee.com> --- .../python/models/conversion-scripts/model_conversion_d4.sh | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/contrib/EfficientDet/python/models/conversion-scripts/model_conversion_d4.sh b/contrib/EfficientDet/python/models/conversion-scripts/model_conversion_d4.sh index 9d7759c28..6ccbd5e97 100644 --- a/contrib/EfficientDet/python/models/conversion-scripts/model_conversion_d4.sh +++ b/contrib/EfficientDet/python/models/conversion-scripts/model_conversion_d4.sh @@ -31,7 +31,7 @@ export ASCEND_OPP_PATH=${install_path}/opp # 执行,转换 EfficientDet-d4 模型 # Execute, transform EfficientDet-d4 model. -atc --model=../onnx-models/simplified-efficient-det-d4-mindxsdk-order.onnx --framework=5 --output=../efficient-det-d4-mindxsdk-order --soc_version=Ascend310 --input_shape="input:1, 3, 1024, 1024" --input_format=NCHW --output_type=FP32 --out_nodes='Concat_20945:0;Sigmoid_23842:0' --log=error --insert_op_conf=../aipp-configs/insert_op_d4.cfg +atc --model=../onnx-models/simplified-efficient-det-d4-mindxsdk-order.onnx --framework=5 --output=../efficient-det-d4-mindxsdk-order --soc_version=Ascend310P4 --input_shape="input:1, 3, 1024, 1024" --input_format=NCHW --output_type=FP32 --out_nodes='Concat_20945:0;Sigmoid_23842:0' --log=error --insert_op_conf=../aipp-configs/insert_op_d4.cfg # 删除除 om 模型外额外生成的文件 # Remove miscellaneous -- Gitee From 3a4d4aa547cf1e042324a96d597a2ff153e3cb80 Mon Sep 17 00:00:00 2001 From: ComistryMo <12393292+comistrymo@user.noreply.gitee.com> Date: Mon, 2 Dec 2024 07:01:32 +0000 Subject: [PATCH 35/51] update contrib/EfficientDet/python/models/conversion-scripts/model_conversion_d4.sh. 
Signed-off-by: ComistryMo <12393292+comistrymo@user.noreply.gitee.com> --- .../python/models/conversion-scripts/model_conversion_d4.sh | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/contrib/EfficientDet/python/models/conversion-scripts/model_conversion_d4.sh b/contrib/EfficientDet/python/models/conversion-scripts/model_conversion_d4.sh index 6ccbd5e97..fc0a3c7ed 100644 --- a/contrib/EfficientDet/python/models/conversion-scripts/model_conversion_d4.sh +++ b/contrib/EfficientDet/python/models/conversion-scripts/model_conversion_d4.sh @@ -31,7 +31,7 @@ export ASCEND_OPP_PATH=${install_path}/opp # 执行,转换 EfficientDet-d4 模型 # Execute, transform EfficientDet-d4 model. -atc --model=../onnx-models/simplified-efficient-det-d4-mindxsdk-order.onnx --framework=5 --output=../efficient-det-d4-mindxsdk-order --soc_version=Ascend310P4 --input_shape="input:1, 3, 1024, 1024" --input_format=NCHW --output_type=FP32 --out_nodes='Concat_20945:0;Sigmoid_23842:0' --log=error --insert_op_conf=../aipp-configs/insert_op_d4.cfg +atc --model=../onnx-models/simplified-efficient-det-d4-mindxsdk-order.onnx --framework=5 --output=../efficient-det-d4-mindxsdk-order --soc_version=Ascend310P3 --input_shape="input:1, 3, 1024, 1024" --input_format=NCHW --output_type=FP32 --out_nodes='Concat_20945:0;Sigmoid_23842:0' --log=error --insert_op_conf=../aipp-configs/insert_op_d4.cfg # 删除除 om 模型外额外生成的文件 # Remove miscellaneous -- Gitee From b4dfd1d373bf8c8918bd99f953df22d83e2af2f5 Mon Sep 17 00:00:00 2001 From: ComistryMo <12393292+comistrymo@user.noreply.gitee.com> Date: Mon, 2 Dec 2024 07:01:49 +0000 Subject: [PATCH 36/51] update contrib/EfficientDet/python/models/conversion-scripts/model_conversion_d5.sh. Signed-off-by: ComistryMo <12393292+comistrymo@user.noreply.gitee.com> --- .../python/models/conversion-scripts/model_conversion_d5.sh | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/contrib/EfficientDet/python/models/conversion-scripts/model_conversion_d5.sh b/contrib/EfficientDet/python/models/conversion-scripts/model_conversion_d5.sh index 02832a508..36d1d6d41 100644 --- a/contrib/EfficientDet/python/models/conversion-scripts/model_conversion_d5.sh +++ b/contrib/EfficientDet/python/models/conversion-scripts/model_conversion_d5.sh @@ -31,7 +31,7 @@ export ASCEND_OPP_PATH=${install_path}/opp # 执行,转换 EfficientDet-d5 模型 # Execute, transform EfficientDet-d5 model. -atc --model=../onnx-models/simplified-efficient-det-d5-mindxsdk-order.onnx --framework=5 --output=../efficient-det-d5-mindxsdk-order --soc_version=Ascend310 --input_shape="input:1, 3, 1280, 1280" --input_format=NCHW --output_type=FP32 --out_nodes='Concat_22883:0;Sigmoid_25780:0' --log=error --insert_op_conf=../aipp-configs/insert_op_d5.cfg +atc --model=../onnx-models/simplified-efficient-det-d5-mindxsdk-order.onnx --framework=5 --output=../efficient-det-d5-mindxsdk-order --soc_version=Ascend310P3 --input_shape="input:1, 3, 1280, 1280" --input_format=NCHW --output_type=FP32 --out_nodes='Concat_22883:0;Sigmoid_25780:0' --log=error --insert_op_conf=../aipp-configs/insert_op_d5.cfg # 删除除 om 模型外额外生成的文件 # Remove miscellaneous -- Gitee From 0270865bfd688d6cfb28539ef4812490e12b1afd Mon Sep 17 00:00:00 2001 From: ComistryMo <12393292+comistrymo@user.noreply.gitee.com> Date: Mon, 2 Dec 2024 07:04:21 +0000 Subject: [PATCH 37/51] update contrib/EfficientDet/python/pipeline/EfficientDet-d6.pipeline. 
Signed-off-by: ComistryMo <12393292+comistrymo@user.noreply.gitee.com> --- contrib/EfficientDet/python/pipeline/EfficientDet-d6.pipeline | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/contrib/EfficientDet/python/pipeline/EfficientDet-d6.pipeline b/contrib/EfficientDet/python/pipeline/EfficientDet-d6.pipeline index c6f267c50..631ce6f08 100644 --- a/contrib/EfficientDet/python/pipeline/EfficientDet-d6.pipeline +++ b/contrib/EfficientDet/python/pipeline/EfficientDet-d6.pipeline @@ -43,7 +43,7 @@ "dataSource": "mxpi_tensorinfer0", "postProcessConfigPath": "models/efficient-det.cfg", "labelPath": "models/coco.names", - "postProcessLibPath": "postprocess/build/libefficientdetpostprocess.so" + "postProcessLibPath": "../postprocess/build/libefficientdetpostprocess.so" }, "factory": "mxpi_objectpostprocessor", "next": "appsink0" -- Gitee From 5d00492e02e1b2c56ca64236b0eb488b1435f1d9 Mon Sep 17 00:00:00 2001 From: ComistryMo <12393292+comistrymo@user.noreply.gitee.com> Date: Mon, 2 Dec 2024 07:04:37 +0000 Subject: [PATCH 38/51] update contrib/EfficientDet/python/pipeline/EfficientDet-d5.pipeline. Signed-off-by: ComistryMo <12393292+comistrymo@user.noreply.gitee.com> --- contrib/EfficientDet/python/pipeline/EfficientDet-d5.pipeline | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/contrib/EfficientDet/python/pipeline/EfficientDet-d5.pipeline b/contrib/EfficientDet/python/pipeline/EfficientDet-d5.pipeline index 5ac1a0d6e..9ff10abfd 100644 --- a/contrib/EfficientDet/python/pipeline/EfficientDet-d5.pipeline +++ b/contrib/EfficientDet/python/pipeline/EfficientDet-d5.pipeline @@ -43,7 +43,7 @@ "dataSource": "mxpi_tensorinfer0", "postProcessConfigPath": "models/efficient-det.cfg", "labelPath": "models/coco.names", - "postProcessLibPath": "postprocess/build/libefficientdetpostprocess.so" + "postProcessLibPath": "../postprocess/build/libefficientdetpostprocess.so" }, "factory": "mxpi_objectpostprocessor", "next": "appsink0" -- Gitee From a7b9c0c01b5ec486cd1f087247021c2ffc9f68bc Mon Sep 17 00:00:00 2001 From: ComistryMo <12393292+comistrymo@user.noreply.gitee.com> Date: Mon, 2 Dec 2024 07:04:50 +0000 Subject: [PATCH 39/51] update contrib/EfficientDet/python/pipeline/EfficientDet-d4.pipeline. Signed-off-by: ComistryMo <12393292+comistrymo@user.noreply.gitee.com> --- contrib/EfficientDet/python/pipeline/EfficientDet-d4.pipeline | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/contrib/EfficientDet/python/pipeline/EfficientDet-d4.pipeline b/contrib/EfficientDet/python/pipeline/EfficientDet-d4.pipeline index b024f1244..80c1ac020 100644 --- a/contrib/EfficientDet/python/pipeline/EfficientDet-d4.pipeline +++ b/contrib/EfficientDet/python/pipeline/EfficientDet-d4.pipeline @@ -43,7 +43,7 @@ "dataSource": "mxpi_tensorinfer0", "postProcessConfigPath": "models/efficient-det.cfg", "labelPath": "models/coco.names", - "postProcessLibPath": "postprocess/build/libefficientdetpostprocess.so" + "postProcessLibPath": "../postprocess/build/libefficientdetpostprocess.so" }, "factory": "mxpi_objectpostprocessor", "next": "appsink0" -- Gitee From e4b4a79db3246183afd22d3b1912810aba452595 Mon Sep 17 00:00:00 2001 From: ComistryMo <12393292+comistrymo@user.noreply.gitee.com> Date: Mon, 2 Dec 2024 07:05:03 +0000 Subject: [PATCH 40/51] update contrib/EfficientDet/python/pipeline/EfficientDet-d3.pipeline. 
Signed-off-by: ComistryMo <12393292+comistrymo@user.noreply.gitee.com> --- contrib/EfficientDet/python/pipeline/EfficientDet-d3.pipeline | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/contrib/EfficientDet/python/pipeline/EfficientDet-d3.pipeline b/contrib/EfficientDet/python/pipeline/EfficientDet-d3.pipeline index 73064227c..750ee6398 100644 --- a/contrib/EfficientDet/python/pipeline/EfficientDet-d3.pipeline +++ b/contrib/EfficientDet/python/pipeline/EfficientDet-d3.pipeline @@ -43,7 +43,7 @@ "dataSource": "mxpi_tensorinfer0", "postProcessConfigPath": "models/efficient-det.cfg", "labelPath": "models/coco.names", - "postProcessLibPath": "postprocess/build/libefficientdetpostprocess.so" + "postProcessLibPath": "../postprocess/build/libefficientdetpostprocess.so" }, "factory": "mxpi_objectpostprocessor", "next": "appsink0" -- Gitee From 67d1a58dab242d14c636fb453e7e78828b3d8b0a Mon Sep 17 00:00:00 2001 From: ComistryMo <12393292+comistrymo@user.noreply.gitee.com> Date: Mon, 2 Dec 2024 07:05:15 +0000 Subject: [PATCH 41/51] update contrib/EfficientDet/python/pipeline/EfficientDet-d2.pipeline. Signed-off-by: ComistryMo <12393292+comistrymo@user.noreply.gitee.com> --- contrib/EfficientDet/python/pipeline/EfficientDet-d2.pipeline | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/contrib/EfficientDet/python/pipeline/EfficientDet-d2.pipeline b/contrib/EfficientDet/python/pipeline/EfficientDet-d2.pipeline index dbe86eb7c..61dbcdefc 100644 --- a/contrib/EfficientDet/python/pipeline/EfficientDet-d2.pipeline +++ b/contrib/EfficientDet/python/pipeline/EfficientDet-d2.pipeline @@ -43,7 +43,7 @@ "dataSource": "mxpi_tensorinfer0", "postProcessConfigPath": "models/efficient-det.cfg", "labelPath": "models/coco.names", - "postProcessLibPath": "postprocess/build/libefficientdetpostprocess.so" + "postProcessLibPath": "../postprocess/build/libefficientdetpostprocess.so" }, "factory": "mxpi_objectpostprocessor", "next": "appsink0" -- Gitee From e7b59cdfcd093b9ffa234bfcdbdeee206a8654ee Mon Sep 17 00:00:00 2001 From: ComistryMo <12393292+comistrymo@user.noreply.gitee.com> Date: Mon, 2 Dec 2024 07:05:28 +0000 Subject: [PATCH 42/51] update contrib/EfficientDet/python/pipeline/EfficientDet-d1.pipeline. Signed-off-by: ComistryMo <12393292+comistrymo@user.noreply.gitee.com> --- contrib/EfficientDet/python/pipeline/EfficientDet-d1.pipeline | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/contrib/EfficientDet/python/pipeline/EfficientDet-d1.pipeline b/contrib/EfficientDet/python/pipeline/EfficientDet-d1.pipeline index 9f06101d1..00cfb502e 100644 --- a/contrib/EfficientDet/python/pipeline/EfficientDet-d1.pipeline +++ b/contrib/EfficientDet/python/pipeline/EfficientDet-d1.pipeline @@ -43,7 +43,7 @@ "dataSource": "mxpi_tensorinfer0", "postProcessConfigPath": "models/efficient-det.cfg", "labelPath": "models/coco.names", - "postProcessLibPath": "postprocess/build/libefficientdetpostprocess.so" + "postProcessLibPath": "../postprocess/build/libefficientdetpostprocess.so" }, "factory": "mxpi_objectpostprocessor", "next": "appsink0" -- Gitee From b0d03b2f631f835d752a7e2e24adaf91224add59 Mon Sep 17 00:00:00 2001 From: ComistryMo <12393292+comistrymo@user.noreply.gitee.com> Date: Mon, 2 Dec 2024 07:05:42 +0000 Subject: [PATCH 43/51] update contrib/EfficientDet/python/pipeline/EfficientDet-d0.pipeline. 
Signed-off-by: ComistryMo <12393292+comistrymo@user.noreply.gitee.com> --- contrib/EfficientDet/python/pipeline/EfficientDet-d0.pipeline | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/contrib/EfficientDet/python/pipeline/EfficientDet-d0.pipeline b/contrib/EfficientDet/python/pipeline/EfficientDet-d0.pipeline index e48fc680e..eb2ae42f6 100644 --- a/contrib/EfficientDet/python/pipeline/EfficientDet-d0.pipeline +++ b/contrib/EfficientDet/python/pipeline/EfficientDet-d0.pipeline @@ -43,7 +43,7 @@ "dataSource": "mxpi_tensorinfer0", "postProcessConfigPath": "models/efficient-det-eval.cfg", "labelPath": "models/coco.names", - "postProcessLibPath": "postprocess/build/libefficientdetpostprocess.so" + "postProcessLibPath": "../postprocess/build/libefficientdetpostprocess.so" }, "factory": "mxpi_objectpostprocessor", "next": "appsink0" -- Gitee From 577a9cf8fac01d009256ccc1c8d6366b4e159906 Mon Sep 17 00:00:00 2001 From: ComistryMo <12393292+comistrymo@user.noreply.gitee.com> Date: Mon, 2 Dec 2024 07:05:56 +0000 Subject: [PATCH 44/51] update contrib/EfficientDet/python/pipeline/EfficientDet-d0-previous-version.pipeline. Signed-off-by: ComistryMo <12393292+comistrymo@user.noreply.gitee.com> --- .../python/pipeline/EfficientDet-d0-previous-version.pipeline | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/contrib/EfficientDet/python/pipeline/EfficientDet-d0-previous-version.pipeline b/contrib/EfficientDet/python/pipeline/EfficientDet-d0-previous-version.pipeline index e42934ea1..977327304 100644 --- a/contrib/EfficientDet/python/pipeline/EfficientDet-d0-previous-version.pipeline +++ b/contrib/EfficientDet/python/pipeline/EfficientDet-d0-previous-version.pipeline @@ -41,7 +41,7 @@ "dataSource": "mxpi_tensorinfer0", "postProcessConfigPath": "models/efficient-det-eval.cfg", "labelPath": "models/coco.names", - "postProcessLibPath": "postprocess/build/libefficientdetpostprocess.so" + "postProcessLibPath": "../postprocess/build/libefficientdetpostprocess.so" }, "factory": "mxpi_objectpostprocessor", "next": "appsink0" -- Gitee From 5a7959405dd1af01f8a632d07bc5860d5974ee9b Mon Sep 17 00:00:00 2001 From: ComistryMo <12393292+comistrymo@user.noreply.gitee.com> Date: Mon, 2 Dec 2024 07:08:20 +0000 Subject: [PATCH 45/51] update contrib/EfficientDet/README.md. Signed-off-by: ComistryMo <12393292+comistrymo@user.noreply.gitee.com> --- contrib/EfficientDet/README.md | 6 ++---- 1 file changed, 2 insertions(+), 4 deletions(-) diff --git a/contrib/EfficientDet/README.md b/contrib/EfficientDet/README.md index 62f9dfe21..230386224 100644 --- a/contrib/EfficientDet/README.md +++ b/contrib/EfficientDet/README.md @@ -84,13 +84,11 @@ EfficientDet 目标检测后处理插件基于 MindXSDK 开发,对图片中的 ```bash -# 执行环境变量脚本使环境变量生效 . ${ascend-toolkit-path}/set_env.sh . ${mxVision-path}/set_env.sh -# mxVision: mxVision安装路径 -# ascend-toolkit-path: CANN安装路径 ``` - +mxVision: mxVision安装路径 +ascend-toolkit-path: CANN安装路径 ## 3. 准备模型 -- Gitee From 6e18fa5f0380046e539697463ef140308d38294b Mon Sep 17 00:00:00 2001 From: ComistryMo <12393292+comistrymo@user.noreply.gitee.com> Date: Mon, 2 Dec 2024 07:08:40 +0000 Subject: [PATCH 46/51] update contrib/EfficientDet/README.md. 
Signed-off-by: ComistryMo <12393292+comistrymo@user.noreply.gitee.com> --- contrib/EfficientDet/README.md | 1 + 1 file changed, 1 insertion(+) diff --git a/contrib/EfficientDet/README.md b/contrib/EfficientDet/README.md index 230386224..3981052da 100644 --- a/contrib/EfficientDet/README.md +++ b/contrib/EfficientDet/README.md @@ -88,6 +88,7 @@ EfficientDet 目标检测后处理插件基于 MindXSDK 开发,对图片中的 . ${mxVision-path}/set_env.sh ``` mxVision: mxVision安装路径 + ascend-toolkit-path: CANN安装路径 -- Gitee From 79d3741d6e79c8ff61eb7026d2e3175c1d29bb9e Mon Sep 17 00:00:00 2001 From: ComistryMo <12393292+comistrymo@user.noreply.gitee.com> Date: Mon, 2 Dec 2024 07:09:16 +0000 Subject: [PATCH 47/51] update contrib/EfficientDet/README.md. Signed-off-by: ComistryMo <12393292+comistrymo@user.noreply.gitee.com> --- contrib/EfficientDet/README.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/contrib/EfficientDet/README.md b/contrib/EfficientDet/README.md index 3981052da..9aa1ac6f9 100644 --- a/contrib/EfficientDet/README.md +++ b/contrib/EfficientDet/README.md @@ -78,7 +78,7 @@ EfficientDet 目标检测后处理插件基于 MindXSDK 开发,对图片中的 ``` -## 2 设置环境变量 +## 2. 设置环境变量 在执行后续步骤前,需要设置环境变量: -- Gitee From edc3c267f46417a6d8b3bddc5c7f2d5a5004bdf3 Mon Sep 17 00:00:00 2001 From: ComistryMo <12393292+comistrymo@user.noreply.gitee.com> Date: Tue, 3 Dec 2024 08:09:38 +0000 Subject: [PATCH 48/51] update contrib/FCOS/README.md. Signed-off-by: ComistryMo <12393292+comistrymo@user.noreply.gitee.com> --- contrib/FCOS/README.md | 202 +++++++---------------------------------- 1 file changed, 33 insertions(+), 169 deletions(-) diff --git a/contrib/FCOS/README.md b/contrib/FCOS/README.md index c88789d5d..98c2b3129 100644 --- a/contrib/FCOS/README.md +++ b/contrib/FCOS/README.md @@ -2,6 +2,8 @@ ## 1 介绍 +### 1.1 简介 + 本开发项目演示FCOS模型实现目标检测。本系统基于mxVision SDK进行开发,以昇腾Atlas310卡为主要的硬件平台,主要实现目标检测。待检测的图片中物体不能被遮挡太严重,并且物体要完全出现在图片中。图片亮度不能过低。输入一张图片,最后会输出图片中能检测到的物体。项目主要流程: 1.环境搭建; @@ -9,36 +11,35 @@ 3.生成后处理插件; 4.进行精度、性能对比。 -### 1.1支持的产品 +### 1.2 支持的产品 -本产品以昇腾310(推理)卡为硬件平台。 +本项目以昇腾x86_64 Atlas 300l (型号3010)和arm Atlas 300l (型号3000)为主要的硬件平台。 -### 1.2支持的版本 +### 1.3 支持的版本 -该项目支持的SDK版本为5.0.0,CANN版本为7.0.0。 +| MxVision版本 | CANN版本 | Driver/Firmware版本 | +| --------- | ------------------ | -------------- | +| 5.0.0 | 7.0.0 | 23.0.0 | -### 1.3软件方案介绍 +### 1.4 三方依赖 -基于MindXSDK的FCOS目标检测的业务流程为: +项目运行过程中涉及到的第三方软件依赖如下表所示: -1. 将待检测的图片放到相应的文件夹下。 -2. 首先使用mmcv库对图片进行前处理(改变图片大小、归一化、补边操作)。‘ -3. 将图片转换成为二进制的numpy数组,传入pipeline中。 -4. 放缩后的图片输入模型推理插件mxpi_tensorinfer进行处理; -5. 将经过模型推理输出时候的张量数据流输入到mxpi_objectpostprocessor中,对FCOS目标检测模型推理输出的张量进行后处理; -6. 
处理完毕之后将数据传入mxpi_dataserialize0插件中,将stream结果组装成json字符串输出。 +| 软件名称 | 说明 | 使用教程 | +| ----------- | -------------------- | --------------------------------------------------------- | +| pycocotools | 用于实现代码测评 | [点击打开链接](https://cocodataset.org/) | +| mmdetection | 用于实现模型精度评估 | [点击打开链接](https://github.com/open-mmlab/mmdetection) | +| mmcv | 用于实现图片前处理 | [点击打开链接](https://github.com/open-mmlab/mmcv) | -下表为系统方案各个子系统功能的描述: +.安装python COCO测评工具,mmcv和mmdetection。执行命令: -| 序号 | 子系统 | 功能描述 | -| ---- | ---------- | ---------------------------------------------------------------------------------------------------------------------- | -| 1 | 图片输入 | 传入图片,修改图片的大小和格式为符合模型要求的格式 | -| 2 | 图像前处理 | 对图片进行改变大小、补边、归一化等操作。 | -| 3 | 模型推理 | 将已经处理好的图片传入到mxpi_tensorinfer中,使用目标检测模型FCOS进行推理,得到推理的张量数据流 | -| 4 | 模型后处理 | 将模型推理得到的张量数据流传入mxpi_objectpostprocessor中进行张量后处理。对模型输出的目标框进行去重,排序和筛选等工作。 | -| 5 | 组装字符串 | stream结果组装成json字符串输出。 | +``` +pip3 install pycocotools +pip3 install mmcv-full +pip3 install mmdet +``` -### 1.4代码目录结构与说明 +### 1.5 代码目录结构说明 本项目名为FCOS目标检测,项目的目录如下所示: @@ -63,25 +64,7 @@ |_ main.py ``` -### 1.5技术实现流程图 - -本项目实现对输入的图片进行目标检测,整体流程如下: - -![avatar](./image/image1.png) - -### 1.6特性以及适用场景 -本项目是根据COCO数据集进行训练,仅仅适合COCO官方数据集中的80类物体进行识别。在此这八十类物体不一一列出。 -## 2环境依赖 - -推荐系统为ubuntu 18.04,环境软件和版本如下: - -| 软件名称 | 版本 | 说明 | 获取方式 | -| ------------------- | ------ | ----------------------------- | ----------------------------------------------------------------- | -| MindX SDK | 5.0.0 | mxVision软件包 | [点击打开链接](https://www.hiascend.com/software/Mindx-sdk) | -| ubuntu | 18.04 | 操作系统 | 请上ubuntu官网获取 | -| Ascend-CANN-toolkit | 7.0.0 | Ascend-cann-toolkit开发套件包 | [点击打开链接](https://www.hiascend.com/software/cann/commercial) | -| mmdetection | 2.25.0 | 用于评估准确度 | 请上mmdetection官网 | - +## 2 设置环境变量 在项目开始运行前需要设置环境变量: ``` @@ -89,39 +72,21 @@ . ${SDK安装路径}/mxVision/set_env.sh ``` -## 3 软件依赖 - -项目运行过程中涉及到的第三方软件依赖如下表所示: - -| 软件名称 | 说明 | 使用教程 | -| ----------- | -------------------- | --------------------------------------------------------- | -| pycocotools | 用于实现代码测评 | [点击打开链接](https://cocodataset.org/) | -| mmdetection | 用于实现模型精度评估 | [点击打开链接](https://github.com/open-mmlab/mmdetection) | -| mmcv | 用于实现图片前处理 | [点击打开链接](https://github.com/open-mmlab/mmcv) | - -.安装python COCO测评工具,mmcv和mmdetection。执行命令: - -``` -pip3 install pycocotools -pip3 install mmcv-full -pip3 install mmdet -``` - -## 4 模型转换 +## 3 准备模型 本项目使用的模型是FCOS目标检测模型这个模型是一个无anchor检测器。FCOS直接把预测特征图上的每个位置$(x,y)$当作训练样本,若这个位置在某个ground truth box的内部,则视为正样本,该位置的类别标签$c$对应这个box的类别,反之则视为负样本。这个网络的输出为目标框的左上角坐标、右下角坐标、类别和置信度。本项目的onnx模型可以直接[下载](https://mindx.sdk.obs.cn-north-4.myhuaweicloud.com/mindxsdk-referenceapps%20/contrib/Fcos/ATC%20Fcos.zip)。下载后,里面自带的om模型是可以直接使用的,或者自行使用ATC工具将onnx模型转换成为om模型,模型转换工具的使用说明参考[链接](https://gitee.com/ascend/docs-openmind/blob/master/guide/mindx/sdk/tutorials/%E5%8F%82%E8%80%83%E8%B5%84%E6%96%99.md)。 模型转换步骤如下: -1.从下载链接处下载onnx模型至FCOS/models文件夹下。 +**步骤1:**从下载链接处下载onnx模型至FCOS/models文件夹下。 -模型转换语句如下: +**步骤2:**模型转换语句如下: ``` atc --model=fcos.onnx --framework=5 --soc_version=Ascend310 --input_format=NCHW --input_shape="input:1,3,800,1333" --output=fcos_bs1 --precision_mode=allow_fp32_to_fp16 ``` -执行完该命令之后,会在models文件夹下生成.om模型,并且转换成功之后会在终端输出: +**步骤3:**执行完该命令之后,会在models文件夹下生成.om模型,并且转换成功之后会在终端输出: ``` ATC start working now, please wait for a moment. @@ -130,15 +95,15 @@ ATC run success, welcome to the next use. 
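上面 1.4 节保留了 mmcv 作为图片前处理依赖,业务流程中也提到需要先对图片做改变大小、归一化与补边,再送入 pipeline。下面是一段假设性的前处理示意代码(并非 main.py 原文;均值/方差取 mmdetection 中 FCOS caffe 风格配置的常用值,同样属于示例假设),输出与模型输入 1 x 3 x 800 x 1333 对齐:

```python
# 假设性示意:FCOS 图片前处理(保持宽高比缩放 -> 归一化 -> 补边),非 main.py 原文
import numpy as np
import mmcv

def preprocess(image_path):
    img = mmcv.imread(image_path)                          # BGR, HWC
    img, scale = mmcv.imrescale(img, (1333, 800), return_scale=True)
    mean = np.array([102.9801, 115.9465, 122.7717], dtype=np.float32)
    std = np.array([1.0, 1.0, 1.0], dtype=np.float32)
    img = mmcv.imnormalize(img, mean, std, to_rgb=False)   # 归一化
    img = mmcv.impad(img, shape=(800, 1333), pad_val=0)    # 补边到固定尺寸
    tensor = img.transpose(2, 0, 1)[None].astype(np.float32)  # NCHW
    return tensor, scale

tensor, scale = preprocess('test.jpg')   # 'test.jpg' 为示例图片路径
print(tensor.shape)                      # (1, 3, 800, 1333)
```

缩放比例 scale 需要保留,用于之后把推理得到的检测框坐标映射回原图。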
-## 5准备 +## 4 编译与运行 -### 步骤1 +**步骤1:** 准备一张待检测图片,并上传到FCOS文件夹下。然后修改main.py文件里面的图片路径为待检测的图片路径。并且从https://github.com/pjreddie/darknet/blob/master/data/coco.names 里面下载coco.names文件,并且将这个文件存放到models文件夹下。并且修改main.py里IMAGENAME为图片的路径: ```python IMAGENAME = '{image path}' // 120行 ``` -### 步骤2 +**步骤2:** 进入FCOS/plugin/FCOSPostprocess目录,在该目录下运行下列命令: @@ -147,7 +112,7 @@ bash build.sh ``` 这个后处理插件即可以使用。 -## 6编译与运行 +**步骤3:** 在FCOS目录下执行命令: @@ -157,68 +122,9 @@ python3 main.py 最后生成的结果会在FCOS文件夹目录下result.jpg图片中。 -## 7精度测试 - -1.下载COCO VAL 2017[数据集](http://images.cocodataset.org/zips/val2017.zip -)和[标注文件](http://images.cocodataset.org/annotations/annotations_trainval2017.zip -)。在FCOS目录下创建dataset目录,将数据集压缩文件和标注数据压缩文件都解压至 `FCOS/dataset` 目录下。再次创建一个文件夹名为binresult,在FCOS目录下。确保解压后的FCOS目录结构为: -``` -|- models -| |- fcos.onnx //onnx模型 -| |_ Fcos_tf_bs.cfg -|- pipeline -| |_ FCOSdetection.pipeline -|- plugin -| |_FCOSPostprocess -| |- CMakeLists.txt -| |- FCOSDetectionPostProcess.cpp -| |- FCOSDetectionPostProcess.h -| |_ build.sh -|- binresult -|- dataset -| |- annotations -| | |_ instances_val2017.json -| |_ val2017 -| |-000000581615.jpg -| |-000000581781.jpg -| |_other-images -|- image -| |- image1.png -| |_ image2.png -|- build.sh -|- colorlist.txt -|- evaluate.py -|_ main.py -``` - -2.再修改后处理插件cpp文件中的: -```cpp - //107行 - if (*(beginRes + CENTERPOINT) < THRESHOLD_) { - continue; - } - //123行 - MxBase::NmsSort(objectInfo, RATE); -``` -将上面的代码全部注释掉,再重新生成一次插件。 - -3.执行命令: - -``` -python3 evaluate.py -``` - -命令执行完毕之后,会在binresult文件夹,生成模型输出的目标框、置信度等信息。 - -4.最后在终端输出COCO格式的测评结果,输出结果如下: - -![avatar](./image/image2.png) - -上图中第一行可以看到,这个模型在COCO VAL 2017数据集上,IOU阈值为0.50:0.05:0.95时的精确度为0.347。原模型的推理精度也为0.347这个数据与原模型的推理精度误差范围在0.01内。 - -## 8常见问题 -### 8.1 模型路径配置问题: +## 5常见问题 +### 5.1 模型路径配置问题: 问题描述: 检测过程中用到的模型以及模型后处理插件需配置路径属性。 @@ -244,46 +150,4 @@ python3 evaluate.py "factory": "mxpi_objectpostprocessor", "next": "mxpi_dataserialize0" }, -``` -### 8.2 精度测试脚本运行时, TypeError报错问题: -问题描述: -运行精度测试脚本过程中, 出现如下类似报错: -``` -********* -File "***/pycocotools/cocoeval.py", line 507 in setDetParams - self.iouThrs = np.linspace(.5, 0.95, np.round((0.95 - .5) / .05) + 1, endpoint=True) -******** -TypeError: 'numpy.float64' object cannot be interpreted as an integer -``` - -解决措施: -打开pycocotools库下的cocoeval.py文件, 将文件中的代码: -``` -self.iouThrs = np.linspace(.5, 0.95, np.round((0.95 - .5) / .05) + 1, endpoint=True) -self.recThrs = np.linspace(.0, 1.00, np.round((1.00 - .0) / .01) + 1, endpoint=True) -``` -修改为: -``` -self.iouThrs = np.linspace(.5, 0.95, 10, endpoint=True) -self.recThrs = np.linspace(.0, 1.00, 101, endpoint=True) -``` - -### 8.3 精度测试脚本运行时, NameError报错问题: -问题描述: -运行精度测试脚本过程中, 出现如下类似报错: -``` -********* -File "***/pycocotools/coco.py", line 308 in loadRes - if type(resFile) == str or type(resFile) == unicode: -NameError: name 'unicode' is not defined -``` - -解决措施: -打开pycocotools库下的coco.py文件, 将文件中的代码: -``` -if type(resFile) == str or type(resFile) == unicode: -``` -修改为: -``` -if type(resFile) == str: ``` \ No newline at end of file -- Gitee From 57f7403821d32fe18290efe8dbfe994e8bd4a804 Mon Sep 17 00:00:00 2001 From: ComistryMo <12393292+comistrymo@user.noreply.gitee.com> Date: Tue, 3 Dec 2024 08:15:48 +0000 Subject: [PATCH 49/51] update contrib/FCOS/README.md. 
Signed-off-by: ComistryMo <12393292+comistrymo@user.noreply.gitee.com> --- contrib/FCOS/README.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/contrib/FCOS/README.md b/contrib/FCOS/README.md index 98c2b3129..cf787a165 100644 --- a/contrib/FCOS/README.md +++ b/contrib/FCOS/README.md @@ -31,7 +31,7 @@ | mmdetection | 用于实现模型精度评估 | [点击打开链接](https://github.com/open-mmlab/mmdetection) | | mmcv | 用于实现图片前处理 | [点击打开链接](https://github.com/open-mmlab/mmcv) | -.安装python COCO测评工具,mmcv和mmdetection。执行命令: +安装python COCO测评工具,mmcv和mmdetection。执行命令: ``` pip3 install pycocotools -- Gitee From eb92ecee6650043c0e0a779b88aa7130dad5794c Mon Sep 17 00:00:00 2001 From: ComistryMo <12393292+comistrymo@user.noreply.gitee.com> Date: Tue, 3 Dec 2024 08:16:29 +0000 Subject: [PATCH 50/51] update contrib/FCOS/README.md. Signed-off-by: ComistryMo <12393292+comistrymo@user.noreply.gitee.com> --- contrib/FCOS/README.md | 9 ++++++--- 1 file changed, 6 insertions(+), 3 deletions(-) diff --git a/contrib/FCOS/README.md b/contrib/FCOS/README.md index cf787a165..ebbbba0d2 100644 --- a/contrib/FCOS/README.md +++ b/contrib/FCOS/README.md @@ -78,15 +78,18 @@ pip3 install mmdet 模型转换步骤如下: -**步骤1:**从下载链接处下载onnx模型至FCOS/models文件夹下。 +**步骤1:** +从下载链接处下载onnx模型至FCOS/models文件夹下。 -**步骤2:**模型转换语句如下: +**步骤2:** +模型转换语句如下: ``` atc --model=fcos.onnx --framework=5 --soc_version=Ascend310 --input_format=NCHW --input_shape="input:1,3,800,1333" --output=fcos_bs1 --precision_mode=allow_fp32_to_fp16 ``` -**步骤3:**执行完该命令之后,会在models文件夹下生成.om模型,并且转换成功之后会在终端输出: +**步骤3:** +执行完该命令之后,会在models文件夹下生成.om模型,并且转换成功之后会在终端输出: ``` ATC start working now, please wait for a moment. -- Gitee From b64983efb420fbd401aaed329c86cb7464aa16f4 Mon Sep 17 00:00:00 2001 From: ComistryMo <12393292+comistrymo@user.noreply.gitee.com> Date: Tue, 3 Dec 2024 08:17:32 +0000 Subject: [PATCH 51/51] update contrib/FCOS/README.md. Signed-off-by: ComistryMo <12393292+comistrymo@user.noreply.gitee.com> --- contrib/FCOS/README.md | 4 +++- 1 file changed, 3 insertions(+), 1 deletion(-) diff --git a/contrib/FCOS/README.md b/contrib/FCOS/README.md index ebbbba0d2..25846bb18 100644 --- a/contrib/FCOS/README.md +++ b/contrib/FCOS/README.md @@ -123,10 +123,12 @@ bash build.sh python3 main.py ``` +**步骤4:查看结果** + 最后生成的结果会在FCOS文件夹目录下result.jpg图片中。 -## 5常见问题 +## 5 常见问题 ### 5.1 模型路径配置问题: 问题描述: 检测过程中用到的模型以及模型后处理插件需配置路径属性。 -- Gitee
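作为上文 FCOS 编译与运行步骤的补充,下面给出一段假设性的推理骨架(并非仓库中 main.py 的原文;stream 名称 b'detection'、元件名 appsrc0 以及取结果的插件名 mxpi_objectpostprocessor0 均为示例假设,需以 pipeline/FCOSdetection.pipeline 中的实际定义为准),演示如何用 StreamManagerApi 将前处理得到的张量送入 pipeline 并取回检测结果:

```python
# 假设性骨架:将 NCHW float32 张量送入 pipeline 并取回检测结果,非 main.py 原文
import numpy as np
import MxpiDataType_pb2 as MxpiDataType
from StreamManagerApi import (StreamManagerApi, MxProtobufIn,
                              InProtobufVector, StringVector)

stream_manager = StreamManagerApi()
stream_manager.InitManager()
with open("pipeline/FCOSdetection.pipeline", 'rb') as f:
    stream_manager.CreateMultipleStreams(f.read())

tensor = np.zeros((1, 3, 800, 1333), dtype=np.float32)   # 用前处理结果替换

# 将张量封装为 MxpiTensorPackageList 协议
package_list = MxpiDataType.MxpiTensorPackageList()
tensor_vec = package_list.tensorPackageVec.add().tensorVec.add()
tensor_vec.deviceId = 0
tensor_vec.memType = 0
tensor_vec.tensorShape.extend(tensor.shape)
tensor_vec.dataStr = tensor.tobytes()
tensor_vec.tensorDataSize = len(tensor_vec.dataStr)

proto = MxProtobufIn()
proto.key = b'appsrc0'
proto.type = b'MxTools.MxpiTensorPackageList'
proto.protobuf = package_list.SerializeToString()
proto_vec = InProtobufVector()
proto_vec.push_back(proto)

stream_name = b'detection'
stream_manager.SendProtobuf(stream_name, 0, proto_vec)

key_vec = StringVector()
key_vec.push_back(b'mxpi_objectpostprocessor0')
results = stream_manager.GetProtobuf(stream_name, 0, key_vec)
if results.size() > 0 and results[0].errorCode == 0:
    object_list = MxpiDataType.MxpiObjectList()
    object_list.ParseFromString(results[0].messageBuf)
    for obj in object_list.objectVec:
        print(obj.classVec[0].className, obj.classVec[0].confidence)
stream_manager.DestroyAllStreams()
```

这里选用 SendProtobuf/GetProtobuf 接口,是为了直接发送张量并取回结构化的检测框;若 pipeline 以图片字节流作为输入,也可以改用 SendData 配合 MxDataInput。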