diff --git a/README.zh.md b/README.zh.md
index 2e4e5bf3478efd2c53d673b319305ce40387639e..54137177f46a43627d9c60365e71307abb615b2c 100644
--- a/README.zh.md
+++ b/README.zh.md
@@ -1,184 +1,27 @@
中文|[英文](README.md)
# MindX SDK Reference Apps
-[MindX SDK](https://www.hiascend.com/software/mindx-sdk) 是华为推出的软件开发套件(SDK),提供极简易用、高性能的API和工具,助力昇腾AI处理器赋能各应用场景。
+[MindX SDK](https://www.hiascend.com/software/mindx-sdk) 是华为推出的软件开发套件(SDK),提供极简易用、高性能的API和工具,包含mxVision(视觉分析)、mxRAG(知识增强)、mxIndex(特征检索)、mxRec(搜索推荐)等多个SDK,助力昇腾AI处理器赋能各应用场景。
-mxSdkReferenceApps是基于MindX SDK开发的参考样例。
+为助力开发者快速掌握mxVision和mxRAG接口的使用、快速实现业务功能,本代码仓(mxSdkReferenceApps)提供了基于mxVision和mxRAG开发的各类参考样例。用户可以根据自身需求选择相应案例代码。
-## 版本说明
-**请在[SDK产品选择页面](https://www.hiascend.com/software/mindx-sdk)选择您使用的产品后,通过下拉框选择支持的SDK版本并查看配套关系。**
+## 主要目录结构与说明
+| 目录 | 说明 |
+|------------------------|----------------|
+| [mxVision](./mxVision) | mxVision参考样例目录 |
+| [mxRAG](./mxRAG) | mxRAG参考样例目录 |
->目前SDK分为:制造质检 [mxManufacture](https://www.hiascend.com/software/mindx-sdk/mxManufacture/community)、视觉分析 [mxVision](https://www.hiascend.com/software/mindx-sdk/mxVision/community)、检索聚类 [mxIndex](https://www.hiascend.com/software/mindx-sdk/mxIndex/community)。
-以上各SDK分支版本号相同,分别适配不同方向的SDK组件。
-
-- **当前分支样例版本适配说明如下:**
- | SDK版本 | CANN版本 |
- |---|---|
- | 2.0.4 | [5.0.4](https://www.hiascend.com/software/cann/commercial) |
-
-## 目录结构与说明
-| 目录 | 说明 |
-|---|---|
-| [build](./build) | 用户贡献样例构建目录 |
-| [contrib](./contrib) | 用户贡献样例目录 |
-| [docs](./docs) | 文档目录 |
-| [mxVision](./mxVision) | 官方应用样例目录 |
-| [tools](./tools) | 开发测试相关工具 |
-| [tutorials](./tutorials) | 官方开发样例和文档参考工程目录 |
-
-## 安装
-**按照如下步骤搭建环境:**
- (1) 在[昇腾文档](https://www.hiascend.com/document?tag=commercial-developere)中选择**CANN 软件安装指南**,点击进入文档。
- (2) 根据文档了解整体流程并根据文档进行硬件及CANN软件安装。
- (3) 在SDK下载页面中选择**MindX SDK {版本} {产品} 用户指南**。
- (4) mxManufacture和mxVision需根据不同开发方式(使用MindStudio开发/使用命令行方式开发)对应的**环境准备**章节完成安装。
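环境搭建完成后,可用下面的Python示意脚本粗略自检环境变量是否就绪(`MX_SDK_HOME`、`LD_LIBRARY_PATH`等变量名为常见假设,具体以各SDK用户指南中的`set_env.sh`为准):

```python
import os

def check_sdk_env(env=None):
    """检查SDK运行所需的环境变量,返回缺失的变量名列表。

    注意:expected中的变量名为示意假设,实际以用户指南为准。
    """
    env = os.environ if env is None else env
    expected = ["MX_SDK_HOME", "LD_LIBRARY_PATH"]
    return [name for name in expected if not env.get(name)]

if __name__ == "__main__":
    missing = check_sdk_env()
    if missing:
        print("缺少环境变量:", ", ".join(missing))
    else:
        print("环境变量检查通过")
```

若输出缺少的变量,请重新执行安装文档中的环境变量配置步骤。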
-
-## 开发样例
-**根据以下表格,选择需要运行的样例,并按照readme部署相关样例和参考工程**
-| 样例名称 | 简介 |
-|---|---|
-| [DvppWrapper接口样例](./tutorials/DvppWrapperSample) | 对图片实现编码,解码,缩放,抠图,以及把样例图片编码为264视频文件 |
-| [图像检测样例](./tutorials/ImageDetectionSample) | c++和python版本的图像检测样例,可输出检测结果 |
-| [c++图片输入yolov3样例](./tutorials/mxBaseSample) | c++语言的yolov3图像检测样例及yolov3的后处理模块开发 |
-| [c++视频输入yolov3样例](./tutorials/mxBaseVideoSample) | 针对视频输入的c++版本yolov3样例 |
-| [绘图单元使用样例](./tutorials/OsdSample) | 使用osd对图像进行自定义绘图的样例 |
-| [输入输出插件使用演示](./tutorials/PipelineInputOutputSample) | 对多种输入输出方式进行演示的样例 |
-| [元数据输入输出样例](./tutorials/protocolSample) | 如何自行编解码元数据的演示样例 |
-| [SDK插件开发](./tutorials/mindx_sdk_plugin) | [4-1](docs/quickStart/4-1插件开发调试指导.md)章节对应的样例代码 |
-| [模型后处理插件开发](./tutorials/SamplePostProcess) | [4-2](docs/quickStart/4-2模型后处理库(内置类型)开发调试指导.md)章节对应的演示代码 |
-| [自定义proto结构体](./tutorials/sampleCustomProto) | [4-3](docs/quickStart/4-3挂载自定义proto结构体.md)章节对应的演示代码 |
-| [自定义后处理插件开发](./tutorials/samplePluginPostProc) | [4-4](docs/quickStart/4-4模型Tensor数据处理&自定义模型后处理.md)章节对应的演示代码 |
-1. SDK已支持的模型直接使用**mxpi_{类型}postprocessor**插件([SDK2.0.4后处理](https://support.huawei.com/enterprise/zh/doc/EDOC1100234263/cab50573))配置对应参数即可([SDK2.0.4模型支持列表](https://support.huawei.com/enterprise/zh/doc/EDOC1100234263/8c42df9f))
-2. 未支持的模型根据任务类型,选择SDK已经支持的后处理基类(目标检测,分类任务,语义分割,文本生成)([SDK2.0.4后处理基类](https://support.huawei.com/enterprise/zh/doc/EDOC1100234263/51b8b606))去派生一个新的子类,参考**模型后处理插件开发**进行开发。
-3. 如果当前后处理基类所采用的数据结构无法满足需求,用户可参考**自定义后处理插件开发**自行开发插件,或参考**元数据输入输出样例**将tensor数据输出至外部处理
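上述后处理插件的选择流程,可结合下面的Python示意片段理解pipeline中`mxpi_objectpostprocessor`插件的典型配置与插件串联方式(模型与配置文件路径均为示意假设,插件属性字段以SDK用户指南为准):

```python
import json

# 示意:一个最简pipeline片段,使用mxpi_objectpostprocessor做目标检测后处理
# (文件路径为示意假设,属性字段以SDK用户指南为准)
pipeline = {
    "detection": {
        "mxpi_tensorinfer0": {
            "props": {"modelPath": "models/yolov3.om"},
            "factory": "mxpi_tensorinfer",
            "next": "mxpi_objectpostprocessor0",
        },
        "mxpi_objectpostprocessor0": {
            "props": {
                "postProcessConfigPath": "models/yolov3.cfg",
                "labelPath": "models/coco.names",
                "postProcessLibPath": "libMpYOLOv3PostProcessor.so",
            },
            "factory": "mxpi_objectpostprocessor",
            "next": "mxpi_dataserialize0",
        },
    }
}

def plugin_chain(pipeline, stream, start):
    """按next字段遍历插件链,返回各插件factory名的顺序列表。"""
    chain, name = [], start
    while name in pipeline[stream]:
        node = pipeline[stream][name]
        chain.append(node["factory"])
        name = node.get("next", "")
    return chain

if __name__ == "__main__":
    print(json.dumps(pipeline, indent=2, ensure_ascii=False))
    print(plugin_chain(pipeline, "detection", "mxpi_tensorinfer0"))
```

未支持的模型只需将`postProcessLibPath`替换为自行派生开发的后处理动态库即可。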
-
-## 运行
-**根据以下表格,选择需要运行的样例,并按照readme进行第三方依赖的安装及样例下载运行**
-| 样例名称 | 语言 | 适配SDK版本 | 简介 |
-|---|---|---|---|
-| [动作识别](./contrib/ActionRecognition) | python | >=2.0.4 | 单人独处、逗留超时、快速移动、剧烈运动、离床检测、攀高检测六种应用场景 |
-| [AI风景画](./contrib/ai_paint) | python | >=2.0.4 | 从结构化描述生成对应风景照片 |
-| [自动语音识别](./contrib/ASR&KWR) | python | >=2.0.4 |端到端的自动语音识别(AutoSpeechRecognition)+文本关键词识别(KeyWordRecognition) |
-| [文本分类](./contrib/BertTextClassification) | c++ | >=2.0.4 | 新闻文本分类类别:体育、健康、军事、教育、汽车 |
-| [车牌识别](./contrib/CarPlateRecognition) | c++ | >=2.0.4 | 对图像中的车牌进行检测,并对检测到每一个车牌进行识别 |
-| [卡通图像生成](./contrib/CartoonGANPicture) | c++ | >=2.0.4 | 通用场景下的jpg图片卡通化 |
-| [黑白图像上色](./contrib/Colorization) | python | >=2.0.4 | 输入黑白图像,自动对黑白图像进行上色,还原彩色图像 |
-| [人群计数](./contrib/CrowdCounting) | c++ | >=2.0.4 | 人群计数目标检测,输出可视化结果 |
-| [驾驶员状态识别](./contrib/DriverStatusRecognition) | python | >=2.0.4 | 识别视频中的驾驶员状态 |
-| [边缘检测](./contrib/EdgeDetectionPicture) | c++ | >=2.0.4 | 图像边缘提取,输出可视化结果 |
-| [EfficientDet](./contrib/EfficientDet) | python | >=2.0.4 | 使用 EfficientDet 模型进行目标检测 |
-| [目标检测](./contrib/FaceBoxes) | python | >=2.0.4 | 对图像中的目标进行画框并且标注置信度 |
-| [口罩识别](./contrib/facemaskdetection) | python | >=2.0.4 | 对原图像的目标以及口罩进行识别画框 |
-| [目标替换](./contrib/faceswap) | python | >=2.0.4 | 进行目标检测,脸部关键点推理以及目标替换,将替换结果可视化并保存 |
-| [情绪识别](./contrib/FacialExpressionRecognition) | python | >=2.0.4 | 采集图片中的目标图像,然后利用情绪识别模型推理情绪类别 |
-| [目标跟踪](./contrib/FairMOT) | python | >=2.0.4 | 视频目标检测和跟踪,对行人进行画框和编号 |
-| [语义分割](./contrib/FastSCNN) | python | >=2.0.4 | 对图片实现语义分割功能 |
-| [疲劳驾驶识别](./contrib/FatigueDrivingRecognition) | python | >=2.0.4 | 对视频中驾驶人员疲劳状态识别与预警 |
-| [火灾识别](./contrib/FireDetection) | python | >=2.0.4 | 对视频中高速公路车辆火灾和烟雾的识别告警 |
-| [手势关键点](./contrib/GestureKeypointDetection) | python | >=2.0.4 | 检测图像中所有的人手,输出手势关键点连接成的手势骨架 |
-| [头部姿态识别](./contrib/HeadPoseEstimation) | python | >=2.0.4 | 对图像中的头部进行姿态识别,输出可视化结果 |
-| [安全帽识别](./contrib/HelmetIdentification) | python | >=2.0.4 | 两路视频的安全帽去重识别,并对未佩戴行为告警 |
-| [人体语义分割](./contrib/human_segmentation) | c++ | >=2.0.4 | 对输入图片中的人像进行语义分割操作,然后输出mask掩膜图,将其与原图结合,生成标注出人体部分的人体语义分割图片 |
-| [个体属性识别](./contrib/Individual) | python | >=2.0.4 | 识别多种目标属性信息,包括年龄、性别、颜值、情绪、脸型、胡须、发色、是否闭眼、是否配戴眼镜、目标质量信息及类型等 |
-| [语音关键词检测](./contrib/kws) | python | >=2.0.4 | 对语音进行关键词检测 |
-| [人像分割](./contrib/MMNET) | python | >=2.0.4 | 基于MMNET解决移动设备上人像抠图的问题,旨在以最小的模型性能降级在移动设备上获得实时推断 |
-| [单目深度估计](./contrib/MonocularDepthEstimation) | python | >=2.0.4 | 基于AdaBins室内模型的单目深度估计,输出目标图像的深度图 |
-| [多路视频检测](./contrib/MultiChannelVideoDetection) | c++ | >=2.0.4 | 同时对两路本地视频或RTSP视频流(H264或H265)进行YOLOv3目标检测,生成可视化结果 |
-| [小麦检测](./contrib/mxBase_wheatDetection) | c++ | >=2.0.4 | 使用yolov5对图像中的小麦进行识别检测 |
-| [OCR身份证检测识别](./contrib/OCR/IDCardRecognition) | python | >=2.0.4 | 对身份证进行识别和检测 |
-| [OCR关键词检测](./contrib/OCR/KeywordDetection) | python | >=2.0.4 | 对图片进行识别并检测是否包含指定关键词 |
-| [人体关键点检测](./contrib/OpenposeKeypointDetection) | python | >=2.0.4 | 输入一幅图像,可以检测得到图像中所有行人的关键点并连接成人体骨架 |
-| [行人属性检测](./contrib/PedestrianAttributeRecognition) | python | >=2.0.4 | 对检测图片中行人的定位和属性进行识别 |
-| [人群密度计数](./contrib/PersonCount) | python | >=2.0.4 | 输入一幅人群图像,输出图像当中人的计数(估计)的结果 |
-| [文本检测](./contrib/PixelLink) | python | >=2.0.4 | 识别图像文本的位置信息,将识别到的文本位置用线条框选出来 |
-| [人像分割与背景替换](./contrib/PortraitSegmentation) | python | >=2.0.4 | 使用Portrait模型对输入图片中的人像进行分割,然后与背景图像融合,实现背景替换 |
-| [行人重识别](./contrib/ReID) | python | >=2.0.4 | 检索给定照片中的行人ID,并与特征库对比展示 |
-| [遥感影像地块分割检测样例](./contrib/RemoteSensingSegmentation) | python | >=2.0.4 | 输出遥感图像的可视化语义分割图 |
-| [无人机遥感旋转目标检测](./contrib/RotateObjectDetection) | python | >=2.0.4 | 输入一张待检测图片,可以输出目标旋转角度检测框,并有可视化呈现 |
-| [3D目标检测](./contrib/RTM3DTargetDetection) | python | >=2.0.4 | 对道路单色RGB图像进行三维目标检测 |
-| [情感极性分类](./contrib/SentimentAnalysis) | python | >=2.0.4 | 输入一段句子,可以判断该句子属于哪个情感极性 |
-| [发言者识别](./contrib/SpeakerRecog) | python | >=2.0.4 | 对发言者进行识别。如果声纹库中不包含当前说话人,则对当前说话人进行注册并保存至声纹库,否则给出识别结果 |
-| [图像超分辨率](./contrib/SuperResolution) | python | >=2.0.4 | 对输入的图片利用VDSR模型进行超分辨率重建 |
-| [车道线检测](./contrib/UltraFastLaneDetection) | python | >=2.0.4 | 对图像中的车道线进行检测,并对检测到的图像中的每一条车道线进行识别 |
-| [车流量统计](./contrib/VehicleCounting) | c++ | >=2.0.4 | 对本地视频(H264)中的车辆动向进行追踪并计数,最后生成可视化结果 |
-| [视频手势识别运行](./contrib/VideoGestureRecognition) | c++ | >=2.0.4 | 对本地视频(H264)进行手势识别并分类,生成可视化结果 |
-| [伪装目标分割](./contrib/CamouflagedObjectDetection) | python | >=2.0.4 | 对图像中的伪装目标进行识别检测,生成可视化分割结果 |
-
-## 插件使用
-**以下表格描述了各插件在哪些样例中有使用,供用户查找参考**
-| 插件名称 | 参考设计位置 | 具体样例名称 |
-| :------------------------------: | :----------------------------------------------------------: | :----------------------------------------------------------: |
-| appsrc | run包 | 通用 |
-| mxpi_rtspsrc | run包 | SampleOsdVideo2Channels.pipeline<br>VideoObjectDetection.pipeline<br>SampleMotsimplesortv2.pipeline<br>SampleSkipframe.pipeline |
-| mxpi_dataserialize | run包 | 通用 |
-| appsink | run包 | 通用 |
-| fakesink | run包 | 通用 |
-| filesink | run包 | SampleOsd.pipeline |
-| mxpi_parallel2serial | run包 | SampleNmsoverLapedroiv2.pipeline<br>SampleOsdVideo2Channels.pipeline |
-| mxpi_distributor | run包 | SampleNmsoverLapedroiv2.pipeline<br>SampleOsdVideo2Channels.pipeline |
-| mxpi_synchronize | [gitee-mxVision](https://gitee.com/ascend/mindxsdk-referenceapps/tree/master/mxVision/AllObjectsStructuring) | AllObjectsStructuring |
-| mxpi_datatransfer | [gitee-mxVision](https://gitee.com/ascend/mindxsdk-referenceapps/tree/master/mxVision/MultiThread) | MultiThread |
-| mxpi_nmsoverlapedroi | 弃用 | |
-| mxpi_nmsoverlapedroiV2 | run包 | SampleNmsoverLapedroiv2.pipeline |
-| mxpi_roigenerator | run包 | SemanticSegPostProcessor.pipeline |
-| mxpi_semanticsegstitcher | run包 | SemanticSegPostProcessor.pipeline |
-| mxpi_objectselector | run包 | SampleObjectSelector.pipeline |
-| mxpi_skipframe | run包 | SampleSkipframe.pipeline |
-| mxpi_imagedecoder | run包 | 通用 |
-| mxpi_imageresize | run包 | 通用 |
-| mxpi_imagecrop | run包 | Sample.pipeline<br>SampleNmsoverLapedroiv2.pipeline<br>SampleObjectSelector.pipeline<br>SampleOsdVideo2Channels.pipeline |
-| mxpi_videodecoder | run包 | 通用 |
-| mxpi_videoencoder | run包 | SampleOsdVideo2Channels.pipeline |
-| mxpi_imageencoder | run包 | SampleOsd.pipeline |
-| mxpi_imagenormalize | run包 | SampleImageNormalize.pipeline |
-| mxpi_opencvcentercrop | run包 | SampleOpencvCenterCrop.pipeline |
-| mxpi_warpperspective | mindxsdk-referenceapps | GeneralTextRecognition |
-| mxpi_rotation | mindxsdk-referenceapps | GeneralTextRecognition |
-| mxpi_modelinfer | 弃用 | |
-| mxpi_tensorinfer | run包 | 通用 |
-| mxpi_objectpostprocessor | run包 | SampleObjectSelector.pipeline<br>Sample.pipeline |
-| mxpi_classpostprocessor | run包 | Sample.pipeline<br>SampleOsdVideo2Channels.pipeline<br>BertMultiPorts.pipeline |
-| mxpi_semanticsegpostprocessor | run包 | SemanticSegPostProcessor.pipeline |
-| mxpi_textgenerationpostprocessor | run包 | TextGenerationPostProcessor.pipeline |
-| mxpi_textobjectpostprocessor | mindxsdk-referenceapps | GeneralTextRecognition |
-| mxpi_keypointpostprocessor | run包 | KeyPointPostProcessor.pipeline |
-| mxpi_motsimplesort | 弃用 | |
-| mxpi_motsimplesortV2 | run包 | |
-| mxpi_facealignment | mindxsdk-referenceapps | FaceFeatureExtraction |
-| mxpi_qualitydetection | [gitee-mxVision](https://gitee.com/ascend/mindxsdk-referenceapps/tree/master/mxVision/VideoQualityDetection) | VideoQualityDetection |
-| mxpi_dumpdata | [用户指南](https://support.huawei.com/enterprise/zh/doc/EDOC1100234263/ba172876) | |
-| mxpi_loaddata | [用户指南](https://support.huawei.com/enterprise/zh/doc/EDOC1100234263/ba172876) | |
-| mxpi_opencvosd | run包 | SampleOsd.pipeline |
-| mxpi_object2osdinstances | run包 | SampleOsdVideo2Channels.pipeline |
-| mxpi_class2osdinstances | run包 | SampleOsdVideo2Channels.pipeline |
-| mxpi_osdinstancemerger | run包 | SampleOsdVideo2Channels.pipeline |
-| mxpi_channelselector | run包 | SampleOsdVideo2Channels.pipeline |
-| mxpi_channelimagesstitcher | run包 | SampleOsdVideo2Channels.pipeline |
-| mxpi_channelosdcoordsconverter | run包 | SampleOsdVideo2Channels.pipeline |
-| mxpi-bufferstablizer | run包 | SampleOsdVideo2Channels.pipeline |
-
-
-
-## 文档
-
-参考各组件:制造质检 [mxManufacture](https://www.hiascend.com/software/mindx-sdk/mxManufacture/community)、视觉分析 [mxVision](https://www.hiascend.com/software/mindx-sdk/mxVision/community)、检索聚类 [mxIndex](https://www.hiascend.com/software/mindx-sdk/mxIndex/community)内的**用户手册**链接获取相关文档。
## 社区
-昇腾社区鼓励开发者多交流,共学习。开发者可以通过以下渠道进行交流和学习。
-
-昇腾社区网站:hiascend.com
-
-昇腾论坛:https://bbs.huaweicloud.com/forum/forum-726-1.html
->SDK专属空间位于**MindX应用使能**子目录下
+昇腾社区鼓励开发者多交流,共学习。开发者可以通过昇腾社区网站获取最新的MindX SDK的软件、文档等资源;可以通过昇腾论坛与其他开发者交流开发经验。
-昇腾官方qq群:965804873
+昇腾社区网站:https://www.hiascend.com/
-## 贡献代码
+昇腾论坛:https://www.hiascend.com/forum/
-欢迎参与贡献。更多详情,请参阅我们的[CONTRIBUTING.md](./contrib/CONTRIBUTING.md)
## 免责声明
diff --git a/build/build.sh b/build/build.sh
deleted file mode 100644
index db8341850829891c7edb063fab4d5ef10a4d4563..0000000000000000000000000000000000000000
--- a/build/build.sh
+++ /dev/null
@@ -1,26 +0,0 @@
-#!/bin/bash
-# Copyright 2021 Huawei Technologies Co., Ltd
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-set -e
-current_folder="$( cd "$(dirname "$0")" ;pwd -P )"
-
-export MX_SDK_HOME=/opt/buildtools/mindxsdk/mxVision
-
-bash $current_folder/../contrib/build_all.sh
-
-exit 0
-
-
-
diff --git a/contrib/CONTRIBUTING.md b/contrib/CONTRIBUTING.md
deleted file mode 100644
index 82ce68f2cc4232019329bad849fbf3e19f664f70..0000000000000000000000000000000000000000
--- a/contrib/CONTRIBUTING.md
+++ /dev/null
@@ -1,197 +0,0 @@
-### 介绍
-
-mindxsdk-referenceapps欢迎各位开发者的加入,希望各位开发者遵循社区的行为准则,共同建立一个开放和受欢迎的社区:[Ascend社区行为准则 1.0 版本](https://gitee.com/ascend/community/blob/master/code-of-conduct_zh_cn.md)
-
-### 贡献要求
-
-请贡献者在提交代码之前签署CLA协议(选择“个人签署”):[签署链接](https://clasign.osinfra.cn/sign/Z2l0ZWUlMkZhc2NlbmQ=)
-
-如您完成签署,可在自己提交的PR评论区输入/check-cla进行核实校验
-
-开发者提交的内容包括项目源码、配置文件、readme、启动脚本等文件,并遵循以下标准提交:
-
-### 一、提交内容
-
-- **提交清单**
-
-| 文件 | 描述 |
-| ------------ | ------------------------------------------------------------ |
-| **README** | 包含第三方依赖安装、模型转换、编译、运行指导等内容,能指导端到端使用 |
-| 代码 | 包含插件开发的C++代码、CMakeLists.txt、python/C++推理运行代码、精度与性能测试代码 |
-| 配置文件 | 运行时相关配置文件,用于加载相关运行参数的文件 |
-| pipeline文件 | MindX SDK的编排文件 |
-| 启动脚本 | 包括编译、运行、测试、模型转换等脚本 |
-
-- **典型的目录结构**
-
-```bash
-├── config #配置文件目录
-│ └── configure.cfg
-├── model #模型目录
-├── pipeline
-│ └── test.pipeline
-├── Plugin1 #插件1工程目录
-│ ├── CMakeLists.txt
-│ ├── Plugin1.cpp
-│ └── Plugin1.h
-├── Plugin2 #插件2工程目录
-│ ├── CMakeLists.txt
-│ ├── Plugin2.cpp
-│ └── Plugin2.h
-├── main.cpp
-├── main.py
-├── README.md
-├── build.sh
-└── run.sh
-```
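按上述典型目录结构,贡献者可用如下Python示意脚本自检样例目录是否包含必备文件(必备文件清单为示意假设,具体以本文的提交清单为准):

```python
from pathlib import Path

# 示意:按提交清单自检样例目录(必备文件清单为示意假设)
REQUIRED = ["README.md", "build.sh", "run.sh"]

def missing_files(sample_dir):
    """返回样例目录中缺失的必备文件名列表,全部齐备时返回空列表。"""
    root = Path(sample_dir)
    return [name for name in REQUIRED if not (root / name).is_file()]

if __name__ == "__main__":
    print(missing_files("."))
```

提交PR前运行一次,可避免因缺少README或启动脚本被打回。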
-
-**注意**:相关输入的数据(图像、视频等)请不要上传到代码仓,在README注明如何获取即可
-
-### 二、源码
-
-1、MindX SDK离线推理请使用`C++`或`python`代码,符合第四部分编码规范
-
-2、贡献者参考设计的代码目录命名规则如下:
-
-```shell
-mindxsdk-referenceapps/contrib/参考设计名称(英文)
-```
-
-### 三、License规则
-
-涉及的代码、启动脚本均需要在开始位置添加华为公司License:[华为公司License链接](https://gitee.com/mindspore/mindspore/blob/master/LICENSE)
-
-- **C++**
-
-```c++
-/*
- * Copyright(C) 2021. Huawei Technologies Co.,Ltd. All rights reserved.
- *
- * Licensed under the Apache License, Version 2.0 (the "License");
- * you may not use this file except in compliance with the License.
- * You may obtain a copy of the License at
- *
- * http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-```
-
-- **python**&**shell**
-
-```python
-# Copyright(C) 2021. Huawei Technologies Co.,Ltd. All rights reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-```
-
-> 关于License声明时间,应注意:2021年新建的文件,应为 Copyright 2021 Huawei Technologies Co., Ltd;2020年创建、2020年修改的文件,应为 Copyright 2020 Huawei Technologies Co., Ltd。
-
-### 四、编程规范
-
-- 规范标准
-
- C++遵循Google编程规范,Python代码均遵循PEP 8编码规范
-
- 规范参考链接:[zh-cn/contribute/OpenHarmony-cpp-coding-style-guide.md · OpenHarmony/docs - Gitee.com](https://gitee.com/openharmony/docs/blob/master/zh-cn/contribute/OpenHarmony-cpp-coding-style-guide.md)
-
-- 规范备注(前4条规则C++适用)
-
-1、优先使用string类型,避免使用char*;
-
-2、禁止使用printf,一律使用cout;
-
-3、内存管理尽量使用智能指针;
-
-4、不准在函数里调用exit;
-
-5、禁止使用IDE等工具自动生成代码;
-
-6、控制第三方库依赖,如果引入第三方依赖,则需要提供第三方依赖安装和使用指导书;
-
-7、一律使用英文注释,注释率30%--40%,鼓励自注释;
-
-8、函数头必须有注释,说明函数作用,入参、出参;
-
-9、统一错误码,通过错误码可以确认哪个分支返回错误;
-
-10、禁止打印大量无实际影响的错误级别日志;
-
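下面的Python示意片段演示上述规范中“统一错误码”和“函数头注释说明入参、出参”两条要求的一种做法(错误码取值为示意假设,实际项目应集中统一定义):

```python
from enum import IntEnum

class ErrorCode(IntEnum):
    """统一错误码(取值为示意假设,实际项目应集中定义)。"""
    OK = 0
    INVALID_INPUT = 1
    FILE_READ_FAILED = 2

def load_label_file(path):
    """加载标签文件。

    入参: path - 标签文件路径
    出参: (错误码, 标签列表),失败时标签列表为空
    """
    if not path:
        # 返回错误码而不是直接exit,便于调用方定位出错分支
        return ErrorCode.INVALID_INPUT, []
    try:
        with open(path, encoding="utf-8") as f:
            labels = [line.strip() for line in f if line.strip()]
    except OSError:
        return ErrorCode.FILE_READ_FAILED, []
    return ErrorCode.OK, labels
```

调用方依据返回的错误码分支处理,而不是在库函数内部调用exit或刷错误日志。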
-### 五、代码提交规范
-
-- 关键要求:
-
-1、请将**`mindxsdk-referenceapps`**仓**fork**到个人分支,基于个人分支提交代码到个人**fork仓**,并创建**`Pull Requests`**,提交合并请求到主仓上
-
-**参考Fork+Pull Requests 模式**:https://gitee.com/help/articles/4128#article-header0
-
-> PR提交后请不要关闭,后续所有操作请在不关闭PR的前提下进行
-
-2、PR标题模板
-
-```
- [xxx学校] [xxx参考设计]
-```
-
-3、PR内容模板
-
-```
-### 相关的Issue
-
-### 原因(目的、解决的问题等)
-
-### 描述(做了什么,变更了什么)
-
-### 测试用例(新增、改动、可能影响的功能)
-```
-
-### 六、ISSUE提交规范
-
-1、ISSUE提交内容需包含三部分:当前行为、预期行为、复现步骤
-
-2、ISSUE提交模板:
-
-```
-一、问题现象(附报错日志上下文):
-### 当前现象
- xxxx
-
-### 预期现象
- xxxx
-
-二、软件版本:
--- CANN 版本 (e.g., CANN 3.0.x,5.x.x):
---Tensorflow/Pytorch/MindSpore 版本:
---Python 版本 (e.g., Python 3.9.2):
--- MindStudio版本 (e.g., MindStudio 2.0.0 (beta3)):
---操作系统版本 (e.g., Ubuntu 18.04):
-
-三、复现步骤:
-xxxx
-
-
-四、日志信息:
-xxxx
-### 请根据自己的运行环境参考以下方式搜集日志信息,如果涉及到算子开发相关的问题,建议也提供UT/ST测试和单算子集成测试相关的日志。
-
-日志提供方式:
-### 将日志打包后作为附件上传。若日志大小超出附件限制,则可上传至外部网盘后提供链接。
-
-### 获取方法请参考wiki:
-https://gitee.com/ascend/modelzoo/wikis/%E5%A6%82%E4%BD%95%E8%8E%B7%E5%8F%96%E6%97%A5%E5%BF%97%E5%92%8C%E8%AE%A1%E7%AE%97%E5%9B%BE?sort_id=4097825
-```
-
diff --git a/contrib/build_all.sh b/contrib/build_all.sh
deleted file mode 100644
index 0c861ba6a8a76f053c2999d66bd4afd204f0517a..0000000000000000000000000000000000000000
--- a/contrib/build_all.sh
+++ /dev/null
@@ -1,73 +0,0 @@
-#!/bin/bash
-# Copyright 2021 Huawei Technologies Co., Ltd
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-set -e
-current_folder="$( cd "$(dirname "$0")" ;pwd -P )"
-
-
-SAMPLE_FOLDER=(
- # ActionRecognition/
- # CrowdCounting/
- # mxBase_wheatDetection/
- # EdgeDetectionPicture/
- HelmetIdentification/
- Individual/
- # human_segmentation/
- # OpenposeKeypointDetection/
- PersonCount/
- # FatigueDrivingRecognition/
- # CartoonGANPicture/
- # HeadPoseEstimation/
- FaceBoxes/
- BertTextClassification/
- # RTM3DTargetDetection/
- EfficientDet/
- SentimentAnalysis/
- # RotateObjectDetection/
- # FairMOT/
- UltraFastLaneDetection/
- VehicleIdentification/
- yunet/
- # RoadSegmentation/
- # PassengerflowEstimation/
- # VehicleRetrogradeRecognition/
- # Collision/
- # PassengerflowEstimation/
- CenterFace/
- YOLOX/
- PicoDet/
- SOLOV2/
- # OpenCVPlugin/
- RefineDet/
- FCOS
- Faster_R-CNN/
- MeterReader/
- # RTMHumanKeypointsDetection/
-)
-
-err_flag=0
-for sample in ${SAMPLE_FOLDER[@]};do
- cd ${current_folder}/${sample}
- bash build.sh || {
- echo -e "Failed to build ${sample}"
- err_flag=1
- }
-done
-
-
-if [ ${err_flag} -eq 1 ]; then
- exit 1
-fi
-exit 0
diff --git a/mindxsdk-referenceapps b/mindxsdk-referenceapps
deleted file mode 160000
index 4eb4770587ad2ce9f9ed643b8e4874992c3c0398..0000000000000000000000000000000000000000
--- a/mindxsdk-referenceapps
+++ /dev/null
@@ -1 +0,0 @@
-Subproject commit 4eb4770587ad2ce9f9ed643b8e4874992c3c0398
diff --git a/mxVision/MultiThread/picture/.gitkeep b/mxVision/MultiThread/picture/.gitkeep
deleted file mode 100644
index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000
diff --git "a/docs/MindX SDK\345\270\270\350\247\201\351\227\256\351\242\230FAQ/MindX SDK\345\270\270\350\247\201\351\227\256\351\242\230FAQ.md" "b/mxVision/mxVision-docs/MindX SDK\345\270\270\350\247\201\351\227\256\351\242\230FAQ/MindX SDK\345\270\270\350\247\201\351\227\256\351\242\230FAQ.md"
similarity index 100%
rename from "docs/MindX SDK\345\270\270\350\247\201\351\227\256\351\242\230FAQ/MindX SDK\345\270\270\350\247\201\351\227\256\351\242\230FAQ.md"
rename to "mxVision/mxVision-docs/MindX SDK\345\270\270\350\247\201\351\227\256\351\242\230FAQ/MindX SDK\345\270\270\350\247\201\351\227\256\351\242\230FAQ.md"
diff --git a/contrib/AutoSpeechRecognition/model/.keep "b/mxVision/mxVision-docs/MindX SDK\345\270\270\350\247\201\351\227\256\351\242\230FAQ/img/.keep"
similarity index 100%
rename from contrib/AutoSpeechRecognition/model/.keep
rename to "mxVision/mxVision-docs/MindX SDK\345\270\270\350\247\201\351\227\256\351\242\230FAQ/img/.keep"
diff --git "a/docs/MindX SDK\345\270\270\350\247\201\351\227\256\351\242\230FAQ/img/hi_dvpp.png" "b/mxVision/mxVision-docs/MindX SDK\345\270\270\350\247\201\351\227\256\351\242\230FAQ/img/hi_dvpp.png"
similarity index 100%
rename from "docs/MindX SDK\345\270\270\350\247\201\351\227\256\351\242\230FAQ/img/hi_dvpp.png"
rename to "mxVision/mxVision-docs/MindX SDK\345\270\270\350\247\201\351\227\256\351\242\230FAQ/img/hi_dvpp.png"
diff --git "a/docs/MindX SDK\345\270\270\350\247\201\351\227\256\351\242\230FAQ/img/hidvpp_fix.png" "b/mxVision/mxVision-docs/MindX SDK\345\270\270\350\247\201\351\227\256\351\242\230FAQ/img/hidvpp_fix.png"
similarity index 100%
rename from "docs/MindX SDK\345\270\270\350\247\201\351\227\256\351\242\230FAQ/img/hidvpp_fix.png"
rename to "mxVision/mxVision-docs/MindX SDK\345\270\270\350\247\201\351\227\256\351\242\230FAQ/img/hidvpp_fix.png"
diff --git "a/docs/MindX SDK\345\270\270\350\247\201\351\227\256\351\242\230FAQ/img/sdk_faq1.png" "b/mxVision/mxVision-docs/MindX SDK\345\270\270\350\247\201\351\227\256\351\242\230FAQ/img/sdk_faq1.png"
similarity index 100%
rename from "docs/MindX SDK\345\270\270\350\247\201\351\227\256\351\242\230FAQ/img/sdk_faq1.png"
rename to "mxVision/mxVision-docs/MindX SDK\345\270\270\350\247\201\351\227\256\351\242\230FAQ/img/sdk_faq1.png"
diff --git "a/docs/MindX SDK\345\270\270\350\247\201\351\227\256\351\242\230FAQ/img/sdk_faq10.png" "b/mxVision/mxVision-docs/MindX SDK\345\270\270\350\247\201\351\227\256\351\242\230FAQ/img/sdk_faq10.png"
similarity index 100%
rename from "docs/MindX SDK\345\270\270\350\247\201\351\227\256\351\242\230FAQ/img/sdk_faq10.png"
rename to "mxVision/mxVision-docs/MindX SDK\345\270\270\350\247\201\351\227\256\351\242\230FAQ/img/sdk_faq10.png"
diff --git "a/docs/MindX SDK\345\270\270\350\247\201\351\227\256\351\242\230FAQ/img/sdk_faq11.png" "b/mxVision/mxVision-docs/MindX SDK\345\270\270\350\247\201\351\227\256\351\242\230FAQ/img/sdk_faq11.png"
similarity index 100%
rename from "docs/MindX SDK\345\270\270\350\247\201\351\227\256\351\242\230FAQ/img/sdk_faq11.png"
rename to "mxVision/mxVision-docs/MindX SDK\345\270\270\350\247\201\351\227\256\351\242\230FAQ/img/sdk_faq11.png"
diff --git "a/docs/MindX SDK\345\270\270\350\247\201\351\227\256\351\242\230FAQ/img/sdk_faq12.png" "b/mxVision/mxVision-docs/MindX SDK\345\270\270\350\247\201\351\227\256\351\242\230FAQ/img/sdk_faq12.png"
similarity index 100%
rename from "docs/MindX SDK\345\270\270\350\247\201\351\227\256\351\242\230FAQ/img/sdk_faq12.png"
rename to "mxVision/mxVision-docs/MindX SDK\345\270\270\350\247\201\351\227\256\351\242\230FAQ/img/sdk_faq12.png"
diff --git "a/docs/MindX SDK\345\270\270\350\247\201\351\227\256\351\242\230FAQ/img/sdk_faq13.png" "b/mxVision/mxVision-docs/MindX SDK\345\270\270\350\247\201\351\227\256\351\242\230FAQ/img/sdk_faq13.png"
similarity index 100%
rename from "docs/MindX SDK\345\270\270\350\247\201\351\227\256\351\242\230FAQ/img/sdk_faq13.png"
rename to "mxVision/mxVision-docs/MindX SDK\345\270\270\350\247\201\351\227\256\351\242\230FAQ/img/sdk_faq13.png"
diff --git "a/docs/MindX SDK\345\270\270\350\247\201\351\227\256\351\242\230FAQ/img/sdk_faq14.png" "b/mxVision/mxVision-docs/MindX SDK\345\270\270\350\247\201\351\227\256\351\242\230FAQ/img/sdk_faq14.png"
similarity index 100%
rename from "docs/MindX SDK\345\270\270\350\247\201\351\227\256\351\242\230FAQ/img/sdk_faq14.png"
rename to "mxVision/mxVision-docs/MindX SDK\345\270\270\350\247\201\351\227\256\351\242\230FAQ/img/sdk_faq14.png"
diff --git "a/docs/MindX SDK\345\270\270\350\247\201\351\227\256\351\242\230FAQ/img/sdk_faq15.png" "b/mxVision/mxVision-docs/MindX SDK\345\270\270\350\247\201\351\227\256\351\242\230FAQ/img/sdk_faq15.png"
similarity index 100%
rename from "docs/MindX SDK\345\270\270\350\247\201\351\227\256\351\242\230FAQ/img/sdk_faq15.png"
rename to "mxVision/mxVision-docs/MindX SDK\345\270\270\350\247\201\351\227\256\351\242\230FAQ/img/sdk_faq15.png"
diff --git "a/docs/MindX SDK\345\270\270\350\247\201\351\227\256\351\242\230FAQ/img/sdk_faq16.png" "b/mxVision/mxVision-docs/MindX SDK\345\270\270\350\247\201\351\227\256\351\242\230FAQ/img/sdk_faq16.png"
similarity index 100%
rename from "docs/MindX SDK\345\270\270\350\247\201\351\227\256\351\242\230FAQ/img/sdk_faq16.png"
rename to "mxVision/mxVision-docs/MindX SDK\345\270\270\350\247\201\351\227\256\351\242\230FAQ/img/sdk_faq16.png"
diff --git "a/docs/MindX SDK\345\270\270\350\247\201\351\227\256\351\242\230FAQ/img/sdk_faq17.png" "b/mxVision/mxVision-docs/MindX SDK\345\270\270\350\247\201\351\227\256\351\242\230FAQ/img/sdk_faq17.png"
similarity index 100%
rename from "docs/MindX SDK\345\270\270\350\247\201\351\227\256\351\242\230FAQ/img/sdk_faq17.png"
rename to "mxVision/mxVision-docs/MindX SDK\345\270\270\350\247\201\351\227\256\351\242\230FAQ/img/sdk_faq17.png"
diff --git "a/docs/MindX SDK\345\270\270\350\247\201\351\227\256\351\242\230FAQ/img/sdk_faq18.png" "b/mxVision/mxVision-docs/MindX SDK\345\270\270\350\247\201\351\227\256\351\242\230FAQ/img/sdk_faq18.png"
similarity index 100%
rename from "docs/MindX SDK\345\270\270\350\247\201\351\227\256\351\242\230FAQ/img/sdk_faq18.png"
rename to "mxVision/mxVision-docs/MindX SDK\345\270\270\350\247\201\351\227\256\351\242\230FAQ/img/sdk_faq18.png"
diff --git "a/docs/MindX SDK\345\270\270\350\247\201\351\227\256\351\242\230FAQ/img/sdk_faq19.png" "b/mxVision/mxVision-docs/MindX SDK\345\270\270\350\247\201\351\227\256\351\242\230FAQ/img/sdk_faq19.png"
similarity index 100%
rename from "docs/MindX SDK\345\270\270\350\247\201\351\227\256\351\242\230FAQ/img/sdk_faq19.png"
rename to "mxVision/mxVision-docs/MindX SDK\345\270\270\350\247\201\351\227\256\351\242\230FAQ/img/sdk_faq19.png"
diff --git "a/docs/MindX SDK\345\270\270\350\247\201\351\227\256\351\242\230FAQ/img/sdk_faq1_0101.png" "b/mxVision/mxVision-docs/MindX SDK\345\270\270\350\247\201\351\227\256\351\242\230FAQ/img/sdk_faq1_0101.png"
similarity index 100%
rename from "docs/MindX SDK\345\270\270\350\247\201\351\227\256\351\242\230FAQ/img/sdk_faq1_0101.png"
rename to "mxVision/mxVision-docs/MindX SDK\345\270\270\350\247\201\351\227\256\351\242\230FAQ/img/sdk_faq1_0101.png"
diff --git "a/docs/MindX SDK\345\270\270\350\247\201\351\227\256\351\242\230FAQ/img/sdk_faq2.png" "b/mxVision/mxVision-docs/MindX SDK\345\270\270\350\247\201\351\227\256\351\242\230FAQ/img/sdk_faq2.png"
similarity index 100%
rename from "docs/MindX SDK\345\270\270\350\247\201\351\227\256\351\242\230FAQ/img/sdk_faq2.png"
rename to "mxVision/mxVision-docs/MindX SDK\345\270\270\350\247\201\351\227\256\351\242\230FAQ/img/sdk_faq2.png"
diff --git "a/docs/MindX SDK\345\270\270\350\247\201\351\227\256\351\242\230FAQ/img/sdk_faq20.png" "b/mxVision/mxVision-docs/MindX SDK\345\270\270\350\247\201\351\227\256\351\242\230FAQ/img/sdk_faq20.png"
similarity index 100%
rename from "docs/MindX SDK\345\270\270\350\247\201\351\227\256\351\242\230FAQ/img/sdk_faq20.png"
rename to "mxVision/mxVision-docs/MindX SDK\345\270\270\350\247\201\351\227\256\351\242\230FAQ/img/sdk_faq20.png"
diff --git "a/docs/MindX SDK\345\270\270\350\247\201\351\227\256\351\242\230FAQ/img/sdk_faq21.png" "b/mxVision/mxVision-docs/MindX SDK\345\270\270\350\247\201\351\227\256\351\242\230FAQ/img/sdk_faq21.png"
similarity index 100%
rename from "docs/MindX SDK\345\270\270\350\247\201\351\227\256\351\242\230FAQ/img/sdk_faq21.png"
rename to "mxVision/mxVision-docs/MindX SDK\345\270\270\350\247\201\351\227\256\351\242\230FAQ/img/sdk_faq21.png"
diff --git "a/docs/MindX SDK\345\270\270\350\247\201\351\227\256\351\242\230FAQ/img/sdk_faq22.png" "b/mxVision/mxVision-docs/MindX SDK\345\270\270\350\247\201\351\227\256\351\242\230FAQ/img/sdk_faq22.png"
similarity index 100%
rename from "docs/MindX SDK\345\270\270\350\247\201\351\227\256\351\242\230FAQ/img/sdk_faq22.png"
rename to "mxVision/mxVision-docs/MindX SDK\345\270\270\350\247\201\351\227\256\351\242\230FAQ/img/sdk_faq22.png"
diff --git "a/docs/MindX SDK\345\270\270\350\247\201\351\227\256\351\242\230FAQ/img/sdk_faq23.png" "b/mxVision/mxVision-docs/MindX SDK\345\270\270\350\247\201\351\227\256\351\242\230FAQ/img/sdk_faq23.png"
similarity index 100%
rename from "docs/MindX SDK\345\270\270\350\247\201\351\227\256\351\242\230FAQ/img/sdk_faq23.png"
rename to "mxVision/mxVision-docs/MindX SDK\345\270\270\350\247\201\351\227\256\351\242\230FAQ/img/sdk_faq23.png"
diff --git "a/docs/MindX SDK\345\270\270\350\247\201\351\227\256\351\242\230FAQ/img/sdk_faq2_0202.PNG" "b/mxVision/mxVision-docs/MindX SDK\345\270\270\350\247\201\351\227\256\351\242\230FAQ/img/sdk_faq2_0202.PNG"
similarity index 100%
rename from "docs/MindX SDK\345\270\270\350\247\201\351\227\256\351\242\230FAQ/img/sdk_faq2_0202.PNG"
rename to "mxVision/mxVision-docs/MindX SDK\345\270\270\350\247\201\351\227\256\351\242\230FAQ/img/sdk_faq2_0202.PNG"
diff --git "a/docs/MindX SDK\345\270\270\350\247\201\351\227\256\351\242\230FAQ/img/sdk_faq3.png" "b/mxVision/mxVision-docs/MindX SDK\345\270\270\350\247\201\351\227\256\351\242\230FAQ/img/sdk_faq3.png"
similarity index 100%
rename from "docs/MindX SDK\345\270\270\350\247\201\351\227\256\351\242\230FAQ/img/sdk_faq3.png"
rename to "mxVision/mxVision-docs/MindX SDK\345\270\270\350\247\201\351\227\256\351\242\230FAQ/img/sdk_faq3.png"
diff --git "a/docs/MindX SDK\345\270\270\350\247\201\351\227\256\351\242\230FAQ/img/sdk_faq3_0301.PNG" "b/mxVision/mxVision-docs/MindX SDK\345\270\270\350\247\201\351\227\256\351\242\230FAQ/img/sdk_faq3_0301.PNG"
similarity index 100%
rename from "docs/MindX SDK\345\270\270\350\247\201\351\227\256\351\242\230FAQ/img/sdk_faq3_0301.PNG"
rename to "mxVision/mxVision-docs/MindX SDK\345\270\270\350\247\201\351\227\256\351\242\230FAQ/img/sdk_faq3_0301.PNG"
diff --git "a/docs/MindX SDK\345\270\270\350\247\201\351\227\256\351\242\230FAQ/img/sdk_faq4.png" "b/mxVision/mxVision-docs/MindX SDK\345\270\270\350\247\201\351\227\256\351\242\230FAQ/img/sdk_faq4.png"
similarity index 100%
rename from "docs/MindX SDK\345\270\270\350\247\201\351\227\256\351\242\230FAQ/img/sdk_faq4.png"
rename to "mxVision/mxVision-docs/MindX SDK\345\270\270\350\247\201\351\227\256\351\242\230FAQ/img/sdk_faq4.png"
diff --git "a/docs/MindX SDK\345\270\270\350\247\201\351\227\256\351\242\230FAQ/img/sdk_faq4_0401.png" "b/mxVision/mxVision-docs/MindX SDK\345\270\270\350\247\201\351\227\256\351\242\230FAQ/img/sdk_faq4_0401.png"
similarity index 100%
rename from "docs/MindX SDK\345\270\270\350\247\201\351\227\256\351\242\230FAQ/img/sdk_faq4_0401.png"
rename to "mxVision/mxVision-docs/MindX SDK\345\270\270\350\247\201\351\227\256\351\242\230FAQ/img/sdk_faq4_0401.png"
diff --git "a/docs/MindX SDK\345\270\270\350\247\201\351\227\256\351\242\230FAQ/img/sdk_faq5.png" "b/mxVision/mxVision-docs/MindX SDK\345\270\270\350\247\201\351\227\256\351\242\230FAQ/img/sdk_faq5.png"
similarity index 100%
rename from "docs/MindX SDK\345\270\270\350\247\201\351\227\256\351\242\230FAQ/img/sdk_faq5.png"
rename to "mxVision/mxVision-docs/MindX SDK\345\270\270\350\247\201\351\227\256\351\242\230FAQ/img/sdk_faq5.png"
diff --git "a/docs/MindX SDK\345\270\270\350\247\201\351\227\256\351\242\230FAQ/img/sdk_faq6.png" "b/mxVision/mxVision-docs/MindX SDK\345\270\270\350\247\201\351\227\256\351\242\230FAQ/img/sdk_faq6.png"
similarity index 100%
rename from "docs/MindX SDK\345\270\270\350\247\201\351\227\256\351\242\230FAQ/img/sdk_faq6.png"
rename to "mxVision/mxVision-docs/MindX SDK\345\270\270\350\247\201\351\227\256\351\242\230FAQ/img/sdk_faq6.png"
diff --git "a/docs/MindX SDK\345\270\270\350\247\201\351\227\256\351\242\230FAQ/img/sdk_faq7.png" "b/mxVision/mxVision-docs/MindX SDK\345\270\270\350\247\201\351\227\256\351\242\230FAQ/img/sdk_faq7.png"
similarity index 100%
rename from "docs/MindX SDK\345\270\270\350\247\201\351\227\256\351\242\230FAQ/img/sdk_faq7.png"
rename to "mxVision/mxVision-docs/MindX SDK\345\270\270\350\247\201\351\227\256\351\242\230FAQ/img/sdk_faq7.png"
diff --git "a/docs/MindX SDK\345\270\270\350\247\201\351\227\256\351\242\230FAQ/img/sdk_faq8.png" "b/mxVision/mxVision-docs/MindX SDK\345\270\270\350\247\201\351\227\256\351\242\230FAQ/img/sdk_faq8.png"
similarity index 100%
rename from "docs/MindX SDK\345\270\270\350\247\201\351\227\256\351\242\230FAQ/img/sdk_faq8.png"
rename to "mxVision/mxVision-docs/MindX SDK\345\270\270\350\247\201\351\227\256\351\242\230FAQ/img/sdk_faq8.png"
diff --git "a/docs/MindX SDK\345\270\270\350\247\201\351\227\256\351\242\230FAQ/img/sdk_faq9.png" "b/mxVision/mxVision-docs/MindX SDK\345\270\270\350\247\201\351\227\256\351\242\230FAQ/img/sdk_faq9.png"
similarity index 100%
rename from "docs/MindX SDK\345\270\270\350\247\201\351\227\256\351\242\230FAQ/img/sdk_faq9.png"
rename to "mxVision/mxVision-docs/MindX SDK\345\270\270\350\247\201\351\227\256\351\242\230FAQ/img/sdk_faq9.png"
diff --git "a/docs/quickStart/1-1\345\256\211\350\243\205SDK\345\274\200\345\217\221\345\245\227\344\273\266.md" "b/mxVision/mxVision-docs/quickStart/1-1\345\256\211\350\243\205SDK\345\274\200\345\217\221\345\245\227\344\273\266.md"
similarity index 100%
rename from "docs/quickStart/1-1\345\256\211\350\243\205SDK\345\274\200\345\217\221\345\245\227\344\273\266.md"
rename to "mxVision/mxVision-docs/quickStart/1-1\345\256\211\350\243\205SDK\345\274\200\345\217\221\345\245\227\344\273\266.md"
diff --git "a/docs/quickStart/1-2IDE\345\274\200\345\217\221\347\216\257\345\242\203\346\220\255\345\273\272.md" "b/mxVision/mxVision-docs/quickStart/1-2IDE\345\274\200\345\217\221\347\216\257\345\242\203\346\220\255\345\273\272.md"
similarity index 100%
rename from "docs/quickStart/1-2IDE\345\274\200\345\217\221\347\216\257\345\242\203\346\220\255\345\273\272.md"
rename to "mxVision/mxVision-docs/quickStart/1-2IDE\345\274\200\345\217\221\347\216\257\345\242\203\346\220\255\345\273\272.md"
diff --git "a/docs/quickStart/1-3MindStuido\345\274\200\345\217\221\347\216\257\345\242\203\346\220\255\345\273\272.md" "b/mxVision/mxVision-docs/quickStart/1-3MindStuido\345\274\200\345\217\221\347\216\257\345\242\203\346\220\255\345\273\272.md"
similarity index 100%
rename from "docs/quickStart/1-3MindStuido\345\274\200\345\217\221\347\216\257\345\242\203\346\220\255\345\273\272.md"
rename to "mxVision/mxVision-docs/quickStart/1-3MindStuido\345\274\200\345\217\221\347\216\257\345\242\203\346\220\255\345\273\272.md"
diff --git "a/docs/quickStart/1-4MindX_SDK\346\200\273\344\275\223\347\273\223\346\236\204.md" "b/mxVision/mxVision-docs/quickStart/1-4MindX_SDK\346\200\273\344\275\223\347\273\223\346\236\204.md"
similarity index 100%
rename from "docs/quickStart/1-4MindX_SDK\346\200\273\344\275\223\347\273\223\346\236\204.md"
rename to "mxVision/mxVision-docs/quickStart/1-4MindX_SDK\346\200\273\344\275\223\347\273\223\346\236\204.md"
diff --git "a/docs/quickStart/2-1\345\233\276\345\203\217\346\243\200\346\265\213sample\346\240\267\344\276\213.md" "b/mxVision/mxVision-docs/quickStart/2-1\345\233\276\345\203\217\346\243\200\346\265\213sample\346\240\267\344\276\213.md"
similarity index 100%
rename from "docs/quickStart/2-1\345\233\276\345\203\217\346\243\200\346\265\213sample\346\240\267\344\276\213.md"
rename to "mxVision/mxVision-docs/quickStart/2-1\345\233\276\345\203\217\346\243\200\346\265\213sample\346\240\267\344\276\213.md"
diff --git "a/docs/quickStart/2-2\345\237\272\344\272\216MindStuido\347\232\204\345\233\276\345\203\217\346\243\200\346\265\213sample\346\240\267\344\276\213\350\277\220\350\241\214.md" "b/mxVision/mxVision-docs/quickStart/2-2\345\237\272\344\272\216MindStuido\347\232\204\345\233\276\345\203\217\346\243\200\346\265\213sample\346\240\267\344\276\213\350\277\220\350\241\214.md"
similarity index 100%
rename from "docs/quickStart/2-2\345\237\272\344\272\216MindStuido\347\232\204\345\233\276\345\203\217\346\243\200\346\265\213sample\346\240\267\344\276\213\350\277\220\350\241\214.md"
rename to "mxVision/mxVision-docs/quickStart/2-2\345\237\272\344\272\216MindStuido\347\232\204\345\233\276\345\203\217\346\243\200\346\265\213sample\346\240\267\344\276\213\350\277\220\350\241\214.md"
diff --git "a/docs/quickStart/2-3pipeline\350\276\223\345\205\245\350\276\223\345\207\272\346\223\215\344\275\234.md" "b/mxVision/mxVision-docs/quickStart/2-3pipeline\350\276\223\345\205\245\350\276\223\345\207\272\346\223\215\344\275\234.md"
similarity index 100%
rename from "docs/quickStart/2-3pipeline\350\276\223\345\205\245\350\276\223\345\207\272\346\223\215\344\275\234.md"
rename to "mxVision/mxVision-docs/quickStart/2-3pipeline\350\276\223\345\205\245\350\276\223\345\207\272\346\223\215\344\275\234.md"
diff --git "a/docs/quickStart/3-1\345\270\270\347\224\250\346\217\222\344\273\266\346\225\260\346\215\256\347\273\223\346\236\204.md" "b/mxVision/mxVision-docs/quickStart/3-1\345\270\270\347\224\250\346\217\222\344\273\266\346\225\260\346\215\256\347\273\223\346\236\204.md"
similarity index 100%
rename from "docs/quickStart/3-1\345\270\270\347\224\250\346\217\222\344\273\266\346\225\260\346\215\256\347\273\223\346\236\204.md"
rename to "mxVision/mxVision-docs/quickStart/3-1\345\270\270\347\224\250\346\217\222\344\273\266\346\225\260\346\215\256\347\273\223\346\236\204.md"
diff --git "a/docs/quickStart/4-1\346\217\222\344\273\266\345\274\200\345\217\221\350\260\203\350\257\225\346\214\207\345\257\274.md" "b/mxVision/mxVision-docs/quickStart/4-1\346\217\222\344\273\266\345\274\200\345\217\221\350\260\203\350\257\225\346\214\207\345\257\274.md"
similarity index 100%
rename from "docs/quickStart/4-1\346\217\222\344\273\266\345\274\200\345\217\221\350\260\203\350\257\225\346\214\207\345\257\274.md"
rename to "mxVision/mxVision-docs/quickStart/4-1\346\217\222\344\273\266\345\274\200\345\217\221\350\260\203\350\257\225\346\214\207\345\257\274.md"
diff --git "a/docs/quickStart/4-2\346\250\241\345\236\213\345\220\216\345\244\204\347\220\206\345\272\223(\345\206\205\347\275\256\347\261\273\345\236\213)\345\274\200\345\217\221\350\260\203\350\257\225\346\214\207\345\257\274.md" "b/mxVision/mxVision-docs/quickStart/4-2\346\250\241\345\236\213\345\220\216\345\244\204\347\220\206\345\272\223(\345\206\205\347\275\256\347\261\273\345\236\213)\345\274\200\345\217\221\350\260\203\350\257\225\346\214\207\345\257\274.md"
similarity index 100%
rename from "docs/quickStart/4-2\346\250\241\345\236\213\345\220\216\345\244\204\347\220\206\345\272\223(\345\206\205\347\275\256\347\261\273\345\236\213)\345\274\200\345\217\221\350\260\203\350\257\225\346\214\207\345\257\274.md"
rename to "mxVision/mxVision-docs/quickStart/4-2\346\250\241\345\236\213\345\220\216\345\244\204\347\220\206\345\272\223(\345\206\205\347\275\256\347\261\273\345\236\213)\345\274\200\345\217\221\350\260\203\350\257\225\346\214\207\345\257\274.md"
diff --git "a/docs/quickStart/4-3\346\214\202\350\275\275\350\207\252\345\256\232\344\271\211proto\347\273\223\346\236\204\344\275\223.md" "b/mxVision/mxVision-docs/quickStart/4-3\346\214\202\350\275\275\350\207\252\345\256\232\344\271\211proto\347\273\223\346\236\204\344\275\223.md"
similarity index 100%
rename from "docs/quickStart/4-3\346\214\202\350\275\275\350\207\252\345\256\232\344\271\211proto\347\273\223\346\236\204\344\275\223.md"
rename to "mxVision/mxVision-docs/quickStart/4-3\346\214\202\350\275\275\350\207\252\345\256\232\344\271\211proto\347\273\223\346\236\204\344\275\223.md"
diff --git "a/docs/quickStart/4-4\346\250\241\345\236\213Tensor\346\225\260\346\215\256\345\244\204\347\220\206&\350\207\252\345\256\232\344\271\211\346\250\241\345\236\213\345\220\216\345\244\204\347\220\206.md" "b/mxVision/mxVision-docs/quickStart/4-4\346\250\241\345\236\213Tensor\346\225\260\346\215\256\345\244\204\347\220\206&\350\207\252\345\256\232\344\271\211\346\250\241\345\236\213\345\220\216\345\244\204\347\220\206.md"
similarity index 100%
rename from "docs/quickStart/4-4\346\250\241\345\236\213Tensor\346\225\260\346\215\256\345\244\204\347\220\206&\350\207\252\345\256\232\344\271\211\346\250\241\345\236\213\345\220\216\345\244\204\347\220\206.md"
rename to "mxVision/mxVision-docs/quickStart/4-4\346\250\241\345\236\213Tensor\346\225\260\346\215\256\345\244\204\347\220\206&\350\207\252\345\256\232\344\271\211\346\250\241\345\236\213\345\220\216\345\244\204\347\220\206.md"
diff --git "a/docs/quickStart/Cmake\344\273\213\347\273\215.md" "b/mxVision/mxVision-docs/quickStart/Cmake\344\273\213\347\273\215.md"
similarity index 100%
rename from "docs/quickStart/Cmake\344\273\213\347\273\215.md"
rename to "mxVision/mxVision-docs/quickStart/Cmake\344\273\213\347\273\215.md"
diff --git a/docs/quickStart/README.md b/mxVision/mxVision-docs/quickStart/README.md
similarity index 100%
rename from docs/quickStart/README.md
rename to mxVision/mxVision-docs/quickStart/README.md
diff --git a/docs/quickStart/img/1621941610975.png b/mxVision/mxVision-docs/quickStart/img/1621941610975.png
similarity index 100%
rename from docs/quickStart/img/1621941610975.png
rename to mxVision/mxVision-docs/quickStart/img/1621941610975.png
diff --git a/docs/quickStart/img/1621941652252.png b/mxVision/mxVision-docs/quickStart/img/1621941652252.png
similarity index 100%
rename from docs/quickStart/img/1621941652252.png
rename to mxVision/mxVision-docs/quickStart/img/1621941652252.png
diff --git a/docs/quickStart/img/1622100078616.png b/mxVision/mxVision-docs/quickStart/img/1622100078616.png
similarity index 100%
rename from docs/quickStart/img/1622100078616.png
rename to mxVision/mxVision-docs/quickStart/img/1622100078616.png
diff --git a/docs/quickStart/img/1622101236396.png b/mxVision/mxVision-docs/quickStart/img/1622101236396.png
similarity index 100%
rename from docs/quickStart/img/1622101236396.png
rename to mxVision/mxVision-docs/quickStart/img/1622101236396.png
diff --git a/docs/quickStart/img/1622173570842.png b/mxVision/mxVision-docs/quickStart/img/1622173570842.png
similarity index 100%
rename from docs/quickStart/img/1622173570842.png
rename to mxVision/mxVision-docs/quickStart/img/1622173570842.png
diff --git a/docs/quickStart/img/1622259348404.png b/mxVision/mxVision-docs/quickStart/img/1622259348404.png
similarity index 100%
rename from docs/quickStart/img/1622259348404.png
rename to mxVision/mxVision-docs/quickStart/img/1622259348404.png
diff --git a/docs/quickStart/img/1622260210336.png b/mxVision/mxVision-docs/quickStart/img/1622260210336.png
similarity index 100%
rename from docs/quickStart/img/1622260210336.png
rename to mxVision/mxVision-docs/quickStart/img/1622260210336.png
diff --git a/docs/quickStart/img/1622260262069.png b/mxVision/mxVision-docs/quickStart/img/1622260262069.png
similarity index 100%
rename from docs/quickStart/img/1622260262069.png
rename to mxVision/mxVision-docs/quickStart/img/1622260262069.png
diff --git a/docs/quickStart/img/1622260504942.png b/mxVision/mxVision-docs/quickStart/img/1622260504942.png
similarity index 100%
rename from docs/quickStart/img/1622260504942.png
rename to mxVision/mxVision-docs/quickStart/img/1622260504942.png
diff --git a/docs/quickStart/img/1622260608712.png b/mxVision/mxVision-docs/quickStart/img/1622260608712.png
similarity index 100%
rename from docs/quickStart/img/1622260608712.png
rename to mxVision/mxVision-docs/quickStart/img/1622260608712.png
diff --git a/docs/quickStart/img/1622260762270.png b/mxVision/mxVision-docs/quickStart/img/1622260762270.png
similarity index 100%
rename from docs/quickStart/img/1622260762270.png
rename to mxVision/mxVision-docs/quickStart/img/1622260762270.png
diff --git a/docs/quickStart/img/1622260859278.png b/mxVision/mxVision-docs/quickStart/img/1622260859278.png
similarity index 100%
rename from docs/quickStart/img/1622260859278.png
rename to mxVision/mxVision-docs/quickStart/img/1622260859278.png
diff --git a/docs/quickStart/img/1622261006414.png b/mxVision/mxVision-docs/quickStart/img/1622261006414.png
similarity index 100%
rename from docs/quickStart/img/1622261006414.png
rename to mxVision/mxVision-docs/quickStart/img/1622261006414.png
diff --git a/docs/quickStart/img/1622261081407.png b/mxVision/mxVision-docs/quickStart/img/1622261081407.png
similarity index 100%
rename from docs/quickStart/img/1622261081407.png
rename to mxVision/mxVision-docs/quickStart/img/1622261081407.png
diff --git a/docs/quickStart/img/1622261209163.png b/mxVision/mxVision-docs/quickStart/img/1622261209163.png
similarity index 100%
rename from docs/quickStart/img/1622261209163.png
rename to mxVision/mxVision-docs/quickStart/img/1622261209163.png
diff --git a/docs/quickStart/img/1622265242971.png b/mxVision/mxVision-docs/quickStart/img/1622265242971.png
similarity index 100%
rename from docs/quickStart/img/1622265242971.png
rename to mxVision/mxVision-docs/quickStart/img/1622265242971.png
diff --git a/docs/quickStart/img/1622518642593.png b/mxVision/mxVision-docs/quickStart/img/1622518642593.png
similarity index 100%
rename from docs/quickStart/img/1622518642593.png
rename to mxVision/mxVision-docs/quickStart/img/1622518642593.png
diff --git a/docs/quickStart/img/1622528329436.png b/mxVision/mxVision-docs/quickStart/img/1622528329436.png
similarity index 100%
rename from docs/quickStart/img/1622528329436.png
rename to mxVision/mxVision-docs/quickStart/img/1622528329436.png
diff --git a/docs/quickStart/img/1623207353906.png b/mxVision/mxVision-docs/quickStart/img/1623207353906.png
similarity index 100%
rename from docs/quickStart/img/1623207353906.png
rename to mxVision/mxVision-docs/quickStart/img/1623207353906.png
diff --git a/docs/quickStart/img/1623207817848.png b/mxVision/mxVision-docs/quickStart/img/1623207817848.png
similarity index 100%
rename from docs/quickStart/img/1623207817848.png
rename to mxVision/mxVision-docs/quickStart/img/1623207817848.png
diff --git a/docs/quickStart/img/1623219436881.png b/mxVision/mxVision-docs/quickStart/img/1623219436881.png
similarity index 100%
rename from docs/quickStart/img/1623219436881.png
rename to mxVision/mxVision-docs/quickStart/img/1623219436881.png
diff --git a/docs/quickStart/img/1623220168055.png b/mxVision/mxVision-docs/quickStart/img/1623220168055.png
similarity index 100%
rename from docs/quickStart/img/1623220168055.png
rename to mxVision/mxVision-docs/quickStart/img/1623220168055.png
diff --git a/docs/quickStart/img/1623220230223.png b/mxVision/mxVision-docs/quickStart/img/1623220230223.png
similarity index 100%
rename from docs/quickStart/img/1623220230223.png
rename to mxVision/mxVision-docs/quickStart/img/1623220230223.png
diff --git a/docs/quickStart/img/1623220694823.png b/mxVision/mxVision-docs/quickStart/img/1623220694823.png
similarity index 100%
rename from docs/quickStart/img/1623220694823.png
rename to mxVision/mxVision-docs/quickStart/img/1623220694823.png
diff --git a/docs/quickStart/img/1623220827671.png b/mxVision/mxVision-docs/quickStart/img/1623220827671.png
similarity index 100%
rename from docs/quickStart/img/1623220827671.png
rename to mxVision/mxVision-docs/quickStart/img/1623220827671.png
diff --git a/docs/quickStart/img/1623221233288.png b/mxVision/mxVision-docs/quickStart/img/1623221233288.png
similarity index 100%
rename from docs/quickStart/img/1623221233288.png
rename to mxVision/mxVision-docs/quickStart/img/1623221233288.png
diff --git a/docs/quickStart/img/1623221481373.png b/mxVision/mxVision-docs/quickStart/img/1623221481373.png
similarity index 100%
rename from docs/quickStart/img/1623221481373.png
rename to mxVision/mxVision-docs/quickStart/img/1623221481373.png
diff --git a/docs/quickStart/img/1623221646773.png b/mxVision/mxVision-docs/quickStart/img/1623221646773.png
similarity index 100%
rename from docs/quickStart/img/1623221646773.png
rename to mxVision/mxVision-docs/quickStart/img/1623221646773.png
diff --git a/docs/quickStart/img/1623224745826.png b/mxVision/mxVision-docs/quickStart/img/1623224745826.png
similarity index 100%
rename from docs/quickStart/img/1623224745826.png
rename to mxVision/mxVision-docs/quickStart/img/1623224745826.png
diff --git a/docs/quickStart/img/1623229532350.png b/mxVision/mxVision-docs/quickStart/img/1623229532350.png
similarity index 100%
rename from docs/quickStart/img/1623229532350.png
rename to mxVision/mxVision-docs/quickStart/img/1623229532350.png
diff --git a/docs/quickStart/img/1623231415247.png b/mxVision/mxVision-docs/quickStart/img/1623231415247.png
similarity index 100%
rename from docs/quickStart/img/1623231415247.png
rename to mxVision/mxVision-docs/quickStart/img/1623231415247.png
diff --git a/docs/quickStart/img/1623231423039.png b/mxVision/mxVision-docs/quickStart/img/1623231423039.png
similarity index 100%
rename from docs/quickStart/img/1623231423039.png
rename to mxVision/mxVision-docs/quickStart/img/1623231423039.png
diff --git a/docs/quickStart/img/1623231850273.png b/mxVision/mxVision-docs/quickStart/img/1623231850273.png
similarity index 100%
rename from docs/quickStart/img/1623231850273.png
rename to mxVision/mxVision-docs/quickStart/img/1623231850273.png
diff --git a/docs/quickStart/img/1623232978995.png b/mxVision/mxVision-docs/quickStart/img/1623232978995.png
similarity index 100%
rename from docs/quickStart/img/1623232978995.png
rename to mxVision/mxVision-docs/quickStart/img/1623232978995.png
diff --git a/docs/quickStart/img/1623236074913.png b/mxVision/mxVision-docs/quickStart/img/1623236074913.png
similarity index 100%
rename from docs/quickStart/img/1623236074913.png
rename to mxVision/mxVision-docs/quickStart/img/1623236074913.png
diff --git a/docs/quickStart/img/1623236387023.png b/mxVision/mxVision-docs/quickStart/img/1623236387023.png
similarity index 100%
rename from docs/quickStart/img/1623236387023.png
rename to mxVision/mxVision-docs/quickStart/img/1623236387023.png
diff --git a/docs/quickStart/img/1623309211218.png b/mxVision/mxVision-docs/quickStart/img/1623309211218.png
similarity index 100%
rename from docs/quickStart/img/1623309211218.png
rename to mxVision/mxVision-docs/quickStart/img/1623309211218.png
diff --git a/docs/quickStart/img/1623309361995.png b/mxVision/mxVision-docs/quickStart/img/1623309361995.png
similarity index 100%
rename from docs/quickStart/img/1623309361995.png
rename to mxVision/mxVision-docs/quickStart/img/1623309361995.png
diff --git a/docs/quickStart/img/1623315719375.png b/mxVision/mxVision-docs/quickStart/img/1623315719375.png
similarity index 100%
rename from docs/quickStart/img/1623315719375.png
rename to mxVision/mxVision-docs/quickStart/img/1623315719375.png
diff --git a/docs/quickStart/img/1623315806818.png b/mxVision/mxVision-docs/quickStart/img/1623315806818.png
similarity index 100%
rename from docs/quickStart/img/1623315806818.png
rename to mxVision/mxVision-docs/quickStart/img/1623315806818.png
diff --git a/docs/quickStart/img/1623316129521.png b/mxVision/mxVision-docs/quickStart/img/1623316129521.png
similarity index 100%
rename from docs/quickStart/img/1623316129521.png
rename to mxVision/mxVision-docs/quickStart/img/1623316129521.png
diff --git a/docs/quickStart/img/1623316215637.png b/mxVision/mxVision-docs/quickStart/img/1623316215637.png
similarity index 100%
rename from docs/quickStart/img/1623316215637.png
rename to mxVision/mxVision-docs/quickStart/img/1623316215637.png
diff --git a/docs/quickStart/img/1623316788931.png b/mxVision/mxVision-docs/quickStart/img/1623316788931.png
similarity index 100%
rename from docs/quickStart/img/1623316788931.png
rename to mxVision/mxVision-docs/quickStart/img/1623316788931.png
diff --git a/docs/quickStart/img/1623316885642.png b/mxVision/mxVision-docs/quickStart/img/1623316885642.png
similarity index 100%
rename from docs/quickStart/img/1623316885642.png
rename to mxVision/mxVision-docs/quickStart/img/1623316885642.png
diff --git a/docs/quickStart/img/1623324184025.png b/mxVision/mxVision-docs/quickStart/img/1623324184025.png
similarity index 100%
rename from docs/quickStart/img/1623324184025.png
rename to mxVision/mxVision-docs/quickStart/img/1623324184025.png
diff --git a/docs/quickStart/img/1623382648767.png b/mxVision/mxVision-docs/quickStart/img/1623382648767.png
similarity index 100%
rename from docs/quickStart/img/1623382648767.png
rename to mxVision/mxVision-docs/quickStart/img/1623382648767.png
diff --git a/docs/quickStart/img/1623382869487.png b/mxVision/mxVision-docs/quickStart/img/1623382869487.png
similarity index 100%
rename from docs/quickStart/img/1623382869487.png
rename to mxVision/mxVision-docs/quickStart/img/1623382869487.png
diff --git a/docs/quickStart/img/1623388882981.png b/mxVision/mxVision-docs/quickStart/img/1623388882981.png
similarity index 100%
rename from docs/quickStart/img/1623388882981.png
rename to mxVision/mxVision-docs/quickStart/img/1623388882981.png
diff --git a/docs/quickStart/img/1623389741249.png b/mxVision/mxVision-docs/quickStart/img/1623389741249.png
similarity index 100%
rename from docs/quickStart/img/1623389741249.png
rename to mxVision/mxVision-docs/quickStart/img/1623389741249.png
diff --git a/docs/quickStart/img/1623394204432.png b/mxVision/mxVision-docs/quickStart/img/1623394204432.png
similarity index 100%
rename from docs/quickStart/img/1623394204432.png
rename to mxVision/mxVision-docs/quickStart/img/1623394204432.png
diff --git a/docs/quickStart/img/1623396772171.png b/mxVision/mxVision-docs/quickStart/img/1623396772171.png
similarity index 100%
rename from docs/quickStart/img/1623396772171.png
rename to mxVision/mxVision-docs/quickStart/img/1623396772171.png
diff --git a/docs/quickStart/img/1623397820744.png b/mxVision/mxVision-docs/quickStart/img/1623397820744.png
similarity index 100%
rename from docs/quickStart/img/1623397820744.png
rename to mxVision/mxVision-docs/quickStart/img/1623397820744.png
diff --git a/docs/quickStart/img/1623400877163.png b/mxVision/mxVision-docs/quickStart/img/1623400877163.png
similarity index 100%
rename from docs/quickStart/img/1623400877163.png
rename to mxVision/mxVision-docs/quickStart/img/1623400877163.png
diff --git a/docs/quickStart/img/1623400960176.png b/mxVision/mxVision-docs/quickStart/img/1623400960176.png
similarity index 100%
rename from docs/quickStart/img/1623400960176.png
rename to mxVision/mxVision-docs/quickStart/img/1623400960176.png
diff --git a/docs/quickStart/img/1623401056998.png b/mxVision/mxVision-docs/quickStart/img/1623401056998.png
similarity index 100%
rename from docs/quickStart/img/1623401056998.png
rename to mxVision/mxVision-docs/quickStart/img/1623401056998.png
diff --git a/docs/quickStart/img/1623401140421.png b/mxVision/mxVision-docs/quickStart/img/1623401140421.png
similarity index 100%
rename from docs/quickStart/img/1623401140421.png
rename to mxVision/mxVision-docs/quickStart/img/1623401140421.png
diff --git a/docs/quickStart/img/1623401358361.png b/mxVision/mxVision-docs/quickStart/img/1623401358361.png
similarity index 100%
rename from docs/quickStart/img/1623401358361.png
rename to mxVision/mxVision-docs/quickStart/img/1623401358361.png
diff --git a/docs/quickStart/img/1623401511515.png b/mxVision/mxVision-docs/quickStart/img/1623401511515.png
similarity index 100%
rename from docs/quickStart/img/1623401511515.png
rename to mxVision/mxVision-docs/quickStart/img/1623401511515.png
diff --git a/docs/quickStart/img/1623401675765.png b/mxVision/mxVision-docs/quickStart/img/1623401675765.png
similarity index 100%
rename from docs/quickStart/img/1623401675765.png
rename to mxVision/mxVision-docs/quickStart/img/1623401675765.png
diff --git a/docs/quickStart/img/1623401832093.png b/mxVision/mxVision-docs/quickStart/img/1623401832093.png
similarity index 100%
rename from docs/quickStart/img/1623401832093.png
rename to mxVision/mxVision-docs/quickStart/img/1623401832093.png
diff --git a/docs/quickStart/img/1623401950254.png b/mxVision/mxVision-docs/quickStart/img/1623401950254.png
similarity index 100%
rename from docs/quickStart/img/1623401950254.png
rename to mxVision/mxVision-docs/quickStart/img/1623401950254.png
diff --git a/docs/quickStart/img/1623402187826.png b/mxVision/mxVision-docs/quickStart/img/1623402187826.png
similarity index 100%
rename from docs/quickStart/img/1623402187826.png
rename to mxVision/mxVision-docs/quickStart/img/1623402187826.png
diff --git a/docs/quickStart/img/1623402363018.png b/mxVision/mxVision-docs/quickStart/img/1623402363018.png
similarity index 100%
rename from docs/quickStart/img/1623402363018.png
rename to mxVision/mxVision-docs/quickStart/img/1623402363018.png
diff --git a/docs/quickStart/img/1623405875582.png b/mxVision/mxVision-docs/quickStart/img/1623405875582.png
similarity index 100%
rename from docs/quickStart/img/1623405875582.png
rename to mxVision/mxVision-docs/quickStart/img/1623405875582.png
diff --git a/docs/quickStart/img/1623406024516.png b/mxVision/mxVision-docs/quickStart/img/1623406024516.png
similarity index 100%
rename from docs/quickStart/img/1623406024516.png
rename to mxVision/mxVision-docs/quickStart/img/1623406024516.png
diff --git a/docs/quickStart/img/1623406092822.png b/mxVision/mxVision-docs/quickStart/img/1623406092822.png
similarity index 100%
rename from docs/quickStart/img/1623406092822.png
rename to mxVision/mxVision-docs/quickStart/img/1623406092822.png
diff --git a/docs/quickStart/img/1623748262037.png b/mxVision/mxVision-docs/quickStart/img/1623748262037.png
similarity index 100%
rename from docs/quickStart/img/1623748262037.png
rename to mxVision/mxVision-docs/quickStart/img/1623748262037.png
diff --git a/docs/quickStart/img/1623748753909.png b/mxVision/mxVision-docs/quickStart/img/1623748753909.png
similarity index 100%
rename from docs/quickStart/img/1623748753909.png
rename to mxVision/mxVision-docs/quickStart/img/1623748753909.png
diff --git a/docs/quickStart/img/1623748862495.png b/mxVision/mxVision-docs/quickStart/img/1623748862495.png
similarity index 100%
rename from docs/quickStart/img/1623748862495.png
rename to mxVision/mxVision-docs/quickStart/img/1623748862495.png
diff --git a/docs/quickStart/img/1623749109243.png b/mxVision/mxVision-docs/quickStart/img/1623749109243.png
similarity index 100%
rename from docs/quickStart/img/1623749109243.png
rename to mxVision/mxVision-docs/quickStart/img/1623749109243.png
diff --git a/docs/quickStart/img/1623749180481.png b/mxVision/mxVision-docs/quickStart/img/1623749180481.png
similarity index 100%
rename from docs/quickStart/img/1623749180481.png
rename to mxVision/mxVision-docs/quickStart/img/1623749180481.png
diff --git a/docs/quickStart/img/1623749192860.png b/mxVision/mxVision-docs/quickStart/img/1623749192860.png
similarity index 100%
rename from docs/quickStart/img/1623749192860.png
rename to mxVision/mxVision-docs/quickStart/img/1623749192860.png
diff --git a/docs/quickStart/img/1623749645831.png b/mxVision/mxVision-docs/quickStart/img/1623749645831.png
similarity index 100%
rename from docs/quickStart/img/1623749645831.png
rename to mxVision/mxVision-docs/quickStart/img/1623749645831.png
diff --git a/docs/quickStart/img/1623749681297.png b/mxVision/mxVision-docs/quickStart/img/1623749681297.png
similarity index 100%
rename from docs/quickStart/img/1623749681297.png
rename to mxVision/mxVision-docs/quickStart/img/1623749681297.png
diff --git a/docs/quickStart/img/1623755172684.png b/mxVision/mxVision-docs/quickStart/img/1623755172684.png
similarity index 100%
rename from docs/quickStart/img/1623755172684.png
rename to mxVision/mxVision-docs/quickStart/img/1623755172684.png
diff --git a/docs/quickStart/img/1623757765892.png b/mxVision/mxVision-docs/quickStart/img/1623757765892.png
similarity index 100%
rename from docs/quickStart/img/1623757765892.png
rename to mxVision/mxVision-docs/quickStart/img/1623757765892.png
diff --git a/docs/quickStart/img/1623758745148.png b/mxVision/mxVision-docs/quickStart/img/1623758745148.png
similarity index 100%
rename from docs/quickStart/img/1623758745148.png
rename to mxVision/mxVision-docs/quickStart/img/1623758745148.png
diff --git a/docs/quickStart/img/1623835106290.png b/mxVision/mxVision-docs/quickStart/img/1623835106290.png
similarity index 100%
rename from docs/quickStart/img/1623835106290.png
rename to mxVision/mxVision-docs/quickStart/img/1623835106290.png
diff --git a/docs/quickStart/img/1624007242093.png b/mxVision/mxVision-docs/quickStart/img/1624007242093.png
similarity index 100%
rename from docs/quickStart/img/1624007242093.png
rename to mxVision/mxVision-docs/quickStart/img/1624007242093.png
diff --git a/docs/quickStart/img/20210712140926.png b/mxVision/mxVision-docs/quickStart/img/20210712140926.png
similarity index 100%
rename from docs/quickStart/img/20210712140926.png
rename to mxVision/mxVision-docs/quickStart/img/20210712140926.png
diff --git a/docs/quickStart/img/20210712141205.png b/mxVision/mxVision-docs/quickStart/img/20210712141205.png
similarity index 100%
rename from docs/quickStart/img/20210712141205.png
rename to mxVision/mxVision-docs/quickStart/img/20210712141205.png
diff --git a/docs/quickStart/img/20210712141316.png b/mxVision/mxVision-docs/quickStart/img/20210712141316.png
similarity index 100%
rename from docs/quickStart/img/20210712141316.png
rename to mxVision/mxVision-docs/quickStart/img/20210712141316.png
diff --git a/docs/quickStart/img/20210712150707.png b/mxVision/mxVision-docs/quickStart/img/20210712150707.png
similarity index 100%
rename from docs/quickStart/img/20210712150707.png
rename to mxVision/mxVision-docs/quickStart/img/20210712150707.png
diff --git a/docs/quickStart/img/202107131357.png b/mxVision/mxVision-docs/quickStart/img/202107131357.png
similarity index 100%
rename from docs/quickStart/img/202107131357.png
rename to mxVision/mxVision-docs/quickStart/img/202107131357.png
diff --git a/docs/quickStart/img/202107131607.png b/mxVision/mxVision-docs/quickStart/img/202107131607.png
similarity index 100%
rename from docs/quickStart/img/202107131607.png
rename to mxVision/mxVision-docs/quickStart/img/202107131607.png
diff --git a/docs/quickStart/img/202107131627.png b/mxVision/mxVision-docs/quickStart/img/202107131627.png
similarity index 100%
rename from docs/quickStart/img/202107131627.png
rename to mxVision/mxVision-docs/quickStart/img/202107131627.png
diff --git a/docs/quickStart/img/202107131630.png b/mxVision/mxVision-docs/quickStart/img/202107131630.png
similarity index 100%
rename from docs/quickStart/img/202107131630.png
rename to mxVision/mxVision-docs/quickStart/img/202107131630.png
diff --git a/docs/quickStart/img/202107131639.png b/mxVision/mxVision-docs/quickStart/img/202107131639.png
similarity index 100%
rename from docs/quickStart/img/202107131639.png
rename to mxVision/mxVision-docs/quickStart/img/202107131639.png
diff --git a/docs/quickStart/img/202107131650.png b/mxVision/mxVision-docs/quickStart/img/202107131650.png
similarity index 100%
rename from docs/quickStart/img/202107131650.png
rename to mxVision/mxVision-docs/quickStart/img/202107131650.png
diff --git a/docs/quickStart/img/202107201810.png b/mxVision/mxVision-docs/quickStart/img/202107201810.png
similarity index 100%
rename from docs/quickStart/img/202107201810.png
rename to mxVision/mxVision-docs/quickStart/img/202107201810.png
diff --git a/docs/quickStart/img/202107201812.png b/mxVision/mxVision-docs/quickStart/img/202107201812.png
similarity index 100%
rename from docs/quickStart/img/202107201812.png
rename to mxVision/mxVision-docs/quickStart/img/202107201812.png
diff --git a/docs/quickStart/img/202107201822.png b/mxVision/mxVision-docs/quickStart/img/202107201822.png
similarity index 100%
rename from docs/quickStart/img/202107201822.png
rename to mxVision/mxVision-docs/quickStart/img/202107201822.png
diff --git a/docs/quickStart/img/202107301524.png b/mxVision/mxVision-docs/quickStart/img/202107301524.png
similarity index 100%
rename from docs/quickStart/img/202107301524.png
rename to mxVision/mxVision-docs/quickStart/img/202107301524.png
diff --git a/docs/quickStart/img/image-20210807143650502.png b/mxVision/mxVision-docs/quickStart/img/image-20210807143650502.png
similarity index 100%
rename from docs/quickStart/img/image-20210807143650502.png
rename to mxVision/mxVision-docs/quickStart/img/image-20210807143650502.png
diff --git a/docs/quickStart/img/image-20210807143734419.png b/mxVision/mxVision-docs/quickStart/img/image-20210807143734419.png
similarity index 100%
rename from docs/quickStart/img/image-20210807143734419.png
rename to mxVision/mxVision-docs/quickStart/img/image-20210807143734419.png
diff --git a/docs/quickStart/img/image-20210807144231093.png b/mxVision/mxVision-docs/quickStart/img/image-20210807144231093.png
similarity index 100%
rename from docs/quickStart/img/image-20210807144231093.png
rename to mxVision/mxVision-docs/quickStart/img/image-20210807144231093.png
diff --git a/docs/quickStart/img/image-20210807150419270.png b/mxVision/mxVision-docs/quickStart/img/image-20210807150419270.png
similarity index 100%
rename from docs/quickStart/img/image-20210807150419270.png
rename to mxVision/mxVision-docs/quickStart/img/image-20210807150419270.png
diff --git a/docs/quickStart/img/image-20210807150609493.png b/mxVision/mxVision-docs/quickStart/img/image-20210807150609493.png
similarity index 100%
rename from docs/quickStart/img/image-20210807150609493.png
rename to mxVision/mxVision-docs/quickStart/img/image-20210807150609493.png
diff --git a/docs/quickStart/img/image-20210807150729221.png b/mxVision/mxVision-docs/quickStart/img/image-20210807150729221.png
similarity index 100%
rename from docs/quickStart/img/image-20210807150729221.png
rename to mxVision/mxVision-docs/quickStart/img/image-20210807150729221.png
diff --git a/docs/quickStart/img/image-20210807150814717.png b/mxVision/mxVision-docs/quickStart/img/image-20210807150814717.png
similarity index 100%
rename from docs/quickStart/img/image-20210807150814717.png
rename to mxVision/mxVision-docs/quickStart/img/image-20210807150814717.png
diff --git a/docs/quickStart/img/image-20210807151651100.png b/mxVision/mxVision-docs/quickStart/img/image-20210807151651100.png
similarity index 100%
rename from docs/quickStart/img/image-20210807151651100.png
rename to mxVision/mxVision-docs/quickStart/img/image-20210807151651100.png
diff --git a/docs/quickStart/img/image-20210807160036149.png b/mxVision/mxVision-docs/quickStart/img/image-20210807160036149.png
similarity index 100%
rename from docs/quickStart/img/image-20210807160036149.png
rename to mxVision/mxVision-docs/quickStart/img/image-20210807160036149.png
diff --git a/docs/quickStart/img/image-20210807160223885.png b/mxVision/mxVision-docs/quickStart/img/image-20210807160223885.png
similarity index 100%
rename from docs/quickStart/img/image-20210807160223885.png
rename to mxVision/mxVision-docs/quickStart/img/image-20210807160223885.png
diff --git a/docs/quickStart/img/image-20210807161825690.png b/mxVision/mxVision-docs/quickStart/img/image-20210807161825690.png
similarity index 100%
rename from docs/quickStart/img/image-20210807161825690.png
rename to mxVision/mxVision-docs/quickStart/img/image-20210807161825690.png
diff --git a/docs/quickStart/img/image-20210809112042341.png b/mxVision/mxVision-docs/quickStart/img/image-20210809112042341.png
similarity index 100%
rename from docs/quickStart/img/image-20210809112042341.png
rename to mxVision/mxVision-docs/quickStart/img/image-20210809112042341.png
diff --git a/docs/quickStart/img/image-20210809113131480.png b/mxVision/mxVision-docs/quickStart/img/image-20210809113131480.png
similarity index 100%
rename from docs/quickStart/img/image-20210809113131480.png
rename to mxVision/mxVision-docs/quickStart/img/image-20210809113131480.png
diff --git a/docs/quickStart/img/image-20210809134942739.png b/mxVision/mxVision-docs/quickStart/img/image-20210809134942739.png
similarity index 100%
rename from docs/quickStart/img/image-20210809134942739.png
rename to mxVision/mxVision-docs/quickStart/img/image-20210809134942739.png
diff --git a/docs/quickStart/img/image-20210809135333301.png b/mxVision/mxVision-docs/quickStart/img/image-20210809135333301.png
similarity index 100%
rename from docs/quickStart/img/image-20210809135333301.png
rename to mxVision/mxVision-docs/quickStart/img/image-20210809135333301.png
diff --git a/docs/quickStart/img/image-20210809140143447.png b/mxVision/mxVision-docs/quickStart/img/image-20210809140143447.png
similarity index 100%
rename from docs/quickStart/img/image-20210809140143447.png
rename to mxVision/mxVision-docs/quickStart/img/image-20210809140143447.png
diff --git a/docs/quickStart/img/image-20210809143920966.png b/mxVision/mxVision-docs/quickStart/img/image-20210809143920966.png
similarity index 100%
rename from docs/quickStart/img/image-20210809143920966.png
rename to mxVision/mxVision-docs/quickStart/img/image-20210809143920966.png
diff --git a/docs/quickStart/img/image-20210809144421207.png b/mxVision/mxVision-docs/quickStart/img/image-20210809144421207.png
similarity index 100%
rename from docs/quickStart/img/image-20210809144421207.png
rename to mxVision/mxVision-docs/quickStart/img/image-20210809144421207.png
diff --git a/docs/quickStart/img/image-20210809144444572.png b/mxVision/mxVision-docs/quickStart/img/image-20210809144444572.png
similarity index 100%
rename from docs/quickStart/img/image-20210809144444572.png
rename to mxVision/mxVision-docs/quickStart/img/image-20210809144444572.png
diff --git a/docs/quickStart/img/image-20210809150026824.png b/mxVision/mxVision-docs/quickStart/img/image-20210809150026824.png
similarity index 100%
rename from docs/quickStart/img/image-20210809150026824.png
rename to mxVision/mxVision-docs/quickStart/img/image-20210809150026824.png
diff --git a/docs/quickStart/img/image-20210817155551060.png b/mxVision/mxVision-docs/quickStart/img/image-20210817155551060.png
similarity index 100%
rename from docs/quickStart/img/image-20210817155551060.png
rename to mxVision/mxVision-docs/quickStart/img/image-20210817155551060.png
diff --git a/docs/quickStart/img/image-20210817155743052.png b/mxVision/mxVision-docs/quickStart/img/image-20210817155743052.png
similarity index 100%
rename from docs/quickStart/img/image-20210817155743052.png
rename to mxVision/mxVision-docs/quickStart/img/image-20210817155743052.png
diff --git a/docs/quickStart/img/image-20210817160921286.png b/mxVision/mxVision-docs/quickStart/img/image-20210817160921286.png
similarity index 100%
rename from docs/quickStart/img/image-20210817160921286.png
rename to mxVision/mxVision-docs/quickStart/img/image-20210817160921286.png
diff --git a/docs/quickStart/img/image-20210817161340889.png b/mxVision/mxVision-docs/quickStart/img/image-20210817161340889.png
similarity index 100%
rename from docs/quickStart/img/image-20210817161340889.png
rename to mxVision/mxVision-docs/quickStart/img/image-20210817161340889.png
diff --git a/docs/quickStart/img/image-20210817162301001.png b/mxVision/mxVision-docs/quickStart/img/image-20210817162301001.png
similarity index 100%
rename from docs/quickStart/img/image-20210817162301001.png
rename to mxVision/mxVision-docs/quickStart/img/image-20210817162301001.png
diff --git a/docs/quickStart/img/image-20210817164058460.png b/mxVision/mxVision-docs/quickStart/img/image-20210817164058460.png
similarity index 100%
rename from docs/quickStart/img/image-20210817164058460.png
rename to mxVision/mxVision-docs/quickStart/img/image-20210817164058460.png
diff --git a/docs/quickStart/img/zh-cn_image_0000001180516897.png b/mxVision/mxVision-docs/quickStart/img/zh-cn_image_0000001180516897.png
similarity index 100%
rename from docs/quickStart/img/zh-cn_image_0000001180516897.png
rename to mxVision/mxVision-docs/quickStart/img/zh-cn_image_0000001180516897.png
diff --git a/docs/quickStart/img/zh-cn_image_0000001180517207.png b/mxVision/mxVision-docs/quickStart/img/zh-cn_image_0000001180517207.png
similarity index 100%
rename from docs/quickStart/img/zh-cn_image_0000001180517207.png
rename to mxVision/mxVision-docs/quickStart/img/zh-cn_image_0000001180517207.png
diff --git a/docs/quickStart/img/zh-cn_image_0000001180637129.png b/mxVision/mxVision-docs/quickStart/img/zh-cn_image_0000001180637129.png
similarity index 100%
rename from docs/quickStart/img/zh-cn_image_0000001180637129.png
rename to mxVision/mxVision-docs/quickStart/img/zh-cn_image_0000001180637129.png
diff --git "a/docs/\344\273\243\347\240\201\346\217\220\344\272\244\346\214\207\345\257\274/img/20210713102926.png" "b/mxVision/mxVision-docs/\344\273\243\347\240\201\346\217\220\344\272\244\346\214\207\345\257\274/img/20210713102926.png"
similarity index 100%
rename from "docs/\344\273\243\347\240\201\346\217\220\344\272\244\346\214\207\345\257\274/img/20210713102926.png"
rename to "mxVision/mxVision-docs/\344\273\243\347\240\201\346\217\220\344\272\244\346\214\207\345\257\274/img/20210713102926.png"
diff --git "a/docs/\344\273\243\347\240\201\346\217\220\344\272\244\346\214\207\345\257\274/img/20210713103139.png" "b/mxVision/mxVision-docs/\344\273\243\347\240\201\346\217\220\344\272\244\346\214\207\345\257\274/img/20210713103139.png"
similarity index 100%
rename from "docs/\344\273\243\347\240\201\346\217\220\344\272\244\346\214\207\345\257\274/img/20210713103139.png"
rename to "mxVision/mxVision-docs/\344\273\243\347\240\201\346\217\220\344\272\244\346\214\207\345\257\274/img/20210713103139.png"
diff --git "a/docs/\344\273\243\347\240\201\346\217\220\344\272\244\346\214\207\345\257\274/img/20210713103500.png" "b/mxVision/mxVision-docs/\344\273\243\347\240\201\346\217\220\344\272\244\346\214\207\345\257\274/img/20210713103500.png"
similarity index 100%
rename from "docs/\344\273\243\347\240\201\346\217\220\344\272\244\346\214\207\345\257\274/img/20210713103500.png"
rename to "mxVision/mxVision-docs/\344\273\243\347\240\201\346\217\220\344\272\244\346\214\207\345\257\274/img/20210713103500.png"
diff --git "a/docs/\344\273\243\347\240\201\346\217\220\344\272\244\346\214\207\345\257\274/img/20210713104656.png" "b/mxVision/mxVision-docs/\344\273\243\347\240\201\346\217\220\344\272\244\346\214\207\345\257\274/img/20210713104656.png"
similarity index 100%
rename from "docs/\344\273\243\347\240\201\346\217\220\344\272\244\346\214\207\345\257\274/img/20210713104656.png"
rename to "mxVision/mxVision-docs/\344\273\243\347\240\201\346\217\220\344\272\244\346\214\207\345\257\274/img/20210713104656.png"
diff --git "a/docs/\344\273\243\347\240\201\346\217\220\344\272\244\346\214\207\345\257\274/img/20210713110507.png" "b/mxVision/mxVision-docs/\344\273\243\347\240\201\346\217\220\344\272\244\346\214\207\345\257\274/img/20210713110507.png"
similarity index 100%
rename from "docs/\344\273\243\347\240\201\346\217\220\344\272\244\346\214\207\345\257\274/img/20210713110507.png"
rename to "mxVision/mxVision-docs/\344\273\243\347\240\201\346\217\220\344\272\244\346\214\207\345\257\274/img/20210713110507.png"
diff --git "a/docs/\344\273\243\347\240\201\346\217\220\344\272\244\346\214\207\345\257\274/img/20210713111548.png" "b/mxVision/mxVision-docs/\344\273\243\347\240\201\346\217\220\344\272\244\346\214\207\345\257\274/img/20210713111548.png"
similarity index 100%
rename from "docs/\344\273\243\347\240\201\346\217\220\344\272\244\346\214\207\345\257\274/img/20210713111548.png"
rename to "mxVision/mxVision-docs/\344\273\243\347\240\201\346\217\220\344\272\244\346\214\207\345\257\274/img/20210713111548.png"
diff --git "a/docs/\344\273\243\347\240\201\346\217\220\344\272\244\346\214\207\345\257\274/img/20210713111921.png" "b/mxVision/mxVision-docs/\344\273\243\347\240\201\346\217\220\344\272\244\346\214\207\345\257\274/img/20210713111921.png"
similarity index 100%
rename from "docs/\344\273\243\347\240\201\346\217\220\344\272\244\346\214\207\345\257\274/img/20210713111921.png"
rename to "mxVision/mxVision-docs/\344\273\243\347\240\201\346\217\220\344\272\244\346\214\207\345\257\274/img/20210713111921.png"
diff --git "a/docs/\344\273\243\347\240\201\346\217\220\344\272\244\346\214\207\345\257\274/img/20210713112316.png" "b/mxVision/mxVision-docs/\344\273\243\347\240\201\346\217\220\344\272\244\346\214\207\345\257\274/img/20210713112316.png"
similarity index 100%
rename from "docs/\344\273\243\347\240\201\346\217\220\344\272\244\346\214\207\345\257\274/img/20210713112316.png"
rename to "mxVision/mxVision-docs/\344\273\243\347\240\201\346\217\220\344\272\244\346\214\207\345\257\274/img/20210713112316.png"
diff --git "a/docs/\344\273\243\347\240\201\346\217\220\344\272\244\346\214\207\345\257\274/img/20210713134004.png" "b/mxVision/mxVision-docs/\344\273\243\347\240\201\346\217\220\344\272\244\346\214\207\345\257\274/img/20210713134004.png"
similarity index 100%
rename from "docs/\344\273\243\347\240\201\346\217\220\344\272\244\346\214\207\345\257\274/img/20210713134004.png"
rename to "mxVision/mxVision-docs/\344\273\243\347\240\201\346\217\220\344\272\244\346\214\207\345\257\274/img/20210713134004.png"
diff --git "a/docs/\344\273\243\347\240\201\346\217\220\344\272\244\346\214\207\345\257\274/img/20210713134629.png" "b/mxVision/mxVision-docs/\344\273\243\347\240\201\346\217\220\344\272\244\346\214\207\345\257\274/img/20210713134629.png"
similarity index 100%
rename from "docs/\344\273\243\347\240\201\346\217\220\344\272\244\346\214\207\345\257\274/img/20210713134629.png"
rename to "mxVision/mxVision-docs/\344\273\243\347\240\201\346\217\220\344\272\244\346\214\207\345\257\274/img/20210713134629.png"
diff --git "a/docs/\344\273\243\347\240\201\346\217\220\344\272\244\346\214\207\345\257\274/img/20210713134733.png" "b/mxVision/mxVision-docs/\344\273\243\347\240\201\346\217\220\344\272\244\346\214\207\345\257\274/img/20210713134733.png"
similarity index 100%
rename from "docs/\344\273\243\347\240\201\346\217\220\344\272\244\346\214\207\345\257\274/img/20210713134733.png"
rename to "mxVision/mxVision-docs/\344\273\243\347\240\201\346\217\220\344\272\244\346\214\207\345\257\274/img/20210713134733.png"
diff --git "a/docs/\344\273\243\347\240\201\346\217\220\344\272\244\346\214\207\345\257\274/img/20210714102135900.png" "b/mxVision/mxVision-docs/\344\273\243\347\240\201\346\217\220\344\272\244\346\214\207\345\257\274/img/20210714102135900.png"
similarity index 100%
rename from "docs/\344\273\243\347\240\201\346\217\220\344\272\244\346\214\207\345\257\274/img/20210714102135900.png"
rename to "mxVision/mxVision-docs/\344\273\243\347\240\201\346\217\220\344\272\244\346\214\207\345\257\274/img/20210714102135900.png"
diff --git "a/docs/\344\273\243\347\240\201\346\217\220\344\272\244\346\214\207\345\257\274/img/3f8635d8-b16a-4614-a85d-87237750b22a.png" "b/mxVision/mxVision-docs/\344\273\243\347\240\201\346\217\220\344\272\244\346\214\207\345\257\274/img/3f8635d8-b16a-4614-a85d-87237750b22a.png"
similarity index 100%
rename from "docs/\344\273\243\347\240\201\346\217\220\344\272\244\346\214\207\345\257\274/img/3f8635d8-b16a-4614-a85d-87237750b22a.png"
rename to "mxVision/mxVision-docs/\344\273\243\347\240\201\346\217\220\344\272\244\346\214\207\345\257\274/img/3f8635d8-b16a-4614-a85d-87237750b22a.png"
diff --git "a/docs/\344\273\243\347\240\201\346\217\220\344\272\244\346\214\207\345\257\274/img/5293762d-b09e-4ce1-950b-fc318f588981.png" "b/mxVision/mxVision-docs/\344\273\243\347\240\201\346\217\220\344\272\244\346\214\207\345\257\274/img/5293762d-b09e-4ce1-950b-fc318f588981.png"
similarity index 100%
rename from "docs/\344\273\243\347\240\201\346\217\220\344\272\244\346\214\207\345\257\274/img/5293762d-b09e-4ce1-950b-fc318f588981.png"
rename to "mxVision/mxVision-docs/\344\273\243\347\240\201\346\217\220\344\272\244\346\214\207\345\257\274/img/5293762d-b09e-4ce1-950b-fc318f588981.png"
diff --git "a/docs/\344\273\243\347\240\201\346\217\220\344\272\244\346\214\207\345\257\274/img/545ce2f7-c0aa-40d3-8c84-0fed9b98c241.png" "b/mxVision/mxVision-docs/\344\273\243\347\240\201\346\217\220\344\272\244\346\214\207\345\257\274/img/545ce2f7-c0aa-40d3-8c84-0fed9b98c241.png"
similarity index 100%
rename from "docs/\344\273\243\347\240\201\346\217\220\344\272\244\346\214\207\345\257\274/img/545ce2f7-c0aa-40d3-8c84-0fed9b98c241.png"
rename to "mxVision/mxVision-docs/\344\273\243\347\240\201\346\217\220\344\272\244\346\214\207\345\257\274/img/545ce2f7-c0aa-40d3-8c84-0fed9b98c241.png"
diff --git "a/docs/\344\273\243\347\240\201\346\217\220\344\272\244\346\214\207\345\257\274/img/b24f2745-81f9-40ac-8e91-355484ac06d1.png" "b/mxVision/mxVision-docs/\344\273\243\347\240\201\346\217\220\344\272\244\346\214\207\345\257\274/img/b24f2745-81f9-40ac-8e91-355484ac06d1.png"
similarity index 100%
rename from "docs/\344\273\243\347\240\201\346\217\220\344\272\244\346\214\207\345\257\274/img/b24f2745-81f9-40ac-8e91-355484ac06d1.png"
rename to "mxVision/mxVision-docs/\344\273\243\347\240\201\346\217\220\344\272\244\346\214\207\345\257\274/img/b24f2745-81f9-40ac-8e91-355484ac06d1.png"
diff --git "a/docs/\344\273\243\347\240\201\346\217\220\344\272\244\346\214\207\345\257\274/\344\273\243\347\240\201\346\217\220\344\272\244\346\214\207\345\257\274.md" "b/mxVision/mxVision-docs/\344\273\243\347\240\201\346\217\220\344\272\244\346\214\207\345\257\274/\344\273\243\347\240\201\346\217\220\344\272\244\346\214\207\345\257\274.md"
similarity index 100%
rename from "docs/\344\273\243\347\240\201\346\217\220\344\272\244\346\214\207\345\257\274/\344\273\243\347\240\201\346\217\220\344\272\244\346\214\207\345\257\274.md"
rename to "mxVision/mxVision-docs/\344\273\243\347\240\201\346\217\220\344\272\244\346\214\207\345\257\274/\344\273\243\347\240\201\346\217\220\344\272\244\346\214\207\345\257\274.md"
diff --git "a/docs/\345\217\202\350\200\203\350\265\204\346\226\231/Live555\347\246\273\347\272\277\350\247\206\351\242\221\350\275\254RTSP\350\257\264\346\230\216\346\226\207\346\241\243.md" "b/mxVision/mxVision-docs/\345\217\202\350\200\203\350\265\204\346\226\231/Live555\347\246\273\347\272\277\350\247\206\351\242\221\350\275\254RTSP\350\257\264\346\230\216\346\226\207\346\241\243.md"
similarity index 100%
rename from "docs/\345\217\202\350\200\203\350\265\204\346\226\231/Live555\347\246\273\347\272\277\350\247\206\351\242\221\350\275\254RTSP\350\257\264\346\230\216\346\226\207\346\241\243.md"
rename to "mxVision/mxVision-docs/\345\217\202\350\200\203\350\265\204\346\226\231/Live555\347\246\273\347\272\277\350\247\206\351\242\221\350\275\254RTSP\350\257\264\346\230\216\346\226\207\346\241\243.md"
diff --git "a/docs/\345\217\202\350\200\203\350\265\204\346\226\231/TensorBase.md" "b/mxVision/mxVision-docs/\345\217\202\350\200\203\350\265\204\346\226\231/TensorBase.md"
similarity index 100%
rename from "docs/\345\217\202\350\200\203\350\265\204\346\226\231/TensorBase.md"
rename to "mxVision/mxVision-docs/\345\217\202\350\200\203\350\265\204\346\226\231/TensorBase.md"
diff --git "a/docs/\345\217\202\350\200\203\350\265\204\346\226\231/img/20210720145058139.png" "b/mxVision/mxVision-docs/\345\217\202\350\200\203\350\265\204\346\226\231/img/20210720145058139.png"
similarity index 100%
rename from "docs/\345\217\202\350\200\203\350\265\204\346\226\231/img/20210720145058139.png"
rename to "mxVision/mxVision-docs/\345\217\202\350\200\203\350\265\204\346\226\231/img/20210720145058139.png"
diff --git "a/docs/\345\217\202\350\200\203\350\265\204\346\226\231/img/20210722160540018.png" "b/mxVision/mxVision-docs/\345\217\202\350\200\203\350\265\204\346\226\231/img/20210722160540018.png"
similarity index 100%
rename from "docs/\345\217\202\350\200\203\350\265\204\346\226\231/img/20210722160540018.png"
rename to "mxVision/mxVision-docs/\345\217\202\350\200\203\350\265\204\346\226\231/img/20210722160540018.png"
diff --git "a/docs/\345\217\202\350\200\203\350\265\204\346\226\231/img/20210722160846743.png" "b/mxVision/mxVision-docs/\345\217\202\350\200\203\350\265\204\346\226\231/img/20210722160846743.png"
similarity index 100%
rename from "docs/\345\217\202\350\200\203\350\265\204\346\226\231/img/20210722160846743.png"
rename to "mxVision/mxVision-docs/\345\217\202\350\200\203\350\265\204\346\226\231/img/20210722160846743.png"
diff --git "a/docs/\345\217\202\350\200\203\350\265\204\346\226\231/img/20210722191031307.png" "b/mxVision/mxVision-docs/\345\217\202\350\200\203\350\265\204\346\226\231/img/20210722191031307.png"
similarity index 100%
rename from "docs/\345\217\202\350\200\203\350\265\204\346\226\231/img/20210722191031307.png"
rename to "mxVision/mxVision-docs/\345\217\202\350\200\203\350\265\204\346\226\231/img/20210722191031307.png"
diff --git "a/docs/\345\217\202\350\200\203\350\265\204\346\226\231/img/20210722191821287.png" "b/mxVision/mxVision-docs/\345\217\202\350\200\203\350\265\204\346\226\231/img/20210722191821287.png"
similarity index 100%
rename from "docs/\345\217\202\350\200\203\350\265\204\346\226\231/img/20210722191821287.png"
rename to "mxVision/mxVision-docs/\345\217\202\350\200\203\350\265\204\346\226\231/img/20210722191821287.png"
diff --git "a/docs/\345\217\202\350\200\203\350\265\204\346\226\231/pc\347\253\257ffmpeg\345\256\211\350\243\205\346\225\231\347\250\213.md" "b/mxVision/mxVision-docs/\345\217\202\350\200\203\350\265\204\346\226\231/pc\347\253\257ffmpeg\345\256\211\350\243\205\346\225\231\347\250\213.md"
similarity index 100%
rename from "docs/\345\217\202\350\200\203\350\265\204\346\226\231/pc\347\253\257ffmpeg\345\256\211\350\243\205\346\225\231\347\250\213.md"
rename to "mxVision/mxVision-docs/\345\217\202\350\200\203\350\265\204\346\226\231/pc\347\253\257ffmpeg\345\256\211\350\243\205\346\225\231\347\250\213.md"
diff --git a/contrib/ADNet/PSNR.png b/mxVision/mxVision-referenceapps/ADNet/PSNR.png
similarity index 100%
rename from contrib/ADNet/PSNR.png
rename to mxVision/mxVision-referenceapps/ADNet/PSNR.png
diff --git a/contrib/ADNet/README.md b/mxVision/mxVision-referenceapps/ADNet/README.md
similarity index 97%
rename from contrib/ADNet/README.md
rename to mxVision/mxVision-referenceapps/ADNet/README.md
index 8ece88ef63a13477e99b0e3e42fd1102db9d1c85..57ff88c6c8aba1be8e308efe904d97453358ff6e 100644
--- a/contrib/ADNet/README.md
+++ b/mxVision/mxVision-referenceapps/ADNet/README.md
@@ -1,201 +1,201 @@
-# ADNet图像去噪参考设计
-
-## 1 介绍
-使用 ADNet 模型,在 MindX SDK 环境下实现图像去噪功能。
-由用户设置测试图片,传入到 pipeline 中先后实现前处理,模型推理,后处理等功能,最终输出结果图片实现可视化及模型精度计算。
-
-```
-ADNet 是一种包含注意力模块的卷积神经网络,主要包括用于图像去噪的稀疏块(SB)、特征增强块(FEB)、注意力块(AB)和重建块(RB)。
-
-其中,SB 模块通过使用扩张卷积和公共卷积来去除噪声,在性能和效率之间进行权衡。FEB 模块通过长路径整合全局和局部特征信息,以增强去噪模型的表达能力。
-
-AB 模块用于精细提取复杂背景中的噪声信息,对于复杂噪声图像,尤其是真实噪声图像非常有效。 此外,FEB 模块与 AB 模块集成以提高效率并降低训练去噪模型的复杂度。
-
-最后,RB 模块通过获得的噪声映射和给定的噪声图像来构造干净的图像。
-```
-
-### 1.1 支持的产品
-
-以昇腾 Atlas310 卡为主要的硬件平台
-
-### 1.2 支持的版本
-
-CANN:7.0.RC1
-
-SDK:mxVision 5.0.RC3(可通过cat SDK目录下的 version.info 查看)
-
-### 1.3 软件方案介绍
-
-项目主要由主函数,pipeline 文件,模型及其配置文件,测试数据集组成。
-主函数中构建业务流 stream 读取图片,通过 pipeline 在 SDK 环境下先后实现图像解码,图像缩放,模型推理的功能,
-最后从流中取出相应的输出数据完成图像保存并测试精度。
-
-表1.1 系统方案中各模块功能:
-
-| 序号 | 模块 | 功能描述 |
-| ---- | ------------- | ------------------------------------------------------------ |
-| 1 | appsrc | 向stream中发送数据,appsrc将数据发给下游元件 |
-| 2 | imagedecoder | 用于图像解码,当前只支持JPG/JPEG/BMP格式 |
-| 3 | imageresize | 对解码后的YUV格式的图像进行指定宽高的缩放,暂时只支持YUV格式的图像 |
-| 4 | tensorinfer | 对输入的张量进行推理 |
-| 5 | dataserialize | 将stream结果组装成json字符串输出 |
-| 6 | appsink | 从stream中获取数据 |
-| 7 | evaluate | 模型精度计算,输出图像降噪效果评估值PSNR |
-| 8 | transform | 对测试图像进行格式转换,evaluate 运行前需要进行尺寸调整 |
-
-
-### 1.4 代码目录结构与说明
-
-本工程名称为ADNet,工程目录如下图所示:
-
-```
-├── main.py //运行工程项目的主函数
-├── evaluate.py //精度计算
-├── transform.py //图像转换
-├── t.pipeline //pipeline
-├── model //存放模型文件
-| ├──aipp_adnet.cfg //预处理配置文件
-├── result.jpg //输出结果
-├── 流程.png //流程图
-├── pipeline.png //pipeline流程图
-└──README.md
-```
-
-### 1.5 技术实现流程图
-
-ADNet图像去噪模型的后处理的输入是 pipeline 中 mxpi_tensorinfer0 推理结束后通过 appsink0 输出的 tensor 数据,尺寸为[1 * 1 * 321 * 481],将张量数据通过 pred 取出推测的结果值,将像素点组成的图片保存成result.jpg,同时通过提供的 BSD68 数据集完成模型 PSNR 的精度计算。
-
-实现流程图如下图所示:
-
-
-
-
-pipeline流程如下图所示:
-
-
-
-
-### 1.6 特性及适应场景
-
-本案例中的 ADNet 模型适用于灰度图像的去噪,并可以返回测试图像的PSNR精度值。
-
-本模型在以下几种情况去噪效果良好:含有目标数量多、含有目标数量少、前景目标面积占比图像较大、前景目标面积占比图像较小、各目标边界清晰。
-
-在以下两种情况去噪效果不太好:1. 图像中各目标之间的边界不清晰,可能会出现过度去噪、目标模糊的情况。 2. 图像中前景目标较多,可能会出现无法完成目标精确化降噪的情况。
-
-
-## 2 环境依赖
-推荐系统为ubuntu 18.04,环境依赖软件和版本如下表
-
-| 软件名称 | 版本 |
-| -------- | ------ |
-| MindX SDK | 5.0.RC3 |
-| CANN | 7.0.RC1 |
-| ubuntu | 18.04.1 LTS |
-| python | 3.9.2 |
-| cv2 | 4.5.5 |
-| numpy | 1.22.3 |
-| scikit-image| 0.16.2 |
-
-
-在编译运行项目前,需要设置环境变量
-- 环境变量介绍
-
-- MX_SDK_HOME 指向SDK安装包路径
-- LD_LIBRARY_PATH 用于指定查找共享库(动态链接库)时除了默认路径之外的其他路径。
-- PYTHONPATH Python中一个重要的环境变量,用于在导入模块的时候搜索路径
-- GST_PLUGIN_SCANNER 用于查找plugin相关的依赖和库
-- GST_PLUGIN_PATH 用于查找plugin相关的依赖和库
-
-具体执行命令
-
-```
-. ${MX_SDK_HOME}/set_env.sh
-
-. ${ascend-toolkit-path}/set_env.sh
-```
-
-## 3 模型转换
-
-本项目使用的模型是ADNet模型。
-
-选用的模型为 pytorch 模型,可从 Ascend modelzoo 获取模型压缩包,在运行项目之前需要将 pytorch 模型转为 onnx 模型,再由 onnx 模型转为 om 模型。
-
-pth 权重文件和 onnx 文件的[下载链接](https://www.hiascend.com/zh/software/modelzoo/detail/1/d360c03430f04185a4fe1aa74250bfea) [备份链接](https://mindx.sdk.obs.cn-north-4.myhuaweicloud.com/mindxsdk-referenceapps%20/contrib/ADNet/ATC%20ADNet.zip)
-
-
-模型转换工具(ATC)相关介绍如下
-https://gitee.com/ascend/docs-openmind/blob/master/guide/mindx/sdk/tutorials/%E5%8F%82%E8%80%83%E8%B5%84%E6%96%99.md
-
-具体步骤如下
-
-1. 下载上述模型压缩包,获取 ADNet_bs1.onnx 模型文件放置 ADNet/model 目录下。
-
-2. 进入ADNet/model文件夹下执行命令
-
-```
-atc --framework=5 --model=ADNet.onnx --insert_op_conf=./aipp_adnet.cfg --output=ADNet_bs1 --input_format=NCHW -input_shape="image:1,1,321,481" --log=debug --soc_version=Ascend310 --output_type=FP32
- ```
-
-3. 执行该命令会在当前目录下生成项目需要的模型文件ADNet_bs1.om。执行后终端输出为
-
- ```
-ATC start working now, please wait for a moment.
-ATC run success, welcome to the next use.
-```
-
- 表示命令执行成功。
-
-## 4 编译与运行
-
-当已有模型的om文件,保存在ADNet/model/下
-
-**步骤 1** 将任意一张jpg格式的图片存到当前目录下(./ADNet),命名为test.jpg。如果 pipeline 文件(或测试图片)不在当前目录下(./ADNet),需要修改 main.py 的pipeline(或测试图片)路径指向到所在目录。
-
-**步骤 2** 按照模型转换获取om模型,放置在 ADNet/model 路径下。若未从 pytorch 模型自行转换模型,使用的是上述链接提供的 onnx 模型或者 om 模型,则无需修改相关文件,否则修改 main.py 中pipeline的相关配置,将 mxpi_tensorinfer0 插件 modelPath 属性值中的 om 模型名改成实际使用的 om 模型名;将 mxpi_imageresize0 插件中的 resizeWidth 和 resizeHeight 属性改成转换模型过程中设置的模型输入尺寸值。
-
-**步骤 3** 在命令行输入 如下命令运行整个工程
-
-```
-python3 main.py
-```
-
-**步骤 4** 图片检测。运行结束输出result.jpg。
-
-
-## 5 测试精度
-
-**步骤 1** 安装数据集用以测试精度。数据集 BSD68 需要自行下载。
-[下载链接](https://mindx.sdk.obs.cn-north-4.myhuaweicloud.com/mindxsdk-referenceapps%20/contrib/ADNet/BSD68.zip)
-
-在./ADNet目录下新建 dataset 文件夹与 BSD68文件夹,并将数据集下载至BSD68文件夹解压。我们运行以下命令对数据集完成格式与尺寸转换,将处理好的数据集保存在dataset文件夹中,此时ADNet文件夹的目录结构为如下所示。
-
-```
-python3 transform.py
-```
-
-```
-├── main.py //运行工程项目的主函数
-├── evaluate.py //精度计算
-├── transform.py //图像转换
-├── t.pipeline //pipeline
-├── model //存放模型文件
-| ├──aipp_adnet.cfg //预处理配置文件
-├── test.jpg //测试图像
-├── result.jpg //输出结果
-├── 流程.png //流程图
-├── pipeline.png //pipeline流程图
-├── BSD68 //原始数据集
-├── dataset //完成转换后的待测试数据集
-└──README.md
-```
-
-**步骤 2** 修改 evaluate.py 中的 pipeline 路径与数据集路径与目录结构保持一致。修改完毕后运行如下命令完成精度测试,输出模型平均 PSNR 值。
-
-```
-python3 evaluate.py
-```
-
-模型在BSD68数据集上的精度达标,最终模型平均PSNR输出值为30.054,满足精度要求(PSNR ≥ 29.27)。
-
-
+# ADNet图像去噪参考设计
+
+## 1 介绍
+使用 ADNet 模型,在 MindX SDK 环境下实现图像去噪功能。
+由用户提供测试图片并传入 pipeline,先后完成前处理、模型推理、后处理等步骤,最终输出结果图片以实现可视化,并计算模型精度。
+
+ADNet 是一种包含注意力模块的卷积神经网络,主要包括用于图像去噪的稀疏块(SB)、特征增强块(FEB)、注意力块(AB)和重建块(RB)。
+
+其中,SB 模块通过使用扩张卷积和普通卷积来去除噪声,在性能和效率之间进行权衡。FEB 模块通过长路径整合全局和局部特征信息,以增强去噪模型的表达能力。
+
+AB 模块用于精细提取复杂背景中的噪声信息,对复杂噪声图像,尤其是真实噪声图像非常有效。此外,FEB 模块与 AB 模块集成以提高效率并降低训练去噪模型的复杂度。
+
+最后,RB 模块通过获得的噪声映射和给定的噪声图像来构造干净的图像。
+
+### 1.1 支持的产品
+
+以昇腾 Atlas310 卡为主要的硬件平台
+
+### 1.2 支持的版本
+
+CANN:7.0.RC1
+
+SDK:mxVision 5.0.RC3(可通过cat SDK目录下的 version.info 查看)
+
+### 1.3 软件方案介绍
+
+项目主要由主函数,pipeline 文件,模型及其配置文件,测试数据集组成。
+主函数中构建业务流 stream 读取图片,通过 pipeline 在 SDK 环境下先后实现图像解码,图像缩放,模型推理的功能,
+最后从流中取出相应的输出数据完成图像保存并测试精度。
+
+表1.1 系统方案中各模块功能:
+
+| 序号 | 模块 | 功能描述 |
+| ---- | ------------- | ------------------------------------------------------------ |
+| 1 | appsrc | 向stream中发送数据,appsrc将数据发给下游元件 |
+| 2 | imagedecoder | 用于图像解码,当前只支持JPG/JPEG/BMP格式 |
+| 3 | imageresize | 对解码后的YUV格式的图像进行指定宽高的缩放,暂时只支持YUV格式的图像 |
+| 4 | tensorinfer | 对输入的张量进行推理 |
+| 5 | dataserialize | 将stream结果组装成json字符串输出 |
+| 6 | appsink | 从stream中获取数据 |
+| 7 | evaluate | 模型精度计算,输出图像降噪效果评估值PSNR |
+| 8 | transform | 对测试图像进行格式转换,evaluate 运行前需要进行尺寸调整 |
+
+
+### 1.4 代码目录结构与说明
+
+本工程名称为ADNet,工程目录如下图所示:
+
+```
+├── main.py //运行工程项目的主函数
+├── evaluate.py //精度计算
+├── transform.py //图像转换
+├── t.pipeline //pipeline
+├── model //存放模型文件
+| ├──aipp_adnet.cfg //预处理配置文件
+├── result.jpg //输出结果
+├── 流程.png //流程图
+├── pipeline.png //pipeline流程图
+└──README.md
+```
+
+### 1.5 技术实现流程图
+
+ADNet图像去噪模型的后处理的输入是 pipeline 中 mxpi_tensorinfer0 推理结束后通过 appsink0 输出的 tensor 数据,尺寸为[1 * 1 * 321 * 481],将张量数据通过 pred 取出推测的结果值,将像素点组成的图片保存成result.jpg,同时通过提供的 BSD68 数据集完成模型 PSNR 的精度计算。
+
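上述后处理流程(从 appsink0 取出张量、还原为灰度图并保存)可以用如下 Python 片段示意。其中张量内容为随机数据,`tensor_to_image` 为示意用的假设函数名,仅演示形状变换与反归一化的思路,并非 SDK 的固定接口;示例还假设模型输出已归一化到 [0, 1]:

```python
import numpy as np

def tensor_to_image(pred):
    """将 [1, 1, H, W] 的推理输出张量还原为 uint8 灰度图(示意实现)。"""
    # 去掉 batch 和通道维度,得到 H x W 的二维数组
    img = pred.reshape(pred.shape[-2], pred.shape[-1])
    # 假设模型输出已归一化到 [0, 1]:反归一化到像素值并截断到合法范围
    img = np.clip(img * 255.0, 0.0, 255.0)
    return img.astype(np.uint8)

# 用随机数据模拟 mxpi_tensorinfer0 经 appsink0 输出的 [1, 1, 321, 481] 张量
pred = np.random.rand(1, 1, 321, 481).astype(np.float32)
result = tensor_to_image(pred)
print(result.shape, result.dtype)
```

实际保存结果图片可使用 `cv2.imwrite("result.jpg", result)`(需要安装 opencv-python)。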
+实现流程图如下图所示:
+
+
+
+
+pipeline流程如下图所示:
+
+
+
+
+### 1.6 特性及适应场景
+
+本案例中的 ADNet 模型适用于灰度图像的去噪,并可以返回测试图像的PSNR精度值。
+
+本模型在以下几种情况去噪效果良好:含有目标数量多、含有目标数量少、前景目标面积占比图像较大、前景目标面积占比图像较小、各目标边界清晰。
+
+在以下两种情况去噪效果不太好:1. 图像中各目标之间的边界不清晰,可能会出现过度去噪、目标模糊的情况。 2. 图像中前景目标较多,可能会出现无法完成目标精确化降噪的情况。
+
+
+## 2 环境依赖
+推荐系统为 Ubuntu 18.04,环境依赖软件和版本如下表:
+
+| 软件名称 | 版本 |
+| -------- | ------ |
+| MindX SDK | 5.0.RC3 |
+| CANN | 7.0.RC1 |
+| ubuntu | 18.04.1 LTS |
+| python | 3.9.2 |
+| cv2 | 4.5.5 |
+| numpy | 1.22.3 |
+| scikit-image| 0.16.2 |
+
+
+在编译运行项目前,需要设置如下环境变量:
+
+- MX_SDK_HOME:指向 SDK 安装路径
+- LD_LIBRARY_PATH:指定查找共享库(动态链接库)时除默认路径之外的其他路径
+- PYTHONPATH:Python 导入模块时的搜索路径
+- GST_PLUGIN_SCANNER:用于查找 plugin 相关的依赖和库
+- GST_PLUGIN_PATH:用于查找 plugin 相关的依赖和库
+
+具体执行命令
+
+```
+. ${MX_SDK_HOME}/set_env.sh
+
+. ${ascend-toolkit-path}/set_env.sh
+```
+
+## 3 模型转换
+
+本项目使用的模型是ADNet模型。
+
+选用的模型为 pytorch 模型,可从 Ascend modelzoo 获取模型压缩包,在运行项目之前需要将 pytorch 模型转为 onnx 模型,再由 onnx 模型转为 om 模型。
+
+pth 权重文件和 onnx 文件的[下载链接](https://www.hiascend.com/zh/software/modelzoo/detail/1/d360c03430f04185a4fe1aa74250bfea) [备份链接](https://mindx.sdk.obs.cn-north-4.myhuaweicloud.com/mindxsdk-referenceapps%20/contrib/ADNet/ATC%20ADNet.zip)
+
+
+模型转换工具(ATC)相关介绍如下
+https://gitee.com/ascend/docs-openmind/blob/master/guide/mindx/sdk/tutorials/%E5%8F%82%E8%80%83%E8%B5%84%E6%96%99.md
+
+具体步骤如下
+
+1. 下载上述模型压缩包,获取 ADNet_bs1.onnx 模型文件,放置在 ADNet/model 目录下。
+
+2. 进入ADNet/model文件夹下执行命令
+
+```
+atc --framework=5 --model=ADNet.onnx --insert_op_conf=./aipp_adnet.cfg --output=ADNet_bs1 --input_format=NCHW --input_shape="image:1,1,321,481" --log=debug --soc_version=Ascend310 --output_type=FP32
+```
+
+3. 执行该命令会在当前目录下生成项目需要的模型文件ADNet_bs1.om。执行后终端输出为
+
+ ```
+ATC start working now, please wait for a moment.
+ATC run success, welcome to the next use.
+```
+
+ 表示命令执行成功。
+
+## 4 编译与运行
+
+确保已将模型的 om 文件保存在 ADNet/model/ 目录下。
+
+**步骤 1** 将任意一张jpg格式的图片存到当前目录下(./ADNet),命名为test.jpg。如果 pipeline 文件(或测试图片)不在当前目录下(./ADNet),需要修改 main.py 的pipeline(或测试图片)路径指向到所在目录。
+
+**步骤 2** 按照模型转换获取om模型,放置在 ADNet/model 路径下。若未从 pytorch 模型自行转换模型,使用的是上述链接提供的 onnx 模型或者 om 模型,则无需修改相关文件,否则修改 main.py 中pipeline的相关配置,将 mxpi_tensorinfer0 插件 modelPath 属性值中的 om 模型名改成实际使用的 om 模型名;将 mxpi_imageresize0 插件中的 resizeWidth 和 resizeHeight 属性改成转换模型过程中设置的模型输入尺寸值。
+
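步骤 2 中需要修改的 pipeline 配置项,可参考如下示意片段(字段取值为本例假设,om 文件名、尺寸以及上下游插件名需按实际 pipeline 文件填写):

```json
"mxpi_imageresize0": {
    "props": {
        "dataSource": "mxpi_imagedecoder0",
        "resizeWidth": "481",
        "resizeHeight": "321"
    },
    "factory": "mxpi_imageresize",
    "next": "mxpi_tensorinfer0"
},
"mxpi_tensorinfer0": {
    "props": {
        "dataSource": "mxpi_imageresize0",
        "modelPath": "model/ADNet_bs1.om"
    },
    "factory": "mxpi_tensorinfer",
    "next": "mxpi_dataserialize0"
}
```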
+**步骤 3** 在命令行输入如下命令,运行整个工程:
+
+```
+python3 main.py
+```
+
+**步骤 4** 图片去噪。运行结束后输出结果图片 result.jpg。
+
+
+## 5 测试精度
+
+**步骤 1** 准备用于精度测试的数据集。数据集 BSD68 需要自行下载。
+[下载链接](https://mindx.sdk.obs.cn-north-4.myhuaweicloud.com/mindxsdk-referenceapps%20/contrib/ADNet/BSD68.zip)
+
+在 ./ADNet 目录下新建 dataset 文件夹与 BSD68 文件夹,并将数据集下载至 BSD68 文件夹后解压。运行以下命令对数据集完成格式与尺寸转换,处理好的数据集将保存在 dataset 文件夹中,此时 ADNet 文件夹的目录结构如下所示:
+
+```
+python3 transform.py
+```
+
+```
+├── main.py              // main entry of the project
+├── evaluate.py          // accuracy computation
+├── transform.py         // image conversion
+├── t.pipeline           // pipeline file
+├── model                // model files
+|   ├── aipp_adnet.cfg   // preprocessing (AIPP) config
+|   ├── ADNet_bs1.om     // offline model
+├── test.jpg             // test image
+├── result.jpg           // output result
+├── 流程.png              // flow chart
+├── pipeline.png         // pipeline diagram
+├── BSD68                // original dataset
+├── dataset              // converted dataset to be tested
+└── README.md
+```
+
+**Step 2** Edit the pipeline path and dataset path in evaluate.py so that they match your directory layout. Then run the following command to perform the accuracy test; it prints the model's average PSNR:
+
+```bash
+python3 evaluate.py
+```
+
+The model meets the accuracy target on the BSD68 dataset: the final average PSNR is 30.054, which satisfies the requirement (PSNR ≥ 29.27).
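evaluate.py scores each denoised image by PSNR, computed from the mean squared error against the reference image with MAX_PIXEL = 255; a minimal vectorized sketch of that metric (array names are illustrative):

```python
import math
import numpy as np

MAX_PIXEL = 255.0

def psnr(reference, denoised):
    """Peak signal-to-noise ratio in dB over two grayscale images,
    mirroring the MSE-based computation in evaluate.py."""
    diff = reference.astype(np.float64) - denoised.astype(np.float64)
    mse = np.mean(diff ** 2)
    return 10 * math.log10(MAX_PIXEL ** 2 / mse)

# Two images differing by a constant offset of 1 have MSE = 1.
ref = np.zeros((320, 480), dtype=np.uint8)
noisy = ref + 1
value = psnr(ref, noisy)
```

Averaging this value over all 68 images of BSD68 gives the reported average PSNR.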
+
+
diff --git a/contrib/ADNet/evaluate.py b/mxVision/mxVision-referenceapps/ADNet/evaluate.py
similarity index 97%
rename from contrib/ADNet/evaluate.py
rename to mxVision/mxVision-referenceapps/ADNet/evaluate.py
index 86bbf3aa94bb3ab156283fc31319e2ec37c538cf..61b26a43fae5813b4d2924353d5a47417ac4c8b1 100644
--- a/contrib/ADNet/evaluate.py
+++ b/mxVision/mxVision-referenceapps/ADNet/evaluate.py
@@ -1,112 +1,112 @@
-#!/usr/bin/env python
-# coding=utf-8
-
-# Copyright 2022 Huawei Technologies Co., Ltd
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-import json
-import os
-import math
-import cv2
-import numpy as np
-import MxpiDataType_pb2 as MxpiDataType
-from StreamManagerApi import StreamManagerApi, MxDataInput, StringVector
-
-SUM_PSNR = 0
-NUM = 0
-MAX_PIXEL = 255.0
-HEIGHT = 320
-WIDTH = 480
-DE_NORM = 255
-
-if __name__ == '__main__':
- streamManagerApi = StreamManagerApi()
- # 新建一个流管理StreamManager对象并初始化
- ret = streamManagerApi.InitManager()
- if ret != 0:
- print("Failed to init Stream manager, ret=%s" % str(ret))
- exit()
-
- # 构建pipeline
- PIPELINE_PATH = "./t.pipeline"
- if os.path.exists(PIPELINE_PATH) != 1:
- print("pipeline does not exist !")
- exit()
- with open(PIPELINE_PATH, 'rb') as f:
- pipelineStr = f.read()
- ret = streamManagerApi.CreateMultipleStreams(pipelineStr)
- if ret != 0:
- print("Failed to create Stream, ret=%s" % str(ret))
- exit()
-
- # 构建流的输入对象--检测目标
- dataInput = MxDataInput()
- FILEPATH = "./dataset/"
- if os.path.exists(FILEPATH) != 1:
- print("The filepath does not exist !")
- exit()
- for filename in os.listdir(FILEPATH):
- image_path = FILEPATH + filename
- if image_path.split('.')[-1] != 'jpg':
- continue
- with open(image_path, 'rb') as f:
- dataInput.data = f.read()
- begin_array = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
- STREAMNAME = b'detection'
- INPLUGINID = 0
- # 根据流名将检测目标传入流中
- uniqueId = streamManagerApi.SendData(STREAMNAME, INPLUGINID, dataInput)
- if uniqueId < 0:
- print("Failed to send data to stream.")
- exit()
- keys = [b"mxpi_tensorinfer0"]
- keyVec = StringVector()
- for key in keys:
- keyVec.push_back(key)
- # 从流中取出对应插件的输出数据
- infer = streamManagerApi.GetResult(STREAMNAME, b'appsink0', keyVec)
- if(infer.metadataVec.size() == 0):
- print("Get no data from stream !")
- exit()
- infer_result = infer.metadataVec[0]
- if infer_result.errorCode != 0:
- print("GetResult error. errorCode=%d , errMsg=%s" % (infer_result.errorCode, infer_result.errMsg))
- exit()
- result = MxpiDataType.MxpiTensorPackageList()
- result.ParseFromString(infer_result.serializedMetadata)
- pred = np.frombuffer(result.tensorPackageVec[0].tensorVec[0].dataStr, dtype=np.float32)
- pred.resize(HEIGHT + 1, WIDTH + 1)
- preds = np.zeros((HEIGHT, WIDTH))
- for i in range(HEIGHT):
- for j in range(WIDTH):
- if(pred[i+1][j+1] < 0):
- preds[i][j] = 0
- elif(pred[i+1][j+1] > 1):
- preds[i][j] = DE_NORM
- else:
- preds[i][j] = pred[i+1][j+1] * DE_NORM
- end_array = np.array(preds, dtype=int)
- SUM = 0
- for i in range(HEIGHT):
- for j in range(WIDTH):
- SUM += (begin_array[i][j] - end_array[i][j]) ** 2
- mse = SUM / (HEIGHT * WIDTH)
- psnr = 10 * math.log10(MAX_PIXEL**2/mse)
- SUM_PSNR += psnr
- NUM += 1
- print(filename.split('.')[0] + " PSNR RESULT: " , psnr)
- print("-------------------------------------------------")
- print("Model Average PSNR: " , SUM_PSNR / NUM)
- # destroy streams
+#!/usr/bin/env python
+# coding=utf-8
+
+# Copyright 2022 Huawei Technologies Co., Ltd
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+import json
+import os
+import math
+import cv2
+import numpy as np
+import MxpiDataType_pb2 as MxpiDataType
+from StreamManagerApi import StreamManagerApi, MxDataInput, StringVector
+
+SUM_PSNR = 0
+NUM = 0
+MAX_PIXEL = 255.0
+HEIGHT = 320
+WIDTH = 480
+DE_NORM = 255
+
+if __name__ == '__main__':
+ streamManagerApi = StreamManagerApi()
+    # Create a StreamManager object and initialize it
+ ret = streamManagerApi.InitManager()
+ if ret != 0:
+ print("Failed to init Stream manager, ret=%s" % str(ret))
+ exit()
+
+    # Build the pipeline
+ PIPELINE_PATH = "./t.pipeline"
+ if os.path.exists(PIPELINE_PATH) != 1:
+ print("pipeline does not exist !")
+ exit()
+ with open(PIPELINE_PATH, 'rb') as f:
+ pipelineStr = f.read()
+ ret = streamManagerApi.CreateMultipleStreams(pipelineStr)
+ if ret != 0:
+ print("Failed to create Stream, ret=%s" % str(ret))
+ exit()
+
+    # Build the stream input object -- the target to process
+ dataInput = MxDataInput()
+ FILEPATH = "./dataset/"
+ if os.path.exists(FILEPATH) != 1:
+ print("The filepath does not exist !")
+ exit()
+ for filename in os.listdir(FILEPATH):
+ image_path = FILEPATH + filename
+ if image_path.split('.')[-1] != 'jpg':
+ continue
+ with open(image_path, 'rb') as f:
+ dataInput.data = f.read()
+ begin_array = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
+ STREAMNAME = b'detection'
+ INPLUGINID = 0
+        # Send the input into the stream identified by the stream name
+ uniqueId = streamManagerApi.SendData(STREAMNAME, INPLUGINID, dataInput)
+ if uniqueId < 0:
+ print("Failed to send data to stream.")
+ exit()
+ keys = [b"mxpi_tensorinfer0"]
+ keyVec = StringVector()
+ for key in keys:
+ keyVec.push_back(key)
+        # Fetch the output of the specified plugin from the stream
+ infer = streamManagerApi.GetResult(STREAMNAME, b'appsink0', keyVec)
+ if(infer.metadataVec.size() == 0):
+ print("Get no data from stream !")
+ exit()
+ infer_result = infer.metadataVec[0]
+ if infer_result.errorCode != 0:
+ print("GetResult error. errorCode=%d , errMsg=%s" % (infer_result.errorCode, infer_result.errMsg))
+ exit()
+ result = MxpiDataType.MxpiTensorPackageList()
+ result.ParseFromString(infer_result.serializedMetadata)
+ pred = np.frombuffer(result.tensorPackageVec[0].tensorVec[0].dataStr, dtype=np.float32)
+ pred.resize(HEIGHT + 1, WIDTH + 1)
+ preds = np.zeros((HEIGHT, WIDTH))
+ for i in range(HEIGHT):
+ for j in range(WIDTH):
+ if(pred[i+1][j+1] < 0):
+ preds[i][j] = 0
+ elif(pred[i+1][j+1] > 1):
+ preds[i][j] = DE_NORM
+ else:
+ preds[i][j] = pred[i+1][j+1] * DE_NORM
+ end_array = np.array(preds, dtype=int)
+ SUM = 0
+ for i in range(HEIGHT):
+ for j in range(WIDTH):
+ SUM += (begin_array[i][j] - end_array[i][j]) ** 2
+ mse = SUM / (HEIGHT * WIDTH)
+ psnr = 10 * math.log10(MAX_PIXEL**2/mse)
+ SUM_PSNR += psnr
+ NUM += 1
+ print(filename.split('.')[0] + " PSNR RESULT: " , psnr)
+ print("-------------------------------------------------")
+ print("Model Average PSNR: " , SUM_PSNR / NUM)
+ # destroy streams
streamManagerApi.DestroyAllStreams()
\ No newline at end of file
diff --git a/contrib/ADNet/main.py b/mxVision/mxVision-referenceapps/ADNet/main.py
similarity index 97%
rename from contrib/ADNet/main.py
rename to mxVision/mxVision-referenceapps/ADNet/main.py
index 05200f122170c48de1721ba8d24a521e81c827ff..abdee6fc97121699b3213f6dd41347c209f843cf 100644
--- a/contrib/ADNet/main.py
+++ b/mxVision/mxVision-referenceapps/ADNet/main.py
@@ -1,95 +1,95 @@
-#!/usr/bin/env python
-# coding=utf-8
-
-# Copyright 2022 Huawei Technologies Co., Ltd
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-import json
-import os
-import cv2
-import numpy as np
-import MxpiDataType_pb2 as MxpiDataType
-from StreamManagerApi import StreamManagerApi, MxDataInput, StringVector
-
-HEIGHT = 320
-WIDTH = 480
-DE_NORM = 255
-
-if __name__ == '__main__':
- streamManagerApi = StreamManagerApi()
- # 新建一个流管理StreamManager对象并初始化
- ret = streamManagerApi.InitManager()
- if ret != 0:
- print("Failed to init Stream manager, ret=%s" % str(ret))
- exit()
-
- # 构建pipeline
- PIPELINE_PATH = "./t.pipeline"
- if os.path.exists(PIPELINE_PATH) != 1:
- print("Pipeline does not exist !")
- exit()
- with open(PIPELINE_PATH, 'rb') as f:
- pipelineStr = f.read()
- ret = streamManagerApi.CreateMultipleStreams(pipelineStr)
- if ret != 0:
- print("Failed to create Stream, ret=%s" % str(ret))
- exit()
-
- # 构建流的输入对象--检测目标
- dataInput = MxDataInput()
- IMAGE_PATH = './test.jpg'
- with open(IMAGE_PATH, 'rb') as f:
- dataInput.data = f.read()
- begin_array = cv2.imread(IMAGE_PATH, cv2.IMREAD_GRAYSCALE)
- h, w = begin_array.shape[:2]
- STREAMNAME = b'detection'
- INPLUGINID = 0
- # 根据流名将检测目标传入流中
- uniqueId = streamManagerApi.SendData(STREAMNAME, INPLUGINID, dataInput)
- if uniqueId < 0:
- print("Failed to send data to stream.")
- exit()
- keys = [b"mxpi_tensorinfer0"]
- keyVec = StringVector()
- for key in keys:
- keyVec.push_back(key)
- # 从流中取出对应插件的输出数据
- infer = streamManagerApi.GetResult(STREAMNAME, b'appsink0', keyVec)
- if(infer.metadataVec.size() == 0):
- print("Get no data from stream !")
- exit()
- print("result.metadata size: ", infer.metadataVec.size())
- infer_result = infer.metadataVec[0]
- if infer_result.errorCode != 0:
- print("GetResult error. errorCode=%d , errMsg=%s" % (infer_result.errorCode, infer_result.errMsg))
- exit()
- result = MxpiDataType.MxpiTensorPackageList()
- result.ParseFromString(infer_result.serializedMetadata)
- pred = np.frombuffer(result.tensorPackageVec[0].tensorVec[0].dataStr, dtype=np.float32)
- pred.resize(HEIGHT+1, WIDTH+1)
- preds = np.zeros((HEIGHT, WIDTH))
- for i in range(HEIGHT):
- for j in range(WIDTH):
- if(pred[i+1][j+1] < 0):
- preds[i][j] = 0
- elif(pred[i+1][j+1] > 1):
- preds[i][j] = DE_NORM
- else:
- preds[i][j] = pred[i+1][j+1] * DE_NORM
- end_array = np.array(preds, dtype=int)
- SAVE_PATH = './result.jpg'
- img_resize = cv2.resize(end_array, (w, h), interpolation = cv2.INTER_NEAREST)
- cv2.imwrite(SAVE_PATH, img_resize)
- # destroy streams
+#!/usr/bin/env python
+# coding=utf-8
+
+# Copyright 2022 Huawei Technologies Co., Ltd
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+import json
+import os
+import cv2
+import numpy as np
+import MxpiDataType_pb2 as MxpiDataType
+from StreamManagerApi import StreamManagerApi, MxDataInput, StringVector
+
+HEIGHT = 320
+WIDTH = 480
+DE_NORM = 255
+
+if __name__ == '__main__':
+ streamManagerApi = StreamManagerApi()
+    # Create a StreamManager object and initialize it
+ ret = streamManagerApi.InitManager()
+ if ret != 0:
+ print("Failed to init Stream manager, ret=%s" % str(ret))
+ exit()
+
+    # Build the pipeline
+ PIPELINE_PATH = "./t.pipeline"
+ if os.path.exists(PIPELINE_PATH) != 1:
+ print("Pipeline does not exist !")
+ exit()
+ with open(PIPELINE_PATH, 'rb') as f:
+ pipelineStr = f.read()
+ ret = streamManagerApi.CreateMultipleStreams(pipelineStr)
+ if ret != 0:
+ print("Failed to create Stream, ret=%s" % str(ret))
+ exit()
+
+    # Build the stream input object -- the target to process
+ dataInput = MxDataInput()
+ IMAGE_PATH = './test.jpg'
+ with open(IMAGE_PATH, 'rb') as f:
+ dataInput.data = f.read()
+ begin_array = cv2.imread(IMAGE_PATH, cv2.IMREAD_GRAYSCALE)
+ h, w = begin_array.shape[:2]
+ STREAMNAME = b'detection'
+ INPLUGINID = 0
+    # Send the input into the stream identified by the stream name
+ uniqueId = streamManagerApi.SendData(STREAMNAME, INPLUGINID, dataInput)
+ if uniqueId < 0:
+ print("Failed to send data to stream.")
+ exit()
+ keys = [b"mxpi_tensorinfer0"]
+ keyVec = StringVector()
+ for key in keys:
+ keyVec.push_back(key)
+    # Fetch the output of the specified plugin from the stream
+ infer = streamManagerApi.GetResult(STREAMNAME, b'appsink0', keyVec)
+ if(infer.metadataVec.size() == 0):
+ print("Get no data from stream !")
+ exit()
+ print("result.metadata size: ", infer.metadataVec.size())
+ infer_result = infer.metadataVec[0]
+ if infer_result.errorCode != 0:
+ print("GetResult error. errorCode=%d , errMsg=%s" % (infer_result.errorCode, infer_result.errMsg))
+ exit()
+ result = MxpiDataType.MxpiTensorPackageList()
+ result.ParseFromString(infer_result.serializedMetadata)
+ pred = np.frombuffer(result.tensorPackageVec[0].tensorVec[0].dataStr, dtype=np.float32)
+ pred.resize(HEIGHT+1, WIDTH+1)
+ preds = np.zeros((HEIGHT, WIDTH))
+ for i in range(HEIGHT):
+ for j in range(WIDTH):
+ if(pred[i+1][j+1] < 0):
+ preds[i][j] = 0
+ elif(pred[i+1][j+1] > 1):
+ preds[i][j] = DE_NORM
+ else:
+ preds[i][j] = pred[i+1][j+1] * DE_NORM
+ end_array = np.array(preds, dtype=int)
+ SAVE_PATH = './result.jpg'
+ img_resize = cv2.resize(end_array, (w, h), interpolation = cv2.INTER_NEAREST)
+ cv2.imwrite(SAVE_PATH, img_resize)
+ # destroy streams
streamManagerApi.DestroyAllStreams()
\ No newline at end of file
diff --git a/contrib/ADNet/model/aipp_adnet.cfg b/mxVision/mxVision-referenceapps/ADNet/model/aipp_adnet.cfg
similarity index 95%
rename from contrib/ADNet/model/aipp_adnet.cfg
rename to mxVision/mxVision-referenceapps/ADNet/model/aipp_adnet.cfg
index d48fd8f4d91f670004f4841a91e7a660bf668ca7..5d9d1a48682b944517994b1be217825f6b042441 100644
--- a/contrib/ADNet/model/aipp_adnet.cfg
+++ b/mxVision/mxVision-referenceapps/ADNet/model/aipp_adnet.cfg
@@ -1,35 +1,35 @@
-aipp_op {
- aipp_mode: static
- input_format : YUV420SP_U8
- src_image_size_w:480
- src_image_size_h:320
- crop:false
- padding:true
- left_padding_size:1
- right_padding_size:0
- top_padding_size:1
- bottom_padding_size:0
- csc_switch : true
- rbuv_swap_switch : false
- matrix_r0c0 : 256
- matrix_r0c1 : 0
- matrix_r0c2 : 0
- matrix_r1c0 : 0
- matrix_r1c1 : 0
- matrix_r1c2 : 0
- matrix_r2c0 : 0
- matrix_r2c1 : 0
- matrix_r2c2 : 0
- input_bias_0 : 0
- input_bias_1 : 0
- input_bias_2 : 0
- mean_chn_0 : 0
- mean_chn_1 : 0
- mean_chn_2 : 0
- min_chn_0 : 0.0
- min_chn_1 : 0.0
- min_chn_2 : 0.0
- var_reci_chn_0 : 0.00392
- var_reci_chn_1 : 0.00392
- var_reci_chn_2 : 0.00392
+aipp_op {
+ aipp_mode: static
+ input_format : YUV420SP_U8
+ src_image_size_w:480
+ src_image_size_h:320
+ crop:false
+ padding:true
+ left_padding_size:1
+ right_padding_size:0
+ top_padding_size:1
+ bottom_padding_size:0
+ csc_switch : true
+ rbuv_swap_switch : false
+ matrix_r0c0 : 256
+ matrix_r0c1 : 0
+ matrix_r0c2 : 0
+ matrix_r1c0 : 0
+ matrix_r1c1 : 0
+ matrix_r1c2 : 0
+ matrix_r2c0 : 0
+ matrix_r2c1 : 0
+ matrix_r2c2 : 0
+ input_bias_0 : 0
+ input_bias_1 : 0
+ input_bias_2 : 0
+ mean_chn_0 : 0
+ mean_chn_1 : 0
+ mean_chn_2 : 0
+ min_chn_0 : 0.0
+ min_chn_1 : 0.0
+ min_chn_2 : 0.0
+ var_reci_chn_0 : 0.00392
+ var_reci_chn_1 : 0.00392
+ var_reci_chn_2 : 0.00392
}
\ No newline at end of file
diff --git a/contrib/ADNet/pipeline.png b/mxVision/mxVision-referenceapps/ADNet/pipeline.png
similarity index 100%
rename from contrib/ADNet/pipeline.png
rename to mxVision/mxVision-referenceapps/ADNet/pipeline.png
diff --git a/contrib/ADNet/t.pipeline b/mxVision/mxVision-referenceapps/ADNet/t.pipeline
similarity index 96%
rename from contrib/ADNet/t.pipeline
rename to mxVision/mxVision-referenceapps/ADNet/t.pipeline
index d2076be68e0c86fc5e83d2cda69394e3343a7f85..c9d81801f851d801eb96c64ea4ce8ac53e12ecdb 100644
--- a/contrib/ADNet/t.pipeline
+++ b/mxVision/mxVision-referenceapps/ADNet/t.pipeline
@@ -1,51 +1,51 @@
-{
- "detection": {
- "stream_config": {
- "deviceId": "0"
- },
- "appsrc0": {
- "props": {
- "blocksize": "409600"
- },
- "factory": "appsrc",
- "next": "mxpi_imagedecoder0"
- },
- "mxpi_imagedecoder0": {
- "props": {
- "deviceId": "0"
- },
- "factory": "mxpi_imagedecoder",
- "next": "mxpi_imageresize0"
- },
- "mxpi_imageresize0": {
- "props": {
- "dataSource": "mxpi_imagedecoder0",
- "resizeHeight": "320",
- "resizeWidth": "480"
- },
- "factory": "mxpi_imageresize",
- "next": "mxpi_tensorinfer0"
- },
- "mxpi_tensorinfer0": {
- "props": {
- "dataSource": "mxpi_imageresize0",
- "modelPath": "./model/ADNet_bs1.om"
- },
- "factory": "mxpi_tensorinfer",
- "next": "mxpi_dataserialize0"
- },
- "mxpi_dataserialize0": {
- "props": {
- "outputDataKeys": "mxpi_tensorinfer0"
- },
- "factory": "mxpi_dataserialize",
- "next": "appsink0"
- },
- "appsink0": {
- "props": {
- "blocksize": "4096000"
- },
- "factory": "appsink"
- }
- }
+{
+ "detection": {
+ "stream_config": {
+ "deviceId": "0"
+ },
+ "appsrc0": {
+ "props": {
+ "blocksize": "409600"
+ },
+ "factory": "appsrc",
+ "next": "mxpi_imagedecoder0"
+ },
+ "mxpi_imagedecoder0": {
+ "props": {
+ "deviceId": "0"
+ },
+ "factory": "mxpi_imagedecoder",
+ "next": "mxpi_imageresize0"
+ },
+ "mxpi_imageresize0": {
+ "props": {
+ "dataSource": "mxpi_imagedecoder0",
+ "resizeHeight": "320",
+ "resizeWidth": "480"
+ },
+ "factory": "mxpi_imageresize",
+ "next": "mxpi_tensorinfer0"
+ },
+ "mxpi_tensorinfer0": {
+ "props": {
+ "dataSource": "mxpi_imageresize0",
+ "modelPath": "./model/ADNet_bs1.om"
+ },
+ "factory": "mxpi_tensorinfer",
+ "next": "mxpi_dataserialize0"
+ },
+ "mxpi_dataserialize0": {
+ "props": {
+ "outputDataKeys": "mxpi_tensorinfer0"
+ },
+ "factory": "mxpi_dataserialize",
+ "next": "appsink0"
+ },
+ "appsink0": {
+ "props": {
+ "blocksize": "4096000"
+ },
+ "factory": "appsink"
+ }
+ }
}
\ No newline at end of file
diff --git a/contrib/ADNet/transform.py b/mxVision/mxVision-referenceapps/ADNet/transform.py
similarity index 97%
rename from contrib/ADNet/transform.py
rename to mxVision/mxVision-referenceapps/ADNet/transform.py
index 2f32a19fad95669faa6bc1d4bf0432163a49e454..9b503a0f5bfb56da6c7313adcc04a9e7d171c031 100644
--- a/contrib/ADNet/transform.py
+++ b/mxVision/mxVision-referenceapps/ADNet/transform.py
@@ -1,25 +1,25 @@
-# Copyright 2022 Huawei Technologies Co., Ltd
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-import os
-import cv2
-
-FILE_PATH = './BSD68/'
-for file in os.listdir(FILE_PATH):
- img_path = FILE_PATH + file
- img = cv2.imread(img_path)
- # evaluate运行前需要执行下面的resize;main运行前注释掉下面一行代码
- img = cv2.resize(img, (480, 320))
- save_path = './dataset/' + file.split('.')[0] + '.jpg'
+# Copyright 2022 Huawei Technologies Co., Ltd
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+import os
+import cv2
+
+FILE_PATH = './BSD68/'
+for file in os.listdir(FILE_PATH):
+ img_path = FILE_PATH + file
+ img = cv2.imread(img_path)
+    # The resize below is required before running evaluate; comment out the next line before running main
+ img = cv2.resize(img, (480, 320))
+ save_path = './dataset/' + file.split('.')[0] + '.jpg'
cv2.imwrite(save_path, img)
\ No newline at end of file
diff --git "a/contrib/ADNet/\346\265\201\347\250\213.png" "b/mxVision/mxVision-referenceapps/ADNet/\346\265\201\347\250\213.png"
similarity index 100%
rename from "contrib/ADNet/\346\265\201\347\250\213.png"
rename to "mxVision/mxVision-referenceapps/ADNet/\346\265\201\347\250\213.png"
diff --git a/contrib/ActionRecognition/README.md b/mxVision/mxVision-referenceapps/ActionRecognition/README.md
similarity index 97%
rename from contrib/ActionRecognition/README.md
rename to mxVision/mxVision-referenceapps/ActionRecognition/README.md
index a3eb616799b5843626e64f2f7b9fa87c2684a3c5..c27eba5a485f7faa905169947279e1088b82fc52 100644
--- a/contrib/ActionRecognition/README.md
+++ b/mxVision/mxVision-referenceapps/ActionRecognition/README.md
@@ -1,477 +1,477 @@
-# ActionRecgnition
-
-## 1 介绍
-
-本开发样例演示动作识别系统 ActionRecgnition,供用户参考。
-本系统基于mxVision SDK进行开发,以昇腾Atlas300卡为主要的硬件平台,主要应用于单人独处、逗留超时、快速移动、剧烈运动、离床检测、攀高检测六种应用场景。
-
-1. 单人独处:识别出单个人独处场景后报警。
-2. 逗留超时:识别出单人或多人在区域内长时间逗留的情况并发出警报。
-3. 快速移动:检测出视频中单人或多人行进速度大于阈值的情况,并发出警报。
-4. 剧烈运动:检测到视频流中有剧烈运动并进行报警。
-5. 离床检测:检测出视频中行人离开指定区域的情况并报警。
-6. 攀高检测:检测出行人中心点向上移动的情况,并发出警报。
-
-## 2 环境依赖
-
-* 支持的硬件形态和操作系统版本
-
- | 硬件形态 | 操作系统版本 |
- | ------------------------------------- | -------------- |
- | x86_64+Atlas 300I 推理卡(型号3010) | Ubuntu 18.04.1 |
- | x86_64+Atlas 300I 推理卡 (型号3010) | CentOS 7.6 |
- | ARM+Atlas 300I 推理卡 (型号3000) | Ubuntu 18.04.1 |
- | ARM+Atlas 300I 推理卡 (型号3000) | CentOS 7.6 |
-
-* 软件依赖
-
- | 软件名称 | 版本 |
- | -------- | ----- |
- | cmake | 3.5.+ |
- | mxVision | 2.0.4 |
- | Python | 3.9.2 |
- | OpenCV | 3.4.0 |
- | gcc | 7.5.0 |
- | ffmpeg | 4.3.2 |
-
-## 3 代码主要目录介绍
-
-本Sample工程名称为Actionrecognition,工程目录如下图所示:
-
-```
-.
-├── data
-│ ├── roi
-│ │ ├── Climbup
-│ │ └── ...
-│ └── video
-│ │ ├── Alone
-│ │ └── ...
-├── models
-│ ├── ECONet
-│ │ └── ...
-│ └── yolov3
-│ │ └── ...
-├── pipeline
-│ ├── plugin_all.pipeline
-│ ├── plugin_alone.pipeline
-│ ├── plugin_climb.pipeline
-│ ├── plugin_outofbed.pipeline
-│ ├── plugin_overspeed.pipeline
-│ ├── plugin_overstay.pipeline
-│ └── plugin_violentaction.pipeline
-├── plugins
-│ ├── MxpiStackFrame // 堆帧插件
-│ │ ├── CMakeLists.txt
-│ │ ├── MxpiStackFrame.cpp
-│ │ ├── MxpiStackFrame.h
-│ │ ├── BlockingMap.cpp
-│ │ ├── BlockingMap.h
-│ │ └── build.sh
-│ ├── PluginAlone // 单人独处插件
-│ │ ├── CMakeLists.txt
-│ │ ├── PluginAlone.cpp
-│ │ ├── PluginAlone.h
-│ │ └── build.sh
-│ ├── PluginClimb // 攀高检测插件
-│ │ ├── CMakeLists.txt
-│ │ ├── PluginClimb.cpp
-│ │ ├── PluginClimb.h
-│ │ └── build.sh
-│ ├── PluginOutOfBed // 离床检测插件
-│ │ ├── CMakeLists.txt
-│ │ ├── PluginOutOfBed.cpp
-│ │ ├── PluginOutOfBed.h
-│ │ └── build.sh
-│ ├── PluginOverSpeed // 快速移动插件
-│ │ ├── CMakeLists.txt
-│ │ ├── PluginOverSpeed.cpp
-│ │ ├── PluginOverSpeed.h
-│ │ └── build.sh
-│ ├── PluginOverStay // 逗留超时插件
-│ │ ├── CMakeLists.txt
-│ │ ├── PluginOverStay.cpp
-│ │ ├── PluginOverStay.h
-│ │ └── build.sh
-│ ├── PluginCounter // 计时插件
-│ │ ├── CMakeLists.txt
-│ │ ├── PluginCounter.cpp
-│ │ ├── PluginCounter.h
-│ │ └── build.sh
-│ ├── PluginViolentAction // 剧烈运动插件
-│ │ ├── CMakeLists.txt
-│ │ ├── Plugin_ViolentAction.cpp
-│ │ ├── Plugin_ViolentAction.h
-│ │ └── build.sh
-├── main.py
-├── README.md
-└── run.sh
-```
-
-## 4 软件方案介绍
-
-为了完成上述六种应用场景中的行为识别,系统需要检测出同一目标短时间内状态的变化以及是否存在剧烈运动,因此系统中需要包含目标检测、目标跟踪、动作识别与逻辑后处理。其中目标检测模块选取Yolov3,得到行人候选框;目标跟踪模块使用IOU匹配,关联连续帧中的同一目标。将同一目标在连续帧的区域抠图组成视频序列,输入动作识别模块ECONet,模型输出动作类别,判断是否为剧烈运动。逻辑后处理通过判断同一目标在连续帧内的空间位置变化判断难以被定义为运动的其余五种应用场景。系统方案中各模块功能如表1.1 所示。
-
-表1.1 系统方案中个模块功能:
-
-| 序号 | 子系统 | 功能描述 |
-| ---- | -------------- | ------------------------------------------------------------ |
-| 1 | 初始化配置 | 主要用于初始化资源,如线程数量、共享内存等。 |
-| 2 | 视频流 | 从多路IPC相机拉流,并传输入Device进行计算。 |
-| 3 | 视频解码 | 通过硬件(DVPP)对视频进行解码,转换为YUV数据进行后续处理。 |
-| 4 | 图像预处理 | 在进行基于深度神经网络的图像处理之前,需要将图像缩放到固定的尺寸和格式。 |
-| 5 | 目标检测 | 基于深度学习的目标检测算法是该系统的核心模块之一,本方案选用基于Yolov3的目标检测。 |
-| 6 | 目标跟踪 | 基于卡尔曼滤波与匈牙利算法的目标跟踪算法是该系统的核心模块之一,本方案选用IOU匹配。 |
-| 7 | 图像抠图 | 将同一目标在连续帧所在区域抠图,并组成图像序列,输入动作识别模块。 |
-| 8 | 动作识别 | 基于深度学习的动作识别算法是该系统的核心模块之一,本方案选用基于ECONet的动作识别模型。 |
-| 9 | 单人独处后处理 | 当连续m帧只出现一个目标ID时,则判断为单人独处并报警,如果单人独处报警之前n帧内已经报警过,则不重复报警。 |
-| 10 | 快速移动后处理 | 当同一目标在连续m帧中心点平均位移高于阈值v,则判断为快速移动,如果快速移动报警之前n帧内已经报警过,则不重复报警。 |
-| 11 | 逗留超时后处理 | 当同一目标在连续m帧中心点平均位移低于阈值v,则判断为快速移动,如果快速移动报警之前n帧内已经报警过,则不重复报警。 |
-| 12 | 离床检测后处理 | 当同一目标在连续m帧内从给定区域roi内离开,则判断为离床,如果离床报警之前n帧内已经报警过,则不重复报警。 |
-| 13 | 攀高检测后处理 | 当同一目标在连续m帧内从给定区域roi内中心点上升,并且中心点位移大于阈值h,则判断为离床,如果离床报警之前n帧内已经报警过,则不重复报警。 |
-| 14 | 剧烈运动后处理 | 动作识别模块输出类别为关注的动作类别时,则判断为剧烈运动,如果剧烈运动之前n帧内已经报警过,则不重复报警。 |
-
-## 5 准备
-
-**步骤1:** 参考安装教程《mxVision 用户指南》安装 mxVision SDK。
-
-**步骤2:** 配置 mxVision SDK 环境变量。
-
-`export MX_SDK_HOME=${安装路径}/mxVision `
-
-注:本例中mxVision SDK安装路径为 /root/work/MindX_SDK/mxVision。
-
-**步骤3:** 推荐在${MX_SDK_HOME}/samples下创建ActionRecognition根目录,在项目根目录下创建目录models `mkdir models`,分别为yolov3和ECONet创建一个文件夹,将两个离线模型及各自的配置文件放入文件夹下。[下载地址](https://mindx.sdk.obs.cn-north-4.myhuaweicloud.com/mindxsdk-referenceapps%20/contrib/ActionRecognition/models.zip)。创建完成后models文件夹下的目录结构如下:
-
-```
-.models
-├── ECONet
-│ ├── eco_aipp.cfg // 模型转换aipp配置文件
-│ ├── eco_post.cfg // 模型后处理配置文件
-│ ├── ECONet.om // 离线模型
-│ ├── ucf101.names // label文件
-│ ├── ucf101_best.pb // 冻结pb模型
-│ └── trans_pb2om.sh // 模型转换脚本
-├── yolov3
-│ ├── coco.names // label文件
-│ ├── yolov3_tf_bs1_fp16.om // 离线模型
-│ └── yolov3_tf_bs1_fp16.cfg // 模型后处理配置文件
-```
-
-**步骤4:** 编译程序前提需要先交叉编译好第三方依赖库。
-
-**步骤5:** 配置环境变量MX_SDK_HOME:
-
-```bash
-export MX_SDK_HOME=/MindX_SDK/mxVision/
-# 此处MX_SDK_HOME请使用MindX_SDK的实际路径
-```
-
-**步骤6**:在插件代码目录下创建build文件夹,使用cmake命令进行编译,生成.so文件。下面以单人独处插件的编译过程作为范例:
-
-```bash
-## 进入目录 /plugins/plugin_Alone
-## 创建build目录
-mkdir build
-## 使用cmake命令进行编译
-cmake ..
-make -j
-```
-
-或者使用插件代码目录下的build.sh脚本,例:
-
-```bash
-## 前提条件是正确设置export MX_SDK_HOME
-chmod +x build.sh
-./build.sh
-```
-
-编译好的插件会自动存放到SDK的插件库中,可以直接在pipiline中使用。
-
-**步骤7:** 配置pipeline
-
-1. 插件参数介绍
-
- * MxpiStackFrame
-
- | 参数名称 | 参数解释 |
- | :----------- | -------------------- |
- | visionSource | 抠图插件名称 |
- | trackSource | 跟踪插件名称 |
- | frameNum | 跳帧间隔(为1不跳) |
- | timeOut | 某个目标堆帧超时时间 |
- | sleepTime | 检查线程休眠时间 |
-
- * PluginAlone
-
- | 参数名称 | 参数解释 |
- | :------------------ | ---------------------- |
- | dataSourceDetection | 目标检测后处理插件名称 |
- | dataSourceTrack | 跟踪插件名称 |
- | detectThresh | 检测帧数 |
- | detectRatio | 警报帧阈值 |
- | detectSleep | 警报间隔 |
-
- * PluginClimb
-
- | 参数名称 | 参数解释 |
- | :------------------ | ---------------------- |
- | dataSourceTrack | 跟踪插件名称 |
- | dataSourceDetection | 目标检测后处理插件名称 |
- | detectRatio | 警报帧阈值 |
- | filePath | ROI配置txt文件 |
- | detectSleep | 警报间隔 |
- | bufferLength | 检测帧数窗口大小 |
- | highThresh | 高度阈值 |
-
- * PluginCounter
-
- | 参数名称 | 参数解释 |
- | :----------------- | ------------ |
- | dataSourceTrack | 跟踪插件名称 |
- | descriptionMessage | 插件描述信息 |
-
- * PluginOutOfBed
-
- | 参数名称 | 参数解释 |
- | :------------------ | ---------------------- |
- | dataSourceTrack | 跟踪插件名称 |
- | dataSourceDetection | 目标检测后处理插件名称 |
- | configPath | ROI配置txt文件 |
- | detectThresh | 检测帧数窗口大小 |
- | detectSleep | 警报间隔 |
- | detectRatio | 警报帧阈值 |
-
- * PluginOverSpeed
-
- | 参数名称 | 参数解释 |
- | :------------------ | ---------------------- |
- | dataSourceTrack | 跟踪插件名称 |
- | dataSourceDetection | 目标检测后处理插件名称 |
- | speedThresh | 速度阈值 |
- | frames | 检测帧数窗口大小 |
- | detectSleep | 警报间隔 |
-
- * PluginOverStay
-
- | 参数名称 | 参数解释 |
- | :------------------ | ---------------------- |
- | dataSourceTrack | 跟踪插件名称 |
- | dataSourceDetection | 目标检测后处理插件名称 |
- | stayThresh | 逗留时间阈值 |
- | frames | 检测间隔帧数 |
- | distanceThresh | 逗留范围 |
- | detectRatio | 警报帧阈值 |
- | detectSleep | 警报间隔 |
-
- * PluginViolentAction
-
- | 参数名称 | 参数解释 |
- | :-------------- | --------------------- |
- | classSource | 分类后处理插件名称 |
- | filePath | 感兴趣动作类别txt文件 |
- | detectSleep | 警报间隔 |
- | actionThreshold | 动作阈值 |
-
-2. 配置范例
-
- ```
- ## PluginClimb
- "mxpi_pluginclimb0": {
- "props": {
- "dataSourceTrack": "mxpi_motsimplesort0",
- "dataSourceDetection": "mxpi_objectpostprocessor0",
- "detectRatio": "0.6",
- "filePath": "./data/roi/Climbup/*.txt",
- "detectSleep": "30",
- "bufferLength": "8",
- "highThresh": "10"
- },
- "factory": "mxpi_pluginclimb",
- "next": "mxpi_dataserialize0"
- }
- ## /*Yolov3*/
- "mxpi_tensorinfer0":{
- "props": {
- "dataSource": "mxpi_imageresize0",
- "modelPath": "./models/yolov3/yolov3_tf_bs1_fp16.om"
- },
- "factory": "mxpi_tensorinfer",
- "next": "mxpi_objectpostprocessor0"
- },
- "mxpi_objectpostprocessor0": {
- "props": {
- "dataSource": "mxpi_tensorinfer0",
- "funcLanguage":"c++",
- "postProcessConfigPath": "./models/yolov3/yolov3_tf_bs1_fp16.cfg",
- "labelPath": "./models/yolov3/coco.names",
- "postProcessLibPath": "../../lib/modelpostprocessors/libyolov3postprocess.so"
- },
- "factory": "mxpi_objectpostprocessor",
- "next": "mxpi_distributor0"
- },
- ## ECONet
- "mxpi_tensorinfer1":{
- "props": {
- "dataSource": "mxpi_stackframe0",
- "skipModelCheck": "1",
- "modelPath": "./models/ECONet/ECONet.om"
- },
- "factory": "mxpi_tensorinfer",
- "next": "mxpi_classpostprocessor0"
- },
- "mxpi_classpostprocessor0":{
- "props": {
- "dataSource": "mxpi_tensorinfer1",
- "postProcessConfigPath": "./models/ECONet/eco_post.cfg",
- "labelPath":"./models/ECONet/ucf101.names",
- "postProcessLibPath":"../../lib/modelpostprocessors/libresnet50postprocess.so"
- },
- "factory": "mxpi_classpostprocessor",
- "next": "mxpi_violentaction0"
- },
- ```
-
- 根据所需场景,配置pipeline文件,调整路径参数以及插件阈值参数。例如"filePath"字段替换为roi/Climb目录下的感兴趣区域txt文件,“postProcessLibPath”字段是SDK模型后处理插件库路径。
-
-3. 将pipeline中“rtspUrl”字段值替换为可用的 rtsp 流源地址(需要自行准备可用的视频流,视频流格式为H264),[自主搭建rtsp视频流教程](###7.3-数据下载与RTSP)。
-
-**步骤8:** 在main.py中,修改pipeline路径、对应的流名称以及需要获取结果的插件名称。
-
-```python
-## 插件位置
-with open("./pipeline/plugin_outofbed.pipeline", 'rb') as f:
- pipelineStr = f.read()
-## pipeline中的流名称
-streamName = b'classification+detection'
-## 想要获取结果的插件名称
-key = b'mxpi_pluginalone0'
-```
-
-## 6 运行
-
-修改 run.sh 文件中的环境路径和项目路径。
-
-```bash
-export MX_SDK_HOME=${CUR_PATH}/../../..
-## 注意当前目录CUR_PATH与MX_SDK_HOME环境目录的相对位置
-```
-
-直接运行
-
-```bash
-chmod +x run.sh
-bash run.sh
-```
-
-## 7 常见问题
-### 7.1 未配置ROI
-
-#### 问题描述:
-
-攀高检测与离床检测出现如下报错:
-```bash
-terminate called after throwing an instance of 'cv::Exception'
- what(): OpenCV(4.2.0) /usr1/workspace/MindX_SDK_Multi_DailyBuild/opensource/opensource-scl7/opencv/modules/imgproc/src/geometry.cpp:103: error: (-215:Assertion failed) total >= 0 && (depth == CV_32S || depth == CV_32F) in function 'pointPolygonTest'
-```
-#### 解决方案:
-在pipeline处没有配置roi区域的txt文件,进行配置即可。
-
-示例:
-
-```json
-"mxpi_pluginoutofbed0": {
- "props": {
- "dataSourceTrack": "mxpi_motsimplesort0",
- "dataSourceDetect": "mxpi_objectpostprocessor0",
- "configPath": "./data/roi/OutofBed/*.txt",
- "detectThresh": "8",
- "detectSleep": "15",
- "detectRatio": "0.25"
- },
- "factory": "mxpi_pluginoutofbed",
- "next": "mxpi_dataserialize0"
-}
-```
-
-### 7.2 模型路径配置
-
-#### 问题描述:
-
-检测过程中用到的模型以及模型后处理插件需配置路径属性。
-
-#### 后处理插件配置范例:
-
-```json
-"mxpi_objectpostprocessor0": {
- "props": {
- "dataSource": "mxpi_tensorinfer0",
- "funcLanguage":"c++",
- "postProcessConfigPath": "./models/yolov3/yolov3_tf_bs1_fp16.cfg",
- "labelPath": "./models/yolov3/coco.names",
- "postProcessLibPath": "../../../lib/modelpostprocessors/libyolov3postprocess.so"
- },
- "factory": "mxpi_objectpostprocessor",
- "next": "mxpi_motsimplesort0"
-}
-```
-
-### 7.3 数据下载与RTSP
-
-H264视频文件及ROI文件:[下载地址](https://mindx.sdk.obs.cn-north-4.myhuaweicloud.com/mindxsdk-referenceapps%20/contrib/ActionRecognition/data.zip) ;
-
-RTSP取流地址(可以从网络摄像机获取,也可通过Live555等工具将本地视频文 件转换为rtsp流)。自主搭建RTSP拉流教程:[live555链接](https://gitee.com/ascend/docs-openmind/blob/master/guide/mindx/sdk/tutorials/reference_material/Live555%E7%A6%BB%E7%BA%BF%E8%A7%86%E9%A2%91%E8%BD%ACRTSP%E8%AF%B4%E6%98%8E%E6%96%87%E6%A1%A3.md),需要注意的是在搭建RTSP时,使用./genMakefiles 命令生成编译文件时,输入的参数是根据cofig.<后缀>获取的,与服务器架构等有关。
-
-RTSP视频拉流插件配置范例:
-
-```json
-"mxpi_rtspsrc0": {
- "props": {
- "rtspUrl": "rtsp_Url"
- },
- "factory": "mxpi_rtspsrc",
- "next": "mxpi_videodecoder0"
-}
-```
-
-其中rtsp_Url的格式是 rtsp:://host:port/Data,host:port/路径映射到mediaServer/目录下,Data为视频文件的路径。
-
-RTSP拉流教程:[live555链接](https://gitee.com/ascend/docs-openmind/blob/master/guide/mindx/sdk/tutorials/reference_material/Live555%E7%A6%BB%E7%BA%BF%E8%A7%86%E9%A2%91%E8%BD%ACRTSP%E8%AF%B4%E6%98%8E%E6%96%87%E6%A1%A3.md)中第七步视频循环推流,按照提示修改cpp文件可以使自主搭建的rtsp循环推流,如果不作更改,则为有限的视频流;同时第六步高分辨率帧花屏,修改mediaServer/DynamicRTSPServer.cpp文件,将OutPacketBuffer::maxSize增大,例如"500000",避免出现”The input frame data was too large for our buffer“问题,导致丢帧。修改完后,需要重新运行以下命令:
-
-```cmake
-./genMakefiles
-make
-```
-
-### 7.4 运行Shell脚本
-
-在linux平台下运行shell脚本时,例如build.sh/run.sh,出现如下错误:
-
-```bash
-build.sh: Line 15: $'\r': command not found
-```
-
-是由于不同系统平台之间的行结束符不同,使用如下命令去除shell脚本的特殊字符:
-
-```bash
-sed -i 's/\r//g' xxx.sh
-```
-
-## 8 模型转换
-
-本项目中用到的模型有:ECONet,yolov3 [备份链接](https://mindx.sdk.obs.cn-north-4.myhuaweicloud.com/mindxsdk-referenceapps%20/contrib/ActionRecognition/models.zip)
-
-yolov3模型下载参考华为昇腾社区[ModelZoo](https://mindx.sdk.obs.cn-north-4.myhuaweicloud.com/mindxsdk-referenceapps%20/contrib/ActionRecognition/ATC%20YOLOv3%28FP16%29%20from%20TensorFlow%20-%20Ascend310.zip)
-使用以下命令进行转换,请注意aipp配置文件名,此处使用的为自带sample中的相关文件({Mind_SDK安装路径}/mxVision/samples/mxVision/models/yolov3/)
-```bash
-atc --model=./yolov3_tf.pb --framework=3 --output=./yolov3_tf_bs1_fp16 --soc_version=Ascend310 --insert_op_conf=./aipp_yolov3_416_416.aippconfig --input_shape="input:1,416,416,3" --out_nodes="yolov3/yolov3_head/Conv_6/BiasAdd:0;yolov3/yolov3_head/Conv_14/BiasAdd:0;yolov3/yolov3_head/Conv_22/BiasAdd:0"
-```
-
-ECONet离线模型转换参考 [昇腾Gitee](https://www.hiascend.com/zh/software/modelzoo/models/detail/1/0d7d0413cf89404d882d69e695a9bc4b/1):下载冻结pb模型ucf101_best.pb,编辑trans_pb2om.sh文件,将--model 配置为ECONet模型所在目录,--output配置为模型输出路径,--insert_op_conf配置为aipp文件路径,在命令行输入
-
-```bash
-chmod +x trans_pb2om.sh
-./trans_pb2om.sh
-```
-
-完成ECONet模型转换。模型下载或转换完成后,按照目录结构放置模型。
+# ActionRecognition
+
+## 1 介绍
+
+本开发样例演示动作识别系统 ActionRecognition,供用户参考。
+本系统基于mxVision SDK进行开发,以昇腾Atlas 300卡为主要的硬件平台,主要应用于单人独处、逗留超时、快速移动、剧烈运动、离床检测、攀高检测六种应用场景。
+
+1. 单人独处:识别出单个人独处场景后报警。
+2. 逗留超时:识别出单人或多人在区域内长时间逗留的情况并发出警报。
+3. 快速移动:检测出视频中单人或多人行进速度大于阈值的情况,并发出警报。
+4. 剧烈运动:检测到视频流中有剧烈运动并进行报警。
+5. 离床检测:检测出视频中行人离开指定区域的情况并报警。
+6. 攀高检测:检测出行人中心点向上移动的情况,并发出警报。
+
+## 2 环境依赖
+
+* 支持的硬件形态和操作系统版本
+
+ | 硬件形态 | 操作系统版本 |
+ | ------------------------------------- | -------------- |
+  | x86_64+Atlas 300I 推理卡(型号3010) | Ubuntu 18.04.1 |
+  | x86_64+Atlas 300I 推理卡(型号3010) | CentOS 7.6 |
+  | ARM+Atlas 300I 推理卡(型号3000) | Ubuntu 18.04.1 |
+  | ARM+Atlas 300I 推理卡(型号3000) | CentOS 7.6 |
+
+* 软件依赖
+
+ | 软件名称 | 版本 |
+ | -------- | ----- |
+  | cmake    | 3.5+  |
+ | mxVision | 2.0.4 |
+ | Python | 3.9.2 |
+ | OpenCV | 3.4.0 |
+ | gcc | 7.5.0 |
+ | ffmpeg | 4.3.2 |
+
+## 3 代码主要目录介绍
+
+本Sample工程名称为ActionRecognition,工程目录如下所示:
+
+```
+.
+├── data
+│ ├── roi
+│ │ ├── Climbup
+│ │ └── ...
+│ └── video
+│   ├── Alone
+│   └── ...
+├── models
+│ ├── ECONet
+│ │ └── ...
+│ └── yolov3
+│   └── ...
+├── pipeline
+│ ├── plugin_all.pipeline
+│ ├── plugin_alone.pipeline
+│ ├── plugin_climb.pipeline
+│ ├── plugin_outofbed.pipeline
+│ ├── plugin_overspeed.pipeline
+│ ├── plugin_overstay.pipeline
+│ └── plugin_violentaction.pipeline
+├── plugins
+│ ├── MxpiStackFrame // 堆帧插件
+│ │ ├── CMakeLists.txt
+│ │ ├── MxpiStackFrame.cpp
+│ │ ├── MxpiStackFrame.h
+│ │ ├── BlockingMap.cpp
+│ │ ├── BlockingMap.h
+│ │ └── build.sh
+│ ├── PluginAlone // 单人独处插件
+│ │ ├── CMakeLists.txt
+│ │ ├── PluginAlone.cpp
+│ │ ├── PluginAlone.h
+│ │ └── build.sh
+│ ├── PluginClimb // 攀高检测插件
+│ │ ├── CMakeLists.txt
+│ │ ├── PluginClimb.cpp
+│ │ ├── PluginClimb.h
+│ │ └── build.sh
+│ ├── PluginOutOfBed // 离床检测插件
+│ │ ├── CMakeLists.txt
+│ │ ├── PluginOutOfBed.cpp
+│ │ ├── PluginOutOfBed.h
+│ │ └── build.sh
+│ ├── PluginOverSpeed // 快速移动插件
+│ │ ├── CMakeLists.txt
+│ │ ├── PluginOverSpeed.cpp
+│ │ ├── PluginOverSpeed.h
+│ │ └── build.sh
+│ ├── PluginOverStay // 逗留超时插件
+│ │ ├── CMakeLists.txt
+│ │ ├── PluginOverStay.cpp
+│ │ ├── PluginOverStay.h
+│ │ └── build.sh
+│ ├── PluginCounter // 计时插件
+│ │ ├── CMakeLists.txt
+│ │ ├── PluginCounter.cpp
+│ │ ├── PluginCounter.h
+│ │ └── build.sh
+│ └── PluginViolentAction // 剧烈运动插件
+│   ├── CMakeLists.txt
+│   ├── Plugin_ViolentAction.cpp
+│   ├── Plugin_ViolentAction.h
+│   └── build.sh
+├── main.py
+├── README.md
+└── run.sh
+```
+
+## 4 软件方案介绍
+
+为了完成上述六种应用场景中的行为识别,系统需要检测出同一目标短时间内状态的变化以及是否存在剧烈运动,因此系统中需要包含目标检测、目标跟踪、动作识别与逻辑后处理。其中目标检测模块选取Yolov3,得到行人候选框;目标跟踪模块使用IOU匹配,关联连续帧中的同一目标。将同一目标在连续帧的区域抠图组成视频序列,输入动作识别模块ECONet,模型输出动作类别,判断是否为剧烈运动。逻辑后处理则通过同一目标在连续帧内的空间位置变化,识别难以用动作类别定义的其余五种应用场景。系统方案中各模块功能如表1.1所示。
+
+表1.1 系统方案中各模块功能:
+
+| 序号 | 子系统 | 功能描述 |
+| ---- | -------------- | ------------------------------------------------------------ |
+| 1 | 初始化配置 | 主要用于初始化资源,如线程数量、共享内存等。 |
+| 2 | 视频流 | 从多路IPC相机拉流,并传输入Device进行计算。 |
+| 3 | 视频解码 | 通过硬件(DVPP)对视频进行解码,转换为YUV数据进行后续处理。 |
+| 4 | 图像预处理 | 在进行基于深度神经网络的图像处理之前,需要将图像缩放到固定的尺寸和格式。 |
+| 5 | 目标检测 | 基于深度学习的目标检测算法是该系统的核心模块之一,本方案选用基于Yolov3的目标检测。 |
+| 6 | 目标跟踪 | 基于卡尔曼滤波与匈牙利算法的目标跟踪算法是该系统的核心模块之一,本方案选用IOU匹配。 |
+| 7 | 图像抠图 | 将同一目标在连续帧所在区域抠图,并组成图像序列,输入动作识别模块。 |
+| 8 | 动作识别 | 基于深度学习的动作识别算法是该系统的核心模块之一,本方案选用基于ECONet的动作识别模型。 |
+| 9 | 单人独处后处理 | 当连续m帧只出现一个目标ID时,则判断为单人独处并报警,如果单人独处报警之前n帧内已经报警过,则不重复报警。 |
+| 10 | 快速移动后处理 | 当同一目标在连续m帧中心点平均位移高于阈值v,则判断为快速移动,如果快速移动报警之前n帧内已经报警过,则不重复报警。 |
+| 11   | 逗留超时后处理 | 当同一目标在连续m帧中心点平均位移低于阈值v,则判断为逗留超时,如果逗留超时报警之前n帧内已经报警过,则不重复报警。 |
+| 12 | 离床检测后处理 | 当同一目标在连续m帧内从给定区域roi内离开,则判断为离床,如果离床报警之前n帧内已经报警过,则不重复报警。 |
+| 13   | 攀高检测后处理 | 当同一目标在连续m帧内从给定区域roi内中心点上升,并且中心点位移大于阈值h,则判断为攀高,如果攀高报警之前n帧内已经报警过,则不重复报警。 |
+| 14 | 剧烈运动后处理 | 动作识别模块输出类别为关注的动作类别时,则判断为剧烈运动,如果剧烈运动之前n帧内已经报警过,则不重复报警。 |
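
上表中各后处理子系统共用同一套报警模式:在连续m帧的滑动窗口内统计指标、与阈值比较、报警后n帧内不重复报警。以快速移动为例,可用如下Python草图示意(仅为原理说明,类名与参数均为假设,并非SDK插件的实际实现):

```python
from collections import defaultdict, deque


class OverSpeedDetector:
    """快速移动后处理的简化示意(假设实现,非SDK插件):
    同一目标在连续 m 帧内中心点平均位移高于阈值 v 时报警,
    报警后 n 帧内不再重复报警。"""

    def __init__(self, m=8, v=1.0, n=270):
        self.m, self.v, self.n = m, v, n
        self.history = defaultdict(lambda: deque(maxlen=m))  # track_id -> 最近m帧中心点
        self.last_alarm = {}                                 # track_id -> 上次报警帧号

    def update(self, frame_id, track_id, center):
        """输入某跟踪目标在 frame_id 帧的中心点,返回是否触发报警。"""
        hist = self.history[track_id]
        hist.append(center)
        if len(hist) < self.m:
            return False
        pts = list(hist)
        # 连续 m 帧的平均位移
        disp = sum(
            ((x2 - x1) ** 2 + (y2 - y1) ** 2) ** 0.5
            for (x1, y1), (x2, y2) in zip(pts, pts[1:])
        ) / (self.m - 1)
        last = self.last_alarm.get(track_id, -self.n)
        if disp > self.v and frame_id - last >= self.n:
            self.last_alarm[track_id] = frame_id
            return True
        return False
```

逗留超时只需把判定条件改为平均位移低于阈值,其余场景同理替换窗口内统计的指标即可。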
+
+## 5 准备
+
+**步骤1:** 参考安装教程《mxVision 用户指南》安装 mxVision SDK。
+
+**步骤2:** 配置 mxVision SDK 环境变量。
+
+`export MX_SDK_HOME=${安装路径}/mxVision `
+
+注:本例中mxVision SDK安装路径为 /root/work/MindX_SDK/mxVision。
+
+**步骤3:** 推荐在${MX_SDK_HOME}/samples下创建项目根目录ActionRecognition,并在其中创建models目录(`mkdir models`);再分别为yolov3和ECONet各创建一个文件夹,将两个离线模型及各自的配置文件放入对应文件夹下([下载地址](https://mindx.sdk.obs.cn-north-4.myhuaweicloud.com/mindxsdk-referenceapps%20/contrib/ActionRecognition/models.zip))。创建完成后models文件夹下的目录结构如下:
+
+```
+models
+├── ECONet
+│ ├── eco_aipp.cfg // 模型转换aipp配置文件
+│ ├── eco_post.cfg // 模型后处理配置文件
+│ ├── ECONet.om // 离线模型
+│ ├── ucf101.names // label文件
+│ ├── ucf101_best.pb // 冻结pb模型
+│ └── trans_pb2om.sh // 模型转换脚本
+├── yolov3
+│ ├── coco.names // label文件
+│ ├── yolov3_tf_bs1_fp16.om // 离线模型
+│ └── yolov3_tf_bs1_fp16.cfg // 模型后处理配置文件
+```
+
+**步骤4:** 编译程序的前提是已交叉编译好所需的第三方依赖库。
+
+**步骤5:** 配置环境变量MX_SDK_HOME:
+
+```bash
+export MX_SDK_HOME=/MindX_SDK/mxVision/
+# 此处MX_SDK_HOME请使用MindX_SDK的实际路径
+```
+
+**步骤6:** 在插件代码目录下创建build文件夹,使用cmake命令进行编译,生成.so文件。下面以单人独处插件的编译过程作为范例:
+
+```bash
+## 进入插件目录 plugins/PluginAlone
+## 创建并进入build目录
+mkdir build
+cd build
+## 使用cmake命令进行编译
+cmake ..
+make -j
+```
+
+或者使用插件代码目录下的build.sh脚本,例:
+
+```bash
+## 前提条件是正确设置export MX_SDK_HOME
+chmod +x build.sh
+./build.sh
+```
+
+编译好的插件会自动存放到SDK的插件库中,可以直接在pipeline中使用。
+
+**步骤7:** 配置pipeline
+
+1. 插件参数介绍
+
+ * MxpiStackFrame
+
+ | 参数名称 | 参数解释 |
+ | :----------- | -------------------- |
+ | visionSource | 抠图插件名称 |
+ | trackSource | 跟踪插件名称 |
+ | frameNum | 跳帧间隔(为1不跳) |
+ | timeOut | 某个目标堆帧超时时间 |
+ | sleepTime | 检查线程休眠时间 |
+
+ * PluginAlone
+
+ | 参数名称 | 参数解释 |
+ | :------------------ | ---------------------- |
+ | dataSourceDetection | 目标检测后处理插件名称 |
+ | dataSourceTrack | 跟踪插件名称 |
+ | detectThresh | 检测帧数 |
+ | detectRatio | 警报帧阈值 |
+ | detectSleep | 警报间隔 |
+
+ * PluginClimb
+
+ | 参数名称 | 参数解释 |
+ | :------------------ | ---------------------- |
+ | dataSourceTrack | 跟踪插件名称 |
+ | dataSourceDetection | 目标检测后处理插件名称 |
+ | detectRatio | 警报帧阈值 |
+ | filePath | ROI配置txt文件 |
+ | detectSleep | 警报间隔 |
+ | bufferLength | 检测帧数窗口大小 |
+ | highThresh | 高度阈值 |
+
+ * PluginCounter
+
+ | 参数名称 | 参数解释 |
+ | :----------------- | ------------ |
+ | dataSourceTrack | 跟踪插件名称 |
+ | descriptionMessage | 插件描述信息 |
+
+ * PluginOutOfBed
+
+ | 参数名称 | 参数解释 |
+ | :------------------ | ---------------------- |
+ | dataSourceTrack | 跟踪插件名称 |
+ | dataSourceDetection | 目标检测后处理插件名称 |
+ | configPath | ROI配置txt文件 |
+ | detectThresh | 检测帧数窗口大小 |
+ | detectSleep | 警报间隔 |
+ | detectRatio | 警报帧阈值 |
+
+ * PluginOverSpeed
+
+ | 参数名称 | 参数解释 |
+ | :------------------ | ---------------------- |
+ | dataSourceTrack | 跟踪插件名称 |
+ | dataSourceDetection | 目标检测后处理插件名称 |
+ | speedThresh | 速度阈值 |
+ | frames | 检测帧数窗口大小 |
+ | detectSleep | 警报间隔 |
+
+ * PluginOverStay
+
+ | 参数名称 | 参数解释 |
+ | :------------------ | ---------------------- |
+ | dataSourceTrack | 跟踪插件名称 |
+ | dataSourceDetection | 目标检测后处理插件名称 |
+ | stayThresh | 逗留时间阈值 |
+ | frames | 检测间隔帧数 |
+ | distanceThresh | 逗留范围 |
+ | detectRatio | 警报帧阈值 |
+ | detectSleep | 警报间隔 |
+
+ * PluginViolentAction
+
+ | 参数名称 | 参数解释 |
+ | :-------------- | --------------------- |
+ | classSource | 分类后处理插件名称 |
+ | filePath | 感兴趣动作类别txt文件 |
+ | detectSleep | 警报间隔 |
+ | actionThreshold | 动作阈值 |
+
+2. 配置范例
+
+ ```
+ ## PluginClimb
+ "mxpi_pluginclimb0": {
+ "props": {
+ "dataSourceTrack": "mxpi_motsimplesort0",
+ "dataSourceDetection": "mxpi_objectpostprocessor0",
+ "detectRatio": "0.6",
+ "filePath": "./data/roi/Climbup/*.txt",
+ "detectSleep": "30",
+ "bufferLength": "8",
+ "highThresh": "10"
+ },
+ "factory": "mxpi_pluginclimb",
+ "next": "mxpi_dataserialize0"
+ }
+    ## Yolov3
+ "mxpi_tensorinfer0":{
+ "props": {
+ "dataSource": "mxpi_imageresize0",
+ "modelPath": "./models/yolov3/yolov3_tf_bs1_fp16.om"
+ },
+ "factory": "mxpi_tensorinfer",
+ "next": "mxpi_objectpostprocessor0"
+ },
+ "mxpi_objectpostprocessor0": {
+ "props": {
+ "dataSource": "mxpi_tensorinfer0",
+ "funcLanguage":"c++",
+ "postProcessConfigPath": "./models/yolov3/yolov3_tf_bs1_fp16.cfg",
+ "labelPath": "./models/yolov3/coco.names",
+ "postProcessLibPath": "../../lib/modelpostprocessors/libyolov3postprocess.so"
+ },
+ "factory": "mxpi_objectpostprocessor",
+ "next": "mxpi_distributor0"
+ },
+ ## ECONet
+ "mxpi_tensorinfer1":{
+ "props": {
+ "dataSource": "mxpi_stackframe0",
+ "skipModelCheck": "1",
+ "modelPath": "./models/ECONet/ECONet.om"
+ },
+ "factory": "mxpi_tensorinfer",
+ "next": "mxpi_classpostprocessor0"
+ },
+ "mxpi_classpostprocessor0":{
+ "props": {
+ "dataSource": "mxpi_tensorinfer1",
+ "postProcessConfigPath": "./models/ECONet/eco_post.cfg",
+ "labelPath":"./models/ECONet/ucf101.names",
+ "postProcessLibPath":"../../lib/modelpostprocessors/libresnet50postprocess.so"
+ },
+ "factory": "mxpi_classpostprocessor",
+ "next": "mxpi_violentaction0"
+ },
+ ```
+
+    根据所需场景,配置pipeline文件,调整路径参数以及插件阈值参数。例如"filePath"字段替换为roi/Climbup目录下的感兴趣区域txt文件,"postProcessLibPath"字段为SDK模型后处理插件库路径。
+
+3. 将pipeline中"rtspUrl"字段值替换为可用的 rtsp 流源地址(需要自行准备可用的视频流,视频流格式为H264),自主搭建rtsp视频流教程参见 7.3 数据下载与RTSP。
+
+**步骤8:** 在main.py中,修改pipeline路径、对应的流名称以及需要获取结果的插件名称。
+
+```python
+## 插件位置
+with open("./pipeline/plugin_outofbed.pipeline", 'rb') as f:
+ pipelineStr = f.read()
+## pipeline中的流名称
+streamName = b'classification+detection'
+## 想要获取结果的插件名称
+key = b'mxpi_pluginalone0'
+```
+
+## 6 运行
+
+修改 run.sh 文件中的环境路径和项目路径。
+
+```bash
+export MX_SDK_HOME=${CUR_PATH}/../../..
+## 注意当前目录CUR_PATH与MX_SDK_HOME环境目录的相对位置
+```
+
+直接运行
+
+```bash
+chmod +x run.sh
+bash run.sh
+```
+
+## 7 常见问题
+### 7.1 未配置ROI
+
+#### 问题描述:
+
+攀高检测与离床检测出现如下报错:
+```bash
+terminate called after throwing an instance of 'cv::Exception'
+ what(): OpenCV(4.2.0) /usr1/workspace/MindX_SDK_Multi_DailyBuild/opensource/opensource-scl7/opencv/modules/imgproc/src/geometry.cpp:103: error: (-215:Assertion failed) total >= 0 && (depth == CV_32S || depth == CV_32F) in function 'pointPolygonTest'
+```
+#### 解决方案:
+该报错是由于pipeline中未配置ROI区域的txt文件导致,补充相应配置即可。
+
+示例:
+
+```json
+"mxpi_pluginoutofbed0": {
+ "props": {
+ "dataSourceTrack": "mxpi_motsimplesort0",
+ "dataSourceDetect": "mxpi_objectpostprocessor0",
+ "configPath": "./data/roi/OutofBed/*.txt",
+ "detectThresh": "8",
+ "detectSleep": "15",
+ "detectRatio": "0.25"
+ },
+ "factory": "mxpi_pluginoutofbed",
+ "next": "mxpi_dataserialize0"
+}
+```
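
各ROI相关插件(攀高、离床)会将跟踪目标的中心点与配置的多边形区域做包含判断(从7.1的报错可知内部基于OpenCV的pointPolygonTest)。其判定原理可用如下纯Python射线法示意(仅为原理草图,函数为说明用假设,并非SDK插件实现):

```python
def point_in_roi(point, polygon):
    """射线法判断点是否位于多边形ROI内。
    polygon 为 [(x, y), ...] 顶点列表,对应ROI txt文件中的多边形顶点。"""
    x, y = point
    inside = False
    j = len(polygon) - 1
    for i, (xi, yi) in enumerate(polygon):
        xj, yj = polygon[j]
        # 从待测点向右发出水平射线,每与一条边 (j, i) 相交一次则翻转一次状态
        if (yi > y) != (yj > y) and x < (xj - xi) * (y - yi) / (yj - yi) + xi:
            inside = not inside
        j = i
    return inside
```

例如矩形床位区域 [(0, 0), (10, 0), (10, 10), (0, 10)]:点 (5, 5) 位于区域内,点 (15, 5) 不在区域内;离床检测即据此判断跟踪目标中心点是否离开区域。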
+
+### 7.2 模型路径配置
+
+#### 问题描述:
+
+检测过程中用到的模型以及模型后处理插件的路径属性配置不正确,会导致模型或后处理库加载失败。
+
+#### 后处理插件配置范例:
+
+```json
+"mxpi_objectpostprocessor0": {
+ "props": {
+ "dataSource": "mxpi_tensorinfer0",
+ "funcLanguage":"c++",
+ "postProcessConfigPath": "./models/yolov3/yolov3_tf_bs1_fp16.cfg",
+ "labelPath": "./models/yolov3/coco.names",
+ "postProcessLibPath": "../../../lib/modelpostprocessors/libyolov3postprocess.so"
+ },
+ "factory": "mxpi_objectpostprocessor",
+ "next": "mxpi_motsimplesort0"
+}
+```
+
+### 7.3 数据下载与RTSP
+
+H264视频文件及ROI文件:[下载地址](https://mindx.sdk.obs.cn-north-4.myhuaweicloud.com/mindxsdk-referenceapps%20/contrib/ActionRecognition/data.zip) ;
+
+RTSP取流地址(可以从网络摄像机获取,也可通过Live555等工具将本地视频文件转换为rtsp流)。自主搭建RTSP拉流教程:[live555链接](https://gitee.com/ascend/docs-openmind/blob/master/guide/mindx/sdk/tutorials/reference_material/Live555%E7%A6%BB%E7%BA%BF%E8%A7%86%E9%A2%91%E8%BD%ACRTSP%E8%AF%B4%E6%98%8E%E6%96%87%E6%A1%A3.md)。需要注意的是,搭建RTSP时使用./genMakefiles 命令生成编译文件,输入的参数根据config.<后缀>确定,与服务器架构等有关。
+
+RTSP视频拉流插件配置范例:
+
+```json
+"mxpi_rtspsrc0": {
+ "props": {
+ "rtspUrl": "rtsp_Url"
+ },
+ "factory": "mxpi_rtspsrc",
+ "next": "mxpi_videodecoder0"
+}
+```
+
+其中rtsp_Url的格式是 rtsp://host:port/Data,host:port/路径映射到mediaServer/目录下,Data为视频文件的路径。
+
+RTSP拉流教程:[live555链接](https://gitee.com/ascend/docs-openmind/blob/master/guide/mindx/sdk/tutorials/reference_material/Live555%E7%A6%BB%E7%BA%BF%E8%A7%86%E9%A2%91%E8%BD%ACRTSP%E8%AF%B4%E6%98%8E%E6%96%87%E6%A1%A3.md)中第七步"视频循环推流":按照提示修改cpp文件可以使自主搭建的rtsp流循环推送;如果不作更改,则为有限长度的视频流。同时参考第六步"高分辨率帧花屏":修改mediaServer/DynamicRTSPServer.cpp文件,将OutPacketBuffer::maxSize增大(例如500000),避免出现"The input frame data was too large for our buffer"问题导致丢帧。修改完后,需要重新运行以下命令:
+
+```bash
+./genMakefiles
+make
+```
+
+### 7.4 运行Shell脚本
+
+在Linux平台下运行shell脚本(例如build.sh/run.sh)时,出现如下错误:
+
+```bash
+build.sh: Line 15: $'\r': command not found
+```
+
+是由于不同系统平台之间的行结束符不同,使用如下命令去除shell脚本的特殊字符:
+
+```bash
+sed -i 's/\r//g' xxx.sh
+```
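
若环境中没有sed,也可以用Python完成同样的行尾符清理(示意脚本,函数名为假设):

```python
from pathlib import Path


def strip_crlf(path):
    """将脚本中的 Windows 行结束符 \\r\\n(及残留的 \\r)统一替换为 Unix 的 \\n。"""
    p = Path(path)
    data = p.read_bytes().replace(b"\r\n", b"\n").replace(b"\r", b"\n")
    p.write_bytes(data)
```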
+
+## 8 模型转换
+
+本项目中用到的模型有:ECONet、yolov3([备份链接](https://mindx.sdk.obs.cn-north-4.myhuaweicloud.com/mindxsdk-referenceapps%20/contrib/ActionRecognition/models.zip))。
+
+yolov3模型下载参考华为昇腾社区[ModelZoo](https://mindx.sdk.obs.cn-north-4.myhuaweicloud.com/mindxsdk-referenceapps%20/contrib/ActionRecognition/ATC%20YOLOv3%28FP16%29%20from%20TensorFlow%20-%20Ascend310.zip)。
+使用以下命令进行转换。请注意aipp配置文件名,此处使用的是SDK自带sample中的相关文件({MindX SDK安装路径}/mxVision/samples/mxVision/models/yolov3/):
+```bash
+atc --model=./yolov3_tf.pb --framework=3 --output=./yolov3_tf_bs1_fp16 --soc_version=Ascend310 --insert_op_conf=./aipp_yolov3_416_416.aippconfig --input_shape="input:1,416,416,3" --out_nodes="yolov3/yolov3_head/Conv_6/BiasAdd:0;yolov3/yolov3_head/Conv_14/BiasAdd:0;yolov3/yolov3_head/Conv_22/BiasAdd:0"
+```
+
+ECONet离线模型转换参考[昇腾Gitee](https://www.hiascend.com/zh/software/modelzoo/models/detail/1/0d7d0413cf89404d882d69e695a9bc4b/1):下载冻结pb模型ucf101_best.pb,编辑trans_pb2om.sh文件,将--model配置为ECONet模型所在目录,--output配置为模型输出路径,--insert_op_conf配置为aipp文件路径,然后在命令行输入:
+
+```bash
+chmod +x trans_pb2om.sh
+./trans_pb2om.sh
+```
+
+完成ECONet模型转换。模型下载或转换完成后,按照目录结构放置模型。
diff --git a/contrib/ActionRecognition/build.sh b/mxVision/mxVision-referenceapps/ActionRecognition/build.sh
similarity index 100%
rename from contrib/ActionRecognition/build.sh
rename to mxVision/mxVision-referenceapps/ActionRecognition/build.sh
diff --git a/contrib/.gitkeep b/mxVision/mxVision-referenceapps/ActionRecognition/data/roi/Climbup/.gitkeep
similarity index 100%
rename from contrib/.gitkeep
rename to mxVision/mxVision-referenceapps/ActionRecognition/data/roi/Climbup/.gitkeep
diff --git a/contrib/ActionRecognition/data/roi/Climbup/.gitkeep b/mxVision/mxVision-referenceapps/ActionRecognition/data/roi/OutOfBed/.gitkeep
similarity index 100%
rename from contrib/ActionRecognition/data/roi/Climbup/.gitkeep
rename to mxVision/mxVision-referenceapps/ActionRecognition/data/roi/OutOfBed/.gitkeep
diff --git a/contrib/ActionRecognition/data/roi/OutOfBed/.gitkeep b/mxVision/mxVision-referenceapps/ActionRecognition/data/roi/ViolentAction/.gitkeep
similarity index 100%
rename from contrib/ActionRecognition/data/roi/OutOfBed/.gitkeep
rename to mxVision/mxVision-referenceapps/ActionRecognition/data/roi/ViolentAction/.gitkeep
diff --git a/contrib/ActionRecognition/data/roi/ViolentAction/.gitkeep b/mxVision/mxVision-referenceapps/ActionRecognition/data/video/Alone/.gitkeep
similarity index 100%
rename from contrib/ActionRecognition/data/roi/ViolentAction/.gitkeep
rename to mxVision/mxVision-referenceapps/ActionRecognition/data/video/Alone/.gitkeep
diff --git a/contrib/ActionRecognition/data/video/Alone/.gitkeep b/mxVision/mxVision-referenceapps/ActionRecognition/data/video/Climbup/.gitkeep
similarity index 100%
rename from contrib/ActionRecognition/data/video/Alone/.gitkeep
rename to mxVision/mxVision-referenceapps/ActionRecognition/data/video/Climbup/.gitkeep
diff --git a/contrib/ActionRecognition/data/video/Climbup/.gitkeep b/mxVision/mxVision-referenceapps/ActionRecognition/data/video/OutOfBed/.gitkeep
similarity index 100%
rename from contrib/ActionRecognition/data/video/Climbup/.gitkeep
rename to mxVision/mxVision-referenceapps/ActionRecognition/data/video/OutOfBed/.gitkeep
diff --git a/contrib/ActionRecognition/data/video/OutOfBed/.gitkeep b/mxVision/mxVision-referenceapps/ActionRecognition/data/video/Over_staying/.gitkeep
similarity index 100%
rename from contrib/ActionRecognition/data/video/OutOfBed/.gitkeep
rename to mxVision/mxVision-referenceapps/ActionRecognition/data/video/Over_staying/.gitkeep
diff --git a/contrib/ActionRecognition/data/video/Over_staying/.gitkeep b/mxVision/mxVision-referenceapps/ActionRecognition/data/video/Speed_up/.gitkeep
similarity index 100%
rename from contrib/ActionRecognition/data/video/Over_staying/.gitkeep
rename to mxVision/mxVision-referenceapps/ActionRecognition/data/video/Speed_up/.gitkeep
diff --git a/contrib/ActionRecognition/data/video/Speed_up/.gitkeep b/mxVision/mxVision-referenceapps/ActionRecognition/data/video/Violent_action/.gitkeep
similarity index 100%
rename from contrib/ActionRecognition/data/video/Speed_up/.gitkeep
rename to mxVision/mxVision-referenceapps/ActionRecognition/data/video/Violent_action/.gitkeep
diff --git a/contrib/ActionRecognition/main.py b/mxVision/mxVision-referenceapps/ActionRecognition/main.py
similarity index 97%
rename from contrib/ActionRecognition/main.py
rename to mxVision/mxVision-referenceapps/ActionRecognition/main.py
index 43592e6f1d6fc5e48422a557395115a666f0ca65..9198dabf9534db74718ee440b8e659f06325016b 100644
--- a/contrib/ActionRecognition/main.py
+++ b/mxVision/mxVision-referenceapps/ActionRecognition/main.py
@@ -1,62 +1,62 @@
-#!/usr/bin/env python
-# coding=utf-8
-
-"""
-
- Copyright(C) 2021. Huawei Technologies Co.,Ltd. All rights reserved.
-
- Licensed under the Apache License, Version 2.0 (the "License");
- you may not use this file except in compliance with the License.
- You may obtain a copy of the License at
-
- http://www.apache.org/licenses/LICENSE-2.0
-
- Unless required by applicable law or agreed to in writing, software
- distributed under the License is distributed on an "AS IS" BASIS,
- WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- See the License for the specific language governing permissions and
- limitations under the License.
-
-"""
-
-from StreamManagerApi import StreamManagerApi, StringVector
-
-if __name__ == '__main__':
- # init stream manager
- streamManagerApi = StreamManagerApi()
- ret = streamManagerApi.InitManager()
- if ret != 0:
- print("Failed to init Stream manager, ret=%s" % str(ret))
- exit()
-
- # create streams by pipeline config file
- with open("./pipeline/plugin_alone.pipeline", 'rb') as f:
- pipelineStr = f.read()
- ret = streamManagerApi.CreateMultipleStreams(pipelineStr)
- if ret != 0:
- print("Failed to create Stream, ret=%s" % str(ret))
- exit()
-
- # Inputs data to a specified stream based on streamName.
- streamName = b'detection+tracking'
- inPluginId = 0
-
- retStr = ''
- key = b'mxpi_pluginalone0'
- keyVec = StringVector()
- keyVec.push_back(key)
- while True:
- # Obtain the inference result by specifying streamName and uniqueId.
- inferResult = streamManagerApi.GetResult(streamName, 0, 10000)
- if inferResult is None:
- break
- if inferResult.errorCode != 0:
- print("GetResultWithUniqueId error. errorCode=%d, errorMsg=%s" % (
- inferResult.errorCode, inferResult.data.decode()))
- break
- retStr = inferResult.data.decode()
- print(retStr)
-
- # destroy streams
- streamManagerApi.DestroyAllStreams()
-
+#!/usr/bin/env python
+# coding=utf-8
+
+"""
+
+ Copyright(C) 2021. Huawei Technologies Co.,Ltd. All rights reserved.
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+
+"""
+
+from StreamManagerApi import StreamManagerApi, StringVector
+
+if __name__ == '__main__':
+ # init stream manager
+ streamManagerApi = StreamManagerApi()
+ ret = streamManagerApi.InitManager()
+ if ret != 0:
+ print("Failed to init Stream manager, ret=%s" % str(ret))
+ exit()
+
+ # create streams by pipeline config file
+ with open("./pipeline/plugin_alone.pipeline", 'rb') as f:
+ pipelineStr = f.read()
+ ret = streamManagerApi.CreateMultipleStreams(pipelineStr)
+ if ret != 0:
+ print("Failed to create Stream, ret=%s" % str(ret))
+ exit()
+
+ # Inputs data to a specified stream based on streamName.
+ streamName = b'detection+tracking'
+ inPluginId = 0
+
+ retStr = ''
+ key = b'mxpi_pluginalone0'
+ keyVec = StringVector()
+ keyVec.push_back(key)
+ while True:
+ # Obtain the inference result by specifying streamName and uniqueId.
+ inferResult = streamManagerApi.GetResult(streamName, 0, 10000)
+ if inferResult is None:
+ break
+ if inferResult.errorCode != 0:
+ print("GetResultWithUniqueId error. errorCode=%d, errorMsg=%s" % (
+ inferResult.errorCode, inferResult.data.decode()))
+ break
+ retStr = inferResult.data.decode()
+ print(retStr)
+
+ # destroy streams
+ streamManagerApi.DestroyAllStreams()
+
diff --git a/contrib/ActionRecognition/models/ECONet/eco_aipp.cfg b/mxVision/mxVision-referenceapps/ActionRecognition/models/ECONet/eco_aipp.cfg
similarity index 94%
rename from contrib/ActionRecognition/models/ECONet/eco_aipp.cfg
rename to mxVision/mxVision-referenceapps/ActionRecognition/models/ECONet/eco_aipp.cfg
index f01079fbee26c7a20ee5b6b11d1473fd81a8970e..8b7d1178233b8df4f3283b62415b907e9ee0ddbd 100644
--- a/contrib/ActionRecognition/models/ECONet/eco_aipp.cfg
+++ b/mxVision/mxVision-referenceapps/ActionRecognition/models/ECONet/eco_aipp.cfg
@@ -1,29 +1,29 @@
-aipp_op {
-aipp_mode : static
-related_input_rank : 0
-input_format : YUV420SP_U8
-crop : false
-csc_switch : true
-rbuv_swap_switch : false
-matrix_r0c0 : 256
-matrix_r0c1 : 0
-matrix_r0c2 : 359
-matrix_r1c0 : 256
-matrix_r1c1 : -88
-matrix_r1c2 : -183
-matrix_r2c0 : 256
-matrix_r2c1 : 454
-matrix_r2c2 : 0
-input_bias_0 : 0
-input_bias_1 : 128
-input_bias_2 : 128
-mean_chn_0 : 104
-mean_chn_1 : 117
-mean_chn_2 : 128
-min_chn_0 : 0.0
-min_chn_1 : 0.0
-min_chn_2 : 0.0
-var_reci_chn_0 : 1.0
-var_reci_chn_1 : 1.0
-var_reci_chn_2 : 1.0
-}
+aipp_op {
+aipp_mode : static
+related_input_rank : 0
+input_format : YUV420SP_U8
+crop : false
+csc_switch : true
+rbuv_swap_switch : false
+matrix_r0c0 : 256
+matrix_r0c1 : 0
+matrix_r0c2 : 359
+matrix_r1c0 : 256
+matrix_r1c1 : -88
+matrix_r1c2 : -183
+matrix_r2c0 : 256
+matrix_r2c1 : 454
+matrix_r2c2 : 0
+input_bias_0 : 0
+input_bias_1 : 128
+input_bias_2 : 128
+mean_chn_0 : 104
+mean_chn_1 : 117
+mean_chn_2 : 128
+min_chn_0 : 0.0
+min_chn_1 : 0.0
+min_chn_2 : 0.0
+var_reci_chn_0 : 1.0
+var_reci_chn_1 : 1.0
+var_reci_chn_2 : 1.0
+}
diff --git a/contrib/ActionRecognition/models/ECONet/trans_pb2om.sh b/mxVision/mxVision-referenceapps/ActionRecognition/models/ECONet/trans_pb2om.sh
similarity index 100%
rename from contrib/ActionRecognition/models/ECONet/trans_pb2om.sh
rename to mxVision/mxVision-referenceapps/ActionRecognition/models/ECONet/trans_pb2om.sh
diff --git a/contrib/ActionRecognition/data/video/Violent_action/.gitkeep b/mxVision/mxVision-referenceapps/ActionRecognition/models/yolov3/.gitkeep
similarity index 100%
rename from contrib/ActionRecognition/data/video/Violent_action/.gitkeep
rename to mxVision/mxVision-referenceapps/ActionRecognition/models/yolov3/.gitkeep
diff --git a/contrib/ActionRecognition/pipeline/plugin_all.pipeline b/mxVision/mxVision-referenceapps/ActionRecognition/pipeline/plugin_all.pipeline
similarity index 97%
rename from contrib/ActionRecognition/pipeline/plugin_all.pipeline
rename to mxVision/mxVision-referenceapps/ActionRecognition/pipeline/plugin_all.pipeline
index 2bc55c013f1d8a5e6fc17b6b02261091e7d0ac0e..dbf1c5cb91bbcc70f1570b96bf9c5f6a1dd9d3d4 100644
--- a/contrib/ActionRecognition/pipeline/plugin_all.pipeline
+++ b/mxVision/mxVision-referenceapps/ActionRecognition/pipeline/plugin_all.pipeline
@@ -1,137 +1,137 @@
-{
- "detection+tracking": {
- "stream_config": {
- "deviceId": "0"
- },
- "mxpi_rtspsrc0": {
- "props": {
- "rtspUrl": "rtsp_Url"
- },
- "factory": "mxpi_rtspsrc",
- "next": "mxpi_videodecoder0"
- },
- "mxpi_videodecoder0":{
- "props": {
- "vdecChannelId": "0"
- },
- "factory": "mxpi_videodecoder",
- "next": "mxpi_imageresize0"
- },
- "mxpi_imageresize0":{
- "props": {
- "dataSource": "mxpi_videodecoder0",
- "resizeHeight": "416",
- "resizeWidth": "416"
- },
- "factory": "mxpi_imageresize",
- "next": "mxpi_tensorinfer0"
- },
- "mxpi_tensorinfer0":{
- "props": {
- "dataSource": "mxpi_imageresize0",
- "modelPath": "./models/yolov3/yolov3_tf_bs1_fp16.om"
- },
- "factory": "mxpi_tensorinfer",
- "next": "mxpi_objectpostprocessor0"
- },
- "mxpi_objectpostprocessor0": {
- "props": {
- "dataSource": "mxpi_tensorinfer0",
- "funcLanguage":"c++",
- "postProcessConfigPath": "./models/yolov3/yolov3_tf_bs1_fp16.cfg",
- "labelPath": "./models/yolov3/coco.names",
- "postProcessLibPath": "../../lib/modelpostprocessors/libyolov3postprocess.so"
- },
- "factory": "mxpi_objectpostprocessor",
- "next": "mxpi_motsimplesort0"
- },
- "mxpi_motsimplesort0": {
- "props": {
- "dataSourceDetection": "mxpi_objectpostprocessor0"
- },
- "factory": "mxpi_motsimplesort",
- "next": "mxpi_plugincounter0"
- },
- "mxpi_plugincounter0": {
- "props": {
- "dataSourceTrack": "mxpi_motsimplesort0"
- },
- "factory": "mxpi_plugincounter",
- "next": "mxpi_pluginalone0"
- },
- "mxpi_pluginalone0": {
- "props": {
- "dataSourceTrack": "mxpi_motsimplesort0",
- "dataSourceDetection": "mxpi_objectpostprocessor0",
- "detectThresh":"8",
- "detectRatio":"0.8",
- "detectSleep":"270"
- },
- "factory": "mxpi_pluginalone",
- "next": "mxpi_pluginoverspeed0"
- },
- "mxpi_pluginoverspeed0": {
- "props": {
- "dataSourceTrack": "mxpi_motsimplesort0",
- "dataSourceDetection": "mxpi_objectpostprocessor0",
- "speedThresh": "1",
- "frames": "8",
- "detectSleep": "270"
- },
- "factory": "mxpi_pluginoverspeed",
- "next": "mxpi_pluginoverstay0"
- },
- "mxpi_pluginoverstay0": {
- "props": {
- "status": "1",
- "dataSourceTrack": "mxpi_motsimplesort0",
- "dataSourceDetection": "mxpi_objectpostprocessor0",
- "stayThresh":"90",
- "frames":"90",
- "distanceThresh":"100",
- "detectRatio":"0.7",
- "detectSleep":"90"
- },
- "factory": "mxpi_pluginoverstay",
- "next": "mxpi_pluginclimb0"
- },
- "mxpi_pluginclimb0": {
- "props": {
- "dataSourceTrack": "mxpi_motsimplesort0",
- "dataSourceDetection": "mxpi_objectpostprocessor0",
- "detectRatio": "0.6",
- "filePath": "./data/roi/Climbup/*.txt",
- "detectSleep": "30",
- "bufferLength": "8",
- "highThresh": "10"
- },
- "factory": "mxpi_pluginclimb",
- "next": "mxpi_pluginoutofbed0"
- },
- "mxpi_pluginoutofbed0": {
- "props": {
- "dataSourceTrack": "mxpi_motsimplesort0",
- "dataSourceDetection": "mxpi_objectpostprocessor0",
- "configPath": "./data/roi/OutofBed/*.txt",
- "detectThresh": "8",
- "detectSleep": "15",
- "detectRatio": "0.25"
- },
- "factory": "mxpi_pluginoutofbed",
- "next": "mxpi_dataserialize0"
- },
- "mxpi_dataserialize0": {
- "props": {
- "outputDataKeys": "mxpi_pluginoverstay0"
- },
- "factory": "mxpi_dataserialize",
- "next": "appsink0"
- },
- "appsink0": {
- "props": {
- "blocksize": "4096000"
- },
- "factory": "appsink"
- }
- }
+{
+ "detection+tracking": {
+ "stream_config": {
+ "deviceId": "0"
+ },
+ "mxpi_rtspsrc0": {
+ "props": {
+ "rtspUrl": "rtsp_Url"
+ },
+ "factory": "mxpi_rtspsrc",
+ "next": "mxpi_videodecoder0"
+ },
+ "mxpi_videodecoder0":{
+ "props": {
+ "vdecChannelId": "0"
+ },
+ "factory": "mxpi_videodecoder",
+ "next": "mxpi_imageresize0"
+ },
+ "mxpi_imageresize0":{
+ "props": {
+ "dataSource": "mxpi_videodecoder0",
+ "resizeHeight": "416",
+ "resizeWidth": "416"
+ },
+ "factory": "mxpi_imageresize",
+ "next": "mxpi_tensorinfer0"
+ },
+ "mxpi_tensorinfer0":{
+ "props": {
+ "dataSource": "mxpi_imageresize0",
+ "modelPath": "./models/yolov3/yolov3_tf_bs1_fp16.om"
+ },
+ "factory": "mxpi_tensorinfer",
+ "next": "mxpi_objectpostprocessor0"
+ },
+ "mxpi_objectpostprocessor0": {
+ "props": {
+ "dataSource": "mxpi_tensorinfer0",
+ "funcLanguage":"c++",
+ "postProcessConfigPath": "./models/yolov3/yolov3_tf_bs1_fp16.cfg",
+ "labelPath": "./models/yolov3/coco.names",
+ "postProcessLibPath": "../../lib/modelpostprocessors/libyolov3postprocess.so"
+ },
+ "factory": "mxpi_objectpostprocessor",
+ "next": "mxpi_motsimplesort0"
+ },
+ "mxpi_motsimplesort0": {
+ "props": {
+ "dataSourceDetection": "mxpi_objectpostprocessor0"
+ },
+ "factory": "mxpi_motsimplesort",
+ "next": "mxpi_plugincounter0"
+ },
+ "mxpi_plugincounter0": {
+ "props": {
+ "dataSourceTrack": "mxpi_motsimplesort0"
+ },
+ "factory": "mxpi_plugincounter",
+ "next": "mxpi_pluginalone0"
+ },
+ "mxpi_pluginalone0": {
+ "props": {
+ "dataSourceTrack": "mxpi_motsimplesort0",
+ "dataSourceDetection": "mxpi_objectpostprocessor0",
+ "detectThresh":"8",
+ "detectRatio":"0.8",
+ "detectSleep":"270"
+ },
+ "factory": "mxpi_pluginalone",
+ "next": "mxpi_pluginoverspeed0"
+ },
+ "mxpi_pluginoverspeed0": {
+ "props": {
+ "dataSourceTrack": "mxpi_motsimplesort0",
+ "dataSourceDetection": "mxpi_objectpostprocessor0",
+ "speedThresh": "1",
+ "frames": "8",
+ "detectSleep": "270"
+ },
+ "factory": "mxpi_pluginoverspeed",
+ "next": "mxpi_pluginoverstay0"
+ },
+ "mxpi_pluginoverstay0": {
+ "props": {
+ "status": "1",
+ "dataSourceTrack": "mxpi_motsimplesort0",
+ "dataSourceDetection": "mxpi_objectpostprocessor0",
+ "stayThresh":"90",
+ "frames":"90",
+ "distanceThresh":"100",
+ "detectRatio":"0.7",
+ "detectSleep":"90"
+ },
+ "factory": "mxpi_pluginoverstay",
+ "next": "mxpi_pluginclimb0"
+ },
+ "mxpi_pluginclimb0": {
+ "props": {
+ "dataSourceTrack": "mxpi_motsimplesort0",
+ "dataSourceDetection": "mxpi_objectpostprocessor0",
+ "detectRatio": "0.6",
+ "filePath": "./data/roi/Climbup/*.txt",
+ "detectSleep": "30",
+ "bufferLength": "8",
+ "highThresh": "10"
+ },
+ "factory": "mxpi_pluginclimb",
+ "next": "mxpi_pluginoutofbed0"
+ },
+ "mxpi_pluginoutofbed0": {
+ "props": {
+ "dataSourceTrack": "mxpi_motsimplesort0",
+ "dataSourceDetection": "mxpi_objectpostprocessor0",
+ "configPath": "./data/roi/OutofBed/*.txt",
+ "detectThresh": "8",
+ "detectSleep": "15",
+ "detectRatio": "0.25"
+ },
+ "factory": "mxpi_pluginoutofbed",
+ "next": "mxpi_dataserialize0"
+ },
+ "mxpi_dataserialize0": {
+ "props": {
+ "outputDataKeys": "mxpi_pluginoverstay0"
+ },
+ "factory": "mxpi_dataserialize",
+ "next": "appsink0"
+ },
+ "appsink0": {
+ "props": {
+ "blocksize": "4096000"
+ },
+ "factory": "appsink"
+ }
+ }
}
\ No newline at end of file
diff --git a/contrib/ActionRecognition/pipeline/plugin_alone.pipeline b/mxVision/mxVision-referenceapps/ActionRecognition/pipeline/plugin_alone.pipeline
similarity index 96%
rename from contrib/ActionRecognition/pipeline/plugin_alone.pipeline
rename to mxVision/mxVision-referenceapps/ActionRecognition/pipeline/plugin_alone.pipeline
index c8ed69cd2cebd62e838559892b0aa42877e49641..922fc085334e990fdb8fbe89127ff0c0b7af1d22 100644
--- a/contrib/ActionRecognition/pipeline/plugin_alone.pipeline
+++ b/mxVision/mxVision-referenceapps/ActionRecognition/pipeline/plugin_alone.pipeline
@@ -1,80 +1,80 @@
-{
- "detection+tracking": {
- "stream_config": {
- "deviceId": "0"
- },
- "mxpi_rtspsrc0": {
- "props": {
- "rtspUrl": "rtsp_Url"
- },
- "factory": "mxpi_rtspsrc",
- "next": "mxpi_videodecoder0"
- },
- "mxpi_videodecoder0":{
- "props": {
- "vdecChannelId": "0"
- },
- "factory": "mxpi_videodecoder",
- "next": "mxpi_imageresize0"
- },
- "mxpi_imageresize0":{
- "props": {
- "dataSource": "mxpi_videodecoder0",
- "resizeHeight": "416",
- "resizeWidth": "416"
- },
- "factory": "mxpi_imageresize",
- "next": "mxpi_tensorinfer0"
- },
- "mxpi_tensorinfer0":{
- "props": {
- "dataSource": "mxpi_imageresize0",
- "modelPath": "./models/yolov3/yolov3_tf_bs1_fp16.om"
- },
- "factory": "mxpi_tensorinfer",
- "next": "mxpi_objectpostprocessor0"
- },
- "mxpi_objectpostprocessor0": {
- "props": {
- "dataSource": "mxpi_tensorinfer0",
- "funcLanguage":"c++",
- "postProcessConfigPath": "./models/yolov3/yolov3_tf_bs1_fp16.cfg",
- "labelPath": "./models/yolov3/coco.names",
- "postProcessLibPath": "../../lib/modelpostprocessors/libyolov3postprocess.so"
- },
- "factory": "mxpi_objectpostprocessor",
- "next": "mxpi_motsimplesort0"
- },
- "mxpi_motsimplesort0": {
- "props": {
- "dataSourceDetection": "mxpi_objectpostprocessor0"
- },
- "factory": "mxpi_motsimplesort",
- "next": "mxpi_pluginalone0"
- },
- "mxpi_pluginalone0": {
- "props": {
- "dataSourceTrack": "mxpi_motsimplesort0",
- "dataSourceDetection": "mxpi_objectpostprocessor0",
- "detectThresh": "8",
- "detectRatio": "0.8",
- "detectSleep": "270"
- },
- "factory": "mxpi_pluginalone",
- "next": "mxpi_dataserialize0"
- },
- "mxpi_dataserialize0": {
- "props": {
- "outputDataKeys": "mxpi_pluginalone0"
- },
- "factory": "mxpi_dataserialize",
- "next": "appsink0"
- },
- "appsink0": {
- "props": {
- "blocksize": "4096000"
- },
- "factory": "appsink"
- }
- }
-}
+{
+ "detection+tracking": {
+ "stream_config": {
+ "deviceId": "0"
+ },
+ "mxpi_rtspsrc0": {
+ "props": {
+ "rtspUrl": "rtsp_Url"
+ },
+ "factory": "mxpi_rtspsrc",
+ "next": "mxpi_videodecoder0"
+ },
+ "mxpi_videodecoder0":{
+ "props": {
+ "vdecChannelId": "0"
+ },
+ "factory": "mxpi_videodecoder",
+ "next": "mxpi_imageresize0"
+ },
+ "mxpi_imageresize0":{
+ "props": {
+ "dataSource": "mxpi_videodecoder0",
+ "resizeHeight": "416",
+ "resizeWidth": "416"
+ },
+ "factory": "mxpi_imageresize",
+ "next": "mxpi_tensorinfer0"
+ },
+ "mxpi_tensorinfer0":{
+ "props": {
+ "dataSource": "mxpi_imageresize0",
+ "modelPath": "./models/yolov3/yolov3_tf_bs1_fp16.om"
+ },
+ "factory": "mxpi_tensorinfer",
+ "next": "mxpi_objectpostprocessor0"
+ },
+ "mxpi_objectpostprocessor0": {
+ "props": {
+ "dataSource": "mxpi_tensorinfer0",
+ "funcLanguage":"c++",
+ "postProcessConfigPath": "./models/yolov3/yolov3_tf_bs1_fp16.cfg",
+ "labelPath": "./models/yolov3/coco.names",
+ "postProcessLibPath": "../../lib/modelpostprocessors/libyolov3postprocess.so"
+ },
+ "factory": "mxpi_objectpostprocessor",
+ "next": "mxpi_motsimplesort0"
+ },
+ "mxpi_motsimplesort0": {
+ "props": {
+ "dataSourceDetection": "mxpi_objectpostprocessor0"
+ },
+ "factory": "mxpi_motsimplesort",
+ "next": "mxpi_pluginalone0"
+ },
+ "mxpi_pluginalone0": {
+ "props": {
+ "dataSourceTrack": "mxpi_motsimplesort0",
+ "dataSourceDetection": "mxpi_objectpostprocessor0",
+ "detectThresh": "8",
+ "detectRatio": "0.8",
+ "detectSleep": "270"
+ },
+ "factory": "mxpi_pluginalone",
+ "next": "mxpi_dataserialize0"
+ },
+ "mxpi_dataserialize0": {
+ "props": {
+ "outputDataKeys": "mxpi_pluginalone0"
+ },
+ "factory": "mxpi_dataserialize",
+ "next": "appsink0"
+ },
+ "appsink0": {
+ "props": {
+ "blocksize": "4096000"
+ },
+ "factory": "appsink"
+ }
+ }
+}
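Each `.pipeline` file in this patch is a JSON graph: every element names its `factory` (plugin type) and links to its downstream element via `next`, from `mxpi_rtspsrc0` down to `appsink0`. As a reviewer's aid, here is a minimal sketch (not SDK code; the helper name and the abbreviated pipeline are illustrative) that follows those `next` links to recover the element order. It assumes each `next` is a single element name, as in all the files above; pipelines with fan-out would need list handling.

```python
import json

# Abbreviated pipeline in the same shape as the files in this patch.
PIPELINE = """
{
  "detection+tracking": {
    "stream_config": {"deviceId": "0"},
    "mxpi_rtspsrc0": {"factory": "mxpi_rtspsrc", "next": "mxpi_videodecoder0"},
    "mxpi_videodecoder0": {"factory": "mxpi_videodecoder", "next": "appsink0"},
    "appsink0": {"factory": "appsink"}
  }
}
"""

def element_order(stream: dict, start: str) -> list:
    """Walk the graph by following each element's "next" key."""
    order, current = [], start
    while current is not None:
        order.append(current)
        # Sink elements (appsink0) have no "next", ending the walk.
        current = stream[current].get("next")
    return order

stream = json.loads(PIPELINE)["detection+tracking"]
print(element_order(stream, "mxpi_rtspsrc0"))
# ['mxpi_rtspsrc0', 'mxpi_videodecoder0', 'appsink0']
```

Run against any of the renamed pipeline files, the same walk makes it easy to confirm that a rename did not break a `next` link.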
diff --git a/contrib/ActionRecognition/pipeline/plugin_climb.pipeline b/mxVision/mxVision-referenceapps/ActionRecognition/pipeline/plugin_climb.pipeline
similarity index 97%
rename from contrib/ActionRecognition/pipeline/plugin_climb.pipeline
rename to mxVision/mxVision-referenceapps/ActionRecognition/pipeline/plugin_climb.pipeline
index f07068b38d8dd43f48621c1aee7adbf714396cac..64c6d88365a08e7cc097c43ed29011c5d1f8f093 100644
--- a/contrib/ActionRecognition/pipeline/plugin_climb.pipeline
+++ b/mxVision/mxVision-referenceapps/ActionRecognition/pipeline/plugin_climb.pipeline
@@ -1,82 +1,82 @@
-{
- "detection+tracking": {
- "stream_config": {
- "deviceId": "0"
- },
- "mxpi_rtspsrc0": {
- "props": {
- "rtspUrl": "rtsp_Url"
- },
- "factory": "mxpi_rtspsrc",
- "next": "mxpi_videodecoder0"
- },
- "mxpi_videodecoder0":{
- "props": {
- "vdecChannelId": "0"
- },
- "factory": "mxpi_videodecoder",
- "next": "mxpi_imageresize0"
- },
- "mxpi_imageresize0":{
- "props": {
- "dataSource": "mxpi_videodecoder0",
- "resizeHeight": "416",
- "resizeWidth": "416"
- },
- "factory": "mxpi_imageresize",
- "next": "mxpi_tensorinfer0"
- },
- "mxpi_tensorinfer0":{
- "props": {
- "dataSource": "mxpi_imageresize0",
- "modelPath": "./models/yolov3/yolov3_tf_bs1_fp16.om"
- },
- "factory": "mxpi_tensorinfer",
- "next": "mxpi_objectpostprocessor0"
- },
- "mxpi_objectpostprocessor0": {
- "props": {
- "dataSource": "mxpi_tensorinfer0",
- "funcLanguage":"c++",
- "postProcessConfigPath": "./models/yolov3/yolov3_tf_bs1_fp16.cfg",
- "labelPath": "./models/yolov3/coco.names",
- "postProcessLibPath": "../../lib/modelpostprocessors/libyolov3postprocess.so"
- },
- "factory": "mxpi_objectpostprocessor",
- "next": "mxpi_motsimplesort0"
- },
- "mxpi_motsimplesort0": {
- "props": {
- "dataSourceDetection": "mxpi_objectpostprocessor0"
- },
- "factory": "mxpi_motsimplesort",
- "next": "mxpi_pluginclimb0"
- },
- "mxpi_pluginclimb0": {
- "props": {
- "dataSourceTrack": "mxpi_motsimplesort0",
- "dataSourceDetection": "mxpi_objectpostprocessor0",
- "detectRatio": "0.6",
- "filePath": "./data/roi/Climbup/*.txt",
- "detectSleep": "30",
- "bufferLength": "8",
- "highThresh": "10"
- },
- "factory": "mxpi_pluginclimb",
- "next": "mxpi_dataserialize0"
- },
- "mxpi_dataserialize0": {
- "props": {
- "outputDataKeys": "mxpi_pluginclimb0"
- },
- "factory": "mxpi_dataserialize",
- "next": "appsink0"
- },
- "appsink0": {
- "props": {
- "blocksize": "4096000"
- },
- "factory": "appsink"
- }
- }
-}
+{
+ "detection+tracking": {
+ "stream_config": {
+ "deviceId": "0"
+ },
+ "mxpi_rtspsrc0": {
+ "props": {
+ "rtspUrl": "rtsp_Url"
+ },
+ "factory": "mxpi_rtspsrc",
+ "next": "mxpi_videodecoder0"
+ },
+ "mxpi_videodecoder0":{
+ "props": {
+ "vdecChannelId": "0"
+ },
+ "factory": "mxpi_videodecoder",
+ "next": "mxpi_imageresize0"
+ },
+ "mxpi_imageresize0":{
+ "props": {
+ "dataSource": "mxpi_videodecoder0",
+ "resizeHeight": "416",
+ "resizeWidth": "416"
+ },
+ "factory": "mxpi_imageresize",
+ "next": "mxpi_tensorinfer0"
+ },
+ "mxpi_tensorinfer0":{
+ "props": {
+ "dataSource": "mxpi_imageresize0",
+ "modelPath": "./models/yolov3/yolov3_tf_bs1_fp16.om"
+ },
+ "factory": "mxpi_tensorinfer",
+ "next": "mxpi_objectpostprocessor0"
+ },
+ "mxpi_objectpostprocessor0": {
+ "props": {
+ "dataSource": "mxpi_tensorinfer0",
+ "funcLanguage":"c++",
+ "postProcessConfigPath": "./models/yolov3/yolov3_tf_bs1_fp16.cfg",
+ "labelPath": "./models/yolov3/coco.names",
+ "postProcessLibPath": "../../lib/modelpostprocessors/libyolov3postprocess.so"
+ },
+ "factory": "mxpi_objectpostprocessor",
+ "next": "mxpi_motsimplesort0"
+ },
+ "mxpi_motsimplesort0": {
+ "props": {
+ "dataSourceDetection": "mxpi_objectpostprocessor0"
+ },
+ "factory": "mxpi_motsimplesort",
+ "next": "mxpi_pluginclimb0"
+ },
+ "mxpi_pluginclimb0": {
+ "props": {
+ "dataSourceTrack": "mxpi_motsimplesort0",
+ "dataSourceDetection": "mxpi_objectpostprocessor0",
+ "detectRatio": "0.6",
+ "filePath": "./data/roi/Climbup/*.txt",
+ "detectSleep": "30",
+ "bufferLength": "8",
+ "highThresh": "10"
+ },
+ "factory": "mxpi_pluginclimb",
+ "next": "mxpi_dataserialize0"
+ },
+ "mxpi_dataserialize0": {
+ "props": {
+ "outputDataKeys": "mxpi_pluginclimb0"
+ },
+ "factory": "mxpi_dataserialize",
+ "next": "appsink0"
+ },
+ "appsink0": {
+ "props": {
+ "blocksize": "4096000"
+ },
+ "factory": "appsink"
+ }
+ }
+}
diff --git a/contrib/ActionRecognition/pipeline/plugin_outofbed.pipeline b/mxVision/mxVision-referenceapps/ActionRecognition/pipeline/plugin_outofbed.pipeline
similarity index 97%
rename from contrib/ActionRecognition/pipeline/plugin_outofbed.pipeline
rename to mxVision/mxVision-referenceapps/ActionRecognition/pipeline/plugin_outofbed.pipeline
index 7e8500a5356f8dae41c075fa7256ed9a32543f56..459053b9ee5a91bc783b1ee0c549ba22bdec1724 100644
--- a/contrib/ActionRecognition/pipeline/plugin_outofbed.pipeline
+++ b/mxVision/mxVision-referenceapps/ActionRecognition/pipeline/plugin_outofbed.pipeline
@@ -1,81 +1,81 @@
-{
- "detection+tracking": {
- "stream_config": {
- "deviceId": "0"
- },
- "mxpi_rtspsrc0": {
- "props": {
- "rtspUrl": "rtsp_Url"
- },
- "factory": "mxpi_rtspsrc",
- "next": "mxpi_videodecoder0"
- },
- "mxpi_videodecoder0":{
- "props": {
- "vdecChannelId": "0"
- },
- "factory": "mxpi_videodecoder",
- "next": "mxpi_imageresize0"
- },
- "mxpi_imageresize0":{
- "props": {
- "dataSource": "mxpi_videodecoder0",
- "resizeHeight": "416",
- "resizeWidth": "416"
- },
- "factory": "mxpi_imageresize",
- "next": "mxpi_tensorinfer0"
- },
- "mxpi_tensorinfer0":{
- "props": {
- "dataSource": "mxpi_imageresize0",
- "modelPath": "./models/yolov3/yolov3_tf_bs1_fp16.om"
- },
- "factory": "mxpi_tensorinfer",
- "next": "mxpi_objectpostprocessor0"
- },
- "mxpi_objectpostprocessor0": {
- "props": {
- "dataSource": "mxpi_tensorinfer0",
- "funcLanguage":"c++",
- "postProcessConfigPath": "./models/yolov3/yolov3_tf_bs1_fp16.cfg",
- "labelPath": "./models/yolov3/coco.names",
- "postProcessLibPath": "../../lib/modelpostprocessors/libyolov3postprocess.so"
- },
- "factory": "mxpi_objectpostprocessor",
- "next": "mxpi_motsimplesort0"
- },
- "mxpi_motsimplesort0": {
- "props": {
- "dataSourceDetection": "mxpi_objectpostprocessor0"
- },
- "factory": "mxpi_motsimplesort",
- "next": "mxpi_pluginoutofbed0"
- },
- "mxpi_pluginoutofbed0": {
- "props": {
- "dataSourceTrack": "mxpi_motsimplesort0",
- "dataSourceDetection": "mxpi_objectpostprocessor0",
- "configPath": "./datas/OutofBed/*.txt",
- "detectThresh": "8",
- "detectSleep": "15",
- "detectRatio": "0.25"
- },
- "factory": "mxpi_pluginoutofbed",
- "next": "mxpi_dataserialize0"
- },
- "mxpi_dataserialize0": {
- "props": {
- "outputDataKeys": "mxpi_pluginoutofbed0"
- },
- "factory": "mxpi_dataserialize",
- "next": "appsink0"
- },
- "appsink0": {
- "props": {
- "blocksize": "4096000"
- },
- "factory": "appsink"
- }
- }
-}
+{
+ "detection+tracking": {
+ "stream_config": {
+ "deviceId": "0"
+ },
+ "mxpi_rtspsrc0": {
+ "props": {
+ "rtspUrl": "rtsp_Url"
+ },
+ "factory": "mxpi_rtspsrc",
+ "next": "mxpi_videodecoder0"
+ },
+ "mxpi_videodecoder0":{
+ "props": {
+ "vdecChannelId": "0"
+ },
+ "factory": "mxpi_videodecoder",
+ "next": "mxpi_imageresize0"
+ },
+ "mxpi_imageresize0":{
+ "props": {
+ "dataSource": "mxpi_videodecoder0",
+ "resizeHeight": "416",
+ "resizeWidth": "416"
+ },
+ "factory": "mxpi_imageresize",
+ "next": "mxpi_tensorinfer0"
+ },
+ "mxpi_tensorinfer0":{
+ "props": {
+ "dataSource": "mxpi_imageresize0",
+ "modelPath": "./models/yolov3/yolov3_tf_bs1_fp16.om"
+ },
+ "factory": "mxpi_tensorinfer",
+ "next": "mxpi_objectpostprocessor0"
+ },
+ "mxpi_objectpostprocessor0": {
+ "props": {
+ "dataSource": "mxpi_tensorinfer0",
+ "funcLanguage":"c++",
+ "postProcessConfigPath": "./models/yolov3/yolov3_tf_bs1_fp16.cfg",
+ "labelPath": "./models/yolov3/coco.names",
+ "postProcessLibPath": "../../lib/modelpostprocessors/libyolov3postprocess.so"
+ },
+ "factory": "mxpi_objectpostprocessor",
+ "next": "mxpi_motsimplesort0"
+ },
+ "mxpi_motsimplesort0": {
+ "props": {
+ "dataSourceDetection": "mxpi_objectpostprocessor0"
+ },
+ "factory": "mxpi_motsimplesort",
+ "next": "mxpi_pluginoutofbed0"
+ },
+ "mxpi_pluginoutofbed0": {
+ "props": {
+ "dataSourceTrack": "mxpi_motsimplesort0",
+ "dataSourceDetection": "mxpi_objectpostprocessor0",
+ "configPath": "./datas/OutofBed/*.txt",
+ "detectThresh": "8",
+ "detectSleep": "15",
+ "detectRatio": "0.25"
+ },
+ "factory": "mxpi_pluginoutofbed",
+ "next": "mxpi_dataserialize0"
+ },
+ "mxpi_dataserialize0": {
+ "props": {
+ "outputDataKeys": "mxpi_pluginoutofbed0"
+ },
+ "factory": "mxpi_dataserialize",
+ "next": "appsink0"
+ },
+ "appsink0": {
+ "props": {
+ "blocksize": "4096000"
+ },
+ "factory": "appsink"
+ }
+ }
+}
diff --git a/contrib/ActionRecognition/pipeline/plugin_overspeed.pipeline b/mxVision/mxVision-referenceapps/ActionRecognition/pipeline/plugin_overspeed.pipeline
similarity index 97%
rename from contrib/ActionRecognition/pipeline/plugin_overspeed.pipeline
rename to mxVision/mxVision-referenceapps/ActionRecognition/pipeline/plugin_overspeed.pipeline
index 8ab1739c87b78a9557d605344abeed37069edc67..046f7017f41318167375b548d2aeef5161bf463f 100644
--- a/contrib/ActionRecognition/pipeline/plugin_overspeed.pipeline
+++ b/mxVision/mxVision-referenceapps/ActionRecognition/pipeline/plugin_overspeed.pipeline
@@ -1,80 +1,80 @@
-{
- "detection+tracking": {
- "stream_config": {
- "deviceId": "0"
- },
- "mxpi_rtspsrc0": {
- "props": {
- "rtspUrl": "rtsp_Url"
- },
- "factory": "mxpi_rtspsrc",
- "next": "mxpi_videodecoder0"
- },
- "mxpi_videodecoder0":{
- "props": {
- "vdecChannelId": "0"
- },
- "factory": "mxpi_videodecoder",
- "next": "mxpi_imageresize0"
- },
- "mxpi_imageresize0":{
- "props": {
- "dataSource": "mxpi_videodecoder0",
- "resizeHeight": "416",
- "resizeWidth": "416"
- },
- "factory": "mxpi_imageresize",
- "next": "mxpi_tensorinfer0"
- },
- "mxpi_tensorinfer0":{
- "props": {
- "dataSource": "mxpi_imageresize0",
- "modelPath": "./models/yolov3/yolov3_tf_bs1_fp16.om"
- },
- "factory": "mxpi_tensorinfer",
- "next": "mxpi_objectpostprocessor0"
- },
- "mxpi_objectpostprocessor0": {
- "props": {
- "dataSource": "mxpi_tensorinfer0",
- "funcLanguage":"c++",
- "postProcessConfigPath": "./models/yolov3/yolov3_tf_bs1_fp16.cfg",
- "labelPath": "./models/yolov3/coco.names",
- "postProcessLibPath": "../../lib/modelpostprocessors/libyolov3postprocess.so"
- },
- "factory": "mxpi_objectpostprocessor",
- "next": "mxpi_motsimplesort0"
- },
- "mxpi_motsimplesort0": {
- "props": {
- "dataSourceDetection": "mxpi_objectpostprocessor0"
- },
- "factory": "mxpi_motsimplesort",
- "next": "mxpi_pluginoverspeed0"
- },
- "mxpi_pluginoverspeed0": {
- "props": {
- "dataSourceTrack": "mxpi_motsimplesort0",
- "dataSourceDetection": "mxpi_objectpostprocessor0",
- "speedThresh": "10",
- "frames": "8",
- "detectSleep": "270"
- },
- "factory": "mxpi_pluginoverspeed",
- "next": "mxpi_dataserialize0"
- },
- "mxpi_dataserialize0": {
- "props": {
- "outputDataKeys": "mxpi_pluginoverspeed0"
- },
- "factory": "mxpi_dataserialize",
- "next": "appsink0"
- },
- "appsink0": {
- "props": {
- "blocksize": "4096000"
- },
- "factory": "appsink"
- }
- }
-}
+{
+ "detection+tracking": {
+ "stream_config": {
+ "deviceId": "0"
+ },
+ "mxpi_rtspsrc0": {
+ "props": {
+ "rtspUrl": "rtsp_Url"
+ },
+ "factory": "mxpi_rtspsrc",
+ "next": "mxpi_videodecoder0"
+ },
+ "mxpi_videodecoder0":{
+ "props": {
+ "vdecChannelId": "0"
+ },
+ "factory": "mxpi_videodecoder",
+ "next": "mxpi_imageresize0"
+ },
+ "mxpi_imageresize0":{
+ "props": {
+ "dataSource": "mxpi_videodecoder0",
+ "resizeHeight": "416",
+ "resizeWidth": "416"
+ },
+ "factory": "mxpi_imageresize",
+ "next": "mxpi_tensorinfer0"
+ },
+ "mxpi_tensorinfer0":{
+ "props": {
+ "dataSource": "mxpi_imageresize0",
+ "modelPath": "./models/yolov3/yolov3_tf_bs1_fp16.om"
+ },
+ "factory": "mxpi_tensorinfer",
+ "next": "mxpi_objectpostprocessor0"
+ },
+ "mxpi_objectpostprocessor0": {
+ "props": {
+ "dataSource": "mxpi_tensorinfer0",
+ "funcLanguage":"c++",
+ "postProcessConfigPath": "./models/yolov3/yolov3_tf_bs1_fp16.cfg",
+ "labelPath": "./models/yolov3/coco.names",
+ "postProcessLibPath": "../../lib/modelpostprocessors/libyolov3postprocess.so"
+ },
+ "factory": "mxpi_objectpostprocessor",
+ "next": "mxpi_motsimplesort0"
+ },
+ "mxpi_motsimplesort0": {
+ "props": {
+ "dataSourceDetection": "mxpi_objectpostprocessor0"
+ },
+ "factory": "mxpi_motsimplesort",
+ "next": "mxpi_pluginoverspeed0"
+ },
+ "mxpi_pluginoverspeed0": {
+ "props": {
+ "dataSourceTrack": "mxpi_motsimplesort0",
+ "dataSourceDetection": "mxpi_objectpostprocessor0",
+ "speedThresh": "10",
+ "frames": "8",
+ "detectSleep": "270"
+ },
+ "factory": "mxpi_pluginoverspeed",
+ "next": "mxpi_dataserialize0"
+ },
+ "mxpi_dataserialize0": {
+ "props": {
+ "outputDataKeys": "mxpi_pluginoverspeed0"
+ },
+ "factory": "mxpi_dataserialize",
+ "next": "appsink0"
+ },
+ "appsink0": {
+ "props": {
+ "blocksize": "4096000"
+ },
+ "factory": "appsink"
+ }
+ }
+}
diff --git a/contrib/ActionRecognition/pipeline/plugin_overstay.pipeline b/mxVision/mxVision-referenceapps/ActionRecognition/pipeline/plugin_overstay.pipeline
similarity index 97%
rename from contrib/ActionRecognition/pipeline/plugin_overstay.pipeline
rename to mxVision/mxVision-referenceapps/ActionRecognition/pipeline/plugin_overstay.pipeline
index bd2c611f63a5bc6b34ab03ecb984c6cac2196c54..377407ee5f5febd6bfcaaefee9887f481272cf86 100644
--- a/contrib/ActionRecognition/pipeline/plugin_overstay.pipeline
+++ b/mxVision/mxVision-referenceapps/ActionRecognition/pipeline/plugin_overstay.pipeline
@@ -1,82 +1,82 @@
-{
- "detection+tracking": {
- "stream_config": {
- "deviceId": "0"
- },
- "mxpi_rtspsrc0": {
- "props": {
- "rtspUrl": "rtsp_Url"
- },
- "factory": "mxpi_rtspsrc",
- "next": "mxpi_videodecoder0"
- },
- "mxpi_videodecoder0":{
- "props": {
- "vdecChannelId": "0"
- },
- "factory": "mxpi_videodecoder",
- "next": "mxpi_imageresize0"
- },
- "mxpi_imageresize0":{
- "props": {
- "dataSource": "mxpi_videodecoder0",
- "resizeHeight": "416",
- "resizeWidth": "416"
- },
- "factory": "mxpi_imageresize",
- "next": "mxpi_tensorinfer0"
- },
- "mxpi_tensorinfer0":{
- "props": {
- "dataSource": "mxpi_imageresize0",
- "modelPath": "./models/yolov3/yolov3_tf_bs1_fp16.om"
- },
- "factory": "mxpi_tensorinfer",
- "next": "mxpi_objectpostprocessor0"
- },
- "mxpi_objectpostprocessor0": {
- "props": {
- "dataSource": "mxpi_tensorinfer0",
- "funcLanguage":"c++",
- "postProcessConfigPath": "./models/yolov3/yolov3_tf_bs1_fp16.cfg",
- "labelPath": "./models/yolov3/coco.names",
- "postProcessLibPath": "../../lib/modelpostprocessors/libyolov3postprocess.so"
- },
- "factory": "mxpi_objectpostprocessor",
- "next": "mxpi_motsimplesort0"
- },
- "mxpi_motsimplesort0": {
- "props": {
- "dataSourceDetection": "mxpi_objectpostprocessor0"
- },
- "factory": "mxpi_motsimplesort",
- "next": "mxpi_pluginoverstay0"
- },
- "mxpi_pluginoverstay0": {
- "props": {
- "dataSourceTrack": "mxpi_motsimplesort0",
- "dataSourceDetection": "mxpi_objectpostprocessor0",
- "stayThresh":"90",
- "frames":"90",
- "distanceThresh":"100",
- "detectRatio":"0.7",
- "detectSleep":"90"
- },
- "factory": "mxpi_pluginoverstay",
- "next": "mxpi_dataserialize0"
- },
- "mxpi_dataserialize0": {
- "props": {
- "outputDataKeys": "mxpi_pluginoverstay0"
- },
- "factory": "mxpi_dataserialize",
- "next": "appsink0"
- },
- "appsink0": {
- "props": {
- "blocksize": "4096000"
- },
- "factory": "appsink"
- }
- }
-}
+{
+ "detection+tracking": {
+ "stream_config": {
+ "deviceId": "0"
+ },
+ "mxpi_rtspsrc0": {
+ "props": {
+ "rtspUrl": "rtsp_Url"
+ },
+ "factory": "mxpi_rtspsrc",
+ "next": "mxpi_videodecoder0"
+ },
+ "mxpi_videodecoder0":{
+ "props": {
+ "vdecChannelId": "0"
+ },
+ "factory": "mxpi_videodecoder",
+ "next": "mxpi_imageresize0"
+ },
+ "mxpi_imageresize0":{
+ "props": {
+ "dataSource": "mxpi_videodecoder0",
+ "resizeHeight": "416",
+ "resizeWidth": "416"
+ },
+ "factory": "mxpi_imageresize",
+ "next": "mxpi_tensorinfer0"
+ },
+ "mxpi_tensorinfer0":{
+ "props": {
+ "dataSource": "mxpi_imageresize0",
+ "modelPath": "./models/yolov3/yolov3_tf_bs1_fp16.om"
+ },
+ "factory": "mxpi_tensorinfer",
+ "next": "mxpi_objectpostprocessor0"
+ },
+ "mxpi_objectpostprocessor0": {
+ "props": {
+ "dataSource": "mxpi_tensorinfer0",
+ "funcLanguage":"c++",
+ "postProcessConfigPath": "./models/yolov3/yolov3_tf_bs1_fp16.cfg",
+ "labelPath": "./models/yolov3/coco.names",
+ "postProcessLibPath": "../../lib/modelpostprocessors/libyolov3postprocess.so"
+ },
+ "factory": "mxpi_objectpostprocessor",
+ "next": "mxpi_motsimplesort0"
+ },
+ "mxpi_motsimplesort0": {
+ "props": {
+ "dataSourceDetection": "mxpi_objectpostprocessor0"
+ },
+ "factory": "mxpi_motsimplesort",
+ "next": "mxpi_pluginoverstay0"
+ },
+ "mxpi_pluginoverstay0": {
+ "props": {
+ "dataSourceTrack": "mxpi_motsimplesort0",
+ "dataSourceDetection": "mxpi_objectpostprocessor0",
+ "stayThresh":"90",
+ "frames":"90",
+ "distanceThresh":"100",
+ "detectRatio":"0.7",
+ "detectSleep":"90"
+ },
+ "factory": "mxpi_pluginoverstay",
+ "next": "mxpi_dataserialize0"
+ },
+ "mxpi_dataserialize0": {
+ "props": {
+ "outputDataKeys": "mxpi_pluginoverstay0"
+ },
+ "factory": "mxpi_dataserialize",
+ "next": "appsink0"
+ },
+ "appsink0": {
+ "props": {
+ "blocksize": "4096000"
+ },
+ "factory": "appsink"
+ }
+ }
+}
diff --git a/contrib/ActionRecognition/pipeline/plugin_violentaction.pipeline b/mxVision/mxVision-referenceapps/ActionRecognition/pipeline/plugin_violentaction.pipeline
similarity index 97%
rename from contrib/ActionRecognition/pipeline/plugin_violentaction.pipeline
rename to mxVision/mxVision-referenceapps/ActionRecognition/pipeline/plugin_violentaction.pipeline
index b95253140e72e72b86229d0f835f687021b0ccd3..9c8e75eef59db337cc4523a24f137e8c06d903c1 100644
--- a/contrib/ActionRecognition/pipeline/plugin_violentaction.pipeline
+++ b/mxVision/mxVision-referenceapps/ActionRecognition/pipeline/plugin_violentaction.pipeline
@@ -1,125 +1,125 @@
-{
- "detection+action recognition": {
- "stream_config": {
- "deviceId": "0"
- },
- "mxpi_rtspsrc0": {
- "props": {
- "rtspUrl": "rtsp_Url"
- },
- "factory": "mxpi_rtspsrc",
- "next": "mxpi_videodecoder0"
- },
- "mxpi_videodecoder0":{
- "props": {
- "vdecChannelId": "0"
- },
- "factory": "mxpi_videodecoder",
- "next": "mxpi_imageresize0"
- },
- "mxpi_imageresize0":{
- "props": {
- "dataSource": "mxpi_videodecoder0",
- "resizeHeight": "416",
- "resizeWidth": "416"
- },
- "factory": "mxpi_imageresize",
- "next": "mxpi_tensorinfer0"
- },
- "mxpi_tensorinfer0":{
- "props": {
- "dataSource": "mxpi_imageresize0",
- "modelPath": "./models/yolov3/yolov3_tf_bs1_fp16.om"
- },
- "factory": "mxpi_tensorinfer",
- "next": "mxpi_objectpostprocessor0"
- },
- "mxpi_objectpostprocessor0": {
- "props": {
- "dataSource": "mxpi_tensorinfer0",
- "funcLanguage":"c++",
- "postProcessConfigPath": "./models/yolov3/yolov3_tf_bs1_fp16.cfg",
- "labelPath": "./models/yolov3/coco.names",
- "postProcessLibPath": "../../lib/modelpostprocessors/libyolov3postprocess.so"
- },
- "factory": "mxpi_objectpostprocessor",
- "next": "mxpi_distributor0"
- },
- "mxpi_distributor0": {
- "props": {
- "dataSource": "mxpi_objectpostprocessor0",
- "classIds": "0"
- },
- "factory": "mxpi_distributor",
- "next": "mxpi_motsimplesort0"
- },
- "mxpi_motsimplesort0": {
- "props": {
- "dataSourceDetection": "mxpi_distributor0_0"
- },
- "factory": "mxpi_motsimplesort",
- "next": "mxpi_imagecrop0"
- },
- "mxpi_imagecrop0": {
- "props": {
- "dataSource": "mxpi_distributor0_0",
- "dataSourceImage": "mxpi_videodecoder0",
- "resizeHeight": "224",
- "resizeWidth": "224",
- "resizeType": "Resizer_KeepAspectRatio_Fit"
- },
- "factory": "mxpi_imagecrop",
- "next": "mxpi_stackframe0"
- },
- "mxpi_stackframe0":{
- "props": {
- "visionSource": "mxpi_imagecrop0",
- "trackSource": "mxpi_motsimplesort0",
- "frameNum": "3"
- },
- "factory": "mxpi_stackframe",
- "next": "mxpi_tensorinfer1"
- },
- "mxpi_tensorinfer1":{
- "props": {
- "dataSource": "mxpi_stackframe0",
- "skipModelCheck": "1",
- "modelPath": "./models/ECONet/ECONet.om"
- },
- "factory": "mxpi_tensorinfer",
- "next": "mxpi_classpostprocessor0"
- },
- "mxpi_classpostprocessor0":{
- "props": {
- "dataSource": "mxpi_tensorinfer1",
- "postProcessConfigPath": "./models/ECONet/eco_post.cfg",
- "labelPath": "./models/ECONet/ucf101.names",
- "postProcessLibPath": "../../lib/modelpostprocessors/libresnet50postprocess.so"
- },
- "factory": "mxpi_classpostprocessor",
- "next": "mxpi_violentaction0"
- },
- "mxpi_violentaction0":{
- "props": {
- "classSource": "mxpi_classpostprocessor0",
- "filePath": "./data/roi/ViolentAction/aoi.txt",
- "detectSleep": "5"
- },
- "factory": "mxpi_violentaction",
- "next": "mxpi_dataserialize0"
- },
- "mxpi_dataserialize0": {
- "props": {
- "outputDataKeys": "mxpi_violentaction0"
- },
- "factory": "mxpi_dataserialize",
- "next": "appsink0"
- },
- "appsink0": {
- "props": {
- "blocksize": "4096000"
- },
- "factory": "appsink"
- }
- }
-}
+{
+ "detection+action recognition": {
+ "stream_config": {
+ "deviceId": "0"
+ },
+ "mxpi_rtspsrc0": {
+ "props": {
+ "rtspUrl": "rtsp_Url"
+ },
+ "factory": "mxpi_rtspsrc",
+ "next": "mxpi_videodecoder0"
+ },
+ "mxpi_videodecoder0":{
+ "props": {
+ "vdecChannelId": "0"
+ },
+ "factory": "mxpi_videodecoder",
+ "next": "mxpi_imageresize0"
+ },
+ "mxpi_imageresize0":{
+ "props": {
+ "dataSource": "mxpi_videodecoder0",
+ "resizeHeight": "416",
+ "resizeWidth": "416"
+ },
+ "factory": "mxpi_imageresize",
+ "next": "mxpi_tensorinfer0"
+ },
+ "mxpi_tensorinfer0":{
+ "props": {
+ "dataSource": "mxpi_imageresize0",
+ "modelPath": "./models/yolov3/yolov3_tf_bs1_fp16.om"
+ },
+ "factory": "mxpi_tensorinfer",
+ "next": "mxpi_objectpostprocessor0"
+ },
+ "mxpi_objectpostprocessor0": {
+ "props": {
+ "dataSource": "mxpi_tensorinfer0",
+ "funcLanguage":"c++",
+ "postProcessConfigPath": "./models/yolov3/yolov3_tf_bs1_fp16.cfg",
+ "labelPath": "./models/yolov3/coco.names",
+ "postProcessLibPath": "../../lib/modelpostprocessors/libyolov3postprocess.so"
+ },
+ "factory": "mxpi_objectpostprocessor",
+ "next": "mxpi_distributor0"
+ },
+ "mxpi_distributor0": {
+ "props": {
+ "dataSource": "mxpi_objectpostprocessor0",
+ "classIds": "0"
+ },
+ "factory": "mxpi_distributor",
+ "next": "mxpi_motsimplesort0"
+ },
+ "mxpi_motsimplesort0": {
+ "props": {
+ "dataSourceDetection": "mxpi_distributor0_0"
+ },
+ "factory": "mxpi_motsimplesort",
+ "next": "mxpi_imagecrop0"
+ },
+ "mxpi_imagecrop0": {
+ "props": {
+ "dataSource": "mxpi_distributor0_0",
+ "dataSourceImage": "mxpi_videodecoder0",
+ "resizeHeight": "224",
+ "resizeWidth": "224",
+ "resizeType": "Resizer_KeepAspectRatio_Fit"
+ },
+ "factory": "mxpi_imagecrop",
+ "next": "mxpi_stackframe0"
+ },
+ "mxpi_stackframe0":{
+ "props": {
+ "visionSource": "mxpi_imagecrop0",
+ "trackSource": "mxpi_motsimplesort0",
+ "frameNum": "3"
+ },
+ "factory": "mxpi_stackframe",
+ "next": "mxpi_tensorinfer1"
+ },
+ "mxpi_tensorinfer1":{
+ "props": {
+ "dataSource": "mxpi_stackframe0",
+ "skipModelCheck": "1",
+ "modelPath": "./models/ECONet/ECONet.om"
+ },
+ "factory": "mxpi_tensorinfer",
+ "next": "mxpi_classpostprocessor0"
+ },
+ "mxpi_classpostprocessor0":{
+ "props": {
+ "dataSource": "mxpi_tensorinfer1",
+ "postProcessConfigPath": "./models/ECONet/eco_post.cfg",
+ "labelPath": "./models/ECONet/ucf101.names",
+ "postProcessLibPath": "../../lib/modelpostprocessors/libresnet50postprocess.so"
+ },
+ "factory": "mxpi_classpostprocessor",
+ "next": "mxpi_violentaction0"
+ },
+ "mxpi_violentaction0":{
+ "props": {
+ "classSource": "mxpi_classpostprocessor0",
+ "filePath": "./data/roi/ViolentAction/aoi.txt",
+ "detectSleep": "5"
+ },
+ "factory": "mxpi_violentaction",
+ "next": "mxpi_dataserialize0"
+ },
+ "mxpi_dataserialize0": {
+ "props": {
+ "outputDataKeys": "mxpi_violentaction0"
+ },
+ "factory": "mxpi_dataserialize",
+ "next": "appsink0"
+ },
+ "appsink0": {
+ "props": {
+ "blocksize": "4096000"
+ },
+ "factory": "appsink"
+ }
+ }
+}
diff --git a/contrib/ActionRecognition/plugins/MxpiStackFrame/BlockingMap.cpp b/mxVision/mxVision-referenceapps/ActionRecognition/plugins/MxpiStackFrame/BlockingMap.cpp
similarity index 97%
rename from contrib/ActionRecognition/plugins/MxpiStackFrame/BlockingMap.cpp
rename to mxVision/mxVision-referenceapps/ActionRecognition/plugins/MxpiStackFrame/BlockingMap.cpp
index 7b44f51e92a7438ef9860078722275c551441586..94ba48b231ddab5e3358abae60d56867e6a3a8cc 100644
--- a/contrib/ActionRecognition/plugins/MxpiStackFrame/BlockingMap.cpp
+++ b/mxVision/mxVision-referenceapps/ActionRecognition/plugins/MxpiStackFrame/BlockingMap.cpp
@@ -1,138 +1,138 @@
-/*
- * Copyright(C) 2021. Huawei Technologies Co.,Ltd. All rights reserved.
- *
- * Licensed under the Apache License, Version 2.0 (the "License");
- * you may not use this file except in compliance with the License.
- * You may obtain a copy of the License at
- *
- * http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-#include "BlockingMap.h"
-
-namespace MxPlugins {
- BlockingMap::BlockingMap() {}
-
- BlockingMap::~BlockingMap() {}
-
- void BlockingMap::Insert(const uint32_t &id, MxBase::MemoryData newData) {
- // add std::lock_guard
- std::lock_guard guard(mtx_);
- // get current timestamp
- using Time = std::chrono::high_resolution_clock;
- auto currentTime = Time::now();
-        std::pair<std::chrono::time_point<std::chrono::high_resolution_clock>, std::shared_ptr<MxTools::MxpiVisionList>> time_MxpiVisionList;
- // set MxpiVisionInfo and MxpiVisionData
- auto mxpiVisionList = copyList(newData);
- time_MxpiVisionList = std::make_pair(currentTime, mxpiVisionList);
- blockingMap_[id] = time_MxpiVisionList;
- keys_.insert(id);
- }
-
- void BlockingMap::Update(const uint32_t &id, MxBase::MemoryData newData) {
- // add std::lock_guard
- std::lock_guard guard(mtx_);
-        std::pair<std::chrono::time_point<std::chrono::high_resolution_clock>, std::shared_ptr<MxTools::MxpiVisionList>>
-            time_MxpiVisionList = blockingMap_[id];
- auto mxpiVisionList = time_MxpiVisionList.second;
- // set MxpiVisionInfo and MxpiVisionData
- MxTools::MxpiVision *dstMxpivision = mxpiVisionList->add_visionvec();
- MxTools::MxpiVisionInfo *mxpiVisionInfo = dstMxpivision->mutable_visioninfo();
- mxpiVisionInfo->set_format(1);
- mxpiVisionInfo->set_height(224);
- mxpiVisionInfo->set_width(224);
- mxpiVisionInfo->set_heightaligned(224);
- mxpiVisionInfo->set_widthaligned(224);
- // set MxpiVisionData by MemoryData
- MxTools::MxpiVisionData *mxpiVisionData = dstMxpivision->mutable_visiondata();
- mxpiVisionData->set_dataptr((uint64_t) newData.ptrData);
- mxpiVisionData->set_datasize(newData.size);
- mxpiVisionData->set_deviceid(newData.deviceId);
- mxpiVisionData->set_memtype((MxTools::MxpiMemoryType) newData.type);
- // visionlist->pair
- time_MxpiVisionList = std::make_pair(time_MxpiVisionList.first, mxpiVisionList);
- blockingMap_[id] = time_MxpiVisionList;
- }
-
- std::pair<std::chrono::high_resolution_clock::time_point, std::shared_ptr<MxTools::MxpiVisionList>>
- BlockingMap::Get(const uint32_t &id) {
- // add std::lock_guard
- std::lock_guard<std::mutex> guard(mtx_);
- if (blockingMap_.find(id) != blockingMap_.end()) {
- return blockingMap_[id];
- } else {
- // If the element can't be found, return the current time paired with nullptr
- std::pair<std::chrono::high_resolution_clock::time_point, std::shared_ptr<MxTools::MxpiVisionList>> empty;
- using Time = std::chrono::high_resolution_clock;
- auto currentTime = Time::now();
- empty = std::make_pair(currentTime, nullptr);
- return empty;
- }
- }
-
- void BlockingMap::Clear(const uint32_t &id) {
- std::lock_guard<std::mutex> guard(mtx_);
- blockingMap_.erase(id);
- keys_.erase(id);
- }
-
- void BlockingMap::Reinsert(const uint32_t &id, std::shared_ptr<MxTools::MxpiVisionList> &mxpiVisionList) {
- // add std::lock_guard
- std::lock_guard<std::mutex> guard(mtx_);
- // get current timestamp
- using Time = std::chrono::high_resolution_clock;
- auto currentTime = Time::now();
- std::pair<Time::time_point, std::shared_ptr<MxTools::MxpiVisionList>> time_MxpiVisionList;
- // set MxpiVisionInfo and MxpiVisionData
- time_MxpiVisionList = std::make_pair(currentTime, mxpiVisionList);
- blockingMap_[id] = time_MxpiVisionList;
- keys_.insert(id);
- }
-
- std::uint32_t BlockingMap::count(const uint32_t &id) {
- std::lock_guard<std::mutex> guard(mtx_);
- return blockingMap_.count(id);
- }
-
- size_t BlockingMap::Size() const {
- return blockingMap_.size();
- }
-
- std::vector<uint32_t> BlockingMap::Keys() {
- // snapshot the current set of ids
- std::vector<uint32_t> keys;
- std::lock_guard<std::mutex> guard(mtx_);
- for (auto iter = keys_.begin(); iter != keys_.end(); iter++) {
- keys.push_back(*iter);
- }
- return keys;
- }
-
- std::shared_ptr<MxTools::MxpiVisionList> BlockingMap::copyList(MxBase::MemoryData newData) {
- // create a shared_ptr to a new MxpiVisionList with its custom deleter
- std::shared_ptr<MxTools::MxpiVisionList> dstMxpiVisionListSptr(new MxTools::MxpiVisionList,
- MxTools::g_deleteFuncMxpiVisionList);
- MxTools::MxpiVision *dstMxpivision = dstMxpiVisionListSptr->add_visionvec();
- MxTools::MxpiVisionInfo *mxpiVisionInfo = dstMxpivision->mutable_visioninfo();
- mxpiVisionInfo->set_format(1);
- mxpiVisionInfo->set_height(224);
- mxpiVisionInfo->set_width(224);
- mxpiVisionInfo->set_heightaligned(224);
- mxpiVisionInfo->set_widthaligned(224);
- // set MxpiVisionData by MemoryData
- MxTools::MxpiVisionData *mxpiVisionData = dstMxpivision->mutable_visiondata();
- mxpiVisionData->set_dataptr((uint64_t) newData.ptrData);
- mxpiVisionData->set_datasize(newData.size);
- mxpiVisionData->set_deviceid(newData.deviceId);
- mxpiVisionData->set_memtype((MxTools::MxpiMemoryType) newData.type);
- return dstMxpiVisionListSptr;
- }
+/*
+ * Copyright(C) 2021. Huawei Technologies Co.,Ltd. All rights reserved.
+ *
+ * Licensed under the Apache License, Version 2.0 (the "License");
+ * you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+#include "BlockingMap.h"
+
+namespace MxPlugins {
+ BlockingMap::BlockingMap() {}
+
+ BlockingMap::~BlockingMap() {}
+
+ void BlockingMap::Insert(const uint32_t &id, MxBase::MemoryData newData) {
+ // add std::lock_guard
+ std::lock_guard<std::mutex> guard(mtx_);
+ // get current timestamp
+ using Time = std::chrono::high_resolution_clock;
+ auto currentTime = Time::now();
+ std::pair<Time::time_point, std::shared_ptr<MxTools::MxpiVisionList>> time_MxpiVisionList;
+ // set MxpiVisionInfo and MxpiVisionData
+ auto mxpiVisionList = copyList(newData);
+ time_MxpiVisionList = std::make_pair(currentTime, mxpiVisionList);
+ blockingMap_[id] = time_MxpiVisionList;
+ keys_.insert(id);
+ }
+
+ void BlockingMap::Update(const uint32_t &id, MxBase::MemoryData newData) {
+ // add std::lock_guard
+ std::lock_guard<std::mutex> guard(mtx_);
+ std::pair<std::chrono::high_resolution_clock::time_point, std::shared_ptr<MxTools::MxpiVisionList>>
+ time_MxpiVisionList = blockingMap_[id];
+ auto mxpiVisionList = time_MxpiVisionList.second;
+ // set MxpiVisionInfo and MxpiVisionData
+ MxTools::MxpiVision *dstMxpivision = mxpiVisionList->add_visionvec();
+ MxTools::MxpiVisionInfo *mxpiVisionInfo = dstMxpivision->mutable_visioninfo();
+ mxpiVisionInfo->set_format(1);
+ mxpiVisionInfo->set_height(224);
+ mxpiVisionInfo->set_width(224);
+ mxpiVisionInfo->set_heightaligned(224);
+ mxpiVisionInfo->set_widthaligned(224);
+ // set MxpiVisionData by MemoryData
+ MxTools::MxpiVisionData *mxpiVisionData = dstMxpivision->mutable_visiondata();
+ mxpiVisionData->set_dataptr((uint64_t) newData.ptrData);
+ mxpiVisionData->set_datasize(newData.size);
+ mxpiVisionData->set_deviceid(newData.deviceId);
+ mxpiVisionData->set_memtype((MxTools::MxpiMemoryType) newData.type);
+ // visionlist->pair
+ time_MxpiVisionList = std::make_pair(time_MxpiVisionList.first, mxpiVisionList);
+ blockingMap_[id] = time_MxpiVisionList;
+ }
+
+ std::pair<std::chrono::high_resolution_clock::time_point, std::shared_ptr<MxTools::MxpiVisionList>>
+ BlockingMap::Get(const uint32_t &id) {
+ // add std::lock_guard
+ std::lock_guard<std::mutex> guard(mtx_);
+ if (blockingMap_.find(id) != blockingMap_.end()) {
+ return blockingMap_[id];
+ } else {
+ // If the element can't be found, return the current time paired with nullptr
+ std::pair<std::chrono::high_resolution_clock::time_point, std::shared_ptr<MxTools::MxpiVisionList>> empty;
+ using Time = std::chrono::high_resolution_clock;
+ auto currentTime = Time::now();
+ empty = std::make_pair(currentTime, nullptr);
+ return empty;
+ }
+ }
+
+ void BlockingMap::Clear(const uint32_t &id) {
+ std::lock_guard<std::mutex> guard(mtx_);
+ blockingMap_.erase(id);
+ keys_.erase(id);
+ }
+
+ void BlockingMap::Reinsert(const uint32_t &id, std::shared_ptr<MxTools::MxpiVisionList> &mxpiVisionList) {
+ // add std::lock_guard
+ std::lock_guard<std::mutex> guard(mtx_);
+ // get current timestamp
+ using Time = std::chrono::high_resolution_clock;
+ auto currentTime = Time::now();
+ std::pair<Time::time_point, std::shared_ptr<MxTools::MxpiVisionList>> time_MxpiVisionList;
+ // set MxpiVisionInfo and MxpiVisionData
+ time_MxpiVisionList = std::make_pair(currentTime, mxpiVisionList);
+ blockingMap_[id] = time_MxpiVisionList;
+ keys_.insert(id);
+ }
+
+ std::uint32_t BlockingMap::count(const uint32_t &id) {
+ std::lock_guard<std::mutex> guard(mtx_);
+ return blockingMap_.count(id);
+ }
+
+ size_t BlockingMap::Size() const {
+ return blockingMap_.size();
+ }
+
+ std::vector<uint32_t> BlockingMap::Keys() {
+ // snapshot the current set of ids
+ std::vector<uint32_t> keys;
+ std::lock_guard<std::mutex> guard(mtx_);
+ for (auto iter = keys_.begin(); iter != keys_.end(); iter++) {
+ keys.push_back(*iter);
+ }
+ return keys;
+ }
+
+ std::shared_ptr<MxTools::MxpiVisionList> BlockingMap::copyList(MxBase::MemoryData newData) {
+ // create a shared_ptr to a new MxpiVisionList with its custom deleter
+ std::shared_ptr<MxTools::MxpiVisionList> dstMxpiVisionListSptr(new MxTools::MxpiVisionList,
+ MxTools::g_deleteFuncMxpiVisionList);
+ MxTools::MxpiVision *dstMxpivision = dstMxpiVisionListSptr->add_visionvec();
+ MxTools::MxpiVisionInfo *mxpiVisionInfo = dstMxpivision->mutable_visioninfo();
+ mxpiVisionInfo->set_format(1);
+ mxpiVisionInfo->set_height(224);
+ mxpiVisionInfo->set_width(224);
+ mxpiVisionInfo->set_heightaligned(224);
+ mxpiVisionInfo->set_widthaligned(224);
+ // set MxpiVisionData by MemoryData
+ MxTools::MxpiVisionData *mxpiVisionData = dstMxpivision->mutable_visiondata();
+ mxpiVisionData->set_dataptr((uint64_t) newData.ptrData);
+ mxpiVisionData->set_datasize(newData.size);
+ mxpiVisionData->set_deviceid(newData.deviceId);
+ mxpiVisionData->set_memtype((MxTools::MxpiMemoryType) newData.type);
+ return dstMxpiVisionListSptr;
+ }
}
\ No newline at end of file
diff --git a/contrib/ActionRecognition/plugins/MxpiStackFrame/BlockingMap.h b/mxVision/mxVision-referenceapps/ActionRecognition/plugins/MxpiStackFrame/BlockingMap.h
similarity index 97%
rename from contrib/ActionRecognition/plugins/MxpiStackFrame/BlockingMap.h
rename to mxVision/mxVision-referenceapps/ActionRecognition/plugins/MxpiStackFrame/BlockingMap.h
index afca5a48da14f7b3e8f2fb8efad27db7a162ab08..6d0c81fc16998e326f7454f7c2d9020a7184c08a 100644
--- a/contrib/ActionRecognition/plugins/MxpiStackFrame/BlockingMap.h
+++ b/mxVision/mxVision-referenceapps/ActionRecognition/plugins/MxpiStackFrame/BlockingMap.h
@@ -1,84 +1,84 @@
-/*
- * Copyright(C) 2021. Huawei Technologies Co.,Ltd. All rights reserved.
- *
- * Licensed under the Apache License, Version 2.0 (the "License");
- * you may not use this file except in compliance with the License.
- * You may obtain a copy of the License at
- *
- * http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-#ifndef INC_FACE_BLOCKING_MAP_H
-#define INC_FACE_BLOCKING_MAP_H
-
-#include