diff --git a/.github/ISSUE_TEMPLATE/-------------other-general-issues.md b/.github/ISSUE_TEMPLATE/-------------other-general-issues.md deleted file mode 100644 index f4e48541a03f15a3cb68dff228104e2dcdae0c06..0000000000000000000000000000000000000000 --- a/.github/ISSUE_TEMPLATE/-------------other-general-issues.md +++ /dev/null @@ -1,49 +0,0 @@ ---- -name: "\U0001F516 其他通用问题 / Other General Issues" -about: 提出其他问题 / Suggest other general issues -title: "[Other General Issues]" -labels: '' -assignees: '' - ---- - -**PaddleDetection team appreciate any suggestion or problem you delivered~** - -## Checklist: - -1. 查找历史相关issue寻求解答/I have searched related issues but cannot get the expected help. -2. 翻阅[FAQ](https://paddledetection.readthedocs.io/FAQ.html) /I have read the [FAQ documentation](https://paddledetection.readthedocs.io/FAQ.html) but cannot get the expected help. - -## 描述问题/Describe the bug -A clear and concise description of what the bug is. - -## 复现/Reproduction - -1. 您使用的命令是?/What command or script did you run? - -```none -请填写命令/A placeholder for the command. -``` -2. 您是否更改过代码或配置文件?您是否理解您所更改的内容?还请您提供所更改的部分代码。/Did you make any modifications on the code or config? Did you understand what you have modified? Please provide the codes that you modified. - -3. 您使用的数据集是?/What dataset did you use? - -4. 请提供您出现的报错信息及相关log。/Please provide the error messages or relevant log information. - -## 环境/Environment -1. 请提供您使用的Paddle和PaddleDetection的版本号/Please provide the version of Paddle and PaddleDetection you use: - -2. 如您在使用PaddleDetection的同时还在使用其他产品,如PaddleServing、PaddleInference等,请您提供其版本号/ Please provide the version of any other related tools/products used, such as the version of PaddleServing and etc: - -3. 请提供您使用的操作系统信息,如Linux/Windows/MacOS /Please provide the OS information, e.g., Linux: - -4. 请问您使用的Python版本是?/ Please provide the version of Python you used. - -5. 请问您使用的CUDA/cuDNN的版本号是?/ Please provide the version of CUDA/cuDNN you used. - - -如果您的issue是关于安装或环境,您可以先查询[安装文档](https://github.com/PaddlePaddle/PaddleDetection/blob/release/2.1/docs/tutorials/INSTALL_cn.md)尝试解决~ - -If your issue looks like an installation issue / environment issue, -please first try to solve it yourself with the instructions in -https://github.com/PaddlePaddle/PaddleDetection/blob/release/2.1/docs/tutorials/INSTALL.md diff --git a/.github/ISSUE_TEMPLATE/------------feature-request.md b/.github/ISSUE_TEMPLATE/------------feature-request.md deleted file mode 100644 index 7b23ce6925a943fd56981df22c8363ec8c716de7..0000000000000000000000000000000000000000 --- a/.github/ISSUE_TEMPLATE/------------feature-request.md +++ /dev/null @@ -1,43 +0,0 @@ ---- -name: "\U0001F680 新功能需求 / Feature Request" -about: 提出一个新的功能需求或改进建议 / Suggest an improvement for PaddleDetection -title: "[Feature Request]" -labels: '' -assignees: '' - ---- - -## 🚀 新功能/Feature - -PaddleDetection欢迎大家以清晰简洁的语言提出新功能需求。 - -A clear and concise description of the feature proposal. - -## 需求原因&示例/Motivation & Examples - -请描述这个需求的必要性。 - -Tell us why the feature is useful. - -请描述这个需求可实现的具体功能,如果可以,辛苦您提供相关代码实现效果。 - -Describe what the feature would look like, if it is implemented. -Best demonstrated using **code examples** in addition to words. - -## 📣 注意/Note - -PaddleDetection仅添加通用性较高的新功能/特性。 - -We only consider adding new features if they are relevant to many users. - -如果您需要论文中的模型能力,PaddleDetection会优先考虑与目标检测强相关且意义较大的论文。 - -If you request implementation of research papers -- we only consider papers that have enough significance and prevalence in the object detection field. 
- -比如“让XX功能更快”类似的需求不能作为一个有效需求,需要更具体的描述,如“创建一个具体XX工具/功能,让XX更快”即是一个有效需求。 - -"Make X faster/accurate" is not a valid feature request. "Implement a concrete feature that can make X faster/accurate" can be a valid feature request. - -PaddleDetection感谢您的支持,我们期待您提出新功能需求! - -Thanks for your suggestions! diff --git a/.github/ISSUE_TEMPLATE/-----------documentation-improvement.md b/.github/ISSUE_TEMPLATE/-----------documentation-improvement.md deleted file mode 100644 index 3969fd2e0fc6c5e483268b4a85320627497c1c82..0000000000000000000000000000000000000000 --- a/.github/ISSUE_TEMPLATE/-----------documentation-improvement.md +++ /dev/null @@ -1,15 +0,0 @@ ---- -name: "\U0001F4D6 文档优化 / Documentation Improvement" -about: 对现有的文档教程提出修改建议 / Suggest an improvement about existing documentation or tutorials - in PaddleDetection. -title: "[Document Improvement]" -labels: '' -assignees: '' - ---- - -## 📖 文档优化/Documentation Improvement - -**请简单说明文档存在问题/Please provide a concise and brief description of the documentation problem:** - -**请提供有问题的文档部分截图及链接/Please provide the screen shoot and the link of the document:** diff --git a/.github/ISSUE_TEMPLATE/------bug---bug-report.md b/.github/ISSUE_TEMPLATE/------bug---bug-report.md deleted file mode 100644 index d778a03420a16fbf9425710fdb8046b33f445520..0000000000000000000000000000000000000000 --- a/.github/ISSUE_TEMPLATE/------bug---bug-report.md +++ /dev/null @@ -1,50 +0,0 @@ ---- -name: "\U0001F41B 提出Bug / Bug Report" -about: 提出PaddleDetection使用中存在的Bug / Report a bug in PaddleDetection -title: "[BUG]" -labels: '' -assignees: '' - ---- - -**PaddleDetection team appreciate any suggestion or problem you delivered~** - -## Checklist: - -1. 查找历史相关issue寻求解答/I have searched related issues but cannot get the expected help. -2. 翻阅[FAQ](https://paddledetection.readthedocs.io/FAQ.html) /I have read the [FAQ documentation](https://paddledetection.readthedocs.io/FAQ.html) but cannot get the expected help. -3. 确认bug是否在新版本里还未修复/The bug has not been fixed in the latest version. - -## 描述问题/Describe the bug -A clear and concise description of what the bug is. - -## 复现/Reproduction - -1. 您使用的命令是?/What command or script did you run? - -```none -请填写命令/A placeholder for the command. -``` -2. 您是否更改过代码或配置文件?您是否理解您所更改的内容?还请您提供所更改的部分代码。/Did you make any modifications on the code or config? Did you understand what you have modified? Please provide the codes that you modified. - -3. 您使用的数据集是?/What dataset did you use? - -4. 请提供您出现的报错信息及相关log。/Please provide the error messages or relevant log information. - -## 环境/Environment -1. 请提供您使用的Paddle和PaddleDetection的版本号/Please provide the version of Paddle and PaddleDetection you use: - -2. 如您在使用PaddleDetection的同时还在使用其他产品,如PaddleServing、PaddleInference等,请您提供其版本号/ Please provide the version of any other related tools/products used, such as the version of PaddleServing and etc: - -3. 请提供您使用的操作系统信息,如Linux/Windows/MacOS /Please provide the OS information, e.g., Linux: - -4. 请问您使用的Python版本是?/ Please provide the version of Python you used. - -5. 请问您使用的CUDA/cuDNN的版本号是?/ Please provide the version of CUDA/cuDNN you used. 
- - -如果您的issue是关于安装或环境,您可以先查询[安装文档](https://github.com/PaddlePaddle/PaddleDetection/blob/release/2.1/docs/tutorials/INSTALL_cn.md)尝试解决~ - -If your issue looks like an installation issue / environment issue, -please first try to solve it yourself with the instructions in -https://github.com/PaddlePaddle/PaddleDetection/blob/release/2.1/docs/tutorials/INSTALL.md diff --git a/.github/ISSUE_TEMPLATE/1_bug-report.yml b/.github/ISSUE_TEMPLATE/1_bug-report.yml new file mode 100644 index 0000000000000000000000000000000000000000..41e32cfe941b03073044102b7077f9822cfa380f --- /dev/null +++ b/.github/ISSUE_TEMPLATE/1_bug-report.yml @@ -0,0 +1,73 @@ +name: 🐛 报BUG Bug Report +description: 报告一个可复现的BUG帮助我们修复PaddleDetection。 Report a bug to help us reproduce and fix it. +labels: [type/bug-report, status/new-issue] + +body: +- type: markdown + attributes: + value: | + Thank you for submitting a PaddleDetection Bug Report! + +- type: checkboxes + attributes: + label: 问题确认 Search before asking + description: > + 在向PaddleDetection报bug之前,请先查询[历史issue](https://github.com/PaddlePaddle/PaddleDetection/issues)是否报过同样的bug。 + + Before submitting a bug, please make sure the issue hasn't been already addressed by searching through [the existing and past issues](https://github.com/PaddlePaddle/PaddleDetection/issues). + + options: + - label: > + 我已经查询[历史issue](https://github.com/PaddlePaddle/PaddleDetection/issues),没有报过同样bug。I have searched the [issues](https://github.com/PaddlePaddle/PaddleDetection/issues) and found no similar bug report. + required: true + +- type: textarea + id: code + attributes: + label: bug描述 Describe the Bug + description: | + 请清晰简洁的描述这个bug,最好附上bug复现步骤及最小代码集,以便我们可以通过运行代码来重现错误。代码片段需要尽可能简洁,请花些时间去掉不相关的代码以帮助我们有效地调试。我们希望通过复制代码并运行得到与你相同的结果,请避免任何外部数据或包含相关的导入等。如果代码太长,请将可执行代码放到[AIStudio](https://aistudio.baidu.com/aistudio/index)中并将项目设置为公开(或者放到github gist上),请在项目中描述清楚bug复现步骤,在issue中描述期望结果与实际结果。 + + 如果你报告的是一个报错信息,请将完整回溯的报错贴在这里,并使用 ` ```三引号块``` `展示错误信息。 + + + placeholder: | + 请清晰简洁的描述这个bug。A clear and concise description of what the bug is. + + ```python + # 最小可复现代码。 Sample code to reproduce the problem. + ``` + + ```shell + 带有完整回溯的报错信息。 The error message you got, with the full traceback. + ``` + validations: + required: true + +- type: textarea + attributes: + label: 复现环境 Environment + description: 请具体说明复现bug的环境信息,Please specify the software and hardware you used to produce the bug. + placeholder: | + - PaddlePaddle: 2.2.2 + - PaddleDetection: release/2.4 + - Python: 3.8.0 + - CUDA: 10.2 + - CUDNN: 7.6 + validations: + required: false + +- type: checkboxes + attributes: + label: 是否愿意提交PR Are you willing to submit a PR? + description: > + (可选)如果你对修复bug有自己的想法,十分鼓励提交[Pull Request](https://github.com/PaddlePaddle/PaddleDetection/pulls),共同提升PaddleDetection + + (Optional) We encourage you to submit a [Pull Request](https://github.com/PaddlePaddle/PaddleDetection/pulls) (PR) to help improve PaddleDetection for everyone, especially if you have a good understanding of how to implement a fix or feature. + options: + - label: Yes I'd like to help by submitting a PR! + +- type: markdown + attributes: + value: > + 感谢你的贡献 🎉!Thanks for your contribution 🎉! 
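As a usage reference for the form above, a minimal sketch of collecting the environment details it asks for (PaddlePaddle/PaddleDetection versions, Python, CUDA/cuDNN), assuming pip-installed packages; adjust for source builds:

```shell
# Print the environment details the bug-report form asks for.
python -c "import paddle; print('PaddlePaddle:', paddle.__version__)"
python -c "import paddle; paddle.utils.run_check()"  # verifies that the CUDA/cuDNN setup is usable
pip list | grep -i paddle                            # paddle-related package versions
python --version
```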
diff --git a/.github/ISSUE_TEMPLATE/2_feature-request.yml b/.github/ISSUE_TEMPLATE/2_feature-request.yml new file mode 100644 index 0000000000000000000000000000000000000000..dcf9ec4462886c7064315f0fc6ac167dd6c6dbf5 --- /dev/null +++ b/.github/ISSUE_TEMPLATE/2_feature-request.yml @@ -0,0 +1,50 @@ +name: 🚀 新需求 Feature Request +description: 提交一个你对PaddleDetection的新需求。 Submit a request for a new Paddle feature. +labels: [type/feature-request, status/new-issue] + +body: +- type: markdown + attributes: + value: > + #### 你可以在这里提出你对PaddleDetection的新需求,包括但不限于:功能或模型缺失、功能不全或无法使用、精度/性能不符合预期等。 + + #### You could submit a request for a new feature here, including but not limited to: new features or models, incomplete or unusable features, accuracy/performance not as expected, etc. + +- type: checkboxes + attributes: + label: 问题确认 Search before asking + description: > + 在向PaddleDetection提新需求之前,请先查询[历史issue](https://github.com/PaddlePaddle/PaddleDetection/issues)是否报过同样的需求。 + + Before submitting a feature request, please make sure the issue hasn't been already addressed by searching through [the existing and past issues](https://github.com/PaddlePaddle/PaddleDetection/issues). + + options: + - label: > + 我已经查询[历史issue](https://github.com/PaddlePaddle/PaddleDetection/issues),没有类似需求。I have searched the [issues](https://github.com/PaddlePaddle/PaddleDetection/issues) and found no similar feature requests. + required: true + +- type: textarea + id: description + attributes: + label: 需求描述 Feature Description + description: | + 请尽可能包含任务目标、需求场景、功能描述等信息,全面的信息有利于我们准确评估你的需求。 + Please include as much information as possible, such as mission objectives, requirement scenarios, functional descriptions, etc. Comprehensive information will help us accurately assess your feature request. + value: "1. 任务目标(请描述你正在做的项目是什么,如模型、论文、项目是什么?); 2. 需求场景(请描述你的项目中为什么需要用此功能); 3. 功能描述(请简单描述或设计这个功能)" + validations: + required: true + +- type: checkboxes + attributes: + label: 是否愿意提交PR Are you willing to submit a PR? + description: > + (可选)如果你对新feature有自己的想法,十分鼓励提交[Pull Request](https://github.com/PaddlePaddle/PaddleDetection/pulls),共同提升PaddleDetection + + (Optional) We encourage you to submit a [Pull Request](https://github.com/PaddlePaddle/PaddleDetection/pulls) (PR) to help improve PaddleDetection for everyone, especially if you have a good understanding of how to implement a fix or feature. + options: + - label: Yes I'd like to help by submitting a PR! + +- type: markdown + attributes: + value: > + 感谢你的贡献 🎉!Thanks for your contribution 🎉! diff --git a/.github/ISSUE_TEMPLATE/3_documentation-issue.yml b/.github/ISSUE_TEMPLATE/3_documentation-issue.yml new file mode 100644 index 0000000000000000000000000000000000000000..4ea08cd5f4b99003d2323e1578bd0456a9dcf848 --- /dev/null +++ b/.github/ISSUE_TEMPLATE/3_documentation-issue.yml @@ -0,0 +1,38 @@ +name: 📚 文档 Documentation Issue +description: 反馈一个官网文档错误。 Report an issue related to https://github.com/PaddlePaddle/PaddleDetection. +labels: [type/docs, status/new-issue] + +body: +- type: markdown + attributes: + value: > + #### 请确认反馈的问题来自PaddlePaddle官网文档:https://github.com/PaddlePaddle/PaddleDetection 。 + + #### Before submitting a Documentation Issue, Please make sure that issue is related to https://github.com/PaddlePaddle/PaddleDetection. + +- type: textarea + id: link + attributes: + label: 文档链接&描述 Document Links & Description + description: | + 请说明有问题的文档链接以及该文档存在的问题。 + Please fill in the link to the document and describe the question. 
+ validations: + required: true + + +- type: textarea + id: error + attributes: + label: 请提出你的建议 Please give your suggestion + description: | + 请告诉我们,你希望如何改进这个文档。或者你可以提个PR修复这个问题。 + Please tell us how you would like to improve this document. Or you can submit a PR to fix this problem. + + validations: + required: false + +- type: markdown + attributes: + value: > + 感谢你的贡献 🎉!Thanks for your contribution 🎉! diff --git a/.github/ISSUE_TEMPLATE/4_ask-a-question.yml b/.github/ISSUE_TEMPLATE/4_ask-a-question.yml new file mode 100644 index 0000000000000000000000000000000000000000..af237f516eb333d4c5f33bba4b7dc9c0dec2e30f --- /dev/null +++ b/.github/ISSUE_TEMPLATE/4_ask-a-question.yml @@ -0,0 +1,37 @@ +name: 🙋🏼‍♀️🙋🏻‍♂️提问 Ask a Question +description: 提出一个使用/咨询问题。 Ask a usage or consultation question. +labels: [type/question, status/new-issue] + +body: +- type: checkboxes + attributes: + label: 问题确认 Search before asking + description: > + #### 你可以在这里提出一个使用/咨询问题,提问之前请确保: + + - 1)已经百度/谷歌搜索过你的问题,但是没有找到解答; + + - 2)已经在官网查询过[教程文档](https://github.com/PaddlePaddle/PaddleDetection/blob/release/2.4/docs/tutorials/GETTING_STARTED_cn.md)与[FAQ](https://github.com/PaddlePaddle/PaddleDetection/tree/release/2.4/docs/tutorials/FAQ),但是没有找到解答; + + - 3)已经在[历史issue](https://github.com/PaddlePaddle/PaddleDetection/issues)中搜索过,没有找到同类issue或issue未被解答。 + + + #### You could ask a usage or consultation question here, before your start, please make sure: + + - 1) You have searched your question on Baidu/Google, but found no answer; + + - 2) You have checked the [tutorials](https://github.com/PaddlePaddle/PaddleDetection/blob/release/2.4/docs/tutorials/GETTING_STARTED.md), but found no answer; + + - 3) You have searched [the existing and past issues](https://github.com/PaddlePaddle/PaddleDetection/issues), but found no similar issue or the issue has not been answered. + + options: + - label: > + 我已经搜索过问题,但是没有找到解答。I have searched the question and found no related answer. + required: true + +- type: textarea + id: question + attributes: + label: 请提出你的问题 Please ask your question + validations: + required: true diff --git a/.github/ISSUE_TEMPLATE/5_others.yml b/.github/ISSUE_TEMPLATE/5_others.yml new file mode 100644 index 0000000000000000000000000000000000000000..ec2f08ae16098cd8987f3b6bc726d9a28696833a --- /dev/null +++ b/.github/ISSUE_TEMPLATE/5_others.yml @@ -0,0 +1,23 @@ +name: 🧩 其他 Others +description: 提出其他问题。 Report any other non-support related issues. +labels: [type/others, status/new-issue] + +body: +- type: markdown + attributes: + value: > + #### 你可以在这里提出任何前面几类模板不适用的问题,包括但不限于:优化性建议、框架使用体验反馈、版本兼容性问题、报错信息不清楚等。 + + #### You can report any issues that are not applicable to the previous types of templates, including but not limited to: enhancement suggestions, feedback on the use of the framework, version compatibility issues, unclear error information, etc. + +- type: textarea + id: others + attributes: + label: 问题描述 Please describe your issue + validations: + required: true + +- type: markdown + attributes: + value: > + 感谢你的贡献 🎉! Thanks for your contribution 🎉! diff --git a/.gitignore b/.gitignore index 2260189a0aa6105c5ead510efe24f27dfb046882..6a98a38b72ef9a59a2fdab266697661cdd1fa136 100644 --- a/.gitignore +++ b/.gitignore @@ -82,3 +82,7 @@ ppdet/version.py # NPU meta folder kernel_meta/ + +# MAC +*.DS_Store + diff --git a/README_cn.md b/README_cn.md index 1b2ebaa39c4ebdbc6d224b9e4ad0cefa1d3eaeb2..e1e3f07d205e566bf27f95c516461a7290743557 100644 --- a/README_cn.md +++ b/README_cn.md @@ -2,84 +2,81 @@

- +

**飞桨目标检测开发套件,端到端地完成从训练到部署的全流程目标检测应用。** +

+ + + + + +

+
-[![License](https://img.shields.io/badge/license-Apache%202-blue.svg)](LICENSE) -[![Version](https://img.shields.io/github/release/PaddlePaddle/PaddleDetection.svg)](https://github.com/PaddlePaddle/PaddleDetection/releases) -![python version](https://img.shields.io/badge/python-3.6+-orange.svg) -![support os](https://img.shields.io/badge/os-linux%2C%20win%2C%20mac-yellow.svg) +
+
-## 产品动态 - -- 🔥 **2022.3.24:PaddleDetection发布[release/2.4版本](https://github.com/PaddlePaddle/PaddleDetection/tree/release/2.4)** - - - 发布高精度云边一体SOTA目标检测模型[PP-YOLOE](config/ppyoloe),全系列多尺度模型,满足不同硬件算力需求,可适配服务器、边缘端GPU及其他服务器端AI加速卡。 - - 发布边缘端和CPU端超轻量SOTA目标检测模型[PP-PicoDet增强版](configs/picodet),提供模型稀疏化和量化功能,便于模型加速,各类硬件无需单独开发后处理模块,降低部署门槛。 - - 发布实时行人分析工具[PP-Human](deploy/pphuman),支持行人跟踪、人流量统计、人体属性识别与摔倒检测四大能力,基于真实场景数据特殊优化,精准识别各类摔倒姿势,适应不同环境背景、光线及摄像角度。 - -- 2021.11.03: PaddleDetection发布[release/2.3版本](https://github.com/PaddlePaddle/PaddleDetection/tree/release/2.3) - - - 发布轻量级检测特色模型⚡[PP-PicoDet](configs/picodet),0.99m的参数量可实现精度30+mAP、速度150FPS。 - - - 发布轻量级关键点特色模型⚡[PP-TinyPose](configs/keypoint/tiny_pose),单人场景FP16推理可达122FPS、51.8AP,具有精度高速度快、检测人数无限制、微小目标效果好的优势。 - - - 发布实时跟踪系统[PP-Tracking](deploy/pptracking),覆盖单、多镜头下行人、车辆、多类别跟踪,对小目标、密集型特殊优化,提供人、车流量技术解决方案。 - - - 新增[Swin Transformer](configs/faster_rcnn),[TOOD](configs/tood),[GFL](configs/gfl)目标检测模型。 - - - 发布[Sniper](configs/sniper)小目标检测优化模型,发布针对EdgeBoard优化[PP-YOLO-EB](configs/ppyolo)模型。 - - - 新增轻量化关键点模型[Lite HRNet](configs/keypoint)关键点模型并支持Paddle Lite部署。 - -- 2021.08.10: PaddleDetection发布[release/2.2版本](https://github.com/PaddlePaddle/PaddleDetection/tree/release/2.2) - - - 发布Transformer检测系列模型,包括[DETR](configs/detr), [Deformable DETR](configs/deformable_detr), [Sparse RCNN](configs/sparse_rcnn)。 - - 新增Dark HRNet关键点模型和MPII数据集[关键点模型](configs/keypoint) - - 新增[人头](configs/mot/headtracking21)、[车辆](configs/mot/vehicle)跟踪垂类模型。 - -- 2021.05.20: PaddleDetection发布[release/2.1版本](https://github.com/PaddlePaddle/PaddleDetection/tree/release/2.1) - - 新增[关键点检测](configs/keypoint),模型包括HigherHRNet,HRNet。 - - 新增[多目标跟踪](configs/mot)能力,模型包括DeepSORT,JDE,FairMOT。 - - 发布PPYOLO系列模型压缩模型,新增[ONNX模型导出教程](deploy/EXPORT_ONNX_MODEL.md)。 - -## 简介 - -**PaddleDetection**为基于飞桨PaddlePaddle的端到端目标检测套件,内置**190+主流目标检测、实例分割、跟踪、关键点检测**算法,其中包括**服务器端和移动端高精度、轻量级**产业级SOTA模型、冠军方案和学术前沿算法,并提供配置化的网络模块组件、十余种数据增强策略和损失函数等高阶优化支持和多种部署方案,在打通数据处理、模型开发、训练、压缩、部署全流程的基础上,提供丰富的案例及教程,加速算法产业落地应用。 - -#### 提供目标检测、实例分割、多目标跟踪、关键点检测等多种能力 - -
- -
+## 产品动态
+
+- 🔥 **2022.8.09:[YOLO家族全系列模型](https://github.com/nemonameless/PaddleDetection_YOLOSeries)发布**
+ - 全面覆盖的YOLO家族经典与最新模型:包括YOLOv3,百度飞桨自研的实时高精度目标检测模型PP-YOLOE,以及前沿检测算法YOLOv4、YOLOv5、YOLOX、MT-YOLOv6及YOLOv7
+ - 更强的模型性能:基于各家前沿YOLO算法进行创新并升级,缩短训练周期5~8倍,精度普遍提升1%~5% mAP;使用模型压缩策略实现精度无损的同时速度提升30%以上
+ - 完备的端到端开发支持:支持从模型训练、评估、预测到模型量化压缩,部署多种硬件的端到端开发全流程。同时支持不同模型算法灵活切换,一键实现算法二次开发
+
+- 🔥 **2022.8.01:发布[PP-TinyPose升级版](./configs/keypoint/tiny_pose/),在健身、舞蹈等场景的业务数据集端到端AP提升9.1**
+ - 新增体育场景真实数据,复杂动作识别效果显著提升,覆盖侧身、卧躺、跳跃、高抬腿等非常规动作
+ - 检测模型采用[PP-PicoDet增强版](./configs/picodet/README.md),在COCO数据集上精度提升3.1%
+ - 关键点稳定性增强,新增滤波稳定方式,使得视频预测结果更加稳定平滑
+
+- 2022.7.14:[行人分析工具PP-Human v2](./deploy/pipeline)发布
+ - 四大产业特色功能:高性能易扩展的五大复杂行为识别、闪电级人体属性识别、一行代码即可实现的人流检测与轨迹留存以及高精度跨镜跟踪
+ - 底层核心算法性能强劲:覆盖行人检测、跟踪、属性三类核心算法能力,对目标人数、光线、背景均无限制
+ - 极低使用门槛:提供保姆级全流程开发及模型优化策略、一行命令完成推理、兼容各类数据输入格式
+
+- 2022.3.24:PaddleDetection发布[release/2.4版本](https://github.com/PaddlePaddle/PaddleDetection/tree/release/2.4)
+ - 发布高精度云边一体SOTA目标检测模型[PP-YOLOE](configs/ppyoloe),提供s/m/l/x版本,l版本COCO test2017数据集精度51.6%,V100预测速度78.1 FPS,支持混合精度训练,训练较PP-YOLOv2加速33%,全系列多尺度模型,满足不同硬件算力需求,可适配服务器、边缘端GPU及其他服务器端AI加速卡。
+ - 发布边缘端和CPU端超轻量SOTA目标检测模型[PP-PicoDet增强版](configs/picodet),精度提升2%左右,CPU预测速度提升63%,新增参数量0.7M的PicoDet-XS模型,提供模型稀疏化和量化功能,便于模型加速,各类硬件无需单独开发后处理模块,降低部署门槛。
+ - 发布实时行人分析工具[PP-Human](deploy/pipeline),支持行人跟踪、人流量统计、人体属性识别与摔倒检测四大能力,基于真实场景数据特殊优化,精准识别各类摔倒姿势,适应不同环境背景、光线及摄像角度。
+ - 新增[YOLOX](configs/yolox)目标检测模型,支持nano/tiny/s/m/l/x版本,x版本COCO val2017数据集精度51.8%。
+
+- [更多版本发布](https://github.com/PaddlePaddle/PaddleDetection/releases)

-#### 应用场景覆盖工业、智慧城市、安防、交通、零售、医疗等十余种行业

+## 简介

-## 特性

+**PaddleDetection**为基于飞桨PaddlePaddle的端到端目标检测套件,内置**30+模型算法**及**250+预训练模型**,覆盖**目标检测、实例分割、跟踪、关键点检测**等方向,其中包括**服务器端和移动端高精度、轻量级**产业级SOTA模型、冠军方案和学术前沿算法,并提供配置化的网络模块组件、十余种数据增强策略和损失函数等高阶优化支持和多种部署方案,在打通数据处理、模型开发、训练、压缩、部署全流程的基础上,提供丰富的案例及教程,加速算法产业落地应用。
+
<div  align="center">
+ +
+ +## 特性 -- **模型丰富**: 包含**目标检测**、**实例分割**、**人脸检测**等**100+个预训练模型**,涵盖多种**全球竞赛冠军**方案。 +- **模型丰富**: 包含**目标检测**、**实例分割**、**人脸检测**、****关键点检测****、**多目标跟踪**等**250+个预训练模型**,涵盖多种**全球竞赛冠军**方案。 - **使用简洁**:模块化设计,解耦各个网络组件,开发者轻松搭建、试用各种检测模型及优化策略,快速得到高性能、定制化的算法。 - **端到端打通**: 从数据增强、组网、训练、压缩、部署端到端打通,并完备支持**云端**/**边缘端**多架构、多设备部署。 - **高性能**: 基于飞桨的高性能内核,模型训练速度及显存占用优势明显。支持FP16训练, 支持多机训练。 -## 技术交流 +
+ +
+ +## 技术交流 - 如果你发现任何PaddleDetection存在的问题或者是建议, 欢迎通过[GitHub Issues](https://github.com/PaddlePaddle/PaddleDetection/issues)给我们提issues。 -- 欢迎加入PaddleDetection QQ、微信(添加并回复小助手“检测”)用户群 - +- 欢迎加入PaddleDetection QQ、微信用户群(添加并回复小助手“检测”) +
- - + +
-## 套件结构概览 +## 套件结构概览 @@ -100,115 +97,130 @@ @@ -232,7 +245,10 @@
    -
  • Object Detection
  • +
    Object Detection
    • Faster RCNN
    • FPN
    • Cascade-RCNN
    • -
    • Libra RCNN
    • -
    • Hybrid Task RCNN
    • PSS-Det
    • RetinaNet
    • -
    • YOLOv3
    • -
    • YOLOv4
    • +
    • YOLOv3
    • +
    • YOLOv5
    • +
    • MT-YOLOv6
    • +
    • YOLOv7
    • PP-YOLOv1/v2
    • PP-YOLO-Tiny
    • +
    • PP-YOLOE
    • +
    • YOLOX
    • SSD
    • -
    • CornerNet-Squeeze
    • +
    • CenterNet
    • FCOS
    • TTFNet
    • +
    • TOOD
    • +
    • GFL
    • PP-PicoDet
    • DETR
    • Deformable DETR
    • Swin Transformer
    • Sparse RCNN
    • -
    -
  • Instance Segmentation
  • -
      +
    +
    Instance Segmentation +
    • Mask RCNN
    • +
    • Cascade Mask RCNN
    • SOLOv2
    • -
    -
  • Face Detection
  • +
+
Face Detection
    -
  • FaceBoxes
  • BlazeFace
  • -
  • BlazeFace-NAS
  • -
-
  • Multi-Object-Tracking
  • +
    +
    Multi-Object-Tracking
    • JDE
    • FairMOT
    • -
    • DeepSort
    • -
    -
  • KeyPoint-Detection
  • +
  • DeepSORT
  • +
  • ByteTrack
  • +
    +
    KeyPoint-Detection
    • HRNet
    • HigherHRNet
    • -
    +
  • Lite-HRNet
  • +
  • PP-TinyPose
  • +
    +
    Details
    • ResNet(&vd)
    • -
    • ResNeXt(&vd)
    • +
    • Res2Net(&vd)
    • +
    • CSPResNet
    • SENet
    • Res2Net
    • HRNet
    • -
    • Hourglass
    • -
    • CBNet
    • -
    • GCNet
    • +
    • Lite-HRNet
    • DarkNet
    • CSPDarkNet
    • -
    • VGG
    • MobileNetv1/v3
    • +
    • ShuffleNet
    • GhostNet
    • -
    • Efficientnet
    • -
    • BlazeNet
    • -
    +
  • BlazeNet
  • +
  • DLA
  • +
  • HardNet
  • +
  • LCNet
  • +
  • ESNet
  • +
  • Swin-Transformer
  • +
    -
    • Common
    • +
      Common
      • Sync-BN
      • Group Norm
      • DCNv2
      • -
      • Non-local
      • -
      +
    • EMA
    • +
    -
    • KeyPoint
    • +
      KeyPoint
      • DarkPose
      • -
      +
    -
    • FPN
    • +
      FPN
      • BiFPN
      • -
      • BFP
      • +
      • CSP-PAN
      • +
      • Custom-PAN
      • +
      • ES-PAN
      • HRFPN
      • -
      • ACFPN
      • -
      +
    -
    • Loss
    • +
      Loss
      • Smooth-L1
      • GIoU/DIoU/CIoU
      • IoUAware
      • -
      +
    • Focal Loss
    • +
    • CT Focal Loss
    • +
    • VariFocal Loss
    • +
    -
    • Post-processing
    • +
      Post-processing
      • SoftNMS
      • MatrixNMS
      • -
      +
    -
    • Speed
    • +
      Speed
      • FP16 training
      • Multi-machine training
      • -
      +
    +
    Details
    • Resize
    • Lighting
    • @@ -218,12 +230,13 @@
    • Color Distort
    • Random Erasing
    • Mixup
    • +
    • AugmentHSV
    • Mosaic
    • Cutmix
    • Grid Mask
    • Auto Augment
    • Random Perspective
    • -
    +
    -## 模型性能概览 +## 模型性能概览 + +
    + 云端模型性能对比 各模型结构和骨干网络的代表模型在COCO数据集上精度mAP和单卡Tesla V100上预测速度(FPS)对比图。 @@ -246,8 +262,15 @@ - `Cascade-Faster-RCNN`为`Cascade-Faster-RCNN-ResNet50vd-DCN`,PaddleDetection将其优化到COCO数据mAP为47.8%时推理速度为20FPS - `PP-YOLO`在COCO数据集精度45.9%,Tesla V100预测速度72.9FPS,精度速度均优于[YOLOv4](https://arxiv.org/abs/2004.10934) - `PP-YOLO v2`是对`PP-YOLO`模型的进一步优化,在COCO数据集精度49.5%,Tesla V100预测速度68.9FPS +- `PP-YOLOE`是对`PP-YOLO v2`模型的进一步优化,在COCO数据集精度51.6%,Tesla V100预测速度78.1FPS +- [`YOLOX`](configs/yolox)和[`YOLOv5`](https://github.com/nemonameless/PaddleDetection_YOLOSeries/tree/develop/configs/yolov5)均为基于PaddleDetection复现算法 - 图中模型均可在[模型库](#模型库)中获取 +
    + +
    + 移动端模型性能对比 + 各移动端模型在COCO数据集上精度mAP和高通骁龙865处理器上预测速度(FPS)对比图。
    @@ -259,28 +282,132 @@ - 测试数据均使用高通骁龙865(4\*A77 + 4\*A55)处理器batch size为1, 开启4线程测试,测试使用NCNN预测库,测试脚本见[MobileDetBenchmark](https://github.com/JiweiMaster/MobileDetBenchmark) - [PP-PicoDet](configs/picodet)及[PP-YOLO-Tiny](configs/ppyolo)为PaddleDetection自研模型,其余模型PaddleDetection暂未提供 -## 文档教程 +
    + +## 模型库 + +
+ 1. 通用检测
+
+#### [PP-YOLOE](./configs/ppyoloe)系列 推荐场景:Nvidia V100, T4等云端GPU和Jetson系列等边缘端设备
+
+| 模型名称 | COCO精度(mAP) | V100 TensorRT FP16速度(FPS) | 配置文件 | 模型下载 |
+|:---------- |:-----------:|:-------------------------:|:-----------------------------------------------------:|:------------------------------------------------------------------------------------:|
+| PP-YOLOE-s | 42.7 | 333.3 | [链接](configs/ppyoloe/ppyoloe_crn_s_300e_coco.yml) | [下载地址](https://paddledet.bj.bcebos.com/models/ppyoloe_crn_s_300e_coco.pdparams) |
+| PP-YOLOE-m | 48.6 | 208.3 | [链接](configs/ppyoloe/ppyoloe_crn_m_300e_coco.yml) | [下载地址](https://paddledet.bj.bcebos.com/models/ppyoloe_crn_m_300e_coco.pdparams) |
+| PP-YOLOE-l | 50.9 | 149.2 | [链接](configs/ppyoloe/ppyoloe_crn_l_300e_coco.yml) | [下载地址](https://paddledet.bj.bcebos.com/models/ppyoloe_crn_l_300e_coco.pdparams) |
+| PP-YOLOE-x | 51.9 | 95.2 | [链接](configs/ppyoloe/ppyoloe_crn_x_300e_coco.yml) | [下载地址](https://paddledet.bj.bcebos.com/models/ppyoloe_crn_x_300e_coco.pdparams) |
+
+#### [PP-PicoDet](./configs/picodet)系列 推荐场景:ARM CPU(RK3399, 树莓派等) 和NPU(比特大陆,晶晨等)移动端芯片和x86 CPU设备
+
+| 模型名称 | COCO精度(mAP) | 骁龙865 四线程速度(ms) | 配置文件 | 模型下载 |
+|:---------- |:-----------:|:---------------:|:---------------------------------------------------:|:---------------------------------------------------------------------------------:|
+| PicoDet-XS | 23.5 | 7.81 | [链接](configs/picodet/picodet_xs_320_coco_lcnet.yml) | [下载地址](https://paddledet.bj.bcebos.com/models/picodet_xs_320_coco_lcnet.pdparams) |
+| PicoDet-S | 29.1 | 9.56 | [链接](configs/picodet/picodet_s_320_coco_lcnet.yml) | [下载地址](https://paddledet.bj.bcebos.com/models/picodet_s_320_coco_lcnet.pdparams) |
+| PicoDet-M | 34.4 | 17.68 | [链接](configs/picodet/picodet_m_320_coco_lcnet.yml) | [下载地址](https://paddledet.bj.bcebos.com/models/picodet_m_320_coco_lcnet.pdparams) |
+| PicoDet-L | 36.1 | 25.21 | [链接](configs/picodet/picodet_l_320_coco_lcnet.yml) | [下载地址](https://paddledet.bj.bcebos.com/models/picodet_l_320_coco_lcnet.pdparams) |
+
+#### 前沿检测算法
+
+| 模型名称 | COCO精度(mAP) | V100 TensorRT FP16速度(FPS) | 配置文件 | 模型下载 |
+|:------------------------------------------------------------------ |:-----------:|:-------------------------:|:------------------------------------------------------------------------------------------------------------:|:--------------------------------------------------------------------------:|
+| [YOLOX-l](configs/yolox) | 50.1 | 107.5 | [链接](configs/yolox/yolox_l_300e_coco.yml) | [下载地址](https://paddledet.bj.bcebos.com/models/yolox_l_300e_coco.pdparams) |
+| [YOLOv5-l](https://github.com/nemonameless/PaddleDetection_YOLOSeries/tree/develop/configs/yolov5) | 48.6 | 136.0 | [链接](https://github.com/nemonameless/PaddleDetection_YOLOSeries/blob/develop/configs/yolov5/yolov5_l_300e_coco.yml) | [下载地址](https://paddledet.bj.bcebos.com/models/yolov5_l_300e_coco.pdparams) |
+| [YOLOv7-l](https://github.com/nemonameless/PaddleDetection_YOLOSeries/tree/develop/configs/yolov7) | 51.0 | 135.0 | [链接](https://github.com/nemonameless/PaddleDetection_YOLOSeries/blob/develop/configs/yolov7/yolov7_l_300e_coco.yml) | [下载地址](https://paddledet.bj.bcebos.com/models/yolov7_l_300e_coco.pdparams) |
+
+#### 其他通用检测模型 [文档链接](docs/MODEL_ZOO_cn.md)
+
</details>
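As a usage reference, a minimal single-image inference sketch with the PP-YOLOE-l entry from the table above. It assumes a PaddleDetection repository checkout with dependencies installed; `demo/000000014439.jpg` is the demo image shipped with the repo.

```shell
# Run single-image inference with PP-YOLOE-l; the weights are fetched from
# the download URL in the table and results are written to output/ by default.
python tools/infer.py \
    -c configs/ppyoloe/ppyoloe_crn_l_300e_coco.yml \
    -o weights=https://paddledet.bj.bcebos.com/models/ppyoloe_crn_l_300e_coco.pdparams \
    --infer_img=demo/000000014439.jpg
```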
    + +
    + 2. 实例分割 + +| 模型名称 | 模型简介 | 推荐场景 | COCO精度(mAP) | 配置文件 | 模型下载 | +|:----------------- |:------------ |:---- |:--------------------------------:|:---------------------------------------------------------------------:|:-------------------------------------------------------------------------------------------------:| +| Mask RCNN | 两阶段实例分割算法 | 云边端 | box AP: 41.4
    mask AP: 37.5 | [链接](configs/mask_rcnn/mask_rcnn_r50_vd_fpn_2x_coco.yml) | [下载地址](https://paddledet.bj.bcebos.com/models/mask_rcnn_r50_vd_fpn_2x_coco.pdparams) | +| Cascade Mask RCNN | 两阶段实例分割算法 | 云边端 | box AP: 45.7
    mask AP: 39.7 | [链接](configs/mask_rcnn/cascade_mask_rcnn_r50_vd_fpn_ssld_2x_coco.yml) | [下载地址](https://paddledet.bj.bcebos.com/models/cascade_mask_rcnn_r50_vd_fpn_ssld_2x_coco.pdparams) | +| SOLOv2 | 轻量级单阶段实例分割算法 | 云边端 | mask AP: 38.0 | [链接](configs/solov2/solov2_r50_fpn_3x_coco.yml) | [下载地址](https://paddledet.bj.bcebos.com/models/solov2_r50_fpn_3x_coco.pdparams) | + +
    + +
    + 3. 关键点检测 + +| 模型名称 | 模型简介 | 推荐场景 | COCO精度(AP) | 速度 | 配置文件 | 模型下载 | +|:------------------------------------------- |:---------------------------------------------------------------- |:---------------------------------- |:----------:|:-----------------------:|:-------------------------------------------------------:|:---------------------------------------------------------------------------------------:| +| HRNet-w32 + DarkPose |
    top-down 关键点检测算法
    输入尺寸384x288
    |
    云边端
    | 78.3 | T4 TensorRT FP16 2.96ms | [链接](configs/keypoint/hrnet/dark_hrnet_w32_384x288.yml) | [下载地址](https://paddledet.bj.bcebos.com/models/keypoint/dark_hrnet_w32_384x288.pdparams) | +| HRNet-w32 + DarkPose | top-down 关键点检测算法
    输入尺寸256x192 | 云边端 | 78.0 | T4 TensorRT FP16 1.75ms | [链接](configs/keypoint/hrnet/dark_hrnet_w32_256x192.yml) | [下载地址](https://paddledet.bj.bcebos.com/models/keypoint/dark_hrnet_w32_256x192.pdparams) | +| [PP-TinyPose](./configs/keypoint/tiny_pose) | 轻量级关键点算法
    输入尺寸256x192 | 移动端 | 68.8 | 骁龙865 四线程 6.30ms | [链接](configs/keypoint/tiny_pose/tinypose_256x192.yml) | [下载地址](https://bj.bcebos.com/v1/paddledet/models/keypoint/tinypose_256x192.pdparams) | +| [PP-TinyPose](./configs/keypoint/tiny_pose) | 轻量级关键点算法
    输入尺寸128x96 | 移动端 | 58.1 | 骁龙865 四线程 2.37ms | [链接](configs/keypoint/tiny_pose/tinypose_128x96.yml) | [下载地址](https://bj.bcebos.com/v1/paddledet/models/keypoint/tinypose_128x96.pdparams) | + +#### 其他关键点检测模型 [文档链接](configs/keypoint) + +
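A minimal top-down keypoint inference sketch with the HRNet-w32 + DarkPose entry listed above (assuming a repository checkout; `demo/hrnet_demo.jpg` is the single-person demo image shipped with the repo):

```shell
# Top-down keypoint inference on a single-person image; the visualized
# result is written to the output/ directory.
python tools/infer.py \
    -c configs/keypoint/hrnet/dark_hrnet_w32_256x192.yml \
    -o weights=https://paddledet.bj.bcebos.com/models/keypoint/dark_hrnet_w32_256x192.pdparams \
    --infer_img=demo/hrnet_demo.jpg
```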
    + +
    + 4. 多目标跟踪PP-Tracking + +| 模型名称 | 模型简介 | 推荐场景 | 精度 | 配置文件 | 模型下载 | +|:--------- |:------------------------ |:---------------------------------- |:----------------------:|:---------------------------------------------------------------------:|:-------------------------------------------------------------------------------------------------:| +| DeepSORT | SDE多目标跟踪算法 检测、ReID模型相互独立 |
    云边端
    | MOT-17 half val: 66.9 | [链接](configs/mot/deepsort/deepsort_jde_yolov3_pcb_pyramid.yml) | [下载地址](https://paddledet.bj.bcebos.com/models/mot/deepsort/deepsort_pcb_pyramid_r101.pdparams) | +| ByteTrack | SDE多目标跟踪算法 仅包含检测模型 | 云边端 | MOT-17 half val: 77.3 | [链接](configs/mot/bytetrack/detector/yolox_x_24e_800x1440_mix_det.yml) | [下载地址](https://paddledet.bj.bcebos.com/models/mot/deepsort/yolox_x_24e_800x1440_mix_det.pdparams) | +| JDE | JDE多目标跟踪算法 多任务联合学习方法 | 云边端 | MOT-16 test: 64.6 | [链接](configs/mot/jde/jde_darknet53_30e_1088x608.yml) | [下载地址](https://paddledet.bj.bcebos.com/models/mot/jde_darknet53_30e_1088x608.pdparams) | +| FairMOT | JDE多目标跟踪算法 多任务联合学习方法 | 云边端 | MOT-16 test: 75.0 | [链接](configs/mot/fairmot/fairmot_dla34_30e_1088x608.yml) | [下载地址](https://paddledet.bj.bcebos.com/models/mot/fairmot_dla34_30e_1088x608.pdparams) | + +#### 其他多目标跟踪模型 [文档链接](configs/mot) + +
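A minimal video tracking sketch with the FairMOT entry from the table above (`test_video.mp4` is a placeholder for your own input; a repository checkout is assumed):

```shell
# Multi-object tracking on a video; --save_videos writes the visualized result.
python tools/infer_mot.py \
    -c configs/mot/fairmot/fairmot_dla34_30e_1088x608.yml \
    -o weights=https://paddledet.bj.bcebos.com/models/mot/fairmot_dla34_30e_1088x608.pdparams \
    --video_file=test_video.mp4 --save_videos
```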
    + +
    + 5. 产业级实时行人分析工具 + + +| 任务 | 端到端速度(ms)| 模型方案 | 模型体积 | +| :---------: | :-------: | :------: |:------: | +| 行人检测(高精度) | 25.1ms | [目标检测](https://bj.bcebos.com/v1/paddledet/models/pipeline/mot_ppyoloe_l_36e_pipeline.zip) | 182M | +| 行人检测(轻量级) | 16.2ms | [目标检测](https://bj.bcebos.com/v1/paddledet/models/pipeline/mot_ppyoloe_s_36e_pipeline.zip) | 27M | +| 行人跟踪(高精度) | 31.8ms | [多目标跟踪](https://bj.bcebos.com/v1/paddledet/models/pipeline/mot_ppyoloe_l_36e_pipeline.zip) | 182M | +| 行人跟踪(轻量级) | 21.0ms | [多目标跟踪](https://bj.bcebos.com/v1/paddledet/models/pipeline/mot_ppyoloe_s_36e_pipeline.zip) | 27M | +| 属性识别(高精度) | 单人8.5ms | [目标检测](https://bj.bcebos.com/v1/paddledet/models/pipeline/mot_ppyoloe_l_36e_pipeline.zip)
    [属性识别](https://bj.bcebos.com/v1/paddledet/models/pipeline/PPHGNet_small_person_attribute_954_infer.zip) | 目标检测:182M
    属性识别:86M | +| 属性识别(轻量级) | 单人7.1ms | [目标检测](https://bj.bcebos.com/v1/paddledet/models/pipeline/mot_ppyoloe_l_36e_pipeline.zip)
    [属性识别](https://bj.bcebos.com/v1/paddledet/models/pipeline/PPLCNet_x1_0_person_attribute_945_infer.zip) | 目标检测:182M
    属性识别:86M | +| 摔倒识别 | 单人10ms | [多目标跟踪](https://bj.bcebos.com/v1/paddledet/models/pipeline/mot_ppyoloe_l_36e_pipeline.zip)
    [关键点检测](https://bj.bcebos.com/v1/paddledet/models/pipeline/dark_hrnet_w32_256x192.zip)
    [基于关键点行为识别](https://bj.bcebos.com/v1/paddledet/models/pipeline/STGCN.zip) | 多目标跟踪:182M
    关键点检测:101M
    基于关键点行为识别:21.8M | +| 闯入识别 | 31.8ms | [多目标跟踪](https://bj.bcebos.com/v1/paddledet/models/pipeline/mot_ppyoloe_l_36e_pipeline.zip) | 182M | +| 打架识别 | 19.7ms | [视频分类](https://bj.bcebos.com/v1/paddledet/models/pipeline/mot_ppyoloe_l_36e_pipeline.zip) | 90M | +| 抽烟识别 | 单人15.1ms | [目标检测](https://bj.bcebos.com/v1/paddledet/models/pipeline/mot_ppyoloe_l_36e_pipeline.zip)
    [基于人体id的目标检测](https://bj.bcebos.com/v1/paddledet/models/pipeline/ppyoloe_crn_s_80e_smoking_visdrone.zip) | 目标检测:182M
    基于人体id的目标检测:27M | +| 打电话识别 | 单人ms | [目标检测](https://bj.bcebos.com/v1/paddledet/models/pipeline/mot_ppyoloe_l_36e_pipeline.zip)
    [基于人体id的图像分类](https://bj.bcebos.com/v1/paddledet/models/pipeline/PPHGNet_tiny_calling_halfbody.zip) | 目标检测:182M
    基于人体id的图像分类:45M | + + +点击模型方案中的模型即可下载指定模型 + +详细信息参考[文档](deploy/pipeline) + +
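A minimal sketch of running the PP-Human pipeline described above (the config path is as shipped under deploy/pipeline in release/2.4+; `test_video.mp4` is a placeholder input):

```shell
# Run the PP-Human pipeline on a video; which functions are enabled is
# controlled by the switches inside the config file.
python deploy/pipeline/pipeline.py \
    --config deploy/pipeline/config/infer_cfg_pphuman.yml \
    --video_file=test_video.mp4 \
    --device=gpu
```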
    + + +## 文档教程 ### 入门教程 - [安装说明](docs/tutorials/INSTALL_cn.md) -- [数据准备](docs/tutorials/PrepareDataSet.md) -- [30分钟上手PaddleDetecion](docs/tutorials/GETTING_STARTED_cn.md) +- [快速体验](docs/tutorials/QUICK_STARTED_cn.md) +- [数据准备](docs/tutorials/data/README.md) +- [PaddleDetection全流程使用](docs/tutorials/GETTING_STARTED_cn.md) +- [自定义数据训练](docs/tutorials/CustomizeDataTraining.md) - [FAQ/常见问题汇总](docs/tutorials/FAQ) ### 进阶教程 - 参数配置 - + - [RCNN参数说明](docs/tutorials/config_annotation/faster_rcnn_r50_fpn_1x_coco_annotation.md) - [PP-YOLO参数说明](docs/tutorials/config_annotation/ppyolo_r50vd_dcn_1x_coco_annotation.md) - 模型压缩(基于[PaddleSlim](https://github.com/PaddlePaddle/PaddleSlim)) - + - [剪裁/量化/蒸馏教程](configs/slim) - [推理部署](deploy/README.md) - + - [模型导出教程](deploy/EXPORT_MODEL.md) - [Paddle Inference部署](deploy/README.md) - [Python端推理部署](deploy/python) @@ -291,51 +418,44 @@ - [推理benchmark](deploy/BENCHMARK_INFER.md) - 进阶开发 - + - [数据处理模块](docs/advanced_tutorials/READER.md) - [新增检测模型](docs/advanced_tutorials/MODEL_TECHNICAL.md) + - 二次开发教程 + - [目标检测](docs/advanced_tutorials/customization/detection.md) + - [关键点检测](docs/advanced_tutorials/customization/keypoint_detection.md) + - [多目标跟踪](docs/advanced_tutorials/customization/pphuman_mot.md) + - [行为识别](docs/advanced_tutorials/customization/pphuman_action.md) + - [属性识别](docs/advanced_tutorials/customization/pphuman_attribute.md) + +### 课程专栏 + +- **【理论基础】[目标检测7日打卡营](https://aistudio.baidu.com/aistudio/education/group/info/1617):** 目标检测任务综述、RCNN系列目标检测算法详解、YOLO系列目标检测算法详解、PP-YOLO优化策略与案例分享、AnchorFree系列算法介绍和实践 + +- **【产业实践】[AI快车道产业级目标检测技术与应用](https://aistudio.baidu.com/aistudio/education/group/info/23670):** 目标检测超强目标检测算法矩阵、实时行人分析系统PP-Human、目标检测产业应用全流程拆解与实践 + +- **【行业特色】2022.3.26 [智慧城市行业七日课](https://aistudio.baidu.com/aistudio/education/group/info/25620):** 城市规划、城市治理、智慧政务、交通管理、社区治理 + +### [产业实践范例教程](./industrial_tutorial/README.md) + +- [基于PP-TinyPose增强版的智能健身动作识别](https://aistudio.baidu.com/aistudio/projectdetail/4385813) + +- [基于PP-Human的打架识别](https://aistudio.baidu.com/aistudio/projectdetail/4086987?contributionType=1) + +- [基于PP-PicoDet增强版的路面垃圾检测](https://aistudio.baidu.com/aistudio/projectdetail/3846170?channelType=0&channel=0) + +- [基于PP-PicoDet的通信塔识别及Android端部署](https://aistudio.baidu.com/aistudio/projectdetail/3561097) + +- [基于FairMOT实现人流量统计](https://aistudio.baidu.com/aistudio/projectdetail/2421822) + +- [更多其他范例](./industrial_tutorial/README.md) + +## 应用案例 -## 模型库 - -- 通用目标检测: - - [模型库](docs/MODEL_ZOO_cn.md) - - [PP-YOLOE模型](configs/ppyoloe/README_cn.md) - - [PP-YOLO模型](configs/ppyolo/README_cn.md) - - [PP-PicoDet模型](configs/picodet/README.md) - - [增强版Anchor Free模型TTFNet](configs/ttfnet/README.md) - - [移动端模型](static/configs/mobile/README.md) - - [676类目标检测](static/docs/featured_model/LARGE_SCALE_DET_MODEL.md) - - [两阶段实用模型PSS-Det](configs/rcnn_enhance/README.md) - - [半监督知识蒸馏预训练检测模型](docs/feature_models/SSLD_PRETRAINED_MODEL.md) -- 通用实例分割 - - [SOLOv2](configs/solov2/README.md) -- 旋转框检测 - - [S2ANet](configs/dota/README.md) -- [关键点检测](configs/keypoint) - - [PP-TinyPose](configs/keypoint/tiny_pose) - - HigherHRNet - - HRNet - - LiteHRNet -- [多目标跟踪](configs/mot/README.md) - - [PP-Tracking](deploy/pptracking/README.md) - - [DeepSORT](configs/mot/deepsort/README_cn.md) - - [JDE](configs/mot/jde/README_cn.md) - - [FairMOT](configs/mot/fairmot/README_cn.md) -- 垂类领域 - - [行人检测](configs/pedestrian/README.md) - - [车辆检测](configs/vehicle/README.md) - - [人脸检测](configs/face_detection/README.md) - - [实时行人分析](deploy/pphuman/README.md) -- 比赛冠军方案 - - [Objects365 2019 
Challenge夺冠模型](static/docs/featured_model/champion_model/CACascadeRCNN.md) - - [Open Images 2019-Object Detction比赛最佳单模型](static/docs/featured_model/champion_model/OIDV5_BASELINE_MODEL.md) - -## 应用案例 - -- [人像圣诞特效自动生成工具](static/application/christmas) - [安卓健身APP](https://github.com/zhiboniu/pose_demo_android) +- [多目标跟踪系统GUI可视化界面](https://github.com/yangyudong2020/PP-Tracking_GUi) -## 第三方教程推荐 +## 第三方教程推荐 - [PaddleDetection在Windows下的部署(一)](https://zhuanlan.zhihu.com/p/268657833) - [PaddleDetection在Windows下的部署(二)](https://zhuanlan.zhihu.com/p/280206376) @@ -343,15 +463,15 @@ - [安全帽检测YOLOv3模型在树莓派上的部署](https://github.com/PaddleCV-FAQ/PaddleDetection-FAQ/blob/main/Lite%E9%83%A8%E7%BD%B2/yolov3_for_raspi.md) - [使用SSD-MobileNetv1完成一个项目--准备数据集到完成树莓派部署](https://github.com/PaddleCV-FAQ/PaddleDetection-FAQ/blob/main/Lite%E9%83%A8%E7%BD%B2/ssd_mobilenet_v1_for_raspi.md) -## 版本更新 +## 版本更新 版本更新内容请参考[版本更新文档](docs/CHANGELOG.md) -## 许可证书 +## 许可证书 本项目的发布受[Apache 2.0 license](LICENSE)许可认证。 -## 贡献代码 +## 贡献代码 我们非常欢迎你可以为PaddleDetection提供代码,也十分感谢你的反馈。 @@ -359,9 +479,10 @@ - 感谢[FL77N](https://github.com/FL77N/)贡献`Sparse-RCNN`模型。 - 感谢[Chen-Song](https://github.com/Chen-Song)贡献`Swin Faster-RCNN`模型。 - 感谢[yangyudong](https://github.com/yangyudong2020), [hchhtc123](https://github.com/hchhtc123) 开发PP-Tracking GUI界面 -- 感谢[Shigure19](https://github.com/Shigure19) 开发PP-TinyPose健身APP +- 感谢Shigure19 开发PP-TinyPose健身APP +- 感谢[manangoel99](https://github.com/manangoel99)贡献Wandb可视化方式 -## 引用 +## 引用 ``` @misc{ppdet2019, diff --git a/README_en.md b/README_en.md index ad02eeeaff1bd594640681c2452e2979c65bc1bf..9fc92960dc4c3b336d12a0a5bce682dd2e101300 100644 --- a/README_en.md +++ b/README_en.md @@ -1,39 +1,85 @@ -English | [简体中文](README_cn.md) +[简体中文](README_cn.md) | English +
    +

    + +

    + +**A High-Efficient Development Toolkit for Object Detection based on [PaddlePaddle](https://github.com/paddlepaddle/paddle)** + +

    + + + + + +

    +
    + +
    + + +
+
+## Product Update

-# Product news
+- 🔥 **2022.8.09: Release [YOLO series model zoo](https://github.com/nemonameless/PaddleDetection_YOLOSeries)**
+ - Comprehensive coverage of classic and latest YOLO family models, including YOLOv3, the Paddle-developed real-time high-precision object detection model PP-YOLOE, and the frontier detection algorithms YOLOv4, YOLOv5, YOLOX, MT-YOLOv6 and YOLOv7
+ - Better model performance: upgraded on top of various frontier YOLO algorithms, shortening training time by 5-8x and generally improving accuracy by 1%-5% mAP; model compression strategies achieve a 30%+ improvement in speed without precision loss
+ - Complete end-to-end development support: an end-to-end development pipeline covering training, evaluation, inference, model compression and deployment on various hardware, with flexible algorithm switching and efficient customized development

-- 2021.11.03: Release [release/2.3](https://github.com/PaddlePaddle/Paddleetection/tree/release/2.3) version. Release mobile object detection model ⚡[PP-PicoDet](configs/picodet), mobile keypoint detection model ⚡[PP-TinyPose](configs/keypoint/tiny_pose),Real-time tracking system [PP-Tracking](deploy/pptracking). Release object detection models, including [Swin-Transformer](configs/faster_rcnn), [TOOD](configs/tood), [GFL](configs/gfl), release [Sniper](configs/sniper) tiny object detection models and optimized [PP-YOLO-EB](configs/ppyolo) model for EdgeBoard. Release mobile keypoint detection model [Lite HRNet](configs/keypoint).
-- 2021.08.10: Release [release/2.2](https://github.com/PaddlePaddle/Paddleetection/tree/release/2.2) version. Release Transformer object detection models, including [DETR](configs/detr), [Deformable DETR](configs/deformable_detr), [Sparse RCNN](configs/sparse_rcnn). Release [keypoint detection](configs/keypoint) models, including DarkHRNet and model trained on MPII dataset. Release [head-tracking](configs/mot/headtracking21) and [vehicle-tracking](configs/mot/vehicle) multi-object tracking models.
-- 2021.05.20: Release [release/2.1](https://github.com/PaddlePaddle/Paddleetection/tree/release/2.1) version. Release [Keypoint Detection](configs/keypoint), including HigherHRNet and HRNet, [Multi-Object Tracking](configs/mot), including DeepSORT,JDE and FairMOT. Release model compression for PPYOLO series models.Update documents such as [EXPORT ONNX MODEL](deploy/EXPORT_ONNX_MODEL.md).

+- 🔥 **2022.8.01: Release [PP-TinyPose plus](./configs/keypoint/tiny_pose/), improving end-to-end precision by 9.1% AP on business datasets of fitness and dance scenes**
+ - Added real data of sports scenes; recognition of complex actions is significantly improved, covering unconventional actions such as turning sideways, lying down, jumping and high leg raises
+ - The detection model uses PP-PicoDet plus, improving precision on the COCO dataset by 3.1% mAP
+ - Keypoint stability is enhanced: a new filter-based stabilization method makes video prediction results more stable and smooth

+- 2022.7.14: Release the [pedestrian analysis tool PP-Human v2](./deploy/pipeline)
+ - Four major functions: high-performance, easily extensible recognition of five complex actions; real-time human attribute recognition; visitor flow statistics with trajectory retention; and high-accuracy multi-camera tracking
+ - High-performance algorithms: covering pedestrian detection, tracking and attribute recognition, robust to the number of targets and to variations in background and lighting
+ - Highly flexible: providing a complete introduction to end-to-end development and optimization strategies, simple commands for deployment, and compatibility with different input formats

-# Introduction
+- 2022.3.24: PaddleDetection released [release/2.4 version](https://github.com/PaddlePaddle/PaddleDetection/tree/release/2.4)
+ - Release the high-performance SOTA object detection model [PP-YOLOE](configs/ppyoloe). It integrates cloud and edge devices and provides S/M/L/X versions. In particular, version L reaches 51.4% accuracy on the COCO test2017 dataset with an inference speed of 78.1 FPS on a single Tesla V100. It supports mixed precision training and trains 33% faster than PP-YOLOv2. Its full range of multi-sized models can meet different hardware arithmetic requirements, and it is adaptable to servers, edge-device GPUs and other server-side AI accelerator cards.
+ - Release the ultra-lightweight SOTA object detection model [PP-PicoDet Plus](configs/picodet) with a 2% improvement in accuracy and a 63% improvement in CPU inference speed. Add the PicoDet-XS model with only 0.7M parameters, and provide model sparsification and quantization functions for model acceleration. No hardware-specific post-processing module is required, simplifying deployment.
+ - Release the real-time pedestrian analysis tool [PP-Human](deploy/pphuman). It has four major functions: pedestrian tracking, visitor flow statistics, human attribute recognition and falling detection. Falling detection is optimized on real-life data and accurately recognizes various falling postures, adapting to different environmental backgrounds, lighting conditions and camera angles.
+ - Add the [YOLOX](configs/yolox) object detection model with nano/tiny/S/M/L/X versions; the X version reaches 51.8% accuracy on the COCO val2017 dataset.

-PaddleDetection is an end-to-end object detection development kit based on PaddlePaddle, which implements varied mainstream object detection, instance segmentation, tracking and keypoint detection algorithms in modular designwhich with configurable modules such as network components, data augmentations and losses, and release many kinds SOTA industry practice models, integrates abilities of model compression and cross-platform high-performance deployment, aims to help developers in the whole end-to-end development in a faster and better way.
+- [More releases](https://github.com/PaddlePaddle/PaddleDetection/releases)

-### PaddleDetection provides image processing capabilities such as object detection, instance segmentation, multi-object tracking, keypoint detection and etc.
+## Brief Introduction

-</div>
-
+**PaddleDetection** is an end-to-end object detection development kit based on PaddlePaddle. Providing **over 30 model algorithms** and **over 250 pre-trained models**, it covers object detection, instance segmentation, keypoint detection and multi-object tracking. In particular, PaddleDetection offers **high-performance & light-weight** industrial SOTA models on **server and mobile** devices, champion solutions and cutting-edge algorithms. PaddleDetection provides various data augmentation methods, configurable network components, loss functions and other advanced optimization & deployment schemes. In addition to running through the whole process of data processing, model development, training, compression and deployment, PaddleDetection also provides rich cases and tutorials to accelerate the industrial application of algorithms.
+
<div  align="center">
    +
-### Features

-- **Rich Models**
-PaddleDetection provides rich of models, including **100+ pre-trained models** such as **object detection**, **instance segmentation**, **face detection** etc. It covers a variety of **global competition champion** schemes.

+## Features
+
+- **Rich model library**: PaddleDetection provides over 250 pre-trained models covering **object detection, instance segmentation, face recognition and multi-object tracking**, including a variety of **global competition champion** schemes.
+- **Simple to use**: Modular design decouples each network component, so developers can easily build and try various detection models and optimization strategies, and quickly obtain high-performance, customized algorithms.
+- **Getting through end to end**: PaddleDetection gets through end to end from data augmentation, model construction, training and compression to deployment, and fully supports multi-architecture, multi-device deployment for **cloud and edge** devices.
+- **High performance**: Built on the high-performance core of PaddlePaddle, PaddleDetection has clear advantages in training speed and memory occupation, and also supports FP16 training and multi-machine training.
+
+<div align="center">
    + newstructure +
    Exchanges -- **Production Ready:** -From data augmentation, constructing models, training, compression, depolyment, get through end to end, and complete support for multi-architecture, multi-device deployment for **cloud and edge device**. +- If you have any question or suggestion, please give us your valuable input via [GitHub Issues](https://github.com/PaddlePaddle/PaddleDetection/issues) -- **High Performance:** -Based on the high performance core of PaddlePaddle, advantages of training speed and memory occupation are obvious. FP16 training and multi-machine training are supported as well. + Welcome to join PaddleDetection user groups on QQ, WeChat (scan the QR code, add and reply "D" to the assistant) + +
    + + +
    -#### Overview of Kit Structures +## Kit Structure @@ -54,115 +100,127 @@ Based on the high performance core of PaddlePaddle, advantages of training speed -
      -
    • Object Detection
    • +
      Object Detection
      • Faster RCNN
      • FPN
      • Cascade-RCNN
      • -
      • Libra RCNN
      • -
      • Hybrid Task RCNN
      • PSS-Det
      • RetinaNet
      • -
      • YOLOv3
      • -
      • YOLOv4
      • +
      • YOLOv3
      • PP-YOLOv1/v2
      • PP-YOLO-Tiny
      • +
      • PP-YOLOE
      • +
      • YOLOX
      • SSD
      • -
      • CornerNet-Squeeze
      • +
      • CenterNet
      • FCOS
      • TTFNet
      • +
      • TOOD
      • +
      • GFL
      • PP-PicoDet
      • DETR
      • Deformable DETR
      • Swin Transformer
      • Sparse RCNN
      • -
      -
    • Instance Segmentation
    • -
        +
      +
      Instance Segmentation +
      • Mask RCNN
      • +
      • Cascade Mask RCNN
      • SOLOv2
      • -
      -
    • Face Detection
    • +
    +
    Face Detection
      -
    • FaceBoxes
    • BlazeFace
    • -
    • BlazeFace-NAS
    • -
    -
  • Multi-Object-Tracking
  • +
    +
    Multi-Object-Tracking
    • JDE
    • FairMOT
    • -
    • DeepSort
    • -
    -
  • KeyPoint-Detection
  • +
  • DeepSORT
  • +
  • ByteTrack
  • +
    +
    KeyPoint-Detection
    • HRNet
    • HigherHRNet
    • -
    +
  • Lite-HRNet
  • +
  • PP-TinyPose
  • +
    +
    Details
    • ResNet(&vd)
    • -
    • ResNeXt(&vd)
    • +
    • Res2Net(&vd)
    • +
    • CSPResNet
    • SENet
    • Res2Net
    • HRNet
    • -
    • Hourglass
    • -
    • CBNet
    • -
    • GCNet
    • +
    • Lite-HRNet
    • DarkNet
    • CSPDarkNet
    • -
    • VGG
    • MobileNetv1/v3
    • +
    • ShuffleNet
    • GhostNet
    • -
    • Efficientnet
    • -
    • BlazeNet
    • -
    +
  • BlazeNet
  • +
  • DLA
  • +
  • HardNet
  • +
  • LCNet
  • +
  • ESNet
  • +
  • Swin-Transformer
  • +
    -
    • Common
    • +
      Common
      • Sync-BN
      • Group Norm
      • DCNv2
      • -
      • Non-local
      • -
      +
    • EMA
    • +
    -
    • KeyPoint
    • +
      KeyPoint
      • DarkPose
      • -
      +
    -
    • FPN
    • +
      FPN
      • BiFPN
      • -
      • BFP
      • +
      • CSP-PAN
      • +
      • Custom-PAN
      • +
      • ES-PAN
      • HRFPN
      • -
      • ACFPN
      • -
      +
    -
    • Loss
    • +
      Loss
      • Smooth-L1
      • GIoU/DIoU/CIoU
      • IoUAware
      • -
      +
    • Focal Loss
    • +
    • CT Focal Loss
    • +
    • VariFocal Loss
    • +
    -
    • Post-processing
    • +
      Post-processing
      • SoftNMS
      • MatrixNMS
      • -
      +
    -
    • Speed
    • +
      Speed
      • FP16 training
      • Multi-machine training
      • -
      +
    +
    Details
    • Resize
    • Lighting
    • @@ -172,144 +230,253 @@ Based on the high performance core of PaddlePaddle, advantages of training speed
    • Color Distort
    • Random Erasing
    • Mixup
    • +
    • AugmentHSV
    • Mosaic
    • Cutmix
    • Grid Mask
    • Auto Augment
    • Random Perspective
    • -
    +
    -#### Overview of Model Performance +## Model Performance + +
+ Performance comparison of cloud models

-The relationship between COCO mAP and FPS on Tesla V100 of representative models of each server side architectures and backbones.
+The comparison between COCO mAP and FPS on Tesla V100 of representative models of each architecture and backbone.
<div align="center">
    -
    - - **NOTE:** - - - `CBResNet stands` for `Cascade-Faster-RCNN-CBResNet200vd-FPN`, which has highest mAP on COCO as 53.3% +
- - `Cascade-Faster-RCNN` stands for `Cascade-Faster-RCNN-ResNet50vd-DCN`, which has been optimized to 20 FPS inference speed when COCO mAP as 47.8% in PaddleDetection models

+**Clarification:**

- - `PP-YOLO` achieves mAP of 45.9% on COCO and 72.9FPS on Tesla V100. Both precision and speed surpass [YOLOv4](https://arxiv.org/abs/2004.10934)

+- `CBResNet` stands for `Cascade-Faster-RCNN-CBResNet200vd-FPN`, which has the highest mAP on COCO at 53.3%
+- `Cascade-Faster-RCNN` stands for `Cascade-Faster-RCNN-ResNet50vd-DCN`, which has been optimized to 20 FPS inference speed at 47.8% COCO mAP in PaddleDetection models
+- `PP-YOLO` reaches 45.9% accuracy on the COCO dataset and 72.9 FPS inference speed on Tesla V100, surpassing [YOLOv4](https://arxiv.org/abs/2004.10934) in both speed and accuracy
+- `PP-YOLO v2` is an optimized version of `PP-YOLO`. It reaches 49.5% accuracy on the COCO dataset and 68.9 FPS inference speed on Tesla V100
+- `PP-YOLOE` is an optimized version of `PP-YOLO v2`. It reaches 51.4% accuracy on the COCO dataset and 78.1 FPS inference speed on Tesla V100
+- The models in the figure are available in the [model library](#模型库)

- - `PP-YOLO v2` is optimized version of `PP-YOLO` which has mAP of 49.5% and 68.9FPS on Tesla V100
+
- - All these models can be get in [Model Zoo](#ModelZoo)
+
</details>
+ Performance comparison on mobiles

-The relationship between COCO mAP and FPS on Qualcomm Snapdragon 865 of representative mobile side models.
+The comparison between COCO mAP and FPS on the Qualcomm Snapdragon 865 processor for representative models on mobile devices.
    - +
-**NOTE:**
+**Clarification:**
+
+- Tests were conducted on a Qualcomm Snapdragon 865 (4 \*A77 + 4 \*A55) with batch_size=1 and 4 threads, using the NCNN inference library; the test script is available at [MobileDetBenchmark](https://github.com/JiweiMaster/MobileDetBenchmark)
+- [PP-PicoDet](configs/picodet) and [PP-YOLO-Tiny](configs/ppyolo) are self-developed models of PaddleDetection; other models are not tested yet.
+
</details>
    + +## Model libraries + +
+ 1. General detection
+
+#### PP-YOLOE series Recommended scenarios: Cloud GPU such as Nvidia V100, T4 and edge devices such as Jetson series
+
+| Model | COCO Accuracy(mAP) | V100 TensorRT FP16 Speed(FPS) | Configuration | Download |
+|:---------- |:------------------:|:-----------------------------:|:---------------------------------------------------:|:----------------------------------------------------------------------------------------:|
+| PP-YOLOE-s | 42.7 | 333.3 | [Link](configs/ppyoloe/ppyoloe_crn_s_300e_coco.yml) | [Download](https://paddledet.bj.bcebos.com/models/ppyoloe_crn_s_300e_coco.pdparams) |
+| PP-YOLOE-m | 48.6 | 208.3 | [Link](configs/ppyoloe/ppyoloe_crn_m_300e_coco.yml) | [Download](https://paddledet.bj.bcebos.com/models/ppyoloe_crn_m_300e_coco.pdparams) |
+| PP-YOLOE-l | 50.9 | 149.2 | [Link](configs/ppyoloe/ppyoloe_crn_l_300e_coco.yml) | [Download](https://paddledet.bj.bcebos.com/models/ppyoloe_crn_l_300e_coco.pdparams) |
+| PP-YOLOE-x | 51.9 | 95.2 | [Link](configs/ppyoloe/ppyoloe_crn_x_300e_coco.yml) | [Download](https://paddledet.bj.bcebos.com/models/ppyoloe_crn_x_300e_coco.pdparams) |
+
+#### PP-PicoDet series Recommended scenarios: Mobile chips and x86 CPU devices, such as ARM CPU(RK3399, Raspberry Pi) and NPU(BITMAIN)
+
+| Model | COCO Accuracy(mAP) | Snapdragon 865 four-thread speed (ms) | Configuration | Download |
+|:---------- |:------------------:|:-------------------------------------:|:-----------------------------------------------------:|:-------------------------------------------------------------------------------------:|
+| PicoDet-XS | 23.5 | 7.81 | [Link](configs/picodet/picodet_xs_320_coco_lcnet.yml) | [Download](https://paddledet.bj.bcebos.com/models/picodet_xs_320_coco_lcnet.pdparams) |
+| PicoDet-S | 29.1 | 9.56 | [Link](configs/picodet/picodet_s_320_coco_lcnet.yml) | [Download](https://paddledet.bj.bcebos.com/models/picodet_s_320_coco_lcnet.pdparams) |
+| PicoDet-M | 34.4 | 17.68 | [Link](configs/picodet/picodet_m_320_coco_lcnet.yml) | [Download](https://paddledet.bj.bcebos.com/models/picodet_m_320_coco_lcnet.pdparams) |
+| PicoDet-L | 36.1 | 25.21 | [Link](configs/picodet/picodet_l_320_coco_lcnet.yml) | [Download](https://paddledet.bj.bcebos.com/models/picodet_l_320_coco_lcnet.pdparams) |
+
+#### Frontier detection algorithms
+
+| Model | COCO Accuracy(mAP) | V100 TensorRT FP16 speed(FPS) | Configuration | Download |
+|:-------- |:------------------:|:-----------------------------:|:--------------------------------------------------------------------------------------------------------------------:|:------------------------------------------------------------------------------:|
+| YOLOX-l | 50.1 | 107.5 | [Link](configs/yolox/yolox_l_300e_coco.yml) | [Download](https://paddledet.bj.bcebos.com/models/yolox_l_300e_coco.pdparams) |
+| YOLOv5-l | 48.6 | 136.0 | [Link](https://github.com/nemonameless/PaddleDetection_YOLOSeries/blob/develop/configs/yolov5/yolov5_l_300e_coco.yml) | [Download](https://paddledet.bj.bcebos.com/models/yolov5_l_300e_coco.pdparams) |
+| YOLOv7-l | 51.0 | 135.0 | [Link](https://github.com/nemonameless/PaddleDetection_YOLOSeries/blob/develop/configs/yolov7/yolov7_l_300e_coco.yml) | [Download](https://paddledet.bj.bcebos.com/models/yolov7_l_300e_coco.pdparams) |
+
+#### Other general purpose models [doc](docs/MODEL_ZOO_cn.md)
+
</details>
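A minimal sketch of exporting one of the models above to an inference model for deployment (assuming a PaddleDetection repository checkout; the exported model lands in `output_inference/` by default):

```shell
# Export PP-YOLOE-l to a static inference model for Paddle Inference deployment.
python tools/export_model.py \
    -c configs/ppyoloe/ppyoloe_crn_l_300e_coco.yml \
    -o weights=https://paddledet.bj.bcebos.com/models/ppyoloe_crn_l_300e_coco.pdparams
```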
    + +
+ 2. Instance segmentation
+
+| Model | Introduction | Recommended Scenarios | COCO Accuracy(mAP) | Configuration | Download |
+|:----------------- |:-------------------------------------------------------- |:---------------------:|:-----------------------------:|:------------------------------------------------------------------------:|:------------------------------------------------------------------------------------------------------:|
+| Mask RCNN | Two-stage instance segmentation algorithm | Edge-Cloud end | box AP: 41.4<br>mask AP: 37.5 | [Link](configs/mask_rcnn/mask_rcnn_r50_vd_fpn_2x_coco.yml) | [Download](https://paddledet.bj.bcebos.com/models/mask_rcnn_r50_vd_fpn_2x_coco.pdparams) |
+| Cascade Mask RCNN | Two-stage instance segmentation algorithm | Edge-Cloud end | box AP: 45.7<br>mask AP: 39.7 | [Link](configs/mask_rcnn/cascade_mask_rcnn_r50_vd_fpn_ssld_2x_coco.yml) | [Download](https://paddledet.bj.bcebos.com/models/cascade_mask_rcnn_r50_vd_fpn_ssld_2x_coco.pdparams) |
+| SOLOv2 | Lightweight single-stage instance segmentation algorithm | Edge-Cloud end | mask AP: 38.0 | [Link](configs/solov2/solov2_r50_fpn_3x_coco.yml) | [Download](https://paddledet.bj.bcebos.com/models/solov2_r50_fpn_3x_coco.pdparams) |
+
+
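+
+For deployment, these instance segmentation models are first exported to inference models. A minimal sketch, reusing the `tools/export_model.py` pattern shown in the keypoint deployment section of this repo:
+
+```shell
+# Sketch: export Mask RCNN with its released weights for Paddle Inference deployment
+python tools/export_model.py \
+  -c configs/mask_rcnn/mask_rcnn_r50_vd_fpn_2x_coco.yml \
+  -o weights=https://paddledet.bj.bcebos.com/models/mask_rcnn_r50_vd_fpn_2x_coco.pdparams
+```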
    + +
+ 3. Keypoint detection
-- All data tested on Qualcomm Snapdragon 865(4\*A77 + 4\*A55) processor with batch size of 1 and CPU threads of 4, and use NCNN library in testing, benchmark scripts is publiced at [MobileDetBenchmark](https://github.com/JiweiMaster/MobileDetBenchmark)
-- [PP-PicoDet](configs/picodet) and [PP-YOLO-Tiny](configs/ppyolo) are developed and released by PaddleDetection, other models are not provided in PaddleDetection.
+| Model | Introduction | Recommended scenarios | COCO Accuracy(AP) | Speed | Configuration | Download |
+|:-------------------- |:-------------------------------------------------------------- |:---------------------:|:-----------------:|:---------------------------------:|:-----------------------------------------------------------:|:----------------------------------------------------------------------------------------------:|
+| HRNet-w32 + DarkPose | Top-down Keypoint detection algorithm<br>Input size: 384x288 | Edge-Cloud end | 78.3 | T4 TensorRT FP16 2.96ms | [Link](configs/keypoint/hrnet/dark_hrnet_w32_384x288.yml) | [Download](https://paddledet.bj.bcebos.com/models/keypoint/dark_hrnet_w32_384x288.pdparams) |
+| HRNet-w32 + DarkPose | Top-down Keypoint detection algorithm<br>Input size: 256x192 | Edge-Cloud end | 78.0 | T4 TensorRT FP16 1.75ms | [Link](configs/keypoint/hrnet/dark_hrnet_w32_256x192.yml) | [Download](https://paddledet.bj.bcebos.com/models/keypoint/dark_hrnet_w32_256x192.pdparams) |
+| PP-TinyPose | Light-weight keypoint algorithm<br>Input size: 256x192 | Mobile | 68.8 | Snapdragon 865 four-thread 6.30ms | [Link](configs/keypoint/tiny_pose/tinypose_256x192.yml) | [Download](https://bj.bcebos.com/v1/paddledet/models/keypoint/tinypose_256x192.pdparams) |
+| PP-TinyPose | Light-weight keypoint algorithm<br>Input size: 128x96 | Mobile | 58.1 | Snapdragon 865 four-thread 2.37ms | [Link](configs/keypoint/tiny_pose/tinypose_128x96.yml) | [Download](https://bj.bcebos.com/v1/paddledet/models/keypoint/tinypose_128x96.pdparams) |
-## Tutorials
+#### Other keypoint detection models [doc](configs/keypoint)
-### Get Started
+
    -- [Installation guide](docs/tutorials/INSTALL.md) -- [Prepare dataset](docs/tutorials/PrepareDataSet_en.md) -- [Quick start on PaddleDetection](docs/tutorials/GETTING_STARTED.md) +
+ 4. Multi-object tracking (PP-Tracking)
+| Model | Introduction | Recommended scenarios | Accuracy | Configuration | Download |
+|:--------- |:------------------------------------------------------------------ |:--------------------- |:----------------------:|:------------------------------------------------------------------------:|:------------------------------------------------------------------------------------------------------:|
+| DeepSORT | SDE multi-object tracking algorithm with an independent ReID model | Edge-Cloud end | MOT-17 half val: 66.9 | [Link](configs/mot/deepsort/deepsort_jde_yolov3_pcb_pyramid.yml) | [Download](https://paddledet.bj.bcebos.com/models/mot/deepsort/deepsort_pcb_pyramid_r101.pdparams) |
+| ByteTrack | SDE multi-object tracking algorithm with a detection model only | Edge-Cloud end | MOT-17 half val: 77.3 | [Link](configs/mot/bytetrack/detector/yolox_x_24e_800x1440_mix_det.yml) | [Download](https://paddledet.bj.bcebos.com/models/mot/deepsort/yolox_x_24e_800x1440_mix_det.pdparams) |
+| JDE | JDE multi-object tracking algorithm with multi-task learning | Edge-Cloud end | MOT-16 test: 64.6 | [Link](configs/mot/jde/jde_darknet53_30e_1088x608.yml) | [Download](https://paddledet.bj.bcebos.com/models/mot/jde_darknet53_30e_1088x608.pdparams) |
+| FairMOT | JDE multi-object tracking algorithm with multi-task learning | Edge-Cloud end | MOT-16 test: 75.0 | [Link](configs/mot/fairmot/fairmot_dla34_30e_1088x608.yml) | [Download](https://paddledet.bj.bcebos.com/models/mot/fairmot_dla34_30e_1088x608.pdparams) |
-### Advanced Tutorials
+#### Other multi-object tracking models [docs](configs/mot)
-- Parameter configuration
-  - [Parameter configuration for RCNN model](docs/tutorials/config_annotation/faster_rcnn_r50_fpn_1x_coco_annotation_en.md)
-  - [Parameter configuration for PP-YOLO model](docs/tutorials/config_annotation/ppyolo_r50vd_dcn_1x_coco_annotation_en.md)
+
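+
+The tracking models follow the same export-then-deploy flow. For example, FairMOT can be exported with the command also used in the keypoint joint-deployment tutorial:
+
+```shell
+# Export FairMOT for deployment (e.g. joint inference with keypoint models)
+python tools/export_model.py \
+  -c configs/mot/fairmot/fairmot_dla34_30e_1088x608.yml \
+  -o weights=https://paddledet.bj.bcebos.com/models/mot/fairmot_dla34_30e_1088x608.pdparams
+```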
    -- Model Compression(Based on [PaddleSlim](https://github.com/PaddlePaddle/PaddleSlim)) - - [Prune/Quant/Distill](configs/slim) +
+ 5. Industrial real-time pedestrian analysis tool: PP-Human
-- Inference and deployment
-  - [Export model for inference](deploy/EXPORT_MODEL_en.md)
-  - [Paddle Inference](deploy/README_en.md)
-    - [Python inference](deploy/python)
-    - [C++ inference](deploy/cpp)
-  - [Paddle-Lite](deploy/lite)
-  - [Paddle Serving](deploy/serving)
-  - [Export ONNX model](deploy/EXPORT_ONNX_MODEL_en.md)
-  - [Inference benchmark](deploy/BENCHMARK_INFER_en.md)
-  - [Exporting to ONNX and using OpenVINO for inference](docs/advanced_tutorials/openvino_inference/README.md)
+| Function \ Model | Object detection | Multi-object tracking | Attribute recognition | Keypoint detection | Action recognition | ReID |
+|:------------------------------------ |:----------------------------------------------------------------------------------------:|:----------------------------------------------------------------------------------------:|:-------------------------------------------------------------------------------------------:|:-------------------------------------------------------------------------------------------:|:-------------------------------------------------------------------:|:------------------------------------------------------------------------:|
+| Pedestrian Detection | [✅](https://bj.bcebos.com/v1/paddledet/models/pipeline/mot_ppyoloe_l_36e_pipeline.zip) | | | | | |
+| Pedestrian Tracking | | [✅](https://bj.bcebos.com/v1/paddledet/models/pipeline/mot_ppyoloe_l_36e_pipeline.zip) | | | | |
+| Attribute Recognition (Image) | [✅](https://bj.bcebos.com/v1/paddledet/models/pipeline/mot_ppyoloe_l_36e_pipeline.zip) | | [✅](https://bj.bcebos.com/v1/paddledet/models/pipeline/strongbaseline_r50_30e_pa100k.zip) | | | |
+| Attribute Recognition (Video) | | [✅](https://bj.bcebos.com/v1/paddledet/models/pipeline/mot_ppyoloe_l_36e_pipeline.zip) | | | | |
+| Falling Detection | | [✅](https://bj.bcebos.com/v1/paddledet/models/pipeline/mot_ppyoloe_l_36e_pipeline.zip) | | [✅](https://bj.bcebos.com/v1/paddledet/models/pipeline/strongbaseline_r50_30e_pa100k.zip) | [✅](https://bj.bcebos.com/v1/paddledet/models/pipeline/STGCN.zip) | |
+| ReID | | [✅](https://bj.bcebos.com/v1/paddledet/models/pipeline/mot_ppyoloe_l_36e_pipeline.zip) | | | | [✅](https://bj.bcebos.com/v1/paddledet/models/pipeline/reid_model.zip) |
+| **Accuracy** | mAP 56.3 | MOTA 72.0 | mA 94.86 | AP 87.1 | AP 96.43 | mAP 98.8 |
+| **T4 TensorRT FP16 Inference speed** | 28.0ms | 33.1ms | Single person 2ms | Single person 2.9ms | Single person 2.7ms | Single person 1.5ms |
+
+

+
+**Click "✅" to download**
+
+## Document tutorials
+
+### Introductory tutorials
+
+- [Installation](docs/tutorials/INSTALL_cn.md)
+- [Quick start](docs/tutorials/QUICK_STARTED_cn.md)
+- [Data preparation](docs/tutorials/data/README.md)
+- [Getting Started on PaddleDetection](docs/tutorials/GETTING_STARTED_cn.md)
+- [Customize data training](docs/tutorials/CustomizeDataTraining.md)
+- [FAQ](docs/tutorials/FAQ)
+
+### Advanced tutorials
+
+- Configuration
+
+  - [RCNN Configuration](docs/tutorials/config_annotation/faster_rcnn_r50_fpn_1x_coco_annotation.md)
+  - [PP-YOLO Configuration](docs/tutorials/config_annotation/ppyolo_r50vd_dcn_1x_coco_annotation.md)
+
+- Compression based on [PaddleSlim](https://github.com/PaddlePaddle/PaddleSlim)
+
+  - [Pruning/Quantization/Distillation Tutorial](configs/slim)
+
+- [Inference deployment](deploy/README.md)
+
+  - [Export model for inference](deploy/EXPORT_MODEL.md)
+
+  - [Paddle Inference deployment](deploy/README.md)
+
+    - [Inference deployment with Python](deploy/python)
+    - [Inference deployment with C++](deploy/cpp)
+
+  - [Paddle-Lite deployment](deploy/lite)
+
+  - [Paddle Serving deployment](deploy/serving)
+
+  - [ONNX model export](deploy/EXPORT_ONNX_MODEL.md)
+
+  - [Inference benchmark](deploy/BENCHMARK_INFER.md)
- Advanced development
-  - [New data augmentations](docs/advanced_tutorials/READER_en.md)
-  - [New detection algorithms](docs/advanced_tutorials/MODEL_TECHNICAL_en.md)
-
-
-## Model Zoo
-
-- Universal object detection
-  - [Model library and baselines](docs/MODEL_ZOO_cn.md)
-  - [PP-YOLO](configs/ppyolo/README.md)
-  - [PP-PicoDet](configs/picodet/README.md)
-  - [Enhanced Anchor Free model--TTFNet](configs/ttfnet/README_en.md)
-  - [Mobile models](static/configs/mobile/README_en.md)
-  - [676 classes of object detection](static/docs/featured_model/LARGE_SCALE_DET_MODEL_en.md)
-  - [Two-stage practical PSS-Det](configs/rcnn_enhance/README_en.md)
-  - [SSLD pretrained models](docs/feature_models/SSLD_PRETRAINED_MODEL_en.md)
-- Universal instance segmentation
-  - [SOLOv2](configs/solov2/README.md)
-- Rotation object detection
-  - [S2ANet](configs/dota/README_en.md)
-- [Keypoint detection](configs/keypoint)
-  - [PP-TinyPose](configs/keypoint/tiny_pose)
-  - HigherHRNet
-  - HRNet
-  - LiteHRNet
-- [Multi-Object Tracking](configs/mot/README.md)
-  - [PP-Tracking](deploy/pptracking/README.md)
-  - [DeepSORT](configs/mot/deepsort/README.md)
-  - [JDE](configs/mot/jde/README.md)
-  - [FairMOT](configs/mot/fairmot/README.md)
-- Vertical field
-  - [Face detection](configs/face_detection/README_en.md)
-  - [Pedestrian detection](configs/pedestrian/README.md)
-  - [Vehicle detection](configs/vehicle/README.md)
-- Competition Plan
-  - [Objects365 2019 Challenge champion model](static/docs/featured_model/champion_model/CACascadeRCNN_en.md)
-  - [Best single model of Open Images 2019-Object Detection](static/docs/featured_model/champion_model/OIDV5_BASELINE_MODEL_en.md)
-
-## Applications
-
-- [Christmas portrait automatic generation tool](static/application/christmas)
-- [Android Fitness Demo](https://github.com/zhiboniu/pose_demo_android)
-
-## Updates
-
-Updates please refer to [change log](docs/CHANGELOG_en.md) for details.
-
-
-## License
-
-PaddleDetection is released under the [Apache 2.0 license](LICENSE).
-
-
-## Contributing
-
-Contributions are highly welcomed and we would really appreciate your feedback!!
-- Thanks [Mandroide](https://github.com/Mandroide) for cleaning the code and unifying some function interface.
-- Thanks [FL77N](https://github.com/FL77N/) for contributing the code of `Sparse-RCNN` model.
-- Thanks [Chen-Song](https://github.com/Chen-Song) for contributing the code of `Swin Faster-RCNN` model.
-- Thanks [yangyudong](https://github.com/yangyudong2020), [hchhtc123](https://github.com/hchhtc123) for contributing PP-Tracking GUI interface.
-- Thanks [Shigure19](https://github.com/Shigure19) for contributing PP-TinyPose fitness APP.
-
-## Citation
+
+  - [Data processing module](docs/advanced_tutorials/READER.md)
+  - [New object detection models](docs/advanced_tutorials/MODEL_TECHNICAL.md)
+  - Customization
+    - [Object detection](docs/advanced_tutorials/customization/detection.md)
+    - [Keypoint detection](docs/advanced_tutorials/customization/keypoint_detection.md)
+    - [Multiple object tracking](docs/advanced_tutorials/customization/pphuman_mot.md)
+    - [Action recognition](docs/advanced_tutorials/customization/pphuman_action.md)
+    - [Attribute recognition](docs/advanced_tutorials/customization/pphuman_attribute.md)
+
+### Courses
+
+- **[Theoretical foundation] [Object detection 7-day camp](https://aistudio.baidu.com/aistudio/education/group/info/1617):** an overview of object detection tasks, details of the RCNN and YOLO series of object detection algorithms, PP-YOLO optimization strategies and case studies, and an introduction to and practice with Anchor-Free algorithms
+
+- **[Industrial application] [AI Fast Track industrial object detection technology and application](https://aistudio.baidu.com/aistudio/education/group/info/23670):** high-performance object detection algorithms, the real-time pedestrian analysis system PP-Human, and a breakdown and practice of industrial object detection applications
+
+- **[Industrial features] 2022.3.26** **[Smart City Industry Seven-Day Class](https://aistudio.baidu.com/aistudio/education/group/info/25620)**: urban planning, urban governance, smart governance services, traffic management, and community governance.
+
+### [Industrial tutorial examples](./industrial_tutorial/README.md)
+
+- [Intelligent fitness recognition based on PP-TinyPose Plus](https://aistudio.baidu.com/aistudio/projectdetail/4385813)
+
+- [Road litter detection based on PP-PicoDet Plus](https://aistudio.baidu.com/aistudio/projectdetail/3561097)
+
+- [Communication tower detection based on PP-PicoDet and deployment on Android](https://aistudio.baidu.com/aistudio/projectdetail/3561097)
+
+- [Visitor flow statistics based on FairMOT](https://aistudio.baidu.com/aistudio/projectdetail/2421822)
+
+- [More examples](./industrial_tutorial/README.md)
+
+## Applications
+
+- [Fitness app on Android mobile](https://github.com/zhiboniu/pose_demo_android)
+- [PP-Tracking GUI Visualization Interface](https://github.com/yangyudong2020/PP-Tracking_GUi)
+
+## Recommended third-party tutorials
+
+- [Deployment of PaddleDetection for Windows I](https://zhuanlan.zhihu.com/p/268657833)
+- [Deployment of PaddleDetection for Windows II](https://zhuanlan.zhihu.com/p/280206376)
+- [Deployment of PaddleDetection on Jetson Nano](https://zhuanlan.zhihu.com/p/319371293)
+- [How to deploy a YOLOv3 model on Raspberry Pi for helmet detection](https://github.com/PaddleCV-FAQ/PaddleDetection-FAQ/blob/main/Lite%E9%83%A8%E7%BD%B2/yolov3_for_raspi.md)
+- [Use SSD-MobileNetv1 for a project -- from dataset to deployment on Raspberry Pi](https://github.com/PaddleCV-FAQ/PaddleDetection-FAQ/blob/main/Lite%E9%83%A8%E7%BD%B2/ssd_mobilenet_v1_for_raspi.md)
+
+## Version updates
+
+Please refer to the [Release note](https://github.com/PaddlePaddle/Paddle/wiki/PaddlePaddle-2.3.0-Release-Note-EN) for more details about the updates
+
+## License
+
+PaddleDetection is provided under the [Apache 2.0 license](LICENSE)
+
+## Contribute your code
+
+We appreciate your contributions and your feedback!
+
+- Thank [Mandroide](https://github.com/Mandroide) for cleaning the code and unifying some function interfaces
+- Thank [FL77N](https://github.com/FL77N/) for the `Sparse-RCNN` model
+- Thank [Chen-Song](https://github.com/Chen-Song) for the `Swin Faster-RCNN` model
+- Thank [yangyudong](https://github.com/yangyudong2020), [hchhtc123](https://github.com/hchhtc123) for developing the PP-Tracking GUI interface
+- Thank [Shigure19](https://github.com/Shigure19) for developing the PP-TinyPose fitness APP
+- Thank [manangoel99](https://github.com/manangoel99) for Wandb visualization methods
+
+## Citation

```
@misc{ppdet2019,
diff --git "a/activity/\347\233\264\346\222\255\347\255\224\347\226\221\347\254\254\344\270\200\346\234\237.md" "b/activity/\347\233\264\346\222\255\347\255\224\347\226\221\347\254\254\344\270\200\346\234\237.md"
new file mode 100644
index 0000000000000000000000000000000000000000..f94f0dd09941474558bf9dc6baac0669c8ded9c3
--- /dev/null
+++ "b/activity/\347\233\264\346\222\255\347\255\224\347\226\221\347\254\254\344\270\200\346\234\237.md"
@@ -0,0 +1,125 @@
+# 直播答疑第一期
+
+### 答疑全程回放可以通过链接下载观看:https://pan.baidu.com/s/168ouju4MxN5XJEb-GU1iAw 提取码: 92mw
+
+## PaddleDetection框架/API问题
+
+#### Q1. warmup能详细讲解下吗?
+A1. warmup是在训练初期将学习率从0逐步调整至预设学习率的过程,设置可以参考[源码](https://github.com/PaddlePaddle/PaddleDetection/blob/release/2.4/ppdet/optimizer.py#L156),可以设置step数或epoch数
+
+#### Q2. 如果类别不匹配 也能用pretrain weights吗?
+A2. 可以,类别不匹配时,模型会自动不加载shape不匹配的权重,通常和类别数相关的权重位于head层
+
+#### Q3. 请问nms_eta怎么用呀,源码上没有写的很清楚,API文档也没有细说
+A3. 针对密集的场景,nms_eta会在每轮动态地调整nms阈值,避免过滤掉两个重叠程度很高但是属于不同物体的检测框,具体可以参考[源码](https://github.com/PaddlePaddle/Paddle/blob/develop/paddle/fluid/operators/detection/multiclass_nms_op.cc#L139),默认为1,通常无需设置
+
+#### Q4. 请问anchor_cluster.py中的--size 是模型的input size 还是 实际使用图片的size?
+A4. 是实际推理时的图片尺寸,一般可以参考TestReader中的image_shape的设置。
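+
+下面是一个最小示意(仅为示意,假设实际推理尺寸为 640x640;不同 reader 的 image_shape 写法略有差异,请以所用配置为准):
+
+```yaml
+# 示意:TestReader 中的 image_shape 即实际推理时的输入尺寸,
+# anchor_cluster.py 的 --size 一般与该尺寸保持一致(此处为 640x640)
+TestReader:
+  inputs_def:
+    image_shape: [3, 640, 640]
+```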
+
+#### Q5. 请问为什么预测的坐标会出现负的值?
+A5. 模型算法中是有可能出现负值的情况,首先需要判断模型预测效果是否符合预期,如果正常可以考虑在后处理中增加clip的操作限制输出box在图像中;如果不正常,说明模型训练效果欠佳,需要进一步排查问题或调优
+
+#### Q6. PaddleDetection 人脸检测blazeface模型,一键式预测时load_params没有参数文件,从哪里下载?
+A6. blazeface的模型可以在[模型库](https://github.com/PaddlePaddle/PaddleDetection/tree/release/2.4/configs/face_detection#%E6%A8%A1%E5%9E%8B%E5%BA%93)中下载到,如果想部署需要参考[步骤](https://github.com/PaddlePaddle/PaddleDetection/blob/release/2.4/deploy/EXPORT_MODEL.md) 导出模型
+
+## PP-YOLOE问题
+#### Q1. 训练PP-YOLOE的时候,loss是越训练越高这种情况 是数据集的问题吗?
+A1. 可以从以下几个方面排查
+
+1. 数据: 首先确认数据集没问题,包括标注,类别等
+2. 超参数:base_lr根据batch_size调整,遵守线性原则;warmup_iters根据总的epoch数进行调整
+3. 预训练参数:可以加载官方提供的在COCO数据集上的预训练参数
+4. 网络结构方面:分析下box的分布情况 适当调整dfl的参数
+
+#### Q2. 检测模型选型问题:PicoDet、PP-YOLO系列如何选型
+A2. PicoDet是针对移动端设备设计的模型,面向arm,x86等低算力设备;PP-YOLO是针对服务器端设计的模型,如英伟达N卡,百度昆仑卡等。手机端,无gpu桌面端,优先PicoDet;有高算力设备,如N卡,优先PP-YOLO系列;对延时不敏感、更注重高精度的场景,优先PP-YOLO系列
+
+#### Q3. ConvBNLayer中BN层的参数都不会使用L2Decay;PP-YOLOE-s的其它部分都会按照配置文件的设置使用0.0005的L2Decay。是这样吗
+A3. PP-YOLOE的backbone和neck部分使用了ConvBNLayer,其中BN层不会使用L2Decay,其他部分使用全局设置的0.0005的L2Decay
+
+#### Q4. PP-YOLOE的Conv的bias也不使用decay吗?
+A4. PP-YOLOE的backbone和neck部分的Conv是没有bias参数的,head部分的Conv bias使用全局decay
+
+#### Q5. 在测速时,为什么要用PaddleInference而不是直接加载模型测时间呢
+A5. PaddleInference会对paddle导出的预测模型做前向算子融合,从而实现速度优化,并且实际部署过程也是使用PaddleInference实现
+
+#### Q6. PP-YOLOE系列在部署的时候,前后处理是不是一样的啊?
+A6. PP-YOLO系列模型在部署时的前处理都是 decode-resize-normalize-permute 的流程,后处理方面PP-YOLOv2使用了Matrix NMS,PP-YOLOE使用的是普通的NMS算法
+
+#### Q7. 针对小目标和类别不平衡的数据集,PP-YOLOE有什么调整策略吗
+A7. 针对小目标数据集,可以适当增大ppyoloe的输入尺寸,然后在模型中增加注意力机制,目前基于PP-YOLOE的小目标检测正在开发中;针对类别不平衡问题,可以从数据采样的角度处理,目前PP-YOLOE还没有专门针对类别不平衡问题的优化
+
+## PP-Human问题
+#### Q1. 请问pphuman用导出的模型18个点(不是官方17个点)去预测时,报错是为什么
+A1. 这个问题是关键点模型输出点的数量与行为识别模型不一致导致的。如果希望用18点模型预测,除了关键点用18点模型以外,还需要自建18点的动作识别模型。
+
+#### Q2. 为什么官方导出模型设置的window_size是50
+A2. 导出模型的设置与训练和预测的输入数据长度是一致的;我们主要采用的数据集是ntu、企业提供的实际数据等等。在训练这个模型的时候,我们对这些数据中摔倒的片段做了统计分析,基本上每个动作片段持续的帧数大约是40~80左右。综合考虑到实际使用的延迟以及预测效果,我们选择了50这个量级,在我们的这部分数据上既能完整描述一个动作,又不会使得延迟过大。
+
+总的来说,这个window_size的数值最好还是根据实际动作以及设备的情况进行选择。例如在某种设备上,50帧的长度根本不足以包含一个完整的动作,那么这个数值就需要扩大;又或者某些动作持续时间很短,50帧的长度包含了太多不相关的其他动作,容易造成误识别,那么这个数值可以适当缩小。
+
+
+#### Q3. PP-Human中如何替换检测、跟踪、关键点模型
+A3. 我们使用的模型都是由PaddleDetection中的模型导出得到的。理论上PP-Human所使用的模型都是可以直接替换的,但需要注意替换的模型应与原模型的流程和前后处理保持一致。
+
+#### Q4. PP-Human中的数据标注问题(检测、跟踪、关键点、行为、属性)标注工具推荐和标注步骤
+A4. 标注工具:检测 labelme, labelImg, cvat; 跟踪darklabel,cvat;关键点 labelme,cvat。检测标注可以使用tools/x2coco.py转换成coco格式
+
+#### Q5. PP-Human中如何更改label(属性和动作识别)
+A5. 在PPHuman中,动作识别被定义为基于骨骼点序列的分类问题,目前我们已经开源的摔倒动作识别是一个二分类问题;属性方面我们当前还暂时没有开放训练,正在建设中
+
+#### Q6. PP-Human的哪些功能支持单人、哪些支持多人
+A6. PP-Human的功能实现基于一套流程:检测->跟踪->具体功能。当前我们的具体功能模型每次处理的是单人的,即属性、动作等都是属于图像中每一个具体人的。但是基于这套流程下来,图像中的每一个人都得到了处理。所以单人、多人实际都是一样支持的。
+
+#### Q7. PP-Human对视频流预测的支持及服务化部署
+A7. 目前正在建设当中,下个版本会支持这部分功能
+
+#### Q8. 在使用pphuman训练自己的数据集时,训练完进行测试时,可视化的标签如何更改,没有更改的情况下还是falling
+
+A8. 可视化的函数位于https://github.com/PaddlePaddle/PaddleDetection/blob/release/2.4/deploy/python/visualize.py#L368,这里在可视化的时候将 action_text替换为期望的类别即可。
+
+#### Q9. 关键点检测可以实现一个连贯动作的检测吗,比如健身规范
+A9. 基于关键点是可以实现的。这里可以有不同思路去做:
+
+1. 如果是期望判定动作规范的程度,且这个动作可以很好地描述。那么可以在关键点模型获得的坐标的基础上,人工增加逻辑判断即可。这里我们提供一个安卓的健身APP示例:https://github.com/zhiboniu/pose_demo_android ,其中实现判定各项动作的逻辑可以参考https://github.com/zhiboniu/pose_demo_android/blob/release/1.0/app/src/main/cpp/pose_action.cc 。
+
+2. 当一个动作较难用逻辑去描述的时候,可以参考现有摔倒检测的案例,训练一个识别健身动作的模型,但对收集数据的要求会比较高。
+
+
+#### Q10. 有遮挡的生产环境中梯子,可以用关键点检测判断人员上下梯动作是否合规
+A10. 
这个问题需要视遮挡的程度而定,如果遮挡过于严重时关键点检测模型的效果会大打折扣,从而导致行为的判断失准。此外,由于基于关键点的方案抹去了外观信息,如果只是从人物本身的动作上去做判断,那么在遮挡不严重的场景下是可以的。反之,如果梯子这个物体是判断动作是否合规的必要元素,那么这个方案其实不一定是最佳选择。 + +#### Q11. 关键点做的行为识别并不是时序上的动作识别吗 +A11. 是时序的动作识别。这里是将一定时间范围内的每一帧关键点坐标组成一个时序的关键点序列,再通过行为识别模型去预测这个序列所属的行为类别。 + + +## 检测算法问题 +#### Q1. 大图片小目标 最终推理的图片也是大图片 怎么预处理呀 +A1. 小目标问题常见的处理方式是切图以及增大网络输入尺寸,如果使用基于anchor的检测算法,可以通过对目标物体大小聚类生成anchor,参考[脚本](https://github.com/PaddlePaddle/PaddleDetection/blob/release/2.4/tools/anchor_cluster.py); 目前基于PP-YOLOE的小目标检测正在开发中 + +#### Q2. 想问下大的目标对象怎么检测,比如发票 +A2. 如果使用基于anchor的检测算法,可以通过对目标物体大小聚类生成anchor,参考[脚本](https://github.com/PaddlePaddle/PaddleDetection/blob/release/2.4/tools/anchor_cluster.py);另外可以增强深层特征提升大物体检测效果 + +#### Q3. 在做预测时发现预测框特别多,有的框的置信度甚至低于0.1,请问如果将这种框过滤掉?也就是训练模型时就把这些极低置信度的预测结果过滤掉,避免在推理部署时,做不必要的计算,从而影响推理速度。 +A3. 后处理部分有两个过滤,1)是提取置信度最高的Top 100个框做nms。2)是根据设定阈值threshold进行过滤。如果你可以确认图片上目标相对比较少<10个,可以调整Top 100这个值到50或者更低,这样可以加速nms部分的计算。其次调整threshold这个影响最终检测的准确度和召回率的效果。 + +#### Q4. 正负样本的比例一般怎么设计 +A4. 在PaddleDetection中,支持负样本训练,TrainDataset下设置allow_empty: true即可,通过数据集测试,负样本比例在0.3时对模型提升效果最明显。 + +## 压缩部署问题 +#### Q1. PaddleDetection训练的模型导出inference model后,在做推理部署的时候,前后处理相关代码如何编写,有什么参考教程吗? +A1. 目前PaddleDetection下的网络模型大部分都能够支持c++ inference,不同的处理方式针对不同功能,例如:PP-YOLOE速度测试不包含后处理,PicoDet为支持不同的第三方推理引擎会设置是否导出nms + +object_detector.cc是针对所有检测模型的流程,其中前处理大部分都是decode-resize-nomalize-permute 部分网络会加入padding的操作;大部分模型的后处理操作都放在模型里面了,picodet有单独提供nms的后处理代码 + +检测模型的输入统一为image,im_shape,scale_factor ,如果模型中没有使用im_shape,输出个数会减少,但是整套预处理流程不需要额外开发 + +#### Q2. 针对TensorRT的加速问题,fp16在v100确实可以,但是耗时好像有点偏差,我在1080ti上,单张图片跑1000次,耗时50s,还是float32的,可是在v100上,float16耗时97 +A2. 目前PPYOLOE等模型的速度都有在V100上使用TensorRT FP16测试,关于速度测试有以下几个方面可以排查: + +1. 速度测试时是否正确设置warmup,以避免过长的启动时间影响速度测试准确度 +2. 在开启TensorRT时,生成engine文件的过程耗时较长,可以在https://github.com/PaddlePaddle/PaddleDetection/blob/release/2.4/deploy/python/infer.py#L745 中将use_static设置为True + + +#### Q3. PaddleDetection已经支持了在线量化一些模型,比如想训练其他的一个新模型,是不是可以轻松用起来qat?如果不能,为什么只能支持很有限的模型,而qat其他模型总会出各种各样的问题,原因是什么? +A3. 目前PaddleDetection模型很多,只能针对部分模型开源了QAT的config,其他模型也是支持QAT的,只是配置文件没有覆盖到,如果量化报错,通常是配置问题。检测模型一般建议跳过head最后一个conv。如果想要跳过某些层量化,可以设置skip_quant,参考[代码](https://github.com/PaddlePaddle/PaddleDetection/blob/release/2.4/ppdet/modeling/heads/yolo_head.py#L97) diff --git a/configs/centernet/_base_/centernet_reader.yml b/configs/centernet/_base_/centernet_reader.yml index 1f18dca49d1e39c61b9bbc7b5be8bfac7bce5ca4..81af4ab840502da6e738ac667dd0883041ba8992 100644 --- a/configs/centernet/_base_/centernet_reader.yml +++ b/configs/centernet/_base_/centernet_reader.yml @@ -30,6 +30,6 @@ TestReader: sample_transforms: - Decode: {} - WarpAffine: {keep_res: True, input_h: 512, input_w: 512} - - NormalizeImage: {mean: [0.40789655, 0.44719303, 0.47026116], std: [0.2886383 , 0.27408165, 0.27809834]} + - NormalizeImage: {mean: [0.40789655, 0.44719303, 0.47026116], std: [0.2886383 , 0.27408165, 0.27809834], is_scale: True} - Permute: {} batch_size: 1 diff --git a/configs/convnext/README.md b/configs/convnext/README.md new file mode 100644 index 0000000000000000000000000000000000000000..644d66815660427d2a6cdf587c014d8cb877eb15 --- /dev/null +++ b/configs/convnext/README.md @@ -0,0 +1,20 @@ +# ConvNeXt (A ConvNet for the 2020s) + +## 模型库 +### ConvNeXt on COCO + +| 网络网络 | 输入尺寸 | 图片数/GPU | 学习率策略 | mAPval
    0.5:0.95 | mAPval
    0.5 | Params(M) | FLOPs(G) | 下载链接 | 配置文件 | +| :------------- | :------- | :-------: | :------: | :------------: | :---------------------: | :----------------: |:---------: | :------: |:---------------: | +| PP-YOLOE-ConvNeXt-tiny | 640 | 16 | 36e | 44.6 | 63.3 | 33.04 | 13.87 | [下载链接](https://paddledet.bj.bcebos.com/models/ppyoloe_convnext_tiny_36e_coco.pdparams) | [配置文件](./ppyoloe_convnext_tiny_36e_coco.yml) | +| YOLOX-ConvNeXt-s | 640 | 8 | 36e | 44.6 | 65.3 | 36.20 | 27.52 | [下载链接](https://paddledet.bj.bcebos.com/models/yolox_convnext_s_36e_coco.pdparams) | [配置文件](./yolox_convnext_s_36e_coco.yml) | + + +## Citations +``` +@Article{liu2022convnet, + author = {Zhuang Liu and Hanzi Mao and Chao-Yuan Wu and Christoph Feichtenhofer and Trevor Darrell and Saining Xie}, + title = {A ConvNet for the 2020s}, + journal = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)}, + year = {2022}, +} +``` diff --git a/configs/convnext/ppyoloe_convnext_tiny_36e_coco.yml b/configs/convnext/ppyoloe_convnext_tiny_36e_coco.yml new file mode 100644 index 0000000000000000000000000000000000000000..360a368ec0837033ab408db59aa0d4ea5b7972dd --- /dev/null +++ b/configs/convnext/ppyoloe_convnext_tiny_36e_coco.yml @@ -0,0 +1,55 @@ +_BASE_: [ + '../datasets/coco_detection.yml', + '../runtime.yml', + '../ppyoloe/_base_/ppyoloe_crn.yml', + '../ppyoloe/_base_/ppyoloe_reader.yml', +] +depth_mult: 0.25 +width_mult: 0.50 + +log_iter: 100 +snapshot_epoch: 5 +weights: output/ppyoloe_convnext_tiny_36e_coco/model_final +pretrain_weights: https://bj.bcebos.com/v1/paddledet/models/pretrained/convnext_tiny_22k_224.pdparams + + +YOLOv3: + backbone: ConvNeXt + neck: CustomCSPPAN + yolo_head: PPYOLOEHead + post_process: ~ + +ConvNeXt: + arch: 'tiny' + drop_path_rate: 0.4 + layer_scale_init_value: 1.0 + return_idx: [1, 2, 3] + + +PPYOLOEHead: + static_assigner_epoch: 12 + nms: + nms_top_k: 10000 + keep_top_k: 300 + score_threshold: 0.01 + nms_threshold: 0.7 + + +TrainReader: + batch_size: 16 + + +epoch: 36 +LearningRate: + base_lr: 0.0002 + schedulers: + - !PiecewiseDecay + gamma: 0.1 + milestones: [36] + use_warmup: false + +OptimizerBuilder: + regularizer: false + optimizer: + type: AdamW + weight_decay: 0.0005 diff --git a/configs/convnext/yolox_convnext_s_36e_coco.yml b/configs/convnext/yolox_convnext_s_36e_coco.yml new file mode 100644 index 0000000000000000000000000000000000000000..b41551dee8a2e2793ac09d474c0e7d2a8868299f --- /dev/null +++ b/configs/convnext/yolox_convnext_s_36e_coco.yml @@ -0,0 +1,58 @@ +_BASE_: [ + '../datasets/coco_detection.yml', + '../runtime.yml', + '../yolox/_base_/yolox_cspdarknet.yml', + '../yolox/_base_/yolox_reader.yml' +] +depth_mult: 0.33 +width_mult: 0.50 + +log_iter: 100 +snapshot_epoch: 5 +weights: output/yolox_convnext_s_36e_coco/model_final +pretrain_weights: https://bj.bcebos.com/v1/paddledet/models/pretrained/convnext_tiny_22k_224.pdparams + + +YOLOX: + backbone: ConvNeXt + neck: YOLOCSPPAN + head: YOLOXHead + size_stride: 32 + size_range: [15, 25] # multi-scale range [480*480 ~ 800*800] + +ConvNeXt: + arch: 'tiny' + drop_path_rate: 0.4 + layer_scale_init_value: 1.0 + return_idx: [1, 2, 3] + + +TrainReader: + batch_size: 8 + mosaic_epoch: 30 + + +YOLOXHead: + l1_epoch: 30 + nms: + name: MultiClassNMS + nms_top_k: 10000 + keep_top_k: 1000 + score_threshold: 0.001 + nms_threshold: 0.65 + + +epoch: 36 +LearningRate: + base_lr: 0.0002 + schedulers: + - !PiecewiseDecay + gamma: 0.1 + milestones: [36] + use_warmup: false + +OptimizerBuilder: + regularizer: 
false + optimizer: + type: AdamW + weight_decay: 0.0005 diff --git a/configs/datasets/coco_detection.yml b/configs/datasets/coco_detection.yml index 7a62c3b0b57a5d76c8ed519d3a3940c1b4532c15..291c24874b72bbb92fb2510e754c791a3f06c146 100644 --- a/configs/datasets/coco_detection.yml +++ b/configs/datasets/coco_detection.yml @@ -16,4 +16,5 @@ EvalDataset: TestDataset: !ImageFolder - anno_path: annotations/instances_val2017.json + anno_path: annotations/instances_val2017.json # also support txt (like VOC's label_list.txt) + dataset_dir: dataset/coco # if set, anno_path will be 'dataset_dir/anno_path' diff --git a/configs/datasets/coco_instance.yml b/configs/datasets/coco_instance.yml index 5eaf76791a94bfd2819ba6dab610fae54b69f26e..b04dbdca955a326ffc5eb13756e73ced83b92309 100644 --- a/configs/datasets/coco_instance.yml +++ b/configs/datasets/coco_instance.yml @@ -16,4 +16,5 @@ EvalDataset: TestDataset: !ImageFolder - anno_path: annotations/instances_val2017.json + anno_path: annotations/instances_val2017.json # also support txt (like VOC's label_list.txt) + dataset_dir: dataset/coco # if set, anno_path will be 'dataset_dir/anno_path' diff --git a/configs/datasets/dota.yml b/configs/datasets/dota.yml index f9d9395b00d7ed9028396044c407784d251e43e5..5153163d95a8a418a82d3d6d43f6e1f9404ed075 100644 --- a/configs/datasets/dota.yml +++ b/configs/datasets/dota.yml @@ -17,3 +17,4 @@ EvalDataset: TestDataset: !ImageFolder anno_path: trainval_split/s2anet_trainval_paddle_coco.json + dataset_dir: dataset/DOTA_1024_s2anet/ diff --git a/configs/datasets/roadsign_voc.yml b/configs/datasets/roadsign_voc.yml index ddbfc7889e0027d85971c6ab11f3f33adfe8be71..9a081611aa8dafef5d5c6f1af1476cc038db5702 100644 --- a/configs/datasets/roadsign_voc.yml +++ b/configs/datasets/roadsign_voc.yml @@ -3,19 +3,19 @@ map_type: integral num_classes: 4 TrainDataset: - !VOCDataSet - dataset_dir: dataset/roadsign_voc - anno_path: train.txt - label_list: label_list.txt - data_fields: ['image', 'gt_bbox', 'gt_class', 'difficult'] + name: VOCDataSet + dataset_dir: dataset/roadsign_voc + anno_path: train.txt + label_list: label_list.txt + data_fields: ['image', 'gt_bbox', 'gt_class', 'difficult'] EvalDataset: - !VOCDataSet - dataset_dir: dataset/roadsign_voc - anno_path: valid.txt - label_list: label_list.txt - data_fields: ['image', 'gt_bbox', 'gt_class', 'difficult'] + name: VOCDataSet + dataset_dir: dataset/roadsign_voc + anno_path: valid.txt + label_list: label_list.txt + data_fields: ['image', 'gt_bbox', 'gt_class', 'difficult'] TestDataset: - !ImageFolder - anno_path: dataset/roadsign_voc/label_list.txt + name: ImageFolder + anno_path: dataset/roadsign_voc/label_list.txt diff --git a/configs/datasets/visdrone_detection.yml b/configs/datasets/visdrone_detection.yml new file mode 100644 index 0000000000000000000000000000000000000000..37feb6e2618ff9d83ce2842a9e581dcfd31efc78 --- /dev/null +++ b/configs/datasets/visdrone_detection.yml @@ -0,0 +1,22 @@ +metric: COCO +num_classes: 10 + +TrainDataset: + !COCODataSet + image_dir: VisDrone2019-DET-train + anno_path: train.json + dataset_dir: dataset/visdrone + data_fields: ['image', 'gt_bbox', 'gt_class', 'is_crowd'] + +EvalDataset: + !COCODataSet + image_dir: VisDrone2019-DET-val + anno_path: val.json + # image_dir: test_dev + # anno_path: test_dev.json + dataset_dir: dataset/visdrone + +TestDataset: + !ImageFolder + anno_path: val.json + dataset_dir: dataset/visdrone diff --git a/configs/dota/README.md b/configs/dota/README.md index 
9a5988a761810600b02cb4e9f1348c6072e02cac..adde27691ef78606d2528b95dee8e30842bfff64 100644 --- a/configs/dota/README.md +++ b/configs/dota/README.md @@ -53,7 +53,7 @@ DOTA数据集中总共有2806张图像,其中1411张图像作为训练集,45 - PaddlePaddle >= 2.1.1 - GCC == 8.2 -推荐使用docker镜像[paddle:2.1.1-gpu-cuda10.1-cudnn7](registry.baidubce.com/paddlepaddle/paddle:2.1.1-gpu-cuda10.1-cudnn7)。 +推荐使用docker镜像 paddle:2.1.1-gpu-cuda10.1-cudnn7。 执行如下命令下载镜像并启动容器: ``` diff --git a/configs/dota/README_en.md b/configs/dota/README_en.md index e299e0e81808888e947d0e0b1e1423bb5f7fdbea..61eeee7f5c53b7ec4e01c2a68c75f98f9a09bd14 100644 --- a/configs/dota/README_en.md +++ b/configs/dota/README_en.md @@ -64,7 +64,7 @@ To use the rotating frame IOU to calculate the OP, the following conditions must - PaddlePaddle >= 2.1.1 - GCC == 8.2 -Docker images are recommended[paddle:2.1.1-gpu-cuda10.1-cudnn7](registry.baidubce.com/paddlepaddle/paddle:2.1.1-gpu-cuda10.1-cudnn7)。 +Docker images are recommended paddle:2.1.1-gpu-cuda10.1-cudnn7。 Run the following command to download the image and start the container: ``` diff --git a/configs/faster_rcnn/_base_/faster_rcnn_swin_reader.yml b/configs/faster_rcnn/_base_/faster_rcnn_swin_reader.yml index 396462a2ff7cf68f7d91a1bb1bb87b8e5a040486..1af6175a931a571f1c6726f0f312591c07489d1d 100644 --- a/configs/faster_rcnn/_base_/faster_rcnn_swin_reader.yml +++ b/configs/faster_rcnn/_base_/faster_rcnn_swin_reader.yml @@ -30,14 +30,12 @@ EvalReader: TestReader: inputs_def: - image_shape: [1, 3, 640, 640] + image_shape: [-1, 3, 640, 640] sample_transforms: - Decode: {} - - Resize: {interp: 2, target_size: [640, 640], keep_ratio: True} + - LetterBoxResize: {target_size: 640} - NormalizeImage: {is_scale: true, mean: [0.485,0.456,0.406], std: [0.229, 0.224,0.225]} - Permute: {} - batch_transforms: - - PadBatch: {pad_to_stride: 32} batch_size: 1 shuffle: false drop_last: false diff --git a/configs/fcos/_base_/fcos_r50_fpn.yml b/configs/fcos/_base_/fcos_r50_fpn.yml index 64a275d88023030b2299b0c3932b1c3fc9ce1e34..cd22c229a1192ea384037848be1b7e6edc43741a 100644 --- a/configs/fcos/_base_/fcos_r50_fpn.yml +++ b/configs/fcos/_base_/fcos_r50_fpn.yml @@ -30,7 +30,6 @@ FCOSHead: num_convs: 4 norm_type: "gn" use_dcn: false - num_classes: 80 fpn_stride: [8, 16, 32, 64, 128] prior_prob: 0.01 fcos_loss: FCOSLoss @@ -46,7 +45,6 @@ FCOSLoss: FCOSPostProcess: decode: name: FCOSBox - num_classes: 80 nms: name: MultiClassNMS nms_top_k: 1000 diff --git a/configs/keypoint/README.md b/configs/keypoint/README.md index e750312a0f0c17197ca74032d00d97978298549d..4ca9b07474caca53ca529fc19bd5b239acb742e4 100644 --- a/configs/keypoint/README.md +++ b/configs/keypoint/README.md @@ -1,66 +1,110 @@ 简体中文 | [English](README_en.md) -# KeyPoint模型系列 +# 关键点检测系列模型 +
    + +
    +## 目录 + +- [简介](#简介) +- [模型推荐](#模型推荐) +- [模型库](#模型库) +- [快速开始](#快速开始) + - [环境安装](#1环境安装) + - [数据准备](#2数据准备) + - [训练与测试](#3训练与测试) + - [单卡训练](#单卡训练) + - [多卡训练](#多卡训练) + - [模型评估](#模型评估) + - [模型预测](#模型预测) + - [模型部署](#模型部署) + - [Top-Down模型联合部署](#top-down模型联合部署) + - [Bottom-Up模型独立部署](#bottom-up模型独立部署) + - [与多目标跟踪联合部署](#与多目标跟踪模型fairmot联合部署) + - [完整部署教程及Demo](#完整部署教程及Demo) +- [自定义数据训练](#自定义数据训练) +- [BenchMark](#benchmark) ## 简介 -- PaddleDetection KeyPoint部分紧跟业内最新最优算法方案,包含Top-Down、BottomUp两套方案,以满足用户的不同需求。 +PaddleDetection 中的关键点检测部分紧跟最先进的算法,包括 Top-Down 和 Bottom-Up 两种方法,可以满足用户的不同需求。Top-Down 先检测对象,再检测特定关键点。Top-Down 模型的准确率会更高,但速度会随着对象数量的增加而变慢。不同的是,Bottom-Up 首先检测点,然后对这些点进行分组或连接以形成多个人体姿势实例。Bottom-Up 的速度是固定的,不会随着物体数量的增加而变慢,但精度会更低。 -
    - -
    +同时,PaddleDetection 提供针对移动端设备优化的自研实时关键点检测模型 [PP-TinyPose](./tiny_pose/README.md)。 +## 模型推荐 +### 移动端模型推荐 -#### Model Zoo -COCO数据集 -| 模型 | 输入尺寸 | AP(coco val) | 模型下载 | 配置文件 | -| :---------------- | -------- | :----------: | :----------------------------------------------------------: | ----------------------------------------------------------- | -| HigherHRNet-w32 | 512 | 67.1 | [higherhrnet_hrnet_w32_512.pdparams](https://paddledet.bj.bcebos.com/models/keypoint/higherhrnet_hrnet_w32_512.pdparams) | [config](./higherhrnet/higherhrnet_hrnet_w32_512.yml) | -| HigherHRNet-w32 | 640 | 68.3 | [higherhrnet_hrnet_w32_640.pdparams](https://paddledet.bj.bcebos.com/models/keypoint/higherhrnet_hrnet_w32_640.pdparams) | [config](./higherhrnet/higherhrnet_hrnet_w32_640.yml) | -| HigherHRNet-w32+SWAHR | 512 | 68.9 | [higherhrnet_hrnet_w32_512_swahr.pdparams](https://paddledet.bj.bcebos.com/models/keypoint/higherhrnet_hrnet_w32_512_swahr.pdparams) | [config](./higherhrnet/higherhrnet_hrnet_w32_512_swahr.yml) | -| HRNet-w32 | 256x192 | 76.9 | [hrnet_w32_256x192.pdparams](https://paddledet.bj.bcebos.com/models/keypoint/hrnet_w32_256x192.pdparams) | [config](./hrnet/hrnet_w32_256x192.yml) | -| HRNet-w32 | 384x288 | 77.8 | [hrnet_w32_384x288.pdparams](https://paddledet.bj.bcebos.com/models/keypoint/hrnet_w32_384x288.pdparams) | [config](./hrnet/hrnet_w32_384x288.yml) | -| HRNet-w32+DarkPose | 256x192 | 78.0 | [dark_hrnet_w32_256x192.pdparams](https://paddledet.bj.bcebos.com/models/keypoint/dark_hrnet_w32_256x192.pdparams) | [config](./hrnet/dark_hrnet_w32_256x192.yml) | -| HRNet-w32+DarkPose | 384x288 | 78.3 | [dark_hrnet_w32_384x288.pdparams](https://paddledet.bj.bcebos.com/models/keypoint/dark_hrnet_w32_384x288.pdparams) | [config](./hrnet/dark_hrnet_w32_384x288.yml) | -| WiderNaiveHRNet-18 | 256x192 | 67.6(+DARK 68.4) | [wider_naive_hrnet_18_256x192_coco.pdparams](https://bj.bcebos.com/v1/paddledet/models/keypoint/wider_naive_hrnet_18_256x192_coco.pdparams) | [config](./lite_hrnet/wider_naive_hrnet_18_256x192_coco.yml) | -| LiteHRNet-18 | 256x192 | 66.5 | [lite_hrnet_18_256x192_coco.pdparams](https://bj.bcebos.com/v1/paddledet/models/keypoint/lite_hrnet_18_256x192_coco.pdparams) | [config](./lite_hrnet/lite_hrnet_18_256x192_coco.yml) | -| LiteHRNet-18 | 384x288 | 69.7 | [lite_hrnet_18_384x288_coco.pdparams](https://bj.bcebos.com/v1/paddledet/models/keypoint/lite_hrnet_18_384x288_coco.pdparams) | [config](./lite_hrnet/lite_hrnet_18_384x288_coco.yml) | -| LiteHRNet-30 | 256x192 | 69.4 | [lite_hrnet_30_256x192_coco.pdparams](https://bj.bcebos.com/v1/paddledet/models/keypoint/lite_hrnet_30_256x192_coco.pdparams) | [config](./lite_hrnet/lite_hrnet_30_256x192_coco.yml) | -| LiteHRNet-30 | 384x288 | 72.5 | [lite_hrnet_30_384x288_coco.pdparams](https://bj.bcebos.com/v1/paddledet/models/keypoint/lite_hrnet_30_384x288_coco.pdparams) | [config](./lite_hrnet/lite_hrnet_30_384x288_coco.yml) | +| 检测模型 | 关键点模型 | 输入尺寸 | COCO数据集精度 | 平均推理耗时 (FP16) | 参数量 (M) | Flops (G) | 模型权重 | Paddle-Lite部署模型(FP16) | +| :----------------------------------------------------------- | :------------------------------------ | :------------------------------: | :-----------------------------: | :------------------------------------: | --------------------------- | :-------------------------: | :----------------------------------------------------------: | :----------------------------------------------------------: | +| [PicoDet-S-Pedestrian](../picodet/legacy_model/application/pedestrian_detection/picodet_s_192_pedestrian.yml) | 
[PP-TinyPose](./tiny_pose/tinypose_128x96.yml) | 检测:192x192
    关键点:128x96 | 检测mAP:29.0
    关键点AP:58.1 | 检测耗时:2.37ms
    关键点耗时:3.27ms | 检测:1.18
    关键点:1.36 | 检测:0.35
    关键点:0.08 | [检测](https://bj.bcebos.com/v1/paddledet/models/keypoint/picodet_s_192_pedestrian.pdparams)
    [关键点](https://bj.bcebos.com/v1/paddledet/models/keypoint/tinypose_128x96.pdparams) | [检测](https://bj.bcebos.com/v1/paddledet/models/keypoint/picodet_s_192_pedestrian_fp16.nb)
    [关键点](https://bj.bcebos.com/v1/paddledet/models/keypoint/tinypose_128x96_fp16.nb) | +| [PicoDet-S-Pedestrian](../picodet/legacy_model/application/pedestrian_detection/picodet_s_320_pedestrian.yml) | [PP-TinyPose](./tiny_pose/tinypose_256x192.yml) | 检测:320x320
    关键点:256x192 | 检测mAP:38.5
    关键点AP:68.8 | 检测耗时:6.30ms
    关键点耗时:8.33ms | 检测:1.18
    关键点:1.36 | 检测:0.97
    关键点:0.32 | [检测](https://bj.bcebos.com/v1/paddledet/models/keypoint/picodet_s_320_pedestrian.pdparams)
    [关键点](https://bj.bcebos.com/v1/paddledet/models/keypoint/tinypose_256x192.pdparams) | [检测](https://bj.bcebos.com/v1/paddledet/models/keypoint/picodet_s_320_pedestrian_fp16.nb)
    [关键点](https://bj.bcebos.com/v1/paddledet/models/keypoint/tinypose_256x192_fp16.nb) | + + +*详细关于PP-TinyPose的使用请参考[文档](./tiny_pose/README.md)。 + +### 服务端模型推荐 + +| 检测模型 | 关键点模型 | 输入尺寸 | COCO数据集精度 | 参数量 (M) | Flops (G) | 模型权重 | +| :----------------------------------------------------------- | :----------------------------------------- | :------------------------------: | :-----------------------------: | :----------------------: | :----------------------: | :----------------------------------------------------------: | +| [PP-YOLOv2](https://github.com/PaddlePaddle/PaddleDetection/tree/release/2.3/configs/ppyolo/ppyolov2_r50vd_dcn_365e_coco.yml) | [HRNet-w32](./hrnet/hrnet_w32_384x288.yml) | 检测:640x640
    关键点:384x288 | 检测mAP:49.5
    关键点AP:77.8 | 检测:54.6
    关键点:28.6 | 检测:115.8
    关键点:17.3 | [检测](https://paddledet.bj.bcebos.com/models/ppyolov2_r50vd_dcn_365e_coco.pdparams)
    [关键点](https://paddledet.bj.bcebos.com/models/keypoint/hrnet_w32_256x192.pdparams) | +| [PP-YOLOv2](https://github.com/PaddlePaddle/PaddleDetection/tree/release/2.3/configs/ppyolo/ppyolov2_r50vd_dcn_365e_coco.yml) | [HRNet-w32](./hrnet/hrnet_w32_256x192.yml) | 检测:640x640
    关键点:256x192 | 检测mAP:49.5
    关键点AP:76.9 | 检测:54.6
    关键点:28.6 | 检测:115.8
    关键点:7.68 | [检测](https://paddledet.bj.bcebos.com/models/ppyolov2_r50vd_dcn_365e_coco.pdparams)
    [关键点](https://paddledet.bj.bcebos.com/models/keypoint/hrnet_w32_384x288.pdparams) | + + +## 模型库 +COCO数据集 +| 模型 | 方案 |输入尺寸 | AP(coco val) | 模型下载 | 配置文件 | +| :---------------- | -------- | :----------: | :----------------------------------------------------------: | ----------------------------------------------------| ------- | +| HigherHRNet-w32 |Bottom-Up| 512 | 67.1 | [higherhrnet_hrnet_w32_512.pdparams](https://paddledet.bj.bcebos.com/models/keypoint/higherhrnet_hrnet_w32_512.pdparams) | [config](./higherhrnet/higherhrnet_hrnet_w32_512.yml) | +| HigherHRNet-w32 | Bottom-Up| 640 | 68.3 | [higherhrnet_hrnet_w32_640.pdparams](https://paddledet.bj.bcebos.com/models/keypoint/higherhrnet_hrnet_w32_640.pdparams) | [config](./higherhrnet/higherhrnet_hrnet_w32_640.yml) | +| HigherHRNet-w32+SWAHR |Bottom-Up| 512 | 68.9 | [higherhrnet_hrnet_w32_512_swahr.pdparams](https://paddledet.bj.bcebos.com/models/keypoint/higherhrnet_hrnet_w32_512_swahr.pdparams) | [config](./higherhrnet/higherhrnet_hrnet_w32_512_swahr.yml) | +| HRNet-w32 | Top-Down| 256x192 | 76.9 | [hrnet_w32_256x192.pdparams](https://paddledet.bj.bcebos.com/models/keypoint/hrnet_w32_256x192.pdparams) | [config](./hrnet/hrnet_w32_256x192.yml) | +| HRNet-w32 |Top-Down| 384x288 | 77.8 | [hrnet_w32_384x288.pdparams](https://paddledet.bj.bcebos.com/models/keypoint/hrnet_w32_384x288.pdparams) | [config](./hrnet/hrnet_w32_384x288.yml) | +| HRNet-w32+DarkPose |Top-Down| 256x192 | 78.0 | [dark_hrnet_w32_256x192.pdparams](https://paddledet.bj.bcebos.com/models/keypoint/dark_hrnet_w32_256x192.pdparams) | [config](./hrnet/dark_hrnet_w32_256x192.yml) | +| HRNet-w32+DarkPose |Top-Down| 384x288 | 78.3 | [dark_hrnet_w32_384x288.pdparams](https://paddledet.bj.bcebos.com/models/keypoint/dark_hrnet_w32_384x288.pdparams) | [config](./hrnet/dark_hrnet_w32_384x288.yml) | +| WiderNaiveHRNet-18 | Top-Down|256x192 | 67.6(+DARK 68.4) | [wider_naive_hrnet_18_256x192_coco.pdparams](https://bj.bcebos.com/v1/paddledet/models/keypoint/wider_naive_hrnet_18_256x192_coco.pdparams) | [config](./lite_hrnet/wider_naive_hrnet_18_256x192_coco.yml) | +| LiteHRNet-18 |Top-Down| 256x192 | 66.5 | [lite_hrnet_18_256x192_coco.pdparams](https://bj.bcebos.com/v1/paddledet/models/keypoint/lite_hrnet_18_256x192_coco.pdparams) | [config](./lite_hrnet/lite_hrnet_18_256x192_coco.yml) | +| LiteHRNet-18 |Top-Down| 384x288 | 69.7 | [lite_hrnet_18_384x288_coco.pdparams](https://bj.bcebos.com/v1/paddledet/models/keypoint/lite_hrnet_18_384x288_coco.pdparams) | [config](./lite_hrnet/lite_hrnet_18_384x288_coco.yml) | +| LiteHRNet-30 | Top-Down|256x192 | 69.4 | [lite_hrnet_30_256x192_coco.pdparams](https://bj.bcebos.com/v1/paddledet/models/keypoint/lite_hrnet_30_256x192_coco.pdparams) | [config](./lite_hrnet/lite_hrnet_30_256x192_coco.yml) | +| LiteHRNet-30 |Top-Down| 384x288 | 72.5 | [lite_hrnet_30_384x288_coco.pdparams](https://bj.bcebos.com/v1/paddledet/models/keypoint/lite_hrnet_30_384x288_coco.pdparams) | [config](./lite_hrnet/lite_hrnet_30_384x288_coco.yml) | 备注: Top-Down模型测试AP结果基于GroundTruth标注框 MPII数据集 -| 模型 | 输入尺寸 | PCKh(Mean) | PCKh(Mean@0.1) | 模型下载 | 配置文件 | -| :---- | -------- | :--------: | :------------: | :----------------------------------------------------------: | -------------------------------------------- | -| HRNet-w32 | 256x256 | 90.6 | 38.5 | [hrnet_w32_256x256_mpii.pdparams](https://paddledet.bj.bcebos.com/models/keypoint/hrnet_w32_256x256_mpii.pdparams) | [config](./hrnet/hrnet_w32_256x256_mpii.yml) | +| 模型 | 方案| 输入尺寸 | PCKh(Mean) | PCKh(Mean@0.1) | 模型下载 | 配置文件 | +| 
:---- | ---|----- | :--------: | :------------: | :----------------------------------------------------------: | -------------------------------------------- | +| HRNet-w32 | Top-Down|256x256 | 90.6 | 38.5 | [hrnet_w32_256x256_mpii.pdparams](https://paddledet.bj.bcebos.com/models/keypoint/hrnet_w32_256x256_mpii.pdparams) | [config](./hrnet/hrnet_w32_256x256_mpii.yml) | + +场景模型 +| 模型 | 方案 | 输入尺寸 | 精度 | 预测速度 |模型权重 | 部署模型 | 说明| +| :---- | ---|----- | :--------: | :--------: | :------------: |:------------: |:-------------------: | +| HRNet-w32 + DarkPose | Top-Down|256x192 | AP: 87.1 (业务数据集)| 单人2.9ms |[下载链接](https://bj.bcebos.com/v1/paddledet/models/pipeline/dark_hrnet_w32_256x192.pdparams) |[下载链接](https://bj.bcebos.com/v1/paddledet/models/pipeline/dark_hrnet_w32_256x192.zip) | 针对摔倒场景特别优化,该模型应用于[PP-Human](../../deploy/pipeline/README.md) | + + +我们同时推出了基于LiteHRNet(Top-Down)针对移动端设备优化的实时关键点检测模型[PP-TinyPose](./tiny_pose/README.md), 欢迎体验。 -我们同时推出了针对移动端设备优化的实时关键点检测模型[PP-TinyPose](./tiny_pose/README.md), 欢迎体验。 ## 快速开始 ### 1、环境安装 -​ 请参考PaddleDetection [安装文档](https://github.com/PaddlePaddle/PaddleDetection/blob/develop/docs/tutorials/INSTALL_cn.md)正确安装PaddlePaddle和PaddleDetection即可。 +​ 请参考PaddleDetection [安装文档](../../docs/tutorials/INSTALL_cn.md)正确安装PaddlePaddle和PaddleDetection即可。 ### 2、数据准备 -​ 目前KeyPoint模型支持[COCO](https://cocodataset.org/#keypoints-2017)数据集和[MPII](http://human-pose.mpi-inf.mpg.de/#overview)数据集,数据集的准备方式请参考[关键点数据准备](../../docs/tutorials/PrepareKeypointDataSet_cn.md)。 +​ 目前KeyPoint模型支持[COCO](https://cocodataset.org/#keypoints-2017)数据集和[MPII](http://human-pose.mpi-inf.mpg.de/#overview)数据集,数据集的准备方式请参考[关键点数据准备](../../docs/tutorials/data/PrepareKeypointDataSet.md)。 ​ 关于config配置文件内容说明请参考[关键点配置文件说明](../../docs/tutorials/KeyPointConfigGuide_cn.md)。 - - - 请注意,Top-Down方案使用检测框测试时,需要通过检测模型生成bbox.json文件。COCO val2017的检测结果可以参考[Detector having human AP of 56.4 on COCO val2017 dataset](https://paddledet.bj.bcebos.com/data/bbox.json),下载后放在根目录(PaddleDetection)下,然后修改config配置文件中`use_gt_bbox: False`后生效。然后正常执行测试命令即可。 - +- 请注意,Top-Down方案使用检测框测试时,需要通过检测模型生成bbox.json文件。COCO val2017的检测结果可以参考[Detector having human AP of 56.4 on COCO val2017 dataset](https://paddledet.bj.bcebos.com/data/bbox.json),下载后放在根目录(PaddleDetection)下,然后修改config配置文件中`use_gt_bbox: False`后生效。然后正常执行测试命令即可。 ### 3、训练与测试 -​ **单卡训练:** +#### 单卡训练 ```shell #COCO DataSet @@ -70,7 +114,7 @@ CUDA_VISIBLE_DEVICES=0 python3 tools/train.py -c configs/keypoint/higherhrnet/hi CUDA_VISIBLE_DEVICES=0 python3 tools/train.py -c configs/keypoint/hrnet/hrnet_w32_256x256_mpii.yml ``` -​ **多卡训练:** +#### 多卡训练 ```shell #COCO DataSet @@ -80,7 +124,7 @@ CUDA_VISIBLE_DEVICES=0,1,2,3 python3 -m paddle.distributed.launch tools/train.py CUDA_VISIBLE_DEVICES=0,1,2,3 python3 -m paddle.distributed.launch tools/train.py -c configs/keypoint/hrnet/hrnet_w32_256x256_mpii.yml ``` -​ **模型评估:** +#### 模型评估 ```shell #COCO DataSet @@ -93,7 +137,7 @@ CUDA_VISIBLE_DEVICES=0 python3 tools/eval.py -c configs/keypoint/hrnet/hrnet_w32 CUDA_VISIBLE_DEVICES=0 python3 tools/eval.py -c configs/keypoint/higherhrnet/higherhrnet_hrnet_w32_512.yml --save_prediction_only ``` -​ **模型预测:** +#### 模型预测 ​ 注意:top-down模型只支持单人截图预测,如需使用多人图,请使用[联合部署推理]方式。或者使用bottom-up模型。 @@ -101,22 +145,32 @@ CUDA_VISIBLE_DEVICES=0 python3 tools/eval.py -c configs/keypoint/higherhrnet/hig CUDA_VISIBLE_DEVICES=0 python3 tools/infer.py -c configs/keypoint/higherhrnet/higherhrnet_hrnet_w32_512.yml -o weights=./output/higherhrnet_hrnet_w32_512/model_final.pdparams --infer_dir=../images/ --draw_threshold=0.5 --save_txt=True ``` -​ 
**部署预测:** +#### 模型部署 + +##### Top-Down模型联合部署 + +```shell +#导出检测模型 +python tools/export_model.py -c configs/ppyolo/ppyolov2_r50vd_dcn_365e_coco.yml -o weights=https://paddledet.bj.bcebos.com/models/ppyolov2_r50vd_dcn_365e_coco.pdparams + +#导出关键点模型 +python tools/export_model.py -c configs/keypoint/hrnet/hrnet_w32_256x192.yml -o weights=https://paddledet.bj.bcebos.com/models/keypoint/hrnet_w32_256x192.pdparams + +#detector 检测 + keypoint top-down模型联合部署(联合推理只支持top-down方式) +python deploy/python/det_keypoint_unite_infer.py --det_model_dir=output_inference/ppyolo_r50vd_dcn_2x_coco/ --keypoint_model_dir=output_inference/hrnet_w32_384x288/ --video_file=../video/xxx.mp4 --device=gpu +``` + +##### Bottom-Up模型独立部署 ```shell #导出模型 python tools/export_model.py -c configs/keypoint/higherhrnet/higherhrnet_hrnet_w32_512.yml -o weights=output/higherhrnet_hrnet_w32_512/model_final.pdparams #部署推理 -#keypoint top-down/bottom-up 单独推理,该模式下top-down模型只支持单人截图预测。 python deploy/python/keypoint_infer.py --model_dir=output_inference/higherhrnet_hrnet_w32_512/ --image_file=./demo/000000014439_640x640.jpg --device=gpu --threshold=0.5 -python deploy/python/keypoint_infer.py --model_dir=output_inference/hrnet_w32_384x288/ --image_file=./demo/hrnet_demo.jpg --device=gpu --threshold=0.5 - -#detector 检测 + keypoint top-down模型联合部署(联合推理只支持top-down方式) -python deploy/python/det_keypoint_unite_infer.py --det_model_dir=output_inference/ppyolo_r50vd_dcn_2x_coco/ --keypoint_model_dir=output_inference/hrnet_w32_384x288/ --video_file=../video/xxx.mp4 --device=gpu ``` -​ **与多目标跟踪模型FairMOT联合部署预测:** +##### 与多目标跟踪模型FairMOT联合部署 ```shell #导出FairMOT跟踪模型 @@ -125,18 +179,73 @@ python tools/export_model.py -c configs/mot/fairmot/fairmot_dla34_30e_1088x608.y #用导出的跟踪和关键点模型Python联合预测 python deploy/python/mot_keypoint_unite_infer.py --mot_model_dir=output_inference/fairmot_dla34_30e_1088x608/ --keypoint_model_dir=output_inference/higherhrnet_hrnet_w32_512/ --video_file={your video name}.mp4 --device=GPU ``` + **注意:** - 跟踪模型导出教程请参考`configs/mot/README.md`。 + 跟踪模型导出教程请参考[文档](../mot/README.md)。 + +### 完整部署教程及Demo + + +​ 我们提供了PaddleInference(服务器端)、PaddleLite(移动端)、第三方部署(MNN、OpenVino)支持。无需依赖训练代码,deploy文件夹下相应文件夹提供独立完整部署代码。 详见 [部署文档](https://github.com/PaddlePaddle/PaddleDetection/blob/develop/deploy/README.md)介绍。 + +## 自定义数据训练 + +我们以[tinypose_256x192](./tiny_pose/README.md)为例来说明对于自定义数据如何修改: + +#### 1、配置文件[tinypose_256x192.yml](../../configs/keypoint/tiny_pose/tinypose_256x192.yml) + +基本的修改内容及其含义如下: + +``` +num_joints: &num_joints 17 #自定义数据的关键点数量 +train_height: &train_height 256 #训练图片尺寸-高度h +train_width: &train_width 192 #训练图片尺寸-宽度w +hmsize: &hmsize [48, 64] #对应训练尺寸的输出尺寸,这里是输入[w,h]的1/4 +flip_perm: &flip_perm [[1, 2], [3, 4], [5, 6], [7, 8], [9, 10], [11, 12], [13, 14], [15, 16]] #关键点定义中左右对称的关键点,用于flip增强。若没有对称结构在 TrainReader 的 RandomFlipHalfBodyTransform 一栏中 flip_pairs 后面加一行 "flip: False"(注意缩紧对齐) +num_joints_half_body: 8 #半身关键点数量,用于半身增强 +prob_half_body: 0.3 #半身增强实现概率,若不需要则修改为0 +upper_body_ids: [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10] #上半身对应关键点id,用于半身增强中获取上半身对应的关键点。 +``` + +上述是自定义数据时所需要的修改部分,完整的配置及含义说明可参考文件:[关键点配置文件说明](../../docs/tutorials/KeyPointConfigGuide_cn.md)。 + +#### 2、其他代码修改(影响测试、可视化) +- keypoint_utils.py中的sigmas = np.array([.26, .25, .25, .35, .35, .79, .79, .72, .72, .62, .62, 1.07, 1.07,.87, .87, .89, .89]) / 10.0,表示每个关键点的确定范围方差,根据实际关键点可信区域设置,区域精确的一般0.25-0.5,例如眼睛。区域范围大的一般0.5-1.0,例如肩膀。若不确定建议0.75。 +- visualizer.py中的draw_pose函数中的EDGES,表示可视化时关键点之间的连接线关系。 +- pycocotools工具中的sigmas,同第一个keypoint_utils.py中的设置。用于coco指标评估时计算。 + +#### 3、数据准备注意 +- 
训练数据请按coco数据格式处理。需要包括关键点[Nx3]、检测框[N]标注。 +- 请注意area>0,area=0时数据在训练时会被过滤掉。此外,由于COCO的评估机制,area较小的数据在评估时也会被过滤掉,我们建议在自定义数据时取`area = bbox_w * bbox_h`。 + +如有遗漏,欢迎反馈 -### 4、模型单独部署 -​ 我们提供了PaddleInference(服务器端)、PaddleLite(移动端)、第三方部署(MNN、OpenVino)支持。无需依赖训练代码,deploy文件夹下相应文件夹提供独立完整部署代码。 -详见 [部署文档](../../deploy/README.md)介绍。 +## 关键点稳定策略(仅适用于视频数据) +使用关键点算法处理视频数据时,由于预测针对单帧图像进行,在视频结果上往往会有抖动的现象。在一些依靠精细化坐标的应用场景(例如健身计数、基于关键点的虚拟渲染等)上容易造成误检或体验不佳的问题。针对这个问题,在PaddleDetection关键点视频推理中加入了[OneEuro滤波器](http://www.lifl.fr/~casiez/publications/CHI2012-casiez.pdf)和EMA两种关键点稳定方式。实现将当前关键点坐标结果和历史关键点坐标结果结合计算,使得输出的点坐标更加稳定平滑。该功能同时支持在Python及C++推理中一键开启使用。 -## Benchmark -我们给出了不同运行环境下的测试结果,供您在选用模型时参考。详细数据请见[Keypoint Inference Benchmark](./KeypointBenchmark.md)。 +```bash +# 使用Python推理 +python deploy/python/det_keypoint_unite_infer.py \ + --det_model_dir output_inference/picodet_s_320 \ + --keypoint_model_dir output_inference/tinypose_256x192 \ + --video_file test_video.mp4 --device gpu --smooth True + +# 使用CPP推理 +./deploy/cpp/build/main --det_model_dir output_inference/picodet_s_320 \ + --keypoint_model_dir output_inference/tinypose_256x192 \ + --video_file test_video.mp4 --device gpu --smooth True +``` +效果如下: + +![](https://user-images.githubusercontent.com/15810355/181733125-3710bacc-2080-47e4-b397-3621a2f0caae.gif) + +## BenchMark + +我们给出了不同运行环境下的测试结果,供您在选用模型时参考。详细数据请见[Keypoint Inference Benchmark](https://github.com/PaddlePaddle/PaddleDetection/blob/develop/configs/keypoint/KeypointBenchmark.md)。 ## 引用 + ``` @inproceedings{cheng2020bottom, title={HigherHRNet: Scale-Aware Representation Learning for Bottom-Up Human Pose Estimation}, diff --git a/configs/keypoint/README_en.md b/configs/keypoint/README_en.md index 05e77f66819c26a38a54746eeb9e569f4945b442..f6f049824eb5aa185b767f37648bed429196d913 100644 --- a/configs/keypoint/README_en.md +++ b/configs/keypoint/README_en.md @@ -2,19 +2,63 @@ # KeyPoint Detection Models - +## Content + +- [Introduction](#introduction) +- [Model Recommendation](#model-recommendation) +- [Model Zoo](#model-zoo) +- [Getting Start](#getting-start) + - [Environmental Installation](#1environmental-installation) + - [Dataset Preparation](#2dataset-preparation) + - [Training and Testing](#3training-and-testing) + - [Training on single GPU](#training-on-single-gpu) + - [Training on multiple GPU](#training-on-multiple-gpu) + - [Evaluation](#evaluation) + - [Inference](#inference) + - [Deploy Inference](#deploy-inference) + - [Deployment for Top-Down models](#deployment-for-top-down-models) + - [Deployment for Bottom-Up models](#deployment-for-bottom-up-models) + - [Joint Inference with Multi-Object Tracking Model FairMOT](#joint-inference-with-multi-object-tracking-model-fairmot) + - [Complete Deploy Instruction and Demo](#complete-deploy-instruction-and-demo) +- [Train with custom data](#train-with-custom-data) +- [BenchMark](#benchmark) ## Introduction -- The keypoint detection part in PaddleDetection follows the state-of-the-art algorithm closely, including Top-Down and BottomUp methods, which can meet the different needs of users. +The keypoint detection part in PaddleDetection follows the state-of-the-art algorithm closely, including Top-Down and Bottom-Up methods, which can satisfy the different needs of users. Top-Down detects the object first and then detects the specific keypoint. Top-Down models will be more accurate, but slower as the number of objects increases. Differently, Bottom-Up detects the point first and then group or connect those points to form several instances of human pose. 
The speed of Bottom-Up is fixed and does not slow down as the number of objects increases, but it is less accurate. + +At the same time, PaddleDetection provides [PP-TinyPose](./tiny_pose/README_en.md), a self-developed real-time keypoint detection model optimized for mobile devices.
+## Model Recommendation + +### Recommended Mobile Models + + + + +| Detection Model | Keypoint Model | Input Size | COCO Accuracy | Average Inference Time (FP16) | Params (M) | Flops (G) | Model Weight | Paddle-Lite Inference Model(FP16) | +| :--- | :--- | :---: | :---: | :---: | :---: | :---: | :---: | :---: | +| [PicoDet-S-Pedestrian](../picodet/legacy_model/application/pedestrian_detection/picodet_s_192_pedestrian.yml) | [PP-TinyPose](./tiny_pose/tinypose_128x96.yml) | Detection:192x192
    Keypoint:128x96 | Detection mAP:29.0
    Keypoint AP:58.1 | Detection:2.37ms
    Keypoint:3.27ms | Detection:1.18
    Keypoint:1.36 | Detection:0.35
    Keypoint:0.08 | [Detection](https://bj.bcebos.com/v1/paddledet/models/keypoint/picodet_s_192_pedestrian.pdparams)
    [Keypoint](https://bj.bcebos.com/v1/paddledet/models/keypoint/tinypose_128x96.pdparams) | [Detection](https://bj.bcebos.com/v1/paddledet/models/keypoint/picodet_s_192_pedestrian_fp16.nb)
    [Keypoint](https://bj.bcebos.com/v1/paddledet/models/keypoint/tinypose_128x96_fp16.nb) | +| [PicoDet-S-Pedestrian](../picodet/legacy_model/application/pedestrian_detection/picodet_s_320_pedestrian.yml) | [PP-TinyPose](./tiny_pose/tinypose_256x192.yml) | Detection:320x320
    Keypoint:256x192 | Detection mAP:38.5
    Keypoint AP:68.8 | Detection:6.30ms
    Keypoint:8.33ms | Detection:1.18
    Keypoint:1.36 | Detection:0.97
    Keypoint:0.32 | [Detection](https://bj.bcebos.com/v1/paddledet/models/keypoint/picodet_s_320_pedestrian.pdparams)
    [Keypoint](https://bj.bcebos.com/v1/paddledet/models/keypoint/tinypose_256x192.pdparams) | [Detection](https://bj.bcebos.com/v1/paddledet/models/keypoint/picodet_s_320_pedestrian_fp16.nb)
    [Keypoint](https://bj.bcebos.com/v1/paddledet/models/keypoint/tinypose_256x192_fp16.nb) | + + +*Specific documents of PP-TinyPose, please refer to [Document](./tiny_pose/README.md)。 + +### Terminal Server + +| Detection Model | Keypoint Model | Input Size | Accuracy of COCO | Params (M) | Flops (G) | Model Weight | +| :----------------------------------------------------------- | :----------------------------------------- | :-------------------------------------: | :--------------------------------------: | :-----------------------------: | :-----------------------------: | :----------------------------------------------------------: | +| [PP-YOLOv2](https://github.com/PaddlePaddle/PaddleDetection/tree/release/2.3/configs/ppyolo/ppyolov2_r50vd_dcn_365e_coco.yml) | [HRNet-w32](./hrnet/hrnet_w32_384x288.yml) | Detection:640x640
    Keypoint:384x288 | Detection mAP:49.5
    Keypoint AP:77.8 | Detection:54.6
    Keypoint:28.6 | Detection:115.8
    Keypoint:17.3 | [Detection](https://paddledet.bj.bcebos.com/models/ppyolov2_r50vd_dcn_365e_coco.pdparams)
    [Keypoint](https://paddledet.bj.bcebos.com/models/keypoint/hrnet_w32_256x192.pdparams) | +| [PP-YOLOv2](https://github.com/PaddlePaddle/PaddleDetection/tree/release/2.3/configs/ppyolo/ppyolov2_r50vd_dcn_365e_coco.yml) | [HRNet-w32](./hrnet/hrnet_w32_256x192.yml) | Detection:640x640
    Keypoint:256x192 | Detection mAP:49.5
    Keypoint AP:76.9 | Detection:54.6
    Keypoint:28.6 | Detection:115.8
    Keypoint:7.68 | [Detection](https://paddledet.bj.bcebos.com/models/ppyolov2_r50vd_dcn_365e_coco.pdparams)
    [Keypoint](https://paddledet.bj.bcebos.com/models/keypoint/hrnet_w32_384x288.pdparams) | + + +## Model Zoo -#### Model Zoo COCO Dataset | Model | Input Size | AP(coco val) | Model Download | Config File | | :---------------- | -------- | :----------: | :----------------------------------------------------------: | ----------------------------------------------------------- | @@ -31,7 +75,6 @@ COCO Dataset | LiteHRNet-30 | 256x192 | 69.4 | [lite_hrnet_30_256x192_coco.pdparams](https://bj.bcebos.com/v1/paddledet/models/keypoint/lite_hrnet_30_256x192_coco.pdparams) | [config](./lite_hrnet/lite_hrnet_30_256x192_coco.yml) | | LiteHRNet-30 | 384x288 | 72.5 | [lite_hrnet_30_384x288_coco.pdparams](https://bj.bcebos.com/v1/paddledet/models/keypoint/lite_hrnet_30_384x288_coco.pdparams) | [config](./lite_hrnet/lite_hrnet_30_384x288_coco.yml) | - Note:The AP results of Top-Down models are based on bounding boxes in GroundTruth. MPII Dataset @@ -40,25 +83,31 @@ MPII Dataset | HRNet-w32 | 256x256 | 90.6 | 38.5 | [hrnet_w32_256x256_mpii.pdparams](https://paddledet.bj.bcebos.com/models/keypoint/hrnet_w32_256x256_mpii.pdparams) | [config](./hrnet/hrnet_w32_256x256_mpii.yml) | +Model for Scenes +| Model | Strategy | Input Size | Precision | Inference Speed |Model Weights | Model Inference and Deployment | description| +| :---- | ---|----- | :--------: | :-------: |:------------: |:------------: |:-------------------: | +| HRNet-w32 + DarkPose | Top-Down|256x192 | AP: 87.1 (on internal dataset)| 2.9ms per person |[Link](https://bj.bcebos.com/v1/paddledet/models/pipeline/dark_hrnet_w32_256x192.pdparams) |[Link](https://bj.bcebos.com/v1/paddledet/models/pipeline/dark_hrnet_w32_256x192.zip) | Especially optimized for fall scenarios, the model is applied to [PP-Human](../../deploy/pipeline/README.md) | + + We also release [PP-TinyPose](./tiny_pose/README_en.md), a real-time keypoint detection model optimized for mobile devices. Welcome to experience. ## Getting Start -### 1. Environmental Installation +### 1.Environmental Installation -​ Please refer to [PaddleDetection Installation Guild](https://github.com/PaddlePaddle/PaddleDetection/blob/develop/docs/tutorials/INSTALL.md) to install PaddlePaddle and PaddleDetection correctly. +​ Please refer to [PaddleDetection Installation Guide](../../docs/tutorials/INSTALL.md) to install PaddlePaddle and PaddleDetection correctly. -### 2. Dataset Preparation +### 2.Dataset Preparation -​ Currently, KeyPoint Detection Models support [COCO](https://cocodataset.org/#keypoints-2017) and [MPII](http://human-pose.mpi-inf.mpg.de/#overview). Please refer to [Keypoint Dataset Preparation](../../docs/tutorials/PrepareKeypointDataSet_en.md) to prepare dataset. +​ Currently, KeyPoint Detection Models support [COCO](https://cocodataset.org/#keypoints-2017) and [MPII](http://human-pose.mpi-inf.mpg.de/#overview). Please refer to [Keypoint Dataset Preparation](../../docs/tutorials/data/PrepareKeypointDataSet_en.md) to prepare dataset. ​ About the description for config files, please refer to [Keypoint Config Guild](../../docs/tutorials/KeyPointConfigGuide_en.md). - - Note that, when testing by detected bounding boxes in Top-Down method, We should get `bbox.json` by a detection model. You can download the detected results for COCO val2017 [(Detector having human AP of 56.4 on COCO val2017 dataset)](https://paddledet.bj.bcebos.com/data/bbox.json) directly, put it at the root path (`PaddleDetection/`), and set `use_gt_bbox: False` in config file. 
+- Note that, when testing with detected bounding boxes in the Top-Down method, we should get `bbox.json` by a detection model. You can download the detected results for COCO val2017 [(Detector having human AP of 56.4 on COCO val2017 dataset)](https://paddledet.bj.bcebos.com/data/bbox.json) directly, put it at the root path (`PaddleDetection/`), and set `use_gt_bbox: False` in config file. -### 3、Training and Testing +### 3.Training and Testing -​ **Training on single gpu:** +#### Training on single GPU ```shell #COCO DataSet CUDA_VISIBLE_DEVICES=0 python3 tools/train.py -c configs/keypoint/higherhrnet/hi CUDA_VISIBLE_DEVICES=0 python3 tools/train.py -c configs/keypoint/hrnet/hrnet_w32_256x256_mpii.yml ``` -​ **Training on multiple gpu:** +#### Training on multiple GPU ```shell #COCO DataSet CUDA_VISIBLE_DEVICES=0,1,2,3 python3 -m paddle.distributed.launch tools/train.py CUDA_VISIBLE_DEVICES=0,1,2,3 python3 -m paddle.distributed.launch tools/train.py -c configs/keypoint/hrnet/hrnet_w32_256x256_mpii.yml ``` -​ **Evaluation** +#### Evaluation ```shell #COCO DataSet @@ -91,7 +140,7 @@ CUDA_VISIBLE_DEVICES=0 python3 tools/eval.py -c configs/keypoint/hrnet/hrnet_w32 CUDA_VISIBLE_DEVICES=0 python3 tools/eval.py -c configs/keypoint/higherhrnet/higherhrnet_hrnet_w32_512.yml --save_prediction_only ``` -​ **Inference** +#### Inference ​ Note:Top-down models only support inference for a cropped image with single person. If you want to do inference on image with several people, please see "joint inference by detection and keypoint". Or you can choose a Bottom-up model. @@ -99,22 +148,34 @@ CUDA_VISIBLE_DEVICES=0 python3 tools/eval.py -c configs/keypoint/higherhrnet/hig CUDA_VISIBLE_DEVICES=0 python3 tools/infer.py -c configs/keypoint/higherhrnet/higherhrnet_hrnet_w32_512.yml -o weights=./output/higherhrnet_hrnet_w32_512/model_final.pdparams --infer_dir=../images/ --draw_threshold=0.5 --save_txt=True ``` -​ **Deploy Inference** +#### Deploy Inference + +##### Deployment for Top-Down models ```shell -#export models -python tools/export_model.py -c configs/keypoint/higherhrnet/higherhrnet_hrnet_w32_512.yml -o weights=output/higherhrnet_hrnet_w32_512/model_final.pdparams +#Export Detection Model +python tools/export_model.py -c configs/ppyolo/ppyolov2_r50vd_dcn_365e_coco.yml -o weights=https://paddledet.bj.bcebos.com/models/ppyolov2_r50vd_dcn_365e_coco.pdparams -#deploy inference -#keypoint inference for a single model of top-down/bottom-up method. In this mode, top-down model only support inference for a cropped image with single person. -python deploy/python/keypoint_infer.py --model_dir=output_inference/higherhrnet_hrnet_w32_512/ --image_file=./demo/000000014439_640x640.jpg --device=gpu --threshold=0.5 -python deploy/python/keypoint_infer.py --model_dir=output_inference/hrnet_w32_384x288/ --image_file=./demo/hrnet_demo.jpg --device=gpu --threshold=0.5 -#joint inference by detection and keypoint for top-down models.
+#Export Keypoint Model +python tools/export_model.py -c configs/keypoint/hrnet/hrnet_w32_256x192.yml -o weights=https://paddledet.bj.bcebos.com/models/keypoint/hrnet_w32_256x192.pdparams + +#Deployment for detector and keypoint, which is only for Top-Down models python deploy/python/det_keypoint_unite_infer.py --det_model_dir=output_inference/ppyolo_r50vd_dcn_2x_coco/ --keypoint_model_dir=output_inference/hrnet_w32_384x288/ --video_file=../video/xxx.mp4 --device=gpu ``` -​ **joint inference with Multi-Object Tracking model FairMOT** +##### Deployment for Bottom-Up models + +```shell +#Export model +python tools/export_model.py -c configs/keypoint/higherhrnet/higherhrnet_hrnet_w32_512.yml -o weights=output/higherhrnet_hrnet_w32_512/model_final.pdparams + + +#Keypoint independent deployment, which is only for bottom-up models +python deploy/python/keypoint_infer.py --model_dir=output_inference/higherhrnet_hrnet_w32_512/ --image_file=./demo/000000014439_640x640.jpg --device=gpu --threshold=0.5 +``` + +##### Joint Inference with Multi-Object Tracking Model FairMOT ```shell #export FairMOT model @@ -123,17 +184,51 @@ python tools/export_model.py -c configs/mot/fairmot/fairmot_dla34_30e_1088x608.y #joint inference with Multi-Object Tracking model FairMOT python deploy/python/mot_keypoint_unite_infer.py --mot_model_dir=output_inference/fairmot_dla34_30e_1088x608/ --keypoint_model_dir=output_inference/higherhrnet_hrnet_w32_512/ --video_file={your video name}.mp4 --device=GPU ``` + **Note:** To export the MOT model, please refer to [here](../../configs/mot/README_en.md). -### 4、Deploy standalone +### Complete Deploy Instruction and Demo + +​ We provide standalone deployment with PaddleInference (server GPU), Paddle-Lite (mobile, ARM) and third-party engines (MNN, OpenVINO), which is independent of the training code. For details, please see the [deploy docs](https://github.com/PaddlePaddle/PaddleDetection/blob/develop/deploy/README_en.md). + +## Train with custom data + +We take [tinypose_256x192](./tiny_pose/README_en.md) as an example to show how to train with custom data. -​ We provide standalone deploy of PaddleInference(Server-GPU)、PaddleLite(mobile、ARM)、Third-Engine(MNN、OpenVino), which is independent of training codes。For detail, please click [Deploy-docs](../../deploy/README_en.md)。 +#### 1、For configs [tinypose_256x192.yml](../../configs/keypoint/tiny_pose/tinypose_256x192.yml) -## Benchmark -We provide benchmarks in different runtime environments for your reference when choosing models. See [Keypoint Inference Benchmark](./KeypointBenchmark.md) for details.
+you may need to modify these for your job: + +``` +num_joints: &num_joints 17 #the number of joints in your job +train_height: &train_height 256 #the height of model input +train_width: &train_width 192 #the width of model input +hmsize: &hmsize [48, 64] #the shape of model output, usually 1/4 of [w, h] +flip_perm: &flip_perm [[1, 2], [3, 4], [5, 6], [7, 8], [9, 10], [11, 12], [13, 14], [15, 16]] #the correspondence between left and right keypoint ids, used for the flip transform. You can add a line ("flip: False") after flip_perm in RandomFlipHalfBodyTransform of TrainReader if you don't need it +num_joints_half_body: 8 #the number of half-body joints, used for the half_body transform +prob_half_body: 0.3 #the probability of the half_body transform, set to 0 if you don't need it +upper_body_ids: [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10] #the joint ids of the upper (half) body, used to get the upper joints in the half_body transform +``` + +For more configs, please refer to [KeyPointConfigGuide](../../docs/tutorials/KeyPointConfigGuide_en.md). + +#### 2、Others(used for test and visualization) +- In keypoint_utils.py, please set "sigmas = np.array([.26, .25, .25, .35, .35, .79, .79, .72, .72, .62, .62, 1.07, 1.07, .87, .87, .89, .89]) / 10.0". Each value indicates the variance of a joint location: 0.25-0.5 means the location is annotated with high accuracy (for example, eyes), 0.5-1.0 means the location is less certain (for example, shoulders); 0.75 is recommended if you are not sure. +- In visualizer.py, please set "EDGES" in the draw_pose function; it indicates which lines to draw between joints for visualization. +- In the pycocotools you installed, please set "sigmas" as well; it is the same as that in keypoint_utils.py, but is used for COCO evaluation. + +#### 3、Note for data preparation +- The data should have the same format as COCO data, and the keypoints (Nx3) and bounding boxes (N) should be annotated. +- Please set "area" > 0 in the annotation files, otherwise the record will be skipped during training. Moreover, due to the evaluation mechanism of COCO, data with a small area may also be filtered out during evaluation. We recommend setting `area = bbox_w * bbox_h` when customizing your dataset. + + +## Benchmark + +We provide benchmarks in different runtime environments for your reference when choosing models. See [Keypoint Inference Benchmark](https://github.com/PaddlePaddle/PaddleDetection/blob/develop/configs/keypoint/KeypointBenchmark.md) for details. ## Reference + ``` @inproceedings{cheng2020bottom, title={HigherHRNet: Scale-Aware Representation Learning for Bottom-Up Human Pose Estimation}, diff --git a/configs/keypoint/tiny_pose/README.md b/configs/keypoint/tiny_pose/README.md index 6d9c5be02d4eccfff00abc611da96296fab2223c..809b0ab10f30f74969e5f4773ddad1e8add2c8b6 100644 --- a/configs/keypoint/tiny_pose/README.md +++ b/configs/keypoint/tiny_pose/README.md @@ -7,12 +7,20 @@
    图片来源:COCO2017开源数据集
    +## 最新动态 +- **2022.8.01:发布PP-TinyPose升级版。 在健身、舞蹈等场景的业务数据集端到端AP提升9.1** + - 新增体育场景真实数据,复杂动作识别效果显著提升,覆盖侧身、卧躺、跳跃、高抬腿等非常规动作 + - 检测模型升级为[PP-PicoDet增强版](../../../configs/picodet/README.md),在COCO数据集上精度提升3.1% + - 关键点稳定性增强。新增滤波稳定方式,视频预测结果更加稳定平滑 + + ![](https://user-images.githubusercontent.com/15810355/181733705-d0f84232-c6a2-43dd-be70-4a3a246b8fbc.gif) + ## 简介 PP-TinyPose是PaddleDetecion针对移动端设备优化的实时关键点检测模型,可流畅地在移动端设备上执行多人姿态估计任务。借助PaddleDetecion自研的优秀轻量级检测模型[PicoDet](../../picodet/README.md),我们同时提供了特色的轻量级垂类行人检测模型。TinyPose的运行环境有以下依赖要求: - [PaddlePaddle](https://github.com/PaddlePaddle/Paddle)>=2.2 如希望在移动端部署,则还需要: -- [Paddle-Lite](https://github.com/PaddlePaddle/Paddle-Lite)>=2.10 +- [Paddle-Lite](https://github.com/PaddlePaddle/Paddle-Lite)>=2.11
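下面给出一个示意性的环境准备草图(仅为满足上述版本要求的假设性示例,具体安装方式请以PaddlePaddle与Paddle-Lite官方文档为准):

```bash
# 安装满足版本要求的PaddlePaddle(示例)
pip install "paddlepaddle>=2.2"

# 如需移动端部署,再安装Paddle-Lite(示例;该pip包同时提供后文用到的paddle_lite_opt工具)
pip install "paddlelite>=2.11"
```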
    @@ -34,28 +42,49 @@ PP-TinyPose是PaddleDetecion针对移动端设备优化的实时关键点检测 ## 模型库 -### 关键点检测模型 -| 模型 | 输入尺寸 | AP (COCO Val) | 单人推理耗时 (FP32) | 单人推理耗时(FP16) | 配置文件 | 模型权重 | 预测部署模型 | Paddle-Lite部署模型(FP32) | Paddle-Lite部署模型(FP16) | -| :---------- | :------: | :-----------: | :-----------------: | :-----------------: | :------------------------------: | :----------------------------------------------------------: | :----------------------------------------------------------: | :----------------------------------------------------------: | :----------------------------------------------------------: | -| PP-TinyPose | 128*96 | 58.1 | 4.57ms | 3.27ms | [Config](./tinypose_128x96.yml) | [Model](https://bj.bcebos.com/v1/paddledet/models/keypoint/tinypose_128x96.pdparams) | [预测部署模型](https://bj.bcebos.com/v1/paddledet/models/keypoint/tinypose_128x96.tar) | [Lite部署模型](https://bj.bcebos.com/v1/paddledet/models/keypoint/tinypose_128x96.tar) | [Lite部署模型(FP16)](https://bj.bcebos.com/v1/paddledet/models/keypoint/tinypose_128x96_fp16.tar) | -| PP-TinyPose | 256*192 | 68.8 | 14.07ms | 8.33ms | [Config](./tinypose_256x192.yml) | [Model](https://bj.bcebos.com/v1/paddledet/models/keypoint/tinypose_256x192.pdparams) | [预测部署模型](https://bj.bcebos.com/v1/paddledet/models/keypoint/tinypose_256x192.tar) | [Lite部署模型](https://bj.bcebos.com/v1/paddledet/models/keypoint/tinypose_256x192.tar) | [Lite部署模型(FP16)](https://bj.bcebos.com/v1/paddledet/models/keypoint/tinypose_256x192_fp16.tar) | +### Pipeline性能 +| 单人模型配置 | AP (业务数据集) | AP (COCO Val单人)| 单人耗时 (FP32) | 单人耗时 (FP16) | +| :---------------------------------- | :------: | :------: | :---: | :---: | +| PicoDet-S-Lcnet-Pedestrian-192\*192 + PP-TinyPose-128\*96 | 77.1 (+9.1) | 52.3 (+0.5) | 12.90 ms| 9.61 ms | +| 多人模型配置 | AP (业务数据集) | AP (COCO Val多人)| 6人耗时 (FP32) | 6人耗时 (FP16)| +| :------------------------ | :-------: | :-------: | :---: | :---: | +| PicoDet-S-Lcnet-Pedestrian-320\*320 + PP-TinyPose-128\*96 | 78.0 (+7.7) | 50.1 (-0.2) | 47.63 ms| 34.62 ms | -### 行人检测模型 -| 模型 | 输入尺寸 | mAP (COCO Val) | 平均推理耗时 (FP32) | 平均推理耗时 (FP16) | 配置文件 | 模型权重 | 预测部署模型 | Paddle-Lite部署模型(FP32) | Paddle-Lite部署模型(FP16) | -| :------------------- | :------: | :------------: | :-----------------: | :-----------------: | :----------------------------------------------------------: | :----------------------------------------------------------: | :----------------------------------------------------------: | :----------------------------------------------------------: | :----------------------------------------------------------: | -| PicoDet-S-Pedestrian | 192*192 | 29.0 | 4.30ms | 2.37ms | [Config](../../picodet/application/pedestrian_detection/picodet_s_192_pedestrian.yml) | [Model](https://bj.bcebos.com/v1/paddledet/models/keypoint/picodet_s_192_pedestrian.pdparams) | [预测部署模型](https://bj.bcebos.com/v1/paddledet/models/keypoint/picodet_s_192_pedestrian.tar) | [Lite部署模型](https://bj.bcebos.com/v1/paddledet/models/keypoint/picodet_s_192_pedestrian.tar) | [Lite部署模型(FP16)](https://bj.bcebos.com/v1/paddledet/models/keypoint/picodet_s_192_pedestrian_fp16.tar) | -| PicoDet-S-Pedestrian | 320*320 | 38.5 | 10.26ms | 6.30ms | [Config](../../picodet/application/pedestrian_detection/picodet_s_320_pedestrian.yml) | [Model](https://bj.bcebos.com/v1/paddledet/models/keypoint/picodet_s_320_pedestrian.pdparams) | [预测部署模型](https://bj.bcebos.com/v1/paddledet/models/keypoint/picodet_s_320_pedestrian.tar) | [Lite部署模型](https://bj.bcebos.com/v1/paddledet/models/keypoint/picodet_s_320_pedestrian.tar) | 
[Lite部署模型(FP16)](https://bj.bcebos.com/v1/paddledet/models/keypoint/picodet_s_320_pedestrian_fp16.tar) | +**说明** +- 关键点检测模型的精度指标是基于对应行人检测模型检测得到的检测框。 +- 精度测试中去除了flip操作,且检测置信度阈值要求0.5。 +- 速度测试环境为qualcomm snapdragon 865,采用arm8下4线程推理。 +- Pipeline速度包含模型的预处理、推理及后处理部分。 +- 精度值的增量对比自历史版本中对应模型组合, 详情请见**历史版本-Pipeline性能**。 +- 精度测试中,为了公平比较,多人数据去除了6人以上(不含6人)的图像。 +### 关键点检测模型 +| 模型 | 输入尺寸 | AP (业务数据集) | AP (COCO Val) | 参数量 | FLOPS |单人推理耗时 (FP32) | 单人推理耗时(FP16) | 配置文件 | 模型权重 | 预测部署模型 | Paddle-Lite部署模型(FP32) | Paddle-Lite部署模型(FP16) | +| :---------- | :------: | :-----------: | :-----------: | :-----------: | :-----------: | :-----------------: | :-----------------: | :------------------------------: | :----------------------------------------------------------: | :----------------------------------------------------------: | :----------------------------------------------------------: | :----------------------------------------------------------: | +| PP-TinyPose | 128*96 | 84.3 | 58.4 | 1.32 M | 81.56 M | 4.57ms | 3.27ms | [Config](./tinypose_128x96.yml) | [Model](https://bj.bcebos.com/v1/paddledet/models/keypoint/tinypose_enhance/tinypose_128x96.pdparams) | [预测部署模型](https://bj.bcebos.com/v1/paddledet/models/keypoint/tinypose_enhance/tinypose_128x96.zip) | [Lite部署模型](https://bj.bcebos.com/v1/paddledet/models/keypoint/tinypose_enhance/tinypose_128x96_fp32.nb) | [Lite部署模型(FP16)](https://bj.bcebos.com/v1/paddledet/models/keypoint/tinypose_enhance/tinypose_128x96_fp16.nb) | +| PP-TinyPose | 256*192 | 91.0 | 68.3 | 1.32 M | 326.24M |14.07ms | 8.33ms | [Config](./tinypose_256x192.yml) | [Model](https://bj.bcebos.com/v1/paddledet/models/keypoint/tinypose_enhance/tinypose_256x192.pdparams) | [预测部署模型](https://bj.bcebos.com/v1/paddledet/models/keypoint/tinypose_enhance/tinypose_256x192.zip) | [Lite部署模型](https://bj.bcebos.com/v1/paddledet/models/keypoint/tinypose_enhance/tinypose_256x192_fp32.nb) | [Lite部署模型(FP16)](https://bj.bcebos.com/v1/paddledet/models/keypoint/tinypose_enhance/tinypose_256x192_fp16.nb) | +### 行人检测模型 +| 模型 | 输入尺寸 | mAP (COCO Val-Person) | 参数量 | FLOPS | 平均推理耗时 (FP32) | 平均推理耗时 (FP16) | 配置文件 | 模型权重 | 预测部署模型 | Paddle-Lite部署模型(FP32) | Paddle-Lite部署模型(FP16) | +| :------------------- | :------: | :------------: | :------------: | :------------: | :-----------------: | :-----------------: | :----------------------------------------------------------: | :----------------------------------------------------------: | :----------------------------------------------------------: | :----------------------------------------------------------: | :----------------------------------------------------------: | +| PicoDet-S-Lcnet-Pedestrian | 192*192 | 31.7 | 1.16 M | 170.03 M | 5.24ms | 3.66ms | [Config](../../picodet/application/pedestrian_detection/picodet_s_192_lcnet_pedestrian.yml) | [Model](https://bj.bcebos.com/v1/paddledet/models/keypoint/tinypose_enhance/picodet_s_192_lcnet_pedestrian.pdparams) | [预测部署模型](https://bj.bcebos.com/v1/paddledet/models/keypoint/tinypose_enhance/picodet_s_192_lcnet_pedestrian.zip) | [Lite部署模型](https://bj.bcebos.com/v1/paddledet/models/keypoint/tinypose_enhance/picodet_s_192_lcnet_pedestrian_fp32.nb) | [Lite部署模型(FP16)](https://bj.bcebos.com/v1/paddledet/models/keypoint/tinypose_enhance/picodet_s_192_lcnet_pedestrian_fp16.nb) | +| PicoDet-S-Lcnet-Pedestrian | 320*320 | 41.6 | 1.16 M | 472.07 M | 13.87ms | 8.94ms | [Config](../../picodet/application/pedestrian_detection/picodet_s_320_lcnet_pedestrian.yml) | 
[Model](https://bj.bcebos.com/v1/paddledet/models/keypoint/tinypose_enhance/picodet_s_320_lcnet_pedestrian.pdparams) | [预测部署模型](https://bj.bcebos.com/v1/paddledet/models/keypoint/tinypose_enhance/picodet_s_320_lcnet_pedestrian.zip) | [Lite部署模型](https://bj.bcebos.com/v1/paddledet/models/keypoint/tinypose_enhance/picodet_s_320_lcnet_pedestrian_fp32.nb) | [Lite部署模型(FP16)](https://bj.bcebos.com/v1/paddledet/models/keypoint/tinypose_enhance/picodet_s_320_lcnet_pedestrian_fp16.nb) | + **说明** -- 关键点检测模型与行人检测模型均使用`COCO train2017`和`AI Challenger trainset`作为训练集。关键点检测模型使用`COCO person keypoints val2017`作为测试集,行人检测模型采用`COCO instances val2017`作为测试集。 +- 关键点检测模型与行人检测模型均使用`COCO train2017`, `AI Challenger trainset`以及采集的多姿态场景数据集作为训练集。关键点检测模型使用多姿态场景数据集作为测试集,行人检测模型采用`COCO instances val2017`作为测试集。 - 关键点检测模型的精度指标所依赖的检测框为ground truth标注得到。 - 关键点检测模型与行人检测模型均在4卡环境下训练,若实际训练环境需要改变GPU数量或batch size, 须参考[FAQ](../../../docs/tutorials/FAQ/README.md)对应调整学习率。 - 推理速度测试环境为 Qualcomm Snapdragon 865,采用arm8下4线程推理得到。 +## 历史版本 + +
    +2021版本 + + ### Pipeline性能 | 单人模型配置 | AP (COCO Val 单人) | 单人耗时 (FP32) | 单人耗时 (FP16) | | :------------------------ | :------: | :---: | :---: | @@ -76,6 +105,29 @@ PP-TinyPose是PaddleDetecion针对移动端设备优化的实时关键点检测 - 其他优秀开源模型的测试及部署方案,请参考[这里](https://github.com/zhiboniu/MoveNet-PaddleLite)。 - 更多环境下的性能测试结果,请参考[Keypoint Inference Benchmark](../KeypointBenchmark.md)。 + +### 关键点检测模型 +| 模型 | 输入尺寸 | AP (COCO Val) | 单人推理耗时 (FP32) | 单人推理耗时(FP16) | 配置文件 | 模型权重 | 预测部署模型 | Paddle-Lite部署模型(FP32) | Paddle-Lite部署模型(FP16) | +| :---------- | :------: | :-----------: | :-----------------: | :-----------------: | :------------------------------: | :----------------------------------------------------------: | :----------------------------------------------------------: | :----------------------------------------------------------: | :----------------------------------------------------------: | +| PP-TinyPose | 128*96 | 58.1 | 4.57ms | 3.27ms | [Config](./tinypose_128x96.yml) | [Model](https://bj.bcebos.com/v1/paddledet/models/keypoint/tinypose_128x96.pdparams) | [预测部署模型](https://bj.bcebos.com/v1/paddledet/models/keypoint/tinypose_128x96.tar) | [Lite部署模型](https://bj.bcebos.com/v1/paddledet/models/keypoint/tinypose_128x96_lite.tar) | [Lite部署模型(FP16)](https://bj.bcebos.com/v1/paddledet/models/keypoint/tinypose_128x96_fp16_lite.tar) | +| PP-TinyPose | 256*192 | 68.8 | 14.07ms | 8.33ms | [Config](./tinypose_256x192.yml) | [Model](https://bj.bcebos.com/v1/paddledet/models/keypoint/tinypose_256x192.pdparams) | [预测部署模型](https://bj.bcebos.com/v1/paddledet/models/keypoint/tinypose_256x192.tar) | [Lite部署模型](https://bj.bcebos.com/v1/paddledet/models/keypoint/tinypose_256x192_lite.tar) | [Lite部署模型(FP16)](https://bj.bcebos.com/v1/paddledet/models/keypoint/tinypose_256x192_fp16_lite.tar) | + +### 行人检测模型 +| 模型 | 输入尺寸 | mAP (COCO Val-Person) | 平均推理耗时 (FP32) | 平均推理耗时 (FP16) | 配置文件 | 模型权重 | 预测部署模型 | Paddle-Lite部署模型(FP32) | Paddle-Lite部署模型(FP16) | +| :------------------- | :------: | :------------: | :-----------------: | :-----------------: | :----------------------------------------------------------: | :----------------------------------------------------------: | :----------------------------------------------------------: | :----------------------------------------------------------: | :----------------------------------------------------------: | +| PicoDet-S-Pedestrian | 192*192 | 29.0 | 4.30ms | 2.37ms | [Config](../../picodet/legacy_model/application/pedestrian_detection/picodet_s_192_pedestrian.yml) | [Model](https://bj.bcebos.com/v1/paddledet/models/keypoint/picodet_s_192_pedestrian.pdparams) | [预测部署模型](https://bj.bcebos.com/v1/paddledet/models/keypoint/picodet_s_192_pedestrian.tar) | [Lite部署模型](https://bj.bcebos.com/v1/paddledet/models/keypoint/picodet_s_192_pedestrian_lite.tar) | [Lite部署模型(FP16)](https://bj.bcebos.com/v1/paddledet/models/keypoint/picodet_s_192_pedestrian_fp16_lite.tar) | +| PicoDet-S-Pedestrian | 320*320 | 38.5 | 10.26ms | 6.30ms | [Config](../../picodet/legacy_model/application/pedestrian_detection/picodet_s_320_pedestrian.yml) | [Model](https://bj.bcebos.com/v1/paddledet/models/keypoint/picodet_s_320_pedestrian.pdparams) | [预测部署模型](https://bj.bcebos.com/v1/paddledet/models/keypoint/picodet_s_320_pedestrian.tar) | [Lite部署模型](https://bj.bcebos.com/v1/paddledet/models/keypoint/picodet_s_320_pedestrian_lite.tar) | [Lite部署模型(FP16)](https://bj.bcebos.com/v1/paddledet/models/keypoint/picodet_s_320_pedestrian_fp16_lite.tar) | + + +**说明** +- 关键点检测模型与行人检测模型均使用`COCO train2017`和`AI Challenger 
trainset`作为训练集。关键点检测模型使用`COCO person keypoints val2017`作为测试集,行人检测模型采用`COCO instances val2017`作为测试集。 +- 关键点检测模型的精度指标所依赖的检测框为ground truth标注得到。 +- 关键点检测模型与行人检测模型均在4卡环境下训练,若实际训练环境需要改变GPU数量或batch size, 须参考[FAQ](../../../docs/tutorials/FAQ/README.md)对应调整学习率。 +- 推理速度测试环境为 Qualcomm Snapdragon 865,采用arm8下4线程推理得到。 + + +
    + ## 模型训练 关键点检测模型与行人检测模型的训练集在`COCO`以外还扩充了[AI Challenger](https://arxiv.org/abs/1711.06475)数据集,各数据集关键点定义如下: ``` @@ -176,7 +228,7 @@ python3 deploy/python/det_keypoint_unite_infer.py --det_model_dir=output_inferen ### 实现移动端部署 #### 直接使用我们提供的模型进行部署 1. 下载模型库中提供的`Paddle-Lite部署模型`,分别获取得到行人检测模型和关键点检测模型的`.nb`格式文件。 -2. 准备Paddle-Lite运行环境, 可直接通过[PaddleLite预编译库下载](https://paddle-lite.readthedocs.io/zh/latest/quick_start/release_lib.html)获取预编译库,无需自行编译。如需要采用FP16推理,则需要下载[FP16的预编译库](https://github.com/PaddlePaddle/Paddle-Lite/releases/download/v2.10-rc/inference_lite_lib.android.armv8_clang_c++_static_with_extra_with_cv_with_fp16.tiny_publish_427e46.zip) +2. 准备Paddle-Lite运行环境, 可直接通过[PaddleLite预编译库下载](https://paddle-lite.readthedocs.io/zh/latest/quick_start/release_lib.html)获取预编译库,无需自行编译。如需要采用FP16推理,则需要下载FP16的预编译库。 3. 编译模型运行代码,详细步骤见[Paddle-Lite端侧部署](../../../deploy/lite/README.md)。 #### 将训练的模型实现端侧部署 @@ -216,11 +268,14 @@ paddle_lite_opt --model_dir=inference_model/tinypose_128x96 --valid_targets=arm - 在导出模型时增加`TestReader.fuse_normalize=true`参数,可以将对图像的Normalize操作合并在模型中执行,从而实现加速。 - FP16推理可实现更快的模型推理速度。若希望部署FP16模型,除模型转换步骤外,还需要编译支持FP16的Paddle-Lite预测库,详见[Paddle Lite 使用 ARM CPU 预测部署](https://paddle-lite.readthedocs.io/zh/latest/demo_guides/arm_cpu.html)。 +## 关键点稳定策略(仅支持视频推理) +请参考[关键点稳定策略](../README.md#关键点稳定策略仅适用于视频数据)。 + ## 优化策略 TinyPose采用了以下策略来平衡模型的速度和精度表现: - 轻量级的姿态估计任务骨干网络,[wider naive Lite-HRNet](https://arxiv.org/abs/2104.06403)。 -- 更小的输入尺寸。 +- 更小的输入尺寸,以提升整体推理速度。 - 加入Distribution-Aware coordinate Representation of Keypoints ([DARK](https://arxiv.org/abs/1910.06278)),以提升低分辨率热力图下模型的精度表现。 -- Unbiased Data Processing ([UDP](https://arxiv.org/abs/1911.07524))。 -- Augmentation by Information Dropping ([AID](https://arxiv.org/abs/2008.07139v2))。 -- FP16 推理。 +- Unbiased Data Processing ([UDP](https://arxiv.org/abs/1911.07524)),使用无偏数据编解码提升模型精度。 +- Augmentation by Information Dropping ([AID](https://arxiv.org/abs/2008.07139v2)),通过添加信息丢失的数组增强,提升模型对关键点的定位能力。 +- FP16 推理, 实现更快的模型推理速度。 diff --git a/configs/keypoint/tiny_pose/README_en.md b/configs/keypoint/tiny_pose/README_en.md index e632c5b1996b71e5414fb8f596a4a497c9160015..0db1520315a34b1721d15c395896b3133d1ffefa 100644 --- a/configs/keypoint/tiny_pose/README_en.md +++ b/configs/keypoint/tiny_pose/README_en.md @@ -37,14 +37,14 @@ If you want to deploy it on the mobile devives, you also need: ### Keypoint Detection Model | Model | Input Size | AP (COCO Val) | Inference Time for Single Person (FP32)| Inference Time for Single Person(FP16) | Config | Model Weights | Deployment Model | Paddle-Lite Model(FP32) | Paddle-Lite Model(FP16)| | :------------------------ | :-------: | :------: | :------: |:---: | :---: | :---: | :---: | :---: | :---: | -| PP-TinyPose | 128*96 | 58.1 | 4.57ms | 3.27ms | [Config](./tinypose_128x96.yml) |[Model](https://bj.bcebos.com/v1/paddledet/models/keypoint/tinypose_128x96.pdparams) | [Deployment Model](https://bj.bcebos.com/v1/paddledet/models/keypoint/tinypose_128x96.tar) | [Lite Model](https://bj.bcebos.com/v1/paddledet/models/keypoint/tinypose_128x96.tar) | [Lite Model(FP16)](https://bj.bcebos.com/v1/paddledet/models/keypoint/tinypose_128x96_fp16.tar) | -| PP-TinyPose | 256*192 | 68.8 | 14.07ms | 8.33ms | [Config](./tinypose_256x192.yml) | [Model](https://bj.bcebos.com/v1/paddledet/models/keypoint/tinypose_256x192.pdparams) | [Deployment Model](https://bj.bcebos.com/v1/paddledet/models/keypoint/tinypose_256x192.tar) | [Lite Model](https://bj.bcebos.com/v1/paddledet/models/keypoint/tinypose_256x192.tar) | [Lite 
Model(FP16)](https://bj.bcebos.com/v1/paddledet/models/keypoint/tinypose_256x192_fp16.tar) | +| PP-TinyPose | 128*96 | 58.1 | 4.57ms | 3.27ms | [Config](./tinypose_128x96.yml) |[Model](https://bj.bcebos.com/v1/paddledet/models/keypoint/tinypose_128x96.pdparams) | [Deployment Model](https://bj.bcebos.com/v1/paddledet/models/keypoint/tinypose_128x96.tar) | [Lite Model](https://bj.bcebos.com/v1/paddledet/models/keypoint/tinypose_128x96_lite.tar) | [Lite Model(FP16)](https://bj.bcebos.com/v1/paddledet/models/keypoint/tinypose_128x96_fp16_lite.tar) | +| PP-TinyPose | 256*192 | 68.8 | 14.07ms | 8.33ms | [Config](./tinypose_256x192.yml) | [Model](https://bj.bcebos.com/v1/paddledet/models/keypoint/tinypose_256x192.pdparams) | [Deployment Model](https://bj.bcebos.com/v1/paddledet/models/keypoint/tinypose_256x192.tar) | [Lite Model](https://bj.bcebos.com/v1/paddledet/models/keypoint/tinypose_256x192_lite.tar) | [Lite Model(FP16)](https://bj.bcebos.com/v1/paddledet/models/keypoint/tinypose_256x192_fp16_lite.tar) | ### Pedestrian Detection Model | Model | Input Size | mAP (COCO Val) | Average Inference Time (FP32)| Average Inference Time (FP16) | Config | Model Weights | Deployment Model | Paddle-Lite Model(FP32) | Paddle-Lite Model(FP16)| | :------------------------ | :-------: | :------: | :------: | :---: | :---: | :---: | :---: | :---: | :---: | -| PicoDet-S-Pedestrian | 192*192 | 29.0 | 4.30ms | 2.37ms | [Config](../../picodet/application/pedestrian_detection/picodet_s_192_pedestrian.yml) |[Model](https://bj.bcebos.com/v1/paddledet/models/keypoint/picodet_s_192_pedestrian.pdparams) | [Deployment Model](https://bj.bcebos.com/v1/paddledet/models/keypoint/picodet_s_192_pedestrian.tar) | [Lite Model](https://bj.bcebos.com/v1/paddledet/models/keypoint/picodet_s_192_pedestrian.tar) | [Lite Model(FP16)](https://bj.bcebos.com/v1/paddledet/models/keypoint/picodet_s_192_pedestrian_fp16.tar) | -| PicoDet-S-Pedestrian | 320*320 | 38.5 | 10.26ms | 6.30ms | [Config](../../picodet/application/pedestrian_detection/picodet_s_320_pedestrian.yml) | [Model](https://bj.bcebos.com/v1/paddledet/models/keypoint/picodet_s_320_pedestrian.pdparams) | [Deployment Model](https://bj.bcebos.com/v1/paddledet/models/keypoint/picodet_s_320_pedestrian.tar) | [Lite Model](https://bj.bcebos.com/v1/paddledet/models/keypoint/picodet_s_320_pedestrian.tar) | [Lite Model(FP16)](https://bj.bcebos.com/v1/paddledet/models/keypoint/picodet_s_320_pedestrian_fp16.tar) | +| PicoDet-S-Pedestrian | 192*192 | 29.0 | 4.30ms | 2.37ms | [Config](../../picodet/legacy_model/application/pedestrian_detection/picodet_s_192_pedestrian.yml) |[Model](https://bj.bcebos.com/v1/paddledet/models/keypoint/picodet_s_192_pedestrian.pdparams) | [Deployment Model](https://bj.bcebos.com/v1/paddledet/models/keypoint/picodet_s_192_pedestrian.tar) | [Lite Model](https://bj.bcebos.com/v1/paddledet/models/keypoint/picodet_s_192_pedestrian_lite.tar) | [Lite Model(FP16)](https://bj.bcebos.com/v1/paddledet/models/keypoint/picodet_s_192_pedestrian_fp16_lite.tar) | +| PicoDet-S-Pedestrian | 320*320 | 38.5 | 10.26ms | 6.30ms | [Config](../../picodet/legacy_model/application/pedestrian_detection/picodet_s_320_pedestrian.yml) | [Model](https://bj.bcebos.com/v1/paddledet/models/keypoint/picodet_s_320_pedestrian.pdparams) | [Deployment Model](https://bj.bcebos.com/v1/paddledet/models/keypoint/picodet_s_320_pedestrian.tar) | [Lite Model](https://bj.bcebos.com/v1/paddledet/models/keypoint/picodet_s_320_pedestrian_lite.tar) | [Lite 
Model(FP16)](https://bj.bcebos.com/v1/paddledet/models/keypoint/picodet_s_320_pedestrian_fp16_lite.tar) | **Tips** diff --git a/configs/mot/DataDownload.md b/configs/mot/DataDownload.md new file mode 100644 index 0000000000000000000000000000000000000000..d17cadf42631eed066700986d7c97ef04cd2a681 --- /dev/null +++ b/configs/mot/DataDownload.md @@ -0,0 +1,38 @@ +# 多目标跟踪数据集下载汇总 +## 目录 +- [行人跟踪](#行人跟踪) +- [车辆跟踪](#车辆跟踪) +- [人头跟踪](#人头跟踪) +- [多类别跟踪](#多类别跟踪) + +## 行人跟踪 + +| 数据集 | 下载链接 | 备注 | +| :-------------| :-------------| :----: | +| MOT17 | [download](https://dataset.bj.bcebos.com/mot/MOT17.zip) | - | +| MOT16 | [download](https://dataset.bj.bcebos.com/mot/MOT16.zip) | - | +| Caltech | [download](https://dataset.bj.bcebos.com/mot/Caltech.zip) | - | +| Cityscapes | [download](https://dataset.bj.bcebos.com/mot/Cityscapes.zip) | - | +| CUHKSYSU | [download](https://dataset.bj.bcebos.com/mot/CUHKSYSU.zip) | - | +| PRW | [download](https://dataset.bj.bcebos.com/mot/PRW.zip) | - | +| ETHZ | [download](https://dataset.bj.bcebos.com/mot/ETHZ.zip) | - | + +## 车辆跟踪 + +| 数据集 | 下载链接 | 备注 | +| :-------------| :-------------| :----: | +| AICity21 | [download](https://bj.bcebos.com/v1/paddledet/data/mot/aic21mtmct_vehicle.zip) | - | + + +## 人头跟踪 + +| 数据集 | 下载链接 | 备注 | +| :-------------| :-------------| :----: | +| HT21 | [download](https://bj.bcebos.com/v1/paddledet/data/mot/HT21.zip) | - | + + +## 多类别跟踪 + +| 数据集 | 下载链接 | 备注 | +| :-------------| :-------------| :----: | +| VisDrone-MOT | [download](https://bj.bcebos.com/v1/paddledet/data/mot/visdrone_mcmot.zip) | - | diff --git a/configs/mot/README.md b/configs/mot/README.md index 18284eb34c07682d5af98eb26d5e8886adc9bc81..f21d760be587a3aeb8a5395b3f7e95232f8172b6 100644 --- a/configs/mot/README.md +++ b/configs/mot/README.md @@ -5,33 +5,39 @@ ## 内容 - [简介](#简介) - [安装依赖](#安装依赖) -- [模型库](#模型库) -- [数据集准备](#数据集准备) +- [模型库和选型](#模型库和选型) +- [MOT数据集准备](#MOT数据集准备) + - [SDE数据集](#SDE数据集) + - [JDE数据集](#JDE数据集) + - [用户自定义数据集准备](#用户自定义数据集准备) - [引用](#引用) ## 简介 -当前主流的Tracking By Detecting方式的多目标追踪(Multi-Object Tracking, MOT)算法主要由两部分组成:Detection+Embedding。Detection部分即针对视频,检测出每一帧中的潜在目标。Embedding部分则将检出的目标分配和更新到已有的对应轨迹上(即ReID重识别任务)。根据这两部分实现的不同,又可以划分为**SDE**系列和**JDE**系列算法。 -- SDE(Separate Detection and Embedding)这类算法完全分离Detection和Embedding两个环节,最具代表性的就是**DeepSORT**算法。这样的设计可以使系统无差别的适配各类检测器,可以针对两个部分分别调优,但由于流程上是串联的导致速度慢耗时较长,在构建实时MOT系统中面临较大挑战。 +多目标跟踪(Multi-Object Tracking, MOT)是对给定视频或图片序列,定位出多个感兴趣的目标,并在连续帧之间维持个体的ID信息和记录其轨迹。 +当前主流的做法是Tracking By Detecting方式,算法主要由两部分组成:Detection + Embedding。Detection部分即针对视频,检测出每一帧中的潜在目标。Embedding部分则将检出的目标分配和更新到已有的对应轨迹上(即ReID重识别任务),进行物体间的长时序关联。根据这两部分实现的不同,又可以划分为**SDE**系列和**JDE**系列算法。 +- SDE(Separate Detection and Embedding)这类算法完全分离Detection和Embedding两个环节,最具代表性的是**DeepSORT**算法。这样的设计可以使系统无差别地适配各类检测器,可以针对两个部分分别调优,但由于流程上是串联的导致速度慢耗时较长。也有如**ByteTrack**这样的算法为了降低耗时,不使用Embedding特征来计算外观相似度,前提是检测器的精度足够高。 - JDE(Joint Detection and Embedding)这类算法完全是在一个共享神经网络中同时学习Detection和Embedding,使用一个多任务学习的思路设置损失函数。代表性的算法有**JDE**和**FairMOT**。这样的设计兼顾精度和速度,可以实现高精度的实时多目标跟踪。 -PaddleDetection实现了这两个系列的3种多目标跟踪算法,分别是SDE系列的[DeepSORT](https://arxiv.org/abs/1812.00442)和JDE系列的[JDE](https://arxiv.org/abs/1909.12605)与[FairMOT](https://arxiv.org/abs/2004.01888)。 +PaddleDetection中提供了SDE和JDE两个系列的多种算法实现: +- SDE + - [ByteTrack](./bytetrack) + - [DeepSORT](./deepsort) +- JDE + - [JDE](./jde) + - [FairMOT](./fairmot) + - [MCFairMOT](./mcfairmot) -### PP-Tracking 实时多目标跟踪系统
-此外,PaddleDetection还提供了[PP-Tracking](../../deploy/pptracking/README.md)实时多目标跟踪系统。PP-Tracking是基于PaddlePaddle深度学习框架的业界首个开源的实时多目标跟踪系统,具有模型丰富、应用广泛和部署高效三大优势。 -PP-Tracking支持单镜头跟踪(MOT)和跨镜头跟踪(MTMCT)两种模式,针对实际业务的难点和痛点,提供了行人跟踪、车辆跟踪、多类别跟踪、小目标跟踪、流量统计以及跨镜头跟踪等各种多目标跟踪功能和应用,部署方式支持API调用和GUI可视化界面,部署语言支持Python和C++,部署平台环境支持Linux、NVIDIA Jetson等。 - -### AI Studio公开项目案例 -PP-Tracking 提供了AI Studio公开项目案例,教程请参考[PP-Tracking之手把手玩转多目标跟踪](https://aistudio.baidu.com/aistudio/projectdetail/3022582)。 - -### Python端预测部署 -PP-Tracking 支持Python预测部署,教程请参考[PP-Tracking Python部署文档](../../deploy/pptracking/python/README.md)。 +**注意:** + - 以上算法原论文均为单类别的多目标跟踪,PaddleDetection团队同时也支持了[ByteTrack](./bytetrack)和FairMOT([MCFairMOT](./mcfairmot))的多类别的多目标跟踪; + - [DeepSORT](./deepsort)和[JDE](./jde)均只支持单类别的多目标跟踪; + - [DeepSORT](./deepsort)需要额外添加ReID权重一起执行,[ByteTrack](./bytetrack)可加可不加ReID权重,默认不加; -### C++端预测部署 -PP-Tracking 支持C++预测部署,教程请参考[PP-Tracking C++部署文档](../../deploy/pptracking/cpp/README.md)。 -### GUI可视化界面预测部署 -PP-Tracking 提供了简洁的GUI可视化界面,教程请参考[PP-Tracking可视化界面试用版使用文档](https://github.com/yangyudong2020/PP-Tracking_GUi)。 +### 实时多目标跟踪系统 PP-Tracking +PaddleDetection团队提供了实时多目标跟踪系统[PP-Tracking](../../deploy/pptracking),是基于PaddlePaddle深度学习框架的业界首个开源的实时多目标跟踪系统,具有模型丰富、应用广泛和部署高效三大优势。 +PP-Tracking支持单镜头跟踪(MOT)和跨镜头跟踪(MTMCT)两种模式,针对实际业务的难点和痛点,提供了行人跟踪、车辆跟踪、多类别跟踪、小目标跟踪、流量统计以及跨镜头跟踪等各种多目标跟踪功能和应用,部署方式支持API调用和GUI可视化界面,部署语言支持Python和C++,部署平台环境支持Linux、NVIDIA Jetson等。 +PP-Tracking单镜头跟踪采用的方案是[FairMOT](./fairmot),跨镜头跟踪采用的方案是[DeepSORT](./deepsort)。
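下面给出一个示意性的调用草图,演示PP-Tracking的Python端单镜头跟踪用法(假设已按上述文档导出FairMOT模型;脚本与参数请以PP-Tracking Python部署文档为准):

```bash
# 单镜头跟踪示意:假设FairMOT模型已导出至 output_inference/fairmot_dla34_30e_1088x608/
python deploy/pptracking/python/mot_jde_infer.py \
    --model_dir=output_inference/fairmot_dla34_30e_1088x608/ \
    --video_file=test_video.mp4 --device=GPU --save_mot_txts
```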
    @@ -43,20 +49,46 @@ PP-Tracking 提供了简洁的GUI可视化界面,教程请参考[PP-Tracking 视频来源:VisDrone和BDD100K公开数据集
    +#### AI Studio公开项目案例 +教程请参考[PP-Tracking之手把手玩转多目标跟踪](https://aistudio.baidu.com/aistudio/projectdetail/3022582)。 + +#### Python端预测部署 +教程请参考[PP-Tracking Python部署文档](../../deploy/pptracking/python/README.md)。 + +#### C++端预测部署 +教程请参考[PP-Tracking C++部署文档](../../deploy/pptracking/cpp/README.md)。 + +#### GUI可视化界面预测部署 +教程请参考[PP-Tracking可视化界面使用文档](https://github.com/yangyudong2020/PP-Tracking_GUi)。 + + +### 实时行人分析工具 PP-Human +PaddleDetection团队提供了实时行人分析工具[PP-Human](../../deploy/pipeline),是基于PaddlePaddle深度学习框架的业界首个开源的产业级实时行人分析工具,具有模型丰富、应用广泛和部署高效三大优势。 +PP-Human支持图片/单镜头视频/多镜头视频多种输入方式,功能覆盖多目标跟踪、属性识别、行为分析及人流量计数与轨迹记录。能够广泛应用于智慧交通、智慧社区、工业巡检等领域。支持服务器端部署及TensorRT加速,T4服务器上可达到实时。 +PP-Human跟踪采用的方案是[ByteTrack](./bytetrack)。 + +![](https://user-images.githubusercontent.com/48054808/173030254-ecf282bd-2cfe-43d5-b598-8fed29e22020.gif) + +#### AI Studio公开项目案例 +PP-Human实时行人分析全流程实战教程[链接](https://aistudio.baidu.com/aistudio/projectdetail/3842982)。 + +PP-Human赋能社区智能精细化管理教程[链接](https://aistudio.baidu.com/aistudio/projectdetail/3679564)。 + + ## 安装依赖 一键安装MOT相关的依赖: ``` -pip install lap sklearn motmetrics openpyxl cython_bbox -或者 pip install -r requirements.txt +# 或手动pip安装MOT相关的库 +pip install lap motmetrics sklearn filterpy ``` **注意:** -- `cython_bbox`在windows上安装:`pip install -e git+https://github.com/samson-wang/cython_bbox.git#egg=cython-bbox`。可参考这个[教程](https://stackoverflow.com/questions/60349980/is-there-a-way-to-install-cython-bbox-for-windows)。 -- 预测需确保已安装[ffmpeg](https://ffmpeg.org/ffmpeg.html), Linux(Ubuntu)平台可以直接用以下命令安装:`apt-get update && apt-get install -y ffmpeg`。 + - 预测需确保已安装[ffmpeg](https://ffmpeg.org/ffmpeg.html), Linux(Ubuntu)平台可以直接用以下命令安装:`apt-get update && apt-get install -y ffmpeg`。 + -## 模型库 +## 模型库和选型 - 基础模型 - [ByteTrack](bytetrack/README_cn.md) - [DeepSORT](deepsort/README_cn.md) @@ -71,25 +103,75 @@ pip install -r requirements.txt - 跨境头跟踪 - [跨境头跟踪](mtmct/README_cn.md) +### 模型选型总结 -## 数据集准备 -### MOT数据集 -PaddleDetection复现[JDE](https://github.com/Zhongdao/Towards-Realtime-MOT) 和[FairMOT](https://github.com/ifzhang/FairMOT),是使用的和他们相同的MIX数据集,包括**Caltech Pedestrian, CityPersons, CUHK-SYSU, PRW, ETHZ, MOT17和MOT16**。使用前6者作为联合数据集参与训练,MOT16作为评测数据集。如果您想使用这些数据集,请**遵循他们的License**。 +关于模型选型,PaddleDetection团队提供的总结建议如下: + +| MOT方式 | 经典算法 | 算法流程 | 数据集要求 | 其他特点 | +| :--------------| :--------------| :------- | :----: | :----: | +| SDE系列 | DeepSORT,ByteTrack | 分离式,两个独立模型权重先检测后ReID,也可不加ReID | 检测和ReID数据相对独立,不加ReID时即纯检测数据集 |检测和ReID可分别调优,鲁棒性较高,AI竞赛常用| +| JDE系列 | FairMOT | 联合式,一个模型权重端到端同时检测和ReID | 必须同时具有检测和ReID标注 | 检测和ReID联合训练,不易调优,泛化性不强| **注意:** -- 多目标跟踪数据集一般是用于单类别的多目标跟踪,DeepSORT、JDE和FairMOT均为单类别跟踪模型,MIX数据集以及其子数据集也都是单类别的行人跟踪数据集,可认为相比于行人检测数据集多了id号的标注。 -- 为了训练更多场景的垂类模型例如车辆等,垂类数据集也需要处理成与MIX数据集相同的格式,PaddleDetection也提供了[车辆跟踪](vehicle/README_cn.md)、[人头跟踪](headtracking21/README_cn.md)以及更通用的[行人跟踪](pedestrian/README_cn.md)的垂类数据集和模型。用户自定义数据集也可参照[数据准备文档](../../docs/tutorials/PrepareMOTDataSet_cn.md)去准备。 -- 多类别跟踪模型是[MCFairMOT](mcfairmot/README_cn.md),多类别数据集是VisDrone数据集的整合版,可参照[MCFairMOT](mcfairmot/README_cn.md)的文档说明。 -- 跨镜头跟踪模型,是选用的[AIC21 MTMCT](https://www.aicitychallenge.org) (CityFlow)车辆跨镜头跟踪数据集,数据集和模型可参照[跨境头跟踪](mtmct/README_cn.md)的文档说明。 + - 由于数据标注的成本较大,建议选型前优先考虑**数据集要求**,如果数据集只有检测框标注而没有ReID标注,是无法使用JDE系列算法训练的,更推荐使用SDE系列; + - SDE系列算法在检测器精度足够高时,也可以不使用ReID权重进行物体间的长时序关联,可以参照[ByteTrack](bytetrack); + - 耗时速度和模型权重参数量计算量有一定关系,耗时从理论上看`不使用ReID的SDE系列 < JDE系列 < 使用ReID的SDE系列`; + + + +## MOT数据集准备 +PaddleDetection团队提供了众多公开数据集或整理后数据集的下载链接,参考[数据集下载汇总](DataDownload.md),用户可以自行下载使用。 + 
+根据模型选型总结,MOT数据集可以分为两类:一类纯检测框标注的数据集,仅SDE系列可以使用;另一类是同时有检测和ReID标注的数据集,SDE系列和JDE系列都可以使用。 -### 数据集目录 -首先按照以下命令下载image_lists.zip并解压放在`PaddleDetection/dataset/mot`目录下: +### SDE数据集 +SDE数据集是纯检测标注的数据集,用户自定义数据集可以参照[DET数据准备文档](../../docs/tutorials/data/PrepareDetDataSet.md)准备。 + +以MOT17数据集为例,下载并解压放在`PaddleDetection/dataset/mot`目录下: +``` +wget https://dataset.bj.bcebos.com/mot/MOT17.zip + +``` +并修改数据集部分的配置文件如下: +``` +num_classes: 1 + +TrainDataset: + !COCODataSet + dataset_dir: dataset/mot/MOT17 + anno_path: annotations/train_half.json + image_dir: images/train + data_fields: ['image', 'gt_bbox', 'gt_class', 'is_crowd'] + +EvalDataset: + !COCODataSet + dataset_dir: dataset/mot/MOT17 + anno_path: annotations/val_half.json + image_dir: images/train + +TestDataset: + !ImageFolder + dataset_dir: dataset/mot/MOT17 + anno_path: annotations/val_half.json +``` + +数据集目录为: +``` +dataset/mot + |——————MOT17 + |——————annotations + |——————images +``` + +### JDE数据集 +JDE数据集是同时有检测和ReID标注的数据集,首先按照以下命令`image_lists.zip`并解压放在`PaddleDetection/dataset/mot`目录下: ``` wget https://dataset.bj.bcebos.com/mot/image_lists.zip ``` -然后按照以下命令可以快速下载MIX数据集的各个子数据集,并解压放在`PaddleDetection/dataset/mot`目录下: +然后按照以下命令可以快速下载各个公开数据集,也解压放在`PaddleDetection/dataset/mot`目录下: ``` +# MIX数据,同JDE,FairMOT论文使用的数据集 wget https://dataset.bj.bcebos.com/mot/MOT17.zip wget https://dataset.bj.bcebos.com/mot/Caltech.zip wget https://dataset.bj.bcebos.com/mot/CUHKSYSU.zip @@ -98,24 +180,17 @@ wget https://dataset.bj.bcebos.com/mot/Cityscapes.zip wget https://dataset.bj.bcebos.com/mot/ETHZ.zip wget https://dataset.bj.bcebos.com/mot/MOT16.zip ``` - -最终目录为: +数据集目录为: ``` dataset/mot |——————image_lists - |——————caltech.10k.val |——————caltech.all - |——————caltech.train - |——————caltech.val |——————citypersons.train - |——————citypersons.val |——————cuhksysu.train - |——————cuhksysu.val |——————eth.train |——————mot16.train |——————mot17.train |——————prw.train - |——————prw.val |——————Caltech |——————Cityscapes |——————CUHKSYSU @@ -125,7 +200,7 @@ dataset/mot |——————PRW ``` -### 数据格式 +#### JDE数据集的格式 这几个相关数据集都遵循以下结构: ``` MOT17 @@ -139,11 +214,20 @@ MOT17 ``` [class] [identity] [x_center] [y_center] [width] [height] ``` -**注意**: -- `class`为类别id,支持单类别和多类别,从`0`开始计,单类别即为`0`。 -- `identity`是从`1`到`num_identities`的整数(`num_identities`是数据集中所有视频或图片序列的不同物体实例的总数),如果此框没有`identity`标注,则为`-1`。 -- `[x_center] [y_center] [width] [height]`是中心点坐标和宽高,注意他们的值是由图片的宽度/高度标准化的,因此它们是从0到1的浮点数。 + - `class`为类别id,支持单类别和多类别,从`0`开始计,单类别即为`0`。 + - `identity`是从`1`到`num_identities`的整数(`num_identities`是数据集中所有视频或图片序列的不同物体实例的总数),如果此框没有`identity`标注,则为`-1`。 + - `[x_center] [y_center] [width] [height]`是中心点坐标和宽高,注意他们的值是由图片的宽度/高度标准化的,因此它们是从0到1的浮点数。 + + +**注意:** + - MIX数据集是[JDE](https://github.com/Zhongdao/Towards-Realtime-MOT)和[FairMOT](https://github.com/ifzhang/FairMOT)原论文使用的数据集,包括**Caltech Pedestrian, CityPersons, CUHK-SYSU, PRW, ETHZ, MOT17和MOT16**。使用前6者作为联合数据集参与训练,MOT16作为评测数据集。如果您想使用这些数据集,请**遵循他们的License**。 + - MIX数据集以及其子数据集都是单类别的行人跟踪数据集,可认为相比于行人检测数据集多了id号的标注。 + - 更多场景的垂类模型例如车辆行人人头跟踪等,垂类数据集也需要处理成与MIX数据集相同的格式,参照[数据集下载汇总](DataDownload.md)、[车辆跟踪](vehicle/README_cn.md)、[人头跟踪](headtracking21/README_cn.md)以及更通用的[行人跟踪](pedestrian/README_cn.md)。 + - 用户自定义数据集可参照[MOT数据集准备教程](../../docs/tutorials/PrepareMOTDataSet_cn.md)去准备。 + +### 用户自定义数据集准备 +用户自定义数据集准备请参考[MOT数据集准备教程](../../docs/tutorials/PrepareMOTDataSet_cn.md)去准备。 ## 引用 ``` diff --git a/configs/mot/README_en.md b/configs/mot/README_en.md index e23bc451b36cb100440d47e90351eb6d22879983..eafd462ce782ab19e3f53cc69ebdd25a3d74701e 100644 --- a/configs/mot/README_en.md +++ 
b/configs/mot/README_en.md @@ -49,12 +49,11 @@ PP-Tracking supports GUI predict and deployment. Please refer to this [doc](http ## Installation Install all the related dependencies for MOT: ``` -pip install lap sklearn motmetrics openpyxl cython_bbox +pip install lap motmetrics sklearn filterpy or pip install -r requirements.txt ``` **Notes:** -- Install `cython_bbox` for Windows: `pip install -e git+https://github.com/samson-wang/cython_bbox.git#egg=cython-bbox`. You can refer to this [tutorial](https://stackoverflow.com/questions/60349980/is-there-a-way-to-install-cython-bbox-for-windows). - Please make sure that [ffmpeg](https://ffmpeg.org/ffmpeg.html) is installed first, on Linux(Ubuntu) platform you can directly install it by the following command:`apt-get update && apt-get install -y ffmpeg`. @@ -80,7 +79,7 @@ PaddleDetection implement [JDE](https://github.com/Zhongdao/Towards-Realtime-MOT **Notes:** - Multi-Object Tracking(MOT) datasets are always used for single category tracking. DeepSORT, JDE and FairMOT are single category MOT models. 'MIX' dataset and it's sub datasets are also single category pedestrian tracking datasets. It can be considered that there are additional IDs ground truth for detection datasets. -- In order to train the feature models of more scenes, more datasets are also processed into the same format as the MIX dataset. PaddleDetection Team also provides feature datasets and models of [vehicle tracking](vehicle/readme.md), [head tracking](headtracking21/readme.md) and more general [pedestrian tracking](pedestrian/readme.md). User defined datasets can also be prepared by referring to data preparation [doc](../../docs/tutorials/PrepareMOTDataSet.md). +- In order to train the feature models of more scenes, more datasets are also processed into the same format as the MIX dataset. PaddleDetection Team also provides feature datasets and models of [vehicle tracking](vehicle/README.md), [head tracking](headtracking21/README.md) and more general [pedestrian tracking](pedestrian/README.md). User defined datasets can also be prepared by referring to data preparation [doc](../../docs/tutorials/data/PrepareMOTDataSet.md). - The multipe category MOT model is [MCFairMOT] (mcfairmot/readme_cn.md), and the multi category dataset is the integrated version of VisDrone dataset. Please refer to the doc of [MCFairMOT](mcfairmot/README.md). - The Multi-Target Multi-Camera Tracking (MTMCT) model is [AIC21 MTMCT](https://www.aicitychallenge.org)(CityFlow) Multi-Camera Vehicle Tracking dataset. 
The dataset and model can refer to the doc of [MTMCT](mtmct/README.md) diff --git a/configs/mot/bytetrack/README_cn.md b/configs/mot/bytetrack/README_cn.md index 242319c08332eedc66c6a9f72c85e6f7e6731d87..8e93134f83bac05abfa85ea8551f9571913df0f9 100644 --- a/configs/mot/bytetrack/README_cn.md +++ b/configs/mot/bytetrack/README_cn.md @@ -13,18 +13,38 @@ ## 模型库 -### ByteTrack在MOT-17 half Val Set上结果 +### 基于不同检测器的ByteTrack在MOT-17 half Val Set上结果 -| 检测训练数据集 | 检测器 | 输入尺度 | ReID | 检测mAP | MOTA | IDF1 | FPS | 配置文件 | +| 检测训练数据集 | 检测器 | 输入尺度 | ReID | 检测mAP(0.5:0.95) | MOTA | IDF1 | FPS | 配置文件 | | :-------- | :----- | :----: | :----:|:------: | :----: |:-----: |:----:|:----: | | MOT-17 half train | YOLOv3 | 608x608 | - | 42.7 | 49.5 | 54.8 | - |[配置文件](./bytetrack_yolov3.yml) | -| MOT-17 half train | PPYOLOe | 640x640 | - | 52.9 | 50.4 | 59.7 | - |[配置文件](./bytetrack_ppyoloe.yml) | -| MOT-17 half train | PPYOLOe | 640x640 |PPLCNet| 52.9 | 51.7 | 58.8 | - |[配置文件](./bytetrack_ppyoloe_pplcnet.yml) | +| MOT-17 half train | PP-YOLOE-l | 640x640 | - | 52.9 | 50.4 | 59.7 | - |[配置文件](./bytetrack_ppyoloe.yml) | +| MOT-17 half train | PP-YOLOE-l | 640x640 |PPLCNet| 52.9 | 51.7 | 58.8 | - |[配置文件](./bytetrack_ppyoloe_pplcnet.yml) | +| **mix_mot_ch** | YOLOX-x | 800x1440| - | 61.9 | 77.3 | 71.6 | - |[配置文件](./bytetrack_yolox.yml) | +| **mix_det** | YOLOX-x | 800x1440| - | 65.4 | 84.5 | 77.4 | - |[配置文件](./bytetrack_yolox.yml) | **注意:** -- 模型权重下载链接在配置文件中的```det_weights```和```reid_weights```,运行验证的命令即可自动下载。 -- ByteTrack的训练是单独的检测器训练MOT数据集,推理是组装跟踪器去评估MOT指标,单独的检测模型也可以评估检测指标。 -- ByteTrack的导出部署,是单独导出检测模型,再组装跟踪器运行的,参照[PP-Tracking](../../../deploy/pptracking/python/README.md)。 + - 检测任务相关配置和文档请查看[detector](detector/) + + +### YOLOX-x ByteTrack(mix_det) + +[![PWC](https://img.shields.io/endpoint.svg?url=https://paperswithcode.com/badge/pp-yoloe-an-evolved-version-of-yolo/multi-object-tracking-on-mot16)](https://paperswithcode.com/sota/multi-object-tracking-on-mot16?p=pp-yoloe-an-evolved-version-of-yolo) + +| 网络 | 测试集 | MOTA | IDF1 | IDS | FP | FN | FPS | 下载链接 | 配置文件 | +| :---------: | :-------: | :----: | :----: | :----: | :----: | :----: | :------: | :----: |:-----: | +| ByteTrack-x| MOT-17 Train | 84.4 | 72.8 | 837 | 5653 | 10985 | - |[下载链接](https://paddledet.bj.bcebos.com/models/mot/yolox_x_24e_800x1440_mix_det.pdparams) | [配置文件](./bytetrack_yolox.yml) | +| ByteTrack-x| MOT-17 Test | 78.4 | 69.7 | 4974 | 37551 | 79524 | - |[下载链接](https://paddledet.bj.bcebos.com/models/mot/yolox_x_24e_800x1440_mix_det.pdparams) | [配置文件](./bytetrack_yolox.yml) | +| ByteTrack-x| MOT-16 Train | 83.5 | 72.7 | 800 | 6973 | 10419 | - |[下载链接](https://paddledet.bj.bcebos.com/models/mot/yolox_x_24e_800x1440_mix_det.pdparams) | [配置文件](./bytetrack_yolox.yml) | +| ByteTrack-x| MOT-16 Test | 77.7 | 70.1 | 1570 | 15695 | 23304 | - |[下载链接](https://paddledet.bj.bcebos.com/models/mot/yolox_x_24e_800x1440_mix_det.pdparams) | [配置文件](./bytetrack_yolox.yml) | + + +**注意:** + - 模型权重下载链接在配置文件中的```det_weights```和```reid_weights```,运行```tools/eval_mot.py```评估的命令即可自动下载,```reid_weights```若为None则表示不需要使用,ByteTrack默认不使用ReID权重。 + - **MOT17-half train**是MOT17的train序列(共7个)每个视频的前一半帧的图片和标注组成的数据集,而为了验证精度可以都用**MOT17-half val**数据集去评估,它是每个视频的后一半帧组成的,数据集可以从[此链接](https://dataset.bj.bcebos.com/mot/MOT17.zip)下载,并解压放在`dataset/mot/`文件夹下。 + - **mix_mot_ch**数据集,是MOT17、CrowdHuman组成的联合数据集,**mix_det**是MOT17、CrowdHuman、Cityscapes、ETHZ组成的联合数据集,数据集整理的格式和目录可以参考[此链接](https://github.com/ifzhang/ByteTrack#data-preparation),最终放置于`dataset/mot/`目录下。为了验证精度可以都用**MOT17-half val**数据集去评估。 + - 
ByteTrack的训练是单独的检测器训练MOT数据集,推理是组装跟踪器去评估MOT指标,单独的检测模型也可以评估检测指标。 + - ByteTrack的导出部署,是单独导出检测模型,再组装跟踪器运行的,参照[PP-Tracking](../../../deploy/pptracking/python/README.md)。 ## 快速开始 @@ -32,13 +52,20 @@ ### 1. 训练 通过如下命令一键式启动训练和评估 ```bash -python -m paddle.distributed.launch --log_dir=ppyoloe --gpus 0,1,2,3,4,5,6,7 tools/train.py -c configs/mot/bytetrack/detector/ppyoloe_crn_l_36e_640x640_mot17half.yml --eval --amp --fleet +python -m paddle.distributed.launch --log_dir=ppyoloe --gpus 0,1,2,3,4,5,6,7 tools/train.py -c configs/mot/bytetrack/detector/ppyoloe_crn_l_36e_640x640_mot17half.yml --eval --amp +# 或者 +python -m paddle.distributed.launch --log_dir=ppyoloe --gpus 0,1,2,3,4,5,6,7 tools/train.py -c configs/mot/bytetrack/detector/yolox_x_24e_800x1440_mix_det.yml --eval --amp ``` +**注意:** + - ` --eval`是边训练边验证精度;`--amp`是混合精度训练避免溢出,推荐使用paddlepaddle2.2.2版本。 + ### 2. 评估 #### 2.1 评估检测效果 ```bash -CUDA_VISIBLE_DEVICES=0 python tools/eval.py -c configs/mot/bytetrack/detector/ppyoloe_crn_l_36e_640x640_mot17half.yml +CUDA_VISIBLE_DEVICES=0 python tools/eval.py -c configs/mot/bytetrack/detector/ppyoloe_crn_l_36e_640x640_mot17half.yml -o weights=https://bj.bcebos.com/v1/paddledet/models/mot/ppyoloe_crn_l_36e_640x640_mot17half.pdparams +# 或者 +CUDA_VISIBLE_DEVICES=0 python tools/eval.py -c configs/mot/bytetrack/detector/yolox_x_24e_800x1440_mix_det.yml -o weights=https://bj.bcebos.com/v1/paddledet/models/mot/yolox_x_24e_800x1440_mix_det.pdparams ``` **注意:** @@ -51,30 +78,41 @@ CUDA_VISIBLE_DEVICES=0 python tools/eval_mot.py -c configs/mot/bytetrack/bytetra CUDA_VISIBLE_DEVICES=0 python tools/eval_mot.py -c configs/mot/bytetrack/bytetrack_ppyoloe.yml --scaled=True # 或者 CUDA_VISIBLE_DEVICES=0 python tools/eval_mot.py -c configs/mot/bytetrack/bytetrack_ppyoloe_pplcnet.yml --scaled=True +# 或者 +CUDA_VISIBLE_DEVICES=0 python tools/eval_mot.py -c configs/mot/bytetrack/bytetrack_yolox.yml --scaled=True ``` **注意:** - `--scaled`表示在模型输出结果的坐标是否已经是缩放回原图的,如果使用的检测模型是JDE YOLOv3则为False,如果使用通用检测模型则为True, 默认值是False。 - - 跟踪结果会存于`{output_dir}/mot_results/`中,里面每个视频序列对应一个txt,每个txt文件每行信息是`frame,id,x1,y1,w,h,score,-1,-1,-1`, 此外`{output_dir}`可通过`--output_dir`设置。 + - 跟踪结果会存于`{output_dir}/mot_results/`中,里面每个视频序列对应一个txt,每个txt文件每行信息是`frame,id,x1,y1,w,h,score,-1,-1,-1`, 此外`{output_dir}`可通过`--output_dir`设置,默认文件夹名为`output`。 ### 3. 预测 使用单个GPU通过如下命令预测一个视频,并保存为视频 ```bash -CUDA_VISIBLE_DEVICES=0 python tools/infer_mot.py -c configs/mot/bytetrack/bytetrack_ppyoloe.yml --video_file={your video name}.mp4 --scaled=True --save_videos +# 下载demo视频 +wget https://bj.bcebos.com/v1/paddledet/data/mot/demo/mot17_demo.mp4 + +# 使用PPYOLOe行人检测模型 +CUDA_VISIBLE_DEVICES=0 python tools/infer_mot.py -c configs/mot/bytetrack/bytetrack_ppyoloe.yml --video_file=mot17_demo.mp4 --scaled=True --save_videos +# 或者使用YOLOX行人检测模型 +CUDA_VISIBLE_DEVICES=0 python tools/infer_mot.py -c configs/mot/bytetrack/bytetrack_yolox.yml --video_file=mot17_demo.mp4 --scaled=True --save_videos ``` **注意:** - 请先确保已经安装了[ffmpeg](https://ffmpeg.org/ffmpeg.html), Linux(Ubuntu)平台可以直接用以下命令安装:`apt-get update && apt-get install -y ffmpeg`。 - `--scaled`表示在模型输出结果的坐标是否已经是缩放回原图的,如果使用的检测模型是JDE的YOLOv3则为False,如果使用通用检测模型则为True。 + - `--save_videos`表示保存可视化视频,同时会保存可视化的图片在`{output_dir}/mot_outputs/`中,`{output_dir}`可通过`--output_dir`设置,默认文件夹名为`output`。 ### 4. 
导出预测模型 Step 1:导出检测模型 ```bash -# 导出PPYOLe行人检测模型 +# 导出PPYOLOe行人检测模型 CUDA_VISIBLE_DEVICES=0 python tools/export_model.py -c configs/mot/bytetrack/detector/ppyoloe_crn_l_36e_640x640_mot17half.yml -o weights=https://paddledet.bj.bcebos.com/models/mot/ppyoloe_crn_l_36e_640x640_mot17half.pdparams +# 或者导出YOLOX行人检测模型 +CUDA_VISIBLE_DEVICES=0 python tools/export_model.py -c configs/mot/bytetrack/detector/yolox_x_24e_800x1440_mix_det.yml -o weights=https://paddledet.bj.bcebos.com/models/mot/yolox_x_24e_800x1440_mix_det.pdparams ``` Step 2:导出ReID模型(可选步骤,默认不需要) @@ -83,15 +121,17 @@ Step 2:导出ReID模型(可选步骤,默认不需要) CUDA_VISIBLE_DEVICES=0 python tools/export_model.py -c configs/mot/deepsort/reid/deepsort_pplcnet.yml -o reid_weights=https://paddledet.bj.bcebos.com/models/mot/deepsort/deepsort_pplcnet.pdparams ``` -### 4. 用导出的模型基于Python去预测 +### 5. 用导出的模型基于Python去预测 ```bash -python deploy/pptracking/python/mot_sde_infer.py --model_dir=output_inference/ppyoloe_crn_l_36e_640x640_mot17half/ --tracker_config=tracker_config.yml --video_file={your video name}.mp4 --device=GPU --scaled=True --save_mot_txts +python deploy/pptracking/python/mot_sde_infer.py --model_dir=output_inference/ppyoloe_crn_l_36e_640x640_mot17half/ --tracker_config=deploy/pptracking/python/tracker_config.yml --video_file=mot17_demo.mp4 --device=GPU --save_mot_txts +# 或者 +python deploy/pptracking/python/mot_sde_infer.py --model_dir=output_inference/yolox_x_24e_800x1440_mix_det/ --tracker_config=deploy/pptracking/python/tracker_config.yml --video_file=mot17_demo.mp4 --device=GPU --save_mot_txts ``` + **注意:** - 跟踪模型是对视频进行预测,不支持单张图的预测,默认保存跟踪结果可视化后的视频,可添加`--save_mot_txts`(对每个视频保存一个txt)或`--save_mot_txt_per_img`(对每张图片保存一个txt)表示保存跟踪结果的txt文件,或`--save_images`表示保存跟踪结果可视化图片。 - 跟踪结果txt文件每行信息是`frame,id,x1,y1,w,h,score,-1,-1,-1`。 - - `--scaled`表示在模型输出结果的坐标是否已经是缩放回原图的,如果使用的检测模型是JDE的YOLOv3则为False,如果使用通用检测模型则为True。 ## 引用 diff --git a/configs/mot/bytetrack/_base_/ht21.yml b/configs/mot/bytetrack/_base_/ht21.yml new file mode 100644 index 0000000000000000000000000000000000000000..8500af3165e1173cc442396ace1af54f09ab810a --- /dev/null +++ b/configs/mot/bytetrack/_base_/ht21.yml @@ -0,0 +1,34 @@ +metric: COCO +num_classes: 1 + +# Detection Dataset for training +TrainDataset: + !COCODataSet + image_dir: images/train + anno_path: annotations/train.json + dataset_dir: dataset/mot/HT21 + data_fields: ['image', 'gt_bbox', 'gt_class', 'is_crowd'] + +EvalDataset: + !COCODataSet + image_dir: images/train + anno_path: annotations/val_half.json + dataset_dir: dataset/mot/HT21 + +TestDataset: + !ImageFolder + dataset_dir: dataset/mot/HT21 + anno_path: annotations/val_half.json + + +# MOTDataset for MOT evaluation and inference +EvalMOTDataset: + !MOTImageFolder + dataset_dir: dataset/mot + data_root: HT21/images/test + keep_ori_im: True # set as True in DeepSORT and ByteTrack + +TestMOTDataset: + !MOTImageFolder + dataset_dir: dataset/mot + keep_ori_im: True # set True if save visualization images or video diff --git a/configs/mot/bytetrack/_base_/mix_det.yml b/configs/mot/bytetrack/_base_/mix_det.yml new file mode 100644 index 0000000000000000000000000000000000000000..fbe19bdaa29246919189d5d93a3ea01e3734b52c --- /dev/null +++ b/configs/mot/bytetrack/_base_/mix_det.yml @@ -0,0 +1,34 @@ +metric: COCO +num_classes: 1 + +# Detection Dataset for training +TrainDataset: + !COCODataSet + image_dir: "" + anno_path: annotations/train.json + dataset_dir: dataset/mot/mix_det + data_fields: ['image', 'gt_bbox', 'gt_class', 'is_crowd'] + +EvalDataset: + !COCODataSet + image_dir: images/train 
+ anno_path: annotations/val_half.json + dataset_dir: dataset/mot/MOT17 + +TestDataset: + !ImageFolder + anno_path: annotations/val_half.json + dataset_dir: dataset/mot/MOT17 + + +# MOTDataset for MOT evaluation and inference +EvalMOTDataset: + !MOTImageFolder + dataset_dir: dataset/mot + data_root: MOT17/images/half + keep_ori_im: True # set as True in DeepSORT and ByteTrack + +TestMOTDataset: + !MOTImageFolder + dataset_dir: dataset/mot + keep_ori_im: True # set True if save visualization images or video diff --git a/configs/mot/bytetrack/_base_/mix_mot_ch.yml b/configs/mot/bytetrack/_base_/mix_mot_ch.yml new file mode 100644 index 0000000000000000000000000000000000000000..a19f149301a1d993c552a12e60144f63990d6f4d --- /dev/null +++ b/configs/mot/bytetrack/_base_/mix_mot_ch.yml @@ -0,0 +1,34 @@ +metric: COCO +num_classes: 1 + +# Detection Dataset for training +TrainDataset: + !COCODataSet + image_dir: "" + anno_path: annotations/train.json + dataset_dir: dataset/mot/mix_mot_ch + data_fields: ['image', 'gt_bbox', 'gt_class', 'is_crowd'] + +EvalDataset: + !COCODataSet + image_dir: images/train + anno_path: annotations/val_half.json + dataset_dir: dataset/mot/MOT17 + +TestDataset: + !ImageFolder + anno_path: annotations/val_half.json + dataset_dir: dataset/mot/MOT17 + + +# MOTDataset for MOT evaluation and inference +EvalMOTDataset: + !MOTImageFolder + dataset_dir: dataset/mot + data_root: MOT17/images/half + keep_ori_im: True # set as True in DeepSORT and ByteTrack + +TestMOTDataset: + !MOTImageFolder + dataset_dir: dataset/mot + keep_ori_im: True # set True if save visualization images or video diff --git a/configs/mot/bytetrack/_base_/mot17.yml b/configs/mot/bytetrack/_base_/mot17.yml index 2efa55546026168c39396c4d51a71428e19a0638..faf47f622d1c2847a9686dfa8d7e48a49c05436c 100644 --- a/configs/mot/bytetrack/_base_/mot17.yml +++ b/configs/mot/bytetrack/_base_/mot17.yml @@ -17,6 +17,7 @@ EvalDataset: TestDataset: !ImageFolder + dataset_dir: dataset/mot/MOT17 anno_path: annotations/val_half.json diff --git a/configs/mot/bytetrack/_base_/ppyoloe_mot_reader_640x640.yml b/configs/mot/bytetrack/_base_/ppyoloe_mot_reader_640x640.yml index c1e7ab8418956810ae6d2788cd4d67b9f2e17775..ef6342fd0e9249acf386b7795cb538b73a26f108 100644 --- a/configs/mot/bytetrack/_base_/ppyoloe_mot_reader_640x640.yml +++ b/configs/mot/bytetrack/_base_/ppyoloe_mot_reader_640x640.yml @@ -1,4 +1,8 @@ -worker_num: 8 +worker_num: 4 +eval_height: &eval_height 640 +eval_width: &eval_width 640 +eval_size: &eval_size [*eval_height, *eval_width] + TrainReader: sample_transforms: - Decode: {} @@ -20,17 +24,17 @@ TrainReader: EvalReader: sample_transforms: - Decode: {} - - Resize: {target_size: [640, 640], keep_ratio: False, interp: 2} + - Resize: {target_size: *eval_size, keep_ratio: False, interp: 2} - NormalizeImage: {mean: [0.485, 0.456, 0.406], std: [0.229, 0.224, 0.225], is_scale: True} - Permute: {} batch_size: 8 TestReader: inputs_def: - image_shape: [3, 640, 640] + image_shape: [3, *eval_height, *eval_width] sample_transforms: - Decode: {} - - Resize: {target_size: [640, 640], keep_ratio: False, interp: 2} + - Resize: {target_size: *eval_size, keep_ratio: False, interp: 2} - NormalizeImage: {mean: [0.485, 0.456, 0.406], std: [0.229, 0.224, 0.225], is_scale: True} - Permute: {} batch_size: 1 @@ -40,17 +44,17 @@ TestReader: EvalMOTReader: sample_transforms: - Decode: {} - - Resize: {target_size: [640, 640], keep_ratio: False, interp: 2} + - Resize: {target_size: *eval_size, keep_ratio: False, interp: 2} - NormalizeImage: {mean: 
[0.485, 0.456, 0.406], std: [0.229, 0.224, 0.225], is_scale: True} - Permute: {} batch_size: 1 TestMOTReader: inputs_def: - image_shape: [3, 640, 640] + image_shape: [3, *eval_height, *eval_width] sample_transforms: - Decode: {} - - Resize: {target_size: [640, 640], keep_ratio: False, interp: 2} + - Resize: {target_size: *eval_size, keep_ratio: False, interp: 2} - NormalizeImage: {mean: [0.485, 0.456, 0.406], std: [0.229, 0.224, 0.225], is_scale: True} - Permute: {} batch_size: 1 diff --git a/configs/mot/bytetrack/_base_/yolox_mot_reader_800x1440.yml b/configs/mot/bytetrack/_base_/yolox_mot_reader_800x1440.yml new file mode 100644 index 0000000000000000000000000000000000000000..48d4144221f6fa353af90ce3781a21329a566751 --- /dev/null +++ b/configs/mot/bytetrack/_base_/yolox_mot_reader_800x1440.yml @@ -0,0 +1,67 @@ + +input_height: &input_height 800 +input_width: &input_width 1440 +input_size: &input_size [*input_height, *input_width] + +worker_num: 4 +TrainReader: + sample_transforms: + - Decode: {} + - Mosaic: + prob: 1.0 + input_dim: *input_size + degrees: [-10, 10] + scale: [0.1, 2.0] + shear: [-2, 2] + translate: [-0.1, 0.1] + enable_mixup: True + mixup_prob: 1.0 + mixup_scale: [0.5, 1.5] + - AugmentHSV: {is_bgr: False, hgain: 5, sgain: 30, vgain: 30} + - PadResize: {target_size: *input_size} + - RandomFlip: {} + batch_transforms: + - Permute: {} + batch_size: 6 + shuffle: True + drop_last: True + collate_batch: False + mosaic_epoch: 20 + +EvalReader: + sample_transforms: + - Decode: {} + - Resize: {target_size: *input_size, keep_ratio: True} + - Pad: {size: *input_size, fill_value: [114., 114., 114.]} + - Permute: {} + batch_size: 8 + +TestReader: + inputs_def: + image_shape: [3, 800, 1440] + sample_transforms: + - Decode: {} + - Resize: {target_size: *input_size, keep_ratio: True} + - Pad: {size: *input_size, fill_value: [114., 114., 114.]} + - Permute: {} + batch_size: 1 + + +# add MOTReader for MOT evaluation and inference, note batch_size should be 1 in MOT +EvalMOTReader: + sample_transforms: + - Decode: {} + - Resize: {target_size: *input_size, keep_ratio: True} + - Pad: {size: *input_size, fill_value: [114., 114., 114.]} + - Permute: {} + batch_size: 1 + +TestMOTReader: + inputs_def: + image_shape: [3, 800, 1440] + sample_transforms: + - Decode: {} + - Resize: {target_size: *input_size, keep_ratio: True} + - Pad: {size: *input_size, fill_value: [114., 114., 114.]} + - Permute: {} + batch_size: 1 diff --git a/configs/mot/bytetrack/bytetrack_ppyoloe.yml b/configs/mot/bytetrack/bytetrack_ppyoloe.yml index 08b7a00d89b79ad1bd1e2753738f22fcc66e657c..5e7ffe07f0f758c641596e90ee0da4c31085fd85 100644 --- a/configs/mot/bytetrack/bytetrack_ppyoloe.yml +++ b/configs/mot/bytetrack/bytetrack_ppyoloe.yml @@ -8,7 +8,7 @@ weights: output/bytetrack_ppyoloe/model_final log_iter: 20 snapshot_epoch: 2 -metric: MOT # eval/infer mode +metric: MOT # eval/infer mode, set 'COCO' can be training mode num_classes: 1 architecture: ByteTrack @@ -33,7 +33,6 @@ PPYOLOEHead: grid_cell_offset: 0.5 static_assigner_epoch: -1 # 100 use_varifocal_loss: True - eval_input_size: [640, 640] loss_weight: {class: 1.0, iou: 2.5, dfl: 0.5} static_assigner: name: ATSSAssigner diff --git a/configs/mot/bytetrack/bytetrack_ppyoloe_pplcnet.yml b/configs/mot/bytetrack/bytetrack_ppyoloe_pplcnet.yml index 98ea15f15299b9d550bc4de1a53fe203e7cd61fc..60f81165d5b324943a997dbc26fbe56f249f2ef6 100644 --- a/configs/mot/bytetrack/bytetrack_ppyoloe_pplcnet.yml +++ b/configs/mot/bytetrack/bytetrack_ppyoloe_pplcnet.yml @@ -33,7 +33,6 @@ 
PPYOLOEHead: grid_cell_offset: 0.5 static_assigner_epoch: -1 # 100 use_varifocal_loss: True - eval_input_size: [640, 640] loss_weight: {class: 1.0, iou: 2.5, dfl: 0.5} static_assigner: name: ATSSAssigner diff --git a/configs/mot/bytetrack/bytetrack_yolox.yml b/configs/mot/bytetrack/bytetrack_yolox.yml new file mode 100644 index 0000000000000000000000000000000000000000..2e195c56d00cfc696e93fee4e9f709f123b5dcec --- /dev/null +++ b/configs/mot/bytetrack/bytetrack_yolox.yml @@ -0,0 +1,68 @@ +# This config is an assembled config for ByteTrack MOT, used as eval/infer mode for MOT. +_BASE_: [ + 'detector/yolox_x_24e_800x1440_mix_det.yml', + '_base_/mix_det.yml', + '_base_/yolox_mot_reader_800x1440.yml' +] +weights: output/bytetrack_yolox/model_final +log_iter: 20 +snapshot_epoch: 2 + +metric: MOT # eval/infer mode +num_classes: 1 + +architecture: ByteTrack +pretrain_weights: https://bj.bcebos.com/v1/paddledet/models/yolox_x_300e_coco.pdparams +ByteTrack: + detector: YOLOX + reid: None + tracker: JDETracker +det_weights: https://bj.bcebos.com/v1/paddledet/models/mot/yolox_x_24e_800x1440_mix_det.pdparams +reid_weights: None + +depth_mult: 1.33 +width_mult: 1.25 + +YOLOX: + backbone: CSPDarkNet + neck: YOLOCSPPAN + head: YOLOXHead + input_size: [800, 1440] + size_stride: 32 + size_range: [18, 22] # multi-scale range [576*1024 ~ 800*1440], w/h ratio=1.8 + +CSPDarkNet: + arch: "X" + return_idx: [2, 3, 4] + depthwise: False + +YOLOCSPPAN: + depthwise: False + +# Tracking requires higher quality boxes, so NMS score_threshold will be higher +YOLOXHead: + l1_epoch: 20 + depthwise: False + loss_weight: {cls: 1.0, obj: 1.0, iou: 5.0, l1: 1.0} + assigner: + name: SimOTAAssigner + candidate_topk: 10 + use_vfl: False + nms: + name: MultiClassNMS + nms_top_k: 1000 + keep_top_k: 100 + score_threshold: 0.01 + nms_threshold: 0.7 + # For speed while keep high mAP, you can modify 'nms_top_k' to 1000 and 'keep_top_k' to 100, the mAP will drop about 0.1%. + # For high speed demo, you can modify 'score_threshold' to 0.25 and 'nms_threshold' to 0.45, but the mAP will drop a lot. + + +# BYTETracker +JDETracker: + use_byte: True + match_thres: 0.9 + conf_thres: 0.6 + low_conf_thres: 0.2 + min_box_area: 100 + vertical_ratio: 1.6 # for pedestrian diff --git a/configs/mot/bytetrack/bytetrack_yolox_ht21.yml b/configs/mot/bytetrack/bytetrack_yolox_ht21.yml new file mode 100644 index 0000000000000000000000000000000000000000..ea21a87c5ed1ec8297155c80b8e7136e1941c636 --- /dev/null +++ b/configs/mot/bytetrack/bytetrack_yolox_ht21.yml @@ -0,0 +1,68 @@ +# This config is an assembled config for ByteTrack MOT, used as eval/infer mode for MOT. 
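+# Assumed usage sketch (mirrors the other assembled ByteTrack configs; not part of the original file): +# CUDA_VISIBLE_DEVICES=0 python tools/eval_mot.py -c configs/mot/bytetrack/bytetrack_yolox_ht21.yml --scaled=True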
+_BASE_: [ + 'detector/yolox_x_24e_800x1440_ht21.yml', + '_base_/ht21.yml', + '_base_/yolox_mot_reader_800x1440.yml' +] +weights: output/bytetrack_yolox_ht21/model_final +log_iter: 20 +snapshot_epoch: 2 + +metric: MOT # eval/infer mode +num_classes: 1 + +architecture: ByteTrack +pretrain_weights: https://bj.bcebos.com/v1/paddledet/models/yolox_x_300e_coco.pdparams +ByteTrack: + detector: YOLOX + reid: None + tracker: JDETracker +det_weights: https://bj.bcebos.com/v1/paddledet/models/mot/yolox_x_24e_800x1440_ht21.pdparams +reid_weights: None + +depth_mult: 1.33 +width_mult: 1.25 + +YOLOX: + backbone: CSPDarkNet + neck: YOLOCSPPAN + head: YOLOXHead + input_size: [800, 1440] + size_stride: 32 + size_range: [18, 22] # multi-scale range [576*1024 ~ 800*1440], w/h ratio=1.8 + +CSPDarkNet: + arch: "X" + return_idx: [2, 3, 4] + depthwise: False + +YOLOCSPPAN: + depthwise: False + +# Tracking requires higher quality boxes, so NMS score_threshold will be higher +YOLOXHead: + l1_epoch: 20 + depthwise: False + loss_weight: {cls: 1.0, obj: 1.0, iou: 5.0, l1: 1.0} + assigner: + name: SimOTAAssigner + candidate_topk: 10 + use_vfl: False + nms: + name: MultiClassNMS + nms_top_k: 30000 + keep_top_k: 1000 + score_threshold: 0.01 + nms_threshold: 0.7 + # For speed while keep high mAP, you can modify 'nms_top_k' to 1000 and 'keep_top_k' to 100, the mAP will drop about 0.1%. + # For high speed demo, you can modify 'score_threshold' to 0.25 and 'nms_threshold' to 0.45, but the mAP will drop a lot. + + +# BYTETracker +JDETracker: + use_byte: True + match_thres: 0.9 + conf_thres: 0.7 + low_conf_thres: 0.1 + min_box_area: 0 + vertical_ratio: 0 # 1.6 for pedestrian diff --git a/configs/mot/bytetrack/detector/README_cn.md b/configs/mot/bytetrack/detector/README_cn.md index 7bdb095f177fa9365955649841a0a27eda571d7e..98b40bf61b43a170d2026f47d33d3ecbdb46e6da 100644 --- a/configs/mot/bytetrack/detector/README_cn.md +++ b/configs/mot/bytetrack/detector/README_cn.md @@ -12,23 +12,28 @@ | :-------------- | :------------- | :--------: | :---------: | :-----------: | :-----: | :------: | :-----: | | DarkNet-53 | YOLOv3 | 608X608 | 40e | ---- | 42.7 | [下载链接](https://paddledet.bj.bcebos.com/models/mot/deepsort/yolov3_darknet53_40e_608x608_mot17half.pdparams) | [配置文件](./yolov3_darknet53_40e_608x608_mot17half.yml) | | CSPResNet | PPYOLOe | 640x640 | 36e | ---- | 52.9 | [下载链接](https://paddledet.bj.bcebos.com/models/mot/deepsort/ppyoloe_crn_l_36e_640x640_mot17half.pdparams) | [配置文件](./ppyoloe_crn_l_36e_640x640_mot17half.yml) | +| CSPDarkNet | YOLOX-x(mix_mot_ch) | 800x1440 | 24e | ---- | 61.9 | [下载链接](https://paddledet.bj.bcebos.com/models/mot/deepsort/yolox_x_24e_800x1440_mix_mot_ch.pdparams) | [配置文件](./yolox_x_24e_800x1440_mix_mot_ch.yml) | +| CSPDarkNet | YOLOX-x(mix_det) | 800x1440 | 24e | ---- | 65.4 | [下载链接](https://paddledet.bj.bcebos.com/models/mot/deepsort/yolox_x_24e_800x1440_mix_det.pdparams) | [配置文件](./yolox_x_24e_800x1440_mix_det.yml) | **注意:** - - 以上模型均可采用**MOT17-half train**数据集训练,数据集可以从[此链接](https://dataset.bj.bcebos.com/mot/MOT17.zip)下载。 + - 以上模型除YOLOX外采用**MOT17-half train**数据集训练,数据集可以从[此链接](https://dataset.bj.bcebos.com/mot/MOT17.zip)下载。 - **MOT17-half train**是MOT17的train序列(共7个)每个视频的前一半帧的图片和标注组成的数据集,而为了验证精度可以都用**MOT17-half val**数据集去评估,它是每个视频的后一半帧组成的,数据集可以从[此链接](https://paddledet.bj.bcebos.com/data/mot/mot17half/annotations.zip)下载,并解压放在`dataset/mot/MOT17/images/`文件夹下。 + - 
YOLOX-x(mix_mot_ch)采用**mix_mot_ch**数据集,是MOT17、CrowdHuman组成的联合数据集;YOLOX-x(mix_det)采用**mix_det**数据集,是MOT17、CrowdHuman、Cityscapes、ETHZ组成的联合数据集,数据集整理的格式和目录可以参考[此链接](https://github.com/ifzhang/ByteTrack#data-preparation),最终放置于`dataset/mot/`目录下。为了验证精度可以都用**MOT17-half val**数据集去评估。 - 行人跟踪请使用行人检测器结合行人ReID模型。车辆跟踪请使用车辆检测器结合车辆ReID模型。 - 用于ByteTrack跟踪时,这些模型的NMS阈值等后处理设置会与纯检测任务的设置不同。 ## 快速开始 -通过如下命令一键式启动训练和评估 +通过如下命令一键式启动训练、评估和导出 ```bash job_name=ppyoloe_crn_l_36e_640x640_mot17half config=configs/mot/bytetrack/detector/${job_name}.yml log_dir=log_dir/${job_name} # 1. training -python -m paddle.distributed.launch --log_dir=${log_dir} --gpus 0,1,2,3,4,5,6,7 tools/train.py -c ${config} --eval --amp --fleet +python -m paddle.distributed.launch --log_dir=${log_dir} --gpus 0,1,2,3,4,5,6,7 tools/train.py -c ${config} --eval --amp # 2. evaluation -CUDA_VISIBLE_DEVICES=0 python tools/eval.py -c ${config} -o weights=https://paddledet.bj.bcebos.com/models/mot/${job_name}.pdparams +CUDA_VISIBLE_DEVICES=0 python tools/eval.py -c ${config} -o weights=output/${job_name}/model_final.pdparams +# 3. export +CUDA_VISIBLE_DEVICES=0 python tools/export_model.py -c ${config} -o weights=output/${job_name}/model_final.pdparams ``` diff --git a/configs/mot/bytetrack/detector/ppyoloe_crn_l_36e_640x640_mot17half.yml b/configs/mot/bytetrack/detector/ppyoloe_crn_l_36e_640x640_mot17half.yml index 89654f059e603eda24002dfec844f450bd73e8ff..6c770e9bf85e953a30df43faf57c401518b7f6ad 100644 --- a/configs/mot/bytetrack/detector/ppyoloe_crn_l_36e_640x640_mot17half.yml +++ b/configs/mot/bytetrack/detector/ppyoloe_crn_l_36e_640x640_mot17half.yml @@ -7,6 +7,7 @@ weights: output/ppyoloe_crn_l_36e_640x640_mot17half/model_final log_iter: 20 snapshot_epoch: 2 + # schedule configuration for fine-tuning epoch: 36 LearningRate: @@ -16,7 +17,7 @@ LearningRate: max_epochs: 43 - !LinearWarmup start_factor: 0.001 - steps: 100 + epochs: 1 OptimizerBuilder: optimizer: @@ -26,9 +27,11 @@ OptimizerBuilder: factor: 0.0005 type: L2 + TrainReader: batch_size: 8 + # detector configuration architecture: YOLOv3 norm_type: sync_bn @@ -63,7 +66,6 @@ PPYOLOEHead: grid_cell_offset: 0.5 static_assigner_epoch: -1 # 100 use_varifocal_loss: True - eval_input_size: [640, 640] loss_weight: {class: 1.0, iou: 2.5, dfl: 0.5} static_assigner: name: ATSSAssigner diff --git a/configs/mot/bytetrack/detector/yolox_x_24e_800x1440_ht21.yml b/configs/mot/bytetrack/detector/yolox_x_24e_800x1440_ht21.yml new file mode 100644 index 0000000000000000000000000000000000000000..bd102a48d1013b9e6399411562b47e1e85e2c2ec --- /dev/null +++ b/configs/mot/bytetrack/detector/yolox_x_24e_800x1440_ht21.yml @@ -0,0 +1,80 @@ +# This config is an assembled config for ByteTrack MOT, used as eval/infer mode for MOT.
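+# Assumed training sketch (follows the quick-start pattern in README_cn.md of this folder; not part of the original file): +# python -m paddle.distributed.launch --gpus 0,1,2,3,4,5,6,7 tools/train.py -c configs/mot/bytetrack/detector/yolox_x_24e_800x1440_ht21.yml --eval --amp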
+_BASE_: [ + '../../../yolox/yolox_x_300e_coco.yml', + '../_base_/ht21.yml', +] +weights: output/yolox_x_24e_800x1440_ht21/model_final +log_iter: 20 +snapshot_epoch: 2 + +# schedule configuration for fine-tuning +epoch: 24 +LearningRate: + base_lr: 0.0005 # finetune + schedulers: + - !CosineDecay + max_epochs: 24 + min_lr_ratio: 0.05 + last_plateau_epochs: 4 + - !ExpWarmup + epochs: 1 + +OptimizerBuilder: + optimizer: + type: Momentum + momentum: 0.9 + use_nesterov: True + regularizer: + factor: 0.0005 + type: L2 + + +TrainReader: + batch_size: 4 + mosaic_epoch: 20 + +# detector configuration +architecture: YOLOX +pretrain_weights: https://bj.bcebos.com/v1/paddledet/models/yolox_x_300e_coco.pdparams +norm_type: sync_bn +use_ema: True +ema_decay: 0.9999 +ema_decay_type: "exponential" +act: silu +find_unused_parameters: True +depth_mult: 1.33 +width_mult: 1.25 + +YOLOX: + backbone: CSPDarkNet + neck: YOLOCSPPAN + head: YOLOXHead + input_size: [800, 1440] + size_stride: 32 + size_range: [18, 32] # multi-scale range [576*1024 ~ 800*1440], w/h ratio=1.8 + +CSPDarkNet: + arch: "X" + return_idx: [2, 3, 4] + depthwise: False + +YOLOCSPPAN: + depthwise: False + +# Tracking requires higher quality boxes, so NMS score_threshold will be higher +YOLOXHead: + l1_epoch: 20 + depthwise: False + loss_weight: {cls: 1.0, obj: 1.0, iou: 5.0, l1: 1.0} + assigner: + name: SimOTAAssigner + candidate_topk: 10 + use_vfl: False + nms: + name: MultiClassNMS + nms_top_k: 1000 + keep_top_k: 100 + score_threshold: 0.01 + nms_threshold: 0.7 + # For speed while keep high mAP, you can modify 'nms_top_k' to 1000 and 'keep_top_k' to 100, the mAP will drop about 0.1%. + # For high speed demo, you can modify 'score_threshold' to 0.25 and 'nms_threshold' to 0.45, but the mAP will drop a lot. diff --git a/configs/mot/bytetrack/detector/yolox_x_24e_800x1440_mix_det.yml b/configs/mot/bytetrack/detector/yolox_x_24e_800x1440_mix_det.yml new file mode 100644 index 0000000000000000000000000000000000000000..2585e5a47ac0589f7d673803a5172b42f3b902bc --- /dev/null +++ b/configs/mot/bytetrack/detector/yolox_x_24e_800x1440_mix_det.yml @@ -0,0 +1,80 @@ +# This config is an assembled config for ByteTrack MOT, used as eval/infer mode for MOT.
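+# Dataset note (editorial comment): mix_det is the MOT17 + CrowdHuman + Cityscapes + ETHZ joint dataset defined in '../_base_/mix_det.yml'; see the dataset notes in README_cn.md.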
+_BASE_: [ + '../../../yolox/yolox_x_300e_coco.yml', + '../_base_/mix_det.yml', +] +weights: output/yolox_x_24e_800x1440_mix_det/model_final +log_iter: 20 +snapshot_epoch: 2 + +# schedule configuration for fine-tuning +epoch: 24 +LearningRate: + base_lr: 0.00075 # finetune + schedulers: + - !CosineDecay + max_epochs: 24 + min_lr_ratio: 0.05 + last_plateau_epochs: 4 + - !ExpWarmup + epochs: 1 + +OptimizerBuilder: + optimizer: + type: Momentum + momentum: 0.9 + use_nesterov: True + regularizer: + factor: 0.0005 + type: L2 + + +TrainReader: + batch_size: 6 + mosaic_epoch: 20 + +# detector configuration +architecture: YOLOX +pretrain_weights: https://bj.bcebos.com/v1/paddledet/models/yolox_x_300e_coco.pdparams +norm_type: sync_bn +use_ema: True +ema_decay: 0.9999 +ema_decay_type: "exponential" +act: silu +find_unused_parameters: True +depth_mult: 1.33 +width_mult: 1.25 + +YOLOX: + backbone: CSPDarkNet + neck: YOLOCSPPAN + head: YOLOXHead + input_size: [800, 1440] + size_stride: 32 + size_range: [18, 30] # multi-scale range [576*1024 ~ 800*1440], w/h ratio=1.8 + +CSPDarkNet: + arch: "X" + return_idx: [2, 3, 4] + depthwise: False + +YOLOCSPPAN: + depthwise: False + +# Tracking requires higher quality boxes, so NMS score_threshold will be higher +YOLOXHead: + l1_epoch: 20 + depthwise: False + loss_weight: {cls: 1.0, obj: 1.0, iou: 5.0, l1: 1.0} + assigner: + name: SimOTAAssigner + candidate_topk: 10 + use_vfl: False + nms: + name: MultiClassNMS + nms_top_k: 1000 + keep_top_k: 100 + score_threshold: 0.01 + nms_threshold: 0.7 + # For speed while keep high mAP, you can modify 'nms_top_k' to 1000 and 'keep_top_k' to 100, the mAP will drop about 0.1%. + # For high speed demo, you can modify 'score_threshold' to 0.25 and 'nms_threshold' to 0.45, but the mAP will drop a lot. diff --git a/configs/mot/bytetrack/detector/yolox_x_24e_800x1440_mix_mot_ch.yml b/configs/mot/bytetrack/detector/yolox_x_24e_800x1440_mix_mot_ch.yml new file mode 100644 index 0000000000000000000000000000000000000000..34678d52b107f92b8c374f12af1b3834f16b9676 --- /dev/null +++ b/configs/mot/bytetrack/detector/yolox_x_24e_800x1440_mix_mot_ch.yml @@ -0,0 +1,80 @@ +# This config is an assembled config for ByteTrack MOT, used as eval/infer mode for MOT.
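+# Dataset note (editorial comment): mix_mot_ch is the MOT17 + CrowdHuman joint dataset defined in '../_base_/mix_mot_ch.yml'; see the dataset notes in README_cn.md.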
+_BASE_: [ + '../../../yolox/yolox_x_300e_coco.yml', + '../_base_/mix_mot_ch.yml', +] +weights: output/yolox_x_24e_800x1440_mix_mot_ch/model_final +log_iter: 20 +snapshot_epoch: 2 + +# schedule configuration for fine-tuning +epoch: 24 +LearningRate: + base_lr: 0.00075 # finetune + schedulers: + - !CosineDecay + max_epochs: 24 + min_lr_ratio: 0.05 + last_plateau_epochs: 4 + - !ExpWarmup + epochs: 1 + +OptimizerBuilder: + optimizer: + type: Momentum + momentum: 0.9 + use_nesterov: True + regularizer: + factor: 0.0005 + type: L2 + + +TrainReader: + batch_size: 6 + mosaic_epoch: 20 + +# detector configuration +architecture: YOLOX +pretrain_weights: https://bj.bcebos.com/v1/paddledet/models/yolox_x_300e_coco.pdparams +norm_type: sync_bn +use_ema: True +ema_decay: 0.9999 +ema_decay_type: "exponential" +act: silu +find_unused_parameters: True +depth_mult: 1.33 +width_mult: 1.25 + +YOLOX: + backbone: CSPDarkNet + neck: YOLOCSPPAN + head: YOLOXHead + input_size: [800, 1440] + size_stride: 32 + size_range: [18, 30] # multi-scale range [576*1024 ~ 800*1440], w/h ratio=1.8 + +CSPDarkNet: + arch: "X" + return_idx: [2, 3, 4] + depthwise: False + +YOLOCSPPAN: + depthwise: False + +# Tracking requires higher quality boxes, so NMS score_threshold will be higher +YOLOXHead: + l1_epoch: 20 + depthwise: False + loss_weight: {cls: 1.0, obj: 1.0, iou: 5.0, l1: 1.0} + assigner: + name: SimOTAAssigner + candidate_topk: 10 + use_vfl: False + nms: + name: MultiClassNMS + nms_top_k: 1000 + keep_top_k: 100 + score_threshold: 0.01 + nms_threshold: 0.7 + # For speed while keep high mAP, you can modify 'nms_top_k' to 1000 and 'keep_top_k' to 100, the mAP will drop about 0.1%. + # For high speed demo, you can modify 'score_threshold' to 0.25 and 'nms_threshold' to 0.45, but the mAP will drop a lot.
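The NMS comments that close the YOLOX detector configs above describe a concrete speed/accuracy knob. Below is a minimal sketch of applying it before a quick evaluation; it assumes you work on a copy placed in the same folder (so the relative `_BASE_` paths keep resolving), only rewrites the `score_threshold`/`nms_threshold` values defined above, and reuses the weights URL already shown earlier in this document:

```bash
# Sketch: relax NMS for a fast demo, as the config comments suggest.
# Work on a copy in the same directory so '_BASE_' stays resolvable
# and the tracked config keeps its benchmark defaults.
cp configs/mot/bytetrack/detector/yolox_x_24e_800x1440_mix_det.yml \
   configs/mot/bytetrack/detector/yolox_fast_demo.yml
sed -i 's/score_threshold: 0.01/score_threshold: 0.25/' configs/mot/bytetrack/detector/yolox_fast_demo.yml
sed -i 's/nms_threshold: 0.7/nms_threshold: 0.45/' configs/mot/bytetrack/detector/yolox_fast_demo.yml
# Evaluate the relaxed config; expect a clear mAP drop, per the comments above.
CUDA_VISIBLE_DEVICES=0 python tools/eval.py -c configs/mot/bytetrack/detector/yolox_fast_demo.yml \
    -o weights=https://paddledet.bj.bcebos.com/models/mot/yolox_x_24e_800x1440_mix_det.pdparams
```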
diff --git a/configs/mot/deepsort/README_cn.md b/configs/mot/deepsort/README_cn.md index 8a957d7fad61624bd70f9670182f3d2ad15e5992..08bee2e1e4c173c426c608562a4bcd4334bcc5e7 100644 --- a/configs/mot/deepsort/README_cn.md +++ b/configs/mot/deepsort/README_cn.md @@ -6,7 +6,7 @@ - [简介](#简介) - [模型库](#模型库) - [快速开始](#快速开始) -- [适配其他检测器](适配其他检测器) +- [适配其他检测器](#适配其他检测器) - [引用](#引用) ## 简介 @@ -18,18 +18,18 @@ | 骨干网络 | 输入尺寸 | MOTA | IDF1 | IDS | FP | FN | FPS | 检测结果或模型 | ReID模型 |配置文件 | | :---------| :------- | :----: | :----: | :--: | :----: | :---: | :---: | :-----:| :-----: | :-----: | -| ResNet-101 | 1088x608 | 72.2 | 60.5 | 998 | 8054 | 21644 | - | [检测结果](https://dataset.bj.bcebos.com/mot/det_results_dir.zip) |[ReID模型](https://paddledet.bj.bcebos.com/models/mot/deepsort/deepsort_pcb_pyramid_r101.pdparams)|[配置文件](./reid/deepsort_pcb_pyramid_r101.yml) | +| ResNet-101 | 1088x608 | 72.2 | 60.5 | 998 | 8054 | 21644 | - | [检测结果](https://bj.bcebos.com/v1/paddledet/data/mot/det_results_dir.zip) |[ReID模型](https://paddledet.bj.bcebos.com/models/mot/deepsort/deepsort_pcb_pyramid_r101.pdparams)|[配置文件](./reid/deepsort_pcb_pyramid_r101.yml) | | ResNet-101 | 1088x608 | 68.3 | 56.5 | 1722 | 17337 | 15890 | - | [检测模型](https://paddledet.bj.bcebos.com/models/mot/deepsort/jde_yolov3_darknet53_30e_1088x608_mix.pdparams) |[ReID模型](https://paddledet.bj.bcebos.com/models/mot/deepsort/deepsort_pcb_pyramid_r101.pdparams)|[配置文件](./deepsort_jde_yolov3_pcb_pyramid.yml) | -| PPLCNet | 1088x608 | 72.2 | 59.5 | 1087 | 8034 | 21481 | - | [检测结果](https://dataset.bj.bcebos.com/mot/det_results_dir.zip) |[ReID模型](https://paddledet.bj.bcebos.com/models/mot/deepsort/deepsort_pplcnet.pdparams)|[配置文件](./reid/deepsort_pplcnet.yml) | +| PPLCNet | 1088x608 | 72.2 | 59.5 | 1087 | 8034 | 21481 | - | [检测结果](https://bj.bcebos.com/v1/paddledet/data/mot/det_results_dir.zip) |[ReID模型](https://paddledet.bj.bcebos.com/models/mot/deepsort/deepsort_pplcnet.pdparams)|[配置文件](./reid/deepsort_pplcnet.yml) | | PPLCNet | 1088x608 | 68.1 | 53.6 | 1979 | 17446 | 15766 | - | [检测模型](https://paddledet.bj.bcebos.com/models/mot/deepsort/jde_yolov3_darknet53_30e_1088x608_mix.pdparams) |[ReID模型](https://paddledet.bj.bcebos.com/models/mot/deepsort/deepsort_pplcnet.pdparams)|[配置文件](./deepsort_jde_yolov3_pplcnet.yml) | ### DeepSORT在MOT-16 Test Set上结果 | 骨干网络 | 输入尺寸 | MOTA | IDF1 | IDS | FP | FN | FPS | 检测结果或模型 | ReID模型 |配置文件 | | :---------| :------- | :----: | :----: | :--: | :----: | :---: | :---: | :-----: | :-----: |:-----: | -| ResNet-101 | 1088x608 | 64.1 | 53.0 | 1024 | 12457 | 51919 | - | [检测结果](https://dataset.bj.bcebos.com/mot/det_results_dir.zip) | [ReID模型](https://paddledet.bj.bcebos.com/models/mot/deepsort/deepsort_pcb_pyramid_r101.pdparams)|[配置文件](./reid/deepsort_pcb_pyramid_r101.yml) | +| ResNet-101 | 1088x608 | 64.1 | 53.0 | 1024 | 12457 | 51919 | - | [检测结果](https://bj.bcebos.com/v1/paddledet/data/mot/det_results_dir.zip) | [ReID模型](https://paddledet.bj.bcebos.com/models/mot/deepsort/deepsort_pcb_pyramid_r101.pdparams)|[配置文件](./reid/deepsort_pcb_pyramid_r101.yml) | | ResNet-101 | 1088x608 | 61.2 | 48.5 | 1799 | 25796 | 43232 | - | [检测模型](https://paddledet.bj.bcebos.com/models/mot/deepsort/jde_yolov3_darknet53_30e_1088x608_mix.pdparams) |[ReID模型](https://paddledet.bj.bcebos.com/models/mot/deepsort/deepsort_pcb_pyramid_r101.pdparams)|[配置文件](./deepsort_jde_yolov3_pcb_pyramid.yml) | -| PPLCNet | 1088x608 | 64.0 | 51.3 | 1208 | 12697 | 51784 | - | [检测结果](https://dataset.bj.bcebos.com/mot/det_results_dir.zip) 
|[ReID模型](https://paddledet.bj.bcebos.com/models/mot/deepsort/deepsort_pplcnet.pdparams)|[配置文件](./reid/deepsort_pplcnet.yml) | +| PPLCNet | 1088x608 | 64.0 | 51.3 | 1208 | 12697 | 51784 | - | [检测结果](https://bj.bcebos.com/v1/paddledet/data/mot/det_results_dir.zip) |[ReID模型](https://paddledet.bj.bcebos.com/models/mot/deepsort/deepsort_pplcnet.pdparams)|[配置文件](./reid/deepsort_pplcnet.yml) | | PPLCNet | 1088x608 | 61.1 | 48.8 | 2010 | 25401 | 43432 | - | [检测模型](https://paddledet.bj.bcebos.com/models/mot/deepsort/jde_yolov3_darknet53_30e_1088x608_mix.pdparams) |[ReID模型](https://paddledet.bj.bcebos.com/models/mot/deepsort/deepsort_pplcnet.pdparams)|[配置文件](./deepsort_jde_yolov3_pplcnet.yml) | @@ -41,8 +41,8 @@ | MIX | JDE YOLOv3 | PPLCNet | - | 66.3 | 62.1 | - |[配置文件](./deepsort_jde_yolov3_pplcnet.yml) | | MOT-17 half train | YOLOv3 | PPLCNet | 42.7 | 50.2 | 52.4 | - |[配置文件](./deepsort_yolov3_pplcnet.yml) | | MOT-17 half train | PPYOLOv2 | PPLCNet | 46.8 | 51.8 | 55.8 | - |[配置文件](./deepsort_ppyolov2_pplcnet.yml) | -| MOT-17 half train | PPYOLOe | PPLCNet | 52.9 | 56.7 | 60.5 | - |[配置文件](./deepsort_ppyoloe_pplcnet.yml) | -| MOT-17 half train | PPYOLOe | ResNet-50 | 52.9 | 56.7 | 64.6 | - |[配置文件](./deepsort_ppyoloe_resnet.yml) | +| MOT-17 half train | PPYOLOe | PPLCNet | 52.7 | 56.7 | 60.5 | - |[配置文件](./deepsort_ppyoloe_pplcnet.yml) | +| MOT-17 half train | PPYOLOe | ResNet-50 | 52.7 | 56.7 | 64.6 | - |[配置文件](./deepsort_ppyoloe_resnet.yml) | **注意:** 模型权重下载链接在配置文件中的```det_weights```和```reid_weights```,运行验证的命令即可自动下载。 @@ -60,7 +60,7 @@ det_results_dir ``` 对于MOT16数据集,可以下载PaddleDetection提供的一个经过匹配之后的检测框结果det_results_dir.zip并解压: ``` -wget https://dataset.bj.bcebos.com/mot/det_results_dir.zip +wget https://bj.bcebos.com/v1/paddledet/data/mot/det_results_dir.zip ``` 如果使用更强的检测模型,可以取得更好的结果。其中每个txt是每个视频中所有图片的检测结果,每行都描述一个边界框,格式如下: ``` @@ -72,7 +72,7 @@ wget https://dataset.bj.bcebos.com/mot/det_results_dir.zip - `score`是目标框的得分 - `class_id`是目标框的类别,如果只有1类则是`0` -- **方式2**:同时加载检测模型和ReID模型,此处选用JDE版本的YOLOv3,具体配置见`configs/mot/deepsort/deepsort_jde_yolov3_pcb_pyramid.yml`。加载其他通用检测模型可参照`configs/mot/deepsort/deepsort_ppyolov2_pplcnet.yml`进行修改。 +- **方式2**:同时加载检测模型和ReID模型,此处选用JDE版本的YOLOv3,具体配置见`configs/mot/deepsort/deepsort_jde_yolov3_pcb_pyramid.yml`。加载其他通用检测模型可参照`configs/mot/deepsort/deepsort_ppyoloe_pplcnet.yml`进行修改。 ## 快速开始 ### 1. 评估 #### 1.1 评估检测效果 ```bash -CUDA_VISIBLE_DEVICES=0 python tools/eval.py -c configs/mot/deepsort/detector/ppyoloe_crn_l_36e_640x640_mot17half.yml +CUDA_VISIBLE_DEVICES=0 python tools/eval.py -c configs/mot/deepsort/detector/ppyoloe_crn_l_36e_640x640_mot17half.yml -o weights=https://bj.bcebos.com/v1/paddledet/models/mot/ppyoloe_crn_l_36e_640x640_mot17half.pdparams ``` **注意:** @@ -89,9 +89,12 @@ CUDA_VISIBLE_DEVICES=0 python tools/eval.py -c configs/mot/deepsort/detector/ppy #### 1.2 评估跟踪效果 **方式1**:加载检测结果文件和ReID模型,得到跟踪结果 ```bash +# 下载PaddleDetection提供的MOT16数据集检测结果文件并解压,如需自己使用其他检测器生成请参照这个文件里的格式 +wget https://bj.bcebos.com/v1/paddledet/data/mot/det_results_dir.zip + +CUDA_VISIBLE_DEVICES=0 python tools/eval_mot.py -c configs/mot/deepsort/reid/deepsort_pcb_pyramid_r101.yml --det_results_dir det_results_dir -CUDA_VISIBLE_DEVICES=0 python tools/eval_mot.py -c configs/mot/deepsort/reid/deepsort_pcb_pyramid_r101.yml --det_results_dir {your detection results} # 或者 -CUDA_VISIBLE_DEVICES=0 python tools/eval_mot.py -c configs/mot/deepsort/reid/deepsort_pplcnet.yml --det_results_dir {your detection results} +CUDA_VISIBLE_DEVICES=0 python
tools/eval_mot.py -c configs/mot/deepsort/reid/deepsort_pplcnet.yml --det_results_dir det_results_dir ``` **方式2**:加载行人检测模型和ReID模型,得到跟踪结果 @@ -115,11 +118,14 @@ CUDA_VISIBLE_DEVICES=0 python tools/eval_mot.py -c configs/mot/deepsort/deepsort 使用单个GPU通过如下命令预测一个视频,并保存为视频 ```bash +# 下载demo视频 +wget https://bj.bcebos.com/v1/paddledet/data/mot/demo/mot17_demo.mp4 + # 加载JDE YOLOv3行人检测模型和PCB Pyramid ReID模型,并保存为视频 -CUDA_VISIBLE_DEVICES=0 python tools/infer_mot.py -c configs/mot/deepsort/deepsort_jde_yolov3_pcb_pyramid.yml --video_file={your video name}.mp4 --save_videos +CUDA_VISIBLE_DEVICES=0 python tools/infer_mot.py -c configs/mot/deepsort/deepsort_jde_yolov3_pcb_pyramid.yml --video_file=mot17_demo.mp4 --save_videos -# 或者加载PPYOLOv2行人检测模型和PPLCNet ReID模型,并保存为视频 -CUDA_VISIBLE_DEVICES=0 python tools/infer_mot.py -c configs/mot/deepsort/deepsort_ppyolov2_pplcnet.yml --video_file={your video name}.mp4 --scaled=True --save_videos +# 或者加载PPYOLOE行人检测模型和PPLCNet ReID模型,并保存为视频 +CUDA_VISIBLE_DEVICES=0 python tools/infer_mot.py -c configs/mot/deepsort/deepsort_ppyoloe_pplcnet.yml --video_file=mot17_demo.mp4 --scaled=True --save_videos ``` **注意:** @@ -132,33 +138,34 @@ CUDA_VISIBLE_DEVICES=0 python tools/infer_mot.py -c configs/mot/deepsort/deepsor Step 1:导出检测模型 ```bash # 导出JDE YOLOv3行人检测模型 -CUDA_VISIBLE_DEVICES=0 python tools/export_model.py -c configs/mot/deepsort/detector/jde_yolov3_darknet53_30e_1088x608_mix.yml -o weights=https://paddledet.bj.bcebos.com/models/mot/deepsort/jde_yolov3_darknet53_30e_1088x608_mix.pdparams +CUDA_VISIBLE_DEVICES=0 python3.7 tools/export_model.py -c configs/mot/deepsort/detector/jde_yolov3_darknet53_30e_1088x608_mix.yml -o weights=https://paddledet.bj.bcebos.com/models/mot/deepsort/jde_yolov3_darknet53_30e_1088x608_mix.pdparams -# 或导出PPYOLOv2行人检测模型 -CUDA_VISIBLE_DEVICES=0 python tools/export_model.py -c configs/mot/deepsort/detector/ppyolov2_r50vd_dcn_365e_640x640_mot17half.yml -o weights=https://paddledet.bj.bcebos.com/models/mot/deepsort/ppyolov2_r50vd_dcn_365e_640x640_mot17half.pdparams +# 或导出PPYOLOE行人检测模型 +CUDA_VISIBLE_DEVICES=0 python tools/export_model.py -c configs/mot/deepsort/detector/ppyoloe_crn_l_36e_640x640_mot17half.yml -o weights=https://paddledet.bj.bcebos.com/models/mot/deepsort/ppyoloe_crn_l_36e_640x640_mot17half.pdparams ``` Step 2:导出ReID模型 ```bash # 导出PCB Pyramid ReID模型 CUDA_VISIBLE_DEVICES=0 python tools/export_model.py -c configs/mot/deepsort/reid/deepsort_pcb_pyramid_r101.yml -o reid_weights=https://paddledet.bj.bcebos.com/models/mot/deepsort/deepsort_pcb_pyramid_r101.pdparams + # 或者导出PPLCNet ReID模型 CUDA_VISIBLE_DEVICES=0 python tools/export_model.py -c configs/mot/deepsort/reid/deepsort_pplcnet.yml -o reid_weights=https://paddledet.bj.bcebos.com/models/mot/deepsort/deepsort_pplcnet.pdparams + +# 或者导出ResNet ReID模型 +CUDA_VISIBLE_DEVICES=0 python tools/export_model.py -c configs/mot/deepsort/reid/deepsort_resnet.yml -o reid_weights=https://paddledet.bj.bcebos.com/models/mot/deepsort/deepsort_resnet.pdparams ``` ### 4. 
用导出的模型基于Python去预测 ```bash -# 用导出JDE YOLOv3行人检测模型和PCB Pyramid ReID模型 -python deploy/pptracking/python/mot_sde_infer.py --model_dir=output_inference/jde_yolov3_darknet53_30e_1088x608_mix/ --reid_model_dir=output_inference/deepsort_pcb_pyramid_r101/ --video_file={your video name}.mp4 --device=GPU --save_mot_txts - -# 或用导出的PPYOLOv2行人检测模型和PPLCNet ReID模型 -python deploy/pptracking/python/mot_sde_infer.py --model_dir=output_inference/ppyolov2_r50vd_dcn_365e_640x640_mot17half/ --reid_model_dir=output_inference/deepsort_pplcnet/ --video_file={your video name}.mp4 --device=GPU --scaled=True --save_mot_txts +# 用导出的PPYOLOE行人检测模型和PPLCNet ReID模型 +python3.7 deploy/pptracking/python/mot_sde_infer.py --model_dir=output_inference/ppyoloe_crn_l_36e_640x640_mot17half/ --reid_model_dir=output_inference/deepsort_pplcnet/ --tracker_config=deploy/pptracking/python/tracker_config.yml --video_file=mot17_demo.mp4 --device=GPU --save_mot_txts --threshold=0.5 ``` **注意:** - - 跟踪模型是对视频进行预测,不支持单张图的预测,默认保存跟踪结果可视化后的视频,可添加`--save_mot_txts`(对每个视频保存一个txt)或`--save_mot_txt_per_img`(对每张图片保存一个txt)表示保存跟踪结果的txt文件,或`--save_images`表示保存跟踪结果可视化图片。 + - 运行前需要先改动`deploy/pptracking/python/tracker_config.yml`里的tracker为`DeepSORTTracker`。 + - 跟踪模型是对视频进行预测,不支持单张图的预测,默认保存跟踪结果可视化后的视频,可添加`--save_mot_txts`表示对每个视频保存一个txt,或`--save_images`表示保存跟踪结果可视化图片。 - 跟踪结果txt文件每行信息是`frame,id,x1,y1,w,h,score,-1,-1,-1`。 - - `--scaled`表示在模型输出结果的坐标是否已经是缩放回原图的,如果使用的检测模型是JDE的YOLOv3则为False,如果使用通用检测模型则为True。 ## 适配其他检测器 @@ -184,7 +191,7 @@ CUDA_VISIBLE_DEVICES=0 python tools/eval_mot.py -c configs/mot/deepsort/deepsort ``` #### 2.加载检测模型和ReID模型去推理: ``` -CUDA_VISIBLE_DEVICES=0 python tools/infer_mot.py -c configs/mot/deepsort/deepsort_xxx_yyy.yml --video_file={your video name}.mp4 --scaled=True --save_videos +CUDA_VISIBLE_DEVICES=0 python tools/infer_mot.py -c configs/mot/deepsort/deepsort_xxx_yyy.yml --video_file=mot17_demo.mp4 --scaled=True --save_videos ``` #### 3.导出检测模型和ReID模型: ```bash @@ -195,7 +202,7 @@ CUDA_VISIBLE_DEVICES=0 python tools/export_model.py -c configs/mot/deepsort/reid ``` #### 4.使用导出的检测模型和ReID模型去部署: ``` -python deploy/pptracking/python/mot_sde_infer.py --model_dir=output_inference/xxx./ --reid_model_dir=output_inference/deepsort_yyy/ --video_file={your video name}.mp4 --device=GPU --scaled=True --save_mot_txts +python deploy/pptracking/python/mot_sde_infer.py --model_dir=output_inference/xxx/ --reid_model_dir=output_inference/deepsort_yyy/ --video_file=mot17_demo.mp4 --device=GPU --scaled=True --save_mot_txts ``` **注意:** - `--scaled`表示在模型输出结果的坐标是否已经是缩放回原图的,如果使用的检测模型是JDE的YOLOv3则为False,如果使用通用检测模型则为True。 diff --git a/configs/mot/deepsort/_base_/mot17.yml b/configs/mot/deepsort/_base_/mot17.yml index 2efa55546026168c39396c4d51a71428e19a0638..faf47f622d1c2847a9686dfa8d7e48a49c05436c 100644 --- a/configs/mot/deepsort/_base_/mot17.yml +++ b/configs/mot/deepsort/_base_/mot17.yml @@ -17,6 +17,7 @@ EvalDataset: TestDataset: !ImageFolder + dataset_dir: dataset/mot/MOT17 anno_path: annotations/val_half.json diff --git a/configs/mot/deepsort/deepsort_ppyoloe_pplcnet.yml b/configs/mot/deepsort/deepsort_ppyoloe_pplcnet.yml index 972a1d16975a587a162ad5354b784dffdd473e60..0af80a7d899f02ac4b66c5191b2616ed1db1aa8e 100644 --- a/configs/mot/deepsort/deepsort_ppyoloe_pplcnet.yml +++ b/configs/mot/deepsort/deepsort_ppyoloe_pplcnet.yml @@ -92,7 +92,6 @@ PPYOLOEHead: grid_cell_offset: 0.5 static_assigner_epoch: -1 # 100 use_varifocal_loss: True - eval_input_size: [640, 640] loss_weight: {class: 1.0, iou: 2.5, dfl: 0.5} static_assigner: name: ATSSAssigner diff --git
a/configs/mot/deepsort/deepsort_ppyoloe_resnet.yml b/configs/mot/deepsort/deepsort_ppyoloe_resnet.yml index cde2cf23cf4ea81ce67f7dbd08b4fe1fc4b77e15..d9692304b055040bb22c49a2f90e05e4e7ba53eb 100644 --- a/configs/mot/deepsort/deepsort_ppyoloe_resnet.yml +++ b/configs/mot/deepsort/deepsort_ppyoloe_resnet.yml @@ -91,7 +91,6 @@ PPYOLOEHead: grid_cell_offset: 0.5 static_assigner_epoch: -1 # 100 use_varifocal_loss: True - eval_input_size: [640, 640] loss_weight: {class: 1.0, iou: 2.5, dfl: 0.5} static_assigner: name: ATSSAssigner diff --git a/configs/mot/deepsort/detector/README_cn.md b/configs/mot/deepsort/detector/README_cn.md index 1a984b5c358f2c12e77e3d12396bba22bb46793c..6ebe7de7949d1db3d4c4f72db5ad8147f12e1f3d 100644 --- a/configs/mot/deepsort/detector/README_cn.md +++ b/configs/mot/deepsort/detector/README_cn.md @@ -26,11 +26,11 @@ 通过如下命令一键式启动训练和评估 ```bash -job_name=ppyolov2_r50vd_dcn_365e_640x640_mot17half +job_name=ppyoloe_crn_l_36e_640x640_mot17half config=configs/mot/deepsort/detector/${job_name}.yml log_dir=log_dir/${job_name} # 1. training -python -m paddle.distributed.launch --log_dir=${log_dir} --gpus 0,1,2,3,4,5,6,7 tools/train.py -c ${config} --eval --amp --fleet +python -m paddle.distributed.launch --log_dir=${log_dir} --gpus 0,1,2,3,4,5,6,7 tools/train.py -c ${config} --eval --amp # 2. evaluation -CUDA_VISIBLE_DEVICES=0 python tools/eval.py -c ${config} -o weights=https://paddledet.bj.bcebos.com/models/mot/deepsort/${job_name}.pdparams +CUDA_VISIBLE_DEVICES=0 python tools/eval.py -c ${config} -o weights=https://paddledet.bj.bcebos.com/models/mot/${job_name}.pdparams ``` diff --git a/configs/mot/deepsort/detector/ppyoloe_crn_l_36e_640x640_mot17half.yml b/configs/mot/deepsort/detector/ppyoloe_crn_l_36e_640x640_mot17half.yml index 4852d6b7e21924156826575dcc10a00cfecb5f80..a0501222c9f35d657826fb525e54bd7f4f663ae4 100644 --- a/configs/mot/deepsort/detector/ppyoloe_crn_l_36e_640x640_mot17half.yml +++ b/configs/mot/deepsort/detector/ppyoloe_crn_l_36e_640x640_mot17half.yml @@ -6,6 +6,7 @@ weights: output/ppyoloe_crn_l_36e_640x640_mot17half/model_final log_iter: 20 snapshot_epoch: 2 + # schedule configuration for fine-tuning epoch: 36 LearningRate: @@ -15,7 +16,7 @@ LearningRate: max_epochs: 43 - !LinearWarmup start_factor: 0.001 - steps: 100 + epochs: 1 OptimizerBuilder: optimizer: @@ -25,9 +26,11 @@ OptimizerBuilder: factor: 0.0005 type: L2 + TrainReader: batch_size: 8 + # detector configuration architecture: YOLOv3 norm_type: sync_bn @@ -62,7 +65,6 @@ PPYOLOEHead: grid_cell_offset: 0.5 static_assigner_epoch: -1 # 100 use_varifocal_loss: True - eval_input_size: [640, 640] loss_weight: {class: 1.0, iou: 2.5, dfl: 0.5} static_assigner: name: ATSSAssigner diff --git a/configs/mot/fairmot/README.md b/configs/mot/fairmot/README.md index 25441f21cba40a5e7b26dbbab7627e8bb7097b2f..fbb9daa04e05b1f9848c03ef62f790ebeeee167e 100644 --- a/configs/mot/fairmot/README.md +++ b/configs/mot/fairmot/README.md @@ -86,7 +86,7 @@ PP-tracking provides an AI studio public project tutorial. 
Please refer to this ### Results on MOT-17 Half Set | backbone | input shape | MOTA | IDF1 | IDS | FP | FN | FPS | download | config | | :--------------| :------- | :----: | :----: | :----: | :----: | :----: | :------: | :----: |:-----: | -| DLA-34 | 1088x608 | 69.1 | 72.8 | 299 | 1957 | 14412 | - |[model](https://paddledet.bj.bcebos.com/models/mot/fairmot_dla34_30e_1088x608_bytetracker.pdparams) | [config](./fairmot_dla34_30e_1088x608.yml) | +| DLA-34 | 1088x608 | 69.1 | 72.8 | 299 | 1957 | 14412 | - |[model](https://paddledet.bj.bcebos.com/models/mot/fairmot_dla34_30e_1088x608.pdparams) | [config](./fairmot_dla34_30e_1088x608.yml) | | DLA-34 + BYTETracker| 1088x608 | 70.3 | 73.2 | 234 | 2176 | 13598 | - |[model](https://paddledet.bj.bcebos.com/models/mot/fairmot_dla34_30e_1088x608_bytetracker.pdparams) | [config](./fairmot_dla34_30e_1088x608_bytetracker.yml) | **Notes:** diff --git a/configs/mot/fairmot/README_cn.md b/configs/mot/fairmot/README_cn.md index bb22459e858c36414a13c142e38184df3899b7b4..dd5a27874e6c7439222ca9f8648099ca25bf9863 100644 --- a/configs/mot/fairmot/README_cn.md +++ b/configs/mot/fairmot/README_cn.md @@ -82,7 +82,7 @@ PP-Tracking 提供了AI Studio公开项目案例,教程请参考[PP-Tracking ### 在MOT-17 Half上结果 | 骨干网络 | 输入尺寸 | MOTA | IDF1 | IDS | FP | FN | FPS | 下载链接 | 配置文件 | | :--------------| :------- | :----: | :----: | :----: | :----: | :----: | :------: | :----: |:-----: | -| DLA-34 | 1088x608 | 69.1 | 72.8 | 299 | 1957 | 14412 | - |[下载链接](https://paddledet.bj.bcebos.com/models/mot/fairmot_dla34_30e_1088x608_bytetracker.pdparams) | [配置文件](./fairmot_dla34_30e_1088x608.yml) | +| DLA-34 | 1088x608 | 69.1 | 72.8 | 299 | 1957 | 14412 | - |[下载链接](https://paddledet.bj.bcebos.com/models/mot/fairmot_dla34_30e_1088x608.pdparams) | [配置文件](./fairmot_dla34_30e_1088x608.yml) | | DLA-34 + BYTETracker| 1088x608 | 70.3 | 73.2 | 234 | 2176 | 13598 | - |[下载链接](https://paddledet.bj.bcebos.com/models/mot/fairmot_dla34_30e_1088x608_bytetracker.pdparams) | [配置文件](./fairmot_dla34_30e_1088x608_bytetracker.yml) | diff --git a/configs/mot/fairmot/fairmot_dla34_30e_1088x608_bytetracker.yml b/configs/mot/fairmot/fairmot_dla34_30e_1088x608_bytetracker.yml index 7b668c5687584a65e6895efe26454ca4418c7226..a0ad44a0f9a6ef12d3904f1d78ede896f917a90b 100644 --- a/configs/mot/fairmot/fairmot_dla34_30e_1088x608_bytetracker.yml +++ b/configs/mot/fairmot/fairmot_dla34_30e_1088x608_bytetracker.yml @@ -14,8 +14,18 @@ TrainDataset: image_lists: ['mot17.half', 'caltech.all', 'cuhksysu.train', 'prw.train', 'citypersons.train', 'eth.train'] data_fields: ['image', 'gt_bbox', 'gt_class', 'gt_ide'] +# for MOT evaluation +# If you want to change the MOT evaluation dataset, please modify 'data_root' +EvalMOTDataset: + !MOTImageFolder + dataset_dir: dataset/mot + data_root: MOT17/images/half + keep_ori_im: False # set True if save visualization images or video, or used in DeepSORT + JDETracker: use_byte: True match_thres: 0.8 conf_thres: 0.4 low_conf_thres: 0.2 + min_box_area: 200 + vertical_ratio: 1.6 # for pedestrian diff --git a/configs/mot/headtracking21/README_cn.md b/configs/mot/headtracking21/README_cn.md index eafd87d7cbae64ea46bd31682687b8c6b7f7df8a..092dfac6c240949f93d5b4cd75af0ba618e40b55 100644 --- a/configs/mot/headtracking21/README_cn.md +++ b/configs/mot/headtracking21/README_cn.md @@ -11,21 +11,22 @@ ## 模型库 ### FairMOT在HT-21 Training Set上结果 -| 骨干网络 | 输入尺寸 | MOTA | IDF1 | IDS | FP | FN | FPS | 下载链接 | 配置文件 | +| 模型 | 输入尺寸 | MOTA | IDF1 | IDS | FP | FN | FPS | 下载链接 | 配置文件 | | :--------------| :------- | :----: | :----: | :---: | 
:----: | :---: | :------: | :----: |:----: | -| DLA-34 | 1088x608 | 64.7 | 69.0 | 8533 | 148817 | 234970 | - | [下载链接](https://paddledet.bj.bcebos.com/models/mot/fairmot_dla34_30e_1088x608_headtracking21.pdparams) | [配置文件](./fairmot_dla34_30e_1088x608_headtracking21.yml) | -| HRNetv2-W18 | 1088x608 | 57.2 | 58.4 | 30950 | 188260 | 256580 | - | [下载链接](https://paddledet.bj.bcebos.com/models/mot/fairmot_hrnetv2_w18_dlafpn_30e_1088x608_headtracking21.pdparams) | [配置文件](./fairmot_hrnetv2_w18_dlafpn_30e_1088x608_headtracking21.yml) | - +| FairMOT DLA-34 | 1088x608 | 64.7 | 69.0 | 8533 | 148817 | 234970 | - | [下载链接](https://paddledet.bj.bcebos.com/models/mot/fairmot_dla34_30e_1088x608_headtracking21.pdparams) | [配置文件](./fairmot_dla34_30e_1088x608_headtracking21.yml) | +| ByteTrack-x | 1440x800 | 64.1 | 63.4 | 4191 | 185162 | 210240 | - | [下载链接](https://paddledet.bj.bcebos.com/models/mot/bytetrack_yolox_ht21.pdparams) | [配置文件](../bytetrack/bytetrack_yolox_ht21.yml) | ### FairMOT在HT-21 Test Set上结果 | 骨干网络 | 输入尺寸 | MOTA | IDF1 | IDS | FP | FN | FPS | 下载链接 | 配置文件 | | :--------------| :------- | :----: | :----: | :----: | :----: | :----: |:-------: | :----: | :----: | -| DLA-34 | 1088x608 | 60.8 | 62.8 | 12781 | 118109 | 198896 | - | [下载链接](https://paddledet.bj.bcebos.com/models/mot/fairmot_dla34_30e_1088x608_headtracking21.pdparams) | [配置文件](./fairmot_dla34_30e_1088x608_headtracking21.yml) | -| HRNetv2-W18 | 1088x608 | 41.2 | 47.1 | 48809 | 241683 | 204346 | - | [下载链接](https://paddledet.bj.bcebos.com/models/mot/fairmot_dla34_30e_1088x608_headtracking21.pdparams) | [配置文件](./fairmot_dla34_30e_1088x608_headtracking21.yml) | +| FairMOT DLA-34 | 1088x608 | 60.8 | 62.8 | 12781 | 118109 | 198896 | - | [下载链接](https://paddledet.bj.bcebos.com/models/mot/fairmot_dla34_30e_1088x608_headtracking21.pdparams) | [配置文件](./fairmot_dla34_30e_1088x608_headtracking21.yml) | +| ByteTrack-x | 1440x800 | 72.6 | 61.8 | 5163 | 71235 | 154139 | - | [下载链接](https://paddledet.bj.bcebos.com/models/mot/bytetrack_yolox_ht21.pdparams) | [配置文件](../bytetrack/bytetrack_yolox_ht21.yml) | **注意:** - - FairMOT DLA-34使用2个GPU进行训练,每个GPU上batch size为6,训练30个epoch。目前MOTA精度位于MOT官网[Head Tracking 21](https://motchallenge.net/results/Head_Tracking_21)榜单榜首。 - - FairMOT HRNetv2-W18使用4个GPU进行训练,每个GPU上batch size为8,训练30个epoch。 + - FairMOT DLA-34使用2个GPU进行训练,每个GPU上batch size为6,训练30个epoch。 + - ByteTrack使用YOLOX-x做检测器,使用8个GPU进行训练,每个GPU上batch size为8,训练30个epoch,具体细节参照[bytetrack](../bytetrack/)。 + - 此处提供PaddleDetection团队整理后的[下载链接](https://bj.bcebos.com/v1/paddledet/data/mot/HT21.zip),下载后需解压放到`dataset/mot/`目录下,HT-21 Test集的结果需要交到[官网](https://motchallenge.net)评测。 + ## 快速开始 diff --git a/configs/mot/jde/_base_/jde_darknet53.yml b/configs/mot/jde/_base_/jde_darknet53.yml index 73faa52f662e7db24ef40c25c029561225d1a3b8..f5370fc6affa10f33af04185c48d61d5a2f06d98 100644 --- a/configs/mot/jde/_base_/jde_darknet53.yml +++ b/configs/mot/jde/_base_/jde_darknet53.yml @@ -53,4 +53,4 @@ JDETracker: det_thresh: 0.3 track_buffer: 30 min_box_area: 200 - motion: KalmanFilter + vertical_ratio: 1.6 # for pedestrian diff --git a/configs/mot/mcfairmot/README.md b/configs/mot/mcfairmot/README.md index 555aee9fecd5a03c91c5fd3500e5f9b5c6d38e3c..4e595f3900fa89e0789bf98474a2ea40c1f2c633 100644 --- a/configs/mot/mcfairmot/README.md +++ b/configs/mot/mcfairmot/README.md @@ -48,7 +48,7 @@ PP-tracking provides an AI studio public project tutorial. 
Please refer to this | Model | Compression Strategy | Prediction Delay(T4) |Prediction Delay(V100)| Model Configuration File |Compression Algorithm Configuration File | | :--------------| :------- | :------: | :----: | :----: | :----: | | DLA-34 | baseline | 41.3 | 21.9 |[Configuration File](./mcfairmot_dla34_30e_1088x608_visdrone_vehicle_bytetracker.yml)| - | -| DLA-34 | off-line quantization | 37.8 | 21.2 |[Configuration File](./mcfairmot_dla34_30e_1088x608_visdrone_vehicle_bytetracker.yml)|[Configuration File](https://github.com/PaddlePaddle/PaddleDetection/blob/release/2.3/configs/slim/post_quant/mcfairmot_ptq.yml)| +| DLA-34 | off-line quantization | 37.8 | 21.2 |[Configuration File](./mcfairmot_dla34_30e_1088x608_visdrone_vehicle_bytetracker.yml)|[Configuration File](https://github.com/PaddlePaddle/PaddleDetection/blob/develop/configs/slim/post_quant/mcfairmot_ptq.yml)| ## Getting Start @@ -122,8 +122,8 @@ CUDA_VISIBLE_DEVICES=0 python3.7 tools/post_quant.py -c configs/mot/mcfairmot/mc @ARTICLE{9573394, author={Zhu, Pengfei and Wen, Longyin and Du, Dawei and Bian, Xiao and Fan, Heng and Hu, Qinghua and Ling, Haibin}, - journal={IEEE Transactions on Pattern Analysis and Machine Intelligence}, - title={Detection and Tracking Meet Drones Challenge}, + journal={IEEE Transactions on Pattern Analysis and Machine Intelligence}, + title={Detection and Tracking Meet Drones Challenge}, year={2021}, volume={}, number={}, diff --git a/configs/mot/mcfairmot/README_cn.md b/configs/mot/mcfairmot/README_cn.md index 184045584a455cc8b2443a9d5541e12732e625a9..d054a04314977f397e840a4d778770ba9b50d366 100644 --- a/configs/mot/mcfairmot/README_cn.md +++ b/configs/mot/mcfairmot/README_cn.md @@ -47,7 +47,7 @@ PP-Tracking 提供了AI Studio公开项目案例,教程请参考[PP-Tracking | 骨干网络 | 压缩策略 | 预测时延(T4) |预测时延(V100)| 配置文件 |压缩算法配置文件 | | :--------------| :------- | :------: | :----: | :----: | :----: | | DLA-34 | baseline | 41.3 | 21.9 |[配置文件](./mcfairmot_dla34_30e_1088x608_visdrone_vehicle_bytetracker.yml)| - | -| DLA-34 | 离线量化 | 37.8 | 21.2 |[配置文件](./mcfairmot_dla34_30e_1088x608_visdrone_vehicle_bytetracker.yml)|[配置文件](https://github.com/PaddlePaddle/PaddleDetection/blob/release/2.3/configs/slim/post_quant/mcfairmot_ptq.yml)| +| DLA-34 | 离线量化 | 37.8 | 21.2 |[配置文件](./mcfairmot_dla34_30e_1088x608_visdrone_vehicle_bytetracker.yml)|[配置文件](https://github.com/PaddlePaddle/PaddleDetection/blob/develop/configs/slim/post_quant/mcfairmot_ptq.yml)| ## 快速开始 @@ -119,8 +119,8 @@ CUDA_VISIBLE_DEVICES=0 python3.7 tools/post_quant.py -c configs/mot/mcfairmot/mc @ARTICLE{9573394, author={Zhu, Pengfei and Wen, Longyin and Du, Dawei and Bian, Xiao and Fan, Heng and Hu, Qinghua and Ling, Haibin}, - journal={IEEE Transactions on Pattern Analysis and Machine Intelligence}, - title={Detection and Tracking Meet Drones Challenge}, + journal={IEEE Transactions on Pattern Analysis and Machine Intelligence}, + title={Detection and Tracking Meet Drones Challenge}, year={2021}, volume={}, number={}, diff --git a/configs/mot/ocsort/README.md b/configs/mot/ocsort/README.md new file mode 100644 index 0000000000000000000000000000000000000000..1d2d6144a2b4a0360854c1fbd8557d9158ac3608 --- /dev/null +++ b/configs/mot/ocsort/README.md @@ -0,0 +1,101 @@ +简体中文 | [English](README.md) + +# OC_SORT (Observation-Centric SORT: Rethinking SORT for Robust Multi-Object Tracking) + +## 内容 +- [简介](#简介) +- [模型库](#模型库) +- [快速开始](#快速开始) +- [引用](#引用) + +## 简介 +[OC_SORT](https://arxiv.org/abs/2203.14360)(Observation-Centric SORT: Rethinking SORT for Robust Multi-Object 
Tracking)。此处提供了几个常用检测器的配置作为参考。由于训练数据集、输入尺度、训练epoch数、NMS阈值设置等的不同均会导致模型精度和性能的差异,请自行根据需求进行适配。 + +## 模型库 + +### OC_SORT在MOT-17 half Val Set上结果 + +| 检测训练数据集 | 检测器 | 输入尺度 | ReID | 检测mAP | MOTA | IDF1 | FPS | 配置文件 | +| :-------- | :----- | :----: | :----:|:------: | :----: |:-----: |:----:|:----: | +| MOT-17 half train | PP-YOLOE-l | 640x640 | - | 52.9 | 50.1 | 62.6 | - |[配置文件](./ocsort_ppyoloe.yml) | +| **mix_mot_ch** | YOLOX-x | 800x1440| - | 61.9 | 75.5 | 77.0 | - |[配置文件](./ocsort_yolox.yml) | + +**注意:** + - 模型权重下载链接在配置文件中的```det_weights```和```reid_weights```,运行验证的命令即可自动下载,OC_SORT默认不需要```reid_weights```权重。 + - **MOT17-half train**是MOT17的train序列(共7个)每个视频的前一半帧的图片和标注组成的数据集,而为了验证精度可以都用**MOT17-half val**数据集去评估,它是每个视频的后一半帧组成的,数据集可以从[此链接](https://dataset.bj.bcebos.com/mot/MOT17.zip)下载,并解压放在`dataset/mot/`文件夹下。 + - **mix_mot_ch**数据集,是MOT17、CrowdHuman组成的联合数据集,**mix_det**是MOT17、CrowdHuman、Cityscapes、ETHZ组成的联合数据集,数据集整理的格式和目录可以参考[此链接](https://github.com/ifzhang/ByteTrack#data-preparation),最终放置于`dataset/mot/`目录下。为了验证精度可以都用**MOT17-half val**数据集去评估。 + - OC_SORT的训练是单独的检测器训练MOT数据集,推理是组装跟踪器去评估MOT指标,单独的检测模型也可以评估检测指标。 + - OC_SORT的导出部署,是单独导出检测模型,再组装跟踪器运行的,参照[PP-Tracking](../../../deploy/pptracking/python)。 + - OC_SORT是PP-Human和PP-Vehicle等Pipeline分析项目跟踪方向的主要方案,具体使用参照[Pipeline](../../../deploy/pipeline)和[MOT](../../../deploy/pipeline/docs/tutorials/pphuman_mot.md)。 + + +## 快速开始 + +### 1. 训练 +通过如下命令一键式启动训练和评估 +```bash +python -m paddle.distributed.launch --log_dir=ppyoloe --gpus 0,1,2,3,4,5,6,7 tools/train.py -c configs/mot/bytetrack/detector/ppyoloe_crn_l_36e_640x640_mot17half.yml --eval --amp +``` + +### 2. 评估 +#### 2.1 评估检测效果 +```bash +CUDA_VISIBLE_DEVICES=0 python tools/eval.py -c configs/mot/bytetrack/detector/ppyoloe_crn_l_36e_640x640_mot17half.yml +``` + +**注意:** + - 评估检测使用的是```tools/eval.py```, 评估跟踪使用的是```tools/eval_mot.py```。 + +#### 2.2 评估跟踪效果 +```bash +CUDA_VISIBLE_DEVICES=0 python tools/eval_mot.py -c configs/mot/ocsort/ocsort_ppyoloe.yml --scaled=True +# 或者 +CUDA_VISIBLE_DEVICES=0 python tools/eval_mot.py -c configs/mot/ocsort/ocsort_yolox.yml --scaled=True +``` +**注意:** + - `--scaled`表示在模型输出结果的坐标是否已经是缩放回原图的,如果使用的检测模型是JDE YOLOv3则为False,如果使用通用检测模型则为True, 默认值是False。 + - 跟踪结果会存于`{output_dir}/mot_results/`中,里面每个视频序列对应一个txt,每个txt文件每行信息是`frame,id,x1,y1,w,h,score,-1,-1,-1`, 此外`{output_dir}`可通过`--output_dir`设置。 + +### 3. 预测 + +使用单个GPU通过如下命令预测一个视频,并保存为视频 + +```bash +# 下载demo视频 +wget https://bj.bcebos.com/v1/paddledet/data/mot/demo/mot17_demo.mp4 + +CUDA_VISIBLE_DEVICES=0 python tools/infer_mot.py -c configs/mot/ocsort/ocsort_yolox.yml --video_file=mot17_demo.mp4 --scaled=True --save_videos +``` + +**注意:** + - 请先确保已经安装了[ffmpeg](https://ffmpeg.org/ffmpeg.html), Linux(Ubuntu)平台可以直接用以下命令安装:`apt-get update && apt-get install -y ffmpeg`。 + - `--scaled`表示在模型输出结果的坐标是否已经是缩放回原图的,如果使用的检测模型是JDE的YOLOv3则为False,如果使用通用检测模型则为True。 + + +### 4. 导出预测模型 + +Step 1:导出检测模型 +```bash +CUDA_VISIBLE_DEVICES=0 python tools/export_model.py -c configs/mot/bytetrack/detector/yolox_x_24e_800x1440_mix_det.yml -o weights=https://paddledet.bj.bcebos.com/models/mot/yolox_x_24e_800x1440_mix_det.pdparams +``` + +### 5.
用导出的模型基于Python去预测 + +```bash +python deploy/pptracking/python/mot_sde_infer.py --model_dir=output_inference/yolox_x_24e_800x1440_mix_det/ --tracker_config=deploy/pptracking/python/tracker_config.yml --video_file=mot17_demo.mp4 --device=GPU --save_mot_txts +``` +**注意:** + - 运行前需要手动修改`tracker_config.yml`的跟踪器类型为`type: OCSORTTracker`。 + - 跟踪模型是对视频进行预测,不支持单张图的预测,默认保存跟踪结果可视化后的视频,可添加`--save_mot_txts`(对每个视频保存一个txt)或`--save_mot_txt_per_img`(对每张图片保存一个txt)表示保存跟踪结果的txt文件,或`--save_images`表示保存跟踪结果可视化图片。 + - 跟踪结果txt文件每行信息是`frame,id,x1,y1,w,h,score,-1,-1,-1`。 + + +## 引用 +``` +@article{cao2022observation, + title={Observation-Centric SORT: Rethinking SORT for Robust Multi-Object Tracking}, + author={Cao, Jinkun and Weng, Xinshuo and Khirodkar, Rawal and Pang, Jiangmiao and Kitani, Kris}, + journal={arXiv preprint arXiv:2203.14360}, + year={2022} +} +``` diff --git a/configs/mot/ocsort/ocsort_ppyoloe.yml b/configs/mot/ocsort/ocsort_ppyoloe.yml new file mode 100644 index 0000000000000000000000000000000000000000..0d81b2d1c0b0def8cd4458a96c6a352e04c16456 --- /dev/null +++ b/configs/mot/ocsort/ocsort_ppyoloe.yml @@ -0,0 +1,75 @@ +# This config is an assembled config for ByteTrack MOT, used as eval/infer mode for MOT. +_BASE_: [ + '../bytetrack/detector/ppyoloe_crn_l_36e_640x640_mot17half.yml', + '../bytetrack/_base_/mot17.yml', + '../bytetrack/_base_/ppyoloe_mot_reader_640x640.yml' +] +weights: output/ocsort_ppyoloe/model_final +log_iter: 20 +snapshot_epoch: 2 + +metric: MOT # eval/infer mode, set 'COCO' can be training mode +num_classes: 1 + +architecture: ByteTrack +pretrain_weights: https://bj.bcebos.com/v1/paddledet/models/ppyoloe_crn_l_300e_coco.pdparams +ByteTrack: + detector: YOLOv3 # PPYOLOe version + reid: None + tracker: OCSORTTracker +det_weights: https://bj.bcebos.com/v1/paddledet/models/mot/ppyoloe_crn_l_36e_640x640_mot17half.pdparams +reid_weights: None + +YOLOv3: + backbone: CSPResNet + neck: CustomCSPPAN + yolo_head: PPYOLOEHead + post_process: ~ + +# Tracking requires higher quality boxes, so NMS score_threshold will be higher +PPYOLOEHead: + fpn_strides: [32, 16, 8] + grid_cell_scale: 5.0 + grid_cell_offset: 0.5 + static_assigner_epoch: -1 # 100 + use_varifocal_loss: True + loss_weight: {class: 1.0, iou: 2.5, dfl: 0.5} + static_assigner: + name: ATSSAssigner + topk: 9 + assigner: + name: TaskAlignedAssigner + topk: 13 + alpha: 1.0 + beta: 6.0 + nms: + name: MultiClassNMS + nms_top_k: 1000 + keep_top_k: 100 + score_threshold: 0.1 # 0.01 in original detector + nms_threshold: 0.4 # 0.6 in original detector + + +OCSORTTracker: + det_thresh: 0.4 # 0.6 in yolox ocsort + max_age: 30 + min_hits: 3 + iou_threshold: 0.3 + delta_t: 3 + inertia: 0.2 + vertical_ratio: 0 + min_box_area: 0 + use_byte: False + + +# MOTDataset for MOT evaluation and inference +EvalMOTDataset: + !MOTImageFolder + dataset_dir: dataset/mot + data_root: MOT17/images/half + keep_ori_im: True # set as True in DeepSORT and ByteTrack + +TestMOTDataset: + !MOTImageFolder + dataset_dir: dataset/mot + keep_ori_im: True # set True if save visualization images or video diff --git a/configs/mot/ocsort/ocsort_yolox.yml b/configs/mot/ocsort/ocsort_yolox.yml new file mode 100644 index 0000000000000000000000000000000000000000..4f05e2d04ce1d83c98e54b35d21217915c5ee8f4 --- /dev/null +++ b/configs/mot/ocsort/ocsort_yolox.yml @@ -0,0 +1,83 @@ +# This config is an assembled config for ByteTrack MOT, used as eval/infer mode for MOT. 
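+# Note (editorial comment): 'architecture: ByteTrack' below only names the assembled detector+tracker wrapper; the tracker configured here is OCSORTTracker. +# Assumed deploy sketch (assumes the default in deploy/pptracking/python/tracker_config.yml is 'type: JDETracker'; not part of the original file): +# sed -i 's/type: JDETracker/type: OCSORTTracker/' deploy/pptracking/python/tracker_config.yml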
+_BASE_: [ + '../bytetrack/detector/yolox_x_24e_800x1440_mix_det.yml', + '../bytetrack/_base_/mix_det.yml', + '../bytetrack/_base_/yolox_mot_reader_800x1440.yml' +] +weights: output/ocsort_yolox/model_final +log_iter: 20 +snapshot_epoch: 2 + +metric: MOT # eval/infer mode +num_classes: 1 + +architecture: ByteTrack +pretrain_weights: https://bj.bcebos.com/v1/paddledet/models/yolox_x_300e_coco.pdparams +ByteTrack: + detector: YOLOX + reid: None + tracker: OCSORTTracker +det_weights: https://bj.bcebos.com/v1/paddledet/models/mot/yolox_x_24e_800x1440_mix_mot_ch.pdparams +reid_weights: None + +depth_mult: 1.33 +width_mult: 1.25 + +YOLOX: + backbone: CSPDarkNet + neck: YOLOCSPPAN + head: YOLOXHead + input_size: [800, 1440] + size_stride: 32 + size_range: [18, 22] # multi-scale range [576*1024 ~ 800*1440], w/h ratio=1.8 + +CSPDarkNet: + arch: "X" + return_idx: [2, 3, 4] + depthwise: False + +YOLOCSPPAN: + depthwise: False + +# Tracking requires higher quality boxes, so NMS score_threshold will be higher +YOLOXHead: + l1_epoch: 20 + depthwise: False + loss_weight: {cls: 1.0, obj: 1.0, iou: 5.0, l1: 1.0} + assigner: + name: SimOTAAssigner + candidate_topk: 10 + use_vfl: False + nms: + name: MultiClassNMS + nms_top_k: 1000 + keep_top_k: 100 + score_threshold: 0.1 + nms_threshold: 0.7 + # For speed while keep high mAP, you can modify 'nms_top_k' to 1000 and 'keep_top_k' to 100, the mAP will drop about 0.1%. + # For high speed demo, you can modify 'score_threshold' to 0.25 and 'nms_threshold' to 0.45, but the mAP will drop a lot. + + +OCSORTTracker: + det_thresh: 0.6 + max_age: 30 + min_hits: 3 + iou_threshold: 0.3 + delta_t: 3 + inertia: 0.2 + vertical_ratio: 1.6 + min_box_area: 100 + use_byte: False + + +# MOTDataset for MOT evaluation and inference +EvalMOTDataset: + !MOTImageFolder + dataset_dir: dataset/mot + data_root: MOT17/images/half + keep_ori_im: True # set as True in DeepSORT and ByteTrack + +TestMOTDataset: + !MOTImageFolder + dataset_dir: dataset/mot + keep_ori_im: True # set True if save visualization images or video diff --git a/configs/mot/pedestrian/README_cn.md b/configs/mot/pedestrian/README_cn.md index eca2963c51872e000b7b9ab0e02770b1fc98b60a..768733db537c5f752bbb56198bad196c68b28602 100644 --- a/configs/mot/pedestrian/README_cn.md +++ b/configs/mot/pedestrian/README_cn.md @@ -18,7 +18,7 @@ | :-------------| :-------- | :------- | :----: | :----: | :----: | :-----: |:------: | | PathTrack | DLA-34 | 1088x608 | 44.9 | 59.3 | - |[下载链接](https://paddledet.bj.bcebos.com/models/mot/fairmot_dla34_30e_1088x608_pathtrack.pdparams) | [配置文件](./fairmot_dla34_30e_1088x608_pathtrack.yml) | | VisDrone | DLA-34 | 1088x608 | 49.2 | 63.1 | - | [下载链接](https://paddledet.bj.bcebos.com/models/mot/fairmot_dla34_30e_1088x608_visdrone_pedestrian.pdparams) | [配置文件](./fairmot_dla34_30e_1088x608_visdrone_pedestrian.yml) | -| VisDrone | HRNetv2-W18| 1088x608 | 40.5 | 54.7 | - | [下载链接](https://paddledet.bj.bcebos.com/models/mot/fairmot_hrnetv2_w18_dlafpn_30e_864x480_visdrone_pedestrian.pdparams) | [配置文件](./fairmot_hrnetv2_w18_dlafpn_30e_864x480_visdrone_pedestrian.yml) | +| VisDrone | HRNetv2-W18| 1088x608 | 40.5 | 54.7 | - | [下载链接](https://paddledet.bj.bcebos.com/models/mot/fairmot_hrnetv2_w18_dlafpn_30e_1088x608_visdrone_pedestrian.pdparams) | [配置文件](./fairmot_hrnetv2_w18_dlafpn_30e_1088x608_visdrone_pedestrian.yml) | | VisDrone | HRNetv2-W18| 864x480 | 38.6 | 50.9 | - | [下载链接](https://paddledet.bj.bcebos.com/models/mot/fairmot_hrnetv2_w18_dlafpn_30e_864x480_visdrone_pedestrian.pdparams) | 
[配置文件](./fairmot_hrnetv2_w18_dlafpn_30e_864x480_visdrone_pedestrian.yml) | | VisDrone | HRNetv2-W18| 576x320 | 30.6 | 47.2 | - | [下载链接](https://paddledet.bj.bcebos.com/models/mot/fairmot_hrnetv2_w18_dlafpn_30e_576x320_visdrone_pedestrian.pdparams) | [配置文件](./fairmot_hrnetv2_w18_dlafpn_30e_576x320_visdrone_pedestrian.yml) | @@ -124,8 +124,8 @@ month={Oct},} @ARTICLE{9573394, author={Zhu, Pengfei and Wen, Longyin and Du, Dawei and Bian, Xiao and Fan, Heng and Hu, Qinghua and Ling, Haibin}, - journal={IEEE Transactions on Pattern Analysis and Machine Intelligence}, - title={Detection and Tracking Meet Drones Challenge}, + journal={IEEE Transactions on Pattern Analysis and Machine Intelligence}, + title={Detection and Tracking Meet Drones Challenge}, year={2021}, volume={}, number={}, diff --git a/configs/picodet/README.md b/configs/picodet/README.md index a226cc9a95e91b3e28635023996e201eed08e089..b6562428308b83a4aa49570a7a983a081beace7c 100644 --- a/configs/picodet/README.md +++ b/configs/picodet/README.md @@ -1,60 +1,63 @@ -English | [简体中文](README_cn.md) +简体中文 | [English](README_en.md) # PP-PicoDet ![](../../docs/images/picedet_demo.jpeg) -## News +## 最新动态 -- Released a new series of PP-PicoDet models: **(2022.03.20)** - - (1) It was used TAL/Task-aligned-Head and optimized PAN, which greatly improved the accuracy; - - (2) Moreover optimized CPU prediction speed, and the training speed is greatly improved; - - (3) The export model includes post-processing, and the prediction directly outputs the result, without secondary development, and the migration cost is lower. +- 发布全新系列PP-PicoDet模型:**(2022.03.20)** + - (1)引入TAL及ETA Head,优化PAN等结构,精度提升2个点以上; + - (2)优化CPU端预测速度,同时训练速度提升一倍; + - (3)导出模型将后处理包含在网络中,预测直接输出box结果,无需二次开发,迁移成本更低,端到端预测速度提升10%-20%。 -### Legacy Model +## 历史版本模型 -- Please refer to: [PicoDet 2021.10版本](./legacy_model/) +- 详情请参考:[PicoDet 2021.10版本](./legacy_model/) -## Introduction +## 简介 -We developed a series of lightweight models, named `PP-PicoDet`. Because of the excellent performance, our models are very suitable for deployment on mobile or CPU. For more details, please refer to our [report on arXiv](https://arxiv.org/abs/2111.00902). +PaddleDetection中提出了全新的轻量级系列模型`PP-PicoDet`,在移动端具有卓越的性能,成为全新SOTA轻量级模型。详细的技术细节可以参考我们的[arXiv技术报告](https://arxiv.org/abs/2111.00902)。 -- 🌟 Higher mAP: the **first** object detectors that surpass mAP(0.5:0.95) **30+** within 1M parameters when the input size is 416. -- 🚀 Faster latency: 150FPS on mobile ARM CPU. -- 😊 Deploy friendly: support PaddleLite/MNN/NCNN/OpenVINO and provide C++/Python/Android implementation. -- 😍 Advanced algorithm: use the most advanced algorithms and offer innovation, such as ESNet, CSP-PAN, SimOTA with VFL, etc. +PP-PicoDet模型有如下特点: + +- 🌟 更高的mAP: 第一个在1M参数量之内`mAP(0.5:0.95)`超越**30+**(输入416像素时)。 +- 🚀 更快的预测速度: 网络预测在ARM CPU下可达150FPS。 +- 😊 部署友好: 支持PaddleLite/MNN/NCNN/OpenVINO等预测库,支持转出ONNX,提供了C++/Python/Android的demo。 +- 😍 先进的算法: 我们在现有SOTA算法中进行了创新, 包括:ESNet, CSP-PAN, SimOTA等等。
    -## Benchmark +## 基线 + +| 模型 | 输入尺寸 | mAPval
    0.5:0.95 | mAPval
    0.5 | 参数量
    (M) | FLOPS
    (G) | 预测时延[CPU](#latency)
    (ms) | 预测时延[Lite](#latency)
    (ms) | 权重下载 | 配置文件 | 导出模型 | +| :-------- | :--------: | :---------------------: | :----------------: | :----------------: | :---------------: | :-----------------------------: | :-----------------------------: | :----------------------------------------: | :--------------------------------------- | :--------------------------------------- | +| PicoDet-XS | 320*320 | 23.5 | 36.1 | 0.70 | 0.67 | 3.9ms | 7.81ms | [model](https://paddledet.bj.bcebos.com/models/picodet_xs_320_coco_lcnet.pdparams) | [log](https://paddledet.bj.bcebos.com/logs/train_picodet_xs_320_coco_lcnet.log) | [config](https://github.com/PaddlePaddle/PaddleDetection/tree/develop/configs/picodet/picodet_xs_320_coco_lcnet.yml) | [w/ 后处理](https://paddledet.bj.bcebos.com/deploy/Inference/picodet_xs_320_coco_lcnet.tar) | [w/o 后处理](https://paddledet.bj.bcebos.com/deploy/Inference/picodet_xs_320_coco_lcnet_non_postprocess.tar) | +| PicoDet-XS | 416*416 | 26.2 | 39.3 | 0.70 | 1.13 | 6.1ms | 12.38ms | [model](https://paddledet.bj.bcebos.com/models/picodet_xs_416_coco_lcnet.pdparams) | [log](https://paddledet.bj.bcebos.com/logs/train_picodet_xs_416_coco_lcnet.log) | [config](https://github.com/PaddlePaddle/PaddleDetection/tree/develop/configs/picodet/picodet_xs_416_coco_lcnet.yml) | [w/ 后处理](https://paddledet.bj.bcebos.com/deploy/Inference/picodet_xs_416_coco_lcnet.tar) | [w/o 后处理](https://paddledet.bj.bcebos.com/deploy/Inference/picodet_xs_416_coco_lcnet_non_postprocess.tar) | +| PicoDet-S | 320*320 | 29.1 | 43.4 | 1.18 | 0.97 | 4.8ms | 9.56ms | [model](https://paddledet.bj.bcebos.com/models/picodet_s_320_coco_lcnet.pdparams) | [log](https://paddledet.bj.bcebos.com/logs/train_picodet_s_320_coco_lcnet.log) | [config](https://github.com/PaddlePaddle/PaddleDetection/tree/develop/configs/picodet/picodet_s_320_coco_lcnet.yml) | [w/ 后处理](https://paddledet.bj.bcebos.com/deploy/Inference/picodet_s_320_coco_lcnet.tar) | [w/o 后处理](https://paddledet.bj.bcebos.com/deploy/Inference/picodet_s_320_coco_lcnet_non_postprocess.tar) | +| PicoDet-S | 416*416 | 32.5 | 47.6 | 1.18 | 1.65 | 6.6ms | 15.20ms | [model](https://paddledet.bj.bcebos.com/models/picodet_s_416_coco_lcnet.pdparams) | [log](https://paddledet.bj.bcebos.com/logs/train_picodet_s_416_coco_lcnet.log) | [config](https://github.com/PaddlePaddle/PaddleDetection/tree/develop/configs/picodet/picodet_s_416_coco_lcnet.yml) | [w/ 后处理](https://paddledet.bj.bcebos.com/deploy/Inference/picodet_s_416_coco_lcnet.tar) | [w/o 后处理](https://paddledet.bj.bcebos.com/deploy/Inference/picodet_s_416_coco_lcnet_non_postprocess.tar) | +| PicoDet-M | 320*320 | 34.4 | 50.0 | 3.46 | 2.57 | 8.2ms | 17.68ms | [model](https://paddledet.bj.bcebos.com/models/picodet_m_320_coco_lcnet.pdparams) | [log](https://paddledet.bj.bcebos.com/logs/train_picodet_m_320_coco_lcnet.log) | [config](https://github.com/PaddlePaddle/PaddleDetection/tree/develop/configs/picodet/picodet_m_320_coco_lcnet.yml) | [w/ 后处理](https://paddledet.bj.bcebos.com/deploy/Inference/picodet_m_320_coco_lcnet.tar) | [w/o 后处理](https://paddledet.bj.bcebos.com/deploy/Inference/picodet_m_320_coco_lcnet_non_postprocess.tar) | +| PicoDet-M | 416*416 | 37.5 | 53.4 | 3.46 | 4.34 | 12.7ms | 28.39ms | [model](https://paddledet.bj.bcebos.com/models/picodet_m_416_coco_lcnet.pdparams) | [log](https://paddledet.bj.bcebos.com/logs/train_picodet_m_416_coco_lcnet.log) | [config](https://github.com/PaddlePaddle/PaddleDetection/tree/develop/configs/picodet/picodet_m_416_coco_lcnet.yml) | [w/ 后处理](https://paddledet.bj.bcebos.com/deploy/Inference/picodet_m_416_coco_lcnet.tar) 
| [w/o 后处理](https://paddledet.bj.bcebos.com/deploy/Inference/picodet_m_416_coco_lcnet_non_postprocess.tar) | +| PicoDet-L | 320*320 | 36.1 | 52.0 | 5.80 | 4.20 | 11.5ms | 25.21ms | [model](https://paddledet.bj.bcebos.com/models/picodet_l_320_coco_lcnet.pdparams) | [log](https://paddledet.bj.bcebos.com/logs/train_picodet_l_320_coco_lcnet.log) | [config](https://github.com/PaddlePaddle/PaddleDetection/tree/develop/configs/picodet/picodet_l_320_coco_lcnet.yml) | [w/ 后处理](https://paddledet.bj.bcebos.com/deploy/Inference/picodet_l_320_coco_lcnet.tar) | [w/o 后处理](https://paddledet.bj.bcebos.com/deploy/Inference/picodet_l_320_coco_lcnet_non_postprocess.tar) | +| PicoDet-L | 416*416 | 39.4 | 55.7 | 5.80 | 7.10 | 20.7ms | 42.23ms | [model](https://paddledet.bj.bcebos.com/models/picodet_l_416_coco_lcnet.pdparams) | [log](https://paddledet.bj.bcebos.com/logs/train_picodet_l_416_coco_lcnet.log) | [config](https://github.com/PaddlePaddle/PaddleDetection/tree/develop/configs/picodet/picodet_l_416_coco_lcnet.yml) | [w/ 后处理](https://paddledet.bj.bcebos.com/deploy/Inference/picodet_l_416_coco_lcnet.tar) | [w/o 后处理](https://paddledet.bj.bcebos.com/deploy/Inference/picodet_l_416_coco_lcnet_non_postprocess.tar) | +| PicoDet-L | 640*640 | 42.6 | 59.2 | 5.80 | 16.81 | 62.5ms | 108.1ms | [model](https://paddledet.bj.bcebos.com/models/picodet_l_640_coco_lcnet.pdparams) | [log](https://paddledet.bj.bcebos.com/logs/train_picodet_l_640_coco_lcnet.log) | [config](https://github.com/PaddlePaddle/PaddleDetection/tree/develop/configs/picodet/picodet_l_640_coco_lcnet.yml) | [w/ 后处理](https://paddledet.bj.bcebos.com/deploy/Inference/picodet_l_640_coco_lcnet.tar) | [w/o 后处理](https://paddledet.bj.bcebos.com/deploy/Inference/picodet_l_640_coco_lcnet_non_postprocess.tar) | -| Model | Input size | mAPval
    0.5:0.95 | mAPval
    0.5 | Params
    (M) | FLOPS
    (G) | Latency[CPU](#latency)
    (ms) | Latency[Lite](#latency)
    (ms) | Download | Config | -| :-------- | :--------: | :---------------------: | :----------------: | :----------------: | :---------------: | :-----------------------------: | :-----------------------------: | :----------------------------------------: | :--------------------------------------- | -| PicoDet-XS | 320*320 | 23.5 | 36.1 | 0.70 | 0.67 | 10.9ms | 7.81ms | [model](https://paddledet.bj.bcebos.com/models/picodet_xs_320_coco_lcnet.pdparams) | [log](https://paddledet.bj.bcebos.com/logs/train_picodet_xs_320_coco_lcnet.log) | [config](https://github.com/PaddlePaddle/PaddleDetection/tree/develop/configs/picodet/picodet_xs_320_coco_lcnet.yml) | -| PicoDet-XS | 416*416 | 26.2 | 39.3 | 0.70 | 1.13 | 15.4ms | 12.38ms | [model](https://paddledet.bj.bcebos.com/models/picodet_xs_416_coco_lcnet.pdparams) | [log](https://paddledet.bj.bcebos.com/logs/train_picodet_xs_416_coco_lcnet.log) | [config](https://github.com/PaddlePaddle/PaddleDetection/tree/develop/configs/picodet/picodet_xs_416_coco_lcnet.yml) | -| PicoDet-S | 320*320 | 29.1 | 43.4 | 1.18 | 0.97 | 12.6ms | 9.56ms | [model](https://paddledet.bj.bcebos.com/models/picodet_s_320_coco_lcnet.pdparams) | [log](https://paddledet.bj.bcebos.com/logs/train_picodet_s_320_coco_lcnet.log) | [config](https://github.com/PaddlePaddle/PaddleDetection/tree/develop/configs/picodet/picodet_s_320_coco_lcnet.yml) | -| PicoDet-S | 416*416 | 32.5 | 47.6 | 1.18 | 1.65 | 17.2ms | 15.20 | [model](https://paddledet.bj.bcebos.com/models/picodet_s_416_coco_lcnet.pdparams) | [log](https://paddledet.bj.bcebos.com/logs/train_picodet_s_416_coco_lcnet.log) | [config](https://github.com/PaddlePaddle/PaddleDetection/tree/develop/configs/picodet/picodet_s_416_coco_lcnet.yml) | -| PicoDet-M | 320*320 | 34.4 | 50.0 | 3.46 | 2.57 | 14.5ms | 17.68ms | [model](https://paddledet.bj.bcebos.com/models/picodet_m_320_coco_lcnet.pdparams) | [log](https://paddledet.bj.bcebos.com/logs/train_picodet_m_320_coco_lcnet.log) | [config](https://github.com/PaddlePaddle/PaddleDetection/tree/develop/configs/picodet/picodet_m_320_coco_lcnet.yml) | -| PicoDet-M | 416*416 | 37.5 | 53.4 | 3.46 | 4.34 | 19.5ms | 28.39ms | [model](https://paddledet.bj.bcebos.com/models/picodet_m_416_coco_lcnet.pdparams) | [log](https://paddledet.bj.bcebos.com/logs/train_picodet_m_416_coco_lcnet.log) | [config](https://github.com/PaddlePaddle/PaddleDetection/tree/develop/configs/picodet/picodet_m_416_coco_lcnet.yml) | -| PicoDet-L | 320*320 | 36.1 | 52.0 | 5.80 | 4.20 | 18.3ms | 25.21ms | [model](https://paddledet.bj.bcebos.com/models/picodet_l_320_coco_lcnet.pdparams) | [log](https://paddledet.bj.bcebos.com/logs/train_picodet_l_320_coco_lcnet.log) | [config](https://github.com/PaddlePaddle/PaddleDetection/tree/develop/configs/picodet/picodet_l_320_coco_lcnet.yml) | -| PicoDet-L | 416*416 | 39.4 | 55.7 | 5.80 | 7.10 | 22.1ms | 42.23ms | [model](https://paddledet.bj.bcebos.com/models/picodet_l_416_coco_lcnet.pdparams) | [log](https://paddledet.bj.bcebos.com/logs/train_picodet_l_416_coco_lcnet.log) | [config](https://github.com/PaddlePaddle/PaddleDetection/tree/develop/configs/picodet/picodet_l_416_coco_lcnet.yml) | -| PicoDet-L | 640*640 | 42.3 | 59.2 | 5.80 | 16.81 | 43.1ms | 108.1ms | [model](https://paddledet.bj.bcebos.com/models/picodet_l_640_coco_lcnet.pdparams) | [log](https://paddledet.bj.bcebos.com/logs/train_picodet_l_640_coco_lcnet.log) | [config](https://github.com/PaddlePaddle/PaddleDetection/tree/develop/configs/picodet/picodet_l_640_coco_lcnet.yml) |
-Table Notes:
+注意事项:

-- Latency: All our models test on `Intel-Xeon-Gold-6148` CPU with MKLDNN by 10 threads and `Qualcomm Snapdragon 865(4xA77+4xA55)` with 4 threads by arm8 and with FP16. In the above table, test CPU latency on Paddle-Inference and testing Mobile latency with `Lite`->[Paddle-Lite](https://github.com/PaddlePaddle/Paddle-Lite).
-- PicoDet is trained on COCO train2017 dataset and evaluated on COCO val2017. And PicoDet used 4 GPUs for training and all checkpoints are trained with default settings and hyperparameters.
-- Benchmark test: When testing the speed benchmark, the post-processing is not included in the exported model, you need to set `-o export.benchmark=True` or manually modify [runtime.yml](https://github.com/PaddlePaddle/PaddleDetection/blob/develop/configs/runtime.yml#L12).
+- 时延测试: 我们所有的模型都在`英特尔酷睿i7 10750H`的CPU和`骁龙865(4xA77+4xA55)`的ARM CPU上测试(4线程,FP16预测)。上面表格中标有`CPU`的是使用OpenVINO测试,标有`Lite`的是使用[Paddle Lite](https://github.com/PaddlePaddle/Paddle-Lite)进行测试。
+- PicoDet在COCO train2017上训练,并且在COCO val2017上进行验证。使用4卡GPU训练,并且上表所有的预训练模型都是通过发布的默认配置训练得到。
+- Benchmark测试:测试速度benchmark性能时,导出模型后处理不包含在网络中,需要设置`-o export.benchmark=True`或手动修改[runtime.yml](https://github.com/PaddlePaddle/PaddleDetection/blob/develop/configs/runtime.yml#L12),导出命令示例见下方。
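+
+下面的导出命令仅为示意(配置与权重可替换为上表中任一模型):
+
+```shell
+python tools/export_model.py -c configs/picodet/picodet_s_320_coco_lcnet.yml \
+       -o weights=https://paddledet.bj.bcebos.com/models/picodet_s_320_coco_lcnet.pdparams \
+       export.benchmark=True
+```
+
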
    -#### Benchmark of Other Models +#### 其他模型的基线 -| Model | Input size | mAPval
    0.5:0.95 | mAPval
    0.5 | Params
    (M) | FLOPS
    (G) | Latency[NCNN](#latency)
    (ms) | +| 模型 | 输入尺寸 | mAPval
    0.5:0.95 | mAPval
    0.5 | 参数量
    (M) | FLOPS
    (G) | 预测时延[NCNN](#latency)
    (ms) | | :-------- | :--------: | :---------------------: | :----------------: | :----------------: | :---------------: | :-----------------------------: | | YOLOv3-Tiny | 416*416 | 16.6 | 33.1 | 8.86 | 5.62 | 25.42 | | YOLOv4-Tiny | 416*416 | 21.7 | 40.2 | 6.06 | 6.96 | 23.69 | @@ -68,112 +71,118 @@ We developed a series of lightweight models, named `PP-PicoDet`. Because of the | YOLOv5n | 640*640 | 28.4 | 46.0 | 1.9 | 4.5 | 40.35 | | YOLOv5s | 640*640 | 37.2 | 56.0 | 7.2 | 16.5 | 78.05 | -- Testing Mobile latency with code: [MobileDetBenchmark](https://github.com/JiweiMaster/MobileDetBenchmark). +- ARM测试的benchmark脚本来自: [MobileDetBenchmark](https://github.com/JiweiMaster/MobileDetBenchmark)。 -## Quick Start +## 快速开始
    -Requirements: +依赖包: -- PaddlePaddle >= 2.2.1 +- PaddlePaddle == 2.2.2
    -Installation +安装 -- [Installation guide](https://github.com/PaddlePaddle/PaddleDetection/blob/develop/docs/tutorials/INSTALL.md) -- [Prepare dataset](https://github.com/PaddlePaddle/PaddleDetection/blob/develop/docs/tutorials/PrepareDataSet_en.md) +- [安装指导文档](https://github.com/PaddlePaddle/PaddleDetection/blob/develop/docs/tutorials/INSTALL.md) +- [准备数据文档](https://github.com/PaddlePaddle/PaddleDetection/blob/develop/docs/tutorials/PrepareDataSet_en.md)
-Training and Evaluation
+训练&评估

-- Training model on single-GPU:
+- 单卡GPU上训练:

```shell
# training on single-GPU
export CUDA_VISIBLE_DEVICES=0
python tools/train.py -c configs/picodet/picodet_s_320_coco_lcnet.yml --eval
```
-If the GPU is out of memory during training, reduce the batch_size in TrainReader, and reduce the base_lr in LearningRate proportionally.

-- Training model on multi-GPU:
+**注意:**如果训练时显存溢出(out of memory),请将TrainReader中的batch_size调小,同时将LearningRate中的base_lr等比例减小(配置片段示例见本节末尾)。同时我们发布的config均由4卡训练得到,如果改变GPU卡数为1,那么base_lr需要减小4倍。
+
+- 多卡GPU上训练:

```shell
# training on multi-GPU
-export CUDA_VISIBLE_DEVICES=0,1,2,3,4,5,6,7
-python -m paddle.distributed.launch --gpus 0,1,2,3,4,5,6,7 tools/train.py -c configs/picodet/picodet_s_320_coco_lcnet.yml --eval
+export CUDA_VISIBLE_DEVICES=0,1,2,3
+python -m paddle.distributed.launch --gpus 0,1,2,3 tools/train.py -c configs/picodet/picodet_s_320_coco_lcnet.yml --eval
```

-- Evaluation:
+**注意:**PicoDet所有模型均由4卡GPU训练得到,如果改变训练GPU卡数,需要按线性比例缩放学习率base_lr。
+
+- 评估:

```shell
python tools/eval.py -c configs/picodet/picodet_s_320_coco_lcnet.yml \
        -o weights=https://paddledet.bj.bcebos.com/models/picodet_s_320_coco_lcnet.pdparams
```

-- Infer:
+- 测试:

```shell
python tools/infer.py -c configs/picodet/picodet_s_320_coco_lcnet.yml \
        -o weights=https://paddledet.bj.bcebos.com/models/picodet_s_320_coco_lcnet.pdparams
```

-Detail also can refer to [Quick start guide](https://github.com/PaddlePaddle/PaddleDetection/blob/develop/docs/tutorials/GETTING_STARTED.md).
+详情请参考[快速开始文档](https://github.com/PaddlePaddle/PaddleDetection/blob/develop/docs/tutorials/GETTING_STARTED.md)。
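+
+若需调整batch_size与学习率,可参考如下示意片段(数值仅为举例,并非官方推荐值,请以发布的配置文件为准):
+
+```yaml
+# 仅为示意:显存不足时调小batch_size,base_lr随总batch等比例线性缩放
+TrainReader:
+  batch_size: 32
+
+LearningRate:
+  base_lr: 0.08
+```
+
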
    -## Deployment +## 部署 -### Export and Convert Model +### 导出及转换模型 -
    -1. Export model (click to expand) +
    +1. 导出模型 ```shell cd PaddleDetection python tools/export_model.py -c configs/picodet/picodet_s_320_coco_lcnet.yml \ -o weights=https://paddledet.bj.bcebos.com/models/picodet_s_320_coco_lcnet.pdparams \ - --output_dir=inference_model + --output_dir=output_inference ``` +- 如无需导出后处理,请指定:`-o export.benchmark=True`(如果-o已出现过,此处删掉-o)或者手动修改[runtime.yml](https://github.com/PaddlePaddle/PaddleDetection/blob/develop/configs/runtime.yml) 中相应字段。 +- 如无需导出NMS,请指定:`-o export.nms=False`或者手动修改[runtime.yml](https://github.com/PaddlePaddle/PaddleDetection/blob/develop/configs/runtime.yml) 中相应字段。 许多导出至ONNX场景只支持单输入及固定shape输出,所以如果导出至ONNX,推荐不导出NMS。 +
    -2. Convert to PaddleLite (click to expand) +2. 转换模型至Paddle Lite (点击展开) -- Install Paddlelite>=2.10: +- 安装Paddlelite>=2.10: ```shell pip install paddlelite ``` -- Convert model: +- 转换模型至Paddle Lite格式: ```shell # FP32 -paddle_lite_opt --model_dir=inference_model/picodet_s_320_coco_lcnet --valid_targets=arm --optimize_out=picodet_s_320_coco_fp32 +paddle_lite_opt --model_dir=output_inference/picodet_s_320_coco_lcnet --valid_targets=arm --optimize_out=picodet_s_320_coco_fp32 # FP16 -paddle_lite_opt --model_dir=inference_model/picodet_s_320_coco_lcnet --valid_targets=arm --optimize_out=picodet_s_320_coco_fp16 --enable_fp16=true +paddle_lite_opt --model_dir=output_inference/picodet_s_320_coco_lcnet --valid_targets=arm --optimize_out=picodet_s_320_coco_fp16 --enable_fp16=true ```
    -3. Convert to ONNX (click to expand) +3. 转换模型至ONNX (点击展开) -- Install [Paddle2ONNX](https://github.com/PaddlePaddle/Paddle2ONNX) >= 0.7 and ONNX > 1.10.1, for details, please refer to [Tutorials of Export ONNX Model](../../deploy/EXPORT_ONNX_MODEL.md) +- 安装[Paddle2ONNX](https://github.com/PaddlePaddle/Paddle2ONNX) >= 0.7 并且 ONNX > 1.10.1, 细节请参考[导出ONNX模型教程](../../deploy/EXPORT_ONNX_MODEL.md) ```shell pip install onnx -pip install paddle2onnx +pip install paddle2onnx==0.9.2 ``` -- Convert model: +- 转换模型: ```shell paddle2onnx --model_dir output_inference/picodet_s_320_coco_lcnet/ \ @@ -183,123 +192,117 @@ paddle2onnx --model_dir output_inference/picodet_s_320_coco_lcnet/ \ --save_file picodet_s_320_coco.onnx ``` -- Simplify ONNX model: use onnx-simplifier to simplify onnx model. +- 简化ONNX模型: 使用`onnx-simplifier`库来简化ONNX模型。 - - Install onnx-simplifier >= 0.3.6: + - 安装 onnxsim >= 0.4.1: ```shell - pip install onnx-simplifier + pip install onnxsim ``` - - simplify onnx model: + - 简化ONNX模型: ```shell - python -m onnxsim picodet_s_320_coco.onnx picodet_s_processed.onnx + onnxsim picodet_s_320_coco.onnx picodet_s_processed.onnx ```
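+
+导出并简化ONNX模型后,可用如下脚本做一次随机输入的冒烟测试(仅为示意,假设各输入均为float32张量):
+
+```python
+import numpy as np
+import onnxruntime as ort
+
+sess = ort.InferenceSession("picodet_s_processed.onnx",
+                            providers=["CPUExecutionProvider"])
+feeds = {}
+for inp in sess.get_inputs():
+    # 动态维度(None/字符串/负数)此处统一取1,仅用于测试
+    shape = [d if isinstance(d, int) and d > 0 else 1 for d in inp.shape]
+    feeds[inp.name] = np.random.rand(*shape).astype(np.float32)
+outputs = sess.run(None, feeds)
+print([o.shape for o in outputs])
+```
+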
-- Deploy models
+- 部署用的模型

-| Model | Input size | ONNX | Paddle Lite(fp32) | Paddle Lite(fp16) |
-| :-------- | :--------: | :---------------------: | :----------------: | :----------------: |
+| 模型 | 输入尺寸 | ONNX(w/ 后处理) | ONNX(w/o 后处理) | Paddle Lite(fp32) | Paddle Lite(fp16) |
+| :-------- | :--------: | :---------------------: | :---------------------: | :----------------: | :----------------: |
-| PicoDet-S | 320*320 | [model](https://paddledet.bj.bcebos.com/deploy/third_engine/picodet_s_320_coco.onnx) | [model](https://paddledet.bj.bcebos.com/deploy/paddlelite/picodet_s_320.tar) | [model](https://paddledet.bj.bcebos.com/deploy/paddlelite/picodet_s_320_fp16.tar) |
-| PicoDet-S | 416*416 | [model](https://paddledet.bj.bcebos.com/deploy/third_engine/picodet_s_416_coco.onnx) | [model](https://paddledet.bj.bcebos.com/deploy/paddlelite/picodet_s_416.tar) | [model](https://paddledet.bj.bcebos.com/deploy/paddlelite/picodet_s_416_fp16.tar) |
-| PicoDet-M | 320*320 | [model](https://paddledet.bj.bcebos.com/deploy/third_engine/picodet_m_320_coco.onnx) | [model](https://paddledet.bj.bcebos.com/deploy/paddlelite/picodet_m_320.tar) | [model](https://paddledet.bj.bcebos.com/deploy/paddlelite/picodet_m_320_fp16.tar) |
-| PicoDet-M | 416*416 | [model](https://paddledet.bj.bcebos.com/deploy/third_engine/picodet_m_416_coco.onnx) | [model](https://paddledet.bj.bcebos.com/deploy/paddlelite/picodet_m_416.tar) | [model](https://paddledet.bj.bcebos.com/deploy/paddlelite/picodet_m_416_fp16.tar) |
-| PicoDet-L | 320*320 | [model](https://paddledet.bj.bcebos.com/deploy/third_engine/picodet_l_320_coco.onnx) | [model](https://paddledet.bj.bcebos.com/deploy/paddlelite/picodet_l_320.tar) | [model](https://paddledet.bj.bcebos.com/deploy/paddlelite/picodet_l_320_fp16.tar) |
-| PicoDet-L | 416*416 | [model](https://paddledet.bj.bcebos.com/deploy/third_engine/picodet_l_416_coco.onnx) | [model](https://paddledet.bj.bcebos.com/deploy/paddlelite/picodet_l_416.tar) | [model](https://paddledet.bj.bcebos.com/deploy/paddlelite/picodet_l_416_fp16.tar) |
-| PicoDet-L | 640*640 | [model](https://paddledet.bj.bcebos.com/deploy/third_engine/picodet_l_640_coco.onnx) | [model](https://paddledet.bj.bcebos.com/deploy/paddlelite/picodet_l_640.tar) | [model](https://paddledet.bj.bcebos.com/deploy/paddlelite/picodet_l_640_fp16.tar) |
-| PicoDet-Shufflenetv2 1x | 416*416 | [model](https://paddledet.bj.bcebos.com/deploy/third_engine/picodet_shufflenetv2_1x_416_coco.onnx) | [model](https://paddledet.bj.bcebos.com/deploy/paddlelite/picodet_shufflenetv2_1x.tar) | [model](https://paddledet.bj.bcebos.com/deploy/paddlelite/picodet_shufflenetv2_1x_fp16.tar) |
-| PicoDet-MobileNetv3-large 1x | 416*416 | [model](https://paddledet.bj.bcebos.com/deploy/third_engine/picodet_mobilenetv3_large_1x_416_coco.onnx) | [model](https://paddledet.bj.bcebos.com/deploy/paddlelite/picodet_mobilenetv3_large_1x.tar) | [model](https://paddledet.bj.bcebos.com/deploy/paddlelite/picodet_mobilenetv3_large_1x_fp16.tar) |
-| PicoDet-LCNet 1.5x | 416*416 | [model](https://paddledet.bj.bcebos.com/deploy/third_engine/picodet_lcnet_1_5x_416_coco.onnx) | [model](https://paddledet.bj.bcebos.com/deploy/paddlelite/picodet_lcnet_1_5x.tar) | [model](https://paddledet.bj.bcebos.com/deploy/paddlelite/picodet_lcnet_1_5x_fp16.tar) |
-
-
-### Deploy
-
-- PaddleInference demo [Python](../../deploy/python) & [C++](../../deploy/cpp)
-- [PaddleLite C++ demo](../../deploy/lite)
-- [NCNN C++/Python demo](../../deploy/third_engine/demo_ncnn)
-- [MNN C++/Python demo](../../deploy/third_engine/demo_mnn)
-- [OpenVINO C++ demo](../../deploy/third_engine/demo_openvino)
-- [Android demo(Paddle 
Lite)](https://github.com/PaddlePaddle/Paddle-Lite-Demo/tree/develop/object_detection/android/app/cxx/picodet_detection_demo) - - -Android demo visualization: +| PicoDet-XS | 320*320 | [( w/ 后处理)](https://paddledet.bj.bcebos.com/deploy/third_engine/picodet_xs_320_lcnet_postprocessed.onnx) | [( w/o 后处理)](https://paddledet.bj.bcebos.com/deploy/third_engine/picodet_xs_320_coco_lcnet.onnx) | [model](https://paddledet.bj.bcebos.com/deploy/paddlelite/picodet_xs_320_coco_lcnet.tar) | [model](https://paddledet.bj.bcebos.com/deploy/paddlelite/picodet_xs_320_coco_lcnet_fp16.tar) | +| PicoDet-XS | 416*416 | [( w/ 后处理)](https://paddledet.bj.bcebos.com/deploy/third_engine/picodet_xs_416_lcnet_postprocessed.onnx) | [( w/o 后处理)](https://paddledet.bj.bcebos.com/deploy/third_engine/picodet_xs_416_coco_lcnet.onnx) | [model](https://paddledet.bj.bcebos.com/deploy/paddlelite/picodet_xs_416_coco_lcnet.tar) | [model](https://paddledet.bj.bcebos.com/deploy/paddlelite/picodet_xs_416_coco_lcnet_fp16.tar) | +| PicoDet-S | 320*320 | [( w/ 后处理)](https://paddledet.bj.bcebos.com/deploy/third_engine/picodet_s_320_lcnet_postprocessed.onnx) | [( w/o 后处理)](https://paddledet.bj.bcebos.com/deploy/third_engine/picodet_s_320_coco_lcnet.onnx) | [model](https://paddledet.bj.bcebos.com/deploy/paddlelite/picodet_s_320_coco_lcnet.tar) | [model](https://paddledet.bj.bcebos.com/deploy/paddlelite/picodet_s_320_coco_lcnet_fp16.tar) | +| PicoDet-S | 416*416 | [( w/ 后处理)](https://paddledet.bj.bcebos.com/deploy/third_engine/picodet_s_416_lcnet_postprocessed.onnx) | [( w/o 后处理)](https://paddledet.bj.bcebos.com/deploy/third_engine/picodet_s_416_coco_lcnet.onnx) | [model](https://paddledet.bj.bcebos.com/deploy/paddlelite/picodet_s_416_coco_lcnet.tar) | [model](https://paddledet.bj.bcebos.com/deploy/paddlelite/picodet_s_416_coco_lcnet_fp16.tar) | +| PicoDet-M | 320*320 | [( w/ 后处理)](https://paddledet.bj.bcebos.com/deploy/third_engine/picodet_m_320_lcnet_postprocessed.onnx) | [( w/o 后处理)](https://paddledet.bj.bcebos.com/deploy/third_engine/picodet_m_320_coco_lcnet.onnx) | [model](https://paddledet.bj.bcebos.com/deploy/paddlelite/picodet_m_320_coco_lcnet.tar) | [model](https://paddledet.bj.bcebos.com/deploy/paddlelite/picodet_m_320_coco_lcnet_fp16.tar) | +| PicoDet-M | 416*416 | [( w/ 后处理)](https://paddledet.bj.bcebos.com/deploy/third_engine/picodet_m_416_lcnet_postprocessed.onnx) | [( w/o 后处理)](https://paddledet.bj.bcebos.com/deploy/third_engine/picodet_m_416_coco_lcnet.onnx) | [model](https://paddledet.bj.bcebos.com/deploy/paddlelite/picodet_m_416_coco_lcnet.tar) | [model](https://paddledet.bj.bcebos.com/deploy/paddlelite/picodet_m_416_coco_lcnet_fp16.tar) | +| PicoDet-L | 320*320 | [( w/ 后处理)](https://paddledet.bj.bcebos.com/deploy/third_engine/picodet_l_320_lcnet_postprocessed.onnx) | [( w/o 后处理)](https://paddledet.bj.bcebos.com/deploy/third_engine/picodet_l_320_coco_lcnet.onnx) | [model](https://paddledet.bj.bcebos.com/deploy/paddlelite/picodet_l_320_coco_lcnet.tar) | [model](https://paddledet.bj.bcebos.com/deploy/paddlelite/picodet_l_320_coco_lcnet_fp16.tar) | +| PicoDet-L | 416*416 | [( w/ 后处理)](https://paddledet.bj.bcebos.com/deploy/third_engine/picodet_l_416_lcnet_postprocessed.onnx) | [( w/o 后处理)](https://paddledet.bj.bcebos.com/deploy/third_engine/picodet_l_416_coco_lcnet.onnx) | [model](https://paddledet.bj.bcebos.com/deploy/paddlelite/picodet_l_416_coco_lcnet.tar) | [model](https://paddledet.bj.bcebos.com/deploy/paddlelite/picodet_l_416_coco_lcnet_fp16.tar) | +| PicoDet-L | 640*640 | [( w/ 
后处理)](https://paddledet.bj.bcebos.com/deploy/third_engine/picodet_l_640_lcnet_postprocessed.onnx) | [( w/o 后处理)](https://paddledet.bj.bcebos.com/deploy/third_engine/picodet_l_640_coco_lcnet.onnx) | [model](https://paddledet.bj.bcebos.com/deploy/paddlelite/picodet_l_640_coco_lcnet.tar) | [model](https://paddledet.bj.bcebos.com/deploy/paddlelite/picodet_l_640_coco_lcnet_fp16.tar) | + +### 部署 + +| 预测库 | Python | C++ | 带后处理预测 | +| :-------- | :--------: | :---------------------: | :----------------: | +| OpenVINO | [Python](../../deploy/third_engine/demo_openvino/python) | [C++](../../deploy/third_engine/demo_openvino)(带后处理开发中) | ✔︎ | +| Paddle Lite | - | [C++](../../deploy/lite) | ✔︎ | +| Android Demo | - | [Paddle Lite](https://github.com/PaddlePaddle/Paddle-Lite-Demo/tree/develop/object_detection/android/app/cxx/picodet_detection_demo) | ✔︎ | +| PaddleInference | [Python](../../deploy/python) | [C++](../../deploy/cpp) | ✔︎ | +| ONNXRuntime | [Python](../../deploy/third_engine/demo_onnxruntime) | Coming soon | ✔︎ | +| NCNN | Coming soon | [C++](../../deploy/third_engine/demo_ncnn) | ✘ | +| MNN | Coming soon | [C++](../../deploy/third_engine/demo_mnn) | ✘ | + + + +Android demo可视化:
    -## Quantization +## 量化
    -Requirements: +依赖包: - PaddlePaddle >= 2.2.2 -- PaddleSlim >= 2.2.1 +- PaddleSlim >= 2.2.2 -**Install:** +**安装:** ```shell -pip install paddleslim==2.2.1 +pip install paddleslim==2.2.2 ```
    -
    -Quant aware (click to expand) +
    +量化训练 -Configure the quant config and start training: +开始量化训练: ```shell -python tools/train.py -c configs/picodet/picodet_s_320_coco_lcnet.yml \ - --slim_config configs/slim/quant/picodet_s_quant.yml --eval +python tools/train.py -c configs/picodet/picodet_s_416_coco_lcnet.yml \ + --slim_config configs/slim/quant/picodet_s_416_lcnet_quant.yml --eval ``` -- More detail can refer to [slim document](https://github.com/PaddlePaddle/PaddleDetection/tree/develop/configs/slim) +- 更多细节请参考[slim文档](https://github.com/PaddlePaddle/PaddleDetection/tree/develop/configs/slim)
    -
    -Post quant (click to expand) - -Configure the post quant config and start calibrate model: +- 量化训练Model ZOO: -```shell -python tools/post_quant.py -c configs/picodet/picodet_s_320_coco_lcnet.yml \ - --slim_config configs/slim/post_quant/picodet_s_ptq.yml -``` - -- Notes: Now the accuracy of post quant is abnormal and this problem is being solved. - -
+| 量化模型 | 输入尺寸 | mAPval<br>0.5:0.95 | Config | Slim Config | Weight | Inference Model(w/ 后处理) | Inference Model(w/o 后处理) | Paddle Lite INT8(w/ 后处理) | Paddle Lite INT8(w/o 后处理) |
+| :-------- | :--------: | :--------------------: | :-------: | :-------: | :----------------: | :----------------: | :----------------: | :----------------: | :----------------: |
+| PicoDet-S | 416*416 | 31.5 | [config](./picodet_s_416_coco_lcnet.yml) | [slim config](../slim/quant/picodet_s_416_lcnet_quant.yml) | [model](https://paddledet.bj.bcebos.com/models/picodet_s_416_coco_lcnet_quant.pdparams) | [w/ 后处理](https://paddledet.bj.bcebos.com/deploy/Inference/picodet_s_416_coco_lcnet_quant.tar) | [w/o 后处理](https://paddledet.bj.bcebos.com/deploy/Inference/picodet_s_416_coco_lcnet_quant_non_postprocess.tar) | [w/ 后处理](https://paddledet.bj.bcebos.com/deploy/paddlelite/picodet_s_416_coco_lcnet_quant.nb) | [w/o 后处理](https://paddledet.bj.bcebos.com/deploy/paddlelite/picodet_s_416_coco_lcnet_quant_non_postprocess.nb) |

-## Unstructured Pruning
+## 非结构化剪枝

<div align="center">
    -Toturial: +教程: -Please refer this [documentation](https://github.com/PaddlePaddle/PaddleDetection/tree/develop/configs/picodet/pruner/README.md) for details such as requirements, training and deployment. +训练及部署细节请参考[非结构化剪枝文档](https://github.com/PaddlePaddle/PaddleDetection/tree/develop/configs/picodet/legacy_model/pruner/README.md)。
    -## Application +## 应用 -- **Pedestrian detection:** model zoo of `PicoDet-S-Pedestrian` please refer to [PP-TinyPose](https://github.com/PaddlePaddle/PaddleDetection/tree/develop/configs/keypoint/tiny_pose#%E8%A1%8C%E4%BA%BA%E6%A3%80%E6%B5%8B%E6%A8%A1%E5%9E%8B) +- **行人检测:** `PicoDet-S-Pedestrian`行人检测模型请参考[PP-TinyPose](https://github.com/PaddlePaddle/PaddleDetection/tree/develop/configs/keypoint/tiny_pose#%E8%A1%8C%E4%BA%BA%E6%A3%80%E6%B5%8B%E6%A8%A1%E5%9E%8B) -- **Mainbody detection:** model zoo of `PicoDet-L-Mainbody` please refer to [mainbody detection](./application/mainbody_detection/README.md) +- **主体检测:** `PicoDet-L-Mainbody`主体检测模型请参考[主体检测文档](./legacy_model/application/mainbody_detection/README.md) ## FAQ
    -Out of memory error. +显存爆炸(Out of memory error) -Please reduce the `batch_size` of `TrainReader` in config. +请减小配置文件中`TrainReader`的`batch_size`。
    -How to transfer learning. +如何迁移学习 -Please reset `pretrain_weights` in config, which trained on coco. Such as: +请重新设置配置文件中的`pretrain_weights`字段,比如利用COCO上训好的模型在自己的数据上继续训练: ```yaml pretrain_weights: https://paddledet.bj.bcebos.com/models/picodet_l_640_coco_lcnet.pdparams ``` @@ -307,17 +310,17 @@ pretrain_weights: https://paddledet.bj.bcebos.com/models/picodet_l_640_coco_lcne
-The transpose operator is time-consuming on some hardware.
+`transpose`算子在某些硬件上耗时严重

-Please use `PicoDet-LCNet` model, which has fewer `transpose` operators.
+请使用`PicoDet-LCNet`模型,其中`transpose`算子较少。

</details>
    -How to count model parameters. +如何计算模型参数量。 -You can insert below code at [here](https://github.com/PaddlePaddle/PaddleDetection/blob/develop/ppdet/engine/trainer.py#L141) to count learnable parameters. +可以将以下代码插入:[trainer.py](https://github.com/PaddlePaddle/PaddleDetection/blob/develop/ppdet/engine/trainer.py#L141) 来计算参数量。 ```python params = sum([ @@ -329,8 +332,8 @@ print('params: ', params)
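+
+该片段依赖trainer内部上下文;下面给出一个独立可运行的统计示意(非官方实现,函数为自拟):
+
+```python
+import numpy as np
+
+def count_learnable_params(model):
+    # 统计所有可训练参数的元素个数(跳过stop_gradient的参数)
+    return int(sum(np.prod(p.shape) for p in model.parameters()
+                   if not p.stop_gradient))
+```
+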
-## Cite PP-PicoDet
-If you use PicoDet in your research, please cite our work by using the following BibTeX entry:
+## 引用PP-PicoDet
+如果需要在你的研究中使用PP-PicoDet,请通过以下方式引用我们的技术报告:
```
@misc{yu2021pppicodet,
      title={PP-PicoDet: A Better Real-Time Object Detector on Mobile Devices},
diff --git a/configs/picodet/README_cn.md b/configs/picodet/README_en.md
similarity index 30%
rename from configs/picodet/README_cn.md
rename to configs/picodet/README_en.md
index 7131200a2e106e50fe71a97eda566a4520bfc5e8..d7d51c7b3b774ce0f68822a7d5084ea8639ada53
--- a/configs/picodet/README_cn.md
+++ b/configs/picodet/README_en.md
@@ -1,63 +1,60 @@
-简体中文 | [English](README.md)
+English | [简体中文](README.md)

# PP-PicoDet

![](../../docs/images/picedet_demo.jpeg)

-## 最新动态
+## News

-- 发布全新系列PP-PicoDet模型:**(2022.03.20)**
-  - (1)引入TAL及Task-aligned Head,优化PAN等结构,精度大幅提升;
-  - (2)优化CPU端预测速度,同时训练速度大幅提升;
-  - (3)导出模型将后处理包含在网络中,预测直接输出box结果,无需二次开发,迁移成本更低。
+- Released a new series of PP-PicoDet models: **(2022.03.20)**
+  - (1) Adopted TAL/ETA Head and an optimized PAN, which greatly improved accuracy;
+  - (2) Optimized CPU prediction speed; training speed is also greatly improved;
+  - (3) The exported model includes post-processing, so prediction directly outputs box results without secondary development, and the migration cost is lower.

-## 历史版本模型
+### Legacy Model

-- 详情请参考:[PicoDet 2021.10版本](./legacy_model/)
+- Please refer to: [PicoDet 2021.10](./legacy_model/)

-## 简介
+## Introduction

-PaddleDetection中提出了全新的轻量级系列模型`PP-PicoDet`,在移动端具有卓越的性能,成为全新SOTA轻量级模型。详细的技术细节可以参考我们的[arXiv技术报告](https://arxiv.org/abs/2111.00902)。
+We developed a series of lightweight models, named `PP-PicoDet`. Because of their excellent performance, our models are very suitable for deployment on mobile devices or CPU. For more details, please refer to our [report on arXiv](https://arxiv.org/abs/2111.00902).

-PP-PicoDet模型有如下特点:
-
-- 🌟 更高的mAP: 第一个在1M参数量之内`mAP(0.5:0.95)`超越**30+**(输入416像素时)。
-- 🚀 更快的预测速度: 网络预测在ARM CPU下可达150FPS。
-- 😊 部署友好: 支持PaddleLite/MNN/NCNN/OpenVINO等预测库,支持转出ONNX,提供了C++/Python/Android的demo。
-- 😍 先进的算法: 我们在现有SOTA算法中进行了创新, 包括:ESNet, CSP-PAN, SimOTA等等。
+- 🌟 Higher mAP: the **first** object detector that surpasses mAP(0.5:0.95) **30+** within 1M parameters when the input size is 416.
+- 🚀 Faster latency: 150FPS on mobile ARM CPU.
+- 😊 Deploy friendly: supports PaddleLite/MNN/NCNN/OpenVINO and provides C++/Python/Android implementations.
+- 😍 Advanced algorithm: uses the most advanced algorithms and offers innovations such as ESNet, CSP-PAN, and SimOTA with VFL.
    -## 基线 - -| 模型 | 输入尺寸 | mAPval
    0.5:0.95 | mAPval
    0.5 | 参数量
    (M) | FLOPS
    (G) | 预测时延[NCNN](#latency)
    (ms) | 预测时延[Lite](#latency)
    (ms) | 下载 | 配置文件 | -| :-------- | :--------: | :---------------------: | :----------------: | :----------------: | :---------------: | :-----------------------------: | :-----------------------------: | :----------------------------------------: | :--------------------------------------- | -| PicoDet-XS | 320*320 | 23.5 | 36.1 | 0.70 | 0.67 | 10.9ms | 7.81ms | [model](https://paddledet.bj.bcebos.com/models/picodet_xs_320_coco_lcnet.pdparams) | [log](https://paddledet.bj.bcebos.com/logs/train_picodet_xs_320_coco_lcnet.log) | [config](https://github.com/PaddlePaddle/PaddleDetection/tree/develop/configs/picodet/picodet_xs_320_coco_lcnet.yml) | -| PicoDet-XS | 416*416 | 26.2 | 39.3 | 0.70 | 1.13 | 15.4ms | 12.38ms | [model](https://paddledet.bj.bcebos.com/models/picodet_xs_416_coco_lcnet.pdparams) | [log](https://paddledet.bj.bcebos.com/logs/train_picodet_xs_416_coco_lcnet.log) | [config](https://github.com/PaddlePaddle/PaddleDetection/tree/develop/configs/picodet/picodet_xs_416_coco_lcnet.yml) | -| PicoDet-S | 320*320 | 29.1 | 43.4 | 1.18 | 0.97 | 12.6ms | 9.56ms | [model](https://paddledet.bj.bcebos.com/models/picodet_s_320_coco_lcnet.pdparams) | [log](https://paddledet.bj.bcebos.com/logs/train_picodet_s_320_coco_lcnet.log) | [config](https://github.com/PaddlePaddle/PaddleDetection/tree/develop/configs/picodet/picodet_s_320_coco_lcnet.yml) | -| PicoDet-S | 416*416 | 32.5 | 47.6 | 1.18 | 1.65 | 17.2ms | 15.20 | [model](https://paddledet.bj.bcebos.com/models/picodet_s_416_coco_lcnet.pdparams) | [log](https://paddledet.bj.bcebos.com/logs/train_picodet_s_416_coco_lcnet.log) | [config](https://github.com/PaddlePaddle/PaddleDetection/tree/develop/configs/picodet/picodet_s_416_coco_lcnet.yml) | -| PicoDet-M | 320*320 | 34.4 | 50.0 | 3.46 | 2.57 | 14.5ms | 17.68ms | [model](https://paddledet.bj.bcebos.com/models/picodet_m_320_coco_lcnet.pdparams) | [log](https://paddledet.bj.bcebos.com/logs/train_picodet_m_320_coco_lcnet.log) | [config](https://github.com/PaddlePaddle/PaddleDetection/tree/develop/configs/picodet/picodet_m_320_coco_lcnet.yml) | -| PicoDet-M | 416*416 | 37.5 | 53.4 | 3.46 | 4.34 | 19.5ms | 28.39ms | [model](https://paddledet.bj.bcebos.com/models/picodet_m_416_coco_lcnet.pdparams) | [log](https://paddledet.bj.bcebos.com/logs/train_picodet_m_416_coco_lcnet.log) | [config](https://github.com/PaddlePaddle/PaddleDetection/tree/develop/configs/picodet/picodet_m_416_coco_lcnet.yml) | -| PicoDet-L | 320*320 | 36.1 | 52.0 | 5.80 | 4.20 | 18.3ms | 25.21ms | [model](https://paddledet.bj.bcebos.com/models/picodet_l_320_coco_lcnet.pdparams) | [log](https://paddledet.bj.bcebos.com/logs/train_picodet_l_320_coco_lcnet.log) | [config](https://github.com/PaddlePaddle/PaddleDetection/tree/develop/configs/picodet/picodet_l_320_coco_lcnet.yml) | -| PicoDet-L | 416*416 | 39.4 | 55.7 | 5.80 | 7.10 | 22.1ms | 42.23ms | [model](https://paddledet.bj.bcebos.com/models/picodet_l_416_coco_lcnet.pdparams) | [log](https://paddledet.bj.bcebos.com/logs/train_picodet_l_416_coco_lcnet.log) | [config](https://github.com/PaddlePaddle/PaddleDetection/tree/develop/configs/picodet/picodet_l_416_coco_lcnet.yml) | -| PicoDet-L | 640*640 | 42.3 | 59.2 | 5.80 | 16.81 | 43.1ms | 108.1ms | [model](https://paddledet.bj.bcebos.com/models/picodet_l_640_coco_lcnet.pdparams) | [log](https://paddledet.bj.bcebos.com/logs/train_picodet_l_640_coco_lcnet.log) | [config](https://github.com/PaddlePaddle/PaddleDetection/tree/develop/configs/picodet/picodet_l_640_coco_lcnet.yml) | +## Benchmark +| Model | Input size | mAPval
    0.5:0.95 | mAPval
    0.5 | Params
    (M) | FLOPS
    (G) | Latency[CPU](#latency)
    (ms) | Latency[Lite](#latency)
    (ms) | Weight | Config | Inference Model | +| :-------- | :--------: | :---------------------: | :----------------: | :----------------: | :---------------: | :-----------------------------: | :-----------------------------: | :----------------------------------------: | :--------------------------------------- | :--------------------------------------- | +| PicoDet-XS | 320*320 | 23.5 | 36.1 | 0.70 | 0.67 | 3.9ms | 7.81ms | [model](https://paddledet.bj.bcebos.com/models/picodet_xs_320_coco_lcnet.pdparams) | [log](https://paddledet.bj.bcebos.com/logs/train_picodet_xs_320_coco_lcnet.log) | [config](https://github.com/PaddlePaddle/PaddleDetection/tree/develop/configs/picodet/picodet_xs_320_coco_lcnet.yml) | [w/ postprocess](https://paddledet.bj.bcebos.com/deploy/Inference/picodet_xs_320_coco_lcnet.tar) | [w/o postprocess](https://paddledet.bj.bcebos.com/deploy/Inference/picodet_xs_320_coco_lcnet_non_postprocess.tar) | +| PicoDet-XS | 416*416 | 26.2 | 39.3 | 0.70 | 1.13 | 6.1ms | 12.38ms | [model](https://paddledet.bj.bcebos.com/models/picodet_xs_416_coco_lcnet.pdparams) | [log](https://paddledet.bj.bcebos.com/logs/train_picodet_xs_416_coco_lcnet.log) | [config](https://github.com/PaddlePaddle/PaddleDetection/tree/develop/configs/picodet/picodet_xs_416_coco_lcnet.yml) | [w/ postprocess](https://paddledet.bj.bcebos.com/deploy/Inference/picodet_xs_416_coco_lcnet.tar) | [w/o postprocess](https://paddledet.bj.bcebos.com/deploy/Inference/picodet_xs_416_coco_lcnet_non_postprocess.tar) | +| PicoDet-S | 320*320 | 29.1 | 43.4 | 1.18 | 0.97 | 4.8ms | 9.56ms | [model](https://paddledet.bj.bcebos.com/models/picodet_s_320_coco_lcnet.pdparams) | [log](https://paddledet.bj.bcebos.com/logs/train_picodet_s_320_coco_lcnet.log) | [config](https://github.com/PaddlePaddle/PaddleDetection/tree/develop/configs/picodet/picodet_s_320_coco_lcnet.yml) | [w/ postprocess](https://paddledet.bj.bcebos.com/deploy/Inference/picodet_s_320_coco_lcnet.tar) | [w/o postprocess](https://paddledet.bj.bcebos.com/deploy/Inference/picodet_s_320_coco_lcnet_non_postprocess.tar) | +| PicoDet-S | 416*416 | 32.5 | 47.6 | 1.18 | 1.65 | 6.6ms | 15.20ms | [model](https://paddledet.bj.bcebos.com/models/picodet_s_416_coco_lcnet.pdparams) | [log](https://paddledet.bj.bcebos.com/logs/train_picodet_s_416_coco_lcnet.log) | [config](https://github.com/PaddlePaddle/PaddleDetection/tree/develop/configs/picodet/picodet_s_416_coco_lcnet.yml) | [w/ postprocess](https://paddledet.bj.bcebos.com/deploy/Inference/picodet_s_416_coco_lcnet.tar) | [w/o postprocess](https://paddledet.bj.bcebos.com/deploy/Inference/picodet_s_416_coco_lcnet_non_postprocess.tar) | +| PicoDet-M | 320*320 | 34.4 | 50.0 | 3.46 | 2.57 | 8.2ms | 17.68ms | [model](https://paddledet.bj.bcebos.com/models/picodet_m_320_coco_lcnet.pdparams) | [log](https://paddledet.bj.bcebos.com/logs/train_picodet_m_320_coco_lcnet.log) | [config](https://github.com/PaddlePaddle/PaddleDetection/tree/develop/configs/picodet/picodet_m_320_coco_lcnet.yml) | [w/ postprocess](https://paddledet.bj.bcebos.com/deploy/Inference/picodet_m_320_coco_lcnet.tar) | [w/o postprocess](https://paddledet.bj.bcebos.com/deploy/Inference/picodet_m_320_coco_lcnet_non_postprocess.tar) | +| PicoDet-M | 416*416 | 37.5 | 53.4 | 3.46 | 4.34 | 12.7ms | 28.39ms | [model](https://paddledet.bj.bcebos.com/models/picodet_m_416_coco_lcnet.pdparams) | [log](https://paddledet.bj.bcebos.com/logs/train_picodet_m_416_coco_lcnet.log) | 
[config](https://github.com/PaddlePaddle/PaddleDetection/tree/develop/configs/picodet/picodet_m_416_coco_lcnet.yml) | [w/ postprocess](https://paddledet.bj.bcebos.com/deploy/Inference/picodet_m_416_coco_lcnet.tar) | [w/o postprocess](https://paddledet.bj.bcebos.com/deploy/Inference/picodet_m_416_coco_lcnet_non_postprocess.tar) | +| PicoDet-L | 320*320 | 36.1 | 52.0 | 5.80 | 4.20 | 11.5ms | 25.21ms | [model](https://paddledet.bj.bcebos.com/models/picodet_l_320_coco_lcnet.pdparams) | [log](https://paddledet.bj.bcebos.com/logs/train_picodet_l_320_coco_lcnet.log) | [config](https://github.com/PaddlePaddle/PaddleDetection/tree/develop/configs/picodet/picodet_l_320_coco_lcnet.yml) | [w/ postprocess](https://paddledet.bj.bcebos.com/deploy/Inference/picodet_l_320_coco_lcnet.tar) | [w/o postprocess](https://paddledet.bj.bcebos.com/deploy/Inference/picodet_l_320_coco_lcnet_non_postprocess.tar) | +| PicoDet-L | 416*416 | 39.4 | 55.7 | 5.80 | 7.10 | 20.7ms | 42.23ms | [model](https://paddledet.bj.bcebos.com/models/picodet_l_416_coco_lcnet.pdparams) | [log](https://paddledet.bj.bcebos.com/logs/train_picodet_l_416_coco_lcnet.log) | [config](https://github.com/PaddlePaddle/PaddleDetection/tree/develop/configs/picodet/picodet_l_416_coco_lcnet.yml) | [w/ postprocess](https://paddledet.bj.bcebos.com/deploy/Inference/picodet_l_416_coco_lcnet.tar) | [w/o postprocess](https://paddledet.bj.bcebos.com/deploy/Inference/picodet_l_416_coco_lcnet_non_postprocess.tar) | +| PicoDet-L | 640*640 | 42.6 | 59.2 | 5.80 | 16.81 | 62.5ms | 108.1ms | [model](https://paddledet.bj.bcebos.com/models/picodet_l_640_coco_lcnet.pdparams) | [log](https://paddledet.bj.bcebos.com/logs/train_picodet_l_640_coco_lcnet.log) | [config](https://github.com/PaddlePaddle/PaddleDetection/tree/develop/configs/picodet/picodet_l_640_coco_lcnet.yml) | [w/ postprocess](https://paddledet.bj.bcebos.com/deploy/Inference/picodet_l_640_coco_lcnet.tar) | [w/o postprocess](https://paddledet.bj.bcebos.com/deploy/Inference/picodet_l_640_coco_lcnet_non_postprocess.tar) |
-注意事项:
+Table Notes:

-- 时延测试: 我们所有的模型都在英特尔至强6148的CPU(MKLDNN 10线程)和`骁龙865(4xA77+4xA55)`的ARM CPU上测试(4线程,FP16预测)。上面表格中标有`CPU`的是使用Paddle Inference库测试,标有`Lite`的是使用[Paddle Lite](https://github.com/PaddlePaddle/Paddle-Lite)进行测试。
-- PicoDet在COCO train2017上训练,并且在COCO val2017上进行验证。使用4卡GPU训练,并且上表所有的预训练模型都是通过发布的默认配置训练得到。
-- Benchmark测试:测试速度benchmark性能时,导出模型后处理不包含在网络中,需要设置`-o export.benchmark=True` 或手动修改[runtime.yml](https://github.com/PaddlePaddle/PaddleDetection/blob/develop/configs/runtime.yml#L12)。
+- Latency: All our models are tested on an `Intel core i7 10750H` CPU with MKLDNN by 12 threads and on a `Qualcomm Snapdragon 865(4xA77+4xA55)` ARM CPU with 4 threads (arm8, FP16). In the above table, CPU latency is tested with Paddle-Inference, and mobile latency is tested with `Lite`->[Paddle-Lite](https://github.com/PaddlePaddle/Paddle-Lite).
+- PicoDet is trained on the COCO train2017 dataset and evaluated on COCO val2017. PicoDet used 4 GPUs for training, and all checkpoints are trained with the default settings and hyperparameters.
+- Benchmark test: when benchmarking speed, post-processing is not included in the exported model; you need to set `-o export.benchmark=True` or manually modify [runtime.yml](https://github.com/PaddlePaddle/PaddleDetection/blob/develop/configs/runtime.yml#L12); see the example command below.
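+
+The command below is illustrative only (the config and weights can be replaced with any model in the table above):
+
+```shell
+python tools/export_model.py -c configs/picodet/picodet_s_320_coco_lcnet.yml \
+       -o weights=https://paddledet.bj.bcebos.com/models/picodet_s_320_coco_lcnet.pdparams \
+       export.benchmark=True
+```
+
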
    -#### 其他模型的基线 +#### Benchmark of Other Models -| 模型 | 输入尺寸 | mAPval
    0.5:0.95 | mAPval
    0.5 | 参数量
    (M) | FLOPS
    (G) | 预测时延[NCNN](#latency)
    (ms) | +| Model | Input size | mAPval
    0.5:0.95 | mAPval
    0.5 | Params
    (M) | FLOPS
    (G) | Latency[NCNN](#latency)
    (ms) | | :-------- | :--------: | :---------------------: | :----------------: | :----------------: | :---------------: | :-----------------------------: | | YOLOv3-Tiny | 416*416 | 16.6 | 33.1 | 8.86 | 5.62 | 25.42 | | YOLOv4-Tiny | 416*416 | 21.7 | 40.2 | 6.06 | 6.96 | 23.69 | @@ -71,39 +68,38 @@ PP-PicoDet模型有如下特点: | YOLOv5n | 640*640 | 28.4 | 46.0 | 1.9 | 4.5 | 40.35 | | YOLOv5s | 640*640 | 37.2 | 56.0 | 7.2 | 16.5 | 78.05 | -- ARM测试的benchmark脚本来自: [MobileDetBenchmark](https://github.com/JiweiMaster/MobileDetBenchmark)。 +- Testing Mobile latency with code: [MobileDetBenchmark](https://github.com/JiweiMaster/MobileDetBenchmark). -## 快速开始 +## Quick Start
    -依赖包: +Requirements: -- PaddlePaddle == 2.2.2 +- PaddlePaddle >= 2.2.2
    -安装 +Installation -- [安装指导文档](https://github.com/PaddlePaddle/PaddleDetection/blob/develop/docs/tutorials/INSTALL.md) -- [准备数据文档](https://github.com/PaddlePaddle/PaddleDetection/blob/develop/docs/tutorials/PrepareDataSet_en.md) +- [Installation guide](https://github.com/PaddlePaddle/PaddleDetection/blob/develop/docs/tutorials/INSTALL.md) +- [Prepare dataset](https://github.com/PaddlePaddle/PaddleDetection/blob/develop/docs/tutorials/PrepareDataSet_en.md)
-训练&评估
+Training and Evaluation

-- 单卡GPU上训练:
+- Training model on single-GPU:

```shell
# training on single-GPU
export CUDA_VISIBLE_DEVICES=0
python tools/train.py -c configs/picodet/picodet_s_320_coco_lcnet.yml --eval
```
+If the GPU runs out of memory during training, reduce `batch_size` in TrainReader and reduce `base_lr` in LearningRate proportionally (see the config sketch at the end of this section). Note that the published configs are all trained with 4 GPUs; if the number of GPUs is changed to 1, `base_lr` needs to be reduced by a factor of 4.

-如果训练时显存out memory,将TrainReader中batch_size调小,同时LearningRate中base_lr等比例减小。
-
-- 多卡GPU上训练:
+- Training model on multi-GPU:

```shell
@@ -112,72 +108,76 @@ export CUDA_VISIBLE_DEVICES=0,1,2,3,4,5,6,7
 python -m paddle.distributed.launch --gpus 0,1,2,3,4,5,6,7 tools/train.py -c configs/picodet/picodet_s_320_coco_lcnet.yml --eval
 ```

-- 评估:
+- Evaluation:

```shell
python tools/eval.py -c configs/picodet/picodet_s_320_coco_lcnet.yml \
        -o weights=https://paddledet.bj.bcebos.com/models/picodet_s_320_coco_lcnet.pdparams
```

-- 测试:
+- Infer:

```shell
python tools/infer.py -c configs/picodet/picodet_s_320_coco_lcnet.yml \
        -o weights=https://paddledet.bj.bcebos.com/models/picodet_s_320_coco_lcnet.pdparams
```

-详情请参考[快速开始文档](https://github.com/PaddlePaddle/PaddleDetection/blob/develop/docs/tutorials/GETTING_STARTED.md).
+For more details, please refer to the [Quick start guide](https://github.com/PaddlePaddle/PaddleDetection/blob/develop/docs/tutorials/GETTING_STARTED.md).
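+
+To adjust batch_size and the learning rate, the sketch below may help (values are placeholders, not official recommendations; refer to the released configs):
+
+```yaml
+# Sketch only: reduce batch_size when GPU memory is insufficient,
+# and scale base_lr linearly with the total batch size
+TrainReader:
+  batch_size: 32
+
+LearningRate:
+  base_lr: 0.08
+```
+
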
    -## 部署 +## Deployment -### 导出及转换模型 +### Export and Convert Model -
    -1. 导出模型 (点击展开) +
+1. Export model

```shell
cd PaddleDetection
python tools/export_model.py -c configs/picodet/picodet_s_320_coco_lcnet.yml \
              -o weights=https://paddledet.bj.bcebos.com/models/picodet_s_320_coco_lcnet.pdparams \
-              --output_dir=inference_model
+              --output_dir=output_inference
```

+- If no post-processing is required, please specify `-o export.benchmark=True` (if `-o` already appears, append the field instead of repeating `-o`) or manually modify the corresponding fields in [runtime.yml](https://github.com/PaddlePaddle/PaddleDetection/blob/develop/configs/runtime.yml).
+- If no NMS is required, please specify `-o export.nms=False` or manually modify the corresponding fields in [runtime.yml](https://github.com/PaddlePaddle/PaddleDetection/blob/develop/configs/runtime.yml). Many ONNX export scenarios only support a single input and fixed-shape output, so it is recommended not to export NMS when exporting to ONNX.
+
+
</details>
    -2. 转换模型至Paddle Lite (点击展开) +2. Convert to PaddleLite (click to expand) -- 安装Paddlelite>=2.10: +- Install Paddlelite>=2.10: ```shell pip install paddlelite ``` -- 转换模型至Paddle Lite格式: +- Convert model: ```shell # FP32 -paddle_lite_opt --model_dir=inference_model/picodet_s_320_coco_lcnet --valid_targets=arm --optimize_out=picodet_s_320_coco_fp32 +paddle_lite_opt --model_dir=output_inference/picodet_s_320_coco_lcnet --valid_targets=arm --optimize_out=picodet_s_320_coco_fp32 # FP16 -paddle_lite_opt --model_dir=inference_model/picodet_s_320_coco_lcnet --valid_targets=arm --optimize_out=picodet_s_320_coco_fp16 --enable_fp16=true +paddle_lite_opt --model_dir=output_inference/picodet_s_320_coco_lcnet --valid_targets=arm --optimize_out=picodet_s_320_coco_fp16 --enable_fp16=true ```
    -3. 转换模型至ONNX (点击展开) +3. Convert to ONNX (click to expand) -- 安装[Paddle2ONNX](https://github.com/PaddlePaddle/Paddle2ONNX) >= 0.7 并且 ONNX > 1.10.1, 细节请参考[导出ONNX模型教程](../../deploy/EXPORT_ONNX_MODEL.md) +- Install [Paddle2ONNX](https://github.com/PaddlePaddle/Paddle2ONNX) >= 0.7 and ONNX > 1.10.1, for details, please refer to [Tutorials of Export ONNX Model](../../deploy/EXPORT_ONNX_MODEL.md) ```shell pip install onnx -pip install paddle2onnx +pip install paddle2onnx==0.9.2 ``` -- 转换模型: +- Convert model: ```shell paddle2onnx --model_dir output_inference/picodet_s_320_coco_lcnet/ \ @@ -187,123 +187,117 @@ paddle2onnx --model_dir output_inference/picodet_s_320_coco_lcnet/ \ --save_file picodet_s_320_coco.onnx ``` -- 简化ONNX模型: 使用`onnx-simplifier`库来简化ONNX模型。 +- Simplify ONNX model: use onnx-simplifier to simplify onnx model. - - 安装 onnx-simplifier >= 0.3.6: + - Install onnxsim >= 0.4.1: ```shell - pip install onnx-simplifier + pip install onnxsim ``` - - 简化ONNX模型: + - simplify onnx model: ```shell - python -m onnxsim picodet_s_320_coco.onnx picodet_s_processed.onnx + onnxsim picodet_s_320_coco.onnx picodet_s_processed.onnx ```
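+
+After exporting and simplifying the ONNX model, a quick random-input smoke test can be run as below (illustrative only; assumes all inputs are float32 tensors):
+
+```python
+import numpy as np
+import onnxruntime as ort
+
+sess = ort.InferenceSession("picodet_s_processed.onnx",
+                            providers=["CPUExecutionProvider"])
+feeds = {}
+for inp in sess.get_inputs():
+    # Use 1 for any dynamic dimension (None/str/negative); test data only
+    shape = [d if isinstance(d, int) and d > 0 else 1 for d in inp.shape]
+    feeds[inp.name] = np.random.rand(*shape).astype(np.float32)
+outputs = sess.run(None, feeds)
+print([o.shape for o in outputs])
+```
+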
-- 部署用的模型
+- Deploy models

-| 模型 | 输入尺寸 | ONNX | Paddle Lite(fp32) | Paddle Lite(fp16) |
-| :-------- | :--------: | :---------------------: | :----------------: | :----------------: |
+| Model | Input size | ONNX(w/ postprocess) | ONNX(w/o postprocess) | Paddle Lite(fp32) | Paddle Lite(fp16) |
+| :-------- | :--------: | :---------------------: | :---------------------: | :----------------: | :----------------: |
-| PicoDet-S | 320*320 | [model](https://paddledet.bj.bcebos.com/deploy/third_engine/picodet_s_320_coco.onnx) | [model](https://paddledet.bj.bcebos.com/deploy/paddlelite/picodet_s_320.tar) | [model](https://paddledet.bj.bcebos.com/deploy/paddlelite/picodet_s_320_fp16.tar) |
-| PicoDet-S | 416*416 | [model](https://paddledet.bj.bcebos.com/deploy/third_engine/picodet_s_416_coco.onnx) | [model](https://paddledet.bj.bcebos.com/deploy/paddlelite/picodet_s_416.tar) | [model](https://paddledet.bj.bcebos.com/deploy/paddlelite/picodet_s_416_fp16.tar) |
-| PicoDet-M | 320*320 | [model](https://paddledet.bj.bcebos.com/deploy/third_engine/picodet_m_320_coco.onnx) | [model](https://paddledet.bj.bcebos.com/deploy/paddlelite/picodet_m_320.tar) | [model](https://paddledet.bj.bcebos.com/deploy/paddlelite/picodet_m_320_fp16.tar) |
-| PicoDet-M | 416*416 | [model](https://paddledet.bj.bcebos.com/deploy/third_engine/picodet_m_416_coco.onnx) | [model](https://paddledet.bj.bcebos.com/deploy/paddlelite/picodet_m_416.tar) | [model](https://paddledet.bj.bcebos.com/deploy/paddlelite/picodet_m_416_fp16.tar) |
-| PicoDet-L | 320*320 | [model](https://paddledet.bj.bcebos.com/deploy/third_engine/picodet_l_320_coco.onnx) | [model](https://paddledet.bj.bcebos.com/deploy/paddlelite/picodet_l_320.tar) | [model](https://paddledet.bj.bcebos.com/deploy/paddlelite/picodet_l_320_fp16.tar) |
-| PicoDet-L | 416*416 | [model](https://paddledet.bj.bcebos.com/deploy/third_engine/picodet_l_416_coco.onnx) | [model](https://paddledet.bj.bcebos.com/deploy/paddlelite/picodet_l_416.tar) | [model](https://paddledet.bj.bcebos.com/deploy/paddlelite/picodet_l_416_fp16.tar) |
-| PicoDet-L | 640*640 | [model](https://paddledet.bj.bcebos.com/deploy/third_engine/picodet_l_640_coco.onnx) | [model](https://paddledet.bj.bcebos.com/deploy/paddlelite/picodet_l_640.tar) | [model](https://paddledet.bj.bcebos.com/deploy/paddlelite/picodet_l_640_fp16.tar) |
-| PicoDet-Shufflenetv2 1x | 416*416 | [model](https://paddledet.bj.bcebos.com/deploy/third_engine/picodet_shufflenetv2_1x_416_coco.onnx) | [model](https://paddledet.bj.bcebos.com/deploy/paddlelite/picodet_shufflenetv2_1x.tar) | [model](https://paddledet.bj.bcebos.com/deploy/paddlelite/picodet_shufflenetv2_1x_fp16.tar) |
-| PicoDet-MobileNetv3-large 1x | 416*416 | [model](https://paddledet.bj.bcebos.com/deploy/third_engine/picodet_mobilenetv3_large_1x_416_coco.onnx) | [model](https://paddledet.bj.bcebos.com/deploy/paddlelite/picodet_mobilenetv3_large_1x.tar) | [model](https://paddledet.bj.bcebos.com/deploy/paddlelite/picodet_mobilenetv3_large_1x_fp16.tar) |
-| PicoDet-LCNet 1.5x | 416*416 | [model](https://paddledet.bj.bcebos.com/deploy/third_engine/picodet_lcnet_1_5x_416_coco.onnx) | [model](https://paddledet.bj.bcebos.com/deploy/paddlelite/picodet_lcnet_1_5x.tar) | [model](https://paddledet.bj.bcebos.com/deploy/paddlelite/picodet_lcnet_1_5x_fp16.tar) |
-
-
-### 部署
-
-- PaddleInference demo [Python](../../deploy/python) & [C++](../../deploy/cpp)
-- [PaddleLite C++ demo](../../deploy/lite)
-- [NCNN C++/Python demo](../../deploy/third_engine/demo_ncnn)
-- [MNN C++/Python demo](../../deploy/third_engine/demo_mnn)
-- [OpenVINO C++ demo](../../deploy/third_engine/demo_openvino)
-- [Android demo(Paddle 
Lite)](https://github.com/PaddlePaddle/Paddle-Lite-Demo/tree/develop/object_detection/android/app/cxx/picodet_detection_demo) - - -Android demo可视化: +| PicoDet-XS | 320*320 | [( w/ postprocess)](https://paddledet.bj.bcebos.com/deploy/third_engine/picodet_xs_320_lcnet_postprocessed.onnx) | [( w/o postprocess)](https://paddledet.bj.bcebos.com/deploy/third_engine/picodet_xs_320_coco_lcnet.onnx) | [model](https://paddledet.bj.bcebos.com/deploy/paddlelite/picodet_xs_320_coco_lcnet.tar) | [model](https://paddledet.bj.bcebos.com/deploy/paddlelite/picodet_xs_320_coco_lcnet_fp16.tar) | +| PicoDet-XS | 416*416 | [( w/ postprocess)](https://paddledet.bj.bcebos.com/deploy/third_engine/picodet_xs_416_lcnet_postprocessed.onnx) | [( w/o postprocess)](https://paddledet.bj.bcebos.com/deploy/third_engine/picodet_xs_416_coco_lcnet.onnx) | [model](https://paddledet.bj.bcebos.com/deploy/paddlelite/picodet_xs_416_coco_lcnet.tar) | [model](https://paddledet.bj.bcebos.com/deploy/paddlelite/picodet_xs_416_coco_lcnet_fp16.tar) | +| PicoDet-S | 320*320 | [( w/ postprocess)](https://paddledet.bj.bcebos.com/deploy/third_engine/picodet_s_320_lcnet_postprocessed.onnx) | [( w/o postprocess)](https://paddledet.bj.bcebos.com/deploy/third_engine/picodet_s_320_coco_lcnet.onnx) | [model](https://paddledet.bj.bcebos.com/deploy/paddlelite/picodet_s_320_coco_lcnet.tar) | [model](https://paddledet.bj.bcebos.com/deploy/paddlelite/picodet_s_320_coco_lcnet_fp16.tar) | +| PicoDet-S | 416*416 | [( w/ postprocess)](https://paddledet.bj.bcebos.com/deploy/third_engine/picodet_s_416_lcnet_postprocessed.onnx) | [( w/o postprocess)](https://paddledet.bj.bcebos.com/deploy/third_engine/picodet_s_416_coco_lcnet.onnx) | [model](https://paddledet.bj.bcebos.com/deploy/paddlelite/picodet_s_416_coco_lcnet.tar) | [model](https://paddledet.bj.bcebos.com/deploy/paddlelite/picodet_s_416_coco_lcnet_fp16.tar) | +| PicoDet-M | 320*320 | [( w/ postprocess)](https://paddledet.bj.bcebos.com/deploy/third_engine/picodet_m_320_lcnet_postprocessed.onnx) | [( w/o postprocess)](https://paddledet.bj.bcebos.com/deploy/third_engine/picodet_m_320_coco_lcnet.onnx) | [model](https://paddledet.bj.bcebos.com/deploy/paddlelite/picodet_m_320_coco_lcnet.tar) | [model](https://paddledet.bj.bcebos.com/deploy/paddlelite/picodet_m_320_coco_lcnet_fp16.tar) | +| PicoDet-M | 416*416 | [( w/ postprocess)](https://paddledet.bj.bcebos.com/deploy/third_engine/picodet_m_416_lcnet_postprocessed.onnx) | [( w/o postprocess)](https://paddledet.bj.bcebos.com/deploy/third_engine/picodet_m_416_coco_lcnet.onnx) | [model](https://paddledet.bj.bcebos.com/deploy/paddlelite/picodet_m_416_coco_lcnet.tar) | [model](https://paddledet.bj.bcebos.com/deploy/paddlelite/picodet_m_416_coco_lcnet_fp16.tar) | +| PicoDet-L | 320*320 | [( w/ postprocess)](https://paddledet.bj.bcebos.com/deploy/third_engine/picodet_l_320_lcnet_postprocessed.onnx) | [( w/o postprocess)](https://paddledet.bj.bcebos.com/deploy/third_engine/picodet_l_320_coco_lcnet.onnx) | [model](https://paddledet.bj.bcebos.com/deploy/paddlelite/picodet_l_320_coco_lcnet.tar) | [model](https://paddledet.bj.bcebos.com/deploy/paddlelite/picodet_l_320_coco_lcnet_fp16.tar) | +| PicoDet-L | 416*416 | [( w/ postprocess)](https://paddledet.bj.bcebos.com/deploy/third_engine/picodet_l_416_lcnet_postprocessed.onnx) | [( w/o postprocess)](https://paddledet.bj.bcebos.com/deploy/third_engine/picodet_l_416_coco_lcnet.onnx) | [model](https://paddledet.bj.bcebos.com/deploy/paddlelite/picodet_l_416_coco_lcnet.tar) | 
[model](https://paddledet.bj.bcebos.com/deploy/paddlelite/picodet_l_416_coco_lcnet_fp16.tar) |
+| PicoDet-L | 640*640 | [( w/ postprocess)](https://paddledet.bj.bcebos.com/deploy/third_engine/picodet_l_640_lcnet_postprocessed.onnx) | [( w/o postprocess)](https://paddledet.bj.bcebos.com/deploy/third_engine/picodet_l_640_coco_lcnet.onnx) | [model](https://paddledet.bj.bcebos.com/deploy/paddlelite/picodet_l_640_coco_lcnet.tar) | [model](https://paddledet.bj.bcebos.com/deploy/paddlelite/picodet_l_640_coco_lcnet_fp16.tar) |
+
+
+### Deploy
+
+| Infer Engine | Python | C++ | Predict With Postprocess |
+| :-------- | :--------: | :---------------------: | :----------------: |
+| OpenVINO | [Python](../../deploy/third_engine/demo_openvino/python) | [C++](../../deploy/third_engine/demo_openvino)(postprocess coming soon) | ✔︎ |
+| Paddle Lite | - | [C++](../../deploy/lite) | ✔︎ |
+| Android Demo | - | [Paddle Lite](https://github.com/PaddlePaddle/Paddle-Lite-Demo/tree/develop/object_detection/android/app/cxx/picodet_detection_demo) | ✔︎ |
+| PaddleInference | [Python](../../deploy/python) | [C++](../../deploy/cpp) | ✔︎ |
+| ONNXRuntime | [Python](../../deploy/third_engine/demo_onnxruntime) | Coming soon | ✔︎ |
+| NCNN | Coming soon | [C++](../../deploy/third_engine/demo_ncnn) | ✘ |
+| MNN | Coming soon | [C++](../../deploy/third_engine/demo_mnn) | ✘ |
+
+
+Android demo visualization:
    -## 量化 +## Quantization
    -依赖包: +Requirements: - PaddlePaddle >= 2.2.2 -- PaddleSlim >= 2.2.1 +- PaddleSlim >= 2.2.2 -**安装:** +**Install:** ```shell -pip install paddleslim==2.2.1 +pip install paddleslim==2.2.2 ```
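A quick way to verify that the installed packages satisfy the version requirements above (this sanity-check command is ours, not part of the original docs):

```shell
# print the installed PaddlePaddle / PaddleSlim versions
pip show paddlepaddle paddleslim | grep -E "Name|Version"
```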
-<details>
-<summary>量化训练 (点击展开)</summary>
+<details open>
+<summary>Quant aware</summary>

-开始量化训练:
+Configure the quant config and start training:

```shell
-python tools/train.py -c configs/picodet/picodet_s_320_coco_lcnet.yml \
-          --slim_config configs/slim/quant/picodet_s_quant.yml --eval
+python tools/train.py -c configs/picodet/picodet_s_416_coco_lcnet.yml \
+          --slim_config configs/slim/quant/picodet_s_416_lcnet_quant.yml --eval
```

-- 更多细节请参考[slim文档](https://github.com/PaddlePaddle/PaddleDetection/tree/develop/configs/slim)
+- For more details, please refer to the [slim documentation](https://github.com/PaddlePaddle/PaddleDetection/tree/develop/configs/slim)
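For orientation only: the slim quant configs referenced above follow a common pattern in PaddleDetection — they point `pretrain_weights` at the FP32 model and select the `QAT` slim strategy. A minimal sketch of such a config (field values here are illustrative, not a verbatim copy of the shipped file; see `configs/slim/quant/` for the real ones):

```yaml
# Sketch of a QAT (quant-aware training) slim config; values are illustrative.
pretrain_weights: https://paddledet.bj.bcebos.com/models/picodet_s_416_coco_lcnet.pdparams
slim: QAT

QAT:
  quant_config: {
    'activation_preprocess_type': 'PACT',
    'weight_quantize_type': 'channel_wise_abs_max',
    'activation_quantize_type': 'moving_average_abs_max',
    'weight_bits': 8, 'activation_bits': 8, 'dtype': 'int8',
    'quantizable_layer_type': ['Conv2D', 'Linear']}
  print_model: True
```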
-<details>
-<summary>离线量化 (点击展开)</summary>
-
-校准及导出量化模型:
+- Quant Aware Model Zoo:

-```shell
-python tools/post_quant.py -c configs/picodet/picodet_s_320_coco_lcnet.yml \
-          --slim_config configs/slim/post_quant/picodet_s_ptq.yml
-```
-
-- 注意: 离线量化模型精度问题正在解决中.
-
-</details>

+| Quant Model | Input size | mAP<sup>val<br>0.5:0.95</sup> | Configs | Weight | Inference Model | Paddle Lite(INT8) |
+| :-------- | :--------: | :--------------------: | :-------: | :----------------: | :----------------: | :----------------: |
+| PicoDet-S | 416*416 | 31.5 | [config](./picodet_s_416_coco_lcnet.yml) &#124; [slim config](../slim/quant/picodet_s_416_lcnet_quant.yml) | [model](https://paddledet.bj.bcebos.com/models/picodet_s_416_coco_lcnet_quant.pdparams) | [w/ postprocess](https://paddledet.bj.bcebos.com/deploy/Inference/picodet_s_416_coco_lcnet_quant.tar) &#124; [w/o postprocess](https://paddledet.bj.bcebos.com/deploy/Inference/picodet_s_416_coco_lcnet_quant_non_postprocess.tar) | [w/ postprocess](https://paddledet.bj.bcebos.com/deploy/paddlelite/picodet_s_416_coco_lcnet_quant.nb) &#124; [w/o postprocess](https://paddledet.bj.bcebos.com/deploy/paddlelite/picodet_s_416_coco_lcnet_quant_non_postprocess.nb) |

-## 非结构化剪枝
+## Unstructured Pruning
-教程:
+Tutorial:

-训练及部署细节请参考[非结构化剪枝文档](https://github.com/PaddlePaddle/PaddleDetection/tree/develop/configs/picodet/pruner/README.md)。
+Please refer to this [documentation](https://github.com/PaddlePaddle/PaddleDetection/tree/develop/configs/picodet/legacy_model/pruner/README.md) for details such as requirements, training, and deployment.
-## 应用
+## Application

-- **行人检测:** `PicoDet-S-Pedestrian`行人检测模型请参考[PP-TinyPose](https://github.com/PaddlePaddle/PaddleDetection/tree/develop/configs/keypoint/tiny_pose#%E8%A1%8C%E4%BA%BA%E6%A3%80%E6%B5%8B%E6%A8%A1%E5%9E%8B)
+- **Pedestrian detection:** for the `PicoDet-S-Pedestrian` model zoo, please refer to [PP-TinyPose](https://github.com/PaddlePaddle/PaddleDetection/tree/develop/configs/keypoint/tiny_pose#%E8%A1%8C%E4%BA%BA%E6%A3%80%E6%B5%8B%E6%A8%A1%E5%9E%8B)

-- **主体检测:** `PicoDet-L-Mainbody`主体检测模型请参考[主体检测文档](./application/mainbody_detection/README.md)
+- **Mainbody detection:** for the `PicoDet-L-Mainbody` model zoo, please refer to [mainbody detection](./legacy_model/application/mainbody_detection/README.md)

## FAQ
-显存爆炸(Out of memory error)
+Out of memory error.

-请减小配置文件中`TrainReader`的`batch_size`。
+Please reduce the `batch_size` of `TrainReader` in the config file, as shown below.
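For instance, the PicoDet reader configs in this patch default to `batch_size: 64`; halving it (the exact value is illustrative — reduce until training fits in memory) is usually enough:

```yaml
# In the training config (or pass -o TrainReader.batch_size=32 to tools/train.py)
TrainReader:
  batch_size: 32   # reduced from the default 64 to fit GPU memory
```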
-如何迁移学习
+How to transfer learning.

-请重新设置配置文件中的`pretrain_weights`字段，比如利用COCO上训好的模型在自己的数据上继续训练:
+Please reset the `pretrain_weights` field in the config file to a model trained on COCO, then continue training on your own data. For example:
```yaml
pretrain_weights: https://paddledet.bj.bcebos.com/models/picodet_l_640_coco_lcnet.pdparams
```
@@ -311,17 +305,17 @@ pretrain_weights: https://paddledet.bj.bcebos.com/models/picodet_l_640_coco_lcne
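A sketch of the corresponding fine-tuning launch — the config path is just the example used above, and the `-o` override is an alternative to editing the YAML:

```shell
# fine-tune from COCO-pretrained weights on your own dataset
python tools/train.py -c configs/picodet/picodet_l_640_coco_lcnet.yml \
    -o pretrain_weights=https://paddledet.bj.bcebos.com/models/picodet_l_640_coco_lcnet.pdparams \
    --eval
```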
-`transpose`算子在某些硬件上耗时验证
+The `transpose` operator is time-consuming on some hardware.

-请使用`PicoDet-LCNet`模型，`transpose`较少。
+Please use the `PicoDet-LCNet` model, which has fewer `transpose` operators.
-如何计算模型参数量。
+How to count model parameters.

-可以将以下代码插入:[trainer.py](https://github.com/PaddlePaddle/PaddleDetection/blob/develop/ppdet/engine/trainer.py#L141) 来计算参数量。
+You can insert the code below at [this point in trainer.py](https://github.com/PaddlePaddle/PaddleDetection/blob/develop/ppdet/engine/trainer.py#L141) to count the learnable parameters.

```python
 params = sum([
@@ -333,8 +327,8 @@ print('params: ', params)
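The diff hunk above truncates the snippet after its first line. For reference, a self-contained version of the parameter count (assuming it is pasted inside `Trainer`, where `self.model` is the detector `paddle.nn.Layer`) would look roughly like this; the `_mean`/`_variance` filter skips BatchNorm running statistics, which are buffers rather than learnable weights:

```python
# Count learnable parameters, skipping BatchNorm running statistics.
params = sum([
    p.numel() for n, p in self.model.named_parameters()
    if all([x not in n for x in ['_mean', '_variance']])
])
print('params: ', params)
```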
    -## 引用PP-PicoDet -如果需要在你的研究中使用PP-PicoDet,请通过一下方式引用我们的技术报告: +## Cite PP-PicoDet +If you use PicoDet in your research, please cite our work by using the following BibTeX entry: ``` @misc{yu2021pppicodet, title={PP-PicoDet: A Better Real-Time Object Detector on Mobile Devices}, diff --git a/configs/picodet/_base_/picodet_320_reader.yml b/configs/picodet/_base_/picodet_320_reader.yml index 6b0112469e0f2b827f6addd14ac3a0b6cb42f3c0..7d6500679dba0f06c6238aa8bed4f2fd0ad8bd5b 100644 --- a/configs/picodet/_base_/picodet_320_reader.yml +++ b/configs/picodet/_base_/picodet_320_reader.yml @@ -1,4 +1,8 @@ worker_num: 6 +eval_height: &eval_height 320 +eval_width: &eval_width 320 +eval_size: &eval_size [*eval_height, *eval_width] + TrainReader: sample_transforms: - Decode: {} @@ -18,7 +22,7 @@ TrainReader: EvalReader: sample_transforms: - Decode: {} - - Resize: {interp: 2, target_size: [320, 320], keep_ratio: False} + - Resize: {interp: 2, target_size: *eval_size, keep_ratio: False} - NormalizeImage: {is_scale: true, mean: [0.485,0.456,0.406], std: [0.229, 0.224,0.225]} - Permute: {} batch_transforms: @@ -29,13 +33,10 @@ EvalReader: TestReader: inputs_def: - image_shape: [1, 3, 320, 320] + image_shape: [1, 3, *eval_height, *eval_width] sample_transforms: - Decode: {} - - Resize: {interp: 2, target_size: [320, 320], keep_ratio: False} + - Resize: {interp: 2, target_size: *eval_size, keep_ratio: False} - NormalizeImage: {is_scale: true, mean: [0.485,0.456,0.406], std: [0.229, 0.224,0.225]} - Permute: {} - batch_transforms: - - PadBatch: {pad_to_stride: 32} batch_size: 1 - shuffle: false diff --git a/configs/picodet/_base_/picodet_416_reader.yml b/configs/picodet/_base_/picodet_416_reader.yml index f98fe08e1aa312cdd6e59acd0594f19d5c27b7ea..ee4ae98865f7eb58994c0a79964d24e41c697373 100644 --- a/configs/picodet/_base_/picodet_416_reader.yml +++ b/configs/picodet/_base_/picodet_416_reader.yml @@ -1,4 +1,8 @@ worker_num: 6 +eval_height: &eval_height 416 +eval_width: &eval_width 416 +eval_size: &eval_size [*eval_height, *eval_width] + TrainReader: sample_transforms: - Decode: {} @@ -18,7 +22,7 @@ TrainReader: EvalReader: sample_transforms: - Decode: {} - - Resize: {interp: 2, target_size: [416, 416], keep_ratio: False} + - Resize: {interp: 2, target_size: *eval_size, keep_ratio: False} - NormalizeImage: {is_scale: true, mean: [0.485,0.456,0.406], std: [0.229, 0.224,0.225]} - Permute: {} batch_transforms: @@ -29,13 +33,10 @@ EvalReader: TestReader: inputs_def: - image_shape: [1, 3, 416, 416] + image_shape: [1, 3, *eval_height, *eval_width] sample_transforms: - Decode: {} - - Resize: {interp: 2, target_size: [416, 416], keep_ratio: False} + - Resize: {interp: 2, target_size: *eval_size, keep_ratio: False} - NormalizeImage: {is_scale: true, mean: [0.485,0.456,0.406], std: [0.229, 0.224,0.225]} - Permute: {} - batch_transforms: - - PadBatch: {pad_to_stride: 32} batch_size: 1 - shuffle: false diff --git a/configs/picodet/_base_/picodet_640_reader.yml b/configs/picodet/_base_/picodet_640_reader.yml index d90fbeb9770b911ec2dbe3c80c4b5479f7dd53e1..5502026af8b1d0762405db17e655b2b6628dea04 100644 --- a/configs/picodet/_base_/picodet_640_reader.yml +++ b/configs/picodet/_base_/picodet_640_reader.yml @@ -1,4 +1,8 @@ worker_num: 6 +eval_height: &eval_height 640 +eval_width: &eval_width 640 +eval_size: &eval_size [*eval_height, *eval_width] + TrainReader: sample_transforms: - Decode: {} @@ -18,7 +22,7 @@ TrainReader: EvalReader: sample_transforms: - Decode: {} - - Resize: {interp: 2, target_size: [640, 640], keep_ratio: 
False} + - Resize: {interp: 2, target_size: *eval_size, keep_ratio: False} - NormalizeImage: {is_scale: true, mean: [0.485,0.456,0.406], std: [0.229, 0.224,0.225]} - Permute: {} batch_transforms: @@ -29,13 +33,10 @@ EvalReader: TestReader: inputs_def: - image_shape: [1, 3, 640, 640] + image_shape: [1, 3, *eval_height, *eval_width] sample_transforms: - Decode: {} - - Resize: {interp: 2, target_size: [640, 640], keep_ratio: False} + - Resize: {interp: 2, target_size: *eval_size, keep_ratio: False} - NormalizeImage: {is_scale: true, mean: [0.485,0.456,0.406], std: [0.229, 0.224,0.225]} - Permute: {} - batch_transforms: - - PadBatch: {pad_to_stride: 32} batch_size: 1 - shuffle: false diff --git a/configs/picodet/application/pedestrian_detection/picodet_s_192_lcnet_pedestrian.yml b/configs/picodet/application/pedestrian_detection/picodet_s_192_lcnet_pedestrian.yml new file mode 100644 index 0000000000000000000000000000000000000000..bb3d2e9bc923f0ee41c12fbbb7d1a7b91b97d339 --- /dev/null +++ b/configs/picodet/application/pedestrian_detection/picodet_s_192_lcnet_pedestrian.yml @@ -0,0 +1,161 @@ +use_gpu: true +use_xpu: false +log_iter: 20 +save_dir: output +snapshot_epoch: 1 +print_flops: false + +# Exporting the model +export: + post_process: True # Whether post-processing is included in the network when export model. + nms: True # Whether NMS is included in the network when export model. + benchmark: False # It is used to testing model performance, if set `True`, post-process and NMS will not be exported. + +metric: COCO +num_classes: 1 + +architecture: PicoDet +pretrain_weights: https://paddle-imagenet-models-name.bj.bcebos.com/dygraph/legendary_models/PPLCNet_x0_75_pretrained.pdparams +weights: output/picodet_s_192_lcnet_pedestrian/best_model +find_unused_parameters: True +use_ema: true +epoch: 300 +snapshot_epoch: 10 + +PicoDet: + backbone: LCNet + neck: LCPAN + head: PicoHeadV2 + +LCNet: + scale: 0.75 + feature_maps: [3, 4, 5] + +LCPAN: + out_channels: 96 + use_depthwise: True + num_features: 4 + +PicoHeadV2: + conv_feat: + name: PicoFeat + feat_in: 96 + feat_out: 96 + num_convs: 2 + num_fpn_stride: 4 + norm_type: bn + share_cls_reg: True + use_se: True + feat_in_chan: 96 + fpn_stride: [8, 16, 32, 64] + prior_prob: 0.01 + reg_max: 7 + cell_offset: 0.5 + grid_cell_scale: 5.0 + static_assigner_epoch: 100 + use_align_head: True + static_assigner: + name: ATSSAssigner + topk: 4 + force_gt_matching: False + assigner: + name: TaskAlignedAssigner + topk: 13 + alpha: 1.0 + beta: 6.0 + loss_class: + name: VarifocalLoss + use_sigmoid: False + iou_weighted: True + loss_weight: 1.0 + loss_dfl: + name: DistributionFocalLoss + loss_weight: 0.5 + loss_bbox: + name: GIoULoss + loss_weight: 2.5 + nms: + name: MultiClassNMS + nms_top_k: 1000 + keep_top_k: 100 + score_threshold: 0.025 + nms_threshold: 0.6 + +LearningRate: + base_lr: 0.32 + schedulers: + - !CosineDecay + max_epochs: 300 + - !LinearWarmup + start_factor: 0.1 + steps: 300 + +OptimizerBuilder: + optimizer: + momentum: 0.9 + type: Momentum + regularizer: + factor: 0.00004 + type: L2 + +worker_num: 6 +eval_height: &eval_height 192 +eval_width: &eval_width 192 +eval_size: &eval_size [*eval_height, *eval_width] + +TrainReader: + sample_transforms: + - Decode: {} + - RandomCrop: {} + - RandomFlip: {prob: 0.5} + - RandomDistort: {} + batch_transforms: + - BatchRandomResize: {target_size: [128, 160, 192, 224, 256], random_size: True, random_interp: True, keep_ratio: False} + - NormalizeImage: {is_scale: true, mean: [0.485,0.456,0.406], std: [0.229, 
0.224,0.225]} + - Permute: {} + - PadGT: {} + batch_size: 64 + shuffle: true + drop_last: true + + +EvalReader: + sample_transforms: + - Decode: {} + - Resize: {interp: 2, target_size: *eval_size, keep_ratio: False} + - NormalizeImage: {is_scale: true, mean: [0.485,0.456,0.406], std: [0.229, 0.224,0.225]} + - Permute: {} + batch_transforms: + - PadBatch: {pad_to_stride: 32} + batch_size: 8 + shuffle: false + + +TestReader: + inputs_def: + image_shape: [1, 3, *eval_height, *eval_width] + sample_transforms: + - Decode: {} + - Resize: {interp: 2, target_size: *eval_size, keep_ratio: False} + - NormalizeImage: {is_scale: true, mean: [0.485,0.456,0.406], std: [0.229, 0.224,0.225]} + - Permute: {} + batch_size: 1 + + +TrainDataset: + !COCODataSet + image_dir: "" + anno_path: aic_coco_train_cocoformat.json + dataset_dir: dataset + data_fields: ['image', 'gt_bbox', 'gt_class', 'is_crowd'] + +EvalDataset: + !COCODataSet + image_dir: val2017 + anno_path: annotations/instances_val2017.json + dataset_dir: dataset/coco + +TestDataset: + !ImageFolder + anno_path: annotations/instances_val2017.json # also support txt (like VOC's label_list.txt) + dataset_dir: dataset/coco # if set, anno_path will be 'dataset_dir/anno_path' diff --git a/configs/picodet/application/pedestrian_detection/picodet_s_320_lcnet_pedestrian.yml b/configs/picodet/application/pedestrian_detection/picodet_s_320_lcnet_pedestrian.yml new file mode 100644 index 0000000000000000000000000000000000000000..91402ba5e6cf8edb587566260c1bb7a202d3be61 --- /dev/null +++ b/configs/picodet/application/pedestrian_detection/picodet_s_320_lcnet_pedestrian.yml @@ -0,0 +1,160 @@ +use_gpu: true +use_xpu: false +log_iter: 20 +save_dir: output +snapshot_epoch: 1 +print_flops: false + +# Exporting the model +export: + post_process: True # Whether post-processing is included in the network when export model. + nms: True # Whether NMS is included in the network when export model. + benchmark: False # It is used to testing model performance, if set `True`, post-process and NMS will not be exported. 
+ +metric: COCO +num_classes: 1 + +architecture: PicoDet +pretrain_weights: https://paddle-imagenet-models-name.bj.bcebos.com/dygraph/legendary_models/PPLCNet_x0_75_pretrained.pdparams +weights: output/picodet_s_320_lcnet_pedestrian/best_model +find_unused_parameters: True +use_ema: true +epoch: 300 +snapshot_epoch: 10 + +PicoDet: + backbone: LCNet + neck: LCPAN + head: PicoHeadV2 + +LCNet: + scale: 0.75 + feature_maps: [3, 4, 5] + +LCPAN: + out_channels: 96 + use_depthwise: True + num_features: 4 + +PicoHeadV2: + conv_feat: + name: PicoFeat + feat_in: 96 + feat_out: 96 + num_convs: 2 + num_fpn_stride: 4 + norm_type: bn + share_cls_reg: True + use_se: True + feat_in_chan: 96 + fpn_stride: [8, 16, 32, 64] + prior_prob: 0.01 + reg_max: 7 + cell_offset: 0.5 + grid_cell_scale: 5.0 + static_assigner_epoch: 100 + use_align_head: True + static_assigner: + name: ATSSAssigner + topk: 9 + force_gt_matching: False + assigner: + name: TaskAlignedAssigner + topk: 13 + alpha: 1.0 + beta: 6.0 + loss_class: + name: VarifocalLoss + use_sigmoid: False + iou_weighted: True + loss_weight: 1.0 + loss_dfl: + name: DistributionFocalLoss + loss_weight: 0.5 + loss_bbox: + name: GIoULoss + loss_weight: 2.5 + nms: + name: MultiClassNMS + nms_top_k: 1000 + keep_top_k: 100 + score_threshold: 0.025 + nms_threshold: 0.6 + +LearningRate: + base_lr: 0.32 + schedulers: + - !CosineDecay + max_epochs: 300 + - !LinearWarmup + start_factor: 0.1 + steps: 300 + +OptimizerBuilder: + optimizer: + momentum: 0.9 + type: Momentum + regularizer: + factor: 0.00004 + type: L2 + +worker_num: 6 +eval_height: &eval_height 320 +eval_width: &eval_width 320 +eval_size: &eval_size [*eval_height, *eval_width] + +TrainReader: + sample_transforms: + - Decode: {} + - RandomCrop: {} + - RandomFlip: {prob: 0.5} + - RandomDistort: {} + batch_transforms: + - BatchRandomResize: {target_size: [256, 288, 320, 352, 384], random_size: True, random_interp: True, keep_ratio: False} + - NormalizeImage: {is_scale: true, mean: [0.485,0.456,0.406], std: [0.229, 0.224,0.225]} + - Permute: {} + - PadGT: {} + batch_size: 64 + shuffle: true + drop_last: true + + +EvalReader: + sample_transforms: + - Decode: {} + - Resize: {interp: 2, target_size: *eval_size, keep_ratio: False} + - NormalizeImage: {is_scale: true, mean: [0.485,0.456,0.406], std: [0.229, 0.224,0.225]} + - Permute: {} + batch_transforms: + - PadBatch: {pad_to_stride: 32} + batch_size: 8 + shuffle: false + + +TestReader: + inputs_def: + image_shape: [1, 3, *eval_height, *eval_width] + sample_transforms: + - Decode: {} + - Resize: {interp: 2, target_size: *eval_size, keep_ratio: False} + - NormalizeImage: {is_scale: true, mean: [0.485,0.456,0.406], std: [0.229, 0.224,0.225]} + - Permute: {} + batch_size: 1 + +TrainDataset: + !COCODataSet + image_dir: "" + anno_path: aic_coco_train_cocoformat.json + dataset_dir: dataset + data_fields: ['image', 'gt_bbox', 'gt_class', 'is_crowd'] + +EvalDataset: + !COCODataSet + image_dir: val2017 + anno_path: annotations/instances_val2017.json + dataset_dir: dataset/coco + +TestDataset: + !ImageFolder + anno_path: annotations/instances_val2017.json # also support txt (like VOC's label_list.txt) + dataset_dir: dataset/coco # if set, anno_path will be 'dataset_dir/anno_path' diff --git a/configs/picodet/legacy_model/README.md b/configs/picodet/legacy_model/README.md index f58ebc75be2f9aee557341143b97c3d90de3a459..7821c88be602bfa57cfd4ab36bcaa1040ce8a85c 100644 --- a/configs/picodet/legacy_model/README.md +++ b/configs/picodet/legacy_model/README.md @@ -29,6 +29,23 @@
    +- Deploy models + +| Model | Input size | ONNX | Paddle Lite(fp32) | Paddle Lite(fp16) | +| :-------- | :--------: | :---------------------: | :----------------: | :----------------: | +| PicoDet-S | 320*320 | [model](https://paddledet.bj.bcebos.com/deploy/third_engine/picodet_s_320_coco.onnx) | [model](https://paddledet.bj.bcebos.com/deploy/paddlelite/picodet_s_320.tar) | [model](https://paddledet.bj.bcebos.com/deploy/paddlelite/picodet_s_320_fp16.tar) | +| PicoDet-S | 416*416 | [model](https://paddledet.bj.bcebos.com/deploy/third_engine/picodet_s_416_coco.onnx) | [model](https://paddledet.bj.bcebos.com/deploy/paddlelite/picodet_s_416.tar) | [model](https://paddledet.bj.bcebos.com/deploy/paddlelite/picodet_s_416_fp16.tar) | +| PicoDet-M | 320*320 | [model](https://paddledet.bj.bcebos.com/deploy/third_engine/picodet_m_320_coco.onnx) | [model](https://paddledet.bj.bcebos.com/deploy/paddlelite/picodet_m_320.tar) | [model](https://paddledet.bj.bcebos.com/deploy/paddlelite/picodet_m_320_fp16.tar) | +| PicoDet-M | 416*416 | [model](https://paddledet.bj.bcebos.com/deploy/third_engine/picodet_m_416_coco.onnx) | [model](https://paddledet.bj.bcebos.com/deploy/paddlelite/picodet_m_416.tar) | [model](https://paddledet.bj.bcebos.com/deploy/paddlelite/picodet_m_416_fp16.tar) | +| PicoDet-L | 320*320 | [model](https://paddledet.bj.bcebos.com/deploy/third_engine/picodet_l_320_coco.onnx) | [model](https://paddledet.bj.bcebos.com/deploy/paddlelite/picodet_l_320.tar) | [model](https://paddledet.bj.bcebos.com/deploy/paddlelite/picodet_l_320_fp16.tar) | +| PicoDet-L | 416*416 | [model](https://paddledet.bj.bcebos.com/deploy/third_engine/picodet_l_416_coco.onnx) | [model](https://paddledet.bj.bcebos.com/deploy/paddlelite/picodet_l_416.tar) | [model](https://paddledet.bj.bcebos.com/deploy/paddlelite/picodet_l_416_fp16.tar) | +| PicoDet-L | 640*640 | [model](https://paddledet.bj.bcebos.com/deploy/third_engine/picodet_l_640_coco.onnx) | [model](https://paddledet.bj.bcebos.com/deploy/paddlelite/picodet_l_640.tar) | [model](https://paddledet.bj.bcebos.com/deploy/paddlelite/picodet_l_640_fp16.tar) | +| PicoDet-Shufflenetv2 1x | 416*416 | [model](https://paddledet.bj.bcebos.com/deploy/third_engine/picodet_shufflenetv2_1x_416_coco.onnx) | [model](https://paddledet.bj.bcebos.com/deploy/paddlelite/picodet_shufflenetv2_1x.tar) | [model](https://paddledet.bj.bcebos.com/deploy/paddlelite/picodet_shufflenetv2_1x_fp16.tar) | +| PicoDet-MobileNetv3-large 1x | 416*416 | [model](https://paddledet.bj.bcebos.com/deploy/third_engine/picodet_mobilenetv3_large_1x_416_coco.onnx) | [model](https://paddledet.bj.bcebos.com/deploy/paddlelite/picodet_mobilenetv3_large_1x.tar) | [model](https://paddledet.bj.bcebos.com/deploy/paddlelite/picodet_mobilenetv3_large_1x_fp16.tar) | +| PicoDet-LCNet 1.5x | 416*416 | [model](https://paddledet.bj.bcebos.com/deploy/third_engine/picodet_lcnet_1_5x_416_coco.onnx) | [model](https://paddledet.bj.bcebos.com/deploy/paddlelite/picodet_lcnet_1_5x.tar) | [model](https://paddledet.bj.bcebos.com/deploy/paddlelite/picodet_lcnet_1_5x_fp16.tar) | + + + ## Cite PP-PicoDet ``` @misc{yu2021pppicodet, diff --git a/configs/picodet/legacy_model/_base_/picodet_320_reader.yml b/configs/picodet/legacy_model/_base_/picodet_320_reader.yml index 2ce5bca6695ac50f25622b7d1704e68a20179b22..4d3f0cbd8648bf2d8ef44cdbf1d2422865a22c94 100644 --- a/configs/picodet/legacy_model/_base_/picodet_320_reader.yml +++ b/configs/picodet/legacy_model/_base_/picodet_320_reader.yml @@ -1,4 +1,8 @@ worker_num: 6 +eval_height: &eval_height 
320 +eval_width: &eval_width 320 +eval_size: &eval_size [*eval_height, *eval_width] + TrainReader: sample_transforms: - Decode: {} @@ -18,7 +22,7 @@ TrainReader: EvalReader: sample_transforms: - Decode: {} - - Resize: {interp: 2, target_size: [320, 320], keep_ratio: False} + - Resize: {interp: 2, target_size: *eval_size, keep_ratio: False} - NormalizeImage: {is_scale: true, mean: [0.485,0.456,0.406], std: [0.229, 0.224,0.225]} - Permute: {} batch_transforms: @@ -29,13 +33,10 @@ EvalReader: TestReader: inputs_def: - image_shape: [1, 3, 320, 320] + image_shape: [1, 3, *eval_height, *eval_width] sample_transforms: - Decode: {} - - Resize: {interp: 2, target_size: [320, 320], keep_ratio: False} + - Resize: {interp: 2, target_size: *eval_size, keep_ratio: False} - NormalizeImage: {is_scale: true, mean: [0.485,0.456,0.406], std: [0.229, 0.224,0.225]} - Permute: {} - batch_transforms: - - PadBatch: {pad_to_stride: 32} batch_size: 1 - shuffle: false diff --git a/configs/picodet/legacy_model/_base_/picodet_416_reader.yml b/configs/picodet/legacy_model/_base_/picodet_416_reader.yml index 12070a4be22abe3d0cdb6593b41e1f98658efca2..59433c64534163a454ad7e5a07b71d011119913c 100644 --- a/configs/picodet/legacy_model/_base_/picodet_416_reader.yml +++ b/configs/picodet/legacy_model/_base_/picodet_416_reader.yml @@ -1,4 +1,8 @@ worker_num: 6 +eval_height: &eval_height 416 +eval_width: &eval_width 416 +eval_size: &eval_size [*eval_height, *eval_width] + TrainReader: sample_transforms: - Decode: {} @@ -18,7 +22,7 @@ TrainReader: EvalReader: sample_transforms: - Decode: {} - - Resize: {interp: 2, target_size: [416, 416], keep_ratio: False} + - Resize: {interp: 2, target_size: *eval_size, keep_ratio: False} - NormalizeImage: {is_scale: true, mean: [0.485,0.456,0.406], std: [0.229, 0.224,0.225]} - Permute: {} batch_transforms: @@ -29,13 +33,10 @@ EvalReader: TestReader: inputs_def: - image_shape: [1, 3, 416, 416] + image_shape: [1, 3, *eval_height, *eval_width] sample_transforms: - Decode: {} - - Resize: {interp: 2, target_size: [416, 416], keep_ratio: False} + - Resize: {interp: 2, target_size: *eval_size, keep_ratio: False} - NormalizeImage: {is_scale: true, mean: [0.485,0.456,0.406], std: [0.229, 0.224,0.225]} - Permute: {} - batch_transforms: - - PadBatch: {pad_to_stride: 32} batch_size: 1 - shuffle: false diff --git a/configs/picodet/legacy_model/_base_/picodet_640_reader.yml b/configs/picodet/legacy_model/_base_/picodet_640_reader.yml index a931f2a765855790f877e419a9cd46615c43be5e..60904fb6ba77c858a50f1e743e637961c38ccd1f 100644 --- a/configs/picodet/legacy_model/_base_/picodet_640_reader.yml +++ b/configs/picodet/legacy_model/_base_/picodet_640_reader.yml @@ -1,4 +1,8 @@ worker_num: 6 +eval_height: &eval_height 640 +eval_width: &eval_width 640 +eval_size: &eval_size [*eval_height, *eval_width] + TrainReader: sample_transforms: - Decode: {} @@ -18,7 +22,7 @@ TrainReader: EvalReader: sample_transforms: - Decode: {} - - Resize: {interp: 2, target_size: [640, 640], keep_ratio: False} + - Resize: {interp: 2, target_size: *eval_size, keep_ratio: False} - NormalizeImage: {is_scale: true, mean: [0.485,0.456,0.406], std: [0.229, 0.224,0.225]} - Permute: {} batch_transforms: @@ -29,13 +33,10 @@ EvalReader: TestReader: inputs_def: - image_shape: [1, 3, 640, 640] + image_shape: [1, 3, *eval_height, *eval_width] sample_transforms: - Decode: {} - - Resize: {interp: 2, target_size: [640, 640], keep_ratio: False} + - Resize: {interp: 2, target_size: *eval_size, keep_ratio: False} - NormalizeImage: {is_scale: true, mean: 
[0.485,0.456,0.406], std: [0.229, 0.224,0.225]}
 - Permute: {}
- batch_transforms:
- - PadBatch: {pad_to_stride: 32}
 batch_size: 1
- shuffle: false
diff --git a/configs/picodet/legacy_model/application/mainbody_detection/README.md b/configs/picodet/legacy_model/application/mainbody_detection/README.md
index dc75d9f3ebb860925ea6c981b129087e27f225a7..0408587e62a81dbd97ae9128f59497287da26f5f 100644
--- a/configs/picodet/legacy_model/application/mainbody_detection/README.md
+++ b/configs/picodet/legacy_model/application/mainbody_detection/README.md
@@ -20,7 +20,7 @@ PP-ShiTu图像识别任务中,训练主体检测模型时主要用到了以下
 | LogoDet-3k | 155k | 155k | Logo检测 | [地址](https://github.com/Wangjing1551/LogoDet-3K-Dataset) |
 | RPC | 54k | 54k | 商品检测 | [地址](https://rpc-dataset.github.io/) |

-在实际训练的过程中,将所有数据集混合在一起。由于是主体检测,这里将所有标注出的检测框对应的类别都修改为 `前景` 的类别,最终融合的数据集中只包含 1 个类别,即前景,数据集定义配置可以参考[mainbody_detection.yml](./mainbody_detection.yml)。
+在实际训练的过程中,将所有数据集混合在一起。由于是主体检测,这里将所有标注出的检测框对应的类别都修改为 `前景` 的类别,最终融合的数据集中只包含 1 个类别,即前景,数据集定义配置可以参考[picodet_lcnet_x2_5_640_mainbody.yml](./picodet_lcnet_x2_5_640_mainbody.yml)。

 ### 1.2 模型库

diff --git a/configs/pphuman/README.md b/configs/pphuman/README.md
new file mode 100644
index 0000000000000000000000000000000000000000..e583668ab7b722a4def52e579eaa121a980ea354
--- /dev/null
+++ b/configs/pphuman/README.md
@@ -0,0 +1,35 @@
+简体中文 | [English](README.md)
+
+# PP-YOLOE Human Detection Models
+
+The PaddleDetection team provides PP-YOLOE based pedestrian detection models that users can download and use directly.
+A CrowdHuman dataset already converted to COCO format is available at this [download link](https://bj.bcebos.com/v1/paddledet/data/crowdhuman.zip); it contains a single detection class, `pedestrian(1)`. The original dataset can be found at this [download link](http://www.crowdhuman.org/download.html).
+
+| Model | Dataset | mAP<sup>val<br>0.5:0.95</sup> | mAP<sup>val<br>0.5</sup> | Download | Config |
+|:---------|:-------:|:------:|:------:| :----: | :------:|
+|PP-YOLOE-s| CrowdHuman | 42.5 | 77.9 | [model](https://paddledet.bj.bcebos.com/models/ppyoloe_crn_s_36e_crowdhuman.pdparams) | [config](./ppyoloe_crn_s_36e_crowdhuman.yml) |
+|PP-YOLOE-l| CrowdHuman | 48.0 | 81.9 | [model](https://paddledet.bj.bcebos.com/models/ppyoloe_crn_l_36e_crowdhuman.pdparams) | [config](./ppyoloe_crn_l_36e_crowdhuman.yml) |
+
+
+**Notes:**
+- PP-YOLOE models are trained with mixed precision on 8 GPUs. If the **number of GPUs** or the **batch size** changes, adjust the learning rate according to the formula **lr<sub>new</sub> = lr<sub>default</sub> * (batch_size<sub>new</sub> * GPU_number<sub>new</sub>) / (batch_size<sub>default</sub> * GPU_number<sub>default</sub>)**.
+- For detailed usage, please refer to [ppyoloe](../ppyoloe#getting-start).
+
+
+# PP-YOLOE Cigarette Detection Model
+This PP-YOLOE based cigarette detection model is one building block of the detection-based action recognition solution in PP-Human; see the [PP-Human action recognition module](../../deploy/pipeline/docs/tutorials/pphuman_action.md) for how to use it for smoking recognition. The model detects a single class, cigarette. Due to restrictions on the data source, the training data cannot be released publicly for now. To improve accuracy, the model is initialized from weights trained on the small-object dataset VisDrone (see [visdrone](../visdrone)).
+
+| Model | Dataset | mAP<sup>val<br>0.5:0.95</sup> | mAP<sup>val<br>0.5</sup> | Download | Config |
+|:---------|:-------:|:------:|:------:| :----: | :------:|
+| PP-YOLOE-s | Cigarette (proprietary dataset) | 39.7 | 79.5 | [model](https://bj.bcebos.com/v1/paddledet/models/pipeline/ppyoloe_crn_s_80e_smoking_visdrone.pdparams) | [config](./ppyoloe_crn_s_80e_smoking_visdrone.yml) |
+
+
+## Citation
+```
+@article{shao2018crowdhuman,
+  title={CrowdHuman: A Benchmark for Detecting Human in a Crowd},
+  author={Shao, Shuai and Zhao, Zijian and Li, Boxun and Xiao, Tete and Yu, Gang and Zhang, Xiangyu and Sun, Jian},
+  journal={arXiv preprint arXiv:1805.00123},
+  year={2018}
+}
+```
diff --git a/configs/pphuman/ppyoloe_crn_l_36e_crowdhuman.yml b/configs/pphuman/ppyoloe_crn_l_36e_crowdhuman.yml
new file mode 100644
index 0000000000000000000000000000000000000000..445fefdc5c1a86c307a5c11b471df1aa95aafe7d
--- /dev/null
+++ b/configs/pphuman/ppyoloe_crn_l_36e_crowdhuman.yml
@@ -0,0 +1,55 @@
+_BASE_: [
+  '../datasets/coco_detection.yml',
+  '../runtime.yml',
+  '../ppyoloe/_base_/optimizer_300e.yml',
+  '../ppyoloe/_base_/ppyoloe_crn.yml',
+  '../ppyoloe/_base_/ppyoloe_reader.yml',
+]
+log_iter: 100
+snapshot_epoch: 4
+weights: output/ppyoloe_crn_l_36e_crowdhuman/model_final
+
+pretrain_weights: https://paddledet.bj.bcebos.com/models/ppyoloe_crn_l_300e_coco.pdparams
+depth_mult: 1.0
+width_mult: 1.0
+
+num_classes: 1
+TrainDataset:
+  !COCODataSet
+    image_dir: ""
+    anno_path: annotations/train.json
+    dataset_dir: dataset/crowdhuman
+    data_fields: ['image', 'gt_bbox', 'gt_class', 'is_crowd']
+
+EvalDataset:
+  !COCODataSet
+    image_dir: ""
+    anno_path: annotations/val.json
+    dataset_dir: dataset/crowdhuman
+
+TestDataset:
+  !ImageFolder
+    anno_path: annotations/val.json
+    dataset_dir: dataset/crowdhuman
+
+TrainReader:
+  batch_size: 8
+
+epoch: 36
+LearningRate:
+  base_lr: 0.001
+  schedulers:
+    - !CosineDecay
+      max_epochs: 43
+    - !LinearWarmup
+      start_factor: 0.
+      epochs: 1
+
+PPYOLOEHead:
+  static_assigner_epoch: -1
+  nms:
+    name: MultiClassNMS
+    nms_top_k: 1000
+    keep_top_k: 100
+    score_threshold: 0.01
+    nms_threshold: 0.6
diff --git a/configs/pphuman/ppyoloe_crn_s_36e_crowdhuman.yml b/configs/pphuman/ppyoloe_crn_s_36e_crowdhuman.yml
new file mode 100644
index 0000000000000000000000000000000000000000..7be5fe7e72e28c1d1fc9f1d517a95caa796fee76
--- /dev/null
+++ b/configs/pphuman/ppyoloe_crn_s_36e_crowdhuman.yml
@@ -0,0 +1,55 @@
+_BASE_: [
+  '../datasets/coco_detection.yml',
+  '../runtime.yml',
+  '../ppyoloe/_base_/optimizer_300e.yml',
+  '../ppyoloe/_base_/ppyoloe_crn.yml',
+  '../ppyoloe/_base_/ppyoloe_reader.yml',
+]
+log_iter: 100
+snapshot_epoch: 4
+weights: output/ppyoloe_crn_s_36e_crowdhuman/model_final
+
+pretrain_weights: https://paddledet.bj.bcebos.com/models/ppyoloe_crn_s_300e_coco.pdparams
+depth_mult: 0.33
+width_mult: 0.50
+
+num_classes: 1
+TrainDataset:
+  !COCODataSet
+    image_dir: ""
+    anno_path: annotations/train.json
+    dataset_dir: dataset/crowdhuman
+    data_fields: ['image', 'gt_bbox', 'gt_class', 'is_crowd']
+
+EvalDataset:
+  !COCODataSet
+    image_dir: ""
+    anno_path: annotations/val.json
+    dataset_dir: dataset/crowdhuman
+
+TestDataset:
+  !ImageFolder
+    anno_path: annotations/val.json
+    dataset_dir: dataset/crowdhuman
+
+TrainReader:
+  batch_size: 8
+
+epoch: 36
+LearningRate:
+  base_lr: 0.001
+  schedulers:
+    - !CosineDecay
+      max_epochs: 43
+    - !LinearWarmup
+      start_factor: 0.
+      epochs: 1
+
+PPYOLOEHead:
+  static_assigner_epoch: -1
+  nms:
+    name: MultiClassNMS
+    nms_top_k: 1000
+    keep_top_k: 100
+    score_threshold: 0.01
+    nms_threshold: 0.6
diff --git a/configs/pphuman/ppyoloe_crn_s_80e_smoking_visdrone.yml b/configs/pphuman/ppyoloe_crn_s_80e_smoking_visdrone.yml
new file mode 100644
index 0000000000000000000000000000000000000000..40a731d4dece54b02948c58e9bbaef60d1d6d9ce
--- /dev/null
+++ b/configs/pphuman/ppyoloe_crn_s_80e_smoking_visdrone.yml
@@ -0,0 +1,51 @@
+_BASE_: [
+  '../runtime.yml',
+  '../ppyoloe/_base_/optimizer_300e.yml',
+  '../ppyoloe/_base_/ppyoloe_crn.yml',
+  '../ppyoloe/_base_/ppyoloe_reader.yml',
+]
+
+log_iter: 100
+snapshot_epoch: 10
+weights: output/ppyoloe_crn_s_80e_smoking_visdrone/model_final
+
+pretrain_weights: https://paddledet.bj.bcebos.com/models/ppyoloe_crn_s_80e_visdrone.pdparams
+depth_mult: 0.33
+width_mult: 0.50
+
+TrainReader:
+  batch_size: 16
+
+epoch: 80
+LearningRate:
+  base_lr: 0.01
+  schedulers:
+    - !CosineDecay
+      max_epochs: 80
+    - !LinearWarmup
+      start_factor: 0.
+      epochs: 1
+
+PPYOLOEHead:
+  static_assigner_epoch: -1
+
+metric: COCO
+num_classes: 1
+
+TrainDataset:
+  !COCODataSet
+    image_dir: ""
+    anno_path: smoking_train_cocoformat.json
+    dataset_dir: dataset/smoking
+    data_fields: ['image', 'gt_bbox', 'gt_class', 'is_crowd']
+
+EvalDataset:
+  !COCODataSet
+    image_dir: ""
+    anno_path: smoking_test_cocoformat.json
+    dataset_dir: dataset/smoking
+
+TestDataset:
+  !ImageFolder
+    anno_path: smoking_test_cocoformat.json
+    dataset_dir: dataset/smoking
diff --git a/configs/ppvehicle/README.md b/configs/ppvehicle/README.md
new file mode 100644
index 0000000000000000000000000000000000000000..5559009056a56b727b516ea942c41f67929f4be4
--- /dev/null
+++ b/configs/ppvehicle/README.md
@@ -0,0 +1,56 @@
+简体中文 | [English](README.md)
+
+# PP-YOLOE Vehicle Detection Models
+
+The PaddleDetection team provides PP-YOLOE based detection models for autonomous-driving scenarios, covering five datasets (BDD100K-DET, BDD100K-MOT, UA-DETRAC, PPVehicle9cls, PPVehicle). The first three are public datasets; the last two are merged datasets.
+- BDD100K-DET has 10 classes: `pedestrian(1), rider(2), car(3), truck(4), bus(5), train(6), motorcycle(7), bicycle(8), traffic light(9), traffic sign(10)`.
+- BDD100K-MOT has 8 classes: `pedestrian(1), rider(2), car(3), truck(4), bus(5), train(6), motorcycle(7), bicycle(8)`, but the dataset is larger than BDD100K-DET.
+- UA-DETRAC has 4 classes: `car(1), bus(2), van(3), others(4)`.
+- PPVehicle9cls merges BDD100K-MOT and UA-DETRAC into 9 classes: `pedestrian(1), rider(2), car(3), truck(4), bus(5), van(6), motorcycle(7), bicycle(8), others(9)`.
+- PPVehicle merges BDD100K-MOT and UA-DETRAC after collapsing `car, truck, bus, van` from BDD100K-MOT and `car, bus, van` from UA-DETRAC into a single class, `vehicle(1)`.
+
+
+| Model | Dataset | Classes | mAP<sup>val<br>0.5:0.95</sup> | Download | Config |
+|:---------|:---------------:|:------:|:-----------------------:|:---------:| :-----: |
+|PP-YOLOE-l| BDD100K-DET | 10 | 35.6 | [model](https://paddledet.bj.bcebos.com/models/ppyoloe_crn_l_36e_bdd100kdet.pdparams) | [config](./ppyoloe_crn_l_36e_bdd100kdet.yml) |
+|PP-YOLOE-l| BDD100K-MOT | 8 | 33.7 | [model](https://paddledet.bj.bcebos.com/models/ppyoloe_crn_l_36e_bdd100kmot.pdparams) | [config](./ppyoloe_crn_l_36e_bdd100kmot.yml) |
+|PP-YOLOE-l| UA-DETRAC | 4 | 51.4 | [model](https://paddledet.bj.bcebos.com/models/ppyoloe_crn_l_36e_uadetrac.pdparams) | [config](./ppyoloe_crn_l_36e_uadetrac.yml) |
+|PP-YOLOE-l| PPVehicle9cls | 9 | 40.0 | [model](https://paddledet.bj.bcebos.com/models/mot_ppyoloe_l_36e_ppvehicle9cls.pdparams) | [config](./mot_ppyoloe_l_36e_ppvehicle9cls.yml) |
+|PP-YOLOE-s| PPVehicle9cls | 9 | 35.3 | [model](https://paddledet.bj.bcebos.com/models/mot_ppyoloe_s_36e_ppvehicle9cls.pdparams) | [config](./mot_ppyoloe_s_36e_ppvehicle9cls.yml) |
+|PP-YOLOE-l| PPVehicle | 1 | 63.9 | [model](https://paddledet.bj.bcebos.com/models/mot_ppyoloe_l_36e_ppvehicle.pdparams) | [config](./mot_ppyoloe_l_36e_ppvehicle.yml) |
+|PP-YOLOE-s| PPVehicle | 1 | 61.3 | [model](https://paddledet.bj.bcebos.com/models/mot_ppyoloe_s_36e_ppvehicle.pdparams) | [config](./mot_ppyoloe_s_36e_ppvehicle.yml) |
+
+**Notes:**
+- PP-YOLOE models are trained with mixed precision on 8 GPUs. If the **number of GPUs** or the **batch size** changes, adjust the learning rate according to the formula **lr<sub>new</sub> = lr<sub>default</sub> * (batch_size<sub>new</sub> * GPU_number<sub>new</sub>) / (batch_size<sub>default</sub> * GPU_number<sub>default</sub>)**.
+- For detailed usage, please refer to [ppyoloe](../ppyoloe#getting-start).
+- To get predictions with the corresponding class names, create or edit a `label_list.txt` file (one class name per line); `anno_path` in `TestDataset` may also be an absolute path, e.g.:
+```
+TestDataset:
+  !ImageFolder
+    anno_path: label_list.txt # without dataset_dir, anno_path is relative to the PaddleDetection root directory
+    # dataset_dir: dataset/ppvehicle # with dataset_dir, dataset_dir/anno_path is used as the new anno_path
+```
+Each line of label_list.txt records one class, for example:
+```
+vehicle
+```
+
+## Citation
+```
+@InProceedings{bdd100k,
+    author = {Yu, Fisher and Chen, Haofeng and Wang, Xin and Xian, Wenqi and Chen,
+              Yingying and Liu, Fangchen and Madhavan, Vashisht and Darrell, Trevor},
+    title = {BDD100K: A Diverse Driving Dataset for Heterogeneous Multitask Learning},
+    booktitle = {IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
+    month = {June},
+    year = {2020}
+}
+
+@article{CVIU_UA-DETRAC,
+  author = {Longyin Wen and Dawei Du and Zhaowei Cai and Zhen Lei and Ming{-}Ching Chang and
+            Honggang Qi and Jongwoo Lim and Ming{-}Hsuan Yang and Siwei Lyu},
+  title = {{UA-DETRAC:} {A} New Benchmark and Protocol for Multi-Object Detection and Tracking},
+  journal = {Computer Vision and Image Understanding},
+  year = {2020}
+}
+```
diff --git a/configs/ppvehicle/mot_ppyoloe_l_36e_ppvehicle.yml b/configs/ppvehicle/mot_ppyoloe_l_36e_ppvehicle.yml
new file mode 100644
index 0000000000000000000000000000000000000000..61df2fcc4b55820d6dca9e4f57ecc1fc02484777
--- /dev/null
+++ b/configs/ppvehicle/mot_ppyoloe_l_36e_ppvehicle.yml
@@ -0,0 +1,57 @@
+_BASE_: [
+  '../datasets/coco_detection.yml',
+  '../runtime.yml',
+  '../ppyoloe/_base_/optimizer_300e.yml',
+  '../ppyoloe/_base_/ppyoloe_crn.yml',
+  '../ppyoloe/_base_/ppyoloe_reader.yml',
+]
+log_iter: 100
+snapshot_epoch: 4
+weights: output/mot_ppyoloe_l_36e_ppvehicle/model_final
+
+pretrain_weights: https://paddledet.bj.bcebos.com/models/ppyoloe_crn_l_300e_coco.pdparams
+depth_mult: 1.0
+width_mult: 1.0
+
+num_classes: 1
+
+TrainDataset:
+  !COCODataSet
+    image_dir: ""
+    anno_path: annotations/train_all.json
+    dataset_dir: dataset/ppvehicle
+    data_fields:
['image', 'gt_bbox', 'gt_class', 'is_crowd'] + allow_empty: true + +EvalDataset: + !COCODataSet + image_dir: "" + anno_path: annotations/val_all.json + dataset_dir: dataset/ppvehicle + +TestDataset: + !ImageFolder + anno_path: annotations/val_all.json + dataset_dir: dataset/ppvehicle + +TrainReader: + batch_size: 8 + +epoch: 36 +LearningRate: + base_lr: 0.001 + schedulers: + - !CosineDecay + max_epochs: 43 + - !LinearWarmup + start_factor: 0. + epochs: 1 + +PPYOLOEHead: + static_assigner_epoch: -1 + nms: + name: MultiClassNMS + nms_top_k: 1000 + keep_top_k: 100 + score_threshold: 0.01 + nms_threshold: 0.6 diff --git a/configs/ppvehicle/mot_ppyoloe_l_36e_ppvehicle9cls.yml b/configs/ppvehicle/mot_ppyoloe_l_36e_ppvehicle9cls.yml new file mode 100644 index 0000000000000000000000000000000000000000..4cd73b7e244a47fcdb3f64663df8995e1dde3e55 --- /dev/null +++ b/configs/ppvehicle/mot_ppyoloe_l_36e_ppvehicle9cls.yml @@ -0,0 +1,56 @@ +_BASE_: [ + '../datasets/coco_detection.yml', + '../runtime.yml', + '../ppyoloe/_base_/optimizer_300e.yml', + '../ppyoloe/_base_/ppyoloe_crn.yml', + '../ppyoloe/_base_/ppyoloe_reader.yml', +] +log_iter: 100 +snapshot_epoch: 4 +weights: output/mot_ppyoloe_l_36e_ppvehicle9cls/model_final + +pretrain_weights: https://paddledet.bj.bcebos.com/models/ppyoloe_crn_l_300e_coco.pdparams +depth_mult: 1.0 +width_mult: 1.0 + +num_classes: 9 + +TrainDataset: + !COCODataSet + image_dir: "" + anno_path: annotations/train_all_9cls.json + dataset_dir: dataset/ppvehicle + data_fields: ['image', 'gt_bbox', 'gt_class', 'is_crowd'] + +EvalDataset: + !COCODataSet + image_dir: "" + anno_path: annotations/val_all_9cls.json + dataset_dir: dataset/ppvehicle + +TestDataset: + !ImageFolder + anno_path: annotations/val_all_9cls.json + dataset_dir: dataset/ppvehicle + +TrainReader: + batch_size: 8 + +epoch: 36 +LearningRate: + base_lr: 0.001 + schedulers: + - !CosineDecay + max_epochs: 43 + - !LinearWarmup + start_factor: 0. + epochs: 1 + +PPYOLOEHead: + static_assigner_epoch: -1 + nms: + name: MultiClassNMS + nms_top_k: 1000 + keep_top_k: 100 + score_threshold: 0.01 + nms_threshold: 0.6 diff --git a/configs/ppvehicle/mot_ppyoloe_s_36e_ppvehicle.yml b/configs/ppvehicle/mot_ppyoloe_s_36e_ppvehicle.yml new file mode 100644 index 0000000000000000000000000000000000000000..f4f384584c12ae0eff897ebc0fb7f233463ea708 --- /dev/null +++ b/configs/ppvehicle/mot_ppyoloe_s_36e_ppvehicle.yml @@ -0,0 +1,57 @@ +_BASE_: [ + '../datasets/coco_detection.yml', + '../runtime.yml', + '../ppyoloe/_base_/optimizer_300e.yml', + '../ppyoloe/_base_/ppyoloe_crn.yml', + '../ppyoloe/_base_/ppyoloe_reader.yml', +] +log_iter: 100 +snapshot_epoch: 4 +weights: output/mot_ppyoloe_s_36e_ppvehicle/model_final + +pretrain_weights: https://paddledet.bj.bcebos.com/models/ppyoloe_crn_s_300e_coco.pdparams +depth_mult: 0.33 +width_mult: 0.50 + +num_classes: 1 + +TrainDataset: + !COCODataSet + image_dir: "" + anno_path: annotations/train_all.json + dataset_dir: dataset/ppvehicle + data_fields: ['image', 'gt_bbox', 'gt_class', 'is_crowd'] + allow_empty: true + +EvalDataset: + !COCODataSet + image_dir: "" + anno_path: annotations/val_all.json + dataset_dir: dataset/ppvehicle + +TestDataset: + !ImageFolder + anno_path: annotations/val_all.json + dataset_dir: dataset/ppvehicle + +TrainReader: + batch_size: 8 + +epoch: 36 +LearningRate: + base_lr: 0.001 + schedulers: + - !CosineDecay + max_epochs: 43 + - !LinearWarmup + start_factor: 0. 
+ epochs: 1 + +PPYOLOEHead: + static_assigner_epoch: -1 + nms: + name: MultiClassNMS + nms_top_k: 1000 + keep_top_k: 100 + score_threshold: 0.01 + nms_threshold: 0.6 diff --git a/configs/ppvehicle/mot_ppyoloe_s_36e_ppvehicle9cls.yml b/configs/ppvehicle/mot_ppyoloe_s_36e_ppvehicle9cls.yml new file mode 100644 index 0000000000000000000000000000000000000000..653ff1a75822f965bfb0a8134f5fa78a309d52b9 --- /dev/null +++ b/configs/ppvehicle/mot_ppyoloe_s_36e_ppvehicle9cls.yml @@ -0,0 +1,56 @@ +_BASE_: [ + '../datasets/coco_detection.yml', + '../runtime.yml', + '../ppyoloe/_base_/optimizer_300e.yml', + '../ppyoloe/_base_/ppyoloe_crn.yml', + '../ppyoloe/_base_/ppyoloe_reader.yml', +] +log_iter: 100 +snapshot_epoch: 4 +weights: output/mot_ppyoloe_s_36e_ppvehicle9cls/model_final + +pretrain_weights: https://paddledet.bj.bcebos.com/models/ppyoloe_crn_s_300e_coco.pdparams +depth_mult: 0.33 +width_mult: 0.50 + +num_classes: 9 + +TrainDataset: + !COCODataSet + image_dir: "" + anno_path: annotations/train_all_9cls.json + dataset_dir: dataset/ppvehicle + data_fields: ['image', 'gt_bbox', 'gt_class', 'is_crowd'] + +EvalDataset: + !COCODataSet + image_dir: "" + anno_path: annotations/val_all_9cls.json + dataset_dir: dataset/ppvehicle + +TestDataset: + !ImageFolder + anno_path: annotations/val_all_9cls.json + dataset_dir: dataset/ppvehicle + +TrainReader: + batch_size: 8 + +epoch: 36 +LearningRate: + base_lr: 0.001 + schedulers: + - !CosineDecay + max_epochs: 43 + - !LinearWarmup + start_factor: 0. + epochs: 1 + +PPYOLOEHead: + static_assigner_epoch: -1 + nms: + name: MultiClassNMS + nms_top_k: 1000 + keep_top_k: 100 + score_threshold: 0.01 + nms_threshold: 0.6 diff --git a/configs/ppvehicle/ppyoloe_crn_l_36e_bdd100kdet.yml b/configs/ppvehicle/ppyoloe_crn_l_36e_bdd100kdet.yml new file mode 100644 index 0000000000000000000000000000000000000000..921d8b33f17a2a6850cf292769bf51b00a7b1d92 --- /dev/null +++ b/configs/ppvehicle/ppyoloe_crn_l_36e_bdd100kdet.yml @@ -0,0 +1,56 @@ +_BASE_: [ + '../datasets/coco_detection.yml', + '../runtime.yml', + '../ppyoloe/_base_/optimizer_300e.yml', + '../ppyoloe/_base_/ppyoloe_crn.yml', + '../ppyoloe/_base_/ppyoloe_reader.yml', +] +log_iter: 100 +snapshot_epoch: 4 +weights: output/ppyoloe_crn_l_36e_bdd100kdet/model_final + +pretrain_weights: https://paddledet.bj.bcebos.com/models/ppyoloe_crn_l_300e_coco.pdparams +depth_mult: 1.0 +width_mult: 1.0 + +num_classes: 10 + +TrainDataset: + !COCODataSet + image_dir: images/100k/train + anno_path: labels/det_20/det_train_cocofmt.json + dataset_dir: dataset/bdd100k + data_fields: ['image', 'gt_bbox', 'gt_class', 'is_crowd'] + +EvalDataset: + !COCODataSet + image_dir: images/100k/val + anno_path: labels/det_20/det_val_cocofmt.json + dataset_dir: dataset/bdd100k + +TestDataset: + !ImageFolder + anno_path: labels/det_20/det_val_cocofmt.json + dataset_dir: dataset/bdd100k + +TrainReader: + batch_size: 8 + +epoch: 36 +LearningRate: + base_lr: 0.001 + schedulers: + - !CosineDecay + max_epochs: 43 + - !LinearWarmup + start_factor: 0. 
+ epochs: 1 + +PPYOLOEHead: + static_assigner_epoch: -1 + nms: + name: MultiClassNMS + nms_top_k: 1000 + keep_top_k: 100 + score_threshold: 0.01 + nms_threshold: 0.6 diff --git a/configs/ppvehicle/ppyoloe_crn_l_36e_bdd100kmot.yml b/configs/ppvehicle/ppyoloe_crn_l_36e_bdd100kmot.yml new file mode 100644 index 0000000000000000000000000000000000000000..b9d32be10d6cb415f22257bd778aab412420fa8a --- /dev/null +++ b/configs/ppvehicle/ppyoloe_crn_l_36e_bdd100kmot.yml @@ -0,0 +1,56 @@ +_BASE_: [ + '../datasets/coco_detection.yml', + '../runtime.yml', + '../ppyoloe/_base_/optimizer_300e.yml', + '../ppyoloe/_base_/ppyoloe_crn.yml', + '../ppyoloe/_base_/ppyoloe_reader.yml', +] +log_iter: 100 +snapshot_epoch: 4 +weights: output/ppyoloe_crn_l_36e_bdd100kmot/model_final + +pretrain_weights: https://paddledet.bj.bcebos.com/models/ppyoloe_crn_l_300e_coco.pdparams +depth_mult: 1.0 +width_mult: 1.0 + +num_classes: 8 + +TrainDataset: + !COCODataSet + image_dir: "" + anno_path: annotations/train.json + dataset_dir: dataset/bdd100k + data_fields: ['image', 'gt_bbox', 'gt_class', 'is_crowd'] + +EvalDataset: + !COCODataSet + image_dir: "" + anno_path: annotations/val.json + dataset_dir: dataset/bdd100k + +TestDataset: + !ImageFolder + anno_path: annotations/val.json + dataset_dir: dataset/bdd100k + +TrainReader: + batch_size: 8 + +epoch: 36 +LearningRate: + base_lr: 0.001 + schedulers: + - !CosineDecay + max_epochs: 43 + - !LinearWarmup + start_factor: 0. + epochs: 1 + +PPYOLOEHead: + static_assigner_epoch: -1 + nms: + name: MultiClassNMS + nms_top_k: 1000 + keep_top_k: 100 + score_threshold: 0.01 + nms_threshold: 0.6 diff --git a/configs/ppvehicle/ppyoloe_crn_l_36e_uadetrac.yml b/configs/ppvehicle/ppyoloe_crn_l_36e_uadetrac.yml new file mode 100644 index 0000000000000000000000000000000000000000..5f3dd59cd9ef9d0c5e2947608f264187f433983c --- /dev/null +++ b/configs/ppvehicle/ppyoloe_crn_l_36e_uadetrac.yml @@ -0,0 +1,56 @@ +_BASE_: [ + '../datasets/coco_detection.yml', + '../runtime.yml', + '../ppyoloe/_base_/optimizer_300e.yml', + '../ppyoloe/_base_/ppyoloe_crn.yml', + '../ppyoloe/_base_/ppyoloe_reader.yml', +] +log_iter: 100 +snapshot_epoch: 4 +weights: output/ppyoloe_crn_l_36e_uadetrac/model_final + +pretrain_weights: https://paddledet.bj.bcebos.com/models/ppyoloe_crn_l_300e_coco.pdparams +depth_mult: 1.0 +width_mult: 1.0 + +num_classes: 4 + +TrainDataset: + !COCODataSet + image_dir: train + anno_path: annotations/train.json + dataset_dir: dataset/uadetrac + data_fields: ['image', 'gt_bbox', 'gt_class', 'is_crowd'] + +EvalDataset: + !COCODataSet + image_dir: val + anno_path: annotations/test.json + dataset_dir: dataset/uadetrac + +TestDataset: + !ImageFolder + anno_path: annotations/test.json + dataset_dir: dataset/uadetrac + +TrainReader: + batch_size: 8 + +epoch: 36 +LearningRate: + base_lr: 0.001 + schedulers: + - !CosineDecay + max_epochs: 43 + - !LinearWarmup + start_factor: 0. + epochs: 1 + +PPYOLOEHead: + static_assigner_epoch: -1 + nms: + name: MultiClassNMS + nms_top_k: 1000 + keep_top_k: 100 + score_threshold: 0.01 + nms_threshold: 0.6 diff --git a/configs/ppyoloe/README.md b/configs/ppyoloe/README.md index c7d4df8131d905779cc1b85c4e759a0321f9c61a..9458420fc1a7b81207bd1b0397176218b84b378a 100644 --- a/configs/ppyoloe/README.md +++ b/configs/ppyoloe/README.md @@ -9,39 +9,64 @@ English | [简体中文](README_cn.md) - [Appendix](#Appendix) ## Introduction -PP-YOLOE is an excellent single-stage anchor-free model based on PP-YOLOv2, surpassing a variety of popular yolo models. 
PP-YOLOE has a series of models, named s/m/l/x, which are configured through width multiplier and depth multiplier. PP-YOLOE avoids using special operators, such as deformable convolution or matrix nms, to be deployed friendly on various hardware. For more details, please refer to our report.
+PP-YOLOE is an excellent single-stage anchor-free model based on PP-YOLOv2, surpassing a variety of popular YOLO models. PP-YOLOE has a series of models, named s/m/l/x, which are configured through width multiplier and depth multiplier. PP-YOLOE avoids using special operators, such as Deformable Convolution or Matrix NMS, so that it can be deployed easily on various hardware. For more details, please refer to our [report](https://arxiv.org/abs/2203.16250).
-PP-YOLOE-l achieves 51.4 mAP on COCO test-dev2017 dataset with 78.1 FPS on Tesla V100. While using TensorRT FP16, PP-YOLOE-l can be further accelerated to 149.2 FPS. PP-YOLOE-s/m/x also have excellent accuracy and speed performance, which can be found in [Model Zoo](#Model-Zoo)
+PP-YOLOE-l achieves 51.6 mAP on COCO test-dev2017 dataset with 78.1 FPS on Tesla V100. While using TensorRT FP16, PP-YOLOE-l can be further accelerated to 149.2 FPS. PP-YOLOE-s/m/x also have excellent accuracy and speed performance, which can be found in [Model Zoo](#Model-Zoo)

 PP-YOLOE is composed of the following methods:
 - Scalable backbone and neck
 - [Task Alignment Learning](https://arxiv.org/abs/2108.07755)
 - Efficient Task-aligned head with [DFL](https://arxiv.org/abs/2006.04388) and [VFL](https://arxiv.org/abs/2008.13367)
-- [SiLU activation function](https://arxiv.org/abs/1710.05941)
+- [SiLU(Swish) activation function](https://arxiv.org/abs/1710.05941)

 ## Model Zoo

-| Model | GPU number | images/GPU | backbone | input shape | Box AP<sup>val</sup> | Box AP<sup>test</sup> | V100 FP32(FPS) | V100 TensorRT FP16(FPS) | download | config |
-|:------------------------:|:-------:|:-------------:|:----------:| :-------:| :------------------: | :-------------------: | :------------: | :---------------------: | :------: | :------: |
-| PP-YOLOE-s | 8 | 32 | cspresnet-s | 640 | 42.7 | 43.1 | 208.3 | 333.3 | [model](https://paddledet.bj.bcebos.com/models/ppyoloe_crn_s_300e_coco.pdparams) | [config](https://github.com/PaddlePaddle/PaddleDetection/tree/develop/configs/ppyoloe/ppyoloe_crn_s_300e_coco.yml) |
-| PP-YOLOE-m | 8 | 32 | cspresnet-m | 640 | 48.6 | 48.9 | 123.4 | 208.3 | [model](https://paddledet.bj.bcebos.com/models/ppyoloe_crn_m_300e_coco.pdparams) | [config](https://github.com/PaddlePaddle/PaddleDetection/tree/develop/configs/ppyoloe/ppyoloe_crn_m_300e_coco.yml) |
-| PP-YOLOE-l | 8 | 24 | cspresnet-l | 640 | 50.9 | 51.4 | 78.1 | 149.2 | [model](https://paddledet.bj.bcebos.com/models/ppyoloe_crn_l_300e_coco.pdparams) | [config](https://github.com/PaddlePaddle/PaddleDetection/tree/develop/configs/ppyoloe/ppyoloe_crn_l_300e_coco.yml) |
-| PP-YOLOE-x | 8 | 16 | cspresnet-x | 640 | 51.9 | 52.2 | 45.0 | 95.2 | [model](https://paddledet.bj.bcebos.com/models/ppyoloe_crn_x_300e_coco.pdparams) | [config](https://github.com/PaddlePaddle/PaddleDetection/tree/develop/configs/ppyoloe/ppyoloe_crn_x_300e_coco.yml) |
+| Model | Epoch | GPU number | images/GPU | backbone | input shape | Box AP<sup>val<br>0.5:0.95</sup> | Box AP<sup>test<br>0.5:0.95</sup> | Params(M) | FLOPs(G) | V100 FP32(FPS) | V100 TensorRT FP16(FPS) | download | config |
+|:------------------------:|:-------:|:-------:|:--------:|:----------:| :-------:| :------------------: | :-------------------: |:---------:|:--------:|:---------------:| :---------------------: | :------: | :------: |
+| PP-YOLOE-s | 400 | 8 | 32 | cspresnet-s | 640 | 43.4 | 43.6 | 7.93 | 17.36 | 208.3 | 333.3 | [model](https://paddledet.bj.bcebos.com/models/ppyoloe_crn_s_400e_coco.pdparams) | [config](./ppyoloe_crn_s_400e_coco.yml) |
+| PP-YOLOE-s | 300 | 8 | 32 | cspresnet-s | 640 | 43.0 | 43.2 | 7.93 | 17.36 | 208.3 | 333.3 | [model](https://paddledet.bj.bcebos.com/models/ppyoloe_crn_s_300e_coco.pdparams) | [config](./ppyoloe_crn_s_300e_coco.yml) |
+| PP-YOLOE-m | 300 | 8 | 28 | cspresnet-m | 640 | 49.0 | 49.1 | 23.43 | 49.91 | 123.4 | 208.3 | [model](https://paddledet.bj.bcebos.com/models/ppyoloe_crn_m_300e_coco.pdparams) | [config](./ppyoloe_crn_m_300e_coco.yml) |
+| PP-YOLOE-l | 300 | 8 | 20 | cspresnet-l | 640 | 51.4 | 51.6 | 52.20 | 110.07 | 78.1 | 149.2 | [model](https://paddledet.bj.bcebos.com/models/ppyoloe_crn_l_300e_coco.pdparams) | [config](./ppyoloe_crn_l_300e_coco.yml) |
+| PP-YOLOE-x | 300 | 8 | 16 | cspresnet-x | 640 | 52.3 | 52.4 | 98.42 | 206.59 | 45.0 | 95.2 | [model](https://paddledet.bj.bcebos.com/models/ppyoloe_crn_x_300e_coco.pdparams) | [config](./ppyoloe_crn_x_300e_coco.yml) |
+
+
+### Comprehensive Metrics
+| Model | Epoch | AP<sup>0.5:0.95</sup> | AP<sup>0.5</sup> | AP<sup>0.75</sup> | AP<sup>small</sup> | AP<sup>medium</sup> | AP<sup>large</sup> | AR<sup>small</sup> | AR<sup>medium</sup> | AR<sup>large</sup> | download | config |
+|:----------------------:|:-----:|:---------------:|:----------:|:-------------:| :------------:| :-----------: | :----------: |:------------:|:-------------:|:------------:| :-----: | :-----: |
+| PP-YOLOE-s | 400 | 43.4 | 60.0 | 47.5 | 25.7 | 47.8 | 59.2 | 43.9 | 70.8 | 81.9 | [model](https://paddledet.bj.bcebos.com/models/ppyoloe_crn_s_400e_coco.pdparams) | [config](./ppyoloe_crn_s_400e_coco.yml)|
+| PP-YOLOE-s | 300 | 43.0 | 59.6 | 47.2 | 26.0 | 47.4 | 58.7 | 45.1 | 70.6 | 81.4 | [model](https://paddledet.bj.bcebos.com/models/ppyoloe_crn_s_300e_coco.pdparams) | [config](./ppyoloe_crn_s_300e_coco.yml)|
+| PP-YOLOE-m | 300 | 49.0 | 65.9 | 53.8 | 30.9 | 53.5 | 65.3 | 50.9 | 74.4 | 84.7 | [model](https://paddledet.bj.bcebos.com/models/ppyoloe_crn_m_300e_coco.pdparams) | [config](./ppyoloe_crn_m_300e_coco.yml)|
+| PP-YOLOE-l | 300 | 51.4 | 68.6 | 56.2 | 34.8 | 56.1 | 68.0 | 53.1 | 76.8 | 85.6 | [model](https://paddledet.bj.bcebos.com/models/ppyoloe_crn_l_300e_coco.pdparams) | [config](./ppyoloe_crn_l_300e_coco.yml)|
+| PP-YOLOE-x | 300 | 52.3 | 69.5 | 56.8 | 35.1 | 57.0 | 68.6 | 55.5 | 76.9 | 85.7 | [model](https://paddledet.bj.bcebos.com/models/ppyoloe_crn_x_300e_coco.pdparams) | [config](./ppyoloe_crn_x_300e_coco.yml)|
+
 **Notes:**
-- PP-YOLOE is trained on COCO train2017 dataset and evaluated on val2017 & test-dev2017 dataset，Box AP<sup>test</sup> is evaluation results of `mAP(IoU=0.5:0.95)`.
-- PP-YOLOE used 8 GPUs for mixed precision training, if GPU number and mini-batch size is changed, learning rate and iteration times should be adjusted according [FAQ](https://github.com/PaddlePaddle/PaddleDetection/blob/develop/docs/tutorials/FAQ).
-- PP-YOLOE inference speed is tesed on single Tesla V100 with batch size as 1, CUDA 10.2, CUDNN 7.6.5, TensorRT 6.0.1.8 in TensorRT mode.
-- PP-YOLOE inference speed testing uses inference model exported by `tools/export_model.py` with `-o exclude_nms=True` and benchmarked by running `depoly/python/infer.py` with `--run_benchmark`. All testing results do not contains the time cost of data reading and post-processing(NMS), which is same as [YOLOv4(AlexyAB)](https://github.com/AlexeyAB/darknet) in testing method.
+- PP-YOLOE is trained on COCO train2017 dataset and evaluated on val2017 & test-dev2017 dataset.
+- The model weights in the Comprehensive Metrics table are **the same as** those in the Model Zoo above, and are evaluated on **val2017**.
+- PP-YOLOE used 8 GPUs for mixed precision training; if the **GPU number** or **mini-batch size** is changed, the **learning rate** should be adjusted according to the formula **lr<sub>new</sub> = lr<sub>default</sub> * (batch_size<sub>new</sub> * GPU_number<sub>new</sub>) / (batch_size<sub>default</sub> * GPU_number<sub>default</sub>)**.
+- PP-YOLOE inference speed is tested on a single Tesla V100 with batch size 1, **CUDA 10.2**, **CUDNN 7.6.5**, **TensorRT 6.0.1.8** in TensorRT mode.
+- Refer to [Speed testing](#Speed-testing) to reproduce the speed testing results of PP-YOLOE.
- If you set `--run_benchmark=True`, you should install these dependencies first: `pip install pynvml psutil GPUtil`.
+
+### Feature Models
+
+The PaddleDetection team provides configs and weights of various feature detection models based on PP-YOLOE, which users can download for use:
+
+|Scenarios | Related Datasets | Links|
+| :--------: | :---------: | :------: |
+|Pedestrian Detection | CrowdHuman | [pphuman](../pphuman) |
+|Vehicle Detection | BDD100K, UA-DETRAC | [ppvehicle](../ppvehicle) |
+|Small Object Detection | VisDrone | [visdrone](../visdrone) |
+
+
 ## Getting Started

-### 1. Training
+### Training

 Training PP-YOLOE with mixed precision on 8 GPUs with the following command

@@ -49,9 +74,12 @@ python -m paddle.distributed.launch --gpus 0,1,2,3,4,5,6,7 tools/train.py -c configs/ppyoloe/ppyoloe_crn_l_300e_coco.yml --amp
 ```

-** Notes: ** use `--amp` to train with default config to avoid out of memeory.
+**Notes:**
+- Use `--amp` to train with the default config to avoid running out of memory.
+- PaddleDetection supports multi-machine distributed training; you can refer to the [DistributedTraining tutorial](../../docs/DistributedTraining_en.md).
+

-### 2. Evaluation
+### Evaluation

 Evaluating PP-YOLOE on COCO val2017 dataset on a single GPU with the following commands:

@@ -61,7 +89,7 @@ CUDA_VISIBLE_DEVICES=0 python tools/eval.py -c configs/ppyoloe/ppyoloe_crn_l_300

 For evaluation on COCO test-dev2017 dataset, please download COCO test-dev2017 dataset from [COCO dataset download](https://cocodataset.org/#download) and decompress to COCO dataset directory and configure `EvalDataset` like `configs/ppyolo/ppyolo_test.yml`.

-### 3. Inference
+### Inference

 Run inference on images on a single GPU with the following commands: use `--infer_img` to infer a single image and `--infer_dir` to infer all images in the directory.

@@ -73,56 +101,120 @@ CUDA_VISIBLE_DEVICES=0 python tools/infer.py -c configs/ppyoloe/ppyoloe_crn_l_30
 CUDA_VISIBLE_DEVICES=0 python tools/infer.py -c configs/ppyoloe/ppyoloe_crn_l_300e_coco.yml -o weights=https://paddledet.bj.bcebos.com/models/ppyoloe_crn_l_300e_coco.pdparams --infer_dir=demo
 ```

-### 4. Deployment
-### 4. Deployment +### Exporting models -- PaddleInference [Python](../../deploy/python) & [C++](../../deploy/cpp) -- [Paddle-TensorRT](../../deploy/TENSOR_RT.md) -- [Paddle2ONNX](https://github.com/PaddlePaddle/Paddle2ONNX) -- [PaddleServing](https://github.com/PaddlePaddle/Serving) - +For deployment on GPU or speed testing, the model should first be exported to an inference model using `tools/export_model.py`. -For deployment on GPU or benchmarked, model should be first exported to inference model using `tools/export_model.py`. - -Exporting PP-YOLOE for Paddle Inference **without TensorRT**, use following command. +**Exporting PP-YOLOE for Paddle Inference without TensorRT**, use the following command ```bash -python tools/export_model.py configs/ppyoloe/ppyoloe_crn_l_300e_coco.yml -o weights=https://paddledet.bj.bcebos.com/models/ppyoloe_crn_l_300e_coco.pdparams +python tools/export_model.py -c configs/ppyoloe/ppyoloe_crn_l_300e_coco.yml -o weights=https://paddledet.bj.bcebos.com/models/ppyoloe_crn_l_300e_coco.pdparams ``` -Exporting PP-YOLOE for Paddle Inference **with TensorRT** for better performance, use following command with extra `-o trt=True` setting. +**Exporting PP-YOLOE for Paddle Inference with TensorRT** for better performance, use the following command with the extra `-o trt=True` setting ```bash -python tools/export_model.py configs/ppyoloe/ppyoloe_crn_l_300e_coco.yml -o weights=https://paddledet.bj.bcebos.com/models/ppyoloe_crn_l_300e_coco.pdparams trt=True +python tools/export_model.py -c configs/ppyoloe/ppyoloe_crn_l_300e_coco.yml -o weights=https://paddledet.bj.bcebos.com/models/ppyoloe_crn_l_300e_coco.pdparams trt=True ``` -`deploy/python/infer.py` is used to load exported paddle inference model above for inference and benchmark through PaddleInference. +If you want to export the PP-YOLOE model to **ONNX format**, use the following commands, referring to the [PaddleDetection Model Export as ONNX Format Tutorial](../../deploy/EXPORT_ONNX_MODEL_en.md). ```bash -# inference single image -CUDA_VISIBLE_DEVICES=0 python deploy/python/infer.py --model_dir=output_inference/ppyolo_r50vd_dcn_1x_coco --image_file=demo/000000014439_640x640.jpg --device=gpu +# export inference model +python tools/export_model.py -c configs/ppyoloe/ppyoloe_crn_l_300e_coco.yml --output_dir=output_inference -o weights=https://paddledet.bj.bcebos.com/models/ppyoloe_crn_l_300e_coco.pdparams -# inference all images in the directory -CUDA_VISIBLE_DEVICES=0 python deploy/python/infer.py --model_dir=output_inference/ppyolo_r50vd_dcn_1x_coco --image_dir=demo/ --device=gpu +# install paddle2onnx +pip install paddle2onnx + +# convert to onnx +paddle2onnx --model_dir output_inference/ppyoloe_crn_l_300e_coco --model_filename model.pdmodel --params_filename model.pdiparams --opset_version 11 --save_file ppyoloe_crn_l_300e_coco.onnx -# benchmark -CUDA_VISIBLE_DEVICES=0 python deploy/python/infer.py --model_dir=output_inference/ppyolo_r50vd_dcn_1x_coco --image_file=demo/000000014439_640x640.jpg --device=gpu --run_benchmark=True ``` +**Notes:** The ONNX model only supports batch_size=1 for now. + +### Speed testing + +For fair comparison, the speeds in the [Model Zoo](#Model-Zoo) do not contain the time cost of data reading and post-processing (NMS), which is the same testing method as [YOLOv4(AlexyAB)](https://github.com/AlexeyAB/darknet).
Thus, you should export the model with the extra `-o exclude_nms=True` setting. + +**Using Paddle Inference without TensorRT** to test speed, run the following command ```bash # export inference model python tools/export_model.py -c configs/ppyoloe/ppyoloe_crn_l_300e_coco.yml -o weights=https://paddledet.bj.bcebos.com/models/ppyoloe_crn_l_300e_coco.pdparams exclude_nms=True -# install paddle2onnx -pip install paddle2onnx +# speed testing with run_benchmark=True +CUDA_VISIBLE_DEVICES=0 python deploy/python/infer.py --model_dir=output_inference/ppyoloe_crn_l_300e_coco --image_file=demo/000000014439_640x640.jpg --run_mode=paddle --device=gpu --run_benchmark=True +``` + +**Using Paddle Inference with TensorRT** to test speed, run the following command + +```bash +# export inference model with trt=True +python tools/export_model.py -c configs/ppyoloe/ppyoloe_crn_l_300e_coco.yml -o weights=https://paddledet.bj.bcebos.com/models/ppyoloe_crn_l_300e_coco.pdparams exclude_nms=True trt=True + +# speed testing with run_benchmark=True, run_mode=trt_fp32/trt_fp16 +CUDA_VISIBLE_DEVICES=0 python deploy/python/infer.py --model_dir=output_inference/ppyoloe_crn_l_300e_coco --image_file=demo/000000014439_640x640.jpg --run_mode=trt_fp16 --device=gpu --run_benchmark=True + +``` + +**Using TensorRT Inference with ONNX** to test speed, run the following command + +```bash +# export inference model with trt=True +python tools/export_model.py -c configs/ppyoloe/ppyoloe_crn_s_300e_coco.yml -o weights=https://paddledet.bj.bcebos.com/models/ppyoloe_crn_s_300e_coco.pdparams exclude_nms=True trt=True # convert to onnx -paddle2onnx --model_dir output_inference/ppyoloe_crn_l_300e_coco --model_filename model.pdmodel --params_filename model.pdiparams --opset_version 11 --save_file ppyoloe_crn_l_300e_coco.onnx +paddle2onnx --model_dir output_inference/ppyoloe_crn_s_300e_coco --model_filename model.pdmodel --params_filename model.pdiparams --opset_version 12 --save_file ppyoloe_crn_s_300e_coco.onnx + +# trt inference using fp16 and batch_size=1 +trtexec --onnx=./ppyoloe_crn_s_300e_coco.onnx --saveEngine=./ppyoloe_s_bs1.engine --workspace=1024 --avgRuns=1000 --shapes=image:1x3x640x640,scale_factor:1x2 --fp16 + +# trt inference using fp16 and batch_size=32 +trtexec --onnx=./ppyoloe_crn_s_300e_coco.onnx --saveEngine=./ppyoloe_s_bs32.engine --workspace=1024 --avgRuns=1000 --shapes=image:32x3x640x640,scale_factor:32x2 --fp16 + +# Using the above commands on a T4 machine with TensorRT 7.2, the speed of the PP-YOLOE-s model is as follows: + +# batch_size=1, 2.80ms, 357fps +# batch_size=32, 67.69ms, 472fps + +``` + + +### Deployment +PP-YOLOE can be deployed by the following approaches: + - Paddle Inference [Python](../../deploy/python) & [C++](../../deploy/cpp) + - [Paddle-TensorRT](../../deploy/TENSOR_RT.md) + - [PaddleServing](https://github.com/PaddlePaddle/Serving) + - [PaddleSlim](../slim) + +Next, we will introduce how to use Paddle Inference to deploy PP-YOLOE models in TensorRT FP16 mode. + +First, refer to [Paddle Inference Docs](https://www.paddlepaddle.org.cn/inference/master/user_guides/download_lib.html#python) to download and install the packages corresponding to your CUDA, CUDNN and TensorRT versions. + +Then, export PP-YOLOE for Paddle Inference **with TensorRT** using the following command.
+ +```bash +python tools/export_model.py -c configs/ppyoloe/ppyoloe_crn_l_300e_coco.yml -o weights=https://paddledet.bj.bcebos.com/models/ppyoloe_crn_l_300e_coco.pdparams trt=True ``` -### 5. Other Datasets +Finally, run inference in TensorRT FP16 mode. + +```bash +# inference single image +CUDA_VISIBLE_DEVICES=0 python deploy/python/infer.py --model_dir=output_inference/ppyoloe_crn_l_300e_coco --image_file=demo/000000014439_640x640.jpg --device=gpu --run_mode=trt_fp16 + +# inference all images in the directory +CUDA_VISIBLE_DEVICES=0 python deploy/python/infer.py --model_dir=output_inference/ppyoloe_crn_l_300e_coco --image_dir=demo/ --device=gpu --run_mode=trt_fp16 + +``` + +**Notes:** +- TensorRT will perform optimization for the current hardware platform according to the definition of the network, generate an inference engine and serialize it into a file. This inference engine is only applicable to the current hardware platform. If your hardware and software platform has not changed, you can set `use_static=True` in [enable_tensorrt_engine](https://github.com/PaddlePaddle/PaddleDetection/blob/release/2.4/deploy/python/infer.py#L660). In this way, the generated serialized file will be saved in the `output_inference` folder, and the saved serialized file will be loaded the next time TensorRT is executed. +- PaddleDetection release/2.4 and later versions will support calling TensorRT for NMS, which requires PaddlePaddle release/2.3 and later versions. + +### Other Datasets Model | AP | AP50 ---|---|--- [YOLOX](https://github.com/Megvii-BaseDetection/YOLOX) | 22.6 | 37.5 [YOLOv5](https://github.com/ultralytics/yolov5) | 26.0 | 42.7 **PP-YOLOE** | **30.5** | **46.4** -**Note** +**Notes** - Here, we use the [VisDrone](https://github.com/VisDrone/VisDrone-Dataset) dataset to detect 9 object classes including `person, bicycles, car, van, truck, tricyle, awning-tricyle, bus, motor`. - The above models are trained using the official default configs and load parameters pretrained on the COCO dataset. - *Due to the limited time, more verification results will be supplemented in the future. You are also welcome to contribute to PP-YOLOE* diff --git a/configs/ppyoloe/README_cn.md b/configs/ppyoloe/README_cn.md index 32c259667ada4ffb9e3b46e94a6a620012d1cada..71ef0198feac0b2f913e05a2c92cc58c8d079d67 100644 --- a/configs/ppyoloe/README_cn.md +++ b/configs/ppyoloe/README_cn.md @@ -9,49 +9,75 @@ - [附录](#附录) ## 简介 -PP-YOLOE是基于PP-YOLOv2的卓越的单阶段Anchor-free模型,超越了多种流行的yolo模型。PP-YOLOE有一系列的模型,即s/m/l/x,可以通过width multiplier和depth multiplier配置。PP-YOLOE避免使用诸如deformable convolution或者matrix nms之类的特殊算子,以使其能轻松地部署在多种多样的硬件上。更多细节可以参考我们的report。 +PP-YOLOE是基于PP-YOLOv2的卓越的单阶段Anchor-free模型,超越了多种流行的YOLO模型。PP-YOLOE有一系列的模型,即s/m/l/x,可以通过width multiplier和depth multiplier配置。PP-YOLOE避免了使用诸如Deformable Convolution或者Matrix NMS之类的特殊算子,以使其能轻松地部署在多种多样的硬件上。更多细节可以参考我们的[report](https://arxiv.org/abs/2203.16250)。
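+其中s/m/l/x各档模型共享同一套网络结构,仅通过这两个系数缩放网络的深度和宽度。下面是一个最小的配置示意(数值取自本仓库configs/ppyoloe目录下的配置文件,仅作说明用途):
+
+```yaml
+# 示意:PP-YOLOE-s(数值见ppyoloe_crn_s_400e_coco.yml)
+depth_mult: 0.33
+width_mult: 0.50
+
+# 示意:PP-YOLOE-l(数值见ppyoloe_crn_l_36e_coco_xpu.yml)
+# depth_mult: 1.0
+# width_mult: 1.0
+```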
    -PP-YOLOE-l在COCO test-dev2017达到了51.4的mAP, 同时其速度在Tesla V100上达到了78.1 FPS。PP-YOLOE-s/m/x同样具有卓越的精度速度性价比, 其精度速度可以在[模型库](#模型库)中找到。 +PP-YOLOE-l在COCO test-dev2017达到了51.6的mAP, 同时其速度在Tesla V100上达到了78.1 FPS。PP-YOLOE-s/m/x同样具有卓越的精度速度性价比, 其精度速度可以在[模型库](#模型库)中找到。 PP-YOLOE由以下方法组成 - 可扩展的backbone和neck - [Task Alignment Learning](https://arxiv.org/abs/2108.07755) - Efficient Task-aligned head with [DFL](https://arxiv.org/abs/2006.04388)和[VFL](https://arxiv.org/abs/2008.13367) -- [SiLU激活函数](https://arxiv.org/abs/1710.05941) +- [SiLU(Swish)激活函数](https://arxiv.org/abs/1710.05941) ## 模型库 -| 模型 | GPU个数 | 每GPU图片个数 | 骨干网络 | 输入尺寸 | Box APval | Box APtest | V100 FP32(FPS) | V100 TensorRT FP16(FPS) | 模型下载 | 配置文件 | -|:------------------------:|:-------:|:-------------:|:----------:| :-------:| :------------------: | :-------------------: | :------------: | :---------------------: | :------: | :------: | -| PP-YOLOE-s | 8 | 32 | cspresnet-s | 640 | 42.7 | 43.1 | 208.3 | 333.3 | [model](https://paddledet.bj.bcebos.com/models/ppyoloe_crn_s_300e_coco.pdparams) | [config](https://github.com/PaddlePaddle/PaddleDetection/tree/develop/configs/ppyoloe/ppyoloe_crn_s_300e_coco.yml) | -| PP-YOLOE-m | 8 | 32 | cspresnet-m | 640 | 48.6 | 48.9 | 123.4 | 208.3 | [model](https://paddledet.bj.bcebos.com/models/ppyoloe_crn_m_300e_coco.pdparams) | [config](https://github.com/PaddlePaddle/PaddleDetection/tree/develop/configs/ppyoloe/ppyoloe_crn_m_300e_coco.yml) | -| PP-YOLOE-l | 8 | 24 | cspresnet-l | 640 | 50.9 | 51.4 | 78.1 | 149.2 | [model](https://paddledet.bj.bcebos.com/models/ppyoloe_crn_l_300e_coco.pdparams) | [config](https://github.com/PaddlePaddle/PaddleDetection/tree/develop/configs/ppyoloe/ppyoloe_crn_l_300e_coco.yml) | -| PP-YOLOE-x | 8 | 16 | cspresnet-x | 640 | 51.9 | 52.2 | 45.0 | 95.2 | [model](https://paddledet.bj.bcebos.com/models/ppyoloe_crn_x_300e_coco.pdparams) | [config](https://github.com/PaddlePaddle/PaddleDetection/tree/develop/configs/ppyoloe/ppyoloe_crn_x_300e_coco.yml) | +| 模型 | Epoch | GPU个数 | 每GPU图片个数 | 骨干网络 | 输入尺寸 | Box APval
    0.5:0.95 | Box APtest
    0.5:0.95 | Params(M) | FLOPs(G) | V100 FP32(FPS) | V100 TensorRT FP16(FPS) | 模型下载 | 配置文件 | +|:------------------------:|:-------:|:-------:|:--------:|:----------:| :-------:| :------------------: | :-------------------: |:---------:|:--------:|:---------------:| :---------------------: | :------: | :------: | +| PP-YOLOE-s | 400 | 8 | 32 | cspresnet-s | 640 | 43.4 | 43.6 | 7.93 | 17.36 | 208.3 | 333.3 | [model](https://paddledet.bj.bcebos.com/models/ppyoloe_crn_s_400e_coco.pdparams) | [config](./ppyoloe_crn_s_400e_coco.yml) | +| PP-YOLOE-s | 300 | 8 | 32 | cspresnet-s | 640 | 43.0 | 43.2 | 7.93 | 17.36 | 208.3 | 333.3 | [model](https://paddledet.bj.bcebos.com/models/ppyoloe_crn_s_300e_coco.pdparams) | [config](./ppyoloe_crn_s_300e_coco.yml) | +| PP-YOLOE-m | 300 | 8 | 28 | cspresnet-m | 640 | 49.0 | 49.1 | 23.43 | 49.91 | 123.4 | 208.3 | [model](https://paddledet.bj.bcebos.com/models/ppyoloe_crn_m_300e_coco.pdparams) | [config](./ppyoloe_crn_m_300e_coco.yml) | +| PP-YOLOE-l | 300 | 8 | 20 | cspresnet-l | 640 | 51.4 | 51.6 | 52.20 | 110.07 | 78.1 | 149.2 | [model](https://paddledet.bj.bcebos.com/models/ppyoloe_crn_l_300e_coco.pdparams) | [config](./ppyoloe_crn_l_300e_coco.yml) | +| PP-YOLOE-x | 300 | 8 | 16 | cspresnet-x | 640 | 52.3 | 52.4 | 98.42 | 206.59 | 45.0 | 95.2 | [model](https://paddledet.bj.bcebos.com/models/ppyoloe_crn_x_300e_coco.pdparams) | [config](./ppyoloe_crn_x_300e_coco.yml) | + + +### 综合指标 +| 模型 | Epoch | AP0.5:0.95 | AP0.5 | AP0.75 | APsmall | APmedium | APlarge | ARsmall | ARmedium | ARlarge | 模型下载 | 配置文件 | +|:----------------------:|:-----:|:---------------:|:----------:|:-------------:| :------------:| :-----------: | :----------: |:------------:|:-------------:|:------------:| :-----: | :-----: | +| PP-YOLOE-s | 400 | 43.4 | 60.0 | 47.5 | 25.7 | 47.8 | 59.2 | 43.9 | 70.8 | 81.9 | [model](https://paddledet.bj.bcebos.com/models/ppyoloe_crn_s_400e_coco.pdparams) | [config](./ppyoloe_crn_s_400e_coco.yml)| +| PP-YOLOE-s | 300 | 43.0 | 59.6 | 47.2 | 26.0 | 47.4 | 58.7 | 45.1 | 70.6 | 81.4 | [model](https://paddledet.bj.bcebos.com/models/ppyoloe_crn_s_300e_coco.pdparams) | [config](./ppyoloe_crn_s_300e_coco.yml)| +| PP-YOLOE-m | 300 | 49.0 | 65.9 | 53.8 | 30.9 | 53.5 | 65.3 | 50.9 | 74.4 | 84.7 | [model](https://paddledet.bj.bcebos.com/models/ppyoloe_crn_m_300e_coco.pdparams) | [config](./ppyoloe_crn_m_300e_coco.yml)| +| PP-YOLOE-l | 300 | 51.4 | 68.6 | 56.2 | 34.8 | 56.1 | 68.0 | 53.1 | 76.8 | 85.6 | [model](https://paddledet.bj.bcebos.com/models/ppyoloe_crn_l_300e_coco.pdparams) | [config](./ppyoloe_crn_l_300e_coco.yml)| +| PP-YOLOE-x | 300 | 52.3 | 69.5 | 56.8 | 35.1 | 57.0 | 68.6 | 55.5 | 76.9 | 85.7 | [model](https://paddledet.bj.bcebos.com/models/ppyoloe_crn_x_300e_coco.pdparams) | [config](./ppyoloe_crn_x_300e_coco.yml)| + **注意:** -- PP-YOLOE模型使用COCO数据集中train2017作为训练集,使用val2017和test-dev2017作为测试集,Box APtest为`mAP(IoU=0.5:0.95)`评估结果。 -- PP-YOLOE模型训练过程中使用8 GPUs进行混合精度训练,如果训练GPU数和batch size不使用上述配置,须参考[FAQ](https://github.com/PaddlePaddle/PaddleDetection/blob/develop/docs/tutorials/FAQ)调整学习率和迭代次数。 -- PP-YOLOE模型推理速度测试采用单卡V100,batch size=1进行测试,使用CUDA 10.2, CUDNN 7.6.5,TensorRT推理速度测试使用TensorRT 6.0.1.8。 -- PP-YOLOE推理速度测试使用`tools/export_model.py`并设置`-o exclude_nms=True`脚本导出的模型,并用`deploy/python/infer.py`设置`--run_benchnark`参数得到。测试结果均为不包含数据预处理和模型输出后处理(NMS)的数据(与[YOLOv4(AlexyAB)](https://github.com/AlexeyAB/darknet)测试方法一致)。 -- 如果你设置了`--run_benchnark=True`, 你首先需要安装以下依赖`pip install pynvml psutil GPUtil`。 +- PP-YOLOE模型使用COCO数据集中train2017作为训练集,使用val2017和test-dev2017作为测试集。 +- 
综合指标的表格与模型库的表格里的模型权重是**同一个权重**,综合指标是使用**val2017**作为验证精度的。 +- PP-YOLOE模型训练过程中使用8 GPUs进行混合精度训练,如果**GPU卡数**或者**batch size**发生了改变,你需要按照公式 **lrnew = lrdefault * (batch_sizenew * GPU_numbernew) / (batch_sizedefault * GPU_numberdefault)** 调整学习率。 +- PP-YOLOE模型推理速度测试采用单卡V100,batch size=1进行测试,使用**CUDA 10.2**, **CUDNN 7.6.5**,TensorRT推理速度测试使用**TensorRT 6.0.1.8**。 +- 参考[速度测试](#速度测试)以复现PP-YOLOE推理速度测试结果。 +- 如果你设置了`--run_benchmark=True`, 你首先需要安装以下依赖`pip install pynvml psutil GPUtil`。 + + +### 垂类应用模型 + +PaddleDetection团队提供了基于PP-YOLOE的各种垂类检测模型的配置文件和权重,用户可以下载进行使用: + +| 场景 | 相关数据集 | 链接 | +| :--------: | :---------: | :------: | +| 行人检测 | CrowdHuman | [pphuman](../pphuman) | +| 车辆检测 | BDD100K、UA-DETRAC | [ppvehicle](../ppvehicle) | +| 小目标检测 | VisDrone | [visdrone](../visdrone) | + -## 使用教程 +## 使用说明 -### 1. 训练 +### 训练 执行以下指令使用混合精度训练PP-YOLOE ```bash python -m paddle.distributed.launch --gpus 0,1,2,3,4,5,6,7 tools/train.py -c configs/ppyoloe/ppyoloe_crn_l_300e_coco.yml --amp ``` +**注意:** +- 使用默认配置训练需要设置`--amp`以避免显存溢出. +- PaddleDetection支持多机训练,可以参考[多机训练教程](../../docs/DistributedTraining_cn.md). -** 注意: ** 使用默认配置训练需要设置`--amp`以避免显存溢出. - -### 2. 评估 +### 评估 执行以下命令在单个GPU上评估COCO val2017数据集 @@ -61,7 +87,7 @@ CUDA_VISIBLE_DEVICES=0 python tools/eval.py -c configs/ppyoloe/ppyoloe_crn_l_300 在coco test-dev2017上评估,请先从[COCO数据集下载](https://cocodataset.org/#download)下载COCO test-dev2017数据集,然后解压到COCO数据集文件夹并像`configs/ppyolo/ppyolo_test.yml`一样配置`EvalDataset`。 -### 3. 推理 +### 推理 使用以下命令在单张GPU上预测图片,使用`--infer_img`推理单张图片以及使用`--infer_dir`推理文件中的所有图片。 @@ -74,59 +100,121 @@ CUDA_VISIBLE_DEVICES=0 python tools/infer.py -c configs/ppyoloe/ppyoloe_crn_l_30 CUDA_VISIBLE_DEVICES=0 python tools/infer.py -c configs/ppyoloe/ppyoloe_crn_l_300e_coco.yml -o weights=https://paddledet.bj.bcebos.com/models/ppyoloe_crn_l_300e_coco.pdparams --infer_dir=demo ``` -### 4. 
部署 +### 模型导出 -- PaddleInference [Python](../../deploy/python) & [C++](../../deploy/cpp) -- [Paddle-TensorRT](../../deploy/TENSOR_RT.md) -- [Paddle2ONNX](https://github.com/PaddlePaddle/Paddle2ONNX) -- [PaddleServing](https://github.com/PaddlePaddle/Serving) +PP-YOLOE在GPU上部署或者速度测试需要通过`tools/export_model.py`导出模型。 +当你**使用Paddle Inference但不使用TensorRT**时,运行以下的命令导出模型 -PP-YOLOE在GPU上部署或者推理benchmark需要通过`tools/export_model.py`导出模型。 +```bash +python tools/export_model.py -c configs/ppyoloe/ppyoloe_crn_l_300e_coco.yml -o weights=https://paddledet.bj.bcebos.com/models/ppyoloe_crn_l_300e_coco.pdparams +``` -当你使用PaddleInferenced但不使用TensorRT时,运行以下的命令进行导出 +当你**使用Paddle Inference且使用TensorRT**时,需要指定`-o trt=True`来导出模型。 ```bash -python tools/export_model.py configs/ppyoloe/ppyoloe_crn_l_300e_coco.yml -o weights=https://paddledet.bj.bcebos.com/models/ppyoloe_crn_l_300e_coco.pdparams +python tools/export_model.py -c configs/ppyoloe/ppyoloe_crn_l_300e_coco.yml -o weights=https://paddledet.bj.bcebos.com/models/ppyoloe_crn_l_300e_coco.pdparams trt=True ``` -当你使用PaddleInference的TensorRT时,需要指定`-o trt=True`进行导出 +如果你想将PP-YOLOE模型导出为**ONNX格式**,参考 +[PaddleDetection模型导出为ONNX格式教程](../../deploy/EXPORT_ONNX_MODEL.md),运行以下命令: ```bash -python tools/export_model.py configs/ppyoloe/ppyoloe_crn_l_300e_coco.yml -o weights=https://paddledet.bj.bcebos.com/models/ppyoloe_crn_l_300e_coco.pdparams trt=True + +# 导出推理模型 +python tools/export_model.py -c configs/ppyoloe/ppyoloe_crn_l_300e_coco.yml --output_dir=output_inference -o weights=https://paddledet.bj.bcebos.com/models/ppyoloe_crn_l_300e_coco.pdparams + +# 安装paddle2onnx +pip install paddle2onnx + +# 转换成onnx格式 +paddle2onnx --model_dir output_inference/ppyoloe_crn_l_300e_coco --model_filename model.pdmodel --params_filename model.pdiparams --opset_version 11 --save_file ppyoloe_crn_l_300e_coco.onnx ``` -`deploy/python/infer.py`使用上述导出后的PaddleInference模型用于推理和benchnark. +**注意:** ONNX模型目前只支持batch_size=1 + +### 速度测试 + +为了公平起见,在[模型库](#模型库)中的速度测试结果均为不包含数据预处理和模型输出后处理(NMS)的数据(与[YOLOv4(AlexyAB)](https://github.com/AlexeyAB/darknet)测试方法一致),需要在导出模型时指定`-o exclude_nms=True`. 
+ +**使用Paddle Inference但不使用TensorRT**进行测速,执行以下命令: ```bash -# 推理单张图片 -CUDA_VISIBLE_DEVICES=0 python deploy/python/infer.py --model_dir=output_inference/ppyolo_r50vd_dcn_1x_coco --image_file=demo/000000014439_640x640.jpg --device=gpu +# 导出模型 +python tools/export_model.py -c configs/ppyoloe/ppyoloe_crn_l_300e_coco.yml -o weights=https://paddledet.bj.bcebos.com/models/ppyoloe_crn_l_300e_coco.pdparams exclude_nms=True -# 推理文件夹下的所有图片 -CUDA_VISIBLE_DEVICES=0 python deploy/python/infer.py --model_dir=output_inference/ppyolo_r50vd_dcn_1x_coco --image_dir=demo/ --device=gpu +# 速度测试,使用run_benchmark=True +CUDA_VISIBLE_DEVICES=0 python deploy/python/infer.py --model_dir=output_inference/ppyoloe_crn_l_300e_coco --image_file=demo/000000014439_640x640.jpg --run_mode=paddle --device=gpu --run_benchmark=True +``` + +**使用Paddle Inference且使用TensorRT**进行测速,执行以下命令: + +```bash +# 导出模型,使用trt=True +python tools/export_model.py -c configs/ppyoloe/ppyoloe_crn_l_300e_coco.yml -o weights=https://paddledet.bj.bcebos.com/models/ppyoloe_crn_l_300e_coco.pdparams exclude_nms=True trt=True + +# 速度测试,使用run_benchmark=True, run_mode=trt_fp32/trt_fp16 +CUDA_VISIBLE_DEVICES=0 python deploy/python/infer.py --model_dir=output_inference/ppyoloe_crn_l_300e_coco --image_file=demo/000000014439_640x640.jpg --run_mode=trt_fp16 --device=gpu --run_benchmark=True -# benchmark -CUDA_VISIBLE_DEVICES=0 python deploy/python/infer.py --model_dir=output_inference/ppyolo_r50vd_dcn_1x_coco --image_file=demo/000000014439_640x640.jpg --device=gpu --run_benchmark=True ``` -如果你想将PP-YOLOE模型导出为**ONNX格式**,参考 -[PaddleDetection模型导出为ONNX格式教程](../../deploy/EXPORT_ONNX_MODEL.md) +**使用 ONNX 和 TensorRT** 进行测速,执行以下命令: ```bash -# 导出推理模型 -python tools/export_model.py configs/ppyoloe/ppyoloe_crn_l_300e_coco.yml --output_dir=output_inference -o weights=https://paddledet.bj.bcebos.com/models/ppyoloe_crn_l_300e_coco.pdparams +# 导出模型 +python tools/export_model.py -c configs/ppyoloe/ppyoloe_crn_s_300e_coco.yml -o weights=https://paddledet.bj.bcebos.com/models/ppyoloe_crn_s_300e_coco.pdparams exclude_nms=True trt=True -# 安装paddle2onnx -pip install paddle2onnx +# 转化成ONNX格式 +paddle2onnx --model_dir output_inference/ppyoloe_crn_s_300e_coco --model_filename model.pdmodel --params_filename model.pdiparams --opset_version 12 --save_file ppyoloe_crn_s_300e_coco.onnx -# 转换成onnx格式 -paddle2onnx --model_dir output_inference/ppyoloe_crn_l_300e_coco --model_filename model.pdmodel --params_filename model.pdiparams --opset_version 11 --save_file ppyoloe_crn_l_300e_coco.onnx +# 测试速度,半精度,batch_size=1 +trtexec --onnx=./ppyoloe_crn_s_300e_coco.onnx --saveEngine=./ppyoloe_s_bs1.engine --workspace=1024 --avgRuns=1000 --shapes=image:1x3x640x640,scale_factor:1x2 --fp16 + +# 测试速度,半精度,batch_size=32 +trtexec --onnx=./ppyoloe_crn_s_300e_coco.onnx --saveEngine=./ppyoloe_s_bs32.engine --workspace=1024 --avgRuns=1000 --shapes=image:32x3x640x640,scale_factor:32x2 --fp16 + +# 使用上边的脚本, 在T4 和 TensorRT 7.2的环境下,PPYOLOE-s模型速度如下 +# batch_size=1, 2.80ms, 357fps +# batch_size=32, 67.69ms, 472fps +``` + + + +### 部署 + +PP-YOLOE可以使用以下方式进行部署: + - Paddle Inference [Python](../../deploy/python) & [C++](../../deploy/cpp) + - [Paddle-TensorRT](../../deploy/TENSOR_RT.md) + - [PaddleServing](https://github.com/PaddlePaddle/Serving) + - [PaddleSlim模型量化](../slim) + +接下来,我们将介绍PP-YOLOE如何使用Paddle Inference在TensorRT FP16模式下部署 + +首先,参考[Paddle Inference文档](https://www.paddlepaddle.org.cn/inference/master/user_guides/download_lib.html#python),下载并安装与你的CUDA, CUDNN和TensorRT相应的wheel包。 + +然后,运行以下命令导出模型 + +```bash +python 
tools/export_model.py -c configs/ppyoloe/ppyoloe_crn_l_300e_coco.yml -o weights=https://paddledet.bj.bcebos.com/models/ppyoloe_crn_l_300e_coco.pdparams trt=True +``` + +最后,使用TensorRT FP16进行推理 + +```bash +# 推理单张图片 +CUDA_VISIBLE_DEVICES=0 python deploy/python/infer.py --model_dir=output_inference/ppyoloe_crn_l_300e_coco --image_file=demo/000000014439_640x640.jpg --device=gpu --run_mode=trt_fp16 + +# 推理文件夹下的所有图片 +CUDA_VISIBLE_DEVICES=0 python deploy/python/infer.py --model_dir=output_inference/ppyoloe_crn_l_300e_coco --image_dir=demo/ --device=gpu --run_mode=trt_fp16 ``` +**注意:** +- TensorRT会根据网络的定义,执行针对当前硬件平台的优化,生成推理引擎并序列化为文件。该推理引擎只适用于当前软硬件平台。如果你的软硬件平台没有发生变化,你可以设置[enable_tensorrt_engine](https://github.com/PaddlePaddle/PaddleDetection/blob/release/2.4/deploy/python/infer.py#L660)的参数`use_static=True`,这样生成的序列化文件将会保存在`output_inference`文件夹下,下次执行TensorRT时将加载保存的序列化文件。 +- PaddleDetection release/2.4及其之后的版本将支持NMS调用TensorRT,需要依赖PaddlePaddle release/2.3及其之后的版本 -### 5. 泛化性验证 +### 泛化性验证 模型 | AP | AP50 ---|---|--- @@ -139,6 +227,7 @@ paddle2onnx --model_dir output_inference/ppyoloe_crn_l_300e_coco --model_filenam - 以上模型训练均采用官方提供的默认参数,并且加载COCO预训练参数 - *由于人力/时间有限,后续将会持续补充更多验证结果,也欢迎各位开源用户贡献,共同优化PP-YOLOE* + ## 附录 PP-YOLOE消融实验 diff --git a/configs/ppyoloe/_base_/optimizer_36e_xpu.yml b/configs/ppyoloe/_base_/optimizer_36e_xpu.yml new file mode 100644 index 0000000000000000000000000000000000000000..59d76f4bae98e4774c2cee9cbc8c77ac341af35d --- /dev/null +++ b/configs/ppyoloe/_base_/optimizer_36e_xpu.yml @@ -0,0 +1,18 @@ +epoch: 36 + +LearningRate: + base_lr: 0.00125 + schedulers: + - !CosineDecay + max_epochs: 43 + - !LinearWarmup + start_factor: 0.001 + steps: 2000 + +OptimizerBuilder: + optimizer: + momentum: 0.9 + type: Momentum + regularizer: + factor: 0.0005 + type: L2 diff --git a/configs/ppyoloe/_base_/ppyoloe_crn.yml b/configs/ppyoloe/_base_/ppyoloe_crn.yml index 2ad9a11a8e98fe0f415606fb0b1119263d3b5aa4..7abee87a2433bd6b83a5257f254809f5acda3908 100644 --- a/configs/ppyoloe/_base_/ppyoloe_crn.yml +++ b/configs/ppyoloe/_base_/ppyoloe_crn.yml @@ -28,7 +28,6 @@ PPYOLOEHead: grid_cell_offset: 0.5 static_assigner_epoch: 100 use_varifocal_loss: True - eval_input_size: [640, 640] loss_weight: {class: 1.0, iou: 2.5, dfl: 0.5} static_assigner: name: ATSSAssigner @@ -41,6 +40,6 @@ PPYOLOEHead: nms: name: MultiClassNMS nms_top_k: 1000 - keep_top_k: 100 + keep_top_k: 300 score_threshold: 0.01 - nms_threshold: 0.6 + nms_threshold: 0.7 diff --git a/configs/ppyoloe/_base_/ppyoloe_reader.yml b/configs/ppyoloe/_base_/ppyoloe_reader.yml index a7574de1fcd4db98a0aaccc2bfd8a126db676932..058b4ee478d05fea850d7970e1e6a61d81352e78 100644 --- a/configs/ppyoloe/_base_/ppyoloe_reader.yml +++ b/configs/ppyoloe/_base_/ppyoloe_reader.yml @@ -1,4 +1,8 @@ worker_num: 4 +eval_height: &eval_height 640 +eval_width: &eval_width 640 +eval_size: &eval_size [*eval_height, *eval_width] + TrainReader: sample_transforms: - Decode: {} @@ -20,17 +24,17 @@ TrainReader: EvalReader: sample_transforms: - Decode: {} - - Resize: {target_size: [640, 640], keep_ratio: False, interp: 2} + - Resize: {target_size: *eval_size, keep_ratio: False, interp: 2} - NormalizeImage: {mean: [0.485, 0.456, 0.406], std: [0.229, 0.224, 0.225], is_scale: True} - Permute: {} batch_size: 2 TestReader: inputs_def: - image_shape: [3, 640, 640] + image_shape: [3, *eval_height, *eval_width] sample_transforms: - Decode: {} - - Resize: {target_size: [640, 640], keep_ratio: False, interp: 2} + - Resize: {target_size: *eval_size, keep_ratio: False, interp: 2} - NormalizeImage: {mean: 
[0.485, 0.456, 0.406], std: [0.229, 0.224, 0.225], is_scale: True} - Permute: {} batch_size: 1 diff --git a/configs/ppyoloe/ppyoloe_crn_l_36e_coco_xpu.yml b/configs/ppyoloe/ppyoloe_crn_l_36e_coco_xpu.yml new file mode 100644 index 0000000000000000000000000000000000000000..3797288864c978f7872c9b994af2477bb6c31acd --- /dev/null +++ b/configs/ppyoloe/ppyoloe_crn_l_36e_coco_xpu.yml @@ -0,0 +1,69 @@ +_BASE_: [ + '../datasets/coco_detection.yml', + '../runtime.yml', + './_base_/optimizer_36e_xpu.yml', + './_base_/ppyoloe_reader.yml', +] + +# note: these are default values (use_gpu = true and use_xpu = false) for CI. +# set use_gpu = false and use_xpu = true for training. +use_gpu: true +use_xpu: false + +log_iter: 100 +snapshot_epoch: 1 +weights: output/ppyoloe_crn_l_36e_coco/model_final +find_unused_parameters: True + +pretrain_weights: https://paddledet.bj.bcebos.com/models/pretrained/CSPResNetb_l_pretrained.pdparams +depth_mult: 1.0 +width_mult: 1.0 + +TrainReader: + batch_size: 8 + +architecture: YOLOv3 +norm_type: sync_bn +use_ema: true +ema_decay: 0.9998 + +YOLOv3: + backbone: CSPResNet + neck: CustomCSPPAN + yolo_head: PPYOLOEHead + post_process: ~ + +CSPResNet: + layers: [3, 6, 6, 3] + channels: [64, 128, 256, 512, 1024] + return_idx: [1, 2, 3] + use_large_stem: True + +CustomCSPPAN: + out_channels: [768, 384, 192] + stage_num: 1 + block_num: 3 + act: 'swish' + spp: true + +PPYOLOEHead: + fpn_strides: [32, 16, 8] + grid_cell_scale: 5.0 + grid_cell_offset: 0.5 + static_assigner_epoch: 4 + use_varifocal_loss: True + loss_weight: {class: 1.0, iou: 2.5, dfl: 0.5} + static_assigner: + name: ATSSAssigner + topk: 9 + assigner: + name: TaskAlignedAssigner + topk: 13 + alpha: 1.0 + beta: 6.0 + nms: + name: MultiClassNMS + nms_top_k: 1000 + keep_top_k: 300 + score_threshold: 0.01 + nms_threshold: 0.7 diff --git a/configs/ppyoloe/ppyoloe_crn_s_400e_coco.yml b/configs/ppyoloe/ppyoloe_crn_s_400e_coco.yml new file mode 100644 index 0000000000000000000000000000000000000000..c3cddf48bb75d733aaf15bfbd148b8e480d48493 --- /dev/null +++ b/configs/ppyoloe/ppyoloe_crn_s_400e_coco.yml @@ -0,0 +1,46 @@ +_BASE_: [ + '../datasets/coco_detection.yml', + '../runtime.yml', + './_base_/optimizer_300e.yml', + './_base_/ppyoloe_crn.yml', + './_base_/ppyoloe_reader.yml', +] + +log_iter: 100 +snapshot_epoch: 10 +weights: output/ppyoloe_crn_s_400e_coco/model_final + +pretrain_weights: https://paddledet.bj.bcebos.com/models/pretrained/CSPResNetb_s_pretrained.pdparams +depth_mult: 0.33 +width_mult: 0.50 + +TrainReader: + batch_size: 32 + +epoch: 400 +LearningRate: + base_lr: 0.04 + schedulers: + - !CosineDecay + max_epochs: 480 + - !LinearWarmup + start_factor: 0. 
+ epochs: 5 + +OptimizerBuilder: + optimizer: + momentum: 0.9 + type: Momentum + regularizer: + factor: 0.0005 + type: L2 + + +PPYOLOEHead: + static_assigner_epoch: 133 + nms: + name: MultiClassNMS + nms_top_k: 1000 + keep_top_k: 300 + score_threshold: 0.01 + nms_threshold: 0.7 diff --git a/configs/rcnn_enhance/README.md b/configs/rcnn_enhance/README.md index 6e2c0917b53ecc2f07836f54bb6400d40d04548c..974935b379e5569abfc001e26d3b3b61cb1ecd3b 100644 --- a/configs/rcnn_enhance/README.md +++ b/configs/rcnn_enhance/README.md @@ -2,7 +2,7 @@ ### 简介 -* 近年来,学术界和工业界广泛关注图像中目标检测任务。基于[PaddleClas](https://github.com/PaddlePaddle/PaddleClas)中SSLD蒸馏方案训练得到的ResNet50_vd预训练模型(ImageNet1k验证集上Top1 Acc为82.39%),结合PaddleDetection中的丰富算子,飞桨提供了一种面向服务器端实用的目标检测方案PSS-DET(Practical Server Side Detection)。基于COCO2017目标检测数据集,V100单卡预测速度为为61FPS时,COCO mAP可达41.2%。 +* 近年来,学术界和工业界广泛关注图像中目标检测任务。基于[PaddleClas](https://github.com/PaddlePaddle/PaddleClas)中SSLD蒸馏方案训练得到的ResNet50_vd预训练模型(ImageNet1k验证集上Top1 Acc为82.39%),结合PaddleDetection中的丰富算子,飞桨提供了一种面向服务器端实用的目标检测方案PSS-DET(Practical Server Side Detection)。基于COCO2017目标检测数据集,V100单卡预测速度为61FPS时,COCO mAP可达41.2%。 ### 模型库 diff --git a/configs/rcnn_enhance/_base_/faster_rcnn_enhance_reader.yml b/configs/rcnn_enhance/_base_/faster_rcnn_enhance_reader.yml index 33ec222e2715ca4f4819ffc424a0c366f42dc7af..f1a7c998d4e332661491024ca17a1a0d996b589d 100644 --- a/configs/rcnn_enhance/_base_/faster_rcnn_enhance_reader.yml +++ b/configs/rcnn_enhance/_base_/faster_rcnn_enhance_reader.yml @@ -2,9 +2,9 @@ worker_num: 2 TrainReader: sample_transforms: - Decode: {} + - AutoAugment: {autoaug_type: v1} - RandomResize: {target_size: [[384,1000], [416,1000], [448,1000], [480,1000], [512,1000], [544,1000], [576,1000], [608,1000], [640,1000], [672,1000]], interp: 2, keep_ratio: True} - RandomFlip: {prob: 0.5} - - AutoAugment: {autoaug_type: v1} - NormalizeImage: {is_scale: true, mean: [0.485,0.456,0.406], std: [0.229, 0.224,0.225]} - Permute: {} batch_transforms: diff --git a/configs/retinanet/README.md b/configs/retinanet/README.md index bfa281321ed0b2509a423390b26db15fabc5caf4..ac3d455e03bcbc8cdd63becb12098b734d1d42c8 100644 --- a/configs/retinanet/README.md +++ b/configs/retinanet/README.md @@ -1,20 +1,19 @@ -# Focal Loss for Dense Object Detection - -## Introduction - -We reproduce RetinaNet proposed in paper Focal Loss for Dense Object Detection. 
+# RetinaNet (Focal Loss for Dense Object Detection) ## Model Zoo -| Backbone | Model | mstrain | imgs/GPU | lr schedule | FPS | Box AP | download | config | -| ------------ | --------- | ------- | -------- | ----------- | --- | ------ | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | ------------------------------------- | -| ResNet50-FPN | RetinaNet | Yes | 4 | 1x | --- | 37.5 | [model](https://bj.bcebos.com/v1/paddledet/models/retinanet_r50_fpn_mstrain_1x_coco.pdparams)\|[log](https://bj.bcebos.com/v1/paddledet/logs/retinanet_r50_fpn_mstrain_1x_coco.log) | retinanet_r50_fpn_mstrain_1x_coco.yml | +| Backbone | Model | imgs/GPU | lr schedule | FPS | Box AP | download | config | +| ------------ | --------- | -------- | ----------- | --- | ------ | ---------- | ----------- | +| ResNet50-FPN | RetinaNet | 2 | 1x | --- | 37.5 | [model](https://bj.bcebos.com/v1/paddledet/models/retinanet_r50_fpn_1x_coco.pdparams) | [config](./retinanet_r50_fpn_1x_coco.yml) | +| ResNet101-FPN| RetinaNet | 2 | 2x | --- | 40.6 | [model](https://paddledet.bj.bcebos.com/models/retinanet_r101_fpn_2x_coco.pdparams) | [config](./retinanet_r101_fpn_2x_coco.yml) | +| ResNet50-FPN | RetinaNet + [FGD](../slim/distill/README.md) | 2 | 2x | --- | 40.8 | [model](https://bj.bcebos.com/v1/paddledet/models/retinanet_r101_distill_r50_2x_coco.pdparams) | [config](./retinanet_r50_fpn_2x_coco.yml)/[slim_config](../slim/distill/retinanet_resnet101_coco_distill.yml) | + **Notes:** -- All above models are trained on COCO train2017 with 4 GPUs and evaludated on val2017. Box AP=`mAP(IoU=0.5:0.95)`. +- The ResNet50-FPN model is trained on COCO train2017 with 8 GPUs. Both ResNet101-FPN and ResNet50-FPN with [FGD](../slim/distill/README.md) are trained on COCO train2017 with 4 GPUs. +- All above models are evaluated on val2017. Box AP=`mAP(IoU=0.5:0.95)`. -- Config `configs/retinanet/retinanet_r50_fpn_1x_coco.yml` is for 8 GPUs and `configs/retinanet/retinanet_r50_fpn_mstrain_1x_coco.yml` is for 4 GPUs (mind the difference of train batch size).
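+For reference, an FGD distillation run follows the repository's slim workflow. A minimal sketch, assuming the student config and slim config linked in the table above and the `--slim_config` flag used by PaddleDetection's slim tools:
+
+```bash
+# a sketch: distill the ResNet50-FPN student from the ResNet101-FPN teacher with FGD on 4 GPUs
+python -m paddle.distributed.launch --gpus 0,1,2,3 tools/train.py \
+    -c configs/retinanet/retinanet_r50_fpn_2x_coco.yml \
+    --slim_config configs/slim/distill/retinanet_resnet101_coco_distill.yml
+```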
## Citation diff --git a/configs/retinanet/_base_/optimizer_2x.yml b/configs/retinanet/_base_/optimizer_2x.yml new file mode 100644 index 0000000000000000000000000000000000000000..61841433417b9fcc6f29a6c71a72ba23406b55ad --- /dev/null +++ b/configs/retinanet/_base_/optimizer_2x.yml @@ -0,0 +1,19 @@ +epoch: 24 + +LearningRate: + base_lr: 0.01 + schedulers: + - !PiecewiseDecay + gamma: 0.1 + milestones: [16, 22] + - !LinearWarmup + start_factor: 0.001 + steps: 500 + +OptimizerBuilder: + optimizer: + momentum: 0.9 + type: Momentum + regularizer: + factor: 0.0001 + type: L2 diff --git a/configs/retinanet/_base_/retinanet_r101_fpn.yml b/configs/retinanet/_base_/retinanet_r101_fpn.yml new file mode 100644 index 0000000000000000000000000000000000000000..ae5595769d940c2ecb5b857fdc8970da76d572ab --- /dev/null +++ b/configs/retinanet/_base_/retinanet_r101_fpn.yml @@ -0,0 +1,57 @@ +architecture: RetinaNet +pretrain_weights: https://paddledet.bj.bcebos.com/models/pretrained/ResNet101_pretrained.pdparams + +RetinaNet: + backbone: ResNet + neck: FPN + head: RetinaHead + +ResNet: + depth: 101 + variant: b + norm_type: bn + freeze_at: 0 + return_idx: [1,2,3] + num_stages: 4 + +FPN: + out_channel: 256 + spatial_scales: [0.125, 0.0625, 0.03125] + extra_stage: 2 + has_extra_convs: true + use_c5: false + +RetinaHead: + conv_feat: + name: RetinaFeat + feat_in: 256 + feat_out: 256 + num_convs: 4 + norm_type: null + use_dcn: false + anchor_generator: + name: RetinaAnchorGenerator + octave_base_scale: 4 + scales_per_octave: 3 + aspect_ratios: [0.5, 1.0, 2.0] + strides: [8.0, 16.0, 32.0, 64.0, 128.0] + bbox_assigner: + name: MaxIoUAssigner + positive_overlap: 0.5 + negative_overlap: 0.4 + allow_low_quality: true + loss_class: + name: FocalLoss + gamma: 2.0 + alpha: 0.25 + loss_weight: 1.0 + loss_bbox: + name: SmoothL1Loss + beta: 0.0 + loss_weight: 1.0 + nms: + name: MultiClassNMS + nms_top_k: 1000 + keep_top_k: 100 + score_threshold: 0.05 + nms_threshold: 0.5 diff --git a/configs/retinanet/_base_/retinanet_r50_fpn.yml b/configs/retinanet/_base_/retinanet_r50_fpn.yml index 156a17fea84119322c4e34b5e58b37e47cadcb63..fb2d767aed5bd383f312ce79e4e39e3710c3cb9c 100644 --- a/configs/retinanet/_base_/retinanet_r50_fpn.yml +++ b/configs/retinanet/_base_/retinanet_r50_fpn.yml @@ -22,10 +22,6 @@ FPN: use_c5: false RetinaHead: - num_classes: 80 - prior_prob: 0.01 - nms_pre: 1000 - decode_reg_out: false conv_feat: name: RetinaFeat feat_in: 256 @@ -44,10 +40,6 @@ RetinaHead: positive_overlap: 0.5 negative_overlap: 0.4 allow_low_quality: true - bbox_coder: - name: DeltaBBoxCoder - norm_mean: [0.0, 0.0, 0.0, 0.0] - norm_std: [1.0, 1.0, 1.0, 1.0] loss_class: name: FocalLoss gamma: 2.0 diff --git a/configs/retinanet/_base_/retinanet_reader.yml b/configs/retinanet/_base_/retinanet_reader.yml index 8cf31aa5ecdb903ce50e6c48ca7fb8429f3d776b..1f686b4d7f06f143106491e9b8fe3957a40927c2 100644 --- a/configs/retinanet/_base_/retinanet_reader.yml +++ b/configs/retinanet/_base_/retinanet_reader.yml @@ -1,39 +1,36 @@ worker_num: 2 TrainReader: sample_transforms: - - Decode: {} - - RandomFlip: {prob: 0.5} - - Resize: {target_size: [800, 1333], keep_ratio: true, interp: 1} - - NormalizeImage: {is_scale: true, mean: [0.485, 0.456, 0.406], std: [0.229, 0.224, 0.225]} - - Permute: {} + - Decode: {} + - RandomResize: {target_size: [[640, 1333], [672, 1333], [704, 1333], [736, 1333], [768, 1333], [800, 1333]], keep_ratio: True, interp: 1} + - RandomFlip: {} + - NormalizeImage: {is_scale: True, mean: [0.485, 0.456, 0.406], std: [0.229, 0.224, 0.225]} + 
- Permute: {} batch_transforms: - - PadBatch: {pad_to_stride: 32} + - PadBatch: {pad_to_stride: 32} batch_size: 2 - shuffle: true - drop_last: true - use_process: true - collate_batch: false + shuffle: True + drop_last: True + collate_batch: False EvalReader: sample_transforms: - - Decode: {} - - Resize: {target_size: [800, 1333], keep_ratio: true, interp: 1} - - NormalizeImage: {is_scale: true, mean: [0.485, 0.456, 0.406], std: [0.229, 0.224, 0.225]} - - Permute: {} + - Decode: {} + - Resize: {target_size: [800, 1333], keep_ratio: True, interp: 1} + - NormalizeImage: {is_scale: True, mean: [0.485, 0.456, 0.406], std: [0.229, 0.224, 0.225]} + - Permute: {} batch_transforms: - - PadBatch: {pad_to_stride: 32} - batch_size: 2 - shuffle: false + - PadBatch: {pad_to_stride: 32} + batch_size: 8 TestReader: sample_transforms: - - Decode: {} - - Resize: {target_size: [800, 1333], keep_ratio: true, interp: 1} - - NormalizeImage: {is_scale: true, mean: [0.485, 0.456, 0.406], std: [0.229, 0.224, 0.225]} - - Permute: {} + - Decode: {} + - Resize: {target_size: [800, 1333], keep_ratio: True, interp: 1} + - NormalizeImage: {is_scale: True, mean: [0.485, 0.456, 0.406], std: [0.229, 0.224, 0.225]} + - Permute: {} batch_transforms: - - PadBatch: {pad_to_stride: 32} + - PadBatch: {pad_to_stride: 32} batch_size: 1 - shuffle: false diff --git a/configs/retinanet/retinanet_r101_distill_r50_2x_coco.yml b/configs/retinanet/retinanet_r101_distill_r50_2x_coco.yml new file mode 100644 index 0000000000000000000000000000000000000000..bb72cda8e99ac6a597ea5fc9b113378f7954bac3 --- /dev/null +++ b/configs/retinanet/retinanet_r101_distill_r50_2x_coco.yml @@ -0,0 +1,9 @@ +_BASE_: [ + '../datasets/coco_detection.yml', + '../runtime.yml', + '_base_/retinanet_r50_fpn.yml', + '_base_/optimizer_2x.yml', + '_base_/retinanet_reader.yml' +] + +weights: https://paddledet.bj.bcebos.com/models/retinanet_r101_distill_r50_2x_coco.pdparams diff --git a/configs/retinanet/retinanet_r101_fpn_2x_coco.yml b/configs/retinanet/retinanet_r101_fpn_2x_coco.yml new file mode 100644 index 0000000000000000000000000000000000000000..854def4ad82ebcc48d904e665b856ec47655d167 --- /dev/null +++ b/configs/retinanet/retinanet_r101_fpn_2x_coco.yml @@ -0,0 +1,9 @@ +_BASE_: [ + '../datasets/coco_detection.yml', + '../runtime.yml', + '_base_/retinanet_r101_fpn.yml', + '_base_/optimizer_2x.yml', + '_base_/retinanet_reader.yml' +] + +weights: https://paddledet.bj.bcebos.com/models/retinanet_r101_fpn_2x_coco.pdparams diff --git a/configs/retinanet/retinanet_r50_fpn_1x_coco.yml b/configs/retinanet/retinanet_r50_fpn_1x_coco.yml index bb2c5a404033691650b99430649cd512a81a91be..cb6d342baeb428547d42f417acda02e8c90e39da 100644 --- a/configs/retinanet/retinanet_r50_fpn_1x_coco.yml +++ b/configs/retinanet/retinanet_r50_fpn_1x_coco.yml @@ -7,4 +7,3 @@ _BASE_: [ ] weights: output/retinanet_r50_fpn_1x_coco/model_final -find_unused_parameters: true \ No newline at end of file diff --git a/configs/retinanet/retinanet_r50_fpn_mstrain_1x_coco.yml b/configs/retinanet/retinanet_r50_fpn_mstrain_1x_coco.yml deleted file mode 100644 index ef4023d2284941e6df255dd4e403f88e0d2d1513..0000000000000000000000000000000000000000 --- a/configs/retinanet/retinanet_r50_fpn_mstrain_1x_coco.yml +++ /dev/null @@ -1,20 +0,0 @@ -_BASE_: [ - '../datasets/coco_detection.yml', - '../runtime.yml', - '_base_/retinanet_r50_fpn.yml', - '_base_/optimizer_1x.yml', - '_base_/retinanet_reader.yml' -] - -worker_num: 4 -TrainReader: - batch_size: 4 - sample_transforms: - - Decode: {} - - RandomFlip: {prob: 0.5} - 
- RandomResize: {target_size: [[640, 1333], [672, 1333], [704, 1333], [736, 1333], [768, 1333], [800, 1333]], keep_ratio: true, interp: 1} - - NormalizeImage: {is_scale: true, mean: [0.485, 0.456, 0.406], std: [0.229, 0.224, 0.225]} - - Permute: {} - -weights: output/retinanet_r50_fpn_mstrain_1x_coco/model_final -find_unused_parameters: true \ No newline at end of file diff --git a/configs/runtime.yml b/configs/runtime.yml index c67c6c94f836998cb436fc4933370a5d396d5d1f..f601433afc13619183da153cbb09afe9f1332fbe 100644 --- a/configs/runtime.yml +++ b/configs/runtime.yml @@ -10,3 +10,4 @@ export: post_process: True # Whether post-processing is included in the network when export model. nms: True # Whether NMS is included in the network when export model. benchmark: False # It is used to testing model performance, if set `True`, post-process and NMS will not be exported. + fuse_conv_bn: False diff --git a/configs/slim/README.md b/configs/slim/README.md index ac025688b18a502e974c438a3cbac47ede3a753d..4eabd73b570dd3d971d0625c3e138895778719b5 100755 --- a/configs/slim/README.md +++ b/configs/slim/README.md @@ -126,6 +126,8 @@ python tools/export_model.py -c configs/{MODEL.yml} --slim_config configs/slim/{ | 模型 | 压缩策略 | 输入尺寸 | 模型体积(MB) | 预测时延(V100) | 预测时延(SD855) | Box AP | 下载 | Inference模型下载 | 模型配置文件 | 压缩算法配置文件 | | ------------------ | ------------ | -------- | :---------: | :---------: |:---------: | :---------: | :----------------------------------------------: | :----------------------------------------------: |:------------------------------------------: | :------------------------------------: | +| PP-YOLOE-l | baseline | 640 | - | 11.2ms(trt_fp32) | 7.7ms(trt_fp16) | -- | 50.9 | [下载链接](https://paddledet.bj.bcebos.com/models/ppyoloe_crn_l_300e_coco.pdparams) | - | [配置文件](https://github.com/PaddlePaddle/PaddleDetection/tree/develop/configs/ppyoloe/ppyoloe_crn_l_300e_coco.yml) | - | +| PP-YOLOE-l | 普通在线量化 | 640 | - | 6.7ms(trt_int8) | -- | 48.8 | [下载链接](https://paddledet.bj.bcebos.com/models/slim/ppyoloe_l_coco_qat.pdparams) | - | [配置文件](https://github.com/PaddlePaddle/PaddleDetection/tree/develop/configs/ppyoloe/ppyoloe_crn_l_300e_coco.yml) | [配置文件](https://github.com/PaddlePaddle/PaddleDetection/tree/develop/configs/slim/quant/ppyoloe_l_qat.yml) | | PP-YOLOv2_R50vd | baseline | 640 | 208.6 | 19.1ms | -- | 49.1 | [下载链接](https://paddledet.bj.bcebos.com/models/ppyolov2_r50vd_dcn_365e_coco.pdparams) | [下载链接](https://paddledet.bj.bcebos.com/models/slim/ppyolov2_r50vd_dcn_365e_coco.tar) | [配置文件](https://github.com/PaddlePaddle/PaddleDetection/tree/develop/configs/ppyolo/ppyolo_r50vd_dcn_1x_coco.yml) | - | | PP-YOLOv2_R50vd | PACT在线量化 | 640 | -- | 17.3ms | -- | 48.1 | [下载链接](https://paddledet.bj.bcebos.com/models/slim/ppyolov2_r50vd_dcn_qat.pdparams) | [下载链接](https://paddledet.bj.bcebos.com/models/slim/ppyolov2_r50vd_dcn_qat.tar) | [配置文件](https://github.com/PaddlePaddle/PaddleDetection/tree/develop/configs/ppyolo/ppyolo_r50vd_dcn_1x_coco.yml) | [配置文件](https://github.com/PaddlePaddle/PaddleDetection/tree/develop/configs/slim/quant/ppyolov2_r50vd_dcn_qat.yml) | | PP-YOLO_R50vd | baseline | 608 | 183.3 | 17.4ms | -- | 44.8 | [下载链接](https://paddledet.bj.bcebos.com/models/ppyolo_r50vd_dcn_1x_coco.pdparams) | [下载链接](https://paddledet.bj.bcebos.com/models/slim/ppyolo_r50vd_dcn_1x_coco.tar) | [配置文件](https://github.com/PaddlePaddle/PaddleDetection/tree/develop/configs/ppyolo/ppyolo_r50vd_dcn_1x_coco.yml) | - | @@ -146,6 +148,7 @@ python tools/export_model.py -c configs/{MODEL.yml} --slim_config 
configs/slim/{ 说明: - 上述V100预测时延非量化模型均是使用TensorRT-FP32测试,量化模型均使用TensorRT-INT8测试,并且都包含NMS耗时。 - SD855预测时延为使用PaddleLite部署,使用arm8架构并使用4线程(4 Threads)推理时延。 +- 上述PP-YOLOE模型均在V100,开启TensorRT环境中测速,不包含NMS。(导出模型时指定:-o trt=True exclude_nms=True) ### 离线量化 需要准备val集,用来对离线量化模型进行校准,运行方式: @@ -176,3 +179,4 @@ python3.7 tools/post_quant.py -c configs/ppyolo/ppyolo_mbv3_large_coco.yml --sli | ------------------ | ------------ | -------- | :---------: |:---------: |:---------: | :---------: |:----------------------------------------------------------: | :----------------------------------------------------------: | :----------------------------------------------------------: | | YOLOv3-MobileNetV1 | baseline | 608 | 24.65 | 94.2 | 332.0ms | 29.4 | [下载链接](https://paddledet.bj.bcebos.com/models/yolov3_mobilenet_v1_270e_coco.pdparams) | [配置文件](https://github.com/PaddlePaddle/PaddleDetection/tree/develop/configs/yolov3/yolov3_mobilenet_v1_270e_coco.yml) | - | | YOLOv3-MobileNetV1 | 蒸馏+剪裁 | 608 | 7.54(-69.4%) | 30.9(-67.2%) | 166.1ms | 28.4(-1.0) | [下载链接](https://paddledet.bj.bcebos.com/models/slim/yolov3_mobilenet_v1_coco_distill_prune.pdparams) | [配置文件](https://github.com/PaddlePaddle/PaddleDetection/tree/develop/configs/yolov3/yolov3_mobilenet_v1_270e_coco.yml) | [slim配置文件](https://github.com/PaddlePaddle/PaddleDetection/tree/develop/configs/slim/extensions/yolov3_mobilenet_v1_coco_distill_prune.yml) | +| YOLOv3-MobileNetV1 | 剪裁+量化 | 608 | - | - | - | - | - | [配置文件](https://github.com/PaddlePaddle/PaddleDetection/tree/develop/configs/yolov3/yolov3_mobilenet_v1_270e_voc.yml) | [slim配置文件](https://github.com/PaddlePaddle/PaddleDetection/tree/develop/configs/slim/extensions/yolov3_mobilenetv1_prune_qat.yml) | diff --git a/configs/slim/README_en.md b/configs/slim/README_en.md index 8d2b39c914c281126f6d70b2f4150f49add6e087..924757e3fcdd465c7eb51c0cb7ce8b71c8e2fcb5 100755 --- a/configs/slim/README_en.md +++ b/configs/slim/README_en.md @@ -124,7 +124,9 @@ Description: | Model | Compression Strategy | Input Size | Model Volume(MB) | Prediction Delay(V100) | Prediction Delay(SD855) | Box AP | Download | Download of Inference Model | Model Configuration File | Compression Algorithm Configuration File | | ------------------------- | -------------------------- | ----------- | :--------------: | :--------------------: | :---------------------: | :-------------------: | :-----------------------------------------------------------------------------------------: | :-----------------------------------------------------------------------------------------: | :----------------------------------------------------------------------------------------------------------------------------------------: | :--------------------------------------------------------------------------------------------------------------------------------------: | -| PP-YOLOv2_R50vd | baseline | 640 | 208.6 | 19.1ms | -- | 49.1 | [link](https://paddledet.bj.bcebos.com/models/ppyolov2_r50vd_dcn_365e_coco.pdparams) | [link](https://paddledet.bj.bcebos.com/models/slim/ppyolov2_r50vd_dcn_365e_coco.tar) | [Configuration File ](https://github.com/PaddlePaddle/PaddleDetection/tree/develop/configs/ppyolo/ppyolo_r50vd_dcn_1x_coco.yml) | - | +| PP-YOLOE-l | baseline | 640 | - | 11.2ms(trt_fp32) | 7.7ms(trt_fp16) | -- | 50.9 | [link](https://paddledet.bj.bcebos.com/models/ppyoloe_crn_l_300e_coco.pdparams) | - | [Configuration File](https://github.com/PaddlePaddle/PaddleDetection/tree/develop/configs/ppyoloe/ppyoloe_crn_l_300e_coco.yml) | - | +| PP-YOLOE-l | Common Online 
quantitative | 640 | - | 6.7ms(trt_int8) | -- | 48.8 | [link](https://paddledet.bj.bcebos.com/models/slim/ppyoloe_l_coco_qat.pdparams) | - | [Configuration File](https://github.com/PaddlePaddle/PaddleDetection/tree/develop/configs/ppyoloe/ppyoloe_crn_l_300e_coco.yml) | [Configuration File](https://github.com/PaddlePaddle/PaddleDetection/tree/develop/configs/slim/quant/ppyoloe_l_qat.yml) | +| PP-YOLOv2_R50vd | baseline | 640 | 208.6 | 19.1ms | -- | 49.1 | [link](https://paddledet.bj.bcebos.com/models/ppyolov2_r50vd_dcn_365e_coco.pdparams) | [link](https://paddledet.bj.bcebos.com/models/slim/ppyolov2_r50vd_dcn_365e_coco.tar) | [Configuration File](https://github.com/PaddlePaddle/PaddleDetection/tree/develop/configs/ppyolo/ppyolo_r50vd_dcn_1x_coco.yml) | - | | PP-YOLOv2_R50vd | PACT Online quantitative | 640 | -- | 17.3ms | -- | 48.1 | [link](https://paddledet.bj.bcebos.com/models/slim/ppyolov2_r50vd_dcn_qat.pdparams) | [link](https://paddledet.bj.bcebos.com/models/slim/ppyolov2_r50vd_dcn_qat.tar) | [Configuration File ](https://github.com/PaddlePaddle/PaddleDetection/tree/develop/configs/ppyolo/ppyolo_r50vd_dcn_1x_coco.yml) | [Configuration File ](https://github.com/PaddlePaddle/PaddleDetection/tree/develop/configs/slim/quant/ppyolov2_r50vd_dcn_qat.yml) | | PP-YOLO_R50vd | baseline | 608 | 183.3 | 17.4ms | -- | 44.8 | [link](https://paddledet.bj.bcebos.com/models/ppyolo_r50vd_dcn_1x_coco.pdparams) | [link](https://paddledet.bj.bcebos.com/models/slim/ppyolo_r50vd_dcn_1x_coco.tar) | [Configuration File ](https://github.com/PaddlePaddle/PaddleDetection/tree/develop/configs/ppyolo/ppyolo_r50vd_dcn_1x_coco.yml) | - | | PP-YOLO_R50vd | PACT Online quantitative | 608 | 67.3 | 13.8ms | -- | 44.3 | [link](https://paddledet.bj.bcebos.com/models/slim/ppyolo_r50vd_qat_pact.pdparams) | [link](https://paddledet.bj.bcebos.com/models/slim/ppyolo_r50vd_qat_pact.tar) | [Configuration File ](https://github.com/PaddlePaddle/PaddleDetection/tree/develop/configs/ppyolo/ppyolo_r50vd_dcn_1x_coco.yml) | [Configuration File ](https://github.com/PaddlePaddle/PaddleDetection/tree/develop/configs/slim/quant/ppyolo_r50vd_qat_pact.yml) | diff --git a/configs/slim/distill/README.md b/configs/slim/distill/README.md index da5795764cec02ea384f8e063f918b56b4f2b9bb..a08bf35fc350cd7c4284bb00ffdf641d80de9798 100644 --- a/configs/slim/distill/README.md +++ b/configs/slim/distill/README.md @@ -5,6 +5,19 @@ COCO数据集作为目标检测任务的训练目标难度更大,意味着teacher网络会预测出更多的背景bbox,如果直接用teacher的预测输出作为student学习的`soft label`会有严重的类别不均衡问题。解决这个问题需要引入新的方法,详细背景请参考论文:[Object detection at 200 Frames Per Second](https://arxiv.org/abs/1805.06361)。 为了确定蒸馏的对象,我们首先需要找到student和teacher网络得到的`x,y,w,h,cls,objness`等Tensor,用teacher得到的结果指导student训练。具体实现可参考[代码](https://github.com/PaddlePaddle/PaddleDetection/tree/develop/ppdet/slim/distill.py) + +## FGD模型蒸馏 + +FGD全称为[Focal and Global Knowledge Distillation for Detectors](https://arxiv.org/abs/2111.11837v1),是目标检测任务的一种蒸馏方法,FGD蒸馏分为两个部分`Focal`和`Global`。`Focal`蒸馏分离图像的前景和背景,让学生模型分别关注教师模型的前景和背景部分特征的关键像素;`Global`蒸馏部分重建不同像素之间的关系并将其从教师转移到学生,以补偿`Focal`蒸馏中丢失的全局信息。试验结果表明,FGD蒸馏算法在基于anchor和anchor free的方法上能有效提升模型精度。 +在PaddleDetection中,我们实现了FGD算法,并基于retinaNet算法进行验证,实验结果如下: +| algorithm | model | AP | download| +|:-:| :-: | :-: | :-:| +|retinaNet_r101_fpn_2x | teacher | 40.6 | [download](https://paddledet.bj.bcebos.com/models/retinanet_r101_fpn_2x_coco.pdparams) | +|retinaNet_r50_fpn_1x| student | 37.5 |[download](https://paddledet.bj.bcebos.com/models/retinanet_r50_fpn_1x_coco.pdparams) | +|retinaNet_r50_fpn_2x + FGD| student | 40.8 
|[download](https://paddledet.bj.bcebos.com/models/retinanet_r101_distill_r50_2x_coco.pdparams) | + + + ## Citations ``` @article{mehta2018object, @@ -15,4 +28,12 @@ COCO数据集作为目标检测任务的训练目标难度更大,意味着teac archivePrefix={arXiv}, primaryClass={cs.CV} } + +@inproceedings{yang2022focal, + title={Focal and global knowledge distillation for detectors}, + author={Yang, Zhendong and Li, Zhe and Jiang, Xiaohu and Gong, Yuan and Yuan, Zehuan and Zhao, Danpei and Yuan, Chun}, + booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition}, + pages={4643--4652}, + year={2022} +} ``` diff --git a/configs/slim/distill/retinanet_resnet101_coco_distill.yml b/configs/slim/distill/retinanet_resnet101_coco_distill.yml new file mode 100644 index 0000000000000000000000000000000000000000..d4793c02063d8159e5277e705a58cc0b423d94ea --- /dev/null +++ b/configs/slim/distill/retinanet_resnet101_coco_distill.yml @@ -0,0 +1,19 @@ +_BASE_: [ + '../../retinanet/retinanet_r101_fpn_2x_coco.yml', +] + +pretrain_weights: https://paddledet.bj.bcebos.com/models/retinanet_r101_fpn_2x_coco.pdparams + +slim: Distill +slim_method: FGD +distill_loss: FGDFeatureLoss +distill_loss_name: ['neck_f_4', 'neck_f_3', 'neck_f_2', 'neck_f_1', 'neck_f_0'] + +FGDFeatureLoss: + student_channels: 256 + teacher_channels: 256 + temp: 0.5 + alpha_fgd: 0.001 + beta_fgd: 0.0005 + gamma_fgd: 0.0005 + lambda_fgd: 0.000005 diff --git a/configs/slim/extensions/yolov3_mobilenetv1_prune_qat.yml b/configs/slim/extensions/yolov3_mobilenetv1_prune_qat.yml new file mode 100644 index 0000000000000000000000000000000000000000..ff17ea0b4126d934b851df60cda2db2e17fbbae2 --- /dev/null +++ b/configs/slim/extensions/yolov3_mobilenetv1_prune_qat.yml @@ -0,0 +1,19 @@ +# Weights of yolov3_mobilenet_v1_voc +pretrain_weights: https://paddledet.bj.bcebos.com/models/yolov3_mobilenet_v1_270e_voc.pdparams +slim: PrunerQAT + +PrunerQAT: + criterion: fpgm + pruned_params: ['conv2d_27.w_0', 'conv2d_28.w_0', 'conv2d_29.w_0', + 'conv2d_30.w_0', 'conv2d_31.w_0', 'conv2d_32.w_0', + 'conv2d_34.w_0', 'conv2d_35.w_0', 'conv2d_36.w_0', + 'conv2d_37.w_0', 'conv2d_38.w_0', 'conv2d_39.w_0', + 'conv2d_41.w_0', 'conv2d_42.w_0', 'conv2d_43.w_0', + 'conv2d_44.w_0', 'conv2d_45.w_0', 'conv2d_46.w_0'] + pruned_ratios: [0.1,0.2,0.2,0.2,0.2,0.1,0.2,0.3,0.3,0.3,0.2,0.1,0.3,0.4,0.4,0.4,0.4,0.3] + print_prune_params: False + quant_config: { + 'weight_quantize_type': 'channel_wise_abs_max', 'activation_quantize_type': 'moving_average_abs_max', + 'weight_bits': 8, 'activation_bits': 8, 'dtype': 'int8', 'window_size': 10000, 'moving_rate': 0.9, + 'quantizable_layer_type': ['Conv2D', 'Linear']} + print_qat_model: True diff --git a/configs/slim/post_quant/mask_rcnn_r50_fpn_1x_coco_ptq.yml b/configs/slim/post_quant/mask_rcnn_r50_fpn_1x_coco_ptq.yml new file mode 100644 index 0000000000000000000000000000000000000000..d715aedffe2dd5e15bdb222a74aa35bc273d2240 --- /dev/null +++ b/configs/slim/post_quant/mask_rcnn_r50_fpn_1x_coco_ptq.yml @@ -0,0 +1,10 @@ +weights: https://paddledet.bj.bcebos.com/models/mask_rcnn_r50_fpn_1x_coco.pdparams +slim: PTQ + +PTQ: + ptq_config: { + 'activation_quantizer': 'HistQuantizer', + 'upsample_bins': 127, + 'hist_percent': 0.999} + quant_batch_num: 10 + fuse: True diff --git a/configs/slim/post_quant/ppyoloe_crn_s_300e_coco_ptq.yml b/configs/slim/post_quant/ppyoloe_crn_s_300e_coco_ptq.yml new file mode 100644 index 0000000000000000000000000000000000000000..dfa793d528a63255fce62c6d1c94a594fee58853 --- /dev/null +++ 
b/configs/slim/post_quant/ppyoloe_crn_s_300e_coco_ptq.yml @@ -0,0 +1,10 @@ +weights: https://paddledet.bj.bcebos.com/models/ppyoloe_crn_s_300e_coco.pdparams +slim: PTQ + +PTQ: + ptq_config: { + 'activation_quantizer': 'HistQuantizer', + 'upsample_bins': 127, + 'hist_percent': 0.999} + quant_batch_num: 10 + fuse: True diff --git a/configs/slim/post_quant/tinypose_128x96_ptq.yml b/configs/slim/post_quant/tinypose_128x96_ptq.yml new file mode 100644 index 0000000000000000000000000000000000000000..a3bd64761fac679d83bbbdb4011ea3ab327ad3f9 --- /dev/null +++ b/configs/slim/post_quant/tinypose_128x96_ptq.yml @@ -0,0 +1,10 @@ +weights: https://paddledet.bj.bcebos.com/models/keypoint/tinypose_128x96.pdparams +slim: PTQ + +PTQ: + ptq_config: { + 'activation_quantizer': 'HistQuantizer', + 'upsample_bins': 127, + 'hist_percent': 0.999} + quant_batch_num: 10 + fuse: True diff --git a/configs/slim/quant/picodet_s_416_lcnet_quant.yml b/configs/slim/quant/picodet_s_416_lcnet_quant.yml new file mode 100644 index 0000000000000000000000000000000000000000..000807ab6b138ca8f28440f97b44809e75a9ac3d --- /dev/null +++ b/configs/slim/quant/picodet_s_416_lcnet_quant.yml @@ -0,0 +1,22 @@ +pretrain_weights: https://paddledet.bj.bcebos.com/models/picodet_s_416_coco_lcnet.pdparams +slim: QAT + +QAT: + quant_config: { + 'activation_preprocess_type': 'PACT', + 'weight_quantize_type': 'channel_wise_abs_max', 'activation_quantize_type': 'moving_average_abs_max', + 'weight_bits': 8, 'activation_bits': 8, 'dtype': 'int8', 'window_size': 10000, 'moving_rate': 0.9, + 'quantizable_layer_type': ['Conv2D', 'Linear']} + print_model: False + +TrainReader: + batch_size: 48 + +LearningRate: + base_lr: 0.024 + schedulers: + - !CosineDecay + max_epochs: 300 + - !LinearWarmup + start_factor: 0.1 + steps: 300 diff --git a/configs/slim/quant/ppyoloe_l_qat.yml b/configs/slim/quant/ppyoloe_l_qat.yml new file mode 100644 index 0000000000000000000000000000000000000000..4c0e94003a6ed0b7dde95ecd1f2361b87c61b4c8 --- /dev/null +++ b/configs/slim/quant/ppyoloe_l_qat.yml @@ -0,0 +1,26 @@ +pretrain_weights: https://paddledet.bj.bcebos.com/models/ppyoloe_crn_l_300e_coco.pdparams +slim: QAT + +QAT: + quant_config: { + 'weight_quantize_type': 'channel_wise_abs_max', 'activation_quantize_type': 'moving_average_abs_max', + 'weight_bits': 8, 'activation_bits': 8, 'dtype': 'int8', 'window_size': 10000, 'moving_rate': 0.9, + 'quantizable_layer_type': ['Conv2D', 'Linear']} + print_model: True + +epoch: 30 +snapshot_epoch: 5 +LearningRate: + base_lr: 0.001 + schedulers: + - !PiecewiseDecay + gamma: 0.1 + milestones: + - 10 + - 20 + - !LinearWarmup + start_factor: 0. 
+ steps: 100 + +TrainReader: + batch_size: 8 diff --git a/configs/slim/quant/tinypose_qat.yml b/configs/slim/quant/tinypose_qat.yml new file mode 100644 index 0000000000000000000000000000000000000000..3b85dfe55d226d2514bf11c530abb8df1abf8664 --- /dev/null +++ b/configs/slim/quant/tinypose_qat.yml @@ -0,0 +1,26 @@ +pretrain_weights: https://paddledet.bj.bcebos.com/models/keypoint/tinypose_128x96.pdparams +slim: QAT + +QAT: + quant_config: { + 'activation_preprocess_type': 'PACT', + 'weight_quantize_type': 'channel_wise_abs_max', 'activation_quantize_type': 'moving_average_abs_max', + 'weight_bits': 8, 'activation_bits': 8, 'dtype': 'int8', 'window_size': 10000, 'moving_rate': 0.9, + 'quantizable_layer_type': ['Conv2D', 'Linear']} + print_model: False + +epoch: 50 +LearningRate: + base_lr: 0.001 + schedulers: + - !PiecewiseDecay + gamma: 0.1 + milestones: + - 30 + - 40 + - !LinearWarmup + start_factor: 0. + steps: 100 + +TrainReader: + batch_size: 256
diff --git a/configs/smalldet/README.md b/configs/smalldet/README.md new file mode 100644 index 0000000000000000000000000000000000000000..042e2b25a4f305f04bca7b58afb0cbed01cd96f3 --- /dev/null +++ b/configs/smalldet/README.md @@ -0,0 +1,16 @@ +# PP-YOLOE Smalldet 检测模型 + +(示例图:VisDrone、DOTA、Xview) + + +| 模型 | 数据集 | SLICE_SIZE | OVERLAP_RATIO | 类别数 | mAP<sup>val<br>0.5:0.95</sup> | AP<sup>val<br>0.5</sup> | 下载链接 | 配置文件 | +|:---------|:---------------:|:---------------:|:---------------:|:------:|:-----------------------:|:-------------------:|:---------:| :-----: | +|PP-YOLOE-l| Xview | 400 | 0.25 | 60 | 14.5 | 26.8 | [下载链接](https://bj.bcebos.com/v1/paddledet/models/ppyoloe_crn_l_xview_400_025.pdparams) | [配置文件](./ppyoloe_crn_l_80e_sliced_xview_400_025.yml) | +|PP-YOLOE-l| DOTA | 500 | 0.25 | 15 | 46.8 | 72.6 | [下载链接](https://bj.bcebos.com/v1/paddledet/models/ppyoloe_crn_l_dota_500_025.pdparams) | [配置文件](./ppyoloe_crn_l_80e_sliced_DOTA_500_025.yml) | +|PP-YOLOE-l| VisDrone | 640 | 0.25 | 10 | 29.7 | 48.5 | [下载链接](https://bj.bcebos.com/v1/paddledet/models/ppyoloe_crn_l_80e_sliced_visdrone_640_025.pdparams) | [配置文件](./ppyoloe_crn_l_80e_sliced_visdrone_640_025.yml) | + + +**注意:** +- **SLICE_SIZE**表示使用SAHI工具切图后子图的大小(SLICE_SIZE*SLICE_SIZE);**OVERLAP_RATIO**表示切图重叠率。 +- PP-YOLOE模型训练过程中使用8 GPUs进行混合精度训练,如果**GPU卡数**或者**batch size**发生了改变,你需要按照公式 **lr_new = lr_default * (batch_size_new * GPU_number_new) / (batch_size_default * GPU_number_default)** 调整学习率。例如默认配置为8卡、每卡batch size为8、base_lr=0.01,若改为4卡、每卡batch size仍为8,则学习率应调整为 0.01 * (8 * 4) / (8 * 8) = 0.005。 +- 具体使用教程请参考[ppyoloe](../ppyoloe#getting-start)。
diff --git a/configs/smalldet/_base_/DOTA_sliced_500_025_detection.yml b/configs/smalldet/_base_/DOTA_sliced_500_025_detection.yml new file mode 100644 index 0000000000000000000000000000000000000000..100d8cbf17ef78e6cc182e14672147c235896be1 --- /dev/null +++ b/configs/smalldet/_base_/DOTA_sliced_500_025_detection.yml @@ -0,0 +1,20 @@ +metric: COCO +num_classes: 15 + +TrainDataset: + !COCODataSet + image_dir: DOTA_slice_train/train_images_500_025 + anno_path: DOTA_slice_train/train_500_025.json + dataset_dir: dataset/DOTA + data_fields: ['image', 'gt_bbox', 'gt_class', 'is_crowd'] + +EvalDataset: + !COCODataSet + image_dir: DOTA_slice_val/val_images_500_025 + anno_path: DOTA_slice_val/val_500_025.json + dataset_dir: dataset/DOTA + +TestDataset: + !ImageFolder + anno_path: dataset/DOTA/DOTA_slice_val/val_500_025.json + dataset_dir: dataset/DOTA/DOTA_slice_val/val_images_500_025
diff --git a/configs/smalldet/_base_/visdrone_sliced_640_025_detection.yml b/configs/smalldet/_base_/visdrone_sliced_640_025_detection.yml new file mode 100644 index 0000000000000000000000000000000000000000..2d88b2c00ff5e691cbcda56036a704cbf7cf0a0c --- /dev/null +++ b/configs/smalldet/_base_/visdrone_sliced_640_025_detection.yml @@ -0,0 +1,20 @@ +metric: COCO +num_classes: 10 + +TrainDataset: + !COCODataSet + image_dir: train_images_640_025 + anno_path: train_640_025.json + dataset_dir: dataset/visdrone_sliced + data_fields: ['image', 'gt_bbox', 'gt_class', 'is_crowd'] + +EvalDataset: + !COCODataSet + image_dir: val_images_640_025 + anno_path: val_640_025.json + dataset_dir: dataset/visdrone_sliced + +TestDataset: + !ImageFolder + anno_path: dataset/visdrone_sliced/val_640_025.json + dataset_dir: dataset/visdrone_sliced/val_images_640_025
diff --git a/configs/smalldet/_base_/xview_sliced_400_025_detection.yml b/configs/smalldet/_base_/xview_sliced_400_025_detection.yml new file mode 100644 index 0000000000000000000000000000000000000000..b932359db56957b3daca58bbbebc66ff156582b5 --- /dev/null +++ b/configs/smalldet/_base_/xview_sliced_400_025_detection.yml @@ -0,0 +1,20 @@ +metric: COCO +num_classes: 60 + +TrainDataset: + !COCODataSet + image_dir: train_images_400_025 + anno_path: train_400_025.json + dataset_dir: dataset/xview/xview_slic + data_fields: ['image', 'gt_bbox', 'gt_class', 'is_crowd'] + +EvalDataset: + !COCODataSet + image_dir: val_images_400_025 + anno_path: val_400_025.json + dataset_dir: dataset/xview/xview_slic +
+TestDataset: + !ImageFolder + anno_path: dataset/xview/xview_slic/val_400_025.json + dataset_dir: dataset/xview/xview_slic/val_images_400_025 diff --git a/configs/smalldet/ppyoloe_crn_l_80e_sliced_DOTA_500_025.yml b/configs/smalldet/ppyoloe_crn_l_80e_sliced_DOTA_500_025.yml new file mode 100644 index 0000000000000000000000000000000000000000..6a1429d56b1d8118d0763d3c68931f77267d4704 --- /dev/null +++ b/configs/smalldet/ppyoloe_crn_l_80e_sliced_DOTA_500_025.yml @@ -0,0 +1,36 @@ +_BASE_: [ + './_base_/DOTA_sliced_500_025_detection.yml', + '../runtime.yml', + '../ppyoloe/_base_/optimizer_300e.yml', + '../ppyoloe/_base_/ppyoloe_crn.yml', + '../ppyoloe/_base_/ppyoloe_reader.yml', +] +log_iter: 100 +snapshot_epoch: 10 +weights: output/ppyoloe_crn_l_80e_sliced_DOTA_500_025/model_final + +pretrain_weights: https://paddledet.bj.bcebos.com/models/ppyoloe_crn_l_300e_coco.pdparams +depth_mult: 1.0 +width_mult: 1.0 + +TrainReader: + batch_size: 8 + +epoch: 80 +LearningRate: + base_lr: 0.01 + schedulers: + - !CosineDecay + max_epochs: 96 + - !LinearWarmup + start_factor: 0. + epochs: 1 + +PPYOLOEHead: + static_assigner_epoch: -1 + nms: + name: MultiClassNMS + nms_top_k: 10000 + keep_top_k: 500 + score_threshold: 0.01 + nms_threshold: 0.6 diff --git a/configs/smalldet/ppyoloe_crn_l_80e_sliced_visdrone_640_025.yml b/configs/smalldet/ppyoloe_crn_l_80e_sliced_visdrone_640_025.yml new file mode 100644 index 0000000000000000000000000000000000000000..8d133bb722477c5b56ea2e9e48a1a3f81d155dae --- /dev/null +++ b/configs/smalldet/ppyoloe_crn_l_80e_sliced_visdrone_640_025.yml @@ -0,0 +1,36 @@ +_BASE_: [ + './_base_/visdrone_sliced_640_025_detection.yml', + '../runtime.yml', + '../ppyoloe/_base_/optimizer_300e.yml', + '../ppyoloe/_base_/ppyoloe_crn.yml', + '../ppyoloe/_base_/ppyoloe_reader.yml', +] +log_iter: 100 +snapshot_epoch: 10 +weights: output/ppyoloe_crn_l_80e_sliced_visdrone_640_025/model_final + +pretrain_weights: https://paddledet.bj.bcebos.com/models/ppyoloe_crn_l_300e_coco.pdparams +depth_mult: 1.0 +width_mult: 1.0 + +TrainReader: + batch_size: 8 + +epoch: 80 +LearningRate: + base_lr: 0.01 + schedulers: + - !CosineDecay + max_epochs: 96 + - !LinearWarmup + start_factor: 0. + epochs: 1 + +PPYOLOEHead: + static_assigner_epoch: -1 + nms: + name: MultiClassNMS + nms_top_k: 10000 + keep_top_k: 500 + score_threshold: 0.01 + nms_threshold: 0.6 diff --git a/configs/smalldet/ppyoloe_crn_l_80e_sliced_xview_400_025.yml b/configs/smalldet/ppyoloe_crn_l_80e_sliced_xview_400_025.yml new file mode 100644 index 0000000000000000000000000000000000000000..7c9d80ea5ba869e13e5eb19450c041eb5350b65d --- /dev/null +++ b/configs/smalldet/ppyoloe_crn_l_80e_sliced_xview_400_025.yml @@ -0,0 +1,36 @@ +_BASE_: [ + './_base_/xview_sliced_400_025_detection.yml', + '../runtime.yml', + '../ppyoloe/_base_/optimizer_300e.yml', + '../ppyoloe/_base_/ppyoloe_crn.yml', + '../ppyoloe/_base_/ppyoloe_reader.yml', +] +log_iter: 100 +snapshot_epoch: 10 +weights: output/ppyoloe_crn_l_80e_sliced_xview_400_025/model_final + +pretrain_weights: https://paddledet.bj.bcebos.com/models/ppyoloe_crn_l_300e_coco.pdparams +depth_mult: 1.0 +width_mult: 1.0 + +TrainReader: + batch_size: 8 + +epoch: 80 +LearningRate: + base_lr: 0.01 + schedulers: + - !CosineDecay + max_epochs: 96 + - !LinearWarmup + start_factor: 0. 
+ epochs: 1 + +PPYOLOEHead: + static_assigner_epoch: -1 + nms: + name: MultiClassNMS + nms_top_k: 10000 + keep_top_k: 500 + score_threshold: 0.01 + nms_threshold: 0.6
diff --git a/configs/smrt/DataAnalysis.md b/configs/smrt/DataAnalysis.md new file mode 100644 index 0000000000000000000000000000000000000000..66da22f43d9ba9494c35ee0fa0285aa45099399f --- /dev/null +++ b/configs/smrt/DataAnalysis.md @@ -0,0 +1,68 @@ +# 数据分析功能说明 + +为了更好地帮助用户进行数据分析,从而推荐更合适的模型,我们推出了**数据分析**功能,用户不需要上传原图,只需要上传标注好的文件即可进一步分析数据特点。 + +当前支持格式有: +* LabelMe标注数据格式 +* 精灵标注数据格式 +* LabelImg标注数据格式 +* VOC数据格式 +* COCO数据格式 +* Seg数据格式 + +## LabelMe标注数据格式 + +1. 需要选定包含标注文件的zip格式压缩包。zip格式压缩包中包含一个annotations文件夹,文件夹中的内容为与标注图像相同数量的json文件,每一个json文件除后缀外与对应的图像同名。 +2. 支持检测与分割任务。若提供的标注信息与所选择的任务类型不匹配,则将提示错误。 +3. 对于检测任务,需提供rectangle类型标注信息;对于分割任务,需提供polygon类型标注信息。 +
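+上述压缩包的组织方式可以用下面的打包示例说明(示意性质,文件名均为假设;精灵、LabelImg、VOC、Seg格式的压缩包结构与此类似,仅annotations文件夹中标注文件的后缀不同):
+
+```bash
+# 假设标注文件已与图像同名(仅后缀不同)并放入annotations文件夹:
+#   annotations/IMG_0001.json
+#   annotations/IMG_0002.json
+# 打包整个annotations文件夹,得到可直接上传的zip压缩包
+zip -r labelme_annotations.zip annotations/
+```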
    + +## 精灵标注数据格式 + +1. 需要选定包含标注文件的zip格式压缩包。zip格式压缩包中包含一个annotations文件夹,文件夹中的内容为与标注图像相同数量的json文件,每一个json文件除后缀外与对应的图像同名。 +2. 支持检测与分割任务。若提供的标注信息与所选择的任务类型不匹配,则将提示错误。 +3. 对于检测任务,需提供bndbox或polygon类型标注信息;对于分割任务,需提供polygon类型标注信息。 +
    + +## LabelImg标注数据格式 + +1. 需要选定包含标注文件的zip格式压缩包。zip格式压缩包中包含一个annotations文件夹,文件夹中的内容为与标注图像相同数量的xml文件,每一个xml文件除后缀外与对应的图像同名。 +2. 仅支持检测任务。 +3. 标注文件中必须提供bndbox字段信息;segmentation字段是可选的。 + +
    + +## VOC数据格式 + +1. 需要选定包含标注文件的zip格式压缩包。zip格式压缩包中包含一个annotations文件夹,文件夹中的内容为与标注图像相同数量的xml文件,每一个xml文件除后缀外与对应的图像同名。 +2. 仅支持检测任务。 +3. 标注文件中必须提供bndbox字段信息;segmentation字段是可选的。 +
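+对于LabelImg与VOC这类xml标注,上传前可以用一条命令自查bndbox字段是否齐全(示例命令,目录名为假设):
+
+```bash
+# 列出annotations目录下缺少bndbox字段的xml文件;无输出则说明全部合规
+grep -L "<bndbox>" annotations/*.xml
+```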
    + +## COCO数据格式 + +1. 需要选定包含标注文件的zip格式压缩包。zip格式压缩包中包含一个annotations文件夹,文件夹中仅存在一个名为annotation.json的文件。 +2. 支持检测与分割任务。若提供的标注信息与所选择的任务类型不匹配,则将提示错误。 +3. 对于检测任务,标注文件中必须包含bbox字段,segmentation字段是可选的;对于分割任务,标注文件中必须包含segmentation字段。 +
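+上传前可以用下面的脚本粗略自查annotation.json是否满足所选任务的字段要求(示例脚本,路径为假设):
+
+```bash
+python - <<'EOF'
+import json
+anns = json.load(open('annotations/annotation.json'))['annotations']
+print('标注总数:', len(anns))
+print('含bbox字段:', sum('bbox' in a for a in anns))  # 检测任务必须全部包含
+print('含segmentation字段:', sum(bool(a.get('segmentation')) for a in anns))  # 分割任务必须全部包含
+EOF
+```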
    + + +## Seg数据格式 + +1. 需要选定包含标注文件的zip格式压缩包。zip格式压缩包中包含一个annotations文件夹,文件夹中的内容为与标注图像相同数量的png文件,每一个png文件除后缀外与对应的图像同名。 +2. 仅支持分割任务。 +3. 标注文件需要与原始图像在像素上严格保持一一对应,格式只可为png(后缀为.png或.PNG)。标注文件中的每个像素值为[0,255]区间内从0开始依序递增的整数ID,除255外,标注ID值的增加不能跳跃。在标注文件中,使用255表示需要忽略的像素,使用0表示背景类标注。 + +
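+上传前可以用下面的脚本检查单个png标注的像素取值是否符合上述约定(示例脚本,依赖Pillow和NumPy,文件名为假设):
+
+```bash
+python - <<'EOF'
+import numpy as np
+from PIL import Image
+ids = np.unique(np.array(Image.open('annotations/example.png')))
+print('标注ID:', ids)  # 期望:0为背景,255为忽略,其余ID从0开始连续递增、不跳跃
+EOF
+```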
    diff --git a/configs/smrt/README.md b/configs/smrt/README.md new file mode 100644 index 0000000000000000000000000000000000000000..d5a59ae1799bc53096cdcf53ece2ab53758f1a48 --- /dev/null +++ b/configs/smrt/README.md @@ -0,0 +1,216 @@ +# 飞桨产业模型选型工具PaddleSMRT + +## 一、项目介绍 + +PaddleSMRT (Paddle Sense Model Recommend Tool) 是飞桨结合产业落地经验推出的产业模型选型工具,在项目落地过程中,用户根据自身的实际情况,输入自己的需求,即可以得到对应在算法模型、部署硬件以及教程文档的信息。同时为了更加精准的推荐,增加了数据分析功能,用户上传自己的标注文件,系统可以自动分析数据特点,例如数据分布不均衡、小目标、密集型等,从而提供更加精准的模型以及优化策略,更好的符合场景的需求。 + +飞桨官网使用[链接](https://www.paddlepaddle.org.cn/smrt) + +本文档主要介绍PaddleSMRT在检测方向上是如何进行模型选型推荐,以及推荐模型的使用方法。分割方向模型介绍请参考[文档](https://github.com/PaddlePaddle/PaddleSeg/tree/release/2.5/configs/smrt) + +## 二、数据介绍 + +PaddleSMRT结合产业真实场景,通过比较检测算法效果,向用户推荐最适合的模型。目前PaddleSMRT覆盖工业质检、城市安防两大场景,下面介绍PaddleSMRT进行算法对比所使用的数据集 + +### 1. 新能源电池质检数据集 + +数据集为新能源电池电池组件质检数据集,包含15021张图片,包含22045个标注框,覆盖45种缺陷类型,例如掉胶,裂纹,划痕等。 + +新能源电池数据展示图: + +
+ +数据集特点为: + +1. 类别分布均衡 +2. 属于小目标数据 +3. 非密集型数据 + +### 2. 铝件质检数据集 + +数据集为铝件生产过程中的质检数据集,包含11293张图片、43157个标注框,覆盖5种缺陷类型,例如划伤、压伤、起皮等。 + +铝件质检数据展示图: +
    + + +数据集特点为: + +1. 类别分布不均衡 +2. 属于小目标数据 +3. 非密集型数据 + + +### 3. 人车数据集 + +数据集包含2600张人工标注的两点anchor box标签。标签包括以下人和车的类别共22种: +其中行人包括普通行人、3D 假人、坐着的人、骑车的人;车辆包括两厢车、三厢车、小型客车、小货车、皮卡车、轻卡、厢式货车、牵引车、水泥车、工程车辆、校车、中小型客车、大型单层客车、小型电动车、摩托车、自行车、三轮车以及其它特殊车辆。 + +人车数据展示图: + +
    + + +数据集特点为: + +1. 类别分布不均衡 +2. 属于小目标数据 +3. 非密集型数据 + +**说明:** + +数据集特点判断依据如下: + +- 数据分布不均衡:采样1000张图片,不同类别样本个数标准差大于400 +- 小目标数据集:相对大小小于0.1或绝对大小小于32像素的样本个数比例大于30% +- 密集型数据集: + +``` + 密集目标定义:周围目标距离小于自身大小两倍的个数大于2; + + 密集图片定义:密集目标个数占图片目标总数50%以上; + + 密集数据集定义:密集图片个数占总个数30%以上 + +``` + +为了更好的帮助用户选择模型,我们也提供了丰富的数据分析功能,用户只需要上传标注文件(不需要原图)即可了解数据特点分布和模型优化建议 + +
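+按照上述判断依据,也可以在本地对COCO格式的标注文件做粗略自测(示例脚本,阈值与上文一致;"相对大小"按框边长与图像尺寸之比计算,属于本示例的假设,路径同为假设):
+
+```bash
+python - <<'EOF'
+import collections, json
+import numpy as np
+data = json.load(open('annotations/train.json'))
+imgs = {im['id']: im for im in data['images']}
+# 类别分布:不同类别样本个数的标准差大于400,视为数据分布不均衡
+cnt = collections.Counter(a['category_id'] for a in data['annotations'])
+print('类别样本数标准差:', np.std(list(cnt.values())))
+# 小目标:相对大小小于0.1或绝对大小小于32像素的样本占比大于30%,视为小目标数据集
+small = 0
+for a in data['annotations']:
+    w, h = a['bbox'][2], a['bbox'][3]
+    im = imgs[a['image_id']]
+    if max(w / im['width'], h / im['height']) < 0.1 or max(w, h) < 32:
+        small += 1
+print('小目标占比:', small / max(1, len(data['annotations'])))
+EOF
+```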
+ +## 三、推荐模型使用全流程 + +通过模型选型工具会得到对应场景和数据特点的检测模型配置,例如[PP-YOLOE](./ppyoloe/ppyoloe_crn_m_300e_battery_1024.yml) + +该配置文件的使用方法如下 + +### 1. 环境配置 + +首先需要安装PaddlePaddle + +```bash +# CUDA10.2 +pip install paddlepaddle-gpu==2.2.2 -i https://mirror.baidu.com/pypi/simple + +# CPU +pip install paddlepaddle==2.2.2 -i https://mirror.baidu.com/pypi/simple +``` + +然后安装PaddleDetection和相关依赖 + +```bash +# 克隆PaddleDetection仓库 +cd +git clone https://github.com/PaddlePaddle/PaddleDetection.git + +# 安装其他依赖 +cd PaddleDetection +pip install -r requirements.txt +``` + +详细安装文档请参考[文档](../../docs/tutorials/INSTALL_cn.md) + +### 2. 数据准备 + +用户需要准备训练数据集,建议标注文件使用COCO数据格式。如果使用labelme或者VOC数据格式,先使用[格式转换脚本](../../tools/x2coco.py)将标注格式转化为COCO,详细数据准备文档请参考[文档](../../docs/tutorials/PrepareDataSet.md) + +本文档以新能源电池工业质检子数据集为例展开,数据下载[链接](https://bj.bcebos.com/v1/paddle-smrt/data/battery_mini.zip) + +数据储存格式如下: + +``` +battery_mini +├── annotations +│   ├── test.json +│   └── train.json +└── images + ├── Board_daowen_101.png + ├── Board_daowen_109.png + ├── Board_daowen_117.png + ... +``` + + + +### 3. 模型训练/评估/预测 + +使用经过模型选型工具推荐的模型进行训练,目前所推荐的模型均使用**单卡训练**,可以在训练的过程中进行评估,模型默认保存在`./output`下 + +```bash +python tools/train.py -c configs/smrt/ppyoloe/ppyoloe_crn_m_300e_battery_1024.yml --eval +``` + +如果训练过程出现中断,可以使用-r参数恢复训练 + +```bash +python tools/train.py -c configs/smrt/ppyoloe/ppyoloe_crn_m_300e_battery_1024.yml --eval -r output/ppyoloe_crn_m_300e_battery_1024/9.pdparams +``` + +如果期望单独评估模型训练精度,可以使用`tools/eval.py` + +```bash +python tools/eval.py -c configs/smrt/ppyoloe/ppyoloe_crn_m_300e_battery_1024.yml -o weights=output/ppyoloe_crn_m_300e_battery_1024/model_final.pdparams +``` + +完成训练后,可以使用`tools/infer.py`可视化训练效果 + +```bash +python tools/infer.py -c configs/smrt/ppyoloe/ppyoloe_crn_m_300e_battery_1024.yml -o weights=output/ppyoloe_crn_m_300e_battery_1024/model_final.pdparams --infer_img=images/Board_diaojiao_1591.png +``` + +更多模型训练参数请参考[文档](../../docs/tutorials/GETTING_STARTED_cn.md) + +### 4. 模型导出部署 + +完成模型训练后,需要将模型部署到1080Ti、2080Ti或其他服务器设备上,使用Paddle Inference完成C++部署 + +首先需要将模型导出为部署时使用的模型和配置文件 + +```bash +python tools/export_model.py -c configs/smrt/ppyoloe/ppyoloe_crn_m_300e_battery_1024.yml -o weights=output/ppyoloe_crn_m_300e_battery_1024/model_final.pdparams +``` + +接下来可以使用PaddleDetection中的部署代码实现C++部署,详细步骤请参考[文档](../../deploy/cpp/README.md) + +如果期望使用可视化界面的方式进行部署,可以参考下面部分的内容。 + +## 四、部署demo + +为了更方便大家部署,我们也提供了完备的可视化部署Demo,欢迎尝试使用 + +* [Windows Demo下载地址](https://github.com/PaddlePaddle/PaddleX/tree/develop/deploy/cpp/docs/csharp_deploy) +
    + +* [Linux Demo下载地址](https://github.com/cjh3020889729/The-PaddleX-QT-Visualize-GUI) + +
    + +## 五、场景范例 + +为了更方便大家更好的进行产业落地,PaddleSMRT也提供了详细的应用范例,欢迎大家使用。 + +* 工业视觉 + * [工业缺陷检测](https://aistudio.baidu.com/aistudio/projectdetail/2598319) + * [表计读数](https://aistudio.baidu.com/aistudio/projectdetail/2598327) + * [钢筋计数](https://aistudio.baidu.com/aistudio/projectdetail/2404188) +* 城市 + * [行人计数](https://aistudio.baidu.com/aistudio/projectdetail/2421822) + * [车辆计数](https://aistudio.baidu.com/aistudio/projectdetail/3391734?contributionType=1) + * [安全帽检测](https://aistudio.baidu.com/aistudio/projectdetail/3944737?contributionType=1) diff --git a/configs/smrt/images/00362.jpg b/configs/smrt/images/00362.jpg new file mode 100644 index 0000000000000000000000000000000000000000..da4ab37d5cb5501e3c1471b30a3f465dd9b0a88f Binary files /dev/null and b/configs/smrt/images/00362.jpg differ diff --git a/configs/smrt/images/Board_diaojiao_1591.png b/configs/smrt/images/Board_diaojiao_1591.png new file mode 100644 index 0000000000000000000000000000000000000000..0ec35b9450209fba4b9579fcc325e70fc5f63ddd Binary files /dev/null and b/configs/smrt/images/Board_diaojiao_1591.png differ diff --git a/configs/smrt/images/UpCoa_liewen_163.png b/configs/smrt/images/UpCoa_liewen_163.png new file mode 100644 index 0000000000000000000000000000000000000000..294c29b4ed04c81d672cbe72ddaa4ccb3e301f67 Binary files /dev/null and b/configs/smrt/images/UpCoa_liewen_163.png differ diff --git a/configs/smrt/images/lvjian1_0.jpg b/configs/smrt/images/lvjian1_0.jpg new file mode 100644 index 0000000000000000000000000000000000000000..dbf0dfaa769f9a72bd7825f8589fffa5aca3ac6e Binary files /dev/null and b/configs/smrt/images/lvjian1_0.jpg differ diff --git a/configs/smrt/images/lvjian1_10.jpg b/configs/smrt/images/lvjian1_10.jpg new file mode 100644 index 0000000000000000000000000000000000000000..25467e8174b27df7bf43b33795eb7ea1af605813 Binary files /dev/null and b/configs/smrt/images/lvjian1_10.jpg differ diff --git a/configs/smrt/images/renche_00002.jpg b/configs/smrt/images/renche_00002.jpg new file mode 100644 index 0000000000000000000000000000000000000000..9446db44df96cf18ef7871c345a8010cdfec49df Binary files /dev/null and b/configs/smrt/images/renche_00002.jpg differ diff --git a/configs/smrt/images/renche_00204.jpg b/configs/smrt/images/renche_00204.jpg new file mode 100644 index 0000000000000000000000000000000000000000..2c46e933b970411eca850195b59f1c477d5d2a5e Binary files /dev/null and b/configs/smrt/images/renche_00204.jpg differ diff --git a/configs/smrt/picodet/picodet_l_1024_coco_lcnet_battery.yml b/configs/smrt/picodet/picodet_l_1024_coco_lcnet_battery.yml new file mode 100644 index 0000000000000000000000000000000000000000..601dd1915ee65fb452340c783be8ca1cab905ce1 --- /dev/null +++ b/configs/smrt/picodet/picodet_l_1024_coco_lcnet_battery.yml @@ -0,0 +1,162 @@ +weights: output/picodet_l_1024_coco_lcnet_battery/model_final +pretrain_weights: https://paddledet.bj.bcebos.com/models/picodet_l_640_coco_lcnet.pdparams + +worker_num: 2 +eval_height: &eval_height 1024 +eval_width: &eval_width 1024 +eval_size: &eval_size [*eval_height, *eval_width] + +metric: COCO +num_classes: 45 + +TrainDataset: + !COCODataSet + image_dir: images + anno_path: annotations/train.json + dataset_dir: dataset/battery_mini + data_fields: ['image', 'gt_bbox', 'gt_class', 'is_crowd'] + +EvalDataset: + !COCODataSet + image_dir: images + anno_path: annotations/test.json + dataset_dir: dataset/battery_mini + +TestDataset: + !ImageFolder + anno_path: annotations/test.json + dataset_dir: dataset/battery_mini + +epoch: 50 +LearningRate: + base_lr: 
0.006 + schedulers: + - !CosineDecay + max_epochs: 50 + - !LinearWarmup + start_factor: 0.001 + steps: 300 + +TrainReader: + sample_transforms: + - Decode: {} + - RandomCrop: {} + - RandomFlip: {prob: 0.5} + - RandomDistort: {} + batch_transforms: + - BatchRandomResize: {target_size: [960, 992, 1024, 1056, 1088], random_size: True, random_interp: True, keep_ratio: False} + - NormalizeImage: {is_scale: true, mean: [0.485,0.456,0.406], std: [0.229, 0.224,0.225]} + - Permute: {} + - PadGT: {} + batch_size: 8 + shuffle: true + drop_last: true + + +EvalReader: + sample_transforms: + - Decode: {} + - Resize: {interp: 2, target_size: *eval_size, keep_ratio: False} + - NormalizeImage: {is_scale: true, mean: [0.485,0.456,0.406], std: [0.229, 0.224,0.225]} + - Permute: {} + batch_transforms: + - PadBatch: {pad_to_stride: 32} + batch_size: 8 + shuffle: false + + +TestReader: + inputs_def: + image_shape: [1, 3, *eval_height, *eval_width] + sample_transforms: + - Decode: {} + - Resize: {interp: 2, target_size: *eval_size, keep_ratio: False} + - NormalizeImage: {is_scale: true, mean: [0.485,0.456,0.406], std: [0.229, 0.224,0.225]} + - Permute: {} + batch_size: 1 + + +use_gpu: true +use_xpu: false +log_iter: 100 +save_dir: output +snapshot_epoch: 10 +print_flops: false +find_unused_parameters: True +use_ema: true + + +# Exporting the model +export: + post_process: True # Whether post-processing is included in the network when export model. + nms: True # Whether NMS is included in the network when export model. + benchmark: False # It is used to testing model performance, if set `True`, post-process and NMS will not be exported. + +OptimizerBuilder: + optimizer: + momentum: 0.9 + type: Momentum + regularizer: + factor: 0.00004 + type: L2 + +architecture: PicoDet + +PicoDet: + backbone: LCNet + neck: LCPAN + head: PicoHeadV2 + +LCNet: + scale: 2.0 + feature_maps: [3, 4, 5] + +LCPAN: + out_channels: 160 + use_depthwise: True + num_features: 4 + +PicoHeadV2: + conv_feat: + name: PicoFeat + feat_in: 160 + feat_out: 160 + num_convs: 4 + num_fpn_stride: 4 + norm_type: bn + share_cls_reg: True + use_se: True + fpn_stride: [8, 16, 32, 64] + feat_in_chan: 160 + prior_prob: 0.01 + reg_max: 7 + cell_offset: 0.5 + grid_cell_scale: 5.0 + static_assigner_epoch: 100 + use_align_head: True + static_assigner: + name: ATSSAssigner + topk: 9 + force_gt_matching: False + assigner: + name: TaskAlignedAssigner + topk: 13 + alpha: 1.0 + beta: 6.0 + loss_class: + name: VarifocalLoss + use_sigmoid: False + iou_weighted: True + loss_weight: 1.0 + loss_dfl: + name: DistributionFocalLoss + loss_weight: 0.5 + loss_bbox: + name: GIoULoss + loss_weight: 2.5 + nms: + name: MultiClassNMS + nms_top_k: 1000 + keep_top_k: 100 + score_threshold: 0.025 + nms_threshold: 0.6 diff --git a/configs/smrt/picodet/picodet_l_1024_coco_lcnet_lvjian1.yml b/configs/smrt/picodet/picodet_l_1024_coco_lcnet_lvjian1.yml new file mode 100644 index 0000000000000000000000000000000000000000..734f1bee70ce4c2708f846af4d10e350fa6a329f --- /dev/null +++ b/configs/smrt/picodet/picodet_l_1024_coco_lcnet_lvjian1.yml @@ -0,0 +1,162 @@ +weights: output/picodet_l_1024_coco_lcnet_lvjian1/model_final +pretrain_weights: https://paddledet.bj.bcebos.com/models/picodet_l_640_coco_lcnet.pdparams + +worker_num: 2 +eval_height: &eval_height 1024 +eval_width: &eval_width 1024 +eval_size: &eval_size [*eval_height, *eval_width] + +metric: COCO +num_classes: 5 + +TrainDataset: + !COCODataSet + image_dir: images + anno_path: train.json + dataset_dir: dataset/slice_lvjian1_data/train + 
data_fields: ['image', 'gt_bbox', 'gt_class', 'is_crowd'] + +EvalDataset: + !COCODataSet + image_dir: images + anno_path: val.json + dataset_dir: dataset/slice_lvjian1_data/eval + +TestDataset: + !ImageFolder + anno_path: val.json + dataset_dir: dataset/slice_lvjian1_data/eval + +epoch: 50 +LearningRate: + base_lr: 0.006 + schedulers: + - !CosineDecay + max_epochs: 50 + - !LinearWarmup + start_factor: 0.001 + steps: 300 + +TrainReader: + sample_transforms: + - Decode: {} + - RandomCrop: {} + - RandomFlip: {prob: 0.5} + - RandomDistort: {} + batch_transforms: + - BatchRandomResize: {target_size: [960, 992, 1024, 1056, 1088], random_size: True, random_interp: True, keep_ratio: False} + - NormalizeImage: {is_scale: true, mean: [0.485,0.456,0.406], std: [0.229, 0.224,0.225]} + - Permute: {} + - PadGT: {} + batch_size: 8 + shuffle: true + drop_last: true + + +EvalReader: + sample_transforms: + - Decode: {} + - Resize: {interp: 2, target_size: *eval_size, keep_ratio: False} + - NormalizeImage: {is_scale: true, mean: [0.485,0.456,0.406], std: [0.229, 0.224,0.225]} + - Permute: {} + batch_transforms: + - PadBatch: {pad_to_stride: 32} + batch_size: 8 + shuffle: false + + +TestReader: + inputs_def: + image_shape: [1, 3, *eval_height, *eval_width] + sample_transforms: + - Decode: {} + - Resize: {interp: 2, target_size: *eval_size, keep_ratio: False} + - NormalizeImage: {is_scale: true, mean: [0.485,0.456,0.406], std: [0.229, 0.224,0.225]} + - Permute: {} + batch_size: 1 + + +use_gpu: true +use_xpu: false +log_iter: 100 +save_dir: output +snapshot_epoch: 10 +print_flops: false +find_unused_parameters: True +use_ema: true + + +# Exporting the model +export: + post_process: True # Whether post-processing is included in the network when export model. + nms: True # Whether NMS is included in the network when export model. + benchmark: False # It is used to testing model performance, if set `True`, post-process and NMS will not be exported. 
+ +OptimizerBuilder: + optimizer: + momentum: 0.9 + type: Momentum + regularizer: + factor: 0.00004 + type: L2 + +architecture: PicoDet + +PicoDet: + backbone: LCNet + neck: LCPAN + head: PicoHeadV2 + +LCNet: + scale: 2.0 + feature_maps: [3, 4, 5] + +LCPAN: + out_channels: 160 + use_depthwise: True + num_features: 4 + +PicoHeadV2: + conv_feat: + name: PicoFeat + feat_in: 160 + feat_out: 160 + num_convs: 4 + num_fpn_stride: 4 + norm_type: bn + share_cls_reg: True + use_se: True + fpn_stride: [8, 16, 32, 64] + feat_in_chan: 160 + prior_prob: 0.01 + reg_max: 7 + cell_offset: 0.5 + grid_cell_scale: 5.0 + static_assigner_epoch: 100 + use_align_head: True + static_assigner: + name: ATSSAssigner + topk: 9 + force_gt_matching: False + assigner: + name: TaskAlignedAssigner + topk: 13 + alpha: 1.0 + beta: 6.0 + loss_class: + name: VarifocalLoss + use_sigmoid: False + iou_weighted: True + loss_weight: 1.0 + loss_dfl: + name: DistributionFocalLoss + loss_weight: 0.5 + loss_bbox: + name: GIoULoss + loss_weight: 2.5 + nms: + name: MultiClassNMS + nms_top_k: 1000 + keep_top_k: 100 + score_threshold: 0.025 + nms_threshold: 0.6 diff --git a/configs/smrt/picodet/picodet_l_1024_coco_lcnet_renche.yml b/configs/smrt/picodet/picodet_l_1024_coco_lcnet_renche.yml new file mode 100644 index 0000000000000000000000000000000000000000..cdebd4ba4ae55e40c940b230bd61528a39fe0fcf --- /dev/null +++ b/configs/smrt/picodet/picodet_l_1024_coco_lcnet_renche.yml @@ -0,0 +1,162 @@ +weights: output/picodet_l_1024_coco_lcnet_renche/model_final +pretrain_weights: https://paddledet.bj.bcebos.com/models/picodet_l_640_coco_lcnet.pdparams + +worker_num: 2 +eval_height: &eval_height 1024 +eval_width: &eval_width 1024 +eval_size: &eval_size [*eval_height, *eval_width] + +metric: COCO +num_classes: 22 + +TrainDataset: + !COCODataSet + image_dir: train_images + anno_path: train.json + dataset_dir: dataset/renche + data_fields: ['image', 'gt_bbox', 'gt_class', 'is_crowd'] + +EvalDataset: + !COCODataSet + image_dir: train_images + anno_path: test.json + dataset_dir: dataset/renche + +TestDataset: + !ImageFolder + anno_path: test.json + dataset_dir: dataset/renche + +epoch: 50 +LearningRate: + base_lr: 0.006 + schedulers: + - !CosineDecay + max_epochs: 50 + - !LinearWarmup + start_factor: 0.001 + steps: 300 + +TrainReader: + sample_transforms: + - Decode: {} + - RandomCrop: {} + - RandomFlip: {prob: 0.5} + - RandomDistort: {} + batch_transforms: + - BatchRandomResize: {target_size: [960, 992, 1024, 1056, 1088], random_size: True, random_interp: True, keep_ratio: False} + - NormalizeImage: {is_scale: true, mean: [0.485,0.456,0.406], std: [0.229, 0.224,0.225]} + - Permute: {} + - PadGT: {} + batch_size: 8 + shuffle: true + drop_last: true + + +EvalReader: + sample_transforms: + - Decode: {} + - Resize: {interp: 2, target_size: *eval_size, keep_ratio: False} + - NormalizeImage: {is_scale: true, mean: [0.485,0.456,0.406], std: [0.229, 0.224,0.225]} + - Permute: {} + batch_transforms: + - PadBatch: {pad_to_stride: 32} + batch_size: 8 + shuffle: false + + +TestReader: + inputs_def: + image_shape: [1, 3, *eval_height, *eval_width] + sample_transforms: + - Decode: {} + - Resize: {interp: 2, target_size: *eval_size, keep_ratio: False} + - NormalizeImage: {is_scale: true, mean: [0.485,0.456,0.406], std: [0.229, 0.224,0.225]} + - Permute: {} + batch_size: 1 + + +use_gpu: true +use_xpu: false +log_iter: 100 +save_dir: output +snapshot_epoch: 10 +print_flops: false +find_unused_parameters: True +use_ema: true + + +# Exporting the model +export: + 
post_process: True # Whether post-processing is included in the network when export model. + nms: True # Whether NMS is included in the network when export model. + benchmark: False # It is used to testing model performance, if set `True`, post-process and NMS will not be exported. + +OptimizerBuilder: + optimizer: + momentum: 0.9 + type: Momentum + regularizer: + factor: 0.00004 + type: L2 + +architecture: PicoDet + +PicoDet: + backbone: LCNet + neck: LCPAN + head: PicoHeadV2 + +LCNet: + scale: 2.0 + feature_maps: [3, 4, 5] + +LCPAN: + out_channels: 160 + use_depthwise: True + num_features: 4 + +PicoHeadV2: + conv_feat: + name: PicoFeat + feat_in: 160 + feat_out: 160 + num_convs: 4 + num_fpn_stride: 4 + norm_type: bn + share_cls_reg: True + use_se: True + fpn_stride: [8, 16, 32, 64] + feat_in_chan: 160 + prior_prob: 0.01 + reg_max: 7 + cell_offset: 0.5 + grid_cell_scale: 5.0 + static_assigner_epoch: 100 + use_align_head: True + static_assigner: + name: ATSSAssigner + topk: 9 + force_gt_matching: False + assigner: + name: TaskAlignedAssigner + topk: 13 + alpha: 1.0 + beta: 6.0 + loss_class: + name: VarifocalLoss + use_sigmoid: False + iou_weighted: True + loss_weight: 1.0 + loss_dfl: + name: DistributionFocalLoss + loss_weight: 0.5 + loss_bbox: + name: GIoULoss + loss_weight: 2.5 + nms: + name: MultiClassNMS + nms_top_k: 1000 + keep_top_k: 100 + score_threshold: 0.025 + nms_threshold: 0.6 diff --git a/configs/smrt/picodet/picodet_l_640_coco_lcnet_battery.yml b/configs/smrt/picodet/picodet_l_640_coco_lcnet_battery.yml new file mode 100644 index 0000000000000000000000000000000000000000..8200439dc928fea3d0c091d98acee30117a33be1 --- /dev/null +++ b/configs/smrt/picodet/picodet_l_640_coco_lcnet_battery.yml @@ -0,0 +1,162 @@ +weights: output/picodet_l_640_coco_lcnet_battery/model_final +pretrain_weights: https://paddledet.bj.bcebos.com/models/picodet_l_640_coco_lcnet.pdparams + +worker_num: 2 +eval_height: &eval_height 640 +eval_width: &eval_width 640 +eval_size: &eval_size [*eval_height, *eval_width] + +metric: COCO +num_classes: 45 + +TrainDataset: + !COCODataSet + image_dir: images + anno_path: annotations/train.json + dataset_dir: dataset/battery_mini + data_fields: ['image', 'gt_bbox', 'gt_class', 'is_crowd'] + +EvalDataset: + !COCODataSet + image_dir: images + anno_path: annotations/test.json + dataset_dir: dataset/battery_mini + +TestDataset: + !ImageFolder + anno_path: annotations/test.json + dataset_dir: dataset/battery_mini + +epoch: 50 +LearningRate: + base_lr: 0.006 + schedulers: + - !CosineDecay + max_epochs: 50 + - !LinearWarmup + start_factor: 0.001 + steps: 300 + +TrainReader: + sample_transforms: + - Decode: {} + - RandomCrop: {} + - RandomFlip: {prob: 0.5} + - RandomDistort: {} + batch_transforms: + - BatchRandomResize: {target_size: [576, 608, 640, 672, 704], random_size: True, random_interp: True, keep_ratio: False} + - NormalizeImage: {is_scale: true, mean: [0.485,0.456,0.406], std: [0.229, 0.224,0.225]} + - Permute: {} + - PadGT: {} + batch_size: 8 + shuffle: true + drop_last: true + + +EvalReader: + sample_transforms: + - Decode: {} + - Resize: {interp: 2, target_size: *eval_size, keep_ratio: False} + - NormalizeImage: {is_scale: true, mean: [0.485,0.456,0.406], std: [0.229, 0.224,0.225]} + - Permute: {} + batch_transforms: + - PadBatch: {pad_to_stride: 32} + batch_size: 8 + shuffle: false + + +TestReader: + inputs_def: + image_shape: [1, 3, *eval_height, *eval_width] + sample_transforms: + - Decode: {} + - Resize: {interp: 2, target_size: *eval_size, keep_ratio: False} + 
- NormalizeImage: {is_scale: true, mean: [0.485,0.456,0.406], std: [0.229, 0.224,0.225]} + - Permute: {} + batch_size: 1 + + +use_gpu: true +use_xpu: false +log_iter: 100 +save_dir: output +snapshot_epoch: 10 +print_flops: false +find_unused_parameters: True +use_ema: true + + +# Exporting the model +export: + post_process: True # Whether post-processing is included in the network when export model. + nms: True # Whether NMS is included in the network when export model. + benchmark: False # It is used to testing model performance, if set `True`, post-process and NMS will not be exported. + +OptimizerBuilder: + optimizer: + momentum: 0.9 + type: Momentum + regularizer: + factor: 0.00004 + type: L2 + +architecture: PicoDet + +PicoDet: + backbone: LCNet + neck: LCPAN + head: PicoHeadV2 + +LCNet: + scale: 2.0 + feature_maps: [3, 4, 5] + +LCPAN: + out_channels: 160 + use_depthwise: True + num_features: 4 + +PicoHeadV2: + conv_feat: + name: PicoFeat + feat_in: 160 + feat_out: 160 + num_convs: 4 + num_fpn_stride: 4 + norm_type: bn + share_cls_reg: True + use_se: True + fpn_stride: [8, 16, 32, 64] + feat_in_chan: 160 + prior_prob: 0.01 + reg_max: 7 + cell_offset: 0.5 + grid_cell_scale: 5.0 + static_assigner_epoch: 100 + use_align_head: True + static_assigner: + name: ATSSAssigner + topk: 9 + force_gt_matching: False + assigner: + name: TaskAlignedAssigner + topk: 13 + alpha: 1.0 + beta: 6.0 + loss_class: + name: VarifocalLoss + use_sigmoid: False + iou_weighted: True + loss_weight: 1.0 + loss_dfl: + name: DistributionFocalLoss + loss_weight: 0.5 + loss_bbox: + name: GIoULoss + loss_weight: 2.5 + nms: + name: MultiClassNMS + nms_top_k: 1000 + keep_top_k: 100 + score_threshold: 0.025 + nms_threshold: 0.6 diff --git a/configs/smrt/picodet/picodet_l_640_coco_lcnet_lvjian1.yml b/configs/smrt/picodet/picodet_l_640_coco_lcnet_lvjian1.yml new file mode 100644 index 0000000000000000000000000000000000000000..6000902f03363a5763e7c26fd38565f85dcb2388 --- /dev/null +++ b/configs/smrt/picodet/picodet_l_640_coco_lcnet_lvjian1.yml @@ -0,0 +1,162 @@ +weights: output/picodet_l_640_coco_lcnet_lvjian1/model_final +pretrain_weights: https://paddledet.bj.bcebos.com/models/picodet_l_640_coco_lcnet.pdparams + +worker_num: 2 +eval_height: &eval_height 640 +eval_width: &eval_width 640 +eval_size: &eval_size [*eval_height, *eval_width] + +metric: COCO +num_classes: 5 + +TrainDataset: + !COCODataSet + image_dir: images + anno_path: train.json + dataset_dir: /paddle/dataset/model-select/gongye/lvjian1/slice_lvjian1_data/train/ + data_fields: ['image', 'gt_bbox', 'gt_class', 'is_crowd'] + +EvalDataset: + !COCODataSet + image_dir: images + anno_path: val.json + dataset_dir: /paddle/dataset/model-select/gongye/lvjian1/slice_lvjian1_data/eval/ + +TestDataset: + !ImageFolder + anno_path: val.json + dataset_dir: dataset/slice_lvjian1_data/eval + +epoch: 50 +LearningRate: + base_lr: 0.006 + schedulers: + - !CosineDecay + max_epochs: 50 + - !LinearWarmup + start_factor: 0.001 + steps: 300 + +TrainReader: + sample_transforms: + - Decode: {} + - RandomCrop: {} + - RandomFlip: {prob: 0.5} + - RandomDistort: {} + batch_transforms: + - BatchRandomResize: {target_size: [576, 608, 640, 672, 704], random_size: True, random_interp: True, keep_ratio: False} + - NormalizeImage: {is_scale: true, mean: [0.485,0.456,0.406], std: [0.229, 0.224,0.225]} + - Permute: {} + - PadGT: {} + batch_size: 8 + shuffle: true + drop_last: true + + +EvalReader: + sample_transforms: + - Decode: {} + - Resize: {interp: 2, target_size: *eval_size, keep_ratio: False} 
+ - NormalizeImage: {is_scale: true, mean: [0.485,0.456,0.406], std: [0.229, 0.224,0.225]} + - Permute: {} + batch_transforms: + - PadBatch: {pad_to_stride: 32} + batch_size: 8 + shuffle: false + + +TestReader: + inputs_def: + image_shape: [1, 3, *eval_height, *eval_width] + sample_transforms: + - Decode: {} + - Resize: {interp: 2, target_size: *eval_size, keep_ratio: False} + - NormalizeImage: {is_scale: true, mean: [0.485,0.456,0.406], std: [0.229, 0.224,0.225]} + - Permute: {} + batch_size: 1 + + +use_gpu: true +use_xpu: false +log_iter: 100 +save_dir: output +snapshot_epoch: 10 +print_flops: false +find_unused_parameters: True +use_ema: true + + +# Exporting the model +export: + post_process: True # Whether post-processing is included in the network when export model. + nms: True # Whether NMS is included in the network when export model. + benchmark: False # It is used to testing model performance, if set `True`, post-process and NMS will not be exported. + +OptimizerBuilder: + optimizer: + momentum: 0.9 + type: Momentum + regularizer: + factor: 0.00004 + type: L2 + +architecture: PicoDet + +PicoDet: + backbone: LCNet + neck: LCPAN + head: PicoHeadV2 + +LCNet: + scale: 2.0 + feature_maps: [3, 4, 5] + +LCPAN: + out_channels: 160 + use_depthwise: True + num_features: 4 + +PicoHeadV2: + conv_feat: + name: PicoFeat + feat_in: 160 + feat_out: 160 + num_convs: 4 + num_fpn_stride: 4 + norm_type: bn + share_cls_reg: True + use_se: True + fpn_stride: [8, 16, 32, 64] + feat_in_chan: 160 + prior_prob: 0.01 + reg_max: 7 + cell_offset: 0.5 + grid_cell_scale: 5.0 + static_assigner_epoch: 100 + use_align_head: True + static_assigner: + name: ATSSAssigner + topk: 9 + force_gt_matching: False + assigner: + name: TaskAlignedAssigner + topk: 13 + alpha: 1.0 + beta: 6.0 + loss_class: + name: VarifocalLoss + use_sigmoid: False + iou_weighted: True + loss_weight: 1.0 + loss_dfl: + name: DistributionFocalLoss + loss_weight: 0.5 + loss_bbox: + name: GIoULoss + loss_weight: 2.5 + nms: + name: MultiClassNMS + nms_top_k: 1000 + keep_top_k: 100 + score_threshold: 0.025 + nms_threshold: 0.6 diff --git a/configs/smrt/picodet/picodet_l_640_coco_lcnet_renche.yml b/configs/smrt/picodet/picodet_l_640_coco_lcnet_renche.yml new file mode 100644 index 0000000000000000000000000000000000000000..fc1ce195ea4c5dce2bb60358f0509eb5e01a50c4 --- /dev/null +++ b/configs/smrt/picodet/picodet_l_640_coco_lcnet_renche.yml @@ -0,0 +1,162 @@ +weights: output/picodet_l_640_coco_lcnet_renche/model_final +pretrain_weights: https://paddledet.bj.bcebos.com/models/picodet_l_640_coco_lcnet.pdparams + +worker_num: 2 +eval_height: &eval_height 640 +eval_width: &eval_width 640 +eval_size: &eval_size [*eval_height, *eval_width] + +metric: COCO +num_classes: 22 + +TrainDataset: + !COCODataSet + image_dir: train_images + anno_path: train.json + dataset_dir: dataset/renche + data_fields: ['image', 'gt_bbox', 'gt_class', 'is_crowd'] + +EvalDataset: + !COCODataSet + image_dir: train_images + anno_path: test.json + dataset_dir: dataset/renche + +TestDataset: + !ImageFolder + anno_path: test.json + dataset_dir: dataset/renche + +epoch: 50 +LearningRate: + base_lr: 0.006 + schedulers: + - !CosineDecay + max_epochs: 50 + - !LinearWarmup + start_factor: 0.001 + steps: 300 + +TrainReader: + sample_transforms: + - Decode: {} + - RandomCrop: {} + - RandomFlip: {prob: 0.5} + - RandomDistort: {} + batch_transforms: + - BatchRandomResize: {target_size: [576, 608, 640, 672, 704], random_size: True, random_interp: True, keep_ratio: False} + - NormalizeImage: 
{is_scale: true, mean: [0.485,0.456,0.406], std: [0.229, 0.224,0.225]} + - Permute: {} + - PadGT: {} + batch_size: 8 + shuffle: true + drop_last: true + + +EvalReader: + sample_transforms: + - Decode: {} + - Resize: {interp: 2, target_size: *eval_size, keep_ratio: False} + - NormalizeImage: {is_scale: true, mean: [0.485,0.456,0.406], std: [0.229, 0.224,0.225]} + - Permute: {} + batch_transforms: + - PadBatch: {pad_to_stride: 32} + batch_size: 8 + shuffle: false + + +TestReader: + inputs_def: + image_shape: [1, 3, *eval_height, *eval_width] + sample_transforms: + - Decode: {} + - Resize: {interp: 2, target_size: *eval_size, keep_ratio: False} + - NormalizeImage: {is_scale: true, mean: [0.485,0.456,0.406], std: [0.229, 0.224,0.225]} + - Permute: {} + batch_size: 1 + + +use_gpu: true +use_xpu: false +log_iter: 100 +save_dir: output +snapshot_epoch: 10 +print_flops: false +find_unused_parameters: True +use_ema: true + + +# Exporting the model +export: + post_process: True # Whether post-processing is included in the network when export model. + nms: True # Whether NMS is included in the network when export model. + benchmark: False # It is used to testing model performance, if set `True`, post-process and NMS will not be exported. + +OptimizerBuilder: + optimizer: + momentum: 0.9 + type: Momentum + regularizer: + factor: 0.00004 + type: L2 + +architecture: PicoDet + +PicoDet: + backbone: LCNet + neck: LCPAN + head: PicoHeadV2 + +LCNet: + scale: 2.0 + feature_maps: [3, 4, 5] + +LCPAN: + out_channels: 160 + use_depthwise: True + num_features: 4 + +PicoHeadV2: + conv_feat: + name: PicoFeat + feat_in: 160 + feat_out: 160 + num_convs: 4 + num_fpn_stride: 4 + norm_type: bn + share_cls_reg: True + use_se: True + fpn_stride: [8, 16, 32, 64] + feat_in_chan: 160 + prior_prob: 0.01 + reg_max: 7 + cell_offset: 0.5 + grid_cell_scale: 5.0 + static_assigner_epoch: 100 + use_align_head: True + static_assigner: + name: ATSSAssigner + topk: 9 + force_gt_matching: False + assigner: + name: TaskAlignedAssigner + topk: 13 + alpha: 1.0 + beta: 6.0 + loss_class: + name: VarifocalLoss + use_sigmoid: False + iou_weighted: True + loss_weight: 1.0 + loss_dfl: + name: DistributionFocalLoss + loss_weight: 0.5 + loss_bbox: + name: GIoULoss + loss_weight: 2.5 + nms: + name: MultiClassNMS + nms_top_k: 1000 + keep_top_k: 100 + score_threshold: 0.025 + nms_threshold: 0.6 diff --git a/configs/smrt/ppyolo/ppyolov2_r101vd_dcn_365e_battery.yml b/configs/smrt/ppyolo/ppyolov2_r101vd_dcn_365e_battery.yml new file mode 100644 index 0000000000000000000000000000000000000000..507d0088cbc343e7ca281f8c8f54aa169e135a43 --- /dev/null +++ b/configs/smrt/ppyolo/ppyolov2_r101vd_dcn_365e_battery.yml @@ -0,0 +1,154 @@ +architecture: YOLOv3 +pretrain_weights: https://paddledet.bj.bcebos.com/models/ppyolov2_r101vd_dcn_365e_coco.pdparams +norm_type: sync_bn +use_ema: true +ema_decay: 0.9998 +use_gpu: true +use_xpu: false +log_iter: 100 +save_dir: output + +metric: COCO +num_classes: 45 + +TrainDataset: + !COCODataSet + image_dir: images + anno_path: annotations/train.json + dataset_dir: dataset/battery_mini + data_fields: ['image', 'gt_bbox', 'gt_class', 'is_crowd'] + +EvalDataset: + !COCODataSet + image_dir: images + anno_path: annotations/test.json + dataset_dir: dataset/battery_mini + +TestDataset: + !ImageFolder + anno_path: annotations/test.json # also support txt (like VOC's label_list.txt) + dataset_dir: dataset/battery_mini # if set, anno_path will be 'dataset_dir/anno_path' + + +epoch: 40 +LearningRate: + base_lr: 0.0001 + schedulers: + - 
!PiecewiseDecay + gamma: 0.1 + milestones: + - 80 + - !LinearWarmup + start_factor: 0. + steps: 1000 + +snapshot_epoch: 5 +worker_num: 2 +TrainReader: + inputs_def: + num_max_boxes: 100 + sample_transforms: + - Decode: {} + - RandomDistort: {} + - RandomFlip: {} + batch_transforms: + - BatchRandomResize: {target_size: [576, 608, 640, 672, 704], random_size: True, random_interp: True, keep_ratio: False} + - NormalizeBox: {} + - PadBox: {num_max_boxes: 100} + - BboxXYXY2XYWH: {} + - NormalizeImage: {mean: [0.485, 0.456, 0.406], std: [0.229, 0.224, 0.225], is_scale: True} + - Permute: {} + - Gt2YoloTarget: {anchor_masks: [[6, 7, 8], [3, 4, 5], [0, 1, 2]], anchors: [[10, 13], [16, 30], [33, 23], [30, 61], [62, 45], [59, 119], [116, 90], [156, 198], [373, 326]], downsample_ratios: [32, 16, 8]} + batch_size: 8 + shuffle: true + drop_last: true + use_shared_memory: true + +EvalReader: + sample_transforms: + - Decode: {} + - Resize: {target_size: [640, 640], keep_ratio: False, interp: 2} + - NormalizeImage: {mean: [0.485, 0.456, 0.406], std: [0.229, 0.224, 0.225], is_scale: True} + - Permute: {} + batch_size: 8 + +TestReader: + inputs_def: + image_shape: [3, 640, 640] + sample_transforms: + - Decode: {} + - Resize: {target_size: [640, 640], keep_ratio: False, interp: 2} + - NormalizeImage: {mean: [0.485, 0.456, 0.406], std: [0.229, 0.224, 0.225], is_scale: True} + - Permute: {} + batch_size: 1 + + +OptimizerBuilder: + clip_grad_by_norm: 35. + optimizer: + momentum: 0.9 + type: Momentum + regularizer: + factor: 0.0005 + type: L2 + + +YOLOv3: + backbone: ResNet + neck: PPYOLOPAN + yolo_head: YOLOv3Head + post_process: BBoxPostProcess + +ResNet: + depth: 101 + variant: d + return_idx: [1, 2, 3] + dcn_v2_stages: [3] + freeze_at: -1 + freeze_norm: false + norm_decay: 0. 
+ +PPYOLOPAN: + drop_block: true + block_size: 3 + keep_prob: 0.9 + spp: true + +YOLOv3Head: + anchors: [[10, 13], [16, 30], [33, 23], + [30, 61], [62, 45], [59, 119], + [116, 90], [156, 198], [373, 326]] + anchor_masks: [[6, 7, 8], [3, 4, 5], [0, 1, 2]] + loss: YOLOv3Loss + iou_aware: true + iou_aware_factor: 0.4 + +YOLOv3Loss: + ignore_thresh: 0.7 + downsample: [32, 16, 8] + label_smooth: false + scale_x_y: 1.05 + iou_loss: IouLoss + iou_aware_loss: IouAwareLoss + +IouLoss: + loss_weight: 2.5 + loss_square: true + +IouAwareLoss: + loss_weight: 1.0 + +BBoxPostProcess: + decode: + name: YOLOBox + conf_thresh: 0.01 + downsample_ratio: 32 + clip_bbox: true + scale_x_y: 1.05 + nms: + name: MatrixNMS + keep_top_k: 100 + score_threshold: 0.01 + post_threshold: 0.01 + nms_top_k: -1 + background_label: -1 diff --git a/configs/smrt/ppyolo/ppyolov2_r101vd_dcn_365e_battery_1024.yml b/configs/smrt/ppyolo/ppyolov2_r101vd_dcn_365e_battery_1024.yml new file mode 100644 index 0000000000000000000000000000000000000000..cd8b6d49f2d5b3a2de4a72e8bfe2064bcfcfc4a7 --- /dev/null +++ b/configs/smrt/ppyolo/ppyolov2_r101vd_dcn_365e_battery_1024.yml @@ -0,0 +1,154 @@ +architecture: YOLOv3 +pretrain_weights: https://paddledet.bj.bcebos.com/models/ppyolov2_r101vd_dcn_365e_coco.pdparams +norm_type: sync_bn +use_ema: true +ema_decay: 0.9998 +use_gpu: true +use_xpu: false +log_iter: 100 +save_dir: output + +metric: COCO +num_classes: 45 + +TrainDataset: + !COCODataSet + image_dir: images + anno_path: annotations/train.json + dataset_dir: dataset/battery_mini + data_fields: ['image', 'gt_bbox', 'gt_class', 'is_crowd'] + +EvalDataset: + !COCODataSet + image_dir: images + anno_path: annotations/test.json + dataset_dir: dataset/battery_mini + +TestDataset: + !ImageFolder + anno_path: annotations/test.json # also support txt (like VOC's label_list.txt) + dataset_dir: dataset/battery_mini # if set, anno_path will be 'dataset_dir/anno_path' + + +epoch: 40 +LearningRate: + base_lr: 0.0001 + schedulers: + - !PiecewiseDecay + gamma: 0.1 + milestones: + - 80 + - !LinearWarmup + start_factor: 0. + steps: 1000 + +snapshot_epoch: 5 +worker_num: 2 +TrainReader: + inputs_def: + num_max_boxes: 100 + sample_transforms: + - Decode: {} + - RandomDistort: {} + - RandomFlip: {} + batch_transforms: + - BatchRandomResize: {target_size: [960, 992, 1024, 1056, 1088], random_size: True, random_interp: True, keep_ratio: False} + - NormalizeBox: {} + - PadBox: {num_max_boxes: 100} + - BboxXYXY2XYWH: {} + - NormalizeImage: {mean: [0.485, 0.456, 0.406], std: [0.229, 0.224, 0.225], is_scale: True} + - Permute: {} + - Gt2YoloTarget: {anchor_masks: [[6, 7, 8], [3, 4, 5], [0, 1, 2]], anchors: [[10, 13], [16, 30], [33, 23], [30, 61], [62, 45], [59, 119], [116, 90], [156, 198], [373, 326]], downsample_ratios: [32, 16, 8]} + batch_size: 2 + shuffle: true + drop_last: true + use_shared_memory: true + +EvalReader: + sample_transforms: + - Decode: {} + - Resize: {target_size: [1024, 1024], keep_ratio: False, interp: 2} + - NormalizeImage: {mean: [0.485, 0.456, 0.406], std: [0.229, 0.224, 0.225], is_scale: True} + - Permute: {} + batch_size: 8 + +TestReader: + inputs_def: + image_shape: [3, 1024, 1024] + sample_transforms: + - Decode: {} + - Resize: {target_size: [1024, 1024], keep_ratio: False, interp: 2} + - NormalizeImage: {mean: [0.485, 0.456, 0.406], std: [0.229, 0.224, 0.225], is_scale: True} + - Permute: {} + batch_size: 1 + + +OptimizerBuilder: + clip_grad_by_norm: 35. 
+ optimizer: + momentum: 0.9 + type: Momentum + regularizer: + factor: 0.0005 + type: L2 + + +YOLOv3: + backbone: ResNet + neck: PPYOLOPAN + yolo_head: YOLOv3Head + post_process: BBoxPostProcess + +ResNet: + depth: 101 + variant: d + return_idx: [1, 2, 3] + dcn_v2_stages: [3] + freeze_at: -1 + freeze_norm: false + norm_decay: 0. + +PPYOLOPAN: + drop_block: true + block_size: 3 + keep_prob: 0.9 + spp: true + +YOLOv3Head: + anchors: [[10, 13], [16, 30], [33, 23], + [30, 61], [62, 45], [59, 119], + [116, 90], [156, 198], [373, 326]] + anchor_masks: [[6, 7, 8], [3, 4, 5], [0, 1, 2]] + loss: YOLOv3Loss + iou_aware: true + iou_aware_factor: 0.4 + +YOLOv3Loss: + ignore_thresh: 0.7 + downsample: [32, 16, 8] + label_smooth: false + scale_x_y: 1.05 + iou_loss: IouLoss + iou_aware_loss: IouAwareLoss + +IouLoss: + loss_weight: 2.5 + loss_square: true + +IouAwareLoss: + loss_weight: 1.0 + +BBoxPostProcess: + decode: + name: YOLOBox + conf_thresh: 0.01 + downsample_ratio: 32 + clip_bbox: true + scale_x_y: 1.05 + nms: + name: MatrixNMS + keep_top_k: 100 + score_threshold: 0.01 + post_threshold: 0.01 + nms_top_k: -1 + background_label: -1 diff --git a/configs/smrt/ppyolo/ppyolov2_r101vd_dcn_365e_lvjian1_1024.yml b/configs/smrt/ppyolo/ppyolov2_r101vd_dcn_365e_lvjian1_1024.yml new file mode 100644 index 0000000000000000000000000000000000000000..0c09049a60423464a8c14666f7c86098564482d8 --- /dev/null +++ b/configs/smrt/ppyolo/ppyolov2_r101vd_dcn_365e_lvjian1_1024.yml @@ -0,0 +1,155 @@ +architecture: YOLOv3 +pretrain_weights: https://paddledet.bj.bcebos.com/models/ppyolov2_r101vd_dcn_365e_coco.pdparams +norm_type: sync_bn +use_ema: true +ema_decay: 0.9998 +use_gpu: true +use_xpu: false +log_iter: 100 +save_dir: output + +metric: COCO +num_classes: 5 + +TrainDataset: + !COCODataSet + image_dir: images + anno_path: train.json + dataset_dir: dataset/slice_lvjian1_data/train + data_fields: ['image', 'gt_bbox', 'gt_class', 'is_crowd'] + +EvalDataset: + !COCODataSet + image_dir: images + anno_path: val.json + dataset_dir: dataset/slice_lvjian1_data/eval + +TestDataset: + !ImageFolder + anno_path: val.json + dataset_dir: dataset/slice_lvjian1_data/eval + + +epoch: 20 +LearningRate: + base_lr: 0.0002 + schedulers: + - !PiecewiseDecay + gamma: 0.1 + milestones: + - 80 + - !LinearWarmup + start_factor: 0. 
+ steps: 1000 + + +snapshot_epoch: 3 +worker_num: 2 +TrainReader: + inputs_def: + num_max_boxes: 100 + sample_transforms: + - Decode: {} + - RandomDistort: {} + - RandomExpand: {fill_value: [123.675, 116.28, 103.53]} + - RandomCrop: {} + - RandomFlip: {} + batch_transforms: + - BatchRandomResize: {target_size: [960, 992, 1024, 1056, 1088], random_size: True, random_interp: True, keep_ratio: False} + - NormalizeBox: {} + - PadBox: {num_max_boxes: 100} + - BboxXYXY2XYWH: {} + - NormalizeImage: {mean: [0.485, 0.456, 0.406], std: [0.229, 0.224, 0.225], is_scale: True} + - Permute: {} + - Gt2YoloTarget: {anchor_masks: [[6, 7, 8], [3, 4, 5], [0, 1, 2]], anchors: [[8, 7], [24, 12], [14, 25], [37, 35], [30, 140], [89, 52], [93, 189], [226, 99], [264, 352]], downsample_ratios: [32, 16, 8]} + batch_size: 2 + shuffle: true + drop_last: true + use_shared_memory: true + +EvalReader: + sample_transforms: + - Decode: {} + - Resize: {target_size: [1024, 1024], keep_ratio: False, interp: 2} + - NormalizeImage: {mean: [0.485, 0.456, 0.406], std: [0.229, 0.224, 0.225], is_scale: True} + - Permute: {} + batch_size: 8 + +TestReader: + inputs_def: + image_shape: [3, 1024, 1024] + sample_transforms: + - Decode: {} + - Resize: {target_size: [1024, 1024], keep_ratio: False, interp: 2} + - NormalizeImage: {mean: [0.485, 0.456, 0.406], std: [0.229, 0.224, 0.225], is_scale: True} + - Permute: {} + batch_size: 1 + +OptimizerBuilder: + clip_grad_by_norm: 35. + optimizer: + momentum: 0.9 + type: Momentum + regularizer: + factor: 0.0005 + type: L2 + +YOLOv3: + backbone: ResNet + neck: PPYOLOPAN + yolo_head: YOLOv3Head + post_process: BBoxPostProcess + +ResNet: + depth: 101 + variant: d + return_idx: [1, 2, 3] + dcn_v2_stages: [3] + freeze_at: -1 + freeze_norm: false + norm_decay: 0. 
+ +PPYOLOPAN: + drop_block: true + block_size: 3 + keep_prob: 0.9 + spp: true + +YOLOv3Head: + anchors: [[8, 7], [24, 12], [14, 25], + [37, 35], [30, 140], [89, 52], + [93, 189], [226, 99], [264, 352]] + anchor_masks: [[6, 7, 8], [3, 4, 5], [0, 1, 2]] + loss: YOLOv3Loss + iou_aware: true + iou_aware_factor: 0.5 + +YOLOv3Loss: + ignore_thresh: 0.7 + downsample: [32, 16, 8] + label_smooth: false + scale_x_y: 1.05 + iou_loss: IouLoss + iou_aware_loss: IouAwareLoss + +IouLoss: + loss_weight: 2.5 + loss_square: true + +IouAwareLoss: + loss_weight: 1.0 + +BBoxPostProcess: + decode: + name: YOLOBox + conf_thresh: 0.01 + downsample_ratio: 32 + clip_bbox: true + scale_x_y: 1.05 + nms: + name: MatrixNMS + keep_top_k: 100 + score_threshold: 0.01 + post_threshold: 0.01 + nms_top_k: -1 + background_label: -1 diff --git a/configs/smrt/ppyolo/ppyolov2_r101vd_dcn_365e_lvjian1_640.yml b/configs/smrt/ppyolo/ppyolov2_r101vd_dcn_365e_lvjian1_640.yml new file mode 100644 index 0000000000000000000000000000000000000000..f7dc75f975efdbd08ae4f96ce2f54cc55a4569f4 --- /dev/null +++ b/configs/smrt/ppyolo/ppyolov2_r101vd_dcn_365e_lvjian1_640.yml @@ -0,0 +1,155 @@ +architecture: YOLOv3 +pretrain_weights: https://paddledet.bj.bcebos.com/models/ppyolov2_r101vd_dcn_365e_coco.pdparams +norm_type: sync_bn +use_ema: true +ema_decay: 0.9998 +use_gpu: true +use_xpu: false +log_iter: 100 +save_dir: output + +metric: COCO +num_classes: 5 + +TrainDataset: + !COCODataSet + image_dir: images + anno_path: train.json + dataset_dir: dataset/slice_lvjian1_data/train + data_fields: ['image', 'gt_bbox', 'gt_class', 'is_crowd'] + +EvalDataset: + !COCODataSet + image_dir: images + anno_path: val.json + dataset_dir: dataset/slice_lvjian1_data/eval + +TestDataset: + !ImageFolder + anno_path: val.json + dataset_dir: dataset/slice_lvjian1_data/eval + + +epoch: 20 +LearningRate: + base_lr: 0.0002 + schedulers: + - !PiecewiseDecay + gamma: 0.1 + milestones: + - 80 + - !LinearWarmup + start_factor: 0. + steps: 1000 + + +snapshot_epoch: 3 +worker_num: 2 +TrainReader: + inputs_def: + num_max_boxes: 100 + sample_transforms: + - Decode: {} + - RandomDistort: {} + - RandomExpand: {fill_value: [123.675, 116.28, 103.53]} + - RandomCrop: {} + - RandomFlip: {} + batch_transforms: + - BatchRandomResize: {target_size: [576, 608, 640, 672, 704], random_size: True, random_interp: True, keep_ratio: False} + - NormalizeBox: {} + - PadBox: {num_max_boxes: 100} + - BboxXYXY2XYWH: {} + - NormalizeImage: {mean: [0.485, 0.456, 0.406], std: [0.229, 0.224, 0.225], is_scale: True} + - Permute: {} + - Gt2YoloTarget: {anchor_masks: [[6, 7, 8], [3, 4, 5], [0, 1, 2]], anchors: [[8, 7], [24, 12], [14, 25], [37, 35], [30, 140], [89, 52], [93, 189], [226, 99], [264, 352]], downsample_ratios: [32, 16, 8]} + batch_size: 2 + shuffle: true + drop_last: true + use_shared_memory: true + +EvalReader: + sample_transforms: + - Decode: {} + - Resize: {target_size: [640, 640], keep_ratio: False, interp: 2} + - NormalizeImage: {mean: [0.485, 0.456, 0.406], std: [0.229, 0.224, 0.225], is_scale: True} + - Permute: {} + batch_size: 8 + +TestReader: + inputs_def: + image_shape: [3, 640, 640] + sample_transforms: + - Decode: {} + - Resize: {target_size: [640, 640], keep_ratio: False, interp: 2} + - NormalizeImage: {mean: [0.485, 0.456, 0.406], std: [0.229, 0.224, 0.225], is_scale: True} + - Permute: {} + batch_size: 1 + +OptimizerBuilder: + clip_grad_by_norm: 35. 
+ optimizer: + momentum: 0.9 + type: Momentum + regularizer: + factor: 0.0005 + type: L2 + +YOLOv3: + backbone: ResNet + neck: PPYOLOPAN + yolo_head: YOLOv3Head + post_process: BBoxPostProcess + +ResNet: + depth: 101 + variant: d + return_idx: [1, 2, 3] + dcn_v2_stages: [3] + freeze_at: -1 + freeze_norm: false + norm_decay: 0. + +PPYOLOPAN: + drop_block: true + block_size: 3 + keep_prob: 0.9 + spp: true + +YOLOv3Head: + anchors: [[8, 7], [24, 12], [14, 25], + [37, 35], [30, 140], [89, 52], + [93, 189], [226, 99], [264, 352]] + anchor_masks: [[6, 7, 8], [3, 4, 5], [0, 1, 2]] + loss: YOLOv3Loss + iou_aware: true + iou_aware_factor: 0.5 + +YOLOv3Loss: + ignore_thresh: 0.7 + downsample: [32, 16, 8] + label_smooth: false + scale_x_y: 1.05 + iou_loss: IouLoss + iou_aware_loss: IouAwareLoss + +IouLoss: + loss_weight: 2.5 + loss_square: true + +IouAwareLoss: + loss_weight: 1.0 + +BBoxPostProcess: + decode: + name: YOLOBox + conf_thresh: 0.01 + downsample_ratio: 32 + clip_bbox: true + scale_x_y: 1.05 + nms: + name: MatrixNMS + keep_top_k: 100 + score_threshold: 0.01 + post_threshold: 0.01 + nms_top_k: -1 + background_label: -1 diff --git a/configs/smrt/ppyolo/ppyolov2_r101vd_dcn_365e_renche_1024.yml b/configs/smrt/ppyolo/ppyolov2_r101vd_dcn_365e_renche_1024.yml new file mode 100644 index 0000000000000000000000000000000000000000..96ab192171798f7947ee857b8291152e5933c57a --- /dev/null +++ b/configs/smrt/ppyolo/ppyolov2_r101vd_dcn_365e_renche_1024.yml @@ -0,0 +1,156 @@ +architecture: YOLOv3 +pretrain_weights: https://paddledet.bj.bcebos.com/models/ppyolov2_r101vd_dcn_365e_coco.pdparams +norm_type: sync_bn +use_ema: true +ema_decay: 0.9998 +use_gpu: true +use_xpu: false +log_iter: 100 +save_dir: output + +metric: COCO +num_classes: 22 + +TrainDataset: + !COCODataSet + image_dir: train_images + anno_path: train.json + dataset_dir: dataset/renche + data_fields: ['image', 'gt_bbox', 'gt_class', 'is_crowd'] + +EvalDataset: + !COCODataSet + image_dir: train_images + anno_path: test.json + dataset_dir: dataset/renche + +TestDataset: + !ImageFolder + anno_path: dataset/renche/test.json + + +epoch: 100 +LearningRate: + base_lr: 0.0002 + schedulers: + - !PiecewiseDecay + gamma: 0.1 + milestones: + - 80 + - !LinearWarmup + start_factor: 0. 
+ steps: 1000 + + +snapshot_epoch: 3 +worker_num: 8 +TrainReader: + inputs_def: + num_max_boxes: 100 + sample_transforms: + - Decode: {} + - RandomDistort: {} + - RandomExpand: {fill_value: [123.675, 116.28, 103.53]} + - RandomCrop: {} + - RandomFlip: {} + batch_transforms: + - BatchRandomResize: {target_size: [960, 992, 1024, 1056, 1088], random_size: True, random_interp: True, keep_ratio: False} + - NormalizeBox: {} + - PadBox: {num_max_boxes: 100} + - BboxXYXY2XYWH: {} + - NormalizeImage: {mean: [0.485, 0.456, 0.406], std: [0.229, 0.224, 0.225], is_scale: True} + - Permute: {} + - Gt2YoloTarget: {anchor_masks: [[6, 7, 8], [3, 4, 5], [0, 1, 2]], anchors: [[10, 13], [16, 30], [33, 23], [30, 61], [62, 45], [59, 119], [116, 90], [156, 198], [373, 326]], downsample_ratios: [32, 16, 8]} + batch_size: 2 + shuffle: true + drop_last: true + use_shared_memory: true + +EvalReader: + sample_transforms: + - Decode: {} + - Resize: {target_size: [1024, 1024], keep_ratio: False, interp: 2} + - NormalizeImage: {mean: [0.485, 0.456, 0.406], std: [0.229, 0.224, 0.225], is_scale: True} + - Permute: {} + batch_size: 8 + +TestReader: + inputs_def: + image_shape: [3, 1024, 1024] + sample_transforms: + - Decode: {} + - Resize: {target_size: [1024, 1024], keep_ratio: False, interp: 2} + - NormalizeImage: {mean: [0.485, 0.456, 0.406], std: [0.229, 0.224, 0.225], is_scale: True} + - Permute: {} + batch_size: 1 + + +OptimizerBuilder: + clip_grad_by_norm: 35. + optimizer: + momentum: 0.9 + type: Momentum + regularizer: + factor: 0.0005 + type: L2 + + +YOLOv3: + backbone: ResNet + neck: PPYOLOPAN + yolo_head: YOLOv3Head + post_process: BBoxPostProcess + +ResNet: + depth: 101 + variant: d + return_idx: [1, 2, 3] + dcn_v2_stages: [3] + freeze_at: -1 + freeze_norm: false + norm_decay: 0. 
+ +PPYOLOPAN: + drop_block: true + block_size: 3 + keep_prob: 0.9 + spp: true + +YOLOv3Head: + anchors: [[10, 13], [16, 30], [33, 23], + [30, 61], [62, 45], [59, 119], + [116, 90], [156, 198], [373, 326]] + anchor_masks: [[6, 7, 8], [3, 4, 5], [0, 1, 2]] + loss: YOLOv3Loss + iou_aware: true + iou_aware_factor: 0.5 + +YOLOv3Loss: + ignore_thresh: 0.7 + downsample: [32, 16, 8] + label_smooth: false + scale_x_y: 1.05 + iou_loss: IouLoss + iou_aware_loss: IouAwareLoss + +IouLoss: + loss_weight: 2.5 + loss_square: true + +IouAwareLoss: + loss_weight: 1.0 + +BBoxPostProcess: + decode: + name: YOLOBox + conf_thresh: 0.01 + downsample_ratio: 32 + clip_bbox: true + scale_x_y: 1.05 + nms: + name: MatrixNMS + keep_top_k: 100 + score_threshold: 0.01 + post_threshold: 0.01 + nms_top_k: -1 + background_label: -1 diff --git a/configs/smrt/ppyolo/ppyolov2_r101vd_dcn_365e_renche_640.yml b/configs/smrt/ppyolo/ppyolov2_r101vd_dcn_365e_renche_640.yml new file mode 100644 index 0000000000000000000000000000000000000000..ccc2162de1c995fdede25ccfa337d6136d14b3df --- /dev/null +++ b/configs/smrt/ppyolo/ppyolov2_r101vd_dcn_365e_renche_640.yml @@ -0,0 +1,156 @@ +architecture: YOLOv3 +pretrain_weights: https://paddledet.bj.bcebos.com/models/ppyolov2_r101vd_dcn_365e_coco.pdparams +norm_type: sync_bn +use_ema: true +ema_decay: 0.9998 +use_gpu: true +use_xpu: false +log_iter: 100 +save_dir: output + +metric: COCO +num_classes: 22 + +TrainDataset: + !COCODataSet + image_dir: train_images + anno_path: train.json + dataset_dir: dataset/renche + data_fields: ['image', 'gt_bbox', 'gt_class', 'is_crowd'] + +EvalDataset: + !COCODataSet + image_dir: train_images + anno_path: test.json + dataset_dir: dataset/renche + +TestDataset: + !ImageFolder + anno_path: dataset/renche/test.json + + +epoch: 100 +LearningRate: + base_lr: 0.0002 + schedulers: + - !PiecewiseDecay + gamma: 0.1 + milestones: + - 80 + - !LinearWarmup + start_factor: 0. + steps: 1000 + + +snapshot_epoch: 3 +worker_num: 8 +TrainReader: + inputs_def: + num_max_boxes: 100 + sample_transforms: + - Decode: {} + - RandomDistort: {} + - RandomExpand: {fill_value: [123.675, 116.28, 103.53]} + - RandomCrop: {} + - RandomFlip: {} + batch_transforms: + - BatchRandomResize: {target_size: [576, 608, 640, 672, 704], random_size: True, random_interp: True, keep_ratio: False} + - NormalizeBox: {} + - PadBox: {num_max_boxes: 100} + - BboxXYXY2XYWH: {} + - NormalizeImage: {mean: [0.485, 0.456, 0.406], std: [0.229, 0.224, 0.225], is_scale: True} + - Permute: {} + - Gt2YoloTarget: {anchor_masks: [[6, 7, 8], [3, 4, 5], [0, 1, 2]], anchors: [[10, 13], [16, 30], [33, 23], [30, 61], [62, 45], [59, 119], [116, 90], [156, 198], [373, 326]], downsample_ratios: [32, 16, 8]} + batch_size: 2 + shuffle: true + drop_last: true + use_shared_memory: true + +EvalReader: + sample_transforms: + - Decode: {} + - Resize: {target_size: [640, 640], keep_ratio: False, interp: 2} + - NormalizeImage: {mean: [0.485, 0.456, 0.406], std: [0.229, 0.224, 0.225], is_scale: True} + - Permute: {} + batch_size: 8 + +TestReader: + inputs_def: + image_shape: [3, 640, 640] + sample_transforms: + - Decode: {} + - Resize: {target_size: [640, 640], keep_ratio: False, interp: 2} + - NormalizeImage: {mean: [0.485, 0.456, 0.406], std: [0.229, 0.224, 0.225], is_scale: True} + - Permute: {} + batch_size: 1 + + +OptimizerBuilder: + clip_grad_by_norm: 35. 
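+# clip_grad_by_norm above is clip-by-global-norm: when the global gradient
+# norm ||g|| exceeds 35, every gradient is rescaled as
+# grad <- grad * min(1, 35 / ||g||). The Momentum + L2 (factor 0.0005)
+# settings below match the standard PP-YOLOv2 recipe, as we read it.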
+ optimizer: + momentum: 0.9 + type: Momentum + regularizer: + factor: 0.0005 + type: L2 + + +YOLOv3: + backbone: ResNet + neck: PPYOLOPAN + yolo_head: YOLOv3Head + post_process: BBoxPostProcess + +ResNet: + depth: 101 + variant: d + return_idx: [1, 2, 3] + dcn_v2_stages: [3] + freeze_at: -1 + freeze_norm: false + norm_decay: 0. + +PPYOLOPAN: + drop_block: true + block_size: 3 + keep_prob: 0.9 + spp: true + +YOLOv3Head: + anchors: [[10, 13], [16, 30], [33, 23], + [30, 61], [62, 45], [59, 119], + [116, 90], [156, 198], [373, 326]] + anchor_masks: [[6, 7, 8], [3, 4, 5], [0, 1, 2]] + loss: YOLOv3Loss + iou_aware: true + iou_aware_factor: 0.5 + +YOLOv3Loss: + ignore_thresh: 0.7 + downsample: [32, 16, 8] + label_smooth: false + scale_x_y: 1.05 + iou_loss: IouLoss + iou_aware_loss: IouAwareLoss + +IouLoss: + loss_weight: 2.5 + loss_square: true + +IouAwareLoss: + loss_weight: 1.0 + +BBoxPostProcess: + decode: + name: YOLOBox + conf_thresh: 0.01 + downsample_ratio: 32 + clip_bbox: true + scale_x_y: 1.05 + nms: + name: MatrixNMS + keep_top_k: 100 + score_threshold: 0.01 + post_threshold: 0.01 + nms_top_k: -1 + background_label: -1 diff --git a/static/configs/ppyolo/ppyolo_test.yml b/configs/smrt/ppyolo/ppyolov2_r50vd_dcn_365e_battery.yml similarity index 31% rename from static/configs/ppyolo/ppyolo_test.yml rename to configs/smrt/ppyolo/ppyolov2_r50vd_dcn_365e_battery.yml index 250beb0551c95dfbe731e1ba3da52e0b695cc6e9..e886dd6c10bd03bcb2ef64f1a5f54ba0e923efcc 100644 --- a/static/configs/ppyolo/ppyolo_test.yml +++ b/configs/smrt/ppyolo/ppyolov2_r50vd_dcn_365e_battery.yml @@ -1,138 +1,154 @@ -# NOTE: this config file is only used for evaluation on COCO test2019 set, -# for training or evaluationg on COCO val2017, please use ppyolo.yml architecture: YOLOv3 +pretrain_weights: https://paddledet.bj.bcebos.com/models/ppyolov2_r50vd_dcn_365e_coco.pdparams +norm_type: sync_bn +use_ema: true +ema_decay: 0.9998 use_gpu: true -max_iters: 500000 +use_xpu: false log_iter: 100 save_dir: output -snapshot_iter: 10000 + metric: COCO -pretrain_weights: https://paddle-imagenet-models-name.bj.bcebos.com/ResNet50_vd_ssld_pretrained.tar -weights: output/ppyolo/model_final -num_classes: 80 -use_fine_grained_loss: true -use_ema: true -ema_decay: 0.9998 -save_prediction_only: True +num_classes: 45 + +TrainDataset: + !COCODataSet + image_dir: images + anno_path: annotations/train.json + dataset_dir: dataset/battery_mini + data_fields: ['image', 'gt_bbox', 'gt_class', 'is_crowd'] + +EvalDataset: + !COCODataSet + image_dir: images + anno_path: annotations/test.json + dataset_dir: dataset/battery_mini + +TestDataset: + !ImageFolder + anno_path: annotations/test.json # also support txt (like VOC's label_list.txt) + dataset_dir: dataset/battery_mini # if set, anno_path will be 'dataset_dir/anno_path' + + +epoch: 40 +LearningRate: + base_lr: 0.0001 + schedulers: + - !PiecewiseDecay + gamma: 0.1 + milestones: + - 80 + - !LinearWarmup + start_factor: 0. 
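+# Caveat (observation on this config, not a documented choice): training
+# runs for epoch: 40 but the PiecewiseDecay milestone above is 80, so the
+# 0.1 decay never fires and the lr holds at base_lr after warmup. If a decay
+# inside the schedule is intended, the milestone needs to drop below 40.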
+ steps: 1000 + +snapshot_epoch: 5 +worker_num: 2 +TrainReader: + inputs_def: + num_max_boxes: 100 + sample_transforms: + - Decode: {} + - RandomDistort: {} + - RandomFlip: {} + batch_transforms: + - BatchRandomResize: {target_size: [576, 608, 640, 672, 704], random_size: True, random_interp: True, keep_ratio: False} + - NormalizeBox: {} + - PadBox: {num_max_boxes: 100} + - BboxXYXY2XYWH: {} + - NormalizeImage: {mean: [0.485, 0.456, 0.406], std: [0.229, 0.224, 0.225], is_scale: True} + - Permute: {} + - Gt2YoloTarget: {anchor_masks: [[6, 7, 8], [3, 4, 5], [0, 1, 2]], anchors: [[10, 13], [16, 30], [33, 23], [30, 61], [62, 45], [59, 119], [116, 90], [156, 198], [373, 326]], downsample_ratios: [32, 16, 8]} + batch_size: 8 + shuffle: true + drop_last: true + use_shared_memory: true + +EvalReader: + sample_transforms: + - Decode: {} + - Resize: {target_size: [640, 640], keep_ratio: False, interp: 2} + - NormalizeImage: {mean: [0.485, 0.456, 0.406], std: [0.229, 0.224, 0.225], is_scale: True} + - Permute: {} + batch_size: 8 + +TestReader: + inputs_def: + image_shape: [3, 640, 640] + sample_transforms: + - Decode: {} + - Resize: {target_size: [640, 640], keep_ratio: False, interp: 2} + - NormalizeImage: {mean: [0.485, 0.456, 0.406], std: [0.229, 0.224, 0.225], is_scale: True} + - Permute: {} + batch_size: 1 + + +OptimizerBuilder: + clip_grad_by_norm: 35. + optimizer: + momentum: 0.9 + type: Momentum + regularizer: + factor: 0.0005 + type: L2 + YOLOv3: backbone: ResNet + neck: PPYOLOPAN yolo_head: YOLOv3Head - use_fine_grained_loss: true + post_process: BBoxPostProcess ResNet: - norm_type: sync_bn - freeze_at: 0 - freeze_norm: false - norm_decay: 0. depth: 50 - feature_maps: [3, 4, 5] variant: d - dcn_v2_stages: [5] + return_idx: [1, 2, 3] + dcn_v2_stages: [3] + freeze_at: -1 + freeze_norm: false + norm_decay: 0. + +PPYOLOPAN: + drop_block: true + block_size: 3 + keep_prob: 0.9 + spp: true YOLOv3Head: - anchor_masks: [[6, 7, 8], [3, 4, 5], [0, 1, 2]] anchors: [[10, 13], [16, 30], [33, 23], [30, 61], [62, 45], [59, 119], [116, 90], [156, 198], [373, 326]] - norm_decay: 0. - coord_conv: true + anchor_masks: [[6, 7, 8], [3, 4, 5], [0, 1, 2]] + loss: YOLOv3Loss iou_aware: true iou_aware_factor: 0.4 - scale_x_y: 1.05 - spp: true - yolo_loss: YOLOv3Loss - nms: MatrixNMS - drop_block: true YOLOv3Loss: ignore_thresh: 0.7 - scale_x_y: 1.05 + downsample: [32, 16, 8] label_smooth: false - use_fine_grained_loss: true + scale_x_y: 1.05 iou_loss: IouLoss iou_aware_loss: IouAwareLoss IouLoss: loss_weight: 2.5 - max_height: 608 - max_width: 608 + loss_square: true IouAwareLoss: loss_weight: 1.0 - max_height: 608 - max_width: 608 -MatrixNMS: - background_label: -1 +BBoxPostProcess: + decode: + name: YOLOBox + conf_thresh: 0.01 + downsample_ratio: 32 + clip_bbox: true + scale_x_y: 1.05 + nms: + name: MatrixNMS keep_top_k: 100 - normalized: false score_threshold: 0.01 post_threshold: 0.01 - -LearningRate: - base_lr: 0.00333 - schedulers: - - !PiecewiseDecay - gamma: 0.1 - milestones: - - 400000 - - 450000 - - !LinearWarmup - start_factor: 0. 
- steps: 4000 - -OptimizerBuilder: - optimizer: - momentum: 0.9 - type: Momentum - regularizer: - factor: 0.0005 - type: L2 - -_READER_: 'ppyolo_reader.yml' -EvalReader: - inputs_def: - fields: ['image', 'im_size', 'im_id'] - num_max_boxes: 90 - dataset: - !COCODataSet - image_dir: test2017 - anno_path: annotations/image_info_test-dev2017.json - dataset_dir: dataset/coco - with_background: false - sample_transforms: - - !DecodeImage - to_rgb: True - - !ResizeImage - target_size: 608 - interp: 1 - - !NormalizeImage - mean: [0.485, 0.456, 0.406] - std: [0.229, 0.224, 0.225] - is_scale: True - is_channel_first: false - - !Permute - to_bgr: false - channel_first: True - batch_size: 1 - -TestReader: - dataset: - !ImageFolder - use_default_label: true - with_background: false - sample_transforms: - - !DecodeImage - to_rgb: True - - !ResizeImage - target_size: 608 - interp: 1 - - !NormalizeImage - mean: [0.485, 0.456, 0.406] - std: [0.229, 0.224, 0.225] - is_scale: True - is_channel_first: false - - !Permute - to_bgr: false - channel_first: True + nms_top_k: -1 + background_label: -1 diff --git a/configs/smrt/ppyolo/ppyolov2_r50vd_dcn_365e_battery_1024.yml b/configs/smrt/ppyolo/ppyolov2_r50vd_dcn_365e_battery_1024.yml new file mode 100644 index 0000000000000000000000000000000000000000..d3b7ac28fc68dfb85c0bd0d67f61b77b844bd034 --- /dev/null +++ b/configs/smrt/ppyolo/ppyolov2_r50vd_dcn_365e_battery_1024.yml @@ -0,0 +1,154 @@ +architecture: YOLOv3 +pretrain_weights: https://paddledet.bj.bcebos.com/models/ppyolov2_r50vd_dcn_365e_coco.pdparams +norm_type: sync_bn +use_ema: true +ema_decay: 0.9998 +use_gpu: true +use_xpu: false +log_iter: 100 +save_dir: output + +metric: COCO +num_classes: 45 + +TrainDataset: + !COCODataSet + image_dir: images + anno_path: annotations/train.json + dataset_dir: dataset/battery_mini + data_fields: ['image', 'gt_bbox', 'gt_class', 'is_crowd'] + +EvalDataset: + !COCODataSet + image_dir: images + anno_path: annotations/test.json + dataset_dir: dataset/battery_mini + +TestDataset: + !ImageFolder + anno_path: annotations/test.json # also support txt (like VOC's label_list.txt) + dataset_dir: dataset/battery_mini # if set, anno_path will be 'dataset_dir/anno_path' + + +epoch: 40 +LearningRate: + base_lr: 0.0001 + schedulers: + - !PiecewiseDecay + gamma: 0.1 + milestones: + - 80 + - !LinearWarmup + start_factor: 0. 
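+# This *_1024 file appears to mirror the 640-input battery config, differing
+# mainly in the reader: BatchRandomResize samples 960-1088 around a 1024
+# base, eval/test Resize is 1024x1024, and the train batch_size drops from
+# 8 to 4. The epoch-40 vs milestone-80 caveat noted above applies here too.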
+ steps: 1000 + +snapshot_epoch: 5 +worker_num: 2 +TrainReader: + inputs_def: + num_max_boxes: 100 + sample_transforms: + - Decode: {} + - RandomDistort: {} + - RandomFlip: {} + batch_transforms: + - BatchRandomResize: {target_size: [960, 992, 1024, 1056, 1088], random_size: True, random_interp: True, keep_ratio: False} + - NormalizeBox: {} + - PadBox: {num_max_boxes: 100} + - BboxXYXY2XYWH: {} + - NormalizeImage: {mean: [0.485, 0.456, 0.406], std: [0.229, 0.224, 0.225], is_scale: True} + - Permute: {} + - Gt2YoloTarget: {anchor_masks: [[6, 7, 8], [3, 4, 5], [0, 1, 2]], anchors: [[10, 13], [16, 30], [33, 23], [30, 61], [62, 45], [59, 119], [116, 90], [156, 198], [373, 326]], downsample_ratios: [32, 16, 8]} + batch_size: 4 + shuffle: true + drop_last: true + use_shared_memory: true + +EvalReader: + sample_transforms: + - Decode: {} + - Resize: {target_size: [1024, 1024], keep_ratio: False, interp: 2} + - NormalizeImage: {mean: [0.485, 0.456, 0.406], std: [0.229, 0.224, 0.225], is_scale: True} + - Permute: {} + batch_size: 8 + +TestReader: + inputs_def: + image_shape: [3, 1024, 1024] + sample_transforms: + - Decode: {} + - Resize: {target_size: [1024, 1024], keep_ratio: False, interp: 2} + - NormalizeImage: {mean: [0.485, 0.456, 0.406], std: [0.229, 0.224, 0.225], is_scale: True} + - Permute: {} + batch_size: 1 + + +OptimizerBuilder: + clip_grad_by_norm: 35. + optimizer: + momentum: 0.9 + type: Momentum + regularizer: + factor: 0.0005 + type: L2 + + +YOLOv3: + backbone: ResNet + neck: PPYOLOPAN + yolo_head: YOLOv3Head + post_process: BBoxPostProcess + +ResNet: + depth: 50 + variant: d + return_idx: [1, 2, 3] + dcn_v2_stages: [3] + freeze_at: -1 + freeze_norm: false + norm_decay: 0. + +PPYOLOPAN: + drop_block: true + block_size: 3 + keep_prob: 0.9 + spp: true + +YOLOv3Head: + anchors: [[10, 13], [16, 30], [33, 23], + [30, 61], [62, 45], [59, 119], + [116, 90], [156, 198], [373, 326]] + anchor_masks: [[6, 7, 8], [3, 4, 5], [0, 1, 2]] + loss: YOLOv3Loss + iou_aware: true + iou_aware_factor: 0.4 + +YOLOv3Loss: + ignore_thresh: 0.7 + downsample: [32, 16, 8] + label_smooth: false + scale_x_y: 1.05 + iou_loss: IouLoss + iou_aware_loss: IouAwareLoss + +IouLoss: + loss_weight: 2.5 + loss_square: true + +IouAwareLoss: + loss_weight: 1.0 + +BBoxPostProcess: + decode: + name: YOLOBox + conf_thresh: 0.01 + downsample_ratio: 32 + clip_bbox: true + scale_x_y: 1.05 + nms: + name: MatrixNMS + keep_top_k: 100 + score_threshold: 0.01 + post_threshold: 0.01 + nms_top_k: -1 + background_label: -1 diff --git a/configs/smrt/ppyolo/ppyolov2_r50vd_dcn_365e_lvjian1_1024.yml b/configs/smrt/ppyolo/ppyolov2_r50vd_dcn_365e_lvjian1_1024.yml new file mode 100644 index 0000000000000000000000000000000000000000..6138e875e83a9a91708159b7f99e692c901f4f1b --- /dev/null +++ b/configs/smrt/ppyolo/ppyolov2_r50vd_dcn_365e_lvjian1_1024.yml @@ -0,0 +1,155 @@ +architecture: YOLOv3 +pretrain_weights: https://paddledet.bj.bcebos.com/models/ppyolov2_r50vd_dcn_365e_coco.pdparams +norm_type: sync_bn +use_ema: true +ema_decay: 0.9998 +use_gpu: true +use_xpu: false +log_iter: 100 +save_dir: output + +metric: COCO +num_classes: 5 + +TrainDataset: + !COCODataSet + image_dir: images + anno_path: train.json + dataset_dir: dataset/slice_lvjian1_data/train + data_fields: ['image', 'gt_bbox', 'gt_class', 'is_crowd'] + +EvalDataset: + !COCODataSet + image_dir: images + anno_path: val.json + dataset_dir: dataset/slice_lvjian1_data/eval + +TestDataset: + !ImageFolder + anno_path: val.json + dataset_dir: dataset/slice_lvjian1_data/eval + + +epoch: 20 
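+# Dataset note (inferred from the path names only): slice_lvjian1_data/train
+# and .../eval look like crops produced by offline image slicing of the
+# lvjian1 (aluminum-part) defect set, a common tactic for small defects.
+# A typical launch, using the standard PaddleDetection entry point:
+#   python -m paddle.distributed.launch --gpus 0,1,2,3 tools/train.py \
+#     -c configs/smrt/ppyolo/ppyolov2_r50vd_dcn_365e_lvjian1_1024.yml --eval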
+LearningRate: + base_lr: 0.0002 + schedulers: + - !PiecewiseDecay + gamma: 0.1 + milestones: + - 80 + - !LinearWarmup + start_factor: 0. + steps: 1000 + + +snapshot_epoch: 3 +worker_num: 2 +TrainReader: + inputs_def: + num_max_boxes: 100 + sample_transforms: + - Decode: {} + - RandomDistort: {} + - RandomExpand: {fill_value: [123.675, 116.28, 103.53]} + - RandomCrop: {} + - RandomFlip: {} + batch_transforms: + - BatchRandomResize: {target_size: [960, 992, 1024, 1056, 1088], random_size: True, random_interp: True, keep_ratio: False} + - NormalizeBox: {} + - PadBox: {num_max_boxes: 100} + - BboxXYXY2XYWH: {} + - NormalizeImage: {mean: [0.485, 0.456, 0.406], std: [0.229, 0.224, 0.225], is_scale: True} + - Permute: {} + - Gt2YoloTarget: {anchor_masks: [[6, 7, 8], [3, 4, 5], [0, 1, 2]], anchors: [[8, 7], [24, 12], [14, 25], [37, 35], [30, 140], [89, 52], [93, 189], [226, 99], [264, 352]], downsample_ratios: [32, 16, 8]} + batch_size: 2 + shuffle: true + drop_last: true + use_shared_memory: true + +EvalReader: + sample_transforms: + - Decode: {} + - Resize: {target_size: [1024, 1024], keep_ratio: False, interp: 2} + - NormalizeImage: {mean: [0.485, 0.456, 0.406], std: [0.229, 0.224, 0.225], is_scale: True} + - Permute: {} + batch_size: 8 + +TestReader: + inputs_def: + image_shape: [3, 1024, 1024] + sample_transforms: + - Decode: {} + - Resize: {target_size: [1024, 1024], keep_ratio: False, interp: 2} + - NormalizeImage: {mean: [0.485, 0.456, 0.406], std: [0.229, 0.224, 0.225], is_scale: True} + - Permute: {} + batch_size: 1 + +OptimizerBuilder: + clip_grad_by_norm: 35. + optimizer: + momentum: 0.9 + type: Momentum + regularizer: + factor: 0.0005 + type: L2 + +YOLOv3: + backbone: ResNet + neck: PPYOLOPAN + yolo_head: YOLOv3Head + post_process: BBoxPostProcess + +ResNet: + depth: 50 + variant: d + return_idx: [1, 2, 3] + dcn_v2_stages: [3] + freeze_at: -1 + freeze_norm: false + norm_decay: 0. 
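+# Anchor note: the anchors below ([8, 7] ... [264, 352]) are not the COCO
+# defaults; they look re-clustered (e.g. k-means over lvjian1 boxes) for
+# small defects. Either way, Gt2YoloTarget.anchors in the TrainReader above
+# must stay identical to YOLOv3Head.anchors below, since training targets
+# are built against the same anchor set the head decodes.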
+ +PPYOLOPAN: + drop_block: true + block_size: 3 + keep_prob: 0.9 + spp: true + +YOLOv3Head: + anchors: [[8, 7], [24, 12], [14, 25], + [37, 35], [30, 140], [89, 52], + [93, 189], [226, 99], [264, 352]] + anchor_masks: [[6, 7, 8], [3, 4, 5], [0, 1, 2]] + loss: YOLOv3Loss + iou_aware: true + iou_aware_factor: 0.5 + +YOLOv3Loss: + ignore_thresh: 0.7 + downsample: [32, 16, 8] + label_smooth: false + scale_x_y: 1.05 + iou_loss: IouLoss + iou_aware_loss: IouAwareLoss + +IouLoss: + loss_weight: 2.5 + loss_square: true + +IouAwareLoss: + loss_weight: 1.0 + +BBoxPostProcess: + decode: + name: YOLOBox + conf_thresh: 0.01 + downsample_ratio: 32 + clip_bbox: true + scale_x_y: 1.05 + nms: + name: MatrixNMS + keep_top_k: 100 + score_threshold: 0.01 + post_threshold: 0.01 + nms_top_k: -1 + background_label: -1 diff --git a/configs/smrt/ppyolo/ppyolov2_r50vd_dcn_365e_lvjian1_640.yml b/configs/smrt/ppyolo/ppyolov2_r50vd_dcn_365e_lvjian1_640.yml new file mode 100644 index 0000000000000000000000000000000000000000..5da1090006a2e1ba967c0870ba7311306ec1a164 --- /dev/null +++ b/configs/smrt/ppyolo/ppyolov2_r50vd_dcn_365e_lvjian1_640.yml @@ -0,0 +1,155 @@ +architecture: YOLOv3 +pretrain_weights: https://paddledet.bj.bcebos.com/models/ppyolov2_r50vd_dcn_365e_coco.pdparams +norm_type: sync_bn +use_ema: true +ema_decay: 0.9998 +use_gpu: true +use_xpu: false +log_iter: 100 +save_dir: output + +metric: COCO +num_classes: 5 + +TrainDataset: + !COCODataSet + image_dir: images + anno_path: train.json + dataset_dir: dataset/slice_lvjian1_data/train + data_fields: ['image', 'gt_bbox', 'gt_class', 'is_crowd'] + +EvalDataset: + !COCODataSet + image_dir: images + anno_path: val.json + dataset_dir: dataset/slice_lvjian1_data/eval + +TestDataset: + !ImageFolder + anno_path: val.json + dataset_dir: dataset/slice_lvjian1_data/eval + + +epoch: 20 +LearningRate: + base_lr: 0.0002 + schedulers: + - !PiecewiseDecay + gamma: 0.1 + milestones: + - 80 + - !LinearWarmup + start_factor: 0. + steps: 1000 + + +snapshot_epoch: 3 +worker_num: 2 +TrainReader: + inputs_def: + num_max_boxes: 100 + sample_transforms: + - Decode: {} + - RandomDistort: {} + - RandomExpand: {fill_value: [123.675, 116.28, 103.53]} + - RandomCrop: {} + - RandomFlip: {} + batch_transforms: + - BatchRandomResize: {target_size: [576, 608, 640, 672, 704], random_size: True, random_interp: True, keep_ratio: False} + - NormalizeBox: {} + - PadBox: {num_max_boxes: 100} + - BboxXYXY2XYWH: {} + - NormalizeImage: {mean: [0.485, 0.456, 0.406], std: [0.229, 0.224, 0.225], is_scale: True} + - Permute: {} + - Gt2YoloTarget: {anchor_masks: [[6, 7, 8], [3, 4, 5], [0, 1, 2]], anchors: [[8, 7], [24, 12], [14, 25], [37, 35], [30, 140], [89, 52], [93, 189], [226, 99], [264, 352]], downsample_ratios: [32, 16, 8]} + batch_size: 2 + shuffle: true + drop_last: true + use_shared_memory: true + +EvalReader: + sample_transforms: + - Decode: {} + - Resize: {target_size: [640, 640], keep_ratio: False, interp: 2} + - NormalizeImage: {mean: [0.485, 0.456, 0.406], std: [0.229, 0.224, 0.225], is_scale: True} + - Permute: {} + batch_size: 8 + +TestReader: + inputs_def: + image_shape: [3, 640, 640] + sample_transforms: + - Decode: {} + - Resize: {target_size: [640, 640], keep_ratio: False, interp: 2} + - NormalizeImage: {mean: [0.485, 0.456, 0.406], std: [0.229, 0.224, 0.225], is_scale: True} + - Permute: {} + batch_size: 1 + +OptimizerBuilder: + clip_grad_by_norm: 35. 
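+# Neck summary (paraphrasing the PP-YOLOv2 design, hedged): PPYOLOPAN is the
+# PAN-style neck used throughout these files; drop_block with block_size 3
+# and keep_prob 0.9 applies DropBlock regularization inside it, and
+# spp: true adds an SPP module on the deepest level to widen the receptive
+# field.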
+ optimizer: + momentum: 0.9 + type: Momentum + regularizer: + factor: 0.0005 + type: L2 + +YOLOv3: + backbone: ResNet + neck: PPYOLOPAN + yolo_head: YOLOv3Head + post_process: BBoxPostProcess + +ResNet: + depth: 50 + variant: d + return_idx: [1, 2, 3] + dcn_v2_stages: [3] + freeze_at: -1 + freeze_norm: false + norm_decay: 0. + +PPYOLOPAN: + drop_block: true + block_size: 3 + keep_prob: 0.9 + spp: true + +YOLOv3Head: + anchors: [[8, 7], [24, 12], [14, 25], + [37, 35], [30, 140], [89, 52], + [93, 189], [226, 99], [264, 352]] + anchor_masks: [[6, 7, 8], [3, 4, 5], [0, 1, 2]] + loss: YOLOv3Loss + iou_aware: true + iou_aware_factor: 0.5 + +YOLOv3Loss: + ignore_thresh: 0.7 + downsample: [32, 16, 8] + label_smooth: false + scale_x_y: 1.05 + iou_loss: IouLoss + iou_aware_loss: IouAwareLoss + +IouLoss: + loss_weight: 2.5 + loss_square: true + +IouAwareLoss: + loss_weight: 1.0 + +BBoxPostProcess: + decode: + name: YOLOBox + conf_thresh: 0.01 + downsample_ratio: 32 + clip_bbox: true + scale_x_y: 1.05 + nms: + name: MatrixNMS + keep_top_k: 100 + score_threshold: 0.01 + post_threshold: 0.01 + nms_top_k: -1 + background_label: -1 diff --git a/configs/smrt/ppyolo/ppyolov2_r50vd_dcn_365e_renche_1024.yml b/configs/smrt/ppyolo/ppyolov2_r50vd_dcn_365e_renche_1024.yml new file mode 100644 index 0000000000000000000000000000000000000000..611ea34a6b6bed018698f7c2fc7f1e6cf6528988 --- /dev/null +++ b/configs/smrt/ppyolo/ppyolov2_r50vd_dcn_365e_renche_1024.yml @@ -0,0 +1,156 @@ +architecture: YOLOv3 +pretrain_weights: https://paddledet.bj.bcebos.com/models/ppyolov2_r50vd_dcn_365e_coco.pdparams +norm_type: sync_bn +use_ema: true +ema_decay: 0.9998 +use_gpu: true +use_xpu: false +log_iter: 100 +save_dir: output + +metric: COCO +num_classes: 22 + +TrainDataset: + !COCODataSet + image_dir: train_images + anno_path: train.json + dataset_dir: dataset/renche + data_fields: ['image', 'gt_bbox', 'gt_class', 'is_crowd'] + +EvalDataset: + !COCODataSet + image_dir: train_images + anno_path: test.json + dataset_dir: dataset/renche + +TestDataset: + !ImageFolder + anno_path: dataset/renche/test.json + + +epoch: 100 +LearningRate: + base_lr: 0.0002 + schedulers: + - !PiecewiseDecay + gamma: 0.1 + milestones: + - 80 + - !LinearWarmup + start_factor: 0. 
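+# Path note for the renche configs (our reading of the dataset loaders):
+# TestDataset gives a full relative anno_path (dataset/renche/test.json)
+# with no dataset_dir; per the inline comment in the battery config above,
+# anno_path is only prefixed when dataset_dir is set, so this bare path
+# resolves from the working directory.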
+ steps: 1000 + + +snapshot_epoch: 3 +worker_num: 8 +TrainReader: + inputs_def: + num_max_boxes: 100 + sample_transforms: + - Decode: {} + - RandomDistort: {} + - RandomExpand: {fill_value: [123.675, 116.28, 103.53]} + - RandomCrop: {} + - RandomFlip: {} + batch_transforms: + - BatchRandomResize: {target_size: [960, 992, 1024, 1056, 1088], random_size: True, random_interp: True, keep_ratio: False} + - NormalizeBox: {} + - PadBox: {num_max_boxes: 100} + - BboxXYXY2XYWH: {} + - NormalizeImage: {mean: [0.485, 0.456, 0.406], std: [0.229, 0.224, 0.225], is_scale: True} + - Permute: {} + - Gt2YoloTarget: {anchor_masks: [[6, 7, 8], [3, 4, 5], [0, 1, 2]], anchors: [[10, 13], [16, 30], [33, 23], [30, 61], [62, 45], [59, 119], [116, 90], [156, 198], [373, 326]], downsample_ratios: [32, 16, 8]} + batch_size: 2 + shuffle: true + drop_last: true + use_shared_memory: true + +EvalReader: + sample_transforms: + - Decode: {} + - Resize: {target_size: [1024, 1024], keep_ratio: False, interp: 2} + - NormalizeImage: {mean: [0.485, 0.456, 0.406], std: [0.229, 0.224, 0.225], is_scale: True} + - Permute: {} + batch_size: 8 + +TestReader: + inputs_def: + image_shape: [3, 1024, 1024] + sample_transforms: + - Decode: {} + - Resize: {target_size: [1024, 1024], keep_ratio: False, interp: 2} + - NormalizeImage: {mean: [0.485, 0.456, 0.406], std: [0.229, 0.224, 0.225], is_scale: True} + - Permute: {} + batch_size: 1 + + +OptimizerBuilder: + clip_grad_by_norm: 35. + optimizer: + momentum: 0.9 + type: Momentum + regularizer: + factor: 0.0005 + type: L2 + + +YOLOv3: + backbone: ResNet + neck: PPYOLOPAN + yolo_head: YOLOv3Head + post_process: BBoxPostProcess + +ResNet: + depth: 50 + variant: d + return_idx: [1, 2, 3] + dcn_v2_stages: [3] + freeze_at: -1 + freeze_norm: false + norm_decay: 0. 
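+# Head note (per the PP-YOLO paper, as we read it): with iou_aware: true the
+# head below adds an IoU-prediction branch, supervised by IouAwareLoss, and
+# the final confidence fuses the two scores roughly as
+#   score = cls_conf^(1 - alpha) * iou_pred^alpha,  alpha = iou_aware_factor.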
+ +PPYOLOPAN: + drop_block: true + block_size: 3 + keep_prob: 0.9 + spp: true + +YOLOv3Head: + anchors: [[10, 13], [16, 30], [33, 23], + [30, 61], [62, 45], [59, 119], + [116, 90], [156, 198], [373, 326]] + anchor_masks: [[6, 7, 8], [3, 4, 5], [0, 1, 2]] + loss: YOLOv3Loss + iou_aware: true + iou_aware_factor: 0.5 + +YOLOv3Loss: + ignore_thresh: 0.7 + downsample: [32, 16, 8] + label_smooth: false + scale_x_y: 1.05 + iou_loss: IouLoss + iou_aware_loss: IouAwareLoss + +IouLoss: + loss_weight: 2.5 + loss_square: true + +IouAwareLoss: + loss_weight: 1.0 + +BBoxPostProcess: + decode: + name: YOLOBox + conf_thresh: 0.01 + downsample_ratio: 32 + clip_bbox: true + scale_x_y: 1.05 + nms: + name: MatrixNMS + keep_top_k: 100 + score_threshold: 0.01 + post_threshold: 0.01 + nms_top_k: -1 + background_label: -1 diff --git a/configs/smrt/ppyolo/ppyolov2_r50vd_dcn_365e_renche_640.yml b/configs/smrt/ppyolo/ppyolov2_r50vd_dcn_365e_renche_640.yml new file mode 100644 index 0000000000000000000000000000000000000000..37fb675f0acc4585f5ded137db46473b57c517c0 --- /dev/null +++ b/configs/smrt/ppyolo/ppyolov2_r50vd_dcn_365e_renche_640.yml @@ -0,0 +1,156 @@ +architecture: YOLOv3 +pretrain_weights: https://paddledet.bj.bcebos.com/models/ppyolov2_r50vd_dcn_365e_coco.pdparams +norm_type: sync_bn +use_ema: true +ema_decay: 0.9998 +use_gpu: true +use_xpu: false +log_iter: 100 +save_dir: output + +metric: COCO +num_classes: 22 + +TrainDataset: + !COCODataSet + image_dir: train_images + anno_path: train.json + dataset_dir: dataset/coco/renche + data_fields: ['image', 'gt_bbox', 'gt_class', 'is_crowd'] + +EvalDataset: + !COCODataSet + image_dir: train_images + anno_path: test.json + dataset_dir: dataset/coco/renche + +TestDataset: + !ImageFolder + anno_path: dataset/coco/renche/test.json + + +epoch: 100 +LearningRate: + base_lr: 0.0002 + schedulers: + - !PiecewiseDecay + gamma: 0.1 + milestones: + - 80 + - !LinearWarmup + start_factor: 0. + steps: 1000 + + +snapshot_epoch: 3 +worker_num: 8 +TrainReader: + inputs_def: + num_max_boxes: 100 + sample_transforms: + - Decode: {} + - RandomDistort: {} + - RandomExpand: {fill_value: [123.675, 116.28, 103.53]} + - RandomCrop: {} + - RandomFlip: {} + batch_transforms: + - BatchRandomResize: {target_size: [576, 608, 640, 672, 704], random_size: True, random_interp: True, keep_ratio: False} + - NormalizeBox: {} + - PadBox: {num_max_boxes: 100} + - BboxXYXY2XYWH: {} + - NormalizeImage: {mean: [0.485, 0.456, 0.406], std: [0.229, 0.224, 0.225], is_scale: True} + - Permute: {} + - Gt2YoloTarget: {anchor_masks: [[6, 7, 8], [3, 4, 5], [0, 1, 2]], anchors: [[10, 13], [16, 30], [33, 23], [30, 61], [62, 45], [59, 119], [116, 90], [156, 198], [373, 326]], downsample_ratios: [32, 16, 8]} + batch_size: 2 + shuffle: true + drop_last: true + use_shared_memory: true + +EvalReader: + sample_transforms: + - Decode: {} + - Resize: {target_size: [640, 640], keep_ratio: False, interp: 2} + - NormalizeImage: {mean: [0.485, 0.456, 0.406], std: [0.229, 0.224, 0.225], is_scale: True} + - Permute: {} + batch_size: 8 + +TestReader: + inputs_def: + image_shape: [3, 640, 640] + sample_transforms: + - Decode: {} + - Resize: {target_size: [640, 640], keep_ratio: False, interp: 2} + - NormalizeImage: {mean: [0.485, 0.456, 0.406], std: [0.229, 0.224, 0.225], is_scale: True} + - Permute: {} + batch_size: 1 + + +OptimizerBuilder: + clip_grad_by_norm: 35. 
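+# Consistency caveat (verifiable in this diff): this renche_640 file points
+# at dataset/coco/renche (and dataset/coco/renche/test.json) while every
+# other renche config in the PR uses dataset/renche; one of the two prefixes
+# is presumably stale and should be unified before training.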
+ optimizer: + momentum: 0.9 + type: Momentum + regularizer: + factor: 0.0005 + type: L2 + + +YOLOv3: + backbone: ResNet + neck: PPYOLOPAN + yolo_head: YOLOv3Head + post_process: BBoxPostProcess + +ResNet: + depth: 50 + variant: d + return_idx: [1, 2, 3] + dcn_v2_stages: [3] + freeze_at: -1 + freeze_norm: false + norm_decay: 0. + +PPYOLOPAN: + drop_block: true + block_size: 3 + keep_prob: 0.9 + spp: true + +YOLOv3Head: + anchors: [[10, 13], [16, 30], [33, 23], + [30, 61], [62, 45], [59, 119], + [116, 90], [156, 198], [373, 326]] + anchor_masks: [[6, 7, 8], [3, 4, 5], [0, 1, 2]] + loss: YOLOv3Loss + iou_aware: true + iou_aware_factor: 0.5 + +YOLOv3Loss: + ignore_thresh: 0.7 + downsample: [32, 16, 8] + label_smooth: false + scale_x_y: 1.05 + iou_loss: IouLoss + iou_aware_loss: IouAwareLoss + +IouLoss: + loss_weight: 2.5 + loss_square: true + +IouAwareLoss: + loss_weight: 1.0 + +BBoxPostProcess: + decode: + name: YOLOBox + conf_thresh: 0.01 + downsample_ratio: 32 + clip_bbox: true + scale_x_y: 1.05 + nms: + name: MatrixNMS + keep_top_k: 100 + score_threshold: 0.01 + post_threshold: 0.01 + nms_top_k: -1 + background_label: -1 diff --git a/configs/smrt/ppyoloe/ppyoloe_crn_l_300e_battery.yml b/configs/smrt/ppyoloe/ppyoloe_crn_l_300e_battery.yml new file mode 100644 index 0000000000000000000000000000000000000000..bc58d999cabfdfb8f2252ca0e34c73e118ba70e9 --- /dev/null +++ b/configs/smrt/ppyoloe/ppyoloe_crn_l_300e_battery.yml @@ -0,0 +1,140 @@ +weights: output/ppyoloe_crn_l_300e_battery/model_final +pretrain_weights: https://paddledet.bj.bcebos.com/models/ppyoloe_crn_l_300e_coco.pdparams +depth_mult: 1.0 +width_mult: 1.0 + +worker_num: 4 +eval_height: &eval_height 640 +eval_width: &eval_width 640 +eval_size: &eval_size [*eval_height, *eval_width] + +metric: COCO +num_classes: 45 + +TrainDataset: + !COCODataSet + image_dir: images + anno_path: annotations/train.json + dataset_dir: dataset/battery_mini + data_fields: ['image', 'gt_bbox', 'gt_class', 'is_crowd'] + +EvalDataset: + !COCODataSet + image_dir: images + anno_path: annotations/test.json + dataset_dir: dataset/battery_mini + +TestDataset: + !ImageFolder + anno_path: annotations/test.json + dataset_dir: dataset/battery_mini + +epoch: 30 +LearningRate: + base_lr: 0.0005 + schedulers: + - !CosineDecay + max_epochs: 36 + - !LinearWarmup + start_factor: 0. 
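+# Schedule note for the PPYOLOE configs: warmup here is measured in epochs
+# (3) rather than steps, and CosineDecay anneals over max_epochs: 36 while
+# training stops at epoch: 30, so the lr ends slightly above the cosine
+# floor. The &eval_height / *eval_height pairs above are plain YAML anchors
+# and aliases, keeping the three eval_* keys in sync from one definition.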
+ epochs: 3 + +TrainReader: + sample_transforms: + - Decode: {} + - RandomFlip: {} + batch_transforms: + - BatchRandomResize: {target_size: [512, 544, 576, 608, 640, 672, 704, 736, 768], random_size: True, random_interp: True, keep_ratio: False} + - NormalizeImage: {mean: [0.485, 0.456, 0.406], std: [0.229, 0.224, 0.225], is_scale: True} + - Permute: {} + - PadGT: {} + batch_size: 4 + shuffle: true + drop_last: true + use_shared_memory: true + collate_batch: true + +EvalReader: + sample_transforms: + - Decode: {} + - Resize: {target_size: *eval_size, keep_ratio: False, interp: 2} + - NormalizeImage: {mean: [0.485, 0.456, 0.406], std: [0.229, 0.224, 0.225], is_scale: True} + - Permute: {} + batch_size: 1 + +TestReader: + inputs_def: + image_shape: [3, *eval_height, *eval_width] + sample_transforms: + - Decode: {} + - Resize: {target_size: *eval_size, keep_ratio: False, interp: 2} + - NormalizeImage: {mean: [0.485, 0.456, 0.406], std: [0.229, 0.224, 0.225], is_scale: True} + - Permute: {} + batch_size: 1 + +use_gpu: true +use_xpu: false +log_iter: 100 +save_dir: output +snapshot_epoch: 5 +print_flops: false + +# Exporting the model +export: + post_process: True # Whether post-processing is included in the network when export model. + nms: True # Whether NMS is included in the network when export model. + benchmark: False # It is used to testing model performance, if set `True`, post-process and NMS will not be exported. + +OptimizerBuilder: + optimizer: + momentum: 0.9 + type: Momentum + regularizer: + factor: 0.0005 + type: L2 + +architecture: YOLOv3 +norm_type: sync_bn +use_ema: true +ema_decay: 0.9998 + +YOLOv3: + backbone: CSPResNet + neck: CustomCSPPAN + yolo_head: PPYOLOEHead + post_process: ~ + +CSPResNet: + layers: [3, 6, 6, 3] + channels: [64, 128, 256, 512, 1024] + return_idx: [1, 2, 3] + use_large_stem: True + +CustomCSPPAN: + out_channels: [768, 384, 192] + stage_num: 1 + block_num: 3 + act: 'swish' + spp: true + +PPYOLOEHead: + fpn_strides: [32, 16, 8] + grid_cell_scale: 5.0 + grid_cell_offset: 0.5 + static_assigner_epoch: 100 + use_varifocal_loss: True + loss_weight: {class: 1.0, iou: 2.5, dfl: 0.5} + static_assigner: + name: ATSSAssigner + topk: 9 + assigner: + name: TaskAlignedAssigner + topk: 13 + alpha: 1.0 + beta: 6.0 + nms: + name: MultiClassNMS + nms_top_k: 1000 + keep_top_k: 100 + score_threshold: 0.01 + nms_threshold: 0.6 diff --git a/configs/smrt/ppyoloe/ppyoloe_crn_l_300e_battery_1024.yml b/configs/smrt/ppyoloe/ppyoloe_crn_l_300e_battery_1024.yml new file mode 100644 index 0000000000000000000000000000000000000000..027e38e202eaff50e69ac0d3204541d5ae7a08a6 --- /dev/null +++ b/configs/smrt/ppyoloe/ppyoloe_crn_l_300e_battery_1024.yml @@ -0,0 +1,140 @@ +weights: output/ppyoloe_crn_l_300e_battery_1024/model_final +pretrain_weights: https://paddledet.bj.bcebos.com/models/ppyoloe_crn_l_300e_coco.pdparams +depth_mult: 1.0 +width_mult: 1.0 + +worker_num: 4 +eval_height: &eval_height 1024 +eval_width: &eval_width 1024 +eval_size: &eval_size [*eval_height, *eval_width] + +metric: COCO +num_classes: 45 + +TrainDataset: + !COCODataSet + image_dir: images + anno_path: annotations/train.json + dataset_dir: dataset/battery_mini + data_fields: ['image', 'gt_bbox', 'gt_class', 'is_crowd'] + +EvalDataset: + !COCODataSet + image_dir: images + anno_path: annotations/test.json + dataset_dir: dataset/battery_mini + +TestDataset: + !ImageFolder + anno_path: annotations/test.json + dataset_dir: dataset/battery_mini + +epoch: 30 +LearningRate: + base_lr: 0.0005 + schedulers: + - !CosineDecay + 
max_epochs: 36 + - !LinearWarmup + start_factor: 0. + epochs: 3 + +TrainReader: + sample_transforms: + - Decode: {} + - RandomFlip: {} + batch_transforms: + - BatchRandomResize: {target_size: [960, 992, 1024, 1056, 1088], random_size: True, random_interp: True, keep_ratio: False} + - NormalizeImage: {mean: [0.485, 0.456, 0.406], std: [0.229, 0.224, 0.225], is_scale: True} + - Permute: {} + - PadGT: {} + batch_size: 4 + shuffle: true + drop_last: true + use_shared_memory: true + collate_batch: true + +EvalReader: + sample_transforms: + - Decode: {} + - Resize: {target_size: *eval_size, keep_ratio: False, interp: 2} + - NormalizeImage: {mean: [0.485, 0.456, 0.406], std: [0.229, 0.224, 0.225], is_scale: True} + - Permute: {} + batch_size: 1 + +TestReader: + inputs_def: + image_shape: [3, *eval_height, *eval_width] + sample_transforms: + - Decode: {} + - Resize: {target_size: *eval_size, keep_ratio: False, interp: 2} + - NormalizeImage: {mean: [0.485, 0.456, 0.406], std: [0.229, 0.224, 0.225], is_scale: True} + - Permute: {} + batch_size: 1 + +use_gpu: true +use_xpu: false +log_iter: 100 +save_dir: output +snapshot_epoch: 5 +print_flops: false + +# Exporting the model +export: + post_process: True # Whether post-processing is included in the network when export model. + nms: True # Whether NMS is included in the network when export model. + benchmark: False # It is used to testing model performance, if set `True`, post-process and NMS will not be exported. + +OptimizerBuilder: + optimizer: + momentum: 0.9 + type: Momentum + regularizer: + factor: 0.0005 + type: L2 + +architecture: YOLOv3 +norm_type: sync_bn +use_ema: true +ema_decay: 0.9998 + +YOLOv3: + backbone: CSPResNet + neck: CustomCSPPAN + yolo_head: PPYOLOEHead + post_process: ~ + +CSPResNet: + layers: [3, 6, 6, 3] + channels: [64, 128, 256, 512, 1024] + return_idx: [1, 2, 3] + use_large_stem: True + +CustomCSPPAN: + out_channels: [768, 384, 192] + stage_num: 1 + block_num: 3 + act: 'swish' + spp: true + +PPYOLOEHead: + fpn_strides: [32, 16, 8] + grid_cell_scale: 5.0 + grid_cell_offset: 0.5 + static_assigner_epoch: 100 + use_varifocal_loss: True + loss_weight: {class: 1.0, iou: 2.5, dfl: 0.5} + static_assigner: + name: ATSSAssigner + topk: 9 + assigner: + name: TaskAlignedAssigner + topk: 13 + alpha: 1.0 + beta: 6.0 + nms: + name: MultiClassNMS + nms_top_k: 1000 + keep_top_k: 100 + score_threshold: 0.01 + nms_threshold: 0.6 diff --git a/configs/smrt/ppyoloe/ppyoloe_crn_l_300e_lvjian1.yml b/configs/smrt/ppyoloe/ppyoloe_crn_l_300e_lvjian1.yml new file mode 100644 index 0000000000000000000000000000000000000000..272caf679a296cb4375e3628aa070fd71cec9931 --- /dev/null +++ b/configs/smrt/ppyoloe/ppyoloe_crn_l_300e_lvjian1.yml @@ -0,0 +1,140 @@ +weights: output/ppyoloe_crn_l_300e_lvjian1/model_final +pretrain_weights: https://paddledet.bj.bcebos.com/models/ppyoloe_crn_l_300e_coco.pdparams +depth_mult: 1.0 +width_mult: 1.0 + +worker_num: 4 +eval_height: &eval_height 640 +eval_width: &eval_width 640 +eval_size: &eval_size [*eval_height, *eval_width] + +metric: COCO +num_classes: 5 + +TrainDataset: + !COCODataSet + image_dir: images + anno_path: train.json + dataset_dir: dataset/slice_lvjian1_data/train + data_fields: ['image', 'gt_bbox', 'gt_class', 'is_crowd'] + +EvalDataset: + !COCODataSet + image_dir: images + anno_path: val.json + dataset_dir: dataset/slice_lvjian1_data/eval + +TestDataset: + !ImageFolder + anno_path: val.json + dataset_dir: dataset/slice_lvjian1_data/eval + +epoch: 30 +LearningRate: + base_lr: 0.001 + schedulers: + - 
!CosineDecay + max_epochs: 36 + - !LinearWarmup + start_factor: 0. + epochs: 3 + +TrainReader: + sample_transforms: + - Decode: {} + - RandomFlip: {} + batch_transforms: + - BatchRandomResize: {target_size: [640, 672, 704, 736, 768], random_size: True, random_interp: True, keep_ratio: False} + - NormalizeImage: {mean: [0.485, 0.456, 0.406], std: [0.229, 0.224, 0.225], is_scale: True} + - Permute: {} + - PadGT: {} + batch_size: 8 + shuffle: true + drop_last: true + use_shared_memory: true + collate_batch: true + +EvalReader: + sample_transforms: + - Decode: {} + - Resize: {target_size: *eval_size, keep_ratio: False, interp: 2} + - NormalizeImage: {mean: [0.485, 0.456, 0.406], std: [0.229, 0.224, 0.225], is_scale: True} + - Permute: {} + batch_size: 2 + +TestReader: + inputs_def: + image_shape: [3, *eval_height, *eval_width] + sample_transforms: + - Decode: {} + - Resize: {target_size: *eval_size, keep_ratio: False, interp: 2} + - NormalizeImage: {mean: [0.485, 0.456, 0.406], std: [0.229, 0.224, 0.225], is_scale: True} + - Permute: {} + batch_size: 1 + +use_gpu: true +use_xpu: false +log_iter: 100 +save_dir: output +snapshot_epoch: 1 +print_flops: false + +# Exporting the model +export: + post_process: True # Whether post-processing is included in the network when export model. + nms: True # Whether NMS is included in the network when export model. + benchmark: False # It is used to testing model performance, if set `True`, post-process and NMS will not be exported. + +OptimizerBuilder: + optimizer: + momentum: 0.9 + type: Momentum + regularizer: + factor: 0.0005 + type: L2 + +architecture: YOLOv3 +norm_type: sync_bn +use_ema: true +ema_decay: 0.9998 + +YOLOv3: + backbone: CSPResNet + neck: CustomCSPPAN + yolo_head: PPYOLOEHead + post_process: ~ + +CSPResNet: + layers: [3, 6, 6, 3] + channels: [64, 128, 256, 512, 1024] + return_idx: [1, 2, 3] + use_large_stem: True + +CustomCSPPAN: + out_channels: [768, 384, 192] + stage_num: 1 + block_num: 3 + act: 'swish' + spp: true + +PPYOLOEHead: + fpn_strides: [32, 16, 8] + grid_cell_scale: 5.0 + grid_cell_offset: 0.5 + static_assigner_epoch: 100 + use_varifocal_loss: True + loss_weight: {class: 1.0, iou: 2.5, dfl: 0.5} + static_assigner: + name: ATSSAssigner + topk: 9 + assigner: + name: TaskAlignedAssigner + topk: 13 + alpha: 1.0 + beta: 6.0 + nms: + name: MultiClassNMS + nms_top_k: 1000 + keep_top_k: 100 + score_threshold: 0.01 + nms_threshold: 0.6 diff --git a/configs/smrt/ppyoloe/ppyoloe_crn_l_300e_lvjian1_1024.yml b/configs/smrt/ppyoloe/ppyoloe_crn_l_300e_lvjian1_1024.yml new file mode 100644 index 0000000000000000000000000000000000000000..38a14259f54dd7a515aa68e5a5f7a79909f5a40b --- /dev/null +++ b/configs/smrt/ppyoloe/ppyoloe_crn_l_300e_lvjian1_1024.yml @@ -0,0 +1,140 @@ +weights: output/ppyoloe_crn_l_300e_lvjian1_1024/model_final +pretrain_weights: https://paddledet.bj.bcebos.com/models/ppyoloe_crn_l_300e_coco.pdparams +depth_mult: 1.0 +width_mult: 1.0 + +worker_num: 4 +eval_height: &eval_height 1024 +eval_width: &eval_width 1024 +eval_size: &eval_size [*eval_height, *eval_width] + +metric: COCO +num_classes: 5 + +TrainDataset: + !COCODataSet + image_dir: images + anno_path: train.json + dataset_dir: dataset/slice_lvjian1_data/train + data_fields: ['image', 'gt_bbox', 'gt_class', 'is_crowd'] + +EvalDataset: + !COCODataSet + image_dir: images + anno_path: val.json + dataset_dir: dataset/slice_lvjian1_data/eval + +TestDataset: + !ImageFolder + anno_path: val.json + dataset_dir: dataset/slice_lvjian1_data/eval + +epoch: 30 +LearningRate: + 
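+# LR note (observation across these PPYOLOE files): base_lr roughly tracks
+# the train batch_size linearly, e.g. 0.0005 at batch 4 (battery) vs 0.001
+# at batch 8 here; the crn_m lvjian1_1024 file (0.0015 at batch 8) is the
+# one outlier from that rule.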
base_lr: 0.001 + schedulers: + - !CosineDecay + max_epochs: 36 + - !LinearWarmup + start_factor: 0. + epochs: 3 + +TrainReader: + sample_transforms: + - Decode: {} + - RandomFlip: {} + batch_transforms: + - BatchRandomResize: {target_size: [960, 992, 1024, 1056, 1088], random_size: True, random_interp: True, keep_ratio: False} + - NormalizeImage: {mean: [0.485, 0.456, 0.406], std: [0.229, 0.224, 0.225], is_scale: True} + - Permute: {} + - PadGT: {} + batch_size: 8 + shuffle: true + drop_last: true + use_shared_memory: true + collate_batch: true + +EvalReader: + sample_transforms: + - Decode: {} + - Resize: {target_size: *eval_size, keep_ratio: False, interp: 2} + - NormalizeImage: {mean: [0.485, 0.456, 0.406], std: [0.229, 0.224, 0.225], is_scale: True} + - Permute: {} + batch_size: 2 + +TestReader: + inputs_def: + image_shape: [3, *eval_height, *eval_width] + sample_transforms: + - Decode: {} + - Resize: {target_size: *eval_size, keep_ratio: False, interp: 2} + - NormalizeImage: {mean: [0.485, 0.456, 0.406], std: [0.229, 0.224, 0.225], is_scale: True} + - Permute: {} + batch_size: 1 + +use_gpu: true +use_xpu: false +log_iter: 100 +save_dir: output +snapshot_epoch: 1 +print_flops: false + +# Exporting the model +export: + post_process: True # Whether post-processing is included in the network when export model. + nms: True # Whether NMS is included in the network when export model. + benchmark: False # It is used to testing model performance, if set `True`, post-process and NMS will not be exported. + +OptimizerBuilder: + optimizer: + momentum: 0.9 + type: Momentum + regularizer: + factor: 0.0005 + type: L2 + +architecture: YOLOv3 +norm_type: sync_bn +use_ema: true +ema_decay: 0.9998 + +YOLOv3: + backbone: CSPResNet + neck: CustomCSPPAN + yolo_head: PPYOLOEHead + post_process: ~ + +CSPResNet: + layers: [3, 6, 6, 3] + channels: [64, 128, 256, 512, 1024] + return_idx: [1, 2, 3] + use_large_stem: True + +CustomCSPPAN: + out_channels: [768, 384, 192] + stage_num: 1 + block_num: 3 + act: 'swish' + spp: true + +PPYOLOEHead: + fpn_strides: [32, 16, 8] + grid_cell_scale: 5.0 + grid_cell_offset: 0.5 + static_assigner_epoch: 100 + use_varifocal_loss: True + loss_weight: {class: 1.0, iou: 2.5, dfl: 0.5} + static_assigner: + name: ATSSAssigner + topk: 9 + assigner: + name: TaskAlignedAssigner + topk: 13 + alpha: 1.0 + beta: 6.0 + nms: + name: MultiClassNMS + nms_top_k: 1000 + keep_top_k: 100 + score_threshold: 0.01 + nms_threshold: 0.6 diff --git a/configs/smrt/ppyoloe/ppyoloe_crn_l_300e_renche.yml b/configs/smrt/ppyoloe/ppyoloe_crn_l_300e_renche.yml new file mode 100644 index 0000000000000000000000000000000000000000..80c7bac76453e407d743a4e677257ebd4e2505b3 --- /dev/null +++ b/configs/smrt/ppyoloe/ppyoloe_crn_l_300e_renche.yml @@ -0,0 +1,140 @@ +weights: output/ppyoloe_crn_l_300e_renche/model_final +pretrain_weights: https://paddledet.bj.bcebos.com/models/ppyoloe_crn_l_300e_coco.pdparams +depth_mult: 1.0 +width_mult: 1.0 + +worker_num: 4 +eval_height: &eval_height 640 +eval_width: &eval_width 640 +eval_size: &eval_size [*eval_height, *eval_width] + +metric: COCO +num_classes: 22 + +TrainDataset: + !COCODataSet + image_dir: train_images + anno_path: train.json + dataset_dir: dataset/renche + data_fields: ['image', 'gt_bbox', 'gt_class', 'is_crowd'] + +EvalDataset: + !COCODataSet + image_dir: train_images + anno_path: test.json + dataset_dir: dataset/renche + +TestDataset: + !ImageFolder + anno_path: test.json + dataset_dir: dataset/renche + +epoch: 30 +LearningRate: + base_lr: 0.0005 + schedulers: + - 
!CosineDecay + max_epochs: 36 + - !LinearWarmup + start_factor: 0. + epochs: 3 + +TrainReader: + sample_transforms: + - Decode: {} + - RandomFlip: {} + batch_transforms: + - BatchRandomResize: {target_size: [512, 544, 576, 608, 640, 672, 704, 736, 768], random_size: True, random_interp: True, keep_ratio: False} + - NormalizeImage: {mean: [0.485, 0.456, 0.406], std: [0.229, 0.224, 0.225], is_scale: True} + - Permute: {} + - PadGT: {} + batch_size: 4 + shuffle: true + drop_last: true + use_shared_memory: true + collate_batch: true + +EvalReader: + sample_transforms: + - Decode: {} + - Resize: {target_size: *eval_size, keep_ratio: False, interp: 2} + - NormalizeImage: {mean: [0.485, 0.456, 0.406], std: [0.229, 0.224, 0.225], is_scale: True} + - Permute: {} + batch_size: 2 + +TestReader: + inputs_def: + image_shape: [3, *eval_height, *eval_width] + sample_transforms: + - Decode: {} + - Resize: {target_size: *eval_size, keep_ratio: False, interp: 2} + - NormalizeImage: {mean: [0.485, 0.456, 0.406], std: [0.229, 0.224, 0.225], is_scale: True} + - Permute: {} + batch_size: 1 + +use_gpu: true +use_xpu: false +log_iter: 100 +save_dir: output +snapshot_epoch: 5 +print_flops: false + +# Exporting the model +export: + post_process: True # Whether post-processing is included in the network when export model. + nms: True # Whether NMS is included in the network when export model. + benchmark: False # It is used to testing model performance, if set `True`, post-process and NMS will not be exported. + +OptimizerBuilder: + optimizer: + momentum: 0.9 + type: Momentum + regularizer: + factor: 0.0005 + type: L2 + +architecture: YOLOv3 +norm_type: sync_bn +use_ema: true +ema_decay: 0.9998 + +YOLOv3: + backbone: CSPResNet + neck: CustomCSPPAN + yolo_head: PPYOLOEHead + post_process: ~ + +CSPResNet: + layers: [3, 6, 6, 3] + channels: [64, 128, 256, 512, 1024] + return_idx: [1, 2, 3] + use_large_stem: True + +CustomCSPPAN: + out_channels: [768, 384, 192] + stage_num: 1 + block_num: 3 + act: 'swish' + spp: true + +PPYOLOEHead: + fpn_strides: [32, 16, 8] + grid_cell_scale: 5.0 + grid_cell_offset: 0.5 + static_assigner_epoch: 100 + use_varifocal_loss: True + loss_weight: {class: 1.0, iou: 2.5, dfl: 0.5} + static_assigner: + name: ATSSAssigner + topk: 9 + assigner: + name: TaskAlignedAssigner + topk: 13 + alpha: 1.0 + beta: 6.0 + nms: + name: MultiClassNMS + nms_top_k: 1000 + keep_top_k: 100 + score_threshold: 0.01 + nms_threshold: 0.6 diff --git a/configs/smrt/ppyoloe/ppyoloe_crn_l_300e_renche_1024.yml b/configs/smrt/ppyoloe/ppyoloe_crn_l_300e_renche_1024.yml new file mode 100644 index 0000000000000000000000000000000000000000..2151ecf711c0f52560f9318085f0fee2de7b8a85 --- /dev/null +++ b/configs/smrt/ppyoloe/ppyoloe_crn_l_300e_renche_1024.yml @@ -0,0 +1,140 @@ +weights: output/ppyoloe_crn_l_300e_renche_1024/model_final +pretrain_weights: https://paddledet.bj.bcebos.com/models/ppyoloe_crn_l_300e_coco.pdparams +depth_mult: 1.0 +width_mult: 1.0 + +worker_num: 4 +eval_height: &eval_height 1024 +eval_width: &eval_width 1024 +eval_size: &eval_size [*eval_height, *eval_width] + +metric: COCO +num_classes: 22 + +TrainDataset: + !COCODataSet + image_dir: train_images + anno_path: train.json + dataset_dir: dataset/renche + data_fields: ['image', 'gt_bbox', 'gt_class', 'is_crowd'] + +EvalDataset: + !COCODataSet + image_dir: train_images + anno_path: test.json + dataset_dir: dataset/renche + +TestDataset: + !ImageFolder + anno_path: test.json + dataset_dir: dataset/renche + +epoch: 30 +LearningRate: + base_lr: 0.0005 + 
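+# Assigner note (our reading of PPYOLOEHead): the head switches from the
+# static ATSSAssigner to TaskAlignedAssigner once the current epoch reaches
+# static_assigner_epoch. With static_assigner_epoch: 100 and only 30 epochs
+# scheduled, these configs run ATSS assignment for the entire run.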
schedulers: + - !CosineDecay + max_epochs: 36 + - !LinearWarmup + start_factor: 0. + epochs: 3 + +TrainReader: + sample_transforms: + - Decode: {} + - RandomFlip: {} + batch_transforms: + - BatchRandomResize: {target_size: [960, 992, 1024, 1056, 1088], random_size: True, random_interp: True, keep_ratio: False} + - NormalizeImage: {mean: [0.485, 0.456, 0.406], std: [0.229, 0.224, 0.225], is_scale: True} + - Permute: {} + - PadGT: {} + batch_size: 4 + shuffle: true + drop_last: true + use_shared_memory: true + collate_batch: true + +EvalReader: + sample_transforms: + - Decode: {} + - Resize: {target_size: *eval_size, keep_ratio: False, interp: 2} + - NormalizeImage: {mean: [0.485, 0.456, 0.406], std: [0.229, 0.224, 0.225], is_scale: True} + - Permute: {} + batch_size: 2 + +TestReader: + inputs_def: + image_shape: [3, *eval_height, *eval_width] + sample_transforms: + - Decode: {} + - Resize: {target_size: *eval_size, keep_ratio: False, interp: 2} + - NormalizeImage: {mean: [0.485, 0.456, 0.406], std: [0.229, 0.224, 0.225], is_scale: True} + - Permute: {} + batch_size: 1 + +use_gpu: true +use_xpu: false +log_iter: 100 +save_dir: output +snapshot_epoch: 5 +print_flops: false + +# Exporting the model +export: + post_process: True # Whether post-processing is included in the network when export model. + nms: True # Whether NMS is included in the network when export model. + benchmark: False # It is used to testing model performance, if set `True`, post-process and NMS will not be exported. + +OptimizerBuilder: + optimizer: + momentum: 0.9 + type: Momentum + regularizer: + factor: 0.0005 + type: L2 + +architecture: YOLOv3 +norm_type: sync_bn +use_ema: true +ema_decay: 0.9998 + +YOLOv3: + backbone: CSPResNet + neck: CustomCSPPAN + yolo_head: PPYOLOEHead + post_process: ~ + +CSPResNet: + layers: [3, 6, 6, 3] + channels: [64, 128, 256, 512, 1024] + return_idx: [1, 2, 3] + use_large_stem: True + +CustomCSPPAN: + out_channels: [768, 384, 192] + stage_num: 1 + block_num: 3 + act: 'swish' + spp: true + +PPYOLOEHead: + fpn_strides: [32, 16, 8] + grid_cell_scale: 5.0 + grid_cell_offset: 0.5 + static_assigner_epoch: 100 + use_varifocal_loss: True + loss_weight: {class: 1.0, iou: 2.5, dfl: 0.5} + static_assigner: + name: ATSSAssigner + topk: 9 + assigner: + name: TaskAlignedAssigner + topk: 13 + alpha: 1.0 + beta: 6.0 + nms: + name: MultiClassNMS + nms_top_k: 1000 + keep_top_k: 100 + score_threshold: 0.01 + nms_threshold: 0.6 diff --git a/configs/smrt/ppyoloe/ppyoloe_crn_m_300e_battery.yml b/configs/smrt/ppyoloe/ppyoloe_crn_m_300e_battery.yml new file mode 100644 index 0000000000000000000000000000000000000000..8902f32ec42da89643b85f0743799555c3abc8ec --- /dev/null +++ b/configs/smrt/ppyoloe/ppyoloe_crn_m_300e_battery.yml @@ -0,0 +1,140 @@ +weights: output/ppyoloe_crn_m_300e_battery/model_final +pretrain_weights: https://paddledet.bj.bcebos.com/models/ppyoloe_crn_m_300e_coco.pdparams +depth_mult: 0.67 +width_mult: 0.75 + +worker_num: 4 +eval_height: &eval_height 640 +eval_width: &eval_width 640 +eval_size: &eval_size [*eval_height, *eval_width] + +metric: COCO +num_classes: 45 + +TrainDataset: + !COCODataSet + image_dir: images + anno_path: annotations/train.json + dataset_dir: dataset/battery_mini + data_fields: ['image', 'gt_bbox', 'gt_class', 'is_crowd'] + +EvalDataset: + !COCODataSet + image_dir: images + anno_path: annotations/test.json + dataset_dir: dataset/battery_mini + +TestDataset: + !ImageFolder + anno_path: annotations/test.json + dataset_dir: dataset/battery_mini + +epoch: 30 +LearningRate: + 
base_lr: 0.0005 + schedulers: + - !CosineDecay + max_epochs: 36 + - !LinearWarmup + start_factor: 0. + epochs: 3 + +TrainReader: + sample_transforms: + - Decode: {} + - RandomFlip: {} + batch_transforms: + - BatchRandomResize: {target_size: [640, 672, 704, 736, 768], random_size: True, random_interp: True, keep_ratio: False} + - NormalizeImage: {mean: [0.485, 0.456, 0.406], std: [0.229, 0.224, 0.225], is_scale: True} + - Permute: {} + - PadGT: {} + batch_size: 4 + shuffle: true + drop_last: true + use_shared_memory: true + collate_batch: true + +EvalReader: + sample_transforms: + - Decode: {} + - Resize: {target_size: *eval_size, keep_ratio: False, interp: 2} + - NormalizeImage: {mean: [0.485, 0.456, 0.406], std: [0.229, 0.224, 0.225], is_scale: True} + - Permute: {} + batch_size: 1 + +TestReader: + inputs_def: + image_shape: [3, *eval_height, *eval_width] + sample_transforms: + - Decode: {} + - Resize: {target_size: *eval_size, keep_ratio: False, interp: 2} + - NormalizeImage: {mean: [0.485, 0.456, 0.406], std: [0.229, 0.224, 0.225], is_scale: True} + - Permute: {} + batch_size: 1 + +use_gpu: true +use_xpu: false +log_iter: 100 +save_dir: output +snapshot_epoch: 5 +print_flops: false + +# Exporting the model +export: + post_process: True # Whether post-processing is included in the network when export model. + nms: True # Whether NMS is included in the network when export model. + benchmark: False # It is used to testing model performance, if set `True`, post-process and NMS will not be exported. + +OptimizerBuilder: + optimizer: + momentum: 0.9 + type: Momentum + regularizer: + factor: 0.0005 + type: L2 + +architecture: YOLOv3 +norm_type: sync_bn +use_ema: true +ema_decay: 0.9998 + +YOLOv3: + backbone: CSPResNet + neck: CustomCSPPAN + yolo_head: PPYOLOEHead + post_process: ~ + +CSPResNet: + layers: [3, 6, 6, 3] + channels: [64, 128, 256, 512, 1024] + return_idx: [1, 2, 3] + use_large_stem: True + +CustomCSPPAN: + out_channels: [768, 384, 192] + stage_num: 1 + block_num: 3 + act: 'swish' + spp: true + +PPYOLOEHead: + fpn_strides: [32, 16, 8] + grid_cell_scale: 5.0 + grid_cell_offset: 0.5 + static_assigner_epoch: 100 + use_varifocal_loss: True + loss_weight: {class: 1.0, iou: 2.5, dfl: 0.5} + static_assigner: + name: ATSSAssigner + topk: 9 + assigner: + name: TaskAlignedAssigner + topk: 13 + alpha: 1.0 + beta: 6.0 + nms: + name: MultiClassNMS + nms_top_k: 1000 + keep_top_k: 100 + score_threshold: 0.01 + nms_threshold: 0.6 diff --git a/configs/smrt/ppyoloe/ppyoloe_crn_m_300e_battery_1024.yml b/configs/smrt/ppyoloe/ppyoloe_crn_m_300e_battery_1024.yml new file mode 100644 index 0000000000000000000000000000000000000000..f244c1dd13381d360440a1c7705c8f5f81abf576 --- /dev/null +++ b/configs/smrt/ppyoloe/ppyoloe_crn_m_300e_battery_1024.yml @@ -0,0 +1,140 @@ +weights: output/ppyoloe_crn_m_300e_battery_1024/model_final +pretrain_weights: https://paddledet.bj.bcebos.com/models/ppyoloe_crn_m_300e_coco.pdparams +depth_mult: 0.67 +width_mult: 0.75 + +worker_num: 4 +eval_height: &eval_height 1024 +eval_width: &eval_width 1024 +eval_size: &eval_size [*eval_height, *eval_width] + +metric: COCO +num_classes: 45 + +TrainDataset: + !COCODataSet + image_dir: images + anno_path: annotations/train.json + dataset_dir: dataset/battery_mini + data_fields: ['image', 'gt_bbox', 'gt_class', 'is_crowd'] + +EvalDataset: + !COCODataSet + image_dir: images + anno_path: annotations/test.json + dataset_dir: dataset/battery_mini + +TestDataset: + !ImageFolder + anno_path: annotations/test.json + dataset_dir: 
dataset/battery_mini + +epoch: 30 +LearningRate: + base_lr: 0.0005 + schedulers: + - !CosineDecay + max_epochs: 36 + - !LinearWarmup + start_factor: 0. + epochs: 3 + +TrainReader: + sample_transforms: + - Decode: {} + - RandomFlip: {} + batch_transforms: + - BatchRandomResize: {target_size: [960, 992, 1024, 1056, 1088], random_size: True, random_interp: True, keep_ratio: False} + - NormalizeImage: {mean: [0.485, 0.456, 0.406], std: [0.229, 0.224, 0.225], is_scale: True} + - Permute: {} + - PadGT: {} + batch_size: 4 + shuffle: true + drop_last: true + use_shared_memory: true + collate_batch: true + +EvalReader: + sample_transforms: + - Decode: {} + - Resize: {target_size: *eval_size, keep_ratio: False, interp: 2} + - NormalizeImage: {mean: [0.485, 0.456, 0.406], std: [0.229, 0.224, 0.225], is_scale: True} + - Permute: {} + batch_size: 1 + +TestReader: + inputs_def: + image_shape: [3, *eval_height, *eval_width] + sample_transforms: + - Decode: {} + - Resize: {target_size: *eval_size, keep_ratio: False, interp: 2} + - NormalizeImage: {mean: [0.485, 0.456, 0.406], std: [0.229, 0.224, 0.225], is_scale: True} + - Permute: {} + batch_size: 1 + +use_gpu: true +use_xpu: false +log_iter: 100 +save_dir: output +snapshot_epoch: 5 +print_flops: false + +# Exporting the model +export: + post_process: True # Whether post-processing is included in the network when export model. + nms: True # Whether NMS is included in the network when export model. + benchmark: False # It is used to testing model performance, if set `True`, post-process and NMS will not be exported. + +OptimizerBuilder: + optimizer: + momentum: 0.9 + type: Momentum + regularizer: + factor: 0.0005 + type: L2 + +architecture: YOLOv3 +norm_type: sync_bn +use_ema: true +ema_decay: 0.9998 + +YOLOv3: + backbone: CSPResNet + neck: CustomCSPPAN + yolo_head: PPYOLOEHead + post_process: ~ + +CSPResNet: + layers: [3, 6, 6, 3] + channels: [64, 128, 256, 512, 1024] + return_idx: [1, 2, 3] + use_large_stem: True + +CustomCSPPAN: + out_channels: [768, 384, 192] + stage_num: 1 + block_num: 3 + act: 'swish' + spp: true + +PPYOLOEHead: + fpn_strides: [32, 16, 8] + grid_cell_scale: 5.0 + grid_cell_offset: 0.5 + static_assigner_epoch: 100 + use_varifocal_loss: True + loss_weight: {class: 1.0, iou: 2.5, dfl: 0.5} + static_assigner: + name: ATSSAssigner + topk: 9 + assigner: + name: TaskAlignedAssigner + topk: 13 + alpha: 1.0 + beta: 6.0 + nms: + name: MultiClassNMS + nms_top_k: 1000 + keep_top_k: 100 + score_threshold: 0.01 + nms_threshold: 0.6 diff --git a/configs/smrt/ppyoloe/ppyoloe_crn_m_300e_lvjian1.yml b/configs/smrt/ppyoloe/ppyoloe_crn_m_300e_lvjian1.yml new file mode 100644 index 0000000000000000000000000000000000000000..7563756955b97f722a9c099dfb8ce57a90b6c6f7 --- /dev/null +++ b/configs/smrt/ppyoloe/ppyoloe_crn_m_300e_lvjian1.yml @@ -0,0 +1,140 @@ +weights: output/ppyoloe_crn_m_300e_lvjian1/model_final +pretrain_weights: https://paddledet.bj.bcebos.com/models/ppyoloe_crn_m_300e_coco.pdparams +depth_mult: 0.67 +width_mult: 0.75 + +worker_num: 4 +eval_height: &eval_height 640 +eval_width: &eval_width 640 +eval_size: &eval_size [*eval_height, *eval_width] + +metric: COCO +num_classes: 5 + +TrainDataset: + !COCODataSet + image_dir: images + anno_path: train.json + dataset_dir: dataset/slice_lvjian1_data/train + data_fields: ['image', 'gt_bbox', 'gt_class', 'is_crowd'] + +EvalDataset: + !COCODataSet + image_dir: images + anno_path: val.json + dataset_dir: dataset/slice_lvjian1_data/eval + +TestDataset: + !ImageFolder + anno_path: val.json + dataset_dir: 
dataset/slice_lvjian1_data/eval + +epoch: 30 +LearningRate: + base_lr: 0.002 + schedulers: + - !CosineDecay + max_epochs: 36 + - !LinearWarmup + start_factor: 0. + epochs: 3 + +TrainReader: + sample_transforms: + - Decode: {} + - RandomFlip: {} + batch_transforms: + - BatchRandomResize: {target_size: [640, 672, 704, 736, 768], random_size: True, random_interp: True, keep_ratio: False} + - NormalizeImage: {mean: [0.485, 0.456, 0.406], std: [0.229, 0.224, 0.225], is_scale: True} + - Permute: {} + - PadGT: {} + batch_size: 16 + shuffle: true + drop_last: true + use_shared_memory: true + collate_batch: true + +EvalReader: + sample_transforms: + - Decode: {} + - Resize: {target_size: *eval_size, keep_ratio: False, interp: 2} + - NormalizeImage: {mean: [0.485, 0.456, 0.406], std: [0.229, 0.224, 0.225], is_scale: True} + - Permute: {} + batch_size: 2 + +TestReader: + inputs_def: + image_shape: [3, *eval_height, *eval_width] + sample_transforms: + - Decode: {} + - Resize: {target_size: *eval_size, keep_ratio: False, interp: 2} + - NormalizeImage: {mean: [0.485, 0.456, 0.406], std: [0.229, 0.224, 0.225], is_scale: True} + - Permute: {} + batch_size: 1 + +use_gpu: true +use_xpu: false +log_iter: 100 +save_dir: output +snapshot_epoch: 1 +print_flops: false + +# Exporting the model +export: + post_process: True # Whether post-processing is included in the network when export model. + nms: True # Whether NMS is included in the network when export model. + benchmark: False # It is used to testing model performance, if set `True`, post-process and NMS will not be exported. + +OptimizerBuilder: + optimizer: + momentum: 0.9 + type: Momentum + regularizer: + factor: 0.0005 + type: L2 + +architecture: YOLOv3 +norm_type: sync_bn +use_ema: true +ema_decay: 0.9998 + +YOLOv3: + backbone: CSPResNet + neck: CustomCSPPAN + yolo_head: PPYOLOEHead + post_process: ~ + +CSPResNet: + layers: [3, 6, 6, 3] + channels: [64, 128, 256, 512, 1024] + return_idx: [1, 2, 3] + use_large_stem: True + +CustomCSPPAN: + out_channels: [768, 384, 192] + stage_num: 1 + block_num: 3 + act: 'swish' + spp: true + +PPYOLOEHead: + fpn_strides: [32, 16, 8] + grid_cell_scale: 5.0 + grid_cell_offset: 0.5 + static_assigner_epoch: 100 + use_varifocal_loss: True + loss_weight: {class: 1.0, iou: 2.5, dfl: 0.5} + static_assigner: + name: ATSSAssigner + topk: 9 + assigner: + name: TaskAlignedAssigner + topk: 13 + alpha: 1.0 + beta: 6.0 + nms: + name: MultiClassNMS + nms_top_k: 1000 + keep_top_k: 100 + score_threshold: 0.01 + nms_threshold: 0.6 diff --git a/configs/smrt/ppyoloe/ppyoloe_crn_m_300e_lvjian1_1024.yml b/configs/smrt/ppyoloe/ppyoloe_crn_m_300e_lvjian1_1024.yml new file mode 100644 index 0000000000000000000000000000000000000000..d15e07f8e88cd1f9d592296e71cc587a6e6892ef --- /dev/null +++ b/configs/smrt/ppyoloe/ppyoloe_crn_m_300e_lvjian1_1024.yml @@ -0,0 +1,140 @@ +weights: output/ppyoloe_crn_m_300e_lvjian1_1024/model_final +pretrain_weights: https://paddledet.bj.bcebos.com/models/ppyoloe_crn_m_300e_coco.pdparams +depth_mult: 0.67 +width_mult: 0.75 + +worker_num: 2 +eval_height: &eval_height 1024 +eval_width: &eval_width 1024 +eval_size: &eval_size [*eval_height, *eval_width] + +metric: COCO +num_classes: 5 + +TrainDataset: + !COCODataSet + image_dir: images + anno_path: train.json + dataset_dir: dataset/slice_lvjian1_data/train + data_fields: ['image', 'gt_bbox', 'gt_class', 'is_crowd'] + +EvalDataset: + !COCODataSet + image_dir: images + anno_path: val.json + dataset_dir: dataset/slice_lvjian1_data/eval + +TestDataset: + !ImageFolder + 
anno_path: val.json + dataset_dir: dataset/slice_lvjian1_data/eval + +epoch: 30 +LearningRate: + base_lr: 0.0015 + schedulers: + - !CosineDecay + max_epochs: 36 + - !LinearWarmup + start_factor: 0. + epochs: 3 + +TrainReader: + sample_transforms: + - Decode: {} + - RandomFlip: {} + batch_transforms: + - BatchRandomResize: {target_size: [960, 992, 1024, 1056, 1088], random_size: True, random_interp: True, keep_ratio: False} + - NormalizeImage: {mean: [0.485, 0.456, 0.406], std: [0.229, 0.224, 0.225], is_scale: True} + - Permute: {} + - PadGT: {} + batch_size: 8 + shuffle: true + drop_last: true + use_shared_memory: true + collate_batch: true + +EvalReader: + sample_transforms: + - Decode: {} + - Resize: {target_size: *eval_size, keep_ratio: False, interp: 2} + - NormalizeImage: {mean: [0.485, 0.456, 0.406], std: [0.229, 0.224, 0.225], is_scale: True} + - Permute: {} + batch_size: 2 + +TestReader: + inputs_def: + image_shape: [3, *eval_height, *eval_width] + sample_transforms: + - Decode: {} + - Resize: {target_size: *eval_size, keep_ratio: False, interp: 2} + - NormalizeImage: {mean: [0.485, 0.456, 0.406], std: [0.229, 0.224, 0.225], is_scale: True} + - Permute: {} + batch_size: 1 + +use_gpu: true +use_xpu: false +log_iter: 100 +save_dir: output +snapshot_epoch: 2 +print_flops: false + +# Exporting the model +export: + post_process: True # Whether post-processing is included in the network when export model. + nms: True # Whether NMS is included in the network when export model. + benchmark: False # It is used to testing model performance, if set `True`, post-process and NMS will not be exported. + +OptimizerBuilder: + optimizer: + momentum: 0.9 + type: Momentum + regularizer: + factor: 0.0005 + type: L2 + +architecture: YOLOv3 +norm_type: sync_bn +use_ema: true +ema_decay: 0.9998 + +YOLOv3: + backbone: CSPResNet + neck: CustomCSPPAN + yolo_head: PPYOLOEHead + post_process: ~ + +CSPResNet: + layers: [3, 6, 6, 3] + channels: [64, 128, 256, 512, 1024] + return_idx: [1, 2, 3] + use_large_stem: True + +CustomCSPPAN: + out_channels: [768, 384, 192] + stage_num: 1 + block_num: 3 + act: 'swish' + spp: true + +PPYOLOEHead: + fpn_strides: [32, 16, 8] + grid_cell_scale: 5.0 + grid_cell_offset: 0.5 + static_assigner_epoch: 100 + use_varifocal_loss: True + loss_weight: {class: 1.0, iou: 2.5, dfl: 0.5} + static_assigner: + name: ATSSAssigner + topk: 9 + assigner: + name: TaskAlignedAssigner + topk: 13 + alpha: 1.0 + beta: 6.0 + nms: + name: MultiClassNMS + nms_top_k: 1000 + keep_top_k: 100 + score_threshold: 0.01 + nms_threshold: 0.6 diff --git a/configs/smrt/ppyoloe/ppyoloe_crn_m_300e_renche.yml b/configs/smrt/ppyoloe/ppyoloe_crn_m_300e_renche.yml new file mode 100644 index 0000000000000000000000000000000000000000..a65cbdf540bd9e48800610516e0978d9f51b2c41 --- /dev/null +++ b/configs/smrt/ppyoloe/ppyoloe_crn_m_300e_renche.yml @@ -0,0 +1,140 @@ +weights: output/ppyoloe_crn_m_300e_renche/model_final +pretrain_weights: https://paddledet.bj.bcebos.com/models/ppyoloe_crn_m_300e_coco.pdparams +depth_mult: 0.67 +width_mult: 0.75 + +worker_num: 4 +eval_height: &eval_height 640 +eval_width: &eval_width 640 +eval_size: &eval_size [*eval_height, *eval_width] + +metric: COCO +num_classes: 22 + +TrainDataset: + !COCODataSet + image_dir: train_images + anno_path: train.json + dataset_dir: dataset/renche + data_fields: ['image', 'gt_bbox', 'gt_class', 'is_crowd'] + +EvalDataset: + !COCODataSet + image_dir: train_images + anno_path: test.json + dataset_dir: dataset/renche + +TestDataset: + !ImageFolder + anno_path: 
test.json + dataset_dir: dataset/renche + +epoch: 30 +LearningRate: + base_lr: 0.0005 + schedulers: + - !CosineDecay + max_epochs: 36 + - !LinearWarmup + start_factor: 0. + epochs: 3 + +TrainReader: + sample_transforms: + - Decode: {} + - RandomFlip: {} + batch_transforms: + - BatchRandomResize: {target_size: [640, 672, 704, 736, 768], random_size: True, random_interp: True, keep_ratio: False} + - NormalizeImage: {mean: [0.485, 0.456, 0.406], std: [0.229, 0.224, 0.225], is_scale: True} + - Permute: {} + - PadGT: {} + batch_size: 4 + shuffle: true + drop_last: true + use_shared_memory: true + collate_batch: true + +EvalReader: + sample_transforms: + - Decode: {} + - Resize: {target_size: *eval_size, keep_ratio: False, interp: 2} + - NormalizeImage: {mean: [0.485, 0.456, 0.406], std: [0.229, 0.224, 0.225], is_scale: True} + - Permute: {} + batch_size: 2 + +TestReader: + inputs_def: + image_shape: [3, *eval_height, *eval_width] + sample_transforms: + - Decode: {} + - Resize: {target_size: *eval_size, keep_ratio: False, interp: 2} + - NormalizeImage: {mean: [0.485, 0.456, 0.406], std: [0.229, 0.224, 0.225], is_scale: True} + - Permute: {} + batch_size: 1 + +use_gpu: true +use_xpu: false +log_iter: 100 +save_dir: output +snapshot_epoch: 5 +print_flops: false + +# Exporting the model +export: + post_process: True # Whether post-processing is included in the network when export model. + nms: True # Whether NMS is included in the network when export model. + benchmark: False # It is used to testing model performance, if set `True`, post-process and NMS will not be exported. + +OptimizerBuilder: + optimizer: + momentum: 0.9 + type: Momentum + regularizer: + factor: 0.0005 + type: L2 + +architecture: YOLOv3 +norm_type: sync_bn +use_ema: true +ema_decay: 0.9998 + +YOLOv3: + backbone: CSPResNet + neck: CustomCSPPAN + yolo_head: PPYOLOEHead + post_process: ~ + +CSPResNet: + layers: [3, 6, 6, 3] + channels: [64, 128, 256, 512, 1024] + return_idx: [1, 2, 3] + use_large_stem: True + +CustomCSPPAN: + out_channels: [768, 384, 192] + stage_num: 1 + block_num: 3 + act: 'swish' + spp: true + +PPYOLOEHead: + fpn_strides: [32, 16, 8] + grid_cell_scale: 5.0 + grid_cell_offset: 0.5 + static_assigner_epoch: 100 + use_varifocal_loss: True + loss_weight: {class: 1.0, iou: 2.5, dfl: 0.5} + static_assigner: + name: ATSSAssigner + topk: 9 + assigner: + name: TaskAlignedAssigner + topk: 13 + alpha: 1.0 + beta: 6.0 + nms: + name: MultiClassNMS + nms_top_k: 1000 + keep_top_k: 100 + score_threshold: 0.01 + nms_threshold: 0.6 diff --git a/configs/smrt/ppyoloe/ppyoloe_crn_m_300e_renche_1024.yml b/configs/smrt/ppyoloe/ppyoloe_crn_m_300e_renche_1024.yml new file mode 100644 index 0000000000000000000000000000000000000000..0427b81d4f8eeca71f6245a583f0f0a2d99f3569 --- /dev/null +++ b/configs/smrt/ppyoloe/ppyoloe_crn_m_300e_renche_1024.yml @@ -0,0 +1,140 @@ +weights: output/ppyoloe_crn_m_300e_renche_1024/model_final +pretrain_weights: https://paddledet.bj.bcebos.com/models/ppyoloe_crn_m_300e_coco.pdparams +depth_mult: 0.67 +width_mult: 0.75 + +worker_num: 4 +eval_height: &eval_height 1024 +eval_width: &eval_width 1024 +eval_size: &eval_size [*eval_height, *eval_width] + +metric: COCO +num_classes: 22 + +TrainDataset: + !COCODataSet + image_dir: train_images + anno_path: train.json + dataset_dir: /paddle/dataset/renche + data_fields: ['image', 'gt_bbox', 'gt_class', 'is_crowd'] + +EvalDataset: + !COCODataSet + image_dir: train_images + anno_path: test.json + dataset_dir: /paddle/dataset/renche + +TestDataset: + !ImageFolder + 
anno_path: test.json + dataset_dir: /paddle/dataset/renche + +epoch: 30 +LearningRate: + base_lr: 0.0005 + schedulers: + - !CosineDecay + max_epochs: 36 + - !LinearWarmup + start_factor: 0. + epochs: 3 + +TrainReader: + sample_transforms: + - Decode: {} + - RandomFlip: {} + batch_transforms: + - BatchRandomResize: {target_size: [960, 992, 1024, 1056, 1088], random_size: True, random_interp: True, keep_ratio: False} + - NormalizeImage: {mean: [0.485, 0.456, 0.406], std: [0.229, 0.224, 0.225], is_scale: True} + - Permute: {} + - PadGT: {} + batch_size: 4 + shuffle: true + drop_last: true + use_shared_memory: true + collate_batch: true + +EvalReader: + sample_transforms: + - Decode: {} + - Resize: {target_size: *eval_size, keep_ratio: False, interp: 2} + - NormalizeImage: {mean: [0.485, 0.456, 0.406], std: [0.229, 0.224, 0.225], is_scale: True} + - Permute: {} + batch_size: 2 + +TestReader: + inputs_def: + image_shape: [3, *eval_height, *eval_width] + sample_transforms: + - Decode: {} + - Resize: {target_size: *eval_size, keep_ratio: False, interp: 2} + - NormalizeImage: {mean: [0.485, 0.456, 0.406], std: [0.229, 0.224, 0.225], is_scale: True} + - Permute: {} + batch_size: 1 + +use_gpu: true +use_xpu: false +log_iter: 100 +save_dir: output +snapshot_epoch: 5 +print_flops: false + +# Exporting the model +export: + post_process: True # Whether post-processing is included in the network when export model. + nms: True # Whether NMS is included in the network when export model. + benchmark: False # It is used to testing model performance, if set `True`, post-process and NMS will not be exported. + +OptimizerBuilder: + optimizer: + momentum: 0.9 + type: Momentum + regularizer: + factor: 0.0005 + type: L2 + +architecture: YOLOv3 +norm_type: sync_bn +use_ema: true +ema_decay: 0.9998 + +YOLOv3: + backbone: CSPResNet + neck: CustomCSPPAN + yolo_head: PPYOLOEHead + post_process: ~ + +CSPResNet: + layers: [3, 6, 6, 3] + channels: [64, 128, 256, 512, 1024] + return_idx: [1, 2, 3] + use_large_stem: True + +CustomCSPPAN: + out_channels: [768, 384, 192] + stage_num: 1 + block_num: 3 + act: 'swish' + spp: true + +PPYOLOEHead: + fpn_strides: [32, 16, 8] + grid_cell_scale: 5.0 + grid_cell_offset: 0.5 + static_assigner_epoch: 100 + use_varifocal_loss: True + loss_weight: {class: 1.0, iou: 2.5, dfl: 0.5} + static_assigner: + name: ATSSAssigner + topk: 9 + assigner: + name: TaskAlignedAssigner + topk: 13 + alpha: 1.0 + beta: 6.0 + nms: + name: MultiClassNMS + nms_top_k: 1000 + keep_top_k: 100 + score_threshold: 0.01 + nms_threshold: 0.6 diff --git a/configs/smrt/ppyoloe/ppyoloe_crn_s_300e_battery.yml b/configs/smrt/ppyoloe/ppyoloe_crn_s_300e_battery.yml new file mode 100644 index 0000000000000000000000000000000000000000..1ef01cfc633414a9e4f71bbfc656a116c76fc7bf --- /dev/null +++ b/configs/smrt/ppyoloe/ppyoloe_crn_s_300e_battery.yml @@ -0,0 +1,140 @@ +weights: output/ppyoloe_crn_s_300e_battery/model_final +pretrain_weights: https://paddledet.bj.bcebos.com/models/ppyoloe_crn_s_300e_coco.pdparams +depth_mult: 0.33 +width_mult: 0.50 + +worker_num: 4 +eval_height: &eval_height 640 +eval_width: &eval_width 640 +eval_size: &eval_size [*eval_height, *eval_width] + +metric: COCO +num_classes: 45 + +TrainDataset: + !COCODataSet + image_dir: images + anno_path: annotations/train.json + dataset_dir: dataset/battery_mini + data_fields: ['image', 'gt_bbox', 'gt_class', 'is_crowd'] + +EvalDataset: + !COCODataSet + image_dir: images + anno_path: annotations/test.json + dataset_dir: dataset/battery_mini + +TestDataset: + 
!ImageFolder + anno_path: annotations/test.json + dataset_dir: dataset/battery_mini + +epoch: 30 +LearningRate: + base_lr: 0.0005 + schedulers: + - !CosineDecay + max_epochs: 36 + - !LinearWarmup + start_factor: 0. + epochs: 3 + +TrainReader: + sample_transforms: + - Decode: {} + - RandomFlip: {} + batch_transforms: + - BatchRandomResize: {target_size: [512, 544, 576, 608, 640, 672, 704, 736, 768], random_size: True, random_interp: True, keep_ratio: False} + - NormalizeImage: {mean: [0.485, 0.456, 0.406], std: [0.229, 0.224, 0.225], is_scale: True} + - Permute: {} + - PadGT: {} + batch_size: 4 + shuffle: true + drop_last: true + use_shared_memory: true + collate_batch: true + +EvalReader: + sample_transforms: + - Decode: {} + - Resize: {target_size: *eval_size, keep_ratio: False, interp: 2} + - NormalizeImage: {mean: [0.485, 0.456, 0.406], std: [0.229, 0.224, 0.225], is_scale: True} + - Permute: {} + batch_size: 1 + +TestReader: + inputs_def: + image_shape: [3, *eval_height, *eval_width] + sample_transforms: + - Decode: {} + - Resize: {target_size: *eval_size, keep_ratio: False, interp: 2} + - NormalizeImage: {mean: [0.485, 0.456, 0.406], std: [0.229, 0.224, 0.225], is_scale: True} + - Permute: {} + batch_size: 1 + +use_gpu: true +use_xpu: false +log_iter: 100 +save_dir: output +snapshot_epoch: 5 +print_flops: false + +# Exporting the model +export: + post_process: True # Whether post-processing is included in the network when export model. + nms: True # Whether NMS is included in the network when export model. + benchmark: False # It is used to testing model performance, if set `True`, post-process and NMS will not be exported. + +OptimizerBuilder: + optimizer: + momentum: 0.9 + type: Momentum + regularizer: + factor: 0.0005 + type: L2 + +architecture: YOLOv3 +norm_type: sync_bn +use_ema: true +ema_decay: 0.9998 + +YOLOv3: + backbone: CSPResNet + neck: CustomCSPPAN + yolo_head: PPYOLOEHead + post_process: ~ + +CSPResNet: + layers: [3, 6, 6, 3] + channels: [64, 128, 256, 512, 1024] + return_idx: [1, 2, 3] + use_large_stem: True + +CustomCSPPAN: + out_channels: [768, 384, 192] + stage_num: 1 + block_num: 3 + act: 'swish' + spp: true + +PPYOLOEHead: + fpn_strides: [32, 16, 8] + grid_cell_scale: 5.0 + grid_cell_offset: 0.5 + static_assigner_epoch: 100 + use_varifocal_loss: True + loss_weight: {class: 1.0, iou: 2.5, dfl: 0.5} + static_assigner: + name: ATSSAssigner + topk: 9 + assigner: + name: TaskAlignedAssigner + topk: 13 + alpha: 1.0 + beta: 6.0 + nms: + name: MultiClassNMS + nms_top_k: 1000 + keep_top_k: 100 + score_threshold: 0.01 + nms_threshold: 0.6 diff --git a/configs/smrt/ppyoloe/ppyoloe_crn_s_300e_battery_1024.yml b/configs/smrt/ppyoloe/ppyoloe_crn_s_300e_battery_1024.yml new file mode 100644 index 0000000000000000000000000000000000000000..42d30e00ff940b49b778306fa45562cf87f36396 --- /dev/null +++ b/configs/smrt/ppyoloe/ppyoloe_crn_s_300e_battery_1024.yml @@ -0,0 +1,140 @@ +weights: output/ppyoloe_crn_s_300e_battery_1024/model_final +pretrain_weights: https://paddledet.bj.bcebos.com/models/ppyoloe_crn_s_300e_coco.pdparams +depth_mult: 0.33 +width_mult: 0.50 + +worker_num: 4 +eval_height: &eval_height 1024 +eval_width: &eval_width 1024 +eval_size: &eval_size [*eval_height, *eval_width] + +metric: COCO +num_classes: 45 + +TrainDataset: + !COCODataSet + image_dir: images + anno_path: annotations/train.json + dataset_dir: dataset/battery_mini + data_fields: ['image', 'gt_bbox', 'gt_class', 'is_crowd'] + +EvalDataset: + !COCODataSet + image_dir: images + anno_path: annotations/test.json 
+ dataset_dir: dataset/battery_mini + +TestDataset: + !ImageFolder + anno_path: annotations/test.json + dataset_dir: dataset/battery_mini + +epoch: 30 +LearningRate: + base_lr: 0.0005 + schedulers: + - !CosineDecay + max_epochs: 36 + - !LinearWarmup + start_factor: 0. + epochs: 3 + +TrainReader: + sample_transforms: + - Decode: {} + - RandomFlip: {} + batch_transforms: + - BatchRandomResize: {target_size: [960, 992, 1024, 1056, 1088], random_size: True, random_interp: True, keep_ratio: False} + - NormalizeImage: {mean: [0.485, 0.456, 0.406], std: [0.229, 0.224, 0.225], is_scale: True} + - Permute: {} + - PadGT: {} + batch_size: 4 + shuffle: true + drop_last: true + use_shared_memory: true + collate_batch: true + +EvalReader: + sample_transforms: + - Decode: {} + - Resize: {target_size: *eval_size, keep_ratio: False, interp: 2} + - NormalizeImage: {mean: [0.485, 0.456, 0.406], std: [0.229, 0.224, 0.225], is_scale: True} + - Permute: {} + batch_size: 1 + +TestReader: + inputs_def: + image_shape: [3, *eval_height, *eval_width] + sample_transforms: + - Decode: {} + - Resize: {target_size: *eval_size, keep_ratio: False, interp: 2} + - NormalizeImage: {mean: [0.485, 0.456, 0.406], std: [0.229, 0.224, 0.225], is_scale: True} + - Permute: {} + batch_size: 1 + +use_gpu: true +use_xpu: false +log_iter: 100 +save_dir: output +snapshot_epoch: 5 +print_flops: false + +# Exporting the model +export: + post_process: True # Whether post-processing is included in the network when export model. + nms: True # Whether NMS is included in the network when export model. + benchmark: False # It is used to testing model performance, if set `True`, post-process and NMS will not be exported. + +OptimizerBuilder: + optimizer: + momentum: 0.9 + type: Momentum + regularizer: + factor: 0.0005 + type: L2 + +architecture: YOLOv3 +norm_type: sync_bn +use_ema: true +ema_decay: 0.9998 + +YOLOv3: + backbone: CSPResNet + neck: CustomCSPPAN + yolo_head: PPYOLOEHead + post_process: ~ + +CSPResNet: + layers: [3, 6, 6, 3] + channels: [64, 128, 256, 512, 1024] + return_idx: [1, 2, 3] + use_large_stem: True + +CustomCSPPAN: + out_channels: [768, 384, 192] + stage_num: 1 + block_num: 3 + act: 'swish' + spp: true + +PPYOLOEHead: + fpn_strides: [32, 16, 8] + grid_cell_scale: 5.0 + grid_cell_offset: 0.5 + static_assigner_epoch: 100 + use_varifocal_loss: True + loss_weight: {class: 1.0, iou: 2.5, dfl: 0.5} + static_assigner: + name: ATSSAssigner + topk: 9 + assigner: + name: TaskAlignedAssigner + topk: 13 + alpha: 1.0 + beta: 6.0 + nms: + name: MultiClassNMS + nms_top_k: 1000 + keep_top_k: 100 + score_threshold: 0.01 + nms_threshold: 0.6 diff --git a/configs/smrt/ppyoloe/ppyoloe_crn_s_300e_lvjian1.yml b/configs/smrt/ppyoloe/ppyoloe_crn_s_300e_lvjian1.yml new file mode 100644 index 0000000000000000000000000000000000000000..b6155305fc4233b1c754dae4f2bb6cc368aa55f8 --- /dev/null +++ b/configs/smrt/ppyoloe/ppyoloe_crn_s_300e_lvjian1.yml @@ -0,0 +1,140 @@ +weights: output/ppyoloe_crn_s_300e_lvjian1/model_final +pretrain_weights: https://paddledet.bj.bcebos.com/models/ppyoloe_crn_s_300e_coco.pdparams +depth_mult: 0.33 +width_mult: 0.50 + +worker_num: 4 +eval_height: &eval_height 640 +eval_width: &eval_width 640 +eval_size: &eval_size [*eval_height, *eval_width] + +metric: COCO +num_classes: 5 + +TrainDataset: + !COCODataSet + image_dir: images + anno_path: train.json + dataset_dir: dataset/slice_lvjian1_data/train + data_fields: ['image', 'gt_bbox', 'gt_class', 'is_crowd'] + +EvalDataset: + !COCODataSet + image_dir: images + anno_path: 
val.json + dataset_dir: dataset/slice_lvjian1_data/eval + +TestDataset: + !ImageFolder + anno_path: val.json + dataset_dir: dataset/slice_lvjian1_data/eval + +epoch: 30 +LearningRate: + base_lr: 0.002 + schedulers: + - !CosineDecay + max_epochs: 36 + - !LinearWarmup + start_factor: 0. + epochs: 3 + +TrainReader: + sample_transforms: + - Decode: {} + - RandomFlip: {} + batch_transforms: + - BatchRandomResize: {target_size: [640, 672, 704, 736, 768], random_size: True, random_interp: True, keep_ratio: False} + - NormalizeImage: {mean: [0.485, 0.456, 0.406], std: [0.229, 0.224, 0.225], is_scale: True} + - Permute: {} + - PadGT: {} + batch_size: 16 + shuffle: true + drop_last: true + use_shared_memory: true + collate_batch: true + +EvalReader: + sample_transforms: + - Decode: {} + - Resize: {target_size: *eval_size, keep_ratio: False, interp: 2} + - NormalizeImage: {mean: [0.485, 0.456, 0.406], std: [0.229, 0.224, 0.225], is_scale: True} + - Permute: {} + batch_size: 2 + +TestReader: + inputs_def: + image_shape: [3, *eval_height, *eval_width] + sample_transforms: + - Decode: {} + - Resize: {target_size: *eval_size, keep_ratio: False, interp: 2} + - NormalizeImage: {mean: [0.485, 0.456, 0.406], std: [0.229, 0.224, 0.225], is_scale: True} + - Permute: {} + batch_size: 1 + +use_gpu: true +use_xpu: false +log_iter: 100 +save_dir: output +snapshot_epoch: 1 +print_flops: false + +# Exporting the model +export: + post_process: True # Whether post-processing is included in the network when export model. + nms: True # Whether NMS is included in the network when export model. + benchmark: False # It is used to testing model performance, if set `True`, post-process and NMS will not be exported. + +OptimizerBuilder: + optimizer: + momentum: 0.9 + type: Momentum + regularizer: + factor: 0.0005 + type: L2 + +architecture: YOLOv3 +norm_type: sync_bn +use_ema: true +ema_decay: 0.9998 + +YOLOv3: + backbone: CSPResNet + neck: CustomCSPPAN + yolo_head: PPYOLOEHead + post_process: ~ + +CSPResNet: + layers: [3, 6, 6, 3] + channels: [64, 128, 256, 512, 1024] + return_idx: [1, 2, 3] + use_large_stem: True + +CustomCSPPAN: + out_channels: [768, 384, 192] + stage_num: 1 + block_num: 3 + act: 'swish' + spp: true + +PPYOLOEHead: + fpn_strides: [32, 16, 8] + grid_cell_scale: 5.0 + grid_cell_offset: 0.5 + static_assigner_epoch: 100 + use_varifocal_loss: True + loss_weight: {class: 1.0, iou: 2.5, dfl: 0.5} + static_assigner: + name: ATSSAssigner + topk: 9 + assigner: + name: TaskAlignedAssigner + topk: 13 + alpha: 1.0 + beta: 6.0 + nms: + name: MultiClassNMS + nms_top_k: 1000 + keep_top_k: 100 + score_threshold: 0.01 + nms_threshold: 0.6 diff --git a/configs/smrt/ppyoloe/ppyoloe_crn_s_300e_lvjian_1024.yml b/configs/smrt/ppyoloe/ppyoloe_crn_s_300e_lvjian_1024.yml new file mode 100644 index 0000000000000000000000000000000000000000..72a184127f10d32176a90bd0045d20a6d88457fa --- /dev/null +++ b/configs/smrt/ppyoloe/ppyoloe_crn_s_300e_lvjian_1024.yml @@ -0,0 +1,140 @@ +weights: output/ppyoloe_crn_s_300e_lvjian1_1024/model_final +pretrain_weights: https://paddledet.bj.bcebos.com/models/ppyoloe_crn_s_300e_coco.pdparams +depth_mult: 0.33 +width_mult: 0.50 + +worker_num: 2 +eval_height: &eval_height 1024 +eval_width: &eval_width 1024 +eval_size: &eval_size [*eval_height, *eval_width] + +metric: COCO +num_classes: 5 + +TrainDataset: + !COCODataSet + image_dir: images + anno_path: train.json + dataset_dir: dataset/slice_lvjian1_data/train + data_fields: ['image', 'gt_bbox', 'gt_class', 'is_crowd'] + +EvalDataset: + !COCODataSet + 
image_dir: images + anno_path: val.json + dataset_dir: dataset/slice_lvjian1_data/eval + +TestDataset: + !ImageFolder + anno_path: val.json + dataset_dir: dataset/slice_lvjian1_data/eval + +epoch: 30 +LearningRate: + base_lr: 0.003 + schedulers: + - !CosineDecay + max_epochs: 36 + - !LinearWarmup + start_factor: 0. + epochs: 3 + +TrainReader: + sample_transforms: + - Decode: {} + - RandomFlip: {} + batch_transforms: + - BatchRandomResize: {target_size: [960, 992, 1024, 1056, 1088], random_size: True, random_interp: True, keep_ratio: False} + - NormalizeImage: {mean: [0.485, 0.456, 0.406], std: [0.229, 0.224, 0.225], is_scale: True} + - Permute: {} + - PadGT: {} + batch_size: 16 + shuffle: true + drop_last: true + use_shared_memory: true + collate_batch: true + +EvalReader: + sample_transforms: + - Decode: {} + - Resize: {target_size: *eval_size, keep_ratio: False, interp: 2} + - NormalizeImage: {mean: [0.485, 0.456, 0.406], std: [0.229, 0.224, 0.225], is_scale: True} + - Permute: {} + batch_size: 2 + +TestReader: + inputs_def: + image_shape: [3, *eval_height, *eval_width] + sample_transforms: + - Decode: {} + - Resize: {target_size: *eval_size, keep_ratio: False, interp: 2} + - NormalizeImage: {mean: [0.485, 0.456, 0.406], std: [0.229, 0.224, 0.225], is_scale: True} + - Permute: {} + batch_size: 1 + +use_gpu: true +use_xpu: false +log_iter: 100 +save_dir: output +snapshot_epoch: 2 +print_flops: false + +# Exporting the model +export: + post_process: True # Whether post-processing is included in the network when export model. + nms: True # Whether NMS is included in the network when export model. + benchmark: False # It is used to testing model performance, if set `True`, post-process and NMS will not be exported. + +OptimizerBuilder: + optimizer: + momentum: 0.9 + type: Momentum + regularizer: + factor: 0.0005 + type: L2 + +architecture: YOLOv3 +norm_type: sync_bn +use_ema: true +ema_decay: 0.9998 + +YOLOv3: + backbone: CSPResNet + neck: CustomCSPPAN + yolo_head: PPYOLOEHead + post_process: ~ + +CSPResNet: + layers: [3, 6, 6, 3] + channels: [64, 128, 256, 512, 1024] + return_idx: [1, 2, 3] + use_large_stem: True + +CustomCSPPAN: + out_channels: [768, 384, 192] + stage_num: 1 + block_num: 3 + act: 'swish' + spp: true + +PPYOLOEHead: + fpn_strides: [32, 16, 8] + grid_cell_scale: 5.0 + grid_cell_offset: 0.5 + static_assigner_epoch: 100 + use_varifocal_loss: True + loss_weight: {class: 1.0, iou: 2.5, dfl: 0.5} + static_assigner: + name: ATSSAssigner + topk: 9 + assigner: + name: TaskAlignedAssigner + topk: 13 + alpha: 1.0 + beta: 6.0 + nms: + name: MultiClassNMS + nms_top_k: 1000 + keep_top_k: 100 + score_threshold: 0.01 + nms_threshold: 0.6 diff --git a/configs/smrt/ppyoloe/ppyoloe_crn_s_300e_renche.yml b/configs/smrt/ppyoloe/ppyoloe_crn_s_300e_renche.yml new file mode 100644 index 0000000000000000000000000000000000000000..df1939153b2672222fd9f3589da89ac3aa1a5a93 --- /dev/null +++ b/configs/smrt/ppyoloe/ppyoloe_crn_s_300e_renche.yml @@ -0,0 +1,140 @@ +weights: output/ppyoloe_crn_s_300e_renche/model_final +pretrain_weights: https://paddledet.bj.bcebos.com/models/ppyoloe_crn_s_300e_coco.pdparams +depth_mult: 0.33 +width_mult: 0.50 + +worker_num: 4 +eval_height: &eval_height 640 +eval_width: &eval_width 640 +eval_size: &eval_size [*eval_height, *eval_width] + +metric: COCO +num_classes: 22 + +TrainDataset: + !COCODataSet + image_dir: train_images + anno_path: train.json + dataset_dir: dataset/renche + data_fields: ['image', 'gt_bbox', 'gt_class', 'is_crowd'] + +EvalDataset: + !COCODataSet + 
image_dir: train_images
+    anno_path: test.json
+    dataset_dir: dataset/renche
+
+TestDataset:
+  !ImageFolder
+    anno_path: test.json
+    dataset_dir: dataset/renche
+
+epoch: 30
+LearningRate:
+  base_lr: 0.0005
+  schedulers:
+    - !CosineDecay
+      max_epochs: 36
+    - !LinearWarmup
+      start_factor: 0.
+      epochs: 3
+
+TrainReader:
+  sample_transforms:
+    - Decode: {}
+    - RandomFlip: {}
+  batch_transforms:
+    - BatchRandomResize: {target_size: [512, 544, 576, 608, 640, 672, 704, 736, 768], random_size: True, random_interp: True, keep_ratio: False}
+    - NormalizeImage: {mean: [0.485, 0.456, 0.406], std: [0.229, 0.224, 0.225], is_scale: True}
+    - Permute: {}
+    - PadGT: {}
+  batch_size: 4
+  shuffle: true
+  drop_last: true
+  use_shared_memory: true
+  collate_batch: true
+
+EvalReader:
+  sample_transforms:
+    - Decode: {}
+    - Resize: {target_size: *eval_size, keep_ratio: False, interp: 2}
+    - NormalizeImage: {mean: [0.485, 0.456, 0.406], std: [0.229, 0.224, 0.225], is_scale: True}
+    - Permute: {}
+  batch_size: 2
+
+TestReader:
+  inputs_def:
+    image_shape: [3, *eval_height, *eval_width]
+  sample_transforms:
+    - Decode: {}
+    - Resize: {target_size: *eval_size, keep_ratio: False, interp: 2}
+    - NormalizeImage: {mean: [0.485, 0.456, 0.406], std: [0.229, 0.224, 0.225], is_scale: True}
+    - Permute: {}
+  batch_size: 1
+
+use_gpu: true
+use_xpu: false
+log_iter: 100
+save_dir: output
+snapshot_epoch: 5
+print_flops: false
+
+# Exporting the model
+export:
+  post_process: True # Whether post-processing is included in the network when exporting the model.
+  nms: True # Whether NMS is included in the network when exporting the model.
+  benchmark: False # Used to test model performance; if set `True`, post-processing and NMS will not be exported.
+
+OptimizerBuilder:
+  optimizer:
+    momentum: 0.9
+    type: Momentum
+  regularizer:
+    factor: 0.0005
+    type: L2
+
+architecture: YOLOv3
+norm_type: sync_bn
+use_ema: true
+ema_decay: 0.9998
+
+YOLOv3:
+  backbone: CSPResNet
+  neck: CustomCSPPAN
+  yolo_head: PPYOLOEHead
+  post_process: ~
+
+CSPResNet:
+  layers: [3, 6, 6, 3]
+  channels: [64, 128, 256, 512, 1024]
+  return_idx: [1, 2, 3]
+  use_large_stem: True
+
+CustomCSPPAN:
+  out_channels: [768, 384, 192]
+  stage_num: 1
+  block_num: 3
+  act: 'swish'
+  spp: true
+
+PPYOLOEHead:
+  fpn_strides: [32, 16, 8]
+  grid_cell_scale: 5.0
+  grid_cell_offset: 0.5
+  static_assigner_epoch: 100
+  use_varifocal_loss: True
+  loss_weight: {class: 1.0, iou: 2.5, dfl: 0.5}
+  static_assigner:
+    name: ATSSAssigner
+    topk: 9
+  assigner:
+    name: TaskAlignedAssigner
+    topk: 13
+    alpha: 1.0
+    beta: 6.0
+  nms:
+    name: MultiClassNMS
+    nms_top_k: 1000
+    keep_top_k: 100
+    score_threshold: 0.01
+    nms_threshold: 0.6
diff --git a/configs/smrt/ppyoloe/ppyoloe_crn_s_300e_renche_1024.yml b/configs/smrt/ppyoloe/ppyoloe_crn_s_300e_renche_1024.yml
new file mode 100644
index 0000000000000000000000000000000000000000..07310a067794e789bd58172381cfecf37a1b3f03
--- /dev/null
+++ b/configs/smrt/ppyoloe/ppyoloe_crn_s_300e_renche_1024.yml
@@ -0,0 +1,140 @@
+weights: output/ppyoloe_crn_s_300e_renche_1024/model_final
+pretrain_weights: https://paddledet.bj.bcebos.com/models/ppyoloe_crn_s_300e_coco.pdparams
+depth_mult: 0.33
+width_mult: 0.50
+
+worker_num: 4
+eval_height: &eval_height 1024
+eval_width: &eval_width 1024
+eval_size: &eval_size [*eval_height, *eval_width]
+
+metric: COCO
+num_classes: 22
+
+TrainDataset:
+  !COCODataSet
+    image_dir: train_images
+    anno_path: train.json
+    dataset_dir: dataset/renche
+    data_fields: ['image', 'gt_bbox', 'gt_class', 'is_crowd']
+
+EvalDataset:
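+# The multi-scale lists in these TrainReaders step in multiples of 32, which
+# matches the largest stride in PPYOLOEHead's fpn_strides [32, 16, 8] (my
+# reading of the configs, not a documented constraint): every candidate size
+# then yields integer feature-map shapes, e.g. 512 / 32 = 16 up to
+# 768 / 32 = 24 for the 640-based list, and 960 / 32 = 30 up to
+# 1088 / 32 = 34 for the 1024-based list that follows.
+ 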
!COCODataSet + image_dir: train_images + anno_path: test.json + dataset_dir: dataset/renche + +TestDataset: + !ImageFolder + anno_path: test.json + dataset_dir: dataset/renche + +epoch: 30 +LearningRate: + base_lr: 0.0005 + schedulers: + - !CosineDecay + max_epochs: 36 + - !LinearWarmup + start_factor: 0. + epochs: 3 + +TrainReader: + sample_transforms: + - Decode: {} + - RandomFlip: {} + batch_transforms: + - BatchRandomResize: {target_size: [960, 992, 1024, 1056, 1088], random_size: True, random_interp: True, keep_ratio: False} + - NormalizeImage: {mean: [0.485, 0.456, 0.406], std: [0.229, 0.224, 0.225], is_scale: True} + - Permute: {} + - PadGT: {} + batch_size: 4 + shuffle: true + drop_last: true + use_shared_memory: true + collate_batch: true + +EvalReader: + sample_transforms: + - Decode: {} + - Resize: {target_size: *eval_size, keep_ratio: False, interp: 2} + - NormalizeImage: {mean: [0.485, 0.456, 0.406], std: [0.229, 0.224, 0.225], is_scale: True} + - Permute: {} + batch_size: 2 + +TestReader: + inputs_def: + image_shape: [3, *eval_height, *eval_width] + sample_transforms: + - Decode: {} + - Resize: {target_size: *eval_size, keep_ratio: False, interp: 2} + - NormalizeImage: {mean: [0.485, 0.456, 0.406], std: [0.229, 0.224, 0.225], is_scale: True} + - Permute: {} + batch_size: 1 + +use_gpu: true +use_xpu: false +log_iter: 100 +save_dir: output +snapshot_epoch: 5 +print_flops: false + +# Exporting the model +export: + post_process: True # Whether post-processing is included in the network when export model. + nms: True # Whether NMS is included in the network when export model. + benchmark: False # It is used to testing model performance, if set `True`, post-process and NMS will not be exported. + +OptimizerBuilder: + optimizer: + momentum: 0.9 + type: Momentum + regularizer: + factor: 0.0005 + type: L2 + +architecture: YOLOv3 +norm_type: sync_bn +use_ema: true +ema_decay: 0.9998 + +YOLOv3: + backbone: CSPResNet + neck: CustomCSPPAN + yolo_head: PPYOLOEHead + post_process: ~ + +CSPResNet: + layers: [3, 6, 6, 3] + channels: [64, 128, 256, 512, 1024] + return_idx: [1, 2, 3] + use_large_stem: True + +CustomCSPPAN: + out_channels: [768, 384, 192] + stage_num: 1 + block_num: 3 + act: 'swish' + spp: true + +PPYOLOEHead: + fpn_strides: [32, 16, 8] + grid_cell_scale: 5.0 + grid_cell_offset: 0.5 + static_assigner_epoch: 100 + use_varifocal_loss: True + loss_weight: {class: 1.0, iou: 2.5, dfl: 0.5} + static_assigner: + name: ATSSAssigner + topk: 9 + assigner: + name: TaskAlignedAssigner + topk: 13 + alpha: 1.0 + beta: 6.0 + nms: + name: MultiClassNMS + nms_top_k: 1000 + keep_top_k: 100 + score_threshold: 0.01 + nms_threshold: 0.6 diff --git a/configs/smrt/ppyoloe/ppyoloe_crn_x_300e_battery.yml b/configs/smrt/ppyoloe/ppyoloe_crn_x_300e_battery.yml new file mode 100644 index 0000000000000000000000000000000000000000..ba94ad254319fa8fa2ca1cb3b982c7f4b5508c5f --- /dev/null +++ b/configs/smrt/ppyoloe/ppyoloe_crn_x_300e_battery.yml @@ -0,0 +1,140 @@ +weights: output/ppyoloe_crn_x_300e_battery/model_final +pretrain_weights: https://paddledet.bj.bcebos.com/models/ppyoloe_crn_x_300e_coco.pdparams +depth_mult: 1.33 +width_mult: 1.25 + +worker_num: 4 +eval_height: &eval_height 640 +eval_width: &eval_width 640 +eval_size: &eval_size [*eval_height, *eval_width] + +metric: COCO +num_classes: 45 + +TrainDataset: + !COCODataSet + image_dir: images + anno_path: annotations/train.json + dataset_dir: dataset/battery_mini + data_fields: ['image', 'gt_bbox', 'gt_class', 'is_crowd'] + +EvalDataset: + !COCODataSet + 
image_dir: images + anno_path: annotations/test.json + dataset_dir: dataset/battery_mini + +TestDataset: + !ImageFolder + anno_path: annotations/test.json + dataset_dir: dataset/battery_mini + +epoch: 30 +LearningRate: + base_lr: 0.0005 + schedulers: + - !CosineDecay + max_epochs: 36 + - !LinearWarmup + start_factor: 0. + epochs: 3 + +TrainReader: + sample_transforms: + - Decode: {} + - RandomFlip: {} + batch_transforms: + - BatchRandomResize: {target_size: [512, 544, 576, 608, 640, 672, 704, 736, 768], random_size: True, random_interp: True, keep_ratio: False} + - NormalizeImage: {mean: [0.485, 0.456, 0.406], std: [0.229, 0.224, 0.225], is_scale: True} + - Permute: {} + - PadGT: {} + batch_size: 4 + shuffle: true + drop_last: true + use_shared_memory: true + collate_batch: true + +EvalReader: + sample_transforms: + - Decode: {} + - Resize: {target_size: *eval_size, keep_ratio: False, interp: 2} + - NormalizeImage: {mean: [0.485, 0.456, 0.406], std: [0.229, 0.224, 0.225], is_scale: True} + - Permute: {} + batch_size: 1 + +TestReader: + inputs_def: + image_shape: [3, *eval_height, *eval_width] + sample_transforms: + - Decode: {} + - Resize: {target_size: *eval_size, keep_ratio: False, interp: 2} + - NormalizeImage: {mean: [0.485, 0.456, 0.406], std: [0.229, 0.224, 0.225], is_scale: True} + - Permute: {} + batch_size: 1 + +use_gpu: true +use_xpu: false +log_iter: 100 +save_dir: output +snapshot_epoch: 5 +print_flops: false + +# Exporting the model +export: + post_process: True # Whether post-processing is included in the network when export model. + nms: True # Whether NMS is included in the network when export model. + benchmark: False # It is used to testing model performance, if set `True`, post-process and NMS will not be exported. + +OptimizerBuilder: + optimizer: + momentum: 0.9 + type: Momentum + regularizer: + factor: 0.0005 + type: L2 + +architecture: YOLOv3 +norm_type: sync_bn +use_ema: true +ema_decay: 0.9998 + +YOLOv3: + backbone: CSPResNet + neck: CustomCSPPAN + yolo_head: PPYOLOEHead + post_process: ~ + +CSPResNet: + layers: [3, 6, 6, 3] + channels: [64, 128, 256, 512, 1024] + return_idx: [1, 2, 3] + use_large_stem: True + +CustomCSPPAN: + out_channels: [768, 384, 192] + stage_num: 1 + block_num: 3 + act: 'swish' + spp: true + +PPYOLOEHead: + fpn_strides: [32, 16, 8] + grid_cell_scale: 5.0 + grid_cell_offset: 0.5 + static_assigner_epoch: 100 + use_varifocal_loss: True + loss_weight: {class: 1.0, iou: 2.5, dfl: 0.5} + static_assigner: + name: ATSSAssigner + topk: 9 + assigner: + name: TaskAlignedAssigner + topk: 13 + alpha: 1.0 + beta: 6.0 + nms: + name: MultiClassNMS + nms_top_k: 1000 + keep_top_k: 100 + score_threshold: 0.01 + nms_threshold: 0.6 diff --git a/configs/smrt/ppyoloe/ppyoloe_crn_x_300e_battery_1024.yml b/configs/smrt/ppyoloe/ppyoloe_crn_x_300e_battery_1024.yml new file mode 100644 index 0000000000000000000000000000000000000000..961d7823a32e8ee377274f1bf65399ab21b5a321 --- /dev/null +++ b/configs/smrt/ppyoloe/ppyoloe_crn_x_300e_battery_1024.yml @@ -0,0 +1,140 @@ +weights: output/ppyoloe_crn_x_300e_battery_1024/model_final +pretrain_weights: https://paddledet.bj.bcebos.com/models/ppyoloe_crn_x_300e_coco.pdparams +depth_mult: 1.33 +width_mult: 1.25 + +worker_num: 4 +eval_height: &eval_height 1024 +eval_width: &eval_width 1024 +eval_size: &eval_size [*eval_height, *eval_width] + +metric: COCO +num_classes: 45 + +TrainDataset: + !COCODataSet + image_dir: images + anno_path: annotations/train.json + dataset_dir: dataset/battery_mini + data_fields: ['image', 'gt_bbox', 
'gt_class', 'is_crowd'] + +EvalDataset: + !COCODataSet + image_dir: images + anno_path: annotations/test.json + dataset_dir: dataset/battery_mini + +TestDataset: + !ImageFolder + anno_path: annotations/test.json + dataset_dir: dataset/battery_mini + +epoch: 30 +LearningRate: + base_lr: 0.0005 + schedulers: + - !CosineDecay + max_epochs: 36 + - !LinearWarmup + start_factor: 0. + epochs: 3 + +TrainReader: + sample_transforms: + - Decode: {} + - RandomFlip: {} + batch_transforms: + - BatchRandomResize: {target_size: [960, 992, 1024, 1056, 1088], random_size: True, random_interp: True, keep_ratio: False} + - NormalizeImage: {mean: [0.485, 0.456, 0.406], std: [0.229, 0.224, 0.225], is_scale: True} + - Permute: {} + - PadGT: {} + batch_size: 4 + shuffle: true + drop_last: true + use_shared_memory: true + collate_batch: true + +EvalReader: + sample_transforms: + - Decode: {} + - Resize: {target_size: *eval_size, keep_ratio: False, interp: 2} + - NormalizeImage: {mean: [0.485, 0.456, 0.406], std: [0.229, 0.224, 0.225], is_scale: True} + - Permute: {} + batch_size: 1 + +TestReader: + inputs_def: + image_shape: [3, *eval_height, *eval_width] + sample_transforms: + - Decode: {} + - Resize: {target_size: *eval_size, keep_ratio: False, interp: 2} + - NormalizeImage: {mean: [0.485, 0.456, 0.406], std: [0.229, 0.224, 0.225], is_scale: True} + - Permute: {} + batch_size: 1 + +use_gpu: true +use_xpu: false +log_iter: 100 +save_dir: output +snapshot_epoch: 5 +print_flops: false + +# Exporting the model +export: + post_process: True # Whether post-processing is included in the network when export model. + nms: True # Whether NMS is included in the network when export model. + benchmark: False # It is used to testing model performance, if set `True`, post-process and NMS will not be exported. 
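+# For reference, exporting this model with PaddleDetection's standard tooling
+# looks roughly like the following (paths taken from the config above; check
+# the repo docs for the exact flags):
+#   python tools/export_model.py \
+#       -c configs/smrt/ppyoloe/ppyoloe_crn_x_300e_battery_1024.yml \
+#       -o weights=output/ppyoloe_crn_x_300e_battery_1024/model_final
+# Setting benchmark: True strips post-processing and NMS so that raw network
+# latency can be measured in isolation.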
+ +OptimizerBuilder: + optimizer: + momentum: 0.9 + type: Momentum + regularizer: + factor: 0.0005 + type: L2 + +architecture: YOLOv3 +norm_type: sync_bn +use_ema: true +ema_decay: 0.9998 + +YOLOv3: + backbone: CSPResNet + neck: CustomCSPPAN + yolo_head: PPYOLOEHead + post_process: ~ + +CSPResNet: + layers: [3, 6, 6, 3] + channels: [64, 128, 256, 512, 1024] + return_idx: [1, 2, 3] + use_large_stem: True + +CustomCSPPAN: + out_channels: [768, 384, 192] + stage_num: 1 + block_num: 3 + act: 'swish' + spp: true + +PPYOLOEHead: + fpn_strides: [32, 16, 8] + grid_cell_scale: 5.0 + grid_cell_offset: 0.5 + static_assigner_epoch: 100 + use_varifocal_loss: True + loss_weight: {class: 1.0, iou: 2.5, dfl: 0.5} + static_assigner: + name: ATSSAssigner + topk: 9 + assigner: + name: TaskAlignedAssigner + topk: 13 + alpha: 1.0 + beta: 6.0 + nms: + name: MultiClassNMS + nms_top_k: 1000 + keep_top_k: 100 + score_threshold: 0.01 + nms_threshold: 0.6 diff --git a/configs/smrt/ppyoloe/ppyoloe_crn_x_300e_lvjian1.yml b/configs/smrt/ppyoloe/ppyoloe_crn_x_300e_lvjian1.yml new file mode 100644 index 0000000000000000000000000000000000000000..7a47aded5e8cea1ded2d916509f54d53157dd7be --- /dev/null +++ b/configs/smrt/ppyoloe/ppyoloe_crn_x_300e_lvjian1.yml @@ -0,0 +1,140 @@ +weights: output/ppyoloe_crn_x_300e_lvjian1/model_final +pretrain_weights: https://paddledet.bj.bcebos.com/models/ppyoloe_crn_x_300e_coco.pdparams +depth_mult: 1.33 +width_mult: 1.25 + +worker_num: 4 +eval_height: &eval_height 640 +eval_width: &eval_width 640 +eval_size: &eval_size [*eval_height, *eval_width] + +metric: COCO +num_classes: 5 + +TrainDataset: + !COCODataSet + image_dir: images + anno_path: train.json + dataset_dir: dataset/slice_lvjian1_data/train + data_fields: ['image', 'gt_bbox', 'gt_class', 'is_crowd'] + +EvalDataset: + !COCODataSet + image_dir: images + anno_path: val.json + dataset_dir: dataset/slice_lvjian1_data/eval + +TestDataset: + !ImageFolder + anno_path: val.json + dataset_dir: dataset/slice_lvjian1_data/eval + +epoch: 30 +LearningRate: + base_lr: 0.001 + schedulers: + - !CosineDecay + max_epochs: 36 + - !LinearWarmup + start_factor: 0. + epochs: 3 + +TrainReader: + sample_transforms: + - Decode: {} + - RandomFlip: {} + batch_transforms: + - BatchRandomResize: {target_size: [640, 672, 704, 736, 768], random_size: True, random_interp: True, keep_ratio: False} + - NormalizeImage: {mean: [0.485, 0.456, 0.406], std: [0.229, 0.224, 0.225], is_scale: True} + - Permute: {} + - PadGT: {} + batch_size: 8 + shuffle: true + drop_last: true + use_shared_memory: true + collate_batch: true + +EvalReader: + sample_transforms: + - Decode: {} + - Resize: {target_size: *eval_size, keep_ratio: False, interp: 2} + - NormalizeImage: {mean: [0.485, 0.456, 0.406], std: [0.229, 0.224, 0.225], is_scale: True} + - Permute: {} + batch_size: 2 + +TestReader: + inputs_def: + image_shape: [3, *eval_height, *eval_width] + sample_transforms: + - Decode: {} + - Resize: {target_size: *eval_size, keep_ratio: False, interp: 2} + - NormalizeImage: {mean: [0.485, 0.456, 0.406], std: [0.229, 0.224, 0.225], is_scale: True} + - Permute: {} + batch_size: 1 + +use_gpu: true +use_xpu: false +log_iter: 100 +save_dir: output +snapshot_epoch: 1 +print_flops: false + +# Exporting the model +export: + post_process: True # Whether post-processing is included in the network when export model. + nms: True # Whether NMS is included in the network when export model. 
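+# One quirk shared by all the PPYOLOEHead blocks in these files:
+# static_assigner_epoch is 100 while training runs for epoch: 30, so under
+# the assumed PPYOLOE behavior (static ATSS assignment until that epoch,
+# task-aligned assignment afterwards) the TaskAlignedAssigner stage is never
+# reached; a lower value, e.g. static_assigner_epoch: 10, would enable it.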
+  benchmark: False # It is used to testing model performance, if set `True`, post-process and NMS will not be exported.
+
+OptimizerBuilder:
+  optimizer:
+    momentum: 0.9
+    type: Momentum
+  regularizer:
+    factor: 0.0005
+    type: L2
+
+architecture: YOLOv3
+norm_type: sync_bn
+use_ema: true
+ema_decay: 0.9998
+
+YOLOv3:
+  backbone: CSPResNet
+  neck: CustomCSPPAN
+  yolo_head: PPYOLOEHead
+  post_process: ~
+
+CSPResNet:
+  layers: [3, 6, 6, 3]
+  channels: [64, 128, 256, 512, 1024]
+  return_idx: [1, 2, 3]
+  use_large_stem: True
+
+CustomCSPPAN:
+  out_channels: [768, 384, 192]
+  stage_num: 1
+  block_num: 3
+  act: 'swish'
+  spp: true
+
+PPYOLOEHead:
+  fpn_strides: [32, 16, 8]
+  grid_cell_scale: 5.0
+  grid_cell_offset: 0.5
+  static_assigner_epoch: 100
+  use_varifocal_loss: True
+  loss_weight: {class: 1.0, iou: 2.5, dfl: 0.5}
+  static_assigner:
+    name: ATSSAssigner
+    topk: 9
+  assigner:
+    name: TaskAlignedAssigner
+    topk: 13
+    alpha: 1.0
+    beta: 6.0
+  nms:
+    name: MultiClassNMS
+    nms_top_k: 1000
+    keep_top_k: 100
+    score_threshold: 0.01
+    nms_threshold: 0.6
diff --git a/configs/smrt/ppyoloe/ppyoloe_crn_x_300e_lvjian1_1024.yml b/configs/smrt/ppyoloe/ppyoloe_crn_x_300e_lvjian1_1024.yml
new file mode 100644
index 0000000000000000000000000000000000000000..c1e70d2198f2af380c5cc9ab80704a9861f11c00
--- /dev/null
+++ b/configs/smrt/ppyoloe/ppyoloe_crn_x_300e_lvjian1_1024.yml
@@ -0,0 +1,140 @@
+weights: output/ppyoloe_crn_x_300e_lvjian1_1024/model_final
+pretrain_weights: https://paddledet.bj.bcebos.com/models/ppyoloe_crn_x_300e_coco.pdparams
+depth_mult: 1.33
+width_mult: 1.25
+
+worker_num: 2
+eval_height: &eval_height 1024
+eval_width: &eval_width 1024
+eval_size: &eval_size [*eval_height, *eval_width]
+
+metric: COCO
+num_classes: 5
+
+TrainDataset:
+  !COCODataSet
+    image_dir: images
+    anno_path: train.json
+    dataset_dir: dataset/slice_lvjian1_data/train
+    data_fields: ['image', 'gt_bbox', 'gt_class', 'is_crowd']
+
+EvalDataset:
+  !COCODataSet
+    image_dir: images
+    anno_path: val.json
+    dataset_dir: dataset/slice_lvjian1_data/eval
+
+TestDataset:
+  !ImageFolder
+    anno_path: val.json
+    dataset_dir: dataset/slice_lvjian1_data/eval
+
+epoch: 30
+LearningRate:
+  base_lr: 0.0005
+  schedulers:
+    - !CosineDecay
+      max_epochs: 36
+    - !LinearWarmup
+      start_factor: 0.
+      epochs: 3
+
+TrainReader:
+  sample_transforms:
+    - Decode: {}
+    - RandomFlip: {}
+  batch_transforms:
+    - BatchRandomResize: {target_size: [960, 992, 1024, 1056, 1088], random_size: True, random_interp: True, keep_ratio: False}
+    - NormalizeImage: {mean: [0.485, 0.456, 0.406], std: [0.229, 0.224, 0.225], is_scale: True}
+    - Permute: {}
+    - PadGT: {}
+  batch_size: 4
+  shuffle: true
+  drop_last: true
+  use_shared_memory: true
+  collate_batch: true
+
+EvalReader:
+  sample_transforms:
+    - Decode: {}
+    - Resize: {target_size: *eval_size, keep_ratio: False, interp: 2}
+    - NormalizeImage: {mean: [0.485, 0.456, 0.406], std: [0.229, 0.224, 0.225], is_scale: True}
+    - Permute: {}
+  batch_size: 2
+
+TestReader:
+  inputs_def:
+    image_shape: [3, *eval_height, *eval_width]
+  sample_transforms:
+    - Decode: {}
+    - Resize: {target_size: *eval_size, keep_ratio: False, interp: 2}
+    - NormalizeImage: {mean: [0.485, 0.456, 0.406], std: [0.229, 0.224, 0.225], is_scale: True}
+    - Permute: {}
+  batch_size: 1
+
+use_gpu: true
+use_xpu: false
+log_iter: 100
+save_dir: output
+snapshot_epoch: 2
+print_flops: false
+
+# Exporting the model
+export:
+  post_process: True # Whether post-processing is included in the network when export model. 
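+# TestReader.inputs_def.image_shape above pins the exported graph to a fixed
+# [3, 1024, 1024] input, which is assumed here to let static-shape inference
+# engines (e.g. TensorRT) pre-build their plans; the deployment-side resize
+# must then match this shape exactly.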
+ nms: True # Whether NMS is included in the network when export model. + benchmark: False # It is used to testing model performance, if set `True`, post-process and NMS will not be exported. + +OptimizerBuilder: + optimizer: + momentum: 0.9 + type: Momentum + regularizer: + factor: 0.0005 + type: L2 + +architecture: YOLOv3 +norm_type: sync_bn +use_ema: true +ema_decay: 0.9998 + +YOLOv3: + backbone: CSPResNet + neck: CustomCSPPAN + yolo_head: PPYOLOEHead + post_process: ~ + +CSPResNet: + layers: [3, 6, 6, 3] + channels: [64, 128, 256, 512, 1024] + return_idx: [1, 2, 3] + use_large_stem: True + +CustomCSPPAN: + out_channels: [768, 384, 192] + stage_num: 1 + block_num: 3 + act: 'swish' + spp: true + +PPYOLOEHead: + fpn_strides: [32, 16, 8] + grid_cell_scale: 5.0 + grid_cell_offset: 0.5 + static_assigner_epoch: 100 + use_varifocal_loss: True + loss_weight: {class: 1.0, iou: 2.5, dfl: 0.5} + static_assigner: + name: ATSSAssigner + topk: 9 + assigner: + name: TaskAlignedAssigner + topk: 13 + alpha: 1.0 + beta: 6.0 + nms: + name: MultiClassNMS + nms_top_k: 1000 + keep_top_k: 100 + score_threshold: 0.01 + nms_threshold: 0.6 diff --git a/configs/smrt/ppyoloe/ppyoloe_crn_x_300e_renche.yml b/configs/smrt/ppyoloe/ppyoloe_crn_x_300e_renche.yml new file mode 100644 index 0000000000000000000000000000000000000000..be3f79044af32b12bba0e5aa13059585fd65d9ab --- /dev/null +++ b/configs/smrt/ppyoloe/ppyoloe_crn_x_300e_renche.yml @@ -0,0 +1,140 @@ +weights: output/ppyoloe_crn_x_300e_renche/model_final +pretrain_weights: https://paddledet.bj.bcebos.com/models/ppyoloe_crn_x_300e_coco.pdparams +depth_mult: 1.33 +width_mult: 1.25 + +worker_num: 4 +eval_height: &eval_height 640 +eval_width: &eval_width 640 +eval_size: &eval_size [*eval_height, *eval_width] + +metric: COCO +num_classes: 22 + +TrainDataset: + !COCODataSet + image_dir: train_images + anno_path: train.json + dataset_dir: dataset/renche + data_fields: ['image', 'gt_bbox', 'gt_class', 'is_crowd'] + +EvalDataset: + !COCODataSet + image_dir: train_images + anno_path: test.json + dataset_dir: dataset/renche + +TestDataset: + !ImageFolder + anno_path: test.json + dataset_dir: dataset/renche + +epoch: 30 +LearningRate: + base_lr: 0.0005 + schedulers: + - !CosineDecay + max_epochs: 36 + - !LinearWarmup + start_factor: 0. + epochs: 3 + +TrainReader: + sample_transforms: + - Decode: {} + - RandomFlip: {} + batch_transforms: + - BatchRandomResize: {target_size: [512, 544, 576, 608, 640, 672, 704, 736, 768], random_size: True, random_interp: True, keep_ratio: False} + - NormalizeImage: {mean: [0.485, 0.456, 0.406], std: [0.229, 0.224, 0.225], is_scale: True} + - Permute: {} + - PadGT: {} + batch_size: 4 + shuffle: true + drop_last: true + use_shared_memory: true + collate_batch: true + +EvalReader: + sample_transforms: + - Decode: {} + - Resize: {target_size: *eval_size, keep_ratio: False, interp: 2} + - NormalizeImage: {mean: [0.485, 0.456, 0.406], std: [0.229, 0.224, 0.225], is_scale: True} + - Permute: {} + batch_size: 2 + +TestReader: + inputs_def: + image_shape: [3, *eval_height, *eval_width] + sample_transforms: + - Decode: {} + - Resize: {target_size: *eval_size, keep_ratio: False, interp: 2} + - NormalizeImage: {mean: [0.485, 0.456, 0.406], std: [0.229, 0.224, 0.225], is_scale: True} + - Permute: {} + batch_size: 1 + +use_gpu: true +use_xpu: false +log_iter: 100 +save_dir: output +snapshot_epoch: 5 +print_flops: false + +# Exporting the model +export: + post_process: True # Whether post-processing is included in the network when export model. 
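+# Rough flow of the MultiClassNMS settings shared by every head above
+# (standard NMS semantics, assumed rather than traced through the code):
+#   1. discard boxes scoring below score_threshold (0.01)
+#   2. keep the top nms_top_k (1000) candidates per image
+#   3. greedily suppress boxes overlapping a kept, higher-scoring box
+#      with IoU > nms_threshold (0.6)
+#   4. return at most keep_top_k (100) detections per image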
+ nms: True # Whether NMS is included in the network when export model. + benchmark: False # It is used to testing model performance, if set `True`, post-process and NMS will not be exported. + +OptimizerBuilder: + optimizer: + momentum: 0.9 + type: Momentum + regularizer: + factor: 0.0005 + type: L2 + +architecture: YOLOv3 +norm_type: sync_bn +use_ema: true +ema_decay: 0.9998 + +YOLOv3: + backbone: CSPResNet + neck: CustomCSPPAN + yolo_head: PPYOLOEHead + post_process: ~ + +CSPResNet: + layers: [3, 6, 6, 3] + channels: [64, 128, 256, 512, 1024] + return_idx: [1, 2, 3] + use_large_stem: True + +CustomCSPPAN: + out_channels: [768, 384, 192] + stage_num: 1 + block_num: 3 + act: 'swish' + spp: true + +PPYOLOEHead: + fpn_strides: [32, 16, 8] + grid_cell_scale: 5.0 + grid_cell_offset: 0.5 + static_assigner_epoch: 100 + use_varifocal_loss: True + loss_weight: {class: 1.0, iou: 2.5, dfl: 0.5} + static_assigner: + name: ATSSAssigner + topk: 9 + assigner: + name: TaskAlignedAssigner + topk: 13 + alpha: 1.0 + beta: 6.0 + nms: + name: MultiClassNMS + nms_top_k: 1000 + keep_top_k: 100 + score_threshold: 0.01 + nms_threshold: 0.6 diff --git a/configs/smrt/ppyoloe/ppyoloe_crn_x_300e_renche_1024.yml b/configs/smrt/ppyoloe/ppyoloe_crn_x_300e_renche_1024.yml new file mode 100644 index 0000000000000000000000000000000000000000..250251c32504ced54291d2b5449e1ffdafb8b3ea --- /dev/null +++ b/configs/smrt/ppyoloe/ppyoloe_crn_x_300e_renche_1024.yml @@ -0,0 +1,140 @@ +weights: output/ppyoloe_crn_x_300e_renche_1024/model_final +pretrain_weights: https://paddledet.bj.bcebos.com/models/ppyoloe_crn_x_300e_coco.pdparams +depth_mult: 1.33 +width_mult: 1.25 + +worker_num: 4 +eval_height: &eval_height 1024 +eval_width: &eval_width 1024 +eval_size: &eval_size [*eval_height, *eval_width] + +metric: COCO +num_classes: 22 + +TrainDataset: + !COCODataSet + image_dir: train_images + anno_path: train.json + dataset_dir: dataset/renche + data_fields: ['image', 'gt_bbox', 'gt_class', 'is_crowd'] + +EvalDataset: + !COCODataSet + image_dir: train_images + anno_path: test.json + dataset_dir: dataset/renche + +TestDataset: + !ImageFolder + anno_path: test.json + dataset_dir: dataset/renche + +epoch: 30 +LearningRate: + base_lr: 0.0005 + schedulers: + - !CosineDecay + max_epochs: 36 + - !LinearWarmup + start_factor: 0. + epochs: 3 + +TrainReader: + sample_transforms: + - Decode: {} + - RandomFlip: {} + batch_transforms: + - BatchRandomResize: {target_size: [960, 992, 1024, 1056, 1088], random_size: True, random_interp: True, keep_ratio: False} + - NormalizeImage: {mean: [0.485, 0.456, 0.406], std: [0.229, 0.224, 0.225], is_scale: True} + - Permute: {} + - PadGT: {} + batch_size: 4 + shuffle: true + drop_last: true + use_shared_memory: true + collate_batch: true + +EvalReader: + sample_transforms: + - Decode: {} + - Resize: {target_size: *eval_size, keep_ratio: False, interp: 2} + - NormalizeImage: {mean: [0.485, 0.456, 0.406], std: [0.229, 0.224, 0.225], is_scale: True} + - Permute: {} + batch_size: 2 + +TestReader: + inputs_def: + image_shape: [3, *eval_height, *eval_width] + sample_transforms: + - Decode: {} + - Resize: {target_size: *eval_size, keep_ratio: False, interp: 2} + - NormalizeImage: {mean: [0.485, 0.456, 0.406], std: [0.229, 0.224, 0.225], is_scale: True} + - Permute: {} + batch_size: 1 + +use_gpu: true +use_xpu: false +log_iter: 100 +save_dir: output +snapshot_epoch: 5 +print_flops: false + +# Exporting the model +export: + post_process: True # Whether post-processing is included in the network when export model. 
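+# All of these schedules pair a 3-epoch linear warmup with cosine decay.
+# Assuming the conventional formulas for the two schedulers (a sketch, not
+# traced through the PaddleDetection source), with e measured in epochs:
+#   warmup, e < 3:   lr(e) = base_lr * (start_factor + (1 - start_factor) * e / 3)
+#   cosine, e >= 3:  lr(e) = 0.5 * base_lr * (1 + cos(pi * e / max_epochs))
+# Since epoch: 30 while max_epochs: 36, training stops before the curve hits
+# zero, e.g. lr(30) = 0.5 * 0.0005 * (1 + cos(30 * pi / 36)) ~= 3.3e-5.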
+ nms: True # Whether NMS is included in the network when export model. + benchmark: False # It is used to testing model performance, if set `True`, post-process and NMS will not be exported. + +OptimizerBuilder: + optimizer: + momentum: 0.9 + type: Momentum + regularizer: + factor: 0.0005 + type: L2 + +architecture: YOLOv3 +norm_type: sync_bn +use_ema: true +ema_decay: 0.9998 + +YOLOv3: + backbone: CSPResNet + neck: CustomCSPPAN + yolo_head: PPYOLOEHead + post_process: ~ + +CSPResNet: + layers: [3, 6, 6, 3] + channels: [64, 128, 256, 512, 1024] + return_idx: [1, 2, 3] + use_large_stem: True + +CustomCSPPAN: + out_channels: [768, 384, 192] + stage_num: 1 + block_num: 3 + act: 'swish' + spp: true + +PPYOLOEHead: + fpn_strides: [32, 16, 8] + grid_cell_scale: 5.0 + grid_cell_offset: 0.5 + static_assigner_epoch: 100 + use_varifocal_loss: True + loss_weight: {class: 1.0, iou: 2.5, dfl: 0.5} + static_assigner: + name: ATSSAssigner + topk: 9 + assigner: + name: TaskAlignedAssigner + topk: 13 + alpha: 1.0 + beta: 6.0 + nms: + name: MultiClassNMS + nms_top_k: 1000 + keep_top_k: 100 + score_threshold: 0.01 + nms_threshold: 0.6 diff --git a/configs/smrt/rcnn/cascade_rcnn_r50_vd_fpn_ssld_2x_1500_battery.yml b/configs/smrt/rcnn/cascade_rcnn_r50_vd_fpn_ssld_2x_1500_battery.yml new file mode 100644 index 0000000000000000000000000000000000000000..128328bf3853bff327b47bb1945908c338b3dcb8 --- /dev/null +++ b/configs/smrt/rcnn/cascade_rcnn_r50_vd_fpn_ssld_2x_1500_battery.yml @@ -0,0 +1,168 @@ +weights: output/cascade_rcnn_r50_vd_fpn_ssld_2x_1500_battery/model_final +pretrain_weights: https://paddledet.bj.bcebos.com/models/cascade_rcnn_r50_vd_fpn_ssld_2x_coco.pdparams + +metric: COCO +num_classes: 45 + +TrainDataset: + !COCODataSet + image_dir: images + anno_path: annotations/train.json + dataset_dir: dataset/battery_mini + data_fields: ['image', 'gt_bbox', 'gt_class', 'is_crowd'] + +EvalDataset: + !COCODataSet + image_dir: images + anno_path: annotations/test.json + dataset_dir: dataset/battery_mini + +TestDataset: + !ImageFolder + anno_path: annotations/test.json + dataset_dir: dataset/battery_mini + +epoch: 24 +LearningRate: + base_lr: 0.00025 + schedulers: + - !PiecewiseDecay + gamma: 0.1 + milestones: [12, 22] + - !LinearWarmup + start_factor: 0.1 + steps: 1000 + + +worker_num: 2 +TrainReader: + sample_transforms: + - Decode: {} + - RandomResize: {target_size: [1350,1425,1500,1575,1650], interp: 2, keep_ratio: True} + - RandomFlip: {prob: 0.5} + - NormalizeImage: {is_scale: true, mean: [0.485,0.456,0.406], std: [0.229, 0.224,0.225]} + - Permute: {} + batch_transforms: + - PadBatch: {pad_to_stride: 32} + batch_size: 1 + shuffle: true + drop_last: true + collate_batch: false + + +EvalReader: + sample_transforms: + - Decode: {} + - Resize: {interp: 2, target_size: [1500, 1500], keep_ratio: True} + - NormalizeImage: {is_scale: true, mean: [0.485,0.456,0.406], std: [0.229, 0.224,0.225]} + - Permute: {} + batch_transforms: + - PadBatch: {pad_to_stride: 32} + batch_size: 1 + shuffle: false + drop_last: false + + +TestReader: + sample_transforms: + - Decode: {} + - Resize: {interp: 2, target_size: [1500, 1500], keep_ratio: True} + - NormalizeImage: {is_scale: true, mean: [0.485,0.456,0.406], std: [0.229, 0.224,0.225]} + - Permute: {} + batch_transforms: + - PadBatch: {pad_to_stride: 32} + batch_size: 1 + shuffle: false + drop_last: false + +use_gpu: true +use_xpu: false +log_iter: 100 +save_dir: output +snapshot_epoch: 5 +print_flops: false +find_unused_parameters: True + + +OptimizerBuilder: + optimizer: + 
momentum: 0.9 + type: Momentum + regularizer: + factor: 0.0001 + type: L2 + +architecture: CascadeRCNN + +CascadeRCNN: + backbone: ResNet + neck: FPN + rpn_head: RPNHead + bbox_head: CascadeHead + # post process + bbox_post_process: BBoxPostProcess + +ResNet: + depth: 50 + variant: d + norm_type: bn + freeze_at: 0 + return_idx: [0,1,2,3] + num_stages: 4 + lr_mult_list: [0.05, 0.05, 0.1, 0.15] + +FPN: + out_channel: 256 + +RPNHead: + anchor_generator: + aspect_ratios: [0.5, 1.0, 2.0] + anchor_sizes: [[32], [64], [128], [256], [512]] + strides: [4, 8, 16, 32, 64] + rpn_target_assign: + batch_size_per_im: 256 + fg_fraction: 0.5 + negative_overlap: 0.3 + positive_overlap: 0.7 + use_random: True + train_proposal: + min_size: 0.0 + nms_thresh: 0.7 + pre_nms_top_n: 2000 + post_nms_top_n: 2000 + topk_after_collect: True + test_proposal: + min_size: 0.0 + nms_thresh: 0.7 + pre_nms_top_n: 1000 + post_nms_top_n: 1000 + + +CascadeHead: + head: CascadeTwoFCHead + roi_extractor: + resolution: 7 + sampling_ratio: 0 + aligned: True + bbox_assigner: BBoxAssigner + +BBoxAssigner: + batch_size_per_im: 512 + bg_thresh: 0.5 + fg_thresh: 0.5 + fg_fraction: 0.25 + cascade_iou: [0.5, 0.6, 0.7] + use_random: True + +CascadeTwoFCHead: + out_channel: 1024 + +BBoxPostProcess: + decode: + name: RCNNBox + prior_box_var: [30.0, 30.0, 15.0, 15.0] + nms: + name: MultiClassNMS + keep_top_k: 100 + score_threshold: 0.05 + nms_threshold: 0.5 diff --git a/configs/smrt/rcnn/cascade_rcnn_r50_vd_fpn_ssld_2x_1500_lvjian1.yml b/configs/smrt/rcnn/cascade_rcnn_r50_vd_fpn_ssld_2x_1500_lvjian1.yml new file mode 100644 index 0000000000000000000000000000000000000000..c6b4b8ce5c6ef099f9ba3ef9e603ddc4e273e413 --- /dev/null +++ b/configs/smrt/rcnn/cascade_rcnn_r50_vd_fpn_ssld_2x_1500_lvjian1.yml @@ -0,0 +1,168 @@ +weights: output/cascade_rcnn_r50_vd_fpn_ssld_2x_1500_lvjian1/model_final +pretrain_weights: https://paddledet.bj.bcebos.com/models/cascade_rcnn_r50_vd_fpn_ssld_2x_coco.pdparams + +metric: COCO +num_classes: 5 + +TrainDataset: + !COCODataSet + image_dir: images + anno_path: train.json + dataset_dir: dataset/slice_lvjian1_data/train/ + data_fields: ['image', 'gt_bbox', 'gt_class', 'is_crowd'] + +EvalDataset: + !COCODataSet + image_dir: images + anno_path: val.json + dataset_dir: dataset/slice_lvjian1_data/eval + +TestDataset: + !ImageFolder + anno_path: val.json + dataset_dir: dataset/slice_lvjian1_data/eval + +epoch: 24 +LearningRate: + base_lr: 0.00025 + schedulers: + - !PiecewiseDecay + gamma: 0.1 + milestones: [12, 22] + - !LinearWarmup + start_factor: 0.1 + steps: 1000 + + +worker_num: 2 +TrainReader: + sample_transforms: + - Decode: {} + - RandomResize: {target_size: [1350,1425,1500,1575,1650], interp: 2, keep_ratio: True} + - RandomFlip: {prob: 0.5} + - NormalizeImage: {is_scale: true, mean: [0.485,0.456,0.406], std: [0.229, 0.224,0.225]} + - Permute: {} + batch_transforms: + - PadBatch: {pad_to_stride: 32} + batch_size: 1 + shuffle: true + drop_last: true + collate_batch: false + + +EvalReader: + sample_transforms: + - Decode: {} + - Resize: {interp: 2, target_size: [1500, 1500], keep_ratio: True} + - NormalizeImage: {is_scale: true, mean: [0.485,0.456,0.406], std: [0.229, 0.224,0.225]} + - Permute: {} + batch_transforms: + - PadBatch: {pad_to_stride: 32} + batch_size: 1 + shuffle: false + drop_last: false + + +TestReader: + sample_transforms: + - Decode: {} + - Resize: {interp: 2, target_size: [1500, 1500], keep_ratio: True} + - NormalizeImage: {is_scale: true, mean: [0.485,0.456,0.406], std: [0.229, 0.224,0.225]} + - 
Permute: {} + batch_transforms: + - PadBatch: {pad_to_stride: 32} + batch_size: 1 + shuffle: false + drop_last: false + +use_gpu: true +use_xpu: false +log_iter: 100 +save_dir: output +snapshot_epoch: 5 +print_flops: false +find_unused_parameters: True + + +OptimizerBuilder: + optimizer: + momentum: 0.9 + type: Momentum + regularizer: + factor: 0.0001 + type: L2 + +architecture: CascadeRCNN + +CascadeRCNN: + backbone: ResNet + neck: FPN + rpn_head: RPNHead + bbox_head: CascadeHead + # post process + bbox_post_process: BBoxPostProcess + +ResNet: + depth: 50 + variant: d + norm_type: bn + freeze_at: 0 + return_idx: [0,1,2,3] + num_stages: 4 + lr_mult_list: [0.05, 0.05, 0.1, 0.15] + +FPN: + out_channel: 256 + +RPNHead: + anchor_generator: + aspect_ratios: [0.5, 1.0, 2.0] + anchor_sizes: [[32], [64], [128], [256], [512]] + strides: [4, 8, 16, 32, 64] + rpn_target_assign: + batch_size_per_im: 256 + fg_fraction: 0.5 + negative_overlap: 0.3 + positive_overlap: 0.7 + use_random: True + train_proposal: + min_size: 0.0 + nms_thresh: 0.7 + pre_nms_top_n: 2000 + post_nms_top_n: 2000 + topk_after_collect: True + test_proposal: + min_size: 0.0 + nms_thresh: 0.7 + pre_nms_top_n: 1000 + post_nms_top_n: 1000 + + +CascadeHead: + head: CascadeTwoFCHead + roi_extractor: + resolution: 7 + sampling_ratio: 0 + aligned: True + bbox_assigner: BBoxAssigner + +BBoxAssigner: + batch_size_per_im: 512 + bg_thresh: 0.5 + fg_thresh: 0.5 + fg_fraction: 0.25 + cascade_iou: [0.5, 0.6, 0.7] + use_random: True + +CascadeTwoFCHead: + out_channel: 1024 + +BBoxPostProcess: + decode: + name: RCNNBox + prior_box_var: [30.0, 30.0, 15.0, 15.0] + nms: + name: MultiClassNMS + keep_top_k: 100 + score_threshold: 0.05 + nms_threshold: 0.5 diff --git a/configs/smrt/rcnn/cascade_rcnn_r50_vd_fpn_ssld_2x_1500_renche.yml b/configs/smrt/rcnn/cascade_rcnn_r50_vd_fpn_ssld_2x_1500_renche.yml new file mode 100644 index 0000000000000000000000000000000000000000..ef11461339080740eb3ac2414eda709f10b00ddb --- /dev/null +++ b/configs/smrt/rcnn/cascade_rcnn_r50_vd_fpn_ssld_2x_1500_renche.yml @@ -0,0 +1,168 @@ +weights: output/cascade_rcnn_r50_vd_fpn_ssld_2x_1500_renche/model_final +pretrain_weights: https://paddledet.bj.bcebos.com/models/cascade_rcnn_r50_vd_fpn_ssld_2x_coco.pdparams + +metric: COCO +num_classes: 22 + +TrainDataset: + !COCODataSet + image_dir: train_images + anno_path: train.json + dataset_dir: dataset/renche + data_fields: ['image', 'gt_bbox', 'gt_class', 'is_crowd'] + +EvalDataset: + !COCODataSet + image_dir: train_images + anno_path: test.json + dataset_dir: dataset/renche + +TestDataset: + !ImageFolder + anno_path: test.json + dataset_dir: dataset/renche + +epoch: 24 +LearningRate: + base_lr: 0.00025 + schedulers: + - !PiecewiseDecay + gamma: 0.1 + milestones: [12, 22] + - !LinearWarmup + start_factor: 0.1 + steps: 1000 + + +worker_num: 2 +TrainReader: + sample_transforms: + - Decode: {} + - RandomResize: {target_size: [1350,1425,1500,1575,1650], interp: 2, keep_ratio: True} + - RandomFlip: {prob: 0.5} + - NormalizeImage: {is_scale: true, mean: [0.485,0.456,0.406], std: [0.229, 0.224,0.225]} + - Permute: {} + batch_transforms: + - PadBatch: {pad_to_stride: 32} + batch_size: 1 + shuffle: true + drop_last: true + collate_batch: false + + +EvalReader: + sample_transforms: + - Decode: {} + - Resize: {interp: 2, target_size: [1500, 1500], keep_ratio: True} + - NormalizeImage: {is_scale: true, mean: [0.485,0.456,0.406], std: [0.229, 0.224,0.225]} + - Permute: {} + batch_transforms: + - PadBatch: {pad_to_stride: 32} + batch_size: 1 + 
shuffle: false + drop_last: false + + +TestReader: + sample_transforms: + - Decode: {} + - Resize: {interp: 2, target_size: [1500, 1500], keep_ratio: True} + - NormalizeImage: {is_scale: true, mean: [0.485,0.456,0.406], std: [0.229, 0.224,0.225]} + - Permute: {} + batch_transforms: + - PadBatch: {pad_to_stride: 32} + batch_size: 1 + shuffle: false + drop_last: false + +use_gpu: true +use_xpu: false +log_iter: 100 +save_dir: output +snapshot_epoch: 5 +print_flops: false +find_unused_parameters: True + + +OptimizerBuilder: + optimizer: + momentum: 0.9 + type: Momentum + regularizer: + factor: 0.0001 + type: L2 + +architecture: CascadeRCNN + +CascadeRCNN: + backbone: ResNet + neck: FPN + rpn_head: RPNHead + bbox_head: CascadeHead + # post process + bbox_post_process: BBoxPostProcess + +ResNet: + depth: 50 + variant: d + norm_type: bn + freeze_at: 0 + return_idx: [0,1,2,3] + num_stages: 4 + lr_mult_list: [0.05, 0.05, 0.1, 0.15] + +FPN: + out_channel: 256 + +RPNHead: + anchor_generator: + aspect_ratios: [0.5, 1.0, 2.0] + anchor_sizes: [[32], [64], [128], [256], [512]] + strides: [4, 8, 16, 32, 64] + rpn_target_assign: + batch_size_per_im: 256 + fg_fraction: 0.5 + negative_overlap: 0.3 + positive_overlap: 0.7 + use_random: True + train_proposal: + min_size: 0.0 + nms_thresh: 0.7 + pre_nms_top_n: 2000 + post_nms_top_n: 2000 + topk_after_collect: True + test_proposal: + min_size: 0.0 + nms_thresh: 0.7 + pre_nms_top_n: 1000 + post_nms_top_n: 1000 + + +CascadeHead: + head: CascadeTwoFCHead + roi_extractor: + resolution: 7 + sampling_ratio: 0 + aligned: True + bbox_assigner: BBoxAssigner + +BBoxAssigner: + batch_size_per_im: 512 + bg_thresh: 0.5 + fg_thresh: 0.5 + fg_fraction: 0.25 + cascade_iou: [0.5, 0.6, 0.7] + use_random: True + +CascadeTwoFCHead: + out_channel: 1024 + +BBoxPostProcess: + decode: + name: RCNNBox + prior_box_var: [30.0, 30.0, 15.0, 15.0] + nms: + name: MultiClassNMS + keep_top_k: 100 + score_threshold: 0.05 + nms_threshold: 0.5 diff --git a/configs/smrt/rcnn/cascade_rcnn_r50_vd_fpn_ssld_2x_800_battery.yml b/configs/smrt/rcnn/cascade_rcnn_r50_vd_fpn_ssld_2x_800_battery.yml new file mode 100644 index 0000000000000000000000000000000000000000..20025b07da573fbb7cff5936c50509358b85aa99 --- /dev/null +++ b/configs/smrt/rcnn/cascade_rcnn_r50_vd_fpn_ssld_2x_800_battery.yml @@ -0,0 +1,168 @@ +weights: output/cascade_rcnn_r50_vd_fpn_ssld_2x_800_battery/model_final +pretrain_weights: https://paddledet.bj.bcebos.com/models/cascade_rcnn_r50_vd_fpn_ssld_2x_coco.pdparams + +metric: COCO +num_classes: 45 + +TrainDataset: + !COCODataSet + image_dir: images + anno_path: annotations/train.json + dataset_dir: dataset/battery_mini + data_fields: ['image', 'gt_bbox', 'gt_class', 'is_crowd'] + +EvalDataset: + !COCODataSet + image_dir: images + anno_path: annotations/test.json + dataset_dir: dataset/battery_mini + +TestDataset: + !ImageFolder + anno_path: annotations/test.json + dataset_dir: dataset/battery_mini + +epoch: 24 +LearningRate: + base_lr: 0.00025 + schedulers: + - !PiecewiseDecay + gamma: 0.1 + milestones: [12, 22] + - !LinearWarmup + start_factor: 0.1 + steps: 1000 + + +worker_num: 2 +TrainReader: + sample_transforms: + - Decode: {} + - RandomResize: {target_size: [[640, 1333], [672, 1333], [704, 1333], [736, 1333], [768, 1333], [800, 1333]], interp: 2, keep_ratio: True} + - RandomFlip: {prob: 0.5} + - NormalizeImage: {is_scale: true, mean: [0.485,0.456,0.406], std: [0.229, 0.224,0.225]} + - Permute: {} + batch_transforms: + - PadBatch: {pad_to_stride: 32} + batch_size: 1 + shuffle: true + 
drop_last: true + collate_batch: false + + +EvalReader: + sample_transforms: + - Decode: {} + - Resize: {interp: 2, target_size: [800, 1333], keep_ratio: True} + - NormalizeImage: {is_scale: true, mean: [0.485,0.456,0.406], std: [0.229, 0.224,0.225]} + - Permute: {} + batch_transforms: + - PadBatch: {pad_to_stride: 32} + batch_size: 1 + shuffle: false + drop_last: false + + +TestReader: + sample_transforms: + - Decode: {} + - Resize: {interp: 2, target_size: [800, 1333], keep_ratio: True} + - NormalizeImage: {is_scale: true, mean: [0.485,0.456,0.406], std: [0.229, 0.224,0.225]} + - Permute: {} + batch_transforms: + - PadBatch: {pad_to_stride: 32} + batch_size: 1 + shuffle: false + drop_last: false + +use_gpu: true +use_xpu: false +log_iter: 100 +save_dir: output +snapshot_epoch: 5 +print_flops: false +find_unused_parameters: True + + +OptimizerBuilder: + optimizer: + momentum: 0.9 + type: Momentum + regularizer: + factor: 0.0001 + type: L2 + +architecture: CascadeRCNN + +CascadeRCNN: + backbone: ResNet + neck: FPN + rpn_head: RPNHead + bbox_head: CascadeHead + # post process + bbox_post_process: BBoxPostProcess + +ResNet: + depth: 50 + variant: d + norm_type: bn + freeze_at: 0 + return_idx: [0,1,2,3] + num_stages: 4 + lr_mult_list: [0.05, 0.05, 0.1, 0.15] + +FPN: + out_channel: 256 + +RPNHead: + anchor_generator: + aspect_ratios: [0.5, 1.0, 2.0] + anchor_sizes: [[32], [64], [128], [256], [512]] + strides: [4, 8, 16, 32, 64] + rpn_target_assign: + batch_size_per_im: 256 + fg_fraction: 0.5 + negative_overlap: 0.3 + positive_overlap: 0.7 + use_random: True + train_proposal: + min_size: 0.0 + nms_thresh: 0.7 + pre_nms_top_n: 2000 + post_nms_top_n: 2000 + topk_after_collect: True + test_proposal: + min_size: 0.0 + nms_thresh: 0.7 + pre_nms_top_n: 1000 + post_nms_top_n: 1000 + + +CascadeHead: + head: CascadeTwoFCHead + roi_extractor: + resolution: 7 + sampling_ratio: 0 + aligned: True + bbox_assigner: BBoxAssigner + +BBoxAssigner: + batch_size_per_im: 512 + bg_thresh: 0.5 + fg_thresh: 0.5 + fg_fraction: 0.25 + cascade_iou: [0.5, 0.6, 0.7] + use_random: True + +CascadeTwoFCHead: + out_channel: 1024 + +BBoxPostProcess: + decode: + name: RCNNBox + prior_box_var: [30.0, 30.0, 15.0, 15.0] + nms: + name: MultiClassNMS + keep_top_k: 100 + score_threshold: 0.05 + nms_threshold: 0.5 diff --git a/configs/smrt/rcnn/cascade_rcnn_r50_vd_fpn_ssld_2x_800_lvjian1.yml b/configs/smrt/rcnn/cascade_rcnn_r50_vd_fpn_ssld_2x_800_lvjian1.yml new file mode 100644 index 0000000000000000000000000000000000000000..6e0352a1952c58a3d168787364f0b2b77fede322 --- /dev/null +++ b/configs/smrt/rcnn/cascade_rcnn_r50_vd_fpn_ssld_2x_800_lvjian1.yml @@ -0,0 +1,168 @@ +weights: output/cascade_rcnn_r50_vd_fpn_ssld_2x_800_lvjian1/model_final +pretrain_weights: https://paddledet.bj.bcebos.com/models/cascade_rcnn_r50_vd_fpn_ssld_2x_coco.pdparams + +metric: COCO +num_classes: 5 + +TrainDataset: + !COCODataSet + image_dir: images + anno_path: train.json + dataset_dir: dataset/slice_lvjian1_data/train/ + data_fields: ['image', 'gt_bbox', 'gt_class', 'is_crowd'] + +EvalDataset: + !COCODataSet + image_dir: images + anno_path: val.json + dataset_dir: dataset/slice_lvjian1_data/eval + +TestDataset: + !ImageFolder + anno_path: val.json + dataset_dir: dataset/slice_lvjian1_data/eval + +epoch: 24 +LearningRate: + base_lr: 0.00025 + schedulers: + - !PiecewiseDecay + gamma: 0.1 + milestones: [12, 22] + - !LinearWarmup + start_factor: 0.1 + steps: 1000 + + +worker_num: 2 +TrainReader: + sample_transforms: + - Decode: {} + - RandomResize: {target_size: 
[[640, 1333], [672, 1333], [704, 1333], [736, 1333], [768, 1333], [800, 1333]], interp: 2, keep_ratio: True} + - RandomFlip: {prob: 0.5} + - NormalizeImage: {is_scale: true, mean: [0.485,0.456,0.406], std: [0.229, 0.224,0.225]} + - Permute: {} + batch_transforms: + - PadBatch: {pad_to_stride: 32} + batch_size: 1 + shuffle: true + drop_last: true + collate_batch: false + + +EvalReader: + sample_transforms: + - Decode: {} + - Resize: {interp: 2, target_size: [800, 1333], keep_ratio: True} + - NormalizeImage: {is_scale: true, mean: [0.485,0.456,0.406], std: [0.229, 0.224,0.225]} + - Permute: {} + batch_transforms: + - PadBatch: {pad_to_stride: 32} + batch_size: 1 + shuffle: false + drop_last: false + + +TestReader: + sample_transforms: + - Decode: {} + - Resize: {interp: 2, target_size: [800, 1333], keep_ratio: True} + - NormalizeImage: {is_scale: true, mean: [0.485,0.456,0.406], std: [0.229, 0.224,0.225]} + - Permute: {} + batch_transforms: + - PadBatch: {pad_to_stride: 32} + batch_size: 1 + shuffle: false + drop_last: false + +use_gpu: true +use_xpu: false +log_iter: 100 +save_dir: output +snapshot_epoch: 5 +print_flops: false +find_unused_parameters: True + + +OptimizerBuilder: + optimizer: + momentum: 0.9 + type: Momentum + regularizer: + factor: 0.0001 + type: L2 + +architecture: CascadeRCNN + +CascadeRCNN: + backbone: ResNet + neck: FPN + rpn_head: RPNHead + bbox_head: CascadeHead + # post process + bbox_post_process: BBoxPostProcess + +ResNet: + depth: 50 + variant: d + norm_type: bn + freeze_at: 0 + return_idx: [0,1,2,3] + num_stages: 4 + lr_mult_list: [0.05, 0.05, 0.1, 0.15] + +FPN: + out_channel: 256 + +RPNHead: + anchor_generator: + aspect_ratios: [0.5, 1.0, 2.0] + anchor_sizes: [[32], [64], [128], [256], [512]] + strides: [4, 8, 16, 32, 64] + rpn_target_assign: + batch_size_per_im: 256 + fg_fraction: 0.5 + negative_overlap: 0.3 + positive_overlap: 0.7 + use_random: True + train_proposal: + min_size: 0.0 + nms_thresh: 0.7 + pre_nms_top_n: 2000 + post_nms_top_n: 2000 + topk_after_collect: True + test_proposal: + min_size: 0.0 + nms_thresh: 0.7 + pre_nms_top_n: 1000 + post_nms_top_n: 1000 + + +CascadeHead: + head: CascadeTwoFCHead + roi_extractor: + resolution: 7 + sampling_ratio: 0 + aligned: True + bbox_assigner: BBoxAssigner + +BBoxAssigner: + batch_size_per_im: 512 + bg_thresh: 0.5 + fg_thresh: 0.5 + fg_fraction: 0.25 + cascade_iou: [0.5, 0.6, 0.7] + use_random: True + +CascadeTwoFCHead: + out_channel: 1024 + +BBoxPostProcess: + decode: + name: RCNNBox + prior_box_var: [30.0, 30.0, 15.0, 15.0] + nms: + name: MultiClassNMS + keep_top_k: 100 + score_threshold: 0.05 + nms_threshold: 0.5 diff --git a/configs/smrt/rcnn/cascade_rcnn_r50_vd_fpn_ssld_2x_800_renche.yml b/configs/smrt/rcnn/cascade_rcnn_r50_vd_fpn_ssld_2x_800_renche.yml new file mode 100644 index 0000000000000000000000000000000000000000..448b65db663322476f7f0db79fcd5e6a52982720 --- /dev/null +++ b/configs/smrt/rcnn/cascade_rcnn_r50_vd_fpn_ssld_2x_800_renche.yml @@ -0,0 +1,168 @@ +weights: output/cascade_rcnn_r50_vd_fpn_ssld_2x_800_renche/model_final +pretrain_weights: https://paddledet.bj.bcebos.com/models/cascade_rcnn_r50_vd_fpn_ssld_2x_coco.pdparams + +metric: COCO +num_classes: 22 + +TrainDataset: + !COCODataSet + image_dir: train_images + anno_path: train.json + dataset_dir: dataset/renche + data_fields: ['image', 'gt_bbox', 'gt_class', 'is_crowd'] + +EvalDataset: + !COCODataSet + image_dir: train_images + anno_path: test.json + dataset_dir: dataset/renche + +TestDataset: + !ImageFolder + anno_path: test.json + 
dataset_dir: dataset/renche + +epoch: 24 +LearningRate: + base_lr: 0.00025 + schedulers: + - !PiecewiseDecay + gamma: 0.1 + milestones: [12, 22] + - !LinearWarmup + start_factor: 0.1 + steps: 1000 + + +worker_num: 2 +TrainReader: + sample_transforms: + - Decode: {} + - RandomResize: {target_size: [[640, 1333], [672, 1333], [704, 1333], [736, 1333], [768, 1333], [800, 1333]], interp: 2, keep_ratio: True} + - RandomFlip: {prob: 0.5} + - NormalizeImage: {is_scale: true, mean: [0.485,0.456,0.406], std: [0.229, 0.224,0.225]} + - Permute: {} + batch_transforms: + - PadBatch: {pad_to_stride: 32} + batch_size: 1 + shuffle: true + drop_last: true + collate_batch: false + + +EvalReader: + sample_transforms: + - Decode: {} + - Resize: {interp: 2, target_size: [800, 1333], keep_ratio: True} + - NormalizeImage: {is_scale: true, mean: [0.485,0.456,0.406], std: [0.229, 0.224,0.225]} + - Permute: {} + batch_transforms: + - PadBatch: {pad_to_stride: 32} + batch_size: 1 + shuffle: false + drop_last: false + + +TestReader: + sample_transforms: + - Decode: {} + - Resize: {interp: 2, target_size: [800, 1333], keep_ratio: True} + - NormalizeImage: {is_scale: true, mean: [0.485,0.456,0.406], std: [0.229, 0.224,0.225]} + - Permute: {} + batch_transforms: + - PadBatch: {pad_to_stride: 32} + batch_size: 1 + shuffle: false + drop_last: false + +use_gpu: true +use_xpu: false +log_iter: 100 +save_dir: output +snapshot_epoch: 5 +print_flops: false +find_unused_parameters: True + + +OptimizerBuilder: + optimizer: + momentum: 0.9 + type: Momentum + regularizer: + factor: 0.0001 + type: L2 + +architecture: CascadeRCNN + +CascadeRCNN: + backbone: ResNet + neck: FPN + rpn_head: RPNHead + bbox_head: CascadeHead + # post process + bbox_post_process: BBoxPostProcess + +ResNet: + depth: 50 + variant: d + norm_type: bn + freeze_at: 0 + return_idx: [0,1,2,3] + num_stages: 4 + lr_mult_list: [0.05, 0.05, 0.1, 0.15] + +FPN: + out_channel: 256 + +RPNHead: + anchor_generator: + aspect_ratios: [0.5, 1.0, 2.0] + anchor_sizes: [[32], [64], [128], [256], [512]] + strides: [4, 8, 16, 32, 64] + rpn_target_assign: + batch_size_per_im: 256 + fg_fraction: 0.5 + negative_overlap: 0.3 + positive_overlap: 0.7 + use_random: True + train_proposal: + min_size: 0.0 + nms_thresh: 0.7 + pre_nms_top_n: 2000 + post_nms_top_n: 2000 + topk_after_collect: True + test_proposal: + min_size: 0.0 + nms_thresh: 0.7 + pre_nms_top_n: 1000 + post_nms_top_n: 1000 + + +CascadeHead: + head: CascadeTwoFCHead + roi_extractor: + resolution: 7 + sampling_ratio: 0 + aligned: True + bbox_assigner: BBoxAssigner + +BBoxAssigner: + batch_size_per_im: 512 + bg_thresh: 0.5 + fg_thresh: 0.5 + fg_fraction: 0.25 + cascade_iou: [0.5, 0.6, 0.7] + use_random: True + +CascadeTwoFCHead: + out_channel: 1024 + +BBoxPostProcess: + decode: + name: RCNNBox + prior_box_var: [30.0, 30.0, 15.0, 15.0] + nms: + name: MultiClassNMS + keep_top_k: 100 + score_threshold: 0.05 + nms_threshold: 0.5 diff --git a/configs/smrt/rcnn/faster_rcnn_r101_vd_fpn_ssld_2x_1500_battery.yml b/configs/smrt/rcnn/faster_rcnn_r101_vd_fpn_ssld_2x_1500_battery.yml new file mode 100644 index 0000000000000000000000000000000000000000..7e6b8871b9525a0f6775266298872178cf5b49aa --- /dev/null +++ b/configs/smrt/rcnn/faster_rcnn_r101_vd_fpn_ssld_2x_1500_battery.yml @@ -0,0 +1,166 @@ +weights: output/faster_rcnn_r101_vd_fpn_ssld_2x_1500_battery/model_final +pretrain_weights: https://paddledet.bj.bcebos.com/models/faster_rcnn_r101_vd_fpn_ssld_2x_coco.pdparams + +metric: COCO +num_classes: 45 + +TrainDataset: + !COCODataSet + 
image_dir: images + anno_path: annotations/train.json + dataset_dir: dataset/battery_mini + data_fields: ['image', 'gt_bbox', 'gt_class', 'is_crowd'] + +EvalDataset: + !COCODataSet + image_dir: images + anno_path: annotations/test.json + dataset_dir: dataset/battery_mini + +TestDataset: + !ImageFolder + anno_path: annotations/test.json + dataset_dir: dataset/battery_mini + +epoch: 24 +LearningRate: + base_lr: 0.001 + schedulers: + - !PiecewiseDecay + gamma: 0.1 + milestones: [16, 22] + - !LinearWarmup + start_factor: 0.1 + steps: 1000 + +worker_num: 2 +TrainReader: + sample_transforms: + - Decode: {} + - RandomResize: {target_size: [[800, 800], [900, 900], [1000, 1000], [1200, 1200], [1400, 1400], [1500, 1500]], interp: 2, keep_ratio: True} + - RandomFlip: {prob: 0.5} + - NormalizeImage: {is_scale: true, mean: [0.485,0.456,0.406], std: [0.229, 0.224,0.225]} + - Permute: {} + batch_transforms: + - PadBatch: {pad_to_stride: -1} + batch_size: 4 + shuffle: true + drop_last: true + collate_batch: false + + +EvalReader: + sample_transforms: + - Decode: {} + - Resize: {interp: 2, target_size: [1500, 1500], keep_ratio: True} + - NormalizeImage: {is_scale: true, mean: [0.485,0.456,0.406], std: [0.229, 0.224,0.225]} + - Permute: {} + batch_transforms: + - PadBatch: {pad_to_stride: -1} + batch_size: 1 + shuffle: false + drop_last: false + + +TestReader: + sample_transforms: + - Decode: {} + - Resize: {interp: 2, target_size: [1500, 1500], keep_ratio: True} + - NormalizeImage: {is_scale: true, mean: [0.485,0.456,0.406], std: [0.229, 0.224,0.225]} + - Permute: {} + batch_transforms: + - PadBatch: {pad_to_stride: -1} + batch_size: 1 + shuffle: false + drop_last: false + +use_gpu: true +use_xpu: false +log_iter: 100 +save_dir: output +snapshot_epoch: 2 +print_flops: false + + +OptimizerBuilder: + optimizer: + momentum: 0.9 + type: Momentum + regularizer: + factor: 0.0001 + type: L2 + + +architecture: FasterRCNN + +FasterRCNN: + backbone: ResNet + neck: FPN + rpn_head: RPNHead + bbox_head: BBoxHead + # post process + bbox_post_process: BBoxPostProcess + + +ResNet: + # index 0 stands for res2 + depth: 101 + variant: d + norm_type: bn + freeze_at: 0 + return_idx: [0,1,2,3] + num_stages: 4 + lr_mult_list: [0.05, 0.05, 0.1, 0.15] + +FPN: + out_channel: 256 + +RPNHead: + anchor_generator: + aspect_ratios: [0.5, 1.0, 2.0] + anchor_sizes: [[32], [64], [128], [256], [512]] + strides: [4, 8, 16, 32, 64] + rpn_target_assign: + batch_size_per_im: 256 + fg_fraction: 0.5 + negative_overlap: 0.3 + positive_overlap: 0.7 + use_random: True + train_proposal: + min_size: 0.0 + nms_thresh: 0.7 + pre_nms_top_n: 2000 + post_nms_top_n: 1000 + topk_after_collect: True + test_proposal: + min_size: 0.0 + nms_thresh: 0.7 + pre_nms_top_n: 1000 + post_nms_top_n: 1000 + + +BBoxHead: + head: TwoFCHead + roi_extractor: + resolution: 7 + sampling_ratio: 0 + aligned: True + bbox_assigner: BBoxAssigner + +BBoxAssigner: + batch_size_per_im: 512 + bg_thresh: 0.5 + fg_thresh: 0.5 + fg_fraction: 0.25 + use_random: True + +TwoFCHead: + out_channel: 1024 + +BBoxPostProcess: + decode: RCNNBox + nms: + name: MultiClassNMS + keep_top_k: 100 + score_threshold: 0.05 + nms_threshold: 0.5 diff --git a/configs/smrt/rcnn/faster_rcnn_r101_vd_fpn_ssld_2x_1500_lvjian1.yml b/configs/smrt/rcnn/faster_rcnn_r101_vd_fpn_ssld_2x_1500_lvjian1.yml new file mode 100644 index 0000000000000000000000000000000000000000..190ed8fa183656127445602792df861b8018e938 --- /dev/null +++ b/configs/smrt/rcnn/faster_rcnn_r101_vd_fpn_ssld_2x_1500_lvjian1.yml @@ -0,0 +1,166 @@ 
+weights: output/faster_rcnn_r101_vd_fpn_ssld_2x_1500_lvjian1/model_final +pretrain_weights: https://paddledet.bj.bcebos.com/models/faster_rcnn_r101_vd_fpn_ssld_2x_coco.pdparams + +metric: COCO +num_classes: 5 + +TrainDataset: + !COCODataSet + image_dir: images + anno_path: train.json + dataset_dir: dataset/slice_lvjian1_data/train/ + data_fields: ['image', 'gt_bbox', 'gt_class', 'is_crowd'] + +EvalDataset: + !COCODataSet + image_dir: images + anno_path: val.json + dataset_dir: dataset/slice_lvjian1_data/eval + +TestDataset: + !ImageFolder + anno_path: val.json + dataset_dir: dataset/slice_lvjian1_data/eval + +epoch: 24 +LearningRate: + base_lr: 0.001 + schedulers: + - !PiecewiseDecay + gamma: 0.1 + milestones: [16, 22] + - !LinearWarmup + start_factor: 0.1 + steps: 1000 + +worker_num: 2 +TrainReader: + sample_transforms: + - Decode: {} + - RandomResize: {target_size: [[800, 800], [900, 900], [1000, 1000], [1200, 1200], [1400, 1400], [1500, 1500]], interp: 2, keep_ratio: True} + - RandomFlip: {prob: 0.5} + - NormalizeImage: {is_scale: true, mean: [0.485,0.456,0.406], std: [0.229, 0.224,0.225]} + - Permute: {} + batch_transforms: + - PadBatch: {pad_to_stride: -1} + batch_size: 4 + shuffle: true + drop_last: true + collate_batch: false + + +EvalReader: + sample_transforms: + - Decode: {} + - Resize: {interp: 2, target_size: [1500, 1500], keep_ratio: True} + - NormalizeImage: {is_scale: true, mean: [0.485,0.456,0.406], std: [0.229, 0.224,0.225]} + - Permute: {} + batch_transforms: + - PadBatch: {pad_to_stride: -1} + batch_size: 1 + shuffle: false + drop_last: false + + +TestReader: + sample_transforms: + - Decode: {} + - Resize: {interp: 2, target_size: [1500, 1500], keep_ratio: True} + - NormalizeImage: {is_scale: true, mean: [0.485,0.456,0.406], std: [0.229, 0.224,0.225]} + - Permute: {} + batch_transforms: + - PadBatch: {pad_to_stride: -1} + batch_size: 1 + shuffle: false + drop_last: false + +use_gpu: true +use_xpu: false +log_iter: 100 +save_dir: output +snapshot_epoch: 2 +print_flops: false + + +OptimizerBuilder: + optimizer: + momentum: 0.9 + type: Momentum + regularizer: + factor: 0.0001 + type: L2 + + +architecture: FasterRCNN + +FasterRCNN: + backbone: ResNet + neck: FPN + rpn_head: RPNHead + bbox_head: BBoxHead + # post process + bbox_post_process: BBoxPostProcess + + +ResNet: + # index 0 stands for res2 + depth: 101 + variant: d + norm_type: bn + freeze_at: 0 + return_idx: [0,1,2,3] + num_stages: 4 + lr_mult_list: [0.05, 0.05, 0.1, 0.15] + +FPN: + out_channel: 256 + +RPNHead: + anchor_generator: + aspect_ratios: [0.5, 1.0, 2.0] + anchor_sizes: [[32], [64], [128], [256], [512]] + strides: [4, 8, 16, 32, 64] + rpn_target_assign: + batch_size_per_im: 256 + fg_fraction: 0.5 + negative_overlap: 0.3 + positive_overlap: 0.7 + use_random: True + train_proposal: + min_size: 0.0 + nms_thresh: 0.7 + pre_nms_top_n: 2000 + post_nms_top_n: 1000 + topk_after_collect: True + test_proposal: + min_size: 0.0 + nms_thresh: 0.7 + pre_nms_top_n: 1000 + post_nms_top_n: 1000 + + +BBoxHead: + head: TwoFCHead + roi_extractor: + resolution: 7 + sampling_ratio: 0 + aligned: True + bbox_assigner: BBoxAssigner + +BBoxAssigner: + batch_size_per_im: 512 + bg_thresh: 0.5 + fg_thresh: 0.5 + fg_fraction: 0.25 + use_random: True + +TwoFCHead: + out_channel: 1024 + +BBoxPostProcess: + decode: RCNNBox + nms: + name: MultiClassNMS + keep_top_k: 100 + score_threshold: 0.05 + nms_threshold: 0.5 diff --git a/configs/smrt/rcnn/faster_rcnn_r101_vd_fpn_ssld_2x_1500_renche.yml 
b/configs/smrt/rcnn/faster_rcnn_r101_vd_fpn_ssld_2x_1500_renche.yml new file mode 100644 index 0000000000000000000000000000000000000000..947c6e43bc6ff42f150566b7ef1e9713cd749926 --- /dev/null +++ b/configs/smrt/rcnn/faster_rcnn_r101_vd_fpn_ssld_2x_1500_renche.yml @@ -0,0 +1,166 @@ +weights: output/faster_rcnn_r101_vd_fpn_ssld_2x_1500_renche/model_final +pretrain_weights: https://paddledet.bj.bcebos.com/models/faster_rcnn_r101_vd_fpn_ssld_2x_coco.pdparams + +metric: COCO +num_classes: 22 + +TrainDataset: + !COCODataSet + image_dir: train_images + anno_path: train.json + dataset_dir: dataset/renche + data_fields: ['image', 'gt_bbox', 'gt_class', 'is_crowd'] + +EvalDataset: + !COCODataSet + image_dir: train_images + anno_path: test.json + dataset_dir: dataset/renche + +TestDataset: + !ImageFolder + anno_path: test.json + dataset_dir: dataset/renche + +epoch: 24 +LearningRate: + base_lr: 0.001 + schedulers: + - !PiecewiseDecay + gamma: 0.1 + milestones: [16, 22] + - !LinearWarmup + start_factor: 0.1 + steps: 1000 + +worker_num: 2 +TrainReader: + sample_transforms: + - Decode: {} + - RandomResize: {target_size: [[800, 800], [900, 900], [1000, 1000], [1200, 1200], [1400, 1400], [1500, 1500]], interp: 2, keep_ratio: True} + - RandomFlip: {prob: 0.5} + - NormalizeImage: {is_scale: true, mean: [0.485,0.456,0.406], std: [0.229, 0.224,0.225]} + - Permute: {} + batch_transforms: + - PadBatch: {pad_to_stride: -1} + batch_size: 4 + shuffle: true + drop_last: true + collate_batch: false + + +EvalReader: + sample_transforms: + - Decode: {} + - Resize: {interp: 2, target_size: [1500, 1500], keep_ratio: True} + - NormalizeImage: {is_scale: true, mean: [0.485,0.456,0.406], std: [0.229, 0.224,0.225]} + - Permute: {} + batch_transforms: + - PadBatch: {pad_to_stride: -1} + batch_size: 1 + shuffle: false + drop_last: false + + +TestReader: + sample_transforms: + - Decode: {} + - Resize: {interp: 2, target_size: [1500, 1500], keep_ratio: True} + - NormalizeImage: {is_scale: true, mean: [0.485,0.456,0.406], std: [0.229, 0.224,0.225]} + - Permute: {} + batch_transforms: + - PadBatch: {pad_to_stride: -1} + batch_size: 1 + shuffle: false + drop_last: false + +use_gpu: true +use_xpu: false +log_iter: 100 +save_dir: output +snapshot_epoch: 2 +print_flops: false + + +OptimizerBuilder: + optimizer: + momentum: 0.9 + type: Momentum + regularizer: + factor: 0.0001 + type: L2 + + +architecture: FasterRCNN + +FasterRCNN: + backbone: ResNet + neck: FPN + rpn_head: RPNHead + bbox_head: BBoxHead + # post process + bbox_post_process: BBoxPostProcess + + +ResNet: + # index 0 stands for res2 + depth: 101 + variant: d + norm_type: bn + freeze_at: 0 + return_idx: [0,1,2,3] + num_stages: 4 + lr_mult_list: [0.05, 0.05, 0.1, 0.15] + +FPN: + out_channel: 256 + +RPNHead: + anchor_generator: + aspect_ratios: [0.5, 1.0, 2.0] + anchor_sizes: [[32], [64], [128], [256], [512]] + strides: [4, 8, 16, 32, 64] + rpn_target_assign: + batch_size_per_im: 256 + fg_fraction: 0.5 + negative_overlap: 0.3 + positive_overlap: 0.7 + use_random: True + train_proposal: + min_size: 0.0 + nms_thresh: 0.7 + pre_nms_top_n: 2000 + post_nms_top_n: 1000 + topk_after_collect: True + test_proposal: + min_size: 0.0 + nms_thresh: 0.7 + pre_nms_top_n: 1000 + post_nms_top_n: 1000 + + +BBoxHead: + head: TwoFCHead + roi_extractor: + resolution: 7 + sampling_ratio: 0 + aligned: True + bbox_assigner: BBoxAssigner + +BBoxAssigner: + batch_size_per_im: 512 + bg_thresh: 0.5 + fg_thresh: 0.5 + fg_fraction: 0.25 + use_random: True + +TwoFCHead: + out_channel: 1024 + 
+BBoxPostProcess: + decode: RCNNBox + nms: + name: MultiClassNMS + keep_top_k: 100 + score_threshold: 0.05 + nms_threshold: 0.5 diff --git a/configs/smrt/rcnn/faster_rcnn_r101_vd_fpn_ssld_2x_800_battery.yml b/configs/smrt/rcnn/faster_rcnn_r101_vd_fpn_ssld_2x_800_battery.yml new file mode 100644 index 0000000000000000000000000000000000000000..148a0459e8e8f5aea9b74d1e943852c82f524127 --- /dev/null +++ b/configs/smrt/rcnn/faster_rcnn_r101_vd_fpn_ssld_2x_800_battery.yml @@ -0,0 +1,167 @@ +weights: output/faster_rcnn_r101_vd_fpn_ssld_2x_800_battery/model_final +pretrain_weights: https://paddledet.bj.bcebos.com/models/faster_rcnn_r101_vd_fpn_ssld_2x_coco.pdparams + +metric: COCO +num_classes: 45 + +TrainDataset: + !COCODataSet + image_dir: images + anno_path: annotations/train.json + dataset_dir: dataset/battery_mini + data_fields: ['image', 'gt_bbox', 'gt_class', 'is_crowd'] + +EvalDataset: + !COCODataSet + image_dir: images + anno_path: annotations/test.json + dataset_dir: dataset/battery_mini + +TestDataset: + !ImageFolder + anno_path: annotations/test.json + dataset_dir: dataset/battery_mini + +epoch: 24 +LearningRate: + base_lr: 0.001 + schedulers: + - !PiecewiseDecay + gamma: 0.1 + milestones: [16, 22] + - !LinearWarmup + start_factor: 0.1 + steps: 1000 + + +worker_num: 2 +TrainReader: + sample_transforms: + - Decode: {} + - RandomResize: {target_size: [[640, 1333], [672, 1333], [704, 1333], [736, 1333], [768, 1333], [800, 1333]], interp: 2, keep_ratio: True} + - RandomFlip: {prob: 0.5} + - NormalizeImage: {is_scale: true, mean: [0.485,0.456,0.406], std: [0.229, 0.224,0.225]} + - Permute: {} + batch_transforms: + - PadBatch: {pad_to_stride: -1} + batch_size: 4 + shuffle: true + drop_last: true + collate_batch: false + + +EvalReader: + sample_transforms: + - Decode: {} + - Resize: {interp: 2, target_size: [800, 1333], keep_ratio: True} + - NormalizeImage: {is_scale: true, mean: [0.485,0.456,0.406], std: [0.229, 0.224,0.225]} + - Permute: {} + batch_transforms: + - PadBatch: {pad_to_stride: -1} + batch_size: 1 + shuffle: false + drop_last: false + + +TestReader: + sample_transforms: + - Decode: {} + - Resize: {interp: 2, target_size: [800, 1333], keep_ratio: True} + - NormalizeImage: {is_scale: true, mean: [0.485,0.456,0.406], std: [0.229, 0.224,0.225]} + - Permute: {} + batch_transforms: + - PadBatch: {pad_to_stride: -1} + batch_size: 1 + shuffle: false + drop_last: false + +use_gpu: true +use_xpu: false +log_iter: 100 +save_dir: output +snapshot_epoch: 2 +print_flops: false + + +OptimizerBuilder: + optimizer: + momentum: 0.9 + type: Momentum + regularizer: + factor: 0.0001 + type: L2 + + +architecture: FasterRCNN + +FasterRCNN: + backbone: ResNet + neck: FPN + rpn_head: RPNHead + bbox_head: BBoxHead + # post process + bbox_post_process: BBoxPostProcess + + +ResNet: + # index 0 stands for res2 + depth: 101 + variant: d + norm_type: bn + freeze_at: 0 + return_idx: [0,1,2,3] + num_stages: 4 + lr_mult_list: [0.05, 0.05, 0.1, 0.15] + +FPN: + out_channel: 256 + +RPNHead: + anchor_generator: + aspect_ratios: [0.5, 1.0, 2.0] + anchor_sizes: [[32], [64], [128], [256], [512]] + strides: [4, 8, 16, 32, 64] + rpn_target_assign: + batch_size_per_im: 256 + fg_fraction: 0.5 + negative_overlap: 0.3 + positive_overlap: 0.7 + use_random: True + train_proposal: + min_size: 0.0 + nms_thresh: 0.7 + pre_nms_top_n: 2000 + post_nms_top_n: 1000 + topk_after_collect: True + test_proposal: + min_size: 0.0 + nms_thresh: 0.7 + pre_nms_top_n: 1000 + post_nms_top_n: 1000 + + +BBoxHead: + head: TwoFCHead + roi_extractor: 
+ resolution: 7 + sampling_ratio: 0 + aligned: True + bbox_assigner: BBoxAssigner + +BBoxAssigner: + batch_size_per_im: 512 + bg_thresh: 0.5 + fg_thresh: 0.5 + fg_fraction: 0.25 + use_random: True + +TwoFCHead: + out_channel: 1024 + +BBoxPostProcess: + decode: RCNNBox + nms: + name: MultiClassNMS + keep_top_k: 100 + score_threshold: 0.05 + nms_threshold: 0.5 diff --git a/configs/smrt/rcnn/faster_rcnn_r101_vd_fpn_ssld_2x_800_lvjian1.yml b/configs/smrt/rcnn/faster_rcnn_r101_vd_fpn_ssld_2x_800_lvjian1.yml new file mode 100644 index 0000000000000000000000000000000000000000..9362638d3f05b30c2274e199410e7fa509e0eb10 --- /dev/null +++ b/configs/smrt/rcnn/faster_rcnn_r101_vd_fpn_ssld_2x_800_lvjian1.yml @@ -0,0 +1,167 @@ +weights: output/faster_rcnn_r101_vd_fpn_ssld_2x_800_lvjian1/model_final +pretrain_weights: https://paddledet.bj.bcebos.com/models/faster_rcnn_r101_vd_fpn_ssld_2x_coco.pdparams + +metric: COCO +num_classes: 5 + +TrainDataset: + !COCODataSet + image_dir: images + anno_path: train.json + dataset_dir: dataset/slice_lvjian1_data/train/ + data_fields: ['image', 'gt_bbox', 'gt_class', 'is_crowd'] + +EvalDataset: + !COCODataSet + image_dir: images + anno_path: val.json + dataset_dir: dataset/slice_lvjian1_data/eval + +TestDataset: + !ImageFolder + anno_path: val.json + dataset_dir: dataset/slice_lvjian1_data/eval + +epoch: 24 +LearningRate: + base_lr: 0.001 + schedulers: + - !PiecewiseDecay + gamma: 0.1 + milestones: [16, 22] + - !LinearWarmup + start_factor: 0.1 + steps: 1000 + + +worker_num: 2 +TrainReader: + sample_transforms: + - Decode: {} + - RandomResize: {target_size: [[640, 1333], [672, 1333], [704, 1333], [736, 1333], [768, 1333], [800, 1333]], interp: 2, keep_ratio: True} + - RandomFlip: {prob: 0.5} + - NormalizeImage: {is_scale: true, mean: [0.485,0.456,0.406], std: [0.229, 0.224,0.225]} + - Permute: {} + batch_transforms: + - PadBatch: {pad_to_stride: -1} + batch_size: 4 + shuffle: true + drop_last: true + collate_batch: false + + +EvalReader: + sample_transforms: + - Decode: {} + - Resize: {interp: 2, target_size: [800, 1333], keep_ratio: True} + - NormalizeImage: {is_scale: true, mean: [0.485,0.456,0.406], std: [0.229, 0.224,0.225]} + - Permute: {} + batch_transforms: + - PadBatch: {pad_to_stride: -1} + batch_size: 1 + shuffle: false + drop_last: false + + +TestReader: + sample_transforms: + - Decode: {} + - Resize: {interp: 2, target_size: [800, 1333], keep_ratio: True} + - NormalizeImage: {is_scale: true, mean: [0.485,0.456,0.406], std: [0.229, 0.224,0.225]} + - Permute: {} + batch_transforms: + - PadBatch: {pad_to_stride: -1} + batch_size: 1 + shuffle: false + drop_last: false + +use_gpu: true +use_xpu: false +log_iter: 100 +save_dir: output +snapshot_epoch: 2 +print_flops: false + + +OptimizerBuilder: + optimizer: + momentum: 0.9 + type: Momentum + regularizer: + factor: 0.0001 + type: L2 + + +architecture: FasterRCNN + +FasterRCNN: + backbone: ResNet + neck: FPN + rpn_head: RPNHead + bbox_head: BBoxHead + # post process + bbox_post_process: BBoxPostProcess + + +ResNet: + # index 0 stands for res2 + depth: 101 + variant: d + norm_type: bn + freeze_at: 0 + return_idx: [0,1,2,3] + num_stages: 4 + lr_mult_list: [0.05, 0.05, 0.1, 0.15] + +FPN: + out_channel: 256 + +RPNHead: + anchor_generator: + aspect_ratios: [0.5, 1.0, 2.0] + anchor_sizes: [[32], [64], [128], [256], [512]] + strides: [4, 8, 16, 32, 64] + rpn_target_assign: + batch_size_per_im: 256 + fg_fraction: 0.5 + negative_overlap: 0.3 + positive_overlap: 0.7 + use_random: True + train_proposal: + min_size: 0.0 + 
nms_thresh: 0.7 + pre_nms_top_n: 2000 + post_nms_top_n: 1000 + topk_after_collect: True + test_proposal: + min_size: 0.0 + nms_thresh: 0.7 + pre_nms_top_n: 1000 + post_nms_top_n: 1000 + + +BBoxHead: + head: TwoFCHead + roi_extractor: + resolution: 7 + sampling_ratio: 0 + aligned: True + bbox_assigner: BBoxAssigner + +BBoxAssigner: + batch_size_per_im: 512 + bg_thresh: 0.5 + fg_thresh: 0.5 + fg_fraction: 0.25 + use_random: True + +TwoFCHead: + out_channel: 1024 + +BBoxPostProcess: + decode: RCNNBox + nms: + name: MultiClassNMS + keep_top_k: 100 + score_threshold: 0.05 + nms_threshold: 0.5 diff --git a/configs/smrt/rcnn/faster_rcnn_r101_vd_fpn_ssld_2x_800_renche.yml b/configs/smrt/rcnn/faster_rcnn_r101_vd_fpn_ssld_2x_800_renche.yml new file mode 100644 index 0000000000000000000000000000000000000000..bf881d55a0808df85739784270373e1ada4d9f3a --- /dev/null +++ b/configs/smrt/rcnn/faster_rcnn_r101_vd_fpn_ssld_2x_800_renche.yml @@ -0,0 +1,167 @@ +weights: output/faster_rcnn_r101_vd_fpn_ssld_2x_800_renche/model_final +pretrain_weights: https://paddledet.bj.bcebos.com/models/faster_rcnn_r101_vd_fpn_ssld_2x_coco.pdparams + +metric: COCO +num_classes: 22 + +TrainDataset: + !COCODataSet + image_dir: train_images + anno_path: train.json + dataset_dir: dataset/renche + data_fields: ['image', 'gt_bbox', 'gt_class', 'is_crowd'] + +EvalDataset: + !COCODataSet + image_dir: train_images + anno_path: test.json + dataset_dir: dataset/renche + +TestDataset: + !ImageFolder + anno_path: test.json + dataset_dir: dataset/renche + +epoch: 24 +LearningRate: + base_lr: 0.001 + schedulers: + - !PiecewiseDecay + gamma: 0.1 + milestones: [16, 22] + - !LinearWarmup + start_factor: 0.1 + steps: 1000 + + +worker_num: 2 +TrainReader: + sample_transforms: + - Decode: {} + - RandomResize: {target_size: [[640, 1333], [672, 1333], [704, 1333], [736, 1333], [768, 1333], [800, 1333]], interp: 2, keep_ratio: True} + - RandomFlip: {prob: 0.5} + - NormalizeImage: {is_scale: true, mean: [0.485,0.456,0.406], std: [0.229, 0.224,0.225]} + - Permute: {} + batch_transforms: + - PadBatch: {pad_to_stride: -1} + batch_size: 4 + shuffle: true + drop_last: true + collate_batch: false + + +EvalReader: + sample_transforms: + - Decode: {} + - Resize: {interp: 2, target_size: [800, 1333], keep_ratio: True} + - NormalizeImage: {is_scale: true, mean: [0.485,0.456,0.406], std: [0.229, 0.224,0.225]} + - Permute: {} + batch_transforms: + - PadBatch: {pad_to_stride: -1} + batch_size: 1 + shuffle: false + drop_last: false + + +TestReader: + sample_transforms: + - Decode: {} + - Resize: {interp: 2, target_size: [800, 1333], keep_ratio: True} + - NormalizeImage: {is_scale: true, mean: [0.485,0.456,0.406], std: [0.229, 0.224,0.225]} + - Permute: {} + batch_transforms: + - PadBatch: {pad_to_stride: -1} + batch_size: 1 + shuffle: false + drop_last: false + +use_gpu: true +use_xpu: false +log_iter: 100 +save_dir: output +snapshot_epoch: 2 +print_flops: false + + +OptimizerBuilder: + optimizer: + momentum: 0.9 + type: Momentum + regularizer: + factor: 0.0001 + type: L2 + + +architecture: FasterRCNN + +FasterRCNN: + backbone: ResNet + neck: FPN + rpn_head: RPNHead + bbox_head: BBoxHead + # post process + bbox_post_process: BBoxPostProcess + + +ResNet: + # index 0 stands for res2 + depth: 101 + variant: d + norm_type: bn + freeze_at: 0 + return_idx: [0,1,2,3] + num_stages: 4 + lr_mult_list: [0.05, 0.05, 0.1, 0.15] + +FPN: + out_channel: 256 + +RPNHead: + anchor_generator: + aspect_ratios: [0.5, 1.0, 2.0] + anchor_sizes: [[32], [64], [128], [256], [512]] + strides: 
[4, 8, 16, 32, 64] + rpn_target_assign: + batch_size_per_im: 256 + fg_fraction: 0.5 + negative_overlap: 0.3 + positive_overlap: 0.7 + use_random: True + train_proposal: + min_size: 0.0 + nms_thresh: 0.7 + pre_nms_top_n: 2000 + post_nms_top_n: 1000 + topk_after_collect: True + test_proposal: + min_size: 0.0 + nms_thresh: 0.7 + pre_nms_top_n: 1000 + post_nms_top_n: 1000 + + +BBoxHead: + head: TwoFCHead + roi_extractor: + resolution: 7 + sampling_ratio: 0 + aligned: True + bbox_assigner: BBoxAssigner + +BBoxAssigner: + batch_size_per_im: 512 + bg_thresh: 0.5 + fg_thresh: 0.5 + fg_fraction: 0.25 + use_random: True + +TwoFCHead: + out_channel: 1024 + +BBoxPostProcess: + decode: RCNNBox + nms: + name: MultiClassNMS + keep_top_k: 100 + score_threshold: 0.05 + nms_threshold: 0.5 diff --git a/configs/smrt/rcnn/faster_rcnn_r50_vd_fpn_ssld_2x_1500_battery.yml b/configs/smrt/rcnn/faster_rcnn_r50_vd_fpn_ssld_2x_1500_battery.yml new file mode 100644 index 0000000000000000000000000000000000000000..688ea9bfdf6160715343d18c5b9ea83a27b6bc8e --- /dev/null +++ b/configs/smrt/rcnn/faster_rcnn_r50_vd_fpn_ssld_2x_1500_battery.yml @@ -0,0 +1,166 @@ +weights: output/faster_rcnn_r50_vd_fpn_ssld_2x_1500_battery/model_final +pretrain_weights: https://paddledet.bj.bcebos.com/models/faster_rcnn_r50_vd_fpn_ssld_2x_coco.pdparams + +metric: COCO +num_classes: 45 + +TrainDataset: + !COCODataSet + image_dir: images + anno_path: annotations/train.json + dataset_dir: dataset/battery_mini + data_fields: ['image', 'gt_bbox', 'gt_class', 'is_crowd'] + +EvalDataset: + !COCODataSet + image_dir: images + anno_path: annotations/test.json + dataset_dir: dataset/battery_mini + +TestDataset: + !ImageFolder + anno_path: annotations/test.json + dataset_dir: dataset/battery_mini + +epoch: 24 +LearningRate: + base_lr: 0.001 + schedulers: + - !PiecewiseDecay + gamma: 0.1 + milestones: [16, 22] + - !LinearWarmup + start_factor: 0.1 + steps: 1000 + +worker_num: 2 +TrainReader: + sample_transforms: + - Decode: {} + - RandomResize: {target_size: [[800, 800], [900, 900], [1000, 1000], [1200, 1200], [1400, 1400], [1500, 1500]], interp: 2, keep_ratio: True} + - RandomFlip: {prob: 0.5} + - NormalizeImage: {is_scale: true, mean: [0.485,0.456,0.406], std: [0.229, 0.224,0.225]} + - Permute: {} + batch_transforms: + - PadBatch: {pad_to_stride: -1} + batch_size: 4 + shuffle: true + drop_last: true + collate_batch: false + + +EvalReader: + sample_transforms: + - Decode: {} + - Resize: {interp: 2, target_size: [1500, 1500], keep_ratio: True} + - NormalizeImage: {is_scale: true, mean: [0.485,0.456,0.406], std: [0.229, 0.224,0.225]} + - Permute: {} + batch_transforms: + - PadBatch: {pad_to_stride: -1} + batch_size: 1 + shuffle: false + drop_last: false + + +TestReader: + sample_transforms: + - Decode: {} + - Resize: {interp: 2, target_size: [1500, 1500], keep_ratio: True} + - NormalizeImage: {is_scale: true, mean: [0.485,0.456,0.406], std: [0.229, 0.224,0.225]} + - Permute: {} + batch_transforms: + - PadBatch: {pad_to_stride: -1} + batch_size: 1 + shuffle: false + drop_last: false + +use_gpu: true +use_xpu: false +log_iter: 100 +save_dir: output +snapshot_epoch: 2 +print_flops: false + + +OptimizerBuilder: + optimizer: + momentum: 0.9 + type: Momentum + regularizer: + factor: 0.0001 + type: L2 + + +architecture: FasterRCNN + +FasterRCNN: + backbone: ResNet + neck: FPN + rpn_head: RPNHead + bbox_head: BBoxHead + # post process + bbox_post_process: BBoxPostProcess + + +ResNet: + # index 0 stands for res2 + depth: 50 + variant: d + norm_type: bn + freeze_at: 0 
+ return_idx: [0,1,2,3] + num_stages: 4 + lr_mult_list: [0.05, 0.05, 0.1, 0.15] + +FPN: + out_channel: 256 + +RPNHead: + anchor_generator: + aspect_ratios: [0.5, 1.0, 2.0] + anchor_sizes: [[32], [64], [128], [256], [512]] + strides: [4, 8, 16, 32, 64] + rpn_target_assign: + batch_size_per_im: 256 + fg_fraction: 0.5 + negative_overlap: 0.3 + positive_overlap: 0.7 + use_random: True + train_proposal: + min_size: 0.0 + nms_thresh: 0.7 + pre_nms_top_n: 2000 + post_nms_top_n: 1000 + topk_after_collect: True + test_proposal: + min_size: 0.0 + nms_thresh: 0.7 + pre_nms_top_n: 1000 + post_nms_top_n: 1000 + + +BBoxHead: + head: TwoFCHead + roi_extractor: + resolution: 7 + sampling_ratio: 0 + aligned: True + bbox_assigner: BBoxAssigner + +BBoxAssigner: + batch_size_per_im: 512 + bg_thresh: 0.5 + fg_thresh: 0.5 + fg_fraction: 0.25 + use_random: True + +TwoFCHead: + out_channel: 1024 + +BBoxPostProcess: + decode: RCNNBox + nms: + name: MultiClassNMS + keep_top_k: 100 + score_threshold: 0.05 + nms_threshold: 0.5 diff --git a/configs/smrt/rcnn/faster_rcnn_r50_vd_fpn_ssld_2x_1500_lvjian1.yml b/configs/smrt/rcnn/faster_rcnn_r50_vd_fpn_ssld_2x_1500_lvjian1.yml new file mode 100644 index 0000000000000000000000000000000000000000..4b7d8e7d85a3cf61aadf2bfb276f1d325a712808 --- /dev/null +++ b/configs/smrt/rcnn/faster_rcnn_r50_vd_fpn_ssld_2x_1500_lvjian1.yml @@ -0,0 +1,166 @@ +weights: output/faster_rcnn_r50_vd_fpn_ssld_2x_1500_lvjian1/model_final +pretrain_weights: https://paddledet.bj.bcebos.com/models/faster_rcnn_r50_vd_fpn_ssld_2x_coco.pdparams + +metric: COCO +num_classes: 5 + +TrainDataset: + !COCODataSet + image_dir: images + anno_path: train.json + dataset_dir: dataset/slice_lvjian1_data/train/ + data_fields: ['image', 'gt_bbox', 'gt_class', 'is_crowd'] + +EvalDataset: + !COCODataSet + image_dir: images + anno_path: val.json + dataset_dir: dataset/slice_lvjian1_data/eval + +TestDataset: + !ImageFolder + anno_path: val.json + dataset_dir: dataset/slice_lvjian1_data/eval + +epoch: 24 +LearningRate: + base_lr: 0.001 + schedulers: + - !PiecewiseDecay + gamma: 0.1 + milestones: [16, 22] + - !LinearWarmup + start_factor: 0.1 + steps: 1000 + +worker_num: 2 +TrainReader: + sample_transforms: + - Decode: {} + - RandomResize: {target_size: [[800, 800], [900, 900], [1000, 1000], [1200, 1200], [1400, 1400], [1500, 1500]], interp: 2, keep_ratio: True} + - RandomFlip: {prob: 0.5} + - NormalizeImage: {is_scale: true, mean: [0.485,0.456,0.406], std: [0.229, 0.224,0.225]} + - Permute: {} + batch_transforms: + - PadBatch: {pad_to_stride: -1} + batch_size: 4 + shuffle: true + drop_last: true + collate_batch: false + + +EvalReader: + sample_transforms: + - Decode: {} + - Resize: {interp: 2, target_size: [1500, 1500], keep_ratio: True} + - NormalizeImage: {is_scale: true, mean: [0.485,0.456,0.406], std: [0.229, 0.224,0.225]} + - Permute: {} + batch_transforms: + - PadBatch: {pad_to_stride: -1} + batch_size: 1 + shuffle: false + drop_last: false + + +TestReader: + sample_transforms: + - Decode: {} + - Resize: {interp: 2, target_size: [1500, 1500], keep_ratio: True} + - NormalizeImage: {is_scale: true, mean: [0.485,0.456,0.406], std: [0.229, 0.224,0.225]} + - Permute: {} + batch_transforms: + - PadBatch: {pad_to_stride: -1} + batch_size: 1 + shuffle: false + drop_last: false + +use_gpu: true +use_xpu: false +log_iter: 100 +save_dir: output +snapshot_epoch: 2 +print_flops: false + + +OptimizerBuilder: + optimizer: + momentum: 0.9 + type: Momentum + regularizer: + factor: 0.0001 + type: L2 + + +architecture: FasterRCNN + 
+FasterRCNN: + backbone: ResNet + neck: FPN + rpn_head: RPNHead + bbox_head: BBoxHead + # post process + bbox_post_process: BBoxPostProcess + + +ResNet: + # index 0 stands for res2 + depth: 50 + variant: d + norm_type: bn + freeze_at: 0 + return_idx: [0,1,2,3] + num_stages: 4 + lr_mult_list: [0.05, 0.05, 0.1, 0.15] + +FPN: + out_channel: 256 + +RPNHead: + anchor_generator: + aspect_ratios: [0.5, 1.0, 2.0] + anchor_sizes: [[32], [64], [128], [256], [512]] + strides: [4, 8, 16, 32, 64] + rpn_target_assign: + batch_size_per_im: 256 + fg_fraction: 0.5 + negative_overlap: 0.3 + positive_overlap: 0.7 + use_random: True + train_proposal: + min_size: 0.0 + nms_thresh: 0.7 + pre_nms_top_n: 2000 + post_nms_top_n: 1000 + topk_after_collect: True + test_proposal: + min_size: 0.0 + nms_thresh: 0.7 + pre_nms_top_n: 1000 + post_nms_top_n: 1000 + + +BBoxHead: + head: TwoFCHead + roi_extractor: + resolution: 7 + sampling_ratio: 0 + aligned: True + bbox_assigner: BBoxAssigner + +BBoxAssigner: + batch_size_per_im: 512 + bg_thresh: 0.5 + fg_thresh: 0.5 + fg_fraction: 0.25 + use_random: True + +TwoFCHead: + out_channel: 1024 + +BBoxPostProcess: + decode: RCNNBox + nms: + name: MultiClassNMS + keep_top_k: 100 + score_threshold: 0.05 + nms_threshold: 0.5 diff --git a/configs/smrt/rcnn/faster_rcnn_r50_vd_fpn_ssld_2x_1500_renche.yml b/configs/smrt/rcnn/faster_rcnn_r50_vd_fpn_ssld_2x_1500_renche.yml new file mode 100644 index 0000000000000000000000000000000000000000..39eca1f8ee87f21026ee483dc6c69e6f30ac9bf7 --- /dev/null +++ b/configs/smrt/rcnn/faster_rcnn_r50_vd_fpn_ssld_2x_1500_renche.yml @@ -0,0 +1,166 @@ +weights: output/faster_rcnn_r50_vd_fpn_ssld_2x_1500_renche/model_final +pretrain_weights: https://paddledet.bj.bcebos.com/models/faster_rcnn_r50_vd_fpn_ssld_2x_coco.pdparams + +metric: COCO +num_classes: 22 + +TrainDataset: + !COCODataSet + image_dir: train_images + anno_path: train.json + dataset_dir: dataset/renche + data_fields: ['image', 'gt_bbox', 'gt_class', 'is_crowd'] + +EvalDataset: + !COCODataSet + image_dir: train_images + anno_path: test.json + dataset_dir: dataset/renche + +TestDataset: + !ImageFolder + anno_path: test.json + dataset_dir: dataset/renche + +epoch: 24 +LearningRate: + base_lr: 0.001 + schedulers: + - !PiecewiseDecay + gamma: 0.1 + milestones: [16, 22] + - !LinearWarmup + start_factor: 0.1 + steps: 1000 + +worker_num: 2 +TrainReader: + sample_transforms: + - Decode: {} + - RandomResize: {target_size: [[800, 800], [900, 900], [1000, 1000], [1200, 1200], [1400, 1400], [1500, 1500]], interp: 2, keep_ratio: True} + - RandomFlip: {prob: 0.5} + - NormalizeImage: {is_scale: true, mean: [0.485,0.456,0.406], std: [0.229, 0.224,0.225]} + - Permute: {} + batch_transforms: + - PadBatch: {pad_to_stride: -1} + batch_size: 4 + shuffle: true + drop_last: true + collate_batch: false + + +EvalReader: + sample_transforms: + - Decode: {} + - Resize: {interp: 2, target_size: [1500, 1500], keep_ratio: True} + - NormalizeImage: {is_scale: true, mean: [0.485,0.456,0.406], std: [0.229, 0.224,0.225]} + - Permute: {} + batch_transforms: + - PadBatch: {pad_to_stride: -1} + batch_size: 1 + shuffle: false + drop_last: false + + +TestReader: + sample_transforms: + - Decode: {} + - Resize: {interp: 2, target_size: [1500, 1500], keep_ratio: True} + - NormalizeImage: {is_scale: true, mean: [0.485,0.456,0.406], std: [0.229, 0.224,0.225]} + - Permute: {} + batch_transforms: + - PadBatch: {pad_to_stride: -1} + batch_size: 1 + shuffle: false + drop_last: false + +use_gpu: true +use_xpu: false +log_iter: 100 +save_dir: 
output +snapshot_epoch: 2 +print_flops: false + + +OptimizerBuilder: + optimizer: + momentum: 0.9 + type: Momentum + regularizer: + factor: 0.0001 + type: L2 + + +architecture: FasterRCNN + +FasterRCNN: + backbone: ResNet + neck: FPN + rpn_head: RPNHead + bbox_head: BBoxHead + # post process + bbox_post_process: BBoxPostProcess + + +ResNet: + # index 0 stands for res2 + depth: 50 + variant: d + norm_type: bn + freeze_at: 0 + return_idx: [0,1,2,3] + num_stages: 4 + lr_mult_list: [0.05, 0.05, 0.1, 0.15] + +FPN: + out_channel: 256 + +RPNHead: + anchor_generator: + aspect_ratios: [0.5, 1.0, 2.0] + anchor_sizes: [[32], [64], [128], [256], [512]] + strides: [4, 8, 16, 32, 64] + rpn_target_assign: + batch_size_per_im: 256 + fg_fraction: 0.5 + negative_overlap: 0.3 + positive_overlap: 0.7 + use_random: True + train_proposal: + min_size: 0.0 + nms_thresh: 0.7 + pre_nms_top_n: 2000 + post_nms_top_n: 1000 + topk_after_collect: True + test_proposal: + min_size: 0.0 + nms_thresh: 0.7 + pre_nms_top_n: 1000 + post_nms_top_n: 1000 + + +BBoxHead: + head: TwoFCHead + roi_extractor: + resolution: 7 + sampling_ratio: 0 + aligned: True + bbox_assigner: BBoxAssigner + +BBoxAssigner: + batch_size_per_im: 512 + bg_thresh: 0.5 + fg_thresh: 0.5 + fg_fraction: 0.25 + use_random: True + +TwoFCHead: + out_channel: 1024 + +BBoxPostProcess: + decode: RCNNBox + nms: + name: MultiClassNMS + keep_top_k: 100 + score_threshold: 0.05 + nms_threshold: 0.5 diff --git a/configs/smrt/rcnn/faster_rcnn_r50_vd_fpn_ssld_2x_800_battery.yml b/configs/smrt/rcnn/faster_rcnn_r50_vd_fpn_ssld_2x_800_battery.yml new file mode 100644 index 0000000000000000000000000000000000000000..7a982c06df9f32675c3de251f96e0b6477ea0943 --- /dev/null +++ b/configs/smrt/rcnn/faster_rcnn_r50_vd_fpn_ssld_2x_800_battery.yml @@ -0,0 +1,167 @@ +weights: output/faster_rcnn_r50_vd_fpn_ssld_2x_800_battery/model_final +pretrain_weights: https://paddledet.bj.bcebos.com/models/faster_rcnn_r50_vd_fpn_ssld_2x_coco.pdparams + +metric: COCO +num_classes: 45 + +TrainDataset: + !COCODataSet + image_dir: images + anno_path: annotations/train.json + dataset_dir: dataset/battery_mini + data_fields: ['image', 'gt_bbox', 'gt_class', 'is_crowd'] + +EvalDataset: + !COCODataSet + image_dir: images + anno_path: annotations/test.json + dataset_dir: dataset/battery_mini + +TestDataset: + !ImageFolder + anno_path: annotations/test.json + dataset_dir: dataset/battery_mini + +epoch: 24 +LearningRate: + base_lr: 0.001 + schedulers: + - !PiecewiseDecay + gamma: 0.1 + milestones: [16, 22] + - !LinearWarmup + start_factor: 0.1 + steps: 1000 + + +worker_num: 2 +TrainReader: + sample_transforms: + - Decode: {} + - RandomResize: {target_size: [[640, 1333], [672, 1333], [704, 1333], [736, 1333], [768, 1333], [800, 1333]], interp: 2, keep_ratio: True} + - RandomFlip: {prob: 0.5} + - NormalizeImage: {is_scale: true, mean: [0.485,0.456,0.406], std: [0.229, 0.224,0.225]} + - Permute: {} + batch_transforms: + - PadBatch: {pad_to_stride: -1} + batch_size: 4 + shuffle: true + drop_last: true + collate_batch: false + + +EvalReader: + sample_transforms: + - Decode: {} + - Resize: {interp: 2, target_size: [800, 1333], keep_ratio: True} + - NormalizeImage: {is_scale: true, mean: [0.485,0.456,0.406], std: [0.229, 0.224,0.225]} + - Permute: {} + batch_transforms: + - PadBatch: {pad_to_stride: -1} + batch_size: 1 + shuffle: false + drop_last: false + + +TestReader: + sample_transforms: + - Decode: {} + - Resize: {interp: 2, target_size: [800, 1333], keep_ratio: True} + - NormalizeImage: {is_scale: true, mean: 
[0.485,0.456,0.406], std: [0.229, 0.224,0.225]} + - Permute: {} + batch_transforms: + - PadBatch: {pad_to_stride: -1} + batch_size: 1 + shuffle: false + drop_last: false + +use_gpu: true +use_xpu: false +log_iter: 100 +save_dir: output +snapshot_epoch: 2 +print_flops: false + + +OptimizerBuilder: + optimizer: + momentum: 0.9 + type: Momentum + regularizer: + factor: 0.0001 + type: L2 + + +architecture: FasterRCNN + +FasterRCNN: + backbone: ResNet + neck: FPN + rpn_head: RPNHead + bbox_head: BBoxHead + # post process + bbox_post_process: BBoxPostProcess + + +ResNet: + # index 0 stands for res2 + depth: 50 + variant: d + norm_type: bn + freeze_at: 0 + return_idx: [0,1,2,3] + num_stages: 4 + lr_mult_list: [0.05, 0.05, 0.1, 0.15] + +FPN: + out_channel: 256 + +RPNHead: + anchor_generator: + aspect_ratios: [0.5, 1.0, 2.0] + anchor_sizes: [[32], [64], [128], [256], [512]] + strides: [4, 8, 16, 32, 64] + rpn_target_assign: + batch_size_per_im: 256 + fg_fraction: 0.5 + negative_overlap: 0.3 + positive_overlap: 0.7 + use_random: True + train_proposal: + min_size: 0.0 + nms_thresh: 0.7 + pre_nms_top_n: 2000 + post_nms_top_n: 1000 + topk_after_collect: True + test_proposal: + min_size: 0.0 + nms_thresh: 0.7 + pre_nms_top_n: 1000 + post_nms_top_n: 1000 + + +BBoxHead: + head: TwoFCHead + roi_extractor: + resolution: 7 + sampling_ratio: 0 + aligned: True + bbox_assigner: BBoxAssigner + +BBoxAssigner: + batch_size_per_im: 512 + bg_thresh: 0.5 + fg_thresh: 0.5 + fg_fraction: 0.25 + use_random: True + +TwoFCHead: + out_channel: 1024 + +BBoxPostProcess: + decode: RCNNBox + nms: + name: MultiClassNMS + keep_top_k: 100 + score_threshold: 0.05 + nms_threshold: 0.5 diff --git a/configs/smrt/rcnn/faster_rcnn_r50_vd_fpn_ssld_2x_800_lvjian1.yml b/configs/smrt/rcnn/faster_rcnn_r50_vd_fpn_ssld_2x_800_lvjian1.yml new file mode 100644 index 0000000000000000000000000000000000000000..39020c77e8ef1d47e9b3df08417f7f4c6a765249 --- /dev/null +++ b/configs/smrt/rcnn/faster_rcnn_r50_vd_fpn_ssld_2x_800_lvjian1.yml @@ -0,0 +1,167 @@ +weights: output/faster_rcnn_r50_vd_fpn_ssld_2x_800_lvjian1/model_final +pretrain_weights: https://paddledet.bj.bcebos.com/models/faster_rcnn_r50_vd_fpn_ssld_2x_coco.pdparams + +metric: COCO +num_classes: 5 + +TrainDataset: + !COCODataSet + image_dir: images + anno_path: train.json + dataset_dir: dataset/slice_lvjian1_data/train/ + data_fields: ['image', 'gt_bbox', 'gt_class', 'is_crowd'] + +EvalDataset: + !COCODataSet + image_dir: images + anno_path: val.json + dataset_dir: dataset/slice_lvjian1_data/eval + +TestDataset: + !ImageFolder + anno_path: val.json + dataset_dir: dataset/slice_lvjian1_data/eval + +epoch: 24 +LearningRate: + base_lr: 0.001 + schedulers: + - !PiecewiseDecay + gamma: 0.1 + milestones: [16, 22] + - !LinearWarmup + start_factor: 0.1 + steps: 1000 + + +worker_num: 2 +TrainReader: + sample_transforms: + - Decode: {} + - RandomResize: {target_size: [[640, 1333], [672, 1333], [704, 1333], [736, 1333], [768, 1333], [800, 1333]], interp: 2, keep_ratio: True} + - RandomFlip: {prob: 0.5} + - NormalizeImage: {is_scale: true, mean: [0.485,0.456,0.406], std: [0.229, 0.224,0.225]} + - Permute: {} + batch_transforms: + - PadBatch: {pad_to_stride: -1} + batch_size: 4 + shuffle: true + drop_last: true + collate_batch: false + + +EvalReader: + sample_transforms: + - Decode: {} + - Resize: {interp: 2, target_size: [800, 1333], keep_ratio: True} + - NormalizeImage: {is_scale: true, mean: [0.485,0.456,0.406], std: [0.229, 0.224,0.225]} + - Permute: {} + batch_transforms: + - PadBatch: 
{pad_to_stride: -1} + batch_size: 1 + shuffle: false + drop_last: false + + +TestReader: + sample_transforms: + - Decode: {} + - Resize: {interp: 2, target_size: [800, 1333], keep_ratio: True} + - NormalizeImage: {is_scale: true, mean: [0.485,0.456,0.406], std: [0.229, 0.224,0.225]} + - Permute: {} + batch_transforms: + - PadBatch: {pad_to_stride: -1} + batch_size: 1 + shuffle: false + drop_last: false + +use_gpu: true +use_xpu: false +log_iter: 100 +save_dir: output +snapshot_epoch: 2 +print_flops: false + + +OptimizerBuilder: + optimizer: + momentum: 0.9 + type: Momentum + regularizer: + factor: 0.0001 + type: L2 + + +architecture: FasterRCNN + +FasterRCNN: + backbone: ResNet + neck: FPN + rpn_head: RPNHead + bbox_head: BBoxHead + # post process + bbox_post_process: BBoxPostProcess + + +ResNet: + # index 0 stands for res2 + depth: 50 + variant: d + norm_type: bn + freeze_at: 0 + return_idx: [0,1,2,3] + num_stages: 4 + lr_mult_list: [0.05, 0.05, 0.1, 0.15] + +FPN: + out_channel: 256 + +RPNHead: + anchor_generator: + aspect_ratios: [0.5, 1.0, 2.0] + anchor_sizes: [[32], [64], [128], [256], [512]] + strides: [4, 8, 16, 32, 64] + rpn_target_assign: + batch_size_per_im: 256 + fg_fraction: 0.5 + negative_overlap: 0.3 + positive_overlap: 0.7 + use_random: True + train_proposal: + min_size: 0.0 + nms_thresh: 0.7 + pre_nms_top_n: 2000 + post_nms_top_n: 1000 + topk_after_collect: True + test_proposal: + min_size: 0.0 + nms_thresh: 0.7 + pre_nms_top_n: 1000 + post_nms_top_n: 1000 + + +BBoxHead: + head: TwoFCHead + roi_extractor: + resolution: 7 + sampling_ratio: 0 + aligned: True + bbox_assigner: BBoxAssigner + +BBoxAssigner: + batch_size_per_im: 512 + bg_thresh: 0.5 + fg_thresh: 0.5 + fg_fraction: 0.25 + use_random: True + +TwoFCHead: + out_channel: 1024 + +BBoxPostProcess: + decode: RCNNBox + nms: + name: MultiClassNMS + keep_top_k: 100 + score_threshold: 0.05 + nms_threshold: 0.5 diff --git a/configs/smrt/rcnn/faster_rcnn_r50_vd_fpn_ssld_2x_800_renche.yml b/configs/smrt/rcnn/faster_rcnn_r50_vd_fpn_ssld_2x_800_renche.yml new file mode 100644 index 0000000000000000000000000000000000000000..e27315c3572f3c89f1f98fc250e50a3d23661250 --- /dev/null +++ b/configs/smrt/rcnn/faster_rcnn_r50_vd_fpn_ssld_2x_800_renche.yml @@ -0,0 +1,167 @@ +weights: output/faster_rcnn_r50_vd_fpn_ssld_2x_800_renche/model_final +pretrain_weights: https://paddledet.bj.bcebos.com/models/faster_rcnn_r50_vd_fpn_ssld_2x_coco.pdparams + +metric: COCO +num_classes: 22 + +TrainDataset: + !COCODataSet + image_dir: train_images + anno_path: train.json + dataset_dir: dataset/renche + data_fields: ['image', 'gt_bbox', 'gt_class', 'is_crowd'] + +EvalDataset: + !COCODataSet + image_dir: train_images + anno_path: test.json + dataset_dir: dataset/renche + +TestDataset: + !ImageFolder + anno_path: test.json + dataset_dir: dataset/renche + +epoch: 24 +LearningRate: + base_lr: 0.001 + schedulers: + - !PiecewiseDecay + gamma: 0.1 + milestones: [16, 22] + - !LinearWarmup + start_factor: 0.1 + steps: 1000 + + +worker_num: 2 +TrainReader: + sample_transforms: + - Decode: {} + - RandomResize: {target_size: [[640, 1333], [672, 1333], [704, 1333], [736, 1333], [768, 1333], [800, 1333]], interp: 2, keep_ratio: True} + - RandomFlip: {prob: 0.5} + - NormalizeImage: {is_scale: true, mean: [0.485,0.456,0.406], std: [0.229, 0.224,0.225]} + - Permute: {} + batch_transforms: + - PadBatch: {pad_to_stride: -1} + batch_size: 4 + shuffle: true + drop_last: true + collate_batch: false + + +EvalReader: + sample_transforms: + - Decode: {} + - Resize: {interp: 2, 
target_size: [800, 1333], keep_ratio: True}
+  - NormalizeImage: {is_scale: true, mean: [0.485,0.456,0.406], std: [0.229, 0.224,0.225]}
+  - Permute: {}
+  batch_transforms:
+  - PadBatch: {pad_to_stride: -1}
+  batch_size: 1
+  shuffle: false
+  drop_last: false
+
+
+TestReader:
+  sample_transforms:
+  - Decode: {}
+  - Resize: {interp: 2, target_size: [800, 1333], keep_ratio: True}
+  - NormalizeImage: {is_scale: true, mean: [0.485,0.456,0.406], std: [0.229, 0.224,0.225]}
+  - Permute: {}
+  batch_transforms:
+  - PadBatch: {pad_to_stride: -1}
+  batch_size: 1
+  shuffle: false
+  drop_last: false
+
+use_gpu: true
+use_xpu: false
+log_iter: 100
+save_dir: output
+snapshot_epoch: 2
+print_flops: false
+
+
+OptimizerBuilder:
+  optimizer:
+    momentum: 0.9
+    type: Momentum
+  regularizer:
+    factor: 0.0001
+    type: L2
+
+
+architecture: FasterRCNN
+
+FasterRCNN:
+  backbone: ResNet
+  neck: FPN
+  rpn_head: RPNHead
+  bbox_head: BBoxHead
+  # post process
+  bbox_post_process: BBoxPostProcess
+
+
+ResNet:
+  # index 0 stands for res2
+  depth: 50
+  variant: d
+  norm_type: bn
+  freeze_at: 0
+  return_idx: [0,1,2,3]
+  num_stages: 4
+  lr_mult_list: [0.05, 0.05, 0.1, 0.15]
+
+FPN:
+  out_channel: 256
+
+RPNHead:
+  anchor_generator:
+    aspect_ratios: [0.5, 1.0, 2.0]
+    anchor_sizes: [[32], [64], [128], [256], [512]]
+    strides: [4, 8, 16, 32, 64]
+  rpn_target_assign:
+    batch_size_per_im: 256
+    fg_fraction: 0.5
+    negative_overlap: 0.3
+    positive_overlap: 0.7
+    use_random: True
+  train_proposal:
+    min_size: 0.0
+    nms_thresh: 0.7
+    pre_nms_top_n: 2000
+    post_nms_top_n: 1000
+    topk_after_collect: True
+  test_proposal:
+    min_size: 0.0
+    nms_thresh: 0.7
+    pre_nms_top_n: 1000
+    post_nms_top_n: 1000
+
+
+BBoxHead:
+  head: TwoFCHead
+  roi_extractor:
+    resolution: 7
+    sampling_ratio: 0
+    aligned: True
+  bbox_assigner: BBoxAssigner
+
+BBoxAssigner:
+  batch_size_per_im: 512
+  bg_thresh: 0.5
+  fg_thresh: 0.5
+  fg_fraction: 0.25
+  use_random: True
+
+TwoFCHead:
+  out_channel: 1024
+
+BBoxPostProcess:
+  decode: RCNNBox
+  nms:
+    name: MultiClassNMS
+    keep_top_k: 100
+    score_threshold: 0.05
+    nms_threshold: 0.5
diff --git a/configs/visdrone/README.md b/configs/visdrone/README.md
new file mode 100644
index 0000000000000000000000000000000000000000..8fb78190c8fcb73163bc9674d42a4b7ab2673e85
--- /dev/null
+++ b/configs/visdrone/README.md
@@ -0,0 +1,50 @@
+# VisDrone-DET 检测模型
+
+PaddleDetection团队提供了针对VisDrone-DET小目标航拍场景的基于PP-YOLOE的检测模型,用户可以下载模型进行使用。整理后的COCO格式VisDrone-DET数据集[下载链接](https://bj.bcebos.com/v1/paddledet/data/smalldet/visdrone.zip),检测其中的10类,包括 `pedestrian(1), people(2), bicycle(3), car(4), van(5), truck(6), tricycle(7), awning-tricycle(8), bus(9), motor(10)`,原始数据集[下载链接](https://github.com/VisDrone/VisDrone-Dataset)。
+
+**注意:**
+- VisDrone-DET数据集包括train集6471张,val集548张,test_dev集1610张,test-challenge集1580张(未开放检测框标注),前三者均有开放检测框标注。
+- 模型均只使用train集训练,在val集和test_dev集上验证精度,test_dev集图片数较多,精度参考性较高。
+
+
+## 原图训练:
+
+| 模型 | COCOAPI mAP<sup>val<br>0.5:0.95</sup> | COCOAPI mAP<sup>val<br>0.5</sup> | COCOAPI mAP<sup>test_dev<br>0.5:0.95</sup> | COCOAPI mAP<sup>test_dev<br>0.5</sup> | MatlabAPI mAP<sup>test_dev<br>0.5:0.95</sup> | MatlabAPI mAP<sup>test_dev<br>0.5</sup> | 下载 | 配置文件 |
+|:---------|:------:|:------:| :----: | :------:| :------: | :------:| :----: | :------:|
+|PP-YOLOE-s| 23.5 | 39.9 | 19.4 | 33.6 | 23.68 | 40.66 | [下载链接](https://paddledet.bj.bcebos.com/models/ppyoloe_crn_s_80e_visdrone.pdparams) | [配置文件](./ppyoloe_crn_s_80e_visdrone.yml) |
+|PP-YOLOE-P2-Alpha-s| 24.4 | 41.6 | 20.1 | 34.7 | 24.55 | 42.19 | [下载链接](https://paddledet.bj.bcebos.com/models/ppyoloe_crn_s_p2_alpha_80e_visdrone.pdparams) | [配置文件](./ppyoloe_crn_s_p2_alpha_80e_visdrone.yml) |
+|PP-YOLOE-l| 29.2 | 47.3 | 23.5 | 39.1 | 28.00 | 46.20 | [下载链接](https://paddledet.bj.bcebos.com/models/ppyoloe_crn_l_80e_visdrone.pdparams) | [配置文件](./ppyoloe_crn_l_80e_visdrone.yml) |
+|PP-YOLOE-P2-Alpha-l| 30.1 | 48.9 | 24.3 | 40.8 | 28.47 | 48.16 | [下载链接](https://paddledet.bj.bcebos.com/models/ppyoloe_crn_l_p2_alpha_80e_visdrone.pdparams) | [配置文件](./ppyoloe_crn_l_p2_alpha_80e_visdrone.yml) |
+|PP-YOLOE-Alpha-largesize-l| 41.9 | 65.0 | 32.3 | 53.0 | 37.13 | 61.15 | [下载链接](https://paddledet.bj.bcebos.com/models/ppyoloe_crn_l_alpha_largesize_80e_visdrone.pdparams) | [配置文件](./ppyoloe_crn_l_alpha_largesize_80e_visdrone.yml) |
+|PP-YOLOE-P2-Alpha-largesize-l| 41.3 | 64.5 | 32.4 | 53.1 | 37.49 | 51.54 | [下载链接](https://paddledet.bj.bcebos.com/models/ppyoloe_crn_l_p2_alpha_largesize_80e_visdrone.pdparams) | [配置文件](./ppyoloe_crn_l_p2_alpha_largesize_80e_visdrone.yml) |
+
+## 切图训练:
+
+| 模型 | COCOAPI mAP<sup>val<br>0.5:0.95</sup> | COCOAPI mAP<sup>val<br>0.5</sup> | COCOAPI mAP<sup>test_dev<br>0.5:0.95</sup> | COCOAPI mAP<sup>test_dev<br>0.5</sup> | MatlabAPI mAP<sup>test_dev<br>0.5:0.95</sup> | MatlabAPI mAP<sup>test_dev<br>0.5</sup> | 下载 | 配置文件 |
+|:---------|:------:|:------:| :----: | :------:| :------: | :------:| :----: | :------:|
+|PP-YOLOE-l| 29.7 | 48.5 | 23.3 | 39.9 | - | - | [下载链接](https://paddledet.bj.bcebos.com/models/ppyoloe_crn_l_80e_sliced_visdrone_640_025.pdparams) | [配置文件](../smalldet/ppyoloe_crn_l_80e_sliced_visdrone_640_025.yml) |
+
+
+**注意:**
+- PP-YOLOE模型训练过程中使用8 GPUs进行混合精度训练,如果**GPU卡数**或者**batch size**发生了改变,你需要按照公式 **lr<sub>new</sub> = lr<sub>default</sub> * (batch_size<sub>new</sub> * GPU_number<sub>new</sub>) / (batch_size<sub>default</sub> * GPU_number<sub>default</sub>)** 调整学习率。
+- 具体使用教程请参考[ppyoloe](../ppyoloe#getting-start)。
+- P2表示增加P2层(1/4下采样层)的特征,共输出4个PPYOLOEHead。
+- Alpha表示对CSPResNet骨干网络增加一个可学习的权重参数Alpha参与训练。
+- largesize表示使用以1600尺度为基础的多尺度训练和1920尺度预测,相应的训练batch_size也减小,以速度来换取高精度。
+- MatlabAPI测试使用官网评测工具[VisDrone2018-DET-toolkit](https://github.com/VisDrone/VisDrone2018-DET-toolkit)。
+- 切图训练模型的配置文件及训练相关流程请参照[smalldet](../smalldet)。
+
+
+## 引用
+```
+@ARTICLE{9573394,
+  author={Zhu, Pengfei and Wen, Longyin and Du, Dawei and Bian, Xiao and Fan, Heng and Hu, Qinghua and Ling, Haibin},
+  journal={IEEE Transactions on Pattern Analysis and Machine Intelligence},
+  title={Detection and Tracking Meet Drones Challenge},
+  year={2021},
+  volume={},
+  number={},
+  pages={1-1},
+  doi={10.1109/TPAMI.2021.3119563}
+}
+```
diff --git a/configs/visdrone/ppyoloe_crn_l_80e_visdrone.yml b/configs/visdrone/ppyoloe_crn_l_80e_visdrone.yml
new file mode 100644
index 0000000000000000000000000000000000000000..4a51e696ac6684adcaff42d5b26033d01413ca68
--- /dev/null
+++ b/configs/visdrone/ppyoloe_crn_l_80e_visdrone.yml
@@ -0,0 +1,36 @@
+_BASE_: [
+  '../datasets/visdrone_detection.yml',
+  '../runtime.yml',
+  '../ppyoloe/_base_/optimizer_300e.yml',
+  '../ppyoloe/_base_/ppyoloe_crn.yml',
+  '../ppyoloe/_base_/ppyoloe_reader.yml',
+]
+log_iter: 100
+snapshot_epoch: 10
+weights: output/ppyoloe_crn_l_80e_visdrone/model_final
+
+pretrain_weights: https://paddledet.bj.bcebos.com/models/ppyoloe_crn_l_300e_coco.pdparams
+depth_mult: 1.0
+width_mult: 1.0
+
+TrainReader:
+  batch_size: 8
+
+epoch: 80
+LearningRate:
+  base_lr: 0.01
+  schedulers:
+    - !CosineDecay
+      max_epochs: 96
+    - !LinearWarmup
+      start_factor: 0.
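+      # Note: LinearWarmup ramps the learning rate linearly from
+      # start_factor * base_lr (0 here) up to base_lr over the first `epochs`
+      # epochs, after which the CosineDecay above takes over. Per the README
+      # note, base_lr 0.01 assumes the default 8 GPUs x batch_size 8 (total
+      # batch size 64); rescale it linearly if you change either.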
+ epochs: 1 + +PPYOLOEHead: + static_assigner_epoch: -1 + nms: + name: MultiClassNMS + nms_top_k: 10000 + keep_top_k: 500 + score_threshold: 0.01 + nms_threshold: 0.6 diff --git a/configs/visdrone/ppyoloe_crn_l_alpha_largesize_80e_visdrone.yml b/configs/visdrone/ppyoloe_crn_l_alpha_largesize_80e_visdrone.yml new file mode 100644 index 0000000000000000000000000000000000000000..998f0fcb5344eb33574dd24a51f2753fb4dd1831 --- /dev/null +++ b/configs/visdrone/ppyoloe_crn_l_alpha_largesize_80e_visdrone.yml @@ -0,0 +1,55 @@ +_BASE_: [ + 'ppyoloe_crn_l_80e_visdrone.yml', +] +weights: output/ppyoloe_crn_l_alpha_largesize_80e_visdrone/model_final +pretrain_weights: https://paddledet.bj.bcebos.com/models/ppyoloe_crn_l_300e_coco.pdparams + + +CSPResNet: + use_alpha: True + + +LearningRate: + base_lr: 0.0025 + + +worker_num: 2 +eval_height: &eval_height 1920 +eval_width: &eval_width 1920 +eval_size: &eval_size [*eval_height, *eval_width] + +TrainReader: + sample_transforms: + - Decode: {} + - RandomDistort: {} + - RandomExpand: {fill_value: [123.675, 116.28, 103.53]} + - RandomCrop: {} + - RandomFlip: {} + batch_transforms: + - BatchRandomResize: {target_size: [1024, 1088, 1152, 1216, 1280, 1344, 1408, 1472, 1536, 1600, 1664, 1728, 1792, 1856, 1920], random_size: True, random_interp: True, keep_ratio: False} + - NormalizeImage: {mean: [0.485, 0.456, 0.406], std: [0.229, 0.224, 0.225], is_scale: True} + - Permute: {} + - PadGT: {} + batch_size: 2 + shuffle: true + drop_last: true + use_shared_memory: true + collate_batch: true + +EvalReader: + sample_transforms: + - Decode: {} + - Resize: {target_size: *eval_size, keep_ratio: False, interp: 2} + - NormalizeImage: {mean: [0.485, 0.456, 0.406], std: [0.229, 0.224, 0.225], is_scale: True} + - Permute: {} + batch_size: 2 + +TestReader: + inputs_def: + image_shape: [3, *eval_height, *eval_width] + sample_transforms: + - Decode: {} + - Resize: {target_size: *eval_size, keep_ratio: False, interp: 2} + - NormalizeImage: {mean: [0.485, 0.456, 0.406], std: [0.229, 0.224, 0.225], is_scale: True} + - Permute: {} + batch_size: 1 diff --git a/configs/visdrone/ppyoloe_crn_l_p2_alpha_80e_visdrone.yml b/configs/visdrone/ppyoloe_crn_l_p2_alpha_80e_visdrone.yml new file mode 100644 index 0000000000000000000000000000000000000000..718f02903bf4069366910a302e561f68aafc3a62 --- /dev/null +++ b/configs/visdrone/ppyoloe_crn_l_p2_alpha_80e_visdrone.yml @@ -0,0 +1,23 @@ +_BASE_: [ + 'ppyoloe_crn_l_80e_visdrone.yml', +] +weights: output/ppyoloe_crn_l_p2_alpha_80e_visdrone/model_final +pretrain_weights: https://paddledet.bj.bcebos.com/models/ppyoloe_crn_l_300e_coco.pdparams + + +TrainReader: + batch_size: 4 + +LearningRate: + base_lr: 0.005 + + +CSPResNet: + return_idx: [0, 1, 2, 3] + use_alpha: True + +CustomCSPPAN: + out_channels: [768, 384, 192, 64] + +PPYOLOEHead: + fpn_strides: [32, 16, 8, 4] diff --git a/configs/visdrone/ppyoloe_crn_l_p2_alpha_largesize_80e_visdrone.yml b/configs/visdrone/ppyoloe_crn_l_p2_alpha_largesize_80e_visdrone.yml new file mode 100644 index 0000000000000000000000000000000000000000..1cd8dc671dd5f112742126a434d83a9196853a0f --- /dev/null +++ b/configs/visdrone/ppyoloe_crn_l_p2_alpha_largesize_80e_visdrone.yml @@ -0,0 +1,62 @@ +_BASE_: [ + 'ppyoloe_crn_l_80e_visdrone.yml', +] +weights: output/ppyoloe_crn_l_p2_alpha_largesize_80e_visdrone/model_final +pretrain_weights: https://paddledet.bj.bcebos.com/models/ppyoloe_crn_l_300e_coco.pdparams + + +LearningRate: + base_lr: 0.005 + + +CSPResNet: + return_idx: [0, 1, 2, 3] + use_alpha: True + +CustomCSPPAN: + 
out_channels: [768, 384, 192, 64] + +PPYOLOEHead: + fpn_strides: [32, 16, 8, 4] + + +worker_num: 2 +eval_height: &eval_height 1920 +eval_width: &eval_width 1920 +eval_size: &eval_size [*eval_height, *eval_width] + +TrainReader: + sample_transforms: + - Decode: {} + - RandomDistort: {} + - RandomExpand: {fill_value: [123.675, 116.28, 103.53]} + - RandomCrop: {} + - RandomFlip: {} + batch_transforms: + - BatchRandomResize: {target_size: [1024, 1088, 1152, 1216, 1280, 1344, 1408, 1472, 1536, 1600, 1664, 1728, 1792, 1856, 1920, 1984, 2048], random_size: True, random_interp: True, keep_ratio: False} + - NormalizeImage: {mean: [0.485, 0.456, 0.406], std: [0.229, 0.224, 0.225], is_scale: True} + - Permute: {} + - PadGT: {} + batch_size: 1 + shuffle: true + drop_last: true + use_shared_memory: true + collate_batch: true + +EvalReader: + sample_transforms: + - Decode: {} + - Resize: {target_size: *eval_size, keep_ratio: False, interp: 2} + - NormalizeImage: {mean: [0.485, 0.456, 0.406], std: [0.229, 0.224, 0.225], is_scale: True} + - Permute: {} + batch_size: 2 + +TestReader: + inputs_def: + image_shape: [3, *eval_height, *eval_width] + sample_transforms: + - Decode: {} + - Resize: {target_size: *eval_size, keep_ratio: False, interp: 2} + - NormalizeImage: {mean: [0.485, 0.456, 0.406], std: [0.229, 0.224, 0.225], is_scale: True} + - Permute: {} + batch_size: 1 diff --git a/configs/visdrone/ppyoloe_crn_s_80e_visdrone.yml b/configs/visdrone/ppyoloe_crn_s_80e_visdrone.yml new file mode 100644 index 0000000000000000000000000000000000000000..db3d93d628f8754aac3be50060f17a14c4dda04d --- /dev/null +++ b/configs/visdrone/ppyoloe_crn_s_80e_visdrone.yml @@ -0,0 +1,36 @@ +_BASE_: [ + '../datasets/visdrone_detection.yml', + '../runtime.yml', + '../ppyoloe/_base_/optimizer_300e.yml', + '../ppyoloe/_base_/ppyoloe_crn.yml', + '../ppyoloe/_base_/ppyoloe_reader.yml', +] +log_iter: 100 +snapshot_epoch: 10 +weights: output/ppyoloe_crn_s_80e_visdrone/model_final + +pretrain_weights: https://paddledet.bj.bcebos.com/models/ppyoloe_crn_s_300e_coco.pdparams +depth_mult: 0.33 +width_mult: 0.50 + +TrainReader: + batch_size: 8 + +epoch: 80 +LearningRate: + base_lr: 0.01 + schedulers: + - !CosineDecay + max_epochs: 96 + - !LinearWarmup + start_factor: 0. 
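+      # Note (editorial observation, not from the original docs): max_epochs 96
+      # in the CosineDecay above deliberately exceeds the 80 training epochs,
+      # so training stops before the learning rate decays all the way to the
+      # end of the cosine curve.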
+      epochs: 1
+
+PPYOLOEHead:
+  static_assigner_epoch: -1
+  nms:
+    name: MultiClassNMS
+    nms_top_k: 10000
+    keep_top_k: 500
+    score_threshold: 0.01
+    nms_threshold: 0.6
diff --git a/configs/visdrone/ppyoloe_crn_s_p2_alpha_80e_visdrone.yml b/configs/visdrone/ppyoloe_crn_s_p2_alpha_80e_visdrone.yml
new file mode 100644
index 0000000000000000000000000000000000000000..17d6299bb89e6e70dd420b0ec01743ae26c2af8c
--- /dev/null
+++ b/configs/visdrone/ppyoloe_crn_s_p2_alpha_80e_visdrone.yml
@@ -0,0 +1,22 @@
+_BASE_: [
+  'ppyoloe_crn_s_80e_visdrone.yml',
+]
+weights: output/ppyoloe_crn_s_p2_alpha_80e_visdrone/model_final
+
+pretrain_weights: https://paddledet.bj.bcebos.com/models/ppyoloe_crn_l_300e_coco.pdparams
+
+TrainReader:
+  batch_size: 4
+
+LearningRate:
+  base_lr: 0.005
+
+CSPResNet:
+  return_idx: [0, 1, 2, 3]
+  use_alpha: True
+
+CustomCSPPAN:
+  out_channels: [768, 384, 192, 64]
+
+PPYOLOEHead:
+  fpn_strides: [32, 16, 8, 4]
diff --git a/configs/vitdet/README.md b/configs/vitdet/README.md
new file mode 100644
index 0000000000000000000000000000000000000000..9fd9ebdf5af95b4ad05a34c458d9e9760b9ed50b
--- /dev/null
+++ b/configs/vitdet/README.md
@@ -0,0 +1,65 @@
+# Vision Transformer Detection
+
+## Introduction
+
+- [Context Autoencoder for Self-Supervised Representation Learning](https://arxiv.org/abs/2202.03026)
+- [Benchmarking Detection Transfer Learning with Vision Transformers](https://arxiv.org/pdf/2111.11429.pdf)
+
+Object detection is a central downstream task used to
+test if pre-trained network parameters confer benefits, such
+as improved accuracy or training speed. The complexity
+of object detection methods can make this benchmarking
+non-trivial when new architectures, such as Vision Transformer (ViT) models, arrive.
+
+## Model Zoo
+
+| Backbone | Pretrained | Model | Scheduler | Images/GPU | Box AP | Config | Download |
+|:------:|:--------:|:--------------:|:--------------:|:--------------:|:------:|:------:|:--------:|
+| ViT-base | CAE | Cascade RCNN | 1x | 1 | 52.7 | [config](./cascade_rcnn_vit_base_hrfpn_cae_1x_coco.yml) | [model](https://bj.bcebos.com/v1/paddledet/models/cascade_rcnn_vit_base_hrfpn_cae_1x_coco.pdparams) |
+| ViT-large | CAE | Cascade RCNN | 1x | 1 | 55.7 | [config](./cascade_rcnn_vit_large_hrfpn_cae_1x_coco.yml) | [model](https://bj.bcebos.com/v1/paddledet/models/cascade_rcnn_vit_large_hrfpn_cae_1x_coco.pdparams) |
+
+**Notes:**
+- Models are trained on the COCO train2017 dataset and evaluated on val2017; Box AP is `mAP(IoU=0.5:0.95)`.
+- The base model is trained on 8x 32G V100 GPUs, and the large model on 8x 80G A100 GPUs.
+
+
+## Citations
+```
+@article{chen2022context,
+  title={Context autoencoder for self-supervised representation learning},
+  author={Chen, Xiaokang and Ding, Mingyu and Wang, Xiaodi and Xin, Ying and Mo, Shentong and Wang, Yunhao and Han, Shumin and Luo, Ping and Zeng, Gang and Wang, Jingdong},
+  journal={arXiv preprint arXiv:2202.03026},
+  year={2022}
+}
+
+@article{DBLP:journals/corr/abs-2111-11429,
+  author    = {Yanghao Li and
+               Saining Xie and
+               Xinlei Chen and
+               Piotr Doll{\'{a}}r and
+               Kaiming He and
+               Ross B.
Girshick}, + title = {Benchmarking Detection Transfer Learning with Vision Transformers}, + journal = {CoRR}, + volume = {abs/2111.11429}, + year = {2021}, + url = {https://arxiv.org/abs/2111.11429}, + eprinttype = {arXiv}, + eprint = {2111.11429}, + timestamp = {Fri, 26 Nov 2021 13:48:43 +0100}, + biburl = {https://dblp.org/rec/journals/corr/abs-2111-11429.bib}, + bibsource = {dblp computer science bibliography, https://dblp.org} +} + +@article{Cai_2019, + title={Cascade R-CNN: High Quality Object Detection and Instance Segmentation}, + ISSN={1939-3539}, + url={http://dx.doi.org/10.1109/tpami.2019.2956516}, + DOI={10.1109/tpami.2019.2956516}, + journal={IEEE Transactions on Pattern Analysis and Machine Intelligence}, + publisher={Institute of Electrical and Electronics Engineers (IEEE)}, + author={Cai, Zhaowei and Vasconcelos, Nuno}, + year={2019}, + pages={1–1} +} +``` diff --git a/configs/vitdet/_base_/optimizer_base_1x.yml b/configs/vitdet/_base_/optimizer_base_1x.yml new file mode 100644 index 0000000000000000000000000000000000000000..b822b3bf92a6a12facafe4b569a0ebcad3cf1d3b --- /dev/null +++ b/configs/vitdet/_base_/optimizer_base_1x.yml @@ -0,0 +1,22 @@ +epoch: 12 + +LearningRate: + base_lr: 0.0001 + schedulers: + - !PiecewiseDecay + gamma: 0.1 + milestones: [9, 11] + - !LinearWarmup + start_factor: 0.001 + steps: 1000 + +OptimizerBuilder: + optimizer: + type: AdamWDL + betas: [0.9, 0.999] + layer_decay: 0.75 + weight_decay: 0.02 + num_layers: 12 + filter_bias_and_bn: True + skip_decay_names: ['pos_embed', 'cls_token'] + set_param_lr_func: 'layerwise_lr_decay' diff --git a/configs/vitdet/_base_/reader.yml b/configs/vitdet/_base_/reader.yml new file mode 100644 index 0000000000000000000000000000000000000000..1af6175a931a571f1c6726f0f312591c07489d1d --- /dev/null +++ b/configs/vitdet/_base_/reader.yml @@ -0,0 +1,41 @@ +worker_num: 2 +TrainReader: + sample_transforms: + - Decode: {} + - RandomResizeCrop: {resizes: [400, 500, 600], cropsizes: [[384, 600], ], prob: 0.5} + - RandomResize: {target_size: [[480, 1333], [512, 1333], [544, 1333], [576, 1333], [608, 1333], [640, 1333], [672, 1333], [704, 1333], [736, 1333], [768, 1333], [800, 1333]], keep_ratio: True, interp: 2} + - RandomFlip: {prob: 0.5} + - NormalizeImage: {is_scale: true, mean: [0.485,0.456,0.406], std: [0.229, 0.224,0.225]} + - Permute: {} + batch_transforms: + - PadBatch: {pad_to_stride: 32} + batch_size: 2 + shuffle: true + drop_last: true + collate_batch: false + +EvalReader: + sample_transforms: + - Decode: {} + - Resize: {interp: 2, target_size: [800, 1333], keep_ratio: True} + - NormalizeImage: {is_scale: true, mean: [0.485,0.456,0.406], std: [0.229, 0.224,0.225]} + - Permute: {} + batch_transforms: + - PadBatch: {pad_to_stride: 32} + batch_size: 1 + shuffle: false + drop_last: false + drop_empty: false + + +TestReader: + inputs_def: + image_shape: [-1, 3, 640, 640] + sample_transforms: + - Decode: {} + - LetterBoxResize: {target_size: 640} + - NormalizeImage: {is_scale: true, mean: [0.485,0.456,0.406], std: [0.229, 0.224,0.225]} + - Permute: {} + batch_size: 1 + shuffle: false + drop_last: false diff --git a/configs/vitdet/cascade_rcnn_vit_base_hrfpn_cae_1x_coco.yml b/configs/vitdet/cascade_rcnn_vit_base_hrfpn_cae_1x_coco.yml new file mode 100644 index 0000000000000000000000000000000000000000..23f766d75aad37f238cfebc233941d36b2d9a295 --- /dev/null +++ b/configs/vitdet/cascade_rcnn_vit_base_hrfpn_cae_1x_coco.yml @@ -0,0 +1,129 @@ + +_BASE_: [ + '../datasets/coco_detection.yml', + '../runtime.yml', + 
'./_base_/reader.yml', + './_base_/optimizer_base_1x.yml' +] + +weights: output/cascade_rcnn_vit_base_hrfpn_cae_1x_coco/model_final + + +# runtime +log_iter: 100 +snapshot_epoch: 1 +find_unused_parameters: True + +use_gpu: true +norm_type: sync_bn + + +# reader +worker_num: 2 +TrainReader: + batch_size: 1 + + +# model +architecture: CascadeRCNN + +CascadeRCNN: + backbone: VisionTransformer + neck: HRFPN + rpn_head: RPNHead + bbox_head: CascadeHead + # post process + bbox_post_process: BBoxPostProcess + + +VisionTransformer: + patch_size: 16 + embed_dim: 768 + depth: 12 + num_heads: 12 + mlp_ratio: 4 + qkv_bias: True + drop_rate: 0.0 + drop_path_rate: 0.2 + init_values: 0.1 + final_norm: False + use_rel_pos_bias: False + use_sincos_pos_emb: True + epsilon: 0.000001 # 1e-6 + out_indices: [3, 5, 7, 11] + with_fpn: True + pretrained: https://bj.bcebos.com/v1/paddledet/models/pretrained/vit_base_cae_pretrained.pdparams + +HRFPN: + out_channel: 256 + use_bias: True + +RPNHead: + anchor_generator: + aspect_ratios: [0.5, 1.0, 2.0] + anchor_sizes: [[32], [64], [128], [256], [512]] + strides: [4, 8, 16, 32, 64] + rpn_target_assign: + batch_size_per_im: 256 + fg_fraction: 0.5 + negative_overlap: 0.3 + positive_overlap: 0.7 + use_random: True + train_proposal: + min_size: 0.0 + nms_thresh: 0.7 + pre_nms_top_n: 2000 + post_nms_top_n: 2000 + topk_after_collect: True + test_proposal: + min_size: 0.0 + nms_thresh: 0.7 + pre_nms_top_n: 1000 + post_nms_top_n: 1000 + loss_rpn_bbox: SmoothL1Loss + +SmoothL1Loss: + beta: 0.1111111111111111 + + +CascadeHead: + head: CascadeXConvNormHead + roi_extractor: + resolution: 7 + sampling_ratio: 0 + aligned: True + bbox_assigner: BBoxAssigner + bbox_loss: GIoULoss + num_cascade_stages: 3 + reg_class_agnostic: False + stage_loss_weights: [1, 0.5, 0.25] + loss_normalize_pos: True + +BBoxAssigner: + batch_size_per_im: 512 + bg_thresh: 0.5 + fg_thresh: 0.5 + fg_fraction: 0.25 + cascade_iou: [0.5, 0.6, 0.7] + use_random: True + + +CascadeXConvNormHead: + norm_type: bn + + +GIoULoss: + loss_weight: 10. 
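+  # Note (assumption about ppdet internals): reduction 'none' keeps per-sample
+  # GIoU losses so the cascade head can apply its own positive-sample
+  # normalization (see loss_normalize_pos above), and eps guards the IoU
+  # computation against division by zero.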
+ reduction: 'none' + eps: 0.000001 + + +BBoxPostProcess: + decode: + name: RCNNBox + prior_box_var: [30.0, 30.0, 15.0, 15.0] + nms: + name: MultiClassNMS + keep_top_k: 100 + score_threshold: 0.05 + nms_threshold: 0.5 diff --git a/configs/vitdet/cascade_rcnn_vit_large_hrfpn_cae_1x_coco.yml b/configs/vitdet/cascade_rcnn_vit_large_hrfpn_cae_1x_coco.yml new file mode 100644 index 0000000000000000000000000000000000000000..5f5bbfb46ea1bdb748d31b4d8ea793ebd903fa33 --- /dev/null +++ b/configs/vitdet/cascade_rcnn_vit_large_hrfpn_cae_1x_coco.yml @@ -0,0 +1,27 @@ +_BASE_: [ + './cascade_rcnn_vit_base_hrfpn_cae_1x_coco.yml' +] + +weights: output/cascade_rcnn_vit_large_hrfpn_cae_1x_coco/model_final + + +depth: &depth 24 +dim: &dim 1024 + +VisionTransformer: + img_size: [800, 1344] + embed_dim: *dim + depth: *depth + num_heads: 16 + drop_path_rate: 0.25 + out_indices: [7, 11, 15, 23] + pretrained: https://bj.bcebos.com/v1/paddledet/models/pretrained/vit_large_cae_pretrained.pdparams + +HRFPN: + in_channels: [*dim, *dim, *dim, *dim] + +OptimizerBuilder: + optimizer: + layer_decay: 0.9 + weight_decay: 0.02 + num_layers: *depth diff --git a/configs/yolov3/README.md b/configs/yolov3/README.md index af4d07ce13d8e2ac6bf81d40ac4d25f5ab2061b3..41cee48c916f4530e43efa49f961239f52e60cd1 100644 --- a/configs/yolov3/README.md +++ b/configs/yolov3/README.md @@ -9,12 +9,12 @@ | DarkNet53(paper) | 608 | 8 | 270e | ---- | 33.0 | - | - | | DarkNet53(paper) | 416 | 8 | 270e | ---- | 31.0 | - | - | | DarkNet53(paper) | 320 | 8 | 270e | ---- | 28.2 | - | - | -| DarkNet53 | 608 | 8 | 270e | ---- | 39.0 | [下载链接](https://paddledet.bj.bcebos.com/models/yolov3_darknet53_270e_coco.pdparams) | [配置文件](https://github.com/PaddlePaddle/PaddleDetection/tree/develop/configs/yolov3/yolov3_darknet53_270e_coco.yml) | -| DarkNet53 | 416 | 8 | 270e | ---- | 37.5 | [下载链接](https://paddledet.bj.bcebos.com/models/yolov3_darknet53_270e_coco.pdparams) | [配置文件](https://github.com/PaddlePaddle/PaddleDetection/tree/develop/configs/yolov3/yolov3_darknet53_270e_coco.yml) | -| DarkNet53 | 320 | 8 | 270e | ---- | 34.6 | [下载链接](https://paddledet.bj.bcebos.com/models/yolov3_darknet53_270e_coco.pdparams) | [配置文件](https://github.com/PaddlePaddle/PaddleDetection/tree/develop/configs/yolov3/yolov3_darknet53_270e_coco.yml) | -| ResNet50_vd | 608 | 8 | 270e | ---- | 39.1 | [下载链接](https://paddledet.bj.bcebos.com/models/yolov3_r50vd_dcn_270e_coco.pdparams) | [配置文件](https://github.com/PaddlePaddle/PaddleDetection/tree/develop/configs/yolov3/yolov3_r50vd_dcn_270e_coco.yml) | -| ResNet50_vd | 416 | 8 | 270e | ---- | 36.6 | [下载链接](https://paddledet.bj.bcebos.com/models/yolov3_r50vd_dcn_270e_coco.pdparams) | [配置文件](https://github.com/PaddlePaddle/PaddleDetection/tree/develop/configs/yolov3/yolov3_r50vd_dcn_270e_coco.yml) | -| ResNet50_vd | 320 | 8 | 270e | ---- | 33.6 | [下载链接](https://paddledet.bj.bcebos.com/models/yolov3_r50vd_dcn_270e_coco.pdparams) | [配置文件](https://github.com/PaddlePaddle/PaddleDetection/tree/develop/configs/yolov3/yolov3_r50vd_dcn_270e_coco.yml) | +| DarkNet53 | 608 | 8 | 270e | ---- | 39.1 | [下载链接](https://paddledet.bj.bcebos.com/models/yolov3_darknet53_270e_coco.pdparams) | [配置文件](https://github.com/PaddlePaddle/PaddleDetection/tree/develop/configs/yolov3/yolov3_darknet53_270e_coco.yml) | +| DarkNet53 | 416 | 8 | 270e | ---- | 37.7 | [下载链接](https://paddledet.bj.bcebos.com/models/yolov3_darknet53_270e_coco.pdparams) | [配置文件](https://github.com/PaddlePaddle/PaddleDetection/tree/develop/configs/yolov3/yolov3_darknet53_270e_coco.yml) | +| DarkNet53 
| 320 | 8 | 270e | ---- | 34.8 | [下载链接](https://paddledet.bj.bcebos.com/models/yolov3_darknet53_270e_coco.pdparams) | [配置文件](https://github.com/PaddlePaddle/PaddleDetection/tree/develop/configs/yolov3/yolov3_darknet53_270e_coco.yml) | +| ResNet50_vd | 608 | 8 | 270e | ---- | 40.6 | [下载链接](https://paddledet.bj.bcebos.com/models/yolov3_r50vd_dcn_270e_coco.pdparams) | [配置文件](https://github.com/PaddlePaddle/PaddleDetection/tree/develop/configs/yolov3/yolov3_r50vd_dcn_270e_coco.yml) | +| ResNet50_vd | 416 | 8 | 270e | ---- | 38.2 | [下载链接](https://paddledet.bj.bcebos.com/models/yolov3_r50vd_dcn_270e_coco.pdparams) | [配置文件](https://github.com/PaddlePaddle/PaddleDetection/tree/develop/configs/yolov3/yolov3_r50vd_dcn_270e_coco.yml) | +| ResNet50_vd | 320 | 8 | 270e | ---- | 35.1 | [下载链接](https://paddledet.bj.bcebos.com/models/yolov3_r50vd_dcn_270e_coco.pdparams) | [配置文件](https://github.com/PaddlePaddle/PaddleDetection/tree/develop/configs/yolov3/yolov3_r50vd_dcn_270e_coco.yml) | | ResNet34 | 608 | 8 | 270e | ---- | 36.2 | [下载链接](https://paddledet.bj.bcebos.com/models/yolov3_r34_270e_coco.pdparams) | [配置文件](https://github.com/PaddlePaddle/PaddleDetection/tree/develop/configs/yolov3/yolov3_r34_270e_coco.yml) | | ResNet34 | 416 | 8 | 270e | ---- | 34.3 | [下载链接](https://paddledet.bj.bcebos.com/models/yolov3_r34_270e_coco.pdparams) | [配置文件](https://github.com/PaddlePaddle/PaddleDetection/tree/develop/configs/yolov3/yolov3_r34_270e_coco.yml) | | ResNet34 | 320 | 8 | 270e | ---- | 31.2 | [下载链接](https://paddledet.bj.bcebos.com/models/yolov3_r34_270e_coco.pdparams) | [配置文件](https://github.com/PaddlePaddle/PaddleDetection/tree/develop/configs/yolov3/yolov3_r34_270e_coco.yml) | @@ -45,18 +45,6 @@ | MobileNet-V3-SSLD | 416 | 8 | 270e | - | 79.2 | [下载链接](https://paddledet.bj.bcebos.com/models/yolov3_mobilenet_v3_large_ssld_270e_voc.pdparams) | [配置文件](https://github.com/PaddlePaddle/PaddleDetection/tree/develop/configs/yolov3/yolov3_mobilenet_v3_large_ssld_270e_voc.yml) | | MobileNet-V3-SSLD | 320 | 8 | 270e | - | 77.3 | [下载链接](https://paddledet.bj.bcebos.com/models/yolov3_mobilenet_v3_large_ssld_270e_voc.pdparams) | [配置文件](https://github.com/PaddlePaddle/PaddleDetection/tree/develop/configs/yolov3/yolov3_mobilenet_v3_large_ssld_270e_voc.yml) | -**注意:** YOLOv3均使用8GPU训练,训练270个epoch。由于动态图框架整体升级,以下几个PaddleDetection发布的权重模型评估时需要添加--bias字段, 例如 - -```bash -# 使用PaddleDetection发布的权重 -CUDA_VISIBLE_DEVICES=0 python tools/eval.py -c configs/ppyolo/yolov3_darknet53_270e_coco.yml -o weights=https://paddledet.bj.bcebos.com/models/yolov3_darknet53_270e_coco.pdparams --bias -``` -主要有: - -1.yolov3_darknet53_270e_coco - -2.yolov3_r50vd_dcn_270e_coco - ## Citations ``` @misc{redmon2018yolov3, diff --git a/configs/yolov3/_base_/optimizer_40e.yml b/configs/yolov3/_base_/optimizer_40e.yml index 0f858df59921e20398e34d019277e39c10abd583..7cf676d7119162d55dc0a2566c0590457344cfd3 100644 --- a/configs/yolov3/_base_/optimizer_40e.yml +++ b/configs/yolov3/_base_/optimizer_40e.yml @@ -3,12 +3,12 @@ epoch: 40 LearningRate: base_lr: 0.0001 schedulers: - - !PiecewiseDecay + - name: PiecewiseDecay gamma: 0.1 milestones: - 32 - 36 - - !LinearWarmup + - name: LinearWarmup start_factor: 0.3333333333333333 steps: 100 diff --git a/configs/yolox/README.md b/configs/yolox/README.md new file mode 100644 index 0000000000000000000000000000000000000000..a689b2981e06d439bc212c4fbb2ba4efa6860a7c --- /dev/null +++ b/configs/yolox/README.md @@ -0,0 +1,190 @@ +# YOLOX (YOLOX: Exceeding YOLO Series in 2021) + +## 内容 +- [模型库](#模型库) +- [使用说明](#使用说明) +- 
[速度测试](#速度测试)
+- [引用](#引用)
+
+
+## 模型库
+### YOLOX on COCO
+
+| 网络 | 输入尺寸 | 图片数/GPU | 学习率策略 | 模型推理耗时(ms) | mAP<sup>val<br>0.5:0.95</sup> | mAP<sup>val<br>0.5</sup> | Params(M) | FLOPs(G) | 下载链接 | 配置文件 |
+| :------------- | :------- | :-------: | :------: | :------------: | :---------------------: | :----------------: |:---------: | :------: |:---------------: |:-----: |
+| YOLOX-nano | 416 | 8 | 300e | 2.3 | 26.1 | 42.0 | 0.91 | 1.08 | [下载链接](https://paddledet.bj.bcebos.com/models/yolox_nano_300e_coco.pdparams) | [配置文件](./yolox_nano_300e_coco.yml) |
+| YOLOX-tiny | 416 | 8 | 300e | 2.8 | 32.9 | 50.4 | 5.06 | 6.45 | [下载链接](https://paddledet.bj.bcebos.com/models/yolox_tiny_300e_coco.pdparams) | [配置文件](./yolox_tiny_300e_coco.yml) |
+| YOLOX-s | 640 | 8 | 300e | 3.0 | 40.4 | 59.6 | 9.0 | 26.8 | [下载链接](https://paddledet.bj.bcebos.com/models/yolox_s_300e_coco.pdparams) | [配置文件](./yolox_s_300e_coco.yml) |
+| YOLOX-m | 640 | 8 | 300e | 5.8 | 46.9 | 65.7 | 25.3 | 73.8 | [下载链接](https://paddledet.bj.bcebos.com/models/yolox_m_300e_coco.pdparams) | [配置文件](./yolox_m_300e_coco.yml) |
+| YOLOX-l | 640 | 8 | 300e | 9.3 | 50.1 | 68.8 | 54.2 | 155.6 | [下载链接](https://paddledet.bj.bcebos.com/models/yolox_l_300e_coco.pdparams) | [配置文件](./yolox_l_300e_coco.yml) |
+| YOLOX-x | 640 | 8 | 300e | 16.6 | **51.8** | **70.6** | 99.1 | 281.9 | [下载链接](https://paddledet.bj.bcebos.com/models/yolox_x_300e_coco.pdparams) | [配置文件](./yolox_x_300e_coco.yml) |
+
+
+| 网络 | 输入尺寸 | 图片数/GPU | 学习率策略 | 模型推理耗时(ms) | mAP<sup>val<br>0.5:0.95</sup> | mAP<sup>val<br>0.5</sup> | Params(M) | FLOPs(G) | 下载链接 | 配置文件 |
+| :------------- | :------- | :-------: | :------: | :------------: | :---------------------: | :----------------: |:---------: | :------: |:---------------: |:-----: |
+| YOLOX-cdn-tiny | 416 | 8 | 300e | 1.9 | 32.4 | 50.2 | 5.03 | 6.33 | [下载链接](https://paddledet.bj.bcebos.com/models/yolox_cdn_tiny_300e_coco.pdparams) | [配置文件](./yolox_cdn_tiny_300e_coco.yml) |
+| YOLOX-crn-s | 640 | 8 | 300e | 3.0 | 40.4 | 59.6 | 7.7 | 24.69 | [下载链接](https://paddledet.bj.bcebos.com/models/yolox_crn_s_300e_coco.pdparams) | [配置文件](./yolox_crn_s_300e_coco.yml) |
+| YOLOX-ConvNeXt-s| 640 | 8 | 36e | - | **44.6** | **65.3** | 36.2 | 27.52 | [下载链接](https://paddledet.bj.bcebos.com/models/yolox_convnext_s_36e_coco.pdparams) | [配置文件](../convnext/yolox_convnext_s_36e_coco.yml) |
+
+
+**注意:**
+ - YOLOX模型训练使用COCO train2017作为训练集,YOLOX-cdn表示使用与YOLOv5 releases v6.0之后版本相同的主干网络,YOLOX-crn表示使用与PPYOLOE相同的主干网络CSPResNet,YOLOX-ConvNeXt表示使用ConvNeXt作为主干网络;
 - YOLOX模型训练过程中默认使用8 GPUs进行混合精度训练,默认每卡batch_size为8,默认lr为0.01,对应8卡总batch_size=64的设置,如果**GPU卡数**或者每卡**batch size**发生了改变,你需要按照公式 **lr<sub>new</sub> = lr<sub>default</sub> * (batch_size<sub>new</sub> * GPU_number<sub>new</sub>) / (batch_size<sub>default</sub> * GPU_number<sub>default</sub>)** 调整学习率;
+ - 为保持高mAP的同时提高推理速度,可以将[yolox_cspdarknet.yml](_base_/yolox_cspdarknet.yml)中的`nms_top_k`修改为`1000`,将`keep_top_k`修改为`100`,将`score_threshold`修改为`0.01`,mAP会下降约0.1~0.2%;
+ - 为快速的demo演示效果,可以将[yolox_cspdarknet.yml](_base_/yolox_cspdarknet.yml)中的`score_threshold`修改为`0.25`,将`nms_threshold`修改为`0.45`,但mAP会下降较多;
+ - YOLOX模型推理速度测试采用单卡V100,batch size=1进行测试,使用**CUDA 10.2**, **CUDNN 7.6.5**,TensorRT推理速度测试使用**TensorRT 6.0.1.8**;
+ - 参考[速度测试](#速度测试)以复现YOLOX推理速度测试结果,速度为**tensorRT-FP16**测速后的最快速度,**不包含数据预处理和模型输出后处理(NMS)**的耗时;
+ - 如果你设置了`--run_benchmark=True`,你首先需要安装以下依赖`pip install pynvml psutil GPUtil`。
+
+## 使用说明
+
+### 1.训练
+执行以下指令使用混合精度训练YOLOX
+```bash
+python -m paddle.distributed.launch --gpus 0,1,2,3,4,5,6,7 tools/train.py -c configs/yolox/yolox_s_300e_coco.yml --amp --eval
+```
+**注意:**
+- `--amp`表示开启混合精度训练以避免显存溢出,`--eval`表示边训边验证。
+
+### 2.评估
+执行以下命令在单个GPU上评估COCO val2017数据集
+```bash
+CUDA_VISIBLE_DEVICES=0 python tools/eval.py -c configs/yolox/yolox_s_300e_coco.yml -o weights=https://paddledet.bj.bcebos.com/models/yolox_s_300e_coco.pdparams
+```
+
+### 3.推理
+使用以下命令在单张GPU上预测图片,使用`--infer_img`推理单张图片,使用`--infer_dir`推理文件夹中的所有图片。
+```bash
+# 推理单张图片
+CUDA_VISIBLE_DEVICES=0 python tools/infer.py -c configs/yolox/yolox_s_300e_coco.yml -o weights=https://paddledet.bj.bcebos.com/models/yolox_s_300e_coco.pdparams --infer_img=demo/000000014439_640x640.jpg
+
+# 推理文件夹中的所有图片
+CUDA_VISIBLE_DEVICES=0 python tools/infer.py -c configs/yolox/yolox_s_300e_coco.yml -o weights=https://paddledet.bj.bcebos.com/models/yolox_s_300e_coco.pdparams --infer_dir=demo
+```
+
+### 4.导出模型
+YOLOX在GPU上推理部署或benchmark测速等需要通过`tools/export_model.py`导出模型。
+
+当你**使用Paddle Inference但不使用TensorRT**时,运行以下命令导出模型
+
+```bash
+python tools/export_model.py -c configs/yolox/yolox_s_300e_coco.yml -o weights=https://paddledet.bj.bcebos.com/models/yolox_s_300e_coco.pdparams
+```
+
+当你**使用Paddle Inference且使用TensorRT**时,需要指定`-o trt=True`来导出模型。
+
+```bash
+python tools/export_model.py -c configs/yolox/yolox_s_300e_coco.yml -o weights=https://paddledet.bj.bcebos.com/models/yolox_s_300e_coco.pdparams trt=True
+```
+
+如果你想将YOLOX模型导出为**ONNX格式**,参考
+[PaddleDetection模型导出为ONNX格式教程](../../deploy/EXPORT_ONNX_MODEL.md),运行以下命令:
+
+```bash
+
+# 导出推理模型
+python tools/export_model.py -c configs/yolox/yolox_s_300e_coco.yml --output_dir=output_inference -o
weights=https://paddledet.bj.bcebos.com/models/yolox_s_300e_coco.pdparams + +# 安装paddle2onnx +pip install paddle2onnx + +# 转换成onnx格式 +paddle2onnx --model_dir output_inference/yolox_s_300e_coco --model_filename model.pdmodel --params_filename model.pdiparams --opset_version 11 --save_file yolox_s_300e_coco.onnx +``` + +**注意:** ONNX模型目前只支持batch_size=1 + + +### 5.推理部署 +YOLOX可以使用以下方式进行部署: + - Paddle Inference [Python](../../deploy/python) & [C++](../../deploy/cpp) + - [Paddle-TensorRT](../../deploy/TENSOR_RT.md) + - [PaddleServing](https://github.com/PaddlePaddle/Serving) + - [PaddleSlim模型量化](../slim) + +运行以下命令导出模型 + +```bash +python tools/export_model.py -c configs/yolox/yolox_s_300e_coco.yml -o weights=https://paddledet.bj.bcebos.com/models/yolox_s_300e_coco.pdparams trt=True +``` + +**注意:** +- trt=True表示**使用Paddle Inference且使用TensorRT**进行测速,速度会更快,默认不加即为False,表示**使用Paddle Inference但不使用TensorRT**进行测速。 +- 如果是使用Paddle Inference在TensorRT FP16模式下部署,需要参考[Paddle Inference文档](https://www.paddlepaddle.org.cn/inference/master/user_guides/download_lib.html#python),下载并安装与你的CUDA, CUDNN和TensorRT相应的wheel包。 + +#### 5.1.Python部署 +`deploy/python/infer.py`使用上述导出后的Paddle Inference模型用于推理和benchnark测速,如果设置了`--run_benchmark=True`, 首先需要安装以下依赖`pip install pynvml psutil GPUtil`。 + +```bash +# Python部署推理单张图片 +python deploy/python/infer.py --model_dir=output_inference/yolox_s_300e_coco --image_file=demo/000000014439_640x640.jpg --device=gpu + +# 推理文件夹下的所有图片 +python deploy/python/infer.py --model_dir=output_inference/yolox_s_300e_coco --image_dir=demo/ --device=gpu +``` + +#### 5.2. C++部署 +`deploy/cpp/build/main`使用上述导出后的Paddle Inference模型用于C++推理部署, 首先按照[docs](../../deploy/cpp/docs)编译安装环境。 +```bash +# C++部署推理单张图片 +./deploy/cpp/build/main --model_dir=output_inference/yolox_s_300e_coco/ --image_file=demo/000000014439_640x640.jpg --run_mode=paddle --device=GPU --threshold=0.5 --output_dir=cpp_infer_output/yolox_s_300e_coco +``` + + +## 速度测试 + +为了公平起见,在[模型库](#模型库)中的速度测试结果均为不包含数据预处理和模型输出后处理(NMS)的数据(与[YOLOv4(AlexyAB)](https://github.com/AlexeyAB/darknet)测试方法一致),需要在导出模型时指定`-o exclude_nms=True`。测速需设置`--run_benchmark=True`, 首先需要安装以下依赖`pip install pynvml psutil GPUtil`。 + +**使用Paddle Inference但不使用TensorRT**进行测速,执行以下命令: + +```bash +# 导出模型 +python tools/export_model.py -c configs/yolox/yolox_s_300e_coco.yml -o weights=https://paddledet.bj.bcebos.com/models/yolox_s_300e_coco.pdparams exclude_nms=True + +# 速度测试,使用run_benchmark=True +python deploy/python/infer.py --model_dir=output_inference/yolox_s_300e_coco --image_file=demo/000000014439_640x640.jpg --run_mode=paddle --device=gpu --run_benchmark=True +``` + +**使用Paddle Inference且使用TensorRT**进行测速,执行以下命令: + +```bash +# 导出模型,使用trt=True +python tools/export_model.py -c configs/yolox/yolox_s_300e_coco.yml -o weights=https://paddledet.bj.bcebos.com/models/yolox_s_300e_coco.pdparams exclude_nms=True trt=True + +# 速度测试,使用run_benchmark=True +python deploy/python/infer.py --model_dir=output_inference/yolox_s_300e_coco --image_file=demo/000000014439_640x640.jpg --device=gpu --run_benchmark=True + +# tensorRT-FP32测速 +python deploy/python/infer.py --model_dir=output_inference/yolox_s_300e_coco --image_file=demo/000000014439_640x640.jpg --device=gpu --run_benchmark=True --run_mode=trt_fp32 + +# tensorRT-FP16测速 +python deploy/python/infer.py --model_dir=output_inference/yolox_s_300e_coco --image_file=demo/000000014439_640x640.jpg --device=gpu --run_benchmark=True --run_mode=trt_fp16 +``` +**注意:** +- 导出模型时指定`-o exclude_nms=True`仅作为测速时用,这样导出的模型其推理部署预测的结果不是最终检出框的结果。 +- 
[模型库](#模型库)中的速度测试结果为**tensorRT-FP16**测速后的最快速度,为**不包含数据预处理和模型输出后处理(NMS)**的耗时。
+
+## FAQ
+
+<details>
+<summary>如何计算模型参数量</summary>
+
+可以将以下代码插入:[trainer.py](https://github.com/PaddlePaddle/PaddleDetection/blob/develop/ppdet/engine/trainer.py#L154) 来计算参数量。
+
+```python
+params = sum([
+    p.numel() for n, p in self.model.named_parameters()
+    if all([x not in n for x in ['_mean', '_variance']])
+])  # exclude the running mean/variance buffers of BatchNorm layers
+print('Params: ', params)
+```
+
+</details>
+
+## 引用
+```
+ @article{yolox2021,
+  title={YOLOX: Exceeding YOLO Series in 2021},
+  author={Ge, Zheng and Liu, Songtao and Wang, Feng and Li, Zeming and Sun, Jian},
+  journal={arXiv preprint arXiv:2107.08430},
+  year={2021}
+}
+```
diff --git a/configs/yolox/_base_/optimizer_300e.yml b/configs/yolox/_base_/optimizer_300e.yml
new file mode 100644
index 0000000000000000000000000000000000000000..1853ad61ff3e8f222388a005db9e60640700c996
--- /dev/null
+++ b/configs/yolox/_base_/optimizer_300e.yml
@@ -0,0 +1,20 @@
+epoch: 300
+
+LearningRate:
+  base_lr: 0.01
+  schedulers:
+    - !CosineDecay
+      max_epochs: 300
+      min_lr_ratio: 0.05
+      last_plateau_epochs: 15
+    - !ExpWarmup
+      epochs: 5
+
+OptimizerBuilder:
+  optimizer:
+    type: Momentum
+    momentum: 0.9
+    use_nesterov: True
+  regularizer:
+    factor: 0.0005
+    type: L2
diff --git a/configs/yolox/_base_/yolox_cspdarknet.yml b/configs/yolox/_base_/yolox_cspdarknet.yml
new file mode 100644
index 0000000000000000000000000000000000000000..24ef370c437e308c3a7e9da973fe3eea439faf17
--- /dev/null
+++ b/configs/yolox/_base_/yolox_cspdarknet.yml
@@ -0,0 +1,42 @@
+architecture: YOLOX
+norm_type: sync_bn
+use_ema: True
+ema_decay: 0.9999
+ema_decay_type: "exponential"
+act: silu
+find_unused_parameters: True
+
+depth_mult: 1.0
+width_mult: 1.0
+
+YOLOX:
+  backbone: CSPDarkNet
+  neck: YOLOCSPPAN
+  head: YOLOXHead
+  size_stride: 32
+  size_range: [15, 25] # multi-scale range [480*480 ~ 800*800]
+
+CSPDarkNet:
+  arch: "X"
+  return_idx: [2, 3, 4]
+  depthwise: False
+
+YOLOCSPPAN:
+  depthwise: False
+
+YOLOXHead:
+  l1_epoch: 285
+  depthwise: False
+  loss_weight: {cls: 1.0, obj: 1.0, iou: 5.0, l1: 1.0}
+  assigner:
+    name: SimOTAAssigner
+    candidate_topk: 10
+    use_vfl: False
+  nms:
+    name: MultiClassNMS
+    nms_top_k: 10000
+    keep_top_k: 1000
+    score_threshold: 0.001
+    nms_threshold: 0.65
+  # To speed up inference while keeping mAP high, you can modify 'nms_top_k' to 1000 and 'keep_top_k' to 100; mAP will drop about 0.1%.
+  # For a high-speed demo, you can modify 'score_threshold' to 0.25 and 'nms_threshold' to 0.45, but the mAP will drop a lot.
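As a quick aside on the NMS block above: the sketch below shows how `score_threshold`, `nms_top_k`, `keep_top_k` and `nms_threshold` interact. It is a minimal single-class NumPy re-implementation written for intuition only; it is not PaddleDetection's `MultiClassNMS` operator, and the function name and the `[x1, y1, x2, y2]` box layout are assumptions of this example.

```python
import numpy as np

def toy_multiclass_nms(boxes, scores, score_threshold=0.001, nms_top_k=10000,
                       keep_top_k=1000, nms_threshold=0.65):
    """Toy single-class NMS mirroring the YAML knobs above (illustrative only)."""
    # 1) score_threshold: drop low-confidence candidates up front
    mask = scores >= score_threshold
    boxes, scores = boxes[mask], scores[mask]
    # 2) nms_top_k: cap how many candidates enter the NMS loop at all
    order = np.argsort(-scores)[:nms_top_k]
    boxes, scores = boxes[order], scores[order]
    kept = []
    # 3) keep_top_k: cap how many detections survive in total
    while len(scores) > 0 and len(kept) < keep_top_k:
        kept.append((scores[0], boxes[0]))
        if len(scores) == 1:
            break
        # IoU of the current best box against all remaining candidates
        x1 = np.maximum(boxes[0, 0], boxes[1:, 0])
        y1 = np.maximum(boxes[0, 1], boxes[1:, 1])
        x2 = np.minimum(boxes[0, 2], boxes[1:, 2])
        y2 = np.minimum(boxes[0, 3], boxes[1:, 3])
        inter = np.clip(x2 - x1, 0, None) * np.clip(y2 - y1, 0, None)
        area_best = (boxes[0, 2] - boxes[0, 0]) * (boxes[0, 3] - boxes[0, 1])
        areas = (boxes[1:, 2] - boxes[1:, 0]) * (boxes[1:, 3] - boxes[1:, 1])
        iou = inter / (area_best + areas - inter + 1e-9)
        # 4) nms_threshold: suppress candidates overlapping the kept box too much
        survivors = iou <= nms_threshold
        boxes, scores = boxes[1:][survivors], scores[1:][survivors]
    return kept

boxes = np.array([[0., 0., 10., 10.], [1., 1., 11., 11.], [20., 20., 30., 30.]])
scores = np.array([0.9, 0.8, 0.7])
print(toy_multiclass_nms(boxes, scores))  # the second box is suppressed (IoU ~ 0.68)
```

Lowering `nms_top_k` and `keep_top_k` shortens this loop, which is why the comments above trade a small mAP drop for speed.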
diff --git a/configs/yolox/_base_/yolox_reader.yml b/configs/yolox/_base_/yolox_reader.yml new file mode 100644 index 0000000000000000000000000000000000000000..a33b847b159a515248c8556a24bb29e779f1def8 --- /dev/null +++ b/configs/yolox/_base_/yolox_reader.yml @@ -0,0 +1,44 @@ +worker_num: 4 +TrainReader: + sample_transforms: + - Decode: {} + - Mosaic: + prob: 1.0 + input_dim: [640, 640] + degrees: [-10, 10] + scale: [0.1, 2.0] + shear: [-2, 2] + translate: [-0.1, 0.1] + enable_mixup: True + mixup_prob: 1.0 + mixup_scale: [0.5, 1.5] + - AugmentHSV: {is_bgr: False, hgain: 5, sgain: 30, vgain: 30} + - PadResize: {target_size: 640} + - RandomFlip: {} + batch_transforms: + - Permute: {} + batch_size: 8 + shuffle: True + drop_last: True + collate_batch: False + mosaic_epoch: 285 + + +EvalReader: + sample_transforms: + - Decode: {} + - Resize: {target_size: [640, 640], keep_ratio: True, interp: 1} + - Pad: {size: [640, 640], fill_value: [114., 114., 114.]} + - Permute: {} + batch_size: 4 + + +TestReader: + inputs_def: + image_shape: [3, 640, 640] + sample_transforms: + - Decode: {} + - Resize: {target_size: [640, 640], keep_ratio: True, interp: 1} + - Pad: {size: [640, 640], fill_value: [114., 114., 114.]} + - Permute: {} + batch_size: 1 diff --git a/configs/yolox/yolox_cdn_tiny_300e_coco.yml b/configs/yolox/yolox_cdn_tiny_300e_coco.yml new file mode 100644 index 0000000000000000000000000000000000000000..81c6c075d3620caa98dce2ebcd3b45bd694cef8d --- /dev/null +++ b/configs/yolox/yolox_cdn_tiny_300e_coco.yml @@ -0,0 +1,14 @@ +_BASE_: [ + 'yolox_tiny_300e_coco.yml' +] +depth_mult: 0.33 +width_mult: 0.375 + +log_iter: 100 +snapshot_epoch: 10 +weights: output/yolox_cdn_tiny_300e_coco/model_final + +CSPDarkNet: + arch: "P5" # using the same backbone of YOLOv5 releases v6.0 and later version + return_idx: [2, 3, 4] + depthwise: False diff --git a/configs/yolox/yolox_crn_s_300e_coco.yml b/configs/yolox/yolox_crn_s_300e_coco.yml new file mode 100644 index 0000000000000000000000000000000000000000..ae463113e5909f76905b70409ae75794a66430d7 --- /dev/null +++ b/configs/yolox/yolox_crn_s_300e_coco.yml @@ -0,0 +1,28 @@ +_BASE_: [ + '../datasets/coco_detection.yml', + '../runtime.yml', + './_base_/optimizer_300e.yml', + './_base_/yolox_cspdarknet.yml', + './_base_/yolox_reader.yml' +] +depth_mult: 0.33 +width_mult: 0.50 + +log_iter: 100 +snapshot_epoch: 10 +weights: output/yolox_crn_s_300e_coco/model_final +pretrain_weights: https://paddledet.bj.bcebos.com/models/pretrained/CSPResNetb_s_pretrained.pdparams + + +YOLOX: + backbone: CSPResNet + neck: YOLOCSPPAN + head: YOLOXHead + size_stride: 32 + size_range: [15, 25] # multi-scale range [480*480 ~ 800*800] + +CSPResNet: + layers: [3, 6, 6, 3] + channels: [64, 128, 256, 512, 1024] + return_idx: [1, 2, 3] + use_large_stem: True diff --git a/configs/yolox/yolox_l_300e_coco.yml b/configs/yolox/yolox_l_300e_coco.yml new file mode 100644 index 0000000000000000000000000000000000000000..79cffd5e544b0d2cf4629c6a9f37e75eda4a5a6d --- /dev/null +++ b/configs/yolox/yolox_l_300e_coco.yml @@ -0,0 +1,13 @@ +_BASE_: [ + '../datasets/coco_detection.yml', + '../runtime.yml', + './_base_/optimizer_300e.yml', + './_base_/yolox_cspdarknet.yml', + './_base_/yolox_reader.yml' +] +depth_mult: 1.0 +width_mult: 1.0 + +log_iter: 100 +snapshot_epoch: 10 +weights: output/yolox_l_300e_coco/model_final diff --git a/configs/yolox/yolox_m_300e_coco.yml b/configs/yolox/yolox_m_300e_coco.yml new file mode 100644 index 0000000000000000000000000000000000000000..4c25d7e2561cf120b60b712e621ad695debdb61c 
--- /dev/null +++ b/configs/yolox/yolox_m_300e_coco.yml @@ -0,0 +1,13 @@ +_BASE_: [ + '../datasets/coco_detection.yml', + '../runtime.yml', + './_base_/optimizer_300e.yml', + './_base_/yolox_cspdarknet.yml', + './_base_/yolox_reader.yml' +] +depth_mult: 0.67 +width_mult: 0.75 + +log_iter: 100 +snapshot_epoch: 10 +weights: output/yolox_m_300e_coco/model_final diff --git a/configs/yolox/yolox_nano_300e_coco.yml b/configs/yolox/yolox_nano_300e_coco.yml new file mode 100644 index 0000000000000000000000000000000000000000..80b8b5c51fbc200ecce2ff10013b7e9a94300999 --- /dev/null +++ b/configs/yolox/yolox_nano_300e_coco.yml @@ -0,0 +1,81 @@ +_BASE_: [ + '../datasets/coco_detection.yml', + '../runtime.yml', + './_base_/optimizer_300e.yml', + './_base_/yolox_cspdarknet.yml', + './_base_/yolox_reader.yml' +] +depth_mult: 0.33 +width_mult: 0.25 + +log_iter: 100 +snapshot_epoch: 10 +weights: output/yolox_nano_300e_coco/model_final + + +### model config: +# Note: YOLOX-nano use depthwise conv in backbone, neck and head. +YOLOX: + backbone: CSPDarkNet + neck: YOLOCSPPAN + head: YOLOXHead + size_stride: 32 + size_range: [10, 20] # multi-scale range [320*320 ~ 640*640] + +CSPDarkNet: + arch: "X" + return_idx: [2, 3, 4] + depthwise: True + +YOLOCSPPAN: + depthwise: True + +YOLOXHead: + depthwise: True + + +### reader config: +# Note: YOLOX-tiny/nano uses 416*416 for evaluation and inference. +# And multi-scale training setting is in model config, TrainReader's operators use 640*640 as default. +worker_num: 4 +TrainReader: + sample_transforms: + - Decode: {} + - Mosaic: + prob: 0.5 # 1.0 in YOLOX-tiny/s/m/l/x + input_dim: [640, 640] + degrees: [-10, 10] + scale: [0.5, 1.5] # [0.1, 2.0] in YOLOX-s/m/l/x + shear: [-2, 2] + translate: [-0.1, 0.1] + enable_mixup: False # True in YOLOX-s/m/l/x + - AugmentHSV: {is_bgr: False, hgain: 5, sgain: 30, vgain: 30} + - PadResize: {target_size: 640} + - RandomFlip: {} + batch_transforms: + - Permute: {} + batch_size: 8 + shuffle: True + drop_last: True + collate_batch: False + mosaic_epoch: 285 + + +EvalReader: + sample_transforms: + - Decode: {} + - Resize: {target_size: [416, 416], keep_ratio: True, interp: 1} + - Pad: {size: [416, 416], fill_value: [114., 114., 114.]} + - Permute: {} + batch_size: 8 + + +TestReader: + inputs_def: + image_shape: [3, 416, 416] + sample_transforms: + - Decode: {} + - Resize: {target_size: [416, 416], keep_ratio: True, interp: 1} + - Pad: {size: [416, 416], fill_value: [114., 114., 114.]} + - Permute: {} + batch_size: 1 diff --git a/configs/yolox/yolox_s_300e_coco.yml b/configs/yolox/yolox_s_300e_coco.yml new file mode 100644 index 0000000000000000000000000000000000000000..9ba6120a93ec1d5c46cc8d8dc88351671ff44349 --- /dev/null +++ b/configs/yolox/yolox_s_300e_coco.yml @@ -0,0 +1,13 @@ +_BASE_: [ + '../datasets/coco_detection.yml', + '../runtime.yml', + './_base_/optimizer_300e.yml', + './_base_/yolox_cspdarknet.yml', + './_base_/yolox_reader.yml' +] +depth_mult: 0.33 +width_mult: 0.50 + +log_iter: 100 +snapshot_epoch: 10 +weights: output/yolox_s_300e_coco/model_final diff --git a/configs/yolox/yolox_tiny_300e_coco.yml b/configs/yolox/yolox_tiny_300e_coco.yml new file mode 100644 index 0000000000000000000000000000000000000000..c81c172d27982c460bbead78f966158c67de7bc2 --- /dev/null +++ b/configs/yolox/yolox_tiny_300e_coco.yml @@ -0,0 +1,69 @@ +_BASE_: [ + '../datasets/coco_detection.yml', + '../runtime.yml', + './_base_/optimizer_300e.yml', + './_base_/yolox_cspdarknet.yml', + './_base_/yolox_reader.yml' +] +depth_mult: 0.33 +width_mult: 0.375 
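+# Note: depth_mult / width_mult scale the CSPDarkNet block counts and channel
+# widths relative to the full-size model (1.0 / 1.0 in yolox_l_300e_coco.yml);
+# 0.33 / 0.375 yields the tiny variant (about 5.06M parameters per the model
+# zoo table above).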
+
+log_iter: 100
+snapshot_epoch: 10
+weights: output/yolox_tiny_300e_coco/model_final
+
+
+### model config:
+YOLOX:
+  backbone: CSPDarkNet
+  neck: YOLOCSPPAN
+  head: YOLOXHead
+  size_stride: 32
+  size_range: [10, 20] # multi-scale range [320*320 ~ 640*640]
+
+
+### reader config:
+# Note: YOLOX-tiny/nano uses 416*416 for evaluation and inference.
+# And multi-scale training setting is in model config, TrainReader's operators use 640*640 as default.
+worker_num: 4
+TrainReader:
+  sample_transforms:
+    - Decode: {}
+    - Mosaic:
+        prob: 1.0
+        input_dim: [640, 640]
+        degrees: [-10, 10]
+        scale: [0.5, 1.5] # [0.1, 2.0] in YOLOX-s/m/l/x
+        shear: [-2, 2]
+        translate: [-0.1, 0.1]
+        enable_mixup: False # True in YOLOX-s/m/l/x
+    - AugmentHSV: {is_bgr: False, hgain: 5, sgain: 30, vgain: 30}
+    - PadResize: {target_size: 640}
+    - RandomFlip: {}
+  batch_transforms:
+    - Permute: {}
+  batch_size: 8
+  shuffle: True
+  drop_last: True
+  collate_batch: False
+  mosaic_epoch: 285
+
+
+EvalReader:
+  sample_transforms:
+    - Decode: {}
+    - Resize: {target_size: [416, 416], keep_ratio: True, interp: 1}
+    - Pad: {size: [416, 416], fill_value: [114., 114., 114.]}
+    - Permute: {}
+  batch_size: 8
+
+
+TestReader:
+  inputs_def:
+    image_shape: [3, 416, 416]
+  sample_transforms:
+    - Decode: {}
+    - Resize: {target_size: [416, 416], keep_ratio: True, interp: 1}
+    - Pad: {size: [416, 416], fill_value: [114., 114., 114.]}
+    - Permute: {}
+  batch_size: 1
diff --git a/configs/yolox/yolox_x_300e_coco.yml b/configs/yolox/yolox_x_300e_coco.yml
new file mode 100644
index 0000000000000000000000000000000000000000..fd8e0d2eb0fbc2d8f052e549b71f9995aa325a05
--- /dev/null
+++ b/configs/yolox/yolox_x_300e_coco.yml
@@ -0,0 +1,13 @@
+_BASE_: [
+  '../datasets/coco_detection.yml',
+  '../runtime.yml',
+  './_base_/optimizer_300e.yml',
+  './_base_/yolox_cspdarknet.yml',
+  './_base_/yolox_reader.yml'
+]
+depth_mult: 1.33
+width_mult: 1.25
+
+log_iter: 100
+snapshot_epoch: 10
+weights: output/yolox_x_300e_coco/model_final
diff --git a/deploy/EXPORT_ONNX_MODEL.md b/deploy/EXPORT_ONNX_MODEL.md
index cad839c9a64f40d68af5b275b57b6eaee492f932..59d79730a1f70587aab4219a28a8111243609854 100644
--- a/deploy/EXPORT_ONNX_MODEL.md
+++ b/deploy/EXPORT_ONNX_MODEL.md
@@ -4,21 +4,31 @@ PaddleDetection模型支持保存为ONNX格式,目前测试支持的列表如
 | 模型 | OP版本 | 备注 |
 | :---- | :----- | :--- |
 | YOLOv3 | 11 | 仅支持batch=1推理;模型导出需固定shape |
-| PPYOLO | 11 | 仅支持batch=1推理;MatrixNMS将被转成NMS,精度略有变化;模型导出需固定shape |
-| PPYOLOv2 | 11 | 仅支持batch=1推理;MatrixNMS将被转换NMS,精度略有变化;模型导出需固定shape |
-| PPYOLO-Tiny | 11 | 仅支持batch=1推理;模型导出需固定shape |
+| PP-YOLO | 11 | 仅支持batch=1推理;MatrixNMS将被转换为NMS,精度略有变化;模型导出需固定shape |
+| PP-YOLOv2 | 11 | 仅支持batch=1推理;MatrixNMS将被转换为NMS,精度略有变化;模型导出需固定shape |
+| PP-YOLO Tiny | 11 | 仅支持batch=1推理;模型导出需固定shape |
+| PP-YOLOE | 11 | 仅支持batch=1推理;模型导出需固定shape |
+| PP-PicoDet | 11 | 仅支持batch=1推理;模型导出需固定shape |
 | FCOS | 11 |仅支持batch=1推理 |
 | PAFNet | 11 |- |
 | TTFNet | 11 |-|
 | SSD | 11 |仅支持batch=1推理 |
-| PicoDet | 11 |仅支持batch=1推理 |
+| PP-TinyPose | 11 | - |
+| Faster RCNN | 16 | 仅支持batch=1推理, 依赖paddle2onnx 0.9.7及以上版本|
+| Mask RCNN | 16 | 仅支持batch=1推理, 依赖paddle2onnx 0.9.7及以上版本|
+| Cascade RCNN | 16 | 仅支持batch=1推理, 依赖paddle2onnx 0.9.7及以上版本|
+| Cascade Mask RCNN | 16 | 仅支持batch=1推理, 依赖paddle2onnx 0.9.7及以上版本|
 
 保存ONNX的功能由[Paddle2ONNX](https://github.com/PaddlePaddle/Paddle2ONNX)提供,如在转换中有相关问题反馈,可在Paddle2ONNX的Github项目中通过[ISSUE](https://github.com/PaddlePaddle/Paddle2ONNX/issues)与工程师交流。
 
 ## 导出教程
 
 ### 步骤一、导出PaddlePaddle部署模型
-导出步骤参考文档[PaddleDetection部署模型导出教程](./EXPORT_MODEL.md), 以COCO数据集训练的YOLOv3为例,导出示例如下
+
+
+导出步骤参考文档[PaddleDetection部署模型导出教程](./EXPORT_MODEL.md), 导出示例如下 + +- 非RCNN系列模型, 以YOLOv3为例 ``` cd PaddleDetection python tools/export_model.py -c configs/yolov3/yolov3_darknet53_270e_coco.yml \ @@ -36,17 +46,45 @@ yolov3_darknet ``` > 注意导出时的参数`TestReader.inputs_def.image_shape`,对于YOLO系列模型注意导出时指定该参数,否则无法转换成功 +- RCNN系列模型,以Faster RCNN为例 + +RCNN系列模型导出ONNX模型时,需要去除模型中的控制流,因此需要额外添加`export_onnx=True` 字段 +``` +cd PaddleDetection +python tools/export_model.py -c configs/faster_rcnn/faster_rcnn_r50_fpn_1x_coco.yml \ + -o weights=https://paddledet.bj.bcebos.com/models/faster_rcnn_r50_fpn_1x_coco.pdparams \ + export_onnx=True \ + --output_dir inference_model +``` + +导出的模型保存在`inference_model/faster_rcnn_r50_fpn_1x_coco/`目录中,结构如下 +``` +faster_rcnn_r50_fpn_1x_coco + ├── infer_cfg.yml # 模型配置文件信息 + ├── model.pdiparams # 静态图模型参数 + ├── model.pdiparams.info # 参数额外信息,一般无需关注 + └── model.pdmodel # 静态图模型文件 +``` + ### 步骤二、将部署模型转为ONNX格式 -安装Paddle2ONNX(高于或等于0.6版本) +安装Paddle2ONNX(高于或等于0.9.7版本) ``` pip install paddle2onnx ``` 使用如下命令转换 ``` +# YOLOv3 paddle2onnx --model_dir inference_model/yolov3_darknet53_270e_coco \ --model_filename model.pdmodel \ --params_filename model.pdiparams \ --opset_version 11 \ --save_file yolov3.onnx + +# Faster RCNN +paddle2onnx --model_dir inference_model/faster_rcnn_r50_fpn_1x_coco \ + --model_filename model.pdmodel \ + --params_filename model.pdiparams \ + --opset_version 16 \ + --save_file faster_rcnn.onnx ``` -转换后的模型即为在当前路径下的`yolov3.onnx` +转换后的模型即为在当前路径下的`yolov3.onnx`和`faster_rcnn.onnx` diff --git a/deploy/EXPORT_ONNX_MODEL_en.md b/deploy/EXPORT_ONNX_MODEL_en.md index 6f0a16664b1724da71c838a458f0545161058773..1f32a6655137a218c330a78ea8190b046d15743b 100644 --- a/deploy/EXPORT_ONNX_MODEL_en.md +++ b/deploy/EXPORT_ONNX_MODEL_en.md @@ -4,20 +4,30 @@ PaddleDetection Model support is saved in ONNX format and the list of current te | Model | OP Version | NOTE | | :---- | :----- | :--- | | YOLOv3 | 11 | Only batch=1 inferring is supported. Model export needs fixed shape | -| PPYOLO | 11 | Only batch=1 inferring is supported. A MatrixNMS will be converted to an NMS with slightly different precision; Model export needs fixed shape | -| PPYOLOv2 | 11 | Only batch=1 inferring is supported. MatrixNMS will be converted to NMS with slightly different precision; Model export needs fixed shape | -| PPYOLO-Tiny | 11 | Only batch=1 inferring is supported. Model export needs fixed shape | +| PP-YOLO | 11 | Only batch=1 inferring is supported. A MatrixNMS will be converted to an NMS with slightly different precision; Model export needs fixed shape | +| PP-YOLOv2 | 11 | Only batch=1 inferring is supported. MatrixNMS will be converted to NMS with slightly different precision; Model export needs fixed shape | +| PP-YOLO Tiny | 11 | Only batch=1 inferring is supported. Model export needs fixed shape | +| PP-YOLOE | 11 | Only batch=1 inferring is supported. Model export needs fixed shape | +| PP-PicoDet | 11 | Only batch=1 inferring is supported. 
Model export needs fixed shape |
 | FCOS | 11 |Only batch=1 inferring is supported |
 | PAFNet | 11 |- |
 | TTFNet | 11 |-|
 | SSD | 11 |Only batch=1 inferring is supported |
+| PP-TinyPose | 11 | - |
+| Faster RCNN | 16 | Only batch=1 inferring is supported, requires paddle2onnx>=0.9.7|
+| Mask RCNN | 16 | Only batch=1 inferring is supported, requires paddle2onnx>=0.9.7|
+| Cascade RCNN | 16 | Only batch=1 inferring is supported, requires paddle2onnx>=0.9.7|
+| Cascade Mask RCNN | 16 | Only batch=1 inferring is supported, requires paddle2onnx>=0.9.7|
+
 
 The function of saving ONNX is provided by [Paddle2ONNX](https://github.com/PaddlePaddle/Paddle2ONNX). If there is feedback on related problems during conversion, communicate with engineers in Paddle2ONNX's Github project via [ISSUE](https://github.com/PaddlePaddle/Paddle2ONNX/issues).
 
 ## Export Tutorial
 
 ### Step 1. Export the Paddle deployment model
-Export procedure reference document[Tutorial on PaddleDetection deployment model export](./EXPORT_MODEL_en.md), take YOLOv3 of COCO dataset training as an example
+For the export procedure, refer to the [Tutorial on PaddleDetection deployment model export](./EXPORT_MODEL_en.md); for example:
+
+- Models except the RCNN series, taking YOLOv3 as an example
 ```
 cd PaddleDetection
 python tools/export_model.py -c configs/yolov3/yolov3_darknet53_270e_coco.yml \
@@ -35,17 +45,45 @@ yolov3_darknet
 ```
 > check`TestReader.inputs_def.image_shape`, For YOLO series models, specify this parameter when exporting; otherwise, the conversion fails
 
+- RCNN series models, taking Faster RCNN as an example
+
+The conditional blocks need to be removed from RCNN series models when exporting an ONNX model, so add `export_onnx=True` on the command line
+```
+cd PaddleDetection
+python tools/export_model.py -c configs/faster_rcnn/faster_rcnn_r50_fpn_1x_coco.yml \
+              -o weights=https://paddledet.bj.bcebos.com/models/faster_rcnn_r50_fpn_1x_coco.pdparams \
+              export_onnx=True \
+              --output_dir inference_model
+```
+The exported model is saved in `inference_model/faster_rcnn_r50_fpn_1x_coco/`, with the following structure
+```
+faster_rcnn_r50_fpn_1x_coco
+  ├── infer_cfg.yml # model configuration information
+  ├── model.pdiparams # static graph model parameters
+  ├── model.pdiparams.info # extra parameter information, usually safe to ignore
+  └── model.pdmodel # static graph model file
+```
+
+
### Step 2.
Convert the deployment model to ONNX format
-Install Paddle2ONNX (version 0.6 or higher)
+Install Paddle2ONNX (version 0.9.7 or higher)
```
pip install paddle2onnx
```
Use the following command to convert
```
+# YOLOv3
paddle2onnx --model_dir inference_model/yolov3_darknet53_270e_coco \
            --model_filename model.pdmodel \
            --params_filename model.pdiparams \
            --opset_version 11 \
            --save_file yolov3.onnx
+
+# Faster RCNN
+paddle2onnx --model_dir inference_model/faster_rcnn_r50_fpn_1x_coco \
+            --model_filename model.pdmodel \
+            --params_filename model.pdiparams \
+            --opset_version 16 \
+            --save_file faster_rcnn.onnx
```
-The transformed model is under the current path`yolov3.onnx`
+The converted models, `yolov3.onnx` and `faster_rcnn.onnx`, are saved under the current path
diff --git a/deploy/README_en.md b/deploy/README_en.md
index 8ac1f1ce2a34a21153fa885d98b6956ec5dd0112..f587b56b99e7a6b7c7ed31c5ae6307ade6e18126 100644
--- a/deploy/README_en.md
+++ b/deploy/README_en.md
@@ -21,7 +21,7 @@ Use the `tools/export_model.py` script to export the model and the configuration
# The YOLOv3 model is derived
python tools/export_model.py -c configs/yolov3/yolov3_mobilenet_v1_roadsign.yml -o weights=output/yolov3_mobilenet_v1_roadsign/best_model.pdparams
```
-The prediction model will be exported to the `output_inference/yolov3_mobilenet_v1_roadsign` directory `infer_cfg.yml`, `model.pdiparams`, `model.pdiparams.info`, `model.pdmodel`. For details on model export, please refer to the documentation [Tutorial on Paddle Detection MODEL EXPORT](EXPORT_MODEL_sh.md).
+The prediction model will be exported to the `output_inference/yolov3_mobilenet_v1_roadsign` directory `infer_cfg.yml`, `model.pdiparams`, `model.pdiparams.info`, `model.pdmodel`. For details on model export, please refer to the documentation [Tutorial on Paddle Detection MODEL EXPORT](./EXPORT_MODEL_en.md).
### 1.2 Use Paddle Inference to Make Predictions
* Python deployment supports `CPU`, `GPU` and `XPU` environments, Windows, Linux, and NV Jetson embedded devices. Reference Documentation [Python Deployment](python/README.md)
@@ -39,7 +39,7 @@ python tools/export_model.py -c configs/yolov3/yolov3_mobilenet_v1_roadsign.yml
```
The prediction model will be exported to the `output_inference/yolov3_darknet53_270e_coco` directory `infer_cfg.yml`, `model.pdiparams`, `model.pdiparams.info`, `model.pdmodel`, `serving_client/` and `serving_server/` folder.
-For details on model export, please refer to the documentation [Tutorial on Paddle Detection MODEL EXPORT](EXPORT_MODEL_en.md).
+For details on model export, please refer to the documentation [Tutorial on Paddle Detection MODEL EXPORT](./EXPORT_MODEL_en.md).
### 2.2 Predictions are made using Paddle Serving
* [Install PaddleServing](https://github.com/PaddlePaddle/Serving/blob/develop/README.md#installation)
diff --git a/deploy/TENSOR_RT.md b/deploy/TENSOR_RT.md
index a9d7677ee3a2af028937cc83a6a82155e2df4716..b1dd29789540746cce5f7ea3ce0a783e2178438d 100644
--- a/deploy/TENSOR_RT.md
+++ b/deploy/TENSOR_RT.md
@@ -1,8 +1,8 @@
# TensorRT预测部署教程
-TensorRT是NVIDIA提出的用于统一模型部署的加速库,可以应用于V100、JETSON Xavier等硬件,它可以极大提高预测速度。Paddle TensorRT教程请参考文档[使用Paddle-TensorRT库预测](https://paddle-inference.readthedocs.io/en/latest/optimize/paddle_trt.html#)
+TensorRT是NVIDIA提出的用于统一模型部署的加速库,可以应用于V100、JETSON Xavier等硬件,它可以极大提高预测速度。Paddle TensorRT教程请参考文档[使用Paddle-TensorRT库预测](https://www.paddlepaddle.org.cn/inference/optimize/paddle_trt.html)
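+
+以下为开启TensorRT FP16模式进行Python端预测的示意命令(模型目录与图片路径为示例值,完整配置方法见下文):
+```
+python deploy/python/infer.py --model_dir=./output_inference/yolov3_darknet53_270e_coco \
+                              --image_file=./demo/000000014439.jpg \
+                              --device=GPU \
+                              --run_mode=trt_fp16
+```

## 1.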
安装PaddleInference预测库
-- Python安装包,请从[这里](https://www.paddlepaddle.org.cn/documentation/docs/zh/install/Tables.html#whl-release) 下载带有tensorrt的安装包进行安装
+- Python安装包,请从[这里](https://paddleinference.paddlepaddle.org.cn/user_guides/download_lib.html#python) 下载带有tensorrt的安装包进行安装
- CPP预测库,请从[这里](https://www.paddlepaddle.org.cn/documentation/docs/zh/guides/05_inference_deployment/inference/build_and_install_lib_cn.html) 下载带有TensorRT编译的预测库
@@ -13,7 +13,7 @@ TensorRT是NVIDIA提出的用于统一模型部署的加速库,可以应用于
- PaddleDetection中部署预测要求TensorRT版本 > 6.0。
## 2. 导出模型
-模型导出具体请参考文档[PaddleDetection模型导出教程](../EXPORT_MODEL.md)。
+模型导出具体请参考文档[PaddleDetection模型导出教程](./EXPORT_MODEL.md)。
## 3. 开启TensorRT加速
### 3.1 配置TensorRT
@@ -43,7 +43,7 @@ TestReader:
  image_shape: [3,608,608]
...
```
-或者在导出模型时设置`-o TestReader.inputs_def.image_shape=[3,608,608]`,模型将会进行固定尺寸预测,具体请参考[PaddleDetection模型导出教程](../EXPORT_MODEL.md) 。
+或者在导出模型时设置`-o TestReader.inputs_def.image_shape=[3,608,608]`,模型将会进行固定尺寸预测,具体请参考[PaddleDetection模型导出教程](./EXPORT_MODEL.md) 。
可以通过[visualdl](https://www.paddlepaddle.org.cn/paddle/visualdl/demo/graph) 打开`model.pdmodel`文件,查看输入的第一个Tensor尺寸是否是固定的,如果不指定,尺寸会用`?`表示,如下图所示:
![img](../docs/images/input_shape.png)
diff --git a/deploy/auto_compression/README.md b/deploy/auto_compression/README.md
new file mode 100644
index 0000000000000000000000000000000000000000..4578a4643a1933224e316d94a4062ce8987ccaca
--- /dev/null
+++ b/deploy/auto_compression/README.md
@@ -0,0 +1,112 @@
+# 自动化压缩
+
+目录:
+- [1.简介](#1简介)
+- [2.Benchmark](#2Benchmark)
+- [3.开始自动压缩](#自动压缩流程)
+  - [3.1 准备环境](#31-准备环境)
+  - [3.2 准备数据集](#32-准备数据集)
+  - [3.3 准备预测模型](#33-准备预测模型)
+  - [3.4 自动压缩并产出模型](#34-自动压缩并产出模型)
+  - [3.5 测试模型精度](#35-测试模型精度)
+- [4.预测部署](#4预测部署)
+
+## 1. 简介
+本示例使用PaddleDetection中Inference部署模型进行自动化压缩。本示例使用的自动化压缩策略为量化蒸馏。
+
+
+## 2.Benchmark
+
+### PP-YOLOE
+
+| 模型 | Base mAP | 离线量化mAP | ACT量化mAP | TRT-FP32 | TRT-FP16 | TRT-INT8 | 配置文件 | 量化模型 |
+| :-------- |:-------- |:--------: | :---------------------: | :----------------: | :----------------: | :---------------: | :----------------------: | :---------------------: |
+| PP-YOLOE-l | 50.9 | - | 50.6 | 11.2ms | 7.7ms | **6.7ms** | [config](https://github.com/PaddlePaddle/PaddleDetection/tree/develop/deploy/auto_compression/configs/ppyoloe_l_qat_dis.yaml) | [Quant Model](https://bj.bcebos.com/v1/paddle-slim-models/act/ppyoloe_crn_l_300e_coco_quant.tar) |
+
+- mAP的指标均在COCO val2017数据集中评测得到,IoU=0.5:0.95。
+- PP-YOLOE-l模型在Tesla V100的GPU环境下测试,并且开启TensorRT,batch_size=1,包含NMS,测试脚本是[benchmark demo](https://github.com/PaddlePaddle/PaddleDetection/tree/release/2.4/deploy/python)。
+
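+如需直接体验量化后的模型,可参考如下命令下载并解压上表中的量化模型(示意命令):
+```shell
+wget https://bj.bcebos.com/v1/paddle-slim-models/act/ppyoloe_crn_l_300e_coco_quant.tar
+tar -xf ppyoloe_crn_l_300e_coco_quant.tar
+```
+
+## 3.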
自动压缩流程
+
+#### 3.1 准备环境
+- PaddlePaddle >= 2.3 (可从[Paddle官网](https://www.paddlepaddle.org.cn/install/quick?docurl=/documentation/docs/zh/install/pip/linux-pip.html)下载安装)
+- PaddleSlim >= 2.3
+- PaddleDet >= 2.4
+- opencv-python
+
+安装paddlepaddle:
+```shell
+# CPU
+pip install paddlepaddle
+# GPU
+pip install paddlepaddle-gpu
+```
+
+安装paddleslim:
+```shell
+pip install paddleslim
+```
+
+安装paddledet:
+```shell
+pip install paddledet
+```
+
+#### 3.2 准备数据集
+
+本案例默认以COCO数据进行自动压缩实验,如果自定义COCO数据,或者其他格式数据,请参考[数据准备文档](https://github.com/PaddlePaddle/PaddleDetection/blob/release/2.4/docs/tutorials/PrepareDataSet.md) 来准备数据。
+
+如果数据集为非COCO格式数据,请修改[configs](./configs)中reader配置文件中的Dataset字段。
+
+以PP-YOLOE模型为例,如果已经准备好数据集,请直接修改[./configs/yolo_reader.yml](./configs/yolo_reader.yml)中`EvalDataset`的`dataset_dir`字段为自己数据集路径即可。
+
+#### 3.3 准备预测模型
+
+预测模型的格式为 `model.pdmodel` 和 `model.pdiparams` 两个文件,带 `pdmodel` 后缀的是模型文件,带 `pdiparams` 后缀的是权重文件。
+
+
+根据[PaddleDetection文档](https://github.com/PaddlePaddle/PaddleDetection/blob/develop/docs/tutorials/GETTING_STARTED_cn.md#8-%E6%A8%A1%E5%9E%8B%E5%AF%BC%E5%87%BA) 导出Inference模型,具体可参考下方PP-YOLOE模型的导出示例:
+- 下载代码
+```
+git clone https://github.com/PaddlePaddle/PaddleDetection.git
+```
+- 导出预测模型
+
+PP-YOLOE-l模型(包含NMS):如需快速体验,可直接下载[PP-YOLOE-l导出模型](https://bj.bcebos.com/v1/paddle-slim-models/act/ppyoloe_crn_l_300e_coco.tar)
+```shell
+python tools/export_model.py \
+        -c configs/ppyoloe/ppyoloe_crn_l_300e_coco.yml \
+        -o weights=https://paddledet.bj.bcebos.com/models/ppyoloe_crn_l_300e_coco.pdparams \
+        trt=True
+```
+
+#### 3.4 自动压缩并产出模型
+
+蒸馏量化自动压缩示例通过run.py脚本启动,会使用接口`paddleslim.auto_compression.AutoCompression`对模型进行自动压缩。配置config文件中模型路径、蒸馏、量化和训练等部分的参数,配置完成后便可对模型进行量化和蒸馏。具体运行命令为:
+
+- 单卡训练:
+```
+export CUDA_VISIBLE_DEVICES=0
+python run.py --config_path=./configs/ppyoloe_l_qat_dis.yaml --save_dir='./output/'
+```
+
+- 多卡训练:
+```
+CUDA_VISIBLE_DEVICES=0,1,2,3 python -m paddle.distributed.launch --log_dir=log --gpus 0,1,2,3 run.py \
+          --config_path=./configs/ppyoloe_l_qat_dis.yaml --save_dir='./output/'
+```
+
+#### 3.5 测试模型精度
+
+使用eval.py脚本得到模型的mAP:
+```
+export CUDA_VISIBLE_DEVICES=0
+python eval.py --config_path=./configs/ppyoloe_l_qat_dis.yaml
+```
+
+**注意**:
+- 要测试的模型路径可以在配置文件中`model_dir`字段下进行修改。
+
+## 4.预测部署
+
+- 可以参考[PaddleDetection部署教程](https://github.com/PaddlePaddle/PaddleDetection/tree/release/2.4/deploy),GPU上量化模型开启TensorRT并设置trt_int8模式进行部署。
diff --git a/deploy/auto_compression/configs/ppyoloe_l_qat_dis.yaml b/deploy/auto_compression/configs/ppyoloe_l_qat_dis.yaml
new file mode 100644
index 0000000000000000000000000000000000000000..cd39981c1eb44cb45ab6ed1a0386dbba92eaf660
--- /dev/null
+++ b/deploy/auto_compression/configs/ppyoloe_l_qat_dis.yaml
@@ -0,0 +1,33 @@
+
+Global:
+  reader_config: configs/yolo_reader.yml
+  input_list: ['image', 'scale_factor']
+  arch: YOLO
+  Evaluation: True
+  model_dir: ./ppyoloe_crn_l_300e_coco
+  model_filename: model.pdmodel
+  params_filename: model.pdiparams
+
+Distillation:
+  alpha: 1.0
+  loss: soft_label
+
+Quantization:
+  use_pact: true
+  activation_quantize_type: 'moving_average_abs_max'
+  quantize_op_types:
+  - conv2d
+  - depthwise_conv2d
+
+TrainConfig:
+  train_iter: 5000
+  eval_iter: 1000
+  learning_rate:
+    type: CosineAnnealingDecay
+    learning_rate: 0.00003
+    T_max: 6000
+  optimizer_builder:
+    optimizer:
+      type: SGD
+    weight_decay: 4.0e-05
+
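+# 字段说明(供参考,具体以PaddleSlim自动压缩文档为准):
+#   input_list:  静态图模型的输入名列表,需与导出模型的输入一致
+#   use_pact:    是否使用PACT方法提升量化训练精度
+#   train_iter / eval_iter:训练总迭代次数与评估间隔
+#   T_max:       余弦退火学习率的周期,一般不小于train_iter
diff --git a/deploy/auto_compression/configs/yolo_reader.yml b/deploy/auto_compression/configs/yolo_reader.yml
new file mode 100644
index 0000000000000000000000000000000000000000..d1061453051e8f7408f4e605078956a8b634f13c
---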
/dev/null
+++ b/deploy/auto_compression/configs/yolo_reader.yml
@@ -0,0 +1,26 @@
+metric: COCO
+num_classes: 80
+
+# Dataset configuration
+TrainDataset:
+  !COCODataSet
+    image_dir: train2017
+    anno_path: annotations/instances_train2017.json
+    dataset_dir: dataset/coco/
+
+EvalDataset:
+  !COCODataSet
+    image_dir: val2017
+    anno_path: annotations/instances_val2017.json
+    dataset_dir: dataset/coco/
+
+worker_num: 0
+
+# preprocessing reader used in evaluation
+EvalReader:
+  sample_transforms:
+  - Decode: {}
+  - Resize: {target_size: [640, 640], keep_ratio: False, interp: 2}
+  - NormalizeImage: {mean: [0.485, 0.456, 0.406], std: [0.229, 0.224, 0.225], is_scale: True}
+  - Permute: {}
+  batch_size: 4
diff --git a/deploy/auto_compression/eval.py b/deploy/auto_compression/eval.py
new file mode 100644
index 0000000000000000000000000000000000000000..6de8aff85ce5f3cffa4119a1a3c26e318101db74
--- /dev/null
+++ b/deploy/auto_compression/eval.py
@@ -0,0 +1,163 @@
+# Copyright (c) 2022 PaddlePaddle Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+import os
+import sys
+import numpy as np
+import argparse
+import paddle
+from ppdet.core.workspace import load_config, merge_config
+from ppdet.core.workspace import create
+from ppdet.metrics import COCOMetric, VOCMetric, KeyPointTopDownCOCOEval
+from paddleslim.auto_compression.config_helpers import load_config as load_slim_config
+from post_process import PPYOLOEPostProcess
+
+
+def argsparser():
+    parser = argparse.ArgumentParser(description=__doc__)
+    parser.add_argument(
+        '--config_path',
+        type=str,
+        default=None,
+        help="path of compression strategy config.",
+        required=True)
+    parser.add_argument(
+        '--devices',
+        type=str,
+        default='gpu',
+        help="which device used to compress.")
+
+    return parser
+
+
+def reader_wrapper(reader, input_list):
+    def gen():
+        for data in reader:
+            in_dict = {}
+            if isinstance(input_list, list):
+                for input_name in input_list:
+                    in_dict[input_name] = data[input_name]
+            elif isinstance(input_list, dict):
+                for input_name in input_list.keys():
+                    in_dict[input_list[input_name]] = data[input_name]
+            yield in_dict
+
+    return gen
+
+
+def convert_numpy_data(data, metric):
+    data_all = {}
+    data_all = {k: np.array(v) for k, v in data.items()}
+    if isinstance(metric, VOCMetric):
+        for k, v in data_all.items():
+            if not isinstance(v[0], np.ndarray):
+                tmp_list = []
+                for t in v:
+                    tmp_list.append(np.array(t))
+                data_all[k] = np.array(tmp_list)
+    else:
+        data_all = {k: np.array(v) for k, v in data.items()}
+    return data_all
+
+
+def eval():
+
+    place = paddle.CUDAPlace(0) if FLAGS.devices == 'gpu' else paddle.CPUPlace()
+    exe = paddle.static.Executor(place)
+
+    val_program, feed_target_names, fetch_targets = paddle.static.load_inference_model(
+        global_config["model_dir"].rstrip('/'),
+        exe,
+        model_filename=global_config["model_filename"],
+        params_filename=global_config["params_filename"])
+    print('Loaded model from: {}'.format(global_config["model_dir"]))
+
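+    # 'input_list' maps dataloader fields to the model's feed names:
+    # a list keeps field names as-is, while a dict feeds data[k] under the
+    # renamed input input_list[k]. The metric object is built in main().
+    metric = global_config['metric']
+    for batch_id, data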
in enumerate(val_loader):
+        data_all = convert_numpy_data(data, metric)
+        data_input = {}
+        for k, v in data.items():
+            if isinstance(global_config['input_list'], list):
+                if k in global_config['input_list']:
+                    data_input[k] = np.array(v)
+            elif isinstance(global_config['input_list'], dict):
+                if k in global_config['input_list'].keys():
+                    data_input[global_config['input_list'][k]] = np.array(v)
+
+        outs = exe.run(val_program,
+                       feed=data_input,
+                       fetch_list=fetch_targets,
+                       return_numpy=False)
+        res = {}
+        if 'arch' in global_config and global_config['arch'] == 'PPYOLOE':
+            postprocess = PPYOLOEPostProcess(
+                score_threshold=0.01, nms_threshold=0.6)
+            res = postprocess(np.array(outs[0]), data_all['scale_factor'])
+        else:
+            for out in outs:
+                v = np.array(out)
+                if len(v.shape) > 1:
+                    res['bbox'] = v
+                else:
+                    res['bbox_num'] = v
+        metric.update(data_all, res)
+        if batch_id % 100 == 0:
+            print('Eval iter:', batch_id)
+    metric.accumulate()
+    metric.log()
+    metric.reset()
+
+
+def main():
+    global global_config
+    all_config = load_slim_config(FLAGS.config_path)
+    assert "Global" in all_config, "Key 'Global' not found in config file."
+    global_config = all_config["Global"]
+    reader_cfg = load_config(global_config['reader_config'])
+
+    dataset = reader_cfg['EvalDataset']
+    global val_loader
+    val_loader = create('EvalReader')(reader_cfg['EvalDataset'],
+                                      reader_cfg['worker_num'],
+                                      return_list=True)
+    metric = None
+    if reader_cfg['metric'] == 'COCO':
+        clsid2catid = {v: k for k, v in dataset.catid2clsid.items()}
+        anno_file = dataset.get_anno()
+        metric = COCOMetric(
+            anno_file=anno_file, clsid2catid=clsid2catid, IouType='bbox')
+    elif reader_cfg['metric'] == 'VOC':
+        metric = VOCMetric(
+            label_list=dataset.get_label_list(),
+            class_num=reader_cfg['num_classes'],
+            map_type=reader_cfg['map_type'])
+    elif reader_cfg['metric'] == 'KeyPointTopDownCOCOEval':
+        anno_file = dataset.get_anno()
+        metric = KeyPointTopDownCOCOEval(anno_file,
+                                         len(dataset), 17, 'output_eval')
+    else:
+        raise ValueError(
+            "metric currently only supports COCO, VOC and KeyPointTopDownCOCOEval.")
+    global_config['metric'] = metric
+
+    eval()
+
+
+if __name__ == '__main__':
+    paddle.enable_static()
+    parser = argsparser()
+    FLAGS = parser.parse_args()
+    assert FLAGS.devices in ['cpu', 'gpu', 'xpu', 'npu']
+    paddle.set_device(FLAGS.devices)
+
+    main()
diff --git a/deploy/auto_compression/post_process.py b/deploy/auto_compression/post_process.py
new file mode 100644
index 0000000000000000000000000000000000000000..eea2f019548ec288a23e37b3bd2faf24f9a98935
--- /dev/null
+++ b/deploy/auto_compression/post_process.py
@@ -0,0 +1,157 @@
+# Copyright (c) 2022 PaddlePaddle Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+import numpy as np
+import cv2
+
+
+def hard_nms(box_scores, iou_threshold, top_k=-1, candidate_size=200):
+    """
+    Args:
+        box_scores (N, 5): boxes in corner-form and probabilities.
+        iou_threshold: intersection over union threshold.
+        top_k: keep top_k results. If k <= 0, keep all the results.
+ candidate_size: only consider the candidates with the highest scores. + Returns: + picked: a list of indexes of the kept boxes + """ + scores = box_scores[:, -1] + boxes = box_scores[:, :-1] + picked = [] + indexes = np.argsort(scores) + indexes = indexes[-candidate_size:] + while len(indexes) > 0: + current = indexes[-1] + picked.append(current) + if 0 < top_k == len(picked) or len(indexes) == 1: + break + current_box = boxes[current, :] + indexes = indexes[:-1] + rest_boxes = boxes[indexes, :] + iou = iou_of( + rest_boxes, + np.expand_dims( + current_box, axis=0), ) + indexes = indexes[iou <= iou_threshold] + + return box_scores[picked, :] + + +def iou_of(boxes0, boxes1, eps=1e-5): + """Return intersection-over-union (Jaccard index) of boxes. + Args: + boxes0 (N, 4): ground truth boxes. + boxes1 (N or 1, 4): predicted boxes. + eps: a small number to avoid 0 as denominator. + Returns: + iou (N): IoU values. + """ + overlap_left_top = np.maximum(boxes0[..., :2], boxes1[..., :2]) + overlap_right_bottom = np.minimum(boxes0[..., 2:], boxes1[..., 2:]) + + overlap_area = area_of(overlap_left_top, overlap_right_bottom) + area0 = area_of(boxes0[..., :2], boxes0[..., 2:]) + area1 = area_of(boxes1[..., :2], boxes1[..., 2:]) + return overlap_area / (area0 + area1 - overlap_area + eps) + + +def area_of(left_top, right_bottom): + """Compute the areas of rectangles given two corners. + Args: + left_top (N, 2): left top corner. + right_bottom (N, 2): right bottom corner. + Returns: + area (N): return the area. + """ + hw = np.clip(right_bottom - left_top, 0.0, None) + return hw[..., 0] * hw[..., 1] + + +class PPYOLOEPostProcess(object): + """ + Args: + input_shape (int): network input image size + scale_factor (float): scale factor of ori image + """ + + def __init__(self, + score_threshold=0.4, + nms_threshold=0.5, + nms_top_k=10000, + keep_top_k=300): + self.score_threshold = score_threshold + self.nms_threshold = nms_threshold + self.nms_top_k = nms_top_k + self.keep_top_k = keep_top_k + + def _non_max_suppression(self, prediction, scale_factor): + batch_size = prediction.shape[0] + out_boxes_list = [] + box_num_list = [] + for batch_id in range(batch_size): + bboxes, confidences = prediction[batch_id][..., :4], prediction[ + batch_id][..., 4:] + # nms + picked_box_probs = [] + picked_labels = [] + for class_index in range(0, confidences.shape[1]): + probs = confidences[:, class_index] + mask = probs > self.score_threshold + probs = probs[mask] + if probs.shape[0] == 0: + continue + subset_boxes = bboxes[mask, :] + box_probs = np.concatenate( + [subset_boxes, probs.reshape(-1, 1)], axis=1) + box_probs = hard_nms( + box_probs, + iou_threshold=self.nms_threshold, + top_k=self.nms_top_k) + picked_box_probs.append(box_probs) + picked_labels.extend([class_index] * box_probs.shape[0]) + + if len(picked_box_probs) == 0: + out_boxes_list.append(np.empty((0, 4))) + + else: + picked_box_probs = np.concatenate(picked_box_probs) + # resize output boxes + picked_box_probs[:, 0] /= scale_factor[batch_id][1] + picked_box_probs[:, 2] /= scale_factor[batch_id][1] + picked_box_probs[:, 1] /= scale_factor[batch_id][0] + picked_box_probs[:, 3] /= scale_factor[batch_id][0] + + # clas score box + out_box = np.concatenate( + [ + np.expand_dims( + np.array(picked_labels), axis=-1), np.expand_dims( + picked_box_probs[:, 4], axis=-1), + picked_box_probs[:, :4] + ], + axis=1) + if out_box.shape[0] > self.keep_top_k: + out_box = out_box[out_box[:, 1].argsort()[::-1] + [:self.keep_top_k]] + out_boxes_list.append(out_box) + 
box_num_list.append(out_box.shape[0]) + + out_boxes_list = np.concatenate(out_boxes_list, axis=0) + box_num_list = np.array(box_num_list) + return out_boxes_list, box_num_list + + def __call__(self, outs, scale_factor): + out_boxes_list, box_num_list = self._non_max_suppression(outs, + scale_factor) + return {'bbox': out_boxes_list, 'bbox_num': box_num_list} diff --git a/deploy/auto_compression/run.py b/deploy/auto_compression/run.py new file mode 100644 index 0000000000000000000000000000000000000000..fe485fb8f769726f9aa32a5f2dad65a9f089816c --- /dev/null +++ b/deploy/auto_compression/run.py @@ -0,0 +1,183 @@ +# Copyright (c) 2022 PaddlePaddle Authors. All Rights Reserved. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +import os +import sys +import numpy as np +import argparse +import paddle +from ppdet.core.workspace import load_config, merge_config +from ppdet.core.workspace import create +from ppdet.metrics import COCOMetric, VOCMetric, KeyPointTopDownCOCOEval +from paddleslim.auto_compression.config_helpers import load_config as load_slim_config +from paddleslim.auto_compression import AutoCompression +from post_process import PPYOLOEPostProcess + + +def argsparser(): + parser = argparse.ArgumentParser(description=__doc__) + parser.add_argument( + '--config_path', + type=str, + default=None, + help="path of compression strategy config.", + required=True) + parser.add_argument( + '--save_dir', + type=str, + default='output', + help="directory to save compressed model.") + parser.add_argument( + '--devices', + type=str, + default='gpu', + help="which device used to compress.") + + return parser + + +def reader_wrapper(reader, input_list): + def gen(): + for data in reader: + in_dict = {} + if isinstance(input_list, list): + for input_name in input_list: + in_dict[input_name] = data[input_name] + elif isinstance(input_list, dict): + for input_name in input_list.keys(): + in_dict[input_list[input_name]] = data[input_name] + yield in_dict + + return gen + + +def convert_numpy_data(data, metric): + data_all = {} + data_all = {k: np.array(v) for k, v in data.items()} + if isinstance(metric, VOCMetric): + for k, v in data_all.items(): + if not isinstance(v[0], np.ndarray): + tmp_list = [] + for t in v: + tmp_list.append(np.array(t)) + data_all[k] = np.array(tmp_list) + else: + data_all = {k: np.array(v) for k, v in data.items()} + return data_all + + +def eval_function(exe, compiled_test_program, test_feed_names, test_fetch_list): + metric = global_config['metric'] + for batch_id, data in enumerate(val_loader): + data_all = convert_numpy_data(data, metric) + data_input = {} + for k, v in data.items(): + if isinstance(global_config['input_list'], list): + if k in test_feed_names: + data_input[k] = np.array(v) + elif isinstance(global_config['input_list'], dict): + if k in global_config['input_list'].keys(): + data_input[global_config['input_list'][k]] = np.array(v) + outs = exe.run(compiled_test_program, + feed=data_input, + fetch_list=test_fetch_list, + return_numpy=False) + res = {} + if 
'arch' in global_config and global_config['arch'] == 'PPYOLOE':
+            postprocess = PPYOLOEPostProcess(
+                score_threshold=0.01, nms_threshold=0.6)
+            res = postprocess(np.array(outs[0]), data_all['scale_factor'])
+        else:
+            for out in outs:
+                v = np.array(out)
+                if len(v.shape) > 1:
+                    res['bbox'] = v
+                else:
+                    res['bbox_num'] = v
+
+        metric.update(data_all, res)
+        if batch_id % 100 == 0:
+            print('Eval iter:', batch_id)
+    metric.accumulate()
+    metric.log()
+    map_res = metric.get_results()
+    metric.reset()
+    map_key = 'keypoint' if 'arch' in global_config and global_config[
+        'arch'] == 'keypoint' else 'bbox'
+    return map_res[map_key][0]
+
+
+def main():
+    global global_config
+    all_config = load_slim_config(FLAGS.config_path)
+    assert "Global" in all_config, "Key 'Global' not found in config file."
+    global_config = all_config["Global"]
+    reader_cfg = load_config(global_config['reader_config'])
+
+    train_loader = create('EvalReader')(reader_cfg['TrainDataset'],
+                                        reader_cfg['worker_num'],
+                                        return_list=True)
+    train_loader = reader_wrapper(train_loader, global_config['input_list'])
+
+    if 'Evaluation' in global_config.keys() and global_config[
+            'Evaluation'] and paddle.distributed.get_rank() == 0:
+        eval_func = eval_function
+        dataset = reader_cfg['EvalDataset']
+        global val_loader
+        _eval_batch_sampler = paddle.io.BatchSampler(
+            dataset, batch_size=reader_cfg['EvalReader']['batch_size'])
+        val_loader = create('EvalReader')(dataset,
+                                          reader_cfg['worker_num'],
+                                          batch_sampler=_eval_batch_sampler,
+                                          return_list=True)
+        metric = None
+        if reader_cfg['metric'] == 'COCO':
+            clsid2catid = {v: k for k, v in dataset.catid2clsid.items()}
+            anno_file = dataset.get_anno()
+            metric = COCOMetric(
+                anno_file=anno_file, clsid2catid=clsid2catid, IouType='bbox')
+        elif reader_cfg['metric'] == 'VOC':
+            metric = VOCMetric(
+                label_list=dataset.get_label_list(),
+                class_num=reader_cfg['num_classes'],
+                map_type=reader_cfg['map_type'])
+        elif reader_cfg['metric'] == 'KeyPointTopDownCOCOEval':
+            anno_file = dataset.get_anno()
+            metric = KeyPointTopDownCOCOEval(anno_file,
+                                             len(dataset), 17, 'output_eval')
+        else:
+            raise ValueError(
+                "metric currently only supports COCO, VOC and KeyPointTopDownCOCOEval.")
+        global_config['metric'] = metric
+    else:
+        eval_func = None
+
+    ac = AutoCompression(
+        model_dir=global_config["model_dir"],
+        model_filename=global_config["model_filename"],
+        params_filename=global_config["params_filename"],
+        save_dir=FLAGS.save_dir,
+        config=all_config,
+        train_dataloader=train_loader,
+        eval_callback=eval_func)
+    ac.compress()
+
+
+if __name__ == '__main__':
+    paddle.enable_static()
+    parser = argsparser()
+    FLAGS = parser.parse_args()
+    assert FLAGS.devices in ['cpu', 'gpu', 'xpu', 'npu']
+    paddle.set_device(FLAGS.devices)
+
+    main()
diff
--git a/deploy/cpp/include/config_parser.h b/deploy/cpp/include/config_parser.h index 82d103723aa134ff72449a2d0ca3735b68c86fee..1f2e381c5284bb7ce16a6b06f858a32e83290f98 100644 --- a/deploy/cpp/include/config_parser.h +++ b/deploy/cpp/include/config_parser.h @@ -120,6 +120,10 @@ class ConfigPaser { } } + if (config["mask"].IsDefined()) { + mask_ = config["mask"].as(); + } + return true; } std::string mode_; @@ -132,6 +136,7 @@ class ConfigPaser { std::vector fpn_stride_; bool use_dynamic_shape_; float conf_thresh_; + bool mask_ = false; }; } // namespace PaddleDetection diff --git a/deploy/cpp/include/keypoint_detector.h b/deploy/cpp/include/keypoint_detector.h index 55eed8f9124176cb1d1551538ff14769acd4249f..ce6aa0e0692d215fc1a704afd37c3787fe8e42ef 100644 --- a/deploy/cpp/include/keypoint_detector.h +++ b/deploy/cpp/include/keypoint_detector.h @@ -33,12 +33,6 @@ using namespace paddle_infer; namespace PaddleDetection { -// Object KeyPoint Result -struct KeyPointResult { - // Keypoints: shape(N x 3); N: number of Joints; 3: x,y,conf - std::vector keypoints; - int num_joints = -1; -}; // Visualiztion KeyPoint Result cv::Mat VisualizeKptsResult(const cv::Mat& img, diff --git a/deploy/cpp/include/keypoint_postprocess.h b/deploy/cpp/include/keypoint_postprocess.h index 4239cdf7369bdb76fa3f715c755d601b40285c5b..fa0c7d55f06db986404eb23a7df1144a22e7f33f 100644 --- a/deploy/cpp/include/keypoint_postprocess.h +++ b/deploy/cpp/include/keypoint_postprocess.h @@ -14,11 +14,14 @@ #pragma once +#include #include #include #include #include +namespace PaddleDetection { + std::vector get_3rd_point(std::vector& a, std::vector& b); std::vector get_dir(float src_point_x, float src_point_y, float rot_rad); @@ -37,7 +40,8 @@ void transform_preds(std::vector& coords, std::vector& scale, std::vector& output_size, std::vector& dim, - std::vector& target_coords); + std::vector& target_coords, + bool affine = false); void box_to_center_scale(std::vector& box, int width, @@ -51,7 +55,7 @@ void get_max_preds(float* heatmap, float* maxvals, int batchid, int joint_idx); - + void get_final_preds(std::vector& heatmap, std::vector& dim, std::vector& idxout, @@ -61,3 +65,70 @@ void get_final_preds(std::vector& heatmap, std::vector& preds, int batchid, bool DARK = true); + +// Object KeyPoint Result +struct KeyPointResult { + // Keypoints: shape(N x 3); N: number of Joints; 3: x,y,conf + std::vector keypoints; + int num_joints = -1; +}; + +class PoseSmooth { + public: + explicit PoseSmooth(const int width, + const int height, + std::string filter_type = "OneEuro", + float alpha = 0.5, + float fc_d = 0.1, + float fc_min = 0.1, + float beta = 0.1, + float thres_mult = 0.3) + : width(width), + height(height), + alpha(alpha), + fc_d(fc_d), + fc_min(fc_min), + beta(beta), + filter_type(filter_type), + thres_mult(thres_mult){}; + + // Run predictor + KeyPointResult smooth_process(KeyPointResult* result); + void PointSmooth(KeyPointResult* result, + KeyPointResult* keypoint_smoothed, + std::vector thresholds, + int index); + float OneEuroFilter(float x_cur, float x_pre, int loc); + float smoothing_factor(float te, float fc); + float ExpSmoothing(float x_cur, float x_pre, int loc = 0); + + private: + int width = 0; + int height = 0; + float alpha = 0.; + float fc_d = 1.; + float fc_min = 0.; + float beta = 1.; + float thres_mult = 1.; + std::string filter_type = "OneEuro"; + std::vector thresholds = {0.005, + 0.005, + 0.005, + 0.005, + 0.005, + 0.01, + 0.01, + 0.01, + 0.01, + 0.01, + 0.01, + 0.01, + 0.01, + 0.01, + 0.01, + 0.01, + 
0.01}; + KeyPointResult x_prev_hat; + KeyPointResult dx_prev_hat; +}; +} // namespace PaddleDetection diff --git a/deploy/cpp/include/object_detector.h b/deploy/cpp/include/object_detector.h index 0a336c33401d5d7d3e4e27a22862fd666da1a36e..47bd29362c85eafc3825d25af73694803e2a1504 100644 --- a/deploy/cpp/include/object_detector.h +++ b/deploy/cpp/include/object_detector.h @@ -25,7 +25,7 @@ #include #include -#include "paddle_inference_api.h" // NOLINT +#include "paddle_inference_api.h" // NOLINT #include "include/config_parser.h" #include "include/picodet_postprocess.h" @@ -33,29 +33,25 @@ #include "include/utils.h" using namespace paddle_infer; - namespace PaddleDetection { // Generate visualization colormap for each class std::vector GenerateColorMap(int num_class); // Visualiztion Detection Result -cv::Mat VisualizeResult( - const cv::Mat& img, - const std::vector& results, - const std::vector& lables, - const std::vector& colormap, - const bool is_rbox); +cv::Mat +VisualizeResult(const cv::Mat &img, + const std::vector &results, + const std::vector &lables, + const std::vector &colormap, const bool is_rbox); class ObjectDetector { - public: - explicit ObjectDetector(const std::string& model_dir, - const std::string& device = "CPU", - bool use_mkldnn = false, - int cpu_threads = 1, - const std::string& run_mode = "paddle", - const int batch_size = 1, - const int gpu_id = 0, +public: + explicit ObjectDetector(const std::string &model_dir, + const std::string &device = "CPU", + bool use_mkldnn = false, int cpu_threads = 1, + const std::string &run_mode = "paddle", + const int batch_size = 1, const int gpu_id = 0, const int trt_min_shape = 1, const int trt_max_shape = 1280, const int trt_opt_shape = 640, @@ -78,25 +74,22 @@ class ObjectDetector { } // Load Paddle inference model - void LoadModel(const std::string& model_dir, - const int batch_size = 1, - const std::string& run_mode = "paddle"); + void LoadModel(const std::string &model_dir, const int batch_size = 1, + const std::string &run_mode = "paddle"); // Run predictor - void Predict(const std::vector imgs, - const double threshold = 0.5, - const int warmup = 0, - const int repeats = 1, - std::vector* result = nullptr, - std::vector* bbox_num = nullptr, - std::vector* times = nullptr); + void Predict(const std::vector imgs, const double threshold = 0.5, + const int warmup = 0, const int repeats = 1, + std::vector *result = nullptr, + std::vector *bbox_num = nullptr, + std::vector *times = nullptr); // Get Model Label list - const std::vector& GetLabelList() const { + const std::vector &GetLabelList() const { return config_.label_list_; } - private: +private: std::string device_ = "CPU"; int gpu_id_ = 0; int cpu_math_library_num_threads_ = 1; @@ -108,13 +101,18 @@ class ObjectDetector { int trt_opt_shape_ = 640; bool trt_calib_mode_ = false; // Preprocess image and copy data to input buffer - void Preprocess(const cv::Mat& image_mat); + void Preprocess(const cv::Mat &image_mat); // Postprocess result void Postprocess(const std::vector mats, - std::vector* result, - std::vector bbox_num, - std::vector output_data_, - bool is_rbox); + std::vector *result, + std::vector bbox_num, std::vector output_data_, + std::vector output_mask_data_, bool is_rbox); + + void SOLOv2Postprocess( + const std::vector mats, std::vector *result, + std::vector *bbox_num, std::vector out_bbox_num_data_, + std::vector out_label_data_, std::vector out_score_data_, + std::vector out_global_mask_data_, float threshold = 0.5); std::shared_ptr predictor_; 
Preprocessor preprocessor_; @@ -123,4 +121,4 @@ class ObjectDetector { ConfigPaser config_; }; -} // namespace PaddleDetection +} // namespace PaddleDetection diff --git a/deploy/cpp/include/preprocess_op.h b/deploy/cpp/include/preprocess_op.h index 33d7300b8fd84287cca91e86214084acec781030..a54bc2afb8aacbc55241b866ba41acc00491e4f3 100644 --- a/deploy/cpp/include/preprocess_op.h +++ b/deploy/cpp/include/preprocess_op.h @@ -74,7 +74,7 @@ class NormalizeImage : public PreprocessOp { // CHW or HWC std::vector mean_; std::vector scale_; - bool is_scale_; + bool is_scale_ = true; }; class Permute : public PreprocessOp { @@ -143,6 +143,38 @@ class TopDownEvalAffine : public PreprocessOp { std::vector trainsize_; }; +class WarpAffine : public PreprocessOp { + public: + virtual void Init(const YAML::Node& item) { + input_h_ = item["input_h"].as(); + input_w_ = item["input_w"].as(); + keep_res_ = item["keep_res"].as(); + } + + virtual void Run(cv::Mat* im, ImageBlob* data); + + private: + int input_h_; + int input_w_; + int interp_ = 1; + bool keep_res_ = true; + int pad_ = 31; +}; + +class Pad : public PreprocessOp { + public: + virtual void Init(const YAML::Node& item) { + size_ = item["size"].as>(); + fill_value_ = item["fill_value"].as>(); + } + + virtual void Run(cv::Mat* im, ImageBlob* data); + + private: + std::vector size_; + std::vector fill_value_; +}; + void CropImg(cv::Mat& img, cv::Mat& crop_img, std::vector& area, @@ -183,6 +215,10 @@ class Preprocessor { return std::make_shared(); } else if (name == "TopDownEvalAffine") { return std::make_shared(); + } else if (name == "WarpAffine") { + return std::make_shared(); + }else if (name == "Pad") { + return std::make_shared(); } std::cerr << "can not find function of OP: " << name << " and return: nullptr" << std::endl; diff --git a/deploy/cpp/include/utils.h b/deploy/cpp/include/utils.h index 3802e1267176a050402d1fdf742e54a79f33ffb9..b41db0dacff17339ffcac591b7825cec09d3663d 100644 --- a/deploy/cpp/include/utils.h +++ b/deploy/cpp/include/utils.h @@ -14,13 +14,13 @@ #pragma once -#include -#include -#include -#include +#include #include +#include #include -#include +#include +#include +#include namespace PaddleDetection { @@ -32,8 +32,10 @@ struct ObjectResult { int class_id; // Confidence of detected object float confidence; + // Mask of detected object + std::vector mask; }; void nms(std::vector &input_boxes, float nms_threshold); -} // namespace PaddleDetection \ No newline at end of file +} // namespace PaddleDetection diff --git a/deploy/cpp/src/keypoint_postprocess.cc b/deploy/cpp/src/keypoint_postprocess.cc index 52ac8d3d36ec384bf3eb81c56356c6639a61433f..eb692b0a78bcf48ac96aa45b671300b9ff2db400 100644 --- a/deploy/cpp/src/keypoint_postprocess.cc +++ b/deploy/cpp/src/keypoint_postprocess.cc @@ -11,11 +11,13 @@ // WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. // See the License for the specific language governing permissions and // limitations under the License. 
-#include #include "include/keypoint_postprocess.h" +#include #define PI 3.1415926535 #define HALF_CIRCLE_DEGREE 180 +namespace PaddleDetection { + cv::Point2f get_3rd_point(cv::Point2f& a, cv::Point2f& b) { cv::Point2f direct{a.x - b.x, a.y - b.y}; return cv::Point2f(a.x - direct.y, a.y + direct.x); @@ -52,7 +54,7 @@ void get_affine_transform(std::vector& center, float dst_h = static_cast(output_size[1]); float rot_rad = rot * PI / HALF_CIRCLE_DEGREE; std::vector src_dir = get_dir(-0.5 * src_w, 0, rot_rad); - std::vector dst_dir{-0.5 * dst_w, 0.0}; + std::vector dst_dir{-0.5f * dst_w, 0.0}; cv::Point2f srcPoint2f[3], dstPoint2f[3]; srcPoint2f[0] = cv::Point2f(center[0], center[1]); srcPoint2f[1] = cv::Point2f(center[0] + src_dir[0], center[1] + src_dir[1]); @@ -74,11 +76,26 @@ void transform_preds(std::vector& coords, std::vector& scale, std::vector& output_size, std::vector& dim, - std::vector& target_coords) { - cv::Mat trans(2, 3, CV_64FC1); - get_affine_transform(center, scale, 0, output_size, trans, 1); - for (int p = 0; p < dim[1]; ++p) { - affine_tranform(coords[p * 2], coords[p * 2 + 1], trans, target_coords, p); + std::vector& target_coords, + bool affine) { + if (affine) { + cv::Mat trans(2, 3, CV_64FC1); + get_affine_transform(center, scale, 0, output_size, trans, 1); + for (int p = 0; p < dim[1]; ++p) { + affine_tranform( + coords[p * 2], coords[p * 2 + 1], trans, target_coords, p); + } + } else { + float heat_w = static_cast(output_size[0]); + float heat_h = static_cast(output_size[1]); + float x_scale = scale[0] / heat_w; + float y_scale = scale[1] / heat_h; + float offset_x = center[0] - scale[0] / 2.; + float offset_y = center[1] - scale[1] / 2.; + for (int i = 0; i < dim[1]; i++) { + target_coords[i * 3 + 1] = x_scale * coords[i * 2] + offset_x; + target_coords[i * 3 + 2] = y_scale * coords[i * 2 + 1] + offset_y; + } } } @@ -111,10 +128,10 @@ void get_max_preds(float* heatmap, void dark_parse(std::vector& heatmap, std::vector& dim, std::vector& coords, - int px, - int py, + int px, + int py, int index, - int ch){ + int ch) { /*DARK postpocessing, Zhang et al. Distribution-Aware Coordinate Representation for Human Pose Estimation (CVPR 2020). 
1) offset = - hassian.inv() * derivative @@ -124,16 +141,17 @@ void dark_parse(std::vector& heatmap, 5) hassian = Mat([[dxx, dxy], [dxy, dyy]]) */ std::vector::const_iterator first1 = heatmap.begin() + index; - std::vector::const_iterator last1 = heatmap.begin() + index + dim[2] * dim[3]; + std::vector::const_iterator last1 = + heatmap.begin() + index + dim[2] * dim[3]; std::vector heatmap_ch(first1, last1); - cv::Mat heatmap_mat = cv::Mat(heatmap_ch).reshape(0,dim[2]); + cv::Mat heatmap_mat = cv::Mat(heatmap_ch).reshape(0, dim[2]); heatmap_mat.convertTo(heatmap_mat, CV_32FC1); cv::GaussianBlur(heatmap_mat, heatmap_mat, cv::Size(3, 3), 0, 0); - heatmap_mat = heatmap_mat.reshape(1,1); - heatmap_ch = std::vector(heatmap_mat.reshape(1,1)); + heatmap_mat = heatmap_mat.reshape(1, 1); + heatmap_ch = std::vector(heatmap_mat.reshape(1, 1)); float epsilon = 1e-10; - //sample heatmap to get values in around target location + // sample heatmap to get values in around target location float xy = log(fmax(heatmap_ch[py * dim[3] + px], epsilon)); float xr = log(fmax(heatmap_ch[py * dim[3] + px + 1], epsilon)); float xl = log(fmax(heatmap_ch[py * dim[3] + px - 1], epsilon)); @@ -149,22 +167,23 @@ void dark_parse(std::vector& heatmap, float xlyu = log(fmax(heatmap_ch[(py + 1) * dim[3] + px - 1], epsilon)); float xlyd = log(fmax(heatmap_ch[(py - 1) * dim[3] + px - 1], epsilon)); - //compute dx/dy and dxx/dyy with sampled values + // compute dx/dy and dxx/dyy with sampled values float dx = 0.5 * (xr - xl); float dy = 0.5 * (yu - yd); - float dxx = 0.25 * (xr2 - 2*xy + xl2); + float dxx = 0.25 * (xr2 - 2 * xy + xl2); float dxy = 0.25 * (xryu - xryd - xlyu + xlyd); - float dyy = 0.25 * (yu2 - 2*xy + yd2); + float dyy = 0.25 * (yu2 - 2 * xy + yd2); - //finally get offset by derivative and hassian, which combined by dx/dy and dxx/dyy - if(dxx * dyy - dxy*dxy != 0){ + // finally get offset by derivative and hassian, which combined by dx/dy and + // dxx/dyy + if (dxx * dyy - dxy * dxy != 0) { float M[2][2] = {dxx, dxy, dxy, dyy}; float D[2] = {dx, dy}; - cv::Mat hassian(2,2,CV_32F,M); - cv::Mat derivative(2,1,CV_32F,D); - cv::Mat offset = - hassian.inv() * derivative; - coords[ch * 2] += offset.at(0,0); - coords[ch * 2 + 1] += offset.at(1,0); + cv::Mat hassian(2, 2, CV_32F, M); + cv::Mat derivative(2, 1, CV_32F, D); + cv::Mat offset = -hassian.inv() * derivative; + coords[ch * 2] += offset.at(0, 0); + coords[ch * 2 + 1] += offset.at(1, 0); } } @@ -193,18 +212,18 @@ void get_final_preds(std::vector& heatmap, int px = int(coords[j * 2] + 0.5); int py = int(coords[j * 2 + 1] + 0.5); - if(DARK && px > 1 && px < heatmap_width - 2){ + if (DARK && px > 1 && px < heatmap_width - 2 && py > 1 && + py < heatmap_height - 2) { dark_parse(heatmap, dim, coords, px, py, index, j); - } - else{ + } else { if (px > 0 && px < heatmap_width - 1) { float diff_x = heatmap[index + py * dim[3] + px + 1] - - heatmap[index + py * dim[3] + px - 1]; + heatmap[index + py * dim[3] + px - 1]; coords[j * 2] += diff_x > 0 ? 1 : -1 * 0.25; } if (py > 0 && py < heatmap_height - 1) { float diff_y = heatmap[index + (py + 1) * dim[3] + px] - - heatmap[index + (py - 1) * dim[3] + px]; + heatmap[index + (py - 1) * dim[3] + px]; coords[j * 2 + 1] += diff_y > 0 ? 
1 : -1 * 0.25; } } @@ -213,3 +232,85 @@ void get_final_preds(std::vector& heatmap, std::vector img_size{heatmap_width, heatmap_height}; transform_preds(coords, center, scale, img_size, dim, preds); } + +// Run predictor +KeyPointResult PoseSmooth::smooth_process(KeyPointResult* result) { + KeyPointResult keypoint_smoothed = *result; + if (this->x_prev_hat.num_joints == -1) { + this->x_prev_hat = *result; + this->dx_prev_hat = *result; + std::fill(dx_prev_hat.keypoints.begin(), dx_prev_hat.keypoints.end(), 0.); + return keypoint_smoothed; + } else { + for (int i = 0; i < result->num_joints; i++) { + this->PointSmooth(result, &keypoint_smoothed, this->thresholds, i); + } + return keypoint_smoothed; + } +} + +void PoseSmooth::PointSmooth(KeyPointResult* result, + KeyPointResult* keypoint_smoothed, + std::vector thresholds, + int index) { + float distance = sqrt(pow((result->keypoints[index * 3 + 1] - + this->x_prev_hat.keypoints[index * 3 + 1]) / + this->width, + 2) + + pow((result->keypoints[index * 3 + 2] - + this->x_prev_hat.keypoints[index * 3 + 2]) / + this->height, + 2)); + if (distance < thresholds[index] * this->thres_mult) { + keypoint_smoothed->keypoints[index * 3 + 1] = + this->x_prev_hat.keypoints[index * 3 + 1]; + keypoint_smoothed->keypoints[index * 3 + 2] = + this->x_prev_hat.keypoints[index * 3 + 2]; + } else { + if (this->filter_type == "OneEuro") { + keypoint_smoothed->keypoints[index * 3 + 1] = + this->OneEuroFilter(result->keypoints[index * 3 + 1], + this->x_prev_hat.keypoints[index * 3 + 1], + index * 3 + 1); + keypoint_smoothed->keypoints[index * 3 + 2] = + this->OneEuroFilter(result->keypoints[index * 3 + 2], + this->x_prev_hat.keypoints[index * 3 + 2], + index * 3 + 2); + } else { + keypoint_smoothed->keypoints[index * 3 + 1] = + this->ExpSmoothing(result->keypoints[index * 3 + 1], + this->x_prev_hat.keypoints[index * 3 + 1], + index * 3 + 1); + keypoint_smoothed->keypoints[index * 3 + 2] = + this->ExpSmoothing(result->keypoints[index * 3 + 2], + this->x_prev_hat.keypoints[index * 3 + 2], + index * 3 + 2); + } + } + return; +} + +float PoseSmooth::OneEuroFilter(float x_cur, float x_pre, int loc) { + float te = 1.0; + this->alpha = this->smoothing_factor(te, this->fc_d); + float dx_cur = (x_cur - x_pre) / te; + float dx_cur_hat = + this->ExpSmoothing(dx_cur, this->dx_prev_hat.keypoints[loc]); + + float fc = this->fc_min + this->beta * abs(dx_cur_hat); + this->alpha = this->smoothing_factor(te, fc); + float x_cur_hat = this->ExpSmoothing(x_cur, x_pre); + this->x_prev_hat.keypoints[loc] = x_cur_hat; + this->dx_prev_hat.keypoints[loc] = dx_cur_hat; + return x_cur_hat; +} + +float PoseSmooth::smoothing_factor(float te, float fc) { + float r = 2 * PI * fc * te; + return r / (r + 1); +} + +float PoseSmooth::ExpSmoothing(float x_cur, float x_pre, int loc) { + return this->alpha * x_cur + (1 - this->alpha) * x_pre; +} +} // namespace PaddleDetection diff --git a/deploy/cpp/src/main_keypoint.cc b/deploy/cpp/src/main_keypoint.cc index 7701d5ebb5d1dcbd6431917e86ce6169e982a155..da333f6ebf1aa7feafe12c3b2d5bf87654764fe1 100644 --- a/deploy/cpp/src/main_keypoint.cc +++ b/deploy/cpp/src/main_keypoint.cc @@ -219,6 +219,8 @@ void PredictVideo(const std::string& video_path, printf("create video writer failed!\n"); return; } + PaddleDetection::PoseSmooth smoother = + PaddleDetection::PoseSmooth(video_width, video_height); std::vector result; std::vector bbox_num; @@ -307,6 +309,13 @@ void PredictVideo(const std::string& video_path, scale_bs.clear(); } } + + if (result_kpts.size() == 1) { + 
for (int i = 0; i < result_kpts.size(); i++) { + result_kpts[i] = smoother.smooth_process(&(result_kpts[i])); + } + } + cv::Mat out_im = VisualizeKptsResult(frame, result_kpts, colormap_kpts); video_out.write(out_im); } else { diff --git a/deploy/cpp/src/object_detector.cc b/deploy/cpp/src/object_detector.cc index a99fcd515337e72ff59a09c7eeaa12072a774cc1..d4f2ceb5d7c07142e51e2b0008148e5d90b55adc 100644 --- a/deploy/cpp/src/object_detector.cc +++ b/deploy/cpp/src/object_detector.cc @@ -15,16 +15,15 @@ // for setprecision #include #include -#include "include/object_detector.h" -using namespace paddle_infer; +#include "include/object_detector.h" namespace PaddleDetection { // Load Model and create model predictor -void ObjectDetector::LoadModel(const std::string& model_dir, +void ObjectDetector::LoadModel(const std::string &model_dir, const int batch_size, - const std::string& run_mode) { + const std::string &run_mode) { paddle_infer::Config config; std::string prog_file = model_dir + OS_PATH_SEP + "model.pdmodel"; std::string params_file = model_dir + OS_PATH_SEP + "model.pdiparams"; @@ -42,27 +41,22 @@ void ObjectDetector::LoadModel(const std::string& model_dir, } else if (run_mode == "trt_int8") { precision = paddle_infer::Config::Precision::kInt8; } else { - printf( - "run_mode should be 'paddle', 'trt_fp32', 'trt_fp16' or " - "'trt_int8'"); + printf("run_mode should be 'paddle', 'trt_fp32', 'trt_fp16' or " + "'trt_int8'"); } // set tensorrt - config.EnableTensorRtEngine(1 << 30, - batch_size, - this->min_subgraph_size_, - precision, - false, - this->trt_calib_mode_); + config.EnableTensorRtEngine(1 << 30, batch_size, this->min_subgraph_size_, + precision, false, this->trt_calib_mode_); // set use dynamic shape if (this->use_dynamic_shape_) { - // set DynamicShsape for image tensor + // set DynamicShape for image tensor const std::vector min_input_shape = { - 1, 3, this->trt_min_shape_, this->trt_min_shape_}; + batch_size, 3, this->trt_min_shape_, this->trt_min_shape_}; const std::vector max_input_shape = { - 1, 3, this->trt_max_shape_, this->trt_max_shape_}; + batch_size, 3, this->trt_max_shape_, this->trt_max_shape_}; const std::vector opt_input_shape = { - 1, 3, this->trt_opt_shape_, this->trt_opt_shape_}; + batch_size, 3, this->trt_opt_shape_, this->trt_opt_shape_}; const std::map> map_min_input_shape = { {"image", min_input_shape}}; const std::map> map_max_input_shape = { @@ -70,8 +64,8 @@ void ObjectDetector::LoadModel(const std::string& model_dir, const std::map> map_opt_input_shape = { {"image", opt_input_shape}}; - config.SetTRTDynamicShapeInfo( - map_min_input_shape, map_max_input_shape, map_opt_input_shape); + config.SetTRTDynamicShapeInfo(map_min_input_shape, map_max_input_shape, + map_opt_input_shape); std::cout << "TensorRT dynamic shape enabled" << std::endl; } } @@ -96,13 +90,14 @@ void ObjectDetector::LoadModel(const std::string& model_dir, } // Visualiztion MaskDetector results -cv::Mat VisualizeResult( - const cv::Mat& img, - const std::vector& results, - const std::vector& lables, - const std::vector& colormap, - const bool is_rbox = false) { +cv::Mat +VisualizeResult(const cv::Mat &img, + const std::vector &results, + const std::vector &lables, + const std::vector &colormap, const bool is_rbox = false) { cv::Mat vis_img = img.clone(); + int img_h = vis_img.rows; + int img_w = vis_img.cols; for (int i = 0; i < results.size(); ++i) { // Configure color and text size std::ostringstream oss; @@ -136,30 +131,45 @@ cv::Mat VisualizeResult( cv::Rect roi = 
cv::Rect(results[i].rect[0], results[i].rect[1], w, h); // Draw roi object, text, and background cv::rectangle(vis_img, roi, roi_color, 2); + + // Draw mask + std::vector mask_v = results[i].mask; + if (mask_v.size() > 0) { + cv::Mat mask = cv::Mat(img_h, img_w, CV_32S); + std::memcpy(mask.data, mask_v.data(), mask_v.size() * sizeof(int)); + + cv::Mat colored_img = vis_img.clone(); + + std::vector contours; + cv::Mat hierarchy; + mask.convertTo(mask, CV_8U); + cv::findContours(mask, contours, hierarchy, cv::RETR_CCOMP, + cv::CHAIN_APPROX_SIMPLE); + cv::drawContours(colored_img, contours, -1, roi_color, -1, cv::LINE_8, + hierarchy, 100); + + cv::Mat debug_roi = vis_img; + colored_img = 0.4 * colored_img + 0.6 * vis_img; + colored_img.copyTo(vis_img, mask); + } } origin.x = results[i].rect[0]; origin.y = results[i].rect[1]; // Configure text background - cv::Rect text_back = cv::Rect(results[i].rect[0], - results[i].rect[1] - text_size.height, - text_size.width, - text_size.height); + cv::Rect text_back = + cv::Rect(results[i].rect[0], results[i].rect[1] - text_size.height, + text_size.width, text_size.height); // Draw text, and background cv::rectangle(vis_img, text_back, roi_color, -1); - cv::putText(vis_img, - text, - origin, - font_face, - font_scale, - cv::Scalar(255, 255, 255), - thickness); + cv::putText(vis_img, text, origin, font_face, font_scale, + cv::Scalar(255, 255, 255), thickness); } return vis_img; } -void ObjectDetector::Preprocess(const cv::Mat& ori_im) { +void ObjectDetector::Preprocess(const cv::Mat &ori_im) { // Clone the image : keep the original mat for postprocess cv::Mat im = ori_im.clone(); cv::cvtColor(im, im, cv::COLOR_BGR2RGB); @@ -168,20 +178,21 @@ void ObjectDetector::Preprocess(const cv::Mat& ori_im) { void ObjectDetector::Postprocess( const std::vector mats, - std::vector* result, - std::vector bbox_num, - std::vector output_data_, - bool is_rbox = false) { + std::vector *result, + std::vector bbox_num, std::vector output_data_, + std::vector output_mask_data_, bool is_rbox = false) { result->clear(); int start_idx = 0; + int total_num = std::accumulate(bbox_num.begin(), bbox_num.end(), 0); + int out_mask_dim = -1; + if (config_.mask_) { + out_mask_dim = output_mask_data_.size() / total_num; + } + for (int im_id = 0; im_id < mats.size(); im_id++) { cv::Mat raw_mat = mats[im_id]; int rh = 1; int rw = 1; - if (config_.arch_ == "Face") { - rh = raw_mat.rows; - rw = raw_mat.cols; - } for (int j = start_idx; j < start_idx + bbox_num[im_id]; j++) { if (is_rbox) { // Class id @@ -218,6 +229,17 @@ void ObjectDetector::Postprocess( result_item.rect = {xmin, ymin, xmax, ymax}; result_item.class_id = class_id; result_item.confidence = score; + + if (config_.mask_) { + std::vector mask; + for (int k = 0; k < out_mask_dim; ++k) { + if (output_mask_data_[k + j * out_mask_dim] > -1) { + mask.push_back(output_mask_data_[k + j * out_mask_dim]); + } + } + result_item.mask = mask; + } + result->push_back(result_item); } } @@ -225,13 +247,85 @@ void ObjectDetector::Postprocess( } } +// This function is to convert output result from SOLOv2 to class ObjectResult +void ObjectDetector::SOLOv2Postprocess( + const std::vector mats, std::vector *result, + std::vector *bbox_num, std::vector out_bbox_num_data_, + std::vector out_label_data_, std::vector out_score_data_, + std::vector out_global_mask_data_, float threshold) { + + for (int im_id = 0; im_id < mats.size(); im_id++) { + cv::Mat mat = mats[im_id]; + + int valid_bbox_count = 0; + for (int bbox_id = 0; bbox_id < 
out_bbox_num_data_[im_id]; ++bbox_id) { + if (out_score_data_[bbox_id] >= threshold) { + ObjectResult result_item; + result_item.class_id = out_label_data_[bbox_id]; + result_item.confidence = out_score_data_[bbox_id]; + std::vector global_mask; + + for (int k = 0; k < mat.rows * mat.cols; ++k) { + global_mask.push_back(static_cast( + out_global_mask_data_[k + bbox_id * mat.rows * mat.cols])); + } + + // find minimize bounding box from mask + cv::Mat mask(mat.rows, mat.cols, CV_32SC1); + std::memcpy(mask.data, global_mask.data(), + global_mask.size() * sizeof(int)); + + cv::Mat mask_fp; + cv::Mat rowSum; + cv::Mat colSum; + std::vector sum_of_row(mat.rows); + std::vector sum_of_col(mat.cols); + + mask.convertTo(mask_fp, CV_32FC1); + cv::reduce(mask_fp, colSum, 0, CV_REDUCE_SUM, CV_32FC1); + cv::reduce(mask_fp, rowSum, 1, CV_REDUCE_SUM, CV_32FC1); + + for (int row_id = 0; row_id < mat.rows; ++row_id) { + sum_of_row[row_id] = rowSum.at(row_id, 0); + } + + for (int col_id = 0; col_id < mat.cols; ++col_id) { + sum_of_col[col_id] = colSum.at(0, col_id); + } + + auto it = std::find_if(sum_of_row.begin(), sum_of_row.end(), + [](int x) { return x > 0.5; }); + int y1 = std::distance(sum_of_row.begin(), it); + + auto it2 = std::find_if(sum_of_col.begin(), sum_of_col.end(), + [](int x) { return x > 0.5; }); + int x1 = std::distance(sum_of_col.begin(), it2); + + auto rit = std::find_if(sum_of_row.rbegin(), sum_of_row.rend(), + [](int x) { return x > 0.5; }); + int y2 = std::distance(rit, sum_of_row.rend()); + + auto rit2 = std::find_if(sum_of_col.rbegin(), sum_of_col.rend(), + [](int x) { return x > 0.5; }); + int x2 = std::distance(rit2, sum_of_col.rend()); + + result_item.rect = {x1, y1, x2, y2}; + result_item.mask = global_mask; + + result->push_back(result_item); + valid_bbox_count++; + } + } + bbox_num->push_back(valid_bbox_count); + } +} + void ObjectDetector::Predict(const std::vector imgs, - const double threshold, - const int warmup, + const double threshold, const int warmup, const int repeats, - std::vector* result, - std::vector* bbox_num, - std::vector* times) { + std::vector *result, + std::vector *bbox_num, + std::vector *times) { auto preprocess_start = std::chrono::steady_clock::now(); int batch_size = imgs.size(); @@ -239,8 +333,14 @@ void ObjectDetector::Predict(const std::vector imgs, std::vector in_data_all; std::vector im_shape_all(batch_size * 2); std::vector scale_factor_all(batch_size * 2); - std::vector output_data_list_; + std::vector output_data_list_; std::vector out_bbox_num_data_; + std::vector out_mask_data_; + + // these parameters are for SOLOv2 output + std::vector out_score_data_; + std::vector out_global_mask_data_; + std::vector out_label_data_; // in_net img for each batch std::vector in_net_img_all(batch_size); @@ -255,9 +355,8 @@ void ObjectDetector::Predict(const std::vector imgs, scale_factor_all[bs_idx * 2] = inputs_.scale_factor_[0]; scale_factor_all[bs_idx * 2 + 1] = inputs_.scale_factor_[1]; - // TODO: reduce cost time - in_data_all.insert( - in_data_all.end(), inputs_.im_data_.begin(), inputs_.im_data_.end()); + in_data_all.insert(in_data_all.end(), inputs_.im_data_.begin(), + inputs_.im_data_.end()); // collect in_net img in_net_img_all[bs_idx] = inputs_.in_net_im_; @@ -276,10 +375,10 @@ void ObjectDetector::Predict(const std::vector imgs, pad_img.convertTo(pad_img, CV_32FC3); std::vector pad_data; pad_data.resize(rc * rh * rw); - float* base = pad_data.data(); + float *base = pad_data.data(); for (int i = 0; i < rc; ++i) { - cv::extractChannel( - 
pad_img, cv::Mat(rh, rw, CV_32FC1, base + i * rh * rw), i); + cv::extractChannel(pad_img, + cv::Mat(rh, rw, CV_32FC1, base + i * rh * rw), i); } in_data_all.insert(in_data_all.end(), pad_data.begin(), pad_data.end()); } @@ -290,7 +389,7 @@ void ObjectDetector::Predict(const std::vector imgs, auto preprocess_end = std::chrono::steady_clock::now(); // Prepare input tensor auto input_names = predictor_->GetInputNames(); - for (const auto& tensor_name : input_names) { + for (const auto &tensor_name : input_names) { auto in_tensor = predictor_->GetInputHandle(tensor_name); if (tensor_name == "image") { int rh = inputs_.in_net_shape_[0]; @@ -312,52 +411,118 @@ void ObjectDetector::Predict(const std::vector imgs, bool is_rbox = false; int reg_max = 7; int num_class = 80; - // warmup - for (int i = 0; i < warmup; i++) { - predictor_->Run(); - // Get output tensor - auto output_names = predictor_->GetOutputNames(); - for (int j = 0; j < output_names.size(); j++) { - auto output_tensor = predictor_->GetOutputHandle(output_names[j]); - std::vector output_shape = output_tensor->shape(); - int out_num = std::accumulate( - output_shape.begin(), output_shape.end(), 1, std::multiplies()); - if (output_tensor->type() == paddle_infer::DataType::INT32) { - out_bbox_num_data_.resize(out_num); - output_tensor->CopyToCpu(out_bbox_num_data_.data()); - } else { - std::vector out_data; - out_data.resize(out_num); - output_tensor->CopyToCpu(out_data.data()); - out_tensor_list.push_back(out_data); + + auto inference_start = std::chrono::steady_clock::now(); + if (config_.arch_ == "SOLOv2") { + // warmup + for (int i = 0; i < warmup; i++) { + predictor_->Run(); + // Get output tensor + auto output_names = predictor_->GetOutputNames(); + for (int j = 0; j < output_names.size(); j++) { + auto output_tensor = predictor_->GetOutputHandle(output_names[j]); + std::vector output_shape = output_tensor->shape(); + int out_num = std::accumulate(output_shape.begin(), output_shape.end(), + 1, std::multiplies()); + if (j == 0) { + out_bbox_num_data_.resize(out_num); + output_tensor->CopyToCpu(out_bbox_num_data_.data()); + } else if (j == 1) { + out_label_data_.resize(out_num); + output_tensor->CopyToCpu(out_label_data_.data()); + } else if (j == 2) { + out_score_data_.resize(out_num); + output_tensor->CopyToCpu(out_score_data_.data()); + } else if (config_.mask_ && (j == 3)) { + out_global_mask_data_.resize(out_num); + output_tensor->CopyToCpu(out_global_mask_data_.data()); + } } } - } - auto inference_start = std::chrono::steady_clock::now(); - for (int i = 0; i < repeats; i++) { - predictor_->Run(); - // Get output tensor - out_tensor_list.clear(); - output_shape_list.clear(); - auto output_names = predictor_->GetOutputNames(); - for (int j = 0; j < output_names.size(); j++) { - auto output_tensor = predictor_->GetOutputHandle(output_names[j]); - std::vector output_shape = output_tensor->shape(); - int out_num = std::accumulate( - output_shape.begin(), output_shape.end(), 1, std::multiplies()); - output_shape_list.push_back(output_shape); - if (output_tensor->type() == paddle_infer::DataType::INT32) { - out_bbox_num_data_.resize(out_num); - output_tensor->CopyToCpu(out_bbox_num_data_.data()); - } else { - std::vector out_data; - out_data.resize(out_num); - output_tensor->CopyToCpu(out_data.data()); - out_tensor_list.push_back(out_data); + inference_start = std::chrono::steady_clock::now(); + for (int i = 0; i < repeats; i++) { + predictor_->Run(); + // Get output tensor + out_tensor_list.clear(); + output_shape_list.clear(); + 
auto output_names = predictor_->GetOutputNames(); + for (int j = 0; j < output_names.size(); j++) { + auto output_tensor = predictor_->GetOutputHandle(output_names[j]); + std::vector output_shape = output_tensor->shape(); + int out_num = std::accumulate(output_shape.begin(), output_shape.end(), + 1, std::multiplies()); + output_shape_list.push_back(output_shape); + if (j == 0) { + out_bbox_num_data_.resize(out_num); + output_tensor->CopyToCpu(out_bbox_num_data_.data()); + } else if (j == 1) { + out_label_data_.resize(out_num); + output_tensor->CopyToCpu(out_label_data_.data()); + } else if (j == 2) { + out_score_data_.resize(out_num); + output_tensor->CopyToCpu(out_score_data_.data()); + } else if (config_.mask_ && (j == 3)) { + out_global_mask_data_.resize(out_num); + output_tensor->CopyToCpu(out_global_mask_data_.data()); + } + } + } + } else { + // warmup + for (int i = 0; i < warmup; i++) { + predictor_->Run(); + // Get output tensor + auto output_names = predictor_->GetOutputNames(); + for (int j = 0; j < output_names.size(); j++) { + auto output_tensor = predictor_->GetOutputHandle(output_names[j]); + std::vector output_shape = output_tensor->shape(); + int out_num = std::accumulate(output_shape.begin(), output_shape.end(), + 1, std::multiplies()); + if (config_.mask_ && (j == 2)) { + out_mask_data_.resize(out_num); + output_tensor->CopyToCpu(out_mask_data_.data()); + } else if (output_tensor->type() == paddle_infer::DataType::INT32) { + out_bbox_num_data_.resize(out_num); + output_tensor->CopyToCpu(out_bbox_num_data_.data()); + } else { + std::vector out_data; + out_data.resize(out_num); + output_tensor->CopyToCpu(out_data.data()); + out_tensor_list.push_back(out_data); + } + } + } + + inference_start = std::chrono::steady_clock::now(); + for (int i = 0; i < repeats; i++) { + predictor_->Run(); + // Get output tensor + out_tensor_list.clear(); + output_shape_list.clear(); + auto output_names = predictor_->GetOutputNames(); + for (int j = 0; j < output_names.size(); j++) { + auto output_tensor = predictor_->GetOutputHandle(output_names[j]); + std::vector output_shape = output_tensor->shape(); + int out_num = std::accumulate(output_shape.begin(), output_shape.end(), + 1, std::multiplies()); + output_shape_list.push_back(output_shape); + if (config_.mask_ && (j == 2)) { + out_mask_data_.resize(out_num); + output_tensor->CopyToCpu(out_mask_data_.data()); + } else if (output_tensor->type() == paddle_infer::DataType::INT32) { + out_bbox_num_data_.resize(out_num); + output_tensor->CopyToCpu(out_bbox_num_data_.data()); + } else { + std::vector out_data; + out_data.resize(out_num); + output_tensor->CopyToCpu(out_data.data()); + out_tensor_list.push_back(out_data); + } } } } + auto inference_end = std::chrono::steady_clock::now(); auto postprocess_start = std::chrono::steady_clock::now(); // Postprocessing result @@ -371,26 +536,24 @@ void ObjectDetector::Predict(const std::vector imgs, if (i == config_.fpn_stride_.size()) { reg_max = output_shape_list[i][2] / 4 - 1; } - float* buffer = new float[out_tensor_list[i].size()]; - memcpy(buffer, - &out_tensor_list[i][0], + float *buffer = new float[out_tensor_list[i].size()]; + memcpy(buffer, &out_tensor_list[i][0], out_tensor_list[i].size() * sizeof(float)); output_data_list_.push_back(buffer); } PaddleDetection::PicoDetPostProcess( - result, - output_data_list_, - config_.fpn_stride_, - inputs_.im_shape_, - inputs_.scale_factor_, - config_.nms_info_["score_threshold"].as(), - config_.nms_info_["nms_threshold"].as(), - num_class, - reg_max); + 
result, output_data_list_, config_.fpn_stride_, inputs_.im_shape_, + inputs_.scale_factor_, config_.nms_info_["score_threshold"].as(), + config_.nms_info_["nms_threshold"].as(), num_class, reg_max); bbox_num->push_back(result->size()); + } else if (config_.arch_ == "SOLOv2") { + SOLOv2Postprocess(imgs, result, bbox_num, out_bbox_num_data_, + out_label_data_, out_score_data_, out_global_mask_data_, + threshold); } else { is_rbox = output_shape_list[0][output_shape_list[0].size() - 1] % 10 == 0; - Postprocess(imgs, result, out_bbox_num_data_, out_tensor_list[0], is_rbox); + Postprocess(imgs, result, out_bbox_num_data_, out_tensor_list[0], + out_mask_data_, is_rbox); for (int k = 0; k < out_bbox_num_data_.size(); k++) { int tmp = out_bbox_num_data_[k]; bbox_num->push_back(tmp); @@ -426,4 +589,4 @@ std::vector GenerateColorMap(int num_class) { return colormap; } -} // namespace PaddleDetection +} // namespace PaddleDetection diff --git a/deploy/cpp/src/preprocess_op.cc b/deploy/cpp/src/preprocess_op.cc index 4ac3daa304e933e307596423442502a5bfc06da5..6147555be57a2739fcd4a773eb281aaa966763b0 100644 --- a/deploy/cpp/src/preprocess_op.cc +++ b/deploy/cpp/src/preprocess_op.cc @@ -60,12 +60,11 @@ void Permute::Run(cv::Mat* im, ImageBlob* data) { void Resize::Run(cv::Mat* im, ImageBlob* data) { auto resize_scale = GenerateScale(*im); - data->im_shape_ = {static_cast(im->cols * resize_scale.first), - static_cast(im->rows * resize_scale.second)}; - data->in_net_shape_ = {static_cast(im->cols * resize_scale.first), - static_cast(im->rows * resize_scale.second)}; cv::resize( *im, *im, cv::Size(), resize_scale.first, resize_scale.second, interp_); + + data->in_net_shape_ = {static_cast(im->rows), + static_cast(im->cols)}; data->im_shape_ = { static_cast(im->rows), static_cast(im->cols), }; @@ -154,6 +153,7 @@ float LetterBoxResize::GenerateScale(const cv::Mat& im) { void PadStride::Run(cv::Mat* im, ImageBlob* data) { if (stride_ <= 0) { + data->in_net_im_ = im->clone(); return; } int rc = im->channels(); @@ -177,13 +177,84 @@ void TopDownEvalAffine::Run(cv::Mat* im, ImageBlob* data) { }; } +void GetAffineTrans(const cv::Point2f center, + const cv::Point2f input_size, + const cv::Point2f output_size, + cv::Mat* trans) { + cv::Point2f srcTri[3]; + cv::Point2f dstTri[3]; + float src_w = input_size.x; + float dst_w = output_size.x; + float dst_h = output_size.y; + + cv::Point2f src_dir(0, -0.5 * src_w); + cv::Point2f dst_dir(0, -0.5 * dst_w); + + srcTri[0] = center; + srcTri[1] = center + src_dir; + cv::Point2f src_d = srcTri[0] - srcTri[1]; + srcTri[2] = srcTri[1] + cv::Point2f(-src_d.y, src_d.x); + + dstTri[0] = cv::Point2f(dst_w * 0.5, dst_h * 0.5); + dstTri[1] = cv::Point2f(dst_w * 0.5, dst_h * 0.5) + dst_dir; + cv::Point2f dst_d = dstTri[0] - dstTri[1]; + dstTri[2] = dstTri[1] + cv::Point2f(-dst_d.y, dst_d.x); + + *trans = cv::getAffineTransform(srcTri, dstTri); +} + +void WarpAffine::Run(cv::Mat* im, ImageBlob* data) { + cv::cvtColor(*im, *im, cv::COLOR_RGB2BGR); + cv::Mat trans(2, 3, CV_32FC1); + cv::Point2f center; + cv::Point2f input_size; + int h = im->rows; + int w = im->cols; + if (keep_res_) { + input_h_ = (h | pad_) + 1; + input_w_ = (w + pad_) + 1; + input_size = cv::Point2f(input_w_, input_h_); + center = cv::Point2f(w / 2, h / 2); + } else { + float s = std::max(h, w) * 1.0; + input_size = cv::Point2f(s, s); + center = cv::Point2f(w / 2., h / 2.); + } + cv::Point2f output_size(input_w_, input_h_); + + GetAffineTrans(center, input_size, output_size, &trans); + cv::warpAffine(*im, *im, trans, 
cv::Size(input_w_, input_h_)); + data->in_net_shape_ = { + static_cast(input_h_), static_cast(input_w_), + }; +} + +void Pad::Run(cv::Mat* im, ImageBlob* data) { + int h = size_[0]; + int w = size_[1]; + int rh = im->rows; + int rw = im->cols; + if (h == rh && w == rw){ + data->in_net_im_ = im->clone(); + return; + } + cv::copyMakeBorder( + *im, *im, 0, h - rh, 0, w - rw, cv::BORDER_CONSTANT, cv::Scalar(114)); + data->in_net_im_ = im->clone(); + data->in_net_shape_ = { + static_cast(im->rows), static_cast(im->cols), + }; +} + // Preprocessor op running order const std::vector Preprocessor::RUN_ORDER = {"InitInfo", "TopDownEvalAffine", "Resize", "LetterBoxResize", + "WarpAffine", "NormalizeImage", "PadStride", + "Pad", "Permute"}; void Preprocessor::Run(cv::Mat* im, ImageBlob* data) { @@ -242,7 +313,9 @@ bool CheckDynamicInput(const std::vector& imgs) { int h = imgs.at(0).rows; int w = imgs.at(0).cols; for (int i = 1; i < imgs.size(); ++i) { - if (imgs.at(i).rows != h || imgs.at(i).cols != w) { + int hi = imgs.at(i).rows; + int wi = imgs.at(i).cols; + if (hi != h || wi != w) { return true; } } diff --git a/deploy/cpp/src/tracker.cc b/deploy/cpp/src/tracker.cc index b00e31c4ec580f3b30fe4b10970f31623f47acb3..f40cb0dd699a4687f4f77714e4bc5ae5416141f6 100644 --- a/deploy/cpp/src/tracker.cc +++ b/deploy/cpp/src/tracker.cc @@ -58,8 +58,8 @@ bool JDETracker::update(const cv::Mat &dets, const cv::Mat &emb, std::vector(i, 4); - const cv::Mat <rb_ = dets(cv::Rect(0, i, 4, 1)); + float score = *dets.ptr(i, 1); + const cv::Mat <rb_ = dets(cv::Rect(2, i, 4, 1)); cv::Vec4f ltrb = mat2vec4f(ltrb_); const cv::Mat &embedding = emb(cv::Rect(0, i, emb.cols, 1)); candidates[i] = Trajectory(ltrb, score, embedding); diff --git a/deploy/end2end_ppyoloe/README.md b/deploy/end2end_ppyoloe/README.md new file mode 100644 index 0000000000000000000000000000000000000000..d470dccffe7c9927eac6946d3ee47ea96c346a56 --- /dev/null +++ b/deploy/end2end_ppyoloe/README.md @@ -0,0 +1,99 @@ +# Export ONNX Model +## Download pretrain paddle models + +* [ppyoloe-s](https://paddledet.bj.bcebos.com/models/ppyoloe_crn_s_300e_coco.pdparams) +* [ppyoloe-m](https://paddledet.bj.bcebos.com/models/ppyoloe_crn_m_300e_coco.pdparams) +* [ppyoloe-l](https://paddledet.bj.bcebos.com/models/ppyoloe_crn_l_300e_coco.pdparams) +* [ppyoloe-x](https://paddledet.bj.bcebos.com/models/ppyoloe_crn_x_300e_coco.pdparams) +* [ppyoloe-s-400e](https://paddledet.bj.bcebos.com/models/ppyoloe_crn_s_400e_coco.pdparams) + + +## Export paddle model for deploying + +```shell +python ./tools/export_model.py \ + -c configs/ppyoloe/ppyoloe_crn_s_300e_coco.yml \ + -o weights=ppyoloe_crn_s_300e_coco.pdparams \ + trt=True \ + exclude_nms=True \ + TestReader.inputs_def.image_shape=[3,640,640] \ + --output_dir ./ + +# if you want to try ppyoloe-s-400e model +python ./tools/export_model.py \ + -c configs/ppyoloe/ppyoloe_crn_s_400e_coco.yml \ + -o weights=ppyoloe_crn_s_400e_coco.pdparams \ + trt=True \ + exclude_nms=True \ + TestReader.inputs_def.image_shape=[3,640,640] \ + --output_dir ./ +``` + +## Check requirements +```shell +pip install onnx>=1.10.0 +pip install paddle2onnx +pip install onnx-simplifier +pip install onnx-graphsurgeon --index-url https://pypi.ngc.nvidia.com +# if use cuda-python infer, please install it +pip install cuda-python +# if use cupy infer, please install it +pip install cupy-cuda117 # cuda110-cuda117 are all available +``` + +## Export script +```shell +python ./deploy/end2end_ppyoloe/end2end.py \ + --model-dir ppyoloe_crn_s_300e_coco \ + 
--save-file ppyoloe_crn_s_300e_coco.onnx \ + --opset 11 \ + --batch-size 1 \ + --topk-all 100 \ + --iou-thres 0.6 \ + --conf-thres 0.4 +# if you want to try ppyoloe-s-400e model +python ./deploy/end2end_ppyoloe/end2end.py \ + --model-dir ppyoloe_crn_s_400e_coco \ + --save-file ppyoloe_crn_s_400e_coco.onnx \ + --opset 11 \ + --batch-size 1 \ + --topk-all 100 \ + --iou-thres 0.6 \ + --conf-thres 0.4 +``` +#### Description of all arguments + +- `--model-dir` : the path of ppyoloe export dir. +- `--save-file` : the path of export onnx. +- `--opset` : onnx opset version. +- `--img-size` : image size for exporting ppyoloe. +- `--batch-size` : batch size for exporting ppyoloe. +- `--topk-all` : topk objects for every image. +- `--iou-thres` : iou threshold for NMS algorithm. +- `--conf-thres` : confidence threshold for NMS algorithm. + +### TensorRT backend (TensorRT version>= 8.0.0) +#### TensorRT engine export +``` shell +/path/to/trtexec \ + --onnx=ppyoloe_crn_s_300e_coco.onnx \ + --saveEngine=ppyoloe_crn_s_300e_coco.engine \ + --fp16 # if export TensorRT fp16 model +# if you want to try ppyoloe-s-400e model +/path/to/trtexec \ + --onnx=ppyoloe_crn_s_400e_coco.onnx \ + --saveEngine=ppyoloe_crn_s_400e_coco.engine \ + --fp16 # if export TensorRT fp16 model +``` +#### TensorRT image infer + +``` shell +# cuda-python infer script +python ./deploy/end2end_ppyoloe/cuda-python.py ppyoloe_crn_s_300e_coco.engine +# cupy infer script +python ./deploy/end2end_ppyoloe/cupy-python.py ppyoloe_crn_s_300e_coco.engine +# if you want to try ppyoloe-s-400e model +python ./deploy/end2end_ppyoloe/cuda-python.py ppyoloe_crn_s_400e_coco.engine +# or +python ./deploy/end2end_ppyoloe/cuda-python.py ppyoloe_crn_s_400e_coco.engine +``` \ No newline at end of file diff --git a/deploy/end2end_ppyoloe/cuda-python.py b/deploy/end2end_ppyoloe/cuda-python.py new file mode 100644 index 0000000000000000000000000000000000000000..3c7bd7c84b3eeaa6bea55416d8a5eabd37ac4d33 --- /dev/null +++ b/deploy/end2end_ppyoloe/cuda-python.py @@ -0,0 +1,161 @@ +import sys +import requests +import cv2 +import random +import time +import numpy as np +import tensorrt as trt +from cuda import cudart +from pathlib import Path +from collections import OrderedDict, namedtuple + + +def letterbox(im, new_shape=(640, 640), color=(114, 114, 114), auto=True, scaleup=True, stride=32): + # Resize and pad image while meeting stride-multiple constraints + shape = im.shape[:2] # current shape [height, width] + if isinstance(new_shape, int): + new_shape = (new_shape, new_shape) + + # Scale ratio (new / old) + r = min(new_shape[0] / shape[0], new_shape[1] / shape[1]) + if not scaleup: # only scale down, do not scale up (for better val mAP) + r = min(r, 1.0) + + # Compute padding + new_unpad = int(round(shape[1] * r)), int(round(shape[0] * r)) + dw, dh = new_shape[1] - new_unpad[0], new_shape[0] - new_unpad[1] # wh padding + + if auto: # minimum rectangle + dw, dh = np.mod(dw, stride), np.mod(dh, stride) # wh padding + + dw /= 2 # divide padding into 2 sides + dh /= 2 + + if shape[::-1] != new_unpad: # resize + im = cv2.resize(im, new_unpad, interpolation=cv2.INTER_LINEAR) + top, bottom = int(round(dh - 0.1)), int(round(dh + 0.1)) + left, right = int(round(dw - 0.1)), int(round(dw + 0.1)) + im = cv2.copyMakeBorder(im, top, bottom, left, right, cv2.BORDER_CONSTANT, value=color) # add border + return im, r, (dw, dh) + + +w = Path(sys.argv[1]) + +assert w.exists() and w.suffix in ('.engine', '.plan'), 'Wrong engine path' + +names = ['person', 'bicycle', 'car', 
'motorcycle', 'airplane', 'bus', 'train', 'truck', 'boat', 'traffic light', + 'fire hydrant', 'stop sign', 'parking meter', 'bench', 'bird', 'cat', 'dog', 'horse', 'sheep', 'cow', + 'elephant', 'bear', 'zebra', 'giraffe', 'backpack', 'umbrella', 'handbag', 'tie', 'suitcase', 'frisbee', + 'skis', 'snowboard', 'sports ball', 'kite', 'baseball bat', 'baseball glove', 'skateboard', 'surfboard', + 'tennis racket', 'bottle', 'wine glass', 'cup', 'fork', 'knife', 'spoon', 'bowl', 'banana', 'apple', + 'sandwich', 'orange', 'broccoli', 'carrot', 'hot dog', 'pizza', 'donut', 'cake', 'chair', 'couch', + 'potted plant', 'bed', 'dining table', 'toilet', 'tv', 'laptop', 'mouse', 'remote', 'keyboard', 'cell phone', + 'microwave', 'oven', 'toaster', 'sink', 'refrigerator', 'book', 'clock', 'vase', 'scissors', 'teddy bear', + 'hair drier', 'toothbrush'] +colors = {name: [random.randint(0, 255) for _ in range(3)] for i, name in enumerate(names)} + +url = 'https://oneflow-static.oss-cn-beijing.aliyuncs.com/tripleMu/image1.jpg' +file = requests.get(url) +img = cv2.imdecode(np.frombuffer(file.content, np.uint8), 1) + +_, stream = cudart.cudaStreamCreate() + +mean = np.array([0.485, 0.456, 0.406], dtype=np.float32).reshape(1, 3, 1, 1) +std = np.array([0.229, 0.224, 0.225], dtype=np.float32).reshape(1, 3, 1, 1) + +# Infer TensorRT Engine +Binding = namedtuple('Binding', ('name', 'dtype', 'shape', 'data', 'ptr')) +logger = trt.Logger(trt.Logger.ERROR) +trt.init_libnvinfer_plugins(logger, namespace="") +with open(w, 'rb') as f, trt.Runtime(logger) as runtime: + model = runtime.deserialize_cuda_engine(f.read()) +bindings = OrderedDict() +fp16 = False # default updated below +for index in range(model.num_bindings): + name = model.get_binding_name(index) + dtype = trt.nptype(model.get_binding_dtype(index)) + shape = tuple(model.get_binding_shape(index)) + data = np.empty(shape, dtype=np.dtype(dtype)) + _, data_ptr = cudart.cudaMallocAsync(data.nbytes, stream) + bindings[name] = Binding(name, dtype, shape, data, data_ptr) + if model.binding_is_input(index) and dtype == np.float16: + fp16 = True +binding_addrs = OrderedDict((n, d.ptr) for n, d in bindings.items()) +context = model.create_execution_context() + +image = img.copy() +image, ratio, dwdh = letterbox(image, auto=False) +image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB) + +image_copy = image.copy() + +image = image.transpose((2, 0, 1)) +image = np.expand_dims(image, 0) +image = np.ascontiguousarray(image) + +im = image.astype(np.float32) +im /= 255 +im -= mean +im /= std + +_, image_ptr = cudart.cudaMallocAsync(im.nbytes, stream) +cudart.cudaMemcpyAsync(image_ptr, im.ctypes.data, im.nbytes, + cudart.cudaMemcpyKind.cudaMemcpyHostToDevice, stream) + +# warmup for 10 times +for _ in range(10): + tmp = np.random.randn(1, 3, 640, 640).astype(np.float32) + _, tmp_ptr = cudart.cudaMallocAsync(tmp.nbytes, stream) + binding_addrs['image'] = tmp_ptr + context.execute_v2(list(binding_addrs.values())) + +start = time.perf_counter() +binding_addrs['image'] = image_ptr +context.execute_v2(list(binding_addrs.values())) +print(f'Cost {(time.perf_counter() - start) * 1000}ms') + +nums = bindings['num_dets'].data +boxes = bindings['det_boxes'].data +scores = bindings['det_scores'].data +classes = bindings['det_classes'].data + +cudart.cudaMemcpyAsync(nums.ctypes.data, + bindings['num_dets'].ptr, + nums.nbytes, + cudart.cudaMemcpyKind.cudaMemcpyDeviceToHost, + stream) +cudart.cudaMemcpyAsync(boxes.ctypes.data, + bindings['det_boxes'].ptr, + boxes.nbytes, + 
cudart.cudaMemcpyKind.cudaMemcpyDeviceToHost, + stream) +cudart.cudaMemcpyAsync(scores.ctypes.data, + bindings['det_scores'].ptr, + scores.nbytes, + cudart.cudaMemcpyKind.cudaMemcpyDeviceToHost, + stream) +cudart.cudaMemcpyAsync(classes.ctypes.data, + bindings['det_classes'].ptr, + classes.data.nbytes, + cudart.cudaMemcpyKind.cudaMemcpyDeviceToHost, + stream) + +cudart.cudaStreamSynchronize(stream) +cudart.cudaStreamDestroy(stream) + +for i in binding_addrs.values(): + cudart.cudaFree(i) + +num = int(nums[0][0]) +box_img = boxes[0, :num].round().astype(np.int32) +score_img = scores[0, :num] +clss_img = classes[0, :num] +for i, (box, score, clss) in enumerate(zip(box_img, score_img, clss_img)): + name = names[int(clss)] + color = colors[name] + cv2.rectangle(image_copy, box[:2].tolist(), box[2:].tolist(), color, 2) + cv2.putText(image_copy, name, (int(box[0]), int(box[1]) - 2), cv2.FONT_HERSHEY_SIMPLEX, + 0.75, [225, 255, 255], thickness=2) + +cv2.imshow('Result', cv2.cvtColor(image_copy, cv2.COLOR_RGB2BGR)) +cv2.waitKey(0) diff --git a/deploy/end2end_ppyoloe/cupy-python.py b/deploy/end2end_ppyoloe/cupy-python.py new file mode 100644 index 0000000000000000000000000000000000000000..a66eb77ecf3aa4c76c143050764429a2a06e8ba1 --- /dev/null +++ b/deploy/end2end_ppyoloe/cupy-python.py @@ -0,0 +1,131 @@ +import sys +import requests +import cv2 +import random +import time +import numpy as np +import cupy as cp +import tensorrt as trt +from PIL import Image +from collections import OrderedDict, namedtuple +from pathlib import Path + + +def letterbox(im, new_shape=(640, 640), color=(114, 114, 114), auto=True, scaleup=True, stride=32): + # Resize and pad image while meeting stride-multiple constraints + shape = im.shape[:2] # current shape [height, width] + if isinstance(new_shape, int): + new_shape = (new_shape, new_shape) + + # Scale ratio (new / old) + r = min(new_shape[0] / shape[0], new_shape[1] / shape[1]) + if not scaleup: # only scale down, do not scale up (for better val mAP) + r = min(r, 1.0) + + # Compute padding + new_unpad = int(round(shape[1] * r)), int(round(shape[0] * r)) + dw, dh = new_shape[1] - new_unpad[0], new_shape[0] - new_unpad[1] # wh padding + + if auto: # minimum rectangle + dw, dh = np.mod(dw, stride), np.mod(dh, stride) # wh padding + + dw /= 2 # divide padding into 2 sides + dh /= 2 + + if shape[::-1] != new_unpad: # resize + im = cv2.resize(im, new_unpad, interpolation=cv2.INTER_LINEAR) + top, bottom = int(round(dh - 0.1)), int(round(dh + 0.1)) + left, right = int(round(dw - 0.1)), int(round(dw + 0.1)) + im = cv2.copyMakeBorder(im, top, bottom, left, right, cv2.BORDER_CONSTANT, value=color) # add border + return im, r, (dw, dh) + + +names = ['person', 'bicycle', 'car', 'motorcycle', 'airplane', 'bus', 'train', 'truck', 'boat', 'traffic light', + 'fire hydrant', 'stop sign', 'parking meter', 'bench', 'bird', 'cat', 'dog', 'horse', 'sheep', 'cow', + 'elephant', 'bear', 'zebra', 'giraffe', 'backpack', 'umbrella', 'handbag', 'tie', 'suitcase', 'frisbee', + 'skis', 'snowboard', 'sports ball', 'kite', 'baseball bat', 'baseball glove', 'skateboard', 'surfboard', + 'tennis racket', 'bottle', 'wine glass', 'cup', 'fork', 'knife', 'spoon', 'bowl', 'banana', 'apple', + 'sandwich', 'orange', 'broccoli', 'carrot', 'hot dog', 'pizza', 'donut', 'cake', 'chair', 'couch', + 'potted plant', 'bed', 'dining table', 'toilet', 'tv', 'laptop', 'mouse', 'remote', 'keyboard', 'cell phone', + 'microwave', 'oven', 'toaster', 'sink', 'refrigerator', 'book', 'clock', 'vase', 'scissors', 'teddy bear', 
+ 'hair drier', 'toothbrush'] +colors = {name: [random.randint(0, 255) for _ in range(3)] for i, name in enumerate(names)} + +url = 'https://oneflow-static.oss-cn-beijing.aliyuncs.com/tripleMu/image1.jpg' +file = requests.get(url) +img = cv2.imdecode(np.frombuffer(file.content, np.uint8), 1) + +w = Path(sys.argv[1]) + +assert w.exists() and w.suffix in ('.engine', '.plan'), 'Wrong engine path' + +mean = np.array([0.485, 0.456, 0.406], dtype=np.float32).reshape(1, 3, 1, 1) +std = np.array([0.229, 0.224, 0.225], dtype=np.float32).reshape(1, 3, 1, 1) + +mean = cp.asarray(mean) +std = cp.asarray(std) + +# Infer TensorRT Engine +Binding = namedtuple('Binding', ('name', 'dtype', 'shape', 'data', 'ptr')) +logger = trt.Logger(trt.Logger.INFO) +trt.init_libnvinfer_plugins(logger, namespace="") +with open(w, 'rb') as f, trt.Runtime(logger) as runtime: + model = runtime.deserialize_cuda_engine(f.read()) +bindings = OrderedDict() +fp16 = False # default updated below +for index in range(model.num_bindings): + name = model.get_binding_name(index) + dtype = trt.nptype(model.get_binding_dtype(index)) + shape = tuple(model.get_binding_shape(index)) + data = cp.empty(shape, dtype=cp.dtype(dtype)) + bindings[name] = Binding(name, dtype, shape, data, int(data.data.ptr)) + if model.binding_is_input(index) and dtype == np.float16: + fp16 = True +binding_addrs = OrderedDict((n, d.ptr) for n, d in bindings.items()) +context = model.create_execution_context() + +image = img.copy() +image, ratio, dwdh = letterbox(image, auto=False) +image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB) + +image_copy = image.copy() + +image = image.transpose((2, 0, 1)) +image = np.expand_dims(image, 0) +image = np.ascontiguousarray(image) + +im = cp.asarray(image) +im = im.astype(cp.float32) +im /= 255 +im -= mean +im /= std + +# warmup for 10 times +for _ in range(10): + tmp = cp.random.randn(1, 3, 640, 640).astype(cp.float32) + binding_addrs['image'] = int(tmp.data.ptr) + context.execute_v2(list(binding_addrs.values())) + +start = time.perf_counter() +binding_addrs['image'] = int(im.data.ptr) +context.execute_v2(list(binding_addrs.values())) +print(f'Cost {(time.perf_counter() - start) * 1000}ms') + +nums = bindings['num_dets'].data +boxes = bindings['det_boxes'].data +scores = bindings['det_scores'].data +classes = bindings['det_classes'].data + +num = int(nums[0][0]) +box_img = boxes[0, :num].round().astype(cp.int32) +score_img = scores[0, :num] +clss_img = classes[0, :num] +for i, (box, score, clss) in enumerate(zip(box_img, score_img, clss_img)): + name = names[int(clss)] + color = colors[name] + cv2.rectangle(image_copy, box[:2].tolist(), box[2:].tolist(), color, 2) + cv2.putText(image_copy, name, (int(box[0]), int(box[1]) - 2), cv2.FONT_HERSHEY_SIMPLEX, + 0.75, [225, 255, 255], thickness=2) + +cv2.imshow('Result', cv2.cvtColor(image_copy, cv2.COLOR_RGB2BGR)) +cv2.waitKey(0) diff --git a/deploy/end2end_ppyoloe/end2end.py b/deploy/end2end_ppyoloe/end2end.py new file mode 100644 index 0000000000000000000000000000000000000000..fcfbf019a5d5755768e7defd573203a20a020ef7 --- /dev/null +++ b/deploy/end2end_ppyoloe/end2end.py @@ -0,0 +1,97 @@ +import argparse +import onnx +import onnx_graphsurgeon as gs +import numpy as np + +from pathlib import Path +from paddle2onnx.legacy.command import program2onnx +from collections import OrderedDict + + +def main(opt): + model_dir = Path(opt.model_dir) + save_file = Path(opt.save_file) + assert model_dir.exists() and model_dir.is_dir() + if save_file.is_dir(): + save_file = (save_file / 
model_dir.stem).with_suffix('.onnx') + elif save_file.is_file() and save_file.suffix != '.onnx': + save_file = save_file.with_suffix('.onnx') + input_shape_dict = {'image': [opt.batch_size, 3, *opt.img_size], + 'scale_factor': [opt.batch_size, 2]} + program2onnx(str(model_dir), str(save_file), + 'model.pdmodel', 'model.pdiparams', + opt.opset, input_shape_dict=input_shape_dict) + onnx_model = onnx.load(save_file) + try: + import onnxsim + onnx_model, check = onnxsim.simplify(onnx_model) + assert check, 'assert check failed' + except Exception as e: + print(f'Simplifier failure: {e}') + onnx.checker.check_model(onnx_model) + graph = gs.import_onnx(onnx_model) + graph.fold_constants() + graph.cleanup().toposort() + mul = concat = None + for node in graph.nodes: + if node.op == 'Div' and node.i(0).op == 'Mul': + mul = node.i(0) + if node.op == 'Concat' and node.o().op == 'Reshape' and node.o().o().op == 'ReduceSum': + concat = node + + assert mul.outputs[0].shape[1] == concat.outputs[0].shape[2], 'Something wrong in outputs shape' + + anchors = mul.outputs[0].shape[1] + classes = concat.outputs[0].shape[1] + + scores = gs.Variable(name='scores', shape=[opt.batch_size, anchors, classes], dtype=np.float32) + graph.layer(op='Transpose', name='lastTranspose', + inputs=[concat.outputs[0]], + outputs=[scores], + attrs=OrderedDict(perm=[0, 2, 1])) + + graph.inputs = [graph.inputs[0]] + + attrs = OrderedDict( + plugin_version="1", + background_class=-1, + max_output_boxes=opt.topk_all, + score_threshold=opt.conf_thres, + iou_threshold=opt.iou_thres, + score_activation=False, + box_coding=0, ) + outputs = [gs.Variable("num_dets", np.int32, [opt.batch_size, 1]), + gs.Variable("det_boxes", np.float32, [opt.batch_size, opt.topk_all, 4]), + gs.Variable("det_scores", np.float32, [opt.batch_size, opt.topk_all]), + gs.Variable("det_classes", np.int32, [opt.batch_size, opt.topk_all])] + graph.layer(op='EfficientNMS_TRT', name="batched_nms", + inputs=[mul.outputs[0], scores], + outputs=outputs, + attrs=attrs) + graph.outputs = outputs + graph.cleanup().toposort() + onnx.save(gs.export_onnx(graph), save_file) + + +def parse_opt(): + parser = argparse.ArgumentParser() + parser.add_argument('--model-dir', type=str, + default=None, + help='paddle static model') + parser.add_argument('--save-file', type=str, + default=None, + help='onnx model save path') + parser.add_argument('--opset', type=int, default=11, help='opset version') + parser.add_argument('--img-size', nargs='+', type=int, default=[640, 640], help='image size') + parser.add_argument('--batch-size', type=int, default=1, help='batch size') + parser.add_argument('--topk-all', type=int, default=100, help='topk objects for every images') + parser.add_argument('--iou-thres', type=float, default=0.45, help='iou threshold for NMS') + parser.add_argument('--conf-thres', type=float, default=0.25, help='conf threshold for NMS') + opt = parser.parse_args() + opt.img_size *= 2 if len(opt.img_size) == 1 else 1 + return opt + + +if __name__ == '__main__': + opt = parse_opt() + main(opt) diff --git a/deploy/lite/README.md b/deploy/lite/README.md index e8b58e35309a225f189ca6f05b684195b48c0b75..30447460eb6c4ccdf5c1013d1ea2d631d9073fba 100644 --- a/deploy/lite/README.md +++ b/deploy/lite/README.md @@ -1,8 +1,8 @@ # Paddle-Lite端侧部署 -本教程将介绍基于[Paddle Lite](https://github.com/PaddlePaddle/Paddle-Lite) 在移动端部署PaddleDetection模型的详细步骤。 +[Paddle-Lite](https://github.com/PaddlePaddle/Paddle-Lite)是飞桨轻量化推理引擎,为手机、IOT端提供高效推理能力,并广泛整合跨平台硬件,为端侧部署及应用落地问题提供轻量化的部署方案。 
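Before building a TensorRT engine from the `end2end.py` export above, it can help to confirm that the graph surgery succeeded. A minimal sketch using the `onnx` package (model file name as in the export commands above; output names as registered by the script):

```python
import onnx

# Load the exported end-to-end model and inspect the final graph.
model = onnx.load('ppyoloe_crn_s_300e_coco.onnx')

# The NMS plugin node inserted by end2end.py should be present.
ops = {node.op_type for node in model.graph.node}
assert 'EfficientNMS_TRT' in ops, 'EfficientNMS_TRT node not found'

# The four detection outputs registered by the script.
print([out.name for out in model.graph.output])
# Expected: ['num_dets', 'det_boxes', 'det_scores', 'det_classes']
```

This only walks the graph structure; actually running the model requires TensorRT, since `EfficientNMS_TRT` is a TensorRT plugin rather than a standard ONNX op.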
+本目录提供了PaddleDetection中主要模型在Paddle-Lite上的端到端部署代码。用户可以通过本教程了解如何使用该部分代码,基于Paddle-Lite实现在移动端部署PaddleDetection模型。 -Paddle Lite是飞桨轻量化推理引擎,为手机、IOT端提供高效推理能力,并广泛整合跨平台硬件,为端侧部署及应用落地问题提供轻量化的部署方案。 ## 1. 准备环境 @@ -26,14 +26,10 @@ export NDK_ROOT=[YOUR_NDK_PATH]/android-ndk-r17c ### 1.2 准备预测库 预测库有两种获取方式: -1. [**建议**]直接下载,预测库下载链接如下:(请注意使用模型FP32/16版本需要与库相对应) - |平台| 架构 | 预测库下载链接| - |-|-|-| - |Android| arm7 | [inference_lite_lib](https://github.com/PaddlePaddle/Paddle-Lite/releases/download/v2.10-rc/inference_lite_lib.android.armv7.clang.c++_static.with_extra.with_cv.tar.gz) | - | Android | arm8 | [inference_lite_lib](https://github.com/PaddlePaddle/Paddle-Lite/releases/download/v2.10-rc/inference_lite_lib.android.armv8.clang.c++_static.with_extra.with_cv.tar.gz) | - | Android | arm8(FP16) | [inference_lite_lib](https://github.com/PaddlePaddle/Paddle-Lite/releases/download/v2.10-rc/inference_lite_lib.android.armv8_clang_c++_static_with_extra_with_cv_with_fp16.tiny_publish_427e46.zip) | +1. [**建议**]直接从[Paddle-Lite Release](https://github.com/PaddlePaddle/Paddle-Lite/releases)中, 根据设备类型与架构选择对应的预编译库,请注意使用模型FP32/16版本需要与库相对应,库文件的说明请参考[官方文档](https://paddle-lite.readthedocs.io/zh/latest/quick_start/release_lib.html#android-toolchain-gcc)。 -**注意**:1. 如果是从 Paddle-Lite [官方文档](https://paddle-lite.readthedocs.io/zh/latest/quick_start/release_lib.html#android-toolchain-gcc)下载的预测库,注意选择`with_extra=ON,with_cv=ON`的下载链接。2. 目前只提供Android端demo,IOS端demo可以参考[Paddle-Lite IOS demo](https://github.com/PaddlePaddle/Paddle-Lite-Demo/tree/master/PaddleLite-ios-demo) +**注意**:(1) 如果是从 Paddle-Lite [官方文档](https://paddle-lite.readthedocs.io/zh/latest/quick_start/release_lib.html#android-toolchain-gcc)下载的预测库,注意选择`with_extra=ON,with_cv=ON`的下载链接。2. 目前只提供Android端demo,IOS端demo可以参考[Paddle-Lite IOS demo](https://github.com/PaddlePaddle/Paddle-Lite-Demo/tree/master/PaddleLite-ios-demo) +(2)PP-PicoDet部署需要Paddle Lite 2.11以上版本。 2. 编译Paddle-Lite得到预测库,Paddle-Lite的编译方式如下(Lite库在不断更新,如若下列命令无效,请以Lite官方repo为主): @@ -48,7 +44,7 @@ git checkout develop ./lite/tools/build_android.sh --arch=armv8 --toolchain=clang --with_cv=ON --with_extra=ON --with_arm82_fp16=ON ``` -**注意**:编译Paddle-Lite获得预测库时,需要打开`--with_cv=ON --with_extra=ON`两个选项,`--arch`表示`arm`版本,这里指定为armv8,更多编译命令介绍请参考[链接](https://paddle-lite.readthedocs.io/zh/latest/source_compile/compile_andriod.html#id2)。 +**注意**:编译Paddle-Lite获得预测库时,需要打开`--with_cv=ON --with_extra=ON`两个选项,`--arch`表示`arm`版本,这里指定为armv8,更多编译命令介绍请参考[链接](https://paddle-lite.readthedocs.io/zh/latest/source_compile/compile_options.html)。 直接下载预测库并解压后,可以得到`inference_lite_lib.android.armv8.clang.c++_static.with_extra.with_cv/`文件夹,通过编译Paddle-Lite得到的预测库位于`Paddle-Lite/build.lite.android.armv8.gcc/inference_lite_lib.android.armv8/`文件夹下。 预测库的文件目录如下: @@ -74,23 +70,23 @@ inference_lite_lib.android.armv8/ | | `-- libpaddle_lite_jni.so | `-- src |-- demo C++和Java示例代码 -| |-- cxx C++ 预测库demo +| |-- cxx C++ 预测库demo, 请将本文档目录下的PaddleDetection相关代码拷贝至该文件夹下执行交叉编译。 | `-- java Java 预测库demo ``` ## 2 开始运行 -### 2.1 模型优化 +### 2.1 模型转换 -Paddle-Lite 提供了多种策略来自动优化原始的模型,其中包括量化、子图融合、混合调度、Kernel优选等方法,使用Paddle-Lite的`opt`工具可以自动对inference模型进行优化,目前支持两种优化方式,优化后的模型更轻量,模型运行速度更快。 +Paddle-Lite 提供了多种策略来自动优化原始的模型,其中包括量化、子图融合、混合调度、Kernel优选等方法,使用Paddle-Lite的`opt`工具可以自动对inference模型进行优化,并转换为推理所使用的文件格式。目前支持两种优化方式,优化后的模型更轻量,模型运行速度更快。 **注意**:如果已经准备好了 `.nb` 结尾的模型文件,可以跳过此步骤。 #### 2.1.1 安装paddle_lite_opt工具 -安装`paddle_lite_opt`工具有如下两种方法: +安装`paddle_lite_opt`工具有如下两种方法, **请注意**,无论使用哪种方法,请尽量保证`paddle_lite_opt`工具和预测库的版本一致,以避免未知的Bug。 1. 
[**建议**]pip安装paddlelite并进行转换 ```shell - pip install paddlelite==2.10rc + pip install paddlelite ``` 2. 源码编译Paddle-Lite生成`paddle_lite_opt`工具 @@ -122,13 +118,14 @@ Paddle-Lite 提供了多种策略来自动优化原始的模型,其中包括 |--optimize_out_type|输出模型类型,目前支持两种类型:protobuf和naive_buffer,其中naive_buffer是一种更轻量级的序列化/反序列化实现,默认为naive_buffer| |--optimize_out|优化模型的输出路径| |--valid_targets|指定模型可执行的backend,默认为arm。目前可支持x86、arm、opencl、npu、xpu,可以同时指定多个backend(以空格分隔),Model Optimize Tool将会自动选择最佳方式。如果需要支持华为NPU(Kirin 810/990 Soc搭载的达芬奇架构NPU),应当设置为npu, arm| +| --enable_fp16| true/false,是否使用fp16进行推理。如果开启,需要使用对应fp16的预测库| 更详细的`paddle_lite_opt`工具使用说明请参考[使用opt转化模型文档](https://paddle-lite.readthedocs.io/zh/latest/user_guides/opt/opt_bin.html) `--model_file`表示inference模型的model文件地址,`--param_file`表示inference模型的param文件地址;`optimize_out`用于指定输出文件的名称(不需要添加`.nb`的后缀)。直接在命令行中运行`paddle_lite_opt`,也可以查看所有参数及其说明。 -#### 2.1.3 转换示例 +#### 2.1.2 转换示例 下面以PaddleDetection中的 `PicoDet` 模型为例,介绍使用`paddle_lite_opt`完成预训练模型到inference模型,再到Paddle-Lite优化模型的转换。 @@ -259,16 +256,20 @@ deploy/ } ``` -* `keypoint_runtime_config.json` 包含了关键点检测的超参数,请按需进行修改: +* `keypoint_runtime_config.json` 同时包含了目标检测和关键点检测的超参数,支持Top-Down方案的推理流程,请按需进行修改: ```shell { + "model_dir_det": "./model_det/", #检测模型路径 + "batch_size_det": 1, #检测模型预测时batchsize, 存在关键点模型时只能为1 + "threshold_det": 0.5, #检测器输出阈值 "model_dir_keypoint": "./model_keypoint/", #关键点模型路径(不使用需为空字符) "batch_size_keypoint": 8, #关键点预测时batchsize "threshold_keypoint": 0.5, #关键点输出阈值 "image_file": "demo.jpg", #测试图片 "image_dir": "", #测试图片文件夹 - "run_benchmark": true, #性能测试开关 + "run_benchmark": true, #性能测试开关 "cpu_threads": 4 #线程数 + "use_dark_decode": true #是否使用DARK解码关键点坐标 } ``` @@ -299,7 +300,7 @@ chmod 777 main ## FAQ Q1:如果想更换模型怎么办,需要重新按照流程走一遍吗? -A1:如果已经走通了上述步骤,更换模型只需要替换 `.nb` 模型文件即可,同时要注意修改下配置文件中的 `.nb` 文件路径以及类别映射文件(如有必要)。 +A1:如果已经走通了上述步骤,更换模型只需要替换 `.nb` 模型文件及其对应模型配置文件`infer_cfg.json`,同时要注意修改下配置文件中的 `.nb` 文件路径以及类别映射文件(如有必要)。 Q2:换一个图测试怎么做? 
A2:替换 deploy 下的测试图像为你想要测试的图像,使用 ADB 再次 push 到手机上即可。 diff --git a/deploy/lite/include/config_parser.h b/deploy/lite/include/config_parser.h index 5171885ca954f50a44d511d24b3ca23845462d45..60d94c69e3b17aa9afea5dfb90e286f44d63f0bc 100644 --- a/deploy/lite/include/config_parser.h +++ b/deploy/lite/include/config_parser.h @@ -29,7 +29,7 @@ namespace PaddleDetection { -void load_jsonf(std::string jsonfile, const Json::Value& jsondata); +void load_jsonf(std::string jsonfile, Json::Value& jsondata); // Inference model configuration parser class ConfigPaser { diff --git a/deploy/lite/include/keypoint_postprocess.h b/deploy/lite/include/keypoint_postprocess.h index 0d1e747f306e44679d0500272e80df8a5fb19ab9..4e0e54c2640104488ef85e733af1c16bdc2d86aa 100644 --- a/deploy/lite/include/keypoint_postprocess.h +++ b/deploy/lite/include/keypoint_postprocess.h @@ -33,7 +33,8 @@ void transform_preds(std::vector& coords, std::vector& scale, std::vector& output_size, std::vector& dim, - std::vector& target_coords); + std::vector& target_coords, + bool affine); void box_to_center_scale(std::vector& box, int width, int height, diff --git a/deploy/lite/src/config_parser.cc b/deploy/lite/src/config_parser.cc index ed139a17dc8b2535877f3981849fdca8ce16993c..70c43e76c2c85d2917eb1c3384304260c591b85c 100644 --- a/deploy/lite/src/config_parser.cc +++ b/deploy/lite/src/config_parser.cc @@ -16,7 +16,7 @@ namespace PaddleDetection { -void load_jsonf(std::string jsonfile, const Json::Value &jsondata) { +void load_jsonf(std::string jsonfile, Json::Value &jsondata) { std::ifstream ifs; ifs.open(jsonfile); diff --git a/deploy/lite/src/keypoint_postprocess.cc b/deploy/lite/src/keypoint_postprocess.cc index 5f28d2adcffaee6b2a3135eb828996c3b00488fa..6c75ece87c2c8f743f0f112ab6bd23fdcc96a270 100644 --- a/deploy/lite/src/keypoint_postprocess.cc +++ b/deploy/lite/src/keypoint_postprocess.cc @@ -74,11 +74,26 @@ void transform_preds(std::vector& coords, std::vector& scale, std::vector& output_size, std::vector& dim, - std::vector& target_coords) { - cv::Mat trans(2, 3, CV_64FC1); - get_affine_transform(center, scale, 0, output_size, trans, 1); - for (int p = 0; p < dim[1]; ++p) { - affine_tranform(coords[p * 2], coords[p * 2 + 1], trans, target_coords, p); + std::vector& target_coords, + bool affine=false) { + if (affine) { + cv::Mat trans(2, 3, CV_64FC1); + get_affine_transform(center, scale, 0, output_size, trans, 1); + for (int p = 0; p < dim[1]; ++p) { + affine_tranform( + coords[p * 2], coords[p * 2 + 1], trans, target_coords, p); + } + } else { + float heat_w = static_cast(output_size[0]); + float heat_h = static_cast(output_size[1]); + float x_scale = scale[0] / heat_w; + float y_scale = scale[1] / heat_h; + float offset_x = center[0] - scale[0] / 2.; + float offset_y = center[1] - scale[1] / 2.; + for (int i = 0; i < dim[1]; i++) { + target_coords[i * 3 + 1] = x_scale * coords[i * 2] + offset_x; + target_coords[i * 3 + 2] = y_scale * coords[i * 2 + 1] + offset_y; + } } } diff --git a/deploy/pipeline/README.md b/deploy/pipeline/README.md new file mode 100644 index 0000000000000000000000000000000000000000..29d86d125875b2afbe1b95aac004e90ee8803a56 --- /dev/null +++ b/deploy/pipeline/README.md @@ -0,0 +1,109 @@ +简体中文 | [English](README_en.md) + +# 实时行人分析工具 PP-Human + +**PP-Human是基于飞桨深度学习框架的业界首个开源产业级实时行人分析工具,具有功能丰富,应用广泛和部署高效三大优势。** + +![](https://user-images.githubusercontent.com/22989727/178965250-14be25c1-125d-4d90-8642-7a9b01fecbe2.gif) + 
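One note on the `transform_preds` change above before moving on: the new non-affine branch is a plain rescale-plus-offset from heatmap coordinates back to the original image. A NumPy sketch of that mapping (function name hypothetical; `coords` assumed to be an (N, 2) array of heatmap peaks):

```python
import numpy as np

def heatmap_to_image(coords, center, scale, heatmap_size):
    """Mirror the non-affine branch of transform_preds: rescale heatmap
    peaks by (scale / heatmap_size), then shift by the crop's top-left."""
    heat_w, heat_h = heatmap_size
    x_scale = scale[0] / heat_w
    y_scale = scale[1] / heat_h
    offset_x = center[0] - scale[0] / 2.0
    offset_y = center[1] - scale[1] / 2.0
    coords = np.asarray(coords, dtype=np.float32)
    return np.stack([x_scale * coords[:, 0] + offset_x,
                     y_scale * coords[:, 1] + offset_y], axis=1)

# A peak at the center of a 48x64 heatmap maps back to the center of a
# 192x256 crop centered at (160, 120) in the original image:
print(heatmap_to_image([[24, 32]], (160, 120), (192, 256), (48, 64)))
# -> [[160. 120.]]
```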
+PP-Human支持图片/单镜头视频/多镜头视频多种输入方式,功能覆盖多目标跟踪、属性识别、行为分析及人流量计数与轨迹记录。能够广泛应用于智慧交通、智慧社区、工业巡检等领域。支持服务器端部署及TensorRT加速,T4服务器上可达到实时。 + +## 📣 近期更新 + +- 🔥 **2022.7.13:PP-Human v2发布,行为识别、人体属性识别、流量计数、跨镜跟踪四大产业特色功能全面升级,覆盖行人检测、跟踪、属性三类核心算法能力,提供保姆级全流程开发及模型优化策略。** +- 2022.4.18:新增PP-Human全流程实战教程, 覆盖训练、部署、动作类型扩展等内容,AIStudio项目请见[链接](https://aistudio.baidu.com/aistudio/projectdetail/3842982) +- 2022.4.10:新增PP-Human范例,赋能社区智能精细化管理, AIStudio快速上手教程[链接](https://aistudio.baidu.com/aistudio/projectdetail/3679564) +- 2022.4.5:全新发布实时行人分析工具PP-Human,支持行人跟踪、人流量统计、人体属性识别与摔倒检测四大能力,基于真实场景数据特殊优化,精准识别各类摔倒姿势,适应不同环境背景、光线及摄像角度 + +## 🔮 功能介绍与效果展示 + +| ⭐ 功能 | 💟 方案优势 | 💡示例图 | +| -------------- | ------------------------------------------------------------------------------------------------------------------------------------------------- | --------------------------------------------------------------------------------------------------------------------------------------------- | +| **跨镜跟踪(ReID)** | 超强性能:针对目标遮挡、完整度、模糊度等难点特殊优化,实现mAP 98.8、1.5ms/人 | | +| **属性分析** | 兼容多种数据格式:支持图片、视频输入

    高性能:融合开源数据集与企业真实数据进行训练,实现mAP 94.86、2ms/人

    支持26种属性:性别、年龄、眼镜、上衣、鞋子、帽子、背包等26种高频属性 | | +| **行为识别** | 功能丰富:支持摔倒、打架、抽烟、打电话、人员闯入五种高频异常行为识别

    鲁棒性强:对光照、视角、背景环境无限制

    性能高:与视频识别技术相比,模型计算量大幅降低,支持本地化与服务化快速部署

    训练速度快:仅需15分钟即可产出高精度行为识别模型 | | +| **人流量计数**
    **轨迹记录** | 简洁易用:单个参数即可开启人流量计数与轨迹记录功能 | | + +## 🗳 模型库 + +
    +单模型效果(点击展开) + +| 任务 | 适用场景 | 精度 | 预测速度(ms)| 模型体积 | 预测部署模型 | +| :---------: |:---------: |:--------------- | :-------: | :------: | :------: | +| 目标检测(高精度) | 图片输入 | mAP: 57.8 | 25.1ms | 182M |[下载链接](https://bj.bcebos.com/v1/paddledet/models/pipeline/mot_ppyoloe_l_36e_pipeline.zip) | +| 目标检测(轻量级) | 图片输入 | mAP: 53.2 | 16.2ms | 27M |[下载链接](https://bj.bcebos.com/v1/paddledet/models/pipeline/mot_ppyoloe_s_36e_pipeline.zip) | +| 目标跟踪(高精度) | 视频输入 | MOTA: 82.2 | 31.8ms | 182M |[下载链接](https://bj.bcebos.com/v1/paddledet/models/pipeline/mot_ppyoloe_l_36e_pipeline.zip) | +| 目标跟踪(轻量级) | 视频输入 | MOTA: 73.9 | 21.0ms |27M |[下载链接](https://bj.bcebos.com/v1/paddledet/models/pipeline/mot_ppyoloe_s_36e_pipeline.zip) | +| 属性识别(高精度) | 图片/视频输入 属性识别 | mA: 95.4 | 单人4.2ms | 86M |[下载链接](https://bj.bcebos.com/v1/paddledet/models/pipeline/PPHGNet_small_person_attribute_954_infer.zip) | +| 属性识别(轻量级) | 图片/视频输入 属性识别 | mA: 94.5 | 单人2.9ms | 7.2M |[下载链接](https://bj.bcebos.com/v1/paddledet/models/pipeline/PPLCNet_x1_0_person_attribute_945_infer.zip) | +| 关键点检测 | 视频输入 行为识别 | AP: 87.1 | 单人5.7ms | 101M |[下载链接](https://bj.bcebos.com/v1/paddledet/models/pipeline/dark_hrnet_w32_256x192.zip) | +| 基于关键点序列分类 | 视频输入 行为识别 | 准确率: 96.43 | 单人0.07ms | 21.8M |[下载链接](https://bj.bcebos.com/v1/paddledet/models/pipeline/STGCN.zip) | +| 基于人体id图像分类 | 视频输入 行为识别 | 准确率: 86.85 | 单人1.8ms | 45M |[下载链接](https://bj.bcebos.com/v1/paddledet/models/pipeline/PPHGNet_tiny_calling_halfbody.zip) | +| 基于人体id检测 | 视频输入 行为识别 | AP50: 79.5 | 单人10.9ms | 27M |[下载链接](https://bj.bcebos.com/v1/paddledet/models/pipeline/ppyoloe_crn_s_80e_smoking_visdrone.zip) | +| 视频分类 | 视频输入 行为识别 | Accuracy: 89.0 | 19.7ms/1s视频 | 90M | [下载链接](https://videotag.bj.bcebos.com/PaddleVideo-release2.3/ppTSM_fight.pdparams) | +| ReID | 视频输入 跨镜跟踪 | mAP: 98.8 | 单人0.23ms | 85M |[下载链接](https://bj.bcebos.com/v1/paddledet/models/pipeline/reid_model.zip) | + +
+ +
    +端到端模型效果(点击展开) + +| 任务 | 端到端速度(ms)| 模型方案 | 模型体积 | +| :---------: | :-------: | :------: |:------: | +| 行人检测(高精度) | 25.1ms | [多目标跟踪](https://bj.bcebos.com/v1/paddledet/models/pipeline/mot_ppyoloe_l_36e_pipeline.zip) | 182M | +| 行人检测(轻量级) | 16.2ms | [多目标跟踪](https://bj.bcebos.com/v1/paddledet/models/pipeline/mot_ppyoloe_s_36e_pipeline.zip) | 27M | +| 行人跟踪(高精度) | 31.8ms | [多目标跟踪](https://bj.bcebos.com/v1/paddledet/models/pipeline/mot_ppyoloe_l_36e_pipeline.zip) | 182M | +| 行人跟踪(轻量级) | 21.0ms | [多目标跟踪](https://bj.bcebos.com/v1/paddledet/models/pipeline/mot_ppyoloe_s_36e_pipeline.zip) | 27M | +| 属性识别(高精度) | 单人8.5ms | [目标检测](https://bj.bcebos.com/v1/paddledet/models/pipeline/mot_ppyoloe_l_36e_pipeline.zip)
<br>[属性识别](https://bj.bcebos.com/v1/paddledet/models/pipeline/strongbaseline_r50_30e_pa100k.zip) | 目标检测:182M<br>属性识别:86M | +| 属性识别(轻量级) | 单人7.1ms | [目标检测](https://bj.bcebos.com/v1/paddledet/models/pipeline/mot_ppyoloe_l_36e_pipeline.zip)<br>[属性识别](https://bj.bcebos.com/v1/paddledet/models/pipeline/strongbaseline_r50_30e_pa100k.zip) | 目标检测:182M<br>属性识别:86M | +| 摔倒识别 | 单人10ms | [多目标跟踪](https://bj.bcebos.com/v1/paddledet/models/pipeline/mot_ppyoloe_l_36e_pipeline.zip)<br>[关键点检测](https://bj.bcebos.com/v1/paddledet/models/pipeline/dark_hrnet_w32_256x192.zip)<br>[基于关键点行为识别](https://bj.bcebos.com/v1/paddledet/models/pipeline/STGCN.zip) | 多目标跟踪:182M<br>关键点检测:101M<br>基于关键点行为识别:21.8M | +| 闯入识别 | 31.8ms | [多目标跟踪](https://bj.bcebos.com/v1/paddledet/models/pipeline/mot_ppyoloe_l_36e_pipeline.zip) | 182M | +| 打架识别 | 19.7ms | [视频分类](https://bj.bcebos.com/v1/paddledet/models/pipeline/mot_ppyoloe_l_36e_pipeline.zip) | 90M | +| 抽烟识别 | 单人15.1ms | [目标检测](https://bj.bcebos.com/v1/paddledet/models/pipeline/mot_ppyoloe_l_36e_pipeline.zip)<br>[基于人体id的目标检测](https://bj.bcebos.com/v1/paddledet/models/pipeline/ppyoloe_crn_s_80e_smoking_visdrone.zip) | 目标检测:182M<br>基于人体id的目标检测:27M | +| 打电话识别 | 单人ms | [目标检测](https://bj.bcebos.com/v1/paddledet/models/pipeline/mot_ppyoloe_l_36e_pipeline.zip)<br>[基于人体id的图像分类](https://bj.bcebos.com/v1/paddledet/models/pipeline/PPHGNet_tiny_calling_halfbody.zip) | 目标检测:182M<br>基于人体id的图像分类:45M | + +
    + + +点击模型方案中的模型即可下载指定模型,下载后解压存放至`./output_inference`目录中 + +## 📚 文档教程 + +### [快速开始](docs/tutorials/PPHuman_QUICK_STARTED.md) + +### 行人属性/特征识别 + +* [快速开始](docs/tutorials/pphuman_attribute.md) +* [二次开发教程](../../docs/advanced_tutorials/customization/pphuman_attribute.md) + * 数据准备 + * 模型优化 + * 新增属性 + +### 行为识别 + +* [快速开始](docs/tutorials/pphuman_action.md) + * 摔倒检测 + * 打架识别 +* [二次开发教程](../../docs/advanced_tutorials/customization/action_recognotion/README.md) + * 方案选择 + * 数据准备 + * 模型优化 + * 新增行为 + +### 跨镜跟踪ReID + +* [快速开始](docs/tutorials/pphuman_mtmct.md) +* [二次开发教程](../../docs/advanced_tutorials/customization/pphuman_mtmct.md) + * 数据准备 + * 模型优化 + +### 行人跟踪、人流量计数与轨迹记录 + +* [快速开始](docs/tutorials/pphuman_mot.md) + * 行人跟踪 + * 人流量计数与轨迹记录 + * 区域闯入判断和计数 +* [二次开发教程](../../docs/advanced_tutorials/customization/pphuman_mot.md) + * 数据准备 + * 模型优化 diff --git a/deploy/pipeline/README_en.md b/deploy/pipeline/README_en.md new file mode 100644 index 0000000000000000000000000000000000000000..227d08ec7b1467d48c365629373b09c196c32528 --- /dev/null +++ b/deploy/pipeline/README_en.md @@ -0,0 +1,113 @@ +[简体中文](README.md) | English + +# Real Time Pedestrian Analysis Tool PP-Human + +**PP-Human is the industry's first open-sourced real-time pedestrian analysis tool based on PaddlePaddle deep learning framework. It has three major features: rich functions, wide application, and efficient deployment.** + + + +![](https://user-images.githubusercontent.com/22989727/178965250-14be25c1-125d-4d90-8642-7a9b01fecbe2.gif) + + + +PP-Human supports various inputs such as images, single-camera, and multi-camera videos. It covers multi-object tracking, attributes recognition, behavior analysis, visitor traffic statistics, and trace records. PP-Human can be applied to fields including Smart Transportation, Smart Community, and industrial inspections. It can also be deployed on server sides and TensorRT accelerator. On the T4 server, it could achieve real-time analysis. + +## 📣 Updates + +- 🔥 **2022.7.13:PP-Human v2 launched with a full upgrade of four industrial features: behavior analysis, attributes recognition, visitor traffic statistics and ReID. It provides a strong core algorithm for pedestrian detection, tracking and attribute analysis with a simple and detailed development process and model optimization strategy.** +- 2022.4.18: Add PP-Human practical tutorials, including training, deployment, and action expansion. Details for AIStudio project please see [Link](https://aistudio.baidu.com/aistudio/projectdetail/3842982) + +- 2022.4.10: Add PP-Human examples; empower refined management of intelligent community management. A quick start for AIStudio [Link](https://aistudio.baidu.com/aistudio/projectdetail/3679564) +- 2022.4.5: Launch the real-time pedestrian analysis tool PP-Human. It supports pedestrian tracking, visitor traffic statistics, attributes recognition, and falling detection. Due to its specific optimization of real-scene data, it can accurately recognize various falling gestures, and adapt to different environmental backgrounds, light and camera angles. 
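The model zoo tables below link to zipped inference bundles, and the tutorials expect them unpacked under `./output_inference`. A small helper along these lines (function name hypothetical; URL taken from the tables below) automates the fetch-and-unpack step:

```python
import io
import urllib.request
import zipfile
from pathlib import Path

def fetch_model(url: str, dest: str = 'output_inference') -> Path:
    """Download one inference bundle and unpack it under ./output_inference."""
    out = Path(dest)
    out.mkdir(parents=True, exist_ok=True)
    with urllib.request.urlopen(url) as resp:
        with zipfile.ZipFile(io.BytesIO(resp.read())) as zf:
            zf.extractall(out)
    return out

# Example: the lightweight multi-object tracking model from the tables below.
fetch_model('https://bj.bcebos.com/v1/paddledet/models/pipeline/'
            'mot_ppyoloe_s_36e_pipeline.zip')
```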
+ +## 🔮 Features and demonstration + +| ⭐ Feature | 💟 Advantages | 💡Example | +| -------------------------------------------------- | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | --------------------------------------------------------------------------------------------------------------------------------------------- | +| **ReID** | Extraordinary performance: special optimization for technical challenges such as target occlusion, uncompleted and blurry objects to achieve mAP 98.8, 1.5ms/person | | +| **Attribute analysis** | Compatible with a variety of data formats: support for images, video input

    High performance: Integrated open-sourced datasets with real enterprise data for training, achieved mAP 94.86, 2ms/person

    Support 26 attributes: gender, age, glasses, tops, shoes, hats, backpacks and other 26 high-frequency attributes | | +| **Behaviour detection** | Rich function: support five high-frequency anomaly behavior detection of falling, fighting, smoking, telephoning, and intrusion

    Robust: unlimited by different environmental backgrounds, light, and camera angles.

    High performance: Compared with video recognition technology, it takes significantly smaller computation resources; support localization and service-oriented rapid deployment

    Fast training: only takes 15 minutes to produce high precision behavior detection models | | +| **Visitor traffic statistics**
    **Trace record** | Simple and easy to use: single parameter to initiate functions of visitor traffic statistics and trace record | | + +## 🗳 Model Zoo + +
+ Single model results (click to expand) + +| Task | Application | Accuracy | Inference speed(ms) | Model size | Inference deployment model | +|:---:|:---:|:---:|:---:|:---:|:---:| +| Object detection (high precision) | Image input | mAP: 57.8 | 25.1ms | 182M | [Link](https://bj.bcebos.com/v1/paddledet/models/pipeline/mot_ppyoloe_l_36e_pipeline.zip) | +| Object detection (Lightweight) | Image input | mAP: 53.2 | 16.2ms | 27M | [Link](https://bj.bcebos.com/v1/paddledet/models/pipeline/mot_ppyoloe_s_36e_pipeline.zip) | +| Object tracking (high precision) | Video input | MOTA: 82.2 | 31.8ms | 182M | [Link](https://bj.bcebos.com/v1/paddledet/models/pipeline/mot_ppyoloe_l_36e_pipeline.zip) | +| Object tracking (Lightweight) | Video input | MOTA: 73.9 | 21.0ms | 27M | [Link](https://bj.bcebos.com/v1/paddledet/models/pipeline/mot_ppyoloe_s_36e_pipeline.zip) | +| Attribute recognition (high precision) | Image/Video input Attribute recognition | mA: 95.4 | Single person 4.2ms | 86M | [Link](https://bj.bcebos.com/v1/paddledet/models/pipeline/PPHGNet_small_person_attribute_954_infer.zip) | +| Attribute recognition (Lightweight) | Image/Video input Attribute recognition | mA: 94.5 | Single person 2.9ms | 7.2M | [Link](https://bj.bcebos.com/v1/paddledet/models/pipeline/PPLCNet_x1_0_person_attribute_945_infer.zip) | +| Keypoint detection | Video input Behavior detection | AP: 87.1 | Single person 5.7ms | 101M | [Link](https://bj.bcebos.com/v1/paddledet/models/pipeline/dark_hrnet_w32_256x192.zip) | +| Classification based on key point sequences | Video input Behavior detection | Accuracy: 96.43 | Single person 0.07ms | 21.8M | [Link](https://bj.bcebos.com/v1/paddledet/models/pipeline/STGCN.zip) | +| Image classification based on Human ID | Video input Behavior detection | Accuracy: 86.85 | Single person 1.8ms | 45M | [Link](https://bj.bcebos.com/v1/paddledet/models/pipeline/PPHGNet_tiny_calling_halfbody.zip) | +| Detection based on Human ID | Video input Behavior detection | AP50: 79.5 | Single person 10.9ms | 27M | [Link](https://bj.bcebos.com/v1/paddledet/models/pipeline/ppyoloe_crn_s_80e_smoking_visdrone.zip) | +| Video classification | Video input Behavior detection | Accuracy: 89.0 | 19.7ms/1s Video | 90M | [Link](https://videotag.bj.bcebos.com/PaddleVideo-release2.3/ppTSM_fight.pdparams) | +| ReID | Video input ReID | mAP: 98.8 | Single person 0.23ms | 85M | [Link](https://bj.bcebos.com/v1/paddledet/models/pipeline/reid_model.zip) | + +
+ +
    +End-to-end model results (click to expand) + +| Task | End-to-End Speed(ms) | Model | Size | +|:--------------------------------------:|:--------------------:|:-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------:|:------------------------------------------------------------------------------------------------------:| +| Pedestrian detection (high precision) | 25.1ms | [Multi-object tracking](https://bj.bcebos.com/v1/paddledet/models/pipeline/mot_ppyoloe_l_36e_pipeline.zip) | 182M | +| Pedestrian detection (lightweight) | 16.2ms | [Multi-object tracking](https://bj.bcebos.com/v1/paddledet/models/pipeline/mot_ppyoloe_s_36e_pipeline.zip) | 27M | +| Pedestrian tracking (high precision) | 31.8ms | [Multi-object tracking](https://bj.bcebos.com/v1/paddledet/models/pipeline/mot_ppyoloe_l_36e_pipeline.zip) | 182M | +| Pedestrian tracking (lightweight) | 21.0ms | [Multi-object tracking](https://bj.bcebos.com/v1/paddledet/models/pipeline/mot_ppyoloe_s_36e_pipeline.zip) | 27M | +| Attribute recognition (high precision) | Single person8.5ms | [Object detection](https://bj.bcebos.com/v1/paddledet/models/pipeline/mot_ppyoloe_l_36e_pipeline.zip)
    [Attribute recognition](https://bj.bcebos.com/v1/paddledet/models/pipeline/strongbaseline_r50_30e_pa100k.zip) | Object detection:182M
    Attribute recognition:86M | +| Attribute recognition (lightweight) | Single person 7.1ms | [Object detection](https://bj.bcebos.com/v1/paddledet/models/pipeline/mot_ppyoloe_l_36e_pipeline.zip)
    [Attribute recognition](https://bj.bcebos.com/v1/paddledet/models/pipeline/strongbaseline_r50_30e_pa100k.zip) | Object detection:182M
    Attribute recognition:86M | +| Falling detection | Single person 10ms | [Multi-object tracking](https://bj.bcebos.com/v1/paddledet/models/pipeline/mot_ppyoloe_l_36e_pipeline.zip)
    [Keypoint detection](https://bj.bcebos.com/v1/paddledet/models/pipeline/dark_hrnet_w32_256x192.zip)
    [Behavior detection based on key points](https://bj.bcebos.com/v1/paddledet/models/pipeline/STGCN.zip) | Multi-object tracking:182M
    Keypoint detection:101M
    Behavior detection based on key points: 21.8M | +| Intrusion detection | 31.8ms | [Multi-object tracking](https://bj.bcebos.com/v1/paddledet/models/pipeline/mot_ppyoloe_l_36e_pipeline.zip) | 182M | +| Fighting detection | 19.7ms | [Video classification](https://bj.bcebos.com/v1/paddledet/models/pipeline/mot_ppyoloe_l_36e_pipeline.zip) | 90M | +| Smoking detection | Single person 15.1ms | [Object detection](https://bj.bcebos.com/v1/paddledet/models/pipeline/mot_ppyoloe_l_36e_pipeline.zip)
    [Object detection based on Human Id](https://bj.bcebos.com/v1/paddledet/models/pipeline/ppyoloe_crn_s_80e_smoking_visdrone.zip) | Object detection:182M
    Object detection based on Human ID: 27M | +| Phoning detection | Single person ms | [Object detection](https://bj.bcebos.com/v1/paddledet/models/pipeline/mot_ppyoloe_l_36e_pipeline.zip)
    [Image classification based on Human ID](https://bj.bcebos.com/v1/paddledet/models/pipeline/PPHGNet_tiny_calling_halfbody.zip) | Object detection:182M
    Image classification based on Human ID:45M | + +
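+The download-and-unpack step for any archive listed above can also be scripted; below is a minimal Python sketch, assuming network access (the helper name `fetch_pipeline_model` is illustrative, and the URL is the multi-object tracking archive from the table):
+
+```python
+import os
+import urllib.request
+import zipfile
+
+# URL taken from the end-to-end table above; any other listed archive works the same way.
+MODEL_URL = "https://bj.bcebos.com/v1/paddledet/models/pipeline/mot_ppyoloe_l_36e_pipeline.zip"
+TARGET_DIR = "./output_inference"
+
+
+def fetch_pipeline_model(url, target_dir):
+    """Download a model archive and unpack it under target_dir."""
+    os.makedirs(target_dir, exist_ok=True)
+    archive_path = os.path.join(target_dir, os.path.basename(url))
+    if not os.path.exists(archive_path):
+        urllib.request.urlretrieve(url, archive_path)
+    with zipfile.ZipFile(archive_path) as zf:
+        # e.g. yields output_inference/mot_ppyoloe_l_36e_pipeline/
+        zf.extractall(target_dir)
+
+
+if __name__ == "__main__":
+    fetch_pipeline_model(MODEL_URL, TARGET_DIR)
+```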
+ +Click to download the model, then unzip and save it in the `./output_inference` directory. + +## 📚 Doc Tutorials + +### [A Quick Start](docs/tutorials/PPHuman_QUICK_STARTED.md) + +### Pedestrian attribute/feature recognition + +* [A quick start](docs/tutorials/pphuman_attribute.md) +* [Customized development tutorials](../../docs/advanced_tutorials/customization/pphuman_attribute.md) + * Data Preparation + * Model Optimization + * New Attributes + +### Behavior detection + +* [A quick start](docs/tutorials/pphuman_action.md) + * Falling detection + * Fighting detection +* [Customized development tutorials](../../docs/advanced_tutorials/customization/action_recognotion/README.md) + * Solution Selection + * Data Preparation + * Model Optimization + * New Attributes + +### ReID + +* [A quick start](docs/tutorials/pphuman_mtmct.md) +* [Customized development tutorials](../../docs/advanced_tutorials/customization/pphuman_mtmct.md) + * Data Preparation + * Model Optimization + +### Pedestrian tracking, visitor traffic statistics, trace records + +* [A quick start](docs/tutorials/pphuman_mot.md) + * Pedestrian tracking + * Visitor traffic statistics + * Regional intrusion diagnosis and counting +* [Customized development tutorials](../../docs/advanced_tutorials/customization/pphuman_mot.md) + * Data Preparation + * Model Optimization diff --git a/deploy/pphuman/__init__.py b/deploy/pipeline/__init__.py similarity index 100% rename from deploy/pphuman/__init__.py rename to deploy/pipeline/__init__.py diff --git a/deploy/pipeline/cfg_utils.py b/deploy/pipeline/cfg_utils.py new file mode 100644 index 0000000000000000000000000000000000000000..1e42b29c04068a55fe09d5c16c92fcc25b1fb7cd --- /dev/null +++ b/deploy/pipeline/cfg_utils.py @@ -0,0 +1,204 @@ +import ast +import yaml +import copy +import argparse +from argparse import ArgumentParser, RawDescriptionHelpFormatter + + +class ArgsParser(ArgumentParser): + def __init__(self): + super(ArgsParser, self).__init__( + formatter_class=RawDescriptionHelpFormatter) + self.add_argument( + "-o", "--opt", nargs='*', help="set configuration options") + + def parse_args(self, argv=None): + args = super(ArgsParser, self).parse_args(argv) + assert args.config is not None, \ + "Please specify --config=configure_file_path." + args.opt = self._parse_opt(args.opt) + return args + + def _parse_opt(self, opts): + # Parse `-o key=value` pairs; dotted keys such as `MOT.batch_size=3` + # are expanded into nested dicts, e.g. {'MOT': {'batch_size': 3}}. + config = {} + if not opts: + return config + for s in opts: + s = s.strip() + k, v = s.split('=', 1) + if '.' not in k: + config[k] = yaml.load(v, Loader=yaml.Loader) + else: + keys = k.split('.') + if keys[0] not in config: + config[keys[0]] = {} + cur = config[keys[0]] + for idx, key in enumerate(keys[1:]): + if idx == len(keys) - 2: + cur[key] = yaml.load(v, Loader=yaml.Loader) + else: + cur[key] = {} + cur = cur[key] + return config + + +def argsparser(): + parser = ArgsParser() + + parser.add_argument( + "--config", + type=str, + default=None, + help=("Path of config file"), + required=True) + parser.add_argument( + "--image_file", type=str, default=None, help="Path of image file.") + parser.add_argument( + "--image_dir", + type=str, + default=None, + help="Dir of image file, `image_file` has a higher priority.") + parser.add_argument( + "--video_file", + type=str, + default=None, + help="Path of video file, `video_file` or `camera_id` has the highest priority."
+ ) + parser.add_argument( + "--video_dir", + type=str, + default=None, + help="Dir of video file, `video_file` has a higher priority.") + parser.add_argument( + "--camera_id", + type=int, + default=-1, + help="device id of camera to predict.") + parser.add_argument( + "--output_dir", + type=str, + default="output", + help="Directory of output visualization files.") + parser.add_argument( + "--run_mode", + type=str, + default='paddle', + help="mode of running(paddle/trt_fp32/trt_fp16/trt_int8)") + parser.add_argument( + "--device", + type=str, + default='cpu', + help="Choose the device you want to run, it can be: CPU/GPU/XPU, default is CPU." + ) + parser.add_argument( + "--enable_mkldnn", + type=ast.literal_eval, + default=False, + help="Whether use mkldnn with CPU.") + parser.add_argument( + "--cpu_threads", type=int, default=1, help="Num of threads with CPU.") + parser.add_argument( + "--trt_min_shape", type=int, default=1, help="min_shape for TensorRT.") + parser.add_argument( + "--trt_max_shape", + type=int, + default=1280, + help="max_shape for TensorRT.") + parser.add_argument( + "--trt_opt_shape", + type=int, + default=640, + help="opt_shape for TensorRT.") + parser.add_argument( + "--trt_calib_mode", + type=bool, + default=False, + help="If the model is produced by TRT offline quantitative " + "calibration, trt_calib_mode need to set True.") + parser.add_argument( + "--do_entrance_counting", + action='store_true', + help="Whether counting the numbers of identifiers entering " + "or getting out from the entrance. Note that only support single-class MOT." + ) + parser.add_argument( + "--do_break_in_counting", + action='store_true', + help="Whether counting the numbers of identifiers break in " + "the area. Note that only support single-class MOT and " + "the video should be taken by a static camera.") + parser.add_argument( + "--region_type", + type=str, + default='horizontal', + help="Area type for entrance counting or break in counting, 'horizontal' and " + "'vertical' used when do entrance counting. 'custom' used when do break in counting. " + "Note that only support single-class MOT, and the video should be taken by a static camera." + ) + parser.add_argument( + '--region_polygon', + nargs='+', + type=int, + default=[], + help="Clockwise point coords (x0,y0,x1,y1...) of polygon of area when " + "do_break_in_counting. 
Note that only support single-class MOT and " + "the video should be taken by a static camera.") + parser.add_argument( + "--secs_interval", + type=int, + default=2, + help="The seconds interval to count after tracking") + parser.add_argument( + "--draw_center_traj", + action='store_true', + help="Whether drawing the trajectory of center") + + return parser + + +def merge_cfg(args): + # load config + with open(args.config) as f: + pred_config = yaml.safe_load(f) + + def merge(cfg, arg): + # update cfg from arg directly + merge_cfg = copy.deepcopy(cfg) + for k, v in cfg.items(): + if k in arg: + merge_cfg[k] = arg[k] + else: + if isinstance(v, dict): + merge_cfg[k] = merge(v, arg) + + return merge_cfg + + def merge_opt(cfg, arg): + merge_cfg = copy.deepcopy(cfg) + # merge opt + if 'opt' in arg.keys() and arg['opt']: + for name, value in arg['opt'].items( + ): # example: {'MOT': {'batch_size': 3}} + if name not in merge_cfg.keys(): + print("No", name, "in config file!") + continue + for sub_k, sub_v in value.items(): + if sub_k not in merge_cfg[name].keys(): + print("No", sub_k, "in config file of", name, "!") + continue + merge_cfg[name][sub_k] = sub_v + + return merge_cfg + + args_dict = vars(args) + pred_config = merge(pred_config, args_dict) + pred_config = merge_opt(pred_config, args_dict) + + return pred_config + + +def print_arguments(cfg): + print('----------- Running Arguments -----------') + buffer = yaml.dump(cfg) + print(buffer) + print('------------------------------------------') diff --git a/deploy/pipeline/config/examples/infer_cfg_calling.yml b/deploy/pipeline/config/examples/infer_cfg_calling.yml new file mode 100644 index 0000000000000000000000000000000000000000..8d74712aa0b8c9f16d7b89aec5307d16253438c3 --- /dev/null +++ b/deploy/pipeline/config/examples/infer_cfg_calling.yml @@ -0,0 +1,17 @@ +crop_thresh: 0.5 +visual: True +warmup_frame: 50 + +MOT: + model_dir: https://bj.bcebos.com/v1/paddledet/models/pipeline/mot_ppyoloe_l_36e_pipeline.zip + tracker_config: deploy/pipeline/config/tracker_config.yml + batch_size: 1 + enable: True + +ID_BASED_CLSACTION: + model_dir: https://bj.bcebos.com/v1/paddledet/models/pipeline/PPHGNet_tiny_calling_halfbody.zip + batch_size: 8 + threshold: 0.8 + display_frames: 80 + skip_frame_num: 2 + enable: True diff --git a/deploy/pipeline/config/examples/infer_cfg_fall_down.yml b/deploy/pipeline/config/examples/infer_cfg_fall_down.yml new file mode 100644 index 0000000000000000000000000000000000000000..5dc38bb23b161cb7e84e027a3a4dd381da3d246b --- /dev/null +++ b/deploy/pipeline/config/examples/infer_cfg_fall_down.yml @@ -0,0 +1,22 @@ +crop_thresh: 0.5 +kpt_thresh: 0.2 +visual: True +warmup_frame: 50 + +MOT: + model_dir: https://bj.bcebos.com/v1/paddledet/models/pipeline/mot_ppyoloe_l_36e_pipeline.zip + tracker_config: deploy/pipeline/config/tracker_config.yml + batch_size: 1 + enable: True + +KPT: + model_dir: https://bj.bcebos.com/v1/paddledet/models/pipeline/dark_hrnet_w32_256x192.zip + batch_size: 8 + +SKELETON_ACTION: + model_dir: https://bj.bcebos.com/v1/paddledet/models/pipeline/STGCN.zip + batch_size: 1 + max_frames: 50 + display_frames: 80 + coord_size: [384, 512] + enable: True diff --git a/deploy/pipeline/config/examples/infer_cfg_fight_recognition.yml b/deploy/pipeline/config/examples/infer_cfg_fight_recognition.yml new file mode 100644 index 0000000000000000000000000000000000000000..76826ebaa45c0345e94d5ab8218293844cc96697 --- /dev/null +++ b/deploy/pipeline/config/examples/infer_cfg_fight_recognition.yml @@ -0,0 +1,11 @@ +visual: 
True +warmup_frame: 50 + +VIDEO_ACTION: + model_dir: https://videotag.bj.bcebos.com/PaddleVideo-release2.3/ppTSM_fight.zip + batch_size: 1 + frame_len: 8 + sample_freq: 7 + short_size: 340 + target_size: 320 + enable: True diff --git a/deploy/pipeline/config/examples/infer_cfg_human_attr.yml b/deploy/pipeline/config/examples/infer_cfg_human_attr.yml new file mode 100644 index 0000000000000000000000000000000000000000..b4de76b3be2b4113c327c88ced7838224ced125b --- /dev/null +++ b/deploy/pipeline/config/examples/infer_cfg_human_attr.yml @@ -0,0 +1,15 @@ +crop_thresh: 0.5 +attr_thresh: 0.5 +visual: True +warmup_frame: 50 + +MOT: + model_dir: https://bj.bcebos.com/v1/paddledet/models/pipeline/mot_ppyoloe_l_36e_pipeline.zip + tracker_config: deploy/pipeline/config/tracker_config.yml + batch_size: 1 + enable: True + +ATTR: + model_dir: https://bj.bcebos.com/v1/paddledet/models/pipeline/PPLCNet_x1_0_person_attribute_945_infer.zip + batch_size: 8 + enable: True diff --git a/deploy/pipeline/config/examples/infer_cfg_human_mot.yml b/deploy/pipeline/config/examples/infer_cfg_human_mot.yml new file mode 100644 index 0000000000000000000000000000000000000000..7b9e739d4aa9139055c732728370fc414f31cfee --- /dev/null +++ b/deploy/pipeline/config/examples/infer_cfg_human_mot.yml @@ -0,0 +1,9 @@ +crop_thresh: 0.5 +visual: True +warmup_frame: 50 + +MOT: + model_dir: https://bj.bcebos.com/v1/paddledet/models/pipeline/mot_ppyoloe_l_36e_pipeline.zip + tracker_config: deploy/pipeline/config/tracker_config.yml + batch_size: 1 + enable: True diff --git a/deploy/pipeline/config/examples/infer_cfg_reid.yml b/deploy/pipeline/config/examples/infer_cfg_reid.yml new file mode 100644 index 0000000000000000000000000000000000000000..42c7f6f20d7c194aa03b8d54df81958211b72452 --- /dev/null +++ b/deploy/pipeline/config/examples/infer_cfg_reid.yml @@ -0,0 +1,14 @@ +crop_thresh: 0.5 +visual: True +warmup_frame: 50 + +MOT: + model_dir: https://bj.bcebos.com/v1/paddledet/models/pipeline/mot_ppyoloe_l_36e_pipeline.zip + tracker_config: deploy/pipeline/config/tracker_config.yml + batch_size: 1 + enable: True + +REID: + model_dir: https://bj.bcebos.com/v1/paddledet/models/pipeline/reid_model.zip + batch_size: 16 + enable: True diff --git a/deploy/pipeline/config/examples/infer_cfg_smoking.yml b/deploy/pipeline/config/examples/infer_cfg_smoking.yml new file mode 100644 index 0000000000000000000000000000000000000000..41a1475303ee25fe6f35c58d39891a868d9cecab --- /dev/null +++ b/deploy/pipeline/config/examples/infer_cfg_smoking.yml @@ -0,0 +1,17 @@ +crop_thresh: 0.5 +visual: True +warmup_frame: 50 + +MOT: + model_dir: https://bj.bcebos.com/v1/paddledet/models/pipeline/mot_ppyoloe_l_36e_pipeline.zip + tracker_config: deploy/pipeline/config/tracker_config.yml + batch_size: 1 + enable: True + +ID_BASED_DETACTION: + model_dir: https://bj.bcebos.com/v1/paddledet/models/pipeline/ppyoloe_crn_s_80e_smoking_visdrone.zip + batch_size: 8 + threshold: 0.6 + display_frames: 80 + skip_frame_num: 2 + enable: True diff --git a/deploy/pipeline/config/infer_cfg_pphuman.yml b/deploy/pipeline/config/infer_cfg_pphuman.yml new file mode 100644 index 0000000000000000000000000000000000000000..a75c3222946691e50c8cba086e013feaaf718539 --- /dev/null +++ b/deploy/pipeline/config/infer_cfg_pphuman.yml @@ -0,0 +1,62 @@ +crop_thresh: 0.5 +attr_thresh: 0.5 +kpt_thresh: 0.2 +visual: True +warmup_frame: 50 + +DET: + model_dir: https://bj.bcebos.com/v1/paddledet/models/pipeline/mot_ppyoloe_l_36e_pipeline.zip + batch_size: 1 + +MOT: + model_dir: 
https://bj.bcebos.com/v1/paddledet/models/pipeline/mot_ppyoloe_l_36e_pipeline.zip + tracker_config: deploy/pipeline/config/tracker_config.yml + batch_size: 1 + enable: False + +KPT: + model_dir: https://bj.bcebos.com/v1/paddledet/models/pipeline/dark_hrnet_w32_256x192.zip + batch_size: 8 + +ATTR: + model_dir: https://bj.bcebos.com/v1/paddledet/models/pipeline/PPLCNet_x1_0_person_attribute_945_infer.zip + batch_size: 8 + enable: False + +VIDEO_ACTION: + model_dir: https://videotag.bj.bcebos.com/PaddleVideo-release2.3/ppTSM_fight.zip + batch_size: 1 + frame_len: 8 + sample_freq: 7 + short_size: 340 + target_size: 320 + enable: False + +SKELETON_ACTION: + model_dir: https://bj.bcebos.com/v1/paddledet/models/pipeline/STGCN.zip + batch_size: 1 + max_frames: 50 + display_frames: 80 + coord_size: [384, 512] + enable: False + +ID_BASED_DETACTION: + model_dir: https://bj.bcebos.com/v1/paddledet/models/pipeline/ppyoloe_crn_s_80e_smoking_visdrone.zip + batch_size: 8 + threshold: 0.6 + display_frames: 80 + skip_frame_num: 2 + enable: False + +ID_BASED_CLSACTION: + model_dir: https://bj.bcebos.com/v1/paddledet/models/pipeline/PPHGNet_tiny_calling_halfbody.zip + batch_size: 8 + threshold: 0.8 + display_frames: 80 + skip_frame_num: 2 + enable: False + +REID: + model_dir: https://bj.bcebos.com/v1/paddledet/models/pipeline/reid_model.zip + batch_size: 16 + enable: False diff --git a/deploy/pipeline/config/infer_cfg_ppvehicle.yml b/deploy/pipeline/config/infer_cfg_ppvehicle.yml new file mode 100644 index 0000000000000000000000000000000000000000..87d84dee9a16908bfbec258a0f0531ac6a4f692c --- /dev/null +++ b/deploy/pipeline/config/infer_cfg_ppvehicle.yml @@ -0,0 +1,35 @@ +crop_thresh: 0.5 +visual: True +warmup_frame: 50 + +DET: + model_dir: output_inference/mot_ppyoloe_l_36e_ppvehicle/ + batch_size: 1 + +MOT: + model_dir: output_inference/mot_ppyoloe_l_36e_ppvehicle/ + tracker_config: deploy/pipeline/config/tracker_config.yml + batch_size: 1 + enable: False + +VEHICLE_PLATE: + det_model_dir: output_inference/ch_PP-OCRv3_det_infer/ + det_limit_side_len: 480 + det_limit_type: "max" + rec_model_dir: output_inference/ch_PP-OCRv3_rec_infer/ + rec_image_shape: [3, 48, 320] + rec_batch_num: 6 + word_dict_path: deploy/pipeline/ppvehicle/rec_word_dict.txt + enable: False + +VEHICLE_ATTR: + model_dir: output_inference/vehicle_attribute_infer/ + batch_size: 8 + color_threshold: 0.5 + type_threshold: 0.5 + enable: False + +REID: + model_dir: output_inference/vehicle_reid_model/ + batch_size: 16 + enable: False diff --git a/deploy/pphuman/config/tracker_config.yml b/deploy/pipeline/config/tracker_config.yml similarity index 58% rename from deploy/pphuman/config/tracker_config.yml rename to deploy/pipeline/config/tracker_config.yml index 5182da93e3f19bd49421e09d1ec01be0c0f11643..c4f3c894655c3a8c58bdfb1b124d98427eeef4df 100644 --- a/deploy/pphuman/config/tracker_config.yml +++ b/deploy/pipeline/config/tracker_config.yml @@ -2,7 +2,8 @@ # The tracker of MOT JDE Detector (such as FairMOT) is exported together with the model. # Here 'min_box_area' and 'vertical_ratio' are set for pedestrian, you can modify for other objects tracking. 
-type: JDETracker # 'JDETracker' or 'DeepSORTTracker' +type: OCSORTTracker # choose one tracker in ['JDETracker', 'OCSORTTracker'] + # BYTETracker JDETracker: @@ -11,16 +12,17 @@ JDETracker: conf_thres: 0.6 low_conf_thres: 0.1 match_thres: 0.9 - min_box_area: 100 - vertical_ratio: 1.6 # for pedestrian + min_box_area: 0 + vertical_ratio: 0 # 1.6 for pedestrian + -DeepSORTTracker: - input_size: [64, 192] +OCSORTTracker: + det_thresh: 0.4 + max_age: 30 + min_hits: 3 + iou_threshold: 0.3 + delta_t: 3 + inertia: 0.2 + vertical_ratio: 0 min_box_area: 0 - vertical_ratio: -1 - budget: 100 - max_age: 70 - n_init: 3 - metric_type: cosine - matching_threshold: 0.2 - max_iou_distance: 0.9 + use_byte: False diff --git a/deploy/pphuman/datacollector.py b/deploy/pipeline/datacollector.py similarity index 64% rename from deploy/pphuman/datacollector.py rename to deploy/pipeline/datacollector.py index cd459aad0680418bcd087d00662b0c310151ffc3..794711f04868d2dd70b13e235825472f855a290b 100644 --- a/deploy/pphuman/datacollector.py +++ b/deploy/pipeline/datacollector.py @@ -14,6 +14,7 @@ import os import copy +from collections import Counter class Result(object): @@ -23,8 +24,13 @@ class Result(object): 'mot': dict(), 'attr': dict(), 'kpt': dict(), - 'action': dict(), - 'reid': dict() + 'video_action': dict(), + 'skeleton_action': dict(), + 'reid': dict(), + 'det_action': dict(), + 'cls_action': dict(), + 'vehicleplate': dict(), + 'vehicle_attr': dict() } def update(self, res, name): @@ -35,10 +41,13 @@ class Result(object): return self.res_dict[name] return None + def clear(self, name): + self.res_dict[name].clear() + class DataCollector(object): """ - DataCollector of pphuman Pipeline, collect results in every frames and assign it to each track ids. + DataCollector of Pipeline, collect results in every frames and assign it to each track ids. mainly used in mtmct. data struct: @@ -50,13 +59,13 @@ class DataCollector(object): - qualities(list of float): Nx[float] - attrs(list of attr): refer to attrs for details - kpts(list of kpts): refer to kpts for details - - actions(list of actions): refer to actions for details + - skeleton_action(list of skeleton_action): refer to skeleton_action for details ... 
- [idN] """ def __init__(self): - #id, frame, rect, score, label, attrs, kpts, actions + #id, frame, rect, score, label, attrs, kpts, skeleton_action self.mots = { "frames": [], "rects": [], @@ -64,7 +73,8 @@ class DataCollector(object): "kpts": [], "features": [], "qualities": [], - "actions": [] + "skeleton_action": [], + "vehicleplate": [] } self.collector = {} @@ -72,15 +82,20 @@ class DataCollector(object): mot_res = Result.get('mot') attr_res = Result.get('attr') kpt_res = Result.get('kpt') - action_res = Result.get('action') + skeleton_action_res = Result.get('skeleton_action') reid_res = Result.get('reid') + vehicleplate_res = Result.get('vehicleplate') + + rects = [] + if reid_res is not None: + rects = reid_res['rects'] + elif mot_res is not None: + rects = mot_res['boxes'] - rects = reid_res['rects'] if reid_res is not None else mot_res['boxes'] for idx, mot_item in enumerate(rects): ids = int(mot_item[0]) if ids not in self.collector: self.collector[ids] = copy.deepcopy(self.mots) - self.collector[ids]["frames"].append(frameid) self.collector[ids]["rects"].append([mot_item[2:]]) if attr_res: @@ -88,16 +103,29 @@ class DataCollector(object): if kpt_res: self.collector[ids]["kpts"].append( [kpt_res['keypoint'][0][idx], kpt_res['keypoint'][1][idx]]) - if action_res: - self.collector[ids]["actions"].append(action_res[idx + 1]) + if skeleton_action_res and (idx + 1) in skeleton_action_res: + self.collector[ids]["skeleton_action"].append( + skeleton_action_res[idx + 1]) else: # action model generate result per X frames, Not available every frames - self.collector[ids]["actions"].append(None) + self.collector[ids]["skeleton_action"].append(None) if reid_res: self.collector[ids]["features"].append(reid_res['features'][ idx]) self.collector[ids]["qualities"].append(reid_res['qualities'][ idx]) + if vehicleplate_res and vehicleplate_res['plate'][idx] != "": + self.collector[ids]["vehicleplate"].append(vehicleplate_res[ + 'plate'][idx]) def get_res(self): return self.collector + + def get_carlp(self, trackid): + lps = self.collector[trackid]["vehicleplate"] + counter = Counter(lps) + carlp = counter.most_common() + if len(carlp) > 0: + return carlp[0][0] + else: + return None diff --git a/deploy/pphuman/docs/images/action.gif b/deploy/pipeline/docs/images/action.gif similarity index 100% rename from deploy/pphuman/docs/images/action.gif rename to deploy/pipeline/docs/images/action.gif diff --git a/deploy/pphuman/docs/images/attribute.gif b/deploy/pipeline/docs/images/attribute.gif similarity index 100% rename from deploy/pphuman/docs/images/attribute.gif rename to deploy/pipeline/docs/images/attribute.gif diff --git a/deploy/pphuman/docs/images/c1.gif b/deploy/pipeline/docs/images/c1.gif similarity index 100% rename from deploy/pphuman/docs/images/c1.gif rename to deploy/pipeline/docs/images/c1.gif diff --git a/deploy/pphuman/docs/images/c2.gif b/deploy/pipeline/docs/images/c2.gif similarity index 100% rename from deploy/pphuman/docs/images/c2.gif rename to deploy/pipeline/docs/images/c2.gif diff --git a/deploy/pipeline/docs/images/calling.gif b/deploy/pipeline/docs/images/calling.gif new file mode 100644 index 0000000000000000000000000000000000000000..52046b249b145d2099c7360d3c56abc3b51764bd Binary files /dev/null and b/deploy/pipeline/docs/images/calling.gif differ diff --git a/deploy/pipeline/docs/images/fight_demo.gif b/deploy/pipeline/docs/images/fight_demo.gif new file mode 100644 index 0000000000000000000000000000000000000000..4add8047baf382ff24a459ff4a74de7ac91d704a Binary files 
/dev/null and b/deploy/pipeline/docs/images/fight_demo.gif differ diff --git a/deploy/pphuman/docs/images/mot.gif b/deploy/pipeline/docs/images/mot.gif similarity index 100% rename from deploy/pphuman/docs/images/mot.gif rename to deploy/pipeline/docs/images/mot.gif diff --git a/deploy/pipeline/docs/images/smoking.gif b/deploy/pipeline/docs/images/smoking.gif new file mode 100644 index 0000000000000000000000000000000000000000..f354a71e478353f1ce5cb2270467e273ec74c84e Binary files /dev/null and b/deploy/pipeline/docs/images/smoking.gif differ diff --git a/deploy/pipeline/docs/images/vehicle_attribute.gif b/deploy/pipeline/docs/images/vehicle_attribute.gif new file mode 100644 index 0000000000000000000000000000000000000000..b2ed19934d5cab5229a732ee9cdb45458ab0e8a1 Binary files /dev/null and b/deploy/pipeline/docs/images/vehicle_attribute.gif differ diff --git a/deploy/pipeline/docs/tutorials/PPHuman_QUICK_STARTED.md b/deploy/pipeline/docs/tutorials/PPHuman_QUICK_STARTED.md new file mode 100644 index 0000000000000000000000000000000000000000..fad7845e469ecadc6cfd719a6d63bc1344fe8a81 --- /dev/null +++ b/deploy/pipeline/docs/tutorials/PPHuman_QUICK_STARTED.md @@ -0,0 +1,194 @@ +# 快速开始 + +## 目录 + +- [环境准备](#环境准备) +- [模型下载](#模型下载) +- [配置文件说明](#配置文件说明) +- [预测部署](#预测部署) + - [参数说明](#参数说明) +- [方案介绍](#方案介绍) + - [行人检测](#行人检测) + - [行人跟踪](#行人跟踪) + - [跨镜行人跟踪](#跨镜行人跟踪) + - [属性识别](#属性识别) + - [行为识别](#行为识别) + +## 环境准备 + +环境要求: PaddleDetection版本 >= release/2.4 或 develop版本 + +PaddlePaddle和PaddleDetection安装 + +``` +# PaddlePaddle CUDA10.1 +python -m pip install paddlepaddle-gpu==2.2.2.post101 -f https://www.paddlepaddle.org.cn/whl/linux/mkl/avx/stable.html + +# PaddlePaddle CPU +python -m pip install paddlepaddle -i https://mirror.baidu.com/pypi/simple + +# 克隆PaddleDetection仓库 +cd +git clone https://github.com/PaddlePaddle/PaddleDetection.git + +# 安装其他依赖 +cd PaddleDetection +pip install -r requirements.txt +``` + +1. 详细安装文档参考[文档](../../../../docs/tutorials/INSTALL_cn.md) +2. 如果需要TensorRT推理加速(测速方式),请安装带`TensorRT版本Paddle`。您可以从[Paddle安装包](https://paddleinference.paddlepaddle.org.cn/v2.2/user_guides/download_lib.html#python)下载安装,或者按照[指导文档](https://www.paddlepaddle.org.cn/inference/master/optimize/paddle_trt.html)使用docker或自编译方式准备Paddle环境。 + +## 模型下载 + +PP-Human提供了目标检测、属性识别、行为识别、ReID预训练模型,以实现不同使用场景,用户可以直接下载使用 + +| 任务 | 端到端速度(ms)| 模型方案 | 模型体积 | +| :---------: | :-------: | :------: |:------: | +| 行人检测(高精度) | 25.1ms | [多目标跟踪](https://bj.bcebos.com/v1/paddledet/models/pipeline/mot_ppyoloe_l_36e_pipeline.zip) | 182M | +| 行人检测(轻量级) | 16.2ms | [多目标跟踪](https://bj.bcebos.com/v1/paddledet/models/pipeline/mot_ppyoloe_s_36e_pipeline.zip) | 27M | +| 行人跟踪(高精度) | 31.8ms | [多目标跟踪](https://bj.bcebos.com/v1/paddledet/models/pipeline/mot_ppyoloe_l_36e_pipeline.zip) | 182M | +| 行人跟踪(轻量级) | 21.0ms | [多目标跟踪](https://bj.bcebos.com/v1/paddledet/models/pipeline/mot_ppyoloe_s_36e_pipeline.zip) | 27M | +| 属性识别(高精度) | 单人8.5ms | [目标检测](https://bj.bcebos.com/v1/paddledet/models/pipeline/mot_ppyoloe_l_36e_pipeline.zip)
    [属性识别](https://bj.bcebos.com/v1/paddledet/models/pipeline/PPHGNet_small_person_attribute_954_infer.zip) | 目标检测:182M
    属性识别:86M | +| 属性识别(轻量级) | 单人7.1ms | [目标检测](https://bj.bcebos.com/v1/paddledet/models/pipeline/mot_ppyoloe_l_36e_pipeline.zip)
    [属性识别](https://bj.bcebos.com/v1/paddledet/models/pipeline/PPLCNet_x1_0_person_attribute_945_infer.zip) | 目标检测:182M
    属性识别:86M | +| 摔倒识别 | 单人10ms | [多目标跟踪](https://bj.bcebos.com/v1/paddledet/models/pipeline/mot_ppyoloe_l_36e_pipeline.zip)
    [关键点检测](https://bj.bcebos.com/v1/paddledet/models/pipeline/dark_hrnet_w32_256x192.zip)
    [基于关键点行为识别](https://bj.bcebos.com/v1/paddledet/models/pipeline/STGCN.zip) | 多目标跟踪:182M
    关键点检测:101M
基于关键点行为识别:21.8M | +| 闯入识别 | 31.8ms | [多目标跟踪](https://bj.bcebos.com/v1/paddledet/models/pipeline/mot_ppyoloe_l_36e_pipeline.zip) | 多目标跟踪:182M | +| 打架识别 | 19.7ms | [视频分类](https://videotag.bj.bcebos.com/PaddleVideo-release2.3/ppTSM_fight.zip) | 90M | +| 抽烟识别 | 单人15.1ms | [目标检测](https://bj.bcebos.com/v1/paddledet/models/pipeline/mot_ppyoloe_l_36e_pipeline.zip)
    [基于人体id的目标检测](https://bj.bcebos.com/v1/paddledet/models/pipeline/ppyoloe_crn_s_80e_smoking_visdrone.zip) | 目标检测:182M
    基于人体id的目标检测:27M | +| 打电话识别 | 单人ms | [目标检测](https://bj.bcebos.com/v1/paddledet/models/pipeline/mot_ppyoloe_l_36e_pipeline.zip)
    [基于人体id的图像分类](https://bj.bcebos.com/v1/paddledet/models/pipeline/PPHGNet_tiny_calling_halfbody.zip) | 目标检测:182M
基于人体id的图像分类:45M | + +下载模型后,解压至`./output_inference`文件夹。 + +在配置文件中,模型路径默认为模型的下载路径,如果用户不修改,则在推理时会自动下载对应的模型。 + +**注意:** + +- 模型精度为融合数据集结果,数据集包含开源数据集和企业数据集 +- ReID模型精度为Market1501数据集测试结果 +- 预测速度为T4下,开启TensorRT FP16的效果,模型预测速度包含数据预处理、模型预测、后处理全流程 + +## 配置文件说明 + +PP-Human相关配置位于```deploy/pipeline/config/infer_cfg_pphuman.yml```中,存放模型路径,该配置文件中包含了目前PP-Human支持的所有功能。如果想要查看某个单一功能的配置,请参见```deploy/pipeline/config/examples/```中相关配置。此外,配置文件中的内容可以通过```-o```命令行参数修改,如修改属性的模型目录,则可通过```-o ATTR.model_dir="DIR_PATH"```进行设置。 + +功能及任务类型对应表单如下: + +| 输入类型 | 功能 | 任务类型 | 配置项 | +|-------|-------|----------|-----| +| 图片 | 属性识别 | 目标检测 属性识别 | DET ATTR | +| 单镜头视频 | 属性识别 | 多目标跟踪 属性识别 | MOT ATTR | +| 单镜头视频 | 行为识别 | 多目标跟踪 关键点检测 摔倒识别 | MOT KPT SKELETON_ACTION | + +例如基于视频输入的属性识别,任务类型包含多目标跟踪和属性识别,具体配置如下: + +``` +crop_thresh: 0.5 +attr_thresh: 0.5 +visual: True + +MOT: + model_dir: output_inference/mot_ppyoloe_l_36e_pipeline/ + tracker_config: deploy/pipeline/config/tracker_config.yml + batch_size: 1 + enable: True + +ATTR: + model_dir: output_inference/strongbaseline_r50_30e_pa100k/ + batch_size: 8 + enable: True +``` + +**注意:** + +- 如果用户需要实现不同任务,可以在配置文件中将对应功能的enable选项设置为True。 + + +## 预测部署 + +``` +# 行人检测,指定配置文件路径和测试图片 +python deploy/pipeline/pipeline.py --config deploy/pipeline/config/infer_cfg_pphuman.yml --image_file=test_image.jpg --device=gpu [--run_mode trt_fp16] + +# 行人跟踪,指定配置文件路径和测试视频,在配置文件```deploy/pipeline/config/infer_cfg_pphuman.yml```中的MOT部分enable设置为```True``` +python deploy/pipeline/pipeline.py --config deploy/pipeline/config/infer_cfg_pphuman.yml --video_file=test_video.mp4 --device=gpu [--run_mode trt_fp16] + +# 行人跟踪,指定配置文件路径,模型路径和测试视频,在配置文件```deploy/pipeline/config/infer_cfg_pphuman.yml```中的MOT部分enable设置为```True``` +# 命令行中指定的模型路径优先级高于配置文件 +python deploy/pipeline/pipeline.py --config deploy/pipeline/config/infer_cfg_pphuman.yml --video_file=test_video.mp4 --device=gpu [--run_mode trt_fp16] + +# 行人属性识别,指定配置文件路径和测试视频,在配置文件```deploy/pipeline/config/infer_cfg_pphuman.yml```中的ATTR部分enable设置为```True``` +python deploy/pipeline/pipeline.py --config deploy/pipeline/config/infer_cfg_pphuman.yml --video_file=test_video.mp4 --device=gpu [--run_mode trt_fp16] + +# 行为识别,以摔倒识别为例,指定配置文件路径和测试视频,在配置文件```deploy/pipeline/config/infer_cfg_pphuman.yml```中的SKELETON_ACTION部分enable设置为```True``` +python deploy/pipeline/pipeline.py --config deploy/pipeline/config/infer_cfg_pphuman.yml --video_file=test_video.mp4 --device=gpu [--run_mode trt_fp16] + +# 行人跨镜跟踪,指定配置文件路径和测试视频列表文件夹,在配置文件```deploy/pipeline/config/infer_cfg_pphuman.yml```中的REID部分enable设置为```True``` +python deploy/pipeline/pipeline.py --config deploy/pipeline/config/infer_cfg_pphuman.yml --video_dir=mtmct_dir/ --device=gpu [--run_mode trt_fp16] + +# 行人跨镜跟踪,指定配置文件路径和测试视频列表文件夹,直接使用```deploy/pipeline/config/examples/infer_cfg_reid.yml```配置文件,并利用```-o```命令修改跟踪模型路径 +python deploy/pipeline/pipeline.py --config deploy/pipeline/config/examples/infer_cfg_reid.yml --video_dir=mtmct_dir/ -o MOT.model_dir="mot_model_dir" --device=gpu [--run_mode trt_fp16] + +``` + +对rtsp流的支持:将video_file后面的视频地址更换为rtsp流地址即可,示例如下: +``` +# 行人属性识别,指定配置文件路径和测试视频,在配置文件```deploy/pipeline/config/infer_cfg_pphuman.yml```中的ATTR部分enable设置为```True``` +python deploy/pipeline/pipeline.py --config deploy/pipeline/config/infer_cfg_pphuman.yml -o visual=False --video_file=rtsp://[YOUR_RTSP_SITE] --device=gpu [--run_mode trt_fp16] +``` + +### 参数说明 + +| 参数 | 是否必须|含义 | +|-------|-------|----------| +| --config | Yes | 配置文件路径 | +| -o | Option | 覆盖配置文件中对应的配置 | +| --image_file | Option | 需要预测的图片 | +| --image_dir | Option | 要预测的图片文件夹路径 | |
--video_file | Option | 需要预测的视频,或者rtsp流地址 | +| --camera_id | Option | 用来预测的摄像头ID,默认为-1(表示不使用摄像头预测,可设置为:0 - (摄像头数目-1) ),预测过程中在可视化界面按`q`退出输出预测结果到:output/output.mp4| +| --device | Option | 运行时的设备,可选择`CPU/GPU/XPU`,默认为`CPU`| +| --output_dir | Option|可视化结果保存的根目录,默认为output/| +| --run_mode | Option |使用GPU时,默认为paddle, 可选(paddle/trt_fp32/trt_fp16/trt_int8)| +| --enable_mkldnn | Option | CPU预测中是否开启MKLDNN加速,默认为False | +| --cpu_threads | Option| 设置cpu线程数,默认为1 | +| --trt_calib_mode | Option| TensorRT是否使用校准功能,默认为False。使用TensorRT的int8功能时,需设置为True,使用PaddleSlim量化后的模型时需要设置为False | +| --do_entrance_counting | Option | 是否统计出入口流量,默认为False | +| --draw_center_traj | Option | 是否绘制跟踪轨迹,默认为False | + +## 方案介绍 + +PP-Human v2整体方案如下图所示: + +
    + +
    + + +### 行人检测 +- 采用PP-YOLOE L 作为目标检测模型 +- 详细文档参考[PP-YOLOE](../../../../configs/ppyoloe/)和[检测跟踪文档](pphuman_mot.md) + +### 行人跟踪 +- 采用SDE方案完成行人跟踪 +- 检测模型使用PP-YOLOE L(高精度)和S(轻量级) +- 跟踪模块采用OC-SORT方案 +- 详细文档参考[OC-SORT](../../../../configs/mot/ocsort)和[检测跟踪文档](pphuman_mot.md) + +### 跨镜行人跟踪 +- 使用PP-YOLOE + OC-SORT得到单镜头多目标跟踪轨迹 +- 使用ReID(StrongBaseline网络)对每一帧的检测结果提取特征 +- 多镜头轨迹特征进行匹配,得到跨镜头跟踪结果 +- 详细文档参考[跨镜跟踪](pphuman_mtmct.md) + +### 属性识别 +- 使用PP-YOLOE + OC-SORT跟踪人体 +- 使用StrongBaseline(多分类模型)完成识别属性,主要属性包括年龄、性别、帽子、眼睛、上衣下衣款式、背包等 +- 详细文档参考[属性识别](pphuman_attribute.md) + +### 行为识别: +- 提供四种行为识别方案 +- 1. 基于骨骼点的行为识别,例如摔倒识别 +- 2. 基于图像分类的行为识别,例如打电话识别 +- 3. 基于检测的行为识别,例如吸烟识别 +- 4. 基于视频分类的行为识别,例如打架识别 +- 详细文档参考[行为识别](pphuman_action.md) diff --git a/deploy/pipeline/docs/tutorials/PPVehicle_QUICK_STARTED.md b/deploy/pipeline/docs/tutorials/PPVehicle_QUICK_STARTED.md new file mode 100644 index 0000000000000000000000000000000000000000..6a2f3d64a04f94c38a4b72322bc7255838ee78c0 --- /dev/null +++ b/deploy/pipeline/docs/tutorials/PPVehicle_QUICK_STARTED.md @@ -0,0 +1,137 @@ +# 快速开始 + +## 目录 + +- [环境准备](#环境准备) +- [模型下载](#模型下载) +- [配置文件说明](#配置文件说明) +- [预测部署](#预测部署) + - [参数说明](#参数说明) +- [方案介绍](#方案介绍) + - [车辆检测](#车辆检测) + - [车辆跟踪](#车辆跟踪) + - [车牌识别](#车牌识别) + - [属性识别](#属性识别) + + +## 环境准备 + +环境要求: PaddleDetection版本 >= release/2.4 或 develop版本 + +PaddlePaddle和PaddleDetection安装 + +``` +# PaddlePaddle CUDA10.1 +python -m pip install paddlepaddle-gpu==2.2.2.post101 -f https://www.paddlepaddle.org.cn/whl/linux/mkl/avx/stable.html + +# PaddlePaddle CPU +python -m pip install paddlepaddle -i https://mirror.baidu.com/pypi/simple + +# 克隆PaddleDetection仓库 +cd +git clone https://github.com/PaddlePaddle/PaddleDetection.git + +# 安装其他依赖 +cd PaddleDetection +pip install -r requirements.txt +``` + +1. 详细安装文档参考[文档](../../../../docs/tutorials/INSTALL_cn.md) +2. 
如果需要TensorRT推理加速(测速方式),请安装带`TensorRT版本Paddle`。您可以从[Paddle安装包](https://paddleinference.paddlepaddle.org.cn/v2.2/user_guides/download_lib.html#python)下载安装,或者按照[指导文档](https://www.paddlepaddle.org.cn/inference/master/optimize/paddle_trt.html)使用docker或自编译方式准备Paddle环境。 + +## 模型下载 + + +## 配置文件说明 + +PP-Vehicle相关配置位于```deploy/pipeline/config/infer_cfg_ppvehicle.yml```中,存放模型路径,完成不同功能需要设置不同的任务类型 + +功能及任务类型对应表单如下: + +| 输入类型 | 功能 | 任务类型 | 配置项 | +|-------|-------|----------|-----| +| 图片 | 属性识别 | 目标检测 属性识别 | DET ATTR | +| 单镜头视频 | 属性识别 | 多目标跟踪 属性识别 | MOT ATTR | +| 单镜头视频 | 属性识别 | 多目标跟踪 属性识别 | MOT VEHICLEPLATE | + +例如基于视频输入的属性识别,任务类型包含多目标跟踪和属性识别,具体配置如下: + +``` + +``` + +**注意:** + +- 如果用户需要实现不同任务,可以在配置文件对应enable选项设置为True。 +- 如果用户仅需要修改模型文件路径,可以在命令行中加入 `--model_dir det=ppyoloe/` 即可,也可以手动修改配置文件中的相应模型路径,详细说明参考下方参数说明文档。 + + +## 预测部署 + +``` +# 车辆检测,指定配置文件路径和测试图片 +python deploy/pipeline/pipeline.py --config deploy/pipeline/config/infer_cfg_ppvehicle.yml --image_file=test_image.jpg --device=gpu [--run_mode trt_fp16] + +# 车辆跟踪,指定配置文件路径和测试视频,在配置文件```deploy/pipeline/config/infer_cfg_ppvehicle.yml```中的MOT部分enable设置为```True``` +python deploy/pipeline/pipeline.py --config deploy/pipeline/config/infer_cfg_ppvehicle.yml --video_file=test_video.mp4 --device=gpu [--run_mode trt_fp16] + +# 车辆跟踪,指定配置文件路径,模型路径和测试视频,在配置文件```deploy/pipeline/config/infer_cfg_ppvehicle.yml```中的MOT部分enable设置为```True``` +# 命令行中指定的模型路径优先级高于配置文件 +python deploy/pipeline/pipeline.py --config deploy/pipeline/config/infer_cfg_ppvehicle.yml --video_file=test_video.mp4 --device=gpu --model_dir det=ppyoloe/ [--run_mode trt_fp16] + +# 车辆属性识别,指定配置文件路径和测试视频,在配置文件```deploy/pipeline/config/infer_cfg_ppvehicle.yml```中的ATTR部分enable设置为```True``` +python deploy/pipeline/pipeline.py --config deploy/pipeline/config/infer_cfg_ppvehicle.yml --video_file=test_video.mp4 --device=gpu [--run_mode trt_fp16] + +``` + +对rtsp流的支持,video_file后面的视频地址更换为rtsp流地址,示例如下: +``` +# 车辆属性识别,指定配置文件路径和测试视频,在配置文件```deploy/pipeline/config/infer_cfg_ppvehicle.yml```中的ATTR部分enable设置为```True``` +python deploy/pipeline/pipeline.py --config deploy/pipeline/config/infer_cfg_ppvehicle.yml -o visual=False --video_file=rtsp://[YOUR_RTSP_SITE] --device=gpu [--run_mode trt_fp16] +``` + +### 参数说明 + +| 参数 | 是否必须|含义 | +|-------|-------|----------| +| --config | Yes | 配置文件路径 | +| --model_dir | Option | 各任务模型路径,优先级高于配置文件, 例如`--model_dir det=better_det/ attr=better_attr/`| +| --image_file | Option | 需要预测的图片 | +| --image_dir | Option | 要预测的图片文件夹路径 | +| --video_file | Option | 需要预测的视频,或者rtsp流地址 | +| --camera_id | Option | 用来预测的摄像头ID,默认为-1(表示不使用摄像头预测,可设置为:0 - (摄像头数目-1) ),预测过程中在可视化界面按`q`退出输出预测结果到:output/output.mp4| +| --device | Option | 运行时的设备,可选择`CPU/GPU/XPU`,默认为`CPU`| +| --output_dir | Option|可视化结果保存的根目录,默认为output/| +| --run_mode | Option |使用GPU时,默认为paddle, 可选(paddle/trt_fp32/trt_fp16/trt_int8)| +| --enable_mkldnn | Option | CPU预测中是否开启MKLDNN加速,默认为False | +| --cpu_threads | Option| 设置cpu线程数,默认为1 | +| --trt_calib_mode | Option| TensorRT是否使用校准功能,默认为False。使用TensorRT的int8功能时,需设置为True,使用PaddleSlim量化后的模型时需要设置为False | +| --do_entrance_counting | Option | 是否统计出入口流量,默认为False | +| --draw_center_traj | Option | 是否绘制跟踪轨迹,默认为False | + +## 方案介绍 + +PP-Vehicle v2整体方案如下图所示: + +
    + +
    + + +### 车辆检测 +- 采用PP-YOLOE L 作为目标检测模型 +- 详细文档参考[PP-YOLOE](../../../../configs/ppyoloe/)和[检测跟踪文档](ppvehicle_mot.md) + +### 车辆跟踪 +- 采用SDE方案完成车辆跟踪 +- 检测模型使用PP-YOLOE L(高精度)和S(轻量级) +- 跟踪模块采用OC-SORT方案 +- 详细文档参考[OC-SORT](../../../../configs/mot/ocsort)和[检测跟踪文档](ppvehicle_mot.md) + +### 属性识别 +- 使用PaddleClas提供的特色模型PP-LCNet,实现对车辆颜色及车型属性的识别。 +- 详细文档参考[属性识别](ppvehicle_attribute.md) + +### 车牌识别 +- 使用PaddleOCR特色模型ch_PP-OCRv3_det+ch_PP-OCRv3_rec模型,识别车牌号码 +- 详细文档参考[属性识别](ppvehicle_plate.md) diff --git a/deploy/pipeline/docs/tutorials/pphuman_action.md b/deploy/pipeline/docs/tutorials/pphuman_action.md new file mode 100644 index 0000000000000000000000000000000000000000..cf0c5af501b465de810d37813032bcb316e6fde5 --- /dev/null +++ b/deploy/pipeline/docs/tutorials/pphuman_action.md @@ -0,0 +1,270 @@ +[English](pphuman_action_en.md) | 简体中文 + +# PP-Human行为识别模块 + +## 目录 + +- [基于骨骼点的行为识别](#基于骨骼点的行为识别) +- [基于图像分类的行为识别](#基于图像分类的行为识别) +- [基于检测的行为识别](#基于检测的行为识别) +- [基于行人轨迹的行为识别](#基于行人轨迹的行为识别) +- [基于视频分类的行为识别](#基于视频分类的行为识别) + +行为识别在智慧社区,安防监控等方向具有广泛应用,根据行为的不同,PP-Human中集成了基于视频分类、基于检测、基于图像分类,基于行人轨迹以及基于骨骼点的行为识别模块,方便用户根据需求进行选择。 + +## 基于骨骼点的行为识别 + +应用行为:摔倒识别 + +
    + +
    数据来源及版权归属:天覆科技,感谢提供并开源实际场景数据,仅限学术研究使用
    +
    + +### 模型库 + +基于骨骼点的行为识别包含行人检测/跟踪,关键点检测和摔倒行为识别三个模型,首先需要下载以下预训练模型 + +| 任务 | 算法 | 精度 | 预测速度(ms) | 模型权重 | 预测部署模型 | +|:---------------------|:---------:|:------:|:------:| :------: |:---------------------------------------------------------------------------------: | +| 行人检测/跟踪 | PP-YOLOE | mAP: 56.3
    MOTA: 72.0 | 检测: 16.2ms
    跟踪:22.3ms |[下载链接](https://bj.bcebos.com/v1/paddledet/models/pipeline/mot_ppyoloe_l_36e_pipeline.pdparams) |[下载链接](https://bj.bcebos.com/v1/paddledet/models/pipeline/mot_ppyoloe_l_36e_pipeline.zip) | +| 关键点识别 | HRNet | AP: 87.1 | 单人 2.9ms |[下载链接](https://bj.bcebos.com/v1/paddledet/models/pipeline/dark_hrnet_w32_256x192.pdparams) |[下载链接](https://bj.bcebos.com/v1/paddledet/models/pipeline/dark_hrnet_w32_256x192.zip)| +| 摔倒行为识别 | ST-GCN | 准确率: 96.43 | 单人 2.7ms | - |[下载链接](https://bj.bcebos.com/v1/paddledet/models/pipeline/STGCN.zip) | + +注: +1. 检测/跟踪模型精度为[MOT17](https://motchallenge.net/),[CrowdHuman](http://www.crowdhuman.org/),[HIEVE](http://humaninevents.org/)和部分业务数据融合训练测试得到。 +2. 关键点模型使用[COCO](https://cocodataset.org/),[UAV-Human](https://github.com/SUTDCV/UAV-Human)和部分业务数据融合训练, 精度在业务数据测试集上得到。 +3. 摔倒行为识别模型使用[NTU-RGB+D](https://rose1.ntu.edu.sg/dataset/actionRecognition/),[UR Fall Detection Dataset](http://fenix.univ.rzeszow.pl/~mkepski/ds/uf.html)和部分业务数据融合训练,精度在业务数据测试集上得到。 +4. 预测速度为NVIDIA T4 机器上使用TensorRT FP16时的速度, 速度包含数据预处理、模型预测、后处理全流程。 + +### 配置说明 +[配置文件](../../config/infer_cfg_pphuman.yml)中与行为识别相关的参数如下: +``` +SKELETON_ACTION: # 基于骨骼点的行为识别模型配置 + model_dir: output_inference/STGCN # 模型所在路径 + batch_size: 1 # 预测批大小。 当前仅支持为1进行推理 + max_frames: 50 # 动作片段对应的帧数。在行人ID对应时序骨骼点结果时达到该帧数后,会通过行为识别模型判断该段序列的动作类型。与训练设置一致时效果最佳。 + display_frames: 80 # 显示帧数。当预测结果为摔倒时,在对应人物ID中显示状态的持续时间。 + coord_size: [384, 512] # 坐标统一缩放到的尺度大小。与训练设置一致时效果最佳。 + enable: False # 是否开启该功能 +``` + +### 使用方法 +1. 从`模型库`中下载`行人检测/跟踪`、`关键点识别`、`摔倒行为识别`三个预测部署模型并解压到```./output_inference```路径下;默认自动下载模型,如果手动下载,需要修改模型文件夹为模型存放路径。 +2. 目前行为识别模块仅支持视频输入,根据期望开启的行为识别方案类型,设置infer_cfg_pphuman.yml中`SKELETON_ACTION`的enable: True, 然后启动命令如下: +```python +python deploy/pipeline/pipeline.py --config deploy/pipeline/config/infer_cfg_pphuman.yml \ + --video_file=test_video.mp4 \ + --device=gpu \ +``` +3. 若修改模型路径,有以下两种方式: + + - ```./deploy/pipeline/config/infer_cfg_pphuman.yml```下可以配置不同模型路径,关键点模型和摔倒行为识别模型分别对应`KPT`和`SKELETON_ACTION`字段,修改对应字段下的路径为实际期望的路径即可。 + - 命令行中增加`--model_dir`修改模型路径: +```python +python deploy/pipeline/pipeline.py --config deploy/pipeline/config/infer_cfg_pphuman.yml \ + --video_file=test_video.mp4 \ + --device=gpu \ + --model_dir kpt=./dark_hrnet_w32_256x192 action=./STGCN +``` +4. 启动命令中的完整参数说明,请参考[参数说明](./PPHuman_QUICK_STARTED.md)。 + + +### 方案说明 +1. 使用多目标跟踪获取视频输入中的行人检测框及跟踪ID序号,模型方案为PP-YOLOE,详细文档参考[PP-YOLOE](../../../../configs/ppyoloe/README_cn.md),跟踪方案为OC-SORT,详细文档参考[OC-SORT](../../../../configs/mot/ocsort)。 +2. 通过行人检测框的坐标在输入视频的对应帧中截取每个行人。 +3. 使用[关键点识别模型](../../../../configs/keypoint/hrnet/dark_hrnet_w32_256x192.yml)得到对应的17个骨骼特征点。骨骼特征点的顺序及类型与COCO一致,详见[如何准备关键点数据集](../../../../docs/tutorials/data/PrepareKeypointDataSet.md)中的`COCO数据集`部分。 +4. 每个跟踪ID对应的目标行人各自累计骨骼特征点结果,组成该人物的时序关键点序列。当累计到预定帧数或跟踪丢失后,使用行为识别模型判断时序关键点序列的动作类型。当前版本模型支持摔倒行为的识别,预测得到的`class id`对应关系为: +``` +0: 摔倒, +1: 其他 +``` +- 摔倒行为识别模型使用了[ST-GCN](https://arxiv.org/abs/1801.07455),并基于[PaddleVideo](https://github.com/PaddlePaddle/PaddleVideo/blob/develop/docs/zh-CN/model_zoo/recognition/stgcn.md)套件完成模型训练。 + +## 基于图像分类的行为识别 + +应用行为:打电话识别 + +
    + +
    数据来源及版权归属:天覆科技,感谢提供并开源实际场景数据,仅限学术研究使用
    +
    + +### 模型库 + +基于图像分类的行为识别包含行人检测/跟踪,打电话识别两个模型,首先需要下载以下预训练模型 + +| 任务 | 算法 | 精度 | 预测速度(ms) | 模型权重 | 预测部署模型 | +|:---------------------|:---------:|:------:|:------:| :------: |:---------------------------------------------------------------------------------: | +| 行人检测/跟踪 | PP-YOLOE | mAP: 56.3
    MOTA: 72.0 | 检测: 16.2ms
    跟踪:22.3ms |[下载链接](https://bj.bcebos.com/v1/paddledet/models/pipeline/mot_ppyoloe_l_36e_pipeline.pdparams) |[下载链接](https://bj.bcebos.com/v1/paddledet/models/pipeline/mot_ppyoloe_l_36e_pipeline.zip) | +| 打电话识别 | PP-HGNet | 准确率: 86.85 | 单人 2.94ms | [下载链接](https://bj.bcebos.com/v1/paddledet/models/pipeline/PPHGNet_tiny_calling_halfbody.pdparams) | [下载链接](https://bj.bcebos.com/v1/paddledet/models/pipeline/PPHGNet_tiny_calling_halfbody.zip) | + + +注: +1. 检测/跟踪模型精度为[MOT17](https://motchallenge.net/),[CrowdHuman](http://www.crowdhuman.org/),[HIEVE](http://humaninevents.org/)和部分业务数据融合训练测试得到。 +2. 打电话行为识别模型使用[UAV-Human](https://github.com/SUTDCV/UAV-Human)的打电话行为部分进行训练和测试。 +3. 预测速度为NVIDIA T4 机器上使用TensorRT FP16时的速度, 速度包含数据预处理、模型预测、后处理全流程。 + +### 配置说明 +[配置文件](../../config/infer_cfg_pphuman.yml)中相关的参数如下: +``` +ID_BASED_CLSACTION: # 基于分类的行为识别模型配置 + model_dir: output_inference/PPHGNet_tiny_calling_halfbody # 模型所在路径 + batch_size: 8 # 预测批大小 + threshold: 0.45 #识别为对应行为的阈值 + display_frames: 80 # 显示帧数。当识别到对应动作时,在对应人物ID中显示状态的持续时间。 + enable: False # 是否开启该功能 +``` + +### 使用方法 +1. 从`模型库`中下载`行人检测/跟踪`、`打电话行为识别`两个预测部署模型并解压到`./output_inference`路径下;默认自动下载模型,如果手动下载,需要修改模型文件夹为模型存放路径。 +2. 修改配置文件`deploy/pipeline/config/infer_cfg_pphuman.yml`中`ID_BASED_CLSACTION`下的`enable`为`True`; +3. 仅支持输入视频,启动命令如下: +``` +python deploy/pipeline/pipeline.py --config deploy/pipeline/config/infer_cfg_pphuman.yml \ + --video_file=test_video.mp4 \ + --device=gpu +``` +4. 启动命令中的完整参数说明,请参考[参数说明](./PPHuman_QUICK_STARTED.md)。 + +### 方案说明 +1. 使用目标检测与多目标跟踪获取视频输入中的行人检测框及跟踪ID序号,模型方案为PP-YOLOE,详细文档参考[PP-YOLOE](../../../../configs/ppyoloe/README_cn.md),跟踪方案为OC-SORT,详细文档参考[OC-SORT](../../../../configs/mot/ocsort)。 +2. 通过行人检测框的坐标在输入视频的对应帧中截取每个行人。 +3. 通过在帧级别的行人图像通过图像分类的方式实现。当图片所属类别为对应行为时,即认为在一定时间段内该人物处于该行为状态中。该任务使用[PP-HGNet](https://github.com/PaddlePaddle/PaddleClas/blob/develop/docs/zh_CN/models/PP-HGNet.md)实现,当前版本模型支持打电话行为的识别,预测得到的`class id`对应关系为: +``` +0: 打电话, +1: 其他 +``` +- 基于分类的行为识别基于[PaddleClas](https://github.com/PaddlePaddle/PaddleClas/blob/develop/docs/zh_CN/models/PP-HGNet.md#3.3)完成模型训练。 + + +## 基于检测的行为识别 + +应用行为:吸烟识别 + +
    + +
    数据来源及版权归属:天覆科技,感谢提供并开源实际场景数据,仅限学术研究使用
    +
    + +### 模型库 +在这里,我们提供了行人检测/跟踪、吸烟行为识别的预训练模型,用户可以直接下载使用。 + +| 任务 | 算法 | 精度 | 预测速度(ms) | 模型权重 | 预测部署模型 | +|:---------------------|:---------:|:------:|:------:| :------: |:---------------------------------------------------------------------------------: | +| 行人检测/跟踪 | PP-YOLOE | mAP: 56.3
    MOTA: 72.0 | 检测: 16.2ms
    跟踪:22.3ms |[下载链接](https://bj.bcebos.com/v1/paddledet/models/pipeline/mot_ppyoloe_l_36e_pipeline.pdparams) |[下载链接](https://bj.bcebos.com/v1/paddledet/models/pipeline/mot_ppyoloe_l_36e_pipeline.zip) | +| 吸烟行为识别 | PP-YOLOE | mAP: 39.7 | 单人 2.0ms | [下载链接](https://bj.bcebos.com/v1/paddledet/models/pipeline/ppyoloe_crn_s_80e_smoking_visdrone.pdparams) | [下载链接](https://bj.bcebos.com/v1/paddledet/models/pipeline/ppyoloe_crn_s_80e_smoking_visdrone.zip) | + +注: +1. 检测/跟踪模型精度为[MOT17](https://motchallenge.net/),[CrowdHuman](http://www.crowdhuman.org/),[HIEVE](http://humaninevents.org/)和部分业务数据融合训练测试得到。 +2. 抽烟行为识别模型使用业务数据进行训练和测试。 +3. 预测速度为NVIDIA T4 机器上使用TensorRT FP16时的速度, 速度包含数据预处理、模型预测、后处理全流程。 + +### 配置说明 +[配置文件](../../config/infer_cfg_pphuman.yml)中相关的参数如下: +``` +ID_BASED_DETACTION: # 基于检测的行为识别模型配置 + model_dir: output_inference/ppyoloe_crn_s_80e_smoking_visdrone # 模型所在路径 + batch_size: 8 # 预测批大小 + threshold: 0.4 # 识别为对应行为的阈值 + display_frames: 80 # 显示帧数。当识别到对应动作时,在对应人物ID中显示状态的持续时间。 + enable: False # 是否开启该功能 +``` + +### 使用方法 +1. 从`模型库`中下载`行人检测/跟踪`、`抽烟行为识别`两个预测部署模型并解压到`./output_inference`路径下;默认自动下载模型,如果手动下载,需要修改模型文件夹为模型存放路径。 +2. 修改配置文件`deploy/pipeline/config/infer_cfg_pphuman.yml`中`ID_BASED_DETACTION`下的`enable`为`True`; +3. 仅支持输入视频,启动命令如下: +``` +python deploy/pipeline/pipeline.py --config deploy/pipeline/config/infer_cfg_pphuman.yml \ + --video_file=test_video.mp4 \ + --device=gpu +``` +4. 启动命令中的完整参数说明,请参考[参数说明](./PPHuman_QUICK_STARTED.md)。 + +### 方案说明 +1. 使用目标检测与多目标跟踪获取视频输入中的行人检测框及跟踪ID序号,模型方案为PP-YOLOE,详细文档参考[PP-YOLOE](../../../../configs/ppyoloe/README_cn.md),跟踪方案为OC-SORT,详细文档参考[OC-SORT](../../../../configs/mot/ocsort)。 +2. 通过行人检测框的坐标在输入视频的对应帧中截取每个行人。 +3. 通过在帧级别的行人图像中检测该行为的典型特定目标实现。当检测到特定目标(在这里即烟头)以后,即认为在一定时间段内该人物处于该行为状态中。该任务使用[PP-YOLOE](../../../../configs/ppyoloe/README_cn.md)实现,当前版本模型支持吸烟行为的识别,预测得到的`class id`对应关系为: +``` +0: 吸烟, +1: 其他 +``` + +## 基于行人轨迹的行为识别 + +应用行为:闯入识别 + +
    + +
    + +具体使用请参照[PP-Human检测跟踪模块](pphuman_mot.md)的`5. 区域闯入判断和计数`。 + +### 方案说明 +1. 使用多目标跟踪获取视频输入中的行人检测框及跟踪ID序号,模型方案为PP-YOLOE,详细文档参考[PP-YOLOE](../../../../configs/ppyoloe/README_cn.md),跟踪方案为OC-SORT,详细文档参考[OC-SORT](../../../../configs/mot/ocsort)。 +2. 通过行人检测框的下边界中点在相邻帧位于用户所选区域的内外位置,来识别是否闯入所选区域。 + + +## 基于视频分类的行为识别 + +应用行为:打架识别 + +
    + +
    数据来源及版权归属:Surveillance Camera Fight Dataset。
    +
+ +该方案关注的场景为监控摄像头下的打架行为识别。打架行为涉及多人,基于骨骼点技术的方案更适用于单人的行为识别。此外,打架行为对时序信息依赖较强,基于检测和分类的方案也不太适用。由于监控场景背景复杂,人的密集程度、光线、拍摄角度等都会对识别造成影响,本方案采用基于视频分类的方式判断视频中是否存在打架行为。针对摄像头距离人较远的情况,通过增大输入图像分辨率优化。由于训练数据有限,采用数据增强的方式提升模型的泛化性能。 + +### 模型库 +在这里,我们提供了打架识别的预训练模型,用户可以直接下载使用。 + +| 任务 | 算法 | 精度 | 预测速度(ms) | 模型权重 | 预测部署模型 | +|:---------------------|:---------:|:------:|:------:| :------: |:---------------------------------------------------------------------------------: | +| 打架识别 | PP-TSM | 准确率:89.06% | 2s视频 128ms | [下载链接](https://videotag.bj.bcebos.com/PaddleVideo-release2.3/ppTSM_fight.pdparams) | [下载链接](https://videotag.bj.bcebos.com/PaddleVideo-release2.3/ppTSM_fight.zip) | + +注: +1. 打架识别模型基于6个公开数据集训练得到:Surveillance Camera Fight Dataset、A Dataset for Automatic Violence Detection in Videos、Hockey Fight Detection Dataset、Video Fight Detection Dataset、Real Life Violence Situations Dataset、UBI Abnormal Event Detection Dataset。 +2. 预测速度为NVIDIA T4机器上使用TensorRT FP16时的速度,速度包含数据预处理、模型预测、后处理全流程。 + + +### 配置说明 +[配置文件](../../config/infer_cfg_pphuman.yml)中与行为识别相关的参数如下: +``` +VIDEO_ACTION: # 基于视频分类的行为识别模型配置 + model_dir: output_inference/ppTSM # 模型所在路径 + batch_size: 1 # 预测批大小。当前仅支持为1进行推理 + frame_len: 8 # 累计抽样帧数量,达到该数量后执行一次识别 + sample_freq: 7 # 抽样频率,即间隔多少帧抽样一帧 + short_size: 340 # 视频帧尺度变换最小边的长度 + target_size: 320 # 目标视频帧的大小 + enable: False # 是否开启该功能 +``` + +### 使用方法 +1. 从上表链接中下载`打架识别`任务的预测部署模型并解压到`./output_inference`路径下;默认自动下载模型,如果手动下载,需要修改模型文件夹为模型存放路径。 +2. 修改配置文件`deploy/pipeline/config/infer_cfg_pphuman.yml`中`VIDEO_ACTION`下的`enable`为`True`; +3. 仅支持输入视频,启动命令如下: +``` +python deploy/pipeline/pipeline.py --config deploy/pipeline/config/infer_cfg_pphuman.yml \ + --video_file=test_video.mp4 \ + --device=gpu +``` +4. 启动命令中的完整参数说明,请参考[参数说明](./PPHuman_QUICK_STARTED.md)。 + + +### 方案说明 + +目前打架识别模型使用的是[PP-TSM](https://github.com/PaddlePaddle/PaddleVideo/blob/develop/docs/zh-CN/model_zoo/recognition/pp-tsm.md),并在PP-TSM视频分类模型训练流程的基础上修改适配,完成模型训练。对于输入的视频或者视频流,进行等间隔抽帧,当视频帧累计到指定数目时,输入到视频分类模型中判断是否存在打架行为。 + + +## 参考文献 +``` +@inproceedings{stgcn2018aaai, + title = {Spatial Temporal Graph Convolutional Networks for Skeleton-Based Action Recognition}, + author = {Sijie Yan and Yuanjun Xiong and Dahua Lin}, + booktitle = {AAAI}, + year = {2018}, +} +``` diff --git a/deploy/pipeline/docs/tutorials/pphuman_action_en.md b/deploy/pipeline/docs/tutorials/pphuman_action_en.md new file mode 100644 index 0000000000000000000000000000000000000000..943a5c5678997816f66941e785f4ac95256acf83 --- /dev/null +++ b/deploy/pipeline/docs/tutorials/pphuman_action_en.md @@ -0,0 +1,275 @@ +English | [简体中文](pphuman_action.md) + +# Action Recognition Module of PP-Human + +Action recognition is widely used in intelligent communities, smart cities, and security monitoring. PP-Human provides video-classification-based, detection-based, image-classification-based, and skeleton-based action recognition modules. + +## Model Zoo + +There are multiple available pretrained models including pedestrian detection/tracking, keypoint detection, fighting, calling, smoking and fall detection models. Users can download and use them directly. + +| Task | Algorithm | Precision | Inference Speed (ms) | Model Weights |Model Inference and Deployment | +|:----------------------------- |:---------:|:-------------------------:|:-----------------------------------:| :-----------------: |:-----------------------------------------------------------------------------------------:| +| Pedestrian Detection/Tracking | PP-YOLOE | mAP: 56.3
    MOTA: 72.0 | Detection: 28ms
Tracking: 33.1ms |[Link](https://bj.bcebos.com/v1/paddledet/models/pipeline/mot_ppyoloe_l_36e_pipeline.pdparams) |[Link](https://bj.bcebos.com/v1/paddledet/models/pipeline/mot_ppyoloe_l_36e_pipeline.zip) | +| Calling Recognition | PP-HGNet | Precision Rate: 86.85 | Single Person 2.94ms | [Link](https://bj.bcebos.com/v1/paddledet/models/pipeline/PPHGNet_tiny_calling_halfbody.pdparams) | [Link](https://bj.bcebos.com/v1/paddledet/models/pipeline/PPHGNet_tiny_calling_halfbody.zip) | +| Smoking Recognition | PP-YOLOE | mAP: 39.7 | Single Person 2.0ms | [Link](https://bj.bcebos.com/v1/paddledet/models/pipeline/ppyoloe_crn_s_80e_smoking_visdrone.pdparams) | [Link](https://bj.bcebos.com/v1/paddledet/models/pipeline/ppyoloe_crn_s_80e_smoking_visdrone.zip) | +| Keypoint Detection | HRNet | AP: 87.1 | Single Person 2.9ms |[Link](https://bj.bcebos.com/v1/paddledet/models/pipeline/dark_hrnet_w32_256x192.pdparams) |[Link](https://bj.bcebos.com/v1/paddledet/models/pipeline/dark_hrnet_w32_256x192.zip) | +| Falling Recognition | ST-GCN | Precision Rate: 96.43 | Single Person 2.7ms | - |[Link](https://bj.bcebos.com/v1/paddledet/models/pipeline/STGCN.zip) | +| Fighting Recognition | PP-TSM | Precision Rate: 89.06% | 128ms for a 2-sec video | [Link](https://videotag.bj.bcebos.com/PaddleVideo-release2.3/ppTSM_fight.pdparams) | [Link](https://videotag.bj.bcebos.com/PaddleVideo-release2.3/ppTSM_fight.zip) | + +Note: + +1. The precision of the pedestrian detection/tracking model is obtained by training and testing on [MOT17](https://motchallenge.net/), [CrowdHuman](http://www.crowdhuman.org/), [HIEVE](http://humaninevents.org/) and some business data. + +2. The keypoint detection model is trained on [COCO](https://cocodataset.org/), [UAV-Human](https://github.com/SUTDCV/UAV-Human), and some business data, and the precision is obtained on test sets of business data. + +3. The falling action recognition model is trained on [NTU-RGB+D](https://rose1.ntu.edu.sg/dataset/actionRecognition/), [UR Fall Detection Dataset](http://fenix.univ.rzeszow.pl/~mkepski/ds/uf.html), and some business data, and the precision is obtained on the testing set of business data. + +4. The calling action recognition model is trained and tested on [UAV-Human](https://github.com/SUTDCV/UAV-Human), by using video frames of calling in this dataset. + +5. The smoking action recognition model is trained and tested on business data. + +6. The fighting action recognition model is trained and tested on 6 public datasets, including Surveillance Camera Fight Dataset, A Dataset for Automatic Violence Detection in Videos, Hockey Fight Detection Dataset, Video Fight Detection Dataset, Real Life Violence Situations Dataset, UBI Abnormal Event Detection Dataset. + +7. The inference speed is the speed of using TensorRT FP16 on NVIDIA T4, including the total time of data pre-processing, model inference, and post-processing. + + +## Skeleton-based action recognition -- falling detection + +
Data source and copyright owner: Skyinfor +Technology. Thanks for the provision of actual scenario data, which are only +used for academic research here.
    + +
+ +### Description of Configuration + +Parameters related to action recognition in the [config file](../../config/infer_cfg_pphuman.yml) are as follows: + +``` +SKELETON_ACTION: # Config for skeleton-based action recognition model + model_dir: output_inference/STGCN # Path of the model + batch_size: 1 # The size of the inference batch. Currently only 1 is supported. + max_frames: 50 # The number of frames in an action segment. When the time-ordered skeleton keypoints of a pedestrian ID reach this number, the action type of the sequence is judged by the action recognition model. Results are best when this matches the training setting. + display_frames: 80 # The number of display frames. When the inferred action type is falling down, the time length of the act will be displayed in the ID. + coord_size: [384, 512] # The unified size that coordinates are scaled to. Results are best when this matches the training setting. + enable: False # Whether to enable this function +``` + + +### How to Use + +1. Download models `Pedestrian Detection/Tracking`, `Keypoint Detection` and `Falling Recognition` from the links in the Model Zoo and unzip them to ```./output_inference```. The models are automatically downloaded by default. If you download them manually, you need to modify the `model_dir` as the model storage path. + +2. Now the only available input is the video input in the action recognition module. Set "enable: True" of `SKELETON_ACTION` in infer_cfg_pphuman.yml, and then run the command: + + ```python + python deploy/pipeline/pipeline.py --config deploy/pipeline/config/infer_cfg_pphuman.yml \ + --video_file=test_video.mp4 \ + --device=gpu + ``` + +3. There are two ways to modify the model path: + + - In ```./deploy/pipeline/config/infer_cfg_pphuman.yml```, you can configure different model paths; the keypoint model and the action recognition model correspond to the `KPT` and `SKELETON_ACTION` fields respectively, so modify the path under each field to the expected path. + - Add `--model_dir` in the command line to revise the model path: + + ```python + python deploy/pipeline/pipeline.py --config deploy/pipeline/config/infer_cfg_pphuman.yml \ + --video_file=test_video.mp4 \ + --device=gpu \ + --model_dir kpt=./dark_hrnet_w32_256x192 action=./STGCN + ``` +4. For detailed parameter description, please refer to [Parameter Description](./PPHuman_QUICK_STARTED.md) + +### Introduction to the Solution + +1. Get the pedestrian detection box and the tracking ID number of the video input through object detection and multi-object tracking. The adopted model is PP-YOLOE, and for details, please refer to [PP-YOLOE](../../../../configs/ppyoloe). + +2. Capture every pedestrian in frames of the input video accordingly by using the coordinate of the detection box. +3. In this strategy, we use the [keypoint detection model](../../../../configs/keypoint/hrnet/dark_hrnet_w32_256x192.yml) to obtain 17 skeleton keypoints. Their sequences and types are identical to those of COCO. For details, please refer to the `COCO dataset` part of [how to prepare keypoint datasets](../../../../docs/tutorials/data/PrepareKeypointDataSet_en.md). + +4. Each target pedestrian with a tracking ID has their own accumulation of skeleton keypoints, which is used to form a keypoint sequence in time order.
When the number of accumulated frames reaches a preset threshold or the tracking is lost, the action recognition model is applied to judge the action type of the time-ordered keypoint sequence. The current model only supports the recognition of the act of falling down, and the relationship between the action type and `class id` is: + +``` +0: Fall down + +1: Others +``` +- The falling action recognition model uses [ST-GCN](https://arxiv.org/abs/1801.07455), and employs the [PaddleVideo](https://github.com/PaddlePaddle/PaddleVideo/blob/develop/docs/zh-CN/model_zoo/recognition/stgcn.md) toolkit to complete model training. + +## Image-Classification-Based Action Recognition -- Calling Recognition + +
Data source and copyright owner: Skyinfor +Technology. Thanks for the provision of actual scenario data, which are only +used for academic research here.
    + +
+ +### Description of Configuration + +Parameters related to action recognition in the [config file](../../config/infer_cfg_pphuman.yml) are as follows: + +``` +ID_BASED_CLSACTION: # config for classification-based action recognition model + model_dir: output_inference/PPHGNet_tiny_calling_halfbody # Path of the model + batch_size: 8 # The size of the inference batch + threshold: 0.45 # Threshold for corresponding behavior + display_frames: 80 # The number of display frames. When the corresponding action is detected, the time length of the act will be displayed in the ID. + enable: False # Whether to enable this function +``` + +### How to Use + +1. Download models `Pedestrian Detection/Tracking` and `Calling Recognition` from the links in `Model Zoo` and unzip them to ```./output_inference```. The models are automatically downloaded by default. If you download them manually, you need to modify the `model_dir` as the model storage path. + +2. Now the only available input is the video input in the action recognition module. Set "enable: True" of `ID_BASED_CLSACTION` in infer_cfg_pphuman.yml. + +3. Run this command: + ```python + python deploy/pipeline/pipeline.py --config deploy/pipeline/config/infer_cfg_pphuman.yml \ + --video_file=test_video.mp4 \ + --device=gpu + ``` +4. For detailed parameter description, please refer to [Parameter Description](./PPHuman_QUICK_STARTED.md) + +### Introduction to the Solution +1. Get the pedestrian detection box and the tracking ID number of the video input through object detection and multi-object tracking. The adopted model is PP-YOLOE, and for details, please refer to [PP-YOLOE](../../../../configs/ppyoloe). + +2. Capture every pedestrian in frames of the input video accordingly by using the coordinate of the detection box. +3. The behavior is recognized by image classification on frame-level pedestrian images: when the predicted category is the corresponding behavior, the person is considered to be in that behavior state for a certain period of time. This task is implemented with [PP-HGNet](https://github.com/PaddlePaddle/PaddleClas/blob/develop/docs/zh_CN/models/PP-HGNet.md). In the current version, the behavior of calling is supported and the relationship between the action type and `class id` is: +``` +0: Calling + +1: Others +``` + + +## Detection-based Action Recognition -- Smoking Detection + +
+## Detection-based Action Recognition -- Smoking Detection
+
+Data source and copyright owner: Skyinfor Technology. Thanks for the provision of actual scenario data, which are only used for academic research here.
    + +
+
+### Description of Configuration
+
+Parameters related to action recognition in the [config file](../../config/infer_cfg_pphuman.yml) are as follows:
+```
+ID_BASED_DETACTION: # Config for the detection-based action recognition model
+  model_dir: output_inference/ppyoloe_crn_s_80e_smoking_visdrone # Path of the model
+  batch_size: 8 # The size of the inference batch
+  threshold: 0.4 # Confidence threshold for the corresponding behavior
+  display_frames: 80 # The number of frames for which the result is displayed. When the corresponding action is detected, the result stays on the corresponding ID for this duration.
+  enable: False # Whether to enable this function
+```
+
+### How to Use
+
+1. Download the models `Pedestrian Detection/Tracking` and `Smoking Recognition` from the links in `Model Zoo` and unzip them to ```./output_inference```. The models are downloaded automatically by default; if you download them manually, you need to modify `model_dir` to the model storage path.
+
+2. Currently the action recognition module only supports video input. Set `enable: True` for `ID_BASED_DETACTION` in infer_cfg_pphuman.yml.
+
+3. Run this command:
+   ```python
+   python deploy/pipeline/pipeline.py --config deploy/pipeline/config/infer_cfg_pphuman.yml \
+                                      --video_file=test_video.mp4 \
+                                      --device=gpu
+   ```
+4. For a detailed parameter description, please refer to [Parameter Description](./PPHuman_QUICK_STARTED.md).
+
+### Introduction to the Solution
+1. Get the pedestrian detection boxes and tracking IDs of the video input through object detection and multi-object tracking. The adopted model is PP-YOLOE; for details, please refer to [PP-YOLOE](../../../../configs/ppyoloe).
+
+2. Crop every pedestrian from the video frames using the coordinates of the detection boxes.
+
+3. We detect the object typical of this behavior in the frame-level pedestrian crops. When the specific target (here, a cigarette) is detected, the person is considered to be in the behavior state for a certain period of time (see the sketch below). This task is implemented by [PP-YOLOE](../../../../configs/ppyoloe/). In the current version, the behavior of smoking is supported, and the relationship between the action type and `class id` is:
+
+```
+0: Smoking
+
+1: Others
+```
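+In other words, the "action classifier" here is just an auxiliary object detector run on each pedestrian crop. A minimal sketch of the trigger logic, with `detect_objects` as a hypothetical stand-in for the PP-YOLOE cigarette detector and an assumed class layout:
+
+```python
+THRESHOLD = 0.4  # same meaning as ID_BASED_DETACTION.threshold
+CIGARETTE = 0    # assumed class id of the target object in the detector
+
+def is_smoking(crop, detect_objects):
+    """detect_objects returns a list of (class_id, score, bbox) on the crop.
+    The pedestrian is flagged as smoking if any cigarette is confident enough."""
+    return any(cls == CIGARETTE and score >= THRESHOLD
+               for cls, score, _ in detect_objects(crop))
+```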
+
+## Video-Classification-Based Action Recognition -- Fighting Detection
+With the ever wider deployment of surveillance cameras, manually checking whether abnormal behaviors such as fighting occur is time-consuming, labor-intensive and inefficient, and AI-assisted analysis can help. A fight recognition module is integrated into PP-Human to identify whether there is fighting in the video. We provide pre-trained models that users can download and use directly.
+
+| Task | Model | Acc. | Speed(ms) | Weight | Deploy Model |
+| ---- | ---- | ---------- | ---- | ---- | ---------- |
+| Fighting Detection | PP-TSM | 89.06% | 128ms for a 2-sec video| [Link](https://videotag.bj.bcebos.com/PaddleVideo-release2.3/ppTSM_fight.pdparams) | [Link](https://videotag.bj.bcebos.com/PaddleVideo-release2.3/ppTSM_fight.zip) |
+
+
+The model is trained on 6 public datasets, including the Surveillance Camera Fight Dataset, A Dataset for Automatic Violence Detection in Videos, the Hockey Fight Detection Dataset, the Video Fight Detection Dataset, the Real Life Violence Situations Dataset and the UBI Abnormal Event Detection Dataset.
+
+This module focuses on identifying fighting behavior under surveillance cameras. Fighting usually involves multiple people, whereas skeleton-based technology is better suited to single-person behavior recognition; in addition, fighting depends strongly on temporal information, which detection- and classification-based schemes do not capture well. Since monitoring scenes have complex backgrounds, crowd density, lighting and filming angle may all affect the accuracy. This solution therefore uses a video-classification-based method to determine whether there is fighting in the video.
+For cases where the camera is far away from the people, it is optimized by increasing the resolution of the input images. Because training data is limited, data augmentation is used to improve the generalization performance of the model.
+
+
+### Description of Configuration
+
+Parameters related to action recognition in the [config file](../../config/infer_cfg_pphuman.yml) are as follows:
+```
+VIDEO_ACTION: # Config for the video-classification-based action recognition model
+  model_dir: output_inference/ppTSM # Path of the model
+  batch_size: 1 # The size of the inference batch. Currently only 1 is supported.
+  frame_len: 8 # The number of sampled frames to accumulate. Inference is executed once the sampled frames reach this value.
+  sample_freq: 7 # Sampling frequency: one frame is sampled every 7 frames.
+  short_size: 340 # The length of the shorter side after video frame scaling.
+  target_size: 320 # Target size of the input video frames
+  enable: False # Whether to enable this function
+```
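+With the defaults above, one frame is kept every 7 decoded frames and a clip of 8 sampled frames is sent to the classifier, i.e. roughly one prediction per 56 decoded frames (about two seconds of video). A minimal sketch of this sampling loop, with `predict_pptsm` as a hypothetical stand-in for the PP-TSM video classifier:
+
+```python
+FRAME_LEN = 8    # VIDEO_ACTION.frame_len
+SAMPLE_FREQ = 7  # VIDEO_ACTION.sample_freq
+
+def sample_and_classify(frames, predict_pptsm):
+    """frames: an iterable of decoded video frames, in order.
+    Yields one fight / no-fight prediction per accumulated clip."""
+    clip = []
+    for idx, frame in enumerate(frames):
+        if idx % SAMPLE_FREQ == 0:     # keep one frame every SAMPLE_FREQ frames
+            clip.append(frame)
+        if len(clip) == FRAME_LEN:     # a full clip has been accumulated
+            yield predict_pptsm(clip)  # e.g. (label, score)
+            clip = []
+```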
+### How to Use
+
+1. Download the model `Fighting Detection` from the link in the above table and unzip it to ```./output_inference```. The model is downloaded automatically by default; if you download it manually, you need to modify `model_dir` to the model storage path.
+
+2. Modify the file names in the `ppTSM` folder to `model.pdiparams`, `model.pdiparams.info` and `model.pdmodel`;
+
+3. Currently the action recognition module only supports video input. Set `enable: True` for `VIDEO_ACTION` in infer_cfg_pphuman.yml.
+
+4. Run this command:
+   ```python
+   python deploy/pipeline/pipeline.py --config deploy/pipeline/config/infer_cfg_pphuman.yml \
+                                      --video_file=test_video.mp4 \
+                                      --device=gpu
+   ```
+5. For a detailed parameter description, please refer to [Parameter Description](./PPHuman_QUICK_STARTED.md).
+
+
+The result is shown as follows:
+
+Data source and copyright owner: Surveillance Camera Fight Dataset.
+
+### Introduction to the Solution
+The current fight recognition model uses [PP-TSM](https://github.com/PaddlePaddle/PaddleVideo/blob/develop/docs/zh-CN/model_zoo/recognition/pp-tsm.md), adapted for this task in training. For the input video or video stream, frames are extracted at a fixed interval. When the sampled frames accumulate to the specified number, they are input into the video classification model to determine whether there is fighting.
+
+
+## Custom Training
+
+The pretrained models are provided and can be used directly, including pedestrian detection/tracking, keypoint detection, and smoking, calling and fighting recognition. If users need to train a custom action model or optimize the model performance, please refer to the links below.
+
+| Task | Model | Development Document |
+| ---- | ---- | -------- |
+| pedestrian detection/tracking | PP-YOLOE | [doc](../../../../configs/ppyoloe/README.md#getting-start) |
+| keypoint detection | HRNet | [doc](../../../../configs/keypoint/README_en.md#3training-and-testing) |
+| action recognition (fall down) | ST-GCN | [doc](../../../../docs/advanced_tutorials/customization/action_recognotion/skeletonbased_rec.md) |
+| action recognition (smoking) | PP-YOLOE | [doc](../../../../docs/advanced_tutorials/customization/action_recognotion/idbased_det.md) |
+| action recognition (calling) | PP-HGNet | [doc](../../../../docs/advanced_tutorials/customization/action_recognotion/idbased_clas.md) |
+| action recognition (fighting) | PP-TSM | [doc](../../../../docs/advanced_tutorials/customization/action_recognotion/videobased_rec.md) |
+
+
+## Reference
+
+```
+@inproceedings{stgcn2018aaai,
+  title     = {Spatial Temporal Graph Convolutional Networks for Skeleton-Based Action Recognition},
+  author    = {Sijie Yan and Yuanjun Xiong and Dahua Lin},
+  booktitle = {AAAI},
+  year      = {2018},
+}
+```
diff --git a/deploy/pphuman/docs/attribute.md b/deploy/pipeline/docs/tutorials/pphuman_attribute.md
similarity index 40%
rename from deploy/pphuman/docs/attribute.md
rename to deploy/pipeline/docs/tutorials/pphuman_attribute.md
index 63c72810fd7029ddf7d90c811a70e91edc31ff94..16109606c0cbfbac9794512648bf34a7150d0cb7 100644
--- a/deploy/pphuman/docs/attribute.md
+++ b/deploy/pipeline/docs/tutorials/pphuman_attribute.md
@@ -1,4 +1,4 @@
-[English](attribute_en.md) | 简体中文
+[English](pphuman_attribute_en.md) | 简体中文
 # PP-Human属性识别模块
@@ -6,53 +6,78 @@
 | 任务 | 算法 | 精度 | 预测速度(ms) |下载链接 |
 |:---------------------|:---------:|:------:|:------:| :---------------------------------------------------------------------------------: |
-| 行人检测/跟踪 | PP-YOLOE | mAP: 56.3
    MOTA: 72.0 | 检测: 28ms
    跟踪:33.1ms | [下载链接](https://bj.bcebos.com/v1/paddledet/models/pipeline/mot_ppyoloe_l_36e_pipeline.zip) | -| 行人属性分析 | StrongBaseline | mA: 94.86 | 单人 2ms | [下载链接](https://bj.bcebos.com/v1/paddledet/models/pipeline/strongbaseline_r50_30e_pa100k.zip) | +| 行人检测/跟踪 | PP-YOLOE | mAP: 56.3
    MOTA: 72.0 | 检测: 16.2ms
    跟踪:22.3ms |[下载链接](https://bj.bcebos.com/v1/paddledet/models/pipeline/mot_ppyoloe_l_36e_pipeline.zip) | +| 行人属性高精度模型 | PP-HGNet_small | mA: 95.4 | 单人 1.54ms | [下载链接](https://bj.bcebos.com/v1/paddledet/models/pipeline/PPHGNet_small_person_attribute_954_infer.zip) | +| 行人属性轻量级模型 | PP-LCNet_x1_0 | mA: 94.5 | 单人 0.54ms | [下载链接](https://bj.bcebos.com/v1/paddledet/models/pipeline/PPLCNet_x1_0_person_attribute_945_infer.zip) | +| 行人属性精度与速度均衡模型 | PP-HGNet_tiny | mA: 95.2 | 单人 1.14ms | [下载链接](https://bj.bcebos.com/v1/paddledet/models/pipeline/PPHGNet_tiny_person_attribute_952_infer.zip) | -1. 检测/跟踪模型精度为MOT17,CrowdHuman,HIEVE和部分业务数据融合训练测试得到 -2. 行人属性分析精度为PA100k,RAPv2,PETA和部分业务数据融合训练测试得到 -3. 预测速度为T4 机器上使用TensorRT FP16时的速度, 速度包含数据预处理、模型预测、后处理全流程 + +1. 检测/跟踪模型精度为[MOT17](https://motchallenge.net/),[CrowdHuman](http://www.crowdhuman.org/),[HIEVE](http://humaninevents.org/)和部分业务数据融合训练测试得到。 +2. 行人属性分析精度为[PA100k](https://github.com/xh-liu/HydraPlus-Net#pa-100k-dataset),[RAPv2](http://www.rapdataset.com/rapv2.html),[PETA](http://mmlab.ie.cuhk.edu.hk/projects/PETA.html)和部分业务数据融合训练测试得到 +3. 预测速度为V100 机器上使用TensorRT FP16时的速度, 该处测速速度为模型预测速度 +4. 属性模型应用依赖跟踪模型结果,请在[跟踪模型页面](./pphuman_mot.md)下载跟踪模型,依自身需求选择高精或轻量级下载。 +5. 模型下载后解压放置在PaddleDetection/output_inference/目录下。 ## 使用方法 -1. 从上表链接中下载模型并解压到```./output_inference```路径下 -2. 图片输入时,启动命令如下 +1. 从上表链接中下载模型并解压到```PaddleDetection/output_inference```路径下,并修改配置文件中模型路径,也可默认自动下载模型。设置```deploy/pipeline/config/infer_cfg_pphuman.yml```中`ATTR`的enable: True + +`infer_cfg_pphuman.yml`中配置项说明: +``` +ATTR: #模块名称 + model_dir: output_inference/PPLCNet_x1_0_person_attribute_945_infer/ #模型路径 + batch_size: 8 #推理最大batchsize + enable: False #功能是否开启 +``` + +2. 图片输入时,启动命令如下(更多命令参数说明,请参考[快速开始-参数说明](./PPHuman_QUICK_STARTED.md#41-参数说明))。 ```python -python deploy/pphuman/pipeline.py --config deploy/pphuman/config/infer_cfg.yml \ +#单张图片 +python deploy/pipeline/pipeline.py --config deploy/pipeline/config/infer_cfg_pphuman.yml \ --image_file=test_image.jpg \ --device=gpu \ - --enable_attr=True + +#图片文件夹 +python deploy/pipeline/pipeline.py --config deploy/pipeline/config/infer_cfg_pphuman.yml \ + --image_dir=images/ \ + --device=gpu \ + ``` 3. 视频输入时,启动命令如下 ```python -python deploy/pphuman/pipeline.py --config deploy/pphuman/config/infer_cfg.yml \ +#单个视频文件 +python deploy/pipeline/pipeline.py --config deploy/pipeline/config/infer_cfg_pphuman.yml \ --video_file=test_video.mp4 \ --device=gpu \ - --enable_attr=True + +#视频文件夹 +python deploy/pipeline/pipeline.py --config deploy/pipeline/config/infer_cfg_pphuman.yml \ + --video_dir=test_videos/ \ + --device=gpu \ ``` + 4. 若修改模型路径,有以下两种方式: - - ```./deploy/pphuman/config/infer_cfg.yml```下可以配置不同模型路径,属性识别模型修改ATTR字段下配置 - - **(推荐)**命令行中增加`--model_dir`修改模型路径: + - 方法一:```./deploy/pipeline/config/infer_cfg_pphuman.yml```下可以配置不同模型路径,属性识别模型修改ATTR字段下配置 + - 方法二:命令行中增加`--model_dir`修改模型路径: ```python -python deploy/pphuman/pipeline.py --config deploy/pphuman/config/infer_cfg.yml \ +python deploy/pipeline/pipeline.py --config deploy/pipeline/config/infer_cfg_pphuman.yml \ --video_file=test_video.mp4 \ --device=gpu \ - --enable_attr=True \ - --model_dir det=ppyoloe/ + --model_dir attr=output_inference/PPLCNet_x1_0_person_attribute_945_infer/ ``` 测试效果如下:
    - +
    数据来源及版权归属:天覆科技,感谢提供并开源实际场景数据,仅限学术研究使用 ## 方案说明 -1. 目标检测/多目标跟踪获取图片/视频输入中的行人检测框,模型方案为PP-YOLOE,详细文档参考[PP-YOLOE](../../../configs/ppyoloe) +1. 目标检测/多目标跟踪获取图片/视频输入中的行人检测框,模型方案为PP-YOLOE,详细文档参考[PP-YOLOE](../../../configs/ppyoloe/README_cn.md) 2. 通过行人检测框的坐标在输入图像中截取每个行人 3. 使用属性识别分析每个行人对应属性,属性类型与PA100k数据集相同,具体属性列表如下: ``` @@ -73,7 +98,7 @@ python deploy/pphuman/pipeline.py --config deploy/pphuman/config/infer_cfg.yml \ - 穿靴:是、否 ``` -4. 属性识别模型方案为[StrongBaseline](https://arxiv.org/pdf/2107.03576.pdf),模型结构为基于ResNet50的多分类网络结构,引入Weighted BCE loss和EMA提升模型效果。 +4. 属性识别模型方案为[StrongBaseline](https://arxiv.org/pdf/2107.03576.pdf),模型结构为基于PP-HGNet、PP-LCNet的多分类网络结构,引入Weighted BCE loss提升模型效果。 ## 参考文献 ``` diff --git a/deploy/pphuman/docs/attribute_en.md b/deploy/pipeline/docs/tutorials/pphuman_attribute_en.md similarity index 46% rename from deploy/pphuman/docs/attribute_en.md rename to deploy/pipeline/docs/tutorials/pphuman_attribute_en.md index 38cbc7a7861f1b6902c3dd1f6186b00d46d9769f..70c55de0edea1a2c3b0574efd7835c10d94f74e6 100644 --- a/deploy/pphuman/docs/attribute_en.md +++ b/deploy/pipeline/docs/tutorials/pphuman_attribute_en.md @@ -1,4 +1,4 @@ -English | [简体中文](attribute.md) +English | [简体中文](pphuman_attribute.md) # Attribute Recognition Modules of PP-Human @@ -6,40 +6,61 @@ Pedestrian attribute recognition has been widely used in the intelligent communi | Task | Algorithm | Precision | Inference Speed(ms) | Download Link | |:---------------------|:---------:|:------:|:------:| :---------------------------------------------------------------------------------: | -| Pedestrian Detection/ Tracking | PP-YOLOE | mAP: 56.3
    MOTA: 72.0 | Detection: 28ms
Tracking:33.1ms | [Download Link](https://bj.bcebos.com/v1/paddledet/models/pipeline/strongbaseline_r50_30e_pa100k.tar) |
+| High-Precision Model | PP-HGNet_small | mA: 95.4 | per person 1.54ms | [Download](https://bj.bcebos.com/v1/paddledet/models/pipeline/PPHGNet_small_person_attribute_954_infer.tar) |
+| Fast Model | PP-LCNet_x1_0 | mA: 94.5 | per person 0.54ms | [Download](https://bj.bcebos.com/v1/paddledet/models/pipeline/PPLCNet_x1_0_person_attribute_945_infer.tar) |
+| Balanced Model | PP-HGNet_tiny | mA: 95.2 | per person 1.14ms | [Download](https://bj.bcebos.com/v1/paddledet/models/pipeline/PPHGNet_tiny_person_attribute_952_infer.tar) |
-1. The precision of detection/ tracking models is obtained by training and testing on the dataset consist of MOT17, CrowdHuman, HIEVE, and some business data.
-2. The precision of pedestiran attribute analysis is obtained by training and testing on the dataset consist of PA100k, RAPv2, PETA, and some business data.
-3. The inference speed is T4, the speed of using TensorRT FP16.
+1. The precision of pedestrian attribute analysis is obtained by training and testing on a dataset consisting of [PA100k](https://github.com/xh-liu/HydraPlus-Net#pa-100k-dataset), [RAPv2](http://www.rapdataset.com/rapv2.html), [PETA](http://mmlab.ie.cuhk.edu.hk/projects/PETA.html) and some business data.
+2. The inference speed is measured on V100 with TensorRT FP16, and covers model inference only.
+3. The attribute model relies on the tracking results, so please download a tracking model from the [MOT page](./pphuman_mot_en.md); both the high-precision and the lightweight model are available.
+4. Unzip the downloaded models and place them in the `PaddleDetection/output_inference/` directory.
+
+## Instruction
+
+1. Download the model from the link in the above table, unzip it to ```./output_inference```, and set `enable: True` for `ATTR` in infer_cfg_pphuman.yml.
+
+Description of the related options in `infer_cfg_pphuman.yml`:
+```
+ATTR: #module name
+  model_dir: output_inference/PPLCNet_x1_0_person_attribute_945_infer/ #model path
+  batch_size: 8 #maximum batch size for inference
+  enable: False #whether to enable this function
+```
+
+2. When inputting an image, run the command as follows (please refer to [QUICK_STARTED-Parameters](./PPHuman_QUICK_STARTED.md#41-参数说明) for more details):
 ```python
-python deploy/pphuman/pipeline.py --config deploy/pphuman/config/infer_cfg.yml \
+#single image
+python deploy/pipeline/pipeline.py --config deploy/pipeline/config/infer_cfg_pphuman.yml \
    --image_file=test_image.jpg \
    --device=gpu \
-   --enable_attr=True
+
+#image directory
+python deploy/pipeline/pipeline.py --config deploy/pipeline/config/infer_cfg_pphuman.yml \
+   --image_dir=images/ \
+   --device=gpu \
+
 ```
 3. When inputting a video, run the command as follows:
 ```python
-python deploy/pphuman/pipeline.py --config deploy/pphuman/config/infer_cfg.yml \
+#a single video file
+python deploy/pipeline/pipeline.py --config deploy/pipeline/config/infer_cfg_pphuman.yml \
    --video_file=test_video.mp4 \
    --device=gpu \
-   --enable_attr=True
+
+#directory of videos
+python deploy/pipeline/pipeline.py --config deploy/pipeline/config/infer_cfg_pphuman.yml \
+   --video_dir=test_videos/ \
+   --device=gpu \
 ```
 4. If you want to change the model path, there are two methods:
-    - In ```./deploy/pphuman/config/infer_cfg.yml``` you can configurate different model paths. In attribute recognition models, you can modify the configuration in the field of ATTR.
-    - Add `--model_dir` in the command line to change the model path:
+    - Method 1: In ```./deploy/pipeline/config/infer_cfg_pphuman.yml``` you can configure different model paths. For the attribute recognition model, modify the configuration in the `ATTR` field.
+    - Method 2: Add `--model_dir` in the command line to change the model path:
 ```python
-python deploy/pphuman/pipeline.py --config deploy/pphuman/config/infer_cfg.yml \
+python deploy/pipeline/pipeline.py --config deploy/pipeline/config/infer_cfg_pphuman.yml \
    --video_file=test_video.mp4 \
    --device=gpu \
-   --enable_attr=True \
-   --model_dir det=ppyoloe/
+   --model_dir attr=output_inference/PPLCNet_x1_0_person_attribute_945_infer/
 ```
 The test result is:
@@ -50,7 +71,7 @@ The test result is:
 Data Source and Copyright:Skyinfor Technology. Thanks for the provision of act
-## Introduction to the Solution
+## Introduction to the Solution
 1. The PP-YOLOE model is used to obtain the detection boxes of pedestrians in input images/videos through object detection and multi-object tracking. For details, please refer to the document [PP-YOLOE](../../../configs/ppyoloe).
 2. Capture every pedestrian in the input images with the help of the coordinates of the detection boxes.
@@ -62,7 +83,7 @@ Data Source and Copyright:Skyinfor Technology. Thanks for the provision of act
 - Accessories: Glasses; Hat; None
 - HoldObjectsInFront: Yes; No
 - Bag: BackPack; ShoulderBag; HandBag
-- TopStyle: UpperStride; UpperLogo; UpperPlaid; UpperSplice
+- TopStyle: UpperStride; UpperLogo; UpperPlaid; UpperSplice
 - BottomStyle: LowerStripe; LowerPattern
 - ShortSleeve: Yes; No
 - LongSleeve: Yes; No
@@ -73,7 +94,7 @@ Data Source and Copyright:Skyinfor Technology. Thanks for the provision of act
 - Boots: Yes; No
 ```
-4. The model adopted in the attribute recognition is [StrongBaseline](https://arxiv.org/pdf/2107.03576.pdf), where the structure is the multi-class network structure based on ResNet50, and Weighted BCE loss and EMA are introduced for effect optimization.
+4. The model adopted in the attribute recognition is [StrongBaseline](https://arxiv.org/pdf/2107.03576.pdf), a multi-class network structure based on PP-HGNet and PP-LCNet, with Weighted BCE loss introduced for effect optimization (see the sketch below).
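+Concretely, the network outputs one score per attribute class, and each group in the list above is decoded separately: exclusive groups take an argmax, while yes/no attributes are thresholded. A minimal sketch of the decoding idea; the index layout and threshold are illustrative assumptions, not the pipeline's actual values:
+
+```python
+import numpy as np
+
+def decode_attributes(probs, threshold=0.5):
+    """probs: 1-D array of per-attribute scores from the model.
+    Returns a few illustrative fields; the index layout is assumed."""
+    result = {}
+    # exclusive group: pick the highest-scoring class
+    age_labels = ["AgeLess18", "Age18-60", "AgeOver60"]
+    result["age"] = age_labels[int(np.argmax(probs[0:3]))]
+    # binary attributes: compare the single score against the threshold
+    result["glasses"] = bool(probs[3] >= threshold)
+    result["backpack"] = bool(probs[4] >= threshold)
+    return result
+
+print(decode_attributes(np.array([0.1, 0.8, 0.1, 0.9, 0.2])))
+```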
 ## Reference
 ```
diff --git a/deploy/pphuman/docs/mot.md b/deploy/pipeline/docs/tutorials/pphuman_mot.md
similarity index 33%
rename from deploy/pphuman/docs/mot.md
rename to deploy/pipeline/docs/tutorials/pphuman_mot.md
index 72e3045cf56cd30185a64eff879d08748a05a38b..cc26da04549f1f74041d939a52b8bc36e4e399b2 100644
--- a/deploy/pphuman/docs/mot.md
+++ b/deploy/pipeline/docs/tutorials/pphuman_mot.md
@@ -1,58 +1,90 @@
+[English](pphuman_mot_en.md) | 简体中文
+
 # PP-Human检测跟踪模块
 行人检测与跟踪在智慧社区,工业巡检,交通监控等方向都具有广泛应用,PP-Human中集成了检测跟踪模块,是关键点检测、属性行为识别等任务的基础。我们提供了预训练模型,用户可以直接下载使用。
 | 任务 | 算法 | 精度 | 预测速度(ms) |下载链接 |
 |:---------------------|:---------:|:------:|:------:| :---------------------------------------------------------------------------------: |
-| 行人检测/跟踪 | PP-YOLOE | mAP: 56.3
MOTA: 72.0 | 检测: 28ms
    跟踪:33.1ms | [下载链接](https://bj.bcebos.com/v1/paddledet/models/pipeline/mot_ppyoloe_l_36e_pipeline.zip) | +| 行人检测/跟踪 | PP-YOLOE-l | mAP: 57.8
    MOTA: 82.2 | 检测: 25.1ms
    跟踪:31.8ms | [下载链接](https://bj.bcebos.com/v1/paddledet/models/pipeline/mot_ppyoloe_l_36e_pipeline.zip) | +| 行人检测/跟踪 | PP-YOLOE-s | mAP: 53.2
    MOTA: 73.9 | 检测: 16.2ms
    跟踪:21.0ms | [下载链接](https://bj.bcebos.com/v1/paddledet/models/pipeline/mot_ppyoloe_s_36e_pipeline.zip) | -1. 检测/跟踪模型精度为MOT17,CrowdHuman,HIEVE和部分业务数据融合训练测试得到 +1. 检测/跟踪模型精度为[COCO-Person](http://cocodataset.org/), [CrowdHuman](http://www.crowdhuman.org/), [HIEVE](http://humaninevents.org/) 和部分业务数据融合训练测试得到,验证集为业务数据 2. 预测速度为T4 机器上使用TensorRT FP16时的速度, 速度包含数据预处理、模型预测、后处理全流程 ## 使用方法 -1. 从上表链接中下载模型并解压到```./output_inference```路径下 -2. 图片输入时,启动命令如下 +1. 从上表链接中下载模型并解压到```./output_inference```路径下,并修改配置文件中模型路径。默认为自动下载模型,无需做改动。 +2. 图片输入时,是纯检测任务,启动命令如下 ```python -python deploy/pphuman/pipeline.py --config deploy/pphuman/config/infer_cfg.yml \ +python deploy/pipeline/pipeline.py --config deploy/pipeline/config/infer_cfg_pphuman.yml \ --image_file=test_image.jpg \ --device=gpu ``` -3. 视频输入时,启动命令如下 +3. 视频输入时,是跟踪任务,注意首先设置infer_cfg_pphuman.yml中的MOT配置的enable=True,然后启动命令如下 ```python -python deploy/pphuman/pipeline.py --config deploy/pphuman/config/infer_cfg.yml \ +python deploy/pipeline/pipeline.py --config deploy/pipeline/config/infer_cfg_pphuman.yml \ --video_file=test_video.mp4 \ --device=gpu ``` 4. 若修改模型路径,有以下两种方式: - - ```./deploy/pphuman/config/infer_cfg.yml```下可以配置不同模型路径,检测和跟踪模型分别对应`DET`和`MOT`字段,修改对应字段下的路径为实际期望的路径即可。 + - ```./deploy/pipeline/config/infer_cfg_pphuman.yml```下可以配置不同模型路径,检测和跟踪模型分别对应`DET`和`MOT`字段,修改对应字段下的路径为实际期望的路径即可。 - 命令行中增加`--model_dir`修改模型路径: ```python -python deploy/pphuman/pipeline.py --config deploy/pphuman/config/infer_cfg.yml \ +python deploy/pipeline/pipeline.py --config deploy/pipeline/config/infer_cfg_pphuman.yml \ --video_file=test_video.mp4 \ --device=gpu \ - --model_dir det=ppyoloe/ + --region_type=horizontal \ --do_entrance_counting \ - --draw_center_traj + --draw_center_traj \ + --model_dir det=ppyoloe/ ``` **注意:** - - `--do_entrance_counting`表示是否统计出入口流量,不设置即默认为False + - `--do_entrance_counting`表示是否统计出入口流量,不设置即默认为False。 - `--draw_center_traj`表示是否绘制跟踪轨迹,不设置即默认为False。注意绘制跟踪轨迹的测试视频最好是静止摄像头拍摄的。 + - `--region_type`表示流量计数的区域,当设置`--do_entrance_counting`时可选择`horizontal`或者`vertical`,默认是`horizontal`,表示以视频图片的中心水平线为出入口,同一物体框的中心点在相邻两秒内分别在区域中心水平线的两侧,即完成计数加一。 测试效果如下:
    - +
    数据来源及版权归属:天覆科技,感谢提供并开源实际场景数据,仅限学术研究使用 +5. 区域闯入判断和计数 + +注意首先设置infer_cfg_pphuman.yml中的MOT配置的enable=True,然后启动命令如下 +```python +python deploy/pipeline/pipeline.py --config deploy/pipeline/config/infer_cfg_pphuman.yml \ + --video_file=test_video.mp4 \ + --device=gpu \ + --draw_center_traj \ + --do_break_in_counting \ + --region_type=custom \ + --region_polygon 200 200 400 200 300 400 100 400 +``` +**注意:** + - `--do_break_in_counting`表示是否进行区域出入后计数,不设置即默认为False。 + - `--region_type`表示流量计数的区域,当设置`--do_break_in_counting`时仅可选择`custom`,默认是`custom`,表示以用户自定义区域为出入口,同一物体框的下边界中点坐标在相邻两秒内从区域外到区域内,即完成计数加一。 + - `--region_polygon`表示用户自定义区域的多边形的点坐标序列,每两个为一对点坐标(x,y),按顺时针顺序连成一个封闭区域,至少需要3对点也即6个整数,默认值是`[]`,需要用户自行设置点坐标。用户可以运行[此段代码](../../tools/get_video_info.py)获取所测视频的分辨率帧数,以及可以自定义画出自己想要的多边形区域的可视化并自己调整。 + 自定义多边形区域的可视化代码运行如下: + ```python + python get_video_info.py --video_file=demo.mp4 --region_polygon 200 200 400 200 300 400 100 400 + ``` + +测试效果如下: + +
    + +
    + ## 方案说明 -1. 目标检测/多目标跟踪获取图片/视频输入中的行人检测框,模型方案为PP-YOLOE,详细文档参考[PP-YOLOE](../../../configs/ppyoloe) -2. 多目标跟踪模型方案基于[ByteTrack](https://arxiv.org/pdf/2110.06864.pdf),采用PP-YOLOE替换原文的YOLOX作为检测器,采用BYTETracker作为跟踪器。 +1. 使用目标检测/多目标跟踪技术来获取图片/视频输入中的行人检测框,检测模型方案为PP-YOLOE,详细文档参考[PP-YOLOE](../../../../configs/ppyoloe)。 +2. 多目标跟踪模型方案采用[ByteTrack](https://arxiv.org/pdf/2110.06864.pdf)和[OC-SORT](https://arxiv.org/pdf/2203.14360.pdf),采用PP-YOLOE替换原文的YOLOX作为检测器,采用BYTETracker和OCSORTTracker作为跟踪器,详细文档参考[ByteTrack](../../../../configs/mot/bytetrack)和[OC-SORT](../../../../configs/mot/ocsort)。 ## 参考文献 ``` @@ -62,4 +94,11 @@ python deploy/pphuman/pipeline.py --config deploy/pphuman/config/infer_cfg.yml \ journal={arXiv preprint arXiv:2110.06864}, year={2021} } + +@article{cao2022observation, + title={Observation-Centric SORT: Rethinking SORT for Robust Multi-Object Tracking}, + author={Cao, Jinkun and Weng, Xinshuo and Khirodkar, Rawal and Pang, Jiangmiao and Kitani, Kris}, + journal={arXiv preprint arXiv:2203.14360}, + year={2022} +} ``` diff --git a/deploy/pipeline/docs/tutorials/pphuman_mot_en.md b/deploy/pipeline/docs/tutorials/pphuman_mot_en.md new file mode 100644 index 0000000000000000000000000000000000000000..9944a71833d70dd9d810a37dd5e6e5a1130c3d6b --- /dev/null +++ b/deploy/pipeline/docs/tutorials/pphuman_mot_en.md @@ -0,0 +1,102 @@ +English | [简体中文](pphuman_mot.md) + +# Detection and Tracking Module of PP-Human + +Pedestrian detection and tracking is widely used in the intelligent community, industrial inspection, transportation monitoring and so on. PP-Human has the detection and tracking module, which is fundamental to keypoint detection, attribute action recognition, etc. Users enjoy easy access to pretrained models here. + +| Task | Algorithm | Precision | Inference Speed(ms) | Download Link | +|:---------------------|:---------:|:------:|:------:| :---------------------------------------------------------------------------------: | +| Pedestrian Detection/ Tracking | PP-YOLOE-l | mAP: 57.8
    MOTA: 82.2 | Detection: 25.1ms
    Tracking:31.8ms | [Download](https://bj.bcebos.com/v1/paddledet/models/pipeline/mot_ppyoloe_l_36e_pipeline.zip) | +| Pedestrian Detection/ Tracking | PP-YOLOE-s | mAP: 53.2
    MOTA: 73.9 | Detection: 16.2ms
Tracking:21.0ms | [Download](https://bj.bcebos.com/v1/paddledet/models/pipeline/mot_ppyoloe_s_36e_pipeline.zip) |
+
+1. The precision of the pedestrian detection/tracking model is obtained by training and testing on [COCO-Person](http://cocodataset.org/), [CrowdHuman](http://www.crowdhuman.org/), [HIEVE](http://humaninevents.org/) and some business data.
+2. The inference speed is measured on T4 with TensorRT FP16, and covers the whole pipeline of data pre-processing, model inference, and post-processing.
+
+## How to Use
+
+1. Download the models from the links in the above table and unzip them to ```./output_inference```.
+2. When the input is an image, the task is detection only; the start command is as follows:
+```python
+python deploy/pipeline/pipeline.py --config deploy/pipeline/config/infer_cfg_pphuman.yml \
+                                   --image_file=test_image.jpg \
+                                   --device=gpu
+```
+3. When the input is a video, the task is tracking. First set `enable: True` for `MOT` in infer_cfg_pphuman.yml, and then the start command is as follows:
+```python
+python deploy/pipeline/pipeline.py --config deploy/pipeline/config/infer_cfg_pphuman.yml \
+                                   --video_file=test_video.mp4 \
+                                   --device=gpu
+```
+4. There are two ways to modify the model path:
+
+   - In `./deploy/pipeline/config/infer_cfg_pphuman.yml`, you can configure different model paths: the detection and tracking models correspond to the `DET` and `MOT` fields respectively, so modify the path under each field to the expected one.
+   - Add `--model_dir` in the command line to override the model path:
+
+```python
+python deploy/pipeline/pipeline.py --config deploy/pipeline/config/infer_cfg_pphuman.yml \
+                                   --video_file=test_video.mp4 \
+                                   --device=gpu \
+                                   --region_type=horizontal \
+                                   --do_entrance_counting \
+                                   --draw_center_traj \
+                                   --model_dir det=ppyoloe/
+
+```
+**Note:**
+
+ - `--do_entrance_counting` enables in/out flow counting at the entrance line; the default is False.
+ - `--draw_center_traj` draws the tracking trajectories; the default is False. Note that the test video for trajectory drawing should preferably be filmed by a still camera.
+ - `--region_type` is the region type for flow counting. When `--do_entrance_counting` is set, you can choose `horizontal` or `vertical`; the default is `horizontal`, which uses the central horizontal line of the video frame as the entrance/exit: when the center point of the same object box appears on the two sides of this line in two adjacent seconds, the count increases by one (see the sketch below).
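+The crossing test in the last note can be written down in a few lines. This is only an illustrative sketch of the counting rule, not the pipeline's actual implementation; `center_y_prev` and `center_y_now` are the box-center ordinates sampled one second apart:
+
+```python
+def crossed_entrance_line(center_y_prev, center_y_now, frame_height):
+    """True if a tracked box center moved across the central horizontal
+    line of the frame between two adjacent one-second samples."""
+    line_y = frame_height / 2.0
+    return (center_y_prev - line_y) * (center_y_now - line_y) < 0
+
+count = 0
+# hypothetical per-ID center positions sampled at t and t+1s
+for prev_y, now_y in [(200.0, 400.0), (420.0, 450.0)]:
+    if crossed_entrance_line(prev_y, now_y, frame_height=720):
+        count += 1  # the object crossed the entrance line
+print(count)  # -> 1
+```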
+
+The test result is:
+Data source and copyright owner: Skyinfor Technology. Thanks for the provision of actual scenario data, which are only used for academic research here.
+
+5. Break-in counting
+
+First set `enable: True` for `MOT` in infer_cfg_pphuman.yml, and then run the command:
+```python
+python deploy/pipeline/pipeline.py --config deploy/pipeline/config/infer_cfg_pphuman.yml \
+                                   --video_file=test_video.mp4 \
+                                   --device=gpu \
+                                   --draw_center_traj \
+                                   --do_break_in_counting \
+                                   --region_type=custom \
+                                   --region_polygon 200 200 400 200 300 400 100 400
+```
+
+**Note:**
+ - `--do_break_in_counting` enables counting objects that break into the user-defined region; the default is False.
+ - `--region_type` is the region type for flow counting. When `--do_break_in_counting` is set, only `custom` can be selected (and it is the default); the user-defined region is used as the entrance: when the midpoint of the bottom edge of the same object box moves from outside to inside the region within two adjacent seconds, the count increases by one (see the sketch below).
+ - `--region_polygon` is the sequence of point coordinates of the user-defined polygon. Every two integers form one point (x, y), and the points are connected into a closed region in clockwise order. At least 3 points, i.e. 6 integers, are required; the default value is `[]`, so users need to set the coordinates themselves. You can run this [script](../../tools/get_video_info.py) to obtain the resolution and frame count of the measured video, and to visualize and adjust the custom polygon region.
+   The visualization script for the custom polygon region runs as follows:
+   ```python
+   python get_video_info.py --video_file=demo.mp4 --region_polygon 200 200 400 200 300 400 100 400
+   ```
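+The inside/outside test behind `--do_break_in_counting` is a standard point-in-polygon check on the bottom-center of each box. A minimal sketch using ray casting (illustrative only; the actual pipeline code may differ):
+
+```python
+def point_in_polygon(x, y, polygon):
+    """polygon: list of (x, y) vertices in order; ray-casting test."""
+    inside = False
+    n = len(polygon)
+    for i in range(n):
+        x1, y1 = polygon[i]
+        x2, y2 = polygon[(i + 1) % n]
+        if (y1 > y) != (y2 > y):
+            # x coordinate where the edge crosses the horizontal ray at y
+            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
+            if x < x_cross:
+                inside = not inside
+    return inside
+
+region = [(200, 200), (400, 200), (300, 400), (100, 400)]
+bottom_center = (250, 300)  # midpoint of a box's bottom edge
+print(point_in_polygon(*bottom_center, region))  # -> True
+```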
+
+The test result is:
+
+
+## Introduction to the Solution
+
+1. Get the pedestrian detection boxes of the image/video input through object detection and multi-object tracking. The detection model is PP-YOLOE; please refer to [PP-YOLOE](../../../../configs/ppyoloe) for details.
+
+2. The multi-object tracking solution is based on [ByteTrack](https://arxiv.org/pdf/2110.06864.pdf) and [OC-SORT](https://arxiv.org/pdf/2203.14360.pdf): PP-YOLOE replaces the original YOLOX as the detector, and BYTETracker or OC-SORT Tracker serves as the tracker; please refer to [ByteTrack](../../../../configs/mot/bytetrack) and [OC-SORT](../../../../configs/mot/ocsort).
+
+## Reference
+```
+@article{zhang2021bytetrack,
+  title={ByteTrack: Multi-Object Tracking by Associating Every Detection Box},
+  author={Zhang, Yifu and Sun, Peize and Jiang, Yi and Yu, Dongdong and Yuan, Zehuan and Luo, Ping and Liu, Wenyu and Wang, Xinggang},
+  journal={arXiv preprint arXiv:2110.06864},
+  year={2021}
+}
+```
diff --git a/deploy/pphuman/docs/mtmct.md b/deploy/pipeline/docs/tutorials/pphuman_mtmct.md
similarity index 43%
rename from deploy/pphuman/docs/mtmct.md
rename to deploy/pipeline/docs/tutorials/pphuman_mtmct.md
index 8549baad741859d83ea8658d4be354e6e9d52820..894c18ee0d5541f082f1f70b0250549519768020 100644
--- a/deploy/pphuman/docs/mtmct.md
+++ b/deploy/pipeline/docs/tutorials/pphuman_mtmct.md
@@ -1,22 +1,24 @@
+[English](pphuman_mtmct_en.md) | 简体中文
+
 # PP-Human跨镜头跟踪模块
 跨镜头跟踪任务,是在单镜头跟踪的基础上,实现不同摄像头中人员的身份匹配关联。在安防、智慧零售等方向有较多的应用。
-PP-Human跨镜头跟踪模块主要目的在于提供一套简洁、高效的跨境跟踪Pipeline,REID模型完全基于开源数据集训练。
+PP-Human跨镜头跟踪模块主要目的在于提供一套简洁、高效的跨镜跟踪Pipeline,REID模型完全基于开源数据集训练。
 ## 使用方法
-1. 下载模型 [REID模型](https://bj.bcebos.com/v1/paddledet/models/pipeline/reid_model.zip) 并解压到```./output_inference```路径下, MOT模型请参考[mot说明](./mot.md)文件下载。
+1. 下载模型 [行人跟踪](https://bj.bcebos.com/v1/paddledet/models/pipeline/mot_ppyoloe_l_36e_pipeline.zip)和[REID模型](https://bj.bcebos.com/v1/paddledet/models/pipeline/reid_model.zip) 并解压到```./output_inference```路径下,修改配置文件中模型路径。也可简单起见直接用默认配置,自动下载模型。 MOT模型请参考[mot说明](./pphuman_mot.md)文件下载。
-2. 跨镜头跟踪模式下,要求输入的多个视频放在同一目录下,命令如下:
+2. 跨镜头跟踪模式下,要求输入的多个视频放在同一目录下,同时开启infer_cfg_pphuman.yml中REID选项的enable=True,命令如下:
```python
-python3 deploy/pphuman/pipeline.py --config deploy/pphuman/config/infer_cfg.yml --video_dir=[your_video_file_directory] --device=gpu
+python3 deploy/pipeline/pipeline.py --config deploy/pipeline/config/infer_cfg_pphuman.yml --video_dir=[your_video_file_directory] --device=gpu
```
-3. 相关配置在`./deploy/pphuman/config/infer_cfg.yml`文件中修改:
+3. 相关配置在`./deploy/pipeline/config/infer_cfg_pphuman.yml`文件中修改:
```python
-python3 deploy/pphuman/pipeline.py
-  --config deploy/pphuman/config/infer_cfg.yml
+python3 deploy/pipeline/pipeline.py
+  --config deploy/pipeline/config/infer_cfg_pphuman.yml
   --video_dir=[your_video_file_directory]
   --device=gpu
   --model_dir reid=reid_best/
```
@@ -46,11 +48,11 @@ python3 deploy/pphuman/pipeline.py
 id聚类、重新分配id
 ```
-2.
模型方案为[reid-strong-baseline](https://github.com/michuanhaohao/reid-strong-baseline), Backbone为ResNet50, 主要特色为模型结构简单。 +本跨镜跟踪中所用REID模型在上述基础上,整合多个开源数据集并压缩模型特征到128维以提升泛化性能。大幅提升了在实际应用中的泛化效果。 ### 其他建议 -- 提供的REID模型基于开源数据集训练得到,建议加入自有数据,训练更加强有力的REID模型,将非常明显提升跨境跟踪效果。 +- 提供的REID模型基于开源数据集训练得到,建议加入自有数据,训练更加强有力的REID模型,将非常明显提升跨镜跟踪效果。 - 质量评估部分基于简单逻辑+OpenCV实现,效果有限,如果有条件建议针对性训练质量判断模型。 @@ -58,22 +60,32 @@ python3 deploy/pphuman/pipeline.py - camera 1:
    - +
    - camera 2:
    - +
 ## 参考文献
 ```
-@article{Wieczorek2021OnTU,
-  title={On the Unreasonable Effectiveness of Centroids in Image Retrieval},
-  author={Mikolaj Wieczorek and Barbara Rychalska and Jacek Dabrowski},
-  journal={ArXiv},
-  year={2021},
-  volume={abs/2104.13643}
+@InProceedings{Luo_2019_CVPR_Workshops,
+author = {Luo, Hao and Gu, Youzhi and Liao, Xingyu and Lai, Shenqi and Jiang, Wei},
+title = {Bag of Tricks and a Strong Baseline for Deep Person Re-Identification},
+booktitle = {The IEEE Conference on Computer Vision and Pattern Recognition (CVPR) Workshops},
+month = {June},
+year = {2019}
+}
+
+@ARTICLE{Luo_2019_Strong_TMM,
+author={H. {Luo} and W. {Jiang} and Y. {Gu} and F. {Liu} and X. {Liao} and S. {Lai} and J. {Gu}},
+journal={IEEE Transactions on Multimedia},
+title={A Strong Baseline and Batch Normalization Neck for Deep Person Re-identification},
+year={2019},
+pages={1-1},
+doi={10.1109/TMM.2019.2958756},
+ISSN={1941-0077},
 }
 ```
diff --git a/deploy/pipeline/docs/tutorials/pphuman_mtmct_en.md b/deploy/pipeline/docs/tutorials/pphuman_mtmct_en.md
new file mode 100644
index 0000000000000000000000000000000000000000..0321d2a52d511a00b0c095a3878c3a959e646292
--- /dev/null
+++ b/deploy/pipeline/docs/tutorials/pphuman_mtmct_en.md
@@ -0,0 +1,94 @@
+English | [简体中文](pphuman_mtmct.md)
+
+# Multi-Target Multi-Camera Tracking Module of PP-Human
+
+Multi-target multi-camera tracking, or MTMCT, matches the identity of a person across different cameras based on single-camera tracking. MTMCT is usually applied to security systems and smart retailing.
+The MTMCT module of PP-Human aims to provide a multi-target multi-camera pipeline which is simple and efficient.
+
+## How to Use
+
+1. Download the [REID model](https://bj.bcebos.com/v1/paddledet/models/pipeline/reid_model.zip) and unzip it to ```./output_inference```. For the MOT model, please refer to the [mot description](./pphuman_mot.md).
+
+2. In the MTMCT mode, the input videos are required to be put in the same directory. Set `enable: True` for `REID` in infer_cfg_pphuman.yml. The command line is:
+```python
+python3 deploy/pipeline/pipeline.py --config deploy/pipeline/config/infer_cfg_pphuman.yml --video_dir=[your_video_file_directory] --device=gpu
+```
+
+3. Configuration can be modified in `./deploy/pipeline/config/infer_cfg_pphuman.yml`.
+
+```python
+python3 deploy/pipeline/pipeline.py
+  --config deploy/pipeline/config/infer_cfg_pphuman.yml
+  --video_dir=[your_video_file_directory]
+  --device=gpu
+  --model_dir reid=reid_best/
+```
+
+## Introduction to the Solution
+
+The MTMCT module consists of the multi-target multi-camera tracking pipeline and the REID model.
+
+1. Multi-Target Multi-Camera Tracking Pipeline
+
+```
+
+single-camera tracking[id+bbox]
+        │
+capture the target in the original image according to bbox——│
+        │              │
+   REID model     quality assessment (covered or not, complete or not, brightness, etc.)
+        │              │
+    [feature]      [quality]
+        │              │
+   datacollector—————│
+        │
+ sort out and filter features
+        │
+ calculate the similarity of IDs in the videos
+        │
+ make the IDs cluster together and rearrange them
+```
+
+2. The model solution is [reid-strong-baseline](https://github.com/michuanhaohao/reid-strong-baseline), with ResNet50 as the backbone.
+
+On top of this baseline, the REID model used in MTMCT integrates multiple open-source datasets and compresses the model features to 128 dimensions to improve generalization, which notably improves results in practical applications.
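+The cross-camera matching step boils down to comparing the filtered 128-dimensional features by cosine similarity and merging IDs whose similarity exceeds a threshold. A minimal sketch of the idea; the threshold and function names are illustrative, not the pipeline's actual code:
+
+```python
+import numpy as np
+
+def cosine_similarity(a, b):
+    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))
+
+def merge_cross_camera_ids(features, threshold=0.6):
+    """features: dict mapping (camera, track_id) -> 128-d mean REID feature.
+    Greedily assigns a shared global id to sufficiently similar tracks."""
+    keys = list(features)
+    global_ids = {}
+    next_id = 0
+    for i, key in enumerate(keys):
+        if key in global_ids:
+            continue
+        global_ids[key] = next_id
+        for other in keys[i + 1:]:
+            if other not in global_ids and \
+               cosine_similarity(features[key], features[other]) >= threshold:
+                global_ids[other] = next_id
+        next_id += 1
+    return global_ids
+```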
+
+### Other Suggestions
+
+- The provided REID model is trained on open-source datasets. It is recommended to add your own data and train a more powerful REID model, which will notably improve the MTMCT effect.
+- The quality assessment is based on simple logic plus OpenCV, so its effect is limited. If possible, it is advisable to train a dedicated quality assessment model.
+
+
+### Example
+
+- camera 1:
    + +
    + +- camera 2: +
    + +
+
+
+## Reference
+```
+@InProceedings{Luo_2019_CVPR_Workshops,
+author = {Luo, Hao and Gu, Youzhi and Liao, Xingyu and Lai, Shenqi and Jiang, Wei},
+title = {Bag of Tricks and a Strong Baseline for Deep Person Re-Identification},
+booktitle = {The IEEE Conference on Computer Vision and Pattern Recognition (CVPR) Workshops},
+month = {June},
+year = {2019}
+}
+
+@ARTICLE{Luo_2019_Strong_TMM,
+author={H. {Luo} and W. {Jiang} and Y. {Gu} and F. {Liu} and X. {Liao} and S. {Lai} and J. {Gu}},
+journal={IEEE Transactions on Multimedia},
+title={A Strong Baseline and Batch Normalization Neck for Deep Person Re-identification},
+year={2019},
+pages={1-1},
+doi={10.1109/TMM.2019.2958756},
+ISSN={1941-0077},
+}
+```
diff --git a/deploy/pipeline/docs/tutorials/ppvehicle_attribute.md b/deploy/pipeline/docs/tutorials/ppvehicle_attribute.md
new file mode 100644
index 0000000000000000000000000000000000000000..46da107f2d30dec357b458da591f66371af1476a
--- /dev/null
+++ b/deploy/pipeline/docs/tutorials/ppvehicle_attribute.md
@@ -0,0 +1,116 @@
+
+# PP-Vehicle属性识别模块
+
+车辆属性识别在智慧城市,智慧交通等方向具有广泛应用。在PP-Vehicle中,集成了车辆属性识别模块,可识别车辆的颜色及车型属性。
+
+| 任务 | 算法 | 精度 | 预测速度 | 下载链接|
+|-----------|------|-----------|----------|---------------|
+| 车辆检测/跟踪 | PP-YOLOE | - | - | [预测部署模型](https://bj.bcebos.com/v1/paddledet/models/pipeline/mot_ppyoloe_l_36e_ppvehicle.zip) |
+| 车辆属性识别 | PPLCNet | 90.81 | 2.36 ms | [预测部署模型](https://bj.bcebos.com/v1/paddledet/models/pipeline/vehicle_attribute_model.zip) |
+
+
+注意:
+1. 属性模型预测速度是基于Intel(R) Xeon(R) Gold 6148 CPU @ 2.40GHz 测试得到,开启 MKLDNN 加速策略,线程数为10。
+2. 关于PP-LCNet的介绍可以参考[PP-LCNet](https://github.com/PaddlePaddle/PaddleClas/blob/release/2.4/docs/zh_CN/models/PP-LCNet.md)介绍,相关论文可以查阅[PP-LCNet paper](https://arxiv.org/abs/2109.15099)。
+3. 属性模型的训练和精度测试均基于[VeRi数据集](https://www.v7labs.com/open-datasets/veri-dataset)。
+
+
+- 当前提供的预训练模型支持识别10种车辆颜色及9种车型,同VeRi数据集,具体如下:
+
+```yaml
+# 车辆颜色
+- "yellow"
+- "orange"
+- "green"
+- "gray"
+- "red"
+- "blue"
+- "white"
+- "golden"
+- "brown"
+- "black"
+
+# 车型
+- "sedan"
+- "suv"
+- "van"
+- "hatchback"
+- "mpv"
+- "pickup"
+- "bus"
+- "truck"
+- "estate"
+```
+
+## 使用方法
+
+### 配置项说明
+
+配置文件中与属性相关的参数如下:
+```
+VEHICLE_ATTR:
+  model_dir: output_inference/vehicle_attribute_infer/ # 车辆属性模型调用路径
+  batch_size: 8 # 模型预测时的batch_size大小
+  color_threshold: 0.5 # 颜色属性阈值,置信度达到此阈值才会确定具体颜色,否则为'Unknown'
+  type_threshold: 0.5 # 车型属性阈值,置信度达到此阈值才会确定具体车型,否则为'Unknown'
+  enable: False # 是否开启该功能
+```
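+下面用一小段示意代码说明上述阈值的含义:只有当颜色/车型的最高置信度达到对应阈值时才输出具体类别,否则输出'Unknown'。以下代码仅为示意(`decode_vehicle_attr`、`color_probs` 等名称为本示例假设),并非流水线的实际实现:
+
+```python
+import numpy as np
+
+COLOR_LIST = ["yellow", "orange", "green", "gray", "red",
+              "blue", "white", "golden", "brown", "black"]
+TYPE_LIST = ["sedan", "suv", "van", "hatchback", "mpv",
+             "pickup", "bus", "truck", "estate"]
+
+def decode_vehicle_attr(color_probs, type_probs,
+                        color_threshold=0.5, type_threshold=0.5):
+    """color_probs/type_probs: 模型输出的各类别置信度(示意)。
+    置信度达到阈值才输出具体类别,否则为 'Unknown'。"""
+    ci, ti = int(np.argmax(color_probs)), int(np.argmax(type_probs))
+    color = COLOR_LIST[ci] if color_probs[ci] >= color_threshold else "Unknown"
+    vtype = TYPE_LIST[ti] if type_probs[ti] >= type_threshold else "Unknown"
+    return color, vtype
+```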
+### 使用命令
+
+1. 从模型库下载`车辆检测/跟踪`、`车辆属性识别`两个预测部署模型并解压到`./output_inference`路径下;默认会自动下载模型,如果手动下载,需要修改模型文件夹为模型存放路径。
+2. 修改配置文件中`VEHICLE_ATTR`项的`enable: True`,以启用该功能。
+3. 图片输入时,启动命令如下(更多命令参数说明,请参考[快速开始-参数说明](./PPVehicle_QUICK_STARTED.md)):
+
+```bash
+# 预测单张图片文件
+python deploy/pipeline/pipeline.py --config deploy/pipeline/config/infer_cfg_ppvehicle.yml \
+                                   --image_file=test_image.jpg \
+                                   --device=gpu
+
+# 预测包含一张或多张图片的文件夹
+python deploy/pipeline/pipeline.py --config deploy/pipeline/config/infer_cfg_ppvehicle.yml \
+                                   --image_dir=images/ \
+                                   --device=gpu
+```
+
+4. 视频输入时,启动命令如下:
+
+```bash
+#预测单个视频文件
+python deploy/pipeline/pipeline.py --config deploy/pipeline/config/infer_cfg_ppvehicle.yml \
+                                   --video_file=test_video.mp4 \
+                                   --device=gpu
+
+#预测包含一个或多个视频的文件夹
+python deploy/pipeline/pipeline.py --config deploy/pipeline/config/infer_cfg_ppvehicle.yml \
+                                   --video_dir=test_videos/ \
+                                   --device=gpu
+```
+
+5. 若修改模型路径,有以下两种方式:
+
+   - 方法一:`./deploy/pipeline/config/infer_cfg_ppvehicle.yml`下可以配置不同模型路径,属性识别模型修改`VEHICLE_ATTR`字段下配置
+   - 方法二:命令行中增加`--model_dir`修改模型路径:
+
+```bash
+python deploy/pipeline/pipeline.py --config deploy/pipeline/config/infer_cfg_ppvehicle.yml \
+                                   --video_file=test_video.mp4 \
+                                   --device=gpu \
+                                   --model_dir vehicle_attr=output_inference/vehicle_attribute_infer
+```
+
+测试效果如下:
    + +
    + +## 方案说明 +车辆属性模型使用了[PaddleClas](https://github.com/PaddlePaddle/PaddleClas) 的超轻量图像分类方案(PULC,Practical Ultra Lightweight image Classification)。关于该模型的数据准备、训练、测试等详细内容,请见[PULC 车辆属性识别模型](https://github.com/PaddlePaddle/PaddleClas/blob/release/2.4/docs/zh_CN/PULC/PULC_vehicle_attribute.md). + +车辆属性识别模型选用了轻量级、高精度的PPLCNet。并在该模型的基础上,进一步使用了以下优化方案: + +- 使用SSLD预训练模型,在不改变推理速度的前提下,精度可以提升约0.5个百分点 +- 融合EDA数据增强策略,精度可以再提升0.52个百分点 +- 使用SKL-UGI知识蒸馏, 精度可以继续提升0.23个百分点 diff --git a/deploy/pipeline/docs/tutorials/ppvehicle_mot.md b/deploy/pipeline/docs/tutorials/ppvehicle_mot.md new file mode 100644 index 0000000000000000000000000000000000000000..7022481526137770c14ace7440bbea4e99511edb --- /dev/null +++ b/deploy/pipeline/docs/tutorials/ppvehicle_mot.md @@ -0,0 +1,20 @@ + +# PP-Vehicle车辆跟踪模块 + +【应用介绍】 + +【模型下载】 + +## 使用方法 + +【配置项说明】 + +【使用命令】 + +【效果展示】 + +## 方案说明 + +【实现方案及特色】 + +## 参考文献 diff --git a/deploy/pipeline/docs/tutorials/ppvehicle_plate.md b/deploy/pipeline/docs/tutorials/ppvehicle_plate.md new file mode 100644 index 0000000000000000000000000000000000000000..9f3ea6fcbc29a90fa83259aab61acaddc79f703f --- /dev/null +++ b/deploy/pipeline/docs/tutorials/ppvehicle_plate.md @@ -0,0 +1,20 @@ + +# PP-Vehicle车牌识别模块 + +【应用介绍】 + +【模型下载】 + +## 使用方法 + +【配置项说明】 + +【使用命令】 + +【效果展示】 + +## 方案说明 + +【实现方案及特色】 + +## 参考文献 diff --git a/static/ppdet/utils/download.py b/deploy/pipeline/download.py similarity index 37% rename from static/ppdet/utils/download.py rename to deploy/pipeline/download.py index d7e367a0526c3077052c3936844b9132a43e4160..f243838b74310f7cdfc5035f7c17d54985d8de85 100644 --- a/static/ppdet/utils/download.py +++ b/deploy/pipeline/download.py @@ -1,4 +1,4 @@ -# Copyright (c) 2019 PaddlePaddle Authors. All Rights Reserved. +# Copyright (c) 2022 PaddlePaddle Authors. All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. @@ -12,269 +12,76 @@ # See the License for the specific language governing permissions and # limitations under the License. 
-from __future__ import absolute_import -from __future__ import division -from __future__ import print_function - -import os +import os, sys import os.path as osp -import shutil +import hashlib import requests +import shutil import tqdm -import hashlib -import binascii -import base64 +import time import tarfile import zipfile +from paddle.utils.download import _get_unique_endpoints -from .voc_utils import create_list - -import logging -logger = logging.getLogger(__name__) - -__all__ = [ - 'get_weights_path', 'get_dataset_path', 'download_dataset', - 'create_voc_list' -] - -WEIGHTS_HOME = osp.expanduser("~/.cache/paddle/weights/static") -DATASET_HOME = osp.expanduser("~/.cache/paddle/dataset") - -# dict of {dataset_name: (download_info, sub_dirs)} -# download info: [(url, md5sum)] -DATASETS = { - 'coco': ([ - ( - 'http://images.cocodataset.org/zips/train2017.zip', - 'cced6f7f71b7629ddf16f17bbcfab6b2', ), - ( - 'http://images.cocodataset.org/zips/val2017.zip', - '442b8da7639aecaf257c1dceb8ba8c80', ), - ( - 'http://images.cocodataset.org/annotations/annotations_trainval2017.zip', - 'f4bbac642086de4f52a3fdda2de5fa2c', ), - ], ["annotations", "train2017", "val2017"]), - 'voc': ([ - ( - 'http://host.robots.ox.ac.uk/pascal/VOC/voc2012/VOCtrainval_11-May-2012.tar', - '6cd6e144f989b92b3379bac3b3de84fd', ), - ( - 'http://host.robots.ox.ac.uk/pascal/VOC/voc2007/VOCtrainval_06-Nov-2007.tar', - 'c52e279531787c972589f7e41ab4ae64', ), - ( - 'http://host.robots.ox.ac.uk/pascal/VOC/voc2007/VOCtest_06-Nov-2007.tar', - 'b6e924de25625d8de591ea690078ad9f', ), - ], ["VOCdevkit/VOC2012", "VOCdevkit/VOC2007"]), - 'wider_face': ([ - ( - 'https://dataset.bj.bcebos.com/wider_face/WIDER_train.zip', - '3fedf70df600953d25982bcd13d91ba2', ), - ( - 'https://dataset.bj.bcebos.com/wider_face/WIDER_val.zip', - 'dfa7d7e790efa35df3788964cf0bbaea', ), - ( - 'https://dataset.bj.bcebos.com/wider_face/wider_face_split.zip', - 'a4a898d6193db4b9ef3260a68bad0dc7', ), - ], ["WIDER_train", "WIDER_val", "wider_face_split"]), - 'fruit': ([( - 'https://dataset.bj.bcebos.com/PaddleDetection_demo/fruit.tar', - 'baa8806617a54ccf3685fa7153388ae6', ), ], - ['Annotations', 'JPEGImages']), - 'roadsign_voc': ([( - 'https://paddlemodels.bj.bcebos.com/object_detection/roadsign_voc.tar', - '8d629c0f880dd8b48de9aeff44bf1f3e', ), ], ['annotations', 'images']), - 'roadsign_coco': ([( - 'https://paddlemodels.bj.bcebos.com/object_detection/roadsign_coco.tar', - '49ce5a9b5ad0d6266163cd01de4b018e', ), ], ['annotations', 'images']), - 'objects365': (), -} +PPDET_WEIGHTS_DOWNLOAD_URL_PREFIX = 'https://paddledet.bj.bcebos.com/' DOWNLOAD_RETRY_LIMIT = 3 - -def get_weights_path(url): - """Get weights path from WEIGHT_HOME, if not exists, - download it from url. - """ - path, _ = get_path(url, WEIGHTS_HOME) - return path +WEIGHTS_HOME = osp.expanduser("~/.cache/paddle/infer_weights") -def get_dataset_path(path, annotation, image_dir): - """ - If path exists, return path. - Otherwise, get dataset path from DATASET_HOME, if not exists, - download it. 
+def is_url(path): """ - if _dataset_exists(path, annotation, image_dir): - return path - - logger.info("Dataset {} is not valid for reason above, try searching {} or " - "downloading dataset...".format( - osp.realpath(path), DATASET_HOME)) - - data_name = os.path.split(path.strip().lower())[-1] - for name, dataset in DATASETS.items(): - if data_name == name: - logger.debug("Parse dataset_dir {} as dataset " - "{}".format(path, name)) - if name == 'objects365': - raise NotImplementedError( - "Dataset {} is not valid for download automatically. " - "Please apply and download the dataset from " - "https://www.objects365.org/download.html".format(name)) - data_dir = osp.join(DATASET_HOME, name) - # For VOC-style datasets, only check subdirs - if name in ['voc', 'fruit', 'roadsign_voc']: - exists = True - for sub_dir in dataset[1]: - check_dir = osp.join(data_dir, sub_dir) - if osp.exists(check_dir): - logger.info("Found {}".format(check_dir)) - else: - exists = False - if exists: - return data_dir - - # voc exist is checked above, voc is not exist here - check_exist = name != 'voc' and name != 'fruit' and name != 'roadsign_voc' - for url, md5sum in dataset[0]: - get_path(url, data_dir, md5sum, check_exist) - - # voc should create list after download - if name == 'voc': - create_voc_list(data_dir) - return data_dir - - # not match any dataset in DATASETS - raise ValueError( - "Dataset {} is not valid and cannot parse dataset type " - "'{}' for automaticly downloading, which only supports " - "'voc' , 'coco', 'wider_face', 'fruit' and 'roadsign_voc' currently". - format(path, osp.split(path)[-1])) - - -def create_voc_list(data_dir, devkit_subdir='VOCdevkit'): - logger.debug("Create voc file list...") - devkit_dir = osp.join(data_dir, devkit_subdir) - year_dirs = [osp.join(devkit_dir, x) for x in os.listdir(devkit_dir)] - - # NOTE: since using auto download VOC - # dataset, VOC default label list should be used, - # do not generate label_list.txt here. For default - # label, see ../data/source/voc.py - create_list(year_dirs, data_dir) - logger.debug("Create voc file list finished") - - -def map_path(url, root_dir): - # parse path after download to decompress under root_dir - fname = osp.split(url)[-1] - zip_formats = ['.zip', '.tar', '.gz'] - fpath = fname - for zip_format in zip_formats: - fpath = fpath.replace(zip_format, '') - return osp.join(root_dir, fpath) - - -def get_path(url, root_dir, md5sum=None, check_exist=True): - """ Download from given url to root_dir. - if file or directory specified by url is exists under - root_dir, return the path directly, otherwise download - from url and decompress it, return the path. - - url (str): download url - root_dir (str): root dir for downloading, it should be - WEIGHTS_HOME or DATASET_HOME - md5sum (str): md5 sum of download package + Whether path is URL. + Args: + path (string): URL string or not. 
""" - # parse path after download to decompress under root_dir - fullpath = map_path(url, root_dir) + return path.startswith('http://') \ + or path.startswith('https://') \ + or path.startswith('ppdet://') - # For same zip file, decompressed directory name different - # from zip file name, rename by following map - decompress_name_map = { - "VOCtrainval_11-May-2012": "VOCdevkit/VOC2012", - "VOCtrainval_06-Nov-2007": "VOCdevkit/VOC2007", - "VOCtest_06-Nov-2007": "VOCdevkit/VOC2007", - "annotations_trainval": "annotations" - } - for k, v in decompress_name_map.items(): - if fullpath.find(k) >= 0: - fullpath = osp.join(osp.split(fullpath)[0], v) - if osp.exists(fullpath) and check_exist: - # If fullpath is a directory, it has been decompressed - # checking MD5 is impossible, so we skip checking when - # fullpath is a directory here - if osp.isdir(fullpath) or \ - _md5check_from_req(fullpath, - requests.get(url, stream=True)): - logger.debug("Found {}".format(fullpath)) - return fullpath, True - else: - if osp.isdir(fullpath): - shutil.rmtree(fullpath) - else: - os.remove(fullpath) +def parse_url(url): + url = url.replace("ppdet://", PPDET_WEIGHTS_DOWNLOAD_URL_PREFIX) + return url - fullname = _download(url, root_dir, md5sum) - # new weights format whose postfix is 'pdparams', - # which is not need to decompress - if osp.splitext(fullname)[-1] != '.pdparams': - _decompress(fullname) +def map_path(url, root_dir, path_depth=1): + # parse path after download to decompress under root_dir + assert path_depth > 0, "path_depth should be a positive integer" + dirname = url + for _ in range(path_depth): + dirname = osp.dirname(dirname) + fpath = osp.relpath(url, dirname) - return fullpath, False + zip_formats = ['.zip', '.tar', '.gz'] + for zip_format in zip_formats: + fpath = fpath.replace(zip_format, '') + return osp.join(root_dir, fpath) -def download_dataset(path, dataset=None): - if dataset not in DATASETS.keys(): - logger.error("Unknown dataset {}, it should be " - "{}".format(dataset, DATASETS.keys())) - return - dataset_info = DATASETS[dataset][0] - for info in dataset_info: - get_path(info[0], path, info[1], False) - logger.debug("Download dataset {} finished.".format(dataset)) +def _md5check(fullname, md5sum=None): + if md5sum is None: + return True + md5 = hashlib.md5() + with open(fullname, 'rb') as f: + for chunk in iter(lambda: f.read(4096), b""): + md5.update(chunk) + calc_md5sum = md5.hexdigest() -def _dataset_exists(path, annotation, image_dir): - """ - Check if user define dataset exists - """ - if not osp.exists(path): - logger.debug("Config dataset_dir {} is not exits, " - "dataset config is not valid".format(path)) + if calc_md5sum != md5sum: return False - - if annotation: - annotation_path = osp.join(path, annotation) - if not osp.exists(annotation_path): - logger.error("Config dataset_dir {} is not exits!".format(path)) - - if not osp.isfile(annotation_path): - logger.warning("Config annotation {} is not a " - "file, dataset config is not " - "valid".format(annotation_path)) - return False - if image_dir: - image_path = osp.join(path, image_dir) - if not osp.exists(image_path): - logger.warning("Config dataset_dir {} is not exits!".format(path)) - - if not osp.isdir(image_path): - logger.warning("Config image_dir {} is not a " - "directory, dataset config is not " - "valid".format(image_path)) - return False return True +def _check_exist_file_md5(filename, md5sum, url): + return _md5check(filename, md5sum) + + def _download(url, path, md5sum=None): """ Download from url, save to 
path. - url (str): download url path (str): download to given path """ @@ -285,14 +92,17 @@ def _download(url, path, md5sum=None): fullname = osp.join(path, fname) retry_cnt = 0 - while not (osp.exists(fullname) and _md5check(fullname, md5sum)): + while not (osp.exists(fullname) and _check_exist_file_md5(fullname, md5sum, + url)): if retry_cnt < DOWNLOAD_RETRY_LIMIT: retry_cnt += 1 else: raise RuntimeError("Download from {} failed. " "Retry limit reached".format(url)) - logger.info("Downloading {} from {}".format(fname, url)) + # NOTE: windows path join may incur \, which is invalid in url + if sys.platform == "win32": + url = url.replace('\\', '/') req = requests.get(url, stream=True) if req.status_code != 200: @@ -315,54 +125,69 @@ def _download(url, path, md5sum=None): for chunk in req.iter_content(chunk_size=1024): if chunk: f.write(chunk) + shutil.move(tmp_fullname, fullname) + return fullname - # check md5 after download in Content-MD5 in req.headers - if _md5check_from_req(tmp_fullname, req): - shutil.move(tmp_fullname, fullname) - return fullname + +def _download_dist(url, path, md5sum=None): + env = os.environ + if 'PADDLE_TRAINERS_NUM' in env and 'PADDLE_TRAINER_ID' in env: + trainer_id = int(env['PADDLE_TRAINER_ID']) + num_trainers = int(env['PADDLE_TRAINERS_NUM']) + if num_trainers <= 1: + return _download(url, path, md5sum) else: - logger.warning( - "Download from url imcomplete, try downloading again...") - os.remove(tmp_fullname) - continue - - -def _md5check_from_req(weights_path, req): - # For weights in bcebos URLs, MD5 value is contained - # in request header as 'content_md5' - content_md5 = req.headers.get('content-md5') - if not content_md5 or _md5check( - weights_path, - binascii.hexlify(base64.b64decode(content_md5.strip('"'))).decode( - )): - return True + fname = osp.split(url)[-1] + fullname = osp.join(path, fname) + lock_path = fullname + '.download.lock' + + if not osp.isdir(path): + os.makedirs(path) + + if not osp.exists(fullname): + from paddle.distributed import ParallelEnv + unique_endpoints = _get_unique_endpoints(ParallelEnv() + .trainer_endpoints[:]) + with open(lock_path, 'w'): # touch + os.utime(lock_path, None) + if ParallelEnv().current_endpoint in unique_endpoints: + _download(url, path, md5sum) + os.remove(lock_path) + else: + while os.path.exists(lock_path): + time.sleep(0.5) + return fullname else: - return False + return _download(url, path, md5sum) -def _md5check(fullname, md5sum=None): - if md5sum is None: - return True - - logger.debug("File {} md5 checking...".format(fullname)) - md5 = hashlib.md5() - with open(fullname, 'rb') as f: - for chunk in iter(lambda: f.read(4096), b""): - md5.update(chunk) - calc_md5sum = md5.hexdigest() - - if calc_md5sum != md5sum: - logger.warning("File {} md5 check failed, {}(calc) != " - "{}(base)".format(fullname, calc_md5sum, md5sum)) - return False - return True +def _move_and_merge_tree(src, dst): + """ + Move src directory to dst, if dst is already exists, + merge src to dst + """ + if not osp.exists(dst): + shutil.move(src, dst) + elif osp.isfile(src): + shutil.move(src, dst) + else: + for fp in os.listdir(src): + src_fp = osp.join(src, fp) + dst_fp = osp.join(dst, fp) + if osp.isdir(src_fp): + if osp.isdir(dst_fp): + _move_and_merge_tree(src_fp, dst_fp) + else: + shutil.move(src_fp, dst_fp) + elif osp.isfile(src_fp) and \ + not osp.isfile(dst_fp): + shutil.move(src_fp, dst_fp) def _decompress(fname): """ Decompress for zip and tar file """ - logger.info("Decompressing {}...".format(fname)) # For 
protecting decompressing interrupted,
    # decompress to fpath_tmp directory firstly, if decompress
@@ -380,6 +205,8 @@ def _decompress(fname):
     elif fname.find('zip') >= 0:
         with zipfile.ZipFile(fname) as zf:
             zf.extractall(path=fpath_tmp)
+    elif fname.find('.txt') >= 0:
+        return
     else:
         raise TypeError("Unsupported compress file type {}".format(fname))
@@ -392,24 +219,95 @@
     os.remove(fname)


-def _move_and_merge_tree(src, dst):
+def _decompress_dist(fname):
+    env = os.environ
+    if 'PADDLE_TRAINERS_NUM' in env and 'PADDLE_TRAINER_ID' in env:
+        trainer_id = int(env['PADDLE_TRAINER_ID'])
+        num_trainers = int(env['PADDLE_TRAINERS_NUM'])
+        if num_trainers <= 1:
+            _decompress(fname)
+        else:
+            lock_path = fname + '.decompress.lock'
+            from paddle.distributed import ParallelEnv
+            unique_endpoints = _get_unique_endpoints(ParallelEnv()
+                                                     .trainer_endpoints[:])
+            # NOTE(dkp): _decompress_dist is always performed after
+            # _download_dist, where sub-trainers wait for the download
+            # lock file to be released by sleeping. If decompression
+            # is very fast and finishes within the sleeping gap, e.g.
+            # on tiny datasets such as coco_ce or spine_coco, the main
+            # trainer may finish decompressing and release the lock
+            # file before sub-trainers wake up, so we only create the
+            # lock file in the main trainer and all sub-trainers wait
+            # 1s for the main trainer to create it. As 1s is twice the
+            # sleeping gap, this waiting time keeps all trainer
+            # pipelines in order.
+            # **change this if you have a more elegant method**
+            if ParallelEnv().current_endpoint in unique_endpoints:
+                with open(lock_path, 'w'):  # touch
+                    os.utime(lock_path, None)
+                _decompress(fname)
+                os.remove(lock_path)
+            else:
+                time.sleep(1)
+                while os.path.exists(lock_path):
+                    time.sleep(0.5)
+    else:
+        _decompress(fname)
+
+
+def get_path(url, root_dir=WEIGHTS_HOME, md5sum=None, check_exist=True):
+    """ Download from given url to root_dir.
+    If the file or directory specified by url exists under
+    root_dir, return the path directly; otherwise download
+    from url, decompress it, and return the path.
+    url (str): download url
+    root_dir (str): root dir for downloading
+    md5sum (str): md5 sum of download package
     """
-    Move src directory to dst, if dst is already exists,
-    merge src to dst
+    # parse path after download to decompress under root_dir
+    fullpath = map_path(url, root_dir)
+
+    # For some zip files, the decompressed directory name differs
+    # from the zip file name; rename it via the following map
+    decompress_name_map = {"ppTSM_fight": "ppTSM", }
+    for k, v in decompress_name_map.items():
+        if fullpath.find(k) >= 0:
+            fullpath = osp.join(osp.split(fullpath)[0], v)
+
+    if osp.exists(fullpath) and check_exist:
+        if not osp.isfile(fullpath) or \
+                _check_exist_file_md5(fullpath, md5sum, url):
+            return fullpath, True
+        else:
+            os.remove(fullpath)
+
+    fullname = _download_dist(url, root_dir, md5sum)
+
+    # new-format weights ('.pdparams') and config files ('.yml') do
+    # not need to be decompressed
+    if osp.splitext(fullname)[-1] not in ['.pdparams', '.yml']:
+        _decompress_dist(fullname)
+
+    return fullpath, False
+
+
+def get_weights_path(url):
+    """Get weights path from WEIGHTS_HOME, if not exists,
+    download it from url.
""" - if not osp.exists(dst): - shutil.move(src, dst) - elif osp.isfile(src): - shutil.move(src, dst) - else: - for fp in os.listdir(src): - src_fp = osp.join(src, fp) - dst_fp = osp.join(dst, fp) - if osp.isdir(src_fp): - if osp.isdir(dst_fp): - _move_and_merge_tree(src_fp, dst_fp) - else: - shutil.move(src_fp, dst_fp) - elif osp.isfile(src_fp) and \ - not osp.isfile(dst_fp): - shutil.move(src_fp, dst_fp) + url = parse_url(url) + path, _ = get_path(url, WEIGHTS_HOME) + return path + + +def auto_download_model(model_path): + # auto download + if is_url(model_path): + weight = get_weights_path(model_path) + return weight + return None + + +if __name__ == "__main__": + model_path = "https://bj.bcebos.com/v1/paddledet/models/pipeline/mot_ppyoloe_l_36e_pipeline.zip" + auto_download_model(model_path) diff --git a/deploy/pphuman/pipe_utils.py b/deploy/pipeline/pipe_utils.py similarity index 61% rename from deploy/pphuman/pipe_utils.py rename to deploy/pipeline/pipe_utils.py index b55ac9a867cf027662d9b3d84e39f119f28d123a..d4a6307525bb903cc7f09331dbbdbaba5c707fd6 100644 --- a/deploy/pphuman/pipe_utils.py +++ b/deploy/pipeline/pipe_utils.py @@ -15,7 +15,6 @@ import time import os import ast -import argparse import glob import yaml import copy @@ -24,108 +23,6 @@ import numpy as np from python.keypoint_preprocess import EvalAffine, TopDownEvalAffine, expand_crop -def argsparser(): - parser = argparse.ArgumentParser(description=__doc__) - parser.add_argument( - "--config", - type=str, - default=None, - help=("Path of configure"), - required=True) - parser.add_argument( - "--image_file", type=str, default=None, help="Path of image file.") - parser.add_argument( - "--image_dir", - type=str, - default=None, - help="Dir of image file, `image_file` has a higher priority.") - parser.add_argument( - "--video_file", - type=str, - default=None, - help="Path of video file, `video_file` or `camera_id` has a highest priority." - ) - parser.add_argument( - "--video_dir", - type=str, - default=None, - help="Dir of video file, `video_file` has a higher priority.") - parser.add_argument( - "--model_dir", nargs='*', help="set model dir in pipeline") - parser.add_argument( - "--camera_id", - type=int, - default=-1, - help="device id of camera to predict.") - parser.add_argument( - "--enable_attr", - type=ast.literal_eval, - default=False, - help="Whether use attribute recognition.") - parser.add_argument( - "--enable_action", - type=ast.literal_eval, - default=False, - help="Whether use action recognition.") - parser.add_argument( - "--output_dir", - type=str, - default="output", - help="Directory of output visualization files.") - parser.add_argument( - "--run_mode", - type=str, - default='paddle', - help="mode of running(paddle/trt_fp32/trt_fp16/trt_int8)") - parser.add_argument( - "--device", - type=str, - default='cpu', - help="Choose the device you want to run, it can be: CPU/GPU/XPU, default is CPU." 
- ) - parser.add_argument( - "--enable_mkldnn", - type=ast.literal_eval, - default=False, - help="Whether use mkldnn with CPU.") - parser.add_argument( - "--cpu_threads", type=int, default=1, help="Num of threads with CPU.") - parser.add_argument( - "--trt_min_shape", type=int, default=1, help="min_shape for TensorRT.") - parser.add_argument( - "--trt_max_shape", - type=int, - default=1280, - help="max_shape for TensorRT.") - parser.add_argument( - "--trt_opt_shape", - type=int, - default=640, - help="opt_shape for TensorRT.") - parser.add_argument( - "--trt_calib_mode", - type=bool, - default=False, - help="If the model is produced by TRT offline quantitative " - "calibration, trt_calib_mode need to set True.") - parser.add_argument( - "--do_entrance_counting", - action='store_true', - help="Whether counting the numbers of identifiers entering " - "or getting out from the entrance. Note that only support one-class" - "counting, multi-class counting is coming soon.") - parser.add_argument( - "--secs_interval", - type=int, - default=2, - help="The seconds interval to count after tracking") - parser.add_argument( - "--draw_center_traj", - action='store_true', - help="Whether drawing the trajectory of center") - return parser - - class Times(object): def __init__(self): self.time = 0. @@ -162,8 +59,13 @@ class PipeTimer(Times): 'mot': Times(), 'attr': Times(), 'kpt': Times(), - 'action': Times(), - 'reid': Times() + 'video_action': Times(), + 'skeleton_action': Times(), + 'reid': Times(), + 'det_action': Times(), + 'cls_action': Times(), + 'vehicle_attr': Times(), + 'vehicleplate': Times() } self.img_num = 0 @@ -207,57 +109,15 @@ class PipeTimer(Times): dic['kpt'] = round(self.module_time['kpt'].value() / max(1, self.img_num), 4) if average else self.module_time['kpt'].value() - dic['action'] = round( - self.module_time['action'].value() / max(1, self.img_num), - 4) if average else self.module_time['action'].value() + dic['video_action'] = self.module_time['video_action'].value() + dic['skeleton_action'] = round( + self.module_time['skeleton_action'].value() / max(1, self.img_num), + 4) if average else self.module_time['skeleton_action'].value() dic['img_num'] = self.img_num return dic -def merge_model_dir(args, model_dir): - # set --model_dir DET=ppyoloe/ to overwrite the model_dir in config file - task_set = ['DET', 'ATTR', 'MOT', 'KPT', 'ACTION'] - if not model_dir: - return args - for md in model_dir: - md = md.strip() - k, v = md.split('=', 1) - k_upper = k.upper() - assert k_upper in task_set, 'Illegal type of task, expect task are: {}, but received {}'.format( - task_set, k) - args[k_upper].update({'model_dir': v}) - return args - - -def merge_cfg(args): - with open(args.config) as f: - pred_config = yaml.safe_load(f) - - def merge(cfg, arg): - merge_cfg = copy.deepcopy(cfg) - for k, v in cfg.items(): - if k in arg: - merge_cfg[k] = arg[k] - else: - if isinstance(v, dict): - merge_cfg[k] = merge(v, arg) - return merge_cfg - - args_dict = vars(args) - model_dir = args_dict.pop('model_dir') - pred_config = merge_model_dir(pred_config, model_dir) - pred_config = merge(pred_config, args_dict) - return pred_config - - -def print_arguments(cfg): - print('----------- Running Arguments -----------') - buffer = yaml.dump(cfg) - print(buffer) - print('------------------------------------------') - - def get_test_images(infer_dir, infer_img): """ Get image path list in TEST mode @@ -297,6 +157,8 @@ def crop_image_with_det(batch_input, det_res, thresh=0.3): crop_res = [] for b_id, input in 
enumerate(batch_input): boxes_num_i = boxes_num[b_id] + if boxes_num_i == 0: + continue boxes_i = boxes[start_idx:start_idx + boxes_num_i, :] score_i = score[start_idx:start_idx + boxes_num_i] res = [] diff --git a/deploy/pphuman/pipeline.py b/deploy/pipeline/pipeline.py similarity index 33% rename from deploy/pphuman/pipeline.py rename to deploy/pipeline/pipeline.py index 4d6fa014ae783b61c4464b2e292c5d745a5297d1..6705083e5681704b3c0b2c96fb0f40f69ba5d35d 100644 --- a/deploy/pphuman/pipeline.py +++ b/deploy/pipeline/pipeline.py @@ -15,39 +15,45 @@ import os import yaml import glob -from collections import defaultdict - import cv2 import numpy as np import math import paddle import sys import copy -from collections import Sequence -from reid import ReID +from collections import Sequence, defaultdict from datacollector import DataCollector, Result -from mtmct import mtmct_process # add deploy path of PadleDetection to sys.path parent_path = os.path.abspath(os.path.join(__file__, *(['..'] * 2))) sys.path.insert(0, parent_path) +from cfg_utils import argsparser, print_arguments, merge_cfg +from pipe_utils import PipeTimer +from pipe_utils import get_test_images, crop_image_with_det, crop_image_with_mot, parse_mot_res, parse_mot_keypoint + from python.infer import Detector, DetectorPicoDet -from python.attr_infer import AttrDetector from python.keypoint_infer import KeyPointDetector from python.keypoint_postprocess import translate_to_ori_images -from python.action_infer import ActionRecognizer -from python.action_utils import KeyPointBuff, ActionVisualHelper - -from pipe_utils import argsparser, print_arguments, merge_cfg, PipeTimer -from pipe_utils import get_test_images, crop_image_with_det, crop_image_with_mot, parse_mot_res, parse_mot_keypoint -from python.preprocess import decode_image -from python.visualize import visualize_box_mask, visualize_attr, visualize_pose, visualize_action +from python.preprocess import decode_image, ShortSizeScale +from python.visualize import visualize_box_mask, visualize_attr, visualize_pose, visualize_action, visualize_vehicleplate from pptracking.python.mot_sde_infer import SDE_Detector from pptracking.python.mot.visualize import plot_tracking_dict from pptracking.python.mot.utils import flow_statistic +from pphuman.attr_infer import AttrDetector +from pphuman.video_action_infer import VideoActionRecognizer +from pphuman.action_infer import SkeletonActionRecognizer, DetActionRecognizer, ClsActionRecognizer +from pphuman.action_utils import KeyPointBuff, ActionVisualHelper +from pphuman.reid import ReID +from pphuman.mtmct import mtmct_process + +from ppvehicle.vehicle_plate import PlateRecognizer +from ppvehicle.vehicle_attr import VehicleAttr + +from download import auto_download_model + class Pipeline(object): """ @@ -60,8 +66,6 @@ class Pipeline(object): then all the images in directory will be predicted, default as None video_file (string|None): the path of video file, default as None camera_id (int): the device id of camera to predict, default as -1 - enable_attr (bool): whether use attribute recognition, default as false - enable_action (bool): whether use action recognition, default as false device (string): the device to predict, options are: CPU/GPU/XPU, default as CPU run_mode (string): the mode of prediction, options are: @@ -77,79 +81,44 @@ class Pipeline(object): draw_center_traj (bool): Whether drawing the trajectory of center, default as False secs_interval (int): The seconds interval to count after tracking, default as 10 
do_entrance_counting(bool): Whether counting the numbers of identifiers entering - or getting out from the entrance, default as False,only support single class + or getting out from the entrance, default as False, only support single class counting in MOT. """ - def __init__(self, - cfg, - image_file=None, - image_dir=None, - video_file=None, - video_dir=None, - camera_id=-1, - enable_attr=False, - enable_action=True, - device='CPU', - run_mode='paddle', - trt_min_shape=1, - trt_max_shape=1280, - trt_opt_shape=640, - trt_calib_mode=False, - cpu_threads=1, - enable_mkldnn=False, - output_dir='output', - draw_center_traj=False, - secs_interval=10, - do_entrance_counting=False): + def __init__(self, args, cfg): self.multi_camera = False + reid_cfg = cfg.get('REID', False) + self.enable_mtmct = reid_cfg['enable'] if reid_cfg else False self.is_video = False - self.output_dir = output_dir + self.output_dir = args.output_dir self.vis_result = cfg['visual'] - self.input = self._parse_input(image_file, image_dir, video_file, - video_dir, camera_id) + self.input = self._parse_input(args.image_file, args.image_dir, + args.video_file, args.video_dir, + args.camera_id) if self.multi_camera: - self.predictor = [ - PipePredictor( - cfg, - is_video=True, - multi_camera=True, - enable_attr=enable_attr, - enable_action=enable_action, - device=device, - run_mode=run_mode, - trt_min_shape=trt_min_shape, - trt_max_shape=trt_max_shape, - trt_opt_shape=trt_opt_shape, - cpu_threads=cpu_threads, - enable_mkldnn=enable_mkldnn, - output_dir=output_dir) for i in self.input - ] + self.predictor = [] + for name in self.input: + predictor_item = PipePredictor( + args, cfg, is_video=True, multi_camera=True) + predictor_item.set_file_name(name) + self.predictor.append(predictor_item) + else: - self.predictor = PipePredictor( - cfg, - self.is_video, - enable_attr=enable_attr, - enable_action=enable_action, - device=device, - run_mode=run_mode, - trt_min_shape=trt_min_shape, - trt_max_shape=trt_max_shape, - trt_opt_shape=trt_opt_shape, - trt_calib_mode=trt_calib_mode, - cpu_threads=cpu_threads, - enable_mkldnn=enable_mkldnn, - output_dir=output_dir, - draw_center_traj=draw_center_traj, - secs_interval=secs_interval, - do_entrance_counting=do_entrance_counting) + self.predictor = PipePredictor(args, cfg, self.is_video) if self.is_video: - self.predictor.set_file_name(video_file) - - self.output_dir = output_dir - self.draw_center_traj = draw_center_traj - self.secs_interval = secs_interval - self.do_entrance_counting = do_entrance_counting + self.predictor.set_file_name(args.video_file) + + self.output_dir = args.output_dir + self.draw_center_traj = args.draw_center_traj + self.secs_interval = args.secs_interval + self.do_entrance_counting = args.do_entrance_counting + self.do_break_in_counting = args.do_break_in_counting + self.region_type = args.region_type + self.region_polygon = args.region_polygon + if self.region_type == 'custom': + assert len( + self.region_polygon + ) > 6, 'region_type is custom, region_polygon should be at least 3 pairs of point coords.' def _parse_input(self, image_file, image_dir, video_file, video_dir, camera_id): @@ -162,7 +131,9 @@ class Pipeline(object): self.multi_camera = False elif video_file is not None: - assert os.path.exists(video_file), "video_file not exists." + assert os.path.exists( + video_file + ) or 'rtsp' in video_file, "video_file not exists and not an rtsp site." 
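
The `region_polygon` argument introduced here is a flat list of coordinates, two values per vertex; a sketch of the expected format with invented coordinates, regrouped into point pairs the same way the video loop does further below:

```python
# four (x, y) corners of a break-in region, flattened left to right
region_polygon = [200, 80, 600, 80, 600, 400, 200, 400]
assert len(region_polygon) % 2 == 0, "coordinates must come in (x, y) pairs"

# regroup into the entrance polygon: [[200, 80], [600, 80], ...]
entrance = [[region_polygon[i], region_polygon[i + 1]]
            for i in range(0, len(region_polygon), 2)]
```
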
self.multi_camera = False input = video_file self.is_video = True @@ -184,7 +155,7 @@ class Pipeline(object): else: raise ValueError( - "Illegal Input, please set one of ['video_file','camera_id','image_file', 'image_dir']" + "Illegal Input, please set one of ['video_file', 'camera_id', 'image_file', 'image_dir']" ) return input @@ -196,16 +167,56 @@ class Pipeline(object): predictor.run(input) collector_data = predictor.get_result() multi_res.append(collector_data) - mtmct_process( - multi_res, - self.input, - mtmct_vis=self.vis_result, - output_dir=self.output_dir) + if self.enable_mtmct: + mtmct_process( + multi_res, + self.input, + mtmct_vis=self.vis_result, + output_dir=self.output_dir) else: self.predictor.run(self.input) +def get_model_dir(cfg): + # auto download inference model + model_dir_dict = {} + for key in cfg.keys(): + if type(cfg[key]) == dict and \ + ("enable" in cfg[key].keys() and cfg[key]['enable'] + or "enable" not in cfg[key].keys()): + + if "model_dir" in cfg[key].keys(): + model_dir = cfg[key]["model_dir"] + downloaded_model_dir = auto_download_model(model_dir) + if downloaded_model_dir: + model_dir = downloaded_model_dir + model_dir_dict[key] = model_dir + print(key, " model dir:", model_dir) + elif key == "VEHICLE_PLATE": + det_model_dir = cfg[key]["det_model_dir"] + downloaded_det_model_dir = auto_download_model(det_model_dir) + if downloaded_det_model_dir: + det_model_dir = downloaded_det_model_dir + model_dir_dict["det_model_dir"] = det_model_dir + print("det_model_dir model dir:", det_model_dir) + + rec_model_dir = cfg[key]["rec_model_dir"] + downloaded_rec_model_dir = auto_download_model(rec_model_dir) + if downloaded_rec_model_dir: + rec_model_dir = downloaded_rec_model_dir + model_dir_dict["rec_model_dir"] = rec_model_dir + print("rec_model_dir model dir:", rec_model_dir) + elif key == "MOT": # for idbased and skeletonbased actions + model_dir = cfg[key]["model_dir"] + downloaded_model_dir = auto_download_model(model_dir) + if downloaded_model_dir: + model_dir = downloaded_model_dir + model_dir_dict[key] = model_dir + + return model_dir_dict + + class PipePredictor(object): """ Predictor in single camera @@ -219,7 +230,8 @@ class PipePredictor(object): 1. Tracking 2. Tracking -> Attribute - 3. Tracking -> KeyPoint -> Action Recognition + 3. Tracking -> KeyPoint -> SkeletonAction Recognition + 4. VideoAction Recognition Args: cfg (dict): config of models in pipeline @@ -227,8 +239,6 @@ class PipePredictor(object): multi_camera (bool): whether to use multi camera in pipeline, default as False camera_id (int): the device id of camera to predict, default as -1 - enable_attr (bool): whether use attribute recognition, default as false - enable_action (bool): whether use action recognition, default as false device (string): the device to predict, options are: CPU/GPU/XPU, default as CPU run_mode (string): the mode of prediction, options are: @@ -244,52 +254,95 @@ class PipePredictor(object): draw_center_traj (bool): Whether drawing the trajectory of center, default as False secs_interval (int): The seconds interval to count after tracking, default as 10 do_entrance_counting(bool): Whether counting the numbers of identifiers entering - or getting out from the entrance, default as False,only support single class + or getting out from the entrance, default as False, only support single class counting in MOT. 
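+        The cfg dict mirrors infer_cfg.yml: each module key ('MOT', 'ATTR',
+        'VIDEO_ACTION', ...) typically holds a sub-dict with model_dir,
+        batch_size and an enable switch, and get_model_dir() above resolves
+        every enabled model_dir, auto-downloading ppdet:// and https:// urls.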
""" - def __init__(self, - cfg, - is_video=True, - multi_camera=False, - enable_attr=False, - enable_action=False, - device='CPU', - run_mode='paddle', - trt_min_shape=1, - trt_max_shape=1280, - trt_opt_shape=640, - trt_calib_mode=False, - cpu_threads=1, - enable_mkldnn=False, - output_dir='output', - draw_center_traj=False, - secs_interval=10, - do_entrance_counting=False): - - if enable_attr and not cfg.get('ATTR', False): - ValueError( - 'enable_attr is set to True, please set ATTR in config file') - if enable_action and (not cfg.get('ACTION', False) or - not cfg.get('KPT', False)): - ValueError( - 'enable_action is set to True, please set KPT and ACTION in config file' - ) - - self.with_attr = cfg.get('ATTR', False) and enable_attr - self.with_action = cfg.get('ACTION', False) and enable_action - self.with_mtmct = cfg.get('REID', False) and multi_camera - if self.with_attr: - print('Attribute Recognition enabled') - if self.with_action: - print('Action Recognition enabled') - if multi_camera: - if not self.with_mtmct: - print( - 'Warning!!! MTMCT enabled, but cannot find REID config in [infer_cfg.yml], please check!' - ) - else: - print("MTMCT enabled") + def __init__(self, args, cfg, is_video=True, multi_camera=False): + device = args.device + run_mode = args.run_mode + trt_min_shape = args.trt_min_shape + trt_max_shape = args.trt_max_shape + trt_opt_shape = args.trt_opt_shape + trt_calib_mode = args.trt_calib_mode + cpu_threads = args.cpu_threads + enable_mkldnn = args.enable_mkldnn + output_dir = args.output_dir + draw_center_traj = args.draw_center_traj + secs_interval = args.secs_interval + do_entrance_counting = args.do_entrance_counting + do_break_in_counting = args.do_break_in_counting + region_type = args.region_type + region_polygon = args.region_polygon + + # general module for pphuman and ppvehicle + self.with_mot = cfg.get('MOT', False)['enable'] if cfg.get( + 'MOT', False) else False + self.with_human_attr = cfg.get('ATTR', False)['enable'] if cfg.get( + 'ATTR', False) else False + if self.with_mot: + print('Multi-Object Tracking enabled') + if self.with_human_attr: + print('Human Attribute Recognition enabled') + + # only for pphuman + self.with_skeleton_action = cfg.get( + 'SKELETON_ACTION', False)['enable'] if cfg.get('SKELETON_ACTION', + False) else False + self.with_video_action = cfg.get( + 'VIDEO_ACTION', False)['enable'] if cfg.get('VIDEO_ACTION', + False) else False + self.with_idbased_detaction = cfg.get( + 'ID_BASED_DETACTION', False)['enable'] if cfg.get( + 'ID_BASED_DETACTION', False) else False + self.with_idbased_clsaction = cfg.get( + 'ID_BASED_CLSACTION', False)['enable'] if cfg.get( + 'ID_BASED_CLSACTION', False) else False + self.with_mtmct = cfg.get('REID', False)['enable'] if cfg.get( + 'REID', False) else False + + if self.with_skeleton_action: + print('SkeletonAction Recognition enabled') + if self.with_video_action: + print('VideoAction Recognition enabled') + if self.with_idbased_detaction: + print('IDBASED Detection Action Recognition enabled') + if self.with_idbased_clsaction: + print('IDBASED Classification Action Recognition enabled') + if self.with_mtmct: + print("MTMCT enabled") + + # only for ppvehicle + self.with_vehicleplate = cfg.get( + 'VEHICLE_PLATE', False)['enable'] if cfg.get('VEHICLE_PLATE', + False) else False + if self.with_vehicleplate: + print('Vehicle Plate Recognition enabled') + + self.with_vehicle_attr = cfg.get( + 'VEHICLE_ATTR', False)['enable'] if cfg.get('VEHICLE_ATTR', + False) else False + if self.with_vehicle_attr: + 
print('Vehicle Attribute Recognition enabled') + + self.modebase = { + "framebased": False, + "videobased": False, + "idbased": False, + "skeletonbased": False + } + + self.basemode = { + "MOT": "idbased", + "ATTR": "idbased", + "VIDEO_ACTION": "videobased", + "SKELETON_ACTION": "skeletonbased", + "ID_BASED_DETACTION": "idbased", + "ID_BASED_CLSACTION": "idbased", + "REID": "idbased", + "VEHICLE_PLATE": "idbased", + "VEHICLE_ATTR": "idbased", + } self.is_video = is_video self.multi_camera = multi_camera @@ -298,6 +351,9 @@ class PipePredictor(object): self.draw_center_traj = draw_center_traj self.secs_interval = secs_interval self.do_entrance_counting = do_entrance_counting + self.do_break_in_counting = do_break_in_counting + self.region_type = region_type + self.region_polygon = region_polygon self.warmup_frame = self.cfg['warmup_frame'] self.pipeline_res = Result() @@ -305,102 +361,236 @@ class PipePredictor(object): self.file_name = None self.collector = DataCollector() + # auto download inference model + model_dir_dict = get_model_dir(self.cfg) + if not is_video: det_cfg = self.cfg['DET'] - model_dir = det_cfg['model_dir'] + model_dir = model_dir_dict['DET'] batch_size = det_cfg['batch_size'] self.det_predictor = Detector( model_dir, device, run_mode, batch_size, trt_min_shape, trt_max_shape, trt_opt_shape, trt_calib_mode, cpu_threads, enable_mkldnn) - if self.with_attr: + if self.with_human_attr: attr_cfg = self.cfg['ATTR'] - model_dir = attr_cfg['model_dir'] + model_dir = model_dir_dict['ATTR'] batch_size = attr_cfg['batch_size'] + basemode = self.basemode['ATTR'] + self.modebase[basemode] = True self.attr_predictor = AttrDetector( model_dir, device, run_mode, batch_size, trt_min_shape, trt_max_shape, trt_opt_shape, trt_calib_mode, cpu_threads, enable_mkldnn) + if self.with_vehicle_attr: + vehicleattr_cfg = self.cfg['VEHICLE_ATTR'] + model_dir = model_dir_dict['VEHICLE_ATTR'] + batch_size = vehicleattr_cfg['batch_size'] + color_threshold = vehicleattr_cfg['color_threshold'] + type_threshold = vehicleattr_cfg['type_threshold'] + basemode = self.basemode['VEHICLE_ATTR'] + self.modebase[basemode] = True + self.vehicle_attr_predictor = VehicleAttr( + model_dir, device, run_mode, batch_size, trt_min_shape, + trt_max_shape, trt_opt_shape, trt_calib_mode, cpu_threads, + enable_mkldnn, color_threshold, type_threshold) + else: - mot_cfg = self.cfg['MOT'] - model_dir = mot_cfg['model_dir'] - tracker_config = mot_cfg['tracker_config'] - batch_size = mot_cfg['batch_size'] - self.mot_predictor = SDE_Detector( - model_dir, - tracker_config, - device, - run_mode, - batch_size, - trt_min_shape, - trt_max_shape, - trt_opt_shape, - trt_calib_mode, - cpu_threads, - enable_mkldnn, - draw_center_traj=draw_center_traj, - secs_interval=secs_interval, - do_entrance_counting=do_entrance_counting) - if self.with_attr: + if self.with_human_attr: attr_cfg = self.cfg['ATTR'] - model_dir = attr_cfg['model_dir'] + model_dir = model_dir_dict['ATTR'] batch_size = attr_cfg['batch_size'] + basemode = self.basemode['ATTR'] + self.modebase[basemode] = True self.attr_predictor = AttrDetector( model_dir, device, run_mode, batch_size, trt_min_shape, trt_max_shape, trt_opt_shape, trt_calib_mode, cpu_threads, enable_mkldnn) - if self.with_action: - kpt_cfg = self.cfg['KPT'] - kpt_model_dir = kpt_cfg['model_dir'] - kpt_batch_size = kpt_cfg['batch_size'] - action_cfg = self.cfg['ACTION'] - action_model_dir = action_cfg['model_dir'] - action_batch_size = action_cfg['batch_size'] - action_frames = action_cfg['max_frames'] - 
display_frames = action_cfg['display_frames'] - self.coord_size = action_cfg['coord_size'] - - self.kpt_predictor = KeyPointDetector( - kpt_model_dir, + if self.with_idbased_detaction: + idbased_detaction_cfg = self.cfg['ID_BASED_DETACTION'] + model_dir = model_dir_dict['ID_BASED_DETACTION'] + batch_size = idbased_detaction_cfg['batch_size'] + basemode = self.basemode['ID_BASED_DETACTION'] + threshold = idbased_detaction_cfg['threshold'] + display_frames = idbased_detaction_cfg['display_frames'] + skip_frame_num = idbased_detaction_cfg['skip_frame_num'] + self.modebase[basemode] = True + + self.det_action_predictor = DetActionRecognizer( + model_dir, device, run_mode, - kpt_batch_size, + batch_size, trt_min_shape, trt_max_shape, trt_opt_shape, trt_calib_mode, cpu_threads, enable_mkldnn, - use_dark=False) - self.kpt_buff = KeyPointBuff(action_frames) - - self.action_predictor = ActionRecognizer( - action_model_dir, + threshold=threshold, + display_frames=display_frames, + skip_frame_num=skip_frame_num) + self.det_action_visual_helper = ActionVisualHelper(1) + + if self.with_idbased_clsaction: + idbased_clsaction_cfg = self.cfg['ID_BASED_CLSACTION'] + model_dir = model_dir_dict['ID_BASED_CLSACTION'] + batch_size = idbased_clsaction_cfg['batch_size'] + basemode = self.basemode['ID_BASED_CLSACTION'] + threshold = idbased_clsaction_cfg['threshold'] + self.modebase[basemode] = True + display_frames = idbased_clsaction_cfg['display_frames'] + skip_frame_num = idbased_clsaction_cfg['skip_frame_num'] + + self.cls_action_predictor = ClsActionRecognizer( + model_dir, device, run_mode, - action_batch_size, + batch_size, trt_min_shape, trt_max_shape, trt_opt_shape, trt_calib_mode, cpu_threads, enable_mkldnn, - window_size=action_frames) + threshold=threshold, + display_frames=display_frames, + skip_frame_num=skip_frame_num) + self.cls_action_visual_helper = ActionVisualHelper(1) + + if self.with_skeleton_action: + skeleton_action_cfg = self.cfg['SKELETON_ACTION'] + skeleton_action_model_dir = model_dir_dict['SKELETON_ACTION'] + skeleton_action_batch_size = skeleton_action_cfg['batch_size'] + skeleton_action_frames = skeleton_action_cfg['max_frames'] + display_frames = skeleton_action_cfg['display_frames'] + self.coord_size = skeleton_action_cfg['coord_size'] + basemode = self.basemode['SKELETON_ACTION'] + self.modebase[basemode] = True + + self.skeleton_action_predictor = SkeletonActionRecognizer( + skeleton_action_model_dir, + device, + run_mode, + skeleton_action_batch_size, + trt_min_shape, + trt_max_shape, + trt_opt_shape, + trt_calib_mode, + cpu_threads, + enable_mkldnn, + window_size=skeleton_action_frames) + self.skeleton_action_visual_helper = ActionVisualHelper( + display_frames) + + if self.modebase["skeletonbased"]: + kpt_cfg = self.cfg['KPT'] + kpt_model_dir = model_dir_dict['KPT'] + kpt_batch_size = kpt_cfg['batch_size'] + self.kpt_predictor = KeyPointDetector( + kpt_model_dir, + device, + run_mode, + kpt_batch_size, + trt_min_shape, + trt_max_shape, + trt_opt_shape, + trt_calib_mode, + cpu_threads, + enable_mkldnn, + use_dark=False) + self.kpt_buff = KeyPointBuff(skeleton_action_frames) + + if self.with_vehicleplate: + vehicleplate_cfg = self.cfg['VEHICLE_PLATE'] + self.vehicleplate_detector = PlateRecognizer(args, + vehicleplate_cfg) + basemode = self.basemode['VEHICLE_PLATE'] + self.modebase[basemode] = True + + if self.with_vehicle_attr: + vehicleattr_cfg = self.cfg['VEHICLE_ATTR'] + model_dir = model_dir_dict['VEHICLE_ATTR'] + batch_size = vehicleattr_cfg['batch_size'] + 
color_threshold = vehicleattr_cfg['color_threshold'] + type_threshold = vehicleattr_cfg['type_threshold'] + basemode = self.basemode['VEHICLE_ATTR'] + self.modebase[basemode] = True + self.vehicle_attr_predictor = VehicleAttr( + model_dir, device, run_mode, batch_size, trt_min_shape, + trt_max_shape, trt_opt_shape, trt_calib_mode, cpu_threads, + enable_mkldnn, color_threshold, type_threshold) - self.action_visual_helper = ActionVisualHelper(display_frames) + if self.with_mtmct: + reid_cfg = self.cfg['REID'] + model_dir = model_dir_dict['REID'] + batch_size = reid_cfg['batch_size'] + basemode = self.basemode['REID'] + self.modebase[basemode] = True + self.reid_predictor = ReID( + model_dir, device, run_mode, batch_size, trt_min_shape, + trt_max_shape, trt_opt_shape, trt_calib_mode, cpu_threads, + enable_mkldnn) - if self.with_mtmct: - reid_cfg = self.cfg['REID'] - model_dir = reid_cfg['model_dir'] - batch_size = reid_cfg['batch_size'] - self.reid_predictor = ReID(model_dir, device, run_mode, batch_size, - trt_min_shape, trt_max_shape, - trt_opt_shape, trt_calib_mode, - cpu_threads, enable_mkldnn) + if self.with_mot or self.modebase["idbased"] or self.modebase[ + "skeletonbased"]: + mot_cfg = self.cfg['MOT'] + model_dir = model_dir_dict['MOT'] + tracker_config = mot_cfg['tracker_config'] + batch_size = mot_cfg['batch_size'] + basemode = self.basemode['MOT'] + self.modebase[basemode] = True + self.mot_predictor = SDE_Detector( + model_dir, + tracker_config, + device, + run_mode, + batch_size, + trt_min_shape, + trt_max_shape, + trt_opt_shape, + trt_calib_mode, + cpu_threads, + enable_mkldnn, + draw_center_traj=draw_center_traj, + secs_interval=secs_interval, + do_entrance_counting=do_entrance_counting, + do_break_in_counting=do_break_in_counting, + region_type=region_type, + region_polygon=region_polygon) + + if self.with_video_action: + video_action_cfg = self.cfg['VIDEO_ACTION'] + + basemode = self.basemode['VIDEO_ACTION'] + self.modebase[basemode] = True + + video_action_model_dir = model_dir_dict['VIDEO_ACTION'] + video_action_batch_size = video_action_cfg['batch_size'] + short_size = video_action_cfg["short_size"] + target_size = video_action_cfg["target_size"] + + self.video_action_predictor = VideoActionRecognizer( + model_dir=video_action_model_dir, + short_size=short_size, + target_size=target_size, + device=device, + run_mode=run_mode, + batch_size=video_action_batch_size, + trt_min_shape=trt_min_shape, + trt_max_shape=trt_max_shape, + trt_opt_shape=trt_opt_shape, + trt_calib_mode=trt_calib_mode, + cpu_threads=cpu_threads, + enable_mkldnn=enable_mkldnn) def set_file_name(self, path): - self.file_name = os.path.split(path)[-1] + if path is not None: + self.file_name = os.path.split(path)[-1] + else: + # use camera id + self.file_name = None def get_result(self): return self.collector.get_res() @@ -435,7 +625,7 @@ class PipePredictor(object): self.pipe_timer.module_time['det'].end() self.pipeline_res.update(det_res, 'det') - if self.with_attr: + if self.with_human_attr: crop_inputs = crop_image_with_det(batch_input, det_res) attr_res_list = [] @@ -453,6 +643,24 @@ class PipePredictor(object): attr_res = {'output': attr_res_list} self.pipeline_res.update(attr_res, 'attr') + if self.with_vehicle_attr: + crop_inputs = crop_image_with_det(batch_input, det_res) + vehicle_attr_res_list = [] + + if i > self.warmup_frame: + self.pipe_timer.module_time['vehicle_attr'].start() + + for crop_input in crop_inputs: + attr_res = self.vehicle_attr_predictor.predict_image( + crop_input, visual=False) 
+ vehicle_attr_res_list.extend(attr_res['output']) + + if i > self.warmup_frame: + self.pipe_timer.module_time['vehicle_attr'].end() + + attr_res = {'output': vehicle_attr_res_list} + self.pipeline_res.update(attr_res, 'vehicle_attr') + self.pipe_timer.img_num += len(batch_input) if i > self.warmup_frame: self.pipe_timer.total_time.end() @@ -466,6 +674,8 @@ class PipePredictor(object): # mot -> pose -> action capture = cv2.VideoCapture(video_file) video_out_name = 'output.mp4' if self.file_name is None else self.file_name + if "rtsp" in video_file: + video_out_name = video_out_name + "_rtsp.mp4" # Get Video info : resolution, fps, frame count width = int(capture.get(cv2.CAP_PROP_FRAME_WIDTH)) @@ -490,119 +700,233 @@ class PipePredictor(object): out_id_list = list() prev_center = dict() records = list() - entrance = [0, height / 2., width, height / 2.] + if self.do_entrance_counting or self.do_break_in_counting: + if self.region_type == 'horizontal': + entrance = [0, height / 2., width, height / 2.] + elif self.region_type == 'vertical': + entrance = [width / 2, 0., width / 2, height] + elif self.region_type == 'custom': + entrance = [] + assert len( + self.region_polygon + ) % 2 == 0, "region_polygon should be pairs of coords points when do break_in counting." + for i in range(0, len(self.region_polygon), 2): + entrance.append( + [self.region_polygon[i], self.region_polygon[i + 1]]) + entrance.append([width, height]) + else: + raise ValueError("region_type:{} unsupported.".format( + self.region_type)) + video_fps = fps + video_action_imgs = [] + + if self.with_video_action: + short_size = self.cfg["VIDEO_ACTION"]["short_size"] + scale = ShortSizeScale(short_size) + while (1): if frame_id % 10 == 0: print('frame id: ', frame_id) + ret, frame = capture.read() if not ret: break + frame_rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB) - if frame_id > self.warmup_frame: - self.pipe_timer.total_time.start() - self.pipe_timer.module_time['mot'].start() - res = self.mot_predictor.predict_image( - [copy.deepcopy(frame)], visual=False) + if self.modebase["idbased"] or self.modebase["skeletonbased"]: + if frame_id > self.warmup_frame: + self.pipe_timer.total_time.start() + self.pipe_timer.module_time['mot'].start() + res = self.mot_predictor.predict_image( + [copy.deepcopy(frame_rgb)], visual=False) - if frame_id > self.warmup_frame: - self.pipe_timer.module_time['mot'].end() - - # mot output format: id, class, score, xmin, ymin, xmax, ymax - mot_res = parse_mot_res(res) - - # flow_statistic only support single class MOT - boxes, scores, ids = res[0] # batch size = 1 in MOT - mot_result = (frame_id + 1, boxes[0], scores[0], - ids[0]) # single class - statistic = flow_statistic( - mot_result, self.secs_interval, self.do_entrance_counting, - video_fps, entrance, id_set, interval_id_set, in_id_list, - out_id_list, prev_center, records) - records = statistic['records'] - - # nothing detected - if len(mot_res['boxes']) == 0: - frame_id += 1 if frame_id > self.warmup_frame: - self.pipe_timer.img_num += 1 - self.pipe_timer.total_time.end() - if self.cfg['visual']: - _, _, fps = self.pipe_timer.get_total_time() - im = self.visualize_video(frame, mot_res, frame_id, - fps) # visualize - writer.write(im) - continue - - self.pipeline_res.update(mot_res, 'mot') - if self.with_attr or self.with_action: + self.pipe_timer.module_time['mot'].end() + + # mot output format: id, class, score, xmin, ymin, xmax, ymax + mot_res = parse_mot_res(res) + + # flow_statistic only support single class MOT + boxes, scores, ids = 
res[0] # batch size = 1 in MOT + mot_result = (frame_id + 1, boxes[0], scores[0], + ids[0]) # single class + statistic = flow_statistic( + mot_result, self.secs_interval, self.do_entrance_counting, + self.do_break_in_counting, self.region_type, video_fps, + entrance, id_set, interval_id_set, in_id_list, out_id_list, + prev_center, records) + records = statistic['records'] + + # nothing detected + if len(mot_res['boxes']) == 0: + frame_id += 1 + if frame_id > self.warmup_frame: + self.pipe_timer.img_num += 1 + self.pipe_timer.total_time.end() + if self.cfg['visual']: + _, _, fps = self.pipe_timer.get_total_time() + im = self.visualize_video(frame, mot_res, frame_id, fps, + entrance, records, + center_traj) # visualize + writer.write(im) + if self.file_name is None: # use camera_id + cv2.imshow('Paddle-Pipeline', im) + if cv2.waitKey(1) & 0xFF == ord('q'): + break + continue + + self.pipeline_res.update(mot_res, 'mot') crop_input, new_bboxes, ori_bboxes = crop_image_with_mot( - frame, mot_res) + frame_rgb, mot_res) - if self.with_attr: - if frame_id > self.warmup_frame: - self.pipe_timer.module_time['attr'].start() - attr_res = self.attr_predictor.predict_image( - crop_input, visual=False) - if frame_id > self.warmup_frame: - self.pipe_timer.module_time['attr'].end() - self.pipeline_res.update(attr_res, 'attr') + if self.with_vehicleplate and frame_id % 10 == 0: + if frame_id > self.warmup_frame: + self.pipe_timer.module_time['vehicleplate'].start() + plate_input, _, _ = crop_image_with_mot( + frame_rgb, mot_res, expand=False) + platelicense = self.vehicleplate_detector.get_platelicense( + plate_input) + if frame_id > self.warmup_frame: + self.pipe_timer.module_time['vehicleplate'].end() + self.pipeline_res.update(platelicense, 'vehicleplate') + else: + self.pipeline_res.clear('vehicleplate') - if self.with_action: - if frame_id > self.warmup_frame: - self.pipe_timer.module_time['kpt'].start() - kpt_pred = self.kpt_predictor.predict_image( - crop_input, visual=False) - keypoint_vector, score_vector = translate_to_ori_images( - kpt_pred, np.array(new_bboxes)) - kpt_res = {} - kpt_res['keypoint'] = [ - keypoint_vector.tolist(), score_vector.tolist() - ] if len(keypoint_vector) > 0 else [[], []] - kpt_res['bbox'] = ori_bboxes - if frame_id > self.warmup_frame: - self.pipe_timer.module_time['kpt'].end() + if self.with_human_attr: + if frame_id > self.warmup_frame: + self.pipe_timer.module_time['attr'].start() + attr_res = self.attr_predictor.predict_image( + crop_input, visual=False) + if frame_id > self.warmup_frame: + self.pipe_timer.module_time['attr'].end() + self.pipeline_res.update(attr_res, 'attr') - self.pipeline_res.update(kpt_res, 'kpt') + if self.with_vehicle_attr: + if frame_id > self.warmup_frame: + self.pipe_timer.module_time['vehicle_attr'].start() + attr_res = self.vehicle_attr_predictor.predict_image( + crop_input, visual=False) + if frame_id > self.warmup_frame: + self.pipe_timer.module_time['vehicle_attr'].end() + self.pipeline_res.update(attr_res, 'vehicle_attr') - self.kpt_buff.update(kpt_res, mot_res) # collect kpt output - state = self.kpt_buff.get_state( - ) # whether frame num is enough or lost tracker + if self.with_idbased_detaction: + if frame_id > self.warmup_frame: + self.pipe_timer.module_time['det_action'].start() + det_action_res = self.det_action_predictor.predict( + crop_input, mot_res) + if frame_id > self.warmup_frame: + self.pipe_timer.module_time['det_action'].end() + self.pipeline_res.update(det_action_res, 'det_action') - action_res = {} - if state: + if 
self.cfg['visual']: + self.det_action_visual_helper.update(det_action_res) + + if self.with_idbased_clsaction: if frame_id > self.warmup_frame: - self.pipe_timer.module_time['action'].start() - collected_keypoint = self.kpt_buff.get_collected_keypoint( - ) # reoragnize kpt output with ID - action_input = parse_mot_keypoint(collected_keypoint, - self.coord_size) - action_res = self.action_predictor.predict_skeleton_with_mot( - action_input) + self.pipe_timer.module_time['cls_action'].start() + cls_action_res = self.cls_action_predictor.predict_with_mot( + crop_input, mot_res) if frame_id > self.warmup_frame: - self.pipe_timer.module_time['action'].end() - self.pipeline_res.update(action_res, 'action') + self.pipe_timer.module_time['cls_action'].end() + self.pipeline_res.update(cls_action_res, 'cls_action') - if self.cfg['visual']: - self.action_visual_helper.update(action_res) + if self.cfg['visual']: + self.cls_action_visual_helper.update(cls_action_res) - if self.with_mtmct: - crop_input, img_qualities, rects = self.reid_predictor.crop_image_with_mot( - frame, mot_res) - if frame_id > self.warmup_frame: - self.pipe_timer.module_time['reid'].start() - reid_res = self.reid_predictor.predict_batch(crop_input) + if self.with_skeleton_action: + if frame_id > self.warmup_frame: + self.pipe_timer.module_time['kpt'].start() + kpt_pred = self.kpt_predictor.predict_image( + crop_input, visual=False) + keypoint_vector, score_vector = translate_to_ori_images( + kpt_pred, np.array(new_bboxes)) + kpt_res = {} + kpt_res['keypoint'] = [ + keypoint_vector.tolist(), score_vector.tolist() + ] if len(keypoint_vector) > 0 else [[], []] + kpt_res['bbox'] = ori_bboxes + if frame_id > self.warmup_frame: + self.pipe_timer.module_time['kpt'].end() + + self.pipeline_res.update(kpt_res, 'kpt') + + self.kpt_buff.update(kpt_res, mot_res) # collect kpt output + state = self.kpt_buff.get_state( + ) # whether frame num is enough or lost tracker + + skeleton_action_res = {} + if state: + if frame_id > self.warmup_frame: + self.pipe_timer.module_time[ + 'skeleton_action'].start() + collected_keypoint = self.kpt_buff.get_collected_keypoint( + ) # reoragnize kpt output with ID + skeleton_action_input = parse_mot_keypoint( + collected_keypoint, self.coord_size) + skeleton_action_res = self.skeleton_action_predictor.predict_skeleton_with_mot( + skeleton_action_input) + if frame_id > self.warmup_frame: + self.pipe_timer.module_time['skeleton_action'].end() + self.pipeline_res.update(skeleton_action_res, + 'skeleton_action') + + if self.cfg['visual']: + self.skeleton_action_visual_helper.update( + skeleton_action_res) + + if self.with_mtmct and frame_id % 10 == 0: + crop_input, img_qualities, rects = self.reid_predictor.crop_image_with_mot( + frame_rgb, mot_res) + if frame_id > self.warmup_frame: + self.pipe_timer.module_time['reid'].start() + reid_res = self.reid_predictor.predict_batch(crop_input) + if frame_id > self.warmup_frame: + self.pipe_timer.module_time['reid'].end() + + reid_res_dict = { + 'features': reid_res, + "qualities": img_qualities, + "rects": rects + } + self.pipeline_res.update(reid_res_dict, 'reid') + else: + self.pipeline_res.clear('reid') + + if self.with_video_action: + # get the params + frame_len = self.cfg["VIDEO_ACTION"]["frame_len"] + sample_freq = self.cfg["VIDEO_ACTION"]["sample_freq"] + + if sample_freq * frame_len > frame_count: # video is too short + sample_freq = int(frame_count / frame_len) + + # filter the warmup frames if frame_id > self.warmup_frame: - 
self.pipe_timer.module_time['reid'].end() + self.pipe_timer.module_time['video_action'].start() + + # collect frames + if frame_id % sample_freq == 0: + # Scale image + scaled_img = scale(frame_rgb) + video_action_imgs.append(scaled_img) + + # the number of collected frames is enough to predict video action + if len(video_action_imgs) == frame_len: + classes, scores = self.video_action_predictor.predict( + video_action_imgs) + if frame_id > self.warmup_frame: + self.pipe_timer.module_time['video_action'].end() + + video_action_res = {"class": classes[0], "score": scores[0]} + self.pipeline_res.update(video_action_res, 'video_action') + + print("video_action_res:", video_action_res) - reid_res_dict = { - 'features': reid_res, - "qualities": img_qualities, - "rects": rects - } - self.pipeline_res.update(reid_res_dict, 'reid') + video_action_imgs.clear() # next clip self.collector.append(frame_id, self.pipeline_res) @@ -613,10 +937,14 @@ class PipePredictor(object): if self.cfg['visual']: _, _, fps = self.pipe_timer.get_total_time() - im = self.visualize_video(frame, self.pipeline_res, frame_id, - fps, entrance, records, - center_traj) # visualize + im = self.visualize_video( + frame, self.pipeline_res, self.collector, frame_id, fps, + entrance, records, center_traj) # visualize writer.write(im) + if self.file_name is None: # use camera_id + cv2.imshow('Paddle-Pipeline', im) + if cv2.waitKey(1) & 0xFF == ord('q'): + break writer.release() print('save result to {}'.format(out_path)) @@ -624,6 +952,7 @@ class PipePredictor(object): def visualize_video(self, image, result, + collector, frame_id, fps, entrance=None, @@ -650,26 +979,51 @@ class PipePredictor(object): online_scores[0] = scores online_ids[0] = ids - image = plot_tracking_dict( - image, - num_classes, - online_tlwhs, - online_ids, - online_scores, - frame_id=frame_id, - fps=fps, - do_entrance_counting=self.do_entrance_counting, - entrance=entrance, - records=records, - center_traj=center_traj) - - attr_res = result.get('attr') - if attr_res is not None: + if mot_res is not None: + image = plot_tracking_dict( + image, + num_classes, + online_tlwhs, + online_ids, + online_scores, + frame_id=frame_id, + fps=fps, + ids2names=self.mot_predictor.pred_config.labels, + do_entrance_counting=self.do_entrance_counting, + do_break_in_counting=self.do_break_in_counting, + entrance=entrance, + records=records, + center_traj=center_traj) + + human_attr_res = result.get('attr') + if human_attr_res is not None: boxes = mot_res['boxes'][:, 1:] - attr_res = attr_res['output'] - image = visualize_attr(image, attr_res, boxes) + human_attr_res = human_attr_res['output'] + image = visualize_attr(image, human_attr_res, boxes) image = np.array(image) + vehicle_attr_res = result.get('vehicle_attr') + if vehicle_attr_res is not None: + boxes = mot_res['boxes'][:, 1:] + vehicle_attr_res = vehicle_attr_res['output'] + image = visualize_attr(image, vehicle_attr_res, boxes) + image = np.array(image) + + if mot_res is not None: + vehicleplate = False + plates = [] + for trackid in mot_res['boxes'][:, 0]: + plate = collector.get_carlp(trackid) + if plate != None: + vehicleplate = True + plates.append(plate) + else: + plates.append("") + if vehicleplate: + boxes = mot_res['boxes'][:, 1:] + image = visualize_vehicleplate(image, plates, boxes) + image = np.array(image) + kpt_res = result.get('kpt') if kpt_res is not None: image = visualize_pose( @@ -678,17 +1032,53 @@ class PipePredictor(object): visual_thresh=self.cfg['kpt_thresh'], returnimg=True) - action_res = 
result.get('action') - if action_res is not None: + video_action_res = result.get('video_action') + if video_action_res is not None: + video_action_score = None + if video_action_res and video_action_res["class"] == 1: + video_action_score = video_action_res["score"] + mot_boxes = None + if mot_res: + mot_boxes = mot_res['boxes'] + image = visualize_action( + image, + mot_boxes, + action_visual_collector=None, + action_text="SkeletonAction", + video_action_score=video_action_score, + video_action_text="Fight") + + visual_helper_for_display = [] + action_to_display = [] + + skeleton_action_res = result.get('skeleton_action') + if skeleton_action_res is not None: + visual_helper_for_display.append(self.skeleton_action_visual_helper) + action_to_display.append("Falling") + + det_action_res = result.get('det_action') + if det_action_res is not None: + visual_helper_for_display.append(self.det_action_visual_helper) + action_to_display.append("Smoking") + + cls_action_res = result.get('cls_action') + if cls_action_res is not None: + visual_helper_for_display.append(self.cls_action_visual_helper) + action_to_display.append("Calling") + + if len(visual_helper_for_display) > 0: image = visualize_action(image, mot_res['boxes'], - self.action_visual_helper, "Falling") + visual_helper_for_display, + action_to_display) return image def visualize_image(self, im_files, images, result): start_idx, boxes_num_i = 0, 0 det_res = result.get('det') - attr_res = result.get('attr') + human_attr_res = result.get('attr') + vehicle_attr_res = result.get('vehicle_attr') + for i, (im_file, im) in enumerate(zip(im_files, images)): if det_res is not None: det_res_i = {} @@ -702,10 +1092,15 @@ class PipePredictor(object): threshold=self.cfg['crop_thresh']) im = np.ascontiguousarray(np.copy(im)) im = cv2.cvtColor(im, cv2.COLOR_RGB2BGR) - if attr_res is not None: - attr_res_i = attr_res['output'][start_idx:start_idx + - boxes_num_i] - im = visualize_attr(im, attr_res_i, det_res_i['boxes']) + if human_attr_res is not None: + human_attr_res_i = human_attr_res['output'][start_idx:start_idx + + boxes_num_i] + im = visualize_attr(im, human_attr_res_i, det_res_i['boxes']) + if vehicle_attr_res is not None: + vehicle_attr_res_i = vehicle_attr_res['output'][ + start_idx:start_idx + boxes_num_i] + im = visualize_attr(im, vehicle_attr_res_i, det_res_i['boxes']) + img_name = os.path.split(im_file)[-1] if not os.path.exists(self.output_dir): os.makedirs(self.output_dir) @@ -716,21 +1111,17 @@ class PipePredictor(object): def main(): - cfg = merge_cfg(FLAGS) + cfg = merge_cfg(FLAGS) # use command params to update config print_arguments(cfg) - pipeline = Pipeline( - cfg, FLAGS.image_file, FLAGS.image_dir, FLAGS.video_file, - FLAGS.video_dir, FLAGS.camera_id, FLAGS.enable_attr, - FLAGS.enable_action, FLAGS.device, FLAGS.run_mode, FLAGS.trt_min_shape, - FLAGS.trt_max_shape, FLAGS.trt_opt_shape, FLAGS.trt_calib_mode, - FLAGS.cpu_threads, FLAGS.enable_mkldnn, FLAGS.output_dir, - FLAGS.draw_center_traj, FLAGS.secs_interval, FLAGS.do_entrance_counting) + pipeline = Pipeline(FLAGS, cfg) pipeline.run() if __name__ == '__main__': paddle.enable_static() + + # parse params from command parser = argsparser() FLAGS = parser.parse_args() FLAGS.device = FLAGS.device.upper() diff --git a/deploy/python/action_infer.py b/deploy/pipeline/pphuman/action_infer.py similarity index 47% rename from deploy/python/action_infer.py rename to deploy/pipeline/pphuman/action_infer.py index 
8df189f9014909d1674e31406b557dcae567b5bd..1cf0c8baaa25c31e27072ae4371f9bb60054e689 100644
--- a/deploy/python/action_infer.py
+++ b/deploy/pipeline/pphuman/action_infer.py
@@ -28,12 +28,13 @@
 parent_path = os.path.abspath(os.path.join(__file__, *(['..'] * 2)))
 sys.path.insert(0, parent_path)

 from paddle.inference import Config, create_predictor
-from utils import argsparser, Timer, get_current_memory_mb
-from benchmark_utils import PaddleInferBenchmark
-from infer import Detector, print_arguments
+from python.utils import argsparser, Timer, get_current_memory_mb
+from python.benchmark_utils import PaddleInferBenchmark
+from python.infer import Detector, print_arguments
+from attr_infer import AttrDetector

-class ActionRecognizer(Detector):
+class SkeletonActionRecognizer(Detector):
     """
     Args:
         model_dir (str): root path of model.pdiparams, model.pdmodel and infer_cfg.yml
@@ -67,8 +68,8 @@ class ActionRecognizer(Detector):
                  threshold=0.5,
                  window_size=100,
                  random_pad=False):
-        assert batch_size == 1, "ActionRecognizer only support batch_size=1 now."
-        super(ActionRecognizer, self).__init__(
+        assert batch_size == 1, "SkeletonActionRecognizer only supports batch_size=1 now."
+        super(SkeletonActionRecognizer, self).__init__(
             model_dir=model_dir,
             device=device,
             run_mode=run_mode,
@@ -263,8 +264,328 @@ def get_test_skeletons(input_file):
             "Now only support input with shape: (N, C, T, K, M) or (C, T, K, M)")


+class DetActionRecognizer(object):
+    """
+    Args:
+        model_dir (str): root path of model.pdiparams, model.pdmodel and infer_cfg.yml
+        device (str): Choose the device you want to run, it can be: CPU/GPU/XPU, default is CPU
+        run_mode (str): mode of running(paddle/trt_fp32/trt_fp16)
+        batch_size (int): batch size for inference
+        trt_min_shape (int): min shape for dynamic shape in trt
+        trt_max_shape (int): max shape for dynamic shape in trt
+        trt_opt_shape (int): opt shape for dynamic shape in trt
+        trt_calib_mode (bool): If the model is produced by TRT offline quantitative
+            calibration, trt_calib_mode needs to be set True
+        cpu_threads (int): cpu threads
+        enable_mkldnn (bool): whether to open MKLDNN
+        threshold (float): The threshold of score for action feature object detection.
+        display_frames (int): The number of frames a detected action stays displayed.
+        skip_frame_num (int): The number of frames for interval prediction. A skipped frame will
+            reuse the result of its last frame. If it is set to 0, no frame will be skipped. Default
+            is 0.
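+
+            For example, with skip_frame_num=2 inference runs on frame 0,
+            frame 1 reuses that result, inference runs on frame 2 again,
+            and so on; any change in the set of tracked ids forces a
+            fresh inference regardless of the skip counter.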
+ + """ + + def __init__(self, + model_dir, + device='CPU', + run_mode='paddle', + batch_size=1, + trt_min_shape=1, + trt_max_shape=1280, + trt_opt_shape=640, + trt_calib_mode=False, + cpu_threads=1, + enable_mkldnn=False, + output_dir='output', + threshold=0.5, + display_frames=20, + skip_frame_num=0): + super(DetActionRecognizer, self).__init__() + self.detector = Detector( + model_dir=model_dir, + device=device, + run_mode=run_mode, + batch_size=batch_size, + trt_min_shape=trt_min_shape, + trt_max_shape=trt_max_shape, + trt_opt_shape=trt_opt_shape, + trt_calib_mode=trt_calib_mode, + cpu_threads=cpu_threads, + enable_mkldnn=enable_mkldnn, + output_dir=output_dir, + threshold=threshold) + self.threshold = threshold + self.frame_life = display_frames + self.result_history = {} + self.skip_frame_num = skip_frame_num + self.skip_frame_cnt = 0 + self.id_in_last_frame = [] + + def predict(self, images, mot_result): + if self.skip_frame_cnt == 0 or (not self.check_id_is_same(mot_result)): + det_result = self.detector.predict_image(images, visual=False) + result = self.postprocess(det_result, mot_result) + else: + result = self.reuse_result(mot_result) + + self.skip_frame_cnt += 1 + if self.skip_frame_cnt >= self.skip_frame_num: + self.skip_frame_cnt = 0 + + return result + + def postprocess(self, det_result, mot_result): + np_boxes_num = det_result['boxes_num'] + if np_boxes_num[0] <= 0: + return [[], []] + + mot_bboxes = mot_result.get('boxes') + + cur_box_idx = 0 + mot_id = [] + act_res = [] + for idx in range(len(mot_bboxes)): + tracker_id = mot_bboxes[idx, 0] + + # Current now, class 0 is positive, class 1 is negative. + action_ret = {'class': 1.0, 'score': -1.0} + box_num = np_boxes_num[idx] + boxes = det_result['boxes'][cur_box_idx:cur_box_idx + box_num] + cur_box_idx += box_num + isvalid = (boxes[:, 1] > self.threshold) & (boxes[:, 0] == 0) + valid_boxes = boxes[isvalid, :] + + if valid_boxes.shape[0] >= 1: + action_ret['class'] = valid_boxes[0, 0] + action_ret['score'] = valid_boxes[0, 1] + self.result_history[ + tracker_id] = [0, self.frame_life, valid_boxes[0, 1]] + else: + history_det, life_remain, history_score = self.result_history.get( + tracker_id, [1, self.frame_life, -1.0]) + action_ret['class'] = history_det + action_ret['score'] = -1.0 + life_remain -= 1 + if life_remain <= 0 and tracker_id in self.result_history: + del (self.result_history[tracker_id]) + elif tracker_id in self.result_history: + self.result_history[tracker_id][1] = life_remain + else: + self.result_history[tracker_id] = [ + history_det, life_remain, history_score + ] + + mot_id.append(tracker_id) + act_res.append(action_ret) + result = list(zip(mot_id, act_res)) + self.id_in_last_frame = mot_id + + return result + + def check_id_is_same(self, mot_result): + mot_bboxes = mot_result.get('boxes') + for idx in range(len(mot_bboxes)): + tracker_id = mot_bboxes[idx, 0] + if tracker_id not in self.id_in_last_frame: + return False + return True + + def reuse_result(self, mot_result): + # This function reusing previous results of the same ID directly. 
+ mot_bboxes = mot_result.get('boxes') + + mot_id = [] + act_res = [] + + for idx in range(len(mot_bboxes)): + tracker_id = mot_bboxes[idx, 0] + history_cls, life_remain, history_score = self.result_history.get( + tracker_id, [1, 0, -1.0]) + + life_remain -= 1 + if tracker_id in self.result_history: + self.result_history[tracker_id][1] = life_remain + + action_ret = {'class': history_cls, 'score': history_score} + mot_id.append(tracker_id) + act_res.append(action_ret) + + result = list(zip(mot_id, act_res)) + self.id_in_last_frame = mot_id + + return result + + +class ClsActionRecognizer(AttrDetector): + """ + Args: + model_dir (str): root path of model.pdiparams, model.pdmodel and infer_cfg.yml + device (str): Choose the device you want to run, it can be: CPU/GPU/XPU, default is CPU + run_mode (str): mode of running(paddle/trt_fp32/trt_fp16) + batch_size (int): size of pre batch in inference + trt_min_shape (int): min shape for dynamic shape in trt + trt_max_shape (int): max shape for dynamic shape in trt + trt_opt_shape (int): opt shape for dynamic shape in trt + trt_calib_mode (bool): If the model is produced by TRT offline quantitative + calibration, trt_calib_mode need to set True + cpu_threads (int): cpu threads + enable_mkldnn (bool): whether to open MKLDNN + threshold (float): The threshold of score for action feature object detection. + display_frames (int): The duration for corresponding detected action. + skip_frame_num (int): The number of frames for interval prediction. A skipped frame will + reuse the result of its last frame. If it is set to 0, no frame will be skipped. Default + is 0. + """ + + def __init__(self, + model_dir, + device='CPU', + run_mode='paddle', + batch_size=1, + trt_min_shape=1, + trt_max_shape=1280, + trt_opt_shape=640, + trt_calib_mode=False, + cpu_threads=1, + enable_mkldnn=False, + output_dir='output', + threshold=0.5, + display_frames=80, + skip_frame_num=0): + super(ClsActionRecognizer, self).__init__( + model_dir=model_dir, + device=device, + run_mode=run_mode, + batch_size=batch_size, + trt_min_shape=trt_min_shape, + trt_max_shape=trt_max_shape, + trt_opt_shape=trt_opt_shape, + trt_calib_mode=trt_calib_mode, + cpu_threads=cpu_threads, + enable_mkldnn=enable_mkldnn, + output_dir=output_dir, + threshold=threshold) + self.threshold = threshold + self.frame_life = display_frames + self.result_history = {} + self.skip_frame_num = skip_frame_num + self.skip_frame_cnt = 0 + self.id_in_last_frame = [] + + def predict_with_mot(self, images, mot_result): + if self.skip_frame_cnt == 0 or (not self.check_id_is_same(mot_result)): + images = self.crop_half_body(images) + cls_result = self.predict_image(images, visual=False)["output"] + result = self.match_action_with_id(cls_result, mot_result) + else: + result = self.reuse_result(mot_result) + + self.skip_frame_cnt += 1 + if self.skip_frame_cnt >= self.skip_frame_num: + self.skip_frame_cnt = 0 + + return result + + def crop_half_body(self, images): + crop_images = [] + for image in images: + h = image.shape[0] + crop_images.append(image[:h // 2 + 1, :, :]) + return crop_images + + def postprocess(self, inputs, result): + # postprocess output of predictor + im_results = result['output'] + batch_res = [] + for res in im_results: + action_res = res.tolist() + for cid, score in enumerate(action_res): + action_res[cid] = score + batch_res.append(action_res) + result = {'output': batch_res} + return result + + def match_action_with_id(self, cls_result, mot_result): + mot_bboxes = mot_result.get('boxes') + + mot_id = 
[] + act_res = [] + + for idx in range(len(mot_bboxes)): + tracker_id = mot_bboxes[idx, 0] + + cls_id_res = 1 + cls_score_res = -1.0 + for cls_id in range(len(cls_result[idx])): + score = cls_result[idx][cls_id] + if score > cls_score_res: + cls_id_res = cls_id + cls_score_res = score + + # Current now, class 0 is positive, class 1 is negative. + if cls_id_res == 1 or (cls_id_res == 0 and + cls_score_res < self.threshold): + history_cls, life_remain, history_score = self.result_history.get( + tracker_id, [1, self.frame_life, -1.0]) + cls_id_res = history_cls + cls_score_res = 1 - cls_score_res + life_remain -= 1 + if life_remain <= 0 and tracker_id in self.result_history: + del (self.result_history[tracker_id]) + elif tracker_id in self.result_history: + self.result_history[tracker_id][1] = life_remain + else: + self.result_history[ + tracker_id] = [cls_id_res, life_remain, cls_score_res] + else: + self.result_history[ + tracker_id] = [cls_id_res, self.frame_life, cls_score_res] + + action_ret = {'class': cls_id_res, 'score': cls_score_res} + mot_id.append(tracker_id) + act_res.append(action_ret) + result = list(zip(mot_id, act_res)) + self.id_in_last_frame = mot_id + + return result + + def check_id_is_same(self, mot_result): + mot_bboxes = mot_result.get('boxes') + for idx in range(len(mot_bboxes)): + tracker_id = mot_bboxes[idx, 0] + if tracker_id not in self.id_in_last_frame: + return False + return True + + def reuse_result(self, mot_result): + # This function reusing previous results of the same ID directly. + mot_bboxes = mot_result.get('boxes') + + mot_id = [] + act_res = [] + + for idx in range(len(mot_bboxes)): + tracker_id = mot_bboxes[idx, 0] + history_cls, life_remain, history_score = self.result_history.get( + tracker_id, [1, 0, -1.0]) + + life_remain -= 1 + if tracker_id in self.result_history: + self.result_history[tracker_id][1] = life_remain + + action_ret = {'class': history_cls, 'score': history_score} + mot_id.append(tracker_id) + act_res.append(action_ret) + + result = list(zip(mot_id, act_res)) + self.id_in_last_frame = mot_id + + return result + + def main(): - detector = ActionRecognizer( + detector = SkeletonActionRecognizer( FLAGS.model_dir, device=FLAGS.device, run_mode=FLAGS.run_mode, @@ -305,7 +626,7 @@ def main(): } det_log = PaddleInferBenchmark(detector.config, model_info, data_info, perf_info, mems) - det_log('Action') + det_log('SkeletonAction') if __name__ == '__main__': diff --git a/deploy/python/action_utils.py b/deploy/pipeline/pphuman/action_utils.py similarity index 92% rename from deploy/python/action_utils.py rename to deploy/pipeline/pphuman/action_utils.py index 0fbc92a8aa842dbe92ee61b119be9e8be2ebfac1..483116584e1e5e52aced38dd10ff170014a1b439 100644 --- a/deploy/python/action_utils.py +++ b/deploy/pipeline/pphuman/action_utils.py @@ -68,7 +68,7 @@ class KeyPointBuff(object): def get_collected_keypoint(self): """ - Output (List): List of keypoint results for Action Recognition task, where + Output (List): List of keypoint results for Skeletonbased Recognition task, where the format of each element is [tracker_id, KeyPointSequence of tracker_id] """ output = [] @@ -104,6 +104,10 @@ class ActionVisualHelper(object): def update(self, action_res_list): for mot_id, action_res in action_res_list: + if mot_id in self.action_history: + if int(action_res["class"]) != 0 and int(self.action_history[ + mot_id]["class"]) == 0: + continue action_info = self.action_history.get(mot_id, {}) action_info["class"] = action_res["class"] action_info["life_remain"] = 
self.frame_life diff --git a/deploy/python/attr_infer.py b/deploy/pipeline/pphuman/attr_infer.py similarity index 96% rename from deploy/python/attr_infer.py rename to deploy/pipeline/pphuman/attr_infer.py index ba034639a959a89df9ed49b7256b316ab541773f..dfbdd8f6898c2cb63946d2afac661648ae1ab98d 100644 --- a/deploy/python/attr_infer.py +++ b/deploy/pipeline/pphuman/attr_infer.py @@ -29,11 +29,11 @@ import sys parent_path = os.path.abspath(os.path.join(__file__, *(['..']))) sys.path.insert(0, parent_path) -from benchmark_utils import PaddleInferBenchmark -from preprocess import preprocess, Resize, NormalizeImage, Permute, PadStride, LetterBoxResize, WarpAffine -from visualize import visualize_attr -from utils import argsparser, Timer, get_current_memory_mb -from infer import Detector, get_test_images, print_arguments, load_predictor +from python.benchmark_utils import PaddleInferBenchmark +from python.preprocess import preprocess, Resize, NormalizeImage, Permute, PadStride, LetterBoxResize, WarpAffine +from python.visualize import visualize_attr +from python.utils import argsparser, Timer, get_current_memory_mb +from python.infer import Detector, get_test_images, print_arguments, load_predictor from PIL import Image, ImageDraw, ImageFont @@ -142,13 +142,12 @@ class AttrDetector(Detector): bag_label = bag if bag_score > self.threshold else 'No bag' label_res.append(bag_label) # upper - upper_res = res[4:8] upper_label = 'Upper:' sleeve = 'LongSleeve' if res[3] > res[2] else 'ShortSleeve' upper_label += ' {}'.format(sleeve) - for i, r in enumerate(upper_res): - if r > self.threshold: - upper_label += ' {}'.format(upper_list[i]) + upper_res = res[4:8] + if np.max(upper_res) > self.threshold: + upper_label += ' {}'.format(upper_list[np.argmax(upper_res)]) label_res.append(upper_label) # lower lower_res = res[8:14] diff --git a/deploy/pphuman/mtmct.py b/deploy/pipeline/pphuman/mtmct.py similarity index 85% rename from deploy/pphuman/mtmct.py rename to deploy/pipeline/pphuman/mtmct.py index 30f84724809753b577503b3bb59d50a21731ddb1..8ab72f4a4351d3872f1ae36881fc10f07653eae1 100644 --- a/deploy/pphuman/mtmct.py +++ b/deploy/pipeline/pphuman/mtmct.py @@ -12,15 +12,21 @@ # See the License for the specific language governing permissions and # limitations under the License. 
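A quick illustration of the attr_infer.py hunk above before continuing with mtmct.py: the "upper" clothing attribute moves from multi-label thresholding (every class over `self.threshold` was appended) to a single-label argmax (only the best-scoring class is kept, and only when it clears the threshold). A minimal sketch of the behavioral difference, with made-up scores and illustrative stand-ins for the `upper_list` labels:

```python
import numpy as np

# Made-up scores for the four upper-clothing classes (res[4:8] in the hunk);
# label names are illustrative placeholders for upper_list.
upper_list = ['UpperStride', 'UpperLogo', 'UpperPlaid', 'UpperSplice']
upper_res = np.array([0.61, 0.57, 0.12, 0.03])
threshold = 0.5

# Old behavior: every class above the threshold is reported.
old = [upper_list[i] for i, r in enumerate(upper_res) if r > threshold]

# New behavior: only the single best class, and only if it clears the threshold.
new = ([upper_list[int(np.argmax(upper_res))]]
       if np.max(upper_res) > threshold else [])

print(old)  # ['UpperStride', 'UpperLogo']
print(new)  # ['UpperStride']
```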
-import motmetrics as mm
 from pptracking.python.mot.visualize import plot_tracking
+from python.visualize import visualize_attr
 import os
 import re
 import cv2
 import gc
 import numpy as np
-from sklearn import preprocessing
-from sklearn.cluster import AgglomerativeClustering
+try:
+    from sklearn import preprocessing
+    from sklearn.cluster import AgglomerativeClustering
+except Exception:
+    print(
+        'Warning: Unable to use MTMCT in PP-Human, please install scikit-learn, for example: `pip install scikit-learn`'
+    )
+    pass
 import pandas as pd
 from tqdm import tqdm
 from functools import reduce
@@ -98,7 +104,8 @@ def get_mtmct_matching_results(pred_mtmct_file,
                                secs_interval=0.5,
 
     return camera_results, cid_tid_fid_results
 
 
-def save_mtmct_vis_results(camera_results, captures, output_dir):
+def save_mtmct_vis_results(camera_results, captures, output_dir,
+                           multi_res=None):
     # camera_results: 'cid, tid, fid, x1, y1, w, h'
     camera_ids = list(camera_results.keys())
 
@@ -113,15 +120,15 @@
         cid = camera_ids[idx]
         basename = os.path.basename(video_file)
         video_out_name = "vis_" + basename
-        print("Start visualizing output video: {}".format(video_out_name))
         out_path = os.path.join(save_dir, video_out_name)
+        print("Start visualizing output video: {}".format(out_path))
 
         # Get Video info : resolution, fps, frame count
         width = int(capture.get(cv2.CAP_PROP_FRAME_WIDTH))
         height = int(capture.get(cv2.CAP_PROP_FRAME_HEIGHT))
         fps = int(capture.get(cv2.CAP_PROP_FPS))
         frame_count = int(capture.get(cv2.CAP_PROP_FRAME_COUNT))
-        fourcc = cv2.VideoWriter_fourcc(* 'mp4v')
+        fourcc = cv2.VideoWriter_fourcc(*'mp4v')
         writer = cv2.VideoWriter(out_path, fourcc, fps, (width, height))
         frame_id = 0
         while (1):
@@ -138,6 +145,28 @@
             boxes = frame_results[:, -4:]
             ids = frame_results[:, 1]
             image = plot_tracking(frame, boxes, ids, frame_id=frame_id, fps=fps)
+
+            # add attr vis
+            if multi_res:
+                tid_list = [
+                    'c' + str(idx) + '_' + 't' + str(int(j))
+                    for j in range(1, len(ids) + 1)
+                ]  # c0_t1, c0_t2...
+                all_attr_result = [multi_res[i]["attrs"]
+                                   for i in tid_list]  # all cid_tid result
+                if any(
+                        all_attr_result
+                ):  # visualize attrs if at least one cid_tid has attrs
+                    attr_res = []
+                    for k in tid_list:
+                        if (frame_id - 1) >= len(multi_res[k]['attrs']):
+                            t_attr = None
+                        else:
+                            t_attr = multi_res[k]['attrs'][frame_id - 1]
+                        attr_res.append(t_attr)
+                    image = visualize_attr(
+                        image, attr_res, boxes, is_mtmct=True)
+
         writer.write(image)
     writer.release()
 
@@ -297,10 +326,9 @@ def distill_idfeat(mot_res):
     feature_new = feature_list
 
-    #if available frames number is more than 200, take one frame data per 20 frames
-    if len(qualities_new) > 200:
-        skipf = 20
-    else:
-        skipf = max(10, len(qualities_new) // 10)
+    # if more than 20 frames are available, take one frame in every two
+    skipf = 1
+    if len(qualities_new) > 20:
+        skipf = 2
     quality_skip = np.array(qualities_new[::skipf])
     feature_skip = np.array(feature_new[::skipf])
 
@@ -322,6 +350,8 @@ def res2dict(multi_res):
         for tid, res in c_res.items():
             key = "c" + str(cid) + "_t" + str(tid)
             if key not in cid_tid_dict:
+                if len(res["rects"]) < 10:
+                    continue
                 cid_tid_dict[key] = res
                 cid_tid_dict[key]['mean_feat'] = distill_idfeat(res)
     return cid_tid_dict
 
@@ -329,6 +359,9 @@ def mtmct_process(multi_res, captures, mtmct_vis=True, output_dir="output"):
     cid_tid_dict = res2dict(multi_res)
+    if len(cid_tid_dict) == 0:
+        print("no tracking result found, mtmct will be skipped.")
+        return
     map_tid = sub_cluster(cid_tid_dict)
 
     if not os.path.exists(output_dir):
@@ -340,4 +373,8 @@
     camera_results, cid_tid_fid_res = get_mtmct_matching_results(
         pred_mtmct_file)
 
-    save_mtmct_vis_results(camera_results, captures, output_dir=output_dir)
+    save_mtmct_vis_results(
+        camera_results,
+        captures,
+        output_dir=output_dir,
+        multi_res=cid_tid_dict)
diff --git a/deploy/pphuman/reid.py b/deploy/pipeline/pphuman/reid.py
similarity index 99%
rename from deploy/pphuman/reid.py
rename to deploy/pipeline/pphuman/reid.py
index cef4029239f7e0f635547506282c2527bf687353..3f4d59d78e8273385353a45248a059d523eb478c 100644
--- a/deploy/pphuman/reid.py
+++ b/deploy/pipeline/pphuman/reid.py
@@ -73,7 +73,7 @@ class ReID(object):
         self.det_times = Timer()
         self.cpu_mem, self.gpu_mem, self.gpu_util = 0, 0, 0
         self.batch_size = batch_size
-        self.input_wh = [128, 256]
+        self.input_wh = (128, 256)
 
     def set_config(self, model_dir):
         return PredictConfig(model_dir)
diff --git a/deploy/pipeline/pphuman/video_action_infer.py b/deploy/pipeline/pphuman/video_action_infer.py
new file mode 100644
index 0000000000000000000000000000000000000000..cb12bd71b6c421e7105e3e2ed60ce2f9a519e1bb
--- /dev/null
+++ b/deploy/pipeline/pphuman/video_action_infer.py
@@ -0,0 +1,296 @@
+# Copyright (c) 2022 PaddlePaddle Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
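Before diving into the new video action recognizer, a quick aside on the `distill_idfeat` change above: the patch now keeps every frame for short tracks and every second frame once a track exceeds 20 frames when condensing a track into a single ReID feature, and `res2dict` additionally drops tracks with fewer than 10 rects. A toy sketch of that subsampling combined with quality weighting; the thresholds mirror the hunk, while the quality-weighted mean itself is an illustrative assumption (the actual aggregation lives in the unchanged remainder of `distill_idfeat`):

```python
import numpy as np

def mean_track_feature(qualities, features):
    # Subsample: keep every frame up to 20 frames, every second one beyond that.
    skipf = 2 if len(qualities) > 20 else 1
    q = np.array(qualities[::skipf], dtype=np.float32)  # (n,)
    f = np.array(features[::skipf], dtype=np.float32)   # (n, dim)

    # Quality-weighted mean feature, L2-normalized for cosine matching.
    w = q / max(float(q.sum()), 1e-6)
    mean = (f * w[:, None]).sum(axis=0)
    return mean / max(float(np.linalg.norm(mean)), 1e-6)

feat = mean_track_feature([0.9] * 30, [np.ones(128)] * 30)
print(feat.shape)  # (128,)
```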
+
+import os
+import yaml
+import glob
+
+import cv2
+import numpy as np
+import math
+import paddle
+import sys
+from collections.abc import Sequence
+import paddle.nn.functional as F
+
+# add deploy path of PaddleDetection to sys.path
+parent_path = os.path.abspath(os.path.join(__file__, *(['..'] * 2)))
+sys.path.insert(0, parent_path)
+
+from paddle.inference import Config, create_predictor
+from python.utils import argsparser, Timer, get_current_memory_mb
+from python.benchmark_utils import PaddleInferBenchmark
+from python.infer import Detector, print_arguments
+from video_action_preprocess import VideoDecoder, Sampler, Scale, CenterCrop, Normalization, Image2Array
+
+
+def softmax(x):
+    # subtract the max logit first so np.exp cannot overflow on large scores
+    x = x - np.max(x)
+    f_x = np.exp(x) / np.sum(np.exp(x))
+    return f_x
+
+
+class VideoActionRecognizer(object):
+    """
+    Args:
+        model_dir (str): root path of model.pdiparams, model.pdmodel and infer_cfg.yml
+        device (str): Choose the device you want to run, it can be: CPU/GPU/XPU, default is CPU
+        run_mode (str): mode of running(paddle/trt_fp32/trt_fp16)
+        batch_size (int): size per batch in inference
+        trt_min_shape (int): min shape for dynamic shape in trt
+        trt_max_shape (int): max shape for dynamic shape in trt
+        trt_opt_shape (int): opt shape for dynamic shape in trt
+        trt_calib_mode (bool): If the model is produced by TRT offline quantitative
+            calibration, trt_calib_mode needs to be set True
+        cpu_threads (int): cpu threads
+        enable_mkldnn (bool): whether to enable MKLDNN
+    """
+
+    def __init__(self,
+                 model_dir,
+                 device='CPU',
+                 run_mode='paddle',
+                 num_seg=8,
+                 seg_len=1,
+                 short_size=256,
+                 target_size=224,
+                 top_k=1,
+                 batch_size=1,
+                 trt_min_shape=1,
+                 trt_max_shape=1280,
+                 trt_opt_shape=640,
+                 trt_calib_mode=False,
+                 cpu_threads=1,
+                 enable_mkldnn=False,
+                 ir_optim=True):
+
+        self.num_seg = num_seg
+        self.seg_len = seg_len
+        self.short_size = short_size
+        self.target_size = target_size
+        self.top_k = top_k
+
+        assert batch_size == 1, "VideoActionRecognizer only supports batch_size=1 for now."
+ + self.model_dir = model_dir + self.device = device + self.run_mode = run_mode + self.batch_size = batch_size + self.trt_min_shape = trt_min_shape + self.trt_max_shape = trt_max_shape + self.trt_opt_shape = trt_opt_shape + self.trt_calib_mode = trt_calib_mode + self.cpu_threads = cpu_threads + self.enable_mkldnn = enable_mkldnn + self.ir_optim = ir_optim + + self.recognize_times = Timer() + + model_file_path = glob.glob(os.path.join(model_dir, "*.pdmodel"))[0] + params_file_path = glob.glob(os.path.join(model_dir, "*.pdiparams"))[0] + self.config = Config(model_file_path, params_file_path) + + if device == "GPU" or device == "gpu": + self.config.enable_use_gpu(8000, 0) + else: + self.config.disable_gpu() + if self.enable_mkldnn: + # cache 10 different shapes for mkldnn to avoid memory leak + self.config.set_mkldnn_cache_capacity(10) + self.config.enable_mkldnn() + + self.config.switch_ir_optim(self.ir_optim) # default true + + precision_map = { + 'trt_int8': Config.Precision.Int8, + 'trt_fp32': Config.Precision.Float32, + 'trt_fp16': Config.Precision.Half + } + if run_mode in precision_map.keys(): + self.config.enable_tensorrt_engine( + max_batch_size=8, precision_mode=precision_map[run_mode]) + + self.config.enable_memory_optim() + # use zero copy + self.config.switch_use_feed_fetch_ops(False) + + self.predictor = create_predictor(self.config) + + def preprocess_batch(self, file_list): + batched_inputs = [] + for file in file_list: + inputs = self.preprocess(file) + batched_inputs.append(inputs) + batched_inputs = [ + np.concatenate([item[i] for item in batched_inputs]) + for i in range(len(batched_inputs[0])) + ] + self.input_file = file_list + return batched_inputs + + def get_timer(self): + return self.recognize_times + + def predict(self, input): + ''' + Args: + input (str) or (list): video file path or image data list + Returns: + results (dict): + ''' + + input_names = self.predictor.get_input_names() + input_tensor = self.predictor.get_input_handle(input_names[0]) + + output_names = self.predictor.get_output_names() + output_tensor = self.predictor.get_output_handle(output_names[0]) + + # preprocess + self.recognize_times.preprocess_time_s.start() + if type(input) == str: + inputs = self.preprocess_video(input) + else: + inputs = self.preprocess_frames(input) + self.recognize_times.preprocess_time_s.end() + + inputs = np.expand_dims( + inputs, axis=0).repeat( + self.batch_size, axis=0).copy() + + input_tensor.copy_from_cpu(inputs) + + # model prediction + self.recognize_times.inference_time_s.start() + self.predictor.run() + self.recognize_times.inference_time_s.end() + + output = output_tensor.copy_to_cpu() + + # postprocess + self.recognize_times.postprocess_time_s.start() + classes, scores = self.postprocess(output) + self.recognize_times.postprocess_time_s.end() + + return classes, scores + + def preprocess_frames(self, frame_list): + """ + frame_list: list, frame list + return: list + """ + + results = {} + results['frames_len'] = len(frame_list) + results["imgs"] = frame_list + + img_mean = [0.485, 0.456, 0.406] + img_std = [0.229, 0.224, 0.225] + ops = [ + CenterCrop(self.target_size), Image2Array(), + Normalization(img_mean, img_std) + ] + for op in ops: + results = op(results) + + res = np.expand_dims(results['imgs'], axis=0).copy() + return [res] + + def preprocess_video(self, input_file): + """ + input_file: str, file path + return: list + """ + assert os.path.isfile(input_file) is not None, "{0} not exists".format( + input_file) + + results = {'filename': input_file} 
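As a usage sketch of the `predict` flow shown above: construct the recognizer, point it at a video file, and read back the top-k classes and scores. The model directory and video path are placeholders, and the keyword values simply restate the `__init__` defaults:

```python
# Hypothetical paths; the model dir must contain *.pdmodel and *.pdiparams.
recognizer = VideoActionRecognizer(
    'output_inference/ppTSM',
    device='GPU',
    run_mode='paddle',
    num_seg=8,
    short_size=256,
    target_size=224,
    top_k=1)

classes, scores = recognizer.predict('test_fight.mp4')
print('top-1 class: {}, score: {:.4f}'.format(classes[0], scores[0]))
```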
+ img_mean = [0.485, 0.456, 0.406] + img_std = [0.229, 0.224, 0.225] + ops = [ + VideoDecoder(), Sampler( + self.num_seg, self.seg_len, valid_mode=True), + Scale(self.short_size), CenterCrop(self.target_size), Image2Array(), + Normalization(img_mean, img_std) + ] + for op in ops: + results = op(results) + + res = np.expand_dims(results['imgs'], axis=0).copy() + return [res] + + def postprocess(self, output): + output = output.flatten() # numpy.ndarray + output = softmax(output) + classes = np.argpartition(output, -self.top_k)[-self.top_k:] + classes = classes[np.argsort(-output[classes])] + scores = output[classes] + return classes, scores + + +def main(): + if not FLAGS.run_benchmark: + assert FLAGS.batch_size == 1 + assert FLAGS.use_fp16 is False + else: + assert FLAGS.use_gpu is True + + recognizer = VideoActionRecognizer( + FLAGS.model_dir, + short_size=FLAGS.short_size, + target_size=FLAGS.target_size, + device=FLAGS.device, + run_mode=FLAGS.run_mode, + batch_size=FLAGS.batch_size, + trt_min_shape=FLAGS.trt_min_shape, + trt_max_shape=FLAGS.trt_max_shape, + trt_opt_shape=FLAGS.trt_opt_shape, + trt_calib_mode=FLAGS.trt_calib_mode, + cpu_threads=FLAGS.cpu_threads, + enable_mkldnn=FLAGS.enable_mkldnn, ) + + if not FLAGS.run_benchmark: + classes, scores = recognizer.predict(FLAGS.video_file) + print("Current video file: {}".format(FLAGS.video_file)) + print("\ttop-1 class: {0}".format(classes[0])) + print("\ttop-1 score: {0}".format(scores[0])) + else: + cm, gm, gu = get_current_memory_mb() + mems = {'cpu_rss_mb': cm, 'gpu_rss_mb': gm, 'gpu_util': gu * 100} + + perf_info = recognizer.recognize_times.report() + model_dir = FLAGS.model_dir + mode = FLAGS.run_mode + model_info = { + 'model_name': model_dir.strip('/').split('/')[-1], + 'precision': mode.split('_')[-1] + } + data_info = { + 'batch_size': FLAGS.batch_size, + 'shape': "dynamic_shape", + 'data_num': perf_info['img_num'] + } + recognize_log = PaddleInferBenchmark(recognizer.config, model_info, + data_info, perf_info, mems) + recognize_log('Fight') + + +if __name__ == '__main__': + paddle.enable_static() + parser = argsparser() + FLAGS = parser.parse_args() + print_arguments(FLAGS) + FLAGS.device = FLAGS.device.upper() + assert FLAGS.device in ['CPU', 'GPU', 'XPU' + ], "device should be CPU, GPU or XPU" + + main() diff --git a/deploy/pipeline/pphuman/video_action_preprocess.py b/deploy/pipeline/pphuman/video_action_preprocess.py new file mode 100644 index 0000000000000000000000000000000000000000..f6f9f11f7aee643ebfc070073f18f7e28bebf9dd --- /dev/null +++ b/deploy/pipeline/pphuman/video_action_preprocess.py @@ -0,0 +1,545 @@ +# Copyright (c) 2020 PaddlePaddle Authors. All Rights Reserved. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +import cv2 +import numpy as np +from collections.abc import Sequence +from PIL import Image +import paddle + + +class Sampler(object): + """ + Sample frames id. + NOTE: Use PIL to read image here, has diff with CV2 + Args: + num_seg(int): number of segments. 
+ seg_len(int): number of sampled frames in each segment. + valid_mode(bool): True or False. + Returns: + frames_idx: the index of sampled #frames. + """ + + def __init__(self, + num_seg, + seg_len, + frame_interval=None, + valid_mode=True, + dense_sample=False, + linspace_sample=False, + use_pil=True): + self.num_seg = num_seg + self.seg_len = seg_len + self.frame_interval = frame_interval + self.valid_mode = valid_mode + self.dense_sample = dense_sample + self.linspace_sample = linspace_sample + self.use_pil = use_pil + + def _get(self, frames_idx, results): + data_format = results['format'] + + if data_format == "frame": + frame_dir = results['frame_dir'] + imgs = [] + for idx in frames_idx: + img = Image.open( + os.path.join(frame_dir, results['suffix'].format( + idx))).convert('RGB') + imgs.append(img) + + elif data_format == "video": + if results['backend'] == 'cv2': + frames = np.array(results['frames']) + imgs = [] + for idx in frames_idx: + imgbuf = frames[idx] + img = Image.fromarray(imgbuf, mode='RGB') + imgs.append(img) + elif results['backend'] == 'decord': + container = results['frames'] + if self.use_pil: + frames_select = container.get_batch(frames_idx) + # dearray_to_img + np_frames = frames_select.asnumpy() + imgs = [] + for i in range(np_frames.shape[0]): + imgbuf = np_frames[i] + imgs.append(Image.fromarray(imgbuf, mode='RGB')) + else: + if frames_idx.ndim != 1: + frames_idx = np.squeeze(frames_idx) + frame_dict = { + idx: container[idx].asnumpy() + for idx in np.unique(frames_idx) + } + imgs = [frame_dict[idx] for idx in frames_idx] + elif results['backend'] == 'pyav': + imgs = [] + frames = np.array(results['frames']) + for idx in frames_idx: + imgbuf = frames[idx] + imgs.append(imgbuf) + imgs = np.stack(imgs) # thwc + else: + raise NotImplementedError + else: + raise NotImplementedError + results['imgs'] = imgs # all image data + return results + + def _get_train_clips(self, num_frames): + ori_seg_len = self.seg_len * self.frame_interval + avg_interval = (num_frames - ori_seg_len + 1) // self.num_seg + + if avg_interval > 0: + base_offsets = np.arange(self.num_seg) * avg_interval + clip_offsets = base_offsets + np.random.randint( + avg_interval, size=self.num_seg) + elif num_frames > max(self.num_seg, ori_seg_len): + clip_offsets = np.sort( + np.random.randint( + num_frames - ori_seg_len + 1, size=self.num_seg)) + elif avg_interval == 0: + ratio = (num_frames - ori_seg_len + 1.0) / self.num_seg + clip_offsets = np.around(np.arange(self.num_seg) * ratio) + else: + clip_offsets = np.zeros((self.num_seg, ), dtype=np.int) + return clip_offsets + + def _get_test_clips(self, num_frames): + ori_seg_len = self.seg_len * self.frame_interval + avg_interval = (num_frames - ori_seg_len + 1) / float(self.num_seg) + if num_frames > ori_seg_len - 1: + base_offsets = np.arange(self.num_seg) * avg_interval + clip_offsets = (base_offsets + avg_interval / 2.0).astype(np.int) + else: + clip_offsets = np.zeros((self.num_seg, ), dtype=np.int) + return clip_offsets + + def __call__(self, results): + """ + Args: + frames_len: length of frames. + return: + sampling id. 
+ """ + frames_len = int(results['frames_len']) # total number of frames + + frames_idx = [] + if self.frame_interval is not None: + assert isinstance(self.frame_interval, int) + if not self.valid_mode: + offsets = self._get_train_clips(frames_len) + else: + offsets = self._get_test_clips(frames_len) + + offsets = offsets[:, None] + np.arange(self.seg_len)[ + None, :] * self.frame_interval + offsets = np.concatenate(offsets) + + offsets = offsets.reshape((-1, self.seg_len)) + offsets = np.mod(offsets, frames_len) + offsets = np.concatenate(offsets) + + if results['format'] == 'video': + frames_idx = offsets + elif results['format'] == 'frame': + frames_idx = list(offsets + 1) + else: + raise NotImplementedError + + return self._get(frames_idx, results) + + print("self.frame_interval:", self.frame_interval) + + if self.linspace_sample: # default if False + if 'start_idx' in results and 'end_idx' in results: + offsets = np.linspace(results['start_idx'], results['end_idx'], + self.num_seg) + else: + offsets = np.linspace(0, frames_len - 1, self.num_seg) + offsets = np.clip(offsets, 0, frames_len - 1).astype(np.int64) + if results['format'] == 'video': + frames_idx = list(offsets) + frames_idx = [x % frames_len for x in frames_idx] + elif results['format'] == 'frame': + frames_idx = list(offsets + 1) + else: + raise NotImplementedError + return self._get(frames_idx, results) + + average_dur = int(frames_len / self.num_seg) + + print("results['format']:", results['format']) + + if self.dense_sample: # For ppTSM, default is False + if not self.valid_mode: # train + sample_pos = max(1, 1 + frames_len - 64) + t_stride = 64 // self.num_seg + start_idx = 0 if sample_pos == 1 else np.random.randint( + 0, sample_pos - 1) + offsets = [(idx * t_stride + start_idx) % frames_len + 1 + for idx in range(self.num_seg)] + frames_idx = offsets + else: + sample_pos = max(1, 1 + frames_len - 64) + t_stride = 64 // self.num_seg + start_list = np.linspace(0, sample_pos - 1, num=10, dtype=int) + offsets = [] + for start_idx in start_list.tolist(): + offsets += [(idx * t_stride + start_idx) % frames_len + 1 + for idx in range(self.num_seg)] + frames_idx = offsets + else: + for i in range(self.num_seg): + idx = 0 + if not self.valid_mode: + if average_dur >= self.seg_len: + idx = random.randint(0, average_dur - self.seg_len) + idx += i * average_dur + elif average_dur >= 1: + idx += i * average_dur + else: + idx = i + else: + if average_dur >= self.seg_len: + idx = (average_dur - 1) // 2 + idx += i * average_dur + elif average_dur >= 1: + idx += i * average_dur + else: + idx = i + + for jj in range(idx, idx + self.seg_len): + if results['format'] == 'video': + frames_idx.append(int(jj % frames_len)) + elif results['format'] == 'frame': + frames_idx.append(jj + 1) + + elif results['format'] == 'MRI': + frames_idx.append(jj) + else: + raise NotImplementedError + + return self._get(frames_idx, results) + + +class Scale(object): + """ + Scale images. + Args: + short_size(float | int): Short size of an image will be scaled to the short_size. + fixed_ratio(bool): Set whether to zoom according to a fixed ratio. default: True + do_round(bool): Whether to round up when calculating the zoom ratio. default: False + backend(str): Choose pillow or cv2 as the graphics processing backend. 
default: 'pillow' + """ + + def __init__(self, + short_size, + fixed_ratio=True, + keep_ratio=None, + do_round=False, + backend='pillow'): + self.short_size = short_size + assert (fixed_ratio and not keep_ratio) or ( + not fixed_ratio + ), "fixed_ratio and keep_ratio cannot be true at the same time" + self.fixed_ratio = fixed_ratio + self.keep_ratio = keep_ratio + self.do_round = do_round + + assert backend in [ + 'pillow', 'cv2' + ], "Scale's backend must be pillow or cv2, but get {backend}" + + self.backend = backend + + def __call__(self, results): + """ + Performs resize operations. + Args: + imgs (Sequence[PIL.Image]): List where each item is a PIL.Image. + For example, [PIL.Image0, PIL.Image1, PIL.Image2, ...] + return: + resized_imgs: List where each item is a PIL.Image after scaling. + """ + imgs = results['imgs'] + resized_imgs = [] + for i in range(len(imgs)): + img = imgs[i] + if isinstance(img, np.ndarray): + h, w, _ = img.shape + elif isinstance(img, Image.Image): + w, h = img.size + else: + raise NotImplementedError + + if w <= h: + ow = self.short_size + if self.fixed_ratio: # default is True + oh = int(self.short_size * 4.0 / 3.0) + elif not self.keep_ratio: # no + oh = self.short_size + else: + scale_factor = self.short_size / w + oh = int(h * float(scale_factor) + + 0.5) if self.do_round else int(h * + self.short_size / w) + ow = int(w * float(scale_factor) + + 0.5) if self.do_round else int(w * + self.short_size / h) + else: + oh = self.short_size + if self.fixed_ratio: + ow = int(self.short_size * 4.0 / 3.0) + elif not self.keep_ratio: # no + ow = self.short_size + else: + scale_factor = self.short_size / h + oh = int(h * float(scale_factor) + + 0.5) if self.do_round else int(h * + self.short_size / w) + ow = int(w * float(scale_factor) + + 0.5) if self.do_round else int(w * + self.short_size / h) + + if type(img) == np.ndarray: + img = Image.fromarray(img, mode='RGB') + + if self.backend == 'pillow': + resized_imgs.append(img.resize((ow, oh), Image.BILINEAR)) + elif self.backend == 'cv2' and (self.keep_ratio is not None): + resized_imgs.append( + cv2.resize( + img, (ow, oh), interpolation=cv2.INTER_LINEAR)) + else: + resized_imgs.append( + Image.fromarray( + cv2.resize( + np.asarray(img), (ow, oh), + interpolation=cv2.INTER_LINEAR))) + results['imgs'] = resized_imgs + return results + + +class CenterCrop(object): + """ + Center crop images + Args: + target_size(int): Center crop a square with the target_size from an image. + do_round(bool): Whether to round up the coordinates of the upper left corner of the cropping area. default: True + """ + + def __init__(self, target_size, do_round=True, backend='pillow'): + self.target_size = target_size + self.do_round = do_round + self.backend = backend + + def __call__(self, results): + """ + Performs Center crop operations. + Args: + imgs: List where each item is a PIL.Image. + For example, [PIL.Image0, PIL.Image1, PIL.Image2, ...] + return: + ccrop_imgs: List where each item is a PIL.Image after Center crop. 
+ """ + imgs = results['imgs'] + ccrop_imgs = [] + th, tw = self.target_size, self.target_size + if isinstance(imgs, paddle.Tensor): + h, w = imgs.shape[-2:] + x1 = int(round((w - tw) / 2.0)) if self.do_round else (w - tw) // 2 + y1 = int(round((h - th) / 2.0)) if self.do_round else (h - th) // 2 + ccrop_imgs = imgs[:, :, y1:y1 + th, x1:x1 + tw] + else: + for img in imgs: + if self.backend == 'pillow': + w, h = img.size + elif self.backend == 'cv2': + h, w, _ = img.shape + else: + raise NotImplementedError + assert (w >= self.target_size) and (h >= self.target_size), \ + "image width({}) and height({}) should be larger than crop size".format( + w, h, self.target_size) + x1 = int(round((w - tw) / 2.0)) if self.do_round else ( + w - tw) // 2 + y1 = int(round((h - th) / 2.0)) if self.do_round else ( + h - th) // 2 + if self.backend == 'cv2': + ccrop_imgs.append(img[y1:y1 + th, x1:x1 + tw]) + elif self.backend == 'pillow': + ccrop_imgs.append(img.crop((x1, y1, x1 + tw, y1 + th))) + results['imgs'] = ccrop_imgs + return results + + +class Image2Array(object): + """ + transfer PIL.Image to Numpy array and transpose dimensions from 'dhwc' to 'dchw'. + Args: + transpose: whether to transpose or not, default True, False for slowfast. + """ + + def __init__(self, transpose=True, data_format='tchw'): + assert data_format in [ + 'tchw', 'cthw' + ], "Target format must in ['tchw', 'cthw'], but got {data_format}" + self.transpose = transpose + self.data_format = data_format + + def __call__(self, results): + """ + Performs Image to NumpyArray operations. + Args: + imgs: List where each item is a PIL.Image. + For example, [PIL.Image0, PIL.Image1, PIL.Image2, ...] + return: + np_imgs: Numpy array. + """ + imgs = results['imgs'] + if 'backend' in results and results[ + 'backend'] == 'pyav': # [T,H,W,C] in [0, 1] + if self.transpose: + if self.data_format == 'tchw': + t_imgs = imgs.transpose((0, 3, 1, 2)) # tchw + else: + t_imgs = imgs.transpose((3, 0, 1, 2)) # cthw + results['imgs'] = t_imgs + else: + t_imgs = np.stack(imgs).astype('float32') + if self.transpose: + if self.data_format == 'tchw': + t_imgs = t_imgs.transpose(0, 3, 1, 2) # tchw + else: + t_imgs = t_imgs.transpose(3, 0, 1, 2) # cthw + results['imgs'] = t_imgs + return results + + +class VideoDecoder(object): + """ + Decode mp4 file to frames. + Args: + filepath: the file path of mp4 file + """ + + def __init__(self, + backend='cv2', + mode='train', + sampling_rate=32, + num_seg=8, + num_clips=1, + target_fps=30): + + self.backend = backend + # params below only for TimeSformer + self.mode = mode + self.sampling_rate = sampling_rate + self.num_seg = num_seg + self.num_clips = num_clips + self.target_fps = target_fps + + def __call__(self, results): + """ + Perform mp4 decode operations. + return: + List where each item is a numpy array after decoder. 
+ """ + file_path = results['filename'] + results['format'] = 'video' + results['backend'] = self.backend + + if self.backend == 'cv2': # here + cap = cv2.VideoCapture(file_path) + videolen = int(cap.get(cv2.CAP_PROP_FRAME_COUNT)) + + sampledFrames = [] + for i in range(videolen): + ret, frame = cap.read() + # maybe first frame is empty + if ret == False: + continue + img = frame[:, :, ::-1] + sampledFrames.append(img) + results['frames'] = sampledFrames + results['frames_len'] = len(sampledFrames) + + elif self.backend == 'decord': + container = de.VideoReader(file_path) + frames_len = len(container) + results['frames'] = container + results['frames_len'] = frames_len + else: + raise NotImplementedError + return results + + +class Normalization(object): + """ + Normalization. + Args: + mean(Sequence[float]): mean values of different channels. + std(Sequence[float]): std values of different channels. + tensor_shape(list): size of mean, default [3,1,1]. For slowfast, [1,1,1,3] + """ + + def __init__(self, mean, std, tensor_shape=[3, 1, 1], inplace=False): + if not isinstance(mean, Sequence): + raise TypeError( + 'Mean must be list, tuple or np.ndarray, but got {type(mean)}') + if not isinstance(std, Sequence): + raise TypeError( + 'Std must be list, tuple or np.ndarray, but got {type(std)}') + + self.inplace = inplace + if not inplace: + self.mean = np.array(mean).reshape(tensor_shape).astype(np.float32) + self.std = np.array(std).reshape(tensor_shape).astype(np.float32) + else: + self.mean = np.array(mean, dtype=np.float32) + self.std = np.array(std, dtype=np.float32) + + def __call__(self, results): + """ + Performs normalization operations. + Args: + imgs: Numpy array. + return: + np_imgs: Numpy array after normalization. + """ + + if self.inplace: # default is False + n = len(results['imgs']) + h, w, c = results['imgs'][0].shape + norm_imgs = np.empty((n, h, w, c), dtype=np.float32) + for i, img in enumerate(results['imgs']): + norm_imgs[i] = img + + for img in norm_imgs: # [n,h,w,c] + mean = np.float64(self.mean.reshape(1, -1)) # [1, 3] + stdinv = 1 / np.float64(self.std.reshape(1, -1)) # [1, 3] + cv2.subtract(img, mean, img) + cv2.multiply(img, stdinv, img) + else: + imgs = results['imgs'] + norm_imgs = imgs / 255.0 + norm_imgs -= self.mean + norm_imgs /= self.std + if 'backend' in results and results['backend'] == 'pyav': + norm_imgs = paddle.to_tensor(norm_imgs, dtype=paddle.float32) + results['imgs'] = norm_imgs + return results diff --git a/deploy/pipeline/ppvehicle/rec_word_dict.txt b/deploy/pipeline/ppvehicle/rec_word_dict.txt new file mode 100644 index 0000000000000000000000000000000000000000..84b885d8352226e49b1d5d791b8f43a663e246aa --- /dev/null +++ b/deploy/pipeline/ppvehicle/rec_word_dict.txt @@ -0,0 +1,6623 @@ +' +疗 +绚 +诚 +娇 +溜 +题 +贿 +者 +廖 +更 +纳 +加 +奉 +公 +一 +就 +汴 +计 +与 +路 +房 +原 +妇 +2 +0 +8 +- +7 +其 +> +: +] +, +, +骑 +刈 +全 +消 +昏 +傈 +安 +久 +钟 +嗅 +不 +影 +处 +驽 +蜿 +资 +关 +椤 +地 +瘸 +专 +问 +忖 +票 +嫉 +炎 +韵 +要 +月 +田 +节 +陂 +鄙 +捌 +备 +拳 +伺 +眼 +网 +盎 +大 +傍 +心 +东 +愉 +汇 +蹿 +科 +每 +业 +里 +航 +晏 +字 +平 +录 +先 +1 +3 +彤 +鲶 +产 +稍 +督 +腴 +有 +象 +岳 +注 +绍 +在 +泺 +文 +定 +核 +名 +水 +过 +理 +让 +偷 +率 +等 +这 +发 +” +为 +含 +肥 +酉 +相 +鄱 +七 +编 +猥 +锛 +日 +镀 +蒂 +掰 +倒 +辆 +栾 +栗 +综 +涩 +州 +雌 +滑 +馀 +了 +机 +块 +司 +宰 +甙 +兴 +矽 +抚 +保 +用 +沧 +秩 +如 +收 +息 +滥 +页 +疑 +埠 +! +! 
+姥 +异 +橹 +钇 +向 +下 +跄 +的 +椴 +沫 +国 +绥 +獠 +报 +开 +民 +蜇 +何 +分 +凇 +长 +讥 +藏 +掏 +施 +羽 +中 +讲 +派 +嘟 +人 +提 +浼 +间 +世 +而 +古 +多 +倪 +唇 +饯 +控 +庚 +首 +赛 +蜓 +味 +断 +制 +觉 +技 +替 +艰 +溢 +潮 +夕 +钺 +外 +摘 +枋 +动 +双 +单 +啮 +户 +枇 +确 +锦 +曜 +杜 +或 +能 +效 +霜 +盒 +然 +侗 +电 +晁 +放 +步 +鹃 +新 +杖 +蜂 +吒 +濂 +瞬 +评 +总 +隍 +对 +独 +合 +也 +是 +府 +青 +天 +诲 +墙 +组 +滴 +级 +邀 +帘 +示 +已 +时 +骸 +仄 +泅 +和 +遨 +店 +雇 +疫 +持 +巍 +踮 +境 +只 +亨 +目 +鉴 +崤 +闲 +体 +泄 +杂 +作 +般 +轰 +化 +解 +迂 +诿 +蛭 +璀 +腾 +告 +版 +服 +省 +师 +小 +规 +程 +线 +海 +办 +引 +二 +桧 +牌 +砺 +洄 +裴 +修 +图 +痫 +胡 +许 +犊 +事 +郛 +基 +柴 +呼 +食 +研 +奶 +律 +蛋 +因 +葆 +察 +戏 +褒 +戒 +再 +李 +骁 +工 +貂 +油 +鹅 +章 +啄 +休 +场 +给 +睡 +纷 +豆 +器 +捎 +说 +敏 +学 +会 +浒 +设 +诊 +格 +廓 +查 +来 +霓 +室 +溆 +¢ +诡 +寥 +焕 +舜 +柒 +狐 +回 +戟 +砾 +厄 +实 +翩 +尿 +五 +入 +径 +惭 +喹 +股 +宇 +篝 +| +; +美 +期 +云 +九 +祺 +扮 +靠 +锝 +槌 +系 +企 +酰 +阊 +暂 +蚕 +忻 +豁 +本 +羹 +执 +条 +钦 +H +獒 +限 +进 +季 +楦 +于 +芘 +玖 +铋 +茯 +未 +答 +粘 +括 +样 +精 +欠 +矢 +甥 +帷 +嵩 +扣 +令 +仔 +风 +皈 +行 +支 +部 +蓉 +刮 +站 +蜡 +救 +钊 +汗 +松 +嫌 +成 +可 +. +鹤 +院 +从 +交 +政 +怕 +活 +调 +球 +局 +验 +髌 +第 +韫 +谗 +串 +到 +圆 +年 +米 +/ +* +友 +忿 +检 +区 +看 +自 +敢 +刃 +个 +兹 +弄 +流 +留 +同 +没 +齿 +星 +聆 +轼 +湖 +什 +三 +建 +蛔 +儿 +椋 +汕 +震 +颧 +鲤 +跟 +力 +情 +璺 +铨 +陪 +务 +指 +族 +训 +滦 +鄣 +濮 +扒 +商 +箱 +十 +召 +慷 +辗 +所 +莞 +管 +护 +臭 +横 +硒 +嗓 +接 +侦 +六 +露 +党 +馋 +驾 +剖 +高 +侬 +妪 +幂 +猗 +绺 +骐 +央 +酐 +孝 +筝 +课 +徇 +缰 +门 +男 +西 +项 +句 +谙 +瞒 +秃 +篇 +教 +碲 +罚 +声 +呐 +景 +前 +富 +嘴 +鳌 +稀 +免 +朋 +啬 +睐 +去 +赈 +鱼 +住 +肩 +愕 +速 +旁 +波 +厅 +健 +茼 +厥 +鲟 +谅 +投 +攸 +炔 +数 +方 +击 +呋 +谈 +绩 +别 +愫 +僚 +躬 +鹧 +胪 +炳 +招 +喇 +膨 +泵 +蹦 +毛 +结 +5 +4 +谱 +识 +陕 +粽 +婚 +拟 +构 +且 +搜 +任 +潘 +比 +郢 +妨 +醪 +陀 +桔 +碘 +扎 +选 +哈 +骷 +楷 +亿 +明 +缆 +脯 +监 +睫 +逻 +婵 +共 +赴 +淝 +凡 +惦 +及 +达 +揖 +谩 +澹 +减 +焰 +蛹 +番 +祁 +柏 +员 +禄 +怡 +峤 +龙 +白 +叽 +生 +闯 +起 +细 +装 +谕 +竟 +聚 +钙 +上 +导 +渊 +按 +艾 +辘 +挡 +耒 +盹 +饪 +臀 +记 +邮 +蕙 +受 +各 +医 +搂 +普 +滇 +朗 +茸 +带 +翻 +酚 +( +光 +堤 +墟 +蔷 +万 +幻 +〓 +瑙 +辈 +昧 +盏 +亘 +蛀 +吉 +铰 +请 +子 +假 +闻 +税 +井 +诩 +哨 +嫂 +好 +面 +琐 +校 +馊 +鬣 +缂 +营 +访 +炖 +占 +农 +缀 +否 +经 +钚 +棵 +趟 +张 +亟 +吏 +茶 +谨 +捻 +论 +迸 +堂 +玉 +信 +吧 +瞠 +乡 +姬 +寺 +咬 +溏 +苄 +皿 +意 +赉 +宝 +尔 +钰 +艺 +特 +唳 +踉 +都 +荣 +倚 +登 +荐 +丧 +奇 +涵 +批 +炭 +近 +符 +傩 +感 +道 +着 +菊 +虹 +仲 +众 +懈 +濯 +颞 +眺 +南 +释 +北 +缝 +标 +既 +茗 +整 +撼 +迤 +贲 +挎 +耱 +拒 +某 +妍 +卫 +哇 +英 +矶 +藩 +治 +他 +元 +领 +膜 +遮 +穗 +蛾 +飞 +荒 +棺 +劫 +么 +市 +火 +温 +拈 +棚 +洼 +转 +果 +奕 +卸 +迪 +伸 +泳 +斗 +邡 +侄 +涨 +屯 +萋 +胭 +氡 +崮 +枞 +惧 +冒 +彩 +斜 +手 +豚 +随 +旭 +淑 +妞 +形 +菌 +吲 +沱 +争 +驯 +歹 +挟 +兆 +柱 +传 +至 +包 +内 +响 +临 +红 +功 +弩 +衡 +寂 +禁 +老 +棍 +耆 +渍 +织 +害 +氵 +渑 +布 +载 +靥 +嗬 +虽 +苹 +咨 +娄 +库 +雉 +榜 +帜 +嘲 +套 +瑚 +亲 +簸 +欧 +边 +6 +腿 +旮 +抛 +吹 +瞳 +得 +镓 +梗 +厨 +继 +漾 +愣 +憨 +士 +策 +窑 +抑 +躯 +襟 +脏 +参 +贸 +言 +干 +绸 +鳄 +穷 +藜 +音 +折 +详 +) +举 +悍 +甸 +癌 +黎 +谴 +死 +罩 +迁 +寒 +驷 +袖 +媒 +蒋 +掘 +模 +纠 +恣 +观 +祖 +蛆 +碍 +位 +稿 +主 +澧 +跌 +筏 +京 +锏 +帝 +贴 +证 +糠 +才 +黄 +鲸 +略 +炯 +饱 +四 +出 +园 +犀 +牧 +容 +汉 +杆 +浈 +汰 +瑷 +造 +虫 +瘩 +怪 +驴 +济 +应 +花 +沣 +谔 +夙 +旅 +价 +矿 +以 +考 +s +u +呦 +晒 +巡 +茅 +准 +肟 +瓴 +詹 +仟 +褂 +译 +桌 +混 +宁 +怦 +郑 +抿 +些 +余 +鄂 +饴 +攒 +珑 +群 +阖 +岔 +琨 +藓 +预 +环 +洮 +岌 +宀 +杲 +瀵 +最 +常 +囡 +周 +踊 +女 +鼓 +袭 +喉 +简 +范 +薯 +遐 +疏 +粱 +黜 +禧 +法 +箔 +斤 +遥 +汝 +奥 +直 +贞 +撑 +置 +绱 +集 +她 +馅 +逗 +钧 +橱 +魉 +[ +恙 +躁 +唤 +9 +旺 +膘 +待 +脾 +惫 +购 +吗 +依 +盲 +度 +瘿 +蠖 +俾 +之 +镗 +拇 +鲵 +厝 +簧 +续 +款 +展 +啃 +表 +剔 +品 +钻 +腭 +损 +清 +锶 +统 +涌 +寸 +滨 +贪 +链 +吠 +冈 +伎 +迥 +咏 +吁 +览 +防 +迅 +失 +汾 +阔 +逵 +绀 +蔑 +列 +川 +凭 +努 +熨 +揪 +利 +俱 +绉 +抢 +鸨 +我 +即 +责 +膦 +易 +毓 +鹊 +刹 +玷 +岿 +空 +嘞 +绊 +排 +术 +估 +锷 +违 +们 +苟 +铜 +播 +肘 +件 +烫 +审 +鲂 +广 +像 +铌 +惰 +铟 +巳 +胍 +鲍 +康 +憧 +色 +恢 +想 +拷 +尤 +疳 +知 +S +Y +F +D +A +峄 +裕 +帮 +握 +搔 +氐 +氘 +难 +墒 +沮 +雨 +叁 +缥 +悴 +藐 +湫 +娟 +苑 +稠 +颛 +簇 +后 +阕 +闭 +蕤 +缚 +怎 +佞 +码 +嘤 +蔡 +痊 +舱 +螯 +帕 +赫 +昵 +升 +烬 +岫 +、 +疵 +蜻 +髁 +蕨 +隶 +烛 +械 +丑 +盂 +梁 +强 +鲛 +由 +拘 +揉 +劭 +龟 +撤 +钩 +呕 +孛 +费 +妻 +漂 +求 +阑 +崖 +秤 +甘 +通 +深 +补 +赃 +坎 +床 +啪 +承 +吼 +量 +暇 +钼 +烨 +阂 +擎 +脱 +逮 +称 +P +神 +属 +矗 +华 +届 +狍 +葑 +汹 +育 +患 +窒 +蛰 +佼 +静 +槎 +运 +鳗 +庆 +逝 +曼 +疱 +克 +代 +官 +此 +麸 +耧 +蚌 +晟 +例 +础 +榛 +副 +测 +唰 +缢 +迹 +灬 +霁 +身 +岁 +赭 
+扛 +又 +菡 +乜 +雾 +板 +读 +陷 +徉 +贯 +郁 +虑 +变 +钓 +菜 +圾 +现 +琢 +式 +乐 +维 +渔 +浜 +左 +吾 +脑 +钡 +警 +T +啵 +拴 +偌 +漱 +湿 +硕 +止 +骼 +魄 +积 +燥 +联 +踢 +玛 +则 +窿 +见 +振 +畿 +送 +班 +钽 +您 +赵 +刨 +印 +讨 +踝 +籍 +谡 +舌 +崧 +汽 +蔽 +沪 +酥 +绒 +怖 +财 +帖 +肱 +私 +莎 +勋 +羔 +霸 +励 +哼 +帐 +将 +帅 +渠 +纪 +婴 +娩 +岭 +厘 +滕 +吻 +伤 +坝 +冠 +戊 +隆 +瘁 +介 +涧 +物 +黍 +并 +姗 +奢 +蹑 +掣 +垸 +锴 +命 +箍 +捉 +病 +辖 +琰 +眭 +迩 +艘 +绌 +繁 +寅 +若 +毋 +思 +诉 +类 +诈 +燮 +轲 +酮 +狂 +重 +反 +职 +筱 +县 +委 +磕 +绣 +奖 +晋 +濉 +志 +徽 +肠 +呈 +獐 +坻 +口 +片 +碰 +几 +村 +柿 +劳 +料 +获 +亩 +惕 +晕 +厌 +号 +罢 +池 +正 +鏖 +煨 +家 +棕 +复 +尝 +懋 +蜥 +锅 +岛 +扰 +队 +坠 +瘾 +钬 +@ +卧 +疣 +镇 +譬 +冰 +彷 +频 +黯 +据 +垄 +采 +八 +缪 +瘫 +型 +熹 +砰 +楠 +襁 +箐 +但 +嘶 +绳 +啤 +拍 +盥 +穆 +傲 +洗 +盯 +塘 +怔 +筛 +丿 +台 +恒 +喂 +葛 +永 +¥ +烟 +酒 +桦 +书 +砂 +蚝 +缉 +态 +瀚 +袄 +圳 +轻 +蛛 +超 +榧 +遛 +姒 +奘 +铮 +右 +荽 +望 +偻 +卡 +丶 +氰 +附 +做 +革 +索 +戚 +坨 +桷 +唁 +垅 +榻 +岐 +偎 +坛 +莨 +山 +殊 +微 +骇 +陈 +爨 +推 +嗝 +驹 +澡 +藁 +呤 +卤 +嘻 +糅 +逛 +侵 +郓 +酌 +德 +摇 +※ +鬃 +被 +慨 +殡 +羸 +昌 +泡 +戛 +鞋 +河 +宪 +沿 +玲 +鲨 +翅 +哽 +源 +铅 +语 +照 +邯 +址 +荃 +佬 +顺 +鸳 +町 +霭 +睾 +瓢 +夸 +椁 +晓 +酿 +痈 +咔 +侏 +券 +噎 +湍 +签 +嚷 +离 +午 +尚 +社 +锤 +背 +孟 +使 +浪 +缦 +潍 +鞅 +军 +姹 +驶 +笑 +鳟 +鲁 +》 +孽 +钜 +绿 +洱 +礴 +焯 +椰 +颖 +囔 +乌 +孔 +巴 +互 +性 +椽 +哞 +聘 +昨 +早 +暮 +胶 +炀 +隧 +低 +彗 +昝 +铁 +呓 +氽 +藉 +喔 +癖 +瑗 +姨 +权 +胱 +韦 +堑 +蜜 +酋 +楝 +砝 +毁 +靓 +歙 +锲 +究 +屋 +喳 +骨 +辨 +碑 +武 +鸠 +宫 +辜 +烊 +适 +坡 +殃 +培 +佩 +供 +走 +蜈 +迟 +翼 +况 +姣 +凛 +浔 +吃 +飘 +债 +犟 +金 +促 +苛 +崇 +坂 +莳 +畔 +绂 +兵 +蠕 +斋 +根 +砍 +亢 +欢 +恬 +崔 +剁 +餐 +榫 +快 +扶 +‖ +濒 +缠 +鳜 +当 +彭 +驭 +浦 +篮 +昀 +锆 +秸 +钳 +弋 +娣 +瞑 +夷 +龛 +苫 +拱 +致 +% +嵊 +障 +隐 +弑 +初 +娓 +抉 +汩 +累 +蓖 +" +唬 +助 +苓 +昙 +押 +毙 +破 +城 +郧 +逢 +嚏 +獭 +瞻 +溱 +婿 +赊 +跨 +恼 +璧 +萃 +姻 +貉 +灵 +炉 +密 +氛 +陶 +砸 +谬 +衔 +点 +琛 +沛 +枳 +层 +岱 +诺 +脍 +榈 +埂 +征 +冷 +裁 +打 +蹴 +素 +瘘 +逞 +蛐 +聊 +激 +腱 +萘 +踵 +飒 +蓟 +吆 +取 +咙 +簋 +涓 +矩 +曝 +挺 +揣 +座 +你 +史 +舵 +焱 +尘 +苏 +笈 +脚 +溉 +榨 +诵 +樊 +邓 +焊 +义 +庶 +儋 +蟋 +蒲 +赦 +呷 +杞 +诠 +豪 +还 +试 +颓 +茉 +太 +除 +紫 +逃 +痴 +草 +充 +鳕 +珉 +祗 +墨 +渭 +烩 +蘸 +慕 +璇 +镶 +穴 +嵘 +恶 +骂 +险 +绋 +幕 +碉 +肺 +戳 +刘 +潞 +秣 +纾 +潜 +銮 +洛 +须 +罘 +销 +瘪 +汞 +兮 +屉 +r +林 +厕 +质 +探 +划 +狸 +殚 +善 +煊 +烹 +〒 +锈 +逯 +宸 +辍 +泱 +柚 +袍 +远 +蹋 +嶙 +绝 +峥 +娥 +缍 +雀 +徵 +认 +镱 +谷 += +贩 +勉 +撩 +鄯 +斐 +洋 +非 +祚 +泾 +诒 +饿 +撬 +威 +晷 +搭 +芍 +锥 +笺 +蓦 +候 +琊 +档 +礁 +沼 +卵 +荠 +忑 +朝 +凹 +瑞 +头 +仪 +弧 +孵 +畏 +铆 +突 +衲 +车 +浩 +气 +茂 +悖 +厢 +枕 +酝 +戴 +湾 +邹 +飚 +攘 +锂 +写 +宵 +翁 +岷 +无 +喜 +丈 +挑 +嗟 +绛 +殉 +议 +槽 +具 +醇 +淞 +笃 +郴 +阅 +饼 +底 +壕 +砚 +弈 +询 +缕 +庹 +翟 +零 +筷 +暨 +舟 +闺 +甯 +撞 +麂 +茌 +蔼 +很 +珲 +捕 +棠 +角 +阉 +媛 +娲 +诽 +剿 +尉 +爵 +睬 +韩 +诰 +匣 +危 +糍 +镯 +立 +浏 +阳 +少 +盆 +舔 +擘 +匪 +申 +尬 +铣 +旯 +抖 +赘 +瓯 +居 +ˇ +哮 +游 +锭 +茏 +歌 +坏 +甚 +秒 +舞 +沙 +仗 +劲 +潺 +阿 +燧 +郭 +嗖 +霏 +忠 +材 +奂 +耐 +跺 +砀 +输 +岖 +媳 +氟 +极 +摆 +灿 +今 +扔 +腻 +枝 +奎 +药 +熄 +吨 +话 +q +额 +慑 +嘌 +协 +喀 +壳 +埭 +视 +著 +於 +愧 +陲 +翌 +峁 +颅 +佛 +腹 +聋 +侯 +咎 +叟 +秀 +颇 +存 +较 +罪 +哄 +岗 +扫 +栏 +钾 +羌 +己 +璨 +枭 +霉 +煌 +涸 +衿 +键 +镝 +益 +岢 +奏 +连 +夯 +睿 +冥 +均 +糖 +狞 +蹊 +稻 +爸 +刿 +胥 +煜 +丽 +肿 +璃 +掸 +跚 +灾 +垂 +樾 +濑 +乎 +莲 +窄 +犹 +撮 +战 +馄 +软 +络 +显 +鸢 +胸 +宾 +妲 +恕 +埔 +蝌 +份 +遇 +巧 +瞟 +粒 +恰 +剥 +桡 +博 +讯 +凯 +堇 +阶 +滤 +卖 +斌 +骚 +彬 +兑 +磺 +樱 +舷 +两 +娱 +福 +仃 +差 +找 +桁 +÷ +净 +把 +阴 +污 +戬 +雷 +碓 +蕲 +楚 +罡 +焖 +抽 +妫 +咒 +仑 +闱 +尽 +邑 +菁 +爱 +贷 +沥 +鞑 +牡 +嗉 +崴 +骤 +塌 +嗦 +订 +拮 +滓 +捡 +锻 +次 +坪 +杩 +臃 +箬 +融 +珂 +鹗 +宗 +枚 +降 +鸬 +妯 +阄 +堰 +盐 +毅 +必 +杨 +崃 +俺 +甬 +状 +莘 +货 +耸 +菱 +腼 +铸 +唏 +痤 +孚 +澳 +懒 +溅 +翘 +疙 +杷 +淼 +缙 +骰 +喊 +悉 +砻 +坷 +艇 +赁 +界 +谤 +纣 +宴 +晃 +茹 +归 +饭 +梢 +铡 +街 +抄 +肼 +鬟 +苯 +颂 +撷 +戈 +炒 +咆 +茭 +瘙 +负 +仰 +客 +琉 +铢 +封 +卑 +珥 +椿 +镧 +窨 +鬲 +寿 +御 +袤 +铃 +萎 +砖 +餮 +脒 +裳 +肪 +孕 +嫣 +馗 +嵇 +恳 +氯 +江 +石 +褶 +冢 +祸 +阻 +狈 +羞 +银 +靳 +透 +咳 +叼 +敷 +芷 +啥 +它 +瓤 +兰 +痘 +懊 +逑 +肌 +往 +捺 +坊 +甩 +呻 +〃 +沦 +忘 +膻 +祟 +菅 +剧 +崆 +智 +坯 +臧 +霍 +墅 +攻 +眯 +倘 +拢 +骠 +铐 +庭 +岙 +瓠 +′ +缺 +泥 +迢 +捶 +? +? 
+郏 +喙 +掷 +沌 +纯 +秘 +种 +听 +绘 +固 +螨 +团 +香 +盗 +妒 +埚 +蓝 +拖 +旱 +荞 +铀 +血 +遏 +汲 +辰 +叩 +拽 +幅 +硬 +惶 +桀 +漠 +措 +泼 +唑 +齐 +肾 +念 +酱 +虚 +屁 +耶 +旗 +砦 +闵 +婉 +馆 +拭 +绅 +韧 +忏 +窝 +醋 +葺 +顾 +辞 +倜 +堆 +辋 +逆 +玟 +贱 +疾 +董 +惘 +倌 +锕 +淘 +嘀 +莽 +俭 +笏 +绑 +鲷 +杈 +择 +蟀 +粥 +嗯 +驰 +逾 +案 +谪 +褓 +胫 +哩 +昕 +颚 +鲢 +绠 +躺 +鹄 +崂 +儒 +俨 +丝 +尕 +泌 +啊 +萸 +彰 +幺 +吟 +骄 +苣 +弦 +脊 +瑰 +〈 +诛 +镁 +析 +闪 +剪 +侧 +哟 +框 +螃 +守 +嬗 +燕 +狭 +铈 +缮 +概 +迳 +痧 +鲲 +俯 +售 +笼 +痣 +扉 +挖 +满 +咋 +援 +邱 +扇 +歪 +便 +玑 +绦 +峡 +蛇 +叨 +〖 +泽 +胃 +斓 +喋 +怂 +坟 +猪 +该 +蚬 +炕 +弥 +赞 +棣 +晔 +娠 +挲 +狡 +创 +疖 +铕 +镭 +稷 +挫 +弭 +啾 +翔 +粉 +履 +苘 +哦 +楼 +秕 +铂 +土 +锣 +瘟 +挣 +栉 +习 +享 +桢 +袅 +磨 +桂 +谦 +延 +坚 +蔚 +噗 +署 +谟 +猬 +钎 +恐 +嬉 +雒 +倦 +衅 +亏 +璩 +睹 +刻 +殿 +王 +算 +雕 +麻 +丘 +柯 +骆 +丸 +塍 +谚 +添 +鲈 +垓 +桎 +蚯 +芥 +予 +飕 +镦 +谌 +窗 +醚 +菀 +亮 +搪 +莺 +蒿 +羁 +足 +J +真 +轶 +悬 +衷 +靛 +翊 +掩 +哒 +炅 +掐 +冼 +妮 +l +谐 +稚 +荆 +擒 +犯 +陵 +虏 +浓 +崽 +刍 +陌 +傻 +孜 +千 +靖 +演 +矜 +钕 +煽 +杰 +酗 +渗 +伞 +栋 +俗 +泫 +戍 +罕 +沾 +疽 +灏 +煦 +芬 +磴 +叱 +阱 +榉 +湃 +蜀 +叉 +醒 +彪 +租 +郡 +篷 +屎 +良 +垢 +隗 +弱 +陨 +峪 +砷 +掴 +颁 +胎 +雯 +绵 +贬 +沐 +撵 +隘 +篙 +暖 +曹 +陡 +栓 +填 +臼 +彦 +瓶 +琪 +潼 +哪 +鸡 +摩 +啦 +俟 +锋 +域 +耻 +蔫 +疯 +纹 +撇 +毒 +绶 +痛 +酯 +忍 +爪 +赳 +歆 +嘹 +辕 +烈 +册 +朴 +钱 +吮 +毯 +癜 +娃 +谀 +邵 +厮 +炽 +璞 +邃 +丐 +追 +词 +瓒 +忆 +轧 +芫 +谯 +喷 +弟 +半 +冕 +裙 +掖 +墉 +绮 +寝 +苔 +势 +顷 +褥 +切 +衮 +君 +佳 +嫒 +蚩 +霞 +佚 +洙 +逊 +镖 +暹 +唛 +& +殒 +顶 +碗 +獗 +轭 +铺 +蛊 +废 +恹 +汨 +崩 +珍 +那 +杵 +曲 +纺 +夏 +薰 +傀 +闳 +淬 +姘 +舀 +拧 +卷 +楂 +恍 +讪 +厩 +寮 +篪 +赓 +乘 +灭 +盅 +鞣 +沟 +慎 +挂 +饺 +鼾 +杳 +树 +缨 +丛 +絮 +娌 +臻 +嗳 +篡 +侩 +述 +衰 +矛 +圈 +蚜 +匕 +筹 +匿 +濞 +晨 +叶 +骋 +郝 +挚 +蚴 +滞 +增 +侍 +描 +瓣 +吖 +嫦 +蟒 +匾 +圣 +赌 +毡 +癞 +恺 +百 +曳 +需 +篓 +肮 +庖 +帏 +卿 +驿 +遗 +蹬 +鬓 +骡 +歉 +芎 +胳 +屐 +禽 +烦 +晌 +寄 +媾 +狄 +翡 +苒 +船 +廉 +终 +痞 +殇 +々 +畦 +饶 +改 +拆 +悻 +萄 +£ +瓿 +乃 +訾 +桅 +匮 +溧 +拥 +纱 +铍 +骗 +蕃 +龋 +缬 +父 +佐 +疚 +栎 +醍 +掳 +蓄 +x +惆 +颜 +鲆 +榆 +〔 +猎 +敌 +暴 +谥 +鲫 +贾 +罗 +玻 +缄 +扦 +芪 +癣 +落 +徒 +臾 +恿 +猩 +托 +邴 +肄 +牵 +春 +陛 +耀 +刊 +拓 +蓓 +邳 +堕 +寇 +枉 +淌 +啡 +湄 +兽 +酷 +萼 +碚 +濠 +萤 +夹 +旬 +戮 +梭 +琥 +椭 +昔 +勺 +蜊 +绐 +晚 +孺 +僵 +宣 +摄 +冽 +旨 +萌 +忙 +蚤 +眉 +噼 +蟑 +付 +契 +瓜 +悼 +颡 +壁 +曾 +窕 +颢 +澎 +仿 +俑 +浑 +嵌 +浣 +乍 +碌 +褪 +乱 +蔟 +隙 +玩 +剐 +葫 +箫 +纲 +围 +伐 +决 +伙 +漩 +瑟 +刑 +肓 +镳 +缓 +蹭 +氨 +皓 +典 +畲 +坍 +铑 +檐 +塑 +洞 +倬 +储 +胴 +淳 +戾 +吐 +灼 +惺 +妙 +毕 +珐 +缈 +虱 +盖 +羰 +鸿 +磅 +谓 +髅 +娴 +苴 +唷 +蚣 +霹 +抨 +贤 +唠 +犬 +誓 +逍 +庠 +逼 +麓 +籼 +釉 +呜 +碧 +秧 +氩 +摔 +霄 +穸 +纨 +辟 +妈 +映 +完 +牛 +缴 +嗷 +炊 +恩 +荔 +茆 +掉 +紊 +慌 +莓 +羟 +阙 +萁 +磐 +另 +蕹 +辱 +鳐 +湮 +吡 +吩 +唐 +睦 +垠 +舒 +圜 +冗 +瞿 +溺 +芾 +囱 +匠 +僳 +汐 +菩 +饬 +漓 +黑 +霰 +浸 +濡 +窥 +毂 +蒡 +兢 +驻 +鹉 +芮 +诙 +迫 +雳 +厂 +忐 +臆 +猴 +鸣 +蚪 +栈 +箕 +羡 +渐 +莆 +捍 +眈 +哓 +趴 +蹼 +埕 +嚣 +骛 +宏 +淄 +斑 +噜 +严 +瑛 +垃 +椎 +诱 +压 +庾 +绞 +焘 +廿 +抡 +迄 +棘 +夫 +纬 +锹 +眨 +瞌 +侠 +脐 +竞 +瀑 +孳 +骧 +遁 +姜 +颦 +荪 +滚 +萦 +伪 +逸 +粳 +爬 +锁 +矣 +役 +趣 +洒 +颔 +诏 +逐 +奸 +甭 +惠 +攀 +蹄 +泛 +尼 +拼 +阮 +鹰 +亚 +颈 +惑 +勒 +〉 +际 +肛 +爷 +刚 +钨 +丰 +养 +冶 +鲽 +辉 +蔻 +画 +覆 +皴 +妊 +麦 +返 +醉 +皂 +擀 +〗 +酶 +凑 +粹 +悟 +诀 +硖 +港 +卜 +z +杀 +涕 +± +舍 +铠 +抵 +弛 +段 +敝 +镐 +奠 +拂 +轴 +跛 +袱 +e +t +沉 +菇 +俎 +薪 +峦 +秭 +蟹 +历 +盟 +菠 +寡 +液 +肢 +喻 +染 +裱 +悱 +抱 +氙 +赤 +捅 +猛 +跑 +氮 +谣 +仁 +尺 +辊 +窍 +烙 +衍 +架 +擦 +倏 +璐 +瑁 +币 +楞 +胖 +夔 +趸 +邛 +惴 +饕 +虔 +蝎 +§ +哉 +贝 +宽 +辫 +炮 +扩 +饲 +籽 +魏 +菟 +锰 +伍 +猝 +末 +琳 +哚 +蛎 +邂 +呀 +姿 +鄞 +却 +歧 +仙 +恸 +椐 +森 +牒 +寤 +袒 +婆 +虢 +雅 +钉 +朵 +贼 +欲 +苞 +寰 +故 +龚 +坭 +嘘 +咫 +礼 +硷 +兀 +睢 +汶 +’ +铲 +烧 +绕 +诃 +浃 +钿 +哺 +柜 +讼 +颊 +璁 +腔 +洽 +咐 +脲 +簌 +筠 +镣 +玮 +鞠 +谁 +兼 +姆 +挥 +梯 +蝴 +谘 +漕 +刷 +躏 +宦 +弼 +b +垌 +劈 +麟 +莉 +揭 +笙 +渎 +仕 +嗤 +仓 +配 +怏 +抬 +错 +泯 +镊 +孰 +猿 +邪 +仍 +秋 +鼬 +壹 +歇 +吵 +炼 +< +尧 +射 +柬 +廷 +胧 +霾 +凳 +隋 +肚 +浮 +梦 +祥 +株 +堵 +退 +L +鹫 +跎 +凶 +毽 +荟 +炫 +栩 +玳 +甜 +沂 +鹿 +顽 +伯 +爹 +赔 +蛴 +徐 +匡 +欣 +狰 +缸 +雹 +蟆 +疤 +默 +沤 +啜 +痂 +衣 +禅 +w +i +h +辽 +葳 +黝 +钗 +停 +沽 +棒 +馨 +颌 +肉 +吴 +硫 +悯 +劾 +娈 +马 +啧 +吊 +悌 +镑 +峭 +帆 +瀣 +涉 +咸 +疸 +滋 +泣 +翦 +拙 +癸 +钥 +蜒 ++ +尾 +庄 +凝 +泉 +婢 +渴 +谊 +乞 +陆 +锉 +糊 +鸦 +淮 +I +B +N +晦 +弗 +乔 +庥 +葡 +尻 +席 +橡 +傣 +渣 +拿 +惩 +麋 +斛 +缃 +矮 +蛏 +岘 +鸽 +姐 +膏 +催 +奔 +镒 +喱 +蠡 +摧 +钯 +胤 +柠 +拐 +璋 +鸥 +卢 +荡 +倾 +^ +_ +珀 +逄 +萧 +塾 +掇 +贮 +笆 +聂 +圃 +冲 +嵬 +M +滔 +笕 +值 
+炙 +偶 +蜱 +搐 +梆 +汪 +蔬 +腑 +鸯 +蹇 +敞 +绯 +仨 +祯 +谆 +梧 +糗 +鑫 +啸 +豺 +囹 +猾 +巢 +柄 +瀛 +筑 +踌 +沭 +暗 +苁 +鱿 +蹉 +脂 +蘖 +牢 +热 +木 +吸 +溃 +宠 +序 +泞 +偿 +拜 +檩 +厚 +朐 +毗 +螳 +吞 +媚 +朽 +担 +蝗 +橘 +畴 +祈 +糟 +盱 +隼 +郜 +惜 +珠 +裨 +铵 +焙 +琚 +唯 +咚 +噪 +骊 +丫 +滢 +勤 +棉 +呸 +咣 +淀 +隔 +蕾 +窈 +饨 +挨 +煅 +短 +匙 +粕 +镜 +赣 +撕 +墩 +酬 +馁 +豌 +颐 +抗 +酣 +氓 +佑 +搁 +哭 +递 +耷 +涡 +桃 +贻 +碣 +截 +瘦 +昭 +镌 +蔓 +氚 +甲 +猕 +蕴 +蓬 +散 +拾 +纛 +狼 +猷 +铎 +埋 +旖 +矾 +讳 +囊 +糜 +迈 +粟 +蚂 +紧 +鲳 +瘢 +栽 +稼 +羊 +锄 +斟 +睁 +桥 +瓮 +蹙 +祉 +醺 +鼻 +昱 +剃 +跳 +篱 +跷 +蒜 +翎 +宅 +晖 +嗑 +壑 +峻 +癫 +屏 +狠 +陋 +袜 +途 +憎 +祀 +莹 +滟 +佶 +溥 +臣 +约 +盛 +峰 +磁 +慵 +婪 +拦 +莅 +朕 +鹦 +粲 +裤 +哎 +疡 +嫖 +琵 +窟 +堪 +谛 +嘉 +儡 +鳝 +斩 +郾 +驸 +酊 +妄 +胜 +贺 +徙 +傅 +噌 +钢 +栅 +庇 +恋 +匝 +巯 +邈 +尸 +锚 +粗 +佟 +蛟 +薹 +纵 +蚊 +郅 +绢 +锐 +苗 +俞 +篆 +淆 +膀 +鲜 +煎 +诶 +秽 +寻 +涮 +刺 +怀 +噶 +巨 +褰 +魅 +灶 +灌 +桉 +藕 +谜 +舸 +薄 +搀 +恽 +借 +牯 +痉 +渥 +愿 +亓 +耘 +杠 +柩 +锔 +蚶 +钣 +珈 +喘 +蹒 +幽 +赐 +稗 +晤 +莱 +泔 +扯 +肯 +菪 +裆 +腩 +豉 +疆 +骜 +腐 +倭 +珏 +唔 +粮 +亡 +润 +慰 +伽 +橄 +玄 +誉 +醐 +胆 +龊 +粼 +塬 +陇 +彼 +削 +嗣 +绾 +芽 +妗 +垭 +瘴 +爽 +薏 +寨 +龈 +泠 +弹 +赢 +漪 +猫 +嘧 +涂 +恤 +圭 +茧 +烽 +屑 +痕 +巾 +赖 +荸 +凰 +腮 +畈 +亵 +蹲 +偃 +苇 +澜 +艮 +换 +骺 +烘 +苕 +梓 +颉 +肇 +哗 +悄 +氤 +涠 +葬 +屠 +鹭 +植 +竺 +佯 +诣 +鲇 +瘀 +鲅 +邦 +移 +滁 +冯 +耕 +癔 +戌 +茬 +沁 +巩 +悠 +湘 +洪 +痹 +锟 +循 +谋 +腕 +鳃 +钠 +捞 +焉 +迎 +碱 +伫 +急 +榷 +奈 +邝 +卯 +辄 +皲 +卟 +醛 +畹 +忧 +稳 +雄 +昼 +缩 +阈 +睑 +扌 +耗 +曦 +涅 +捏 +瞧 +邕 +淖 +漉 +铝 +耦 +禹 +湛 +喽 +莼 +琅 +诸 +苎 +纂 +硅 +始 +嗨 +傥 +燃 +臂 +赅 +嘈 +呆 +贵 +屹 +壮 +肋 +亍 +蚀 +卅 +豹 +腆 +邬 +迭 +浊 +} +童 +螂 +捐 +圩 +勐 +触 +寞 +汊 +壤 +荫 +膺 +渌 +芳 +懿 +遴 +螈 +泰 +蓼 +蛤 +茜 +舅 +枫 +朔 +膝 +眙 +避 +梅 +判 +鹜 +璜 +牍 +缅 +垫 +藻 +黔 +侥 +惚 +懂 +踩 +腰 +腈 +札 +丞 +唾 +慈 +顿 +摹 +荻 +琬 +~ +斧 +沈 +滂 +胁 +胀 +幄 +莜 +Z +匀 +鄄 +掌 +绰 +茎 +焚 +赋 +萱 +谑 +汁 +铒 +瞎 +夺 +蜗 +野 +娆 +冀 +弯 +篁 +懵 +灞 +隽 +芡 +脘 +俐 +辩 +芯 +掺 +喏 +膈 +蝈 +觐 +悚 +踹 +蔗 +熠 +鼠 +呵 +抓 +橼 +峨 +畜 +缔 +禾 +崭 +弃 +熊 +摒 +凸 +拗 +穹 +蒙 +抒 +祛 +劝 +闫 +扳 +阵 +醌 +踪 +喵 +侣 +搬 +仅 +荧 +赎 +蝾 +琦 +买 +婧 +瞄 +寓 +皎 +冻 +赝 +箩 +莫 +瞰 +郊 +笫 +姝 +筒 +枪 +遣 +煸 +袋 +舆 +痱 +涛 +母 +〇 +启 +践 +耙 +绲 +盘 +遂 +昊 +搞 +槿 +诬 +纰 +泓 +惨 +檬 +亻 +越 +C +o +憩 +熵 +祷 +钒 +暧 +塔 +阗 +胰 +咄 +娶 +魔 +琶 +钞 +邻 +扬 +杉 +殴 +咽 +弓 +〆 +髻 +】 +吭 +揽 +霆 +拄 +殖 +脆 +彻 +岩 +芝 +勃 +辣 +剌 +钝 +嘎 +甄 +佘 +皖 +伦 +授 +徕 +憔 +挪 +皇 +庞 +稔 +芜 +踏 +溴 +兖 +卒 +擢 +饥 +鳞 +煲 +‰ +账 +颗 +叻 +斯 +捧 +鳍 +琮 +讹 +蛙 +纽 +谭 +酸 +兔 +莒 +睇 +伟 +觑 +羲 +嗜 +宜 +褐 +旎 +辛 +卦 +诘 +筋 +鎏 +溪 +挛 +熔 +阜 +晰 +鳅 +丢 +奚 +灸 +呱 +献 +陉 +黛 +鸪 +甾 +萨 +疮 +拯 +洲 +疹 +辑 +叙 +恻 +谒 +允 +柔 +烂 +氏 +逅 +漆 +拎 +惋 +扈 +湟 +纭 +啕 +掬 +擞 +哥 +忽 +涤 +鸵 +靡 +郗 +瓷 +扁 +廊 +怨 +雏 +钮 +敦 +E +懦 +憋 +汀 +拚 +啉 +腌 +岸 +f +痼 +瞅 +尊 +咀 +眩 +飙 +忌 +仝 +迦 +熬 +毫 +胯 +篑 +茄 +腺 +凄 +舛 +碴 +锵 +诧 +羯 +後 +漏 +汤 +宓 +仞 +蚁 +壶 +谰 +皑 +铄 +棰 +罔 +辅 +晶 +苦 +牟 +闽 +\ +烃 +饮 +聿 +丙 +蛳 +朱 +煤 +涔 +鳖 +犁 +罐 +荼 +砒 +淦 +妤 +黏 +戎 +孑 +婕 +瑾 +戢 +钵 +枣 +捋 +砥 +衩 +狙 +桠 +稣 +阎 +肃 +梏 +诫 +孪 +昶 +婊 +衫 +嗔 +侃 +塞 +蜃 +樵 +峒 +貌 +屿 +欺 +缫 +阐 +栖 +诟 +珞 +荭 +吝 +萍 +嗽 +恂 +啻 +蜴 +磬 +峋 +俸 +豫 +谎 +徊 +镍 +韬 +魇 +晴 +U +囟 +猜 +蛮 +坐 +囿 +伴 +亭 +肝 +佗 +蝠 +妃 +胞 +滩 +榴 +氖 +垩 +苋 +砣 +扪 +馏 +姓 +轩 +厉 +夥 +侈 +禀 +垒 +岑 +赏 +钛 +辐 +痔 +披 +纸 +碳 +“ +坞 +蠓 +挤 +荥 +沅 +悔 +铧 +帼 +蒌 +蝇 +a +p +y +n +g +哀 +浆 +瑶 +凿 +桶 +馈 +皮 +奴 +苜 +佤 +伶 +晗 +铱 +炬 +优 +弊 +氢 +恃 +甫 +攥 +端 +锌 +灰 +稹 +炝 +曙 +邋 +亥 +眶 +碾 +拉 +萝 +绔 +捷 +浍 +腋 +姑 +菖 +凌 +涞 +麽 +锢 +桨 +潢 +绎 +镰 +殆 +锑 +渝 +铬 +困 +绽 +觎 +匈 +糙 +暑 +裹 +鸟 +盔 +肽 +迷 +綦 +『 +亳 +佝 +俘 +钴 +觇 +骥 +仆 +疝 +跪 +婶 +郯 +瀹 +唉 +脖 +踞 +针 +晾 +忒 +扼 +瞩 +叛 +椒 +疟 +嗡 +邗 +肆 +跆 +玫 +忡 +捣 +咧 +唆 +艄 +蘑 +潦 +笛 +阚 +沸 +泻 +掊 +菽 +贫 +斥 +髂 +孢 +镂 +赂 +麝 +鸾 +屡 +衬 +苷 +恪 +叠 +希 +粤 +爻 +喝 +茫 +惬 +郸 +绻 +庸 +撅 +碟 +宄 +妹 +膛 +叮 +饵 +崛 +嗲 +椅 +冤 +搅 +咕 +敛 +尹 +垦 +闷 +蝉 +霎 +勰 +败 +蓑 +泸 +肤 +鹌 +幌 +焦 +浠 +鞍 +刁 +舰 +乙 +竿 +裔 +。 +茵 +函 +伊 +兄 +丨 +娜 +匍 +謇 +莪 +宥 +似 +蝽 +翳 +酪 +翠 +粑 +薇 +祢 +骏 +赠 +叫 +Q +噤 +噻 +竖 +芗 +莠 +潭 +俊 +羿 +耜 +O +郫 +趁 +嗪 +囚 +蹶 +芒 +洁 +笋 +鹑 +敲 +硝 +啶 +堡 +渲 +揩 +』 +携 +宿 +遒 +颍 +扭 +棱 +割 +萜 +蔸 +葵 +琴 +捂 +饰 +衙 +耿 +掠 +募 +岂 +窖 +涟 +蔺 +瘤 +柞 +瞪 +怜 +匹 +距 +楔 +炜 +哆 +秦 +缎 +幼 +茁 +绪 +痨 +恨 +楸 +娅 +瓦 +桩 +雪 +嬴 +伏 +榔 +妥 +铿 +拌 +眠 +雍 +缇 +‘ +卓 +搓 +哌 +觞 +噩 +屈 +哧 +髓 +咦 +巅 +娑 +侑 +淫 +膳 +祝 +勾 +姊 +莴 
+胄 +疃 +薛 +蜷 +胛 +巷 +芙 +芋 +熙 +闰 +勿 +窃 +狱 +剩 +钏 +幢 +陟 +铛 +慧 +靴 +耍 +k +浙 +浇 +飨 +惟 +绗 +祜 +澈 +啼 +咪 +磷 +摞 +诅 +郦 +抹 +跃 +壬 +吕 +肖 +琏 +颤 +尴 +剡 +抠 +凋 +赚 +泊 +津 +宕 +殷 +倔 +氲 +漫 +邺 +涎 +怠 +$ +垮 +荬 +遵 +俏 +叹 +噢 +饽 +蜘 +孙 +筵 +疼 +鞭 +羧 +牦 +箭 +潴 +c +眸 +祭 +髯 +啖 +坳 +愁 +芩 +驮 +倡 +巽 +穰 +沃 +胚 +怒 +凤 +槛 +剂 +趵 +嫁 +v +邢 +灯 +鄢 +桐 +睽 +檗 +锯 +槟 +婷 +嵋 +圻 +诗 +蕈 +颠 +遭 +痢 +芸 +怯 +馥 +竭 +锗 +徜 +恭 +遍 +籁 +剑 +嘱 +苡 +龄 +僧 +桑 +潸 +弘 +澶 +楹 +悲 +讫 +愤 +腥 +悸 +谍 +椹 +呢 +桓 +葭 +攫 +阀 +翰 +躲 +敖 +柑 +郎 +笨 +橇 +呃 +魁 +燎 +脓 +葩 +磋 +垛 +玺 +狮 +沓 +砜 +蕊 +锺 +罹 +蕉 +翱 +虐 +闾 +巫 +旦 +茱 +嬷 +枯 +鹏 +贡 +芹 +汛 +矫 +绁 +拣 +禺 +佃 +讣 +舫 +惯 +乳 +趋 +疲 +挽 +岚 +虾 +衾 +蠹 +蹂 +飓 +氦 +铖 +孩 +稞 +瑜 +壅 +掀 +勘 +妓 +畅 +髋 +W +庐 +牲 +蓿 +榕 +练 +垣 +唱 +邸 +菲 +昆 +婺 +穿 +绡 +麒 +蚱 +掂 +愚 +泷 +涪 +漳 +妩 +娉 +榄 +讷 +觅 +旧 +藤 +煮 +呛 +柳 +腓 +叭 +庵 +烷 +阡 +罂 +蜕 +擂 +猖 +咿 +媲 +脉 +【 +沏 +貅 +黠 +熏 +哲 +烁 +坦 +酵 +兜 +× +潇 +撒 +剽 +珩 +圹 +乾 +摸 +樟 +帽 +嗒 +襄 +魂 +轿 +憬 +锡 +〕 +喃 +皆 +咖 +隅 +脸 +残 +泮 +袂 +鹂 +珊 +囤 +捆 +咤 +误 +徨 +闹 +淙 +芊 +淋 +怆 +囗 +拨 +梳 +渤 +R +G +绨 +蚓 +婀 +幡 +狩 +麾 +谢 +唢 +裸 +旌 +伉 +纶 +裂 +驳 +砼 +咛 +澄 +樨 +蹈 +宙 +澍 +倍 +貔 +操 +勇 +蟠 +摈 +砧 +虬 +够 +缁 +悦 +藿 +撸 +艹 +摁 +淹 +豇 +虎 +榭 +ˉ +吱 +d +° +喧 +荀 +踱 +侮 +奋 +偕 +饷 +犍 +惮 +坑 +璎 +徘 +宛 +妆 +袈 +倩 +窦 +昂 +荏 +乖 +K +怅 +撰 +鳙 +牙 +袁 +酞 +X +痿 +琼 +闸 +雁 +趾 +荚 +虻 +涝 +《 +杏 +韭 +偈 +烤 +绫 +鞘 +卉 +症 +遢 +蓥 +诋 +杭 +荨 +匆 +竣 +簪 +辙 +敕 +虞 +丹 +缭 +咩 +黟 +m +淤 +瑕 +咂 +铉 +硼 +茨 +嶂 +痒 +畸 +敬 +涿 +粪 +窘 +熟 +叔 +嫔 +盾 +忱 +裘 +憾 +梵 +赡 +珙 +咯 +娘 +庙 +溯 +胺 +葱 +痪 +摊 +荷 +卞 +乒 +髦 +寐 +铭 +坩 +胗 +枷 +爆 +溟 +嚼 +羚 +砬 +轨 +惊 +挠 +罄 +竽 +菏 +氧 +浅 +楣 +盼 +枢 +炸 +阆 +杯 +谏 +噬 +淇 +渺 +俪 +秆 +墓 +泪 +跻 +砌 +痰 +垡 +渡 +耽 +釜 +讶 +鳎 +煞 +呗 +韶 +舶 +绷 +鹳 +缜 +旷 +铊 +皱 +龌 +檀 +霖 +奄 +槐 +艳 +蝶 +旋 +哝 +赶 +骞 +蚧 +腊 +盈 +丁 +` +蜚 +矸 +蝙 +睨 +嚓 +僻 +鬼 +醴 +夜 +彝 +磊 +笔 +拔 +栀 +糕 +厦 +邰 +纫 +逭 +纤 +眦 +膊 +馍 +躇 +烯 +蘼 +冬 +诤 +暄 +骶 +哑 +瘠 +」 +臊 +丕 +愈 +咱 +螺 +擅 +跋 +搏 +硪 +谄 +笠 +淡 +嘿 +骅 +谧 +鼎 +皋 +姚 +歼 +蠢 +驼 +耳 +胬 +挝 +涯 +狗 +蒽 +孓 +犷 +凉 +芦 +箴 +铤 +孤 +嘛 +坤 +V +茴 +朦 +挞 +尖 +橙 +诞 +搴 +碇 +洵 +浚 +帚 +蜍 +漯 +柘 +嚎 +讽 +芭 +荤 +咻 +祠 +秉 +跖 +埃 +吓 +糯 +眷 +馒 +惹 +娼 +鲑 +嫩 +讴 +轮 +瞥 +靶 +褚 +乏 +缤 +宋 +帧 +删 +驱 +碎 +扑 +俩 +俄 +偏 +涣 +竹 +噱 +皙 +佰 +渚 +唧 +斡 +# +镉 +刀 +崎 +筐 +佣 +夭 +贰 +肴 +峙 +哔 +艿 +匐 +牺 +镛 +缘 +仡 +嫡 +劣 +枸 +堀 +梨 +簿 +鸭 +蒸 +亦 +稽 +浴 +{ +衢 +束 +槲 +j +阁 +揍 +疥 +棋 +潋 +聪 +窜 +乓 +睛 +插 +冉 +阪 +苍 +搽 +「 +蟾 +螟 +幸 +仇 +樽 +撂 +慢 +跤 +幔 +俚 +淅 +覃 +觊 +溶 +妖 +帛 +侨 +曰 +妾 +泗 +· +: +瀘 +風 +Ë +( +) +∶ +紅 +紗 +瑭 +雲 +頭 +鶏 +財 +許 +• +¥ +樂 +焗 +麗 +— +; +滙 +東 +榮 +繪 +興 +… +門 +業 +π +楊 +國 +顧 +é +盤 +寳 +Λ +龍 +鳳 +島 +誌 +緣 +結 +銭 +萬 +勝 +祎 +璟 +優 +歡 +臨 +時 +購 += +★ +藍 +昇 +鐵 +觀 +勅 +農 +聲 +畫 +兿 +術 +發 +劉 +記 +專 +耑 +園 +書 +壴 +種 +Ο +● +褀 +號 +銀 +匯 +敟 +锘 +葉 +橪 +廣 +進 +蒄 +鑽 +阝 +祙 +貢 +鍋 +豊 +夬 +喆 +團 +閣 +開 +燁 +賓 +館 +酡 +沔 +順 ++ +硚 +劵 +饸 +陽 +車 +湓 +復 +萊 +氣 +軒 +華 +堃 +迮 +纟 +戶 +馬 +學 +裡 +電 +嶽 +獨 +マ +シ +サ +ジ +燘 +袪 +環 +❤ +臺 +灣 +専 +賣 +孖 +聖 +攝 +線 +▪ +α +傢 +俬 +夢 +達 +莊 +喬 +貝 +薩 +劍 +羅 +壓 +棛 +饦 +尃 +璈 +囍 +醫 +G +I +A +# +N +鷄 +髙 +嬰 +啓 +約 +隹 +潔 +賴 +藝 +~ +寶 +籣 +麺 +  +嶺 +√ +義 +網 +峩 +長 +∧ +魚 +機 +構 +② +鳯 +偉 +L +B +㙟 +畵 +鴿 +' +詩 +溝 +嚞 +屌 +藔 +佧 +玥 +蘭 +織 +1 +3 +9 +0 +7 +點 +砭 +鴨 +鋪 +銘 +廳 +弍 +‧ +創 +湯 +坶 +℃ +卩 +骝 +& +烜 +荘 +當 +潤 +扞 +係 +懷 +碶 +钅 +蚨 +讠 +☆ +叢 +爲 +埗 +涫 +塗 +→ +楽 +現 +鯨 +愛 +瑪 +鈺 +忄 +悶 +藥 +飾 +樓 +視 +孬 +ㆍ +燚 +苪 +師 +① +丼 +锽 +│ +韓 +標 +è +兒 +閏 +匋 +張 +漢 +Ü +髪 +會 +閑 +檔 +習 +裝 +の +峯 +菘 +輝 +И +雞 +釣 +億 +浐 +K +O +R +8 +H +E +P +T +W +D +S +C +M +F +姌 +饹 +» +晞 +廰 +ä +嵯 +鷹 +負 +飲 +絲 +冚 +楗 +澤 +綫 +區 +❋ +← +質 +靑 +揚 +③ +滬 +統 +産 +協 +﹑ +乸 +畐 +經 +運 +際 +洺 +岽 +為 +粵 +諾 +崋 +豐 +碁 +ɔ +V +2 +6 +齋 +誠 +訂 +´ +勑 +雙 +陳 +無 +í +泩 +媄 +夌 +刂 +i +c +t +o +r +a +嘢 +耄 +燴 +暃 +壽 +媽 +靈 +抻 +體 +唻 +É +冮 +甹 +鎮 +錦 +ʌ +蜛 +蠄 +尓 +駕 +戀 +飬 +逹 +倫 +貴 +極 +Я +Й +寬 +磚 +嶪 +郎 +職 +| +間 +n +d +剎 +伈 +課 +飛 +橋 +瘊 +№ +譜 +骓 +圗 +滘 +縣 +粿 +咅 +養 +濤 +彳 +® +% +Ⅱ +啰 +㴪 +見 +矞 +薬 +糁 +邨 +鲮 +顔 +罱 +З +選 +話 +贏 +氪 +俵 +競 +瑩 +繡 +枱 +β +綉 +á +獅 +爾 +™ +麵 +戋 +淩 +徳 +個 +劇 +場 +務 +簡 +寵 +h +實 +膠 +轱 +圖 +築 +嘣 +樹 +㸃 +營 +耵 +孫 +饃 +鄺 +飯 +麯 +遠 +輸 +坫 +孃 +乚 
+閃 +鏢 +㎡ +題 +廠 +關 +↑ +爺 +將 +軍 +連 +篦 +覌 +參 +箸 +- +窠 +棽 +寕 +夀 +爰 +歐 +呙 +閥 +頡 +熱 +雎 +垟 +裟 +凬 +勁 +帑 +馕 +夆 +疌 +枼 +馮 +貨 +蒤 +樸 +彧 +旸 +靜 +龢 +暢 +㐱 +鳥 +珺 +鏡 +灡 +爭 +堷 +廚 +Ó +騰 +診 +┅ +蘇 +褔 +凱 +頂 +豕 +亞 +帥 +嘬 +⊥ +仺 +桖 +複 +饣 +絡 +穂 +顏 +棟 +納 +▏ +濟 +親 +設 +計 +攵 +埌 +烺 +ò +頤 +燦 +蓮 +撻 +節 +講 +濱 +濃 +娽 +洳 +朿 +燈 +鈴 +護 +膚 +铔 +過 +補 +Z +U +5 +4 +坋 +闿 +䖝 +餘 +缐 +铞 +貿 +铪 +桼 +趙 +鍊 +[ +㐂 +垚 +菓 +揸 +捲 +鐘 +滏 +𣇉 +爍 +輪 +燜 +鴻 +鮮 +動 +鹞 +鷗 +丄 +慶 +鉌 +翥 +飮 +腸 +⇋ +漁 +覺 +來 +熘 +昴 +翏 +鲱 +圧 +鄉 +萭 +頔 +爐 +嫚 +г +貭 +類 +聯 +幛 +輕 +訓 +鑒 +夋 +锨 +芃 +珣 +䝉 +扙 +嵐 +銷 +處 +ㄱ +語 +誘 +苝 +歸 +儀 +燒 +楿 +內 +粢 +葒 +奧 +麥 +礻 +滿 +蠔 +穵 +瞭 +態 +鱬 +榞 +硂 +鄭 +黃 +煙 +祐 +奓 +逺 +* +瑄 +獲 +聞 +薦 +讀 +這 +樣 +決 +問 +啟 +們 +執 +説 +轉 +單 +隨 +唘 +帶 +倉 +庫 +還 +贈 +尙 +皺 +■ +餅 +產 +○ +∈ +報 +狀 +楓 +賠 +琯 +嗮 +禮 +` +傳 +> +≤ +嗞 +Φ +≥ +換 +咭 +∣ +↓ +曬 +ε +応 +寫 +″ +終 +様 +純 +費 +療 +聨 +凍 +壐 +郵 +ü +黒 +∫ +製 +塊 +調 +軽 +確 +撃 +級 +馴 +Ⅲ +涇 +繹 +數 +碼 +證 +狒 +処 +劑 +< +晧 +賀 +衆 +] +櫥 +兩 +陰 +絶 +對 +鯉 +憶 +◎ +p +e +Y +蕒 +煖 +頓 +測 +試 +鼽 +僑 +碩 +妝 +帯 +≈ +鐡 +舖 +權 +喫 +倆 +ˋ +該 +悅 +ā +俫 +. +f +s +b +m +k +g +u +j +貼 +淨 +濕 +針 +適 +備 +l +/ +給 +謢 +強 +觸 +衛 +與 +⊙ +$ +緯 +變 +⑴ +⑵ +⑶ +㎏ +殺 +∩ +幚 +─ +價 +▲ +離 +ú +ó +飄 +烏 +関 +閟 +﹝ +﹞ +邏 +輯 +鍵 +驗 +訣 +導 +歷 +屆 +層 +▼ +儱 +錄 +熳 +ē +艦 +吋 +錶 +辧 +飼 +顯 +④ +禦 +販 +気 +対 +枰 +閩 +紀 +幹 +瞓 +貊 +淚 +△ +眞 +墊 +Ω +獻 +褲 +縫 +緑 +亜 +鉅 +餠 +{ +} +◆ +蘆 +薈 +█ +◇ +溫 +彈 +晳 +粧 +犸 +穩 +訊 +崬 +凖 +熥 +П +舊 +條 +紋 +圍 +Ⅳ +筆 +尷 +難 +雜 +錯 +綁 +識 +頰 +鎖 +艶 +□ +殁 +殼 +⑧ +├ +▕ +鵬 +ǐ +ō +ǒ +糝 +綱 +▎ +μ +盜 +饅 +醬 +籤 +蓋 +釀 +鹽 +據 +à +ɡ +辦 +◥ +彐 +┌ +婦 +獸 +鲩 +伱 +ī +蒟 +蒻 +齊 +袆 +腦 +寧 +凈 +妳 +煥 +詢 +偽 +謹 +啫 +鯽 +騷 +鱸 +損 +傷 +鎻 +髮 +買 +冏 +儥 +両 +﹢ +∞ +載 +喰 +z +羙 +悵 +燙 +曉 +員 +組 +徹 +艷 +痠 +鋼 +鼙 +縮 +細 +嚒 +爯 +≠ +維 +" +鱻 +壇 +厍 +帰 +浥 +犇 +薡 +軎 +² +應 +醜 +刪 +緻 +鶴 +賜 +噁 +軌 +尨 +镔 +鷺 +槗 +彌 +葚 +濛 +請 +溇 +緹 +賢 +訪 +獴 +瑅 +資 +縤 +陣 +蕟 +栢 +韻 +祼 +恁 +伢 +謝 +劃 +涑 +總 +衖 +踺 +砋 +凉 +籃 +駿 +苼 +瘋 +昽 +紡 +驊 +腎 +﹗ +響 +杋 +剛 +嚴 +禪 +歓 +槍 +傘 +檸 +檫 +炣 +勢 +鏜 +鎢 +銑 +尐 +減 +奪 +惡 +θ +僮 +婭 +臘 +ū +ì +殻 +鉄 +∑ +蛲 +焼 +緖 +續 +紹 +懮 \ No newline at end of file diff --git a/deploy/pipeline/ppvehicle/vehicle_attr.py b/deploy/pipeline/ppvehicle/vehicle_attr.py new file mode 100644 index 0000000000000000000000000000000000000000..82db40b3189571452b36a870049b5c17bdaf554d --- /dev/null +++ b/deploy/pipeline/ppvehicle/vehicle_attr.py @@ -0,0 +1,132 @@ +# Copyright (c) 2022 PaddlePaddle Authors. All Rights Reserved. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. 
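The class that follows decodes the attribute head's raw scores with a fixed layout: indices 0-9 are vehicle colors and indices 10-18 are vehicle types, each picked by argmax and reported only when its top score clears the corresponding threshold. A minimal standalone sketch of that decoding rule (the score values are made up; the label lists mirror the ones defined in the class below):

```
import numpy as np

# Hypothetical raw scores for one vehicle: first 10 entries are color scores,
# the remaining 9 are type scores, matching VehicleAttr's layout.
scores = np.array([0.1, 0.05, 0.02, 0.6, 0.05, 0.03, 0.05, 0.03, 0.04, 0.03,
                   0.7, 0.1, 0.05, 0.03, 0.02, 0.03, 0.03, 0.02, 0.02])

COLOR_LIST = ["yellow", "orange", "green", "gray", "red", "blue", "white",
              "golden", "brown", "black"]
TYPE_LIST = ["sedan", "suv", "van", "hatchback", "mpv", "pickup", "bus",
             "truck", "estate"]

def decode_attr(scores, color_threshold=0.5, type_threshold=0.5):
    color_idx = int(np.argmax(scores[:10]))
    type_idx = int(np.argmax(scores[10:]))
    # A label is reported only if its top score clears the threshold; otherwise "Unknown".
    color = COLOR_LIST[color_idx] if scores[color_idx] >= color_threshold else "Unknown"
    vtype = TYPE_LIST[10 + type_idx - 10] if scores[10 + type_idx] >= type_threshold else "Unknown"
    return color, vtype

print(decode_attr(scores))  # ('gray', 'sedan')
```

Running it prints `('gray', 'sedan')`; raising `color_threshold` above 0.6 would flip the color field to `Unknown`.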
+ +import os +import yaml +import glob + +import cv2 +import numpy as np +import math +import paddle +import sys +from collections.abc import Sequence  # moved to collections.abc; removed from collections in Python 3.10 + +# add deploy path of PaddleDetection to sys.path +parent_path = os.path.abspath(os.path.join(__file__, *(['..'] * 3))) +sys.path.insert(0, parent_path) + +from paddle.inference import Config, create_predictor +from python.utils import argsparser, Timer, get_current_memory_mb +from python.benchmark_utils import PaddleInferBenchmark +from python.infer import Detector, print_arguments +from pipeline.pphuman.attr_infer import AttrDetector + + +class VehicleAttr(AttrDetector): + """ + Args: + model_dir (str): root path of model.pdiparams, model.pdmodel and infer_cfg.yml + device (str): device to run on, one of CPU/GPU/XPU; default is CPU + run_mode (str): inference mode (paddle/trt_fp32/trt_fp16) + batch_size (int): number of images per batch in inference + trt_min_shape (int): min shape for dynamic shape in trt + trt_max_shape (int): max shape for dynamic shape in trt + trt_opt_shape (int): opt shape for dynamic shape in trt + trt_calib_mode (bool): if the model is produced by TensorRT offline quantization + calibration, trt_calib_mode needs to be set to True + cpu_threads (int): number of CPU threads + enable_mkldnn (bool): whether to enable MKLDNN + type_threshold (float): score threshold for vehicle type recognition + color_threshold (float): score threshold for vehicle color recognition + """ + + def __init__(self, + model_dir, + device='CPU', + run_mode='paddle', + batch_size=1, + trt_min_shape=1, + trt_max_shape=1280, + trt_opt_shape=640, + trt_calib_mode=False, + cpu_threads=1, + enable_mkldnn=False, + output_dir='output', + color_threshold=0.5, + type_threshold=0.5): + super(VehicleAttr, self).__init__( + model_dir=model_dir, + device=device, + run_mode=run_mode, + batch_size=batch_size, + trt_min_shape=trt_min_shape, + trt_max_shape=trt_max_shape, + trt_opt_shape=trt_opt_shape, + trt_calib_mode=trt_calib_mode, + cpu_threads=cpu_threads, + enable_mkldnn=enable_mkldnn, + output_dir=output_dir) + self.color_threshold = color_threshold + self.type_threshold = type_threshold + self.result_history = {} + self.color_list = [ + "yellow", "orange", "green", "gray", "red", "blue", "white", + "golden", "brown", "black" + ] + self.type_list = [ + "sedan", "suv", "van", "hatchback", "mpv", "pickup", "bus", "truck", + "estate" + ] + + def postprocess(self, inputs, result): + # postprocess the raw predictor output: indices 0-9 hold color scores, indices 10+ hold type scores + im_results = result['output'] + batch_res = [] + for res in im_results: + res = res.tolist() + attr_res = [] + color_res_str = "Color: " + type_res_str = "Type: " + color_idx = np.argmax(res[:10]) + type_idx = np.argmax(res[10:]) + + if res[color_idx] >= self.color_threshold: + color_res_str += self.color_list[color_idx] + else: + color_res_str += "Unknown" + attr_res.append(color_res_str) + + if res[type_idx + 10] >= self.type_threshold: + type_res_str += self.type_list[type_idx] + else: + type_res_str += "Unknown" + attr_res.append(type_res_str) + + batch_res.append(attr_res) + result = {'output': batch_res} + return result + + +if __name__ == '__main__': + paddle.enable_static() + parser = argsparser() + FLAGS = parser.parse_args() + print_arguments(FLAGS) + FLAGS.device = FLAGS.device.upper() + assert FLAGS.device in ['CPU', 'GPU', 'XPU' + ], "device should be CPU, GPU or XPU" + assert not FLAGS.use_gpu, "use_gpu has been deprecated, please use --device" + + main() diff --git a/deploy/pipeline/ppvehicle/vehicle_plate.py
b/deploy/pipeline/ppvehicle/vehicle_plate.py new file mode 100644 index 0000000000000000000000000000000000000000..cfb831c55cc0a76e6bc26328636e978040054a80 --- /dev/null +++ b/deploy/pipeline/ppvehicle/vehicle_plate.py @@ -0,0 +1,298 @@ +# Copyright (c) 2022 PaddlePaddle Authors. All Rights Reserved. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +import os +import yaml +import glob +from functools import reduce + +import time +import cv2 +import numpy as np +import math +import paddle + +import sys +parent_path = os.path.abspath(os.path.join(__file__, *(['..'] * 3))) +sys.path.insert(0, parent_path) + +from python.infer import get_test_images +from python.preprocess import preprocess, NormalizeImage, Permute, Resize_Mult32 +from pipeline.ppvehicle.vehicle_plateutils import create_predictor, get_infer_gpuid, get_rotate_crop_image, draw_boxes +from pipeline.ppvehicle.vehicleplate_postprocess import build_post_process +from pipeline.cfg_utils import merge_cfg, print_arguments, argsparser + + +class PlateDetector(object): + def __init__(self, args, cfg): + self.args = args + self.pre_process_list = { + 'Resize_Mult32': { + 'limit_side_len': cfg['det_limit_side_len'], + 'limit_type': cfg['det_limit_type'], + }, + 'NormalizeImage': { + 'mean': [0.485, 0.456, 0.406], + 'std': [0.229, 0.224, 0.225], + 'is_scale': True, + }, + 'Permute': {} + } + postprocess_params = {} + postprocess_params['name'] = 'DBPostProcess' + postprocess_params["thresh"] = 0.3 + postprocess_params["box_thresh"] = 0.6 + postprocess_params["max_candidates"] = 1000 + postprocess_params["unclip_ratio"] = 1.5 + postprocess_params["use_dilation"] = False + postprocess_params["score_mode"] = "fast" + + self.postprocess_op = build_post_process(postprocess_params) + self.predictor, self.input_tensor, self.output_tensors, self.config = create_predictor( + args, cfg, 'det') + + def preprocess(self, im_path): + preprocess_ops = [] + for op_type, new_op_info in self.pre_process_list.items(): + preprocess_ops.append(eval(op_type)(**new_op_info)) + + input_im_lst = [] + input_im_info_lst = [] + + im, im_info = preprocess(im_path, preprocess_ops) + input_im_lst.append(im) + input_im_info_lst.append(im_info['im_shape'] / im_info['scale_factor']) + + return np.stack(input_im_lst, axis=0), input_im_info_lst + + def order_points_clockwise(self, pts): + rect = np.zeros((4, 2), dtype="float32") + s = pts.sum(axis=1) + rect[0] = pts[np.argmin(s)] + rect[2] = pts[np.argmax(s)] + diff = np.diff(pts, axis=1) + rect[1] = pts[np.argmin(diff)] + rect[3] = pts[np.argmax(diff)] + return rect + + def clip_det_res(self, points, img_height, img_width): + for pno in range(points.shape[0]): + points[pno, 0] = int(min(max(points[pno, 0], 0), img_width - 1)) + points[pno, 1] = int(min(max(points[pno, 1], 0), img_height - 1)) + return points + + def filter_tag_det_res(self, dt_boxes, image_shape): + img_height, img_width = image_shape[0:2] + dt_boxes_new = [] + for box in dt_boxes: + box = self.order_points_clockwise(box) + box = self.clip_det_res(box, 
img_height, img_width) + rect_width = int(np.linalg.norm(box[0] - box[1])) + rect_height = int(np.linalg.norm(box[0] - box[3])) + if rect_width <= 3 or rect_height <= 3: + continue + dt_boxes_new.append(box) + dt_boxes = np.array(dt_boxes_new) + return dt_boxes + + def filter_tag_det_res_only_clip(self, dt_boxes, image_shape): + img_height, img_width = image_shape[0:2] + dt_boxes_new = [] + for box in dt_boxes: + box = self.clip_det_res(box, img_height, img_width) + dt_boxes_new.append(box) + dt_boxes = np.array(dt_boxes_new) + return dt_boxes + + def predict_image(self, img_list): + st = time.time() + + dt_batch_boxes = [] + for image in img_list: + img, shape_list = self.preprocess(image) + if img is None: + return None, 0 + + self.input_tensor.copy_from_cpu(img) + self.predictor.run() + outputs = [] + for output_tensor in self.output_tensors: + output = output_tensor.copy_to_cpu() + outputs.append(output) + + preds = {} + preds['maps'] = outputs[0] + + #self.predictor.try_shrink_memory() + post_result = self.postprocess_op(preds, shape_list) + # print("post_result length:{}".format(len(post_result))) + + org_shape = image.shape + dt_boxes = post_result[0]['points'] + dt_boxes = self.filter_tag_det_res(dt_boxes, org_shape) + dt_batch_boxes.append(dt_boxes) + + et = time.time() + return dt_batch_boxes, et - st + + +class TextRecognizer(object): + def __init__(self, args, cfg, use_gpu=True): + self.rec_image_shape = cfg['rec_image_shape'] + self.rec_batch_num = cfg['rec_batch_num'] + word_dict_path = cfg['word_dict_path'] + use_space_char = True + + postprocess_params = { + 'name': 'CTCLabelDecode', + "character_dict_path": word_dict_path, + "use_space_char": use_space_char + } + self.postprocess_op = build_post_process(postprocess_params) + self.predictor, self.input_tensor, self.output_tensors, self.config = \ + create_predictor(args, cfg, 'rec') + self.use_onnx = False + + def resize_norm_img(self, img, max_wh_ratio): + imgC, imgH, imgW = self.rec_image_shape + + assert imgC == img.shape[2] + imgW = int((imgH * max_wh_ratio)) + if self.use_onnx: + w = self.input_tensor.shape[3:][0] + if w is not None and w > 0: + imgW = w + + h, w = img.shape[:2] + ratio = w / float(h) + if math.ceil(imgH * ratio) > imgW: + resized_w = imgW + else: + resized_w = int(math.ceil(imgH * ratio)) + resized_image = cv2.resize(img, (resized_w, imgH)) + resized_image = resized_image.astype('float32') + resized_image = resized_image.transpose((2, 0, 1)) / 255 + resized_image -= 0.5 + resized_image /= 0.5 + padding_im = np.zeros((imgC, imgH, imgW), dtype=np.float32) + padding_im[:, :, 0:resized_w] = resized_image + return padding_im + + def predict_text(self, img_list): + img_num = len(img_list) + # Calculate the aspect ratio of all text bars + width_list = [] + for img in img_list: + width_list.append(img.shape[1] / float(img.shape[0])) + # Sorting can speed up the recognition process + indices = np.argsort(np.array(width_list)) + rec_res = [['', 0.0]] * img_num + batch_num = self.rec_batch_num + st = time.time() + for beg_img_no in range(0, img_num, batch_num): + end_img_no = min(img_num, beg_img_no + batch_num) + norm_img_batch = [] + imgC, imgH, imgW = self.rec_image_shape + max_wh_ratio = imgW / imgH + # max_wh_ratio = 0 + for ino in range(beg_img_no, end_img_no): + h, w = img_list[indices[ino]].shape[0:2] + wh_ratio = w * 1.0 / h + max_wh_ratio = max(max_wh_ratio, wh_ratio) + for ino in range(beg_img_no, end_img_no): + norm_img = self.resize_norm_img(img_list[indices[ino]], + max_wh_ratio) + norm_img = 
norm_img[np.newaxis, :] + norm_img_batch.append(norm_img) + norm_img_batch = np.concatenate(norm_img_batch) + norm_img_batch = norm_img_batch.copy() + if self.use_onnx: + input_dict = {} + input_dict[self.input_tensor.name] = norm_img_batch + outputs = self.predictor.run(self.output_tensors, input_dict) + preds = outputs[0] + else: + self.input_tensor.copy_from_cpu(norm_img_batch) + self.predictor.run() + outputs = [] + for output_tensor in self.output_tensors: + output = output_tensor.copy_to_cpu() + outputs.append(output) + if len(outputs) != 1: + preds = outputs + else: + preds = outputs[0] + rec_result = self.postprocess_op(preds) + for rno in range(len(rec_result)): + rec_res[indices[beg_img_no + rno]] = rec_result[rno] + return rec_res, time.time() - st + + +class PlateRecognizer(object): + def __init__(self, args, cfg): + use_gpu = args.device.lower() == "gpu" + self.platedetector = PlateDetector(args, cfg) + self.textrecognizer = TextRecognizer(args, cfg, use_gpu=use_gpu) + + def get_platelicense(self, image_list): + plate_text_list = [] + plateboxes, det_time = self.platedetector.predict_image(image_list) + for idx, boxes_pcar in enumerate(plateboxes): + plate_pcar_list = [] + for box in boxes_pcar: + plate_images = get_rotate_crop_image(image_list[idx], box) + plate_texts = self.textrecognizer.predict_text([plate_images]) + plate_pcar_list.append(plate_texts) + plate_text_list.append(plate_pcar_list) + return self.check_plate(plate_text_list) + + def check_plate(self, text_list): + simcode = [ + '浙', '粤', '京', '津', '冀', '晋', '蒙', '辽', '黑', '沪', '吉', '苏', '皖', + '赣', '鲁', '豫', '鄂', '湘', '桂', '琼', '渝', '川', '贵', '云', '藏', '陕', + '甘', '青', '宁' + ] + plate_all = {"plate": []} + for text_pcar in text_list: + platelicense = "" + for text_info in text_pcar: + text = text_info[0][0][0] + if len(text) > 2 and len(text) < 10: + platelicense = text + plate_all["plate"].append(platelicense) + return plate_all + + +def main(): + cfg = merge_cfg(FLAGS) + print_arguments(cfg) + vehicleplate_cfg = cfg['VEHICLE_PLATE'] + detector = PlateRecognizer(FLAGS, vehicleplate_cfg) + # predict from image + img_list = get_test_images(FLAGS.image_dir, FLAGS.image_file) + for img in img_list: + image = cv2.imread(img) + results = detector.get_platelicense([image]) + print(results) + + +if __name__ == '__main__': + paddle.enable_static() + parser = argsparser() + FLAGS = parser.parse_args() + FLAGS.device = FLAGS.device.upper() + assert FLAGS.device in ['CPU', 'GPU', 'XPU' + ], "device should be CPU, GPU or XPU" + + main() diff --git a/deploy/pipeline/ppvehicle/vehicle_plateutils.py b/deploy/pipeline/ppvehicle/vehicle_plateutils.py new file mode 100644 index 0000000000000000000000000000000000000000..431b647206fe4539f71d45350586dfdb51e2731c --- /dev/null +++ b/deploy/pipeline/ppvehicle/vehicle_plateutils.py @@ -0,0 +1,505 @@ +# Copyright (c) 2022 PaddlePaddle Authors. All Rights Reserved. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. 
+ +import argparse +import os +import sys +import platform +import cv2 +import numpy as np +import paddle +from PIL import Image, ImageDraw, ImageFont +import math +from paddle import inference +import time +import ast + + +def create_predictor(args, cfg, mode): + if mode == "det": + model_dir = cfg['det_model_dir'] + else: + model_dir = cfg['rec_model_dir'] + + if model_dir is None: + print("not find {} model file path {}".format(mode, model_dir)) + sys.exit(0) + + model_file_path = model_dir + "/inference.pdmodel" + params_file_path = model_dir + "/inference.pdiparams" + if not os.path.exists(model_file_path): + raise ValueError("not find model file path {}".format(model_file_path)) + if not os.path.exists(params_file_path): + raise ValueError("not find params file path {}".format( + params_file_path)) + + config = inference.Config(model_file_path, params_file_path) + + batch_size = 1 + + if args.device == "GPU": + gpu_id = get_infer_gpuid() + if gpu_id is None: + print( + "GPU is not found in current device by nvidia-smi. Please check your device or ignore it if run on jetson." + ) + config.enable_use_gpu(500, 0) + + precision_map = { + 'trt_int8': inference.PrecisionType.Int8, + 'trt_fp32': inference.PrecisionType.Float32, + 'trt_fp16': inference.PrecisionType.Half + } + if args.run_mode in precision_map.keys(): + config.enable_tensorrt_engine( + workspace_size=(1 << 25) * batch_size, + max_batch_size=batch_size, + min_subgraph_size=min_subgraph_size, + precision_mode=precision_map[args.run_mode], + use_static=False, + use_calib_mode=trt_calib_mode) + use_dynamic_shape = True + + if mode == "det": + min_input_shape = { + "x": [1, 3, 50, 50], + "conv2d_92.tmp_0": [1, 120, 20, 20], + "conv2d_91.tmp_0": [1, 24, 10, 10], + "conv2d_59.tmp_0": [1, 96, 20, 20], + "nearest_interp_v2_1.tmp_0": [1, 256, 10, 10], + "nearest_interp_v2_2.tmp_0": [1, 256, 20, 20], + "conv2d_124.tmp_0": [1, 256, 20, 20], + "nearest_interp_v2_3.tmp_0": [1, 64, 20, 20], + "nearest_interp_v2_4.tmp_0": [1, 64, 20, 20], + "nearest_interp_v2_5.tmp_0": [1, 64, 20, 20], + "elementwise_add_7": [1, 56, 2, 2], + "nearest_interp_v2_0.tmp_0": [1, 256, 2, 2] + } + max_input_shape = { + "x": [1, 3, 1536, 1536], + "conv2d_92.tmp_0": [1, 120, 400, 400], + "conv2d_91.tmp_0": [1, 24, 200, 200], + "conv2d_59.tmp_0": [1, 96, 400, 400], + "nearest_interp_v2_1.tmp_0": [1, 256, 200, 200], + "conv2d_124.tmp_0": [1, 256, 400, 400], + "nearest_interp_v2_2.tmp_0": [1, 256, 400, 400], + "nearest_interp_v2_3.tmp_0": [1, 64, 400, 400], + "nearest_interp_v2_4.tmp_0": [1, 64, 400, 400], + "nearest_interp_v2_5.tmp_0": [1, 64, 400, 400], + "elementwise_add_7": [1, 56, 400, 400], + "nearest_interp_v2_0.tmp_0": [1, 256, 400, 400] + } + opt_input_shape = { + "x": [1, 3, 640, 640], + "conv2d_92.tmp_0": [1, 120, 160, 160], + "conv2d_91.tmp_0": [1, 24, 80, 80], + "conv2d_59.tmp_0": [1, 96, 160, 160], + "nearest_interp_v2_1.tmp_0": [1, 256, 80, 80], + "nearest_interp_v2_2.tmp_0": [1, 256, 160, 160], + "conv2d_124.tmp_0": [1, 256, 160, 160], + "nearest_interp_v2_3.tmp_0": [1, 64, 160, 160], + "nearest_interp_v2_4.tmp_0": [1, 64, 160, 160], + "nearest_interp_v2_5.tmp_0": [1, 64, 160, 160], + "elementwise_add_7": [1, 56, 40, 40], + "nearest_interp_v2_0.tmp_0": [1, 256, 40, 40] + } + min_pact_shape = { + "nearest_interp_v2_26.tmp_0": [1, 256, 20, 20], + "nearest_interp_v2_27.tmp_0": [1, 64, 20, 20], + "nearest_interp_v2_28.tmp_0": [1, 64, 20, 20], + "nearest_interp_v2_29.tmp_0": [1, 64, 20, 20] + } + max_pact_shape = { + "nearest_interp_v2_26.tmp_0": [1, 256, 
400, 400], + "nearest_interp_v2_27.tmp_0": [1, 64, 400, 400], + "nearest_interp_v2_28.tmp_0": [1, 64, 400, 400], + "nearest_interp_v2_29.tmp_0": [1, 64, 400, 400] + } + opt_pact_shape = { + "nearest_interp_v2_26.tmp_0": [1, 256, 160, 160], + "nearest_interp_v2_27.tmp_0": [1, 64, 160, 160], + "nearest_interp_v2_28.tmp_0": [1, 64, 160, 160], + "nearest_interp_v2_29.tmp_0": [1, 64, 160, 160] + } + min_input_shape.update(min_pact_shape) + max_input_shape.update(max_pact_shape) + opt_input_shape.update(opt_pact_shape) + elif mode == "rec": + imgH = int(cfg['rec_image_shape'][-2]) + min_input_shape = {"x": [1, 3, imgH, 10]} + max_input_shape = {"x": [batch_size, 3, imgH, 2304]} + opt_input_shape = {"x": [batch_size, 3, imgH, 320]} + elif mode == "cls": + min_input_shape = {"x": [1, 3, 48, 10]} + max_input_shape = {"x": [batch_size, 3, 48, 1024]} + opt_input_shape = {"x": [batch_size, 3, 48, 320]} + else: + use_dynamic_shape = False + if use_dynamic_shape: + config.set_trt_dynamic_shape_info( + min_input_shape, max_input_shape, opt_input_shape) + + else: + config.disable_gpu() + if hasattr(args, "cpu_threads"): + config.set_cpu_math_library_num_threads(args.cpu_threads) + else: + # default cpu threads as 10 + config.set_cpu_math_library_num_threads(10) + if args.enable_mkldnn: + # cache 10 different shapes for mkldnn to avoid memory leak + config.set_mkldnn_cache_capacity(10) + config.enable_mkldnn() + if args.run_mode == "fp16": + config.enable_mkldnn_bfloat16() + # enable memory optim + config.enable_memory_optim() + config.disable_glog_info() + config.delete_pass("conv_transpose_eltwiseadd_bn_fuse_pass") + config.delete_pass("matmul_transpose_reshape_fuse_pass") + if mode == 'table': + config.delete_pass("fc_fuse_pass") # not supported for table + config.switch_use_feed_fetch_ops(False) + config.switch_ir_optim(True) + + # create predictor + predictor = inference.create_predictor(config) + input_names = predictor.get_input_names() + for name in input_names: + input_tensor = predictor.get_input_handle(name) + output_tensors = get_output_tensors(cfg, mode, predictor) + return predictor, input_tensor, output_tensors, config + + +def get_output_tensors(cfg, mode, predictor): + output_names = predictor.get_output_names() + output_tensors = [] + output_name = 'softmax_0.tmp_0' + if output_name in output_names: + return [predictor.get_output_handle(output_name)] + else: + for output_name in output_names: + output_tensor = predictor.get_output_handle(output_name) + output_tensors.append(output_tensor) + return output_tensors + + +def get_infer_gpuid(): + sysstr = platform.system() + if sysstr == "Windows": + return 0 + + if not paddle.fluid.core.is_compiled_with_rocm(): + cmd = "env | grep CUDA_VISIBLE_DEVICES" + else: + cmd = "env | grep HIP_VISIBLE_DEVICES" + env_cuda = os.popen(cmd).readlines() + if len(env_cuda) == 0: + return 0 + else: + gpu_id = env_cuda[0].strip().split("=")[1] + return int(gpu_id[0]) + + +def draw_e2e_res(dt_boxes, strs, img_path): + src_im = cv2.imread(img_path) + for box, str in zip(dt_boxes, strs): + box = box.astype(np.int32).reshape((-1, 1, 2)) + cv2.polylines(src_im, [box], True, color=(255, 255, 0), thickness=2) + cv2.putText( + src_im, + str, + org=(int(box[0, 0, 0]), int(box[0, 0, 1])), + fontFace=cv2.FONT_HERSHEY_COMPLEX, + fontScale=0.7, + color=(0, 255, 0), + thickness=1) + return src_im + + +def draw_text_det_res(dt_boxes, img_path): + src_im = cv2.imread(img_path) + for box in dt_boxes: + box = np.array(box).astype(np.int32).reshape(-1, 2) + cv2.polylines(src_im, 
[box], True, color=(255, 255, 0), thickness=2) + return src_im + + +def resize_img(img, input_size=600): + """ + resize img and limit the longest side of the image to input_size + """ + img = np.array(img) + im_shape = img.shape + im_size_max = np.max(im_shape[0:2]) + im_scale = float(input_size) / float(im_size_max) + img = cv2.resize(img, None, None, fx=im_scale, fy=im_scale) + return img + + +def draw_ocr(image, + boxes, + txts=None, + scores=None, + drop_score=0.5, + font_path="./doc/fonts/simfang.ttf"): + """ + Visualize the results of OCR detection and recognition + args: + image(Image|array): RGB image + boxes(list): boxes with shape(N, 4, 2) + txts(list): the texts + scores(list): txxs corresponding scores + drop_score(float): only scores greater than drop_threshold will be visualized + font_path: the path of font which is used to draw text + return(array): + the visualized img + """ + if scores is None: + scores = [1] * len(boxes) + box_num = len(boxes) + for i in range(box_num): + if scores is not None and (scores[i] < drop_score or + math.isnan(scores[i])): + continue + box = np.reshape(np.array(boxes[i]), [-1, 1, 2]).astype(np.int64) + image = cv2.polylines(np.array(image), [box], True, (255, 0, 0), 2) + if txts is not None: + img = np.array(resize_img(image, input_size=600)) + txt_img = text_visual( + txts, + scores, + img_h=img.shape[0], + img_w=600, + threshold=drop_score, + font_path=font_path) + img = np.concatenate([np.array(img), np.array(txt_img)], axis=1) + return img + return image + + +def draw_ocr_box_txt(image, + boxes, + txts, + scores=None, + drop_score=0.5, + font_path="./doc/simfang.ttf"): + h, w = image.height, image.width + img_left = image.copy() + img_right = Image.new('RGB', (w, h), (255, 255, 255)) + + import random + + random.seed(0) + draw_left = ImageDraw.Draw(img_left) + draw_right = ImageDraw.Draw(img_right) + for idx, (box, txt) in enumerate(zip(boxes, txts)): + if scores is not None and scores[idx] < drop_score: + continue + color = (random.randint(0, 255), random.randint(0, 255), + random.randint(0, 255)) + draw_left.polygon(box, fill=color) + draw_right.polygon( + [ + box[0][0], box[0][1], box[1][0], box[1][1], box[2][0], + box[2][1], box[3][0], box[3][1] + ], + outline=color) + box_height = math.sqrt((box[0][0] - box[3][0])**2 + (box[0][1] - box[3][ + 1])**2) + box_width = math.sqrt((box[0][0] - box[1][0])**2 + (box[0][1] - box[1][ + 1])**2) + if box_height > 2 * box_width: + font_size = max(int(box_width * 0.9), 10) + font = ImageFont.truetype(font_path, font_size, encoding="utf-8") + cur_y = box[0][1] + for c in txt: + char_size = font.getsize(c) + draw_right.text( + (box[0][0] + 3, cur_y), c, fill=(0, 0, 0), font=font) + cur_y += char_size[1] + else: + font_size = max(int(box_height * 0.8), 10) + font = ImageFont.truetype(font_path, font_size, encoding="utf-8") + draw_right.text( + [box[0][0], box[0][1]], txt, fill=(0, 0, 0), font=font) + img_left = Image.blend(image, img_left, 0.5) + img_show = Image.new('RGB', (w * 2, h), (255, 255, 255)) + img_show.paste(img_left, (0, 0, w, h)) + img_show.paste(img_right, (w, 0, w * 2, h)) + return np.array(img_show) + + +def str_count(s): + """ + Count the number of Chinese characters, + a single English character and a single number + equal to half the length of Chinese characters. 
+ args: + s(string): the input of string + return(int): + the number of Chinese characters + """ + import string + count_zh = count_pu = 0 + s_len = len(s) + en_dg_count = 0 + for c in s: + if c in string.ascii_letters or c.isdigit() or c.isspace(): + en_dg_count += 1 + elif c.isalpha(): + count_zh += 1 + else: + count_pu += 1 + return s_len - math.ceil(en_dg_count / 2) + + +def text_visual(texts, + scores, + img_h=400, + img_w=600, + threshold=0., + font_path="./doc/simfang.ttf"): + """ + create new blank img and draw txt on it + args: + texts(list): the text will be draw + scores(list|None): corresponding score of each txt + img_h(int): the height of blank img + img_w(int): the width of blank img + font_path: the path of font which is used to draw text + return(array): + """ + if scores is not None: + assert len(texts) == len( + scores), "The number of txts and corresponding scores must match" + + def create_blank_img(): + blank_img = np.ones(shape=[img_h, img_w], dtype=np.int8) * 255 + blank_img[:, img_w - 1:] = 0 + blank_img = Image.fromarray(blank_img).convert("RGB") + draw_txt = ImageDraw.Draw(blank_img) + return blank_img, draw_txt + + blank_img, draw_txt = create_blank_img() + + font_size = 20 + txt_color = (0, 0, 0) + font = ImageFont.truetype(font_path, font_size, encoding="utf-8") + + gap = font_size + 5 + txt_img_list = [] + count, index = 1, 0 + for idx, txt in enumerate(texts): + index += 1 + if scores[idx] < threshold or math.isnan(scores[idx]): + index -= 1 + continue + first_line = True + while str_count(txt) >= img_w // font_size - 4: + tmp = txt + txt = tmp[:img_w // font_size - 4] + if first_line: + new_txt = str(index) + ': ' + txt + first_line = False + else: + new_txt = ' ' + txt + draw_txt.text((0, gap * count), new_txt, txt_color, font=font) + txt = tmp[img_w // font_size - 4:] + if count >= img_h // gap - 1: + txt_img_list.append(np.array(blank_img)) + blank_img, draw_txt = create_blank_img() + count = 0 + count += 1 + if first_line: + new_txt = str(index) + ': ' + txt + ' ' + '%.3f' % (scores[idx]) + else: + new_txt = " " + txt + " " + '%.3f' % (scores[idx]) + draw_txt.text((0, gap * count), new_txt, txt_color, font=font) + # whether add new blank img or not + if count >= img_h // gap - 1 and idx + 1 < len(texts): + txt_img_list.append(np.array(blank_img)) + blank_img, draw_txt = create_blank_img() + count = 0 + count += 1 + txt_img_list.append(np.array(blank_img)) + if len(txt_img_list) == 1: + blank_img = np.array(txt_img_list[0]) + else: + blank_img = np.concatenate(txt_img_list, axis=1) + return np.array(blank_img) + + +def base64_to_cv2(b64str): + import base64 + data = base64.b64decode(b64str.encode('utf8')) + data = np.fromstring(data, np.uint8) + data = cv2.imdecode(data, cv2.IMREAD_COLOR) + return data + + +def draw_boxes(image, boxes, scores=None, drop_score=0.5): + if scores is None: + scores = [1] * len(boxes) + for (box, score) in zip(boxes, scores): + if score < drop_score: + continue + box = np.reshape(np.array(box), [-1, 1, 2]).astype(np.int64) + image = cv2.polylines(np.array(image), [box], True, (255, 0, 0), 2) + return image + + +def get_rotate_crop_image(img, points): + ''' + img_height, img_width = img.shape[0:2] + left = int(np.min(points[:, 0])) + right = int(np.max(points[:, 0])) + top = int(np.min(points[:, 1])) + bottom = int(np.max(points[:, 1])) + img_crop = img[top:bottom, left:right, :].copy() + points[:, 0] = points[:, 0] - left + points[:, 1] = points[:, 1] - top + ''' + assert len(points) == 4, "shape of points must be 4*2" + 
img_crop_width = int( + max( + np.linalg.norm(points[0] - points[1]), + np.linalg.norm(points[2] - points[3]))) + img_crop_height = int( + max( + np.linalg.norm(points[0] - points[3]), + np.linalg.norm(points[1] - points[2]))) + pts_std = np.float32([[0, 0], [img_crop_width, 0], + [img_crop_width, img_crop_height], + [0, img_crop_height]]) + M = cv2.getPerspectiveTransform(points, pts_std) + dst_img = cv2.warpPerspective( + img, + M, (img_crop_width, img_crop_height), + borderMode=cv2.BORDER_REPLICATE, + flags=cv2.INTER_CUBIC) + dst_img_height, dst_img_width = dst_img.shape[0:2] + if dst_img_height * 1.0 / dst_img_width >= 1.5: + dst_img = np.rot90(dst_img) + return dst_img + + +def check_gpu(use_gpu): + if use_gpu and not paddle.is_compiled_with_cuda(): + use_gpu = False + return use_gpu + + +if __name__ == '__main__': + pass diff --git a/deploy/pipeline/ppvehicle/vehicleplate_postprocess.py b/deploy/pipeline/ppvehicle/vehicleplate_postprocess.py new file mode 100644 index 0000000000000000000000000000000000000000..66a00a3410340995a058368abfc333a35f454b66 --- /dev/null +++ b/deploy/pipeline/ppvehicle/vehicleplate_postprocess.py @@ -0,0 +1,296 @@ +# copyright (c) 2022 PaddlePaddle Authors. All Rights Reserve. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +import numpy as np +import paddle +from paddle.nn import functional as F +import re +from shapely.geometry import Polygon +import cv2 +import copy + + +def build_post_process(config, global_config=None): + support_dict = ['DBPostProcess', 'CTCLabelDecode'] + + config = copy.deepcopy(config) + module_name = config.pop('name') + if module_name == "None": + return + if global_config is not None: + config.update(global_config) + assert module_name in support_dict, Exception( + 'post process only support {}'.format(support_dict)) + module_class = eval(module_name)(**config) + return module_class + + +class DBPostProcess(object): + """ + The post process for Differentiable Binarization (DB). 
+ """ + + def __init__(self, + thresh=0.3, + box_thresh=0.7, + max_candidates=1000, + unclip_ratio=2.0, + use_dilation=False, + score_mode="fast", + **kwargs): + self.thresh = thresh + self.box_thresh = box_thresh + self.max_candidates = max_candidates + self.unclip_ratio = unclip_ratio + self.min_size = 3 + self.score_mode = score_mode + assert score_mode in [ + "slow", "fast" + ], "Score mode must be in [slow, fast] but got: {}".format(score_mode) + + self.dilation_kernel = None if not use_dilation else np.array( + [[1, 1], [1, 1]]) + + def boxes_from_bitmap(self, pred, _bitmap, dest_width, dest_height): + ''' + _bitmap: single map with shape (1, H, W), + whose values are binarized as {0, 1} + ''' + + bitmap = _bitmap + height, width = bitmap.shape + + outs = cv2.findContours((bitmap * 255).astype(np.uint8), cv2.RETR_LIST, + cv2.CHAIN_APPROX_SIMPLE) + if len(outs) == 3: + img, contours, _ = outs[0], outs[1], outs[2] + elif len(outs) == 2: + contours, _ = outs[0], outs[1] + + num_contours = min(len(contours), self.max_candidates) + + boxes = [] + scores = [] + for index in range(num_contours): + contour = contours[index] + points, sside = self.get_mini_boxes(contour) + if sside < self.min_size: + continue + points = np.array(points) + if self.score_mode == "fast": + score = self.box_score_fast(pred, points.reshape(-1, 2)) + else: + score = self.box_score_slow(pred, contour) + if self.box_thresh > score: + continue + + box = self.unclip(points).reshape(-1, 1, 2) + box, sside = self.get_mini_boxes(box) + if sside < self.min_size + 2: + continue + box = np.array(box) + + box[:, 0] = np.clip( + np.round(box[:, 0] / width * dest_width), 0, dest_width) + box[:, 1] = np.clip( + np.round(box[:, 1] / height * dest_height), 0, dest_height) + boxes.append(box.astype(np.int16)) + scores.append(score) + return np.array(boxes, dtype=np.int16), scores + + def unclip(self, box): + try: + import pyclipper + except Exception as e: + raise RuntimeError( + 'Unable to use vehicleplate postprocess in PP-Vehicle, please install pyclipper, for example: `pip install pyclipper`, see https://github.com/fonttools/pyclipper' + ) + unclip_ratio = self.unclip_ratio + poly = Polygon(box) + distance = poly.area * unclip_ratio / poly.length + offset = pyclipper.PyclipperOffset() + offset.AddPath(box, pyclipper.JT_ROUND, pyclipper.ET_CLOSEDPOLYGON) + expanded = np.array(offset.Execute(distance)) + return expanded + + def get_mini_boxes(self, contour): + bounding_box = cv2.minAreaRect(contour) + points = sorted(list(cv2.boxPoints(bounding_box)), key=lambda x: x[0]) + + index_1, index_2, index_3, index_4 = 0, 1, 2, 3 + if points[1][1] > points[0][1]: + index_1 = 0 + index_4 = 1 + else: + index_1 = 1 + index_4 = 0 + if points[3][1] > points[2][1]: + index_2 = 2 + index_3 = 3 + else: + index_2 = 3 + index_3 = 2 + + box = [ + points[index_1], points[index_2], points[index_3], points[index_4] + ] + return box, min(bounding_box[1]) + + def box_score_fast(self, bitmap, _box): + ''' + box_score_fast: use bbox mean score as the mean score + ''' + h, w = bitmap.shape[:2] + box = _box.copy() + xmin = np.clip(np.floor(box[:, 0].min()).astype(np.int), 0, w - 1) + xmax = np.clip(np.ceil(box[:, 0].max()).astype(np.int), 0, w - 1) + ymin = np.clip(np.floor(box[:, 1].min()).astype(np.int), 0, h - 1) + ymax = np.clip(np.ceil(box[:, 1].max()).astype(np.int), 0, h - 1) + + mask = np.zeros((ymax - ymin + 1, xmax - xmin + 1), dtype=np.uint8) + box[:, 0] = box[:, 0] - xmin + box[:, 1] = box[:, 1] - ymin + cv2.fillPoly(mask, box.reshape(1, -1, 
2).astype(np.int32), 1) + return cv2.mean(bitmap[ymin:ymax + 1, xmin:xmax + 1], mask)[0] + + def box_score_slow(self, bitmap, contour): + ''' + box_score_slow: use polyon mean score as the mean score + ''' + h, w = bitmap.shape[:2] + contour = contour.copy() + contour = np.reshape(contour, (-1, 2)) + + xmin = np.clip(np.min(contour[:, 0]), 0, w - 1) + xmax = np.clip(np.max(contour[:, 0]), 0, w - 1) + ymin = np.clip(np.min(contour[:, 1]), 0, h - 1) + ymax = np.clip(np.max(contour[:, 1]), 0, h - 1) + + mask = np.zeros((ymax - ymin + 1, xmax - xmin + 1), dtype=np.uint8) + + contour[:, 0] = contour[:, 0] - xmin + contour[:, 1] = contour[:, 1] - ymin + + cv2.fillPoly(mask, contour.reshape(1, -1, 2).astype(np.int32), 1) + return cv2.mean(bitmap[ymin:ymax + 1, xmin:xmax + 1], mask)[0] + + def __call__(self, outs_dict, shape_list): + pred = outs_dict['maps'] + if isinstance(pred, paddle.Tensor): + pred = pred.numpy() + pred = pred[:, 0, :, :] + segmentation = pred > self.thresh + + boxes_batch = [] + for batch_index in range(pred.shape[0]): + src_h, src_w = shape_list[batch_index] + if self.dilation_kernel is not None: + mask = cv2.dilate( + np.array(segmentation[batch_index]).astype(np.uint8), + self.dilation_kernel) + else: + mask = segmentation[batch_index] + boxes, scores = self.boxes_from_bitmap(pred[batch_index], mask, + src_w, src_h) + + boxes_batch.append({'points': boxes}) + return boxes_batch + + +class BaseRecLabelDecode(object): + """ Convert between text-label and text-index """ + + def __init__(self, character_dict_path=None, use_space_char=False): + self.beg_str = "sos" + self.end_str = "eos" + + self.character_str = [] + if character_dict_path is None: + self.character_str = "0123456789abcdefghijklmnopqrstuvwxyz" + dict_character = list(self.character_str) + else: + with open(character_dict_path, "rb") as fin: + lines = fin.readlines() + for line in lines: + line = line.decode('utf-8').strip("\n").strip("\r\n") + self.character_str.append(line) + if use_space_char: + self.character_str.append(" ") + dict_character = list(self.character_str) + + dict_character = self.add_special_char(dict_character) + self.dict = {} + for i, char in enumerate(dict_character): + self.dict[char] = i + self.character = dict_character + + def add_special_char(self, dict_character): + return dict_character + + def decode(self, text_index, text_prob=None, is_remove_duplicate=False): + """ convert text-index into text-label. 
""" + result_list = [] + ignored_tokens = self.get_ignored_tokens() + batch_size = len(text_index) + for batch_idx in range(batch_size): + selection = np.ones(len(text_index[batch_idx]), dtype=bool) + if is_remove_duplicate: + selection[1:] = text_index[batch_idx][1:] != text_index[ + batch_idx][:-1] + for ignored_token in ignored_tokens: + selection &= text_index[batch_idx] != ignored_token + + char_list = [ + self.character[text_id] + for text_id in text_index[batch_idx][selection] + ] + if text_prob is not None: + conf_list = text_prob[batch_idx][selection] + else: + conf_list = [1] * len(selection) + if len(conf_list) == 0: + conf_list = [0] + + text = ''.join(char_list) + result_list.append((text, np.mean(conf_list).tolist())) + return result_list + + def get_ignored_tokens(self): + return [0] # for ctc blank + + +class CTCLabelDecode(BaseRecLabelDecode): + """ Convert between text-label and text-index """ + + def __init__(self, character_dict_path=None, use_space_char=False, + **kwargs): + super(CTCLabelDecode, self).__init__(character_dict_path, + use_space_char) + + def __call__(self, preds, label=None, *args, **kwargs): + if isinstance(preds, tuple) or isinstance(preds, list): + preds = preds[-1] + if isinstance(preds, paddle.Tensor): + preds = preds.numpy() + preds_idx = preds.argmax(axis=2) + preds_prob = preds.max(axis=2) + text = self.decode(preds_idx, preds_prob, is_remove_duplicate=True) + if label is None: + return text + label = self.decode(label) + return text, label + + def add_special_char(self, dict_character): + dict_character = ['blank'] + dict_character + return dict_character diff --git a/deploy/pipeline/tools/clip_video.py b/deploy/pipeline/tools/clip_video.py new file mode 100644 index 0000000000000000000000000000000000000000..fbfb9cd08169b90bc71a436f6a414c4d6d1f480f --- /dev/null +++ b/deploy/pipeline/tools/clip_video.py @@ -0,0 +1,36 @@ +import cv2 + + +def cut_video(video_path, frameToStart, frametoStop, saved_video_path): + cap = cv2.VideoCapture(video_path) + FPS = cap.get(cv2.CAP_PROP_FPS) + + TOTAL_FRAME = int(cap.get(cv2.CAP_PROP_FRAME_COUNT)) # 获取视频总帧数 + + size = (cap.get(cv2.CAP_PROP_FRAME_WIDTH), + cap.get(cv2.CAP_PROP_FRAME_HEIGHT)) + + videoWriter = cv2.VideoWriter( + saved_video_path, + apiPreference=0, + fourcc=cv2.VideoWriter_fourcc(* 'mp4v'), + fps=FPS, + frameSize=(int(size[0]), int(size[1]))) + + COUNT = 0 + while True: + success, frame = cap.read() + if success: + COUNT += 1 + if COUNT <= frametoStop and COUNT > frameToStart: # 选取起始帧 + videoWriter.write(frame) + else: + print("cap.read failed!") + break + if COUNT > frametoStop: + break + + cap.release() + videoWriter.release() + + print(saved_video_path) diff --git a/deploy/pipeline/tools/get_video_info.py b/deploy/pipeline/tools/get_video_info.py new file mode 100644 index 0000000000000000000000000000000000000000..39aa30d81212577666f25d4e14a147113197b1ed --- /dev/null +++ b/deploy/pipeline/tools/get_video_info.py @@ -0,0 +1,71 @@ +import os +import sys +import cv2 +import numpy as np +import argparse + + +def argsparser(): + parser = argparse.ArgumentParser(description=__doc__) + parser.add_argument( + "--video_file", + type=str, + default=None, + help="Path of video file, `video_file` or `camera_id` has a highest priority." + ) + parser.add_argument( + '--region_polygon', + nargs='+', + type=int, + default=[], + help="Clockwise point coords (x0,y0,x1,y1...) of polygon of area when " + "do_break_in_counting. 
Note that only support single-class MOT and " + "the video should be taken by a static camera.") + return parser + + +def get_video_info(video_file, region_polygon): + entrance = [] + assert len(region_polygon + ) % 2 == 0, "region_polygon should be pairs of coords points." + for i in range(0, len(region_polygon), 2): + entrance.append([region_polygon[i], region_polygon[i + 1]]) + + if not os.path.exists(video_file): + print("video path '{}' not exists".format(video_file)) + sys.exit(-1) + capture = cv2.VideoCapture(video_file) + width = int(capture.get(cv2.CAP_PROP_FRAME_WIDTH)) + height = int(capture.get(cv2.CAP_PROP_FRAME_HEIGHT)) + print("video width: %d, height: %d" % (width, height)) + np_masks = np.zeros((height, width, 1), np.uint8) + + entrance = np.array(entrance) + cv2.fillPoly(np_masks, [entrance], 255) + + fps = int(capture.get(cv2.CAP_PROP_FPS)) + frame_count = int(capture.get(cv2.CAP_PROP_FRAME_COUNT)) + print("video fps: %d, frame_count: %d" % (fps, frame_count)) + cnt = 0 + while (1): + ret, frame = capture.read() + cnt += 1 + if cnt == 3: break + + alpha = 0.3 + img = np.array(frame).astype('float32') + mask = np_masks[:, :, 0] + color_mask = [0, 0, 255] + idx = np.nonzero(mask) + color_mask = np.array(color_mask) + img[idx[0], idx[1], :] *= 1.0 - alpha + img[idx[0], idx[1], :] += alpha * color_mask + cv2.imwrite('region_vis.jpg', img) + + +if __name__ == "__main__": + parser = argsparser() + FLAGS = parser.parse_args() + get_video_info(FLAGS.video_file, FLAGS.region_polygon) + + # python get_video_info.py --video_file=demo.mp4 --region_polygon 200 200 400 200 300 400 100 400 diff --git a/deploy/pipeline/tools/split_fight_train_test_dataset.py b/deploy/pipeline/tools/split_fight_train_test_dataset.py new file mode 100644 index 0000000000000000000000000000000000000000..5ca8fce64d00ccefee965c899a0d2b96863ff1dc --- /dev/null +++ b/deploy/pipeline/tools/split_fight_train_test_dataset.py @@ -0,0 +1,80 @@ +import os +import glob +import random +import fnmatch +import re +import sys + +class_id = {"nofight": 0, "fight": 1} + + +def get_list(path, key_func=lambda x: x[-11:], rgb_prefix='img_', level=1): + if level == 1: + frame_folders = glob.glob(os.path.join(path, '*')) + elif level == 2: + frame_folders = glob.glob(os.path.join(path, '*', '*')) + else: + raise ValueError('level can be only 1 or 2') + + def count_files(directory): + lst = os.listdir(directory) + cnt = len(fnmatch.filter(lst, rgb_prefix + '*')) + return cnt + + # check RGB + video_dict = {} + for f in frame_folders: + cnt = count_files(f) + k = key_func(f) + if level == 2: + k = k.split("/")[0] + + video_dict[f] = str(cnt) + " " + str(class_id[k]) + + return video_dict + + +def fight_splits(video_dict, train_percent=0.8): + videos = list(video_dict.keys()) + + train_num = int(len(videos) * train_percent) + + train_list = [] + val_list = [] + + random.shuffle(videos) + + for i in range(train_num): + train_list.append(videos[i] + " " + str(video_dict[videos[i]])) + for i in range(train_num, len(videos)): + val_list.append(videos[i] + " " + str(video_dict[videos[i]])) + + print("train:", len(train_list), ",val:", len(val_list)) + + with open("fight_train_list.txt", "w") as f: + for item in train_list: + f.write(item + "\n") + + with open("fight_val_list.txt", "w") as f: + for item in val_list: + f.write(item + "\n") + + +if __name__ == "__main__": + frame_dir = sys.argv[1] # "rawframes" + level = sys.argv[2] # 2 + train_percent = sys.argv[3] # 0.8 + + if level == 2: + + def key_func(x): + return 
'/'.join(x.split('/')[-2:]) + else: + + def key_func(x): + return x.split('/')[-1] + + video_dict = get_list(frame_dir, key_func=key_func, level=level) + print("number:", len(video_dict)) + + fight_splits(video_dict, train_percent) diff --git a/deploy/pphuman/README.md b/deploy/pphuman/README.md deleted file mode 100644 index 008cb69b170b5e60dbf071c9ad0c6ca9a466e4b8..0000000000000000000000000000000000000000 --- a/deploy/pphuman/README.md +++ /dev/null @@ -1,164 +0,0 @@ -[English](README_en.md) | 简体中文 - -# 实时行人分析 PP-Human - -PP-Human是基于飞桨深度学习框架的业界首个开源的实时行人分析工具,具有功能丰富,应用广泛和部署高效三大优势。PP-Human -支持图片/单镜头视频/多镜头视频多种输入方式,功能覆盖多目标跟踪、属性识别和行为分析。能够广泛应用于智慧交通、智慧社区、工业巡检等领域。支持服务器端部署及TensorRT加速,T4服务器上可达到实时。 - -PP-Human赋能社区智能精细化管理, AIStudio快速上手教程[链接](https://aistudio.baidu.com/aistudio/projectdetail/3679564) - -## 一、环境准备 - -环境要求: PaddleDetection版本 >= release/2.4 或 develop版本 - -PaddlePaddle和PaddleDetection安装 - -``` -# PaddlePaddle CUDA10.1 -python -m pip install paddlepaddle-gpu==2.2.2.post101 -f https://www.paddlepaddle.org.cn/whl/linux/mkl/avx/stable.html - -# PaddlePaddle CPU -python -m pip install paddlepaddle -i https://mirror.baidu.com/pypi/simple - -# 克隆PaddleDetection仓库 -cd -git clone https://github.com/PaddlePaddle/PaddleDetection.git - -# 安装其他依赖 -cd PaddleDetection -pip install -r requirements.txt -``` - -详细安装文档参考[文档](docs/tutorials/INSTALL_cn.md) - -## 二、快速开始 - -### 1. 模型下载 - -PP-Human提供了目标检测、属性识别、行为识别、ReID预训练模型,以实现不同使用场景,用户可以直接下载使用 - -| 任务 | 适用场景 | 精度 | 预测速度(ms) | 预测部署模型 | -| :---------: |:---------: |:--------------- | :-------: | :------: | -| 目标检测 | 图片输入 | mAP: 56.3 | 28.0ms | [下载链接](https://bj.bcebos.com/v1/paddledet/models/pipeline/mot_ppyoloe_l_36e_pipeline.zip) | -| 目标跟踪 | 视频输入 | MOTA: 72.0 | 33.1ms | [下载链接](https://bj.bcebos.com/v1/paddledet/models/pipeline/mot_ppyoloe_l_36e_pipeline.zip) | -| 属性识别 | 图片/视频输入 属性识别 | mA: 94.86 | 单人2ms | [下载链接](https://bj.bcebos.com/v1/paddledet/models/pipeline/strongbaseline_r50_30e_pa100k.zip) | -| 关键点检测 | 视频输入 行为识别 | AP: 87.1 | 单人2.9ms | [下载链接](https://bj.bcebos.com/v1/paddledet/models/pipeline/dark_hrnet_w32_256x192.zip) -| 行为识别 | 视频输入 行为识别 | 准确率: 96.43 | 单人2.7ms | [下载链接](https://bj.bcebos.com/v1/paddledet/models/pipeline/STGCN.zip) | -| ReID | 视频输入 跨镜跟踪 | mAP: 99.7 | - | [下载链接](https://bj.bcebos.com/v1/paddledet/models/pipeline/reid_model.zip) | - -下载模型后,解压至`./output_inference`文件夹 - -**注意:** - -- 模型精度为融合数据集结果,数据集包含开源数据集和企业数据集 -- ReID模型精度为Market1501数据集测试结果 -- 预测速度为T4下,开启TensorRT FP16的效果, 模型预测速度包含数据预处理、模型预测、后处理全流程 - -### 2. 配置文件说明 - -PP-Human相关配置位于```deploy/pphuman/config/infer_cfg.yml```中,存放模型路径,完成不同功能需要设置不同的任务类型 - -功能及任务类型对应表单如下: - -| 输入类型 | 功能 | 任务类型 | 配置项 | -|-------|-------|----------|-----| -| 图片 | 属性识别 | 目标检测 属性识别 | DET ATTR | -| 单镜头视频 | 属性识别 | 多目标跟踪 属性识别 | MOT ATTR | -| 单镜头视频 | 行为识别 | 多目标跟踪 关键点检测 行为识别 | MOT KPT ACTION | - -例如基于视频输入的属性识别,任务类型包含多目标跟踪和属性识别,具体配置如下: - -``` -crop_thresh: 0.5 -attr_thresh: 0.5 -visual: True - -MOT: - model_dir: output_inference/mot_ppyoloe_l_36e_pipeline/ - tracker_config: deploy/pphuman/config/tracker_config.yml - batch_size: 1 - -ATTR: - model_dir: output_inference/strongbaseline_r50_30e_pa100k/ - batch_size: 8 -``` - -**注意:** - -- 如果用户仅需要实现不同任务,可以在命令行中加入 `--enable_attr=True` 或 `--enable_action=True`即可,无需修改配置文件 -- 如果用户仅需要修改模型文件路径,可以在命令行中加入 `--model_dir det=ppyoloe/` 即可,无需修改配置文件,详细说明参考下方参数说明文档 - - -### 3. 
预测部署 - -``` -# 指定配置文件路径和测试图片 -python deploy/pphuman/pipeline.py --config deploy/pphuman/config/infer_cfg.yml --image_file=test_image.jpg --device=gpu - -# 指定配置文件路径和测试视频,完成属性识别 -python deploy/pphuman/pipeline.py --config deploy/pphuman/config/infer_cfg.yml --video_file=test_video.mp4 --device=gpu --enable_attr=True - -# 指定配置文件路径和测试视频,完成行为识别 -python deploy/pphuman/pipeline.py --config deploy/pphuman/config/infer_cfg.yml --video_file=test_video.mp4 --device=gpu --enable_action=True - -# 指定配置文件路径,模型路径和测试视频,完成多目标跟踪 -# 命令行中指定的模型路径优先级高于配置文件 -python deploy/pphuman/pipeline.py --config deploy/pphuman/config/infer_cfg.yml --video_file=test_video.mp4 --device=gpu --model_dir det=ppyoloe/ -``` - -#### 3.1 参数说明 - -| 参数 | 是否必须|含义 | -|-------|-------|----------| -| --config | Yes | 配置文件路径 | -| --model_dir | Option | PP-Human中各任务模型路径,优先级高于配置文件, 例如`--model_dir det=better_det/ attr=better_attr/`| -| --image_file | Option | 需要预测的图片 | -| --image_dir | Option | 要预测的图片文件夹路径 | -| --video_file | Option | 需要预测的视频 | -| --camera_id | Option | 用来预测的摄像头ID,默认为-1(表示不使用摄像头预测,可设置为:0 - (摄像头数目-1) ),预测过程中在可视化界面按`q`退出输出预测结果到:output/output.mp4| -| --enable_attr| Option | 是否进行属性识别, 默认为False,即不开启属性识别 | -| --enable_action| Option | 是否进行行为识别,默认为False,即不开启行为识别 | -| --device | Option | 运行时的设备,可选择`CPU/GPU/XPU`,默认为`CPU`| -| --output_dir | Option|可视化结果保存的根目录,默认为output/| -| --run_mode | Option |使用GPU时,默认为paddle, 可选(paddle/trt_fp32/trt_fp16/trt_int8)| -| --enable_mkldnn | Option | CPU预测中是否开启MKLDNN加速,默认为False | -| --cpu_threads | Option| 设置cpu线程数,默认为1 | -| --trt_calib_mode | Option| TensorRT是否使用校准功能,默认为False。使用TensorRT的int8功能时,需设置为True,使用PaddleSlim量化后的模型时需要设置为False | -| --do_entrance_counting | Option | 是否统计出入口流量,默认为False | -| --draw_center_traj | Option | 是否绘制跟踪轨迹,默认为False | - -## 三、方案介绍 - -PP-Human整体方案如下图所示 - -
-  [图:PP-Human整体方案示意图]
-### 1. 目标检测 -- 采用PP-YOLOE L 作为目标检测模型 -- 详细文档参考[PP-YOLOE](../../configs/ppyoloe/)和[检测跟踪文档](docs/mot.md) - -### 2. 多目标跟踪 -- 采用SDE方案完成多目标跟踪 -- 检测模型使用PP-YOLOE L -- 跟踪模块采用Bytetrack方案 -- 详细文档参考[Bytetrack](../../configs/mot/bytetrack)和[检测跟踪文档](docs/mot.md) - -### 3. 跨镜跟踪 -- 使用PP-YOLOE + Bytetrack得到单镜头多目标跟踪轨迹 -- 使用ReID(centroid网络)对每一帧的检测结果提取特征 -- 多镜头轨迹特征进行匹配,得到跨镜头跟踪结果 -- 详细文档参考[跨镜跟踪](docs/mtmct.md) - -### 4. 属性识别 -- 使用PP-YOLOE + Bytetrack跟踪人体 -- 使用StrongBaseline(多分类模型)完成识别属性,主要属性包括年龄、性别、帽子、眼睛、上衣下衣款式、背包等 -- 详细文档参考[属性识别](docs/attribute.md) - -### 5. 行为识别: -- 使用PP-YOLOE + Bytetrack跟踪人体 -- 使用HRNet进行关键点检测得到人体17个骨骼点 -- 结合50帧内同一个人骨骼点的变化,通过ST-GCN判断50帧内发生的动作是否为摔倒 -- 详细文档参考[行为识别](docs/action.md) diff --git a/deploy/pphuman/README_en.md deleted file mode 100644 index d45481b4d5f8da088b251997bbe8266146b72c9f..0000000000000000000000000000000000000000 --- a/deploy/pphuman/README_en.md +++ /dev/null @@ -1,156 +0,0 @@ -English | [简体中文](README.md) - -# PP-Human: A Real-Time Pedestrian Analysis Tool - -PP-Human is the first open-source real-time pedestrian analysis tool built on the PaddlePaddle deep learning framework. Versatile and efficient to deploy, it has been used in a variety of scenarios. PP-Human -accepts several input types, including image/single-camera video/multi-camera video, and covers multi-object tracking, attribute recognition, and action recognition. PP-Human can be applied to intelligent traffic, intelligent communities, industrial patrol, and more. It supports server-side deployment and TensorRT acceleration, and can even achieve real-time analysis on a T4 server. - -## I. Environment Preparation - -Requirement: PaddleDetection version >= release/2.4 - - -Install PaddlePaddle and PaddleDetection - -``` -# PaddlePaddle CUDA10.1 -python -m pip install paddlepaddle-gpu==2.2.2.post101 -f https://www.paddlepaddle.org.cn/whl/linux/mkl/avx/stable.html - -# PaddlePaddle CPU -python -m pip install paddlepaddle -i https://mirror.baidu.com/pypi/simple - -# Clone the PaddleDetection repository -cd -git clone https://github.com/PaddlePaddle/PaddleDetection.git - -# Install other dependencies -cd PaddleDetection -pip install -r requirements.txt -``` - -For installation details, please refer to this [document](docs/tutorials/INSTALL_cn.md) - -## II. Quick Start - -### 1. Model Download - -To give users access to models for different scenarios, PP-Human provides pre-trained models for object detection, attribute recognition, behavior recognition, and ReID. - -| Task | Scenario | Precision | Inference Speed(FPS) | Model Inference and Deployment | -| :---------: |:---------: |:--------------- | :-------: | :------: | -| Object Detection | Image/Video Input | - | - | [Link](https://bj.bcebos.com/v1/paddledet/models/pipeline/mot_ppyoloe_l_36e_pipeline.zip) | -| Attribute Recognition | Image/Video Input Attribute Recognition | - | - | [Link](https://bj.bcebos.com/v1/paddledet/models/pipeline/strongbaseline_r50_30e_pa100k.tar) | -| Keypoint Detection | Video Input Action Recognition | - | - | [Link](https://bj.bcebos.com/v1/paddledet/models/pipeline/dark_hrnet_w32_256x192.zip) | -| Behavior Recognition | Video Input Behavior Recognition | - | - | [Link](https://bj.bcebos.com/v1/paddledet/models/pipeline/STGCN.zip) | -| ReID | Multi-Target Multi-Camera Tracking | - | - | [Link]() | - -Then, unzip the downloaded models to the folder `./output_inference`.
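If you prefer scripting the download step, a small Python equivalent of the usual fetch-and-unzip (the URL is the detection/tracking entry from the table above; any of the other zip links unpacks the same way):

```
# Fetch one pre-trained model archive and unpack it into ./output_inference.
import io, zipfile, urllib.request

URL = "https://bj.bcebos.com/v1/paddledet/models/pipeline/mot_ppyoloe_l_36e_pipeline.zip"
with urllib.request.urlopen(URL) as resp:
    zipfile.ZipFile(io.BytesIO(resp.read())).extractall("output_inference")
```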
- -**Note: ** - -- The model precision is decided by the fusion of datasets which include open-source datasets and enterprise ones. -- When the inference speed is T4, use TensorRT FP16. - -### 2. Preparation of Configuration Files - -Configuration files of PP-Human are stored in ```deploy/pphuman/config/infer_cfg.yml```. Different tasks are for different functions, so you need to set the task type beforhand. - -Their correspondence is as follows: - -| Input | Function | Task Type | Config | -|-------|-------|----------|-----| -| Image | Attribute Recognition | Object Detection Attribute Recognition | DET ATTR | -| Single-Camera Video | Attribute Recognition | Multi-Object Tracking Attribute Recognition | MOT ATTR | -| Single-Camera Video | Behavior Recognition | Multi-Object Tracking Keypoint Detection Action Recognition | MOT KPT ACTION | - -For example, for the attribute recognition with the video input, its task types contain multi-object tracking and attribute recognition, and the config is: - -``` -crop_thresh: 0.5 -attr_thresh: 0.5 -visual: True - -MOT: - model_dir: output_inference/mot_ppyoloe_l_36e_pipeline/ - tracker_config: deploy/pphuman/config/tracker_config.yml - batch_size: 1 - -ATTR: - model_dir: output_inference/strongbaseline_r50_30e_pa100k/ - batch_size: 8 -``` - - - -### 3. Inference and Deployment - -``` -# Specify the config file path and test images -python deploy/pphuman/pipeline.py --config deploy/pphuman/config/infer_cfg.yml --image_file=test_image.jpg --device=gpu - -# Specify the config file path and test videos,and finish the attribute recognition -python deploy/pphuman/pipeline.py --config deploy/pphuman/config/infer_cfg.yml --video_file=test_video.mp4 --device=gpu --enable_attr=True - -# Specify the config file path and test videos,and finish the Action Recognition -python deploy/pphuman/pipeline.py --config deploy/pphuman/config/infer_cfg.yml --video_file=test_video.mp4 --device=gpu --enable_action=True - -# Specify the config file path, the model path and test videos,and finish the multi-object tracking -# The model path specified on the command line prioritizes over the config file -python deploy/pphuman/pipeline.py --config deploy/pphuman/config/infer_cfg.yml --video_file=test_video.mp4 --device=gpu --model_dir det=ppyoloe/ -``` - -### 3.1 Description of Parameters - -| Parameter | Optional or not| Meaning | -|-------|-------|----------| -| --config | Yes | Config file path | -| --model_dir | Option | the model paths of different tasks in PP-Human, with a priority higher than config files | -| --image_file | Option | Images to-be-predicted | -| --image_dir | Option | The path of folders of to-be-predicted images | -| --video_file | Option | Videos to-be-predicted | -| --camera_id | Option | ID of the inference camera is -1 by default (means inference without cameras,and it can be set to 0 - (number of cameras-1)), and during the inference, click `q` on the visual interface to exit and output the inference result to output/output.mp4| -| --enable_attr| Option | Enable attribute recognition or not | -| --enable_action| Option | Enable action recognition or not | -| --device | Option | During the operation,available devices are `CPU/GPU/XPU`,and the default is `CPU`| -| --output_dir | Option| The default root directory which stores the visualization result is output/| -| --run_mode | Option | When using GPU,the default one is paddle, and all these are available(paddle/trt_fp32/trt_fp16/trt_int8).| -| --enable_mkldnn | Option |Enable the MKLDNN acceleration or 
-
-### 3. Inference and Deployment
-
-```
-# Specify the config file path and the test image
-python deploy/pphuman/pipeline.py --config deploy/pphuman/config/infer_cfg.yml --image_file=test_image.jpg --device=gpu
-
-# Specify the config file path and the test video, and run attribute recognition
-python deploy/pphuman/pipeline.py --config deploy/pphuman/config/infer_cfg.yml --video_file=test_video.mp4 --device=gpu --enable_attr=True
-
-# Specify the config file path and the test video, and run action recognition
-python deploy/pphuman/pipeline.py --config deploy/pphuman/config/infer_cfg.yml --video_file=test_video.mp4 --device=gpu --enable_action=True
-
-# Specify the config file path, the model path, and the test video, and run multi-object tracking
-# The model path specified on the command line takes priority over the one in the config file
-python deploy/pphuman/pipeline.py --config deploy/pphuman/config/infer_cfg.yml --video_file=test_video.mp4 --device=gpu --model_dir det=ppyoloe/
-```
-
-### 3.1 Description of Parameters
-
-| Parameter | Required | Meaning |
-|-------|-------|----------|
-| --config | Yes | Config file path |
-| --model_dir | Optional | The model paths of different tasks in PP-Human, with a priority higher than the config file |
-| --image_file | Optional | The image to be predicted |
-| --image_dir | Optional | The path of the folder of images to be predicted |
-| --video_file | Optional | The video to be predicted |
-| --camera_id | Optional | The ID of the camera used for inference, -1 by default (i.e. inference without a camera); it can be set to 0 - (number of cameras - 1). During inference, press `q` in the visualization window to exit and save the result to output/output.mp4 |
-| --enable_attr | Optional | Whether to enable attribute recognition |
-| --enable_action | Optional | Whether to enable action recognition |
-| --device | Optional | The device to run on, one of `CPU/GPU/XPU`, `CPU` by default |
-| --output_dir | Optional | The root directory for the visualization results, output/ by default |
-| --run_mode | Optional | The run mode when using GPU, one of paddle/trt_fp32/trt_fp16/trt_int8, paddle by default |
-| --enable_mkldnn | Optional | Whether to enable MKLDNN acceleration for CPU inference, False by default |
-| --cpu_threads | Optional | The number of CPU threads, 1 by default |
-| --trt_calib_mode | Optional | Whether TensorRT runs in calibration mode, False by default. Set it to True when using the int8 mode of TensorRT, and to False when using a model quantized by PaddleSlim |
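As a combined illustration of the flags above (assuming the demo video and the exported models are already in place), attribute recognition could be run with TensorRT FP16 acceleration and an explicit output directory like this:

```
python deploy/pphuman/pipeline.py \
    --config deploy/pphuman/config/infer_cfg.yml \
    --video_file=test_video.mp4 \
    --device=gpu \
    --run_mode=trt_fp16 \
    --enable_attr=True \
    --output_dir=output/
```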
-
-
-## III. Introduction to the Solution
-
-The overall solution of PP-Human is as follows:
-
-[overall PP-Human architecture diagram]
-
-### 1. Object Detection
-- Use PP-YOLOE L as the object detection model
-- For details, please refer to [PP-YOLOE](../../configs/ppyoloe/)
-
-### 2. Multi-Object Tracking
-- Conduct multi-object tracking with the SDE solution
-- Use PP-YOLOE L as the detection model
-- Use ByteTrack for the tracking module
-- For details, refer to [ByteTrack](configs/mot/bytetrack)
-
-### 3. Cross-Camera Tracking
-- Use PP-YOLOE + ByteTrack to obtain single-camera multi-object tracks
-- Use ReID (a centroid network) to extract features from the detection results of each frame
-- Match the features of tracks across cameras to get the cross-camera tracking result
-- For details, please refer to [Cross-Camera Tracking](docs/mtmct_en.md)
-
-### 4. Attribute Recognition
-- Use PP-YOLOE + ByteTrack to track humans
-- Use StrongBaseline (a multi-class model) for attribute recognition; the main attributes include age, gender, hat, glasses, clothing, and backpack
-- For details, please refer to [Attribute Recognition](docs/attribute_en.md)
-
-### 5. Action Recognition
-- Use PP-YOLOE + ByteTrack to track humans
-- Use HRNet for keypoint detection to obtain the 17 key points of the human body
-- Based on how the key points of the same person change across 50 frames, use ST-GCN to judge whether the action within those 50 frames is a fall
-- For details, please refer to [Action Recognition](docs/action_en.md)
diff --git a/deploy/pphuman/config/infer_cfg.yml b/deploy/pphuman/config/infer_cfg.yml
deleted file mode 100644
index 0d4de94c2bfec0b05db1a90691528808d051bc28..0000000000000000000000000000000000000000
--- a/deploy/pphuman/config/infer_cfg.yml
+++ /dev/null
@@ -1,33 +0,0 @@
-crop_thresh: 0.5
-attr_thresh: 0.5
-kpt_thresh: 0.2
-visual: True
-warmup_frame: 50
-
-DET:
-  model_dir: output_inference/mot_ppyoloe_l_36e_pipeline/
-  batch_size: 1
-
-ATTR:
-  model_dir: output_inference/strongbaseline_r50_30e_pa100k/
-  batch_size: 8
-
-MOT:
-  model_dir: output_inference/mot_ppyoloe_l_36e_pipeline/
-  tracker_config: deploy/pphuman/config/tracker_config.yml
-  batch_size: 1
-
-KPT:
-  model_dir: output_inference/dark_hrnet_w32_256x192/
-  batch_size: 8
-
-ACTION:
-  model_dir: output_inference/STGCN
-  batch_size: 1
-  max_frames: 50
-  display_frames: 80
-  coord_size: [384, 512]
-
-REID:
-  model_dir: output_inference/reid_model/
-  batch_size: 16
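To connect this config back to the task-type table in section 2 above, the sketch below shows, in simplified form, how the input type and the `--enable_*` switches could map to a task chain. It is illustrative only; the actual dispatch logic lives in `deploy/pphuman/pipeline.py`.

```python
# Illustrative sketch only: map the input type and the --enable_* switches to
# the task chain from the correspondence table; the real dispatch logic lives
# in deploy/pphuman/pipeline.py.
def select_tasks(input_is_video, enable_attr=False, enable_action=False):
    if not input_is_video:
        return ["DET", "ATTR"]       # image input: detection + attributes
    tasks = ["MOT"]                  # video input always starts with tracking
    if enable_attr:
        tasks.append("ATTR")         # MOT + attribute recognition
    if enable_action:
        tasks += ["KPT", "ACTION"]   # MOT + keypoints + action recognition
    return tasks

assert select_tasks(False, enable_attr=True) == ["DET", "ATTR"]
assert select_tasks(True, enable_action=True) == ["MOT", "KPT", "ACTION"]
```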
diff --git a/deploy/pphuman/docs/action.md b/deploy/pphuman/docs/action.md
deleted file mode 100644
index 82d1ac0e3a40ccb3745460d27ec94df425616033..0000000000000000000000000000000000000000
--- a/deploy/pphuman/docs/action.md
+++ /dev/null
@@ -1,76 +0,0 @@
-# PP-Human Action Recognition Module
-
-Action recognition is widely used in smart communities, security surveillance, and similar scenarios. PP-Human integrates a skeleton-based action recognition module.
-
-[demo GIF: fall recognition on a real-scene video]
-Data source and copyright: 天覆科技 (Tianfu Technology). Thanks for providing and open-sourcing the real-scene data, which is for academic research use only.
-
-## Model Zoo
-
-Here we provide pretrained models for detection/tracking, keypoint recognition, and fall recognition, which can be downloaded and used directly.
-
-| Task | Algorithm | Precision | Inference Speed (ms) | Download |
-|:---------------------|:---------:|:------:|:------:| :---------------------------------------------------------------------------------: |
-| Pedestrian detection/tracking | PP-YOLOE | mAP: 56.3<br>MOTA: 72.0 | Detection: 28ms<br>Tracking: 33.1ms | [Download](https://bj.bcebos.com/v1/paddledet/models/pipeline/mot_ppyoloe_l_36e_pipeline.zip) |
-| Keypoint recognition | HRNet | AP: 87.1 | 2.9ms per person | [Download](https://bj.bcebos.com/v1/paddledet/models/pipeline/dark_hrnet_w32_256x192.zip) |
-| Action recognition | ST-GCN | Accuracy: 96.43 | 2.7ms per person | [Download](https://bj.bcebos.com/v1/paddledet/models/pipeline/STGCN.zip) |
-
-Notes:
-1. The precision of the detection/tracking model comes from training and testing on a fusion of MOT17, CrowdHuman, HIEVE, and some business datasets.
-2. The keypoint model is trained on a fusion of COCO, UAVHuman, and some business datasets, and its precision is evaluated on the business test set.
-3. The action recognition model is trained on a fusion of NTU-RGB+D, the UR Fall Detection Dataset, and some business datasets, and its precision is evaluated on the business test set.
-4. The inference speed is measured on an NVIDIA T4 with TensorRT FP16 and covers the whole pipeline: data preprocessing, model inference, and postprocessing.
-
-## Configuration
-The parameters related to action recognition in the [config file](../config/infer_cfg.yml) are as follows:
-```
-ACTION:
-  model_dir: output_inference/STGCN # path of the model
-  batch_size: 1 # batch size for inference; currently only 1 is supported
-  max_frames: 50 # number of frames in an action clip; once the keypoint sequence of a tracked person reaches this length, the action recognition model classifies the clip. Best when consistent with the training setting.
-  display_frames: 80 # number of display frames; when a fall is predicted, how long the status stays shown on the corresponding person ID.
-  coord_size: [384, 512] # the size that coordinates are uniformly rescaled to. Best when consistent with the training setting.
-```
-
-## How to Use
-1. Download the models from the links in the table above and unzip them to ```./output_inference```.
-2. Currently the action recognition module only supports video input. Start the pipeline with:
-```bash
-python deploy/pphuman/pipeline.py --config deploy/pphuman/config/infer_cfg.yml \
-                                  --video_file=test_video.mp4 \
-                                  --device=gpu \
-                                  --enable_action=True
-```
-3. There are two ways to change the model paths:
-
-    - Configure the paths in ```./deploy/pphuman/config/infer_cfg.yml```: the keypoint model and the action recognition model correspond to the `KPT` and `ACTION` fields respectively, so change the paths under those fields to the desired ones.
-    - Add `--model_dir` on the command line to override the model paths:
-```bash
-python deploy/pphuman/pipeline.py --config deploy/pphuman/config/infer_cfg.yml \
-                                  --video_file=test_video.mp4 \
-                                  --device=gpu \
-                                  --enable_action=True \
-                                  --model_dir kpt=./dark_hrnet_w32_256x192 action=./STGCN
-```
-
-## Solution
-1. Use object detection and multi-object tracking to obtain pedestrian bounding boxes and tracking IDs from the video input. The detection model is PP-YOLOE; for details, see [PP-YOLOE](../../../configs/ppyoloe).
-2. Crop each pedestrian from the corresponding frames using the tracked bounding-box coordinates, and use the [keypoint model](../../../configs/keypoint/hrnet/dark_hrnet_w32_256x192.yml) to obtain the 17 skeleton keypoints. Their order and type are consistent with COCO; see the `COCO dataset` part of [How to Prepare Keypoint Datasets](../../../docs/tutorials/PrepareKeypointDataSet_cn.md).
-3. For each tracking ID, the keypoint results are accumulated into a temporal keypoint sequence for that person. When the preset number of frames is reached or the track is lost, the action recognition model classifies the sequence (see the sketch after this list). The current model supports fall recognition, and the predicted `class id` mapping is:
-```
-0: Fall,
-1: Others
-```
-4. The action recognition model is [ST-GCN](https://arxiv.org/abs/1801.07455), trained with the [PaddleVideo](https://github.com/PaddlePaddle/PaddleVideo/blob/develop/docs/zh-CN/model_zoo/recognition/stgcn.md) toolkit.
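As a concrete illustration of step 3 (not PP-Human's actual implementation), the sketch below buffers per-ID keypoint sequences and queries an action classifier once `max_frames` results have accumulated; `classifier` stands in for the exported ST-GCN predictor, and its `predict` method is an assumed interface.

```python
# Illustrative sketch only: buffer the 17-keypoint results per tracking ID and
# classify each clip of max_frames frames. "classifier.predict" is an assumed
# stand-in for the exported ST-GCN predictor, not PP-Human's real API.
from collections import defaultdict, deque

MAX_FRAMES = 50                     # matches ACTION.max_frames in infer_cfg.yml
LABELS = {0: "Fall", 1: "Others"}   # class-id mapping from the doc above

class ActionRecognizer:
    def __init__(self, classifier, max_frames=MAX_FRAMES):
        self.classifier = classifier
        self.seqs = defaultdict(lambda: deque(maxlen=max_frames))
        self.max_frames = max_frames

    def update(self, track_id, keypoints):
        """keypoints: 17 (x, y, score) tuples for one person in one frame."""
        seq = self.seqs[track_id]
        seq.append(keypoints)
        if len(seq) < self.max_frames:
            return None              # not enough frames for a clip yet
        cls_id = self.classifier.predict(list(seq))
        seq.clear()                  # start accumulating the next clip
        return LABELS[cls_id]
```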
-
-## References
-```
-@inproceedings{stgcn2018aaai,
-  title     = {Spatial Temporal Graph Convolutional Networks for Skeleton-Based Action Recognition},
-  author    = {Sijie Yan and Yuanjun Xiong and Dahua Lin},
-  booktitle = {AAAI},
-  year      = {2018},
-}
-```
diff --git a/deploy/pptracking/README_cn.md b/deploy/pptracking/README_cn.md
index 4330e114205598e41282c9c937c4433e54487a81..cf25c9973feacf5458582cb7387c8377c894be8c 100644
--- a/deploy/pptracking/README_cn.md
+++ b/deploy/pptracking/README_cn.md
@@ -36,7 +36,7 @@ PP-Tracking provides a concise GUI; for the tutorial, refer to [PP-Tracking
 PP-Tracking supports two modes: single-camera tracking (MOT) and cross-camera tracking (MTMCT).
 - Single-camera tracking supports both the **FairMOT** and **DeepSORT** multi-object tracking algorithms, while cross-camera tracking only supports **DeepSORT**.
 - Single-camera tracking covers pedestrian tracking, vehicle tracking, multi-class tracking, small-object tracking, and flow counting. The models are mainly optimized based on FairMOT to achieve real-time tracking, and targeted pretrained models are provided for different application scenarios.
-- The DeepSORT solution (including the DeepSORT used in cross-camera tracking) uses PaddleDetection's self-developed high-performance detector [PP-YOLOv2](../../ppyolo/) and lightweight detector [PP-PicoDet](../../picodet/); the ReID model is PaddleClas's self-developed ultra-lightweight backbone [PP-LCNet](https://github.com/PaddlePaddle/PaddleClas/blob/release/2.3/docs/zh_CN/models/PP-LCNet.md)
+- The DeepSORT solution (including the DeepSORT used in cross-camera tracking) uses PaddleDetection's self-developed high-performance detector [PP-YOLOv2](../../configs/ppyolo/) and lightweight detector [PP-PicoDet](../../configs/picodet/); the ReID model is PaddleClas's self-developed ultra-lightweight backbone [PP-LCNet](https://github.com/PaddlePaddle/PaddleClas/blob/release/2.3/docs/zh_CN/models/PP-LCNet.md)
 
 The multi-scenario pretrained models provided in PP-Tracking and the exported deployment models are as follows:
diff --git a/deploy/pptracking/cpp/src/tracker.cc b/deploy/pptracking/cpp/src/tracker.cc
index 09b2dfa249ffbc90819d2dd5d3e419a27d23cd43..9540e39f6701750ae8af5229ecd9cfa264460095 100644
--- a/deploy/pptracking/cpp/src/tracker.cc
+++ b/deploy/pptracking/cpp/src/tracker.cc
@@ -56,8 +56,8 @@ bool JDETracker::update(const cv::Mat &dets,
   ++timestamp;
   TrajectoryPool candidates(dets.rows);
   for (int i = 0; i < dets.rows; ++i) {
-    float score = *dets.ptr<float>(i, 4);
-    const cv::Mat &ltrb_ = dets(cv::Rect(0, i, 4, 1));
+    float score = *dets.ptr<float>(i, 1);
+    const cv::Mat &ltrb_ = dets(cv::Rect(2, i, 4, 1));
     cv::Vec4f ltrb = mat2vec4f(ltrb_);
     const cv::Mat &embedding = emb(cv::Rect(0, i, emb.cols, 1));
     candidates[i] = Trajectory(ltrb, score, embedding);
diff --git a/deploy/pptracking/python/README.md b/deploy/pptracking/python/README.md
index d5c34cdf56efec0f0dd7686d2127c33e584eaf37..e56a69e3fe96aba11ed3a50d1173b39c30fc83cf 100644
--- a/deploy/pptracking/python/README.md
+++ b/deploy/pptracking/python/README.md
@@ -65,13 +65,12 @@ python deploy/pptracking/python/mot_jde_infer.py --model_dir=output_inference/fa
 - The bdd100k vehicle tracking and multi-class demo video can be downloaded from: `wget https://bj.bcebos.com/v1/paddledet/data/mot/demo/bdd100k_demo.mp4`
 
+
 ## 2. Export and Prediction of the DeepSORT Model
 
 ### 2.1 Export the Inference Model
 
 Step 1: export the detection model
 ```bash
-# Export the PP-YOLOv2 pedestrian detection model
-CUDA_VISIBLE_DEVICES=0 python tools/export_model.py -c configs/mot/deepsort/detector/ppyolov2_r50vd_dcn_365e_640x640_mot17half.yml -o weights=https://paddledet.bj.bcebos.com/models/mot/deepsort/ppyolov2_r50vd_dcn_365e_640x640_mot17half.pdparams
-# Or export the PP-YOLOE pedestrian detection model
+# Export the PP-YOLOE pedestrian detection model
 CUDA_VISIBLE_DEVICES=0 python tools/export_model.py -c configs/mot/deepsort/detector/ppyoloe_crn_l_36e_640x640_mot17half.yml -o weights=https://paddledet.bj.bcebos.com/models/mot/deepsort/ppyoloe_crn_l_36e_640x640_mot17half.pdparams
 ```
@@ -88,45 +87,41 @@ CUDA_VISIBLE_DEVICES=0 python tools/export_model.py -c configs/mot/deepsort/reid
 # Download the pedestrian tracking demo video:
 wget https://bj.bcebos.com/v1/paddledet/data/mot/demo/mot17_demo.mp4
 
-# Use the exported PP-YOLOv2 pedestrian detection model and the PPLCNet ReID model
-python deploy/pptracking/python/mot_sde_infer.py --model_dir=output_inference/ppyolov2_r50vd_dcn_365e_640x640_mot17half/ --reid_model_dir=output_inference/deepsort_pplcnet/ --tracker_config=tracker_config.yml --video_file=mot17_demo.mp4 --device=GPU --threshold=0.5 --save_mot_txts --save_images
-# Or use the exported PP-YOLOE pedestrian detection model and the PPLCNet ReID model
-python deploy/pptracking/python/mot_sde_infer.py --model_dir=output_inference/ppyoloe_crn_l_36e_640x640_mot17half/ --reid_model_dir=output_inference/deepsort_pplcnet/ --tracker_config=tracker_config.yml --video_file=mot17_demo.mp4 --device=GPU --threshold=0.5 --save_mot_txts --save_images
+# Use the exported PP-YOLOE pedestrian detection model and the PPLCNet ReID model
+python3.7 deploy/pptracking/python/mot_sde_infer.py --model_dir=output_inference/ppyoloe_crn_l_36e_640x640_mot17half/ --reid_model_dir=output_inference/deepsort_pplcnet/ --tracker_config=deploy/pptracking/python/tracker_config.yml --video_file=mot17_demo.mp4 --device=GPU --save_mot_txts --threshold=0.5
 ```
 
 ### 2.3 Predict Vehicle Tracking with the Exported Models in Python
 ```bash
-# Download the exported PicoDet vehicle detection model:
-wget https://paddledet.bj.bcebos.com/models/mot/deepsort/picodet_l_640_aic21mtmct_vehicle.tar
-tar -xvf picodet_l_640_aic21mtmct_vehicle.tar
-# Or the exported PP-YOLOv2 vehicle detection model:
-wget https://paddledet.bj.bcebos.com/models/mot/deepsort/ppyolov2_r50vd_dcn_365e_aic21mtmct_vehicle.tar
-tar -xvf ppyolov2_r50vd_dcn_365e_aic21mtmct_vehicle.tar
+# Download the vehicle demo video
+wget https://bj.bcebos.com/v1/paddledet/data/mot/demo/bdd100k_demo.mp4
+
+# Download the exported PPYOLOE vehicle detection model:
+wget https://bj.bcebos.com/v1/paddledet/models/pipeline/mot_ppyoloe_l_36e_ppvehicle.zip
+unzip mot_ppyoloe_l_36e_ppvehicle.zip
 
 # Download the exported vehicle ReID model:
 wget https://paddledet.bj.bcebos.com/models/mot/deepsort/deepsort_pplcnet_vehicle.tar
 tar -xvf deepsort_pplcnet_vehicle.tar
 
-# Use the exported PicoDet vehicle detection model and the PPLCNet vehicle ReID model
-python deploy/pptracking/python/mot_sde_infer.py --model_dir=picodet_l_640_aic21mtmct_vehicle/ --reid_model_dir=deepsort_pplcnet_vehicle/ --tracker_config=tracker_config.yml --device=GPU --threshold=0.5 --video_file={your video}.mp4 --save_mot_txts --save_images
-
-# Use the exported PP-YOLOv2 vehicle detection model and the PPLCNet vehicle ReID model
-python deploy/pptracking/python/mot_sde_infer.py --model_dir=ppyolov2_r50vd_dcn_365e_aic21mtmct_vehicle/ --reid_model_dir=deepsort_pplcnet_vehicle/ --tracker_config=tracker_config.yml --device=GPU --threshold=0.5 --video_file={your video}.mp4 --save_mot_txts --save_images
+# Use the exported PPYOLOE vehicle detection model and the PPLCNet vehicle ReID model
+python deploy/pptracking/python/mot_sde_infer.py --model_dir=mot_ppyoloe_l_36e_ppvehicle/ --reid_model_dir=deepsort_pplcnet_vehicle/ --tracker_config=deploy/pptracking/python/tracker_config.yml --device=GPU --threshold=0.5 --video_file=bdd100k_demo.mp4 --save_mot_txts --save_images
 ```
 
 **Note:**
+
+- Before running, manually change the tracker type in `tracker_config.yml` to `type: DeepSORTTracker`.
 - The tracking models predict on videos and do not support single-image prediction. By default, the visualized tracking video is saved; add `--save_mot_txts` (one txt per video) or `--save_images` to save the visualized tracking images.
 - Each line of the tracking result txt file is `frame,id,x1,y1,w,h,score,-1,-1,-1`.
 - `--threshold` is the confidence threshold for visualization, 0.5 by default; results below it are filtered out, and it can be adjusted for better visualization.
 - DeepSORT only supports single-class tracking, not multi-class tracking, and the ReID model should preferably be trained on the same object class as the detector, e.g. a pedestrian ReID model for pedestrian tracking and a vehicle ReID model for vehicle tracking.
-- Manually change the tracker type in `tracker_config.yml` to `type: DeepSORTTracker`.
 
-## 3. Export and Prediction of the ByteTrack Model
+
+## 3. Export and Prediction of the ByteTrack and OC_SORT Models
 
 ### 3.1 Export the Inference Model
 ```bash
 # Export the PP-YOLOE pedestrian detection model
-CUDA_VISIBLE_DEVICES=0 python tools/export_model.py -c configs/mot/deepsort/detector/ppyoloe_crn_l_36e_640x640_mot17half.yml -o weights=https://paddledet.bj.bcebos.com/models/mot/deepsort/ppyoloe_crn_l_36e_640x640_mot17half.pdparams
+CUDA_VISIBLE_DEVICES=0 python tools/export_model.py -c configs/mot/bytetrack/detector/ppyoloe_crn_l_36e_640x640_mot17half.yml -o weights=https://paddledet.bj.bcebos.com/models/mot/ppyoloe_crn_l_36e_640x640_mot17half.pdparams
 ```
 
 ### 3.2 Predict Pedestrian Tracking with the Exported Model in Python
@@ -135,27 +130,28 @@
 wget https://bj.bcebos.com/v1/paddledet/data/mot/demo/mot17_demo.mp4
 
 # Use the exported PP-YOLOE pedestrian detection model
-python deploy/pptracking/python/mot_sde_infer.py --model_dir=output_inference/ppyoloe_crn_l_36e_640x640_mot17half/ --tracker_config=tracker_config.yml --video_file=mot17_demo.mp4 --device=GPU --threshold=0.5 --save_mot_txts --save_images
+python deploy/pptracking/python/mot_sde_infer.py --model_dir=output_inference/ppyoloe_crn_l_36e_640x640_mot17half/ --tracker_config=deploy/pptracking/python/tracker_config.yml --video_file=mot17_demo.mp4 --device=GPU --save_mot_txts
 
 # Use the exported PP-YOLOE pedestrian detection model and the PPLCNet ReID model
-python deploy/pptracking/python/mot_sde_infer.py --model_dir=output_inference/ppyoloe_crn_l_36e_640x640_mot17half/ --reid_model_dir=output_inference/deepsort_pplcnet/ --tracker_config=tracker_config.yml --video_file=mot17_demo.mp4 --device=GPU --threshold=0.5 --save_mot_txts --save_images
+python deploy/pptracking/python/mot_sde_infer.py --model_dir=output_inference/ppyoloe_crn_l_36e_640x640_mot17half/ --reid_model_dir=output_inference/deepsort_pplcnet/ --tracker_config=deploy/pptracking/python/tracker_config.yml --video_file=mot17_demo.mp4 --device=GPU --threshold=0.5 --save_mot_txts --save_images
 ```
 
 **Note:**
+ - Running the ByteTrack model requires the tracker type in `tracker_config.yml` to be `type: JDETracker`.
+ - Switch the tracker type in `tracker_config.yml` to `type: OCSORTTracker` to run the OC_SORT model.
 - The ByteTrack model runs with the exported detector plus the separately configured `--tracker_config` file. For real-time tracking, no ReID model is needed; `--reid_model_dir` is the path of the exported ReID model, empty by default, and whether to add it depends on the desired effect.
 - The tracking models predict on videos and do not support single-image prediction. By default, the visualized tracking video is saved; add `--save_mot_txts` (one txt per video) or `--save_images` to save the visualized tracking images.
 - Each line of the tracking result txt file is `frame,id,x1,y1,w,h,score,-1,-1,-1`.
 - `--threshold` is the confidence threshold for visualization, 0.5 by default; results below it are filtered out, and it can be adjusted for better visualization.
 
+
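For orientation, switching trackers is just an edit to the tracker config file. The sketch below shows what an OC_SORT entry could look like; the parameter names and defaults mirror the values read via `cfg.get(...)` in `SDE_Detector` (see the `mot_sde_infer.py` diff later in this patch), but the exact layout of `tracker_config.yml` may differ.

```yaml
# Illustrative sketch of deploy/pptracking/python/tracker_config.yml for OC_SORT.
# Parameter names/defaults mirror the cfg.get(...) calls in SDE_Detector;
# the real file layout may differ.
type: OCSORTTracker

OCSORTTracker:
  det_thresh: 0.4     # detection score threshold
  max_age: 30         # max missed frames before a track is deleted
  min_hits: 3         # min hits before a track is output
  iou_threshold: 0.3  # IoU threshold for association
  delta_t: 3          # delta_t of the previous observation
  inertia: 0.2        # vdc_weight of the angle-difference cost
  min_box_area: 0     # <= 0 disables box-area filtering
  vertical_ratio: 0   # <= 0 disables w/h ratio filtering
  use_byte: False     # whether to also use BYTE association
```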
 ## 4. Export and Prediction of the Cross-Camera Tracking Model
 
 ### 4.1 Export the Inference Model
 
 Step 1: download the exported detection model
 ```bash
-wget https://paddledet.bj.bcebos.com/models/mot/deepsort/picodet_l_640_aic21mtmct_vehicle.tar
-tar -xvf picodet_l_640_aic21mtmct_vehicle.tar
-# or
-wget https://paddledet.bj.bcebos.com/models/mot/deepsort/ppyolov2_r50vd_dcn_365e_aic21mtmct_vehicle.tar
-tar -xvf ppyolov2_r50vd_dcn_365e_aic21mtmct_vehicle.tar
+# Download the exported PPYOLOE vehicle detection model:
+wget https://bj.bcebos.com/v1/paddledet/models/pipeline/mot_ppyoloe_l_36e_ppvehicle.zip
+unzip mot_ppyoloe_l_36e_ppvehicle.zip
 ```
 Step 2: download the exported ReID model
 ```bash
@@ -169,13 +165,10 @@ tar -xvf deepsort_pplcnet_vehicle.tar
 wget https://paddledet.bj.bcebos.com/data/mot/demo/mtmct-demo.tar
 tar -xvf mtmct-demo.tar
 
-# Use the exported PicoDet vehicle detection model and the PPLCNet vehicle ReID model
-python deploy/pptracking/python/mot_sde_infer.py --model_dir=picodet_l_640_aic21mtmct_vehicle/ --reid_model_dir=deepsort_pplcnet_vehicle/ --mtmct_dir=mtmct-demo --mtmct_cfg=mtmct_cfg.yml --device=GPU --threshold=0.5 --save_mot_txts --save_images
-
-# Use the exported PP-YOLOv2 vehicle detection model and the PPLCNet vehicle ReID model
-python deploy/pptracking/python/mot_sde_infer.py --model_dir=ppyolov2_r50vd_dcn_365e_aic21mtmct_vehicle/ --reid_model_dir=deepsort_pplcnet_vehicle/ --mtmct_dir=mtmct-demo --mtmct_cfg=mtmct_cfg.yml --device=GPU --threshold=0.5 --save_mot_txts --save_images
-
+# Use the exported PPYOLOE vehicle detection model and the PPLCNet vehicle ReID model
+python deploy/pptracking/python/mot_sde_infer.py --model_dir=mot_ppyoloe_l_36e_ppvehicle/ --reid_model_dir=deepsort_pplcnet_vehicle/ --mtmct_dir=mtmct-demo --mtmct_cfg=mtmct_cfg.yml --device=GPU --threshold=0.5 --save_mot_txts --save_images
 ```
+
 **Note:**
 - The tracking models predict on videos and do not support single-image prediction. By default, the visualized tracking video is saved; add `--save_mot_txts` (one txt per video) or `--save_images` to save the visualized tracking images.
 - Each line of the cross-camera tracking result txt file is `camera_id,frame,id,x1,y1,w,h,-1,-1`.
@@ -190,6 +183,7 @@ python deploy/pptracking/python/mot_sde_infer.py --model_dir=ppyolov2_r50vd_dcn_
 | Parameter | Required | Meaning |
 |-------|-------|----------|
 | --model_dir | Yes | The path of the exported model above |
+| --reid_model_dir | Option | The path of the exported ReID model |
 | --image_file | Option | The image to be predicted |
 | --image_dir | Option | The path of the folder of images to be predicted |
 | --video_file | Option | The video to be predicted |
@@ -203,8 +197,10 @@ python deploy/pptracking/python/mot_sde_infer.py --model_dir=ppyolov2_r50vd_dcn_
 | --enable_mkldnn | Option | Whether to enable MKLDNN acceleration for CPU inference, False by default |
 | --cpu_threads | Option | The number of CPU threads, 1 by default |
 | --trt_calib_mode | Option | Whether TensorRT runs in calibration mode, False by default; set it to True when using the int8 mode of TensorRT and to False for models quantized by PaddleSlim |
-| --do_entrance_counting | Option | Whether to count entrance/exit flow, False by default |
-| --draw_center_traj | Option | Whether to draw the tracking trajectories, False by default |
+| --save_mot_txts | Option | Whether to save txt result files for tracking tasks, False by default |
+| --save_images | Option | Whether to save visualized images for tracking tasks, False by default |
+| --do_entrance_counting | Option | Whether to count entrance/exit flow for tracking tasks, False by default |
+| --draw_center_traj | Option | Whether to draw tracking trajectories for tracking tasks, False by default |
 | --mtmct_dir | Option | The folder of images for MTMCT cross-camera tracking prediction, None by default |
 | --mtmct_cfg | Option | The config file for MTMCT cross-camera tracking prediction, None by default |
diff --git a/deploy/pptracking/python/det_infer.py b/deploy/pptracking/python/det_infer.py
index 90a391e07209951cc80671c97f898b5cdd4bc0a9..c52879453027e099d25cd0388083eb57d1f8aeea 100644
--- a/deploy/pptracking/python/det_infer.py
+++ b/deploy/pptracking/python/det_infer.py
@@ -32,7 +32,7 @@ sys.path.insert(0, parent_path)
 from benchmark_utils import PaddleInferBenchmark
 from picodet_postprocess import PicoDetPostProcess
-from preprocess import preprocess, Resize, NormalizeImage, Permute, PadStride, LetterBoxResize, decode_image
+from preprocess import preprocess, Resize, NormalizeImage, Permute, PadStride, LetterBoxResize, Pad, decode_image
 from mot.visualize import visualize_box_mask
 from
mot_utils import argsparser, Timer, get_current_memory_mb @@ -416,9 +416,15 @@ def load_predictor(model_dir, raise ValueError( "Predict by TensorRT mode: {}, expect device=='GPU', but device == {}" .format(run_mode, device)) - config = Config( - os.path.join(model_dir, 'model.pdmodel'), - os.path.join(model_dir, 'model.pdiparams')) + infer_model = os.path.join(model_dir, 'model.pdmodel') + infer_params = os.path.join(model_dir, 'model.pdiparams') + if not os.path.exists(infer_model): + infer_model = os.path.join(model_dir, 'inference.pdmodel') + infer_params = os.path.join(model_dir, 'inference.pdiparams') + if not os.path.exists(infer_model): + raise ValueError( + "Cannot find any inference model in dir: {},".format(model_dir)) + config = Config(infer_model, infer_params) if device == 'GPU': # initial GPU memory(M), device ID config.enable_use_gpu(200, 0) diff --git a/deploy/pptracking/python/mot/matching/__init__.py b/deploy/pptracking/python/mot/matching/__init__.py index 54c6680f79f16247c562a9da1024dd3e1de4c57f..f6a88c5673a50452415b1f86f7b18bac12297f49 100644 --- a/deploy/pptracking/python/mot/matching/__init__.py +++ b/deploy/pptracking/python/mot/matching/__init__.py @@ -14,6 +14,8 @@ from . import jde_matching from . import deepsort_matching +from . import ocsort_matching from .jde_matching import * from .deepsort_matching import * +from .ocsort_matching import * diff --git a/deploy/pptracking/python/mot/matching/jde_matching.py b/deploy/pptracking/python/mot/matching/jde_matching.py index eb3749885b0ad8e563e32cf3ca1b89c3364700bc..3b1cf02edd75cb960e433926274b761d49136033 100644 --- a/deploy/pptracking/python/mot/matching/jde_matching.py +++ b/deploy/pptracking/python/mot/matching/jde_matching.py @@ -15,7 +15,14 @@ This code is based on https://github.com/Zhongdao/Towards-Realtime-MOT/blob/master/tracker/matching.py """ -import lap +try: + import lap +except: + print( + 'Warning: Unable to use JDE/FairMOT/ByteTrack, please install lap, for example: `pip install lap`, see https://github.com/gatagat/lap' + ) + pass + import scipy import numpy as np from scipy.spatial.distance import cdist @@ -26,7 +33,7 @@ warnings.filterwarnings("ignore") __all__ = [ 'merge_matches', 'linear_assignment', - 'cython_bbox_ious', + 'bbox_ious', 'iou_distance', 'embedding_distance', 'fuse_motion', @@ -53,6 +60,12 @@ def merge_matches(m1, m2, shape): def linear_assignment(cost_matrix, thresh): + try: + import lap + except Exception as e: + raise RuntimeError( + 'Unable to use JDE/FairMOT/ByteTrack, please install lap, for example: `pip install lap`, see https://github.com/gatagat/lap' + ) if cost_matrix.size == 0: return np.empty( (0, 2), dtype=int), tuple(range(cost_matrix.shape[0])), tuple( @@ -68,22 +81,28 @@ def linear_assignment(cost_matrix, thresh): return matches, unmatched_a, unmatched_b -def cython_bbox_ious(atlbrs, btlbrs): - ious = np.zeros((len(atlbrs), len(btlbrs)), dtype=np.float) - if ious.size == 0: +def bbox_ious(atlbrs, btlbrs): + boxes = np.ascontiguousarray(atlbrs, dtype=np.float) + query_boxes = np.ascontiguousarray(btlbrs, dtype=np.float) + N = boxes.shape[0] + K = query_boxes.shape[0] + ious = np.zeros((N, K), dtype=boxes.dtype) + if N * K == 0: return ious - try: - import cython_bbox - except Exception as e: - print('cython_bbox not found, please install cython_bbox.' 
- 'for example: `pip install cython_bbox`.') - exit() - - ious = cython_bbox.bbox_overlaps( - np.ascontiguousarray( - atlbrs, dtype=np.float), - np.ascontiguousarray( - btlbrs, dtype=np.float)) + + for k in range(K): + box_area = ((query_boxes[k, 2] - query_boxes[k, 0] + 1) * + (query_boxes[k, 3] - query_boxes[k, 1] + 1)) + for n in range(N): + iw = (min(boxes[n, 2], query_boxes[k, 2]) - max( + boxes[n, 0], query_boxes[k, 0]) + 1) + if iw > 0: + ih = (min(boxes[n, 3], query_boxes[k, 3]) - max( + boxes[n, 1], query_boxes[k, 1]) + 1) + if ih > 0: + ua = float((boxes[n, 2] - boxes[n, 0] + 1) * (boxes[ + n, 3] - boxes[n, 1] + 1) + box_area - iw * ih) + ious[n, k] = iw * ih / ua return ious @@ -98,7 +117,7 @@ def iou_distance(atracks, btracks): else: atlbrs = [track.tlbr for track in atracks] btlbrs = [track.tlbr for track in btracks] - _ious = cython_bbox_ious(atlbrs, btlbrs) + _ious = bbox_ious(atlbrs, btlbrs) cost_matrix = 1 - _ious return cost_matrix diff --git a/deploy/pptracking/python/mot/matching/ocsort_matching.py b/deploy/pptracking/python/mot/matching/ocsort_matching.py new file mode 100644 index 0000000000000000000000000000000000000000..b2428d020731a6f91e28f9962168390cf1b3a12f --- /dev/null +++ b/deploy/pptracking/python/mot/matching/ocsort_matching.py @@ -0,0 +1,127 @@ +# Copyright (c) 2022 PaddlePaddle Authors. All Rights Reserved. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. 
+""" +This code is based on https://github.com/noahcao/OC_SORT/blob/master/trackers/ocsort_tracker/association.py +""" + +import os +import numpy as np + + +def iou_batch(bboxes1, bboxes2): + """ + From SORT: Computes IOU between two bboxes in the form [x1,y1,x2,y2] + """ + bboxes2 = np.expand_dims(bboxes2, 0) + bboxes1 = np.expand_dims(bboxes1, 1) + + xx1 = np.maximum(bboxes1[..., 0], bboxes2[..., 0]) + yy1 = np.maximum(bboxes1[..., 1], bboxes2[..., 1]) + xx2 = np.minimum(bboxes1[..., 2], bboxes2[..., 2]) + yy2 = np.minimum(bboxes1[..., 3], bboxes2[..., 3]) + w = np.maximum(0., xx2 - xx1) + h = np.maximum(0., yy2 - yy1) + wh = w * h + o = wh / ((bboxes1[..., 2] - bboxes1[..., 0]) * + (bboxes1[..., 3] - bboxes1[..., 1]) + + (bboxes2[..., 2] - bboxes2[..., 0]) * + (bboxes2[..., 3] - bboxes2[..., 1]) - wh) + return (o) + + +def speed_direction_batch(dets, tracks): + tracks = tracks[..., np.newaxis] + CX1, CY1 = (dets[:, 0] + dets[:, 2]) / 2.0, (dets[:, 1] + dets[:, 3]) / 2.0 + CX2, CY2 = (tracks[:, 0] + tracks[:, 2]) / 2.0, ( + tracks[:, 1] + tracks[:, 3]) / 2.0 + dx = CX1 - CX2 + dy = CY1 - CY2 + norm = np.sqrt(dx**2 + dy**2) + 1e-6 + dx = dx / norm + dy = dy / norm + return dy, dx # size: num_track x num_det + + +def linear_assignment(cost_matrix): + try: + import lap + _, x, y = lap.lapjv(cost_matrix, extend_cost=True) + return np.array([[y[i], i] for i in x if i >= 0]) # + except ImportError: + from scipy.optimize import linear_sum_assignment + x, y = linear_sum_assignment(cost_matrix) + return np.array(list(zip(x, y))) + + +def associate(detections, trackers, iou_threshold, velocities, previous_obs, + vdc_weight): + if (len(trackers) == 0): + return np.empty( + (0, 2), dtype=int), np.arange(len(detections)), np.empty( + (0, 5), dtype=int) + + Y, X = speed_direction_batch(detections, previous_obs) + inertia_Y, inertia_X = velocities[:, 0], velocities[:, 1] + inertia_Y = np.repeat(inertia_Y[:, np.newaxis], Y.shape[1], axis=1) + inertia_X = np.repeat(inertia_X[:, np.newaxis], X.shape[1], axis=1) + diff_angle_cos = inertia_X * X + inertia_Y * Y + diff_angle_cos = np.clip(diff_angle_cos, a_min=-1, a_max=1) + diff_angle = np.arccos(diff_angle_cos) + diff_angle = (np.pi / 2.0 - np.abs(diff_angle)) / np.pi + + valid_mask = np.ones(previous_obs.shape[0]) + valid_mask[np.where(previous_obs[:, 4] < 0)] = 0 + + iou_matrix = iou_batch(detections, trackers) + scores = np.repeat( + detections[:, -1][:, np.newaxis], trackers.shape[0], axis=1) + # iou_matrix = iou_matrix * scores # a trick sometiems works, we don't encourage this + valid_mask = np.repeat(valid_mask[:, np.newaxis], X.shape[1], axis=1) + + angle_diff_cost = (valid_mask * diff_angle) * vdc_weight + angle_diff_cost = angle_diff_cost.T + angle_diff_cost = angle_diff_cost * scores + + if min(iou_matrix.shape) > 0: + a = (iou_matrix > iou_threshold).astype(np.int32) + if a.sum(1).max() == 1 and a.sum(0).max() == 1: + matched_indices = np.stack(np.where(a), axis=1) + else: + matched_indices = linear_assignment(-(iou_matrix + angle_diff_cost)) + else: + matched_indices = np.empty(shape=(0, 2)) + + unmatched_detections = [] + for d, det in enumerate(detections): + if (d not in matched_indices[:, 0]): + unmatched_detections.append(d) + unmatched_trackers = [] + for t, trk in enumerate(trackers): + if (t not in matched_indices[:, 1]): + unmatched_trackers.append(t) + + # filter out matched with low IOU + matches = [] + for m in matched_indices: + if (iou_matrix[m[0], m[1]] < iou_threshold): + unmatched_detections.append(m[0]) + 
unmatched_trackers.append(m[1]) + else: + matches.append(m.reshape(1, 2)) + if (len(matches) == 0): + matches = np.empty((0, 2), dtype=int) + else: + matches = np.concatenate(matches, axis=0) + + return matches, np.array(unmatched_detections), np.array(unmatched_trackers) diff --git a/deploy/pptracking/python/mot/mtmct/camera_utils.py b/deploy/pptracking/python/mot/mtmct/camera_utils.py index e11472637c8615a370f8edb3b4d58ae2735fa509..445e6386cff826742e8f7f7d5171ca247e148b67 100644 --- a/deploy/pptracking/python/mot/mtmct/camera_utils.py +++ b/deploy/pptracking/python/mot/mtmct/camera_utils.py @@ -19,7 +19,13 @@ Note: The following codes are strongly related to camera parameters of the AIC21 """ import numpy as np -from sklearn.cluster import AgglomerativeClustering +try: + from sklearn.cluster import AgglomerativeClustering +except: + print( + 'Warning: Unable to use MTMCT in PP-Tracking, please install sklearn, for example: `pip install sklearn`' + ) + pass from .utils import get_dire, get_match, get_cid_tid, combin_feature, combin_cluster from .utils import normalize, intracam_ignore, visual_rerank diff --git a/deploy/pptracking/python/mot/mtmct/postprocess.py b/deploy/pptracking/python/mot/mtmct/postprocess.py index 7e338b901fa75716dacc2dc0560bbbeafe53573a..32be0466d0ed2899d01d192f82561f5e30cc9ad9 100644 --- a/deploy/pptracking/python/mot/mtmct/postprocess.py +++ b/deploy/pptracking/python/mot/mtmct/postprocess.py @@ -20,7 +20,13 @@ import re import cv2 from tqdm import tqdm import numpy as np -import motmetrics as mm +try: + import motmetrics as mm +except: + print( + 'Warning: Unable to use motmetrics in MTMCT in PP-Tracking, please install motmetrics, for example: `pip install motmetrics`, see https://github.com/longcw/py-motmetrics' + ) + pass from functools import reduce from .utils import parse_pt_gt, parse_pt, compare_dataframes_mtmc @@ -201,6 +207,12 @@ def print_mtmct_result(gt_file, pred_file): summary.loc[:, 'idr'] *= 100 summary.loc[:, 'idf1'] *= 100 summary.loc[:, 'mota'] *= 100 + try: + import motmetrics as mm + except Exception as e: + raise RuntimeError( + 'Unable to use motmetrics in MTMCT in PP-Tracking, please install motmetrics, for example: `pip install motmetrics`, see https://github.com/longcw/py-motmetrics' + ) print( mm.io.render_summary( summary, diff --git a/deploy/pptracking/python/mot/mtmct/utils.py b/deploy/pptracking/python/mot/mtmct/utils.py index ef7ec8be73fbd218363cc02695a1626fc66d71ae..f0b52aa67638b6659e18b470ae384720a68f5294 100644 --- a/deploy/pptracking/python/mot/mtmct/utils.py +++ b/deploy/pptracking/python/mot/mtmct/utils.py @@ -20,9 +20,6 @@ import re import cv2 import gc import numpy as np -from sklearn import preprocessing -from sklearn.cluster import AgglomerativeClustering -import motmetrics as mm import pandas as pd from tqdm import tqdm import warnings @@ -195,10 +192,10 @@ def find_topk(a, k, axis=-1, largest=True, sorted=True): a = np.asanyarray(a) if largest: - index_array = np.argpartition(a, axis_size-k, axis=axis) - topk_indices = np.take(index_array, -np.arange(k)-1, axis=axis) + index_array = np.argpartition(a, axis_size - k, axis=axis) + topk_indices = np.take(index_array, -np.arange(k) - 1, axis=axis) else: - index_array = np.argpartition(a, k-1, axis=axis) + index_array = np.argpartition(a, k - 1, axis=axis) topk_indices = np.take(index_array, np.arange(k), axis=axis) topk_values = np.take_along_axis(a, topk_indices, axis=axis) if sorted: @@ -228,7 +225,8 @@ def batch_numpy_topk(qf, gf, k1, N=6000): temp_qd = temp_qd / 
(np.max(temp_qd, axis=0)[0]) temp_qd = temp_qd.T initial_rank.append( - find_topk(temp_qd, k=k1, axis=1, largest=False, sorted=True)[1]) + find_topk( + temp_qd, k=k1, axis=1, largest=False, sorted=True)[1]) del temp_qd del temp_gf del temp_qf @@ -374,6 +372,12 @@ def visual_rerank(prb_feats, def normalize(nparray, axis=0): + try: + from sklearn import preprocessing + except Exception as e: + raise RuntimeError( + 'Unable to use sklearn in MTMCT in PP-Tracking, please install sklearn, for example: `pip install sklearn`' + ) nparray = preprocessing.normalize(nparray, norm='l2', axis=axis) return nparray @@ -453,6 +457,12 @@ def parse_pt_gt(mot_feature): # eval result def compare_dataframes_mtmc(gts, ts): + try: + import motmetrics as mm + except Exception as e: + raise RuntimeError( + 'Unable to use motmetrics in MTMCT in PP-Tracking, please install motmetrics, for example: `pip install motmetrics`, see https://github.com/longcw/py-motmetrics' + ) """Compute ID-based evaluation metrics for MTMCT Return: df (pandas.DataFrame): Results of the evaluations in a df with only the 'idf1', 'idp', and 'idr' columns. @@ -528,6 +538,12 @@ def get_labels(cid_tid_dict, use_ff=True, use_rerank=True, use_st_filter=False): + try: + from sklearn.cluster import AgglomerativeClustering + except Exception as e: + raise RuntimeError( + 'Unable to use sklearn in MTMCT in PP-Tracking, please install sklearn, for example: `pip install sklearn`' + ) # 1st cluster sim_matrix = get_sim_matrix( cid_tid_dict, diff --git a/deploy/pptracking/python/mot/mtmct/zone.py b/deploy/pptracking/python/mot/mtmct/zone.py index f079f64c1e99ca083ba34664a73c0f6ecbfbc655..f3fa162e573550a4f497fb7ddd3e578796df6176 100644 --- a/deploy/pptracking/python/mot/mtmct/zone.py +++ b/deploy/pptracking/python/mot/mtmct/zone.py @@ -21,7 +21,13 @@ Note: The following codes are strongly related to zone of the AIC21 test-set S06 import os import cv2 import numpy as np -from sklearn.cluster import AgglomerativeClustering +try: + from sklearn.cluster import AgglomerativeClustering +except: + print( + 'Warning: Unable to use MTMCT in PP-Tracking, please install sklearn, for example: `pip install sklearn`' + ) + pass BBOX_B = 10 / 15 diff --git a/deploy/pptracking/python/mot/tracker/__init__.py b/deploy/pptracking/python/mot/tracker/__init__.py index b74593b4126d878cd655326e58369f5b6f76a2ae..03a5dd0a169203b86edbc6c81a44a095ebe9b3cc 100644 --- a/deploy/pptracking/python/mot/tracker/__init__.py +++ b/deploy/pptracking/python/mot/tracker/__init__.py @@ -16,8 +16,10 @@ from . import base_jde_tracker from . import base_sde_tracker from . import jde_tracker from . import deepsort_tracker +from . import ocsort_tracker from .base_jde_tracker import * from .base_sde_tracker import * from .jde_tracker import * from .deepsort_tracker import * +from .ocsort_tracker import * diff --git a/deploy/pptracking/python/mot/tracker/jde_tracker.py b/deploy/pptracking/python/mot/tracker/jde_tracker.py index 8549801febb198fadece87ed043bf106436134ae..f412842a0205036e02de60018fe86331a2e5d9b7 100644 --- a/deploy/pptracking/python/mot/tracker/jde_tracker.py +++ b/deploy/pptracking/python/mot/tracker/jde_tracker.py @@ -38,7 +38,7 @@ class JDETracker(object): track_buffer (int): buffer for tracker min_box_area (int): min box area to filter out low quality boxes vertical_ratio (float): w/h, the vertical ratio of the bbox to filter - bad results. If set <0 means no need to filter bboxes,usually set + bad results. 
If set <= 0 means no need to filter bboxes,usually set 1.6 for pedestrian tracking. tracked_thresh (float): linear assignment threshold of tracked stracks and detections @@ -64,8 +64,8 @@ class JDETracker(object): num_classes=1, det_thresh=0.3, track_buffer=30, - min_box_area=200, - vertical_ratio=1.6, + min_box_area=0, + vertical_ratio=0, tracked_thresh=0.7, r_tracked_thresh=0.5, unconfirmed_thresh=0.7, @@ -116,7 +116,7 @@ class JDETracker(object): Return: output_stracks_dict (dict(list)): The list contains information - regarding the online_tracklets for the recieved image tensor. + regarding the online_tracklets for the received image tensor. """ self.frame_id += 1 if self.frame_id == 1: @@ -161,9 +161,8 @@ class JDETracker(object): detections = [ STrack( STrack.tlbr_to_tlwh(tlbrs[2:6]), tlbrs[1], cls_id, - 30, temp_feat) - for (tlbrs, temp_feat - ) in zip(pred_dets_cls, pred_embs_cls) + 30, temp_feat) for (tlbrs, temp_feat) in + zip(pred_dets_cls, pred_embs_cls) ] else: detections = [] @@ -238,15 +237,13 @@ class JDETracker(object): for tlbrs in pred_dets_cls_second ] else: - pred_embs_cls_second = pred_embs_dict[cls_id][inds_second] + pred_embs_cls_second = pred_embs_dict[cls_id][ + inds_second] detections_second = [ STrack( - STrack.tlbr_to_tlwh(tlbrs[2:6]), - tlbrs[1], - cls_id, - 30, - temp_feat) - for (tlbrs, temp_feat) in zip(pred_dets_cls_second, pred_embs_cls_second) + STrack.tlbr_to_tlwh(tlbrs[2:6]), tlbrs[1], + cls_id, 30, temp_feat) for (tlbrs, temp_feat) in + zip(pred_dets_cls_second, pred_embs_cls_second) ] else: detections_second = [] diff --git a/deploy/pptracking/python/mot/tracker/ocsort_tracker.py b/deploy/pptracking/python/mot/tracker/ocsort_tracker.py new file mode 100644 index 0000000000000000000000000000000000000000..919af5b23349b7423f9bee45b38941a4024ac777 --- /dev/null +++ b/deploy/pptracking/python/mot/tracker/ocsort_tracker.py @@ -0,0 +1,366 @@ +# Copyright (c) 2022 PaddlePaddle Authors. All Rights Reserved. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +""" +This code is based on https://github.com/noahcao/OC_SORT/blob/master/trackers/ocsort_tracker/ocsort.py +""" + +import numpy as np +try: + from filterpy.kalman import KalmanFilter +except: + print( + 'Warning: Unable to use OC-SORT, please install filterpy, for example: `pip install filterpy`, see https://github.com/rlabbe/filterpy' + ) + pass + +from ..matching.ocsort_matching import associate, linear_assignment, iou_batch + + +def k_previous_obs(observations, cur_age, k): + if len(observations) == 0: + return [-1, -1, -1, -1, -1] + for i in range(k): + dt = k - i + if cur_age - dt in observations: + return observations[cur_age - dt] + max_age = max(observations.keys()) + return observations[max_age] + + +def convert_bbox_to_z(bbox): + """ + Takes a bounding box in the form [x1,y1,x2,y2] and returns z in the form + [x,y,s,r] where x,y is the centre of the box and s is the scale/area and r is + the aspect ratio + """ + w = bbox[2] - bbox[0] + h = bbox[3] - bbox[1] + x = bbox[0] + w / 2. 
+ y = bbox[1] + h / 2. + s = w * h # scale is just area + r = w / float(h + 1e-6) + return np.array([x, y, s, r]).reshape((4, 1)) + + +def convert_x_to_bbox(x, score=None): + """ + Takes a bounding box in the centre form [x,y,s,r] and returns it in the form + [x1,y1,x2,y2] where x1,y1 is the top left and x2,y2 is the bottom right + """ + w = np.sqrt(x[2] * x[3]) + h = x[2] / w + if (score == None): + return np.array( + [x[0] - w / 2., x[1] - h / 2., x[0] + w / 2., + x[1] + h / 2.]).reshape((1, 4)) + else: + score = np.array([score]) + return np.array([ + x[0] - w / 2., x[1] - h / 2., x[0] + w / 2., x[1] + h / 2., score + ]).reshape((1, 5)) + + +def speed_direction(bbox1, bbox2): + cx1, cy1 = (bbox1[0] + bbox1[2]) / 2.0, (bbox1[1] + bbox1[3]) / 2.0 + cx2, cy2 = (bbox2[0] + bbox2[2]) / 2.0, (bbox2[1] + bbox2[3]) / 2.0 + speed = np.array([cy2 - cy1, cx2 - cx1]) + norm = np.sqrt((cy2 - cy1)**2 + (cx2 - cx1)**2) + 1e-6 + return speed / norm + + +class KalmanBoxTracker(object): + """ + This class represents the internal state of individual tracked objects observed as bbox. + + Args: + bbox (np.array): bbox in [x1,y1,x2,y2,score] format. + delta_t (int): delta_t of previous observation + """ + count = 0 + + def __init__(self, bbox, delta_t=3): + try: + from filterpy.kalman import KalmanFilter + except Exception as e: + raise RuntimeError( + 'Unable to use OC-SORT, please install filterpy, for example: `pip install filterpy`, see https://github.com/rlabbe/filterpy' + ) + self.kf = KalmanFilter(dim_x=7, dim_z=4) + self.kf.F = np.array([[1, 0, 0, 0, 1, 0, 0], [0, 1, 0, 0, 0, 1, 0], + [0, 0, 1, 0, 0, 0, 1], [0, 0, 0, 1, 0, 0, 0], + [0, 0, 0, 0, 1, 0, 0], [0, 0, 0, 0, 0, 1, 0], + [0, 0, 0, 0, 0, 0, 1]]) + self.kf.H = np.array([[1, 0, 0, 0, 0, 0, 0], [0, 1, 0, 0, 0, 0, 0], + [0, 0, 1, 0, 0, 0, 0], [0, 0, 0, 1, 0, 0, 0]]) + self.kf.R[2:, 2:] *= 10. + self.kf.P[4:, 4:] *= 1000. + # give high uncertainty to the unobservable initial velocities + self.kf.P *= 10. + self.kf.Q[-1, -1] *= 0.01 + self.kf.Q[4:, 4:] *= 0.01 + + self.score = bbox[4] + self.kf.x[:4] = convert_bbox_to_z(bbox) + self.time_since_update = 0 + self.id = KalmanBoxTracker.count + KalmanBoxTracker.count += 1 + self.history = [] + self.hits = 0 + self.hit_streak = 0 + self.age = 0 + """ + NOTE: [-1,-1,-1,-1,-1] is a compromising placeholder for non-observation status, the same for the return of + function k_previous_obs. It is ugly and I do not like it. But to support generate observation array in a + fast and unified way, which you would see below k_observations = np.array([k_previous_obs(...]]), let's bear it for now. + """ + self.last_observation = np.array([-1, -1, -1, -1, -1]) # placeholder + self.observations = dict() + self.history_observations = [] + self.velocity = None + self.delta_t = delta_t + + def update(self, bbox): + """ + Updates the state vector with observed bbox. + """ + if bbox is not None: + if self.last_observation.sum() >= 0: # no previous observation + previous_box = None + for i in range(self.delta_t): + dt = self.delta_t - i + if self.age - dt in self.observations: + previous_box = self.observations[self.age - dt] + break + if previous_box is None: + previous_box = self.last_observation + """ + Estimate the track speed direction with observations \Delta t steps away + """ + self.velocity = speed_direction(previous_box, bbox) + """ + Insert new observations. This is a ugly way to maintain both self.observations + and self.history_observations. Bear it for the moment. 
+ """ + self.last_observation = bbox + self.observations[self.age] = bbox + self.history_observations.append(bbox) + + self.time_since_update = 0 + self.history = [] + self.hits += 1 + self.hit_streak += 1 + self.kf.update(convert_bbox_to_z(bbox)) + else: + self.kf.update(bbox) + + def predict(self): + """ + Advances the state vector and returns the predicted bounding box estimate. + """ + if ((self.kf.x[6] + self.kf.x[2]) <= 0): + self.kf.x[6] *= 0.0 + + self.kf.predict() + self.age += 1 + if (self.time_since_update > 0): + self.hit_streak = 0 + self.time_since_update += 1 + self.history.append(convert_x_to_bbox(self.kf.x, score=self.score)) + return self.history[-1] + + def get_state(self): + return convert_x_to_bbox(self.kf.x, score=self.score) + + +class OCSORTTracker(object): + """ + OCSORT tracker, support single class + + Args: + det_thresh (float): threshold of detection score + max_age (int): maximum number of missed misses before a track is deleted + min_hits (int): minimum hits for associate + iou_threshold (float): iou threshold for associate + delta_t (int): delta_t of previous observation + inertia (float): vdc_weight of angle_diff_cost for associate + vertical_ratio (float): w/h, the vertical ratio of the bbox to filter + bad results. If set <= 0 means no need to filter bboxes,usually set + 1.6 for pedestrian tracking. + min_box_area (int): min box area to filter out low quality boxes + use_byte (bool): Whether use ByteTracker, default False + """ + + def __init__(self, + det_thresh=0.6, + max_age=30, + min_hits=3, + iou_threshold=0.3, + delta_t=3, + inertia=0.2, + vertical_ratio=-1, + min_box_area=0, + use_byte=False): + self.det_thresh = det_thresh + self.max_age = max_age + self.min_hits = min_hits + self.iou_threshold = iou_threshold + self.delta_t = delta_t + self.inertia = inertia + self.vertical_ratio = vertical_ratio + self.min_box_area = min_box_area + self.use_byte = use_byte + + self.trackers = [] + self.frame_count = 0 + KalmanBoxTracker.count = 0 + + def update(self, pred_dets, pred_embs=None): + """ + Args: + pred_dets (np.array): Detection results of the image, the shape is + [N, 6], means 'cls_id, score, x0, y0, x1, y1'. + pred_embs (np.array): Embedding results of the image, the shape is + [N, 128] or [N, 512], default as None. + + Return: + tracking boxes (np.array): [M, 6], means 'x0, y0, x1, y1, score, id'. + """ + if pred_dets is None: + return np.empty((0, 6)) + + self.frame_count += 1 + + bboxes = pred_dets[:, 2:] + scores = pred_dets[:, 1:2] + dets = np.concatenate((bboxes, scores), axis=1) + scores = scores.squeeze(-1) + + inds_low = scores > 0.1 + inds_high = scores < self.det_thresh + inds_second = np.logical_and(inds_low, inds_high) + # self.det_thresh > score > 0.1, for second matching + dets_second = dets[inds_second] # detections for second matching + remain_inds = scores > self.det_thresh + dets = dets[remain_inds] + + # get predicted locations from existing trackers. 
+ trks = np.zeros((len(self.trackers), 5)) + to_del = [] + ret = [] + for t, trk in enumerate(trks): + pos = self.trackers[t].predict()[0] + trk[:] = [pos[0], pos[1], pos[2], pos[3], 0] + if np.any(np.isnan(pos)): + to_del.append(t) + trks = np.ma.compress_rows(np.ma.masked_invalid(trks)) + for t in reversed(to_del): + self.trackers.pop(t) + + velocities = np.array([ + trk.velocity if trk.velocity is not None else np.array((0, 0)) + for trk in self.trackers + ]) + last_boxes = np.array([trk.last_observation for trk in self.trackers]) + k_observations = np.array([ + k_previous_obs(trk.observations, trk.age, self.delta_t) + for trk in self.trackers + ]) + """ + First round of association + """ + matched, unmatched_dets, unmatched_trks = associate( + dets, trks, self.iou_threshold, velocities, k_observations, + self.inertia) + for m in matched: + self.trackers[m[1]].update(dets[m[0], :]) + """ + Second round of associaton by OCR + """ + # BYTE association + if self.use_byte and len(dets_second) > 0 and unmatched_trks.shape[ + 0] > 0: + u_trks = trks[unmatched_trks] + iou_left = iou_batch( + dets_second, + u_trks) # iou between low score detections and unmatched tracks + iou_left = np.array(iou_left) + if iou_left.max() > self.iou_threshold: + """ + NOTE: by using a lower threshold, e.g., self.iou_threshold - 0.1, you may + get a higher performance especially on MOT17/MOT20 datasets. But we keep it + uniform here for simplicity + """ + matched_indices = linear_assignment(-iou_left) + to_remove_trk_indices = [] + for m in matched_indices: + det_ind, trk_ind = m[0], unmatched_trks[m[1]] + if iou_left[m[0], m[1]] < self.iou_threshold: + continue + self.trackers[trk_ind].update(dets_second[det_ind, :]) + to_remove_trk_indices.append(trk_ind) + unmatched_trks = np.setdiff1d(unmatched_trks, + np.array(to_remove_trk_indices)) + + if unmatched_dets.shape[0] > 0 and unmatched_trks.shape[0] > 0: + left_dets = dets[unmatched_dets] + left_trks = last_boxes[unmatched_trks] + iou_left = iou_batch(left_dets, left_trks) + iou_left = np.array(iou_left) + if iou_left.max() > self.iou_threshold: + """ + NOTE: by using a lower threshold, e.g., self.iou_threshold - 0.1, you may + get a higher performance especially on MOT17/MOT20 datasets. 
But we keep it + uniform here for simplicity + """ + rematched_indices = linear_assignment(-iou_left) + to_remove_det_indices = [] + to_remove_trk_indices = [] + for m in rematched_indices: + det_ind, trk_ind = unmatched_dets[m[0]], unmatched_trks[m[ + 1]] + if iou_left[m[0], m[1]] < self.iou_threshold: + continue + self.trackers[trk_ind].update(dets[det_ind, :]) + to_remove_det_indices.append(det_ind) + to_remove_trk_indices.append(trk_ind) + unmatched_dets = np.setdiff1d(unmatched_dets, + np.array(to_remove_det_indices)) + unmatched_trks = np.setdiff1d(unmatched_trks, + np.array(to_remove_trk_indices)) + + for m in unmatched_trks: + self.trackers[m].update(None) + + # create and initialise new trackers for unmatched detections + for i in unmatched_dets: + trk = KalmanBoxTracker(dets[i, :], delta_t=self.delta_t) + self.trackers.append(trk) + i = len(self.trackers) + for trk in reversed(self.trackers): + if trk.last_observation.sum() < 0: + d = trk.get_state()[0] + else: + d = trk.last_observation # tlbr + score + if (trk.time_since_update < 1) and ( + trk.hit_streak >= self.min_hits or + self.frame_count <= self.min_hits): + # +1 as MOT benchmark requires positive + ret.append(np.concatenate((d, [trk.id + 1])).reshape(1, -1)) + i -= 1 + # remove dead tracklet + if (trk.time_since_update > self.max_age): + self.trackers.pop(i) + if (len(ret) > 0): + return np.concatenate(ret) + return np.empty((0, 6)) diff --git a/deploy/pptracking/python/mot/utils.py b/deploy/pptracking/python/mot/utils.py index 8bb380af0874e9ee795f7616cc14c0abf55eb320..503589d8185aad91dfb2bad7f9032eacabefac4b 100644 --- a/deploy/pptracking/python/mot/utils.py +++ b/deploy/pptracking/python/mot/utils.py @@ -211,6 +211,8 @@ def preprocess_reid(imgs, def flow_statistic(result, secs_interval, do_entrance_counting, + do_break_in_counting, + region_type, video_fps, entrance, id_set, @@ -221,39 +223,84 @@ def flow_statistic(result, records, data_type='mot', num_classes=1): - # Count in and out number: - # Use horizontal center line as the entrance just for simplification. - # If a person located in the above the horizontal center line - # at the previous frame and is in the below the line at the current frame, - # the in number is increased by one. - # If a person was in the below the horizontal center line - # at the previous frame and locates in the below the line at the current frame, - # the out number is increased by one. - # TODO: if the entrance is not the horizontal center line, - # the counting method should be optimized. + # Count in/out number: + # Note that 'region_type' should be one of ['horizontal', 'vertical', 'custom'], + # 'horizontal' and 'vertical' means entrance is the center line as the entrance when do_entrance_counting, + # 'custom' means entrance is a region defined by users when do_break_in_counting. + if do_entrance_counting: - entrance_y = entrance[1] # xmin, ymin, xmax, ymax + assert region_type in [ + 'horizontal', 'vertical' + ], "region_type should be 'horizontal' or 'vertical' when do entrance counting." + entrance_x, entrance_y = entrance[0], entrance[1] frame_id, tlwhs, tscores, track_ids = result for tlwh, score, track_id in zip(tlwhs, tscores, track_ids): if track_id < 0: continue if data_type == 'kitti': frame_id -= 1 - x1, y1, w, h = tlwh center_x = x1 + w / 2. center_y = y1 + h / 2. 
if track_id in prev_center: - if prev_center[track_id][1] <= entrance_y and \ - center_y > entrance_y: - in_id_list.append(track_id) - if prev_center[track_id][1] >= entrance_y and \ - center_y < entrance_y: - out_id_list.append(track_id) + if region_type == 'horizontal': + # horizontal center line + if prev_center[track_id][1] <= entrance_y and \ + center_y > entrance_y: + in_id_list.append(track_id) + if prev_center[track_id][1] >= entrance_y and \ + center_y < entrance_y: + out_id_list.append(track_id) + else: + # vertical center line + if prev_center[track_id][0] <= entrance_x and \ + center_x > entrance_x: + in_id_list.append(track_id) + if prev_center[track_id][0] >= entrance_x and \ + center_x < entrance_x: + out_id_list.append(track_id) prev_center[track_id][0] = center_x prev_center[track_id][1] = center_y else: prev_center[track_id] = [center_x, center_y] - # Count totol number, number at a manual-setting interval + + if do_break_in_counting: + assert region_type in [ + 'custom' + ], "region_type should be 'custom' when do break_in counting." + assert len( + entrance + ) >= 4, "entrance should be at least 3 points and (w,h) of image when do break_in counting." + im_w, im_h = entrance[-1][:] + entrance = np.array(entrance[:-1]) + + frame_id, tlwhs, tscores, track_ids = result + for tlwh, score, track_id in zip(tlwhs, tscores, track_ids): + if track_id < 0: continue + if data_type == 'kitti': + frame_id -= 1 + x1, y1, w, h = tlwh + center_x = min(x1 + w / 2., im_w - 1) + center_down_y = min(y1 + h, im_h - 1) + + # counting objects in region of the first frame + if frame_id == 1: + if in_quadrangle([center_x, center_down_y], entrance, im_h, + im_w): + in_id_list.append(-1) + else: + prev_center[track_id] = [center_x, center_down_y] + else: + if track_id in prev_center: + if not in_quadrangle(prev_center[track_id], entrance, im_h, + im_w) and in_quadrangle( + [center_x, center_down_y], + entrance, im_h, im_w): + in_id_list.append(track_id) + prev_center[track_id] = [center_x, center_down_y] + else: + prev_center[track_id] = [center_x, center_down_y] + +# Count totol number, number at a manual-setting interval frame_id, tlwhs, tscores, track_ids = result for tlwh, score, track_id in zip(tlwhs, tscores, track_ids): if track_id < 0: continue @@ -268,6 +315,8 @@ def flow_statistic(result, if do_entrance_counting: info += ", In count: {}, Out count: {}".format( len(in_id_list), len(out_id_list)) + if do_break_in_counting: + info += ", Break_in count: {}".format(len(in_id_list)) if frame_id % video_fps == 0 and frame_id / video_fps % secs_interval == 0: info += ", Count during {} secs: {}".format(secs_interval, curr_interval_count) @@ -282,5 +331,15 @@ def flow_statistic(result, "in_id_list": in_id_list, "out_id_list": out_id_list, "prev_center": prev_center, - "records": records + "records": records, } + + +def in_quadrangle(point, entrance, im_h, im_w): + mask = np.zeros((im_h, im_w, 1), np.uint8) + cv2.fillPoly(mask, [entrance], 255) + p = tuple(map(int, point)) + if mask[p[1], p[0], :] > 0: + return True + else: + return False diff --git a/deploy/pptracking/python/mot/visualize.py b/deploy/pptracking/python/mot/visualize.py index bf39d6db8157175259b8940c5a74a8e500fe38f1..9a4cb5806c769fb8c97177cfd9b06a1163f78944 100644 --- a/deploy/pptracking/python/mot/visualize.py +++ b/deploy/pptracking/python/mot/visualize.py @@ -193,11 +193,14 @@ def plot_tracking_dict(image, fps=0., ids2names=[], do_entrance_counting=False, + do_break_in_counting=False, entrance=None, records=None, 
center_traj=None): im = np.ascontiguousarray(np.copy(image)) im_h, im_w = im.shape[:2] + if do_break_in_counting: + entrance = np.array(entrance[:-1]) # last pair is [im_w, im_h] text_scale = max(0.5, image.shape[1] / 3000.) text_thickness = 2 @@ -231,6 +234,30 @@ def plot_tracking_dict(image, text_scale, (0, 0, 255), thickness=text_thickness) + if num_classes == 1 and do_break_in_counting: + np_masks = np.zeros((im_h, im_w, 1), np.uint8) + cv2.fillPoly(np_masks, [entrance], 255) + + # Draw region mask + alpha = 0.3 + im = np.array(im).astype('float32') + mask = np_masks[:, :, 0] + color_mask = [0, 0, 255] + idx = np.nonzero(mask) + color_mask = np.array(color_mask) + im[idx[0], idx[1], :] *= 1.0 - alpha + im[idx[0], idx[1], :] += alpha * color_mask + im = np.array(im).astype('uint8') + + # find start location for break in counting data + start = records[-1].find('Break_in') + cv2.putText( + im, + records[-1][start:-1], (entrance[0][0] - 10, entrance[0][1] - 10), + cv2.FONT_ITALIC, + text_scale, (0, 0, 255), + thickness=text_thickness) + for cls_id in range(num_classes): tlwhs = tlwhs_dict[cls_id] obj_ids = obj_ids_dict[cls_id] @@ -262,7 +289,17 @@ def plot_tracking_dict(image, id_text = 'class{}_{}'.format(cls_id, id_text) _line_thickness = 1 if obj_id <= 0 else line_thickness - color = get_color(abs(obj_id)) + + in_region = False + if do_break_in_counting: + center_x = min(x1 + w / 2., im_w - 1) + center_down_y = min(y1 + h, im_h - 1) + if in_quadrangle([center_x, center_down_y], entrance, im_h, + im_w): + in_region = True + + color = get_color(abs(obj_id)) if in_region == False else (0, 0, + 255) cv2.rectangle( im, intbox[0:2], @@ -273,16 +310,26 @@ def plot_tracking_dict(image, im, id_text, (intbox[0], intbox[1] - 25), cv2.FONT_ITALIC, - text_scale, (0, 255, 255), + text_scale, + color, thickness=text_thickness) + if do_break_in_counting and in_region: + cv2.putText( + im, + 'Break in now.', (intbox[0], intbox[1] - 50), + cv2.FONT_ITALIC, + text_scale, (0, 0, 255), + thickness=text_thickness) + if scores is not None: text = 'score: {:.2f}'.format(float(scores[i])) cv2.putText( im, text, (intbox[0], intbox[1] - 6), cv2.FONT_ITALIC, - text_scale, (0, 255, 0), + text_scale, + color, thickness=text_thickness) if center_traj is not None: for traj in center_traj: @@ -292,3 +339,13 @@ def plot_tracking_dict(image, for point in traj[i]: cv2.circle(im, point, 3, (0, 0, 255), -1) return im + + +def in_quadrangle(point, entrance, im_h, im_w): + mask = np.zeros((im_h, im_w, 1), np.uint8) + cv2.fillPoly(mask, [entrance], 255) + p = tuple(map(int, point)) + if mask[p[1], p[0], :] > 0: + return True + else: + return False diff --git a/deploy/pptracking/python/mot_jde_infer.py b/deploy/pptracking/python/mot_jde_infer.py index afabf5f4b6a573cb8a97af757dc92dafb29a76b2..8809b2497fcabb43396509e702d09afcc8b1ee79 100644 --- a/deploy/pptracking/python/mot_jde_infer.py +++ b/deploy/pptracking/python/mot_jde_infer.py @@ -64,28 +64,39 @@ class JDE_Detector(Detector): do_entrance_counting(bool): Whether counting the numbers of identifiers entering or getting out from the entrance, default as False,only support single class counting in MOT. + do_break_in_counting(bool): Whether counting the numbers of identifiers break in + the area, default as False,only support single class counting in MOT, + and the video should be taken by a static camera. + region_type (str): Area type for entrance counting or break in counting, 'horizontal' + and 'vertical' used when do entrance counting. 
'custom' used when do break in counting. + Note that only support single-class MOT, and the video should be taken by a static camera. + region_polygon (list): Clockwise point coords (x0,y0,x1,y1...) of polygon of area when + do_break_in_counting. Note that only support single-class MOT and + the video should be taken by a static camera. """ - def __init__( - self, - model_dir, - tracker_config=None, - device='CPU', - run_mode='paddle', - batch_size=1, - trt_min_shape=1, - trt_max_shape=1088, - trt_opt_shape=608, - trt_calib_mode=False, - cpu_threads=1, - enable_mkldnn=False, - output_dir='output', - threshold=0.5, - save_images=False, - save_mot_txts=False, - draw_center_traj=False, - secs_interval=10, - do_entrance_counting=False, ): + def __init__(self, + model_dir, + tracker_config=None, + device='CPU', + run_mode='paddle', + batch_size=1, + trt_min_shape=1, + trt_max_shape=1088, + trt_opt_shape=608, + trt_calib_mode=False, + cpu_threads=1, + enable_mkldnn=False, + output_dir='output', + threshold=0.5, + save_images=False, + save_mot_txts=False, + draw_center_traj=False, + secs_interval=10, + do_entrance_counting=False, + do_break_in_counting=False, + region_type='horizontal', + region_polygon=[]): super(JDE_Detector, self).__init__( model_dir=model_dir, device=device, @@ -104,6 +115,13 @@ class JDE_Detector(Detector): self.draw_center_traj = draw_center_traj self.secs_interval = secs_interval self.do_entrance_counting = do_entrance_counting + self.do_break_in_counting = do_break_in_counting + self.region_type = region_type + self.region_polygon = region_polygon + if self.region_type == 'custom': + assert len( + self.region_polygon + ) > 6, 'region_type is custom, region_polygon should be at least 3 pairs of point coords.' assert batch_size == 1, "MOT model only supports batch_size=1." self.det_times = Timer(with_tracker=True) @@ -112,8 +130,8 @@ class JDE_Detector(Detector): # tracker config assert self.pred_config.tracker, "The exported JDE Detector model should have tracker." cfg = self.pred_config.tracker - min_box_area = cfg.get('min_box_area', 200) - vertical_ratio = cfg.get('vertical_ratio', 1.6) + min_box_area = cfg.get('min_box_area', 0.0) + vertical_ratio = cfg.get('vertical_ratio', 0.0) conf_thres = cfg.get('conf_thres', 0.0) tracked_thresh = cfg.get('tracked_thresh', 0.7) metric_type = cfg.get('metric_type', 'euclidean') @@ -164,7 +182,7 @@ class JDE_Detector(Detector): repeats (int): repeats number for prediction Returns: result (dict): include 'pred_dets': np.ndarray: shape:[N,6], N: number of box, - matix element:[x_min, y_min, x_max, y_max, score, class] + matix element:[class, score, x_min, y_min, x_max, y_max] FairMOT(JDE)'s result include 'pred_embs': np.ndarray: shape: [N, 128] ''' @@ -310,7 +328,24 @@ class JDE_Detector(Detector): out_id_list = list() prev_center = dict() records = list() - entrance = [0, height / 2., width, height / 2.] + if self.do_entrance_counting or self.do_break_in_counting: + if self.region_type == 'horizontal': + entrance = [0, height / 2., width, height / 2.] + elif self.region_type == 'vertical': + entrance = [width / 2, 0., width / 2, height] + elif self.region_type == 'custom': + entrance = [] + assert len( + self.region_polygon + ) % 2 == 0, "region_polygon should be pairs of coords points when do break_in counting." 
+ for i in range(0, len(self.region_polygon), 2): + entrance.append([ + self.region_polygon[i], self.region_polygon[i + 1] + ]) + entrance.append([width, height]) + else: + raise ValueError("region_type:{} is not supported.".format( + self.region_type)) video_fps = fps @@ -340,8 +375,9 @@ class JDE_Detector(Detector): online_ids[0]) statistic = flow_statistic( result, self.secs_interval, self.do_entrance_counting, - video_fps, entrance, id_set, interval_id_set, in_id_list, - out_id_list, prev_center, records, data_type, num_classes) + self.do_break_in_counting, self.region_type, video_fps, + entrance, id_set, interval_id_set, in_id_list, out_id_list, + prev_center, records, data_type, num_classes) records = statistic['records'] fps = 1. / timer.duration @@ -403,7 +439,10 @@ def main(): save_mot_txts=FLAGS.save_mot_txts, draw_center_traj=FLAGS.draw_center_traj, secs_interval=FLAGS.secs_interval, - do_entrance_counting=FLAGS.do_entrance_counting, ) + do_entrance_counting=FLAGS.do_entrance_counting, + do_break_in_counting=FLAGS.do_break_in_counting, + region_type=FLAGS.region_type, + region_polygon=FLAGS.region_polygon) # predict from video file or camera video stream if FLAGS.video_file is not None or FLAGS.camera_id != -1: diff --git a/deploy/pptracking/python/mot_sde_infer.py b/deploy/pptracking/python/mot_sde_infer.py index 62907ba240a34facc1264ebd3b1092c66dcdef99..b62905e2aecfc4aaacb8bfc452198b50bf94ef88 100644 --- a/deploy/pptracking/python/mot_sde_infer.py +++ b/deploy/pptracking/python/mot_sde_infer.py @@ -32,7 +32,7 @@ sys.path.insert(0, parent_path) from det_infer import Detector, get_test_images, print_arguments, bench_log, PredictConfig, load_predictor from mot_utils import argsparser, Timer, get_current_memory_mb, video2frames, _is_valid_video -from mot.tracker import JDETracker, DeepSORTTracker +from mot.tracker import JDETracker, DeepSORTTracker, OCSORTTracker from mot.utils import MOTTimer, write_mot_results, get_crops, clip_box, flow_statistic from mot.visualize import plot_tracking, plot_tracking_dict @@ -64,7 +64,16 @@ class SDE_Detector(Detector): secs_interval (int): The seconds interval to count after tracking, default as 10 do_entrance_counting(bool): Whether counting the numbers of identifiers entering or getting out from the entrance, default as False,only support single class - counting in MOT. + counting in MOT, and the video should be taken by a static camera. + do_break_in_counting(bool): Whether counting the numbers of identifiers break in + the area, default as False,only support single class counting in MOT, + and the video should be taken by a static camera. + region_type (str): Area type for entrance counting or break in counting, 'horizontal' + and 'vertical' used when do entrance counting. 'custom' used when do break in counting. + Note that only support single-class MOT, and the video should be taken by a static camera. + region_polygon (list): Clockwise point coords (x0,y0,x1,y1...) of polygon of area when + do_break_in_counting. Note that only support single-class MOT and + the video should be taken by a static camera. 
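To make the `entrance` layout concrete, here is what the `custom` branch above produces from a flat coordinate list (sample numbers only):

```python
width, height = 1920, 1080  # frame size from the video stream
region_polygon = [780, 330, 1150, 330, 1150, 570, 780, 570]

entrance = [[region_polygon[i], region_polygon[i + 1]]
            for i in range(0, len(region_polygon), 2)]
entrance.append([width, height])  # trailing pair carries the frame size
# plot_tracking_dict() drops that trailing pair again (entrance[:-1])
# before filling the polygon for drawing and hit testing.
print(entrance)  # [[780, 330], [1150, 330], [1150, 570], [780, 570], [1920, 1080]]
```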
reid_model_dir (str): reid model dir, default None for ByteTrack, but set for DeepSORT mtmct_dir (str): MTMCT dir, default None, set for doing MTMCT """ @@ -88,6 +97,9 @@ class SDE_Detector(Detector): draw_center_traj=False, secs_interval=10, do_entrance_counting=False, + do_break_in_counting=False, + region_type='horizontal', + region_polygon=[], reid_model_dir=None, mtmct_dir=None): super(SDE_Detector, self).__init__( @@ -108,6 +120,13 @@ class SDE_Detector(Detector): self.draw_center_traj = draw_center_traj self.secs_interval = secs_interval self.do_entrance_counting = do_entrance_counting + self.do_break_in_counting = do_break_in_counting + self.region_type = region_type + self.region_polygon = region_polygon + if self.region_type == 'custom': + assert len( + self.region_polygon + ) > 6, 'region_type is custom, region_polygon should be at least 3 pairs of point coords.' assert batch_size == 1, "MOT model only supports batch_size=1." self.det_times = Timer(with_tracker=True) @@ -142,8 +161,10 @@ class SDE_Detector(Detector): # tracker config self.use_deepsort_tracker = True if tracker_cfg[ 'type'] == 'DeepSORTTracker' else False + self.use_ocsort_tracker = True if tracker_cfg[ + 'type'] == 'OCSORTTracker' else False + if self.use_deepsort_tracker: - # use DeepSORTTracker if self.reid_pred_config is not None and hasattr( self.reid_pred_config, 'tracker'): cfg = self.reid_pred_config.tracker @@ -161,12 +182,34 @@ class SDE_Detector(Detector): matching_threshold=matching_threshold, min_box_area=min_box_area, vertical_ratio=vertical_ratio, ) + + elif self.use_ocsort_tracker: + det_thresh = cfg.get('det_thresh', 0.4) + max_age = cfg.get('max_age', 30) + min_hits = cfg.get('min_hits', 3) + iou_threshold = cfg.get('iou_threshold', 0.3) + delta_t = cfg.get('delta_t', 3) + inertia = cfg.get('inertia', 0.2) + min_box_area = cfg.get('min_box_area', 0) + vertical_ratio = cfg.get('vertical_ratio', 0) + use_byte = cfg.get('use_byte', False) + + self.tracker = OCSORTTracker( + det_thresh=det_thresh, + max_age=max_age, + min_hits=min_hits, + iou_threshold=iou_threshold, + delta_t=delta_t, + inertia=inertia, + min_box_area=min_box_area, + vertical_ratio=vertical_ratio, + use_byte=use_byte) else: # use ByteTracker use_byte = cfg.get('use_byte', False) det_thresh = cfg.get('det_thresh', 0.3) - min_box_area = cfg.get('min_box_area', 200) - vertical_ratio = cfg.get('vertical_ratio', 1.6) + min_box_area = cfg.get('min_box_area', 0) + vertical_ratio = cfg.get('vertical_ratio', 0) match_thres = cfg.get('match_thres', 0.9) conf_thres = cfg.get('conf_thres', 0.6) low_conf_thres = cfg.get('low_conf_thres', 0.1) @@ -186,7 +229,9 @@ class SDE_Detector(Detector): def postprocess(self, inputs, result): # postprocess output of predictor - np_boxes_num = result['boxes_num'] + keep_idx = result['boxes'][:, 1] > self.threshold + result['boxes'] = result['boxes'][keep_idx] + np_boxes_num = [len(result['boxes'])] if np_boxes_num[0] <= 0: print('[WARNNING] No object detected.') result = {'boxes': np.zeros([0, 6]), 'boxes_num': [0]} @@ -194,7 +239,7 @@ class SDE_Detector(Detector): return result def reidprocess(self, det_results, repeats=1): - pred_dets = det_results['boxes'] + pred_dets = det_results['boxes'] # cls_id, score, x0, y0, x1, y1 pred_xyxys = pred_dets[:, 2:6] ori_image = det_results['ori_image'] @@ -234,7 +279,7 @@ class SDE_Detector(Detector): return det_results def tracking(self, det_results): - pred_dets = det_results['boxes'] + pred_dets = det_results['boxes'] # cls_id, score, x0, y0, x1, y1 pred_embs = 
det_results.get('embeddings', None) if self.use_deepsort_tracker: @@ -281,6 +326,32 @@ class SDE_Detector(Detector): feat_data['feat'] = _feat tracking_outs['feat_data'].update({_imgname: feat_data}) return tracking_outs + + elif self.use_ocsort_tracker: + # use OCSORTTracker, only support singe class + online_targets = self.tracker.update(pred_dets, pred_embs) + online_tlwhs = defaultdict(list) + online_scores = defaultdict(list) + online_ids = defaultdict(list) + for t in online_targets: + tlwh = [t[0], t[1], t[2] - t[0], t[3] - t[1]] + tscore = float(t[4]) + tid = int(t[5]) + if tlwh[2] * tlwh[3] <= self.tracker.min_box_area: continue + if self.tracker.vertical_ratio > 0 and tlwh[2] / tlwh[ + 3] > self.tracker.vertical_ratio: + continue + if tlwh[2] * tlwh[3] > 0: + online_tlwhs[0].append(tlwh) + online_ids[0].append(tid) + online_scores[0].append(tscore) + tracking_outs = { + 'online_tlwhs': online_tlwhs, + 'online_scores': online_scores, + 'online_ids': online_ids, + } + return tracking_outs + else: # use ByteTracker, support multiple class online_tlwhs = defaultdict(list) @@ -441,14 +512,15 @@ class SDE_Detector(Detector): online_ids, online_scores, frame_id=frame_id, - ids2names=[]) + ids2names=ids2names) else: im = plot_tracking( frame, online_tlwhs, online_ids, online_scores, - frame_id=frame_id) + frame_id=frame_id, + ids2names=ids2names) save_dir = os.path.join(self.output_dir, seq_name) if not os.path.exists(save_dir): os.makedirs(save_dir) @@ -500,7 +572,25 @@ class SDE_Detector(Detector): out_id_list = list() prev_center = dict() records = list() - entrance = [0, height / 2., width, height / 2.] + if self.do_entrance_counting or self.do_break_in_counting: + if self.region_type == 'horizontal': + entrance = [0, height / 2., width, height / 2.] + elif self.region_type == 'vertical': + entrance = [width / 2, 0., width / 2, height] + elif self.region_type == 'custom': + entrance = [] + assert len( + self.region_polygon + ) % 2 == 0, "region_polygon should be pairs of coords points when do break_in counting." + for i in range(0, len(self.region_polygon), 2): + entrance.append([ + self.region_polygon[i], self.region_polygon[i + 1] + ]) + entrance.append([width, height]) + else: + raise ValueError("region_type:{} is not supported.".format( + self.region_type)) + video_fps = fps while (1): @@ -520,19 +610,20 @@ class SDE_Detector(Detector): # bs=1 in MOT model online_tlwhs, online_scores, online_ids = mot_results[0] - # NOTE: just implement flow statistic for one class - if num_classes == 1: + # flow statistic for one class, and only for bytetracker + if num_classes == 1 and not self.use_deepsort_tracker and not self.use_ocsort_tracker: result = (frame_id + 1, online_tlwhs[0], online_scores[0], online_ids[0]) statistic = flow_statistic( result, self.secs_interval, self.do_entrance_counting, - video_fps, entrance, id_set, interval_id_set, in_id_list, - out_id_list, prev_center, records, data_type, num_classes) + self.do_break_in_counting, self.region_type, video_fps, + entrance, id_set, interval_id_set, in_id_list, out_id_list, + prev_center, records, data_type, num_classes) records = statistic['records'] fps = 1. 
/ timer.duration - if self.use_deepsort_tracker: - # use DeepSORTTracker, only support singe class + if self.use_deepsort_tracker or self.use_ocsort_tracker: + # use DeepSORTTracker or OCSORTTracker, only supports a single class results[0].append( (frame_id + 1, online_tlwhs, online_scores, online_ids)) im = plot_tracking( @@ -542,6 +633,7 @@ class SDE_Detector(Detector): online_scores, frame_id=frame_id, fps=fps, + ids2names=ids2names, do_entrance_counting=self.do_entrance_counting, entrance=entrance) else: @@ -712,6 +804,9 @@ def main(): draw_center_traj=FLAGS.draw_center_traj, secs_interval=FLAGS.secs_interval, do_entrance_counting=FLAGS.do_entrance_counting, + do_break_in_counting=FLAGS.do_break_in_counting, + region_type=FLAGS.region_type, + region_polygon=FLAGS.region_polygon, reid_model_dir=FLAGS.reid_model_dir, mtmct_dir=FLAGS.mtmct_dir, ) diff --git a/deploy/pptracking/python/mot_utils.py b/deploy/pptracking/python/mot_utils.py index 3c2d31c89115b656f54cc6579516c873ad0698cc..0cf9ab3198d3e6576efb2ba7eb9a8369187d4cda 100644 --- a/deploy/pptracking/python/mot_utils.py +++ b/deploy/pptracking/python/mot_utils.py @@ -141,8 +141,30 @@ def argsparser(): "--do_entrance_counting", action='store_true', help="Whether counting the numbers of identifiers entering " - "or getting out from the entrance. Note that only support one-class" - "counting, multi-class counting is coming soon.") + "or getting out from the entrance. Note that only single-class MOT is supported." + ) + parser.add_argument( + "--do_break_in_counting", + action='store_true', + help="Whether to count the number of identifiers breaking into " + "the area. Note that only single-class MOT is supported and " + "the video should be taken by a static camera.") + parser.add_argument( + "--region_type", + type=str, + default='horizontal', + help="Area type for entrance counting or break-in counting. 'horizontal' and " + "'vertical' are used for entrance counting. 'custom' is used for break-in counting. " + "Note that only single-class MOT is supported, and the video should be taken by a static camera." + ) + parser.add_argument( + '--region_polygon', + nargs='+', + type=int, + default=[], + help="Clockwise point coords (x0,y0,x1,y1...) of the polygon of the area for " + "do_break_in_counting. Note that only single-class MOT is supported and " + "the video should be taken by a static camera.") parser.add_argument( "--secs_interval", type=int, diff --git a/deploy/pptracking/python/preprocess.py b/deploy/pptracking/python/preprocess.py index 2df5df9c3c3dc0dcb90b0224bf0d8e022a47903e..427479c814d6b3250921ead6b7b2a07ea6352173 100644 --- a/deploy/pptracking/python/preprocess.py +++ b/deploy/pptracking/python/preprocess.py @@ -245,6 +245,34 @@ class LetterBoxResize(object): return im, im_info +class Pad(object): + def __init__(self, size, fill_value=[114.0, 114.0, 114.0]): + """ + Pad image to a specified size. 
+ Args: + size (list[int]): image target size + fill_value (list[float]): rgb value of pad area, default (114.0, 114.0, 114.0) + """ + super(Pad, self).__init__() + if isinstance(size, int): + size = [size, size] + self.size = size + self.fill_value = fill_value + + def __call__(self, im, im_info): + im_h, im_w = im.shape[:2] + h, w = self.size + if h == im_h and w == im_w: + im = im.astype(np.float32) + return im, im_info + + canvas = np.ones((h, w, 3), dtype=np.float32) + canvas *= np.array(self.fill_value, dtype=np.float32) + canvas[0:im_h, 0:im_w, :] = im.astype(np.float32) + im = canvas + return im, im_info + + def preprocess(im, preprocess_ops): # process image by preprocess_ops im_info = { diff --git a/deploy/pptracking/python/tracker_config.yml b/deploy/pptracking/python/tracker_config.yml index 5182da93e3f19bd49421e09d1ec01be0c0f11643..464337702a4a6b5d8d32fc87eeb8df9bcc01c57a 100644 --- a/deploy/pptracking/python/tracker_config.yml +++ b/deploy/pptracking/python/tracker_config.yml @@ -2,7 +2,7 @@ # The tracker of MOT JDE Detector (such as FairMOT) is exported together with the model. # Here 'min_box_area' and 'vertical_ratio' are set for pedestrian, you can modify for other objects tracking. -type: JDETracker # 'JDETracker' or 'DeepSORTTracker' +type: OCSORTTracker # choose one tracker in ['JDETracker', 'OCSORTTracker', 'DeepSORTTracker'] # BYTETracker JDETracker: @@ -11,8 +11,21 @@ JDETracker: conf_thres: 0.6 low_conf_thres: 0.1 match_thres: 0.9 - min_box_area: 100 - vertical_ratio: 1.6 # for pedestrian + min_box_area: 0 + vertical_ratio: 0 # 1.6 for pedestrian + + +OCSORTTracker: + det_thresh: 0.4 + max_age: 30 + min_hits: 3 + iou_threshold: 0.3 + delta_t: 3 + inertia: 0.2 + min_box_area: 0 + vertical_ratio: 0 + use_byte: False + DeepSORTTracker: input_size: [64, 192] diff --git a/deploy/python/README.md b/deploy/python/README.md index 8b672c84da4467ba2802e5e1b39aecba6779516d..0c7fc24c8510fb7f5f96d30269d4d2371cf00ed6 100644 --- a/deploy/python/README.md +++ b/deploy/python/README.md @@ -3,27 +3,76 @@ 在PaddlePaddle中预测引擎和训练引擎底层有着不同的优化方法, 预测引擎使用了AnalysisPredictor,专门针对推理进行了优化,是基于[C++预测库](https://www.paddlepaddle.org.cn/documentation/docs/zh/advanced_guide/inference_deployment/inference/native_infer.html)的Python接口,该引擎可以对模型进行多项图优化,减少不必要的内存拷贝。如果用户在部署已训练模型的过程中对性能有较高的要求,我们提供了独立于PaddleDetection的预测脚本,方便用户直接集成部署。 -主要包含两个步骤: - +Python端预测部署主要包含两个步骤: - 导出预测模型 - 基于Python进行预测 ## 1. 
导出预测模型 -PaddleDetection在训练过程包括网络的前向和优化器相关参数,而在部署过程中,我们只需要前向参数,具体参考:[导出模型](https://github.com/PaddlePaddle/PaddleDetection/blob/develop/deploy/EXPORT_MODEL.md) +PaddleDetection在训练过程包括网络的前向和优化器相关参数,而在部署过程中,我们只需要前向参数,具体参考:[导出模型](../EXPORT_MODEL.md),例如 + +```bash +# 导出YOLOv3检测模型 +python tools/export_model.py -c configs/yolov3/yolov3_darknet53_270e_coco.yml --output_dir=./inference_model \ + -o weights=https://paddledet.bj.bcebos.com/models/yolov3_darknet53_270e_coco.pdparams + +# 导出HigherHRNet(bottom-up)关键点检测模型 +python tools/export_model.py -c configs/keypoint/higherhrnet/higherhrnet_hrnet_w32_512.yml -o weights=https://paddledet.bj.bcebos.com/models/keypoint/higherhrnet_hrnet_w32_512.pdparams + +# 导出HRNet(top-down)关键点检测模型 +python tools/export_model.py -c configs/keypoint/hrnet/hrnet_w32_384x288.yml -o weights=https://paddledet.bj.bcebos.com/models/keypoint/hrnet_w32_384x288.pdparams + +# 导出FairMOT多目标跟踪模型 +python tools/export_model.py -c configs/mot/fairmot/fairmot_dla34_30e_1088x608.yml -o weights=https://paddledet.bj.bcebos.com/models/mot/fairmot_dla34_30e_1088x608.pdparams + +# 导出ByteTrack多目标跟踪模型(相当于只导出检测器) +python tools/export_model.py -c configs/mot/bytetrack/detector/ppyoloe_crn_l_36e_640x640_mot17half.yml -o weights=https://paddledet.bj.bcebos.com/models/mot/ppyoloe_crn_l_36e_640x640_mot17half.pdparams +``` 导出后目录下,包括`infer_cfg.yml`, `model.pdiparams`, `model.pdiparams.info`, `model.pdmodel`四个文件。 -## 2. 基于Python的预测 +## 2. 基于Python的预测 +### 2.1 通用检测 +在终端输入以下命令进行预测: +```bash +python deploy/python/infer.py --model_dir=./output_inference/yolov3_darknet53_270e_coco --image_file=./demo/000000014439.jpg --device=GPU +``` +### 2.2 关键点检测 在终端输入以下命令进行预测: +```bash +# keypoint top-down(HRNet)/bottom-up(HigherHRNet)单独推理,该模式下top-down模型HRNet只支持单人截图预测 +python deploy/python/keypoint_infer.py --model_dir=output_inference/hrnet_w32_384x288/ --image_file=./demo/hrnet_demo.jpg --device=GPU --threshold=0.5 +python deploy/python/keypoint_infer.py --model_dir=output_inference/higherhrnet_hrnet_w32_512/ --image_file=./demo/000000014439_640x640.jpg --device=GPU --threshold=0.5 + +# detector 检测 + keypoint top-down模型联合部署(联合推理只支持top-down关键点模型) +python deploy/python/det_keypoint_unite_infer.py --det_model_dir=output_inference/yolov3_darknet53_270e_coco/ --keypoint_model_dir=output_inference/hrnet_w32_384x288/ --video_file={your video name}.mp4 --device=GPU +``` +**注意:** + - 关键点检测模型导出和预测具体可参照[keypoint](../../configs/keypoint/README.md),可分别在各个模型的文档中查找具体用法; + - 此目录下的关键点检测部署为基础前向功能,更多关键点检测功能可使用PP-Human项目,参照[pipeline](../pipeline/README.md); + +### 2.3 多目标跟踪 +在终端输入以下命令进行预测: ```bash -python deploy/python/infer.py --model_dir=./output_inference/yolov3_mobilenet_v1_roadsign --image_file=./demo/road554.png --device=GPU +# FairMOT跟踪 +python deploy/python/mot_jde_infer.py --model_dir=output_inference/fairmot_dla34_30e_1088x608 --video_file={your video name}.mp4 --device=GPU + +# ByteTrack跟踪 +python deploy/python/mot_sde_infer.py --model_dir=output_inference/ppyoloe_crn_l_36e_640x640_mot17half/ --tracker_config=deploy/python/tracker_config.yml --video_file={your video name}.mp4 --device=GPU --scaled=True + +# FairMOT多目标跟踪联合HRNet关键点检测(联合推理只支持top-down关键点模型) +python deploy/python/mot_keypoint_unite_infer.py --mot_model_dir=output_inference/fairmot_dla34_30e_1088x608/ --keypoint_model_dir=output_inference/hrnet_w32_384x288/ --video_file={your video name}.mp4 --device=GPU ``` +**注意:** + - 多目标跟踪模型导出和预测具体可参照[mot]](../../configs/mot/README.md),可分别在各个模型的文档中查找具体用法; + - 
此目录下的跟踪部署为基础前向功能以及联合关键点部署,更多跟踪功能可使用PP-Human项目,参照[pipeline](../pipeline/README.md),或PP-Tracking项目(绘制轨迹、出入口流量计数),参照[pptracking](../pptracking/README.md); + + 参数说明如下: | 参数 | 是否必须|含义 | @@ -42,6 +91,8 @@ python deploy/python/infer.py --model_dir=./output_inference/yolov3_mobilenet_v1 | --enable_mkldnn | Option | CPU预测中是否开启MKLDNN加速,默认为False | | --cpu_threads | Option| 设置cpu线程数,默认为1 | | --trt_calib_mode | Option| TensorRT是否使用校准功能,默认为False。使用TensorRT的int8功能时,需设置为True,使用PaddleSlim量化后的模型时需要设置为False | +| --save_results | Option| 是否在文件夹下将图片的预测结果以JSON的形式保存 | + 说明: diff --git a/deploy/python/det_keypoint_unite_infer.py b/deploy/python/det_keypoint_unite_infer.py index a82c2c58cefb0926958e911c4f2f8ee2fc5bfd75..7b57714d18c7a34bc8b740ec39ed7443da9a10d6 100644 --- a/deploy/python/det_keypoint_unite_infer.py +++ b/deploy/python/det_keypoint_unite_infer.py @@ -36,10 +36,16 @@ KEYPOINT_SUPPORT_MODELS = { def predict_with_given_det(image, det_res, keypoint_detector, - keypoint_batch_size, det_threshold, - keypoint_threshold, run_benchmark): + keypoint_batch_size, run_benchmark): + keypoint_res = {} + rec_images, records, det_rects = keypoint_detector.get_person_from_rect( - image, det_res, det_threshold) + image, det_res) + + if len(det_rects) == 0: + keypoint_res['keypoint'] = [[], []] + return keypoint_res + keypoint_vector = [] score_vector = [] @@ -48,7 +54,6 @@ def predict_with_given_det(image, det_res, keypoint_detector, rec_images, run_benchmark, repeats=10, visual=False) keypoint_vector, score_vector = translate_to_ori_images(keypoint_results, np.array(records)) - keypoint_res = {} keypoint_res['keypoint'] = [ keypoint_vector.tolist(), score_vector.tolist() ] if len(keypoint_vector) > 0 else [[], []] @@ -79,19 +84,21 @@ def topdown_unite_predict(detector, detector.gpu_util += gu else: results = detector.predict_image([image], visual=False) + results = detector.filter_box(results, FLAGS.det_threshold) + if results['boxes_num'] > 0: + keypoint_res = predict_with_given_det( + image, results, topdown_keypoint_detector, keypoint_batch_size, + FLAGS.run_benchmark) - if results['boxes_num'] == 0: - continue - - keypoint_res = predict_with_given_det( - image, results, topdown_keypoint_detector, keypoint_batch_size, - FLAGS.det_threshold, FLAGS.keypoint_threshold, FLAGS.run_benchmark) - - if save_res: - store_res.append([ - i, keypoint_res['bbox'], - [keypoint_res['keypoint'][0], keypoint_res['keypoint'][1]] - ]) + if save_res: + save_name = img_file if isinstance(img_file, str) else i + store_res.append([ + save_name, keypoint_res['bbox'], + [keypoint_res['keypoint'][0], keypoint_res['keypoint'][1]] + ]) + else: + results["keypoint"] = [[], []] + keypoint_res = results if FLAGS.run_benchmark: cm, gm, gu = get_current_memory_mb() topdown_keypoint_detector.cpu_mem += cm @@ -138,10 +145,13 @@ def topdown_unite_predict_video(detector, if not os.path.exists(FLAGS.output_dir): os.makedirs(FLAGS.output_dir) out_path = os.path.join(FLAGS.output_dir, video_name) - fourcc = cv2.VideoWriter_fourcc(*'mp4v') + fourcc = cv2.VideoWriter_fourcc(* 'mp4v') writer = cv2.VideoWriter(out_path, fourcc, fps, (width, height)) index = 0 store_res = [] + keypoint_smoothing = KeypointSmoothing( + width, height, filter_type=FLAGS.filter_type, beta=0.05) + while (1): ret, frame = capture.read() if not ret: @@ -152,16 +162,28 @@ def topdown_unite_predict_video(detector, frame2 = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB) results = detector.predict_image([frame2], visual=False) + results = detector.filter_box(results, FLAGS.det_threshold) + if 
results['boxes_num'] == 0: + writer.write(frame) + continue keypoint_res = predict_with_given_det( frame2, results, topdown_keypoint_detector, keypoint_batch_size, - FLAGS.det_threshold, FLAGS.keypoint_threshold, FLAGS.run_benchmark) + FLAGS.run_benchmark) + + if FLAGS.smooth and len(keypoint_res['keypoint'][0]) == 1: + current_keypoints = np.array(keypoint_res['keypoint'][0][0]) + smooth_keypoints = keypoint_smoothing.smooth_process( + current_keypoints) + + keypoint_res['keypoint'][0][0] = smooth_keypoints.tolist() im = visualize_pose( frame, keypoint_res, visual_thresh=FLAGS.keypoint_threshold, returnimg=True) + if save_res: store_res.append([ index, keypoint_res['bbox'], @@ -187,6 +209,93 @@ def topdown_unite_predict_video(detector, json.dump(store_res, wf, indent=4) +class KeypointSmoothing(object): + # The following code are modified from: + # https://github.com/jaantollander/OneEuroFilter + + def __init__(self, + width, + height, + filter_type, + alpha=0.5, + fc_d=0.1, + fc_min=0.1, + beta=0.1, + thres_mult=0.3): + super(KeypointSmoothing, self).__init__() + self.image_width = width + self.image_height = height + self.threshold = np.array([ + 0.005, 0.005, 0.005, 0.005, 0.005, 0.01, 0.01, 0.01, 0.01, 0.01, + 0.01, 0.01, 0.01, 0.01, 0.01, 0.01, 0.01 + ]) * thres_mult + self.filter_type = filter_type + self.alpha = alpha + self.dx_prev_hat = None + self.x_prev_hat = None + self.fc_d = fc_d + self.fc_min = fc_min + self.beta = beta + + if self.filter_type == 'OneEuro': + self.smooth_func = self.one_euro_filter + elif self.filter_type == 'EMA': + self.smooth_func = self.ema_filter + else: + raise ValueError('filter type must be one_euro or ema') + + def smooth_process(self, current_keypoints): + if self.x_prev_hat is None: + self.x_prev_hat = current_keypoints[:, :2] + self.dx_prev_hat = np.zeros(current_keypoints[:, :2].shape) + return current_keypoints + else: + result = current_keypoints + num_keypoints = len(current_keypoints) + for i in range(num_keypoints): + result[i, :2] = self.smooth(current_keypoints[i, :2], + self.threshold[i], i) + return result + + def smooth(self, current_keypoint, threshold, index): + distance = np.sqrt( + np.square((current_keypoint[0] - self.x_prev_hat[index][0]) / + self.image_width) + np.square((current_keypoint[ + 1] - self.x_prev_hat[index][1]) / self.image_height)) + if distance < threshold: + result = self.x_prev_hat[index] + else: + result = self.smooth_func(current_keypoint, self.x_prev_hat[index], + index) + + return result + + def one_euro_filter(self, x_cur, x_pre, index): + te = 1 + self.alpha = self.smoothing_factor(te, self.fc_d) + dx_cur = (x_cur - x_pre) / te + dx_cur_hat = self.exponential_smoothing(dx_cur, self.dx_prev_hat[index]) + + fc = self.fc_min + self.beta * np.abs(dx_cur_hat) + self.alpha = self.smoothing_factor(te, fc) + x_cur_hat = self.exponential_smoothing(x_cur, x_pre) + self.dx_prev_hat[index] = dx_cur_hat + self.x_prev_hat[index] = x_cur_hat + return x_cur_hat + + def ema_filter(self, x_cur, x_pre, index): + x_cur_hat = self.exponential_smoothing(x_cur, x_pre) + self.x_prev_hat[index] = x_cur_hat + return x_cur_hat + + def smoothing_factor(self, te, fc): + r = 2 * math.pi * fc * te + return r / (r + 1) + + def exponential_smoothing(self, x_cur, x_pre, index=0): + return self.alpha * x_cur + (1 - self.alpha) * x_pre + + def main(): deploy_file = os.path.join(FLAGS.det_model_dir, 'infer_cfg.yml') with open(deploy_file) as f: diff --git a/deploy/python/det_keypoint_unite_utils.py b/deploy/python/det_keypoint_unite_utils.py 
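A brief usage sketch of the `KeypointSmoothing` class added above, assuming the module is importable; the random 17-joint array stands in for one person's COCO keypoints. Per-joint motion below the scaled threshold is frozen to the previous position, larger motion goes through the selected filter (`topdown_unite_predict_video` uses `filter_type=FLAGS.filter_type`, default `OneEuro`, with `beta=0.05`; `EMA` is used here for brevity):

```python
import numpy as np
from det_keypoint_unite_infer import KeypointSmoothing  # class defined above

rng = np.random.default_rng(0)
# 17 joints of (x, y, conf); values are random stand-ins for real detections.
kpts = np.concatenate([rng.uniform(0, 480, (17, 2)), np.ones((17, 1))], axis=1)

smoother = KeypointSmoothing(640, 480, filter_type='EMA')
first = smoother.smooth_process(kpts)         # first frame passes through unchanged
second = smoother.smooth_process(kpts + 5.0)  # a 5 px shift exceeds the dead band
```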
index 26344628a3e10457a394f351fc64f7049a4245bb..7de1295128d9151cf55a7b1e427d6ee946db8bc4 100644 --- a/deploy/python/det_keypoint_unite_utils.py +++ b/deploy/python/det_keypoint_unite_utils.py @@ -126,4 +126,16 @@ def argsparser(): "3) rects: list of rect [xmin, ymin, xmax, ymax]" "4) keypoints: 17(joint numbers)*[x, y, conf], total 51 data in list" "5) scores: mean of all joint conf")) + parser.add_argument( + '--smooth', + type=ast.literal_eval, + default=False, + help='smoothing keypoints for each frame, new incoming keypoints will be more stable.' + ) + parser.add_argument( + '--filter_type', + type=str, + default='OneEuro', + help='when set --smooth True, choose filter type you want to use, it can be [OneEuro] or [EMA].' + ) return parser diff --git a/deploy/python/infer.py b/deploy/python/infer.py index 84c643935f3d3b20acd910b0fa7412b46e7d1b72..a2199a2be62f04af5e1e940704a1dce426596f46 100644 --- a/deploy/python/infer.py +++ b/deploy/python/infer.py @@ -15,6 +15,8 @@ import os import yaml import glob +import json +from pathlib import Path from functools import reduce import cv2 @@ -31,7 +33,7 @@ sys.path.insert(0, parent_path) from benchmark_utils import PaddleInferBenchmark from picodet_postprocess import PicoDetPostProcess -from preprocess import preprocess, Resize, NormalizeImage, Permute, PadStride, LetterBoxResize, WarpAffine, decode_image +from preprocess import preprocess, Resize, NormalizeImage, Permute, PadStride, LetterBoxResize, WarpAffine, Pad, decode_image from keypoint_preprocess import EvalAffine, TopDownEvalAffine, expand_crop from visualize import visualize_box_mask from utils import argsparser, Timer, get_current_memory_mb @@ -39,8 +41,8 @@ from utils import argsparser, Timer, get_current_memory_mb # Global dictionary SUPPORT_MODELS = { 'YOLO', 'RCNN', 'SSD', 'Face', 'FCOS', 'SOLOv2', 'TTFNet', 'S2ANet', 'JDE', - 'FairMOT', 'DeepSORT', 'GFL', 'PicoDet', 'CenterNet', 'TOOD', - 'StrongBaseline', 'STGCN' + 'FairMOT', 'DeepSORT', 'GFL', 'PicoDet', 'CenterNet', 'TOOD', 'RetinaNet', + 'StrongBaseline', 'STGCN', 'YOLOX', 'PPHGNet', 'PPLCNet' } @@ -140,14 +142,17 @@ class Detector(object): input_names = self.predictor.get_input_names() for i in range(len(input_names)): input_tensor = self.predictor.get_input_handle(input_names[i]) - input_tensor.copy_from_cpu(inputs[input_names[i]]) + if input_names[i] == 'x': + input_tensor.copy_from_cpu(inputs['image']) + else: + input_tensor.copy_from_cpu(inputs[input_names[i]]) return inputs def postprocess(self, inputs, result): # postprocess output of predictor np_boxes_num = result['boxes_num'] - if np_boxes_num[0] <= 0: + if sum(np_boxes_num) <= 0: print('[WARNNING] No object detected.') result = {'boxes': np.zeros([0, 6]), 'boxes_num': [0]} result = {k: v for k, v in result.items() if v is not None} @@ -206,7 +211,8 @@ class Detector(object): for k, v in res.items(): results[k].append(v) for k, v in results.items(): - results[k] = np.concatenate(v) + if k not in ['masks', 'segm']: + results[k] = np.concatenate(v) return results def get_timer(self): @@ -216,7 +222,8 @@ class Detector(object): image_list, run_benchmark=False, repeats=1, - visual=True): + visual=True, + save_file=None): batch_loop_cnt = math.ceil(float(len(image_list)) / self.batch_size) results = [] for i in range(batch_loop_cnt): @@ -231,7 +238,7 @@ class Detector(object): self.det_times.preprocess_time_s.end() # model prediction - result = self.predict(repeats=repeats) # warmup + result = self.predict(repeats=50) # warmup self.det_times.inference_time_s.start() 
result = self.predict(repeats=repeats) self.det_times.inference_time_s.end(repeats=repeats) @@ -276,6 +283,10 @@ class Detector(object): if visual: print('Test iter {}'.format(i)) + if save_file is not None: + Path(self.output_dir).mkdir(exist_ok=True) + self.format_coco_results(image_list, results, save_file=save_file) + results = self.merge_batch_result(results) return results @@ -305,7 +316,7 @@ class Detector(object): break print('detect frame: %d' % (index)) index += 1 - results = self.predict_image([frame], visual=False) + results = self.predict_image([frame[:, :, ::-1]], visual=False) im = visualize_box_mask( frame, @@ -320,6 +331,68 @@ class Detector(object): break writer.release() + @staticmethod + def format_coco_results(image_list, results, save_file=None): + coco_results = [] + image_id = 0 + + for result in results: + start_idx = 0 + for box_num in result['boxes_num']: + idx_slice = slice(start_idx, start_idx + box_num) + start_idx += box_num + + image_file = image_list[image_id] + image_id += 1 + + if 'boxes' in result: + boxes = result['boxes'][idx_slice, :] + per_result = [ + { + 'image_file': image_file, + 'bbox': + [box[2], box[3], box[4] - box[2], + box[5] - box[3]], # xyxy -> xywh + 'score': box[1], + 'category_id': int(box[0]), + } for k, box in enumerate(boxes.tolist()) + ] + + elif 'segm' in result: + import pycocotools.mask as mask_util + + scores = result['score'][idx_slice].tolist() + category_ids = result['label'][idx_slice].tolist() + segms = result['segm'][idx_slice, :] + rles = [ + mask_util.encode( + np.array( + mask[:, :, np.newaxis], + dtype=np.uint8, + order='F'))[0] for mask in segms + ] + for rle in rles: + rle['counts'] = rle['counts'].decode('utf-8') + + per_result = [{ + 'image_file': image_file, + 'segmentation': rle, + 'score': scores[k], + 'category_id': category_ids[k], + } for k, rle in enumerate(rles)] + + else: + raise RuntimeError('') + + # per_result = [item for item in per_result if item['score'] > threshold] + coco_results.extend(per_result) + + if save_file: + with open(os.path.join(save_file), 'w') as f: + json.dump(coco_results, f) + + return coco_results + class DetectorSOLOv2(Detector): """ @@ -618,9 +691,15 @@ def load_predictor(model_dir, raise ValueError( "Predict by TensorRT mode: {}, expect device=='GPU', but device == {}" .format(run_mode, device)) - config = Config( - os.path.join(model_dir, 'model.pdmodel'), - os.path.join(model_dir, 'model.pdiparams')) + infer_model = os.path.join(model_dir, 'model.pdmodel') + infer_params = os.path.join(model_dir, 'model.pdiparams') + if not os.path.exists(infer_model): + infer_model = os.path.join(model_dir, 'inference.pdmodel') + infer_params = os.path.join(model_dir, 'inference.pdiparams') + if not os.path.exists(infer_model): + raise ValueError( + "Cannot find any inference model in dir: {},".format(model_dir)) + config = Config(infer_model, infer_params) if device == 'GPU': # initial GPU memory(M), device ID config.enable_use_gpu(200, 0) @@ -652,7 +731,7 @@ def load_predictor(model_dir, } if run_mode in precision_map.keys(): config.enable_tensorrt_engine( - workspace_size=1 << 25, + workspace_size=(1 << 25) * batch_size, max_batch_size=batch_size, min_subgraph_size=min_subgraph_size, precision_mode=precision_map[run_mode], @@ -690,7 +769,7 @@ def get_test_images(infer_dir, infer_img): Get image path list in TEST mode """ assert infer_img is not None or infer_dir is not None, \ - "--infer_img or --infer_dir should be set" + "--image_file or --image_dir should be set" assert infer_img is 
None or os.path.isfile(infer_img), \ "{} is not a file".format(infer_img) assert infer_dir is None or os.path.isdir(infer_dir), \ @@ -790,7 +869,10 @@ def main(): if FLAGS.image_dir is None and FLAGS.image_file is not None: assert FLAGS.batch_size == 1, "batch_size should be 1, when image_file is not None" img_list = get_test_images(FLAGS.image_dir, FLAGS.image_file) - detector.predict_image(img_list, FLAGS.run_benchmark, repeats=10) + save_file = os.path.join(FLAGS.output_dir, + 'results.json') if FLAGS.save_results else None + detector.predict_image( + img_list, FLAGS.run_benchmark, repeats=100, save_file=save_file) if not FLAGS.run_benchmark: detector.det_times.info(average=True) else: diff --git a/deploy/python/keypoint_infer.py b/deploy/python/keypoint_infer.py index e16ddd647cf58a58bb9b4c8cb239fd9e3d472673..70b81375621bdd49ec916f246d90f5d5a243aa33 100644 --- a/deploy/python/keypoint_infer.py +++ b/deploy/python/keypoint_infer.py @@ -95,12 +95,10 @@ class KeyPointDetector(Detector): def set_config(self, model_dir): return PredictConfig_KeyPoint(model_dir) - def get_person_from_rect(self, image, results, det_threshold=0.5): + def get_person_from_rect(self, image, results): # crop the person result from image self.det_times.preprocess_time_s.start() - det_results = results['boxes'] - mask = det_results[:, 1] > det_threshold - valid_rects = det_results[mask] + valid_rects = results['boxes'] rect_images = [] new_rects = [] org_rects = [] @@ -268,9 +266,11 @@ class KeyPointDetector(Detector): break print('detect frame: %d' % (index)) index += 1 - results = self.predict_image([frame], visual=False) + results = self.predict_image([frame[:, :, ::-1]], visual=False) + im_results = {} + im_results['keypoint'] = [results['keypoint'], results['score']] im = visualize_pose( - frame, results, visual_thresh=self.threshold, returnimg=True) + frame, im_results, visual_thresh=self.threshold, returnimg=True) writer.write(im) if camera_id != -1: cv2.imshow('Mask Detection', im) diff --git a/deploy/python/keypoint_postprocess.py b/deploy/python/keypoint_postprocess.py index 2275df78a3be412ac78ea8f26f55211fb1f028bd..69f1d3fd9dcc83278d331cd361b36d50e64ef508 100644 --- a/deploy/python/keypoint_postprocess.py +++ b/deploy/python/keypoint_postprocess.py @@ -35,7 +35,7 @@ class HrHRNetPostProcess(object): heat_thresh (float): value of topk below this threshhold will be ignored tag_thresh (float): coord's value sampled in tagmap below this threshold belong to same people for init - inputs(list[heatmap]): the output list of modle, [heatmap, heatmap_maxpool, tagmap], heatmap_maxpool used to get topk + inputs(list[heatmap]): the output list of model, [heatmap, heatmap_maxpool, tagmap], heatmap_maxpool used to get topk original_height, original_width (float): the original image size """ diff --git a/deploy/python/mot_jde_infer.py b/deploy/python/mot_jde_infer.py index 99033cabaa13164eb300c7e0e4800bffb12f462b..51a2562ee554a3eeb2489821d376486dcba0985c 100644 --- a/deploy/python/mot_jde_infer.py +++ b/deploy/python/mot_jde_infer.py @@ -32,7 +32,7 @@ sys.path.insert(0, parent_path) from pptracking.python.mot import JDETracker from pptracking.python.mot.utils import MOTTimer, write_mot_results -from pptracking.python.visualize import plot_tracking, plot_tracking_dict +from pptracking.python.mot.visualize import plot_tracking_dict # Global dictionary MOT_JDE_SUPPORT_MODELS = { @@ -54,23 +54,30 @@ class JDE_Detector(Detector): trt_calib_mode (bool): If the model is produced by TRT offline quantitative calibration, 
trt_calib_mode need to set True cpu_threads (int): cpu threads - enable_mkldnn (bool): whether to open MKLDNN + enable_mkldnn (bool): whether to open MKLDNN + output_dir (string): The path of output, default as 'output' + threshold (float): Score threshold of the detected bbox, default as 0.5 + save_images (bool): Whether to save visualization image results, default as False + save_mot_txts (bool): Whether to save tracking results (txt), default as False """ - def __init__(self, - model_dir, - tracker_config=None, - device='CPU', - run_mode='paddle', - batch_size=1, - trt_min_shape=1, - trt_max_shape=1088, - trt_opt_shape=608, - trt_calib_mode=False, - cpu_threads=1, - enable_mkldnn=False, - output_dir='output', - threshold=0.5): + def __init__( + self, + model_dir, + tracker_config=None, + device='CPU', + run_mode='paddle', + batch_size=1, + trt_min_shape=1, + trt_max_shape=1088, + trt_opt_shape=608, + trt_calib_mode=False, + cpu_threads=1, + enable_mkldnn=False, + output_dir='output', + threshold=0.5, + save_images=False, + save_mot_txts=False, ): super(JDE_Detector, self).__init__( model_dir=model_dir, device=device, @@ -84,6 +91,8 @@ class JDE_Detector(Detector): enable_mkldnn=enable_mkldnn, output_dir=output_dir, threshold=threshold, ) + self.save_images = save_images + self.save_mot_txts = save_mot_txts assert batch_size == 1, "MOT model only supports batch_size=1." self.det_times = Timer(with_tracker=True) self.num_classes = len(self.pred_config.labels) @@ -91,8 +100,8 @@ class JDE_Detector(Detector): # tracker config assert self.pred_config.tracker, "The exported JDE Detector model should have tracker." cfg = self.pred_config.tracker - min_box_area = cfg.get('min_box_area', 200) - vertical_ratio = cfg.get('vertical_ratio', 1.6) + min_box_area = cfg.get('min_box_area', 0.0) + vertical_ratio = cfg.get('vertical_ratio', 0.0) conf_thres = cfg.get('conf_thres', 0.0) tracked_thresh = cfg.get('tracked_thresh', 0.7) metric_type = cfg.get('metric_type', 'euclidean') @@ -115,7 +124,7 @@ class JDE_Detector(Detector): return result def tracking(self, det_results): - pred_dets = det_results['pred_dets'] # 'cls_id, score, x0, y0, x1, y1' + pred_dets = det_results['pred_dets'] # cls_id, score, x0, y0, x1, y1 pred_embs = det_results['pred_embs'] online_targets_dict = self.tracker.update(pred_dets, pred_embs) @@ -164,7 +173,8 @@ class JDE_Detector(Detector): image_list, run_benchmark=False, repeats=1, - visual=True): + visual=True, + seq_name=None): mot_results = [] num_classes = self.num_classes image_list.sort() @@ -225,7 +235,7 @@ class JDE_Detector(Detector): self.det_times.img_num += 1 if visual: - if frame_id % 10 == 0: + if len(image_list) > 1 and frame_id % 10 == 0: print('Tracking frame {}'.format(frame_id)) frame, _ = decode_image(img_file, {}) @@ -237,7 +247,8 @@ class JDE_Detector(Detector): online_scores, frame_id=frame_id, ids2names=ids2names) - seq_name = image_list[0].split('/')[-2] + if seq_name is None: + seq_name = image_list[0].split('/')[-2] save_dir = os.path.join(self.output_dir, seq_name) if not os.path.exists(save_dir): os.makedirs(save_dir) @@ -264,7 +275,8 @@ class JDE_Detector(Detector): if not os.path.exists(self.output_dir): os.makedirs(self.output_dir) out_path = os.path.join(self.output_dir, video_out_name) - fourcc = cv2.VideoWriter_fourcc(*'mp4v') + video_format = 'mp4v' + fourcc = cv2.VideoWriter_fourcc(*video_format) writer = cv2.VideoWriter(out_path, fourcc, fps, (width, height)) frame_id = 1 @@ -282,7 +294,9 @@ class JDE_Detector(Detector): frame_id += 1 
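A note on the `frame[:, :, ::-1]` flips introduced in the video loops of this diff (one appears in the next hunk): `cv2.VideoCapture` yields BGR frames, while `decode_image` hands the predictor RGB arrays, so camera and video frames are channel-reversed before `predict_image`. A tiny illustration:

```python
import numpy as np

bgr_frame = np.zeros((4, 4, 3), dtype=np.uint8)
bgr_frame[..., 0] = 255              # channel 0 is blue in OpenCV's BGR order

rgb_frame = bgr_frame[:, :, ::-1]    # reversed view, no data copy
assert rgb_frame[0, 0, 2] == 255     # blue now sits at index 2, as RGB expects
```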
timer.tic() - mot_results = self.predict_image([frame], visual=False) + seq_name = video_out_name.split('.')[0] + mot_results = self.predict_image( + [frame[:, :, ::-1]], visual=False, seq_name=seq_name) timer.toc() online_tlwhs, online_scores, online_ids = mot_results[0] @@ -307,20 +321,33 @@ class JDE_Detector(Detector): cv2.imshow('Mask Detection', im) if cv2.waitKey(1) & 0xFF == ord('q'): break + + if self.save_mot_txts: + result_filename = os.path.join( + self.output_dir, video_out_name.split('.')[-2] + '.txt') + + write_mot_results(result_filename, results, data_type, num_classes) + writer.release() def main(): detector = JDE_Detector( FLAGS.model_dir, + tracker_config=None, device=FLAGS.device, run_mode=FLAGS.run_mode, + batch_size=1, trt_min_shape=FLAGS.trt_min_shape, trt_max_shape=FLAGS.trt_max_shape, trt_opt_shape=FLAGS.trt_opt_shape, trt_calib_mode=FLAGS.trt_calib_mode, cpu_threads=FLAGS.cpu_threads, - enable_mkldnn=FLAGS.enable_mkldnn) + enable_mkldnn=FLAGS.enable_mkldnn, + output_dir=FLAGS.output_dir, + threshold=FLAGS.threshold, + save_images=FLAGS.save_images, + save_mot_txts=FLAGS.save_mot_txts) # predict from video file or camera video stream if FLAGS.video_file is not None or FLAGS.camera_id != -1: diff --git a/deploy/python/mot_keypoint_unite_infer.py b/deploy/python/mot_keypoint_unite_infer.py index 3eea4bd6b22c148b09be9cf794116725bf6e89d6..edf394152c28d682cdb5845050b78dbb27e8b22f 100644 --- a/deploy/python/mot_keypoint_unite_infer.py +++ b/deploy/python/mot_keypoint_unite_infer.py @@ -24,7 +24,7 @@ from collections import defaultdict from mot_keypoint_unite_utils import argsparser from preprocess import decode_image -from infer import print_arguments, get_test_images +from infer import print_arguments, get_test_images, bench_log from mot_sde_infer import SDE_Detector from mot_jde_infer import JDE_Detector, MOT_JDE_SUPPORT_MODELS from keypoint_infer import KeyPointDetector, KEYPOINT_SUPPORT_MODELS @@ -39,7 +39,7 @@ import sys parent_path = os.path.abspath(os.path.join(__file__, *(['..'] * 2))) sys.path.insert(0, parent_path) -from pptracking.python.visualize import plot_tracking, plot_tracking_dict +from pptracking.python.mot.visualize import plot_tracking, plot_tracking_dict from pptracking.python.mot.utils import MOTTimer as FPSTimer @@ -92,11 +92,12 @@ def mot_topdown_unite_predict(mot_detector, keypoint_res = predict_with_given_det( image, results, topdown_keypoint_detector, keypoint_batch_size, - FLAGS.mot_threshold, FLAGS.keypoint_threshold, FLAGS.run_benchmark) + FLAGS.run_benchmark) if save_res: + save_name = img_file if isinstance(img_file, str) else i store_res.append([ - i, keypoint_res['bbox'], + save_name, keypoint_res['bbox'], [keypoint_res['keypoint'][0], keypoint_res['keypoint'][1]] ]) if FLAGS.run_benchmark: @@ -146,7 +147,7 @@ def mot_topdown_unite_predict_video(mot_detector, if not os.path.exists(FLAGS.output_dir): os.makedirs(FLAGS.output_dir) out_path = os.path.join(FLAGS.output_dir, video_name) - fourcc = cv2.VideoWriter_fourcc(*'mp4v') + fourcc = cv2.VideoWriter_fourcc(* 'mp4v') writer = cv2.VideoWriter(out_path, fourcc, fps, (width, height)) frame_id = 0 timer_mot, timer_kp, timer_mot_kp = FPSTimer(), FPSTimer(), FPSTimer() @@ -166,7 +167,10 @@ def mot_topdown_unite_predict_video(mot_detector, # mot model timer_mot.tic() - mot_results = mot_detector.predict_image([frame], visual=False) + + frame2 = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB) + + mot_results = mot_detector.predict_image([frame2], visual=False) timer_mot.toc() online_tlwhs, 
online_scores, online_ids = mot_results[0] results = convert_mot_to_det( @@ -178,8 +182,8 @@ def mot_topdown_unite_predict_video(mot_detector, # keypoint model timer_kp.tic() keypoint_res = predict_with_given_det( - frame, results, topdown_keypoint_detector, keypoint_batch_size, - FLAGS.mot_threshold, FLAGS.keypoint_threshold, FLAGS.run_benchmark) + frame2, results, topdown_keypoint_detector, keypoint_batch_size, + FLAGS.run_benchmark) timer_kp.toc() timer_mot_kp.toc() diff --git a/deploy/python/mot_sde_infer.py b/deploy/python/mot_sde_infer.py index 4dd66dda0812d7143bc5f31cb033f8ac8305828c..b4a487facddc4a6eb9492ba367c5333b0c77d9a9 100644 --- a/deploy/python/mot_sde_infer.py +++ b/deploy/python/mot_sde_infer.py @@ -1,4 +1,4 @@ -# Copyright (c) 2022 PaddlePaddle Authors. All Rights Reserved. +# Copyright (c) 2021 PaddlePaddle Authors. All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. @@ -23,15 +23,15 @@ import paddle from benchmark_utils import PaddleInferBenchmark from preprocess import decode_image from utils import argsparser, Timer, get_current_memory_mb -from infer import Detector, get_test_images, print_arguments, bench_log, PredictConfig +from infer import Detector, get_test_images, print_arguments, bench_log, PredictConfig, load_predictor # add python path import sys parent_path = os.path.abspath(os.path.join(__file__, *(['..'] * 2))) sys.path.insert(0, parent_path) -from pptracking.python.mot import JDETracker -from pptracking.python.mot.utils import MOTTimer, write_mot_results +from pptracking.python.mot import JDETracker, DeepSORTTracker +from pptracking.python.mot.utils import MOTTimer, write_mot_results, get_crops, clip_box from pptracking.python.mot.visualize import plot_tracking, plot_tracking_dict @@ -50,7 +50,11 @@ class SDE_Detector(Detector): calibration, trt_calib_mode need to set True cpu_threads (int): cpu threads enable_mkldnn (bool): whether to open MKLDNN - use_dark(bool): whether to use postprocess in DarkPose + output_dir (string): The path of output, default as 'output' + threshold (float): Score threshold of the detected bbox, default as 0.5 + save_images (bool): Whether to save visualization image results, default as False + save_mot_txts (bool): Whether to save tracking results (txt), default as False + reid_model_dir (str): reid model dir, default None for ByteTrack, but set for DeepSORT """ def __init__(self, @@ -66,7 +70,10 @@ class SDE_Detector(Detector): cpu_threads=1, enable_mkldnn=False, output_dir='output', - threshold=0.5): + threshold=0.5, + save_images=False, + save_mot_txts=False, + reid_model_dir=None): super(SDE_Detector, self).__init__( model_dir=model_dir, device=device, @@ -80,65 +87,198 @@ class SDE_Detector(Detector): enable_mkldnn=enable_mkldnn, output_dir=output_dir, threshold=threshold, ) + self.save_images = save_images + self.save_mot_txts = save_mot_txts assert batch_size == 1, "MOT model only supports batch_size=1." 
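The tracker selection a few hunks below keys off the top-level `type` field of the tracker YAML; a minimal sketch of that lookup against the `tracker_config.yml` shipped in this diff:

```python
import yaml

with open('deploy/pptracking/python/tracker_config.yml') as f:
    tracker_cfg = yaml.safe_load(f)

cfg = tracker_cfg[tracker_cfg['type']]  # e.g. the OCSORTTracker section
use_deepsort = tracker_cfg['type'] == 'DeepSORTTracker'
print(tracker_cfg['type'], cfg.get('min_box_area', 0))
```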
self.det_times = Timer(with_tracker=True) self.num_classes = len(self.pred_config.labels) - # tracker config + # reid config + self.use_reid = False if reid_model_dir is None else True + if self.use_reid: + self.reid_pred_config = self.set_config(reid_model_dir) + self.reid_predictor, self.config = load_predictor( + reid_model_dir, + run_mode=run_mode, + batch_size=50, # reid_batch_size + min_subgraph_size=self.reid_pred_config.min_subgraph_size, + device=device, + use_dynamic_shape=self.reid_pred_config.use_dynamic_shape, + trt_min_shape=trt_min_shape, + trt_max_shape=trt_max_shape, + trt_opt_shape=trt_opt_shape, + trt_calib_mode=trt_calib_mode, + cpu_threads=cpu_threads, + enable_mkldnn=enable_mkldnn) + else: + self.reid_pred_config = None + self.reid_predictor = None + + assert tracker_config is not None, 'Note that tracker_config should be set.' self.tracker_config = tracker_config - cfg = yaml.safe_load(open(self.tracker_config))['tracker'] - min_box_area = cfg.get('min_box_area', 200) - vertical_ratio = cfg.get('vertical_ratio', 1.6) - use_byte = cfg.get('use_byte', True) - match_thres = cfg.get('match_thres', 0.9) - conf_thres = cfg.get('conf_thres', 0.6) - low_conf_thres = cfg.get('low_conf_thres', 0.1) - - self.tracker = JDETracker( - use_byte=use_byte, - num_classes=self.num_classes, - min_box_area=min_box_area, - vertical_ratio=vertical_ratio, - match_thres=match_thres, - conf_thres=conf_thres, - low_conf_thres=low_conf_thres) + tracker_cfg = yaml.safe_load(open(self.tracker_config)) + cfg = tracker_cfg[tracker_cfg['type']] + + # tracker config + self.use_deepsort_tracker = True if tracker_cfg[ + 'type'] == 'DeepSORTTracker' else False + if self.use_deepsort_tracker: + # use DeepSORTTracker + if self.reid_pred_config is not None and hasattr( + self.reid_pred_config, 'tracker'): + cfg = self.reid_pred_config.tracker + budget = cfg.get('budget', 100) + max_age = cfg.get('max_age', 30) + max_iou_distance = cfg.get('max_iou_distance', 0.7) + matching_threshold = cfg.get('matching_threshold', 0.2) + min_box_area = cfg.get('min_box_area', 0) + vertical_ratio = cfg.get('vertical_ratio', 0) + + self.tracker = DeepSORTTracker( + budget=budget, + max_age=max_age, + max_iou_distance=max_iou_distance, + matching_threshold=matching_threshold, + min_box_area=min_box_area, + vertical_ratio=vertical_ratio, ) + else: + # use ByteTracker + use_byte = cfg.get('use_byte', False) + det_thresh = cfg.get('det_thresh', 0.3) + min_box_area = cfg.get('min_box_area', 0) + vertical_ratio = cfg.get('vertical_ratio', 0) + match_thres = cfg.get('match_thres', 0.9) + conf_thres = cfg.get('conf_thres', 0.6) + low_conf_thres = cfg.get('low_conf_thres', 0.1) + + self.tracker = JDETracker( + use_byte=use_byte, + det_thresh=det_thresh, + num_classes=self.num_classes, + min_box_area=min_box_area, + vertical_ratio=vertical_ratio, + match_thres=match_thres, + conf_thres=conf_thres, + low_conf_thres=low_conf_thres, ) + + def postprocess(self, inputs, result): + # postprocess output of predictor + np_boxes_num = result['boxes_num'] + if np_boxes_num[0] <= 0: + print('[WARNNING] No object detected.') + result = {'boxes': np.zeros([0, 6]), 'boxes_num': [0]} + result = {k: v for k, v in result.items() if v is not None} + return result + + def reidprocess(self, det_results, repeats=1): + pred_dets = det_results['boxes'] + pred_xyxys = pred_dets[:, 2:6] + + ori_image = det_results['ori_image'] + ori_image_shape = ori_image.shape[:2] + pred_xyxys, keep_idx = clip_box(pred_xyxys, ori_image_shape) + + if len(keep_idx[0]) == 0: 
+ det_results['boxes'] = np.zeros((1, 6), dtype=np.float32) + det_results['embeddings'] = None + return det_results + + pred_dets = pred_dets[keep_idx[0]] + pred_xyxys = pred_dets[:, 2:6] + + w, h = self.tracker.input_size + crops = get_crops(pred_xyxys, ori_image, w, h) + + # to keep fast speed, only use topk crops + crops = crops[:50] # reid_batch_size + det_results['crops'] = np.array(crops).astype('float32') + det_results['boxes'] = pred_dets[:50] + + input_names = self.reid_predictor.get_input_names() + for i in range(len(input_names)): + input_tensor = self.reid_predictor.get_input_handle(input_names[i]) + input_tensor.copy_from_cpu(det_results[input_names[i]]) + + # model prediction + for i in range(repeats): + self.reid_predictor.run() + output_names = self.reid_predictor.get_output_names() + feature_tensor = self.reid_predictor.get_output_handle(output_names[ + 0]) + pred_embs = feature_tensor.copy_to_cpu() + + det_results['embeddings'] = pred_embs + return det_results def tracking(self, det_results): pred_dets = det_results['boxes'] # 'cls_id, score, x0, y0, x1, y1' - pred_embs = None - - online_targets_dict = self.tracker.update(pred_dets, pred_embs) - online_tlwhs = defaultdict(list) - online_scores = defaultdict(list) - online_ids = defaultdict(list) - for cls_id in range(self.num_classes): - online_targets = online_targets_dict[cls_id] + pred_embs = det_results.get('embeddings', None) + + if self.use_deepsort_tracker: + # use DeepSORTTracker, only support singe class + self.tracker.predict() + online_targets = self.tracker.update(pred_dets, pred_embs) + online_tlwhs, online_scores, online_ids = [], [], [] for t in online_targets: - tlwh = t.tlwh - tid = t.track_id - tscore = t.score - if tlwh[2] * tlwh[3] <= self.tracker.min_box_area: + if not t.is_confirmed() or t.time_since_update > 1: continue + tlwh = t.to_tlwh() + tscore = t.score + tid = t.track_id if self.tracker.vertical_ratio > 0 and tlwh[2] / tlwh[ 3] > self.tracker.vertical_ratio: continue - online_tlwhs[cls_id].append(tlwh) - online_ids[cls_id].append(tid) - online_scores[cls_id].append(tscore) - - return online_tlwhs, online_scores, online_ids + online_tlwhs.append(tlwh) + online_scores.append(tscore) + online_ids.append(tid) + + tracking_outs = { + 'online_tlwhs': online_tlwhs, + 'online_scores': online_scores, + 'online_ids': online_ids, + } + return tracking_outs + else: + # use ByteTracker, support multiple class + online_tlwhs = defaultdict(list) + online_scores = defaultdict(list) + online_ids = defaultdict(list) + online_targets_dict = self.tracker.update(pred_dets, pred_embs) + for cls_id in range(self.num_classes): + online_targets = online_targets_dict[cls_id] + for t in online_targets: + tlwh = t.tlwh + tid = t.track_id + tscore = t.score + if tlwh[2] * tlwh[3] <= self.tracker.min_box_area: + continue + if self.tracker.vertical_ratio > 0 and tlwh[2] / tlwh[ + 3] > self.tracker.vertical_ratio: + continue + online_tlwhs[cls_id].append(tlwh) + online_ids[cls_id].append(tid) + online_scores[cls_id].append(tscore) + + tracking_outs = { + 'online_tlwhs': online_tlwhs, + 'online_scores': online_scores, + 'online_ids': online_ids, + } + return tracking_outs def predict_image(self, image_list, run_benchmark=False, repeats=1, - visual=True): - mot_results = [] + visual=True, + seq_name=None): num_classes = self.num_classes image_list.sort() ids2names = self.pred_config.labels + mot_results = [] for frame_id, img_file in enumerate(image_list): batch_image_list = [img_file] # bs=1 in MOT model + frame, _ = 
decode_image(img_file, {}) if run_benchmark: # preprocess inputs = self.preprocess(batch_image_list) # warmup @@ -159,10 +299,16 @@ class SDE_Detector(Detector): self.det_times.postprocess_time_s.end() # tracking + if self.use_reid: + det_result['frame_id'] = frame_id + det_result['seq_name'] = seq_name + det_result['ori_image'] = frame + det_result = self.reidprocess(det_result) result_warmup = self.tracking(det_result) self.det_times.tracking_time_s.start() - online_tlwhs, online_scores, online_ids = self.tracking( - det_result) + if self.use_reid: + det_result = self.reidprocess(det_result) + tracking_outs = self.tracking(det_result) self.det_times.tracking_time_s.end() self.det_times.img_num += 1 @@ -186,32 +332,48 @@ class SDE_Detector(Detector): # tracking process self.det_times.tracking_time_s.start() - online_tlwhs, online_scores, online_ids = self.tracking( - det_result) + if self.use_reid: + det_result['frame_id'] = frame_id + det_result['seq_name'] = seq_name + det_result['ori_image'] = frame + det_result = self.reidprocess(det_result) + tracking_outs = self.tracking(det_result) self.det_times.tracking_time_s.end() self.det_times.img_num += 1 + online_tlwhs = tracking_outs['online_tlwhs'] + online_scores = tracking_outs['online_scores'] + online_ids = tracking_outs['online_ids'] + + mot_results.append([online_tlwhs, online_scores, online_ids]) + if visual: - if frame_id % 10 == 0: + if len(image_list) > 1 and frame_id % 10 == 0: print('Tracking frame {}'.format(frame_id)) frame, _ = decode_image(img_file, {}) - - im = plot_tracking_dict( - frame, - num_classes, - online_tlwhs, - online_ids, - online_scores, - frame_id=frame_id, - ids2names=[]) - seq_name = image_list[0].split('/')[-2] + if isinstance(online_tlwhs, defaultdict): + im = plot_tracking_dict( + frame, + num_classes, + online_tlwhs, + online_ids, + online_scores, + frame_id=frame_id, + ids2names=ids2names) + else: + im = plot_tracking( + frame, + online_tlwhs, + online_ids, + online_scores, + frame_id=frame_id, + ids2names=ids2names) save_dir = os.path.join(self.output_dir, seq_name) if not os.path.exists(save_dir): os.makedirs(save_dir) cv2.imwrite( os.path.join(save_dir, '{:05d}.jpg'.format(frame_id)), im) - mot_results.append([online_tlwhs, online_scores, online_ids]) return mot_results def predict_video(self, video_file, camera_id): @@ -231,13 +393,17 @@ class SDE_Detector(Detector): if not os.path.exists(self.output_dir): os.makedirs(self.output_dir) out_path = os.path.join(self.output_dir, video_out_name) - fourcc = cv2.VideoWriter_fourcc(* 'mp4v') + video_format = 'mp4v' + fourcc = cv2.VideoWriter_fourcc(*video_format) writer = cv2.VideoWriter(out_path, fourcc, fps, (width, height)) frame_id = 1 timer = MOTTimer() - results = defaultdict(list) # support single class and multi classes + results = defaultdict(list) num_classes = self.num_classes + data_type = 'mcmot' if num_classes > 1 else 'mot' + ids2names = self.pred_config.labels + while (1): ret, frame = capture.read() if not ret: @@ -247,31 +413,54 @@ class SDE_Detector(Detector): frame_id += 1 timer.tic() - mot_results = self.predict_image([frame], visual=False) + seq_name = video_out_name.split('.')[0] + mot_results = self.predict_image( + [frame[:, :, ::-1]], visual=False, seq_name=seq_name) timer.toc() + # bs=1 in MOT model online_tlwhs, online_scores, online_ids = mot_results[0] - for cls_id in range(num_classes): - results[cls_id].append( - (frame_id + 1, online_tlwhs[cls_id], online_scores[cls_id], - online_ids[cls_id])) fps = 1. 
/ timer.duration
 
-            im = plot_tracking_dict(
-                frame,
-                num_classes,
-                online_tlwhs,
-                online_ids,
-                online_scores,
-                frame_id=frame_id,
-                fps=fps,
-                ids2names=[])
+            if self.use_deepsort_tracker:
+                # use DeepSORTTracker, which only supports a single class
+                results[0].append(
+                    (frame_id + 1, online_tlwhs, online_scores, online_ids))
+                im = plot_tracking(
+                    frame,
+                    online_tlwhs,
+                    online_ids,
+                    online_scores,
+                    frame_id=frame_id,
+                    fps=fps,
+                    ids2names=ids2names)
+            else:
+                # use ByteTracker, which supports multiple classes
+                for cls_id in range(num_classes):
+                    results[cls_id].append(
+                        (frame_id + 1, online_tlwhs[cls_id],
+                         online_scores[cls_id], online_ids[cls_id]))
+                im = plot_tracking_dict(
+                    frame,
+                    num_classes,
+                    online_tlwhs,
+                    online_ids,
+                    online_scores,
+                    frame_id=frame_id,
+                    fps=fps,
+                    ids2names=ids2names)
 
             writer.write(im)
             if camera_id != -1:
                 cv2.imshow('Mask Detection', im)
                 if cv2.waitKey(1) & 0xFF == ord('q'):
                     break
+
+        if self.save_mot_txts:
+            result_filename = os.path.join(
+                self.output_dir, video_out_name.split('.')[-2] + '.txt')
+            write_mot_results(result_filename, results)
+
         writer.release()
 
@@ -282,18 +471,20 @@ def main():
     arch = yml_conf['arch']
     detector = SDE_Detector(
         FLAGS.model_dir,
-        FLAGS.tracker_config,
+        tracker_config=FLAGS.tracker_config,
         device=FLAGS.device,
         run_mode=FLAGS.run_mode,
-        batch_size=FLAGS.batch_size,
+        batch_size=1,
         trt_min_shape=FLAGS.trt_min_shape,
         trt_max_shape=FLAGS.trt_max_shape,
         trt_opt_shape=FLAGS.trt_opt_shape,
         trt_calib_mode=FLAGS.trt_calib_mode,
         cpu_threads=FLAGS.cpu_threads,
         enable_mkldnn=FLAGS.enable_mkldnn,
+        output_dir=FLAGS.output_dir,
         threshold=FLAGS.threshold,
-        output_dir=FLAGS.output_dir)
+        save_images=FLAGS.save_images,
+        save_mot_txts=FLAGS.save_mot_txts, )
 
     # predict from video file or camera video stream
     if FLAGS.video_file is not None or FLAGS.camera_id != -1:
@@ -303,7 +494,9 @@ def main():
         if FLAGS.image_dir is None and FLAGS.image_file is not None:
             assert FLAGS.batch_size == 1, "--batch_size should be 1 in MOT models."
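        # MOT is stateful across frames, so images are fed one at a time
        # (bs=1); the image folder is treated as one sequence, and its name
        # is reused as seq_name when saving per-sequence outputs.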
        img_list = get_test_images(FLAGS.image_dir, FLAGS.image_file)
-        detector.predict_image(img_list, FLAGS.run_benchmark, repeats=10)
+        seq_name = FLAGS.image_dir.split('/')[-1] if FLAGS.image_dir else None
+        detector.predict_image(
+            img_list, FLAGS.run_benchmark, repeats=10, seq_name=seq_name)
 
         if not FLAGS.run_benchmark:
             detector.det_times.info(average=True)
diff --git a/deploy/python/preprocess.py b/deploy/python/preprocess.py
index 998367a349b3e14045991c2e1d2af2e6ec94e03d..d447c744b75600886075c669e47d05036a93eae7 100644
--- a/deploy/python/preprocess.py
+++ b/deploy/python/preprocess.py
@@ -15,6 +15,7 @@
 import cv2
 import numpy as np
 from keypoint_preprocess import get_affine_transform
+from PIL import Image
 
 
 def decode_image(im_file, im_info):
@@ -39,6 +40,85 @@ def decode_image(im_file, im_info):
     return im, im_info
 
 
+class Resize_Mult32(object):
+    """resize image so that both sides are multiples of 32
+    Args:
+        limit_side_len (int): length limit of the image side
+        limit_type (str): one of 'max', 'min' or 'resize_long'
+        interp (int): interpolation method of resize
+    """
+
+    def __init__(self, limit_side_len, limit_type, interp=cv2.INTER_LINEAR):
+        self.limit_side_len = limit_side_len
+        self.limit_type = limit_type
+        self.interp = interp
+
+    def __call__(self, im, im_info):
+        """
+        Args:
+            im (np.ndarray): image (np.ndarray)
+            im_info (dict): info of image
+        Returns:
+            im (np.ndarray): processed image (np.ndarray)
+            im_info (dict): info of processed image
+        """
+        im_channel = im.shape[2]
+        im_scale_y, im_scale_x = self.generate_scale(im)
+        im = cv2.resize(
+            im,
+            None,
+            None,
+            fx=im_scale_x,
+            fy=im_scale_y,
+            interpolation=self.interp)
+        im_info['im_shape'] = np.array(im.shape[:2]).astype('float32')
+        im_info['scale_factor'] = np.array(
+            [im_scale_y, im_scale_x]).astype('float32')
+        return im, im_info
+
+    def generate_scale(self, img):
+        """
+        Args:
+            img (np.ndarray): image (np.ndarray)
+        Returns:
+            im_scale_x: the resize ratio of X
+            im_scale_y: the resize ratio of Y
+        """
+        limit_side_len = self.limit_side_len
+        h, w, c = img.shape
+
+        # limit the max side
+        if self.limit_type == 'max':
+            if max(h, w) > limit_side_len:
+                if h > w:
+                    ratio = float(limit_side_len) / h
+                else:
+                    ratio = float(limit_side_len) / w
+            else:
+                ratio = 1.
+        elif self.limit_type == 'min':
+            if min(h, w) < limit_side_len:
+                if h < w:
+                    ratio = float(limit_side_len) / h
+                else:
+                    ratio = float(limit_side_len) / w
+            else:
+                ratio = 1.
+        elif self.limit_type == 'resize_long':
+            ratio = float(limit_side_len) / max(h, w)
+        else:
+            raise ValueError('unsupported limit_type: {}'.format(
+                self.limit_type))
+        resize_h = int(h * ratio)
+        resize_w = int(w * ratio)
+
+        resize_h = max(int(round(resize_h / 32) * 32), 32)
+        resize_w = max(int(round(resize_w / 32) * 32), 32)
+
+        im_scale_y = resize_h / float(h)
+        im_scale_x = resize_w / float(w)
+        return im_scale_y, im_scale_x
+
+
 class Resize(object):
     """resize image by target_size and max_size
     Args:
@@ -106,6 +186,95 @@ class Resize(object):
         return im_scale_y, im_scale_x
 
 
+class ShortSizeScale(object):
+    """
+    Scale images by short size.
+    Args:
+        short_size(float | int): Short size of an image will be scaled to the short_size.
+        fixed_ratio(bool): Set whether to zoom according to a fixed ratio. default: True
+        do_round(bool): Whether to round up when calculating the zoom ratio. default: False
+        backend(str): Choose pillow or cv2 as the graphics processing backend. default: 'pillow'
+    """
+
+    def __init__(self,
+                 short_size,
+                 fixed_ratio=True,
+                 keep_ratio=None,
+                 do_round=False,
+                 backend='pillow'):
+        self.short_size = short_size
+        assert (fixed_ratio and not keep_ratio) or (
+            not fixed_ratio
+        ), "fixed_ratio and keep_ratio cannot be true at the same time"
+        self.fixed_ratio = fixed_ratio
+        self.keep_ratio = keep_ratio
+        self.do_round = do_round
+
+        assert backend in [
+            'pillow', 'cv2'
+        ], f"Scale's backend must be pillow or cv2, but got {backend}"
+
+        self.backend = backend
+
+    def __call__(self, img):
+        """
+        Performs resize operations.
+        Args:
+            img (PIL.Image): a PIL.Image.
+        return:
+            resized_img: a PIL.Image after scaling.
+        """
+
+        result_img = None
+
+        if isinstance(img, np.ndarray):
+            h, w, _ = img.shape
+        elif isinstance(img, Image.Image):
+            w, h = img.size
+        else:
+            raise NotImplementedError
+
+        if w <= h:
+            ow = self.short_size
+            if self.fixed_ratio:  # default is True
+                oh = int(self.short_size * 4.0 / 3.0)
+            elif not self.keep_ratio:  # keep_ratio is False
+                oh = self.short_size
+            else:
+                scale_factor = self.short_size / w
+                oh = int(h * float(scale_factor) +
+                         0.5) if self.do_round else int(h * self.short_size / w)
+                ow = int(w * float(scale_factor) +
+                         0.5) if self.do_round else self.short_size
+        else:
+            oh = self.short_size
+            if self.fixed_ratio:
+                ow = int(self.short_size * 4.0 / 3.0)
+            elif not self.keep_ratio:  # keep_ratio is False
+                ow = self.short_size
+            else:
+                scale_factor = self.short_size / h
+                oh = int(h * float(scale_factor) +
+                         0.5) if self.do_round else self.short_size
+                ow = int(w * float(scale_factor) +
+                         0.5) if self.do_round else int(w * self.short_size / h)
+
+        if type(img) == np.ndarray:
+            img = Image.fromarray(img, mode='RGB')
+
+        if self.backend == 'pillow':
+            result_img = img.resize((ow, oh), Image.BILINEAR)
+        elif self.backend == 'cv2' and (self.keep_ratio is not None):
+            result_img = cv2.resize(
+                np.asarray(img), (ow, oh), interpolation=cv2.INTER_LINEAR)
+        else:
+            result_img = Image.fromarray(
+                cv2.resize(
+                    np.asarray(img), (ow, oh),
+                    interpolation=cv2.INTER_LINEAR))
+
+        return result_img
+
+
 class NormalizeImage(object):
     """normalize image
     Args:
@@ -246,6 +415,34 @@ class LetterBoxResize(object):
         return im, im_info
 
 
+class Pad(object):
+    def __init__(self, size, fill_value=[114.0, 114.0, 114.0]):
+        """
+        Pad image to a specified size.
+        Args:
+            size (list[int]): image target size
+            fill_value (list[float]): rgb value of pad area, default (114.0, 114.0, 114.0)
+        """
+        super(Pad, self).__init__()
+        if isinstance(size, int):
+            size = [size, size]
+        self.size = size
+        self.fill_value = fill_value
+
+    def __call__(self, im, im_info):
+        im_h, im_w = im.shape[:2]
+        h, w = self.size
+        if h == im_h and w == im_w:
+            im = im.astype(np.float32)
+            return im, im_info
+
+        canvas = np.ones((h, w, 3), dtype=np.float32)
+        canvas *= np.array(self.fill_value, dtype=np.float32)
+        canvas[0:im_h, 0:im_w, :] = im.astype(np.float32)
+        im = canvas
+        return im, im_info
+
+
 class WarpAffine(object):
     """Warp affine the image
     """
diff --git a/deploy/python/tracker_config.yml b/deploy/python/tracker_config.yml
index d92510148ec175d9dd7c19fd191e43f13cebe2ce..ddd55e8653870ed9bdfe9734995e8af5b56f49e2 100644
--- a/deploy/python/tracker_config.yml
+++ b/deploy/python/tracker_config.yml
@@ -1,10 +1,26 @@
-# config of tracker for MOT SDE Detector, use ByteTracker as default.
-# The tracker of MOT JDE Detector is exported together with the model.
+# config of tracker for MOT SDE Detector, use 'JDETracker' as default.
+# The tracker of MOT JDE Detector (such as FairMOT) is exported together with the model.
 # Here 'min_box_area' and 'vertical_ratio' are set for pedestrian, you can modify for other objects tracking.
-tracker:
-  use_byte: true
+
+type: JDETracker  # 'JDETracker' or 'DeepSORTTracker'
+
+# BYTETracker (JDETracker with use_byte enabled)
+JDETracker:
+  use_byte: True
+  det_thresh: 0.3
   conf_thres: 0.6
   low_conf_thres: 0.1
   match_thres: 0.9
-  min_box_area: 100
-  vertical_ratio: 1.6
+  min_box_area: 0
+  vertical_ratio: 0  # 1.6 for pedestrian
+
+DeepSORTTracker:
+  input_size: [64, 192]
+  min_box_area: 0
+  vertical_ratio: -1
+  budget: 100
+  max_age: 70
+  n_init: 3
+  metric_type: cosine
+  matching_threshold: 0.2
+  max_iou_distance: 0.9
diff --git a/deploy/python/utils.py b/deploy/python/utils.py
index c542f0176494e03312516574077815fbdd2d6d4c..41dc7ae9e81f49bdd08e0917d50b21ac00f2e527 100644
--- a/deploy/python/utils.py
+++ b/deploy/python/utils.py
@@ -156,6 +156,12 @@ def argsparser():
         type=ast.literal_eval,
         default=False,
         help="Whether do random padding for action recognition.")
+    parser.add_argument(
+        "--save_results",
+        type=ast.literal_eval,
+        default=False,
+        help="Whether to save detection results to a file in COCO format")
+
     return parser
 
diff --git a/deploy/python/visualize.py b/deploy/python/visualize.py
index 9c07b8491d6790ddd2303d9abe1c45070f8c5657..626da02555985a5568f3bdaff20705d8a8dd1c11 100644
--- a/deploy/python/visualize.py
+++ b/deploy/python/visualize.py
@@ -96,6 +96,8 @@ def draw_mask(im, np_boxes, np_masks, labels, threshold=0.5):
     expect_boxes = (np_boxes[:, 1] > threshold) & (np_boxes[:, 0] > -1)
     np_boxes = np_boxes[expect_boxes, :]
     np_masks = np_masks[expect_boxes, :, :]
+    im_h, im_w = im.shape[:2]
+    np_masks = np_masks[:, :im_h, :im_w]
     for i in range(len(np_masks)):
         clsid, score = int(np_boxes[i][0]), np_boxes[i][1]
         mask = np_masks[i]
@@ -329,7 +331,7 @@ def visualize_pose(imgfile,
     plt.close()
 
 
-def visualize_attr(im, results, boxes=None):
+def visualize_attr(im, results, boxes=None, is_mtmct=False):
     if isinstance(im, str):
         im = Image.open(im)
         im = np.ascontiguousarray(np.copy(im))
@@ -346,8 +348,12 @@
         if boxes is None:
             text_w = 3
             text_h = 1
+        elif is_mtmct:
+            box = boxes[i]  # multi-camera: bbox is [x, y, w, h]
+            text_w = int(box[0]) + 3
+            text_h = int(box[1])
         else:
-            box = boxes[i]
+            box = boxes[i]  # single camera: bbox is [0, 0, x, y, w, h]
             text_w = int(box[2]) + 3
             text_h = int(box[3])
         for text in res:
@@ -363,15 +369,76 @@
     return im
 
 
-def visualize_action(im, mot_boxes, action_visual_collector, action_text=""):
+def visualize_action(im,
+                     mot_boxes,
+                     action_visual_collector=None,
+                     action_text="",
+                     video_action_score=None,
+                     video_action_text=""):
     im = cv2.imread(im) if isinstance(im, str) else im
-    id_detected = action_visual_collector.get_visualize_ids()
+    im_h, im_w = im.shape[:2]
     text_scale = max(1, im.shape[1] / 1600.)
-    for mot_box in mot_boxes:
-        # mot_box is a format with [mot_id, class, score, xmin, ymin, w, h]
-        if mot_box[0] in id_detected:
-            text_position = (int(mot_box[3] + mot_box[5] * 0.75),
-                             int(mot_box[4] - 10))
-            cv2.putText(im, action_text, text_position, cv2.FONT_HERSHEY_PLAIN,
-                        text_scale, (0, 0, 255), 2)
+    text_thickness = 2
+
+    if action_visual_collector:
+        id_action_dict = {}
+        for collector, action_type in zip(action_visual_collector, action_text):
+            id_detected = collector.get_visualize_ids()
+            for pid in id_detected:
+                id_action_dict[pid] = id_action_dict.get(pid, [])
+                id_action_dict[pid].append(action_type)
+        for mot_box in mot_boxes:
+            # mot_box is a format with [mot_id, class, score, xmin, ymin, w, h]
+            if mot_box[0] in id_action_dict:
+                text_position = (int(mot_box[3] + mot_box[5] * 0.75),
+                                 int(mot_box[4] - 10))
+                display_text = ', '.join(id_action_dict[mot_box[0]])
+                cv2.putText(im, display_text, text_position,
+                            cv2.FONT_HERSHEY_PLAIN, text_scale, (0, 0, 255), 2)
+
+    if video_action_score:
+        cv2.putText(
+            im,
+            video_action_text + ': %.2f' % video_action_score,
+            (int(im_w / 2), int(15 * text_scale) + 5),
+            cv2.FONT_ITALIC,
+            text_scale, (0, 0, 255),
+            thickness=text_thickness)
+
+    return im
+
+
+def visualize_vehicleplate(im, results, boxes=None):
+    if isinstance(im, str):
+        im = Image.open(im)
+        im = np.ascontiguousarray(np.copy(im))
+        im = cv2.cvtColor(im, cv2.COLOR_RGB2BGR)
+    else:
+        im = np.ascontiguousarray(np.copy(im))
+
+    im_h, im_w = im.shape[:2]
+    text_scale = max(1.0, im.shape[0] / 1600.)
+    text_thickness = 1
+
+    line_inter = im.shape[0] / 40.
+    for i, res in enumerate(results):
+        text = res
+        if text == "":
+            continue
+        if boxes is None:
+            text_w = 3
+            text_h = 1
+        else:
+            box = boxes[i]
+            text_w = int(box[2])
+            text_h = int(box[5] + box[3])
+        text_loc = (text_w, text_h)
+        cv2.putText(
+            im,
+            text,
+            text_loc,
+            cv2.FONT_ITALIC,
+            text_scale, (0, 255, 255),
+            thickness=text_thickness)
+    return im
diff --git a/deploy/serving/cpp/build_server.sh b/deploy/serving/cpp/build_server.sh
new file mode 100644
index 0000000000000000000000000000000000000000..803dce07c1cdb9c6a77f063b7b01391f3109667c
--- /dev/null
+++ b/deploy/serving/cpp/build_server.sh
@@ -0,0 +1,70 @@
+# Base docker image:
+# registry.baidubce.com/paddlepaddle/paddle:latest-dev-cuda10.1-cudnn7-gcc82
+
+# Build the Serving server:
+
+# The client and app packages can use the release builds directly.
+
+# The server must be rebuilt from source because custom OPs are added.
+
+apt-get update
+apt install -y libcurl4-openssl-dev libbz2-dev
+wget https://paddle-serving.bj.bcebos.com/others/centos_ssl.tar && tar xf centos_ssl.tar && rm -rf centos_ssl.tar && mv libcrypto.so.1.0.2k /usr/lib/libcrypto.so.1.0.2k && mv libssl.so.1.0.2k /usr/lib/libssl.so.1.0.2k && ln -sf /usr/lib/libcrypto.so.1.0.2k /usr/lib/libcrypto.so.10 && ln -sf /usr/lib/libssl.so.1.0.2k /usr/lib/libssl.so.10 && ln -sf /usr/lib/libcrypto.so.10 /usr/lib/libcrypto.so && ln -sf /usr/lib/libssl.so.10 /usr/lib/libssl.so
+
+# Install Go dependencies
+rm -rf /usr/local/go
+wget -qO- https://paddle-ci.cdn.bcebos.com/go1.17.2.linux-amd64.tar.gz | tar -xz -C /usr/local
+export GOROOT=/usr/local/go
+export GOPATH=/root/gopath
+export PATH=$PATH:$GOPATH/bin:$GOROOT/bin
+go env -w GO111MODULE=on
+go env -w GOPROXY=https://goproxy.cn,direct
+go install github.com/grpc-ecosystem/grpc-gateway/protoc-gen-grpc-gateway@v1.15.2
+go install github.com/grpc-ecosystem/grpc-gateway/protoc-gen-swagger@v1.15.2
+go install github.com/golang/protobuf/protoc-gen-go@v1.4.3
+go install google.golang.org/grpc@v1.33.0
+go env -w GO111MODULE=auto
+
+# Download the OpenCV library
+wget https://paddle-qa.bj.bcebos.com/PaddleServing/opencv3.tar.gz && tar -xvf opencv3.tar.gz && rm -rf opencv3.tar.gz
+export OPENCV_DIR=$PWD/opencv3
+
+# clone Serving
+git clone https://github.com/PaddlePaddle/Serving.git -b develop --depth=1
+cd Serving
+export Serving_repo_path=$PWD
+git submodule update --init --recursive
+python -m pip install -r python/requirements.txt
+
+# set env
+export PYTHON_INCLUDE_DIR=$(python -c "from distutils.sysconfig import get_python_inc; print(get_python_inc())")
+export PYTHON_LIBRARIES=$(python -c "import distutils.sysconfig as sysconfig; print(sysconfig.get_config_var('LIBDIR'))")
+export PYTHON_EXECUTABLE=`which python`
+
+export CUDA_PATH='/usr/local/cuda'
+export CUDNN_LIBRARY='/usr/local/cuda/lib64/'
+export CUDA_CUDART_LIBRARY='/usr/local/cuda/lib64/'
+export TENSORRT_LIBRARY_PATH='/usr/local/TensorRT6-cuda10.1-cudnn7/targets/x86_64-linux-gnu/'
+
+# Copy the custom OP code
+\cp ../deploy/serving/cpp/preprocess/*.h ${Serving_repo_path}/core/general-server/op
+\cp ../deploy/serving/cpp/preprocess/*.cpp ${Serving_repo_path}/core/general-server/op
+
+# Build the server and export SERVING_BIN
+mkdir server-build-gpu-opencv && cd server-build-gpu-opencv
+cmake -DPYTHON_INCLUDE_DIR=$PYTHON_INCLUDE_DIR \
+    -DPYTHON_LIBRARIES=$PYTHON_LIBRARIES \
+    -DPYTHON_EXECUTABLE=$PYTHON_EXECUTABLE \
+    -DCUDA_TOOLKIT_ROOT_DIR=${CUDA_PATH} \
+    -DCUDNN_LIBRARY=${CUDNN_LIBRARY} \
+    -DCUDA_CUDART_LIBRARY=${CUDA_CUDART_LIBRARY} \
+    -DTENSORRT_ROOT=${TENSORRT_LIBRARY_PATH} \
+    -DOPENCV_DIR=${OPENCV_DIR} \
+    -DWITH_OPENCV=ON \
+    -DSERVER=ON \
+    -DWITH_GPU=ON ..
+make -j32
+
+python -m pip install python/dist/paddle*
+export SERVING_BIN=$PWD/core/general-server/serving
+cd ../../
diff --git a/deploy/serving/cpp/preprocess/mask_rcnn_r50_fpn_1x_coco.cpp b/deploy/serving/cpp/preprocess/mask_rcnn_r50_fpn_1x_coco.cpp
new file mode 100644
index 0000000000000000000000000000000000000000..b60eb24ce288e349eec73a4bf6c6b7ce8983e7fe
--- /dev/null
+++ b/deploy/serving/cpp/preprocess/mask_rcnn_r50_fpn_1x_coco.cpp
@@ -0,0 +1,309 @@
+// Copyright (c) 2022 PaddlePaddle Authors. All Rights Reserved.
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
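Each of the custom serving OPs below consumes a single base64-encoded image string and decodes it server-side (see the Base2Mat / base64Decode helpers). As a minimal client sketch only, assuming a server exported for one of these ops and the stock paddle_serving_client API; the endpoint, config path, and fetch variable name are placeholders, not values taken from this patch:

```python
import base64
from paddle_serving_client import Client

client = Client()
client.load_client_config("serving_client/serving_client_conf.prototxt")  # placeholder path
client.connect(["127.0.0.1:9393"])  # placeholder endpoint

with open("demo.jpg", "rb") as f:
    image_b64 = base64.b64encode(f.read()).decode("utf8")

# The op expects the raw base64 string in the "image" feed slot; the fetch
# name depends on the exported model, so check the client config.
fetch_map = client.predict(
    feed={"image": image_b64},
    fetch=["multiclass_nms3_0.tmp_0"],  # placeholder fetch variable
    batch=False)
print(fetch_map)
```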
+ +#include "core/general-server/op/mask_rcnn_r50_fpn_1x_coco.h" +#include "core/predictor/framework/infer.h" +#include "core/predictor/framework/memory.h" +#include "core/predictor/framework/resource.h" +#include "core/util/include/timer.h" +#include +#include +#include +#include + +namespace baidu { +namespace paddle_serving { +namespace serving { + +using baidu::paddle_serving::Timer; +using baidu::paddle_serving::predictor::InferManager; +using baidu::paddle_serving::predictor::MempoolWrapper; +using baidu::paddle_serving::predictor::PaddleGeneralModelConfig; +using baidu::paddle_serving::predictor::general_model::Request; +using baidu::paddle_serving::predictor::general_model::Response; +using baidu::paddle_serving::predictor::general_model::Tensor; + +int mask_rcnn_r50_fpn_1x_coco::inference() { + VLOG(2) << "Going to run inference"; + const std::vector pre_node_names = pre_names(); + if (pre_node_names.size() != 1) { + LOG(ERROR) << "This op(" << op_name() + << ") can only have one predecessor op, but received " + << pre_node_names.size(); + return -1; + } + const std::string pre_name = pre_node_names[0]; + + const GeneralBlob *input_blob = get_depend_argument(pre_name); + if (!input_blob) { + LOG(ERROR) << "input_blob is nullptr,error"; + return -1; + } + uint64_t log_id = input_blob->GetLogId(); + VLOG(2) << "(logid=" << log_id << ") Get precedent op name: " << pre_name; + + GeneralBlob *output_blob = mutable_data(); + if (!output_blob) { + LOG(ERROR) << "output_blob is nullptr,error"; + return -1; + } + output_blob->SetLogId(log_id); + + if (!input_blob) { + LOG(ERROR) << "(logid=" << log_id + << ") Failed mutable depended argument, op:" << pre_name; + return -1; + } + + const TensorVector *in = &input_blob->tensor_vector; + TensorVector *out = &output_blob->tensor_vector; + + int batch_size = input_blob->_batch_size; + output_blob->_batch_size = batch_size; + VLOG(2) << "(logid=" << log_id << ") infer batch size: " << batch_size; + + Timer timeline; + int64_t start = timeline.TimeStampUS(); + timeline.Start(); + + // only support string type + char *total_input_ptr = static_cast(in->at(0).data.data()); + std::string base64str = total_input_ptr; + + cv::Mat img = Base2Mat(base64str); + cv::cvtColor(img, img, cv::COLOR_BGR2RGB); + + // preprocess + Resize(&img, scale_factor_h, scale_factor_w, im_shape_h, im_shape_w); + Normalize(&img, mean_, scale_, is_scale_); + PadStride(&img, 32); + int input_shape_h = img.rows; + int input_shape_w = img.cols; + std::vector input(1 * 3 * input_shape_h * input_shape_w, 0.0f); + Permute(img, input.data()); + + // create real_in + TensorVector *real_in = new TensorVector(); + if (!real_in) { + LOG(ERROR) << "real_in is nullptr,error"; + return -1; + } + + int in_num = 0; + size_t databuf_size = 0; + void *databuf_data = NULL; + char *databuf_char = NULL; + + // im_shape + std::vector im_shape{static_cast(im_shape_h), + static_cast(im_shape_w)}; + databuf_size = 2 * sizeof(float); + + databuf_data = MempoolWrapper::instance().malloc(databuf_size); + if (!databuf_data) { + LOG(ERROR) << "Malloc failed, size: " << databuf_size; + return -1; + } + + memcpy(databuf_data, im_shape.data(), databuf_size); + databuf_char = reinterpret_cast(databuf_data); + paddle::PaddleBuf paddleBuf_0(databuf_char, databuf_size); + paddle::PaddleTensor tensor_in_0; + tensor_in_0.name = "im_shape"; + tensor_in_0.dtype = paddle::PaddleDType::FLOAT32; + tensor_in_0.shape = {1, 2}; + tensor_in_0.lod = in->at(0).lod; + tensor_in_0.data = paddleBuf_0; + 
real_in->push_back(tensor_in_0); + + // image + in_num = 1 * 3 * input_shape_h * input_shape_w; + databuf_size = in_num * sizeof(float); + + databuf_data = MempoolWrapper::instance().malloc(databuf_size); + if (!databuf_data) { + LOG(ERROR) << "Malloc failed, size: " << databuf_size; + return -1; + } + + memcpy(databuf_data, input.data(), databuf_size); + databuf_char = reinterpret_cast(databuf_data); + paddle::PaddleBuf paddleBuf_1(databuf_char, databuf_size); + paddle::PaddleTensor tensor_in_1; + tensor_in_1.name = "image"; + tensor_in_1.dtype = paddle::PaddleDType::FLOAT32; + tensor_in_1.shape = {1, 3, input_shape_h, input_shape_w}; + tensor_in_1.lod = in->at(0).lod; + tensor_in_1.data = paddleBuf_1; + real_in->push_back(tensor_in_1); + + // scale_factor + std::vector scale_factor{scale_factor_h, scale_factor_w}; + databuf_size = 2 * sizeof(float); + + databuf_data = MempoolWrapper::instance().malloc(databuf_size); + if (!databuf_data) { + LOG(ERROR) << "Malloc failed, size: " << databuf_size; + return -1; + } + + memcpy(databuf_data, scale_factor.data(), databuf_size); + databuf_char = reinterpret_cast(databuf_data); + paddle::PaddleBuf paddleBuf_2(databuf_char, databuf_size); + paddle::PaddleTensor tensor_in_2; + tensor_in_2.name = "scale_factor"; + tensor_in_2.dtype = paddle::PaddleDType::FLOAT32; + tensor_in_2.shape = {1, 2}; + tensor_in_2.lod = in->at(0).lod; + tensor_in_2.data = paddleBuf_2; + real_in->push_back(tensor_in_2); + + if (InferManager::instance().infer(engine_name().c_str(), real_in, out, + batch_size)) { + LOG(ERROR) << "(logid=" << log_id + << ") Failed do infer in fluid model: " << engine_name().c_str(); + return -1; + } + + int64_t end = timeline.TimeStampUS(); + CopyBlobInfo(input_blob, output_blob); + AddBlobInfo(output_blob, start); + AddBlobInfo(output_blob, end); + return 0; +} + +void mask_rcnn_r50_fpn_1x_coco::Resize(cv::Mat *img, float &scale_factor_h, + float &scale_factor_w, int &im_shape_h, + int &im_shape_w) { + // keep_ratio + int im_size_max = std::max(img->rows, img->cols); + int im_size_min = std::min(img->rows, img->cols); + int target_size_max = std::max(im_shape_h, im_shape_w); + int target_size_min = std::min(im_shape_h, im_shape_w); + float scale_min = + static_cast(target_size_min) / static_cast(im_size_min); + float scale_max = + static_cast(target_size_max) / static_cast(im_size_max); + float scale_ratio = std::min(scale_min, scale_max); + + // scale_factor + scale_factor_h = scale_ratio; + scale_factor_w = scale_ratio; + + // Resize + cv::resize(*img, *img, cv::Size(), scale_ratio, scale_ratio, 2); + im_shape_h = img->rows; + im_shape_w = img->cols; +} + +void mask_rcnn_r50_fpn_1x_coco::Normalize(cv::Mat *img, + const std::vector &mean, + const std::vector &scale, + const bool is_scale) { + // Normalize + double e = 1.0; + if (is_scale) { + e /= 255.0; + } + (*img).convertTo(*img, CV_32FC3, e); + for (int h = 0; h < img->rows; h++) { + for (int w = 0; w < img->cols; w++) { + img->at(h, w)[0] = + (img->at(h, w)[0] - mean[0]) / scale[0]; + img->at(h, w)[1] = + (img->at(h, w)[1] - mean[1]) / scale[1]; + img->at(h, w)[2] = + (img->at(h, w)[2] - mean[2]) / scale[2]; + } + } +} + +void mask_rcnn_r50_fpn_1x_coco::PadStride(cv::Mat *img, int stride_) { + // PadStride + if (stride_ <= 0) + return; + int rh = img->rows; + int rw = img->cols; + int nh = (rh / stride_) * stride_ + (rh % stride_ != 0) * stride_; + int nw = (rw / stride_) * stride_ + (rw % stride_ != 0) * stride_; + cv::copyMakeBorder(*img, *img, 0, nh - rh, 0, nw - rw, 
cv::BORDER_CONSTANT, + cv::Scalar(0)); +} + +void mask_rcnn_r50_fpn_1x_coco::Permute(const cv::Mat &img, float *data) { + // Permute + int rh = img.rows; + int rw = img.cols; + int rc = img.channels(); + for (int i = 0; i < rc; ++i) { + cv::extractChannel(img, cv::Mat(rh, rw, CV_32FC1, data + i * rh * rw), i); + } +} + +cv::Mat mask_rcnn_r50_fpn_1x_coco::Base2Mat(std::string &base64_data) { + cv::Mat img; + std::string s_mat; + s_mat = base64Decode(base64_data.data(), base64_data.size()); + std::vector base64_img(s_mat.begin(), s_mat.end()); + img = cv::imdecode(base64_img, cv::IMREAD_COLOR); // CV_LOAD_IMAGE_COLOR + return img; +} + +std::string mask_rcnn_r50_fpn_1x_coco::base64Decode(const char *Data, + int DataByte) { + const char DecodeTable[] = { + 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, + 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, + 0, 0, 0, 0, 0, 0, 0, 0, 0, + 62, // '+' + 0, 0, 0, + 63, // '/' + 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, // '0'-'9' + 0, 0, 0, 0, 0, 0, 0, 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, + 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, // 'A'-'Z' + 0, 0, 0, 0, 0, 0, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, + 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, // 'a'-'z' + }; + + std::string strDecode; + int nValue; + int i = 0; + while (i < DataByte) { + if (*Data != '\r' && *Data != '\n') { + nValue = DecodeTable[*Data++] << 18; + nValue += DecodeTable[*Data++] << 12; + strDecode += (nValue & 0x00FF0000) >> 16; + if (*Data != '=') { + nValue += DecodeTable[*Data++] << 6; + strDecode += (nValue & 0x0000FF00) >> 8; + if (*Data != '=') { + nValue += DecodeTable[*Data++]; + strDecode += nValue & 0x000000FF; + } + } + i += 4; + } else // 回车换行,跳过 + { + Data++; + i++; + } + } + return strDecode; +} + +DEFINE_OP(mask_rcnn_r50_fpn_1x_coco); + +} // namespace serving +} // namespace paddle_serving +} // namespace baidu diff --git a/deploy/serving/cpp/preprocess/mask_rcnn_r50_fpn_1x_coco.h b/deploy/serving/cpp/preprocess/mask_rcnn_r50_fpn_1x_coco.h new file mode 100644 index 0000000000000000000000000000000000000000..5b2b8377a88b0cbcc313a3dd8a96c35dd9f57f91 --- /dev/null +++ b/deploy/serving/cpp/preprocess/mask_rcnn_r50_fpn_1x_coco.h @@ -0,0 +1,72 @@ +// Copyright (c) 2022 PaddlePaddle Authors. All Rights Reserved. +// +// Licensed under the Apache License, Version 2.0 (the "License"); +// you may not use this file except in compliance with the License. +// You may obtain a copy of the License at +// +// http://www.apache.org/licenses/LICENSE-2.0 +// +// Unless required by applicable law or agreed to in writing, software +// distributed under the License is distributed on an "AS IS" BASIS, +// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +// See the License for the specific language governing permissions and +// limitations under the License. 
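The hand-rolled base64Decode above tolerates embedded CR/LF characters and stops consuming payload bytes at '=' padding. As a behavioral reference only, the equivalent in Python is a one-liner:

```python
import base64

def b64_to_bytes(data: str) -> bytes:
    # Strip the CR/LF characters that the C++ decoder skips, then decode.
    return base64.b64decode(data.replace("\r", "").replace("\n", ""))
```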
+ +#pragma once +#include "core/general-server/general_model_service.pb.h" +#include "core/general-server/op/general_infer_helper.h" +#include "paddle_inference_api.h" // NOLINT +#include +#include + +#include "opencv2/core.hpp" +#include "opencv2/imgcodecs.hpp" +#include "opencv2/imgproc.hpp" +#include +#include +#include +#include +#include + +#include +#include +#include + +namespace baidu { +namespace paddle_serving { +namespace serving { + +class mask_rcnn_r50_fpn_1x_coco + : public baidu::paddle_serving::predictor::OpWithChannel { +public: + typedef std::vector TensorVector; + + DECLARE_OP(mask_rcnn_r50_fpn_1x_coco); + + int inference(); + +private: + // preprocess + std::vector mean_ = {0.485f, 0.456f, 0.406f}; + std::vector scale_ = {0.229f, 0.224f, 0.225f}; + bool is_scale_ = true; + int im_shape_h = 1333; + int im_shape_w = 800; + float scale_factor_h = 1.0f; + float scale_factor_w = 1.0f; + + void Resize(cv::Mat *img, float &scale_factor_h, float &scale_factor_w, + int &im_shape_h, int &im_shape_w); + void Normalize(cv::Mat *img, const std::vector &mean, + const std::vector &scale, const bool is_scale); + void PadStride(cv::Mat *img, int stride_ = -1); + void Permute(const cv::Mat &img, float *data); + + // read pics + cv::Mat Base2Mat(std::string &base64_data); + std::string base64Decode(const char *Data, int DataByte); +}; + +} // namespace serving +} // namespace paddle_serving +} // namespace baidu diff --git a/deploy/serving/cpp/preprocess/picodet_lcnet_1_5x_416_coco.cpp b/deploy/serving/cpp/preprocess/picodet_lcnet_1_5x_416_coco.cpp new file mode 100644 index 0000000000000000000000000000000000000000..66bfeaef21189e395c2f15d716468723465c24b6 --- /dev/null +++ b/deploy/serving/cpp/preprocess/picodet_lcnet_1_5x_416_coco.cpp @@ -0,0 +1,258 @@ +// Copyright (c) 2022 PaddlePaddle Authors. All Rights Reserved. +// +// Licensed under the Apache License, Version 2.0 (the "License"); +// you may not use this file except in compliance with the License. +// You may obtain a copy of the License at +// +// http://www.apache.org/licenses/LICENSE-2.0 +// +// Unless required by applicable law or agreed to in writing, software +// distributed under the License is distributed on an "AS IS" BASIS, +// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +// See the License for the specific language governing permissions and +// limitations under the License. 
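Unlike the Mask R-CNN op above (keep-ratio resize plus stride-32 padding), the PicoDet op below resizes directly to a fixed 416x416, then normalizes and permutes to CHW. A rough NumPy/OpenCV sketch of the same preprocess_det transform, for illustration only; the C++ passes interpolation flag 2, i.e. cv::INTER_CUBIC:

```python
import cv2
import numpy as np

def preprocess_det(img_rgb, im_shape=(416, 416),
                   mean=(0.485, 0.456, 0.406), std=(0.229, 0.224, 0.225)):
    h, w = im_shape
    # scale_factor maps network coordinates back to the original image
    scale_factor = np.array([h / img_rgb.shape[0], w / img_rgb.shape[1]],
                            dtype=np.float32)
    img = cv2.resize(img_rgb, (w, h), interpolation=cv2.INTER_CUBIC)
    img = img.astype(np.float32) / 255.0            # is_scale_ == true
    img = (img - np.array(mean, np.float32)) / np.array(std, np.float32)
    chw = img.transpose(2, 0, 1)[np.newaxis]        # HWC -> NCHW, batch of 1
    return chw, scale_factor
```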
+ +#include "core/general-server/op/picodet_lcnet_1_5x_416_coco.h" +#include "core/predictor/framework/infer.h" +#include "core/predictor/framework/memory.h" +#include "core/predictor/framework/resource.h" +#include "core/util/include/timer.h" +#include +#include +#include +#include + +namespace baidu { +namespace paddle_serving { +namespace serving { + +using baidu::paddle_serving::Timer; +using baidu::paddle_serving::predictor::InferManager; +using baidu::paddle_serving::predictor::MempoolWrapper; +using baidu::paddle_serving::predictor::PaddleGeneralModelConfig; +using baidu::paddle_serving::predictor::general_model::Request; +using baidu::paddle_serving::predictor::general_model::Response; +using baidu::paddle_serving::predictor::general_model::Tensor; + +int picodet_lcnet_1_5x_416_coco::inference() { + VLOG(2) << "Going to run inference"; + const std::vector pre_node_names = pre_names(); + if (pre_node_names.size() != 1) { + LOG(ERROR) << "This op(" << op_name() + << ") can only have one predecessor op, but received " + << pre_node_names.size(); + return -1; + } + const std::string pre_name = pre_node_names[0]; + + const GeneralBlob *input_blob = get_depend_argument(pre_name); + if (!input_blob) { + LOG(ERROR) << "input_blob is nullptr,error"; + return -1; + } + uint64_t log_id = input_blob->GetLogId(); + VLOG(2) << "(logid=" << log_id << ") Get precedent op name: " << pre_name; + + GeneralBlob *output_blob = mutable_data(); + if (!output_blob) { + LOG(ERROR) << "output_blob is nullptr,error"; + return -1; + } + output_blob->SetLogId(log_id); + + if (!input_blob) { + LOG(ERROR) << "(logid=" << log_id + << ") Failed mutable depended argument, op:" << pre_name; + return -1; + } + + const TensorVector *in = &input_blob->tensor_vector; + TensorVector *out = &output_blob->tensor_vector; + + int batch_size = input_blob->_batch_size; + output_blob->_batch_size = batch_size; + VLOG(2) << "(logid=" << log_id << ") infer batch size: " << batch_size; + + Timer timeline; + int64_t start = timeline.TimeStampUS(); + timeline.Start(); + + // only support string type + char *total_input_ptr = static_cast(in->at(0).data.data()); + std::string base64str = total_input_ptr; + + cv::Mat img = Base2Mat(base64str); + cv::cvtColor(img, img, cv::COLOR_BGR2RGB); + + // preprocess + std::vector input(1 * 3 * im_shape_h * im_shape_w, 0.0f); + preprocess_det(img, input.data(), scale_factor_h, scale_factor_w, im_shape_h, + im_shape_w, mean_, scale_, is_scale_); + + // create real_in + TensorVector *real_in = new TensorVector(); + if (!real_in) { + LOG(ERROR) << "real_in is nullptr,error"; + return -1; + } + + int in_num = 0; + size_t databuf_size = 0; + void *databuf_data = NULL; + char *databuf_char = NULL; + + // image + in_num = 1 * 3 * im_shape_h * im_shape_w; + databuf_size = in_num * sizeof(float); + + databuf_data = MempoolWrapper::instance().malloc(databuf_size); + if (!databuf_data) { + LOG(ERROR) << "Malloc failed, size: " << databuf_size; + return -1; + } + + memcpy(databuf_data, input.data(), databuf_size); + databuf_char = reinterpret_cast(databuf_data); + paddle::PaddleBuf paddleBuf(databuf_char, databuf_size); + paddle::PaddleTensor tensor_in; + tensor_in.name = "image"; + tensor_in.dtype = paddle::PaddleDType::FLOAT32; + tensor_in.shape = {1, 3, im_shape_h, im_shape_w}; + tensor_in.lod = in->at(0).lod; + tensor_in.data = paddleBuf; + real_in->push_back(tensor_in); + + // scale_factor + std::vector scale_factor{scale_factor_h, scale_factor_w}; + databuf_size = 2 * sizeof(float); + + databuf_data = 
MempoolWrapper::instance().malloc(databuf_size); + if (!databuf_data) { + LOG(ERROR) << "Malloc failed, size: " << databuf_size; + return -1; + } + + memcpy(databuf_data, scale_factor.data(), databuf_size); + databuf_char = reinterpret_cast(databuf_data); + paddle::PaddleBuf paddleBuf_2(databuf_char, databuf_size); + paddle::PaddleTensor tensor_in_2; + tensor_in_2.name = "scale_factor"; + tensor_in_2.dtype = paddle::PaddleDType::FLOAT32; + tensor_in_2.shape = {1, 2}; + tensor_in_2.lod = in->at(0).lod; + tensor_in_2.data = paddleBuf_2; + real_in->push_back(tensor_in_2); + + if (InferManager::instance().infer(engine_name().c_str(), real_in, out, + batch_size)) { + LOG(ERROR) << "(logid=" << log_id + << ") Failed do infer in fluid model: " << engine_name().c_str(); + return -1; + } + + int64_t end = timeline.TimeStampUS(); + CopyBlobInfo(input_blob, output_blob); + AddBlobInfo(output_blob, start); + AddBlobInfo(output_blob, end); + return 0; +} + +void picodet_lcnet_1_5x_416_coco::preprocess_det( + const cv::Mat &img, float *data, float &scale_factor_h, + float &scale_factor_w, int im_shape_h, int im_shape_w, + const std::vector &mean, const std::vector &scale, + const bool is_scale) { + // scale_factor + scale_factor_h = + static_cast(im_shape_h) / static_cast(img.rows); + scale_factor_w = + static_cast(im_shape_w) / static_cast(img.cols); + + // Resize + cv::Mat resize_img; + cv::resize(img, resize_img, cv::Size(im_shape_w, im_shape_h), 0, 0, 2); + + // Normalize + double e = 1.0; + if (is_scale) { + e /= 255.0; + } + cv::Mat img_fp; + (resize_img).convertTo(img_fp, CV_32FC3, e); + for (int h = 0; h < im_shape_h; h++) { + for (int w = 0; w < im_shape_w; w++) { + img_fp.at(h, w)[0] = + (img_fp.at(h, w)[0] - mean[0]) / scale[0]; + img_fp.at(h, w)[1] = + (img_fp.at(h, w)[1] - mean[1]) / scale[1]; + img_fp.at(h, w)[2] = + (img_fp.at(h, w)[2] - mean[2]) / scale[2]; + } + } + + // Permute + int rh = img_fp.rows; + int rw = img_fp.cols; + int rc = img_fp.channels(); + for (int i = 0; i < rc; ++i) { + cv::extractChannel(img_fp, cv::Mat(rh, rw, CV_32FC1, data + i * rh * rw), + i); + } +} + +cv::Mat picodet_lcnet_1_5x_416_coco::Base2Mat(std::string &base64_data) { + cv::Mat img; + std::string s_mat; + s_mat = base64Decode(base64_data.data(), base64_data.size()); + std::vector base64_img(s_mat.begin(), s_mat.end()); + img = cv::imdecode(base64_img, cv::IMREAD_COLOR); // CV_LOAD_IMAGE_COLOR + return img; +} + +std::string picodet_lcnet_1_5x_416_coco::base64Decode(const char *Data, + int DataByte) { + const char DecodeTable[] = { + 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, + 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, + 0, 0, 0, 0, 0, 0, 0, 0, 0, + 62, // '+' + 0, 0, 0, + 63, // '/' + 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, // '0'-'9' + 0, 0, 0, 0, 0, 0, 0, 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, + 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, // 'A'-'Z' + 0, 0, 0, 0, 0, 0, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, + 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, // 'a'-'z' + }; + + std::string strDecode; + int nValue; + int i = 0; + while (i < DataByte) { + if (*Data != '\r' && *Data != '\n') { + nValue = DecodeTable[*Data++] << 18; + nValue += DecodeTable[*Data++] << 12; + strDecode += (nValue & 0x00FF0000) >> 16; + if (*Data != '=') { + nValue += DecodeTable[*Data++] << 6; + strDecode += (nValue & 0x0000FF00) >> 8; + if (*Data != '=') { + nValue += DecodeTable[*Data++]; + strDecode += nValue & 0x000000FF; + } + } + i += 4; + } else // 回车换行,跳过 + { + Data++; + 
i++; + } + } + return strDecode; +} + +DEFINE_OP(picodet_lcnet_1_5x_416_coco); + +} // namespace serving +} // namespace paddle_serving +} // namespace baidu diff --git a/deploy/serving/cpp/preprocess/picodet_lcnet_1_5x_416_coco.h b/deploy/serving/cpp/preprocess/picodet_lcnet_1_5x_416_coco.h new file mode 100644 index 0000000000000000000000000000000000000000..4db649a27b2dbd408b1984511cbb184c112bf1fe --- /dev/null +++ b/deploy/serving/cpp/preprocess/picodet_lcnet_1_5x_416_coco.h @@ -0,0 +1,69 @@ +// Copyright (c) 2022 PaddlePaddle Authors. All Rights Reserved. +// +// Licensed under the Apache License, Version 2.0 (the "License"); +// you may not use this file except in compliance with the License. +// You may obtain a copy of the License at +// +// http://www.apache.org/licenses/LICENSE-2.0 +// +// Unless required by applicable law or agreed to in writing, software +// distributed under the License is distributed on an "AS IS" BASIS, +// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +// See the License for the specific language governing permissions and +// limitations under the License. + +#pragma once +#include "core/general-server/general_model_service.pb.h" +#include "core/general-server/op/general_infer_helper.h" +#include "paddle_inference_api.h" // NOLINT +#include +#include + +#include "opencv2/core.hpp" +#include "opencv2/imgcodecs.hpp" +#include "opencv2/imgproc.hpp" +#include +#include +#include +#include +#include + +#include +#include +#include + +namespace baidu { +namespace paddle_serving { +namespace serving { + +class picodet_lcnet_1_5x_416_coco + : public baidu::paddle_serving::predictor::OpWithChannel { +public: + typedef std::vector TensorVector; + + DECLARE_OP(picodet_lcnet_1_5x_416_coco); + + int inference(); + +private: + // preprocess + std::vector mean_ = {0.485f, 0.456f, 0.406f}; + std::vector scale_ = {0.229f, 0.224f, 0.225f}; + bool is_scale_ = true; + int im_shape_h = 416; + int im_shape_w = 416; + float scale_factor_h = 1.0f; + float scale_factor_w = 1.0f; + void preprocess_det(const cv::Mat &img, float *data, float &scale_factor_h, + float &scale_factor_w, int im_shape_h, int im_shape_w, + const std::vector &mean, + const std::vector &scale, const bool is_scale); + + // read pics + cv::Mat Base2Mat(std::string &base64_data); + std::string base64Decode(const char *Data, int DataByte); +}; + +} // namespace serving +} // namespace paddle_serving +} // namespace baidu diff --git a/deploy/serving/cpp/preprocess/ppyolo_mbv3_large_coco.cpp b/deploy/serving/cpp/preprocess/ppyolo_mbv3_large_coco.cpp new file mode 100644 index 0000000000000000000000000000000000000000..2d2d62cd321bf1d2d5055b827552337e86b4aa15 --- /dev/null +++ b/deploy/serving/cpp/preprocess/ppyolo_mbv3_large_coco.cpp @@ -0,0 +1,282 @@ +// Copyright (c) 2022 PaddlePaddle Authors. All Rights Reserved. +// +// Licensed under the Apache License, Version 2.0 (the "License"); +// you may not use this file except in compliance with the License. +// You may obtain a copy of the License at +// +// http://www.apache.org/licenses/LICENSE-2.0 +// +// Unless required by applicable law or agreed to in writing, software +// distributed under the License is distributed on an "AS IS" BASIS, +// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +// See the License for the specific language governing permissions and +// limitations under the License. 
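The PP-YOLO op below feeds three tensors rather than PicoDet's two: im_shape ({1, 2}), image ({1, 3, 320, 320}, per the 320x320 defaults in its header) and scale_factor ({1, 2}). A small sketch of the equivalent feed dictionary on the Python side; the keys mirror the tensor_in_*.name strings in this file, while the helper itself is illustrative:

```python
import numpy as np

def build_feeds(chw_img, orig_h, orig_w, im_shape=(320, 320)):
    h, w = im_shape
    return {
        "im_shape": np.array([[h, w]], dtype=np.float32),    # shape {1, 2}
        "image": chw_img.astype(np.float32),                 # shape {1, 3, h, w}
        "scale_factor": np.array([[h / orig_h, w / orig_w]],
                                 dtype=np.float32),          # shape {1, 2}
    }
```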
+ +#include "core/general-server/op/ppyolo_mbv3_large_coco.h" +#include "core/predictor/framework/infer.h" +#include "core/predictor/framework/memory.h" +#include "core/predictor/framework/resource.h" +#include "core/util/include/timer.h" +#include +#include +#include +#include + +namespace baidu { +namespace paddle_serving { +namespace serving { + +using baidu::paddle_serving::Timer; +using baidu::paddle_serving::predictor::InferManager; +using baidu::paddle_serving::predictor::MempoolWrapper; +using baidu::paddle_serving::predictor::PaddleGeneralModelConfig; +using baidu::paddle_serving::predictor::general_model::Request; +using baidu::paddle_serving::predictor::general_model::Response; +using baidu::paddle_serving::predictor::general_model::Tensor; + +int ppyolo_mbv3_large_coco::inference() { + VLOG(2) << "Going to run inference"; + const std::vector pre_node_names = pre_names(); + if (pre_node_names.size() != 1) { + LOG(ERROR) << "This op(" << op_name() + << ") can only have one predecessor op, but received " + << pre_node_names.size(); + return -1; + } + const std::string pre_name = pre_node_names[0]; + + const GeneralBlob *input_blob = get_depend_argument(pre_name); + if (!input_blob) { + LOG(ERROR) << "input_blob is nullptr,error"; + return -1; + } + uint64_t log_id = input_blob->GetLogId(); + VLOG(2) << "(logid=" << log_id << ") Get precedent op name: " << pre_name; + + GeneralBlob *output_blob = mutable_data(); + if (!output_blob) { + LOG(ERROR) << "output_blob is nullptr,error"; + return -1; + } + output_blob->SetLogId(log_id); + + if (!input_blob) { + LOG(ERROR) << "(logid=" << log_id + << ") Failed mutable depended argument, op:" << pre_name; + return -1; + } + + const TensorVector *in = &input_blob->tensor_vector; + TensorVector *out = &output_blob->tensor_vector; + + int batch_size = input_blob->_batch_size; + output_blob->_batch_size = batch_size; + VLOG(2) << "(logid=" << log_id << ") infer batch size: " << batch_size; + + Timer timeline; + int64_t start = timeline.TimeStampUS(); + timeline.Start(); + + // only support string type + char *total_input_ptr = static_cast(in->at(0).data.data()); + std::string base64str = total_input_ptr; + + cv::Mat img = Base2Mat(base64str); + cv::cvtColor(img, img, cv::COLOR_BGR2RGB); + + // preprocess + std::vector input(1 * 3 * im_shape_h * im_shape_w, 0.0f); + preprocess_det(img, input.data(), scale_factor_h, scale_factor_w, im_shape_h, + im_shape_w, mean_, scale_, is_scale_); + + // create real_in + TensorVector *real_in = new TensorVector(); + if (!real_in) { + LOG(ERROR) << "real_in is nullptr,error"; + return -1; + } + + int in_num = 0; + size_t databuf_size = 0; + void *databuf_data = NULL; + char *databuf_char = NULL; + + // im_shape + std::vector im_shape{static_cast(im_shape_h), + static_cast(im_shape_w)}; + databuf_size = 2 * sizeof(float); + + databuf_data = MempoolWrapper::instance().malloc(databuf_size); + if (!databuf_data) { + LOG(ERROR) << "Malloc failed, size: " << databuf_size; + return -1; + } + + memcpy(databuf_data, im_shape.data(), databuf_size); + databuf_char = reinterpret_cast(databuf_data); + paddle::PaddleBuf paddleBuf_0(databuf_char, databuf_size); + paddle::PaddleTensor tensor_in_0; + tensor_in_0.name = "im_shape"; + tensor_in_0.dtype = paddle::PaddleDType::FLOAT32; + tensor_in_0.shape = {1, 2}; + tensor_in_0.lod = in->at(0).lod; + tensor_in_0.data = paddleBuf_0; + real_in->push_back(tensor_in_0); + + // image + in_num = 1 * 3 * im_shape_h * im_shape_w; + databuf_size = in_num * sizeof(float); + + databuf_data = 
MempoolWrapper::instance().malloc(databuf_size); + if (!databuf_data) { + LOG(ERROR) << "Malloc failed, size: " << databuf_size; + return -1; + } + + memcpy(databuf_data, input.data(), databuf_size); + databuf_char = reinterpret_cast(databuf_data); + paddle::PaddleBuf paddleBuf_1(databuf_char, databuf_size); + paddle::PaddleTensor tensor_in_1; + tensor_in_1.name = "image"; + tensor_in_1.dtype = paddle::PaddleDType::FLOAT32; + tensor_in_1.shape = {1, 3, im_shape_h, im_shape_w}; + tensor_in_1.lod = in->at(0).lod; + tensor_in_1.data = paddleBuf_1; + real_in->push_back(tensor_in_1); + + // scale_factor + std::vector scale_factor{scale_factor_h, scale_factor_w}; + databuf_size = 2 * sizeof(float); + + databuf_data = MempoolWrapper::instance().malloc(databuf_size); + if (!databuf_data) { + LOG(ERROR) << "Malloc failed, size: " << databuf_size; + return -1; + } + + memcpy(databuf_data, scale_factor.data(), databuf_size); + databuf_char = reinterpret_cast(databuf_data); + paddle::PaddleBuf paddleBuf_2(databuf_char, databuf_size); + paddle::PaddleTensor tensor_in_2; + tensor_in_2.name = "scale_factor"; + tensor_in_2.dtype = paddle::PaddleDType::FLOAT32; + tensor_in_2.shape = {1, 2}; + tensor_in_2.lod = in->at(0).lod; + tensor_in_2.data = paddleBuf_2; + real_in->push_back(tensor_in_2); + + if (InferManager::instance().infer(engine_name().c_str(), real_in, out, + batch_size)) { + LOG(ERROR) << "(logid=" << log_id + << ") Failed do infer in fluid model: " << engine_name().c_str(); + return -1; + } + + int64_t end = timeline.TimeStampUS(); + CopyBlobInfo(input_blob, output_blob); + AddBlobInfo(output_blob, start); + AddBlobInfo(output_blob, end); + return 0; +} + +void ppyolo_mbv3_large_coco::preprocess_det(const cv::Mat &img, float *data, + float &scale_factor_h, + float &scale_factor_w, + int im_shape_h, int im_shape_w, + const std::vector &mean, + const std::vector &scale, + const bool is_scale) { + // scale_factor + scale_factor_h = + static_cast(im_shape_h) / static_cast(img.rows); + scale_factor_w = + static_cast(im_shape_w) / static_cast(img.cols); + + // Resize + cv::Mat resize_img; + cv::resize(img, resize_img, cv::Size(im_shape_w, im_shape_h), 0, 0, 2); + + // Normalize + double e = 1.0; + if (is_scale) { + e /= 255.0; + } + cv::Mat img_fp; + (resize_img).convertTo(img_fp, CV_32FC3, e); + for (int h = 0; h < im_shape_h; h++) { + for (int w = 0; w < im_shape_w; w++) { + img_fp.at(h, w)[0] = + (img_fp.at(h, w)[0] - mean[0]) / scale[0]; + img_fp.at(h, w)[1] = + (img_fp.at(h, w)[1] - mean[1]) / scale[1]; + img_fp.at(h, w)[2] = + (img_fp.at(h, w)[2] - mean[2]) / scale[2]; + } + } + + // Permute + int rh = img_fp.rows; + int rw = img_fp.cols; + int rc = img_fp.channels(); + for (int i = 0; i < rc; ++i) { + cv::extractChannel(img_fp, cv::Mat(rh, rw, CV_32FC1, data + i * rh * rw), + i); + } +} + +cv::Mat ppyolo_mbv3_large_coco::Base2Mat(std::string &base64_data) { + cv::Mat img; + std::string s_mat; + s_mat = base64Decode(base64_data.data(), base64_data.size()); + std::vector base64_img(s_mat.begin(), s_mat.end()); + img = cv::imdecode(base64_img, cv::IMREAD_COLOR); // CV_LOAD_IMAGE_COLOR + return img; +} + +std::string ppyolo_mbv3_large_coco::base64Decode(const char *Data, + int DataByte) { + const char DecodeTable[] = { + 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, + 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, + 0, 0, 0, 0, 0, 0, 0, 0, 0, + 62, // '+' + 0, 0, 0, + 63, // '/' + 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, // '0'-'9' + 0, 0, 0, 0, 0, 0, 0, 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, + 10, 
11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, // 'A'-'Z' + 0, 0, 0, 0, 0, 0, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, + 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, // 'a'-'z' + }; + + std::string strDecode; + int nValue; + int i = 0; + while (i < DataByte) { + if (*Data != '\r' && *Data != '\n') { + nValue = DecodeTable[*Data++] << 18; + nValue += DecodeTable[*Data++] << 12; + strDecode += (nValue & 0x00FF0000) >> 16; + if (*Data != '=') { + nValue += DecodeTable[*Data++] << 6; + strDecode += (nValue & 0x0000FF00) >> 8; + if (*Data != '=') { + nValue += DecodeTable[*Data++]; + strDecode += nValue & 0x000000FF; + } + } + i += 4; + } else // 回车换行,跳过 + { + Data++; + i++; + } + } + return strDecode; +} + +DEFINE_OP(ppyolo_mbv3_large_coco); + +} // namespace serving +} // namespace paddle_serving +} // namespace baidu diff --git a/deploy/serving/cpp/preprocess/ppyolo_mbv3_large_coco.h b/deploy/serving/cpp/preprocess/ppyolo_mbv3_large_coco.h new file mode 100644 index 0000000000000000000000000000000000000000..5f55e18f51eae4c3f5588594b2db05773d529987 --- /dev/null +++ b/deploy/serving/cpp/preprocess/ppyolo_mbv3_large_coco.h @@ -0,0 +1,69 @@ +// Copyright (c) 2022 PaddlePaddle Authors. All Rights Reserved. +// +// Licensed under the Apache License, Version 2.0 (the "License"); +// you may not use this file except in compliance with the License. +// You may obtain a copy of the License at +// +// http://www.apache.org/licenses/LICENSE-2.0 +// +// Unless required by applicable law or agreed to in writing, software +// distributed under the License is distributed on an "AS IS" BASIS, +// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +// See the License for the specific language governing permissions and +// limitations under the License. + +#pragma once +#include "core/general-server/general_model_service.pb.h" +#include "core/general-server/op/general_infer_helper.h" +#include "paddle_inference_api.h" // NOLINT +#include +#include + +#include "opencv2/core.hpp" +#include "opencv2/imgcodecs.hpp" +#include "opencv2/imgproc.hpp" +#include +#include +#include +#include +#include + +#include +#include +#include + +namespace baidu { +namespace paddle_serving { +namespace serving { + +class ppyolo_mbv3_large_coco + : public baidu::paddle_serving::predictor::OpWithChannel { +public: + typedef std::vector TensorVector; + + DECLARE_OP(ppyolo_mbv3_large_coco); + + int inference(); + +private: + // preprocess + std::vector mean_ = {0.485f, 0.456f, 0.406f}; + std::vector scale_ = {0.229f, 0.224f, 0.225f}; + bool is_scale_ = true; + int im_shape_h = 320; + int im_shape_w = 320; + float scale_factor_h = 1.0f; + float scale_factor_w = 1.0f; + void preprocess_det(const cv::Mat &img, float *data, float &scale_factor_h, + float &scale_factor_w, int im_shape_h, int im_shape_w, + const std::vector &mean, + const std::vector &scale, const bool is_scale); + + // read pics + cv::Mat Base2Mat(std::string &base64_data); + std::string base64Decode(const char *Data, int DataByte); +}; + +} // namespace serving +} // namespace paddle_serving +} // namespace baidu diff --git a/deploy/serving/cpp/preprocess/ppyoloe_crn_s_300e_coco.cpp b/deploy/serving/cpp/preprocess/ppyoloe_crn_s_300e_coco.cpp new file mode 100644 index 0000000000000000000000000000000000000000..f59c4f341539db3a7b777051c49da6d6f6919166 --- /dev/null +++ b/deploy/serving/cpp/preprocess/ppyoloe_crn_s_300e_coco.cpp @@ -0,0 +1,260 @@ +// Copyright (c) 2022 PaddlePaddle Authors. All Rights Reserved. 
+// +// Licensed under the Apache License, Version 2.0 (the "License"); +// you may not use this file except in compliance with the License. +// You may obtain a copy of the License at +// +// http://www.apache.org/licenses/LICENSE-2.0 +// +// Unless required by applicable law or agreed to in writing, software +// distributed under the License is distributed on an "AS IS" BASIS, +// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +// See the License for the specific language governing permissions and +// limitations under the License. + +#include "core/general-server/op/ppyoloe_crn_s_300e_coco.h" +#include "core/predictor/framework/infer.h" +#include "core/predictor/framework/memory.h" +#include "core/predictor/framework/resource.h" +#include "core/util/include/timer.h" +#include +#include +#include +#include + +namespace baidu { +namespace paddle_serving { +namespace serving { + +using baidu::paddle_serving::Timer; +using baidu::paddle_serving::predictor::InferManager; +using baidu::paddle_serving::predictor::MempoolWrapper; +using baidu::paddle_serving::predictor::PaddleGeneralModelConfig; +using baidu::paddle_serving::predictor::general_model::Request; +using baidu::paddle_serving::predictor::general_model::Response; +using baidu::paddle_serving::predictor::general_model::Tensor; + +int ppyoloe_crn_s_300e_coco::inference() { + VLOG(2) << "Going to run inference"; + const std::vector pre_node_names = pre_names(); + if (pre_node_names.size() != 1) { + LOG(ERROR) << "This op(" << op_name() + << ") can only have one predecessor op, but received " + << pre_node_names.size(); + return -1; + } + const std::string pre_name = pre_node_names[0]; + + const GeneralBlob *input_blob = get_depend_argument(pre_name); + if (!input_blob) { + LOG(ERROR) << "input_blob is nullptr,error"; + return -1; + } + uint64_t log_id = input_blob->GetLogId(); + VLOG(2) << "(logid=" << log_id << ") Get precedent op name: " << pre_name; + + GeneralBlob *output_blob = mutable_data(); + if (!output_blob) { + LOG(ERROR) << "output_blob is nullptr,error"; + return -1; + } + output_blob->SetLogId(log_id); + + if (!input_blob) { + LOG(ERROR) << "(logid=" << log_id + << ") Failed mutable depended argument, op:" << pre_name; + return -1; + } + + const TensorVector *in = &input_blob->tensor_vector; + TensorVector *out = &output_blob->tensor_vector; + + int batch_size = input_blob->_batch_size; + output_blob->_batch_size = batch_size; + VLOG(2) << "(logid=" << log_id << ") infer batch size: " << batch_size; + + Timer timeline; + int64_t start = timeline.TimeStampUS(); + timeline.Start(); + + // only support string type + char *total_input_ptr = static_cast(in->at(0).data.data()); + std::string base64str = total_input_ptr; + + cv::Mat img = Base2Mat(base64str); + cv::cvtColor(img, img, cv::COLOR_BGR2RGB); + + // preprocess + std::vector input(1 * 3 * im_shape_h * im_shape_w, 0.0f); + preprocess_det(img, input.data(), scale_factor_h, scale_factor_w, im_shape_h, + im_shape_w, mean_, scale_, is_scale_); + + // create real_in + TensorVector *real_in = new TensorVector(); + if (!real_in) { + LOG(ERROR) << "real_in is nullptr,error"; + return -1; + } + + int in_num = 0; + size_t databuf_size = 0; + void *databuf_data = NULL; + char *databuf_char = NULL; + + // image + in_num = 1 * 3 * im_shape_h * im_shape_w; + databuf_size = in_num * sizeof(float); + + databuf_data = MempoolWrapper::instance().malloc(databuf_size); + if (!databuf_data) { + LOG(ERROR) << "Malloc failed, size: " << databuf_size; + return -1; + } 
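+  // Note: the PaddleBuf constructed below wraps the mempool allocation in
+  // place, so the preprocessed tensor is handed to the predictor without
+  // an additional copy; the mempool keeps ownership for the request's lifetime.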
+ + memcpy(databuf_data, input.data(), databuf_size); + databuf_char = reinterpret_cast(databuf_data); + paddle::PaddleBuf paddleBuf(databuf_char, databuf_size); + paddle::PaddleTensor tensor_in; + tensor_in.name = "image"; + tensor_in.dtype = paddle::PaddleDType::FLOAT32; + tensor_in.shape = {1, 3, im_shape_h, im_shape_w}; + tensor_in.lod = in->at(0).lod; + tensor_in.data = paddleBuf; + real_in->push_back(tensor_in); + + // scale_factor + std::vector scale_factor{scale_factor_h, scale_factor_w}; + databuf_size = 2 * sizeof(float); + + databuf_data = MempoolWrapper::instance().malloc(databuf_size); + if (!databuf_data) { + LOG(ERROR) << "Malloc failed, size: " << databuf_size; + return -1; + } + + memcpy(databuf_data, scale_factor.data(), databuf_size); + databuf_char = reinterpret_cast(databuf_data); + paddle::PaddleBuf paddleBuf_2(databuf_char, databuf_size); + paddle::PaddleTensor tensor_in_2; + tensor_in_2.name = "scale_factor"; + tensor_in_2.dtype = paddle::PaddleDType::FLOAT32; + tensor_in_2.shape = {1, 2}; + tensor_in_2.lod = in->at(0).lod; + tensor_in_2.data = paddleBuf_2; + real_in->push_back(tensor_in_2); + + if (InferManager::instance().infer(engine_name().c_str(), real_in, out, + batch_size)) { + LOG(ERROR) << "(logid=" << log_id + << ") Failed do infer in fluid model: " << engine_name().c_str(); + return -1; + } + + int64_t end = timeline.TimeStampUS(); + CopyBlobInfo(input_blob, output_blob); + AddBlobInfo(output_blob, start); + AddBlobInfo(output_blob, end); + return 0; +} + +void ppyoloe_crn_s_300e_coco::preprocess_det(const cv::Mat &img, float *data, + float &scale_factor_h, + float &scale_factor_w, + int im_shape_h, int im_shape_w, + const std::vector &mean, + const std::vector &scale, + const bool is_scale) { + // scale_factor + scale_factor_h = + static_cast(im_shape_h) / static_cast(img.rows); + scale_factor_w = + static_cast(im_shape_w) / static_cast(img.cols); + + // Resize + cv::Mat resize_img; + cv::resize(img, resize_img, cv::Size(im_shape_w, im_shape_h), 0, 0, 2); + + // Normalize + double e = 1.0; + if (is_scale) { + e /= 255.0; + } + cv::Mat img_fp; + (resize_img).convertTo(img_fp, CV_32FC3, e); + for (int h = 0; h < im_shape_h; h++) { + for (int w = 0; w < im_shape_w; w++) { + img_fp.at(h, w)[0] = + (img_fp.at(h, w)[0] - mean[0]) / scale[0]; + img_fp.at(h, w)[1] = + (img_fp.at(h, w)[1] - mean[1]) / scale[1]; + img_fp.at(h, w)[2] = + (img_fp.at(h, w)[2] - mean[2]) / scale[2]; + } + } + + // Permute + int rh = img_fp.rows; + int rw = img_fp.cols; + int rc = img_fp.channels(); + for (int i = 0; i < rc; ++i) { + cv::extractChannel(img_fp, cv::Mat(rh, rw, CV_32FC1, data + i * rh * rw), + i); + } +} + +cv::Mat ppyoloe_crn_s_300e_coco::Base2Mat(std::string &base64_data) { + cv::Mat img; + std::string s_mat; + s_mat = base64Decode(base64_data.data(), base64_data.size()); + std::vector base64_img(s_mat.begin(), s_mat.end()); + img = cv::imdecode(base64_img, cv::IMREAD_COLOR); // CV_LOAD_IMAGE_COLOR + return img; +} + +std::string ppyoloe_crn_s_300e_coco::base64Decode(const char *Data, + int DataByte) { + const char DecodeTable[] = { + 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, + 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, + 0, 0, 0, 0, 0, 0, 0, 0, 0, + 62, // '+' + 0, 0, 0, + 63, // '/' + 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, // '0'-'9' + 0, 0, 0, 0, 0, 0, 0, 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, + 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, // 'A'-'Z' + 0, 0, 0, 0, 0, 0, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, + 37, 38, 39, 40, 41, 42, 
+cv::Mat ppyoloe_crn_s_300e_coco::Base2Mat(std::string &base64_data) {
+  cv::Mat img;
+  std::string s_mat;
+  s_mat = base64Decode(base64_data.data(), base64_data.size());
+  std::vector<char> base64_img(s_mat.begin(), s_mat.end());
+  img = cv::imdecode(base64_img, cv::IMREAD_COLOR); // CV_LOAD_IMAGE_COLOR
+  return img;
+}
+
+std::string ppyoloe_crn_s_300e_coco::base64Decode(const char *Data,
+                                                  int DataByte) {
+  const char DecodeTable[] = {
+      0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,
+      0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,
+      0,  0,  0,  0,  0,  0,  0,  0,  0,
+      62, // '+'
+      0,  0,  0,
+      63, // '/'
+      52, 53, 54, 55, 56, 57, 58, 59, 60, 61, // '0'-'9'
+      0,  0,  0,  0,  0,  0,  0,  0,  1,  2,  3,  4,  5,  6,  7,  8,  9,
+      10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24,
+      25, // 'A'-'Z'
+      0,  0,  0,  0,  0,  0,  26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36,
+      37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, // 'a'-'z'
+  };
+
+  std::string strDecode;
+  int nValue;
+  int i = 0;
+  while (i < DataByte) {
+    if (*Data != '\r' && *Data != '\n') {
+      nValue = DecodeTable[*Data++] << 18;
+      nValue += DecodeTable[*Data++] << 12;
+      strDecode += (nValue & 0x00FF0000) >> 16;
+      if (*Data != '=') {
+        nValue += DecodeTable[*Data++] << 6;
+        strDecode += (nValue & 0x0000FF00) >> 8;
+        if (*Data != '=') {
+          nValue += DecodeTable[*Data++];
+          strDecode += nValue & 0x000000FF;
+        }
+      }
+      i += 4;
+    } else // skip carriage returns and line feeds
+    {
+      Data++;
+      i++;
+    }
+  }
+  return strDecode;
+}
+
+DEFINE_OP(ppyoloe_crn_s_300e_coco);
+
+} // namespace serving
+} // namespace paddle_serving
+} // namespace baidu
diff --git a/deploy/serving/cpp/preprocess/ppyoloe_crn_s_300e_coco.h b/deploy/serving/cpp/preprocess/ppyoloe_crn_s_300e_coco.h
new file mode 100644
index 0000000000000000000000000000000000000000..cb3e68476998d7fadaafba8e2bc9282c4479a5f8
--- /dev/null
+++ b/deploy/serving/cpp/preprocess/ppyoloe_crn_s_300e_coco.h
@@ -0,0 +1,69 @@
+// Copyright (c) 2022 PaddlePaddle Authors. All Rights Reserved.
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+#pragma once
+#include "core/general-server/general_model_service.pb.h"
+#include "core/general-server/op/general_infer_helper.h"
+#include "paddle_inference_api.h" // NOLINT
+#include <string>
+#include <vector>
+
+#include "opencv2/core.hpp"
+#include "opencv2/imgcodecs.hpp"
+#include "opencv2/imgproc.hpp"
+#include <chrono>
+#include <iostream>
+#include <math.h>
+#include <sstream>
+#include <string>
+
+#include <cstring>
+#include <fstream>
+#include <numeric>
+
+namespace baidu {
+namespace paddle_serving {
+namespace serving {
+
+class ppyoloe_crn_s_300e_coco
+    : public baidu::paddle_serving::predictor::OpWithChannel<GeneralBlob> {
+public:
+  typedef std::vector<paddle::PaddleTensor> TensorVector;
+
+  DECLARE_OP(ppyoloe_crn_s_300e_coco);
+
+  int inference();
+
+private:
+  // preprocess
+  std::vector<float> mean_ = {0.485f, 0.456f, 0.406f};
+  std::vector<float> scale_ = {0.229f, 0.224f, 0.225f};
+  bool is_scale_ = true;
+  int im_shape_h = 640;
+  int im_shape_w = 640;
+  float scale_factor_h = 1.0f;
+  float scale_factor_w = 1.0f;
+  void preprocess_det(const cv::Mat &img, float *data, float &scale_factor_h,
+                      float &scale_factor_w, int im_shape_h, int im_shape_w,
+                      const std::vector<float> &mean,
+                      const std::vector<float> &scale, const bool is_scale);
+
+  // read pics
+  cv::Mat Base2Mat(std::string &base64_data);
+  std::string base64Decode(const char *Data, int DataByte);
+};
+
+} // namespace serving
+} // namespace paddle_serving
+} // namespace baidu
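This op feeds the exported detector, whose NMS output comes back through the fetch tensors declared in serving_client_conf.prototxt further down (e.g. `multiclass_nms3_0.tmp_0`, one `[class_id, score, x0, y0, x1, y1]` row per box). A hedged sketch of parsing that layout, mirroring the threshold checks in serving_client.py below (the helper name is illustrative):

```python
import numpy as np

def parse_nms_output(bboxes, threshold=0.5):
    """Illustrative filter for the NMS fetch tensor (N rows of 6 floats)."""
    results = []
    for class_id, score, x0, y0, x1, y1 in np.asarray(bboxes).reshape(-1, 6):
        # class_id == -1 marks padding rows; threshold mirrors --threshold
        if class_id > -1 and score > threshold:
            results.append((int(class_id), float(score), x0, y0, x1, y1))
    return results
```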
diff --git a/deploy/serving/cpp/preprocess/tinypose_128x96.cpp b/deploy/serving/cpp/preprocess/tinypose_128x96.cpp
new file mode 100644
index 0000000000000000000000000000000000000000..ccc94d2c4a35ed9f47f65fab6e74301e35c801d6
--- /dev/null
+++ b/deploy/serving/cpp/preprocess/tinypose_128x96.cpp
@@ -0,0 +1,232 @@
+// Copyright (c) 2022 PaddlePaddle Authors. All Rights Reserved.
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+#include "core/general-server/op/tinypose_128x96.h"
+#include "core/predictor/framework/infer.h"
+#include "core/predictor/framework/memory.h"
+#include "core/predictor/framework/resource.h"
+#include "core/util/include/timer.h"
+#include <algorithm>
+#include <iostream>
+#include <memory>
+#include <sstream>
+
+namespace baidu {
+namespace paddle_serving {
+namespace serving {
+
+using baidu::paddle_serving::Timer;
+using baidu::paddle_serving::predictor::InferManager;
+using baidu::paddle_serving::predictor::MempoolWrapper;
+using baidu::paddle_serving::predictor::PaddleGeneralModelConfig;
+using baidu::paddle_serving::predictor::general_model::Request;
+using baidu::paddle_serving::predictor::general_model::Response;
+using baidu::paddle_serving::predictor::general_model::Tensor;
+
+int tinypose_128x96::inference() {
+  VLOG(2) << "Going to run inference";
+  const std::vector<std::string> pre_node_names = pre_names();
+  if (pre_node_names.size() != 1) {
+    LOG(ERROR) << "This op(" << op_name()
+               << ") can only have one predecessor op, but received "
+               << pre_node_names.size();
+    return -1;
+  }
+  const std::string pre_name = pre_node_names[0];
+
+  const GeneralBlob *input_blob = get_depend_argument<GeneralBlob>(pre_name);
+  if (!input_blob) {
+    LOG(ERROR) << "input_blob is nullptr, error";
+    return -1;
+  }
+  uint64_t log_id = input_blob->GetLogId();
+  VLOG(2) << "(logid=" << log_id << ") Get precedent op name: " << pre_name;
+
+  GeneralBlob *output_blob = mutable_data<GeneralBlob>();
+  if (!output_blob) {
+    LOG(ERROR) << "output_blob is nullptr, error";
+    return -1;
+  }
+  output_blob->SetLogId(log_id);
+
+  if (!input_blob) {
+    LOG(ERROR) << "(logid=" << log_id
+               << ") Failed mutable depended argument, op:" << pre_name;
+    return -1;
+  }
+
+  const TensorVector *in = &input_blob->tensor_vector;
+  TensorVector *out = &output_blob->tensor_vector;
+
+  int batch_size = input_blob->_batch_size;
+  output_blob->_batch_size = batch_size;
+  VLOG(2) << "(logid=" << log_id << ") infer batch size: " << batch_size;
+
+  Timer timeline;
+  int64_t start = timeline.TimeStampUS();
+  timeline.Start();
+
+  // only support string type
+  char *total_input_ptr = static_cast<char *>(in->at(0).data.data());
+  std::string base64str = total_input_ptr;
+
+  cv::Mat img = Base2Mat(base64str);
+  cv::cvtColor(img, img, cv::COLOR_BGR2RGB);
+
+  // preprocess
+  std::vector<float> input(1 * 3 * im_shape_h * im_shape_w, 0.0f);
+  preprocess_det(img, input.data(), scale_factor_h, scale_factor_w, im_shape_h,
+                 im_shape_w, mean_, scale_, is_scale_);
+
+  // create real_in
+  TensorVector *real_in = new TensorVector();
+  if (!real_in) {
+    LOG(ERROR) << "real_in is nullptr, error";
+    return -1;
+  }
+
+  int in_num = 0;
+  size_t databuf_size = 0;
+  void *databuf_data = NULL;
+  char *databuf_char = NULL;
+
+  // image
+  in_num = 1 * 3 * im_shape_h * im_shape_w;
+  databuf_size = in_num * sizeof(float);
+
+  databuf_data = MempoolWrapper::instance().malloc(databuf_size);
+  if (!databuf_data) {
+    LOG(ERROR) << "Malloc failed, size: " << databuf_size;
+    return -1;
+  }
+
+  memcpy(databuf_data, input.data(), databuf_size);
+  databuf_char = reinterpret_cast<char *>(databuf_data);
+  paddle::PaddleBuf paddleBuf(databuf_char, databuf_size);
+  paddle::PaddleTensor tensor_in;
+  tensor_in.name = "image";
+  tensor_in.dtype = paddle::PaddleDType::FLOAT32;
+  tensor_in.shape = {1, 3, im_shape_h, im_shape_w};
+  tensor_in.lod = in->at(0).lod;
+  tensor_in.data = paddleBuf;
+  real_in->push_back(tensor_in);
+
+  if (InferManager::instance().infer(engine_name().c_str(), real_in, out,
+                                     batch_size)) {
+    LOG(ERROR) << "(logid=" << log_id
+               << ") Failed to do infer in fluid model: "
+               << engine_name().c_str();
+    return -1;
+  }
+
+  int64_t end = timeline.TimeStampUS();
+  CopyBlobInfo(input_blob, output_blob);
+  AddBlobInfo(output_blob, start);
+  AddBlobInfo(output_blob, end);
+  return 0;
+}
+
+void tinypose_128x96::preprocess_det(const cv::Mat &img, float *data,
+                                     float &scale_factor_h,
+                                     float &scale_factor_w, int im_shape_h,
+                                     int im_shape_w,
+                                     const std::vector<float> &mean,
+                                     const std::vector<float> &scale,
+                                     const bool is_scale) {
+  // Resize (interpolation flag 1 == cv::INTER_LINEAR)
+  cv::Mat resize_img;
+  cv::resize(img, resize_img, cv::Size(im_shape_w, im_shape_h), 0, 0, 1);
+
+  // Normalize
+  double e = 1.0;
+  if (is_scale) {
+    e /= 255.0;
+  }
+  cv::Mat img_fp;
+  (resize_img).convertTo(img_fp, CV_32FC3, e);
+  for (int h = 0; h < im_shape_h; h++) {
+    for (int w = 0; w < im_shape_w; w++) {
+      img_fp.at<cv::Vec3f>(h, w)[0] =
+          (img_fp.at<cv::Vec3f>(h, w)[0] - mean[0]) / scale[0];
+      img_fp.at<cv::Vec3f>(h, w)[1] =
+          (img_fp.at<cv::Vec3f>(h, w)[1] - mean[1]) / scale[1];
+      img_fp.at<cv::Vec3f>(h, w)[2] =
+          (img_fp.at<cv::Vec3f>(h, w)[2] - mean[2]) / scale[2];
+    }
+  }
+
+  // Permute HWC -> CHW
+  int rh = img_fp.rows;
+  int rw = img_fp.cols;
+  int rc = img_fp.channels();
+  for (int i = 0; i < rc; ++i) {
+    cv::extractChannel(img_fp, cv::Mat(rh, rw, CV_32FC1, data + i * rh * rw),
+                       i);
+  }
+}
+
+cv::Mat tinypose_128x96::Base2Mat(std::string &base64_data) {
+  cv::Mat img;
+  std::string s_mat;
+  s_mat = base64Decode(base64_data.data(), base64_data.size());
+  std::vector<char> base64_img(s_mat.begin(), s_mat.end());
+  img = cv::imdecode(base64_img, cv::IMREAD_COLOR); // CV_LOAD_IMAGE_COLOR
+  return img;
+}
+
+std::string tinypose_128x96::base64Decode(const char *Data, int DataByte) {
+  const char DecodeTable[] = {
+      0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,
+      0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,
+      0,  0,  0,  0,  0,  0,  0,  0,  0,
+      62, // '+'
+      0,  0,  0,
+      63, // '/'
+      52, 53, 54, 55, 56, 57, 58, 59, 60, 61, // '0'-'9'
+      0,  0,  0,  0,  0,  0,  0,  0,  1,  2,  3,  4,  5,  6,  7,  8,  9,
+      10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24,
+      25, // 'A'-'Z'
+      0,  0,  0,  0,  0,  0,  26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36,
+      37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, // 'a'-'z'
+  };
+
+  std::string strDecode;
+  int nValue;
+  int i = 0;
+  while (i < DataByte) {
+    if (*Data != '\r' && *Data != '\n') {
+      nValue = DecodeTable[*Data++] << 18;
+      nValue += DecodeTable[*Data++] << 12;
+      strDecode += (nValue & 0x00FF0000) >> 16;
+      if (*Data != '=') {
+        nValue += DecodeTable[*Data++] << 6;
+        strDecode += (nValue & 0x0000FF00) >> 8;
+        if (*Data != '=') {
+          nValue += DecodeTable[*Data++];
+          strDecode += nValue & 0x000000FF;
+        }
+      }
+      i += 4;
+    } else // skip carriage returns and line feeds
+    {
+      Data++;
+      i++;
+    }
+  }
+  return strDecode;
+}
+
+DEFINE_OP(tinypose_128x96);
+
+} // namespace serving
+} // namespace paddle_serving
+} // namespace baidu
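Each op carries its own copy of `base64Decode`, which skips CR/LF and honors `=` padding before `cv::imdecode` turns the bytes into a BGR image. In Python the whole `Base2Mat` path collapses to two library calls; a small sketch, assuming cv2 and NumPy are available (the function name is illustrative):

```python
import base64
import cv2
import numpy as np

def base64_to_mat(b64_string):
    """Python counterpart of Base2Mat/base64Decode: base64 -> BGR image."""
    raw = base64.b64decode(b64_string)          # tolerates '=' padding, CR/LF
    buf = np.frombuffer(raw, dtype=np.uint8)
    return cv2.imdecode(buf, cv2.IMREAD_COLOR)  # BGR, like cv::imdecode
```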
diff --git a/deploy/serving/cpp/preprocess/tinypose_128x96.h b/deploy/serving/cpp/preprocess/tinypose_128x96.h
new file mode 100644
index 0000000000000000000000000000000000000000..83bf9bf7d17de5fd03407f73bf7e96b512a6fe3e
--- /dev/null
+++ b/deploy/serving/cpp/preprocess/tinypose_128x96.h
@@ -0,0 +1,69 @@
+// Copyright (c) 2022 PaddlePaddle Authors. All Rights Reserved.
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+#pragma once
+#include "core/general-server/general_model_service.pb.h"
+#include "core/general-server/op/general_infer_helper.h"
+#include "paddle_inference_api.h" // NOLINT
+#include <string>
+#include <vector>
+
+#include "opencv2/core.hpp"
+#include "opencv2/imgcodecs.hpp"
+#include "opencv2/imgproc.hpp"
+#include <chrono>
+#include <iostream>
+#include <math.h>
+#include <sstream>
+#include <string>
+
+#include <cstring>
+#include <fstream>
+#include <numeric>
+
+namespace baidu {
+namespace paddle_serving {
+namespace serving {
+
+class tinypose_128x96
+    : public baidu::paddle_serving::predictor::OpWithChannel<GeneralBlob> {
+public:
+  typedef std::vector<paddle::PaddleTensor> TensorVector;
+
+  DECLARE_OP(tinypose_128x96);
+
+  int inference();
+
+private:
+  // preprocess
+  std::vector<float> mean_ = {0.485f, 0.456f, 0.406f};
+  std::vector<float> scale_ = {0.229f, 0.224f, 0.225f};
+  bool is_scale_ = true;
+  int im_shape_h = 128;
+  int im_shape_w = 96;
+  float scale_factor_h = 1.0f;
+  float scale_factor_w = 1.0f;
+  void preprocess_det(const cv::Mat &img, float *data, float &scale_factor_h,
+                      float &scale_factor_w, int im_shape_h, int im_shape_w,
+                      const std::vector<float> &mean,
+                      const std::vector<float> &scale, const bool is_scale);
+
+  // read pics
+  cv::Mat Base2Mat(std::string &base64_data);
+  std::string base64Decode(const char *Data, int DataByte);
+};
+
+} // namespace serving
+} // namespace paddle_serving
+} // namespace baidu
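Unlike the detection ops, `tinypose_128x96` feeds only an `image` tensor, and its raw output is a joint heatmap that the Python side decodes (see `HRNetPostProcess` in postprocess_ops.py below). A minimal sketch of the argmax decode that `get_max_preds` performs, omitting the confidence masking and DARK refinement:

```python
import numpy as np

def heatmaps_to_keypoints(heatmaps):
    """Argmax decode of (batch, num_joints, H, W) heatmaps.

    Returns (batch, num_joints, 2) x/y coords plus per-joint confidences,
    like get_max_preds below, minus the mask and DARK refinement steps.
    """
    n, j, h, w = heatmaps.shape
    flat = heatmaps.reshape(n, j, -1)
    idx = flat.argmax(axis=2)
    conf = flat.max(axis=2)
    coords = np.stack([idx % w, idx // w], axis=-1).astype(np.float32)
    return coords, conf
```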
+ +#include "core/general-server/op/yolov3_darknet53_270e_coco.h" +#include "core/predictor/framework/infer.h" +#include "core/predictor/framework/memory.h" +#include "core/predictor/framework/resource.h" +#include "core/util/include/timer.h" +#include +#include +#include +#include + +namespace baidu { +namespace paddle_serving { +namespace serving { + +using baidu::paddle_serving::Timer; +using baidu::paddle_serving::predictor::InferManager; +using baidu::paddle_serving::predictor::MempoolWrapper; +using baidu::paddle_serving::predictor::PaddleGeneralModelConfig; +using baidu::paddle_serving::predictor::general_model::Request; +using baidu::paddle_serving::predictor::general_model::Response; +using baidu::paddle_serving::predictor::general_model::Tensor; + +int yolov3_darknet53_270e_coco::inference() { + VLOG(2) << "Going to run inference"; + const std::vector pre_node_names = pre_names(); + if (pre_node_names.size() != 1) { + LOG(ERROR) << "This op(" << op_name() + << ") can only have one predecessor op, but received " + << pre_node_names.size(); + return -1; + } + const std::string pre_name = pre_node_names[0]; + + const GeneralBlob *input_blob = get_depend_argument(pre_name); + if (!input_blob) { + LOG(ERROR) << "input_blob is nullptr,error"; + return -1; + } + uint64_t log_id = input_blob->GetLogId(); + VLOG(2) << "(logid=" << log_id << ") Get precedent op name: " << pre_name; + + GeneralBlob *output_blob = mutable_data(); + if (!output_blob) { + LOG(ERROR) << "output_blob is nullptr,error"; + return -1; + } + output_blob->SetLogId(log_id); + + if (!input_blob) { + LOG(ERROR) << "(logid=" << log_id + << ") Failed mutable depended argument, op:" << pre_name; + return -1; + } + + const TensorVector *in = &input_blob->tensor_vector; + TensorVector *out = &output_blob->tensor_vector; + + int batch_size = input_blob->_batch_size; + output_blob->_batch_size = batch_size; + VLOG(2) << "(logid=" << log_id << ") infer batch size: " << batch_size; + + Timer timeline; + int64_t start = timeline.TimeStampUS(); + timeline.Start(); + + // only support string type + char *total_input_ptr = static_cast(in->at(0).data.data()); + std::string base64str = total_input_ptr; + + cv::Mat img = Base2Mat(base64str); + cv::cvtColor(img, img, cv::COLOR_BGR2RGB); + + // preprocess + std::vector input(1 * 3 * im_shape_h * im_shape_w, 0.0f); + preprocess_det(img, input.data(), scale_factor_h, scale_factor_w, im_shape_h, + im_shape_w, mean_, scale_, is_scale_); + + // create real_in + TensorVector *real_in = new TensorVector(); + if (!real_in) { + LOG(ERROR) << "real_in is nullptr,error"; + return -1; + } + + int in_num = 0; + size_t databuf_size = 0; + void *databuf_data = NULL; + char *databuf_char = NULL; + + // im_shape + std::vector im_shape{static_cast(im_shape_h), + static_cast(im_shape_w)}; + databuf_size = 2 * sizeof(float); + + databuf_data = MempoolWrapper::instance().malloc(databuf_size); + if (!databuf_data) { + LOG(ERROR) << "Malloc failed, size: " << databuf_size; + return -1; + } + + memcpy(databuf_data, im_shape.data(), databuf_size); + databuf_char = reinterpret_cast(databuf_data); + paddle::PaddleBuf paddleBuf_0(databuf_char, databuf_size); + paddle::PaddleTensor tensor_in_0; + tensor_in_0.name = "im_shape"; + tensor_in_0.dtype = paddle::PaddleDType::FLOAT32; + tensor_in_0.shape = {1, 2}; + tensor_in_0.lod = in->at(0).lod; + tensor_in_0.data = paddleBuf_0; + real_in->push_back(tensor_in_0); + + // image + in_num = 1 * 3 * im_shape_h * im_shape_w; + databuf_size = in_num * sizeof(float); + + 
databuf_data = MempoolWrapper::instance().malloc(databuf_size); + if (!databuf_data) { + LOG(ERROR) << "Malloc failed, size: " << databuf_size; + return -1; + } + + memcpy(databuf_data, input.data(), databuf_size); + databuf_char = reinterpret_cast(databuf_data); + paddle::PaddleBuf paddleBuf_1(databuf_char, databuf_size); + paddle::PaddleTensor tensor_in_1; + tensor_in_1.name = "image"; + tensor_in_1.dtype = paddle::PaddleDType::FLOAT32; + tensor_in_1.shape = {1, 3, im_shape_h, im_shape_w}; + tensor_in_1.lod = in->at(0).lod; + tensor_in_1.data = paddleBuf_1; + real_in->push_back(tensor_in_1); + + // scale_factor + std::vector scale_factor{scale_factor_h, scale_factor_w}; + databuf_size = 2 * sizeof(float); + + databuf_data = MempoolWrapper::instance().malloc(databuf_size); + if (!databuf_data) { + LOG(ERROR) << "Malloc failed, size: " << databuf_size; + return -1; + } + + memcpy(databuf_data, scale_factor.data(), databuf_size); + databuf_char = reinterpret_cast(databuf_data); + paddle::PaddleBuf paddleBuf_2(databuf_char, databuf_size); + paddle::PaddleTensor tensor_in_2; + tensor_in_2.name = "scale_factor"; + tensor_in_2.dtype = paddle::PaddleDType::FLOAT32; + tensor_in_2.shape = {1, 2}; + tensor_in_2.lod = in->at(0).lod; + tensor_in_2.data = paddleBuf_2; + real_in->push_back(tensor_in_2); + + if (InferManager::instance().infer(engine_name().c_str(), real_in, out, + batch_size)) { + LOG(ERROR) << "(logid=" << log_id + << ") Failed do infer in fluid model: " << engine_name().c_str(); + return -1; + } + + int64_t end = timeline.TimeStampUS(); + CopyBlobInfo(input_blob, output_blob); + AddBlobInfo(output_blob, start); + AddBlobInfo(output_blob, end); + return 0; +} + +void yolov3_darknet53_270e_coco::preprocess_det(const cv::Mat &img, float *data, + float &scale_factor_h, + float &scale_factor_w, + int im_shape_h, int im_shape_w, + const std::vector &mean, + const std::vector &scale, + const bool is_scale) { + // scale_factor + scale_factor_h = + static_cast(im_shape_h) / static_cast(img.rows); + scale_factor_w = + static_cast(im_shape_w) / static_cast(img.cols); + + // Resize + cv::Mat resize_img; + cv::resize(img, resize_img, cv::Size(im_shape_w, im_shape_h), 0, 0, 2); + + // Normalize + double e = 1.0; + if (is_scale) { + e /= 255.0; + } + cv::Mat img_fp; + (resize_img).convertTo(img_fp, CV_32FC3, e); + for (int h = 0; h < im_shape_h; h++) { + for (int w = 0; w < im_shape_w; w++) { + img_fp.at(h, w)[0] = + (img_fp.at(h, w)[0] - mean[0]) / scale[0]; + img_fp.at(h, w)[1] = + (img_fp.at(h, w)[1] - mean[1]) / scale[1]; + img_fp.at(h, w)[2] = + (img_fp.at(h, w)[2] - mean[2]) / scale[2]; + } + } + + // Permute + int rh = img_fp.rows; + int rw = img_fp.cols; + int rc = img_fp.channels(); + for (int i = 0; i < rc; ++i) { + cv::extractChannel(img_fp, cv::Mat(rh, rw, CV_32FC1, data + i * rh * rw), + i); + } +} + +cv::Mat yolov3_darknet53_270e_coco::Base2Mat(std::string &base64_data) { + cv::Mat img; + std::string s_mat; + s_mat = base64Decode(base64_data.data(), base64_data.size()); + std::vector base64_img(s_mat.begin(), s_mat.end()); + img = cv::imdecode(base64_img, cv::IMREAD_COLOR); // CV_LOAD_IMAGE_COLOR + return img; +} + +std::string yolov3_darknet53_270e_coco::base64Decode(const char *Data, + int DataByte) { + const char DecodeTable[] = { + 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, + 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, + 0, 0, 0, 0, 0, 0, 0, 0, 0, + 62, // '+' + 0, 0, 0, + 63, // '/' + 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, // '0'-'9' + 0, 0, 0, 0, 0, 0, 0, 0, 1, 2, 
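YOLOv3 differs from PP-YOLOE in its feed contract: it also wants the network input shape as an explicit `im_shape` tensor, which is why the op above pushes three tensors instead of two. A hedged NumPy sketch of the same feed dictionary (tensor names follow the code above; the helper itself is illustrative):

```python
import numpy as np

def build_yolov3_feeds(img_chw, im_shape_hw=(608, 608),
                       scale_factor=(1.0, 1.0)):
    """Illustrative feed dict matching the three PaddleTensors built above."""
    return {
        "im_shape": np.asarray(im_shape_hw, np.float32).reshape(1, 2),
        "image": img_chw.astype(np.float32),  # (1, 3, 608, 608), normalized
        "scale_factor": np.asarray(scale_factor, np.float32).reshape(1, 2),
    }
```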
+cv::Mat yolov3_darknet53_270e_coco::Base2Mat(std::string &base64_data) {
+  cv::Mat img;
+  std::string s_mat;
+  s_mat = base64Decode(base64_data.data(), base64_data.size());
+  std::vector<char> base64_img(s_mat.begin(), s_mat.end());
+  img = cv::imdecode(base64_img, cv::IMREAD_COLOR); // CV_LOAD_IMAGE_COLOR
+  return img;
+}
+
+std::string yolov3_darknet53_270e_coco::base64Decode(const char *Data,
+                                                     int DataByte) {
+  const char DecodeTable[] = {
+      0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,
+      0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,
+      0,  0,  0,  0,  0,  0,  0,  0,  0,
+      62, // '+'
+      0,  0,  0,
+      63, // '/'
+      52, 53, 54, 55, 56, 57, 58, 59, 60, 61, // '0'-'9'
+      0,  0,  0,  0,  0,  0,  0,  0,  1,  2,  3,  4,  5,  6,  7,  8,  9,
+      10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24,
+      25, // 'A'-'Z'
+      0,  0,  0,  0,  0,  0,  26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36,
+      37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, // 'a'-'z'
+  };
+
+  std::string strDecode;
+  int nValue;
+  int i = 0;
+  while (i < DataByte) {
+    if (*Data != '\r' && *Data != '\n') {
+      nValue = DecodeTable[*Data++] << 18;
+      nValue += DecodeTable[*Data++] << 12;
+      strDecode += (nValue & 0x00FF0000) >> 16;
+      if (*Data != '=') {
+        nValue += DecodeTable[*Data++] << 6;
+        strDecode += (nValue & 0x0000FF00) >> 8;
+        if (*Data != '=') {
+          nValue += DecodeTable[*Data++];
+          strDecode += nValue & 0x000000FF;
+        }
+      }
+      i += 4;
+    } else // skip carriage returns and line feeds
+    {
+      Data++;
+      i++;
+    }
+  }
+  return strDecode;
+}
+
+DEFINE_OP(yolov3_darknet53_270e_coco);
+
+} // namespace serving
+} // namespace paddle_serving
+} // namespace baidu
diff --git a/deploy/serving/cpp/preprocess/yolov3_darknet53_270e_coco.h b/deploy/serving/cpp/preprocess/yolov3_darknet53_270e_coco.h
new file mode 100644
index 0000000000000000000000000000000000000000..67593040eadd664d49981c66f37d4e689807ec8f
--- /dev/null
+++ b/deploy/serving/cpp/preprocess/yolov3_darknet53_270e_coco.h
@@ -0,0 +1,69 @@
+// Copyright (c) 2022 PaddlePaddle Authors. All Rights Reserved.
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+#pragma once
+#include "core/general-server/general_model_service.pb.h"
+#include "core/general-server/op/general_infer_helper.h"
+#include "paddle_inference_api.h" // NOLINT
+#include <string>
+#include <vector>
+
+#include "opencv2/core.hpp"
+#include "opencv2/imgcodecs.hpp"
+#include "opencv2/imgproc.hpp"
+#include <chrono>
+#include <iostream>
+#include <math.h>
+#include <sstream>
+#include <string>
+
+#include <cstring>
+#include <fstream>
+#include <numeric>
+
+namespace baidu {
+namespace paddle_serving {
+namespace serving {
+
+class yolov3_darknet53_270e_coco
+    : public baidu::paddle_serving::predictor::OpWithChannel<GeneralBlob> {
+public:
+  typedef std::vector<paddle::PaddleTensor> TensorVector;
+
+  DECLARE_OP(yolov3_darknet53_270e_coco);
+
+  int inference();
+
+private:
+  // preprocess
+  std::vector<float> mean_ = {0.485f, 0.456f, 0.406f};
+  std::vector<float> scale_ = {0.229f, 0.224f, 0.225f};
+  bool is_scale_ = true;
+  int im_shape_h = 608;
+  int im_shape_w = 608;
+  float scale_factor_h = 1.0f;
+  float scale_factor_w = 1.0f;
+  void preprocess_det(const cv::Mat &img, float *data, float &scale_factor_h,
+                      float &scale_factor_w, int im_shape_h, int im_shape_w,
+                      const std::vector<float> &mean,
+                      const std::vector<float> &scale, const bool is_scale);
+
+  // read pics
+  cv::Mat Base2Mat(std::string &base64_data);
+  std::string base64Decode(const char *Data, int DataByte);
+};
+
+} // namespace serving
+} // namespace paddle_serving
+} // namespace baidu
diff --git a/deploy/serving/cpp/serving_client.py b/deploy/serving/cpp/serving_client.py
new file mode 100644
index 0000000000000000000000000000000000000000..49134c30569d60533b131b8a25d6584ab782329c
--- /dev/null
+++ b/deploy/serving/cpp/serving_client.py
@@ -0,0 +1,125 @@
+# Copyright (c) 2022 PaddlePaddle Authors. All Rights Reserved.
+# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +import os +import glob +import base64 +import argparse +from paddle_serving_client import Client +from paddle_serving_client.proto import general_model_config_pb2 as m_config +import google.protobuf.text_format + +parser = argparse.ArgumentParser(description="args for paddleserving") +parser.add_argument( + "--serving_client", type=str, help="the directory of serving_client") +parser.add_argument("--image_dir", type=str) +parser.add_argument("--image_file", type=str) +parser.add_argument("--http_port", type=int, default=9997) +parser.add_argument( + "--threshold", type=float, default=0.5, help="Threshold of score.") +args = parser.parse_args() + + +def get_test_images(infer_dir, infer_img): + """ + Get image path list in TEST mode + """ + assert infer_img is not None or infer_dir is not None, \ + "--image_file or --image_dir should be set" + assert infer_img is None or os.path.isfile(infer_img), \ + "{} is not a file".format(infer_img) + assert infer_dir is None or os.path.isdir(infer_dir), \ + "{} is not a directory".format(infer_dir) + + # infer_img has a higher priority + if infer_img and os.path.isfile(infer_img): + return [infer_img] + + images = set() + infer_dir = os.path.abspath(infer_dir) + assert os.path.isdir(infer_dir), \ + "infer_dir {} is not a directory".format(infer_dir) + exts = ['jpg', 'jpeg', 'png', 'bmp'] + exts += [ext.upper() for ext in exts] + for ext in exts: + images.update(glob.glob('{}/*.{}'.format(infer_dir, ext))) + images = list(images) + + assert len(images) > 0, "no image found in {}".format(infer_dir) + print("Found {} inference images in total.".format(len(images))) + + return images + + +def postprocess(fetch_dict, fetch_vars, draw_threshold=0.5): + result = [] + if "conv2d_441.tmp_1" in fetch_dict: + heatmap = fetch_dict["conv2d_441.tmp_1"] + print(heatmap) + result.append(heatmap) + else: + bboxes = fetch_dict[fetch_vars[0]] + for bbox in bboxes: + if bbox[0] > -1 and bbox[1] > draw_threshold: + print(f"{int(bbox[0])} {bbox[1]} " + f"{bbox[2]} {bbox[3]} {bbox[4]} {bbox[5]}") + result.append(f"{int(bbox[0])} {bbox[1]} " + f"{bbox[2]} {bbox[3]} {bbox[4]} {bbox[5]}") + return result + + +def get_model_vars(client_config_dir): + # read original serving_client_conf.prototxt + client_config_file = os.path.join(client_config_dir, + "serving_client_conf.prototxt") + with open(client_config_file, 'r') as f: + model_var = google.protobuf.text_format.Merge( + str(f.read()), m_config.GeneralModelConfig()) + # modify feed_var to run core/general-server/op/ + [model_var.feed_var.pop() for _ in range(len(model_var.feed_var))] + feed_var = m_config.FeedVar() + feed_var.name = "input" + feed_var.alias_name = "input" + feed_var.is_lod_tensor = False + feed_var.feed_type = 20 + feed_var.shape.extend([1]) + model_var.feed_var.extend([feed_var]) + with open( + os.path.join(client_config_dir, "serving_client_conf_cpp.prototxt"), + "w") as f: + f.write(str(model_var)) + # get feed_vars/fetch_vars + feed_vars = [var.name for var 
in model_var.feed_var]
+    fetch_vars = [var.name for var in model_var.fetch_var]
+    return feed_vars, fetch_vars
+
+
+if __name__ == '__main__':
+    url = f"127.0.0.1:{args.http_port}"
+    logid = 10000
+    img_list = get_test_images(args.image_dir, args.image_file)
+    feed_vars, fetch_vars = get_model_vars(args.serving_client)
+
+    client = Client()
+    client.load_client_config(
+        os.path.join(args.serving_client, "serving_client_conf_cpp.prototxt"))
+    client.connect([url])
+
+    for img_file in img_list:
+        with open(img_file, 'rb') as file:
+            image_data = file.read()
+        image = base64.b64encode(image_data).decode('utf8')
+        fetch_dict = client.predict(
+            feed={feed_vars[0]: image}, fetch=fetch_vars)
+        result = postprocess(fetch_dict, fetch_vars, args.threshold)
diff --git a/deploy/serving/cpp/serving_client_conf.prototxt b/deploy/serving/cpp/serving_client_conf.prototxt
new file mode 100644
index 0000000000000000000000000000000000000000..fb069003ab8a6b8163d7e06d7760b1c6c42b196a
--- /dev/null
+++ b/deploy/serving/cpp/serving_client_conf.prototxt
@@ -0,0 +1,20 @@
+feed_var {
+  name: "input"
+  alias_name: "input"
+  is_lod_tensor: false
+  feed_type: 20
+  shape: 1
+}
+fetch_var {
+  name: "multiclass_nms3_0.tmp_0"
+  alias_name: "multiclass_nms3_0.tmp_0"
+  is_lod_tensor: true
+  fetch_type: 1
+  shape: -1
+}
+fetch_var {
+  name: "multiclass_nms3_0.tmp_2"
+  alias_name: "multiclass_nms3_0.tmp_2"
+  is_lod_tensor: false
+  fetch_type: 2
+}
\ No newline at end of file
diff --git a/deploy/serving/python/config.yml b/deploy/serving/python/config.yml
new file mode 100644
index 0000000000000000000000000000000000000000..5ec4285257d618f6c5a7ed02aab5c34dae9a96e1
--- /dev/null
+++ b/deploy/serving/python/config.yml
@@ -0,0 +1,31 @@
+# worker_num: maximum concurrency. When build_dag_each_worker=True, the framework creates worker_num processes, each with its own gRPC server and DAG.
+## When build_dag_each_worker=False, the framework sets max_workers=worker_num for the main thread's gRPC thread pool.
+worker_num: 20
+
+# HTTP port. rpc_port and http_port must not both be empty; when rpc_port is usable and http_port is empty, no http_port is generated automatically.
+http_port: 18093
+rpc_port: 9993
+
+dag:
+  # op resource type: True for the thread model, False for the process model
+  is_thread_op: False
+op:
+  # op name; must match the name used to initialize the service in web_service.py
+  ppdet:
+    # concurrency: thread-level when is_thread_op=True, otherwise process-level
+    concurrency: 1
+
+    # when the op config has no server_endpoints, the local service configuration is read from local_service_conf
+    local_service_conf:
+
+      # model directory
+      model_config: "./serving_server"
+
+      # device type: when empty, decided by devices (CPU/GPU); 0=cpu, 1=gpu, 2=tensorRT, 3=arm cpu, 4=kunlun xpu
+      device_type:
+
+      # device IDs: CPU prediction when devices is "" or unset; GPU prediction when devices is "0" or "0,1,2", listing the GPU cards to use
+      devices: "0" # "0,1"
+
+      # client type: brpc, grpc or local_predictor; local_predictor predicts in-process without starting a Serving service
+      client_type: local_predictor
diff --git a/deploy/serving/python/pipeline_http_client.py b/deploy/serving/python/pipeline_http_client.py
new file mode 100644
index 0000000000000000000000000000000000000000..fa9b30c0d79bf5a7e0d5da7a2538580e7452f8bb
--- /dev/null
+++ b/deploy/serving/python/pipeline_http_client.py
@@ -0,0 +1,76 @@
+# Copyright (c) 2022 PaddlePaddle Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and +# limitations under the License. + +import glob +import requests +import json +import base64 +import os +import argparse + +parser = argparse.ArgumentParser(description="args for paddleserving") +parser.add_argument("--image_dir", type=str) +parser.add_argument("--image_file", type=str) +parser.add_argument("--http_port", type=int, default=18093) +parser.add_argument("--service_name", type=str, default="ppdet") +args = parser.parse_args() + + +def get_test_images(infer_dir, infer_img): + """ + Get image path list in TEST mode + """ + assert infer_img is not None or infer_dir is not None, \ + "--image_file or --image_dir should be set" + assert infer_img is None or os.path.isfile(infer_img), \ + "{} is not a file".format(infer_img) + assert infer_dir is None or os.path.isdir(infer_dir), \ + "{} is not a directory".format(infer_dir) + + # infer_img has a higher priority + if infer_img and os.path.isfile(infer_img): + return [infer_img] + + images = set() + infer_dir = os.path.abspath(infer_dir) + assert os.path.isdir(infer_dir), \ + "infer_dir {} is not a directory".format(infer_dir) + exts = ['jpg', 'jpeg', 'png', 'bmp'] + exts += [ext.upper() for ext in exts] + for ext in exts: + images.update(glob.glob('{}/*.{}'.format(infer_dir, ext))) + images = list(images) + + assert len(images) > 0, "no image found in {}".format(infer_dir) + print("Found {} inference images in total.".format(len(images))) + + return images + + +if __name__ == "__main__": + url = f"http://127.0.0.1:{args.http_port}/{args.service_name}/prediction" + logid = 10000 + img_list = get_test_images(args.image_dir, args.image_file) + + for img_file in img_list: + with open(img_file, 'rb') as file: + image_data = file.read() + + # base64 encode + image = base64.b64encode(image_data).decode('utf8') + + data = {"key": ["image_0"], "value": [image], "logid": logid} + # send requests + r = requests.post(url=url, data=json.dumps(data)) + print(r.json()) diff --git a/deploy/serving/python/postprocess_ops.py b/deploy/serving/python/postprocess_ops.py new file mode 100644 index 0000000000000000000000000000000000000000..1836f7de776921c4dae97d42e927834a3d2d8613 --- /dev/null +++ b/deploy/serving/python/postprocess_ops.py @@ -0,0 +1,171 @@ +import cv2 +import math +import numpy as np +from preprocess_ops import get_affine_transform + + +class HRNetPostProcess(object): + def __init__(self, use_dark=True): + self.use_dark = use_dark + + def flip_back(self, output_flipped, matched_parts): + assert output_flipped.ndim == 4,\ + 'output_flipped should be [batch_size, num_joints, height, width]' + + output_flipped = output_flipped[:, :, :, ::-1] + + for pair in matched_parts: + tmp = output_flipped[:, pair[0], :, :].copy() + output_flipped[:, pair[0], :, :] = output_flipped[:, pair[1], :, :] + output_flipped[:, pair[1], :, :] = tmp + + return output_flipped + + def get_max_preds(self, heatmaps): + """get predictions from score maps + + Args: + heatmaps: numpy.ndarray([batch_size, num_joints, height, width]) + + Returns: + preds: numpy.ndarray([batch_size, num_joints, 2]), keypoints coords + maxvals: numpy.ndarray([batch_size, num_joints, 2]), the maximum confidence of the keypoints + """ + assert isinstance(heatmaps, + np.ndarray), 'heatmaps should be numpy.ndarray' + assert heatmaps.ndim == 4, 'batch_images should be 4-ndim' + + batch_size = heatmaps.shape[0] + num_joints = heatmaps.shape[1] + width = heatmaps.shape[3] + heatmaps_reshaped = heatmaps.reshape((batch_size, 
num_joints, -1)) + idx = np.argmax(heatmaps_reshaped, 2) + maxvals = np.amax(heatmaps_reshaped, 2) + + maxvals = maxvals.reshape((batch_size, num_joints, 1)) + idx = idx.reshape((batch_size, num_joints, 1)) + + preds = np.tile(idx, (1, 1, 2)).astype(np.float32) + + preds[:, :, 0] = (preds[:, :, 0]) % width + preds[:, :, 1] = np.floor((preds[:, :, 1]) / width) + + pred_mask = np.tile(np.greater(maxvals, 0.0), (1, 1, 2)) + pred_mask = pred_mask.astype(np.float32) + + preds *= pred_mask + + return preds, maxvals + + def gaussian_blur(self, heatmap, kernel): + border = (kernel - 1) // 2 + batch_size = heatmap.shape[0] + num_joints = heatmap.shape[1] + height = heatmap.shape[2] + width = heatmap.shape[3] + for i in range(batch_size): + for j in range(num_joints): + origin_max = np.max(heatmap[i, j]) + dr = np.zeros((height + 2 * border, width + 2 * border)) + dr[border:-border, border:-border] = heatmap[i, j].copy() + dr = cv2.GaussianBlur(dr, (kernel, kernel), 0) + heatmap[i, j] = dr[border:-border, border:-border].copy() + heatmap[i, j] *= origin_max / np.max(heatmap[i, j]) + return heatmap + + def dark_parse(self, hm, coord): + heatmap_height = hm.shape[0] + heatmap_width = hm.shape[1] + px = int(coord[0]) + py = int(coord[1]) + if 1 < px < heatmap_width - 2 and 1 < py < heatmap_height - 2: + dx = 0.5 * (hm[py][px + 1] - hm[py][px - 1]) + dy = 0.5 * (hm[py + 1][px] - hm[py - 1][px]) + dxx = 0.25 * (hm[py][px + 2] - 2 * hm[py][px] + hm[py][px - 2]) + dxy = 0.25 * (hm[py+1][px+1] - hm[py-1][px+1] - hm[py+1][px-1] \ + + hm[py-1][px-1]) + dyy = 0.25 * ( + hm[py + 2 * 1][px] - 2 * hm[py][px] + hm[py - 2 * 1][px]) + derivative = np.matrix([[dx], [dy]]) + hessian = np.matrix([[dxx, dxy], [dxy, dyy]]) + if dxx * dyy - dxy**2 != 0: + hessianinv = hessian.I + offset = -hessianinv * derivative + offset = np.squeeze(np.array(offset.T), axis=0) + coord += offset + return coord + + def dark_postprocess(self, hm, coords, kernelsize): + """ + refer to https://github.com/ilovepose/DarkPose/lib/core/inference.py + + """ + hm = self.gaussian_blur(hm, kernelsize) + hm = np.maximum(hm, 1e-10) + hm = np.log(hm) + for n in range(coords.shape[0]): + for p in range(coords.shape[1]): + coords[n, p] = self.dark_parse(hm[n][p], coords[n][p]) + return coords + + def get_final_preds(self, heatmaps, center, scale, kernelsize=3): + """the highest heatvalue location with a quarter offset in the + direction from the highest response to the second highest response. 
+ + Args: + heatmaps (numpy.ndarray): The predicted heatmaps + center (numpy.ndarray): The boxes center + scale (numpy.ndarray): The scale factor + + Returns: + preds: numpy.ndarray([batch_size, num_joints, 2]), keypoints coords + maxvals: numpy.ndarray([batch_size, num_joints, 1]), the maximum confidence of the keypoints + """ + + coords, maxvals = self.get_max_preds(heatmaps) + + heatmap_height = heatmaps.shape[2] + heatmap_width = heatmaps.shape[3] + + if self.use_dark: + coords = self.dark_postprocess(heatmaps, coords, kernelsize) + else: + for n in range(coords.shape[0]): + for p in range(coords.shape[1]): + hm = heatmaps[n][p] + px = int(math.floor(coords[n][p][0] + 0.5)) + py = int(math.floor(coords[n][p][1] + 0.5)) + if 1 < px < heatmap_width - 1 and 1 < py < heatmap_height - 1: + diff = np.array([ + hm[py][px + 1] - hm[py][px - 1], + hm[py + 1][px] - hm[py - 1][px] + ]) + coords[n][p] += np.sign(diff) * .25 + preds = coords.copy() + + # Transform back + for i in range(coords.shape[0]): + preds[i] = transform_preds(coords[i], center[i], scale[i], + [heatmap_width, heatmap_height]) + + return preds, maxvals + + def __call__(self, output, center, scale): + preds, maxvals = self.get_final_preds(output, center, scale) + return np.concatenate( + (preds, maxvals), axis=-1), np.mean( + maxvals, axis=1) + + +def transform_preds(coords, center, scale, output_size): + target_coords = np.zeros(coords.shape) + trans = get_affine_transform(center, scale * 200, 0, output_size, inv=1) + for p in range(coords.shape[0]): + target_coords[p, 0:2] = affine_transform(coords[p, 0:2], trans) + return target_coords + + +def affine_transform(pt, t): + new_pt = np.array([pt[0], pt[1], 1.]).T + new_pt = np.dot(t, new_pt) + return new_pt[:2] diff --git a/deploy/serving/python/preprocess_ops.py b/deploy/serving/python/preprocess_ops.py new file mode 100644 index 0000000000000000000000000000000000000000..0a419688b1db243179082487d1c19b2a9a4e3d4b --- /dev/null +++ b/deploy/serving/python/preprocess_ops.py @@ -0,0 +1,487 @@ +import numpy as np +import cv2 +import copy + + +def decode_image(im): + im = np.array(im) + img_info = { + "im_shape": np.array( + im.shape[:2], dtype=np.float32), + "scale_factor": np.array( + [1., 1.], dtype=np.float32) + } + return im, img_info + + +class Resize(object): + """resize image by target_size and max_size + Args: + target_size (int): the target size of image + keep_ratio (bool): whether keep_ratio or not, default true + interp (int): method of resize + """ + + def __init__(self, target_size, keep_ratio=True, interp=cv2.INTER_LINEAR): + if isinstance(target_size, int): + target_size = [target_size, target_size] + self.target_size = target_size + self.keep_ratio = keep_ratio + self.interp = interp + + def __call__(self, im, im_info): + """ + Args: + im (np.ndarray): image (np.ndarray) + im_info (dict): info of image + Returns: + im (np.ndarray): processed image (np.ndarray) + im_info (dict): info of processed image + """ + assert len(self.target_size) == 2 + assert self.target_size[0] > 0 and self.target_size[1] > 0 + im_channel = im.shape[2] + im_scale_y, im_scale_x = self.generate_scale(im) + im = cv2.resize( + im, + None, + None, + fx=im_scale_x, + fy=im_scale_y, + interpolation=self.interp) + im_info['im_shape'] = np.array(im.shape[:2]).astype('float32') + im_info['scale_factor'] = np.array( + [im_scale_y, im_scale_x]).astype('float32') + return im, im_info + + def generate_scale(self, im): + """ + Args: + im (np.ndarray): image (np.ndarray) + Returns: + im_scale_x: the resize 
ratio of X + im_scale_y: the resize ratio of Y + """ + origin_shape = im.shape[:2] + im_c = im.shape[2] + if self.keep_ratio: + im_size_min = np.min(origin_shape) + im_size_max = np.max(origin_shape) + target_size_min = np.min(self.target_size) + target_size_max = np.max(self.target_size) + im_scale = float(target_size_min) / float(im_size_min) + if np.round(im_scale * im_size_max) > target_size_max: + im_scale = float(target_size_max) / float(im_size_max) + im_scale_x = im_scale + im_scale_y = im_scale + else: + resize_h, resize_w = self.target_size + im_scale_y = resize_h / float(origin_shape[0]) + im_scale_x = resize_w / float(origin_shape[1]) + return im_scale_y, im_scale_x + + +class NormalizeImage(object): + """normalize image + Args: + mean (list): im - mean + std (list): im / std + is_scale (bool): whether need im / 255 + is_channel_first (bool): if True: image shape is CHW, else: HWC + """ + + def __init__(self, mean, std, is_scale=True): + self.mean = mean + self.std = std + self.is_scale = is_scale + + def __call__(self, im, im_info): + """ + Args: + im (np.ndarray): image (np.ndarray) + im_info (dict): info of image + Returns: + im (np.ndarray): processed image (np.ndarray) + im_info (dict): info of processed image + """ + im = im.astype(np.float32, copy=False) + mean = np.array(self.mean)[np.newaxis, np.newaxis, :] + std = np.array(self.std)[np.newaxis, np.newaxis, :] + + if self.is_scale: + im = im / 255.0 + im -= mean + im /= std + return im, im_info + + +class Permute(object): + """permute image + Args: + to_bgr (bool): whether convert RGB to BGR + channel_first (bool): whether convert HWC to CHW + """ + + def __init__(self, ): + super(Permute, self).__init__() + + def __call__(self, im, im_info): + """ + Args: + im (np.ndarray): image (np.ndarray) + im_info (dict): info of image + Returns: + im (np.ndarray): processed image (np.ndarray) + im_info (dict): info of processed image + """ + im = im.transpose((2, 0, 1)).copy() + return im, im_info + + +class PadStride(object): + """ padding image for model with FPN, instead PadBatch(pad_to_stride) in original config + Args: + stride (bool): model with FPN need image shape % stride == 0 + """ + + def __init__(self, stride=0): + self.coarsest_stride = stride + + def __call__(self, im, im_info): + """ + Args: + im (np.ndarray): image (np.ndarray) + im_info (dict): info of image + Returns: + im (np.ndarray): processed image (np.ndarray) + im_info (dict): info of processed image + """ + coarsest_stride = self.coarsest_stride + if coarsest_stride <= 0: + return im, im_info + im_c, im_h, im_w = im.shape + pad_h = int(np.ceil(float(im_h) / coarsest_stride) * coarsest_stride) + pad_w = int(np.ceil(float(im_w) / coarsest_stride) * coarsest_stride) + padding_im = np.zeros((im_c, pad_h, pad_w), dtype=np.float32) + padding_im[:, :im_h, :im_w] = im + return padding_im, im_info + + +class LetterBoxResize(object): + def __init__(self, target_size): + """ + Resize image to target size, convert normalized xywh to pixel xyxy + format ([x_center, y_center, width, height] -> [x0, y0, x1, y1]). + Args: + target_size (int|list): image target size. 
+ """ + super(LetterBoxResize, self).__init__() + if isinstance(target_size, int): + target_size = [target_size, target_size] + self.target_size = target_size + + def letterbox(self, img, height, width, color=(127.5, 127.5, 127.5)): + # letterbox: resize a rectangular image to a padded rectangular + shape = img.shape[:2] # [height, width] + ratio_h = float(height) / shape[0] + ratio_w = float(width) / shape[1] + ratio = min(ratio_h, ratio_w) + new_shape = (round(shape[1] * ratio), + round(shape[0] * ratio)) # [width, height] + padw = (width - new_shape[0]) / 2 + padh = (height - new_shape[1]) / 2 + top, bottom = round(padh - 0.1), round(padh + 0.1) + left, right = round(padw - 0.1), round(padw + 0.1) + + img = cv2.resize( + img, new_shape, interpolation=cv2.INTER_AREA) # resized, no border + img = cv2.copyMakeBorder( + img, top, bottom, left, right, cv2.BORDER_CONSTANT, + value=color) # padded rectangular + return img, ratio, padw, padh + + def __call__(self, im, im_info): + """ + Args: + im (np.ndarray): image (np.ndarray) + im_info (dict): info of image + Returns: + im (np.ndarray): processed image (np.ndarray) + im_info (dict): info of processed image + """ + assert len(self.target_size) == 2 + assert self.target_size[0] > 0 and self.target_size[1] > 0 + height, width = self.target_size + h, w = im.shape[:2] + im, ratio, padw, padh = self.letterbox(im, height=height, width=width) + + new_shape = [round(h * ratio), round(w * ratio)] + im_info['im_shape'] = np.array(new_shape, dtype=np.float32) + im_info['scale_factor'] = np.array([ratio, ratio], dtype=np.float32) + return im, im_info + + +class Pad(object): + def __init__(self, size, fill_value=[114.0, 114.0, 114.0]): + """ + Pad image to a specified size. + Args: + size (list[int]): image target size + fill_value (list[float]): rgb value of pad area, default (114.0, 114.0, 114.0) + """ + super(Pad, self).__init__() + if isinstance(size, int): + size = [size, size] + self.size = size + self.fill_value = fill_value + + def __call__(self, im, im_info): + im_h, im_w = im.shape[:2] + h, w = self.size + if h == im_h and w == im_w: + im = im.astype(np.float32) + return im, im_info + + canvas = np.ones((h, w, 3), dtype=np.float32) + canvas *= np.array(self.fill_value, dtype=np.float32) + canvas[0:im_h, 0:im_w, :] = im.astype(np.float32) + im = canvas + return im, im_info + + +def rotate_point(pt, angle_rad): + """Rotate a point by an angle. + + Args: + pt (list[float]): 2 dimensional point to be rotated + angle_rad (float): rotation angle by radian + + Returns: + list[float]: Rotated point. + """ + assert len(pt) == 2 + sn, cs = np.sin(angle_rad), np.cos(angle_rad) + new_x = pt[0] * cs - pt[1] * sn + new_y = pt[0] * sn + pt[1] * cs + rotated_pt = [new_x, new_y] + + return rotated_pt + + +def _get_3rd_point(a, b): + """To calculate the affine matrix, three pairs of points are required. This + function is used to get the 3rd point, given 2D points a & b. + + The 3rd point is defined by rotating vector `a - b` by 90 degrees + anticlockwise, using b as the rotation center. + + Args: + a (np.ndarray): point(x,y) + b (np.ndarray): point(x,y) + + Returns: + np.ndarray: The 3rd point. + """ + assert len(a) == 2 + assert len(b) == 2 + direction = a - b + third_pt = b + np.array([-direction[1], direction[0]], dtype=np.float32) + + return third_pt + + +def get_affine_transform(center, + input_size, + rot, + output_size, + shift=(0., 0.), + inv=False): + """Get the affine transform matrix, given the center/scale/rot/output_size. 
+ + Args: + center (np.ndarray[2, ]): Center of the bounding box (x, y). + scale (np.ndarray[2, ]): Scale of the bounding box + wrt [width, height]. + rot (float): Rotation angle (degree). + output_size (np.ndarray[2, ]): Size of the destination heatmaps. + shift (0-100%): Shift translation ratio wrt the width/height. + Default (0., 0.). + inv (bool): Option to inverse the affine transform direction. + (inv=False: src->dst or inv=True: dst->src) + + Returns: + np.ndarray: The transform matrix. + """ + assert len(center) == 2 + assert len(output_size) == 2 + assert len(shift) == 2 + if not isinstance(input_size, (np.ndarray, list)): + input_size = np.array([input_size, input_size], dtype=np.float32) + scale_tmp = input_size + + shift = np.array(shift) + src_w = scale_tmp[0] + dst_w = output_size[0] + dst_h = output_size[1] + + rot_rad = np.pi * rot / 180 + src_dir = rotate_point([0., src_w * -0.5], rot_rad) + dst_dir = np.array([0., dst_w * -0.5]) + + src = np.zeros((3, 2), dtype=np.float32) + src[0, :] = center + scale_tmp * shift + src[1, :] = center + src_dir + scale_tmp * shift + src[2, :] = _get_3rd_point(src[0, :], src[1, :]) + + dst = np.zeros((3, 2), dtype=np.float32) + dst[0, :] = [dst_w * 0.5, dst_h * 0.5] + dst[1, :] = np.array([dst_w * 0.5, dst_h * 0.5]) + dst_dir + dst[2, :] = _get_3rd_point(dst[0, :], dst[1, :]) + + if inv: + trans = cv2.getAffineTransform(np.float32(dst), np.float32(src)) + else: + trans = cv2.getAffineTransform(np.float32(src), np.float32(dst)) + + return trans + + +class WarpAffine(object): + """Warp affine the image + """ + + def __init__(self, + keep_res=False, + pad=31, + input_h=512, + input_w=512, + scale=0.4, + shift=0.1): + self.keep_res = keep_res + self.pad = pad + self.input_h = input_h + self.input_w = input_w + self.scale = scale + self.shift = shift + + def __call__(self, im, im_info): + """ + Args: + im (np.ndarray): image (np.ndarray) + im_info (dict): info of image + Returns: + im (np.ndarray): processed image (np.ndarray) + im_info (dict): info of processed image + """ + img = cv2.cvtColor(im, cv2.COLOR_RGB2BGR) + + h, w = img.shape[:2] + + if self.keep_res: + input_h = (h | self.pad) + 1 + input_w = (w | self.pad) + 1 + s = np.array([input_w, input_h], dtype=np.float32) + c = np.array([w // 2, h // 2], dtype=np.float32) + + else: + s = max(h, w) * 1.0 + input_h, input_w = self.input_h, self.input_w + c = np.array([w / 2., h / 2.], dtype=np.float32) + + trans_input = get_affine_transform(c, s, 0, [input_w, input_h]) + img = cv2.resize(img, (w, h)) + inp = cv2.warpAffine( + img, trans_input, (input_w, input_h), flags=cv2.INTER_LINEAR) + return inp, im_info + + +# keypoint preprocess +def get_warp_matrix(theta, size_input, size_dst, size_target): + """This code is based on + https://github.com/open-mmlab/mmpose/blob/master/mmpose/core/post_processing/post_transforms.py + + Calculate the transformation matrix under the constraint of unbiased. + Paper ref: Huang et al. The Devil is in the Details: Delving into Unbiased + Data Processing for Human Pose Estimation (CVPR 2020). + + Args: + theta (float): Rotation angle in degrees. + size_input (np.ndarray): Size of input image [w, h]. + size_dst (np.ndarray): Size of output image [w, h]. + size_target (np.ndarray): Size of ROI in input plane [w, h]. + + Returns: + matrix (np.ndarray): A matrix for transformation. 
+ """ + theta = np.deg2rad(theta) + matrix = np.zeros((2, 3), dtype=np.float32) + scale_x = size_dst[0] / size_target[0] + scale_y = size_dst[1] / size_target[1] + matrix[0, 0] = np.cos(theta) * scale_x + matrix[0, 1] = -np.sin(theta) * scale_x + matrix[0, 2] = scale_x * ( + -0.5 * size_input[0] * np.cos(theta) + 0.5 * size_input[1] * + np.sin(theta) + 0.5 * size_target[0]) + matrix[1, 0] = np.sin(theta) * scale_y + matrix[1, 1] = np.cos(theta) * scale_y + matrix[1, 2] = scale_y * ( + -0.5 * size_input[0] * np.sin(theta) - 0.5 * size_input[1] * + np.cos(theta) + 0.5 * size_target[1]) + return matrix + + +class TopDownEvalAffine(object): + """apply affine transform to image and coords + + Args: + trainsize (list): [w, h], the standard size used to train + use_udp (bool): whether to use Unbiased Data Processing. + records(dict): the dict contained the image and coords + + Returns: + records (dict): contain the image and coords after tranformed + + """ + + def __init__(self, trainsize, use_udp=False): + self.trainsize = trainsize + self.use_udp = use_udp + + def __call__(self, image, im_info): + rot = 0 + imshape = im_info['im_shape'][::-1] + center = im_info['center'] if 'center' in im_info else imshape / 2. + scale = im_info['scale'] if 'scale' in im_info else imshape + if self.use_udp: + trans = get_warp_matrix( + rot, center * 2.0, + [self.trainsize[0] - 1.0, self.trainsize[1] - 1.0], scale) + image = cv2.warpAffine( + image, + trans, (int(self.trainsize[0]), int(self.trainsize[1])), + flags=cv2.INTER_LINEAR) + else: + trans = get_affine_transform(center, scale, rot, self.trainsize) + image = cv2.warpAffine( + image, + trans, (int(self.trainsize[0]), int(self.trainsize[1])), + flags=cv2.INTER_LINEAR) + + return image, im_info + + +class Compose: + def __init__(self, transforms): + self.transforms = [] + for op_info in transforms: + new_op_info = op_info.copy() + op_type = new_op_info.pop('type') + self.transforms.append(eval(op_type)(**new_op_info)) + + def __call__(self, img): + img, im_info = decode_image(img) + for t in self.transforms: + img, im_info = t(img, im_info) + inputs = copy.deepcopy(im_info) + inputs['image'] = img + return inputs diff --git a/deploy/serving/python/web_service.py b/deploy/serving/python/web_service.py new file mode 100644 index 0000000000000000000000000000000000000000..2fbb4ed839d15d102f157a5f5cc780d1efebd267 --- /dev/null +++ b/deploy/serving/python/web_service.py @@ -0,0 +1,255 @@ +# Copyright (c) 2022 PaddlePaddle Authors. All Rights Reserved. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. 
+import copy
+
+from paddle_serving_server.web_service import WebService, Op
+from paddle_serving_server.proto import general_model_config_pb2 as m_config
+import google.protobuf.text_format
+
+import os
+import numpy as np
+import base64
+from PIL import Image
+import io
+from preprocess_ops import Compose
+from postprocess_ops import HRNetPostProcess
+
+from argparse import ArgumentParser, RawDescriptionHelpFormatter
+import yaml
+
+# Global dictionary
+SUPPORT_MODELS = {
+    'YOLO', 'RCNN', 'SSD', 'Face', 'FCOS', 'SOLOv2', 'TTFNet', 'S2ANet', 'JDE',
+    'FairMOT', 'DeepSORT', 'GFL', 'PicoDet', 'CenterNet', 'TOOD', 'RetinaNet',
+    'StrongBaseline', 'STGCN', 'YOLOX', 'HRNet'
+}
+
+GLOBAL_VAR = {}
+
+
+class ArgsParser(ArgumentParser):
+    def __init__(self):
+        super(ArgsParser, self).__init__(
+            formatter_class=RawDescriptionHelpFormatter)
+        self.add_argument(
+            "-c",
+            "--config",
+            default="deploy/serving/python/config.yml",
+            help="configuration file to use")
+        self.add_argument(
+            "--model_dir",
+            type=str,
+            default=None,
+            help=("Directory include:'model.pdiparams', 'model.pdmodel', "
+                  "'infer_cfg.yml', created by tools/export_model.py."),
+            required=True)
+        self.add_argument(
+            "-o", "--opt", nargs='+', help="set configuration options")
+
+    def parse_args(self, argv=None):
+        args = super(ArgsParser, self).parse_args(argv)
+        assert args.config is not None, \
+            "Please specify --config=configure_file_path."
+        args.service_config = self._parse_opt(args.opt, args.config)
+        print("args config:", args.service_config)
+        args.model_config = PredictConfig(args.model_dir)
+        return args
+
+    def _parse_helper(self, v):
+        # str.isnumeric() is False for strings like "0.5", so strip one dot
+        # before testing; otherwise float options would never be converted
+        if v.replace(".", "", 1).isnumeric():
+            if "." in v:
+                v = float(v)
+            else:
+                v = int(v)
+        elif v == "True" or v == "False":
+            v = (v == "True")
+        return v
+
+    def _parse_opt(self, opts, conf_path):
+        f = open(conf_path)
+        config = yaml.load(f, Loader=yaml.Loader)
+        if not opts:
+            return config
+        for s in opts:
+            s = s.strip()
+            k, v = s.split('=')
+            v = self._parse_helper(v)
+            if "devices" in k:
+                v = str(v)
+            print(k, v, type(v))
+            cur = config
+            parent = cur
+            for kk in k.split("."):
+                if kk not in cur:
+                    cur[kk] = {}
+                    parent = cur
+                    cur = cur[kk]
+                else:
+                    parent = cur
+                    cur = cur[kk]
+            parent[k.split(".")[-1]] = v
+        return config
+
+
+class PredictConfig(object):
+    """set config of preprocess, postprocess and visualize
+    Args:
+        model_dir (str): root path of infer_cfg.yml
+    """
+
+    def __init__(self, model_dir):
+        # parsing Yaml config for Preprocess
+        deploy_file = os.path.join(model_dir, 'infer_cfg.yml')
+        with open(deploy_file) as f:
+            yml_conf = yaml.safe_load(f)
+        self.check_model(yml_conf)
+        self.arch = yml_conf['arch']
+        self.preprocess_infos = yml_conf['Preprocess']
+        self.min_subgraph_size = yml_conf['min_subgraph_size']
+        self.label_list = yml_conf['label_list']
+        self.use_dynamic_shape = yml_conf['use_dynamic_shape']
+        self.draw_threshold = yml_conf.get("draw_threshold", 0.5)
+        self.mask = yml_conf.get("mask", False)
+        self.tracker = yml_conf.get("tracker", None)
+        self.nms = yml_conf.get("NMS", None)
+        self.fpn_stride = yml_conf.get("fpn_stride", None)
+        if self.arch == 'RCNN' and yml_conf.get('export_onnx', False):
+            print(
+                'The RCNN export model is used for ONNX and it only supports batch_size = 1'
+            )
+        self.print_config()
+
+    def check_model(self, yml_conf):
+        """
+        Raises:
+            ValueError: loaded model not in supported model type
+        """
+        for support_model in SUPPORT_MODELS:
+            if support_model in yml_conf['arch']:
+                return True
+        raise ValueError("Unsupported arch: {}, expect
{}".format(yml_conf[ + 'arch'], SUPPORT_MODELS)) + + def print_config(self): + print('----------- Model Configuration -----------') + print('%s: %s' % ('Model Arch', self.arch)) + print('%s: ' % ('Transform Order')) + for op_info in self.preprocess_infos: + print('--%s: %s' % ('transform op', op_info['type'])) + print('--------------------------------------------') + + +class DetectorOp(Op): + def init_op(self): + self.preprocess_pipeline = Compose(GLOBAL_VAR['preprocess_ops']) + + def preprocess(self, input_dicts, data_id, log_id): + (_, input_dict), = input_dicts.items() + inputs = [] + for key, data in input_dict.items(): + data = base64.b64decode(data.encode('utf8')) + byte_stream = io.BytesIO(data) + img = Image.open(byte_stream).convert("RGB") + inputs.append(self.preprocess_pipeline(img)) + inputs = self.collate_inputs(inputs) + return inputs, False, None, "" + + def postprocess(self, input_dicts, fetch_dict, data_id, log_id): + (_, input_dict), = input_dicts.items() + if GLOBAL_VAR['model_config'].arch in ["HRNet"]: + result = self.parse_keypoint_result(input_dict, fetch_dict) + else: + result = self.parse_detection_result(input_dict, fetch_dict) + return result, None, "" + + def collate_inputs(self, inputs): + collate_inputs = {k: [] for k in inputs[0].keys()} + for info in inputs: + for k in collate_inputs.keys(): + collate_inputs[k].append(info[k]) + return { + k: np.stack(v) + for k, v in collate_inputs.items() if k in GLOBAL_VAR['feed_vars'] + } + + def parse_detection_result(self, input_dict, fetch_dict): + bboxes = fetch_dict[GLOBAL_VAR['fetch_vars'][0]] + bboxes_num = fetch_dict[GLOBAL_VAR['fetch_vars'][1]] + if GLOBAL_VAR['model_config'].mask: + masks = fetch_dict[GLOBAL_VAR['fetch_vars'][2]] + idx = 0 + results = {} + for img_name, num in zip(input_dict.keys(), bboxes_num): + result = [] + bbox = bboxes[idx:idx + num] + for line in bbox: + if line[0] > -1 and line[1] > GLOBAL_VAR[ + 'model_config'].draw_threshold: + result.append(f"{int(line[0])} {line[1]} " + f"{line[2]} {line[3]} {line[4]} {line[5]}") + results[img_name] = result + idx += num + return results + + def parse_keypoint_result(self, input_dict, fetch_dict): + heatmap = fetch_dict["conv2d_441.tmp_1"] + keypoint_postprocess = HRNetPostProcess() + im_shape = [] + for key, data in input_dict.items(): + data = base64.b64decode(data.encode('utf8')) + byte_stream = io.BytesIO(data) + img = Image.open(byte_stream).convert("RGB") + im_shape.append([img.width, img.height]) + im_shape = np.array(im_shape) + center = np.round(im_shape / 2.) + scale = im_shape / 200. 
+ kpts, scores = keypoint_postprocess(heatmap, center, scale) + results = {"keypoint": kpts, "scores": scores} + return results + + +class DetectorService(WebService): + def get_pipeline_response(self, read_op): + return DetectorOp(name="ppdet", input_ops=[read_op]) + + +def get_model_vars(model_dir, service_config): + serving_server_dir = os.path.join(model_dir, "serving_server") + # rewrite model_config + service_config['op']['ppdet']['local_service_conf'][ + 'model_config'] = serving_server_dir + serving_server_conf = os.path.join(serving_server_dir, + "serving_server_conf.prototxt") + with open(serving_server_conf, 'r') as f: + model_var = google.protobuf.text_format.Merge( + str(f.read()), m_config.GeneralModelConfig()) + feed_vars = [var.name for var in model_var.feed_var] + fetch_vars = [var.name for var in model_var.fetch_var] + return feed_vars, fetch_vars + + +if __name__ == '__main__': + # load config and prepare the service + FLAGS = ArgsParser().parse_args() + feed_vars, fetch_vars = get_model_vars(FLAGS.model_dir, + FLAGS.service_config) + GLOBAL_VAR['feed_vars'] = feed_vars + GLOBAL_VAR['fetch_vars'] = fetch_vars + GLOBAL_VAR['preprocess_ops'] = FLAGS.model_config.preprocess_infos + GLOBAL_VAR['model_config'] = FLAGS.model_config + # define the service + uci_service = DetectorService(name="ppdet") + uci_service.prepare_pipeline_config(yml_dict=FLAGS.service_config) + # start the service + uci_service.run_service() diff --git a/deploy/third_engine/demo_avh/Makefile b/deploy/third_engine/demo_avh/Makefile new file mode 100644 index 0000000000000000000000000000000000000000..4ea570578809f3d86d8bc75b7c1d41ad50819b58 --- /dev/null +++ b/deploy/third_engine/demo_avh/Makefile @@ -0,0 +1,114 @@ +# Licensed to the Apache Software Foundation (ASF) under one +# or more contributor license agreements. See the NOTICE file +# distributed with this work for additional information +# regarding copyright ownership. The ASF licenses this file +# to you under the Apache License, Version 2.0 (the +# "License"); you may not use this file except in compliance +# with the License. You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, +# software distributed under the License is distributed on an +# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY +# KIND, either express or implied. See the License for the +# specific language governing permissions and limitations +# under the License. 
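+
+# Typical usage (descriptive note): run_demo.sh generates build/codegen with
+# tvmc and then calls `make` here; `make` builds build/demo (the default
+# target) and `make cleanall` removes the whole build directory.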
+
+# Makefile to build demo
+
+# Setup build environment
+BUILD_DIR := build
+
+ARM_CPU = ARMCM55
+ETHOSU_PATH = /opt/arm/ethosu
+CMSIS_PATH ?= ${ETHOSU_PATH}/cmsis
+ETHOSU_PLATFORM_PATH ?= ${ETHOSU_PATH}/core_platform
+STANDALONE_CRT_PATH := $(abspath $(BUILD_DIR))/runtime
+CORSTONE_300_PATH = ${ETHOSU_PLATFORM_PATH}/targets/corstone-300
+PKG_COMPILE_OPTS = -g -Wall -O2 -Wno-incompatible-pointer-types -Wno-format -mcpu=cortex-m55 -mthumb -mfloat-abi=hard -std=gnu99
+CMAKE ?= cmake
+CC = arm-none-eabi-gcc
+AR = arm-none-eabi-ar
+RANLIB = arm-none-eabi-ranlib
+PKG_CFLAGS = ${PKG_COMPILE_OPTS} \
+	-I${STANDALONE_CRT_PATH}/include \
+	-I${STANDALONE_CRT_PATH}/src/runtime/crt/include \
+	-I${PWD}/include \
+	-I${CORSTONE_300_PATH} \
+	-I${CMSIS_PATH}/Device/ARM/${ARM_CPU}/Include/ \
+	-I${CMSIS_PATH}/CMSIS/Core/Include \
+	-I${CMSIS_PATH}/CMSIS/NN/Include \
+	-I${CMSIS_PATH}/CMSIS/DSP/Include \
+	-I$(abspath $(BUILD_DIR))/codegen/host/include
+CMSIS_NN_CMAKE_FLAGS = -DCMAKE_TOOLCHAIN_FILE=$(abspath $(BUILD_DIR))/../arm-none-eabi-gcc.cmake \
+	-DTARGET_CPU=cortex-m55 \
+	-DBUILD_CMSIS_NN_FUNCTIONS=YES
+PKG_LDFLAGS = -lm -specs=nosys.specs -static -T corstone300.ld
+
+ifeq ($(VERBOSE),1)
+QUIET ?=
+else
+QUIET ?= @
+endif
+
+DEMO_MAIN = src/demo_bare_metal.c
+CODEGEN_SRCS = $(wildcard $(abspath $(BUILD_DIR))/codegen/host/src/*.c)
+CODEGEN_OBJS = $(subst .c,.o,$(CODEGEN_SRCS))
+CMSIS_STARTUP_SRCS = $(wildcard ${CMSIS_PATH}/Device/ARM/${ARM_CPU}/Source/*.c)
+UART_SRCS = $(wildcard ${CORSTONE_300_PATH}/*.c)
+
+demo: $(BUILD_DIR)/demo
+
+$(BUILD_DIR)/stack_allocator.o: $(STANDALONE_CRT_PATH)/src/runtime/crt/memory/stack_allocator.c
+	$(QUIET)mkdir -p $(@D)
+	$(QUIET)$(CC) -c $(PKG_CFLAGS) -o $@ $^
+
+$(BUILD_DIR)/crt_backend_api.o: $(STANDALONE_CRT_PATH)/src/runtime/crt/common/crt_backend_api.c
+	$(QUIET)mkdir -p $(@D)
+	$(QUIET)$(CC) -c $(PKG_CFLAGS) -o $@ $^
+
+# Build generated code
+$(BUILD_DIR)/libcodegen.a: $(CODEGEN_SRCS)
+	$(QUIET)cd $(abspath $(BUILD_DIR)/codegen/host/src) && $(CC) -c $(PKG_CFLAGS) $(CODEGEN_SRCS)
+	$(QUIET)$(AR) -cr $(abspath $(BUILD_DIR)/libcodegen.a) $(CODEGEN_OBJS)
+	$(QUIET)$(RANLIB) $(abspath $(BUILD_DIR)/libcodegen.a)
+
+# Build CMSIS startup code
+${BUILD_DIR}/libcmsis_startup.a: $(CMSIS_STARTUP_SRCS)
+	$(QUIET)mkdir -p $(abspath $(BUILD_DIR)/libcmsis_startup)
+	$(QUIET)cd $(abspath $(BUILD_DIR)/libcmsis_startup) && $(CC) -c $(PKG_CFLAGS) -D${ARM_CPU} $^
+	$(QUIET)$(AR) -cr $(abspath $(BUILD_DIR)/libcmsis_startup.a) $(abspath $(BUILD_DIR))/libcmsis_startup/*.o
+	$(QUIET)$(RANLIB) $(abspath $(BUILD_DIR)/libcmsis_startup.a)
+
+# Build CMSIS-NN
+${BUILD_DIR}/cmsis_nn/Source/SoftmaxFunctions/libCMSISNNSoftmax.a:
+	$(QUIET)mkdir -p $(@D)
+	$(QUIET)cd $(CMSIS_PATH)/CMSIS/NN && $(CMAKE) -B $(abspath $(BUILD_DIR)/cmsis_nn) $(CMSIS_NN_CMAKE_FLAGS)
+	$(QUIET)cd $(abspath $(BUILD_DIR)/cmsis_nn) && $(MAKE) all
+
+# Build demo application
+$(BUILD_DIR)/demo: $(DEMO_MAIN) $(UART_SRCS) $(BUILD_DIR)/stack_allocator.o $(BUILD_DIR)/crt_backend_api.o \
+	${BUILD_DIR}/libcodegen.a ${BUILD_DIR}/libcmsis_startup.a \
+	${BUILD_DIR}/cmsis_nn/Source/SoftmaxFunctions/libCMSISNNSoftmax.a \
+	${BUILD_DIR}/cmsis_nn/Source/FullyConnectedFunctions/libCMSISNNFullyConnected.a \
+	${BUILD_DIR}/cmsis_nn/Source/SVDFunctions/libCMSISNNSVDF.a \
+	${BUILD_DIR}/cmsis_nn/Source/ReshapeFunctions/libCMSISNNReshape.a \
+	${BUILD_DIR}/cmsis_nn/Source/ActivationFunctions/libCMSISNNActivation.a \
+	${BUILD_DIR}/cmsis_nn/Source/NNSupportFunctions/libCMSISNNSupport.a \
	${BUILD_DIR}/cmsis_nn/Source/ConcatenationFunctions/libCMSISNNConcatenation.a \
+	${BUILD_DIR}/cmsis_nn/Source/BasicMathFunctions/libCMSISNNBasicMaths.a \
+	${BUILD_DIR}/cmsis_nn/Source/ConvolutionFunctions/libCMSISNNConvolutions.a \
+	${BUILD_DIR}/cmsis_nn/Source/PoolingFunctions/libCMSISNNPooling.a
+	$(QUIET)mkdir -p $(@D)
+	$(QUIET)$(CC) $(PKG_CFLAGS) $(FREERTOS_FLAGS) -o $@ -Wl,--whole-archive $^ -Wl,--no-whole-archive $(PKG_LDFLAGS)
+
+clean:
+	$(QUIET)rm -rf $(BUILD_DIR)/codegen
+
+cleanall:
+	$(QUIET)rm -rf $(BUILD_DIR)
+
+.SUFFIXES:
+
+.DEFAULT: demo
diff --git a/deploy/third_engine/demo_avh/README.md b/deploy/third_engine/demo_avh/README.md
new file mode 100644
index 0000000000000000000000000000000000000000..69250e5f90a0e48a21e5f360af21e9809f6750db
--- /dev/null
+++ b/deploy/third_engine/demo_avh/README.md
@@ -0,0 +1,90 @@
+
+Running PP-PicoDet via TVM on bare metal Arm(R) Cortex(R)-M55 CPU and CMSIS-NN
+===============================================================
+
+This folder contains an example of how to use TVM to run a PP-PicoDet model
+on bare metal Cortex(R)-M55 CPU and CMSIS-NN.
+
+Prerequisites
+-------------
+If the demo is run in the ci_cpu Docker container provided with TVM, then the following
+software will already be installed.
+
+If the demo is not run in the ci_cpu Docker container, then you will need the following:
+- Software required to build and run the demo (These can all be installed by running
+  tvm/docker/install/ubuntu_install_ethosu_driver_stack.sh.)
+  - [Fixed Virtual Platform (FVP) based on Arm(R) Corstone(TM)-300 software](https://developer.arm.com/tools-and-software/open-source-software/arm-platforms-software/arm-ecosystem-fvps)
+  - [cmake 3.19.5](https://github.com/Kitware/CMake/releases/)
+  - [GCC toolchain from Arm(R)](https://developer.arm.com/-/media/Files/downloads/gnu-rm/10-2020q4/gcc-arm-none-eabi-10-2020-q4-major-x86_64-linux.tar.bz2)
+  - [Arm(R) Ethos(TM)-U NPU driver stack](https://review.mlplatform.org)
+  - [CMSIS](https://github.com/ARM-software/CMSIS_5)
+- The python libraries listed in the requirements.txt of this directory
+  - These can be installed by running the following from the current directory:
+    ```bash
+    pip install -r ./requirements.txt
+    ```
+
+You will also need TVM which can either be:
+  - Built from source (see [Install from Source](https://tvm.apache.org/docs/install/from_source.html))
+    - When building from source, the following need to be set in config.cmake:
+      - set(USE_CMSISNN ON)
+      - set(USE_MICRO ON)
+      - set(USE_LLVM ON)
+  - Installed from TLCPack (see [TLCPack](https://tlcpack.ai/))
+
+You will need to update your PATH environment variable to include the path to cmake 3.19.5 and the FVP.
+For example, if you've installed these in ```/opt/arm```, then you would do the following:
+```bash
+export PATH=/opt/arm/FVP_Corstone_SSE-300/models/Linux64_GCC-6.4:/opt/arm/cmake/bin:$PATH
+```
+
+Running the demo application
+----------------------------
+Type the following command to run the bare metal object detection application ([src/demo_bare_metal.c](./src/demo_bare_metal.c)):
+```bash
+./run_demo.sh
+```
+If the Ethos(TM)-U platform and/or CMSIS have not been installed in /opt/arm/ethosu then
+the locations for these can be specified as arguments to run_demo.sh, for example:
+
+```bash
+./run_demo.sh --cmsis_path /home/tvm-user/cmsis \
+--ethosu_platform_path /home/tvm-user/ethosu/core_platform
+```
+
+This will:
+- Download a PP-PicoDet object detection model
+- Use tvmc to compile the model for Cortex(R)-M55 CPU and CMSIS-NN
+- Create a C header file inputs.h containing the image data as a C array
+- Create a C header file outputs.h containing a C array where the output of inference will be stored
+- Build the demo application
+- Run the demo application on a Fixed Virtual Platform (FVP) based on Arm(R) Corstone(TM)-300 software
+- The application will report the detected objects and the corresponding scores
+
+Using your own image
+--------------------
+The convert_image.py script takes a single argument on the command line which is the path of the
+image to be converted into an array of bytes for consumption by the model.
+
+The demo can be modified to use an image of your choice by changing the following line in run_demo.sh
+
+```bash
+python3 ./convert_image.py ../../demo/000000014439_640x640.jpg
+```
+
+Model description
+-----------------
+In this demo, the model we use is based on [PP-PicoDet](https://github.com/PaddlePaddle/PaddleDetection/tree/release/2.4/configs/picodet). Thanks to its excellent performance, PP-PicoDet is well suited for deployment on mobile devices or CPUs. It is released by [PaddleDetection](https://github.com/PaddlePaddle/PaddleDetection).
diff --git a/deploy/third_engine/demo_avh/arm-none-eabi-gcc.cmake b/deploy/third_engine/demo_avh/arm-none-eabi-gcc.cmake
new file mode 100644
index 0000000000000000000000000000000000000000..415b3139be1b7f891c017dff0dc299b67f7ef2fe
--- /dev/null
+++ b/deploy/third_engine/demo_avh/arm-none-eabi-gcc.cmake
@@ -0,0 +1,79 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+
+if (__TOOLCHAIN_LOADED)
+  return()
+endif()
+set(__TOOLCHAIN_LOADED TRUE)
+
+set(CMAKE_SYSTEM_NAME Generic)
+set(CMAKE_C_COMPILER "arm-none-eabi-gcc")
+set(CMAKE_CXX_COMPILER "arm-none-eabi-g++")
+set(CMAKE_SYSTEM_PROCESSOR "cortex-m55" CACHE STRING "Select Arm(R) Cortex(R)-M architecture. (cortex-m0, cortex-m3, cortex-m33, cortex-m4, cortex-m55, cortex-m7, etc)")
+
+set(CMAKE_TRY_COMPILE_TARGET_TYPE STATIC_LIBRARY)
+
+SET(CMAKE_FIND_ROOT_PATH_MODE_PROGRAM NEVER)
+SET(CMAKE_FIND_ROOT_PATH_MODE_LIBRARY ONLY)
+SET(CMAKE_FIND_ROOT_PATH_MODE_INCLUDE ONLY)
+
+set(CMAKE_C_STANDARD 99)
+set(CMAKE_CXX_STANDARD 14)
+
+# The system processor could for example be set to cortex-m33+nodsp+nofp.
+set(__CPU_COMPILE_TARGET ${CMAKE_SYSTEM_PROCESSOR})
+string(REPLACE "+" ";" __CPU_FEATURES ${__CPU_COMPILE_TARGET})
+list(POP_FRONT __CPU_FEATURES CMAKE_SYSTEM_PROCESSOR)
+
+string(FIND ${__CPU_COMPILE_TARGET} "+" __OFFSET)
+if(__OFFSET GREATER_EQUAL 0)
+  string(SUBSTRING ${__CPU_COMPILE_TARGET} ${__OFFSET} -1 CPU_FEATURES)
+endif()
+
+# Add -mcpu to the compile options to override the -mcpu the CMake toolchain adds
+add_compile_options(-mcpu=${__CPU_COMPILE_TARGET})
+
+# Set floating point unit
+if("${__CPU_COMPILE_TARGET}" MATCHES "\\+fp")
+  set(FLOAT hard)
+elseif("${__CPU_COMPILE_TARGET}" MATCHES "\\+nofp")
+  set(FLOAT soft)
+elseif("${CMAKE_SYSTEM_PROCESSOR}" STREQUAL "cortex-m33" OR
+       "${CMAKE_SYSTEM_PROCESSOR}" STREQUAL "cortex-m55")
+  set(FLOAT hard)
+else()
+  set(FLOAT soft)
+endif()
+
+add_compile_options(-mfloat-abi=${FLOAT})
+add_link_options(-mfloat-abi=${FLOAT})
+
+# Link target
+add_link_options(-mcpu=${__CPU_COMPILE_TARGET})
+add_link_options(-Xlinker -Map=output.map)
+
+#
+# Compile options
+#
+set(cxx_flags "-fno-unwind-tables;-fno-rtti;-fno-exceptions")
+
+add_compile_options("-Wall;-Wextra;-Wsign-compare;-Wunused;-Wswitch-default;\
+-Wdouble-promotion;-Wredundant-decls;-Wshadow;-Wnull-dereference;\
+-Wno-format-extra-args;-Wno-unused-function;-Wno-unused-label;\
+-Wno-missing-field-initializers;-Wno-return-type;-Wno-format;-Wno-int-conversion"
+  "$<$<COMPILE_LANGUAGE:CXX>:${cxx_flags}>"
+)
diff --git a/deploy/third_engine/demo_avh/convert_image.py b/deploy/third_engine/demo_avh/convert_image.py
new file mode 100755
index 0000000000000000000000000000000000000000..a335b5aa7296db6a364baaf51f98b073c5de1429
--- /dev/null
+++ b/deploy/third_engine/demo_avh/convert_image.py
@@ -0,0 +1,97 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
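+
+# Usage (normally invoked from run_demo.sh):
+#     python3 ./convert_image.py ../../demo/000000014439_640x640.jpg
+# Regenerates include/inputs.h and include/outputs.h for the bare-metal build.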
+
+import os
+import pathlib
+import sys
+import cv2
+import numpy as np
+
+
+def resize_norm_img(img, image_shape, padding=True):
+    imgC, imgH, imgW = image_shape
+    img = cv2.resize(img, (imgW, imgH), interpolation=cv2.INTER_LINEAR)
+    img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
+    img = np.transpose(img, [2, 0, 1]) / 255
+    img_mean = np.array([0.485, 0.456, 0.406]).reshape((3, 1, 1))
+    img_std = np.array([0.229, 0.224, 0.225]).reshape((3, 1, 1))
+    img -= img_mean
+    img /= img_std
+    return img.astype(np.float32)
+
+
+def create_header_file(name, tensor_name, tensor_data, output_path):
+    """
+    This function generates a header file containing the data from the numpy array provided.
+    """
+    file_path = pathlib.Path(f"{output_path}/" + name).resolve()
+    # Create header file with npy_data as a C array
+    raw_path = file_path.with_suffix(".h").resolve()
+    with open(raw_path, "a") as header_file:
+        header_file.write(
+            "\n" + f"const size_t {tensor_name}_len = {tensor_data.size};\n" +
+            f'__attribute__((section(".data.tvm"), aligned(16))) float {tensor_name}[] = '
+        )
+        header_file.write("{")
+        for i in np.ndindex(tensor_data.shape):
+            header_file.write(f"{tensor_data[i]}, ")
+        header_file.write("};\n\n")
+
+
+def create_headers(image_name):
+    """
+    This function generates C header files for the input and output arrays required to run inferences
+    """
+    img_path = os.path.join("./", f"{image_name}")
+
+    # Resize the image to the 320x320 input expected by the compiled model
+    # (see --input-shapes image:[1,3,320,320] in run_demo.sh)
+    img = cv2.imread(img_path)
+    img = resize_norm_img(img, [3, 320, 320])
+    img_data = img.astype("float32")
+
+    # Add the batch dimension, as we are expecting 4-dimensional input: NCHW.
+    img_data = np.expand_dims(img_data, axis=0)
+
+    # Remove any previously generated headers before re-creating them
+    for header in ("./include/inputs.h", "./include/outputs.h"):
+        if os.path.exists(header):
+            os.remove(header)
+    # Create input header file
+    create_header_file("inputs", "input", img_data, "./include")
+    # Create output header file
+    output_data = np.zeros([8500], np.float32)
+    create_header_file(
+        "outputs",
+        "output0",
+        output_data,
+        "./include", )
+    output_data = np.zeros([170000], np.float32)
+    create_header_file(
+        "outputs",
+        "output1",
+        output_data,
+        "./include", )
+
+
+if __name__ == "__main__":
+    create_headers(sys.argv[1])
diff --git a/deploy/third_engine/demo_avh/corstone300.ld b/deploy/third_engine/demo_avh/corstone300.ld
new file mode 100644
index 0000000000000000000000000000000000000000..1d2dd8805799fe78ee8b2696fe4ff7fab3d01f38
--- /dev/null
+++ b/deploy/third_engine/demo_avh/corstone300.ld
@@ -0,0 +1,295 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */ + +/*------------------ Reference System Memories ------------- + +===================+============+=======+============+============+ + | Memory | Address | Size | CPU Access | NPU Access | + +===================+============+=======+============+============+ + | ITCM | 0x00000000 | 512KB | Yes (RO) | No | + +-------------------+------------+-------+------------+------------+ + | DTCM | 0x20000000 | 512KB | Yes (R/W) | No | + +-------------------+------------+-------+------------+------------+ + | SSE-300 SRAM | 0x21000000 | 2MB | Yes (R/W) | Yes (R/W) | + +-------------------+------------+-------+------------+------------+ + | Data SRAM | 0x01000000 | 2MB | Yes (R/W) | Yes (R/W) | + +-------------------+------------+-------+------------+------------+ + | DDR | 0x60000000 | 32MB | Yes (R/W) | Yes (R/W) | + +-------------------+------------+-------+------------+------------+ */ + +/*---------------------- ITCM Configuration ---------------------------------- + Flash Configuration + Flash Base Address <0x0-0xFFFFFFFF:8> + Flash Size (in Bytes) <0x0-0xFFFFFFFF:8> + + -----------------------------------------------------------------------------*/ +__ROM_BASE = 0x00000000; +__ROM_SIZE = 0x00080000; + +/*--------------------- DTCM RAM Configuration ---------------------------- + RAM Configuration + RAM Base Address <0x0-0xFFFFFFFF:8> + RAM Size (in Bytes) <0x0-0xFFFFFFFF:8> + + -----------------------------------------------------------------------------*/ +__RAM_BASE = 0x20000000; +__RAM_SIZE = 0x00080000; + +/*----------------------- Data SRAM Configuration ------------------------------ + Data SRAM Configuration + DATA_SRAM Base Address <0x0-0xFFFFFFFF:8> + DATA_SRAM Size (in Bytes) <0x0-0xFFFFFFFF:8> + + -----------------------------------------------------------------------------*/ +__DATA_SRAM_BASE = 0x01000000; +__DATA_SRAM_SIZE = 0x00200000; + +/*--------------------- Embedded SRAM Configuration ---------------------------- + SRAM Configuration + SRAM Base Address <0x0-0xFFFFFFFF:8> + SRAM Size (in Bytes) <0x0-0xFFFFFFFF:8> + + -----------------------------------------------------------------------------*/ +__SRAM_BASE = 0x21000000; +__SRAM_SIZE = 0x00200000; + +/*--------------------- Stack / Heap Configuration ---------------------------- + Stack / Heap Configuration + Stack Size (in Bytes) <0x0-0xFFFFFFFF:8> + Heap Size (in Bytes) <0x0-0xFFFFFFFF:8> + + -----------------------------------------------------------------------------*/ +__STACK_SIZE = 0x00008000; +__HEAP_SIZE = 0x00008000; + +/*--------------------- Embedded RAM Configuration ---------------------------- + DDR Configuration + DDR Base Address <0x0-0xFFFFFFFF:8> + DDR Size (in Bytes) <0x0-0xFFFFFFFF:8> + + -----------------------------------------------------------------------------*/ +__DDR_BASE = 0x60000000; +__DDR_SIZE = 0x02000000; + +/* + *-------------------- <<< end of configuration section >>> ------------------- + */ + +MEMORY +{ + ITCM (rx) : ORIGIN = __ROM_BASE, LENGTH = __ROM_SIZE + DTCM (rwx) : ORIGIN = __RAM_BASE, LENGTH = __RAM_SIZE + DATA_SRAM (rwx) : ORIGIN = __DATA_SRAM_BASE, LENGTH = __DATA_SRAM_SIZE + SRAM (rwx) : ORIGIN = __SRAM_BASE, LENGTH = __SRAM_SIZE + DDR (rwx) : ORIGIN = __DDR_BASE, LENGTH = __DDR_SIZE +} + +/* Linker script to place sections and symbol values. Should be used together + * with other linker script that defines memory regions ITCM and RAM. 
+ * It references following symbols, which must be defined in code: + * Reset_Handler : Entry of reset handler + * + * It defines following symbols, which code can use without definition: + * __exidx_start + * __exidx_end + * __copy_table_start__ + * __copy_table_end__ + * __zero_table_start__ + * __zero_table_end__ + * __etext + * __data_start__ + * __preinit_array_start + * __preinit_array_end + * __init_array_start + * __init_array_end + * __fini_array_start + * __fini_array_end + * __data_end__ + * __bss_start__ + * __bss_end__ + * __end__ + * end + * __HeapLimit + * __StackLimit + * __StackTop + * __stack + */ +ENTRY(Reset_Handler) + +SECTIONS +{ + /* .ddr is placed before .text so that .rodata.tvm is encountered before .rodata* */ + .ddr : + { + . = ALIGN (16); + *(.rodata.tvm) + . = ALIGN (16); + *(.data.tvm); + . = ALIGN(16); + } > DDR + + .text : + { + KEEP(*(.vectors)) + *(.text*) + + KEEP(*(.init)) + KEEP(*(.fini)) + + /* .ctors */ + *crtbegin.o(.ctors) + *crtbegin?.o(.ctors) + *(EXCLUDE_FILE(*crtend?.o *crtend.o) .ctors) + *(SORT(.ctors.*)) + *(.ctors) + + /* .dtors */ + *crtbegin.o(.dtors) + *crtbegin?.o(.dtors) + *(EXCLUDE_FILE(*crtend?.o *crtend.o) .dtors) + *(SORT(.dtors.*)) + *(.dtors) + + *(.rodata*) + + KEEP(*(.eh_frame*)) + } > ITCM + + .ARM.extab : + { + *(.ARM.extab* .gnu.linkonce.armextab.*) + } > ITCM + + __exidx_start = .; + .ARM.exidx : + { + *(.ARM.exidx* .gnu.linkonce.armexidx.*) + } > ITCM + __exidx_end = .; + + .copy.table : + { + . = ALIGN(4); + __copy_table_start__ = .; + LONG (__etext) + LONG (__data_start__) + LONG (__data_end__ - __data_start__) + /* Add each additional data section here */ + __copy_table_end__ = .; + } > ITCM + + .zero.table : + { + . = ALIGN(4); + __zero_table_start__ = .; + __zero_table_end__ = .; + } > ITCM + + /** + * Location counter can end up 2byte aligned with narrow Thumb code but + * __etext is assumed by startup code to be the LMA of a section in DTCM + * which must be 4byte aligned + */ + __etext = ALIGN (4); + + .sram : + { + . = ALIGN(16); + } > SRAM AT > SRAM + + .data : AT (__etext) + { + __data_start__ = .; + *(vtable) + *(.data) + *(.data.*) + + . = ALIGN(4); + /* preinit data */ + PROVIDE_HIDDEN (__preinit_array_start = .); + KEEP(*(.preinit_array)) + PROVIDE_HIDDEN (__preinit_array_end = .); + + . = ALIGN(4); + /* init data */ + PROVIDE_HIDDEN (__init_array_start = .); + KEEP(*(SORT(.init_array.*))) + KEEP(*(.init_array)) + PROVIDE_HIDDEN (__init_array_end = .); + + + . = ALIGN(4); + /* finit data */ + PROVIDE_HIDDEN (__fini_array_start = .); + KEEP(*(SORT(.fini_array.*))) + KEEP(*(.fini_array)) + PROVIDE_HIDDEN (__fini_array_end = .); + + KEEP(*(.jcr*)) + . = ALIGN(4); + /* All data end */ + __data_end__ = .; + + } > DTCM + + .bss.NoInit : + { + . = ALIGN(16); + *(.bss.NoInit) + . = ALIGN(16); + } > DDR AT > DDR + + .bss : + { + . = ALIGN(4); + __bss_start__ = .; + *(.bss) + *(.bss.*) + *(COMMON) + . = ALIGN(4); + __bss_end__ = .; + } > DTCM AT > DTCM + + .data_sram : + { + . = ALIGN(16); + } > DATA_SRAM + + .heap (COPY) : + { + . = ALIGN(8); + __end__ = .; + PROVIDE(end = .); + . = . + __HEAP_SIZE; + . = ALIGN(8); + __HeapLimit = .; + } > DTCM + + .stack (ORIGIN(DTCM) + LENGTH(DTCM) - __STACK_SIZE) (COPY) : + { + . = ALIGN(8); + __StackLimit = .; + . = . + __STACK_SIZE; + . 
= ALIGN(8);
+    __StackTop = .;
+  } > DTCM
+  PROVIDE(__stack = __StackTop);
+
+  /* Check if data + stack exceeds DTCM limit */
+  ASSERT(__StackLimit >= __bss_end__, "region DTCM overflowed with stack")
+}
diff --git a/deploy/third_engine/demo_avh/include/crt_config.h b/deploy/third_engine/demo_avh/include/crt_config.h
new file mode 100644
index 0000000000000000000000000000000000000000..2fd0ead60697beb86d55dfadde5070b7ae5afd3e
--- /dev/null
+++ b/deploy/third_engine/demo_avh/include/crt_config.h
@@ -0,0 +1,26 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+#ifndef TVM_RUNTIME_CRT_CONFIG_H_
+#define TVM_RUNTIME_CRT_CONFIG_H_
+
+/*! Log level of the CRT runtime */
+#define TVM_CRT_LOG_LEVEL TVM_CRT_LOG_LEVEL_DEBUG
+
+#endif  // TVM_RUNTIME_CRT_CONFIG_H_
diff --git a/deploy/third_engine/demo_avh/include/tvm_runtime.h b/deploy/third_engine/demo_avh/include/tvm_runtime.h
new file mode 100644
index 0000000000000000000000000000000000000000..0978d7adfa039bf188aa8d17a43a7e61f1adecc6
--- /dev/null
+++ b/deploy/third_engine/demo_avh/include/tvm_runtime.h
@@ -0,0 +1,59 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+#include <stdarg.h>
+#include <stdio.h>
+#include <stdlib.h>
+#include <tvm/runtime/c_runtime_api.h>
+#include <tvm/runtime/crt/stack_allocator.h>
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+void __attribute__((noreturn)) TVMPlatformAbort(tvm_crt_error_t error_code) {
+  printf("TVMPlatformAbort: %d\n", error_code);
+  printf("EXITTHESIM\n");
+  exit(-1);
+}
+
+tvm_crt_error_t TVMPlatformMemoryAllocate(size_t num_bytes, DLDevice dev,
+                                          void **out_ptr) {
+  return kTvmErrorFunctionCallNotImplemented;
+}
+
+tvm_crt_error_t TVMPlatformMemoryFree(void *ptr, DLDevice dev) {
+  return kTvmErrorFunctionCallNotImplemented;
+}
+
+void TVMLogf(const char *msg, ...) {
+  va_list args;
+  va_start(args, msg);
+  vfprintf(stdout, msg, args);
+  va_end(args);
+}
+
+TVM_DLL int TVMFuncRegisterGlobal(const char *name, TVMFunctionHandle f,
+                                  int override) {
+  return 0;
+}
+
+#ifdef __cplusplus
+}
+#endif
diff --git a/deploy/third_engine/demo_avh/requirements.txt b/deploy/third_engine/demo_avh/requirements.txt
new file mode 100644
index 0000000000000000000000000000000000000000..992002efbe1e197bb33d07e54a2722950911313e
--- /dev/null
+++ b/deploy/third_engine/demo_avh/requirements.txt
@@ -0,0 +1,3 @@
+paddlepaddle
+numpy
+opencv-python
diff --git a/deploy/third_engine/demo_avh/run_demo.sh b/deploy/third_engine/demo_avh/run_demo.sh
new file mode 100755
index 0000000000000000000000000000000000000000..86607492629b251089f596b1411b0db0f52bb88c
--- /dev/null
+++ b/deploy/third_engine/demo_avh/run_demo.sh
@@ -0,0 +1,151 @@
+#!/bin/bash
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+export PATH=/opt/arm/FVP_Corstone_SSE-300/models/Linux64_GCC-6.4:/opt/arm/cmake/bin:$PATH
+set -e
+set -u
+set -o pipefail
+
+# Show usage
+function show_usage() {
+    cat <<EOF
+Usage: run_demo.sh
+-h, --help
+    Display this help message.
+--cmsis_path CMSIS_PATH
+    Set path to CMSIS.
+--ethosu_platform_path ETHOSU_PLATFORM_PATH
+    Set path to Arm(R) Ethos(TM)-U core platform.
+--fvp_path FVP_PATH
+    Set path to FVP.
+--cmake_path
+    Set path to cmake.
+EOF
+}
+
+# Parse arguments
+while (( $# )); do
+    case "$1" in
+        -h|--help)
+            show_usage
+            exit 0
+            ;;
+
+        --cmsis_path)
+            if [ $# -gt 1 ]
+            then
+                export CMSIS_PATH="$2"
+                shift 2
+            else
+                echo 'ERROR: --cmsis_path requires a non-empty argument' >&2
+                show_usage >&2
+                exit 1
+            fi
+            ;;
+
+        --ethosu_platform_path)
+            if [ $# -gt 1 ]
+            then
+                export ETHOSU_PLATFORM_PATH="$2"
+                shift 2
+            else
+                echo 'ERROR: --ethosu_platform_path requires a non-empty argument' >&2
+                show_usage >&2
+                exit 1
+            fi
+            ;;
+
+        --fvp_path)
+            if [ $# -gt 1 ]
+            then
+                export PATH="$2/models/Linux64_GCC-6.4:$PATH"
+                shift 2
+            else
+                echo 'ERROR: --fvp_path requires a non-empty argument' >&2
+                show_usage >&2
+                exit 1
+            fi
+            ;;
+
+        --cmake_path)
+            if [ $# -gt 1 ]
+            then
+                export CMAKE="$2"
+                shift 2
+            else
+                echo 'ERROR: --cmake_path requires a non-empty argument' >&2
+                show_usage >&2
+                exit 1
+            fi
+            ;;
+
+        -*|--*)
+            echo "Error: Unknown flag: $1" >&2
+            show_usage >&2
+            exit 1
+            ;;
+    esac
+done
+
+
+# Directories
+script_dir="$( cd "$( dirname "${BASH_SOURCE[0]}" )" &> /dev/null && pwd )"
+
+# Make build directory
+make cleanall
+mkdir -p build
+cd build
+
+# Compile model for Arm(R) Cortex(R)-M55 CPU and CMSIS-NN
+# An alternative to using "python3 -m tvm.driver.tvmc" is to call
+# "tvmc" directly once TVM has been pip installed.
+python3 -m tvm.driver.tvmc compile --target=cmsis-nn,c \
+    --target-cmsis-nn-mcpu=cortex-m55 \
+    --target-c-mcpu=cortex-m55 \
+    --runtime=crt \
+    --executor=aot \
+    --executor-aot-interface-api=c \
+    --executor-aot-unpacked-api=1 \
+    --pass-config tir.usmp.enable=1 \
+    --pass-config tir.usmp.algorithm=hill_climb \
+    --pass-config tir.disable_storage_rewrite=1 \
+    --pass-config tir.disable_vectorize=1 ../models/picodet_s_320_coco_lcnet_no_nms/model \
+    --output-format=mlf \
+    --model-format=paddle \
+    --module-name=picodet \
+    --input-shapes image:[1,3,320,320] \
+    --output=picodet.tar
+tar -xf picodet.tar
+
+
+# Create C header files
+cd ..
+python3 ./convert_image.py ../../demo/000000014439_640x640.jpg
+
+# Build demo executable
+echo "Build demo executable..."
+cd ${script_dir}
+echo ${script_dir}
+make
+echo "End build demo executable..."
+
+# Run demo executable on the FVP
+FVP_Corstone_SSE-300_Ethos-U55 -C cpu0.CFGDTCMSZ=15 \
+-C cpu0.CFGITCMSZ=15 -C mps3_board.uart0.out_file=\"-\" -C mps3_board.uart0.shutdown_tag=\"EXITTHESIM\" \
+-C mps3_board.visualisation.disable-visualisation=1 -C mps3_board.telnetterminal0.start_telnet=0 \
+-C mps3_board.telnetterminal1.start_telnet=0 -C mps3_board.telnetterminal2.start_telnet=0 -C mps3_board.telnetterminal5.start_telnet=0 \
+./build/demo
diff --git a/deploy/third_engine/demo_avh/src/demo_bare_metal.c b/deploy/third_engine/demo_avh/src/demo_bare_metal.c
new file mode 100644
index 0000000000000000000000000000000000000000..07ed5bebe2c266bde5b59b101c1df1a54ba2ef28
--- /dev/null
+++ b/deploy/third_engine/demo_avh/src/demo_bare_metal.c
@@ -0,0 +1,59 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+#include <stdio.h>
+#include <tvm_runtime.h>
+#include <tvmgen_picodet.h>
+
+#include "uart.h"
+
+// Header files generated by convert_image.py
+#include "inputs.h"
+#include "outputs.h"
+
+int main(int argc, char **argv) {
+  uart_init();
+  printf("Starting PicoDet inference:\n");
+  struct tvmgen_picodet_outputs rec_outputs = {
+      .output0 = output0, .output1 = output1,
+  };
+  struct tvmgen_picodet_inputs rec_inputs = {
+      .image = input,
+  };
+
+  tvmgen_picodet_run(&rec_inputs, &rec_outputs);
+
+  // post process: output0 holds [2125, 4] box coordinates and output1 holds
+  // [80, 2125] per-class scores for the 2125 candidate boxes
+  for (int i = 0; i < output0_len / 4; i++) {
+    float score = 0;
+    int32_t class = 0;
+    for (int j = 0; j < 80; j++) {
+      if (output1[i + j * 2125] > score) {
+        score = output1[i + j * 2125];
+        class = j;
+      }
+    }
+    if (score > 0.1 && output0[i * 4] > 0 && output0[i * 4 + 1] > 0) {
+      printf("box: %f, %f, %f, %f, class: %d, score: %f\n", output0[i * 4] * 2,
+             output0[i * 4 + 1] * 2, output0[i * 4 + 2] * 2,
+             output0[i * 4 + 3] * 2, class, score);
+    }
+  }
+  return 0;
+}
diff --git a/deploy/third_engine/demo_mnn/CMakeLists.txt b/deploy/third_engine/demo_mnn/CMakeLists.txt
index 07d9b7f868136ba09cf8f3468cce5b42895b8d95..9afa8cfc011587977b4ef3ed13bb0b050e990fa0 100644
--- a/deploy/third_engine/demo_mnn/CMakeLists.txt
+++ b/deploy/third_engine/demo_mnn/CMakeLists.txt
@@ -2,13 +2,14 @@ cmake_minimum_required(VERSION 3.9)
 project(picodet-mnn)
 
 set(CMAKE_CXX_STANDARD 17)
+set(MNN_DIR "./mnn")
 
 # find_package(OpenCV REQUIRED PATHS "/work/dependence/opencv/opencv-3.4.3/build")
 find_package(OpenCV REQUIRED)
 
 include_directories(
-    /path/to/MNN/include/MNN
-    /path/to/MNN/include
-    .
+    ${MNN_DIR}/include
+    ${MNN_DIR}/include/MNN
+    ${CMAKE_SOURCE_DIR}
 )
 
 link_directories(mnn/lib)
diff --git a/deploy/third_engine/demo_mnn/README.md b/deploy/third_engine/demo_mnn/README.md
index 78a0f3a79febce170e06058c6c5f6233d0a5c201..ac11a8e18fdc53aa7eebb57fa1ba2d4680a9dcf3 100644
--- a/deploy/third_engine/demo_mnn/README.md
+++ b/deploy/third_engine/demo_mnn/README.md
@@ -1,105 +1,89 @@
 # PicoDet MNN Demo
 
-This fold provides PicoDet inference code using
-[Alibaba's MNN framework](https://github.com/alibaba/MNN). Most of the implements in
-this fold are same as *demo_ncnn*.
+This demo provides inference code based on
+[Alibaba's MNN framework](https://github.com/alibaba/MNN).
 
-## Install MNN
+## C++ Demo
 
-### Python library
-
-Just run:
-
-``` shell
-pip install MNN
+- Step 1: Build the MNN inference library following the
+  [official MNN build guide](https://www.yuque.com/mnn/en/build_linux).
+- Step 2: Build or download the OpenCV library (see the OpenCV website). For
+  convenience, if your environment is gcc 8.2 on x86, you can directly
+  download the prebuilt package:
+```shell
+wget https://paddledet.bj.bcebos.com/data/opencv-3.4.16_gcc8.2_ffmpeg.tar.gz
+tar -xf opencv-3.4.16_gcc8.2_ffmpeg.tar.gz
 ```
-
-### C++ library
-
-Please follow the [official document](https://www.yuque.com/mnn/en/build_linux) to build MNN engine.
-- Create picodet_m_416_coco.onnx
+- Step 3: Prepare the model
 ```shell
-    modelName=picodet_m_416_coco
-    # export model
+    modelName=picodet_s_320_coco_lcnet
+    # Export the inference model
     python tools/export_model.py \
             -c configs/picodet/${modelName}.yml \
             -o weights=${modelName}.pdparams \
            --output_dir=inference_model
-    # convert to onnx
+    # Convert to ONNX
     paddle2onnx --model_dir inference_model/${modelName} \
            --model_filename model.pdmodel \
            --params_filename model.pdiparams \
            --opset_version 11 \
            --save_file ${modelName}.onnx
-    # onnxsim
+    # Simplify the model
    python -m onnxsim ${modelName}.onnx ${modelName}_processed.onnx
+    # Convert the model to MNN format
+    python -m MNN.tools.mnnconvert -f ONNX --modelFile picodet_s_320_lcnet_processed.onnx --MNNModel picodet_s_320_lcnet.mnn
 ```
+For a quick test, you can directly download
+[picodet_s_320_lcnet.mnn](https://paddledet.bj.bcebos.com/deploy/third_engine/picodet_s_320_lcnet.mnn)
+(without post-processing).
 
-- Convert model
-    ``` shell
-    python -m MNN.tools.mnnconvert -f ONNX --modelFile picodet-416.onnx --MNNModel picodet-416.mnn
-    ```
-Here are converted model [download link](https://paddledet.bj.bcebos.com/deploy/third_engine/picodet_m_416.mnn).
+**Note:** MNN currently computes incorrect results when the input shapes of
+the MatMul operator do not match, so the demo with post-processing is being
+upgraded and will be released soon.
 
-## Build
-
-The python code *demo_mnn.py* can run directly and independently without main PicoDet repo.
-`PicoDetONNX` and `PicoDetTorch` are two classes used to check the similarity of MNN inference results
-with ONNX model and Pytorch model. They can be remove with no side effects.
-
-For C++ code, replace `libMNN.so` under *./mnn/lib* with the one you just compiled, modify OpenCV path and MNN path at CMake file,
-and run
+## Build the executable
 
+- Step 1: Import the library files
+```
+mkdir mnn && cd mnn && mkdir lib
+cp /path/to/MNN/build/libMNN.so .
+cd ..
+cp -r /path/to/MNN/include .
+```
+- Step 2: Modify the OpenCV and MNN paths in CMakeLists.txt
+- Step 3: Build
 ``` shell
 mkdir build && cd build
 cmake ..
 make
 ```
+If the `picodet-mnn` executable is generated in the build directory, the
+build succeeded.
 
-Note that a flag at `main.cpp` is used to control whether to show the detection result or save it into a fold.
-
-``` c++
-#define __SAVE_RESULT__ // if defined save drawed results to ../results, else show it in windows
-```
-
-## Run
-
-### Python
-
-`demo_mnn.py` provide an inference class `PicoDetMNN` that combines preprocess, post process, visualization.
-Besides it can be used in command line with the form:
+## Run the demo
+First create a directory for the prediction results:
 
 ```shell
-demo_mnn.py [-h] [--model_path MODEL_PATH] [--cfg_path CFG_PATH]
-            [--img_fold IMG_FOLD] [--result_fold RESULT_FOLD]
-            [--input_shape INPUT_SHAPE INPUT_SHAPE]
-            [--backend {MNN,ONNX,torch}]
+cp -r ../demo_onnxruntime/imgs .
+cd build
+mkdir ../results
 ```
-For example:
-
+- Run inference on an image
 ``` shell
-# run MNN 416 model
-python ./demo_mnn.py --model_path ../model/picodet-416.mnn --img_fold ../imgs --result_fold ../results
-# run MNN 320 model
-python ./demo_mnn.py --model_path ../model/picodet-320.mnn --input_shape 320 320 --backend MNN
-# run onnx model
-python ./demo_mnn.py --model_path ../model/sim.onnx --backend ONNX
+./picodet-mnn 0 ../picodet_s_320_lcnet.mnn 320 320 ../imgs/dog.jpg
 ```
 
-### C++
-
-C++ inference interface is same with NCNN code, to detect images in a fold, run:
+- Speed benchmark
 
 ``` shell
-./picodet-mnn "1" "../imgs/test.jpg"
+./picodet-mnn 1 ../picodet_s_320_lcnet.mnn 320 320
 ```
 
-For speed benchmark
+## FAQ
 
-``` shell
-./picodet-mnn "3" "0"
+- The prediction results are inaccurate:
+First check that the model input shape matches, and that the model output
+names match. For the enhanced PicoDet model without post-processing, the
+output names are:
+```shell
+# classification branch | detection branch
+{"transpose_0.tmp_0", "transpose_1.tmp_0"},
+{"transpose_2.tmp_0", "transpose_3.tmp_0"},
+{"transpose_4.tmp_0", "transpose_5.tmp_0"},
+{"transpose_6.tmp_0", "transpose_7.tmp_0"},
 ```
+You can inspect the exact names with [netron](https://netron.app) and modify
+the corresponding `non_postprocess_heads_info` array in `picodet_mnn.hpp`.
 
 ## Reference
 [MNN](https://github.com/alibaba/MNN)
diff --git a/deploy/third_engine/demo_mnn/main.cpp b/deploy/third_engine/demo_mnn/main.cpp
index 52c977343b55b8ff4c7be305729ebe23c80a565e..5737368d5473a75ced391ad2e28883427a942795 100644
--- a/deploy/third_engine/demo_mnn/main.cpp
+++ b/deploy/third_engine/demo_mnn/main.cpp
@@ -11,7 +11,6 @@
 // WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 // See the License for the specific language governing permissions and
 // limitations under the License.
-// reference from https://github.com/RangiLyu/nanodet/tree/main/demo_mnn
 
 #include "picodet_mnn.hpp"
 #include <iostream>
@@ -19,354 +18,186 @@
 #include <opencv2/highgui/highgui.hpp>
 #include <opencv2/imgproc/imgproc.hpp>
 
-#define __SAVE_RESULT__ // if defined save drawed results to ../results, else show it in windows
+#define __SAVE_RESULT__ // if defined, save drawn results to ../results; else
+                        // show them in windows
 
 struct object_rect {
-    int x;
-    int y;
-    int width;
-    int height;
+  int x;
+  int y;
+  int width;
+  int height;
 };
 
-int resize_uniform(cv::Mat& src, cv::Mat& dst, cv::Size dst_size, object_rect& effect_area)
-{
-    int w = src.cols;
-    int h = src.rows;
-    int dst_w = dst_size.width;
-    int dst_h = dst_size.height;
-    dst = cv::Mat(cv::Size(dst_w, dst_h), CV_8UC3, cv::Scalar(0));
-
-    float ratio_src = w * 1.0 / h;
-    float ratio_dst = dst_w * 1.0 / dst_h;
-
-    int tmp_w = 0;
-    int tmp_h = 0;
-    if (ratio_src > ratio_dst) {
-        tmp_w = dst_w;
-        tmp_h = floor((dst_w * 1.0 / w) * h);
-    }
-    else if (ratio_src < ratio_dst) {
-        tmp_h = dst_h;
-        tmp_w = floor((dst_h * 1.0 / h) * w);
-    }
-    else {
-        cv::resize(src, dst, dst_size);
-        effect_area.x = 0;
-        effect_area.y = 0;
-        effect_area.width = dst_w;
-        effect_area.height = dst_h;
-        return 0;
-    }
-    cv::Mat tmp;
-    cv::resize(src, tmp, cv::Size(tmp_w, tmp_h));
-
-    if (tmp_w != dst_w) {
-        int index_w = floor((dst_w - tmp_w) / 2.0);
-        for (int i = 0; i < dst_h; i++) {
-            memcpy(dst.data + i * dst_w * 3 + index_w * 3, tmp.data + i * tmp_w * 3, tmp_w * 3);
-        }
-        effect_area.x = index_w;
-        effect_area.y = 0;
-        effect_area.width = tmp_w;
-        effect_area.height = tmp_h;
-    }
-    else if (tmp_h != dst_h) {
-        int index_h = floor((dst_h - tmp_h) / 2.0);
-        memcpy(dst.data + index_h * dst_w * 3, tmp.data, tmp_w * tmp_h * 3);
-        effect_area.x = 0;
-        effect_area.y = index_h;
-        effect_area.width = tmp_w;
-        effect_area.height = tmp_h;
-    }
-    else {
-        printf("error\n");
+std::vector<int> GenerateColorMap(int num_class) {
+  auto colormap = std::vector<int>(3 * num_class, 0);
+  for (int i = 0; i < num_class; ++i) {
+    int j = 0;
+    int lab = i;
+    while (lab) {
+      colormap[i * 3] |= (((lab >> 0) & 1) << (7 - j));
+      colormap[i * 3 + 1] |= (((lab >> 1) & 1) << (7 - j));
+      colormap[i * 3 + 2] |= (((lab >> 2) & 1) << (7 - j));
+      ++j;
+      lab >>= 3;
     }
-    return 0;
+  }
+  return colormap;
 }
 
-const int color_list[80][3] =
-{
-    {216 , 82 , 24},
-    {236 ,176 , 31},
-    {125 , 46 ,141},
-    {118 ,171 , 47},
-    { 76 ,189 ,237},
-    {238 , 19 , 46},
-    { 76 , 76 , 76},
-    {153 ,153 ,153},
-    {255 , 0 , 0},
-    {255 ,127 , 0},
-    {190 ,190 , 0},
-    { 0 ,255 , 0},
-    { 0 , 0 ,255},
-    {170 , 0 ,255},
-    { 84 , 84 , 0},
-    { 84 ,170 , 0},
-    { 84 ,255 , 0},
-    {170 , 84 , 0},
-    {170 ,170 , 0},
-    {170 ,255 , 0},
-    {255 , 84 , 0},
-    {255 ,170 , 0},
-    {255 ,255 , 0},
-    { 0 , 84 ,127},
-    { 0 ,170 ,127},
-    { 0 ,255 ,127},
-    { 84 , 0 ,127},
-    { 84 , 84 ,127},
-    { 84 ,170 ,127},
-    { 84 ,255 ,127},
-    {170 , 0 ,127},
-    {170 , 84 ,127},
-    {170 ,170 ,127},
-    {170 ,255 ,127},
-    {255 , 0 ,127},
-    {255 , 84 ,127},
-    {255 ,170 ,127},
-    {255 ,255 ,127},
-    { 0 , 84 ,255},
-    { 0 ,170 ,255},
-    { 0 ,255 ,255},
-    { 84 , 0 ,255},
-    { 84 , 84 ,255},
-    { 84 ,170 ,255},
-    { 84 ,255 ,255},
-    {170 , 0 ,255},
-    {170 , 84 ,255},
-    {170 ,170 ,255},
-    {170 ,255 ,255},
-    {255 , 0 ,255},
-    {255 , 84 ,255},
-    {255 ,170 ,255},
-    { 42 , 0 , 0},
-    { 84 , 0 , 0},
-    {127 , 0 , 0},
-    {170 , 0 , 0},
-    {212 , 0 , 0},
-    {255 , 0 , 0},
-    { 0 , 42 , 0},
-    { 0 , 84 , 0},
-    { 0 ,127 , 0},
-    { 0 ,170 , 0},
-    { 0 ,212 , 0},
-    { 0 ,255 , 0},
-    { 0 , 0 , 42},
-    { 0 , 0 , 84},
-    { 0 , 0 ,127},
-    { 0 , 0 ,170},
-    { 0 , 0 ,212},
-    { 0 , 0 ,255},
-    { 0 
, 0 , 0}, - { 36 , 36 , 36}, - { 72 , 72 , 72}, - {109 ,109 ,109}, - {145 ,145 ,145}, - {182 ,182 ,182}, - {218 ,218 ,218}, - { 0 ,113 ,188}, - { 80 ,182 ,188}, - {127 ,127 , 0}, -}; - -void draw_bboxes(const cv::Mat& bgr, const std::vector& bboxes, object_rect effect_roi, std::string save_path="None") -{ - static const char* class_names[] = { "person", "bicycle", "car", "motorcycle", "airplane", "bus", - "train", "truck", "boat", "traffic light", "fire hydrant", - "stop sign", "parking meter", "bench", "bird", "cat", "dog", - "horse", "sheep", "cow", "elephant", "bear", "zebra", "giraffe", - "backpack", "umbrella", "handbag", "tie", "suitcase", "frisbee", - "skis", "snowboard", "sports ball", "kite", "baseball bat", - "baseball glove", "skateboard", "surfboard", "tennis racket", - "bottle", "wine glass", "cup", "fork", "knife", "spoon", "bowl", - "banana", "apple", "sandwich", "orange", "broccoli", "carrot", - "hot dog", "pizza", "donut", "cake", "chair", "couch", - "potted plant", "bed", "dining table", "toilet", "tv", "laptop", - "mouse", "remote", "keyboard", "cell phone", "microwave", "oven", - "toaster", "sink", "refrigerator", "book", "clock", "vase", - "scissors", "teddy bear", "hair drier", "toothbrush" - }; - - cv::Mat image = bgr.clone(); - int src_w = image.cols; - int src_h = image.rows; - int dst_w = effect_roi.width; - int dst_h = effect_roi.height; - float width_ratio = (float)src_w / (float)dst_w; - float height_ratio = (float)src_h / (float)dst_h; - - - for (size_t i = 0; i < bboxes.size(); i++) - { - const BoxInfo& bbox = bboxes[i]; - cv::Scalar color = cv::Scalar(color_list[bbox.label][0], color_list[bbox.label][1], color_list[bbox.label][2]); - cv::rectangle(image, cv::Rect(cv::Point((bbox.x1 - effect_roi.x) * width_ratio, (bbox.y1 - effect_roi.y) * height_ratio), - cv::Point((bbox.x2 - effect_roi.x) * width_ratio, (bbox.y2 - effect_roi.y) * height_ratio)), color); - - char text[256]; - sprintf(text, "%s %.1f%%", class_names[bbox.label], bbox.score * 100); - - int baseLine = 0; - cv::Size label_size = cv::getTextSize(text, cv::FONT_HERSHEY_SIMPLEX, 0.4, 1, &baseLine); - - int x = (bbox.x1 - effect_roi.x) * width_ratio; - int y = (bbox.y1 - effect_roi.y) * height_ratio - label_size.height - baseLine; - if (y < 0) - y = 0; - if (x + label_size.width > image.cols) - x = image.cols - label_size.width; - - cv::rectangle(image, cv::Rect(cv::Point(x, y), cv::Size(label_size.width, label_size.height + baseLine)), - color, -1); - - cv::putText(image, text, cv::Point(x, y + label_size.height), - cv::FONT_HERSHEY_SIMPLEX, 0.4, cv::Scalar(255, 255, 255)); - } - - if (save_path == "None") - { - cv::imshow("image", image); - } - else - { - cv::imwrite(save_path, image); - std::cout << save_path << std::endl; - } -} - - -int image_demo(PicoDet &detector, const char* imagepath) -{ - std::vector filenames; - cv::glob(imagepath, filenames, false); - - for (auto img_name : filenames) - { - cv::Mat image = cv::imread(img_name); - if (image.empty()) - { - fprintf(stderr, "cv::imread %s failed\n", img_name.c_str()); - return -1; - } - object_rect effect_roi; - cv::Mat resized_img; - resize_uniform(image, resized_img, cv::Size(320, 320), effect_roi); - std::vector results; - detector.detect(resized_img, results); - - #ifdef __SAVE_RESULT__ - std::string save_path = img_name; - draw_bboxes(image, results, effect_roi, save_path.replace(3, 4, "results")); - #else - draw_bboxes(image, results, effect_roi); - cv::waitKey(0); - #endif - - } - return 0; +void draw_bboxes(const cv::Mat &im, const 
std::vector<BoxInfo> &bboxes,
+                 std::string save_path = "None") {
+  static const char *class_names[] = {
+      "person",        "bicycle",      "car",
+      "motorcycle",    "airplane",     "bus",
+      "train",         "truck",        "boat",
+      "traffic light", "fire hydrant", "stop sign",
+      "parking meter", "bench",        "bird",
+      "cat",           "dog",          "horse",
+      "sheep",         "cow",          "elephant",
+      "bear",          "zebra",        "giraffe",
+      "backpack",      "umbrella",     "handbag",
+      "tie",           "suitcase",     "frisbee",
+      "skis",          "snowboard",    "sports ball",
+      "kite",          "baseball bat", "baseball glove",
+      "skateboard",    "surfboard",    "tennis racket",
+      "bottle",        "wine glass",   "cup",
+      "fork",          "knife",        "spoon",
+      "bowl",          "banana",       "apple",
+      "sandwich",      "orange",       "broccoli",
+      "carrot",        "hot dog",      "pizza",
+      "donut",         "cake",         "chair",
+      "couch",         "potted plant", "bed",
+      "dining table",  "toilet",       "tv",
+      "laptop",        "mouse",        "remote",
+      "keyboard",      "cell phone",   "microwave",
+      "oven",          "toaster",      "sink",
+      "refrigerator",  "book",         "clock",
+      "vase",          "scissors",     "teddy bear",
+      "hair drier",    "toothbrush"};
+
+  cv::Mat image = im.clone();
+  int src_w = image.cols;
+  int src_h = image.rows;
+  int thickness = 2;
+  // sizeof(class_names) is the byte size of the pointer array; divide by the
+  // element size to get the number of classes.
+  auto colormap =
+      GenerateColorMap(sizeof(class_names) / sizeof(class_names[0]));
+
+  for (size_t i = 0; i < bboxes.size(); i++) {
+    const BoxInfo &bbox = bboxes[i];
+    std::cout << bbox.x1 << ", " << bbox.y1 << ", " << bbox.x2 << ", "
+              << bbox.y2 << std::endl;
+    int c1 = colormap[3 * bbox.label + 0];
+    int c2 = colormap[3 * bbox.label + 1];
+    int c3 = colormap[3 * bbox.label + 2];
+    cv::Scalar color = cv::Scalar(c1, c2, c3);
+    // cv::Scalar color = cv::Scalar(0, 0, 255);
+    cv::rectangle(image, cv::Rect(cv::Point(bbox.x1, bbox.y1),
+                                  cv::Point(bbox.x2, bbox.y2)),
+                  color, 1, cv::LINE_AA);
+
+    char text[256];
+    sprintf(text, "%s %.1f%%", class_names[bbox.label], bbox.score * 100);
+
+    int baseLine = 0;
+    cv::Size label_size =
+        cv::getTextSize(text, cv::FONT_HERSHEY_SIMPLEX, 0.4, 1, &baseLine);
+
+    int x = bbox.x1;
+    int y = bbox.y1 - label_size.height - baseLine;
+    if (y < 0)
+      y = 0;
+    if (x + label_size.width > image.cols)
+      x = image.cols - label_size.width;
+
+    cv::rectangle(image, cv::Rect(cv::Point(x, y),
+                                  cv::Size(label_size.width,
+                                           label_size.height + baseLine)),
+                  color, -1);
+
+    cv::putText(image, text, cv::Point(x, y + label_size.height),
+                cv::FONT_HERSHEY_SIMPLEX, 0.4, cv::Scalar(255, 255, 255), 1,
+                cv::LINE_AA);
+  }
+
+  if (save_path == "None") {
+    cv::imshow("image", image);
+  } else {
+    cv::imwrite(save_path, image);
+    std::cout << save_path << std::endl;
+  }
+}
 
-int webcam_demo(PicoDet& detector, int cam_id)
-{
-    cv::Mat image;
-    cv::VideoCapture cap(cam_id);
+int image_demo(PicoDet &detector, const char *imagepath) {
+  std::vector<cv::String> filenames;
+  cv::glob(imagepath, filenames, false);
 
-    while (true)
-    {
-        cap >> image;
-        object_rect effect_roi;
-        cv::Mat resized_img;
-        resize_uniform(image, resized_img, cv::Size(320, 320), effect_roi);
-        std::vector<BoxInfo> results;
-        detector.detect(resized_img, results);
-        draw_bboxes(image, results, effect_roi);
-        cv::waitKey(1);
+  for (auto img_name : filenames) {
+    cv::Mat image = cv::imread(img_name, cv::IMREAD_COLOR);
+    if (image.empty()) {
+      fprintf(stderr, "cv::imread %s failed\n", img_name.c_str());
+      return -1;
     }
-    return 0;
+    std::vector<BoxInfo> results;
+    detector.detect(image, results, false);
+    std::cout << "detect done." << std::endl;
<< std::endl; + +#ifdef __SAVE_RESULT__ + std::string save_path = img_name; + draw_bboxes(image, results, save_path.replace(3, 4, "results")); +#else + draw_bboxes(image, results); + cv::waitKey(0); +#endif + } + return 0; } -int video_demo(PicoDet& detector, const char* path) -{ - cv::Mat image; - cv::VideoCapture cap(path); - - while (true) - { - cap >> image; - object_rect effect_roi; - cv::Mat resized_img; - resize_uniform(image, resized_img, cv::Size(320, 320), effect_roi); - std::vector results; - detector.detect(resized_img, results); - draw_bboxes(image, results, effect_roi); - cv::waitKey(1); +int benchmark(PicoDet &detector, int width, int height) { + int loop_num = 100; + int warm_up = 8; + + double time_min = DBL_MAX; + double time_max = -DBL_MAX; + double time_avg = 0; + cv::Mat image(width, height, CV_8UC3, cv::Scalar(1, 1, 1)); + for (int i = 0; i < warm_up + loop_num; i++) { + auto start = std::chrono::steady_clock::now(); + std::vector results; + detector.detect(image, results, false); + auto end = std::chrono::steady_clock::now(); + + std::chrono::duration elapsed = end - start; + double time = elapsed.count(); + if (i >= warm_up) { + time_min = (std::min)(time_min, time); + time_max = (std::max)(time_max, time); + time_avg += time; } - return 0; + } + time_avg /= loop_num; + fprintf(stderr, "%20s min = %7.2f max = %7.2f avg = %7.2f\n", "picodet", + time_min, time_max, time_avg); + return 0; } -int benchmark(PicoDet& detector) -{ - int loop_num = 100; - int warm_up = 8; - - double time_min = DBL_MAX; - double time_max = -DBL_MAX; - double time_avg = 0; - cv::Mat image(320, 320, CV_8UC3, cv::Scalar(1, 1, 1)); - for (int i = 0; i < warm_up + loop_num; i++) - { - auto start = std::chrono::steady_clock::now(); - std::vector results; - detector.detect(image, results); - auto end = std::chrono::steady_clock::now(); - - std::chrono::duration elapsed = end - start; - double time = elapsed.count(); - if (i >= warm_up) - { - time_min = (std::min)(time_min, time); - time_max = (std::max)(time_max, time); - time_avg += time; - } - } - time_avg /= loop_num; - fprintf(stderr, "%20s min = %7.2f max = %7.2f avg = %7.2f\n", "picodet", time_min, time_max, time_avg); - return 0; -} - - -int main(int argc, char** argv) -{ - if (argc != 3) - { - fprintf(stderr, "usage: %s [mode] [path]. \n For webcam mode=0, path is cam id; \n For image demo, mode=1, path=xxx/xxx/*.jpg; \n For video, mode=2; \n For benchmark, mode=3 path=0.\n", argv[0]); - return -1; - } - PicoDet detector = PicoDet("../weight/picodet-416.mnn", 416, 416, 4, 0.45, 0.3); - int mode = atoi(argv[1]); - switch (mode) - { - case 0:{ - int cam_id = atoi(argv[2]); - webcam_demo(detector, cam_id); - break; - } - case 1:{ - const char* images = argv[2]; - image_demo(detector, images); - break; - } - case 2:{ - const char* path = argv[2]; - video_demo(detector, path); - break; - } - case 3:{ - benchmark(detector); - break; - } - default:{ - fprintf(stderr, "usage: %s [mode] [path]. 
-            break;
-        }
-    }
+int main(int argc, char **argv) {
+  if (argc < 3) {
+    std::cout << "Usage: ./picodet-mnn [mode] [model_path] [height] [width] "
+                 "(image_file)"
+              << std::endl;
+    return -1;
+  }
+  int mode = atoi(argv[1]);
+  std::string model_path = argv[2];
+  int height = 320;
+  int width = 320;
+  // argv[3]/argv[4] are only present when a size is given on the command line
+  if (argc >= 5) {
+    height = atoi(argv[3]);
+    width = atoi(argv[4]);
+  }
+  PicoDet detector = PicoDet(model_path, width, height, 4, 0.45, 0.3);
+  if (mode == 1) {
+    benchmark(detector, width, height);
+  } else {
+    if (argc != 6) {
+      std::cout << "Must set image file, such as ./picodet-mnn 0 "
+                   "../picodet_s_320_lcnet.mnn 320 320 img.jpg"
+                << std::endl;
+      return -1;
+    }
+    const char *images = argv[5];
+    image_demo(detector, images);
+  }
 }
diff --git a/deploy/third_engine/demo_mnn/picodet_mnn.cpp b/deploy/third_engine/demo_mnn/picodet_mnn.cpp
index d6cb9c9fd3f45eb2ef792819376001041c71e3df..a315f14a9e29f0958a2707a4e09fcdb78bd12b6c 100644
--- a/deploy/third_engine/demo_mnn/picodet_mnn.cpp
+++ b/deploy/third_engine/demo_mnn/picodet_mnn.cpp
@@ -44,7 +44,8 @@ PicoDet::~PicoDet() {
   PicoDet_interpreter->releaseSession(PicoDet_session);
 }
 
-int PicoDet::detect(cv::Mat &raw_image, std::vector<BoxInfo> &result_list) {
+int PicoDet::detect(cv::Mat &raw_image, std::vector<BoxInfo> &result_list,
+                    bool has_postprocess) {
   if (raw_image.empty()) {
     std::cout << "image is empty, please check!" << std::endl;
     return -1;
@@ -70,22 +71,57 @@ int PicoDet::detect(cv::Mat &raw_image, std::vector<BoxInfo> &result_list) {
 
   std::vector<std::vector<BoxInfo>> results;
   results.resize(num_class);
 
-  for (const auto &head_info : heads_info) {
-    MNN::Tensor *tensor_scores = PicoDet_interpreter->getSessionOutput(
-        PicoDet_session, head_info.cls_layer.c_str());
-    MNN::Tensor *tensor_boxes = PicoDet_interpreter->getSessionOutput(
-        PicoDet_session, head_info.dis_layer.c_str());
-
-    MNN::Tensor tensor_scores_host(tensor_scores,
-                                   tensor_scores->getDimensionType());
-    tensor_scores->copyToHostTensor(&tensor_scores_host);
-
-    MNN::Tensor tensor_boxes_host(tensor_boxes,
-                                  tensor_boxes->getDimensionType());
-    tensor_boxes->copyToHostTensor(&tensor_boxes_host);
-
-    decode_infer(&tensor_scores_host, &tensor_boxes_host, head_info.stride,
-                 score_threshold, results);
+  if (has_postprocess) {
+    auto bbox_out_tensor = PicoDet_interpreter->getSessionOutput(
+        PicoDet_session, nms_heads_info[0].c_str());
+    auto class_out_tensor = PicoDet_interpreter->getSessionOutput(
+        PicoDet_session, nms_heads_info[1].c_str());
+    // bbox branch
+    auto tensor_bbox_host =
+        new MNN::Tensor(bbox_out_tensor, MNN::Tensor::CAFFE);
+    bbox_out_tensor->copyToHostTensor(tensor_bbox_host);
+    auto bbox_output_shape = tensor_bbox_host->shape();
+    int output_size = 1;
+    for (int j = 0; j < bbox_output_shape.size(); ++j) {
+      output_size *= bbox_output_shape[j];
+    }
+    std::cout << "output_size:" << output_size << std::endl;
+    bbox_output_data_.resize(output_size);
+    std::copy_n(tensor_bbox_host->host<float>(), output_size,
+                bbox_output_data_.data());
+    delete tensor_bbox_host;
+    // class branch
+    auto tensor_class_host =
+        new MNN::Tensor(class_out_tensor, MNN::Tensor::CAFFE);
+    class_out_tensor->copyToHostTensor(tensor_class_host);
+    auto class_output_shape = tensor_class_host->shape();
+    output_size = 1;
+    for (int j = 0; j < class_output_shape.size(); ++j) {
+      output_size *= class_output_shape[j];
+    }
+    std::cout << "output_size:" << output_size << std::endl;
+    class_output_data_.resize(output_size);
+    std::copy_n(tensor_class_host->host<float>(), output_size,
+                class_output_data_.data());
+    delete tensor_class_host;
+  } else {
+    for (const 
diff --git a/deploy/third_engine/demo_mnn/picodet_mnn.cpp b/deploy/third_engine/demo_mnn/picodet_mnn.cpp
index d6cb9c9fd3f45eb2ef792819376001041c71e3df..a315f14a9e29f0958a2707a4e09fcdb78bd12b6c 100644
--- a/deploy/third_engine/demo_mnn/picodet_mnn.cpp
+++ b/deploy/third_engine/demo_mnn/picodet_mnn.cpp
@@ -44,7 +44,8 @@ PicoDet::~PicoDet() {
   PicoDet_interpreter->releaseSession(PicoDet_session);
 }
 
-int PicoDet::detect(cv::Mat &raw_image, std::vector<BoxInfo> &result_list) {
+int PicoDet::detect(cv::Mat &raw_image, std::vector<BoxInfo> &result_list,
+                    bool has_postprocess) {
   if (raw_image.empty()) {
     std::cout << "image is empty ,please check!" << std::endl;
     return -1;
@@ -70,22 +71,57 @@
   std::vector<std::vector<BoxInfo>> results;
   results.resize(num_class);
 
-  for (const auto &head_info : heads_info) {
-    MNN::Tensor *tensor_scores = PicoDet_interpreter->getSessionOutput(
-        PicoDet_session, head_info.cls_layer.c_str());
-    MNN::Tensor *tensor_boxes = PicoDet_interpreter->getSessionOutput(
-        PicoDet_session, head_info.dis_layer.c_str());
-
-    MNN::Tensor tensor_scores_host(tensor_scores,
-                                   tensor_scores->getDimensionType());
-    tensor_scores->copyToHostTensor(&tensor_scores_host);
-
-    MNN::Tensor tensor_boxes_host(tensor_boxes,
-                                  tensor_boxes->getDimensionType());
-    tensor_boxes->copyToHostTensor(&tensor_boxes_host);
-
-    decode_infer(&tensor_scores_host, &tensor_boxes_host, head_info.stride,
-                 score_threshold, results);
+  if (has_postprocess) {
+    auto bbox_out_tensor = PicoDet_interpreter->getSessionOutput(
+        PicoDet_session, nms_heads_info[0].c_str());
+    auto class_out_tensor = PicoDet_interpreter->getSessionOutput(
+        PicoDet_session, nms_heads_info[1].c_str());
+    // bbox branch
+    auto tensor_bbox_host =
+        new MNN::Tensor(bbox_out_tensor, MNN::Tensor::CAFFE);
+    bbox_out_tensor->copyToHostTensor(tensor_bbox_host);
+    auto bbox_output_shape = tensor_bbox_host->shape();
+    int output_size = 1;
+    for (int j = 0; j < bbox_output_shape.size(); ++j) {
+      output_size *= bbox_output_shape[j];
+    }
+    std::cout << "output_size:" << output_size << std::endl;
+    bbox_output_data_.resize(output_size);
+    std::copy_n(tensor_bbox_host->host<float>(), output_size,
+                bbox_output_data_.data());
+    delete tensor_bbox_host;
+    // class branch
+    auto tensor_class_host =
+        new MNN::Tensor(class_out_tensor, MNN::Tensor::CAFFE);
+    class_out_tensor->copyToHostTensor(tensor_class_host);
+    auto class_output_shape = tensor_class_host->shape();
+    output_size = 1;
+    for (int j = 0; j < class_output_shape.size(); ++j) {
+      output_size *= class_output_shape[j];
+    }
+    std::cout << "output_size:" << output_size << std::endl;
+    class_output_data_.resize(output_size);
+    std::copy_n(tensor_class_host->host<float>(), output_size,
+                class_output_data_.data());
+    delete tensor_class_host;
+  } else {
+    for (const auto &head_info : non_postprocess_heads_info) {
+      MNN::Tensor *tensor_scores = PicoDet_interpreter->getSessionOutput(
+          PicoDet_session, head_info.cls_layer.c_str());
+      MNN::Tensor *tensor_boxes = PicoDet_interpreter->getSessionOutput(
+          PicoDet_session, head_info.dis_layer.c_str());
+
+      MNN::Tensor tensor_scores_host(tensor_scores,
+                                     tensor_scores->getDimensionType());
+      tensor_scores->copyToHostTensor(&tensor_scores_host);
+
+      MNN::Tensor tensor_boxes_host(tensor_boxes,
+                                    tensor_boxes->getDimensionType());
+      tensor_boxes->copyToHostTensor(&tensor_boxes_host);
+
+      decode_infer(&tensor_scores_host, &tensor_boxes_host, head_info.stride,
+                   score_threshold, results);
+    }
   }
 
   auto end = chrono::steady_clock::now();
@@ -188,8 +224,6 @@ void PicoDet::nms(std::vector<BoxInfo> &input_boxes, float NMS_THRESH) {
   }
 }
 
-string PicoDet::get_label_str(int label) { return labels[label]; }
-
 inline float fast_exp(float x) {
   union {
     uint32_t i;
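With `get_label_str()` removed, callers map `BoxInfo::label` to a class name themselves. A hedged usage sketch (the name table is the caller's own, truncated here):

```cpp
// Sketch: report detections once detect() has filled result_list.
static const char *kClassNames[] = {"person", "bicycle", "car" /* ... 80 COCO names */};
for (const BoxInfo &box : result_list) {
  printf("%s %.2f [%.1f, %.1f, %.1f, %.1f]\n", kClassNames[box.label],
         box.score, box.x1, box.y1, box.x2, box.y2);
}
```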
diff --git a/deploy/third_engine/demo_mnn/picodet_mnn.hpp b/deploy/third_engine/demo_mnn/picodet_mnn.hpp
index ecece8b17e30a3e4ab76f939a7bc0087ef2bdfa0..4744040e258498afd70ee587ffc0ae0b39d24faa 100644
--- a/deploy/third_engine/demo_mnn/picodet_mnn.hpp
+++ b/deploy/third_engine/demo_mnn/picodet_mnn.hpp
@@ -11,7 +11,6 @@
 // WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 // See the License for the specific language governing permissions and
 // limitations under the License.
-// reference from https://github.com/RangiLyu/nanodet/tree/main/demo_mnn
 
 #ifndef __PicoDet_H__
 #define __PicoDet_H__
@@ -20,90 +19,84 @@
 #include "Interpreter.hpp"
 
+#include "ImageProcess.hpp"
 #include "MNNDefine.h"
 #include "Tensor.hpp"
-#include "ImageProcess.hpp"
-#include <opencv2/opencv.hpp>
 #include <algorithm>
+#include <chrono>
 #include <iostream>
+#include <memory>
+#include <opencv2/opencv.hpp>
 #include <string>
 #include <vector>
-#include <memory>
-#include <chrono>
-
-typedef struct HeadInfo_
-{
-    std::string cls_layer;
-    std::string dis_layer;
-    int stride;
-} HeadInfo;
-
-typedef struct BoxInfo_
-{
-    float x1;
-    float y1;
-    float x2;
-    float y2;
-    float score;
-    int label;
+typedef struct NonPostProcessHeadInfo_ {
+  std::string cls_layer;
+  std::string dis_layer;
+  int stride;
+} NonPostProcessHeadInfo;
+
+typedef struct BoxInfo_ {
+  float x1;
+  float y1;
+  float x2;
+  float y2;
+  float score;
+  int label;
 } BoxInfo;
 
 class PicoDet {
 public:
-    PicoDet(const std::string &mnn_path,
-            int input_width, int input_length, int num_thread_ = 4, float score_threshold_ = 0.5, float nms_threshold_ = 0.3);
+  PicoDet(const std::string &mnn_path, int input_width, int input_length,
+          int num_thread_ = 4, float score_threshold_ = 0.5,
+          float nms_threshold_ = 0.3);
 
-    ~PicoDet();
+  ~PicoDet();
 
-    int detect(cv::Mat &img, std::vector<BoxInfo> &result_list);
-    std::string get_label_str(int label);
+  int detect(cv::Mat &img, std::vector<BoxInfo> &result_list,
+             bool has_postprocess);
 
 private:
-    void decode_infer(MNN::Tensor *cls_pred, MNN::Tensor *dis_pred, int stride, float threshold, std::vector<std::vector<BoxInfo>> &results);
-    BoxInfo disPred2Bbox(const float *&dfl_det, int label, float score, int x, int y, int stride);
-    void nms(std::vector<BoxInfo> &input_boxes, float NMS_THRESH);
+  void decode_infer(MNN::Tensor *cls_pred, MNN::Tensor *dis_pred, int stride,
+                    float threshold,
+                    std::vector<std::vector<BoxInfo>> &results);
+  BoxInfo disPred2Bbox(const float *&dfl_det, int label, float score, int x,
+                       int y, int stride);
+  void nms(std::vector<BoxInfo> &input_boxes, float NMS_THRESH);
 
 private:
-
-    std::shared_ptr<MNN::Interpreter> PicoDet_interpreter;
-    MNN::Session *PicoDet_session = nullptr;
-    MNN::Tensor *input_tensor = nullptr;
-
-    int num_thread;
-    int image_w;
-    int image_h;
-
-    int in_w = 320;
-    int in_h = 320;
-
-    float score_threshold;
-    float nms_threshold;
-
-    const float mean_vals[3] = { 103.53f, 116.28f, 123.675f };
-    const float norm_vals[3] = { 0.017429f, 0.017507f, 0.017125f };
-
-    const int num_class = 80;
-    const int reg_max = 7;
-
-    std::vector<HeadInfo> heads_info{
-        // cls_pred|dis_pred|stride
-        {"save_infer_model/scale_0.tmp_1", "save_infer_model/scale_4.tmp_1", 8},
-        {"save_infer_model/scale_1.tmp_1", "save_infer_model/scale_5.tmp_1", 16},
-        {"save_infer_model/scale_2.tmp_1", "save_infer_model/scale_6.tmp_1", 32},
-        {"save_infer_model/scale_3.tmp_1", "save_infer_model/scale_7.tmp_1", 64},
-    };
-
-    std::vector<std::string>
-        labels{"person", "bicycle", "car", "motorcycle", "airplane", "bus", "train", "truck", "boat", "traffic light",
-               "fire hydrant", "stop sign", "parking meter", "bench", "bird", "cat", "dog", "horse", "sheep", "cow",
-               "elephant", "bear", "zebra", "giraffe", "backpack", "umbrella", "handbag", "tie", "suitcase", "frisbee",
-               "skis", "snowboard", "sports ball", "kite", "baseball bat", "baseball glove", "skateboard", "surfboard",
-               "tennis racket", "bottle", "wine glass", "cup", "fork", "knife", "spoon", "bowl", "banana", "apple",
-               "sandwich", "orange", "broccoli", "carrot", "hot dog", "pizza", "donut", "cake", "chair", "couch",
-               "potted plant", "bed", "dining table", "toilet", "tv", "laptop", "mouse", "remote", "keyboard", "cell phone",
-               "microwave", "oven", "toaster", "sink", "refrigerator", "book", "clock", "vase", "scissors", "teddy bear",
-               "hair drier", "toothbrush"};
+  std::shared_ptr<MNN::Interpreter> PicoDet_interpreter;
+  MNN::Session *PicoDet_session = nullptr;
+  MNN::Tensor *input_tensor = nullptr;
+
+  int num_thread;
+  int image_w;
+  int image_h;
+
+  int in_w = 320;
+  int in_h = 320;
+
+  float score_threshold;
+  float nms_threshold;
+
+  const float mean_vals[3] = {103.53f, 116.28f, 123.675f};
+  const float norm_vals[3] = {0.017429f, 0.017507f, 0.017125f};
+
+  const int num_class = 80;
+  const int reg_max = 7;
+
+  std::vector<float> bbox_output_data_;
+  std::vector<float> class_output_data_;
+
+  std::vector<std::string> nms_heads_info{"tmp_16", "concat_4.tmp_0"};
+  // If the post-process was not exported, non_postprocess_heads_info is used.
+  std::vector<NonPostProcessHeadInfo> non_postprocess_heads_info{
+      // cls_pred|dis_pred|stride
+      {"transpose_0.tmp_0", "transpose_1.tmp_0", 8},
+      {"transpose_2.tmp_0", "transpose_3.tmp_0", 16},
+      {"transpose_4.tmp_0", "transpose_5.tmp_0", 32},
+      {"transpose_6.tmp_0", "transpose_7.tmp_0", 64},
+  };
 };
 
 template <typename T>
diff --git a/deploy/third_engine/demo_mnn/python/demo_mnn.py b/deploy/third_engine/demo_mnn/python/demo_mnn.py
deleted file mode 100644
index c5f88093869dfc7d20cff7a0f117a03b2477e8ab..0000000000000000000000000000000000000000
--- a/deploy/third_engine/demo_mnn/python/demo_mnn.py
+++ /dev/null
@@ -1,803 +0,0 @@
-# Copyright (c) 2021 PaddlePaddle Authors. All Rights Reserved.
-
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-
-# http://www.apache.org/licenses/LICENSE-2.0
-
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-# reference from https://github.com/RangiLyu/nanodet/tree/main/demo_mnn - -# -*- coding: utf-8 -*- -import argparse -from abc import ABCMeta, abstractmethod -from pathlib import Path - -import cv2 -import matplotlib.pyplot as plt -import numpy as np -from scipy.special import softmax -from tqdm import tqdm - -_COLORS = (np.array([ - 0.000, - 0.447, - 0.741, - 0.850, - 0.325, - 0.098, - 0.929, - 0.694, - 0.125, - 0.494, - 0.184, - 0.556, - 0.466, - 0.674, - 0.188, - 0.301, - 0.745, - 0.933, - 0.635, - 0.078, - 0.184, - 0.300, - 0.300, - 0.300, - 0.600, - 0.600, - 0.600, - 1.000, - 0.000, - 0.000, - 1.000, - 0.500, - 0.000, - 0.749, - 0.749, - 0.000, - 0.000, - 1.000, - 0.000, - 0.000, - 0.000, - 1.000, - 0.667, - 0.000, - 1.000, - 0.333, - 0.333, - 0.000, - 0.333, - 0.667, - 0.000, - 0.333, - 1.000, - 0.000, - 0.667, - 0.333, - 0.000, - 0.667, - 0.667, - 0.000, - 0.667, - 1.000, - 0.000, - 1.000, - 0.333, - 0.000, - 1.000, - 0.667, - 0.000, - 1.000, - 1.000, - 0.000, - 0.000, - 0.333, - 0.500, - 0.000, - 0.667, - 0.500, - 0.000, - 1.000, - 0.500, - 0.333, - 0.000, - 0.500, - 0.333, - 0.333, - 0.500, - 0.333, - 0.667, - 0.500, - 0.333, - 1.000, - 0.500, - 0.667, - 0.000, - 0.500, - 0.667, - 0.333, - 0.500, - 0.667, - 0.667, - 0.500, - 0.667, - 1.000, - 0.500, - 1.000, - 0.000, - 0.500, - 1.000, - 0.333, - 0.500, - 1.000, - 0.667, - 0.500, - 1.000, - 1.000, - 0.500, - 0.000, - 0.333, - 1.000, - 0.000, - 0.667, - 1.000, - 0.000, - 1.000, - 1.000, - 0.333, - 0.000, - 1.000, - 0.333, - 0.333, - 1.000, - 0.333, - 0.667, - 1.000, - 0.333, - 1.000, - 1.000, - 0.667, - 0.000, - 1.000, - 0.667, - 0.333, - 1.000, - 0.667, - 0.667, - 1.000, - 0.667, - 1.000, - 1.000, - 1.000, - 0.000, - 1.000, - 1.000, - 0.333, - 1.000, - 1.000, - 0.667, - 1.000, - 0.333, - 0.000, - 0.000, - 0.500, - 0.000, - 0.000, - 0.667, - 0.000, - 0.000, - 0.833, - 0.000, - 0.000, - 1.000, - 0.000, - 0.000, - 0.000, - 0.167, - 0.000, - 0.000, - 0.333, - 0.000, - 0.000, - 0.500, - 0.000, - 0.000, - 0.667, - 0.000, - 0.000, - 0.833, - 0.000, - 0.000, - 1.000, - 0.000, - 0.000, - 0.000, - 0.167, - 0.000, - 0.000, - 0.333, - 0.000, - 0.000, - 0.500, - 0.000, - 0.000, - 0.667, - 0.000, - 0.000, - 0.833, - 0.000, - 0.000, - 1.000, - 0.000, - 0.000, - 0.000, - 0.143, - 0.143, - 0.143, - 0.286, - 0.286, - 0.286, - 0.429, - 0.429, - 0.429, - 0.571, - 0.571, - 0.571, - 0.714, - 0.714, - 0.714, - 0.857, - 0.857, - 0.857, - 0.000, - 0.447, - 0.741, - 0.314, - 0.717, - 0.741, - 0.50, - 0.5, - 0, -]).astype(np.float32).reshape(-1, 3)) - - -def get_resize_matrix(raw_shape, dst_shape, keep_ratio): - """ - Get resize matrix for resizing raw img to input size - :param raw_shape: (width, height) of raw image - :param dst_shape: (width, height) of input image - :param keep_ratio: whether keep original ratio - :return: 3x3 Matrix - """ - r_w, r_h = raw_shape - d_w, d_h = dst_shape - Rs = np.eye(3) - if keep_ratio: - C = np.eye(3) - C[0, 2] = -r_w / 2 - C[1, 2] = -r_h / 2 - - if r_w / r_h < d_w / d_h: - ratio = d_h / r_h - else: - ratio = d_w / r_w - Rs[0, 0] *= ratio - Rs[1, 1] *= ratio - - T = np.eye(3) - T[0, 2] = 0.5 * d_w - T[1, 2] = 0.5 * d_h - return T @Rs @C - else: - Rs[0, 0] *= d_w / r_w - Rs[1, 1] *= d_h / r_h - return Rs - - -def warp_boxes(boxes, M, width, height): - """Apply transform to boxes - Copy from picodet/data/transform/warp.py - """ - n = len(boxes) - if n: - # warp points - xy = np.ones((n * 4, 3)) - xy[:, :2] = boxes[:, [0, 1, 2, 3, 0, 3, 2, 1]].reshape( - n * 4, 2) # x1y1, x2y2, x1y2, x2y1 - xy = xy @M.T # transform - xy = 
(xy[:, :2] / xy[:, 2:3]).reshape(n, 8) # rescale - # create new boxes - x = xy[:, [0, 2, 4, 6]] - y = xy[:, [1, 3, 5, 7]] - xy = np.concatenate( - (x.min(1), y.min(1), x.max(1), y.max(1))).reshape(4, n).T - # clip boxes - xy[:, [0, 2]] = xy[:, [0, 2]].clip(0, width) - xy[:, [1, 3]] = xy[:, [1, 3]].clip(0, height) - return xy.astype(np.float32) - else: - return boxes - - -def overlay_bbox_cv(img, all_box, class_names): - """Draw result boxes - Copy from picodet/util/visualization.py - """ - # all_box array of [label, x0, y0, x1, y1, score] - all_box.sort(key=lambda v: v[5]) - for box in all_box: - label, x0, y0, x1, y1, score = box - color = (_COLORS[label] * 255).astype(np.uint8).tolist() - text = "{}:{:.1f}%".format(class_names[label], score * 100) - txt_color = (0, 0, 0) if np.mean(_COLORS[label]) > 0.5 else (255, 255, - 255) - font = cv2.FONT_HERSHEY_SIMPLEX - txt_size = cv2.getTextSize(text, font, 0.5, 2)[0] - cv2.rectangle(img, (x0, y0), (x1, y1), color, 2) - - cv2.rectangle( - img, - (x0, y0 - txt_size[1] - 1), - (x0 + txt_size[0] + txt_size[1], y0 - 1), - color, - -1, ) - cv2.putText(img, text, (x0, y0 - 1), font, 0.5, txt_color, thickness=1) - return img - - -def hard_nms(box_scores, iou_threshold, top_k=-1, candidate_size=200): - """ - - Args: - box_scores (N, 5): boxes in corner-form and probabilities. - iou_threshold: intersection over union threshold. - top_k: keep top_k results. If k <= 0, keep all the results. - candidate_size: only consider the candidates with the highest scores. - Returns: - picked: a list of indexes of the kept boxes - """ - scores = box_scores[:, -1] - boxes = box_scores[:, :-1] - picked = [] - indexes = np.argsort(scores) - indexes = indexes[-candidate_size:] - while len(indexes) > 0: - current = indexes[-1] - picked.append(current) - if 0 < top_k == len(picked) or len(indexes) == 1: - break - current_box = boxes[current, :] - indexes = indexes[:-1] - rest_boxes = boxes[indexes, :] - iou = iou_of( - rest_boxes, - np.expand_dims( - current_box, axis=0), ) - indexes = indexes[iou <= iou_threshold] - - return box_scores[picked, :] - - -def iou_of(boxes0, boxes1, eps=1e-5): - """Return intersection-over-union (Jaccard index) of boxes. - - Args: - boxes0 (N, 4): ground truth boxes. - boxes1 (N or 1, 4): predicted boxes. - eps: a small number to avoid 0 as denominator. - Returns: - iou (N): IoU values. - """ - overlap_left_top = np.maximum(boxes0[..., :2], boxes1[..., :2]) - overlap_right_bottom = np.minimum(boxes0[..., 2:], boxes1[..., 2:]) - - overlap_area = area_of(overlap_left_top, overlap_right_bottom) - area0 = area_of(boxes0[..., :2], boxes0[..., 2:]) - area1 = area_of(boxes1[..., :2], boxes1[..., 2:]) - return overlap_area / (area0 + area1 - overlap_area + eps) - - -def area_of(left_top, right_bottom): - """Compute the areas of rectangles given two corners. - - Args: - left_top (N, 2): left top corner. - right_bottom (N, 2): right bottom corner. - - Returns: - area (N): return the area. 
- """ - hw = np.clip(right_bottom - left_top, 0.0, None) - return hw[..., 0] * hw[..., 1] - - -class PicoDetABC(metaclass=ABCMeta): - def __init__( - self, - input_shape=[416, 416], - reg_max=7, - strides=[8, 16, 32, 64], - prob_threshold=0.4, - iou_threshold=0.3, - num_candidate=1000, - top_k=-1, ): - self.strides = strides - self.input_shape = input_shape - self.reg_max = reg_max - self.prob_threshold = prob_threshold - self.iou_threshold = iou_threshold - self.num_candidate = num_candidate - self.top_k = top_k - self.img_mean = [103.53, 116.28, 123.675] - self.img_std = [57.375, 57.12, 58.395] - self.input_size = (self.input_shape[1], self.input_shape[0]) - self.class_names = [ - "person", - "bicycle", - "car", - "motorcycle", - "airplane", - "bus", - "train", - "truck", - "boat", - "traffic_light", - "fire_hydrant", - "stop_sign", - "parking_meter", - "bench", - "bird", - "cat", - "dog", - "horse", - "sheep", - "cow", - "elephant", - "bear", - "zebra", - "giraffe", - "backpack", - "umbrella", - "handbag", - "tie", - "suitcase", - "frisbee", - "skis", - "snowboard", - "sports_ball", - "kite", - "baseball_bat", - "baseball_glove", - "skateboard", - "surfboard", - "tennis_racket", - "bottle", - "wine_glass", - "cup", - "fork", - "knife", - "spoon", - "bowl", - "banana", - "apple", - "sandwich", - "orange", - "broccoli", - "carrot", - "hot_dog", - "pizza", - "donut", - "cake", - "chair", - "couch", - "potted_plant", - "bed", - "dining_table", - "toilet", - "tv", - "laptop", - "mouse", - "remote", - "keyboard", - "cell_phone", - "microwave", - "oven", - "toaster", - "sink", - "refrigerator", - "book", - "clock", - "vase", - "scissors", - "teddy_bear", - "hair_drier", - "toothbrush", - ] - - def preprocess(self, img): - # resize image - ResizeM = get_resize_matrix((img.shape[1], img.shape[0]), - self.input_size, True) - img_resize = cv2.warpPerspective(img, ResizeM, dsize=self.input_size) - # normalize image - img_input = img_resize.astype(np.float32) / 255 - img_mean = np.array( - self.img_mean, dtype=np.float32).reshape(1, 1, 3) / 255 - img_std = np.array( - self.img_std, dtype=np.float32).reshape(1, 1, 3) / 255 - img_input = (img_input - img_mean) / img_std - # expand dims - img_input = np.transpose(img_input, [2, 0, 1]) - img_input = np.expand_dims(img_input, axis=0) - return img_input, ResizeM - - def postprocess(self, scores, raw_boxes, ResizeM, raw_shape): - # generate centers - decode_boxes = [] - select_scores = [] - for stride, box_distribute, score in zip(self.strides, raw_boxes, - scores): - # centers - fm_h = self.input_shape[0] / stride - fm_w = self.input_shape[1] / stride - h_range = np.arange(fm_h) - w_range = np.arange(fm_w) - ww, hh = np.meshgrid(w_range, h_range) - ct_row = (hh.flatten() + 0.5) * stride - ct_col = (ww.flatten() + 0.5) * stride - center = np.stack((ct_col, ct_row, ct_col, ct_row), axis=1) - - # box distribution to distance - reg_range = np.arange(self.reg_max + 1) - box_distance = box_distribute.reshape((-1, self.reg_max + 1)) - box_distance = softmax(box_distance, axis=1) - box_distance = box_distance * np.expand_dims(reg_range, axis=0) - box_distance = np.sum(box_distance, axis=1).reshape((-1, 4)) - box_distance = box_distance * stride - - # top K candidate - topk_idx = np.argsort(score.max(axis=1))[::-1] - topk_idx = topk_idx[:C] - center = center[topk_idx] - score = score[topk_idx] - box_distance = box_distance[topk_idx] - - # decode box - decode_box = center + [-1, -1, 1, 1] * box_distance - - select_scores.append(score) - 
decode_boxes.append(decode_box) - - # nms - bboxes = np.concatenate(decode_boxes, axis=0) - confidences = np.concatenate(select_scores, axis=0) - picked_box_probs = [] - picked_labels = [] - for class_index in range(0, confidences.shape[1]): - probs = confidences[:, class_index] - mask = probs > self.prob_threshold - probs = probs[mask] - if probs.shape[0] == 0: - continue - subset_boxes = bboxes[mask, :] - box_probs = np.concatenate( - [subset_boxes, probs.reshape(-1, 1)], axis=1) - box_probs = hard_nms( - box_probs, - iou_threshold=self.iou_threshold, - top_k=self.top_k, ) - picked_box_probs.append(box_probs) - picked_labels.extend([class_index] * box_probs.shape[0]) - if not picked_box_probs: - return np.array([]), np.array([]), np.array([]) - picked_box_probs = np.concatenate(picked_box_probs) - - # resize output boxes - picked_box_probs[:, :4] = warp_boxes(picked_box_probs[:, :4], - np.linalg.inv(ResizeM), - raw_shape[1], raw_shape[0]) - return ( - picked_box_probs[:, :4].astype(np.int32), - np.array(picked_labels), - picked_box_probs[:, 4], ) - - @abstractmethod - def infer_image(self, img_input): - pass - - def detect(self, img): - raw_shape = img.shape - img_input, ResizeM = self.preprocess(img) - scores, raw_boxes = self.infer_image(img_input) - if scores[0].ndim == 1: # handling num_classes=1 case - scores = [x[:, None] for x in scores] - bbox, label, score = self.postprocess(scores, raw_boxes, ResizeM, - raw_shape) - - print(bbox, score) - return bbox, label, score - - def draw_box(self, raw_img, bbox, label, score): - img = raw_img.copy() - all_box = [[x, ] + y + [z, ] - for x, y, z in zip(label, bbox.tolist(), score)] - img_draw = overlay_bbox_cv(img, all_box, self.class_names) - return img_draw - - def detect_folder(self, img_fold, result_path): - img_fold = Path(img_fold) - result_path = Path(result_path) - result_path.mkdir(parents=True, exist_ok=True) - - img_name_list = filter( - lambda x: str(x).endswith(".png") or str(x).endswith(".jpg"), - img_fold.iterdir(), ) - img_name_list = list(img_name_list) - print(f"find {len(img_name_list)} images") - - for img_path in tqdm(img_name_list): - img = cv2.imread(str(img_path)) - bbox, label, score = self.detect(img) - img_draw = self.draw_box(img, bbox, label, score) - save_path = str(result_path / img_path.name.replace(".png", ".jpg")) - cv2.imwrite(save_path, img_draw) - - -class PicoDetMNN(PicoDetABC): - import MNN as MNNlib - - def __init__(self, model_path, *args, **kwargs): - super(PicoDetMNN, self).__init__(*args, **kwargs) - print("Using MNN as inference backend") - print(f"Using weight: {model_path}") - - # load model - self.model_path = model_path - self.interpreter = self.MNNlib.Interpreter(self.model_path) - self.session = self.interpreter.createSession() - self.input_tensor = self.interpreter.getSessionInput(self.session) - - def infer_image(self, img_input): - tmp_input = self.MNNlib.Tensor( - (1, 3, self.input_size[1], self.input_size[0]), - self.MNNlib.Halide_Type_Float, - img_input, - self.MNNlib.Tensor_DimensionType_Caffe, ) - self.input_tensor.copyFrom(tmp_input) - self.interpreter.runSession(self.session) - score_out_name = [ - "save_infer_model/scale_0.tmp_1", "save_infer_model/scale_1.tmp_1", - "save_infer_model/scale_2.tmp_1", "save_infer_model/scale_3.tmp_1" - ] - scores = [ - self.interpreter.getSessionOutput(self.session, x).getData() - for x in score_out_name - ] - scores = [np.reshape(x, (-1, 80)) for x in scores] - boxes_out_name = [ - "save_infer_model/scale_4.tmp_1", 
"save_infer_model/scale_5.tmp_1", - "save_infer_model/scale_6.tmp_1", "save_infer_model/scale_7.tmp_1" - ] - raw_boxes = [ - self.interpreter.getSessionOutput(self.session, x).getData() - for x in boxes_out_name - ] - raw_boxes = [np.reshape(x, (-1, 32)) for x in raw_boxes] - return scores, raw_boxes - - -class PicoDetONNX(PicoDetABC): - import onnxruntime as ort - - def __init__(self, model_path, *args, **kwargs): - super(PicoDetONNX, self).__init__(*args, **kwargs) - print("Using ONNX as inference backend") - print(f"Using weight: {model_path}") - - # load model - self.model_path = model_path - self.ort_session = self.ort.InferenceSession(self.model_path) - self.input_name = self.ort_session.get_inputs()[0].name - - def infer_image(self, img_input): - inference_results = self.ort_session.run(None, - {self.input_name: img_input}) - scores = [np.squeeze(x) for x in inference_results[:3]] - raw_boxes = [np.squeeze(x) for x in inference_results[3:]] - return scores, raw_boxes - - -class PicoDetTorch(PicoDetABC): - import torch - - def __init__(self, model_path, cfg_path, *args, **kwargs): - from picodet.model.arch import build_model - from picodet.util import Logger, cfg, load_config, load_model_weight - - super(PicoDetTorch, self).__init__(*args, **kwargs) - print("Using PyTorch as inference backend") - print(f"Using weight: {model_path}") - - # load model - self.model_path = model_path - self.cfg_path = cfg_path - load_config(cfg, cfg_path) - self.logger = Logger(-1, cfg.save_dir, False) - self.model = build_model(cfg.model) - checkpoint = self.torch.load( - model_path, map_location=lambda storage, loc: storage) - load_model_weight(self.model, checkpoint, self.logger) - - def infer_image(self, img_input): - self.model.train(False) - with self.torch.no_grad(): - inference_results = self.model(self.torch.from_numpy(img_input)) - scores = [ - x.permute(0, 2, 3, 1).reshape((-1, 80)).sigmoid().detach().numpy() - for x in inference_results[0] - ] - raw_boxes = [ - x.permute(0, 2, 3, 1).reshape((-1, 32)).detach().numpy() - for x in inference_results[1] - ] - return scores, raw_boxes - - -def main(): - parser = argparse.ArgumentParser() - parser.add_argument( - "--model_path", - dest="model_path", - type=str, - default="../model/picodet-320.mnn") - parser.add_argument( - "--cfg_path", dest="cfg_path", type=str, default="config/picodet-m.yml") - parser.add_argument( - "--img_fold", dest="img_fold", type=str, default="../imgs") - parser.add_argument( - "--result_fold", dest="result_fold", type=str, default="../results") - parser.add_argument( - "--input_shape", - dest="input_shape", - nargs=2, - type=int, - default=[320, 320]) - parser.add_argument( - "--backend", choices=["MNN", "ONNX", "torch"], default="MNN") - args = parser.parse_args() - - print(f"Detecting {args.img_fold}") - - # load detector - if args.backend == "MNN": - detector = PicoDetMNN(args.model_path, input_shape=args.input_shape) - elif args.backend == "ONNX": - detector = PicoDetONNX(args.model_path, input_shape=args.input_shape) - elif args.backend == "torch": - detector = PicoDetTorch( - args.model_path, args.cfg_path, input_shape=args.input_shape) - else: - raise ValueError - - # detect folder - detector.detect_folder(args.img_fold, args.result_fold) - - -def test_one(): - detector = PicoDetMNN("../weight/picodet-416.mnn") - img = cv2.imread("../imgs/000252.jpg") - bbox, label, score = detector.detect(img) - img_draw = detector.draw_box(img, bbox, label, score) - cv2.imwrite('picodet_infer.jpg', img_draw) - - -if __name__ == 
"__main__": - # main() - test_one() diff --git a/deploy/third_engine/demo_ncnn/CMakeLists.txt b/deploy/third_engine/demo_ncnn/CMakeLists.txt index 4f5cc65fc6d349c98f6490055bb38139a7296d05..0d4344c699d58082eb37ebe6089e16ad120bc87e 100644 --- a/deploy/third_engine/demo_ncnn/CMakeLists.txt +++ b/deploy/third_engine/demo_ncnn/CMakeLists.txt @@ -1,4 +1,4 @@ -cmake_minimum_required(VERSION 3.4.1) +cmake_minimum_required(VERSION 3.9) set(CMAKE_CXX_STANDARD 17) project(picodet_demo) @@ -11,9 +11,11 @@ if(OPENMP_FOUND) set(CMAKE_EXE_LINKER_FLAGS "${CMAKE_EXE_LINKER_FLAGS} ${OpenMP_EXE_LINKER_FLAGS}") endif() -find_package(OpenCV REQUIRED) +# find_package(OpenCV REQUIRED) +find_package(OpenCV REQUIRED PATHS "/path/to/opencv-3.4.16_gcc8.2_ffmpeg") -find_package(ncnn REQUIRED) +# find_package(ncnn REQUIRED) +find_package(ncnn REQUIRED PATHS "/path/to/ncnn/build/install/lib/cmake/ncnn") if(NOT TARGET ncnn) message(WARNING "ncnn NOT FOUND! Please set ncnn_DIR environment variable") else() diff --git a/deploy/third_engine/demo_ncnn/README.md b/deploy/third_engine/demo_ncnn/README.md index b15052d9812784fbbec543a394960a78128012c7..f9867b8acc9652e22bfd891671da4f5429436c3c 100644 --- a/deploy/third_engine/demo_ncnn/README.md +++ b/deploy/third_engine/demo_ncnn/README.md @@ -1,10 +1,8 @@ # PicoDet NCNN Demo -This project provides PicoDet image inference, webcam inference and benchmark using -[Tencent's NCNN framework](https://github.com/Tencent/ncnn). - -# How to build +该Demo提供的预测代码是根据[Tencent's NCNN framework](https://github.com/Tencent/ncnn)推理库预测的。 +# 第一步:编译 ## Windows ### Step1. Download and Install Visual Studio from https://visualstudio.microsoft.com/vs/community/ @@ -12,11 +10,16 @@ Download and Install Visual Studio from https://visualstudio.microsoft.com/vs/co ### Step2. Download and install OpenCV from https://github.com/opencv/opencv/releases -### Step3(Optional). +为了方便,如果环境是gcc8.2 x86环境,可直接下载以下库: +```shell +wget https://paddledet.bj.bcebos.com/data/opencv-3.4.16_gcc8.2_ffmpeg.tar.gz +tar -xf opencv-3.4.16_gcc8.2_ffmpeg.tar.gz +``` + +### Step3(可选). Download and install Vulkan SDK from https://vulkan.lunarg.com/sdk/home -### Step4. -Clone NCNN repository +### Step4:编译NCNN ``` shell script git clone --recursive https://github.com/Tencent/ncnn.git @@ -25,7 +28,7 @@ Build NCNN following this tutorial: [Build for Windows x64 using VS2017](https:/ ### Step5. -Add `ncnn_DIR` = `YOUR_NCNN_PATH/build/install/lib/cmake/ncnn` to system environment variables. +增加 `ncnn_DIR` = `YOUR_NCNN_PATH/build/install/lib/cmake/ncnn` 到系统变量中 Build project: Open x64 Native Tools Command Prompt for VS 2019 or 2017 @@ -42,10 +45,10 @@ msbuild picodet_demo.vcxproj /p:configuration=release /p:platform=x64 ### Step1. Build and install OpenCV from https://github.com/opencv/opencv -### Step2(Optional). +### Step2(可选). Download Vulkan SDK from https://vulkan.lunarg.com/sdk/home -### Step3. +### Step3:编译NCNN Clone NCNN repository ``` shell script @@ -54,15 +57,7 @@ git clone --recursive https://github.com/Tencent/ncnn.git Build NCNN following this tutorial: [Build for Linux / NVIDIA Jetson / Raspberry Pi](https://github.com/Tencent/ncnn/wiki/how-to-build#build-for-linux) -### Step4. - -Set environment variables. Run: - -``` shell script -export ncnn_DIR=YOUR_NCNN_PATH/build/install/lib/cmake/ncnn -``` - -Build project +### Step4:编译可执行文件 ``` shell script cd @@ -71,47 +66,64 @@ cd build cmake .. make ``` - # Run demo -Download PicoDet ncnn model. 
diff --git a/deploy/third_engine/demo_ncnn/main.cpp b/deploy/third_engine/demo_ncnn/main.cpp
index 2f98d82ae5fcdf8c18d3f2eebd30ae3cf31f0cc7..8f69af93b2de7f9404fb86d5112ce62056d936b4 100644
--- a/deploy/third_engine/demo_ncnn/main.cpp
+++ b/deploy/third_engine/demo_ncnn/main.cpp
@@ -13,353 +13,198 @@
 // limitations under the License.
// reference from https://github.com/RangiLyu/nanodet/tree/main/demo_ncnn +#include "picodet.h" +#include +#include +#include #include #include #include -#include -#include -#include "picodet.h" -#include +#define __SAVE_RESULT__ // if defined save drawed results to ../results, else + // show it in windows struct object_rect { - int x; - int y; - int width; - int height; -}; - -int resize_uniform(cv::Mat& src, cv::Mat& dst, cv::Size dst_size, object_rect& effect_area) -{ - int w = src.cols; - int h = src.rows; - int dst_w = dst_size.width; - int dst_h = dst_size.height; - dst = cv::Mat(cv::Size(dst_w, dst_h), CV_8UC3, cv::Scalar(0)); - - float ratio_src = w * 1.0 / h; - float ratio_dst = dst_w * 1.0 / dst_h; - - int tmp_w = 0; - int tmp_h = 0; - if (ratio_src > ratio_dst) { - tmp_w = dst_w; - tmp_h = floor((dst_w * 1.0 / w) * h); - } - else if (ratio_src < ratio_dst) { - tmp_h = dst_h; - tmp_w = floor((dst_h * 1.0 / h) * w); - } - else { - cv::resize(src, dst, dst_size); - effect_area.x = 0; - effect_area.y = 0; - effect_area.width = dst_w; - effect_area.height = dst_h; - return 0; - } - - cv::Mat tmp; - cv::resize(src, tmp, cv::Size(tmp_w, tmp_h)); - - if (tmp_w != dst_w) { - int index_w = floor((dst_w - tmp_w) / 2.0); - for (int i = 0; i < dst_h; i++) { - memcpy(dst.data + i * dst_w * 3 + index_w * 3, tmp.data + i * tmp_w * 3, tmp_w * 3); - } - effect_area.x = index_w; - effect_area.y = 0; - effect_area.width = tmp_w; - effect_area.height = tmp_h; - } - else if (tmp_h != dst_h) { - int index_h = floor((dst_h - tmp_h) / 2.0); - memcpy(dst.data + index_h * dst_w * 3, tmp.data, tmp_w * tmp_h * 3); - effect_area.x = 0; - effect_area.y = index_h; - effect_area.width = tmp_w; - effect_area.height = tmp_h; - } - else { - printf("error\n"); - } - return 0; -} - -const int color_list[80][3] = -{ - {216 , 82 , 24}, - {236 ,176 , 31}, - {125 , 46 ,141}, - {118 ,171 , 47}, - { 76 ,189 ,237}, - {238 , 19 , 46}, - { 76 , 76 , 76}, - {153 ,153 ,153}, - {255 , 0 , 0}, - {255 ,127 , 0}, - {190 ,190 , 0}, - { 0 ,255 , 0}, - { 0 , 0 ,255}, - {170 , 0 ,255}, - { 84 , 84 , 0}, - { 84 ,170 , 0}, - { 84 ,255 , 0}, - {170 , 84 , 0}, - {170 ,170 , 0}, - {170 ,255 , 0}, - {255 , 84 , 0}, - {255 ,170 , 0}, - {255 ,255 , 0}, - { 0 , 84 ,127}, - { 0 ,170 ,127}, - { 0 ,255 ,127}, - { 84 , 0 ,127}, - { 84 , 84 ,127}, - { 84 ,170 ,127}, - { 84 ,255 ,127}, - {170 , 0 ,127}, - {170 , 84 ,127}, - {170 ,170 ,127}, - {170 ,255 ,127}, - {255 , 0 ,127}, - {255 , 84 ,127}, - {255 ,170 ,127}, - {255 ,255 ,127}, - { 0 , 84 ,255}, - { 0 ,170 ,255}, - { 0 ,255 ,255}, - { 84 , 0 ,255}, - { 84 , 84 ,255}, - { 84 ,170 ,255}, - { 84 ,255 ,255}, - {170 , 0 ,255}, - {170 , 84 ,255}, - {170 ,170 ,255}, - {170 ,255 ,255}, - {255 , 0 ,255}, - {255 , 84 ,255}, - {255 ,170 ,255}, - { 42 , 0 , 0}, - { 84 , 0 , 0}, - {127 , 0 , 0}, - {170 , 0 , 0}, - {212 , 0 , 0}, - {255 , 0 , 0}, - { 0 , 42 , 0}, - { 0 , 84 , 0}, - { 0 ,127 , 0}, - { 0 ,170 , 0}, - { 0 ,212 , 0}, - { 0 ,255 , 0}, - { 0 , 0 , 42}, - { 0 , 0 , 84}, - { 0 , 0 ,127}, - { 0 , 0 ,170}, - { 0 , 0 ,212}, - { 0 , 0 ,255}, - { 0 , 0 , 0}, - { 36 , 36 , 36}, - { 72 , 72 , 72}, - {109 ,109 ,109}, - {145 ,145 ,145}, - {182 ,182 ,182}, - {218 ,218 ,218}, - { 0 ,113 ,188}, - { 80 ,182 ,188}, - {127 ,127 , 0}, + int x; + int y; + int width; + int height; }; -void draw_bboxes(const cv::Mat& bgr, const std::vector& bboxes, object_rect effect_roi) -{ - static const char* class_names[] = { "person", "bicycle", "car", "motorcycle", "airplane", "bus", - "train", "truck", "boat", "traffic light", 
"fire hydrant", - "stop sign", "parking meter", "bench", "bird", "cat", "dog", - "horse", "sheep", "cow", "elephant", "bear", "zebra", "giraffe", - "backpack", "umbrella", "handbag", "tie", "suitcase", "frisbee", - "skis", "snowboard", "sports ball", "kite", "baseball bat", - "baseball glove", "skateboard", "surfboard", "tennis racket", - "bottle", "wine glass", "cup", "fork", "knife", "spoon", "bowl", - "banana", "apple", "sandwich", "orange", "broccoli", "carrot", - "hot dog", "pizza", "donut", "cake", "chair", "couch", - "potted plant", "bed", "dining table", "toilet", "tv", "laptop", - "mouse", "remote", "keyboard", "cell phone", "microwave", "oven", - "toaster", "sink", "refrigerator", "book", "clock", "vase", - "scissors", "teddy bear", "hair drier", "toothbrush" - }; - - cv::Mat image = bgr.clone(); - int src_w = image.cols; - int src_h = image.rows; - int dst_w = effect_roi.width; - int dst_h = effect_roi.height; - float width_ratio = (float)src_w / (float)dst_w; - float height_ratio = (float)src_h / (float)dst_h; - - - for (size_t i = 0; i < bboxes.size(); i++) - { - const BoxInfo& bbox = bboxes[i]; - cv::Scalar color = cv::Scalar(color_list[bbox.label][0], color_list[bbox.label][1], color_list[bbox.label][2]); - - cv::rectangle(image, cv::Rect(cv::Point((bbox.x1 - effect_roi.x) * width_ratio, (bbox.y1 - effect_roi.y) * height_ratio), - cv::Point((bbox.x2 - effect_roi.x) * width_ratio, (bbox.y2 - effect_roi.y) * height_ratio)), color); - - char text[256]; - sprintf(text, "%s %.1f%%", class_names[bbox.label], bbox.score * 100); - - int baseLine = 0; - cv::Size label_size = cv::getTextSize(text, cv::FONT_HERSHEY_SIMPLEX, 0.4, 1, &baseLine); - - int x = (bbox.x1 - effect_roi.x) * width_ratio; - int y = (bbox.y1 - effect_roi.y) * height_ratio - label_size.height - baseLine; - if (y < 0) - y = 0; - if (x + label_size.width > image.cols) - x = image.cols - label_size.width; - - cv::rectangle(image, cv::Rect(cv::Point(x, y), cv::Size(label_size.width, label_size.height + baseLine)), - color, -1); - - cv::putText(image, text, cv::Point(x, y + label_size.height), - cv::FONT_HERSHEY_SIMPLEX, 0.4, cv::Scalar(255, 255, 255)); - } - cv::imwrite("../result/test_picodet.jpg", image); - printf("************infer image success!!!**********\n"); -} - - -int image_demo(PicoDet &detector, const char* imagepath) -{ - std::vector filenames; - cv::glob(imagepath, filenames, false); - - for (auto img_name : filenames) - { - cv::Mat image = cv::imread(img_name); - if (image.empty()) - { - fprintf(stderr, "cv::imread %s failed\n", img_name); - return -1; - } - object_rect effect_roi; - cv::Mat resized_img; - resize_uniform(image, resized_img, cv::Size(320, 320), effect_roi); - auto results = detector.detect(resized_img, 0.4, 0.5); - char imgName[20] = {}; - draw_bboxes(image, results, effect_roi); - cv::waitKey(0); - +std::vector GenerateColorMap(int num_class) { + auto colormap = std::vector(3 * num_class, 0); + for (int i = 0; i < num_class; ++i) { + int j = 0; + int lab = i; + while (lab) { + colormap[i * 3] |= (((lab >> 0) & 1) << (7 - j)); + colormap[i * 3 + 1] |= (((lab >> 1) & 1) << (7 - j)); + colormap[i * 3 + 2] |= (((lab >> 2) & 1) << (7 - j)); + ++j; + lab >>= 3; } - return 0; + } + return colormap; } -int webcam_demo(PicoDet& detector, int cam_id) -{ - cv::Mat image; - cv::VideoCapture cap(cam_id); - - while (true) - { - cap >> image; - object_rect effect_roi; - cv::Mat resized_img; - resize_uniform(image, resized_img, cv::Size(320, 320), effect_roi); - auto results = 
detector.detect(resized_img, 0.4, 0.5);
-        draw_bboxes(image, results, effect_roi);
-        cv::waitKey(1);
-    }
-    return 0;
+void draw_bboxes(const cv::Mat &im, const std::vector<BoxInfo> &bboxes,
+                 std::string save_path = "None") {
+  static const char *class_names[] = {
+      "person",        "bicycle",      "car",
+      "motorcycle",    "airplane",     "bus",
+      "train",         "truck",        "boat",
+      "traffic light", "fire hydrant", "stop sign",
+      "parking meter", "bench",        "bird",
+      "cat",           "dog",          "horse",
+      "sheep",         "cow",          "elephant",
+      "bear",          "zebra",        "giraffe",
+      "backpack",      "umbrella",     "handbag",
+      "tie",           "suitcase",     "frisbee",
+      "skis",          "snowboard",    "sports ball",
+      "kite",          "baseball bat", "baseball glove",
+      "skateboard",    "surfboard",    "tennis racket",
+      "bottle",        "wine glass",   "cup",
+      "fork",          "knife",        "spoon",
+      "bowl",          "banana",       "apple",
+      "sandwich",      "orange",       "broccoli",
+      "carrot",        "hot dog",      "pizza",
+      "donut",         "cake",         "chair",
+      "couch",         "potted plant", "bed",
+      "dining table",  "toilet",       "tv",
+      "laptop",        "mouse",        "remote",
+      "keyboard",      "cell phone",   "microwave",
+      "oven",          "toaster",      "sink",
+      "refrigerator",  "book",         "clock",
+      "vase",          "scissors",     "teddy bear",
+      "hair drier",    "toothbrush"};
+
+  cv::Mat image = im.clone();
+  int src_w = image.cols;
+  int src_h = image.rows;
+  int thickness = 2;
+  // sizeof(class_names) alone counts bytes, not entries.
+  auto colormap =
+      GenerateColorMap(sizeof(class_names) / sizeof(class_names[0]));
+
+  for (size_t i = 0; i < bboxes.size(); i++) {
+    const BoxInfo &bbox = bboxes[i];
+    std::cout << bbox.x1 << ", " << bbox.y1 << ", " << bbox.x2 << ", "
+              << bbox.y2 << std::endl;
+    int c1 = colormap[3 * bbox.label + 0];
+    int c2 = colormap[3 * bbox.label + 1];
+    int c3 = colormap[3 * bbox.label + 2];
+    cv::Scalar color = cv::Scalar(c1, c2, c3);
+    // cv::Scalar color = cv::Scalar(0, 0, 255);
+    cv::rectangle(image, cv::Rect(cv::Point(bbox.x1, bbox.y1),
+                                  cv::Point(bbox.x2, bbox.y2)),
+                  color, 1);
+
+    char text[256];
+    sprintf(text, "%s %.1f%%", class_names[bbox.label], bbox.score * 100);
+
+    int baseLine = 0;
+    cv::Size label_size =
+        cv::getTextSize(text, cv::FONT_HERSHEY_SIMPLEX, 0.4, 1, &baseLine);
+
+    int x = bbox.x1;
+    int y = bbox.y1 - label_size.height - baseLine;
+    if (y < 0)
+      y = 0;
+    if (x + label_size.width > image.cols)
+      x = image.cols - label_size.width;
+
+    cv::rectangle(image, cv::Rect(cv::Point(x, y),
+                                  cv::Size(label_size.width,
+                                           label_size.height + baseLine)),
+                  color, -1);
+
+    cv::putText(image, text, cv::Point(x, y + label_size.height),
+                cv::FONT_HERSHEY_SIMPLEX, 0.4, cv::Scalar(255, 255, 255), 1);
+  }
+
+  if (save_path == "None") {
+    cv::imshow("image", image);
+  } else {
+    cv::imwrite(save_path, image);
+    std::cout << "Result saved in: " << save_path << std::endl;
+  }
 }
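The new `draw_bboxes` takes its per-class colors from `GenerateColorMap` above, which spreads the bits of the label index across the three channels (a VOC-style palette), so each class gets a stable, distinct color. A small usage sketch (the `std::vector<int>` return type matches the function body above):

```cpp
// Sketch: fetch the color triple for one class id.
std::vector<int> colormap = GenerateColorMap(80);  // 80 classes -> 240 ints
int label = 17;                                    // e.g. "horse" in COCO order
cv::Scalar color(colormap[3 * label + 0], colormap[3 * label + 1],
                 colormap[3 * label + 2]);
```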
 
-int video_demo(PicoDet& detector, const char* path)
-{
-    cv::Mat image;
-    cv::VideoCapture cap(path);
-
-    while (true)
-    {
-        cap >> image;
-        object_rect effect_roi;
-        cv::Mat resized_img;
-        resize_uniform(image, resized_img, cv::Size(320, 320), effect_roi);
-        auto results = detector.detect(resized_img, 0.4, 0.5);
-        draw_bboxes(image, results, effect_roi);
-        cv::waitKey(1);
-    }
-    return 0;
+int image_demo(PicoDet &detector, const char *imagepath,
+               int has_postprocess = 0) {
+  std::vector<cv::String> filenames;
+  cv::glob(imagepath, filenames, false);
+  bool is_postprocess = has_postprocess > 0 ? true : false;
+  for (auto img_name : filenames) {
+    cv::Mat image = cv::imread(img_name, cv::IMREAD_COLOR);
+    if (image.empty()) {
+      fprintf(stderr, "cv::imread %s failed\n", img_name.c_str());
+      return -1;
+    }
+    std::vector<BoxInfo> results;
+    detector.detect(image, results, is_postprocess);
+    std::cout << "detect done." << std::endl;
+
+#ifdef __SAVE_RESULT__
+    // Rewrites "../imgs/xxx.jpg" to "../results/xxx.jpg"; this offset-based
+    // replace assumes exactly that input layout.
+    std::string save_path = img_name;
+    draw_bboxes(image, results, save_path.replace(3, 4, "results"));
+#else
+    draw_bboxes(image, results);
+    cv::waitKey(0);
+#endif
+  }
+  return 0;
 }
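`image_demo` writes its results next to the inputs by patching the path prefix in place; `replace(3, 4, "results")` only works for paths of exactly the form `../imgs/...`. A more layout-independent variant could derive the output path from the file name (hypothetical helper, assuming a `../results` directory exists):

```cpp
#include <string>

// Hypothetical: build "../results/<file>" from any input path.
std::string save_path_for(const std::string &img_path) {
  const size_t slash = img_path.find_last_of('/');
  const std::string name =
      slash == std::string::npos ? img_path : img_path.substr(slash + 1);
  return "../results/" + name;
}
```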
 
-int benchmark(PicoDet& detector)
-{
-    int loop_num = 100;
-    int warm_up = 8;
-
-    double time_min = DBL_MAX;
-    double time_max = -DBL_MAX;
-    double time_avg = 0;
-    ncnn::Mat input = ncnn::Mat(320, 320, 3);
-    input.fill(0.01f);
-    for (int i = 0; i < warm_up + loop_num; i++)
-    {
-        double start = ncnn::get_current_time();
-        ncnn::Extractor ex = detector.Net->create_extractor();
-        ex.input("image", input); // picodet
-        for (const auto& head_info : detector.heads_info)
-        {
-            ncnn::Mat dis_pred;
-            ncnn::Mat cls_pred;
-            ex.extract(head_info.dis_layer.c_str(), dis_pred);
-            ex.extract(head_info.cls_layer.c_str(), cls_pred);
-        }
-        double end = ncnn::get_current_time();
-
-        double time = end - start;
-        if (i >= warm_up)
-        {
-            time_min = (std::min)(time_min, time);
-            time_max = (std::max)(time_max, time);
-            time_avg += time;
-        }
+int benchmark(PicoDet &detector, int width, int height,
+              int has_postprocess = 0) {
+  int loop_num = 100;
+  int warm_up = 8;
+
+  double time_min = DBL_MAX;
+  double time_max = -DBL_MAX;
+  double time_avg = 0;
+  // cv::Mat takes (rows, cols): rows = height, cols = width.
+  cv::Mat image(height, width, CV_8UC3, cv::Scalar(1, 1, 1));
+  bool is_postprocess = has_postprocess > 0 ? true : false;
+  for (int i = 0; i < warm_up + loop_num; i++) {
+    double start = ncnn::get_current_time();
+    std::vector<BoxInfo> results;
+    detector.detect(image, results, is_postprocess);
+    double end = ncnn::get_current_time();
+
+    double time = end - start;
+    if (i >= warm_up) {
+      time_min = (std::min)(time_min, time);
+      time_max = (std::max)(time_max, time);
+      time_avg += time;
     }
-    time_avg /= loop_num;
-    fprintf(stderr, "%20s min = %7.2f max = %7.2f avg = %7.2f\n", "picodet", time_min, time_max, time_avg);
-    return 0;
+  }
+  time_avg /= loop_num;
+  fprintf(stderr, "%20s min = %7.2f max = %7.2f avg = %7.2f\n", "picodet",
+          time_min, time_max, time_avg);
+  return 0;
 }
-
-int main(int argc, char** argv)
-{
-    if (argc != 3)
-    {
-        fprintf(stderr, "usage: %s [mode] [path]. \n For webcam mode=0, path is cam id; \n For image demo, mode=1, path=xxx/xxx/*.jpg; \n For video, mode=2; \n For benchmark, mode=3 path=0.\n", argv[0]);
-        return -1;
-    }
-    PicoDet detector = PicoDet("../weight/picodet_m_416.param", "../weight/picodet_m_416.bin", true);
-    int mode = atoi(argv[1]);
-    switch (mode)
-    {
-    case 0:{
-        int cam_id = atoi(argv[2]);
-        webcam_demo(detector, cam_id);
-        break;
-    }
-    case 1:{
-        const char* images = argv[2];
-        image_demo(detector, images);
-        break;
-    }
-    case 2:{
-        const char* path = argv[2];
-        video_demo(detector, path);
-        break;
-    }
-    case 3:{
-        benchmark(detector);
-        break;
-    }
-    default:{
-        fprintf(stderr, "usage: %s [mode] [path]. \n For webcam mode=0, path is cam id; \n For image demo, mode=1, path=xxx/xxx/*.jpg; \n For video, mode=2; \n For benchmark, mode=3 path=0.\n", argv[0]);
-        break;
-    }
-    }
-}
+int main(int argc, char **argv) {
+  if (argc < 4) {
+    std::cout << "Usage: ./picodet_demo [mode] [bin_model] [param_model] "
+                 "[height] [width] [image_file] [has_postprocess]"
+              << std::endl;
+    return -1;
+  }
+  int mode = atoi(argv[1]);
+  char *bin_model_path = argv[2];
+  char *param_model_path = argv[3];
+  int height = 320;
+  int width = 320;
+  if (argc >= 6) {
+    height = atoi(argv[4]);
+    width = atoi(argv[5]);
+  }
+  PicoDet detector =
+      PicoDet(param_model_path, bin_model_path, width, height, true, 0.45, 0.3);
+  if (mode == 1) {
+    benchmark(detector, width, height, argc > 6 ? atoi(argv[6]) : 0);
+  } else {
+    if (argc < 8) {
+      std::cout << "Must set image file, such as ./picodet_demo 0 "
+                   "../picodet_s_320_coco_lcnet.bin "
+                   "../picodet_s_320_coco_lcnet.param 320 320 img.jpg 0"
+                << std::endl;
+      return -1;
+    }
+    const char *images = argv[6];
+    image_demo(detector, images, atoi(argv[7]));
+  }
+  return 0;
+}
diff --git a/deploy/third_engine/demo_ncnn/picodet.cpp b/deploy/third_engine/demo_ncnn/picodet.cpp
index c4dec46b2b927ad761b43eebef6bb2500dd31bc4..d5f0ba3c788b0813f85dc61e35ac543661212d1c 100644
--- a/deploy/third_engine/demo_ncnn/picodet.cpp
+++ b/deploy/third_engine/demo_ncnn/picodet.cpp
@@ -48,7 +48,9 @@ int activation_function_softmax(const _Tp *src, _Tp *dst, int length) {
 bool PicoDet::hasGPU = false;
 PicoDet *PicoDet::detector = nullptr;
 
-PicoDet::PicoDet(const char *param, const char *bin, bool useGPU) {
+PicoDet::PicoDet(const char *param, const char *bin, int input_width,
+                 int input_hight, bool useGPU, float score_threshold_ = 0.5,
+                 float nms_threshold_ = 0.3) {
   this->Net = new ncnn::Net();
 #if NCNN_VULKAN
   this->hasGPU = ncnn::get_gpu_count() > 0;
@@ -57,21 +59,28 @@
   this->Net->opt.use_fp16_arithmetic = true;
   this->Net->load_param(param);
   this->Net->load_model(bin);
+  this->in_w = input_width;
+  this->in_h = input_hight;
+  this->score_threshold = score_threshold_;
+  this->nms_threshold = nms_threshold_;
 }
 
 PicoDet::~PicoDet() { delete this->Net; }
 
 void PicoDet::preprocess(cv::Mat &image, ncnn::Mat &in) {
+  // cv::resize(image, image, cv::Size(this->in_w, this->in_h), 0.f, 0.f);
   int img_w = image.cols;
   int img_h = image.rows;
-  in = ncnn::Mat::from_pixels(image.data, ncnn::Mat::PIXEL_BGR, img_w, img_h);
+  in = ncnn::Mat::from_pixels_resize(image.data, ncnn::Mat::PIXEL_BGR, img_w,
+                                     img_h, this->in_w, this->in_h);
 
   const float mean_vals[3] = {103.53f, 116.28f, 123.675f};
   const float norm_vals[3] = {0.017429f, 0.017507f, 0.017125f};
   in.substract_mean_normalize(mean_vals, norm_vals);
 }
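`preprocess()` now resizes straight to the network input with `from_pixels_resize`, with no letterboxing, which is why `detect()` in the next hunk can map boxes back to the original frame with plain width and height ratios. A worked sketch of that mapping (the numbers are illustrative only):

```cpp
// Illustrative only: net input 320x320, original frame 640x480.
float in_w = 320.f, in_h = 320.f, image_w = 640.f, image_h = 480.f;
BoxInfo box{32.f, 32.f, 160.f, 160.f, 0.9f, 0};
box.x1 = box.x1 / in_w * image_w;  // 32 / 320 * 640 = 64
box.y1 = box.y1 / in_h * image_h;  // 32 / 320 * 480 = 48
box.x2 = box.x2 / in_w * image_w;  // 160 -> 320
box.y2 = box.y2 / in_h * image_h;  // 160 -> 240
```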
-std::vector<BoxInfo> PicoDet::detect(cv::Mat image, float score_threshold,
-                                     float nms_threshold) {
+int PicoDet::detect(cv::Mat image, std::vector<BoxInfo> &result_list,
+                    bool has_postprocess) {
+
   ncnn::Mat input;
   preprocess(image, input);
   auto ex = this->Net->create_extractor();
@@ -82,34 +91,76 @@ std::vector<BoxInfo> PicoDet::detect(cv::Mat image, float score_threshold,
 #endif
   ex.input("image", input); // picodet
 
+  this->image_h = image.rows;
+  this->image_w = image.cols;
+
   std::vector<std::vector<BoxInfo>> results;
   results.resize(this->num_class);
 
-  for (const auto &head_info : this->heads_info) {
+  if (has_postprocess) {
     ncnn::Mat dis_pred;
     ncnn::Mat cls_pred;
-    ex.extract(head_info.dis_layer.c_str(), dis_pred);
-    ex.extract(head_info.cls_layer.c_str(), cls_pred);
-    this->decode_infer(cls_pred, dis_pred, head_info.stride, score_threshold,
-                       results);
+    ex.extract(this->nms_heads_info[0].c_str(), dis_pred);
+    ex.extract(this->nms_heads_info[1].c_str(), cls_pred);
+    std::cout << dis_pred.h << " " << dis_pred.w << std::endl;
+    std::cout << cls_pred.h << " " << cls_pred.w << std::endl;
+    this->nms_boxes(cls_pred, dis_pred, this->score_threshold, results);
+  } else {
+    for (const auto &head_info : this->non_postprocess_heads_info) {
+      ncnn::Mat dis_pred;
+      ncnn::Mat cls_pred;
+      ex.extract(head_info.dis_layer.c_str(), dis_pred);
+      ex.extract(head_info.cls_layer.c_str(), cls_pred);
+      this->decode_infer(cls_pred, dis_pred, head_info.stride,
+                         this->score_threshold, results);
+    }
   }
 
-  std::vector<BoxInfo> dets;
   for (int i = 0; i < (int)results.size(); i++) {
-    this->nms(results[i], nms_threshold);
+    this->nms(results[i], this->nms_threshold);
 
     for (auto box : results[i]) {
-      dets.push_back(box);
+      // Map boxes from the network input size back to the original image.
+      box.x1 = box.x1 / this->in_w * this->image_w;
+      box.x2 = box.x2 / this->in_w * this->image_w;
+      box.y1 = box.y1 / this->in_h * this->image_h;
+      box.y2 = box.y2 / this->in_h * this->image_h;
+      result_list.push_back(box);
     }
   }
-  return dets;
+  return 0;
+}
+
+void PicoDet::nms_boxes(ncnn::Mat &cls_pred, ncnn::Mat &dis_pred,
+                        float score_threshold,
+                        std::vector<std::vector<BoxInfo>> &result_list) {
+  BoxInfo bbox;
+  for (int i = 0; i < dis_pred.h; i++) {
+    bbox.x1 = dis_pred.row(i)[0];
+    bbox.y1 = dis_pred.row(i)[1];
+    bbox.x2 = dis_pred.row(i)[2];
+    bbox.y2 = dis_pred.row(i)[3];
+    float score = 0;
+    int cur_label = 0;
+    // cls_pred is laid out one row per class, so the score of box i for
+    // `label` is cls_pred.row(label)[i].
+    for (int label = 0; label < this->num_class; label++) {
+      float score_ = cls_pred.row(label)[i];
+      if (score_ > score) {
+        score = score_;
+        cur_label = label;
+      }
+    }
+    bbox.score = score;
+    bbox.label = cur_label;
+    result_list[cur_label].push_back(bbox);
+  }
 }
 
 void PicoDet::decode_infer(ncnn::Mat &cls_pred, ncnn::Mat &dis_pred, int stride,
                            float threshold,
                            std::vector<std::vector<BoxInfo>> &results) {
-  int feature_h = ceil((float)this->input_size[1] / stride);
-  int feature_w = ceil((float)this->input_size[0] / stride);
+  int feature_h = ceil((float)this->in_h / stride);
+  int feature_w = ceil((float)this->in_w / stride);
 
   for (int idx = 0; idx < feature_h * feature_w; idx++) {
     const float *scores = cls_pred.row(idx);
@@ -151,8 +202,8 @@ BoxInfo PicoDet::disPred2Bbox(const float *&dfl_det, int label, float score,
   }
   float xmin = (std::max)(ct_x - dis_pred[0], .0f);
   float ymin = (std::max)(ct_y - dis_pred[1], .0f);
-  float xmax = (std::min)(ct_x + dis_pred[2], (float)this->input_size[0]);
-  float ymax = (std::min)(ct_y + dis_pred[3], (float)this->input_size[1]);
+  float xmax = (std::min)(ct_x + dis_pred[2], (float)this->in_w);
+  float ymax = (std::min)(ct_y + dis_pred[3], (float)this->in_h);
 
   return BoxInfo{xmin, ymin, xmax, ymax, score, label};
 }
diff --git a/deploy/third_engine/demo_ncnn/picodet.h b/deploy/third_engine/demo_ncnn/picodet.h
index dfb0967c99779567c5921efda00df75ddc8079c5..dd8c8f5af96aed9393e207b6e920259d95befbe7 100644
--- a/deploy/third_engine/demo_ncnn/picodet.h
+++ b/deploy/third_engine/demo_ncnn/picodet.h
@@ -16,66 +16,72 @@
 #ifndef PICODET_H
 #define PICODET_H
 
-#include <opencv2/core/core.hpp>
 #include <net.h>
+#include <opencv2/core/core.hpp>
 
-typedef struct HeadInfo
-{
-    std::string cls_layer;
-    std::string dis_layer;
-    int stride;
-};
+typedef struct NonPostProcessHeadInfo {
+  std::string cls_layer;
+  std::string dis_layer;
+  int stride;
+} NonPostProcessHeadInfo;
 
-typedef struct BoxInfo
-{
-    float x1;
-    float y1;
-    float x2;
-    float y2;
-    float score;
-    int label;
+typedef struct BoxInfo {
+  float x1;
+  float y1;
+  float x2;
+  float y2;
+  float score;
+  int label;
 } BoxInfo;
 
-class PicoDet
-{
+class PicoDet {
 public:
-    PicoDet(const char* param, const char* bin, bool useGPU);
-
-    ~PicoDet();
+  PicoDet(const char *param, const char *bin, int
input_width, int input_hight, + bool useGPU, float score_threshold_, float nms_threshold_); - static PicoDet* detector; - ncnn::Net* Net; - static bool hasGPU; + ~PicoDet(); - std::vector heads_info{ - // cls_pred|dis_pred|stride - {"save_infer_model/scale_0.tmp_1", "save_infer_model/scale_4.tmp_1", 8}, - {"save_infer_model/scale_1.tmp_1", "save_infer_model/scale_5.tmp_1", 16}, - {"save_infer_model/scale_2.tmp_1", "save_infer_model/scale_6.tmp_1", 32}, - {"save_infer_model/scale_3.tmp_1", "save_infer_model/scale_7.tmp_1", 64}, - }; + static PicoDet *detector; + ncnn::Net *Net; + static bool hasGPU; - std::vector detect(cv::Mat image, float score_threshold, float nms_threshold); + int detect(cv::Mat image, std::vector &result_list, + bool has_postprocess); - std::vector labels{ "person", "bicycle", "car", "motorcycle", "airplane", "bus", "train", "truck", "boat", "traffic light", - "fire hydrant", "stop sign", "parking meter", "bench", "bird", "cat", "dog", "horse", "sheep", "cow", - "elephant", "bear", "zebra", "giraffe", "backpack", "umbrella", "handbag", "tie", "suitcase", "frisbee", - "skis", "snowboard", "sports ball", "kite", "baseball bat", "baseball glove", "skateboard", "surfboard", - "tennis racket", "bottle", "wine glass", "cup", "fork", "knife", "spoon", "bowl", "banana", "apple", - "sandwich", "orange", "broccoli", "carrot", "hot dog", "pizza", "donut", "cake", "chair", "couch", - "potted plant", "bed", "dining table", "toilet", "tv", "laptop", "mouse", "remote", "keyboard", "cell phone", - "microwave", "oven", "toaster", "sink", "refrigerator", "book", "clock", "vase", "scissors", "teddy bear", - "hair drier", "toothbrush" }; private: - void preprocess(cv::Mat& image, ncnn::Mat& in); - void decode_infer(ncnn::Mat& cls_pred, ncnn::Mat& dis_pred, int stride, float threshold, std::vector>& results); - BoxInfo disPred2Bbox(const float*& dfl_det, int label, float score, int x, int y, int stride); - static void nms(std::vector& result, float nms_threshold); - int input_size[2] = {320, 320}; - int num_class = 80; - int reg_max = 7; + void preprocess(cv::Mat &image, ncnn::Mat &in); + void decode_infer(ncnn::Mat &cls_pred, ncnn::Mat &dis_pred, int stride, + float threshold, + std::vector> &results); + BoxInfo disPred2Bbox(const float *&dfl_det, int label, float score, int x, + int y, int stride); + static void nms(std::vector &result, float nms_threshold); + void nms_boxes(ncnn::Mat &cls_pred, ncnn::Mat &dis_pred, + float score_threshold, + std::vector> &result_list); -}; + int image_w; + int image_h; + int in_w = 320; + int in_h = 320; + int num_class = 80; + int reg_max = 7; + + float score_threshold; + float nms_threshold; + std::vector bbox_output_data_; + std::vector class_output_data_; + + std::vector nms_heads_info{"tmp_16", "concat_4.tmp_0"}; + // If not export post-process, will use non_postprocess_heads_info + std::vector non_postprocess_heads_info{ + // cls_pred|dis_pred|stride + {"transpose_0.tmp_0", "transpose_1.tmp_0", 8}, + {"transpose_2.tmp_0", "transpose_3.tmp_0", 16}, + {"transpose_4.tmp_0", "transpose_5.tmp_0", 32}, + {"transpose_6.tmp_0", "transpose_7.tmp_0", 64}, + }; +}; #endif diff --git a/deploy/third_engine/demo_ncnn/python/demo_ncnn.py b/deploy/third_engine/demo_ncnn/python/demo_ncnn.py deleted file mode 100644 index 492eb1e0dff46b3056219d6c33db505f668afb01..0000000000000000000000000000000000000000 --- a/deploy/third_engine/demo_ncnn/python/demo_ncnn.py +++ /dev/null @@ -1,808 +0,0 @@ -# Copyright (c) 2021 PaddlePaddle Authors. All Rights Reserved. 
- -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at - -# http://www.apache.org/licenses/LICENSE-2.0 - -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. -# reference from https://github.com/RangiLyu/nanodet/tree/main/demo_ncnn - -# -*- coding: utf-8 -*- -import argparse -from abc import ABCMeta, abstractmethod -from pathlib import Path - -import cv2 -import matplotlib.pyplot as plt -import numpy as np -from scipy.special import softmax -from tqdm import tqdm - -_COLORS = (np.array([ - 0.000, - 0.447, - 0.741, - 0.850, - 0.325, - 0.098, - 0.929, - 0.694, - 0.125, - 0.494, - 0.184, - 0.556, - 0.466, - 0.674, - 0.188, - 0.301, - 0.745, - 0.933, - 0.635, - 0.078, - 0.184, - 0.300, - 0.300, - 0.300, - 0.600, - 0.600, - 0.600, - 1.000, - 0.000, - 0.000, - 1.000, - 0.500, - 0.000, - 0.749, - 0.749, - 0.000, - 0.000, - 1.000, - 0.000, - 0.000, - 0.000, - 1.000, - 0.667, - 0.000, - 1.000, - 0.333, - 0.333, - 0.000, - 0.333, - 0.667, - 0.000, - 0.333, - 1.000, - 0.000, - 0.667, - 0.333, - 0.000, - 0.667, - 0.667, - 0.000, - 0.667, - 1.000, - 0.000, - 1.000, - 0.333, - 0.000, - 1.000, - 0.667, - 0.000, - 1.000, - 1.000, - 0.000, - 0.000, - 0.333, - 0.500, - 0.000, - 0.667, - 0.500, - 0.000, - 1.000, - 0.500, - 0.333, - 0.000, - 0.500, - 0.333, - 0.333, - 0.500, - 0.333, - 0.667, - 0.500, - 0.333, - 1.000, - 0.500, - 0.667, - 0.000, - 0.500, - 0.667, - 0.333, - 0.500, - 0.667, - 0.667, - 0.500, - 0.667, - 1.000, - 0.500, - 1.000, - 0.000, - 0.500, - 1.000, - 0.333, - 0.500, - 1.000, - 0.667, - 0.500, - 1.000, - 1.000, - 0.500, - 0.000, - 0.333, - 1.000, - 0.000, - 0.667, - 1.000, - 0.000, - 1.000, - 1.000, - 0.333, - 0.000, - 1.000, - 0.333, - 0.333, - 1.000, - 0.333, - 0.667, - 1.000, - 0.333, - 1.000, - 1.000, - 0.667, - 0.000, - 1.000, - 0.667, - 0.333, - 1.000, - 0.667, - 0.667, - 1.000, - 0.667, - 1.000, - 1.000, - 1.000, - 0.000, - 1.000, - 1.000, - 0.333, - 1.000, - 1.000, - 0.667, - 1.000, - 0.333, - 0.000, - 0.000, - 0.500, - 0.000, - 0.000, - 0.667, - 0.000, - 0.000, - 0.833, - 0.000, - 0.000, - 1.000, - 0.000, - 0.000, - 0.000, - 0.167, - 0.000, - 0.000, - 0.333, - 0.000, - 0.000, - 0.500, - 0.000, - 0.000, - 0.667, - 0.000, - 0.000, - 0.833, - 0.000, - 0.000, - 1.000, - 0.000, - 0.000, - 0.000, - 0.167, - 0.000, - 0.000, - 0.333, - 0.000, - 0.000, - 0.500, - 0.000, - 0.000, - 0.667, - 0.000, - 0.000, - 0.833, - 0.000, - 0.000, - 1.000, - 0.000, - 0.000, - 0.000, - 0.143, - 0.143, - 0.143, - 0.286, - 0.286, - 0.286, - 0.429, - 0.429, - 0.429, - 0.571, - 0.571, - 0.571, - 0.714, - 0.714, - 0.714, - 0.857, - 0.857, - 0.857, - 0.000, - 0.447, - 0.741, - 0.314, - 0.717, - 0.741, - 0.50, - 0.5, - 0, -]).astype(np.float32).reshape(-1, 3)) - - -def get_resize_matrix(raw_shape, dst_shape, keep_ratio): - """ - Get resize matrix for resizing raw img to input size - :param raw_shape: (width, height) of raw image - :param dst_shape: (width, height) of input image - :param keep_ratio: whether keep original ratio - :return: 3x3 Matrix - """ - r_w, r_h = raw_shape - d_w, d_h = dst_shape - Rs = np.eye(3) - if keep_ratio: - C = np.eye(3) - C[0, 2] = -r_w / 2 - C[1, 2] = -r_h / 2 - - if r_w / r_h < d_w / d_h: - ratio 
= d_h / r_h - else: - ratio = d_w / r_w - Rs[0, 0] *= ratio - Rs[1, 1] *= ratio - - T = np.eye(3) - T[0, 2] = 0.5 * d_w - T[1, 2] = 0.5 * d_h - return T @Rs @C - else: - Rs[0, 0] *= d_w / r_w - Rs[1, 1] *= d_h / r_h - return Rs - - -def warp_boxes(boxes, M, width, height): - """Apply transform to boxes - Copy from picodet/data/transform/warp.py - """ - n = len(boxes) - if n: - # warp points - xy = np.ones((n * 4, 3)) - xy[:, :2] = boxes[:, [0, 1, 2, 3, 0, 3, 2, 1]].reshape( - n * 4, 2) # x1y1, x2y2, x1y2, x2y1 - xy = xy @M.T # transform - xy = (xy[:, :2] / xy[:, 2:3]).reshape(n, 8) # rescale - # create new boxes - x = xy[:, [0, 2, 4, 6]] - y = xy[:, [1, 3, 5, 7]] - xy = np.concatenate( - (x.min(1), y.min(1), x.max(1), y.max(1))).reshape(4, n).T - # clip boxes - xy[:, [0, 2]] = xy[:, [0, 2]].clip(0, width) - xy[:, [1, 3]] = xy[:, [1, 3]].clip(0, height) - return xy.astype(np.float32) - else: - return boxes - - -def overlay_bbox_cv(img, all_box, class_names): - """Draw result boxes - Copy from picodet/util/visualization.py - """ - all_box.sort(key=lambda v: v[5]) - for box in all_box: - label, x0, y0, x1, y1, score = box - color = (_COLORS[label] * 255).astype(np.uint8).tolist() - text = "{}:{:.1f}%".format(class_names[label], score * 100) - txt_color = (0, 0, 0) if np.mean(_COLORS[label]) > 0.5 else (255, 255, - 255) - font = cv2.FONT_HERSHEY_SIMPLEX - txt_size = cv2.getTextSize(text, font, 0.5, 2)[0] - cv2.rectangle(img, (x0, y0), (x1, y1), color, 2) - - cv2.rectangle( - img, - (x0, y0 - txt_size[1] - 1), - (x0 + txt_size[0] + txt_size[1], y0 - 1), - color, - -1, ) - cv2.putText(img, text, (x0, y0 - 1), font, 0.5, txt_color, thickness=1) - return img - - -def hard_nms(box_scores, iou_threshold, top_k=-1, candidate_size=200): - """ - - Args: - box_scores (N, 5): boxes in corner-form and probabilities. - iou_threshold: intersection over union threshold. - top_k: keep top_k results. If k <= 0, keep all the results. - candidate_size: only consider the candidates with the highest scores. - Returns: - picked: a list of indexes of the kept boxes - """ - scores = box_scores[:, -1] - boxes = box_scores[:, :-1] - picked = [] - indexes = np.argsort(scores) - indexes = indexes[-candidate_size:] - while len(indexes) > 0: - current = indexes[-1] - picked.append(current) - if 0 < top_k == len(picked) or len(indexes) == 1: - break - current_box = boxes[current, :] - indexes = indexes[:-1] - rest_boxes = boxes[indexes, :] - iou = iou_of( - rest_boxes, - np.expand_dims( - current_box, axis=0), ) - indexes = indexes[iou <= iou_threshold] - - return box_scores[picked, :] - - -def iou_of(boxes0, boxes1, eps=1e-5): - """Return intersection-over-union (Jaccard index) of boxes. - - Args: - boxes0 (N, 4): ground truth boxes. - boxes1 (N or 1, 4): predicted boxes. - eps: a small number to avoid 0 as denominator. - Returns: - iou (N): IoU values. - """ - overlap_left_top = np.maximum(boxes0[..., :2], boxes1[..., :2]) - overlap_right_bottom = np.minimum(boxes0[..., 2:], boxes1[..., 2:]) - - overlap_area = area_of(overlap_left_top, overlap_right_bottom) - area0 = area_of(boxes0[..., :2], boxes0[..., 2:]) - area1 = area_of(boxes1[..., :2], boxes1[..., 2:]) - return overlap_area / (area0 + area1 - overlap_area + eps) - - -def area_of(left_top, right_bottom): - """Compute the areas of rectangles given two corners. - - Args: - left_top (N, 2): left top corner. - right_bottom (N, 2): right bottom corner. - - Returns: - area (N): return the area. 
- """ - hw = np.clip(right_bottom - left_top, 0.0, None) - return hw[..., 0] * hw[..., 1] - - -class picodetABC(metaclass=ABCMeta): - def __init__( - self, - input_shape=[320, 320], - reg_max=7, - strides=[8, 16, 32], - prob_threshold=0.4, - iou_threshold=0.3, - num_candidate=1000, - top_k=-1, ): - self.strides = strides - self.input_shape = input_shape - self.reg_max = reg_max - self.prob_threshold = prob_threshold - self.iou_threshold = iou_threshold - self.num_candidate = num_candidate - self.top_k = top_k - self.img_mean = [103.53, 116.28, 123.675] - self.img_std = [57.375, 57.12, 58.395] - self.input_size = (self.input_shape[1], self.input_shape[0]) - self.class_names = [ - "person", - "bicycle", - "car", - "motorcycle", - "airplane", - "bus", - "train", - "truck", - "boat", - "traffic_light", - "fire_hydrant", - "stop_sign", - "parking_meter", - "bench", - "bird", - "cat", - "dog", - "horse", - "sheep", - "cow", - "elephant", - "bear", - "zebra", - "giraffe", - "backpack", - "umbrella", - "handbag", - "tie", - "suitcase", - "frisbee", - "skis", - "snowboard", - "sports_ball", - "kite", - "baseball_bat", - "baseball_glove", - "skateboard", - "surfboard", - "tennis_racket", - "bottle", - "wine_glass", - "cup", - "fork", - "knife", - "spoon", - "bowl", - "banana", - "apple", - "sandwich", - "orange", - "broccoli", - "carrot", - "hot_dog", - "pizza", - "donut", - "cake", - "chair", - "couch", - "potted_plant", - "bed", - "dining_table", - "toilet", - "tv", - "laptop", - "mouse", - "remote", - "keyboard", - "cell_phone", - "microwave", - "oven", - "toaster", - "sink", - "refrigerator", - "book", - "clock", - "vase", - "scissors", - "teddy_bear", - "hair_drier", - "toothbrush", - ] - - def preprocess(self, img): - # resize image - ResizeM = get_resize_matrix((img.shape[1], img.shape[0]), - self.input_size, True) - img_resize = cv2.warpPerspective(img, ResizeM, dsize=self.input_size) - # normalize image - img_input = img_resize.astype(np.float32) / 255 - img_mean = np.array( - self.img_mean, dtype=np.float32).reshape(1, 1, 3) / 255 - img_std = np.array( - self.img_std, dtype=np.float32).reshape(1, 1, 3) / 255 - img_input = (img_input - img_mean) / img_std - # expand dims - img_input = np.transpose(img_input, [2, 0, 1]) - img_input = np.expand_dims(img_input, axis=0) - return img_input, ResizeM - - def postprocess(self, scores, raw_boxes, ResizeM, raw_shape): - # generate centers - decode_boxes = [] - select_scores = [] - for stride, box_distribute, score in zip(self.strides, raw_boxes, - scores): - # centers - fm_h = self.input_shape[0] / stride - fm_w = self.input_shape[1] / stride - h_range = np.arange(fm_h) - w_range = np.arange(fm_w) - ww, hh = np.meshgrid(w_range, h_range) - ct_row = (hh.flatten() + 0.5) * stride - ct_col = (ww.flatten() + 0.5) * stride - center = np.stack((ct_col, ct_row, ct_col, ct_row), axis=1) - - # box distribution to distance - reg_range = np.arange(self.reg_max + 1) - box_distance = box_distribute.reshape((-1, self.reg_max + 1)) - box_distance = softmax(box_distance, axis=1) - box_distance = box_distance * np.expand_dims(reg_range, axis=0) - box_distance = np.sum(box_distance, axis=1).reshape((-1, 4)) - box_distance = box_distance * stride - - # top K candidate - topk_idx = np.argsort(score.max(axis=1))[::-1] - topk_idx = topk_idx[:self.num_candidate] - center = center[topk_idx] - score = score[topk_idx] - box_distance = box_distance[topk_idx] - - # decode box - decode_box = center + [-1, -1, 1, 1] * box_distance - - select_scores.append(score) - 
decode_boxes.append(decode_box) - - # nms - bboxes = np.concatenate(decode_boxes, axis=0) - confidences = np.concatenate(select_scores, axis=0) - picked_box_probs = [] - picked_labels = [] - for class_index in range(0, confidences.shape[1]): - probs = confidences[:, class_index] - mask = probs > self.prob_threshold - probs = probs[mask] - if probs.shape[0] == 0: - continue - subset_boxes = bboxes[mask, :] - box_probs = np.concatenate( - [subset_boxes, probs.reshape(-1, 1)], axis=1) - box_probs = hard_nms( - box_probs, - iou_threshold=self.iou_threshold, - top_k=self.top_k, ) - picked_box_probs.append(box_probs) - picked_labels.extend([class_index] * box_probs.shape[0]) - if not picked_box_probs: - return np.array([]), np.array([]), np.array([]) - picked_box_probs = np.concatenate(picked_box_probs) - - # resize output boxes - picked_box_probs[:, :4] = warp_boxes(picked_box_probs[:, :4], - np.linalg.inv(ResizeM), - raw_shape[1], raw_shape[0]) - return ( - picked_box_probs[:, :4].astype(np.int32), - np.array(picked_labels), - picked_box_probs[:, 4], ) - - @abstractmethod - def infer_image(self, img_input): - pass - - def detect(self, img): - raw_shape = img.shape - img_input, ResizeM = self.preprocess(img) - scores, raw_boxes = self.infer_image(img_input) - if scores[0].ndim == 1: # handling num_classes=1 case - scores = [x[:, None] for x in scores] - bbox, label, score = self.postprocess(scores, raw_boxes, ResizeM, - raw_shape) - return bbox, label, score - - def draw_box(self, raw_img, bbox, label, score): - img = raw_img.copy() - all_box = [[x, ] + y + [z, ] - for x, y, z in zip(label, bbox.tolist(), score)] - img_draw = overlay_bbox_cv(img, all_box, self.class_names) - return img_draw - - def detect_folder(self, img_fold, result_path): - img_fold = Path(img_fold) - result_path = Path(result_path) - result_path.mkdir(parents=True, exist_ok=True) - - img_name_list = filter( - lambda x: str(x).endswith(".png") or str(x).endswith(".jpg"), - img_fold.iterdir(), ) - img_name_list = list(img_name_list) - print(f"find {len(img_name_list)} images") - - for img_path in tqdm(img_name_list): - img = cv2.imread(str(img_path)) - bbox, label, score = self.detect(img) - img_draw = self.draw_box(img, bbox, label, score) - save_path = str(result_path / img_path.name.replace(".png", ".jpg")) - cv2.imwrite(save_path, img_draw) - - -class picodetONNX(picodetABC): - def __init__(self, model_path, *args, **kwargs): - import onnxruntime as ort - - super(picodetONNX, self).__init__(*args, **kwargs) - print("Using ONNX as inference backend") - print(f"Using weight: {model_path}") - - # load model - self.model_path = model_path - self.ort_session = ort.InferenceSession(self.model_path) - self.input_name = self.ort_session.get_inputs()[0].name - - def infer_image(self, img_input): - inference_results = self.ort_session.run(None, - {self.input_name: img_input}) - scores = [np.squeeze(x) for x in inference_results[:3]] - raw_boxes = [np.squeeze(x) for x in inference_results[3:]] - return scores, raw_boxes - - -class picodetTorch(picodetABC): - def __init__(self, model_path, cfg_path, *args, **kwargs): - import torch - - from picodet.model.arch import build_model - from picodet.util import Logger, cfg, load_config, load_model_weight - - super(picodetTorch, self).__init__(*args, **kwargs) - print("Using PyTorch as inference backend") - print(f"Using weight: {model_path}") - - # load model - self.model_path = model_path - self.cfg_path = cfg_path - load_config(cfg, cfg_path) - self.logger = Logger(-1, cfg.save_dir, 
False) - self.model = build_model(cfg.model) - checkpoint = torch.load( - model_path, map_location=lambda storage, loc: storage) - load_model_weight(self.model, checkpoint, self.logger) - - def infer_image(self, img_input): - import torch - - self.model.train(False) - with torch.no_grad(): - inference_results = self.model(torch.from_numpy(img_input)) - scores = [ - x.permute(0, 2, 3, 1).reshape((-1, 80)).sigmoid().detach().numpy() - for x in inference_results[0] - ] - raw_boxes = [ - x.permute(0, 2, 3, 1).reshape((-1, 32)).detach().numpy() - for x in inference_results[1] - ] - return scores, raw_boxes - - -class picodetNCNN(picodetABC): - def __init__(self, model_param, model_bin, *args, **kwargs): - import ncnn - - super(picodetNCNN, self).__init__(*args, **kwargs) - print("Using ncnn as inference backend") - print(f"Using param: {model_param}, bin: {model_bin}") - - # load model - self.model_param = model_param - self.model_bin = model_bin - - self.net = ncnn.Net() - self.net.load_param(model_param) - self.net.load_model(model_bin) - self.input_name = "input.1" - - def infer_image(self, img_input): - import ncnn - - mat_in = ncnn.Mat(img_input.squeeze()) - ex = self.net.create_extractor() - ex.input(self.input_name, mat_in) - - score_out_name = [ - "save_infer_model/scale_0.tmp_1", "save_infer_model/scale_1.tmp_1", - "save_infer_model/scale_2.tmp_1", "save_infer_model/scale_3.tmp_1" - ] - scores = [np.array(ex.extract(x)[1]) for x in score_out_name] - scores = [np.reshape(x, (-1, 80)) for x in scores] - - boxes_out_name = [ - "save_infer_model/scale_4.tmp_1", "save_infer_model/scale_5.tmp_1", - "save_infer_model/scale_6.tmp_1", "save_infer_model/scale_7.tmp_1" - ] - raw_boxes = [np.array(ex.extract(x)[1]) for x in boxes_out_name] - raw_boxes = [np.reshape(x, (-1, 32)) for x in raw_boxes] - - return scores, raw_boxes - - -def main(): - parser = argparse.ArgumentParser() - parser.add_argument( - "--model_path", - dest="model_path", - type=str, - default="../model/picodet.param") - parser.add_argument( - "--model_bin", - dest="model_bin", - type=str, - default="../model/picodet.bin") - parser.add_argument( - "--cfg_path", dest="cfg_path", type=str, default="config/picodet.yml") - parser.add_argument( - "--img_fold", dest="img_fold", type=str, default="../imgs") - parser.add_argument( - "--result_fold", dest="result_fold", type=str, default="../results") - parser.add_argument( - "--input_shape", - dest="input_shape", - nargs=2, - type=int, - default=[320, 320]) - parser.add_argument( - "--backend", choices=["ncnn", "ONNX", "torch"], default="ncnn") - args = parser.parse_args() - - print(f"Detecting {args.img_fold}") - - # load detector - if args.backend == "ncnn": - detector = picodetNCNN( - args.model_path, args.model_bin, input_shape=args.input_shape) - elif args.backend == "ONNX": - detector = picodetONNX(args.model_path, input_shape=args.input_shape) - elif args.backend == "torch": - detector = picodetTorch( - args.model_path, args.cfg_path, input_shape=args.input_shape) - else: - raise ValueError - - # detect folder - detector.detect_folder(args.img_fold, args.result_fold) - - -def test_one(): - detector = picodetNCNN("../weight/picodet_m_416.param", - "../weight/picodet_m_416.bin") - img = cv2.imread("../000000000102.jpg") - bbox, label, score = detector.detect(img) - img_draw = detector.draw_box(img, bbox, label, score) - img_out = img_draw[..., ::-1] - cv2.imwrite('python_version.jpg', img_out) - - -if __name__ == "__main__": - # main() - test_one() diff --git 
a/deploy/third_engine/demo_onnxruntime/README.md b/deploy/third_engine/demo_onnxruntime/README.md
new file mode 100644
index 0000000000000000000000000000000000000000..bdf7a9432f3e35499c616524b031a27cb2e99fc4
--- /dev/null
+++ b/deploy/third_engine/demo_onnxruntime/README.md
@@ -0,0 +1,43 @@
+# PicoDet ONNX Runtime Demo
+
+This folder provides a demo for deploying PicoDet and running image inference with [ONNX Runtime](https://onnxruntime.ai/docs/).
+
+## Install ONNX Runtime
+
+This demo is built against ONNX Runtime 1.10.0, which can be installed directly with:
+```shell
+pip install onnxruntime==1.10.0
+```
+
+For detailed installation steps, see [Install ONNX Runtime](https://onnxruntime.ai/docs/install/).
+
+## Inference on images
+
+- Prepare the test model: following the "Export and convert model" steps in [PicoDet](https://github.com/PaddlePaddle/PaddleDetection/tree/develop/configs/picodet), export the model **with** post-processing (`-o export.benchmark=False`) and generate the simplified ONNX model (it can also be downloaded directly from the links below). Then create an ```onnx_file``` folder in this directory and place the exported ONNX model in it.
+
+- Prepare the test images: put the images to be tested in the ```./imgs``` folder; this demo already provides two test images.
+
+- Run directly from this directory (a condensed Python sketch of this pipeline follows the results below):
+  ```shell
+  python infer_demo.py --modelpath ./onnx_file/picodet_s_320_lcnet_postprocessed.onnx
+  ```
+  This detects every image under ```./imgs``` and saves the results to the ```./results``` folder.
+
+- Results:
+  (visualization of the detection results on the two demo images)
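+
+For orientation, the snippet below is a condensed sketch of what `infer_demo.py` does with the exported model. It is not part of the demo itself: it assumes the post-processed 320*320 export, whose graph declares some subset of the inputs `image`, `im_shape`, and `scale_factor` and already contains NMS, and the 0.5 score filter simply mirrors the demo default.
+
+```python
+import cv2
+import numpy as np
+import onnxruntime as ort
+
+sess = ort.InferenceSession("onnx_file/picodet_s_320_lcnet_postprocessed.onnx")
+img = cv2.imread("imgs/bus.jpg")
+h, w = 320, 320  # must match the exported input size
+
+# normalize with the same mean/std as infer_demo.py, then HWC -> NCHW
+mean = np.array([103.53, 116.28, 123.675], dtype=np.float32)
+std = np.array([57.375, 57.12, 58.395], dtype=np.float32)
+blob = (cv2.resize(img, (w, h)).astype(np.float32) - mean) / std
+blob = blob.transpose(2, 0, 1)[None]
+
+inputs = {
+    "image": blob,
+    "im_shape": np.array([[h, w]], dtype=np.float32),
+    "scale_factor": np.array(
+        [[h / img.shape[0], w / img.shape[1]]], dtype=np.float32),
+}
+# feed only the inputs this particular export actually declares
+feeds = {i.name: inputs[i.name] for i in sess.get_inputs()}
+boxes = sess.run(None, feeds)[0]  # rows: [class_id, score, x1, y1, x2, y2]
+print(boxes[boxes[:, 1] > 0.5])
+```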
+## Model download
+
+| Model | Input size | ONNX (w/ postprocessing) |
+| :-------- | :--------: | :---------------------: |
+| PicoDet-XS | 320*320 | [model](https://paddledet.bj.bcebos.com/deploy/third_engine/picodet_xs_320_lcnet_postprocessed.onnx) |
+| PicoDet-XS | 416*416 | [model](https://paddledet.bj.bcebos.com/deploy/third_engine/picodet_xs_416_lcnet_postprocessed.onnx) |
+| PicoDet-S | 320*320 | [model](https://paddledet.bj.bcebos.com/deploy/third_engine/picodet_s_320_lcnet_postprocessed.onnx) |
+| PicoDet-S | 416*416 | [model](https://paddledet.bj.bcebos.com/deploy/third_engine/picodet_s_416_lcnet_postprocessed.onnx) |
+| PicoDet-M | 320*320 | [model](https://paddledet.bj.bcebos.com/deploy/third_engine/picodet_m_320_lcnet_postprocessed.onnx) |
+| PicoDet-M | 416*416 | [model](https://paddledet.bj.bcebos.com/deploy/third_engine/picodet_m_416_lcnet_postprocessed.onnx) |
+| PicoDet-L | 320*320 | [model](https://paddledet.bj.bcebos.com/deploy/third_engine/picodet_l_320_lcnet_postprocessed.onnx) |
+| PicoDet-L | 416*416 | [model](https://paddledet.bj.bcebos.com/deploy/third_engine/picodet_l_416_lcnet_postprocessed.onnx) |
+| PicoDet-L | 640*640 | [model](https://paddledet.bj.bcebos.com/deploy/third_engine/picodet_l_640_lcnet_postprocessed.onnx) |
diff --git a/static/deploy/lite/coco_label_list.txt b/deploy/third_engine/demo_onnxruntime/coco_label.txt
similarity index 88%
rename from static/deploy/lite/coco_label_list.txt
rename to deploy/third_engine/demo_onnxruntime/coco_label.txt
index 1f42c8eb44628f95b2f4067de928a7f5c1e9c8dc..ca76c80b5b2cd0b25047f75736656cfebc9da7aa 100644
--- a/static/deploy/lite/coco_label_list.txt
+++ b/deploy/third_engine/demo_onnxruntime/coco_label.txt
@@ -1,8 +1,8 @@
 person
 bicycle
 car
-motorcycle
-airplane
+motorbike
+aeroplane
 bus
 train
 truck
@@ -55,12 +55,12 @@ pizza
 donut
 cake
 chair
-couch
-potted plant
+sofa
+pottedplant
 bed
-dining table
+diningtable
 toilet
-tv
+tvmonitor
 laptop
 mouse
 remote
@@ -77,4 +77,4 @@ vase
 scissors
 teddy bear
 hair drier
-toothbrush
\ No newline at end of file
+toothbrush
diff --git a/deploy/third_engine/demo_onnxruntime/imgs/bus.jpg b/deploy/third_engine/demo_onnxruntime/imgs/bus.jpg
new file mode 100644
index 0000000000000000000000000000000000000000..b43e311165c785f000eb7493ff8fb662d06a3f83
Binary files /dev/null and b/deploy/third_engine/demo_onnxruntime/imgs/bus.jpg differ
diff --git a/deploy/third_engine/demo_onnxruntime/imgs/dog.jpg b/deploy/third_engine/demo_onnxruntime/imgs/dog.jpg
new file mode 100644
index 0000000000000000000000000000000000000000..77b0381222eaed50867643f4166092c781e56d5b
Binary files /dev/null and b/deploy/third_engine/demo_onnxruntime/imgs/dog.jpg differ
diff --git a/deploy/third_engine/demo_onnxruntime/infer_demo.py b/deploy/third_engine/demo_onnxruntime/infer_demo.py
new file mode 100644
index 0000000000000000000000000000000000000000..41f407097828fa099a43831d4200193ba91557be
--- /dev/null
+++ b/deploy/third_engine/demo_onnxruntime/infer_demo.py
@@ -0,0 +1,208 @@
+# Copyright (c) 2022 PaddlePaddle Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and +# limitations under the License. + +import cv2 +import numpy as np +import argparse +import onnxruntime as ort +from pathlib import Path +from tqdm import tqdm + + +class PicoDet(): + def __init__(self, + model_pb_path, + label_path, + prob_threshold=0.4, + iou_threshold=0.3): + self.classes = list( + map(lambda x: x.strip(), open(label_path, 'r').readlines())) + self.num_classes = len(self.classes) + self.prob_threshold = prob_threshold + self.iou_threshold = iou_threshold + self.mean = np.array( + [103.53, 116.28, 123.675], dtype=np.float32).reshape(1, 1, 3) + self.std = np.array( + [57.375, 57.12, 58.395], dtype=np.float32).reshape(1, 1, 3) + so = ort.SessionOptions() + so.log_severity_level = 3 + self.net = ort.InferenceSession(model_pb_path, so) + inputs_name = [a.name for a in self.net.get_inputs()] + inputs_shape = { + k: v.shape + for k, v in zip(inputs_name, self.net.get_inputs()) + } + self.input_shape = inputs_shape['image'][2:] + + def _normalize(self, img): + img = img.astype(np.float32) + img = (img / 255.0 - self.mean / 255.0) / (self.std / 255.0) + return img + + def resize_image(self, srcimg, keep_ratio=False): + top, left, newh, neww = 0, 0, self.input_shape[0], self.input_shape[1] + origin_shape = srcimg.shape[:2] + im_scale_y = newh / float(origin_shape[0]) + im_scale_x = neww / float(origin_shape[1]) + img_shape = np.array([ + [float(self.input_shape[0]), float(self.input_shape[1])] + ]).astype('float32') + scale_factor = np.array([[im_scale_y, im_scale_x]]).astype('float32') + + if keep_ratio and srcimg.shape[0] != srcimg.shape[1]: + hw_scale = srcimg.shape[0] / srcimg.shape[1] + if hw_scale > 1: + newh, neww = self.input_shape[0], int(self.input_shape[1] / + hw_scale) + img = cv2.resize( + srcimg, (neww, newh), interpolation=cv2.INTER_AREA) + left = int((self.input_shape[1] - neww) * 0.5) + img = cv2.copyMakeBorder( + img, + 0, + 0, + left, + self.input_shape[1] - neww - left, + cv2.BORDER_CONSTANT, + value=0) # add border + else: + newh, neww = int(self.input_shape[0] * + hw_scale), self.input_shape[1] + img = cv2.resize( + srcimg, (neww, newh), interpolation=cv2.INTER_AREA) + top = int((self.input_shape[0] - newh) * 0.5) + img = cv2.copyMakeBorder( + img, + top, + self.input_shape[0] - newh - top, + 0, + 0, + cv2.BORDER_CONSTANT, + value=0) + else: + img = cv2.resize( + srcimg, self.input_shape, interpolation=cv2.INTER_AREA) + + return img, img_shape, scale_factor + + def get_color_map_list(self, num_classes): + color_map = num_classes * [0, 0, 0] + for i in range(0, num_classes): + j = 0 + lab = i + while lab: + color_map[i * 3] |= (((lab >> 0) & 1) << (7 - j)) + color_map[i * 3 + 1] |= (((lab >> 1) & 1) << (7 - j)) + color_map[i * 3 + 2] |= (((lab >> 2) & 1) << (7 - j)) + j += 1 + lab >>= 3 + color_map = [color_map[i:i + 3] for i in range(0, len(color_map), 3)] + return color_map + + def detect(self, srcimg): + img, im_shape, scale_factor = self.resize_image(srcimg) + img = self._normalize(img) + + blob = np.expand_dims(np.transpose(img, (2, 0, 1)), axis=0) + + inputs_dict = { + 'im_shape': im_shape, + 'image': blob, + 'scale_factor': scale_factor + } + inputs_name = [a.name for a in self.net.get_inputs()] + net_inputs = {k: inputs_dict[k] for k in inputs_name} + + outs = self.net.run(None, net_inputs) + + outs = np.array(outs[0]) + expect_boxes = (outs[:, 1] > 0.5) & (outs[:, 0] > -1) + np_boxes = outs[expect_boxes, :] + + color_list = self.get_color_map_list(self.num_classes) + clsid2color 
= {}
+
+        for i in range(np_boxes.shape[0]):
+            classid, conf = int(np_boxes[i, 0]), np_boxes[i, 1]
+            xmin, ymin, xmax, ymax = int(np_boxes[i, 2]), int(np_boxes[
+                i, 3]), int(np_boxes[i, 4]), int(np_boxes[i, 5])
+
+            if classid not in clsid2color:
+                clsid2color[classid] = color_list[classid]
+            color = tuple(clsid2color[classid])
+
+            cv2.rectangle(
+                srcimg, (xmin, ymin), (xmax, ymax), color, thickness=2)
+            print(self.classes[classid] + ': ' + str(round(conf, 3)))
+            cv2.putText(
+                srcimg,
+                self.classes[classid] + ':' + str(round(conf, 3)),
+                (xmin, ymin - 10),
+                cv2.FONT_HERSHEY_SIMPLEX,
+                0.8, (0, 255, 0),
+                thickness=2)
+
+        return srcimg
+
+    def detect_folder(self, img_fold, result_path):
+        img_fold = Path(img_fold)
+        result_path = Path(result_path)
+        result_path.mkdir(parents=True, exist_ok=True)
+
+        img_name_list = filter(
+            lambda x: str(x).endswith(".png") or str(x).endswith(".jpg"),
+            img_fold.iterdir(), )
+        img_name_list = list(img_name_list)
+        print(f"found {len(img_name_list)} images")
+
+        for img_path in tqdm(img_name_list):
+            img = cv2.imread(str(img_path))
+
+            # use self rather than the global `net` so the method also works
+            # on instances created outside the __main__ block below
+            srcimg = self.detect(img)
+            save_path = str(result_path / img_path.name.replace(".png", ".jpg"))
+            cv2.imwrite(save_path, srcimg)
+
+
+if __name__ == '__main__':
+    parser = argparse.ArgumentParser()
+    parser.add_argument(
+        '--modelpath',
+        type=str,
+        default='onnx_file/picodet_s_320_lcnet_postprocessed.onnx',
+        help="onnx filepath")
+    parser.add_argument(
+        '--classfile',
+        type=str,
+        default='coco_label.txt',
+        help="classname filepath")
+    parser.add_argument(
+        '--confThreshold', default=0.5, type=float, help='class confidence')
+    parser.add_argument(
+        '--nmsThreshold', default=0.6, type=float, help='nms iou thresh')
+    parser.add_argument(
+        "--img_fold", dest="img_fold", type=str, default="./imgs")
+    parser.add_argument(
+        "--result_fold", dest="result_fold", type=str, default="results")
+    args = parser.parse_args()
+
+    net = PicoDet(
+        args.modelpath,
+        args.classfile,
+        prob_threshold=args.confThreshold,
+        iou_threshold=args.nmsThreshold)
+
+    net.detect_folder(args.img_fold, args.result_fold)
+    print(
+        f'infer results in ./deploy/third_engine/demo_onnxruntime/{args.result_fold}'
+    )
diff --git a/deploy/third_engine/demo_openvino/picodet_openvino.h b/deploy/third_engine/demo_openvino/picodet_openvino.h
index 9871184dd7ab15cc6d758c4f105aab2152cba9ea..2a5bced16a3c3d57096adbdfa263b634c74377db 100644
--- a/deploy/third_engine/demo_openvino/picodet_openvino.h
+++ b/deploy/third_engine/demo_openvino/picodet_openvino.h
@@ -13,66 +13,63 @@
 // reference from https://github.com/RangiLyu/nanodet/tree/main/demo_openvino
-
 #ifndef _PICODET_OPENVINO_H_
 #define _PICODET_OPENVINO_H_
-#include <string>
-#include <opencv2/core.hpp>
 #include <inference_engine.hpp>
+#include <opencv2/core.hpp>
+#include <string>
 #define image_size 416
-
-typedef struct HeadInfo
-{
-    std::string cls_layer;
-    std::string dis_layer;
-    int stride;
+typedef struct HeadInfo {
+  std::string cls_layer;
+  std::string dis_layer;
+  int stride;
 } HeadInfo;
-typedef struct BoxInfo
-{
-    float x1;
-    float y1;
-    float x2;
-    float y2;
-    float score;
-    int label;
+typedef struct BoxInfo {
+  float x1;
+  float y1;
+  float x2;
+  float y2;
+  float score;
+  int label;
 } BoxInfo;
-class PicoDet
-{
+class PicoDet {
 public:
-    PicoDet(const char* param);
+  PicoDet(const char *param);
-    ~PicoDet();
+  ~PicoDet();
-    InferenceEngine::ExecutableNetwork network_;
-    InferenceEngine::InferRequest infer_request_;
-    // static bool hasGPU;
+  InferenceEngine::ExecutableNetwork network_;
+  InferenceEngine::InferRequest infer_request_;
+  // static bool hasGPU;
-    std::vector<HeadInfo> heads_info_{
-        // cls_pred|dis_pred|stride
-        {"save_infer_model/scale_0.tmp_1", "save_infer_model/scale_4.tmp_1", 8},
-        {"save_infer_model/scale_1.tmp_1", "save_infer_model/scale_5.tmp_1", 16},
-        {"save_infer_model/scale_2.tmp_1", "save_infer_model/scale_6.tmp_1", 32},
-        {"save_infer_model/scale_3.tmp_1", "save_infer_model/scale_7.tmp_1", 64},
-    };
+  std::vector<HeadInfo> heads_info_{
+      // cls_pred|dis_pred|stride
+      {"transpose_0.tmp_0", "transpose_1.tmp_0", 8},
+      {"transpose_2.tmp_0", "transpose_3.tmp_0", 16},
+      {"transpose_4.tmp_0", "transpose_5.tmp_0", 32},
+      {"transpose_6.tmp_0", "transpose_7.tmp_0", 64},
+  };
-    std::vector<BoxInfo> detect(cv::Mat image, float score_threshold, float nms_threshold);
+  std::vector<BoxInfo> detect(cv::Mat image, float score_threshold,
+                              float nms_threshold);
 private:
-    void preprocess(cv::Mat& image, InferenceEngine::Blob::Ptr& blob);
-    void decode_infer(const float*& cls_pred, const float*& dis_pred, int stride, float threshold, std::vector<std::vector<BoxInfo>>& results);
-    BoxInfo disPred2Bbox(const float*& dfl_det, int label, float score, int x, int y, int stride);
-    static void nms(std::vector<BoxInfo>& result, float nms_threshold);
-    std::string input_name_;
-    int input_size_ = image_size;
-    int num_class_ = 80;
-    int reg_max_ = 7;
-
+  void preprocess(cv::Mat &image, InferenceEngine::Blob::Ptr &blob);
+  void decode_infer(const float *&cls_pred, const float *&dis_pred, int stride,
+                    float threshold,
+                    std::vector<std::vector<BoxInfo>> &results);
+  BoxInfo disPred2Bbox(const float *&dfl_det, int label, float score, int x,
+                       int y, int stride);
+  static void nms(std::vector<BoxInfo> &result, float nms_threshold);
+  std::string input_name_;
+  int input_size_ = image_size;
+  int num_class_ = 80;
+  int reg_max_ = 7;
 };
-
 #endif
diff --git a/deploy/third_engine/demo_openvino/python/README.md b/deploy/third_engine/demo_openvino/python/README.md
new file mode 100644
index 0000000000000000000000000000000000000000..1862417db48882b02459dd3b2a425758473f09f2
--- /dev/null
+++ b/deploy/third_engine/demo_openvino/python/README.md
@@ -0,0 +1,75 @@
+# PicoDet OpenVINO Benchmark Demo
+
+This folder provides a benchmark demo for measuring PicoDet inference speed with [Intel's OpenVINO Toolkit](https://software.intel.com/content/www/us/en/develop/tools/openvino-toolkit.html), plus an inference demo for the model exported with post-processing.
+
+## Install the OpenVINO Toolkit
+
+Go to the [OpenVINO HomePage](https://software.intel.com/content/www/us/en/develop/tools/openvino-toolkit.html), then download and install the matching version.
+
+This demo uses OpenVINO 2022.1.0, which can be installed directly with:
+```shell
+pip install openvino==2022.1.0
+```
+
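+As a quick sanity check (not part of the demo scripts themselves), you can confirm that the Python runtime sees a CPU device:
+
+```python
+from openvino.runtime import Core
+
+core = Core()
+print(core.available_devices)    # expect at least ['CPU']
+print(core.get_versions("CPU"))  # plugin/runtime version info
+```
+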
+For detailed installation steps, see the [OpenVINO documentation](https://docs.openvinotoolkit.org/latest/get_started_guides.html).
+
+## Benchmark test
+
+- Prepare the test model: following the "Export and convert model" steps in [PicoDet](https://github.com/PaddlePaddle/PaddleDetection/tree/develop/configs/picodet), export the model **without** post-processing (`-o export.benchmark=True`) and generate the simplified ONNX model (it can also be downloaded directly from the links below). Then create an ```out_onnxsim``` folder in this directory and place the exported ONNX model in it.
+
+- Prepare the test image: by default this demo uses PaddleDetection/demo/[000000014439.jpg](https://github.com/PaddlePaddle/PaddleDetection/blob/develop/demo/000000014439.jpg).
+
+- Run directly from this directory:
+
+```shell
+# Linux
+python openvino_benchmark.py --img_path ../../../../demo/000000014439.jpg --onnx_path out_onnxsim/picodet_s_320_coco_lcnet.onnx --in_shape 320
+# Windows
+python openvino_benchmark.py --img_path ..\..\..\..\demo\000000014439.jpg --onnx_path out_onnxsim\picodet_s_320_coco_lcnet.onnx --in_shape 320
+```
+- Note: ```--in_shape``` is the input size of the corresponding model; it defaults to 320.
+
+## Real-image test (network with post-processing but without NMS)
+
+- Prepare the test model: following the "Export and convert model" steps in [PicoDet](https://github.com/PaddlePaddle/PaddleDetection/tree/develop/configs/picodet), export the model **with post-processing** but **without NMS** (`-o export.benchmark=False export.nms=False`) and generate the simplified ONNX model (it can also be downloaded directly from the links below). Then create an ```out_onnxsim_infer``` folder in this directory and place the exported ONNX model in it.
+
+- Prepare the test image: by default this demo uses ../../demo_onnxruntime/imgs/[bus.jpg](https://github.com/PaddlePaddle/PaddleDetection/blob/develop/deploy/third_engine/demo_onnxruntime/imgs/bus.jpg).
+
+```shell
+# Linux
+python openvino_infer.py --img_path ../../demo_onnxruntime/imgs/bus.jpg --onnx_path out_onnxsim_infer/picodet_s_320_postproccesed_woNMS.onnx --in_shape 320
+# Windows
+python openvino_infer.py --img_path ..\..\demo_onnxruntime\imgs\bus.jpg --onnx_path out_onnxsim_infer\picodet_s_320_postproccesed_woNMS.onnx --in_shape 320
+```
+
+### Real-image test (network without post-processing)
+
+```shell
+# Linux
+python openvino_benchmark.py --benchmark 0 --img_path ../../../../demo/000000014439.jpg --onnx_path out_onnxsim/picodet_s_320_coco_lcnet.onnx --in_shape 320
+# Windows
+python openvino_benchmark.py --benchmark 0 --img_path ..\..\..\..\demo\000000014439.jpg --onnx_path out_onnxsim\picodet_s_320_coco_lcnet.onnx --in_shape 320
+```
+
+- Results:
+  (visualized detection results saved by the scripts above)
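+
+Because these exports stop before NMS, both scripts finish detection on the CPU side with a plain NumPy hard-NMS pass (the `hard_nms` helper added later in this patch). The toy example below, with made-up boxes and a simplified loop, illustrates that suppression step in isolation:
+
+```python
+import numpy as np
+
+def toy_nms(boxes, scores, iou_thr=0.5):
+    # greedy hard NMS: keep the highest-scoring box, drop overlaps, repeat
+    order = scores.argsort()[::-1]
+    keep = []
+    while order.size > 0:
+        i = order[0]
+        keep.append(int(i))
+        lt = np.maximum(boxes[i, :2], boxes[order[1:], :2])
+        rb = np.minimum(boxes[i, 2:], boxes[order[1:], 2:])
+        inter = np.clip(rb - lt, 0, None).prod(axis=1)
+        areas = lambda b: (b[:, 2] - b[:, 0]) * (b[:, 3] - b[:, 1])
+        iou = inter / (areas(boxes[i:i + 1]) + areas(boxes[order[1:]]) - inter)
+        order = order[1:][iou <= iou_thr]
+    return keep
+
+boxes = np.array([[10, 10, 50, 50], [12, 12, 52, 52], [80, 80, 120, 120]],
+                 dtype=np.float32)
+scores = np.array([0.9, 0.8, 0.7], dtype=np.float32)
+print(toy_nms(boxes, scores))  # -> [0, 2]; the near-duplicate box is dropped
+```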
+## Benchmark results
+
+- The measured latencies are as follows:
+
+| Model | Input size | ONNX (w/ postprocessing; w/o NMS) | ONNX (w/o postprocessing) | Latency [CPU](#latency) |
+| :-------- | :--------: | :---------------------: | :---------------------: | :----------------: |
+| PicoDet-XS | 320*320 | [model](https://paddledet.bj.bcebos.com/deploy/third_engine/picodet_xs_320_lcnet_postproccesed_woNMS.onnx) | [model](https://paddledet.bj.bcebos.com/deploy/third_engine/picodet_xs_320_coco_lcnet.onnx) | 3.9ms |
+| PicoDet-XS | 416*416 | [model](https://paddledet.bj.bcebos.com/deploy/third_engine/picodet_xs_416_lcnet_postproccesed_woNMS.onnx) | [model](https://paddledet.bj.bcebos.com/deploy/third_engine/picodet_xs_416_coco_lcnet.onnx) | 6.1ms |
+| PicoDet-S | 320*320 | [model](https://paddledet.bj.bcebos.com/deploy/third_engine/picodet_s_320_lcnet_postproccesed_woNMS.onnx) | [model](https://paddledet.bj.bcebos.com/deploy/third_engine/picodet_s_320_coco_lcnet.onnx) | 4.8ms |
+| PicoDet-S | 416*416 | [model](https://paddledet.bj.bcebos.com/deploy/third_engine/picodet_s_416_lcnet_postproccesed_woNMS.onnx) | [model](https://paddledet.bj.bcebos.com/deploy/third_engine/picodet_s_416_coco_lcnet.onnx) | 6.6ms |
+| PicoDet-M | 320*320 | [model](https://paddledet.bj.bcebos.com/deploy/third_engine/picodet_m_320_lcnet_postproccesed_woNMS.onnx) | [model](https://paddledet.bj.bcebos.com/deploy/third_engine/picodet_m_320_coco_lcnet.onnx) | 8.2ms |
+| PicoDet-M | 416*416 | [model](https://paddledet.bj.bcebos.com/deploy/third_engine/picodet_m_416_lcnet_postproccesed_woNMS.onnx) | [model](https://paddledet.bj.bcebos.com/deploy/third_engine/picodet_m_416_coco_lcnet.onnx) | 12.7ms |
+| PicoDet-L | 320*320 | [model](https://paddledet.bj.bcebos.com/deploy/third_engine/picodet_l_320_lcnet_postproccesed_woNMS.onnx) | [model](https://paddledet.bj.bcebos.com/deploy/third_engine/picodet_l_320_coco_lcnet.onnx) | 11.5ms |
+| PicoDet-L | 416*416 | [model](https://paddledet.bj.bcebos.com/deploy/third_engine/picodet_l_416_lcnet_postproccesed_woNMS.onnx) | [model](https://paddledet.bj.bcebos.com/deploy/third_engine/picodet_l_416_coco_lcnet.onnx) | 20.7ms |
+| PicoDet-L | 640*640 | [model](https://paddledet.bj.bcebos.com/deploy/third_engine/picodet_l_640_lcnet_postproccesed_woNMS.onnx) | [model](https://paddledet.bj.bcebos.com/deploy/third_engine/picodet_l_640_coco_lcnet.onnx) | 62.5ms |
+
+- Test environment: Intel Core i7-10750H CPU.
diff --git a/static/deploy/android_demo/app/src/main/assets/labels/coco-labels-2014_2017.txt b/deploy/third_engine/demo_openvino/python/coco_label.txt
similarity index 88%
rename from static/deploy/android_demo/app/src/main/assets/labels/coco-labels-2014_2017.txt
rename to deploy/third_engine/demo_openvino/python/coco_label.txt
index 1f42c8eb44628f95b2f4067de928a7f5c1e9c8dc..ca76c80b5b2cd0b25047f75736656cfebc9da7aa 100644
--- a/static/deploy/android_demo/app/src/main/assets/labels/coco-labels-2014_2017.txt
+++ b/deploy/third_engine/demo_openvino/python/coco_label.txt
@@ -1,8 +1,8 @@
 person
 bicycle
 car
-motorcycle
-airplane
+motorbike
+aeroplane
 bus
 train
 truck
@@ -55,12 +55,12 @@ pizza
 donut
 cake
 chair
-couch
-potted plant
+sofa
+pottedplant
 bed
-dining table
+diningtable
 toilet
-tv
+tvmonitor
 laptop
 mouse
 remote
@@ -77,4 +77,4 @@ vase
 scissors
 teddy bear
 hair drier
-toothbrush
\ No newline at end of file
+toothbrush
diff --git a/deploy/third_engine/demo_openvino/python/openvino_benchmark.py b/deploy/third_engine/demo_openvino/python/openvino_benchmark.py
new file mode 100644
index
0000000000000000000000000000000000000000..f21a8d5d1ed83c159818d2b405d1b5c9e5daa927 --- /dev/null +++ b/deploy/third_engine/demo_openvino/python/openvino_benchmark.py @@ -0,0 +1,365 @@ +# Copyright (c) 2022 PaddlePaddle Authors. All Rights Reserved. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +import cv2 +import numpy as np +import time +import argparse +from scipy.special import softmax +from openvino.runtime import Core + + +def image_preprocess(img_path, re_shape): + img = cv2.imread(img_path) + img = cv2.resize( + img, (re_shape, re_shape), interpolation=cv2.INTER_LANCZOS4) + img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB) + img = np.transpose(img, [2, 0, 1]) / 255 + img = np.expand_dims(img, 0) + img_mean = np.array([0.485, 0.456, 0.406]).reshape((3, 1, 1)) + img_std = np.array([0.229, 0.224, 0.225]).reshape((3, 1, 1)) + img -= img_mean + img /= img_std + return img.astype(np.float32) + + +def draw_box(img, results, class_label, scale_x, scale_y): + + label_list = list( + map(lambda x: x.strip(), open(class_label, 'r').readlines())) + + for i in range(len(results)): + print(label_list[int(results[i][0])], ':', results[i][1]) + bbox = results[i, 2:] + label_id = int(results[i, 0]) + score = results[i, 1] + if (score > 0.20): + xmin, ymin, xmax, ymax = [ + int(bbox[0] * scale_x), int(bbox[1] * scale_y), + int(bbox[2] * scale_x), int(bbox[3] * scale_y) + ] + cv2.rectangle(img, (xmin, ymin), (xmax, ymax), (0, 255, 0), 3) + font = cv2.FONT_HERSHEY_SIMPLEX + label_text = label_list[label_id] + cv2.rectangle(img, (xmin, ymin), (xmax, ymin - 60), (0, 255, 0), -1) + cv2.putText(img, "#" + label_text, (xmin, ymin - 10), font, 1, + (255, 255, 255), 2, cv2.LINE_AA) + cv2.putText(img, + str(round(score, 3)), (xmin, ymin - 40), font, 0.8, + (255, 255, 255), 2, cv2.LINE_AA) + return img + + +def hard_nms(box_scores, iou_threshold, top_k=-1, candidate_size=200): + """ + Args: + box_scores (N, 5): boxes in corner-form and probabilities. + iou_threshold: intersection over union threshold. + top_k: keep top_k results. If k <= 0, keep all the results. + candidate_size: only consider the candidates with the highest scores. + Returns: + picked: a list of indexes of the kept boxes + """ + scores = box_scores[:, -1] + boxes = box_scores[:, :-1] + picked = [] + indexes = np.argsort(scores) + indexes = indexes[-candidate_size:] + while len(indexes) > 0: + current = indexes[-1] + picked.append(current) + if 0 < top_k == len(picked) or len(indexes) == 1: + break + current_box = boxes[current, :] + indexes = indexes[:-1] + rest_boxes = boxes[indexes, :] + iou = iou_of( + rest_boxes, + np.expand_dims( + current_box, axis=0), ) + indexes = indexes[iou <= iou_threshold] + + return box_scores[picked, :] + + +def iou_of(boxes0, boxes1, eps=1e-5): + """Return intersection-over-union (Jaccard index) of boxes. + Args: + boxes0 (N, 4): ground truth boxes. + boxes1 (N or 1, 4): predicted boxes. + eps: a small number to avoid 0 as denominator. + Returns: + iou (N): IoU values. 
+ """ + overlap_left_top = np.maximum(boxes0[..., :2], boxes1[..., :2]) + overlap_right_bottom = np.minimum(boxes0[..., 2:], boxes1[..., 2:]) + + overlap_area = area_of(overlap_left_top, overlap_right_bottom) + area0 = area_of(boxes0[..., :2], boxes0[..., 2:]) + area1 = area_of(boxes1[..., :2], boxes1[..., 2:]) + return overlap_area / (area0 + area1 - overlap_area + eps) + + +def area_of(left_top, right_bottom): + """Compute the areas of rectangles given two corners. + Args: + left_top (N, 2): left top corner. + right_bottom (N, 2): right bottom corner. + Returns: + area (N): return the area. + """ + hw = np.clip(right_bottom - left_top, 0.0, None) + return hw[..., 0] * hw[..., 1] + + +class PicoDetPostProcess(object): + """ + Args: + input_shape (int): network input image size + ori_shape (int): ori image shape of before padding + scale_factor (float): scale factor of ori image + enable_mkldnn (bool): whether to open MKLDNN + """ + + def __init__(self, + input_shape, + ori_shape, + scale_factor, + strides=[8, 16, 32, 64], + score_threshold=0.4, + nms_threshold=0.5, + nms_top_k=1000, + keep_top_k=100): + self.ori_shape = ori_shape + self.input_shape = input_shape + self.scale_factor = scale_factor + self.strides = strides + self.score_threshold = score_threshold + self.nms_threshold = nms_threshold + self.nms_top_k = nms_top_k + self.keep_top_k = keep_top_k + + def warp_boxes(self, boxes, ori_shape): + """Apply transform to boxes + """ + width, height = ori_shape[1], ori_shape[0] + n = len(boxes) + if n: + # warp points + xy = np.ones((n * 4, 3)) + xy[:, :2] = boxes[:, [0, 1, 2, 3, 0, 3, 2, 1]].reshape( + n * 4, 2) # x1y1, x2y2, x1y2, x2y1 + # xy = xy @ M.T # transform + xy = (xy[:, :2] / xy[:, 2:3]).reshape(n, 8) # rescale + # create new boxes + x = xy[:, [0, 2, 4, 6]] + y = xy[:, [1, 3, 5, 7]] + xy = np.concatenate( + (x.min(1), y.min(1), x.max(1), y.max(1))).reshape(4, n).T + # clip boxes + xy[:, [0, 2]] = xy[:, [0, 2]].clip(0, width) + xy[:, [1, 3]] = xy[:, [1, 3]].clip(0, height) + return xy.astype(np.float32) + else: + return boxes + + def __call__(self, scores, raw_boxes): + batch_size = raw_boxes[0].shape[0] + reg_max = int(raw_boxes[0].shape[-1] / 4 - 1) + out_boxes_num = [] + out_boxes_list = [] + for batch_id in range(batch_size): + # generate centers + decode_boxes = [] + select_scores = [] + for stride, box_distribute, score in zip(self.strides, raw_boxes, + scores): + box_distribute = box_distribute[batch_id] + score = score[batch_id] + # centers + fm_h = self.input_shape[0] / stride + fm_w = self.input_shape[1] / stride + h_range = np.arange(fm_h) + w_range = np.arange(fm_w) + ww, hh = np.meshgrid(w_range, h_range) + ct_row = (hh.flatten() + 0.5) * stride + ct_col = (ww.flatten() + 0.5) * stride + center = np.stack((ct_col, ct_row, ct_col, ct_row), axis=1) + + # box distribution to distance + reg_range = np.arange(reg_max + 1) + box_distance = box_distribute.reshape((-1, reg_max + 1)) + box_distance = softmax(box_distance, axis=1) + box_distance = box_distance * np.expand_dims(reg_range, axis=0) + box_distance = np.sum(box_distance, axis=1).reshape((-1, 4)) + box_distance = box_distance * stride + + # top K candidate + topk_idx = np.argsort(score.max(axis=1))[::-1] + topk_idx = topk_idx[:self.nms_top_k] + center = center[topk_idx] + score = score[topk_idx] + box_distance = box_distance[topk_idx] + + # decode box + decode_box = center + [-1, -1, 1, 1] * box_distance + + select_scores.append(score) + decode_boxes.append(decode_box) + + # nms + bboxes = 
np.concatenate(decode_boxes, axis=0) + confidences = np.concatenate(select_scores, axis=0) + picked_box_probs = [] + picked_labels = [] + for class_index in range(0, confidences.shape[1]): + probs = confidences[:, class_index] + mask = probs > self.score_threshold + probs = probs[mask] + if probs.shape[0] == 0: + continue + subset_boxes = bboxes[mask, :] + box_probs = np.concatenate( + [subset_boxes, probs.reshape(-1, 1)], axis=1) + box_probs = hard_nms( + box_probs, + iou_threshold=self.nms_threshold, + top_k=self.keep_top_k, ) + picked_box_probs.append(box_probs) + picked_labels.extend([class_index] * box_probs.shape[0]) + + if len(picked_box_probs) == 0: + out_boxes_list.append(np.empty((0, 4))) + out_boxes_num.append(0) + + else: + picked_box_probs = np.concatenate(picked_box_probs) + + # resize output boxes + picked_box_probs[:, :4] = self.warp_boxes( + picked_box_probs[:, :4], self.ori_shape[batch_id]) + im_scale = np.concatenate([ + self.scale_factor[batch_id][::-1], + self.scale_factor[batch_id][::-1] + ]) + picked_box_probs[:, :4] /= im_scale + # clas score box + out_boxes_list.append( + np.concatenate( + [ + np.expand_dims( + np.array(picked_labels), + axis=-1), np.expand_dims( + picked_box_probs[:, 4], axis=-1), + picked_box_probs[:, :4] + ], + axis=1)) + out_boxes_num.append(len(picked_labels)) + + out_boxes_list = np.concatenate(out_boxes_list, axis=0) + out_boxes_num = np.asarray(out_boxes_num).astype(np.int32) + return out_boxes_list, out_boxes_num + + +def detect(img_file, compiled_model, re_shape, class_label): + output = compiled_model.infer_new_request({0: test_image}) + result_ie = list(output.values()) #[0] + + test_im_shape = np.array([[re_shape, re_shape]]).astype('float32') + test_scale_factor = np.array([[1, 1]]).astype('float32') + + np_score_list = [] + np_boxes_list = [] + + num_outs = int(len(result_ie) / 2) + for out_idx in range(num_outs): + np_score_list.append(result_ie[out_idx]) + np_boxes_list.append(result_ie[out_idx + num_outs]) + + postprocess = PicoDetPostProcess(test_image.shape[2:], test_im_shape, + test_scale_factor) + + np_boxes, np_boxes_num = postprocess(np_score_list, np_boxes_list) + + image = cv2.imread(img_file, 1) + scale_x = image.shape[1] / test_image.shape[3] + scale_y = image.shape[0] / test_image.shape[2] + res_image = draw_box(image, np_boxes, class_label, scale_x, scale_y) + + cv2.imwrite('res.jpg', res_image) + cv2.imshow("res", res_image) + cv2.waitKey() + + +def benchmark(test_image, compiled_model): + + # benchmark + loop_num = 100 + warm_up = 8 + timeall = 0 + time_min = float("inf") + time_max = float('-inf') + + for i in range(loop_num + warm_up): + time0 = time.time() + #perform the inference step + + output = compiled_model.infer_new_request({0: test_image}) + time1 = time.time() + timed = time1 - time0 + + if i >= warm_up: + timeall = timeall + timed + time_min = min(time_min, timed) + time_max = max(time_max, timed) + + time_avg = timeall / loop_num + + print('inference_time(ms): min={}, max={}, avg={}'.format( + round(time_min * 1000, 2), + round(time_max * 1000, 1), round(time_avg * 1000, 1))) + + +if __name__ == '__main__': + + parser = argparse.ArgumentParser() + parser.add_argument( + '--benchmark', type=int, default=1, help="0:detect; 1:benchmark") + parser.add_argument( + '--img_path', + type=str, + default='../../../../demo/000000014439.jpg', + help="image path") + parser.add_argument( + '--onnx_path', + type=str, + default='out_onnxsim/picodet_s_320_processed.onnx', + help="onnx filepath") + 
parser.add_argument('--in_shape', type=int, default=320, help="input_size") + parser.add_argument( + '--class_label', + type=str, + default='coco_label.txt', + help="class label file") + args = parser.parse_args() + + ie = Core() + net = ie.read_model(args.onnx_path) + test_image = image_preprocess(args.img_path, args.in_shape) + compiled_model = ie.compile_model(net, 'CPU') + + if args.benchmark == 0: + detect(args.img_path, compiled_model, args.in_shape, args.class_label) + if args.benchmark == 1: + benchmark(test_image, compiled_model) diff --git a/deploy/third_engine/demo_openvino/python/openvino_infer.py b/deploy/third_engine/demo_openvino/python/openvino_infer.py new file mode 100644 index 0000000000000000000000000000000000000000..0ad51022b1793e7b6430025a7c71cc0de7658c8c --- /dev/null +++ b/deploy/third_engine/demo_openvino/python/openvino_infer.py @@ -0,0 +1,267 @@ +# Copyright (c) 2022 PaddlePaddle Authors. All Rights Reserved. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +import cv2 +import numpy as np +import argparse +from scipy.special import softmax +from openvino.runtime import Core + + +def image_preprocess(img_path, re_shape): + img = cv2.imread(img_path) + img = cv2.resize( + img, (re_shape, re_shape), interpolation=cv2.INTER_LANCZOS4) + img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB) + img = np.transpose(img, [2, 0, 1]) / 255 + img = np.expand_dims(img, 0) + img_mean = np.array([0.485, 0.456, 0.406]).reshape((3, 1, 1)) + img_std = np.array([0.229, 0.224, 0.225]).reshape((3, 1, 1)) + img -= img_mean + img /= img_std + return img.astype(np.float32) + + +def get_color_map_list(num_classes): + color_map = num_classes * [0, 0, 0] + for i in range(0, num_classes): + j = 0 + lab = i + while lab: + color_map[i * 3] |= (((lab >> 0) & 1) << (7 - j)) + color_map[i * 3 + 1] |= (((lab >> 1) & 1) << (7 - j)) + color_map[i * 3 + 2] |= (((lab >> 2) & 1) << (7 - j)) + j += 1 + lab >>= 3 + color_map = [color_map[i:i + 3] for i in range(0, len(color_map), 3)] + return color_map + + +def draw_box(srcimg, results, class_label): + label_list = list( + map(lambda x: x.strip(), open(class_label, 'r').readlines())) + for i in range(len(results)): + color_list = get_color_map_list(len(label_list)) + clsid2color = {} + classid, conf = int(results[i, 0]), results[i, 1] + xmin, ymin, xmax, ymax = int(results[i, 2]), int(results[i, 3]), int( + results[i, 4]), int(results[i, 5]) + + if classid not in clsid2color: + clsid2color[classid] = color_list[classid] + color = tuple(clsid2color[classid]) + + cv2.rectangle(srcimg, (xmin, ymin), (xmax, ymax), color, thickness=2) + print(label_list[classid] + ': ' + str(round(conf, 3))) + cv2.putText( + srcimg, + label_list[classid] + ':' + str(round(conf, 3)), (xmin, ymin - 10), + cv2.FONT_HERSHEY_SIMPLEX, + 0.8, (0, 255, 0), + thickness=2) + return srcimg + + +def hard_nms(box_scores, iou_threshold, top_k=-1, candidate_size=200): + """ + Args: + box_scores (N, 5): boxes in corner-form and probabilities. + iou_threshold: intersection over union threshold. 
+ top_k: keep top_k results. If k <= 0, keep all the results. + candidate_size: only consider the candidates with the highest scores. + Returns: + picked: a list of indexes of the kept boxes + """ + scores = box_scores[:, -1] + boxes = box_scores[:, :-1] + picked = [] + indexes = np.argsort(scores) + indexes = indexes[-candidate_size:] + while len(indexes) > 0: + current = indexes[-1] + picked.append(current) + if 0 < top_k == len(picked) or len(indexes) == 1: + break + current_box = boxes[current, :] + indexes = indexes[:-1] + rest_boxes = boxes[indexes, :] + iou = iou_of( + rest_boxes, + np.expand_dims( + current_box, axis=0), ) + indexes = indexes[iou <= iou_threshold] + + return box_scores[picked, :] + + +def iou_of(boxes0, boxes1, eps=1e-5): + """Return intersection-over-union (Jaccard index) of boxes. + Args: + boxes0 (N, 4): ground truth boxes. + boxes1 (N or 1, 4): predicted boxes. + eps: a small number to avoid 0 as denominator. + Returns: + iou (N): IoU values. + """ + overlap_left_top = np.maximum(boxes0[..., :2], boxes1[..., :2]) + overlap_right_bottom = np.minimum(boxes0[..., 2:], boxes1[..., 2:]) + + overlap_area = area_of(overlap_left_top, overlap_right_bottom) + area0 = area_of(boxes0[..., :2], boxes0[..., 2:]) + area1 = area_of(boxes1[..., :2], boxes1[..., 2:]) + return overlap_area / (area0 + area1 - overlap_area + eps) + + +def area_of(left_top, right_bottom): + """Compute the areas of rectangles given two corners. + Args: + left_top (N, 2): left top corner. + right_bottom (N, 2): right bottom corner. + Returns: + area (N): return the area. + """ + hw = np.clip(right_bottom - left_top, 0.0, None) + return hw[..., 0] * hw[..., 1] + + +class PicoDetNMS(object): + """ + Args: + input_shape (int): network input image size + scale_factor (float): scale factor of ori image + """ + + def __init__(self, + input_shape, + scale_x, + scale_y, + strides=[8, 16, 32, 64], + score_threshold=0.4, + nms_threshold=0.5, + nms_top_k=1000, + keep_top_k=100): + self.input_shape = input_shape + self.scale_x = scale_x + self.scale_y = scale_y + self.strides = strides + self.score_threshold = score_threshold + self.nms_threshold = nms_threshold + self.nms_top_k = nms_top_k + self.keep_top_k = keep_top_k + + def __call__(self, decode_boxes, select_scores): + batch_size = 1 + out_boxes_list = [] + for batch_id in range(batch_size): + # nms + bboxes = np.concatenate(decode_boxes, axis=0) + confidences = np.concatenate(select_scores, axis=0) + picked_box_probs = [] + picked_labels = [] + for class_index in range(0, confidences.shape[1]): + probs = confidences[:, class_index] + mask = probs > self.score_threshold + probs = probs[mask] + if probs.shape[0] == 0: + continue + subset_boxes = bboxes[mask, :] + box_probs = np.concatenate( + [subset_boxes, probs.reshape(-1, 1)], axis=1) + box_probs = hard_nms( + box_probs, + iou_threshold=self.nms_threshold, + top_k=self.keep_top_k, ) + picked_box_probs.append(box_probs) + picked_labels.extend([class_index] * box_probs.shape[0]) + + if len(picked_box_probs) == 0: + out_boxes_list.append(np.empty((0, 4))) + + else: + picked_box_probs = np.concatenate(picked_box_probs) + + # resize output boxes + picked_box_probs[:, 0] *= self.scale_x + picked_box_probs[:, 2] *= self.scale_x + picked_box_probs[:, 1] *= self.scale_y + picked_box_probs[:, 3] *= self.scale_y + + # clas score box + out_boxes_list.append( + np.concatenate( + [ + np.expand_dims( + np.array(picked_labels), + axis=-1), np.expand_dims( + picked_box_probs[:, 4], axis=-1), + picked_box_probs[:, :4] + ], 
+                        axis=1))
+
+        out_boxes_list = np.concatenate(out_boxes_list, axis=0)
+        return out_boxes_list
+
+
+def detect(img_file, compiled_model, class_label):
+    output = compiled_model.infer_new_request({0: test_image})
+    result_ie = list(output.values())
+
+    decode_boxes = []
+    select_scores = []
+    num_outs = int(len(result_ie) / 2)
+    for out_idx in range(num_outs):
+        decode_boxes.append(result_ie[out_idx])
+        select_scores.append(result_ie[out_idx + num_outs])
+
+    image = cv2.imread(img_file, 1)
+    scale_x = image.shape[1] / test_image.shape[3]
+    scale_y = image.shape[0] / test_image.shape[2]
+
+    nms = PicoDetNMS(test_image.shape[2:], scale_x, scale_y)
+    np_boxes = nms(decode_boxes, select_scores)
+
+    res_image = draw_box(image, np_boxes, class_label)
+
+    cv2.imwrite('res.jpg', res_image)
+    cv2.imshow("res", res_image)
+    cv2.waitKey()
+
+
+if __name__ == '__main__':
+
+    parser = argparse.ArgumentParser()
+    parser.add_argument(
+        '--img_path',
+        type=str,
+        default='../../demo_onnxruntime/imgs/bus.jpg',
+        help="image path")
+    parser.add_argument(
+        '--onnx_path',
+        type=str,
+        default='out_onnxsim_infer/picodet_s_320_postproccesed_woNMS.onnx',
+        help="onnx filepath")
+    parser.add_argument('--in_shape', type=int, default=320, help="input_size")
+    parser.add_argument(
+        '--class_label',
+        type=str,
+        default='coco_label.txt',
+        help="class label file")
+    args = parser.parse_args()
+
+    ie = Core()
+    net = ie.read_model(args.onnx_path)
+    test_image = image_preprocess(args.img_path, args.in_shape)
+    compiled_model = ie.compile_model(net, 'CPU')
+
+    detect(args.img_path, compiled_model, args.class_label)
diff --git a/deploy/third_engine/demo_openvino_kpts/picodet_openvino.h b/deploy/third_engine/demo_openvino_kpts/picodet_openvino.h
index 242423a3af3644c3f3ad495ab0c291015560e49f..7bd3d79c44a2f6ae62eaba82bcafcae45a84254f 100644
--- a/deploy/third_engine/demo_openvino_kpts/picodet_openvino.h
+++ b/deploy/third_engine/demo_openvino_kpts/picodet_openvino.h
@@ -38,8 +38,8 @@ typedef struct BoxInfo {
 } BoxInfo;
 class PicoDet {
- public:
-  PicoDet(const char* param);
+public:
+  PicoDet(const char *param);
 ~PicoDet();
@@ -48,26 +48,23 @@ class PicoDet {
 std::vector<HeadInfo> heads_info_{
 // cls_pred|dis_pred|stride
-      {"save_infer_model/scale_0.tmp_1", "save_infer_model/scale_4.tmp_1", 8},
-      {"save_infer_model/scale_1.tmp_1", "save_infer_model/scale_5.tmp_1", 16},
-      {"save_infer_model/scale_2.tmp_1", "save_infer_model/scale_6.tmp_1", 32},
-      {"save_infer_model/scale_3.tmp_1", "save_infer_model/scale_7.tmp_1", 64},
+      {"transpose_0.tmp_0", "transpose_1.tmp_0", 8},
+      {"transpose_2.tmp_0", "transpose_3.tmp_0", 16},
+      {"transpose_4.tmp_0", "transpose_5.tmp_0", 32},
+      {"transpose_6.tmp_0", "transpose_7.tmp_0", 64},
 };
-  std::vector<BoxInfo> detect(cv::Mat image,
-                              float score_threshold,
+  std::vector<BoxInfo> detect(cv::Mat image, float score_threshold,
 float nms_threshold);
- private:
-  void preprocess(cv::Mat& image, InferenceEngine::Blob::Ptr& blob);
-  void decode_infer(const float*& cls_pred,
-                    const float*& dis_pred,
-                    int stride,
+private:
+  void preprocess(cv::Mat &image, InferenceEngine::Blob::Ptr &blob);
+  void decode_infer(const float *&cls_pred, const float *&dis_pred, int stride,
 float threshold,
-                    std::vector<std::vector<BoxInfo>>& results);
-  BoxInfo disPred2Bbox(
-      const float*& dfl_det, int label, float score, int x, int y, int stride);
-  static void nms(std::vector<BoxInfo>& result, float nms_threshold);
+                    std::vector<std::vector<BoxInfo>> &results);
+  BoxInfo disPred2Bbox(const float *&dfl_det, int label, float score, int x,
+                       int y, int stride);
+  static void nms(std::vector<BoxInfo>
&result, float nms_threshold); std::string input_name_; int input_size_ = image_size; int num_class_ = 80; diff --git a/deploy/third_engine/onnx/infer.py b/deploy/third_engine/onnx/infer.py new file mode 100644 index 0000000000000000000000000000000000000000..9dbe2bde9e0d9e90639f53331a91fdecbeaefb8b --- /dev/null +++ b/deploy/third_engine/onnx/infer.py @@ -0,0 +1,148 @@ +# Copyright (c) 2022 PaddlePaddle Authors. All Rights Reserved. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +import os +import yaml +import argparse +import numpy as np +import glob +from onnxruntime import InferenceSession + +from preprocess import Compose + +# Global dictionary +SUPPORT_MODELS = { + 'YOLO', 'RCNN', 'SSD', 'Face', 'FCOS', 'SOLOv2', 'TTFNet', 'S2ANet', 'JDE', + 'FairMOT', 'DeepSORT', 'GFL', 'PicoDet', 'CenterNet', 'TOOD', 'RetinaNet', + 'StrongBaseline', 'STGCN', 'YOLOX', 'HRNet' +} + +parser = argparse.ArgumentParser(description=__doc__) +parser.add_argument("--infer_cfg", type=str, help="infer_cfg.yml") +parser.add_argument( + '--onnx_file', type=str, default="model.onnx", help="onnx model file path") +parser.add_argument("--image_dir", type=str) +parser.add_argument("--image_file", type=str) + + +def get_test_images(infer_dir, infer_img): + """ + Get image path list in TEST mode + """ + assert infer_img is not None or infer_dir is not None, \ + "--image_file or --image_dir should be set" + assert infer_img is None or os.path.isfile(infer_img), \ + "{} is not a file".format(infer_img) + assert infer_dir is None or os.path.isdir(infer_dir), \ + "{} is not a directory".format(infer_dir) + + # infer_img has a higher priority + if infer_img and os.path.isfile(infer_img): + return [infer_img] + + images = set() + infer_dir = os.path.abspath(infer_dir) + assert os.path.isdir(infer_dir), \ + "infer_dir {} is not a directory".format(infer_dir) + exts = ['jpg', 'jpeg', 'png', 'bmp'] + exts += [ext.upper() for ext in exts] + for ext in exts: + images.update(glob.glob('{}/*.{}'.format(infer_dir, ext))) + images = list(images) + + assert len(images) > 0, "no image found in {}".format(infer_dir) + print("Found {} inference images in total.".format(len(images))) + + return images + + +class PredictConfig(object): + """set config of preprocess, postprocess and visualize + Args: + infer_config (str): path of infer_cfg.yml + """ + + def __init__(self, infer_config): + # parsing Yaml config for Preprocess + with open(infer_config) as f: + yml_conf = yaml.safe_load(f) + self.check_model(yml_conf) + self.arch = yml_conf['arch'] + self.preprocess_infos = yml_conf['Preprocess'] + self.min_subgraph_size = yml_conf['min_subgraph_size'] + self.label_list = yml_conf['label_list'] + self.use_dynamic_shape = yml_conf['use_dynamic_shape'] + self.draw_threshold = yml_conf.get("draw_threshold", 0.5) + self.mask = yml_conf.get("mask", False) + self.tracker = yml_conf.get("tracker", None) + self.nms = yml_conf.get("NMS", None) + self.fpn_stride = yml_conf.get("fpn_stride", None) + if self.arch == 'RCNN' and yml_conf.get('export_onnx', 
False): + print( + 'The RCNN export model is used for ONNX and it only supports batch_size = 1' + ) + self.print_config() + + def check_model(self, yml_conf): + """ + Raises: + ValueError: loaded model not in supported model type + """ + for support_model in SUPPORT_MODELS: + if support_model in yml_conf['arch']: + return True + raise ValueError("Unsupported arch: {}, expect {}".format(yml_conf[ + 'arch'], SUPPORT_MODELS)) + + def print_config(self): + print('----------- Model Configuration -----------') + print('%s: %s' % ('Model Arch', self.arch)) + print('%s: ' % ('Transform Order')) + for op_info in self.preprocess_infos: + print('--%s: %s' % ('transform op', op_info['type'])) + print('--------------------------------------------') + + +def predict_image(infer_config, predictor, img_list): + # load preprocess transforms + transforms = Compose(infer_config.preprocess_infos) + # predict image + for img_path in img_list: + inputs = transforms(img_path) + inputs_name = [var.name for var in predictor.get_inputs()] + inputs = {k: inputs[k][None, ] for k in inputs_name} + + outputs = predictor.run(output_names=None, input_feed=inputs) + + print("ONNXRuntime predict: ") + if infer_config.arch in ["HRNet"]: + print(np.array(outputs[0])) + else: + bboxes = np.array(outputs[0]) + for bbox in bboxes: + if bbox[0] > -1 and bbox[1] > infer_config.draw_threshold: + print(f"{int(bbox[0])} {bbox[1]} " + f"{bbox[2]} {bbox[3]} {bbox[4]} {bbox[5]}") + + +if __name__ == '__main__': + FLAGS = parser.parse_args() + # load image list + img_list = get_test_images(FLAGS.image_dir, FLAGS.image_file) + # load predictor + predictor = InferenceSession(FLAGS.onnx_file) + # load infer config + infer_config = PredictConfig(FLAGS.infer_cfg) + + predict_image(infer_config, predictor, img_list) diff --git a/deploy/third_engine/onnx/preprocess.py b/deploy/third_engine/onnx/preprocess.py new file mode 100644 index 0000000000000000000000000000000000000000..7ed422c53c2301a63fafd6f5c8019ed51d857865 --- /dev/null +++ b/deploy/third_engine/onnx/preprocess.py @@ -0,0 +1,491 @@ +import numpy as np +import cv2 +import copy + + +def decode_image(img_path): + with open(img_path, 'rb') as f: + im_read = f.read() + data = np.frombuffer(im_read, dtype='uint8') + im = cv2.imdecode(data, 1) # BGR mode, but need RGB mode + im = cv2.cvtColor(im, cv2.COLOR_BGR2RGB) + img_info = { + "im_shape": np.array( + im.shape[:2], dtype=np.float32), + "scale_factor": np.array( + [1., 1.], dtype=np.float32) + } + return im, img_info + + +class Resize(object): + """resize image by target_size and max_size + Args: + target_size (int): the target size of image + keep_ratio (bool): whether keep_ratio or not, default true + interp (int): method of resize + """ + + def __init__(self, target_size, keep_ratio=True, interp=cv2.INTER_LINEAR): + if isinstance(target_size, int): + target_size = [target_size, target_size] + self.target_size = target_size + self.keep_ratio = keep_ratio + self.interp = interp + + def __call__(self, im, im_info): + """ + Args: + im (np.ndarray): image (np.ndarray) + im_info (dict): info of image + Returns: + im (np.ndarray): processed image (np.ndarray) + im_info (dict): info of processed image + """ + assert len(self.target_size) == 2 + assert self.target_size[0] > 0 and self.target_size[1] > 0 + im_channel = im.shape[2] + im_scale_y, im_scale_x = self.generate_scale(im) + im = cv2.resize( + im, + None, + None, + fx=im_scale_x, + fy=im_scale_y, + interpolation=self.interp) + im_info['im_shape'] = 
np.array(im.shape[:2]).astype('float32') + im_info['scale_factor'] = np.array( + [im_scale_y, im_scale_x]).astype('float32') + return im, im_info + + def generate_scale(self, im): + """ + Args: + im (np.ndarray): image (np.ndarray) + Returns: + im_scale_x: the resize ratio of X + im_scale_y: the resize ratio of Y + """ + origin_shape = im.shape[:2] + im_c = im.shape[2] + if self.keep_ratio: + im_size_min = np.min(origin_shape) + im_size_max = np.max(origin_shape) + target_size_min = np.min(self.target_size) + target_size_max = np.max(self.target_size) + im_scale = float(target_size_min) / float(im_size_min) + if np.round(im_scale * im_size_max) > target_size_max: + im_scale = float(target_size_max) / float(im_size_max) + im_scale_x = im_scale + im_scale_y = im_scale + else: + resize_h, resize_w = self.target_size + im_scale_y = resize_h / float(origin_shape[0]) + im_scale_x = resize_w / float(origin_shape[1]) + return im_scale_y, im_scale_x + + +class NormalizeImage(object): + """normalize image + Args: + mean (list): im - mean + std (list): im / std + is_scale (bool): whether need im / 255 + is_channel_first (bool): if True: image shape is CHW, else: HWC + """ + + def __init__(self, mean, std, is_scale=True): + self.mean = mean + self.std = std + self.is_scale = is_scale + + def __call__(self, im, im_info): + """ + Args: + im (np.ndarray): image (np.ndarray) + im_info (dict): info of image + Returns: + im (np.ndarray): processed image (np.ndarray) + im_info (dict): info of processed image + """ + im = im.astype(np.float32, copy=False) + mean = np.array(self.mean)[np.newaxis, np.newaxis, :] + std = np.array(self.std)[np.newaxis, np.newaxis, :] + + if self.is_scale: + im = im / 255.0 + im -= mean + im /= std + return im, im_info + + +class Permute(object): + """permute image + Args: + to_bgr (bool): whether convert RGB to BGR + channel_first (bool): whether convert HWC to CHW + """ + + def __init__(self, ): + super(Permute, self).__init__() + + def __call__(self, im, im_info): + """ + Args: + im (np.ndarray): image (np.ndarray) + im_info (dict): info of image + Returns: + im (np.ndarray): processed image (np.ndarray) + im_info (dict): info of processed image + """ + im = im.transpose((2, 0, 1)).copy() + return im, im_info + + +class PadStride(object): + """ padding image for model with FPN, instead PadBatch(pad_to_stride) in original config + Args: + stride (bool): model with FPN need image shape % stride == 0 + """ + + def __init__(self, stride=0): + self.coarsest_stride = stride + + def __call__(self, im, im_info): + """ + Args: + im (np.ndarray): image (np.ndarray) + im_info (dict): info of image + Returns: + im (np.ndarray): processed image (np.ndarray) + im_info (dict): info of processed image + """ + coarsest_stride = self.coarsest_stride + if coarsest_stride <= 0: + return im, im_info + im_c, im_h, im_w = im.shape + pad_h = int(np.ceil(float(im_h) / coarsest_stride) * coarsest_stride) + pad_w = int(np.ceil(float(im_w) / coarsest_stride) * coarsest_stride) + padding_im = np.zeros((im_c, pad_h, pad_w), dtype=np.float32) + padding_im[:, :im_h, :im_w] = im + return padding_im, im_info + + +class LetterBoxResize(object): + def __init__(self, target_size): + """ + Resize image to target size, convert normalized xywh to pixel xyxy + format ([x_center, y_center, width, height] -> [x0, y0, x1, y1]). + Args: + target_size (int|list): image target size. 
+ """ + super(LetterBoxResize, self).__init__() + if isinstance(target_size, int): + target_size = [target_size, target_size] + self.target_size = target_size + + def letterbox(self, img, height, width, color=(127.5, 127.5, 127.5)): + # letterbox: resize a rectangular image to a padded rectangular + shape = img.shape[:2] # [height, width] + ratio_h = float(height) / shape[0] + ratio_w = float(width) / shape[1] + ratio = min(ratio_h, ratio_w) + new_shape = (round(shape[1] * ratio), + round(shape[0] * ratio)) # [width, height] + padw = (width - new_shape[0]) / 2 + padh = (height - new_shape[1]) / 2 + top, bottom = round(padh - 0.1), round(padh + 0.1) + left, right = round(padw - 0.1), round(padw + 0.1) + + img = cv2.resize( + img, new_shape, interpolation=cv2.INTER_AREA) # resized, no border + img = cv2.copyMakeBorder( + img, top, bottom, left, right, cv2.BORDER_CONSTANT, + value=color) # padded rectangular + return img, ratio, padw, padh + + def __call__(self, im, im_info): + """ + Args: + im (np.ndarray): image (np.ndarray) + im_info (dict): info of image + Returns: + im (np.ndarray): processed image (np.ndarray) + im_info (dict): info of processed image + """ + assert len(self.target_size) == 2 + assert self.target_size[0] > 0 and self.target_size[1] > 0 + height, width = self.target_size + h, w = im.shape[:2] + im, ratio, padw, padh = self.letterbox(im, height=height, width=width) + + new_shape = [round(h * ratio), round(w * ratio)] + im_info['im_shape'] = np.array(new_shape, dtype=np.float32) + im_info['scale_factor'] = np.array([ratio, ratio], dtype=np.float32) + return im, im_info + + +class Pad(object): + def __init__(self, size, fill_value=[114.0, 114.0, 114.0]): + """ + Pad image to a specified size. + Args: + size (list[int]): image target size + fill_value (list[float]): rgb value of pad area, default (114.0, 114.0, 114.0) + """ + super(Pad, self).__init__() + if isinstance(size, int): + size = [size, size] + self.size = size + self.fill_value = fill_value + + def __call__(self, im, im_info): + im_h, im_w = im.shape[:2] + h, w = self.size + if h == im_h and w == im_w: + im = im.astype(np.float32) + return im, im_info + + canvas = np.ones((h, w, 3), dtype=np.float32) + canvas *= np.array(self.fill_value, dtype=np.float32) + canvas[0:im_h, 0:im_w, :] = im.astype(np.float32) + im = canvas + return im, im_info + + +def rotate_point(pt, angle_rad): + """Rotate a point by an angle. + + Args: + pt (list[float]): 2 dimensional point to be rotated + angle_rad (float): rotation angle by radian + + Returns: + list[float]: Rotated point. + """ + assert len(pt) == 2 + sn, cs = np.sin(angle_rad), np.cos(angle_rad) + new_x = pt[0] * cs - pt[1] * sn + new_y = pt[0] * sn + pt[1] * cs + rotated_pt = [new_x, new_y] + + return rotated_pt + + +def _get_3rd_point(a, b): + """To calculate the affine matrix, three pairs of points are required. This + function is used to get the 3rd point, given 2D points a & b. + + The 3rd point is defined by rotating vector `a - b` by 90 degrees + anticlockwise, using b as the rotation center. + + Args: + a (np.ndarray): point(x,y) + b (np.ndarray): point(x,y) + + Returns: + np.ndarray: The 3rd point. + """ + assert len(a) == 2 + assert len(b) == 2 + direction = a - b + third_pt = b + np.array([-direction[1], direction[0]], dtype=np.float32) + + return third_pt + + +def get_affine_transform(center, + input_size, + rot, + output_size, + shift=(0., 0.), + inv=False): + """Get the affine transform matrix, given the center/scale/rot/output_size. 
+ + Args: + center (np.ndarray[2, ]): Center of the bounding box (x, y). + scale (np.ndarray[2, ]): Scale of the bounding box + wrt [width, height]. + rot (float): Rotation angle (degree). + output_size (np.ndarray[2, ]): Size of the destination heatmaps. + shift (0-100%): Shift translation ratio wrt the width/height. + Default (0., 0.). + inv (bool): Option to inverse the affine transform direction. + (inv=False: src->dst or inv=True: dst->src) + + Returns: + np.ndarray: The transform matrix. + """ + assert len(center) == 2 + assert len(output_size) == 2 + assert len(shift) == 2 + if not isinstance(input_size, (np.ndarray, list)): + input_size = np.array([input_size, input_size], dtype=np.float32) + scale_tmp = input_size + + shift = np.array(shift) + src_w = scale_tmp[0] + dst_w = output_size[0] + dst_h = output_size[1] + + rot_rad = np.pi * rot / 180 + src_dir = rotate_point([0., src_w * -0.5], rot_rad) + dst_dir = np.array([0., dst_w * -0.5]) + + src = np.zeros((3, 2), dtype=np.float32) + src[0, :] = center + scale_tmp * shift + src[1, :] = center + src_dir + scale_tmp * shift + src[2, :] = _get_3rd_point(src[0, :], src[1, :]) + + dst = np.zeros((3, 2), dtype=np.float32) + dst[0, :] = [dst_w * 0.5, dst_h * 0.5] + dst[1, :] = np.array([dst_w * 0.5, dst_h * 0.5]) + dst_dir + dst[2, :] = _get_3rd_point(dst[0, :], dst[1, :]) + + if inv: + trans = cv2.getAffineTransform(np.float32(dst), np.float32(src)) + else: + trans = cv2.getAffineTransform(np.float32(src), np.float32(dst)) + + return trans + + +class WarpAffine(object): + """Warp affine the image + """ + + def __init__(self, + keep_res=False, + pad=31, + input_h=512, + input_w=512, + scale=0.4, + shift=0.1): + self.keep_res = keep_res + self.pad = pad + self.input_h = input_h + self.input_w = input_w + self.scale = scale + self.shift = shift + + def __call__(self, im, im_info): + """ + Args: + im (np.ndarray): image (np.ndarray) + im_info (dict): info of image + Returns: + im (np.ndarray): processed image (np.ndarray) + im_info (dict): info of processed image + """ + img = cv2.cvtColor(im, cv2.COLOR_RGB2BGR) + + h, w = img.shape[:2] + + if self.keep_res: + input_h = (h | self.pad) + 1 + input_w = (w | self.pad) + 1 + s = np.array([input_w, input_h], dtype=np.float32) + c = np.array([w // 2, h // 2], dtype=np.float32) + + else: + s = max(h, w) * 1.0 + input_h, input_w = self.input_h, self.input_w + c = np.array([w / 2., h / 2.], dtype=np.float32) + + trans_input = get_affine_transform(c, s, 0, [input_w, input_h]) + img = cv2.resize(img, (w, h)) + inp = cv2.warpAffine( + img, trans_input, (input_w, input_h), flags=cv2.INTER_LINEAR) + return inp, im_info + + +# keypoint preprocess +def get_warp_matrix(theta, size_input, size_dst, size_target): + """This code is based on + https://github.com/open-mmlab/mmpose/blob/master/mmpose/core/post_processing/post_transforms.py + + Calculate the transformation matrix under the constraint of unbiased. + Paper ref: Huang et al. The Devil is in the Details: Delving into Unbiased + Data Processing for Human Pose Estimation (CVPR 2020). + + Args: + theta (float): Rotation angle in degrees. + size_input (np.ndarray): Size of input image [w, h]. + size_dst (np.ndarray): Size of output image [w, h]. + size_target (np.ndarray): Size of ROI in input plane [w, h]. + + Returns: + matrix (np.ndarray): A matrix for transformation. 
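+
+    Example (illustrative; the expected output below is derived from the
+    formulas in this function, not from an official test case):
+        With theta=0 and size_input == size_target == size_dst, the result
+        is the 2x3 identity transform:
+
+        >>> get_warp_matrix(0., np.array([192., 256.]),
+        ...                 np.array([192., 256.]), np.array([192., 256.]))
+        array([[1., 0., 0.],
+               [0., 1., 0.]], dtype=float32)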
+ """ + theta = np.deg2rad(theta) + matrix = np.zeros((2, 3), dtype=np.float32) + scale_x = size_dst[0] / size_target[0] + scale_y = size_dst[1] / size_target[1] + matrix[0, 0] = np.cos(theta) * scale_x + matrix[0, 1] = -np.sin(theta) * scale_x + matrix[0, 2] = scale_x * ( + -0.5 * size_input[0] * np.cos(theta) + 0.5 * size_input[1] * + np.sin(theta) + 0.5 * size_target[0]) + matrix[1, 0] = np.sin(theta) * scale_y + matrix[1, 1] = np.cos(theta) * scale_y + matrix[1, 2] = scale_y * ( + -0.5 * size_input[0] * np.sin(theta) - 0.5 * size_input[1] * + np.cos(theta) + 0.5 * size_target[1]) + return matrix + + +class TopDownEvalAffine(object): + """apply affine transform to image and coords + + Args: + trainsize (list): [w, h], the standard size used to train + use_udp (bool): whether to use Unbiased Data Processing. + records(dict): the dict contained the image and coords + + Returns: + records (dict): contain the image and coords after tranformed + + """ + + def __init__(self, trainsize, use_udp=False): + self.trainsize = trainsize + self.use_udp = use_udp + + def __call__(self, image, im_info): + rot = 0 + imshape = im_info['im_shape'][::-1] + center = im_info['center'] if 'center' in im_info else imshape / 2. + scale = im_info['scale'] if 'scale' in im_info else imshape + if self.use_udp: + trans = get_warp_matrix( + rot, center * 2.0, + [self.trainsize[0] - 1.0, self.trainsize[1] - 1.0], scale) + image = cv2.warpAffine( + image, + trans, (int(self.trainsize[0]), int(self.trainsize[1])), + flags=cv2.INTER_LINEAR) + else: + trans = get_affine_transform(center, scale, rot, self.trainsize) + image = cv2.warpAffine( + image, + trans, (int(self.trainsize[0]), int(self.trainsize[1])), + flags=cv2.INTER_LINEAR) + + return image, im_info + + +class Compose: + def __init__(self, transforms): + self.transforms = [] + for op_info in transforms: + new_op_info = op_info.copy() + op_type = new_op_info.pop('type') + self.transforms.append(eval(op_type)(**new_op_info)) + + def __call__(self, img_path): + img, im_info = decode_image(img_path) + for t in self.transforms: + img, im_info = t(img, im_info) + inputs = copy.deepcopy(im_info) + inputs['image'] = img + return inputs diff --git a/docs/CHANGELOG.md b/docs/CHANGELOG.md index b9915b5ab2bf55dd2a8d2a054fd8b550d4cc4faf..e19e6867a15ea18ddb8f85f46f6e020b79d4ebf6 100644 --- a/docs/CHANGELOG.md +++ b/docs/CHANGELOG.md @@ -7,7 +7,7 @@ ### 2.4(03.24/2022) - PP-YOLOE: - - 发布PP-YOLOE特色模型,COCO数据集精度51.4%,V100预测速度78.1 FPS,精度速度服务器端SOTA + - 发布PP-YOLOE特色模型,l版本COCO test2017数据集精度51.6%,V100预测速度78.1 FPS,精度速度服务器端SOTA - 发布s/m/l/x系列模型,打通TensorRT、ONNX部署能力 - 支持混合精度训练,训练较PP-YOLOv2加速33% @@ -22,8 +22,10 @@ - ReID支持Centroid模型 - 动作识别支持ST-GCN摔倒检测 +- 模型丰富度: + - 发布YOLOX,支持nano/tiny/s/m/l/x版本,x版本COCO val2017数据集精度51.8% + - 框架功能优化: - - 支持混合精度训练,通过`–amp`开启 - EMA训练速度优化20%,优化EMA训练模型保存方式 - 支持infer预测结果保存为COCO格式 diff --git a/docs/CHANGELOG_en.md b/docs/CHANGELOG_en.md index 7714b0c6787ef486623edd6494036f075651ef52..a7e6d422611eae7f0cfa66deb56f8e53e493d8c2 100644 --- a/docs/CHANGELOG_en.md +++ b/docs/CHANGELOG_en.md @@ -7,14 +7,14 @@ English | [简体中文](./CHANGELOG.md) ### 2.4(03.24/2022) - PP-YOLOE: - - Release PP-YOLOE object detection models, achieve mAP as 51.4% on COCO test dataset and 78.1 FPS on Nvidia V100, reach SOTA performance for object detection on GPU`` + - Release PP-YOLOE object detection models, achieve mAP as 51.6% on COCO test dataset and 78.1 FPS on Nvidia V100 by PP-YOLOE-l, reach SOTA performance for object detection on GPU`` - Release series models: s/m/l/x, and support 
   - Support AMP training; training is 33% faster than PP-YOLOv2
 
 - PP-PicoDet:
   - Release enhanced models of PP-PicoDet, mAP promoted ~2% on COCO and inference speed accelerated 63% on CPU
   - Release PP-PicoDet-XS model with 0.7M parameters
-  - Post-processing integrated into the network to optimize deployment pipeline
+  - Post-processing integrated into the network to optimize the deployment pipeline
 
 - PP-Human:
   - Release PP-Human human analysis pipeline, including pedestrian detection, attribute recognition, human tracking, multi-camera tracking, human statistics and action recognition. Supporting deployment with TensorRT
@@ -22,8 +22,10 @@ English | [简体中文](./CHANGELOG.md)
   - Release Centroid model for ReID
   - Release ST-GCN model for falldown action recognition
 
+- Model richness:
+  - Release YOLOX object detection models in nano/tiny/s/m/l/x versions; YOLOX-x achieves 51.8% mAP on the COCO val2017 dataset
+
 - Function Optimize:
-  - Support AMP training, enable with `--amp`
   - Optimize 20% training speed when training with EMA, improve saving method of EMA weights
   - Support saving inference results in COCO format
diff --git a/docs/MODEL_ZOO_cn.md b/docs/MODEL_ZOO_cn.md
index dc8c38374efa36e378dc0a6d170570872efffcb1..b71d07d5c2ea652b16855967236fa9323e2af0ee 100644
--- a/docs/MODEL_ZOO_cn.md
+++ b/docs/MODEL_ZOO_cn.md
@@ -87,6 +87,14 @@ Paddle提供基于ImageNet的骨架网络预训练模型。所有预训练模型
 请参考[PicoDet](https://github.com/PaddlePaddle/PaddleDetection/tree/develop/configs/picodet)
 
+### PP-YOLOE
+
+请参考[PP-YOLOE](https://github.com/PaddlePaddle/PaddleDetection/tree/develop/configs/ppyoloe)
+
+### YOLOX
+
+请参考[YOLOX](https://github.com/PaddlePaddle/PaddleDetection/tree/develop/configs/yolox)
+
 
 ## 旋转框检测
 
@@ -94,6 +102,7 @@
 请参考[S2ANet](https://github.com/PaddlePaddle/PaddleDetection/tree/develop/configs/dota/)
 
+
 ## 关键点检测
 
 ### PP-TinyPose
@@ -108,16 +117,21 @@ Paddle提供基于ImageNet的骨架网络预训练模型。所有预训练模型
 请参考[HigherHRNet](https://github.com/PaddlePaddle/PaddleDetection/tree/develop/configs/keypoint/higherhrnet)
 
+
 ## 多目标跟踪
 
-### DeepSort
+### DeepSORT
 
-请参考[DeepSort](https://github.com/PaddlePaddle/PaddleDetection/tree/develop/configs/mot/deepsort)
+请参考[DeepSORT](https://github.com/PaddlePaddle/PaddleDetection/tree/develop/configs/mot/deepsort)
 
 ### JDE
 
 请参考[JDE](https://github.com/PaddlePaddle/PaddleDetection/tree/develop/configs/mot/jde)
 
-### fairmot
+### FairMOT
+
+请参考[FairMOT](https://github.com/PaddlePaddle/PaddleDetection/tree/develop/configs/mot/fairmot)
+
+### ByteTrack
 
-请参考[FairMot](https://github.com/PaddlePaddle/PaddleDetection/tree/develop/configs/mot/fairmot)
+请参考[ByteTrack](https://github.com/PaddlePaddle/PaddleDetection/tree/develop/configs/mot/bytetrack)
diff --git a/docs/MODEL_ZOO_en.md b/docs/MODEL_ZOO_en.md
index b8f5403ad4a5c4ef619e9a4d2bf73375fd0b812f..fd23b895324f9544e9d007e25f6f607e4340c7b1 100644
--- a/docs/MODEL_ZOO_en.md
+++ b/docs/MODEL_ZOO_en.md
@@ -86,6 +86,14 @@ Please refer to[GFL](https://github.com/PaddlePaddle/PaddleDetection/tree/develo
 Please refer to[PicoDet](https://github.com/PaddlePaddle/PaddleDetection/tree/develop/configs/picodet)
 
+### PP-YOLOE
+
+Please refer to [PP-YOLOE](https://github.com/PaddlePaddle/PaddleDetection/tree/develop/configs/ppyoloe)
+
+### YOLOX
+
+Please refer to [YOLOX](https://github.com/PaddlePaddle/PaddleDetection/tree/develop/configs/yolox)
+
 
 ## Rotating frame detection
 
@@ -93,6 +101,7 @@ Please refer to[PicoDet](https://github.com/PaddlePaddle/PaddleDetection/tree/de
 
 Please refer
to[S2ANet](https://github.com/PaddlePaddle/PaddleDetection/tree/develop/configs/dota/) + ## KeyPoint Detection ### PP-TinyPose @@ -107,16 +116,21 @@ Please refer to [HRNet](https://github.com/PaddlePaddle/PaddleDetection/tree/dev Please refer to [HigherHRNet](https://github.com/PaddlePaddle/PaddleDetection/tree/develop/configs/keypoint/higherhrnet) + ## Multi-Object Tracking -### DeepSort +### DeepSORT -Please refer to [DeepSort](https://github.com/PaddlePaddle/PaddleDetection/tree/develop/configs/mot/deepsort) +Please refer to [DeepSORT](https://github.com/PaddlePaddle/PaddleDetection/tree/develop/configs/mot/deepsort) ### JDE Please refer to [JDE](https://github.com/PaddlePaddle/PaddleDetection/tree/develop/configs/mot/jde) -### fairmot +### FairMOT + +Please refer to [FairMOT](https://github.com/PaddlePaddle/PaddleDetection/tree/develop/configs/mot/fairmot) + +### ByteTrack -Please refer to [FairMot](https://github.com/PaddlePaddle/PaddleDetection/tree/develop/configs/mot/fairmot) \ No newline at end of file +Please refer to [ByteTrack](https://github.com/PaddlePaddle/PaddleDetection/tree/develop/configs/mot/bytetrack) diff --git a/docs/advanced_tutorials/READER.md b/docs/advanced_tutorials/READER.md index bc087f8959f008b8a9ac6b5613c1aca710f239a1..60c9fee67f2718a3de088eb52d899828924f9e34 100644 --- a/docs/advanced_tutorials/READER.md +++ b/docs/advanced_tutorials/READER.md @@ -90,7 +90,7 @@ COCO数据集目前分为COCO2014和COCO2017,主要由json文件和image文件 │ │ ... ``` -在`source/coco.py`中定义并注册了`COCODataSet`数据集类,其继承自`DetDataSet`,并实现了parse_dataset方法,调用[COCO API](https://github.com/cocodataset/cocoapi)加载并解析COCO格式数据源`roidbs`和`cname2cid`,具体可参见`source/coco.py`源码。将其他数据集转换成COCO格式可以参考[用户数据转成COCO数据](../tutorials/PrepareDataSet.md#用户数据转成COCO数据) +在`source/coco.py`中定义并注册了`COCODataSet`数据集类,其继承自`DetDataSet`,并实现了parse_dataset方法,调用[COCO API](https://github.com/cocodataset/cocoapi)加载并解析COCO格式数据源`roidbs`和`cname2cid`,具体可参见`source/coco.py`源码。将其他数据集转换成COCO格式可以参考[用户数据转成COCO数据](../tutorials/data/PrepareDetDataSet.md#用户数据转成COCO数据) #### 2.2Pascal VOC数据集 该数据集目前分为VOC2007和VOC2012,主要由xml文件和image文件组成,其组织结构如下所示: @@ -118,7 +118,7 @@ COCO数据集目前分为COCO2014和COCO2017,主要由json文件和image文件 │ ├── ImageSets │ │ ... 
``` -在`source/voc.py`中定义并注册了`VOCDataSet`数据集,它继承自`DetDataSet`基类,并重写了`parse_dataset`方法,解析VOC数据集中xml格式标注文件,更新`roidbs`和`cname2cid`。将其他数据集转换成VOC格式可以参考[用户数据转成VOC数据](../tutorials/PrepareDataSet.md#用户数据转成VOC数据) +在`source/voc.py`中定义并注册了`VOCDataSet`数据集,它继承自`DetDataSet`基类,并重写了`parse_dataset`方法,解析VOC数据集中xml格式标注文件,更新`roidbs`和`cname2cid`。将其他数据集转换成VOC格式可以参考[用户数据转成VOC数据](../tutorials/data/PrepareDetDataSet.md#用户数据转成VOC数据) #### 2.3自定义数据集 如果COCODataSet和VOCDataSet不能满足你的需求,可以通过自定义数据集的方式来加载你的数据集。只需要以下两步即可实现自定义数据集 @@ -259,9 +259,11 @@ Reader相关的类定义在`reader.py`, 其中定义了`BaseDataLoader`类。`Ba ### 5.配置及运行 -#### 5.1配置 +#### 5.1 配置 +与数据预处理相关的模块的配置文件包含所有模型公用的Dataset的配置文件,以及不同模型专用的Reader的配置文件。 -与数据预处理相关的模块的配置文件包含所有模型公用的Datas set的配置文件以及不同模型专用的Reader的配置文件。关于Dataset的配置文件存在于`configs/datasets`文件夹。比如COCO数据集的配置文件如下: +##### 5.1.1 Dataset配置 +关于Dataset的配置文件存在于`configs/datasets`文件夹。比如COCO数据集的配置文件如下: ``` metric: COCO # 目前支持COCO, VOC, OID, WiderFace等评估标准 num_classes: 80 # num_classes数据集的类别数,不包含背景类 @@ -271,7 +273,7 @@ TrainDataset: image_dir: train2017 # 训练集的图片所在文件相对于dataset_dir的路径 anno_path: annotations/instances_train2017.json # 训练集的标注文件相对于dataset_dir的路径 dataset_dir: dataset/coco #数据集所在路径,相对于PaddleDetection路径 - data_fields: ['image', 'gt_bbox', 'gt_class', 'is_crowd'] # 控制dataset输出的sample所包含的字段 + data_fields: ['image', 'gt_bbox', 'gt_class', 'is_crowd'] # 控制dataset输出的sample所包含的字段,注意此为TrainDataset独有的且必须配置的字段 EvalDataset: !COCODataSet @@ -281,9 +283,16 @@ EvalDataset: TestDataset: !ImageFolder - anno_path: dataset/coco/annotations/instances_val2017.json # 验证集的标注文件所在路径,相对于PaddleDetection的路径 + anno_path: annotations/instances_val2017.json # 标注文件所在路径,仅用于读取数据集的类别信息,支持json和txt格式 + dataset_dir: dataset/coco # 数据集所在路径,若添加了此行,则`anno_path`路径为`dataset_dir/anno_path`,若此行不设置或去掉此行,则`anno_path`路径即为`anno_path` ``` 在PaddleDetection的yml配置文件中,使用`!`直接序列化模块实例(可以是函数,实例等),上述的配置文件均使用Dataset进行了序列化。 + +**注意:** +请运行前自行仔细检查数据集的配置路径,在训练或验证时如果TrainDataset和EvalDataset的路径配置有误,会提示自动下载数据集。若使用自定义数据集,在推理时如果TestDataset路径配置有误,会提示使用默认COCO数据集的类别信息。 + + +##### 5.1.2 Reader配置 不同模型专用的Reader定义在每一个模型的文件夹下,如yolov3的Reader配置文件定义在`configs/yolov3/_base_/yolov3_reader.yml`。一个Reader的示例配置如下: ``` worker_num: 2 diff --git a/docs/advanced_tutorials/READER_en.md b/docs/advanced_tutorials/READER_en.md index e3924641759c100ff9b16ebf82e2d9dc28666fae..07940a965dd4e48499a96def925679f9ff269ad8 100644 --- a/docs/advanced_tutorials/READER_en.md +++ b/docs/advanced_tutorials/READER_en.md @@ -91,7 +91,7 @@ COCO datasets are currently divided into COCO2014 and COCO2017, which are mainly │ │ ... ``` class `COCODataSet` is defined and registered on `source/coco.py`. And implements the parse the dataset method, called [COCO API](https://github.com/cocodataset/cocoapi) to load and parse COCO format data source ` roidbs ` and ` cname2cid `, See `source/coco.py` source code for details. Converting other datasets to COCO format can be done by referring to [converting User Data to COCO Data](../tutorials/PrepareDataSet_en.md#convert-user-data-to-coco-data) -And implements the parse the dataset method, called [COCO API](https://github.com/cocodataset/cocoapi) to load and parse COCO format data source `roidbs` and `cname2cid`, See `source/coco.py` source code for details. 
Converting other datasets to COCO format can be done by referring to [converting User Data to COCO Data](../tutorials/PrepareDataSet_en.md#convert-user-data-to-coco-data)
+It also implements the `parse_dataset` method, which calls the [COCO API](https://github.com/cocodataset/cocoapi) to load and parse the COCO-format data sources `roidbs` and `cname2cid`; see the `source/coco.py` source code for details. Converting other datasets to COCO format can be done by referring to [Converting User Data to COCO Data](../tutorials/data/PrepareDetDataSet_en.md#convert-user-data-to-coco-data)
 
 #### 2.2Pascal VOC dataset
 
@@ -120,7 +120,7 @@ The dataset is currently divided into VOC2007 and VOC2012, mainly composed of XM
 │ ├── ImageSets
 │ │ ...
 ```
-The `VOCDataSet` dataset is defined and registered in `source/voc.py` . It inherits the `DetDataSet` base class and rewrites the `parse_dataset` method to parse XML annotations in the VOC dataset. Update `roidbs` and `cname2cid`. To convert other datasets to VOC format, refer to [User Data to VOC Data](../tutorials/PrepareDataSet_en.md#convert-user-data-to-voc-data)
+The `VOCDataSet` dataset is defined and registered in `source/voc.py`. It inherits the `DetDataSet` base class and rewrites the `parse_dataset` method to parse the XML annotations in the VOC dataset, updating `roidbs` and `cname2cid`. To convert other datasets to VOC format, refer to [User Data to VOC Data](../tutorials/data/PrepareDetDataSet_en.md#convert-user-data-to-voc-data)
 
 #### 2.3Customize Dataset
 
@@ -260,9 +260,11 @@ The Reader class is defined in `reader.py`, where the `BaseDataLoader` class is
 
 ### 5.Configuration and Operation
 
-#### 5.1Configuration
+#### 5.1 Configuration
+The configuration files for modules related to data preprocessing contain the configuration files for Datasets common to all models and the configuration files for Readers specific to different models.
 
-The configuration files for modules related to data preprocessing contain the configuration files for Datas sets common to all models and the configuration files for readers specific to different models. The configuration file for the Dataset exists in the `configs/datasets` folder. For example, the COCO dataset configuration file is as follows:
+##### 5.1.1 Dataset Configuration
+The configuration file for the Dataset exists in the `configs/datasets` folder.
For example, the COCO dataset configuration file is as follows:
 ```
 metric: COCO # Currently supports COCO, VOC, OID, Wider Face and other evaluation standards
 num_classes: 80 # The number of classes in the dataset, excluding background classes
@@ -272,7 +274,7 @@ TrainDataset:
     image_dir: train2017 # The path where the training set image resides relative to the dataset_dir
     anno_path: annotations/instances_train2017.json # Path to the annotation file of the training set relative to the dataset_dir
     dataset_dir: dataset/coco # The path where the dataset is located relative to the PaddleDetection path
-    data_fields: ['image', 'gt_bbox', 'gt_class', 'is_crowd'] # Controls the fields contained in the sample output of the dataset
+    data_fields: ['image', 'gt_bbox', 'gt_class', 'is_crowd'] # Controls the fields contained in the sample output of the dataset; note that data_fields is specific to TrainDataset and must be configured
 
 EvalDataset:
   !COCODataSet
@@ -281,9 +283,16 @@ EvalDataset:
     dataset_dir: dataset/coco # The path where the dataset is located relative to the PaddleDetection path
 
 TestDataset:
   !ImageFolder
-    anno_path: dataset/coco/annotations/instances_val2017.json # The path of the annotation file of the verification set, relative to the path of PaddleDetection
+    anno_path: annotations/instances_val2017.json # The path of the annotation file; it is only used to read the category information of the dataset. JSON and TXT formats are supported
+    dataset_dir: dataset/coco # The path of the dataset. If this line is set, the annotation file path is `dataset_dir/anno_path`; if it is not set or removed, the annotation file path is just `anno_path`
 ```
 In the YML configuration file of PaddleDetection, the `!` symbol directly serializes module instances (functions, instances, etc.). The above configuration files all use it to serialize Dataset instances.
+
+**Note:**
+Please carefully check the dataset paths in the configuration before running. During training or evaluation, if the path of TrainDataset or EvalDataset is wrong, you will be prompted to download the dataset automatically. When using a user-defined dataset, if the TestDataset path is configured incorrectly during inference, the category information of the default COCO dataset will be used.
+
+
+##### 5.1.2 Reader configuration
 The Reader configuration files for yolov3 are defined in `configs/yolov3/_base_/yolov3_reader.yml`.
An example Reader configuration is as follows:
 ```
 worker_num: 2
diff --git a/docs/advanced_tutorials/customization/action_recognotion/README.md b/docs/advanced_tutorials/customization/action_recognotion/README.md
new file mode 100644
index 0000000000000000000000000000000000000000..d9adf2184592e37c343d743e3ce93c8a4dccb493
--- /dev/null
+++ b/docs/advanced_tutorials/customization/action_recognotion/README.md
@@ -0,0 +1,54 @@
+简体中文 | [English](./README_en.md)
+
+# 行为识别任务二次开发
+
+在产业落地过程中应用行为识别算法,不可避免地会出现希望自定义类型的行为识别的需求,或是对已有行为识别模型的优化,以提升在特定场景下模型的效果。鉴于行为的多样性,PP-Human支持抽烟、打电话、摔倒、打架、人员闯入五种异常行为识别,并根据行为的不同,集成了基于视频分类、基于检测、基于图像分类、基于跟踪以及基于骨骼点的五种行为识别技术方案,可覆盖90%+动作类型的识别,满足各类开发需求。我们在本文档中通过案例来介绍如何根据期望识别的行为来进行行为识别方案的选择,以及使用PaddleDetection进行行为识别算法二次开发工作,包括:方案选择、数据准备、模型优化思路和新增行为的开发流程。
+
+
+## 方案选择
+
+在PaddleDetection的PP-Human中,我们为行为识别提供了多种方案:基于视频分类、基于图像分类、基于检测、基于跟踪以及基于骨骼点的行为识别方案,以期望满足不同场景、不同目标行为的需求。对于二次开发,首先我们需要确定要采用何种方案来实现行为识别的需求,其核心是要通过对场景和具体行为的分析、并考虑数据采集成本等因素,综合选择一个合适的识别方案。我们在这里简要列举了当前PaddleDetection中所支持的方案的优劣势和适用场景,供大家参考。
+
+image
+
+下面以PaddleDetection目前已经支持的几个具体动作为例,介绍每个动作方案的选型依据:
+
+### 吸烟
+
+方案选择:基于人体id检测的行为识别
+
+原因:吸烟动作中具有香烟这个明显特征目标,因此我们可以认为当在某个人物的对应图像中检测到香烟时,该人物即在吸烟动作中。相比于基于视频或基于骨骼点的识别方案,训练检测模型需要采集的是图片级别而非视频级别的数据,可以明显减轻数据收集与标注的难度。此外,目标检测任务具有丰富的预训练模型资源,整体模型的效果会更有保障。
+
+### 打电话
+
+方案选择:基于人体id分类的行为识别
+
+原因:打电话动作中虽然有手机这个特征目标,但为了区分看手机等动作,以及考虑到在安防场景下打电话动作中会出现较多对手机的遮挡(如手对手机的遮挡、人头对手机的遮挡等等),不利于检测模型正确检测到目标。同时打电话通常持续的时间较长,且人物本身的动作不会发生太大变化,因此可以采用帧级别图像分类的策略。
+ 此外,打电话这个动作主要可以通过上半身判别,可以采用半身图片,去除冗余信息以降低模型训练的难度。
+
+### 摔倒
+
+方案选择:基于人体骨骼点的行为识别
+
+原因:摔倒是一个明显的时序行为的动作,可由一个人物本身进行区分,具有场景无关的特性。由于PP-Human的场景定位偏向安防监控场景,背景变化较为复杂,且部署上需要考虑到实时性,因此采用了基于骨骼点的行为识别方案,以获得更好的泛化性及运行速度。
+
+### 闯入
+
+方案选择:基于人体id跟踪的行为识别
+
+原因:闯入识别只需判断行人的路径或所在位置是否在某区域内,与人体自身动作无关,因此只需要根据人体跟踪结果,分析是否存在闯入行为。
+
+### 打架
+
+方案选择:基于视频分类的行为识别
+
+原因:与上面的动作不同,打架是一个典型的多人组成的行为。因此不再通过检测与跟踪模型来提取行人及其ID,而是对整体视频片段进行处理。此外,打架场景下各个目标间的互相遮挡极为严重,关键点识别的准确性不高,采用基于骨骼点的方案难以保证精度。
+
+
+下面详细展开五大类方案的数据准备、模型优化和新增行为识别方法:
+
+1. [基于人体id检测的行为识别](./idbased_det.md)
+2. [基于人体id分类的行为识别](./idbased_clas.md)
+3. [基于人体骨骼点的行为识别](./skeletonbased_rec.md)
+4. [基于人体id跟踪的行为识别](../pphuman_mot.md)
+5. [基于视频分类的行为识别](./videobased_rec.md)
diff --git a/docs/advanced_tutorials/customization/action_recognotion/README_en.md b/docs/advanced_tutorials/customization/action_recognotion/README_en.md
new file mode 100644
index 0000000000000000000000000000000000000000..d04d426b7076abdd38a7317117f5daab6eeff0ad
--- /dev/null
+++ b/docs/advanced_tutorials/customization/action_recognotion/README_en.md
@@ -0,0 +1,55 @@
+[简体中文](./README.md) | English
+
+# Secondary Development for Action Recognition Task
+
+In the process of industrial implementation, applying action recognition algorithms will inevitably create the need to recognize customized action types, or to optimize existing action recognition models to improve their performance in specific scenarios. In view of the diversity of behaviors, PP-Human supports the identification of five abnormal behaviors: smoking, making phone calls, falling, fighting, and people intrusion. According to the characteristics of each behavior, PP-Human integrates five action recognition solutions based on video classification, detection, image classification, tracking, and skeleton keypoints, which together cover 90%+ of action types and meet various development needs.
In this document, we introduce through cases how to select an action recognition solution for the expected behavior, and how to use PaddleDetection for secondary development of action recognition algorithms, including: solution selection, data preparation, model optimization, and the development process for adding new actions.
+
+
+## Solution Selection
+In PP-Human of PaddleDetection, we provide a variety of action recognition solutions: based on video classification, image classification, detection, tracking, and skeleton keypoints, in order to meet the needs of different scenes and different target behaviors.
+
+image
+
+The following takes several specific actions currently supported by PaddleDetection as examples to introduce the selection basis for each action:
+
+### Smoking
+
+Solution selection: action recognition based on detection with human id.
+
+Reason: The smoking action has an obvious characteristic object, the cigarette. So we can assume that when a cigarette is detected in the image of a person, that person is smoking. Compared with video-based or skeleton-based recognition schemes, training a detection model only requires collecting data at the image level rather than the video level, which significantly reduces the difficulty of data collection and labeling. In addition, the detection task has abundant pretrained model resources, so the performance of the overall model is better guaranteed.
+
+### Making Phone Calls
+
+Solution selection: action recognition based on classification with human id.
+
+Reason: Although there is a characteristic object, the mobile phone, in the phone-call action, it must be distinguished from actions such as looking at the phone. Moreover, in security scenes the phone is often heavily occluded during a call (by the hand, by the head, etc.), which is not conducive to the detection model correctly detecting the target. At the same time, calls usually last a long time and the person's posture does not change much, so a frame-level image classification strategy can be employed. In addition, the phone-call action can mainly be judged from the upper body, so half-body images can be used to remove redundant information and reduce the difficulty of model training.
+
+
+### Falling
+
+Solution selection: action recognition based on skeleton.
+
+Reason: Falling is an obvious temporal action that can be distinguished from a single person and is scene-independent. Since PP-Human is oriented toward security monitoring scenes, where background changes are complicated and real-time inference must be considered in deployment, action recognition based on skeleton keypoints is adopted to obtain better generalization and running speed.
+
+
+### People Intrusion
+
+Solution selection: action recognition based on tracking with human id.
+
+Reason: Intrusion can be judged by whether the pedestrian's path or location falls inside a selected area, and it is unrelated to the pedestrian's body action. Therefore, it is only necessary to track the person and analyze the coordinate results for intrusion behavior.
+
+### Fighting
+
+Solution selection: action recognition based on video classification.
+
+Reason: Unlike the actions above, fighting is a typical multi-person action.
Therefore, the detection and tracking models are no longer used to extract pedestrians and their IDs; instead, the entire video clip is processed. In addition, the mutual occlusion between targets in fighting scenes is extremely severe, so the accuracy of keypoint recognition is low and a skeleton-based solution can hardly guarantee precision.
+
+
+
+The following sections give detailed descriptions of the five major categories of solutions, including data preparation, model optimization, and how to add new actions.
+
+1. [action recognition based on detection with human id](./idbased_det_en.md)
+2. [action recognition based on classification with human id](./idbased_clas_en.md)
+3. [action recognition based on skeleton](./skeletonbased_rec_en.md)
+4. [action recognition based on tracking with human id](../pphuman_mot_en.md)
+5. [action recognition based on video classification](./videobased_rec_en.md)
diff --git a/docs/advanced_tutorials/customization/action_recognotion/idbased_clas.md b/docs/advanced_tutorials/customization/action_recognotion/idbased_clas.md
new file mode 100644
index 0000000000000000000000000000000000000000..51f281835ab0b842a0718d726ae73a533587e82a
--- /dev/null
+++ b/docs/advanced_tutorials/customization/action_recognotion/idbased_clas.md
@@ -0,0 +1,223 @@
+简体中文 | [English](./idbased_clas_en.md)
+
+# 基于人体id的分类模型开发
+
+## 环境准备
+
+基于人体id的分类方案是使用[PaddleClas](https://github.com/PaddlePaddle/PaddleClas)的功能进行模型训练的。请按照[安装说明](https://github.com/PaddlePaddle/PaddleClas/blob/develop/docs/zh_CN/installation/install_paddleclas.md)完成环境安装,以进行后续的模型训练及使用流程。
+
+## 数据准备
+
+基于图像分类的行为识别方案直接对视频中的图像帧进行识别,因此模型训练流程与通常的图像分类模型一致。
+
+### 数据集下载
+打电话的行为识别是基于公开数据集[UAV-Human](https://github.com/SUTDCV/UAV-Human)进行训练的。请通过该链接填写相关数据集申请材料后获取下载链接。
+
+在`UAVHuman/ActionRecognition/RGBVideos`路径下包含了该数据集中的RGB视频数据,每个视频的文件名即为其标注信息。
+
+### 训练及测试图像处理
+根据视频文件名中与行为识别相关的`A`字段(即action),我们可以找到期望识别的动作类型数据。
+- 正样本视频:以打电话为例,我们只需找到包含`A024`的文件。
+- 负样本视频:除目标动作以外所有的视频。
+
+鉴于视频数据转化为图像会有较多冗余,对于正样本视频,我们间隔8帧进行采样,并使用行人检测模型处理为半身图像(取检测框的上半部分,即`img = img[:H/2, :, :]`)。正样本视频中采样得到的图像即视为正样本,负样本视频中采样得到的图像即为负样本。
+
+**注意**: 正样本视频中并非全程都是打电话动作,在视频开头结尾部分会出现部分冗余动作,需要移除。
+
+### 标注文件准备
+
+基于图像分类的行为识别方案是借助[PaddleClas](https://github.com/PaddlePaddle/PaddleClas)进行模型训练的。使用该方案训练的模型,需要准备期望识别的图像数据及对应标注文件。根据[PaddleClas数据集格式说明](https://github.com/PaddlePaddle/PaddleClas/blob/develop/docs/zh_CN/data_preparation/classification_dataset.md#1-%E6%95%B0%E6%8D%AE%E9%9B%86%E6%A0%BC%E5%BC%8F%E8%AF%B4%E6%98%8E)准备对应的数据即可。标注文件样例如下,其中`0`,`1`分别是图片对应所属的类别:
+```
+    # 每一行采用"空格"分隔图像路径与标注
+    train/000001.jpg 0
+    train/000002.jpg 0
+    train/000003.jpg 1
+    ...
+```
+
+此外,标签文件`phone_label_list.txt`帮助将分类序号映射到具体的类型名称:
+```
+0 make_a_phone_call  # 类型0
+1 normal  # 类型1
+```
+
+完成上述内容后,放置于`dataset`目录下,文件结构如下:
+```
+data/
+├── images  # 放置所有图片
+├── phone_label_list.txt  # 标签文件
+├── phone_train_list.txt  # 训练列表,包含图片及其对应类型
+└── phone_val_list.txt  # 测试列表,包含图片及其对应类型
+```
+
+## 模型优化
+
+### 检测-跟踪模型优化
+基于分类的行为识别模型效果依赖于前序的检测和跟踪效果,如果实际场景中不能准确检测到行人位置,或是难以在不同帧之间正确分配人物ID,都会使行为识别部分表现受限。如果在实际使用中遇到了上述问题,请参考[目标检测任务二次开发](../detection.md)以及[多目标跟踪任务二次开发](../pphuman_mot.md)对检测/跟踪模型进行优化。
+
+
+### 半身图预测
+在打电话这一动作中,实际通过上半身就能实现动作的区分,因此在训练和预测过程中,将图像由行人全身图换为半身图。
+
+## 新增行为
+
+### 数据准备
+参考前述介绍的内容,完成数据准备的部分,放置于`{root of PaddleClas}/dataset`下:
+```
+data/
+├── images  # 放置所有图片
+├── label_list.txt  # 标签文件
+├── train_list.txt  # 训练列表,包含图片及其对应类型
+└── val_list.txt  # 测试列表,包含图片及其对应类型
+```
+其中,训练及测试列表如下:
+```
+    # 每一行采用"空格"分隔图像路径与标注
+    train/000001.jpg 0
+    train/000002.jpg 0
+    train/000003.jpg 1
+    train/000004.jpg 2  # 新增的类别直接填写对应类别号即可
+    ...
+``` +`label_list.txt`中需要同样对应扩展类型的名称: +``` +0 make_a_phone_call # 类型0 +1 Your New Action # 类型1 + ... +n normal # 类型n +``` + +### 配置文件设置 +在PaddleClas中已经集成了[训练配置文件](https://github.com/PaddlePaddle/PaddleClas/blob/develop/ppcls/configs/practical_models/PPHGNet_tiny_calling_halfbody.yaml),需要重点关注的设置项如下: + +```yaml +# model architecture +Arch: + name: PPHGNet_tiny + class_num: 2 # 对应新增后的数量 + + ... + +# 正确设置image_root与cls_label_path,保证image_root + cls_label_path中的图片路径能够正确访问图片路径 +DataLoader: + Train: + dataset: + name: ImageNetDataset + image_root: ./dataset/ + cls_label_path: ./dataset/phone_train_list_halfbody.txt + + ... + +Infer: + infer_imgs: docs/images/inference_deployment/whl_demo.jpg + batch_size: 1 + transforms: + - DecodeImage: + to_rgb: True + channel_first: False + - ResizeImage: + size: 224 + - NormalizeImage: + scale: 1.0/255.0 + mean: [0.485, 0.456, 0.406] + std: [0.229, 0.224, 0.225] + order: '' + - ToCHWImage: + PostProcess: + name: Topk + topk: 2 # 显示topk的数量,不要超过类别总数 + class_id_map_file: dataset/phone_label_list.txt # 修改后的label_list.txt路径 +``` + +### 模型训练及评估 +#### 模型训练 +通过如下命令启动训练: +```bash +export CUDA_VISIBLE_DEVICES=0,1,2,3 +python3 -m paddle.distributed.launch \ + --gpus="0,1,2,3" \ + tools/train.py \ + -c ./ppcls/configs/practical_models/PPHGNet_tiny_calling_halfbody.yaml \ + -o Arch.pretrained=True +``` +其中 `Arch.pretrained` 为 `True`表示使用预训练权重帮助训练。 + +#### 模型评估 + +训练好模型之后,可以通过以下命令实现对模型指标的评估。 + +```bash +python3 tools/eval.py \ + -c ./ppcls/configs/practical_models/PPHGNet_tiny_calling_halfbody.yaml \ + -o Global.pretrained_model=output/PPHGNet_tiny/best_model +``` + +其中 `-o Global.pretrained_model="output/PPHGNet_tiny/best_model"` 指定了当前最佳权重所在的路径,如果指定其他权重,只需替换对应的路径即可。 + +### 模型导出 +模型导出的详细介绍请参考[这里](https://github.com/PaddlePaddle/PaddleClas/blob/develop/docs/en/inference_deployment/export_model_en.md#2-export-classification-model) +可以参考以下步骤实现: +```python +python tools/export_model.py + -c ./PPHGNet_tiny_calling_halfbody.yaml \ + -o Global.pretrained_model=./output/PPHGNet_tiny/best_model \ + -o Global.save_inference_dir=./output_inference/PPHGNet_tiny_calling_halfbody +``` +然后将导出的模型重命名,并加入配置文件,以适配PP-Human的使用。 +```bash +cd ./output_inference/PPHGNet_tiny_calling_halfbody + +mv inference.pdiparams model.pdiparams +mv inference.pdiparams.info model.pdiparams.info +mv inference.pdmodel model.pdmodel + +# 下载预测配置文件 +wget https://bj.bcebos.com/v1/paddledet/models/pipeline/infer_configs/PPHGNet_tiny_calling_halfbody/infer_cfg.yml +``` + +至此,即可使用PP-Human进行实际预测了。 + + +### 自定义行为输出 +基于人体id的分类的行为识别方案中,将任务转化为对应人物的图像进行图片级别的分类。对应分类的类型最终即视为当前阶段的行为。因此在完成自定义模型的训练及部署的基础上,还需要将分类模型结果转化为最终的行为识别结果作为输出,并修改可视化的显示结果。 + +#### 转换为行为识别结果 +请对应修改[后处理函数](https://github.com/PaddlePaddle/PaddleDetection/blob/develop/deploy/pipeline/pphuman/action_infer.py#L509)。 + +核心代码为: +```python +# 确定分类模型的最高分数输出结果 +cls_id_res = 1 +cls_score_res = -1.0 +for cls_id in range(len(cls_result[idx])): + score = cls_result[idx][cls_id] + if score > cls_score_res: + cls_id_res = cls_id + cls_score_res = score + +# Current now, class 0 is positive, class 1 is negative. 
+if cls_id_res == 1 or (cls_id_res == 0 and
+                       cls_score_res < self.threshold):
+    # 如果分类结果不是目标行为或是置信度未达到阈值,则根据历史结果确定当前帧的行为
+    history_cls, life_remain, history_score = self.result_history.get(
+        tracker_id, [1, self.frame_life, -1.0])
+    cls_id_res = history_cls
+    cls_score_res = 1 - cls_score_res
+    life_remain -= 1
+    if life_remain <= 0 and tracker_id in self.result_history:
+        del (self.result_history[tracker_id])
+    elif tracker_id in self.result_history:
+        self.result_history[tracker_id][1] = life_remain
+    else:
+        self.result_history[
+            tracker_id] = [cls_id_res, life_remain, cls_score_res]
+else:
+    # 分类结果属于目标行为,则使用该结果,并记录到历史结果中
+    self.result_history[
+        tracker_id] = [cls_id_res, self.frame_life, cls_score_res]
+
+ ...
+```
+
+#### 修改可视化输出
+目前基于ID的行为识别,是根据行为识别的结果及预定义的类别名称进行展示的。详细逻辑请见[此处](https://github.com/PaddlePaddle/PaddleDetection/blob/develop/deploy/pipeline/pipeline.py#L1024-L1043)。如果自定义的行为需要修改为其他的展示名称,请对应修改此处,以正确输出对应结果。
diff --git a/docs/advanced_tutorials/customization/action_recognotion/idbased_clas_en.md b/docs/advanced_tutorials/customization/action_recognotion/idbased_clas_en.md
new file mode 100644
index 0000000000000000000000000000000000000000..fc28ccc7029c7ea7f1e63d0ee4f97962747e7ad3
--- /dev/null
+++ b/docs/advanced_tutorials/customization/action_recognotion/idbased_clas_en.md
@@ -0,0 +1,224 @@
+[简体中文](./idbased_clas.md) | English
+
+# Development for Action Recognition Based on Classification with Human ID
+
+## Environmental Preparation
+The model for action recognition based on classification with human id is trained with [PaddleClas](https://github.com/PaddlePaddle/PaddleClas). Please refer to [Install PaddleClas](https://github.com/PaddlePaddle/PaddleClas/blob/release/2.4/docs/en/installation/install_paddleclas_en.md) to complete the environment installation for the subsequent model training and usage processes.
+
+## Data Preparation
+
+This solution directly recognizes the image frames of a video, so the model training process is the same as that of a usual image classification model.
+
+### Dataset Download
+
+The phone-call action recognition model is trained on the public dataset [UAV-Human](https://github.com/SUTDCV/UAV-Human). Please fill in the relevant application materials through this link to obtain the download link.
+
+The RGB videos of this dataset are included under the `UAVHuman/ActionRecognition/RGBVideos` path, and the file name of each video is its annotation information.
+
+### Image Processing for Training and Validation
+In the video file name, the `A` field (i.e. action) is related to action recognition, from which we can find the video data of the action types we expect to recognize.
+- Positive sample video: Taking phone calls as an example, we just need to find the files containing `A024`.
+- Negative sample video: All videos except the target action.
+
+Since converting video data into images produces much redundancy, for positive sample videos we sample at intervals of 8 frames, and use the pedestrian detection model to process each sampled frame into a half-body image (the upper half of the detection box, that is, `img = img[:H/2, :, :]`). The images sampled from positive sample videos are regarded as positive samples, and those sampled from negative sample videos as negative samples.
+
+**Note**: A positive sample video is not entirely the phone-call action.
There will be some redundant actions at the beginning and end of the video, which need to be removed.
+
+
+### Preparation for Annotation File
+The model is trained with [PaddleClas](https://github.com/PaddlePaddle/PaddleClas), so the desired image data and corresponding annotation files need to be prepared. Please refer to [Image Classification Datasets](https://github.com/PaddlePaddle/PaddleClas/blob/release/2.4/docs/en/data_preparation/classification_dataset_en.md) to prepare the data. An example of an annotation file is as follows, where `0` and `1` are the categories of the corresponding images:
+
+```
+    # Each line uses "space" to separate the image path and label
+    train/000001.jpg 0
+    train/000002.jpg 0
+    train/000003.jpg 1
+    ...
+```
+
+Additionally, the label file `phone_label_list.txt` helps map category numbers to specific type names:
+```
+0 make_a_phone_call  # type 0
+1 normal  # type 1
+```
+
+After the above preparation is finished, place the data under the `dataset` directory; the file structure is as follows:
+```
+data/
+├── images  # All images
+├── phone_label_list.txt  # Label file
+├── phone_train_list.txt  # Training list, including pictures and their corresponding types
+└── phone_val_list.txt  # Validation list, including pictures and their corresponding types
+```
+
+## Model Optimization
+
+### Detection-Tracking Model Optimization
+The performance of action recognition based on classification with human id depends on the upstream detection and tracking models. If the pedestrian location cannot be accurately detected in the actual scene, or it is difficult to correctly assign the person ID between different frames, the performance of the action recognition part will be limited. If you encounter the above problems in actual use, please refer to [Secondary Development of Detection Task](../detection_en.md) and [Secondary Development of Multi-target Tracking Task](../pphuman_mot_en.md) for detection/tracking model optimization.
+
+
+### Half-Body Prediction
+In the phone-call action, the action classification can be achieved from the upper-body image alone. Therefore, during the training and prediction process, the image is changed from the pedestrian's full body to the half body.
+
+## Add New Action
+
+### Data Preparation
+Referring to the previous introduction, complete the data preparation part and place it under `{root of PaddleClas}/dataset`:
+
+```
+data/
+├── images  # All images
+├── label_list.txt  # Label file
+├── train_list.txt  # Training list, including pictures and their corresponding types
+└── val_list.txt  # Validation list, including pictures and their corresponding types
+```
+The training and validation lists are as follows:
+```
+    # Each line uses "space" to separate the image path and label
+    train/000001.jpg 0
+    train/000002.jpg 0
+    train/000003.jpg 1
+    train/000004.jpg 2  # For the newly added categories, simply fill in the corresponding category number.
+    ...
+```
+`label_list.txt` should likewise contain the names of the extended types:
+```
+0 make_a_phone_call  # class 0
+1 Your New Action  # class 1
+ ...
+n normal  # class n
+```
+
+### Configuration File Settings
+The [training configuration file](https://github.com/PaddlePaddle/PaddleClas/blob/develop/ppcls/configs/practical_models/PPHGNet_tiny_calling_halfbody.yaml) has been integrated in PaddleClas.
The settings that need particular attention are as follows:
+
+```yaml
+# model architecture
+Arch:
+  name: PPHGNet_tiny
+  class_num: 2  # Corresponding to the number of action categories
+
+  ...
+
+# Set image_root and cls_label_path correctly, so that image_root plus each image path in cls_label_path resolves to a valid image file
+DataLoader:
+  Train:
+    dataset:
+      name: ImageNetDataset
+      image_root: ./dataset/
+      cls_label_path: ./dataset/phone_train_list_halfbody.txt
+
+  ...
+
+Infer:
+  infer_imgs: docs/images/inference_deployment/whl_demo.jpg
+  batch_size: 1
+  transforms:
+    - DecodeImage:
+        to_rgb: True
+        channel_first: False
+    - ResizeImage:
+        size: 224
+    - NormalizeImage:
+        scale: 1.0/255.0
+        mean: [0.485, 0.456, 0.406]
+        std: [0.229, 0.224, 0.225]
+        order: ''
+    - ToCHWImage:
+  PostProcess:
+    name: Topk
+    topk: 2  # Number of top-k results to display; do not exceed the total number of categories
+    class_id_map_file: dataset/phone_label_list.txt  # path of label_list.txt
+```
+
+### Model Training and Evaluation
+#### Model Training
+Start training with the following command:
+```bash
+export CUDA_VISIBLE_DEVICES=0,1,2,3
+python3 -m paddle.distributed.launch \
+    --gpus="0,1,2,3" \
+    tools/train.py \
+        -c ./ppcls/configs/practical_models/PPHGNet_tiny_calling_halfbody.yaml \
+        -o Arch.pretrained=True
+```
+where `Arch.pretrained=True` means using pretrained weights to help with training.
+
+#### Model Evaluation
+After training the model, use the following command to evaluate the model metrics.
+```bash
+python3 tools/eval.py \
+    -c ./ppcls/configs/practical_models/PPHGNet_tiny_calling_halfbody.yaml \
+    -o Global.pretrained_model=output/PPHGNet_tiny/best_model
+```
+where `-o Global.pretrained_model="output/PPHGNet_tiny/best_model"` specifies the path of the current best weights; to use other weights, just replace the corresponding path.
+
+#### Model Export
+For a detailed introduction to model export, please refer to [here](https://github.com/PaddlePaddle/PaddleClas/blob/develop/docs/en/inference_deployment/export_model_en.md#2-export-classification-model).
+You can refer to the following steps:
+
+```bash
+python tools/export_model.py \
+    -c ./PPHGNet_tiny_calling_halfbody.yaml \
+    -o Global.pretrained_model=./output/PPHGNet_tiny/best_model \
+    -o Global.save_inference_dir=./output_inference/PPHGNet_tiny_calling_halfbody
+```
+
+Then rename the exported model and add the configuration file to suit the usage of PP-Human.
+```bash
+cd ./output_inference/PPHGNet_tiny_calling_halfbody
+
+mv inference.pdiparams model.pdiparams
+mv inference.pdiparams.info model.pdiparams.info
+mv inference.pdmodel model.pdmodel
+
+# Download configuration file for inference
+wget https://bj.bcebos.com/v1/paddledet/models/pipeline/infer_configs/PPHGNet_tiny_calling_halfbody/infer_cfg.yml
+```
+
+At this point, this model can be used in PP-Human.
+
+### Custom Action Output
+In action recognition based on classification with human id, the task is defined as an image-level classification task for the corresponding person, and the resulting class is regarded as the action of the current stage. Therefore, on the basis of completing the training and deployment of the custom model, the classification model results also need to be converted into the final action recognition output, and the visualized display result should be modified accordingly.
+ +Please modify the [postprocessing function](https://github.com/PaddlePaddle/PaddleDetection/blob/develop/deploy/pipeline/pphuman/action_infer.py#L509). + +The core code are: +```python +# Get the highest score output of the classification model +cls_id_res = 1 +cls_score_res = -1.0 +for cls_id in range(len(cls_result[idx])): + score = cls_result[idx][cls_id] + if score > cls_score_res: + cls_id_res = cls_id + cls_score_res = score + +# Current now, class 0 is positive, class 1 is negative. +if cls_id_res == 1 or (cls_id_res == 0 and + cls_score_res < self.threshold): + # If the classification result is not the target action or its confidence does not reach the threshold, + # determine the action type of the current frame according to the historical results + history_cls, life_remain, history_score = self.result_history.get( + tracker_id, [1, self.frame_life, -1.0]) + cls_id_res = history_cls + cls_score_res = 1 - cls_score_res + life_remain -= 1 + if life_remain <= 0 and tracker_id in self.result_history: + del (self.result_history[tracker_id]) + elif tracker_id in self.result_history: + self.result_history[tracker_id][1] = life_remain + else: + self.result_history[ + tracker_id] = [cls_id_res, life_remain, cls_score_res] +else: + # If the classification result belongs to the target action, use the result and record it in the historical result + self.result_history[ + tracker_id] = [cls_id_res, self.frame_life, cls_score_res] + + ... +``` + +#### Modify Visual Output +At present, ID-based action recognition is displayed based on the results of action recognition and predefined category names. For the detail, please refer to [here](https://github.com/PaddlePaddle/PaddleDetection/blob/develop/deploy/pipeline/pipeline.py#L1024-L1043). If the custom action needs to be modified to another display name, please modify it accordingly to output the corresponding result. 
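+
+For example, a minimal sketch of such a display mapping might look like the following. This is an illustration only: `basic_action_map`, `action_display_text` and the threshold value are hypothetical names and values for this sketch, not the actual variables used in `pipeline.py`.
+
+```python
+# Hypothetical illustration: map the classifier's output id to a display name.
+basic_action_map = {0: "make_a_phone_call", 1: "normal"}
+
+def action_display_text(cls_id_res, cls_score_res, threshold=0.8):
+    """Return the text drawn on the frame for one tracked person."""
+    name = basic_action_map.get(cls_id_res, "unknown")
+    if name == "normal" or cls_score_res < threshold:
+        return ""  # draw nothing for the negative class or low confidence
+    return "{} {:.2f}".format(name, cls_score_res)
+```
+
+When a new action is added, extending such a map (together with the label list used at training time) is enough to change what is displayed.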
diff --git a/docs/advanced_tutorials/customization/action_recognotion/idbased_det.md b/docs/advanced_tutorials/customization/action_recognotion/idbased_det.md new file mode 100644 index 0000000000000000000000000000000000000000..02c7d9940c625074f24b8c651c2bcc9f9c138828 --- /dev/null +++ b/docs/advanced_tutorials/customization/action_recognotion/idbased_det.md @@ -0,0 +1,202 @@ +简体中文 | [English](./idbased_det_en.md) + +# 基于人体id的检测模型开发 + +## 环境准备 + +基于人体id的检测方案是直接使用[PaddleDetection](https://github.com/PaddlePaddle/PaddleDetection)的功能进行模型训练的。请按照[安装说明](https://github.com/PaddlePaddle/PaddleDetection/blob/develop/docs/tutorials/INSTALL_cn.md)完成环境安装,以进行后续的模型训练及使用流程。 + +## 数据准备 +基于检测的行为识别方案中,数据准备的流程与一般的检测模型一致,详情可参考[目标检测数据准备](../../../tutorials/data/PrepareDetDataSet.md)。将图像和标注数据组织成PaddleDetection中支持的格式之一即可。 + +**注意** : 在实际使用的预测过程中,使用的是单人图像进行预测,因此在训练过程中建议将图像裁剪为单人图像,再进行烟头检测框的标注,以提升准确率。 + + +## 模型优化 + +### 检测-跟踪模型优化 +基于检测的行为识别模型效果依赖于前序的检测和跟踪效果,如果实际场景中不能准确检测到行人位置,或是难以正确在不同帧之间正确分配人物ID,都会使行为识别部分表现受限。如果在实际使用中遇到了上述问题,请参考[目标检测任务二次开发](../detection.md)以及[多目标跟踪任务二次开发](../pphuman_mot.md)对检测/跟踪模型进行优化。 + + +### 更大的分辨率 +烟头的检测在监控视角下是一个典型的小目标检测问题,使用更大的分辨率有助于提升模型整体的识别率 + +### 预训练模型 +加入小目标场景数据集VisDrone下的预训练模型进行训练,模型mAP由38.1提升到39.7。 + +## 新增行为 +### 数据准备 +参考[目标检测数据准备](../../../tutorials/data/PrepareDetDataSet.md)完成训练数据准备。 + +准备完成后,数据路径为 +``` +dataset/smoking +├── smoking # 存放所有的图片 +│   ├── 1.jpg +│   ├── 2.jpg +├── smoking_test_cocoformat.json # 测试标注文件 +├── smoking_train_cocoformat.json # 训练标注文件 +``` + +以`COCO`格式为例,完成后的json标注文件内容如下: + +```json +# images字段下包含了图像的路径,id及对应宽高信息 + "images": [ + { + "file_name": "smoking/1.jpg", + "id": 0, # 此处id为图片id序号,不要重复 + "height": 437, + "width": 212 + }, + { + "file_name": "smoking/2.jpg", + "id": 1, + "height": 655, + "width": 365 + }, + + ... + +# categories 字段下包含所有类别信息,如果希望新增更多的检测类别,请在这里增加, 示例如下。 + "categories": [ + { + "supercategory": "cigarette", + "id": 1, + "name": "cigarette" + }, + { + "supercategory": "Class_Defined_by_Yourself", + "id": 2, + "name": "Class_Defined_by_Yourself" + }, + + ... 
+ +# annotations 字段下包含了所有目标实例的信息,包括类别,检测框坐标, id, 所属图像id等信息 + "annotations": [ + { + "category_id": 1, # 对应定义的类别,在这里1代表cigarette + "bbox": [ + 97.0181345931, + 332.7033243081, + 7.5943999555, + 16.4545332369 + ], + "id": 0, # 此处id为实例的id序号,不要重复 + "image_id": 0, # 此处为实例所在图片的id序号,可能重复,此时即一张图片上有多个实例对象 + "iscrowd": 0, + "area": 124.96230648208665 + }, + { + "category_id": 2, # 对应定义的类别,在这里2代表Class_Defined_by_Yourself + "bbox": [ + 114.3895698372, + 221.9131122343, + 25.9530363697, + 50.5401234568 + ], + "id": 1, + "image_id": 1, + "iscrowd": 0, + "area": 1311.6696622034585 +``` + +### 配置文件设置 +参考[配置文件](../../../../configs/pphuman/ppyoloe_crn_s_80e_smoking_visdrone.yml), 其中需要关注重点如下: + +```yaml +metric: COCO +num_classes: 1 # 如果新增了更多的类别,请对应修改此处 + +# 正确设置image_dir,anno_path,dataset_dir +# 保证dataset_dir + anno_path 能正确对应标注文件的路径 +# 保证dataset_dir + image_dir + 标注文件中的图片路径可以正确对应到图片路径 +TrainDataset: + !COCODataSet + image_dir: "" + anno_path: smoking_train_cocoformat.json + dataset_dir: dataset/smoking + data_fields: ['image', 'gt_bbox', 'gt_class', 'is_crowd'] + +EvalDataset: + !COCODataSet + image_dir: "" + anno_path: smoking_test_cocoformat.json + dataset_dir: dataset/smoking + +TestDataset: + !ImageFolder + anno_path: smoking_test_cocoformat.json + dataset_dir: dataset/smoking +``` + +### 模型训练及评估 +#### 模型训练 + +参考[PP-YOLOE](../../../../configs/ppyoloe/README_cn.md),执行下列步骤实现 +```bash +# At Root of PaddleDetection + +python -m paddle.distributed.launch --gpus 0,1,2,3 tools/train.py -c configs/pphuman/ppyoloe_crn_s_80e_smoking_visdrone.yml --eval +``` + +#### 模型评估 + +训练好模型之后,可以通过以下命令实现对模型指标的评估 +```bash +# At Root of PaddleDetection + +python tools/eval.py -c configs/pphuman/ppyoloe_crn_s_80e_smoking_visdrone.yml +``` + +### 模型导出 +注意:如果在Tensor-RT环境下预测, 请开启`-o trt=True`以获得更好的性能 +```bash +# At Root of PaddleDetection + +python tools/export_model.py -c configs/pphuman/ppyoloe_crn_s_80e_smoking_visdrone.yml -o weights=output/ppyoloe_crn_s_80e_smoking_visdrone/best_model trt=True +``` + +导出模型后,可以得到: +``` +ppyoloe_crn_s_80e_smoking_visdrone/ +├── infer_cfg.yml +├── model.pdiparams +├── model.pdiparams.info +└── model.pdmodel +``` + +至此,即可使用PP-Human进行实际预测了。 + + +### 自定义行为输出 +基于人体id的检测的行为识别方案中,将任务转化为在对应人物的图像中检测目标特征对象。当目标特征对象被检测到时,则视为行为正在发生。因此在完成自定义模型的训练及部署的基础上,还需要将检测模型结果转化为最终的行为识别结果作为输出,并修改可视化的显示结果。 + +#### 转换为行为识别结果 +请对应修改[后处理函数](https://github.com/PaddlePaddle/PaddleDetection/blob/develop/deploy/pipeline/pphuman/action_infer.py#L338)。 +核心代码为: +```python +# 解析检测模型输出,并筛选出置信度高于阈值的有效检测框。 +# Current now, class 0 is positive, class 1 is negative. +action_ret = {'class': 1.0, 'score': -1.0} +box_num = np_boxes_num[idx] +boxes = det_result['boxes'][cur_box_idx:cur_box_idx + box_num] +cur_box_idx += box_num +isvalid = (boxes[:, 1] > self.threshold) & (boxes[:, 0] == 0) +valid_boxes = boxes[isvalid, :] + +if valid_boxes.shape[0] >= 1: + # 存在有效检测框时,行为识别结果的类别和分数对应修改 + action_ret['class'] = valid_boxes[0, 0] + action_ret['score'] = valid_boxes[0, 1] + # 由于动作的持续性,有效检测结果可复用一定帧数 + self.result_history[ + tracker_id] = [0, self.frame_life, valid_boxes[0, 1]] +else: + # 不存在有效检测框,则根据历史检测数据确定当前帧的结果 + ... 
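+
+    # NOTE: the "..." above is elided in the source file. Purely as a
+    # hypothetical illustration (an assumption, not the actual code), the
+    # fallback would reuse the cached history entry and decrement its lifetime:
+    #   history_cls, life_remain, history_score = self.result_history.get(
+    #       tracker_id, [1, self.frame_life, -1.0])  # default: negative class
+    #   life_remain -= 1  # drop the entry once life_remain reaches 0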
+```
+
+#### 修改可视化输出
+目前基于ID的行为识别,是根据行为识别的结果及预定义的类别名称进行展示的。详细逻辑请见[此处](https://github.com/PaddlePaddle/PaddleDetection/blob/develop/deploy/pipeline/pipeline.py#L1024-L1043)。如果自定义的行为需要修改为其他的展示名称,请对应修改此处,以正确输出对应结果。
diff --git a/docs/advanced_tutorials/customization/action_recognotion/idbased_det_en.md b/docs/advanced_tutorials/customization/action_recognotion/idbased_det_en.md
new file mode 100644
index 0000000000000000000000000000000000000000..49cac4343ae2296d9d97cff16f38002eedbb2dcf
--- /dev/null
+++ b/docs/advanced_tutorials/customization/action_recognotion/idbased_det_en.md
@@ -0,0 +1,199 @@
+[简体中文](./idbased_det.md) | English
+
+# Development for Action Recognition Based on Detection with Human ID
+
+## Environmental Preparation
+The model of action recognition based on detection with human id is trained with [PaddleDetection](https://github.com/PaddlePaddle/PaddleDetection). Please refer to [Installation](../../../tutorials/INSTALL.md) to complete the environment installation for subsequent model training and usage processes.
+
+## Data Preparation
+
+The model of action recognition based on detection with human id directly recognizes the image frames of the video, so the data preparation process is the same as that of a general detection model. For details, please refer to [Data Preparation for Detection](../../../tutorials/data/PrepareDetDataSet_en.md). Please process the images and annotations into one of the formats PaddleDetection supports.
+
+**Note**: In the actual prediction process a single-person image is used, so it is recommended to crop the images to single-person images during training and then label the cigarette bounding boxes, to improve accuracy.
+
+
+## Model Optimization
+### Detection-Tracking Model Optimization
+The performance of action recognition based on detection with human id depends on the pre-order detection and tracking models. If the pedestrian location cannot be accurately detected in the actual scene, or it is difficult to correctly assign the person ID between different frames, the performance of the action recognition part will be limited. If you encounter the above problems in actual use, please refer to [Secondary Development of Detection Task](../detection_en.md) and [Secondary Development of Multi-target Tracking Task](../pphuman_mot_en.md) for detection/track model optimization.
+
+### Larger resolution
+Cigarette detection is a typical small-object detection problem from the surveillance perspective. Using a larger resolution can help improve the overall performance of the model.
+
+### Pretrained model
+Training from the pretrained model on the small-object dataset VisDrone increases the mAP of the model from 38.1 to 39.7.
+
+## Add New Action
+### Data Preparation
+Please refer to [Data Preparation for Detection](../../../tutorials/data/PrepareDetDataSet_en.md) to complete the data preparation part.
+
+When this step is finished, the directory will look like:
+```
+dataset/smoking
+├── smoking # all images
+│   ├── 1.jpg
+│   ├── 2.jpg
+├── smoking_test_cocoformat.json # Validation file
+├── smoking_train_cocoformat.json # Training file
+```
+
+Taking the `COCO` format as an example, the content of the completed json annotation file is as follows:
+
+```json
+# The "images" field contains the path, id and corresponding width and height information of the images.
+ "images": [ + { + "file_name": "smoking/1.jpg", + "id": 0, # Here id is the picture id serial number, do not duplicate + "height": 437, + "width": 212 + }, + { + "file_name": "smoking/2.jpg", + "id": 1, + "height": 655, + "width": 365 + }, + + ... + +# The "categories" field contains all category information. If you want to add more detection categories, please add them here. The example is as follows. + "categories": [ + { + "supercategory": "cigarette", + "id": 1, + "name": "cigarette" + }, + { + "supercategory": "Class_Defined_by_Yourself", + "id": 2, + "name": "Class_Defined_by_Yourself" + }, + + ... + +# The "annotations" field contains information about all instances, including category, bounding box coordinates, id, image id and other information + "annotations": [ + { + "category_id": 1, # Corresponding to the defined category, where 1 represents cigarette + "bbox": [ + 97.0181345931, + 332.7033243081, + 7.5943999555, + 16.4545332369 + ], + "id": 0, # Here id is the id serial number of the instance, do not duplicate + "image_id": 0, # Here is the id serial number of the image where the instance is located, which may be duplicated. In this case, there are multiple instance objects on one image. + "iscrowd": 0, + "area": 124.96230648208665 + }, + { + "category_id": 2, # Corresponding to the defined category, where 2 represents Class_Defined_by_Yourself + "bbox": [ + 114.3895698372, + 221.9131122343, + 25.9530363697, + 50.5401234568 + ], + "id": 1, + "image_id": 1, + "iscrowd": 0, + "area": 1311.6696622034585 +``` + +### Configuration File Settings +Refer to [Configuration File](../../../../configs/pphuman/ppyoloe_crn_s_80e_smoking_visdrone.yml), the key should be paid attention to are as follows: +```yaml +metric: COCO +num_classes: 1 # If more categories are added, please modify here accordingly + +# Set image_dir,anno_path,dataset_dir correctly +# Ensure that dataset_dir + anno_path can correctly access to the path of the annotation file +# Ensure that dataset_dir + image_dir + the image path in the annotation file can correctly access to the image path +TrainDataset: + !COCODataSet + image_dir: "" + anno_path: smoking_train_cocoformat.json + dataset_dir: dataset/smoking + data_fields: ['image', 'gt_bbox', 'gt_class', 'is_crowd'] + +EvalDataset: + !COCODataSet + image_dir: "" + anno_path: smoking_test_cocoformat.json + dataset_dir: dataset/smoking + +TestDataset: + !ImageFolder + anno_path: smoking_test_cocoformat.json + dataset_dir: dataset/smoking +``` + +### Model Training And Evaluation +#### Model Training +As [PP-YOLOE](../../../../configs/ppyoloe/README.md), start training with the following command: +```bash +# At Root of PaddleDetection + +python -m paddle.distributed.launch --gpus 0,1,2,3 tools/train.py -c configs/pphuman/ppyoloe_crn_s_80e_smoking_visdrone.yml --eval +``` + +#### Model Evaluation +After training the model, use the following command to evaluate the model metrics. + +```bash +# At Root of PaddleDetection + +python tools/eval.py -c configs/pphuman/ppyoloe_crn_s_80e_smoking_visdrone.yml +``` + +#### Model Export +Note: If predicting in Tensor-RT environment, please enable `-o trt=True` for better performance. 
+```bash
+# At Root of PaddleDetection
+
+python tools/export_model.py -c configs/pphuman/ppyoloe_crn_s_80e_smoking_visdrone.yml -o weights=output/ppyoloe_crn_s_80e_smoking_visdrone/best_model trt=True
+```
+
+After exporting the model, you can get:
+```
+ppyoloe_crn_s_80e_smoking_visdrone/
+├── infer_cfg.yml
+├── model.pdiparams
+├── model.pdiparams.info
+└── model.pdmodel
+```
+
+At this point, this model can be used in PP-Human.
+
+### Custom Action Output
+In action recognition based on detection with human id, the task is defined as detecting a target object in the image of the corresponding person. When the target object is detected, the action is considered to be occurring. Therefore, on the basis of completing the training and deployment of the custom model, it is also necessary to convert the detection results into the final action recognition results as output, and to modify the displayed result of the visualization.
+
+#### Convert to Action Recognition Result
+Please modify the [postprocessing function](https://github.com/PaddlePaddle/PaddleDetection/blob/develop/deploy/pipeline/pphuman/action_infer.py#L338).
+
+The core code is:
+```python
+# Parse the detection model output and filter out valid detection boxes whose confidence is higher than the threshold.
+# Currently, class 0 is positive and class 1 is negative.
+action_ret = {'class': 1.0, 'score': -1.0}
+box_num = np_boxes_num[idx]
+boxes = det_result['boxes'][cur_box_idx:cur_box_idx + box_num]
+cur_box_idx += box_num
+isvalid = (boxes[:, 1] > self.threshold) & (boxes[:, 0] == 0)
+valid_boxes = boxes[isvalid, :]
+
+if valid_boxes.shape[0] >= 1:
+    # When there is a valid detection box, the category and score of the action recognition result are updated accordingly.
+    action_ret['class'] = valid_boxes[0, 0]
+    action_ret['score'] = valid_boxes[0, 1]
+    # Due to the continuity of the action, valid detection results can be reused for a certain number of frames.
+    self.result_history[
+        tracker_id] = [0, self.frame_life, valid_boxes[0, 1]]
+else:
+    # If there is no valid detection box, the result of the current frame is determined according to the historical detection results.
+    ...
+```
+
+#### Modify Visual Output
+At present, ID-based action recognition is displayed based on the recognition results and predefined category names. For details, please refer to [here](https://github.com/PaddlePaddle/PaddleDetection/blob/develop/deploy/pipeline/pipeline.py#L1024-L1043). If your custom action should be displayed under a different name, modify the mapping there so that the correct label is output.
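+
+The `else` branch above is elided in the source (`...`). Purely to illustrate the idea of reusing a recent positive result for a limited number of frames, here is a minimal sketch under the assumption that the history maps `tracker_id` to `[class, remaining_frames, score]` as the code above suggests; the function name and the `frame_life` default are hypothetical:
+
+```python
+# Hypothetical sketch of the history fallback; NOT the library's exact code.
+def fallback_to_history(history, tracker_id, frame_life=20):
+    # Default to the negative class when this tracker has no cached result.
+    cls_id, life_remain, score = history.get(tracker_id, [1, frame_life, -1.0])
+    life_remain -= 1
+    if tracker_id in history:
+        if life_remain <= 0:
+            del history[tracker_id]               # the cached result has expired
+        else:
+            history[tracker_id][1] = life_remain  # keep entry, decrement its life
+    return cls_id, score
+
+# Example: no history yet for tracker 7 -> falls back to (1, -1.0).
+print(fallback_to_history({}, tracker_id=7))
+```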
diff --git a/docs/advanced_tutorials/customization/action_recognotion/skeletonbased_rec.md b/docs/advanced_tutorials/customization/action_recognotion/skeletonbased_rec.md new file mode 100644 index 0000000000000000000000000000000000000000..21cca224ba8d794e00d25937e14b6f10aeaf324f --- /dev/null +++ b/docs/advanced_tutorials/customization/action_recognotion/skeletonbased_rec.md @@ -0,0 +1,205 @@ +简体中文 | [English](./skeletonbased_rec_en.md) + +# 基于人体骨骼点的行为识别 + +## 环境准备 + +基于骨骼点的行为识别方案是借助[PaddleVideo](https://github.com/PaddlePaddle/PaddleVideo)进行模型训练的。请按照[安装说明](https://github.com/PaddlePaddle/PaddleVideo/blob/develop/docs/zh-CN/install.md)完成PaddleVideo的环境安装,以进行后续的模型训练及使用流程。 + +## 数据准备 +使用该方案训练的模型,可以参考[此文档](https://github.com/PaddlePaddle/PaddleVideo/tree/develop/applications/PPHuman#%E5%87%86%E5%A4%87%E8%AE%AD%E7%BB%83%E6%95%B0%E6%8D%AE)准备训练数据,以适配PaddleVideo进行训练,其主要流程包含以下步骤: + + +### 数据格式说明 +STGCN是一个基于骨骼点坐标序列进行预测的模型。在PaddleVideo中,训练数据为采用`.npy`格式存储的`Numpy`数据,标签则可以是`.npy`或`.pkl`格式存储的文件。对于序列数据的维度要求为`(N,C,T,V,M)`,当前方案仅支持单人构成的行为(但视频中可以存在多人,每个人独自进行行为识别判断),即`M=1`。 + +| 维度 | 大小 | 说明 | +| ---- | ---- | ---------- | +| N | 不定 | 数据集序列个数 | +| C | 2 | 关键点坐标维度,即(x, y) | +| T | 50 | 动作序列的时序维度(即持续帧数)| +| V | 17 | 每个人物关键点的个数,这里我们使用了`COCO`数据集的定义,具体可见[这里](https://github.com/PaddlePaddle/PaddleDetection/blob/release/2.4/docs/tutorials/PrepareKeypointDataSet_cn.md#COCO%E6%95%B0%E6%8D%AE%E9%9B%86) | +| M | 1 | 人物个数,这里我们每个动作序列只针对单人预测 | + +### 获取序列的骨骼点坐标 +对于一个待标注的序列(这里序列指一个动作片段,可以是视频或有顺序的图片集合)。可以通过模型预测或人工标注的方式获取骨骼点(也称为关键点)坐标。 +- 模型预测:可以直接选用[PaddleDetection KeyPoint模型系列](https://github.com/PaddlePaddle/PaddleDetection/tree/release/2.4/configs/keypoint) 模型库中的模型,并根据`3、训练与测试 - 部署预测 - 检测+keypoint top-down模型联合部署`中的步骤获取目标序列的17个关键点坐标。 +- 人工标注:若对关键点的数量或是定义有其他需求,也可以直接人工标注各个关键点的坐标位置,注意对于被遮挡或较难标注的点,仍需要标注一个大致坐标,否则后续网络学习过程会受到影响。 + + +当使用模型预测获取时,可以参考如下步骤进行,请注意此时在PaddleDetection中进行操作。 + +```bash +# current path is under root of PaddleDetection + +# Step 1: download pretrained inference models. 
+wget https://bj.bcebos.com/v1/paddledet/models/pipeline/mot_ppyoloe_l_36e_pipeline.zip +wget https://bj.bcebos.com/v1/paddledet/models/pipeline/dark_hrnet_w32_256x192.zip +unzip -d output_inference/ mot_ppyoloe_l_36e_pipeline.zip +unzip -d output_inference/ dark_hrnet_w32_256x192.zip + +# Step 2: Get the keypoint coordinarys + +# if your data is image sequence +python deploy/python/det_keypoint_unite_infer.py --det_model_dir=output_inference/mot_ppyoloe_l_36e_pipeline/ --keypoint_model_dir=output_inference/dark_hrnet_w32_256x192 --image_dir={your image directory path} --device=GPU --save_res=True + +# if your data is video +python deploy/python/det_keypoint_unite_infer.py --det_model_dir=output_inference/mot_ppyoloe_l_36e_pipeline/ --keypoint_model_dir=output_inference/dark_hrnet_w32_256x192 --video_file={your video file path} --device=GPU --save_res=True +``` +这样我们会得到一个`det_keypoint_unite_image_results.json`的检测结果文件。内容的具体含义请见[这里](https://github.com/PaddlePaddle/PaddleDetection/blob/release/2.4/deploy/python/det_keypoint_unite_infer.py#L108)。 + + +### 统一序列的时序长度 +由于实际数据中每个动作的长度不一,首先需要根据您的数据和实际场景预定时序长度(在PP-Human中我们采用50帧为一个动作序列),并对数据做以下处理: +- 实际长度超过预定长度的数据,随机截取一个50帧的片段 +- 实际长度不足预定长度的数据:补0,直到满足50帧 +- 恰好等于预定长度的数据: 无需处理 + +注意:在这一步完成后,请严格确认处理后的数据仍然包含了一个完整的行为动作,不会产生预测上的歧义,建议通过可视化数据的方式进行确认。 + +### 保存为PaddleVideo可用的文件格式 +在经过前两步处理后,我们得到了每个人物动作片段的标注,此时我们已有一个列表`all_kpts`,这个列表中包含多个关键点序列片段,其中每一个片段形状为(T, V, C) (在我们的例子中即(50, 17, 2)), 下面进一步将其转化为PaddleVideo可用的格式。 +- 调整维度顺序: 可通过`np.transpose`和`np.expand_dims`将每一个片段的维度转化为(C, T, V, M)的格式。 +- 将所有片段组合并保存为一个文件 + +注意:这里的`class_id`是`int`类型,与其他分类任务类似。例如`0:摔倒, 1:其他`。 + + +我们提供了执行该步骤的[脚本文件](https://github.com/PaddlePaddle/PaddleVideo/blob/develop/applications/PPHuman/datasets/prepare_dataset.py),可以直接处理生成的`det_keypoint_unite_image_results.json`文件,该脚本执行的内容包括解析json文件内容、前述步骤中介绍的整理训练数据及保存数据文件。 + +```bash +mkdir {root of PaddleVideo}/applications/PPHuman/datasets/annotations + +mv det_keypoint_unite_image_results.json {root of PaddleVideo}/applications/PPHuman/datasets/annotations/det_keypoint_unite_image_results_{video_id}_{camera_id}.json + +cd {root of PaddleVideo}/applications/PPHuman/datasets/ + +python prepare_dataset.py +``` + +至此,我们得到了可用的训练数据(`.npy`)和对应的标注文件(`.pkl`)。 + + +## 模型优化 + +### 检测-跟踪模型优化 +基于骨骼点的行为识别模型效果依赖于前序的检测和跟踪效果,如果实际场景中不能准确检测到行人位置,或是难以正确在不同帧之间正确分配人物ID,都会使行为识别部分表现受限。如果在实际使用中遇到了上述问题,请参考[目标检测任务二次开发](../detection.md)以及[多目标跟踪任务二次开发](../pphuman_mot.md)对检测/跟踪模型进行优化。 + +### 关键点模型优化 +骨骼点作为该方案的核心特征,对行人的骨骼点定位效果也决定了行为识别的整体效果。若发现在实际场景中对关键点坐标的识别结果有明显错误,从关键点组成的骨架图像看,已经难以辨别具体动作,可以参考[关键点检测任务二次开发](../keypoint_detection.md)对关键点模型进行优化。 + +### 坐标归一化处理 +在完成骨骼点坐标的获取后,建议根据各人物的检测框进行归一化处理,以消除人物位置、尺度的差异给网络带来的收敛难度。 + + +## 新增行为 + +基于关键点的行为识别方案中,行为识别模型使用的是[ST-GCN](https://arxiv.org/abs/1801.07455),并在[PaddleVideo训练流程](https://github.com/PaddlePaddle/PaddleVideo/blob/develop/docs/zh-CN/model_zoo/recognition/stgcn.md)的基础上修改适配,完成模型训练及导出使用流程。 + + +### 数据准备与配置文件修改 +- 按照`数据准备`, 准备训练数据(`.npy`)和对应的标注文件(`.pkl`)。对应放置在`{root of PaddleVideo}/applications/PPHuman/datasets/`下。 + +- 参考[配置文件](https://github.com/PaddlePaddle/PaddleVideo/blob/develop/applications/PPHuman/configs/stgcn_pphuman.yaml), 需要重点关注的内容如下: + +```yaml +MODEL: #MODEL field + framework: + backbone: + name: "STGCN" + in_channels: 2 # 此处对应数据说明中的C维,表示二维坐标。 + dropout: 0.5 + layout: 'coco_keypoint' + data_bn: True + head: + name: "STGCNHead" + num_classes: 2 # 如果数据中有多种行为类型,需要修改此处使其与预测类型数目一致。 + if_top5: False # 行为类型数量不足5时请设置为False,否则会报错 + +... 
+ + +# 请根据数据路径正确设置train/valid/test部分的数据及label路径 +DATASET: #DATASET field + batch_size: 64 + num_workers: 4 + test_batch_size: 1 + test_num_workers: 0 + train: + format: "SkeletonDataset" #Mandatory, indicate the type of dataset, associate to the 'paddle + file_path: "./applications/PPHuman/datasets/train_data.npy" #mandatory, train data index file path + label_path: "./applications/PPHuman/datasets/train_label.pkl" + + valid: + format: "SkeletonDataset" #Mandatory, indicate the type of dataset, associate to the 'paddlevideo/loader/dateset' + file_path: "./applications/PPHuman/datasets/val_data.npy" #Mandatory, valid data index file path + label_path: "./applications/PPHuman/datasets/val_label.pkl" + + test_mode: True + test: + format: "SkeletonDataset" #Mandatory, indicate the type of dataset, associate to the 'paddlevideo/loader/dateset' + file_path: "./applications/PPHuman/datasets/val_data.npy" #Mandatory, valid data index file path + label_path: "./applications/PPHuman/datasets/val_label.pkl" + + test_mode: True +``` + +### 模型训练与测试 +- 在PaddleVideo中,使用以下命令即可开始训练: + +```bash +# current path is under root of PaddleVideo +python main.py -c applications/PPHuman/configs/stgcn_pphuman.yaml + +# 由于整个任务可能过拟合,建议同时开启验证以保存最佳模型 +python main.py --validate -c applications/PPHuman/configs/stgcn_pphuman.yaml +``` + +- 在训练完成后,采用以下命令进行预测: +```bash +python main.py --test -c applications/PPHuman/configs/stgcn_pphuman.yaml -w output/STGCN/STGCN_best.pdparams +``` + +### 模型导出 +- 在PaddleVideo中,通过以下命令实现模型的导出,得到模型结构文件`STGCN.pdmodel`和模型权重文件`STGCN.pdiparams`,并增加配置文件: +```bash +# current path is under root of PaddleVideo +python tools/export_model.py -c applications/PPHuman/configs/stgcn_pphuman.yaml \ + -p output/STGCN/STGCN_best.pdparams \ + -o output_inference/STGCN + +cp applications/PPHuman/configs/infer_cfg.yml output_inference/STGCN + +# 重命名模型文件,适配PP-Human的调用 +cd output_inference/STGCN +mv STGCN.pdiparams model.pdiparams +mv STGCN.pdiparams.info model.pdiparams.info +mv STGCN.pdmodel model.pdmodel +``` +完成后的导出模型目录结构如下: +``` +STGCN +├── infer_cfg.yml +├── model.pdiparams +├── model.pdiparams.info +├── model.pdmodel +``` + +至此,就可以使用PP-Human进行行为识别的推理了。 + +**注意**:如果在训练时调整了视频序列的长度或关键点的数量,在此处需要对应修改配置文件中`INFERENCE`字段内容,以实现正确预测。 +```yaml +# 序列数据的维度为(N,C,T,V,M) +INFERENCE: + name: 'STGCN_Inference_helper' + num_channels: 2 # 对应C维 + window_size: 50 # 对应T维,请对应调整为数据长度 + vertex_nums: 17 # 对应V维,请对应调整为关键点数目 + person_nums: 1 # 对应M维 +``` + +### 自定义行为输出 +基于人体骨骼点的行为识别方案中,模型输出的分类结果即代表了该人物在一定时间段内行为类型。对应分类的类型最终即视为当前阶段的行为。因此在完成自定义模型的训练及部署的基础上,使用模型输出作为最终结果,修改可视化的显示结果即可。 + +#### 修改可视化输出 +目前基于ID的行为识别,是根据行为识别的结果及预定义的类别名称进行展示的。详细逻辑请见[此处](https://github.com/PaddlePaddle/PaddleDetection/blob/develop/deploy/pipeline/pipeline.py#L1024-L1043)。如果自定义的行为需要修改为其他的展示名称,请对应修改此处,以正确输出对应结果。 diff --git a/docs/advanced_tutorials/customization/action_recognotion/skeletonbased_rec_en.md b/docs/advanced_tutorials/customization/action_recognotion/skeletonbased_rec_en.md new file mode 100644 index 0000000000000000000000000000000000000000..a1e8d1f3ca8096c03cab9bb735306ba0db742474 --- /dev/null +++ b/docs/advanced_tutorials/customization/action_recognotion/skeletonbased_rec_en.md @@ -0,0 +1,200 @@ +[简体中文](./skeletonbased_rec.md) | English + +# Skeleton-based action recognition + +## Environmental Preparation +The skeleton-based action recognition is trained with [PaddleVideo](https://github.com/PaddlePaddle/PaddleVideo). 
Please refer to [Installation](https://github.com/PaddlePaddle/PaddleVideo/blob/develop/docs/en/install.md) to complete the environment installation for subsequent model training and usage processes.
+
+## Data Preparation
+For the skeleton-based model, you can refer to [this document](https://github.com/PaddlePaddle/PaddleVideo/tree/develop/applications/PPHuman#%E5%87%86%E5%A4%87%E8%AE%AD%E7%BB%83%E6%95%B0%E6%8D%AE) to prepare training data adapted to PaddleVideo. The main process includes the following steps:
+
+
+### Data Format Description
+STGCN is a model based on the sequence of skeleton point coordinates. In PaddleVideo, training data is `Numpy` data stored in `.npy` format, and labels can be files stored in `.npy` or `.pkl` format. The dimension requirement for sequence data is `(N,C,T,V,M)`; the current solution only supports actions performed by a single person (but there can be multiple people in the video, and each person performs action recognition separately), that is, `M=1`.
+
+| Dim | Size | Description |
+| ---- | ---- | ---------- |
+| N | Not Fixed | The number of sequences in the dataset |
+| C | 2 | Keypoint coordinate, i.e. (x, y) |
+| T | 50 | The temporal dimension of the action sequence (i.e. the number of continuous frames)|
+| V | 17 | The number of keypoints of each person, here we use the definition of the `COCO` dataset, see [here](https://github.com/PaddlePaddle/PaddleDetection/blob/release/2.4/docs/tutorials/PrepareKeypointDataSet_en.md#description-for-coco-datasetkeypoint) |
+| M | 1 | The number of persons, here we only predict a single person for each action sequence |
+
+### Get The Skeleton Point Coordinates of The Sequence
+For a sequence to be labeled (here a sequence refers to an action segment, which can be a video or an ordered collection of pictures), the coordinates of skeleton points (also known as keypoints) can be obtained through model prediction or manual annotation.
+- Model prediction: you can directly select a model from the [PaddleDetection KeyPoint Models](https://github.com/PaddlePaddle/PaddleDetection/blob/release/2.4/configs/keypoint/README_en.md) and follow the steps in `3, training and testing - Deployment Prediction - Detect + keypoint top-down model joint deployment` to get the 17 keypoint coordinates of the target sequence.
+- Manual annotation: if the number or the definition of the keypoints does not meet your needs, you can also annotate the coordinates of each keypoint manually. Note that occluded or hard-to-annotate points still need an approximate coordinate; otherwise, the subsequent training of the network will be affected.
+
+When using model prediction to obtain the coordinates, you can refer to the following steps. Please note that these steps are performed in PaddleDetection.
+
+```bash
+# current path is under root of PaddleDetection
+
+# Step 1: download pretrained inference models.
+wget https://bj.bcebos.com/v1/paddledet/models/pipeline/mot_ppyoloe_l_36e_pipeline.zip
+wget https://bj.bcebos.com/v1/paddledet/models/pipeline/dark_hrnet_w32_256x192.zip
+unzip -d output_inference/ mot_ppyoloe_l_36e_pipeline.zip
+unzip -d output_inference/ dark_hrnet_w32_256x192.zip
+
+# Step 2: Get the keypoint coordinates
+
+# if your data is an image sequence
+python deploy/python/det_keypoint_unite_infer.py --det_model_dir=output_inference/mot_ppyoloe_l_36e_pipeline/ --keypoint_model_dir=output_inference/dark_hrnet_w32_256x192 --image_dir={your image directory path} --device=GPU --save_res=True
+
+# if your data is a video
+python deploy/python/det_keypoint_unite_infer.py --det_model_dir=output_inference/mot_ppyoloe_l_36e_pipeline/ --keypoint_model_dir=output_inference/dark_hrnet_w32_256x192 --video_file={your video file path} --device=GPU --save_res=True
+```
+We can get a detection result file named `det_keypoint_unite_image_results.json`. The detailed meaning of its content can be seen [here](https://github.com/PaddlePaddle/PaddleDetection/blob/release/2.4/deploy/python/det_keypoint_unite_infer.py#L108).
+
+
+### Uniform Sequence Length
+Since the length of each action in the actual data is different, the first step is to predetermine the sequence length according to your data and the actual scene (in PP-Human, we use 50 frames for an action sequence), and then process the data as follows:
+- Data whose actual length exceeds the predetermined length: randomly crop a 50-frame segment
+- Data whose actual length is less than the predetermined length: pad with 0 until 50 frames are reached
+- Data whose length is exactly equal to the predetermined length: no processing required
+
+Note: After this step is completed, please strictly confirm that the processed data still contains a complete action and that there is no ambiguity in prediction. It is recommended to confirm this by visualizing the data.
+
+### Save in a Format Usable by PaddleVideo
+After the first two steps of processing, we have the annotation of each person's action fragment. At this point we have a list `all_kpts`, which contains multiple keypoint sequence fragments, each with shape (T, V, C) ((50, 17, 2) in our case); these need to be further converted into a format usable by PaddleVideo.
+- Adjust the dimension order: `np.transpose` and `np.expand_dims` can be used to convert each fragment into the (C, T, V, M) format.
+- Combine and save all clips as one file.
+
+Note: `class_id` is an `int` variable, as in other classification tasks. For example, `0: falling, 1: other`.
+
+We provide a [script file](https://github.com/PaddlePaddle/PaddleVideo/blob/develop/applications/PPHuman/datasets/prepare_dataset.py) to do this step, which can directly process the generated `det_keypoint_unite_image_results.json` file. The script parses the json file, organizes the training data as described in the preceding steps and saves the data files.
+
+```bash
+mkdir {root of PaddleVideo}/applications/PPHuman/datasets/annotations
+
+mv det_keypoint_unite_image_results.json {root of PaddleVideo}/applications/PPHuman/datasets/annotations/det_keypoint_unite_image_results_{video_id}_{camera_id}.json
+
+cd {root of PaddleVideo}/applications/PPHuman/datasets/
+
+python prepare_dataset.py
+```
+
+Now, we have available training data (`.npy`) and corresponding annotation files (`.pkl`).
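+
+As a minimal sketch of the two conversion bullets above (assuming `all_kpts` holds equally sized `(50, 17, 2)` fragments and `all_labels` the corresponding `int` class ids; the exact file layout expected by PaddleVideo is produced by the official `prepare_dataset.py`, so treat the save format here as an assumption):
+
+```python
+import pickle
+
+import numpy as np
+
+def save_dataset(all_kpts, all_labels, data_path="train_data.npy",
+                 label_path="train_label.pkl"):
+    # (T, V, C) -> (C, T, V), then append the M=1 person axis -> (C, T, V, M).
+    clips = [np.expand_dims(np.transpose(k, (2, 0, 1)), -1) for k in all_kpts]
+    np.save(data_path, np.stack(clips))  # stacked shape: (N, C, T, V, M)
+    with open(label_path, "wb") as f:
+        pickle.dump(all_labels, f)       # label layout may differ; see the script
+```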
+
+## Model Optimization
+### Detection-Tracking Model Optimization
+The performance of skeleton-based action recognition depends on the pre-order detection and tracking models. If the pedestrian location cannot be accurately detected in the actual scene, or it is difficult to correctly assign the person ID between different frames, the performance of the action recognition part will be limited. If you encounter the above problems in actual use, please refer to [Secondary Development of Detection Task](../detection_en.md) and [Secondary Development of Multi-target Tracking Task](../pphuman_mot_en.md) for detection/track model optimization.
+
+### Keypoint Model Optimization
+As the core feature of the scheme, the skeleton point positioning performance also determines the overall effect of action recognition. If the keypoint coordinates recognized in the actual scene are obviously wrong, so that the specific action can hardly be distinguished from the skeleton image composed of the keypoints, you can refer to [Secondary Development of Keypoint Detection Task](../keypoint_detection_en.md) to optimize the keypoint model.
+
+### Coordinate Normalization
+After getting the coordinates of the skeleton points, it is recommended to perform normalization according to the detection bounding box of each person, to reduce the convergence difficulty brought by differences in the position and scale of the persons.
+
+## Add New Action
+
+In skeleton-based action recognition, the model is [ST-GCN](https://arxiv.org/abs/1801.07455), adapted from the PaddleVideo [training process](https://github.com/PaddlePaddle/PaddleVideo/blob/develop/docs/en/model_zoo/recognition/stgcn.md) to complete model training and export.
+
+### Data Preparation And Configuration File Settings
+- Prepare the training data (`.npy`) and the corresponding annotation file (`.pkl`) according to `Data Preparation`, and place them under `{root of PaddleVideo}/applications/PPHuman/datasets/`.
+
+- Refer to the [configuration file](https://github.com/PaddlePaddle/PaddleVideo/blob/develop/applications/PPHuman/configs/stgcn_pphuman.yaml); the key items to focus on are as follows:
+
+```yaml
+MODEL: #MODEL field
+    framework:
+    backbone:
+        name: "STGCN"
+        in_channels: 2 # This corresponds to the C dimension in the data format description, representing two-dimensional coordinates.
+        dropout: 0.5
+        layout: 'coco_keypoint'
+        data_bn: True
+    head:
+        name: "STGCNHead"
+        num_classes: 2 # If there are multiple action types in the data, this needs to be modified to match the number of types.
+        if_top5: False # When the number of action types is less than 5, please set it to False, otherwise an error will be raised.
+
+...
+
+
+# Please set the data and label paths of the train/valid/test parts correctly according to your data path
+DATASET: #DATASET field
+    batch_size: 64
+    num_workers: 4
+    test_batch_size: 1
+    test_num_workers: 0
+    train:
+        format: "SkeletonDataset" #Mandatory, indicate the type of dataset, associate to the 'paddlevideo/loader/dataset'
+        file_path: "./applications/PPHuman/datasets/train_data.npy" #Mandatory, train data index file path
+        label_path: "./applications/PPHuman/datasets/train_label.pkl"
+
+    valid:
+        format: "SkeletonDataset" #Mandatory, indicate the type of dataset, associate to the 'paddlevideo/loader/dataset'
+        file_path: "./applications/PPHuman/datasets/val_data.npy" #Mandatory, valid data index file path
+        label_path: "./applications/PPHuman/datasets/val_label.pkl"
+
+        test_mode: True
+    test:
+        format: "SkeletonDataset" #Mandatory, indicate the type of dataset, associate to the 'paddlevideo/loader/dataset'
+        file_path: "./applications/PPHuman/datasets/val_data.npy" #Mandatory, valid data index file path
+        label_path: "./applications/PPHuman/datasets/val_label.pkl"
+
+        test_mode: True
+```
+
+### Model Training And Evaluation
+
+- In PaddleVideo, start training with the following command:
+```bash
+# current path is under root of PaddleVideo
+python main.py -c applications/PPHuman/configs/stgcn_pphuman.yaml
+
+# Since the task may overfit, it is recommended to evaluate the model during training to save the best model.
+python main.py --validate -c applications/PPHuman/configs/stgcn_pphuman.yaml
+```
+
+- After training the model, use the following command to run inference:
+```bash
+python main.py --test -c applications/PPHuman/configs/stgcn_pphuman.yaml -w output/STGCN/STGCN_best.pdparams
+```
+
+### Model Export
+
+In PaddleVideo, use the following commands to export the model, obtaining the structure file `STGCN.pdmodel` and the weight file `STGCN.pdiparams`, and then add the configuration file:
+```bash
+# current path is under root of PaddleVideo
+python tools/export_model.py -c applications/PPHuman/configs/stgcn_pphuman.yaml \
+                                -p output/STGCN/STGCN_best.pdparams \
+                                -o output_inference/STGCN
+
+cp applications/PPHuman/configs/infer_cfg.yml output_inference/STGCN
+
+# Rename model files to adapt to PP-Human
+cd output_inference/STGCN
+mv STGCN.pdiparams model.pdiparams
+mv STGCN.pdiparams.info model.pdiparams.info
+mv STGCN.pdmodel model.pdmodel
+```
+
+The directory structure will look like:
+```
+STGCN
+├── infer_cfg.yml
+├── model.pdiparams
+├── model.pdiparams.info
+├── model.pdmodel
+```
+At this point, this model can be used in PP-Human.
+
+**Note**: If the length of the video sequence or the number of keypoints is changed during training, the content of the `INFERENCE` field in the configuration file needs to be modified accordingly for correct prediction.
+
+```yaml
+# The dimension of the sequence data is (N,C,T,V,M)
+INFERENCE:
+    name: 'STGCN_Inference_helper'
+    num_channels: 2 # Corresponding to the C dimension
+    window_size: 50 # Corresponding to the T dimension; set it according to the sequence length.
+    vertex_nums: 17 # Corresponding to the V dimension; set it according to the number of keypoints.
+    person_nums: 1 # Corresponding to the M dimension
+```
+
+### Custom Action Output
+In skeleton-based action recognition, the classification result of the model represents the behavior type of the person within a certain period of time, and the corresponding class is regarded as the action of the current period.
Therefore, on the basis of completing the training and deployment of the custom model, the model output is directly used as the final result, and only the displayed result of the visualization needs to be modified.
+
+#### Modify Visual Output
+At present, ID-based action recognition is displayed based on the recognition results and predefined category names. For details, please refer to [here](https://github.com/PaddlePaddle/PaddleDetection/blob/develop/deploy/pipeline/pipeline.py#L1024-L1043). If your custom action should be displayed under a different name, modify the mapping there so that the correct label is output.
diff --git a/docs/advanced_tutorials/customization/action_recognotion/videobased_rec.md b/docs/advanced_tutorials/customization/action_recognotion/videobased_rec.md
new file mode 100644
index 0000000000000000000000000000000000000000..522eebe3d135a789a7843676ca477ec5e4b23a2c
--- /dev/null
+++ b/docs/advanced_tutorials/customization/action_recognotion/videobased_rec.md
@@ -0,0 +1,159 @@
+# 基于视频分类的行为识别
+
+## 数据准备
+
+视频分类任务输入的视频格式一般为`.mp4`、`.avi`等格式视频或者是抽帧后的视频帧序列,标签则可以是`.txt`格式存储的文件。
+
+对于打架识别任务,具体数据准备流程如下:
+
+### 数据集下载
+
+打架识别基于6个公开的打架、暴力行为相关数据集合并后的数据进行模型训练。公开数据集具体信息如下:
+
+| 数据集 | 下载链接 | 简介 | 标注 | 数量 | 时长 |
+| ---- | ---- | ---------- | ---- | ---- | ---------- |
+| Surveillance Camera Fight Dataset| https://github.com/sayibet/fight-detection-surv-dataset | 裁剪视频,监控视角 | 视频级别 | 打架:150;非打架:150 | 2s |
+| A Dataset for Automatic Violence Detection in Videos | https://github.com/airtlab/A-Dataset-for-Automatic-Violence-Detection-in-Videos | 裁剪视频,室内自行录制 | 视频级别 | 暴力行为:115个场景,2个机位,共230;非暴力行为:60个场景,2个机位,共120 | 几秒钟 |
+| Hockey Fight Detection Dataset | https://www.kaggle.com/datasets/yassershrief/hockey-fight-vidoes?resource=download | 裁剪视频,非真实场景 | 视频级别 | 打架:500;非打架:500 | 2s |
+| Video Fight Detection Dataset | https://www.kaggle.com/datasets/naveenk903/movies-fight-detection-dataset | 裁剪视频,非真实场景 | 视频级别 | 打架:100;非打架:101 | 2s |
+| Real Life Violence Situations Dataset | https://www.kaggle.com/datasets/mohamedmustafa/real-life-violence-situations-dataset | 裁剪视频,非真实场景 | 视频级别 | 暴力行为:1000;非暴力行为:1000 | 几秒钟 |
+| UBI Abnormal Event Detection Dataset| http://socia-lab.di.ubi.pt/EventDetection/ | 未裁剪视频,监控视角 | 帧级别 | 打架:216;非打架:784;裁剪后二次标注:打架1976,非打架1630 | 原视频几秒到几分钟不等,裁剪后2s |
+
+打架(暴力行为)视频3956个,非打架(非暴力行为)视频3501个,共7457个视频,每个视频几秒钟。
+
+本项目为大家整理了前5个数据集,下载链接:[https://aistudio.baidu.com/aistudio/datasetdetail/149085](https://aistudio.baidu.com/aistudio/datasetdetail/149085)。
+
+### 视频抽帧
+
+首先下载PaddleVideo代码:
+```bash
+git clone https://github.com/PaddlePaddle/PaddleVideo.git
+```
+
+假设PaddleVideo源码路径为PaddleVideo_root。
+
+为了加快训练速度,将视频进行抽帧。下面命令会根据视频的帧率FPS进行抽帧,如FPS=30,则每秒视频会抽取30帧图像。
+
+```bash
+cd ${PaddleVideo_root}
+python data/ucf101/extract_rawframes.py dataset/ rawframes/ --level 2 --ext mp4
+```
+其中,假设视频已经存放在了`dataset`目录下,如果是其他路径请对应修改。打架(暴力)视频存放在`dataset/fight`中;非打架(非暴力)视频存放在`dataset/nofight`中。`rawframes`目录存放抽取的视频帧。
+
+### 训练集和验证集划分
+
+打架识别验证集1500条,来自Surveillance Camera Fight Dataset、A Dataset for Automatic Violence Detection in Videos、UBI Abnormal Event Detection Dataset三个数据集。
+
+也可根据下面的命令将数据按照8:2的比例划分成训练集和测试集:
+
+```bash
+python split_fight_train_test_dataset.py "rawframes" 2 0.8
+```
+
+参数说明:“rawframes”为视频帧存放的文件夹;2表示目录结构为两级,第二级表示每个行为对应的子文件夹;0.8表示训练集比例。
+
+其中`split_fight_train_test_dataset.py`文件在PaddleDetection中的`deploy/pipeline/tools`路径下。
+
+执行完命令后会最终生成fight_train_list.txt和fight_val_list.txt两个文件。打架的标签为1,非打架的标签为0。
+
+### 视频裁剪
+对于未裁剪的视频,如UBI Abnormal Event Detection
Dataset数据集,需要先进行裁剪才能用于模型训练。`deploy/pipeline/tools/clip_video.py`中给出了视频裁剪的函数`cut_video`,输入为视频路径、裁剪的起始帧和结束帧以及裁剪后的视频保存路径。
+
+
+## 模型优化
+
+### VideoMix
+[VideoMix](https://arxiv.org/abs/2012.03457)是视频数据增强的方法之一,是对图像数据增强CutMix的扩展,可以缓解模型的过拟合问题。
+
+与Mixup将两个视频片段的每个像素点按照一定比例融合不同的是,VideoMix是每个像素点要么属于片段A要么属于片段B。输出结果是两个片段原始标签的加权和,权重是两个片段各自的比例。
+
+在baseline的基础上加入VideoMix数据增强后,精度由87.53%提升至88.01%。
+
+### 更大的分辨率
+由于监控摄像头角度、距离等问题,存在监控画面下人比较小的情况,小目标行为的识别较困难。尝试增大输入图像的分辨率后,模型精度由88.01%提升至89.06%。
+
+## 新增行为
+
+目前打架识别模型使用的是[PaddleVideo](https://github.com/PaddlePaddle/PaddleVideo)套件中的[PP-TSM](https://github.com/PaddlePaddle/PaddleVideo/blob/develop/docs/zh-CN/model_zoo/recognition/pp-tsm.md),并在PP-TSM视频分类模型训练流程的基础上修改适配,完成模型训练。
+
+请先参考[使用说明](https://github.com/PaddlePaddle/PaddleVideo/blob/develop/docs/zh-CN/usage.md)了解PaddleVideo模型库的使用。
+
+
+| 任务 | 算法 | 精度 | 预测速度(ms) | 模型权重 | 预测部署模型 |
+| ---- | ---- | ---------- | ---- | ---- | ---------- |
+| 打架识别 | PP-TSM | 准确率:89.06% | T4, 2s视频128ms | [下载链接](https://videotag.bj.bcebos.com/PaddleVideo-release2.3/ppTSM_fight.pdparams) | [下载链接](https://videotag.bj.bcebos.com/PaddleVideo-release2.3/ppTSM_fight.zip) |
+
+#### 模型训练
+下载预训练模型:
+```bash
+wget https://videotag.bj.bcebos.com/PaddleVideo/PretrainModel/ResNet50_vd_ssld_v2_pretrained.pdparams
+```
+
+执行训练:
+```bash
+# 单卡训练
+cd ${PaddleVideo_root}
+python main.py --validate -c pptsm_fight_frames_dense.yaml
+```
+
+本方案针对的是视频的二分类问题,如果不是二分类,需要修改配置文件中`MODEL-->head-->num_classes`为具体的类别数目。
+
+
+```bash
+cd ${PaddleVideo_root}
+# 多卡训练
+export CUDA_VISIBLE_DEVICES=0,1,2,3
+python -B -m paddle.distributed.launch --gpus="0,1,2,3" \
+   --log_dir=log_pptsm_dense  main.py  --validate \
+   -c pptsm_fight_frames_dense.yaml
+```
+
+#### 模型评估
+训练好的模型下载:[https://videotag.bj.bcebos.com/PaddleVideo-release2.3/ppTSM_fight.pdparams](https://videotag.bj.bcebos.com/PaddleVideo-release2.3/ppTSM_fight.pdparams)
+
+模型评估:
+```bash
+cd ${PaddleVideo_root}
+python main.py --test -c pptsm_fight_frames_dense.yaml \
+   -w ppTSM_fight_best.pdparams
+```
+
+其中`ppTSM_fight_best.pdparams`为训练好的模型。
+
+#### 模型导出
+
+导出inference模型:
+
+```bash
+cd ${PaddleVideo_root}
+python tools/export_model.py -c pptsm_fight_frames_dense.yaml \
+                                -p ppTSM_fight_best.pdparams \
+                                -o inference/ppTSM
+```
+
+
+#### 推理可视化
+
+利用上一步骤导出的模型,基于PaddleDetection中的推理pipeline即可完成自定义行为识别及可视化。
+
+新增行为后,需要对现有的可视化代码进行修改:目前代码支持打架二分类的可视化,新增类别后需要根据识别结果自适应地可视化推理结果。
+
+具体需修改PaddleDetection(develop分支)中`deploy/pipeline/pipeline.py`内`PipePredictor`类的`visualize_video`成员函数。当结果中存在`'video_action'`数据时,会对行为进行可视化。目前的逻辑是:如果推理的类别为1,则为打架行为,进行可视化;否则不进行显示,即"video_action_score"为None。用户新增行为后,可根据类别index和对应的行为设置"video_action_text"字段,目前index=1对应"Fight"。相关代码块如下:
+
+```python
+video_action_res = result.get('video_action')
+if video_action_res is not None:
+    video_action_score = None
+    if video_action_res and video_action_res["class"] == 1:
+        video_action_score = video_action_res["score"]
+    mot_boxes = None
+    if mot_res:
+        mot_boxes = mot_res['boxes']
+    image = visualize_action(
+        image,
+        mot_boxes,
+        action_visual_collector=None,
+        action_text="SkeletonAction",
+        video_action_score=video_action_score,
+        video_action_text="Fight")
+```
diff --git a/docs/advanced_tutorials/customization/detection.md b/docs/advanced_tutorials/customization/detection.md
new file mode 100644
index 0000000000000000000000000000000000000000..4f20cf3c58e8908136bd336abc413536a06a3467
--- /dev/null
+++ b/docs/advanced_tutorials/customization/detection.md
@@ -0,0 +1,84 @@
+简体中文 | [English](./detection_en.md)
+
+# 目标检测任务二次开发
+
+在目标检测算法产业落地过程中,常常会出现需要额外训练以满足实际使用要求的情况,项目迭代过程中也会出现需要修改类别的情况。本文档详细介绍如何使用PaddleDetection进行目标检测算法二次开发,流程包括:数据准备、模型优化思路和修改类别开发流程。
+
+## 数据准备
+
+二次开发首先需要进行数据集的准备,针对场景特点采集合适的数据从而提升模型效果和泛化性能。然后使用Labelme、LabelImg等标注工具标注目标检测框,并将标注结果转化为COCO或VOC数据格式。详细文档可以参考[数据准备文档](../../tutorials/data/README.md)
+
+## 模型优化
+
+### 1. 使用自定义数据集训练
+
+基于准备的数据在数据配置文件中修改对应路径,例如`configs/dataset/coco_detection.yml`:
+
+```
+metric: COCO
+num_classes: 80
+
+TrainDataset:
+  !COCODataSet
+    image_dir: train2017 # 训练集的图片所在文件夹相对于dataset_dir的路径
+    anno_path: annotations/instances_train2017.json # 训练集的标注文件相对于dataset_dir的路径
+    dataset_dir: dataset/coco # 数据集所在路径,相对于PaddleDetection路径
+    data_fields: ['image', 'gt_bbox', 'gt_class', 'is_crowd']
+
+EvalDataset:
+  !COCODataSet
+    image_dir: val2017 # 验证集的图片所在文件夹相对于dataset_dir的路径
+    anno_path: annotations/instances_val2017.json # 验证集的标注文件相对于dataset_dir的路径
+    dataset_dir: dataset/coco # 数据集所在路径,相对于PaddleDetection路径
+
+TestDataset:
+  !ImageFolder
+    anno_path: annotations/instances_val2017.json # also support txt (like VOC's label_list.txt) # 标注文件相对于dataset_dir的路径
+    dataset_dir: dataset/coco # if set, anno_path will be 'dataset_dir/anno_path' # 数据集所在路径,相对于PaddleDetection路径
+```
+
+配置修改完成后,即可以启动训练评估,命令如下
+
+```
+export CUDA_VISIBLE_DEVICES=0
+python tools/train.py -c configs/yolov3/yolov3_mobilenet_v1_270e_coco.yml --eval
+```
+
+更详细的命令参考[30分钟快速上手PaddleDetection](../../tutorials/GETTING_STARTED_cn.md)
+
+
+### 2. 加载COCO模型作为预训练
+
+目前PaddleDetection提供的配置文件加载的预训练模型均为ImageNet数据集的权重,加载到检测算法的骨干网络中。实际使用时,建议加载COCO数据集训练好的权重,通常能够对模型精度有较大提升,使用方法如下:
+
+#### 1) 设置预训练权重路径
+
+COCO数据集训练好的模型权重均在各算法配置文件夹下,例如`configs/ppyoloe`下提供了PP-YOLOE-l COCO数据集权重:[链接](https://paddledet.bj.bcebos.com/models/ppyoloe_crn_l_300e_coco.pdparams)。配置文件中设置`pretrain_weights: https://paddledet.bj.bcebos.com/models/ppyoloe_crn_l_300e_coco.pdparams`
+
+#### 2) 修改超参数
+
+加载COCO预训练权重后,需要修改学习率超参数,例如`configs/ppyoloe/_base_/optimizer_300e.yml`中:
+
+```
+epoch: 120 # 原始配置为300epoch,加载COCO权重后可以适当减少迭代轮数
+
+LearningRate:
+  base_lr: 0.005 # 原始配置为0.025,加载COCO权重后需要降低学习率
+  schedulers:
+    - !CosineDecay
+      max_epochs: 144 # 依据epoch数进行修改
+    - !LinearWarmup
+      start_factor: 0.
+      epochs: 5
+```
+
+## 修改类别
+
+当实际使用场景类别发生变化时,需要修改数据配置文件,例如`configs/datasets/coco_detection.yml`中:
+
+```
+metric: COCO
+num_classes: 10 # 原始类别80
+```
+
+配置修改完成后,同样可以加载COCO预训练权重,PaddleDetection支持自动加载shape匹配的权重,对于shape不匹配的权重会自动忽略,因此无需其他修改。
diff --git a/docs/advanced_tutorials/customization/detection_en.md b/docs/advanced_tutorials/customization/detection_en.md
new file mode 100644
index 0000000000000000000000000000000000000000..003ea152906b947473643b93cf1585b7f32d2155
--- /dev/null
+++ b/docs/advanced_tutorials/customization/detection_en.md
@@ -0,0 +1,89 @@
+[简体中文](./detection.md) | English
+
+# Customize Object Detection Task
+
+In the practical application of object detection algorithms in a specific industry, additional training is often required for practical use, and project iteration will also require category modification. This document details how to use PaddleDetection for a customized object detection algorithm. The process includes data preparation, the model optimization roadmap, and the category modification process.
+
+## Data Preparation
+
+Customization starts with the preparation of the dataset. We need to collect suitable data for the scenario features, so as to improve the model effect and generalization performance.
Then Labelme, LabelImg and other labeling tools can be used to label the object detection bounding boxes and convert the labeling results into COCO or VOC data format. For details, please refer to [Data Preparation](../../tutorials/data/PrepareDetDataSet_en.md)
+
+## Model Optimization
+
+### 1. Use customized dataset for training
+
+Modify the corresponding paths in the data configuration file based on the prepared data, for example in `configs/dataset/coco_detection.yml`:
+
+```
+metric: COCO
+num_classes: 80
+
+TrainDataset:
+  !COCODataSet
+    image_dir: train2017 # Path to the images of the training set relative to dataset_dir
+    anno_path: annotations/instances_train2017.json # Path to the annotation file of the training set relative to dataset_dir
+    dataset_dir: dataset/coco # Path to the dataset relative to the PaddleDetection path
+    data_fields: ['image', 'gt_bbox', 'gt_class', 'is_crowd']
+
+EvalDataset:
+  !COCODataSet
+    image_dir: val2017 # Path to the images of the evaluation set relative to dataset_dir
+    anno_path: annotations/instances_val2017.json # Path to the annotation file of the evaluation set relative to dataset_dir
+    dataset_dir: dataset/coco # Path to the dataset relative to the PaddleDetection path
+
+TestDataset:
+  !ImageFolder
+    anno_path: annotations/instances_val2017.json # also support txt (like VOC's label_list.txt) # Path to the annotation file relative to dataset_dir
+    dataset_dir: dataset/coco # if set, anno_path will be 'dataset_dir/anno_path' # Path to the dataset relative to the PaddleDetection path
+```
+
+Once the configuration changes are completed, training and evaluation can be started with the following command
+
+```
+export CUDA_VISIBLE_DEVICES=0
+python tools/train.py -c configs/yolov3/yolov3_mobilenet_v1_270e_coco.yml --eval
+```
+
+For more details, please refer to [Getting Started for PaddleDetection](../../tutorials/GETTING_STARTED_cn.md)
+
+### 2. Load the COCO model as pre-training
+
+The pre-trained models currently loaded by PaddleDetection's configuration files are weights from the ImageNet dataset, loaded into the backbone network of the detection algorithm. For practical use, it is recommended to load the weights trained on the COCO dataset, which can usually provide a large improvement to the model accuracy. The method is as follows.
+
+#### 1) Set pre-training weight path
+
+The trained model weights for the COCO dataset are listed in the configuration folder of each algorithm; for example, the PP-YOLOE-l COCO weights are provided under `configs/ppyoloe`: [Link](https://paddledet.bj.bcebos.com/models/ppyoloe_crn_l_300e_coco.pdparams). In the configuration file, set `pretrain_weights: https://paddledet.bj.bcebos.com/models/ppyoloe_crn_l_300e_coco.pdparams`
+
+#### 2) Modify hyperparameters
+
+After loading the COCO pre-training weights, the learning rate hyperparameters need to be modified, for example in `configs/ppyoloe/_base_/optimizer_300e.yml`:
+
+```
+epoch: 120 # The original configuration is 300 epochs; after loading COCO weights, the number of iterations can be reduced appropriately
+
+LearningRate:
+  base_lr: 0.005 # The original configuration is 0.025; after loading COCO weights, the learning rate should be reduced.
+  schedulers:
+    - !CosineDecay
+      max_epochs: 144 # Modify based on the number of epochs
+    - !LinearWarmup
+      start_factor: 0.
+      epochs: 5
+```
+
+## Modify categories
+
+When the actual application scenario category changes, the data configuration file needs to be modified, for example in `configs/datasets/coco_detection.yml`:
+
+```
+metric: COCO
+num_classes: 10 # originally 80 classes
+```
+
+After the configuration changes are completed, the COCO pre-training weights can still be loaded. PaddleDetection supports automatically loading weights whose shapes match, and weights that do not match in shape are automatically ignored, so no other modification is needed.
diff --git a/docs/advanced_tutorials/customization/keypoint_detection.md b/docs/advanced_tutorials/customization/keypoint_detection.md
new file mode 100644
index 0000000000000000000000000000000000000000..c54ff231e07a5540d2d93ecda4507803fd81ac21
--- /dev/null
+++ b/docs/advanced_tutorials/customization/keypoint_detection.md
@@ -0,0 +1,261 @@
+简体中文 | [English](./keypoint_detection_en.md)
+
+# 关键点检测任务二次开发
+
+在实际场景中应用关键点检测算法,不可避免地会出现需要二次开发的需求。包括对目前的预训练模型效果不满意,希望优化模型效果;或是目前的关键点点位定义不能满足实际场景需求,希望新增或是替换关键点点位的定义,训练新的关键点模型。本文档将介绍如何在PaddleDetection中,对关键点检测算法进行二次开发。
+
+## 数据准备
+
+### 基本流程说明
+在PaddleDetection中,目前支持的标注数据格式为`COCO`和`MPII`。这两个数据格式的详细说明,可以参考文档[关键点数据准备](../../tutorials/data/PrepareKeypointDataSet.md)。在这一步中,通过使用Labelme等标注工具,依照特征点序号标注对应坐标,并转化成对应可训练的标注格式。建议使用`COCO`格式进行。
+
+### 合并数据集
+为了扩展使用的训练数据,合并多个不同的数据集一起训练是一个很直观的解决手段,但不同的数据集往往对关键点的定义并不一致。合并数据集的第一步是需要统一不同数据集的点位定义,确定标杆点位,即最终模型学习的特征点类型,然后根据各个数据集的点位定义与标杆点位定义之间的关系进行调整。
+- 在标杆点位中的点:调整点位序号,使其与标杆点位一致
+- 未在标杆点位中的点:舍去
+- 数据集缺少标杆点位中的点:对应将标注的标志位记为“未标注”
+
+在[关键点数据准备](../../tutorials/data/PrepareKeypointDataSet.md)中,提供了如何合并`COCO`数据集和`AI Challenger`数据集,并统一为以`COCO`为标杆点位定义的案例说明,供参考。
+
+
+## 模型优化
+
+### 检测-跟踪模型优化
+在PaddleDetection中,关键点检测能力支持Top-Down、Bottom-Up两套方案。Top-Down先检测主体,再检测局部关键点,优点是精度较高,缺点是耗时会随着检测对象的个数增加;Bottom-Up先检测关键点再组合到对应的部位上,优点是速度快,与检测对象个数无关,缺点是精度较低。关于两种方案的详情及对应模型,可参考[关键点检测系列模型](../../../configs/keypoint/README.md)
+
+当使用Top-Down方案时,模型效果依赖于前序的检测和跟踪效果,如果实际场景中不能准确检测到行人位置,会使关键点检测部分表现受限。如果在实际使用中遇到了上述问题,请参考[目标检测任务二次开发](./detection.md)以及[多目标跟踪任务二次开发](./pphuman_mot.md)对检测/跟踪模型进行优化。
+
+### 使用符合场景的数据迭代
+目前发布的关键点检测算法模型主要在`COCO`/`AI Challenger`等开源数据集上迭代,这部分数据集中可能缺少与实际任务较为相似的监控场景(视角、光照等因素)、体育场景(存在较多非常规的姿态)。使用更符合实际任务场景的数据进行训练,有助于提升模型效果。
+
+### 使用预训练模型迭代
+关键点模型的数据标注复杂度较大,直接使用模型从零开始在业务数据集上训练,效果往往难以满足需求。在实际工程中使用时,建议加载已经训练好的权重,通常能够对模型精度有较大提升。以`HRNet`为例,使用方法如下:
+```bash
+python tools/train.py -c configs/keypoint/hrnet/hrnet_w32_256x192.yml -o pretrain_weights=https://paddledet.bj.bcebos.com/models/keypoint/hrnet_w32_256x192.pdparams
+```
+在加载预训练模型后,可以适当减小初始学习率和最终迭代轮数,建议初始学习率取默认配置值的1/2至1/5,并可开启`--eval`观察迭代过程中AP值的变化。
+
+
+### 遮挡数据增强
+关键点任务中有较多遮挡问题,包括自身遮挡与不同目标之间的遮挡。
+
+1. 检测模型优化(仅针对Top-Down方案)
+
+参考[目标检测任务二次开发](./detection.md),提升检测模型在复杂场景下的效果。
+
+2. 关键点数据增强
+
+在关键点模型训练中增加遮挡的数据增强,参考[PP-TinyPose](https://github.com/PaddlePaddle/PaddleDetection/blob/release/2.4/configs/keypoint/tiny_pose/tinypose_256x192.yml#L100),有助于模型提升这类场景下的表现。
+
+### 对视频预测进行平滑处理
+关键点模型是在图片级别的基础上进行训练和预测的,对于视频类型的输入也是将视频拆分为帧进行预测。帧与帧之间虽然内容大多相似,但微小的差异仍然可能导致模型的输出发生较大的变化,表现为虽然预测的坐标大体正确,但视觉效果上有较大的抖动问题。通过添加滤波平滑处理,将每一帧预测的结果与历史结果综合考虑,得到最终的输出结果,可以有效提升视频上的表现。该部分内容可参考[滤波平滑处理](https://github.com/PaddlePaddle/PaddleDetection/blob/develop/deploy/python/det_keypoint_unite_infer.py#L206)。
+
+
+## 新增或修改关键点点位定义
+
+### 数据准备
+根据前述说明,完成数据的准备,放置于`{root of PaddleDetection}/dataset`下。
+
    + 标注文件示例 + +一个标注文件示例如下: + +``` +self_dataset/ +├── train_coco_joint.json # 训练集标注文件 +├── val_coco_joint.json # 验证集标注文件 +├── images/ # 存放图片文件 +    ├── 0.jpg +    ├── 1.jpg +    ├── 2.jpg +``` +其中标注文件中需要注意的改动如下: +```json +{ + "images": [ + { + "file_name": "images/0.jpg", + "id": 0, # 图片id,注意不可重复 + "height": 1080, + "width": 1920 + }, + { + "file_name": "images/1.jpg", + "id": 1, + "height": 1080, + "width": 1920 + }, + { + "file_name": "images/2.jpg", + "id": 2, + "height": 1080, + "width": 1920 + }, + ... + + "categories": [ + { + "supercategory": "person", + "id": 1, + "name": "person", + "keypoints": [ # 点位序号的名称 + "point1", + "point2", + "point3", + "point4", + "point5", + ], + "skeleton": [ # 点位构成的骨骼, 训练中非必要 + [ + 1, + 2 + ], + [ + 1, + 3 + ], + [ + 2, + 4 + ], + [ + 3, + 5 + ] + ] + ... + + "annotations": [ + { + { + "category_id": 1, # 实例所属类别 + "num_keypoints": 3, # 该实例已标注点数量 + "bbox": [ # 检测框位置,格式为x, y, w, h + 799, + 575, + 55, + 185 + ], + # N*3 的列表,内容为x, y, v。 + "keypoints": [ + 807.5899658203125, + 597.5455322265625, + 2, + 0, + 0, + 0, # 未标注的点记为0,0,0 + 805.8563232421875, + 592.3446655273438, + 2, + 816.258056640625, + 594.0783081054688, + 2, + 0, + 0, + 0 + ] + "id": 1, # 实例id,不可重复 + "image_id": 8, # 实例所在图像的id,可重复。此时代表一张图像上存在多个目标 + "iscrowd": 0, # 是否遮挡,为0时参与训练 + "area": 10175 # 实例所占面积,可简单取为w * h。注意为0时会跳过,过小时在eval时会被忽略 + + ... +``` + +
    + + +### 配置文件设置 + +在配置文件中,完整的含义参考[config yaml配置项说明](../../tutorials/KeyPointConfigGuide_cn.md)。以[HRNet模型配置](../../../configs/keypoint/hrnet/hrnet_w32_256x192.yml)为例,重点需要关注的内容如下: + +
    + 配置文件示例 + +一个配置文件的示例如下 + +```yaml +use_gpu: true +log_iter: 5 +save_dir: output +snapshot_epoch: 10 +weights: output/hrnet_w32_256x192/model_final +epoch: 210 +num_joints: &num_joints 5 # 预测的点数与定义点数量一致 +pixel_std: &pixel_std 200 +metric: KeyPointTopDownCOCOEval +num_classes: 1 +train_height: &train_height 256 +train_width: &train_width 192 +trainsize: &trainsize [*train_width, *train_height] +hmsize: &hmsize [48, 64] +flip_perm: &flip_perm [[1, 2], [3, 4]] # 注意只有含义上镜像对称的点才写到这里 + +... + +# 保证dataset_dir + anno_path 能正确定位到标注文件位置 +# 保证dataset_dir + image_dir + 标注文件中的图片路径能正确定位到图片 +TrainDataset: + !KeypointTopDownCocoDataset + image_dir: images + anno_path: train_coco_joint.json + dataset_dir: dataset/self_dataset + num_joints: *num_joints + trainsize: *trainsize + pixel_std: *pixel_std + use_gt_bbox: True + + +EvalDataset: + !KeypointTopDownCocoDataset + image_dir: images + anno_path: val_coco_joint.json + dataset_dir: dataset/self_dataset + bbox_file: bbox.json + num_joints: *num_joints + trainsize: *trainsize + pixel_std: *pixel_std + use_gt_bbox: True + image_thre: 0.0 +``` +
+
+### 模型训练及评估
+#### 模型训练
+通过如下命令启动训练:
+```bash
+CUDA_VISIBLE_DEVICES=0,1,2,3 python3 -m paddle.distributed.launch tools/train.py -c configs/keypoint/hrnet/hrnet_w32_256x192.yml
+```
+
+#### 模型评估
+训练好模型之后,可以通过以下命令实现对模型指标的评估:
+```bash
+python3 tools/eval.py -c configs/keypoint/hrnet/hrnet_w32_256x192.yml
+```
+
+注意:由于测试依赖pycocotools工具,其默认为`COCO`数据集的17点,如果修改后的模型并非预测17点,直接使用评估命令会报错。
+需要修改以下内容以获得正确的评估结果:
+- [sigma列表](https://github.com/PaddlePaddle/PaddleDetection/blob/develop/ppdet/modeling/keypoint_utils.py#L219),表示每个关键点的范围方差,越大则容忍度越高,其长度与预测点数一致。根据实际关键点可信区域设置:区域精确的一般0.25-0.5,例如眼睛;区域范围大的一般0.5-1.0,例如肩膀;若不确定建议取0.75。
+- [pycocotools sigma列表](https://github.com/cocodataset/cocoapi/blob/master/PythonAPI/pycocotools/cocoeval.py#L523),含义及内容同上,取值与sigma列表一致。
+
+### 模型导出及预测
+#### Top-Down模型联合部署
+```shell
+#导出关键点模型
+python tools/export_model.py -c configs/keypoint/hrnet/hrnet_w32_256x192.yml -o weights={path_to_your_weights}
+
+#detector 检测 + keypoint top-down模型联合部署(联合推理只支持top-down方式)
+python deploy/python/det_keypoint_unite_infer.py --det_model_dir=output_inference/ppyolo_r50vd_dcn_2x_coco/ --keypoint_model_dir=output_inference/hrnet_w32_256x192/ --video_file=../video/xxx.mp4 --device=gpu
+```
+- 注意目前PP-Human中使用的为该方案
+
+#### Bottom-Up模型独立部署
+```shell
+#导出模型
+python tools/export_model.py -c configs/keypoint/higherhrnet/higherhrnet_hrnet_w32_512.yml -o weights=output/higherhrnet_hrnet_w32_512/model_final.pdparams
+
+#部署推理
+python deploy/python/keypoint_infer.py --model_dir=output_inference/higherhrnet_hrnet_w32_512/ --image_file=./demo/000000014439_640x640.jpg --device=gpu --threshold=0.5
+
+```
diff --git a/docs/advanced_tutorials/customization/keypoint_detection_en.md b/docs/advanced_tutorials/customization/keypoint_detection_en.md
new file mode 100644
index 0000000000000000000000000000000000000000..5f96ddffe3726530108b9887256f1e1ea2360c60
--- /dev/null
+++ b/docs/advanced_tutorials/customization/keypoint_detection_en.md
@@ -0,0 +1,258 @@
+[简体中文](./keypoint_detection.md) | English
+
+# Customized Keypoint Detection
+
+When applying keypoint detection algorithms in real practice, we inevitably may need customization: we may be dissatisfied with the current pre-trained model results, the current keypoint definitions may not meet the actual scenario demand, or we may want to add or replace keypoint definitions and train a new keypoint detection model. This document will introduce how to customize the keypoint detection algorithm in PaddleDetection.
+
+## Data Preparation
+
+### Basic Process Description
+
+PaddleDetection currently supports `COCO` and `MPII` annotation data formats. For detailed descriptions of these two data formats, please refer to the document [Keypoint Data Preparation](../../tutorials/data/PrepareKeypointDataSet_en.md). In this step, annotation tools such as Labelme are used to annotate the keypoint coordinates according to the keypoint serial numbers, and the results are then converted into a trainable annotation format. We recommend the `COCO` format.
+
+### Merging datasets
+
+To extend the training data, we can merge several different datasets together. But different datasets often have different definitions of keypoints. Therefore, the first step in merging datasets is to unify the point definitions of the different datasets and determine the benchmark points, i.e., the types of keypoints finally learned by the model, and then adjust each dataset according to the relationship between its point definition and the benchmark point definition.
+
+- Points that are in the benchmark definition: adjust the point number to make it consistent with the benchmark definition
+- Points that are not in the benchmark definition: discard
+- Benchmark points that are missing from the dataset: annotate those points as "unannotated"
+
+In [Keypoint Data Preparation](../../tutorials/data/PrepareKeypointDataSet.md), we provide a case illustration of how to merge the `COCO` dataset and the `AI Challenger` dataset and unify them under the `COCO` benchmark point definition for your reference.
+
+## Model Optimization
+
+### Detection and tracking model optimization
+
+In PaddleDetection, keypoint detection supports Top-Down and Bottom-Up solutions. Top-Down first detects the main body and then detects the local keypoints. It has higher accuracy but takes longer as the number of detected objects increases. The Bottom-Up solution first detects the keypoints and then combines them into the corresponding bodies. It is fast and its speed is independent of the number of detected objects, but its accuracy is relatively low. For details of the two solutions and the corresponding models, please refer to [Keypoint Detection Series Models](../../../configs/keypoint/README.md).
+
+When using the Top-Down solution, the model's results depend on the preceding detection or tracking results. If the pedestrian position cannot be accurately detected in practice, the performance of the keypoint detection will be limited. If you encounter this problem in actual applications, please refer to [Customized Object Detection](./detection_en.md) and [Customized Multi-target tracking](./pphuman_mot_en.md) for optimization of the detection and tracking model.
+
+### Iterate with scenario-compatible data
+
+The currently released keypoint detection models are mainly iterated on open-source datasets such as `COCO` / `AI Challenger`, which may lack surveillance scenarios (angles, lighting and other factors) and sports scenarios (more unconventional poses) that are closer to the actual task. Training with data that more closely matches the actual task scenario can help improve the model's results.
+
+### Iteration via pre-trained models
+
+The data annotation of the keypoint model is complex, and training the model on the business dataset from scratch is often difficult and fails to meet the demand. When used in practical projects, it is recommended to load pre-trained weights, which usually improves the model accuracy significantly. Let's take `HRNet` as an example with the following method:
+
+```
+python tools/train.py \
+        -c configs/keypoint/hrnet/hrnet_w32_256x192.yml \
+        -o pretrain_weights=https://paddledet.bj.bcebos.com/models/keypoint/hrnet_w32_256x192.pdparams
+```
+
+After loading the pre-trained model, the initial learning rate and the number of training epochs can be reduced appropriately. It is recommended that the initial learning rate be 1/2 to 1/5 of the default configuration, and you can enable `--eval` to observe the change of AP values during training.
+
+### Data augmentation with occlusion
+
+There is a lot of occluded data in keypoint tasks, including self-occlusion and occlusion between different objects.
+
+1. Detection model optimization (only for Top-Down solutions)
+
+Refer to [Customized Object Detection](./detection_en.md) to improve the detection model in complex scenarios.
+
+2. Keypoint data augmentation
+
+Augment occluded data in keypoint model training to improve the model performance in such scenarios. For a reference implementation, please see [PP-TinyPose](https://github.com/PaddlePaddle/PaddleDetection/blob/release/2.4/configs/keypoint/tiny_pose/).
+
+### Smooth video prediction
+
+The keypoint model is trained and predicted on a per-image basis, and video input is likewise predicted by splitting the video into frames. Although the content is mostly similar between frames, small differences may still lead to large changes in the output of the model. As a result, although the predicted coordinates are roughly correct, there may be visual jitter.
+
+By adding a smoothing filter that combines the prediction of the current frame with the historical results, the quality of the video output can be effectively improved. For this part, please see [Filter Smoothing](https://github.com/PaddlePaddle/PaddleDetection/blob/develop/deploy/python/det_keypoint_unite_infer.py#L206).
+
+## Add or modify keypoint definition
+
+### Data Preparation
+
+Complete the data preparation according to the previous instructions and place the dataset under `{root of PaddleDetection}/dataset`.
+
+    Examples of annotation file
+
+```
+self_dataset/
+├── train_coco_joint.json  # training set annotation file
+├── val_coco_joint.json    # validation set annotation file
+├── images/                # stores the image files
+    ├── 0.jpg
+    ├── 1.jpg
+    ├── 2.jpg
+```
+
+Notable contents are as follows:
+
+```
+{
+    "images": [
+        {
+            "file_name": "images/0.jpg",
+            "id": 0,    # image id, must not repeat
+            "height": 1080,
+            "width": 1920
+        },
+        {
+            "file_name": "images/1.jpg",
+            "id": 1,
+            "height": 1080,
+            "width": 1920
+        },
+        {
+            "file_name": "images/2.jpg",
+            "id": 2,
+            "height": 1080,
+            "width": 1920
+        },
+        ...
+
+    "categories": [
+        {
+            "supercategory": "person",
+            "id": 1,
+            "name": "person",
+            "keypoints": [    # the names of the keypoints, in serial-number order
+                "point1",
+                "point2",
+                "point3",
+                "point4",
+                "point5",
+            ],
+            "skeleton": [    # skeleton composed of the points, not necessary for training
+                [
+                    1,
+                    2
+                ],
+                [
+                    1,
+                    3
+                ],
+                [
+                    2,
+                    4
+                ],
+                [
+                    3,
+                    5
+                ]
+            ]
+        ...
+
+    "annotations": [
+        {
+            "category_id": 1,      # the category to which the instance belongs
+            "num_keypoints": 3,    # the number of annotated points of the instance
+            "bbox": [              # location of the detection box, format is x, y, w, h
+                799,
+                575,
+                55,
+                185
+            ],
+            # N*3 list of x, y, v.
+            "keypoints": [
+                807.5899658203125,
+                597.5455322265625,
+                2,
+                0,
+                0,
+                0,    # unlabeled points are noted as 0, 0, 0
+                805.8563232421875,
+                592.3446655273438,
+                2,
+                816.258056640625,
+                594.0783081054688,
+                2,
+                0,
+                0,
+                0
+            ],
+            "id": 1,         # the id of the instance, must not repeat
+            "image_id": 8,   # the id of the image where the instance is located; repeatable, since a single image may contain multiple objects
+            "iscrowd": 0,    # covered or not; the instance participates in training only when the value is 0
+            "area": 10175    # the area occupied by the instance, can be simply taken as w * h. Note that the instance is skipped when the value is 0, and ignored in eval if it is too small
+
+        ...
+```
+
+### Settings of configuration file
+
+For more details of the configuration file, refer to [config yaml configuration](../../tutorials/KeyPointConfigGuide_cn.md). Take the [HRNet model configuration](../../../configs/keypoint/hrnet/hrnet_w32_256x192.yml) as an example; we need to focus on the following contents:
+
+    Example of configuration
+
+```
+use_gpu: true
+log_iter: 5
+save_dir: output
+snapshot_epoch: 10
+weights: output/hrnet_w32_256x192/model_final
+epoch: 210
+num_joints: &num_joints 5    # The number of predicted points matches the number of defined points
+pixel_std: &pixel_std 200
+metric: KeyPointTopDownCOCOEval
+num_classes: 1
+train_height: &train_height 256
+train_width: &train_width 192
+trainsize: &trainsize [*train_width, *train_height]
+hmsize: &hmsize [48, 64]
+flip_perm: &flip_perm [[1, 2], [3, 4]]    # Note that only mirror-symmetric points are recorded here
+
+...
+
+# Ensure that dataset_dir + anno_path correctly locates the annotation file
+# Ensure that dataset_dir + image_dir + the image paths in the annotation file correctly locate the images
+TrainDataset:
+  !KeypointTopDownCocoDataset
+    image_dir: images
+    anno_path: train_coco_joint.json
+    dataset_dir: dataset/self_dataset
+    num_joints: *num_joints
+    trainsize: *trainsize
+    pixel_std: *pixel_std
+    use_gt_bbox: True
+
+
+EvalDataset:
+  !KeypointTopDownCocoDataset
+    image_dir: images
+    anno_path: val_coco_joint.json
+    dataset_dir: dataset/self_dataset
+    bbox_file: bbox.json
+    num_joints: *num_joints
+    trainsize: *trainsize
+    pixel_std: *pixel_std
+    use_gt_bbox: True
+    image_thre: 0.0
+```
+
+### Model Training and Evaluation
+
+#### Model Training
+
+Run the following command to start training:
+
+```
+CUDA_VISIBLE_DEVICES=0,1,2,3 python3 -m paddle.distributed.launch tools/train.py -c configs/keypoint/hrnet/hrnet_w32_256x192.yml
+```
+
+#### Model Evaluation
+
+After training the model, you can evaluate the model metrics by running the following command:
+
+```
+python3 tools/eval.py -c configs/keypoint/hrnet/hrnet_w32_256x192.yml
+```
+
+### Model Export and Inference
+
+#### Top-Down model deployment
+
+```
+#Export keypoint model
+python tools/export_model.py -c configs/keypoint/hrnet/hrnet_w32_256x192.yml -o weights={path_to_your_weights}
+
+#detector detection + keypoint top-down model co-deployment (for top-down solutions only)
+python deploy/python/det_keypoint_unite_infer.py --det_model_dir=output_inference/ppyolo_r50vd_dcn_2x_coco/ --keypoint_model_dir=output_inference/hrnet_w32_256x192/ --video_file=../video/xxx.mp4 --device=gpu
+```
diff --git a/docs/advanced_tutorials/customization/pphuman_attribute.md b/docs/advanced_tutorials/customization/pphuman_attribute.md
new file mode 100644
index 0000000000000000000000000000000000000000..692a9dba57dba058db970995f2874732bab8e07d
--- /dev/null
+++ b/docs/advanced_tutorials/customization/pphuman_attribute.md
@@ -0,0 +1,295 @@
+简体中文 | [English](./pphuman_attribute_en.md)
+
+# 属性识别任务二次开发
+
+## 数据准备
+
+### 数据格式
+
+格式采用PA100K的属性标注格式,共有26位属性。
+
+这26位属性的名称、位置、种类数量见下表。
+
+| Attribute | index | length |
+|:----------|:----------|:----------|
+| 'Hat','Glasses' | [0, 1] | 2 |
+| 'ShortSleeve','LongSleeve','UpperStride','UpperLogo','UpperPlaid','UpperSplice' | [2, 3, 4, 5, 6, 7] | 6 |
+| 'LowerStripe','LowerPattern','LongCoat','Trousers','Shorts','Skirt&Dress' | [8, 9, 10, 11, 12, 13] | 6 |
+| 'boots' | [14, ] | 1 |
+| 'HandBag','ShoulderBag','Backpack','HoldObjectsInFront' | [15, 16, 17, 18] | 4 |
+| 'AgeOver60', 'Age18-60', 'AgeLess18' | [19, 20, 21] | 3 |
+| 'Female' | [22, ] | 1 |
+| 'Front','Side','Back' | [23, 24, 25] | 3 |
+
+
+举例:
+
+[0, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0]
+
+第一组,位置[0, 1]数值分别是[0, 1],表示'no hat'、'has glasses'。
+
+第二组,位置[22, ]数值是[0, ],表示gender属性是'male',否则是'female'。
+
+第三组,位置[23, 24, 25]数值分别是[0, 1, 0],表示方向属性是侧面'side'。
+
+其他组依次类推。
+
+### 数据标注
+
+理解了上面`属性标注`格式的含义后,就可以进行数据标注的工作。其本质是:每张单人图建立一组26个长度的标注项,分别与26个位置的属性值对应。
+
+举例:
+
+对于一张原始图片:
+
+1) 使用检测框,标注图片中每一个人的位置。
+
+2) 每一个检测框(对应每一个人),包含一组26位的属性值数组,数组的每一位以0或1表示,对应上述26个属性。例如,如果图片是'Female',则数组第22位为1(参考下文后处理代码中`res[22]`的判断逻辑);如果满足'Age18-60',则位置[19, 20, 21]对应的数值是[0, 1, 0];或者满足'AgeOver60',则相应数值为[1, 0, 0]。
+
+标注完成后利用检测框将每一个人截取成单人图,其图片与26位属性标注建立对应关系。也可先截成单人图再进行标注,效果相同。
+
+
+## 模型训练
+
+数据标注完成后,就可以拿来做模型的训练,完成自定义模型的优化工作。
+
+其主要有两步工作需要完成:1)将数据与标注数据整理成训练格式。2)修改配置文件开始训练。
+
+### 训练数据格式
+
+训练数据包括训练使用的图片和一个训练列表train.txt,其具体位置在训练配置中指定,其放置方式示例如下:
+```
+Attribute/
+|-- data                训练图片文件夹
+|   |-- 00001.jpg
+|   |-- 00002.jpg
+|   `-- 0000x.jpg
+`-- train.txt           训练数据列表
+```
+
+train.txt文件内为所有训练图片名称(相对于根路径的文件路径)+ 26个标注值
+
+其每一行表示一个人的图片和标注结果。其格式为:
+
+```
+00001.jpg 0,0,1,0,....
+```
+
+注意:1)图片与标注值之间是以Tab[\t]符号隔开;2)标注值之间是以逗号[,]隔开。该格式不能错,否则解析失败。
+
+### 修改配置开始训练
+
+首先执行以下命令下载训练代码(更多环境问题请参考[Install_PaddleClas](https://github.com/PaddlePaddle/PaddleClas/blob/release/2.4/docs/en/installation/install_paddleclas_en.md)):
+
+```shell
+git clone https://github.com/PaddlePaddle/PaddleClas
+```
+
+需要在配置文件`PaddleClas/blob/develop/ppcls/configs/PULC/person_attribute/PPLCNet_x1_0.yaml`中,修改的配置项如下:
+
+```
+DataLoader:
+  Train:
+    dataset:
+      name: MultiLabelDataset
+      image_root: "dataset/pa100k/" #指定训练图片所在根路径
+      cls_label_path: "dataset/pa100k/train_list.txt" #指定训练列表文件位置
+      label_ratio: True
+      transform_ops:
+
+  Eval:
+    dataset:
+      name: MultiLabelDataset
+      image_root: "dataset/pa100k/" #指定评估图片所在根路径
+      cls_label_path: "dataset/pa100k/val_list.txt" #指定评估列表文件位置
+      label_ratio: True
+      transform_ops:
+```
+注意:
+1. 这里image_root路径+train.txt中图片相对路径,对应图片的完整路径位置。
+2. 如果有修改属性数量,则还需修改内容配置项中属性种类数量:
+```
+# model architecture
+Arch:
+  name: "PPLCNet_x1_0"
+  pretrained: True
+  use_ssld: True
+  class_num: 26 #属性种类数量
+```
+
+然后运行以下命令开始训练。
+
+```
+#多卡训练
+export CUDA_VISIBLE_DEVICES=0,1,2,3
+python3 -m paddle.distributed.launch \
+        --gpus="0,1,2,3" \
+        tools/train.py \
+        -c ./ppcls/configs/PULC/person_attribute/PPLCNet_x1_0.yaml
+
+#单卡训练
+python3 tools/train.py \
+        -c ./ppcls/configs/PULC/person_attribute/PPLCNet_x1_0.yaml
+```
+
+训练完成后可以执行以下命令进行性能评估:
+```
+#多卡评估
+export CUDA_VISIBLE_DEVICES=0,1,2,3
+python3 -m paddle.distributed.launch \
+        --gpus="0,1,2,3" \
+        tools/eval.py \
+        -c ./ppcls/configs/PULC/person_attribute/PPLCNet_x1_0.yaml \
+        -o Global.pretrained_model=./output/PPLCNet_x1_0/best_model
+
+#单卡评估
+python3 tools/eval.py \
+        -c ./ppcls/configs/PULC/person_attribute/PPLCNet_x1_0.yaml \
+        -o Global.pretrained_model=./output/PPLCNet_x1_0/best_model
+```
+
+### 模型导出
+
+使用下述命令将训练好的模型导出为预测部署模型。
+
+```
+python3 tools/export_model.py \
+    -c ./ppcls/configs/PULC/person_attribute/PPLCNet_x1_0.yaml \
+    -o Global.pretrained_model=output/PPLCNet_x1_0/best_model \
+    -o Global.save_inference_dir=deploy/models/PPLCNet_x1_0_person_attribute_infer
+```
+
+导出模型后,需要下载[infer_cfg.yml](https://bj.bcebos.com/v1/paddledet/models/pipeline/infer_cfg.yml)文件,并放置到导出的模型文件夹`PPLCNet_x1_0_person_attribute_infer`中。
+
+使用时在PP-Human中的配置文件`./deploy/pipeline/config/infer_cfg_pphuman.yml`中修改新的模型路径`model_dir`项,并开启功能`enable: True`。
+```
+ATTR:
+  model_dir: [YOUR_DEPLOY_MODEL_DIR]/PPLCNet_x1_0_person_attribute_infer/ #新导出的模型路径位置
+  enable: True #开启功能
+```
+然后即可使用。至此即完成新增属性类别识别任务。
+
+## 属性增减
+
+上述是以26个属性为例的标注、训练过程。
+
+如果需要增加、减少属性数量,则需要:
+
+1)标注时需增加新属性类别信息或删减属性类别信息;
+
+2)对应修改训练中train.txt所使用的属性数量和名称;
+
+3)修改训练配置,例如``PaddleClas/blob/develop/ppcls/configs/PULC/person_attribute/PPLCNet_x1_0.yaml``文件中的属性数量,详细见上述`修改配置开始训练`部分。
+
+增加属性示例:
+
+1. 在标注数据时在26位后继续增加新的属性标注数值;
+2. 在train.txt文件的标注数值中也增加新的属性数值。
+3. 注意属性类型在train.txt标注数值列表中的位置对应关系需要是固定的,例如第[19, 20, 21]位表示年龄,则所有图片都要使用第[19, 20, 21]位表示年龄,不再赘述。具体校验与扩展方式可参考下方的示意脚本。
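+
+下面是一个示意性的扩展脚本(笔者提供的示例,并非PaddleClas或PaddleDetection自带工具,文件名仅为假设),演示如何在26位标注后追加1位新属性并保持Tab/逗号格式不变:
+
+```python
+# 示例:将train.txt中每行26位标注扩展为27位(新属性默认置0)
+with open("train.txt") as f, open("train_new.txt", "w") as out:
+    for line in f:
+        path, values = line.rstrip("\n").split("\t")  # 图片路径与标注值以Tab分隔
+        bits = values.split(",")                      # 标注值之间以逗号分隔
+        assert len(bits) == 26, f"{path} 标注位数错误: {len(bits)}"
+        bits.append("0")                              # 追加新属性位,默认为0(未命中)
+        out.write(f"{path}\t{','.join(bits)}\n")
+```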
    + +
    + +删减属性同理。 +例如,如果不需要年龄属性,则位置[19, 20, 21]的数值可以去掉。只需在train.txt中标注的26个数字中全部删除第19-21位数值即可,同时标注数据时也不再需要标注这3位属性值。 + +## 修改后处理代码 + +修改了属性定义后,pipeline后处理部分也需要做相应修改,主要影响结果可视化时的显示结果。 + +相应代码在路径`deploy/pipeline/pphuman/attr_infer.py`文件中`postprocess`函数。 + +其函数实现说明如下: + +``` +# 函数入口 + def postprocess(self, inputs, result): + # postprocess output of predictor + im_results = result['output'] + +# 1) 定义各组属性实际意义,其数量及位置与输出结果中占用位数一一对应。 + labels = self.pred_config.labels + age_list = ['AgeLess18', 'Age18-60', 'AgeOver60'] + direct_list = ['Front', 'Side', 'Back'] + bag_list = ['HandBag', 'ShoulderBag', 'Backpack'] + upper_list = ['UpperStride', 'UpperLogo', 'UpperPlaid', 'UpperSplice'] + lower_list = [ + 'LowerStripe', 'LowerPattern', 'LongCoat', 'Trousers', 'Shorts', + 'Skirt&Dress' + ] +# 2) 部分属性所用阈值与通用值有明显区别,单独设置 + glasses_threshold = 0.3 + hold_threshold = 0.6 + + batch_res = [] + for res in im_results: + res = res.tolist() + label_res = [] + # gender +# 3) 单个位置属性类别,判断该位置是否大于阈值,来分配二分类结果 + gender = 'Female' if res[22] > self.threshold else 'Male' + label_res.append(gender) + # age +# 4)多个位置属性类别,N选一形式,选择得分最高的属性 + age = age_list[np.argmax(res[19:22])] + label_res.append(age) + # direction + direction = direct_list[np.argmax(res[23:])] + label_res.append(direction) + # glasses + glasses = 'Glasses: ' + if res[1] > glasses_threshold: + glasses += 'True' + else: + glasses += 'False' + label_res.append(glasses) + # hat + hat = 'Hat: ' + if res[0] > self.threshold: + hat += 'True' + else: + hat += 'False' + label_res.append(hat) + # hold obj + hold_obj = 'HoldObjectsInFront: ' + if res[18] > hold_threshold: + hold_obj += 'True' + else: + hold_obj += 'False' + label_res.append(hold_obj) + # bag + bag = bag_list[np.argmax(res[15:18])] + bag_score = res[15 + np.argmax(res[15:18])] + bag_label = bag if bag_score > self.threshold else 'No bag' + label_res.append(bag_label) + # upper +# 5)同一类属性,分为两组(这里是款式和花色),每小组内单独选择,相当于两组不同属性。 + upper_label = 'Upper:' + sleeve = 'LongSleeve' if res[3] > res[2] else 'ShortSleeve' + upper_label += ' {}'.format(sleeve) + upper_res = res[4:8] + if np.max(upper_res) > self.threshold: + upper_label += ' {}'.format(upper_list[np.argmax(upper_res)]) + label_res.append(upper_label) + # lower + lower_res = res[8:14] + lower_label = 'Lower: ' + has_lower = False + for i, l in enumerate(lower_res): + if l > self.threshold: + lower_label += ' {}'.format(lower_list[i]) + has_lower = True + if not has_lower: + lower_label += ' {}'.format(lower_list[np.argmax(lower_res)]) + + label_res.append(lower_label) + # shoe + shoe = 'Boots' if res[14] > self.threshold else 'No boots' + label_res.append(shoe) + + batch_res.append(label_res) + result = {'output': batch_res} + return result +``` diff --git a/docs/advanced_tutorials/customization/pphuman_attribute_en.md b/docs/advanced_tutorials/customization/pphuman_attribute_en.md new file mode 100644 index 0000000000000000000000000000000000000000..fdfe74eb2b3c531a0e03cc6e2c7e8ad78114f0b3 --- /dev/null +++ b/docs/advanced_tutorials/customization/pphuman_attribute_en.md @@ -0,0 +1,223 @@ +[简体中文](pphuman_attribute.md) | English + +# Customized attribute recognition + +## Data Preparation + +### Data format + +We use the PA100K attribute annotation format, with a total of 26 attributes. + +The names, locations, and the number of these 26 attributes are shown in the table below. 
+
+| Attribute | index | length |
+|:------------------------------------------------------------------------------- |:---------------------- |:------ |
+| 'Hat','Glasses' | [0, 1] | 2 |
+| 'ShortSleeve','LongSleeve','UpperStride','UpperLogo','UpperPlaid','UpperSplice' | [2, 3, 4, 5, 6, 7] | 6 |
+| 'LowerStripe','LowerPattern','LongCoat','Trousers','Shorts','Skirt&Dress' | [8, 9, 10, 11, 12, 13] | 6 |
+| 'boots' | [14, ] | 1 |
+| 'HandBag','ShoulderBag','Backpack','HoldObjectsInFront' | [15, 16, 17, 18] | 4 |
+| 'AgeOver60', 'Age18-60', 'AgeLess18' | [19, 20, 21] | 3 |
+| 'Female' | [22, ] | 1 |
+| 'Front','Side','Back' | [23, 24, 25] | 3 |
+
+Examples:
+
+[0, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0]
+
+The first group: position [0, 1] values are [0, 1], which means 'no hat' and 'has glasses'.
+
+The second group: position [22, ] value is [0, ], indicating that the gender attribute is 'male'; otherwise it is 'female'.
+
+The third group: position [23, 24, 25] values are [0, 1, 0], indicating that the direction attribute is 'side'.
+
+Other groups follow in this order.
+
+
+
+### Data Annotation
+
+After knowing the purpose of the above `attribute annotation` format, we can start to annotate the data. The essence is that each single-person image creates a set of 26 annotation items, corresponding to the attribute values at the 26 positions.
+
+Examples:
+
+For an original image:
+
+1) Use bounding boxes to annotate the position of each person in the picture.
+
+2) Each detection box (corresponding to each person) contains 26 attribute values represented by 0 or 1, corresponding to the above 26 attributes. For example, if the person is 'Female', then the 22nd bit of the array is 1. If the person is between 'Age18-60', then the values at positions [19, 20, 21] are [0, 1, 0], or if the person matches 'AgeOver60', then the corresponding values are [1, 0, 0].
+
+After the annotation is completed, use the detection boxes to crop each person into a single-person image; each cropped image then corresponds to its 26-bit attribute annotation. It is also possible to crop the single-person images first and then annotate them. The results are the same.
+
+
+
+## Model Training
+
+Once the data is annotated, it can be used for model training to complete the optimization of the customized model.
+
+There are two main steps: 1) Organize the data and annotations into the training format. 2) Modify the configuration file to start training.
+
+### Training data format
+
+The training data includes the images used for training and a training list called train.txt. Its location is specified in the training configuration, with the following example layout:
+
+```
+Attribute/
+|-- data                Training images folder
+|   |-- 00001.jpg
+|   |-- 00002.jpg
+|   `-- 0000x.jpg
+`-- train.txt           List of training data
+```
+
+The train.txt file contains the names of all training images (file path relative to the root path) + 26 annotation values.
+
+Each line of it represents a person's image and annotation result. The format is as follows:
+
+```
+00001.jpg 0,0,1,0,....
+```
+
+Note: 1) The image name and the annotation values are separated by a Tab [\t]; 2) The annotation values are separated by commas [,]. If the format is wrong, the parsing will fail.
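+
+As a quick sanity check, the sketch below (our illustrative example, not a PaddleClas tool; the file path is an assumption) validates that every line of train.txt follows the Tab/comma format described above:
+
+```python
+# Validate train.txt: "image_path<TAB>v1,v2,...,v26" with each value 0 or 1
+with open("Attribute/train.txt") as f:
+    for line_no, line in enumerate(f, 1):
+        path, values = line.rstrip("\n").split("\t")  # image and labels split by Tab
+        bits = values.split(",")                      # label values split by commas
+        assert len(bits) == 26, f"line {line_no}: expected 26 values, got {len(bits)}"
+        assert set(bits) <= {"0", "1"}, f"line {line_no}: values must be 0 or 1"
+```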
+
+
+### Modify the configuration to start training
+
+First run the following command to download the training code (for more environment issues, please refer to [Install_PaddleClas](https://github.com/PaddlePaddle/PaddleClas/blob/release/2.4/docs/en/installation/install_paddleclas_en.md)):
+
+```
+git clone https://github.com/PaddlePaddle/PaddleClas
+```
+
+You need to modify the following configuration items in the configuration file `PaddleClas/blob/develop/ppcls/configs/PULC/person_attribute/PPLCNet_x1_0.yaml`:
+
+```
+DataLoader:
+  Train:
+    dataset:
+      name: MultiLabelDataset
+      image_root: "dataset/pa100k/" #Specify the root path of the training images
+      cls_label_path: "dataset/pa100k/train_list.txt" #Specify the location of the training list file
+      label_ratio: True
+      transform_ops:
+
+  Eval:
+    dataset:
+      name: MultiLabelDataset
+      image_root: "dataset/pa100k/" #Specify the root path of the evaluation images
+      cls_label_path: "dataset/pa100k/val_list.txt" #Specify the location of the evaluation list file
+      label_ratio: True
+      transform_ops:
+```
+
+Note:
+
+1. The image_root path plus the relative image path in train.txt should form the full path of the image.
+2. If you modify the number of attributes, the number of attribute classes in the architecture configuration should also be modified accordingly:
+
+```
+# model architecture
+Arch:
+  name: "PPLCNet_x1_0"
+  pretrained: True
+  use_ssld: True
+  class_num: 26 #Number of attribute classes
+```
+
+Then run the following command to start training:
+
+```
+#Multi-card training
+export CUDA_VISIBLE_DEVICES=0,1,2,3
+python3 -m paddle.distributed.launch \
+        --gpus="0,1,2,3" \
+        tools/train.py \
+        -c ./ppcls/configs/PULC/person_attribute/PPLCNet_x1_0.yaml
+
+#Single card training
+python3 tools/train.py \
+        -c ./ppcls/configs/PULC/person_attribute/PPLCNet_x1_0.yaml
+```
+
+You can run the following commands for performance evaluation after the training is completed:
+
+```
+#Multi-card evaluation
+export CUDA_VISIBLE_DEVICES=0,1,2,3
+python3 -m paddle.distributed.launch \
+        --gpus="0,1,2,3" \
+        tools/eval.py \
+        -c ./ppcls/configs/PULC/person_attribute/PPLCNet_x1_0.yaml \
+        -o Global.pretrained_model=./output/PPLCNet_x1_0/best_model
+
+#Single card evaluation
+python3 tools/eval.py \
+        -c ./ppcls/configs/PULC/person_attribute/PPLCNet_x1_0.yaml \
+        -o Global.pretrained_model=./output/PPLCNet_x1_0/best_model
+```
+
+### Model Export
+
+Use the following command to export the trained model as an inference deployment model.
+
+```
+python3 tools/export_model.py \
+    -c ./ppcls/configs/PULC/person_attribute/PPLCNet_x1_0.yaml \
+    -o Global.pretrained_model=output/PPLCNet_x1_0/best_model \
+    -o Global.save_inference_dir=deploy/models/PPLCNet_x1_0_person_attribute_infer
+```
+
+After exporting the model, you need to download the [infer_cfg.yml](https://bj.bcebos.com/v1/paddledet/models/pipeline/infer_cfg.yml) file and put it into the exported model folder `PPLCNet_x1_0_person_attribute_infer`.
+
+When you use the model, modify the model path `model_dir` and set `enable: True` in the PP-Human configuration file `./deploy/pipeline/config/infer_cfg_pphuman.yml`:
+
+```
+ATTR:
+  model_dir: [YOUR_DEPLOY_MODEL_DIR]/PPLCNet_x1_0_person_attribute_infer/ #The exported model location
+  enable: True #Whether to enable the function
+```
+
+Now the model is ready for use. To this point, a new attribute recognition task is completed.
+
+
+## Adding or deleting attributes
+
+The above is the annotation and training process with 26 attributes.
+
+If attributes need to be added or deleted, you need to:
+
+1) Add or delete the attribute category information when annotating the data.
+
+2) Modify the number and names of the attributes used in train.txt for training accordingly.
+
+3) Modify the training configuration, for example, the number of attributes in the ``PaddleClas/blob/develop/ppcls/configs/PULC/person_attribute/PPLCNet_x1_0.yaml`` file; for details, please see the ``Modify the configuration to start training`` section above.
+
+Example of adding attributes:
+
+1. Continue to add new attribute annotation values after the 26 values when annotating the data.
+2. Add the new attribute values to the annotated values in the train.txt file as well.
+3. Note that the mapping between attribute types and positions in the train.txt value list needs to be fixed; for example, positions [19, 20, 21] indicate age, so all images should use positions [19, 20, 21] to indicate age.
+
+The same applies to the deletion of attributes.
+For example, if the age attribute is not needed, the values in positions [19, 20, 21] can be removed. Simply remove the values at positions 19-21 from the 26 numbers annotated in train.txt, and you no longer need to annotate these 3 attribute values.
diff --git a/docs/advanced_tutorials/customization/pphuman_mot.md b/docs/advanced_tutorials/customization/pphuman_mot.md
new file mode 100644
index 0000000000000000000000000000000000000000..209c603267c6799d2ed3b8e096d977fa2ff5f7ab
--- /dev/null
+++ b/docs/advanced_tutorials/customization/pphuman_mot.md
@@ -0,0 +1,63 @@
+简体中文 | [English](./pphuman_mot_en.md)
+
+# 多目标跟踪任务二次开发
+
+在产业落地过程中应用多目标跟踪算法,不可避免地会出现希望自定义类型的多目标跟踪的需求,或是对已有多目标跟踪模型的优化,以提升在特定场景下模型的效果。我们在本文档通过案例来介绍如何根据期望识别的行为来进行多目标跟踪方案的选择,以及使用PaddleDetection进行多目标跟踪算法二次开发工作,包括:数据准备、模型优化思路和跟踪类别修改的开发流程。
+
+## 数据准备
+
+多目标跟踪模型方案采用[ByteTrack](https://arxiv.org/pdf/2110.06864.pdf),其中使用PP-YOLOE替换原文的YOLOX作为检测器,使用BYTETracker作为跟踪器,详细文档参考[ByteTrack](../../../configs/mot/bytetrack)。原文的ByteTrack只支持行人单类别,PaddleDetection中也支持多类别同时进行跟踪。训练ByteTrack也就是训练检测器的过程,只需要准备好检测标注即可,不需要ReID标注信息,即当成纯检测任务来做即可。数据集最好是从连续视频中抽取出来的,而不是无关联的图片集合。
+
+二次开发首先需要进行数据集的准备,针对场景特点采集合适的数据,从而提升模型效果和泛化性能。然后使用Labelme、LabelImg等标注工具标注目标检测框,并将标注结果转化为COCO或VOC数据格式。详细文档可以参考[数据准备文档](../../tutorials/data/README.md)。
+
+## 模型优化
+
+### 1. 使用自定义数据集训练
+
+ByteTrack跟踪方案采用的数据集只需要有检测标注即可。参照[MOT数据集准备](../../../configs/mot)和[MOT数据集教程](../../tutorials/data/PrepareMOTDataSet.md)。
+
+```
+# 单卡训练
+CUDA_VISIBLE_DEVICES=0 python tools/train.py -c configs/ppyoloe/ppyoloe_crn_l_300e_coco.yml --eval --amp
+
+# 多卡训练
+python -m paddle.distributed.launch --log_dir=log_dir --gpus 0,1,2,3,4,5,6,7 tools/train.py -c configs/ppyoloe/ppyoloe_crn_l_300e_coco.yml --eval --amp
+```
+
+更详细的命令参考[30分钟快速上手PaddleDetection](../../tutorials/GETTING_STARTED_cn.md)和[ByteTrack](../../../configs/mot/bytetrack/detector)。
+
+
+### 2. 加载COCO模型作为预训练
+
+目前PaddleDetection提供的配置文件加载的预训练模型均为ImageNet数据集的权重,加载到检测算法的骨干网络中。实际使用时,建议加载COCO数据集训练好的权重,通常能够对模型精度有较大提升,使用方法如下:
+
+#### 1) 设置预训练权重路径
+
+COCO数据集训练好的模型权重均在各算法配置文件夹下,例如`configs/ppyoloe`下提供了PP-YOLOE-l COCO数据集权重:[链接](https://paddledet.bj.bcebos.com/models/ppyoloe_crn_l_300e_coco.pdparams)。配置文件中设置`pretrain_weights: https://paddledet.bj.bcebos.com/models/ppyoloe_crn_l_300e_coco.pdparams`
+
+#### 2) 修改超参数
+
+加载COCO预训练权重后,需要修改学习率超参数,例如`configs/ppyoloe/_base_/optimizer_300e.yml`中:
+
+```
+epoch: 120 # 原始配置为300epoch,加载COCO权重后可以适当减少迭代轮数
+
+LearningRate:
+  base_lr: 0.005 # 原始配置为0.025,加载COCO权重后需要降低学习率
+  schedulers:
+  - !CosineDecay
+    max_epochs: 144 # 依据epoch数进行修改,一般为epoch数的1.2倍
+  - !LinearWarmup
+    start_factor: 0.
+    epochs: 5
+```
+
+## 跟踪类别修改
+
+当实际使用场景类别发生变化时,需要修改数据配置文件,例如`configs/datasets/coco_detection.yml`中:
+
+```
+metric: COCO
+num_classes: 10 # 原始类别数为80
+```
+
+配置修改完成后,同样可以加载COCO预训练权重。PaddleDetection支持自动加载shape匹配的权重,对于shape不匹配的权重会自动忽略,因此无需其他修改。
diff --git a/docs/advanced_tutorials/customization/pphuman_mot_en.md b/docs/advanced_tutorials/customization/pphuman_mot_en.md
new file mode 100644
index 0000000000000000000000000000000000000000..0aaea495663666782a318ccbc945f11169598eff
--- /dev/null
+++ b/docs/advanced_tutorials/customization/pphuman_mot_en.md
@@ -0,0 +1,65 @@
+[简体中文](./pphuman_mot.md) | English
+
+# Customized multi-object tracking task
+
+When applying multi-object tracking algorithms in industrial applications, there will inevitably be demands for customized types of multi-object tracking, or for optimization of existing multi-object tracking models to improve their effectiveness in specific scenarios. In this document, we present examples of how to choose a multi-object tracking solution based on the expected identified behavior, and how to use PaddleDetection for further development of multi-object tracking algorithms, including data preparation, model optimization ideas, and the development process of tracking category modification.
+
+## Data Preparation
+
+The multi-object tracking solution uses [ByteTrack](https://arxiv.org/pdf/2110.06864.pdf), which adopts PP-YOLOE to replace the original YOLOX as the detector and BYTETracker as the tracker; for details, please refer to [ByteTrack](../../../configs/mot/bytetrack). The original ByteTrack only supports the single pedestrian category, while PaddleDetection also supports tracking multiple categories simultaneously. Training ByteTrack is the process of training the detector: only the detection annotations need to be prepared, and no ReID annotation information is required, i.e., it can be treated as a pure detection task. The dataset should preferably be extracted from continuous video rather than being a collection of unrelated images.
+
+Customization starts with the preparation of the dataset. We need to collect suitable data for the scenario features, so as to improve the model effect and generalization performance.
Then annotation tools such as Labelme and LabelImg are used to annotate the object bounding boxes, and the annotation results are converted into COCO or VOC data format. For details, please refer to [Data Preparation](../../tutorials/data/README.md).
+
+## Model Optimization
+
+### 1. Use customized dataset for training
+
+The dataset used by the ByteTrack tracking solution only needs detection annotations. Refer to [MOT dataset preparation](../../../configs/mot) and [MOT dataset tutorial](../../tutorials/data/PrepareMOTDataSet.md).
+
+```
+# Single card training
+CUDA_VISIBLE_DEVICES=0 python tools/train.py -c configs/ppyoloe/ppyoloe_crn_l_300e_coco.yml --eval --amp
+
+# Multi-card training
+python -m paddle.distributed.launch --log_dir=log_dir --gpus 0,1,2,3,4,5,6,7 tools/train.py -c configs/ppyoloe/ppyoloe_crn_l_300e_coco.yml --eval --amp
+```
+
+For more detailed commands, please refer to [Getting Started for PaddleDetection](../../tutorials/GETTING_STARTED.md) and [ByteTrack](../../../configs/mot/bytetrack/detector).
+
+### 2. Load the COCO model as the pre-trained model
+
+The pre-trained models currently loaded by PaddleDetection's configuration files are weights trained on the ImageNet dataset, loaded into the backbone network of the detection algorithm. For practical use, it is recommended to load the weights trained on the COCO dataset, which usually provides a large improvement in model accuracy. The method is as follows.
+
+#### 1) Set the pre-training weight path
+
+The trained model weights for the COCO dataset are provided in the configuration folder of each algorithm; for example, the PP-YOLOE-l COCO weights are provided under `configs/ppyoloe`: [Link](https://paddledet.bj.bcebos.com/models/ppyoloe_crn_l_300e_coco.pdparams). In the configuration file, set `pretrain_weights: https://paddledet.bj.bcebos.com/models/ppyoloe_crn_l_300e_coco.pdparams`
+
+#### 2) Modify hyperparameters
+
+After loading the COCO pre-training weights, the learning rate hyperparameters need to be modified, for example in `configs/ppyoloe/_base_/optimizer_300e.yml`:
+
+```
+epoch: 120 # The original configuration is 300 epochs; after loading COCO weights, the number of epochs can be reduced appropriately
+
+LearningRate:
+  base_lr: 0.005 # The original configuration is 0.025; after loading COCO weights, the learning rate should be reduced
+  schedulers:
+  - !CosineDecay
+    max_epochs: 144 # Modified according to the number of epochs, usually 1.2 times the number of epochs
+  - !LinearWarmup
+    start_factor: 0.
+    epochs: 5
+```
+
+## Modify categories
+
+When the categories of the actual application scenario change, the dataset configuration file needs to be modified, for example in `configs/datasets/coco_detection.yml`:
+
+```
+metric: COCO
+num_classes: 10 # the original number of classes is 80
+```
+
+After the configuration changes are completed, the COCO pre-training weights can still be loaded. PaddleDetection supports automatic loading of shape-matching weights, and weights that do not match in shape are automatically ignored, so no other modifications are needed.
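+
+As a side note, the following minimal sketch (our illustration, not PaddleDetection source code; function names and thresholds are assumptions) shows the core idea of the BYTE association used by the tracker described above: high-score detections are matched to existing tracks first, and low-score detections are then tried for the tracks that remain unmatched.
+
+```python
+def iou(a, b):
+    # a, b: boxes as [x1, y1, x2, y2]
+    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
+    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
+    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
+    union = ((a[2] - a[0]) * (a[3] - a[1]) +
+             (b[2] - b[0]) * (b[3] - b[1]) - inter)
+    return inter / (union + 1e-9)
+
+def byte_associate(tracks, dets, scores, high_thr=0.6, iou_thr=0.3):
+    # Greedy two-stage IoU matching: high-score detections first, then low-score ones.
+    unmatched_tracks = list(range(len(tracks)))
+    matches = []
+    high = [i for i, s in enumerate(scores) if s >= high_thr]
+    low = [i for i, s in enumerate(scores) if s < high_thr]
+    for pool in (high, low):
+        for d in pool:
+            if not unmatched_tracks:
+                break
+            t = max(unmatched_tracks, key=lambda t: iou(tracks[t], dets[d]))
+            if iou(tracks[t], dets[d]) >= iou_thr:
+                matches.append((t, d))
+                unmatched_tracks.remove(t)
+    return matches, unmatched_tracks
+```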
diff --git a/docs/advanced_tutorials/customization/pphuman_mtmct.md b/docs/advanced_tutorials/customization/pphuman_mtmct.md new file mode 100644 index 0000000000000000000000000000000000000000..0d784e243cb782c2a5152dff3bd3652138391344 --- /dev/null +++ b/docs/advanced_tutorials/customization/pphuman_mtmct.md @@ -0,0 +1,159 @@ +简体中文 | [English](./pphuman_mtmct_en.md) + +# 跨镜跟踪任务二次开发 + +## 数据准备 + +### 数据格式 + +跨镜跟踪使用行人REID技术实现,其训练方式采用多分类模型训练,使用时取分类softmax头部前的特征作为检索特征向量。 + +因此其格式与多分类任务相同。每一个行人分配一个专属id,不同行人id不同,同一行人在不同图片中的id相同。 + +例如图片0001.jpg、0003.jpg是同一个人,0002.jpg、0004.jpg是不同的其他行人。则标注id为: + +``` +0001.jpg 00001 +0002.jpg 00002 +0003.jpg 00001 +0004.jpg 00003 +... +``` + +依次类推。 + +### 数据标注 + +理解了上面`标注`格式的含义后,就可以进行数据标注的工作。其本质是:每张单人图建立一个标注项,对应该行人分配的id。 + +举例: + +对于一张原始图片, + +1) 使用检测框,标注图片中每一个人的位置。 + +2) 每一个检测框(对应每一个人),包含一个int类型的id属性。例如,上述举例中的0001.jpg中的人,对应id:1. + +标注完成后利用检测框将每一个人截取成单人图,其图片与id属性标注建立对应关系。也可先截成单人图再进行标注,效果相同。 + +## 模型训练 + + +数据标注完成后,就可以拿来做模型的训练,完成自定义模型的优化工作。 + +其主要有两步工作需要完成:1)将数据与标注数据整理成训练格式。2)修改配置文件开始训练。 + +### 训练数据格式 + +训练数据包括训练使用的图片和一个训练列表bounding_box_train.txt,其具体位置在训练配置中指定,其放置方式示例如下: + +``` +REID/ +|-- data 训练图片文件夹 +| |-- 00001.jpg +| |-- 00002.jpg +| `-- 0000x.jpg +`-- bounding_box_train.txt 训练数据列表 + +``` + +bounding_box_train.txt文件内为所有训练图片名称(相对于根路径的文件路径)+ 1个id标注值 + +其每一行表示一个人的图片和id标注结果。其格式为: + +``` +0001.jpg 00001 +0002.jpg 00002 +0003.jpg 00001 +0004.jpg 00003 +``` +注意:图片与标注值之间是以Tab[\t]符号隔开。该格式不能错,否则解析失败。 + +### 修改配置开始训练 + +首先执行以下命令下载训练代码(更多环境问题请参考[Install_PaddleClas](https://github.com/PaddlePaddle/PaddleClas/blob/release/2.4/docs/en/installation/install_paddleclas_en.md)): + +```shell +git clone https://github.com/PaddlePaddle/PaddleClas +``` + + +需要在配置文件[softmax_triplet_with_center.yaml](https://github.com/PaddlePaddle/PaddleClas/blob/develop/ppcls/configs/reid/strong_baseline/softmax_triplet_with_center.yaml)中,修改的配置项如下: + +``` + Head: + name: "FC" + embedding_size: *feat_dim + class_num: &class_num 751 #行人id总数量 + +DataLoader: + Train: + dataset: + name: "Market1501" + image_root: "./dataset/" #训练图片根路径 + cls_label_path: "bounding_box_train" #训练文件列表 + + + Eval: + Query: + dataset: + name: "Market1501" + image_root: "./dataset/" #评估图片根路径 + cls_label_path: "query" #评估文件列表 + +``` +注意: + +1. 
这里image_root路径+bounding_box_train.txt中图片相对路径,对应图片存放的完整路径。
+
+然后运行以下命令开始训练。
+
+```
+#多卡训练
+export CUDA_VISIBLE_DEVICES=0,1,2,3
+python3 -m paddle.distributed.launch \
+        --gpus="0,1,2,3" \
+        tools/train.py \
+        -c ./ppcls/configs/reid/strong_baseline/softmax_triplet_with_center.yaml
+
+#单卡训练
+python3 tools/train.py \
+        -c ./ppcls/configs/reid/strong_baseline/softmax_triplet_with_center.yaml
+```
+
+训练完成后可以执行以下命令进行性能评估:
+```
+#多卡评估
+export CUDA_VISIBLE_DEVICES=0,1,2,3
+python3 -m paddle.distributed.launch \
+        --gpus="0,1,2,3" \
+        tools/eval.py \
+        -c ./ppcls/configs/reid/strong_baseline/softmax_triplet_with_center.yaml \
+        -o Global.pretrained_model=./output/strong_baseline/best_model
+
+#单卡评估
+python3 tools/eval.py \
+        -c ./ppcls/configs/reid/strong_baseline/softmax_triplet_with_center.yaml \
+        -o Global.pretrained_model=./output/strong_baseline/best_model
+```
+
+### 模型导出
+
+使用下述命令将训练好的模型导出为预测部署模型。
+
+```
+python3 tools/export_model.py \
+    -c ./ppcls/configs/reid/strong_baseline/softmax_triplet_with_center.yaml \
+    -o Global.pretrained_model=./output/strong_baseline/best_model \
+    -o Global.save_inference_dir=deploy/models/strong_baseline_inference
+```
+
+导出模型后,下载[infer_cfg.yml](https://bj.bcebos.com/v1/paddledet/models/pipeline/REID/infer_cfg.yml)文件到新导出的模型文件夹`strong_baseline_inference`中。
+
+使用时在PP-Human中的配置文件`infer_cfg_pphuman.yml`中修改模型路径`model_dir`并开启功能`enable: True`。
+```
+REID:
+  model_dir: [YOUR_DEPLOY_MODEL_DIR]/strong_baseline_inference/
+  enable: True
+```
+然后即可使用。至此完成模型开发。
diff --git a/docs/advanced_tutorials/customization/pphuman_mtmct_en.md b/docs/advanced_tutorials/customization/pphuman_mtmct_en.md
new file mode 100644
index 0000000000000000000000000000000000000000..e3638c2700a4cd20d5bcb6f85c6c6d2b610a56ee
--- /dev/null
+++ b/docs/advanced_tutorials/customization/pphuman_mtmct_en.md
@@ -0,0 +1,165 @@
+[简体中文](./pphuman_mtmct.md) | English
+
+# Customized Multi-Target Multi-Camera Tracking Module of PP-Human
+
+## Data Preparation
+
+### Data Format
+
+Multi-target multi-camera tracking, or MTMCT, is achieved by the pedestrian REID technique. It is trained with a multi-classification model, and at inference time the features before the classification softmax head are used as the retrieval feature vector.
+
+Therefore its format is the same as the multi-classification task. Each pedestrian is assigned an exclusive id, which is different for different pedestrians while the same pedestrian has the same id in different images.
+
+For example, images 0001.jpg and 0003.jpg are the same person, while 0002.jpg and 0004.jpg are different pedestrians. Then the labeled ids are:
+
+```
+0001.jpg 00001
+0002.jpg 00002
+0003.jpg 00001
+0004.jpg 00003
+...
+```
+
+### Data Annotation
+
+After understanding the meaning of the `annotation` format above, we can work on the data annotation. The essence of data annotation is that each single-person image creates an annotation item that corresponds to the id assigned to that pedestrian.
+
+For example:
+
+For an original picture:
+
+1) Use bounding boxes to annotate the position of each person in the picture.
+
+2) Each bounding box (corresponding to each person) contains an int id attribute. For example, the person in 0001.jpg in the above example corresponds to id: 1.
+
+After the annotation is completed, use the detection boxes to crop each person into a single-person image; each cropped image then corresponds to its id annotation. You can also crop the single-person images first and then annotate them; the result is the same.
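+
+To make the retrieval idea concrete, here is a minimal sketch (our illustrative example, not PP-Human code; feature dimensions, ids and the threshold are assumptions): once the REID model produces one feature vector per cropped pedestrian image, cross-camera matching reduces to nearest-neighbor search on normalized features.
+
+```python
+import numpy as np
+
+def match_ids(query_feats, gallery_feats, gallery_ids, threshold=0.5):
+    # L2-normalize so that the dot product equals cosine similarity
+    q = query_feats / np.linalg.norm(query_feats, axis=1, keepdims=True)
+    g = gallery_feats / np.linalg.norm(gallery_feats, axis=1, keepdims=True)
+    sim = q @ g.T                # cosine similarity matrix
+    best = sim.argmax(axis=1)    # best gallery match per query
+    return [gallery_ids[j] if sim[i, j] > threshold else -1  # -1: no match, assign a new id
+            for i, j in enumerate(best)]
+
+query = np.random.rand(2, 128)    # 2 query crops from one camera, 128-d features
+gallery = np.random.rand(5, 128)  # 5 gallery crops from other cameras
+print(match_ids(query, gallery, gallery_ids=[1, 2, 3, 4, 5]))
+```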
+
+
+## Model Training
+
+Once the data is annotated, it can be used for model training to complete the optimization of the customized model.
+
+There are two main steps to implement: 1) organize the data and annotations into the training format; 2) modify the configuration file to start training.
+
+### Training data format
+
+The training data consists of the images used for training and a training list bounding_box_train.txt, the location of which is specified in the training configuration, with the following example placement:
+
+```
+REID/
+|-- data                        Training image folder
+|   |-- 00001.jpg
+|   |-- 00002.jpg
+|   `-- 0000x.jpg
+`-- bounding_box_train.txt      List of training data
+```
+
+The bounding_box_train.txt file contains the names of all training images (file path relative to the root path) + 1 id annotation value.
+
+Each line represents a person's image and id annotation result. The format is as follows:
+
+```
+0001.jpg 00001
+0002.jpg 00002
+0003.jpg 00001
+0004.jpg 00003
+```
+
+Note: The image name is separated from the annotation value by a Tab [\t] symbol. This format must be correct, otherwise the parsing will fail.
+
+### Modify the configuration to start training
+
+First, execute the following command to download the training code (for more environment issues, please refer to [Install_PaddleClas](https://github.com/PaddlePaddle/PaddleClas/blob/release/2.4/docs/en/installation/install_paddleclas_en.md)):
+
+```
+git clone https://github.com/PaddlePaddle/PaddleClas
+```
+
+You need to change the following configuration items in the configuration file [softmax_triplet_with_center.yaml](https://github.com/PaddlePaddle/PaddleClas/blob/develop/ppcls/configs/reid/strong_baseline/softmax_triplet_with_center.yaml):
+
+```
+  Head:
+    name: "FC"
+    embedding_size: *feat_dim
+    class_num: &class_num 751 #Total number of pedestrian ids
+
+DataLoader:
+  Train:
+    dataset:
+      name: "Market1501"
+      image_root: "./dataset/" #Training image root path
+      cls_label_path: "bounding_box_train" #Training file list
+
+
+  Eval:
+    Query:
+      dataset:
+        name: "Market1501"
+        image_root: "./dataset/" #Evaluation image root path
+        cls_label_path: "query" #List of evaluation files
+```
+
+Note:
+
+1. The image_root path plus the relative image path in bounding_box_train.txt corresponds to the full path where the image is stored.
+
+Then run the following command to start the training.
+
+```
+#Multi-card training
+export CUDA_VISIBLE_DEVICES=0,1,2,3
+python3 -m paddle.distributed.launch \
+        --gpus="0,1,2,3" \
+        tools/train.py \
+        -c ./ppcls/configs/reid/strong_baseline/softmax_triplet_with_center.yaml
+
+#Single card training
+python3 tools/train.py \
+        -c ./ppcls/configs/reid/strong_baseline/softmax_triplet_with_center.yaml
+```
+
+After the training is completed, you may run the following commands for performance evaluation:
+
+```
+#Multi-card evaluation
+export CUDA_VISIBLE_DEVICES=0,1,2,3
+python3 -m paddle.distributed.launch \
+        --gpus="0,1,2,3" \
+        tools/eval.py \
+        -c ./ppcls/configs/reid/strong_baseline/softmax_triplet_with_center.yaml \
+        -o Global.pretrained_model=./output/strong_baseline/best_model
+
+#Single card evaluation
+python3 tools/eval.py \
+        -c ./ppcls/configs/reid/strong_baseline/softmax_triplet_with_center.yaml \
+        -o Global.pretrained_model=./output/strong_baseline/best_model
+```
+
+### Model Export
+
+Use the following command to export the trained model as an inference deployment model.
+ +``` +python3 tools/export_model.py \ + -c ./ppcls/configs/reid/strong_baseline/softmax_triplet_with_center.yaml \ + -o Global.pretrained_model=./output/strong_baseline/best_model \ + -o Global.save_inference_dir=deploy/models/strong_baseline_inference +``` + +After exporting the model, download the [infer_cfg.yml](https://bj.bcebos.com/v1/paddledet/models/pipeline/REID/infer_cfg.yml) file to the newly exported model folder 'strong_baseline_ inference'. + +Change the model path `model_dir` in the configuration file `infer_cfg_pphuman.yml` in PP-Human and set `enable`. + +``` +REID: + model_dir: [YOUR_DEPLOY_MODEL_DIR]/strong_baseline_inference/ + enable: True +``` + +Now, the model is ready. diff --git a/docs/advanced_tutorials/openvino_inference/README.md b/docs/advanced_tutorials/openvino_inference/README.md index 0a67651d9baadf6708adcda21628f9a1dcaaf31c..ac372eaa30d1f02a5016b42b928e9c56bfea547a 100644 --- a/docs/advanced_tutorials/openvino_inference/README.md +++ b/docs/advanced_tutorials/openvino_inference/README.md @@ -3,9 +3,9 @@ ## Introduction PaddleDetection has been a vibrant open-source project and has a large amout of contributors and maintainers around it. It is an AI framework which enables developers to quickly integrate AI capacities into their own projects and applications. -Intel OpenVINO is a widely used free toolkit. It facilitates the optimization of a deep learning model from a framework and deployment using an inference engine onto Intel hardware. +Intel OpenVINO is a widely used free toolkit. It facilitates the optimization of a deep learning model from a framework and deployment using an inference engine onto Intel hardware. -Apparently, the upstream(Paddle) and the downstream(Intel OpenVINO) can work together to streamline and simplify the process of developing an AI model and deploying the model onto hardware, which, in turn, makes our lives easier. +Apparently, the upstream(Paddle) and the downstream(Intel OpenVINO) can work together to streamline and simplify the process of developing an AI model and deploying the model onto hardware, which, in turn, makes our lives easier. This article will show you how to use a PaddleDetection model [FairMOT](../../../configs/mot/fairmot/README.md) from the Model Zoo in PaddleDetection and use it with OpenVINO to do the inference. @@ -50,7 +50,7 @@ Once the Paddle model has been converted to ONNX format, we can then use it with 1. ### Get the execution network -So the 1st thing to do here is to get an execution network which can be used later to do the inference. +So the 1st thing to do here is to get an execution network which can be used later to do the inference. Here is the code. @@ -70,7 +70,7 @@ Every AI model has its own steps of preprocessing, let's have a look how to do i ``` def prepare_input(): transforms = [ - T.Resize(target_size=(target_width, target_height)), + T.Resize(target_size=(target_width, target_height)), T.Normalize(mean=(0,0,0), std=(1,1,1)) ] img_file = root_path / "images/street.jpeg" @@ -87,7 +87,7 @@ def prepare_input(): 3. ### Prediction -After we have done all the load network and preprocessing, it finally comes to the stage of prediction. +After we have done all the load network and preprocessing, it finally comes to the stage of prediction. ``` @@ -100,7 +100,7 @@ You might be surprised to see the very exciting stage this small. Hang on there, 4. ### Post-processing -MOT(Multi-Object Tracking) is special, not like other AI models which require a few steps of post-processing. 
Instead, FairMOT requires a special object called tracker, to handle the prediction results. The prediction results are prediction detections and prediction embeddings. +MOT(Multi-Object Tracking) is special, not like other AI models which require a few steps of post-processing. Instead, FairMOT requires a special object called tracker, to handle the prediction results. The prediction results are prediction detections and prediction embeddings. Luckily, PaddleDetection has made this procesure easy for us, it has exported the JDETracker from `ppdet`, so that we do not need to write much code to handle it. @@ -156,4 +156,4 @@ So these are the all steps which you need to follow in order to run FairMOT on y A companion article which explains in details of this procedure will be released soon and a link to that article will be updated here soon. -To see the full code, please take a look at [Paddle OpenVINO Prediction](docs/advanced_tutorials/openvino_inference/fairmot_onnx_openvino.py). \ No newline at end of file +To see the full code, please take a look at [Paddle OpenVINO Prediction](./fairmot_onnx_openvino.py). diff --git a/docs/advanced_tutorials/openvino_inference/README_cn.md b/docs/advanced_tutorials/openvino_inference/README_cn.md index 2fc001757157501bccaeffc37d900cbc6d31e7eb..aaaf84eb05c26359fcc48cb14a3f6104bd834d5d 100644 --- a/docs/advanced_tutorials/openvino_inference/README_cn.md +++ b/docs/advanced_tutorials/openvino_inference/README_cn.md @@ -68,7 +68,7 @@ def get_net(): ``` def prepare_input(): transforms = [ - T.Resize(target_size=(target_width, target_height)), + T.Resize(target_size=(target_width, target_height)), T.Normalize(mean=(0,0,0), std=(1,1,1)) ] img_file = root_path / "images/street.jpeg" @@ -93,7 +93,7 @@ def predict(exec_net, input): return result ``` -您可能会惊讶地看到, 最激动人心的步骤居然如此简单。 不过下一个阶段会加复杂。 +您可能会惊讶地看到, 最激动人心的步骤居然如此简单。 不过下一个阶段会更加复杂。 4. ### 后处理 @@ -138,7 +138,7 @@ def postprocess(pred_dets, pred_embs, threshold = 0.5): 5. ### 画出检测框(可选) -这一步是可选的。出于演示目的,我只使用 `plot_tracking_dict()` 方法在图像上绘制所有边界框。 但是,如果您没有相同的要求,则不需要这样做。 +这一步是可选的。出于演示目的,我只使用 `plot_tracking_dict()` 方法在图像上绘制所有边界框。 但是,如果您没有相同的要求,则不需要这样做。 ``` online_im = plot_tracking_dict( @@ -154,4 +154,4 @@ online_im = plot_tracking_dict( 之后会有一篇详细解释此过程的配套文章将会发布,并且该文章的链接将很快在此处更新。 -完整代码请查看 [Paddle OpenVINO 预测](docs/advanced_tutorials/openvino_inference/fairmot_onnx_openvino.py). \ No newline at end of file +完整代码请查看 [Paddle OpenVINO 预测](./fairmot_onnx_openvino.py). 
diff --git a/docs/images/add_attribute.png b/docs/images/add_attribute.png new file mode 100644 index 0000000000000000000000000000000000000000..1d6092c4a3f778f08b0636875bdcb30a51d0655d Binary files /dev/null and b/docs/images/add_attribute.png differ diff --git a/docs/images/bus.jpg b/docs/images/bus.jpg new file mode 100644 index 0000000000000000000000000000000000000000..cdbbf8c9ba9990fb228360db590e37f078160767 Binary files /dev/null and b/docs/images/bus.jpg differ diff --git a/docs/images/dog.jpg b/docs/images/dog.jpg new file mode 100644 index 0000000000000000000000000000000000000000..237c084d9b0dd5cf32e9ec5463ab027ebd148df8 Binary files /dev/null and b/docs/images/dog.jpg differ diff --git a/docs/images/fitness_demo.gif b/docs/images/fitness_demo.gif index b56ab6563fb02cc77bc29235b3090f46f5597859..d96a3720158add7fec7eeaeae92b5d2d243bca3d 100644 Binary files a/docs/images/fitness_demo.gif and b/docs/images/fitness_demo.gif differ diff --git a/docs/images/fps_map.png b/docs/images/fps_map.png index d73877729c0775709e5954c008a88776bf48606a..5c22d725b01d374de8f394096f140b3d33cebfa7 100644 Binary files a/docs/images/fps_map.png and b/docs/images/fps_map.png differ diff --git a/docs/images/pphumanv2.png b/docs/images/pphumanv2.png new file mode 100644 index 0000000000000000000000000000000000000000..829dd60865b18b211e97e6a1c405dc2ee3d24c4b Binary files /dev/null and b/docs/images/pphumanv2.png differ diff --git a/docs/images/res.jpg b/docs/images/res.jpg new file mode 100644 index 0000000000000000000000000000000000000000..6f281fa3be0053d5a919da4ee36c6005e0664daa Binary files /dev/null and b/docs/images/res.jpg differ diff --git a/docs/images/tinypose_app.png b/docs/images/tinypose_app.png index 750a532fa13f51c33b0a7a24372a6fbeb28b9111..fd43ebcdcaec7bda1c57378e7b82b9d103ee3cb2 100644 Binary files a/docs/images/tinypose_app.png and b/docs/images/tinypose_app.png differ diff --git a/docs/tutorials/DistributedTraining_cn.md b/docs/tutorials/DistributedTraining_cn.md new file mode 100644 index 0000000000000000000000000000000000000000..80ff32ecb0f3bf86b30ce59db55e0fc1d4197cbe --- /dev/null +++ b/docs/tutorials/DistributedTraining_cn.md @@ -0,0 +1,50 @@ +[English](DistributedTraining_en.md) | 简体中文 + + +# 分布式训练 + +## 1. 简介 + +* 分布式训练指的是将训练任务按照一定方法拆分到多个计算节点进行计算,再按照一定的方法对拆分后计算得到的梯度等信息进行聚合与更新。飞桨分布式训练技术源自百度的业务实践,在自然语言处理、计算机视觉、搜索和推荐等领域经过超大规模业务检验。分布式训练的高性能,是飞桨的核心优势技术之一,PaddleDetection同时支持单机训练与多机训练。更多关于分布式训练的方法与文档可以参考:[分布式训练快速开始教程](https://fleet-x.readthedocs.io/en/latest/paddle_fleet_rst/parameter_server/ps_quick_start.html)。 + +## 2. 使用方法 + +### 2.1 单机训练 + +* 以PP-YOLOE-s为例,本地准备好数据之后,使用`paddle.distributed.launch`或者`fleetrun`的接口启动训练任务即可。下面为运行脚本示例。 + +```bash +fleetrun \ +--selected_gpu 0,1,2,3,4,5,6,7 \ +tools/train.py -c configs/ppyoloe/ppyoloe_crn_s_300e_coco.yml \ +--eval &>logs.txt 2>&1 & +``` + +### 2.2 多机训练 + +* 相比单机训练,多机训练时,只需要添加`--ips`的参数,该参数表示需要参与分布式训练的机器的ip列表,不同机器的ip用逗号隔开。下面为运行代码示例。 + +```shell +ip_list="10.127.6.17,10.127.5.142,10.127.45.13,10.127.44.151" +fleetrun \ +--ips=${ip_list} \ +--selected_gpu 0,1,2,3,4,5,6,7 \ +tools/train.py -c configs/ppyoloe/ppyoloe_crn_s_300e_coco.yml \ +--eval &>logs.txt 2>&1 & +``` + +**注:** +* 不同机器的ip信息需要用逗号隔开,可以通过`ifconfig`或者`ipconfig`查看。 +* 不同机器之间需要做免密设置,且可以直接ping通,否则无法完成通信。 +* 不同机器之间的代码、数据与运行命令或脚本需要保持一致,且所有的机器上都需要运行设置好的训练命令或者脚本。最终`ip_list`中的第一台机器的第一块设备是trainer0,以此类推。 +* 不同机器的起始端口可能不同,建议在启动多机任务前,在不同的机器中设置相同的多机运行起始端口,命令为`export FLAGS_START_PORT=17000`,端口值建议在`10000~20000`之间。 + + +## 3. 
性能效果测试 + +* 在单机和4机8卡V100的机器上,基于[PP-YOLOE-s](../../configs/ppyoloe/ppyoloe_crn_s_300e_coco.yml)进行模型训练,模型的训练耗时情况如下所示。 + +机器 | 精度 | 耗时 +-|-|- +单机8卡 | 42.7% | 39h +4机8卡 | 42.1% | 13h diff --git a/docs/tutorials/DistributedTraining_en.md b/docs/tutorials/DistributedTraining_en.md new file mode 100644 index 0000000000000000000000000000000000000000..9fea8637340dd76fa9d416d9a40ccd88d17d86db --- /dev/null +++ b/docs/tutorials/DistributedTraining_en.md @@ -0,0 +1,44 @@ +English | [简体中文](DistributedTraining_cn.md) + + +## 1. Usage + +### 1.1 Single-machine + +* Take PP-YOLOE-s as an example, after preparing the data locally, use the interface of `paddle.distributed.launch` or `fleetrun` to start the training task. Below is an example of running the script. + +```bash +fleetrun \ +--selected_gpu 0,1,2,3,4,5,6,7 \ +tools/train.py -c configs/ppyoloe/ppyoloe_crn_s_300e_coco.yml \ +--eval &>logs.txt 2>&1 & +``` + +### 1.2 Multi-machine + +* Compared with single-machine training, when training on multiple machines, you only need to add the `--ips` parameter, which indicates the ip list of machines that need to participate in distributed training. The ips of different machines are separated by commas. Below is an example of running code. + +```shell +ip_list="10.127.6.17,10.127.5.142,10.127.45.13,10.127.44.151" +fleetrun \ +--ips=${ip_list} \ +--selected_gpu 0,1,2,3,4,5,6,7 \ +tools/train.py -c configs/ppyoloe/ppyoloe_crn_s_300e_coco.yml \ +--eval &>logs.txt 2>&1 & +``` + +**Note:** +* The ip information of different machines needs to be separated by commas, which can be viewed through `ifconfig` or `ipconfig`. +* Password-free settings are required between different machines, and they can be pinged directly, otherwise the communication cannot be completed. +* The code, data, and running commands or scripts between different machines need to be consistent, and the set training commands or scripts need to be run on all machines. The first device of the first machine in the final `ip_list` is trainer0, and so on. +* The starting port of different machines may be different. It is recommended to set the same starting port for multi-machine running in different machines before starting the multi-machine task. The command is `export FLAGS_START_PORT=17000`, and the port value is recommended to be `10000~20000`. + + +## 2. Performance + +* On single-machine and 4-machine 8-card V100 machines, model training is performed based on [PP-YOLOE-s](../../configs/ppyoloe/ppyoloe_crn_s_300e_coco.yml). The model training time is as follows. 
+
+Machine | mAP | Time cost
+-|-|-
+single machine | 42.7% | 39h
+4 machines | 42.1% | 13h
diff --git "a/docs/tutorials/FAQ/FAQ\347\254\254\351\233\266\346\234\237.md" "b/docs/tutorials/FAQ/FAQ\347\254\254\351\233\266\346\234\237.md"
index 5d45d14049ea6e929d1d0df50ea72b013f296f65..4478495bff8e52ed1377ad8e09ee63a49ce606da 100644
--- "a/docs/tutorials/FAQ/FAQ\347\254\254\351\233\266\346\234\237.md"
+++ "b/docs/tutorials/FAQ/FAQ\347\254\254\351\233\266\346\234\237.md"
@@ -59,7 +59,7 @@ TrainReader:
     - Gt2YoloTarget: {anchor_masks: [[6, 7, 8], [3, 4, 5], [0, 1, 2]], anchors: [[10, 13], [16, 30], [33, 23], [30, 61], [62, 45], [59, 119], [116, 90], [156, 198], [373, 326]], downsample_ratios: [32, 16, 8]}
   # 训练时batch_size
   batch_size: 24
-  # 读取数据是是否乱序
+  # 读取数据是否乱序
  shuffle: true
   # 是否丢弃最后不能完整组成batch的数据
   drop_last: true
diff --git a/docs/tutorials/GETTING_STARTED.md b/docs/tutorials/GETTING_STARTED.md
index cdca9fbb8ec7022b11ab2f039acf6b6915957221..6ed4043a2f69fcbba19e757c6abdaa6b8507fc7b 100644
--- a/docs/tutorials/GETTING_STARTED.md
+++ b/docs/tutorials/GETTING_STARTED.md
@@ -11,13 +11,12 @@ instructions](INSTALL_cn.md).

 ## Data preparation

-- Please refer to [PrepareDataSet](PrepareDataSet.md) for data preparation
+- Please refer to [PrepareDetDataSet](./data/PrepareDetDataSet_en.md) for data preparation
 - Please set the data path for data configuration file in ```configs/datasets```

-
 ## Training & Evaluation & Inference
-PaddleDetection provides scripts for training, evalution and inference with various features according to different configure.
+PaddleDetection provides scripts for training, evaluation and inference with various features according to different configurations. For more distributed training details, see [DistributedTraining](./DistributedTraining_en.md).

 ```bash
 # training on single-GPU
@@ -26,6 +25,9 @@ python tools/train.py -c configs/faster_rcnn/faster_rcnn_r50_fpn_1x_coco.yml
 # training on multi-GPU
 export CUDA_VISIBLE_DEVICES=0,1,2,3,4,5,6,7
 python -m paddle.distributed.launch --gpus 0,1,2,3,4,5,6,7 tools/train.py -c configs/faster_rcnn/faster_rcnn_r50_fpn_1x_coco.yml
+# training on multi-machines and multi-GPUs
+export CUDA_VISIBLE_DEVICES=0,1,2,3,4,5,6,7
+fleetrun --ips="10.127.6.17,10.127.5.142,10.127.45.13,10.127.44.151" --selected_gpu 0,1,2,3,4,5,6,7 tools/train.py -c configs/faster_rcnn/faster_rcnn_r50_fpn_1x_coco.yml
 # GPU evaluation
 export CUDA_VISIBLE_DEVICES=0
 python tools/eval.py -c configs/faster_rcnn/faster_rcnn_r50_fpn_1x_coco.yml -o weights=https://paddledet.bj.bcebos.com/models/faster_rcnn_r50_fpn_1x_coco.pdparams
@@ -53,7 +55,7 @@ list below can be viewed by `--help`
 | --draw_threshold | infer | Threshold to reserve the result for visualization | 0.5 | such as `--draw_threshold 0.7` |
 | --infer_dir | infer | Directory for images to perform inference on | None | One of `infer_dir` and `infer_img` is requied |
 | --infer_img | infer | Image path | None | One of `infer_dir` and `infer_img` is requied, `infer_img` has higher priority over `infer_dir` |
-
+| --save_results | infer | Whether to save detection results to file | False | Optional |


@@ -128,7 +130,7 @@ list below can be viewed by `--help`
     --output_dir=infer_output/ \
     --draw_threshold=0.5 \
     -o weights=output/faster_rcnn_r50_fpn_1x_coco/model_final \
-    --use_vdl=Ture
+    --use_vdl=True
 ```

 `--draw_threshold` is an optional argument. Default is 0.5.
diff --git a/docs/tutorials/GETTING_STARTED_cn.md b/docs/tutorials/GETTING_STARTED_cn.md
index ea94a5e09b4ef04edd4c415808faea81bdc6fba1..c0230f514746344b0d2ad1f1c36f8c68c5c4e45d 100644
--- a/docs/tutorials/GETTING_STARTED_cn.md
+++ b/docs/tutorials/GETTING_STARTED_cn.md
@@ -12,7 +12,7 @@ PaddleDetection作为成熟的目标检测开发套件,提供了从数据准
 ## 2 准备数据
 目前PaddleDetection支持:COCO VOC WiderFace, MOT四种数据格式。
-- 首先按照[准备数据文档](PrepareDataSet.md) 准备数据。
+- 首先按照[准备数据文档](./data/PrepareDetDataSet.md) 准备数据。
 - 然后设置`configs/datasets`中相应的coco或voc等数据配置文件中的数据路径。
 - 在本项目中,我们使用路标识别数据集
 ```bash
@@ -83,7 +83,7 @@ ppyolov2_reader.yml 主要说明数据读取器配置,如batch size,并发
 * 关于数据的路径修改说明 在修改配置文件中,用户如何实现自定义数据集是非常关键的一步,如何定义数据集请参考[如何自定义数据集](https://aistudio.baidu.com/aistudio/projectdetail/1917140)
 * 默认学习率是适配多GPU训练(8x GPU),若使用单GPU训练,须对应调整学习率(例如,除以8)
-* 更多使用问题,请参考[FAQ](FAQ.md)
+* 更多使用问题,请参考[FAQ](FAQ)
 ## 4 训练
@@ -99,6 +99,15 @@ python tools/train.py -c configs/yolov3/yolov3_mobilenet_v1_roadsign.yml
 export CUDA_VISIBLE_DEVICES=0,1,2,3,4,5,6,7 #windows和Mac下不需要执行该命令
 python -m paddle.distributed.launch --gpus 0,1,2,3,4,5,6,7 tools/train.py -c configs/yolov3/yolov3_mobilenet_v1_roadsign.yml
 ```
+
+* [GPU多机多卡训练](./DistributedTraining_cn.md)
+```bash
+fleetrun \
+--ips="10.127.6.17,10.127.5.142,10.127.45.13,10.127.44.151" \
+--gpus 0,1,2,3,4,5,6,7 \
+tools/train.py -c configs/yolov3/yolov3_mobilenet_v1_roadsign.yml
+```
+
 * Fine-tune其他任务
 使用预训练模型fine-tune其他任务时,可以直接加载预训练模型,形状不匹配的参数将自动忽略,例如:
@@ -161,7 +170,7 @@ python tools/eval.py -c configs/yolov3/yolov3_mobilenet_v1_roadsign.yml \
    --output_dir=infer_output/ \
    --draw_threshold=0.5 \
    -o weights=output/yolov3_mobilenet_v1_roadsign/model_final \
-   --use_vdl=Ture
+   --use_vdl=True
 ```
 `--draw_threshold` 是个可选参数. 根据 [NMS](https://ieeexplore.ieee.org/document/1699659) 的计算,不同阈值会产生不同的结果
@@ -215,7 +224,7 @@ visualdl --logdir vdl_dir/scalar/
 | --draw_threshold | infer | 可视化时分数阈值 | 0.5 | 例如`--draw_threshold=0.7` |
 | --infer_dir | infer | 用于预测的图片文件夹路径 | None | `--infer_img`和`--infer_dir`必须至少设置一个 |
 | --infer_img | infer | 用于预测的图片路径 | None | `--infer_img`和`--infer_dir`必须至少设置一个,`infer_img`具有更高优先级 |
-| --save_txt | infer | 是否在文件夹下将图片的预测结果保存到文本文件中 | False | 可选 |
+| --save_results | infer | 是否在文件夹下将图片的预测结果保存到文件中 | False | 可选 |
 ## 8 模型导出
@@ -245,7 +254,7 @@ PaddleDetection提供了PaddleInference、PaddleServing、PaddleLite多种部署
 ```bash
 python deploy/python/infer.py --model_dir=./output_inference/yolov3_mobilenet_v1_roadsign --image_file=demo/road554.png --device=GPU
 ```
-* 同时`infer.py`提供了丰富的接口,用户进行接入视频文件、摄像头进行预测,更多内容请参考[Python端预测部署](../../deploy/python.md)
+* 同时`infer.py`提供了丰富的接口,用户可接入视频文件、摄像头进行预测,更多内容请参考[Python端预测部署](../../deploy/python)
 ### PaddleDetection支持的部署形式说明
 |形式|语言|教程|设备/平台|
 |-|-|-|-|
diff --git a/docs/tutorials/INSTALL.md b/docs/tutorials/INSTALL.md
index 01cc10b5fb92cad6dbafa84f3371f6ee5cde6cfb..418289b0d4b34c3e4b3df05adb52c0dc277e33dd 100644
--- a/docs/tutorials/INSTALL.md
+++ b/docs/tutorials/INSTALL.md
@@ -22,7 +22,8 @@ Dependency of PaddleDetection and PaddlePaddle:
 | PaddleDetection version | PaddlePaddle version | tips |
 | :----------------: | :---------------: | :-------: |
-| develop | >= 2.2.0rc | Dygraph mode is set as default |
+| develop | >= 2.2.2 | Dygraph mode is set as default |
+| release/2.4 | >= 2.2.2 | Dygraph mode is set as default |
 | release/2.3 | >= 2.2.0rc | Dygraph mode is set as default |
 | release/2.2 | >= 2.1.2 | Dygraph mode is set as default |
 | release/2.1 | >= 2.1.0 | Dygraph mode is set as default |
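Before continuing with the installation steps below, it can save time to confirm that the installed PaddlePaddle matches the version table above. A quick check from Python:

```python
import paddle

print(paddle.__version__)  # should satisfy the table, e.g. >= 2.2.2 for release/2.4
paddle.utils.run_check()   # built-in sanity check of the installation (and GPUs, if any)
```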
@@ -109,6 +110,16 @@
 Ran 7 tests in 12.816s
 OK
 ```
+## Use built Docker images
+
+> If you do not have a Docker environment, please refer to [Docker](https://www.docker.com/).
+
+We provide Docker images containing the latest PaddleDetection code with all environment and package dependencies pre-installed. All you have to do is **pull and run the Docker image**; you can then use PaddleDetection without any extra setup.
+
+Get these images and the corresponding usage guide on [Docker Hub](https://hub.docker.com/repository/docker/paddlecloud/paddledetection), where CPU, GPU, and ROCm versions are available.
+
+If you have customized requirements for automatically building Docker images, see the GitHub repo [PaddlePaddle/PaddleCloud](https://github.com/PaddlePaddle/PaddleCloud/tree/main/tekton).
+
 ## Inference demo
 **Congratulations!** Now you have installed PaddleDetection successfully and can try our inference demo:
diff --git a/docs/tutorials/INSTALL_cn.md b/docs/tutorials/INSTALL_cn.md
index ee8672235cb7edde98b1f49f52f24e4cd2f04fad..4afa7600f3dcd7ec1b3feb9d1cf0285b4de3bde5 100644
--- a/docs/tutorials/INSTALL_cn.md
+++ b/docs/tutorials/INSTALL_cn.md
@@ -18,8 +18,9 @@ PaddleDetection 依赖 PaddlePaddle 版本关系:
 | PaddleDetection版本 | PaddlePaddle版本 | 备注 |
 | :------------------: | :---------------: | :-------: |
-| develop | >= 2.2.0rc | 默认使用动态图模式 |
-| release/2.3 | >= 2.2.0rc | 默认使用动态图模式 |
+| develop | >= 2.2.2 | 默认使用动态图模式 |
+| release/2.4 | >= 2.2.2 | 默认使用动态图模式 |
+| release/2.3 | >= 2.2.0rc | 默认使用动态图模式 |
 | release/2.2 | >= 2.1.2 | 默认使用动态图模式 |
 | release/2.1 | >= 2.1.0 | 默认使用动态图模式 |
 | release/2.0 | >= 2.0.1 | 默认使用动态图模式 |
@@ -102,6 +103,14 @@
 Ran 7 tests in 12.816s
 OK
 ```
+## 使用Docker镜像
+> 如果您没有Docker运行环境,请参考[Docker官网](https://www.docker.com/)进行安装。
+
+我们提供了包含最新 PaddleDetection 代码的docker镜像,并预先安装好了所有的环境和库依赖,您只需要**拉取docker镜像**,然后**运行docker镜像**,无需其他任何额外操作,即可开始使用PaddleDetection的所有功能。
+
+在[Docker Hub](https://hub.docker.com/repository/docker/paddlecloud/paddledetection)中获取这些镜像及相应的使用指南,包括CPU、GPU、ROCm版本。
+如果您对自动化制作docker镜像感兴趣,或有自定义需求,请访问[PaddlePaddle/PaddleCloud](https://github.com/PaddlePaddle/PaddleCloud/tree/main/tekton)做进一步了解。
+
 ## 快速体验
 **恭喜!** 您已经成功安装了PaddleDetection,接下来快速体验目标检测效果
diff --git a/docs/tutorials/config_annotation/faster_rcnn_r50_fpn_1x_coco_annotation.md b/docs/tutorials/config_annotation/faster_rcnn_r50_fpn_1x_coco_annotation.md
index 460af362bff54f708cc49f2806434a77582c9aee..32b9024bfa025f008ff236214c68ffe8c1d7b5ec 100644
--- a/docs/tutorials/config_annotation/faster_rcnn_r50_fpn_1x_coco_annotation.md
+++ b/docs/tutorials/config_annotation/faster_rcnn_r50_fpn_1x_coco_annotation.md
@@ -90,7 +90,7 @@ TrainReader:
   - PadBatch: {pad_to_stride: 32}
   # 训练时batch_size
   batch_size: 1
-  # 读取数据是是否乱序
+  # 读取数据是否乱序
   shuffle: true
   # 是否丢弃最后不能完整组成batch的数据
   drop_last: true
@@ -110,7 +110,7 @@ EvalReader:
   - PadBatch: {pad_to_stride: 32}
   # 评估时batch_size
   batch_size: 1
-  # 读取数据是是否乱序
+  # 读取数据是否乱序
   shuffle: false
   # 是否丢弃最后不能完整组成batch的数据
   drop_last: false
@@ -130,7 +130,7 @@ TestReader:
   - PadBatch: {pad_to_stride: 32}
   # 测试时batch_size
   batch_size: 1
-  # 读取数据是是否乱序
+  # 读取数据是否乱序
   shuffle: false
   # 是否丢弃最后不能完整组成batch的数据
   drop_last: false
diff --git a/docs/tutorials/config_annotation/ppyolo_r50vd_dcn_1x_coco_annotation.md b/docs/tutorials/config_annotation/ppyolo_r50vd_dcn_1x_coco_annotation.md
index 9c7985fd26971b5e412d7a920a0d46eed62204e5..2cbc188dc345c84ca619284baaf610d757cc3414 100644
--- a/docs/tutorials/config_annotation/ppyolo_r50vd_dcn_1x_coco_annotation.md
+++ b/docs/tutorials/config_annotation/ppyolo_r50vd_dcn_1x_coco_annotation.md
@@ -102,7 +102,7 @@ TrainReader:
   - Gt2YoloTarget: {anchor_masks: [[6, 7, 8],
[3, 4, 5], [0, 1, 2]], anchors: [[10, 13], [16, 30], [33, 23], [30, 61], [62, 45], [59, 119], [116, 90], [156, 198], [373, 326]], downsample_ratios: [32, 16, 8]} # 训练时batch_size batch_size: 24 - # 读取数据是是否乱序 + # 读取数据是否乱序 shuffle: true # 是否丢弃最后不能完整组成batch的数据 drop_last: true diff --git a/docs/tutorials/data/DetAnnoTools.md b/docs/tutorials/data/DetAnnoTools.md new file mode 100644 index 0000000000000000000000000000000000000000..fd7c8fee9124cddc2146cda252a11a9c95bf679f --- /dev/null +++ b/docs/tutorials/data/DetAnnoTools.md @@ -0,0 +1,279 @@ +简体中文 | [English](DetAnnoTools_en.md) + + + +# 目标检测标注工具 + +## 目录 + +[LabelMe](#LabelMe) + +* [使用说明](#使用说明) + * [安装](#LabelMe安装) + * [图片标注过程](#LabelMe图片标注过程) +* [标注格式](#LabelMe标注格式) + * [导出数据格式](#LabelMe导出数据格式) + * [格式转化总结](#格式转化总结) + * [标注文件(json)-->VOC](#标注文件(json)-->VOC数据集) + * [标注文件(json)-->COCO](#标注文件(json)-->COCO数据集) + +[LabelImg](#LabelImg) + +* [使用说明](#使用说明) + * [LabelImg安装](#LabelImg安装) + * [安装注意事项](#安装注意事项) + * [图片标注过程](#LabelImg图片标注过程) +* [标注格式](#LabelImg标注格式) + * [导出数据格式](#LabelImg导出数据格式) + * [格式转换注意事项](#格式转换注意事项) + + + +## [LabelMe](https://github.com/wkentaro/labelme) + +### 使用说明 + +#### LabelMe安装 + +具体安装操作请参考[LabelMe官方教程](https://github.com/wkentaro/labelme)中的Installation + +
+<details>
+<summary>Ubuntu</summary>
+
+```
+sudo apt-get install labelme
+
+# or
+sudo pip3 install labelme
+
+# or install standalone executable from:
+# https://github.com/wkentaro/labelme/releases
+```
+
+</details>
    + +
+<details>
+<summary>macOS</summary>
+
+```
+brew install pyqt  # maybe pyqt5
+pip install labelme
+
+# or
+brew install wkentaro/labelme/labelme  # command line interface
+# brew install --cask wkentaro/labelme/labelme  # app
+
+# or install standalone executable/app from:
+# https://github.com/wkentaro/labelme/releases
+```
+
+</details>
+
+
+推荐使用Anaconda的安装方式
+
+```
+conda create --name=labelme python=3
+conda activate labelme
+pip install pyqt5
+pip install labelme
+```
+
+
+
+#### LabelMe图片标注过程
+
+启动labelme后,选择图片文件或者图片所在文件夹
+
+左侧编辑栏选择`create polygons` 绘制标注区域如下图所示(右击图像区域可以选择不同的标注形状),绘制好区域后按下回车,弹出新的框填入标注区域对应的标签,如:people
+
+左侧菜单栏点击保存,生成`json`形式的**标注文件**
+
+![](https://media3.giphy.com/media/XdnHZgge5eynRK3ATK/giphy.gif?cid=790b7611192e4c0ec2b5e6990b6b0f65623154ffda66b122&rid=giphy.gif&ct=g)
+
+
+
+### LabelMe标注格式
+
+#### LabelMe导出数据格式
+
+```
+#生成标注文件
+png/jpeg/jpg-->labelme标注-->json
+```
+
+
+
+#### 格式转化总结
+
+```
+#标注文件转化为VOC数据集格式
+json-->labelme2voc.py-->VOC数据集
+
+#标注文件转化为COCO数据集格式
+json-->labelme2coco.py-->COCO数据集
+```
+
+
+
+#### 标注文件(json)-->VOC数据集
+
+使用[官方给出的labelme2voc.py](https://github.com/wkentaro/labelme/blob/main/examples/bbox_detection/labelme2voc.py)这份脚本
+
+下载该脚本,在命令行中使用
+
+```bash
+python labelme2voc.py data_annotated(标注文件所在文件夹) data_dataset_voc(输出文件夹) --labels labels.txt
+```
+
+运行后,在指定的输出文件夹中会生成如下的目录
+
+```
+# It generates:
+#   - data_dataset_voc/JPEGImages
+#   - data_dataset_voc/Annotations
+#   - data_dataset_voc/AnnotationsVisualization
+```
+
+
+
+#### 标注文件(json)-->COCO数据集
+
+使用[PaddleDetection提供的x2coco.py](https://github.com/PaddlePaddle/PaddleDetection/blob/develop/tools/x2coco.py) 将labelme标注的数据转换为COCO数据集形式
+
+```bash
+python tools/x2coco.py \
+                --dataset_type labelme \
+                --json_input_dir ./labelme_annos/ \
+                --image_input_dir ./labelme_imgs/ \
+                --output_dir ./cocome/ \
+                --train_proportion 0.8 \
+                --val_proportion 0.2 \
+                --test_proportion 0.0
+```
+
+用户数据集转成COCO数据后目录结构如下(注意数据集中路径名、文件名尽量不要使用中文,避免中文编码问题导致出错):
+
+```
+dataset/xxx/
+├── annotations
+│   ├── train.json  # coco数据的标注文件
+│   ├── valid.json  # coco数据的标注文件
+├── images
+│   ├── xxx1.jpg
+│   ├── xxx2.jpg
+│   ├── xxx3.jpg
+│   |   ...
+...
+```
+
+
+
+## [LabelImg](https://github.com/tzutalin/labelImg)
+
+### 使用说明
+
+#### LabelImg安装
+
+安装操作请参考[LabelImg官方教程](https://github.com/tzutalin/labelImg)
+
+<details>
+<summary>Ubuntu</summary>
+
+```
+sudo apt-get install pyqt5-dev-tools
+sudo pip3 install -r requirements/requirements-linux-python3.txt
+make qt5py3
+python3 labelImg.py
+python3 labelImg.py [IMAGE_PATH] [PRE-DEFINED CLASS FILE]
+```
+
+</details>
    + +
+<details>
+<summary>macOS</summary>
+
+```
+brew install qt  # Install qt-5.x.x by Homebrew
+brew install libxml2
+
+# or using pip
+pip3 install pyqt5 lxml  # Install qt and lxml by pip
+
+make qt5py3
+python3 labelImg.py
+python3 labelImg.py [IMAGE_PATH] [PRE-DEFINED CLASS FILE]
+```
+
+</details>
+
+
+推荐使用Anaconda的安装方式
+
+首先下载并进入 [labelImg](https://github.com/tzutalin/labelImg#labelimg) 的目录
+
+```
+conda install pyqt=5
+conda install -c anaconda lxml
+pyrcc5 -o libs/resources.py resources.qrc
+python labelImg.py
+python labelImg.py [IMAGE_PATH] [PRE-DEFINED CLASS FILE]
+```
+
+
+
+#### 安装注意事项
+
+以Anaconda安装方式为例,配置过程比LabelMe稍繁琐一些
+
+启动方式是通过python运行脚本`python labelImg.py <图片路径>`
+
+
+
+#### LabelImg图片标注过程
+
+启动labelImg后,选择图片文件或者图片所在文件夹
+
+左侧编辑栏选择`创建区块`绘制标注区,在弹出的新框中选择对应的标签
+
+左侧菜单栏点击保存,可以选择VOC/YOLO/CreateML三种类型的标注文件
+
+
+
+![](https://user-images.githubusercontent.com/34162360/177526022-fd9c63d8-e476-4b63-ae02-76d032bb7656.gif)
+
+
+
+### LabelImg标注格式
+
+#### LabelImg导出数据格式
+
+```
+#生成标注文件
+png/jpeg/jpg-->labelImg标注-->xml/txt/json
+```
+
+
+
+#### 格式转换注意事项
+
+**PaddleDetection支持VOC或COCO格式的数据**,经LabelImg标注导出后的标注文件,需要修改为**VOC或COCO格式**,调整说明可以参考[准备训练数据](./PrepareDetDataSet.md#%E5%87%86%E5%A4%87%E8%AE%AD%E7%BB%83%E6%95%B0%E6%8D%AE)
+
diff --git a/docs/tutorials/data/DetAnnoTools_en.md b/docs/tutorials/data/DetAnnoTools_en.md
new file mode 100644
index 0000000000000000000000000000000000000000..7b948d213fc0f1f49c6ee21276220ee94f3496c9
--- /dev/null
+++ b/docs/tutorials/data/DetAnnoTools_en.md
@@ -0,0 +1,271 @@
+[简体中文](DetAnnoTools.md) | English
+
+
+
+# Object Detection Annotation Tools
+
+## Contents
+
+[LabelMe](#LabelMe)
+
+* [Instruction](#Instruction-of-LabelMe)
+  * [Installation](#Installation)
+  * [Annotation of Images](#Annotation-of-images-in-LabelMe)
+* [Annotation Format](#Annotation-Format-of-LabelMe)
+  * [Export Format](#Export-Format-of-LabelMe)
+  * [Summary of Format Conversion](#Summary-of-Format-Conversion)
+  * [Annotation file(json)—>VOC Dataset](#annotation-filejsonvoc-dataset)
+  * [Annotation file(json)—>COCO Dataset](#annotation-filejsoncoco-dataset)
+
+[LabelImg](#LabelImg)
+
+* [Instruction](#Instruction-of-LabelImg)
+  * [Installation](#Installation-of-LabelImg)
+  * [Installation Notes](#Installation-Notes)
+  * [Annotation of images](#Annotation-of-images-in-LabelImg)
+* [Annotation Format](#Annotation-Format-of-LabelImg)
+  * [Export Format](#Export-Format-of-LabelImg)
+  * [Notes of Format Conversion](#Notes-of-Format-Conversion)
+
+
+
+## [LabelMe](https://github.com/wkentaro/labelme)
+
+### Instruction of LabelMe
+
+#### Installation
+
+Please refer to [the LabelMe GitHub repository](https://github.com/wkentaro/labelme) for installation details.
+
+<details>
+<summary>Ubuntu</summary>
+
+```
+sudo apt-get install labelme
+
+# or
+sudo pip3 install labelme
+
+# or install standalone executable from:
+# https://github.com/wkentaro/labelme/releases
+```
+
+</details>
    + +
+<details>
+<summary>macOS</summary>
+
+```
+brew install pyqt  # maybe pyqt5
+pip install labelme
+
+# or
+brew install wkentaro/labelme/labelme  # command line interface
+# brew install --cask wkentaro/labelme/labelme  # app
+
+# or install standalone executable/app from:
+# https://github.com/wkentaro/labelme/releases
+```
+
+</details>
+
+
+We recommend installing via Anaconda:
+
+```
+conda create --name=labelme python=3
+conda activate labelme
+pip install pyqt5
+pip install labelme
+```
+
+
+
+#### Annotation of Images in LabelMe
+
+After starting labelme, select an image or a folder of images.
+
+Select `create polygons` in the toolbar. Draw an annotation area as shown in the following GIF (you can right-click on the image to select a different annotation shape). When finished, press the Enter/Return key, then fill in the corresponding label in the popup box, such as "people".
+
+Click the save button in the toolbar, and it will generate an annotation file in json.
+
+![](https://media3.giphy.com/media/XdnHZgge5eynRK3ATK/giphy.gif?cid=790b7611192e4c0ec2b5e6990b6b0f65623154ffda66b122&rid=giphy.gif&ct=g)
+
+
+
+### Annotation Format of LabelMe
+
+#### Export Format of LabelMe
+
+```
+#generate an annotation file
+png/jpeg/jpg-->labelme-->json
+```
+
+
+
+#### Summary of Format Conversion
+
+```
+#convert annotation file to VOC dataset format
+json-->labelme2voc.py-->VOC dataset
+
+#convert annotation file to COCO dataset format
+json-->labelme2coco.py-->COCO dataset
+```
+
+
+
+#### Annotation file(json)—>VOC Dataset
+
+Use the script [labelme2voc.py](https://github.com/wkentaro/labelme/blob/main/examples/bbox_detection/labelme2voc.py) from the command line:
+
+```bash
+python labelme2voc.py data_annotated(annotation folder) data_dataset_voc(output folder) --labels labels.txt
+```
+
+Then it will generate the following directories:
+
+```
+# It generates:
+#   - data_dataset_voc/JPEGImages
+#   - data_dataset_voc/Annotations
+#   - data_dataset_voc/AnnotationsVisualization
+```
+
+
+
+#### Annotation file(json)—>COCO Dataset
+
+Convert the data annotated by LabelMe to the COCO dataset format with the script [x2coco.py](https://github.com/PaddlePaddle/PaddleDetection/blob/develop/tools/x2coco.py) provided by PaddleDetection.
+
+```bash
+python tools/x2coco.py \
+                --dataset_type labelme \
+                --json_input_dir ./labelme_annos/ \
+                --image_input_dir ./labelme_imgs/ \
+                --output_dir ./cocome/ \
+                --train_proportion 0.8 \
+                --val_proportion 0.2 \
+                --test_proportion 0.0
+```
+
+After the user dataset is converted to COCO data, the directory structure is as follows (avoid Chinese characters in path and file names to prevent errors caused by encoding problems):
+
+```
+dataset/xxx/
+├── annotations
+│   ├── train.json  # Annotation file of coco data
+│   ├── valid.json  # Annotation file of coco data
+├── images
+│   ├── xxx1.jpg
+│   ├── xxx2.jpg
+│   ├── xxx3.jpg
+│   |   ...
+...
+```
+
+
+
+## [LabelImg](https://github.com/tzutalin/labelImg)
+
+### Instruction
+
+#### Installation of LabelImg
+
+Please refer to [the LabelImg GitHub repository](https://github.com/tzutalin/labelImg) for installation details.
+
+<details>
+<summary>Ubuntu</summary>
+
+```
+sudo apt-get install pyqt5-dev-tools
+sudo pip3 install -r requirements/requirements-linux-python3.txt
+make qt5py3
+python3 labelImg.py
+python3 labelImg.py [IMAGE_PATH] [PRE-DEFINED CLASS FILE]
+```
+
+</details>
    + +
+<details>
+<summary>macOS</summary>
+
+```
+brew install qt  # Install qt-5.x.x by Homebrew
+brew install libxml2
+
+# or using pip
+pip3 install pyqt5 lxml  # Install qt and lxml by pip
+
+make qt5py3
+python3 labelImg.py
+python3 labelImg.py [IMAGE_PATH] [PRE-DEFINED CLASS FILE]
+```
+
+</details>
+
+
+We recommend installing via Anaconda.
+
+Download and go to the folder of [labelImg](https://github.com/tzutalin/labelImg#labelimg):
+
+```
+conda install pyqt=5
+conda install -c anaconda lxml
+pyrcc5 -o libs/resources.py resources.qrc
+python labelImg.py
+python labelImg.py [IMAGE_PATH] [PRE-DEFINED CLASS FILE]
+```
+
+
+
+#### Installation Notes
+
+Start LabelImg by running the script with Python: `python labelImg.py [IMAGE_PATH]`
+
+#### Annotation of images in LabelImg
+
+After starting LabelImg, select an image or a folder of images.
+
+Select `Create RectBox` in the toolbar. Draw an annotation area as shown in the following GIF. When finished, select the corresponding label in the popup box. The annotation can then be saved in one of three formats: VOC, YOLO, or CreateML.
+
+
+
+![](https://user-images.githubusercontent.com/34162360/177526022-fd9c63d8-e476-4b63-ae02-76d032bb7656.gif)
+
+
+
+### Annotation Format of LabelImg
+
+#### Export Format of LabelImg
+
+```
+#generate annotation files
+png/jpeg/jpg-->labelImg-->xml/txt/json
+```
+
+
+
+#### Notes of Format Conversion
+
+**PaddleDetection supports data in VOC or COCO format.** Annotation files exported by LabelImg need to be converted to VOC or COCO format; see [PrepareDetDataSet](./PrepareDetDataSet_en.md) for details.
+
diff --git a/docs/tutorials/data/KeyPointAnnoTools.md b/docs/tutorials/data/KeyPointAnnoTools.md
new file mode 100755
index 0000000000000000000000000000000000000000..678b94ac7571a76048dc9a232bd288e579a0483a
--- /dev/null
+++ b/docs/tutorials/data/KeyPointAnnoTools.md
@@ -0,0 +1,165 @@
+简体中文 | [English](KeyPointAnnoTools_en.md)
+
+# 关键点检测标注工具
+
+## 目录
+
+[LabelMe](#LabelMe)
+
+- [使用说明](#使用说明)
+  - [安装](#安装)
+  - [关键点数据说明](#关键点数据说明)
+  - [图片标注过程](#图片标注过程)
+- [标注格式](#标注格式)
+  - [导出数据格式](#导出数据格式)
+  - [格式转化总结](#格式转化总结)
+  - [标注文件(json)-->COCO](#标注文件(json)-->COCO数据集)
+
+
+
+## [LabelMe](https://github.com/wkentaro/labelme)
+
+### 使用说明
+
+#### 安装
+
+具体安装操作请参考[LabelMe官方教程](https://github.com/wkentaro/labelme)中的Installation
+
+<details>
+<summary>Ubuntu</summary>
+
+```
+sudo apt-get install labelme
+
+# or
+sudo pip3 install labelme
+
+# or install standalone executable from:
+# https://github.com/wkentaro/labelme/releases
+```
+
+</details>
    + +
+<details>
+<summary>macOS</summary>
+
+```
+brew install pyqt  # maybe pyqt5
+pip install labelme
+
+# or
+brew install wkentaro/labelme/labelme  # command line interface
+# brew install --cask wkentaro/labelme/labelme  # app
+
+# or install standalone executable/app from:
+# https://github.com/wkentaro/labelme/releases
+```
+
+</details>
+
+
+推荐使用Anaconda的安装方式
+
+```
+conda create --name=labelme python=3
+conda activate labelme
+pip install pyqt5
+pip install labelme
+```
+
+
+
+#### 关键点数据说明
+
+以COCO数据集为例,共需采集17个关键点
+
+```
+keypoint indexes:
+    0: 'nose',
+    1: 'left_eye',
+    2: 'right_eye',
+    3: 'left_ear',
+    4: 'right_ear',
+    5: 'left_shoulder',
+    6: 'right_shoulder',
+    7: 'left_elbow',
+    8: 'right_elbow',
+    9: 'left_wrist',
+    10: 'right_wrist',
+    11: 'left_hip',
+    12: 'right_hip',
+    13: 'left_knee',
+    14: 'right_knee',
+    15: 'left_ankle',
+    16: 'right_ankle'
+```
+
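Given the index table above, the left/right keypoint pairs used for horizontal-flip augmentation can be derived directly from the names. The sketch below is illustrative only (it is not PaddleDetection's internal implementation), but its output matches the usual `flip_perm` setting for COCO keypoints:

```python
# COCO keypoint names, in the index order listed above.
names = ["nose", "left_eye", "right_eye", "left_ear", "right_ear",
         "left_shoulder", "right_shoulder", "left_elbow", "right_elbow",
         "left_wrist", "right_wrist", "left_hip", "right_hip",
         "left_knee", "right_knee", "left_ankle", "right_ankle"]

# Pair every "left_*" keypoint with its "right_*" counterpart.
flip_pairs = [(i, names.index(n.replace("left_", "right_")))
              for i, n in enumerate(names) if n.startswith("left_")]
print(flip_pairs)  # [(1, 2), (3, 4), (5, 6), (7, 8), (9, 10), (11, 12), (13, 14), (15, 16)]
```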
+
+
+#### 图片标注过程
+
+启动labelme后,选择图片文件或者图片所在文件夹
+
+左侧编辑栏选择`create polygons`,右击图像区域选择标注形状,绘制好关键点后按下回车,弹出新的框填入标注关键点对应的标签
+
+左侧菜单栏点击保存,生成`json`形式的**标注文件**
+
+![操作说明](https://user-images.githubusercontent.com/34162360/178250648-29ee781a-676b-419c-83b1-de1e4e490526.gif)
+
+
+
+### 标注格式
+
+#### 导出数据格式
+
+```
+#生成标注文件
+png/jpeg/jpg-->labelme标注-->json
+```
+
+
+
+#### 格式转化总结
+
+```
+#标注文件转化为COCO数据集格式
+json-->labelme2coco.py-->COCO数据集
+```
+
+
+
+#### 标注文件(json)-->COCO数据集
+
+使用[PaddleDetection提供的x2coco.py](https://github.com/PaddlePaddle/PaddleDetection/blob/develop/tools/x2coco.py) 将labelme标注的数据转换为COCO数据集形式
+
+```bash
+python tools/x2coco.py \
+                --dataset_type labelme \
+                --json_input_dir ./labelme_annos/ \
+                --image_input_dir ./labelme_imgs/ \
+                --output_dir ./cocome/ \
+                --train_proportion 0.8 \
+                --val_proportion 0.2 \
+                --test_proportion 0.0
+```
+
+用户数据集转成COCO数据后目录结构如下(注意数据集中路径名、文件名尽量不要使用中文,避免中文编码问题导致出错):
+
+```
+dataset/xxx/
+├── annotations
+│   ├── train.json  # coco数据的标注文件
+│   ├── valid.json  # coco数据的标注文件
+├── images
+│   ├── xxx1.jpg
+│   ├── xxx2.jpg
+│   ├── xxx3.jpg
+│   |   ...
+...
+```
+
diff --git a/docs/tutorials/data/KeyPointAnnoTools_en.md b/docs/tutorials/data/KeyPointAnnoTools_en.md
new file mode 100755
index 0000000000000000000000000000000000000000..3ef0548426d79cfd89267cbf6e8087e5dfa407dd
--- /dev/null
+++ b/docs/tutorials/data/KeyPointAnnoTools_en.md
@@ -0,0 +1,165 @@
+[简体中文](KeyPointAnnoTools.md) | English
+
+# Key Points Detection Annotation Tool
+
+## Contents
+
+[LabelMe](#LabelMe)
+
+- [Instruction](#Instruction)
+  - [Installation](#Installation)
+  - [Notes of Key Points Data](#Notes-of-Key-Points-Data)
+  - [Annotation of LabelMe](#Annotation-of-LabelMe)
+- [Annotation Format](#Annotation-Format)
+  - [Data Export Format](#Data-Export-Format)
+  - [Summary of Format Conversion](#Summary-of-Format-Conversion)
+  - [Annotation file(json)—>COCO Dataset](#annotation-filejsoncoco-dataset)
+
+
+
+## [LabelMe](https://github.com/wkentaro/labelme)
+
+### Instruction
+
+#### Installation
+
+Please refer to [the LabelMe GitHub repository](https://github.com/wkentaro/labelme) for installation details.
+
+<details>
+<summary>Ubuntu</summary>
+
+```
+sudo apt-get install labelme
+
+# or
+sudo pip3 install labelme
+
+# or install standalone executable from:
+# https://github.com/wkentaro/labelme/releases
+```
+
+</details>
    + +
+<details>
+<summary>macOS</summary>
+
+```
+brew install pyqt  # maybe pyqt5
+pip install labelme
+
+# or
+brew install wkentaro/labelme/labelme  # command line interface
+# brew install --cask wkentaro/labelme/labelme  # app
+
+# or install standalone executable/app from:
+# https://github.com/wkentaro/labelme/releases
+```
+
+</details>
+
+
+We recommend installing via Anaconda:
+
+```
+conda create --name=labelme python=3
+conda activate labelme
+pip install pyqt5
+pip install labelme
+```
+
+
+
+#### Notes of Key Points Data
+
+The COCO format annotates 17 keypoints per person:
+
+```
+keypoint indexes:
+    0: 'nose',
+    1: 'left_eye',
+    2: 'right_eye',
+    3: 'left_ear',
+    4: 'right_ear',
+    5: 'left_shoulder',
+    6: 'right_shoulder',
+    7: 'left_elbow',
+    8: 'right_elbow',
+    9: 'left_wrist',
+    10: 'right_wrist',
+    11: 'left_hip',
+    12: 'right_hip',
+    13: 'left_knee',
+    14: 'right_knee',
+    15: 'left_ankle',
+    16: 'right_ankle'
+```
+
+
+
+#### Annotation of LabelMe
+
+After starting labelme, select an image or a folder of images.
+
+Select `create polygons` in the toolbar. Draw an annotation area as shown in the following GIF (you can right-click on the image to select a different annotation shape). When finished, press the Enter/Return key, then fill in the corresponding label in the popup box, such as "people".
+
+Click the save button in the toolbar, and it will generate an annotation file in json.
+
+![操作说明](https://user-images.githubusercontent.com/34162360/178250648-29ee781a-676b-419c-83b1-de1e4e490526.gif)
+
+
+
+### Annotation Format
+
+#### Data Export Format
+
+```
+#generate an annotation file
+png/jpeg/jpg-->labelme-->json
+```
+
+
+
+#### Summary of Format Conversion
+
+```
+#convert annotation file to COCO dataset format
+json-->labelme2coco.py-->COCO dataset
+```
+
+
+
+#### Annotation file(json)—>COCO Dataset
+
+Convert the data annotated by LabelMe to the COCO dataset format with the script [x2coco.py](https://github.com/PaddlePaddle/PaddleDetection/blob/develop/tools/x2coco.py).
+
+```bash
+python tools/x2coco.py \
+                --dataset_type labelme \
+                --json_input_dir ./labelme_annos/ \
+                --image_input_dir ./labelme_imgs/ \
+                --output_dir ./cocome/ \
+                --train_proportion 0.8 \
+                --val_proportion 0.2 \
+                --test_proportion 0.0
+```
+
+After the user dataset is converted to COCO data, the directory structure is as follows (avoid Chinese characters in path and file names to prevent errors caused by encoding problems):
+
+```
+dataset/xxx/
+├── annotations
+│   ├── train.json  # Annotation file of coco data
+│   ├── valid.json  # Annotation file of coco data
+├── images
+│   ├── xxx1.jpg
+│   ├── xxx2.jpg
+│   ├── xxx3.jpg
+│   |   ...
+...
+```
+
diff --git a/docs/tutorials/data/MOTAnnoTools.md b/docs/tutorials/data/MOTAnnoTools.md
new file mode 100644
index 0000000000000000000000000000000000000000..433a1a2808cf05cecbffe80e5d8297f2224d3bfb
--- /dev/null
+++ b/docs/tutorials/data/MOTAnnoTools.md
@@ -0,0 +1,75 @@
+# 多目标跟踪标注工具
+
+
+
+## 目录
+
+* [前期准备](#前期准备)
+* [SDE数据集](#SDE数据集)
+  * [LabelMe](#LabelMe)
+  * [LabelImg](#LabelImg)
+* [JDE数据集](#JDE数据集)
+  * [DarkLabel](#DarkLabel)
+  * [标注格式](#标注格式)
+
+
+### 前期准备
+
+请先查看[多目标跟踪数据集准备](PrepareMOTDataSet.md)确定MOT模型选型和MOT数据集的类型。
+通常综合考虑数据标注成本和模型精度速度的平衡,更推荐使用SDE系列数据集,以及SDE系列模型的ByteTrack或OC-SORT。SDE系列数据集的标注工具与目标检测任务是一致的。
+
+### SDE数据集
+SDE数据集是纯检测标注的数据集,用户自定义数据集可以参照[DET数据准备文档](./PrepareDetDataSet.md)准备。
+
+#### LabelMe
+LabelMe的使用可以参考[DetAnnoTools](DetAnnoTools.md)
+
+#### LabelImg
+LabelImg的使用可以参考[DetAnnoTools](DetAnnoTools.md)
+
+
+### JDE数据集
+JDE数据集是同时有检测和ReID标注的数据集,标注成本比SDE数据集更高。
+
+#### [DarkLabel](https://github.com/darkpgmr/DarkLabel)
+
+#### 使用说明
+
+##### 安装
+
+从官方给出的下载[链接](https://github.com/darkpgmr/DarkLabel/releases)中下载想要的版本,Windows环境解压后能够直接使用
+
+**视频/图片标注过程**
+
+1. 启动应用程序后,能看到左侧的工具栏
+2. 
选择视频/图像文件后,按需选择标注形式: + * Box仅绘制标注框 + * Box+Label绘制标注框&标签 + * Box+Label+AutoID绘制标注框&标签&ID号 + * Popup LabelSelect可以自行定义标签 +3. 在视频帧/图像上进行拖动鼠标,进行标注框的绘制 +4. 绘制完成后,在上数第六行里选择保存标注文件的形式,默认.txt + +![1](https://user-images.githubusercontent.com/34162360/179673519-511b4167-97ed-4228-8869-db9c69a68b6b.mov) + + + +##### 注意事项 + +1. 如果标注的是视频文件,需要在工具栏上数第五行的下拉框里选择`[fn,cname,id,x1,y1,w,h]` (DarkLabel2.4版本) +2. 鼠标移动到标注框所在区域,右键可以删除标注框 +3. 按下shift,可以选中标注框,进行框的移动和对某条边的编辑 +4. 按住enter回车,可以自动跟踪标注目标 +5. 自动跟踪标注目标过程中可以暂停(松开enter),按需修改标注框 + + + +##### 其他使用参考视频 + +* [DarkLabel (Video/Image Annotation Tool) - Ver.2.0](https://www.youtube.com/watch?v=lok30aIZgUw) +* [DarkLabel (Image/Video Annotation Tool)](https://www.youtube.com/watch?v=vbydG78Al8s&t=11s) + + + +#### 标注格式 +标注文件需要转化为MOT JDE数据集格式,包含`images`和`labels_with_ids`文件夹,具体参照[用户自定义数据集准备](PrepareMOTDataSet.md#用户自定义数据集准备)。 diff --git a/docs/tutorials/PrepareDataSet.md b/docs/tutorials/data/PrepareDetDataSet.md similarity index 83% rename from docs/tutorials/PrepareDataSet.md rename to docs/tutorials/data/PrepareDetDataSet.md index ce829db69865888b9877d90a5cce59792dcefdc8..c6f65fc0ee4161e36af2f467f2c9d43af77344eb 100644 --- a/docs/tutorials/PrepareDataSet.md +++ b/docs/tutorials/data/PrepareDetDataSet.md @@ -1,4 +1,4 @@ -# 如何准备训练数据 +# 目标检测数据准备 ## 目录 - [目标检测数据说明](#目标检测数据说明) - [准备训练数据](#准备训练数据) @@ -8,11 +8,13 @@ - [COCO数据数据](#COCO数据数据) - [COCO数据集下载](#COCO数据下载) - [COCO数据标注文件介绍](#COCO数据标注文件介绍) - - [用户数据](#用户数据) + - [用户数据准备](#用户数据准备) - [用户数据转成VOC数据](#用户数据转成VOC数据) - [用户数据转成COCO数据](#用户数据转成COCO数据) - [用户数据自定义reader](#用户数据自定义reader) - - [用户数据数据转换示例](#用户数据数据转换示例) + - [用户数据使用示例](#用户数据使用示例) + - [数据格式转换](#数据格式转换) + - [自定义数据训练](#自定义数据训练) - [(可选)生成Anchor](#(可选)生成Anchor) ### 目标检测数据说明 @@ -236,15 +238,7 @@ json文件中包含以下key: print('\n查看一条目标物体标注信息:', coco_anno['annotations'][0]) ``` - COCO数据准备如下。 - `dataset/coco/`最初文件组织结构 - ``` - >>cd dataset/coco/ - >>tree - ├── download_coco.py - ``` - -#### 用户数据 +#### 用户数据准备 对于用户数据有3种处理方法: (1) 将用户数据转成VOC数据(根据需要仅包含物体检测所必须的标签即可) (2) 将用户数据转成COCO数据(根据需要仅包含物体检测所必须的标签即可) @@ -328,11 +322,11 @@ dataset/xxx/ ... ``` -##### 用户数据自定义reader -如果数据集有新的数据需要添加进PaddleDetection中,您可参考数据处理文档中的[添加新数据源](../advanced_tutorials/READER.md#2.3自定义数据集)文档部分,开发相应代码完成新的数据源支持,同时数据处理具体代码解析等可阅读[数据处理文档](../advanced_tutorials/READER.md) +##### 用户数据自定义reader +如果数据集有新的数据需要添加进PaddleDetection中,您可参考数据处理文档中的[添加新数据源](../advanced_tutorials/READER.md#2.3自定义数据集)文档部分,开发相应代码完成新的数据源支持,同时数据处理具体代码解析等可阅读[数据处理文档](../advanced_tutorials/READER.md)。 -#### 用户数据数据转换示例 +#### 用户数据使用示例 以[Kaggle数据集](https://www.kaggle.com/andrewmvd/road-sign-detection) 比赛数据为例,说明如何准备自定义数据。 Kaggle上的 [road-sign-detection](https://www.kaggle.com/andrewmvd/road-sign-detection) 比赛数据包含877张图像,数据类别4类:crosswalk,speedlimit,stop,trafficlight。 @@ -357,6 +351,8 @@ Kaggle上的 [road-sign-detection](https://www.kaggle.com/andrewmvd/road-sign-de │ | ... 
``` +#### 数据格式转换 + 将数据划分为训练集和测试集 ``` # 生成 label_list.txt 文件 @@ -423,6 +419,67 @@ roadsign数据集统计: (1)用户数据,建议在训练前仔细检查数据,避免因数据标注格式错误或图像数据不完整造成训练过程中的crash (2)如果图像尺寸太大的话,在不限制读入数据尺寸情况下,占用内存较多,会造成内存/显存溢出,请合理设置batch_size,可从小到大尝试 +#### 自定义数据训练 + +数据准备完成后,需要修改PaddleDetection中关于Dataset的配置文件,在`configs/datasets`文件夹下。比如roadsign数据集的配置文件如下: +``` +metric: VOC # 目前支持COCO, VOC, WiderFace等评估标准 +num_classes: 4 # 数据集的类别数,不包含背景类,roadsign数据集为4类,其他数据需要修改为自己的数据类别 + +TrainDataset: + !VOCDataSet + dataset_dir: dataset/roadsign_voc # 训练集的图片所在文件相对于dataset_dir的路径 + anno_path: train.txt # 训练集的标注文件相对于dataset_dir的路径 + label_list: label_list.txt # 数据集所在路径,相对于PaddleDetection路径 + data_fields: ['image', 'gt_bbox', 'gt_class', 'difficult'] # 控制dataset输出的sample所包含的字段,注意此为训练集Reader独有的且必须配置的字段 + +EvalDataset: + !VOCDataSet + dataset_dir: dataset/roadsign_voc # 数据集所在路径,相对于PaddleDetection路径 + anno_path: valid.txt # 验证集的标注文件相对于dataset_dir的路径 + label_list: label_list.txt # 标签文件,相对于dataset_dir的路径 + data_fields: ['image', 'gt_bbox', 'gt_class', 'difficult'] + +TestDataset: + !ImageFolder + anno_path: label_list.txt # 标注文件所在路径,仅用于读取数据集的类别信息,支持json和txt格式 + dataset_dir: dataset/roadsign_voc # 数据集所在路径,若添加了此行,则`anno_path`路径为相对于`dataset_dir`路径,若此行不设置或去掉此行,则为相对于PaddleDetection路径 +``` + +然后在对应模型配置文件中将自定义数据文件路径替换为新路径,以`configs/yolov3/yolov3_mobilenet_v1_roadsign.yml`为例 + +``` +_BASE_: [ + '../datasets/roadsign_voc.yml', # 指定为自定义数据集配置路径 + '../runtime.yml', + '_base_/optimizer_40e.yml', + '_base_/yolov3_mobilenet_v1.yml', + '_base_/yolov3_reader.yml', +] +pretrain_weights: https://paddledet.bj.bcebos.com/models/yolov3_mobilenet_v1_270e_coco.pdparams +weights: output/yolov3_mobilenet_v1_roadsign/model_final + +YOLOv3Loss: + ignore_thresh: 0.7 + label_smooth: true +``` + + +在PaddleDetection的yml配置文件中,使用`!`直接序列化模块实例(可以是函数,实例等),上述的配置文件均使用Dataset进行了序列化。 + +配置修改完成后,即可以启动训练评估,命令如下 + +``` +export CUDA_VISIBLE_DEVICES=0 +python tools/train.py -c configs/yolov3/yolov3_mobilenet_v1_roadsign.yml --eval +``` + +更详细的命令参考[30分钟快速上手PaddleDetection](../GETTING_STARTED_cn.md) + +**注意:** +请运行前自行仔细检查数据集的配置路径,在训练或验证时如果TrainDataset和EvalDataset的路径配置有误,会提示自动下载数据集。若使用自定义数据集,在推理时如果TestDataset路径配置有误,会提示使用默认COCO数据集的类别信息。 + + ### (可选)生成Anchor 在yolo系列模型中,大多数情况下使用默认的anchor设置即可, 你也可以运行`tools/anchor_cluster.py`来得到适用于你的数据集Anchor,使用方法如下: diff --git a/docs/tutorials/PrepareDataSet_en.md b/docs/tutorials/data/PrepareDetDataSet_en.md similarity index 88% rename from docs/tutorials/PrepareDataSet_en.md rename to docs/tutorials/data/PrepareDetDataSet_en.md index 77206402b4686b5698d2df11fa6c8529051d05ea..aa8c5f3e183146c078c57e1328008a7fb007001e 100644 --- a/docs/tutorials/PrepareDataSet_en.md +++ b/docs/tutorials/data/PrepareDetDataSet_en.md @@ -250,7 +250,7 @@ There are three processing methods for user data: (1) Convert user data into VOC data (only include labels necessary for object detection as required) (2) Convert user data into coco data (only include labels necessary for object detection as required) (3) Customize a reader for user data (for complex data, you need to customize the reader) - + ##### Convert User Data to VOC Data After the user dataset is converted to VOC data, the directory structure is as follows (note that the path name and file name in the dataset should not use Chinese as far as possible to avoid errors caused by Chinese coding problems): @@ -332,6 +332,33 @@ dataset/xxx/ ##### Reader of User Define Data If new data in the dataset needs to be added to paddedetection, you can refer to the [add new data source] 
(../advanced_tutorials/READER.md#2.3_Customizing_Dataset) section in the data processing document, and develop the corresponding code to support the new data source. You can also read the [data processing document](../advanced_tutorials/READER.md) for a detailed analysis of the data processing code.
+
+The configuration files for the datasets are in the `configs/datasets` folder. For example, the COCO dataset configuration file is as follows:
+```
+metric: COCO  # currently supports COCO, VOC, OID, WiderFace and other evaluation standards
+num_classes: 80  # number of classes in the dataset, excluding the background class
+
+TrainDataset:
+  !COCODataSet
+    image_dir: train2017  # path of the training-set images, relative to dataset_dir
+    anno_path: annotations/instances_train2017.json  # path of the training-set annotation file, relative to dataset_dir
+    dataset_dir: dataset/coco  # path of the dataset, relative to the PaddleDetection path
+    data_fields: ['image', 'gt_bbox', 'gt_class', 'is_crowd']  # fields contained in the samples output by the dataset; note that data_fields is specific to TrainReader and must be configured
+
+EvalDataset:
+  !COCODataSet
+    image_dir: val2017  # path of the validation-set images, relative to dataset_dir
+    anno_path: annotations/instances_val2017.json  # path of the validation-set annotation file, relative to dataset_dir
+    dataset_dir: dataset/coco  # path of the dataset, relative to the PaddleDetection path
+
+TestDataset:
+  !ImageFolder
+    anno_path: dataset/coco/annotations/instances_val2017.json  # path of the annotation file, only used to read the category information of the dataset; JSON and TXT formats are supported
+    dataset_dir: dataset/coco  # path of the dataset; if this row is set, the annotation file is looked up at `dataset_dir/anno_path`, otherwise `anno_path` is used as-is
+```
+In the yml configuration files of PaddleDetection, `!` directly serializes a module instance (a function, an instance, etc.). The dataset entries above are all serialized this way.
+
+**Note:**
+Please carefully check the configured dataset paths before running. During training or evaluation, if the path of TrainDataset or EvalDataset is wrong, a prompt to download the dataset automatically will be given. When using a user-defined dataset, if the TestDataset path is configured incorrectly during inference, the categories of the default COCO dataset will be used instead.
+
+#### Example of User Data Conversion
+
+Take the [Kaggle Dataset](https://www.kaggle.com/andrewmvd/road-sign-detection) competition data as an example to illustrate how to prepare custom data. The Kaggle [road-sign-detection](https://www.kaggle.com/andrewmvd/road-sign-detection) competition dataset contains 877 images and four categories: crosswalk, speedlimit, stop, trafficlight. It is available for download from Kaggle, and also from this [link](https://paddlemodels.bj.bcebos.com/object_detection/roadsign_voc.tar).
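Since a mis-configured `anno_path` silently falls back to the default COCO categories, it is worth verifying the annotation file before training. A small check with `pycocotools` (the path below is the COCO example from the config above; substitute your own file):

```python
from pycocotools.coco import COCO

# Load the annotation file referenced by the dataset config.
coco = COCO("dataset/coco/annotations/instances_val2017.json")
cats = coco.loadCats(coco.getCatIds())

print(len(cats))                      # should equal num_classes (80 for COCO)
print([c["name"] for c in cats][:5])  # spot-check a few category names
```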
diff --git a/docs/tutorials/PrepareKeypointDataSet_cn.md b/docs/tutorials/data/PrepareKeypointDataSet.md similarity index 67% rename from docs/tutorials/PrepareKeypointDataSet_cn.md rename to docs/tutorials/data/PrepareKeypointDataSet.md index 791fd1e49e8367815967654de24aed8eb2485635..4efa90b8d2b2a70430c13feccffe0342ce94e5fd 100644 --- a/docs/tutorials/PrepareKeypointDataSet_cn.md +++ b/docs/tutorials/data/PrepareKeypointDataSet.md @@ -1,14 +1,16 @@ 简体中文 | [English](PrepareKeypointDataSet_en.md) -# 如何准备关键点数据集 +# 关键点数据准备 ## 目录 - [COCO数据集](#COCO数据集) - [MPII数据集](#MPII数据集) -- [训练其他数据集](#训练其他数据集) +- [用户数据准备](#用户数据准备) + - [数据格式转换](#数据格式转换) + - [自定义数据训练](#自定义数据训练) ## COCO数据集 ### COCO数据集的准备 -我们提供了一键脚本来自动完成COCO2017数据集的下载及准备工作,请参考[COCO数据集下载](https://github.com/PaddlePaddle/PaddleDetection/blob/f0a30f3ba6095ebfdc8fffb6d02766406afc438a/docs/tutorials/PrepareDataSet.md#COCO%E6%95%B0%E6%8D%AE)。 +我们提供了一键脚本来自动完成COCO2017数据集的下载及准备工作,请参考[COCO数据集下载](https://github.com/PaddlePaddle/PaddleDetection/blob/f0a30f3ba6095ebfdc8fffb6d02766406afc438a/docs/tutorials/PrepareDetDataSet.md#COCO%E6%95%B0%E6%8D%AE)。 ### COCO数据集(KeyPoint)说明 在COCO中,关键点序号与部位的对应关系为: @@ -110,7 +112,10 @@ MPII keypoint indexes: - `scale`:表示人物的比例,对应200px。 -## 训练其他数据集 +## 用户数据准备 + +### 数据格式转换 + 这里我们以`AIChallenger`数据集为例,展示如何将其他数据集对齐到COCO格式并加入关键点模型训练中。 @@ -139,3 +144,33 @@ AI Challenger Description: 5. 整理图像路径`file_name`,使其能够被正确访问到。 我们提供了整合`COCO`训练集和`AI Challenger`数据集的[标注文件](https://bj.bcebos.com/v1/paddledet/data/keypoint/aic_coco_train_cocoformat.json),供您参考调整后的效果。 + +### 自定义数据训练 + +以[tinypose_256x192](../../../configs/keypoint/tiny_pose/README.md)为例来说明对于自定义数据如何修改: + +#### 1、配置文件[tinypose_256x192.yml](../../../configs/keypoint/tiny_pose/tinypose_256x192.yml) + +基本的修改内容及其含义如下: + +``` +num_joints: &num_joints 17 #自定义数据的关键点数量 +train_height: &train_height 256 #训练图片尺寸-高度h +train_width: &train_width 192 #训练图片尺寸-宽度w +hmsize: &hmsize [48, 64] #对应训练尺寸的输出尺寸,这里是输入[w,h]的1/4 +flip_perm: &flip_perm [[1, 2], [3, 4], [5, 6], [7, 8], [9, 10], [11, 12], [13, 14], [15, 16]] #关键点定义中左右对称的关键点,用于flip增强。若没有对称结构在 TrainReader 的 RandomFlipHalfBodyTransform 一栏中 flip_pairs 后面加一行 "flip: False"(注意缩紧对齐) +num_joints_half_body: 8 #半身关键点数量,用于半身增强 +prob_half_body: 0.3 #半身增强实现概率,若不需要则修改为0 +upper_body_ids: [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10] #上半身对应关键点id,用于半身增强中获取上半身对应的关键点。 +``` + +上述是自定义数据时所需要的修改部分,完整的配置及含义说明可参考文件:[关键点配置文件说明](../KeyPointConfigGuide_cn.md)。 + +#### 2、其他代码修改(影响测试、可视化) +- keypoint_utils.py中的sigmas = np.array([.26, .25, .25, .35, .35, .79, .79, .72, .72, .62, .62, 1.07, 1.07,.87, .87, .89, .89]) / 10.0,表示每个关键点的确定范围方差,根据实际关键点可信区域设置,区域精确的一般0.25-0.5,例如眼睛。区域范围大的一般0.5-1.0,例如肩膀。若不确定建议0.75。 +- visualizer.py中的draw_pose函数中的EDGES,表示可视化时关键点之间的连接线关系。 +- pycocotools工具中的sigmas,同第一个keypoint_utils.py中的设置。用于coco指标评估时计算。 + +#### 3、数据准备注意 +- 训练数据请按coco数据格式处理。需要包括关键点[Nx3]、检测框[N]标注。 +- 请注意area>0,area=0时数据在训练时会被过滤掉。此外,由于COCO的评估机制,area较小的数据在评估时也会被过滤掉,我们建议在自定义数据时取`area = bbox_w * bbox_h`。 diff --git a/docs/tutorials/PrepareKeypointDataSet_en.md b/docs/tutorials/data/PrepareKeypointDataSet_en.md similarity index 98% rename from docs/tutorials/PrepareKeypointDataSet_en.md rename to docs/tutorials/data/PrepareKeypointDataSet_en.md index 4656922ab79d93720538bd13ef4bc3f188819862..80272910cee355e28d6aa219e30bc98de599bbd0 100644 --- a/docs/tutorials/PrepareKeypointDataSet_en.md +++ b/docs/tutorials/data/PrepareKeypointDataSet_en.md @@ -1,4 +1,4 @@ -[简体中文](PrepareKeypointDataSet_cn.md) | English +[简体中文](PrepareKeypointDataSet.md) | English # How to prepare dataset? 
## Table of Contents @@ -8,7 +8,7 @@ ## COCO ### Preperation for COCO dataset -We provide a one-click script to automatically complete the download and preparation of the COCO2017 dataset. Please refer to [COCO Download](https://github.com/PaddlePaddle/PaddleDetection/blob/f0a30f3ba6095ebfdc8fffb6d02766406afc438a/docs/tutorials/PrepareDataSet.md#COCO%E6%95%B0%E6%8D%AE). +We provide a one-click script to automatically complete the download and preparation of the COCO2017 dataset. Please refer to [COCO Download](https://github.com/PaddlePaddle/PaddleDetection/blob/f0a30f3ba6095ebfdc8fffb6d02766406afc438a/docs/tutorials/PrepareDetDataSet_en.md#COCO%E6%95%B0%E6%8D%AE). ### Description for COCO dataset(Keypoint): In COCO, the indexes and corresponding keypoint name are: diff --git a/docs/tutorials/PrepareMOTDataSet_cn.md b/docs/tutorials/data/PrepareMOTDataSet.md similarity index 40% rename from docs/tutorials/PrepareMOTDataSet_cn.md rename to docs/tutorials/data/PrepareMOTDataSet.md index 7c97e073f3edb81a8cf122476f022a1a05bfd647..633efa95e510f242f6f9ff3cfadd1def1d2d0356 100644 --- a/docs/tutorials/PrepareMOTDataSet_cn.md +++ b/docs/tutorials/data/PrepareMOTDataSet.md @@ -1,30 +1,99 @@ -简体中文 | [English](GETTING_STARTED.md) - -# 目录 -## 多目标跟踪数据集准备 -- [MOT数据集](#MOT数据集) -- [数据集目录](#数据集目录) -- [数据格式](#数据格式) -- [用户数据准备](#用户数据准备) +简体中文 | [English](PrepareMOTDataSet_en.md) + +# 多目标跟踪数据集准备 +## 目录 +- [简介和模型选型](#简介和模型选型) +- [MOT数据集准备](#MOT数据集准备) + - [SDE数据集](#SDE数据集) + - [JDE数据集](#JDE数据集) +- [用户自定义数据集准备](#用户自定义数据集准备) + - [SDE数据集](#SDE数据集) + - [JDE数据集](#JDE数据集) - [引用](#引用) -### MOT数据集 -PaddleDetection复现[JDE](https://github.com/Zhongdao/Towards-Realtime-MOT) 和[FairMOT](https://github.com/ifzhang/FairMOT),是使用的和他们相同的MIX数据集,包括**Caltech Pedestrian, CityPersons, CUHK-SYSU, PRW, ETHZ, MOT17和MOT16**。使用前6者作为联合数据集参与训练,MOT16作为评测数据集。如果您想使用这些数据集,请**遵循他们的License**。 +## 简介和模型选型 +PaddleDetection中提供了SDE和JDE两个系列的多种算法实现: +- SDE(Separate Detection and Embedding) + - [ByteTrack](../../../configs/mot/bytetrack) + - [DeepSORT](../../../configs/mot/deepsort) + +- JDE(Joint Detection and Embedding) + - [JDE](../../../configs/mot/jde) + - [FairMOT](../../../configs/mot/fairmot) + - [MCFairMOT](../../../configs/mot/mcfairmot) **注意:** -- 多目标跟踪数据集一般是用于单类别的多目标跟踪,DeepSORT、JDE和FairMOT均为单类别跟踪模型,MIX数据集以及其子数据集也都是单类别的行人跟踪数据集,可认为相比于行人检测数据集多了id号的标注。 -- 为了训练更多场景的垂类模型例如车辆等,垂类数据集也需要处理成与MIX数据集相同的格式,PaddleDetection也提供了[车辆跟踪](../../configs/mot/vehicle/README_cn.md)、[人头跟踪](../../configs/mot/headtracking21/README_cn.md)以及更通用的[行人跟踪](../../configs/mot/pedestrian/README_cn.md)的垂类数据集和模型。用户自定义数据集也可参照本文档准备。 -- 多类别跟踪模型是[MCFairMOT](../../configs/mot/mcfairmot/README_cn.md),多类别数据集是VisDrone数据集的整合版,可参照[MCFairMOT](../../configs/mot/mcfairmot/README_cn.md)的文档说明。 -- 跨镜头跟踪模型,是选用的[AIC21 MTMCT](https://www.aicitychallenge.org) (CityFlow)车辆跨镜头跟踪数据集,数据集和模型可参照[跨境头跟踪](../../configs/mot/mtmct/README_cn.md)的文档说明。 + - 以上算法原论文均为单类别的多目标跟踪,PaddleDetection团队同时也支持了[ByteTrack](./bytetrack)和FairMOT([MCFairMOT](./mcfairmot))的多类别的多目标跟踪; + - [DeepSORT](../../../configs/mot/deepsort)和[JDE](../../../configs/mot/jde)均只支持单类别的多目标跟踪; + - [DeepSORT](../../../configs/mot/deepsort)需要额外添加ReID权重一起执行,[ByteTrack](../../../configs/mot/bytetrack)可加可不加ReID权重,默认不加; + + +关于模型选型,PaddleDetection团队提供的总结建议如下: + +| MOT方式 | 经典算法 | 算法流程 | 数据集要求 | 其他特点 | +| :--------------| :--------------| :------- | :----: | :----: | +| SDE系列 | DeepSORT,ByteTrack | 分离式,两个独立模型权重先检测后ReID,也可不加ReID | 检测和ReID数据相对独立,不加ReID时即纯检测数据集 |检测和ReID可分别调优,鲁棒性较高,AI竞赛常用| +| JDE系列 | FairMOT | 联合式,一个模型权重端到端同时检测和ReID | 必须同时具有检测和ReID标注 | 
检测和ReID联合训练,不易调优,泛化性不强| + +**注意:** + - 由于数据标注的成本较大,建议选型前优先考虑**数据集要求**,如果数据集只有检测框标注而没有ReID标注,是无法使用JDE系列算法训练的,更推荐使用SDE系列; + - SDE系列算法在检测器精度足够高时,也可以不使用ReID权重进行物体间的长时序关联,可以参照[ByteTrack](bytetrack); + - 耗时速度和模型权重参数量计算量有一定关系,耗时从理论上看`不使用ReID的SDE系列 < JDE系列 < 使用ReID的SDE系列`; + + +## MOT数据集准备 +PaddleDetection团队提供了众多公开数据集或整理后数据集的下载链接,参考[数据集下载汇总](../../../configs/mot/DataDownload.md),用户可以自行下载使用。 -### 数据集目录 -首先按照以下命令下载image_lists.zip并解压放在`PaddleDetection/dataset/mot`目录下: +根据模型选型总结,MOT数据集可以分为两类:一类纯检测框标注的数据集,仅SDE系列可以使用;另一类是同时有检测和ReID标注的数据集,SDE系列和JDE系列都可以使用。 + +### SDE数据集 +SDE数据集是纯检测标注的数据集,用户自定义数据集可以参照[DET数据准备文档](./PrepareDetDataSet.md)准备。 + +以MOT17数据集为例,下载并解压放在`PaddleDetection/dataset/mot`目录下: +``` +wget https://dataset.bj.bcebos.com/mot/MOT17.zip + +``` +并修改数据集部分的配置文件如下: +``` +num_classes: 1 + +TrainDataset: + !COCODataSet + dataset_dir: dataset/mot/MOT17 + anno_path: annotations/train_half.json + image_dir: images/train + data_fields: ['image', 'gt_bbox', 'gt_class', 'is_crowd'] + +EvalDataset: + !COCODataSet + dataset_dir: dataset/mot/MOT17 + anno_path: annotations/val_half.json + image_dir: images/train + +TestDataset: + !ImageFolder + dataset_dir: dataset/mot/MOT17 + anno_path: annotations/val_half.json +``` + +数据集目录为: +``` +dataset/mot + |——————MOT17 + |——————annotations + |——————images +``` + +### JDE数据集 +JDE数据集是同时有检测和ReID标注的数据集,首先按照以下命令`image_lists.zip`并解压放在`PaddleDetection/dataset/mot`目录下: ``` wget https://dataset.bj.bcebos.com/mot/image_lists.zip ``` -然后按照以下命令可以快速下载MIX数据集的各个子数据集,并解压放在`PaddleDetection/dataset/mot`目录下: +然后按照以下命令可以快速下载各个公开数据集,也解压放在`PaddleDetection/dataset/mot`目录下: ``` +# MIX数据,同JDE,FairMOT论文使用的数据集 wget https://dataset.bj.bcebos.com/mot/MOT17.zip wget https://dataset.bj.bcebos.com/mot/Caltech.zip wget https://dataset.bj.bcebos.com/mot/CUHKSYSU.zip @@ -33,24 +102,17 @@ wget https://dataset.bj.bcebos.com/mot/Cityscapes.zip wget https://dataset.bj.bcebos.com/mot/ETHZ.zip wget https://dataset.bj.bcebos.com/mot/MOT16.zip ``` - -最终目录为: +数据集目录为: ``` dataset/mot |——————image_lists - |——————caltech.10k.val |——————caltech.all - |——————caltech.train - |——————caltech.val |——————citypersons.train - |——————citypersons.val |——————cuhksysu.train - |——————cuhksysu.val |——————eth.train |——————mot16.train |——————mot17.train |——————prw.train - |——————prw.val |——————Caltech |——————Cityscapes |——————CUHKSYSU @@ -60,7 +122,7 @@ dataset/mot |——————PRW ``` -### 数据格式 +#### JDE数据集的格式 这几个相关数据集都遵循以下结构: ``` MOT17 @@ -74,16 +136,26 @@ MOT17 ``` [class] [identity] [x_center] [y_center] [width] [height] ``` -**注意**: -- `class`为类别id,支持单类别和多类别,从`0`开始计,单类别即为`0`。 -- `identity`是从`1`到`num_identities`的整数(`num_identities`是数据集中所有视频或图片序列的不同物体实例的总数),如果此框没有`identity`标注,则为`-1`。 -- `[x_center] [y_center] [width] [height]`是中心点坐标和宽高,注意他们的值是由图片的宽度/高度标准化的,因此它们是从0到1的浮点数。 + - `class`为类别id,支持单类别和多类别,从`0`开始计,单类别即为`0`。 + - `identity`是从`1`到`num_identities`的整数(`num_identities`是数据集中所有视频或图片序列的不同物体实例的总数),如果此框没有`identity`标注,则为`-1`。 + - `[x_center] [y_center] [width] [height]`是中心点坐标和宽高,注意他们的值是由图片的宽度/高度标准化的,因此它们是从0到1的浮点数。 + + +**注意:** + - MIX数据集是[JDE](https://github.com/Zhongdao/Towards-Realtime-MOT)和[FairMOT](https://github.com/ifzhang/FairMOT)原论文使用的数据集,包括**Caltech Pedestrian, CityPersons, CUHK-SYSU, PRW, ETHZ, MOT17和MOT16**。使用前6者作为联合数据集参与训练,MOT16作为评测数据集。如果您想使用这些数据集,请**遵循他们的License**。 + - MIX数据集以及其子数据集都是单类别的行人跟踪数据集,可认为相比于行人检测数据集多了id号的标注。 + - 更多场景的垂类模型例如车辆行人人头跟踪等,垂类数据集也需要处理成与MIX数据集相同的格式,参照[数据集下载汇总](DataDownload.md)、[车辆跟踪](vehicle/README_cn.md)、[人头跟踪](headtracking21/README_cn.md)以及更通用的[行人跟踪](pedestrian/README_cn.md)。 + - 
用户自定义数据集可参照[MOT数据集准备教程](../../docs/tutorials/PrepareMOTDataSet_cn.md)去准备。 +## 用户自定义数据集准备 -### 用户数据准备 +### SDE数据集 +如果用户选择SDE系列方案,是准备准检测标注的自定义数据集,则可以参照[DET数据准备文档](./PrepareDetDataSet.md)准备。 -为了规范地进行训练和评测,用户数据需要转成和MOT-16数据集相同的目录和格式: +### JDE数据集 +如果用户选择JDE系列方案,则需要同时具有检测和ReID标注,且符合MOT-17数据集的格式。 +为了规范地进行训练和评测,用户数据需要转成和MOT-17数据集相同的目录和格式: ``` custom_data |——————images @@ -109,45 +181,49 @@ custom_data └—————— ... ``` -#### images文件夹 -- `gt.txt`是原始标注文件,而训练所用标注是`labels_with_ids`文件夹。 -- `img1`文件夹里是按照一定帧率抽好的图片。 -- `seqinfo.ini`文件是视频信息描述文件,需要如下格式的信息: -``` -[Sequence] -name=MOT16-02 -imDir=img1 -frameRate=30 -seqLength=600 -imWidth=1920 -imHeight=1080 -imExt=.jpg -``` +##### images文件夹 + - `gt.txt`是原始标注文件,而训练所用标注是`labels_with_ids`文件夹。 + - `gt.txt`里是当前视频中所有图片的原始标注文件,每行都描述一个边界框,格式如下: + ``` + [frame_id],[identity],[bb_left],[bb_top],[width],[height],[score],[label],[vis_ratio] + ``` + - `img1`文件夹里是按照一定帧率抽好的图片。 + - `seqinfo.ini`文件是视频信息描述文件,需要如下格式的信息: + ``` + [Sequence] + name=MOT17-02 + imDir=img1 + frameRate=30 + seqLength=600 + imWidth=1920 + imHeight=1080 + imExt=.jpg + ``` -`gt.txt`里是当前视频中所有图片的原始标注文件,每行都描述一个边界框,格式如下: +其中`gt.txt`里是当前视频中所有图片的原始标注文件,每行都描述一个边界框,格式如下: ``` [frame_id],[identity],[bb_left],[bb_top],[width],[height],[score],[label],[vis_ratio] ``` **注意**: -- `frame_id`为当前图片帧序号 -- `identity`是从`1`到`num_identities`的整数(`num_identities`是**当前视频或图片序列**的不同物体实例的总数),如果此框没有`identity`标注,则为`-1`。 -- `bb_left`是目标框的左边界的x坐标 -- `bb_top`是目标框的上边界的y坐标 -- `width,height`是真实的像素宽高 -- `score`是当前目标是否进入考虑范围内的标志(值为0表示此目标在计算中被忽略,而值为1则用于将其标记为活动实例),默认为`1` -- `label`是当前目标的种类标签,由于目前仅支持单类别跟踪,默认为`1`,MOT-16数据集中会有其他类别标签,但都是当作ignore类别计算 -- `vis_ratio`是当前目标被其他目标包含或覆挡后的可见率,是从0到1的浮点数,默认为`1` + - `frame_id`为当前图片帧序号 + - `identity`是从`1`到`num_identities`的整数(`num_identities`是**当前视频或图片序列**的不同物体实例的总数),如果此框没有`identity`标注,则为`-1`。 + - `bb_left`是目标框的左边界的x坐标 + - `bb_top`是目标框的上边界的y坐标 + - `width,height`是真实的像素宽高 + - `score`是当前目标是否进入考虑范围内的标志(值为0表示此目标在计算中被忽略,而值为1则用于将其标记为活动实例),默认为`1` + - `label`是当前目标的种类标签,由于目前仅支持单类别跟踪,默认为`1`,MOT-16数据集中会有其他类别标签,但都是当作ignore类别计算 + - `vis_ratio`是当前目标被其他目标包含或覆挡后的可见率,是从0到1的浮点数,默认为`1` -#### labels_with_ids文件夹 +##### labels_with_ids文件夹 所有数据集的标注是以统一数据格式提供的。各个数据集中每张图片都有相应的标注文本。给定一个图像路径,可以通过将字符串`images`替换为`labels_with_ids`并将`.jpg`替换为`.txt`来生成标注文本路径。在标注文本中,每行都描述一个边界框,格式如下: ``` [class] [identity] [x_center] [y_center] [width] [height] ``` **注意**: -- `class`为类别id,支持单类别和多类别,从`0`开始计,单类别即为`0`。 -- `identity`是从`1`到`num_identities`的整数(`num_identities`是数据集中所有视频或图片序列的不同物体实例的总数),如果此框没有`identity`标注,则为`-1`。 -- `[x_center] [y_center] [width] [height]`是中心点坐标和宽高,注意是由图片的宽度/高度标准化的,因此它们是从0到1的浮点数。 + - `class`为类别id,支持单类别和多类别,从`0`开始计,单类别即为`0`。 + - `identity`是从`1`到`num_identities`的整数(`num_identities`是数据集中所有视频或图片序列的不同物体实例的总数),如果此框没有`identity`标注,则为`-1`。 + - `[x_center] [y_center] [width] [height]`是中心点坐标和宽高,注意是由图片的宽度/高度标准化的,因此它们是从0到1的浮点数。 可采用如下脚本生成相应的`labels_with_ids`: ``` @@ -155,6 +231,7 @@ cd dataset/mot python gen_labels_MOT.py ``` + ### 引用 Caltech: ``` diff --git a/docs/tutorials/PrepareMOTDataSet.md b/docs/tutorials/data/PrepareMOTDataSet_en.md similarity index 99% rename from docs/tutorials/PrepareMOTDataSet.md rename to docs/tutorials/data/PrepareMOTDataSet_en.md index 44f5a4713a3b461e84ab0dfa2a01af1045b7dc4c..f1bd170ccad18457c7faf6c0f0a57f6211af7784 100644 --- a/docs/tutorials/PrepareMOTDataSet.md +++ b/docs/tutorials/data/PrepareMOTDataSet_en.md @@ -1,4 +1,4 @@ -English | [简体中文](PrepareMOTDataSet_cn.md) +English | [简体中文](PrepareMOTDataSet.md) # Contents ## Multi-Object Tracking Dataset Preparation diff --git a/docs/tutorials/data/README.md 
b/docs/tutorials/data/README.md
new file mode 100644
index 0000000000000000000000000000000000000000..947b650e18cbc9cf9bb57c8b6600588ed0a6501f
--- /dev/null
+++ b/docs/tutorials/data/README.md
@@ -0,0 +1,27 @@
+# 数据准备
+
+数据对于深度学习开发起到了至关重要的作用,数据采集和标注的质量是提升业务模型效果的重要因素。本文档主要介绍PaddleDetection中如何进行数据准备,包括采集高质量数据的方法(覆盖多场景类型,提升模型泛化能力)、各类任务的数据标注工具和方法,以及如何在PaddleDetection中使用标注好的数据。
+
+## 数据采集
+在深度学习任务的实际落地中,数据采集往往决定了最终模型的效果,对于数据采集的几点建议如下:
+
+### 确定方向
+任务类型、数据的类别和目标场景这些因素决定了要收集什么数据,首先需要根据这些因素来确定整体数据收集的工作方向。
+
+### 开源数据集
+在实际场景中数据采集成本其实十分高昂,完全靠自己收集在时间和金钱上都有很高的成本,开源数据集是帮助增加训练数据量的重要手段,所以很多时候会考虑加入一些相似任务的开源数据。在使用中请遵守各个开源数据集的license规定的使用条件。
+
+### 增加场景数据
+开源数据一般不会覆盖实际使用的目标场景,用户需要评估开源数据集中已包含的场景和目标场景间的差异,有针对性地补充目标场景数据,尽量让训练和部署数据的场景一致。
+
+### 类别均衡
+在采集阶段,也需要尽量保持类别均衡,帮助模型正确学习到目标特征。
+
+
+## 数据标注及格式说明
+
+| 任务类型 | 数据标注 | 数据格式说明 |
+|:--------:| :--------:|:--------:|
+| 目标检测 | [文档链接](DetAnnoTools.md) | [文档链接](PrepareDetDataSet.md) |
+| 关键点检测 | [文档链接](KeyPointAnnoTools.md) | [文档链接](PrepareKeypointDataSet.md) |
+| 多目标跟踪 | [文档链接](MOTAnnoTools.md) | [文档链接](PrepareMOTDataSet.md) |
diff --git a/docs/tutorials/logging_en.md b/docs/tutorials/logging_en.md
new file mode 100644
index 0000000000000000000000000000000000000000..b45ceba69d39098f70d0b8825d372529ce40cd0b
--- /dev/null
+++ b/docs/tutorials/logging_en.md
@@ -0,0 +1,46 @@
+# Logging
+
+This document describes how to track metrics and visualize model performance during training. The library currently supports [VisualDL](https://www.paddlepaddle.org.cn/documentation/docs/en/guides/03_VisualDL/visualdl_usage_en.html) and [Weights & Biases](https://docs.wandb.ai).
+
+## VisualDL
+Logging to VisualDL is supported only for Python >= 3.5. To install VisualDL:
+
+```
+pip install visualdl
+```
+
+PaddleDetection uses a callback to log the training metrics at the end of every step, and metrics from the validation step at the end of every epoch. To use VisualDL for visualization, add the `--use_vdl` flag to the training command and `--vdl_log_dir` to set the directory which stores the records.
+
+For example:
+
+```
+python tools/train.py -c config.yml --use_vdl --vdl_log_dir ./logs
+```
+
+Another possible way to do this is to add the aforementioned flags to the `config.yml` file.
+
+## Weights & Biases
+W&B is an MLOps tool that can be used for experiment tracking, dataset/model versioning, visualizing results, and collaborating with colleagues. A W&B logger is integrated directly into PaddleDetection; to use it, first install the wandb sdk and log in to your wandb account:
+
+```
+pip install wandb
+wandb login
+```
+
+To log metrics with wandb while training, add the `--use_wandb` flag to the training command; any other arguments for the W&B logger can be provided like this:
+
+```
+python tools/train.py -c config.yml --use_wandb -o wandb-project=MyDetector wandb-entity=MyTeam wandb-save_dir=./logs
+```
+
+The arguments to the W&B logger must be preceded by `-o`, and each individual argument must contain the prefix "wandb-".
+
+If this is too tedious, an alternative way is to add the arguments to the `config.yml` file under the `wandb` header. For example:
+
+```
+use_wandb: True
+wandb:
+  project: MyProject
+  entity: MyTeam
+  save_dir: ./logs
+```
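For comparison, the settings above map onto the underlying SDK roughly as follows. This is a hedged sketch of plain `wandb` usage, not PaddleDetection's internal callback; the project and entity names are the placeholders from the example above:

```python
import wandb

# Equivalent of the project/entity/save_dir settings in the config snippet.
run = wandb.init(project="MyProject", entity="MyTeam", dir="./logs")

# Log a few dummy training metrics.
for step, loss in enumerate([0.9, 0.7, 0.55]):
    run.log({"train/loss": loss}, step=step)

run.finish()
```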
diff --git a/industrial_tutorial/README.md b/industrial_tutorial/README.md
new file mode 100644
index 0000000000000000000000000000000000000000..94441e6c740d47b3869d1fa3eebd99729436baea
--- /dev/null
+++ b/industrial_tutorial/README.md
@@ -0,0 +1,37 @@
+# 产业实践范例
+
+PaddleDetection场景应用覆盖通用、制造、城市、交通行业的主要检测垂类应用,在PP-YOLOE、PP-PicoDet、PP-Human、PP-Vehicle的能力基础之上,以notebook的形式展示利用场景数据微调、模型优化方法、数据增广等内容,为开发者快速落地目标检测应用提供示范与启发。
+
+
    + +
    + + +欢迎扫码加入用户交流答疑群 +
    + +
    + +## 范例列表 + +- [基于PP-TinyPose增强版的智能健身动作识别](https://aistudio.baidu.com/aistudio/projectdetail/4385813) + +- [基于PP-Human的打架识别](https://aistudio.baidu.com/aistudio/projectdetail/4086987?contributionType=1) + +- [基于PP-PicoDet的通信塔识别及Android端部署](https://aistudio.baidu.com/aistudio/projectdetail/3561097) + +- [基于Faster-RCNN的瓷砖表面瑕疵检测](https://aistudio.baidu.com/aistudio/projectdetail/2571419) + +- [基于PaddleDetection的PCB瑕疵检测](https://aistudio.baidu.com/aistudio/projectdetail/2367089) + +- [基于FairMOT实现人流量统计](https://aistudio.baidu.com/aistudio/projectdetail/2421822) + +- [基于YOLOv3实现跌倒检测](https://aistudio.baidu.com/aistudio/projectdetail/2500639) + +- [基于PP-PicoDetv2 的路面垃圾检测](https://aistudio.baidu.com/aistudio/projectdetail/3846170?channelType=0&channel=0) + +- [基于人体关键点检测的合规检测](https://aistudio.baidu.com/aistudio/projectdetail/4061642?contributionType=1) + + + *范例将持续更新中 diff --git a/ppdet/core/workspace.py b/ppdet/core/workspace.py index e633746ed804e29ca9cc53c9b6cf39c1a8a168a6..e56feb31be4d02e81abcdfb6a33fbfc111abb1cc 100644 --- a/ppdet/core/workspace.py +++ b/ppdet/core/workspace.py @@ -210,9 +210,17 @@ def create(cls_or_name, **kwargs): assert type(cls_or_name) in [type, str ], "should be a class or name of a class" name = type(cls_or_name) == str and cls_or_name or cls_or_name.__name__ - assert name in global_config and \ - isinstance(global_config[name], SchemaDict), \ - "the module {} is not registered".format(name) + if name in global_config: + if isinstance(global_config[name], SchemaDict): + pass + elif hasattr(global_config[name], "__dict__"): + # support instance return directly + return global_config[name] + else: + raise ValueError("The module {} is not registered".format(name)) + else: + raise ValueError("The module {} is not registered".format(name)) + config = global_config[name] cls = getattr(config.pymodule, name) cls_kwargs = {} diff --git a/ppdet/data/reader.py b/ppdet/data/reader.py index c9ea09af2f7250d67cb005345a48d59107ab7eab..f04fd6b3380c915abaf1e8104d8901268d12775f 100644 --- a/ppdet/data/reader.py +++ b/ppdet/data/reader.py @@ -23,7 +23,7 @@ else: import numpy as np from paddle.io import DataLoader, DistributedBatchSampler -from paddle.fluid.dataloader.collate import default_collate_fn +from .utils import default_collate_fn from ppdet.core.workspace import register from . 
import transform diff --git a/ppdet/data/shm_utils.py b/ppdet/data/shm_utils.py index 38d8ba66cd71baa169c27a44e59a1d4d908b8d7c..a929a809cec9bc1e6b1dd335faa0ba4f2e44ff87 100644 --- a/ppdet/data/shm_utils.py +++ b/ppdet/data/shm_utils.py @@ -34,7 +34,10 @@ SHM_DEFAULT_MOUNT = '/dev/shm' def _parse_size_in_M(size_str): - num, unit = size_str[:-1], size_str[-1] + if size_str[-1] == 'B': + num, unit = size_str[:-2], size_str[-2] + else: + num, unit = size_str[:-1], size_str[-1] assert unit in SIZE_UNIT, \ "unknown shm size unit {}".format(unit) return float(num) * \ diff --git a/ppdet/data/source/__init__.py b/ppdet/data/source/__init__.py index 3854d3d2530b032b3c84d1ab5f2e01ea963c5c70..e3abb16b606de5501886f1a615fd25a7cd114e61 100644 --- a/ppdet/data/source/__init__.py +++ b/ppdet/data/source/__init__.py @@ -27,3 +27,4 @@ from .category import * from .keypoint_coco import * from .mot import * from .sniper_coco import SniperCOCODataSet +from .dataset import ImageFolder diff --git a/ppdet/data/source/category.py b/ppdet/data/source/category.py index 9390e54c4ce5dacce4674363689b629261c787c6..de447161710d32ef623bab5692c40d39efb7e9c7 100644 --- a/ppdet/data/source/category.py +++ b/ppdet/data/source/category.py @@ -39,24 +39,49 @@ def get_categories(metric_type, anno_file=None, arch=None): if arch == 'keypoint_arch': return (None, {'id': 'keypoint'}) + if anno_file == None or (not os.path.isfile(anno_file)): + logger.warning( + "anno_file '{}' is None or not set or not exist, " + "please recheck TrainDataset/EvalDataset/TestDataset.anno_path, " + "otherwise the default categories will be used by metric_type.". + format(anno_file)) + if metric_type.lower() == 'coco' or metric_type.lower( ) == 'rbox' or metric_type.lower() == 'snipercoco': if anno_file and os.path.isfile(anno_file): - # lazy import pycocotools here - from pycocotools.coco import COCO - - coco = COCO(anno_file) - cats = coco.loadCats(coco.getCatIds()) - - clsid2catid = {i: cat['id'] for i, cat in enumerate(cats)} - catid2name = {cat['id']: cat['name'] for cat in cats} + if anno_file.endswith('json'): + # lazy import pycocotools here + from pycocotools.coco import COCO + coco = COCO(anno_file) + cats = coco.loadCats(coco.getCatIds()) + + clsid2catid = {i: cat['id'] for i, cat in enumerate(cats)} + catid2name = {cat['id']: cat['name'] for cat in cats} + + elif anno_file.endswith('txt'): + cats = [] + with open(anno_file) as f: + for line in f.readlines(): + cats.append(line.strip()) + if cats[0] == 'background': cats = cats[1:] + + clsid2catid = {i: i for i in range(len(cats))} + catid2name = {i: name for i, name in enumerate(cats)} + + else: + raise ValueError("anno_file {} should be json or txt.".format( + anno_file)) return clsid2catid, catid2name # anno file not exist, load default categories of COCO17 else: if metric_type.lower() == 'rbox': + logger.warning( + "metric_type: {}, load default categories of DOTA.".format( + metric_type)) return _dota_category() - + logger.warning("metric_type: {}, load default categories of COCO.". + format(metric_type)) return _coco17_category() elif metric_type.lower() == 'voc': @@ -77,6 +102,8 @@ def get_categories(metric_type, anno_file=None, arch=None): # anno file not exist, load default categories of # VOC all 20 categories else: + logger.warning("metric_type: {}, load default categories of VOC.". 
+ format(metric_type)) return _vocall_category() elif metric_type.lower() == 'oid': @@ -104,6 +131,9 @@ def get_categories(metric_type, anno_file=None, arch=None): return clsid2catid, catid2name # anno file not exist, load default category 'pedestrian'. else: + logger.warning( + "metric_type: {}, load default categories of pedestrian MOT.". + format(metric_type)) return _mot_category(category='pedestrian') elif metric_type.lower() in ['kitti', 'bdd100kmot']: @@ -122,6 +152,9 @@ def get_categories(metric_type, anno_file=None, arch=None): return clsid2catid, catid2name # anno file not exist, load default categories of visdrone all 10 categories else: + logger.warning( + "metric_type: {}, load default categories of VisDrone.".format( + metric_type)) return _visdrone_category() else: diff --git a/ppdet/data/source/coco.py b/ppdet/data/source/coco.py index 5c401de14398f72084d8acbd56c1a52325edbe54..1f7c9b7bb528ac26ee405947bdafd55909fd1d84 100644 --- a/ppdet/data/source/coco.py +++ b/ppdet/data/source/coco.py @@ -39,6 +39,7 @@ class COCODataSet(DetDataset): empty_ratio (float): the ratio of empty record number to total record's, if empty_ratio is out of [0. ,1.), do not sample the records and use all the empty entries. 1. as default + repeat (int): repeat times for dataset, used in benchmarking. """ def __init__(self, @@ -49,9 +50,15 @@ class COCODataSet(DetDataset): sample_num=-1, load_crowd=False, allow_empty=False, - empty_ratio=1.): - super(COCODataSet, self).__init__(dataset_dir, image_dir, anno_path, - data_fields, sample_num) + empty_ratio=1., + repeat=1): + super(COCODataSet, self).__init__( + dataset_dir, + image_dir, + anno_path, + data_fields, + sample_num, + repeat=repeat) self.load_image_only = False self.load_semantic = False self.load_crowd = load_crowd diff --git a/ppdet/data/source/dataset.py b/ppdet/data/source/dataset.py index 1bef548e696764964608ade67b373a1c19c84a96..d735cfc4a2ac2b709e74cb797a61832d70bd9a51 100644 --- a/ppdet/data/source/dataset.py +++ b/ppdet/data/source/dataset.py @@ -5,7 +5,7 @@ # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 -# +# # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. @@ -23,6 +23,7 @@ from paddle.io import Dataset from ppdet.core.workspace import register, serializable from ppdet.utils.download import get_dataset_path import copy +from ppdet.data import source @serializable @@ -37,6 +38,7 @@ class DetDataset(Dataset): data_fields (list): key name of data dictionary, at least have 'image'. sample_num (int): number of samples to load, -1 means all. use_default_label (bool): whether to load default label list. + repeat (int): repeat times for dataset, used in benchmarking.
""" def __init__(self, @@ -46,6 +48,7 @@ class DetDataset(Dataset): data_fields=['image'], sample_num=-1, use_default_label=None, + repeat=1, **kwargs): super(DetDataset, self).__init__() self.dataset_dir = dataset_dir if dataset_dir is not None else '' @@ -54,28 +57,32 @@ class DetDataset(Dataset): self.data_fields = data_fields self.sample_num = sample_num self.use_default_label = use_default_label + self.repeat = repeat self._epoch = 0 self._curr_iter = 0 def __len__(self, ): - return len(self.roidbs) + return len(self.roidbs) * self.repeat + + def __call__(self, *args, **kwargs): + return self def __getitem__(self, idx): + n = len(self.roidbs) + if self.repeat > 1: + idx %= n # data batch roidb = copy.deepcopy(self.roidbs[idx]) if self.mixup_epoch == 0 or self._epoch < self.mixup_epoch: - n = len(self.roidbs) idx = np.random.randint(n) roidb = [roidb, copy.deepcopy(self.roidbs[idx])] elif self.cutmix_epoch == 0 or self._epoch < self.cutmix_epoch: - n = len(self.roidbs) idx = np.random.randint(n) roidb = [roidb, copy.deepcopy(self.roidbs[idx])] elif self.mosaic_epoch == 0 or self._epoch < self.mosaic_epoch: - n = len(self.roidbs) roidb = [roidb, ] + [ copy.deepcopy(self.roidbs[np.random.randint(n)]) - for _ in range(3) + for _ in range(4) ] if isinstance(roidb, Sequence): for r in roidb: @@ -149,12 +156,15 @@ class ImageFolder(DetDataset): self.sample_num = sample_num def check_or_download_dataset(self): + return + + def get_anno(self): + if self.anno_path is None: + return if self.dataset_dir: - # NOTE: ImageFolder is only used for prediction, in - # infer mode, image_dir is set by set_images - # so we only check anno_path here - self.dataset_dir = get_dataset_path(self.dataset_dir, - self.anno_path, None) + return os.path.join(self.dataset_dir, self.anno_path) + else: + return self.anno_path def parse_dataset(self, ): if not self.roidbs: @@ -195,3 +205,44 @@ class ImageFolder(DetDataset): def set_images(self, images): self.image_dir = images self.roidbs = self._load_images() + + def get_label_list(self): + # Only VOC dataset needs label list in ImageFold + return self.anno_path + + +@register +class CommonDataset(object): + def __init__(self, **dataset_args): + super(CommonDataset, self).__init__() + dataset_args = copy.deepcopy(dataset_args) + type = dataset_args.pop("name") + self.dataset = getattr(source, type)(**dataset_args) + + def __call__(self): + return self.dataset + + +@register +class TrainDataset(CommonDataset): + pass + + +@register +class EvalMOTDataset(CommonDataset): + pass + + +@register +class TestMOTDataset(CommonDataset): + pass + + +@register +class EvalDataset(CommonDataset): + pass + + +@register +class TestDataset(CommonDataset): + pass diff --git a/ppdet/data/source/mot.py b/ppdet/data/source/mot.py index 1baadf570d13afe6f9c648fdff755ac314d1aa35..90a8a1fe88d70e1627623c1cc721f2c6eb9781e4 100644 --- a/ppdet/data/source/mot.py +++ b/ppdet/data/source/mot.py @@ -39,6 +39,7 @@ class MOTDataSet(DetDataset): image_lists (str|list): mot data image lists, muiti-source mot dataset. data_fields (list): key name of data dictionary, at least have 'image'. sample_num (int): number of samples to load, -1 means all. + repeat (int): repeat times for dataset, use in benchmark. 
Notes: MOT datasets root directory following this: @@ -77,11 +78,13 @@ class MOTDataSet(DetDataset): dataset_dir=None, image_lists=[], data_fields=['image'], - sample_num=-1): + sample_num=-1, + repeat=1): super(MOTDataSet, self).__init__( dataset_dir=dataset_dir, data_fields=data_fields, - sample_num=sample_num) + sample_num=sample_num, + repeat=repeat) self.dataset_dir = dataset_dir self.image_lists = image_lists if isinstance(self.image_lists, str): @@ -95,7 +98,8 @@ class MOTDataSet(DetDataset): # only used to get categories and metric # only check first data, but the label_list of all data should be same. first_mot_data = self.image_lists[0].split('.')[0] - anno_file = os.path.join(self.dataset_dir, first_mot_data, 'label_list.txt') + anno_file = os.path.join(self.dataset_dir, first_mot_data, + 'label_list.txt') return anno_file def parse_dataset(self): @@ -276,7 +280,8 @@ class MCMOTDataSet(DetDataset): # only used to get categories and metric # only check first data, but the label_list of all data should be same. first_mot_data = self.image_lists[0].split('.')[0] - anno_file = os.path.join(self.dataset_dir, first_mot_data, 'label_list.txt') + anno_file = os.path.join(self.dataset_dir, first_mot_data, + 'label_list.txt') return anno_file def parse_dataset(self): @@ -472,7 +477,7 @@ class MOTImageFolder(DetDataset): image_dir=None, sample_num=-1, keep_ori_im=False, - anno_path=None, + anno_path=None, **kwargs): super(MOTImageFolder, self).__init__( dataset_dir, image_dir, sample_num=sample_num) @@ -576,6 +581,7 @@ class MOTImageFolder(DetDataset): def get_anno(self): return self.anno_path + def _is_valid_video(f, extensions=('.mp4', '.avi', '.mov', '.rmvb', 'flv')): return f.lower().endswith(extensions) diff --git a/ppdet/data/source/voc.py b/ppdet/data/source/voc.py index 1c2a7ef98ccbac760430befc375a79cdebc51a7c..2f103588537c5499ef83133fe3f8d4ba7303e685 100644 --- a/ppdet/data/source/voc.py +++ b/ppdet/data/source/voc.py @@ -46,6 +46,7 @@ class VOCDataSet(DetDataset): empty_ratio (float): the ratio of empty record number to total record's, if empty_ratio is out of [0. ,1.), do not sample the records and use all the empty entries. 1. as default + repeat (int): repeat times for dataset, used in benchmarking. """ def __init__(self, @@ -56,13 +57,15 @@ class VOCDataSet(DetDataset): sample_num=-1, label_list=None, allow_empty=False, - empty_ratio=1.): + empty_ratio=1., + repeat=1): super(VOCDataSet, self).__init__( dataset_dir=dataset_dir, image_dir=image_dir, anno_path=anno_path, data_fields=data_fields, - sample_num=sample_num) + sample_num=sample_num, + repeat=repeat) self.label_list = label_list self.allow_empty = allow_empty self.empty_ratio = empty_ratio diff --git a/ppdet/data/transform/mot_operators.py b/ppdet/data/transform/mot_operators.py index ef7d7be4514bf015b74852d2978e5c68ef67753d..e533ea3dc186a1b5cae4ee221920839848f387b6 100644 --- a/ppdet/data/transform/mot_operators.py +++ b/ppdet/data/transform/mot_operators.py @@ -529,7 +529,7 @@ class Gt2FairMOTTarget(Gt2TTFTarget): Generate FairMOT targets by ground truth data. Difference between Gt2FairMOTTarget and Gt2TTFTarget are: 1. the gaussian kernal radius to generate a heatmap. - 2. the targets needed during traing. + 2. the targets needed during training. Args: num_classes(int): the number of classes.
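The dataset hunks above thread the new `repeat` argument from the config classes down to `DetDataset`: `__len__` reports `len(self.roidbs) * self.repeat`, and `__getitem__` folds the inflated index back onto the real records with `idx %= n`, so a benchmark run can iterate a small dataset many times per epoch without duplicating annotations. A minimal standalone sketch of the pattern (illustrative names only, not PaddleDetection API):

```python
import copy

class RepeatedRecords:
    """Report an inflated length; wrap indices back onto the real records."""

    def __init__(self, records, repeat=1):
        self.records = records
        self.repeat = repeat

    def __len__(self):
        # one "epoch" now visits every record `repeat` times
        return len(self.records) * self.repeat

    def __getitem__(self, idx):
        n = len(self.records)
        if self.repeat > 1:
            idx %= n  # indices n .. n*repeat-1 map back onto 0 .. n-1
        return copy.deepcopy(self.records[idx])

records = RepeatedRecords(list(range(10)), repeat=3)
assert len(records) == 30 and records[25] == 5
```

Note also that the mosaic branch of `__getitem__` now draws four extra records instead of three, so a mosaic-style transform receives five samples in total: four for the mosaic grid and one reserved for mixup.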
diff --git a/ppdet/data/transform/operators.py b/ppdet/data/transform/operators.py index 7eae8db0ba67a87a1337bd4a289d505aee365aaa..09a87b128fccd214b9a5d86f5abb063e601a82d4 100644 --- a/ppdet/data/transform/operators.py +++ b/ppdet/data/transform/operators.py @@ -824,7 +824,7 @@ class Resize(BaseOperator): im_scale_x = resize_w / im_shape[1] im = self.apply_image(sample['image'], [im_scale_x, im_scale_y]) - sample['image'] = im + sample['image'] = im.astype(np.float32) sample['im_shape'] = np.asarray([resize_h, resize_w], dtype=np.float32) if 'scale_factor' in sample: scale_factor = sample['scale_factor'] @@ -1054,7 +1054,7 @@ class CropWithSampling(BaseOperator): [max sample, max trial, min scale, max scale, min aspect ratio, max aspect ratio, min overlap, max overlap] - avoid_no_bbox (bool): whether to to avoid the + avoid_no_bbox (bool): whether to avoid the situation where the box does not appear. """ super(CropWithSampling, self).__init__() @@ -1145,7 +1145,7 @@ class CropWithDataAchorSampling(BaseOperator): das_anchor_scales (list[float]): a list of anchor scales in data anchor smapling. min_size (float): minimum size of sampled bbox. - avoid_no_bbox (bool): whether to to avoid the + avoid_no_bbox (bool): whether to avoid the situation where the box does not appear. """ super(CropWithDataAchorSampling, self).__init__() @@ -2034,13 +2034,14 @@ class Pad(BaseOperator): if self.size: h, w = self.size assert ( - im_h < h and im_w < w + im_h <= h and im_w <= w ), '(h, w) of target size should be greater than (im_h, im_w)' else: h = int(np.ceil(im_h / self.size_divisor) * self.size_divisor) w = int(np.ceil(im_w / self.size_divisor) * self.size_divisor) if h == im_h and w == im_w: + sample['image'] = im.astype(np.float32) return sample if self.pad_mode == -1: @@ -2139,16 +2140,29 @@ class Rbox2Poly(BaseOperator): @register_op class AugmentHSV(BaseOperator): - def __init__(self, fraction=0.50, is_bgr=True): - """ - Augment the SV channel of image data. - Args: - fraction (float): the fraction for augment. Default: 0.5. - is_bgr (bool): whether the image is BGR mode. Default: True. - """ + """ + Augment the SV channel of image data. + Args: + fraction (float): the fraction for augment. Default: 0.5. + is_bgr (bool): whether the image is BGR mode. Default: True. 
+ hgain (float): H channel gains + sgain (float): S channel gains + vgain (float): V channel gains + """ + + def __init__(self, + fraction=0.50, + is_bgr=True, + hgain=None, + sgain=None, + vgain=None): super(AugmentHSV, self).__init__() self.fraction = fraction self.is_bgr = is_bgr + self.hgain = hgain + self.sgain = sgain + self.vgain = vgain + self.use_hsvgain = False if hgain is None else True def apply(self, sample, context=None): img = sample['image'] @@ -2156,27 +2170,39 @@ class AugmentHSV(BaseOperator): img_hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV) else: img_hsv = cv2.cvtColor(img, cv2.COLOR_RGB2HSV) - S = img_hsv[:, :, 1].astype(np.float32) - V = img_hsv[:, :, 2].astype(np.float32) - a = (random.random() * 2 - 1) * self.fraction + 1 - S *= a - if a > 1: - np.clip(S, a_min=0, a_max=255, out=S) + if self.use_hsvgain: + hsv_augs = np.random.uniform( + -1, 1, 3) * [self.hgain, self.sgain, self.vgain] + # random selection of h, s, v + hsv_augs *= np.random.randint(0, 2, 3) + img_hsv[..., 0] = (img_hsv[..., 0] + hsv_augs[0]) % 180 + img_hsv[..., 1] = np.clip(img_hsv[..., 1] + hsv_augs[1], 0, 255) + img_hsv[..., 2] = np.clip(img_hsv[..., 2] + hsv_augs[2], 0, 255) - a = (random.random() * 2 - 1) * self.fraction + 1 - V *= a - if a > 1: - np.clip(V, a_min=0, a_max=255, out=V) + else: + S = img_hsv[:, :, 1].astype(np.float32) + V = img_hsv[:, :, 2].astype(np.float32) + + a = (random.random() * 2 - 1) * self.fraction + 1 + S *= a + if a > 1: + np.clip(S, a_min=0, a_max=255, out=S) + + a = (random.random() * 2 - 1) * self.fraction + 1 + V *= a + if a > 1: + np.clip(V, a_min=0, a_max=255, out=V) + + img_hsv[:, :, 1] = S.astype(np.uint8) + img_hsv[:, :, 2] = V.astype(np.uint8) - img_hsv[:, :, 1] = S.astype(np.uint8) - img_hsv[:, :, 2] = V.astype(np.uint8) if self.is_bgr: cv2.cvtColor(img_hsv, cv2.COLOR_HSV2BGR, dst=img) else: cv2.cvtColor(img_hsv, cv2.COLOR_HSV2RGB, dst=img) - sample['image'] = img + sample['image'] = img.astype(np.float32) return sample @@ -3018,3 +3044,409 @@ class CenterRandColor(BaseOperator): img = func(img, img_gray) sample['image'] = img return sample + + +@register_op +class Mosaic(BaseOperator): + """ Mosaic operator for image and gt_bboxes + The code is based on https://github.com/Megvii-BaseDetection/YOLOX/blob/main/yolox/data/datasets/mosaicdetection.py + + 1. get mosaic coords + 2. clip bbox and get mosaic_labels + 3. random_affine augment + 4. 
Mixup augment as copypaste (optional), not used in tiny/nano + + Args: + prob (float): probability of using Mosaic, 1.0 as default + input_dim (list[int]): input shape + degrees (list[2]): the rotate range to apply, transform range is [min, max] + translate (list[2]): the translate range to apply, transform range is [min, max] + scale (list[2]): the scale range to apply, transform range is [min, max] + shear (list[2]): the shear range to apply, transform range is [min, max] + enable_mixup (bool): whether to enable Mixup or not + mixup_prob (float): probability of using Mixup, 1.0 as default + mixup_scale (list[float]): scale range of Mixup + remove_outside_box (bool): whether to remove outside boxes, False as + default in COCO dataset, True in MOT dataset + """ + + def __init__(self, + prob=1.0, + input_dim=[640, 640], + degrees=[-10, 10], + translate=[-0.1, 0.1], + scale=[0.1, 2], + shear=[-2, 2], + enable_mixup=True, + mixup_prob=1.0, + mixup_scale=[0.5, 1.5], + remove_outside_box=False): + super(Mosaic, self).__init__() + self.prob = prob + if isinstance(input_dim, Integral): + input_dim = [input_dim, input_dim] + self.input_dim = input_dim + self.degrees = degrees + self.translate = translate + self.scale = scale + self.shear = shear + self.enable_mixup = enable_mixup + self.mixup_prob = mixup_prob + self.mixup_scale = mixup_scale + self.remove_outside_box = remove_outside_box + + def get_mosaic_coords(self, mosaic_idx, xc, yc, w, h, input_h, input_w): + # (x1, y1, x2, y2) means coords in large image, + # small_coords means coords in small image in mosaic aug. + if mosaic_idx == 0: + # top left + x1, y1, x2, y2 = max(xc - w, 0), max(yc - h, 0), xc, yc + small_coords = w - (x2 - x1), h - (y2 - y1), w, h + elif mosaic_idx == 1: + # top right + x1, y1, x2, y2 = xc, max(yc - h, 0), min(xc + w, input_w * 2), yc + small_coords = 0, h - (y2 - y1), min(w, x2 - x1), h + elif mosaic_idx == 2: + # bottom left + x1, y1, x2, y2 = max(xc - w, 0), yc, xc, min(input_h * 2, yc + h) + small_coords = w - (x2 - x1), 0, w, min(y2 - y1, h) + elif mosaic_idx == 3: + # bottom right + x1, y1, x2, y2 = xc, yc, min(xc + w, input_w * 2), min(input_h * 2, + yc + h) + small_coords = 0, 0, min(w, x2 - x1), min(y2 - y1, h) + + return (x1, y1, x2, y2), small_coords + + def random_affine_augment(self, + img, + labels=[], + input_dim=[640, 640], + degrees=[-10, 10], + scales=[0.1, 2], + shears=[-2, 2], + translates=[-0.1, 0.1]): + # random rotation and scale + degree = random.uniform(degrees[0], degrees[1]) + scale = random.uniform(scales[0], scales[1]) + assert scale > 0, "Argument scale should be positive."
+ R = cv2.getRotationMatrix2D(angle=degree, center=(0, 0), scale=scale) + M = np.ones([2, 3]) + + # random shear + shear = random.uniform(shears[0], shears[1]) + shear_x = math.tan(shear * math.pi / 180) + shear_y = math.tan(shear * math.pi / 180) + M[0] = R[0] + shear_y * R[1] + M[1] = R[1] + shear_x * R[0] + + # random translation + translate = random.uniform(translates[0], translates[1]) + translation_x = translate * input_dim[0] + translation_y = translate * input_dim[1] + M[0, 2] = translation_x + M[1, 2] = translation_y + + # warpAffine + img = cv2.warpAffine( + img, M, dsize=tuple(input_dim), borderValue=(114, 114, 114)) + + num_gts = len(labels) + if num_gts > 0: + # warp corner points + corner_points = np.ones((4 * num_gts, 3)) + corner_points[:, :2] = labels[:, [0, 1, 2, 3, 0, 3, 2, 1]].reshape( + 4 * num_gts, 2) # x1y1, x2y2, x1y2, x2y1 + # apply affine transform + corner_points = corner_points @M.T + corner_points = corner_points.reshape(num_gts, 8) + + # create new boxes + corner_xs = corner_points[:, 0::2] + corner_ys = corner_points[:, 1::2] + new_bboxes = np.concatenate((corner_xs.min(1), corner_ys.min(1), + corner_xs.max(1), corner_ys.max(1))) + new_bboxes = new_bboxes.reshape(4, num_gts).T + + # clip boxes + new_bboxes[:, 0::2] = np.clip(new_bboxes[:, 0::2], 0, input_dim[0]) + new_bboxes[:, 1::2] = np.clip(new_bboxes[:, 1::2], 0, input_dim[1]) + labels[:, :4] = new_bboxes + + return img, labels + + def __call__(self, sample, context=None): + if not isinstance(sample, Sequence): + return sample + + assert len( + sample) == 5, "Mosaic needs 5 samples, 4 for mosaic and 1 for mixup." + if np.random.uniform(0., 1.) > self.prob: + return sample[0] + + mosaic_gt_bbox, mosaic_gt_class, mosaic_is_crowd, mosaic_difficult = [], [], [], [] + input_h, input_w = self.input_dim + yc = int(random.uniform(0.5 * input_h, 1.5 * input_h)) + xc = int(random.uniform(0.5 * input_w, 1.5 * input_w)) + mosaic_img = np.full((input_h * 2, input_w * 2, 3), 114, dtype=np.uint8) + + # 1. get mosaic coords + for mosaic_idx, sp in enumerate(sample[:4]): + img = sp['image'] + gt_bbox = sp['gt_bbox'] + h0, w0 = img.shape[:2] + scale = min(1. * input_h / h0, 1. * input_w / w0) + img = cv2.resize( + img, (int(w0 * scale), int(h0 * scale)), + interpolation=cv2.INTER_LINEAR) + (h, w, c) = img.shape[:3] + + # suffix l means large image, while s means small image in mosaic aug. + (l_x1, l_y1, l_x2, l_y2), ( + s_x1, s_y1, s_x2, s_y2) = self.get_mosaic_coords( + mosaic_idx, xc, yc, w, h, input_h, input_w) + + mosaic_img[l_y1:l_y2, l_x1:l_x2] = img[s_y1:s_y2, s_x1:s_x2] + padw, padh = l_x1 - s_x1, l_y1 - s_y1 + + # Normalized xywh to pixel xyxy format + _gt_bbox = gt_bbox.copy() + if len(gt_bbox) > 0: + _gt_bbox[:, 0] = scale * gt_bbox[:, 0] + padw + _gt_bbox[:, 1] = scale * gt_bbox[:, 1] + padh + _gt_bbox[:, 2] = scale * gt_bbox[:, 2] + padw + _gt_bbox[:, 3] = scale * gt_bbox[:, 3] + padh + + mosaic_gt_bbox.append(_gt_bbox) + mosaic_gt_class.append(sp['gt_class']) + if 'is_crowd' in sp: + mosaic_is_crowd.append(sp['is_crowd']) + if 'difficult' in sp: + mosaic_difficult.append(sp['difficult']) + + # 2. 
clip bbox and get mosaic_labels([gt_bbox, gt_class, is_crowd]) + if len(mosaic_gt_bbox): + mosaic_gt_bbox = np.concatenate(mosaic_gt_bbox, 0) + mosaic_gt_class = np.concatenate(mosaic_gt_class, 0) + if mosaic_is_crowd: + mosaic_is_crowd = np.concatenate(mosaic_is_crowd, 0) + mosaic_labels = np.concatenate([ + mosaic_gt_bbox, + mosaic_gt_class.astype(mosaic_gt_bbox.dtype), + mosaic_is_crowd.astype(mosaic_gt_bbox.dtype) + ], 1) + elif mosaic_difficult: + mosaic_difficult = np.concatenate(mosaic_difficult, 0) + mosaic_labels = np.concatenate([ + mosaic_gt_bbox, + mosaic_gt_class.astype(mosaic_gt_bbox.dtype), + mosaic_difficult.astype(mosaic_gt_bbox.dtype) + ], 1) + else: + mosaic_labels = np.concatenate([ + mosaic_gt_bbox, mosaic_gt_class.astype(mosaic_gt_bbox.dtype) + ], 1) + if self.remove_outside_box: + # for MOT dataset + flag1 = mosaic_gt_bbox[:, 0] < 2 * input_w + flag2 = mosaic_gt_bbox[:, 2] > 0 + flag3 = mosaic_gt_bbox[:, 1] < 2 * input_h + flag4 = mosaic_gt_bbox[:, 3] > 0 + flag_all = flag1 * flag2 * flag3 * flag4 + mosaic_labels = mosaic_labels[flag_all] + else: + mosaic_labels[:, 0] = np.clip(mosaic_labels[:, 0], 0, + 2 * input_w) + mosaic_labels[:, 1] = np.clip(mosaic_labels[:, 1], 0, + 2 * input_h) + mosaic_labels[:, 2] = np.clip(mosaic_labels[:, 2], 0, + 2 * input_w) + mosaic_labels[:, 3] = np.clip(mosaic_labels[:, 3], 0, + 2 * input_h) + else: + mosaic_labels = np.zeros((1, 6)) + + # 3. random_affine augment + mosaic_img, mosaic_labels = self.random_affine_augment( + mosaic_img, + mosaic_labels, + input_dim=self.input_dim, + degrees=self.degrees, + translates=self.translate, + scales=self.scale, + shears=self.shear) + + # 4. Mixup augment as copypaste, https://arxiv.org/abs/2012.07177 + # optional, not used (enable_mixup=False) in tiny/nano + if (self.enable_mixup and not len(mosaic_labels) == 0 and + random.random() < self.mixup_prob): + sample_mixup = sample[4] + mixup_img = sample_mixup['image'] + if 'is_crowd' in sample_mixup: + cp_labels = np.concatenate([ + sample_mixup['gt_bbox'], + sample_mixup['gt_class'].astype(mosaic_labels.dtype), + sample_mixup['is_crowd'].astype(mosaic_labels.dtype) + ], 1) + elif 'difficult' in sample_mixup: + cp_labels = np.concatenate([ + sample_mixup['gt_bbox'], + sample_mixup['gt_class'].astype(mosaic_labels.dtype), + sample_mixup['difficult'].astype(mosaic_labels.dtype) + ], 1) + else: + cp_labels = np.concatenate([ + sample_mixup['gt_bbox'], + sample_mixup['gt_class'].astype(mosaic_labels.dtype) + ], 1) + mosaic_img, mosaic_labels = self.mixup_augment( + mosaic_img, mosaic_labels, self.input_dim, cp_labels, mixup_img) + + sample0 = sample[0] + sample0['image'] = mosaic_img.astype(np.uint8) # cannot be float32 + sample0['h'] = float(mosaic_img.shape[0]) + sample0['w'] = float(mosaic_img.shape[1]) + sample0['im_shape'][0] = sample0['h'] + sample0['im_shape'][1] = sample0['w'] + sample0['gt_bbox'] = mosaic_labels[:, :4].astype(np.float32) + sample0['gt_class'] = mosaic_labels[:, 4:5].astype(np.float32) + if 'is_crowd' in sample[0]: + sample0['is_crowd'] = mosaic_labels[:, 5:6].astype(np.float32) + if 'difficult' in sample[0]: + sample0['difficult'] = mosaic_labels[:, 5:6].astype(np.float32) + return sample0 + + def mixup_augment(self, origin_img, origin_labels, input_dim, cp_labels, + img): + jit_factor = random.uniform(*self.mixup_scale) + FLIP = random.uniform(0, 1) > 0.5 + if len(img.shape) == 3: + cp_img = np.ones( + (input_dim[0], input_dim[1], 3), dtype=np.uint8) * 114 + else: + cp_img = np.ones(input_dim, dtype=np.uint8) * 114 + + 
cp_scale_ratio = min(input_dim[0] / img.shape[0], + input_dim[1] / img.shape[1]) + resized_img = cv2.resize( + img, (int(img.shape[1] * cp_scale_ratio), + int(img.shape[0] * cp_scale_ratio)), + interpolation=cv2.INTER_LINEAR) + + cp_img[:int(img.shape[0] * cp_scale_ratio), :int(img.shape[ + 1] * cp_scale_ratio)] = resized_img + + cp_img = cv2.resize(cp_img, (int(cp_img.shape[1] * jit_factor), + int(cp_img.shape[0] * jit_factor))) + cp_scale_ratio *= jit_factor + + if FLIP: + cp_img = cp_img[:, ::-1, :] + + origin_h, origin_w = cp_img.shape[:2] + target_h, target_w = origin_img.shape[:2] + padded_img = np.zeros( + (max(origin_h, target_h), max(origin_w, target_w), 3), + dtype=np.uint8) + padded_img[:origin_h, :origin_w] = cp_img + + x_offset, y_offset = 0, 0 + if padded_img.shape[0] > target_h: + y_offset = random.randint(0, padded_img.shape[0] - target_h - 1) + if padded_img.shape[1] > target_w: + x_offset = random.randint(0, padded_img.shape[1] - target_w - 1) + padded_cropped_img = padded_img[y_offset:y_offset + target_h, x_offset: + x_offset + target_w] + + # adjust boxes + cp_bboxes_origin_np = cp_labels[:, :4].copy() + cp_bboxes_origin_np[:, 0::2] = np.clip(cp_bboxes_origin_np[:, 0::2] * + cp_scale_ratio, 0, origin_w) + cp_bboxes_origin_np[:, 1::2] = np.clip(cp_bboxes_origin_np[:, 1::2] * + cp_scale_ratio, 0, origin_h) + + if FLIP: + cp_bboxes_origin_np[:, 0::2] = ( + origin_w - cp_bboxes_origin_np[:, 0::2][:, ::-1]) + cp_bboxes_transformed_np = cp_bboxes_origin_np.copy() + if self.remove_outside_box: + # for MOT dataset + cp_bboxes_transformed_np[:, 0::2] -= x_offset + cp_bboxes_transformed_np[:, 1::2] -= y_offset + else: + cp_bboxes_transformed_np[:, 0::2] = np.clip( + cp_bboxes_transformed_np[:, 0::2] - x_offset, 0, target_w) + cp_bboxes_transformed_np[:, 1::2] = np.clip( + cp_bboxes_transformed_np[:, 1::2] - y_offset, 0, target_h) + + cls_labels = cp_labels[:, 4:5].copy() + box_labels = cp_bboxes_transformed_np + if cp_labels.shape[-1] == 6: + crd_labels = cp_labels[:, 5:6].copy() + labels = np.hstack((box_labels, cls_labels, crd_labels)) + else: + labels = np.hstack((box_labels, cls_labels)) + if self.remove_outside_box: + labels = labels[labels[:, 0] < target_w] + labels = labels[labels[:, 2] > 0] + labels = labels[labels[:, 1] < target_h] + labels = labels[labels[:, 3] > 0] + + origin_labels = np.vstack((origin_labels, labels)) + origin_img = origin_img.astype(np.float32) + origin_img = 0.5 * origin_img + 0.5 * padded_cropped_img.astype( + np.float32) + + return origin_img.astype(np.uint8), origin_labels + + +@register_op +class PadResize(BaseOperator): + """ PadResize for image and gt_bbox + + Args: + target_size (list[int]): input shape + fill_value (float): pixel value of padded image + """ + + def __init__(self, target_size, fill_value=114): + super(PadResize, self).__init__() + if isinstance(target_size, Integral): + target_size = [target_size, target_size] + self.target_size = target_size + self.fill_value = fill_value + + def _resize(self, img, bboxes, labels): + ratio = min(self.target_size[0] / img.shape[0], + self.target_size[1] / img.shape[1]) + w, h = int(img.shape[1] * ratio), int(img.shape[0] * ratio) + resized_img = cv2.resize(img, (w, h), interpolation=cv2.INTER_LINEAR) + + if len(bboxes) > 0: + bboxes *= ratio + mask = np.minimum(bboxes[:, 2] - bboxes[:, 0], + bboxes[:, 3] - bboxes[:, 1]) > 1 + bboxes = bboxes[mask] + labels = labels[mask] + return resized_img, bboxes, labels + + def _pad(self, img): + h, w, _ = img.shape + if h == self.target_size[0] and w == 
self.target_size[1]: + return img + padded_img = np.full( + (self.target_size[0], self.target_size[1], 3), + self.fill_value, + dtype=np.uint8) + padded_img[:h, :w] = img + return padded_img + + def apply(self, sample, context=None): + image = sample['image'] + bboxes = sample['gt_bbox'] + labels = sample['gt_class'] + image, bboxes, labels = self._resize(image, bboxes, labels) + sample['image'] = self._pad(image).astype(np.float32) + sample['gt_bbox'] = bboxes + sample['gt_class'] = labels + return sample diff --git a/ppdet/data/utils.py b/ppdet/data/utils.py new file mode 100644 index 0000000000000000000000000000000000000000..02573e61484bc5ef07353dbef124c8afa54ccc64 --- /dev/null +++ b/ppdet/data/utils.py @@ -0,0 +1,72 @@ +# Copyright (c) 2022 PaddlePaddle Authors. All Rights Reserved. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +import paddle +import numbers +import numpy as np + +try: + from collections.abc import Sequence, Mapping +except: + from collections import Sequence, Mapping + + +def default_collate_fn(batch): + """ + Default batch collating function for :code:`paddle.io.DataLoader`, + which gets input data as a list of samples; each element in the list + is the data of one sample, and sample data should be composed of list, + dictionary, string, number or numpy array. This + function parses input data recursively and stacks number, + numpy array and paddle.Tensor data into batch data, e.g. for + the following input data: + [{'image': np.array(shape=[3, 224, 224]), 'label': 1}, + {'image': np.array(shape=[3, 224, 224]), 'label': 3}, + {'image': np.array(shape=[3, 224, 224]), 'label': 4}, + {'image': np.array(shape=[3, 224, 224]), 'label': 5},] + + + This default collate function zips the number and numpy array + fields together and stacks each field into a batch field as follows: + {'image': np.array(shape=[4, 3, 224, 224]), 'label': np.array([1, 3, 4, 5])} + Args: + batch(list of sample data): batch should be a list of sample data. + + Returns: + Batched data: each number, numpy array and paddle.Tensor + in the input data, batched together.
+ """ + sample = batch[0] + if isinstance(sample, np.ndarray): + batch = np.stack(batch, axis=0) + return batch + elif isinstance(sample, numbers.Number): + batch = np.array(batch) + return batch + elif isinstance(sample, (str, bytes)): + return batch + elif isinstance(sample, Mapping): + return { + key: default_collate_fn([d[key] for d in batch]) + for key in sample + } + elif isinstance(sample, Sequence): + sample_fields_num = len(sample) + if not all(len(sample) == sample_fields_num for sample in iter(batch)): + raise RuntimeError( + "fileds number not same among samples in a batch") + return [default_collate_fn(fields) for fields in zip(*batch)] + + raise TypeError("batch data con only contains: tensor, numpy.ndarray, " + "dict, list, number, but got {}".format(type(sample))) diff --git a/ppdet/engine/callbacks.py b/ppdet/engine/callbacks.py index 77ca94602c518411f040fa9e55f6c2ee656d726f..09683d18b4ef16ff7a5221114937133a91974d5a 100644 --- a/ppdet/engine/callbacks.py +++ b/ppdet/engine/callbacks.py @@ -198,7 +198,7 @@ class Checkpointer(Callback): "training iterations being too few or not " \ "loading the correct weights.") return - if map_res[key][0] > self.best_ap: + if map_res[key][0] >= self.best_ap: self.best_ap = map_res[key][0] save_name = 'best_model' weight = self.weight.state_dict() @@ -288,6 +288,151 @@ class VisualDLWriter(Callback): self.vdl_mAP_step) self.vdl_mAP_step += 1 +class WandbCallback(Callback): + def __init__(self, model): + super(WandbCallback, self).__init__(model) + + try: + import wandb + self.wandb = wandb + except Exception as e: + logger.error('wandb not found, please install wandb. ' + 'Use: `pip install wandb`.') + raise e + + self.wandb_params = model.cfg.get('wandb', None) + self.save_dir = os.path.join(self.model.cfg.save_dir, + self.model.cfg.filename) + if self.wandb_params is None: + self.wandb_params = {} + for k, v in model.cfg.items(): + if k.startswith("wandb_"): + self.wandb_params.update({ + k.lstrip("wandb_"): v + }) + + self._run = None + if dist.get_world_size() < 2 or dist.get_rank() == 0: + _ = self.run + self.run.config.update(self.model.cfg) + self.run.define_metric("epoch") + self.run.define_metric("eval/*", step_metric="epoch") + + self.best_ap = 0 + + @property + def run(self): + if self._run is None: + if self.wandb.run is not None: + logger.info("There is an ongoing wandb run which will be used" + "for logging. 
Please use `wandb.finish()` to end that" + "if the behaviour is not intended") + self._run = self.wandb.run + else: + self._run = self.wandb.init(**self.wandb_params) + return self._run + + def save_model(self, + optimizer, + save_dir, + save_name, + last_epoch, + ema_model=None, + ap=None, + tags=None): + if dist.get_world_size() < 2 or dist.get_rank() == 0: + model_path = os.path.join(save_dir, save_name) + metadata = {} + metadata["last_epoch"] = last_epoch + if ap: + metadata["ap"] = ap + if ema_model is None: + ema_artifact = self.wandb.Artifact(name="ema_model-{}".format(self.run.id), type="model", metadata=metadata) + model_artifact = self.wandb.Artifact(name="model-{}".format(self.run.id), type="model", metadata=metadata) + + ema_artifact.add_file(model_path + ".pdema", name="model_ema") + model_artifact.add_file(model_path + ".pdparams", name="model") + + self.run.log_artifact(ema_artifact, aliases=tags) + self.run.log_artfact(model_artifact, aliases=tags) + else: + model_artifact = self.wandb.Artifact(name="model-{}".format(self.run.id), type="model", metadata=metadata) + model_artifact.add_file(model_path + ".pdparams", name="model") + self.run.log_artifact(model_artifact, aliases=tags) + + def on_step_end(self, status): + + mode = status['mode'] + if dist.get_world_size() < 2 or dist.get_rank() == 0: + if mode == 'train': + training_status = status['training_staus'].get() + for k, v in training_status.items(): + training_status[k] = float(v) + metrics = { + "train/" + k: v for k,v in training_status.items() + } + self.run.log(metrics) + + def on_epoch_end(self, status): + mode = status['mode'] + epoch_id = status['epoch_id'] + save_name = None + if dist.get_world_size() < 2 or dist.get_rank() == 0: + if mode == 'train': + end_epoch = self.model.cfg.epoch + if ( + epoch_id + 1 + ) % self.model.cfg.snapshot_epoch == 0 or epoch_id == end_epoch - 1: + save_name = str(epoch_id) if epoch_id != end_epoch - 1 else "model_final" + tags = ["latest", "epoch_{}".format(epoch_id)] + self.save_model( + self.model.optimizer, + self.save_dir, + save_name, + epoch_id + 1, + self.model.use_ema, + tags=tags + ) + if mode == 'eval': + merged_dict = {} + for metric in self.model._metrics: + for key, map_value in metric.get_results().items(): + merged_dict["eval/{}-mAP".format(key)] = map_value[0] + merged_dict["epoch"] = status["epoch_id"] + self.run.log(merged_dict) + + if 'save_best_model' in status and status['save_best_model']: + for metric in self.model._metrics: + map_res = metric.get_results() + if 'bbox' in map_res: + key = 'bbox' + elif 'keypoint' in map_res: + key = 'keypoint' + else: + key = 'mask' + if key not in map_res: + logger.warning("Evaluation results empty, this may be due to " \ + "training iterations being too few or not " \ + "loading the correct weights.") + return + if map_res[key][0] >= self.best_ap: + self.best_ap = map_res[key][0] + save_name = 'best_model' + tags = ["best", "epoch_{}".format(epoch_id)] + + self.save_model( + self.model.optimizer, + self.save_dir, + save_name, + last_epoch=epoch_id + 1, + ema_model=self.model.use_ema, + ap=self.best_ap, + tags=tags + ) + + def on_train_end(self, status): + self.run.finish() + class SniperProposalsGenerator(Callback): def __init__(self, model): diff --git a/ppdet/engine/export_utils.py b/ppdet/engine/export_utils.py index bddb4af07ca91f5b82389fdeba39ba77c1a1344e..6af8b0f4757dca4d9b0e0ba76cffc1ff3308b9de 100644 --- a/ppdet/engine/export_utils.py +++ b/ppdet/engine/export_utils.py @@ -48,6 +48,7 @@ TRT_MIN_SUBGRAPH = { 
'PicoDet': 3, 'CenterNet': 5, 'TOOD': 5, + 'YOLOX': 8, } KEYPOINT_ARCH = ['HigherHRNet', 'TopDownHRNet'] @@ -57,7 +58,9 @@ MOT_ARCH = ['DeepSORT', 'JDE', 'FairMOT', 'ByteTrack'] def _prune_input_spec(input_spec, program, targets): # try to prune static program to figure out pruned input spec # so we perform following operations in static mode + device = paddle.get_device() paddle.enable_static() + paddle.set_device(device) pruned_input_spec = [{}] program = program.clone() program = program._prune(targets=targets) @@ -68,7 +71,7 @@ def _prune_input_spec(input_spec, program, targets): pruned_input_spec[0][name] = spec except Exception: pass - paddle.disable_static() + paddle.disable_static(place=device) return pruned_input_spec @@ -147,6 +150,12 @@ def _dump_infer_config(config, path, image_shape, model): infer_cfg['min_subgraph_size'] = min_subgraph_size arch_state = True break + + if infer_arch == 'YOLOX': + infer_cfg['arch'] = infer_arch + infer_cfg['min_subgraph_size'] = TRT_MIN_SUBGRAPH[infer_arch] + arch_state = True + if not arch_state: logger.error( 'Architecture: {} is not supported for exporting model now.\n'. diff --git a/ppdet/engine/tracker.py b/ppdet/engine/tracker.py index 691a42a8ccb6a12a4054016bbbea33ead7971402..090c72c7eb86562beee9257cf9c32c966af093bc 100644 --- a/ppdet/engine/tracker.py +++ b/ppdet/engine/tracker.py @@ -17,22 +17,21 @@ from __future__ import division from __future__ import print_function import os -import cv2 import glob import re import paddle +import paddle.nn as nn import numpy as np -import os.path as osp +from tqdm import tqdm from collections import defaultdict from ppdet.core.workspace import create from ppdet.utils.checkpoint import load_weight, load_pretrain_weight from ppdet.modeling.mot.utils import Detection, get_crops, scale_coords, clip_box from ppdet.modeling.mot.utils import MOTTimer, load_det_results, write_mot_results, save_vis_results -from ppdet.modeling.mot.tracker import JDETracker, DeepSORTTracker - -from ppdet.metrics import Metric, MOTMetric, KITTIMOTMetric -from ppdet.metrics import MCMOTMetric +from ppdet.modeling.mot.tracker import JDETracker, DeepSORTTracker, OCSORTTracker +from ppdet.modeling.architectures import YOLOX +from ppdet.metrics import Metric, MOTMetric, KITTIMOTMetric, MCMOTMetric import ppdet.utils.stats as stats from .callbacks import Callback, ComposeCallback @@ -62,6 +61,12 @@ class Tracker(object): # build model self.model = create(cfg.architecture) + if isinstance(self.model.detector, YOLOX): + for k, m in self.model.named_sublayers(): + if isinstance(m, nn.BatchNorm2D): + m._epsilon = 1e-3 # for amp(fp16) + m._momentum = 0.97 # 0.03 in pytorch + self.status = {} self.start_epoch = 0 @@ -142,11 +147,8 @@ class Tracker(object): self.model.eval() results = defaultdict(list) # support single class and multi classes - for step_id, data in enumerate(dataloader): + for step_id, data in enumerate(tqdm(dataloader)): self.status['step_id'] = step_id - if frame_id % 40 == 0: - logger.info('Processing frame {} ({:.2f} fps)'.format( - frame_id, 1. / max(1e-5, timer.average_time))) # forward timer.tic() pred_dets, pred_embs = self.model(data) @@ -210,12 +212,8 @@ class Tracker(object): det_file)) tracker = self.model.tracker - for step_id, data in enumerate(dataloader): + for step_id, data in enumerate(tqdm(dataloader)): self.status['step_id'] = step_id - if frame_id % 40 == 0: - logger.info('Processing frame {} ({:.2f} fps)'.format( - frame_id, 1. 
/ max(1e-5, timer.average_time))) - ori_image = data['ori_image'] # [bs, H, W, 3] ori_image_shape = data['ori_image'].shape[1:3] # ori_image_shape: [H, W] @@ -339,8 +337,8 @@ class Tracker(object): results[0].append( (frame_id + 1, online_tlwhs, online_scores, online_ids)) save_vis_results(data, frame_id, online_ids, online_tlwhs, - online_scores, timer.average_time, show_image, - save_dir, self.cfg.num_classes) + online_scores, timer.average_time, show_image, + save_dir, self.cfg.num_classes) elif isinstance(tracker, JDETracker): # trick hyperparams only used for MOTChallenge (MOT17, MOT20) Test-set @@ -366,13 +364,35 @@ class Tracker(object): online_scores[cls_id].append(tscore) # save results results[cls_id].append( - (frame_id + 1, online_tlwhs[cls_id], online_scores[cls_id], - online_ids[cls_id])) + (frame_id + 1, online_tlwhs[cls_id], + online_scores[cls_id], online_ids[cls_id])) timer.toc() save_vis_results(data, frame_id, online_ids, online_tlwhs, - online_scores, timer.average_time, show_image, - save_dir, self.cfg.num_classes) - + online_scores, timer.average_time, show_image, + save_dir, self.cfg.num_classes) + elif isinstance(tracker, OCSORTTracker): + # OC_SORT Tracker + online_targets = tracker.update(pred_dets_old, pred_embs) + online_tlwhs = [] + online_ids = [] + online_scores = [] + for t in online_targets: + tlwh = [t[0], t[1], t[2] - t[0], t[3] - t[1]] + tscore = float(t[4]) + tid = int(t[5]) + if tlwh[2] * tlwh[3] > 0: + online_tlwhs.append(tlwh) + online_ids.append(tid) + online_scores.append(tscore) + timer.toc() + # save results + results[0].append( + (frame_id + 1, online_tlwhs, online_scores, online_ids)) + save_vis_results(data, frame_id, online_ids, online_tlwhs, + online_scores, timer.average_time, show_image, + save_dir, self.cfg.num_classes) + else: + raise ValueError(tracker) frame_id += 1 return results, frame_id, timer.average_time, timer.calls @@ -417,7 +437,7 @@ class Tracker(object): save_dir = os.path.join(output_dir, 'mot_outputs', seq) if save_images or save_videos else None - logger.info('start seq: {}'.format(seq)) + logger.info('Evaluate seq: {}'.format(seq)) self.dataset.set_images(self.get_infer_images(infer_dir)) dataloader = create('EvalMOTReader')(self.dataset, 0) @@ -458,7 +478,6 @@ class Tracker(object): os.system(cmd_str) logger.info('Save video in {}.'.format(output_video_path)) - logger.info('Evaluate seq: {}'.format(seq)) # update metrics for metric in self._metrics: metric.update(data_root, seq, data_type, result_root, @@ -582,6 +601,7 @@ class Tracker(object): write_mot_results(result_filename, results, data_type, self.cfg.num_classes) + def get_trick_hyperparams(video_name, ori_buffer, ori_thresh): if video_name[:3] != 'MOT': # only used for MOTChallenge (MOT17, MOT20) Test-set @@ -610,5 +630,5 @@ def get_trick_hyperparams(video_name, ori_buffer, ori_thresh): track_thresh = 0.3 else: track_thresh = ori_thresh - + return track_buffer, ori_thresh diff --git a/ppdet/engine/trainer.py b/ppdet/engine/trainer.py index fa9167f05b0f8554cd2650a337a51bd31c355b6c..c253b40aa58a32f2b8cbe89d902480a3a537e1e9 100644 --- a/ppdet/engine/trainer.py +++ b/ppdet/engine/trainer.py @@ -20,16 +20,18 @@ import os import sys import copy import time +from tqdm import tqdm import numpy as np import typing from PIL import Image, ImageOps, ImageFile + ImageFile.LOAD_TRUNCATED_IMAGES = True import paddle +import paddle.nn as nn import paddle.distributed as dist from paddle.distributed import fleet -from paddle import amp from paddle.static import InputSpec from 
ppdet.optimizer import ModelEMA @@ -41,11 +43,14 @@ from ppdet.metrics import RBoxMetric, JDEDetMetric, SNIPERCOCOMetric from ppdet.data.source.sniper_coco import SniperCOCODataSet from ppdet.data.source.category import get_categories import ppdet.utils.stats as stats +from ppdet.utils.fuse_utils import fuse_conv_bn from ppdet.utils import profiler -from .callbacks import Callback, ComposeCallback, LogPrinter, Checkpointer, WiferFaceEval, VisualDLWriter, SniperProposalsGenerator +from .callbacks import Callback, ComposeCallback, LogPrinter, Checkpointer, WiferFaceEval, VisualDLWriter, SniperProposalsGenerator, WandbCallback from .export_utils import _dump_infer_config, _prune_input_spec +from paddle.distributed.fleet.utils.hybrid_parallel_util import fused_allreduce_gradients + from ppdet.utils.logger import setup_logger logger = setup_logger('ppdet.engine') @@ -62,12 +67,19 @@ class Trainer(object): self.mode = mode.lower() self.optimizer = None self.is_loaded_weights = False + self.use_amp = self.cfg.get('amp', False) + self.amp_level = self.cfg.get('amp_level', 'O1') + self.custom_white_list = self.cfg.get('custom_white_list', None) + self.custom_black_list = self.cfg.get('custom_black_list', None) # build data loader + capital_mode = self.mode.capitalize() if cfg.architecture in MOT_ARCH and self.mode in ['eval', 'test']: - self.dataset = cfg['{}MOTDataset'.format(self.mode.capitalize())] + self.dataset = self.cfg['{}MOTDataset'.format( + capital_mode)] = create('{}MOTDataset'.format(capital_mode))() else: - self.dataset = cfg['{}Dataset'.format(self.mode.capitalize())] + self.dataset = self.cfg['{}Dataset'.format(capital_mode)] = create( + '{}Dataset'.format(capital_mode))() if cfg.architecture == 'DeepSORT' and self.mode == 'train': logger.error('DeepSORT has no need of training on mot dataset.') @@ -78,7 +90,7 @@ class Trainer(object): self.dataset.set_images(images) if self.mode == 'train': - self.loader = create('{}Reader'.format(self.mode.capitalize()))( + self.loader = create('{}Reader'.format(capital_mode))( self.dataset, cfg.worker_num) if cfg.architecture == 'JDE' and self.mode == 'train': @@ -98,23 +110,26 @@ class Trainer(object): self.model = self.cfg.model self.is_loaded_weights = True + if cfg.architecture == 'YOLOX': + for k, m in self.model.named_sublayers(): + if isinstance(m, nn.BatchNorm2D): + m._epsilon = 1e-3 # for amp(fp16) + m._momentum = 0.97 # 0.03 in pytorch + #normalize params for deploy if 'slim' in cfg and cfg['slim_type'] == 'OFA': self.model.model.load_meanstd(cfg['TestReader'][ 'sample_transforms']) + elif 'slim' in cfg and cfg['slim_type'] == 'Distill': + self.model.student_model.load_meanstd(cfg['TestReader'][ + 'sample_transforms']) + elif 'slim' in cfg and cfg[ + 'slim_type'] == 'DistillPrune' and self.mode == 'train': + self.model.student_model.load_meanstd(cfg['TestReader'][ + 'sample_transforms']) else: self.model.load_meanstd(cfg['TestReader']['sample_transforms']) - self.use_ema = ('use_ema' in cfg and cfg['use_ema']) - if self.use_ema: - ema_decay = self.cfg.get('ema_decay', 0.9998) - cycle_epoch = self.cfg.get('cycle_epoch', -1) - self.ema = ModelEMA( - self.model, - decay=ema_decay, - use_thres_step=True, - cycle_epoch=cycle_epoch) - # EvalDataset build with BatchSampler to evaluate in single device # TODO: multi-device evaluate if self.mode == 'eval': @@ -141,6 +156,21 @@ class Trainer(object): if self.cfg.get('unstructured_prune'): self.pruner = create('UnstructuredPruner')(self.model, steps_per_epoch) + if self.use_amp and self.amp_level 
== 'O2': + self.model, self.optimizer = paddle.amp.decorate( + models=self.model, + optimizers=self.optimizer, + level=self.amp_level) + self.use_ema = ('use_ema' in cfg and cfg['use_ema']) + if self.use_ema: + ema_decay = self.cfg.get('ema_decay', 0.9998) + cycle_epoch = self.cfg.get('cycle_epoch', -1) + ema_decay_type = self.cfg.get('ema_decay_type', 'threshold') + self.ema = ModelEMA( + self.model, + decay=ema_decay, + ema_decay_type=ema_decay_type, + cycle_epoch=cycle_epoch) self._nranks = dist.get_world_size() self._local_rank = dist.get_rank() @@ -164,6 +194,8 @@ class Trainer(object): self._callbacks.append(VisualDLWriter(self)) if self.cfg.get('save_proposals', False): self._callbacks.append(SniperProposalsGenerator(self)) + if self.cfg.get('use_wandb', False) or 'wandb' in self.cfg: + self._callbacks.append(WandbCallback(self)) self._compose_callback = ComposeCallback(self._callbacks) elif self.mode == 'eval': self._callbacks = [LogPrinter(self)] @@ -184,7 +216,7 @@ class Trainer(object): classwise = self.cfg['classwise'] if 'classwise' in self.cfg else False if self.cfg.metric == 'COCO' or self.cfg.metric == "SNIPERCOCO": # TODO: bias should be unified - bias = self.cfg['bias'] if 'bias' in self.cfg else 0 + bias = 1 if self.cfg.get('bias', False) else 0 output_eval = self.cfg['output_eval'] \ if 'output_eval' in self.cfg else None save_prediction_only = self.cfg.get('save_prediction_only', False) @@ -196,13 +228,14 @@ class Trainer(object): # when do validation in train, annotation file should be get from # EvalReader instead of self.dataset(which is TrainReader) - anno_file = self.dataset.get_anno() - dataset = self.dataset if self.mode == 'train' and validate: eval_dataset = self.cfg['EvalDataset'] eval_dataset.check_or_download_dataset() anno_file = eval_dataset.get_anno() dataset = eval_dataset + else: + dataset = self.dataset + anno_file = dataset.get_anno() IouType = self.cfg['IouType'] if 'IouType' in self.cfg else 'bbox' if self.cfg.metric == "COCO": @@ -258,12 +291,18 @@ class Trainer(object): save_prediction_only=save_prediction_only) ] elif self.cfg.metric == 'VOC': + output_eval = self.cfg['output_eval'] \ + if 'output_eval' in self.cfg else None + save_prediction_only = self.cfg.get('save_prediction_only', False) + self._metrics = [ VOCMetric( label_list=self.dataset.get_label_list(), class_num=self.cfg.num_classes, map_type=self.cfg.map_type, - classwise=classwise) + classwise=classwise, + output_eval=output_eval, + save_prediction_only=save_prediction_only) ] elif self.cfg.metric == 'WiderFace': multi_scale = self.cfg.multi_scale_eval if 'multi_scale_eval' in self.cfg else True @@ -353,14 +392,22 @@ class Trainer(object): def train(self, validate=False): assert self.mode == 'train', "Model not in 'train' mode" Init_mark = False + if validate: + self.cfg['EvalDataset'] = self.cfg.EvalDataset = create( + "EvalDataset")() + model = self.model sync_bn = (getattr(self.cfg, 'norm_type', None) == 'sync_bn' and self.cfg.use_gpu and self._nranks > 1) if sync_bn: - self.model = paddle.nn.SyncBatchNorm.convert_sync_batchnorm( - self.model) + model = paddle.nn.SyncBatchNorm.convert_sync_batchnorm(model) - model = self.model + # enable auto mixed precision mode + if self.use_amp: + scaler = paddle.amp.GradScaler( + enable=self.cfg.use_gpu or self.cfg.use_npu, + init_loss_scaling=self.cfg.get('init_loss_scaling', 1024)) + # get distributed model if self.cfg.get('fleet', False): model = fleet.distributed_model(model) self.optimizer = fleet.distributed_optimizer(self.optimizer)
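The trainer hunks above replace the old `paddle.fluid`-era AMP setup with the `paddle.amp` API: a `GradScaler` is created when `amp` is enabled, `paddle.amp.decorate` is applied for the `O2` level, and the forward passes run under `paddle.amp.auto_cast` with configurable `custom_white_list`/`custom_black_list`. A minimal self-contained sketch of that pattern, assuming a generic vision model on GPU (this is not the Trainer code itself):

```python
import paddle

model = paddle.vision.models.resnet18()
optimizer = paddle.optimizer.Momentum(
    learning_rate=0.01, parameters=model.parameters())
# scale the loss so fp16 gradients do not underflow
scaler = paddle.amp.GradScaler(init_loss_scaling=1024)
# level='O2' also casts the parameters themselves to fp16, which is
# why an extra paddle.amp.decorate call is needed for that level
model, optimizer = paddle.amp.decorate(
    models=model, optimizers=optimizer, level='O2')

data = paddle.randn([2, 3, 224, 224])
label = paddle.randint(0, 1000, [2, 1])
with paddle.amp.auto_cast(level='O2'):
    loss = paddle.nn.functional.cross_entropy(model(data), label)

scaled = scaler.scale(loss)
scaled.backward()
scaler.minimize(optimizer, scaled)  # unscales grads; skips the step on inf/nan
optimizer.clear_grad()
```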
@@ -368,12 +415,7 @@ class Trainer(object): find_unused_parameters = self.cfg[ 'find_unused_parameters'] if 'find_unused_parameters' in self.cfg else False model = paddle.DataParallel( - self.model, find_unused_parameters=find_unused_parameters) - - # enabel auto mixed precision mode - if self.cfg.get('amp', False): - scaler = amp.GradScaler( - enable=self.cfg.use_gpu or self.cfg.use_npu, init_loss_scaling=1024) + model, find_unused_parameters=find_unused_parameters) self.status.update({ 'epoch_id': self.start_epoch, @@ -395,6 +437,9 @@ class Trainer(object): self._compose_callback.on_train_begin(self.status) + use_fused_allreduce_gradients = self.cfg[ + 'use_fused_allreduce_gradients'] if 'use_fused_allreduce_gradients' in self.cfg else False + for epoch_id in range(self.start_epoch, self.cfg.epoch): self.status['mode'] = 'train' self.status['epoch_id'] = epoch_id @@ -409,23 +454,56 @@ class Trainer(object): self._compose_callback.on_step_begin(self.status) data['epoch_id'] = epoch_id - if self.cfg.get('amp', False): - with amp.auto_cast(enable=self.cfg.use_gpu): - # model forward - outputs = model(data) - loss = outputs['loss'] - - # model backward - scaled_loss = scaler.scale(loss) - scaled_loss.backward() + if self.use_amp: + if isinstance( + model, paddle. + DataParallel) and use_fused_allreduce_gradients: + with model.no_sync(): + with paddle.amp.auto_cast( + enable=self.cfg.use_gpu, + custom_white_list=self.custom_white_list, + custom_black_list=self.custom_black_list, + level=self.amp_level): + # model forward + outputs = model(data) + loss = outputs['loss'] + # model backward + scaled_loss = scaler.scale(loss) + scaled_loss.backward() + fused_allreduce_gradients( + list(model.parameters()), None) + else: + with paddle.amp.auto_cast( + enable=self.cfg.use_gpu, + custom_white_list=self.custom_white_list, + custom_black_list=self.custom_black_list, + level=self.amp_level): + # model forward + outputs = model(data) + loss = outputs['loss'] + # model backward + scaled_loss = scaler.scale(loss) + scaled_loss.backward() # in dygraph mode, optimizer.minimize is equal to optimizer.step scaler.minimize(self.optimizer, scaled_loss) else: - # model forward - outputs = model(data) - loss = outputs['loss'] - # model backward - loss.backward() + if isinstance( + model, paddle. 
+ DataParallel) and use_fused_allreduce_gradients: + with model.no_sync(): + # model forward + outputs = model(data) + loss = outputs['loss'] + # model backward + loss.backward() + fused_allreduce_gradients( + list(model.parameters()), None) + else: + # model forward + outputs = model(data) + loss = outputs['loss'] + # model backward + loss.backward() self.optimizer.step() curr_lr = self.optimizer.get_lr() self.lr.step() @@ -503,7 +581,15 @@ class Trainer(object): self.status['step_id'] = step_id self._compose_callback.on_step_begin(self.status) # forward - outs = self.model(data) + if self.use_amp: + with paddle.amp.auto_cast( + enable=self.cfg.use_gpu, + custom_white_list=self.custom_white_list, + custom_black_list=self.custom_black_list, + level=self.amp_level): + outs = self.model(data) + else: + outs = self.model(data) # update metrics for metric in self._metrics: @@ -535,10 +621,45 @@ class Trainer(object): images, draw_threshold=0.5, output_dir='output', - save_txt=False): + save_results=False): self.dataset.set_images(images) loader = create('TestReader')(self.dataset, 0) + def setup_metrics_for_loader(): + # mem + metrics = copy.deepcopy(self._metrics) + mode = self.mode + save_prediction_only = self.cfg[ + 'save_prediction_only'] if 'save_prediction_only' in self.cfg else None + output_eval = self.cfg[ + 'output_eval'] if 'output_eval' in self.cfg else None + + # modify + self.mode = '_test' + self.cfg['save_prediction_only'] = True + self.cfg['output_eval'] = output_dir + self._init_metrics() + + # restore + self.mode = mode + self.cfg.pop('save_prediction_only') + if save_prediction_only is not None: + self.cfg['save_prediction_only'] = save_prediction_only + + self.cfg.pop('output_eval') + if output_eval is not None: + self.cfg['output_eval'] = output_eval + + _metrics = copy.deepcopy(self._metrics) + self._metrics = metrics + + return _metrics + + if save_results: + metrics = setup_metrics_for_loader() + else: + metrics = [] + imid2path = self.dataset.get_imid2path() anno_file = self.dataset.get_anno() @@ -552,11 +673,14 @@ class Trainer(object): flops_loader = create('TestReader')(self.dataset, 0) self._flops(flops_loader) results = [] - for step_id, data in enumerate(loader): + for step_id, data in enumerate(tqdm(loader)): self.status['step_id'] = step_id # forward outs = self.model(data) + for _m in metrics: + _m.update(data, outs) + for key in ['im_shape', 'scale_factor', 'im_id']: if isinstance(data, typing.Sequence): outs[key] = data[0][key] @@ -566,11 +690,16 @@ class Trainer(object): if hasattr(value, 'numpy'): outs[key] = value.numpy() results.append(outs) + # sniper if type(self.dataset) == SniperCOCODataSet: results = self.dataset.anno_cropper.aggregate_chips_detections( results) + for _m in metrics: + _m.accumulate() + _m.reset() + for outs in results: batch_res = get_infer_results(outs, clsid2catid) bbox_num = outs['bbox_num'] @@ -602,15 +731,7 @@ class Trainer(object): logger.info("Detection bbox results save in {}".format( save_name)) image.save(save_name, quality=95) - if save_txt: - save_path = os.path.splitext(save_name)[0] + '.txt' - results = {} - results["im_id"] = im_id - if bbox_res: - results["bbox_res"] = bbox_res - if keypoint_res: - results["keypoint_res"] = keypoint_res - save_result(save_path, results, catid2name, draw_threshold) + start = end def _get_save_image_name(self, output_dir, image_path): @@ -623,7 +744,10 @@ class Trainer(object): name, ext = os.path.splitext(image_name) return os.path.join(output_dir, "{}".format(name)) + ext - def 
_get_infer_cfg_and_input_spec(self, save_dir, prune_input=True): + def _get_infer_cfg_and_input_spec(self, + save_dir, + prune_input=True, + kl_quant=False): image_shape = None im_shape = [None, 2] scale_factor = [None, 2] @@ -647,9 +771,10 @@ if hasattr(self.model, 'deploy'): self.model.deploy = True - for layer in self.model.sublayers(): - if hasattr(layer, 'convert_to_deploy'): - layer.convert_to_deploy() + if 'slim' not in self.cfg: + for layer in self.model.sublayers(): + if hasattr(layer, 'convert_to_deploy'): + layer.convert_to_deploy() export_post_process = self.cfg['export'].get( 'post_process', False) if hasattr(self.cfg, 'export') else True @@ -703,11 +828,29 @@ class Trainer(object): "image": InputSpec( shape=image_shape, name='image') }] + if kl_quant: + if self.cfg.architecture == 'PicoDet' or 'ppyoloe' in self.cfg.weights: + pruned_input_spec = [{ + "image": InputSpec( + shape=image_shape, name='image'), + "scale_factor": InputSpec( + shape=scale_factor, name='scale_factor') + }] + elif 'tinypose' in self.cfg.weights: + pruned_input_spec = [{ + "image": InputSpec( + shape=image_shape, name='image') + }] return static_model, pruned_input_spec def export(self, output_dir='output_inference'): self.model.eval() + + if hasattr(self.cfg, 'export') and 'fuse_conv_bn' in self.cfg[ + 'export'] and self.cfg['export']['fuse_conv_bn']: + self.model = fuse_conv_bn(self.model) + model_name = os.path.splitext(os.path.split(self.cfg.filename)[-1])[0] save_dir = os.path.join(output_dir, model_name) if not os.path.exists(save_dir): @@ -717,7 +860,7 @@ class Trainer(object): save_dir) # dy2st and save model - if 'slim' not in self.cfg or self.cfg['slim_type'] != 'QAT': + if 'slim' not in self.cfg or 'QAT' not in self.cfg['slim_type']: paddle.jit.save( static_model, os.path.join(save_dir, 'model'), @@ -741,8 +884,9 @@ class Trainer(object): break # TODO: support prune input_spec + kl_quant = True if hasattr(self.cfg.slim, 'ptq') else False _, pruned_input_spec = self._get_infer_cfg_and_input_spec( - save_dir, prune_input=False) + save_dir, prune_input=False, kl_quant=kl_quant) self.cfg.slim.save_quantized_model( self.model, diff --git a/ppdet/ext_op/README.md b/ppdet/ext_op/README.md index 7ada0acf7fd75266fed6c66a9a010debc645bee8..0d67062ade859b0ca025d6ad35d9a630cf4ec523 100644 --- a/ppdet/ext_op/README.md +++ b/ppdet/ext_op/README.md @@ -1,5 +1,5 @@ # 自定义OP编译 -旋转框IOU计算OP是参考[自定义外部算子](https://www.paddlepaddle.org.cn/documentation/docs/zh/guides/07_new_op/new_custom_op.html) 。 +The rotated-box IoU OP is implemented with reference to [Custom External Operators](https://www.paddlepaddle.org.cn/documentation/docs/zh/guides/custom_op/new_cpp_op_cn.html). ## 1. 环境依赖 - Paddle >= 2.0.1 @@ -7,13 +7,13 @@ ## 2. 安装 ``` -python3.7 setup.py install +python setup.py install ``` -按照如下方式使用 +Once compiled, the OP can be used directly; below is a usage example of `rbox_iou`: ``` # 引入自定义op -from rbox_iou_ops import rbox_iou +from ext_op import rbox_iou paddle.set_device('gpu:0') paddle.disable_static() @@ -29,10 +29,7 @@ print('iou', iou) ``` ## 3. 
## 3. Unit tests -The unit test `test.py` compares the result of a pure-Python implementation with the result of the custom op. - -Since the Python computation differs from the C++ computation in small details, the error tolerance is set to 0.02. +The correctness of the custom operators can be verified by running the unit tests, for example: ``` -python3.7 test.py +python unittest/test_matched_rbox_iou.py ``` -The message `rbox_iou OP compute right!` indicates that the OP passed the test. diff --git a/ppdet/ext_op/csrc/rbox_iou/matched_rbox_iou_op.cc b/ppdet/ext_op/csrc/rbox_iou/matched_rbox_iou_op.cc new file mode 100644 index 0000000000000000000000000000000000000000..2c3c58b606c22607272d6d37877d11399d7542d9 --- /dev/null +++ b/ppdet/ext_op/csrc/rbox_iou/matched_rbox_iou_op.cc @@ -0,0 +1,90 @@ +// Copyright (c) 2022 PaddlePaddle Authors. All Rights Reserved. +// +// Licensed under the Apache License, Version 2.0 (the "License"); +// you may not use this file except in compliance with the License. +// You may obtain a copy of the License at +// +// http://www.apache.org/licenses/LICENSE-2.0 +// +// Unless required by applicable law or agreed to in writing, software +// distributed under the License is distributed on an "AS IS" BASIS, +// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +// See the License for the specific language governing permissions and +// limitations under the License. +// +// The code is based on +// https://github.com/csuhan/s2anet/blob/master/mmdet/ops/box_iou_rotated + +#include "paddle/extension.h" +#include "rbox_iou_op.h" + +template <typename T> +void matched_rbox_iou_cpu_kernel(const int rbox_num, const T *rbox1_data_ptr, + const T *rbox2_data_ptr, T *output_data_ptr) { + + int i; + for (i = 0; i < rbox_num; i++) { + output_data_ptr[i] = + rbox_iou_single<T>(rbox1_data_ptr + i * 5, rbox2_data_ptr + i * 5); + } +} + +#define CHECK_INPUT_CPU(x) \ + PD_CHECK(x.place() == paddle::PlaceType::kCPU, #x " must be a CPU Tensor.") + +std::vector<paddle::Tensor> MatchedRboxIouCPUForward(const paddle::Tensor &rbox1, + const paddle::Tensor &rbox2) { + CHECK_INPUT_CPU(rbox1); + CHECK_INPUT_CPU(rbox2); + PD_CHECK(rbox1.shape()[0] == rbox2.shape()[0], "inputs must be same dim"); + + auto rbox_num = rbox1.shape()[0]; + auto output = paddle::Tensor(paddle::PlaceType::kCPU, {rbox_num}); + + PD_DISPATCH_FLOATING_TYPES(rbox1.type(), "rotated_iou_cpu_kernel", ([&] { + matched_rbox_iou_cpu_kernel<data_t>( + rbox_num, rbox1.data<data_t>(), + rbox2.data<data_t>(), + output.mutable_data<data_t>()); + })); + + return {output}; +} + +#ifdef PADDLE_WITH_CUDA +std::vector<paddle::Tensor> MatchedRboxIouCUDAForward(const paddle::Tensor &rbox1, + const paddle::Tensor &rbox2); +#endif + +#define CHECK_INPUT_SAME(x1, x2) \ + PD_CHECK(x1.place() == x2.place(), "input must be same place.") + +std::vector<paddle::Tensor> MatchedRboxIouForward(const paddle::Tensor &rbox1, + const paddle::Tensor &rbox2) { + CHECK_INPUT_SAME(rbox1, rbox2); + if (rbox1.place() == paddle::PlaceType::kCPU) { + return MatchedRboxIouCPUForward(rbox1, rbox2); +#ifdef PADDLE_WITH_CUDA + } else if (rbox1.place() == paddle::PlaceType::kGPU) { + return MatchedRboxIouCUDAForward(rbox1, rbox2); +#endif + } +} + +std::vector<std::vector<int64_t>> +MatchedRboxIouInferShape(std::vector<int64_t> rbox1_shape, + std::vector<int64_t> rbox2_shape) { + return {{rbox1_shape[0]}}; +} + +std::vector<paddle::DataType> MatchedRboxIouInferDtype(paddle::DataType t1, + paddle::DataType t2) { + return {t1}; +} + +PD_BUILD_OP(matched_rbox_iou) + .Inputs({"RBOX1", "RBOX2"}) + .Outputs({"Output"}) + .SetKernelFn(PD_KERNEL(MatchedRboxIouForward)) + .SetInferShapeFn(PD_INFER_SHAPE(MatchedRboxIouInferShape)) + .SetInferDtypeFn(PD_INFER_DTYPE(MatchedRboxIouInferDtype));
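`MatchedRboxIouForward` above dispatches on the input tensor's place, so the same Python symbol runs the CPU kernel or, when compiled with CUDA, the GPU kernel. A quick hedged sanity check: the IoU of any box with itself should be close to 1 on either device.

```python
# Hedged sketch: exercise both kernels behind one Python call.
import numpy as np
import paddle
from ext_op import matched_rbox_iou

boxes = np.random.rand(64, 5).astype(np.float32)
boxes[:, 2:4] += 0.5  # keep w, h away from degenerate sizes
for device in ['cpu', 'gpu:0']:  # 'gpu:0' assumes a CUDA-enabled build
    paddle.set_device(device)
    t = paddle.to_tensor(boxes)
    iou = matched_rbox_iou(t, t).numpy()
    assert np.allclose(iou, 1.0, atol=1e-2), (device, iou.min())
```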
diff --git a/ppdet/ext_op/csrc/rbox_iou/matched_rbox_iou_op.cu b/ppdet/ext_op/csrc/rbox_iou/matched_rbox_iou_op.cu new file mode 100644 index 0000000000000000000000000000000000000000..8d03ecce6a775162980746adf727738a6beb102b --- /dev/null +++ b/ppdet/ext_op/csrc/rbox_iou/matched_rbox_iou_op.cu @@ -0,0 +1,63 @@ +// Copyright (c) 2022 PaddlePaddle Authors. All Rights Reserved. +// +// Licensed under the Apache License, Version 2.0 (the "License"); +// you may not use this file except in compliance with the License. +// You may obtain a copy of the License at +// +// http://www.apache.org/licenses/LICENSE-2.0 +// +// Unless required by applicable law or agreed to in writing, software +// distributed under the License is distributed on an "AS IS" BASIS, +// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +// See the License for the specific language governing permissions and +// limitations under the License. +// +// The code is based on +// https://github.com/csuhan/s2anet/blob/master/mmdet/ops/box_iou_rotated + +#include "paddle/extension.h" +#include "rbox_iou_op.h" + +/** + Computes ceil(a / b) +*/ + +static inline int CeilDiv(const int a, const int b) { return (a + b - 1) / b; } + +template <typename T> +__global__ void +matched_rbox_iou_cuda_kernel(const int rbox_num, const T *rbox1_data_ptr, + const T *rbox2_data_ptr, T *output_data_ptr) { + for (int tid = blockIdx.x * blockDim.x + threadIdx.x; tid < rbox_num; + tid += blockDim.x * gridDim.x) { + output_data_ptr[tid] = + rbox_iou_single<T>(rbox1_data_ptr + tid * 5, rbox2_data_ptr + tid * 5); + } +} + +#define CHECK_INPUT_GPU(x) \ + PD_CHECK(x.place() == paddle::PlaceType::kGPU, #x " must be a GPU Tensor.") + +std::vector<paddle::Tensor> MatchedRboxIouCUDAForward(const paddle::Tensor &rbox1, + const paddle::Tensor &rbox2) { + CHECK_INPUT_GPU(rbox1); + CHECK_INPUT_GPU(rbox2); + PD_CHECK(rbox1.shape()[0] == rbox2.shape()[0], "inputs must be same dim"); + + auto rbox_num = rbox1.shape()[0]; + + auto output = paddle::Tensor(paddle::PlaceType::kGPU, {rbox_num}); + + const int thread_per_block = 512; + const int block_per_grid = CeilDiv(rbox_num, thread_per_block); + + PD_DISPATCH_FLOATING_TYPES( + rbox1.type(), "matched_rbox_iou_cuda_kernel", ([&] { + matched_rbox_iou_cuda_kernel< + data_t><<<block_per_grid, thread_per_block, 0, rbox1.stream()>>>( + rbox_num, rbox1.data<data_t>(), rbox2.data<data_t>(), + output.mutable_data<data_t>()); + })); + + return {output}; +} diff --git a/ppdet/ext_op/rbox_iou_op.cc b/ppdet/ext_op/csrc/rbox_iou/rbox_iou_op.cc similarity index 100% rename from ppdet/ext_op/rbox_iou_op.cc rename to ppdet/ext_op/csrc/rbox_iou/rbox_iou_op.cc
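`matched_rbox_iou_cuda_kernel` above uses a grid-stride loop, so the launch configuration from `CeilDiv` does not need to cover the array exactly; any leftover elements are picked up on later strides. A pure-Python simulation of that indexing (illustrative only, names mirror the `.cu` file):

```python
def ceil_div(a, b):  # mirrors CeilDiv in the .cu file
    return (a + b - 1) // b

rbox_num = 1000
thread_per_block = 512
block_per_grid = ceil_div(rbox_num, thread_per_block)  # -> 2 blocks

covered = set()
for block in range(block_per_grid):
    for thread in range(thread_per_block):
        tid = block * thread_per_block + thread
        while tid < rbox_num:  # the grid-stride loop
            covered.add(tid)
            tid += thread_per_block * block_per_grid
assert len(covered) == rbox_num  # every box pair is visited exactly once
```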
diff --git a/ppdet/ext_op/rbox_iou_op.cu b/ppdet/ext_op/csrc/rbox_iou/rbox_iou_op.cu similarity index 63% rename from ppdet/ext_op/rbox_iou_op.cu rename to ppdet/ext_op/csrc/rbox_iou/rbox_iou_op.cu index 8ec43e54b4a813ef5829ba3120cc4a2be4d5d9b9..16d1d36f1002832d01db826743ce5c57ac557463 100644 --- a/ppdet/ext_op/rbox_iou_op.cu +++ b/ppdet/ext_op/csrc/rbox_iou/rbox_iou_op.cu @@ -12,10 +12,11 @@ // See the License for the specific language governing permissions and // limitations under the License. // -// The code is based on https://github.com/csuhan/s2anet/blob/master/mmdet/ops/box_iou_rotated +// The code is based on +// https://github.com/csuhan/s2anet/blob/master/mmdet/ops/box_iou_rotated -#include "rbox_iou_op.h" #include "paddle/extension.h" +#include "rbox_iou_op.h" // 2D block with 32 * 16 = 512 threads per block const int BLOCK_DIM_X = 32; @@ -25,17 +26,13 @@ const int BLOCK_DIM_Y = 16; Computes ceil(a / b) */ -static inline int CeilDiv(const int a, const int b) { - return (a + b - 1) / b; -} +static inline int CeilDiv(const int a, const int b) { return (a + b - 1) / b; } template <typename T> -__global__ void rbox_iou_cuda_kernel( - const int rbox1_num, - const int rbox2_num, - const T* rbox1_data_ptr, - const T* rbox2_data_ptr, - T* output_data_ptr) { +__global__ void rbox_iou_cuda_kernel(const int rbox1_num, const int rbox2_num, + const T *rbox1_data_ptr, + const T *rbox2_data_ptr, + T *output_data_ptr) { // get row_start and col_start const int rbox1_block_idx = blockIdx.x * blockDim.x; @@ -47,7 +44,6 @@ __global__ void rbox_iou_cuda_kernel( __shared__ T block_boxes1[BLOCK_DIM_X * 5]; __shared__ T block_boxes2[BLOCK_DIM_Y * 5]; - // It's safe to copy using threadIdx.x since BLOCK_DIM_X >= BLOCK_DIM_Y if (threadIdx.x < rbox1_thread_num && threadIdx.y == 0) { block_boxes1[threadIdx.x * 5 + 0] = @@ -62,7 +58,8 @@ rbox1_data_ptr[(rbox1_block_idx + threadIdx.x) * 5 + 4]; } - // threadIdx.x < BLOCK_DIM_Y=rbox2_thread_num, just use same condition as above: threadIdx.y == 0 + // threadIdx.x < BLOCK_DIM_Y=rbox2_thread_num, just use same condition as + // above: threadIdx.y == 0 if (threadIdx.x < rbox2_thread_num && threadIdx.y == 0) { block_boxes2[threadIdx.x * 5 + 0] = rbox2_data_ptr[(rbox2_block_idx + threadIdx.x) * 5 + 0]; @@ -80,41 +77,38 @@ __syncthreads(); if (threadIdx.x < rbox1_thread_num && threadIdx.y < rbox2_thread_num) { - int offset = (rbox1_block_idx + threadIdx.x) * rbox2_num + rbox2_block_idx + threadIdx.y; - output_data_ptr[offset] = rbox_iou_single<T>(block_boxes1 + threadIdx.x * 5, block_boxes2 + threadIdx.y * 5); + int offset = (rbox1_block_idx + threadIdx.x) * rbox2_num + rbox2_block_idx + + threadIdx.y; + output_data_ptr[offset] = rbox_iou_single<T>( + block_boxes1 + threadIdx.x * 5, block_boxes2 + threadIdx.y * 5); } } -#define CHECK_INPUT_GPU(x) PD_CHECK(x.place() == paddle::PlaceType::kGPU, #x " must be a GPU Tensor.") +#define CHECK_INPUT_GPU(x) \ + PD_CHECK(x.place() == paddle::PlaceType::kGPU, #x " must be a GPU Tensor.") -std::vector<paddle::Tensor> RboxIouCUDAForward(const paddle::Tensor& rbox1, const paddle::Tensor& rbox2) { - CHECK_INPUT_GPU(rbox1); - CHECK_INPUT_GPU(rbox2); +std::vector<paddle::Tensor> RboxIouCUDAForward(const paddle::Tensor &rbox1, + const paddle::Tensor &rbox2) { + CHECK_INPUT_GPU(rbox1); + CHECK_INPUT_GPU(rbox2); - auto rbox1_num = rbox1.shape()[0]; - auto rbox2_num = rbox2.shape()[0]; + auto rbox1_num = rbox1.shape()[0]; + auto rbox2_num = rbox2.shape()[0]; - auto output = paddle::Tensor(paddle::PlaceType::kGPU, {rbox1_num, rbox2_num}); + auto output = paddle::Tensor(paddle::PlaceType::kGPU, {rbox1_num, rbox2_num}); - const int blocks_x = CeilDiv(rbox1_num, BLOCK_DIM_X); - const int blocks_y = CeilDiv(rbox2_num, BLOCK_DIM_Y); + const int blocks_x = CeilDiv(rbox1_num, BLOCK_DIM_X); + const int blocks_y = CeilDiv(rbox2_num, BLOCK_DIM_Y); - dim3 blocks(blocks_x, blocks_y); - dim3 threads(BLOCK_DIM_X, BLOCK_DIM_Y); + dim3 blocks(blocks_x, blocks_y); + dim3 threads(BLOCK_DIM_X, BLOCK_DIM_Y);
- PD_DISPATCH_FLOATING_TYPES( - rbox1.type(), - "rbox_iou_cuda_kernel", - ([&] { - rbox_iou_cuda_kernel<data_t><<<blocks, threads, 0, rbox1.stream()>>>( - rbox1_num, - rbox2_num, - rbox1.data<data_t>(), - rbox2.data<data_t>(), - output.mutable_data<data_t>()); - })); + PD_DISPATCH_FLOATING_TYPES( + rbox1.type(), "rbox_iou_cuda_kernel", ([&] { + rbox_iou_cuda_kernel<data_t><<<blocks, threads, 0, rbox1.stream()>>>( + rbox1_num, rbox2_num, rbox1.data<data_t>(), rbox2.data<data_t>(), + output.mutable_data<data_t>()); + })); - return {output}; + return {output}; } - -
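The header diff that follows (`rbox_iou_op.h`) computes rotated-box IoU analytically: corner vertices from center/size/angle, pairwise edge intersections, a Graham-scan convex hull, and finally the polygon area. For intuition, a minimal shapely-based sketch of the same quantity for one pair of boxes (shapely is already used by the repo's unit tests; the `[x_ctr, y_ctr, w, h, angle-in-radians]` layout matches those tests):

```python
import numpy as np
from shapely.geometry import Polygon

def rbox_to_polygon(rbox):
    """rbox: [x_ctr, y_ctr, w, h, angle(rad)] -> shapely Polygon."""
    x, y, w, h, a = rbox
    # axis-aligned corners around the origin, then rotate and translate
    corners = np.array([[-w / 2, -h / 2], [w / 2, -h / 2],
                        [w / 2, h / 2], [-w / 2, h / 2]])
    rot = np.array([[np.cos(a), -np.sin(a)], [np.sin(a), np.cos(a)]])
    return Polygon(corners @ rot.T + [x, y])

def rbox_iou_single_ref(b1, b2):
    p1, p2 = rbox_to_polygon(b1), rbox_to_polygon(b2)
    inter = p1.intersection(p2).area
    union = p1.area + p2.area - inter
    return inter / union if union > 0 else 0.0

print(rbox_iou_single_ref([0, 0, 4, 2, 0.3], [0.5, 0.2, 4, 2, 0.5]))
```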
diff --git a/ppdet/ext_op/rbox_iou_op.h b/ppdet/ext_op/csrc/rbox_iou/rbox_iou_op.h similarity index 81% rename from ppdet/ext_op/rbox_iou_op.h rename to ppdet/ext_op/csrc/rbox_iou/rbox_iou_op.h index 77fb62e394a17a2e41379a40b3379c4eacf4e80d..fce66dea00e829215ffdb3a38f8db6182a068609 100644 --- a/ppdet/ext_op/rbox_iou_op.h +++ b/ppdet/ext_op/csrc/rbox_iou/rbox_iou_op.h @@ -12,7 +12,8 @@ // See the License for the specific language governing permissions and // limitations under the License. // -// The code is based on https://github.com/csuhan/s2anet/blob/master/mmdet/ops/box_iou_rotated +// The code is based on +// https://github.com/csuhan/s2anet/blob/master/mmdet/ops/box_iou_rotated #pragma once @@ -32,24 +33,20 @@ namespace { -template <typename T> -struct RotatedBox { - T x_ctr, y_ctr, w, h, a; -}; +template <typename T> struct RotatedBox { T x_ctr, y_ctr, w, h, a; }; -template <typename T> -struct Point { +template <typename T> struct Point { T x, y; - HOST_DEVICE_INLINE Point(const T& px = 0, const T& py = 0) : x(px), y(py) {} - HOST_DEVICE_INLINE Point operator+(const Point& p) const { + HOST_DEVICE_INLINE Point(const T &px = 0, const T &py = 0) : x(px), y(py) {} + HOST_DEVICE_INLINE Point operator+(const Point &p) const { return Point(x + p.x, y + p.y); } - HOST_DEVICE_INLINE Point& operator+=(const Point& p) { + HOST_DEVICE_INLINE Point &operator+=(const Point &p) { x += p.x; y += p.y; return *this; } - HOST_DEVICE_INLINE Point operator-(const Point& p) const { + HOST_DEVICE_INLINE Point operator-(const Point &p) const { return Point(x - p.x, y - p.y); } HOST_DEVICE_INLINE Point operator*(const T coeff) const { @@ -58,22 +55,21 @@ struct Point { }; template <typename T> -HOST_DEVICE_INLINE T dot_2d(const Point<T>& A, const Point<T>& B) { +HOST_DEVICE_INLINE T dot_2d(const Point<T> &A, const Point<T> &B) { return A.x * B.x + A.y * B.y; } template <typename T> -HOST_DEVICE_INLINE T cross_2d(const Point<T>& A, const Point<T>& B) { +HOST_DEVICE_INLINE T cross_2d(const Point<T> &A, const Point<T> &B) { return A.x * B.y - B.x * A.y; } template <typename T> -HOST_DEVICE_INLINE void get_rotated_vertices( - const RotatedBox<T>& box, - Point<T> (&pts)[4]) { +HOST_DEVICE_INLINE void get_rotated_vertices(const RotatedBox<T> &box, + Point<T> (&pts)[4]) { // M_PI / 180. == 0.01745329251 - //double theta = box.a * 0.01745329251; - //MODIFIED + // double theta = box.a * 0.01745329251; + // MODIFIED double theta = box.a; T cosTheta2 = (T)cos(theta) * 0.5f; T sinTheta2 = (T)sin(theta) * 0.5f; @@ -90,10 +86,9 @@ HOST_DEVICE_INLINE void get_rotated_vertices( } template <typename T> -HOST_DEVICE_INLINE int get_intersection_points( - const Point<T> (&pts1)[4], - const Point<T> (&pts2)[4], - Point<T> (&intersections)[24]) { +HOST_DEVICE_INLINE int get_intersection_points(const Point<T> (&pts1)[4], + const Point<T> (&pts2)[4], + Point<T> (&intersections)[24]) { // Line vector // A line from p1 to p2 is: p1 + (p2-p1)*t, t=[0,1] Point<T> vec1[4], vec2[4]; @@ -127,8 +122,8 @@ HOST_DEVICE_INLINE int get_intersection_points( // Check for vertices of rect1 inside rect2 { - const auto& AB = vec2[0]; - const auto& DA = vec2[3]; + const auto &AB = vec2[0]; + const auto &DA = vec2[3]; auto ABdotAB = dot_2d<T>(AB, AB); auto ADdotAD = dot_2d<T>(DA, DA); for (int i = 0; i < 4; i++) { @@ -150,8 +145,8 @@ // Reverse the check - check for vertices of rect2 inside rect1 { - const auto& AB = vec1[0]; - const auto& DA = vec1[3]; + const auto &AB = vec1[0]; + const auto &DA = vec1[3]; auto ABdotAB = dot_2d<T>(AB, AB); auto ADdotAD = dot_2d<T>(DA, DA); for (int i = 0; i < 4; i++) { @@ -171,11 +166,9 @@ } template <typename T> -HOST_DEVICE_INLINE int convex_hull_graham( - const Point<T> (&p)[24], - const int& num_in, - Point<T> (&q)[24], - bool shift_to_zero = false) { +HOST_DEVICE_INLINE int convex_hull_graham(const Point<T> (&p)[24], + const int &num_in, Point<T> (&q)[24], + bool shift_to_zero = false) { assert(num_in >= 2); // Step 1: @@ -188,7 +181,7 @@ t = i; } } - auto& start = p[t]; // starting point + auto &start = p[t]; // starting point // Step 2: // Subtract starting point from every points (for sorting in the next step) @@ -230,15 +223,15 @@ } #else // CPU version - std::sort( - q + 1, q + num_in, [](const Point<T>& A, const Point<T>& B) -> bool { - T temp = cross_2d<T>(A, B); - if (fabs(temp) < 1e-6) { - return dot_2d<T>(A, A) < dot_2d<T>(B, B); - } else { - return temp > 0; - } - }); + std::sort(q + 1, q + num_in, + [](const Point<T> &A, const Point<T> &B) -> bool { + T temp = cross_2d<T>(A, B); + if (fabs(temp) < 1e-6) { + return dot_2d<T>(A, A) < dot_2d<T>(B, B); + } else { + return temp > 0; + } + }); #endif // Step 4: @@ -286,7 +279,7 @@ } template <typename T> -HOST_DEVICE_INLINE T polygon_area(const Point<T> (&q)[24], const int& m) { +HOST_DEVICE_INLINE T polygon_area(const Point<T> (&q)[24], const int &m) { if (m <= 2) { return 0; } @@ -300,9 +293,8 @@ } template <typename T> -HOST_DEVICE_INLINE T rboxes_intersection( - const RotatedBox<T>& box1, - const RotatedBox<T>& box2) { +HOST_DEVICE_INLINE T rboxes_intersection(const RotatedBox<T> &box1, + const RotatedBox<T> &box2) { // There are up to 4 x 4 + 4 + 4 = 24 intersections (including dups) returned // from rotated_rect_intersection_pts Point<T> intersectPts[24], orderedPts[24]; @@ -327,8 +319,8 @@ } // namespace template <typename T> -HOST_DEVICE_INLINE T -rbox_iou_single(T const* const box1_raw, T const* const box2_raw) { +HOST_DEVICE_INLINE T rbox_iou_single(T const *const box1_raw, + T const *const box2_raw) { // shift center to the middle point to achieve higher precision in result RotatedBox<T> box1, box2; auto center_shift_x =
(box1_raw[0] + box2_raw[0]) / 2.0; diff --git a/ppdet/ext_op/setup.py b/ppdet/ext_op/setup.py index d364db7ed37c68227a5ef7d2f8b2c8d5fcad8123..5892f4625c263b9eac19a434aca10968882bc4bc 100644 --- a/ppdet/ext_op/setup.py +++ b/ppdet/ext_op/setup.py @@ -1,14 +1,33 @@ +import os +import glob import paddle from paddle.utils.cpp_extension import CppExtension, CUDAExtension, setup -if __name__ == "__main__": + +def get_extensions(): + root_dir = os.path.dirname(os.path.abspath(__file__)) + ext_root_dir = os.path.join(root_dir, 'csrc') + sources = [] + for ext_name in os.listdir(ext_root_dir): + ext_dir = os.path.join(ext_root_dir, ext_name) + source = glob.glob(os.path.join(ext_dir, '*.cc')) + kwargs = dict() + if paddle.device.is_compiled_with_cuda(): + source += glob.glob(os.path.join(ext_dir, '*.cu')) + + if not source: + continue + + sources += source + if paddle.device.is_compiled_with_cuda(): - setup( - name='rbox_iou_ops', - ext_modules=CUDAExtension( - sources=['rbox_iou_op.cc', 'rbox_iou_op.cu'], - extra_compile_args={'cxx': ['-DPADDLE_WITH_CUDA']})) + extension = CUDAExtension( + sources, extra_compile_args={'cxx': ['-DPADDLE_WITH_CUDA']}) else: - setup( - name='rbox_iou_ops', - ext_modules=CppExtension(sources=['rbox_iou_op.cc'])) + extension = CppExtension(sources) + + return extension + + +if __name__ == "__main__": + setup(name='ext_op', ext_modules=get_extensions()) diff --git a/ppdet/ext_op/unittest/test_matched_rbox_iou.py b/ppdet/ext_op/unittest/test_matched_rbox_iou.py new file mode 100644 index 0000000000000000000000000000000000000000..af7b076da2435a4f025f608430549f2334c22e08 --- /dev/null +++ b/ppdet/ext_op/unittest/test_matched_rbox_iou.py @@ -0,0 +1,149 @@ +import numpy as np +import sys +import time +from shapely.geometry import Polygon +import paddle +import unittest + +from ext_op import matched_rbox_iou + + +def rbox2poly_single(rrect, get_best_begin_point=False): + """ + rrect:[x_ctr,y_ctr,w,h,angle] + to + poly:[x0,y0,x1,y1,x2,y2,x3,y3] + """ + x_ctr, y_ctr, width, height, angle = rrect[:5] + tl_x, tl_y, br_x, br_y = -width / 2, -height / 2, width / 2, height / 2 + # rect 2x4 + rect = np.array([[tl_x, br_x, br_x, tl_x], [tl_y, tl_y, br_y, br_y]]) + R = np.array([[np.cos(angle), -np.sin(angle)], + [np.sin(angle), np.cos(angle)]]) + # poly + poly = R.dot(rect) + x0, x1, x2, x3 = poly[0, :4] + x_ctr + y0, y1, y2, y3 = poly[1, :4] + y_ctr + poly = np.array([x0, y0, x1, y1, x2, y2, x3, y3], dtype=np.float64) + return poly + + +def intersection(g, p): + """ + Intersection. + """ + + g = g[:8].reshape((4, 2)) + p = p[:8].reshape((4, 2)) + + a = g + b = p + + use_filter = True + if use_filter: + # step1: + inter_x1 = np.maximum(np.min(a[:, 0]), np.min(b[:, 0])) + inter_x2 = np.minimum(np.max(a[:, 0]), np.max(b[:, 0])) + inter_y1 = np.maximum(np.min(a[:, 1]), np.min(b[:, 1])) + inter_y2 = np.minimum(np.max(a[:, 1]), np.max(b[:, 1])) + if inter_x1 >= inter_x2 or inter_y1 >= inter_y2: + return 0. + x1 = np.minimum(np.min(a[:, 0]), np.min(b[:, 0])) + x2 = np.maximum(np.max(a[:, 0]), np.max(b[:, 0])) + y1 = np.minimum(np.min(a[:, 1]), np.min(b[:, 1])) + y2 = np.maximum(np.max(a[:, 1]), np.max(b[:, 1])) + if x1 >= x2 or y1 >= y2 or (x2 - x1) < 2 or (y2 - y1) < 2: + return 0. 
+ + g = Polygon(g) + p = Polygon(p) + if not g.is_valid or not p.is_valid: + return 0 + + inter = Polygon(g).intersection(Polygon(p)).area + union = g.area + p.area - inter + if union == 0: + return 0 + else: + return inter / union + + +def matched_rbox_overlaps(anchors, gt_bboxes, use_cv2=False): + """ + + Args: + anchors: [M, 5] x1,y1,x2,y2,angle + gt_bboxes: [M, 5] x1,y1,x2,y2,angle + + Returns: + macthed_iou: [M] + """ + assert anchors.shape[1] == 5 + assert gt_bboxes.shape[1] == 5 + + gt_bboxes_ploy = [rbox2poly_single(e) for e in gt_bboxes] + anchors_ploy = [rbox2poly_single(e) for e in anchors] + + num = len(anchors_ploy) + iou = np.zeros((num, ), dtype=np.float64) + + start_time = time.time() + for i in range(num): + try: + iou[i] = intersection(gt_bboxes_ploy[i], anchors_ploy[i]) + except Exception as e: + print('cur gt_bboxes_ploy[i]', gt_bboxes_ploy[i], + 'anchors_ploy[j]', anchors_ploy[i], e) + return iou + + +def gen_sample(n): + rbox = np.random.rand(n, 5) + rbox[:, 0:4] = rbox[:, 0:4] * 0.45 + 0.001 + rbox[:, 4] = rbox[:, 4] - 0.5 + return rbox + + +class MatchedRBoxIoUTest(unittest.TestCase): + def setUp(self): + self.initTestCase() + self.rbox1 = gen_sample(self.n) + self.rbox2 = gen_sample(self.n) + + def initTestCase(self): + self.n = 1000 + + def assertAllClose(self, x, y, msg, atol=5e-1, rtol=1e-2): + self.assertTrue(np.allclose(x, y, atol=atol, rtol=rtol), msg=msg) + + def get_places(self): + places = [paddle.CPUPlace()] + if paddle.device.is_compiled_with_cuda(): + places.append(paddle.CUDAPlace(0)) + + return places + + def check_output(self, place): + paddle.disable_static() + pd_rbox1 = paddle.to_tensor(self.rbox1, place=place) + pd_rbox2 = paddle.to_tensor(self.rbox2, place=place) + actual_t = matched_rbox_iou(pd_rbox1, pd_rbox2).numpy() + poly_rbox1 = self.rbox1 + poly_rbox2 = self.rbox2 + poly_rbox1[:, 0:4] = self.rbox1[:, 0:4] * 1024 + poly_rbox2[:, 0:4] = self.rbox2[:, 0:4] * 1024 + expect_t = matched_rbox_overlaps(poly_rbox1, poly_rbox2, use_cv2=False) + self.assertAllClose( + actual_t, + expect_t, + msg="rbox_iou has diff at {} \nExpect {}\nBut got {}".format( + str(place), str(expect_t), str(actual_t))) + + def test_output(self): + places = self.get_places() + for place in places: + self.check_output(place) + + +if __name__ == "__main__": + unittest.main() diff --git a/ppdet/ext_op/test.py b/ppdet/ext_op/unittest/test_rbox_iou.py similarity index 89% rename from ppdet/ext_op/test.py rename to ppdet/ext_op/unittest/test_rbox_iou.py index 85872e484b8ca6d60a62d311c9fdfc4a9e08b6e2..8ef19ae841d5a73c5b90f1b971ed36d1d7f61a7a 100644 --- a/ppdet/ext_op/test.py +++ b/ppdet/ext_op/unittest/test_rbox_iou.py @@ -5,11 +5,7 @@ from shapely.geometry import Polygon import paddle import unittest -try: - from rbox_iou_ops import rbox_iou -except Exception as e: - print('import rbox_iou_ops error', e) - sys.exit(-1) +from ext_op import rbox_iou def rbox2poly_single(rrect, get_best_begin_point=False): @@ -80,7 +76,7 @@ def rbox_overlaps(anchors, gt_bboxes, use_cv2=False): gt_bboxes: [M, 5] x1,y1,x2,y2,angle Returns: - + iou: [NA, M] """ assert anchors.shape[1] == 5 assert gt_bboxes.shape[1] == 5 @@ -89,17 +85,16 @@ def rbox_overlaps(anchors, gt_bboxes, use_cv2=False): anchors_ploy = [rbox2poly_single(e) for e in anchors] num_gt, num_anchors = len(gt_bboxes_ploy), len(anchors_ploy) - iou = np.zeros((num_gt, num_anchors), dtype=np.float64) + iou = np.zeros((num_anchors, num_gt), dtype=np.float64) start_time = time.time() - for i in range(num_gt): - for j in range(num_anchors): + 
for i in range(num_anchors): + for j in range(num_gt): try: - iou[i, j] = intersection(gt_bboxes_ploy[i], anchors_ploy[j]) + iou[i, j] = intersection(anchors_ploy[i], gt_bboxes_ploy[j]) except Exception as e: - print('cur gt_bboxes_ploy[i]', gt_bboxes_ploy[i], - 'anchors_ploy[j]', anchors_ploy[j], e) - iou = iou.T + print('cur anchors_ploy[i]', anchors_ploy[i], + 'gt_bboxes_ploy[j]', gt_bboxes_ploy[j], e) return iou diff --git a/ppdet/metrics/json_results.py b/ppdet/metrics/json_results.py index c703de63be89e326da979d2edbe0a3e1afca3bec..93354ec1fc592b1567b5f0a3e2044a215d231a30 100755 --- a/ppdet/metrics/json_results.py +++ b/ppdet/metrics/json_results.py @@ -65,6 +65,14 @@ def get_det_poly_res(bboxes, bbox_nums, image_id, label_to_cat_id_map, bias=0): return det_res +def strip_mask(mask): + row = mask[0, 0, :] + col = mask[0, :, 0] + im_h = len(col) - np.count_nonzero(col == -1) + im_w = len(row) - np.count_nonzero(row == -1) + return mask[:, :im_h, :im_w] + + def get_seg_res(masks, bboxes, mask_nums, image_id, label_to_cat_id_map): import pycocotools.mask as mask_util seg_res = [] @@ -72,8 +80,10 @@ def get_seg_res(masks, bboxes, mask_nums, image_id, label_to_cat_id_map): for i in range(len(mask_nums)): cur_image_id = int(image_id[i][0]) det_nums = mask_nums[i] + mask_i = masks[k:k + det_nums] + mask_i = strip_mask(mask_i) for j in range(det_nums): - mask = masks[k].astype(np.uint8) + mask = mask_i[j].astype(np.uint8) score = float(bboxes[k][1]) label = int(bboxes[k][0]) k = k + 1 diff --git a/ppdet/metrics/keypoint_metrics.py b/ppdet/metrics/keypoint_metrics.py index 0f5c8c973ea3d6ad05e18a20647df268438ae9e7..cbd52d02d4af1f6dd81edd0ea63a98b7ed77e614 100644 --- a/ppdet/metrics/keypoint_metrics.py +++ b/ppdet/metrics/keypoint_metrics.py @@ -16,6 +16,7 @@ import os import json from collections import defaultdict, OrderedDict import numpy as np +import paddle from pycocotools.coco import COCO from pycocotools.cocoeval import COCOeval from ..modeling.keypoint_utils import oks_nms @@ -70,15 +71,23 @@ class KeyPointTopDownCOCOEval(object): self.results['all_preds'][self.idx:self.idx + num_images, :, 0: 3] = kpts[:, :, 0:3] self.results['all_boxes'][self.idx:self.idx + num_images, 0:2] = inputs[ - 'center'].numpy()[:, 0:2] + 'center'].numpy()[:, 0:2] if isinstance( + inputs['center'], paddle.Tensor) else inputs['center'][:, 0:2] self.results['all_boxes'][self.idx:self.idx + num_images, 2:4] = inputs[ - 'scale'].numpy()[:, 0:2] + 'scale'].numpy()[:, 0:2] if isinstance( + inputs['scale'], paddle.Tensor) else inputs['scale'][:, 0:2] self.results['all_boxes'][self.idx:self.idx + num_images, 4] = np.prod( - inputs['scale'].numpy() * 200, 1) - self.results['all_boxes'][self.idx:self.idx + num_images, - 5] = np.squeeze(inputs['score'].numpy()) - self.results['image_path'].extend(inputs['im_id'].numpy()) - + inputs['scale'].numpy() * 200, + 1) if isinstance(inputs['scale'], paddle.Tensor) else np.prod( + inputs['scale'] * 200, 1) + self.results['all_boxes'][ + self.idx:self.idx + num_images, + 5] = np.squeeze(inputs['score'].numpy()) if isinstance( + inputs['score'], paddle.Tensor) else np.squeeze(inputs['score']) + if isinstance(inputs['im_id'], paddle.Tensor): + self.results['image_path'].extend(inputs['im_id'].numpy()) + else: + self.results['image_path'].extend(inputs['im_id']) self.idx += num_images def _write_coco_keypoint_results(self, keypoints): diff --git a/ppdet/metrics/map_utils.py b/ppdet/metrics/map_utils.py index 
9c96b9235f4205279e47ff84006351a012d7bf2d..12fb9ba51242bdd244eb60da8b364ab26ddbecba 100644 --- a/ppdet/metrics/map_utils.py +++ b/ppdet/metrics/map_utils.py @@ -121,9 +121,9 @@ def calc_rbox_iou(pred, gt_rbox): pred_rbox = pred_rbox.reshape(-1, 5) pred_rbox = pred_rbox.reshape(-1, 5) try: - from rbox_iou_ops import rbox_iou + from ext_op import rbox_iou except Exception as e: - print("import custom_ops error, try install rbox_iou_ops " \ + print("import custom_ops error, try install ext_op " \ "following ppdet/ext_op/README.md", e) sys.stdout.flush() sys.exit(-1) diff --git a/ppdet/metrics/mcmot_metrics.py b/ppdet/metrics/mcmot_metrics.py index 5bcfb923470a1f94a2bd951fb721221a8f339354..c9b5ef7506e92adcfe58d5dd1f2f2cad0d9d9e70 100644 --- a/ppdet/metrics/mcmot_metrics.py +++ b/ppdet/metrics/mcmot_metrics.py @@ -21,18 +21,21 @@ import copy import sys import math from collections import defaultdict -from motmetrics.math_util import quiet_divide import numpy as np import pandas as pd -import paddle -import paddle.nn.functional as F from .metrics import Metric -import motmetrics as mm -import openpyxl -metrics = mm.metrics.motchallenge_metrics -mh = mm.metrics.create() +try: + import motmetrics as mm + from motmetrics.math_util import quiet_divide + metrics = mm.metrics.motchallenge_metrics + mh = mm.metrics.create() +except: + print( + 'Warning: Unable to use MCMOT metric, please install motmetrics, for example: `pip install motmetrics`, see https://github.com/longcw/py-motmetrics' + ) + pass from ppdet.utils.logger import setup_logger logger = setup_logger(__name__) @@ -302,24 +305,30 @@ class MCMOTEvaluator(object): self.num_classes = num_classes self.load_annotations() + try: + import motmetrics as mm + mm.lap.default_solver = 'lap' + except Exception as e: + raise RuntimeError( + 'Unable to use MCMOT metric, please install motmetrics, for example: `pip install motmetrics`, see https://github.com/longcw/py-motmetrics' + ) self.reset_accumulator() self.class_accs = [] def load_annotations(self): assert self.data_type == 'mcmot' - self.gt_filename = os.path.join(self.data_root, '../', - 'sequences', '{}.txt'.format(self.seq_name)) - + self.gt_filename = os.path.join(self.data_root, '../', 'sequences', + '{}.txt'.format(self.seq_name)) + if not os.path.exists(self.gt_filename): + logger.warning( + "gt_filename '{}' of MCMOTEvaluator does not exist, so the MOTA will be -INF.".format(self.gt_filename)
+ ) + def reset_accumulator(self): - import motmetrics as mm - mm.lap.default_solver = 'lap' self.acc = mm.MOTAccumulator(auto_id=True) def eval_frame_dict(self, trk_objs, gt_objs, rtn_events=False, union=False): - import motmetrics as mm - mm.lap.default_solver = 'lap' if union: trk_tlwhs, trk_ids, trk_cls = unzip_objs_cls(trk_objs)[:3] gt_tlwhs, gt_ids, gt_cls = unzip_objs_cls(gt_objs)[:3] @@ -393,9 +402,6 @@ class MCMOTEvaluator(object): names, metrics=('mota', 'num_switches', 'idp', 'idr', 'idf1', 'precision', 'recall')): - import motmetrics as mm - mm.lap.default_solver = 'lap' - names = copy.deepcopy(names) if metrics is None: metrics = mm.metrics.motchallenge_metrics diff --git a/ppdet/metrics/metrics.py b/ppdet/metrics/metrics.py index 3925267d7f9e5656033e4c851d8b52f1031867ab..b20a569a0434ed7bac7f461399cb280f08ff888a 100644 --- a/ppdet/metrics/metrics.py +++ b/ppdet/metrics/metrics.py @@ -22,6 +22,7 @@ import json import paddle import numpy as np import typing +from pathlib import Path from .map_utils import prune_zero_padding, DetectionMAP from .coco_utils import get_infer_results, cocoapi_eval @@ -32,13 +33,8 @@ from ppdet.utils.logger import setup_logger logger = setup_logger(__name__) __all__ = [ - 'Metric', - 'COCOMetric', - 'VOCMetric', - 'WiderFaceMetric', - 'get_infer_results', - 'RBoxMetric', - 'SNIPERCOCOMetric' + 'Metric', 'COCOMetric', 'VOCMetric', 'WiderFaceMetric', 'get_infer_results', + 'RBoxMetric', 'SNIPERCOCOMetric' ] COCO_SIGMAS = np.array([ @@ -74,8 +70,6 @@ class Metric(paddle.metric.Metric): class COCOMetric(Metric): def __init__(self, anno_file, **kwargs): - assert os.path.isfile(anno_file), \ - "anno_file {} not a file".format(anno_file) self.anno_file = anno_file self.clsid2catid = kwargs.get('clsid2catid', None) if self.clsid2catid is None: @@ -86,6 +80,14 @@ class COCOMetric(Metric): self.bias = kwargs.get('bias', 0) self.save_prediction_only = kwargs.get('save_prediction_only', False) self.iou_type = kwargs.get('IouType', 'bbox') + + if not self.save_prediction_only: + assert os.path.isfile(anno_file), \ + "anno_file {} not a file".format(anno_file) + + if self.output_eval is not None: + Path(self.output_eval).mkdir(exist_ok=True) + self.reset() def reset(self): @@ -223,7 +225,9 @@ class VOCMetric(Metric): map_type='11point', is_bbox_normalized=False, evaluate_difficult=False, - classwise=False): + classwise=False, + output_eval=None, + save_prediction_only=False): assert os.path.isfile(label_list), \ "label_list {} not a file".format(label_list) self.clsid2catid, self.catid2name = get_categories('VOC', label_list) @@ -231,6 +235,8 @@ class VOCMetric(Metric): self.overlap_thresh = overlap_thresh self.map_type = map_type self.evaluate_difficult = evaluate_difficult + self.output_eval = output_eval + self.save_prediction_only = save_prediction_only self.detection_map = DetectionMAP( class_num=class_num, overlap_thresh=overlap_thresh, @@ -243,34 +249,52 @@ class VOCMetric(Metric): self.reset() def reset(self): + self.results = {'bbox': [], 'score': [], 'label': []} self.detection_map.reset() def update(self, inputs, outputs): - bbox_np = outputs['bbox'].numpy() + bbox_np = outputs['bbox'].numpy() if isinstance( + outputs['bbox'], paddle.Tensor) else outputs['bbox'] bboxes = bbox_np[:, 2:] scores = bbox_np[:, 1] labels = bbox_np[:, 0] - bbox_lengths = outputs['bbox_num'].numpy() + bbox_lengths = outputs['bbox_num'].numpy() if isinstance( + outputs['bbox_num'], paddle.Tensor) else outputs['bbox_num'] + + self.results['bbox'].append(bboxes.tolist()) + 
self.results['score'].append(scores.tolist()) + self.results['label'].append(labels.tolist()) if bboxes.shape == (1, 1) or bboxes is None: return + if self.save_prediction_only: + return + gt_boxes = inputs['gt_bbox'] gt_labels = inputs['gt_class'] difficults = inputs['difficult'] if not self.evaluate_difficult \ else None - scale_factor = inputs['scale_factor'].numpy( - ) if 'scale_factor' in inputs else np.ones( - (gt_boxes.shape[0], 2)).astype('float32') + if 'scale_factor' in inputs: + scale_factor = inputs['scale_factor'].numpy() if isinstance( + inputs['scale_factor'], + paddle.Tensor) else inputs['scale_factor'] + else: + scale_factor = np.ones((gt_boxes.shape[0], 2)).astype('float32') bbox_idx = 0 for i in range(len(gt_boxes)): - gt_box = gt_boxes[i].numpy() + gt_box = gt_boxes[i].numpy() if isinstance( + gt_boxes[i], paddle.Tensor) else gt_boxes[i] h, w = scale_factor[i] gt_box = gt_box / np.array([w, h, w, h]) - gt_label = gt_labels[i].numpy() - difficult = None if difficults is None \ - else difficults[i].numpy() + gt_label = gt_labels[i].numpy() if isinstance( + gt_labels[i], paddle.Tensor) else gt_labels[i] + if difficults is not None: + difficult = difficults[i].numpy() if isinstance( + difficults[i], paddle.Tensor) else difficults[i] + else: + difficult = None bbox_num = bbox_lengths[i] bbox = bboxes[bbox_idx:bbox_idx + bbox_num] score = scores[bbox_idx:bbox_idx + bbox_num] @@ -282,6 +306,15 @@ bbox_idx += bbox_num def accumulate(self): + output = "bbox.json" + if self.output_eval: + output = os.path.join(self.output_eval, output) + with open(output, 'w') as f: + json.dump(self.results, f) + logger.info('The bbox result is saved to bbox.json.') + if self.save_prediction_only: + return + logger.info("Accumulating evaluation results...") self.detection_map.accumulate() @@ -427,11 +460,13 @@ class SNIPERCOCOMetric(COCOMetric): self.chip_results.append(outs) - def accumulate(self): - results = self.dataset.anno_cropper.aggregate_chips_detections(self.chip_results) + results = self.dataset.anno_cropper.aggregate_chips_detections( + self.chip_results) for outs in results: - infer_results = get_infer_results(outs, self.clsid2catid, bias=self.bias) - self.results['bbox'] += infer_results['bbox'] if 'bbox' in infer_results else [] + infer_results = get_infer_results( + outs, self.clsid2catid, bias=self.bias) + self.results['bbox'] += infer_results[ + 'bbox'] if 'bbox' in infer_results else [] super(SNIPERCOCOMetric, self).accumulate() diff --git a/ppdet/metrics/mot_metrics.py b/ppdet/metrics/mot_metrics.py index 85cba3630cd428478175ddc6347db4152d47a353..b5ed8a2d4a8f60d37297a94265733970212e24d0 100644 --- a/ppdet/metrics/mot_metrics.py +++ b/ppdet/metrics/mot_metrics.py @@ -22,13 +22,21 @@ import sys import math from collections import defaultdict import numpy as np -import paddle -import paddle.nn.functional as F + from ppdet.modeling.bbox_utils import bbox_iou_np_expand from .map_utils import ap_per_class from .metrics import Metric from .munkres import Munkres +try: + import motmetrics as mm + mm.lap.default_solver = 'lap' +except: + print( + 'Warning: Unable to use MOT metric, please install motmetrics, for example: `pip install motmetrics`, see https://github.com/longcw/py-motmetrics' + ) + pass + from ppdet.utils.logger import setup_logger logger = setup_logger(__name__)
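The try/except block above makes `motmetrics` an optional dependency: the module still imports without it, and the evaluators re-check availability at construction time so the failure surfaces only when the metric is actually used. A condensed sketch of that pattern (class name hypothetical):

```python
# Hedged sketch of the optional-dependency pattern used for motmetrics.
try:
    import motmetrics as mm
    mm.lap.default_solver = 'lap'
except ImportError:
    mm = None  # defer the failure until the metric is constructed

class MOTAccumulatorHolder:
    def __init__(self):
        if mm is None:
            raise RuntimeError(
                'motmetrics is required, install with `pip install motmetrics`')
        self.acc = mm.MOTAccumulator(auto_id=True)
```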
@@ -36,8 +44,13 @@ __all__ = ['MOTEvaluator', 'MOTMetric', 'JDEDetMetric', 'KITTIMOTMetric'] def read_mot_results(filename, is_gt=False, is_ignore=False): - valid_labels = {1} - ignore_labels = {2, 7, 8, 12} # only in motchallenge datasets like 'MOT16' + valid_label = [1] + ignore_labels = [2, 7, 8, 12] # only in motchallenge datasets like 'MOT16' + if is_gt: + logger.info( + "In MOT16/17 dataset the valid_label of ground truth is '{}', " + "in other dataset it should be '0' for single-class MOT.".format( + valid_label[0])) results_dict = dict() if os.path.isfile(filename): with open(filename, 'r') as f: @@ -50,12 +63,10 @@ continue results_dict.setdefault(fid, list()) - box_size = float(linelist[4]) * float(linelist[5]) - if is_gt: label = int(float(linelist[7])) mark = int(float(linelist[6])) - if mark == 0 or label not in valid_labels: + if mark == 0 or label not in valid_label: continue score = 1 elif is_ignore: @@ -112,24 +123,31 @@ class MOTEvaluator(object): self.data_type = data_type self.load_annotations() + try: + import motmetrics as mm + mm.lap.default_solver = 'lap' + except Exception as e: + raise RuntimeError( + 'Unable to use MOT metric, please install motmetrics, for example: `pip install motmetrics`, see https://github.com/longcw/py-motmetrics' + ) self.reset_accumulator() def load_annotations(self): assert self.data_type == 'mot' gt_filename = os.path.join(self.data_root, self.seq_name, 'gt', 'gt.txt') + if not os.path.exists(gt_filename): + logger.warning( + "gt_filename '{}' of MOTEvaluator does not exist, so the MOTA will be -INF.".format(gt_filename) + ) self.gt_frame_dict = read_mot_results(gt_filename, is_gt=True) self.gt_ignore_frame_dict = read_mot_results( gt_filename, is_ignore=True) def reset_accumulator(self): - import motmetrics as mm - mm.lap.default_solver = 'lap' self.acc = mm.MOTAccumulator(auto_id=True) def eval_frame(self, frame_id, trk_tlwhs, trk_ids, rtn_events=False): - import motmetrics as mm - mm.lap.default_solver = 'lap' # results trk_tlwhs = np.copy(trk_tlwhs) trk_ids = np.copy(trk_ids) @@ -187,8 +205,6 @@ names, metrics=('mota', 'num_switches', 'idp', 'idr', 'idf1', 'precision', 'recall')): - import motmetrics as mm - mm.lap.default_solver = 'lap' names = copy.deepcopy(names) if metrics is None: metrics = mm.metrics.motchallenge_metrics @@ -225,8 +241,6 @@ class MOTMetric(Metric): self.result_root = result_root def accumulate(self): - import motmetrics as mm - import openpyxl metrics = mm.metrics.motchallenge_metrics mh = mm.metrics.create() summary = self.MOTEvaluator.get_summary(self.accs, self.seqs, metrics) @@ -551,7 +565,7 @@ class KITTIEvaluation(object): "track ids are not unique for sequence %d: frame %d" % (seq, t_data.frame)) logger.info( - "track id %d occured at least twice for this frame" + "track id %d occurred at least twice for this frame" % t_data.track_id) logger.info("Exiting...") #continue # this allows to evaluate non-unique result files diff --git a/ppdet/modeling/__init__.py b/ppdet/modeling/__init__.py index 88a9a3570477f55e8f7fbfeae4fd84271a76256d..cdcb5d1bf08d813257dc577366de2efa9da9add7 100644 --- a/ppdet/modeling/__init__.py +++ b/ppdet/modeling/__init__.py @@ -29,7 +29,6 @@ from . import reid from . import mot from . import transformers from . import assigners -from . 
import coders from .ops import * from .backbones import * @@ -44,4 +43,3 @@ from .reid import * from .mot import * from .transformers import * from .assigners import * -from .coders import * diff --git a/ppdet/modeling/architectures/__init__.py b/ppdet/modeling/architectures/__init__.py index 71c53067f72f96c1d24da19aa4313449e91f4b95..9360a5b7b15596302ae54fc7b375e83718820da0 100644 --- a/ppdet/modeling/architectures/__init__.py +++ b/ppdet/modeling/architectures/__init__.py @@ -5,6 +5,13 @@ # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + from . import meta_arch from . import faster_rcnn from . import mask_rcnn @@ -28,6 +35,7 @@ from . import sparse_rcnn from . import tood from . import retinanet from . import bytetrack +from . import yolox from .meta_arch import * from .faster_rcnn import * @@ -53,3 +61,4 @@ from .sparse_rcnn import * from .tood import * from .retinanet import * from .bytetrack import * +from .yolox import * diff --git a/ppdet/modeling/architectures/cascade_rcnn.py b/ppdet/modeling/architectures/cascade_rcnn.py index 4b5caa7a3ad16f535c007ffa0888b44c8958478b..fc5949af0ac4efaea3ea28bbb416859881461f30 100644 --- a/ppdet/modeling/architectures/cascade_rcnn.py +++ b/ppdet/modeling/architectures/cascade_rcnn.py @@ -111,8 +111,8 @@ class CascadeRCNN(BaseArch): bbox, bbox_num = self.bbox_post_process( preds, (refined_rois, rois_num), im_shape, scale_factor) # rescale the prediction back to origin image - bbox_pred = self.bbox_post_process.get_pred(bbox, bbox_num, - im_shape, scale_factor) + bbox, bbox_pred, bbox_num = self.bbox_post_process.get_pred( + bbox, bbox_num, im_shape, scale_factor) if not self.with_mask: return bbox_pred, bbox_num, None mask_out = self.mask_head(body_feats, bbox, bbox_num, self.inputs) diff --git a/ppdet/modeling/architectures/faster_rcnn.py b/ppdet/modeling/architectures/faster_rcnn.py index 26a2672d60f49aa989c7945b65ce3ecd9beec182..ce9a8e4b57d2dfe54fde037fed2dc0156cb71b51 100644 --- a/ppdet/modeling/architectures/faster_rcnn.py +++ b/ppdet/modeling/architectures/faster_rcnn.py @@ -87,8 +87,8 @@ class FasterRCNN(BaseArch): im_shape, scale_factor) # rescale the prediction back to origin image - bbox_pred = self.bbox_post_process.get_pred(bbox, bbox_num, - im_shape, scale_factor) + bboxes, bbox_pred, bbox_num = self.bbox_post_process.get_pred( + bbox, bbox_num, im_shape, scale_factor) return bbox_pred, bbox_num def get_loss(self, ): diff --git a/ppdet/modeling/architectures/keypoint_hrhrnet.py b/ppdet/modeling/architectures/keypoint_hrhrnet.py index 6f62b4b218dfd105e48675eedc71dd9a65d7a62a..366e9e3eed466f5e52503e94e6dea2afbf556c7e 100644 --- a/ppdet/modeling/architectures/keypoint_hrhrnet.py +++ b/ppdet/modeling/architectures/keypoint_hrhrnet.py @@ -153,7 +153,7 @@ class HrHRNetPostProcess(object): heat_thresh (float): value of topk below this threshhold will be ignored tag_thresh (float): coord's value sampled in tagmap below this threshold belong to same people for init - inputs(list[heatmap]): the output list of modle, [heatmap, heatmap_maxpool, tagmap], heatmap_maxpool used to get topk + inputs(list[heatmap]): the output list of model, [heatmap, heatmap_maxpool, tagmap], heatmap_maxpool used 
to get topk original_height, original_width (float): the original image size ''' diff --git a/ppdet/modeling/architectures/mask_rcnn.py b/ppdet/modeling/architectures/mask_rcnn.py index 43b8bff94aaf6f496d978fe755b55ba79f7786b2..a322f9f8e7b41d47d90b03b594fcdb47665c2c45 100644 --- a/ppdet/modeling/architectures/mask_rcnn.py +++ b/ppdet/modeling/architectures/mask_rcnn.py @@ -112,8 +112,8 @@ class MaskRCNN(BaseArch): body_feats, bbox, bbox_num, self.inputs, feat_func=feat_func) # rescale the prediction back to origin image - bbox_pred = self.bbox_post_process.get_pred(bbox, bbox_num, - im_shape, scale_factor) + bbox, bbox_pred, bbox_num = self.bbox_post_process.get_pred( + bbox, bbox_num, im_shape, scale_factor) origin_shape = self.bbox_post_process.get_origin_shape() mask_pred = self.mask_post_process(mask_out, bbox_pred, bbox_num, origin_shape) diff --git a/ppdet/modeling/architectures/meta_arch.py b/ppdet/modeling/architectures/meta_arch.py index 1f13c854072956395e8bb9bbb5b9ad9d43d2eeec..4ff84a97a61739e06f215f56a64daf0459e4a971 100644 --- a/ppdet/modeling/architectures/meta_arch.py +++ b/ppdet/modeling/architectures/meta_arch.py @@ -22,22 +22,23 @@ class BaseArch(nn.Layer): self.fuse_norm = False def load_meanstd(self, cfg_transform): - self.scale = 1. - self.mean = paddle.to_tensor([0.485, 0.456, 0.406]).reshape( - (1, 3, 1, 1)) - self.std = paddle.to_tensor([0.229, 0.224, 0.225]).reshape((1, 3, 1, 1)) + scale = 1. + mean = np.array([0.485, 0.456, 0.406], dtype=np.float32) + std = np.array([0.229, 0.224, 0.225], dtype=np.float32) for item in cfg_transform: if 'NormalizeImage' in item: - self.mean = paddle.to_tensor(item['NormalizeImage'][ - 'mean']).reshape((1, 3, 1, 1)) - self.std = paddle.to_tensor(item['NormalizeImage'][ - 'std']).reshape((1, 3, 1, 1)) + mean = np.array( + item['NormalizeImage']['mean'], dtype=np.float32) + std = np.array(item['NormalizeImage']['std'], dtype=np.float32) if item['NormalizeImage'].get('is_scale', True): - self.scale = 1. / 255. + scale = 1. / 255. 
break if self.data_format == 'NHWC': - self.mean = self.mean.reshape(1, 1, 1, 3) - self.std = self.std.reshape(1, 1, 1, 3) + self.scale = paddle.to_tensor(scale / std).reshape((1, 1, 1, 3)) + self.bias = paddle.to_tensor(-mean / std).reshape((1, 1, 1, 3)) + else: + self.scale = paddle.to_tensor(scale / std).reshape((1, 3, 1, 1)) + self.bias = paddle.to_tensor(-mean / std).reshape((1, 3, 1, 1)) def forward(self, inputs): if self.data_format == 'NHWC': @@ -46,7 +47,7 @@ class BaseArch(nn.Layer): if self.fuse_norm: image = inputs['image'] - self.inputs['image'] = (image * self.scale - self.mean) / self.std + self.inputs['image'] = image * self.scale + self.bias self.inputs['im_shape'] = inputs['im_shape'] self.inputs['scale_factor'] = inputs['scale_factor'] else: @@ -66,8 +67,7 @@ class BaseArch(nn.Layer): outs = [] for inp in inputs_list: if self.fuse_norm: - self.inputs['image'] = ( - inp['image'] * self.scale - self.mean) / self.std + self.inputs['image'] = inp['image'] * self.scale + self.bias self.inputs['im_shape'] = inp['im_shape'] self.inputs['scale_factor'] = inp['scale_factor'] else: @@ -75,7 +75,7 @@ class BaseArch(nn.Layer): outs.append(self.get_pred()) # multi-scale test - if len(outs)>1: + if len(outs) > 1: out = self.merge_multi_scale_predictions(outs) else: out = outs[0] @@ -92,7 +92,9 @@ class BaseArch(nn.Layer): keep_top_k = self.bbox_post_process.nms.keep_top_k nms_threshold = self.bbox_post_process.nms.nms_threshold else: - raise Exception("Multi scale test only supports CascadeRCNN, FasterRCNN and MaskRCNN for now") + raise Exception( + "Multi scale test only supports CascadeRCNN, FasterRCNN and MaskRCNN for now" + ) final_boxes = [] all_scale_outs = paddle.concat([o['bbox'] for o in outs]).numpy() @@ -101,9 +103,11 @@ class BaseArch(nn.Layer): if np.count_nonzero(idxs) == 0: continue r = nms(all_scale_outs[idxs, 1:], nms_threshold) - final_boxes.append(np.concatenate([np.full((r.shape[0], 1), c), r], 1)) + final_boxes.append( + np.concatenate([np.full((r.shape[0], 1), c), r], 1)) out = np.concatenate(final_boxes) - out = np.concatenate(sorted(out, key=lambda e: e[1])[-keep_top_k:]).reshape((-1, 6)) + out = np.concatenate(sorted( + out, key=lambda e: e[1])[-keep_top_k:]).reshape((-1, 6)) out = { 'bbox': paddle.to_tensor(out), 'bbox_num': paddle.to_tensor(np.array([out.shape[0], ])) diff --git a/ppdet/modeling/architectures/picodet.py b/ppdet/modeling/architectures/picodet.py index 760b8347b76bb6b0acca0aba8727c81bc9397fa2..0b87a4baa429dae1c03286f09243ca3211b199df 100644 --- a/ppdet/modeling/architectures/picodet.py +++ b/ppdet/modeling/architectures/picodet.py @@ -67,10 +67,9 @@ class PicoDet(BaseArch): if self.training or not self.export_post_process: return head_outs, None else: - im_shape = self.inputs['im_shape'] scale_factor = self.inputs['scale_factor'] bboxes, bbox_num = self.head.post_process( - head_outs, im_shape, scale_factor, export_nms=self.export_nms) + head_outs, scale_factor, export_nms=self.export_nms) return bboxes, bbox_num def get_loss(self, ): diff --git a/ppdet/modeling/architectures/retinanet.py b/ppdet/modeling/architectures/retinanet.py index 5e9ce2de4b045abae60434cedb30976ba3398e9d..e774430a03dfebf74c1e91138ed57f2ee52f1c9d 100644 --- a/ppdet/modeling/architectures/retinanet.py +++ b/ppdet/modeling/architectures/retinanet.py @@ -22,14 +22,12 @@ import paddle __all__ = ['RetinaNet'] + @register class RetinaNet(BaseArch): __category__ = 'architecture' - def __init__(self, - backbone, - neck, - head): + def __init__(self, backbone, neck, head): 
super(RetinaNet, self).__init__() self.backbone = backbone self.neck = neck @@ -38,35 +36,33 @@ class RetinaNet(BaseArch): @classmethod def from_config(cls, cfg, *args, **kwargs): backbone = create(cfg['backbone']) + kwargs = {'input_shape': backbone.out_shape} neck = create(cfg['neck'], **kwargs) - head = create(cfg['head']) + + kwargs = {'input_shape': neck.out_shape} + head = create(cfg['head'], **kwargs) + return { 'backbone': backbone, 'neck': neck, - 'head': head} + 'head': head, + } def _forward(self): body_feats = self.backbone(self.inputs) neck_feats = self.neck(body_feats) - head_outs = self.head(neck_feats) - if not self.training: - im_shape = self.inputs['im_shape'] - scale_factor = self.inputs['scale_factor'] - bboxes, bbox_num = self.head.post_process(head_outs, im_shape, scale_factor) - return bboxes, bbox_num - return head_outs + + if self.training: + return self.head(neck_feats, self.inputs) + else: + head_outs = self.head(neck_feats) + bbox, bbox_num = self.head.post_process( + head_outs, self.inputs['im_shape'], self.inputs['scale_factor']) + return {'bbox': bbox, 'bbox_num': bbox_num} def get_loss(self): - loss = dict() - head_outs = self._forward() - loss_retina = self.head.get_loss(head_outs, self.inputs) - loss.update(loss_retina) - total_loss = paddle.add_n(list(loss.values())) - loss.update(loss=total_loss) - return loss + return self._forward() def get_pred(self): - bbox_pred, bbox_num = self._forward() - output = dict(bbox=bbox_pred, bbox_num=bbox_num) - return output + return self._forward() diff --git a/ppdet/modeling/architectures/yolo.py b/ppdet/modeling/architectures/yolo.py index 00deb32ee3d0ea92e3bd5b0fcce3e53ea78db407..ce5be21cd48272939fd0e2ea36c5db439cb02081 100644 --- a/ppdet/modeling/architectures/yolo.py +++ b/ppdet/modeling/architectures/yolo.py @@ -115,8 +115,7 @@ class YOLOv3(BaseArch): self.inputs['im_shape'], self.inputs['scale_factor']) else: bbox, bbox_num = self.yolo_head.post_process( - yolo_head_outs, self.inputs['im_shape'], - self.inputs['scale_factor']) + yolo_head_outs, self.inputs['scale_factor']) output = {'bbox': bbox, 'bbox_num': bbox_num} return output diff --git a/ppdet/modeling/architectures/yolox.py b/ppdet/modeling/architectures/yolox.py new file mode 100644 index 0000000000000000000000000000000000000000..8e02e9ef7ecce137013ec2e7707dc04e3afabb28 --- /dev/null +++ b/ppdet/modeling/architectures/yolox.py @@ -0,0 +1,138 @@ +# Copyright (c) 2022 PaddlePaddle Authors. All Rights Reserved. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. 
+ +from __future__ import absolute_import +from __future__ import division +from __future__ import print_function + +from ppdet.core.workspace import register, create +from .meta_arch import BaseArch + +import random +import paddle +import paddle.nn.functional as F +import paddle.distributed as dist + +__all__ = ['YOLOX'] + + +@register +class YOLOX(BaseArch): + """ + YOLOX network, see https://arxiv.org/abs/2107.08430 + + Args: + backbone (nn.Layer): backbone instance + neck (nn.Layer): neck instance + head (nn.Layer): head instance + for_mot (bool): whether used for MOT or not + input_size (list[int]): initial scale, will be reset by self._preprocess() + size_stride (int): stride of the size range + size_range (list[int]): multi-scale range for training + random_interval (int): interval of iter to change self._input_size + """ + __category__ = 'architecture' + + def __init__(self, + backbone='CSPDarkNet', + neck='YOLOCSPPAN', + head='YOLOXHead', + for_mot=False, + input_size=[640, 640], + size_stride=32, + size_range=[15, 25], + random_interval=10): + super(YOLOX, self).__init__() + self.backbone = backbone + self.neck = neck + self.head = head + self.for_mot = for_mot + + self.input_size = input_size + self._input_size = paddle.to_tensor(input_size) + self.size_stride = size_stride + self.size_range = size_range + self.random_interval = random_interval + self._step = 0 + + @classmethod + def from_config(cls, cfg, *args, **kwargs): + # backbone + backbone = create(cfg['backbone']) + + # fpn + kwargs = {'input_shape': backbone.out_shape} + neck = create(cfg['neck'], **kwargs) + + # head + kwargs = {'input_shape': neck.out_shape} + head = create(cfg['head'], **kwargs) + + return { + 'backbone': backbone, + 'neck': neck, + "head": head, + } + + def _forward(self): + if self.training: + self._preprocess() + body_feats = self.backbone(self.inputs) + neck_feats = self.neck(body_feats, self.for_mot) + + if self.training: + yolox_losses = self.head(neck_feats, self.inputs) + yolox_losses.update({'size': self._input_size[0]}) + return yolox_losses + else: + head_outs = self.head(neck_feats) + bbox, bbox_num = self.head.post_process( + head_outs, self.inputs['im_shape'], self.inputs['scale_factor']) + return {'bbox': bbox, 'bbox_num': bbox_num} + + def get_loss(self): + return self._forward() + + def get_pred(self): + return self._forward()
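The `_preprocess`/`_get_size` methods that follow implement YOLOX-style multi-scale training: every `random_interval` steps a new input size is drawn and the batch (and its ground-truth boxes) is rescaled accordingly. A hedged, self-contained sketch of just the size schedule:

```python
# Minimal sketch of the YOLOX size schedule; default arguments mirror the
# class defaults above (stride 32, size_range [15, 25], interval 10).
import random

def next_input_size(step, base_hw=(640, 640), stride=32,
                    size_range=(15, 25), random_interval=10,
                    current=(640, 640)):
    if step % random_interval != 0:
        return current                        # keep the size between redraws
    ratio = base_hw[1] / base_hw[0]           # preserve the W/H aspect ratio
    factor = random.randint(*size_range)      # e.g. 15..25
    return (stride * factor, stride * int(factor * ratio))

size = (640, 640)
for step in range(30):
    size = next_input_size(step, current=size)
    # sizes land on multiples of 32 between 480 and 800 for a 640x640 base
```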
+ + def _preprocess(self): + # YOLOX multi-scale training: resize the batch by interpolation before it enters the network. + self._get_size() + scale_y = self._input_size[0] / self.input_size[0] + scale_x = self._input_size[1] / self.input_size[1] + if scale_x != 1 or scale_y != 1: + self.inputs['image'] = F.interpolate( + self.inputs['image'], + size=self._input_size, + mode='bilinear', + align_corners=False) + gt_bboxes = self.inputs['gt_bbox'] + for i in range(len(gt_bboxes)): + if len(gt_bboxes[i]) > 0: + gt_bboxes[i][:, 0::2] = gt_bboxes[i][:, 0::2] * scale_x + gt_bboxes[i][:, 1::2] = gt_bboxes[i][:, 1::2] * scale_y + self.inputs['gt_bbox'] = gt_bboxes + + def _get_size(self): + # every `random_interval` iters (10 by default), resample self._input_size + image_ratio = self.input_size[1] * 1.0 / self.input_size[0] + if self._step % self.random_interval == 0: + size_factor = random.randint(*self.size_range) + size = [ + self.size_stride * size_factor, + self.size_stride * int(size_factor * image_ratio) + ] + self._input_size = paddle.to_tensor(size) + self._step += 1 diff --git a/ppdet/modeling/assigners/atss_assigner.py b/ppdet/modeling/assigners/atss_assigner.py index aba857e3d88145151e2246681c2ba673675efde1..6406d7bce5b796c125cd489886e9622a3f4ede97 100644 --- a/ppdet/modeling/assigners/atss_assigner.py +++ b/ppdet/modeling/assigners/atss_assigner.py @@ -22,8 +22,7 @@ import paddle.nn as nn import paddle.nn.functional as F from ppdet.core.workspace import register -from ..ops import iou_similarity -from ..bbox_utils import iou_similarity as batch_iou_similarity +from ..bbox_utils import iou_similarity, batch_iou_similarity from ..bbox_utils import bbox_center from .utils import (check_points_inside_bboxes, compute_max_iou_anchor, compute_max_iou_gt) @@ -51,7 +50,6 @@ class ATSSAssigner(nn.Layer): def _gather_topk_pyramid(self, gt2anchor_distances, num_anchors_list, pad_gt_mask): - pad_gt_mask = pad_gt_mask.tile([1, 1, self.topk]).astype(paddle.bool) gt2anchor_distances_list = paddle.split( gt2anchor_distances, num_anchors_list, axis=-1) num_anchors_index = np.cumsum(num_anchors_list).tolist() @@ -61,15 +59,12 @@ for distances, anchors_index in zip(gt2anchor_distances_list, num_anchors_index): num_anchors = distances.shape[-1] - topk_metrics, topk_idxs = paddle.topk( + _, topk_idxs = paddle.topk( distances, self.topk, axis=-1, largest=False) topk_idxs_list.append(topk_idxs + anchors_index) - topk_idxs = paddle.where(pad_gt_mask, topk_idxs, - paddle.zeros_like(topk_idxs)) - is_in_topk = F.one_hot(topk_idxs, num_anchors).sum(axis=-2) - is_in_topk = paddle.where(is_in_topk > 1, - paddle.zeros_like(is_in_topk), is_in_topk) - is_in_topk_list.append(is_in_topk.astype(gt2anchor_distances.dtype)) + is_in_topk = F.one_hot(topk_idxs, num_anchors).sum( + axis=-2).astype(gt2anchor_distances.dtype) + is_in_topk_list.append(is_in_topk * pad_gt_mask) is_in_topk_list = paddle.concat(is_in_topk_list, axis=-1) topk_idxs_list = paddle.concat(topk_idxs_list, axis=-1) return is_in_topk_list, topk_idxs_list @@ -124,7 +119,8 @@ # negative batch if num_max_boxes == 0: - assigned_labels = paddle.full([batch_size, num_anchors], bg_index) + assigned_labels = paddle.full( + [batch_size, num_anchors], bg_index, dtype=gt_labels.dtype) assigned_bboxes = paddle.zeros([batch_size, num_anchors, 4]) assigned_scores = paddle.zeros( [batch_size, num_anchors, self.num_classes]) @@ -154,9 +150,8 @@ iou_threshold = iou_threshold.reshape([batch_size, num_max_boxes, -1]) iou_threshold = iou_threshold.mean(axis=-1, keepdim=True) + 
iou_threshold.std(axis=-1, keepdim=True) - is_in_topk = paddle.where( - iou_candidates > iou_threshold.tile([1, 1, num_anchors]), - is_in_topk, paddle.zeros_like(is_in_topk)) + is_in_topk = paddle.where(iou_candidates > iou_threshold, is_in_topk, + paddle.zeros_like(is_in_topk)) # 6. check the positive sample's center in gt, [B, n, L] is_in_gts = check_points_inside_bboxes(anchor_centers, gt_bboxes) @@ -199,7 +194,11 @@ class ATSSAssigner(nn.Layer): gt_bboxes.reshape([-1, 4]), assigned_gt_index.flatten(), axis=0) assigned_bboxes = assigned_bboxes.reshape([batch_size, num_anchors, 4]) - assigned_scores = F.one_hot(assigned_labels, self.num_classes) + assigned_scores = F.one_hot(assigned_labels, self.num_classes + 1) + ind = list(range(self.num_classes + 1)) + ind.remove(bg_index) + assigned_scores = paddle.index_select( + assigned_scores, paddle.to_tensor(ind), axis=-1) if pred_bboxes is not None: # assigned iou ious = batch_iou_similarity(gt_bboxes, pred_bboxes) * mask_positive diff --git a/ppdet/modeling/assigners/task_aligned_assigner.py b/ppdet/modeling/assigners/task_aligned_assigner.py index b1f47e786df0261d3925d1b5bc776683657385c1..5b3368e06814b4f3793e3f38c15d985cf4e8e6bc 100644 --- a/ppdet/modeling/assigners/task_aligned_assigner.py +++ b/ppdet/modeling/assigners/task_aligned_assigner.py @@ -21,7 +21,7 @@ import paddle.nn as nn import paddle.nn.functional as F from ppdet.core.workspace import register -from ..bbox_utils import iou_similarity +from ..bbox_utils import batch_iou_similarity from .utils import (gather_topk_anchors, check_points_inside_bboxes, compute_max_iou_anchor) @@ -85,14 +85,15 @@ class TaskAlignedAssigner(nn.Layer): # negative batch if num_max_boxes == 0: - assigned_labels = paddle.full([batch_size, num_anchors], bg_index) + assigned_labels = paddle.full( + [batch_size, num_anchors], bg_index, dtype=gt_labels.dtype) assigned_bboxes = paddle.zeros([batch_size, num_anchors, 4]) assigned_scores = paddle.zeros( [batch_size, num_anchors, num_classes]) return assigned_labels, assigned_bboxes, assigned_scores # compute iou between gt and pred bbox, [B, n, L] - ious = iou_similarity(gt_bboxes, pred_bboxes) + ious = batch_iou_similarity(gt_bboxes, pred_bboxes) # gather pred bboxes class score pred_scores = pred_scores.transpose([0, 2, 1]) batch_ind = paddle.arange( @@ -111,9 +112,7 @@ class TaskAlignedAssigner(nn.Layer): # select topk largest alignment metrics pred bbox as candidates # for each gt, [B, n, L] is_in_topk = gather_topk_anchors( - alignment_metrics * is_in_gts, - self.topk, - topk_mask=pad_gt_mask.tile([1, 1, self.topk]).astype(paddle.bool)) + alignment_metrics * is_in_gts, self.topk, topk_mask=pad_gt_mask) # select positive sample, [B, n, L] mask_positive = is_in_topk * is_in_gts * pad_gt_mask @@ -143,7 +142,11 @@ class TaskAlignedAssigner(nn.Layer): gt_bboxes.reshape([-1, 4]), assigned_gt_index.flatten(), axis=0) assigned_bboxes = assigned_bboxes.reshape([batch_size, num_anchors, 4]) - assigned_scores = F.one_hot(assigned_labels, num_classes) + assigned_scores = F.one_hot(assigned_labels, num_classes + 1) + ind = list(range(num_classes + 1)) + ind.remove(bg_index) + assigned_scores = paddle.index_select( + assigned_scores, paddle.to_tensor(ind), axis=-1) # rescale alignment metrics alignment_metrics *= mask_positive max_metrics_per_instance = alignment_metrics.max(axis=-1, keepdim=True) diff --git a/ppdet/modeling/assigners/utils.py b/ppdet/modeling/assigners/utils.py index 0c5e1d94fc06dd2abdf135638f9b17c8ba2ff8cf..0bc399315797b4be04954858fac5cccbbd73ee33 
100644 --- a/ppdet/modeling/assigners/utils.py +++ b/ppdet/modeling/assigners/utils.py @@ -88,7 +88,7 @@ def gather_topk_anchors(metrics, topk, largest=True, topk_mask=None, eps=1e-9): largest (bool) : largest is a flag, if set to true, algorithm will sort by descending order, otherwise sort by ascending order. Default: True - topk_mask (Tensor, bool|None): shape[B, n, topk], ignore bbox mask, + topk_mask (Tensor, float32): shape[B, n, 1], ignore bbox mask, Default: None eps (float): Default: 1e-9 Returns: @@ -98,13 +98,11 @@ def gather_topk_anchors(metrics, topk, largest=True, topk_mask=None, eps=1e-9): topk_metrics, topk_idxs = paddle.topk( metrics, topk, axis=-1, largest=largest) if topk_mask is None: - topk_mask = (topk_metrics.max(axis=-1, keepdim=True) > eps).tile( - [1, 1, topk]) - topk_idxs = paddle.where(topk_mask, topk_idxs, paddle.zeros_like(topk_idxs)) - is_in_topk = F.one_hot(topk_idxs, num_anchors).sum(axis=-2) - is_in_topk = paddle.where(is_in_topk > 1, - paddle.zeros_like(is_in_topk), is_in_topk) - return is_in_topk.astype(metrics.dtype) + topk_mask = ( + topk_metrics.max(axis=-1, keepdim=True) > eps).astype(metrics.dtype) + is_in_topk = F.one_hot(topk_idxs, num_anchors).sum( + axis=-2).astype(metrics.dtype) + return is_in_topk * topk_mask def check_points_inside_bboxes(points, @@ -115,7 +113,7 @@ def check_points_inside_bboxes(points, Args: points (Tensor, float32): shape[L, 2], "xy" format, L: num_anchors bboxes (Tensor, float32): shape[B, n, 4], "xmin, ymin, xmax, ymax" format - center_radius_tensor (Tensor, float32): shape [L, 1] Default: None. + center_radius_tensor (Tensor, float32): shape [L, 1]. Default: None. eps (float): Default: 1e-9 Returns: is_in_bboxes (Tensor, float32): shape[B, n, L], value=1. means selected @@ -123,25 +121,28 @@ def check_points_inside_bboxes(points, points = points.unsqueeze([0, 1]) x, y = points.chunk(2, axis=-1) xmin, ymin, xmax, ymax = bboxes.unsqueeze(2).chunk(4, axis=-1) - if center_radius_tensor is not None: - center_radius_tensor = center_radius_tensor.unsqueeze([0, 1]) - bboxes_cx = (xmin + xmax) / 2. - bboxes_cy = (ymin + ymax) / 2. 
- xmin_sampling = bboxes_cx - center_radius_tensor - ymin_sampling = bboxes_cy - center_radius_tensor - xmax_sampling = bboxes_cx + center_radius_tensor - ymax_sampling = bboxes_cy + center_radius_tensor - - xmin = paddle.maximum(xmin, xmin_sampling) - ymin = paddle.maximum(ymin, ymin_sampling) - xmax = paddle.minimum(xmax, xmax_sampling) - ymax = paddle.minimum(ymax, ymax_sampling) + # check whether `points` is in `bboxes` l = x - xmin t = y - ymin r = xmax - x b = ymax - y - bbox_ltrb = paddle.concat([l, t, r, b], axis=-1) - return (bbox_ltrb.min(axis=-1) > eps).astype(bboxes.dtype) + delta_ltrb = paddle.concat([l, t, r, b], axis=-1) + is_in_bboxes = (delta_ltrb.min(axis=-1) > eps) + if center_radius_tensor is not None: + # check whether `points` is in `center_radius` + center_radius_tensor = center_radius_tensor.unsqueeze([0, 1]) + cx = (xmin + xmax) * 0.5 + cy = (ymin + ymax) * 0.5 + l = x - (cx - center_radius_tensor) + t = y - (cy - center_radius_tensor) + r = (cx + center_radius_tensor) - x + b = (cy + center_radius_tensor) - y + delta_ltrb_c = paddle.concat([l, t, r, b], axis=-1) + is_in_center = (delta_ltrb_c.min(axis=-1) > eps) + return (paddle.logical_and(is_in_bboxes, is_in_center), + paddle.logical_or(is_in_bboxes, is_in_center)) + + return is_in_bboxes.astype(bboxes.dtype) def compute_max_iou_anchor(ious): @@ -175,7 +176,8 @@ def compute_max_iou_gt(ious): def generate_anchors_for_grid_cell(feats, fpn_strides, grid_cell_size=5.0, - grid_cell_offset=0.5): + grid_cell_offset=0.5, + dtype='float32'): r""" Like ATSS, generate anchors based on grid size. Args: @@ -205,16 +207,15 @@ def generate_anchors_for_grid_cell(feats, shift_x - cell_half_size, shift_y - cell_half_size, shift_x + cell_half_size, shift_y + cell_half_size ], - axis=-1).astype(feat.dtype) - anchor_point = paddle.stack( - [shift_x, shift_y], axis=-1).astype(feat.dtype) + axis=-1).astype(dtype) + anchor_point = paddle.stack([shift_x, shift_y], axis=-1).astype(dtype) anchors.append(anchor.reshape([-1, 4])) anchor_points.append(anchor_point.reshape([-1, 2])) num_anchors_list.append(len(anchors[-1])) stride_tensor.append( paddle.full( - [num_anchors_list[-1], 1], stride, dtype=feat.dtype)) + [num_anchors_list[-1], 1], stride, dtype=dtype)) anchors = paddle.concat(anchors) anchors.stop_gradient = True anchor_points = paddle.concat(anchor_points) diff --git a/ppdet/modeling/backbones/__init__.py b/ppdet/modeling/backbones/__init__.py index bbc696dbacd048ce0ca752580ecfc54e11a1433f..12e9354b744dc97e2de584915a8827d137a3f7c2 100644 --- a/ppdet/modeling/backbones/__init__.py +++ b/ppdet/modeling/backbones/__init__.py @@ -1,15 +1,15 @@ -# copyright (c) 2020 PaddlePaddle Authors. All Rights Reserve. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and +# Copyright (c) 2020 PaddlePaddle Authors. All Rights Reserved. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. 
+# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and # limitations under the License. from . import vgg @@ -30,6 +30,10 @@ from . import lcnet from . import hardnet from . import esnet from . import cspresnet +from . import csp_darknet +from . import convnext +from . import vision_transformer +from . import mobileone from .vgg import * from .resnet import * @@ -49,3 +53,7 @@ from .lcnet import * from .hardnet import * from .esnet import * from .cspresnet import * +from .csp_darknet import * +from .convnext import * +from .vision_transformer import * +from .mobileone import * diff --git a/ppdet/modeling/backbones/convnext.py b/ppdet/modeling/backbones/convnext.py new file mode 100644 index 0000000000000000000000000000000000000000..476e12b2da50585dd142f3049ba024769e691e8b --- /dev/null +++ b/ppdet/modeling/backbones/convnext.py @@ -0,0 +1,245 @@ +# Copyright (c) 2022 PaddlePaddle Authors. All Rights Reserved. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +''' +Modified from https://github.com/facebookresearch/ConvNeXt +Copyright (c) Meta Platforms, Inc. and affiliates. +All rights reserved. +This source code is licensed under the license found in the +LICENSE file in the root directory of this source tree. +''' + +import paddle +import paddle.nn as nn +import paddle.nn.functional as F +from paddle import ParamAttr +from paddle.nn.initializer import Constant + +import numpy as np + +from ppdet.core.workspace import register, serializable +from ..shape_spec import ShapeSpec +from .transformer_utils import DropPath, trunc_normal_, zeros_ + +__all__ = ['ConvNeXt'] + + +class Block(nn.Layer): + r""" ConvNeXt Block. There are two equivalent implementations: + (1) DwConv -> LayerNorm (channels_first) -> 1x1 Conv -> GELU -> 1x1 Conv; all in (N, C, H, W) + (2) DwConv -> Permute to (N, H, W, C); LayerNorm (channels_last) -> Linear -> GELU -> Linear; Permute back + We use (2) as we find it slightly faster in Paddle + + Args: + dim (int): Number of input channels. + drop_path (float): Stochastic depth rate. Default: 0.0 + layer_scale_init_value (float): Init value for Layer Scale. Default: 1e-6.
+ """ + + def __init__(self, dim, drop_path=0., layer_scale_init_value=1e-6): + super().__init__() + self.dwconv = nn.Conv2D( + dim, dim, kernel_size=7, padding=3, groups=dim) # depthwise conv + self.norm = LayerNorm(dim, eps=1e-6) + self.pwconv1 = nn.Linear( + dim, 4 * dim) # pointwise/1x1 convs, implemented with linear layers + self.act = nn.GELU() + self.pwconv2 = nn.Linear(4 * dim, dim) + + if layer_scale_init_value > 0: + self.gamma = self.create_parameter( + shape=(dim, ), + attr=ParamAttr(initializer=Constant(layer_scale_init_value))) + else: + self.gamma = None + + self.drop_path = DropPath(drop_path) if drop_path > 0. else nn.Identity( + ) + + def forward(self, x): + input = x + x = self.dwconv(x) + x = x.transpose([0, 2, 3, 1]) + x = self.norm(x) + x = self.pwconv1(x) + x = self.act(x) + x = self.pwconv2(x) + if self.gamma is not None: + x = self.gamma * x + x = x.transpose([0, 3, 1, 2]) + x = input + self.drop_path(x) + return x + + +class LayerNorm(nn.Layer): + r""" LayerNorm that supports two data formats: channels_last (default) or channels_first. + The ordering of the dimensions in the inputs. channels_last corresponds to inputs with + shape (batch_size, height, width, channels) while channels_first corresponds to inputs + with shape (batch_size, channels, height, width). + """ + + def __init__(self, normalized_shape, eps=1e-6, data_format="channels_last"): + super().__init__() + + self.weight = self.create_parameter( + shape=(normalized_shape, ), + attr=ParamAttr(initializer=Constant(1.))) + self.bias = self.create_parameter( + shape=(normalized_shape, ), + attr=ParamAttr(initializer=Constant(0.))) + + self.eps = eps + self.data_format = data_format + if self.data_format not in ["channels_last", "channels_first"]: + raise NotImplementedError + self.normalized_shape = (normalized_shape, ) + + def forward(self, x): + if self.data_format == "channels_last": + return F.layer_norm(x, self.normalized_shape, self.weight, + self.bias, self.eps) + elif self.data_format == "channels_first": + u = x.mean(1, keepdim=True) + s = (x - u).pow(2).mean(1, keepdim=True) + x = (x - u) / paddle.sqrt(s + self.eps) + x = self.weight[:, None, None] * x + self.bias[:, None, None] + return x + + +@register +@serializable +class ConvNeXt(nn.Layer): + r""" ConvNeXt + A Pypaddle impl of : `A ConvNet for the 2020s` - + https://arxiv.org/pdf/2201.03545.pdf + + Args: + in_chans (int): Number of input image channels. Default: 3 + depths (tuple(int)): Number of blocks at each stage. Default: [3, 3, 9, 3] + dims (int): Feature dimension at each stage. Default: [96, 192, 384, 768] + drop_path_rate (float): Stochastic depth rate. Default: 0. + layer_scale_init_value (float): Init value for Layer Scale. Default: 1e-6. 
+ """ + + arch_settings = { + 'tiny': { + 'depths': [3, 3, 9, 3], + 'dims': [96, 192, 384, 768] + }, + 'small': { + 'depths': [3, 3, 27, 3], + 'dims': [96, 192, 384, 768] + }, + 'base': { + 'depths': [3, 3, 27, 3], + 'dims': [128, 256, 512, 1024] + }, + 'large': { + 'depths': [3, 3, 27, 3], + 'dims': [192, 384, 768, 1536] + }, + 'xlarge': { + 'depths': [3, 3, 27, 3], + 'dims': [256, 512, 1024, 2048] + }, + } + + def __init__( + self, + arch='tiny', + in_chans=3, + drop_path_rate=0., + layer_scale_init_value=1e-6, + return_idx=[1, 2, 3], + norm_output=True, + pretrained=None, ): + super().__init__() + depths = self.arch_settings[arch]['depths'] + dims = self.arch_settings[arch]['dims'] + self.downsample_layers = nn.LayerList( + ) # stem and 3 intermediate downsampling conv layers + stem = nn.Sequential( + nn.Conv2D( + in_chans, dims[0], kernel_size=4, stride=4), + LayerNorm( + dims[0], eps=1e-6, data_format="channels_first")) + self.downsample_layers.append(stem) + for i in range(3): + downsample_layer = nn.Sequential( + LayerNorm( + dims[i], eps=1e-6, data_format="channels_first"), + nn.Conv2D( + dims[i], dims[i + 1], kernel_size=2, stride=2), ) + self.downsample_layers.append(downsample_layer) + + self.stages = nn.LayerList( + ) # 4 feature resolution stages, each consisting of multiple residual blocks + dp_rates = [x for x in np.linspace(0, drop_path_rate, sum(depths))] + cur = 0 + for i in range(4): + stage = nn.Sequential(* [ + Block( + dim=dims[i], + drop_path=dp_rates[cur + j], + layer_scale_init_value=layer_scale_init_value) + for j in range(depths[i]) + ]) + self.stages.append(stage) + cur += depths[i] + + self.return_idx = return_idx + self.dims = [dims[i] for i in return_idx] # [::-1] + + self.norm_output = norm_output + if norm_output: + self.norms = nn.LayerList([ + LayerNorm( + c, eps=1e-6, data_format="channels_first") + for c in self.dims + ]) + + self.apply(self._init_weights) + + if pretrained is not None: + if 'http' in pretrained: #URL + path = paddle.utils.download.get_weights_path_from_url( + pretrained) + else: #model in local path + path = pretrained + self.set_state_dict(paddle.load(path)) + + def _init_weights(self, m): + if isinstance(m, (nn.Conv2D, nn.Linear)): + trunc_normal_(m.weight) + zeros_(m.bias) + + def forward_features(self, x): + output = [] + for i in range(4): + x = self.downsample_layers[i](x) + x = self.stages[i](x) + output.append(x) + + outputs = [output[i] for i in self.return_idx] + if self.norm_output: + outputs = [self.norms[i](out) for i, out in enumerate(outputs)] + + return outputs + + def forward(self, x): + x = self.forward_features(x['image']) + return x + + @property + def out_shape(self): + return [ShapeSpec(channels=c) for c in self.dims] diff --git a/ppdet/modeling/backbones/csp_darknet.py b/ppdet/modeling/backbones/csp_darknet.py new file mode 100644 index 0000000000000000000000000000000000000000..4c225d15c8b560385078b19dd3dfafd272858bd4 --- /dev/null +++ b/ppdet/modeling/backbones/csp_darknet.py @@ -0,0 +1,404 @@ +# Copyright (c) 2022 PaddlePaddle Authors. All Rights Reserved. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
+# See the License for the specific language governing permissions and +# limitations under the License. + +import paddle +import paddle.nn as nn +import paddle.nn.functional as F +from paddle import ParamAttr +from paddle.regularizer import L2Decay +from ppdet.core.workspace import register, serializable +from ppdet.modeling.initializer import conv_init_ +from ..shape_spec import ShapeSpec + +__all__ = [ + 'CSPDarkNet', 'BaseConv', 'DWConv', 'BottleNeck', 'SPPLayer', 'SPPFLayer' +] + + +class BaseConv(nn.Layer): + def __init__(self, + in_channels, + out_channels, + ksize, + stride, + groups=1, + bias=False, + act="silu"): + super(BaseConv, self).__init__() + self.conv = nn.Conv2D( + in_channels, + out_channels, + kernel_size=ksize, + stride=stride, + padding=(ksize - 1) // 2, + groups=groups, + bias_attr=bias) + self.bn = nn.BatchNorm2D( + out_channels, + weight_attr=ParamAttr(regularizer=L2Decay(0.0)), + bias_attr=ParamAttr(regularizer=L2Decay(0.0))) + + self._init_weights() + + def _init_weights(self): + conv_init_(self.conv) + + def forward(self, x): + # use 'x * F.sigmoid(x)' to implement the 'silu' activation + x = self.bn(self.conv(x)) + y = x * F.sigmoid(x) + return y + + +class DWConv(nn.Layer): + """Depthwise Conv""" + + def __init__(self, + in_channels, + out_channels, + ksize, + stride=1, + bias=False, + act="silu"): + super(DWConv, self).__init__() + self.dw_conv = BaseConv( + in_channels, + in_channels, + ksize=ksize, + stride=stride, + groups=in_channels, + bias=bias, + act=act) + self.pw_conv = BaseConv( + in_channels, + out_channels, + ksize=1, + stride=1, + groups=1, + bias=bias, + act=act) + + def forward(self, x): + return self.pw_conv(self.dw_conv(x)) + + +class Focus(nn.Layer): + """Focus width and height information into channel space, used in YOLOX.""" + + def __init__(self, + in_channels, + out_channels, + ksize=3, + stride=1, + bias=False, + act="silu"): + super(Focus, self).__init__() + self.conv = BaseConv( + in_channels * 4, + out_channels, + ksize=ksize, + stride=stride, + bias=bias, + act=act) + + def forward(self, inputs): + # inputs [bs, C, H, W] -> outputs [bs, 4C, H/2, W/2] + top_left = inputs[:, :, 0::2, 0::2] + top_right = inputs[:, :, 0::2, 1::2] + bottom_left = inputs[:, :, 1::2, 0::2] + bottom_right = inputs[:, :, 1::2, 1::2] + outputs = paddle.concat( + [top_left, bottom_left, top_right, bottom_right], 1) + return self.conv(outputs) + + +class BottleNeck(nn.Layer): + def __init__(self, + in_channels, + out_channels, + shortcut=True, + expansion=0.5, + depthwise=False, + bias=False, + act="silu"): + super(BottleNeck, self).__init__() + hidden_channels = int(out_channels * expansion) + Conv = DWConv if depthwise else BaseConv + self.conv1 = BaseConv( + in_channels, hidden_channels, ksize=1, stride=1, bias=bias, act=act) + self.conv2 = Conv( + hidden_channels, + out_channels, + ksize=3, + stride=1, + bias=bias, + act=act) + self.add_shortcut = shortcut and in_channels == out_channels + + def forward(self, x): + y = self.conv2(self.conv1(x)) + if self.add_shortcut: + y = y + x + return y + + +class SPPLayer(nn.Layer): + """Spatial Pyramid Pooling (SPP) layer used in YOLOv3-SPP and YOLOX""" + + def __init__(self, + in_channels, + out_channels, + kernel_sizes=(5, 9, 13), + bias=False, + act="silu"): + super(SPPLayer, self).__init__() + hidden_channels = in_channels // 2 + self.conv1 = BaseConv( + in_channels, hidden_channels, ksize=1, stride=1, bias=bias, act=act) + self.maxpoolings = nn.LayerList([ + nn.MaxPool2D( + kernel_size=ks, stride=1, padding=ks // 2) + for ks in
kernel_sizes + ]) + conv2_channels = hidden_channels * (len(kernel_sizes) + 1) + self.conv2 = BaseConv( + conv2_channels, out_channels, ksize=1, stride=1, bias=bias, act=act) + + def forward(self, x): + x = self.conv1(x) + x = paddle.concat([x] + [mp(x) for mp in self.maxpoolings], axis=1) + x = self.conv2(x) + return x + + +class SPPFLayer(nn.Layer): + """ Spatial Pyramid Pooling - Fast (SPPF) layer used in YOLOv5 by Glenn Jocher, + equivalent to SPP(k=(5, 9, 13)) + """ + + def __init__(self, + in_channels, + out_channels, + ksize=5, + bias=False, + act='silu'): + super(SPPFLayer, self).__init__() + hidden_channels = in_channels // 2 + self.conv1 = BaseConv( + in_channels, hidden_channels, ksize=1, stride=1, bias=bias, act=act) + self.maxpooling = nn.MaxPool2D( + kernel_size=ksize, stride=1, padding=ksize // 2) + conv2_channels = hidden_channels * 4 + self.conv2 = BaseConv( + conv2_channels, out_channels, ksize=1, stride=1, bias=bias, act=act) + + def forward(self, x): + x = self.conv1(x) + y1 = self.maxpooling(x) + y2 = self.maxpooling(y1) + y3 = self.maxpooling(y2) + concats = paddle.concat([x, y1, y2, y3], axis=1) + out = self.conv2(concats) + return out + + +class CSPLayer(nn.Layer): + """CSP (Cross Stage Partial) layer with 3 convs, named C3 in YOLOv5""" + + def __init__(self, + in_channels, + out_channels, + num_blocks=1, + shortcut=True, + expansion=0.5, + depthwise=False, + bias=False, + act="silu"): + super(CSPLayer, self).__init__() + hidden_channels = int(out_channels * expansion) + self.conv1 = BaseConv( + in_channels, hidden_channels, ksize=1, stride=1, bias=bias, act=act) + self.conv2 = BaseConv( + in_channels, hidden_channels, ksize=1, stride=1, bias=bias, act=act) + self.bottlenecks = nn.Sequential(* [ + BottleNeck( + hidden_channels, + hidden_channels, + shortcut=shortcut, + expansion=1.0, + depthwise=depthwise, + bias=bias, + act=act) for _ in range(num_blocks) + ]) + self.conv3 = BaseConv( + hidden_channels * 2, + out_channels, + ksize=1, + stride=1, + bias=bias, + act=act) + + def forward(self, x): + x_1 = self.conv1(x) + x_1 = self.bottlenecks(x_1) + x_2 = self.conv2(x) + x = paddle.concat([x_1, x_2], axis=1) + x = self.conv3(x) + return x + + +@register +@serializable +class CSPDarkNet(nn.Layer): + """ + CSPDarkNet backbone. + Args: + arch (str): Architecture of CSPDarkNet, from {P5, P6, X}, default as X, + and 'X' means used in YOLOX, 'P5/P6' means used in YOLOv5. + depth_mult (float): Depth multiplier, multiplies the number of blocks in + each CSPLayer, default as 1.0. + width_mult (float): Width multiplier, multiplies the number of channels in + each layer, default as 1.0. + depthwise (bool): Whether to use depth-wise conv layer. + act (str): Activation function type, default as 'silu'. + return_idx (list): Index of stages whose feature maps are returned. + """ + + __shared__ = ['depth_mult', 'width_mult', 'act', 'trt'] + + # in_channels, out_channels, num_blocks, add_shortcut, use_spp(use_sppf) + # 'X' means setting used in YOLOX, 'P5/P6' means setting used in YOLOv5.
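Each row of the `arch_settings` table that follows reads as `(in_channels, out_channels, num_blocks, add_shortcut, use_spp)`. Before a stage is built, `width_mult` scales the channel counts and `depth_mult` scales the block count, exactly as the constructor does further down. A tiny worked example, assuming YOLOX-s style multipliers (values are illustrative, not from the patch):

```python
# One 'X' row scaled the way CSPDarkNet.__init__ below does it,
# e.g. with width_mult=0.50 and depth_mult=0.33:
width_mult, depth_mult = 0.50, 0.33
in_c, out_c, n, shortcut, use_spp = [256, 512, 9, True, False]

in_channels = int(in_c * width_mult)        # 128
out_channels = int(out_c * width_mult)      # 256
num_blocks = max(round(n * depth_mult), 1)  # round(2.97) -> 3
```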
+ arch_settings = { + 'X': [[64, 128, 3, True, False], [128, 256, 9, True, False], + [256, 512, 9, True, False], [512, 1024, 3, False, True]], + 'P5': [[64, 128, 3, True, False], [128, 256, 6, True, False], + [256, 512, 9, True, False], [512, 1024, 3, True, True]], + 'P6': [[64, 128, 3, True, False], [128, 256, 6, True, False], + [256, 512, 9, True, False], [512, 768, 3, True, False], + [768, 1024, 3, True, True]], + } + + def __init__(self, + arch='X', + depth_mult=1.0, + width_mult=1.0, + depthwise=False, + act='silu', + trt=False, + return_idx=[2, 3, 4]): + super(CSPDarkNet, self).__init__() + self.arch = arch + self.return_idx = return_idx + Conv = DWConv if depthwise else BaseConv + arch_setting = self.arch_settings[arch] + base_channels = int(arch_setting[0][0] * width_mult) + + # Note: differences between the latest YOLOv5 and the original YOLOX + # 1. self.stem, use Conv stem (in YOLOv5) or Focus stem (in YOLOX) + # 2. use SPPF (in YOLOv5) or SPP (in YOLOX) + # 3. put SPPF before (YOLOv5) or SPP after (YOLOX) the last cspdark block's CSPLayer + # 4. whether the SPPF(SPP) block's CSPLayer adds a shortcut, True in YOLOv5, False in YOLOX + if arch in ['P5', 'P6']: + # in the latest YOLOv5, use Conv stem, and SPPF (fast, only a single spp kernel size) + self.stem = Conv( + 3, base_channels, ksize=6, stride=2, bias=False, act=act) + spp_kernel_sizes = 5 + elif arch in ['X']: + # in the original YOLOX, use Focus stem, and SPP (three spp kernel sizes) + self.stem = Focus( + 3, base_channels, ksize=3, stride=1, bias=False, act=act) + spp_kernel_sizes = (5, 9, 13) + else: + raise AttributeError("Unsupported arch type: {}".format(arch)) + + _out_channels = [base_channels] + layers_num = 1 + self.csp_dark_blocks = [] + + for i, (in_channels, out_channels, num_blocks, shortcut, + use_spp) in enumerate(arch_setting): + in_channels = int(in_channels * width_mult) + out_channels = int(out_channels * width_mult) + _out_channels.append(out_channels) + num_blocks = max(round(num_blocks * depth_mult), 1) + stage = [] + + conv_layer = self.add_sublayer( + 'layers{}.stage{}.conv_layer'.format(layers_num, i + 1), + Conv( + in_channels, out_channels, 3, 2, bias=False, act=act)) + stage.append(conv_layer) + layers_num += 1 + + if use_spp and arch in ['X']: + # in YOLOX use SPPLayer + spp_layer = self.add_sublayer( + 'layers{}.stage{}.spp_layer'.format(layers_num, i + 1), + SPPLayer( + out_channels, + out_channels, + kernel_sizes=spp_kernel_sizes, + bias=False, + act=act)) + stage.append(spp_layer) + layers_num += 1 + + csp_layer = self.add_sublayer( + 'layers{}.stage{}.csp_layer'.format(layers_num, i + 1), + CSPLayer( + out_channels, + out_channels, + num_blocks=num_blocks, + shortcut=shortcut, + depthwise=depthwise, + bias=False, + act=act)) + stage.append(csp_layer) + layers_num += 1 + + if use_spp and arch in ['P5', 'P6']: + # in latest YOLOv5 use SPPFLayer instead of SPPLayer + sppf_layer = self.add_sublayer( + 'layers{}.stage{}.sppf_layer'.format(layers_num, i + 1), + SPPFLayer( + out_channels, + out_channels, + ksize=5, + bias=False, + act=act)) + stage.append(sppf_layer) + layers_num += 1 + + self.csp_dark_blocks.append(nn.Sequential(*stage)) + + self._out_channels = [_out_channels[i] for i in self.return_idx] + self.strides = [[2, 4, 8, 16, 32, 64][i] for i in self.return_idx] + + def forward(self, inputs): + x = inputs['image'] + outputs = [] + x = self.stem(x) + for i, layer in enumerate(self.csp_dark_blocks): + x = layer(x) + if i + 1 in self.return_idx: + outputs.append(x) + return outputs + + @property + def out_shape(self): +
return [ + ShapeSpec( + channels=c, stride=s) + for c, s in zip(self._out_channels, self.strides) + ] diff --git a/ppdet/modeling/backbones/cspresnet.py b/ppdet/modeling/backbones/cspresnet.py index 4e0916320f4d25c84dbc8fac0d0e46b1d6c6f942..5268ec835381052988b9ceaca47c89ab2755bec9 100644 --- a/ppdet/modeling/backbones/cspresnet.py +++ b/ppdet/modeling/backbones/cspresnet.py @@ -21,6 +21,7 @@ import paddle.nn as nn import paddle.nn.functional as F from paddle import ParamAttr from paddle.regularizer import L2Decay +from paddle.nn.initializer import Constant from ppdet.modeling.ops import get_act_fn from ppdet.core.workspace import register, serializable @@ -65,7 +66,7 @@ class ConvBNLayer(nn.Layer): class RepVggBlock(nn.Layer): - def __init__(self, ch_in, ch_out, act='relu'): + def __init__(self, ch_in, ch_out, act='relu', alpha=False): super(RepVggBlock, self).__init__() self.ch_in = ch_in self.ch_out = ch_out @@ -75,12 +76,22 @@ class RepVggBlock(nn.Layer): ch_in, ch_out, 1, stride=1, padding=0, act=None) self.act = get_act_fn(act) if act is None or isinstance(act, ( str, dict)) else act + if alpha: + self.alpha = self.create_parameter( + shape=[1], + attr=ParamAttr(initializer=Constant(value=1.)), + dtype="float32") + else: + self.alpha = None def forward(self, x): if hasattr(self, 'conv'): y = self.conv(x) else: - y = self.conv1(x) + self.conv2(x) + if self.alpha: + y = self.conv1(x) + self.alpha * self.conv2(x) + else: + y = self.conv1(x) + self.conv2(x) y = self.act(y) return y @@ -96,12 +107,18 @@ class RepVggBlock(nn.Layer): kernel, bias = self.get_equivalent_kernel_bias() self.conv.weight.set_value(kernel) self.conv.bias.set_value(bias) + self.__delattr__('conv1') + self.__delattr__('conv2') def get_equivalent_kernel_bias(self): kernel3x3, bias3x3 = self._fuse_bn_tensor(self.conv1) kernel1x1, bias1x1 = self._fuse_bn_tensor(self.conv2) - return kernel3x3 + self._pad_1x1_to_3x3_tensor( - kernel1x1), bias3x3 + bias1x1 + if self.alpha: + return kernel3x3 + self.alpha * self._pad_1x1_to_3x3_tensor( + kernel1x1), bias3x3 + self.alpha * bias1x1 + else: + return kernel3x3 + self._pad_1x1_to_3x3_tensor( + kernel1x1), bias3x3 + bias1x1 def _pad_1x1_to_3x3_tensor(self, kernel1x1): if kernel1x1 is None: @@ -124,11 +141,16 @@ class RepVggBlock(nn.Layer): class BasicBlock(nn.Layer): - def __init__(self, ch_in, ch_out, act='relu', shortcut=True): + def __init__(self, + ch_in, + ch_out, + act='relu', + shortcut=True, + use_alpha=False): super(BasicBlock, self).__init__() assert ch_in == ch_out self.conv1 = ConvBNLayer(ch_in, ch_out, 3, stride=1, padding=1, act=act) - self.conv2 = RepVggBlock(ch_out, ch_out, act=act) + self.conv2 = RepVggBlock(ch_out, ch_out, act=act, alpha=use_alpha) self.shortcut = shortcut def forward(self, x): @@ -165,7 +187,8 @@ class CSPResStage(nn.Layer): n, stride, act='relu', - attn='eca'): + attn='eca', + use_alpha=False): super(CSPResStage, self).__init__() ch_mid = (ch_in + ch_out) // 2 @@ -176,10 +199,13 @@ class CSPResStage(nn.Layer): self.conv_down = None self.conv1 = ConvBNLayer(ch_mid, ch_mid // 2, 1, act=act) self.conv2 = ConvBNLayer(ch_mid, ch_mid // 2, 1, act=act) - self.blocks = nn.Sequential(* [ + self.blocks = nn.Sequential(*[ block_fn( - ch_mid // 2, ch_mid // 2, act=act, shortcut=True) - for i in range(n) + ch_mid // 2, + ch_mid // 2, + act=act, + shortcut=True, + use_alpha=use_alpha) for i in range(n) ]) if attn: self.attn = EffectiveSELayer(ch_mid, act='hardsigmoid') @@ -209,13 +235,17 @@ class CSPResNet(nn.Layer): layers=[3, 6, 6, 3], channels=[64, 128, 
256, 512, 1024], act='swish', - return_idx=[0, 1, 2, 3, 4], + return_idx=[1, 2, 3], depth_wise=False, use_large_stem=False, width_mult=1.0, depth_mult=1.0, - trt=False): + trt=False, + use_checkpoint=False, + use_alpha=False, + **args): super(CSPResNet, self).__init__() + self.use_checkpoint = use_checkpoint channels = [max(round(c * width_mult), 1) for c in channels] layers = [max(round(l * depth_mult), 1) for l in layers] act = get_act_fn( @@ -252,20 +282,31 @@ class CSPResNet(nn.Layer): act=act))) n = len(channels) - 1 - self.stages = nn.Sequential(* [(str(i), CSPResStage( - BasicBlock, channels[i], channels[i + 1], layers[i], 2, act=act)) - for i in range(n)]) + self.stages = nn.Sequential(*[(str(i), CSPResStage( + BasicBlock, + channels[i], + channels[i + 1], + layers[i], + 2, + act=act, + use_alpha=use_alpha)) for i in range(n)]) self._out_channels = channels[1:] - self._out_strides = [4, 8, 16, 32] + self._out_strides = [4 * 2**i for i in range(n)] self.return_idx = return_idx + if use_checkpoint: + paddle.seed(0) def forward(self, inputs): x = inputs['image'] x = self.stem(x) outs = [] for idx, stage in enumerate(self.stages): - x = stage(x) + if self.use_checkpoint and self.training: + x = paddle.distributed.fleet.utils.recompute( + stage, x, **{"preserve_rng_state": True}) + else: + x = stage(x) if idx in self.return_idx: outs.append(x) diff --git a/ppdet/modeling/backbones/hardnet.py b/ppdet/modeling/backbones/hardnet.py index 14a1599dfbfac36ed54fd55126e77e9046666e8a..8615fb6a67f316cace6f2e9fb0132becf52f2d71 100644 --- a/ppdet/modeling/backbones/hardnet.py +++ b/ppdet/modeling/backbones/hardnet.py @@ -146,7 +146,7 @@ class HarDBlock(nn.Layer): class HarDNet(nn.Layer): def __init__(self, depth_wise=False, return_idx=[1, 3, 8, 13], arch=85): super(HarDNet, self).__init__() - assert arch in [39, 68, 85], "HarDNet-{} not support.".format(arch) + assert arch in [68, 85], "HarDNet-{} is not supported.".format(arch) if arch == 85: first_ch = [48, 96] second_kernel = 3 @@ -161,6 +161,8 @@ class HarDNet(nn.Layer): grmul = 1.7 gr = [14, 16, 20, 40] n_layers = [8, 16, 16, 16] + else: + raise ValueError("HarDNet-{} is not supported.".format(arch)) self.return_idx = return_idx self._out_channels = [96, 214, 458, 784] diff --git a/ppdet/modeling/backbones/mobileone.py b/ppdet/modeling/backbones/mobileone.py new file mode 100644 index 0000000000000000000000000000000000000000..e548badd3ed714946e961bc29459191ec0ab7fcb --- /dev/null +++ b/ppdet/modeling/backbones/mobileone.py @@ -0,0 +1,266 @@ +# Copyright (c) 2021 PaddlePaddle Authors. All Rights Reserved. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +""" +This code is the paddle implementation of MobileOne block, see: https://arxiv.org/pdf/2206.04040.pdf. 
+Some codes are based on https://github.com/DingXiaoH/RepVGG/blob/main/repvgg.py +The copyright of microsoft/Swin-Transformer is as follows: +MIT License [see LICENSE for details] +""" + +import paddle +import paddle.nn as nn +from paddle import ParamAttr +from paddle.regularizer import L2Decay +from paddle.nn.initializer import Normal, Constant + +from ppdet.modeling.ops import get_act_fn +from ppdet.modeling.layers import ConvNormLayer + + +class MobileOneBlock(nn.Layer): + def __init__( + self, + ch_in, + ch_out, + stride, + kernel_size, + conv_num=1, + norm_type='bn', + norm_decay=0., + norm_groups=32, + bias_on=False, + lr_scale=1., + freeze_norm=False, + initializer=Normal( + mean=0., std=0.01), + skip_quant=False, + act='relu', ): + super(MobileOneBlock, self).__init__() + + self.ch_in = ch_in + self.ch_out = ch_out + self.kernel_size = kernel_size + self.stride = stride + self.padding = (kernel_size - 1) // 2 + self.k = conv_num + + self.depth_conv = nn.LayerList() + self.point_conv = nn.LayerList() + for _ in range(self.k): + self.depth_conv.append( + ConvNormLayer( + ch_in, + ch_in, + kernel_size, + stride=stride, + groups=ch_in, + norm_type=norm_type, + norm_decay=norm_decay, + norm_groups=norm_groups, + bias_on=bias_on, + lr_scale=lr_scale, + freeze_norm=freeze_norm, + initializer=initializer, + skip_quant=skip_quant)) + self.point_conv.append( + ConvNormLayer( + ch_in, + ch_out, + 1, + stride=1, + groups=1, + norm_type=norm_type, + norm_decay=norm_decay, + norm_groups=norm_groups, + bias_on=bias_on, + lr_scale=lr_scale, + freeze_norm=freeze_norm, + initializer=initializer, + skip_quant=skip_quant)) + self.rbr_1x1 = ConvNormLayer( + ch_in, + ch_in, + 1, + stride=self.stride, + groups=ch_in, + norm_type=norm_type, + norm_decay=norm_decay, + norm_groups=norm_groups, + bias_on=bias_on, + lr_scale=lr_scale, + freeze_norm=freeze_norm, + initializer=initializer, + skip_quant=skip_quant) + self.rbr_identity_st1 = nn.BatchNorm2D( + num_features=ch_in, + weight_attr=ParamAttr(regularizer=L2Decay(0.0)), + bias_attr=ParamAttr(regularizer=L2Decay( + 0.0))) if ch_in == ch_out and self.stride == 1 else None + self.rbr_identity_st2 = nn.BatchNorm2D( + num_features=ch_out, + weight_attr=ParamAttr(regularizer=L2Decay(0.0)), + bias_attr=ParamAttr(regularizer=L2Decay( + 0.0))) if ch_in == ch_out and self.stride == 1 else None + self.act = get_act_fn(act) if act is None or isinstance(act, ( + str, dict)) else act + + def forward(self, x): + if hasattr(self, "conv1") and hasattr(self, "conv2"): + y = self.act(self.conv2(self.act(self.conv1(x)))) + else: + if self.rbr_identity_st1 is None: + id_out_st1 = 0 + else: + id_out_st1 = self.rbr_identity_st1(x) + + x1_1 = 0 + for i in range(self.k): + x1_1 += self.depth_conv[i](x) + + x1_2 = self.rbr_1x1(x) + x1 = self.act(x1_1 + x1_2 + id_out_st1) + + if self.rbr_identity_st2 is None: + id_out_st2 = 0 + else: + id_out_st2 = self.rbr_identity_st2(x1) + + x2_1 = 0 + for i in range(self.k): + x2_1 += self.point_conv[i](x1) + y = self.act(x2_1 + id_out_st2) + + return y + + def convert_to_deploy(self): + if not hasattr(self, 'conv1'): + self.conv1 = nn.Conv2D( + in_channels=self.ch_in, + out_channels=self.ch_in, + kernel_size=self.kernel_size, + stride=self.stride, + padding=self.padding, + groups=self.ch_in, + bias_attr=ParamAttr( + initializer=Constant(value=0.), learning_rate=1.)) + if not hasattr(self, 'conv2'): + self.conv2 = nn.Conv2D( + in_channels=self.ch_in, + out_channels=self.ch_out, + kernel_size=1, + stride=1, + padding='SAME', + groups=1, +
bias_attr=ParamAttr( + initializer=Constant(value=0.), learning_rate=1.)) + + conv1_kernel, conv1_bias, conv2_kernel, conv2_bias = self.get_equivalent_kernel_bias( + ) + self.conv1.weight.set_value(conv1_kernel) + self.conv1.bias.set_value(conv1_bias) + self.conv2.weight.set_value(conv2_kernel) + self.conv2.bias.set_value(conv2_bias) + self.__delattr__('depth_conv') + self.__delattr__('point_conv') + self.__delattr__('rbr_1x1') + if hasattr(self, 'rbr_identity_st1'): + self.__delattr__('rbr_identity_st1') + if hasattr(self, 'rbr_identity_st2'): + self.__delattr__('rbr_identity_st2') + + def get_equivalent_kernel_bias(self): + st1_kernel3x3, st1_bias3x3 = self._fuse_bn_tensor(self.depth_conv) + st1_kernel1x1, st1_bias1x1 = self._fuse_bn_tensor(self.rbr_1x1) + st1_kernelid, st1_biasid = self._fuse_bn_tensor( + self.rbr_identity_st1, kernel_size=self.kernel_size) + + st2_kernel1x1, st2_bias1x1 = self._fuse_bn_tensor(self.point_conv) + st2_kernelid, st2_biasid = self._fuse_bn_tensor( + self.rbr_identity_st2, kernel_size=1) + + conv1_kernel = st1_kernel3x3 + self._pad_1x1_to_3x3_tensor( + st1_kernel1x1) + st1_kernelid + + conv1_bias = st1_bias3x3 + st1_bias1x1 + st1_biasid + + conv2_kernel = st2_kernel1x1 + st2_kernelid + conv2_bias = st2_bias1x1 + st2_biasid + + return conv1_kernel, conv1_bias, conv2_kernel, conv2_bias + + def _pad_1x1_to_3x3_tensor(self, kernel1x1): + if kernel1x1 is None: + return 0 + else: + padding_size = (self.kernel_size - 1) // 2 + return nn.functional.pad( + kernel1x1, + [padding_size, padding_size, padding_size, padding_size]) + + def _fuse_bn_tensor(self, branch, kernel_size=3): + if branch is None: + return 0, 0 + + if isinstance(branch, nn.LayerList): + fused_kernels = [] + fused_bias = [] + for block in branch: + kernel = block.conv.weight + running_mean = block.norm._mean + running_var = block.norm._variance + gamma = block.norm.weight + beta = block.norm.bias + eps = block.norm._epsilon + + std = (running_var + eps).sqrt() + t = (gamma / std).reshape((-1, 1, 1, 1)) + + fused_kernels.append(kernel * t) + fused_bias.append(beta - running_mean * gamma / std) + + return sum(fused_kernels), sum(fused_bias) + + elif isinstance(branch, ConvNormLayer): + kernel = branch.conv.weight + running_mean = branch.norm._mean + running_var = branch.norm._variance + gamma = branch.norm.weight + beta = branch.norm.bias + eps = branch.norm._epsilon + else: + assert isinstance(branch, nn.BatchNorm2D) + input_dim = self.ch_in if kernel_size == 1 else 1 + kernel_value = paddle.zeros( + shape=[self.ch_in, input_dim, kernel_size, kernel_size], + dtype='float32') + if kernel_size > 1: + for i in range(self.ch_in): + kernel_value[i, i % input_dim, (kernel_size - 1) // 2, ( + kernel_size - 1) // 2] = 1 + elif kernel_size == 1: + for i in range(self.ch_in): + kernel_value[i, i % input_dim, 0, 0] = 1 + else: + raise ValueError("Invalid kernel size received!") + kernel = paddle.to_tensor(kernel_value, place=branch.weight.place) + running_mean = branch._mean + running_var = branch._variance + gamma = branch.weight + beta = branch.bias + eps = branch._epsilon + + std = (running_var + eps).sqrt() + t = (gamma / std).reshape((-1, 1, 1, 1)) + + return kernel * t, beta - running_mean * gamma / std diff --git a/ppdet/modeling/backbones/shufflenet_v2.py b/ppdet/modeling/backbones/shufflenet_v2.py index 996697ad719e29f0c4e8c2845dfed4be5e5808fb..ca7ebb93fb8099aa07f348a051d9c9e2f95e3a5f 100644 --- a/ppdet/modeling/backbones/shufflenet_v2.py +++ b/ppdet/modeling/backbones/shufflenet_v2.py @@ -188,11
+188,10 @@ class ShuffleNetV2(nn.Layer): elif scale == 1.5: stage_out_channels = [-1, 24, 176, 352, 704, 1024] elif scale == 2.0: - stage_out_channels = [-1, 24, 224, 488, 976, 2048] + stage_out_channels = [-1, 24, 244, 488, 976, 2048] else: raise NotImplementedError("This scale size:[" + str(scale) + "] is not implemented!") - self._out_channels = [] self._feature_idx = 0 # 1. conv1 diff --git a/ppdet/modeling/backbones/swin_transformer.py b/ppdet/modeling/backbones/swin_transformer.py index 8509b5164195581890e52d1ed46f4c04d6e76616..aa4311ff812dffcfe889b843ad9a5ec6a5ce8e48 100644 --- a/ppdet/modeling/backbones/swin_transformer.py +++ b/ppdet/modeling/backbones/swin_transformer.py @@ -20,62 +20,13 @@ MIT License [see LICENSE for details] import paddle import paddle.nn as nn import paddle.nn.functional as F -from paddle.nn.initializer import TruncatedNormal, Constant, Assign from ppdet.modeling.shape_spec import ShapeSpec from ppdet.core.workspace import register, serializable import numpy as np -# Common initializations -ones_ = Constant(value=1.) -zeros_ = Constant(value=0.) -trunc_normal_ = TruncatedNormal(std=.02) - - -# Common Functions -def to_2tuple(x): - return tuple([x] * 2) - - -def add_parameter(layer, datas, name=None): - parameter = layer.create_parameter( - shape=(datas.shape), default_initializer=Assign(datas)) - if name: - layer.add_parameter(name, parameter) - return parameter - - -# Common Layers -def drop_path(x, drop_prob=0., training=False): - """ - Drop paths (Stochastic Depth) per sample (when applied in main path of residual blocks). - the original name is misleading as 'Drop Connect' is a different form of dropout in a separate paper... - See discussion: https://github.com/tensorflow/tpu/issues/494#issuecomment-532968956 ... - """ - if drop_prob == 0. 
or not training: - return x - keep_prob = paddle.to_tensor(1 - drop_prob) - shape = (paddle.shape(x)[0], ) + (1, ) * (x.ndim - 1) - random_tensor = keep_prob + paddle.rand(shape, dtype=x.dtype) - random_tensor = paddle.floor(random_tensor) # binarize - output = x.divide(keep_prob) * random_tensor - return output - - -class DropPath(nn.Layer): - def __init__(self, drop_prob=None): - super(DropPath, self).__init__() - self.drop_prob = drop_prob - - def forward(self, x): - return drop_path(x, self.drop_prob, self.training) - - -class Identity(nn.Layer): - def __init__(self): - super(Identity, self).__init__() - - def forward(self, input): - return input +from .transformer_utils import DropPath, Identity +from .transformer_utils import add_parameter, to_2tuple +from .transformer_utils import ones_, zeros_, trunc_normal_ class Mlp(nn.Layer): @@ -112,7 +63,7 @@ def window_partition(x, window_size): """ B, H, W, C = x.shape x = x.reshape( - [B, H // window_size, window_size, W // window_size, window_size, C]) + [-1, H // window_size, window_size, W // window_size, window_size, C]) windows = x.transpose([0, 1, 3, 2, 4, 5]).reshape( [-1, window_size, window_size, C]) return windows @@ -128,10 +79,11 @@ def window_reverse(windows, window_size, H, W): Returns: x: (B, H, W, C) """ + _, _, _, C = windows.shape B = int(windows.shape[0] / (H * W / window_size / window_size)) x = windows.reshape( - [B, H // window_size, W // window_size, window_size, window_size, -1]) - x = x.transpose([0, 1, 3, 2, 4, 5]).reshape([B, H, W, -1]) + [-1, H // window_size, W // window_size, window_size, window_size, C]) + x = x.transpose([0, 1, 3, 2, 4, 5]).reshape([-1, H, W, C]) return x @@ -206,14 +158,14 @@ class WindowAttention(nn.Layer): """ B_, N, C = x.shape qkv = self.qkv(x).reshape( - [B_, N, 3, self.num_heads, C // self.num_heads]).transpose( + [-1, N, 3, self.num_heads, C // self.num_heads]).transpose( [2, 0, 3, 1, 4]) q, k, v = qkv[0], qkv[1], qkv[2] q = q * self.scale attn = paddle.mm(q, k.transpose([0, 1, 3, 2])) - index = self.relative_position_index.reshape([-1]) + index = self.relative_position_index.flatten() relative_position_bias = paddle.index_select( self.relative_position_bias_table, index) @@ -227,7 +179,7 @@ class WindowAttention(nn.Layer): if mask is not None: nW = mask.shape[0] - attn = attn.reshape([B_ // nW, nW, self.num_heads, N, N + attn = attn.reshape([-1, nW, self.num_heads, N, N ]) + mask.unsqueeze(1).unsqueeze(0) attn = attn.reshape([-1, self.num_heads, N, N]) attn = self.softmax(attn) @@ -237,7 +189,7 @@ class WindowAttention(nn.Layer): attn = self.attn_drop(attn) # x = (attn @ v).transpose(1, 2).reshape([B_, N, C]) - x = paddle.mm(attn, v).transpose([0, 2, 1, 3]).reshape([B_, N, C]) + x = paddle.mm(attn, v).transpose([0, 2, 1, 3]).reshape([-1, N, C]) x = self.proj(x) x = self.proj_drop(x) return x @@ -315,7 +267,7 @@ class SwinTransformerBlock(nn.Layer): shortcut = x x = self.norm1(x) - x = x.reshape([B, H, W, C]) + x = x.reshape([-1, H, W, C]) # pad feature maps to multiples of window size pad_l = pad_t = 0 @@ -337,7 +289,7 @@ class SwinTransformerBlock(nn.Layer): x_windows = window_partition( shifted_x, self.window_size) # nW*B, window_size, window_size, C x_windows = x_windows.reshape( - [-1, self.window_size * self.window_size, + [x_windows.shape[0], self.window_size * self.window_size, C]) # nW*B, window_size*window_size, C # W-MSA/SW-MSA @@ -346,7 +298,7 @@ class SwinTransformerBlock(nn.Layer): # merge windows attn_windows = attn_windows.reshape( - [-1, self.window_size, 
self.window_size, C]) + [x_windows.shape[0], self.window_size, self.window_size, C]) shifted_x = window_reverse(attn_windows, self.window_size, Hp, Wp) # B H' W' C @@ -362,7 +314,7 @@ class SwinTransformerBlock(nn.Layer): if pad_r > 0 or pad_b > 0: x = x[:, :H, :W, :] - x = x.reshape([B, H * W, C]) + x = x.reshape([-1, H * W, C]) # FFN x = shortcut + self.drop_path(x) @@ -393,7 +345,7 @@ class PatchMerging(nn.Layer): B, L, C = x.shape assert L == H * W, "input feature has wrong size" - x = x.reshape([B, H, W, C]) + x = x.reshape([-1, H, W, C]) # padding pad_input = (H % 2 == 1) or (W % 2 == 1) @@ -405,7 +357,7 @@ class PatchMerging(nn.Layer): x2 = x[:, 0::2, 1::2, :] # B H/2 W/2 C x3 = x[:, 1::2, 1::2, :] # B H/2 W/2 C x = paddle.concat([x0, x1, x2, x3], -1) # B H/2 W/2 4*C - x = x.reshape([B, H * W // 4, 4 * C]) # B H/2*W/2 4*C + x = x.reshape([-1, H * W // 4, 4 * C]) # B H/2*W/2 4*C x = self.norm(x) x = self.reduction(x) @@ -482,8 +434,7 @@ class BasicLayer(nn.Layer): # calculate attention mask for SW-MSA Hp = int(np.ceil(H / self.window_size)) * self.window_size Wp = int(np.ceil(W / self.window_size)) * self.window_size - img_mask = paddle.fluid.layers.zeros( - [1, Hp, Wp, 1], dtype='float32') # 1 Hp Wp 1 + img_mask = paddle.zeros([1, Hp, Wp, 1], dtype='float32') # 1 Hp Wp 1 h_slices = (slice(0, -self.window_size), slice(-self.window_size, -self.shift_size), slice(-self.shift_size, None)) @@ -688,10 +639,10 @@ class SwinTransformer(nn.Layer): if self.frozen_stages >= 0: self.patch_embed.eval() for param in self.patch_embed.parameters(): - param.requires_grad = False + param.stop_gradient = True if self.frozen_stages >= 1 and self.ape: - self.absolute_pos_embed.requires_grad = False + self.absolute_pos_embed.stop_gradient = True if self.frozen_stages >= 2: self.pos_drop.eval() @@ -699,7 +650,7 @@ class SwinTransformer(nn.Layer): m = self.layers[i] m.eval() for param in m.parameters(): - param.requires_grad = False + param.stop_gradient = True def _init_weights(self, m): if isinstance(m, nn.Linear): @@ -713,7 +664,7 @@ class SwinTransformer(nn.Layer): def forward(self, x): """Forward function.""" x = self.patch_embed(x['image']) - _, _, Wh, Ww = x.shape + B, _, Wh, Ww = x.shape if self.ape: # interpolate the position embedding to the corresponding size absolute_pos_embed = F.interpolate( diff --git a/ppdet/modeling/backbones/transformer_utils.py b/ppdet/modeling/backbones/transformer_utils.py new file mode 100644 index 0000000000000000000000000000000000000000..bc10652d57447c339d923b84ff3e4c39b3c80824 --- /dev/null +++ b/ppdet/modeling/backbones/transformer_utils.py @@ -0,0 +1,74 @@ +# Copyright (c) 2022 PaddlePaddle Authors. All Rights Reserved. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +import paddle +import paddle.nn as nn + +from paddle.nn.initializer import TruncatedNormal, Constant, Assign + +# Common initializations +ones_ = Constant(value=1.) +zeros_ = Constant(value=0.) 
+trunc_normal_ = TruncatedNormal(std=.02) + + +# Common Layers +def drop_path(x, drop_prob=0., training=False): + """ + Drop paths (Stochastic Depth) per sample (when applied in main path of residual blocks). + the original name is misleading as 'Drop Connect' is a different form of dropout in a separate paper... + See discussion: https://github.com/tensorflow/tpu/issues/494#issuecomment-532968956 ... + """ + if drop_prob == 0. or not training: + return x + keep_prob = paddle.to_tensor(1 - drop_prob) + shape = (paddle.shape(x)[0], ) + (1, ) * (x.ndim - 1) + random_tensor = keep_prob + paddle.rand(shape, dtype=x.dtype) + random_tensor = paddle.floor(random_tensor) # binarize + output = x.divide(keep_prob) * random_tensor + return output + + +class DropPath(nn.Layer): + def __init__(self, drop_prob=None): + super(DropPath, self).__init__() + self.drop_prob = drop_prob + + def forward(self, x): + return drop_path(x, self.drop_prob, self.training) + + +class Identity(nn.Layer): + def __init__(self): + super(Identity, self).__init__() + + def forward(self, input): + return input + + +# common funcs + + +def to_2tuple(x): + if isinstance(x, (list, tuple)): + return x + return tuple([x] * 2) + + +def add_parameter(layer, datas, name=None): + parameter = layer.create_parameter( + shape=(datas.shape), default_initializer=Assign(datas)) + if name: + layer.add_parameter(name, parameter) + return parameter diff --git a/ppdet/modeling/backbones/vision_transformer.py b/ppdet/modeling/backbones/vision_transformer.py new file mode 100644 index 0000000000000000000000000000000000000000..798ea376878f09ade54e3a5c9bbc6f825769db72 --- /dev/null +++ b/ppdet/modeling/backbones/vision_transformer.py @@ -0,0 +1,633 @@ +# copyright (c) 2022 PaddlePaddle Authors. All Rights Reserve. +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. 
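The new `vision_transformer.py` module that follows reuses the `DropPath` and `Identity` helpers just factored out into `transformer_utils.py`. As a quick orientation, here is a toy sketch of how `DropPath` (stochastic depth) is wired into a residual branch; the layer and shapes are assumptions for illustration, not code from the patch:

```python
import paddle
import paddle.nn as nn
from ppdet.modeling.backbones.transformer_utils import DropPath, Identity

class ResidualMlp(nn.Layer):
    # Toy residual block: during training the branch output is zeroed
    # per sample with probability `drop_path_rate` and the survivors are
    # rescaled by 1/keep_prob, which is what drop_path() above implements.
    def __init__(self, dim, drop_path_rate=0.1):
        super().__init__()
        self.fc = nn.Linear(dim, dim)
        self.drop_path = DropPath(
            drop_path_rate) if drop_path_rate > 0. else Identity()

    def forward(self, x):
        return x + self.drop_path(self.fc(x))

x = paddle.randn([4, 16])
out = ResidualMlp(16)(x)  # same shape as x
```

In eval mode `drop_path` is a no-op, and the `1 / keep_prob` rescaling during training keeps the expected branch output unchanged.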
+ +import math + +import paddle +import paddle.nn as nn +import paddle.nn.functional as F +import numpy as np +from paddle.nn.initializer import Constant + +from ppdet.modeling.shape_spec import ShapeSpec +from ppdet.core.workspace import register, serializable + +from .transformer_utils import zeros_, DropPath, Identity + + +class Mlp(nn.Layer): + def __init__(self, + in_features, + hidden_features=None, + out_features=None, + act_layer=nn.GELU, + drop=0.): + super().__init__() + out_features = out_features or in_features + hidden_features = hidden_features or in_features + self.fc1 = nn.Linear(in_features, hidden_features) + self.act = act_layer() + self.fc2 = nn.Linear(hidden_features, out_features) + self.drop = nn.Dropout(drop) + + def forward(self, x): + x = self.fc1(x) + x = self.act(x) + x = self.drop(x) + x = self.fc2(x) + x = self.drop(x) + return x + + +class Attention(nn.Layer): + def __init__(self, + dim, + num_heads=8, + qkv_bias=False, + qk_scale=None, + attn_drop=0., + proj_drop=0., + window_size=None): + super().__init__() + self.num_heads = num_heads + head_dim = dim // num_heads + self.scale = qk_scale or head_dim**-0.5 + + self.qkv = nn.Linear(dim, dim * 3, bias_attr=False) + + if qkv_bias: + self.q_bias = self.create_parameter( + shape=([dim]), default_initializer=zeros_) + self.v_bias = self.create_parameter( + shape=([dim]), default_initializer=zeros_) + else: + self.q_bias = None + self.v_bias = None + if window_size: + self.window_size = window_size + self.num_relative_distance = (2 * window_size[0] - 1) * ( + 2 * window_size[1] - 1) + 3 + self.relative_position_bias_table = self.create_parameter( + shape=(self.num_relative_distance, num_heads), + default_initializer=zeros_) # 2*Wh-1 * 2*Ww-1, nH + # cls to token & token 2 cls & cls to cls + + # get pair-wise relative position index for each token inside the window + coords_h = paddle.arange(window_size[0]) + coords_w = paddle.arange(window_size[1]) + coords = paddle.stack(paddle.meshgrid( + [coords_h, coords_w])) # 2, Wh, Ww + coords_flatten = paddle.flatten(coords, 1) # 2, Wh*Ww + coords_flatten_1 = paddle.unsqueeze(coords_flatten, 2) + coords_flatten_2 = paddle.unsqueeze(coords_flatten, 1) + relative_coords = coords_flatten_1.clone() - coords_flatten_2.clone( + ) + + #relative_coords = coords_flatten[:, :, None] - coords_flatten[:, None, :] # 2, Wh*Ww, Wh*Wh + relative_coords = relative_coords.transpose( + (1, 2, 0)) #.contiguous() # Wh*Ww, Wh*Ww, 2 + relative_coords[:, :, 0] += window_size[ + 0] - 1 # shift to start from 0 + relative_coords[:, :, 1] += window_size[1] - 1 + relative_coords[:, :, 0] *= 2 * window_size[1] - 1 + relative_position_index = \ + paddle.zeros(shape=(window_size[0] * window_size[1] + 1, ) * 2, dtype=relative_coords.dtype) + relative_position_index[1:, 1:] = relative_coords.sum( + -1) # Wh*Ww, Wh*Ww + relative_position_index[0, 0:] = self.num_relative_distance - 3 + relative_position_index[0:, 0] = self.num_relative_distance - 2 + relative_position_index[0, 0] = self.num_relative_distance - 1 + + self.register_buffer("relative_position_index", + relative_position_index) + # trunc_normal_(self.relative_position_bias_table, std=.0) + else: + self.window_size = None + self.relative_position_bias_table = None + self.relative_position_index = None + + self.attn_drop = nn.Dropout(attn_drop) + self.proj = nn.Linear(dim, dim) + self.proj_drop = nn.Dropout(proj_drop) + + def forward(self, x, rel_pos_bias=None): + x_shape = paddle.shape(x) + N, C = x_shape[1], x_shape[2] + + qkv_bias = None + if 
self.q_bias is not None: + qkv_bias = paddle.concat( + (self.q_bias, paddle.zeros_like(self.v_bias), self.v_bias)) + qkv = F.linear(x, weight=self.qkv.weight, bias=qkv_bias) + + qkv = qkv.reshape((-1, N, 3, self.num_heads, + C // self.num_heads)).transpose((2, 0, 3, 1, 4)) + q, k, v = qkv[0], qkv[1], qkv[2] + attn = (q.matmul(k.transpose((0, 1, 3, 2)))) * self.scale + + if self.relative_position_bias_table is not None: + relative_position_bias = self.relative_position_bias_table[ + self.relative_position_index.reshape([-1])].reshape([ + self.window_size[0] * self.window_size[1] + 1, + self.window_size[0] * self.window_size[1] + 1, -1 + ]) # Wh*Ww,Wh*Ww,nH + relative_position_bias = relative_position_bias.transpose( + (2, 0, 1)) #.contiguous() # nH, Wh*Ww, Wh*Ww + attn = attn + relative_position_bias.unsqueeze(0) + if rel_pos_bias is not None: + attn = attn + rel_pos_bias + + attn = nn.functional.softmax(attn, axis=-1) + attn = self.attn_drop(attn) + + x = (attn.matmul(v)).transpose((0, 2, 1, 3)).reshape((-1, N, C)) + x = self.proj(x) + x = self.proj_drop(x) + return x + + +class Block(nn.Layer): + def __init__(self, + dim, + num_heads, + mlp_ratio=4., + qkv_bias=False, + qk_scale=None, + drop=0., + attn_drop=0., + drop_path=0., + window_size=None, + init_values=None, + act_layer=nn.GELU, + norm_layer='nn.LayerNorm', + epsilon=1e-5): + super().__init__() + self.norm1 = nn.LayerNorm(dim, epsilon=1e-6) + self.attn = Attention( + dim, + num_heads=num_heads, + qkv_bias=qkv_bias, + qk_scale=qk_scale, + attn_drop=attn_drop, + proj_drop=drop, + window_size=window_size) + # NOTE: drop path for stochastic depth, we shall see if this is better than dropout here + self.drop_path = DropPath(drop_path) if drop_path > 0. else Identity() + self.norm2 = eval(norm_layer)(dim, epsilon=epsilon) + mlp_hidden_dim = int(dim * mlp_ratio) + self.mlp = Mlp(in_features=dim, + hidden_features=mlp_hidden_dim, + act_layer=act_layer, + drop=drop) + if init_values is not None: + self.gamma_1 = self.create_parameter( + shape=([dim]), default_initializer=Constant(value=init_values)) + self.gamma_2 = self.create_parameter( + shape=([dim]), default_initializer=Constant(value=init_values)) + else: + self.gamma_1, self.gamma_2 = None, None + + def forward(self, x, rel_pos_bias=None): + + if self.gamma_1 is None: + x = x + self.drop_path( + self.attn( + self.norm1(x), rel_pos_bias=rel_pos_bias)) + x = x + self.drop_path(self.mlp(self.norm2(x))) + else: + x = x + self.drop_path(self.gamma_1 * self.attn( + self.norm1(x), rel_pos_bias=rel_pos_bias)) + x = x + self.drop_path(self.gamma_2 * self.mlp(self.norm2(x))) + return x + + +class PatchEmbed(nn.Layer): + """ Image to Patch Embedding + """ + + def __init__(self, + img_size=[224, 224], + patch_size=16, + in_chans=3, + embed_dim=768): + super().__init__() + self.num_patches_w = img_size[0] // patch_size + self.num_patches_h = img_size[1] // patch_size + + num_patches = self.num_patches_w * self.num_patches_h + self.patch_shape = (img_size[0] // patch_size, + img_size[1] // patch_size) + self.img_size = img_size + self.patch_size = patch_size + self.num_patches = num_patches + + self.proj = nn.Conv2D( + in_chans, embed_dim, kernel_size=patch_size, stride=patch_size) + + @property + def num_patches_in_h(self): + return self.img_size[1] // self.patch_size + + @property + def num_patches_in_w(self): + return self.img_size[0] // self.patch_size + + def forward(self, x, mask=None): + B, C, H, W = x.shape + return self.proj(x) + + +class RelativePositionBias(nn.Layer): + def 
__init__(self, window_size, num_heads):
+        super().__init__()
+        self.window_size = window_size
+        self.num_relative_distance = (2 * window_size[0] - 1) * (
+            2 * window_size[1] - 1) + 3
+        self.relative_position_bias_table = self.create_parameter(
+            shape=(self.num_relative_distance, num_heads),
+            default_initializer=zeros_)
+        # cls to token & token to cls & cls to cls
+
+        # get pair-wise relative position index for each token inside the window
+        coords_h = paddle.arange(window_size[0])
+        coords_w = paddle.arange(window_size[1])
+        coords = paddle.stack(paddle.meshgrid(
+            [coords_h, coords_w]))  # 2, Wh, Ww
+        coords_flatten = coords.flatten(1)  # 2, Wh*Ww
+
+        relative_coords = coords_flatten[:, :,
+                                         None] - coords_flatten[:,
+                                                                None, :]  # 2, Wh*Ww, Wh*Ww
+        relative_coords = relative_coords.transpose(
+            (1, 2, 0))  # Wh*Ww, Wh*Ww, 2
+        relative_coords[:, :, 0] += window_size[0] - 1  # shift to start from 0
+        relative_coords[:, :, 1] += window_size[1] - 1
+        relative_coords[:, :, 0] *= 2 * window_size[1] - 1
+        relative_position_index = \
+            paddle.zeros(shape=(window_size[0] * window_size[1] + 1,) * 2, dtype=relative_coords.dtype)
+        relative_position_index[1:, 1:] = relative_coords.sum(
+            -1)  # Wh*Ww, Wh*Ww
+        relative_position_index[0, 0:] = self.num_relative_distance - 3
+        relative_position_index[0:, 0] = self.num_relative_distance - 2
+        relative_position_index[0, 0] = self.num_relative_distance - 1
+        self.register_buffer("relative_position_index", relative_position_index)
+
+    def forward(self):
+        relative_position_bias = \
+            self.relative_position_bias_table[self.relative_position_index.reshape([-1])].reshape([
+                self.window_size[0] * self.window_size[1] + 1,
+                self.window_size[0] * self.window_size[1] + 1, -1])  # Wh*Ww,Wh*Ww,nH
+        return relative_position_bias.transpose((2, 0, 1))  # nH, Wh*Ww, Wh*Ww
+
+
+def get_sinusoid_encoding_table(n_position, d_hid, token=False):
+    ''' Sinusoid position encoding table '''
+
+    def get_position_angle_vec(position):
+        return [
+            position / np.power(10000, 2 * (hid_j // 2) / d_hid)
+            for hid_j in range(d_hid)
+        ]
+
+    sinusoid_table = np.array(
+        [get_position_angle_vec(pos_i) for pos_i in range(n_position)])
+    sinusoid_table[:, 0::2] = np.sin(sinusoid_table[:, 0::2])  # dim 2i
+    sinusoid_table[:, 1::2] = np.cos(sinusoid_table[:, 1::2])  # dim 2i+1
+    if token:
+        sinusoid_table = np.concatenate(
+            [sinusoid_table, np.zeros([1, d_hid])], axis=0)
+
+    return paddle.to_tensor(sinusoid_table, dtype=paddle.float32).unsqueeze(0)
+
+
+@register
+@serializable
+class VisionTransformer(nn.Layer):
+    """ Vision Transformer with support for patch input
+    """
+
+    def __init__(self,
+                 img_size=[672, 1092],
+                 patch_size=16,
+                 in_chans=3,
+                 embed_dim=768,
+                 depth=12,
+                 num_heads=12,
+                 mlp_ratio=4,
+                 qkv_bias=False,
+                 qk_scale=None,
+                 drop_rate=0.,
+                 attn_drop_rate=0.,
+                 drop_path_rate=0.,
+                 norm_layer='nn.LayerNorm',
+                 init_values=None,
+                 use_rel_pos_bias=False,
+                 use_shared_rel_pos_bias=False,
+                 epsilon=1e-5,
+                 final_norm=False,
+                 pretrained=None,
+                 out_indices=[3, 5, 7, 11],
+                 use_abs_pos_emb=False,
+                 use_sincos_pos_emb=True,
+                 with_fpn=True,
+                 use_checkpoint=False,
+                 **args):
+        super().__init__()
+        self.img_size = img_size
+        self.embed_dim = embed_dim
+        self.with_fpn = with_fpn
+        self.use_checkpoint = use_checkpoint
+        self.use_sincos_pos_emb = use_sincos_pos_emb
+        self.use_rel_pos_bias = use_rel_pos_bias
+        self.final_norm = final_norm
+
+        if use_checkpoint:
+            print('please set: FLAGS_allocator_strategy=naive_best_fit')
+        self.patch_embed = PatchEmbed(
+            img_size=img_size,
+            patch_size=patch_size,
+            in_chans=in_chans,
+            embed_dim=embed_dim)
+
+        self.pos_w = self.patch_embed.num_patches_in_w
+        self.pos_h = self.patch_embed.num_patches_in_h
+
+        self.cls_token = self.create_parameter(
+            shape=(1, 1, embed_dim),
+            default_initializer=paddle.nn.initializer.Constant(value=0.))
+
+        if use_abs_pos_emb:
+            self.pos_embed = self.create_parameter(
+                shape=(1, self.pos_w * self.pos_h + 1, embed_dim),
+                default_initializer=paddle.nn.initializer.TruncatedNormal(
+                    std=.02))
+        elif use_sincos_pos_emb:
+            pos_embed = self.build_2d_sincos_position_embedding(embed_dim)
+
+            self.pos_embed = pos_embed
+            self.pos_embed = self.create_parameter(shape=pos_embed.shape)
+            self.pos_embed.set_value(pos_embed.numpy())
+            self.pos_embed.stop_gradient = True
+
+        else:
+            self.pos_embed = None
+
+        self.pos_drop = nn.Dropout(p=drop_rate)
+
+        if use_shared_rel_pos_bias:
+            self.rel_pos_bias = RelativePositionBias(
+                window_size=self.patch_embed.patch_shape, num_heads=num_heads)
+        else:
+            self.rel_pos_bias = None
+
+        dpr = np.linspace(0, drop_path_rate, depth)
+
+        self.blocks = nn.LayerList([
+            Block(
+                dim=embed_dim,
+                num_heads=num_heads,
+                mlp_ratio=mlp_ratio,
+                qkv_bias=qkv_bias,
+                qk_scale=qk_scale,
+                drop=drop_rate,
+                attn_drop=attn_drop_rate,
+                drop_path=dpr[i],
+                norm_layer=norm_layer,
+                init_values=init_values,
+                window_size=self.patch_embed.patch_shape
+                if use_rel_pos_bias else None,
+                epsilon=epsilon) for i in range(depth)
+        ])
+
+        self.pretrained = pretrained
+        self.init_weight()
+
+        assert len(out_indices) <= 4, 'out_indices can contain at most 4 stage indices'
+        self.out_indices = out_indices
+        self.out_channels = [embed_dim for _ in range(len(out_indices))]
+        self.out_strides = [4, 8, 16, 32][-len(out_indices):] if with_fpn else [
+            8 for _ in range(len(out_indices))
+        ]
+
+        self.norm = Identity()
+
+        if self.with_fpn:
+            self.init_fpn(
+                embed_dim=embed_dim,
+                patch_size=patch_size, )
+
+    def init_weight(self):
+        pretrained = self.pretrained
+
+        if pretrained:
+            if 'http' in pretrained:  # URL
+                path = paddle.utils.download.get_weights_path_from_url(
+                    pretrained)
+            else:  # model in local path
+                path = pretrained
+
+            load_state_dict = paddle.load(path)
+            model_state_dict = self.state_dict()
+            pos_embed_name = "pos_embed"
+
+            if pos_embed_name in load_state_dict.keys():
+                load_pos_embed = paddle.to_tensor(
+                    load_state_dict[pos_embed_name], dtype="float32")
+                if self.pos_embed.shape != load_pos_embed.shape:
+                    pos_size = int(math.sqrt(load_pos_embed.shape[1] - 1))
+                    model_state_dict[pos_embed_name] = self.resize_pos_embed(
+                        load_pos_embed, (pos_size, pos_size),
+                        (self.pos_h, self.pos_w))
+
+                    # self.set_state_dict(model_state_dict)
+                    load_state_dict[pos_embed_name] = model_state_dict[
+                        pos_embed_name]
+
+                    print("Load pos_embed and resize it from {} to {}.".format(
+                        load_pos_embed.shape, self.pos_embed.shape))
+
+            self.set_state_dict(load_state_dict)
+            print("Loaded pretrained weights from {}.".format(path))
+
+    def init_fpn(self, embed_dim=768, patch_size=16, out_with_norm=False):
+        if patch_size == 16:
+            self.fpn1 = nn.Sequential(
+                nn.Conv2DTranspose(
+                    embed_dim, embed_dim, kernel_size=2, stride=2),
+                nn.BatchNorm2D(embed_dim),
+                nn.GELU(),
+                nn.Conv2DTranspose(
+                    embed_dim, embed_dim, kernel_size=2, stride=2), )
+
+            self.fpn2 = nn.Sequential(
+                nn.Conv2DTranspose(
+                    embed_dim, embed_dim, kernel_size=2, stride=2), )
+
+            self.fpn3 = Identity()
+
+            self.fpn4 = nn.MaxPool2D(kernel_size=2, stride=2)
+        elif patch_size == 8:
+            self.fpn1 = nn.Sequential(
+                nn.Conv2DTranspose(
+                    embed_dim, embed_dim, kernel_size=2, stride=2), )
+
+            self.fpn2 = Identity()
+
+            self.fpn3 = 
nn.Sequential(nn.MaxPool2D(kernel_size=2, stride=2), ) + + self.fpn4 = nn.Sequential(nn.MaxPool2D(kernel_size=4, stride=4), ) + + if not out_with_norm: + self.norm = Identity() + else: + self.norm = nn.LayerNorm(embed_dim, epsilon=1e-6) + + def interpolate_pos_encoding(self, x, w, h): + npatch = x.shape[1] - 1 + N = self.pos_embed.shape[1] - 1 + w0 = w // self.patch_embed.patch_size + h0 = h // self.patch_embed.patch_size + if npatch == N and w0 == self.patch_embed.num_patches_w and h0 == self.patch_embed.num_patches_h: + return self.pos_embed + class_pos_embed = self.pos_embed[:, 0] + patch_pos_embed = self.pos_embed[:, 1:] + dim = x.shape[-1] + # we add a small number to avoid floating point error in the interpolation + # see discussion at https://github.com/facebookresearch/dino/issues/8 + w0, h0 = w0 + 0.1, h0 + 0.1 + + patch_pos_embed = nn.functional.interpolate( + patch_pos_embed.reshape([ + 1, self.patch_embed.num_patches_w, + self.patch_embed.num_patches_h, dim + ]).transpose((0, 3, 1, 2)), + scale_factor=(w0 / self.patch_embed.num_patches_w, + h0 / self.patch_embed.num_patches_h), + mode='bicubic', ) + assert int(w0) == patch_pos_embed.shape[-2] and int( + h0) == patch_pos_embed.shape[-1] + patch_pos_embed = patch_pos_embed.transpose( + (0, 2, 3, 1)).reshape([1, -1, dim]) + return paddle.concat( + (class_pos_embed.unsqueeze(0), patch_pos_embed), axis=1) + + def resize_pos_embed(self, pos_embed, old_hw, new_hw): + """ + Resize pos_embed weight. + Args: + pos_embed (Tensor): the pos_embed weight + old_hw (list[int]): the height and width of old pos_embed + new_hw (list[int]): the height and width of new pos_embed + Returns: + Tensor: the resized pos_embed weight + """ + cls_pos_embed = pos_embed[:, :1, :] + pos_embed = pos_embed[:, 1:, :] + + pos_embed = pos_embed.transpose([0, 2, 1]) + pos_embed = pos_embed.reshape([1, -1, old_hw[0], old_hw[1]]) + pos_embed = F.interpolate( + pos_embed, new_hw, mode='bicubic', align_corners=False) + pos_embed = pos_embed.flatten(2).transpose([0, 2, 1]) + pos_embed = paddle.concat([cls_pos_embed, pos_embed], axis=1) + + return pos_embed + + def build_2d_sincos_position_embedding( + self, + embed_dim=768, + temperature=10000., ): + h, w = self.patch_embed.patch_shape + grid_w = paddle.arange(w, dtype=paddle.float32) + grid_h = paddle.arange(h, dtype=paddle.float32) + grid_w, grid_h = paddle.meshgrid(grid_w, grid_h) + assert embed_dim % 4 == 0, 'Embed dimension must be divisible by 4 for 2D sin-cos position embedding' + pos_dim = embed_dim // 4 + omega = paddle.arange(pos_dim, dtype=paddle.float32) / pos_dim + omega = 1. 
/ (temperature**omega) + + out_w = grid_w.flatten()[..., None] @omega[None] + out_h = grid_h.flatten()[..., None] @omega[None] + + pos_emb = paddle.concat( + [ + paddle.sin(out_w), paddle.cos(out_w), paddle.sin(out_h), + paddle.cos(out_h) + ], + axis=1)[None, :, :] + + pe_token = paddle.zeros([1, 1, embed_dim], dtype=paddle.float32) + pos_embed = paddle.concat([pe_token, pos_emb], axis=1) + # pos_embed.stop_gradient = True + + return pos_embed + + def forward(self, x): + x = x['image'] if isinstance(x, dict) else x + _, _, h, w = x.shape + + x = self.patch_embed(x) + + B, D, Hp, Wp = x.shape # b * c * h * w + + cls_tokens = self.cls_token.expand( + (B, self.cls_token.shape[-2], self.cls_token.shape[-1])) + x = x.flatten(2).transpose([0, 2, 1]) # b * hw * c + x = paddle.concat([cls_tokens, x], axis=1) + + if self.pos_embed is not None: + # x = x + self.interpolate_pos_encoding(x, w, h) + x = x + self.interpolate_pos_encoding(x, h, w) + + x = self.pos_drop(x) + + rel_pos_bias = self.rel_pos_bias( + ) if self.rel_pos_bias is not None else None + + feats = [] + for idx, blk in enumerate(self.blocks): + if self.use_checkpoint: + x = paddle.distributed.fleet.utils.recompute( + blk, x, rel_pos_bias, **{"preserve_rng_state": True}) + else: + x = blk(x, rel_pos_bias) + + if idx in self.out_indices: + xp = paddle.reshape( + paddle.transpose( + self.norm(x[:, 1:, :]), perm=[0, 2, 1]), + shape=[B, D, Hp, Wp]) + feats.append(xp) + + if self.with_fpn: + fpns = [self.fpn1, self.fpn2, self.fpn3, self.fpn4] + for i in range(len(feats)): + feats[i] = fpns[i](feats[i]) + + return feats + + @property + def num_layers(self): + return len(self.blocks) + + @property + def no_weight_decay(self): + return {'pos_embed', 'cls_token'} + + @property + def out_shape(self): + return [ + ShapeSpec( + channels=c, stride=s) + for c, s in zip(self.out_channels, self.out_strides) + ] diff --git a/ppdet/modeling/bbox_utils.py b/ppdet/modeling/bbox_utils.py index 11f504f804c98b3f7a0a44d8b6f4481577d5aa6d..f895340c7e8da8606bfd0f55b1e9b84d36bfd549 100644 --- a/ppdet/modeling/bbox_utils.py +++ b/ppdet/modeling/bbox_utils.py @@ -278,8 +278,8 @@ def decode_yolo(box, anchor, downsample_ratio): return [x1, y1, w1, h1] -def iou_similarity(box1, box2, eps=1e-9): - """Calculate iou of box1 and box2 +def batch_iou_similarity(box1, box2, eps=1e-9): + """Calculate iou of box1 and box2 in batch Args: box1 (Tensor): box with the shape [N, M1, 4] @@ -866,3 +866,26 @@ def bbox2delta_v2(src_boxes, stds = paddle.to_tensor(stds, place=src_boxes.place) deltas = (deltas - means) / stds return deltas + + +def iou_similarity(box1, box2, eps=1e-10): + """Calculate iou of box1 and box2 + + Args: + box1 (Tensor): box with the shape [M1, 4] + box2 (Tensor): box with the shape [M2, 4] + + Return: + iou (Tensor): iou between box1 and box2 with the shape [M1, M2] + """ + box1 = box1.unsqueeze(1) # [M1, 4] -> [M1, 1, 4] + box2 = box2.unsqueeze(0) # [M2, 4] -> [1, M2, 4] + px1y1, px2y2 = box1[:, :, 0:2], box1[:, :, 2:4] + gx1y1, gx2y2 = box2[:, :, 0:2], box2[:, :, 2:4] + x1y1 = paddle.maximum(px1y1, gx1y1) + x2y2 = paddle.minimum(px2y2, gx2y2) + overlap = (x2y2 - x1y1).clip(0).prod(-1) + area1 = (px2y2 - px1y1).clip(0).prod(-1) + area2 = (gx2y2 - gx1y1).clip(0).prod(-1) + union = area1 + area2 - overlap + eps + return overlap / union diff --git a/static/ppdet/modeling/tests/decorator_helper.py b/ppdet/modeling/cls_utils.py similarity index 41% rename from static/ppdet/modeling/tests/decorator_helper.py rename to ppdet/modeling/cls_utils.py index 
894833ce15eab82ea06c2e66a8e53cb2e7e057b5..3ae8d116959a96bb2bf337dee7330c5909bc61ac 100644 --- a/static/ppdet/modeling/tests/decorator_helper.py +++ b/ppdet/modeling/cls_utils.py @@ -1,4 +1,4 @@ -# Copyright (c) 2019 PaddlePaddle Authors. All Rights Reserved. +# Copyright (c) 2022 PaddlePaddle Authors. All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. @@ -12,22 +12,29 @@ # See the License for the specific language governing permissions and # limitations under the License. -import paddle.fluid as fluid -__all__ = ['prog_scope'] +def _get_class_default_kwargs(cls, *args, **kwargs): + """ + Get default arguments of a class in dict format, if args and + kwargs is specified, it will replace default arguments + """ + varnames = cls.__init__.__code__.co_varnames + argcount = cls.__init__.__code__.co_argcount + keys = varnames[:argcount] + assert keys[0] == 'self' + keys = keys[1:] + values = list(cls.__init__.__defaults__) + assert len(values) == len(keys) -def prog_scope(): - def __impl__(fn): - def __fn__(*args, **kwargs): - prog = fluid.Program() - startup_prog = fluid.Program() - scope = fluid.core.Scope() - with fluid.scope_guard(scope): - with fluid.program_guard(prog, startup_prog): - with fluid.unique_name.guard(): - fn(*args, **kwargs) + if len(args) > 0: + for i, arg in enumerate(args): + values[i] = arg - return __fn__ + default_kwargs = dict(zip(keys, values)) - return __impl__ + if len(kwargs) > 0: + for k, v in kwargs.items(): + default_kwargs[k] = v + + return default_kwargs diff --git a/ppdet/modeling/coders/__init__.py b/ppdet/modeling/coders/__init__.py deleted file mode 100644 index 7726bb36cb06430b7bccd64ab89c8ef626e47790..0000000000000000000000000000000000000000 --- a/ppdet/modeling/coders/__init__.py +++ /dev/null @@ -1 +0,0 @@ -from .delta_bbox_coder import DeltaBBoxCoder diff --git a/ppdet/modeling/coders/delta_bbox_coder.py b/ppdet/modeling/coders/delta_bbox_coder.py deleted file mode 100644 index 0c53ea349eed4799cba164c4544051cb45d60385..0000000000000000000000000000000000000000 --- a/ppdet/modeling/coders/delta_bbox_coder.py +++ /dev/null @@ -1,40 +0,0 @@ -import paddle -import numpy as np -from ppdet.core.workspace import register -from ppdet.modeling.bbox_utils import delta2bbox_v2, bbox2delta_v2 - -__all__ = ['DeltaBBoxCoder'] - - -@register -class DeltaBBoxCoder: - """Encode bboxes in terms of delta/offset of a reference bbox. 
- Args: - norm_mean (list[float]): the mean to normalize delta - norm_std (list[float]): the std to normalize delta - wh_ratio_clip (float): to clip delta wh of decoded bboxes - ctr_clip (float or None): whether to clip delta xy of decoded bboxes - """ - def __init__(self, - norm_mean=[0.0, 0.0, 0.0, 0.0], - norm_std=[1., 1., 1., 1.], - wh_ratio_clip=16/1000.0, - ctr_clip=None): - self.norm_mean = norm_mean - self.norm_std = norm_std - self.wh_ratio_clip = wh_ratio_clip - self.ctr_clip = ctr_clip - - def encode(self, bboxes, tar_bboxes): - return bbox2delta_v2( - bboxes, tar_bboxes, means=self.norm_mean, stds=self.norm_std) - - def decode(self, bboxes, deltas, max_shape=None): - return delta2bbox_v2( - bboxes, - deltas, - max_shape=max_shape, - wh_ratio_clip=self.wh_ratio_clip, - ctr_clip=self.ctr_clip, - means=self.norm_mean, - stds=self.norm_std) diff --git a/ppdet/modeling/heads/bbox_head.py b/ppdet/modeling/heads/bbox_head.py index e4d7d68785d7632b0053f70faf94a5cccdb27713..debd3074c2ad0ae05a26c9ef240d9b4a573846e6 100644 --- a/ppdet/modeling/heads/bbox_head.py +++ b/ppdet/modeling/heads/bbox_head.py @@ -24,6 +24,7 @@ from ppdet.core.workspace import register, create from .roi_extractor import RoIAlign from ..shape_spec import ShapeSpec from ..bbox_utils import bbox2delta +from ..cls_utils import _get_class_default_kwargs from ppdet.modeling.layers import ConvNormLayer __all__ = ['TwoFCHead', 'XConvNormHead', 'BBoxHead'] @@ -178,7 +179,7 @@ class BBoxHead(nn.Layer): def __init__(self, head, in_channel, - roi_extractor=RoIAlign().__dict__, + roi_extractor=_get_class_default_kwargs(RoIAlign), bbox_assigner='BboxAssigner', with_pool=False, num_classes=80, @@ -256,7 +257,13 @@ class BBoxHead(nn.Layer): pred = self.get_prediction(scores, deltas) return pred, self.head - def get_loss(self, scores, deltas, targets, rois, bbox_weight): + def get_loss(self, + scores, + deltas, + targets, + rois, + bbox_weight, + loss_normalize_pos=False): """ scores (Tensor): scores from bbox head outputs deltas (Tensor): deltas from bbox head outputs @@ -279,8 +286,15 @@ class BBoxHead(nn.Layer): else: tgt_labels = tgt_labels.cast('int64') tgt_labels.stop_gradient = True - loss_bbox_cls = F.cross_entropy( - input=scores, label=tgt_labels, reduction='mean') + + if not loss_normalize_pos: + loss_bbox_cls = F.cross_entropy( + input=scores, label=tgt_labels, reduction='mean') + else: + loss_bbox_cls = F.cross_entropy( + input=scores, label=tgt_labels, + reduction='none').sum() / (tgt_labels.shape[0] + 1e-7) + loss_bbox[cls_name] = loss_bbox_cls # bbox reg @@ -321,9 +335,16 @@ class BBoxHead(nn.Layer): if self.bbox_loss is not None: reg_delta = self.bbox_transform(reg_delta) reg_target = self.bbox_transform(reg_target) - loss_bbox_reg = self.bbox_loss( - reg_delta, reg_target).sum() / tgt_labels.shape[0] - loss_bbox_reg *= self.num_classes + + if not loss_normalize_pos: + loss_bbox_reg = self.bbox_loss( + reg_delta, reg_target).sum() / tgt_labels.shape[0] + loss_bbox_reg *= self.num_classes + + else: + loss_bbox_reg = self.bbox_loss( + reg_delta, reg_target).sum() / (tgt_labels.shape[0] + 1e-7) + else: loss_bbox_reg = paddle.abs(reg_delta - reg_target).sum( ) / tgt_labels.shape[0] diff --git a/ppdet/modeling/heads/cascade_head.py b/ppdet/modeling/heads/cascade_head.py index 935642bd6d85c402afa84900c72627e97db0f9d6..0498a35da5ce4952739245ba0426a1ac306bf2e3 100644 --- a/ppdet/modeling/heads/cascade_head.py +++ b/ppdet/modeling/heads/cascade_head.py @@ -22,6 +22,7 @@ from .bbox_head import BBoxHead, TwoFCHead, 
XConvNormHead
 from .roi_extractor import RoIAlign
 from ..shape_spec import ShapeSpec
 from ..bbox_utils import delta2bbox, clip_bbox, nonempty_bbox
+from ..cls_utils import _get_class_default_kwargs

 __all__ = ['CascadeTwoFCHead', 'CascadeXConvNormHead', 'CascadeHead']

@@ -153,13 +154,17 @@ class CascadeHead(BBoxHead):
     def __init__(self,
                  head,
                  in_channel,
-                 roi_extractor=RoIAlign().__dict__,
+                 roi_extractor=_get_class_default_kwargs(RoIAlign),
                  bbox_assigner='BboxAssigner',
                  num_classes=80,
                  bbox_weight=[[10., 10., 5., 5.], [20.0, 20.0, 10.0, 10.0],
                               [30.0, 30.0, 15.0, 15.0]],
                  num_cascade_stages=3,
-                 bbox_loss=None):
+                 bbox_loss=None,
+                 reg_class_agnostic=True,
+                 stage_loss_weights=None,
+                 loss_normalize_pos=False):
+
         nn.Layer.__init__(self, )
         self.head = head
         self.roi_extractor = roi_extractor
@@ -171,6 +176,16 @@ class CascadeHead(BBoxHead):
         self.bbox_weight = bbox_weight
         self.num_cascade_stages = num_cascade_stages
         self.bbox_loss = bbox_loss
+        self.stage_loss_weights = [
+            1. / num_cascade_stages for _ in range(num_cascade_stages)
+        ] if stage_loss_weights is None else stage_loss_weights
+        assert len(
+            self.stage_loss_weights
+        ) == num_cascade_stages, f'len(stage_loss_weights) ({len(self.stage_loss_weights)}) must equal num_cascade_stages ({num_cascade_stages})'
+
+        self.reg_class_agnostic = reg_class_agnostic
+        num_bbox_delta = 4 if reg_class_agnostic else 4 * num_classes
+        self.loss_normalize_pos = loss_normalize_pos

         self.bbox_score_list = []
         self.bbox_delta_list = []
@@ -189,7 +204,7 @@ class CascadeHead(BBoxHead):
                 delta_name,
                 nn.Linear(
                     in_channel,
-                    4,
+                    num_bbox_delta,
                     weight_attr=paddle.ParamAttr(initializer=Normal(
                         mean=0.0, std=0.001))))
             self.bbox_score_list.append(bbox_score)
@@ -226,6 +241,20 @@ class CascadeHead(BBoxHead):
             bbox_feat = self.head(rois_feat, i)
             scores = self.bbox_score_list[i](bbox_feat)
             deltas = self.bbox_delta_list[i](bbox_feat)
+
+            # TODO (lyuwenyu) Is it correct for only one class?
+            if not self.reg_class_agnostic and i < self.num_cascade_stages - 1:
+                deltas = deltas.reshape([deltas.shape[0], self.num_classes, 4])
+                labels = scores[:, :-1].argmax(axis=-1)
+
+                if self.training:
+                    deltas = deltas[paddle.arange(deltas.shape[0]), labels]
+                else:
+                    deltas = deltas[(deltas * F.one_hot(
+                        labels, num_classes=self.num_classes).unsqueeze(-1) != 0
+                                     ).nonzero(as_tuple=True)].reshape(
+                                         [deltas.shape[0], 4])
+
             head_out_list.append([scores, deltas, rois])
             pred_bbox = self._get_pred_bbox(deltas, rois, self.bbox_weight[i])
@@ -233,11 +262,16 @@ class CascadeHead(BBoxHead):
             loss = {}
             for stage, value in enumerate(zip(head_out_list, targets_list)):
                 (scores, deltas, rois), targets = value
-                loss_stage = self.get_loss(scores, deltas, targets, rois,
-                                           self.bbox_weight[stage])
+                loss_stage = self.get_loss(
+                    scores,
+                    deltas,
+                    targets,
+                    rois,
+                    self.bbox_weight[stage],
+                    loss_normalize_pos=self.loss_normalize_pos)
                 for k, v in loss_stage.items():
                     loss[k + "_stage{}".format(
-                        stage)] = v / self.num_cascade_stages
+                        stage)] = v * self.stage_loss_weights[stage]

             return loss, bbox_feat
         else:
@@ -266,6 +300,12 @@ class CascadeHead(BBoxHead):
         num_prop = []
         for p in proposals:
             num_prop.append(p.shape[0])
+
+        # NOTE(dev): num_prop will be tagged as LoDTensorArray because it
+        # depends on batch_size under @to_static. However the argument
+        # num_or_sections in paddle.split does not support LoDTensorArray,
+        # so we use [-1] to replace it without losing correctness.
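+        # For example (illustrative): with a single image and 100 proposals,
+        #     pred_bbox.split([-1]) -> [Tensor of shape [100, 4]]
+        # which is equivalent to pred_bbox.split([100]) but does not require
+        # the dynamic proposal count as a concrete value under @to_static.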
+ num_prop = [-1] if len(num_prop) == 1 else num_prop return pred_bbox.split(num_prop) def get_prediction(self, head_out_list): diff --git a/ppdet/modeling/heads/face_head.py b/ppdet/modeling/heads/face_head.py index bb51f2eb96fbed3e9696852d011a55c1e2115937..360f909a67fd272acc15cdbcd79c1172e9b1088a 100644 --- a/ppdet/modeling/heads/face_head.py +++ b/ppdet/modeling/heads/face_head.py @@ -17,6 +17,7 @@ import paddle.nn as nn from ppdet.core.workspace import register from ..layers import AnchorGeneratorSSD +from ..cls_utils import _get_class_default_kwargs @register @@ -39,7 +40,7 @@ class FaceHead(nn.Layer): def __init__(self, num_classes=80, in_channels=[96, 96], - anchor_generator=AnchorGeneratorSSD().__dict__, + anchor_generator=_get_class_default_kwargs(AnchorGeneratorSSD), kernel_size=3, padding=1, conv_decay=0., diff --git a/ppdet/modeling/heads/gfl_head.py b/ppdet/modeling/heads/gfl_head.py index 779d739b835b3091ddabf3ab0375973f6bc3b8ab..9c87eecd81ef8bd0be8bb61db385ef844fcff2ff 100644 --- a/ppdet/modeling/heads/gfl_head.py +++ b/ppdet/modeling/heads/gfl_head.py @@ -79,7 +79,9 @@ class Integral(nn.Layer): offsets from the box center in four directions, shape (N, 4). """ x = F.softmax(x.reshape([-1, self.reg_max + 1]), axis=1) - x = F.linear(x, self.project).reshape([-1, 4]) + x = F.linear(x, self.project) + if self.training: + x = x.reshape([-1, 4]) return x @@ -386,7 +388,12 @@ class GFLHead(nn.Layer): avg_factor = sum(avg_factor) try: - avg_factor = paddle.distributed.all_reduce(avg_factor.clone()) + avg_factor_clone = avg_factor.clone() + tmp_avg_factor = paddle.distributed.all_reduce(avg_factor_clone) + if tmp_avg_factor is not None: + avg_factor = tmp_avg_factor + else: + avg_factor = avg_factor_clone avg_factor = paddle.clip( avg_factor / paddle.distributed.get_world_size(), min=1) except: diff --git a/ppdet/modeling/heads/mask_head.py b/ppdet/modeling/heads/mask_head.py index 604847a2d07224314b2eba700eefa00729b4f95f..939debbaae129293551394b5571f7da158a0cccb 100644 --- a/ppdet/modeling/heads/mask_head.py +++ b/ppdet/modeling/heads/mask_head.py @@ -20,6 +20,7 @@ from paddle.nn.initializer import KaimingNormal from ppdet.core.workspace import register, create from ppdet.modeling.layers import ConvNormLayer from .roi_extractor import RoIAlign +from ..cls_utils import _get_class_default_kwargs @register @@ -120,7 +121,7 @@ class MaskHead(nn.Layer): def __init__(self, head, - roi_extractor=RoIAlign().__dict__, + roi_extractor=_get_class_default_kwargs(RoIAlign), mask_assigner='MaskAssigner', num_classes=80, share_bbox_feat=False, @@ -221,7 +222,7 @@ class MaskHead(nn.Layer): mask_feat = self.head(rois_feat) mask_logit = self.mask_fcn_logits(mask_feat) if self.num_classes == 1: - mask_out = F.sigmoid(mask_logit) + mask_out = F.sigmoid(mask_logit)[:, 0, :, :] else: num_masks = paddle.shape(mask_logit)[0] index = paddle.arange(num_masks).cast('int32') diff --git a/ppdet/modeling/heads/pico_head.py b/ppdet/modeling/heads/pico_head.py index 98c8c8ef932f3af793bc8d69709420b7930cc6ea..a63e7c90ca76f54934f9e28858e135cdb5c04d16 100644 --- a/ppdet/modeling/heads/pico_head.py +++ b/ppdet/modeling/heads/pico_head.py @@ -23,7 +23,6 @@ import paddle.nn as nn import paddle.nn.functional as F from paddle import ParamAttr from paddle.nn.initializer import Normal, Constant -from paddle.fluid.dygraph import parallel_helper from ppdet.modeling.ops import get_static_shape from ..initializer import normal_ @@ -91,7 +90,7 @@ class PicoFeat(nn.Layer): self.reg_convs = [] if use_se: assert share_cls_reg == 
True, \ - 'In the case of using se, share_cls_reg is not supported' + 'In the case of using se, share_cls_reg must be set to True' self.se = nn.LayerList() for stage_idx in range(num_fpn_stride): cls_subnet_convs = [] @@ -194,7 +193,7 @@ class PicoHead(OTAVFLHead): 'conv_feat', 'dgqp_module', 'loss_class', 'loss_dfl', 'loss_bbox', 'assigner', 'nms' ] - __shared__ = ['num_classes'] + __shared__ = ['num_classes', 'eval_size'] def __init__(self, conv_feat='PicoFeat', @@ -210,7 +209,8 @@ class PicoHead(OTAVFLHead): feat_in_chan=96, nms=None, nms_pre=1000, - cell_offset=0): + cell_offset=0, + eval_size=None): super(PicoHead, self).__init__( conv_feat=conv_feat, dgqp_module=dgqp_module, @@ -239,6 +239,7 @@ class PicoHead(OTAVFLHead): self.nms = nms self.nms_pre = nms_pre self.cell_offset = cell_offset + self.eval_size = eval_size self.use_sigmoid = self.loss_vfl.use_sigmoid if self.use_sigmoid: @@ -282,12 +283,50 @@ class PicoHead(OTAVFLHead): bias_attr=ParamAttr(initializer=Constant(value=0)))) self.head_reg_list.append(head_reg) + # initialize the anchor points + if self.eval_size: + self.anchor_points, self.stride_tensor = self._generate_anchors() + def forward(self, fpn_feats, export_post_process=True): assert len(fpn_feats) == len( self.fpn_stride ), "The size of fpn_feats is not equal to size of fpn_stride" - cls_logits_list = [] - bboxes_reg_list = [] + + if self.training: + return self.forward_train(fpn_feats) + else: + return self.forward_eval( + fpn_feats, export_post_process=export_post_process) + + def forward_train(self, fpn_feats): + cls_logits_list, bboxes_reg_list = [], [] + for i, fpn_feat in enumerate(fpn_feats): + conv_cls_feat, conv_reg_feat = self.conv_feat(fpn_feat, i) + if self.conv_feat.share_cls_reg: + cls_logits = self.head_cls_list[i](conv_cls_feat) + cls_score, bbox_pred = paddle.split( + cls_logits, + [self.cls_out_channels, 4 * (self.reg_max + 1)], + axis=1) + else: + cls_score = self.head_cls_list[i](conv_cls_feat) + bbox_pred = self.head_reg_list[i](conv_reg_feat) + + if self.dgqp_module: + quality_score = self.dgqp_module(bbox_pred) + cls_score = F.sigmoid(cls_score) * quality_score + + cls_logits_list.append(cls_score) + bboxes_reg_list.append(bbox_pred) + + return (cls_logits_list, bboxes_reg_list) + + def forward_eval(self, fpn_feats, export_post_process=True): + if self.eval_size: + anchor_points, stride_tensor = self.anchor_points, self.stride_tensor + else: + anchor_points, stride_tensor = self._generate_anchors(fpn_feats) + cls_logits_list, bboxes_reg_list = [], [] for i, fpn_feat in enumerate(fpn_feats): conv_cls_feat, conv_reg_feat = self.conv_feat(fpn_feat, i) if self.conv_feat.share_cls_reg: @@ -307,50 +346,68 @@ class PicoHead(OTAVFLHead): if not export_post_process: # Now only supports batch size = 1 in deploy # TODO(ygh): support batch size > 1 - cls_score = F.sigmoid(cls_score).reshape( + cls_score_out = F.sigmoid(cls_score).reshape( [1, self.cls_out_channels, -1]).transpose([0, 2, 1]) bbox_pred = bbox_pred.reshape([1, (self.reg_max + 1) * 4, -1]).transpose([0, 2, 1]) - elif not self.training: - cls_score = F.sigmoid(cls_score.transpose([0, 2, 3, 1])) + else: + b, _, h, w = fpn_feat.shape + l = h * w + cls_score_out = F.sigmoid( + cls_score.reshape([b, self.cls_out_channels, l])) bbox_pred = bbox_pred.transpose([0, 2, 3, 1]) - stride = self.fpn_stride[i] - b, cell_h, cell_w, _ = paddle.shape(cls_score) - y, x = self.get_single_level_center_point( - [cell_h, cell_w], stride, cell_offset=self.cell_offset) - center_points = paddle.stack([x, y], 
axis=-1) - cls_score = cls_score.reshape([b, -1, self.cls_out_channels]) - bbox_pred = self.distribution_project(bbox_pred) * stride - bbox_pred = bbox_pred.reshape([b, cell_h * cell_w, 4]) - - # NOTE: If keep_ratio=False and image shape value that - # multiples of 32, distance2bbox not set max_shapes parameter - # to speed up model prediction. If need to set max_shapes, - # please use inputs['im_shape']. - bbox_pred = batch_distance2bbox( - center_points, bbox_pred, max_shapes=None) + bbox_pred = self.distribution_project(bbox_pred) + bbox_pred = bbox_pred.reshape([b, l, 4]) - cls_logits_list.append(cls_score) + cls_logits_list.append(cls_score_out) bboxes_reg_list.append(bbox_pred) + if export_post_process: + cls_logits_list = paddle.concat(cls_logits_list, axis=-1) + bboxes_reg_list = paddle.concat(bboxes_reg_list, axis=1) + bboxes_reg_list = batch_distance2bbox(anchor_points, + bboxes_reg_list) + bboxes_reg_list *= stride_tensor + return (cls_logits_list, bboxes_reg_list) - def post_process(self, - gfl_head_outs, - im_shape, - scale_factor, - export_nms=True): - cls_scores, bboxes_reg = gfl_head_outs - bboxes = paddle.concat(bboxes_reg, axis=1) - mlvl_scores = paddle.concat(cls_scores, axis=1) - mlvl_scores = mlvl_scores.transpose([0, 2, 1]) + def _generate_anchors(self, feats=None): + # just use in eval time + anchor_points = [] + stride_tensor = [] + for i, stride in enumerate(self.fpn_stride): + if feats is not None: + _, _, h, w = feats[i].shape + else: + h = math.ceil(self.eval_size[0] / stride) + w = math.ceil(self.eval_size[1] / stride) + shift_x = paddle.arange(end=w) + self.cell_offset + shift_y = paddle.arange(end=h) + self.cell_offset + shift_y, shift_x = paddle.meshgrid(shift_y, shift_x) + anchor_point = paddle.cast( + paddle.stack( + [shift_x, shift_y], axis=-1), dtype='float32') + anchor_points.append(anchor_point.reshape([-1, 2])) + stride_tensor.append( + paddle.full( + [h * w, 1], stride, dtype='float32')) + anchor_points = paddle.concat(anchor_points) + stride_tensor = paddle.concat(stride_tensor) + return anchor_points, stride_tensor + + def post_process(self, head_outs, scale_factor, export_nms=True): + pred_scores, pred_bboxes = head_outs if not export_nms: - return bboxes, mlvl_scores + return pred_bboxes, pred_scores else: # rescale: [h_scale, w_scale] -> [w_scale, h_scale, w_scale, h_scale] - im_scale = scale_factor.flip([1]).tile([1, 2]).unsqueeze(1) - bboxes /= im_scale - bbox_pred, bbox_num, _ = self.nms(bboxes, mlvl_scores) + scale_y, scale_x = paddle.split(scale_factor, 2, axis=-1) + scale_factor = paddle.concat( + [scale_x, scale_y, scale_x, scale_y], + axis=-1).reshape([-1, 1, 4]) + # scale bbox to origin image size. 
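+            # For example (illustrative): if the input image was resized by a
+            # factor of 0.5 in each dimension, scale_factor is [[0.5, 0.5]] and
+            # the concat/reshape above yields [[[0.5, 0.5, 0.5, 0.5]]], so the
+            # division below broadcasts over all boxes and maps them back to
+            # original-image coordinates.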
+ pred_bboxes /= scale_factor + bbox_pred, bbox_num, _ = self.nms(pred_bboxes, pred_scores) return bbox_pred, bbox_num @@ -374,29 +431,29 @@ class PicoHeadV2(GFLHead): 'conv_feat', 'dgqp_module', 'loss_class', 'loss_dfl', 'loss_bbox', 'static_assigner', 'assigner', 'nms' ] - __shared__ = ['num_classes'] - - def __init__( - self, - conv_feat='PicoFeatV2', - dgqp_module=None, - num_classes=80, - fpn_stride=[8, 16, 32], - prior_prob=0.01, - use_align_head=True, - loss_class='VariFocalLoss', - loss_dfl='DistributionFocalLoss', - loss_bbox='GIoULoss', - static_assigner_epoch=60, - static_assigner='ATSSAssigner', - assigner='TaskAlignedAssigner', - reg_max=16, - feat_in_chan=96, - nms=None, - nms_pre=1000, - cell_offset=0, - act='hard_swish', - grid_cell_scale=5.0, ): + __shared__ = ['num_classes', 'eval_size'] + + def __init__(self, + conv_feat='PicoFeatV2', + dgqp_module=None, + num_classes=80, + fpn_stride=[8, 16, 32], + prior_prob=0.01, + use_align_head=True, + loss_class='VariFocalLoss', + loss_dfl='DistributionFocalLoss', + loss_bbox='GIoULoss', + static_assigner_epoch=60, + static_assigner='ATSSAssigner', + assigner='TaskAlignedAssigner', + reg_max=16, + feat_in_chan=96, + nms=None, + nms_pre=1000, + cell_offset=0, + act='hard_swish', + grid_cell_scale=5.0, + eval_size=None): super(PicoHeadV2, self).__init__( conv_feat=conv_feat, dgqp_module=dgqp_module, @@ -432,6 +489,7 @@ class PicoHeadV2(GFLHead): self.grid_cell_scale = grid_cell_scale self.use_align_head = use_align_head self.cls_out_channels = self.num_classes + self.eval_size = eval_size bias_init_value = -math.log((1 - self.prior_prob) / self.prior_prob) # Clear the super class initialization @@ -478,11 +536,22 @@ class PicoHeadV2(GFLHead): act=self.act, use_act_in_out=False)) + # initialize the anchor points + if self.eval_size: + self.anchor_points, self.stride_tensor = self._generate_anchors() + def forward(self, fpn_feats, export_post_process=True): assert len(fpn_feats) == len( self.fpn_stride ), "The size of fpn_feats is not equal to size of fpn_stride" + if self.training: + return self.forward_train(fpn_feats) + else: + return self.forward_eval( + fpn_feats, export_post_process=export_post_process) + + def forward_train(self, fpn_feats): cls_score_list, reg_list, box_list = [], [], [] for i, (fpn_feat, stride) in enumerate(zip(fpn_feats, self.fpn_stride)): b, _, h, w = get_static_shape(fpn_feat) @@ -498,7 +567,48 @@ class PicoHeadV2(GFLHead): else: cls_score = F.sigmoid(cls_logit) - if not export_post_process and not self.training: + cls_score_out = cls_score.transpose([0, 2, 3, 1]) + bbox_pred = reg_pred.transpose([0, 2, 3, 1]) + b, cell_h, cell_w, _ = paddle.shape(cls_score_out) + y, x = self.get_single_level_center_point( + [cell_h, cell_w], stride, cell_offset=self.cell_offset) + center_points = paddle.stack([x, y], axis=-1) + cls_score_out = cls_score_out.reshape( + [b, -1, self.cls_out_channels]) + bbox_pred = self.distribution_project(bbox_pred) * stride + bbox_pred = bbox_pred.reshape([b, cell_h * cell_w, 4]) + bbox_pred = batch_distance2bbox( + center_points, bbox_pred, max_shapes=None) + cls_score_list.append(cls_score.flatten(2).transpose([0, 2, 1])) + reg_list.append(reg_pred.flatten(2).transpose([0, 2, 1])) + box_list.append(bbox_pred / stride) + + cls_score_list = paddle.concat(cls_score_list, axis=1) + box_list = paddle.concat(box_list, axis=1) + reg_list = paddle.concat(reg_list, axis=1) + return cls_score_list, reg_list, box_list, fpn_feats + + def forward_eval(self, fpn_feats, export_post_process=True): + 
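+        # Illustrative note: when a fixed eval_size is configured, the anchor
+        # points and stride tensor precomputed in __init__ are reused here;
+        # otherwise they are regenerated from the actual feature-map shapes.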
if self.eval_size: + anchor_points, stride_tensor = self.anchor_points, self.stride_tensor + else: + anchor_points, stride_tensor = self._generate_anchors(fpn_feats) + cls_score_list, box_list = [], [] + for i, (fpn_feat, stride) in enumerate(zip(fpn_feats, self.fpn_stride)): + b, _, h, w = fpn_feat.shape + # task decomposition + conv_cls_feat, se_feat = self.conv_feat(fpn_feat, i) + cls_logit = self.head_cls_list[i](se_feat) + reg_pred = self.head_reg_list[i](se_feat) + + # cls prediction and alignment + if self.use_align_head: + cls_prob = F.sigmoid(self.cls_align[i](conv_cls_feat)) + cls_score = (F.sigmoid(cls_logit) * cls_prob + eps).sqrt() + else: + cls_score = F.sigmoid(cls_logit) + + if not export_post_process: # Now only supports batch size = 1 in deploy cls_score_list.append( cls_score.reshape([1, self.cls_out_channels, -1]).transpose( @@ -507,34 +617,21 @@ class PicoHeadV2(GFLHead): reg_pred.reshape([1, (self.reg_max + 1) * 4, -1]).transpose( [0, 2, 1])) else: - cls_score_out = cls_score.transpose([0, 2, 3, 1]) + l = h * w + cls_score_out = cls_score.reshape([b, self.cls_out_channels, l]) bbox_pred = reg_pred.transpose([0, 2, 3, 1]) - b, cell_h, cell_w, _ = paddle.shape(cls_score_out) - y, x = self.get_single_level_center_point( - [cell_h, cell_w], stride, cell_offset=self.cell_offset) - center_points = paddle.stack([x, y], axis=-1) - cls_score_out = cls_score_out.reshape( - [b, -1, self.cls_out_channels]) - bbox_pred = self.distribution_project(bbox_pred) * stride - bbox_pred = bbox_pred.reshape([b, cell_h * cell_w, 4]) - bbox_pred = batch_distance2bbox( - center_points, bbox_pred, max_shapes=None) - if not self.training: - cls_score_list.append(cls_score_out) - box_list.append(bbox_pred) - else: - cls_score_list.append( - cls_score.flatten(2).transpose([0, 2, 1])) - reg_list.append(reg_pred.flatten(2).transpose([0, 2, 1])) - box_list.append(bbox_pred / stride) - - if not self.training: - return cls_score_list, box_list - else: - cls_score_list = paddle.concat(cls_score_list, axis=1) + bbox_pred = self.distribution_project(bbox_pred) + bbox_pred = bbox_pred.reshape([b, l, 4]) + cls_score_list.append(cls_score_out) + box_list.append(bbox_pred) + + if export_post_process: + cls_score_list = paddle.concat(cls_score_list, axis=-1) box_list = paddle.concat(box_list, axis=1) - reg_list = paddle.concat(reg_list, axis=1) - return cls_score_list, reg_list, box_list, fpn_feats + box_list = batch_distance2bbox(anchor_points, box_list) + box_list *= stride_tensor + + return cls_score_list, box_list def get_loss(self, head_outs, gt_meta): pred_scores, pred_regs, pred_bboxes, fpn_feats = head_outs @@ -628,8 +725,7 @@ class PicoHeadV2(GFLHead): loss_dfl = paddle.zeros([1]) avg_factor = flatten_assigned_scores.sum() - if paddle.fluid.core.is_compiled_with_dist( - ) and parallel_helper._is_parallel_ctx_initialized(): + if paddle.distributed.get_world_size() > 1: paddle.distributed.all_reduce(avg_factor) avg_factor = paddle.clip( avg_factor / paddle.distributed.get_world_size(), min=1) @@ -644,20 +740,41 @@ class PicoHeadV2(GFLHead): return loss_states - def post_process(self, - gfl_head_outs, - im_shape, - scale_factor, - export_nms=True): - cls_scores, bboxes_reg = gfl_head_outs - bboxes = paddle.concat(bboxes_reg, axis=1) - mlvl_scores = paddle.concat(cls_scores, axis=1) - mlvl_scores = mlvl_scores.transpose([0, 2, 1]) + def _generate_anchors(self, feats=None): + # just use in eval time + anchor_points = [] + stride_tensor = [] + for i, stride in enumerate(self.fpn_stride): + if feats is not 
None: + _, _, h, w = feats[i].shape + else: + h = math.ceil(self.eval_size[0] / stride) + w = math.ceil(self.eval_size[1] / stride) + shift_x = paddle.arange(end=w) + self.cell_offset + shift_y = paddle.arange(end=h) + self.cell_offset + shift_y, shift_x = paddle.meshgrid(shift_y, shift_x) + anchor_point = paddle.cast( + paddle.stack( + [shift_x, shift_y], axis=-1), dtype='float32') + anchor_points.append(anchor_point.reshape([-1, 2])) + stride_tensor.append( + paddle.full( + [h * w, 1], stride, dtype='float32')) + anchor_points = paddle.concat(anchor_points) + stride_tensor = paddle.concat(stride_tensor) + return anchor_points, stride_tensor + + def post_process(self, head_outs, scale_factor, export_nms=True): + pred_scores, pred_bboxes = head_outs if not export_nms: - return bboxes, mlvl_scores + return pred_bboxes, pred_scores else: # rescale: [h_scale, w_scale] -> [w_scale, h_scale, w_scale, h_scale] - im_scale = scale_factor.flip([1]).tile([1, 2]).unsqueeze(1) - bboxes /= im_scale - bbox_pred, bbox_num, _ = self.nms(bboxes, mlvl_scores) + scale_y, scale_x = paddle.split(scale_factor, 2, axis=-1) + scale_factor = paddle.concat( + [scale_x, scale_y, scale_x, scale_y], + axis=-1).reshape([-1, 1, 4]) + # scale bbox to origin image size. + pred_bboxes /= scale_factor + bbox_pred, bbox_num, _ = self.nms(pred_bboxes, pred_scores) return bbox_pred, bbox_num diff --git a/ppdet/modeling/heads/ppyoloe_head.py b/ppdet/modeling/heads/ppyoloe_head.py index 920bb2298909e5275c9bc04f3c73cce3f4c8ff36..4e9c303dc64252b26a9ff3153cf65f34b53196a4 100644 --- a/ppdet/modeling/heads/ppyoloe_head.py +++ b/ppdet/modeling/heads/ppyoloe_head.py @@ -22,7 +22,8 @@ from ..losses import GIoULoss from ..initializer import bias_init_with_prob, constant_, normal_ from ..assigners.utils import generate_anchors_for_grid_cell from ppdet.modeling.backbones.cspresnet import ConvBNLayer -from ppdet.modeling.ops import get_static_shape, paddle_distributed_is_initialized, get_act_fn +from ppdet.modeling.ops import get_static_shape, get_act_fn +from ppdet.modeling.layers import MultiClassNMS __all__ = ['PPYOLOEHead'] @@ -45,7 +46,7 @@ class ESEAttn(nn.Layer): @register class PPYOLOEHead(nn.Layer): - __shared__ = ['num_classes', 'trt', 'exclude_nms'] + __shared__ = ['num_classes', 'eval_size', 'trt', 'exclude_nms'] __inject__ = ['static_assigner', 'assigner', 'nms'] def __init__(self, @@ -61,7 +62,7 @@ class PPYOLOEHead(nn.Layer): static_assigner='ATSSAssigner', assigner='TaskAlignedAssigner', nms='MultiClassNMS', - eval_input_size=[], + eval_size=None, loss_weight={ 'class': 1.0, 'iou': 2.5, @@ -80,12 +81,14 @@ class PPYOLOEHead(nn.Layer): self.iou_loss = GIoULoss() self.loss_weight = loss_weight self.use_varifocal_loss = use_varifocal_loss - self.eval_input_size = eval_input_size + self.eval_size = eval_size self.static_assigner_epoch = static_assigner_epoch self.static_assigner = static_assigner self.assigner = assigner self.nms = nms + if isinstance(self.nms, MultiClassNMS) and trt: + self.nms.trt = trt self.exclude_nms = exclude_nms # stem self.stem_cls = nn.LayerList() @@ -108,6 +111,7 @@ class PPYOLOEHead(nn.Layer): in_c, 4 * (self.reg_max + 1), 3, padding=1)) # projection conv self.proj_conv = nn.Conv2D(self.reg_max + 1, 1, 1, bias_attr=False) + self.proj_conv.skip_quant = True self._init_weights() @classmethod @@ -127,10 +131,10 @@ class PPYOLOEHead(nn.Layer): self.proj.reshape([1, self.reg_max + 1, 1, 1])) self.proj_conv.weight.stop_gradient = True - if self.eval_input_size: + if self.eval_size: anchor_points, 
stride_tensor = self._generate_anchors() - self.register_buffer('anchor_points', anchor_points) - self.register_buffer('stride_tensor', stride_tensor) + self.anchor_points = anchor_points + self.stride_tensor = stride_tensor def forward_train(self, feats, targets): anchors, anchor_points, num_anchors_list, stride_tensor = \ @@ -156,7 +160,7 @@ class PPYOLOEHead(nn.Layer): num_anchors_list, stride_tensor ], targets) - def _generate_anchors(self, feats=None): + def _generate_anchors(self, feats=None, dtype='float32'): # just use in eval time anchor_points = [] stride_tensor = [] @@ -164,24 +168,22 @@ class PPYOLOEHead(nn.Layer): if feats is not None: _, _, h, w = feats[i].shape else: - h = int(self.eval_input_size[0] / stride) - w = int(self.eval_input_size[1] / stride) + h = int(self.eval_size[0] / stride) + w = int(self.eval_size[1] / stride) shift_x = paddle.arange(end=w) + self.grid_cell_offset shift_y = paddle.arange(end=h) + self.grid_cell_offset shift_y, shift_x = paddle.meshgrid(shift_y, shift_x) anchor_point = paddle.cast( paddle.stack( - [shift_x, shift_y], axis=-1), dtype='float32') + [shift_x, shift_y], axis=-1), dtype=dtype) anchor_points.append(anchor_point.reshape([-1, 2])) - stride_tensor.append( - paddle.full( - [h * w, 1], stride, dtype='float32')) + stride_tensor.append(paddle.full([h * w, 1], stride, dtype=dtype)) anchor_points = paddle.concat(anchor_points) stride_tensor = paddle.concat(stride_tensor) return anchor_points, stride_tensor def forward_eval(self, feats): - if self.eval_input_size: + if self.eval_size: anchor_points, stride_tensor = self.anchor_points, self.stride_tensor else: anchor_points, stride_tensor = self._generate_anchors(feats) @@ -290,7 +292,7 @@ class PPYOLOEHead(nn.Layer): else: loss_l1 = paddle.zeros([1]) loss_iou = paddle.zeros([1]) - loss_dfl = paddle.zeros([1]) + loss_dfl = pred_dist.sum() * 0. return loss_l1, loss_iou, loss_dfl def get_loss(self, head_outs, gt_meta): @@ -331,14 +333,15 @@ class PPYOLOEHead(nn.Layer): assigned_bboxes /= stride_tensor # cls loss if self.use_varifocal_loss: - one_hot_label = F.one_hot(assigned_labels, self.num_classes) + one_hot_label = F.one_hot(assigned_labels, + self.num_classes + 1)[..., :-1] loss_cls = self._varifocal_loss(pred_scores, assigned_scores, one_hot_label) else: loss_cls = self._focal_loss(pred_scores, assigned_scores, alpha_l) assigned_scores_sum = assigned_scores.sum() - if paddle_distributed_is_initialized(): + if paddle.distributed.get_world_size() > 1: paddle.distributed.all_reduce(assigned_scores_sum) assigned_scores_sum = paddle.clip( assigned_scores_sum / paddle.distributed.get_world_size(), @@ -361,7 +364,7 @@ class PPYOLOEHead(nn.Layer): } return out_dict - def post_process(self, head_outs, img_shape, scale_factor): + def post_process(self, head_outs, scale_factor): pred_scores, pred_dist, anchor_points, stride_tensor = head_outs pred_bboxes = batch_distance2bbox(anchor_points, pred_dist.transpose([0, 2, 1])) diff --git a/ppdet/modeling/heads/retina_head.py b/ppdet/modeling/heads/retina_head.py index e8f5cbd0ac194d5adcaa0893cf12f0ffaa0161e9..8705e86febb30d06fcbbd06187a76548450c9600 100644 --- a/ppdet/modeling/heads/retina_head.py +++ b/ppdet/modeling/heads/retina_head.py @@ -1,4 +1,4 @@ -# Copyright (c) 2020 PaddlePaddle Authors. All Rights Reserved. +# Copyright (c) 2021 PaddlePaddle Authors. All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. 
@@ -16,17 +16,20 @@ from __future__ import absolute_import from __future__ import division from __future__ import print_function -import math, paddle +import math +import paddle import paddle.nn as nn import paddle.nn.functional as F from paddle import ParamAttr from paddle.nn.initializer import Normal, Constant -from ppdet.modeling.proposal_generator import AnchorGenerator -from ppdet.core.workspace import register +from ppdet.modeling.bbox_utils import bbox2delta, delta2bbox from ppdet.modeling.heads.fcos_head import FCOSFeat +from ppdet.core.workspace import register + __all__ = ['RetinaHead'] + @register class RetinaFeat(FCOSFeat): """We use FCOSFeat to construct conv layers in RetinaNet. @@ -34,72 +37,49 @@ class RetinaFeat(FCOSFeat): """ pass -@register -class RetinaAnchorGenerator(AnchorGenerator): - def __init__(self, - octave_base_scale=4, - scales_per_octave=3, - aspect_ratios=[0.5, 1.0, 2.0], - strides=[8.0, 16.0, 32.0, 64.0, 128.0], - variance=[1.0, 1.0, 1.0, 1.0], - offset=0.0): - anchor_sizes = [] - for s in strides: - anchor_sizes.append([ - s * octave_base_scale * 2**(i/scales_per_octave) \ - for i in range(scales_per_octave)]) - super(RetinaAnchorGenerator, self).__init__( - anchor_sizes=anchor_sizes, - aspect_ratios=aspect_ratios, - strides=strides, - variance=variance, - offset=offset) @register class RetinaHead(nn.Layer): """Used in RetinaNet proposed in paper https://arxiv.org/pdf/1708.02002.pdf """ + __shared__ = ['num_classes'] __inject__ = [ - 'conv_feat', 'anchor_generator', 'bbox_assigner', - 'bbox_coder', 'loss_class', 'loss_bbox', 'nms'] + 'conv_feat', 'anchor_generator', 'bbox_assigner', 'loss_class', + 'loss_bbox', 'nms' + ] + def __init__(self, num_classes=80, + conv_feat='RetinaFeat', + anchor_generator='RetinaAnchorGenerator', + bbox_assigner='MaxIoUAssigner', + loss_class='FocalLoss', + loss_bbox='SmoothL1Loss', + nms='MultiClassNMS', prior_prob=0.01, - decode_reg_out=False, - conv_feat=None, - anchor_generator=None, - bbox_assigner=None, - bbox_coder=None, - loss_class=None, - loss_bbox=None, nms_pre=1000, - nms=None): + weights=[1., 1., 1., 1.]): super(RetinaHead, self).__init__() self.num_classes = num_classes - self.prior_prob = prior_prob - # allow RetinaNet to use IoU based losses. 
-        self.decode_reg_out = decode_reg_out
         self.conv_feat = conv_feat
         self.anchor_generator = anchor_generator
         self.bbox_assigner = bbox_assigner
-        self.bbox_coder = bbox_coder
         self.loss_class = loss_class
         self.loss_bbox = loss_bbox
-        self.nms_pre = nms_pre
         self.nms = nms
-        self.cls_out_channels = num_classes
-        self.init_layers()
+        self.nms_pre = nms_pre
+        self.weights = weights

-    def init_layers(self):
-        bias_init_value = -math.log((1 - self.prior_prob) / self.prior_prob)
+        bias_init_value = -math.log((1 - prior_prob) / prior_prob)
         num_anchors = self.anchor_generator.num_anchors
         self.retina_cls = nn.Conv2D(
             in_channels=self.conv_feat.feat_out,
-            out_channels=self.cls_out_channels * num_anchors,
+            out_channels=self.num_classes * num_anchors,
             kernel_size=3,
             stride=1,
             padding=1,
-            weight_attr=ParamAttr(initializer=Normal(mean=0.0, std=0.01)),
+            weight_attr=ParamAttr(initializer=Normal(
+                mean=0.0, std=0.01)),
             bias_attr=ParamAttr(initializer=Constant(value=bias_init_value)))
         self.retina_reg = nn.Conv2D(
             in_channels=self.conv_feat.feat_out,
@@ -107,10 +87,11 @@ class RetinaHead(nn.Layer):
             kernel_size=3,
             stride=1,
             padding=1,
-            weight_attr=ParamAttr(initializer=Normal(mean=0.0, std=0.01)),
+            weight_attr=ParamAttr(initializer=Normal(
+                mean=0.0, std=0.01)),
             bias_attr=ParamAttr(initializer=Constant(value=0)))

-    def forward(self, neck_feats):
+    def forward(self, neck_feats, targets=None):
         cls_logits_list = []
         bboxes_reg_list = []
         for neck_feat in neck_feats:
@@ -119,33 +100,40 @@ class RetinaHead(nn.Layer):
             bbox_reg = self.retina_reg(conv_reg_feat)
             cls_logits_list.append(cls_logits)
             bboxes_reg_list.append(bbox_reg)
-        return (cls_logits_list, bboxes_reg_list)

-    def get_loss(self, head_outputs, meta):
+        if self.training:
+            return self.get_loss([cls_logits_list, bboxes_reg_list], targets)
+        else:
+            return [cls_logits_list, bboxes_reg_list]
+
+    def get_loss(self, head_outputs, targets):
         """Here we calculate loss for a batch of images.
         We assign anchors to gts in each image and gather all the assigned
         positive and negative samples. Then loss is calculated on the
        gathered samples.
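        For example (illustrative): with match_labels == [1, 0, -1], the
        first anchor contributes to both the classification and regression
        losses, the second only to the classification loss (as background),
        and the third is ignored entirely.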
""" - cls_logits, bboxes_reg = head_outputs - # we use the same anchor for all images - anchors = self.anchor_generator(cls_logits) + cls_logits_list, bboxes_reg_list = head_outputs + anchors = self.anchor_generator(cls_logits_list) anchors = paddle.concat(anchors) # matches: contain gt_inds # match_labels: -1(ignore), 0(neg) or 1(pos) matches_list, match_labels_list = [], [] # assign anchors to gts, no sampling is involved - for gt_bbox in meta['gt_bbox']: + for gt_bbox in targets['gt_bbox']: matches, match_labels = self.bbox_assigner(anchors, gt_bbox) matches_list.append(matches) match_labels_list.append(match_labels) + # reshape network outputs - cls_logits = [_.transpose([0, 2, 3, 1]) for _ in cls_logits] - cls_logits = [_.reshape([0, -1, self.cls_out_channels]) \ - for _ in cls_logits] - bboxes_reg = [_.transpose([0, 2, 3, 1]) for _ in bboxes_reg] - bboxes_reg = [_.reshape([0, -1, 4]) for _ in bboxes_reg] + cls_logits = [ + _.transpose([0, 2, 3, 1]).reshape([0, -1, self.num_classes]) + for _ in cls_logits_list + ] + bboxes_reg = [ + _.transpose([0, 2, 3, 1]).reshape([0, -1, 4]) + for _ in bboxes_reg_list + ] cls_logits = paddle.concat(cls_logits, axis=1) bboxes_reg = paddle.concat(bboxes_reg, axis=1) @@ -154,7 +142,7 @@ class RetinaHead(nn.Layer): # find and gather preds and targets in each image for matches, match_labels, cls_logit, bbox_reg, gt_bbox, gt_class in \ zip(matches_list, match_labels_list, cls_logits, bboxes_reg, - meta['gt_bbox'], meta['gt_class']): + targets['gt_bbox'], targets['gt_class']): pos_mask = (match_labels == 1) neg_mask = (match_labels == 0) chosen_mask = paddle.logical_or(pos_mask, neg_mask) @@ -163,59 +151,65 @@ class RetinaHead(nn.Layer): bg_class = paddle.to_tensor( [self.num_classes], dtype=gt_class.dtype) # a trick to assign num_classes to negative targets - gt_class = paddle.concat([gt_class, bg_class]) - matches = paddle.where( - neg_mask, paddle.full_like(matches, gt_class.size-1), matches) + gt_class = paddle.concat([gt_class, bg_class], axis=-1) + matches = paddle.where(neg_mask, + paddle.full_like(matches, gt_class.size - 1), + matches) cls_pred = cls_logit[chosen_mask] - cls_tar = gt_class[matches[chosen_mask]] + cls_tar = gt_class[matches[chosen_mask]] reg_pred = bbox_reg[pos_mask].reshape([-1, 4]) reg_tar = gt_bbox[matches[pos_mask]].reshape([-1, 4]) - if self.decode_reg_out: - reg_pred = self.bbox_coder.decode( - anchors[pos_mask], reg_pred) - else: - reg_tar = self.bbox_coder.encode(anchors[pos_mask], reg_tar) + reg_tar = bbox2delta(anchors[pos_mask], reg_tar, self.weights) cls_pred_list.append(cls_pred) cls_tar_list.append(cls_tar) reg_pred_list.append(reg_pred) reg_tar_list.append(reg_tar) cls_pred = paddle.concat(cls_pred_list) - cls_tar = paddle.concat(cls_tar_list) + cls_tar = paddle.concat(cls_tar_list) reg_pred = paddle.concat(reg_pred_list) - reg_tar = paddle.concat(reg_tar_list) + reg_tar = paddle.concat(reg_tar_list) + avg_factor = max(1.0, reg_pred.shape[0]) cls_loss = self.loss_class( - cls_pred, cls_tar, reduction='sum')/avg_factor - if reg_pred.size == 0: - reg_loss = bboxes_reg[0][0].sum() * 0 + cls_pred, cls_tar, reduction='sum') / avg_factor + + if reg_pred.shape[0] == 0: + reg_loss = paddle.zeros([1]) + reg_loss.stop_gradient = False else: reg_loss = self.loss_bbox( - reg_pred, reg_tar, reduction='sum')/avg_factor - return dict(loss_cls=cls_loss, loss_reg=reg_loss) + reg_pred, reg_tar, reduction='sum') / avg_factor + + loss = cls_loss + reg_loss + out_dict = { + 'loss_cls': cls_loss, + 'loss_reg': reg_loss, + 'loss': loss, + 
} + return out_dict def get_bboxes_single(self, anchors, - cls_scores, - bbox_preds, + cls_scores_list, + bbox_preds_list, im_shape, scale_factor, rescale=True): - assert len(cls_scores) == len(bbox_preds) + assert len(cls_scores_list) == len(bbox_preds_list) mlvl_bboxes = [] mlvl_scores = [] - for anchor, cls_score, bbox_pred in zip(anchors, cls_scores, bbox_preds): + for anchor, cls_score, bbox_pred in zip(anchors, cls_scores_list, + bbox_preds_list): cls_score = cls_score.reshape([-1, self.num_classes]) bbox_pred = bbox_pred.reshape([-1, 4]) if self.nms_pre is not None and cls_score.shape[0] > self.nms_pre: max_score = cls_score.max(axis=1) _, topk_inds = max_score.topk(self.nms_pre) bbox_pred = bbox_pred.gather(topk_inds) - anchor = anchor.gather(topk_inds) + anchor = anchor.gather(topk_inds) cls_score = cls_score.gather(topk_inds) - bbox_pred = self.bbox_coder.decode( - anchor, bbox_pred, max_shape=im_shape) - bbox_pred = bbox_pred.squeeze() + bbox_pred = delta2bbox(bbox_pred, anchor, self.weights).squeeze() mlvl_bboxes.append(bbox_pred) mlvl_scores.append(F.sigmoid(cls_score)) mlvl_bboxes = paddle.concat(mlvl_bboxes) @@ -227,18 +221,15 @@ class RetinaHead(nn.Layer): mlvl_scores = mlvl_scores.transpose([1, 0]) return mlvl_bboxes, mlvl_scores - def decode(self, anchors, cls_scores, bbox_preds, im_shape, scale_factor): + def decode(self, anchors, cls_logits, bboxes_reg, im_shape, scale_factor): batch_bboxes = [] batch_scores = [] - for img_id in range(cls_scores[0].shape[0]): - num_lvls = len(cls_scores) - cls_score_list = [cls_scores[i][img_id] for i in range(num_lvls)] - bbox_pred_list = [bbox_preds[i][img_id] for i in range(num_lvls)] + for img_id in range(cls_logits[0].shape[0]): + num_lvls = len(cls_logits) + cls_scores_list = [cls_logits[i][img_id] for i in range(num_lvls)] + bbox_preds_list = [bboxes_reg[i][img_id] for i in range(num_lvls)] bboxes, scores = self.get_bboxes_single( - anchors, - cls_score_list, - bbox_pred_list, - im_shape[img_id], + anchors, cls_scores_list, bbox_preds_list, im_shape[img_id], scale_factor[img_id]) batch_bboxes.append(bboxes) batch_scores.append(scores) @@ -247,11 +238,12 @@ class RetinaHead(nn.Layer): return batch_bboxes, batch_scores def post_process(self, head_outputs, im_shape, scale_factor): - cls_scores, bbox_preds = head_outputs - anchors = self.anchor_generator(cls_scores) - cls_scores = [_.transpose([0, 2, 3, 1]) for _ in cls_scores] - bbox_preds = [_.transpose([0, 2, 3, 1]) for _ in bbox_preds] - bboxes, scores = self.decode( - anchors, cls_scores, bbox_preds, im_shape, scale_factor) + cls_logits_list, bboxes_reg_list = head_outputs + anchors = self.anchor_generator(cls_logits_list) + cls_logits = [_.transpose([0, 2, 3, 1]) for _ in cls_logits_list] + bboxes_reg = [_.transpose([0, 2, 3, 1]) for _ in bboxes_reg_list] + bboxes, scores = self.decode(anchors, cls_logits, bboxes_reg, im_shape, + scale_factor) + bbox_pred, bbox_num, _ = self.nms(bboxes, scores) return bbox_pred, bbox_num diff --git a/ppdet/modeling/heads/roi_extractor.py b/ppdet/modeling/heads/roi_extractor.py index 35c3924e36c60ddbc82f38f6b828197e31833b01..5d2b1528f07003193b03a02bc1320bfb2d304a6d 100644 --- a/ppdet/modeling/heads/roi_extractor.py +++ b/ppdet/modeling/heads/roi_extractor.py @@ -29,7 +29,7 @@ class RoIAlign(object): RoI Align module For more details, please refer to the document of roi_align in - in ppdet/modeing/ops.py + in https://github.com/PaddlePaddle/Paddle/blob/develop/python/paddle/vision/ops.py Args: resolution (int): The output size, default 14 @@ 
-76,12 +76,12 @@ class RoIAlign(object): def __call__(self, feats, roi, rois_num): roi = paddle.concat(roi) if len(roi) > 1 else roi[0] if len(feats) == 1: - rois_feat = ops.roi_align( - feats[self.start_level], - roi, - self.resolution, - self.spatial_scale[0], - rois_num=rois_num, + rois_feat = paddle.vision.ops.roi_align( + x=feats[self.start_level], + boxes=roi, + boxes_num=rois_num, + output_size=self.resolution, + spatial_scale=self.spatial_scale[0], aligned=self.aligned) else: offset = 2 @@ -96,13 +96,13 @@ class RoIAlign(object): rois_num=rois_num) rois_feat_list = [] for lvl in range(self.start_level, self.end_level + 1): - roi_feat = ops.roi_align( - feats[lvl], - rois_dist[lvl], - self.resolution, - self.spatial_scale[lvl], + roi_feat = paddle.vision.ops.roi_align( + x=feats[lvl], + boxes=rois_dist[lvl], + boxes_num=rois_num_dist[lvl], + output_size=self.resolution, + spatial_scale=self.spatial_scale[lvl], sampling_ratio=self.sampling_ratio, - rois_num=rois_num_dist[lvl], aligned=self.aligned) rois_feat_list.append(roi_feat) rois_feat_shuffle = paddle.concat(rois_feat_list) diff --git a/ppdet/modeling/heads/s2anet_head.py b/ppdet/modeling/heads/s2anet_head.py index 7910379c402214090b211ed05e3b68a482f5c581..e17023d672532fb7aa786a98f95bdc3315906964 100644 --- a/ppdet/modeling/heads/s2anet_head.py +++ b/ppdet/modeling/heads/s2anet_head.py @@ -23,6 +23,7 @@ from ppdet.core.workspace import register from ppdet.modeling import ops from ppdet.modeling import bbox_utils from ppdet.modeling.proposal_generator.target_layer import RBoxAssigner +from ..cls_utils import _get_class_default_kwargs import numpy as np @@ -230,7 +231,7 @@ class S2ANetHead(nn.Layer): align_conv_type='AlignConv', align_conv_size=3, use_sigmoid_cls=True, - anchor_assign=RBoxAssigner().__dict__, + anchor_assign=_get_class_default_kwargs(RBoxAssigner), reg_loss_weight=[1.0, 1.0, 1.0, 1.0, 1.1], cls_loss_weight=[1.1, 1.05], reg_loss_type='l1'): @@ -600,9 +601,9 @@ class S2ANetHead(nn.Layer): fam_bbox = paddle.sum(fam_bbox, axis=-1) feat_bbox_weights = paddle.sum(feat_bbox_weights, axis=-1) try: - from rbox_iou_ops import rbox_iou + from ext_op import rbox_iou except Exception as e: - print("import custom_ops error, try install rbox_iou_ops " \ + print("import custom_ops error, try install ext_op " \ "following ppdet/ext_op/README.md", e) sys.stdout.flush() sys.exit(-1) @@ -715,9 +716,9 @@ class S2ANetHead(nn.Layer): odm_bbox = paddle.sum(odm_bbox, axis=-1) feat_bbox_weights = paddle.sum(feat_bbox_weights, axis=-1) try: - from rbox_iou_ops import rbox_iou + from ext_op import rbox_iou except Exception as e: - print("import custom_ops error, try install rbox_iou_ops " \ + print("import custom_ops error, try install ext_op " \ "following ppdet/ext_op/README.md", e) sys.stdout.flush() sys.exit(-1) diff --git a/ppdet/modeling/heads/simota_head.py b/ppdet/modeling/heads/simota_head.py index a1485f3905625cb579d95ae4465ca22fe777314f..77e515bbc9c2770dd371fec28d3fe9a628b4bd3a 100644 --- a/ppdet/modeling/heads/simota_head.py +++ b/ppdet/modeling/heads/simota_head.py @@ -179,8 +179,15 @@ class OTAHead(GFLHead): num_level_anchors) num_total_pos = sum(pos_num_l) try: - num_total_pos = paddle.distributed.all_reduce(num_total_pos.clone( - )) / paddle.distributed.get_world_size() + cloned_num_total_pos = num_total_pos.clone() + reduced_cloned_num_total_pos = paddle.distributed.all_reduce( + cloned_num_total_pos) + if reduced_cloned_num_total_pos is not None: + num_total_pos = reduced_cloned_num_total_pos / 
paddle.distributed.get_world_size( + ) + else: + num_total_pos = cloned_num_total_pos / paddle.distributed.get_world_size( + ) except: num_total_pos = max(num_total_pos, 1) @@ -255,7 +262,12 @@ class OTAHead(GFLHead): avg_factor = sum(avg_factor) try: - avg_factor = paddle.distributed.all_reduce(avg_factor.clone()) + avg_factor_clone = avg_factor.clone() + tmp_avg_factor = paddle.distributed.all_reduce(avg_factor_clone) + if tmp_avg_factor is not None: + avg_factor = tmp_avg_factor + else: + avg_factor = avg_factor_clone avg_factor = paddle.clip( avg_factor / paddle.distributed.get_world_size(), min=1) except: @@ -396,8 +408,15 @@ class OTAVFLHead(OTAHead): num_level_anchors) num_total_pos = sum(pos_num_l) try: - num_total_pos = paddle.distributed.all_reduce(num_total_pos.clone( - )) / paddle.distributed.get_world_size() + cloned_num_total_pos = num_total_pos.clone() + reduced_cloned_num_total_pos = paddle.distributed.all_reduce( + cloned_num_total_pos) + if reduced_cloned_num_total_pos is not None: + num_total_pos = reduced_cloned_num_total_pos / paddle.distributed.get_world_size( + ) + else: + num_total_pos = cloned_num_total_pos / paddle.distributed.get_world_size( + ) except: num_total_pos = max(num_total_pos, 1) @@ -475,7 +494,12 @@ class OTAVFLHead(OTAHead): avg_factor = sum(avg_factor) try: - avg_factor = paddle.distributed.all_reduce(avg_factor.clone()) + avg_factor_clone = avg_factor.clone() + tmp_avg_factor = paddle.distributed.all_reduce(avg_factor_clone) + if tmp_avg_factor is not None: + avg_factor = tmp_avg_factor + else: + avg_factor = avg_factor_clone avg_factor = paddle.clip( avg_factor / paddle.distributed.get_world_size(), min=1) except: diff --git a/ppdet/modeling/heads/ssd_head.py b/ppdet/modeling/heads/ssd_head.py index 07e7e92f9b98a26a708c32c103755c3923b356b5..a6df4824dc036d6419f73ec82dc00e8adf0bd780 100644 --- a/ppdet/modeling/heads/ssd_head.py +++ b/ppdet/modeling/heads/ssd_head.py @@ -20,6 +20,7 @@ from paddle.regularizer import L2Decay from paddle import ParamAttr from ..layers import AnchorGeneratorSSD +from ..cls_utils import _get_class_default_kwargs class SepConvLayer(nn.Layer): @@ -113,7 +114,7 @@ class SSDHead(nn.Layer): def __init__(self, num_classes=80, in_channels=(512, 1024, 512, 256, 256, 256), - anchor_generator=AnchorGeneratorSSD().__dict__, + anchor_generator=_get_class_default_kwargs(AnchorGeneratorSSD), kernel_size=3, padding=1, use_sepconv=False, diff --git a/ppdet/modeling/heads/yolo_head.py b/ppdet/modeling/heads/yolo_head.py index 7b4e9bc3353edfe42c0c35db6fb38b67de03c730..0a63060d02aab1d20901ab7c4422d58e55166c3d 100644 --- a/ppdet/modeling/heads/yolo_head.py +++ b/ppdet/modeling/heads/yolo_head.py @@ -1,3 +1,17 @@ +# Copyright (c) 2020 PaddlePaddle Authors. All Rights Reserved. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. 
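Both `S2ANetHead` and `SSDHead` now take their default `anchor_assign`/`anchor_generator` config from `_get_class_default_kwargs(...)` instead of instantiating the class and reading `.__dict__`, so importing these modules no longer constructs a throwaway `RBoxAssigner`/`AnchorGeneratorSSD`. A minimal sketch of what such a helper can look like, assuming an implementation on top of `inspect.signature` (the shipped version in `ppdet/modeling/cls_utils.py` may differ in details):

```python
import inspect

def _get_class_default_kwargs(cls):
    # Read the constructor signature instead of calling cls(), so that
    # collecting the defaults has no side effects at import time.
    sig = inspect.signature(cls.__init__)
    return {
        name: param.default
        for name, param in sig.parameters.items()
        if name != 'self' and param.default is not inspect.Parameter.empty
    }
```

Under this assumption, `_get_class_default_kwargs(AnchorGeneratorSSD)` yields the dict of constructor defaults that `AnchorGeneratorSSD().__dict__` used to provide, minus any attributes that `__init__` derives from them.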
+ import paddle import paddle.nn as nn import paddle.nn.functional as F @@ -5,6 +19,17 @@ from paddle import ParamAttr from paddle.regularizer import L2Decay from ppdet.core.workspace import register +import math +import numpy as np +from ..initializer import bias_init_with_prob, constant_ +from ..backbones.csp_darknet import BaseConv, DWConv +from ..losses import IouLoss +from ppdet.modeling.assigners.simota_assigner import SimOTAAssigner +from ppdet.modeling.bbox_utils import bbox_overlaps +from ppdet.modeling.layers import MultiClassNMS + +__all__ = ['YOLOv3Head', 'YOLOXHead'] + def _de_sigmoid(x, eps=1e-7): x = paddle.clip(x, eps, 1. / eps) @@ -122,3 +147,270 @@ class YOLOv3Head(nn.Layer): @classmethod def from_config(cls, cfg, input_shape): return {'in_channels': [i.channels for i in input_shape], } + + +@register +class YOLOXHead(nn.Layer): + __shared__ = ['num_classes', 'width_mult', 'act', 'trt', 'exclude_nms'] + __inject__ = ['assigner', 'nms'] + + def __init__(self, + num_classes=80, + width_mult=1.0, + depthwise=False, + in_channels=[256, 512, 1024], + feat_channels=256, + fpn_strides=(8, 16, 32), + l1_epoch=285, + act='silu', + assigner=SimOTAAssigner(use_vfl=False), + nms='MultiClassNMS', + loss_weight={ + 'cls': 1.0, + 'obj': 1.0, + 'iou': 5.0, + 'l1': 1.0, + }, + trt=False, + exclude_nms=False): + super(YOLOXHead, self).__init__() + self._dtype = paddle.framework.get_default_dtype() + self.num_classes = num_classes + assert len(in_channels) > 0, "in_channels length should > 0" + self.in_channels = in_channels + feat_channels = int(feat_channels * width_mult) + self.fpn_strides = fpn_strides + self.l1_epoch = l1_epoch + self.assigner = assigner + self.nms = nms + if isinstance(self.nms, MultiClassNMS) and trt: + self.nms.trt = trt + self.exclude_nms = exclude_nms + self.loss_weight = loss_weight + self.iou_loss = IouLoss(loss_weight=1.0) # default loss_weight 2.5 + + ConvBlock = DWConv if depthwise else BaseConv + + self.stem_conv = nn.LayerList() + self.conv_cls = nn.LayerList() + self.conv_reg = nn.LayerList() # reg [x,y,w,h] + obj + for in_c in self.in_channels: + self.stem_conv.append(BaseConv(in_c, feat_channels, 1, 1, act=act)) + + self.conv_cls.append( + nn.Sequential(* [ + ConvBlock( + feat_channels, feat_channels, 3, 1, act=act), ConvBlock( + feat_channels, feat_channels, 3, 1, act=act), + nn.Conv2D( + feat_channels, + self.num_classes, + 1, + bias_attr=ParamAttr(regularizer=L2Decay(0.0))) + ])) + + self.conv_reg.append( + nn.Sequential(* [ + ConvBlock( + feat_channels, feat_channels, 3, 1, act=act), + ConvBlock( + feat_channels, feat_channels, 3, 1, act=act), + nn.Conv2D( + feat_channels, + 4 + 1, # reg [x,y,w,h] + obj + 1, + bias_attr=ParamAttr(regularizer=L2Decay(0.0))) + ])) + + self._init_weights() + + @classmethod + def from_config(cls, cfg, input_shape): + return {'in_channels': [i.channels for i in input_shape], } + + def _init_weights(self): + bias_cls = bias_init_with_prob(0.01) + bias_reg = paddle.full([5], math.log(5.), dtype=self._dtype) + bias_reg[:2] = 0. 
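+        # bias_reg layout is [x, y, w, h, obj]: the xy slots start at 0,
+        # wh at log(5), and the obj slot takes the focal-style prior below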
+ bias_reg[-1] = bias_cls + for cls_, reg_ in zip(self.conv_cls, self.conv_reg): + constant_(cls_[-1].weight) + constant_(cls_[-1].bias, bias_cls) + constant_(reg_[-1].weight) + reg_[-1].bias.set_value(bias_reg) + + def _generate_anchor_point(self, feat_sizes, strides, offset=0.): + anchor_points, stride_tensor = [], [] + num_anchors_list = [] + for feat_size, stride in zip(feat_sizes, strides): + h, w = feat_size + x = (paddle.arange(w) + offset) * stride + y = (paddle.arange(h) + offset) * stride + y, x = paddle.meshgrid(y, x) + anchor_points.append(paddle.stack([x, y], axis=-1).reshape([-1, 2])) + stride_tensor.append( + paddle.full( + [len(anchor_points[-1]), 1], stride, dtype=self._dtype)) + num_anchors_list.append(len(anchor_points[-1])) + anchor_points = paddle.concat(anchor_points).astype(self._dtype) + anchor_points.stop_gradient = True + stride_tensor = paddle.concat(stride_tensor) + stride_tensor.stop_gradient = True + return anchor_points, stride_tensor, num_anchors_list + + def forward(self, feats, targets=None): + assert len(feats) == len(self.fpn_strides), \ + "The size of feats is not equal to size of fpn_strides" + + feat_sizes = [[f.shape[-2], f.shape[-1]] for f in feats] + cls_score_list, reg_pred_list = [], [] + obj_score_list = [] + for i, feat in enumerate(feats): + feat = self.stem_conv[i](feat) + cls_logit = self.conv_cls[i](feat) + reg_pred = self.conv_reg[i](feat) + # cls prediction + cls_score = F.sigmoid(cls_logit) + cls_score_list.append(cls_score.flatten(2).transpose([0, 2, 1])) + # reg prediction + reg_xywh, obj_logit = paddle.split(reg_pred, [4, 1], axis=1) + reg_xywh = reg_xywh.flatten(2).transpose([0, 2, 1]) + reg_pred_list.append(reg_xywh) + # obj prediction + obj_score = F.sigmoid(obj_logit) + obj_score_list.append(obj_score.flatten(2).transpose([0, 2, 1])) + + cls_score_list = paddle.concat(cls_score_list, axis=1) + reg_pred_list = paddle.concat(reg_pred_list, axis=1) + obj_score_list = paddle.concat(obj_score_list, axis=1) + + # bbox decode + anchor_points, stride_tensor, _ =\ + self._generate_anchor_point(feat_sizes, self.fpn_strides) + reg_xy, reg_wh = paddle.split(reg_pred_list, 2, axis=-1) + reg_xy += (anchor_points / stride_tensor) + reg_wh = paddle.exp(reg_wh) * 0.5 + bbox_pred_list = paddle.concat( + [reg_xy - reg_wh, reg_xy + reg_wh], axis=-1) + + if self.training: + anchor_points, stride_tensor, num_anchors_list =\ + self._generate_anchor_point(feat_sizes, self.fpn_strides, 0.5) + yolox_losses = self.get_loss([ + cls_score_list, bbox_pred_list, obj_score_list, anchor_points, + stride_tensor, num_anchors_list + ], targets) + return yolox_losses + else: + pred_scores = (cls_score_list * obj_score_list).sqrt() + return pred_scores, bbox_pred_list, stride_tensor + + def get_loss(self, head_outs, targets): + pred_cls, pred_bboxes, pred_obj,\ + anchor_points, stride_tensor, num_anchors_list = head_outs + gt_labels = targets['gt_class'] + gt_bboxes = targets['gt_bbox'] + pred_scores = (pred_cls * pred_obj).sqrt() + # label assignment + center_and_strides = paddle.concat( + [anchor_points, stride_tensor, stride_tensor], axis=-1) + pos_num_list, label_list, bbox_target_list = [], [], [] + for pred_score, pred_bbox, gt_box, gt_label in zip( + pred_scores.detach(), + pred_bboxes.detach() * stride_tensor, gt_bboxes, gt_labels): + pos_num, label, _, bbox_target = self.assigner( + pred_score, center_and_strides, pred_bbox, gt_box, gt_label) + pos_num_list.append(pos_num) + label_list.append(label) + bbox_target_list.append(bbox_target) + labels = 
paddle.to_tensor(np.stack(label_list, axis=0)) + bbox_targets = paddle.to_tensor(np.stack(bbox_target_list, axis=0)) + bbox_targets /= stride_tensor # rescale bbox + + # 1. obj score loss + mask_positive = (labels != self.num_classes) + loss_obj = F.binary_cross_entropy( + pred_obj, + mask_positive.astype(pred_obj.dtype).unsqueeze(-1), + reduction='sum') + + num_pos = sum(pos_num_list) + + if num_pos > 0: + num_pos = paddle.to_tensor(num_pos, dtype=self._dtype).clip(min=1) + loss_obj /= num_pos + + # 2. iou loss + bbox_mask = mask_positive.unsqueeze(-1).tile([1, 1, 4]) + pred_bboxes_pos = paddle.masked_select(pred_bboxes, + bbox_mask).reshape([-1, 4]) + assigned_bboxes_pos = paddle.masked_select( + bbox_targets, bbox_mask).reshape([-1, 4]) + bbox_iou = bbox_overlaps(pred_bboxes_pos, assigned_bboxes_pos) + bbox_iou = paddle.diag(bbox_iou) + + loss_iou = self.iou_loss( + pred_bboxes_pos.split( + 4, axis=-1), + assigned_bboxes_pos.split( + 4, axis=-1)) + loss_iou = loss_iou.sum() / num_pos + + # 3. cls loss + cls_mask = mask_positive.unsqueeze(-1).tile( + [1, 1, self.num_classes]) + pred_cls_pos = paddle.masked_select( + pred_cls, cls_mask).reshape([-1, self.num_classes]) + assigned_cls_pos = paddle.masked_select(labels, mask_positive) + assigned_cls_pos = F.one_hot(assigned_cls_pos, + self.num_classes + 1)[..., :-1] + assigned_cls_pos *= bbox_iou.unsqueeze(-1) + loss_cls = F.binary_cross_entropy( + pred_cls_pos, assigned_cls_pos, reduction='sum') + loss_cls /= num_pos + + # 4. l1 loss + if targets['epoch_id'] >= self.l1_epoch: + loss_l1 = F.l1_loss( + pred_bboxes_pos, assigned_bboxes_pos, reduction='sum') + loss_l1 /= num_pos + else: + loss_l1 = paddle.zeros([1]) + loss_l1.stop_gradient = False + else: + loss_cls = paddle.zeros([1]) + loss_iou = paddle.zeros([1]) + loss_l1 = paddle.zeros([1]) + loss_cls.stop_gradient = False + loss_iou.stop_gradient = False + loss_l1.stop_gradient = False + + loss = self.loss_weight['obj'] * loss_obj + \ + self.loss_weight['cls'] * loss_cls + \ + self.loss_weight['iou'] * loss_iou + + if targets['epoch_id'] >= self.l1_epoch: + loss += (self.loss_weight['l1'] * loss_l1) + + yolox_losses = { + 'loss': loss, + 'loss_cls': loss_cls, + 'loss_obj': loss_obj, + 'loss_iou': loss_iou, + 'loss_l1': loss_l1, + } + return yolox_losses + + def post_process(self, head_outs, img_shape, scale_factor): + pred_scores, pred_bboxes, stride_tensor = head_outs + pred_scores = pred_scores.transpose([0, 2, 1]) + pred_bboxes *= stride_tensor + # scale bbox to origin image + scale_factor = scale_factor.flip(-1).tile([1, 2]).unsqueeze(1) + pred_bboxes /= scale_factor + if self.exclude_nms: + # `exclude_nms=True` just use in benchmark + return pred_bboxes.sum(), pred_scores.sum() + else: + bbox_pred, bbox_num, _ = self.nms(pred_bboxes, pred_scores) + return bbox_pred, bbox_num diff --git a/ppdet/modeling/initializer.py b/ppdet/modeling/initializer.py index b7a135dcca7309be09fa4819e21d77a02d332d57..b482f133dd9ac1e2568f5c971f004117c56a5368 100644 --- a/ppdet/modeling/initializer.py +++ b/ppdet/modeling/initializer.py @@ -273,7 +273,8 @@ def linear_init_(module): def conv_init_(module): bound = 1 / np.sqrt(np.prod(module.weight.shape[1:])) uniform_(module.weight, -bound, bound) - uniform_(module.bias, -bound, bound) + if module.bias is not None: + uniform_(module.bias, -bound, bound) def bias_init_with_prob(prior_prob=0.01): diff --git a/ppdet/modeling/layers.py b/ppdet/modeling/layers.py index 055cbf4f6388ce2acf6ef5d4cd58ab93dbbc9fcb..4afc7f560d1b69ed690eec05d53efd40283a33d5 100644 --- 
a/ppdet/modeling/layers.py +++ b/ppdet/modeling/layers.py @@ -440,7 +440,8 @@ class MultiClassNMS(object): normalized=True, nms_eta=1.0, return_index=False, - return_rois_num=True): + return_rois_num=True, + trt=False): super(MultiClassNMS, self).__init__() self.score_threshold = score_threshold self.nms_top_k = nms_top_k @@ -450,6 +451,7 @@ class MultiClassNMS(object): self.nms_eta = nms_eta self.return_index = return_index self.return_rois_num = return_rois_num + self.trt = trt def __call__(self, bboxes, score, background_label=-1): """ @@ -471,7 +473,19 @@ class MultiClassNMS(object): kwargs.update({'rois_num': bbox_num}) if background_label > -1: kwargs.update({'background_label': background_label}) - return ops.multiclass_nms(bboxes, score, **kwargs) + kwargs.pop('trt') + # TODO(wangxinxin08): paddle version should be develop or 2.3 and above to run nms on tensorrt + if self.trt and (int(paddle.version.major) == 0 or + (int(paddle.version.major) >= 2 and + int(paddle.version.minor) >= 3)): + # TODO(wangxinxin08): tricky switch to run nms on tensorrt + kwargs.update({'nms_eta': 1.1}) + bbox, bbox_num, _ = ops.multiclass_nms(bboxes, score, **kwargs) + mask = paddle.slice(bbox, [-1], [0], [1]) != -1 + bbox = paddle.masked_select(bbox, mask).reshape((-1, 6)) + return bbox, bbox_num, None + else: + return ops.multiclass_nms(bboxes, score, **kwargs) @register @@ -540,10 +554,15 @@ class YOLOBox(object): origin_shape = im_shape / scale_factor origin_shape = paddle.cast(origin_shape, 'int32') for i, head_out in enumerate(yolo_head_out): - boxes, scores = ops.yolo_box(head_out, origin_shape, anchors[i], - self.num_classes, self.conf_thresh, - self.downsample_ratio // 2**i, - self.clip_bbox, self.scale_x_y) + boxes, scores = paddle.vision.ops.yolo_box( + head_out, + origin_shape, + anchors[i], + self.num_classes, + self.conf_thresh, + self.downsample_ratio // 2**i, + self.clip_bbox, + scale_x_y=self.scale_x_y) boxes_list.append(boxes) scores_list.append(paddle.transpose(scores, perm=[0, 2, 1])) yolo_boxes = paddle.concat(boxes_list, axis=1) @@ -608,94 +627,6 @@ class SSDBox(object): return output_boxes, output_scores -@register -@serializable -class AnchorGrid(object): - """Generate anchor grid - - Args: - image_size (int or list): input image size, may be a single integer or - list of [h, w]. Default: 512 - min_level (int): min level of the feature pyramid. Default: 3 - max_level (int): max level of the feature pyramid. Default: 7 - anchor_base_scale: base anchor scale. Default: 4 - num_scales: number of anchor scales. Default: 3 - aspect_ratios: aspect ratios. 
default: [[1, 1], [1.4, 0.7], [0.7, 1.4]] - """ - - def __init__(self, - image_size=512, - min_level=3, - max_level=7, - anchor_base_scale=4, - num_scales=3, - aspect_ratios=[[1, 1], [1.4, 0.7], [0.7, 1.4]]): - super(AnchorGrid, self).__init__() - if isinstance(image_size, Integral): - self.image_size = [image_size, image_size] - else: - self.image_size = image_size - for dim in self.image_size: - assert dim % 2 ** max_level == 0, \ - "image size should be multiple of the max level stride" - self.min_level = min_level - self.max_level = max_level - self.anchor_base_scale = anchor_base_scale - self.num_scales = num_scales - self.aspect_ratios = aspect_ratios - - @property - def base_cell(self): - if not hasattr(self, '_base_cell'): - self._base_cell = self.make_cell() - return self._base_cell - - def make_cell(self): - scales = [2**(i / self.num_scales) for i in range(self.num_scales)] - scales = np.array(scales) - ratios = np.array(self.aspect_ratios) - ws = np.outer(scales, ratios[:, 0]).reshape(-1, 1) - hs = np.outer(scales, ratios[:, 1]).reshape(-1, 1) - anchors = np.hstack((-0.5 * ws, -0.5 * hs, 0.5 * ws, 0.5 * hs)) - return anchors - - def make_grid(self, stride): - cell = self.base_cell * stride * self.anchor_base_scale - x_steps = np.arange(stride // 2, self.image_size[1], stride) - y_steps = np.arange(stride // 2, self.image_size[0], stride) - offset_x, offset_y = np.meshgrid(x_steps, y_steps) - offset_x = offset_x.flatten() - offset_y = offset_y.flatten() - offsets = np.stack((offset_x, offset_y, offset_x, offset_y), axis=-1) - offsets = offsets[:, np.newaxis, :] - return (cell + offsets).reshape(-1, 4) - - def generate(self): - return [ - self.make_grid(2**l) - for l in range(self.min_level, self.max_level + 1) - ] - - def __call__(self): - if not hasattr(self, '_anchor_vars'): - anchor_vars = [] - helper = LayerHelper('anchor_grid') - for idx, l in enumerate(range(self.min_level, self.max_level + 1)): - stride = 2**l - anchors = self.make_grid(stride) - var = helper.create_parameter( - attr=ParamAttr(name='anchors_{}'.format(idx)), - shape=anchors.shape, - dtype='float32', - stop_gradient=True, - default_initializer=NumpyArrayInitializer(anchors)) - anchor_vars.append(var) - var.persistable = True - self._anchor_vars = anchor_vars - - return self._anchor_vars - - @register @serializable class FCOSBox(object): @@ -818,7 +749,6 @@ class TTFBox(object): # batch size is 1 scores_r = paddle.reshape(scores, [cat, -1]) topk_scores, topk_inds = paddle.topk(scores_r, k) - topk_scores, topk_inds = paddle.topk(scores_r, k) topk_ys = topk_inds // width topk_xs = topk_inds % width diff --git a/ppdet/modeling/losses/detr_loss.py b/ppdet/modeling/losses/detr_loss.py index 5a589d4a2b4dae5644dc8b8ecf6f839c68559bdb..e22c5d8b101234e8b1032a540e8c98d290631f02 100644 --- a/ppdet/modeling/losses/detr_loss.py +++ b/ppdet/modeling/losses/detr_loss.py @@ -80,7 +80,7 @@ class DETRLoss(nn.Layer): target_label = target_label.reshape([bs, num_query_objects]) if self.use_focal_loss: target_label = F.one_hot(target_label, - self.num_classes + 1)[:, :, :-1] + self.num_classes + 1)[..., :-1] return { 'loss_class': self.loss_coeff['class'] * sigmoid_focal_loss( logits, target_label, num_gts / num_query_objects) diff --git a/ppdet/modeling/losses/iou_loss.py b/ppdet/modeling/losses/iou_loss.py index 9b8da6c059c90d7bbba7b8e5688aa7556a38ac63..b5cac22e342e633b5c413805623ba4015073b3b1 100644 --- a/ppdet/modeling/losses/iou_loss.py +++ b/ppdet/modeling/losses/iou_loss.py @@ -17,13 +17,13 @@ from __future__ import 
division from __future__ import print_function import numpy as np - +import math import paddle from ppdet.core.workspace import register, serializable from ..bbox_utils import bbox_iou -__all__ = ['IouLoss', 'GIoULoss', 'DIouLoss'] +__all__ = ['IouLoss', 'GIoULoss', 'DIouLoss', 'SIoULoss'] @register @@ -208,3 +208,88 @@ class DIouLoss(GIoULoss): diou = paddle.mean((1 - iouk + ciou_term + diou_term) * iou_weight) return diou * self.loss_weight + + +@register +@serializable +class SIoULoss(GIoULoss): + """ + see https://arxiv.org/pdf/2205.12740.pdf + Args: + loss_weight (float): siou loss weight, default as 1 + eps (float): epsilon to avoid divide by zero, default as 1e-10 + theta (float): default as 4 + reduction (str): Options are "none", "mean" and "sum". default as none + """ + + def __init__(self, loss_weight=1., eps=1e-10, theta=4., reduction='none'): + super(SIoULoss, self).__init__(loss_weight=loss_weight, eps=eps) + self.loss_weight = loss_weight + self.eps = eps + self.theta = theta + self.reduction = reduction + + def __call__(self, pbox, gbox): + x1, y1, x2, y2 = paddle.split(pbox, num_or_sections=4, axis=-1) + x1g, y1g, x2g, y2g = paddle.split(gbox, num_or_sections=4, axis=-1) + + box1 = [x1, y1, x2, y2] + box2 = [x1g, y1g, x2g, y2g] + iou = bbox_iou(box1, box2) + + cx = (x1 + x2) / 2 + cy = (y1 + y2) / 2 + w = x2 - x1 + self.eps + h = y2 - y1 + self.eps + + cxg = (x1g + x2g) / 2 + cyg = (y1g + y2g) / 2 + wg = x2g - x1g + self.eps + hg = y2g - y1g + self.eps + + x2 = paddle.maximum(x1, x2) + y2 = paddle.maximum(y1, y2) + + # A or B + xc1 = paddle.minimum(x1, x1g) + yc1 = paddle.minimum(y1, y1g) + xc2 = paddle.maximum(x2, x2g) + yc2 = paddle.maximum(y2, y2g) + + cw_out = xc2 - xc1 + ch_out = yc2 - yc1 + + ch = paddle.maximum(cy, cyg) - paddle.minimum(cy, cyg) + cw = paddle.maximum(cx, cxg) - paddle.minimum(cx, cxg) + + # angle cost + dist_intersection = paddle.sqrt((cx - cxg)**2 + (cy - cyg)**2) + sin_angle_alpha = ch / dist_intersection + sin_angle_beta = cw / dist_intersection + thred = paddle.pow(paddle.to_tensor(2), 0.5) / 2 + thred.stop_gradient = True + sin_alpha = paddle.where(sin_angle_alpha > thred, sin_angle_beta, + sin_angle_alpha) + angle_cost = paddle.cos(paddle.asin(sin_alpha) * 2 - math.pi / 2) + + # distance cost + gamma = 2 - angle_cost + # gamma.stop_gradient = True + beta_x = ((cxg - cx) / cw_out)**2 + beta_y = ((cyg - cy) / ch_out)**2 + dist_cost = 1 - paddle.exp(-gamma * beta_x) + 1 - paddle.exp(-gamma * + beta_y) + + # shape cost + omega_w = paddle.abs(w - wg) / paddle.maximum(w, wg) + omega_h = paddle.abs(hg - h) / paddle.maximum(h, hg) + omega = (1 - paddle.exp(-omega_w))**self.theta + ( + 1 - paddle.exp(-omega_h))**self.theta + siou_loss = 1 - iou + (omega + dist_cost) / 2 + + if self.reduction == 'mean': + siou_loss = paddle.mean(siou_loss) + elif self.reduction == 'sum': + siou_loss = paddle.sum(siou_loss) + + return siou_loss * self.loss_weight diff --git a/ppdet/modeling/losses/sparsercnn_loss.py b/ppdet/modeling/losses/sparsercnn_loss.py index 2d36b21a2302d6c070728cfb4a213c09232f1853..8b7db92fada6f6e3f3dd7999fda35f6e750a1f12 100644 --- a/ppdet/modeling/losses/sparsercnn_loss.py +++ b/ppdet/modeling/losses/sparsercnn_loss.py @@ -198,7 +198,7 @@ class SparseRCNNLoss(nn.Layer): # Retrieve the matching between the outputs of the last layer and the targets indices = self.matcher(outputs_without_aux, targets) - # Compute the average number of target boxes accross all nodes, for normalization purposes + # Compute the average number of target boxes across 
all nodes, for normalization purposes num_boxes = sum(len(t["labels"]) for t in targets) num_boxes = paddle.to_tensor( [num_boxes], diff --git a/ppdet/modeling/losses/ssd_loss.py b/ppdet/modeling/losses/ssd_loss.py index 62aecc1f33a104531edc2a77015e27847bb92506..2ab94f2b5bbf1f31fe47d186a92ac805cdf6daf3 100644 --- a/ppdet/modeling/losses/ssd_loss.py +++ b/ppdet/modeling/losses/ssd_loss.py @@ -20,8 +20,7 @@ import paddle import paddle.nn as nn import paddle.nn.functional as F from ppdet.core.workspace import register -from ..ops import iou_similarity -from ..bbox_utils import bbox2delta +from ..bbox_utils import iou_similarity, bbox2delta __all__ = ['SSDLoss'] diff --git a/ppdet/modeling/losses/yolo_loss.py b/ppdet/modeling/losses/yolo_loss.py index 657959cd7e55cf43d6362f03e1a4c1204b814c07..1ba05f2c8eae530e44e20d21375f7cf9b9cd1fb0 100644 --- a/ppdet/modeling/losses/yolo_loss.py +++ b/ppdet/modeling/losses/yolo_loss.py @@ -21,7 +21,7 @@ import paddle.nn as nn import paddle.nn.functional as F from ppdet.core.workspace import register -from ..bbox_utils import decode_yolo, xywh2xyxy, iou_similarity +from ..bbox_utils import decode_yolo, xywh2xyxy, batch_iou_similarity __all__ = ['YOLOv3Loss'] @@ -80,7 +80,7 @@ class YOLOv3Loss(nn.Layer): gwh = gbox[:, :, 0:2] + gbox[:, :, 2:4] * 0.5 gbox = paddle.concat([gxy, gwh], axis=-1) - iou = iou_similarity(pbox, gbox) + iou = batch_iou_similarity(pbox, gbox) iou.stop_gradient = True iou_max = iou.max(2) # [N, M1] iou_mask = paddle.cast(iou_max <= self.ignore_thresh, dtype=pbox.dtype) diff --git a/ppdet/modeling/mot/matching/__init__.py b/ppdet/modeling/mot/matching/__init__.py index 54c6680f79f16247c562a9da1024dd3e1de4c57f..f6a88c5673a50452415b1f86f7b18bac12297f49 100644 --- a/ppdet/modeling/mot/matching/__init__.py +++ b/ppdet/modeling/mot/matching/__init__.py @@ -14,6 +14,8 @@ from . import jde_matching from . import deepsort_matching +from . 
import ocsort_matching from .jde_matching import * from .deepsort_matching import * +from .ocsort_matching import * diff --git a/ppdet/modeling/mot/matching/jde_matching.py b/ppdet/modeling/mot/matching/jde_matching.py index e9c40dba4d3f2a82f8138229ff20b6d27cc1a0e5..3b1cf02edd75cb960e433926274b761d49136033 100644 --- a/ppdet/modeling/mot/matching/jde_matching.py +++ b/ppdet/modeling/mot/matching/jde_matching.py @@ -15,7 +15,14 @@ This code is based on https://github.com/Zhongdao/Towards-Realtime-MOT/blob/master/tracker/matching.py """ -import lap +try: + import lap +except: + print( + 'Warning: Unable to use JDE/FairMOT/ByteTrack, please install lap, for example: `pip install lap`, see https://github.com/gatagat/lap' + ) + pass + import scipy import numpy as np from scipy.spatial.distance import cdist @@ -26,7 +33,7 @@ warnings.filterwarnings("ignore") __all__ = [ 'merge_matches', 'linear_assignment', - 'cython_bbox_ious', + 'bbox_ious', 'iou_distance', 'embedding_distance', 'fuse_motion', @@ -53,6 +60,12 @@ def merge_matches(m1, m2, shape): def linear_assignment(cost_matrix, thresh): + try: + import lap + except Exception as e: + raise RuntimeError( + 'Unable to use JDE/FairMOT/ByteTrack, please install lap, for example: `pip install lap`, see https://github.com/gatagat/lap' + ) if cost_matrix.size == 0: return np.empty( (0, 2), dtype=int), tuple(range(cost_matrix.shape[0])), tuple( @@ -68,22 +81,28 @@ def linear_assignment(cost_matrix, thresh): return matches, unmatched_a, unmatched_b -def cython_bbox_ious(atlbrs, btlbrs): - ious = np.zeros((len(atlbrs), len(btlbrs)), dtype=np.float) - if ious.size == 0: +def bbox_ious(atlbrs, btlbrs): + boxes = np.ascontiguousarray(atlbrs, dtype=np.float64) + query_boxes = np.ascontiguousarray(btlbrs, dtype=np.float64) + N = boxes.shape[0] + K = query_boxes.shape[0] + ious = np.zeros((N, K), dtype=boxes.dtype) + if N * K == 0: return ious - try: - import cython_bbox - except Exception as e: - print('cython_bbox not found, please install cython_bbox.' - 'for example: `pip install cython_bbox`.') - raise e - - ious = cython_bbox.bbox_overlaps( - np.ascontiguousarray( - atlbrs, dtype=np.float), - np.ascontiguousarray( - btlbrs, dtype=np.float)) + + for k in range(K): + box_area = ((query_boxes[k, 2] - query_boxes[k, 0] + 1) * + (query_boxes[k, 3] - query_boxes[k, 1] + 1)) + for n in range(N): + iw = (min(boxes[n, 2], query_boxes[k, 2]) - max( + boxes[n, 0], query_boxes[k, 0]) + 1) + if iw > 0: + ih = (min(boxes[n, 3], query_boxes[k, 3]) - max( + boxes[n, 1], query_boxes[k, 1]) + 1) + if ih > 0: + ua = float((boxes[n, 2] - boxes[n, 0] + 1) * (boxes[ n, 3] - boxes[n, 1] + 1) + box_area - iw * ih) + ious[n, k] = iw * ih / ua return ious @@ -98,7 +117,7 @@ def iou_distance(atracks, btracks): else: atlbrs = [track.tlbr for track in atracks] btlbrs = [track.tlbr for track in btracks] - _ious = cython_bbox_ious(atlbrs, btlbrs) + _ious = bbox_ious(atlbrs, btlbrs) cost_matrix = 1 - _ious return cost_matrix diff --git a/ppdet/modeling/mot/matching/ocsort_matching.py b/ppdet/modeling/mot/matching/ocsort_matching.py new file mode 100644 index 0000000000000000000000000000000000000000..a32d76155b985f03b8ecbaedd88df70eaa9fd0fa --- /dev/null +++ b/ppdet/modeling/mot/matching/ocsort_matching.py @@ -0,0 +1,124 @@ +# Copyright (c) 2022 PaddlePaddle Authors. All Rights Reserved. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +""" +This code is based on https://github.com/noahcao/OC_SORT/blob/master/trackers/ocsort_tracker/association.py +""" + +import os +import numpy as np + + +def iou_batch(bboxes1, bboxes2): + bboxes2 = np.expand_dims(bboxes2, 0) + bboxes1 = np.expand_dims(bboxes1, 1) + + xx1 = np.maximum(bboxes1[..., 0], bboxes2[..., 0]) + yy1 = np.maximum(bboxes1[..., 1], bboxes2[..., 1]) + xx2 = np.minimum(bboxes1[..., 2], bboxes2[..., 2]) + yy2 = np.minimum(bboxes1[..., 3], bboxes2[..., 3]) + w = np.maximum(0., xx2 - xx1) + h = np.maximum(0., yy2 - yy1) + area = w * h + iou_matrix = area / ((bboxes1[..., 2] - bboxes1[..., 0]) * + (bboxes1[..., 3] - bboxes1[..., 1]) + + (bboxes2[..., 2] - bboxes2[..., 0]) * + (bboxes2[..., 3] - bboxes2[..., 1]) - area) + return iou_matrix + + +def speed_direction_batch(dets, tracks): + tracks = tracks[..., np.newaxis] + CX1, CY1 = (dets[:, 0] + dets[:, 2]) / 2.0, (dets[:, 1] + dets[:, 3]) / 2.0 + CX2, CY2 = (tracks[:, 0] + tracks[:, 2]) / 2.0, ( + tracks[:, 1] + tracks[:, 3]) / 2.0 + dx = CX1 - CX2 + dy = CY1 - CY2 + norm = np.sqrt(dx**2 + dy**2) + 1e-6 + dx = dx / norm + dy = dy / norm + return dy, dx + + +def linear_assignment(cost_matrix): + try: + import lap + _, x, y = lap.lapjv(cost_matrix, extend_cost=True) + return np.array([[y[i], i] for i in x if i >= 0]) + except ImportError: + from scipy.optimize import linear_sum_assignment + x, y = linear_sum_assignment(cost_matrix) + return np.array(list(zip(x, y))) + + +def associate(detections, trackers, iou_threshold, velocities, previous_obs, + vdc_weight): + if (len(trackers) == 0): + return np.empty( + (0, 2), dtype=int), np.arange(len(detections)), np.empty( + (0, 5), dtype=int) + + Y, X = speed_direction_batch(detections, previous_obs) + inertia_Y, inertia_X = velocities[:, 0], velocities[:, 1] + inertia_Y = np.repeat(inertia_Y[:, np.newaxis], Y.shape[1], axis=1) + inertia_X = np.repeat(inertia_X[:, np.newaxis], X.shape[1], axis=1) + diff_angle_cos = inertia_X * X + inertia_Y * Y + diff_angle_cos = np.clip(diff_angle_cos, a_min=-1, a_max=1) + diff_angle = np.arccos(diff_angle_cos) + diff_angle = (np.pi / 2.0 - np.abs(diff_angle)) / np.pi + + valid_mask = np.ones(previous_obs.shape[0]) + valid_mask[np.where(previous_obs[:, 4] < 0)] = 0 + + iou_matrix = iou_batch(detections, trackers) + scores = np.repeat( + detections[:, -1][:, np.newaxis], trackers.shape[0], axis=1) + # iou_matrix = iou_matrix * scores # a trick that sometimes works, we don't encourage this + valid_mask = np.repeat(valid_mask[:, np.newaxis], X.shape[1], axis=1) + + angle_diff_cost = (valid_mask * diff_angle) * vdc_weight + angle_diff_cost = angle_diff_cost.T + angle_diff_cost = angle_diff_cost * scores + + if min(iou_matrix.shape) > 0: + a = (iou_matrix > iou_threshold).astype(np.int32) + if a.sum(1).max() == 1 and a.sum(0).max() == 1: + matched_indices = np.stack(np.where(a), axis=1) + else: + matched_indices = linear_assignment(-(iou_matrix + angle_diff_cost)) + else: + matched_indices = np.empty(shape=(0, 2)) + + unmatched_detections = [] + for d, det in enumerate(detections): + if (d not in matched_indices[:, 0]): + 
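+            # detection indices absent from matched_indices[:, 0] were never matched in the first pass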
unmatched_detections.append(d) + unmatched_trackers = [] + for t, trk in enumerate(trackers): + if (t not in matched_indices[:, 1]): + unmatched_trackers.append(t) + + # filter out matched with low IOU + matches = [] + for m in matched_indices: + if (iou_matrix[m[0], m[1]] < iou_threshold): + unmatched_detections.append(m[0]) + unmatched_trackers.append(m[1]) + else: + matches.append(m.reshape(1, 2)) + if (len(matches) == 0): + matches = np.empty((0, 2), dtype=int) + else: + matches = np.concatenate(matches, axis=0) + + return matches, np.array(unmatched_detections), np.array(unmatched_trackers) diff --git a/ppdet/modeling/mot/tracker/__init__.py b/ppdet/modeling/mot/tracker/__init__.py index b74593b4126d878cd655326e58369f5b6f76a2ae..03a5dd0a169203b86edbc6c81a44a095ebe9b3cc 100644 --- a/ppdet/modeling/mot/tracker/__init__.py +++ b/ppdet/modeling/mot/tracker/__init__.py @@ -16,8 +16,10 @@ from . import base_jde_tracker from . import base_sde_tracker from . import jde_tracker from . import deepsort_tracker +from . import ocsort_tracker from .base_jde_tracker import * from .base_sde_tracker import * from .jde_tracker import * from .deepsort_tracker import * +from .ocsort_tracker import * diff --git a/ppdet/modeling/mot/tracker/jde_tracker.py b/ppdet/modeling/mot/tracker/jde_tracker.py index 5d2dcb9a018e3c0a26af837c9abd0a965fdbc7df..9796e6ceb328d5bcbec256aedb6654e53d1bc850 100644 --- a/ppdet/modeling/mot/tracker/jde_tracker.py +++ b/ppdet/modeling/mot/tracker/jde_tracker.py @@ -44,7 +44,7 @@ class JDETracker(object): track_buffer (int): buffer for tracker min_box_area (int): min box area to filter out low quality boxes vertical_ratio (float): w/h, the vertical ratio of the bbox to filter - bad results. If set <0 means no need to filter bboxes,usually set + bad results. If set <= 0 means no need to filter bboxes,usually set 1.6 for pedestrian tracking. tracked_thresh (float): linear assignment threshold of tracked stracks and detections @@ -70,8 +70,8 @@ class JDETracker(object): num_classes=1, det_thresh=0.3, track_buffer=30, - min_box_area=200, - vertical_ratio=1.6, + min_box_area=0, + vertical_ratio=0, tracked_thresh=0.7, r_tracked_thresh=0.5, unconfirmed_thresh=0.7, @@ -122,7 +122,7 @@ class JDETracker(object): Return: output_stracks_dict (dict(list)): The list contains information - regarding the online_tracklets for the recieved image tensor. + regarding the online_tracklets for the received image tensor. 
""" self.frame_id += 1 if self.frame_id == 1: @@ -167,9 +167,8 @@ class JDETracker(object): detections = [ STrack( STrack.tlbr_to_tlwh(tlbrs[2:6]), tlbrs[1], cls_id, - 30, temp_feat) - for (tlbrs, temp_feat - ) in zip(pred_dets_cls, pred_embs_cls) + 30, temp_feat) for (tlbrs, temp_feat) in + zip(pred_dets_cls, pred_embs_cls) ] else: detections = [] @@ -244,15 +243,13 @@ class JDETracker(object): for tlbrs in pred_dets_cls_second ] else: - pred_embs_cls_second = pred_embs_dict[cls_id][inds_second] + pred_embs_cls_second = pred_embs_dict[cls_id][ + inds_second] detections_second = [ STrack( - STrack.tlbr_to_tlwh(tlbrs[2:6]), - tlbrs[1], - cls_id, - 30, - temp_feat) - for (tlbrs, temp_feat) in zip(pred_dets_cls_second, pred_embs_cls_second) + STrack.tlbr_to_tlwh(tlbrs[2:6]), tlbrs[1], + cls_id, 30, temp_feat) for (tlbrs, temp_feat) in + zip(pred_dets_cls_second, pred_embs_cls_second) ] else: detections_second = [] diff --git a/ppdet/modeling/mot/tracker/ocsort_tracker.py b/ppdet/modeling/mot/tracker/ocsort_tracker.py new file mode 100644 index 0000000000000000000000000000000000000000..350e62c9cba46d3cd18d4bad97cc8b4dd0a8bdd7 --- /dev/null +++ b/ppdet/modeling/mot/tracker/ocsort_tracker.py @@ -0,0 +1,369 @@ +# Copyright (c) 2022 PaddlePaddle Authors. All Rights Reserved. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +""" +This code is based on https://github.com/noahcao/OC_SORT/blob/master/trackers/ocsort_tracker/ocsort.py +""" + +import numpy as np +try: + from filterpy.kalman import KalmanFilter +except: + print( + 'Warning: Unable to use OC-SORT, please install filterpy, for example: `pip install filterpy`, see https://github.com/rlabbe/filterpy' + ) + pass + +from ..matching.ocsort_matching import associate, linear_assignment, iou_batch +from ppdet.core.workspace import register, serializable + + +def k_previous_obs(observations, cur_age, k): + if len(observations) == 0: + return [-1, -1, -1, -1, -1] + for i in range(k): + dt = k - i + if cur_age - dt in observations: + return observations[cur_age - dt] + max_age = max(observations.keys()) + return observations[max_age] + + +def convert_bbox_to_z(bbox): + """ + Takes a bounding box in the form [x1,y1,x2,y2] and returns z in the form + [x,y,s,r] where x,y is the centre of the box and s is the scale/area and r is + the aspect ratio + """ + w = bbox[2] - bbox[0] + h = bbox[3] - bbox[1] + x = bbox[0] + w / 2. + y = bbox[1] + h / 2. 
+ s = w * h # scale is just area + r = w / float(h + 1e-6) + return np.array([x, y, s, r]).reshape((4, 1)) + + +def convert_x_to_bbox(x, score=None): + """ + Takes a bounding box in the centre form [x,y,s,r] and returns it in the form + [x1,y1,x2,y2] where x1,y1 is the top left and x2,y2 is the bottom right + """ + w = np.sqrt(x[2] * x[3]) + h = x[2] / w + if (score is None): + return np.array( + [x[0] - w / 2., x[1] - h / 2., x[0] + w / 2., + x[1] + h / 2.]).reshape((1, 4)) + else: + score = np.array([score]) + return np.array([ + x[0] - w / 2., x[1] - h / 2., x[0] + w / 2., x[1] + h / 2., score + ]).reshape((1, 5)) + + +def speed_direction(bbox1, bbox2): + cx1, cy1 = (bbox1[0] + bbox1[2]) / 2.0, (bbox1[1] + bbox1[3]) / 2.0 + cx2, cy2 = (bbox2[0] + bbox2[2]) / 2.0, (bbox2[1] + bbox2[3]) / 2.0 + speed = np.array([cy2 - cy1, cx2 - cx1]) + norm = np.sqrt((cy2 - cy1)**2 + (cx2 - cx1)**2) + 1e-6 + return speed / norm + + +class KalmanBoxTracker(object): + """ + This class represents the internal state of an individual tracked object, observed as a bbox. + + Args: + bbox (np.array): bbox in [x1,y1,x2,y2,score] format. + delta_t (int): delta_t of previous observation + """ + count = 0 + + def __init__(self, bbox, delta_t=3): + try: + from filterpy.kalman import KalmanFilter + except Exception as e: + raise RuntimeError( + 'Unable to use OC-SORT, please install filterpy, for example: `pip install filterpy`, see https://github.com/rlabbe/filterpy' + ) + self.kf = KalmanFilter(dim_x=7, dim_z=4) + self.kf.F = np.array([[1, 0, 0, 0, 1, 0, 0], [0, 1, 0, 0, 0, 1, 0], + [0, 0, 1, 0, 0, 0, 1], [0, 0, 0, 1, 0, 0, 0], + [0, 0, 0, 0, 1, 0, 0], [0, 0, 0, 0, 0, 1, 0], + [0, 0, 0, 0, 0, 0, 1]]) + self.kf.H = np.array([[1, 0, 0, 0, 0, 0, 0], [0, 1, 0, 0, 0, 0, 0], + [0, 0, 1, 0, 0, 0, 0], [0, 0, 0, 1, 0, 0, 0]]) + self.kf.R[2:, 2:] *= 10. + self.kf.P[4:, 4:] *= 1000. + # give high uncertainty to the unobservable initial velocities + self.kf.P *= 10. + self.kf.Q[-1, -1] *= 0.01 + self.kf.Q[4:, 4:] *= 0.01 + + self.score = bbox[4] + self.kf.x[:4] = convert_bbox_to_z(bbox) + self.time_since_update = 0 + self.id = KalmanBoxTracker.count + KalmanBoxTracker.count += 1 + self.history = [] + self.hits = 0 + self.hit_streak = 0 + self.age = 0 + """ + NOTE: [-1,-1,-1,-1,-1] is a compromising placeholder for non-observation status, the same as the return of + function k_previous_obs. It is ugly and I do not like it. But to support generating the observation array in a + fast and unified way, as in k_observations = np.array([k_previous_obs(...)]) below, let's bear it for now. + """ + self.last_observation = np.array([-1, -1, -1, -1, -1]) # placeholder + self.observations = dict() + self.history_observations = [] + self.velocity = None + self.delta_t = delta_t + + def update(self, bbox): + """ + Updates the state vector with the observed bbox. + """ + if bbox is not None: + if self.last_observation.sum() >= 0: # a previous observation exists + previous_box = None + for i in range(self.delta_t): + dt = self.delta_t - i + if self.age - dt in self.observations: + previous_box = self.observations[self.age - dt] + break + if previous_box is None: + previous_box = self.last_observation + """ + Estimate the track speed direction with observations \Delta t steps away + """ + self.velocity = speed_direction(previous_box, bbox) + """ + Insert new observations. This is an ugly way to maintain both self.observations + and self.history_observations. Bear it for the moment. 
+ """ + self.last_observation = bbox + self.observations[self.age] = bbox + self.history_observations.append(bbox) + + self.time_since_update = 0 + self.history = [] + self.hits += 1 + self.hit_streak += 1 + self.kf.update(convert_bbox_to_z(bbox)) + else: + self.kf.update(bbox) + + def predict(self): + """ + Advances the state vector and returns the predicted bounding box estimate. + """ + if ((self.kf.x[6] + self.kf.x[2]) <= 0): + self.kf.x[6] *= 0.0 + + self.kf.predict() + self.age += 1 + if (self.time_since_update > 0): + self.hit_streak = 0 + self.time_since_update += 1 + self.history.append(convert_x_to_bbox(self.kf.x, score=self.score)) + return self.history[-1] + + def get_state(self): + return convert_x_to_bbox(self.kf.x, score=self.score) + + +@register +@serializable +class OCSORTTracker(object): + """ + OCSORT tracker, support single class + + Args: + det_thresh (float): threshold of detection score + max_age (int): maximum number of missed misses before a track is deleted + min_hits (int): minimum hits for associate + iou_threshold (float): iou threshold for associate + delta_t (int): delta_t of previous observation + inertia (float): vdc_weight of angle_diff_cost for associate + vertical_ratio (float): w/h, the vertical ratio of the bbox to filter + bad results. If set <= 0 means no need to filter bboxes,usually set + 1.6 for pedestrian tracking. + min_box_area (int): min box area to filter out low quality boxes + use_byte (bool): Whether use ByteTracker, default False + """ + + def __init__(self, + det_thresh=0.6, + max_age=30, + min_hits=3, + iou_threshold=0.3, + delta_t=3, + inertia=0.2, + vertical_ratio=-1, + min_box_area=0, + use_byte=False): + self.det_thresh = det_thresh + self.max_age = max_age + self.min_hits = min_hits + self.iou_threshold = iou_threshold + self.delta_t = delta_t + self.inertia = inertia + self.vertical_ratio = vertical_ratio + self.min_box_area = min_box_area + self.use_byte = use_byte + + self.trackers = [] + self.frame_count = 0 + KalmanBoxTracker.count = 0 + + def update(self, pred_dets, pred_embs=None): + """ + Args: + pred_dets (np.array): Detection results of the image, the shape is + [N, 6], means 'cls_id, score, x0, y0, x1, y1'. + pred_embs (np.array): Embedding results of the image, the shape is + [N, 128] or [N, 512], default as None. + + Return: + tracking boxes (np.array): [M, 6], means 'x0, y0, x1, y1, score, id'. + """ + if pred_dets is None: + return np.empty((0, 6)) + + self.frame_count += 1 + + bboxes = pred_dets[:, 2:] + scores = pred_dets[:, 1:2] + dets = np.concatenate((bboxes, scores), axis=1) + scores = scores.squeeze(-1) + + inds_low = scores > 0.1 + inds_high = scores < self.det_thresh + inds_second = np.logical_and(inds_low, inds_high) + # self.det_thresh > score > 0.1, for second matching + dets_second = dets[inds_second] # detections for second matching + remain_inds = scores > self.det_thresh + dets = dets[remain_inds] + + # get predicted locations from existing trackers. 
+ trks = np.zeros((len(self.trackers), 5)) + to_del = [] + ret = [] + for t, trk in enumerate(trks): + pos = self.trackers[t].predict()[0] + trk[:] = [pos[0], pos[1], pos[2], pos[3], 0] + if np.any(np.isnan(pos)): + to_del.append(t) + trks = np.ma.compress_rows(np.ma.masked_invalid(trks)) + for t in reversed(to_del): + self.trackers.pop(t) + + velocities = np.array([ + trk.velocity if trk.velocity is not None else np.array((0, 0)) + for trk in self.trackers + ]) + last_boxes = np.array([trk.last_observation for trk in self.trackers]) + k_observations = np.array([ + k_previous_obs(trk.observations, trk.age, self.delta_t) + for trk in self.trackers + ]) + """ + First round of association + """ + matched, unmatched_dets, unmatched_trks = associate( + dets, trks, self.iou_threshold, velocities, k_observations, + self.inertia) + for m in matched: + self.trackers[m[1]].update(dets[m[0], :]) + """ + Second round of association by OCR + """ + # BYTE association + if self.use_byte and len(dets_second) > 0 and unmatched_trks.shape[ + 0] > 0: + u_trks = trks[unmatched_trks] + iou_left = iou_batch( + dets_second, + u_trks) # iou between low score detections and unmatched tracks + iou_left = np.array(iou_left) + if iou_left.max() > self.iou_threshold: + """ + NOTE: by using a lower threshold, e.g., self.iou_threshold - 0.1, you may + get a higher performance especially on MOT17/MOT20 datasets. 
But we keep it + uniform here for simplicity + """ + rematched_indices = linear_assignment(-iou_left) + to_remove_det_indices = [] + to_remove_trk_indices = [] + for m in rematched_indices: + det_ind, trk_ind = unmatched_dets[m[0]], unmatched_trks[m[ + 1]] + if iou_left[m[0], m[1]] < self.iou_threshold: + continue + self.trackers[trk_ind].update(dets[det_ind, :]) + to_remove_det_indices.append(det_ind) + to_remove_trk_indices.append(trk_ind) + unmatched_dets = np.setdiff1d(unmatched_dets, + np.array(to_remove_det_indices)) + unmatched_trks = np.setdiff1d(unmatched_trks, + np.array(to_remove_trk_indices)) + + for m in unmatched_trks: + self.trackers[m].update(None) + + # create and initialise new trackers for unmatched detections + for i in unmatched_dets: + trk = KalmanBoxTracker(dets[i, :], delta_t=self.delta_t) + self.trackers.append(trk) + i = len(self.trackers) + for trk in reversed(self.trackers): + if trk.last_observation.sum() < 0: + d = trk.get_state()[0] + else: + d = trk.last_observation # tlbr + score + if (trk.time_since_update < 1) and ( + trk.hit_streak >= self.min_hits or + self.frame_count <= self.min_hits): + # +1 as MOT benchmark requires positive + ret.append(np.concatenate((d, [trk.id + 1])).reshape(1, -1)) + i -= 1 + # remove dead tracklet + if (trk.time_since_update > self.max_age): + self.trackers.pop(i) + if (len(ret) > 0): + return np.concatenate(ret) + return np.empty((0, 6)) diff --git a/ppdet/modeling/necks/hrfpn.py b/ppdet/modeling/necks/hrfpn.py index eb4768b8ecba7c11dcd834265e1289db8d6ec7b0..5c45c9974b3bd213747cc7b6f0f5f670f38c61bf 100644 --- a/ppdet/modeling/necks/hrfpn.py +++ b/ppdet/modeling/necks/hrfpn.py @@ -37,7 +37,8 @@ class HRFPN(nn.Layer): out_channel=256, share_conv=False, extra_stage=1, - spatial_scales=[1. / 4, 1. / 8, 1. / 16, 1. / 32]): + spatial_scales=[1. / 4, 1. / 8, 1. / 16, 1. / 32], + use_bias=False): super(HRFPN, self).__init__() in_channel = sum(in_channels) self.in_channel = in_channel @@ -47,12 +48,14 @@ class HRFPN(nn.Layer): spatial_scales = spatial_scales + [spatial_scales[-1] / 2.] self.spatial_scales = spatial_scales self.num_out = len(self.spatial_scales) + self.use_bias = use_bias + bias_attr = False if use_bias is False else None self.reduction = nn.Conv2D( in_channels=in_channel, out_channels=out_channel, kernel_size=1, - bias_attr=False) + bias_attr=bias_attr) if share_conv: self.fpn_conv = nn.Conv2D( @@ -60,7 +63,7 @@ class HRFPN(nn.Layer): out_channels=out_channel, kernel_size=3, padding=1, - bias_attr=False) + bias_attr=bias_attr) else: self.fpn_conv = [] for i in range(self.num_out): @@ -72,7 +75,7 @@ class HRFPN(nn.Layer): out_channels=out_channel, kernel_size=3, padding=1, - bias_attr=False)) + bias_attr=bias_attr)) self.fpn_conv.append(conv) def forward(self, body_feats): diff --git a/ppdet/modeling/necks/yolo_fpn.py b/ppdet/modeling/necks/yolo_fpn.py index bc06b14dba4fb5a2e5570162a6151927cb2df8da..79f4cead360f872233f48be739e2357d4c9e1121 100644 --- a/ppdet/modeling/necks/yolo_fpn.py +++ b/ppdet/modeling/necks/yolo_fpn.py @@ -1,15 +1,15 @@ -# Copyright (c) 2020 PaddlePaddle Authors. All Rights Reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. 
-# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and +# Copyright (c) 2020 PaddlePaddle Authors. All Rights Reserved. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and # limitations under the License. import paddle @@ -17,10 +17,12 @@ import paddle.nn as nn import paddle.nn.functional as F from ppdet.core.workspace import register, serializable from ppdet.modeling.layers import DropBlock +from ppdet.modeling.ops import get_act_fn from ..backbones.darknet import ConvBNLayer from ..shape_spec import ShapeSpec +from ..backbones.csp_darknet import BaseConv, DWConv, CSPLayer -__all__ = ['YOLOv3FPN', 'PPYOLOFPN', 'PPYOLOTinyFPN', 'PPYOLOPAN'] +__all__ = ['YOLOv3FPN', 'PPYOLOFPN', 'PPYOLOTinyFPN', 'PPYOLOPAN', 'YOLOCSPPAN'] def add_coord(x, data_format): @@ -986,3 +988,112 @@ class PPYOLOPAN(nn.Layer): @property def out_shape(self): return [ShapeSpec(channels=c) for c in self._out_channels] + + +@register +@serializable +class YOLOCSPPAN(nn.Layer): + """ + YOLO CSP-PAN, used in YOLOv5 and YOLOX. 
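+    A top-down FPN pass (1x1 lateral convs followed by CSP blocks) feeds a
+    bottom-up PAN pass; per-level channel counts are preserved, so the
+    output shapes mirror in_channels.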
+    """
+    __shared__ = ['depth_mult', 'data_format', 'act', 'trt']
+
+    def __init__(self,
+                 depth_mult=1.0,
+                 in_channels=[256, 512, 1024],
+                 depthwise=False,
+                 data_format='NCHW',
+                 act='silu',
+                 trt=False):
+        super(YOLOCSPPAN, self).__init__()
+        self.in_channels = in_channels
+        self._out_channels = in_channels
+        Conv = DWConv if depthwise else BaseConv
+
+        self.data_format = data_format
+        act = get_act_fn(
+            act, trt=trt) if act is None or isinstance(act,
+                                                       (str, dict)) else act
+        self.upsample = nn.Upsample(scale_factor=2, mode="nearest")
+
+        # top-down fpn
+        self.lateral_convs = nn.LayerList()
+        self.fpn_blocks = nn.LayerList()
+        for idx in range(len(in_channels) - 1, 0, -1):
+            self.lateral_convs.append(
+                BaseConv(
+                    int(in_channels[idx]),
+                    int(in_channels[idx - 1]),
+                    1,
+                    1,
+                    act=act))
+            self.fpn_blocks.append(
+                CSPLayer(
+                    int(in_channels[idx - 1] * 2),
+                    int(in_channels[idx - 1]),
+                    round(3 * depth_mult),
+                    shortcut=False,
+                    depthwise=depthwise,
+                    act=act))
+
+        # bottom-up pan
+        self.downsample_convs = nn.LayerList()
+        self.pan_blocks = nn.LayerList()
+        for idx in range(len(in_channels) - 1):
+            self.downsample_convs.append(
+                Conv(
+                    int(in_channels[idx]),
+                    int(in_channels[idx]),
+                    3,
+                    stride=2,
+                    act=act))
+            self.pan_blocks.append(
+                CSPLayer(
+                    int(in_channels[idx] * 2),
+                    int(in_channels[idx + 1]),
+                    round(3 * depth_mult),
+                    shortcut=False,
+                    depthwise=depthwise,
+                    act=act))
+
+    def forward(self, feats, for_mot=False):
+        assert len(feats) == len(self.in_channels)
+
+        # top-down fpn
+        inner_outs = [feats[-1]]
+        for idx in range(len(self.in_channels) - 1, 0, -1):
+            feat_high = inner_outs[0]
+            feat_low = feats[idx - 1]
+            feat_high = self.lateral_convs[len(self.in_channels) - 1 - idx](
+                feat_high)
+            inner_outs[0] = feat_high
+
+            upsample_feat = F.interpolate(
+                feat_high,
+                scale_factor=2.,
+                mode="nearest",
+                data_format=self.data_format)
+            inner_out = self.fpn_blocks[len(self.in_channels) - 1 - idx](
+                paddle.concat(
+                    [upsample_feat, feat_low], axis=1))
+            inner_outs.insert(0, inner_out)
+
+        # bottom-up pan
+        outs = [inner_outs[0]]
+        for idx in range(len(self.in_channels) - 1):
+            feat_low = outs[-1]
+            feat_high = inner_outs[idx + 1]
+            downsample_feat = self.downsample_convs[idx](feat_low)
+            out = self.pan_blocks[idx](paddle.concat(
+                [downsample_feat, feat_high], axis=1))
+            outs.append(out)
+
+        return outs
+
+    @classmethod
+    def from_config(cls, cfg, input_shape):
+        return {'in_channels': [i.channels for i in input_shape], }
+
+    @property
+    def out_shape(self):
+        return [ShapeSpec(channels=c) for c in self._out_channels]
diff --git a/ppdet/modeling/ops.py b/ppdet/modeling/ops.py
index 52a4f33962a49c45a3597c372842d965a0023166..567c26d7233e561118e22ca3a8e7d74f7b7cf686 100644
--- a/ppdet/modeling/ops.py
+++ b/ppdet/modeling/ops.py
@@ -1,15 +1,15 @@
-# Copyright (c) 2020 PaddlePaddle Authors. All Rights Reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
+# Copyright (c) 2020 PaddlePaddle Authors. All Rights Reserved.
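# A hedged usage sketch of the YOLOCSPPAN neck defined above; the feature
# shapes and the 0.33 depth multiplier are assumed purely for illustration,
# and the import path follows this diff.
import paddle
from ppdet.modeling.necks.yolo_fpn import YOLOCSPPAN

neck = YOLOCSPPAN(depth_mult=0.33, in_channels=[128, 256, 512])
feats = [
    paddle.rand([1, 128, 80, 80]),  # stride 8
    paddle.rand([1, 256, 40, 40]),  # stride 16
    paddle.rand([1, 512, 20, 20]),  # stride 32
]
outs = neck(feats)
# top-down FPN then bottom-up PAN; the three levels keep their spatial
# sizes and channel counts on the way out
assert [o.shape[1] for o in outs] == [128, 256, 512]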
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
 # limitations under the License.

 import paddle
@@ -17,18 +17,23 @@ import paddle.nn.functional as F
 import paddle.nn as nn
 from paddle import ParamAttr
 from paddle.regularizer import L2Decay
+from paddle import _C_ops

-from paddle.fluid.framework import Variable, in_dygraph_mode
-from paddle.fluid import core
-from paddle.fluid.dygraph import parallel_helper
-from paddle.fluid.layer_helper import LayerHelper
-from paddle.fluid.data_feeder import check_variable_and_dtype, check_type, check_dtype
+from paddle import in_dynamic_mode
+from paddle.common_ops_import import Variable, LayerHelper, check_variable_and_dtype, check_type, check_dtype

 __all__ = [
-    'roi_pool', 'roi_align', 'prior_box', 'generate_proposals',
-    'iou_similarity', 'box_coder', 'yolo_box', 'multiclass_nms',
-    'distribute_fpn_proposals', 'collect_fpn_proposals', 'matrix_nms',
-    'batch_norm', 'mish', 'swish', 'identity'
+    'prior_box',
+    'generate_proposals',
+    'box_coder',
+    'multiclass_nms',
+    'distribute_fpn_proposals',
+    'matrix_nms',
+    'batch_norm',
+    'mish',
+    'silu',
+    'swish',
+    'identity',
 ]

@@ -40,13 +45,17 @@ def mish(x):
     return F.mish(x) if hasattr(F, 'mish') else x * F.tanh(F.softplus(x))


+def silu(x):
+    return F.silu(x)
+
+
 def swish(x):
     return x * F.sigmoid(x)


-TRT_ACT_SPEC = {'swish': swish}
+TRT_ACT_SPEC = {'swish': swish, 'silu': swish}

-ACT_SPEC = {'mish': mish}
+ACT_SPEC = {'mish': mish, 'silu': silu}


 def get_act_fn(act=None, trt=False):
@@ -106,392 +115,6 @@ def batch_norm(ch,
     return norm_layer


-@paddle.jit.not_to_static
-def roi_pool(input,
-             rois,
-             output_size,
-             spatial_scale=1.0,
-             rois_num=None,
-             name=None):
-    """
-
-    This operator implements the roi_pooling layer.
-    Region of interest pooling (also known as RoI pooling) is to perform max pooling on inputs of nonuniform sizes to obtain fixed-size feature maps (e.g. 7*7).
-
-    The operator has three steps:
-
-    1. Dividing each region proposal into equal-sized sections with output_size(h, w);
-    2. Finding the largest value in each section;
-    3. Copying these max values to the output buffer.
-
-    For more information, please refer to https://stackoverflow.com/questions/43430056/what-is-roi-layer-in-fast-rcnn
-
-    Args:
-        input (Tensor): Input feature, 4D-Tensor with the shape of [N,C,H,W],
-            where N is the batch size, C is the input channel, H is Height, W is weight.
-            The data type is float32 or float64.
-        rois (Tensor): ROIs (Regions of Interest) to pool over.
-            2D-Tensor or 2D-LoDTensor with the shape of [num_rois,4], the lod level is 1.
-            Given as [[x1, y1, x2, y2], ...], (x1, y1) is the top left coordinates,
-            and (x2, y2) is the bottom right coordinates.
-        output_size (int or tuple[int, int]): The pooled output size(h, w), data type is int32. If int, h and w are both equal to output_size.
-        spatial_scale (float, optional): Multiplicative spatial scale factor to translate ROI coords from their input scale to the scale used when pooling. Default: 1.0
-        rois_num (Tensor): The number of RoIs in each image.
Default: None - name(str, optional): For detailed information, please refer - to :ref:`api_guide_Name`. Usually name is no need to set and - None by default. - - - Returns: - Tensor: The pooled feature, 4D-Tensor with the shape of [num_rois, C, output_size[0], output_size[1]]. - - - Examples: - - .. code-block:: python - - import paddle - from ppdet.modeling import ops - paddle.enable_static() - - x = paddle.static.data( - name='data', shape=[None, 256, 32, 32], dtype='float32') - rois = paddle.static.data( - name='rois', shape=[None, 4], dtype='float32') - rois_num = paddle.static.data(name='rois_num', shape=[None], dtype='int32') - - pool_out = ops.roi_pool( - input=x, - rois=rois, - output_size=(1, 1), - spatial_scale=1.0, - rois_num=rois_num) - """ - check_type(output_size, 'output_size', (int, tuple), 'roi_pool') - if isinstance(output_size, int): - output_size = (output_size, output_size) - - pooled_height, pooled_width = output_size - if in_dygraph_mode(): - assert rois_num is not None, "rois_num should not be None in dygraph mode." - pool_out, argmaxes = core.ops.roi_pool( - input, rois, rois_num, "pooled_height", pooled_height, - "pooled_width", pooled_width, "spatial_scale", spatial_scale) - return pool_out, argmaxes - - else: - check_variable_and_dtype(input, 'input', ['float32'], 'roi_pool') - check_variable_and_dtype(rois, 'rois', ['float32'], 'roi_pool') - helper = LayerHelper('roi_pool', **locals()) - dtype = helper.input_dtype() - pool_out = helper.create_variable_for_type_inference(dtype) - argmaxes = helper.create_variable_for_type_inference(dtype='int32') - - inputs = { - "X": input, - "ROIs": rois, - } - if rois_num is not None: - inputs['RoisNum'] = rois_num - helper.append_op( - type="roi_pool", - inputs=inputs, - outputs={"Out": pool_out, - "Argmax": argmaxes}, - attrs={ - "pooled_height": pooled_height, - "pooled_width": pooled_width, - "spatial_scale": spatial_scale - }) - return pool_out, argmaxes - - -@paddle.jit.not_to_static -def roi_align(input, - rois, - output_size, - spatial_scale=1.0, - sampling_ratio=-1, - rois_num=None, - aligned=True, - name=None): - """ - - Region of interest align (also known as RoI align) is to perform - bilinear interpolation on inputs of nonuniform sizes to obtain - fixed-size feature maps (e.g. 7*7) - - Dividing each region proposal into equal-sized sections with - the pooled_width and pooled_height. Location remains the origin - result. - - In each ROI bin, the value of the four regularly sampled locations - are computed directly through bilinear interpolation. The output is - the mean of four locations. - Thus avoid the misaligned problem. - - Args: - input (Tensor): Input feature, 4D-Tensor with the shape of [N,C,H,W], - where N is the batch size, C is the input channel, H is Height, W is weight. - The data type is float32 or float64. - rois (Tensor): ROIs (Regions of Interest) to pool over.It should be - a 2-D Tensor or 2-D LoDTensor of shape (num_rois, 4), the lod level is 1. - The data type is float32 or float64. Given as [[x1, y1, x2, y2], ...], - (x1, y1) is the top left coordinates, and (x2, y2) is the bottom right coordinates. - output_size (int or tuple[int, int]): The pooled output size(h, w), data type is int32. If int, h and w are both equal to output_size. - spatial_scale (float32, optional): Multiplicative spatial scale factor to translate ROI coords - from their input scale to the scale used when pooling. Default: 1.0 - sampling_ratio(int32, optional): number of sampling points in the interpolation grid. 
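# The roi_pool/roi_align wrappers removed here are covered by Paddle's own
# paddle.vision.ops API (the test updates later in this diff make the same
# substitution). A minimal dynamic-graph sketch with made-up shapes:
import paddle

x = paddle.rand([1, 256, 32, 32])
boxes = paddle.to_tensor([[4., 4., 16., 16.]])    # [x1, y1, x2, y2]
boxes_num = paddle.to_tensor([1], dtype='int32')  # boxes per image
pooled = paddle.vision.ops.roi_pool(x, boxes, boxes_num, output_size=7)
aligned = paddle.vision.ops.roi_align(
    x, boxes, boxes_num, output_size=7, sampling_ratio=-1, aligned=True)
# both results have shape [num_boxes, 256, 7, 7]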
- If <=0, then grid points are adaptive to roi_width and pooled_w, likewise for height. Default: -1 - rois_num (Tensor): The number of RoIs in each image. Default: None - name(str, optional): For detailed information, please refer - to :ref:`api_guide_Name`. Usually name is no need to set and - None by default. - - Returns: - Tensor: - - Output: The output of ROIAlignOp is a 4-D tensor with shape (num_rois, channels, pooled_h, pooled_w). The data type is float32 or float64. - - - Examples: - .. code-block:: python - - import paddle - from ppdet.modeling import ops - paddle.enable_static() - - x = paddle.static.data( - name='data', shape=[None, 256, 32, 32], dtype='float32') - rois = paddle.static.data( - name='rois', shape=[None, 4], dtype='float32') - rois_num = paddle.static.data(name='rois_num', shape=[None], dtype='int32') - align_out = ops.roi_align(input=x, - rois=rois, - ouput_size=(7, 7), - spatial_scale=0.5, - sampling_ratio=-1, - rois_num=rois_num) - """ - check_type(output_size, 'output_size', (int, tuple), 'roi_align') - if isinstance(output_size, int): - output_size = (output_size, output_size) - - pooled_height, pooled_width = output_size - - if in_dygraph_mode(): - assert rois_num is not None, "rois_num should not be None in dygraph mode." - align_out = core.ops.roi_align( - input, rois, rois_num, "pooled_height", pooled_height, - "pooled_width", pooled_width, "spatial_scale", spatial_scale, - "sampling_ratio", sampling_ratio, "aligned", aligned) - return align_out - - else: - check_variable_and_dtype(input, 'input', ['float32', 'float64'], - 'roi_align') - check_variable_and_dtype(rois, 'rois', ['float32', 'float64'], - 'roi_align') - helper = LayerHelper('roi_align', **locals()) - dtype = helper.input_dtype() - align_out = helper.create_variable_for_type_inference(dtype) - inputs = { - "X": input, - "ROIs": rois, - } - if rois_num is not None: - inputs['RoisNum'] = rois_num - helper.append_op( - type="roi_align", - inputs=inputs, - outputs={"Out": align_out}, - attrs={ - "pooled_height": pooled_height, - "pooled_width": pooled_width, - "spatial_scale": spatial_scale, - "sampling_ratio": sampling_ratio, - "aligned": aligned, - }) - return align_out - - -@paddle.jit.not_to_static -def iou_similarity(x, y, box_normalized=True, name=None): - """ - Computes intersection-over-union (IOU) between two box lists. - Box list 'X' should be a LoDTensor and 'Y' is a common Tensor, - boxes in 'Y' are shared by all instance of the batched inputs of X. - Given two boxes A and B, the calculation of IOU is as follows: - - $$ - IOU(A, B) = - \\frac{area(A\\cap B)}{area(A)+area(B)-area(A\\cap B)} - $$ - - Args: - x (Tensor): Box list X is a 2-D Tensor with shape [N, 4] holds N - boxes, each box is represented as [xmin, ymin, xmax, ymax], - the shape of X is [N, 4]. [xmin, ymin] is the left top - coordinate of the box if the input is image feature map, they - are close to the origin of the coordinate system. - [xmax, ymax] is the right bottom coordinate of the box. - The data type is float32 or float64. - y (Tensor): Box list Y holds M boxes, each box is represented as - [xmin, ymin, xmax, ymax], the shape of X is [N, 4]. - [xmin, ymin] is the left top coordinate of the box if the - input is image feature map, and [xmax, ymax] is the right - bottom coordinate of the box. The data type is float32 or float64. - box_normalized(bool): Whether treat the priorbox as a normalized box. - Set true by default. - name(str, optional): For detailed information, please refer - to :ref:`api_guide_Name`. 
Usually name is no need to set and - None by default. - - Returns: - Tensor: The output of iou_similarity op, a tensor with shape [N, M] - representing pairwise iou scores. The data type is same with x. - - Examples: - .. code-block:: python - - import paddle - from ppdet.modeling import ops - paddle.enable_static() - - x = paddle.static.data(name='x', shape=[None, 4], dtype='float32') - y = paddle.static.data(name='y', shape=[None, 4], dtype='float32') - iou = ops.iou_similarity(x=x, y=y) - """ - - if in_dygraph_mode(): - out = core.ops.iou_similarity(x, y, 'box_normalized', box_normalized) - return out - else: - helper = LayerHelper("iou_similarity", **locals()) - out = helper.create_variable_for_type_inference(dtype=x.dtype) - - helper.append_op( - type="iou_similarity", - inputs={"X": x, - "Y": y}, - attrs={"box_normalized": box_normalized}, - outputs={"Out": out}) - return out - - -@paddle.jit.not_to_static -def collect_fpn_proposals(multi_rois, - multi_scores, - min_level, - max_level, - post_nms_top_n, - rois_num_per_level=None, - name=None): - """ - - **This OP only supports LoDTensor as input**. Concat multi-level RoIs - (Region of Interest) and select N RoIs with respect to multi_scores. - This operation performs the following steps: - - 1. Choose num_level RoIs and scores as input: num_level = max_level - min_level - 2. Concat multi-level RoIs and scores - 3. Sort scores and select post_nms_top_n scores - 4. Gather RoIs by selected indices from scores - 5. Re-sort RoIs by corresponding batch_id - - Args: - multi_rois(list): List of RoIs to collect. Element in list is 2-D - LoDTensor with shape [N, 4] and data type is float32 or float64, - N is the number of RoIs. - multi_scores(list): List of scores of RoIs to collect. Element in list - is 2-D LoDTensor with shape [N, 1] and data type is float32 or - float64, N is the number of RoIs. - min_level(int): The lowest level of FPN layer to collect - max_level(int): The highest level of FPN layer to collect - post_nms_top_n(int): The number of selected RoIs - rois_num_per_level(list, optional): The List of RoIs' numbers. - Each element is 1-D Tensor which contains the RoIs' number of each - image on each level and the shape is [B] and data type is - int32, B is the number of images. If it is not None then return - a 1-D Tensor contains the output RoIs' number of each image and - the shape is [B]. Default: None - name(str, optional): For detailed information, please refer - to :ref:`api_guide_Name`. Usually name is no need to set and - None by default. - - Returns: - Variable: - - fpn_rois(Variable): 2-D LoDTensor with shape [N, 4] and data type is - float32 or float64. Selected RoIs. - - rois_num(Tensor): 1-D Tensor contains the RoIs's number of each - image. The shape is [B] and data type is int32. B is the number of - images. - - Examples: - .. 
code-block:: python - - import paddle - from ppdet.modeling import ops - paddle.enable_static() - multi_rois = [] - multi_scores = [] - for i in range(4): - multi_rois.append(paddle.static.data( - name='roi_'+str(i), shape=[None, 4], dtype='float32', lod_level=1)) - for i in range(4): - multi_scores.append(paddle.static.data( - name='score_'+str(i), shape=[None, 1], dtype='float32', lod_level=1)) - - fpn_rois = ops.collect_fpn_proposals( - multi_rois=multi_rois, - multi_scores=multi_scores, - min_level=2, - max_level=5, - post_nms_top_n=2000) - """ - check_type(multi_rois, 'multi_rois', list, 'collect_fpn_proposals') - check_type(multi_scores, 'multi_scores', list, 'collect_fpn_proposals') - num_lvl = max_level - min_level + 1 - input_rois = multi_rois[:num_lvl] - input_scores = multi_scores[:num_lvl] - - if in_dygraph_mode(): - assert rois_num_per_level is not None, "rois_num_per_level should not be None in dygraph mode." - attrs = ('post_nms_topN', post_nms_top_n) - output_rois, rois_num = core.ops.collect_fpn_proposals( - input_rois, input_scores, rois_num_per_level, *attrs) - return output_rois, rois_num - - else: - helper = LayerHelper('collect_fpn_proposals', **locals()) - dtype = helper.input_dtype('multi_rois') - check_dtype(dtype, 'multi_rois', ['float32', 'float64'], - 'collect_fpn_proposals') - output_rois = helper.create_variable_for_type_inference(dtype) - output_rois.stop_gradient = True - - inputs = { - 'MultiLevelRois': input_rois, - 'MultiLevelScores': input_scores, - } - outputs = {'FpnRois': output_rois} - if rois_num_per_level is not None: - inputs['MultiLevelRoIsNum'] = rois_num_per_level - rois_num = helper.create_variable_for_type_inference(dtype='int32') - rois_num.stop_gradient = True - outputs['RoisNum'] = rois_num - else: - rois_num = None - helper.append_op( - type='collect_fpn_proposals', - inputs=inputs, - outputs=outputs, - attrs={'post_nms_topN': post_nms_top_n}) - return output_rois, rois_num - - @paddle.jit.not_to_static def distribute_fpn_proposals(fpn_rois, min_level, @@ -570,12 +193,12 @@ def distribute_fpn_proposals(fpn_rois, """ num_lvl = max_level - min_level + 1 - if in_dygraph_mode(): + if in_dynamic_mode(): assert rois_num is not None, "rois_num should not be None in dygraph mode." attrs = ('min_level', min_level, 'max_level', max_level, 'refer_level', refer_level, 'refer_scale', refer_scale, 'pixel_offset', pixel_offset) - multi_rois, restore_ind, rois_num_per_level = core.ops.distribute_fpn_proposals( + multi_rois, restore_ind, rois_num_per_level = _C_ops.distribute_fpn_proposals( fpn_rois, rois_num, num_lvl, num_lvl, *attrs) return multi_rois, restore_ind, rois_num_per_level @@ -621,143 +244,6 @@ def distribute_fpn_proposals(fpn_rois, return multi_rois, restore_ind, rois_num_per_level -@paddle.jit.not_to_static -def yolo_box( - x, - origin_shape, - anchors, - class_num, - conf_thresh, - downsample_ratio, - clip_bbox=True, - scale_x_y=1., - name=None, ): - """ - - This operator generates YOLO detection boxes from output of YOLOv3 network. - - The output of previous network is in shape [N, C, H, W], while H and W - should be the same, H and W specify the grid size, each grid point predict - given number boxes, this given number, which following will be represented as S, - is specified by the number of anchors. 
In the second dimension(the channel - dimension), C should be equal to S * (5 + class_num), class_num is the object - category number of source dataset(such as 80 in coco dataset), so the - second(channel) dimension, apart from 4 box location coordinates x, y, w, h, - also includes confidence score of the box and class one-hot key of each anchor - box. - Assume the 4 location coordinates are :math:`t_x, t_y, t_w, t_h`, the box - predictions should be as follows: - $$ - b_x = \\sigma(t_x) + c_x - $$ - $$ - b_y = \\sigma(t_y) + c_y - $$ - $$ - b_w = p_w e^{t_w} - $$ - $$ - b_h = p_h e^{t_h} - $$ - in the equation above, :math:`c_x, c_y` is the left top corner of current grid - and :math:`p_w, p_h` is specified by anchors. - The logistic regression value of the 5th channel of each anchor prediction boxes - represents the confidence score of each prediction box, and the logistic - regression value of the last :attr:`class_num` channels of each anchor prediction - boxes represents the classifcation scores. Boxes with confidence scores less than - :attr:`conf_thresh` should be ignored, and box final scores is the product of - confidence scores and classification scores. - $$ - score_{pred} = score_{conf} * score_{class} - $$ - - Args: - x (Tensor): The input tensor of YoloBox operator is a 4-D tensor with shape of [N, C, H, W]. - The second dimension(C) stores box locations, confidence score and - classification one-hot keys of each anchor box. Generally, X should be the output of YOLOv3 network. - The data type is float32 or float64. - origin_shape (Tensor): The image size tensor of YoloBox operator, This is a 2-D tensor with shape of [N, 2]. - This tensor holds height and width of each input image used for resizing output box in input image - scale. The data type is int32. - anchors (list|tuple): The anchor width and height, it will be parsed pair by pair. - class_num (int): The number of classes to predict. - conf_thresh (float): The confidence scores threshold of detection boxes. Boxes with confidence scores - under threshold should be ignored. - downsample_ratio (int): The downsample ratio from network input to YoloBox operator input, - so 32, 16, 8 should be set for the first, second, and thrid YoloBox operators. - clip_bbox (bool): Whether clip output bonding box in Input(ImgSize) boundary. Default true. - scale_x_y (float): Scale the center point of decoded bounding box. Default 1.0. - name (string): The default value is None. Normally there is no need - for user to set this property. For more information, - please refer to :ref:`api_guide_Name` - - Returns: - boxes Tensor: A 3-D tensor with shape [N, M, 4], the coordinates of boxes, N is the batch num, - M is output box number, and the 3rd dimension stores [xmin, ymin, xmax, ymax] coordinates of boxes. - scores Tensor: A 3-D tensor with shape [N, M, :attr:`class_num`], the coordinates of boxes, N is the batch num, - M is output box number. - - Raises: - TypeError: Attr anchors of yolo box must be list or tuple - TypeError: Attr class_num of yolo box must be an integer - TypeError: Attr conf_thresh of yolo box must be a float number - - Examples: - - .. 
code-block:: python - - import paddle - from ppdet.modeling import ops - - paddle.enable_static() - x = paddle.static.data(name='x', shape=[None, 255, 13, 13], dtype='float32') - img_size = paddle.static.data(name='img_size',shape=[None, 2],dtype='int64') - anchors = [10, 13, 16, 30, 33, 23] - boxes,scores = ops.yolo_box(x=x, img_size=img_size, class_num=80, anchors=anchors, - conf_thresh=0.01, downsample_ratio=32) - """ - helper = LayerHelper('yolo_box', **locals()) - - if not isinstance(anchors, list) and not isinstance(anchors, tuple): - raise TypeError("Attr anchors of yolo_box must be list or tuple") - if not isinstance(class_num, int): - raise TypeError("Attr class_num of yolo_box must be an integer") - if not isinstance(conf_thresh, float): - raise TypeError("Attr ignore_thresh of yolo_box must be a float number") - - if in_dygraph_mode(): - attrs = ('anchors', anchors, 'class_num', class_num, 'conf_thresh', - conf_thresh, 'downsample_ratio', downsample_ratio, 'clip_bbox', - clip_bbox, 'scale_x_y', scale_x_y) - boxes, scores = core.ops.yolo_box(x, origin_shape, *attrs) - return boxes, scores - else: - boxes = helper.create_variable_for_type_inference(dtype=x.dtype) - scores = helper.create_variable_for_type_inference(dtype=x.dtype) - - attrs = { - "anchors": anchors, - "class_num": class_num, - "conf_thresh": conf_thresh, - "downsample_ratio": downsample_ratio, - "clip_bbox": clip_bbox, - "scale_x_y": scale_x_y, - } - - helper.append_op( - type='yolo_box', - inputs={ - "X": x, - "ImgSize": origin_shape, - }, - outputs={ - 'Boxes': boxes, - 'Scores': scores, - }, - attrs=attrs) - return boxes, scores - - @paddle.jit.not_to_static def prior_box(input, image, @@ -860,14 +346,14 @@ def prior_box(input, max_sizes = [max_sizes] cur_max_sizes = max_sizes - if in_dygraph_mode(): + if in_dynamic_mode(): attrs = ('min_sizes', min_sizes, 'aspect_ratios', aspect_ratios, 'variances', variance, 'flip', flip, 'clip', clip, 'step_w', steps[0], 'step_h', steps[1], 'offset', offset, 'min_max_aspect_ratios_order', min_max_aspect_ratios_order) if cur_max_sizes is not None: attrs += ('max_sizes', cur_max_sizes) - box, var = core.ops.prior_box(input, image, *attrs) + box, var = _C_ops.prior_box(input, image, *attrs) return box, var else: attrs = { @@ -1005,13 +491,13 @@ def multiclass_nms(bboxes, """ helper = LayerHelper('multiclass_nms3', **locals()) - if in_dygraph_mode(): + if in_dynamic_mode(): attrs = ('background_label', background_label, 'score_threshold', score_threshold, 'nms_top_k', nms_top_k, 'nms_threshold', nms_threshold, 'keep_top_k', keep_top_k, 'nms_eta', nms_eta, 'normalized', normalized) - output, index, nms_rois_num = core.ops.multiclass_nms3(bboxes, scores, - rois_num, *attrs) + output, index, nms_rois_num = _C_ops.multiclass_nms3(bboxes, scores, + rois_num, *attrs) if not return_index: index = None return output, nms_rois_num, index @@ -1146,13 +632,13 @@ def matrix_nms(bboxes, check_type(gaussian_sigma, 'gaussian_sigma', float, 'matrix_nms') check_type(background_label, 'background_label', int, 'matrix_nms') - if in_dygraph_mode(): + if in_dynamic_mode(): attrs = ('background_label', background_label, 'score_threshold', score_threshold, 'post_threshold', post_threshold, 'nms_top_k', nms_top_k, 'gaussian_sigma', gaussian_sigma, 'use_gaussian', use_gaussian, 'keep_top_k', keep_top_k, 'normalized', normalized) - out, index, rois_num = core.ops.matrix_nms(bboxes, scores, *attrs) + out, index, rois_num = _C_ops.matrix_nms(bboxes, scores, *attrs) if not return_index: index = None if not 
return_rois_num: @@ -1191,111 +677,6 @@ def matrix_nms(bboxes, return output, rois_num, index -def bipartite_match(dist_matrix, - match_type=None, - dist_threshold=None, - name=None): - """ - - This operator implements a greedy bipartite matching algorithm, which is - used to obtain the matching with the maximum distance based on the input - distance matrix. For input 2D matrix, the bipartite matching algorithm can - find the matched column for each row (matched means the largest distance), - also can find the matched row for each column. And this operator only - calculate matched indices from column to row. For each instance, - the number of matched indices is the column number of the input distance - matrix. **The OP only supports CPU**. - - There are two outputs, matched indices and distance. - A simple description, this algorithm matched the best (maximum distance) - row entity to the column entity and the matched indices are not duplicated - in each row of ColToRowMatchIndices. If the column entity is not matched - any row entity, set -1 in ColToRowMatchIndices. - - NOTE: the input DistMat can be LoDTensor (with LoD) or Tensor. - If LoDTensor with LoD, the height of ColToRowMatchIndices is batch size. - If Tensor, the height of ColToRowMatchIndices is 1. - - NOTE: This API is a very low level API. It is used by :code:`ssd_loss` - layer. Please consider to use :code:`ssd_loss` instead. - - Args: - dist_matrix(Tensor): This input is a 2-D LoDTensor with shape - [K, M]. The data type is float32 or float64. It is pair-wise - distance matrix between the entities represented by each row and - each column. For example, assumed one entity is A with shape [K], - another entity is B with shape [M]. The dist_matrix[i][j] is the - distance between A[i] and B[j]. The bigger the distance is, the - better matching the pairs are. NOTE: This tensor can contain LoD - information to represent a batch of inputs. One instance of this - batch can contain different numbers of entities. - match_type(str, optional): The type of matching method, should be - 'bipartite' or 'per_prediction'. None ('bipartite') by default. - dist_threshold(float32, optional): If `match_type` is 'per_prediction', - this threshold is to determine the extra matching bboxes based - on the maximum distance, 0.5 by default. - name(str, optional): For detailed information, please refer - to :ref:`api_guide_Name`. Usually name is no need to set and - None by default. - - Returns: - Tuple: - - matched_indices(Tensor): A 2-D Tensor with shape [N, M]. The data - type is int32. N is the batch size. If match_indices[i][j] is -1, it - means B[j] does not match any entity in i-th instance. - Otherwise, it means B[j] is matched to row - match_indices[i][j] in i-th instance. The row number of - i-th instance is saved in match_indices[i][j]. - - matched_distance(Tensor): A 2-D Tensor with shape [N, M]. The data - type is float32. N is batch size. If match_indices[i][j] is -1, - match_distance[i][j] is also -1.0. Otherwise, assumed - match_distance[i][j] = d, and the row offsets of each instance - are called LoD. Then match_distance[i][j] = - dist_matrix[d+LoD[i]][j]. - - Examples: - - .. 
code-block:: python - import paddle - from ppdet.modeling import ops - from ppdet.modeling.utils import iou_similarity - - paddle.enable_static() - - x = paddle.static.data(name='x', shape=[None, 4], dtype='float32') - y = paddle.static.data(name='y', shape=[None, 4], dtype='float32') - iou = iou_similarity(x=x, y=y) - matched_indices, matched_dist = ops.bipartite_match(iou) - """ - check_variable_and_dtype(dist_matrix, 'dist_matrix', - ['float32', 'float64'], 'bipartite_match') - - if in_dygraph_mode(): - match_indices, match_distance = core.ops.bipartite_match( - dist_matrix, "match_type", match_type, "dist_threshold", - dist_threshold) - return match_indices, match_distance - - helper = LayerHelper('bipartite_match', **locals()) - match_indices = helper.create_variable_for_type_inference(dtype='int32') - match_distance = helper.create_variable_for_type_inference( - dtype=dist_matrix.dtype) - helper.append_op( - type='bipartite_match', - inputs={'DistMat': dist_matrix}, - attrs={ - 'match_type': match_type, - 'dist_threshold': dist_threshold, - }, - outputs={ - 'ColToRowMatchIndices': match_indices, - 'ColToRowMatchDist': match_distance - }) - return match_indices, match_distance - - @paddle.jit.not_to_static def box_coder(prior_box, prior_box_var, @@ -1408,14 +789,14 @@ def box_coder(prior_box, check_variable_and_dtype(target_box, 'target_box', ['float32', 'float64'], 'box_coder') - if in_dygraph_mode(): + if in_dynamic_mode(): if isinstance(prior_box_var, Variable): - output_box = core.ops.box_coder( + output_box = _C_ops.box_coder( prior_box, prior_box_var, target_box, "code_type", code_type, "box_normalized", box_normalized, "axis", axis) elif isinstance(prior_box_var, list): - output_box = core.ops.box_coder( + output_box = _C_ops.box_coder( prior_box, None, target_box, "code_type", code_type, "box_normalized", box_normalized, "axis", axis, "variance", prior_box_var) @@ -1533,12 +914,12 @@ def generate_proposals(scores, rois, roi_probs = ops.generate_proposals(scores, bbox_deltas, im_shape, anchors, variances) """ - if in_dygraph_mode(): + if in_dynamic_mode(): assert return_rois_num, "return_rois_num should be True in dygraph mode." attrs = ('pre_nms_topN', pre_nms_top_n, 'post_nms_topN', post_nms_top_n, 'nms_thresh', nms_thresh, 'min_size', min_size, 'eta', eta, 'pixel_offset', pixel_offset) - rpn_rois, rpn_roi_probs, rpn_rois_num = core.ops.generate_proposals_v2( + rpn_rois, rpn_roi_probs, rpn_rois_num = _C_ops.generate_proposals_v2( scores, bbox_deltas, im_shape, anchors, variances, *attrs) if not return_rois_num: rpn_rois_num = None @@ -1639,8 +1020,3 @@ def get_static_shape(tensor): shape = paddle.shape(tensor) shape.stop_gradient = True return shape - - -def paddle_distributed_is_initialized(): - return core.is_compiled_with_dist( - ) and parallel_helper._is_parallel_ctx_initialized() diff --git a/ppdet/modeling/post_process.py b/ppdet/modeling/post_process.py index 72e409e4008ea55b4e84a09125a069215a8f34c3..27890c17ec39f3e29a3126adab173bc9e3596bc2 100644 --- a/ppdet/modeling/post_process.py +++ b/ppdet/modeling/post_process.py @@ -33,7 +33,7 @@ __all__ = [ @register -class BBoxPostProcess(nn.Layer): +class BBoxPostProcess(object): __shared__ = ['num_classes', 'export_onnx'] __inject__ = ['decode', 'nms'] @@ -45,9 +45,9 @@ class BBoxPostProcess(nn.Layer): self.nms = nms self.export_onnx = export_onnx - def forward(self, head_out, rois, im_shape, scale_factor): + def __call__(self, head_out, rois, im_shape, scale_factor): """ - Decode the bbox and do NMS if needed. 
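# The change just above turns BBoxPostProcess from an nn.Layer into a plain
# object with __call__, so it no longer registers as a sublayer or exports
# parameters, while call sites keep the same shape. A tiny self-contained
# illustration of that difference (names here are hypothetical):
import paddle.nn as nn

class PlainPostProcess(object):
    def __call__(self, x):
        return x  # decode + NMS would happen here

class Model(nn.Layer):
    def __init__(self):
        super(Model, self).__init__()
        self.conv = nn.Conv2D(3, 8, 3)
        self.post = PlainPostProcess()  # not tracked by nn.Layer

m = Model()
assert len(m.sublayers()) == 1  # only the conv; post-processing is invisible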
+ Decode the bbox and do NMS if needed. Args: head_out (tuple): bbox_pred and cls_prob of bbox_head output. @@ -85,7 +85,7 @@ class BBoxPostProcess(nn.Layer): """ Rescale, clip and filter the bbox from the output of NMS to get final prediction. - + Notes: Currently only support bs = 1. @@ -171,7 +171,7 @@ class BBoxPostProcess(nn.Layer): pred_label = paddle.where(keep_mask, pred_label, paddle.ones_like(pred_label) * -1) pred_result = paddle.concat([pred_label, pred_score, pred_bbox], axis=1) - return pred_result + return bboxes, pred_result, bbox_num def get_origin_shape(self, ): return self.origin_shape_list @@ -179,6 +179,7 @@ class BBoxPostProcess(nn.Layer): @register class MaskPostProcess(object): + __shared__ = ['export_onnx', 'assign_on_cpu'] """ refer to: https://github.com/facebookresearch/detectron2/layers/mask_ops.py @@ -186,9 +187,14 @@ class MaskPostProcess(object): Get Mask output according to the output from model """ - def __init__(self, binary_thresh=0.5): + def __init__(self, + binary_thresh=0.5, + export_onnx=False, + assign_on_cpu=False): super(MaskPostProcess, self).__init__() self.binary_thresh = binary_thresh + self.export_onnx = export_onnx + self.assign_on_cpu = assign_on_cpu def paste_mask(self, masks, boxes, im_h, im_w): """ @@ -200,10 +206,13 @@ class MaskPostProcess(object): N = masks.shape[0] img_y = paddle.arange(y0_int, y1_int) + 0.5 img_x = paddle.arange(x0_int, x1_int) + 0.5 + img_y = (img_y - y0) / (y1 - y0) * 2 - 1 img_x = (img_x - x0) / (x1 - x0) * 2 - 1 # img_x, img_y have shapes (N, w), (N, h) + if self.assign_on_cpu: + paddle.set_device('cpu') gx = img_x[:, None, :].expand( [N, paddle.shape(img_y)[1], paddle.shape(img_x)[1]]) gy = img_y[:, :, None].expand( @@ -230,15 +239,37 @@ class MaskPostProcess(object): """ num_mask = mask_out.shape[0] origin_shape = paddle.cast(origin_shape, 'int32') - # TODO: support bs > 1 and mask output dtype is bool - pred_result = paddle.zeros( - [num_mask, origin_shape[0][0], origin_shape[0][1]], dtype='int32') + device = paddle.device.get_device() - im_h, im_w = origin_shape[0][0], origin_shape[0][1] - pred_mask = self.paste_mask(mask_out[:, None, :, :], bboxes[:, 2:], - im_h, im_w) - pred_mask = pred_mask >= self.binary_thresh - pred_result = paddle.cast(pred_mask, 'int32') + if self.export_onnx: + h, w = origin_shape[0][0], origin_shape[0][1] + mask_onnx = self.paste_mask(mask_out[:, None, :, :], bboxes[:, 2:], + h, w) + mask_onnx = mask_onnx >= self.binary_thresh + pred_result = paddle.cast(mask_onnx, 'int32') + + else: + max_h = paddle.max(origin_shape[:, 0]) + max_w = paddle.max(origin_shape[:, 1]) + pred_result = paddle.zeros( + [num_mask, max_h, max_w], dtype='int32') - 1 + + id_start = 0 + for i in range(paddle.shape(bbox_num)[0]): + bboxes_i = bboxes[id_start:id_start + bbox_num[i], :] + mask_out_i = mask_out[id_start:id_start + bbox_num[i], :, :] + im_h = origin_shape[i, 0] + im_w = origin_shape[i, 1] + bbox_num_i = bbox_num[id_start] + pred_mask = self.paste_mask(mask_out_i[:, None, :, :], + bboxes_i[:, 2:], im_h, im_w) + pred_mask = paddle.cast(pred_mask >= self.binary_thresh, + 'int32') + pred_result[id_start:id_start + bbox_num[i], :im_h, : + im_w] = pred_mask + id_start += bbox_num[i] + if self.assign_on_cpu: + paddle.set_device(device) return pred_result @@ -502,7 +533,7 @@ class CenterNetPostProcess(TTFBox): x2 = xs + wh[:, 0:1] / 2 y2 = ys + wh[:, 1:2] / 2 - n, c, feat_h, feat_w = hm.shape[:] + n, c, feat_h, feat_w = paddle.shape(hm) padw = (feat_w * self.down_ratio - im_shape[0, 1]) / 2 padh = (feat_h 
* self.down_ratio - im_shape[0, 0]) / 2 x1 = x1 * self.down_ratio diff --git a/ppdet/modeling/proposal_generator/anchor_generator.py b/ppdet/modeling/proposal_generator/anchor_generator.py index 34f03c0ef084d1976f7f6879caf3e25b1f67d7de..94fd346002562fd772a21f525f7ad4f50f4c4680 100644 --- a/ppdet/modeling/proposal_generator/anchor_generator.py +++ b/ppdet/modeling/proposal_generator/anchor_generator.py @@ -22,6 +22,8 @@ import paddle.nn as nn from ppdet.core.workspace import register +__all__ = ['AnchorGenerator', 'RetinaAnchorGenerator'] + @register class AnchorGenerator(nn.Layer): @@ -129,3 +131,25 @@ class AnchorGenerator(nn.Layer): For FPN models, `num_anchors` on every feature map is the same. """ return len(self.cell_anchors[0]) + + +@register +class RetinaAnchorGenerator(AnchorGenerator): + def __init__(self, + octave_base_scale=4, + scales_per_octave=3, + aspect_ratios=[0.5, 1.0, 2.0], + strides=[8.0, 16.0, 32.0, 64.0, 128.0], + variance=[1.0, 1.0, 1.0, 1.0], + offset=0.0): + anchor_sizes = [] + for s in strides: + anchor_sizes.append([ + s * octave_base_scale * 2**(i/scales_per_octave) \ + for i in range(scales_per_octave)]) + super(RetinaAnchorGenerator, self).__init__( + anchor_sizes=anchor_sizes, + aspect_ratios=aspect_ratios, + strides=strides, + variance=variance, + offset=offset) diff --git a/ppdet/modeling/proposal_generator/rpn_head.py b/ppdet/modeling/proposal_generator/rpn_head.py index 608d5425065ba98fcb678252e5a568f3b21a88b3..8a431eeac208a052ed8de5dfb7278948cfbcf042 100644 --- a/ppdet/modeling/proposal_generator/rpn_head.py +++ b/ppdet/modeling/proposal_generator/rpn_head.py @@ -21,6 +21,7 @@ from ppdet.core.workspace import register from .anchor_generator import AnchorGenerator from .target_layer import RPNTargetAssign from .proposal_generator import ProposalGenerator +from ..cls_utils import _get_class_default_kwargs class RPNFeat(nn.Layer): @@ -67,14 +68,17 @@ class RPNHead(nn.Layer): derived by from_config """ __shared__ = ['export_onnx'] + __inject__ = ['loss_rpn_bbox'] def __init__(self, - anchor_generator=AnchorGenerator().__dict__, - rpn_target_assign=RPNTargetAssign().__dict__, - train_proposal=ProposalGenerator(12000, 2000).__dict__, - test_proposal=ProposalGenerator().__dict__, + anchor_generator=_get_class_default_kwargs(AnchorGenerator), + rpn_target_assign=_get_class_default_kwargs(RPNTargetAssign), + train_proposal=_get_class_default_kwargs(ProposalGenerator, + 12000, 2000), + test_proposal=_get_class_default_kwargs(ProposalGenerator), in_channel=1024, - export_onnx=False): + export_onnx=False, + loss_rpn_bbox=None): super(RPNHead, self).__init__() self.anchor_generator = anchor_generator self.rpn_target_assign = rpn_target_assign @@ -89,6 +93,7 @@ class RPNHead(nn.Layer): self.train_proposal = ProposalGenerator(**train_proposal) if isinstance(test_proposal, dict): self.test_proposal = ProposalGenerator(**test_proposal) + self.loss_rpn_bbox = loss_rpn_bbox num_anchors = self.anchor_generator.num_anchors self.rpn_feat = RPNFeat(in_channel, in_channel) @@ -296,7 +301,12 @@ class RPNHead(nn.Layer): loc_tgt = paddle.concat(loc_tgt) loc_tgt = paddle.gather(loc_tgt, pos_ind) loc_tgt.stop_gradient = True - loss_rpn_reg = paddle.abs(loc_pred - loc_tgt).sum() + + if self.loss_rpn_bbox is None: + loss_rpn_reg = paddle.abs(loc_pred - loc_tgt).sum() + else: + loss_rpn_reg = self.loss_rpn_bbox(loc_pred, loc_tgt).sum() + return { 'loss_rpn_cls': loss_rpn_cls / norm, 'loss_rpn_reg': loss_rpn_reg / norm diff --git a/ppdet/modeling/proposal_generator/target.py 
b/ppdet/modeling/proposal_generator/target.py index 7af30f64153acf5e9c68c51981a02c76acbe50f0..fd04f052219a00c919b945d8838436de018af873 100644 --- a/ppdet/modeling/proposal_generator/target.py +++ b/ppdet/modeling/proposal_generator/target.py @@ -74,9 +74,11 @@ def label_box(anchors, is_crowd=None, assign_on_cpu=False): if assign_on_cpu: + device = paddle.device.get_device() paddle.set_device("cpu") iou = bbox_overlaps(gt_boxes, anchors) - paddle.set_device("gpu") + paddle.set_device(device) + else: iou = bbox_overlaps(gt_boxes, anchors) n_gt = gt_boxes.shape[0] @@ -356,7 +358,7 @@ def generate_mask_target(gt_segms, rois, labels_int32, sampled_gt_inds, fg_inds_new = fg_inds.reshape([-1]).numpy() results = [] if len(gt_segms_per_im) > 0: - for j in fg_inds_new: + for j in range(fg_inds_new.shape[0]): results.append( rasterize_polygons_within_box(new_segm[j], boxes[j], resolution)) diff --git a/ppdet/modeling/proposal_generator/target_layer.py b/ppdet/modeling/proposal_generator/target_layer.py index 3b5a09601682151afcd47a0ea0db4fd0f03440a9..201c8bf86b14ee19f4398d2451dabdc886e9af98 100644 --- a/ppdet/modeling/proposal_generator/target_layer.py +++ b/ppdet/modeling/proposal_generator/target_layer.py @@ -392,9 +392,9 @@ class RBoxAssigner(object): gt_bboxes_xc_yc = paddle.to_tensor(gt_bboxes_xc_yc) try: - from rbox_iou_ops import rbox_iou + from ext_op import rbox_iou except Exception as e: - print("import custom_ops error, try install rbox_iou_ops " \ + print("import custom_ops error, try install ext_op " \ "following ppdet/ext_op/README.md", e) sys.stdout.flush() sys.exit(-1) diff --git a/ppdet/modeling/tests/test_architectures.py b/ppdet/modeling/tests/test_architectures.py index 25767e74abd9fce29c51adbc4b5109e17a50aa0b..5de79b2cedb3fffac0ce853406560821a9142363 100644 --- a/ppdet/modeling/tests/test_architectures.py +++ b/ppdet/modeling/tests/test_architectures.py @@ -62,7 +62,7 @@ class TestGFL(TestFasterRCNN): class TestPicoDet(TestFasterRCNN): def set_config(self): - self.cfg_file = 'configs/picodet/picodet_s_320_coco.yml' + self.cfg_file = 'configs/picodet/picodet_s_320_coco_lcnet.yml' if __name__ == '__main__': diff --git a/ppdet/modeling/tests/test_base.py b/ppdet/modeling/tests/test_base.py index cbb9033b393a24167ec1ebc32a4d924fa564f929..451aa78e32ce0682f55a2ab0f9d1ea03e939e481 100644 --- a/ppdet/modeling/tests/test_base.py +++ b/ppdet/modeling/tests/test_base.py @@ -18,9 +18,7 @@ import unittest import contextlib import paddle -import paddle.fluid as fluid -from paddle.fluid.framework import Program -from paddle.fluid import core +from paddle.static import Program class LayerTest(unittest.TestCase): @@ -35,19 +33,17 @@ class LayerTest(unittest.TestCase): def _get_place(self, force_to_use_cpu=False): # this option for ops that only have cpu kernel if force_to_use_cpu: - return core.CPUPlace() + return 'cpu' else: - if core.is_compiled_with_cuda(): - return core.CUDAPlace(0) - return core.CPUPlace() + return paddle.device.get_device() @contextlib.contextmanager def static_graph(self): paddle.enable_static() - scope = fluid.core.Scope() + scope = paddle.static.Scope() program = Program() - with fluid.scope_guard(scope): - with fluid.program_guard(program): + with paddle.static.scope_guard(scope): + with paddle.static.program_guard(program): paddle.seed(self.seed) paddle.framework.random._manual_program_seed(self.seed) yield @@ -57,9 +53,9 @@ class LayerTest(unittest.TestCase): fetch_list, with_lod=False, force_to_use_cpu=False): - exe = fluid.Executor(self._get_place(force_to_use_cpu)) 
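# Worked numbers for the RetinaAnchorGenerator added above: with
# octave_base_scale=4 and scales_per_octave=3, each FPN stride s contributes
# base anchor sizes s * 4 * 2**(i/3) for i in 0..2.
stride = 8.0
sizes = [stride * 4 * 2**(i / 3) for i in range(3)]
print(sizes)  # [32.0, ~40.3, ~50.8]; with 3 aspect ratios -> 9 anchors/cell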
- exe.run(fluid.default_startup_program()) - return exe.run(fluid.default_main_program(), + exe = paddle.static.Executor(self._get_place(force_to_use_cpu)) + exe.run(paddle.static.default_startup_program()) + return exe.run(paddle.static.default_main_program(), feed=feed, fetch_list=fetch_list, return_numpy=(not with_lod)) @@ -67,8 +63,8 @@ class LayerTest(unittest.TestCase): @contextlib.contextmanager def dynamic_graph(self, force_to_use_cpu=False): paddle.disable_static() - with fluid.dygraph.guard( - self._get_place(force_to_use_cpu=force_to_use_cpu)): - paddle.seed(self.seed) - paddle.framework.random._manual_program_seed(self.seed) - yield + place = self._get_place(force_to_use_cpu=force_to_use_cpu) + paddle.device.set_device(place) + paddle.seed(self.seed) + paddle.framework.random._manual_program_seed(self.seed) + yield diff --git a/ppdet/modeling/tests/test_ops.py b/ppdet/modeling/tests/test_ops.py index d4b5747487d3ee49627e4fe8aecec31cf2759ae2..4ef9cbc28c0f1dcc4268571c7206d70b306682cd 100644 --- a/ppdet/modeling/tests/test_ops.py +++ b/ppdet/modeling/tests/test_ops.py @@ -23,8 +23,6 @@ import unittest import numpy as np import paddle -import paddle.fluid as fluid -from paddle.fluid.dygraph import base import ppdet.modeling.ops as ops from ppdet.modeling.tests.test_base import LayerTest @@ -50,127 +48,6 @@ def softmax(x): return exps / np.sum(exps) -class TestCollectFpnProposals(LayerTest): - def test_collect_fpn_proposals(self): - multi_bboxes_np = [] - multi_scores_np = [] - rois_num_per_level_np = [] - for i in range(4): - bboxes_np = np.random.rand(5, 4).astype('float32') - scores_np = np.random.rand(5, 1).astype('float32') - rois_num = np.array([2, 3]).astype('int32') - multi_bboxes_np.append(bboxes_np) - multi_scores_np.append(scores_np) - rois_num_per_level_np.append(rois_num) - - with self.static_graph(): - multi_bboxes = [] - multi_scores = [] - rois_num_per_level = [] - for i in range(4): - bboxes = paddle.static.data( - name='rois' + str(i), - shape=[5, 4], - dtype='float32', - lod_level=1) - scores = paddle.static.data( - name='scores' + str(i), - shape=[5, 1], - dtype='float32', - lod_level=1) - rois_num = paddle.static.data( - name='rois_num' + str(i), shape=[None], dtype='int32') - - multi_bboxes.append(bboxes) - multi_scores.append(scores) - rois_num_per_level.append(rois_num) - - fpn_rois, rois_num = ops.collect_fpn_proposals( - multi_bboxes, - multi_scores, - 2, - 5, - 10, - rois_num_per_level=rois_num_per_level) - feed = {} - for i in range(4): - feed['rois' + str(i)] = multi_bboxes_np[i] - feed['scores' + str(i)] = multi_scores_np[i] - feed['rois_num' + str(i)] = rois_num_per_level_np[i] - fpn_rois_stat, rois_num_stat = self.get_static_graph_result( - feed=feed, fetch_list=[fpn_rois, rois_num], with_lod=True) - fpn_rois_stat = np.array(fpn_rois_stat) - rois_num_stat = np.array(rois_num_stat) - - with self.dynamic_graph(): - multi_bboxes_dy = [] - multi_scores_dy = [] - rois_num_per_level_dy = [] - for i in range(4): - bboxes_dy = base.to_variable(multi_bboxes_np[i]) - scores_dy = base.to_variable(multi_scores_np[i]) - rois_num_dy = base.to_variable(rois_num_per_level_np[i]) - multi_bboxes_dy.append(bboxes_dy) - multi_scores_dy.append(scores_dy) - rois_num_per_level_dy.append(rois_num_dy) - fpn_rois_dy, rois_num_dy = ops.collect_fpn_proposals( - multi_bboxes_dy, - multi_scores_dy, - 2, - 5, - 10, - rois_num_per_level=rois_num_per_level_dy) - fpn_rois_dy = fpn_rois_dy.numpy() - rois_num_dy = rois_num_dy.numpy() - - self.assertTrue(np.array_equal(fpn_rois_stat, 
fpn_rois_dy)) - self.assertTrue(np.array_equal(rois_num_stat, rois_num_dy)) - - def test_collect_fpn_proposals_error(self): - def generate_input(bbox_type, score_type, name): - multi_bboxes = [] - multi_scores = [] - for i in range(4): - bboxes = paddle.static.data( - name='rois' + name + str(i), - shape=[10, 4], - dtype=bbox_type, - lod_level=1) - scores = paddle.static.data( - name='scores' + name + str(i), - shape=[10, 1], - dtype=score_type, - lod_level=1) - multi_bboxes.append(bboxes) - multi_scores.append(scores) - return multi_bboxes, multi_scores - - with self.static_graph(): - bbox1 = paddle.static.data( - name='rois', shape=[5, 10, 4], dtype='float32', lod_level=1) - score1 = paddle.static.data( - name='scores', shape=[5, 10, 1], dtype='float32', lod_level=1) - bbox2, score2 = generate_input('int32', 'float32', '2') - self.assertRaises( - TypeError, - ops.collect_fpn_proposals, - multi_rois=bbox1, - multi_scores=score1, - min_level=2, - max_level=5, - post_nms_top_n=2000) - self.assertRaises( - TypeError, - ops.collect_fpn_proposals, - multi_rois=bbox2, - multi_scores=score2, - min_level=2, - max_level=5, - post_nms_top_n=2000) - - paddle.disable_static() - - class TestDistributeFpnProposals(LayerTest): def test_distribute_fpn_proposals(self): rois_np = np.random.rand(10, 4).astype('float32') @@ -200,8 +77,8 @@ class TestDistributeFpnProposals(LayerTest): output_stat_np.append(output_np) with self.dynamic_graph(): - rois_dy = base.to_variable(rois_np) - rois_num_dy = base.to_variable(rois_num_np) + rois_dy = paddle.to_tensor(rois_np) + rois_num_dy = paddle.to_tensor(rois_num_np) multi_rois_dy, restore_ind_dy, rois_num_per_level_dy = ops.distribute_fpn_proposals( fpn_rois=rois_dy, min_level=2, @@ -251,11 +128,11 @@ class TestROIAlign(LayerTest): rois_num = paddle.static.data( name='rois_num', shape=[None], dtype='int32') - output = ops.roi_align( - input=inputs, - rois=rois, - output_size=output_size, - rois_num=rois_num) + output = paddle.vision.ops.roi_align( + x=inputs, + boxes=rois, + boxes_num=rois_num, + output_size=output_size) output_np, = self.get_static_graph_result( feed={ 'inputs': inputs_np, @@ -266,15 +143,15 @@ class TestROIAlign(LayerTest): with_lod=False) with self.dynamic_graph(): - inputs_dy = base.to_variable(inputs_np) - rois_dy = base.to_variable(rois_np) - rois_num_dy = base.to_variable(rois_num_np) - - output_dy = ops.roi_align( - input=inputs_dy, - rois=rois_dy, - output_size=output_size, - rois_num=rois_num_dy) + inputs_dy = paddle.to_tensor(inputs_np) + rois_dy = paddle.to_tensor(rois_np) + rois_num_dy = paddle.to_tensor(rois_num_np) + + output_dy = paddle.vision.ops.roi_align( + x=inputs_dy, + boxes=rois_dy, + boxes_num=rois_num_dy, + output_size=output_size) output_dy_np = output_dy.numpy() self.assertTrue(np.array_equal(output_np, output_dy_np)) @@ -287,7 +164,7 @@ class TestROIAlign(LayerTest): name='data_error', shape=[10, 4], dtype='int32', lod_level=1) self.assertRaises( TypeError, - ops.roi_align, + paddle.vision.ops.roi_align, input=inputs, rois=rois, output_size=(7, 7)) @@ -311,11 +188,11 @@ class TestROIPool(LayerTest): rois_num = paddle.static.data( name='rois_num', shape=[None], dtype='int32') - output, _ = ops.roi_pool( - input=inputs, - rois=rois, - output_size=output_size, - rois_num=rois_num) + output = paddle.vision.ops.roi_pool( + x=inputs, + boxes=rois, + boxes_num=rois_num, + output_size=output_size) output_np, = self.get_static_graph_result( feed={ 'inputs': inputs_np, @@ -326,15 +203,15 @@ class TestROIPool(LayerTest): 
with_lod=False) with self.dynamic_graph(): - inputs_dy = base.to_variable(inputs_np) - rois_dy = base.to_variable(rois_np) - rois_num_dy = base.to_variable(rois_num_np) - - output_dy, _ = ops.roi_pool( - input=inputs_dy, - rois=rois_dy, - output_size=output_size, - rois_num=rois_num_dy) + inputs_dy = paddle.to_tensor(inputs_np) + rois_dy = paddle.to_tensor(rois_np) + rois_num_dy = paddle.to_tensor(rois_num_np) + + output_dy = paddle.vision.ops.roi_pool( + x=inputs_dy, + boxes=rois_dy, + boxes_num=rois_num_dy, + output_size=output_size) output_dy_np = output_dy.numpy() self.assertTrue(np.array_equal(output_np, output_dy_np)) @@ -347,7 +224,7 @@ class TestROIPool(LayerTest): name='data_error', shape=[10, 4], dtype='int32', lod_level=1) self.assertRaises( TypeError, - ops.roi_pool, + paddle.vision.ops.roi_pool, input=inputs, rois=rois, output_size=(7, 7)) @@ -355,134 +232,6 @@ class TestROIPool(LayerTest): paddle.disable_static() -class TestIoUSimilarity(LayerTest): - def test_iou_similarity(self): - b, c, h, w = 2, 12, 20, 20 - inputs_np = np.random.rand(b, c, h, w).astype('float32') - output_size = (7, 7) - x_np = make_rois(h, w, [20], output_size) - y_np = make_rois(h, w, [10], output_size) - with self.static_graph(): - x = paddle.static.data(name='x', shape=[20, 4], dtype='float32') - y = paddle.static.data(name='y', shape=[10, 4], dtype='float32') - - iou = ops.iou_similarity(x=x, y=y) - iou_np, = self.get_static_graph_result( - feed={ - 'x': x_np, - 'y': y_np, - }, fetch_list=[iou], with_lod=False) - - with self.dynamic_graph(): - x_dy = base.to_variable(x_np) - y_dy = base.to_variable(y_np) - - iou_dy = ops.iou_similarity(x=x_dy, y=y_dy) - iou_dy_np = iou_dy.numpy() - - self.assertTrue(np.array_equal(iou_np, iou_dy_np)) - - -class TestBipartiteMatch(LayerTest): - def test_bipartite_match(self): - distance = np.random.random((20, 10)).astype('float32') - with self.static_graph(): - x = paddle.static.data(name='x', shape=[20, 10], dtype='float32') - - match_indices, match_dist = ops.bipartite_match( - x, match_type='per_prediction', dist_threshold=0.5) - match_indices_np, match_dist_np = self.get_static_graph_result( - feed={'x': distance, }, - fetch_list=[match_indices, match_dist], - with_lod=False) - - with self.dynamic_graph(): - x_dy = base.to_variable(distance) - - match_indices_dy, match_dist_dy = ops.bipartite_match( - x_dy, match_type='per_prediction', dist_threshold=0.5) - match_indices_dy_np = match_indices_dy.numpy() - match_dist_dy_np = match_dist_dy.numpy() - - self.assertTrue(np.array_equal(match_indices_np, match_indices_dy_np)) - self.assertTrue(np.array_equal(match_dist_np, match_dist_dy_np)) - - -class TestYoloBox(LayerTest): - def test_yolo_box(self): - - # x shape [N C H W], C=K * (5 + class_num), class_num=10, K=2 - np_x = np.random.random([1, 30, 7, 7]).astype('float32') - np_origin_shape = np.array([[608, 608]], dtype='int32') - class_num = 10 - conf_thresh = 0.01 - downsample_ratio = 32 - scale_x_y = 1.2 - - # static - with self.static_graph(): - # x shape [N C H W], C=K * (5 + class_num), class_num=10, K=2 - x = paddle.static.data( - name='x', shape=[1, 30, 7, 7], dtype='float32') - origin_shape = paddle.static.data( - name='origin_shape', shape=[1, 2], dtype='int32') - - boxes, scores = ops.yolo_box( - x, - origin_shape, [10, 13, 30, 13], - class_num, - conf_thresh, - downsample_ratio, - scale_x_y=scale_x_y) - - boxes_np, scores_np = self.get_static_graph_result( - feed={ - 'x': np_x, - 'origin_shape': np_origin_shape, - }, - fetch_list=[boxes, scores], - 
with_lod=False) - - # dygraph - with self.dynamic_graph(): - x_dy = fluid.layers.assign(np_x) - origin_shape_dy = fluid.layers.assign(np_origin_shape) - - boxes_dy, scores_dy = ops.yolo_box( - x_dy, - origin_shape_dy, [10, 13, 30, 13], - 10, - 0.01, - 32, - scale_x_y=scale_x_y) - - boxes_dy_np = boxes_dy.numpy() - scores_dy_np = scores_dy.numpy() - - self.assertTrue(np.array_equal(boxes_np, boxes_dy_np)) - self.assertTrue(np.array_equal(scores_np, scores_dy_np)) - - def test_yolo_box_error(self): - with self.static_graph(): - # x shape [N C H W], C=K * (5 + class_num), class_num=10, K=2 - x = paddle.static.data( - name='x', shape=[1, 30, 7, 7], dtype='float32') - origin_shape = paddle.static.data( - name='origin_shape', shape=[1, 2], dtype='int32') - - self.assertRaises( - TypeError, - ops.yolo_box, - x, - origin_shape, [10, 13, 30, 13], - 10.123, - 0.01, - 32, - scale_x_y=1.2) - - paddle.disable_static() - - class TestPriorBox(LayerTest): def test_prior_box(self): input_np = np.random.rand(2, 10, 32, 32).astype('float32') @@ -509,8 +258,8 @@ class TestPriorBox(LayerTest): with_lod=False) with self.dynamic_graph(): - inputs_dy = base.to_variable(input_np) - image_dy = base.to_variable(image_np) + inputs_dy = paddle.to_tensor(input_np) + image_dy = paddle.to_tensor(image_np) box_dy, var_dy = ops.prior_box( input=inputs_dy, @@ -582,9 +331,9 @@ class TestMulticlassNms(LayerTest): nms_rois_num_np = np.array(nms_rois_num_np) with self.dynamic_graph(): - boxes_dy = base.to_variable(boxes_np) - scores_dy = base.to_variable(scores_np) - rois_num_dy = base.to_variable(rois_num_np) + boxes_dy = paddle.to_tensor(boxes_np) + scores_dy = paddle.to_tensor(scores_np) + rois_num_dy = paddle.to_tensor(rois_num_np) out_dy, index_dy, nms_rois_num_dy = ops.multiclass_nms( bboxes=boxes_dy, @@ -666,8 +415,8 @@ class TestMatrixNMS(LayerTest): with_lod=True) with self.dynamic_graph(): - boxes_dy = base.to_variable(boxes_np) - scores_dy = base.to_variable(scores_np) + boxes_dy = paddle.to_tensor(boxes_np) + scores_dy = paddle.to_tensor(scores_np) out_dy, index_dy, _ = ops.matrix_nms( bboxes=boxes_dy, @@ -737,9 +486,9 @@ class TestBoxCoder(LayerTest): # dygraph with self.dynamic_graph(): - prior_box_dy = base.to_variable(prior_box_np) - prior_box_var_dy = base.to_variable(prior_box_var_np) - target_box_dy = base.to_variable(target_box_np) + prior_box_dy = paddle.to_tensor(prior_box_np) + prior_box_var_dy = paddle.to_tensor(prior_box_var_np) + target_box_dy = paddle.to_tensor(target_box_np) boxes_dy = ops.box_coder( prior_box=prior_box_dy, @@ -808,11 +557,11 @@ class TestGenerateProposals(LayerTest): with_lod=True) with self.dynamic_graph(): - scores_dy = base.to_variable(scores_np) - bbox_deltas_dy = base.to_variable(bbox_deltas_np) - im_shape_dy = base.to_variable(im_shape_np) - anchors_dy = base.to_variable(anchors_np) - variances_dy = base.to_variable(variances_np) + scores_dy = paddle.to_tensor(scores_np) + bbox_deltas_dy = paddle.to_tensor(bbox_deltas_np) + im_shape_dy = paddle.to_tensor(im_shape_np) + anchors_dy = paddle.to_tensor(anchors_np) + variances_dy = paddle.to_tensor(variances_np) rois, roi_probs, rois_num = ops.generate_proposals( scores_dy, bbox_deltas_dy, diff --git a/ppdet/modeling/tests/test_yolov3_loss.py b/ppdet/modeling/tests/test_yolov3_loss.py index cec8bc940a4abb852d0b210b76ffe4386b8fc12e..433b3cf2cb95c2a1dd27da30ef9b99f3148e004f 100644 --- a/ppdet/modeling/tests/test_yolov3_loss.py +++ b/ppdet/modeling/tests/test_yolov3_loss.py @@ -17,7 +17,7 @@ from __future__ import division import 
unittest import paddle -from paddle import fluid +import paddle.nn.functional as F # add python path of PadleDetection to sys.path import os import sys @@ -27,19 +27,9 @@ if parent_path not in sys.path: from ppdet.modeling.losses import YOLOv3Loss from ppdet.data.transform.op_helper import jaccard_overlap +from ppdet.modeling.bbox_utils import iou_similarity import numpy as np - - -def _split_ioup(output, an_num, num_classes): - """ - Split output feature map to output, predicted iou - along channel dimension - """ - ioup = fluid.layers.slice(output, axes=[1], starts=[0], ends=[an_num]) - ioup = fluid.layers.sigmoid(ioup) - oriout = fluid.layers.slice( - output, axes=[1], starts=[an_num], ends=[an_num * (num_classes + 6)]) - return (ioup, oriout) +np.random.seed(0) def _split_output(output, an_num, num_classes): @@ -47,31 +37,31 @@ def _split_output(output, an_num, num_classes): Split output feature map to x, y, w, h, objectness, classification along channel dimension """ - x = fluid.layers.strided_slice( + x = paddle.strided_slice( output, axes=[1], starts=[0], ends=[output.shape[1]], strides=[5 + num_classes]) - y = fluid.layers.strided_slice( + y = paddle.strided_slice( output, axes=[1], starts=[1], ends=[output.shape[1]], strides=[5 + num_classes]) - w = fluid.layers.strided_slice( + w = paddle.strided_slice( output, axes=[1], starts=[2], ends=[output.shape[1]], strides=[5 + num_classes]) - h = fluid.layers.strided_slice( + h = paddle.strided_slice( output, axes=[1], starts=[3], ends=[output.shape[1]], strides=[5 + num_classes]) - obj = fluid.layers.strided_slice( + obj = paddle.strided_slice( output, axes=[1], starts=[4], @@ -81,14 +71,12 @@ def _split_output(output, an_num, num_classes): stride = output.shape[1] // an_num for m in range(an_num): clss.append( - fluid.layers.slice( + paddle.slice( output, axes=[1], starts=[stride * m + 5], ends=[stride * m + 5 + num_classes])) - cls = fluid.layers.transpose( - fluid.layers.stack( - clss, axis=1), perm=[0, 1, 3, 4, 2]) + cls = paddle.transpose(paddle.stack(clss, axis=1), perm=[0, 1, 3, 4, 2]) return (x, y, w, h, obj, cls) @@ -104,7 +92,7 @@ def _split_target(target): th = target[:, :, 3, :, :] tscale = target[:, :, 4, :, :] tobj = target[:, :, 5, :, :] - tcls = fluid.layers.transpose(target[:, :, 6:, :, :], perm=[0, 1, 3, 4, 2]) + tcls = paddle.transpose(target[:, :, 6:, :, :], perm=[0, 1, 3, 4, 2]) tcls.stop_gradient = True return (tx, ty, tw, th, tscale, tobj, tcls) @@ -115,9 +103,9 @@ def _calc_obj_loss(output, obj, tobj, gt_box, batch_size, anchors, num_classes, # objectness loss will be ignored, process as follows: # 1. get pred bbox, which is same with YOLOv3 infer mode, use yolo_box here # NOTE: img_size is set as 1.0 to get noramlized pred bbox - bbox, prob = fluid.layers.yolo_box( + bbox, prob = paddle.vision.ops.yolo_box( x=output, - img_size=fluid.layers.ones( + img_size=paddle.ones( shape=[batch_size, 2], dtype="int32"), anchors=anchors, class_num=num_classes, @@ -128,8 +116,8 @@ def _calc_obj_loss(output, obj, tobj, gt_box, batch_size, anchors, num_classes, # 2. 
split pred bbox and gt bbox by sample, calculate IoU between pred bbox # and gt bbox in each sample if batch_size > 1: - preds = fluid.layers.split(bbox, batch_size, dim=0) - gts = fluid.layers.split(gt_box, batch_size, dim=0) + preds = paddle.split(bbox, batch_size, axis=0) + gts = paddle.split(gt_box, batch_size, axis=0) else: preds = [bbox] gts = [gt_box] @@ -142,7 +130,7 @@ def _calc_obj_loss(output, obj, tobj, gt_box, batch_size, anchors, num_classes, y = box[:, 1] w = box[:, 2] h = box[:, 3] - return fluid.layers.stack( + return paddle.stack( [ x - w / 2., y - h / 2., @@ -150,28 +138,29 @@ def _calc_obj_loss(output, obj, tobj, gt_box, batch_size, anchors, num_classes, y + h / 2., ], axis=1) - pred = fluid.layers.squeeze(pred, axes=[0]) - gt = box_xywh2xyxy(fluid.layers.squeeze(gt, axes=[0])) - ious.append(fluid.layers.iou_similarity(pred, gt)) - iou = fluid.layers.stack(ious, axis=0) + pred = paddle.squeeze(pred, axis=[0]) + gt = box_xywh2xyxy(paddle.squeeze(gt, axis=[0])) + ious.append(iou_similarity(pred, gt)) + iou = paddle.stack(ious, axis=0) # 3. Get iou_mask by IoU between gt bbox and prediction bbox, # Get obj_mask by tobj(holds gt_score), calculate objectness loss - max_iou = fluid.layers.reduce_max(iou, dim=-1) - iou_mask = fluid.layers.cast(max_iou <= ignore_thresh, dtype="float32") - output_shape = fluid.layers.shape(output) + max_iou = paddle.max(iou, axis=-1) + iou_mask = paddle.cast(max_iou <= ignore_thresh, dtype="float32") + output_shape = paddle.shape(output) an_num = len(anchors) // 2 - iou_mask = fluid.layers.reshape(iou_mask, (-1, an_num, output_shape[2], - output_shape[3])) + iou_mask = paddle.reshape(iou_mask, (-1, an_num, output_shape[2], + output_shape[3])) iou_mask.stop_gradient = True # NOTE: tobj holds gt_score, obj_mask holds object existence mask - obj_mask = fluid.layers.cast(tobj > 0., dtype="float32") + obj_mask = paddle.cast(tobj > 0., dtype="float32") obj_mask.stop_gradient = True # For positive objectness grids, objectness loss should be calculated # For negative objectness grids, objectness loss is calculated only iou_mask == 1.0 - loss_obj = fluid.layers.sigmoid_cross_entropy_with_logits(obj, obj_mask) - loss_obj_pos = fluid.layers.reduce_sum(loss_obj * tobj, dim=[1, 2, 3]) - loss_obj_neg = fluid.layers.reduce_sum( - loss_obj * (1.0 - obj_mask) * iou_mask, dim=[1, 2, 3]) + obj_sigmoid = F.sigmoid(obj) + loss_obj = F.binary_cross_entropy(obj_sigmoid, obj_mask, reduction='none') + loss_obj_pos = paddle.sum(loss_obj * tobj, axis=[1, 2, 3]) + loss_obj_neg = paddle.sum(loss_obj * (1.0 - obj_mask) * iou_mask, + axis=[1, 2, 3]) return loss_obj_pos, loss_obj_neg @@ -194,45 +183,48 @@ def fine_grained_loss(output, scale_x_y = scale_x_y if (abs(scale_x_y - 1.0) < eps): - loss_x = fluid.layers.sigmoid_cross_entropy_with_logits( - x, tx) * tscale_tobj - loss_x = fluid.layers.reduce_sum(loss_x, dim=[1, 2, 3]) - loss_y = fluid.layers.sigmoid_cross_entropy_with_logits( - y, ty) * tscale_tobj - loss_y = fluid.layers.reduce_sum(loss_y, dim=[1, 2, 3]) + x = F.sigmoid(x) + y = F.sigmoid(y) + loss_x = F.binary_cross_entropy(x, tx, reduction='none') * tscale_tobj + loss_x = paddle.sum(loss_x, axis=[1, 2, 3]) + loss_y = F.binary_cross_entropy(y, ty, reduction='none') * tscale_tobj + loss_y = paddle.sum(loss_y, axis=[1, 2, 3]) else: - dx = scale_x_y * fluid.layers.sigmoid(x) - 0.5 * (scale_x_y - 1.0) - dy = scale_x_y * fluid.layers.sigmoid(y) - 0.5 * (scale_x_y - 1.0) - loss_x = fluid.layers.abs(dx - tx) * tscale_tobj - loss_x = fluid.layers.reduce_sum(loss_x, dim=[1, 
2, 3]) - loss_y = fluid.layers.abs(dy - ty) * tscale_tobj - loss_y = fluid.layers.reduce_sum(loss_y, dim=[1, 2, 3]) + dx = scale_x_y * F.sigmoid(x) - 0.5 * (scale_x_y - 1.0) + dy = scale_x_y * F.sigmoid(y) - 0.5 * (scale_x_y - 1.0) + loss_x = paddle.abs(dx - tx) * tscale_tobj + loss_x = paddle.sum(loss_x, axis=[1, 2, 3]) + loss_y = paddle.abs(dy - ty) * tscale_tobj + loss_y = paddle.sum(loss_y, axis=[1, 2, 3]) # NOTE: we refined loss function of (w, h) as L1Loss - loss_w = fluid.layers.abs(w - tw) * tscale_tobj - loss_w = fluid.layers.reduce_sum(loss_w, dim=[1, 2, 3]) - loss_h = fluid.layers.abs(h - th) * tscale_tobj - loss_h = fluid.layers.reduce_sum(loss_h, dim=[1, 2, 3]) + loss_w = paddle.abs(w - tw) * tscale_tobj + loss_w = paddle.sum(loss_w, axis=[1, 2, 3]) + loss_h = paddle.abs(h - th) * tscale_tobj + loss_h = paddle.sum(loss_h, axis=[1, 2, 3]) loss_obj_pos, loss_obj_neg = _calc_obj_loss( output, obj, tobj, gt_box, batch_size, anchors, num_classes, downsample, ignore_thresh, scale_x_y) - loss_cls = fluid.layers.sigmoid_cross_entropy_with_logits(cls, tcls) - loss_cls = fluid.layers.elementwise_mul(loss_cls, tobj, axis=0) - loss_cls = fluid.layers.reduce_sum(loss_cls, dim=[1, 2, 3, 4]) + cls = F.sigmoid(cls) + loss_cls = F.binary_cross_entropy(cls, tcls, reduction='none') + tobj = paddle.unsqueeze(tobj, axis=-1) + + loss_cls = paddle.multiply(loss_cls, tobj) + loss_cls = paddle.sum(loss_cls, axis=[1, 2, 3, 4]) - loss_xys = fluid.layers.reduce_mean(loss_x + loss_y) - loss_whs = fluid.layers.reduce_mean(loss_w + loss_h) - loss_objs = fluid.layers.reduce_mean(loss_obj_pos + loss_obj_neg) - loss_clss = fluid.layers.reduce_mean(loss_cls) + loss_xys = paddle.mean(loss_x + loss_y) + loss_whs = paddle.mean(loss_w + loss_h) + loss_objs = paddle.mean(loss_obj_pos + loss_obj_neg) + loss_clss = paddle.mean(loss_cls) losses_all = { - "loss_xy": fluid.layers.sum(loss_xys), - "loss_wh": fluid.layers.sum(loss_whs), - "loss_loc": fluid.layers.sum(loss_xys) + fluid.layers.sum(loss_whs), - "loss_obj": fluid.layers.sum(loss_objs), - "loss_cls": fluid.layers.sum(loss_clss), + "loss_xy": paddle.sum(loss_xys), + "loss_wh": paddle.sum(loss_whs), + "loss_loc": paddle.sum(loss_xys) + paddle.sum(loss_whs), + "loss_obj": paddle.sum(loss_objs), + "loss_cls": paddle.sum(loss_clss), } return losses_all, x, y, tx, ty diff --git a/static/ppdet/utils/__init__.py b/ppdet/optimizer/__init__.py similarity index 82% rename from static/ppdet/utils/__init__.py rename to ppdet/optimizer/__init__.py index d0c32e26092f6ea25771279418582a24ea449ab2..61737923ef3dfe25eed969e1f5807e61c9094758 100644 --- a/static/ppdet/utils/__init__.py +++ b/ppdet/optimizer/__init__.py @@ -1,4 +1,4 @@ -# Copyright (c) 2019 PaddlePaddle Authors. All Rights Reserved. +# Copyright (c) 2022 PaddlePaddle Authors. All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. @@ -11,3 +11,6 @@ # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. + +from .optimizer import * +from .ema import ModelEMA diff --git a/ppdet/optimizer/adamw.py b/ppdet/optimizer/adamw.py new file mode 100644 index 0000000000000000000000000000000000000000..821135da02a9368de407593877987101228daf35 --- /dev/null +++ b/ppdet/optimizer/adamw.py @@ -0,0 +1,244 @@ +# Copyright (c) 2022 PaddlePaddle Authors. All Rights Reserved. 
+# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +from __future__ import absolute_import +from __future__ import division +from __future__ import print_function + +from paddle.optimizer import AdamW +from functools import partial +import re + + +def layerwise_lr_decay(decay_rate, name_dict, n_layers, param): + """ + Args: + decay_rate (float): + The layer-wise decay ratio. + name_dict (dict): + The keys of name_dict are the dynamic parameter names of the + model, while the values are the corresponding static names. + Use model.named_parameters() to get name_dict. + n_layers (int): + Total number of layers in the transformer encoder. + """ + ratio = 1.0 + static_name = name_dict[param.name] + if 'blocks.' in static_name or 'layers.' in static_name: + idx_1 = static_name.find('blocks.') + idx_2 = static_name.find('layers.') + assert any([x >= 0 for x in [idx_1, idx_2]]), \ + 'expected "blocks." or "layers." in the static parameter name' + idx = idx_1 if idx_1 >= 0 else idx_2 + # idx = re.findall('[blocks|layers]\.(\d+)\.', static_name)[0] + + layer = int(static_name[idx:].split('.')[1]) + ratio = decay_rate**(n_layers - layer) + + elif 'cls_token' in static_name or 'patch_embed' in static_name: + ratio = decay_rate**(n_layers + 1) + + param.optimize_attr['learning_rate'] *= ratio + + +class AdamWDL(AdamW): + r""" + The AdamWDL optimizer is implemented based on the AdamW Optimization with dynamic lr setting. + Generally it is used for transformer models. + + We use "layerwise_lr_decay" as the default dynamic lr setting method of AdamWDL. + “Layer-wise decay” means exponentially decaying the learning rates of individual + layers in a top-down manner. For example, suppose the 24-th layer uses a learning + rate l, and the Layer-wise decay rate is α, then the learning rate of layer m + is lα^(24-m). See more details on: https://arxiv.org/abs/1906.08237. + + .. math:: + & t = t + 1 + + & moment\_1\_out = {\beta}_1 * moment\_1 + (1 - {\beta}_1) * grad + + & moment\_2\_out = {\beta}_2 * moment\_2 + (1 - {\beta}_2) * grad * grad + + & learning\_rate = learning\_rate * \frac{\sqrt{1 - {\beta}_2^t}}{1 - {\beta}_1^t} + + & param\_out = param - learning\_rate * (\frac{moment\_1}{\sqrt{moment\_2} + \epsilon} + \lambda * param) + + Args: + learning_rate (float|LRScheduler, optional): The learning rate used to update ``Parameter``. + It can be a float value or a LRScheduler. The default value is 0.001. + beta1 (float, optional): The exponential decay rate for the 1st moment estimates. + It should be a float number or a Tensor with shape [1] and data type as float32. + The default value is 0.9. + beta2 (float, optional): The exponential decay rate for the 2nd moment estimates. + It should be a float number or a Tensor with shape [1] and data type as float32. + The default value is 0.999. + epsilon (float, optional): A small float value for numerical stability. + It should be a float number or a Tensor with shape [1] and data type as float32. + The default value is 1e-08. + parameters (list|tuple, optional): List/Tuple of ``Tensor`` to update to minimize ``loss``.
\ + This parameter is required in dygraph mode. \ + The default value is None in static mode, at this time all parameters will be updated. + weight_decay (float, optional): The weight decay coefficient, it can be float or Tensor. The default value is 0.01. + apply_decay_param_fun (function|None, optional): If it is not None, + only tensors that make apply_decay_param_fun(Tensor.name)==True + will be updated. It only works when we want to specify tensors. + Default: None. + grad_clip (GradientClipBase, optional): Gradient clipping strategy, it's an instance of + some derived class of ``GradientClipBase`` . There are three clipping strategies + ( :ref:`api_fluid_clip_GradientClipByGlobalNorm` , :ref:`api_fluid_clip_GradientClipByNorm` , + :ref:`api_fluid_clip_GradientClipByValue` ). Default None, meaning there is no gradient clipping. + lazy_mode (bool, optional): The official Adam algorithm has two moving-average accumulators. + The accumulators are updated at every step. Every element of the two moving-average + is updated in both dense mode and sparse mode. If the size of parameter is very large, + then the update may be very slow. The lazy mode only updates the elements that have + gradients in the current mini-batch, so it will be much faster. But this mode has + different semantics from the original Adam algorithm and may lead to different results. + The default value is False. + multi_precision (bool, optional): Whether to use multi-precision during weight updating. Default is False. + layerwise_decay (float, optional): The layer-wise decay ratio. Defaults to 1.0. + n_layers (int, optional): The total number of encoder layers. Defaults to 12. + set_param_lr_func (function|None, optional): If it is not None, set_param_lr_func() will set the parameter + learning rate before the Adam operator executes. Defaults to :ref:`layerwise_lr_decay`. + name_dict (dict, optional): The keys of name_dict are the dynamic parameter names of the model, + while the values are the corresponding static names. Use model.named_parameters() to get name_dict. + name (str, optional): Normally there is no need for users to set this property. + For more information, please refer to :ref:`api_guide_Name`. + The default value is None. + + Examples: + .. 
code-block:: python + + import paddle + from ppdet.optimizer.adamw import AdamWDL + def simple_lr_setting(decay_rate, name_dict, n_layers, param): + ratio = 1.0 + static_name = name_dict[param.name] + if "weight" in static_name: + ratio = decay_rate**0.5 + param.optimize_attr["learning_rate"] *= ratio + + linear = paddle.nn.Linear(10, 10) + + name_dict = dict() + for n, p in linear.named_parameters(): + name_dict[p.name] = n + + inp = paddle.rand([10,10], dtype="float32") + out = linear(inp) + loss = paddle.mean(out) + + adamwdl = AdamWDL( + learning_rate=1e-4, + parameters=linear.parameters(), + set_param_lr_func=simple_lr_setting, + layerwise_decay=0.8, + name_dict=name_dict) + + loss.backward() + adamwdl.step() + adamwdl.clear_grad() + """ + + def __init__(self, + learning_rate=0.001, + beta1=0.9, + beta2=0.999, + epsilon=1e-8, + parameters=None, + weight_decay=0.01, + apply_decay_param_fun=None, + grad_clip=None, + lazy_mode=False, + multi_precision=False, + layerwise_decay=1.0, + n_layers=12, + set_param_lr_func=None, + name_dict=None, + name=None): + if not isinstance(layerwise_decay, float): + raise TypeError("layerwise_decay should be a float.") + self.layerwise_decay = layerwise_decay + self.n_layers = n_layers + self.set_param_lr_func = partial( + set_param_lr_func, layerwise_decay, name_dict, + n_layers) if set_param_lr_func is not None else set_param_lr_func + super(AdamWDL, self).__init__( + learning_rate=learning_rate, + parameters=parameters, + beta1=beta1, + beta2=beta2, + epsilon=epsilon, + grad_clip=grad_clip, + name=name, + apply_decay_param_fun=apply_decay_param_fun, + weight_decay=weight_decay, + lazy_mode=lazy_mode, + multi_precision=multi_precision) + + def _append_optimize_op(self, block, param_and_grad): + if self.set_param_lr_func is None: + return super(AdamWDL, self)._append_optimize_op(block, + param_and_grad) + + self._append_decoupled_weight_decay(block, param_and_grad) + prev_lr = param_and_grad[0].optimize_attr["learning_rate"] + self.set_param_lr_func(param_and_grad[0]) + # execute the Adam op + res = super(AdamW, self)._append_optimize_op(block, param_and_grad) + param_and_grad[0].optimize_attr["learning_rate"] = prev_lr + return res + + +def build_adamwdl(model, + lr=1e-4, + weight_decay=0.05, + betas=(0.9, 0.999), + layer_decay=0.65, + num_layers=None, + filter_bias_and_bn=True, + skip_decay_names=None, + set_param_lr_func='layerwise_lr_decay'): + + decay_dict = None + if skip_decay_names and filter_bias_and_bn: + decay_dict = { + param.name: not (len(param.shape) == 1 or name.endswith('.bias') or + any([_n in name for _n in skip_decay_names])) + for name, param in model.named_parameters() + } + parameters = [p for p in model.parameters()] + + else: + parameters = model.parameters() + + opt_args = dict( + parameters=parameters, learning_rate=lr, weight_decay=weight_decay) + + if decay_dict is not None: + opt_args['apply_decay_param_fun'] = lambda n: decay_dict[n] + + if isinstance(set_param_lr_func, str): + set_param_lr_func = eval(set_param_lr_func) + opt_args['set_param_lr_func'] = set_param_lr_func + + opt_args['beta1'] = betas[0] + opt_args['beta2'] = betas[1] + + opt_args['layerwise_decay'] = layer_decay + name_dict = {p.name: n for n, p in model.named_parameters()} + + opt_args['name_dict'] = name_dict + opt_args['n_layers'] = num_layers + + optimizer = AdamWDL(**opt_args) + + return optimizer diff --git a/ppdet/optimizer/ema.py b/ppdet/optimizer/ema.py new file mode 100644 index 0000000000000000000000000000000000000000..bd8cb825ca0ecd33ca174acea7adb7ad37ba6185 --- 
/dev/null +++ b/ppdet/optimizer/ema.py @@ -0,0 +1,110 @@ +# Copyright (c) 2022 PaddlePaddle Authors. All Rights Reserved. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +from __future__ import absolute_import +from __future__ import division +from __future__ import print_function + +import math +import paddle +import weakref + + +class ModelEMA(object): + """ + Exponentially Weighted Average for Deep Neural Networks + Args: + model (nn.Layer): The detector model. + decay (float): The decay used for updating the EMA parameters. + EMA parameters are updated with the formula: + `ema_param = decay * ema_param + (1 - decay) * cur_param`. + Default is 0.9998. + ema_decay_type (str): type in ['threshold', 'normal', 'exponential'], + 'threshold' as default. + cycle_epoch (int): The interval, in epochs, at which ema_param and + step are reset. Default is -1, which means no reset. It adds a + regularizing effect to EMA; the value is set empirically and is + effective when the total number of training epochs is large. + """ + + def __init__(self, + model, + decay=0.9998, + ema_decay_type='threshold', + cycle_epoch=-1): + self.step = 0 + self.epoch = 0 + self.decay = decay + self.state_dict = dict() + for k, v in model.state_dict().items(): + self.state_dict[k] = paddle.zeros_like(v) + self.ema_decay_type = ema_decay_type + self.cycle_epoch = cycle_epoch + + self._model_state = { + k: weakref.ref(p) + for k, p in model.state_dict().items() + } + + def reset(self): + self.step = 0 + self.epoch = 0 + for k, v in self.state_dict.items(): + self.state_dict[k] = paddle.zeros_like(v) + + def resume(self, state_dict, step=0): + for k, v in state_dict.items(): + if k in self.state_dict: + if self.state_dict[k].dtype == v.dtype: + self.state_dict[k] = v + else: + self.state_dict[k] = v.astype(self.state_dict[k].dtype) + self.step = step + + def update(self, model=None): + if self.ema_decay_type == 'threshold': + decay = min(self.decay, (1 + self.step) / (10 + self.step)) + elif self.ema_decay_type == 'exponential': + decay = self.decay * (1 - math.exp(-(self.step + 1) / 2000)) + else: + decay = self.decay + self._decay = decay + + if model is not None: + model_dict = model.state_dict() + else: + model_dict = {k: p() for k, p in self._model_state.items()} + assert all( + [v is not None for _, v in model_dict.items()] + ), 'model weights were garbage-collected; keep the model alive while using ModelEMA.'
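The two dynamic decay schedules in `update()` ramp the effective decay up from near zero, so early EMA steps weight the current parameters heavily. A minimal sketch, assuming only the formulas visible above:

```python
import math

# Base decay as in the ModelEMA constructor default.
BASE_DECAY = 0.9998

def threshold_decay(step, decay=BASE_DECAY):
    # starts at 0.1 for step 0 and approaches `decay` as step grows
    return min(decay, (1 + step) / (10 + step))

def exponential_decay(step, decay=BASE_DECAY):
    # smooth ramp toward `decay` with a time constant of 2000 steps
    return decay * (1 - math.exp(-(step + 1) / 2000))

for step in (0, 100, 1000, 10000):
    print(step, round(threshold_decay(step), 6),
          round(exponential_decay(step), 6))
```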
+ + for k, v in self.state_dict.items(): + v = decay * v + (1 - decay) * model_dict[k] + v.stop_gradient = True + self.state_dict[k] = v + self.step += 1 + + def apply(self): + if self.step == 0: + return self.state_dict + state_dict = dict() + for k, v in self.state_dict.items(): + if self.ema_decay_type != 'exponential': + v = v / (1 - self._decay**self.step) + v.stop_gradient = True + state_dict[k] = v + self.epoch += 1 + if self.cycle_epoch > 0 and self.epoch == self.cycle_epoch: + self.reset() + + return state_dict diff --git a/ppdet/optimizer.py b/ppdet/optimizer/optimizer.py similarity index 78% rename from ppdet/optimizer.py rename to ppdet/optimizer/optimizer.py index 1305b76fed48c6acd3967928ffc82b74ef5a881d..e8a0dd8c880699044a7af52a314b33bff27c683c 100644 --- a/ppdet/optimizer.py +++ b/ppdet/optimizer/optimizer.py @@ -16,8 +16,8 @@ from __future__ import absolute_import from __future__ import division from __future__ import print_function +import sys import math -import weakref import paddle import paddle.nn as nn @@ -25,6 +25,9 @@ import paddle.optimizer as optimizer import paddle.regularizer as regularizer from ppdet.core.workspace import register, serializable +import copy + +from .adamw import AdamWDL, build_adamwdl __all__ = ['LearningRate', 'OptimizerBuilder'] @@ -209,6 +212,33 @@ class BurninWarmup(object): return boundary, value +@serializable +class ExpWarmup(object): + """ + Warm up learning rate in exponential mode + Args: + steps (int): warm up steps. + epochs (int|None): use epochs as warm up steps, the priority + of `epochs` is higher than `steps`. Default: None. + """ + + def __init__(self, steps=5, epochs=None): + super(ExpWarmup, self).__init__() + self.steps = steps + self.epochs = epochs + + def __call__(self, base_lr, step_per_epoch): + boundary = [] + value = [] + warmup_steps = self.epochs * step_per_epoch if self.epochs is not None else self.steps + for i in range(warmup_steps + 1): + factor = (i / float(warmup_steps))**2 + value.append(base_lr * factor) + if i > 0: + boundary.append(i) + return boundary, value + + @register class LearningRate(object): """ @@ -225,7 +255,18 @@ class LearningRate(object): schedulers=[PiecewiseDecay(), LinearWarmup()]): super(LearningRate, self).__init__() self.base_lr = base_lr - self.schedulers = schedulers + self.schedulers = [] + + schedulers = copy.deepcopy(schedulers) + for sched in schedulers: + if isinstance(sched, dict): + # support dict sched instantiate + module = sys.modules[__name__] + type = sched.pop("name") + scheduler = getattr(module, type)(**sched) + self.schedulers.append(scheduler) + else: + self.schedulers.append(sched) def __call__(self, step_per_epoch): assert len(self.schedulers) >= 1 @@ -278,8 +319,13 @@ class OptimizerBuilder(): optim_args = self.optimizer.copy() optim_type = optim_args['type'] del optim_args['type'] + + if optim_type == 'AdamWDL': + return build_adamwdl(model, lr=learning_rate, **optim_args) + if optim_type != 'AdamW': optim_args['weight_decay'] = regularization + op = getattr(optimizer, optim_type) if 'param_groups' in optim_args: @@ -295,7 +341,8 @@ class OptimizerBuilder(): _params = { n: p for n, p in model.named_parameters() - if any([k in n for k in group['params']]) + if any([k in n + for k in group['params']]) and p.trainable is True } _group = group.copy() _group.update({'params': list(_params.values())}) @@ -304,7 +351,8 @@ class OptimizerBuilder(): visited.extend(list(_params.keys())) ext_params = [ - p for n, p in model.named_parameters() if n not in visited + p for n, 
p in model.named_parameters() + if n not in visited and p.trainable is True ] if len(ext_params) < len(model.parameters()): @@ -314,91 +362,10 @@ class OptimizerBuilder(): raise RuntimeError else: - params = model.parameters() + _params = model.parameters() + params = [param for param in _params if param.trainable is True] return op(learning_rate=learning_rate, parameters=params, grad_clip=grad_clip, **optim_args) - - -class ModelEMA(object): - """ - Exponential Weighted Average for Deep Neutal Networks - Args: - model (nn.Layer): Detector of model. - decay (int): The decay used for updating ema parameter. - Ema's parameter are updated with the formula: - `ema_param = decay * ema_param + (1 - decay) * cur_param`. - Defaults is 0.9998. - use_thres_step (bool): Whether set decay by thres_step or not - cycle_epoch (int): The epoch of interval to reset ema_param and - step. Defaults is -1, which means not reset. Its function is to - add a regular effect to ema, which is set according to experience - and is effective when the total training epoch is large. - """ - - def __init__(self, - model, - decay=0.9998, - use_thres_step=False, - cycle_epoch=-1): - self.step = 0 - self.epoch = 0 - self.decay = decay - self.state_dict = dict() - for k, v in model.state_dict().items(): - self.state_dict[k] = paddle.zeros_like(v) - self.use_thres_step = use_thres_step - self.cycle_epoch = cycle_epoch - - self._model_state = { - k: weakref.ref(p) - for k, p in model.state_dict().items() - } - - def reset(self): - self.step = 0 - self.epoch = 0 - for k, v in self.state_dict.items(): - self.state_dict[k] = paddle.zeros_like(v) - - def resume(self, state_dict, step=0): - for k, v in state_dict.items(): - if k in self.state_dict: - self.state_dict[k] = v - self.step = step - - def update(self, model=None): - if self.use_thres_step: - decay = min(self.decay, (1 + self.step) / (10 + self.step)) - else: - decay = self.decay - self._decay = decay - - if model is not None: - model_dict = model.state_dict() - else: - model_dict = {k: p() for k, p in self._model_state.items()} - assert all( - [v is not None for _, v in model_dict.items()]), 'python gc.' 
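The `param_groups` branch of OptimizerBuilder above matches parameters to groups by name substrings and keeps only trainable ones; whatever no group claims falls into a default group. A self-contained sketch of that matching (the toy model and group spec are assumptions, not part of this diff):

```python
import paddle.nn as nn

model = nn.Sequential(nn.Linear(4, 4), nn.Linear(4, 2))
param_groups_cfg = [{'params': ['0.'], 'learning_rate': 0.1}]

visited, groups = [], []
for group in param_groups_cfg:
    # a parameter joins the group if any listed substring occurs in
    # its name (e.g. '0.weight'), and only trainable params are kept
    matched = {
        n: p
        for n, p in model.named_parameters()
        if any(k in n for k in group['params']) and p.trainable is True
    }
    _group = group.copy()
    _group.update({'params': list(matched.values())})
    groups.append(_group)
    visited.extend(matched.keys())

# parameters not claimed by any group form the default group
ext_params = [
    p for n, p in model.named_parameters()
    if n not in visited and p.trainable is True
]
groups.append({'params': ext_params})
print([len(g['params']) for g in groups])  # [2, 2]
```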
- - for k, v in self.state_dict.items(): - v = decay * v + (1 - decay) * model_dict[k] - v.stop_gradient = True - self.state_dict[k] = v - self.step += 1 - - def apply(self): - if self.step == 0: - return self.state_dict - state_dict = dict() - for k, v in self.state_dict.items(): - v = v / (1 - self._decay**self.step) - v.stop_gradient = True - state_dict[k] = v - self.epoch += 1 - if self.cycle_epoch > 0 and self.epoch == self.cycle_epoch: - self.reset() - - return state_dict diff --git a/ppdet/slim/__init__.py b/ppdet/slim/__init__.py index e71481d1c8fd61c646f4919ddb52c85020f11725..81ced2dd9f50534e1a3794bc2821d4313c5e91c2 100644 --- a/ppdet/slim/__init__.py +++ b/ppdet/slim/__init__.py @@ -35,16 +35,21 @@ def build_slim_model(cfg, slim_cfg, mode='train'): return cfg if slim_load_cfg['slim'] == 'Distill': - model = DistillModel(cfg, slim_cfg) + if "slim_method" in slim_load_cfg and slim_load_cfg[ + 'slim_method'] == "FGD": + model = FGDDistillModel(cfg, slim_cfg) + else: + model = DistillModel(cfg, slim_cfg) cfg['model'] = model + cfg['slim_type'] = cfg.slim elif slim_load_cfg['slim'] == 'OFA': load_config(slim_cfg) model = create(cfg.architecture) load_pretrain_weight(model, cfg.weights) slim = create(cfg.slim) - cfg['slim_type'] = cfg.slim - cfg['model'] = slim(model, model.state_dict()) cfg['slim'] = slim + cfg['model'] = slim(model, model.state_dict()) + cfg['slim_type'] = cfg.slim elif slim_load_cfg['slim'] == 'DistillPrune': if mode == 'train': model = DistillModel(cfg, slim_cfg) @@ -64,9 +69,9 @@ def build_slim_model(cfg, slim_cfg, mode='train'): load_config(slim_cfg) load_pretrain_weight(model, cfg.weights) slim = create(cfg.slim) - cfg['slim_type'] = cfg.slim - cfg['model'] = slim(model) cfg['slim'] = slim + cfg['model'] = slim(model) + cfg['slim_type'] = cfg.slim elif slim_load_cfg['slim'] == 'UnstructuredPruner': load_config(slim_cfg) slim = create(cfg.slim) @@ -81,7 +86,7 @@ def build_slim_model(cfg, slim_cfg, mode='train'): slim = create(cfg.slim) cfg['slim_type'] = cfg.slim # TODO: fix quant export model in framework. - if mode == 'test' and slim_load_cfg['slim'] == 'QAT': + if mode == 'test' and 'QAT' in slim_load_cfg['slim']: slim.quant_config['activation_preprocess_type'] = None cfg['model'] = slim(model) cfg['slim'] = slim diff --git a/ppdet/slim/distill.py b/ppdet/slim/distill.py index b808553dd0c0b6a8285b8090385ac6e1cc4b8e69..808713ffeb6cf2c7536309a5977b1b712b3b4320 100644 --- a/ppdet/slim/distill.py +++ b/ppdet/slim/distill.py @@ -19,6 +19,7 @@ from __future__ import print_function import paddle import paddle.nn as nn import paddle.nn.functional as F +from paddle import ParamAttr from ppdet.core.workspace import register, create, load_config from ppdet.modeling import ops @@ -63,6 +64,98 @@ class DistillModel(nn.Layer): return self.student_model(inputs) +class FGDDistillModel(nn.Layer): + """ + Build FGD distill model. + Args: + cfg: The student config. + slim_cfg: The teacher and distill config. 
+ """ + + def __init__(self, cfg, slim_cfg): + super(FGDDistillModel, self).__init__() + + self.is_inherit = True + # build student model before load slim config + self.student_model = create(cfg.architecture) + self.arch = cfg.architecture + stu_pretrain = cfg['pretrain_weights'] + slim_cfg = load_config(slim_cfg) + self.teacher_cfg = slim_cfg + self.loss_cfg = slim_cfg + tea_pretrain = cfg['pretrain_weights'] + + self.teacher_model = create(self.teacher_cfg.architecture) + self.teacher_model.eval() + + for param in self.teacher_model.parameters(): + param.trainable = False + + if 'pretrain_weights' in cfg and stu_pretrain: + if self.is_inherit and 'pretrain_weights' in self.teacher_cfg and self.teacher_cfg.pretrain_weights: + load_pretrain_weight(self.student_model, + self.teacher_cfg.pretrain_weights) + logger.debug( + "Inheriting! loading teacher weights to student model!") + + load_pretrain_weight(self.student_model, stu_pretrain) + + if 'pretrain_weights' in self.teacher_cfg and self.teacher_cfg.pretrain_weights: + load_pretrain_weight(self.teacher_model, + self.teacher_cfg.pretrain_weights) + + self.fgd_loss_dic = self.build_loss( + self.loss_cfg.distill_loss, + name_list=self.loss_cfg['distill_loss_name']) + + def build_loss(self, + cfg, + name_list=[ + 'neck_f_4', 'neck_f_3', 'neck_f_2', 'neck_f_1', + 'neck_f_0' + ]): + loss_func = dict() + for idx, k in enumerate(name_list): + loss_func[k] = create(cfg) + return loss_func + + def forward(self, inputs): + if self.training: + s_body_feats = self.student_model.backbone(inputs) + s_neck_feats = self.student_model.neck(s_body_feats) + + with paddle.no_grad(): + t_body_feats = self.teacher_model.backbone(inputs) + t_neck_feats = self.teacher_model.neck(t_body_feats) + + loss_dict = {} + for idx, k in enumerate(self.fgd_loss_dic): + loss_dict[k] = self.fgd_loss_dic[k](s_neck_feats[idx], + t_neck_feats[idx], inputs) + if self.arch == "RetinaNet": + loss = self.student_model.head(s_neck_feats, inputs) + elif self.arch == "PicoDet": + loss = self.student_model.get_loss() + else: + raise ValueError(f"Unsupported model {self.arch}") + for k in loss_dict: + loss['loss'] += loss_dict[k] + loss[k] = loss_dict[k] + return loss + else: + body_feats = self.student_model.backbone(inputs) + neck_feats = self.student_model.neck(body_feats) + head_outs = self.student_model.head(neck_feats) + if self.arch == "RetinaNet": + bbox, bbox_num = self.student_model.head.post_process( + head_outs, inputs['im_shape'], inputs['scale_factor']) + return {'bbox': bbox, 'bbox_num': bbox_num} + elif self.arch == "PicoDet": + return self.student_model.head.get_pred() + else: + raise ValueError(f"Unsupported model {self.arch}") + + @register class DistillYOLOv3Loss(nn.Layer): def __init__(self, weight=1000): @@ -107,3 +200,254 @@ class DistillYOLOv3Loss(nn.Layer): loss = (distill_reg_loss + distill_cls_loss + distill_obj_loss ) * self.weight return loss + + +def parameter_init(mode="kaiming", value=0.): + if mode == "kaiming": + weight_attr = paddle.nn.initializer.KaimingUniform() + elif mode == "constant": + weight_attr = paddle.nn.initializer.Constant(value=value) + else: + weight_attr = paddle.nn.initializer.KaimingUniform() + + weight_init = ParamAttr(initializer=weight_attr) + return weight_init + + +@register +class FGDFeatureLoss(nn.Layer): + """ + The code is reference from https://github.com/yzd-v/FGD/blob/master/mmdet/distillation/losses/fgd.py + Paddle version of `Focal and Global Knowledge Distillation for Detectors` + + Args: + student_channels(int): The 
number of channels in the student's FPN feature map. Defaults to 256. + teacher_channels(int): The number of channels in the teacher's FPN feature map. Defaults to 256. + temp (float, optional): The temperature coefficient. Defaults to 0.5. + alpha_fgd (float, optional): The weight of fg_loss. Defaults to 0.001 + beta_fgd (float, optional): The weight of bg_loss. Defaults to 0.0005 + gamma_fgd (float, optional): The weight of mask_loss. Defaults to 0.001 + lambda_fgd (float, optional): The weight of relation_loss. Defaults to 0.000005 + """ + + def __init__(self, + student_channels=256, + teacher_channels=256, + temp=0.5, + alpha_fgd=0.001, + beta_fgd=0.0005, + gamma_fgd=0.001, + lambda_fgd=0.000005): + super(FGDFeatureLoss, self).__init__() + self.temp = temp + self.alpha_fgd = alpha_fgd + self.beta_fgd = beta_fgd + self.gamma_fgd = gamma_fgd + self.lambda_fgd = lambda_fgd + + kaiming_init = parameter_init("kaiming") + zeros_init = parameter_init("constant", 0.0) + + if student_channels != teacher_channels: + self.align = nn.Conv2D( + student_channels, + teacher_channels, + kernel_size=1, + stride=1, + padding=0, + weight_attr=kaiming_init) + student_channels = teacher_channels + else: + self.align = None + + self.conv_mask_s = nn.Conv2D( + student_channels, 1, kernel_size=1, weight_attr=kaiming_init) + self.conv_mask_t = nn.Conv2D( + teacher_channels, 1, kernel_size=1, weight_attr=kaiming_init) + + self.stu_conv_block = nn.Sequential( + nn.Conv2D( + student_channels, + student_channels // 2, + kernel_size=1, + weight_attr=zeros_init), + nn.LayerNorm([student_channels // 2, 1, 1]), + nn.ReLU(), + nn.Conv2D( + student_channels // 2, + student_channels, + kernel_size=1, + weight_attr=zeros_init)) + self.tea_conv_block = nn.Sequential( + nn.Conv2D( + teacher_channels, + teacher_channels // 2, + kernel_size=1, + weight_attr=zeros_init), + nn.LayerNorm([teacher_channels // 2, 1, 1]), + nn.ReLU(), + nn.Conv2D( + teacher_channels // 2, + teacher_channels, + kernel_size=1, + weight_attr=zeros_init)) + + def spatial_channel_attention(self, x, t=0.5): + shape = paddle.shape(x) + N, C, H, W = shape + + _f = paddle.abs(x) + spatial_map = paddle.reshape( + paddle.mean( + _f, axis=1, keepdim=True) / t, [N, -1]) + spatial_map = F.softmax(spatial_map, axis=1, dtype="float32") * H * W + spatial_att = paddle.reshape(spatial_map, [N, H, W]) + + channel_map = paddle.mean( + paddle.mean( + _f, axis=2, keepdim=False), axis=2, keepdim=False) + channel_att = F.softmax(channel_map / t, axis=1, dtype="float32") * C + return [spatial_att, channel_att] + + def spatial_pool(self, x, mode="teacher"): + batch, channel, width, height = x.shape + x_copy = x + x_copy = paddle.reshape(x_copy, [batch, channel, height * width]) + x_copy = x_copy.unsqueeze(1) + if mode.lower() == "student": + context_mask = self.conv_mask_s(x) + else: + context_mask = self.conv_mask_t(x) + + context_mask = paddle.reshape(context_mask, [batch, 1, height * width]) + context_mask = F.softmax(context_mask, axis=2) + context_mask = context_mask.unsqueeze(-1) + context = paddle.matmul(x_copy, context_mask) + context = paddle.reshape(context, [batch, channel, 1, 1]) + + return context + + def mask_loss(self, stu_channel_att, tea_channel_att, stu_spatial_att, + tea_spatial_att): + def _func(a, b): + return paddle.sum(paddle.abs(a - b)) / len(a) + + mask_loss = _func(stu_channel_att, tea_channel_att) + _func( + stu_spatial_att, tea_spatial_att) + + return mask_loss + + def feature_loss(self, stu_feature, tea_feature, Mask_fg, Mask_bg, + tea_channel_att, 
tea_spatial_att): + + Mask_fg = Mask_fg.unsqueeze(axis=1) + Mask_bg = Mask_bg.unsqueeze(axis=1) + + tea_channel_att = tea_channel_att.unsqueeze(axis=-1) + tea_channel_att = tea_channel_att.unsqueeze(axis=-1) + + tea_spatial_att = tea_spatial_att.unsqueeze(axis=1) + + fea_t = paddle.multiply(tea_feature, paddle.sqrt(tea_spatial_att)) + fea_t = paddle.multiply(fea_t, paddle.sqrt(tea_channel_att)) + fg_fea_t = paddle.multiply(fea_t, paddle.sqrt(Mask_fg)) + bg_fea_t = paddle.multiply(fea_t, paddle.sqrt(Mask_bg)) + + fea_s = paddle.multiply(stu_feature, paddle.sqrt(tea_spatial_att)) + fea_s = paddle.multiply(fea_s, paddle.sqrt(tea_channel_att)) + fg_fea_s = paddle.multiply(fea_s, paddle.sqrt(Mask_fg)) + bg_fea_s = paddle.multiply(fea_s, paddle.sqrt(Mask_bg)) + + fg_loss = F.mse_loss(fg_fea_s, fg_fea_t, reduction="sum") / len(Mask_fg) + bg_loss = F.mse_loss(bg_fea_s, bg_fea_t, reduction="sum") / len(Mask_bg) + + return fg_loss, bg_loss + + def relation_loss(self, stu_feature, tea_feature): + context_s = self.spatial_pool(stu_feature, "student") + context_t = self.spatial_pool(tea_feature, "teacher") + + out_s = stu_feature + self.stu_conv_block(context_s) + out_t = tea_feature + self.tea_conv_block(context_t) + + rela_loss = F.mse_loss(out_s, out_t, reduction="sum") / len(out_s) + + return rela_loss + + def mask_value(self, mask, xl, xr, yl, yr, value): + mask[xl:xr, yl:yr] = paddle.maximum(mask[xl:xr, yl:yr], value) + return mask + + def forward(self, stu_feature, tea_feature, inputs): + """Forward function. + Args: + stu_feature(Tensor): Bs*C*H*W, student's feature map + tea_feature(Tensor): Bs*C*H*W, teacher's feature map + inputs: The inputs with gt bbox and input shape info. + """ + assert stu_feature.shape[-2:] == tea_feature.shape[-2:], \ + f'The shape of Student feature {stu_feature.shape} and Teacher feature {tea_feature.shape} should be the same.' + assert "gt_bbox" in inputs.keys() and "im_shape" in inputs.keys( + ), "ERROR! FGDFeatureLoss needs gt_bbox and im_shape as inputs."
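`feature_loss()` above is a masked, attention-weighted MSE: features are scaled by the square roots of the teacher's spatial and channel attention and of the foreground/background masks before the squared difference. A toy sketch with random tensors (the binary masks here are a simplification; the real masks are area-normalized per ground-truth box, as built in `forward()` below):

```python
import paddle
import paddle.nn.functional as F

N, C, H, W = 2, 8, 4, 4
stu = paddle.rand([N, C, H, W])        # student feature
tea = paddle.rand([N, C, H, W])        # teacher feature
spatial = paddle.rand([N, 1, H, W])    # teacher spatial attention
channel = paddle.rand([N, C, 1, 1])    # teacher channel attention
fg = paddle.randint(0, 2, [N, 1, H, W]).astype('float32')
bg = 1.0 - fg                          # toy binary fg/bg masks

def attn_weighted(feat, mask):
    # feature * sqrt(spatial att) * sqrt(channel att) * sqrt(mask)
    out = paddle.multiply(feat, paddle.sqrt(spatial))
    out = paddle.multiply(out, paddle.sqrt(channel))
    return paddle.multiply(out, paddle.sqrt(mask))

fg_loss = F.mse_loss(attn_weighted(stu, fg), attn_weighted(tea, fg),
                     reduction='sum') / N
bg_loss = F.mse_loss(attn_weighted(stu, bg), attn_weighted(tea, bg),
                     reduction='sum') / N
print(float(fg_loss), float(bg_loss))
```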
+ gt_bboxes = inputs['gt_bbox'] + ins_shape = [ + inputs['im_shape'][i] for i in range(inputs['im_shape'].shape[0]) + ] + + if self.align is not None: + stu_feature = self.align(stu_feature) + + N, C, H, W = stu_feature.shape + + tea_spatial_att, tea_channel_att = self.spatial_channel_attention( + tea_feature, self.temp) + stu_spatial_att, stu_channel_att = self.spatial_channel_attention( + stu_feature, self.temp) + + Mask_fg = paddle.zeros(tea_spatial_att.shape) + Mask_bg = paddle.ones_like(tea_spatial_att) + one_tmp = paddle.ones([*tea_spatial_att.shape[1:]]) + zero_tmp = paddle.zeros([*tea_spatial_att.shape[1:]]) + wmin, wmax, hmin, hmax, area = [], [], [], [], [] + + for i in range(N): + tmp_box = paddle.ones_like(gt_bboxes[i]) + tmp_box[:, 0] = gt_bboxes[i][:, 0] / ins_shape[i][1] * W + tmp_box[:, 2] = gt_bboxes[i][:, 2] / ins_shape[i][1] * W + tmp_box[:, 1] = gt_bboxes[i][:, 1] / ins_shape[i][0] * H + tmp_box[:, 3] = gt_bboxes[i][:, 3] / ins_shape[i][0] * H + + zero = paddle.zeros_like(tmp_box[:, 0], dtype="int32") + ones = paddle.ones_like(tmp_box[:, 2], dtype="int32") + wmin.append( + paddle.cast(paddle.floor(tmp_box[:, 0]), "int32").maximum(zero)) + wmax.append(paddle.cast(paddle.ceil(tmp_box[:, 2]), "int32")) + hmin.append( + paddle.cast(paddle.floor(tmp_box[:, 1]), "int32").maximum(zero)) + hmax.append(paddle.cast(paddle.ceil(tmp_box[:, 3]), "int32")) + + area_recip = 1.0 / ( + hmax[i].reshape([1, -1]) + 1 - hmin[i].reshape([1, -1])) / ( + wmax[i].reshape([1, -1]) + 1 - wmin[i].reshape([1, -1])) + + for j in range(len(gt_bboxes[i])): + Mask_fg[i] = self.mask_value(Mask_fg[i], hmin[i][j], + hmax[i][j] + 1, wmin[i][j], + wmax[i][j] + 1, area_recip[0][j]) + + Mask_bg[i] = paddle.where(Mask_fg[i] > zero_tmp, zero_tmp, one_tmp) + + if paddle.sum(Mask_bg[i]): + Mask_bg[i] /= paddle.sum(Mask_bg[i]) + + fg_loss, bg_loss = self.feature_loss(stu_feature, tea_feature, Mask_fg, + Mask_bg, tea_channel_att, + tea_spatial_att) + mask_loss = self.mask_loss(stu_channel_att, tea_channel_att, + stu_spatial_att, tea_spatial_att) + rela_loss = self.relation_loss(stu_feature, tea_feature) + + loss = self.alpha_fgd * fg_loss + self.beta_fgd * bg_loss \ + + self.gamma_fgd * mask_loss + self.lambda_fgd * rela_loss + + return loss diff --git a/ppdet/slim/prune.py b/ppdet/slim/prune.py index 70d3de3692707b132ff398babf5a795a5a9e81ba..28ffb7588d1e596e5883072b3bd2b5e6ba80ed7f 100644 --- a/ppdet/slim/prune.py +++ b/ppdet/slim/prune.py @@ -83,3 +83,69 @@ class Pruner(object): pruned_flops, (ori_flops - pruned_flops) / ori_flops)) return model + + +@register +@serializable +class PrunerQAT(object): + def __init__(self, criterion, pruned_params, pruned_ratios, + print_prune_params, quant_config, print_qat_model): + super(PrunerQAT, self).__init__() + assert criterion in ['l1_norm', 'fpgm'], \ + "unsupported prune criterion: {}".format(criterion) + # Pruner hyperparameter + self.criterion = criterion + self.pruned_params = pruned_params + self.pruned_ratios = pruned_ratios + self.print_prune_params = print_prune_params + # QAT hyperparameter + self.quant_config = quant_config + self.print_qat_model = print_qat_model + + def __call__(self, model): + # FIXME: adapt to network graph when Training and inference are + # inconsistent, now only supports prune inference network graph. 
+ model.eval() + paddleslim = try_import('paddleslim') + from paddleslim.analysis import dygraph_flops as flops + input_spec = [{ + "image": paddle.ones( + shape=[1, 3, 640, 640], dtype='float32'), + "im_shape": paddle.full( + [1, 2], 640, dtype='float32'), + "scale_factor": paddle.ones( + shape=[1, 2], dtype='float32') + }] + if self.print_prune_params: + print_prune_params(model) + + ori_flops = flops(model, input_spec) / 1000 + logger.info("FLOPs before pruning: {}GFLOPs".format(ori_flops)) + if self.criterion == 'fpgm': + pruner = paddleslim.dygraph.FPGMFilterPruner(model, input_spec) + elif self.criterion == 'l1_norm': + pruner = paddleslim.dygraph.L1NormFilterPruner(model, input_spec) + + logger.info("pruned params: {}".format(self.pruned_params)) + pruned_ratios = [float(n) for n in self.pruned_ratios] + ratios = {} + for i, param in enumerate(self.pruned_params): + ratios[param] = pruned_ratios[i] + pruner.prune_vars(ratios, [0]) + pruned_flops = flops(model, input_spec) / 1000 + logger.info("FLOPs after pruning: {}GFLOPs; pruned ratio: {}".format( + pruned_flops, (ori_flops - pruned_flops) / ori_flops)) + + self.quanter = paddleslim.dygraph.quant.QAT(config=self.quant_config) + + self.quanter.quantize(model) + + if self.print_qat_model: + logger.info("Quantized model:") + logger.info(model) + + return model + + def save_quantized_model(self, layer, path, input_spec=None, **config): + self.quanter.save_quantized_model( + model=layer, path=path, input_spec=input_spec, **config) diff --git a/ppdet/slim/quant.py b/ppdet/slim/quant.py index ab81127aea9eced229122858c0e5912b80caee85..44508198c46b77485d61e2b4e4d2804c62f96622 100644 --- a/ppdet/slim/quant.py +++ b/ppdet/slim/quant.py @@ -38,6 +38,11 @@ class QAT(object): logger.info("Model before quant:") logger.info(model) + # For PP-YOLOE, convert model to deploy firstly. 
+ for layer in model.sublayers(): + if hasattr(layer, 'convert_to_deploy'): + layer.convert_to_deploy() + self.quanter.quantize(model) if self.print_model: diff --git a/ppdet/utils/check.py b/ppdet/utils/check.py index 45da8857e50e59fafde91b6875eff738a9a7a286..58c48806c82d8616d65f02fec80725dadd45d52b 100644 --- a/ppdet/utils/check.py +++ b/ppdet/utils/check.py @@ -20,7 +20,7 @@ import sys import paddle import six -import paddle.version as fluid_version +import paddle.version as paddle_version from .logger import setup_logger logger = setup_logger(__name__) @@ -97,8 +97,8 @@ def check_version(version='2.0'): "Please make sure the version is good with your code.".format(version) version_installed = [ - fluid_version.major, fluid_version.minor, fluid_version.patch, - fluid_version.rc + paddle_version.major, paddle_version.minor, paddle_version.patch, + paddle_version.rc ] if version_installed == ['0', '0', '0', '0']: return diff --git a/ppdet/utils/checkpoint.py b/ppdet/utils/checkpoint.py index e4325de8bb3988495fe90b3ab078805718408cbc..add087c890d4fbe82ecaec5635c19fc2c2090059 100644 --- a/ppdet/utils/checkpoint.py +++ b/ppdet/utils/checkpoint.py @@ -84,9 +84,14 @@ def load_weight(model, weight, optimizer=None, ema=None): model_weight = {} incorrect_keys = 0 - for key in model_dict.keys(): + for key, value in model_dict.items(): if key in param_state_dict.keys(): - model_weight[key] = param_state_dict[key] + if isinstance(param_state_dict[key], np.ndarray): + param_state_dict[key] = paddle.to_tensor(param_state_dict[key]) + if value.dtype == param_state_dict[key].dtype: + model_weight[key] = param_state_dict[key] + else: + model_weight[key] = param_state_dict[key].astype(value.dtype) else: logger.info('Unmatched key: {}'.format(key)) incorrect_keys += 1 @@ -209,6 +214,12 @@ def load_pretrain_weight(model, pretrain_weight): param_state_dict = paddle.load(weights_path) param_state_dict = match_state_dict(model_dict, param_state_dict) + for k, v in param_state_dict.items(): + if isinstance(v, np.ndarray): + v = paddle.to_tensor(v) + if model_dict[k].dtype != v.dtype: + param_state_dict[k] = v.astype(model_dict[k].dtype) + model.set_dict(param_state_dict) logger.info('Finish loading model weights: {}'.format(weights_path)) diff --git a/ppdet/utils/cli.py b/ppdet/utils/cli.py index b8ba59d78f1ddf606012fd0cf6d71a71d79eea05..2c5acc0e591af4bbd07a1d22e1237656ac47da65 100644 --- a/ppdet/utils/cli.py +++ b/ppdet/utils/cli.py @@ -81,6 +81,13 @@ class ArgsParser(ArgumentParser): return config +def merge_args(config, args, exclude_args=['config', 'opt', 'slim_config']): + for k, v in vars(args).items(): + if k not in exclude_args: + config[k] = v + return config + + def print_total_cfg(config): modules = get_registered_modules() color_tty = ColorTTY() diff --git a/ppdet/utils/download.py b/ppdet/utils/download.py index 54a74c92e9d768463107d5c81882ca140f81f516..71720f5e058df4335c8cde85d63eb615ff20cfca 100644 --- a/ppdet/utils/download.py +++ b/ppdet/utils/download.py @@ -97,7 +97,7 @@ DATASETS = { '49ce5a9b5ad0d6266163cd01de4b018e', ), ], ['annotations', 'images']), 'spine_coco': ([( 'https://paddledet.bj.bcebos.com/data/spine_coco.tar', - '7ed69ae73f842cd2a8cf4f58dc3c5535', ), ], ['annotations', 'images']), + '03030f42d9b6202a6e425d4becefda0d', ), ], ['annotations', 'images']), 'mot': (), 'objects365': (), 'coco_ce': ([( @@ -393,7 +393,12 @@ def _download(url, path, md5sum=None): def _download_dist(url, path, md5sum=None): env = os.environ if 'PADDLE_TRAINERS_NUM' in env and 'PADDLE_TRAINER_ID' in env: - 
trainer_id = int(env['PADDLE_TRAINER_ID']) + # In multi-machine jobs, make sure the data is downloaded once + # per node: only the first rank on each node performs the + # download, while the other ranks wait on the lock file. + # Reference https://github.com/PaddlePaddle/PaddleClas/blob/develop/ppcls/utils/download.py#L108 + rank_id_curr_node = int(os.environ.get("PADDLE_RANK_IN_NODE", 0)) num_trainers = int(env['PADDLE_TRAINERS_NUM']) if num_trainers <= 1: return _download(url, path, md5sum) @@ -406,12 +411,9 @@ def _download_dist(url, path, md5sum=None): os.makedirs(path) if not osp.exists(fullname): - from paddle.distributed import ParallelEnv - unique_endpoints = _get_unique_endpoints(ParallelEnv() .trainer_endpoints[:]) with open(lock_path, 'w'): # touch os.utime(lock_path, None) - if ParallelEnv().current_endpoint in unique_endpoints: + if rank_id_curr_node == 0: _download(url, path, md5sum) os.remove(lock_path) else: diff --git a/ppdet/utils/fuse_utils.py b/ppdet/utils/fuse_utils.py new file mode 100644 index 0000000000000000000000000000000000000000..647fa995da615fcb2bcdca13f4296f73e3204628 --- /dev/null +++ b/ppdet/utils/fuse_utils.py @@ -0,0 +1,179 @@ +# Copyright (c) 2022 PaddlePaddle Authors. All Rights Reserved. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +import copy +import paddle +import paddle.nn as nn + +__all__ = ['fuse_conv_bn'] + + +def fuse_conv_bn(model): + is_train = False + if model.training: + model.eval() + is_train = True + fuse_list = [] + tmp_pair = [None, None] + for name, layer in model.named_sublayers(): + if isinstance(layer, nn.Conv2D): + tmp_pair[0] = name + if isinstance(layer, nn.BatchNorm2D): + tmp_pair[1] = name + + if tmp_pair[0] and tmp_pair[1] and len(tmp_pair) == 2: + fuse_list.append(tmp_pair) + tmp_pair = [None, None] + model = fuse_layers(model, fuse_list) + if is_train: + model.train() + return model + + +def find_parent_layer_and_sub_name(model, name): + """ + Given the model and the name of a layer, find the parent layer and + the sub_name of the layer. + For example, if name is 'block_1.convbn_1.conv_1', the parent layer is + 'block_1.convbn_1' and the sub_name is `conv_1`. + Args: + model(paddle.nn.Layer): the model that contains the layer. + name(string): the name of a layer + + Returns: + parent_layer, subname + """ + assert isinstance(model, nn.Layer), \ + "The model must be the instance of paddle.nn.Layer." + assert len(name) > 0, "The input (name) should not be empty."
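`find_parent_layer_and_sub_name` (whose body continues below) walks dotted names so that callers can swap a sublayer in place with `setattr`. A usage sketch with an assumed toy module:

```python
import paddle.nn as nn

class Backbone(nn.Layer):
    def __init__(self):
        super(Backbone, self).__init__()
        self.conv = nn.Conv2D(3, 8, 3)
        self.bn = nn.BatchNorm2D(8)

class Net(nn.Layer):
    def __init__(self):
        super(Net, self).__init__()
        self.backbone = Backbone()

model = Net()
parent, sub_name = find_parent_layer_and_sub_name(model, 'backbone.bn')
assert parent is model.backbone and sub_name == 'bn'
# e.g. after fusing conv+bn, replace bn with the no-op Identity layer
setattr(parent, sub_name, Identity())
```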
+ + last_idx = 0 + idx = 0 + parent_layer = model + while idx < len(name): + if name[idx] == '.': + sub_name = name[last_idx:idx] + if hasattr(parent_layer, sub_name): + parent_layer = getattr(parent_layer, sub_name) + last_idx = idx + 1 + idx += 1 + sub_name = name[last_idx:idx] + return parent_layer, sub_name + + +class Identity(nn.Layer): + '''a layer to replace bn or relu layers''' + + def __init__(self, *args, **kwargs): + super(Identity, self).__init__() + + def forward(self, input): + return input + + +def fuse_layers(model, layers_to_fuse, inplace=False): + ''' + fuse layers in layers_to_fuse + + Args: + model(nn.Layer): The model to be fused. + layers_to_fuse(list): The layers' names to be fused. For + example,"fuse_list = [["conv1", "bn1"], ["conv2", "bn2"]]". + A TypeError would be raised if "fuse" was set as + True but "fuse_list" was None. + Default: None. + inplace(bool): Whether apply fusing to the input model. + Default: False. + + Return + fused_model(paddle.nn.Layer): The fused model. + ''' + if not inplace: + model = copy.deepcopy(model) + for layers_list in layers_to_fuse: + layer_list = [] + for layer_name in layers_list: + parent_layer, sub_name = find_parent_layer_and_sub_name(model, + layer_name) + layer_list.append(getattr(parent_layer, sub_name)) + new_layers = _fuse_func(layer_list) + for i, item in enumerate(layers_list): + parent_layer, sub_name = find_parent_layer_and_sub_name(model, item) + setattr(parent_layer, sub_name, new_layers[i]) + return model + + +def _fuse_func(layer_list): + '''choose the fuser method and fuse layers''' + types = tuple(type(m) for m in layer_list) + fusion_method = types_to_fusion_method.get(types, None) + new_layers = [None] * len(layer_list) + fused_layer = fusion_method(*layer_list) + for handle_id, pre_hook_fn in layer_list[0]._forward_pre_hooks.items(): + fused_layer.register_forward_pre_hook(pre_hook_fn) + del layer_list[0]._forward_pre_hooks[handle_id] + for handle_id, hook_fn in layer_list[-1]._forward_post_hooks.items(): + fused_layer.register_forward_post_hook(hook_fn) + del layer_list[-1]._forward_post_hooks[handle_id] + new_layers[0] = fused_layer + for i in range(1, len(layer_list)): + identity = Identity() + identity.training = layer_list[0].training + new_layers[i] = identity + return new_layers + + +def _fuse_conv_bn(conv, bn): + '''fuse conv and bn for train or eval''' + assert(conv.training == bn.training),\ + "Conv and BN both must be in the same mode (train or eval)." + if conv.training: + assert bn._num_features == conv._out_channels, 'Output channel of Conv2d must match num_features of BatchNorm2d' + raise NotImplementedError + else: + return _fuse_conv_bn_eval(conv, bn) + + +def _fuse_conv_bn_eval(conv, bn): + '''fuse conv and bn for eval''' + assert (not (conv.training or bn.training)), "Fusion only for eval!" 
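`_fuse_conv_bn_eval` (completed just below) folds the batch-norm statistics into the convolution's weight and bias, so the fused layer should reproduce conv followed by bn up to float32 rounding. A sanity-check sketch under that assumption:

```python
import paddle
import paddle.nn as nn

# Fusion only supports eval mode, as the assert above enforces.
conv = nn.Conv2D(3, 8, kernel_size=3, padding=1)
bn = nn.BatchNorm2D(8)
conv.eval()
bn.eval()

x = paddle.rand([1, 3, 16, 16])
ref = bn(conv(x))

fused = _fuse_conv_bn_eval(conv, bn)
out = fused(x)

# expect a difference on the order of 1e-6 or smaller
print(float(paddle.abs(ref - out).max()))
```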
+ fused_conv = copy.deepcopy(conv) + + fused_weight, fused_bias = _fuse_conv_bn_weights( + fused_conv.weight, fused_conv.bias, bn._mean, bn._variance, bn._epsilon, + bn.weight, bn.bias) + fused_conv.weight.set_value(fused_weight) + if fused_conv.bias is None: + fused_conv.bias = paddle.create_parameter( + shape=[fused_conv._out_channels], is_bias=True, dtype=bn.bias.dtype) + fused_conv.bias.set_value(fused_bias) + return fused_conv + + +def _fuse_conv_bn_weights(conv_w, conv_b, bn_rm, bn_rv, bn_eps, bn_w, bn_b): + '''fuse weights and bias of conv and bn''' + if conv_b is None: + conv_b = paddle.zeros_like(bn_rm) + if bn_w is None: + bn_w = paddle.ones_like(bn_rm) + if bn_b is None: + bn_b = paddle.zeros_like(bn_rm) + bn_var_rsqrt = paddle.rsqrt(bn_rv + bn_eps) + conv_w = conv_w * \ + (bn_w * bn_var_rsqrt).reshape([-1] + [1] * (len(conv_w.shape) - 1)) + conv_b = (conv_b - bn_rm) * bn_var_rsqrt * bn_w + bn_b + return conv_w, conv_b + + +types_to_fusion_method = {(nn.Conv2D, nn.BatchNorm2D): _fuse_conv_bn, } diff --git a/requirements.txt b/requirements.txt index 91c79fc0f396546bb86f26abbaebd4a503d2ebbe..ae1b657ac7d37c3e5ae9c2230030f1f27c55504e 100644 --- a/requirements.txt +++ b/requirements.txt @@ -1,6 +1,6 @@ tqdm -typeguard ; python_version >= '3.4' -visualdl>=2.1.0 ; python_version <= '3.7' +typeguard +visualdl>=2.2.0 opencv-python PyYAML shapely @@ -8,10 +8,13 @@ scipy terminaltables Cython pycocotools -#xtcocotools==1.6 #only for crowdpose -setuptools>=42.0.0 +setuptools + +# for vehicleplate +pyclipper + +# for mot lap -sklearn motmetrics -openpyxl -cython_bbox +sklearn +filterpy diff --git a/scripts/build_wheel.sh b/scripts/build_wheel.sh index 6fa9c0b20b7bf622543a8abc42a76bdb722a20e8..c3445cd750647629c36131fb4b95981713ba02d8 100644 --- a/scripts/build_wheel.sh +++ b/scripts/build_wheel.sh @@ -91,9 +91,8 @@ function unittest() { if [ $? != 0 ]; then exit 1 fi - find "../ppdet" -name 'tests' -type d -print0 | \ - xargs -0 -I{} -n1 bash -c \ - 'python -m unittest discover -v -s {}' + find "../ppdet" -wholename '*tests/test_*' -type f -print0 | \ + xargs -0 -I{} -n1 -t bash -c 'python -u -s {}' # clean TEST_DIR cd .. diff --git a/static/LICENSE b/static/LICENSE deleted file mode 100644 index 261eeb9e9f8b2b4b0d119366dda99c6fd7d35c64..0000000000000000000000000000000000000000 --- a/static/LICENSE +++ /dev/null @@ -1,201 +0,0 @@ - Apache License - Version 2.0, January 2004 - http://www.apache.org/licenses/ - - TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION - - 1. Definitions. - - "License" shall mean the terms and conditions for use, reproduction, - and distribution as defined by Sections 1 through 9 of this document. - - "Licensor" shall mean the copyright owner or entity authorized by - the copyright owner that is granting the License. - - "Legal Entity" shall mean the union of the acting entity and all - other entities that control, are controlled by, or are under common - control with that entity. For the purposes of this definition, - "control" means (i) the power, direct or indirect, to cause the - direction or management of such entity, whether by contract or - otherwise, or (ii) ownership of fifty percent (50%) or more of the - outstanding shares, or (iii) beneficial ownership of such entity. - - "You" (or "Your") shall mean an individual or Legal Entity - exercising permissions granted by this License. 
-      [remainder of the standard Apache License 2.0 text: the rest of Section 1 through Section 9, END OF TERMS AND CONDITIONS, and the Appendix boilerplate notice]
diff --git a/static/README.md b/static/README.md
deleted file mode 120000
index 4015683cfa5969297febc12e7ca1264afabbc0b5..0000000000000000000000000000000000000000
--- a/static/README.md
+++ /dev/null
@@ -1 +0,0 @@
-README_cn.md
\ No newline at end of file
diff --git a/static/README_cn.md b/static/README_cn.md
deleted file mode 100644
index 9a9696ea2ee2b79e67cf681ec786adaf563a4012..0000000000000000000000000000000000000000
--- a/static/README_cn.md
+++ /dev/null
@@ -1,261 +0,0 @@
-Simplified Chinese | [English](README_en.md)
-
-Documentation: [https://paddledetection.readthedocs.io](https://paddledetection.readthedocs.io)
-
-# Introduction
-
-PaddleDetection is PaddlePaddle's object detection development kit, which aims to help developers complete the whole detection workflow -- building, training, optimizing and deploying detection models -- faster and better.
-
-PaddleDetection implements a variety of mainstream object detection algorithms in a modular way, provides rich data augmentation strategies, network components (such as backbones) and loss functions, and integrates model compression and cross-platform high-performance deployment capabilities.
-
-Polished by long-term industrial practice, PaddleDetection offers a smooth and excellent user experience and is widely used by developers in more than ten industries such as industrial quality inspection, remote sensing image detection, unmanned inspection, new retail, the Internet and scientific research.
-
-[banner image]
-
-### Product News
-
-- 2021.02.07: Released the release/2.0-rc version, a trial dygraph version of PaddleDetection. See [PaddleDetection dygraph](dygraph) for details.
-- 2020.11.20: Released the release/0.5 version. See the [change log](docs/CHANGELOG.md) for details.
-- 2020.11.10: Added the instance segmentation model [SOLOv2](configs/solov2), reaching 38.6 FPS on a Tesla V100 and 38.8 mask AP on the COCO val dataset -- inference speed up by 24% and mAP up by 2.4 absolute percentage points.
-- 2020.10.30: PP-YOLO now supports rectangular image input, and a new PACT model quantization strategy was added.
-- 2020.09.30: Released the [mobile-side detection demo](deploy/android_demo); scan the QR code to install and try it.
-- 2020.09.21-27: [7-day object detection camp] From beginner to advanced, hand in hand, for a deep understanding of object detection algorithms, past and present. Join the course QQ group (1136406895) to study together :)
-- 2020.07.24: Released [PP-YOLO](https://arxiv.org/abs/2007.12099), the most practical object detection model for industry, which deeply considers the double demand of industrial applications for accuracy and speed: 45.2% mAP (latest 45.9%) on COCO and 72.9 FPS on a Tesla V100. See the [documentation](configs/ppyolo/README_cn.md) for details.
-- 2020.06.11: Released 676 classes of large-scale server-side practical object detection models, applicable to most application scenarios, usable directly for prediction or for fine-tuning other tasks.
-
-### Features
-
-- **Rich models**: 100+ pre-trained models covering **object detection**, **instance segmentation**, **face detection** and more, including a variety of **global-competition champion** solutions.
-- **Concise to use**: Modular design decouples network components, so developers can easily build and try various detection models and optimization strategies and quickly obtain high-performance, customized algorithms.
-- **End-to-end pipeline**: Data augmentation, model construction, training, compression and deployment are connected end to end, with full support for **cloud** and **edge** multi-architecture, multi-device deployment.
-- **High performance**: Built on PaddlePaddle's high-performance core, with clear advantages in training speed and memory occupation; supports FP16 and multi-machine training.
-
-#### Overview of Kit Structures
-
-**Architectures:**
-  - Two-Stage Detection: Faster RCNN, FPN, Cascade-RCNN, Libra RCNN, Hybrid Task RCNN, PSS-Det RCNN
-  - One-Stage Detection: RetinaNet, YOLOv3, YOLOv4, PP-YOLO, SSD
-  - Anchor Free: CornerNet-Squeeze, FCOS, TTFNet
-  - Instance Segmentation: Mask RCNN, SOLOv2
-  - Face Detection: FaceBoxes, BlazeFace, BlazeFace-NAS
-
-**Backbones:** ResNet(&vd), ResNeXt(&vd), SENet, Res2Net, HRNet, Hourglass, CBNet, GCNet, DarkNet, CSPDarkNet, VGG, MobileNetv1/v3, GhostNet, EfficientNet
-
-**Components:**
-  - Common: Sync-BN, Group Norm, DCNv2, Non-local
-  - FPN: BiFPN, BFP, HRFPN, ACFPN
-  - Loss: Smooth-L1, GIoU/DIoU/CIoU, IoUAware
-  - Post-processing: SoftNMS, MatrixNMS
-  - Speed: FP16 training, Multi-machine training
-
-**Data Augmentation:** Resize, Flipping, Expand, Crop, Color Distort, Random Erasing, Mixup, Cutmix, Grid Mask, Auto Augment
-
-#### Overview of Model Performance
-
-Comparison of COCO mAP and inference speed (FPS) on a single Tesla V100 for representative models of each architecture and backbone.
-
-[mAP-vs-FPS comparison chart]
-
-**Notes:**
-
-- `CBResNet` stands for the `Cascade-Faster-RCNN-CBResNet200vd-FPN` model, which reaches a COCO mAP as high as 53.3%.
-- `Cascade-Faster-RCNN` stands for `Cascade-Faster-RCNN-ResNet50vd-DCN`, which PaddleDetection has optimized to 20 FPS inference speed at 47.8% COCO mAP.
-- The enhanced PaddleDetection version of `YOLOv3-ResNet50vd-DCN` is 10.6 absolute percentage points higher than the original paper on COCO mAP, with an inference speed of 61.3 FPS, about 70% faster than the original.
-- All models in the chart are available in the [Model Zoo](#模型库).
-
-
-## Documentation and Tutorials
-
-### Getting Started
-
-- [Installation guide](docs/tutorials/INSTALL_cn.md)
-- [Quick start](docs/tutorials/QUICK_STARTED_cn.md)
-- [How to prepare data](docs/tutorials/PrepareDataSet.md)
-- [Train/evaluate/predict/deploy pipeline](docs/tutorials/DetectionPipeline.md)
-- [How to use a custom dataset](docs/tutorials/Custom_DataSet.md)
-- [FAQ](docs/FAQ.md)
-
-### Advanced Tutorials
-- Parameter configuration
-  - [Design and introduction of the config module](docs/advanced_tutorials/config_doc/CONFIG_cn.md)
-  - [RCNN parameter description](docs/advanced_tutorials/config_doc/RCNN_PARAMS_DOC.md)
-  - [YOLOv3 parameter description](docs/advanced_tutorials/config_doc/yolov3_mobilenet_v1.md)
-- Transfer learning
-  - [How to load a pretrained model](docs/advanced_tutorials/TRANSFER_LEARNING_cn.md)
-- Model compression (based on [PaddleSlim](https://github.com/PaddlePaddle/PaddleSlim))
-  - [Compression benchmark](slim)
-  - [Quantization](slim/quantization), [pruning](slim/prune), [distillation](slim/distillation), [NAS](slim/nas)
-- Inference and deployment
-  - [Export model for inference](docs/advanced_tutorials/deploy/EXPORT_MODEL.md)
-  - [Server-side Python deployment](deploy/python)
-  - [Server-side C++ deployment](deploy/cpp)
-  - [Mobile deployment](https://github.com/PaddlePaddle/Paddle-Lite-Demo)
-  - [Online Serving deployment](deploy/serving)
-  - [Inference benchmark](docs/advanced_tutorials/deploy/BENCHMARK_INFER_cn.md)
-- Advanced development
-  - [New data preprocessing](docs/advanced_tutorials/READER.md)
-  - [New detection algorithms](docs/advanced_tutorials/MODEL_TECHNICAL.md)
-
-
-## Model Zoo
-
-- Universal object detection:
-  - [Model zoo and baselines](docs/MODEL_ZOO_cn.md)
-  - [Mobile models](configs/mobile/README.md)
-  - [Anchor Free](configs/anchor_free/README.md)
-  - [PP-YOLO](configs/ppyolo/README_cn.md)
-  - [676-class object detection](docs/featured_model/LARGE_SCALE_DET_MODEL.md)
-  - [Two-stage practical model PSS-Det](configs/rcnn_enhance/README.md)
-- Universal instance segmentation:
-  - [SOLOv2](configs/solov2/README.md)
-- Vertical fields
-  - [Face detection](docs/featured_model/FACE_DETECTION.md)
-  - [Pedestrian detection](docs/featured_model/CONTRIB_cn.md)
-  - [Vehicle detection](docs/featured_model/CONTRIB_cn.md)
-- Competition solutions
-  - [Objects365 2019 Challenge champion model](docs/featured_model/champion_model/CACascadeRCNN.md)
-  - [Best single model of Open Images 2019-Object Detection](docs/featured_model/champion_model/OIDV5_BASELINE_MODEL.md)
-
-## Applications
-
-- [Automatic Christmas portrait effects generation tool](application/christmas)
-
-## Recommended Third-Party Tutorials
-
-- [Deploying PaddleDetection on Windows (part 1)](https://zhuanlan.zhihu.com/p/268657833)
-- [Deploying PaddleDetection on Windows (part 2)](https://zhuanlan.zhihu.com/p/280206376)
-- [Experience of deploying PaddleDetection on Jetson Nano](https://zhuanlan.zhihu.com/p/319371293)
-- [Deploying a YOLOv3 helmet-detection model on Raspberry Pi](https://github.com/PaddleCV-FAQ/PaddleDetection-FAQ/blob/main/Lite%E9%83%A8%E7%BD%B2/yolov3_for_raspi.md)
-- [A complete SSD-MobileNetv1 project, from dataset preparation to Raspberry Pi deployment](https://github.com/PaddleCV-FAQ/PaddleDetection-FAQ/blob/main/Lite%E9%83%A8%E7%BD%B2/ssd_mobilenet_v1_for_raspi.md)
-
-## Updates
-
-v2.0-rc was released in `02/2021`. It adds the dygraph version, supports the RCNN, YOLOv3, PP-YOLO, SSD/SSDLite, FCOS, TTFNet and SOLOv2 model families, supports model pruning and quantization, and supports inference deployment with TensorRT acceleration. Please refer to the [change log](docs/CHANGELOG.md) for details.
-
-## License
-
-This project is released under the [Apache 2.0 license](LICENSE).
-
-
-## Contributing
-
-Contributions to PaddleDetection are very welcome, and we greatly appreciate your feedback.
diff --git a/static/README_en.md b/static/README_en.md
deleted file mode 100644
index 29e689906f5f587501fe1a84f671a50e105952dc..0000000000000000000000000000000000000000
--- a/static/README_en.md
+++ /dev/null
@@ -1,273 +0,0 @@
-English | [简体中文](README_cn.md)
-
-Documentation: [https://paddledetection.readthedocs.io](https://paddledetection.readthedocs.io)
-
-# Introduction
-
-PaddleDetection is an end-to-end object detection development kit based on PaddlePaddle,
which aims to help developers through the whole development workflow -- constructing, training, optimizing and deploying detection models -- in a faster and better way.
-
-PaddleDetection implements a variety of mainstream object detection algorithms in a modular design, provides rich data augmentation methods, network components (such as backbones) and loss functions, and integrates model compression and cross-platform high-performance deployment capabilities.
-
-Polished by long-term industrial practice, PaddleDetection offers a smooth and excellent user experience and has been widely used by developers in more than ten industries such as industrial quality inspection, remote sensing object detection, automatic inspection, new retail, the Internet and scientific research.
-
-[banner image]
-
-### Product News
-
-- 2020.11.20: Released the `release/0.5` version. Please refer to the [change log](docs/CHANGELOG.md) for details.
-- 2020.11.10: Added [SOLOv2](configs/solov2) as an instance segmentation model, which reaches 38.6 FPS on a single Tesla V100 and 38.8 mask AP on the COCO val dataset -- inference speed up by 24% and mAP up by 2.4 absolute percentage points.
-- 2020.10.30: PP-YOLO now supports rectangular image input, and a new PACT quantization strategy was added for slim.
-- 2020.09.30: Released the [mobile-side detection demo](deploy/android_demo); you can scan the QR code directly to install and try it.
-- 2020.09.21-27: [7-day object detection camp] From beginner to advanced, hand in hand, for an in-depth understanding of object detection algorithms. Join the course QQ group (1136406895) to study together :)
-- 2020.07.24: Released [PP-YOLO](https://arxiv.org/abs/2007.12099), the most practical object detection model, which deeply considers the double demand of industrial applications for accuracy and speed: 45.2% accuracy (latest 45.9%) on the COCO dataset and 72.9 FPS inference speed on a single Tesla V100. Please refer to [PP-YOLO](configs/ppyolo/README.md) for details.
-- 2020.06.11: Published 676 classes of large-scale server-side practical object detection models that are applicable to most application scenarios and can be used directly for prediction or for fine-tuning other tasks.
-
-### Features
-
-- **Rich models**: PaddleDetection provides 100+ pre-trained models covering **object detection**, **instance segmentation**, **face detection** and more, including a variety of **global-competition champion** solutions.
-- **Concise to use**: Modular design decouples each network component, so developers can easily build and try various detection models and optimization strategies and quickly obtain high-performance, customized algorithms.
-- **End to end**: From data augmentation and model construction to training, compression and deployment, the whole pipeline is connected, with complete support for multi-architecture, multi-device deployment on **cloud and edge devices**.
-- **High performance**: Based on the high-performance core of PaddlePaddle, with clear advantages in training speed and memory occupation; supports FP16 and multi-machine training.
-
-#### Overview of Kit Structures
-
-**Architectures:**
-  - Two-Stage Detection: Faster RCNN, FPN, Cascade-RCNN, Libra RCNN, Hybrid Task RCNN, PSS-Det RCNN
-  - One-Stage Detection: RetinaNet, YOLOv3, YOLOv4, PP-YOLO, SSD
-  - Anchor Free: CornerNet-Squeeze, FCOS, TTFNet
-  - Instance Segmentation: Mask RCNN, SOLOv2
-  - Face Detection: FaceBoxes, BlazeFace, BlazeFace-NAS
-
-**Backbones:** ResNet(&vd), ResNeXt(&vd), SENet, Res2Net, HRNet, Hourglass, CBNet, GCNet, DarkNet, CSPDarkNet, VGG, MobileNetv1/v3, GhostNet, EfficientNet
-
-**Components:**
-  - Common: Sync-BN, Group Norm, DCNv2, Non-local
-  - FPN: BiFPN, BFP, HRFPN, ACFPN
-  - Loss: Smooth-L1, GIoU/DIoU/CIoU, IoUAware
-  - Post-processing: SoftNMS, MatrixNMS
-  - Speed: FP16 training, Multi-machine training
-
-**Data Augmentation:** Resize, Flipping, Expand, Crop, Color Distort, Random Erasing, Mixup, Cutmix, Grid Mask, Auto Augment
-
-#### Overview of Model Performance
-
-The relationship between COCO mAP and FPS on a Tesla V100 for representative models of each architecture and backbone.
-
-[mAP-vs-FPS comparison chart]
-
-**NOTE:**
-
-- `CBResNet` stands for `Cascade-Faster-RCNN-CBResNet200vd-FPN`, which has the highest COCO mAP, 53.3%.
-- `Cascade-Faster-RCNN` stands for `Cascade-Faster-RCNN-ResNet50vd-DCN`, which has been optimized to 20 FPS inference speed at 47.8% COCO mAP in PaddleDetection.
-- The enhanced PaddleDetection model `YOLOv3-ResNet50vd-DCN` is 10.6 absolute percentage points higher than the paper on COCO mAP, with an inference speed of 61.3 FPS, nearly 70% faster than the darknet framework.
-- All these models can be found in the [Model Zoo](#ModelZoo).
-
-
-## Tutorials
-
-### Get Started
-
-- [Installation guide](docs/tutorials/INSTALL_cn.md)
-- [Quick start on small dataset](docs/tutorials/QUICK_STARTED_cn.md)
-- [Prepare dataset](docs/tutorials/PrepareDataSet.md)
-- [Train/Evaluation/Inference/Deploy](docs/tutorials/DetectionPipeline.md)
-- [How to train a custom dataset](docs/tutorials/Custom_DataSet.md)
-- [FAQ](docs/FAQ.md)
-
-### Advanced Tutorials
-
-- Parameter configuration
-  - [Introduction to the configuration workflow](docs/advanced_tutorials/config_doc/CONFIG_cn.md)
-  - [Parameter configuration for RCNN model](docs/advanced_tutorials/config_doc/RCNN_PARAMS_DOC.md)
-  - [Parameter configuration for YOLOv3 model](docs/advanced_tutorials/config_doc/yolov3_mobilenet_v1.md)
-
-- Transfer learning
-  - [How to load a pretrained model](docs/advanced_tutorials/TRANSFER_LEARNING_cn.md)
-
-- Model compression (based on [PaddleSlim](https://github.com/PaddlePaddle/PaddleSlim))
-  - [Model compression benchmark](slim)
-  - [Quantization](slim/quantization)
-  - [Model pruning](slim/prune)
-  - [Model distillation](slim/distillation)
-  - [Neural Architecture Search](slim/nas)
-
-- Inference and deployment
-  - [Export model for inference](docs/advanced_tutorials/deploy/EXPORT_MODEL.md)
-  - [Python inference](deploy/python)
-  - [C++ inference](deploy/cpp)
-  - [Mobile](https://github.com/PaddlePaddle/Paddle-Lite-Demo)
-  - [Serving](deploy/serving)
-  - [Inference benchmark](docs/advanced_tutorials/deploy/BENCHMARK_INFER_cn.md)
-
-- Advanced development
-  - [New data augmentations](docs/advanced_tutorials/READER.md)
-  - [New detection algorithms](docs/advanced_tutorials/MODEL_TECHNICAL.md)
-
-
-## Model Zoo
-
-- Universal object detection
-  - [Model library and baselines](docs/MODEL_ZOO_cn.md)
-  - [Mobile models](configs/mobile/README.md)
-  - [Anchor free models](configs/anchor_free/README.md)
-  - [PP-YOLO](configs/ppyolo/README_cn.md)
-  - [676 classes of object detection](docs/featured_model/LARGE_SCALE_DET_MODEL.md)
-  - [Two-stage practical PSS-Det](configs/rcnn_enhance/README.md)
-- Universal instance segmentation
-  - [SOLOv2](configs/solov2/README.md)
-- Vertical fields
-  - [Face detection](docs/featured_model/FACE_DETECTION.md)
-  - [Pedestrian detection](docs/featured_model/CONTRIB_cn.md)
-  - [Vehicle detection](docs/featured_model/CONTRIB_cn.md)
-- Competition solutions
-  - [Objects365 2019 Challenge champion model](docs/featured_model/champion_model/CACascadeRCNN.md)
-  - [Best single model of Open Images 2019-Object Detection](docs/featured_model/champion_model/OIDV5_BASELINE_MODEL.md)
-
-## Applications
-
-- [Christmas portrait automatic generation tool](application/christmas)
-
-## Updates
-
-v2.0-rc was released at `02/2021`. It adds the dygraph version, which supports RCNN, YOLOv3, PP-YOLO, SSD/SSDLite, FCOS, TTFNet, SOLOv2, etc., supports model pruning and quantization, and supports deployment and acceleration with TensorRT. Please refer to the [change log](docs/CHANGELOG.md) for details.
-
-
-## License
-
-PaddleDetection is released under the [Apache 2.0 license](LICENSE).
-
-
-## Contributing
-
-Contributions are highly welcomed and we would really appreciate your feedback!
diff --git a/static/application/christmas/README.md b/static/application/christmas/README.md
deleted file mode 100644
index 1500544d9d521955f56fd065af6bc395d1240278..0000000000000000000000000000000000000000
--- a/static/application/christmas/README.md
+++ /dev/null
@@ -1,65 +0,0 @@
-# Automatic Christmas Portrait Effects Generation Tool
-Segment the portrait with the SOLOv2 instance segmentation model, detect facial keypoints with the BlazeFace keypoint model, then, based on the outputs of the two models, replace the background with a Christmas-style one and add effects such as a Santa beard, Christmas glasses and a Christmas hat to the face. The project can be published as a Server service directly through PaddleHub, for local debugging and direct front-end API calls. You can try it via the WeChat mini program behind the following QR code:
-
-[WeChat mini-program QR code]
-
-## Environment Setup
-
-### Dependencies
-
-- paddlepaddle >= 2.0.0rc0
-
-- paddlehub >= 2.0.0b1
-
-### Model Preparation
-- First obtain the models: configure `solov2` and `blazeface_keypoint` in the [model config files](../../configs), train the models and [export them](../../docs/advanced_tutorials/deploy/EXPORT_MODEL.md). You can also directly download the models we prepared:
-the [blazeface_keypoint model](https://paddlemodels.bj.bcebos.com/object_detection/application/blazeface_keypoint.tar) and
-the [solov2 model](https://paddlemodels.bj.bcebos.com/object_detection/application/solov2_r101_vd_fpn_3x.tar).
-**Note:** the downloaded models must be unpacked before use.
-
-- Then copy the files in the two model folders (`infer_cfg.yml`, `__model__` and `__params__`) into the `blazeface/blazeface_keypoint/` and `solov2/solov2_r101_vd_fpn_3x/` folders respectively.
-
-### Install the blazeface and solov2 models with hub
-
-```shell
-hub install solov2
-hub install blazeface
-```
-
-### Install the solov2_blazeface Christmas-effects pipeline model with hub
-
-```shell
-$ hub install solov2_blazeface
-```
-## Testing
-
-### Local test
-
-```shell
-python test_main.py
-```
-After a successful run, the prediction result is saved to `chrismas_final.png`.
-
-### Serving test
-
-- step1: start the service
-
-```shell
-export CUDA_VISIBLE_DEVICES=0
-hub serving start -m solov2_blazeface -p 8880
-```
-
-- step2: send a prediction request to the server
-
-```shell
-python test_server.py
-```
-After a successful run, the prediction result is saved to `chrismas_final.png`.
-
-## Results
-
-[result images]
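The deleted `test_server.py` itself is not shown in this diff. For orientation, a client along the following lines is roughly what the serving test amounts to. This is a hedged sketch: the endpoint path and the `images` payload key are assumptions based on common PaddleHub-serving conventions, and the encoder mirrors the `cv2_to_base64` helper defined in `data_feed.py` below; none of it is the verbatim script.

```python
# Hypothetical serving client sketch; endpoint path and payload layout are
# assumed from PaddleHub serving conventions, not taken from test_server.py.
import base64
import json

import cv2
import requests


def cv2_to_base64(image):
    # Mirrors the helper in data_feed.py: JPEG-encode, then base64-encode.
    data = cv2.imencode('.jpg', image)[1]
    return base64.b64encode(data.tobytes()).decode('utf8')


if __name__ == '__main__':
    img = cv2.imread('demo_images/test.jpg')  # any local test image
    payload = {'images': [cv2_to_base64(img)]}
    headers = {'Content-type': 'application/json'}
    # Port 8880 matches `hub serving start -m solov2_blazeface -p 8880` above.
    url = 'http://127.0.0.1:8880/predict/solov2_blazeface'
    resp = requests.post(url, headers=headers, data=json.dumps(payload))
    print(resp.status_code)
    print(resp.text[:200])
```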
    diff --git a/static/application/christmas/blazeface/data_feed.py b/static/application/christmas/blazeface/data_feed.py deleted file mode 100644 index c7eb4c5473e80290e6e50ee0350e86e350ab9fe9..0000000000000000000000000000000000000000 --- a/static/application/christmas/blazeface/data_feed.py +++ /dev/null @@ -1,371 +0,0 @@ -# Copyright (c) 2020 PaddlePaddle Authors. All Rights Reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. - -import os -import base64 - -import cv2 -import numpy as np -from PIL import Image, ImageDraw -import paddle.fluid as fluid - - -def create_inputs(im, im_info): - """generate input for different model type - Args: - im (np.ndarray): image (np.ndarray) - im_info (dict): info of image - Returns: - inputs (dict): input of model - """ - inputs = {} - inputs['image'] = im - origin_shape = list(im_info['origin_shape']) - resize_shape = list(im_info['resize_shape']) - pad_shape = list(im_info['pad_shape']) if im_info[ - 'pad_shape'] is not None else list(im_info['resize_shape']) - scale_x, scale_y = im_info['scale'] - scale = scale_x - im_info = np.array([resize_shape + [scale]]).astype('float32') - inputs['im_info'] = im_info - return inputs - - -def visualize_box_mask(im, - results, - labels=None, - mask_resolution=14, - threshold=0.5): - """ - Args: - im (str/np.ndarray): path of image/np.ndarray read by cv2 - results (dict): include 'boxes': np.ndarray: shape:[N,6], N: number of box, - matix element:[class, score, x_min, y_min, x_max, y_max] - MaskRCNN's results include 'masks': np.ndarray: - shape:[N, class_num, mask_resolution, mask_resolution] - labels (list): labels:['class1', ..., 'classn'] - mask_resolution (int): shape of a mask is:[mask_resolution, mask_resolution] - threshold (float): Threshold of score. 
- Returns: - im (PIL.Image.Image): visualized image - """ - if not labels: - labels = ['background', 'person'] - if isinstance(im, str): - im = Image.open(im).convert('RGB') - else: - im = cv2.cvtColor(im, cv2.COLOR_BGR2RGB) - im = Image.fromarray(im) - if 'masks' in results and 'boxes' in results: - im = draw_mask( - im, - results['boxes'], - results['masks'], - labels, - resolution=mask_resolution) - if 'boxes' in results: - im = draw_box(im, results['boxes'], labels) - if 'segm' in results: - im = draw_segm( - im, - results['segm'], - results['label'], - results['score'], - labels, - threshold=threshold) - if 'landmark' in results: - im = draw_lmk(im, results['landmark']) - return im - - -def get_color_map_list(num_classes): - """ - Args: - num_classes (int): number of class - Returns: - color_map (list): RGB color list - """ - color_map = num_classes * [0, 0, 0] - for i in range(0, num_classes): - j = 0 - lab = i - while lab: - color_map[i * 3] |= (((lab >> 0) & 1) << (7 - j)) - color_map[i * 3 + 1] |= (((lab >> 1) & 1) << (7 - j)) - color_map[i * 3 + 2] |= (((lab >> 2) & 1) << (7 - j)) - j += 1 - lab >>= 3 - color_map = [color_map[i:i + 3] for i in range(0, len(color_map), 3)] - return color_map - - -def expand_boxes(boxes, scale=0.0): - """ - Args: - boxes (np.ndarray): shape:[N,4], N:number of box, - matix element:[x_min, y_min, x_max, y_max] - scale (float): scale of boxes - Returns: - boxes_exp (np.ndarray): expanded boxes - """ - w_half = (boxes[:, 2] - boxes[:, 0]) * .5 - h_half = (boxes[:, 3] - boxes[:, 1]) * .5 - x_c = (boxes[:, 2] + boxes[:, 0]) * .5 - y_c = (boxes[:, 3] + boxes[:, 1]) * .5 - w_half *= scale - h_half *= scale - boxes_exp = np.zeros(boxes.shape) - boxes_exp[:, 0] = x_c - w_half - boxes_exp[:, 2] = x_c + w_half - boxes_exp[:, 1] = y_c - h_half - boxes_exp[:, 3] = y_c + h_half - return boxes_exp - - -def draw_mask(im, np_boxes, np_masks, labels, resolution=14, threshold=0.5): - """ - Args: - im (PIL.Image.Image): PIL image - np_boxes (np.ndarray): shape:[N,6], N: number of box, - matix element:[class, score, x_min, y_min, x_max, y_max] - np_masks (np.ndarray): shape:[N, class_num, resolution, resolution] - labels (list): labels:['class1', ..., 'classn'] - resolution (int): shape of a mask is:[resolution, resolution] - threshold (float): threshold of mask - Returns: - im (PIL.Image.Image): visualized image - """ - color_list = get_color_map_list(len(labels)) - scale = (resolution + 2.0) / resolution - im_w, im_h = im.size - w_ratio = 0.4 - alpha = 0.7 - im = np.array(im).astype('float32') - rects = np_boxes[:, 2:] - expand_rects = expand_boxes(rects, scale) - expand_rects = expand_rects.astype(np.int32) - clsid_scores = np_boxes[:, 0:2] - padded_mask = np.zeros((resolution + 2, resolution + 2), dtype=np.float32) - clsid2color = {} - for idx in range(len(np_boxes)): - clsid, score = clsid_scores[idx].tolist() - clsid = int(clsid) - xmin, ymin, xmax, ymax = expand_rects[idx].tolist() - w = xmax - xmin + 1 - h = ymax - ymin + 1 - w = np.maximum(w, 1) - h = np.maximum(h, 1) - padded_mask[1:-1, 1:-1] = np_masks[idx, int(clsid), :, :] - resized_mask = cv2.resize(padded_mask, (w, h)) - resized_mask = np.array(resized_mask > threshold, dtype=np.uint8) - x0 = min(max(xmin, 0), im_w) - x1 = min(max(xmax + 1, 0), im_w) - y0 = min(max(ymin, 0), im_h) - y1 = min(max(ymax + 1, 0), im_h) - im_mask = np.zeros((im_h, im_w), dtype=np.uint8) - im_mask[y0:y1, x0:x1] = resized_mask[(y0 - ymin):(y1 - ymin), ( - x0 - xmin):(x1 - xmin)] - if clsid not in clsid2color: - 
clsid2color[clsid] = color_list[clsid] - color_mask = clsid2color[clsid] - for c in range(3): - color_mask[c] = color_mask[c] * (1 - w_ratio) + w_ratio * 255 - idx = np.nonzero(im_mask) - color_mask = np.array(color_mask) - im[idx[0], idx[1], :] *= 1.0 - alpha - im[idx[0], idx[1], :] += alpha * color_mask - return Image.fromarray(im.astype('uint8')) - - -def draw_box(im, np_boxes, labels): - """ - Args: - im (PIL.Image.Image): PIL image - np_boxes (np.ndarray): shape:[N,6], N: number of box, - matix element:[class, score, x_min, y_min, x_max, y_max] - labels (list): labels:['class1', ..., 'classn'] - Returns: - im (PIL.Image.Image): visualized image - """ - draw_thickness = min(im.size) // 320 - draw = ImageDraw.Draw(im) - clsid2color = {} - color_list = get_color_map_list(len(labels)) - - for dt in np_boxes: - clsid, bbox, score = int(dt[0]), dt[2:], dt[1] - xmin, ymin, xmax, ymax = bbox - w = xmax - xmin - h = ymax - ymin - if clsid not in clsid2color: - clsid2color[clsid] = color_list[clsid] - color = tuple(clsid2color[clsid]) - - # draw bbox - draw.line( - [(xmin, ymin), (xmin, ymax), (xmax, ymax), (xmax, ymin), - (xmin, ymin)], - width=draw_thickness, - fill=color) - - # draw label - text = "{} {:.4f}".format(labels[clsid], score) - tw, th = draw.textsize(text) - draw.rectangle( - [(xmin + 1, ymin - th), (xmin + tw + 1, ymin)], fill=color) - draw.text((xmin + 1, ymin - th), text, fill=(255, 255, 255)) - return im - - -def draw_segm(im, - np_segms, - np_label, - np_score, - labels, - threshold=0.5, - alpha=0.7): - """ - Draw segmentation on image - """ - mask_color_id = 0 - w_ratio = .4 - color_list = get_color_map_list(len(labels)) - im = np.array(im).astype('float32') - clsid2color = {} - np_segms = np_segms.astype(np.uint8) - index = np.where(np_label == 0)[0] - index = np.where(np_score[index] > threshold)[0] - person_segms = np_segms[index] - person_mask = np.sum(person_segms, axis=0) - person_mask[person_mask > 1] = 1 - person_mask = np.expand_dims(person_mask, axis=2) - person_mask = np.repeat(person_mask, 3, axis=2) - im = im * person_mask - - return Image.fromarray(im.astype('uint8')) - - -def load_predictor(model_dir, - run_mode='fluid', - batch_size=1, - use_gpu=False, - min_subgraph_size=3): - """set AnalysisConfig, generate AnalysisPredictor - Args: - model_dir (str): root path of __model__ and __params__ - use_gpu (bool): whether use gpu - Returns: - predictor (PaddlePredictor): AnalysisPredictor - Raises: - ValueError: predict by TensorRT need use_gpu == True. 
- """ - if not use_gpu and not run_mode == 'fluid': - raise ValueError( - "Predict by TensorRT mode: {}, expect use_gpu==True, but use_gpu == {}" - .format(run_mode, use_gpu)) - if run_mode == 'trt_int8': - raise ValueError("TensorRT int8 mode is not supported now, " - "please use trt_fp32 or trt_fp16 instead.") - precision_map = { - 'trt_int8': fluid.core.AnalysisConfig.Precision.Int8, - 'trt_fp32': fluid.core.AnalysisConfig.Precision.Float32, - 'trt_fp16': fluid.core.AnalysisConfig.Precision.Half - } - config = fluid.core.AnalysisConfig( - os.path.join(model_dir, '__model__'), - os.path.join(model_dir, '__params__')) - if use_gpu: - # initial GPU memory(M), device ID - config.enable_use_gpu(100, 0) - # optimize graph and fuse op - config.switch_ir_optim(True) - else: - config.disable_gpu() - - if run_mode in precision_map.keys(): - config.enable_tensorrt_engine( - workspace_size=1 << 10, - max_batch_size=batch_size, - min_subgraph_size=min_subgraph_size, - precision_mode=precision_map[run_mode], - use_static=False, - use_calib_mode=False) - - # disable print log when predict - config.disable_glog_info() - # enable shared memory - config.enable_memory_optim() - # disable feed, fetch OP, needed by zero_copy_run - config.switch_use_feed_fetch_ops(False) - predictor = fluid.core.create_paddle_predictor(config) - return predictor - - -def cv2_to_base64(image): - data = cv2.imencode('.jpg', image)[1] - return base64.b64encode(data.tostring()).decode('utf8') - - -def base64_to_cv2(b64str): - data = base64.b64decode(b64str.encode('utf8')) - data = np.fromstring(data, np.uint8) - data = cv2.imdecode(data, cv2.IMREAD_COLOR) - return data - - -def lmk2out(bboxes, np_lmk, im_info, threshold=0.5, is_bbox_normalized=True): - image_w, image_h = im_info['origin_shape'] - scale = im_info['scale'] - face_index, landmark, prior_box = np_lmk[:] - xywh_res = [] - if bboxes.shape == (1, 1) or bboxes is None: - return np.array([]) - prior = np.reshape(prior_box, (-1, 4)) - predict_lmk = np.reshape(landmark, (-1, 10)) - k = 0 - for i in range(bboxes.shape[0]): - score = bboxes[i][1] - if score < threshold: - continue - theindex = face_index[i][0] - me_prior = prior[theindex, :] - lmk_pred = predict_lmk[theindex, :] - prior_h = me_prior[2] - me_prior[0] - prior_w = me_prior[3] - me_prior[1] - prior_h_center = (me_prior[2] + me_prior[0]) / 2 - prior_w_center = (me_prior[3] + me_prior[1]) / 2 - lmk_decode = np.zeros((10)) - for j in [0, 2, 4, 6, 8]: - lmk_decode[j] = lmk_pred[j] * 0.1 * prior_w + prior_h_center - for j in [1, 3, 5, 7, 9]: - lmk_decode[j] = lmk_pred[j] * 0.1 * prior_h + prior_w_center - - if is_bbox_normalized: - lmk_decode = lmk_decode * np.array([ - image_h, image_w, image_h, image_w, image_h, image_w, image_h, - image_w, image_h, image_w - ]) - xywh_res.append(lmk_decode) - return np.asarray(xywh_res) - - -def draw_lmk(image, lmk_results): - draw = ImageDraw.Draw(image) - for lmk_decode in lmk_results: - for j in range(5): - x1 = int(round(lmk_decode[2 * j])) - y1 = int(round(lmk_decode[2 * j + 1])) - draw.ellipse( - (x1 - 2, y1 - 2, x1 + 3, y1 + 3), fill='green', outline='green') - return image diff --git a/static/application/christmas/blazeface/module.py b/static/application/christmas/blazeface/module.py deleted file mode 100644 index 40656ff83fab02e54dd2c620575d7384fdbdba5c..0000000000000000000000000000000000000000 --- a/static/application/christmas/blazeface/module.py +++ /dev/null @@ -1,204 +0,0 @@ -# Copyright (c) 2020 PaddlePaddle Authors. All Rights Reserved. 
-# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. - -import os -import time -from functools import reduce -import cv2 -import numpy as np -from paddlehub.module.module import moduleinfo - -import blazeface.data_feed as D - - -@moduleinfo( - name="blazeface", - type="CV/image_editing", - author="paddlepaddle", - author_email="", - summary="blazeface is a face key point detection model.", - version="1.0.0") -class Detector(object): - """ - Args: - config (object): config of model, defined by `Config(model_dir)` - model_dir (str): root path of __model__, __params__ and infer_cfg.yml - use_gpu (bool): whether use gpu - run_mode (str): mode of running(fluid/trt_fp32/trt_fp16) - threshold (float): threshold to reserve the result for output. - """ - - def __init__(self, - min_subgraph_size=60, - use_gpu=False, - run_mode='fluid', - threshold=0.5): - - model_dir = os.path.join(self.directory, 'blazeface_keypoint') - self.predictor = D.load_predictor( - model_dir, - run_mode=run_mode, - min_subgraph_size=min_subgraph_size, - use_gpu=use_gpu) - - def face_img_process(self, - image, - mean=[104., 117., 123.], - std=[127.502231, 127.502231, 127.502231]): - image = np.array(image) - # HWC to CHW - if len(image.shape) == 3: - image = np.swapaxes(image, 1, 2) - image = np.swapaxes(image, 1, 0) - # RBG to BGR - image = image[[2, 1, 0], :, :] - image = image.astype('float32') - image -= np.array(mean)[:, np.newaxis, np.newaxis].astype('float32') - image /= np.array(std)[:, np.newaxis, np.newaxis].astype('float32') - image = [image] - image = np.array(image) - - return image - - def transform(self, image, shrink): - im_info = { - 'scale': [1., 1.], - 'origin_shape': None, - 'resize_shape': None, - 'pad_shape': None, - } - if isinstance(image, str): - with open(image, 'rb') as f: - im_read = f.read() - image = np.frombuffer(im_read, dtype='uint8') - image = cv2.imdecode(image, 1) # BGR mode, but need RGB mode - image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB) - im_info['origin_shape'] = image.shape[:2] - else: - im_info['origin_shape'] = image.shape[:2] - - image_shape = [3, image.shape[0], image.shape[1]] - h, w = shrink, shrink - image = cv2.resize(image, (w, h)) - im_info['resize_shape'] = image.shape[:2] - - image = self.face_img_process(image) - - inputs = D.create_inputs(image, im_info) - return inputs, im_info - - def postprocess(self, boxes_list, lmks_list, im_info, threshold=0.5): - assert len(boxes_list) == len(lmks_list) - best_np_boxes, best_np_lmk = boxes_list[0], lmks_list[0] - for i in range(1, len(boxes_list)): - #judgment detection score - if boxes_list[i][0][1] > 0.9: - break - face_width = boxes_list[i][0][4] - boxes_list[i][0][2] - if boxes_list[i][0][1] - best_np_boxes[0][ - 1] > 0.01 and face_width > 0.2: - best_np_boxes, best_np_lmk = boxes_list[i], lmks_list[i] - # postprocess output of predictor - results = {} - results['landmark'] = D.lmk2out(best_np_boxes, best_np_lmk, im_info, - threshold) - - w, h = im_info['origin_shape'] - best_np_boxes[:, 2] *= h - best_np_boxes[:, 3] *= w - 
best_np_boxes[:, 4] *= h - best_np_boxes[:, 5] *= w - expect_boxes = (best_np_boxes[:, 1] > threshold) & ( - best_np_boxes[:, 0] > -1) - best_np_boxes = best_np_boxes[expect_boxes, :] - for box in best_np_boxes: - print('class_id:{:d}, confidence:{:.4f},' - 'left_top:[{:.2f},{:.2f}],' - ' right_bottom:[{:.2f},{:.2f}]'.format( - int(box[0]), box[1], box[2], box[3], box[4], box[5])) - results['boxes'] = best_np_boxes - return results - - def predict(self, - image, - threshold=0.5, - repeats=1, - visualization=False, - with_lmk=True, - save_dir='blaze_result'): - ''' - Args: - image (str/np.ndarray): path of image/ np.ndarray read by cv2 - threshold (float): threshold of predicted box' score - Returns: - results (dict): include 'boxes': np.ndarray: shape:[N,6], N: number of box, - matix element:[class, score, x_min, y_min, x_max, y_max] - ''' - shrink = [960, 640, 480, 320, 180] - boxes_list = [] - lmks_list = [] - for sh in shrink: - inputs, im_info = self.transform(image, shrink=sh) - np_boxes, np_lmk = None, None - - input_names = self.predictor.get_input_names() - for i in range(len(input_names)): - input_tensor = self.predictor.get_input_tensor(input_names[i]) - input_tensor.copy_from_cpu(inputs[input_names[i]]) - - t1 = time.time() - for i in range(repeats): - self.predictor.zero_copy_run() - output_names = self.predictor.get_output_names() - boxes_tensor = self.predictor.get_output_tensor(output_names[0]) - np_boxes = boxes_tensor.copy_to_cpu() - if with_lmk == True: - face_index = self.predictor.get_output_tensor(output_names[ - 1]) - landmark = self.predictor.get_output_tensor(output_names[2]) - prior_boxes = self.predictor.get_output_tensor(output_names[ - 3]) - np_face_index = face_index.copy_to_cpu() - np_prior_boxes = prior_boxes.copy_to_cpu() - np_landmark = landmark.copy_to_cpu() - np_lmk = [np_face_index, np_landmark, np_prior_boxes] - - t2 = time.time() - ms = (t2 - t1) * 1000.0 / repeats - print("Inference: {} ms per batch image".format(ms)) - - # do not perform postprocess in benchmark mode - results = [] - if reduce(lambda x, y: x * y, np_boxes.shape) < 6: - print('[WARNNING] No object detected.') - results = {'boxes': np.array([])} - else: - boxes_list.append(np_boxes) - lmks_list.append(np_lmk) - - results = self.postprocess( - boxes_list, lmks_list, im_info, threshold=threshold) - - if visualization: - if not os.path.exists(save_dir): - os.makedirs(save_dir) - output = D.visualize_box_mask( - im=image, results=results, labels=["background", "face"]) - name = str(time.time()) + '.png' - save_path = os.path.join(save_dir, name) - output.save(save_path) - img = cv2.cvtColor(np.array(output), cv2.COLOR_RGB2BGR) - results['image'] = img - - return results diff --git a/static/application/christmas/demo_images/result.png b/static/application/christmas/demo_images/result.png deleted file mode 100644 index 5b857e4cc1377d3ef3bc0400a9a6bff2f286abb9..0000000000000000000000000000000000000000 Binary files a/static/application/christmas/demo_images/result.png and /dev/null differ diff --git a/static/application/christmas/demo_images/test.jpg b/static/application/christmas/demo_images/test.jpg deleted file mode 100644 index 10030612246c085d6250f34bf1c78207de8e049e..0000000000000000000000000000000000000000 Binary files a/static/application/christmas/demo_images/test.jpg and /dev/null differ diff --git a/static/application/christmas/demo_images/wechat_app.jpeg b/static/application/christmas/demo_images/wechat_app.jpeg deleted file mode 100644 index 
3edcb7c6d91e22107538a049a53ab64fa5eb1762..0000000000000000000000000000000000000000 Binary files a/static/application/christmas/demo_images/wechat_app.jpeg and /dev/null differ diff --git a/static/application/christmas/element_source/background/1.json b/static/application/christmas/element_source/background/1.json deleted file mode 100644 index 5e9b4d9a36faf0161cfc80b00220f506367da94e..0000000000000000000000000000000000000000 --- a/static/application/christmas/element_source/background/1.json +++ /dev/null @@ -1 +0,0 @@ -{"path":"/Users/yuzhiliang/Downloads/docsmall-2/12.png","outputs":{"object":[{"name":"local","bndbox":{"xmin":282,"ymin":366,"xmax":3451,"ymax":4603}}]},"time_labeled":1608631688933,"labeled":true,"size":{"width":3714,"height":5725,"depth":3}} \ No newline at end of file diff --git a/static/application/christmas/element_source/background/1.png b/static/application/christmas/element_source/background/1.png deleted file mode 100755 index e4bf623b645a1704be2c948a8ba9a1dbfe8662ca..0000000000000000000000000000000000000000 Binary files a/static/application/christmas/element_source/background/1.png and /dev/null differ diff --git a/static/application/christmas/element_source/background/2.json b/static/application/christmas/element_source/background/2.json deleted file mode 100644 index 4bb6ebbc16a7ab89a63634e3fff09cc18e0a3b44..0000000000000000000000000000000000000000 --- a/static/application/christmas/element_source/background/2.json +++ /dev/null @@ -1 +0,0 @@ -{"path":"/Users/yuzhiliang/Downloads/docsmall-2/2.png","outputs":{"object":[{"name":"local","bndbox":{"xmin":336,"ymin":512,"xmax":3416,"ymax":4672}}]},"time_labeled":1608631696021,"labeled":true,"size":{"width":3714,"height":5275,"depth":3}} \ No newline at end of file diff --git a/static/application/christmas/element_source/background/2.png b/static/application/christmas/element_source/background/2.png deleted file mode 100755 index f3c4299c97bb9923fa4995336616901f4e0dfc2c..0000000000000000000000000000000000000000 Binary files a/static/application/christmas/element_source/background/2.png and /dev/null differ diff --git a/static/application/christmas/element_source/background/3.json b/static/application/christmas/element_source/background/3.json deleted file mode 100644 index df28a39f151c6e664bf9493b074a086bfbb4de48..0000000000000000000000000000000000000000 --- a/static/application/christmas/element_source/background/3.json +++ /dev/null @@ -1 +0,0 @@ -{"path":"/Users/yuzhiliang/Downloads/docsmall-2/3.png","outputs":{"object":[{"name":"local","bndbox":{"xmin":376,"ymin":352,"xmax":3448,"ymax":4544}}]},"time_labeled":1608631701740,"labeled":true,"size":{"width":3714,"height":5275,"depth":3}} \ No newline at end of file diff --git a/static/application/christmas/element_source/background/3.png b/static/application/christmas/element_source/background/3.png deleted file mode 100755 index e8ccd1f29a1ffa539b173d990b6a2b726fc9bf01..0000000000000000000000000000000000000000 Binary files a/static/application/christmas/element_source/background/3.png and /dev/null differ diff --git a/static/application/christmas/element_source/beard/1.png b/static/application/christmas/element_source/beard/1.png deleted file mode 100644 index c3645f897c2e59266b7dea869bf8eadaa1c7a924..0000000000000000000000000000000000000000 Binary files a/static/application/christmas/element_source/beard/1.png and /dev/null differ diff --git a/static/application/christmas/element_source/beard/2.png b/static/application/christmas/element_source/beard/2.png deleted 
file mode 100644 index 24000ad0b4eeceaa851d0d7f2a5a43c1ad5f60ca..0000000000000000000000000000000000000000 Binary files a/static/application/christmas/element_source/beard/2.png and /dev/null differ diff --git a/static/application/christmas/element_source/glasses/1.png b/static/application/christmas/element_source/glasses/1.png deleted file mode 100644 index 385a475d4967796f6220cc9b987a070f87dbd1d4..0000000000000000000000000000000000000000 Binary files a/static/application/christmas/element_source/glasses/1.png and /dev/null differ diff --git a/static/application/christmas/element_source/glasses/2.png b/static/application/christmas/element_source/glasses/2.png deleted file mode 100644 index b179531f41c2916e9aab340f8882e18a40562b62..0000000000000000000000000000000000000000 Binary files a/static/application/christmas/element_source/glasses/2.png and /dev/null differ diff --git a/static/application/christmas/element_source/hat/1.png b/static/application/christmas/element_source/hat/1.png deleted file mode 100644 index 97cb6f314cd1eaed21458f6b0d21c49a4114c802..0000000000000000000000000000000000000000 Binary files a/static/application/christmas/element_source/hat/1.png and /dev/null differ diff --git a/static/application/christmas/element_source/hat/2.png b/static/application/christmas/element_source/hat/2.png deleted file mode 100644 index 045cfee71c2b601a18aedc93e0f463c7e65cb8c9..0000000000000000000000000000000000000000 Binary files a/static/application/christmas/element_source/hat/2.png and /dev/null differ diff --git a/static/application/christmas/element_source/hat/3.png b/static/application/christmas/element_source/hat/3.png deleted file mode 100644 index a86aaf9adc532817e3b95e0c6d73b1cfbbeeb61b..0000000000000000000000000000000000000000 Binary files a/static/application/christmas/element_source/hat/3.png and /dev/null differ diff --git a/static/application/christmas/element_source/hat/4.png b/static/application/christmas/element_source/hat/4.png deleted file mode 100644 index aac53f9fdf29335730863b910d3baa063f548afe..0000000000000000000000000000000000000000 Binary files a/static/application/christmas/element_source/hat/4.png and /dev/null differ diff --git a/static/application/christmas/element_source/hat/5.png b/static/application/christmas/element_source/hat/5.png deleted file mode 100644 index 3292a3ed5415aa46040d5bfcb0402c5cd0c49ffe..0000000000000000000000000000000000000000 Binary files a/static/application/christmas/element_source/hat/5.png and /dev/null differ diff --git a/static/application/christmas/solov2/data_feed.py b/static/application/christmas/solov2/data_feed.py deleted file mode 100644 index 5fb37a179551edadfb3f63232b6d4d0384d00496..0000000000000000000000000000000000000000 --- a/static/application/christmas/solov2/data_feed.py +++ /dev/null @@ -1,337 +0,0 @@ -# Copyright (c) 2020 PaddlePaddle Authors. All Rights Reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. 
- -import os -import base64 - -import cv2 -import numpy as np -from PIL import Image, ImageDraw -import paddle.fluid as fluid - - -def create_inputs(im, im_info): - """generate input for different model type - Args: - im (np.ndarray): image (np.ndarray) - im_info (dict): info of image - Returns: - inputs (dict): input of model - """ - inputs = {} - inputs['image'] = im - origin_shape = list(im_info['origin_shape']) - resize_shape = list(im_info['resize_shape']) - pad_shape = list(im_info['pad_shape']) if im_info[ - 'pad_shape'] is not None else list(im_info['resize_shape']) - scale_x, scale_y = im_info['scale'] - scale = scale_x - im_info = np.array([resize_shape + [scale]]).astype('float32') - inputs['im_info'] = im_info - return inputs - - -def visualize_box_mask(im, - results, - labels=None, - mask_resolution=14, - threshold=0.5): - """ - Args: - im (str/np.ndarray): path of image/np.ndarray read by cv2 - results (dict): include 'boxes': np.ndarray: shape:[N,6], N: number of box, - matix element:[class, score, x_min, y_min, x_max, y_max] - MaskRCNN's results include 'masks': np.ndarray: - shape:[N, class_num, mask_resolution, mask_resolution] - labels (list): labels:['class1', ..., 'classn'] - mask_resolution (int): shape of a mask is:[mask_resolution, mask_resolution] - threshold (float): Threshold of score. - Returns: - im (PIL.Image.Image): visualized image - """ - if not labels: - labels = [ - 'background', 'person', 'bicycle', 'car', 'motorcycle', 'airplane', - 'bus', 'train', 'truck', 'boat', 'traffic light', 'fire', 'hydrant', - 'stop sign', 'parking meter', 'bench', 'bird', 'cat', 'dog', - 'horse', 'sheep', 'cow', 'elephant', 'bear', 'zebra', 'giraffe', - 'backpack', 'umbrella', 'handbag', 'tie', 'suitcase', 'frisbee', - 'skis', 'snowboard', 'sports ball', 'kite', 'baseball bat', - 'baseball glove', 'skateboard', 'surfboard', 'tennis racket', - 'bottle', 'wine glass', 'cup', 'fork', 'knife', 'spoon', 'bowl', - 'banana', 'apple', 'sandwich', 'orange', 'broccoli', 'carrot', - 'hot dog', 'pizza', 'donut', 'cake', 'chair', 'couch', - 'potted plant', 'bed', 'dining table', 'toilet', 'tv', 'laptop', - 'mouse', 'remote', 'keyboard', 'cell phone', 'microwave', 'oven', - 'toaster', 'sink', 'refrigerator', 'book', 'clock', 'vase', - 'scissors', 'teddy bear', 'hair drier', 'toothbrush' - ] - if isinstance(im, str): - im = Image.open(im).convert('RGB') - else: - im = cv2.cvtColor(im, cv2.COLOR_BGR2RGB) - im = Image.fromarray(im) - if 'masks' in results and 'boxes' in results: - im = draw_mask( - im, - results['boxes'], - results['masks'], - labels, - resolution=mask_resolution) - if 'boxes' in results: - im = draw_box(im, results['boxes'], labels) - if 'segm' in results: - im = draw_segm( - im, - results['segm'], - results['label'], - results['score'], - labels, - threshold=threshold) - return im - - -def get_color_map_list(num_classes): - """ - Args: - num_classes (int): number of class - Returns: - color_map (list): RGB color list - """ - color_map = num_classes * [0, 0, 0] - for i in range(0, num_classes): - j = 0 - lab = i - while lab: - color_map[i * 3] |= (((lab >> 0) & 1) << (7 - j)) - color_map[i * 3 + 1] |= (((lab >> 1) & 1) << (7 - j)) - color_map[i * 3 + 2] |= (((lab >> 2) & 1) << (7 - j)) - j += 1 - lab >>= 3 - color_map = [color_map[i:i + 3] for i in range(0, len(color_map), 3)] - return color_map - - -def expand_boxes(boxes, scale=0.0): - """ - Args: - boxes (np.ndarray): shape:[N,4], N:number of box, - matix element:[x_min, y_min, x_max, y_max] - scale (float): scale of boxes 
-
-
-def expand_boxes(boxes, scale=0.0):
-    """
-    Args:
-        boxes (np.ndarray): shape:[N,4], N:number of box,
-                            matrix element:[x_min, y_min, x_max, y_max]
-        scale (float): scale of boxes
-    Returns:
-        boxes_exp (np.ndarray): expanded boxes
-    """
-    w_half = (boxes[:, 2] - boxes[:, 0]) * .5
-    h_half = (boxes[:, 3] - boxes[:, 1]) * .5
-    x_c = (boxes[:, 2] + boxes[:, 0]) * .5
-    y_c = (boxes[:, 3] + boxes[:, 1]) * .5
-    w_half *= scale
-    h_half *= scale
-    boxes_exp = np.zeros(boxes.shape)
-    boxes_exp[:, 0] = x_c - w_half
-    boxes_exp[:, 2] = x_c + w_half
-    boxes_exp[:, 1] = y_c - h_half
-    boxes_exp[:, 3] = y_c + h_half
-    return boxes_exp
-
-
-def draw_mask(im, np_boxes, np_masks, labels, resolution=14, threshold=0.5):
-    """
-    Args:
-        im (PIL.Image.Image): PIL image
-        np_boxes (np.ndarray): shape:[N,6], N: number of box,
-                               matrix element:[class, score, x_min, y_min, x_max, y_max]
-        np_masks (np.ndarray): shape:[N, class_num, resolution, resolution]
-        labels (list): labels:['class1', ..., 'classn']
-        resolution (int): shape of a mask is:[resolution, resolution]
-        threshold (float): threshold of mask
-    Returns:
-        im (PIL.Image.Image): visualized image
-    """
-    color_list = get_color_map_list(len(labels))
-    scale = (resolution + 2.0) / resolution
-    im_w, im_h = im.size
-    w_ratio = 0.4
-    alpha = 0.7
-    im = np.array(im).astype('float32')
-    rects = np_boxes[:, 2:]
-    expand_rects = expand_boxes(rects, scale)
-    expand_rects = expand_rects.astype(np.int32)
-    clsid_scores = np_boxes[:, 0:2]
-    padded_mask = np.zeros((resolution + 2, resolution + 2), dtype=np.float32)
-    clsid2color = {}
-    for idx in range(len(np_boxes)):
-        clsid, score = clsid_scores[idx].tolist()
-        clsid = int(clsid)
-        xmin, ymin, xmax, ymax = expand_rects[idx].tolist()
-        w = xmax - xmin + 1
-        h = ymax - ymin + 1
-        w = np.maximum(w, 1)
-        h = np.maximum(h, 1)
-        padded_mask[1:-1, 1:-1] = np_masks[idx, int(clsid), :, :]
-        resized_mask = cv2.resize(padded_mask, (w, h))
-        resized_mask = np.array(resized_mask > threshold, dtype=np.uint8)
-        x0 = min(max(xmin, 0), im_w)
-        x1 = min(max(xmax + 1, 0), im_w)
-        y0 = min(max(ymin, 0), im_h)
-        y1 = min(max(ymax + 1, 0), im_h)
-        im_mask = np.zeros((im_h, im_w), dtype=np.uint8)
-        im_mask[y0:y1, x0:x1] = resized_mask[(y0 - ymin):(y1 - ymin), (
-            x0 - xmin):(x1 - xmin)]
-        if clsid not in clsid2color:
-            clsid2color[clsid] = color_list[clsid]
-        color_mask = clsid2color[clsid]
-        for c in range(3):
-            color_mask[c] = color_mask[c] * (1 - w_ratio) + w_ratio * 255
-        idx = np.nonzero(im_mask)
-        color_mask = np.array(color_mask)
-        im[idx[0], idx[1], :] *= 1.0 - alpha
-        im[idx[0], idx[1], :] += alpha * color_mask
-    return Image.fromarray(im.astype('uint8'))
-
-
-def draw_box(im, np_boxes, labels):
-    """
-    Args:
-        im (PIL.Image.Image): PIL image
-        np_boxes (np.ndarray): shape:[N,6], N: number of box,
-                               matrix element:[class, score, x_min, y_min, x_max, y_max]
-        labels (list): labels:['class1', ..., 'classn']
-    Returns:
-        im (PIL.Image.Image): visualized image
-    """
-    draw_thickness = min(im.size) // 320
-    draw = ImageDraw.Draw(im)
-    clsid2color = {}
-    color_list = get_color_map_list(len(labels))
-
-    for dt in np_boxes:
-        clsid, bbox, score = int(dt[0]), dt[2:], dt[1]
-        xmin, ymin, xmax, ymax = bbox
-        w = xmax - xmin
-        h = ymax - ymin
-        if clsid not in clsid2color:
-            clsid2color[clsid] = color_list[clsid]
-        color = tuple(clsid2color[clsid])
-
-        # draw bbox
-        draw.line(
-            [(xmin, ymin), (xmin, ymax), (xmax, ymax), (xmax, ymin),
-             (xmin, ymin)],
-            width=draw_thickness,
-            fill=color)
-
-        # draw label
-        text = "{} {:.4f}".format(labels[clsid], score)
-        tw, th = draw.textsize(text)
-        draw.rectangle(
-            [(xmin + 1, ymin - th), (xmin + tw + 1, ymin)], fill=color)
-        draw.text((xmin + 1, ymin - th), text, fill=(255,
255, 255)) - return im - - -def draw_segm(im, - np_segms, - np_label, - np_score, - labels, - threshold=0.5, - alpha=0.7): - """ - Draw segmentation on image - """ - mask_color_id = 0 - w_ratio = .4 - color_list = get_color_map_list(len(labels)) - im = np.array(im).astype('float32') - clsid2color = {} - np_segms = np_segms.astype(np.uint8) - index = np.where(np_label == 0)[0] - index = np.where(np_score[index] > threshold)[0] - person_segms = np_segms[index] - person_mask = np.sum(person_segms, axis=0) - person_mask[person_mask > 1] = 1 - person_mask = np.expand_dims(person_mask, axis=2) - person_mask = np.repeat(person_mask, 3, axis=2) - im = im * person_mask - - return Image.fromarray(im.astype('uint8')) - - -def load_predictor(model_dir, - run_mode='fluid', - batch_size=1, - use_gpu=False, - min_subgraph_size=3): - """set AnalysisConfig, generate AnalysisPredictor - Args: - model_dir (str): root path of __model__ and __params__ - use_gpu (bool): whether use gpu - Returns: - predictor (PaddlePredictor): AnalysisPredictor - Raises: - ValueError: predict by TensorRT need use_gpu == True. - """ - if not use_gpu and not run_mode == 'fluid': - raise ValueError( - "Predict by TensorRT mode: {}, expect use_gpu==True, but use_gpu == {}" - .format(run_mode, use_gpu)) - if run_mode == 'trt_int8': - raise ValueError("TensorRT int8 mode is not supported now, " - "please use trt_fp32 or trt_fp16 instead.") - precision_map = { - 'trt_int8': fluid.core.AnalysisConfig.Precision.Int8, - 'trt_fp32': fluid.core.AnalysisConfig.Precision.Float32, - 'trt_fp16': fluid.core.AnalysisConfig.Precision.Half - } - config = fluid.core.AnalysisConfig( - os.path.join(model_dir, '__model__'), - os.path.join(model_dir, '__params__')) - if use_gpu: - # initial GPU memory(M), device ID - config.enable_use_gpu(100, 0) - # optimize graph and fuse op - config.switch_ir_optim(True) - else: - config.disable_gpu() - - if run_mode in precision_map.keys(): - config.enable_tensorrt_engine( - workspace_size=1 << 10, - max_batch_size=batch_size, - min_subgraph_size=min_subgraph_size, - precision_mode=precision_map[run_mode], - use_static=False, - use_calib_mode=False) - - # disable print log when predict - config.disable_glog_info() - # enable shared memory - config.enable_memory_optim() - # disable feed, fetch OP, needed by zero_copy_run - config.switch_use_feed_fetch_ops(False) - predictor = fluid.core.create_paddle_predictor(config) - return predictor - - -def cv2_to_base64(image): - data = cv2.imencode('.jpg', image)[1] - return base64.b64encode(data.tostring()).decode('utf8') - - -def base64_to_cv2(b64str): - data = base64.b64decode(b64str.encode('utf8')) - data = np.fromstring(data, np.uint8) - data = cv2.imdecode(data, cv2.IMREAD_COLOR) - return data diff --git a/static/application/christmas/solov2/module.py b/static/application/christmas/solov2/module.py deleted file mode 100644 index 333ec6384aa0459b10266f8b7bf2447e3c35ddea..0000000000000000000000000000000000000000 --- a/static/application/christmas/solov2/module.py +++ /dev/null @@ -1,173 +0,0 @@ -# Copyright (c) 2020 PaddlePaddle Authors. All Rights Reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. 
-# You may obtain a copy of the License at
-#
-#     http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-import os
-import time
-from functools import reduce
-import cv2
-import numpy as np
-from paddlehub.module.module import moduleinfo
-import solov2.processor as P
-import solov2.data_feed as D
-
-
-class Detector(object):
-    """
-    Args:
-        model_dir (str): root path of __model__, __params__ and infer_cfg.yml
-        use_gpu (bool): whether use gpu
-        run_mode (str): mode of running (fluid/trt_fp32/trt_fp16)
-        threshold (float): threshold to reserve the result for output.
-    """
-
-    def __init__(self,
-                 min_subgraph_size=60,
-                 use_gpu=False,
-                 run_mode='fluid',
-                 threshold=0.5):
-
-        model_dir = os.path.join(self.directory, 'solov2_r101_vd_fpn_3x')
-        self.predictor = D.load_predictor(
-            model_dir,
-            run_mode=run_mode,
-            min_subgraph_size=min_subgraph_size,
-            use_gpu=use_gpu)
-        self.compose = [
-            P.Resize(max_size=1333), P.Normalize(
-                mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
-            P.Permute(), P.PadStride(stride=32)
-        ]
-
-    def transform(self, im):
-        im, im_info = P.preprocess(im, self.compose)
-        inputs = D.create_inputs(im, im_info)
-        return inputs, im_info
-
-    def postprocess(self, np_boxes, np_masks, im_info, threshold=0.5):
-        # postprocess output of predictor
-        results = {}
-        expect_boxes = (np_boxes[:, 1] > threshold) & (np_boxes[:, 0] > -1)
-        np_boxes = np_boxes[expect_boxes, :]
-        for box in np_boxes:
-            print('class_id:{:d}, confidence:{:.4f},'
-                  'left_top:[{:.2f},{:.2f}],'
-                  ' right_bottom:[{:.2f},{:.2f}]'.format(
-                      int(box[0]), box[1], box[2], box[3], box[4], box[5]))
-        results['boxes'] = np_boxes
-        if np_masks is not None:
-            np_masks = np_masks[expect_boxes, :, :, :]
-            results['masks'] = np_masks
-        return results
-
-    def predict(self, image, threshold=0.5, warmup=0, repeats=1):
-        '''
-        Args:
-            image (str/np.ndarray): path of image / np.ndarray read by cv2
-            threshold (float): threshold of predicted box's score
-        Returns:
-            results (dict): include 'boxes': np.ndarray: shape:[N,6], N: number of box,
-                            matrix element:[class, score, x_min, y_min, x_max, y_max]
-                            MaskRCNN's results include 'masks': np.ndarray:
-                            shape:[N, class_num, mask_resolution, mask_resolution]
-        '''
-        inputs, im_info = self.transform(image)
-        np_boxes, np_masks = None, None
-
-        input_names = self.predictor.get_input_names()
-        for i in range(len(input_names)):
-            input_tensor = self.predictor.get_input_tensor(input_names[i])
-            input_tensor.copy_from_cpu(inputs[input_names[i]])
-
-        for i in range(warmup):
-            self.predictor.zero_copy_run()
-            output_names = self.predictor.get_output_names()
-            boxes_tensor = self.predictor.get_output_tensor(output_names[0])
-            np_boxes = boxes_tensor.copy_to_cpu()
-
-        for i in range(repeats):
-            self.predictor.zero_copy_run()
-            output_names = self.predictor.get_output_names()
-            boxes_tensor = self.predictor.get_output_tensor(output_names[0])
-            np_boxes = boxes_tensor.copy_to_cpu()
-
-        # do not perform postprocess in benchmark mode
-        results = []
-
-        if reduce(lambda x, y: x * y, np_boxes.shape) < 6:
-            print('[WARNING] No object detected.')
-            results = {'boxes': np.array([])}
-        else:
-            results = self.postprocess(
-                np_boxes, np_masks, im_info, threshold=threshold)
-
-        return results
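
For orientation, a minimal sketch of how this zero-copy inference loop is meant to be driven in practice, through the PaddleHub module registered just below (this assumes the `solov2` module is installed and a local demo image exists; it is not code from the deleted file):

```python
# Minimal usage sketch (assumption: the `solov2` PaddleHub module below
# is installed; 'demo.jpg' is a placeholder image path).
import paddlehub as hub

solov2 = hub.Module(name='solov2', use_gpu=False)
# predict() runs preprocess -> zero-copy inference -> (optional) visualization
# and returns a dict with 'segm', 'label' and 'score' arrays.
output = solov2.predict(image='demo.jpg', threshold=0.5)
print(output['label'].shape, output['score'].shape)
```
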
- - -@moduleinfo( - name="solov2", - type="CV/image_editing", - author="paddlepaddle", - author_email="", - summary="solov2 is a detection model, this module is trained with COCO dataset.", - version="1.0.0") -class DetectorSOLOv2(Detector): - def __init__(self, use_gpu=False, run_mode='fluid', threshold=0.5): - super(DetectorSOLOv2, self).__init__( - use_gpu=use_gpu, run_mode=run_mode, threshold=threshold) - - def predict(self, - image, - threshold=0.5, - warmup=0, - repeats=1, - visualization=False, - save_dir='solov2_result'): - inputs, im_info = self.transform(image) - np_label, np_score, np_segms = None, None, None - - input_names = self.predictor.get_input_names() - for i in range(len(input_names)): - input_tensor = self.predictor.get_input_tensor(input_names[i]) - input_tensor.copy_from_cpu(inputs[input_names[i]]) - for i in range(warmup): - self.predictor.zero_copy_run() - output_names = self.predictor.get_output_names() - np_label = self.predictor.get_output_tensor(output_names[ - 0]).copy_to_cpu() - np_score = self.predictor.get_output_tensor(output_names[ - 1]).copy_to_cpu() - np_segms = self.predictor.get_output_tensor(output_names[ - 2]).copy_to_cpu() - - for i in range(repeats): - self.predictor.zero_copy_run() - output_names = self.predictor.get_output_names() - np_label = self.predictor.get_output_tensor(output_names[ - 0]).copy_to_cpu() - np_score = self.predictor.get_output_tensor(output_names[ - 1]).copy_to_cpu() - np_segms = self.predictor.get_output_tensor(output_names[ - 2]).copy_to_cpu() - output = dict(segm=np_segms, label=np_label, score=np_score) - - if visualization: - if not os.path.exists(save_dir): - os.makedirs(save_dir) - image = D.visualize_box_mask(im=image, results=output) - name = str(time.time()) + '.png' - save_path = os.path.join(save_dir, name) - image.save(save_path) - img = cv2.cvtColor(np.array(image), cv2.COLOR_RGB2BGR) - output['image'] = img - return output diff --git a/static/application/christmas/solov2/processor.py b/static/application/christmas/solov2/processor.py deleted file mode 100644 index b2f02c09ac5f027f631b6696ed60876d650b829b..0000000000000000000000000000000000000000 --- a/static/application/christmas/solov2/processor.py +++ /dev/null @@ -1,248 +0,0 @@ -# Copyright (c) 2020 PaddlePaddle Authors. All Rights Reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. 
-
-from PIL import Image
-import cv2
-import numpy as np
-
-
-def decode_image(im_file, im_info):
-    """read rgb image
-    Args:
-        im_file (str/np.ndarray): path of image / np.ndarray read by cv2
-        im_info (dict): info of image
-    Returns:
-        im (np.ndarray): processed image (np.ndarray)
-        im_info (dict): info of processed image
-    """
-    if isinstance(im_file, str):
-        with open(im_file, 'rb') as f:
-            im_read = f.read()
-        data = np.frombuffer(im_read, dtype='uint8')
-        im = cv2.imdecode(data, 1)  # BGR mode, but need RGB mode
-        im = cv2.cvtColor(im, cv2.COLOR_BGR2RGB)
-        im_info['origin_shape'] = im.shape[:2]
-        im_info['resize_shape'] = im.shape[:2]
-    else:
-        im = im_file
-        im = cv2.cvtColor(im, cv2.COLOR_BGR2RGB)
-        im_info['origin_shape'] = im.shape[:2]
-        im_info['resize_shape'] = im.shape[:2]
-    return im, im_info
-
-
-class Resize(object):
-    """resize image by target_size and max_size
-    Args:
-        target_size (int): the target size of image
-        max_size (int): the max size of image
-        use_cv2 (bool): whether use cv2
-        image_shape (list): input shape of model
-        interp (int): method of resize
-    """
-
-    def __init__(self,
-                 target_size=800,
-                 max_size=1333,
-                 use_cv2=True,
-                 image_shape=None,
-                 interp=cv2.INTER_LINEAR,
-                 resize_box=False):
-        self.target_size = target_size
-        self.max_size = max_size
-        self.image_shape = image_shape
-        self.use_cv2 = use_cv2
-        self.interp = interp
-
-    def __call__(self, im, im_info):
-        """
-        Args:
-            im (np.ndarray): image (np.ndarray)
-            im_info (dict): info of image
-        Returns:
-            im (np.ndarray): processed image (np.ndarray)
-            im_info (dict): info of processed image
-        """
-        im_channel = im.shape[2]
-        im_scale_x, im_scale_y = self.generate_scale(im)
-        im_info['resize_shape'] = [
-            im_scale_x * float(im.shape[0]), im_scale_y * float(im.shape[1])
-        ]
-        if self.use_cv2:
-            im = cv2.resize(
-                im,
-                None,
-                None,
-                fx=im_scale_x,
-                fy=im_scale_y,
-                interpolation=self.interp)
-        else:
-            resize_w = int(im_scale_x * float(im.shape[1]))
-            resize_h = int(im_scale_y * float(im.shape[0]))
-            if self.max_size != 0:
-                raise TypeError(
-                    'If you set max_size to cap the maximum size of image, '
-                    'please set use_cv2 to True to resize the image.')
-            im = im.astype('uint8')
-            im = Image.fromarray(im)
-            im = im.resize((int(resize_w), int(resize_h)), self.interp)
-            im = np.array(im)
-
-        # padding im when image_shape fixed by infer_cfg.yml
-        if self.max_size != 0 and self.image_shape is not None:
-            padding_im = np.zeros(
-                (self.max_size, self.max_size, im_channel), dtype=np.float32)
-            im_h, im_w = im.shape[:2]
-            padding_im[:im_h, :im_w, :] = im
-            im = padding_im
-
-        im_info['scale'] = [im_scale_x, im_scale_y]
-        return im, im_info
-
-    def generate_scale(self, im):
-        """
-        Args:
-            im (np.ndarray): image (np.ndarray)
-        Returns:
-            im_scale_x: the resize ratio of X
-            im_scale_y: the resize ratio of Y
-        """
-        origin_shape = im.shape[:2]
-        im_c = im.shape[2]
-        if self.max_size != 0:
-            im_size_min = np.min(origin_shape[0:2])
-            im_size_max = np.max(origin_shape[0:2])
-            im_scale = float(self.target_size) / float(im_size_min)
-            if np.round(im_scale * im_size_max) > self.max_size:
-                im_scale = float(self.max_size) / float(im_size_max)
-            im_scale_x = im_scale
-            im_scale_y = im_scale
-        else:
-            im_scale_x = float(self.target_size) / float(origin_shape[1])
-            im_scale_y = float(self.target_size) / float(origin_shape[0])
-        return im_scale_x, im_scale_y
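
To make the scaling rule above concrete, a small worked check of `generate_scale` under the signature defaults `target_size=800`, `max_size=1333` (an illustrative standalone snippet, not part of the deleted file):

```python
# Illustrative check of the resize rule above: scale the short side to
# target_size, but cap the scale so the long side never exceeds max_size.
target_size, max_size = 800, 1333

def scale_for(h, w):
    scale = target_size / min(h, w)
    if round(scale * max(h, w)) > max_size:
        scale = max_size / max(h, w)  # cap so the long side fits max_size
    return scale

print(scale_for(600, 900))   # 1.333... (long side becomes 1200 <= 1333)
print(scale_for(600, 1200))  # 1.1108... (capped: 1.333 * 1200 = 1600 > 1333)
```
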
-class Normalize(object):
-    """normalize image
-    Args:
-        mean (list): im - mean
-        std (list): im / std
-        is_scale (bool): whether to scale im by 1 / 255 first
-        is_channel_first (bool): if True: image shape is CHW, else: HWC
-    """
-
-    def __init__(self, mean, std, is_scale=True, is_channel_first=False):
-        self.mean = mean
-        self.std = std
-        self.is_scale = is_scale
-        self.is_channel_first = is_channel_first
-
-    def __call__(self, im, im_info):
-        """
-        Args:
-            im (np.ndarray): image (np.ndarray)
-            im_info (dict): info of image
-        Returns:
-            im (np.ndarray): processed image (np.ndarray)
-            im_info (dict): info of processed image
-        """
-        im = im.astype(np.float32, copy=False)
-        if self.is_channel_first:
-            mean = np.array(self.mean)[:, np.newaxis, np.newaxis]
-            std = np.array(self.std)[:, np.newaxis, np.newaxis]
-        else:
-            mean = np.array(self.mean)[np.newaxis, np.newaxis, :]
-            std = np.array(self.std)[np.newaxis, np.newaxis, :]
-        if self.is_scale:
-            im = im / 255.0
-        im -= mean
-        im /= std
-        return im, im_info
-
-
-class Permute(object):
-    """permute image
-    Args:
-        to_bgr (bool): whether convert RGB to BGR
-        channel_first (bool): whether convert HWC to CHW
-    """
-
-    def __init__(self, to_bgr=False, channel_first=True):
-        self.to_bgr = to_bgr
-        self.channel_first = channel_first
-
-    def __call__(self, im, im_info):
-        """
-        Args:
-            im (np.ndarray): image (np.ndarray)
-            im_info (dict): info of image
-        Returns:
-            im (np.ndarray): processed image (np.ndarray)
-            im_info (dict): info of processed image
-        """
-        if self.channel_first:
-            im = im.transpose((2, 0, 1)).copy()
-        if self.to_bgr:
-            im = im[[2, 1, 0], :, :]
-        return im, im_info
-
-
-class PadStride(object):
-    """padding image for models with FPN
-    Args:
-        stride (int): models with FPN need image shape % stride == 0
-    """
-
-    def __init__(self, stride=0):
-        self.coarsest_stride = stride
-
-    def __call__(self, im, im_info):
-        """
-        Args:
-            im (np.ndarray): image (np.ndarray)
-            im_info (dict): info of image
-        Returns:
-            im (np.ndarray): processed image (np.ndarray)
-            im_info (dict): info of processed image
-        """
-        coarsest_stride = self.coarsest_stride
-        if coarsest_stride == 0:
-            return im, im_info
-        im_c, im_h, im_w = im.shape
-        pad_h = int(np.ceil(float(im_h) / coarsest_stride) * coarsest_stride)
-        pad_w = int(np.ceil(float(im_w) / coarsest_stride) * coarsest_stride)
-        padding_im = np.zeros((im_c, pad_h, pad_w), dtype=np.float32)
-        padding_im[:, :im_h, :im_w] = im
-        im_info['pad_shape'] = padding_im.shape[1:]
-        return padding_im, im_info
-
-
-def preprocess(im, preprocess_ops):
-    # process image by preprocess_ops
-    im_info = {
-        'scale': [1., 1.],
-        'origin_shape': None,
-        'resize_shape': None,
-        'pad_shape': None,
-    }
-    im, im_info = decode_image(im, im_info)
-    count = 0
-    for operator in preprocess_ops:
-        count += 1
-        im, im_info = operator(im, im_info)
-    im = np.array((im, )).astype('float32')
-    return im, im_info
diff --git a/static/application/christmas/solov2_blazeface/face_makeup_main.py b/static/application/christmas/solov2_blazeface/face_makeup_main.py
deleted file mode 100644
index c86a179428760a2cf58741341b8bb66f68bcfba9..0000000000000000000000000000000000000000
--- a/static/application/christmas/solov2_blazeface/face_makeup_main.py
+++ /dev/null
@@ -1,282 +0,0 @@
-# Copyright (c) 2020 PaddlePaddle Authors. All Rights Reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. -import os -import cv2 -import math -import numpy as np - -HAT_SCALES = { - '1.png': [3.0, 0.9, .0], - '2.png': [3.0, 1.3, .5], - '3.png': [2.2, 1.5, .8], - '4.png': [2.2, 1.8, .0], - '5.png': [1.8, 1.2, .0], -} - -GLASSES_SCALES = { - '1.png': [0.65, 2.5], - '2.png': [0.65, 2.5], -} - -BEARD_SCALES = {'1.png': [700, 0.3], '2.png': [220, 0.2]} - - -def rotate(image, angle): - """ - angle is degree, not radian - """ - (h, w) = image.shape[:2] - (cx, cy) = (w / 2, h / 2) - M = cv2.getRotationMatrix2D((cx, cy), -angle, 1.0) - cos = np.abs(M[0, 0]) - sin = np.abs(M[0, 1]) - nW = int((h * sin) + (w * cos)) - nH = int((h * cos) + (w * sin)) - M[0, 2] += (nW / 2) - cx - M[1, 2] += (nH / 2) - cy - return cv2.warpAffine(image, M, (nW, nH)) - - -def n_rotate_coord(angle, x, y): - """ - angle is radian, not degree - """ - rotatex = math.cos(angle) * x - math.sin(angle) * y - rotatey = math.cos(angle) * y + math.sin(angle) * x - return rotatex, rotatey - - -def r_rotate_coord(angle, x, y): - """ - angle is radian, not degree - """ - rotatex = math.cos(angle) * x + math.sin(angle) * y - rotatey = math.cos(angle) * y - math.sin(angle) * x - return rotatex, rotatey - - -def add_beard(person, kypoint, element_path): - beard_file_name = os.path.split(element_path)[1] - # element_len: top width of beard - # loc_offset_scale: scale relative to nose - element_len, loc_offset_scale = BEARD_SCALES[beard_file_name][:] - - x1, y1, x2, y2, x3, y3, x4, y4, x5, y5 = kypoint[:] - mouth_len = np.sqrt(np.square(np.abs(y4 - y5)) + np.square(x4 - x5)) - - element = cv2.imread(element_path) - h, w, _ = element.shape - resize_scale = mouth_len / float(element_len) - h, w = round(h * resize_scale + 0.5), round(w * resize_scale + 0.5) - resized_element = cv2.resize(element, (w, h)) - resized_ele_h, resized_ele_w, _ = resized_element.shape - - # First find the keypoint of mouth in front face - m_center_x = (x4 + x5) / 2. - m_center_y = (y4 + y5) / 2. 
- # cal degree only according mouth coordinates - degree = np.arccos((x4 - x5) / mouth_len) - - # coordinate of RoI in front face - half_w = int(resized_ele_w // 2) - scale = loc_offset_scale - roi_top_left_y = int(y3 + (((y5 + y4) // 2) - y3) * scale) - roi_top_left_x = int(x3 - half_w) - roi_top_right_y = roi_top_left_y - roi_top_right_x = int(x3 + half_w) - roi_bottom_left_y = roi_top_left_y + resized_ele_h - roi_bottom_left_x = roi_top_left_x - roi_bottom_right_y = roi_bottom_left_y - roi_bottom_right_x = roi_top_right_x - - r_x11, r_y11 = roi_top_left_x - x3, roi_top_left_y - y3 - r_x12, r_y12 = roi_top_right_x - x3, roi_top_right_y - y3 - r_x21, r_y21 = roi_bottom_left_x - x3, roi_bottom_left_y - y3 - r_x22, r_y22 = roi_bottom_right_x - x3, roi_bottom_right_y - y3 - - # coordinate of RoI in raw face - if m_center_x > x3: - x11, y11 = r_rotate_coord(degree, r_x11, r_y11) - x12, y12 = r_rotate_coord(degree, r_x12, r_y12) - x21, y21 = r_rotate_coord(degree, r_x21, r_y21) - x22, y22 = r_rotate_coord(degree, r_x22, r_y22) - else: - x11, y11 = n_rotate_coord(degree, r_x11, r_y11) - x12, y12 = n_rotate_coord(degree, r_x12, r_y12) - x21, y21 = n_rotate_coord(degree, r_x21, r_y21) - x22, y22 = n_rotate_coord(degree, r_x22, r_y22) - - x11, y11 = x11 + x3, y11 + y3 - x12, y12 = x12 + x3, y12 + y3 - x21, y21 = x21 + x3, y21 + y3 - x22, y22 = x22 + x3, y22 + y3 - - min_x = int(min(x11, x12, x21, x22)) - max_x = int(max(x11, x12, x21, x22)) - min_y = int(min(y11, y12, y21, y22)) - max_y = int(max(y11, y12, y21, y22)) - - angle = np.degrees(degree) - - if y4 < y5: - angle = -angle - - rotated_element = rotate(resized_element, angle) - - rotated_ele_h, rotated_ele_w, _ = rotated_element.shape - - max_x = min_x + int(rotated_ele_w) - max_y = min_y + int(rotated_ele_h) - - e2gray = cv2.cvtColor(rotated_element, cv2.COLOR_BGR2GRAY) - ret, mask = cv2.threshold(e2gray, 238, 255, cv2.THRESH_BINARY_INV) - mask_inv = cv2.bitwise_not(mask) - - roi = person[min_y:max_y, min_x:max_x] - person_bg = cv2.bitwise_and(roi, roi, mask=mask) - element_fg = cv2.bitwise_and( - rotated_element, rotated_element, mask=mask_inv) - - dst = cv2.add(person_bg, element_fg) - person[min_y:max_y, min_x:max_x] = dst - return person - - -def add_hat(person, kypoint, element_path): - x1, y1, x2, y2, x3, y3, x4, y4, x5, y5 = kypoint[:] - eye_len = np.sqrt(np.square(np.abs(y1 - y2)) + np.square(np.abs(x1 - x2))) - # cal degree only according eye coordinates - degree = np.arccos((x2 - x1) / eye_len) - - angle = np.degrees(degree) - if y2 < y1: - angle = -angle - - element = cv2.imread(element_path) - hat_file_name = os.path.split(element_path)[1] - # head_scale: size scale of hat - # high_scale: height scale above the eyes - # offect_scale: width offect of hat in face - head_scale, high_scale, offect_scale = HAT_SCALES[hat_file_name][:] - h, w, _ = element.shape - - element_len = w - resize_scale = eye_len * head_scale / float(w) - h, w = round(h * resize_scale + 0.5), round(w * resize_scale + 0.5) - resized_element = cv2.resize(element, (w, h)) - resized_ele_h, resized_ele_w, _ = resized_element.shape - - m_center_x = (x1 + x2) / 2. - m_center_y = (y1 + y2) / 2. 
- - head_len = int(eye_len * high_scale) - - if angle > 0: - head_center_x = int(m_center_x + head_len * math.sin(degree)) - head_center_y = int(m_center_y - head_len * math.cos(degree)) - else: - head_center_x = int(m_center_x + head_len * math.sin(degree)) - head_center_y = int(m_center_y - head_len * math.cos(degree)) - - rotated_element = rotate(resized_element, angle) - - rotated_ele_h, rotated_ele_w, _ = rotated_element.shape - max_x = int(head_center_x + (resized_ele_w // 2) * math.cos(degree)) + int( - angle * head_scale) + int(eye_len * offect_scale) - min_y = int(head_center_y - (resized_ele_w // 2) * math.cos(degree)) - - pad_ele_x0 = 0 if (max_x - int(rotated_ele_w)) > 0 else -( - max_x - int(rotated_ele_w)) - pad_ele_y0 = 0 if min_y > 0 else -(min_y) - - min_x = int(max(max_x - int(rotated_ele_w), 0)) - min_y = int(max(min_y, 0)) - max_y = min_y + int(rotated_ele_h) - - pad_y1 = max(max_y - int(person.shape[0]), 0) - pad_x1 = max(max_x - int(person.shape[1]), 0) - pad_w = pad_ele_x0 + pad_x1 - pad_h = pad_ele_y0 + pad_y1 - max_x += pad_w - - pad_person = np.zeros( - (person.shape[0] + pad_h, person.shape[1] + pad_w, 3)).astype(np.uint8) - - pad_person[pad_ele_y0:pad_ele_y0 + person.shape[0], pad_ele_x0:pad_ele_x0 + - person.shape[1], :] = person - - e2gray = cv2.cvtColor(rotated_element, cv2.COLOR_BGR2GRAY) - ret, mask = cv2.threshold(e2gray, 1, 255, cv2.THRESH_BINARY_INV) - mask_inv = cv2.bitwise_not(mask) - - roi = pad_person[min_y:max_y, min_x:max_x] - - person_bg = cv2.bitwise_and(roi, roi, mask=mask) - element_fg = cv2.bitwise_and( - rotated_element, rotated_element, mask=mask_inv) - - dst = cv2.add(person_bg, element_fg) - pad_person[min_y:max_y, min_x:max_x] = dst - - return pad_person, pad_ele_x0, pad_x1, pad_ele_y0, pad_y1, min_x, min_y, max_x, max_y - - -def add_glasses(person, kypoint, element_path): - x1, y1, x2, y2, x3, y3, x4, y4, x5, y5 = kypoint[:] - eye_len = np.sqrt(np.square(np.abs(y1 - y2)) + np.square(np.abs(x1 - x2))) - # cal degree only according eye coordinates - degree = np.arccos((x2 - x1) / eye_len) - angle = np.degrees(degree) - if y2 < y1: - angle = -angle - - element = cv2.imread(element_path) - glasses_file_name = os.path.split(element_path)[1] - # height_scale: height scale above the eyes - # glasses_scale: size ratio of glasses - height_scale, glasses_scale = GLASSES_SCALES[glasses_file_name][:] - h, w, _ = element.shape - - element_len = w - resize_scale = eye_len * glasses_scale / float(element_len) - h, w = round(h * resize_scale + 0.5), round(w * resize_scale + 0.5) - resized_element = cv2.resize(element, (w, h)) - resized_ele_h, resized_ele_w, _ = resized_element.shape - - rotated_element = rotate(resized_element, angle) - - rotated_ele_h, rotated_ele_w, _ = rotated_element.shape - - eye_center_x = (x1 + x2) / 2. - eye_center_y = (y1 + y2) / 2. 
- - min_x = int(eye_center_x) - int(rotated_ele_w * 0.5) + int( - angle * glasses_scale * person.shape[1] / 2000) - min_y = int(eye_center_y) - int(rotated_ele_h * height_scale) - max_x = min_x + rotated_ele_w - max_y = min_y + rotated_ele_h - - e2gray = cv2.cvtColor(rotated_element, cv2.COLOR_BGR2GRAY) - ret, mask = cv2.threshold(e2gray, 1, 255, cv2.THRESH_BINARY_INV) - mask_inv = cv2.bitwise_not(mask) - - roi = person[min_y:max_y, min_x:max_x] - - person_bg = cv2.bitwise_and(roi, roi, mask=mask) - element_fg = cv2.bitwise_and( - rotated_element, rotated_element, mask=mask_inv) - - dst = cv2.add(person_bg, element_fg) - person[min_y:max_y, min_x:max_x] = dst - return person diff --git a/static/application/christmas/solov2_blazeface/module.py b/static/application/christmas/solov2_blazeface/module.py deleted file mode 100644 index 1c63427e514b21ce6505e767b1e02fff71252194..0000000000000000000000000000000000000000 --- a/static/application/christmas/solov2_blazeface/module.py +++ /dev/null @@ -1,157 +0,0 @@ -# Copyright (c) 2020 PaddlePaddle Authors. All Rights Reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. - -import os -import base64 -import json - -import cv2 -import numpy as np -import paddle.nn as nn -import paddlehub as hub -from paddlehub.module.module import moduleinfo, serving -import solov2_blazeface.processor as P - - -def cv2_to_base64(image): - data = cv2.imencode('.jpg', image)[1] - return base64.b64encode(data.tostring()).decode('utf8') - - -def base64_to_cv2(b64str): - data = base64.b64decode(b64str.encode('utf8')) - data = np.fromstring(data, np.uint8) - data = cv2.imdecode(data, cv2.IMREAD_COLOR) - return data - - -@moduleinfo( - name="solov2_blazeface", - type="CV/image_editing", - author="paddlepaddle", - author_email="", - summary="solov2_blaceface is a segmentation and face detection model based on solov2 and blaceface.", - version="1.0.0") -class SoloV2BlazeFaceModel(nn.Layer): - """ - SoloV2BlazeFaceModel - """ - - def __init__(self, use_gpu=True): - super(SoloV2BlazeFaceModel, self).__init__() - self.solov2 = hub.Module(name='solov2', use_gpu=use_gpu) - self.blaceface = hub.Module(name='blazeface', use_gpu=use_gpu) - - def predict(self, - image, - background, - beard_file=None, - glasses_file=None, - hat_file=None, - visualization=False, - threshold=0.5): - # instance segmention - solov2_output = self.solov2.predict( - image=image, threshold=threshold, visualization=visualization) - # Set background pixel to 0 - im_segm, x0, x1, y0, y1, _, _, _, _, flag_seg = P.visualize_box_mask( - image, solov2_output, threshold=threshold) - - if flag_seg == 0: - return im_segm - - h, w = y1 - y0, x1 - x0 - back_json = background[:-3] + 'json' - stand_box = json.load(open(back_json)) - stand_box = stand_box['outputs']['object'][0]['bndbox'] - stand_xmin, stand_xmax, stand_ymin, stand_ymax = stand_box[ - 'xmin'], stand_box['xmax'], stand_box['ymin'], stand_box['ymax'] - im_path = np.asarray(im_segm) - - # face detection - blaceface_output = self.blaceface.predict( - 
image=im_path, threshold=threshold, visualization=visualization) - im_face_kp, p_left, p_right, p_up, p_bottom, h_xmin, h_ymin, h_xmax, h_ymax, flag_face = P.visualize_box_mask( - im_path, - blaceface_output, - threshold=threshold, - beard_file=beard_file, - glasses_file=glasses_file, - hat_file=hat_file) - if flag_face == 1: - if x0 > h_xmin: - shift_x_ = x0 - h_xmin - else: - shift_x_ = 0 - if y0 > h_ymin: - shift_y_ = y0 - h_ymin - else: - shift_y_ = 0 - h += p_up + p_bottom + shift_y_ - w += p_left + p_right + shift_x_ - x0 = min(x0, h_xmin) - y0 = min(y0, h_ymin) - x1 = max(x1, h_xmax) + shift_x_ + p_left + p_right - y1 = max(y1, h_ymax) + shift_y_ + p_up + p_bottom - # Fill the background image - cropped = im_face_kp.crop((x0, y0, x1, y1)) - resize_scale = min((stand_xmax - stand_xmin) / (x1 - x0), - (stand_ymax - stand_ymin) / (y1 - y0)) - h, w = int(h * resize_scale), int(w * resize_scale) - cropped = cropped.resize((w, h), cv2.INTER_LINEAR) - cropped = cv2.cvtColor(np.asarray(cropped), cv2.COLOR_RGB2BGR) - shift_x = int((stand_xmax - stand_xmin - cropped.shape[1]) / 2) - shift_y = int((stand_ymax - stand_ymin - cropped.shape[0]) / 2) - out_image = cv2.imread(background) - e2gray = cv2.cvtColor(cropped, cv2.COLOR_BGR2GRAY) - ret, mask = cv2.threshold(e2gray, 1, 255, cv2.THRESH_BINARY_INV) - mask_inv = cv2.bitwise_not(mask) - roi = out_image[stand_ymin + shift_y:stand_ymin + cropped.shape[ - 0] + shift_y, stand_xmin + shift_x:stand_xmin + cropped.shape[1] + - shift_x] - person_bg = cv2.bitwise_and(roi, roi, mask=mask) - element_fg = cv2.bitwise_and(cropped, cropped, mask=mask_inv) - dst = cv2.add(person_bg, element_fg) - out_image[stand_ymin + shift_y:stand_ymin + cropped.shape[ - 0] + shift_y, stand_xmin + shift_x:stand_xmin + cropped.shape[1] + - shift_x] = dst - - return out_image - - @serving - def serving_method(self, images, background, beard, glasses, hat, **kwargs): - """ - Run as a service. - """ - final = {} - background_path = os.path.join( - self.directory, - 'element_source/background/{}.png'.format(background)) - beard_path = os.path.join(self.directory, - 'element_source/beard/{}.png'.format(beard)) - glasses_path = os.path.join( - self.directory, 'element_source/glasses/{}.png'.format(glasses)) - hat_path = os.path.join(self.directory, - 'element_source/hat/{}.png'.format(hat)) - images_decode = base64_to_cv2(images[0]) - output = self.predict( - image=images_decode, - background=background_path, - hat_file=hat_path, - beard_file=beard_path, - glasses_file=glasses_path, - **kwargs) - final['image'] = cv2_to_base64(output) - - return final diff --git a/static/application/christmas/solov2_blazeface/processor.py b/static/application/christmas/solov2_blazeface/processor.py deleted file mode 100644 index 702985db392dddfe4b0d0b87e5061b8b447433a5..0000000000000000000000000000000000000000 --- a/static/application/christmas/solov2_blazeface/processor.py +++ /dev/null @@ -1,163 +0,0 @@ -# copyright (c) 2020 PaddlePaddle Authors. All Rights Reserve. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
-# See the License for the specific language governing permissions and -# limitations under the License. - -from __future__ import division - -import cv2 -import numpy as np -from PIL import Image -import solov2_blazeface.face_makeup_main as face_makeup_main - - -def visualize_box_mask(im, - results, - threshold=0.5, - beard_file=None, - glasses_file=None, - hat_file=None): - if isinstance(im, str): - im = Image.open(im).convert('RGB') - else: - im = Image.fromarray(im) - - if 'segm' in results: - im, x0, x1, y0, y1, flag_seg = draw_segm( - im, - results['segm'], - results['label'], - results['score'], - threshold=threshold) - return im, x0, x1, y0, y1, 0, 0, 0, 0, flag_seg - if 'landmark' in results: - im, left, right, up, bottom, h_xmin, h_ymin, h_xmax, h_ymax, flag_face = trans_lmk( - im, results['landmark'], beard_file, glasses_file, hat_file) - return im, left, right, up, bottom, h_xmin, h_ymin, h_xmax, h_ymax, flag_face - else: - return im, 0, 0, 0, 0, 0, 0, 0, 0, 0 - - -def draw_segm(im, np_segms, np_label, np_score, threshold=0.5, alpha=0.7): - """ - Draw segmentation on image - """ - im = np.array(im).astype('float32') - np_segms = np_segms.astype(np.uint8) - index_label = np.where(np_label == 0)[0] - index = np.where(np_score[index_label] > threshold)[0] - index = index_label[index] - if index.size == 0: - im = Image.fromarray(im.astype('uint8')) - return im, 0, 0, 0, 0, 0 - person_segms = np_segms[index] - person_mask_single_channel = np.sum(person_segms, axis=0) - person_mask_single_channel[person_mask_single_channel > 1] = 1 - person_mask = np.expand_dims(person_mask_single_channel, axis=2) - person_mask = np.repeat(person_mask, 3, axis=2) - im = im * person_mask - - sum_x = np.sum(person_mask_single_channel, axis=0) - x = np.where(sum_x > 0.5)[0] - sum_y = np.sum(person_mask_single_channel, axis=1) - y = np.where(sum_y > 0.5)[0] - x0, x1, y0, y1 = x[0], x[-1], y[0], y[-1] - - return Image.fromarray(im.astype('uint8')), x0, x1, y0, y1, 1 - - -def lmk2out(bboxes, np_lmk, im_info, threshold=0.5, is_bbox_normalized=True): - image_w, image_h = im_info['origin_shape'] - scale = im_info['scale'] - face_index, landmark, prior_box = np_lmk[:] - xywh_res = [] - if bboxes.shape == (1, 1) or bboxes is None: - return np.array([]) - prior = np.reshape(prior_box, (-1, 4)) - predict_lmk = np.reshape(landmark, (-1, 10)) - k = 0 - for i in range(bboxes.shape[0]): - score = bboxes[i][1] - if score < threshold: - continue - theindex = face_index[i][0] - me_prior = prior[theindex, :] - lmk_pred = predict_lmk[theindex, :] - prior_h = me_prior[2] - me_prior[0] - prior_w = me_prior[3] - me_prior[1] - prior_h_center = (me_prior[2] + me_prior[0]) / 2 - prior_w_center = (me_prior[3] + me_prior[1]) / 2 - lmk_decode = np.zeros((10)) - for j in [0, 2, 4, 6, 8]: - lmk_decode[j] = lmk_pred[j] * 0.1 * prior_w + prior_h_center - for j in [1, 3, 5, 7, 9]: - lmk_decode[j] = lmk_pred[j] * 0.1 * prior_h + prior_w_center - if is_bbox_normalized: - lmk_decode = lmk_decode * np.array([ - image_h, image_w, image_h, image_w, image_h, image_w, image_h, - image_w, image_h, image_w - ]) - xywh_res.append(lmk_decode) - return np.asarray(xywh_res) - - -def post_processing(image, lmk_decode, hat_path, beard_path, glasses_path): - image = cv2.cvtColor(np.asarray(image), cv2.COLOR_RGB2BGR) - p_left, p_right, p_up, p_bottom, h_xmax, h_ymax = [0] * 6 - h_xmin, h_ymin = 10000, 10000 - # Add beard on the face - if beard_path is not None: - image = face_makeup_main.add_beard(image, lmk_decode, beard_path) - # Add glasses on the 
face - if glasses_path is not None: - image = face_makeup_main.add_glasses(image, lmk_decode, glasses_path) - # Add hat on the face - if hat_path is not None: - image, p_left, p_right, p_up, p_bottom, h_xmin, h_ymin, h_xmax, h_ymax = face_makeup_main.add_hat( - image, lmk_decode, hat_path) - image = Image.fromarray(cv2.cvtColor(image, cv2.COLOR_BGR2RGB)) - print('----------- Post Processing Success -----------') - return image, p_left, p_right, p_up, p_bottom, h_xmin, h_ymin, h_xmax, h_ymax - - -def trans_lmk(image, lmk_results, beard_file, glasses_file, hat_file): - p_left, p_right, p_up, p_bottom, h_xmax, h_ymax = [0] * 6 - h_xmin, h_ymin = 10000, 10000 - if lmk_results.shape[0] == 0: - return image, p_left, p_right, p_up, p_bottom, h_xmin, h_ymin, h_xmax, h_ymax, 0 - for lmk_decode in lmk_results: - - x1, y1, x2, y2 = lmk_decode[0], lmk_decode[1], lmk_decode[ - 2], lmk_decode[3] - x4, y4, x5, y5 = lmk_decode[6], lmk_decode[7], lmk_decode[ - 8], lmk_decode[9] - # Refine the order of keypoint - if x1 > x2: - lmk_decode[0], lmk_decode[1], lmk_decode[2], lmk_decode[ - 3] = lmk_decode[2], lmk_decode[3], lmk_decode[0], lmk_decode[1] - if x4 < x5: - lmk_decode[6], lmk_decode[7], lmk_decode[8], lmk_decode[ - 9] = lmk_decode[8], lmk_decode[9], lmk_decode[6], lmk_decode[7] - # Add decoration to the face - image, p_left_temp, p_right_temp, p_up_temp, p_bottom_temp, h_xmin_temp, h_ymin_temp, h_xmax_temp, h_ymax_temp = post_processing( - image, lmk_decode, hat_file, beard_file, glasses_file) - - p_left = max(p_left, p_left_temp) - p_right = max(p_right, p_right_temp) - p_up = max(p_up, p_up_temp) - p_bottom = max(p_bottom, p_bottom_temp) - h_xmin = min(h_xmin, h_xmin_temp) - h_ymin = min(h_ymin, h_ymin_temp) - h_xmax = max(h_xmax, h_xmax_temp) - h_ymax = max(h_ymax, h_ymax_temp) - - return image, p_left, p_right, p_up, p_bottom, h_xmin, h_ymin, h_xmax, h_ymax, 1 diff --git a/static/application/christmas/test_main.py b/static/application/christmas/test_main.py deleted file mode 100644 index ed8f2c375ad5df461063cfc572c88a1379cb681c..0000000000000000000000000000000000000000 --- a/static/application/christmas/test_main.py +++ /dev/null @@ -1,32 +0,0 @@ -# copyright (c) 2020 PaddlePaddle Authors. All Rights Reserve. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. 
- -import paddlehub as hub -import cv2 - -img_file = 'demo_images/test.jpg' -background = 'element_source/background/1.png' -beard_file = 'element_source/beard/1.png' -glasses_file = 'element_source/glasses/4.png' -hat_file = 'element_source/hat/1.png' - -model = hub.Module(name='solov2_blazeface', use_gpu=True) -output = model.predict( - image=img_file, - background=background, - hat_file=hat_file, - beard_file=beard_file, - glasses_file=glasses_file, - visualization=True) -cv2.imwrite("chrismas_final.png", output) diff --git a/static/application/christmas/test_server.py b/static/application/christmas/test_server.py deleted file mode 100644 index 34963b871242199d1a48fff3dc2230ffbbf01db0..0000000000000000000000000000000000000000 --- a/static/application/christmas/test_server.py +++ /dev/null @@ -1,53 +0,0 @@ -# copyright (c) 2020 PaddlePaddle Authors. All Rights Reserve. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. - -import requests -import json -import cv2 -import base64 -import time -import numpy as np - - -def cv2_to_base64(image): - data = cv2.imencode('.jpg', image)[1] - return base64.b64encode(data.tostring()).decode('utf8') - - -def base64_to_cv2(b64str): - data = base64.b64decode(b64str.encode('utf8')) - data = np.fromstring(data, np.uint8) - data = cv2.imdecode(data, cv2.IMREAD_COLOR) - return data - - -# Send HTTP request -org_im = cv2.cvtColor(cv2.imread('demo_images/test.jpg'), cv2.COLOR_BGR2RGB) -h, w, c = org_im.shape -hat_ids = 1 -data = { - 'images': [cv2_to_base64(org_im)], - 'background': 3, - "beard": 2, - "glasses": 3, - "hat": 3 -} -headers = {"Content-type": "application/json"} -url = "http://127.0.0.1:8880/predict/solov2_blazeface" -start = time.time() -r = requests.post(url=url, headers=headers, data=json.dumps(data)) -end = time.time() -print('cost:', end - start) -result = base64_to_cv2(r.json()["results"]['image']) -cv2.imwrite("chrismas_final.png", result) diff --git a/static/configs/acfpn/README.md b/static/configs/acfpn/README.md deleted file mode 100644 index 802d61478150af1b22e1e94fa53b30f2ed905cbc..0000000000000000000000000000000000000000 --- a/static/configs/acfpn/README.md +++ /dev/null @@ -1,16 +0,0 @@ -# Attention-guided Context Feature Pyramid Network for Object Detection - -## Introduction - -- Attention-guided Context Feature Pyramid Network for Object Detection: [https://arxiv.org/abs/2005.11475](https://arxiv.org/abs/2005.11475) - -``` -Cao J, Chen Q, Guo J, et al. Attention-guided Context Feature Pyramid Network for Object Detection[J]. arXiv preprint arXiv:2005.11475, 2020. 
-```
-
-
-## Model Zoo
-
-| Backbone                | Type            | Image/gpu | Lr schd | Inf time (fps) | Box AP | Mask AP | Download | Configs |
-| :---------------------- | :-------------: | :-------: | :-----: | :------------: | :----: | :-----: | :----------------------------------------------------------: | :-----: |
-| ResNet50-vd-ACFPN       | Faster          | 2         | 1x      | 23.432         | 39.6   | -       | [model](https://paddlemodels.bj.bcebos.com/object_detection/faster_rcnn_r50_vd_acfpn_1x.pdparams) | [config](https://github.com/PaddlePaddle/PaddleDetection/blob/develop/static/configs/acfpn/faster_rcnn_r50_vd_acfpn_1x.yml) |
diff --git a/static/configs/acfpn/faster_rcnn_r50_vd_acfpn_1x.yml b/static/configs/acfpn/faster_rcnn_r50_vd_acfpn_1x.yml
deleted file mode 100644
index 2696be3f8b4a621ba185271d2c62f3460234991e..0000000000000000000000000000000000000000
--- a/static/configs/acfpn/faster_rcnn_r50_vd_acfpn_1x.yml
+++ /dev/null
@@ -1,107 +0,0 @@
-architecture: FasterRCNN
-max_iters: 90000
-snapshot_iter: 10000
-use_gpu: true
-log_iter: 20
-save_dir: output
-pretrain_weights: https://paddle-imagenet-models-name.bj.bcebos.com/ResNet50_vd_pretrained.tar
-weights: output/faster_rcnn_r50_vd_acfpn_1x/model_final
-metric: COCO
-num_classes: 81
-
-FasterRCNN:
-  backbone: ResNet
-  fpn: ACFPN
-  rpn_head: FPNRPNHead
-  roi_extractor: FPNRoIAlign
-  bbox_head: BBoxHead
-  bbox_assigner: BBoxAssigner
-
-ResNet:
-  depth: 50
-  feature_maps: [2, 3, 4, 5]
-  freeze_at: 2
-  norm_type: bn
-  variant: d
-
-ACFPN:
-  max_level: 6
-  min_level: 2
-  num_chan: 256
-  spatial_scale: [0.03125, 0.0625, 0.125, 0.25]
-  norm_groups: 32
-
-FPNRPNHead:
-  anchor_generator:
-    anchor_sizes: [32, 64, 128, 256, 512]
-    aspect_ratios: [0.5, 1.0, 2.0]
-    stride: [16.0, 16.0]
-    variance: [1.0, 1.0, 1.0, 1.0]
-  anchor_start_size: 32
-  max_level: 6
-  min_level: 2
-  num_chan: 256
-  rpn_target_assign:
-    rpn_batch_size_per_im: 256
-    rpn_fg_fraction: 0.5
-    rpn_negative_overlap: 0.3
-    rpn_positive_overlap: 0.7
-    rpn_straddle_thresh: 0.0
-  train_proposal:
-    min_size: 0.0
-    nms_thresh: 0.7
-    post_nms_top_n: 2000
-    pre_nms_top_n: 2000
-  test_proposal:
-    min_size: 0.0
-    nms_thresh: 0.7
-    post_nms_top_n: 1000
-    pre_nms_top_n: 1000
-
-FPNRoIAlign:
-  canconical_level: 4
-  canonical_size: 224
-  max_level: 5
-  min_level: 2
-  box_resolution: 7
-  sampling_ratio: 2
-
-BBoxAssigner:
-  batch_size_per_im: 512
-  bbox_reg_weights: [0.1, 0.1, 0.2, 0.2]
-  bg_thresh_hi: 0.5
-  bg_thresh_lo: 0.0
-  fg_fraction: 0.25
-  fg_thresh: 0.5
-
-BBoxHead:
-  head: TwoFCHead
-  nms:
-    keep_top_k: 100
-    nms_threshold: 0.5
-    score_threshold: 0.05
-
-TwoFCHead:
-  mlp_dim: 1024
-
-LearningRate:
-  base_lr: 0.02
-  schedulers:
-  - !PiecewiseDecay
-    gamma: 0.1
-    milestones: [60000, 80000]
-  - !LinearWarmup
-    start_factor: 0.1
-    steps: 500
-
-OptimizerBuilder:
-  optimizer:
-    momentum: 0.9
-    type: Momentum
-  regularizer:
-    factor: 0.0001
-    type: L2
-
-_READER_: '../faster_fpn_reader.yml'
-TrainReader:
-  batch_size: 2
diff --git a/static/configs/anchor_free/README.md b/static/configs/anchor_free/README.md
deleted file mode 100644
index 15b1717be3f156ca92b490fe038dd979a95a502e..0000000000000000000000000000000000000000
--- a/static/configs/anchor_free/README.md
+++ /dev/null
@@ -1,80 +0,0 @@
-# Anchor-Free Models
-
-## Contents
-- [Introduction](#introduction)
-- [Model Zoo and Baselines](#model-zoo-and-baselines)
-- [Algorithm Details](#algorithm-details)
-- [How to Contribute](#how-to-contribute)
-
-## Introduction
-Mainstream detection algorithms fall roughly into two categories: single-stage and two-stage. Classic single-stage algorithms include SSD and YOLO, while the two-stage side covers the RCNN family; both are available in the [PaddleDetection Model Zoo](../../docs/MODEL_ZOO.md). What they share is a dense set of predefined anchor regions of various sizes, on top of which classification and regression are performed, so they are strongly constrained by the anchor design itself. Since CornerNet was proposed, a variety of anchor-free methods have emerged, and PaddleDetection integrates a series of anchor-free algorithms as well.
-
-## Model Zoo and Baselines
-The table below shows the network architectures currently supported by PaddleDetection; see [Algorithm Details](#algorithm-details) for specifics.
-
-|                          | ResNet50 | ResNet50-vd | Hourglass104 | DarkNet53 |
-|:------------------------:|:--------:|:-----------:|:------------:|:---------:|
-| [CornerNet-Squeeze](#CornerNet-Squeeze) | x | ✓ | ✓ | x |
-| [FCOS](#FCOS) | ✓ | x | x | x |
-| [TTFNet](#TTFNet) | x | x | x | ✓ |
-
-
-### Model Zoo
-
-#### mAP on the COCO dataset
-
-| Architecture | Backbone | Images/GPU | Pretrained model | mAP | FPS | Download | Config |
-|:------------:|:--------:|:----:|:-------:|:-------:|:---------:|:----------:|:----------:|
-| CornerNet-Squeeze | Hourglass104 | 14 | None | 34.5 | 35.5 | [model](https://paddlemodels.bj.bcebos.com/object_detection/cornernet_squeeze_hg104.tar) | [config](https://github.com/PaddlePaddle/PaddleDetection/tree/develop/static/configs/anchor_free/cornernet_squeeze_hg104.yml) |
-| CornerNet-Squeeze | ResNet50-vd | 14 | [faster\_rcnn\_r50\_vd\_fpn\_2x](https://paddlemodels.bj.bcebos.com/object_detection/faster_rcnn_r50_vd_fpn_2x.tar) | 32.7 | 47.01 | [model](https://paddlemodels.bj.bcebos.com/object_detection/cornernet_squeeze_r50_vd_fpn.tar) | [config](https://github.com/PaddlePaddle/PaddleDetection/tree/develop/static/configs/anchor_free/cornernet_squeeze_r50_vd_fpn.yml) |
-| CornerNet-Squeeze-dcn | ResNet50-vd | 14 | [faster\_rcnn\_dcn\_r50\_vd\_fpn\_2x](https://paddlemodels.bj.bcebos.com/object_detection/faster_rcnn_dcn_r50_vd_fpn_2x.tar) | 34.9 | 40.43 | [model](https://paddlemodels.bj.bcebos.com/object_detection/cornernet_squeeze_dcn_r50_vd_fpn.tar) | [config](https://github.com/PaddlePaddle/PaddleDetection/tree/develop/static/configs/anchor_free/cornernet_squeeze_dcn_r50_vd_fpn.yml) |
-| CornerNet-Squeeze-dcn-mixup-cosine* | ResNet50-vd | 14 | [faster\_rcnn\_dcn\_r50\_vd\_fpn\_2x](https://paddlemodels.bj.bcebos.com/object_detection/faster_rcnn_dcn_r50_vd_fpn_2x.tar) | 38.2 | 39.70 | [model](https://paddlemodels.bj.bcebos.com/object_detection/cornernet_squeeze_dcn_r50_vd_fpn_mixup_cosine.pdparams) | [config](https://github.com/PaddlePaddle/PaddleDetection/tree/develop/static/configs/anchor_free/cornernet_squeeze_dcn_r50_vd_fpn_mixup_cosine.yml) |
-| FCOS | ResNet50 | 2 | [ResNet50\_cos\_pretrained](https://paddle-imagenet-models-name.bj.bcebos.com/ResNet50_cos_pretrained.tar) | 39.8 | 18.85 | [model](https://paddlemodels.bj.bcebos.com/object_detection/fcos_r50_fpn_1x.pdparams) | [config](https://github.com/PaddlePaddle/PaddleDetection/tree/develop/static/configs/anchor_free/fcos_r50_fpn_1x.yml) |
-| FCOS+multiscale_train | ResNet50 | 2 | [ResNet50\_cos\_pretrained](https://paddle-imagenet-models-name.bj.bcebos.com/ResNet50_cos_pretrained.tar) | 42.0 | 19.05 | [model](https://paddlemodels.bj.bcebos.com/object_detection/fcos_r50_fpn_multiscale_2x.pdparams) | [config](https://github.com/PaddlePaddle/PaddleDetection/tree/develop/static/configs/anchor_free/fcos_r50_fpn_multiscale_2x.yml) |
-| FCOS+DCN | ResNet50 | 2 | [ResNet50\_cos\_pretrained](https://paddle-imagenet-models-name.bj.bcebos.com/ResNet50_cos_pretrained.tar) | 44.4 | 13.66 | [model](https://paddlemodels.bj.bcebos.com/object_detection/fcos_dcn_r50_fpn_1x.pdparams) | [config](https://github.com/PaddlePaddle/PaddleDetection/tree/develop/static/configs/anchor_free/fcos_dcn_r50_fpn_1x.yml) |
-| TTFNet | DarkNet53 | 12 | [DarkNet53_pretrained](https://paddle-imagenet-models-name.bj.bcebos.com/DarkNet53_pretrained.tar) | 32.9 | 85.92 | [model](https://paddlemodels.bj.bcebos.com/object_detection/ttfnet_darknet.pdparams) | [config](https://github.com/PaddlePaddle/PaddleDetection/tree/develop/static/configs/anchor_free/ttfnet_darknet.yml) |
-
-**Notes:**
-
-- Model FPS is measured with tools/eval.py on a single Tesla V100 GPU.
-- CornerNet-Squeeze requires PaddlePaddle 1.8 or above, or an appropriate develop build.
-- When CornerNet-Squeeze uses a ResNet backbone, an FPN is added, and the P3 level of the FPN is used as the backbone's output feature map.
-- \*CornerNet-Squeeze-dcn-mixup-cosine is the best-performing variant tuned from the original CornerNet-Squeeze: on top of the ResNet backbone it adds mixup preprocessing and uses cosine_decay.
-- FCOS uses GIoU loss, predicts centerness from the location branch, normalizes the top-left/bottom-right corner offsets, and adopts a ground-truth center matching strategy.
-- CornerNet-Squeeze depends on the corner_pooling op, which is compiled in ```ppdet/ext_op```; see [how to build custom OPs](../../ppdet/ext_op/README.md) for details.
-
-## Algorithm Details
-
-### CornerNet-Squeeze
-
-**Introduction:** [CornerNet-Squeeze](https://arxiv.org/abs/1904.08900) improves on [CornerNet](https://arxiv.org/abs/1808.01244) by predicting the top-left and bottom-right corners of the target box; drawing on ideas from SqueezeNet and MobileNet, it also streamlines CornerNet's Hourglass-104 backbone, which greatly speeds up inference. Compared with the original [YOLO-v3](https://arxiv.org/abs/1804.02767), it has an edge in both training accuracy and inference speed.
-
-**Highlights:**
-
-- Uses corner_pooling to locate the top-left and bottom-right corners of candidate boxes
-- Replaces the residual blocks in Hourglass-104 with the fire modules from SqueezeNet
-- Replaces the second 3x3 convolution with a 3x3 depthwise separable convolution
-
-
-### FCOS
-
-**Introduction:** [FCOS](https://arxiv.org/abs/1904.01355) is a dense-prediction anchor-free detector. On top of the RetinaNet skeleton, it regresses the width and height of the target object directly on the feature map, and predicts the object category together with centerness (how far a feature-map pixel is offset from the object center); the centerness is finally used as a weight to rescale the object score.
-
-**Highlights:**
-
-- Uses the FPN structure to predict objects of different scales at different levels, avoiding multiple overlapping object boxes at the same feature-map pixel
-- Predicts through a single center-ness branch whether the current point is an object center, suppressing low-quality false detections (see the sketch below)
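
To make the centerness mentioned above concrete: the FCOS paper defines it from the distances l, t, r, b between a location and the four sides of its ground-truth box. A minimal illustrative sketch (not code from this repository):

```python
import numpy as np

def centerness(l, t, r, b):
    """FCOS centerness target: the square root of the product of the
    min/max ratios of the horizontal and vertical side distances.
    Equals 1 at the box center and decays towards 0 near the borders."""
    return np.sqrt((min(l, r) / max(l, r)) * (min(t, b) / max(t, b)))

print(centerness(50, 50, 50, 50))  # 1.0  -> point at the exact center
print(centerness(10, 50, 90, 50))  # ~0.33 -> point far off-center horizontally
```
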
-
-### TTFNet
-
-**Introduction:** [TTFNet](https://arxiv.org/abs/1909.00700) is a real-time detection network designed to be training-time friendly. It addresses the slow convergence of CenterNet by proposing a new way of generating training samples with Gaussian kernels, which effectively removes the ambiguity in the anchor-free head. Its simple, lightweight structure also makes it easy to extend to other tasks.
-
-**Highlights:**
-
-- Simple structure: only two heads are needed to detect object location and size, and the time-consuming post-processing step is removed
-- Short training time: with a DarkNet53 backbone, about 2 hours of training on 8 V100 GPUs is enough to reach good accuracy
-
-## How to Contribute
-Contributions of code for the anchor-free detection models in PaddleDetection are very welcome: you can submit a PR for us to review. Feedback is equally appreciated; please file an issue and we will answer promptly.
diff --git a/static/configs/anchor_free/cornernet_squeeze_dcn_r50_vd_fpn.yml b/static/configs/anchor_free/cornernet_squeeze_dcn_r50_vd_fpn.yml
deleted file mode 100644
index 75ea874020f77de5c24299a5d141dfba700b2e2c..0000000000000000000000000000000000000000
--- a/static/configs/anchor_free/cornernet_squeeze_dcn_r50_vd_fpn.yml
+++ /dev/null
@@ -1,163 +0,0 @@
-architecture: CornerNetSqueeze
-use_gpu: true
-max_iters: 500000
-log_iter: 20
-save_dir: output
-snapshot_iter: 10000
-metric: COCO
-pretrain_weights: https://paddlemodels.bj.bcebos.com/object_detection/faster_rcnn_dcn_r50_vd_fpn_2x.tar
-weights: output/cornernet_squeeze_dcn_r50_vd_fpn/model_final
-num_classes: 80
-stack: 1
-
-CornerNetSqueeze:
-  backbone: ResNet
-  fpn: FPN
-  corner_head: CornerHead
-
-ResNet:
-  norm_type: bn
-  depth: 50
-  feature_maps: [3, 4, 5]
-  freeze_at: 2
-  variant: d
-  dcn_v2_stages: [3, 4, 5]
-
-FPN:
-  min_level: 3
-  max_level: 6
-  num_chan: 256
-  spatial_scale: [0.03125, 0.0625, 0.125]
-
-CornerHead:
-  test_batch_size: 1
-  ae_threshold: 0.5
-  num_dets: 100
-  top_k: 20
-
-PostProcess:
-  use_soft_nms: true
-  detections_per_im: 100
-  nms_thresh: 0.001
-  sigma: 0.5
-
-LearningRate:
-  base_lr: 0.0005
-  schedulers:
-  - !PiecewiseDecay
-    gamma: 0.1
-    milestones:
-    - 400000
-    - 450000
-  - !LinearWarmup
-    start_factor: 0.
- steps: 4000 - -OptimizerBuilder: - optimizer: - momentum: 0.9 - type: Momentum - regularizer: - factor: 0.0005 - type: L2 - -TrainReader: - inputs_def: - image_shape: [3, 511, 511] - fields: ['image', 'im_id', 'gt_bbox', 'gt_class', 'tl_heatmaps', 'br_heatmaps', 'tl_regrs', 'br_regrs', 'tl_tags', 'br_tags', 'tag_masks'] - output_size: 64 - dataset: - !COCODataSet - image_dir: train2017 - anno_path: annotations/instances_train2017.json - dataset_dir: dataset/coco - with_background: false - sample_transforms: - - !DecodeImage - to_rgb: False - - !CornerCrop - input_size: 511 - - !Resize - target_dim: 511 - - !RandomFlipImage - prob: 0.5 - - !CornerRandColor - saturation: 0.4 - contrast: 0.4 - brightness: 0.4 - - !Lighting - eigval: [0.2141788, 0.01817699, 0.00341571] - eigvec: [[-0.58752847, -0.69563484, 0.41340352], - [-0.5832747, 0.00994535, -0.81221408], - [-0.56089297, 0.71832671, 0.41158938]] - - !NormalizeImage - mean: [0.40789654, 0.44719302, 0.47026115] - std: [0.28863828, 0.27408164, 0.2780983] - is_scale: False - is_channel_first: False - - !Permute - to_bgr: False - - !CornerTarget - output_size: [64, 64] - num_classes: 80 - batch_size: 14 - shuffle: true - drop_last: true - worker_num: 2 - use_process: true - drop_empty: false - -EvalReader: - inputs_def: - fields: ['image', 'im_id', 'ratios', 'borders'] - output_size: 64 - dataset: - !COCODataSet - image_dir: val2017 - anno_path: annotations/instances_val2017.json - dataset_dir: dataset/coco - with_background: false - sample_transforms: - - !DecodeImage - to_rgb: false - - !CornerCrop - is_train: false - - !CornerRatio - input_size: 511 - output_size: 64 - - !Permute - to_bgr: False - - !NormalizeImage - mean: [0.40789654, 0.44719302, 0.47026115] - std: [0.28863828, 0.27408164, 0.2780983] - is_scale: True - is_channel_first: True - use_process: true - batch_size: 1 - drop_empty: false - worker_num: 2 - -TestReader: - inputs_def: - fields: ['image', 'im_id', 'ratios', 'borders'] - output_size: 64 - dataset: - !ImageFolder - anno_path: annotations/instances_val2017.json - with_background: false - sample_transforms: - - !DecodeImage - to_rgb: false - - !CornerCrop - is_train: false - - !CornerRatio - input_size: 511 - output_size: 64 - - !Permute - to_bgr: False - - !NormalizeImage - mean: [0.40789654, 0.44719302, 0.47026115] - std: [0.28863828, 0.27408164, 0.2780983] - is_scale: True - is_channel_first: True - batch_size: 1 diff --git a/static/configs/anchor_free/cornernet_squeeze_dcn_r50_vd_fpn_mixup_cosine.yml b/static/configs/anchor_free/cornernet_squeeze_dcn_r50_vd_fpn_mixup_cosine.yml deleted file mode 100644 index 823d702a44f1a3dc082a79aeb107a5268304720e..0000000000000000000000000000000000000000 --- a/static/configs/anchor_free/cornernet_squeeze_dcn_r50_vd_fpn_mixup_cosine.yml +++ /dev/null @@ -1,167 +0,0 @@ -architecture: CornerNetSqueeze -use_gpu: true -max_iters: 500000 -log_iter: 20 -save_dir: output -snapshot_iter: 10000 -metric: COCO -pretrain_weights: https://paddlemodels.bj.bcebos.com/object_detection/faster_rcnn_dcn_r50_vd_fpn_2x.tar -weights: output/cornernet_squeeze_dcn_r50_vd_fpn_mixup_cosine/model_final -num_classes: 80 -stack: 1 - -CornerNetSqueeze: - backbone: ResNet - fpn: FPN - corner_head: CornerHead - -ResNet: - norm_type: bn - depth: 50 - feature_maps: [3, 4, 5] - freeze_at: 2 - variant: d - dcn_v2_stages: [3, 4, 5] - -FPN: - min_level: 3 - max_level: 6 - num_chan: 256 - spatial_scale: [0.03125, 0.0625, 0.125] - -CornerHead: - test_batch_size: 1 - ae_threshold: 0.5 - num_dets: 100 - top_k: 20 - 
-PostProcess: - use_soft_nms: true - detections_per_im: 100 - nms_thresh: 0.001 - sigma: 0.5 - -LearningRate: - base_lr: 0.005 - schedulers: - - !CosineDecay - max_iters: 500000 - - !LinearWarmup - start_factor: 0. - steps: 4000 - -OptimizerBuilder: - optimizer: - momentum: 0.9 - type: Momentum - regularizer: - factor: 0.0005 - type: L2 - -TrainReader: - inputs_def: - image_shape: [3, 511, 511] - fields: ['image', 'im_id', 'gt_bbox', 'gt_class', 'tl_heatmaps', 'br_heatmaps', 'tl_regrs', 'br_regrs', 'tl_tags', 'br_tags', 'tag_masks'] - output_size: 64 - max_tag_len: 256 - dataset: - !COCODataSet - image_dir: train2017 - anno_path: annotations/instances_train2017.json - dataset_dir: dataset/coco - with_background: false - sample_transforms: - - !DecodeImage - to_rgb: False - with_mixup: True - - !MixupImage - alpha: 1.5 - beta: 1.5 - - !CornerCrop - input_size: 511 - - !Resize - target_dim: 511 - - !RandomFlipImage - prob: 0.5 - - !CornerRandColor - saturation: 0.4 - contrast: 0.4 - brightness: 0.4 - - !Lighting - eigval: [0.2141788, 0.01817699, 0.00341571] - eigvec: [[-0.58752847, -0.69563484, 0.41340352], - [-0.5832747, 0.00994535, -0.81221408], - [-0.56089297, 0.71832671, 0.41158938]] - - !NormalizeImage - mean: [0.40789654, 0.44719302, 0.47026115] - std: [0.28863828, 0.27408164, 0.2780983] - is_scale: False - is_channel_first: False - - !Permute - to_bgr: False - - !CornerTarget - output_size: [64, 64] - num_classes: 80 - max_tag_len: 256 - batch_size: 14 - shuffle: true - drop_last: true - worker_num: 2 - use_process: true - drop_empty: false - mixup_epoch: 200 - -EvalReader: - inputs_def: - fields: ['image', 'im_id', 'ratios', 'borders'] - output_size: 64 - dataset: - !COCODataSet - image_dir: val2017 - anno_path: annotations/instances_val2017.json - dataset_dir: dataset/coco - with_background: false - sample_transforms: - - !DecodeImage - to_rgb: false - - !CornerCrop - is_train: false - - !CornerRatio - input_size: 511 - output_size: 64 - - !Permute - to_bgr: False - - !NormalizeImage - mean: [0.40789654, 0.44719302, 0.47026115] - std: [0.28863828, 0.27408164, 0.2780983] - is_scale: True - is_channel_first: True - use_process: true - batch_size: 1 - drop_empty: false - worker_num: 2 - -TestReader: - inputs_def: - fields: ['image', 'im_id', 'ratios', 'borders'] - output_size: 64 - dataset: - !ImageFolder - anno_path: annotations/instances_val2017.json - with_background: false - sample_transforms: - - !DecodeImage - to_rgb: false - - !CornerCrop - is_train: false - - !CornerRatio - input_size: 511 - output_size: 64 - - !Permute - to_bgr: False - - !NormalizeImage - mean: [0.40789654, 0.44719302, 0.47026115] - std: [0.28863828, 0.27408164, 0.2780983] - is_scale: True - is_channel_first: True - batch_size: 1 diff --git a/static/configs/anchor_free/cornernet_squeeze_hg104.yml b/static/configs/anchor_free/cornernet_squeeze_hg104.yml deleted file mode 100644 index 8e08dccadd71c920635398179320db5b7961403f..0000000000000000000000000000000000000000 --- a/static/configs/anchor_free/cornernet_squeeze_hg104.yml +++ /dev/null @@ -1,145 +0,0 @@ -architecture: CornerNetSqueeze -use_gpu: true -max_iters: 500000 -log_iter: 20 -save_dir: output -snapshot_iter: 10000 -metric: COCO -pretrain_weights: NULL -weights: output/cornernet_squeeze_hg104/model_final -num_classes: 80 -stack: 2 - -CornerNetSqueeze: - backbone: Hourglass - corner_head: CornerHead - -Hourglass: - dims: [256, 256, 384, 384, 512] - modules: [2, 2, 2, 2, 4] - -CornerHead: - test_batch_size: 1 - ae_threshold: 0.5 - num_dets: 100 - top_k: 
20 - -PostProcess: - use_soft_nms: true - detections_per_im: 100 - nms_thresh: 0.001 - sigma: 0.5 - -LearningRate: - base_lr: 0.00025 - schedulers: - - !PiecewiseDecay - gamma: 0.1 - milestones: - - 450000 - -OptimizerBuilder: - optimizer: - type: Adam - regularizer: NULL - -TrainReader: - inputs_def: - image_shape: [3, 511, 511] - fields: ['image', 'im_id', 'gt_bbox', 'gt_class', 'tl_heatmaps', 'br_heatmaps', 'tl_regrs', 'br_regrs', 'tl_tags', 'br_tags', 'tag_masks'] - output_size: 64 - dataset: - !COCODataSet - image_dir: train2017 - anno_path: annotations/instances_train2017.json - dataset_dir: dataset/coco - with_background: false - sample_transforms: - - !DecodeImage - to_rgb: False - - !CornerCrop - input_size: 511 - - !Resize - target_dim: 511 - - !RandomFlipImage - prob: 0.5 - - !CornerRandColor - saturation: 0.4 - contrast: 0.4 - brightness: 0.4 - - !Lighting - eigval: [0.2141788, 0.01817699, 0.00341571] - eigvec: [[-0.58752847, -0.69563484, 0.41340352], - [-0.5832747, 0.00994535, -0.81221408], - [-0.56089297, 0.71832671, 0.41158938]] - - !NormalizeImage - mean: [0.40789654, 0.44719302, 0.47026115] - std: [0.28863828, 0.27408164, 0.2780983] - is_scale: False - is_channel_first: False - - !Permute - to_bgr: False - - !CornerTarget - output_size: [64, 64] - num_classes: 80 - batch_size: 14 - shuffle: true - drop_last: true - worker_num: 2 - use_process: true - drop_empty: false - -EvalReader: - inputs_def: - fields: ['image', 'im_id', 'ratios', 'borders'] - output_size: 64 - dataset: - !COCODataSet - image_dir: val2017 - anno_path: annotations/instances_val2017.json - dataset_dir: dataset/coco - with_background: false - sample_transforms: - - !DecodeImage - to_rgb: false - - !CornerCrop - is_train: false - - !CornerRatio - input_size: 511 - output_size: 64 - - !Permute - to_bgr: False - - !NormalizeImage - mean: [0.40789654, 0.44719302, 0.47026115] - std: [0.28863828, 0.27408164, 0.2780983] - is_scale: True - is_channel_first: True - batch_size: 1 - drop_empty: false - worker_num: 2 - use_process: true - -TestReader: - inputs_def: - fields: ['image', 'im_id', 'ratios', 'borders'] - output_size: 64 - dataset: - !ImageFolder - anno_path: annotations/instances_val2017.json - with_background: false - sample_transforms: - - !DecodeImage - to_rgb: false - - !CornerCrop - is_train: false - - !CornerRatio - input_size: 511 - output_size: 64 - - !Permute - to_bgr: False - - !NormalizeImage - mean: [0.40789654, 0.44719302, 0.47026115] - std: [0.28863828, 0.27408164, 0.2780983] - is_scale: True - is_channel_first: True - batch_size: 1 diff --git a/static/configs/anchor_free/cornernet_squeeze_r50_vd_fpn.yml b/static/configs/anchor_free/cornernet_squeeze_r50_vd_fpn.yml deleted file mode 100644 index 42a84271e85cd4e15a5e568e4553364dcb7e7ece..0000000000000000000000000000000000000000 --- a/static/configs/anchor_free/cornernet_squeeze_r50_vd_fpn.yml +++ /dev/null @@ -1,155 +0,0 @@ -architecture: CornerNetSqueeze -use_gpu: true -max_iters: 500000 -log_iter: 20 -save_dir: output -snapshot_iter: 10000 -metric: COCO -pretrain_weights: https://paddlemodels.bj.bcebos.com/object_detection/faster_rcnn_dcn_r50_vd_fpn_2x.tar -weights: output/cornernet_squeeze_r50_vd_fpn/model_final -num_classes: 80 -stack: 1 - -CornerNetSqueeze: - backbone: ResNet - fpn: FPN - corner_head: CornerHead - -ResNet: - norm_type: affine_channel - depth: 50 - feature_maps: [3, 4, 5] - freeze_at: 2 - variant: d - -FPN: - min_level: 3 - max_level: 6 - num_chan: 256 - spatial_scale: [0.03125, 0.0625, 0.125] - -CornerHead: - 
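-  # ae_threshold bounds the associative-embedding distance allowed when pairing a top-left corner with a bottom-right corner into one box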
test_batch_size: 1 - ae_threshold: 0.5 - num_dets: 100 - top_k: 20 - -PostProcess: - use_soft_nms: true - detections_per_im: 100 - nms_thresh: 0.001 - sigma: 0.5 - -LearningRate: - base_lr: 0.0005 - schedulers: - - !PiecewiseDecay - gamma: 0.1 - milestones: - - 450000 - -OptimizerBuilder: - optimizer: - type: Adam - regularizer: NULL - -TrainReader: - inputs_def: - image_shape: [3, 511, 511] - fields: ['image', 'im_id', 'gt_bbox', 'gt_class', 'tl_heatmaps', 'br_heatmaps', 'tl_regrs', 'br_regrs', 'tl_tags', 'br_tags', 'tag_masks'] - output_size: 64 - dataset: - !COCODataSet - image_dir: train2017 - anno_path: annotations/instances_train2017.json - dataset_dir: dataset/coco - with_background: false - sample_transforms: - - !DecodeImage - to_rgb: False - - !CornerCrop - input_size: 511 - - !Resize - target_dim: 511 - - !RandomFlipImage - prob: 0.5 - - !CornerRandColor - saturation: 0.4 - contrast: 0.4 - brightness: 0.4 - - !Lighting - eigval: [0.2141788, 0.01817699, 0.00341571] - eigvec: [[-0.58752847, -0.69563484, 0.41340352], - [-0.5832747, 0.00994535, -0.81221408], - [-0.56089297, 0.71832671, 0.41158938]] - - !NormalizeImage - mean: [0.40789654, 0.44719302, 0.47026115] - std: [0.28863828, 0.27408164, 0.2780983] - is_scale: False - is_channel_first: False - - !Permute - to_bgr: False - - !CornerTarget - output_size: [64, 64] - num_classes: 80 - batch_size: 14 - shuffle: true - drop_last: true - worker_num: 2 - use_process: true - drop_empty: false - -EvalReader: - inputs_def: - fields: ['image', 'im_id', 'ratios', 'borders'] - output_size: 64 - dataset: - !COCODataSet - image_dir: val2017 - anno_path: annotations/instances_val2017.json - dataset_dir: dataset/coco - with_background: false - sample_transforms: - - !DecodeImage - to_rgb: false - - !CornerCrop - is_train: false - - !CornerRatio - input_size: 511 - output_size: 64 - - !Permute - to_bgr: False - - !NormalizeImage - mean: [0.40789654, 0.44719302, 0.47026115] - std: [0.28863828, 0.27408164, 0.2780983] - is_scale: True - is_channel_first: True - use_process: true - batch_size: 1 - drop_empty: false - worker_num: 2 - -TestReader: - inputs_def: - fields: ['image', 'im_id', 'ratios', 'borders'] - output_size: 64 - dataset: - !ImageFolder - anno_path: annotations/instances_val2017.json - with_background: false - sample_transforms: - - !DecodeImage - to_rgb: false - - !CornerCrop - is_train: false - - !CornerRatio - input_size: 511 - output_size: 64 - - !Permute - to_bgr: False - - !NormalizeImage - mean: [0.40789654, 0.44719302, 0.47026115] - std: [0.28863828, 0.27408164, 0.2780983] - is_scale: True - is_channel_first: True - batch_size: 1 diff --git a/static/configs/anchor_free/fcos_dcn_r50_fpn_1x.yml b/static/configs/anchor_free/fcos_dcn_r50_fpn_1x.yml deleted file mode 100644 index af6f265a44b8482c1e52b1ab5e6d42f823f08cce..0000000000000000000000000000000000000000 --- a/static/configs/anchor_free/fcos_dcn_r50_fpn_1x.yml +++ /dev/null @@ -1,181 +0,0 @@ -architecture: FCOS -max_iters: 90000 -use_gpu: true -snapshot_iter: 5000 -log_iter: 20 -save_dir: output -pretrain_weights: https://paddle-imagenet-models-name.bj.bcebos.com/ResNet50_cos_pretrained.tar -metric: COCO -weights: output/fcos_dcn_r50_fpn_1x/model_final -num_classes: 80 - -FCOS: - backbone: ResNet - fpn: FPN - fcos_head: FCOSHead - -ResNet: - norm_type: affine_channel - norm_decay: 0. 
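-  # ResNet-50 backbone; deformable convolutions (DCN v2) are enabled in stages 3-5 via dcn_v2_stages below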
- depth: 50 - feature_maps: [3, 4, 5] - freeze_at: 2 - dcn_v2_stages: [3, 4, 5] - -FPN: - min_level: 3 - max_level: 7 - num_chan: 256 - use_c5: false - spatial_scale: [0.03125, 0.0625, 0.125] - has_extra_convs: true - -FCOSHead: - num_classes: 80 - fpn_stride: [8, 16, 32, 64, 128] - num_convs: 4 - norm_type: "gn" - fcos_loss: FCOSLoss - norm_reg_targets: True - centerness_on_reg: True - use_dcn_in_tower: True - nms: MultiClassNMS - -MultiClassNMS: - score_threshold: 0.025 - nms_top_k: 1000 - keep_top_k: 100 - nms_threshold: 0.6 - background_label: -1 - -FCOSLoss: - loss_alpha: 0.25 - loss_gamma: 2.0 - iou_loss_type: "giou" - reg_weights: 1.0 - -LearningRate: - base_lr: 0.01 - schedulers: - - !PiecewiseDecay - gamma: 0.1 - milestones: [60000, 80000] - - !LinearWarmup - start_factor: 0.3333333333333333 - steps: 500 - -OptimizerBuilder: - optimizer: - momentum: 0.9 - type: Momentum - regularizer: - factor: 0.0001 - type: L2 - -TrainReader: - inputs_def: - fields: ['image', 'im_info', 'fcos_target'] - dataset: - !COCODataSet - image_dir: train2017 - anno_path: annotations/instances_train2017.json - dataset_dir: dataset/coco - with_background: false - sample_transforms: - - !DecodeImage - to_rgb: true - - !RandomFlipImage - prob: 0.5 - - !NormalizeImage - is_channel_first: false - is_scale: true - mean: [0.485,0.456,0.406] - std: [0.229, 0.224,0.225] - - !ResizeImage - target_size: 800 - max_size: 1333 - interp: 1 - use_cv2: true - - !Permute - to_bgr: false - channel_first: true - batch_transforms: - - !PadBatch - pad_to_stride: 128 - use_padded_im_info: false - - !Gt2FCOSTarget - object_sizes_boundary: [64, 128, 256, 512] - center_sampling_radius: 1.5 - downsample_ratios: [8, 16, 32, 64, 128] - norm_reg_targets: True - batch_size: 2 - shuffle: true - worker_num: 4 - use_process: false - -EvalReader: - inputs_def: - fields: ['image', 'im_id', 'im_shape', 'im_info'] - dataset: - !COCODataSet - image_dir: val2017 - anno_path: annotations/instances_val2017.json - dataset_dir: dataset/coco - with_background: false - sample_transforms: - - !DecodeImage - to_rgb: true - with_mixup: false - - !NormalizeImage - is_channel_first: false - is_scale: true - mean: [0.485,0.456,0.406] - std: [0.229, 0.224,0.225] - - !ResizeImage - target_size: 800 - max_size: 1333 - interp: 1 - use_cv2: true - - !Permute - channel_first: true - to_bgr: false - batch_transforms: - - !PadBatch - pad_to_stride: 128 - use_padded_im_info: true - batch_size: 1 - shuffle: false - worker_num: 1 - use_process: false - -TestReader: - inputs_def: - # set image_shape if needed - fields: ['image', 'im_id', 'im_shape', 'im_info'] - dataset: - !ImageFolder - anno_path: annotations/instances_val2017.json - with_background: false - sample_transforms: - - !DecodeImage - to_rgb: true - with_mixup: false - - !NormalizeImage - is_channel_first: false - is_scale: true - mean: [0.485,0.456,0.406] - std: [0.229, 0.224,0.225] - - !ResizeImage - interp: 1 - max_size: 1333 - target_size: 800 - use_cv2: true - - !Permute - channel_first: true - to_bgr: false - batch_transforms: - - !PadBatch - pad_to_stride: 128 - use_padded_im_info: true - batch_size: 1 - shuffle: false diff --git a/static/configs/anchor_free/fcos_dcn_r50_fpn_1x_cutmix.yml b/static/configs/anchor_free/fcos_dcn_r50_fpn_1x_cutmix.yml deleted file mode 100644 index 8667a49cb48caf8810c4c191ceef5a42c9767602..0000000000000000000000000000000000000000 --- a/static/configs/anchor_free/fcos_dcn_r50_fpn_1x_cutmix.yml +++ /dev/null @@ -1,186 +0,0 @@ -architecture: FCOS -max_iters: 90000 
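-# 1x schedule: 90k iterations with piecewise LR decay at 60k and 80k (see LearningRate below)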
-use_gpu: true -snapshot_iter: 5000 -log_iter: 20 -save_dir: output -pretrain_weights: https://paddle-imagenet-models-name.bj.bcebos.com/ResNet50_cos_pretrained.tar -metric: COCO -weights: output/fcos_dcn_r50_fpn_1x_cutmix/model_final -num_classes: 80 - -FCOS: - backbone: ResNet - fpn: FPN - fcos_head: FCOSHead - -ResNet: - norm_type: affine_channel - norm_decay: 0. - depth: 50 - feature_maps: [3, 4, 5] - freeze_at: 2 - dcn_v2_stages: [3, 4, 5] - -FPN: - min_level: 3 - max_level: 7 - num_chan: 256 - use_c5: false - spatial_scale: [0.03125, 0.0625, 0.125] - has_extra_convs: true - -FCOSHead: - num_classes: 80 - fpn_stride: [8, 16, 32, 64, 128] - num_convs: 4 - norm_type: "gn" - fcos_loss: FCOSLoss - norm_reg_targets: True - centerness_on_reg: True - use_dcn_in_tower: True - nms: MultiClassNMS - -MultiClassNMS: - score_threshold: 0.025 - nms_top_k: 1000 - keep_top_k: 100 - nms_threshold: 0.6 - background_label: -1 - -FCOSLoss: - loss_alpha: 0.25 - loss_gamma: 2.0 - iou_loss_type: "giou" - reg_weights: 1.0 - -LearningRate: - base_lr: 0.01 - schedulers: - - !PiecewiseDecay - gamma: 0.1 - milestones: [60000, 80000] - - !LinearWarmup - start_factor: 0.3333333333333333 - steps: 500 - -OptimizerBuilder: - optimizer: - momentum: 0.9 - type: Momentum - regularizer: - factor: 0.0001 - type: L2 - -TrainReader: - inputs_def: - fields: ['image', 'im_info', 'fcos_target'] - dataset: - !COCODataSet - image_dir: train2017 - anno_path: annotations/instances_train2017.json - dataset_dir: dataset/coco - with_background: false - sample_transforms: - - !DecodeImage - to_rgb: true - with_cutmix: True - - !CutmixImage - alpha: 1.5 - beta: 1.5 - - !RandomFlipImage - prob: 0.5 - - !NormalizeImage - is_channel_first: false - is_scale: true - mean: [0.485,0.456,0.406] - std: [0.229, 0.224,0.225] - - !ResizeImage - target_size: 800 - max_size: 1333 - interp: 1 - use_cv2: true - - !Permute - to_bgr: false - channel_first: true - batch_transforms: - - !PadBatch - pad_to_stride: 128 - use_padded_im_info: false - - !Gt2FCOSTarget - object_sizes_boundary: [64, 128, 256, 512] - center_sampling_radius: 1.5 - downsample_ratios: [8, 16, 32, 64, 128] - norm_reg_targets: True - batch_size: 2 - shuffle: true - worker_num: 4 - use_process: false - cutmix_epoch: 10 - -EvalReader: - inputs_def: - fields: ['image', 'im_id', 'im_shape', 'im_info'] - dataset: - !COCODataSet - image_dir: val2017 - anno_path: annotations/instances_val2017.json - dataset_dir: dataset/coco - with_background: false - sample_transforms: - - !DecodeImage - to_rgb: true - with_mixup: false - - !NormalizeImage - is_channel_first: false - is_scale: true - mean: [0.485,0.456,0.406] - std: [0.229, 0.224,0.225] - - !ResizeImage - target_size: 800 - max_size: 1333 - interp: 1 - use_cv2: true - - !Permute - channel_first: true - to_bgr: false - batch_transforms: - - !PadBatch - pad_to_stride: 128 - use_padded_im_info: true - batch_size: 1 - shuffle: false - worker_num: 1 - use_process: false - -TestReader: - inputs_def: - # set image_shape if needed - fields: ['image', 'im_id', 'im_shape', 'im_info'] - dataset: - !ImageFolder - anno_path: annotations/instances_val2017.json - with_background: false - sample_transforms: - - !DecodeImage - to_rgb: true - with_mixup: false - - !NormalizeImage - is_channel_first: false - is_scale: true - mean: [0.485,0.456,0.406] - std: [0.229, 0.224,0.225] - - !ResizeImage - interp: 1 - max_size: 1333 - target_size: 800 - use_cv2: true - - !Permute - channel_first: true - to_bgr: false - batch_transforms: - - !PadBatch - pad_to_stride: 
128 - use_padded_im_info: true - batch_size: 1 - shuffle: false diff --git a/static/configs/anchor_free/fcos_r50_fpn_1x.yml b/static/configs/anchor_free/fcos_r50_fpn_1x.yml deleted file mode 100644 index 618914e9448a54acf230f1114dc2caf36e7151c9..0000000000000000000000000000000000000000 --- a/static/configs/anchor_free/fcos_r50_fpn_1x.yml +++ /dev/null @@ -1,180 +0,0 @@ -architecture: FCOS -max_iters: 90000 -use_gpu: true -snapshot_iter: 10000 -log_iter: 20 -save_dir: output -pretrain_weights: https://paddle-imagenet-models-name.bj.bcebos.com/ResNet50_cos_pretrained.tar -metric: COCO -weights: output/fcos_r50_fpn_1x/model_final -num_classes: 80 - -FCOS: - backbone: ResNet - fpn: FPN - fcos_head: FCOSHead - -ResNet: - norm_type: affine_channel - norm_decay: 0. - depth: 50 - feature_maps: [3, 4, 5] - freeze_at: 2 - -FPN: - min_level: 3 - max_level: 7 - num_chan: 256 - use_c5: false - spatial_scale: [0.03125, 0.0625, 0.125] - has_extra_convs: true - -FCOSHead: - num_classes: 80 - fpn_stride: [8, 16, 32, 64, 128] - num_convs: 4 - norm_type: "gn" - fcos_loss: FCOSLoss - norm_reg_targets: True - centerness_on_reg: True - use_dcn_in_tower: False - nms: MultiClassNMS - -MultiClassNMS: - score_threshold: 0.025 - nms_top_k: 1000 - keep_top_k: 100 - nms_threshold: 0.6 - background_label: -1 - -FCOSLoss: - loss_alpha: 0.25 - loss_gamma: 2.0 - iou_loss_type: "giou" - reg_weights: 1.0 - -LearningRate: - base_lr: 0.01 - schedulers: - - !PiecewiseDecay - gamma: 0.1 - milestones: [60000, 80000] - - !LinearWarmup - start_factor: 0.3333333333333333 - steps: 500 - -OptimizerBuilder: - optimizer: - momentum: 0.9 - type: Momentum - regularizer: - factor: 0.0001 - type: L2 - -TrainReader: - inputs_def: - fields: ['image', 'im_info', 'fcos_target'] - dataset: - !COCODataSet - image_dir: train2017 - anno_path: annotations/instances_train2017.json - dataset_dir: dataset/coco - with_background: false - sample_transforms: - - !DecodeImage - to_rgb: true - - !RandomFlipImage - prob: 0.5 - - !NormalizeImage - is_channel_first: false - is_scale: true - mean: [0.485,0.456,0.406] - std: [0.229, 0.224,0.225] - - !ResizeImage - target_size: 800 - max_size: 1333 - interp: 1 - use_cv2: true - - !Permute - to_bgr: false - channel_first: true - batch_transforms: - - !PadBatch - pad_to_stride: 128 - use_padded_im_info: false - - !Gt2FCOSTarget - object_sizes_boundary: [64, 128, 256, 512] - center_sampling_radius: 1.5 - downsample_ratios: [8, 16, 32, 64, 128] - norm_reg_targets: True - batch_size: 2 - shuffle: true - worker_num: 4 - use_process: false - -EvalReader: - inputs_def: - fields: ['image', 'im_id', 'im_shape', 'im_info'] - dataset: - !COCODataSet - image_dir: val2017 - anno_path: annotations/instances_val2017.json - dataset_dir: dataset/coco - with_background: false - sample_transforms: - - !DecodeImage - to_rgb: true - with_mixup: false - - !NormalizeImage - is_channel_first: false - is_scale: true - mean: [0.485,0.456,0.406] - std: [0.229, 0.224,0.225] - - !ResizeImage - target_size: 800 - max_size: 1333 - interp: 1 - use_cv2: true - - !Permute - channel_first: true - to_bgr: false - batch_transforms: - - !PadBatch - pad_to_stride: 128 - use_padded_im_info: true - batch_size: 1 - shuffle: false - worker_num: 2 - use_process: false - -TestReader: - inputs_def: - # set image_shape if needed - fields: ['image', 'im_id', 'im_shape', 'im_info'] - dataset: - !ImageFolder - anno_path: annotations/instances_val2017.json - with_background: false - sample_transforms: - - !DecodeImage - to_rgb: true - with_mixup: false - - 
!NormalizeImage - is_channel_first: false - is_scale: true - mean: [0.485,0.456,0.406] - std: [0.229, 0.224,0.225] - - !ResizeImage - interp: 1 - max_size: 1333 - target_size: 800 - use_cv2: true - - !Permute - channel_first: true - to_bgr: false - batch_transforms: - - !PadBatch - pad_to_stride: 128 - use_padded_im_info: true - batch_size: 1 - shuffle: false diff --git a/static/configs/anchor_free/fcos_r50_fpn_multiscale_2x.yml b/static/configs/anchor_free/fcos_r50_fpn_multiscale_2x.yml deleted file mode 100644 index 7ce07dec59eee10afb85aea00b6d24add76af9c2..0000000000000000000000000000000000000000 --- a/static/configs/anchor_free/fcos_r50_fpn_multiscale_2x.yml +++ /dev/null @@ -1,180 +0,0 @@ -architecture: FCOS -max_iters: 180000 -use_gpu: true -snapshot_iter: 20000 -log_iter: 20 -save_dir: output -pretrain_weights: https://paddle-imagenet-models-name.bj.bcebos.com/ResNet50_cos_pretrained.tar -metric: COCO -weights: output/fcos_r50_fpn_multiscale_2x/model_final -num_classes: 80 - -FCOS: - backbone: ResNet - fpn: FPN - fcos_head: FCOSHead - -ResNet: - norm_type: affine_channel - norm_decay: 0. - depth: 50 - feature_maps: [3, 4, 5] - freeze_at: 2 - -FPN: - min_level: 3 - max_level: 7 - num_chan: 256 - use_c5: false - spatial_scale: [0.03125, 0.0625, 0.125] - has_extra_convs: true - -FCOSHead: - num_classes: 80 - fpn_stride: [8, 16, 32, 64, 128] - num_convs: 4 - norm_type: "gn" - fcos_loss: FCOSLoss - norm_reg_targets: True - centerness_on_reg: True - use_dcn_in_tower: False - nms: MultiClassNMS - -MultiClassNMS: - score_threshold: 0.025 - nms_top_k: 1000 - keep_top_k: 100 - nms_threshold: 0.6 - background_label: -1 - -FCOSLoss: - loss_alpha: 0.25 - loss_gamma: 2.0 - iou_loss_type: "giou" - reg_weights: 1.0 - -LearningRate: - base_lr: 0.01 - schedulers: - - !PiecewiseDecay - gamma: 0.1 - milestones: [120000, 160000] - - !LinearWarmup - start_factor: 0.3333333333333333 - steps: 500 - -OptimizerBuilder: - optimizer: - momentum: 0.9 - type: Momentum - regularizer: - factor: 0.0001 - type: L2 - -TrainReader: - inputs_def: - fields: ['image', 'im_info', 'fcos_target'] - dataset: - !COCODataSet - image_dir: train2017 - anno_path: annotations/instances_train2017.json - dataset_dir: dataset/coco - with_background: false - sample_transforms: - - !DecodeImage - to_rgb: true - - !RandomFlipImage - prob: 0.5 - - !NormalizeImage - is_channel_first: false - is_scale: true - mean: [0.485,0.456,0.406] - std: [0.229, 0.224,0.225] - - !ResizeImage - target_size: [640, 672, 704, 736, 768, 800] - max_size: 1333 - interp: 1 - use_cv2: true - - !Permute - to_bgr: false - channel_first: true - batch_transforms: - - !PadBatch - pad_to_stride: 128 - use_padded_im_info: false - - !Gt2FCOSTarget - object_sizes_boundary: [64, 128, 256, 512] - center_sampling_radius: 1.5 - downsample_ratios: [8, 16, 32, 64, 128] - norm_reg_targets: True - batch_size: 2 - shuffle: true - worker_num: 4 - use_process: false - -EvalReader: - inputs_def: - fields: ['image', 'im_id', 'im_shape', 'im_info'] - dataset: - !COCODataSet - image_dir: val2017 - anno_path: annotations/instances_val2017.json - dataset_dir: dataset/coco - with_background: false - sample_transforms: - - !DecodeImage - to_rgb: true - with_mixup: false - - !NormalizeImage - is_channel_first: false - is_scale: true - mean: [0.485,0.456,0.406] - std: [0.229, 0.224,0.225] - - !ResizeImage - target_size: 800 - max_size: 1333 - interp: 1 - use_cv2: true - - !Permute - channel_first: true - to_bgr: false - batch_transforms: - - !PadBatch - pad_to_stride: 128 - 
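-    # padding to a multiple of 128 keeps the batch divisible by the coarsest FPN stride (P7)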
use_padded_im_info: true - batch_size: 1 - shuffle: false - worker_num: 2 - use_process: false - -TestReader: - inputs_def: - # set image_shape if needed - fields: ['image', 'im_id', 'im_shape', 'im_info'] - dataset: - !ImageFolder - anno_path: annotations/instances_val2017.json - with_background: false - sample_transforms: - - !DecodeImage - to_rgb: true - with_mixup: false - - !NormalizeImage - is_channel_first: false - is_scale: true - mean: [0.485,0.456,0.406] - std: [0.229, 0.224,0.225] - - !ResizeImage - interp: 1 - max_size: 1333 - target_size: 800 - use_cv2: true - - !Permute - channel_first: true - to_bgr: false - batch_transforms: - - !PadBatch - pad_to_stride: 128 - use_padded_im_info: true - batch_size: 1 - shuffle: false diff --git a/static/configs/anchor_free/pafnet_10x_coco.yml b/static/configs/anchor_free/pafnet_10x_coco.yml deleted file mode 100644 index 4c6728bcda05654e1e5c383e084e4aad1bbc6c6e..0000000000000000000000000000000000000000 --- a/static/configs/anchor_free/pafnet_10x_coco.yml +++ /dev/null @@ -1,170 +0,0 @@ -architecture: TTFNet -use_gpu: true -max_iters: 150000 -log_smooth_window: 20 -save_dir: output -snapshot_iter: 10000 -metric: COCO -pretrain_weights: https://paddle-imagenet-models-name.bj.bcebos.com/ResNet50_vd_ssld_pretrained.tar -weights: output/pafnet_10x_coco/model_final -num_classes: 80 -use_ema: true -ema_decay: 0.9998 - -TTFNet: - backbone: ResNet - ttf_head: TTFHead - -ResNet: - norm_type: sync_bn - freeze_at: 0 - freeze_norm: false - norm_decay: 0. - depth: 50 - feature_maps: [2, 3, 4, 5] - variant: d - dcn_v2_stages: [3, 4, 5] - -TTFHead: - head_conv: 128 - wh_conv: 64 - hm_head_conv_num: 2 - wh_head_conv_num: 2 - wh_offset_base: 16 - wh_loss: GiouLoss - dcn_head: True - -GiouLoss: - loss_weight: 5. - do_average: false - use_class_weight: false - -LearningRate: - base_lr: 0.015 - schedulers: - - !PiecewiseDecay - gamma: 0.1 - milestones: - - 112500 - - 137500 - - !LinearWarmup - start_factor: 0.2 - steps: 500 - -OptimizerBuilder: - optimizer: - momentum: 0.9 - type: Momentum - regularizer: - factor: 0.0004 - type: L2 - -TrainReader: - inputs_def: - fields: ['image', 'ttf_heatmap', 'ttf_box_target', 'ttf_reg_weight'] - dataset: - !COCODataSet - image_dir: train2017 - anno_path: annotations/instances_train2017.json - dataset_dir: dataset/coco - with_background: false - sample_transforms: - - !DecodeImage - to_rgb: true - with_cutmix: True - - !CutmixImage - alpha: 1.5 - beta: 1.5 - - !ColorDistort - hue: [-18., 18., 0.5] - saturation: [0.5, 1.5, 0.5] - contrast: [0.5, 1.5, 0.5] - brightness: [-32., 32., 0.5] - random_apply: False - hsv_format: True - random_channel: True - - !RandomExpand - ratio: 4 - prob: 0.5 - fill_value: [123.675, 116.28, 103.53] - - !RandomCrop - aspect_ratio: NULL - cover_all_box: True - - !RandomFlipImage - prob: 0.5 - batch_transforms: - - !RandomShape - sizes: [416, 448, 480, 512, 544, 576, 608, 640, 672] - random_inter: True - resize_box: True - - !NormalizeImage - is_channel_first: false - is_scale: false - mean: [123.675, 116.28, 103.53] - std: [58.395, 57.12, 57.375] - - !Permute - to_bgr: false - channel_first: true - - !Gt2TTFTarget - num_classes: 80 - down_ratio: 4 - - !PadBatch - pad_to_stride: 32 - batch_size: 12 - shuffle: true - worker_num: 8 - bufsize: 2 - use_process: false - cutmix_epoch: 100 - -EvalReader: - inputs_def: - image_shape: [3, 512, 512] - fields: ['image', 'im_id', 'scale_factor'] - dataset: - !COCODataSet - image_dir: val2017 - anno_path: annotations/instances_val2017.json - dataset_dir: 
dataset/coco - with_background: false - sample_transforms: - - !DecodeImage - to_rgb: True - - !Resize - target_dim: 512 - - !NormalizeImage - mean: [123.675, 116.28, 103.53] - std: [58.395, 57.12, 57.375] - is_scale: false - is_channel_first: false - - !Permute - to_bgr: false - channel_first: True - batch_size: 1 - drop_empty: false - worker_num: 8 - bufsize: 16 - -TestReader: - inputs_def: - image_shape: [3, 512, 512] - fields: ['image', 'im_id', 'scale_factor'] - dataset: - !ImageFolder - anno_path: annotations/instances_val2017.json - with_background: false - sample_transforms: - - !DecodeImage - to_rgb: True - - !Resize - interp: 1 - target_dim: 512 - - !NormalizeImage - mean: [123.675, 116.28, 103.53] - std: [58.395, 57.12, 57.375] - is_scale: false - is_channel_first: false - - !Permute - to_bgr: false - channel_first: True - batch_size: 1 diff --git a/static/configs/anchor_free/pafnet_lite_mobilenet_v3_20x_coco.yml b/static/configs/anchor_free/pafnet_lite_mobilenet_v3_20x_coco.yml deleted file mode 100644 index 1b14238839e52a48707395de301c986c6f263334..0000000000000000000000000000000000000000 --- a/static/configs/anchor_free/pafnet_lite_mobilenet_v3_20x_coco.yml +++ /dev/null @@ -1,171 +0,0 @@ -architecture: TTFNet -use_gpu: true -max_iters: 300000 -log_smooth_window: 20 -save_dir: output -snapshot_iter: 50000 -metric: COCO -pretrain_weights: https://paddle-imagenet-models-name.bj.bcebos.com/MobileNetV3_large_x1_0_ssld_pretrained.tar -weights: output/pafnet_lite_mobilenet_v3_20x_coco/model_final -num_classes: 80 - -TTFNet: - backbone: MobileNetV3RCNN - ttf_head: TTFLiteHead - -MobileNetV3RCNN: - norm_type: sync_bn - norm_decay: 0.0 - model_name: large - scale: 1.0 - conv_decay: 0.00001 - lr_mult_list: [0.25, 0.25, 0.5, 0.5, 0.75] - freeze_norm: false - -TTFLiteHead: - head_conv: 48 - -GiouLoss: - loss_weight: 5. 
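-  # GIoU loss on the boxes predicted by TTFLiteHead; loss_weight balances it against the heatmap loss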
- do_average: false - use_class_weight: false - -LearningRate: - base_lr: 0.015 - schedulers: - - !PiecewiseDecay - gamma: 0.1 - milestones: - - 225000 - - 275000 - - !LinearWarmup - start_factor: 0.2 - steps: 1000 - -OptimizerBuilder: - clip_grad_by_norm: 35 - optimizer: - momentum: 0.9 - type: Momentum - regularizer: - factor: 0.0004 - type: L2 - -TrainReader: - inputs_def: - fields: ['image', 'ttf_heatmap', 'ttf_box_target', 'ttf_reg_weight'] - dataset: - !COCODataSet - image_dir: train2017 - anno_path: annotations/instances_train2017.json - dataset_dir: dataset/coco - with_background: false - sample_transforms: - - !DecodeImage - to_rgb: true - with_cutmix: True - - !ColorDistort - hue: [-18., 18., 0.5] - saturation: [0.5, 1.5, 0.5] - contrast: [0.5, 1.5, 0.5] - brightness: [-32., 32., 0.5] - random_apply: False - hsv_format: False - random_channel: True - - !RandomExpand - ratio: 4 - prob: 0.5 - fill_value: [123.675, 116.28, 103.53] - - !RandomCrop - aspect_ratio: NULL - cover_all_box: True - - !CutmixImage - alpha: 1.5 - beta: 1.5 - - !RandomFlipImage - prob: 0.5 - - !GridMaskOp - use_h: true - use_w: true - rotate: 1 - offset: false - ratio: 0.5 - mode: 1 - prob: 0.7 - upper_iter: 300000 - batch_transforms: - - !RandomShape - sizes: [320, 352, 384, 416, 448, 480, 512] - random_inter: True - resize_box: True - - !NormalizeImage - is_channel_first: false - is_scale: false - mean: [123.675, 116.28, 103.53] - std: [58.395, 57.12, 57.375] - - !Permute - to_bgr: false - channel_first: true - - !Gt2TTFTarget - num_classes: 80 - down_ratio: 4 - - !PadBatch - pad_to_stride: 32 - batch_size: 12 - shuffle: true - worker_num: 8 - bufsize: 2 - use_process: false - cutmix_epoch: 200 - -EvalReader: - inputs_def: - image_shape: [3, 320, 320] - fields: ['image', 'im_id', 'scale_factor'] - dataset: - !COCODataSet - image_dir: val2017 - anno_path: annotations/instances_val2017.json - dataset_dir: dataset/coco - with_background: false - sample_transforms: - - !DecodeImage - to_rgb: True - - !Resize - target_dim: 320 - - !NormalizeImage - mean: [123.675, 116.28, 103.53] - std: [58.395, 57.12, 57.375] - is_scale: false - is_channel_first: false - - !Permute - to_bgr: false - channel_first: True - batch_size: 1 - drop_empty: false - worker_num: 2 - bufsize: 2 - -TestReader: - inputs_def: - image_shape: [3, 320, 320] - fields: ['image', 'im_id', 'scale_factor'] - dataset: - !ImageFolder - anno_path: annotations/instances_val2017.json - with_background: false - sample_transforms: - - !DecodeImage - to_rgb: True - - !Resize - interp: 1 - target_dim: 320 - - !NormalizeImage - mean: [123.675, 116.28, 103.53] - std: [58.395, 57.12, 57.375] - is_scale: false - is_channel_first: false - - !Permute - to_bgr: false - channel_first: True - batch_size: 1 diff --git a/static/configs/anchor_free/ttfnet_darknet.yml b/static/configs/anchor_free/ttfnet_darknet.yml deleted file mode 100644 index d32224b6d24f6f758aa54d66f07b7c883da8a2ca..0000000000000000000000000000000000000000 --- a/static/configs/anchor_free/ttfnet_darknet.yml +++ /dev/null @@ -1,141 +0,0 @@ -architecture: TTFNet -use_gpu: true -max_iters: 15000 -log_iter: 20 -save_dir: output -snapshot_iter: 1000 -metric: COCO -pretrain_weights: https://paddle-imagenet-models-name.bj.bcebos.com/DarkNet53_pretrained.tar -weights: output/ttfnet_darknet/model_final -num_classes: 80 - -TTFNet: - backbone: DarkNet - ttf_head: TTFHead - -DarkNet: - norm_type: bn - norm_decay: 0.0004 - depth: 53 - freeze_at: 1 - -TTFHead: - head_conv: 128 - wh_conv: 64 - hm_head_conv_num: 2 - 
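-  # two stacked convs in the heatmap (hm) head and, below, two in the box-size (wh) head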
wh_head_conv_num: 2 - wh_offset_base: 16 - wh_loss: GiouLoss - -GiouLoss: - loss_weight: 5. - do_average: false - use_class_weight: false - -LearningRate: - base_lr: 0.015 - schedulers: - - !PiecewiseDecay - gamma: 0.1 - milestones: - - 11250 - - 13750 - - !LinearWarmup - start_factor: 0.2 - steps: 500 - -OptimizerBuilder: - optimizer: - momentum: 0.9 - type: Momentum - regularizer: - factor: 0.0004 - type: L2 - -TrainReader: - inputs_def: - fields: ['image', 'ttf_heatmap', 'ttf_box_target', 'ttf_reg_weight'] - dataset: - !COCODataSet - image_dir: train2017 - anno_path: annotations/instances_train2017.json - dataset_dir: dataset/coco - with_background: false - sample_transforms: - - !DecodeImage - to_rgb: true - - !Resize - target_dim: 512 - - !RandomFlipImage - prob: 0.5 - - !NormalizeImage - is_channel_first: false - is_scale: false - mean: [123.675, 116.28, 103.53] - std: [58.395, 57.12, 57.375] - - !Permute - to_bgr: false - channel_first: true - batch_transforms: - - !Gt2TTFTarget - num_classes: 80 - down_ratio: 4 - - !PadBatch - pad_to_stride: 32 - batch_size: 12 - shuffle: true - worker_num: 8 - bufsize: 2 - use_process: true - -EvalReader: - inputs_def: - image_shape: [3, 512, 512] - fields: ['image', 'im_id', 'scale_factor'] - dataset: - !COCODataSet - image_dir: val2017 - anno_path: annotations/instances_val2017.json - dataset_dir: dataset/coco - with_background: false - sample_transforms: - - !DecodeImage - to_rgb: True - - !Resize - target_dim: 512 - - !NormalizeImage - mean: [123.675, 116.28, 103.53] - std: [58.395, 57.12, 57.375] - is_scale: false - is_channel_first: false - - !Permute - to_bgr: false - channel_first: True - batch_size: 1 - drop_empty: false - worker_num: 8 - bufsize: 16 - -TestReader: - inputs_def: - image_shape: [3, 512, 512] - fields: ['image', 'im_id', 'scale_factor'] - dataset: - !ImageFolder - anno_path: annotations/instances_val2017.json - with_background: false - sample_transforms: - - !DecodeImage - to_rgb: True - - !Resize - interp: 1 - target_dim: 512 - - !NormalizeImage - mean: [123.675, 116.28, 103.53] - std: [58.395, 57.12, 57.375] - is_scale: false - is_channel_first: false - - !Permute - to_bgr: false - channel_first: True - batch_size: 1 diff --git a/static/configs/autoaugment/README.md b/static/configs/autoaugment/README.md deleted file mode 100644 index 0f960db95325b7b27a7b2b5535f94d6325f19373..0000000000000000000000000000000000000000 --- a/static/configs/autoaugment/README.md +++ /dev/null @@ -1,23 +0,0 @@ -# Learning Data Augmentation Strategies for Object Detection - -## Introduction - -- Learning Data Augmentation Strategies for Object Detection: [https://arxiv.org/abs/1906.11172](https://arxiv.org/abs/1906.11172) - -``` -@article{Zoph2019LearningDA, - title={Learning Data Augmentation Strategies for Object Detection}, - author={Barret Zoph and Ekin Dogus Cubuk and Golnaz Ghiasi and Tsung-Yi Lin and Jonathon Shlens and Quoc V. 
Le}, - journal={ArXiv}, - year={2019}, - volume={abs/1906.11172} -} -``` - - -## Model Zoo - -| Backbone | Type | AutoAug policy | Image/gpu | Lr schd | Inf time (fps) | Box AP | Mask AP | Download | Configs | -| :---------------------- | :-------------:| :-------: | :-------: | :-----: | :------------: | :----: | :-----: | :----------------------------------------------------------: | :-----: | -| ResNet50-vd-FPN | Faster | v1 | 2 | 3x | 22.800 | 39.9 | - | [model](https://paddlemodels.bj.bcebos.com/object_detection/faster_rcnn_r50_vd_fpn_aa_3x.tar) | [config](https://github.com/PaddlePaddle/PaddleDetection/tree/develop/static/configs/autoaugment/faster_rcnn_r50_vd_fpn_aa_3x.yml) | -| ResNet101-vd-FPN | Faster | v1 | 2 | 3x | 17.652 | 42.5 | - | [model](https://paddlemodels.bj.bcebos.com/object_detection/faster_rcnn_r101_vd_fpn_aa_3x.tar) | [config](https://github.com/PaddlePaddle/PaddleDetection/tree/develop/static/configs/autoaugment/faster_rcnn_r101_vd_fpn_aa_3x.yml) | diff --git a/static/configs/autoaugment/faster_rcnn_r101_vd_fpn_aa_3x.yml b/static/configs/autoaugment/faster_rcnn_r101_vd_fpn_aa_3x.yml deleted file mode 100644 index ba83c1ef623da24f6c002b84850b743f711c2a89..0000000000000000000000000000000000000000 --- a/static/configs/autoaugment/faster_rcnn_r101_vd_fpn_aa_3x.yml +++ /dev/null @@ -1,127 +0,0 @@ -architecture: FasterRCNN -max_iters: 270000 -snapshot_iter: 30000 -use_gpu: true -log_iter: 20 -save_dir: output -pretrain_weights: https://paddle-imagenet-models-name.bj.bcebos.com/ResNet101_vd_pretrained.tar -weights: output/faster_rcnn_r101_vd_fpn_aa_3x/model_final -metric: COCO -num_classes: 81 - -FasterRCNN: - backbone: ResNet - fpn: FPN - rpn_head: FPNRPNHead - roi_extractor: FPNRoIAlign - bbox_head: BBoxHead - bbox_assigner: BBoxAssigner - -ResNet: - depth: 101 - feature_maps: [2, 3, 4, 5] - freeze_at: 2 - norm_type: bn - variant: d - -FPN: - max_level: 6 - min_level: 2 - num_chan: 256 - spatial_scale: [0.03125, 0.0625, 0.125, 0.25] - -FPNRPNHead: - anchor_generator: - anchor_sizes: [32, 64, 128, 256, 512] - aspect_ratios: [0.5, 1.0, 2.0] - stride: [16.0, 16.0] - variance: [1.0, 1.0, 1.0, 1.0] - anchor_start_size: 32 - max_level: 6 - min_level: 2 - num_chan: 256 - rpn_target_assign: - rpn_batch_size_per_im: 256 - rpn_fg_fraction: 0.5 - rpn_negative_overlap: 0.3 - rpn_positive_overlap: 0.7 - rpn_straddle_thresh: 0.0 - train_proposal: - min_size: 0.0 - nms_thresh: 0.7 - post_nms_top_n: 2000 - pre_nms_top_n: 2000 - test_proposal: - min_size: 0.0 - nms_thresh: 0.7 - post_nms_top_n: 1000 - pre_nms_top_n: 1000 - -FPNRoIAlign: - canconical_level: 4 - canonical_size: 224 - max_level: 5 - min_level: 2 - box_resolution: 7 - sampling_ratio: 2 - -BBoxAssigner: - batch_size_per_im: 512 - bbox_reg_weights: [0.1, 0.1, 0.2, 0.2] - bg_thresh_hi: 0.5 - bg_thresh_lo: 0.0 - fg_fraction: 0.25 - fg_thresh: 0.5 - -BBoxHead: - head: TwoFCHead - nms: - keep_top_k: 100 - nms_threshold: 0.5 - score_threshold: 0.05 - -TwoFCHead: - mlp_dim: 1024 - -LearningRate: - base_lr: 0.02 - schedulers: - - !PiecewiseDecay - gamma: 0.1 - milestones: [180000, 240000] - - !LinearWarmup - start_factor: 0.1 - steps: 1000 - -OptimizerBuilder: - optimizer: - momentum: 0.9 - type: Momentum - regularizer: - factor: 0.0001 - type: L2 - -_READER_: '../faster_fpn_reader.yml' -TrainReader: - sample_transforms: - - !DecodeImage - to_rgb: true - - !RandomFlipImage - prob: 0.5 - - !AutoAugmentImage - autoaug_type: v1 - - !NormalizeImage - is_channel_first: false - is_scale: true - mean: [0.485,0.456,0.406] - std: [0.229, 
0.224,0.225] - - !ResizeImage - target_size: 800 - max_size: 1333 - interp: 1 - use_cv2: true - - !Permute - to_bgr: false - channel_first: true - batch_size: 2 - use_process: true diff --git a/static/configs/autoaugment/faster_rcnn_r50_vd_fpn_aa_3x.yml b/static/configs/autoaugment/faster_rcnn_r50_vd_fpn_aa_3x.yml deleted file mode 100644 index 887aea5bc66c8179214925241837cb23f90ec275..0000000000000000000000000000000000000000 --- a/static/configs/autoaugment/faster_rcnn_r50_vd_fpn_aa_3x.yml +++ /dev/null @@ -1,127 +0,0 @@ -architecture: FasterRCNN -max_iters: 270000 -snapshot_iter: 30000 -use_gpu: true -log_iter: 20 -save_dir: output -pretrain_weights: https://paddle-imagenet-models-name.bj.bcebos.com/ResNet50_vd_pretrained.tar -weights: output/faster_rcnn_r50_vd_fpn_aa_3x/model_final -metric: COCO -num_classes: 81 - -FasterRCNN: - backbone: ResNet - fpn: FPN - rpn_head: FPNRPNHead - roi_extractor: FPNRoIAlign - bbox_head: BBoxHead - bbox_assigner: BBoxAssigner - -ResNet: - depth: 50 - feature_maps: [2, 3, 4, 5] - freeze_at: 2 - norm_type: bn - variant: d - -FPN: - max_level: 6 - min_level: 2 - num_chan: 256 - spatial_scale: [0.03125, 0.0625, 0.125, 0.25] - -FPNRPNHead: - anchor_generator: - anchor_sizes: [32, 64, 128, 256, 512] - aspect_ratios: [0.5, 1.0, 2.0] - stride: [16.0, 16.0] - variance: [1.0, 1.0, 1.0, 1.0] - anchor_start_size: 32 - max_level: 6 - min_level: 2 - num_chan: 256 - rpn_target_assign: - rpn_batch_size_per_im: 256 - rpn_fg_fraction: 0.5 - rpn_negative_overlap: 0.3 - rpn_positive_overlap: 0.7 - rpn_straddle_thresh: 0.0 - train_proposal: - min_size: 0.0 - nms_thresh: 0.7 - post_nms_top_n: 2000 - pre_nms_top_n: 2000 - test_proposal: - min_size: 0.0 - nms_thresh: 0.7 - post_nms_top_n: 1000 - pre_nms_top_n: 1000 - -FPNRoIAlign: - canconical_level: 4 - canonical_size: 224 - max_level: 5 - min_level: 2 - box_resolution: 7 - sampling_ratio: 2 - -BBoxAssigner: - batch_size_per_im: 512 - bbox_reg_weights: [0.1, 0.1, 0.2, 0.2] - bg_thresh_hi: 0.5 - bg_thresh_lo: 0.0 - fg_fraction: 0.25 - fg_thresh: 0.5 - -BBoxHead: - head: TwoFCHead - nms: - keep_top_k: 100 - nms_threshold: 0.5 - score_threshold: 0.05 - -TwoFCHead: - mlp_dim: 1024 - -LearningRate: - base_lr: 0.02 - schedulers: - - !PiecewiseDecay - gamma: 0.1 - milestones: [180000, 240000] - - !LinearWarmup - start_factor: 0.1 - steps: 1000 - -OptimizerBuilder: - optimizer: - momentum: 0.9 - type: Momentum - regularizer: - factor: 0.0001 - type: L2 - -_READER_: '../faster_fpn_reader.yml' -TrainReader: - sample_transforms: - - !DecodeImage - to_rgb: true - - !RandomFlipImage - prob: 0.5 - - !AutoAugmentImage - autoaug_type: v1 - - !NormalizeImage - is_channel_first: false - is_scale: true - mean: [0.485,0.456,0.406] - std: [0.229, 0.224,0.225] - - !ResizeImage - target_size: 800 - max_size: 1333 - interp: 1 - use_cv2: true - - !Permute - to_bgr: false - channel_first: true - batch_size: 2 - use_process: true diff --git a/static/configs/cascade_mask_rcnn_r50_fpn_1x.yml b/static/configs/cascade_mask_rcnn_r50_fpn_1x.yml deleted file mode 100644 index cd1b466e9a41a78da2e74371b36fac2ae9a3335f..0000000000000000000000000000000000000000 --- a/static/configs/cascade_mask_rcnn_r50_fpn_1x.yml +++ /dev/null @@ -1,113 +0,0 @@ -architecture: CascadeMaskRCNN -use_gpu: true -max_iters: 180000 -snapshot_iter: 10000 -log_iter: 20 -save_dir: output -pretrain_weights: https://paddle-imagenet-models-name.bj.bcebos.com/ResNet50_cos_pretrained.tar -metric: COCO -weights: output/cascade_mask_rcnn_r50_fpn_1x/model_final -num_classes: 81 - -CascadeMaskRCNN: 
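-  # Cascade Mask R-CNN: three cascaded bbox refinement stages (see CascadeBBoxAssigner) plus a mask branch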
- backbone: ResNet - fpn: FPN - rpn_head: FPNRPNHead - roi_extractor: FPNRoIAlign - bbox_head: CascadeBBoxHead - bbox_assigner: CascadeBBoxAssigner - mask_assigner: MaskAssigner - mask_head: MaskHead - -ResNet: - depth: 50 - feature_maps: [2, 3, 4, 5] - freeze_at: 2 - norm_type: affine_channel - -FPN: - max_level: 6 - min_level: 2 - num_chan: 256 - spatial_scale: [0.03125, 0.0625, 0.125, 0.25] - -FPNRPNHead: - anchor_generator: - aspect_ratios: [0.5, 1.0, 2.0] - variance: [1.0, 1.0, 1.0, 1.0] - anchor_start_size: 32 - max_level: 6 - min_level: 2 - num_chan: 256 - rpn_target_assign: - rpn_batch_size_per_im: 256 - rpn_fg_fraction: 0.5 - rpn_negative_overlap: 0.3 - rpn_positive_overlap: 0.7 - rpn_straddle_thresh: 0.0 - train_proposal: - min_size: 0.0 - nms_thresh: 0.7 - pre_nms_top_n: 2000 - post_nms_top_n: 2000 - test_proposal: - min_size: 0.0 - nms_thresh: 0.7 - pre_nms_top_n: 1000 - post_nms_top_n: 1000 - -FPNRoIAlign: - canconical_level: 4 - canonical_size: 224 - max_level: 5 - min_level: 2 - sampling_ratio: 2 - box_resolution: 7 - mask_resolution: 14 - -MaskHead: - dilation: 1 - conv_dim: 256 - num_convs: 4 - resolution: 28 - -CascadeBBoxAssigner: - batch_size_per_im: 512 - bbox_reg_weights: [10, 20, 30] - bg_thresh_hi: [0.5, 0.6, 0.7] - bg_thresh_lo: [0.0, 0.0, 0.0] - fg_fraction: 0.25 - fg_thresh: [0.5, 0.6, 0.7] - -MaskAssigner: - resolution: 28 - -CascadeBBoxHead: - head: CascadeTwoFCHead - nms: - keep_top_k: 100 - nms_threshold: 0.5 - score_threshold: 0.05 - -CascadeTwoFCHead: - mlp_dim: 1024 - -LearningRate: - base_lr: 0.01 - schedulers: - - !PiecewiseDecay - gamma: 0.1 - milestones: [120000, 160000] - - !LinearWarmup - start_factor: 0.3333333333333333 - steps: 500 - -OptimizerBuilder: - optimizer: - momentum: 0.9 - type: Momentum - regularizer: - factor: 0.0001 - type: L2 - -_READER_: 'mask_fpn_reader.yml' diff --git a/static/configs/cascade_rcnn_cls_aware_r101_vd_fpn_1x_softnms.yml b/static/configs/cascade_rcnn_cls_aware_r101_vd_fpn_1x_softnms.yml deleted file mode 100644 index 958c0d363386c1919f188650ded3184540bce7cd..0000000000000000000000000000000000000000 --- a/static/configs/cascade_rcnn_cls_aware_r101_vd_fpn_1x_softnms.yml +++ /dev/null @@ -1,109 +0,0 @@ -architecture: CascadeRCNNClsAware -max_iters: 90000 -snapshot_iter: 10000 -use_gpu: true -log_iter: 20 -save_dir: output -pretrain_weights: https://paddle-imagenet-models-name.bj.bcebos.com/ResNet101_vd_pretrained.tar -weights: output/cascade_rcnn_cls_aware_r101_vd_fpn_1x_softnms/model_final -metric: COCO -num_classes: 81 - -CascadeRCNNClsAware: - backbone: ResNet - fpn: FPN - rpn_head: FPNRPNHead - roi_extractor: FPNRoIAlign - bbox_head: CascadeBBoxHead - bbox_assigner: CascadeBBoxAssigner - -ResNet: - norm_type: bn - depth: 101 - feature_maps: [2, 3, 4, 5] - freeze_at: 2 - variant: d - -FPN: - min_level: 2 - max_level: 6 - num_chan: 256 - spatial_scale: [0.03125, 0.0625, 0.125, 0.25] - -FPNRPNHead: - anchor_generator: - anchor_sizes: [32, 64, 128, 256, 512] - aspect_ratios: [0.5, 1.0, 2.0] - stride: [16.0, 16.0] - variance: [1.0, 1.0, 1.0, 1.0] - anchor_start_size: 32 - min_level: 2 - max_level: 6 - num_chan: 256 - rpn_target_assign: - rpn_batch_size_per_im: 256 - rpn_fg_fraction: 0.5 - rpn_positive_overlap: 0.7 - rpn_negative_overlap: 0.3 - rpn_straddle_thresh: 0.0 - train_proposal: - min_size: 0.0 - nms_thresh: 0.7 - pre_nms_top_n: 2000 - post_nms_top_n: 2000 - test_proposal: - min_size: 0.0 - nms_thresh: 0.7 - pre_nms_top_n: 1000 - post_nms_top_n: 1000 - -FPNRoIAlign: - canconical_level: 4 - canonical_size: 224 - 
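-  # RoIs are assigned to FPN levels 2-5 by scale; a canonical 224x224 RoI maps to level 4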
min_level: 2 - max_level: 5 - box_resolution: 14 - sampling_ratio: 2 - -CascadeBBoxAssigner: - batch_size_per_im: 512 - bbox_reg_weights: [10, 20, 30] - bg_thresh_lo: [0.0, 0.0, 0.0] - bg_thresh_hi: [0.5, 0.6, 0.7] - fg_thresh: [0.5, 0.6, 0.7] - fg_fraction: 0.25 - class_aware: True - -CascadeBBoxHead: - head: CascadeTwoFCHead - nms: MultiClassSoftNMS - -CascadeTwoFCHead: - mlp_dim: 1024 - -MultiClassSoftNMS: - score_threshold: 0.01 - keep_top_k: 300 - softnms_sigma: 0.5 - -LearningRate: - base_lr: 0.02 - schedulers: - - !PiecewiseDecay - gamma: 0.1 - milestones: [60000, 80000] - - !LinearWarmup - start_factor: 0.0 - steps: 2000 - -OptimizerBuilder: - optimizer: - momentum: 0.9 - type: Momentum - regularizer: - factor: 0.0001 - type: L2 - -_READER_: 'faster_fpn_reader.yml' -TrainReader: - batch_size: 2 diff --git a/static/configs/cascade_rcnn_cls_aware_r101_vd_fpn_ms_test.yml b/static/configs/cascade_rcnn_cls_aware_r101_vd_fpn_ms_test.yml deleted file mode 100644 index f3f893014e14479d887f0a19c513c64bd3149d90..0000000000000000000000000000000000000000 --- a/static/configs/cascade_rcnn_cls_aware_r101_vd_fpn_ms_test.yml +++ /dev/null @@ -1,162 +0,0 @@ -architecture: CascadeRCNNClsAware -max_iters: 90000 -snapshot_iter: 10000 -use_gpu: true -log_iter: 20 -save_dir: output -pretrain_weights: https://paddle-imagenet-models-name.bj.bcebos.com/ResNet101_vd_pretrained.tar -weights: output/cascade_rcnn_cls_aware_r101_vd_fpn_ms_test/model_final -metric: COCO -num_classes: 81 - -CascadeRCNNClsAware: - backbone: ResNet - fpn: FPN - rpn_head: FPNRPNHead - roi_extractor: FPNRoIAlign - bbox_head: CascadeBBoxHead - bbox_assigner: CascadeBBoxAssigner - -ResNet: - norm_type: bn - depth: 101 - feature_maps: [2, 3, 4, 5] - freeze_at: 2 - variant: d - -FPN: - min_level: 2 - max_level: 6 - num_chan: 256 - spatial_scale: [0.03125, 0.0625, 0.125, 0.25] - -FPNRPNHead: - anchor_generator: - anchor_sizes: [32, 64, 128, 256, 512] - aspect_ratios: [0.5, 1.0, 2.0] - stride: [16.0, 16.0] - variance: [1.0, 1.0, 1.0, 1.0] - anchor_start_size: 32 - min_level: 2 - max_level: 6 - num_chan: 256 - rpn_target_assign: - rpn_batch_size_per_im: 256 - rpn_fg_fraction: 0.5 - rpn_positive_overlap: 0.7 - rpn_negative_overlap: 0.3 - rpn_straddle_thresh: 0.0 - train_proposal: - min_size: 0.0 - nms_thresh: 0.7 - pre_nms_top_n: 2000 - post_nms_top_n: 2000 - test_proposal: - min_size: 0.0 - nms_thresh: 0.7 - pre_nms_top_n: 1000 - post_nms_top_n: 1000 - -FPNRoIAlign: - canconical_level: 4 - canonical_size: 224 - min_level: 2 - max_level: 5 - box_resolution: 14 - sampling_ratio: 2 - -CascadeBBoxAssigner: - batch_size_per_im: 512 - bbox_reg_weights: [10, 20, 30] - bg_thresh_lo: [0.0, 0.0, 0.0] - bg_thresh_hi: [0.5, 0.6, 0.7] - fg_thresh: [0.5, 0.6, 0.7] - fg_fraction: 0.25 - class_aware: True - -CascadeBBoxHead: - head: CascadeTwoFCHead - nms: - keep_top_k: 100 - nms_threshold: 0.5 - score_threshold: 0.05 - -CascadeTwoFCHead: - mlp_dim: 1024 - -MultiScaleTEST: - score_thresh: 0.05 - nms_thresh: 0.5 - detections_per_im: 100 - enable_voting: true - vote_thresh: 0.9 - -LearningRate: - base_lr: 0.02 - schedulers: - - !PiecewiseDecay - gamma: 0.1 - milestones: [60000, 80000] - - !LinearWarmup - start_factor: 0.0 - steps: 2000 - -OptimizerBuilder: - optimizer: - momentum: 0.9 - type: Momentum - regularizer: - factor: 0.0001 - type: L2 - - -_READER_: 'faster_fpn_reader.yml' -TrainReader: - batch_size: 2 - -EvalReader: - batch_size: 1 - inputs_def: - fields: ['image', 'im_info', 'im_id', 'im_shape'] - multi_scale: true - num_scales: 18 - use_flip: 
true - dataset: - !COCODataSet - dataset_dir: dataset/coco - anno_path: annotations/instances_val2017.json - image_dir: val2017 - sample_transforms: - - !DecodeImage - to_rgb: true - - !NormalizeImage - is_channel_first: false - is_scale: true - mean: - - 0.485 - - 0.456 - - 0.406 - std: - - 0.229 - - 0.224 - - 0.225 - - !MultiscaleTestResize - origin_target_size: 800 - origin_max_size: 1333 - target_size: - - 400 - - 500 - - 600 - - 700 - - 900 - - 1000 - - 1100 - - 1200 - max_size: 2000 - use_flip: true - - !Permute - channel_first: true - to_bgr: false - - !PadMultiScaleTest - pad_to_stride: 32 - worker_num: 2 diff --git a/static/configs/cascade_rcnn_r50_fpn_1x.yml b/static/configs/cascade_rcnn_r50_fpn_1x.yml deleted file mode 100644 index 7d255ba6f347ad91fb1b17ab10cfd2b413feb4dc..0000000000000000000000000000000000000000 --- a/static/configs/cascade_rcnn_r50_fpn_1x.yml +++ /dev/null @@ -1,106 +0,0 @@ -architecture: CascadeRCNN -max_iters: 90000 -snapshot_iter: 10000 -use_gpu: true -log_iter: 20 -save_dir: output -pretrain_weights: https://paddle-imagenet-models-name.bj.bcebos.com/ResNet50_cos_pretrained.tar -weights: output/cascade_rcnn_r50_fpn_1x/model_final -metric: COCO -num_classes: 81 - -CascadeRCNN: - backbone: ResNet - fpn: FPN - rpn_head: FPNRPNHead - roi_extractor: FPNRoIAlign - bbox_head: CascadeBBoxHead - bbox_assigner: CascadeBBoxAssigner - -ResNet: - norm_type: affine_channel - depth: 50 - feature_maps: [2, 3, 4, 5] - freeze_at: 2 - variant: b - -FPN: - min_level: 2 - max_level: 6 - num_chan: 256 - spatial_scale: [0.03125, 0.0625, 0.125, 0.25] - -FPNRPNHead: - anchor_generator: - anchor_sizes: [32, 64, 128, 256, 512] - aspect_ratios: [0.5, 1.0, 2.0] - stride: [16.0, 16.0] - variance: [1.0, 1.0, 1.0, 1.0] - anchor_start_size: 32 - min_level: 2 - max_level: 6 - num_chan: 256 - rpn_target_assign: - rpn_batch_size_per_im: 256 - rpn_fg_fraction: 0.5 - rpn_positive_overlap: 0.7 - rpn_negative_overlap: 0.3 - rpn_straddle_thresh: 0.0 - train_proposal: - min_size: 0.0 - nms_thresh: 0.7 - pre_nms_top_n: 2000 - post_nms_top_n: 2000 - test_proposal: - min_size: 0.0 - nms_thresh: 0.7 - pre_nms_top_n: 1000 - post_nms_top_n: 1000 - -FPNRoIAlign: - canconical_level: 4 - canonical_size: 224 - min_level: 2 - max_level: 5 - box_resolution: 7 - sampling_ratio: 2 - -CascadeBBoxAssigner: - batch_size_per_im: 512 - bbox_reg_weights: [10, 20, 30] - bg_thresh_lo: [0.0, 0.0, 0.0] - bg_thresh_hi: [0.5, 0.6, 0.7] - fg_thresh: [0.5, 0.6, 0.7] - fg_fraction: 0.25 - -CascadeBBoxHead: - head: CascadeTwoFCHead - nms: - keep_top_k: 100 - nms_threshold: 0.5 - score_threshold: 0.05 - -CascadeTwoFCHead: - mlp_dim: 1024 - -LearningRate: - base_lr: 0.02 - schedulers: - - !PiecewiseDecay - gamma: 0.1 - milestones: [60000, 80000] - - !LinearWarmup - start_factor: 0.3333333333333333 - steps: 500 - -OptimizerBuilder: - optimizer: - momentum: 0.9 - type: Momentum - regularizer: - factor: 0.0001 - type: L2 - -_READER_: 'faster_fpn_reader.yml' -TrainReader: - batch_size: 2 diff --git a/static/configs/cascade_rcnn_r50_fpn_1x_ms_test.yml b/static/configs/cascade_rcnn_r50_fpn_1x_ms_test.yml deleted file mode 100644 index 8b04aeeda73f30b5923cd9874e969eb8e977861b..0000000000000000000000000000000000000000 --- a/static/configs/cascade_rcnn_r50_fpn_1x_ms_test.yml +++ /dev/null @@ -1,161 +0,0 @@ -architecture: CascadeRCNN -max_iters: 90000 -snapshot_iter: 10000 -use_gpu: true -log_iter: 20 -save_dir: output -pretrain_weights: https://paddle-imagenet-models-name.bj.bcebos.com/ResNet50_cos_pretrained.tar -weights: 
output/cascade_rcnn_r50_fpn_1x/model_final -metric: COCO -num_classes: 81 - -CascadeRCNN: - backbone: ResNet - fpn: FPN - rpn_head: FPNRPNHead - roi_extractor: FPNRoIAlign - bbox_head: CascadeBBoxHead - bbox_assigner: CascadeBBoxAssigner - -ResNet: - norm_type: affine_channel - depth: 50 - feature_maps: [2, 3, 4, 5] - freeze_at: 2 - variant: b - -FPN: - min_level: 2 - max_level: 6 - num_chan: 256 - spatial_scale: [0.03125, 0.0625, 0.125, 0.25] - -FPNRPNHead: - anchor_generator: - anchor_sizes: [32, 64, 128, 256, 512] - aspect_ratios: [0.5, 1.0, 2.0] - stride: [16.0, 16.0] - variance: [1.0, 1.0, 1.0, 1.0] - anchor_start_size: 32 - min_level: 2 - max_level: 6 - num_chan: 256 - rpn_target_assign: - rpn_batch_size_per_im: 256 - rpn_fg_fraction: 0.5 - rpn_positive_overlap: 0.7 - rpn_negative_overlap: 0.3 - rpn_straddle_thresh: 0.0 - train_proposal: - min_size: 0.0 - nms_thresh: 0.7 - pre_nms_top_n: 2000 - post_nms_top_n: 2000 - test_proposal: - min_size: 0.0 - nms_thresh: 0.7 - pre_nms_top_n: 1000 - post_nms_top_n: 1000 - -FPNRoIAlign: - canconical_level: 4 - canonical_size: 224 - min_level: 2 - max_level: 5 - box_resolution: 7 - sampling_ratio: 2 - -CascadeBBoxAssigner: - batch_size_per_im: 512 - bbox_reg_weights: [10, 20, 30] - bg_thresh_lo: [0.0, 0.0, 0.0] - bg_thresh_hi: [0.5, 0.6, 0.7] - fg_thresh: [0.5, 0.6, 0.7] - fg_fraction: 0.25 - -CascadeBBoxHead: - head: CascadeTwoFCHead - nms: - keep_top_k: 100 - nms_threshold: 0.5 - score_threshold: 0.05 - -CascadeTwoFCHead: - mlp_dim: 1024 - -MultiScaleTEST: - score_thresh: 0.05 - nms_thresh: 0.5 - detections_per_im: 100 - enable_voting: true - vote_thresh: 0.9 - -LearningRate: - base_lr: 0.02 - schedulers: - - !PiecewiseDecay - gamma: 0.1 - milestones: [60000, 80000] - - !LinearWarmup - start_factor: 0.3333333333333333 - steps: 500 - -OptimizerBuilder: - optimizer: - momentum: 0.9 - type: Momentum - regularizer: - factor: 0.0001 - type: L2 - - -_READER_: 'faster_fpn_reader.yml' -TrainReader: - batch_size: 2 - -EvalReader: - batch_size: 1 - inputs_def: - fields: ['image', 'im_info', 'im_id', 'im_shape'] - multi_scale: true - num_scales: 18 - use_flip: true - dataset: - !COCODataSet - dataset_dir: dataset/coco - anno_path: annotations/instances_val2017.json - image_dir: val2017 - sample_transforms: - - !DecodeImage - to_rgb: true - - !NormalizeImage - is_channel_first: false - is_scale: true - mean: - - 0.485 - - 0.456 - - 0.406 - std: - - 0.229 - - 0.224 - - 0.225 - - !MultiscaleTestResize - origin_target_size: 800 - origin_max_size: 1333 - target_size: - - 400 - - 500 - - 600 - - 700 - - 900 - - 1000 - - 1100 - - 1200 - max_size: 2000 - use_flip: true - - !Permute - channel_first: true - to_bgr: false - - !PadMultiScaleTest - pad_to_stride: 32 - worker_num: 2 diff --git a/static/configs/dcn/cascade_mask_rcnn_dcnv2_se154_vd_fpn_gn_s1x.yml b/static/configs/dcn/cascade_mask_rcnn_dcnv2_se154_vd_fpn_gn_s1x.yml deleted file mode 100755 index 06d59aaff2ca5d5360dfeb35ceb6eebd3cf98895..0000000000000000000000000000000000000000 --- a/static/configs/dcn/cascade_mask_rcnn_dcnv2_se154_vd_fpn_gn_s1x.yml +++ /dev/null @@ -1,261 +0,0 @@ -architecture: CascadeMaskRCNN -max_iters: 300000 -snapshot_iter: 10 -use_gpu: true -log_iter: 20 -save_dir: output -pretrain_weights: https://paddle-imagenet-models-name.bj.bcebos.com/SENet154_vd_caffe_pretrained.tar -weights: output/cascade_mask_rcnn_dcn_se154_vd_fpn_gn_s1x/model_final -metric: COCO -num_classes: 81 - -CascadeMaskRCNN: - backbone: SENet - fpn: FPN - rpn_head: FPNRPNHead - roi_extractor: FPNRoIAlign - bbox_head: 
CascadeBBoxHead - bbox_assigner: CascadeBBoxAssigner - mask_assigner: MaskAssigner - mask_head: MaskHead - -SENet: - depth: 152 - feature_maps: [2, 3, 4, 5] - freeze_at: 2 - group_width: 4 - groups: 64 - norm_type: bn - freeze_norm: True - variant: d - dcn_v2_stages: [3, 4, 5] - std_senet: True - -FPN: - max_level: 6 - min_level: 2 - num_chan: 256 - spatial_scale: [0.03125, 0.0625, 0.125, 0.25] - freeze_norm: False - norm_type: gn - -FPNRPNHead: - anchor_generator: - aspect_ratios: [0.5, 1.0, 2.0] - variance: [1.0, 1.0, 1.0, 1.0] - anchor_start_size: 32 - max_level: 6 - min_level: 2 - num_chan: 256 - rpn_target_assign: - rpn_batch_size_per_im: 256 - rpn_fg_fraction: 0.5 - rpn_negative_overlap: 0.3 - rpn_positive_overlap: 0.7 - rpn_straddle_thresh: 0.0 - train_proposal: - min_size: 0.0 - nms_thresh: 0.7 - pre_nms_top_n: 2000 - post_nms_top_n: 2000 - test_proposal: - min_size: 0.0 - nms_thresh: 0.7 - pre_nms_top_n: 1000 - post_nms_top_n: 1000 - -FPNRoIAlign: - canconical_level: 4 - canonical_size: 224 - max_level: 5 - min_level: 2 - box_resolution: 7 - sampling_ratio: 2 - mask_resolution: 14 - -MaskHead: - dilation: 1 - conv_dim: 256 - num_convs: 4 - resolution: 28 - norm_type: gn - -CascadeBBoxAssigner: - batch_size_per_im: 512 - bbox_reg_weights: [10, 20, 30] - bg_thresh_hi: [0.5, 0.6, 0.7] - bg_thresh_lo: [0.0, 0.0, 0.0] - fg_fraction: 0.25 - fg_thresh: [0.5, 0.6, 0.7] - -MaskAssigner: - resolution: 28 - -CascadeBBoxHead: - head: CascadeXConvNormHead - nms: - keep_top_k: 100 - nms_threshold: 0.5 - score_threshold: 0.05 - -CascadeXConvNormHead: - norm_type: gn - -LearningRate: - base_lr: 0.01 - schedulers: - - !PiecewiseDecay - gamma: 0.1 - milestones: [240000, 280000] - - !LinearWarmup - start_factor: 0.01 - steps: 2000 - -OptimizerBuilder: - optimizer: - momentum: 0.9 - type: Momentum - regularizer: - factor: 0.0001 - type: L2 - -TrainReader: - # batch size per device - batch_size: 1 - inputs_def: - fields: ['image', 'im_info', 'im_id', 'gt_bbox', 'gt_class', 'is_crowd', 'gt_mask'] - dataset: - !COCODataSet - dataset_dir: dataset/coco - image_dir: train2017 - anno_path: annotations/instances_train2017.json - sample_transforms: - - !DecodeImage - to_rgb: false - - !RandomFlipImage - is_mask_flip: true - is_normalized: false - prob: 0.5 - - !NormalizeImage - is_channel_first: false - is_scale: False - mean: - - 102.9801 - - 115.9465 - - 122.7717 - std: - - 1.0 - - 1.0 - - 1.0 - - !ResizeImage - interp: 1 - target_size: - - 416 - - 448 - - 480 - - 512 - - 544 - - 576 - - 608 - - 640 - - 672 - - 704 - - 736 - - 768 - - 800 - - 832 - - 864 - - 896 - - 928 - - 960 - - 992 - - 1024 - - 1056 - - 1088 - - 1120 - - 1152 - - 1184 - - 1216 - - 1248 - - 1280 - - 1312 - - 1344 - - 1376 - - 1408 - max_size: 1600 - use_cv2: true - - !Permute - channel_first: true - to_bgr: false - batch_transforms: - - !PadBatch - pad_to_stride: 32 - worker_num: 8 - shuffle: true - -EvalReader: - batch_size: 1 - inputs_def: - fields: ['image', 'im_info', 'im_id', 'im_shape'] - dataset: - !COCODataSet - dataset_dir: dataset/coco - anno_path: annotations/instances_val2017.json - image_dir: val2017 - sample_transforms: - - !DecodeImage - to_rgb: False - - !NormalizeImage - is_channel_first: false - is_scale: False - mean: - - 102.9801 - - 115.9465 - - 122.7717 - std: - - 1.0 - - 1.0 - - 1.0 - - !ResizeImage - interp: 1 - target_size: - - 800 - max_size: 1333 - use_cv2: true - - !Permute - channel_first: true - to_bgr: false - batch_transforms: - - !PadBatch - pad_to_stride: 32 - worker_num: 2 - drop_empty: false - 
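Note on the reader settings above: this config pairs Caffe-pretrained SENet weights with raw BGR pixel means and a unit std (`to_rgb: False`, `is_scale: False`), while most other configs in this diff convert to RGB, scale to [0, 1], and apply ImageNet statistics. The following is a minimal NumPy restatement of what the `DecodeImage` plus `NormalizeImage` combination amounts to under each setting; it is illustrative only, not PaddleDetection's actual implementation:

```python
import numpy as np

def normalize(img_bgr_u8, mean, std, is_scale, to_rgb):
    # img_bgr_u8: HxWx3 uint8 image in BGR order, as cv2.imread returns it.
    img = img_bgr_u8.astype(np.float32)
    if to_rgb:                      # DecodeImage's to_rgb flag
        img = img[:, :, ::-1]       # BGR -> RGB
    if is_scale:                    # NormalizeImage's is_scale flag
        img /= 255.0                # map to [0, 1] before standardizing
    img -= np.asarray(mean, dtype=np.float32)
    img /= np.asarray(std, dtype=np.float32)
    return img

img = np.random.randint(0, 256, (640, 640, 3), dtype=np.uint8)

# Caffe-style, as in the SENet config above: keep BGR, subtract raw pixel
# means, leave std at 1 (matches the *_caffe_pretrained weights).
caffe_style = normalize(img, [102.9801, 115.9465, 122.7717],
                        [1.0, 1.0, 1.0], is_scale=False, to_rgb=False)

# ImageNet-style, as in most other configs in this diff: RGB, scale to
# [0, 1], then standardize with ImageNet channel statistics.
imagenet_style = normalize(img, [0.485, 0.456, 0.406],
                           [0.229, 0.224, 0.225], is_scale=True, to_rgb=True)
```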
-TestReader: - batch_size: 1 - inputs_def: - fields: ['image', 'im_info', 'im_id', 'im_shape'] - dataset: - !ImageFolder - anno_path: annotations/instances_val2017.json - sample_transforms: - - !DecodeImage - to_rgb: False - - !NormalizeImage - is_channel_first: false - is_scale: False - mean: - - 102.9801 - - 115.9465 - - 122.7717 - std: - - 1.0 - - 1.0 - - 1.0 - - !Permute - channel_first: true - to_bgr: false - batch_transforms: - - !PadBatch - pad_to_stride: 32 - worker_num: 2 diff --git a/static/configs/dcn/cascade_mask_rcnn_dcnv2_se154_vd_fpn_gn_s1x_ms_test.yml b/static/configs/dcn/cascade_mask_rcnn_dcnv2_se154_vd_fpn_gn_s1x_ms_test.yml deleted file mode 100644 index 6b3476ec5d81d1a56c0bf35aebe45f24f8e51b29..0000000000000000000000000000000000000000 --- a/static/configs/dcn/cascade_mask_rcnn_dcnv2_se154_vd_fpn_gn_s1x_ms_test.yml +++ /dev/null @@ -1,279 +0,0 @@ -architecture: CascadeMaskRCNN -max_iters: 300000 -snapshot_iter: 10000 -use_gpu: true -log_iter: 20 -save_dir: output -pretrain_weights: https://paddle-imagenet-models-name.bj.bcebos.com/SENet154_vd_caffe_pretrained.tar -weights: output/cascade_mask_rcnn_dcn_se154_vd_fpn_gn_s1x/model_final -metric: COCO -num_classes: 81 - -CascadeMaskRCNN: - backbone: SENet - fpn: FPN - rpn_head: FPNRPNHead - roi_extractor: FPNRoIAlign - bbox_head: CascadeBBoxHead - bbox_assigner: CascadeBBoxAssigner - mask_assigner: MaskAssigner - mask_head: MaskHead - -SENet: - depth: 152 - feature_maps: [2, 3, 4, 5] - freeze_at: 2 - group_width: 4 - groups: 64 - norm_type: bn - freeze_norm: True - variant: d - dcn_v2_stages: [3, 4, 5] - std_senet: True - -FPN: - max_level: 6 - min_level: 2 - num_chan: 256 - spatial_scale: [0.03125, 0.0625, 0.125, 0.25] - freeze_norm: False - norm_type: gn - -FPNRPNHead: - anchor_generator: - aspect_ratios: [0.5, 1.0, 2.0] - variance: [1.0, 1.0, 1.0, 1.0] - anchor_start_size: 32 - max_level: 6 - min_level: 2 - num_chan: 256 - rpn_target_assign: - rpn_batch_size_per_im: 256 - rpn_fg_fraction: 0.5 - rpn_negative_overlap: 0.3 - rpn_positive_overlap: 0.7 - rpn_straddle_thresh: 0.0 - train_proposal: - min_size: 0.0 - nms_thresh: 0.7 - pre_nms_top_n: 2000 - post_nms_top_n: 2000 - test_proposal: - min_size: 0.0 - nms_thresh: 0.7 - pre_nms_top_n: 1000 - post_nms_top_n: 1000 - -FPNRoIAlign: - canconical_level: 4 - canonical_size: 224 - max_level: 5 - min_level: 2 - box_resolution: 7 - sampling_ratio: 2 - mask_resolution: 14 - -MaskHead: - dilation: 1 - conv_dim: 256 - num_convs: 4 - resolution: 28 - norm_type: gn - -CascadeBBoxAssigner: - batch_size_per_im: 512 - bbox_reg_weights: [10, 20, 30] - bg_thresh_hi: [0.5, 0.6, 0.7] - bg_thresh_lo: [0.0, 0.0, 0.0] - fg_fraction: 0.25 - fg_thresh: [0.5, 0.6, 0.7] - -MaskAssigner: - resolution: 28 - -CascadeBBoxHead: - head: CascadeXConvNormHead - nms: - keep_top_k: 100 - nms_threshold: 0.5 - score_threshold: 0.05 - -CascadeXConvNormHead: - norm_type: gn - -MultiScaleTEST: - score_thresh: 0.05 - nms_thresh: 0.5 - detections_per_im: 100 - enable_voting: true - vote_thresh: 0.9 - -LearningRate: - base_lr: 0.01 - schedulers: - - !PiecewiseDecay - gamma: 0.1 - milestones: [240000, 280000] - - !LinearWarmup - start_factor: 0.01 - steps: 2000 - -OptimizerBuilder: - optimizer: - momentum: 0.9 - type: Momentum - regularizer: - factor: 0.0001 - type: L2 - -TrainReader: - # batch size per device - batch_size: 1 - inputs_def: - fields: ['image', 'im_info', 'im_id', 'gt_bbox', 'gt_class', 'is_crowd', 'gt_mask'] - dataset: - !COCODataSet - dataset_dir: dataset/coco - image_dir: train2017 - anno_path: 
annotations/instances_train2017.json - sample_transforms: - - !DecodeImage - to_rgb: False - with_mixup: False - - !RandomFlipImage - is_mask_flip: true - is_normalized: false - prob: 0.5 - - !NormalizeImage - is_channel_first: false - is_scale: False - mean: - - 102.9801 - - 115.9465 - - 122.7717 - std: - - 1.0 - - 1.0 - - 1.0 - - !ResizeImage - interp: 1 - target_size: - - 416 - - 448 - - 480 - - 512 - - 544 - - 576 - - 608 - - 640 - - 672 - - 704 - - 736 - - 768 - - 800 - - 832 - - 864 - - 896 - - 928 - - 960 - - 992 - - 1024 - - 1056 - - 1088 - - 1120 - - 1152 - - 1184 - - 1216 - - 1248 - - 1280 - - 1312 - - 1344 - - 1376 - - 1408 - max_size: 1600 - use_cv2: true - - !Permute - channel_first: true - to_bgr: false - batch_transforms: - - !PadBatch - pad_to_stride: 32 - worker_num: 8 - shuffle: true - -EvalReader: - batch_size: 1 - inputs_def: - fields: ['image', 'im_info', 'im_id', 'im_shape'] - multi_scale: true - # num_scale = (len(target_size) + 1) * (1 + use_flip) - num_scales: 18 - use_flip: true - dataset: - !COCODataSet - image_dir: val2017 - anno_path: annotations/instances_val2017.json - dataset_dir: dataset/coco - sample_transforms: - - !DecodeImage - to_rgb: False - - !NormalizeImage - is_channel_first: false - is_scale: False - mean: - - 102.9801 - - 115.9465 - - 122.7717 - std: - - 1.0 - - 1.0 - - 1.0 - - !MultiscaleTestResize - origin_target_size: 800 - origin_max_size: 1333 - target_size: - - 400 - - 500 - - 600 - - 700 - - 900 - - 1000 - - 1100 - - 1200 - max_size: 2000 - use_flip: true - - !Permute - channel_first: true - to_bgr: false - batch_transforms: - - !PadMultiScaleTest - pad_to_stride: 32 - worker_num: 2 - -TestReader: - batch_size: 1 - inputs_def: - fields: ['image', 'im_info', 'im_id', 'im_shape'] - dataset: - !ImageFolder - anno_path: annotations/instances_val2017.json - sample_transforms: - - !DecodeImage - to_rgb: False - - !NormalizeImage - is_channel_first: false - is_scale: False - mean: - - 102.9801 - - 115.9465 - - 122.7717 - std: - - 1.0 - - 1.0 - - 1.0 - - !Permute - channel_first: true - to_bgr: false - batch_transforms: - - !PadBatch - pad_to_stride: 32 diff --git a/static/configs/dcn/cascade_rcnn_cbr200_vd_fpn_dcnv2_nonlocal_softnms.yml b/static/configs/dcn/cascade_rcnn_cbr200_vd_fpn_dcnv2_nonlocal_softnms.yml deleted file mode 100644 index 3330656919cbaa7ae4c1381d3db323764edb7e5a..0000000000000000000000000000000000000000 --- a/static/configs/dcn/cascade_rcnn_cbr200_vd_fpn_dcnv2_nonlocal_softnms.yml +++ /dev/null @@ -1,212 +0,0 @@ -architecture: CascadeRCNN -max_iters: 460000 -snapshot_iter: 10000 -use_gpu: true -log_iter: 20 -save_dir: output -pretrain_weights: https://paddle-imagenet-models-name.bj.bcebos.com/CBResNet200_vd_pretrained.tar -weights: output/cascade_rcnn_cbr200_vd_fpn_dcnv2_nonlocal_softnms/model_final -metric: COCO -num_classes: 81 - -CascadeRCNN: - backbone: CBResNet - fpn: FPN - rpn_head: FPNRPNHead - roi_extractor: FPNRoIAlign - bbox_head: CascadeBBoxHead - bbox_assigner: CascadeBBoxAssigner - -CBResNet: - norm_type: bn - depth: 200 - feature_maps: [2, 3, 4, 5] - freeze_at: 2 - variant: d - dcn_v2_stages: [3, 4, 5] - nonlocal_stages: [4] - repeat_num: 2 - -FPN: - min_level: 2 - max_level: 6 - num_chan: 256 - spatial_scale: [0.03125, 0.0625, 0.125, 0.25] - -FPNRPNHead: - anchor_generator: - anchor_sizes: [32, 64, 128, 256, 512] - aspect_ratios: [0.5, 1.0, 2.0] - stride: [16.0, 16.0] - variance: [1.0, 1.0, 1.0, 1.0] - anchor_start_size: 32 - min_level: 2 - max_level: 6 - num_chan: 256 - rpn_target_assign: - 
rpn_batch_size_per_im: 256 - rpn_fg_fraction: 0.5 - rpn_positive_overlap: 0.7 - rpn_negative_overlap: 0.3 - rpn_straddle_thresh: 0.0 - train_proposal: - min_size: 0.0 - nms_thresh: 0.7 - pre_nms_top_n: 2000 - post_nms_top_n: 2000 - test_proposal: - min_size: 0.0 - nms_thresh: 0.7 - pre_nms_top_n: 1000 - post_nms_top_n: 1000 - -FPNRoIAlign: - canconical_level: 4 - canonical_size: 224 - min_level: 2 - max_level: 5 - box_resolution: 14 - sampling_ratio: 2 - -CascadeBBoxAssigner: - batch_size_per_im: 512 - bbox_reg_weights: [10, 20, 30] - bg_thresh_lo: [0.0, 0.0, 0.0] - bg_thresh_hi: [0.5, 0.6, 0.7] - fg_thresh: [0.5, 0.6, 0.7] - fg_fraction: 0.25 - -CascadeBBoxHead: - head: CascadeTwoFCHead - nms: MultiClassSoftNMS - -CascadeTwoFCHead: - mlp_dim: 1024 - -MultiClassSoftNMS: - score_threshold: 0.01 - keep_top_k: 300 - softnms_sigma: 0.5 - -LearningRate: - base_lr: 0.005 - schedulers: - - !PiecewiseDecay - gamma: 0.1 - milestones: [340000, 440000] - - !LinearWarmup - start_factor: 0.1 - steps: 1000 - -OptimizerBuilder: - optimizer: - momentum: 0.9 - type: Momentum - regularizer: - factor: 0.0001 - type: L2 - -TrainReader: - batch_size: 1 - inputs_def: - fields: ['image', 'im_info', 'im_id', 'gt_bbox', 'gt_class', 'is_crowd'] - dataset: - !COCODataSet - image_dir: train2017 - anno_path: annotations/instances_train2017.json - dataset_dir: dataset/coco - sample_transforms: - - !DecodeImage - to_rgb: true - - !RandomFlipImage - prob: 0.5 - - !NormalizeImage - is_channel_first: false - is_scale: True - mean: - - 0.485 - - 0.456 - - 0.406 - std: - - 0.229 - - 0.224 - - 0.225 - - !ResizeImage - interp: 1 - target_size: [416, 448, 480, 512, 544, 576, 608, 640, 672, 704, 736, 768, 800, 832, 864, 896, 928, 960, 992, 1024, 1056, 1088, 1120, 1152, 1184, 1216, 1248, 1280, 1312, 1344, 1376, 1408] - max_size: 1600 - use_cv2: true - - !Permute - to_bgr: false - batch_transforms: - - !PadBatch - pad_to_stride: 32 - worker_num: 2 - shuffle: true - -EvalReader: - batch_size: 1 - inputs_def: - fields: ['image', 'im_info', 'im_id', 'im_shape'] - dataset: - !COCODataSet - image_dir: val2017 - anno_path: annotations/instances_val2017.json - dataset_dir: dataset/coco - sample_transforms: - - !DecodeImage - to_rgb: True - with_mixup: False - - !NormalizeImage - is_channel_first: false - is_scale: True - mean: - - 0.485 - - 0.456 - - 0.406 - std: - - 0.229 - - 0.224 - - 0.225 - - !ResizeImage - interp: 1 - target_size: - - 1200 - max_size: 2000 - use_cv2: true - - !Permute - to_bgr: false - batch_transforms: - - !PadBatch - pad_to_stride: 32 - worker_num: 2 - -TestReader: - inputs_def: - fields: ['image', 'im_info', 'im_id', 'im_shape'] - dataset: - !ImageFolder - anno_path: annotations/instances_val2017.json - sample_transforms: - - !DecodeImage - to_rgb: true - with_mixup: false - - !NormalizeImage - is_channel_first: false - is_scale: true - mean: [0.485,0.456,0.406] - std: [0.229, 0.224,0.225] - - !ResizeImage - interp: 1 - max_size: 1333 - target_size: 800 - use_cv2: true - - !Permute - channel_first: true - to_bgr: false - batch_transforms: - - !PadBatch - pad_to_stride: 32 - use_padded_im_info: true - batch_size: 1 - worker_num: 2 diff --git a/static/configs/dcn/cascade_rcnn_cls_aware_r200_vd_fpn_dcnv2_nonlocal_softnms.yml b/static/configs/dcn/cascade_rcnn_cls_aware_r200_vd_fpn_dcnv2_nonlocal_softnms.yml deleted file mode 100644 index 29b204cd733af120333c3e2e87fe97e8236f03d1..0000000000000000000000000000000000000000 --- a/static/configs/dcn/cascade_rcnn_cls_aware_r200_vd_fpn_dcnv2_nonlocal_softnms.yml +++ 
/dev/null @@ -1,214 +0,0 @@ -architecture: CascadeRCNNClsAware -max_iters: 460000 -snapshot_iter: 10000 -use_gpu: true -log_iter: 20 -save_dir: output -pretrain_weights: https://paddle-imagenet-models-name.bj.bcebos.com/ResNet200_vd_pretrained.tar -weights: output/cascade_rcnn_cls_aware_r200_vd_fpn_dcnv2_nonlocal_softnms/model_final -metric: COCO -num_classes: 81 - -CascadeRCNNClsAware: - backbone: ResNet - fpn: FPN - rpn_head: FPNRPNHead - roi_extractor: FPNRoIAlign - bbox_head: CascadeBBoxHead - bbox_assigner: CascadeBBoxAssigner - -ResNet: - norm_type: bn - depth: 200 - feature_maps: [2, 3, 4, 5] - freeze_at: 2 - variant: d - dcn_v2_stages: [3, 4, 5] - nonlocal_stages: [4] - -FPN: - min_level: 2 - max_level: 6 - num_chan: 256 - spatial_scale: [0.03125, 0.0625, 0.125, 0.25] - -FPNRPNHead: - anchor_generator: - anchor_sizes: [32, 64, 128, 256, 512] - aspect_ratios: [0.5, 1.0, 2.0] - stride: [16.0, 16.0] - variance: [1.0, 1.0, 1.0, 1.0] - anchor_start_size: 32 - min_level: 2 - max_level: 6 - num_chan: 256 - rpn_target_assign: - rpn_batch_size_per_im: 256 - rpn_fg_fraction: 0.5 - rpn_positive_overlap: 0.7 - rpn_negative_overlap: 0.3 - rpn_straddle_thresh: 0.0 - train_proposal: - min_size: 0.0 - nms_thresh: 0.7 - pre_nms_top_n: 2000 - post_nms_top_n: 2000 - test_proposal: - min_size: 0.0 - nms_thresh: 0.7 - pre_nms_top_n: 1000 - post_nms_top_n: 1000 - -FPNRoIAlign: - canconical_level: 4 - canonical_size: 224 - min_level: 2 - max_level: 5 - box_resolution: 14 - sampling_ratio: 2 - -CascadeBBoxAssigner: - batch_size_per_im: 512 - bbox_reg_weights: [10, 20, 30] - bg_thresh_lo: [0.0, 0.0, 0.0] - bg_thresh_hi: [0.5, 0.6, 0.7] - fg_thresh: [0.5, 0.6, 0.7] - fg_fraction: 0.25 - class_aware: True - -CascadeBBoxHead: - head: CascadeTwoFCHead - nms: MultiClassSoftNMS - -CascadeTwoFCHead: - mlp_dim: 1024 - -MultiClassSoftNMS: - score_threshold: 0.01 - keep_top_k: 300 - softnms_sigma: 0.5 - -LearningRate: - base_lr: 0.01 - schedulers: - - !PiecewiseDecay - gamma: 0.1 - milestones: [340000, 440000] - - !LinearWarmup - start_factor: 0.1 - steps: 1000 - -OptimizerBuilder: - optimizer: - momentum: 0.9 - type: Momentum - regularizer: - factor: 0.0001 - type: L2 - -TrainReader: - inputs_def: - fields: ['image', 'im_info', 'im_id', 'gt_bbox', 'gt_class', 'is_crowd'] - dataset: - !COCODataSet - image_dir: train2017 - anno_path: annotations/instances_train2017.json - dataset_dir: dataset/coco - sample_transforms: - - !DecodeImage - to_rgb: true - - !RandomFlipImage - prob: 0.5 - - !NormalizeImage - is_channel_first: false - is_scale: True - mean: - - 0.485 - - 0.456 - - 0.406 - std: - - 0.229 - - 0.224 - - 0.225 - - !ResizeImage - interp: 1 - target_size: [416, 448, 480, 512, 544, 576, 608, 640, 672, 704, 736, 768, 800, 832, 864, 896, 928, 960, 992, 1024, 1056, 1088, 1120, 1152, 1184, 1216, 1248, 1280, 1312, 1344, 1376, 1408] - max_size: 1800 - use_cv2: true - - !Permute - to_bgr: false - batch_transforms: - - !PadBatch - pad_to_stride: 32 - batch_size: 1 - shuffle: true - drop_last: false - worker_num: 2 - -EvalReader: - inputs_def: - fields: ['image', 'im_info', 'im_id', 'im_shape'] - dataset: - !COCODataSet - image_dir: val2017 - anno_path: annotations/instances_val2017.json - dataset_dir: dataset/coco - sample_transforms: - - !DecodeImage - to_rgb: True - with_mixup: False - - !NormalizeImage - is_channel_first: false - is_scale: True - mean: - - 0.485 - - 0.456 - - 0.406 - std: - - 0.229 - - 0.224 - - 0.225 - - !ResizeImage - interp: 1 - target_size: - - 1200 - max_size: 2000 - use_cv2: true - - !Permute - 
to_bgr: false - batch_transforms: - - !PadBatch - pad_to_stride: 32 - batch_size: 1 - worker_num: 2 - drop_empty: false - -TestReader: - inputs_def: - fields: ['image', 'im_info', 'im_id', 'im_shape'] - dataset: - !ImageFolder - anno_path: annotations/instances_val2017.json - sample_transforms: - - !DecodeImage - to_rgb: true - with_mixup: false - - !NormalizeImage - is_channel_first: false - is_scale: true - mean: [0.485,0.456,0.406] - std: [0.229, 0.224,0.225] - - !ResizeImage - interp: 1 - max_size: 1333 - target_size: 800 - use_cv2: true - - !Permute - channel_first: true - to_bgr: false - batch_transforms: - - !PadBatch - pad_to_stride: 32 - use_padded_im_info: true - batch_size: 1 - worker_num: 2 diff --git a/static/configs/dcn/cascade_rcnn_dcn_r101_vd_fpn_1x.yml b/static/configs/dcn/cascade_rcnn_dcn_r101_vd_fpn_1x.yml deleted file mode 100644 index d4597b328a879885b62c30aaa90e82e8f17caa58..0000000000000000000000000000000000000000 --- a/static/configs/dcn/cascade_rcnn_dcn_r101_vd_fpn_1x.yml +++ /dev/null @@ -1,107 +0,0 @@ -architecture: CascadeRCNN -max_iters: 90000 -snapshot_iter: 10000 -use_gpu: true -log_iter: 20 -save_dir: output -pretrain_weights: https://paddle-imagenet-models-name.bj.bcebos.com/ResNet101_vd_pretrained.tar -weights: output/cascade_rcnn_dcn_r101_vd_fpn_1x/model_final -metric: COCO -num_classes: 81 - -CascadeRCNN: - backbone: ResNet - fpn: FPN - rpn_head: FPNRPNHead - roi_extractor: FPNRoIAlign - bbox_head: CascadeBBoxHead - bbox_assigner: CascadeBBoxAssigner - -ResNet: - norm_type: bn - depth: 101 - feature_maps: [2, 3, 4, 5] - freeze_at: 2 - variant: d - dcn_v2_stages: [3, 4, 5] - -FPN: - min_level: 2 - max_level: 6 - num_chan: 256 - spatial_scale: [0.03125, 0.0625, 0.125, 0.25] - -FPNRPNHead: - anchor_generator: - anchor_sizes: [32, 64, 128, 256, 512] - aspect_ratios: [0.5, 1.0, 2.0] - stride: [16.0, 16.0] - variance: [1.0, 1.0, 1.0, 1.0] - anchor_start_size: 32 - min_level: 2 - max_level: 6 - num_chan: 256 - rpn_target_assign: - rpn_batch_size_per_im: 256 - rpn_fg_fraction: 0.5 - rpn_positive_overlap: 0.7 - rpn_negative_overlap: 0.3 - rpn_straddle_thresh: 0.0 - train_proposal: - min_size: 0.0 - nms_thresh: 0.7 - pre_nms_top_n: 2000 - post_nms_top_n: 2000 - test_proposal: - min_size: 0.0 - nms_thresh: 0.7 - pre_nms_top_n: 1000 - post_nms_top_n: 1000 - -FPNRoIAlign: - canconical_level: 4 - canonical_size: 224 - min_level: 2 - max_level: 5 - box_resolution: 7 - sampling_ratio: 2 - -CascadeBBoxAssigner: - batch_size_per_im: 512 - bbox_reg_weights: [10, 20, 30] - bg_thresh_lo: [0.0, 0.0, 0.0] - bg_thresh_hi: [0.5, 0.6, 0.7] - fg_thresh: [0.5, 0.6, 0.7] - fg_fraction: 0.25 - -CascadeBBoxHead: - head: CascadeTwoFCHead - nms: - keep_top_k: 100 - nms_threshold: 0.5 - score_threshold: 0.05 - -CascadeTwoFCHead: - mlp_dim: 1024 - -LearningRate: - base_lr: 0.02 - schedulers: - - !PiecewiseDecay - gamma: 0.1 - milestones: [60000, 80000] - - !LinearWarmup - start_factor: 0.1 - steps: 1000 - -OptimizerBuilder: - optimizer: - momentum: 0.9 - type: Momentum - regularizer: - factor: 0.0001 - type: L2 - -_READER_: '../faster_fpn_reader.yml' -TrainReader: - batch_size: 2 diff --git a/static/configs/dcn/cascade_rcnn_dcn_r50_fpn_1x.yml b/static/configs/dcn/cascade_rcnn_dcn_r50_fpn_1x.yml deleted file mode 100644 index ec12ec0c132927a7439e59326f44020983f25465..0000000000000000000000000000000000000000 --- a/static/configs/dcn/cascade_rcnn_dcn_r50_fpn_1x.yml +++ /dev/null @@ -1,107 +0,0 @@ -architecture: CascadeRCNN -max_iters: 90000 -snapshot_iter: 10000 -use_gpu: true -log_iter: 20 
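The two `*_softnms` configs above replace the usual hard NMS with `MultiClassSoftNMS`, which decays the scores of overlapping boxes instead of discarding them outright. Below is a rough per-class NumPy sketch of the Gaussian variant with the parameters used there (`softnms_sigma: 0.5`, `score_threshold: 0.01`, `keep_top_k: 300`); it illustrates the idea and is not the actual Paddle operator:

```python
import numpy as np

def iou(box, boxes):
    # Intersection-over-union of one [x1, y1, x2, y2] box against N boxes.
    x1 = np.maximum(box[0], boxes[:, 0])
    y1 = np.maximum(box[1], boxes[:, 1])
    x2 = np.minimum(box[2], boxes[:, 2])
    y2 = np.minimum(box[3], boxes[:, 3])
    inter = np.clip(x2 - x1, 0, None) * np.clip(y2 - y1, 0, None)
    area_a = (box[2] - box[0]) * (box[3] - box[1])
    area_b = (boxes[:, 2] - boxes[:, 0]) * (boxes[:, 3] - boxes[:, 1])
    return inter / (area_a + area_b - inter)

def soft_nms(boxes, scores, sigma=0.5, score_threshold=0.01, keep_top_k=300):
    # Gaussian Soft-NMS: instead of deleting boxes that overlap the current
    # top box, decay their scores by exp(-iou^2 / sigma); a box only drops
    # out once its decayed score falls below score_threshold.
    boxes, scores = boxes.copy(), scores.copy()
    kept = []
    while scores.size and len(kept) < keep_top_k:
        i = int(scores.argmax())
        kept.append((boxes[i], float(scores[i])))
        decay = np.exp(-iou(boxes[i], boxes) ** 2 / sigma)
        scores = scores * decay
        keep_mask = scores > score_threshold
        keep_mask[i] = False          # the selected box itself is done
        boxes, scores = boxes[keep_mask], scores[keep_mask]
    return kept

boxes = np.array([[0, 0, 10, 10], [1, 1, 11, 11], [50, 50, 60, 60]], float)
scores = np.array([0.9, 0.8, 0.7])
print(soft_nms(boxes, scores))  # the overlapping box is down-weighted, not dropped
```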
-save_dir: output -pretrain_weights: https://paddle-imagenet-models-name.bj.bcebos.com/ResNet50_cos_pretrained.tar -weights: output/cascade_rcnn_dcn_r50_fpn_1x/model_final -metric: COCO -num_classes: 81 - -CascadeRCNN: - backbone: ResNet - fpn: FPN - rpn_head: FPNRPNHead - roi_extractor: FPNRoIAlign - bbox_head: CascadeBBoxHead - bbox_assigner: CascadeBBoxAssigner - -ResNet: - norm_type: bn - depth: 50 - feature_maps: [2, 3, 4, 5] - freeze_at: 2 - variant: b - dcn_v2_stages: [3, 4, 5] - -FPN: - min_level: 2 - max_level: 6 - num_chan: 256 - spatial_scale: [0.03125, 0.0625, 0.125, 0.25] - -FPNRPNHead: - anchor_generator: - anchor_sizes: [32, 64, 128, 256, 512] - aspect_ratios: [0.5, 1.0, 2.0] - stride: [16.0, 16.0] - variance: [1.0, 1.0, 1.0, 1.0] - anchor_start_size: 32 - min_level: 2 - max_level: 6 - num_chan: 256 - rpn_target_assign: - rpn_batch_size_per_im: 256 - rpn_fg_fraction: 0.5 - rpn_positive_overlap: 0.7 - rpn_negative_overlap: 0.3 - rpn_straddle_thresh: 0.0 - train_proposal: - min_size: 0.0 - nms_thresh: 0.7 - pre_nms_top_n: 2000 - post_nms_top_n: 2000 - test_proposal: - min_size: 0.0 - nms_thresh: 0.7 - pre_nms_top_n: 1000 - post_nms_top_n: 1000 - -FPNRoIAlign: - canconical_level: 4 - canonical_size: 224 - min_level: 2 - max_level: 5 - box_resolution: 7 - sampling_ratio: 2 - -CascadeBBoxAssigner: - batch_size_per_im: 512 - bbox_reg_weights: [10, 20, 30] - bg_thresh_lo: [0.0, 0.0, 0.0] - bg_thresh_hi: [0.5, 0.6, 0.7] - fg_thresh: [0.5, 0.6, 0.7] - fg_fraction: 0.25 - -CascadeBBoxHead: - head: CascadeTwoFCHead - nms: - keep_top_k: 100 - nms_threshold: 0.5 - score_threshold: 0.05 - -CascadeTwoFCHead: - mlp_dim: 1024 - -LearningRate: - base_lr: 0.02 - schedulers: - - !PiecewiseDecay - gamma: 0.1 - milestones: [60000, 80000] - - !LinearWarmup - start_factor: 0.1 - steps: 1000 - -OptimizerBuilder: - optimizer: - momentum: 0.9 - type: Momentum - regularizer: - factor: 0.0001 - type: L2 - -_READER_: '../faster_fpn_reader.yml' -TrainReader: - batch_size: 2 diff --git a/static/configs/dcn/cascade_rcnn_dcn_x101_vd_64x4d_fpn_1x.yml b/static/configs/dcn/cascade_rcnn_dcn_x101_vd_64x4d_fpn_1x.yml deleted file mode 100644 index 80f059961c2decf4f4a2bb8af86198a049e4de6d..0000000000000000000000000000000000000000 --- a/static/configs/dcn/cascade_rcnn_dcn_x101_vd_64x4d_fpn_1x.yml +++ /dev/null @@ -1,109 +0,0 @@ -architecture: CascadeRCNN -max_iters: 90000 -snapshot_iter: 10000 -use_gpu: true -log_iter: 20 -save_dir: output -pretrain_weights: https://paddle-imagenet-models-name.bj.bcebos.com/ResNeXt101_vd_64x4d_pretrained.tar -weights: output/cascade_rcnn_dcn_x101_vd_64x4d_fpn_1x/model_final -metric: COCO -num_classes: 81 - -CascadeRCNN: - backbone: ResNeXt - fpn: FPN - rpn_head: FPNRPNHead - roi_extractor: FPNRoIAlign - bbox_head: CascadeBBoxHead - bbox_assigner: CascadeBBoxAssigner - -ResNeXt: - norm_type: bn - depth: 101 - feature_maps: [2, 3, 4, 5] - freeze_at: 2 - group_width: 4 - groups: 64 - variant: d - dcn_v2_stages: [3, 4, 5] - -FPN: - min_level: 2 - max_level: 6 - num_chan: 256 - spatial_scale: [0.03125, 0.0625, 0.125, 0.25] - -FPNRPNHead: - anchor_generator: - anchor_sizes: [32, 64, 128, 256, 512] - aspect_ratios: [0.5, 1.0, 2.0] - stride: [16.0, 16.0] - variance: [1.0, 1.0, 1.0, 1.0] - anchor_start_size: 32 - min_level: 2 - max_level: 6 - num_chan: 256 - rpn_target_assign: - rpn_batch_size_per_im: 256 - rpn_fg_fraction: 0.5 - rpn_positive_overlap: 0.7 - rpn_negative_overlap: 0.3 - rpn_straddle_thresh: 0.0 - train_proposal: - min_size: 0.0 - nms_thresh: 0.7 - pre_nms_top_n: 2000 - 
post_nms_top_n: 2000 - test_proposal: - min_size: 0.0 - nms_thresh: 0.7 - pre_nms_top_n: 1000 - post_nms_top_n: 1000 - -FPNRoIAlign: - canconical_level: 4 - canonical_size: 224 - min_level: 2 - max_level: 5 - box_resolution: 7 - sampling_ratio: 2 - -CascadeBBoxAssigner: - batch_size_per_im: 512 - bbox_reg_weights: [10, 20, 30] - bg_thresh_lo: [0.0, 0.0, 0.0] - bg_thresh_hi: [0.5, 0.6, 0.7] - fg_thresh: [0.5, 0.6, 0.7] - fg_fraction: 0.25 - -CascadeBBoxHead: - head: CascadeTwoFCHead - nms: - keep_top_k: 100 - nms_threshold: 0.5 - score_threshold: 0.05 - -CascadeTwoFCHead: - mlp_dim: 1024 - -LearningRate: - base_lr: 0.02 - schedulers: - - !PiecewiseDecay - gamma: 0.1 - milestones: [60000, 80000] - - !LinearWarmup - start_factor: 0.1 - steps: 1000 - -OptimizerBuilder: - optimizer: - momentum: 0.9 - type: Momentum - regularizer: - factor: 0.0001 - type: L2 - -_READER_: '../faster_fpn_reader.yml' -TrainReader: - batch_size: 2 diff --git a/static/configs/dcn/faster_rcnn_dcn_r101_vd_fpn_1x.yml b/static/configs/dcn/faster_rcnn_dcn_r101_vd_fpn_1x.yml deleted file mode 100644 index be99fa74b57dbb2d9114fd2392a114120fbdccec..0000000000000000000000000000000000000000 --- a/static/configs/dcn/faster_rcnn_dcn_r101_vd_fpn_1x.yml +++ /dev/null @@ -1,108 +0,0 @@ -architecture: FasterRCNN -max_iters: 90000 -snapshot_iter: 10000 -use_gpu: true -log_iter: 20 -save_dir: output -pretrain_weights: https://paddle-imagenet-models-name.bj.bcebos.com/ResNet101_vd_pretrained.tar -weights: output/faster_rcnn_dcn_r101_vd_fpn_1x/model_final -metric: COCO -num_classes: 81 - -FasterRCNN: - backbone: ResNet - fpn: FPN - rpn_head: FPNRPNHead - roi_extractor: FPNRoIAlign - bbox_head: BBoxHead - bbox_assigner: BBoxAssigner - -ResNet: - depth: 101 - feature_maps: [2, 3, 4, 5] - freeze_at: 2 - norm_type: bn - variant: d - dcn_v2_stages: [3, 4, 5] - -FPN: - max_level: 6 - min_level: 2 - num_chan: 256 - spatial_scale: [0.03125, 0.0625, 0.125, 0.25] - -FPNRPNHead: - anchor_generator: - anchor_sizes: [32, 64, 128, 256, 512] - aspect_ratios: [0.5, 1.0, 2.0] - stride: [16.0, 16.0] - variance: [1.0, 1.0, 1.0, 1.0] - anchor_start_size: 32 - max_level: 6 - min_level: 2 - num_chan: 256 - rpn_target_assign: - rpn_batch_size_per_im: 256 - rpn_fg_fraction: 0.5 - rpn_negative_overlap: 0.3 - rpn_positive_overlap: 0.7 - rpn_straddle_thresh: 0.0 - train_proposal: - min_size: 0.0 - nms_thresh: 0.7 - post_nms_top_n: 2000 - pre_nms_top_n: 2000 - test_proposal: - min_size: 0.0 - nms_thresh: 0.7 - post_nms_top_n: 1000 - pre_nms_top_n: 1000 - -FPNRoIAlign: - canconical_level: 4 - canonical_size: 224 - max_level: 5 - min_level: 2 - box_resolution: 7 - sampling_ratio: 2 - -BBoxAssigner: - batch_size_per_im: 512 - bbox_reg_weights: [0.1, 0.1, 0.2, 0.2] - bg_thresh_hi: 0.5 - bg_thresh_lo: 0.0 - fg_fraction: 0.25 - fg_thresh: 0.5 - -BBoxHead: - head: TwoFCHead - nms: - keep_top_k: 100 - nms_threshold: 0.5 - score_threshold: 0.05 - -TwoFCHead: - mlp_dim: 1024 - -LearningRate: - base_lr: 0.02 - schedulers: - - !PiecewiseDecay - gamma: 0.1 - milestones: [60000, 80000] - - !LinearWarmup - start_factor: 0.1 - steps: 1000 - -OptimizerBuilder: - optimizer: - momentum: 0.9 - type: Momentum - regularizer: - factor: 0.0001 - type: L2 - -_READER_: '../faster_fpn_reader.yml' -TrainReader: - # batch size per device - batch_size: 2 diff --git a/static/configs/dcn/faster_rcnn_dcn_r50_fpn_1x.yml b/static/configs/dcn/faster_rcnn_dcn_r50_fpn_1x.yml deleted file mode 100644 index 828ec4d40714d66ed2e05fffd14436516bd231d4..0000000000000000000000000000000000000000 --- 
a/static/configs/dcn/faster_rcnn_dcn_r50_fpn_1x.yml +++ /dev/null @@ -1,108 +0,0 @@ -architecture: FasterRCNN -max_iters: 90000 -use_gpu: true -snapshot_iter: 10000 -log_iter: 20 -save_dir: output -pretrain_weights: https://paddle-imagenet-models-name.bj.bcebos.com/ResNet50_cos_pretrained.tar -metric: COCO -weights: output/faster_rcnn_dcn_r50_fpn_1x/model_final -num_classes: 81 - -FasterRCNN: - backbone: ResNet - fpn: FPN - rpn_head: FPNRPNHead - roi_extractor: FPNRoIAlign - bbox_head: BBoxHead - bbox_assigner: BBoxAssigner - -ResNet: - depth: 50 - norm_type: bn - feature_maps: [2, 3, 4, 5] - freeze_at: 2 - dcn_v2_stages: [3, 4, 5] - -FPN: - min_level: 2 - max_level: 6 - num_chan: 256 - spatial_scale: [0.03125, 0.0625, 0.125, 0.25] - -FPNRPNHead: - anchor_generator: - anchor_sizes: [32, 64, 128, 256, 512] - aspect_ratios: [0.5, 1.0, 2.0] - stride: [16.0, 16.0] - variance: [1.0, 1.0, 1.0, 1.0] - anchor_start_size: 32 - min_level: 2 - max_level: 6 - num_chan: 256 - rpn_target_assign: - rpn_batch_size_per_im: 256 - rpn_fg_fraction: 0.5 - rpn_positive_overlap: 0.7 - rpn_negative_overlap: 0.3 - rpn_straddle_thresh: 0.0 - train_proposal: - min_size: 0.0 - nms_thresh: 0.7 - pre_nms_top_n: 2000 - post_nms_top_n: 2000 - test_proposal: - min_size: 0.0 - nms_thresh: 0.7 - pre_nms_top_n: 1000 - post_nms_top_n: 1000 - -FPNRoIAlign: - canconical_level: 4 - canonical_size: 224 - min_level: 2 - max_level: 5 - box_resolution: 7 - sampling_ratio: 2 - -BBoxAssigner: - batch_size_per_im: 512 - bbox_reg_weights: [0.1, 0.1, 0.2, 0.2] - bg_thresh_lo: 0.0 - bg_thresh_hi: 0.5 - fg_fraction: 0.25 - fg_thresh: 0.5 - -BBoxHead: - head: TwoFCHead - nms: - keep_top_k: 100 - nms_threshold: 0.5 - score_threshold: 0.05 - -TwoFCHead: - mlp_dim: 1024 - -LearningRate: - base_lr: 0.02 - schedulers: - - !PiecewiseDecay - gamma: 0.1 - milestones: [60000, 80000] - - !LinearWarmup - start_factor: 0.1 - steps: 1000 - -OptimizerBuilder: - optimizer: - momentum: 0.9 - type: Momentum - regularizer: - factor: 0.0001 - type: L2 - - -_READER_: '../faster_fpn_reader.yml' -TrainReader: - # batch size per device - batch_size: 2 diff --git a/static/configs/dcn/faster_rcnn_dcn_r50_vd_fpn_2x.yml b/static/configs/dcn/faster_rcnn_dcn_r50_vd_fpn_2x.yml deleted file mode 100644 index 39d45b2476e358e3223d029e12e140b6bd1cd692..0000000000000000000000000000000000000000 --- a/static/configs/dcn/faster_rcnn_dcn_r50_vd_fpn_2x.yml +++ /dev/null @@ -1,108 +0,0 @@ -architecture: FasterRCNN -max_iters: 180000 -snapshot_iter: 10000 -use_gpu: true -log_iter: 20 -save_dir: output -pretrain_weights: https://paddle-imagenet-models-name.bj.bcebos.com/ResNet50_vd_pretrained.tar -weights: output/faster_rcnn_dcn_r50_vd_fpn_2x/model_final -metric: COCO -num_classes: 81 - -FasterRCNN: - backbone: ResNet - fpn: FPN - rpn_head: FPNRPNHead - roi_extractor: FPNRoIAlign - bbox_head: BBoxHead - bbox_assigner: BBoxAssigner - -ResNet: - depth: 50 - feature_maps: [2, 3, 4, 5] - freeze_at: 2 - norm_type: bn - variant: d - dcn_v2_stages: [3, 4, 5] - -FPN: - max_level: 6 - min_level: 2 - num_chan: 256 - spatial_scale: [0.03125, 0.0625, 0.125, 0.25] - -FPNRPNHead: - anchor_generator: - anchor_sizes: [32, 64, 128, 256, 512] - aspect_ratios: [0.5, 1.0, 2.0] - stride: [16.0, 16.0] - variance: [1.0, 1.0, 1.0, 1.0] - anchor_start_size: 32 - max_level: 6 - min_level: 2 - num_chan: 256 - rpn_target_assign: - rpn_batch_size_per_im: 256 - rpn_fg_fraction: 0.5 - rpn_negative_overlap: 0.3 - rpn_positive_overlap: 0.7 - rpn_straddle_thresh: 0.0 - train_proposal: - min_size: 0.0 - nms_thresh: 
0.7 - post_nms_top_n: 2000 - pre_nms_top_n: 2000 - test_proposal: - min_size: 0.0 - nms_thresh: 0.7 - post_nms_top_n: 1000 - pre_nms_top_n: 1000 - -FPNRoIAlign: - canconical_level: 4 - canonical_size: 224 - max_level: 5 - min_level: 2 - box_resolution: 7 - sampling_ratio: 2 - -BBoxAssigner: - batch_size_per_im: 512 - bbox_reg_weights: [0.1, 0.1, 0.2, 0.2] - bg_thresh_hi: 0.5 - bg_thresh_lo: 0.0 - fg_fraction: 0.25 - fg_thresh: 0.5 - -BBoxHead: - head: TwoFCHead - nms: - keep_top_k: 100 - nms_threshold: 0.5 - score_threshold: 0.05 - -TwoFCHead: - mlp_dim: 1024 - -LearningRate: - base_lr: 0.02 - schedulers: - - !PiecewiseDecay - gamma: 0.1 - milestones: [120000, 160000] - - !LinearWarmup - start_factor: 0.1 - steps: 1000 - -OptimizerBuilder: - optimizer: - momentum: 0.9 - type: Momentum - regularizer: - factor: 0.0001 - type: L2 - -_READER_: '../faster_fpn_reader.yml' -TrainReader: - # batch size per device - batch_size: 2 diff --git a/static/configs/dcn/faster_rcnn_dcn_x101_vd_64x4d_fpn_1x.yml b/static/configs/dcn/faster_rcnn_dcn_x101_vd_64x4d_fpn_1x.yml deleted file mode 100644 index 4f537a05d7b4e5a94390a3874f4a50121c33ba46..0000000000000000000000000000000000000000 --- a/static/configs/dcn/faster_rcnn_dcn_x101_vd_64x4d_fpn_1x.yml +++ /dev/null @@ -1,108 +0,0 @@ -architecture: FasterRCNN -max_iters: 180000 -snapshot_iter: 10000 -use_gpu: true -log_iter: 20 -save_dir: output -pretrain_weights: https://paddle-imagenet-models-name.bj.bcebos.com/ResNeXt101_vd_64x4d_pretrained.tar -weights: output/faster_rcnn_dcn_x101_vd_64x4d_fpn_1x/model_final -metric: COCO -num_classes: 81 - -FasterRCNN: - backbone: ResNeXt - fpn: FPN - rpn_head: FPNRPNHead - roi_extractor: FPNRoIAlign - bbox_head: BBoxHead - bbox_assigner: BBoxAssigner - -ResNeXt: - depth: 101 - feature_maps: [2, 3, 4, 5] - freeze_at: 2 - group_width: 4 - groups: 64 - norm_type: bn - variant: d - dcn_v2_stages: [3, 4, 5] - -FPN: - max_level: 6 - min_level: 2 - num_chan: 256 - spatial_scale: [0.03125, 0.0625, 0.125, 0.25] - -FPNRPNHead: - anchor_generator: - anchor_sizes: [32, 64, 128, 256, 512] - aspect_ratios: [0.5, 1.0, 2.0] - stride: [16.0, 16.0] - variance: [1.0, 1.0, 1.0, 1.0] - anchor_start_size: 32 - max_level: 6 - min_level: 2 - num_chan: 256 - rpn_target_assign: - rpn_batch_size_per_im: 256 - rpn_fg_fraction: 0.5 - rpn_negative_overlap: 0.3 - rpn_positive_overlap: 0.7 - rpn_straddle_thresh: 0.0 - train_proposal: - min_size: 0.0 - nms_thresh: 0.7 - post_nms_top_n: 2000 - pre_nms_top_n: 2000 - test_proposal: - min_size: 0.0 - nms_thresh: 0.7 - post_nms_top_n: 1000 - pre_nms_top_n: 1000 - -FPNRoIAlign: - canconical_level: 4 - canonical_size: 224 - max_level: 5 - min_level: 2 - box_resolution: 7 - sampling_ratio: 2 - -BBoxAssigner: - batch_size_per_im: 512 - bbox_reg_weights: [0.1, 0.1, 0.2, 0.2] - bg_thresh_hi: 0.5 - bg_thresh_lo: 0.0 - fg_fraction: 0.25 - fg_thresh: 0.5 - -BBoxHead: - head: TwoFCHead - nms: - keep_top_k: 100 - nms_threshold: 0.5 - score_threshold: 0.05 - -TwoFCHead: - mlp_dim: 1024 - -LearningRate: - base_lr: 0.01 - schedulers: - - !PiecewiseDecay - gamma: 0.1 - milestones: [120000, 160000] - - !LinearWarmup - start_factor: 0.1 - steps: 1000 - -OptimizerBuilder: - optimizer: - momentum: 0.9 - type: Momentum - regularizer: - factor: 0.0001 - type: L2 - - -_READER_: '../faster_fpn_reader.yml' diff --git a/static/configs/dcn/mask_rcnn_dcn_r101_vd_fpn_1x.yml b/static/configs/dcn/mask_rcnn_dcn_r101_vd_fpn_1x.yml deleted file mode 100644 index 
caee6d5ef0fee8cb4bb33b24a5b8576f7b53e914..0000000000000000000000000000000000000000 --- a/static/configs/dcn/mask_rcnn_dcn_r101_vd_fpn_1x.yml +++ /dev/null @@ -1,113 +0,0 @@ -architecture: MaskRCNN -max_iters: 180000 -snapshot_iter: 10000 -use_gpu: true -log_iter: 20 -save_dir: output -pretrain_weights: https://paddle-imagenet-models-name.bj.bcebos.com/ResNet101_vd_pretrained.tar -weights: output/mask_rcnn_dcn_r101_vd_fpn_1x/model_final -metric: COCO -num_classes: 81 - -MaskRCNN: - backbone: ResNet - fpn: FPN - rpn_head: FPNRPNHead - roi_extractor: FPNRoIAlign - bbox_head: BBoxHead - bbox_assigner: BBoxAssigner - -ResNet: - depth: 101 - feature_maps: [2, 3, 4, 5] - freeze_at: 2 - norm_type: bn - variant: d - dcn_v2_stages: [3, 4, 5] - -FPN: - max_level: 6 - min_level: 2 - num_chan: 256 - spatial_scale: [0.03125, 0.0625, 0.125, 0.25] - -FPNRPNHead: - anchor_generator: - aspect_ratios: [0.5, 1.0, 2.0] - variance: [1.0, 1.0, 1.0, 1.0] - anchor_start_size: 32 - max_level: 6 - min_level: 2 - num_chan: 256 - rpn_target_assign: - rpn_batch_size_per_im: 256 - rpn_fg_fraction: 0.5 - rpn_negative_overlap: 0.3 - rpn_positive_overlap: 0.7 - rpn_straddle_thresh: 0.0 - train_proposal: - min_size: 0.0 - nms_thresh: 0.7 - pre_nms_top_n: 2000 - post_nms_top_n: 2000 - test_proposal: - min_size: 0.0 - nms_thresh: 0.7 - pre_nms_top_n: 1000 - post_nms_top_n: 1000 - -FPNRoIAlign: - canconical_level: 4 - canonical_size: 224 - max_level: 5 - min_level: 2 - sampling_ratio: 2 - box_resolution: 7 - mask_resolution: 14 - -MaskHead: - dilation: 1 - conv_dim: 256 - num_convs: 4 - resolution: 28 - -BBoxAssigner: - batch_size_per_im: 512 - bbox_reg_weights: [0.1, 0.1, 0.2, 0.2] - bg_thresh_hi: 0.5 - bg_thresh_lo: 0.0 - fg_fraction: 0.25 - fg_thresh: 0.5 - -MaskAssigner: - resolution: 28 - -BBoxHead: - head: TwoFCHead - nms: - keep_top_k: 100 - nms_threshold: 0.5 - score_threshold: 0.05 - -TwoFCHead: - mlp_dim: 1024 - -LearningRate: - base_lr: 0.01 - schedulers: - - !PiecewiseDecay - gamma: 0.1 - milestones: [120000, 160000] - - !LinearWarmup - start_factor: 0.1 - steps: 1000 - -OptimizerBuilder: - optimizer: - momentum: 0.9 - type: Momentum - regularizer: - factor: 0.0001 - type: L2 - -_READER_: '../mask_fpn_reader.yml' diff --git a/static/configs/dcn/mask_rcnn_dcn_r50_fpn_1x.yml b/static/configs/dcn/mask_rcnn_dcn_r50_fpn_1x.yml deleted file mode 100644 index d952967345bd6fb240aad84a51f2e545b61b9721..0000000000000000000000000000000000000000 --- a/static/configs/dcn/mask_rcnn_dcn_r50_fpn_1x.yml +++ /dev/null @@ -1,113 +0,0 @@ -architecture: MaskRCNN -use_gpu: true -max_iters: 180000 -snapshot_iter: 10000 -log_iter: 20 -save_dir: output -pretrain_weights: https://paddle-imagenet-models-name.bj.bcebos.com/ResNet50_cos_pretrained.tar -metric: COCO -weights: output/mask_rcnn_dcn_r50_fpn_1x/model_final -num_classes: 81 - -MaskRCNN: - backbone: ResNet - fpn: FPN - rpn_head: FPNRPNHead - roi_extractor: FPNRoIAlign - bbox_head: BBoxHead - bbox_assigner: BBoxAssigner - -ResNet: - depth: 50 - feature_maps: [2, 3, 4, 5] - freeze_at: 2 - norm_type: bn - dcn_v2_stages: [3, 4, 5] - -FPN: - max_level: 6 - min_level: 2 - num_chan: 256 - spatial_scale: [0.03125, 0.0625, 0.125, 0.25] - -FPNRPNHead: - anchor_generator: - aspect_ratios: [0.5, 1.0, 2.0] - variance: [1.0, 1.0, 1.0, 1.0] - anchor_start_size: 32 - max_level: 6 - min_level: 2 - num_chan: 256 - rpn_target_assign: - rpn_batch_size_per_im: 256 - rpn_fg_fraction: 0.5 - rpn_negative_overlap: 0.3 - rpn_positive_overlap: 0.7 - rpn_straddle_thresh: 0.0 - train_proposal: - min_size: 0.0 - 
nms_thresh: 0.7 - pre_nms_top_n: 2000 - post_nms_top_n: 2000 - test_proposal: - min_size: 0.0 - nms_thresh: 0.7 - pre_nms_top_n: 1000 - post_nms_top_n: 1000 - -FPNRoIAlign: - canconical_level: 4 - canonical_size: 224 - max_level: 5 - min_level: 2 - sampling_ratio: 2 - box_resolution: 7 - mask_resolution: 14 - -MaskHead: - dilation: 1 - conv_dim: 256 - num_convs: 4 - resolution: 28 - -BBoxAssigner: - batch_size_per_im: 512 - bbox_reg_weights: [0.1, 0.1, 0.2, 0.2] - bg_thresh_hi: 0.5 - bg_thresh_lo: 0.0 - fg_fraction: 0.25 - fg_thresh: 0.5 - -MaskAssigner: - resolution: 28 - -BBoxHead: - head: TwoFCHead - nms: - keep_top_k: 100 - nms_threshold: 0.5 - score_threshold: 0.05 - -TwoFCHead: - mlp_dim: 1024 - -LearningRate: - base_lr: 0.01 - schedulers: - - !PiecewiseDecay - gamma: 0.1 - milestones: [120000, 160000] - - !LinearWarmup - start_factor: 0.1 - steps: 1000 - -OptimizerBuilder: - optimizer: - momentum: 0.9 - type: Momentum - regularizer: - factor: 0.0001 - type: L2 - - -_READER_: '../mask_fpn_reader.yml' diff --git a/static/configs/dcn/mask_rcnn_dcn_r50_vd_fpn_2x.yml b/static/configs/dcn/mask_rcnn_dcn_r50_vd_fpn_2x.yml deleted file mode 100644 index 4169c12d20308dc7994863035ba3130e38d20d18..0000000000000000000000000000000000000000 --- a/static/configs/dcn/mask_rcnn_dcn_r50_vd_fpn_2x.yml +++ /dev/null @@ -1,113 +0,0 @@ -architecture: MaskRCNN -use_gpu: true -max_iters: 360000 -snapshot_iter: 10000 -log_iter: 20 -save_dir: output -pretrain_weights: https://paddle-imagenet-models-name.bj.bcebos.com/ResNet50_vd_pretrained.tar -metric: COCO -weights: output/mask_rcnn_dcn_r50_vd_fpn_2x/model_final -num_classes: 81 - -MaskRCNN: - backbone: ResNet - fpn: FPN - rpn_head: FPNRPNHead - roi_extractor: FPNRoIAlign - bbox_head: BBoxHead - bbox_assigner: BBoxAssigner - -ResNet: - depth: 50 - feature_maps: [2, 3, 4, 5] - freeze_at: 2 - norm_type: bn - variant: d - dcn_v2_stages: [3, 4, 5] - -FPN: - max_level: 6 - min_level: 2 - num_chan: 256 - spatial_scale: [0.03125, 0.0625, 0.125, 0.25] - -FPNRPNHead: - anchor_generator: - aspect_ratios: [0.5, 1.0, 2.0] - variance: [1.0, 1.0, 1.0, 1.0] - anchor_start_size: 32 - max_level: 6 - min_level: 2 - num_chan: 256 - rpn_target_assign: - rpn_batch_size_per_im: 256 - rpn_fg_fraction: 0.5 - rpn_negative_overlap: 0.3 - rpn_positive_overlap: 0.7 - rpn_straddle_thresh: 0.0 - train_proposal: - min_size: 0.0 - nms_thresh: 0.7 - pre_nms_top_n: 2000 - post_nms_top_n: 2000 - test_proposal: - min_size: 0.0 - nms_thresh: 0.7 - pre_nms_top_n: 1000 - post_nms_top_n: 1000 - -FPNRoIAlign: - canconical_level: 4 - canonical_size: 224 - max_level: 5 - min_level: 2 - box_resolution: 7 - sampling_ratio: 2 - mask_resolution: 14 - -MaskHead: - dilation: 1 - conv_dim: 256 - num_convs: 4 - resolution: 28 - -BBoxAssigner: - batch_size_per_im: 512 - bbox_reg_weights: [0.1, 0.1, 0.2, 0.2] - bg_thresh_hi: 0.5 - bg_thresh_lo: 0.0 - fg_fraction: 0.25 - fg_thresh: 0.5 - -MaskAssigner: - resolution: 28 - -BBoxHead: - head: TwoFCHead - nms: - keep_top_k: 100 - nms_threshold: 0.5 - score_threshold: 0.05 - -TwoFCHead: - mlp_dim: 1024 - -LearningRate: - base_lr: 0.01 - schedulers: - - !PiecewiseDecay - gamma: 0.1 - milestones: [240000, 320000] - - !LinearWarmup - start_factor: 0.1 - steps: 1000 - -OptimizerBuilder: - optimizer: - momentum: 0.9 - type: Momentum - regularizer: - factor: 0.0001 - type: L2 - -_READER_: '../mask_fpn_reader.yml' diff --git a/static/configs/dcn/mask_rcnn_dcn_x101_vd_64x4d_fpn_1x.yml b/static/configs/dcn/mask_rcnn_dcn_x101_vd_64x4d_fpn_1x.yml deleted file mode 100644 index 
d5a665cafcab391a4661db31090e29452b3546ec..0000000000000000000000000000000000000000 --- a/static/configs/dcn/mask_rcnn_dcn_x101_vd_64x4d_fpn_1x.yml +++ /dev/null @@ -1,115 +0,0 @@ -architecture: MaskRCNN -max_iters: 180000 -snapshot_iter: 10000 -use_gpu: true -log_iter: 20 -save_dir: output -pretrain_weights: https://paddle-imagenet-models-name.bj.bcebos.com/ResNeXt101_vd_64x4d_pretrained.tar -weights: output/mask_rcnn_dcn_x101_vd_64x4d_fpn_1x/model_final -metric: COCO -num_classes: 81 - -MaskRCNN: - backbone: ResNeXt - fpn: FPN - rpn_head: FPNRPNHead - roi_extractor: FPNRoIAlign - bbox_head: BBoxHead - bbox_assigner: BBoxAssigner - -ResNeXt: - depth: 101 - feature_maps: [2, 3, 4, 5] - freeze_at: 2 - group_width: 4 - groups: 64 - norm_type: bn - variant: d - dcn_v2_stages: [3, 4, 5] - -FPN: - max_level: 6 - min_level: 2 - num_chan: 256 - spatial_scale: [0.03125, 0.0625, 0.125, 0.25] - -FPNRPNHead: - anchor_generator: - aspect_ratios: [0.5, 1.0, 2.0] - variance: [1.0, 1.0, 1.0, 1.0] - anchor_start_size: 32 - max_level: 6 - min_level: 2 - num_chan: 256 - rpn_target_assign: - rpn_batch_size_per_im: 256 - rpn_fg_fraction: 0.5 - rpn_negative_overlap: 0.3 - rpn_positive_overlap: 0.7 - rpn_straddle_thresh: 0.0 - train_proposal: - min_size: 0.0 - nms_thresh: 0.7 - pre_nms_top_n: 2000 - post_nms_top_n: 2000 - test_proposal: - min_size: 0.0 - nms_thresh: 0.7 - pre_nms_top_n: 1000 - post_nms_top_n: 1000 - -FPNRoIAlign: - canconical_level: 4 - canonical_size: 224 - max_level: 5 - min_level: 2 - sampling_ratio: 2 - box_resolution: 7 - mask_resolution: 14 - -MaskHead: - dilation: 1 - conv_dim: 256 - num_convs: 4 - resolution: 28 - -BBoxAssigner: - batch_size_per_im: 512 - bbox_reg_weights: [0.1, 0.1, 0.2, 0.2] - bg_thresh_hi: 0.5 - bg_thresh_lo: 0.0 - fg_fraction: 0.25 - fg_thresh: 0.5 - -MaskAssigner: - resolution: 28 - -BBoxHead: - head: TwoFCHead - nms: - keep_top_k: 100 - nms_threshold: 0.5 - score_threshold: 0.05 - -TwoFCHead: - mlp_dim: 1024 - -LearningRate: - base_lr: 0.01 - schedulers: - - !PiecewiseDecay - gamma: 0.1 - milestones: [120000, 160000] - - !LinearWarmup - start_factor: 0.1 - steps: 1000 - -OptimizerBuilder: - optimizer: - momentum: 0.9 - type: Momentum - regularizer: - factor: 0.0001 - type: L2 - -_READER_: '../mask_fpn_reader.yml' diff --git a/static/configs/dcn/yolov3_enhance_reader.yml b/static/configs/dcn/yolov3_enhance_reader.yml deleted file mode 100644 index 0cb379d2542587d04fee3cd68d42adf22ec18031..0000000000000000000000000000000000000000 --- a/static/configs/dcn/yolov3_enhance_reader.yml +++ /dev/null @@ -1,106 +0,0 @@ -TrainReader: - inputs_def: - fields: ['image', 'gt_bbox', 'gt_class', 'gt_score'] - num_max_boxes: 50 - use_fine_grained_loss: true - dataset: - !COCODataSet - image_dir: train2017 - anno_path: annotations/instances_train2017.json - dataset_dir: dataset/coco - with_background: false - sample_transforms: - - !DecodeImage - to_rgb: True - - !RandomCrop {} - - !RandomFlipImage - is_normalized: false - - !NormalizeBox {} - - !PadBox - num_max_boxes: 50 - - !BboxXYXY2XYWH {} - batch_transforms: - - !RandomShape - sizes: [320, 352, 384, 416, 448, 480, 512, 544, 576, 608] - random_inter: True - - !NormalizeImage - mean: [0.485, 0.456, 0.406] - std: [0.229, 0.224, 0.225] - is_scale: False - is_channel_first: false - - !Permute - to_bgr: false - channel_first: True - # Gt2YoloTarget is only used when use_fine_grained_loss set as true, - # this operator will be deleted automatically if use_fine_grained_loss - # is set as false - - !Gt2YoloTarget - anchor_masks: [[6, 7, 
8], [3, 4, 5], [0, 1, 2]] - anchors: [[10, 13], [16, 30], [33, 23], - [30, 61], [62, 45], [59, 119], - [116, 90], [156, 198], [373, 326]] - downsample_ratios: [32, 16, 8] - batch_size: 8 - shuffle: true - drop_last: true - worker_num: 8 - bufsize: 16 - use_process: true - -EvalReader: - inputs_def: - image_shape: [3, 608, 608] - fields: ['image', 'im_size', 'im_id'] - num_max_boxes: 50 - dataset: - !COCODataSet - dataset_dir: dataset/coco - anno_path: annotations/instances_val2017.json - image_dir: val2017 - with_background: false - sample_transforms: - - !DecodeImage - to_rgb: True - with_mixup: false - - !ResizeImage - interp: 2 - target_size: 608 - - !NormalizeImage - mean: [0.485, 0.456, 0.406] - std: [0.229, 0.224, 0.225] - is_scale: False - is_channel_first: false - - !PadBox - num_max_boxes: 50 - - !Permute - to_bgr: false - channel_first: True - batch_size: 8 - drop_empty: false - worker_num: 8 - bufsize: 16 - -TestReader: - inputs_def: - image_shape: [3, 608, 608] - fields: ['image', 'im_size', 'im_id'] - dataset: - !ImageFolder - anno_path: annotations/instances_val2017.json - with_background: false - sample_transforms: - - !DecodeImage - to_rgb: True - with_mixup: false - - !ResizeImage - interp: 2 - target_size: 608 - - !NormalizeImage - mean: [0.485, 0.456, 0.406] - std: [0.229, 0.224, 0.225] - is_scale: False - is_channel_first: false - - !Permute - to_bgr: false - channel_first: True - batch_size: 1 diff --git a/static/configs/dcn/yolov3_r50vd_dcn.yml b/static/configs/dcn/yolov3_r50vd_dcn.yml deleted file mode 100755 index cdf8c23adb9fde7158423da6be0c5a37de5cd832..0000000000000000000000000000000000000000 --- a/static/configs/dcn/yolov3_r50vd_dcn.yml +++ /dev/null @@ -1,66 +0,0 @@ -architecture: YOLOv3 -use_gpu: true -max_iters: 500000 -log_iter: 20 -save_dir: output -snapshot_iter: 20000 -metric: COCO -pretrain_weights: https://paddle-imagenet-models-name.bj.bcebos.com/ResNet50_vd_pretrained.tar -weights: output/yolov3_r50vd_dcn/model_final -num_classes: 80 -use_fine_grained_loss: false - -YOLOv3: - backbone: ResNet - yolo_head: YOLOv3Head - -ResNet: - norm_type: sync_bn - freeze_at: 0 - freeze_norm: false - norm_decay: 0. - depth: 50 - feature_maps: [3, 4, 5] - variant: d - dcn_v2_stages: [5] - -YOLOv3Head: - anchor_masks: [[6, 7, 8], [3, 4, 5], [0, 1, 2]] - anchors: [[10, 13], [16, 30], [33, 23], - [30, 61], [62, 45], [59, 119], - [116, 90], [156, 198], [373, 326]] - norm_decay: 0. - yolo_loss: YOLOv3Loss - nms: - background_label: -1 - keep_top_k: 100 - nms_threshold: 0.45 - nms_top_k: 1000 - normalized: false - score_threshold: 0.01 - -YOLOv3Loss: - ignore_thresh: 0.7 - label_smooth: false - -LearningRate: - base_lr: 0.001 - schedulers: - - !PiecewiseDecay - gamma: 0.1 - milestones: - - 400000 - - 450000 - - !LinearWarmup - start_factor: 0. 
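# Note: with start_factor 0. the warmup ramps the learning rate linearly
# from 0 up to base_lr (0.001 here) over the first 4000 steps.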
- steps: 4000 - -OptimizerBuilder: - optimizer: - momentum: 0.9 - type: Momentum - regularizer: - factor: 0.0005 - type: L2 - -_READER_: '../yolov3_reader.yml' diff --git a/static/configs/dcn/yolov3_r50vd_dcn_db_iouaware_obj365_pretrained_coco.yml b/static/configs/dcn/yolov3_r50vd_dcn_db_iouaware_obj365_pretrained_coco.yml deleted file mode 100755 index 287442f8362c8412fbf8652b05e9151152acf435..0000000000000000000000000000000000000000 --- a/static/configs/dcn/yolov3_r50vd_dcn_db_iouaware_obj365_pretrained_coco.yml +++ /dev/null @@ -1,83 +0,0 @@ -architecture: YOLOv3 -use_gpu: true -max_iters: 85000 -log_iter: 1 -save_dir: output -snapshot_iter: 10000 -metric: COCO -pretrain_weights: https://paddlemodels.bj.bcebos.com/object_detection/ResNet50_vd_dcn_db_obj365_pretrained.tar -weights: output/yolov3_r50vd_dcn_db_iouaware_obj365_pretrained_coco/model_final -num_classes: 80 -use_fine_grained_loss: true - -YOLOv3: - backbone: ResNet - yolo_head: YOLOv3Head - use_fine_grained_loss: true - -ResNet: - norm_type: sync_bn - freeze_at: 0 - freeze_norm: false - norm_decay: 0. - depth: 50 - feature_maps: [3, 4, 5] - variant: d - dcn_v2_stages: [5] - -YOLOv3Head: - anchor_masks: [[6, 7, 8], [3, 4, 5], [0, 1, 2]] - anchors: [[10, 13], [16, 30], [33, 23], - [30, 61], [62, 45], [59, 119], - [116, 90], [156, 198], [373, 326]] - norm_decay: 0. - iou_aware: true - iou_aware_factor: 0.4 - yolo_loss: YOLOv3Loss - nms: - background_label: -1 - keep_top_k: 100 - nms_threshold: 0.45 - nms_top_k: 1000 - normalized: false - score_threshold: 0.01 - drop_block: true - -YOLOv3Loss: - ignore_thresh: 0.7 - label_smooth: false - use_fine_grained_loss: true - iou_loss: IouLoss - iou_aware_loss: IouAwareLoss - -IouLoss: - loss_weight: 2.5 - max_height: 608 - max_width: 608 - -IouAwareLoss: - loss_weight: 1.0 - max_height: 608 - max_width: 608 - -LearningRate: - base_lr: 0.001 - schedulers: - - !PiecewiseDecay - gamma: 0.1 - milestones: - - 55000 - - 75000 - - !LinearWarmup - start_factor: 0. - steps: 4000 - -OptimizerBuilder: - optimizer: - momentum: 0.9 - type: Momentum - regularizer: - factor: 0.0005 - type: L2 - -_READER_: 'yolov3_enhance_reader.yml' diff --git a/static/configs/dcn/yolov3_r50vd_dcn_db_iouloss_obj365_pretrained_coco.yml b/static/configs/dcn/yolov3_r50vd_dcn_db_iouloss_obj365_pretrained_coco.yml deleted file mode 100755 index 26e434a22cd2ed859f64c77691d3338de3758467..0000000000000000000000000000000000000000 --- a/static/configs/dcn/yolov3_r50vd_dcn_db_iouloss_obj365_pretrained_coco.yml +++ /dev/null @@ -1,75 +0,0 @@ -architecture: YOLOv3 -use_gpu: true -max_iters: 85000 -log_iter: 20 -save_dir: output -snapshot_iter: 10000 -metric: COCO -pretrain_weights: https://paddlemodels.bj.bcebos.com/object_detection/ResNet50_vd_dcn_db_obj365_pretrained.tar -weights: output/yolov3_r50vd_dcn_db_iouloss_obj365_pretrained_coco/model_final -num_classes: 80 -use_fine_grained_loss: true - -YOLOv3: - backbone: ResNet - yolo_head: YOLOv3Head - use_fine_grained_loss: true - -ResNet: - norm_type: sync_bn - freeze_at: 0 - freeze_norm: false - norm_decay: 0. - depth: 50 - feature_maps: [3, 4, 5] - variant: d - dcn_v2_stages: [5] - -YOLOv3Head: - anchor_masks: [[6, 7, 8], [3, 4, 5], [0, 1, 2]] - anchors: [[10, 13], [16, 30], [33, 23], - [30, 61], [62, 45], [59, 119], - [116, 90], [156, 198], [373, 326]] - norm_decay: 0. 
- yolo_loss: YOLOv3Loss - nms: - background_label: -1 - keep_top_k: 100 - nms_threshold: 0.45 - nms_top_k: 1000 - normalized: false - score_threshold: 0.01 - drop_block: true - -YOLOv3Loss: - ignore_thresh: 0.7 - label_smooth: false - use_fine_grained_loss: true - iou_loss: IouLoss - -IouLoss: - loss_weight: 2.5 - max_height: 608 - max_width: 608 - -LearningRate: - base_lr: 0.001 - schedulers: - - !PiecewiseDecay - gamma: 0.1 - milestones: - - 55000 - - 75000 - - !LinearWarmup - start_factor: 0. - steps: 4000 - -OptimizerBuilder: - optimizer: - momentum: 0.9 - type: Momentum - regularizer: - factor: 0.0005 - type: L2 - -_READER_: 'yolov3_enhance_reader.yml' diff --git a/static/configs/dcn/yolov3_r50vd_dcn_db_obj365_pretrained_coco.yml b/static/configs/dcn/yolov3_r50vd_dcn_db_obj365_pretrained_coco.yml deleted file mode 100755 index c274b64ab08aa2600f50642d64f525fda7e0b481..0000000000000000000000000000000000000000 --- a/static/configs/dcn/yolov3_r50vd_dcn_db_obj365_pretrained_coco.yml +++ /dev/null @@ -1,70 +0,0 @@ -architecture: YOLOv3 -use_gpu: true -max_iters: 85000 -log_iter: 20 -save_dir: output -snapshot_iter: 10000 -metric: COCO -pretrain_weights: https://paddlemodels.bj.bcebos.com/object_detection/ResNet50_vd_dcn_db_obj365_pretrained.tar -weights: output/yolov3_r50vd_dcn_db_obj365_pretrained_coco/model_final -num_classes: 80 -use_fine_grained_loss: true - -YOLOv3: - backbone: ResNet - yolo_head: YOLOv3Head - use_fine_grained_loss: true - -ResNet: - norm_type: sync_bn - freeze_at: 0 - freeze_norm: false - norm_decay: 0. - depth: 50 - feature_maps: [3, 4, 5] - variant: d - dcn_v2_stages: [5] - -YOLOv3Head: - anchor_masks: [[6, 7, 8], [3, 4, 5], [0, 1, 2]] - anchors: [[10, 13], [16, 30], [33, 23], - [30, 61], [62, 45], [59, 119], - [116, 90], [156, 198], [373, 326]] - norm_decay: 0. - yolo_loss: YOLOv3Loss - nms: - background_label: -1 - keep_top_k: 100 - nms_threshold: 0.45 - nms_top_k: 1000 - normalized: false - score_threshold: 0.01 - drop_block: true - keep_prob: 0.94 - -YOLOv3Loss: - ignore_thresh: 0.7 - label_smooth: false - use_fine_grained_loss: true - -LearningRate: - base_lr: 0.001 - schedulers: - - !PiecewiseDecay - gamma: 0.1 - milestones: - - 55000 - - 75000 - - !LinearWarmup - start_factor: 0. - steps: 4000 - -OptimizerBuilder: - optimizer: - momentum: 0.9 - type: Momentum - regularizer: - factor: 0.0005 - type: L2 - -_READER_: 'yolov3_enhance_reader.yml' diff --git a/static/configs/dcn/yolov3_r50vd_dcn_obj365_pretrained_coco.yml b/static/configs/dcn/yolov3_r50vd_dcn_obj365_pretrained_coco.yml deleted file mode 100755 index 31d3fcd01c4c4fe27cfd2cb2b923a2d586115d6e..0000000000000000000000000000000000000000 --- a/static/configs/dcn/yolov3_r50vd_dcn_obj365_pretrained_coco.yml +++ /dev/null @@ -1,68 +0,0 @@ -architecture: YOLOv3 -use_gpu: true -max_iters: 85000 -log_iter: 20 -save_dir: output -snapshot_iter: 10000 -metric: COCO -pretrain_weights: https://paddlemodels.bj.bcebos.com/object_detection/ResNet50_vd_dcn_db_obj365_pretrained.tar -weights: output/yolov3_r50vd_dcn_db_obj365_pretrained_coco/model_final -num_classes: 80 -use_fine_grained_loss: true - -YOLOv3: - backbone: ResNet - yolo_head: YOLOv3Head - use_fine_grained_loss: true - -ResNet: - norm_type: sync_bn - freeze_at: 0 - freeze_norm: false - norm_decay: 0. 
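# Note: norm_decay 0. exempts the norm layers' scale/offset parameters from
# weight decay, and sync_bn averages batch-norm statistics across GPUs.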
- depth: 50 - feature_maps: [3, 4, 5] - variant: d - dcn_v2_stages: [5] - -YOLOv3Head: - anchor_masks: [[6, 7, 8], [3, 4, 5], [0, 1, 2]] - anchors: [[10, 13], [16, 30], [33, 23], - [30, 61], [62, 45], [59, 119], - [116, 90], [156, 198], [373, 326]] - norm_decay: 0. - yolo_loss: YOLOv3Loss - nms: - background_label: -1 - keep_top_k: 100 - nms_threshold: 0.45 - nms_top_k: 1000 - normalized: false - score_threshold: 0.01 - -YOLOv3Loss: - ignore_thresh: 0.7 - label_smooth: false - use_fine_grained_loss: true - -LearningRate: - base_lr: 0.001 - schedulers: - - !PiecewiseDecay - gamma: 0.1 - milestones: - - 55000 - - 75000 - - !LinearWarmup - start_factor: 0. - steps: 4000 - -OptimizerBuilder: - optimizer: - momentum: 0.9 - type: Momentum - regularizer: - factor: 0.0005 - type: L2 - -_READER_: 'yolov3_enhance_reader.yml' diff --git a/static/configs/efficientdet_d0.yml b/static/configs/efficientdet_d0.yml deleted file mode 100644 index 10e7625e24e5263d34743da41820c100ee8dea86..0000000000000000000000000000000000000000 --- a/static/configs/efficientdet_d0.yml +++ /dev/null @@ -1,157 +0,0 @@ -architecture: EfficientDet -max_iters: 281250 -use_gpu: true -pretrain_weights: https://paddle-imagenet-models-name.bj.bcebos.com/EfficientNetB0_pretrained.tar -weights: output/efficientdet_d0/model_final -log_iter: 20 -snapshot_iter: 10000 -metric: COCO -save_dir: output -num_classes: 81 -use_ema: true -ema_decay: 0.9998 - -EfficientDet: - backbone: EfficientNet - fpn: BiFPN - efficient_head: EfficientHead - anchor_grid: AnchorGrid - box_loss_weight: 50. - -EfficientNet: - # norm_type: sync_bn - # TODO - norm_type: bn - scale: b0 - use_se: true - -BiFPN: - num_chan: 64 - repeat: 3 - levels: 5 - -EfficientHead: - repeat: 3 - num_chan: 64 - prior_prob: 0.01 - num_anchors: 9 - gamma: 1.5 - alpha: 0.25 - delta: 0.1 - output_decoder: - score_thresh: 0.05 # originally 0. - nms_thresh: 0.5 - pre_nms_top_n: 1000 # originally 5000 - detections_per_im: 100 - nms_eta: 1.0 - -AnchorGrid: - anchor_base_scale: 4 - num_scales: 3 - aspect_ratios: [[1, 1], [1.4, 0.7], [0.7, 1.4]] - -LearningRate: - base_lr: 0.16 - schedulers: - - !CosineDecayWithSkip - total_steps: 281250 - skip_steps: 938 - - !LinearWarmup - start_factor: 0.05 - steps: 938 - -OptimizerBuilder: - clip_grad_by_norm: 10. - optimizer: - momentum: 0.9 - type: Momentum - regularizer: - factor: 0.00004 - type: L2 - -TrainReader: - inputs_def: - fields: ['image', 'im_id', 'fg_num', 'gt_label', 'gt_target'] - dataset: - !COCODataSet - image_dir: train2017 - anno_path: annotations/instances_train2017.json - dataset_dir: dataset/coco - sample_transforms: - - !DecodeImage - to_rgb: true - - !RandomFlipImage - prob: 0.5 - - !NormalizeImage - is_channel_first: false - is_scale: true - mean: [0.485,0.456,0.406] - std: [0.229, 0.224,0.225] - - !RandomScaledCrop - target_dim: 512 - scale_range: [.1, 2.] 
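# Note (roughly): RandomScaledCrop samples a scale factor in [0.1, 2.0],
# resizes accordingly, then random-crops or pads to the 512 x 512 canvas
# given by target_dim, the usual EfficientDet train-time augmentation.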
- interp: 1 - - !Permute - to_bgr: false - channel_first: true - - !TargetAssign - image_size: 512 - batch_size: 16 - shuffle: true - worker_num: 32 - bufsize: 16 - use_process: true - drop_empty: false - -EvalReader: - inputs_def: - fields: ['image', 'im_info', 'im_id'] - dataset: - !COCODataSet - image_dir: val2017 - anno_path: annotations/instances_val2017.json - dataset_dir: dataset/coco - sample_transforms: - - !DecodeImage - to_rgb: true - with_mixup: false - - !NormalizeImage - is_channel_first: false - is_scale: true - mean: [0.485,0.456,0.406] - std: [0.229, 0.224,0.225] - - !ResizeAndPad - target_dim: 512 - interp: 1 - - !Permute - channel_first: true - to_bgr: false - drop_empty: false - batch_size: 16 - shuffle: false - worker_num: 2 - -TestReader: - inputs_def: - fields: ['image', 'im_info', 'im_id'] - image_shape: [3, 512, 512] - dataset: - !ImageFolder - anno_path: annotations/instances_val2017.json - sample_transforms: - - !DecodeImage - to_rgb: true - with_mixup: false - - !NormalizeImage - is_channel_first: false - is_scale: true - mean: [0.485,0.456,0.406] - std: [0.229, 0.224,0.225] - - !ResizeAndPad - target_dim: 512 - interp: 1 - - !Permute - channel_first: true - to_bgr: false - batch_size: 16 - shuffle: false diff --git a/static/configs/face_detection/README.md b/static/configs/face_detection/README.md deleted file mode 100644 index 527aa708ce0e9a135e8c88cdc03f717921ab2a55..0000000000000000000000000000000000000000 --- a/static/configs/face_detection/README.md +++ /dev/null @@ -1,2 +0,0 @@ -**For the tutorial, please refer to:** [FACE_DETECTION.md](../../docs/featured_model/FACE_DETECTION.md)
-**For the English document, please refer to:** [FACE_DETECTION_en.md](../../docs/featured_model/FACE_DETECTION_en.md) diff --git a/static/configs/face_detection/blazeface.yml b/static/configs/face_detection/blazeface.yml deleted file mode 100644 index c02b6e3c73980bff057747b3299b8b5552d0db21..0000000000000000000000000000000000000000 --- a/static/configs/face_detection/blazeface.yml +++ /dev/null @@ -1,121 +0,0 @@ -architecture: BlazeFace -max_iters: 320000 -pretrain_weights: -use_gpu: true -snapshot_iter: 10000 -log_iter: 20 -metric: WIDERFACE -save_dir: output -weights: output/blazeface/model_final -# 1(label_class) + 1(background) -num_classes: 2 - -BlazeFace: - backbone: BlazeNet - output_decoder: - keep_top_k: 750 - nms_threshold: 0.3 - nms_top_k: 5000 - score_threshold: 0.01 - min_sizes: [[16.,24.], [32., 48., 64., 80., 96., 128.]] - use_density_prior_box: false - -BlazeNet: - with_extra_blocks: true - lite_edition: false - -LearningRate: - base_lr: 0.001 - schedulers: - - !PiecewiseDecay - gamma: 0.1 - milestones: [240000, 300000] - -OptimizerBuilder: - optimizer: - momentum: 0.0 - type: RMSPropOptimizer - regularizer: - factor: 0.0005 - type: L2 - -TrainReader: - inputs_def: - image_shape: [3, 640, 640] - fields: ['image', 'gt_bbox', 'gt_class'] - dataset: - !WIDERFaceDataSet - dataset_dir: dataset/wider_face - anno_path: wider_face_split/wider_face_train_bbx_gt.txt - image_dir: WIDER_train/images - sample_transforms: - - !DecodeImage - to_rgb: true - - !NormalizeBox {} - - !RandomDistort - brightness_lower: 0.875 - brightness_upper: 1.125 - is_order: true - - !ExpandImage - max_ratio: 4 - prob: 0.5 - - !CropImageWithDataAchorSampling - anchor_sampler: - - [1, 10, 1.0, 1.0, 1.0, 1.0, 0.0, 0.0, 0.2, 0.0] - batch_sampler: - - [1, 50, 1.0, 1.0, 1.0, 1.0, 0.0, 0.0, 1.0, 0.0] - - [1, 50, 0.3, 1.0, 1.0, 1.0, 0.0, 0.0, 1.0, 0.0] - - [1, 50, 0.3, 1.0, 1.0, 1.0, 0.0, 0.0, 1.0, 0.0] - - [1, 50, 0.3, 1.0, 1.0, 1.0, 0.0, 0.0, 1.0, 0.0] - - [1, 50, 0.3, 1.0, 1.0, 1.0, 0.0, 0.0, 1.0, 0.0] - target_size: 640 - - !RandomInterpImage - target_size: 640 - - !RandomFlipImage - is_normalized: true - - !Permute {} - - !NormalizeImage - is_scale: false - mean: [104, 117, 123] - std: [127.502231, 127.502231, 127.502231] - batch_size: 8 - use_process: true - worker_num: 8 - shuffle: true - -EvalReader: - inputs_def: - fields: ['image', 'im_id'] - dataset: - !WIDERFaceDataSet - dataset_dir: dataset/wider_face - anno_path: wider_face_split/wider_face_val_bbx_gt.txt - image_dir: WIDER_val/images - sample_transforms: - - !DecodeImage - to_rgb: true - - !NormalizeBox {} - - !NormalizeImage - is_channel_first: false - is_scale: false - mean: [123, 117, 104] - std: [127.502231, 127.502231, 127.502231] - - !Permute {} - batch_size: 1 - -TestReader: - inputs_def: - fields: ['image', 'im_id', 'im_shape'] - dataset: - !ImageFolder - use_default_label: true - sample_transforms: - - !DecodeImage - to_rgb: true - - !NormalizeImage - is_channel_first: false - is_scale: false - mean: [123, 117, 104] - std: [127.502231, 127.502231, 127.502231] - - !Permute {} - batch_size: 1 diff --git a/static/configs/face_detection/blazeface_keypoint.yml b/static/configs/face_detection/blazeface_keypoint.yml deleted file mode 100644 index b20cc5bbb55dfda91ff8d5e77e41adfec4a960c0..0000000000000000000000000000000000000000 --- a/static/configs/face_detection/blazeface_keypoint.yml +++ /dev/null @@ -1,130 +0,0 @@ -architecture: BlazeFace -max_iters: 160000 -pretrain_weights: -use_gpu: true -snapshot_iter: 10000 -log_iter: 20 -metric: WIDERFACE
-save_dir: output -weights: output/blazeface_keypoint/model_final.pdparams -# 1(label_class) + 1(background) -num_classes: 2 -with_lmk: true - -BlazeFace: - backbone: BlazeNet - output_decoder: - keep_top_k: 750 - nms_threshold: 0.3 - nms_top_k: 5000 - score_threshold: 0.01 - min_sizes: [[16.,24.], [32., 48., 64., 80., 96., 128.]] - use_density_prior_box: false - lmk_loss: - overlap_threshold: 0.35 - neg_overlap: 0.35 - -BlazeNet: - with_extra_blocks: true - lite_edition: false - -LearningRate: - base_lr: 0.002 - schedulers: - - !PiecewiseDecay - gamma: 0.1 - milestones: [120000, 150000] - -OptimizerBuilder: - optimizer: - momentum: 0.9 - type: Momentum - regularizer: - factor: 0.0005 - type: L2 - -TrainReader: - inputs_def: - image_shape: [3, 640, 640] - fields: ['image', 'gt_bbox', 'gt_class', 'gt_keypoint', 'keypoint_ignore'] - dataset: - !WIDERFaceDataSet - dataset_dir: dataset/wider_face - anno_path: wider_face_split/wider_face_train_bbx_lmk_gt.txt - image_dir: WIDER_train/images - with_lmk: true - sample_transforms: - - !DecodeImage - to_rgb: true - - !NormalizeBox {} - - !RandomDistort - brightness_lower: 0.875 - brightness_upper: 1.125 - is_order: true - - !ExpandImage - max_ratio: 4 - prob: 0.5 - - !CropImageWithDataAchorSampling - anchor_sampler: - - [1, 10, 1.0, 1.0, 1.0, 1.0, 0.0, 0.0, 0.2, 0.0] - batch_sampler: - - [1, 50, 1.0, 1.0, 1.0, 1.0, 0.0, 0.0, 1.0, 0.0] - - [1, 50, 0.3, 1.0, 1.0, 1.0, 0.0, 0.0, 1.0, 0.0] - - [1, 50, 0.3, 1.0, 1.0, 1.0, 0.0, 0.0, 1.0, 0.0] - - [1, 50, 0.3, 1.0, 1.0, 1.0, 0.0, 0.0, 1.0, 0.0] - - [1, 50, 0.3, 1.0, 1.0, 1.0, 0.0, 0.0, 1.0, 0.0] - target_size: 640 - - !ResizeImage - target_size: 640 - interp: 1 - - !RandomInterpImage - target_size: 640 - - !RandomFlipImage - is_normalized: true - - !Permute {} - - !NormalizeImage - is_scale: false - mean: [104, 117, 123] - std: [127.502231, 127.502231, 127.502231] - batch_size: 16 - use_process: true - worker_num: 8 - shuffle: true - -EvalReader: - inputs_def: - fields: ['image', 'im_id'] - dataset: - !WIDERFaceDataSet - dataset_dir: dataset/wider_face - anno_path: wider_face_split/wider_face_val_bbx_gt.txt - image_dir: WIDER_val/images - sample_transforms: - - !DecodeImage - to_rgb: true - - !NormalizeBox {} - - !Permute {} - - !NormalizeImage - is_scale: false - mean: [104, 117, 123] - std: [127.502231, 127.502231, 127.502231] - batch_size: 1 - -TestReader: - inputs_def: - fields: ['image', 'im_id', 'im_shape'] - dataset: - !ImageFolder - use_default_label: true - sample_transforms: - - !DecodeImage - to_rgb: true - - !ResizeImage - target_size: 640 - interp: 1 - - !Permute {} - - !NormalizeImage - is_scale: false - mean: [104, 117, 123] - std: [127.502231, 127.502231, 127.502231] - batch_size: 1 diff --git a/static/configs/face_detection/blazeface_nas.yml b/static/configs/face_detection/blazeface_nas.yml deleted file mode 100644 index be645502fea725fdaea1d140e21adfbed5e2e4ca..0000000000000000000000000000000000000000 --- a/static/configs/face_detection/blazeface_nas.yml +++ /dev/null @@ -1,123 +0,0 @@ -architecture: BlazeFace -max_iters: 320000 -pretrain_weights: -use_gpu: true -snapshot_iter: 10000 -log_iter: 20 -metric: WIDERFACE -save_dir: output -weights: output/blazeface_nas/model_final -# 1(label_class) + 1(background) -num_classes: 2 - -BlazeFace: - backbone: BlazeNet - output_decoder: - keep_top_k: 750 - nms_threshold: 0.3 - nms_top_k: 5000 - score_threshold: 0.01 - min_sizes: [[16.,24.], [32., 48., 64., 80., 96., 128.]] - use_density_prior_box: false - -BlazeNet: - blaze_filters: [[12, 12], [12, 
12, 2], [12, 12]] - double_blaze_filters: [[12, 16, 24, 2], [24, 12, 24], [24, 16, 72, 2], [72, 12, 72]] - with_extra_blocks: true - lite_edition: false - -LearningRate: - base_lr: 0.001 - schedulers: - - !PiecewiseDecay - gamma: 0.1 - milestones: [240000, 300000] - -OptimizerBuilder: - optimizer: - momentum: 0.0 - type: RMSPropOptimizer - regularizer: - factor: 0.0005 - type: L2 - -TrainReader: - inputs_def: - image_shape: [3, 640, 640] - fields: ['image', 'gt_bbox', 'gt_class'] - dataset: - !WIDERFaceDataSet - dataset_dir: dataset/wider_face - anno_path: wider_face_split/wider_face_train_bbx_gt.txt - image_dir: WIDER_train/images - sample_transforms: - - !DecodeImage - to_rgb: true - - !NormalizeBox {} - - !RandomDistort - brightness_lower: 0.875 - brightness_upper: 1.125 - is_order: true - - !ExpandImage - max_ratio: 4 - prob: 0.5 - - !CropImageWithDataAchorSampling - anchor_sampler: - - [1, 10, 1.0, 1.0, 1.0, 1.0, 0.0, 0.0, 0.2, 0.0] - batch_sampler: - - [1, 50, 1.0, 1.0, 1.0, 1.0, 0.0, 0.0, 1.0, 0.0] - - [1, 50, 0.3, 1.0, 1.0, 1.0, 0.0, 0.0, 1.0, 0.0] - - [1, 50, 0.3, 1.0, 1.0, 1.0, 0.0, 0.0, 1.0, 0.0] - - [1, 50, 0.3, 1.0, 1.0, 1.0, 0.0, 0.0, 1.0, 0.0] - - [1, 50, 0.3, 1.0, 1.0, 1.0, 0.0, 0.0, 1.0, 0.0] - target_size: 640 - - !RandomInterpImage - target_size: 640 - - !RandomFlipImage - is_normalized: true - - !Permute {} - - !NormalizeImage - is_scale: false - mean: [104, 117, 123] - std: [127.502231, 127.502231, 127.502231] - batch_size: 8 - use_process: true - worker_num: 8 - shuffle: true - -EvalReader: - inputs_def: - fields: ['image', 'im_id'] - dataset: - !WIDERFaceDataSet - dataset_dir: dataset/wider_face - anno_path: wider_face_split/wider_face_val_bbx_gt.txt - image_dir: WIDER_val/images - sample_transforms: - - !DecodeImage - to_rgb: true - - !NormalizeBox {} - - !NormalizeImage - is_channel_first: false - is_scale: false - mean: [123, 117, 104] - std: [127.502231, 127.502231, 127.502231] - - !Permute {} - batch_size: 1 - -TestReader: - inputs_def: - fields: ['image', 'im_id', 'im_shape'] - dataset: - !ImageFolder - use_default_label: true - sample_transforms: - - !DecodeImage - to_rgb: true - - !NormalizeImage - is_channel_first: false - is_scale: false - mean: [123, 117, 104] - std: [127.502231, 127.502231, 127.502231] - - !Permute {} - batch_size: 1 diff --git a/static/configs/face_detection/blazeface_nas_v2.yml b/static/configs/face_detection/blazeface_nas_v2.yml deleted file mode 100644 index 1741e530b37e9c53c02742fdd23292d018fcbadc..0000000000000000000000000000000000000000 --- a/static/configs/face_detection/blazeface_nas_v2.yml +++ /dev/null @@ -1,123 +0,0 @@ -architecture: BlazeFace -max_iters: 320000 -pretrain_weights: -use_gpu: true -snapshot_iter: 10000 -log_iter: 20 -metric: WIDERFACE -save_dir: output -weights: output/blazeface_nas_v2/model_final -# 1(label_class) + 1(background) -num_classes: 2 - -BlazeFace: - backbone: BlazeNet - output_decoder: - keep_top_k: 750 - nms_threshold: 0.3 - nms_top_k: 5000 - score_threshold: 0.01 - min_sizes: [[16.,24.], [32., 48., 64., 80., 96., 128.]] - use_density_prior_box: false - -BlazeNet: - blaze_filters: [[12, 12], [12, 12, 2], [12, 12]] - double_blaze_filters: [[12, 16, 32, 2], [32, 32, 32], [32, 16, 72, 2], [72, 24, 72]] - with_extra_blocks: true - lite_edition: false - -LearningRate: - base_lr: 0.001 - schedulers: - - !PiecewiseDecay - gamma: 0.1 - milestones: [240000, 300000] - -OptimizerBuilder: - optimizer: - momentum: 0.0 - type: RMSPropOptimizer - regularizer: - factor: 0.0005 - type: L2 - -TrainReader: - 
inputs_def: - image_shape: [3, 640, 640] - fields: ['image', 'gt_bbox', 'gt_class'] - dataset: - !WIDERFaceDataSet - dataset_dir: dataset/wider_face - anno_path: wider_face_split/wider_face_train_bbx_gt.txt - image_dir: WIDER_train/images - sample_transforms: - - !DecodeImage - to_rgb: true - - !NormalizeBox {} - - !RandomDistort - brightness_lower: 0.875 - brightness_upper: 1.125 - is_order: true - - !ExpandImage - max_ratio: 4 - prob: 0.5 - - !CropImageWithDataAchorSampling - anchor_sampler: - - [1, 10, 1.0, 1.0, 1.0, 1.0, 0.0, 0.0, 0.2, 0.0] - batch_sampler: - - [1, 50, 1.0, 1.0, 1.0, 1.0, 0.0, 0.0, 1.0, 0.0] - - [1, 50, 0.3, 1.0, 1.0, 1.0, 0.0, 0.0, 1.0, 0.0] - - [1, 50, 0.3, 1.0, 1.0, 1.0, 0.0, 0.0, 1.0, 0.0] - - [1, 50, 0.3, 1.0, 1.0, 1.0, 0.0, 0.0, 1.0, 0.0] - - [1, 50, 0.3, 1.0, 1.0, 1.0, 0.0, 0.0, 1.0, 0.0] - target_size: 640 - - !RandomInterpImage - target_size: 640 - - !RandomFlipImage - is_normalized: true - - !Permute {} - - !NormalizeImage - is_scale: false - mean: [104, 117, 123] - std: [127.502231, 127.502231, 127.502231] - batch_size: 8 - use_process: true - worker_num: 8 - shuffle: true - -EvalReader: - inputs_def: - fields: ['image', 'im_id'] - dataset: - !WIDERFaceDataSet - dataset_dir: dataset/wider_face - anno_path: wider_face_split/wider_face_val_bbx_gt.txt - image_dir: WIDER_val/images - sample_transforms: - - !DecodeImage - to_rgb: true - - !NormalizeBox {} - - !NormalizeImage - is_channel_first: false - is_scale: false - mean: [123, 117, 104] - std: [127.502231, 127.502231, 127.502231] - - !Permute {} - batch_size: 1 - -TestReader: - inputs_def: - fields: ['image', 'im_id', 'im_shape'] - dataset: - !ImageFolder - use_default_label: true - sample_transforms: - - !DecodeImage - to_rgb: true - - !NormalizeImage - is_channel_first: false - is_scale: false - mean: [123, 117, 104] - std: [127.502231, 127.502231, 127.502231] - - !Permute {} - batch_size: 1 diff --git a/static/configs/face_detection/faceboxes.yml b/static/configs/face_detection/faceboxes.yml deleted file mode 100644 index 9f6fcfb79191536cd05acfa8a7202f8df6fc10c4..0000000000000000000000000000000000000000 --- a/static/configs/face_detection/faceboxes.yml +++ /dev/null @@ -1,122 +0,0 @@ -architecture: FaceBoxes -pretrain_weights: -use_gpu: true -max_iters: 320000 -snapshot_iter: 10000 -log_iter: 20 -metric: WIDERFACE -save_dir: output -weights: output/faceboxes/model_final -# 1(label_class) + 1(background) -num_classes: 2 - -FaceBoxes: - backbone: FaceBoxNet - densities: [[4, 2, 1], [1], [1]] - fixed_sizes: [[32., 64., 128.], [256.], [512.]] - output_decoder: - keep_top_k: 750 - nms_threshold: 0.3 - nms_top_k: 5000 - score_threshold: 0.01 - -FaceBoxNet: - with_extra_blocks: true - lite_edition: false - -LearningRate: - base_lr: 0.001 - schedulers: - - !PiecewiseDecay - gamma: 0.1 - milestones: [240000, 300000] - -OptimizerBuilder: - optimizer: - momentum: 0.0 - type: RMSPropOptimizer - regularizer: - factor: 0.0005 - type: L2 - -TrainReader: - batch_size: 8 - use_process: True - worker_num: 8 - shuffle: true - inputs_def: - image_shape: [3, 640, 640] - fields: ['image', 'gt_bbox', 'gt_class'] - dataset: - !WIDERFaceDataSet - dataset_dir: dataset/wider_face - anno_path: wider_face_split/wider_face_train_bbx_gt.txt - image_dir: WIDER_train/images - sample_transforms: - - !DecodeImage - to_rgb: true - - !NormalizeBox {} - - !RandomDistort - brightness_lower: 0.875 - brightness_upper: 1.125 - is_order: true - - !ExpandImage - max_ratio: 4 - prob: 0.5 - - !CropImageWithDataAchorSampling - anchor_sampler: - - [1, 
10, 1.0, 1.0, 1.0, 1.0, 0.0, 0.0, 0.2, 0.0] - batch_sampler: - - [1, 50, 1.0, 1.0, 1.0, 1.0, 0.0, 0.0, 1.0, 0.0] - - [1, 50, 0.3, 1.0, 1.0, 1.0, 0.0, 0.0, 1.0, 0.0] - - [1, 50, 0.3, 1.0, 1.0, 1.0, 0.0, 0.0, 1.0, 0.0] - - [1, 50, 0.3, 1.0, 1.0, 1.0, 0.0, 0.0, 1.0, 0.0] - - [1, 50, 0.3, 1.0, 1.0, 1.0, 0.0, 0.0, 1.0, 0.0] - target_size: 640 - - !RandomInterpImage - target_size: 640 - - !RandomFlipImage - is_normalized: true - - !Permute {} - - !NormalizeImage - is_scale: false - mean: [104, 117, 123] - std: [127.502231, 127.502231, 127.502231] - -EvalReader: - batch_size: 1 - use_process: false - inputs_def: - fields: ['image', 'im_id'] - dataset: - !WIDERFaceDataSet - dataset_dir: dataset/wider_face - anno_path: wider_face_split/wider_face_val_bbx_gt.txt - image_dir: WIDER_val/images - sample_transforms: - - !DecodeImage - to_rgb: true - - !NormalizeBox {} - - !NormalizeImage - is_channel_first: false - is_scale: false - mean: [123, 117, 104] - std: [127.502231, 127.502231, 127.502231] - - !Permute {} - -TestReader: - inputs_def: - fields: ['image', 'im_id', 'im_shape'] - dataset: - !ImageFolder - use_default_label: true - sample_transforms: - - !DecodeImage - to_rgb: true - - !NormalizeImage - is_channel_first: false - is_scale: false - mean: [123, 117, 104] - std: [127.502231, 127.502231, 127.502231] - - !Permute {} - batch_size: 1 diff --git a/static/configs/face_detection/faceboxes_lite.yml b/static/configs/face_detection/faceboxes_lite.yml deleted file mode 100644 index ca8d37a5c68eccbb241eb61ec5756e0465306d89..0000000000000000000000000000000000000000 --- a/static/configs/face_detection/faceboxes_lite.yml +++ /dev/null @@ -1,123 +0,0 @@ -architecture: FaceBoxes -pretrain_weights: -use_gpu: true -max_iters: 320000 -snapshot_iter: 10000 -log_iter: 20 -metric: WIDERFACE -save_dir: output -weights: output/faceboxes_lite/model_final -# 1(label_class) + 1(background) -num_classes: 2 - -FaceBoxes: - backbone: FaceBoxNet - densities: [[2, 1, 1], [1, 1]] - fixed_sizes: [[16., 32., 64.], [96., 128.]] - output_decoder: - keep_top_k: 750 - nms_threshold: 0.3 - nms_top_k: 5000 - score_threshold: 0.01 - -FaceBoxNet: - with_extra_blocks: true - lite_edition: true - -LearningRate: - base_lr: 0.001 - schedulers: - - !PiecewiseDecay - gamma: 0.1 - milestones: [240000, 300000] - -OptimizerBuilder: - optimizer: - momentum: 0.0 - type: RMSPropOptimizer - regularizer: - factor: 0.0005 - type: L2 - -TrainReader: - batch_size: 8 - use_process: True - worker_num: 8 - shuffle: true - inputs_def: - image_shape: [3, 640, 640] - fields: ['image', 'gt_bbox', 'gt_class'] - dataset: - !WIDERFaceDataSet - dataset_dir: dataset/wider_face - anno_path: wider_face_split/wider_face_train_bbx_gt.txt - image_dir: WIDER_train/images - sample_transforms: - - !DecodeImage - to_rgb: true - - !NormalizeBox {} - - !RandomDistort - brightness_lower: 0.875 - brightness_upper: 1.125 - is_order: true - - !ExpandImage - max_ratio: 4 - prob: 0.5 - - !CropImageWithDataAchorSampling - anchor_sampler: - - [1, 10, 1.0, 1.0, 1.0, 1.0, 0.0, 0.0, 0.2, 0.0] - batch_sampler: - - [1, 50, 1.0, 1.0, 1.0, 1.0, 0.0, 0.0, 1.0, 0.0] - - [1, 50, 0.3, 1.0, 1.0, 1.0, 0.0, 0.0, 1.0, 0.0] - - [1, 50, 0.3, 1.0, 1.0, 1.0, 0.0, 0.0, 1.0, 0.0] - - [1, 50, 0.3, 1.0, 1.0, 1.0, 0.0, 0.0, 1.0, 0.0] - - [1, 50, 0.3, 1.0, 1.0, 1.0, 0.0, 0.0, 1.0, 0.0] - target_size: 640 - - !RandomInterpImage - target_size: 640 - - !RandomFlipImage - is_normalized: true - - !Permute {} - - !NormalizeImage - is_scale: false - mean: [104, 117, 123] - std: [127.502231, 127.502231, 
127.502231] - -EvalReader: - batch_size: 1 - use_process: false - inputs_def: - fields: ['image', 'im_id'] - dataset: - !WIDERFaceDataSet - dataset_dir: dataset/wider_face - anno_path: wider_face_split/wider_face_val_bbx_gt.txt - image_dir: WIDER_val/images - sample_transforms: - - !DecodeImage - to_rgb: true - - !NormalizeBox {} - - !NormalizeImage - is_channel_first: false - is_scale: false - mean: [123, 117, 104] - std: [127.502231, 127.502231, 127.502231] - - !Permute {} - - -TestReader: - inputs_def: - fields: ['image', 'im_id', 'im_shape'] - dataset: - !ImageFolder - use_default_label: true - sample_transforms: - - !DecodeImage - to_rgb: true - - !NormalizeImage - is_channel_first: false - is_scale: false - mean: [123, 117, 104] - std: [127.502231, 127.502231, 127.502231] - - !Permute {} - batch_size: 1 diff --git a/static/configs/faster_fpn_reader.yml b/static/configs/faster_fpn_reader.yml deleted file mode 100644 index 1ddac93b4402c7fcc6488ade52f3a61cc7769d44..0000000000000000000000000000000000000000 --- a/static/configs/faster_fpn_reader.yml +++ /dev/null @@ -1,101 +0,0 @@ -TrainReader: - inputs_def: - fields: ['image', 'im_info', 'im_id', 'gt_bbox', 'gt_class', 'is_crowd'] - dataset: - !COCODataSet - image_dir: train2017 - anno_path: annotations/instances_train2017.json - dataset_dir: dataset/coco - sample_transforms: - - !DecodeImage - to_rgb: true - - !RandomFlipImage - prob: 0.5 - - !NormalizeImage - is_channel_first: false - is_scale: true - mean: [0.485,0.456,0.406] - std: [0.229, 0.224,0.225] - - !ResizeImage - target_size: 800 - max_size: 1333 - interp: 1 - use_cv2: true - - !Permute - to_bgr: false - channel_first: true - batch_transforms: - - !PadBatch - pad_to_stride: 32 - use_padded_im_info: false - batch_size: 1 - shuffle: true - worker_num: 2 - use_process: false - -EvalReader: - inputs_def: - fields: ['image', 'im_info', 'im_id', 'im_shape'] - # for voc - #fields: ['image', 'im_info', 'im_id', 'im_shape', 'gt_bbox', 'gt_class', 'is_difficult'] - dataset: - !COCODataSet - image_dir: val2017 - anno_path: annotations/instances_val2017.json - dataset_dir: dataset/coco - sample_transforms: - - !DecodeImage - to_rgb: true - with_mixup: false - - !NormalizeImage - is_channel_first: false - is_scale: true - mean: [0.485,0.456,0.406] - std: [0.229, 0.224,0.225] - - !ResizeImage - interp: 1 - max_size: 1333 - target_size: 800 - use_cv2: true - - !Permute - channel_first: true - to_bgr: false - batch_transforms: - - !PadBatch - pad_to_stride: 32 - use_padded_im_info: true - batch_size: 1 - shuffle: false - drop_empty: false - worker_num: 2 - -TestReader: - inputs_def: - # set image_shape if needed - fields: ['image', 'im_info', 'im_id', 'im_shape'] - dataset: - !ImageFolder - anno_path: annotations/instances_val2017.json - sample_transforms: - - !DecodeImage - to_rgb: true - with_mixup: false - - !NormalizeImage - is_channel_first: false - is_scale: true - mean: [0.485,0.456,0.406] - std: [0.229, 0.224,0.225] - - !ResizeImage - interp: 1 - max_size: 1333 - target_size: 800 - use_cv2: true - - !Permute - channel_first: true - to_bgr: false - batch_transforms: - - !PadBatch - pad_to_stride: 32 - use_padded_im_info: true - batch_size: 1 - shuffle: false diff --git a/static/configs/faster_rcnn_cbr101_vd_dual_fpn_1x.yml b/static/configs/faster_rcnn_cbr101_vd_dual_fpn_1x.yml deleted file mode 100644 index 5005c8a36f707992f011db49ac5f9f38b2bb4c47..0000000000000000000000000000000000000000 --- a/static/configs/faster_rcnn_cbr101_vd_dual_fpn_1x.yml +++ /dev/null @@ -1,108 +0,0 @@ 
-architecture: FasterRCNN -max_iters: 90000 -snapshot_iter: 10000 -use_gpu: true -log_iter: 20 -save_dir: output -pretrain_weights: https://paddle-imagenet-models-name.bj.bcebos.com/CBResNet101_vd_pretrained.tar -weights: output/faster_rcnn_cbr101_vd_dual_fpn_1x/model_final -metric: COCO -num_classes: 81 - -FasterRCNN: - backbone: CBResNet - fpn: FPN - rpn_head: FPNRPNHead - roi_extractor: FPNRoIAlign - bbox_head: BBoxHead - bbox_assigner: BBoxAssigner - -CBResNet: - norm_type: bn - norm_decay: 0. - depth: 101 - feature_maps: [2, 3, 4, 5] - freeze_at: 2 - variant: d - repeat_num: 2 - -FPN: - max_level: 6 - min_level: 2 - num_chan: 256 - spatial_scale: [0.03125, 0.0625, 0.125, 0.25] - -FPNRPNHead: - anchor_generator: - anchor_sizes: [32, 64, 128, 256, 512] - aspect_ratios: [0.5, 1.0, 2.0] - stride: [16.0, 16.0] - variance: [1.0, 1.0, 1.0, 1.0] - anchor_start_size: 32 - max_level: 6 - min_level: 2 - num_chan: 256 - rpn_target_assign: - rpn_batch_size_per_im: 256 - rpn_fg_fraction: 0.5 - rpn_negative_overlap: 0.3 - rpn_positive_overlap: 0.7 - rpn_straddle_thresh: 0.0 - train_proposal: - min_size: 0.0 - nms_thresh: 0.7 - post_nms_top_n: 2000 - pre_nms_top_n: 2000 - test_proposal: - min_size: 0.0 - nms_thresh: 0.7 - post_nms_top_n: 1000 - pre_nms_top_n: 1000 - -FPNRoIAlign: - canconical_level: 4 - canonical_size: 224 - max_level: 5 - min_level: 2 - box_resolution: 7 - sampling_ratio: 2 - -BBoxAssigner: - batch_size_per_im: 512 - bbox_reg_weights: [0.1, 0.1, 0.2, 0.2] - bg_thresh_hi: 0.5 - bg_thresh_lo: 0.0 - fg_fraction: 0.25 - fg_thresh: 0.5 - -BBoxHead: - head: TwoFCHead - nms: - keep_top_k: 100 - nms_threshold: 0.5 - score_threshold: 0.05 - -TwoFCHead: - mlp_dim: 1024 - -LearningRate: - base_lr: 0.01 - schedulers: - - !PiecewiseDecay - gamma: 0.1 - milestones: [60000, 80000] - - !LinearWarmup - start_factor: 0.1 - steps: 1000 - -OptimizerBuilder: - optimizer: - momentum: 0.9 - type: Momentum - regularizer: - factor: 0.0001 - type: L2 - -_READER_: 'faster_fpn_reader.yml' -TrainReader: - batch_size: 2 diff --git a/static/configs/faster_rcnn_cbr50_vd_dual_fpn_1x.yml b/static/configs/faster_rcnn_cbr50_vd_dual_fpn_1x.yml deleted file mode 100644 index 4b763af7279beec1c04aed023c9e98039fc0efc7..0000000000000000000000000000000000000000 --- a/static/configs/faster_rcnn_cbr50_vd_dual_fpn_1x.yml +++ /dev/null @@ -1,109 +0,0 @@ -architecture: FasterRCNN -max_iters: 90000 -snapshot_iter: 10000 -use_gpu: true -log_iter: 20 -save_dir: output -pretrain_weights: https://paddle-imagenet-models-name.bj.bcebos.com/CBResNet50_vd_pretrained.tar -weights: output/faster_rcnn_cbr50_vd_dual_fpn_1x/model_final -metric: COCO -num_classes: 81 - -FasterRCNN: - backbone: CBResNet - fpn: FPN - rpn_head: FPNRPNHead - roi_extractor: FPNRoIAlign - bbox_head: BBoxHead - bbox_assigner: BBoxAssigner - -CBResNet: - norm_type: bn - norm_decay: 0. 
- depth: 50 - feature_maps: [2, 3, 4, 5] - freeze_at: 2 - variant: d - repeat_num: 2 - -FPN: - max_level: 6 - min_level: 2 - num_chan: 256 - spatial_scale: [0.03125, 0.0625, 0.125, 0.25] - -FPNRPNHead: - anchor_generator: - anchor_sizes: [32, 64, 128, 256, 512] - aspect_ratios: [0.5, 1.0, 2.0] - stride: [16.0, 16.0] - variance: [1.0, 1.0, 1.0, 1.0] - anchor_start_size: 32 - max_level: 6 - min_level: 2 - num_chan: 256 - rpn_target_assign: - rpn_batch_size_per_im: 256 - rpn_fg_fraction: 0.5 - rpn_negative_overlap: 0.3 - rpn_positive_overlap: 0.7 - rpn_straddle_thresh: 0.0 - train_proposal: - min_size: 0.0 - nms_thresh: 0.7 - post_nms_top_n: 2000 - pre_nms_top_n: 2000 - test_proposal: - min_size: 0.0 - nms_thresh: 0.7 - post_nms_top_n: 1000 - pre_nms_top_n: 1000 - -FPNRoIAlign: - canconical_level: 4 - canonical_size: 224 - max_level: 5 - min_level: 2 - box_resolution: 7 - sampling_ratio: 2 - -BBoxAssigner: - batch_size_per_im: 512 - bbox_reg_weights: [0.1, 0.1, 0.2, 0.2] - bg_thresh_hi: 0.5 - bg_thresh_lo: 0.0 - fg_fraction: 0.25 - fg_thresh: 0.5 - -BBoxHead: - head: TwoFCHead - nms: - keep_top_k: 100 - nms_threshold: 0.5 - score_threshold: 0.05 - -TwoFCHead: - mlp_dim: 1024 - -LearningRate: - base_lr: 0.01 - schedulers: - - !PiecewiseDecay - gamma: 0.1 - milestones: [60000, 80000] - - !LinearWarmup - start_factor: 0.1 - steps: 1000 - -OptimizerBuilder: - optimizer: - momentum: 0.9 - type: Momentum - regularizer: - factor: 0.0001 - type: L2 - -_READER_: 'faster_fpn_reader.yml' -TrainReader: - # batch size per device - batch_size: 2 diff --git a/static/configs/faster_rcnn_r101_1x.yml b/static/configs/faster_rcnn_r101_1x.yml deleted file mode 100644 index 06f460f0dc02d36e74b685db75094ebef5da80d2..0000000000000000000000000000000000000000 --- a/static/configs/faster_rcnn_r101_1x.yml +++ /dev/null @@ -1,91 +0,0 @@ -architecture: FasterRCNN -use_gpu: true -max_iters: 180000 -log_iter: 20 -save_dir: output -snapshot_iter: 10000 -pretrain_weights: https://paddle-imagenet-models-name.bj.bcebos.com/ResNet101_pretrained.tar -metric: COCO -weights: output/faster_rcnn_r101_1x/model_final -num_classes: 81 - -FasterRCNN: - backbone: ResNet - rpn_head: RPNHead - roi_extractor: RoIAlign - bbox_head: BBoxHead - bbox_assigner: BBoxAssigner - -ResNet: - norm_type: affine_channel - depth: 101 - feature_maps: 4 - freeze_at: 2 - -ResNetC5: - depth: 101 - norm_type: affine_channel - -RPNHead: - anchor_generator: - anchor_sizes: [32, 64, 128, 256, 512] - aspect_ratios: [0.5, 1.0, 2.0] - stride: [16.0, 16.0] - variance: [1.0, 1.0, 1.0, 1.0] - rpn_target_assign: - rpn_batch_size_per_im: 256 - rpn_fg_fraction: 0.5 - rpn_negative_overlap: 0.3 - rpn_positive_overlap: 0.7 - rpn_straddle_thresh: 0.0 - use_random: true - train_proposal: - min_size: 0.0 - nms_thresh: 0.7 - pre_nms_top_n: 12000 - post_nms_top_n: 2000 - test_proposal: - min_size: 0.0 - nms_thresh: 0.7 - pre_nms_top_n: 6000 - post_nms_top_n: 1000 - -RoIAlign: - resolution: 14 - sampling_ratio: 0 - spatial_scale: 0.0625 - -BBoxAssigner: - batch_size_per_im: 512 - bbox_reg_weights: [0.1, 0.1, 0.2, 0.2] - bg_thresh_hi: 0.5 - bg_thresh_lo: 0.0 - fg_fraction: 0.25 - fg_thresh: 0.5 - -BBoxHead: - head: ResNetC5 - nms: - keep_top_k: 100 - nms_threshold: 0.5 - score_threshold: 0.05 - -LearningRate: - base_lr: 0.01 - schedulers: - - !PiecewiseDecay - gamma: 0.1 - milestones: [120000, 160000] - - !LinearWarmup - start_factor: 0.3333333333333333 - steps: 500 - -OptimizerBuilder: - optimizer: - momentum: 0.9 - type: Momentum - regularizer: - factor: 0.0001 - type: L2 - 
-_READER_: 'faster_reader.yml' diff --git a/static/configs/faster_rcnn_r101_fpn_1x.yml b/static/configs/faster_rcnn_r101_fpn_1x.yml deleted file mode 100644 index 6bc0d3d10e569259304b863280b8c4271c3628bc..0000000000000000000000000000000000000000 --- a/static/configs/faster_rcnn_r101_fpn_1x.yml +++ /dev/null @@ -1,103 +0,0 @@ -architecture: FasterRCNN -max_iters: 180000 -snapshot_iter: 10000 -use_gpu: true -log_iter: 20 -save_dir: output -pretrain_weights: http://paddle-imagenet-models-name.bj.bcebos.com/ResNet101_pretrained.tar -weights: output/faster_rcnn_r101_fpn_1x/model_final -metric: COCO -num_classes: 81 - -FasterRCNN: - backbone: ResNet - fpn: FPN - rpn_head: FPNRPNHead - roi_extractor: FPNRoIAlign - bbox_head: BBoxHead - bbox_assigner: BBoxAssigner - -ResNet: - depth: 101 - feature_maps: [2, 3, 4, 5] - freeze_at: 2 - norm_type: affine_channel - -FPN: - max_level: 6 - min_level: 2 - num_chan: 256 - spatial_scale: [0.03125, 0.0625, 0.125, 0.25] - -FPNRPNHead: - anchor_generator: - anchor_sizes: [32, 64, 128, 256, 512] - aspect_ratios: [0.5, 1.0, 2.0] - stride: [16.0, 16.0] - variance: [1.0, 1.0, 1.0, 1.0] - anchor_start_size: 32 - max_level: 6 - min_level: 2 - num_chan: 256 - rpn_target_assign: - rpn_batch_size_per_im: 256 - rpn_fg_fraction: 0.5 - rpn_negative_overlap: 0.3 - rpn_positive_overlap: 0.7 - rpn_straddle_thresh: 0.0 - train_proposal: - min_size: 0.0 - nms_thresh: 0.7 - post_nms_top_n: 2000 - pre_nms_top_n: 2000 - test_proposal: - min_size: 0.0 - nms_thresh: 0.7 - post_nms_top_n: 1000 - pre_nms_top_n: 1000 - -FPNRoIAlign: - canconical_level: 4 - canonical_size: 224 - max_level: 5 - min_level: 2 - box_resolution: 7 - sampling_ratio: 2 - -BBoxAssigner: - batch_size_per_im: 512 - bbox_reg_weights: [0.1, 0.1, 0.2, 0.2] - bg_thresh_hi: 0.5 - bg_thresh_lo: 0.0 - fg_fraction: 0.25 - fg_thresh: 0.5 - -BBoxHead: - head: TwoFCHead - nms: - keep_top_k: 100 - nms_threshold: 0.5 - score_threshold: 0.05 - -TwoFCHead: - mlp_dim: 1024 - -LearningRate: - base_lr: 0.01 - schedulers: - - !PiecewiseDecay - gamma: 0.1 - milestones: [120000, 160000] - - !LinearWarmup - start_factor: 0.3333333333333333 - steps: 500 - -OptimizerBuilder: - optimizer: - momentum: 0.9 - type: Momentum - regularizer: - factor: 0.0001 - type: L2 - -_READER_: 'faster_fpn_reader.yml' diff --git a/static/configs/faster_rcnn_r101_fpn_2x.yml b/static/configs/faster_rcnn_r101_fpn_2x.yml deleted file mode 100644 index f9ce45b59525e96f6a75c61e2ad958fe3f067d2b..0000000000000000000000000000000000000000 --- a/static/configs/faster_rcnn_r101_fpn_2x.yml +++ /dev/null @@ -1,103 +0,0 @@ -architecture: FasterRCNN -max_iters: 360000 -snapshot_iter: 10000 -use_gpu: true -log_iter: 20 -save_dir: output -pretrain_weights: http://paddle-imagenet-models-name.bj.bcebos.com/ResNet101_pretrained.tar -weights: output/faster_rcnn_r101_fpn_2x/model_final -metric: COCO -num_classes: 81 - -FasterRCNN: - backbone: ResNet - fpn: FPN - rpn_head: FPNRPNHead - roi_extractor: FPNRoIAlign - bbox_head: BBoxHead - bbox_assigner: BBoxAssigner - -ResNet: - depth: 101 - feature_maps: [2, 3, 4, 5] - freeze_at: 2 - norm_type: affine_channel - -FPN: - max_level: 6 - min_level: 2 - num_chan: 256 - spatial_scale: [0.03125, 0.0625, 0.125, 0.25] - -FPNRPNHead: - anchor_generator: - anchor_sizes: [32, 64, 128, 256, 512] - aspect_ratios: [0.5, 1.0, 2.0] - stride: [16.0, 16.0] - variance: [1.0, 1.0, 1.0, 1.0] - anchor_start_size: 32 - max_level: 6 - min_level: 2 - num_chan: 256 - rpn_target_assign: - rpn_batch_size_per_im: 256 - rpn_fg_fraction: 0.5 - 
rpn_negative_overlap: 0.3 - rpn_positive_overlap: 0.7 - rpn_straddle_thresh: 0.0 - train_proposal: - min_size: 0.0 - nms_thresh: 0.7 - post_nms_top_n: 2000 - pre_nms_top_n: 2000 - test_proposal: - min_size: 0.0 - nms_thresh: 0.7 - post_nms_top_n: 1000 - pre_nms_top_n: 1000 - -FPNRoIAlign: - canconical_level: 4 - canonical_size: 224 - max_level: 5 - min_level: 2 - box_resolution: 7 - sampling_ratio: 2 - -BBoxAssigner: - batch_size_per_im: 512 - bbox_reg_weights: [0.1, 0.1, 0.2, 0.2] - bg_thresh_hi: 0.5 - bg_thresh_lo: 0.0 - fg_fraction: 0.25 - fg_thresh: 0.5 - -BBoxHead: - head: TwoFCHead - nms: - keep_top_k: 100 - nms_threshold: 0.5 - score_threshold: 0.05 - -TwoFCHead: - mlp_dim: 1024 - -LearningRate: - base_lr: 0.01 - schedulers: - - !PiecewiseDecay - gamma: 0.1 - milestones: [240000, 320000] - - !LinearWarmup - start_factor: 0.3333333333333333 - steps: 500 - -OptimizerBuilder: - optimizer: - momentum: 0.9 - type: Momentum - regularizer: - factor: 0.0001 - type: L2 - -_READER_: 'faster_fpn_reader.yml' diff --git a/static/configs/faster_rcnn_r101_vd_fpn_1x.yml b/static/configs/faster_rcnn_r101_vd_fpn_1x.yml deleted file mode 100644 index 464a325dc220a7a87e6d05ec94b81504f621f025..0000000000000000000000000000000000000000 --- a/static/configs/faster_rcnn_r101_vd_fpn_1x.yml +++ /dev/null @@ -1,104 +0,0 @@ -architecture: FasterRCNN -max_iters: 180000 -snapshot_iter: 10000 -use_gpu: true -log_iter: 20 -save_dir: output -pretrain_weights: https://paddle-imagenet-models-name.bj.bcebos.com/ResNet101_vd_pretrained.tar -weights: output/faster_rcnn_r101_vd_fpn_1x/model_final -metric: COCO -num_classes: 81 - -FasterRCNN: - backbone: ResNet - fpn: FPN - rpn_head: FPNRPNHead - roi_extractor: FPNRoIAlign - bbox_head: BBoxHead - bbox_assigner: BBoxAssigner - -ResNet: - depth: 101 - feature_maps: [2, 3, 4, 5] - freeze_at: 2 - norm_type: affine_channel - variant: d - -FPN: - max_level: 6 - min_level: 2 - num_chan: 256 - spatial_scale: [0.03125, 0.0625, 0.125, 0.25] - -FPNRPNHead: - anchor_generator: - anchor_sizes: [32, 64, 128, 256, 512] - aspect_ratios: [0.5, 1.0, 2.0] - stride: [16.0, 16.0] - variance: [1.0, 1.0, 1.0, 1.0] - anchor_start_size: 32 - max_level: 6 - min_level: 2 - num_chan: 256 - rpn_target_assign: - rpn_batch_size_per_im: 256 - rpn_fg_fraction: 0.5 - rpn_negative_overlap: 0.3 - rpn_positive_overlap: 0.7 - rpn_straddle_thresh: 0.0 - train_proposal: - min_size: 0.0 - nms_thresh: 0.7 - post_nms_top_n: 2000 - pre_nms_top_n: 2000 - test_proposal: - min_size: 0.0 - nms_thresh: 0.7 - post_nms_top_n: 1000 - pre_nms_top_n: 1000 - -FPNRoIAlign: - canconical_level: 4 - canonical_size: 224 - max_level: 5 - min_level: 2 - box_resolution: 7 - sampling_ratio: 2 - -BBoxAssigner: - batch_size_per_im: 512 - bbox_reg_weights: [0.1, 0.1, 0.2, 0.2] - bg_thresh_hi: 0.5 - bg_thresh_lo: 0.0 - fg_fraction: 0.25 - fg_thresh: 0.5 - -BBoxHead: - head: TwoFCHead - nms: - keep_top_k: 100 - nms_threshold: 0.5 - score_threshold: 0.05 - -TwoFCHead: - mlp_dim: 1024 - -LearningRate: - base_lr: 0.01 - schedulers: - - !PiecewiseDecay - gamma: 0.1 - milestones: [120000, 160000] - - !LinearWarmup - start_factor: 0.3333333333333333 - steps: 1000 - -OptimizerBuilder: - optimizer: - momentum: 0.9 - type: Momentum - regularizer: - factor: 0.0001 - type: L2 - -_READER_: 'faster_fpn_reader.yml' diff --git a/static/configs/faster_rcnn_r101_vd_fpn_2x.yml b/static/configs/faster_rcnn_r101_vd_fpn_2x.yml deleted file mode 100644 index ee5ea586c68a3ff7bf91f3009eec08b552b14561..0000000000000000000000000000000000000000 --- 
a/static/configs/faster_rcnn_r101_vd_fpn_2x.yml +++ /dev/null @@ -1,104 +0,0 @@ -architecture: FasterRCNN -max_iters: 360000 -snapshot_iter: 10000 -use_gpu: true -log_iter: 20 -save_dir: output -pretrain_weights: https://paddle-imagenet-models-name.bj.bcebos.com/ResNet101_vd_pretrained.tar -weights: output/faster_rcnn_r101_vd_fpn_2x/model_final -metric: COCO -num_classes: 81 - -FasterRCNN: - backbone: ResNet - fpn: FPN - rpn_head: FPNRPNHead - roi_extractor: FPNRoIAlign - bbox_head: BBoxHead - bbox_assigner: BBoxAssigner - -ResNet: - depth: 101 - feature_maps: [2, 3, 4, 5] - freeze_at: 2 - norm_type: affine_channel - variant: d - -FPN: - max_level: 6 - min_level: 2 - num_chan: 256 - spatial_scale: [0.03125, 0.0625, 0.125, 0.25] - -FPNRPNHead: - anchor_generator: - anchor_sizes: [32, 64, 128, 256, 512] - aspect_ratios: [0.5, 1.0, 2.0] - stride: [16.0, 16.0] - variance: [1.0, 1.0, 1.0, 1.0] - anchor_start_size: 32 - max_level: 6 - min_level: 2 - num_chan: 256 - rpn_target_assign: - rpn_batch_size_per_im: 256 - rpn_fg_fraction: 0.5 - rpn_negative_overlap: 0.3 - rpn_positive_overlap: 0.7 - rpn_straddle_thresh: 0.0 - train_proposal: - min_size: 0.0 - nms_thresh: 0.7 - post_nms_top_n: 2000 - pre_nms_top_n: 2000 - test_proposal: - min_size: 0.0 - nms_thresh: 0.7 - post_nms_top_n: 1000 - pre_nms_top_n: 1000 - -FPNRoIAlign: - canconical_level: 4 - canonical_size: 224 - max_level: 5 - min_level: 2 - box_resolution: 7 - sampling_ratio: 2 - -BBoxAssigner: - batch_size_per_im: 512 - bbox_reg_weights: [0.1, 0.1, 0.2, 0.2] - bg_thresh_hi: 0.5 - bg_thresh_lo: 0.0 - fg_fraction: 0.25 - fg_thresh: 0.5 - -BBoxHead: - head: TwoFCHead - nms: - keep_top_k: 100 - nms_threshold: 0.5 - score_threshold: 0.05 - -TwoFCHead: - mlp_dim: 1024 - -LearningRate: - base_lr: 0.01 - schedulers: - - !PiecewiseDecay - gamma: 0.1 - milestones: [240000, 320000] - - !LinearWarmup - start_factor: 0.3333333333333333 - steps: 1000 - -OptimizerBuilder: - optimizer: - momentum: 0.9 - type: Momentum - regularizer: - factor: 0.0001 - type: L2 - -_READER_: 'faster_fpn_reader.yml' diff --git a/static/configs/faster_rcnn_r34_fpn_1x.yml b/static/configs/faster_rcnn_r34_fpn_1x.yml deleted file mode 100644 index 1ea388e088cfe4d4cc328a8fb42c916df97fdc22..0000000000000000000000000000000000000000 --- a/static/configs/faster_rcnn_r34_fpn_1x.yml +++ /dev/null @@ -1,106 +0,0 @@ -architecture: FasterRCNN -max_iters: 90000 -use_gpu: true -snapshot_iter: 10000 -log_iter: 20 -save_dir: output -pretrain_weights: https://paddle-imagenet-models-name.bj.bcebos.com/ResNet34_pretrained.tar -metric: COCO -weights: output/faster_rcnn_r34_fpn_1x/model_final -num_classes: 81 - -FasterRCNN: - backbone: ResNet - fpn: FPN - rpn_head: FPNRPNHead - roi_extractor: FPNRoIAlign - bbox_head: BBoxHead - bbox_assigner: BBoxAssigner - -ResNet: - norm_type: bn - norm_decay: 0. 
- depth: 34 - feature_maps: [2, 3, 4, 5] - freeze_at: 2 - -FPN: - min_level: 2 - max_level: 6 - num_chan: 256 - spatial_scale: [0.03125, 0.0625, 0.125, 0.25] - -FPNRPNHead: - anchor_generator: - anchor_sizes: [32, 64, 128, 256, 512] - aspect_ratios: [0.5, 1.0, 2.0] - stride: [16.0, 16.0] - variance: [1.0, 1.0, 1.0, 1.0] - anchor_start_size: 32 - min_level: 2 - max_level: 6 - num_chan: 256 - rpn_target_assign: - rpn_batch_size_per_im: 256 - rpn_fg_fraction: 0.5 - rpn_positive_overlap: 0.7 - rpn_negative_overlap: 0.3 - rpn_straddle_thresh: 0.0 - train_proposal: - min_size: 0.0 - nms_thresh: 0.7 - pre_nms_top_n: 2000 - post_nms_top_n: 2000 - test_proposal: - min_size: 0.0 - nms_thresh: 0.7 - pre_nms_top_n: 1000 - post_nms_top_n: 1000 - -FPNRoIAlign: - canconical_level: 4 - canonical_size: 224 - min_level: 2 - max_level: 5 - box_resolution: 7 - sampling_ratio: 2 - -BBoxAssigner: - batch_size_per_im: 512 - bbox_reg_weights: [0.1, 0.1, 0.2, 0.2] - bg_thresh_lo: 0.0 - bg_thresh_hi: 0.5 - fg_fraction: 0.25 - fg_thresh: 0.5 - -BBoxHead: - head: TwoFCHead - nms: - keep_top_k: 100 - nms_threshold: 0.5 - score_threshold: 0.05 - -TwoFCHead: - mlp_dim: 1024 - -LearningRate: - base_lr: 0.02 - schedulers: - - !PiecewiseDecay - gamma: 0.1 - milestones: [60000, 80000] - - !LinearWarmup - start_factor: 0.1 - steps: 1000 - -OptimizerBuilder: - optimizer: - momentum: 0.9 - type: Momentum - regularizer: - factor: 0.0001 - type: L2 - -_READER_: 'faster_fpn_reader.yml' -TrainReader: - batch_size: 2 diff --git a/static/configs/faster_rcnn_r34_vd_fpn_1x.yml b/static/configs/faster_rcnn_r34_vd_fpn_1x.yml deleted file mode 100644 index b176aad69e0478ed81a50b66a5c625fcd109dc36..0000000000000000000000000000000000000000 --- a/static/configs/faster_rcnn_r34_vd_fpn_1x.yml +++ /dev/null @@ -1,107 +0,0 @@ -architecture: FasterRCNN -max_iters: 90000 -use_gpu: true -snapshot_iter: 10000 -log_iter: 20 -save_dir: output -pretrain_weights: https://paddle-imagenet-models-name.bj.bcebos.com/ResNet34_vd_pretrained.tar -metric: COCO -weights: output/faster_rcnn_r34_vd_fpn_1x/model_final -num_classes: 81 - -FasterRCNN: - backbone: ResNet - fpn: FPN - rpn_head: FPNRPNHead - roi_extractor: FPNRoIAlign - bbox_head: BBoxHead - bbox_assigner: BBoxAssigner - -ResNet: - norm_type: bn - norm_decay: 0.
- depth: 34 - feature_maps: [2, 3, 4, 5] - freeze_at: 2 - variant: d - -FPN: - min_level: 2 - max_level: 6 - num_chan: 256 - spatial_scale: [0.03125, 0.0625, 0.125, 0.25] - -FPNRPNHead: - anchor_generator: - anchor_sizes: [32, 64, 128, 256, 512] - aspect_ratios: [0.5, 1.0, 2.0] - stride: [16.0, 16.0] - variance: [1.0, 1.0, 1.0, 1.0] - anchor_start_size: 32 - min_level: 2 - max_level: 6 - num_chan: 256 - rpn_target_assign: - rpn_batch_size_per_im: 256 - rpn_fg_fraction: 0.5 - rpn_positive_overlap: 0.7 - rpn_negative_overlap: 0.3 - rpn_straddle_thresh: 0.0 - train_proposal: - min_size: 0.0 - nms_thresh: 0.7 - pre_nms_top_n: 2000 - post_nms_top_n: 2000 - test_proposal: - min_size: 0.0 - nms_thresh: 0.7 - pre_nms_top_n: 1000 - post_nms_top_n: 1000 - -FPNRoIAlign: - canconical_level: 4 - canonical_size: 224 - min_level: 2 - max_level: 5 - box_resolution: 7 - sampling_ratio: 2 - -BBoxAssigner: - batch_size_per_im: 512 - bbox_reg_weights: [0.1, 0.1, 0.2, 0.2] - bg_thresh_lo: 0.0 - bg_thresh_hi: 0.5 - fg_fraction: 0.25 - fg_thresh: 0.5 - -BBoxHead: - head: TwoFCHead - nms: - keep_top_k: 100 - nms_threshold: 0.5 - score_threshold: 0.05 - -TwoFCHead: - mlp_dim: 1024 - -LearningRate: - base_lr: 0.02 - schedulers: - - !PiecewiseDecay - gamma: 0.1 - milestones: [60000, 80000] - - !LinearWarmup - start_factor: 0.1 - steps: 1000 - -OptimizerBuilder: - optimizer: - momentum: 0.9 - type: Momentum - regularizer: - factor: 0.0001 - type: L2 - -_READER_: 'faster_fpn_reader.yml' -TrainReader: - batch_size: 2 diff --git a/static/configs/faster_rcnn_r50_1x.yml b/static/configs/faster_rcnn_r50_1x.yml deleted file mode 100644 index 26753124560365c818fc15ed04ee99f242b89145..0000000000000000000000000000000000000000 --- a/static/configs/faster_rcnn_r50_1x.yml +++ /dev/null @@ -1,91 +0,0 @@ -architecture: FasterRCNN -use_gpu: true -max_iters: 180000 -log_iter: 20 -save_dir: output -snapshot_iter: 10000 -pretrain_weights: https://paddle-imagenet-models-name.bj.bcebos.com/ResNet50_cos_pretrained.tar -metric: COCO -weights: output/faster_rcnn_r50_1x/model_final -num_classes: 81 - -FasterRCNN: - backbone: ResNet - rpn_head: RPNHead - roi_extractor: RoIAlign - bbox_head: BBoxHead - bbox_assigner: BBoxAssigner - -ResNet: - norm_type: affine_channel - depth: 50 - feature_maps: 4 - freeze_at: 2 - -ResNetC5: - depth: 50 - norm_type: affine_channel - -RPNHead: - anchor_generator: - anchor_sizes: [32, 64, 128, 256, 512] - aspect_ratios: [0.5, 1.0, 2.0] - stride: [16.0, 16.0] - variance: [1.0, 1.0, 1.0, 1.0] - rpn_target_assign: - rpn_batch_size_per_im: 256 - rpn_fg_fraction: 0.5 - rpn_negative_overlap: 0.3 - rpn_positive_overlap: 0.7 - rpn_straddle_thresh: 0.0 - use_random: true - train_proposal: - min_size: 0.0 - nms_thresh: 0.7 - pre_nms_top_n: 12000 - post_nms_top_n: 2000 - test_proposal: - min_size: 0.0 - nms_thresh: 0.7 - pre_nms_top_n: 6000 - post_nms_top_n: 1000 - -RoIAlign: - resolution: 14 - sampling_ratio: 0 - spatial_scale: 0.0625 - -BBoxAssigner: - batch_size_per_im: 512 - bbox_reg_weights: [0.1, 0.1, 0.2, 0.2] - bg_thresh_hi: 0.5 - bg_thresh_lo: 0.0 - fg_fraction: 0.25 - fg_thresh: 0.5 - -BBoxHead: - head: ResNetC5 - nms: - keep_top_k: 100 - nms_threshold: 0.5 - score_threshold: 0.05 - -LearningRate: - base_lr: 0.01 - schedulers: - - !PiecewiseDecay - gamma: 0.1 - milestones: [120000, 160000] - - !LinearWarmup - start_factor: 0.3333333333333333 - steps: 500 - -OptimizerBuilder: - optimizer: - momentum: 0.9 - type: Momentum - regularizer: - factor: 0.0001 - type: L2 - -_READER_: 'faster_reader.yml' diff --git 
a/static/configs/faster_rcnn_r50_2x.yml b/static/configs/faster_rcnn_r50_2x.yml deleted file mode 100644 index 9112d9616f9c3baaba2ae377add9729a848d2188..0000000000000000000000000000000000000000 --- a/static/configs/faster_rcnn_r50_2x.yml +++ /dev/null @@ -1,91 +0,0 @@ -architecture: FasterRCNN -use_gpu: true -max_iters: 360000 -log_iter: 20 -save_dir: output -snapshot_iter: 10000 -pretrain_weights: https://paddle-imagenet-models-name.bj.bcebos.com/ResNet50_cos_pretrained.tar -metric: COCO -weights: output/faster_rcnn_r50_2x/model_final -num_classes: 81 - -FasterRCNN: - backbone: ResNet - rpn_head: RPNHead - roi_extractor: RoIAlign - bbox_head: BBoxHead - bbox_assigner: BBoxAssigner - -ResNet: - norm_type: affine_channel - depth: 50 - feature_maps: 4 - freeze_at: 2 - -ResNetC5: - depth: 50 - norm_type: affine_channel - -RPNHead: - anchor_generator: - anchor_sizes: [32, 64, 128, 256, 512] - aspect_ratios: [0.5, 1.0, 2.0] - stride: [16.0, 16.0] - variance: [1.0, 1.0, 1.0, 1.0] - rpn_target_assign: - rpn_batch_size_per_im: 256 - rpn_fg_fraction: 0.5 - rpn_negative_overlap: 0.3 - rpn_positive_overlap: 0.7 - rpn_straddle_thresh: 0.0 - use_random: true - train_proposal: - min_size: 0.0 - nms_thresh: 0.7 - pre_nms_top_n: 12000 - post_nms_top_n: 2000 - test_proposal: - min_size: 0.0 - nms_thresh: 0.7 - pre_nms_top_n: 6000 - post_nms_top_n: 1000 - -RoIAlign: - resolution: 14 - sampling_ratio: 0 - spatial_scale: 0.0625 - -BBoxAssigner: - batch_size_per_im: 512 - bbox_reg_weights: [0.1, 0.1, 0.2, 0.2] - bg_thresh_hi: 0.5 - bg_thresh_lo: 0.0 - fg_fraction: 0.25 - fg_thresh: 0.5 - -BBoxHead: - head: ResNetC5 - nms: - keep_top_k: 100 - nms_threshold: 0.5 - score_threshold: 0.05 - -LearningRate: - base_lr: 0.01 - schedulers: - - !PiecewiseDecay - gamma: 0.1 - milestones: [240000, 320000] - - !LinearWarmup - start_factor: 0.3333333333333333 - steps: 500 - -OptimizerBuilder: - optimizer: - momentum: 0.9 - type: Momentum - regularizer: - factor: 0.0001 - type: L2 - -_READER_: 'faster_reader.yml' diff --git a/static/configs/faster_rcnn_r50_fpn_1x.yml b/static/configs/faster_rcnn_r50_fpn_1x.yml deleted file mode 100644 index 83dcc356d6136626568bdaee3e3f753a95be3c68..0000000000000000000000000000000000000000 --- a/static/configs/faster_rcnn_r50_fpn_1x.yml +++ /dev/null @@ -1,105 +0,0 @@ -architecture: FasterRCNN -max_iters: 90000 -use_gpu: true -snapshot_iter: 10000 -log_iter: 20 -save_dir: output -pretrain_weights: https://paddle-imagenet-models-name.bj.bcebos.com/ResNet50_cos_pretrained.tar -metric: COCO -weights: output/faster_rcnn_r50_fpn_1x/model_final -num_classes: 81 - -FasterRCNN: - backbone: ResNet - fpn: FPN - rpn_head: FPNRPNHead - roi_extractor: FPNRoIAlign - bbox_head: BBoxHead - bbox_assigner: BBoxAssigner - -ResNet: - norm_type: bn - norm_decay: 0. 
- depth: 50 - feature_maps: [2, 3, 4, 5] - freeze_at: 2 - -FPN: - min_level: 2 - max_level: 6 - num_chan: 256 - spatial_scale: [0.03125, 0.0625, 0.125, 0.25] - -FPNRPNHead: - anchor_generator: - anchor_sizes: [32, 64, 128, 256, 512] - aspect_ratios: [0.5, 1.0, 2.0] - variance: [1.0, 1.0, 1.0, 1.0] - anchor_start_size: 32 - min_level: 2 - max_level: 6 - num_chan: 256 - rpn_target_assign: - rpn_batch_size_per_im: 256 - rpn_fg_fraction: 0.5 - rpn_positive_overlap: 0.7 - rpn_negative_overlap: 0.3 - rpn_straddle_thresh: 0.0 - train_proposal: - min_size: 0.0 - nms_thresh: 0.7 - pre_nms_top_n: 2000 - post_nms_top_n: 2000 - test_proposal: - min_size: 0.0 - nms_thresh: 0.7 - pre_nms_top_n: 1000 - post_nms_top_n: 1000 - -FPNRoIAlign: - canconical_level: 4 - canonical_size: 224 - min_level: 2 - max_level: 5 - box_resolution: 7 - sampling_ratio: 2 - -BBoxAssigner: - batch_size_per_im: 512 - bbox_reg_weights: [0.1, 0.1, 0.2, 0.2] - bg_thresh_lo: 0.0 - bg_thresh_hi: 0.5 - fg_fraction: 0.25 - fg_thresh: 0.5 - -BBoxHead: - head: TwoFCHead - nms: - keep_top_k: 100 - nms_threshold: 0.5 - score_threshold: 0.05 - -TwoFCHead: - mlp_dim: 1024 - -LearningRate: - base_lr: 0.02 - schedulers: - - !PiecewiseDecay - gamma: 0.1 - milestones: [60000, 80000] - - !LinearWarmup - start_factor: 0.1 - steps: 1000 - -OptimizerBuilder: - optimizer: - momentum: 0.9 - type: Momentum - regularizer: - factor: 0.0001 - type: L2 - -_READER_: 'faster_fpn_reader.yml' -TrainReader: - batch_size: 2 diff --git a/static/configs/faster_rcnn_r50_fpn_2x.yml b/static/configs/faster_rcnn_r50_fpn_2x.yml deleted file mode 100644 index 909f0fd1c12c1f39c9590089ff2348f22e426011..0000000000000000000000000000000000000000 --- a/static/configs/faster_rcnn_r50_fpn_2x.yml +++ /dev/null @@ -1,106 +0,0 @@ -architecture: FasterRCNN -max_iters: 180000 -use_gpu: true -snapshot_iter: 10000 -log_iter: 20 -save_dir: output -pretrain_weights: https://paddle-imagenet-models-name.bj.bcebos.com/ResNet50_cos_pretrained.tar -metric: COCO -weights: output/faster_rcnn_r50_fpn_2x/model_final -num_classes: 81 - -FasterRCNN: - backbone: ResNet - fpn: FPN - rpn_head: FPNRPNHead - roi_extractor: FPNRoIAlign - bbox_head: BBoxHead - bbox_assigner: BBoxAssigner - -ResNet: - norm_type: affine_channel - norm_decay: 0. 
- depth: 50 - feature_maps: [2, 3, 4, 5] - freeze_at: 2 - -FPN: - min_level: 2 - max_level: 6 - num_chan: 256 - spatial_scale: [0.03125, 0.0625, 0.125, 0.25] - -FPNRPNHead: - anchor_generator: - anchor_sizes: [32, 64, 128, 256, 512] - aspect_ratios: [0.5, 1.0, 2.0] - stride: [16.0, 16.0] - variance: [1.0, 1.0, 1.0, 1.0] - anchor_start_size: 32 - min_level: 2 - max_level: 6 - num_chan: 256 - rpn_target_assign: - rpn_batch_size_per_im: 256 - rpn_fg_fraction: 0.5 - rpn_positive_overlap: 0.7 - rpn_negative_overlap: 0.3 - rpn_straddle_thresh: 0.0 - train_proposal: - min_size: 0.0 - nms_thresh: 0.7 - pre_nms_top_n: 2000 - post_nms_top_n: 2000 - test_proposal: - min_size: 0.0 - nms_thresh: 0.7 - pre_nms_top_n: 1000 - post_nms_top_n: 1000 - -FPNRoIAlign: - canconical_level: 4 - canonical_size: 224 - min_level: 2 - max_level: 5 - box_resolution: 7 - sampling_ratio: 2 - -BBoxAssigner: - batch_size_per_im: 512 - bbox_reg_weights: [0.1, 0.1, 0.2, 0.2] - bg_thresh_lo: 0.0 - bg_thresh_hi: 0.5 - fg_fraction: 0.25 - fg_thresh: 0.5 - -BBoxHead: - head: TwoFCHead - nms: - keep_top_k: 100 - nms_threshold: 0.5 - score_threshold: 0.05 - -TwoFCHead: - mlp_dim: 1024 - -LearningRate: - base_lr: 0.02 - schedulers: - - !PiecewiseDecay - gamma: 0.1 - milestones: [120000, 160000] - - !LinearWarmup - start_factor: 0.3333333333333333 - steps: 500 - -OptimizerBuilder: - optimizer: - momentum: 0.9 - type: Momentum - regularizer: - factor: 0.0001 - type: L2 - -_READER_: 'faster_fpn_reader.yml' -TrainReader: - batch_size: 2 diff --git a/static/configs/faster_rcnn_r50_vd_1x.yml b/static/configs/faster_rcnn_r50_vd_1x.yml deleted file mode 100644 index 8a10886b5ff7bd9d3a476ea5819cf9067e9a9819..0000000000000000000000000000000000000000 --- a/static/configs/faster_rcnn_r50_vd_1x.yml +++ /dev/null @@ -1,93 +0,0 @@ -architecture: FasterRCNN -use_gpu: true -max_iters: 180000 -log_iter: 20 -save_dir: output/faster-r50-vd-c4-1x -snapshot_iter: 10000 -pretrain_weights: https://paddle-imagenet-models-name.bj.bcebos.com/ResNet50_vd_pretrained.tar -metric: COCO -weights: output/faster_rcnn_r50_vd_1x/model_final -num_classes: 81 - -FasterRCNN: - backbone: ResNet - rpn_head: RPNHead - roi_extractor: RoIAlign - bbox_head: BBoxHead - bbox_assigner: BBoxAssigner - -ResNet: - norm_type: affine_channel - depth: 50 - feature_maps: 4 - freeze_at: 2 - variant: d - -ResNetC5: - depth: 50 - norm_type: affine_channel - variant: d - -RPNHead: - anchor_generator: - anchor_sizes: [32, 64, 128, 256, 512] - aspect_ratios: [0.5, 1.0, 2.0] - stride: [16.0, 16.0] - variance: [1.0, 1.0, 1.0, 1.0] - rpn_target_assign: - rpn_batch_size_per_im: 256 - rpn_fg_fraction: 0.5 - rpn_negative_overlap: 0.3 - rpn_positive_overlap: 0.7 - rpn_straddle_thresh: 0.0 - use_random: true - train_proposal: - min_size: 0.0 - nms_thresh: 0.7 - pre_nms_top_n: 12000 - post_nms_top_n: 2000 - test_proposal: - min_size: 0.0 - nms_thresh: 0.7 - pre_nms_top_n: 6000 - post_nms_top_n: 1000 - -RoIAlign: - resolution: 14 - sampling_ratio: 0 - spatial_scale: 0.0625 - -BBoxAssigner: - batch_size_per_im: 512 - bbox_reg_weights: [0.1, 0.1, 0.2, 0.2] - bg_thresh_hi: 0.5 - bg_thresh_lo: 0.0 - fg_fraction: 0.25 - fg_thresh: 0.5 - -BBoxHead: - head: ResNetC5 - nms: - keep_top_k: 100 - nms_threshold: 0.5 - score_threshold: 0.05 - -LearningRate: - base_lr: 0.01 - schedulers: - - !PiecewiseDecay - gamma: 0.1 - milestones: [120000, 160000] - - !LinearWarmup - start_factor: 0.1 - steps: 1000 - -OptimizerBuilder: - optimizer: - momentum: 0.9 - type: Momentum - regularizer: - factor: 0.0001 - type: L2 - 
-_READER_: 'faster_reader.yml' diff --git a/static/configs/faster_rcnn_r50_vd_fpn_2x.yml b/static/configs/faster_rcnn_r50_vd_fpn_2x.yml deleted file mode 100644 index 51f8ed5502573ab59516e8694ed1d583dafa50ed..0000000000000000000000000000000000000000 --- a/static/configs/faster_rcnn_r50_vd_fpn_2x.yml +++ /dev/null @@ -1,106 +0,0 @@ -architecture: FasterRCNN -max_iters: 180000 -snapshot_iter: 10000 -use_gpu: true -log_iter: 20 -save_dir: output -pretrain_weights: https://paddle-imagenet-models-name.bj.bcebos.com/ResNet50_vd_pretrained.tar -weights: output/faster_rcnn_r50_vd_fpn_2x/model_final -metric: COCO -num_classes: 81 - -FasterRCNN: - backbone: ResNet - fpn: FPN - rpn_head: FPNRPNHead - roi_extractor: FPNRoIAlign - bbox_head: BBoxHead - bbox_assigner: BBoxAssigner - -ResNet: - depth: 50 - feature_maps: [2, 3, 4, 5] - freeze_at: 2 - norm_type: affine_channel - variant: d - -FPN: - max_level: 6 - min_level: 2 - num_chan: 256 - spatial_scale: [0.03125, 0.0625, 0.125, 0.25] - -FPNRPNHead: - anchor_generator: - anchor_sizes: [32, 64, 128, 256, 512] - aspect_ratios: [0.5, 1.0, 2.0] - stride: [16.0, 16.0] - variance: [1.0, 1.0, 1.0, 1.0] - anchor_start_size: 32 - max_level: 6 - min_level: 2 - num_chan: 256 - rpn_target_assign: - rpn_batch_size_per_im: 256 - rpn_fg_fraction: 0.5 - rpn_negative_overlap: 0.3 - rpn_positive_overlap: 0.7 - rpn_straddle_thresh: 0.0 - train_proposal: - min_size: 0.0 - nms_thresh: 0.7 - post_nms_top_n: 2000 - pre_nms_top_n: 2000 - test_proposal: - min_size: 0.0 - nms_thresh: 0.7 - post_nms_top_n: 1000 - pre_nms_top_n: 1000 - -FPNRoIAlign: - canconical_level: 4 - canonical_size: 224 - max_level: 5 - min_level: 2 - box_resolution: 7 - sampling_ratio: 2 - -BBoxAssigner: - batch_size_per_im: 512 - bbox_reg_weights: [0.1, 0.1, 0.2, 0.2] - bg_thresh_hi: 0.5 - bg_thresh_lo: 0.0 - fg_fraction: 0.25 - fg_thresh: 0.5 - -BBoxHead: - head: TwoFCHead - nms: - keep_top_k: 100 - nms_threshold: 0.5 - score_threshold: 0.05 - -TwoFCHead: - mlp_dim: 1024 - -LearningRate: - base_lr: 0.02 - schedulers: - - !PiecewiseDecay - gamma: 0.1 - milestones: [120000, 160000] - - !LinearWarmup - start_factor: 0.3333333333333333 - steps: 500 - -OptimizerBuilder: - optimizer: - momentum: 0.9 - type: Momentum - regularizer: - factor: 0.0001 - type: L2 - -_READER_: 'faster_fpn_reader.yml' -TrainReader: - batch_size: 2 diff --git a/static/configs/faster_rcnn_r50_vd_fpn_roadsign_kunlun.yml b/static/configs/faster_rcnn_r50_vd_fpn_roadsign_kunlun.yml deleted file mode 100644 index 7401438562952ab5f0191e8f9ea9a74c30a98db9..0000000000000000000000000000000000000000 --- a/static/configs/faster_rcnn_r50_vd_fpn_roadsign_kunlun.yml +++ /dev/null @@ -1,239 +0,0 @@ -architecture: FasterRCNN -use_gpu: false -use_xpu: true -max_iters: 2000 -log_iter: 1 -save_dir: output -snapshot_iter: 500 -metric: VOC -pretrain_weights: https://paddlemodels.bj.bcebos.com/object_detection/faster_rcnn_r50_vd_fpn_2x.tar -weights: output/faster_rcnn_r50_vd_fpn_roadsign_kunlun/model_final -num_classes: 5 -finetune_exclude_pretrained_params: ['cls_score', 'bbox_pred'] - -FasterRCNN: - backbone: ResNet - fpn: FPN - rpn_head: FPNRPNHead - roi_extractor: FPNRoIAlign - bbox_head: BBoxHead - bbox_assigner: BBoxAssigner - -ResNet: - norm_type: affine_channel - depth: 50 - feature_maps: [2, 3, 4, 5] - freeze_at: 2 - variant: d - -ResNetC5: - depth: 50 - norm_type: affine_channel - variant: d - -FPN: - max_level: 6 - min_level: 2 - num_chan: 256 - spatial_scale: [0.03125, 0.0625, 0.125, 0.25] - -FPNRPNHead: - anchor_generator: - anchor_sizes: [32,
64, 128, 256, 512] - aspect_ratios: [0.5, 1.0, 2.0] - stride: [16.0, 16.0] - variance: [1.0, 1.0, 1.0, 1.0] - - anchor_start_size: 32 - min_level: 2 - max_level: 6 - num_chan: 256 - - rpn_target_assign: - rpn_batch_size_per_im: 256 - rpn_fg_fraction: 0.5 - rpn_negative_overlap: 0.3 - rpn_positive_overlap: 0.7 - rpn_straddle_thresh: 0.0 - use_random: true - - train_proposal: - min_size: 0.0 - nms_thresh: 0.7 - pre_nms_top_n: 2000 - post_nms_top_n: 2000 - - test_proposal: - min_size: 0.0 - nms_thresh: 0.7 - pre_nms_top_n: 1000 - post_nms_top_n: 1000 - -FPNRoIAlign: - canconical_level: 4 - canonical_size: 224 - max_level: 5 - min_level: 2 - sampling_ratio: 2 - box_resolution: 7 - mask_resolution: 14 - -BBoxAssigner: - batch_size_per_im: 512 - bbox_reg_weights: [0.1, 0.1, 0.2, 0.2] - bg_thresh_hi: 0.5 - bg_thresh_lo: 0.0 - fg_fraction: 0.25 - fg_thresh: 0.5 - -BBoxHead: - head: TwoFCHead - nms: - keep_top_k: 100 - nms_threshold: 0.5 - score_threshold: 0.05 - -TwoFCHead: - mlp_dim: 1024 - -LearningRate: - base_lr: 0.0001 - schedulers: - - !PiecewiseDecay - gamma: 0.1 - milestones: - - 1300 - - 1800 - - !LinearWarmup - start_factor: 0.3333333333333333 - steps: 100 - -OptimizerBuilder: - optimizer: - momentum: 0.9 - type: Momentum - regularizer: - factor: 0.0001 - type: L2 - - -TrainReader: - inputs_def: - fields: ['image', 'im_info', 'im_id', 'gt_bbox', 'gt_class', 'is_crowd'] - dataset: - !VOCDataSet - dataset_dir: dataset/roadsign_voc - anno_path: train.txt - with_background: true - - batch_size: 1 - bufsize: 2 - shuffle: true - drop_empty: true - drop_last: true - mixup_epoch: -1 - use_process: false - worker_num: 2 - - sample_transforms: - - !DecodeImage - to_rgb: true - - !RandomFlipImage - is_normalized: true - prob: 0.5 - - !NormalizeImage - mean: [0.485, 0.456, 0.406] - std: [0.229, 0.224, 0.225] - is_scale: true - is_channel_first: false - - !ResizeImage - target_size: 800 - max_size: 1333 - interp: 1 - use_cv2: true - - !Permute - channel_first: true - to_bgr: false - - batch_transforms: - - !PadBatch - pad_to_stride: 32 - use_padded_im_info: false - - -EvalReader: - batch_size: 1 - bufsize: 1 - shuffle: false - drop_empty: false - drop_last: false - use_process: false - worker_num: 1 - - inputs_def: - fields: ['image', 'im_info', 'im_id', 'im_shape', 'gt_bbox', 'gt_class', 'is_difficult'] - - dataset: - !VOCDataSet - dataset_dir: dataset/roadsign_voc - anno_path: valid.txt - with_background: true - - sample_transforms: - - !DecodeImage - to_rgb: true - - !NormalizeImage - mean: [0.485, 0.456, 0.406] - std: [0.229, 0.224, 0.225] - is_scale: true - is_channel_first: false - - !ResizeImage - target_size: 800 - max_size: 1333 - interp: 1 - use_cv2: true - - !Permute - to_bgr: false - channel_first: true - - batch_transforms: - - !PadBatch - pad_to_stride: 32 - use_padded_im_info: true - - -TestReader: - batch_size: 1 - drop_empty: false - drop_last: false - - inputs_def: - fields: ['image', 'im_info', 'im_id', 'im_shape'] - - dataset: - !ImageFolder - anno_path: dataset/roadsign_voc/label_list.txt - with_background: true - - sample_transforms: - - !DecodeImage - to_rgb: true - with_mixup: false - - !NormalizeImage - mean: [0.485, 0.456, 0.406] - std: [0.229, 0.224, 0.225] - is_scale: true - is_channel_first: false - - !ResizeImage - target_size: 800 - max_size: 1333 - interp: 1 - use_cv2: true - - !Permute - to_bgr: false - channel_first: true - - batch_transforms: - - !PadBatch - pad_to_stride: 32 - use_padded_im_info: true diff --git a/static/configs/faster_rcnn_se154_vd_fpn_s1x.yml 
b/static/configs/faster_rcnn_se154_vd_fpn_s1x.yml deleted file mode 100644 index 02d0469616037f1fc136a1bd7e77849afd599146..0000000000000000000000000000000000000000 --- a/static/configs/faster_rcnn_se154_vd_fpn_s1x.yml +++ /dev/null @@ -1,106 +0,0 @@ -architecture: FasterRCNN -max_iters: 260000 -snapshot_iter: 10000 -use_gpu: true -log_iter: 20 -save_dir: output -pretrain_weights: https://paddle-imagenet-models-name.bj.bcebos.com/SENet154_vd_pretrained.tar -weights: output/faster_rcnn_se154_vd_fpn_s1x/model_final -metric: COCO -num_classes: 81 - -FasterRCNN: - backbone: SENet - fpn: FPN - rpn_head: FPNRPNHead - roi_extractor: FPNRoIAlign - bbox_head: BBoxHead - bbox_assigner: BBoxAssigner - -SENet: - depth: 152 - feature_maps: [2, 3, 4, 5] - freeze_at: 2 - group_width: 4 - groups: 64 - norm_type: affine_channel - variant: d - -FPN: - max_level: 6 - min_level: 2 - num_chan: 256 - spatial_scale: [0.03125, 0.0625, 0.125, 0.25] - -FPNRPNHead: - anchor_generator: - anchor_sizes: [32, 64, 128, 256, 512] - aspect_ratios: [0.5, 1.0, 2.0] - stride: [16.0, 16.0] - variance: [1.0, 1.0, 1.0, 1.0] - anchor_start_size: 32 - max_level: 6 - min_level: 2 - num_chan: 256 - rpn_target_assign: - rpn_batch_size_per_im: 256 - rpn_fg_fraction: 0.5 - rpn_negative_overlap: 0.3 - rpn_positive_overlap: 0.7 - rpn_straddle_thresh: 0.0 - train_proposal: - min_size: 0.0 - nms_thresh: 0.7 - post_nms_top_n: 2000 - pre_nms_top_n: 2000 - test_proposal: - min_size: 0.0 - nms_thresh: 0.7 - post_nms_top_n: 1000 - pre_nms_top_n: 1000 - -FPNRoIAlign: - canconical_level: 4 - canonical_size: 224 - max_level: 5 - min_level: 2 - box_resolution: 7 - sampling_ratio: 2 - -BBoxAssigner: - batch_size_per_im: 512 - bbox_reg_weights: [0.1, 0.1, 0.2, 0.2] - bg_thresh_hi: 0.5 - bg_thresh_lo: 0.0 - fg_fraction: 0.25 - fg_thresh: 0.5 - -BBoxHead: - head: TwoFCHead - nms: - keep_top_k: 100 - nms_threshold: 0.5 - score_threshold: 0.05 - -TwoFCHead: - mlp_dim: 1024 - -LearningRate: - base_lr: 0.01 - schedulers: - - !PiecewiseDecay - gamma: 0.1 - milestones: [200000, 240000] - - !LinearWarmup - start_factor: 0.1 - steps: 1000 - -OptimizerBuilder: - optimizer: - momentum: 0.9 - type: Momentum - regularizer: - factor: 0.0001 - type: L2 - -_READER_: 'faster_fpn_reader.yml' diff --git a/static/configs/faster_rcnn_x101_vd_64x4d_fpn_1x.yml b/static/configs/faster_rcnn_x101_vd_64x4d_fpn_1x.yml deleted file mode 100644 index fc83b00b2e6222aa0b633f217d3e3e5aec7be08d..0000000000000000000000000000000000000000 --- a/static/configs/faster_rcnn_x101_vd_64x4d_fpn_1x.yml +++ /dev/null @@ -1,107 +0,0 @@ -architecture: FasterRCNN -max_iters: 180000 -snapshot_iter: 10000 -use_gpu: true -log_iter: 20 -save_dir: output -pretrain_weights: https://paddle-imagenet-models-name.bj.bcebos.com/ResNeXt101_vd_64x4d_pretrained.tar -weights: output/faster_rcnn_x101_vd_64x4d_fpn_1x/model_final -metric: COCO -num_classes: 81 - -FasterRCNN: - backbone: ResNeXt - fpn: FPN - rpn_head: FPNRPNHead - roi_extractor: FPNRoIAlign - bbox_head: BBoxHead - bbox_assigner: BBoxAssigner - -ResNeXt: - depth: 101 - feature_maps: [2, 3, 4, 5] - freeze_at: 2 - group_width: 4 - groups: 64 - norm_type: affine_channel - variant: d - -FPN: - max_level: 6 - min_level: 2 - num_chan: 256 - spatial_scale: [0.03125, 0.0625, 0.125, 0.25] - -FPNRPNHead: - anchor_generator: - anchor_sizes: [32, 64, 128, 256, 512] - aspect_ratios: [0.5, 1.0, 2.0] - stride: [16.0, 16.0] - variance: [1.0, 1.0, 1.0, 1.0] - anchor_start_size: 32 - max_level: 6 - min_level: 2 - num_chan: 256 - rpn_target_assign: - 
rpn_batch_size_per_im: 256 - rpn_fg_fraction: 0.5 - rpn_negative_overlap: 0.3 - rpn_positive_overlap: 0.7 - rpn_straddle_thresh: 0.0 - train_proposal: - min_size: 0.0 - nms_thresh: 0.7 - post_nms_top_n: 2000 - pre_nms_top_n: 2000 - test_proposal: - min_size: 0.0 - nms_thresh: 0.7 - post_nms_top_n: 1000 - pre_nms_top_n: 1000 - -FPNRoIAlign: - canconical_level: 4 - canonical_size: 224 - max_level: 5 - min_level: 2 - box_resolution: 7 - sampling_ratio: 2 - -BBoxAssigner: - batch_size_per_im: 512 - bbox_reg_weights: [0.1, 0.1, 0.2, 0.2] - bg_thresh_hi: 0.5 - bg_thresh_lo: 0.0 - fg_fraction: 0.25 - fg_thresh: 0.5 - -BBoxHead: - head: TwoFCHead - nms: - keep_top_k: 100 - nms_threshold: 0.5 - score_threshold: 0.05 - -TwoFCHead: - mlp_dim: 1024 - -LearningRate: - base_lr: 0.01 - schedulers: - - !PiecewiseDecay - gamma: 0.1 - milestones: [120000, 160000] - values: null - - !LinearWarmup - start_factor: 0.1 - steps: 1000 - -OptimizerBuilder: - optimizer: - momentum: 0.9 - type: Momentum - regularizer: - factor: 0.0001 - type: L2 - -_READER_: 'faster_fpn_reader.yml' diff --git a/static/configs/faster_rcnn_x101_vd_64x4d_fpn_2x.yml b/static/configs/faster_rcnn_x101_vd_64x4d_fpn_2x.yml deleted file mode 100644 index e41022149c37efe5bb0cdc608b64d795bcf4bdc3..0000000000000000000000000000000000000000 --- a/static/configs/faster_rcnn_x101_vd_64x4d_fpn_2x.yml +++ /dev/null @@ -1,106 +0,0 @@ -architecture: FasterRCNN -max_iters: 360000 -snapshot_iter: 10000 -use_gpu: true -log_iter: 20 -save_dir: output -pretrain_weights: https://paddle-imagenet-models-name.bj.bcebos.com/ResNeXt101_vd_64x4d_pretrained.tar -weights: output/faster_rcnn_x101_vd_64x4d_fpn_1x/model_final -metric: COCO -num_classes: 81 - -FasterRCNN: - backbone: ResNeXt - fpn: FPN - rpn_head: FPNRPNHead - roi_extractor: FPNRoIAlign - bbox_head: BBoxHead - bbox_assigner: BBoxAssigner - -ResNeXt: - depth: 101 - feature_maps: [2, 3, 4, 5] - freeze_at: 2 - group_width: 4 - groups: 64 - norm_type: affine_channel - variant: d - -FPN: - max_level: 6 - min_level: 2 - num_chan: 256 - spatial_scale: [0.03125, 0.0625, 0.125, 0.25] - -FPNRPNHead: - anchor_generator: - anchor_sizes: [32, 64, 128, 256, 512] - aspect_ratios: [0.5, 1.0, 2.0] - stride: [16.0, 16.0] - variance: [1.0, 1.0, 1.0, 1.0] - anchor_start_size: 32 - max_level: 6 - min_level: 2 - num_chan: 256 - rpn_target_assign: - rpn_batch_size_per_im: 256 - rpn_fg_fraction: 0.5 - rpn_negative_overlap: 0.3 - rpn_positive_overlap: 0.7 - rpn_straddle_thresh: 0.0 - train_proposal: - min_size: 0.0 - nms_thresh: 0.7 - post_nms_top_n: 2000 - pre_nms_top_n: 2000 - test_proposal: - min_size: 0.0 - nms_thresh: 0.7 - post_nms_top_n: 1000 - pre_nms_top_n: 1000 - -FPNRoIAlign: - canconical_level: 4 - canonical_size: 224 - max_level: 5 - min_level: 2 - box_resolution: 7 - sampling_ratio: 2 - -BBoxAssigner: - batch_size_per_im: 512 - bbox_reg_weights: [0.1, 0.1, 0.2, 0.2] - bg_thresh_hi: 0.5 - bg_thresh_lo: 0.0 - fg_fraction: 0.25 - fg_thresh: 0.5 - -BBoxHead: - head: TwoFCHead - nms: - keep_top_k: 100 - nms_threshold: 0.5 - score_threshold: 0.05 - -TwoFCHead: - mlp_dim: 1024 - -LearningRate: - base_lr: 0.01 - schedulers: - - !PiecewiseDecay - gamma: 0.1 - milestones: [240000, 320000] - - !LinearWarmup - start_factor: 0.1 - steps: 1000 - -OptimizerBuilder: - optimizer: - momentum: 0.9 - type: Momentum - regularizer: - factor: 0.0001 - type: L2 - -_READER_: 'faster_fpn_reader.yml' diff --git a/static/configs/faster_reader.yml b/static/configs/faster_reader.yml deleted file mode 100644 index 
3099bb656d641d99c71e12aa6f61fb625d23d6bd..0000000000000000000000000000000000000000 --- a/static/configs/faster_reader.yml +++ /dev/null @@ -1,91 +0,0 @@ -TrainReader: - inputs_def: - fields: ['image', 'im_info', 'im_id', 'gt_bbox', 'gt_class', 'is_crowd'] - dataset: - !COCODataSet - image_dir: train2017 - anno_path: annotations/instances_train2017.json - dataset_dir: dataset/coco - sample_transforms: - - !DecodeImage - to_rgb: true - - !RandomFlipImage - prob: 0.5 - - !NormalizeImage - is_channel_first: false - is_scale: true - mean: [0.485,0.456,0.406] - std: [0.229, 0.224,0.225] - - !ResizeImage - target_size: 800 - max_size: 1333 - interp: 1 - use_cv2: true - - !Permute - to_bgr: false - channel_first: true - batch_transforms: - - !PadBatch - pad_to_stride: -1 - use_padded_im_info: false - batch_size: 1 - shuffle: true - worker_num: 2 - use_process: false - -EvalReader: - inputs_def: - fields: ['image', 'im_info', 'im_id', 'im_shape'] - # for voc - #fields: ['image', 'im_info', 'im_id', 'im_shape', 'gt_bbox', 'gt_class', 'is_difficult'] - dataset: - !COCODataSet - image_dir: val2017 - anno_path: annotations/instances_val2017.json - dataset_dir: dataset/coco - sample_transforms: - - !DecodeImage - to_rgb: true - - !NormalizeImage - is_channel_first: false - is_scale: true - mean: [0.485,0.456,0.406] - std: [0.229, 0.224,0.225] - - !ResizeImage - interp: 1 - max_size: 1333 - target_size: 800 - use_cv2: true - - !Permute - channel_first: true - to_bgr: false - batch_size: 1 - shuffle: false - drop_empty: false - worker_num: 2 - -TestReader: - inputs_def: - fields: ['image', 'im_info', 'im_id', 'im_shape'] - dataset: - !ImageFolder - anno_path: annotations/instances_val2017.json - sample_transforms: - - !DecodeImage - to_rgb: true - with_mixup: false - - !NormalizeImage - is_channel_first: false - is_scale: true - mean: [0.485,0.456,0.406] - std: [0.229, 0.224,0.225] - - !ResizeImage - interp: 1 - max_size: 1333 - target_size: 800 - use_cv2: true - - !Permute - channel_first: true - to_bgr: false - batch_size: 1 - shuffle: false diff --git a/static/configs/gcnet/README.md b/static/configs/gcnet/README.md deleted file mode 100644 index 5c25d34676f9bd6bcc66a8d90a4197f69c744d9c..0000000000000000000000000000000000000000 --- a/static/configs/gcnet/README.md +++ /dev/null @@ -1,34 +0,0 @@ -# GCNet: Non-local Networks Meet Squeeze-Excitation Networks and Beyond - -## Introduction - -- GCNet: Non-local Networks Meet Squeeze-Excitation Networks and Beyond -: [https://arxiv.org/abs/1904.11492](https://arxiv.org/abs/1904.11492) - -``` -@article{DBLP:journals/corr/abs-1904-11492, - author = {Yue Cao and - Jiarui Xu and - Stephen Lin and - Fangyun Wei and - Han Hu}, - title = {GCNet: Non-local Networks Meet Squeeze-Excitation Networks and Beyond}, - journal = {CoRR}, - volume = {abs/1904.11492}, - year = {2019}, - url = {http://arxiv.org/abs/1904.11492}, - archivePrefix = {arXiv}, - eprint = {1904.11492}, - timestamp = {Tue, 09 Jul 2019 16:48:55 +0200}, - biburl = {https://dblp.org/rec/bib/journals/corr/abs-1904-11492}, - bibsource = {dblp computer science bibliography, https://dblp.org} -} -``` - - -## Model Zoo - -| Backbone | Type | Context| Image/gpu | Lr schd | Inf time (fps) | Box AP | Mask AP | Download | Configs | -| :---------------------- | :-------------: | :-------------: | :-------: | :-----: | :------------: | :----: | :-----: | :----------------------------------------------------------: | :-----: | -| ResNet50-vd-FPN | Mask | GC(c3-c5, r16, add) | 2 | 2x | 15.31 | 41.4 | 36.8 | 
[model](https://paddlemodels.bj.bcebos.com/object_detection/mask_rcnn_r50_vd_fpn_gcb_add_r16_2x.tar) | [config](https://github.com/PaddlePaddle/PaddleDetection/tree/develop/static/configs/gcnet/mask_rcnn_r50_vd_fpn_gcb_add_r16_2x.yml) | -| ResNet50-vd-FPN | Mask | GC(c3-c5, r16, mul) | 2 | 2x | 15.35 | 40.7 | 36.1 | [model](https://paddlemodels.bj.bcebos.com/object_detection/mask_rcnn_r50_vd_fpn_gcb_mul_r16_2x.tar) | [config](https://github.com/PaddlePaddle/PaddleDetection/tree/develop/static/configs/gcnet/mask_rcnn_r50_vd_fpn_gcb_mul_r16_2x.yml) | diff --git a/static/configs/gcnet/README_cn.md b/static/configs/gcnet/README_cn.md deleted file mode 100644 index 579bfe0ca9fca376e7645dd0a9b59b15d7e817a0..0000000000000000000000000000000000000000 --- a/static/configs/gcnet/README_cn.md +++ /dev/null @@ -1,69 +0,0 @@ -# GCNet: Non-local Networks Meet Squeeze-Excitation Networks and Beyond
-
-## Introduction
-
-Non-local networks use self-attention to capture long-range dependencies, but the paper's authors found through visualization that the attention maps computed for different query positions in the same image are nearly identical; in other words, a large part of the Non-local computation is redundant. SENet recalibrates channel weights with global context at very low cost, yet it cannot fully exploit that context. The paper combines the strengths of Non-local and SENet and proposes the GCNet block, which fuses global context information well while keeping the computational cost small.
-
-Based on the observation that the attention maps differ so little, the paper designs a simplified non-local (SNL) structure, shown in the figure below, which shares a single global attention map across all positions.
-*(figure: the simplified non-local (SNL) block)*
-
-The output of the SNL block is computed as
-
-$$z_i = x_i + \sum_{j=1}^{N_p} \frac{e^{W_k x_j}}{\sum_{m=1}^{N_p} e^{W_k x_m}} \, W_v x_j$$
-
-To further reduce the computation, $W_v$ is moved outside the attention pooling, giving
-
-$$z_i = x_i + W_v \sum_{j=1}^{N_p} \frac{e^{W_k x_j}}{\sum_{m=1}^{N_p} e^{W_k x_m}} \, x_j$$
-
-The corresponding structure is shown below. By sharing the attention map, the computation is reduced to 1/(WH) of the original.
-
-*(figure: the SNL block with $W_v$ moved outside the attention pooling)*
-
-The SNL block can be abstracted into three parts: context modeling, feature transform, and feature fusion. The transform part holds most of the parameters, so the paper borrows the SE structure here; the final GC block is shown below. Two 1x1 convolutions with channel reduction cut the computation, and since this two-layer bottleneck is harder to optimize, a layer normalization layer is inserted between them to ease optimization.
-
-*(figure: the final GC block)*
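To make the three stages concrete, here is a minimal NumPy sketch of a GC block written against the description above; the matrix products stand in for 1x1 convolutions, and every name and shape is illustrative rather than PaddleDetection's actual implementation. The bottleneck `ratio` corresponds to `gcb_params.ratio` in the configs below (0.0625 = 1/16, the "r16" in the table), and the `fusion` argument mirrors `fusion_types: [channel_add]` / `[channel_mul]`.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def layer_norm(x, eps=1e-5):
    # normalize over the channel axis of a (C', 1) context vector
    return (x - x.mean(axis=0, keepdims=True)) / np.sqrt(x.var(axis=0, keepdims=True) + eps)

def gc_block(x, w_k, w_v1, w_v2, fusion="channel_add"):
    """x: (C, H, W) feature map; w_k: (1, C); w_v1: (C//r, C); w_v2: (C, C//r)."""
    c, h, w = x.shape
    flat = x.reshape(c, h * w)                    # (C, N), N = H*W positions
    # 1) context modeling: one global attention map shared by all positions
    attn = softmax(w_k @ flat, axis=-1)           # (1, N)
    context = flat @ attn.T                       # (C, 1) attention-pooled context
    # 2) transform: bottleneck of two 1x1 convs with LayerNorm + ReLU between
    t = np.maximum(layer_norm(w_v1 @ context), 0.0)
    t = w_v2 @ t                                  # (C, 1)
    # 3) fusion: broadcast the context term to every position
    if fusion == "channel_add":
        out = flat + t
    else:  # "channel_mul": sigmoid gate on each channel
        out = flat * (1.0 / (1.0 + np.exp(-t)))
    return out.reshape(c, h, w)

# e.g. C = 64 with ratio 1/16 gives a bottleneck width of 4:
# out = gc_block(np.random.rand(64, 32, 32), np.random.rand(1, 64) * 0.1,
#                np.random.rand(4, 64) * 0.1, np.random.rand(64, 4) * 0.1)
```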
-The block can be inserted into a backbone network with little effort to strengthen its global context modeling, which improves performance on both detection and segmentation tasks.
-
-
-## Model Zoo
-
-| Backbone | Type | Context | Image/gpu | Lr schd | Inf time (fps) | Box AP | Mask AP | Download | Configs |
-| :---------------------- | :-------------: | :-------------: | :-------: | :-----: | :------------: | :----: | :-----: | :----------------------------------------------------------: | :-----: |
-| ResNet50-vd-FPN | Mask | GC(c3-c5, r16, add) | 2 | 2x | 15.31 | 41.4 | 36.8 | [model](https://paddlemodels.bj.bcebos.com/object_detection/mask_rcnn_r50_vd_fpn_gcb_add_r16_2x.tar) | [config](https://github.com/PaddlePaddle/PaddleDetection/tree/develop/static/configs/gcnet/mask_rcnn_r50_vd_fpn_gcb_add_r16_2x.yml) |
-| ResNet50-vd-FPN | Mask | GC(c3-c5, r16, mul) | 2 | 2x | 15.35 | 40.7 | 36.1 | [model](https://paddlemodels.bj.bcebos.com/object_detection/mask_rcnn_r50_vd_fpn_gcb_mul_r16_2x.tar) | [config](https://github.com/PaddlePaddle/PaddleDetection/tree/develop/static/configs/gcnet/mask_rcnn_r50_vd_fpn_gcb_mul_r16_2x.yml) |
-
-
-## Citation
-
 ``` -@article{DBLP:journals/corr/abs-1904-11492, - author = {Yue Cao and - Jiarui Xu and - Stephen Lin and - Fangyun Wei and - Han Hu}, - title = {GCNet: Non-local Networks Meet Squeeze-Excitation Networks and Beyond}, - journal = {CoRR}, - volume = {abs/1904.11492}, - year = {2019}, - url = {http://arxiv.org/abs/1904.11492}, - archivePrefix = {arXiv}, - eprint = {1904.11492}, - timestamp = {Tue, 09 Jul 2019 16:48:55 +0200}, - biburl = {https://dblp.org/rec/bib/journals/corr/abs-1904-11492}, - bibsource = {dblp computer science bibliography, https://dblp.org} -} -``` diff --git a/static/configs/gcnet/mask_rcnn_r50_vd_fpn_gcb_add_r16_2x.yml b/static/configs/gcnet/mask_rcnn_r50_vd_fpn_gcb_add_r16_2x.yml deleted file mode 100644 index 49dd68977407aedbce8a65271d9b269f7b97342a..0000000000000000000000000000000000000000 --- a/static/configs/gcnet/mask_rcnn_r50_vd_fpn_gcb_add_r16_2x.yml +++ /dev/null @@ -1,119 +0,0 @@ -architecture: MaskRCNN -use_gpu: true -max_iters: 180000 -snapshot_iter: 10000 -log_iter: 20 -save_dir: output -pretrain_weights: https://paddle-imagenet-models-name.bj.bcebos.com/ResNet50_vd_pretrained.tar -metric: COCO -weights: output/mask_rcnn_r50_vd_fpn_gcb_add_r16_2x/model_final -num_classes: 81 - -MaskRCNN: - backbone: ResNet - fpn: FPN - rpn_head: FPNRPNHead - roi_extractor: FPNRoIAlign - bbox_head: BBoxHead - bbox_assigner: BBoxAssigner - -ResNet: - depth: 50 - feature_maps: [2, 3, 4, 5] - freeze_at: 2 - norm_type: bn - variant: d - gcb_stages: [3, 4, 5] - gcb_params: - ratio: 0.0625 - pooling_type: att - fusion_types: [channel_add] - -FPN: - max_level: 6 - min_level: 2 - num_chan: 256 - spatial_scale: [0.03125, 0.0625, 0.125, 0.25] - -FPNRPNHead: - anchor_generator: - aspect_ratios: [0.5, 1.0, 2.0] - variance: [1.0, 1.0, 1.0, 1.0] - anchor_start_size: 32 - max_level: 6 - min_level: 2 - num_chan: 256 - rpn_target_assign: - rpn_batch_size_per_im: 256 - rpn_fg_fraction: 0.5 - rpn_negative_overlap: 0.3 - rpn_positive_overlap: 0.7 - rpn_straddle_thresh: 0.0 - train_proposal: - min_size: 0.0 - nms_thresh: 0.7 - pre_nms_top_n: 2000 - post_nms_top_n: 2000 - test_proposal: - min_size: 0.0 - nms_thresh: 0.7 - pre_nms_top_n: 1000 - post_nms_top_n: 1000 - -FPNRoIAlign: - canconical_level: 4 - canonical_size: 224 - max_level: 5 - min_level: 2 - box_resolution: 7 - sampling_ratio: 2 - mask_resolution: 14 - -MaskHead: - dilation: 1 - conv_dim: 256 - num_convs: 4 - resolution: 28 - -BBoxAssigner: - batch_size_per_im: 512 - bbox_reg_weights: [0.1, 0.1, 0.2, 0.2] - bg_thresh_hi: 0.5 - bg_thresh_lo:
0.0 - fg_fraction: 0.25 - fg_thresh: 0.5 - -MaskAssigner: - resolution: 28 - -BBoxHead: - head: TwoFCHead - nms: - keep_top_k: 100 - nms_threshold: 0.5 - score_threshold: 0.05 - -TwoFCHead: - mlp_dim: 1024 - -LearningRate: - base_lr: 0.02 - schedulers: - - !PiecewiseDecay - gamma: 0.1 - milestones: [120000, 160000] - - !LinearWarmup - start_factor: 0.1 - steps: 1000 - -OptimizerBuilder: - optimizer: - momentum: 0.9 - type: Momentum - regularizer: - factor: 0.0001 - type: L2 - -_READER_: '../mask_fpn_reader.yml' -TrainReader: - batch_size: 2 diff --git a/static/configs/gcnet/mask_rcnn_r50_vd_fpn_gcb_mul_r16_2x.yml b/static/configs/gcnet/mask_rcnn_r50_vd_fpn_gcb_mul_r16_2x.yml deleted file mode 100644 index 3b93547b491c371947ce9eec5d64448879bed01c..0000000000000000000000000000000000000000 --- a/static/configs/gcnet/mask_rcnn_r50_vd_fpn_gcb_mul_r16_2x.yml +++ /dev/null @@ -1,119 +0,0 @@ -architecture: MaskRCNN -use_gpu: true -max_iters: 180000 -snapshot_iter: 10000 -log_iter: 20 -save_dir: output -pretrain_weights: https://paddle-imagenet-models-name.bj.bcebos.com/ResNet50_vd_pretrained.tar -metric: COCO -weights: output/mask_rcnn_r50_vd_fpn_gcb_mul_r16_2x/model_final -num_classes: 81 - -MaskRCNN: - backbone: ResNet - fpn: FPN - rpn_head: FPNRPNHead - roi_extractor: FPNRoIAlign - bbox_head: BBoxHead - bbox_assigner: BBoxAssigner - -ResNet: - depth: 50 - feature_maps: [2, 3, 4, 5] - freeze_at: 2 - norm_type: bn - variant: d - gcb_stages: [3, 4, 5] - gcb_params: - ratio: 0.0625 - pooling_type: att - fusion_types: [channel_mul] - -FPN: - max_level: 6 - min_level: 2 - num_chan: 256 - spatial_scale: [0.03125, 0.0625, 0.125, 0.25] - -FPNRPNHead: - anchor_generator: - aspect_ratios: [0.5, 1.0, 2.0] - variance: [1.0, 1.0, 1.0, 1.0] - anchor_start_size: 32 - max_level: 6 - min_level: 2 - num_chan: 256 - rpn_target_assign: - rpn_batch_size_per_im: 256 - rpn_fg_fraction: 0.5 - rpn_negative_overlap: 0.3 - rpn_positive_overlap: 0.7 - rpn_straddle_thresh: 0.0 - train_proposal: - min_size: 0.0 - nms_thresh: 0.7 - pre_nms_top_n: 2000 - post_nms_top_n: 2000 - test_proposal: - min_size: 0.0 - nms_thresh: 0.7 - pre_nms_top_n: 1000 - post_nms_top_n: 1000 - -FPNRoIAlign: - canconical_level: 4 - canonical_size: 224 - max_level: 5 - min_level: 2 - box_resolution: 7 - sampling_ratio: 2 - mask_resolution: 14 - -MaskHead: - dilation: 1 - conv_dim: 256 - num_convs: 4 - resolution: 28 - -BBoxAssigner: - batch_size_per_im: 512 - bbox_reg_weights: [0.1, 0.1, 0.2, 0.2] - bg_thresh_hi: 0.5 - bg_thresh_lo: 0.0 - fg_fraction: 0.25 - fg_thresh: 0.5 - -MaskAssigner: - resolution: 28 - -BBoxHead: - head: TwoFCHead - nms: - keep_top_k: 100 - nms_threshold: 0.5 - score_threshold: 0.05 - -TwoFCHead: - mlp_dim: 1024 - -LearningRate: - base_lr: 0.02 - schedulers: - - !PiecewiseDecay - gamma: 0.1 - milestones: [120000, 160000] - - !LinearWarmup - start_factor: 0.1 - steps: 1000 - -OptimizerBuilder: - optimizer: - momentum: 0.9 - type: Momentum - regularizer: - factor: 0.0001 - type: L2 - -_READER_: '../mask_fpn_reader.yml' -TrainReader: - batch_size: 2 diff --git a/static/configs/gn/cascade_mask_rcnn_r50_fpn_gn_2x.yml b/static/configs/gn/cascade_mask_rcnn_r50_fpn_gn_2x.yml deleted file mode 100644 index 566f4e86394542874df1aac2a01d5fee95900e78..0000000000000000000000000000000000000000 --- a/static/configs/gn/cascade_mask_rcnn_r50_fpn_gn_2x.yml +++ /dev/null @@ -1,117 +0,0 @@ -architecture: CascadeMaskRCNN -max_iters: 180000 -snapshot_iter: 10000 -use_gpu: true -log_iter: 20 -save_dir: output -pretrain_weights: 
https://paddle-imagenet-models-name.bj.bcebos.com/ResNet50_cos_pretrained.tar -weights: output/cascade_mask_rcnn_r50_fpn_gn_2x/model_final -metric: COCO -num_classes: 81 - -CascadeMaskRCNN: - backbone: ResNet - fpn: FPN - rpn_head: FPNRPNHead - roi_extractor: FPNRoIAlign - bbox_head: CascadeBBoxHead - bbox_assigner: CascadeBBoxAssigner - mask_head: MaskHead - mask_assigner: MaskAssigner - -ResNet: - depth: 50 - feature_maps: [2, 3, 4, 5] - freeze_at: 2 - norm_type: affine_channel - -FPN: - max_level: 6 - min_level: 2 - num_chan: 256 - spatial_scale: [0.03125, 0.0625, 0.125, 0.25] - norm_type: gn - -FPNRPNHead: - anchor_generator: - aspect_ratios: [0.5, 1.0, 2.0] - variance: [1.0, 1.0, 1.0, 1.0] - anchor_start_size: 32 - max_level: 6 - min_level: 2 - num_chan: 256 - rpn_target_assign: - rpn_batch_size_per_im: 256 - rpn_fg_fraction: 0.5 - rpn_negative_overlap: 0.3 - rpn_positive_overlap: 0.7 - rpn_straddle_thresh: 0.0 - train_proposal: - min_size: 0.0 - nms_thresh: 0.7 - pre_nms_top_n: 2000 - post_nms_top_n: 2000 - test_proposal: - min_size: 0.0 - nms_thresh: 0.7 - pre_nms_top_n: 1000 - post_nms_top_n: 1000 - -FPNRoIAlign: - canconical_level: 4 - canonical_size: 224 - max_level: 5 - min_level: 2 - sampling_ratio: 2 - box_resolution: 7 - mask_resolution: 14 - -MaskHead: - dilation: 1 - conv_dim: 256 - num_convs: 4 - resolution: 28 - norm_type: gn - -CascadeBBoxAssigner: - batch_size_per_im: 512 - bbox_reg_weights: [10, 20, 30] - bg_thresh_hi: [0.5, 0.6, 0.7] - bg_thresh_lo: [0.0, 0.0, 0.0] - fg_fraction: 0.25 - fg_thresh: [0.5, 0.6, 0.7] - -MaskAssigner: - resolution: 28 - -CascadeBBoxHead: - head: CascadeXConvNormHead - nms: - keep_top_k: 100 - nms_threshold: 0.5 - score_threshold: 0.05 - -CascadeXConvNormHead: - norm_type: gn - -LearningRate: - base_lr: 0.02 - schedulers: - - !PiecewiseDecay - gamma: 0.1 - milestones: [120000, 160000] - - !LinearWarmup - start_factor: 0.3333333333333333 - steps: 500 - -OptimizerBuilder: - optimizer: - momentum: 0.9 - type: Momentum - regularizer: - factor: 0.0001 - type: L2 - -_READER_: '../mask_fpn_reader.yml' -TrainReader: - batch_size: 2 diff --git a/static/configs/gn/faster_rcnn_r50_fpn_gn_2x.yml b/static/configs/gn/faster_rcnn_r50_fpn_gn_2x.yml deleted file mode 100644 index 2f95b4b784cf4323c8b1f7bab8c57f3fa962044f..0000000000000000000000000000000000000000 --- a/static/configs/gn/faster_rcnn_r50_fpn_gn_2x.yml +++ /dev/null @@ -1,106 +0,0 @@ -architecture: FasterRCNN -max_iters: 180000 -snapshot_iter: 10000 -use_gpu: true -log_iter: 20 -save_dir: output -pretrain_weights: https://paddle-imagenet-models-name.bj.bcebos.com/ResNet50_cos_pretrained.tar -metric: COCO -weights: output/faster_rcnn_r50_fpn_gn/model_final -num_classes: 81 - -FasterRCNN: - backbone: ResNet - fpn: FPN - rpn_head: FPNRPNHead - roi_extractor: FPNRoIAlign - bbox_head: BBoxHead - bbox_assigner: BBoxAssigner - -ResNet: - depth: 50 - feature_maps: [2, 3, 4, 5] - freeze_at: 2 - norm_type: affine_channel - -FPN: - min_level: 2 - max_level: 6 - num_chan: 256 - spatial_scale: [0.03125, 0.0625, 0.125, 0.25] - norm_type: gn - -FPNRPNHead: - anchor_generator: - anchor_sizes: [32, 64, 128, 256, 512] - aspect_ratios: [0.5, 1.0, 2.0] - stride: [16.0, 16.0] - variance: [1.0, 1.0, 1.0, 1.0] - anchor_start_size: 32 - min_level: 2 - max_level: 6 - num_chan: 256 - rpn_target_assign: - rpn_batch_size_per_im: 256 - rpn_fg_fraction: 0.5 - rpn_positive_overlap: 0.7 - rpn_negative_overlap: 0.3 - rpn_straddle_thresh: 0.0 - train_proposal: - min_size: 0.0 - nms_thresh: 0.7 - pre_nms_top_n: 2000 - 
post_nms_top_n: 2000 - test_proposal: - min_size: 0.0 - nms_thresh: 0.7 - pre_nms_top_n: 1000 - post_nms_top_n: 1000 - -FPNRoIAlign: - canconical_level: 4 - canonical_size: 224 - min_level: 2 - max_level: 5 - box_resolution: 7 - sampling_ratio: 2 - -BBoxAssigner: - batch_size_per_im: 512 - bbox_reg_weights: [0.1, 0.1, 0.2, 0.2] - bg_thresh_lo: 0.0 - bg_thresh_hi: 0.5 - fg_fraction: 0.25 - fg_thresh: 0.5 - -BBoxHead: - head: XConvNormHead - nms: - keep_top_k: 100 - nms_threshold: 0.5 - score_threshold: 0.05 - -XConvNormHead: - norm_type: gn - -LearningRate: - base_lr: 0.02 - schedulers: - - !PiecewiseDecay - gamma: 0.1 - milestones: [120000, 160000] - - !LinearWarmup - start_factor: 0.1 - steps: 1000 - -OptimizerBuilder: - optimizer: - momentum: 0.9 - type: Momentum - regularizer: - factor: 0.0001 - type: L2 - -_READER_: '../faster_fpn_reader.yml' -TrainReader: - batch_size: 2 diff --git a/static/configs/gn/mask_rcnn_r50_fpn_gn_2x.yml b/static/configs/gn/mask_rcnn_r50_fpn_gn_2x.yml deleted file mode 100644 index 644cf28f8718d8033ae9352497fbcb18830c90c9..0000000000000000000000000000000000000000 --- a/static/configs/gn/mask_rcnn_r50_fpn_gn_2x.yml +++ /dev/null @@ -1,113 +0,0 @@ -architecture: MaskRCNN -max_iters: 360000 -snapshot_iter: 10000 -use_gpu: true -log_iter: 20 -save_dir: output -pretrain_weights: https://paddle-imagenet-models-name.bj.bcebos.com/ResNet50_cos_pretrained.tar -weights: output/mask_rcnn_r50_fpn_gn_2x/model_final -metric: COCO -num_classes: 81 - -MaskRCNN: - backbone: ResNet - fpn: FPN - rpn_head: FPNRPNHead - roi_extractor: FPNRoIAlign - bbox_head: BBoxHead - bbox_assigner: BBoxAssigner - -ResNet: - depth: 50 - feature_maps: [2, 3, 4, 5] - freeze_at: 2 - norm_type: affine_channel - -FPN: - max_level: 6 - min_level: 2 - num_chan: 256 - spatial_scale: [0.03125, 0.0625, 0.125, 0.25] - norm_type: gn - -FPNRPNHead: - anchor_generator: - aspect_ratios: [0.5, 1.0, 2.0] - variance: [1.0, 1.0, 1.0, 1.0] - anchor_start_size: 32 - max_level: 6 - min_level: 2 - num_chan: 256 - rpn_target_assign: - rpn_batch_size_per_im: 256 - rpn_fg_fraction: 0.5 - rpn_negative_overlap: 0.3 - rpn_positive_overlap: 0.7 - rpn_straddle_thresh: 0.0 - train_proposal: - min_size: 0.0 - nms_thresh: 0.7 - pre_nms_top_n: 2000 - post_nms_top_n: 2000 - test_proposal: - min_size: 0.0 - nms_thresh: 0.7 - pre_nms_top_n: 1000 - post_nms_top_n: 1000 - -FPNRoIAlign: - canconical_level: 4 - canonical_size: 224 - max_level: 5 - min_level: 2 - sampling_ratio: 2 - box_resolution: 7 - mask_resolution: 14 - -MaskHead: - dilation: 1 - conv_dim: 256 - num_convs: 4 - resolution: 28 - norm_type: gn - -BBoxAssigner: - batch_size_per_im: 512 - bbox_reg_weights: [0.1, 0.1, 0.2, 0.2] - bg_thresh_hi: 0.5 - bg_thresh_lo: 0.0 - fg_fraction: 0.25 - fg_thresh: 0.5 - -MaskAssigner: - resolution: 28 - -BBoxHead: - head: XConvNormHead - nms: - keep_top_k: 100 - nms_threshold: 0.5 - score_threshold: 0.05 - -XConvNormHead: - norm_type: gn - -LearningRate: - base_lr: 0.01 - schedulers: - - !PiecewiseDecay - gamma: 0.1 - milestones: [240000, 320000] - - !LinearWarmup - start_factor: 0.3333333333333333 - steps: 500 - -OptimizerBuilder: - optimizer: - momentum: 0.9 - type: Momentum - regularizer: - factor: 0.0001 - type: L2 - -_READER_: '../mask_fpn_reader.yml' diff --git a/static/configs/gridmask/README.md b/static/configs/gridmask/README.md deleted file mode 100644 index 3e39b8a7120b6522fe151ee3b378d584d5556d7d..0000000000000000000000000000000000000000 --- a/static/configs/gridmask/README.md +++ /dev/null @@ -1,22 +0,0 @@ -# GridMask Data 
Augmentation - -## Introduction - -- GridMask Data Augmentation -: [https://arxiv.org/abs/2001.04086](https://arxiv.org/abs/2001.04086) - -``` -@article{chen2020gridmask, - title={GridMask data augmentation}, - author={Chen, Pengguang}, - journal={arXiv preprint arXiv:2001.04086}, - year={2020} -} -``` - - -## Model Zoo - -| Backbone | Type | Image/gpu | Lr schd | Inf time (fps) | Box AP | Mask AP | Download | Configs | -| :---------------------- | :-------------: | :-------: | :-----: | :------------: | :----: | :-----: | :----------------------------------------------------------: | :-----: | -| ResNet50-vd-FPN | Faster | 2 | 4x | 21.847 | 39.1% | - | [model](https://paddlemodels.bj.bcebos.com/object_detection/faster_rcnn_r50_vd_fpn_gridmask_4x.tar) | [config](https://github.com/PaddlePaddle/PaddleDetection/tree/develop/static/configs/gridmask/faster_rcnn_r50_vd_fpn_gridmask_4x.yml) | diff --git a/static/configs/gridmask/faster_rcnn_r50_vd_fpn_gridmask_4x.yml b/static/configs/gridmask/faster_rcnn_r50_vd_fpn_gridmask_4x.yml deleted file mode 100755 index 43bb740094a22440dc4b7b11c5385fabbe6cf208..0000000000000000000000000000000000000000 --- a/static/configs/gridmask/faster_rcnn_r50_vd_fpn_gridmask_4x.yml +++ /dev/null @@ -1,135 +0,0 @@ -architecture: FasterRCNN -max_iters: 360000 -snapshot_iter: 40000 -use_gpu: true -log_iter: 20 -save_dir: output -pretrain_weights: https://paddle-imagenet-models-name.bj.bcebos.com/ResNet50_vd_pretrained.tar -weights: output/faster_rcnn_r50_vd_fpn_gridmask_4x/model_final -metric: COCO -num_classes: 81 - -FasterRCNN: - backbone: ResNet - fpn: FPN - rpn_head: FPNRPNHead - roi_extractor: FPNRoIAlign - bbox_head: BBoxHead - bbox_assigner: BBoxAssigner - -ResNet: - depth: 50 - feature_maps: [2, 3, 4, 5] - freeze_at: 2 - norm_type: bn - variant: d - -FPN: - max_level: 6 - min_level: 2 - num_chan: 256 - spatial_scale: [0.03125, 0.0625, 0.125, 0.25] - -FPNRPNHead: - anchor_generator: - anchor_sizes: [32, 64, 128, 256, 512] - aspect_ratios: [0.5, 1.0, 2.0] - stride: [16.0, 16.0] - variance: [1.0, 1.0, 1.0, 1.0] - anchor_start_size: 32 - max_level: 6 - min_level: 2 - num_chan: 256 - rpn_target_assign: - rpn_batch_size_per_im: 256 - rpn_fg_fraction: 0.5 - rpn_negative_overlap: 0.3 - rpn_positive_overlap: 0.7 - rpn_straddle_thresh: 0.0 - train_proposal: - min_size: 0.0 - nms_thresh: 0.7 - post_nms_top_n: 2000 - pre_nms_top_n: 2000 - test_proposal: - min_size: 0.0 - nms_thresh: 0.7 - post_nms_top_n: 1000 - pre_nms_top_n: 1000 - -FPNRoIAlign: - canconical_level: 4 - canonical_size: 224 - max_level: 5 - min_level: 2 - box_resolution: 7 - sampling_ratio: 2 - -BBoxAssigner: - batch_size_per_im: 512 - bbox_reg_weights: [0.1, 0.1, 0.2, 0.2] - bg_thresh_hi: 0.5 - bg_thresh_lo: 0.0 - fg_fraction: 0.25 - fg_thresh: 0.5 - -BBoxHead: - head: TwoFCHead - nms: - keep_top_k: 100 - nms_threshold: 0.5 - score_threshold: 0.05 - -TwoFCHead: - mlp_dim: 1024 - -LearningRate: - base_lr: 0.02 - schedulers: - - !PiecewiseDecay - gamma: 0.1 - milestones: [240000, 320000] - - !LinearWarmup - start_factor: 0.1 - steps: 1000 - -OptimizerBuilder: - optimizer: - momentum: 0.9 - type: Momentum - regularizer: - factor: 0.0001 - type: L2 - -_READER_: '../faster_fpn_reader.yml' -TrainReader: - sample_transforms: - - !DecodeImage - to_rgb: true - - !RandomFlipImage - prob: 0.5 - - !NormalizeImage - is_channel_first: false - is_scale: true - mean: [0.485,0.456,0.406] - std: [0.229, 0.224,0.225] - - !GridMaskOp - use_h: true - use_w: true - rotate: 1 - offset: false - ratio: 0.5 - mode: 1 - prob: 0.7 - 
upper_iter: 360000 - - !ResizeImage - target_size: 800 - max_size: 1333 - interp: 1 - use_cv2: true - - !Permute - to_bgr: false - channel_first: true - batch_size: 2 - worker_num: 2 - use_process: true diff --git a/static/configs/hrnet/README.md b/static/configs/hrnet/README.md deleted file mode 100644 index c18cb6d7d27ba1ac30d9fcb48803233dc6aa16e1..0000000000000000000000000000000000000000 --- a/static/configs/hrnet/README.md +++ /dev/null @@ -1,34 +0,0 @@ -# High-resolution networks (HRNets) for object detection - -## Introduction - -- Deep High-Resolution Representation Learning for Human Pose Estimation: [https://arxiv.org/abs/1902.09212](https://arxiv.org/abs/1902.09212) - -``` -@inproceedings{SunXLW19, - title={Deep High-Resolution Representation Learning for Human Pose Estimation}, - author={Ke Sun and Bin Xiao and Dong Liu and Jingdong Wang}, - booktitle={CVPR}, - year={2019} -} -``` - -- High-Resolution Representations for Labeling Pixels and Regions: [https://arxiv.org/abs/1904.04514](https://arxiv.org/abs/1904.04514) - -``` -@article{SunZJCXLMWLW19, - title={High-Resolution Representations for Labeling Pixels and Regions}, - author={Ke Sun and Yang Zhao and Borui Jiang and Tianheng Cheng and Bin Xiao - and Dong Liu and Yadong Mu and Xinggang Wang and Wenyu Liu and Jingdong Wang}, - journal = {CoRR}, - volume = {abs/1904.04514}, - year={2019} -} -``` - -## Model Zoo - -| Backbone | Type | deformable Conv | Image/gpu | Lr schd | Inf time (fps) | Box AP | Mask AP | Download | Configs | -| :---------------------- | :------------- | :---: | :-------: | :-----: | :------------: | :----: | :-----: | :----------------------------------------------------------: | :-----: | -| HRNetV2p_W18 | Faster | False | 2 | 1x | 17.509 | 36.0 | - | [model](https://paddlemodels.bj.bcebos.com/object_detection/faster_rcnn_hrnetv2p_w18_1x.tar) | [config](https://github.com/PaddlePaddle/PaddleDetection/tree/develop/static/configs/hrnet/faster_rcnn_hrnetv2p_w18_1x.yml) | -| HRNetV2p_W18 | Faster | False | 2 | 2x | 17.509 | 38.0 | - | [model](https://paddlemodels.bj.bcebos.com/object_detection/faster_rcnn_hrnetv2p_w18_2x.tar) | [config](https://github.com/PaddlePaddle/PaddleDetection/tree/develop/static/configs/hrnet/faster_rcnn_hrnetv2p_w18_2x.yml) | diff --git a/static/configs/hrnet/faster_rcnn_hrnetv2p_w18_1x.yml b/static/configs/hrnet/faster_rcnn_hrnetv2p_w18_1x.yml deleted file mode 100644 index 3108e9c60e18f0470d12fceb54c7223cb79c9c48..0000000000000000000000000000000000000000 --- a/static/configs/hrnet/faster_rcnn_hrnetv2p_w18_1x.yml +++ /dev/null @@ -1,103 +0,0 @@ -architecture: FasterRCNN -max_iters: 90000 -snapshot_iter: 10000 -use_gpu: true -log_iter: 20 -save_dir: output -pretrain_weights: https://paddle-imagenet-models-name.bj.bcebos.com/HRNet_W18_C_pretrained.tar -weights: output/faster_rcnn_hrnetv2p_w18_1x/model_final -metric: COCO -num_classes: 81 - -FasterRCNN: - backbone: HRNet - fpn: HRFPN - rpn_head: FPNRPNHead - roi_extractor: FPNRoIAlign - bbox_head: BBoxHead - bbox_assigner: BBoxAssigner - -HRNet: - feature_maps: [2, 3, 4, 5] - width: 18 - freeze_at: 0 - norm_type: bn - -HRFPN: - num_chan: 256 - share_conv: false - -FPNRPNHead: - anchor_generator: - anchor_sizes: [32, 64, 128, 256, 512] - aspect_ratios: [0.5, 1.0, 2.0] - stride: [16.0, 16.0] - variance: [1.0, 1.0, 1.0, 1.0] - anchor_start_size: 32 - max_level: 6 - min_level: 2 - num_chan: 256 - rpn_target_assign: - rpn_batch_size_per_im: 256 - rpn_fg_fraction: 0.5 - rpn_negative_overlap: 0.3 - rpn_positive_overlap: 0.7 - 
rpn_straddle_thresh: 0.0 - train_proposal: - min_size: 0.0 - nms_thresh: 0.7 - post_nms_top_n: 2000 - pre_nms_top_n: 2000 - test_proposal: - min_size: 0.0 - nms_thresh: 0.7 - post_nms_top_n: 1000 - pre_nms_top_n: 1000 - -FPNRoIAlign: - canconical_level: 4 - canonical_size: 224 - max_level: 5 - min_level: 2 - box_resolution: 7 - sampling_ratio: 2 - -BBoxAssigner: - batch_size_per_im: 512 - bbox_reg_weights: [0.1, 0.1, 0.2, 0.2] - bg_thresh_hi: 0.5 - bg_thresh_lo: 0.0 - fg_fraction: 0.25 - fg_thresh: 0.5 - -BBoxHead: - head: TwoFCHead - nms: - keep_top_k: 100 - nms_threshold: 0.5 - score_threshold: 0.05 - -TwoFCHead: - mlp_dim: 1024 - -LearningRate: - base_lr: 0.02 - schedulers: - - !PiecewiseDecay - gamma: 0.1 - milestones: [60000, 80000] - - !LinearWarmup - start_factor: 0.1 - steps: 1000 - -OptimizerBuilder: - optimizer: - momentum: 0.9 - type: Momentum - regularizer: - factor: 0.0001 - type: L2 - -_READER_: '../faster_fpn_reader.yml' -TrainReader: - batch_size: 2 diff --git a/static/configs/hrnet/faster_rcnn_hrnetv2p_w18_2x.yml b/static/configs/hrnet/faster_rcnn_hrnetv2p_w18_2x.yml deleted file mode 100644 index ecc307e075e662ba21402bde6b3dd7e472135567..0000000000000000000000000000000000000000 --- a/static/configs/hrnet/faster_rcnn_hrnetv2p_w18_2x.yml +++ /dev/null @@ -1,103 +0,0 @@ -architecture: FasterRCNN -max_iters: 180000 -snapshot_iter: 10000 -use_gpu: true -log_iter: 20 -save_dir: output -pretrain_weights: https://paddle-imagenet-models-name.bj.bcebos.com/HRNet_W18_C_pretrained.tar -weights: output/faster_rcnn_hrnetv2p_w18_2x/model_final -metric: COCO -num_classes: 81 - -FasterRCNN: - backbone: HRNet - fpn: HRFPN - rpn_head: FPNRPNHead - roi_extractor: FPNRoIAlign - bbox_head: BBoxHead - bbox_assigner: BBoxAssigner - -HRNet: - feature_maps: [2, 3, 4, 5] - width: 18 - freeze_at: 0 - norm_type: bn - -HRFPN: - num_chan: 256 - share_conv: false - -FPNRPNHead: - anchor_generator: - anchor_sizes: [32, 64, 128, 256, 512] - aspect_ratios: [0.5, 1.0, 2.0] - stride: [16.0, 16.0] - variance: [1.0, 1.0, 1.0, 1.0] - anchor_start_size: 32 - max_level: 6 - min_level: 2 - num_chan: 256 - rpn_target_assign: - rpn_batch_size_per_im: 256 - rpn_fg_fraction: 0.5 - rpn_negative_overlap: 0.3 - rpn_positive_overlap: 0.7 - rpn_straddle_thresh: 0.0 - train_proposal: - min_size: 0.0 - nms_thresh: 0.7 - post_nms_top_n: 2000 - pre_nms_top_n: 2000 - test_proposal: - min_size: 0.0 - nms_thresh: 0.7 - post_nms_top_n: 1000 - pre_nms_top_n: 1000 - -FPNRoIAlign: - canconical_level: 4 - canonical_size: 224 - max_level: 5 - min_level: 2 - box_resolution: 7 - sampling_ratio: 2 - -BBoxAssigner: - batch_size_per_im: 512 - bbox_reg_weights: [0.1, 0.1, 0.2, 0.2] - bg_thresh_hi: 0.5 - bg_thresh_lo: 0.0 - fg_fraction: 0.25 - fg_thresh: 0.5 - -BBoxHead: - head: TwoFCHead - nms: - keep_top_k: 100 - nms_threshold: 0.5 - score_threshold: 0.05 - -TwoFCHead: - mlp_dim: 1024 - -LearningRate: - base_lr: 0.02 - schedulers: - - !PiecewiseDecay - gamma: 0.1 - milestones: [120000, 160000] - - !LinearWarmup - start_factor: 0.1 - steps: 1000 - -OptimizerBuilder: - optimizer: - momentum: 0.9 - type: Momentum - regularizer: - factor: 0.0001 - type: L2 - -_READER_: '../faster_fpn_reader.yml' -TrainReader: - batch_size: 2 diff --git a/static/configs/htc/README.md b/static/configs/htc/README.md deleted file mode 100644 index 519c0cde318d128400a42e2a4bfba6891ac40f85..0000000000000000000000000000000000000000 --- a/static/configs/htc/README.md +++ /dev/null @@ -1,26 +0,0 @@ -# Hybrid Task Cascade for Instance Segmentation - -## Introduction - 
-We provide config files to reproduce the results in the CVPR 2019 paper for [Hybrid Task Cascade](https://arxiv.org/abs/1901.07518). - -``` -@inproceedings{chen2019hybrid, - title={Hybrid task cascade for instance segmentation}, - author={Chen, Kai and Pang, Jiangmiao and Wang, Jiaqi and Xiong, Yu and Li, Xiaoxiao and Sun, Shuyang and Feng, Wansen and Liu, Ziwei and Shi, Jianping and Ouyang, Wanli and Chen Change Loy and Dahua Lin}, - booktitle={IEEE Conference on Computer Vision and Pattern Recognition}, - year={2019} -} -``` - -## Dataset - -HTC requires COCO and COCO-stuff dataset for training. - -## Results and Models - -The results on COCO 2017val are shown in the below table. (results on test-dev are usually slightly higher than val) - - | Backbone | Lr schd | Inf time (fps) | box AP | mask AP | Download | - |:---------:|:-------:|:--------------:|:------:|:-------:|:--------:| - | R-50-FPN | 1x | 11 | 42.9 | 37.0 | [model](https://paddlemodels.bj.bcebos.com/object_detection/htc_r50_fpn_1x.pdparams ) | diff --git a/static/configs/htc/htc_r50_fpn_1x.yml b/static/configs/htc/htc_r50_fpn_1x.yml deleted file mode 100644 index 348343ccf4dbc511cfe2a98e80d97dad33572743..0000000000000000000000000000000000000000 --- a/static/configs/htc/htc_r50_fpn_1x.yml +++ /dev/null @@ -1,225 +0,0 @@ -architecture: HybridTaskCascade -use_gpu: true -max_iters: 180000 -snapshot_iter: 10000 -log_iter: 50 -save_dir: output -pretrain_weights: https://paddle-imagenet-models-name.bj.bcebos.com/ResNet50_cos_pretrained.tar -metric: COCO -weights: output/htc_r50_fpn_1x/model_final -num_classes: 81 - -HybridTaskCascade: - backbone: ResNet - fpn: FPN - rpn_head: FPNRPNHead - roi_extractor: FPNRoIAlign - bbox_head: HTCBBoxHead - bbox_assigner: CascadeBBoxAssigner - mask_assigner: MaskAssigner - mask_head: HTCMaskHead - fused_semantic_head: FusedSemanticHead - -ResNet: - depth: 50 - feature_maps: [2, 3, 4, 5] - freeze_at: 2 - norm_type: affine_channel - -FPN: - max_level: 6 - min_level: 2 - num_chan: 256 - spatial_scale: [0.03125, 0.0625, 0.125, 0.25] - -FPNRPNHead: - anchor_generator: - aspect_ratios: [0.5, 1.0, 2.0] - variance: [1.0, 1.0, 1.0, 1.0] - anchor_start_size: 32 - max_level: 6 - min_level: 2 - num_chan: 256 - rpn_target_assign: - rpn_batch_size_per_im: 256 - rpn_fg_fraction: 0.5 - rpn_negative_overlap: 0.3 - rpn_positive_overlap: 0.7 - rpn_straddle_thresh: 0.0 - train_proposal: - min_size: 0.0 - nms_thresh: 0.7 - pre_nms_top_n: 2000 - post_nms_top_n: 2000 - test_proposal: - min_size: 0.0 - nms_thresh: 0.7 - pre_nms_top_n: 2000 - post_nms_top_n: 1000 - -# bbox roi extractor -FPNRoIAlign: - canconical_level: 4 - canonical_size: 224 - max_level: 5 - min_level: 2 - sampling_ratio: 2 - box_resolution: 7 - mask_resolution: 14 - -# semantic roi extractor -RoIAlign: - resolution: 14 - sampling_ratio: 2 - -HTCMaskHead: - dilation: 1 - conv_dim: 256 - num_convs: 4 - resolution: 28 - lr_ratio: 2.0 - -FusedSemanticHead: - semantic_num_class: 183 - -CascadeBBoxAssigner: - batch_size_per_im: 512 - bbox_reg_weights: [10, 20, 30] - bg_thresh_hi: [0.5, 0.6, 0.7] - bg_thresh_lo: [0.0, 0.0, 0.0] - fg_fraction: 0.25 - fg_thresh: [0.5, 0.6, 0.7] - -MaskAssigner: - resolution: 28 - -HTCBBoxHead: - head: CascadeTwoFCHead - nms: MultiClassSoftNMS - -MultiClassSoftNMS: - score_threshold: 0.01 - keep_top_k: 300 - softnms_sigma: 0.5 - -CascadeTwoFCHead: - mlp_dim: 1024 - -LearningRate: - base_lr: 0.01 - schedulers: - - !PiecewiseDecay - gamma: 0.1 - milestones: [120000, 160000] - - !LinearWarmup - start_factor: 0.1 - steps: 1000 - 
-OptimizerBuilder: - clip_grad_by_norm: 35.0 - optimizer: - momentum: 0.9 - type: Momentum - regularizer: - factor: 0.0001 - type: L2 - -TrainReader: - batch_size: 1 - worker_num: 2 - shuffle: true - dataset: - !COCODataSet - dataset_dir: dataset/coco - anno_path: annotations/instances_train2017.json - image_dir: train2017 - load_semantic: True - inputs_def: - fields: ['image', 'im_info', 'im_id', 'gt_bbox', 'gt_class', 'is_crowd', 'gt_mask', 'semantic'] - sample_transforms: - - !DecodeImage - to_rgb: true - - !RandomFlipImage - prob: 0.5 - is_mask_flip: true - - !NormalizeImage - is_channel_first: false - is_scale: true - mean: [0.485,0.456,0.406] - std: [0.229, 0.224,0.225] - - !ResizeImage - target_size: 800 - max_size: 1333 - interp: 1 - use_cv2: true - - !Permute - to_bgr: false - channel_first: true - batch_transforms: - - !PadBatch - pad_to_stride: 32 - use_padded_im_info: false - -EvalReader: - inputs_def: - fields: ['image', 'im_info', 'im_id', 'im_shape'] - dataset: - !COCODataSet - image_dir: val2017 - anno_path: annotations/instances_val2017.json - dataset_dir: dataset/coco - sample_transforms: - - !DecodeImage - to_rgb: true - - !NormalizeImage - is_channel_first: false - is_scale: true - mean: [0.485,0.456,0.406] - std: [0.229, 0.224,0.225] - - !ResizeImage - interp: 1 - max_size: 1333 - target_size: 800 - use_cv2: true - - !Permute - channel_first: true - to_bgr: false - batch_transforms: - - !PadBatch - pad_to_stride: 32 - use_padded_im_info: false - batch_size: 1 - shuffle: false - drop_last: false - drop_empty: false - worker_num: 2 - -TestReader: - inputs_def: - fields: ['image', 'im_info', 'im_id', 'im_shape'] - dataset: - !ImageFolder - anno_path: annotations/instances_val2017.json - sample_transforms: - - !DecodeImage - to_rgb: true - with_mixup: false - - !NormalizeImage - is_channel_first: false - is_scale: true - mean: [0.485,0.456,0.406] - std: [0.229, 0.224,0.225] - - !ResizeImage - interp: 1 - max_size: 1333 - target_size: 800 - use_cv2: true - - !Permute - channel_first: true - to_bgr: false - batch_transforms: - - !PadBatch - pad_to_stride: 32 - use_padded_im_info: false - batch_size: 1 - shuffle: false diff --git a/static/configs/iou_loss/README.md b/static/configs/iou_loss/README.md deleted file mode 100644 index 2082ece3892341f05954a83d51e6859ebeb6e4e5..0000000000000000000000000000000000000000 --- a/static/configs/iou_loss/README.md +++ /dev/null @@ -1,48 +0,0 @@ -# Improvements of IOU loss - -## Introduction - -- Generalized Intersection over Union: A Metric and A Loss for Bounding Box Regression: [https://arxiv.org/abs/1902.09630](https://arxiv.org/abs/1902.09630) - -``` -@article{DBLP:journals/corr/abs-1902-09630, - author = {Seyed Hamid Rezatofighi and - Nathan Tsoi and - JunYoung Gwak and - Amir Sadeghian and - Ian D. 
Reid and - Silvio Savarese}, - title = {Generalized Intersection over Union: {A} Metric and {A} Loss for Bounding - Box Regression}, - journal = {CoRR}, - volume = {abs/1902.09630}, - year = {2019}, - url = {http://arxiv.org/abs/1902.09630}, - archivePrefix = {arXiv}, - eprint = {1902.09630}, - timestamp = {Tue, 21 May 2019 18:03:36 +0200}, - biburl = {https://dblp.org/rec/bib/journals/corr/abs-1902-09630}, - bibsource = {dblp computer science bibliography, https://dblp.org} -} -``` - -- Distance-IoU Loss: Faster and Better Learning for Bounding Box Regression: [https://arxiv.org/abs/1911.08287](https://arxiv.org/abs/1911.08287) - -``` -@article{Zheng2019DistanceIoULF, - title={Distance-IoU Loss: Faster and Better Learning for Bounding Box Regression}, - author={Zhaohui Zheng and Ping Wang and Wei Liu and Jinze Li and Rongguang Ye and Dongwei Ren}, - journal={ArXiv}, - year={2019}, - volume={abs/1911.08287} -} -``` - -## Model Zoo - - -| Backbone | Type | Loss Type | Loss Weight | Image/gpu | Lr schd | Inf time (fps) | Box AP | Mask AP | Download | Configs | -| :---------------------- | :------------- | :---: | :---: | :-------: | :-----: | :------------: | :----: | :-----: | :----------------------------------------------------------: | :---: | -| ResNet50-vd-FPN | Faster | GIOU | 10 | 2 | 1x | 22.94 | 39.4 | - | [model](https://paddlemodels.bj.bcebos.com/object_detection/faster_rcnn_r50_vd_fpn_giou_loss_1x.tar) | [config](https://github.com/PaddlePaddle/PaddleDetection/tree/develop/static/configs/iou_loss/faster_rcnn_r50_vd_fpn_giou_loss_1x.yml) | -| ResNet50-vd-FPN | Faster | DIOU | 12 | 2 | 1x | 22.94 | 39.2 | - | [model](https://paddlemodels.bj.bcebos.com/object_detection/faster_rcnn_r50_vd_fpn_diou_loss_1x.tar) | [config](https://github.com/PaddlePaddle/PaddleDetection/tree/develop/static/configs/iou_loss/faster_rcnn_r50_vd_fpn_diou_loss_1x.yml) | -| ResNet50-vd-FPN | Faster | CIOU | 12 | 2 | 1x | 22.95 | 39.6 | - | [model](https://paddlemodels.bj.bcebos.com/object_detection/faster_rcnn_r50_vd_fpn_ciou_loss_1x.tar) | [config](https://github.com/PaddlePaddle/PaddleDetection/tree/develop/static/configs/iou_loss/faster_rcnn_r50_vd_fpn_ciou_loss_1x.yml) | diff --git a/static/configs/iou_loss/README_cn.md b/static/configs/iou_loss/README_cn.md deleted file mode 100644 index 3288e587dda9e787c7de265c5d94c2d39f764264..0000000000000000000000000000000000000000 --- a/static/configs/iou_loss/README_cn.md +++ /dev/null @@ -1,126 +0,0 @@ -# Improvements of IOU loss
-
-## Introduction
-
-### GIOU loss
-
-IoU is a very common metric in the detection literature, and it is insensitive to object scale. Earlier detectors usually computed the box loss with smooth L1 loss, but the resulting loss neither corresponds directly to the final IoU metric nor is robust to box scale, so researchers proposed using an IoU loss for regression. However, when the IoU is 0 the loss is also 0, giving no gradient, and IoU loss likewise ignores how misaligned two non-overlapping boxes are. The paper improves on this; GIoU is computed as follows.
-$$GIoU = IoU - \frac{|C \setminus (A \cup B)|}{|C|}$$
-
-The final GIoU loss is $1 - GIoU$. Concretely, IoU directly reflects the overlap between the predicted box and the ground truth, and $C$ is the smallest closed convex object that encloses both $A$ and $B$, so even when the IoU of $A$ and $B$ is 0, GIoU still changes with their relative distance and the model parameters can keep being optimized. With the widths and heights of $A$ and $B$ held fixed, the farther apart they are, the smaller the GIoU and the larger the GIoU loss.
-
-The flow of computing the box loss with GIoU loss is shown below.
-
-*(figure: flow chart of box-loss computation with GIoU loss)*
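As a reference for the formula above, here is a small, self-contained Python sketch of the GIoU loss for axis-aligned boxes in `[x1, y1, x2, y2]` form; it illustrates the math only and is not the repository's `GiouLoss` op, and the function name is made up for this example.

```python
def giou_loss(a, b):
    """a, b: boxes as [x1, y1, x2, y2]; returns 1 - GIoU."""
    # intersection area (zero if the boxes do not overlap)
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    # union area
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    iou = inter / union
    # smallest enclosing box C
    cx1, cy1 = min(a[0], b[0]), min(a[1], b[1])
    cx2, cy2 = max(a[2], b[2]), max(a[3], b[3])
    area_c = (cx2 - cx1) * (cy2 - cy1)
    giou = iou - (area_c - union) / area_c
    return 1.0 - giou
```

For two disjoint boxes such as `giou_loss([0, 0, 2, 2], [3, 3, 5, 5])`, the IoU term is 0 but the loss comes out around 1.68, so the optimizer still receives a signal that pushes the boxes together.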
-PaddleDetection also open-sources a GIoU loss implementation based on Faster R-CNN. Replacing the traditional smooth L1 loss with GIoU loss in a Faster R-CNN ResNet50-vd-FPN 1x experiment raises COCO val mAP from 38.3% to 39.4%, at no cost in inference time.
-
-
-### DIOU/CIOU loss
-
-GIoU loss solves the problem that IoU loss gives the model no optimization direction when the predicted box A and the ground truth B have zero overlap, but two situations remain hard to handle:
-1. When box A and box B are in a containment relationship, GIoU loss degenerates to IoU loss and the model converges slowly.
-2. When A and B intersect and their x1 and x2 are both equal, or their y1 and y2 are both equal, GIoU loss again degenerates to IoU loss and converges very slowly.
-
-To address this, the paper proposes DIoU loss and CIoU loss, tackling both the slow convergence and the cases where convergence fails.
-To speed up convergence, the improved loss introduces a notion of distance; concretely, the box loss can be written in the general form
-
-$$\mathcal{L} = 1 - IoU + \mathcal{R}(B, B^{gt})$$
-
-where $\mathcal{R}(B, B^{gt})$ is a penalty term. When the loss penalizes the distance between the predicted box and the ground truth, the penalty can be defined as
-
-$$\mathcal{R}_{DIoU} = \frac{\rho^2(b, b^{gt})}{c^2}$$
-
-where the numerator is the squared Euclidean distance between the center points of the predicted and ground-truth boxes, and $c$ in the denominator is the diagonal length of their smallest enclosing box. The DIoU loss can therefore be written as
-
-$$\mathcal{L}_{DIoU} = 1 - IoU + \frac{\rho^2(b, b^{gt})}{c^2}$$
-
-Compared with GIoU loss, DIoU loss considers not only the IoU but also the distance between the boxes, which speeds up convergence. However, as a box loss it only accounts for the overlap and the center distance, ignoring the difference in aspect ratio between the predicted box and the ground truth, so the paper further proposes CIoU loss, whose penalty adds an aspect-ratio constraint. Concretely, the penalty is defined as
-
-$$\mathcal{R}_{CIoU} = \frac{\rho^2(b, b^{gt})}{c^2} + \alpha v$$
-
-where $v$ is the aspect-ratio penalty and $\alpha$ is the penalty coefficient, defined respectively as
-
-$$v = \frac{4}{\pi^2}\left(\arctan\frac{w^{gt}}{h^{gt}} - \arctan\frac{w}{h}\right)^2, \qquad \alpha = \frac{v}{(1 - IoU) + v}$$
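Under the same box convention as the GIoU sketch, here is a minimal sketch of the DIoU/CIoU computation; `complete=True` adds the aspect-ratio term, playing the same role as `use_complete_iou_loss` in the `DiouLoss` config block below, though this function only illustrates the formulas and is not the repository code.

```python
import math

def diou_ciou_loss(a, b, complete=True):
    """Returns 1 - IoU + penalty; complete=True adds the CIoU term alpha * v."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    union = ((a[2] - a[0]) * (a[3] - a[1]) +
             (b[2] - b[0]) * (b[3] - b[1]) - inter)
    iou = inter / union
    # squared center distance over squared enclosing-box diagonal
    ax, ay = (a[0] + a[2]) / 2, (a[1] + a[3]) / 2
    bx, by = (b[0] + b[2]) / 2, (b[1] + b[3]) / 2
    rho2 = (ax - bx) ** 2 + (ay - by) ** 2
    cw = max(a[2], b[2]) - min(a[0], b[0])
    ch = max(a[3], b[3]) - min(a[1], b[1])
    penalty = rho2 / (cw ** 2 + ch ** 2)
    if complete:  # CIoU: aspect-ratio consistency term alpha * v
        v = (4 / math.pi ** 2) * (math.atan((a[2] - a[0]) / (a[3] - a[1]))
                                  - math.atan((b[2] - b[0]) / (b[3] - b[1]))) ** 2
        alpha = v / ((1 - iou) + v)
        penalty += alpha * v
    return 1.0 - iou + penalty
```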
-CIoU loss lets box regression converge faster and more accurately when the predicted box overlaps with, or even contains, the target box.
-In the NMS stage, the suppression criterion is normally the plain IoU; the paper instead uses a DIoU-corrected criterion, and detection scores are updated as follows.
-
-$$s_i = \begin{cases} s_i, & IoU - \mathcal{R}_{DIoU}(\mathcal{M}, B_i) < \varepsilon \\ 0, & IoU - \mathcal{R}_{DIoU}(\mathcal{M}, B_i) \ge \varepsilon \end{cases}$$
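A sketch of that DIoU-NMS score update as greedy NMS follows; the helper and function names here are hypothetical, and the repository's `MultiClassDiouNMS` op is the real implementation. The only change from standard NMS is subtracting the center-distance penalty in the suppression test.

```python
def _iou_and_penalty(a, b):
    """IoU and the DIoU penalty rho^2/c^2 for boxes [x1, y1, x2, y2]."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    union = ((a[2] - a[0]) * (a[3] - a[1]) +
             (b[2] - b[0]) * (b[3] - b[1]) - inter)
    ax, ay = (a[0] + a[2]) / 2, (a[1] + a[3]) / 2
    bx, by = (b[0] + b[2]) / 2, (b[1] + b[3]) / 2
    cw = max(a[2], b[2]) - min(a[0], b[0])
    ch = max(a[3], b[3]) - min(a[1], b[1])
    return inter / union, ((ax - bx) ** 2 + (ay - by) ** 2) / (cw ** 2 + ch ** 2)

def diou_nms(boxes, scores, thresh=0.5):
    """Greedy NMS where box i is suppressed if IoU(M, B_i) - penalty >= thresh."""
    order = sorted(range(len(boxes)), key=lambda i: scores[i], reverse=True)
    keep = []
    while order:
        m, rest = order[0], order[1:]
        keep.append(m)
        survivors = []
        for i in rest:
            iou, pen = _iou_and_penalty(boxes[m], boxes[i])
            if iou - pen < thresh:  # not suppressed; keep for later rounds
                survivors.append(i)
        order = survivors
    return keep
```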
-Using the DIoU-corrected NMS brings a further gain in model accuracy.
-
-
-## Model Zoo
-
-| Backbone | Type | Loss Type | Loss Weight | Image/gpu | Lr schd | Inf time (fps) | Box AP | Mask AP | Download | Configs |
-| :---------------------- | :------------- | :---: | :---: | :-------: | :-----: | :------------: | :----: | :-----: | :----------------------------------------------------------: | :---: |
-| ResNet50-vd-FPN | Faster | GIOU | 10 | 2 | 1x | 22.94 | 39.4 | - | [model](https://paddlemodels.bj.bcebos.com/object_detection/faster_rcnn_r50_vd_fpn_giou_loss_1x.tar) | [config](https://github.com/PaddlePaddle/PaddleDetection/tree/develop/static/configs/iou_loss/faster_rcnn_r50_vd_fpn_giou_loss_1x.yml) |
-| ResNet50-vd-FPN | Faster | DIOU | 12 | 2 | 1x | 22.94 | 39.2 | - | [model](https://paddlemodels.bj.bcebos.com/object_detection/faster_rcnn_r50_vd_fpn_diou_loss_1x.tar) | [config](https://github.com/PaddlePaddle/PaddleDetection/tree/develop/static/configs/iou_loss/faster_rcnn_r50_vd_fpn_diou_loss_1x.yml) |
-| ResNet50-vd-FPN | Faster | CIOU | 12 | 2 | 1x | 22.95 | 39.6 | - | [model](https://paddlemodels.bj.bcebos.com/object_detection/faster_rcnn_r50_vd_fpn_ciou_loss_1x.tar) | [config](https://github.com/PaddlePaddle/PaddleDetection/tree/develop/static/configs/iou_loss/faster_rcnn_r50_vd_fpn_ciou_loss_1x.yml) |
-
-
-
-## Citation
-
 ``` -@article{DBLP:journals/corr/abs-1902-09630, - author = {Seyed Hamid Rezatofighi and - Nathan Tsoi and - JunYoung Gwak and - Amir Sadeghian and - Ian D. Reid and - Silvio Savarese}, - title = {Generalized Intersection over Union: {A} Metric and {A} Loss for Bounding - Box Regression}, - journal = {CoRR}, - volume = {abs/1902.09630}, - year = {2019}, - url = {http://arxiv.org/abs/1902.09630}, - archivePrefix = {arXiv}, - eprint = {1902.09630}, - timestamp = {Tue, 21 May 2019 18:03:36 +0200}, - biburl = {https://dblp.org/rec/bib/journals/corr/abs-1902-09630}, - bibsource = {dblp computer science bibliography, https://dblp.org} -} -``` - -- Distance-IoU Loss: Faster and Better Learning for Bounding Box Regression: [https://arxiv.org/abs/1911.08287](https://arxiv.org/abs/1911.08287) - -``` -@article{Zheng2019DistanceIoULF, - title={Distance-IoU Loss: Faster and Better Learning for Bounding Box Regression}, - author={Zhaohui Zheng and Ping Wang and Wei Liu and Jinze Li and Rongguang Ye and Dongwei Ren}, - journal={ArXiv}, - year={2019}, - volume={abs/1911.08287} -} -``` diff --git a/static/configs/iou_loss/faster_rcnn_r50_vd_fpn_ciou_loss_1x.yml b/static/configs/iou_loss/faster_rcnn_r50_vd_fpn_ciou_loss_1x.yml deleted file mode 100644 index aa6c17b79ec1381af552ace8d31777a4673a1f0c..0000000000000000000000000000000000000000 --- a/static/configs/iou_loss/faster_rcnn_r50_vd_fpn_ciou_loss_1x.yml +++ /dev/null @@ -1,114 +0,0 @@ -architecture: FasterRCNN -max_iters: 90000 -snapshot_iter: 10000 -use_gpu: true -log_iter: 20 -save_dir: output -pretrain_weights: https://paddle-imagenet-models-name.bj.bcebos.com/ResNet50_vd_pretrained.tar -weights: output/faster_rcnn_r50_vd_fpn_diou_loss_1x/model_final -metric: COCO -num_classes: 81 - -FasterRCNN: - backbone: ResNet - fpn: FPN - rpn_head: FPNRPNHead - roi_extractor: FPNRoIAlign - bbox_head: BBoxHead - bbox_assigner: BBoxAssigner - -ResNet: - depth: 50 - feature_maps: [2, 3, 4, 5] - freeze_at: 2 - norm_type: bn - variant: d - -FPN: - max_level: 6 - min_level: 2 - num_chan: 256 - spatial_scale: [0.03125, 0.0625, 0.125, 0.25] - -FPNRPNHead: - anchor_generator: - anchor_sizes: [32, 64, 128, 256, 512] - aspect_ratios: [0.5, 1.0, 2.0] - stride: [16.0, 16.0] - variance: [1.0, 1.0, 1.0, 1.0] - anchor_start_size:
32 - max_level: 6 - min_level: 2 - num_chan: 256 - rpn_target_assign: - rpn_batch_size_per_im: 256 - rpn_fg_fraction: 0.5 - rpn_negative_overlap: 0.3 - rpn_positive_overlap: 0.7 - rpn_straddle_thresh: 0.0 - train_proposal: - min_size: 0.0 - nms_thresh: 0.7 - post_nms_top_n: 2000 - pre_nms_top_n: 2000 - test_proposal: - min_size: 0.0 - nms_thresh: 0.7 - post_nms_top_n: 1000 - pre_nms_top_n: 1000 - -FPNRoIAlign: - canconical_level: 4 - canonical_size: 224 - max_level: 5 - min_level: 2 - box_resolution: 7 - sampling_ratio: 2 - -BBoxAssigner: - batch_size_per_im: 512 - bbox_reg_weights: [0.1, 0.1, 0.2, 0.2] - bg_thresh_hi: 0.5 - bg_thresh_lo: 0.0 - fg_fraction: 0.25 - fg_thresh: 0.5 - -BBoxHead: - head: TwoFCHead - nms: MultiClassDiouNMS - bbox_loss: DiouLoss - -MultiClassDiouNMS: - keep_top_k: 100 - nms_threshold: 0.5 - score_threshold: 0.05 - -DiouLoss: - loss_weight: 10.0 - is_cls_agnostic: false - use_complete_iou_loss: true - -TwoFCHead: - mlp_dim: 1024 - -LearningRate: - base_lr: 0.02 - schedulers: - - !PiecewiseDecay - gamma: 0.1 - milestones: [60000, 80000] - - !LinearWarmup - start_factor: 0.1 - steps: 1000 - -OptimizerBuilder: - optimizer: - momentum: 0.9 - type: Momentum - regularizer: - factor: 0.0001 - type: L2 - -_READER_: '../faster_fpn_reader.yml' -TrainReader: - batch_size: 2 diff --git a/static/configs/iou_loss/faster_rcnn_r50_vd_fpn_diou_loss_1x.yml b/static/configs/iou_loss/faster_rcnn_r50_vd_fpn_diou_loss_1x.yml deleted file mode 100644 index f780c919f203618df16d9e2b7fb1833ec2507713..0000000000000000000000000000000000000000 --- a/static/configs/iou_loss/faster_rcnn_r50_vd_fpn_diou_loss_1x.yml +++ /dev/null @@ -1,112 +0,0 @@ -architecture: FasterRCNN -max_iters: 90000 -snapshot_iter: 10000 -use_gpu: true -log_iter: 20 -save_dir: output -pretrain_weights: https://paddle-imagenet-models-name.bj.bcebos.com/ResNet50_vd_pretrained.tar -weights: output/faster_rcnn_r50_vd_fpn_diou_loss_1x/model_final -metric: COCO -num_classes: 81 - -FasterRCNN: - backbone: ResNet - fpn: FPN - rpn_head: FPNRPNHead - roi_extractor: FPNRoIAlign - bbox_head: BBoxHead - bbox_assigner: BBoxAssigner - -ResNet: - depth: 50 - feature_maps: [2, 3, 4, 5] - freeze_at: 2 - norm_type: bn - variant: d - -FPN: - max_level: 6 - min_level: 2 - num_chan: 256 - spatial_scale: [0.03125, 0.0625, 0.125, 0.25] - -FPNRPNHead: - anchor_generator: - anchor_sizes: [32, 64, 128, 256, 512] - aspect_ratios: [0.5, 1.0, 2.0] - stride: [16.0, 16.0] - variance: [1.0, 1.0, 1.0, 1.0] - anchor_start_size: 32 - max_level: 6 - min_level: 2 - num_chan: 256 - rpn_target_assign: - rpn_batch_size_per_im: 256 - rpn_fg_fraction: 0.5 - rpn_negative_overlap: 0.3 - rpn_positive_overlap: 0.7 - rpn_straddle_thresh: 0.0 - train_proposal: - min_size: 0.0 - nms_thresh: 0.7 - post_nms_top_n: 2000 - pre_nms_top_n: 2000 - test_proposal: - min_size: 0.0 - nms_thresh: 0.7 - post_nms_top_n: 1000 - pre_nms_top_n: 1000 - -FPNRoIAlign: - canconical_level: 4 - canonical_size: 224 - max_level: 5 - min_level: 2 - box_resolution: 7 - sampling_ratio: 2 - -BBoxAssigner: - batch_size_per_im: 512 - bbox_reg_weights: [0.1, 0.1, 0.2, 0.2] - bg_thresh_hi: 0.5 - bg_thresh_lo: 0.0 - fg_fraction: 0.25 - fg_thresh: 0.5 - -BBoxHead: - head: TwoFCHead - nms: - keep_top_k: 100 - nms_threshold: 0.5 - score_threshold: 0.05 - bbox_loss: DiouLoss - -DiouLoss: - loss_weight: 12.0 - is_cls_agnostic: false - use_complete_iou_loss: false - -TwoFCHead: - mlp_dim: 1024 - -LearningRate: - base_lr: 0.02 - schedulers: - - !PiecewiseDecay - gamma: 0.1 - milestones: [60000, 80000] - - 
!LinearWarmup - start_factor: 0.1 - steps: 1000 - -OptimizerBuilder: - optimizer: - momentum: 0.9 - type: Momentum - regularizer: - factor: 0.0001 - type: L2 - -_READER_: '../faster_fpn_reader.yml' -TrainReader: - batch_size: 2 diff --git a/static/configs/iou_loss/faster_rcnn_r50_vd_fpn_giou_loss_1x.yml b/static/configs/iou_loss/faster_rcnn_r50_vd_fpn_giou_loss_1x.yml deleted file mode 100644 index 66721a03fa2f25d0fcd90154f9cf7e723271fcc4..0000000000000000000000000000000000000000 --- a/static/configs/iou_loss/faster_rcnn_r50_vd_fpn_giou_loss_1x.yml +++ /dev/null @@ -1,111 +0,0 @@ -architecture: FasterRCNN -max_iters: 90000 -snapshot_iter: 10000 -use_gpu: true -log_iter: 20 -save_dir: output -pretrain_weights: https://paddle-imagenet-models-name.bj.bcebos.com/ResNet50_vd_pretrained.tar -weights: output/faster_rcnn_r50_vd_fpn_giou_loss_1x/model_final -metric: COCO -num_classes: 81 - -FasterRCNN: - backbone: ResNet - fpn: FPN - rpn_head: FPNRPNHead - roi_extractor: FPNRoIAlign - bbox_head: BBoxHead - bbox_assigner: BBoxAssigner - -ResNet: - depth: 50 - feature_maps: [2, 3, 4, 5] - freeze_at: 2 - norm_type: bn - variant: d - -FPN: - max_level: 6 - min_level: 2 - num_chan: 256 - spatial_scale: [0.03125, 0.0625, 0.125, 0.25] - -FPNRPNHead: - anchor_generator: - anchor_sizes: [32, 64, 128, 256, 512] - aspect_ratios: [0.5, 1.0, 2.0] - stride: [16.0, 16.0] - variance: [1.0, 1.0, 1.0, 1.0] - anchor_start_size: 32 - max_level: 6 - min_level: 2 - num_chan: 256 - rpn_target_assign: - rpn_batch_size_per_im: 256 - rpn_fg_fraction: 0.5 - rpn_negative_overlap: 0.3 - rpn_positive_overlap: 0.7 - rpn_straddle_thresh: 0.0 - train_proposal: - min_size: 0.0 - nms_thresh: 0.7 - post_nms_top_n: 2000 - pre_nms_top_n: 2000 - test_proposal: - min_size: 0.0 - nms_thresh: 0.7 - post_nms_top_n: 1000 - pre_nms_top_n: 1000 - -FPNRoIAlign: - canconical_level: 4 - canonical_size: 224 - max_level: 5 - min_level: 2 - box_resolution: 7 - sampling_ratio: 2 - -BBoxAssigner: - batch_size_per_im: 512 - bbox_reg_weights: [0.1, 0.1, 0.2, 0.2] - bg_thresh_hi: 0.5 - bg_thresh_lo: 0.0 - fg_fraction: 0.25 - fg_thresh: 0.5 - -BBoxHead: - head: TwoFCHead - nms: - keep_top_k: 100 - nms_threshold: 0.5 - score_threshold: 0.05 - bbox_loss: GiouLoss - -GiouLoss: - loss_weight: 10.0 - is_cls_agnostic: false - -TwoFCHead: - mlp_dim: 1024 - -LearningRate: - base_lr: 0.02 - schedulers: - - !PiecewiseDecay - gamma: 0.1 - milestones: [60000, 80000] - - !LinearWarmup - start_factor: 0.1 - steps: 1000 - -OptimizerBuilder: - optimizer: - momentum: 0.9 - type: Momentum - regularizer: - factor: 0.0001 - type: L2 - -_READER_: '../faster_fpn_reader.yml' -TrainReader: - batch_size: 2 diff --git a/static/configs/libra_rcnn/README.md b/static/configs/libra_rcnn/README.md deleted file mode 100644 index 3451390d34f969444503f4c5ebc90993be85b568..0000000000000000000000000000000000000000 --- a/static/configs/libra_rcnn/README.md +++ /dev/null @@ -1,23 +0,0 @@ -# Libra R-CNN: Towards Balanced Learning for Object Detection - -## Introduction - -- Libra R-CNN: Towards Balanced Learning for Object Detection -: [https://arxiv.org/abs/1904.02701](https://arxiv.org/abs/1904.02701) - -``` -@inproceedings{pang2019libra, - title={Libra R-CNN: Towards Balanced Learning for Object Detection}, - author={Pang, Jiangmiao and Chen, Kai and Shi, Jianping and Feng, Huajun and Ouyang, Wanli and Dahua Lin}, - booktitle={IEEE Conference on Computer Vision and Pattern Recognition}, - year={2019} -} -``` - - -## Model Zoo - -| Backbone | Type | Image/gpu | Lr schd | Inf time (fps) | Box 
AP | Mask AP | Download | Configs | -| :---------------------- | :-------------: | :-------: | :-----: | :------------: | :----: | :-----: | :----------------------------------------------------------: | :-----: | -| ResNet50-vd-BFP | Faster | 2 | 1x | 18.247 | 40.5 | - | [model](https://paddlemodels.bj.bcebos.com/object_detection/libra_rcnn_r50_vd_fpn_1x.tar) | [config](https://github.com/PaddlePaddle/PaddleDetection/tree/develop/static/configs/libra_rcnn/libra_rcnn_r50_vd_fpn_1x.yml) | -| ResNet101-vd-BFP | Faster | 2 | 1x | 14.865 | 42.5 | - | [model](https://paddlemodels.bj.bcebos.com/object_detection/libra_rcnn_r101_vd_fpn_1x.tar) | [config](https://github.com/PaddlePaddle/PaddleDetection/tree/develop/static/configs/libra_rcnn/libra_rcnn_r101_vd_fpn_1x.yml) | diff --git a/static/configs/libra_rcnn/README_cn.md b/static/configs/libra_rcnn/README_cn.md deleted file mode 100644 index 7d71aae1bb977aac21dc28520633995d006442a8..0000000000000000000000000000000000000000 --- a/static/configs/libra_rcnn/README_cn.md +++ /dev/null @@ -1,75 +0,0 @@ -# Libra R-CNN: Towards Balanced Learning for Object Detection - -## 简介 - -检测模型训练大多包含3个步骤:候选区域生成与选择、特征提取、类别分类和检测框回归多任务的训练与收敛。 - -论文分析了检测任务中限制模型性能的3个层面的不均衡现象,分别是样本(sample level)、特征(feature level)以及目标(objective level)级别的不均衡,并针对这3个不均衡现象分别提出了解决方案,具体如下。 - -### IoU-balanced Sampling - -Faster RCNN生成许多候选框之后,使用随机的方法挑选正负样本,但是这导致了一个问题:负样本中有70%的候选框与真值的IOU都在0~0.05之间,分布如下图所示。使用在线难负样本挖掘(OHEM)的方法可以缓解这种情况,但是不同IOU区间的采样样本数量差距仍然比较大,而且流程复杂。作者提出了均衡的负样本采样策略,即将IOU阈值区间分为K份,在每个子区间都采样相同数量的负样本(如果达不到平均数量,则取该子区间的所有样本),最终可以保证采样得到的负样本在不同的IOU子区间尽量均衡。这种方法思路简单,效果也比OHEM更好一些。 - -
- (原文此处为插图:随机采样得到的负样本与真值IOU的分布直方图,约70%的负样本IOU位于0~0.05区间) -
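下面用NumPy给出该采样策略的一个简化示意(仅为说明思路的草图,函数名与参数均为示例,并非PaddleDetection中`LibraBBoxAssigner`的实际实现):

```python
import numpy as np

def iou_balanced_neg_sampling(neg_ious, num_samples, num_bins=3, max_iou=0.5):
    """按IOU子区间均衡地采样负样本,返回被选中样本的下标。

    neg_ious:     每个负样本候选框与真值的最大IOU,shape为[N]
    num_samples:  需要采样的负样本总数
    num_bins:     IOU阈值区间划分的份数,即论文中的K
    max_iou:      负样本的IOU上限(对应配置中的bg_thresh_hi)
    """
    neg_ious = np.asarray(neg_ious)
    per_bin = num_samples // num_bins
    selected = []
    for i in range(num_bins):
        lo = max_iou * i / num_bins
        hi = max_iou * (i + 1) / num_bins
        cand = np.where((neg_ious >= lo) & (neg_ious < hi))[0]
        # 子区间内样本不足平均数量时,取该子区间的全部样本
        k = min(per_bin, len(cand))
        if k > 0:
            selected.append(np.random.choice(cand, k, replace=False))
    selected = (np.concatenate(selected)
                if selected else np.empty(0, dtype=np.int64))
    # 仍有缺口时,从剩余负样本中随机补齐
    if len(selected) < num_samples:
        rest = np.setdiff1d(np.arange(len(neg_ious)), selected)
        pad_num = min(num_samples - len(selected), len(rest))
        selected = np.concatenate(
            [selected, np.random.choice(rest, pad_num, replace=False)])
    return selected.astype(np.int64)
```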
- - -### Balanced Feature Pyramid(BFP) - -之前的FPN结构使用横向连接的操作融合骨干网络的特征,论文则提出了一种如下图所示的平衡特征金字塔结构,主要包括rescaling、integrating、refining、strengthening共4个部分:首先将不同层级的特征图缩放到同一尺度,之后对各层级特征图进行平均融合,再使用Nonlocal模块进一步提炼特征,最终将提炼后的特征图缩放回各层级尺度,作为残差项与对应层级的特征图相加,得到最终输出的特征图。这种平衡的特征金字塔结构相对于标准的FPN在COCO数据集上可以带来0.8%左右的精度提升。 -
- (原文此处为插图:BFP结构示意图,依次包含rescaling、integrating、refining、strengthening四个步骤) -
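BFP的融合流程可以用如下NumPy代码示意(仅为说明思路的草图,省略了Nonlocal提炼步骤,假设特征图为[C, H, W]格式,并非PaddleDetection中`BFP`模块的实际实现):

```python
import numpy as np

def _resize_nearest(feat, out_h, out_w):
    """最近邻插值缩放,feat的shape为[C, H, W]。"""
    _, h, w = feat.shape
    ys = np.minimum(np.arange(out_h) * h // out_h, h - 1)
    xs = np.minimum(np.arange(out_w) * w // out_w, w - 1)
    return feat[:, ys][:, :, xs]

def balanced_feature_pyramid(feats, refine_level=2):
    """BFP简化示意:rescaling -> integrating -> (refining略) -> strengthening。

    feats:        各层级FPN特征图组成的列表,每个元素shape为[C, H_i, W_i]
    refine_level: 聚合时采用的中间层级下标(对应配置中的refine_level)
    """
    _, h, w = feats[refine_level].shape
    # rescaling: 将所有层级特征缩放到refine_level的尺度
    gathered = [_resize_nearest(f, h, w) for f in feats]
    # integrating: 对各层级特征取平均,得到平衡后的语义特征
    bsf = np.mean(np.stack(gathered), axis=0)
    # refining: 论文中此处使用Nonlocal模块提炼bsf,本示意中省略
    # strengthening: 将bsf缩放回各层级尺度,作为残差与原特征相加
    return [f + _resize_nearest(bsf, f.shape[1], f.shape[2]) for f in feats]
```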
- - - -### Balanced L1 Loss - -物体检测任务中,需要同时优化分类loss与边框回归loss。当分类得分很高时,即使回归效果很差,模型也可能表现出比较高的精度,因此可以考虑增大回归loss的权重。将bbox loss<=1的边框视为inliers(可以被视为简单样本),bbox loss>1的边框视为outliers(可以被视为难样本)。如果直接增大所有边框的回归loss权重,会使模型对outliers更加敏感;而且基于smooth l1 loss的边框loss计算方法有以下缺点:当边框为inliers时,其梯度很小;当边框为outliers时,梯度幅值为1。smooth l1 loss的梯度计算方法定义如下。 -
- $$\frac{\partial L_{\text{smooth-L1}}}{\partial x}=\begin{cases}x, & \text{if } |x|<1 \\ \pm 1, & \text{otherwise}\end{cases}$$ -
    - - -因此论文考虑增加inliers的梯度值,尽量平衡inliers与outliers的loss梯度比例。最终Libra loss的梯度计算方法如下所示。 - -
- $$\frac{\partial L_{b}}{\partial x}=\begin{cases}\alpha\ln(b|x|+1), & \text{if } |x|<1 \\ \gamma, & \text{otherwise}\end{cases}$$ -其中超参数满足 $\alpha\ln(b+1)=\gamma$,以保证梯度在 $|x|=1$ 处连续。 -
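按上述梯度积分即可得到Balanced L1 loss本身。下面给出一个NumPy数值示意(仅用于理解公式,默认alpha=0.5、gamma=1.5与上文配置中`BalancedL1Loss`的超参一致,但并非其实际实现):

```python
import numpy as np

def balanced_l1_loss(diff, alpha=0.5, gamma=1.5):
    """逐元素计算Balanced L1 loss,diff为回归预测值与目标值的差x。"""
    x = np.abs(diff)
    # 由约束 alpha * ln(b + 1) = gamma 解出b,保证梯度在|x|=1处连续
    b = np.exp(gamma / alpha) - 1.0
    inlier = alpha / b * (b * x + 1) * np.log(b * x + 1) - alpha * x
    # 常数项 gamma / b - alpha 使loss本身在|x|=1处连续
    outlier = gamma * x + gamma / b - alpha
    return np.where(x < 1, inlier, outlier)

# 对该函数求导可以验证:|x|<1时梯度为 alpha*ln(b|x|+1),|x|>=1时梯度恒为 gamma
```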
    - - -在不同的超参数下,梯度可视化如下图所示。 - - -
- (原文此处为插图:不同超参数设置下,Balanced L1 loss与smooth L1 loss的梯度对比曲线) -
    - - -可以看出Libra loss与smooth l1 loss对于outliers的梯度是相同的,但是在inliers中,Libra loss的梯度更大一些,从而增大了不同情况下的边框回归loss,平衡了难易边框学习的loss,同时也提升了边框回归效果对检测模型性能的影响。 - -论文将3个部分融合在一起,在coco两阶段目标检测任务中有1.1%~2.5%的绝对精度提升,效果十分明显。 - - -## 模型库 - - -| 骨架网络 | 网络类型 | 每张GPU图片个数 | 学习率策略 |推理时间(fps) | Box AP | Mask AP | 下载 | 配置文件 | -| :---------------------- | :-------------: | :-------: | :-----: | :------------: | :----: | :-----: | :----------------------------------------------------------: | :-----: | -| ResNet50-vd-BFP | Faster | 2 | 1x | 18.247 | 40.5 | - | [model](https://paddlemodels.bj.bcebos.com/object_detection/libra_rcnn_r50_vd_fpn_1x.tar) | [config](https://github.com/PaddlePaddle/PaddleDetection/tree/develop/static/configs/libra_rcnn/libra_rcnn_r50_vd_fpn_1x.yml) | -| ResNet101-vd-BFP | Faster | 2 | 1x | 14.865 | 42.5 | - | [model](https://paddlemodels.bj.bcebos.com/object_detection/libra_rcnn_r101_vd_fpn_1x.tar) | [config](https://github.com/PaddlePaddle/PaddleDetection/tree/develop/static/configs/libra_rcnn/libra_rcnn_r101_vd_fpn_1x.yml) | - -## 引用 - -``` -@inproceedings{pang2019libra, - title={Libra R-CNN: Towards Balanced Learning for Object Detection}, - author={Pang, Jiangmiao and Chen, Kai and Shi, Jianping and Feng, Huajun and Ouyang, Wanli and Dahua Lin}, - booktitle={IEEE Conference on Computer Vision and Pattern Recognition}, - year={2019} -} -``` diff --git a/static/configs/libra_rcnn/libra_rcnn_r101_vd_fpn_1x.yml b/static/configs/libra_rcnn/libra_rcnn_r101_vd_fpn_1x.yml deleted file mode 100644 index 2c425a58c91a47dfc3dff7b7df69249cf0b0d2f9..0000000000000000000000000000000000000000 --- a/static/configs/libra_rcnn/libra_rcnn_r101_vd_fpn_1x.yml +++ /dev/null @@ -1,117 +0,0 @@ -architecture: FasterRCNN -max_iters: 90000 -snapshot_iter: 10000 -use_gpu: true -log_iter: 20 -save_dir: output -pretrain_weights: https://paddle-imagenet-models-name.bj.bcebos.com/ResNet101_vd_pretrained.tar -weights: output/libra_rcnn_r101_vd_fpn_1x/model_final -metric: COCO -num_classes: 81 - -FasterRCNN: - backbone: ResNet - fpn: BFP - rpn_head: FPNRPNHead - roi_extractor: FPNRoIAlign - bbox_head: BBoxHead - bbox_assigner: LibraBBoxAssigner - -ResNet: - depth: 101 - feature_maps: [2, 3, 4, 5] - freeze_at: 2 - norm_type: bn - variant: d - -BFP: - base_neck: - max_level: 6 - min_level: 2 - num_chan: 256 - spatial_scale: [0.03125, 0.0625, 0.125, 0.25] - refine_level: 2 - refine_type: nonlocal - nonlocal_reduction: 1.0 - -FPNRPNHead: - anchor_generator: - anchor_sizes: [32, 64, 128, 256, 512] - aspect_ratios: [0.5, 1.0, 2.0] - stride: [16.0, 16.0] - variance: [1.0, 1.0, 1.0, 1.0] - anchor_start_size: 32 - max_level: 6 - min_level: 2 - num_chan: 256 - rpn_target_assign: - rpn_batch_size_per_im: 256 - rpn_fg_fraction: 0.5 - rpn_negative_overlap: 0.3 - rpn_positive_overlap: 0.7 - rpn_straddle_thresh: 0.0 - train_proposal: - min_size: 0.0 - nms_thresh: 0.7 - post_nms_top_n: 2000 - pre_nms_top_n: 2000 - test_proposal: - min_size: 0.0 - nms_thresh: 0.7 - post_nms_top_n: 1000 - pre_nms_top_n: 1000 - -FPNRoIAlign: - canconical_level: 4 - canonical_size: 224 - max_level: 5 - min_level: 2 - box_resolution: 7 - sampling_ratio: 2 - -LibraBBoxAssigner: - batch_size_per_im: 512 - bbox_reg_weights: [0.1, 0.1, 0.2, 0.2] - bg_thresh_hi: 0.5 - bg_thresh_lo: 0.0 - fg_fraction: 0.25 - fg_thresh: 0.5 - -BBoxHead: - head: TwoFCHead - nms: - keep_top_k: 100 - nms_threshold: 0.5 - score_threshold: 0.05 - bbox_loss: BalancedL1Loss - -BalancedL1Loss: - alpha: 0.5 - gamma: 1.5 - beta: 1.0 - loss_weight: 1.0 - -TwoFCHead: - mlp_dim: 1024 - -LearningRate: 
- base_lr: 0.02 - schedulers: - - !PiecewiseDecay - gamma: 0.1 - milestones: [60000, 80000] - - !LinearWarmup - start_factor: 0.1 - steps: 1000 - -OptimizerBuilder: - optimizer: - momentum: 0.9 - type: Momentum - regularizer: - factor: 0.0001 - type: L2 - -_READER_: '../faster_fpn_reader.yml' -TrainReader: - batch_size: 2 diff --git a/static/configs/libra_rcnn/libra_rcnn_r50_vd_fpn_1x.yml b/static/configs/libra_rcnn/libra_rcnn_r50_vd_fpn_1x.yml deleted file mode 100644 index 6208466ab72810da5f5e74dd4d4dbf299ac250ee..0000000000000000000000000000000000000000 --- a/static/configs/libra_rcnn/libra_rcnn_r50_vd_fpn_1x.yml +++ /dev/null @@ -1,117 +0,0 @@ -architecture: FasterRCNN -max_iters: 90000 -snapshot_iter: 10000 -use_gpu: true -log_iter: 20 -save_dir: output -pretrain_weights: https://paddle-imagenet-models-name.bj.bcebos.com/ResNet50_vd_pretrained.tar -weights: output/libra_rcnn_r50_vd_fpn_1x/model_final -metric: COCO -num_classes: 81 - -FasterRCNN: - backbone: ResNet - fpn: BFP - rpn_head: FPNRPNHead - roi_extractor: FPNRoIAlign - bbox_head: BBoxHead - bbox_assigner: LibraBBoxAssigner - -ResNet: - depth: 50 - feature_maps: [2, 3, 4, 5] - freeze_at: 2 - norm_type: bn - variant: d - -BFP: - base_neck: - max_level: 6 - min_level: 2 - num_chan: 256 - spatial_scale: [0.03125, 0.0625, 0.125, 0.25] - refine_level: 2 - refine_type: nonlocal - nonlocal_reduction: 1.0 - -FPNRPNHead: - anchor_generator: - anchor_sizes: [32, 64, 128, 256, 512] - aspect_ratios: [0.5, 1.0, 2.0] - stride: [16.0, 16.0] - variance: [1.0, 1.0, 1.0, 1.0] - anchor_start_size: 32 - max_level: 6 - min_level: 2 - num_chan: 256 - rpn_target_assign: - rpn_batch_size_per_im: 256 - rpn_fg_fraction: 0.5 - rpn_negative_overlap: 0.3 - rpn_positive_overlap: 0.7 - rpn_straddle_thresh: 0.0 - train_proposal: - min_size: 0.0 - nms_thresh: 0.7 - post_nms_top_n: 2000 - pre_nms_top_n: 2000 - test_proposal: - min_size: 0.0 - nms_thresh: 0.7 - post_nms_top_n: 1000 - pre_nms_top_n: 1000 - -FPNRoIAlign: - canconical_level: 4 - canonical_size: 224 - max_level: 5 - min_level: 2 - box_resolution: 7 - sampling_ratio: 2 - -LibraBBoxAssigner: - batch_size_per_im: 512 - bbox_reg_weights: [0.1, 0.1, 0.2, 0.2] - bg_thresh_hi: 0.5 - bg_thresh_lo: 0.0 - fg_fraction: 0.25 - fg_thresh: 0.5 - -BBoxHead: - head: TwoFCHead - nms: - keep_top_k: 100 - nms_threshold: 0.5 - score_threshold: 0.05 - bbox_loss: BalancedL1Loss - -BalancedL1Loss: - alpha: 0.5 - gamma: 1.5 - beta: 1.0 - loss_weight: 1.0 - -TwoFCHead: - mlp_dim: 1024 - -LearningRate: - base_lr: 0.02 - schedulers: - - !PiecewiseDecay - gamma: 0.1 - milestones: [60000, 80000] - - !LinearWarmup - start_factor: 0.1 - steps: 1000 - -OptimizerBuilder: - optimizer: - momentum: 0.9 - type: Momentum - regularizer: - factor: 0.0001 - type: L2 - -_READER_: '../faster_fpn_reader.yml' -TrainReader: - batch_size: 2 diff --git a/static/configs/mask_fpn_reader.yml b/static/configs/mask_fpn_reader.yml deleted file mode 100644 index aca3e0ba902bc44fb6bd0b49f282d6a2457a086f..0000000000000000000000000000000000000000 --- a/static/configs/mask_fpn_reader.yml +++ /dev/null @@ -1,103 +0,0 @@ -TrainReader: - inputs_def: - fields: ['image', 'im_info', 'im_id', 'gt_bbox', 'gt_class', 'is_crowd', 'gt_mask'] - dataset: - !COCODataSet - image_dir: train2017 - anno_path: annotations/instances_train2017.json - dataset_dir: dataset/coco - sample_transforms: - - !DecodeImage - to_rgb: true - - !RandomFlipImage - prob: 0.5 - is_mask_flip: true - - !NormalizeImage - is_channel_first: false - is_scale: true - mean: [0.485,0.456,0.406] - std: 
[0.229, 0.224,0.225] - - !ResizeImage - target_size: 800 - max_size: 1333 - interp: 1 - use_cv2: true - - !Permute - to_bgr: false - channel_first: true - batch_transforms: - - !PadBatch - pad_to_stride: 32 - use_padded_im_info: false - batch_size: 1 - shuffle: true - worker_num: 2 - drop_last: false - use_process: false - -EvalReader: - inputs_def: - fields: ['image', 'im_info', 'im_id', 'im_shape'] - # for voc - #fields: ['image', 'im_info', 'im_id', 'gt_bbox', 'gt_class', 'is_difficult'] - dataset: - !COCODataSet - image_dir: val2017 - anno_path: annotations/instances_val2017.json - dataset_dir: dataset/coco - sample_transforms: - - !DecodeImage - to_rgb: true - - !NormalizeImage - is_channel_first: false - is_scale: true - mean: [0.485,0.456,0.406] - std: [0.229, 0.224,0.225] - - !ResizeImage - interp: 1 - max_size: 1333 - target_size: 800 - use_cv2: true - - !Permute - channel_first: true - to_bgr: false - batch_transforms: - - !PadBatch - pad_to_stride: 32 - use_padded_im_info: true - batch_size: 1 - shuffle: false - drop_last: false - drop_empty: false - worker_num: 2 - -TestReader: - inputs_def: - fields: ['image', 'im_info', 'im_id', 'im_shape'] - dataset: - !ImageFolder - anno_path: annotations/instances_val2017.json - sample_transforms: - - !DecodeImage - to_rgb: true - with_mixup: false - - !NormalizeImage - is_channel_first: false - is_scale: true - mean: [0.485,0.456,0.406] - std: [0.229, 0.224,0.225] - - !ResizeImage - interp: 1 - max_size: 1333 - target_size: 800 - use_cv2: true - - !Permute - channel_first: true - to_bgr: false - batch_transforms: - - !PadBatch - pad_to_stride: 32 - use_padded_im_info: true - batch_size: 1 - shuffle: false - drop_last: false diff --git a/static/configs/mask_rcnn_r101_fpn_1x.yml b/static/configs/mask_rcnn_r101_fpn_1x.yml deleted file mode 100644 index c1cd56230c2376792fb85afca6fb6988a69391b9..0000000000000000000000000000000000000000 --- a/static/configs/mask_rcnn_r101_fpn_1x.yml +++ /dev/null @@ -1,111 +0,0 @@ -architecture: MaskRCNN -use_gpu: true -max_iters: 180000 -snapshot_iter: 10000 -log_iter: 20 -save_dir: output -pretrain_weights: https://paddle-imagenet-models-name.bj.bcebos.com/ResNet101_pretrained.tar -metric: COCO -weights: output/mask_rcnn_r101_fpn_1x/model_final -num_classes: 81 - -MaskRCNN: - backbone: ResNet - fpn: FPN - rpn_head: FPNRPNHead - roi_extractor: FPNRoIAlign - bbox_head: BBoxHead - bbox_assigner: BBoxAssigner - -ResNet: - depth: 101 - feature_maps: [2, 3, 4, 5] - freeze_at: 2 - norm_type: affine_channel - -FPN: - max_level: 6 - min_level: 2 - num_chan: 256 - spatial_scale: [0.03125, 0.0625, 0.125, 0.25] - -FPNRPNHead: - anchor_generator: - aspect_ratios: [0.5, 1.0, 2.0] - variance: [1.0, 1.0, 1.0, 1.0] - anchor_start_size: 32 - max_level: 6 - min_level: 2 - num_chan: 256 - rpn_target_assign: - rpn_batch_size_per_im: 256 - rpn_fg_fraction: 0.5 - rpn_negative_overlap: 0.3 - rpn_positive_overlap: 0.7 - rpn_straddle_thresh: 0.0 - train_proposal: - min_size: 0.0 - nms_thresh: 0.7 - pre_nms_top_n: 2000 - post_nms_top_n: 2000 - test_proposal: - min_size: 0.0 - nms_thresh: 0.7 - pre_nms_top_n: 1000 - post_nms_top_n: 1000 - -FPNRoIAlign: - canconical_level: 4 - canonical_size: 224 - max_level: 5 - min_level: 2 - sampling_ratio: 2 - box_resolution: 7 - mask_resolution: 14 - -MaskHead: - dilation: 1 - conv_dim: 256 - num_convs: 4 - resolution: 28 - -BBoxAssigner: - batch_size_per_im: 512 - bbox_reg_weights: [0.1, 0.1, 0.2, 0.2] - bg_thresh_hi: 0.5 - bg_thresh_lo: 0.0 - fg_fraction: 0.25 - fg_thresh: 0.5 - -MaskAssigner: - 
resolution: 28 - -BBoxHead: - head: TwoFCHead - nms: - keep_top_k: 100 - nms_threshold: 0.5 - score_threshold: 0.05 - -TwoFCHead: - mlp_dim: 1024 - -LearningRate: - base_lr: 0.01 - schedulers: - - !PiecewiseDecay - gamma: 0.1 - milestones: [120000, 160000] - - !LinearWarmup - start_factor: 0.3333333333333333 - steps: 500 - -OptimizerBuilder: - optimizer: - momentum: 0.9 - type: Momentum - regularizer: - factor: 0.0001 - type: L2 - -_READER_: 'mask_fpn_reader.yml' diff --git a/static/configs/mask_rcnn_r101_vd_fpn_1x.yml b/static/configs/mask_rcnn_r101_vd_fpn_1x.yml deleted file mode 100644 index 1ba08f542fa85599887fe5d53ae60baf075fb9ce..0000000000000000000000000000000000000000 --- a/static/configs/mask_rcnn_r101_vd_fpn_1x.yml +++ /dev/null @@ -1,112 +0,0 @@ -architecture: MaskRCNN -max_iters: 180000 -snapshot_iter: 10000 -use_gpu: true -log_iter: 20 -save_dir: output -pretrain_weights: https://paddle-imagenet-models-name.bj.bcebos.com/ResNet101_vd_pretrained.tar -weights: output/mask_rcnn_r101_vd_fpn_1x/model_final -metric: COCO -num_classes: 81 - -MaskRCNN: - backbone: ResNet - fpn: FPN - rpn_head: FPNRPNHead - roi_extractor: FPNRoIAlign - bbox_head: BBoxHead - bbox_assigner: BBoxAssigner - -ResNet: - depth: 101 - feature_maps: [2, 3, 4, 5] - freeze_at: 2 - norm_type: affine_channel - variant: d - -FPN: - max_level: 6 - min_level: 2 - num_chan: 256 - spatial_scale: [0.03125, 0.0625, 0.125, 0.25] - -FPNRPNHead: - anchor_generator: - aspect_ratios: [0.5, 1.0, 2.0] - variance: [1.0, 1.0, 1.0, 1.0] - anchor_start_size: 32 - max_level: 6 - min_level: 2 - num_chan: 256 - rpn_target_assign: - rpn_batch_size_per_im: 256 - rpn_fg_fraction: 0.5 - rpn_negative_overlap: 0.3 - rpn_positive_overlap: 0.7 - rpn_straddle_thresh: 0.0 - train_proposal: - min_size: 0.0 - nms_thresh: 0.7 - pre_nms_top_n: 2000 - post_nms_top_n: 2000 - test_proposal: - min_size: 0.0 - nms_thresh: 0.7 - pre_nms_top_n: 1000 - post_nms_top_n: 1000 - -FPNRoIAlign: - canconical_level: 4 - canonical_size: 224 - max_level: 5 - min_level: 2 - sampling_ratio: 2 - box_resolution: 7 - mask_resolution: 14 - -MaskHead: - dilation: 1 - conv_dim: 256 - num_convs: 4 - resolution: 28 - -BBoxAssigner: - batch_size_per_im: 512 - bbox_reg_weights: [0.1, 0.1, 0.2, 0.2] - bg_thresh_hi: 0.5 - bg_thresh_lo: 0.0 - fg_fraction: 0.25 - fg_thresh: 0.5 - -MaskAssigner: - resolution: 28 - -BBoxHead: - head: TwoFCHead - nms: - keep_top_k: 100 - nms_threshold: 0.5 - score_threshold: 0.05 - -TwoFCHead: - mlp_dim: 1024 - -LearningRate: - base_lr: 0.01 - schedulers: - - !PiecewiseDecay - gamma: 0.1 - milestones: [120000, 160000] - - !LinearWarmup - start_factor: 0.1 - steps: 1000 - -OptimizerBuilder: - optimizer: - momentum: 0.9 - type: Momentum - regularizer: - factor: 0.0001 - type: L2 - -_READER_: 'mask_fpn_reader.yml' diff --git a/static/configs/mask_rcnn_r50_1x.yml b/static/configs/mask_rcnn_r50_1x.yml deleted file mode 100644 index 127e783713a981b6d04e534ff129a86dd48a30a1..0000000000000000000000000000000000000000 --- a/static/configs/mask_rcnn_r50_1x.yml +++ /dev/null @@ -1,102 +0,0 @@ -architecture: MaskRCNN -use_gpu: true -max_iters: 180000 -snapshot_iter: 10000 -log_iter: 20 -save_dir: output -pretrain_weights: https://paddle-imagenet-models-name.bj.bcebos.com/ResNet50_cos_pretrained.tar -metric: COCO -weights: output/mask_rcnn_r50_1x/model_final -num_classes: 81 - -MaskRCNN: - backbone: ResNet - rpn_head: RPNHead - roi_extractor: RoIAlign - bbox_assigner: BBoxAssigner - bbox_head: BBoxHead - mask_assigner: MaskAssigner - mask_head: MaskHead - -ResNet: - 
norm_type: affine_channel - norm_decay: 0. - depth: 50 - feature_maps: 4 - freeze_at: 2 - -ResNetC5: - depth: 50 - norm_type: affine_channel - -RPNHead: - anchor_generator: - anchor_sizes: [32, 64, 128, 256, 512] - aspect_ratios: [0.5, 1.0, 2.0] - stride: [16.0, 16.0] - variance: [1.0, 1.0, 1.0, 1.0] - rpn_target_assign: - rpn_batch_size_per_im: 256 - rpn_fg_fraction: 0.5 - rpn_negative_overlap: 0.3 - rpn_positive_overlap: 0.7 - rpn_straddle_thresh: 0.0 - train_proposal: - min_size: 0.0 - nms_thresh: 0.7 - pre_nms_top_n: 12000 - post_nms_top_n: 2000 - test_proposal: - min_size: 0.0 - nms_thresh: 0.7 - pre_nms_top_n: 6000 - post_nms_top_n: 1000 - -RoIAlign: - resolution: 14 - spatial_scale: 0.0625 - sampling_ratio: 0 - -BBoxHead: - head: ResNetC5 - nms: - keep_top_k: 100 - nms_threshold: 0.5 - normalized: false - score_threshold: 0.05 - -MaskHead: - dilation: 1 - conv_dim: 256 - resolution: 14 - -BBoxAssigner: - batch_size_per_im: 512 - bbox_reg_weights: [0.1, 0.1, 0.2, 0.2] - bg_thresh_hi: 0.5 - bg_thresh_lo: 0.0 - fg_fraction: 0.25 - fg_thresh: 0.5 - -MaskAssigner: - resolution: 14 - -LearningRate: - base_lr: 0.01 - schedulers: - - !PiecewiseDecay - gamma: 0.1 - milestones: [120000, 160000] - - !LinearWarmup - start_factor: 0.3333333333333333 - steps: 500 - -OptimizerBuilder: - optimizer: - momentum: 0.9 - type: Momentum - regularizer: - factor: 0.0001 - type: L2 - -_READER_: 'mask_reader.yml' diff --git a/static/configs/mask_rcnn_r50_1x_cocome_kunlun.yml b/static/configs/mask_rcnn_r50_1x_cocome_kunlun.yml deleted file mode 100644 index 58517fd4a56811b8efbce57ab0248adc3dfe902e..0000000000000000000000000000000000000000 --- a/static/configs/mask_rcnn_r50_1x_cocome_kunlun.yml +++ /dev/null @@ -1,104 +0,0 @@ -architecture: MaskRCNN -use_gpu: false -use_xpu: true -max_iters: 1200 -snapshot_iter: 100 -log_iter: 20 -save_dir: output -pretrain_weights: https://paddlemodels.bj.bcebos.com/object_detection/mask_rcnn_r50_2x.tar -metric: COCO -weights: output/mask_rcnn_r50_1x_cocome_kunlun/model_final -num_classes: 2 -finetune_exclude_pretrained_params: ['cls_score'] - -MaskRCNN: - backbone: ResNet - rpn_head: RPNHead - roi_extractor: RoIAlign - bbox_assigner: BBoxAssigner - bbox_head: BBoxHead - mask_assigner: MaskAssigner - mask_head: MaskHead - -ResNet: - norm_type: affine_channel - norm_decay: 0. 
- depth: 50 - feature_maps: 4 - freeze_at: 2 - -ResNetC5: - depth: 50 - norm_type: affine_channel - -RPNHead: - anchor_generator: - anchor_sizes: [32, 64, 128, 256, 512] - aspect_ratios: [0.5, 1.0, 2.0] - stride: [16.0, 16.0] - variance: [1.0, 1.0, 1.0, 1.0] - rpn_target_assign: - rpn_batch_size_per_im: 256 - rpn_fg_fraction: 0.5 - rpn_negative_overlap: 0.3 - rpn_positive_overlap: 0.7 - rpn_straddle_thresh: 0.0 - train_proposal: - min_size: 0.0 - nms_thresh: 0.7 - pre_nms_top_n: 12000 - post_nms_top_n: 2000 - test_proposal: - min_size: 0.0 - nms_thresh: 0.7 - pre_nms_top_n: 6000 - post_nms_top_n: 1000 - -RoIAlign: - resolution: 14 - spatial_scale: 0.0625 - sampling_ratio: 0 - -BBoxHead: - head: ResNetC5 - nms: - keep_top_k: 100 - nms_threshold: 0.5 - normalized: false - score_threshold: 0.05 - -MaskHead: - dilation: 1 - conv_dim: 256 - resolution: 14 - -BBoxAssigner: - batch_size_per_im: 512 - bbox_reg_weights: [0.1, 0.1, 0.2, 0.2] - bg_thresh_hi: 0.5 - bg_thresh_lo: 0.0 - fg_fraction: 0.25 - fg_thresh: 0.5 - -MaskAssigner: - resolution: 14 - -LearningRate: - base_lr: 0.001 - schedulers: - - !PiecewiseDecay - gamma: 0.1 - milestones: [900, 1100] - - !LinearWarmup - start_factor: 0.1 - steps: 300 - -OptimizerBuilder: - optimizer: - momentum: 0.9 - type: Momentum - regularizer: - factor: 0.0001 - type: L2 - -_READER_: 'mask_reader_cocome.yml' diff --git a/static/configs/mask_rcnn_r50_2x.yml b/static/configs/mask_rcnn_r50_2x.yml deleted file mode 100644 index 8a8e62bc0c20f83af23c1c1af3a8232d2a8fe534..0000000000000000000000000000000000000000 --- a/static/configs/mask_rcnn_r50_2x.yml +++ /dev/null @@ -1,104 +0,0 @@ -architecture: MaskRCNN -use_gpu: true -max_iters: 360000 -snapshot_iter: 10000 -log_iter: 20 -save_dir: output -pretrain_weights: https://paddle-imagenet-models-name.bj.bcebos.com/ResNet50_cos_pretrained.tar -metric: COCO -weights: output/mask_rcnn_r50_2x/model_final -num_classes: 81 - -MaskRCNN: - backbone: ResNet - rpn_head: RPNHead - roi_extractor: RoIAlign - bbox_assigner: BBoxAssigner - bbox_head: BBoxHead - mask_assigner: MaskAssigner - mask_head: MaskHead - - -ResNet: - norm_type: affine_channel - norm_decay: 0. 
- depth: 50 - feature_maps: 4 - freeze_at: 2 - -ResNetC5: - depth: 50 - norm_type: affine_channel - -RPNHead: - anchor_generator: - anchor_sizes: [32, 64, 128, 256, 512] - aspect_ratios: [0.5, 1.0, 2.0] - stride: [16.0, 16.0] - variance: [1.0, 1.0, 1.0, 1.0] - rpn_target_assign: - rpn_batch_size_per_im: 256 - rpn_fg_fraction: 0.5 - rpn_negative_overlap: 0.3 - rpn_positive_overlap: 0.7 - rpn_straddle_thresh: 0.0 - train_proposal: - min_size: 0.0 - nms_thresh: 0.7 - pre_nms_top_n: 12000 - post_nms_top_n: 2000 - test_proposal: - min_size: 0.0 - nms_thresh: 0.7 - pre_nms_top_n: 6000 - post_nms_top_n: 1000 - -RoIAlign: - resolution: 14 - spatial_scale: 0.0625 - sampling_ratio: 0 - -BBoxHead: - head: ResNetC5 - nms: - keep_top_k: 100 - nms_threshold: 0.5 - normalized: false - score_threshold: 0.05 - -MaskHead: - dilation: 1 - conv_dim: 256 - resolution: 14 - -BBoxAssigner: - batch_size_per_im: 512 - bbox_reg_weights: [0.1, 0.1, 0.2, 0.2] - bg_thresh_hi: 0.5 - bg_thresh_lo: 0.0 - fg_fraction: 0.25 - fg_thresh: 0.5 - -MaskAssigner: - resolution: 14 - -LearningRate: - base_lr: 0.01 - schedulers: - - !PiecewiseDecay - gamma: 0.1 - milestones: [240000, 320000] - #start the warm up from base_lr * start_factor - - !LinearWarmup - start_factor: 0.3333333333333333 - steps: 500 - -OptimizerBuilder: - optimizer: - momentum: 0.9 - type: Momentum - regularizer: - factor: 0.0001 - type: L2 - -_READER_: 'mask_reader.yml' diff --git a/static/configs/mask_rcnn_r50_fpn_1x.yml b/static/configs/mask_rcnn_r50_fpn_1x.yml deleted file mode 100644 index 35a5495f42fb3679fb7fe49543e11ea62182f779..0000000000000000000000000000000000000000 --- a/static/configs/mask_rcnn_r50_fpn_1x.yml +++ /dev/null @@ -1,111 +0,0 @@ -architecture: MaskRCNN -use_gpu: true -max_iters: 180000 -snapshot_iter: 10000 -log_iter: 20 -save_dir: output -pretrain_weights: https://paddle-imagenet-models-name.bj.bcebos.com/ResNet50_cos_pretrained.tar -metric: COCO -weights: output/mask_rcnn_r50_fpn_1x/model_final -num_classes: 81 - -MaskRCNN: - backbone: ResNet - fpn: FPN - rpn_head: FPNRPNHead - roi_extractor: FPNRoIAlign - bbox_head: BBoxHead - bbox_assigner: BBoxAssigner - -ResNet: - depth: 50 - feature_maps: [2, 3, 4, 5] - freeze_at: 2 - norm_type: bn - -FPN: - max_level: 6 - min_level: 2 - num_chan: 256 - spatial_scale: [0.03125, 0.0625, 0.125, 0.25] - -FPNRPNHead: - anchor_generator: - aspect_ratios: [0.5, 1.0, 2.0] - variance: [1.0, 1.0, 1.0, 1.0] - anchor_start_size: 32 - max_level: 6 - min_level: 2 - num_chan: 256 - rpn_target_assign: - rpn_batch_size_per_im: 256 - rpn_fg_fraction: 0.5 - rpn_negative_overlap: 0.3 - rpn_positive_overlap: 0.7 - rpn_straddle_thresh: 0.0 - train_proposal: - min_size: 0.0 - nms_thresh: 0.7 - pre_nms_top_n: 2000 - post_nms_top_n: 2000 - test_proposal: - min_size: 0.0 - nms_thresh: 0.7 - pre_nms_top_n: 1000 - post_nms_top_n: 1000 - -FPNRoIAlign: - canconical_level: 4 - canonical_size: 224 - max_level: 5 - min_level: 2 - sampling_ratio: 2 - box_resolution: 7 - mask_resolution: 14 - -MaskHead: - dilation: 1 - conv_dim: 256 - num_convs: 4 - resolution: 28 - -BBoxAssigner: - batch_size_per_im: 512 - bbox_reg_weights: [0.1, 0.1, 0.2, 0.2] - bg_thresh_hi: 0.5 - bg_thresh_lo: 0.0 - fg_fraction: 0.25 - fg_thresh: 0.5 - -MaskAssigner: - resolution: 28 - -BBoxHead: - head: TwoFCHead - nms: - keep_top_k: 100 - nms_threshold: 0.5 - score_threshold: 0.05 - -TwoFCHead: - mlp_dim: 1024 - -LearningRate: - base_lr: 0.01 - schedulers: - - !PiecewiseDecay - gamma: 0.1 - milestones: [120000, 160000] - - !LinearWarmup - start_factor: 
0.3333333333333333 - steps: 500 - -OptimizerBuilder: - optimizer: - momentum: 0.9 - type: Momentum - regularizer: - factor: 0.0001 - type: L2 - -_READER_: 'mask_fpn_reader.yml' diff --git a/static/configs/mask_rcnn_r50_fpn_2x.yml b/static/configs/mask_rcnn_r50_fpn_2x.yml deleted file mode 100644 index 9fffd92211bda8bed1c28d94f1fded93730805d5..0000000000000000000000000000000000000000 --- a/static/configs/mask_rcnn_r50_fpn_2x.yml +++ /dev/null @@ -1,111 +0,0 @@ -architecture: MaskRCNN -max_iters: 360000 -snapshot_iter: 10000 -use_gpu: true -log_iter: 20 -save_dir: output -pretrain_weights: https://paddle-imagenet-models-name.bj.bcebos.com/ResNet50_cos_pretrained.tar -weights: output/mask_rcnn_r50_fpn_2x/model_final -metric: COCO -num_classes: 81 - -MaskRCNN: - backbone: ResNet - fpn: FPN - rpn_head: FPNRPNHead - roi_extractor: FPNRoIAlign - bbox_head: BBoxHead - bbox_assigner: BBoxAssigner - -ResNet: - depth: 50 - feature_maps: [2, 3, 4, 5] - freeze_at: 2 - norm_type: affine_channel - -FPN: - max_level: 6 - min_level: 2 - num_chan: 256 - spatial_scale: [0.03125, 0.0625, 0.125, 0.25] - -FPNRPNHead: - anchor_generator: - aspect_ratios: [0.5, 1.0, 2.0] - variance: [1.0, 1.0, 1.0, 1.0] - anchor_start_size: 32 - max_level: 6 - min_level: 2 - num_chan: 256 - rpn_target_assign: - rpn_batch_size_per_im: 256 - rpn_fg_fraction: 0.5 - rpn_negative_overlap: 0.3 - rpn_positive_overlap: 0.7 - rpn_straddle_thresh: 0.0 - train_proposal: - min_size: 0.0 - nms_thresh: 0.7 - pre_nms_top_n: 2000 - post_nms_top_n: 2000 - test_proposal: - min_size: 0.0 - nms_thresh: 0.7 - pre_nms_top_n: 1000 - post_nms_top_n: 1000 - -FPNRoIAlign: - canconical_level: 4 - canonical_size: 224 - max_level: 5 - min_level: 2 - sampling_ratio: 2 - box_resolution: 7 - mask_resolution: 14 - -MaskHead: - dilation: 1 - conv_dim: 256 - num_convs: 4 - resolution: 28 - -BBoxAssigner: - batch_size_per_im: 512 - bbox_reg_weights: [0.1, 0.1, 0.2, 0.2] - bg_thresh_hi: 0.5 - bg_thresh_lo: 0.0 - fg_fraction: 0.25 - fg_thresh: 0.5 - -MaskAssigner: - resolution: 28 - -BBoxHead: - head: TwoFCHead - nms: - keep_top_k: 100 - nms_threshold: 0.5 - score_threshold: 0.05 - -TwoFCHead: - mlp_dim: 1024 - -LearningRate: - base_lr: 0.01 - schedulers: - - !PiecewiseDecay - gamma: 0.1 - milestones: [240000, 320000] - - !LinearWarmup - start_factor: 0.3333333333333333 - steps: 500 - -OptimizerBuilder: - optimizer: - momentum: 0.9 - type: Momentum - regularizer: - factor: 0.0001 - type: L2 - -_READER_: 'mask_fpn_reader.yml' diff --git a/static/configs/mask_rcnn_r50_vd_fpn_2x.yml b/static/configs/mask_rcnn_r50_vd_fpn_2x.yml deleted file mode 100644 index 5f46b475b83aa2925783f70ccff63c745d23a16a..0000000000000000000000000000000000000000 --- a/static/configs/mask_rcnn_r50_vd_fpn_2x.yml +++ /dev/null @@ -1,112 +0,0 @@ -architecture: MaskRCNN -use_gpu: true -max_iters: 360000 -snapshot_iter: 10000 -log_iter: 20 -save_dir: output -pretrain_weights: https://paddle-imagenet-models-name.bj.bcebos.com/ResNet50_vd_pretrained.tar -metric: COCO -weights: output/mask_rcnn_r50_vd_fpn_2x/model_final -num_classes: 81 - -MaskRCNN: - backbone: ResNet - fpn: FPN - rpn_head: FPNRPNHead - roi_extractor: FPNRoIAlign - bbox_head: BBoxHead - bbox_assigner: BBoxAssigner - -ResNet: - depth: 50 - feature_maps: [2, 3, 4, 5] - freeze_at: 2 - norm_type: affine_channel - variant: d - -FPN: - max_level: 6 - min_level: 2 - num_chan: 256 - spatial_scale: [0.03125, 0.0625, 0.125, 0.25] - -FPNRPNHead: - anchor_generator: - aspect_ratios: [0.5, 1.0, 2.0] - variance: [1.0, 1.0, 1.0, 1.0] - 
anchor_start_size: 32 - max_level: 6 - min_level: 2 - num_chan: 256 - rpn_target_assign: - rpn_batch_size_per_im: 256 - rpn_fg_fraction: 0.5 - rpn_negative_overlap: 0.3 - rpn_positive_overlap: 0.7 - rpn_straddle_thresh: 0.0 - train_proposal: - min_size: 0.0 - nms_thresh: 0.7 - pre_nms_top_n: 2000 - post_nms_top_n: 2000 - test_proposal: - min_size: 0.0 - nms_thresh: 0.7 - pre_nms_top_n: 1000 - post_nms_top_n: 1000 - -FPNRoIAlign: - canconical_level: 4 - canonical_size: 224 - max_level: 5 - min_level: 2 - box_resolution: 7 - sampling_ratio: 2 - mask_resolution: 14 - -MaskHead: - dilation: 1 - conv_dim: 256 - num_convs: 4 - resolution: 28 - -BBoxAssigner: - batch_size_per_im: 512 - bbox_reg_weights: [0.1, 0.1, 0.2, 0.2] - bg_thresh_hi: 0.5 - bg_thresh_lo: 0.0 - fg_fraction: 0.25 - fg_thresh: 0.5 - -MaskAssigner: - resolution: 28 - -BBoxHead: - head: TwoFCHead - nms: - keep_top_k: 100 - nms_threshold: 0.5 - score_threshold: 0.05 - -TwoFCHead: - mlp_dim: 1024 - -LearningRate: - base_lr: 0.01 - schedulers: - - !PiecewiseDecay - gamma: 0.1 - milestones: [240000, 320000] - - !LinearWarmup - start_factor: 0.3333333333333333 - steps: 500 - -OptimizerBuilder: - optimizer: - momentum: 0.9 - type: Momentum - regularizer: - factor: 0.0001 - type: L2 - -_READER_: 'mask_fpn_reader.yml' diff --git a/static/configs/mask_rcnn_se154_vd_fpn_s1x.yml b/static/configs/mask_rcnn_se154_vd_fpn_s1x.yml deleted file mode 100644 index fe973ececb8545bf05b1772bff910ca40ac3bc2c..0000000000000000000000000000000000000000 --- a/static/configs/mask_rcnn_se154_vd_fpn_s1x.yml +++ /dev/null @@ -1,114 +0,0 @@ -architecture: MaskRCNN -max_iters: 260000 -snapshot_iter: 10000 -use_gpu: true -log_iter: 20 -save_dir: output -pretrain_weights: https://paddle-imagenet-models-name.bj.bcebos.com/SENet154_vd_pretrained.tar -weights: output/mask_rcnn_se154_vd_fpn_s1x/model_final -metric: COCO -num_classes: 81 - -MaskRCNN: - backbone: SENet - fpn: FPN - rpn_head: FPNRPNHead - roi_extractor: FPNRoIAlign - bbox_head: BBoxHead - bbox_assigner: BBoxAssigner - -SENet: - depth: 152 - feature_maps: [2, 3, 4, 5] - freeze_at: 2 - group_width: 4 - groups: 64 - norm_type: affine_channel - variant: d - -FPN: - max_level: 6 - min_level: 2 - num_chan: 256 - spatial_scale: [0.03125, 0.0625, 0.125, 0.25] - -FPNRPNHead: - anchor_generator: - aspect_ratios: [0.5, 1.0, 2.0] - variance: [1.0, 1.0, 1.0, 1.0] - anchor_start_size: 32 - max_level: 6 - min_level: 2 - num_chan: 256 - rpn_target_assign: - rpn_batch_size_per_im: 256 - rpn_fg_fraction: 0.5 - rpn_negative_overlap: 0.3 - rpn_positive_overlap: 0.7 - rpn_straddle_thresh: 0.0 - train_proposal: - min_size: 0.0 - nms_thresh: 0.7 - pre_nms_top_n: 2000 - post_nms_top_n: 2000 - test_proposal: - min_size: 0.0 - nms_thresh: 0.7 - pre_nms_top_n: 1000 - post_nms_top_n: 1000 - -FPNRoIAlign: - canconical_level: 4 - canonical_size: 224 - max_level: 5 - min_level: 2 - box_resolution: 7 - sampling_ratio: 2 - mask_resolution: 14 - -MaskHead: - dilation: 1 - conv_dim: 256 - num_convs: 4 - resolution: 28 - -BBoxAssigner: - batch_size_per_im: 512 - bbox_reg_weights: [0.1, 0.1, 0.2, 0.2] - bg_thresh_hi: 0.5 - bg_thresh_lo: 0.0 - fg_fraction: 0.25 - fg_thresh: 0.5 - -MaskAssigner: - resolution: 28 - -BBoxHead: - head: TwoFCHead - nms: - keep_top_k: 100 - nms_threshold: 0.5 - score_threshold: 0.05 - -TwoFCHead: - mlp_dim: 1024 - -LearningRate: - base_lr: 0.01 - schedulers: - - !PiecewiseDecay - gamma: 0.1 - milestones: [200000, 240000] - - !LinearWarmup - start_factor: 0.1 - steps: 1000 - -OptimizerBuilder: - optimizer: - 
momentum: 0.9 - type: Momentum - regularizer: - factor: 0.0001 - type: L2 - -_READER_: 'mask_fpn_reader.yml' diff --git a/static/configs/mask_rcnn_x101_vd_64x4d_fpn_1x.yml b/static/configs/mask_rcnn_x101_vd_64x4d_fpn_1x.yml deleted file mode 100644 index 315dc2daf13759cc0b168768fbd80e2aab99e969..0000000000000000000000000000000000000000 --- a/static/configs/mask_rcnn_x101_vd_64x4d_fpn_1x.yml +++ /dev/null @@ -1,114 +0,0 @@ -architecture: MaskRCNN -max_iters: 180000 -snapshot_iter: 10000 -use_gpu: true -log_iter: 20 -save_dir: output -pretrain_weights: https://paddle-imagenet-models-name.bj.bcebos.com/ResNeXt101_vd_64x4d_pretrained.tar -weights: output/mask_rcnn_x101_vd_64x4d_fpn_1x/model_final -metric: COCO -num_classes: 81 - -MaskRCNN: - backbone: ResNeXt - fpn: FPN - rpn_head: FPNRPNHead - roi_extractor: FPNRoIAlign - bbox_head: BBoxHead - bbox_assigner: BBoxAssigner - -ResNeXt: - depth: 101 - feature_maps: [2, 3, 4, 5] - freeze_at: 2 - group_width: 4 - groups: 64 - norm_type: affine_channel - variant: d - -FPN: - max_level: 6 - min_level: 2 - num_chan: 256 - spatial_scale: [0.03125, 0.0625, 0.125, 0.25] - -FPNRPNHead: - anchor_generator: - aspect_ratios: [0.5, 1.0, 2.0] - variance: [1.0, 1.0, 1.0, 1.0] - anchor_start_size: 32 - max_level: 6 - min_level: 2 - num_chan: 256 - rpn_target_assign: - rpn_batch_size_per_im: 256 - rpn_fg_fraction: 0.5 - rpn_negative_overlap: 0.3 - rpn_positive_overlap: 0.7 - rpn_straddle_thresh: 0.0 - train_proposal: - min_size: 0.0 - nms_thresh: 0.7 - pre_nms_top_n: 2000 - post_nms_top_n: 2000 - test_proposal: - min_size: 0.0 - nms_thresh: 0.7 - pre_nms_top_n: 1000 - post_nms_top_n: 1000 - -FPNRoIAlign: - canconical_level: 4 - canonical_size: 224 - max_level: 5 - min_level: 2 - sampling_ratio: 2 - box_resolution: 7 - mask_resolution: 14 - -MaskHead: - dilation: 1 - conv_dim: 256 - num_convs: 4 - resolution: 28 - -BBoxAssigner: - batch_size_per_im: 512 - bbox_reg_weights: [0.1, 0.1, 0.2, 0.2] - bg_thresh_hi: 0.5 - bg_thresh_lo: 0.0 - fg_fraction: 0.25 - fg_thresh: 0.5 - -MaskAssigner: - resolution: 28 - -BBoxHead: - head: TwoFCHead - nms: - keep_top_k: 100 - nms_threshold: 0.5 - score_threshold: 0.05 - -TwoFCHead: - mlp_dim: 1024 - -LearningRate: - base_lr: 0.01 - schedulers: - - !PiecewiseDecay - gamma: 0.1 - milestones: [120000, 160000] - - !LinearWarmup - start_factor: 0.1 - steps: 1000 - -OptimizerBuilder: - optimizer: - momentum: 0.9 - type: Momentum - regularizer: - factor: 0.0001 - type: L2 - -_READER_: 'mask_fpn_reader.yml' diff --git a/static/configs/mask_rcnn_x101_vd_64x4d_fpn_2x.yml b/static/configs/mask_rcnn_x101_vd_64x4d_fpn_2x.yml deleted file mode 100644 index 3630c269ca10a1d279228d500e16ccce266fd957..0000000000000000000000000000000000000000 --- a/static/configs/mask_rcnn_x101_vd_64x4d_fpn_2x.yml +++ /dev/null @@ -1,114 +0,0 @@ -architecture: MaskRCNN -max_iters: 360000 -snapshot_iter: 10000 -use_gpu: true -log_iter: 20 -save_dir: output -pretrain_weights: https://paddle-imagenet-models-name.bj.bcebos.com/ResNeXt101_vd_64x4d_pretrained.tar -weights: output/mask_rcnn_x101_vd_64x4d_fpn_2x/model_final -metric: COCO -num_classes: 81 - -MaskRCNN: - backbone: ResNeXt - fpn: FPN - rpn_head: FPNRPNHead - roi_extractor: FPNRoIAlign - bbox_head: BBoxHead - bbox_assigner: BBoxAssigner - -ResNeXt: - depth: 101 - feature_maps: [2, 3, 4, 5] - freeze_at: 2 - group_width: 4 - groups: 64 - norm_type: affine_channel - variant: d - -FPN: - max_level: 6 - min_level: 2 - num_chan: 256 - spatial_scale: [0.03125, 0.0625, 0.125, 0.25] - -FPNRPNHead: - anchor_generator: - 
aspect_ratios: [0.5, 1.0, 2.0] - variance: [1.0, 1.0, 1.0, 1.0] - anchor_start_size: 32 - max_level: 6 - min_level: 2 - num_chan: 256 - rpn_target_assign: - rpn_batch_size_per_im: 256 - rpn_fg_fraction: 0.5 - rpn_negative_overlap: 0.3 - rpn_positive_overlap: 0.7 - rpn_straddle_thresh: 0.0 - train_proposal: - min_size: 0.0 - nms_thresh: 0.7 - pre_nms_top_n: 2000 - post_nms_top_n: 2000 - test_proposal: - min_size: 0.0 - nms_thresh: 0.7 - pre_nms_top_n: 1000 - post_nms_top_n: 1000 - -FPNRoIAlign: - canconical_level: 4 - canonical_size: 224 - max_level: 5 - min_level: 2 - sampling_ratio: 2 - box_resolution: 7 - mask_resolution: 14 - -MaskHead: - dilation: 1 - conv_dim: 256 - num_convs: 4 - resolution: 28 - -BBoxAssigner: - batch_size_per_im: 512 - bbox_reg_weights: [0.1, 0.1, 0.2, 0.2] - bg_thresh_hi: 0.5 - bg_thresh_lo: 0.0 - fg_fraction: 0.25 - fg_thresh: 0.5 - -MaskAssigner: - resolution: 28 - -BBoxHead: - head: TwoFCHead - nms: - keep_top_k: 100 - nms_threshold: 0.5 - score_threshold: 0.05 - -TwoFCHead: - mlp_dim: 1024 - -LearningRate: - base_lr: 0.01 - schedulers: - - !PiecewiseDecay - gamma: 0.1 - milestones: [240000, 320000] - - !LinearWarmup - start_factor: 0.1 - steps: 1000 - -OptimizerBuilder: - optimizer: - momentum: 0.9 - type: Momentum - regularizer: - factor: 0.0001 - type: L2 - -_READER_: 'mask_fpn_reader.yml' diff --git a/static/configs/mask_reader.yml b/static/configs/mask_reader.yml deleted file mode 100644 index 165a09b82bb448dede9273ae3b7da297e318c131..0000000000000000000000000000000000000000 --- a/static/configs/mask_reader.yml +++ /dev/null @@ -1,95 +0,0 @@ -TrainReader: - inputs_def: - fields: ['image', 'im_info', 'im_id', 'gt_bbox', 'gt_class', 'is_crowd', 'gt_mask'] - dataset: - !COCODataSet - image_dir: train2017 - anno_path: annotations/instances_train2017.json - dataset_dir: dataset/coco - sample_transforms: - - !DecodeImage - to_rgb: true - - !RandomFlipImage - prob: 0.5 - is_mask_flip: true - - !NormalizeImage - is_channel_first: false - is_scale: true - mean: [0.485,0.456,0.406] - std: [0.229, 0.224,0.225] - - !ResizeImage - target_size: 800 - max_size: 1333 - interp: 1 - use_cv2: true - - !Permute - to_bgr: false - channel_first: true - batch_transforms: - - !PadBatch - pad_to_stride: -1 - use_padded_im_info: false - batch_size: 1 - shuffle: true - worker_num: 2 - drop_last: false - use_process: false - -EvalReader: - inputs_def: - fields: ['image', 'im_info', 'im_id', 'im_shape'] - # for voc - #fields: ['image', 'im_info', 'im_id', 'gt_bbox', 'gt_class', 'is_difficult'] - dataset: - !COCODataSet - image_dir: val2017 - anno_path: annotations/instances_val2017.json - dataset_dir: dataset/coco - sample_transforms: - - !DecodeImage - to_rgb: true - - !NormalizeImage - is_channel_first: false - is_scale: true - mean: [0.485,0.456,0.406] - std: [0.229, 0.224,0.225] - - !ResizeImage - interp: 1 - max_size: 1333 - target_size: 800 - use_cv2: true - - !Permute - channel_first: true - to_bgr: false - batch_size: 1 - shuffle: false - drop_last: false - drop_empty: false - worker_num: 2 - -TestReader: - inputs_def: - fields: ['image', 'im_info', 'im_id', 'im_shape'] - dataset: - !ImageFolder - anno_path: annotations/instances_val2017.json - sample_transforms: - - !DecodeImage - to_rgb: true - with_mixup: false - - !NormalizeImage - is_channel_first: false - is_scale: true - mean: [0.485,0.456,0.406] - std: [0.229, 0.224,0.225] - - !ResizeImage - interp: 1 - max_size: 1333 - target_size: 800 - use_cv2: true - - !Permute - channel_first: true - to_bgr: false - batch_size: 1 
- shuffle: false - drop_last: false diff --git a/static/configs/mask_reader_cocome.yml b/static/configs/mask_reader_cocome.yml deleted file mode 100644 index 1b44491c5c3a7dfbc12138c682ac00b24946ef4a..0000000000000000000000000000000000000000 --- a/static/configs/mask_reader_cocome.yml +++ /dev/null @@ -1,95 +0,0 @@ -TrainReader: - inputs_def: - fields: ['image', 'im_info', 'im_id', 'gt_bbox', 'gt_class', 'is_crowd', 'gt_mask'] - dataset: - !COCODataSet - image_dir: train - anno_path: annotations/instances_split_train.json - dataset_dir: dataset/cocome - sample_transforms: - - !DecodeImage - to_rgb: true - - !RandomFlipImage - prob: 0.5 - is_mask_flip: true - - !NormalizeImage - is_channel_first: false - is_scale: true - mean: [0.485,0.456,0.406] - std: [0.229, 0.224,0.225] - - !ResizeImage - target_size: 800 - max_size: 1333 - interp: 1 - use_cv2: true - - !Permute - to_bgr: false - channel_first: true - batch_transforms: - - !PadBatch - pad_to_stride: -1 - use_padded_im_info: false - batch_size: 1 - shuffle: true - worker_num: 2 - drop_last: false - use_process: false - -EvalReader: - inputs_def: - fields: ['image', 'im_info', 'im_id', 'im_shape'] - # for voc - #fields: ['image', 'im_info', 'im_id', 'gt_bbox', 'gt_class', 'is_difficult'] - dataset: - !COCODataSet - image_dir: train - anno_path: annotations/instances_split_val.json - dataset_dir: dataset/cocome - sample_transforms: - - !DecodeImage - to_rgb: true - - !NormalizeImage - is_channel_first: false - is_scale: true - mean: [0.485,0.456,0.406] - std: [0.229, 0.224,0.225] - - !ResizeImage - interp: 1 - max_size: 1333 - target_size: 800 - use_cv2: true - - !Permute - channel_first: true - to_bgr: false - batch_size: 1 - shuffle: false - drop_last: false - drop_empty: false - worker_num: 2 - -TestReader: - inputs_def: - fields: ['image', 'im_info', 'im_id', 'im_shape'] - dataset: - !ImageFolder - anno_path: dataset/cocome/annotations/instances_split_val.json - sample_transforms: - - !DecodeImage - to_rgb: true - with_mixup: false - - !NormalizeImage - is_channel_first: false - is_scale: true - mean: [0.485,0.456,0.406] - std: [0.229, 0.224,0.225] - - !ResizeImage - interp: 1 - max_size: 1333 - target_size: 800 - use_cv2: true - - !Permute - channel_first: true - to_bgr: false - batch_size: 1 - shuffle: false - drop_last: false diff --git a/static/configs/mobile/README.md b/static/configs/mobile/README.md deleted file mode 100755 index 24db1c98cd0c182d653e715919a7fcc7f9f60579..0000000000000000000000000000000000000000 --- a/static/configs/mobile/README.md +++ /dev/null @@ -1,97 +0,0 @@ -[English](README_en.md) | 简体中文 - -# 移动端模型库 - - -## 模型 - -PaddleDetection目前提供一系列针对移动应用进行优化的模型,主要支持以下结构: - -| 骨干网络 | 结构 | 输入大小 | 图片/gpu [1](#gpu) | 学习率策略 | Box AP | 下载 | PaddleLite模型下载 | -| :----------------------- | :------------------------ | :---: | :--------------------: | :------------ | :----: | :--- | :----------------- | -| MobileNetV3 Small | SSDLite | 320 | 64 | 400K (cosine) | 16.2 | [链接](https://paddlemodels.bj.bcebos.com/object_detection/mobile_models/ssdlite_mobilenet_v3_small.pdparams) | [链接](https://paddlemodels.bj.bcebos.com/object_detection/mobile_models/lite/ssdlite_mobilenet_v3_small.tar) | -| MobileNetV3 Small | SSDLite Quant [2](#quant) | 320 | 64 | 400K (cosine) | 15.4 | [链接](https://paddlemodels.bj.bcebos.com/object_detection/mobile_models/ssdlite_mobilenet_v3_small_quant.tar) | [链接](https://paddlemodels.bj.bcebos.com/object_detection/mobile_models/lite/ssdlite_mobilenet_v3_small_quant.tar) | -| MobileNetV3 Large | SSDLite | 320 | 
64 | 400K (cosine) | 23.3 | [链接](https://paddlemodels.bj.bcebos.com/object_detection/mobile_models/ssdlite_mobilenet_v3_large.pdparams) | [链接](https://paddlemodels.bj.bcebos.com/object_detection/mobile_models/lite/ssdlite_mobilenet_v3_large.tar) | -| MobileNetV3 Large | SSDLite Quant [2](#quant) | 320 | 64 | 400K (cosine) | 22.6 | [链接](https://paddlemodels.bj.bcebos.com/object_detection/mobile_models/ssdlite_mobilenet_v3_large_quant.tar) | [链接](https://paddlemodels.bj.bcebos.com/object_detection/mobile_models/lite/ssdlite_mobilenet_v3_large_quant.tar) | -| MobileNetV3 Large w/ FPN | Cascade RCNN | 320 | 2 | 500k (cosine) | 25.0 | [链接](https://paddlemodels.bj.bcebos.com/object_detection/mobile_models/cascade_rcnn_mobilenetv3_fpn_320.tar) | [链接](https://paddlemodels.bj.bcebos.com/object_detection/mobile_models/lite/cascade_rcnn_mobilenetv3_fpn_320.tar) | -| MobileNetV3 Large w/ FPN | Cascade RCNN | 640 | 2 | 500k (cosine) | 30.2 | [链接](https://paddlemodels.bj.bcebos.com/object_detection/mobile_models/cascade_rcnn_mobilenetv3_fpn_640.tar) | [链接](https://paddlemodels.bj.bcebos.com/object_detection/mobile_models/lite/cascade_rcnn_mobilenetv3_fpn_640.tar) | -| MobileNetV3 Large | YOLOv3 | 320 | 8 | 500K | 27.1 | [链接](https://paddlemodels.bj.bcebos.com/object_detection/yolov3_mobilenet_v3.pdparams) | [链接](https://paddlemodels.bj.bcebos.com/object_detection/mobile_models/lite/yolov3_mobilenet_v3.tar) | -| MobileNetV3 Large | YOLOv3 Prune [3](#prune) | 320 | 8 | - | 24.6 | [链接](https://paddlemodels.bj.bcebos.com/object_detection/mobile_models/yolov3_mobilenet_v3_prune75875_FPGM_distillby_r34.pdparams) | [链接](https://paddlemodels.bj.bcebos.com/object_detection/mobile_models/lite/yolov3_mobilenet_v3_prune86_FPGM_320.tar) | - -**注意**: - -- [1] 模型统一使用8卡训练。 -- [2] 参考下面关于[SSDLite量化的说明](#SSDLite量化说明)。 -- [3] 参考下面关于[YOLO剪裁的说明](#YOLOv3剪裁说明)。 - - -## 评测结果 - -- 模型使用 [Paddle-Lite](https://github.com/PaddlePaddle/Paddle-Lite) 2.6 (即将发布) 在下列平台上进行了测试 - - Qualcomm Snapdragon 625 - - Qualcomm Snapdragon 835 - - Qualcomm Snapdragon 845 - - Qualcomm Snapdragon 855 - - HiSilicon Kirin 970 - - HiSilicon Kirin 980 - -- 单CPU线程 (单位: ms) - -| | SD625 | SD835 | SD845 | SD855 | Kirin 970 | Kirin 980 | -|------------------|---------|---------|---------|---------|-----------|-----------| -| SSDLite Large | 289.071 | 134.408 | 91.933 | 48.2206 | 144.914 | 55.1186 | -| SSDLite Large Quant | | | | | | | -| SSDLite Small | 122.932 | 57.1914 | 41.003 | 22.0694 | 61.5468 | 25.2106 | -| SSDLite Small Quant | | | | | | | -| YOLOv3 baseline | 1082.5 | 435.77 | 317.189 | 155.948 | 536.987 | 178.999 | -| YOLOv3 prune | 253.98 | 131.279 | 89.4124 | 48.2856 | 122.732 | 55.8626 | -| Cascade RCNN 320 | 286.526 | 125.635 | 87.404 | 46.184 | 149.179 | 52.9994 | -| Cascade RCNN 640 | 1115.66 | 495.926 | 351.361 | 189.722 | 573.558 | 207.917 | - -- 4 CPU线程 (单位: ms) - -| | SD625 | SD835 | SD845 | SD855 | Kirin 970 | Kirin 980 | -|------------------|---------|---------|---------|---------|-----------|-----------| -| SSDLite Large | 107.535 | 51.1382 | 34.6392 | 20.4978 | 50.5598 | 24.5318 | -| SSDLite Large Quant | | | | | | | -| SSDLite Small | 51.5704 | 24.5156 | 18.5486 | 11.4218 | 24.9946 | 16.7158 | -| SSDLite Small Quant | | | | | | | -| YOLOv3 baseline | 413.486 | 184.248 | 133.624 | 75.7354 | 202.263 | 126.435 | -| YOLOv3 prune | 98.5472 | 53.6228 | 34.4306 | 21.3112 | 44.0722 | 31.201 | -| Cascade RCNN 320 | 131.515 | 59.6026 | 39.4338 | 23.5802 | 58.5046 | 36.9486 | -| Cascade RCNN 640 | 473.083 | 224.543 | 156.205 | 100.686 | 231.108 | 
138.391 | - -## SSDLite量化说明 - -在SSDLite模型中我们采用完整量化训练的方式对模型进行训练,在8卡GPU下共训练40万轮,训练中将`res_conv1`与`se_block`固定不训练,执行指令为: - -```shell -python slim/quantization/train.py --not_quant_pattern res_conv1 se_block \ - -c configs/ssd/ssdlite_mobilenet_v3_large.yml \ - --eval -``` -更多量化教程请参考[模型量化压缩教程](../../docs/advanced_tutorials/slim/quantization/QUANTIZATION.md) - -## YOLOv3剪裁说明 - -首先对YOLO检测头进行剪裁,然后再使用 YOLOv3-ResNet34 作为teacher网络对剪裁后的模型进行蒸馏, teacher网络在COCO上的mAP为31.4 (输入大小320\*320). - -可以使用如下两种方式进行剪裁: - -- 固定比例剪裁, 整体剪裁率是86% - - ```shell - --pruned_params="yolo_block.0.0.0.conv.weights,yolo_block.0.0.1.conv.weights,yolo_block.0.1.0.conv.weights,yolo_block.0.1.1.conv.weights,yolo_block.0.2.conv.weights,yolo_block.0.tip.conv.weights,yolo_block.1.0.0.conv.weights,yolo_block.1.0.1.conv.weights,yolo_block.1.1.0.conv.weights,yolo_block.1.1.1.conv.weights,yolo_block.1.2.conv.weights,yolo_block.1.tip.conv.weights,yolo_block.2.0.0.conv.weights,yolo_block.2.0.1.conv.weights,yolo_block.2.1.0.conv.weights,yolo_block.2.1.1.conv.weights,yolo_block.2.2.conv.weights,yolo_block.2.tip.conv.weights" \ - --pruned_ratios="0.75,0.75,0.75,0.75,0.75,0.75,0.75,0.75,0.75,0.75,0.75,0.75,0.875,0.875,0.875,0.875,0.875,0.875" - ``` -- 使用 [FPGM](https://arxiv.org/abs/1811.00250) 算法剪裁: - - ```shell - --prune_criterion=geometry_median - ``` - - -## 敬请关注后续发布 - -- [ ] 更多模型 -- [ ] 量化模型 diff --git a/static/configs/mobile/README_en.md b/static/configs/mobile/README_en.md deleted file mode 100644 index 133afffe91de71c19f39d92bffdbbf234d1b26b6..0000000000000000000000000000000000000000 --- a/static/configs/mobile/README_en.md +++ /dev/null @@ -1,99 +0,0 @@ -English | [简体中文](README.md) - -# Mobile Model Zoo - - -## Models - -This directory contains models optimized for mobile applications, at present the following models included: - -| Backbone | Architecture | Input | Image/gpu [1](#gpu) | Lr schd | Box AP | Download | PaddleLite Model Download | -| :----------------------- | :------------------------ | :---: | :--------------------: | :------------ | :----: | :------- | :------------------------ | -| MobileNetV3 Small | SSDLite | 320 | 64 | 400K (cosine) | 16.2 | [Link](https://paddlemodels.bj.bcebos.com/object_detection/mobile_models/ssdlite_mobilenet_v3_small.pdparam) | [Link](https://paddlemodels.bj.bcebos.com/object_detection/mobile_models/lite/ssdlite_mobilenet_v3_small.tar) | -| MobileNetV3 Small | SSDLite Quant [2](#quant) | 320 | 64 | 400K (cosine) | 15.4 | [Link](https://paddlemodels.bj.bcebos.com/object_detection/mobile_models/ssdlite_mobilenet_v3_small_quant.tar) | [Link](https://paddlemodels.bj.bcebos.com/object_detection/mobile_models/lite/ssdlite_mobilenet_v3_small_quant.tar) | -| MobileNetV3 Large | SSDLite | 320 | 64 | 400K (cosine) | 23.3 | [Link](https://paddlemodels.bj.bcebos.com/object_detection/mobile_models/ssdlite_mobilenet_v3_large.pdparam) | [Link](https://paddlemodels.bj.bcebos.com/object_detection/mobile_models/lite/ssdlite_mobilenet_v3_large.tar) | -| MobileNetV3 Large | SSDLite Quant [2](#quant) | 320 | 64 | 400K (cosine) | 22.6 | [Link](https://paddlemodels.bj.bcebos.com/object_detection/mobile_models/ssdlite_mobilenet_v3_large_quant.tar) | [Link](https://paddlemodels.bj.bcebos.com/object_detection/mobile_models/lite/ssdlite_mobilenet_v3_large_quant.tar) | -| MobileNetV3 Large w/ FPN | Cascade RCNN | 320 | 2 | 500k (cosine) | 25.0 | [Link](https://paddlemodels.bj.bcebos.com/object_detection/mobile_models/cascade_rcnn_mobilenetv3_fpn_320.tar) | 
[Link](https://paddlemodels.bj.bcebos.com/object_detection/mobile_models/lite/cascade_rcnn_mobilenetv3_fpn_320.tar) | -| MobileNetV3 Large w/ FPN | Cascade RCNN | 640 | 2 | 500k (cosine) | 30.2 | [Link](https://paddlemodels.bj.bcebos.com/object_detection/mobile_models/cascade_rcnn_mobilenetv3_fpn_640.tar) | [Link](https://paddlemodels.bj.bcebos.com/object_detection/mobile_models/lite/cascade_rcnn_mobilenetv3_fpn_640.tar) | -| MobileNetV3 Large | YOLOv3 | 320 | 8 | 500K | 27.1 | [Link](https://paddlemodels.bj.bcebos.com/object_detection/yolov3_mobilenet_v3.pdparams) | [Link](https://paddlemodels.bj.bcebos.com/object_detection/mobile_models/lite/yolov3_mobilenet_v3.tar) | -| MobileNetV3 Large | YOLOv3 Prune [3](#prune) | 320 | 8 | - | 24.6 | [Link](https://paddlemodels.bj.bcebos.com/object_detection/mobile_models/yolov3_mobilenet_v3_prune75875_FPGM_distillby_r34.pdparams) | [Link](https://paddlemodels.bj.bcebos.com/object_detection/mobile_models/lite/yolov3_mobilenet_v3_prune86_FPGM_320.tar) | - -**Notes**: - -- [1] All models are trained on 8 GPUs. -- [2] See the note section on [SSDLite quantization](#notes-on-ssdlite-quantization). -- [3] See the note section on [how YOLO head is pruned](#notes-on-yolov3-pruning). - - -## Benchmark Results - -- Models are benchmarked on the following chipsets with [Paddle-Lite](https://github.com/PaddlePaddle/Paddle-Lite) 2.6 (to be released) - - Qualcomm Snapdragon 625 - - Qualcomm Snapdragon 835 - - Qualcomm Snapdragon 845 - - Qualcomm Snapdragon 855 - - HiSilicon Kirin 970 - - HiSilicon Kirin 980 - -- With 1 CPU thread (latency numbers are in ms) - - | | SD625 | SD835 | SD845 | SD855 | Kirin 970 | Kirin 980 | - |------------------|---------|---------|---------|---------|-----------|-----------| - | SSDLite Large | 289.071 | 134.408 | 91.933 | 48.2206 | 144.914 | 55.1186 | - | SSDLite Large Quant | | | | | | | - | SSDLite Small | 122.932 | 57.1914 | 41.003 | 22.0694 | 61.5468 | 25.2106 | - | SSDLite Small Quant | | | | | | | - | YOLOv3 baseline | 1082.5 | 435.77 | 317.189 | 155.948 | 536.987 | 178.999 | - | YOLOv3 prune | 253.98 | 131.279 | 89.4124 | 48.2856 | 122.732 | 55.8626 | - | Cascade RCNN 320 | 286.526 | 125.635 | 87.404 | 46.184 | 149.179 | 52.9994 | - | Cascade RCNN 640 | 1115.66 | 495.926 | 351.361 | 189.722 | 573.558 | 207.917 | - -- With 4 CPU threads (latency numbers are in ms) - - | | SD625 | SD835 | SD845 | SD855 | Kirin 970 | Kirin 980 | - |------------------|---------|---------|---------|---------|-----------|-----------| - | SSDLite Large | 107.535 | 51.1382 | 34.6392 | 20.4978 | 50.5598 | 24.5318 | - | SSDLite Large Quant | | | | | | | - | SSDLite Small | 51.5704 | 24.5156 | 18.5486 | 11.4218 | 24.9946 | 16.7158 | - | SSDLite Small Quant | | | | | | | - | YOLOv3 baseline | 413.486 | 184.248 | 133.624 | 75.7354 | 202.263 | 126.435 | - | YOLOv3 prune | 98.5472 | 53.6228 | 34.4306 | 21.3112 | 44.0722 | 31.201 | - | Cascade RCNN 320 | 131.515 | 59.6026 | 39.4338 | 23.5802 | 58.5046 | 36.9486 | - | Cascade RCNN 640 | 473.083 | 224.543 | 156.205 | 100.686 | 231.108 | 138.391 | - - -## Notes on SSDLite quantization - -We train the SSDLite models with full quantization-aware training for a total of 400,000 iterations on 8 GPUs, with `res_conv1` and `se_block` excluded from quantization. 
The command used is listed below: - -```shell -python slim/quantization/train.py --not_quant_pattern res_conv1 se_block \ - -c configs/ssd/ssdlite_mobilenet_v3_large.yml \ - --eval -``` - -For more quantization tutorials, please refer to the [Model Quantization Compression Tutorial](../../docs/advanced_tutorials/slim/quantization/QUANTIZATION.md) - -## Notes on YOLOv3 pruning - -We prune the YOLO head, then distill the pruned model using YOLOv3-ResNet34 as the teacher network, which has a higher mAP on COCO (31.4 with 320\*320 input). - -The following configurations can be used for pruning: - -- Prune with fixed per-layer ratios; the overall prune ratio is 86% - - ```shell - --pruned_params="yolo_block.0.0.0.conv.weights,yolo_block.0.0.1.conv.weights,yolo_block.0.1.0.conv.weights,yolo_block.0.1.1.conv.weights,yolo_block.0.2.conv.weights,yolo_block.0.tip.conv.weights,yolo_block.1.0.0.conv.weights,yolo_block.1.0.1.conv.weights,yolo_block.1.1.0.conv.weights,yolo_block.1.1.1.conv.weights,yolo_block.1.2.conv.weights,yolo_block.1.tip.conv.weights,yolo_block.2.0.0.conv.weights,yolo_block.2.0.1.conv.weights,yolo_block.2.1.0.conv.weights,yolo_block.2.1.1.conv.weights,yolo_block.2.2.conv.weights,yolo_block.2.tip.conv.weights" \ - --pruned_ratios="0.75,0.75,0.75,0.75,0.75,0.75,0.75,0.75,0.75,0.75,0.75,0.75,0.875,0.875,0.875,0.875,0.875,0.875" - ``` -- Prune filters using the [FPGM](https://arxiv.org/abs/1811.00250) algorithm: - - ```shell - --prune_criterion=geometry_median - ``` - - -## Upcoming - -- [ ] More model configurations -- [ ] Quantized models diff --git a/static/configs/mobile/cascade_rcnn_mobilenetv3_fpn_320.yml b/static/configs/mobile/cascade_rcnn_mobilenetv3_fpn_320.yml deleted file mode 100644 index e02b5ac6803c3570c400780c6c0a6ac594a5b048..0000000000000000000000000000000000000000 --- a/static/configs/mobile/cascade_rcnn_mobilenetv3_fpn_320.yml +++ /dev/null @@ -1,219 +0,0 @@ -architecture: CascadeRCNN -max_iters: 500000 -snapshot_iter: 50000 -use_gpu: true -log_iter: 20 -save_dir: output -pretrain_weights: https://paddle-imagenet-models-name.bj.bcebos.com/MobileNetV3_large_x1_0_ssld_pretrained.tar -weights: output/cascade_rcnn_mobilenetv3_fpn_320/model_final -metric: COCO -num_classes: 81 - -CascadeRCNN: - backbone: MobileNetV3RCNN - fpn: FPN - rpn_head: FPNRPNHead - roi_extractor: FPNRoIAlign - bbox_head: CascadeBBoxHead - bbox_assigner: CascadeBBoxAssigner - -MobileNetV3RCNN: - norm_type: bn - freeze_norm: true - norm_decay: 0.0 - feature_maps: [2, 3, 4] - conv_decay: 0.00001 - lr_mult_list: [0.25, 0.25, 0.5, 0.5, 0.75] - scale: 1.0 - model_name: large - -FPN: - min_level: 2 - max_level: 6 - num_chan: 48 - has_extra_convs: true - spatial_scale: [0.0625, 0.125, 0.25] - -FPNRPNHead: - anchor_generator: - anchor_sizes: [32, 64, 128, 256, 512] - aspect_ratios: [0.5, 1.0, 2.0] - stride: [16.0, 16.0] - variance: [1.0, 1.0, 1.0, 1.0] - anchor_start_size: 16 - min_level: 2 - max_level: 6 - num_chan: 48 - rpn_target_assign: - rpn_batch_size_per_im: 256 - rpn_fg_fraction: 0.5 - rpn_positive_overlap: 0.7 - rpn_negative_overlap: 0.3 - rpn_straddle_thresh: 0.0 - train_proposal: - min_size: 0.0 - nms_thresh: 0.7 - pre_nms_top_n: 2000 - post_nms_top_n: 2000 - test_proposal: - min_size: 0.0 - nms_thresh: 0.7 - pre_nms_top_n: 300 - post_nms_top_n: 100 - -FPNRoIAlign: - canconical_level: 3 - canonical_size: 112 - min_level: 2 - max_level: 4 - box_resolution: 7 - sampling_ratio: 2 - -CascadeBBoxAssigner: - batch_size_per_im: 512 - bbox_reg_weights: [10, 20, 30] - bg_thresh_lo: [0.0, 0.0, 0.0] - bg_thresh_hi: [0.5, 0.6, 0.7] - 
fg_thresh: [0.5, 0.6, 0.7] - fg_fraction: 0.25 - -CascadeBBoxHead: - head: CascadeTwoFCHead - bbox_loss: BalancedL1Loss - nms: - keep_top_k: 100 - nms_threshold: 0.5 - score_threshold: 0.05 - -BalancedL1Loss: - alpha: 0.5 - gamma: 1.5 - beta: 1.0 - loss_weight: 1.0 - -CascadeTwoFCHead: - mlp_dim: 128 - -LearningRate: - base_lr: 0.02 - schedulers: - - !CosineDecay - max_iters: 500000 - - !LinearWarmup - start_factor: 0.1 - steps: 500 - -OptimizerBuilder: - optimizer: - momentum: 0.9 - type: Momentum - regularizer: - factor: 0.00004 - type: L2 - -TrainReader: - inputs_def: - fields: ['image', 'im_info', 'im_id', 'gt_bbox', 'gt_class', 'is_crowd'] - dataset: - !COCODataSet - image_dir: train2017 - anno_path: annotations/instances_train2017.json - dataset_dir: dataset/coco - sample_transforms: - - !DecodeImage - to_rgb: true - - !RandomFlipImage - prob: 0.5 - - !AutoAugmentImage - autoaug_type: v1 - - !NormalizeImage - is_channel_first: false - is_scale: true - mean: [0.485,0.456,0.406] - std: [0.229, 0.224,0.225] - - !ResizeImage - target_size: [224, 256, 288, 320, 352, 384] - max_size: 512 - interp: 1 - use_cv2: true - - !Permute - to_bgr: false - channel_first: true - batch_transforms: - - !PadBatch - pad_to_stride: 32 - use_padded_im_info: false - batch_size: 2 - shuffle: true - worker_num: 2 - use_process: false - - -TestReader: - inputs_def: - # set image_shape if needed - fields: ['image', 'im_info', 'im_id', 'im_shape'] - dataset: - !ImageFolder - anno_path: annotations/instances_val2017.json - sample_transforms: - - !DecodeImage - to_rgb: true - with_mixup: false - - !NormalizeImage - is_channel_first: false - is_scale: true - mean: [0.485,0.456,0.406] - std: [0.229, 0.224,0.225] - - !ResizeImage - interp: 1 - max_size: 320 - target_size: 320 - use_cv2: true - - !Permute - channel_first: true - to_bgr: false - batch_transforms: - - !PadBatch - pad_to_stride: 32 - use_padded_im_info: true - batch_size: 1 - shuffle: false - - - -EvalReader: - inputs_def: - fields: ['image', 'im_info', 'im_id', 'im_shape'] - # for voc - #fields: ['image', 'im_info', 'im_id', 'gt_bbox', 'gt_class', 'is_difficult'] - dataset: - !COCODataSet - image_dir: val2017 - anno_path: annotations/instances_val2017.json - dataset_dir: dataset/coco - sample_transforms: - - !DecodeImage - to_rgb: true - with_mixup: false - - !NormalizeImage - is_channel_first: false - is_scale: true - mean: [0.485,0.456,0.406] - std: [0.229, 0.224,0.225] - - !ResizeImage - interp: 1 - max_size: 320 - target_size: 320 - use_cv2: true - - !Permute - channel_first: true - to_bgr: false - batch_transforms: - - !PadBatch - pad_to_stride: 32 - use_padded_im_info: true - batch_size: 1 - shuffle: false - drop_empty: false - worker_num: 2 diff --git a/static/configs/mobile/cascade_rcnn_mobilenetv3_fpn_640.yml b/static/configs/mobile/cascade_rcnn_mobilenetv3_fpn_640.yml deleted file mode 100644 index 5e2a486c0b693e80d955c718dd4e195405781d1b..0000000000000000000000000000000000000000 --- a/static/configs/mobile/cascade_rcnn_mobilenetv3_fpn_640.yml +++ /dev/null @@ -1,219 +0,0 @@ -architecture: CascadeRCNN -max_iters: 500000 -snapshot_iter: 50000 -use_gpu: true -log_iter: 20 -save_dir: output -pretrain_weights: https://paddle-imagenet-models-name.bj.bcebos.com/MobileNetV3_large_x1_0_ssld_pretrained.tar -weights: output/cascade_rcnn_mobilenetv3_fpn_640/model_final -metric: COCO -num_classes: 81 - -CascadeRCNN: - backbone: MobileNetV3RCNN - fpn: FPN - rpn_head: FPNRPNHead - roi_extractor: FPNRoIAlign - bbox_head: CascadeBBoxHead - bbox_assigner: 
CascadeBBoxAssigner - -MobileNetV3RCNN: - norm_type: bn - freeze_norm: true - norm_decay: 0.0 - feature_maps: [2, 3, 4] - conv_decay: 0.00001 - lr_mult_list: [1.0, 1.0, 1.0, 1.0, 1.0] - scale: 1.0 - model_name: large - -FPN: - min_level: 2 - max_level: 6 - num_chan: 48 - has_extra_convs: true - spatial_scale: [0.0625, 0.125, 0.25] - -FPNRPNHead: - anchor_generator: - anchor_sizes: [32, 64, 128, 256, 512] - aspect_ratios: [0.5, 1.0, 2.0] - stride: [16.0, 16.0] - variance: [1.0, 1.0, 1.0, 1.0] - anchor_start_size: 24 - min_level: 2 - max_level: 6 - num_chan: 48 - rpn_target_assign: - rpn_batch_size_per_im: 256 - rpn_fg_fraction: 0.5 - rpn_positive_overlap: 0.7 - rpn_negative_overlap: 0.3 - rpn_straddle_thresh: 0.0 - train_proposal: - min_size: 0.0 - nms_thresh: 0.7 - pre_nms_top_n: 2000 - post_nms_top_n: 2000 - test_proposal: - min_size: 0.0 - nms_thresh: 0.7 - pre_nms_top_n: 300 - post_nms_top_n: 100 - -FPNRoIAlign: - canconical_level: 4 - canonical_size: 224 - min_level: 2 - max_level: 5 - box_resolution: 7 - sampling_ratio: 2 - -CascadeBBoxAssigner: - batch_size_per_im: 512 - bbox_reg_weights: [10, 20, 30] - bg_thresh_lo: [0.0, 0.0, 0.0] - bg_thresh_hi: [0.5, 0.6, 0.7] - fg_thresh: [0.5, 0.6, 0.7] - fg_fraction: 0.25 - -CascadeBBoxHead: - head: CascadeTwoFCHead - bbox_loss: BalancedL1Loss - nms: - keep_top_k: 100 - nms_threshold: 0.5 - score_threshold: 0.05 - -BalancedL1Loss: - alpha: 0.5 - gamma: 1.5 - beta: 1.0 - loss_weight: 1.0 - -CascadeTwoFCHead: - mlp_dim: 128 - -LearningRate: - base_lr: 0.02 - schedulers: - - !CosineDecay - max_iters: 500000 - - !LinearWarmup - start_factor: 0.1 - steps: 500 - -OptimizerBuilder: - optimizer: - momentum: 0.9 - type: Momentum - regularizer: - factor: 0.00004 - type: L2 - -TrainReader: - inputs_def: - fields: ['image', 'im_info', 'im_id', 'gt_bbox', 'gt_class', 'is_crowd'] - dataset: - !COCODataSet - image_dir: train2017 - anno_path: annotations/instances_train2017.json - dataset_dir: dataset/coco - sample_transforms: - - !DecodeImage - to_rgb: true - - !RandomFlipImage - prob: 0.5 - - !AutoAugmentImage - autoaug_type: v1 - - !NormalizeImage - is_channel_first: false - is_scale: true - mean: [0.485,0.456,0.406] - std: [0.229, 0.224,0.225] - - !ResizeImage - target_size: [416, 448, 480, 512, 544, 576, 608, 640, 672] - max_size: 1000 - interp: 1 - use_cv2: true - - !Permute - to_bgr: false - channel_first: true - batch_transforms: - - !PadBatch - pad_to_stride: 32 - use_padded_im_info: false - batch_size: 2 - shuffle: true - worker_num: 2 - use_process: false - - -TestReader: - inputs_def: - # set image_shape if needed - fields: ['image', 'im_info', 'im_id', 'im_shape'] - dataset: - !ImageFolder - anno_path: annotations/instances_val2017.json - sample_transforms: - - !DecodeImage - to_rgb: true - with_mixup: false - - !NormalizeImage - is_channel_first: false - is_scale: true - mean: [0.485,0.456,0.406] - std: [0.229, 0.224,0.225] - - !ResizeImage - interp: 1 - max_size: 640 - target_size: 640 - use_cv2: true - - !Permute - channel_first: true - to_bgr: false - batch_transforms: - - !PadBatch - pad_to_stride: 32 - use_padded_im_info: true - batch_size: 1 - shuffle: false - - - -EvalReader: - inputs_def: - fields: ['image', 'im_info', 'im_id', 'im_shape'] - # for voc - #fields: ['image', 'im_info', 'im_id', 'gt_bbox', 'gt_class', 'is_difficult'] - dataset: - !COCODataSet - image_dir: val2017 - anno_path: annotations/instances_val2017.json - dataset_dir: dataset/coco - sample_transforms: - - !DecodeImage - to_rgb: true - with_mixup: false - - 
!NormalizeImage - is_channel_first: false - is_scale: true - mean: [0.485,0.456,0.406] - std: [0.229, 0.224,0.225] - - !ResizeImage - interp: 1 - max_size: 640 - target_size: 640 - use_cv2: true - - !Permute - channel_first: true - to_bgr: false - batch_transforms: - - !PadBatch - pad_to_stride: 32 - use_padded_im_info: true - batch_size: 1 - shuffle: false - drop_empty: false - worker_num: 2 diff --git a/static/configs/mobile/ssdlite_mobilenet_v3_large.yml b/static/configs/mobile/ssdlite_mobilenet_v3_large.yml deleted file mode 120000 index 345d8f12b405262ca0d6bfb6d4110632b0335d5a..0000000000000000000000000000000000000000 --- a/static/configs/mobile/ssdlite_mobilenet_v3_large.yml +++ /dev/null @@ -1 +0,0 @@ -../ssd/ssdlite_mobilenet_v3_large.yml \ No newline at end of file diff --git a/static/configs/mobile/ssdlite_mobilenet_v3_small.yml b/static/configs/mobile/ssdlite_mobilenet_v3_small.yml deleted file mode 120000 index 63fb2a9f353e6791b8007346cfd06541711d0541..0000000000000000000000000000000000000000 --- a/static/configs/mobile/ssdlite_mobilenet_v3_small.yml +++ /dev/null @@ -1 +0,0 @@ -../ssd/ssdlite_mobilenet_v3_small.yml \ No newline at end of file diff --git a/static/configs/mobile/yolov3_mobilenet_v3.yml b/static/configs/mobile/yolov3_mobilenet_v3.yml deleted file mode 120000 index ea0525a3eca88cd99d3e09df1f665a7271957e1d..0000000000000000000000000000000000000000 --- a/static/configs/mobile/yolov3_mobilenet_v3.yml +++ /dev/null @@ -1 +0,0 @@ -../yolov3_mobilenet_v3.yml \ No newline at end of file diff --git a/static/configs/mobile/yolov3_reader.yml b/static/configs/mobile/yolov3_reader.yml deleted file mode 120000 index 0539e0d461ce322f4686fb0542b1f9a3227f8013..0000000000000000000000000000000000000000 --- a/static/configs/mobile/yolov3_reader.yml +++ /dev/null @@ -1 +0,0 @@ -../yolov3_reader.yml \ No newline at end of file diff --git a/static/configs/obj365/cascade_rcnn_cls_aware_r200_vd_fpn_dcnv2_nonlocal_softnms.yml b/static/configs/obj365/cascade_rcnn_cls_aware_r200_vd_fpn_dcnv2_nonlocal_softnms.yml deleted file mode 100644 index 48ab8d5ae95a15eb69e8be12f2f4a00e601376b5..0000000000000000000000000000000000000000 --- a/static/configs/obj365/cascade_rcnn_cls_aware_r200_vd_fpn_dcnv2_nonlocal_softnms.yml +++ /dev/null @@ -1,215 +0,0 @@ -architecture: CascadeRCNNClsAware -max_iters: 800000 -snapshot_iter: 10000 -use_gpu: true -log_iter: 20 -save_dir: output -pretrain_weights: https://paddle-imagenet-models-name.bj.bcebos.com/ResNet200_vd_pretrained.tar -weights: output/cascade_rcnn_cls_aware_r200_vd_fpn_dcnv2_nonlocal_softnms/model_final -# obj365 dataset format and its eval method are same as those for coco -metric: COCO -num_classes: 366 - -CascadeRCNNClsAware: - backbone: ResNet - fpn: FPN - rpn_head: FPNRPNHead - roi_extractor: FPNRoIAlign - bbox_head: CascadeBBoxHead - bbox_assigner: CascadeBBoxAssigner - -ResNet: - norm_type: bn - depth: 200 - feature_maps: [2, 3, 4, 5] - freeze_at: 2 - variant: d - dcn_v2_stages: [3, 4, 5] - nonlocal_stages: [4] - -FPN: - min_level: 2 - max_level: 6 - num_chan: 256 - spatial_scale: [0.03125, 0.0625, 0.125, 0.25] - -FPNRPNHead: - anchor_generator: - anchor_sizes: [32, 64, 128, 256, 512] - aspect_ratios: [0.5, 1.0, 2.0] - stride: [16.0, 16.0] - variance: [1.0, 1.0, 1.0, 1.0] - anchor_start_size: 32 - min_level: 2 - max_level: 6 - num_chan: 256 - rpn_target_assign: - rpn_batch_size_per_im: 256 - rpn_fg_fraction: 0.5 - rpn_positive_overlap: 0.7 - rpn_negative_overlap: 0.3 - rpn_straddle_thresh: 0.0 - train_proposal: - min_size: 0.0 - nms_thresh: 
0.7 - pre_nms_top_n: 2000 - post_nms_top_n: 2000 - test_proposal: - min_size: 0.0 - nms_thresh: 0.7 - pre_nms_top_n: 1000 - post_nms_top_n: 1000 - -FPNRoIAlign: - canconical_level: 4 - canonical_size: 224 - min_level: 2 - max_level: 5 - box_resolution: 14 - sampling_ratio: 2 - -CascadeBBoxAssigner: - batch_size_per_im: 512 - bbox_reg_weights: [10, 20, 30] - bg_thresh_lo: [0.0, 0.0, 0.0] - bg_thresh_hi: [0.5, 0.6, 0.7] - fg_thresh: [0.5, 0.6, 0.7] - fg_fraction: 0.25 - class_aware: True - -CascadeBBoxHead: - head: CascadeTwoFCHead - nms: MultiClassSoftNMS - -CascadeTwoFCHead: - mlp_dim: 1024 - -MultiClassSoftNMS: - score_threshold: 0.001 - keep_top_k: 300 - softnms_sigma: 0.15 - -LearningRate: - base_lr: 0.01 - schedulers: - - !PiecewiseDecay - gamma: 0.1 - milestones: [520000, 740000] - - !LinearWarmup - start_factor: 0.1 - steps: 1000 - -OptimizerBuilder: - optimizer: - momentum: 0.9 - type: Momentum - regularizer: - factor: 0.0001 - type: L2 - -TrainReader: - inputs_def: - fields: ['image', 'im_info', 'im_id', 'gt_bbox', 'gt_class', 'is_crowd'] - dataset: - !COCODataSet - dataset_dir: dataset/obj365 - anno_path: train.json - image_dir: train - sample_transforms: - - !DecodeImage - to_rgb: True - - !RandomFlipImage - prob: 0.5 - - !NormalizeImage - is_channel_first: false - is_scale: True - mean: - - 0.485 - - 0.456 - - 0.406 - std: - - 0.229 - - 0.224 - - 0.225 - - !ResizeImage - interp: 1 - target_size: [416, 448, 480, 512, 544, 576, 608, 640, 672, 704, 736, 768, 800, 832, 864, 896, 928, 960, 992, 1024, 1056, 1088, 1120, 1152, 1184, 1216, 1248, 1280, 1312, 1344, 1376, 1408] - max_size: 1800 - use_cv2: true - - !Permute - to_bgr: false - batch_transforms: - - !PadBatch - pad_to_stride: 32 - batch_size: 1 - shuffle: true - drop_last: false - worker_num: 2 - -EvalReader: - inputs_def: - fields: ['image', 'im_info', 'im_id', 'im_shape'] - dataset: - !COCODataSet - dataset_dir: dataset/obj365 - anno_path: val.json - image_dir: val - sample_transforms: - - !DecodeImage - to_rgb: True - with_mixup: False - - !NormalizeImage - is_channel_first: false - is_scale: True - mean: - - 0.485 - - 0.456 - - 0.406 - std: - - 0.229 - - 0.224 - - 0.225 - - !ResizeImage - interp: 1 - target_size: - - 1200 - max_size: 2000 - use_cv2: true - - !Permute - to_bgr: false - batch_transforms: - - !PadBatch - pad_to_stride: 32 - batch_size: 1 - worker_num: 2 - drop_empty: false - -TestReader: - inputs_def: - fields: ['image', 'im_info', 'im_id', 'im_shape'] - dataset: - !ImageFolder - anno_path: dataset/coco/objects365_label.txt - sample_transforms: - - !DecodeImage - to_rgb: true - with_mixup: false - - !NormalizeImage - is_channel_first: false - is_scale: true - mean: [0.485,0.456,0.406] - std: [0.229, 0.224,0.225] - - !ResizeImage - interp: 1 - max_size: 1333 - target_size: 800 - use_cv2: true - - !Permute - channel_first: true - to_bgr: false - batch_transforms: - - !PadBatch - pad_to_stride: 32 - use_padded_im_info: true - batch_size: 1 - worker_num: 2 diff --git a/static/configs/obj365/cascade_rcnn_dcnv2_se154_vd_fpn_gn_cas.yml b/static/configs/obj365/cascade_rcnn_dcnv2_se154_vd_fpn_gn_cas.yml deleted file mode 100644 index aef0fe9664382bfb452ed9bd55f21530b33e0d3a..0000000000000000000000000000000000000000 --- a/static/configs/obj365/cascade_rcnn_dcnv2_se154_vd_fpn_gn_cas.yml +++ /dev/null @@ -1,250 +0,0 @@ -architecture: CascadeRCNN -max_iters: 500000 -snapshot_iter: 10000 -use_gpu: true -log_iter: 20 -save_dir: output -pretrain_weights: 
https://paddlemodels.bj.bcebos.com/object_detection/cascade_mask_rcnn_dcnv2_se154_vd_fpn_gn_coco_pretrained.tar -weights: output/cascade_rcnn_dcnv2_se154_vd_fpn_gn_cas/model_final -metric: COCO -num_classes: 366 - -CascadeRCNN: - backbone: SENet - fpn: FPN - rpn_head: FPNRPNHead - roi_extractor: FPNRoIAlign - bbox_head: CascadeBBoxHead - bbox_assigner: CascadeBBoxAssigner - -SENet: - depth: 152 - feature_maps: [2, 3, 4, 5] - freeze_at: 2 - group_width: 4 - groups: 64 - norm_type: bn - freeze_norm: True - variant: d - dcn_v2_stages: [3, 4, 5] - std_senet: True - -FPN: - min_level: 2 - max_level: 6 - num_chan: 256 - spatial_scale: [0.03125, 0.0625, 0.125, 0.25] - freeze_norm: False - norm_type: gn - -FPNRPNHead: - anchor_generator: - anchor_sizes: [32, 64, 128, 256, 512] - aspect_ratios: [0.5, 1.0, 2.0] - stride: [16.0, 16.0] - variance: [1.0, 1.0, 1.0, 1.0] - anchor_start_size: 32 - min_level: 2 - max_level: 6 - num_chan: 256 - rpn_target_assign: - rpn_batch_size_per_im: 256 - rpn_fg_fraction: 0.5 - rpn_positive_overlap: 0.7 - rpn_negative_overlap: 0.3 - rpn_straddle_thresh: 0.0 - train_proposal: - min_size: 0.0 - nms_thresh: 0.7 - pre_nms_top_n: 2000 - post_nms_top_n: 2000 - test_proposal: - min_size: 0.0 - nms_thresh: 0.7 - pre_nms_top_n: 1000 - post_nms_top_n: 1000 - -FPNRoIAlign: - canconical_level: 4 - canonical_size: 224 - min_level: 2 - max_level: 5 - box_resolution: 7 - sampling_ratio: 2 - -CascadeBBoxAssigner: - batch_size_per_im: 1024 - bbox_reg_weights: [10, 20, 30] - bg_thresh_lo: [0.0, 0.0, 0.0] - bg_thresh_hi: [0.5, 0.6, 0.7] - fg_thresh: [0.5, 0.6, 0.7] - fg_fraction: 0.25 - -CascadeBBoxHead: - head: CascadeXConvNormHead - nms: - keep_top_k: 100 - nms_threshold: 0.5 - score_threshold: 0.05 - -CascadeXConvNormHead: - norm_type: gn - -CascadeTwoFCHead: - mlp_dim: 1024 - -LearningRate: - base_lr: 0.01 - schedulers: - - !PiecewiseDecay - gamma: 0.1 - milestones: [400000, 460000] - - !LinearWarmup - start_factor: 0.01 - steps: 2000 - -OptimizerBuilder: - optimizer: - momentum: 0.9 - type: Momentum - regularizer: - factor: 0.0001 - type: L2 - -TrainReader: - inputs_def: - fields: ['image', 'im_info', 'im_id', 'gt_bbox', 'gt_class', 'is_crowd'] - dataset: - !COCODataSet - dataset_dir: dataset/objects365 - anno_path: annotations/train.json - image_dir: train - sample_transforms: - - !DecodeImage - to_rgb: True - - !RandomFlipImage - prob: 0.5 - - !NormalizeImage - is_channel_first: false - is_scale: False - mean: - - 102.9801 - - 115.9465 - - 122.7717 - std: - - 1.0 - - 1.0 - - 1.0 - - !ResizeImage - interp: 1 - target_size: - - 416 - - 448 - - 480 - - 512 - - 544 - - 576 - - 608 - - 640 - - 672 - - 704 - - 736 - - 768 - - 800 - - 832 - - 864 - - 896 - - 928 - - 960 - - 992 - - 1024 - - 1056 - - 1088 - - 1120 - - 1152 - - 1184 - - 1216 - - 1248 - - 1280 - - 1312 - - 1344 - - 1376 - - 1408 - max_size: 1600 - use_cv2: true - - !Permute - channel_first: true - to_bgr: false - batch_transforms: - - !PadBatch - pad_to_stride: 32 - batch_size: 1 - worker_num: 4 - shuffle: true - class_aware_sampling: true - use_process: false - -EvalReader: - inputs_def: - fields: ['image', 'im_info', 'im_id', 'im_shape'] - dataset: - !COCODataSet - dataset_dir: dataset/objects365 - anno_path: annotations/val.json - image_dir: val - sample_transforms: - - !DecodeImage - to_rgb: True - - !NormalizeImage - is_channel_first: false - is_scale: False - mean: - - 102.9801 - - 115.9465 - - 122.7717 - std: - - 1.0 - - 1.0 - - 1.0 - - !ResizeImage - target_size: 800 - max_size: 1333 - interp: 1 - - !Permute - 
channel_first: true - to_bgr: false - batch_transforms: - - !PadBatch - pad_to_stride: 32 - batch_size: 1 - drop_empty: false - worker_num: 2 - -TestReader: - inputs_def: - fields: ['image', 'im_info', 'im_id', 'im_shape'] - batch_size: 1 - dataset: - !ImageFolder - anno_path: dataset/coco/objects365_label.txt - sample_transforms: - - !DecodeImage - to_rgb: True - - !NormalizeImage - is_channel_first: false - is_scale: False - mean: - - 102.9801 - - 115.9465 - - 122.7717 - std: - - 1.0 - - 1.0 - - 1.0 - - !Permute - channel_first: true - to_bgr: false - batch_transforms: - - !PadBatch - pad_to_stride: 32 - worker_num: 2 diff --git a/static/configs/oidv5/cascade_rcnn_cls_aware_r200_vd_fpn_dcnv2_nonlocal_softnms.yml b/static/configs/oidv5/cascade_rcnn_cls_aware_r200_vd_fpn_dcnv2_nonlocal_softnms.yml deleted file mode 100644 index cfc99c67c979dc4b70997c2b8068c68b7aee57e2..0000000000000000000000000000000000000000 --- a/static/configs/oidv5/cascade_rcnn_cls_aware_r200_vd_fpn_dcnv2_nonlocal_softnms.yml +++ /dev/null @@ -1,212 +0,0 @@ -architecture: CascadeRCNNClsAware -max_iters: 1500000 -snapshot_iter: 10000 -use_gpu: true -log_iter: 20 -save_dir: output -pretrain_weights: https://paddle-imagenet-models-name.bj.bcebos.com/ResNet200_vd_pretrained.tar -weights: output/cascade_rcnn_cls_aware_r200_vd_fpn_dcnv2_nonlocal_softnms/model_final -metric: OID -num_classes: 501 - -CascadeRCNNClsAware: - backbone: ResNet - fpn: FPN - rpn_head: FPNRPNHead - roi_extractor: FPNRoIAlign - bbox_head: CascadeBBoxHead - bbox_assigner: CascadeBBoxAssigner - -ResNet: - norm_type: bn - depth: 200 - feature_maps: [2, 3, 4, 5] - freeze_at: 2 - variant: d - dcn_v2_stages: [3, 4, 5] - nonlocal_stages: [4] - -FPN: - min_level: 2 - max_level: 6 - num_chan: 256 - spatial_scale: [0.03125, 0.0625, 0.125, 0.25] - -FPNRPNHead: - anchor_generator: - anchor_sizes: [32, 64, 128, 256, 512] - aspect_ratios: [0.5, 1.0, 2.0] - stride: [16.0, 16.0] - variance: [1.0, 1.0, 1.0, 1.0] - anchor_start_size: 32 - min_level: 2 - max_level: 6 - num_chan: 256 - rpn_target_assign: - rpn_batch_size_per_im: 256 - rpn_fg_fraction: 0.5 - rpn_positive_overlap: 0.7 - rpn_negative_overlap: 0.3 - rpn_straddle_thresh: 0.0 - train_proposal: - min_size: 0.0 - nms_thresh: 0.7 - pre_nms_top_n: 2000 - post_nms_top_n: 2000 - test_proposal: - min_size: 0.0 - nms_thresh: 0.7 - pre_nms_top_n: 1000 - post_nms_top_n: 1000 - -FPNRoIAlign: - canconical_level: 4 - canonical_size: 224 - min_level: 2 - max_level: 5 - box_resolution: 14 - sampling_ratio: 2 - -CascadeBBoxAssigner: - batch_size_per_im: 512 - bbox_reg_weights: [10, 20, 30] - bg_thresh_lo: [0.0, 0.0, 0.0] - bg_thresh_hi: [0.5, 0.6, 0.7] - fg_thresh: [0.5, 0.6, 0.7] - fg_fraction: 0.25 - class_aware: True - -CascadeBBoxHead: - head: CascadeTwoFCHead - nms: MultiClassSoftNMS - -CascadeTwoFCHead: - mlp_dim: 1024 - -MultiClassSoftNMS: - score_threshold: 0.001 - keep_top_k: 300 - softnms_sigma: 0.15 - -LearningRate: - base_lr: 0.01 - schedulers: - - !PiecewiseDecay - gamma: 0.1 - milestones: [1000000, 1400000] - - !LinearWarmup - start_factor: 0.1 - steps: 1000 - -OptimizerBuilder: - optimizer: - momentum: 0.9 - type: Momentum - regularizer: - factor: 0.0001 - type: L2 - -TrainReader: - inputs_def: - fields: ['image', 'im_info', 'im_id', 'gt_bbox', 'gt_class', 'is_crowd'] - dataset: - !COCODataSet - dataset_dir: dataset/oid - anno_path: train.json - image_dir: train - sample_transforms: - - !DecodeImage - to_rgb: True - - !RandomFlipImage - prob: 0.5 - - !NormalizeImage - is_channel_first: false - is_scale: True - 
-    mean:
-    - 0.485
-    - 0.456
-    - 0.406
-    std:
-    - 0.229
-    - 0.224
-    - 0.225
-  - !ResizeImage
-    interp: 1
-    target_size: [416, 448, 480, 512, 544, 576, 608, 640, 672, 704, 736, 768, 800, 832, 864, 896, 928, 960, 992, 1024, 1056, 1088, 1120, 1152, 1184, 1216, 1248, 1280, 1312, 1344, 1376, 1408]
-    max_size: 1800
-    use_cv2: true
-  - !Permute
-    to_bgr: false
-  batch_transforms:
-  - !PadBatch
-    pad_to_stride: 32
-  batch_size: 1
-  drop_last: false
-  shuffle: true
-  worker_num: 2
-
-EvalReader:
-  inputs_def:
-    fields: ['image', 'im_info', 'im_id', 'im_shape']
-  dataset:
-    !COCODataSet
-    dataset_dir: dataset/oidv5
-    anno_path: val.json
-    image_dir: val
-  sample_transforms:
-  - !DecodeImage
-    to_rgb: True
-    with_mixup: False
-  - !NormalizeImage
-    is_channel_first: false
-    is_scale: True
-    mean:
-    - 0.485
-    - 0.456
-    - 0.406
-    std:
-    - 0.229
-    - 0.224
-    - 0.225
-  - !ResizeImage
-    interp: 1
-    target_size:
-    - 1200
-    max_size: 2000
-    use_cv2: true
-  - !Permute
-    to_bgr: false
-  batch_transforms:
-  - !PadBatch
-    pad_to_stride: 32
-  batch_size: 1
-  worker_num: 2
-  drop_empty: false
-
-TestReader:
-  batch_size: 1
-  inputs_def:
-    fields: ['image', 'im_info', 'im_id', 'im_shape']
-  dataset:
-    !ImageFolder
-    anno_path: annotations/instances_val2017.json
-  sample_transforms:
-  - !DecodeImage
-    to_rgb: true
-  - !NormalizeImage
-    is_channel_first: false
-    is_scale: true
-    mean: [0.485,0.456,0.406]
-    std: [0.229, 0.224,0.225]
-  - !ResizeImage
-    interp: 1
-    max_size: 1333
-    target_size: 800
-    use_cv2: true
-  - !Permute
-    channel_first: true
-    to_bgr: false
-  batch_transforms:
-  - !PadBatch
-    pad_to_stride: 32
-  worker_num: 2
diff --git a/static/configs/ppyolo/README.md b/static/configs/ppyolo/README.md
deleted file mode 100644
index a993e119f025020e4414a8cf79895741fa1e6d96..0000000000000000000000000000000000000000
--- a/static/configs/ppyolo/README.md
+++ /dev/null
@@ -1,252 +0,0 @@
-English | [简体中文](README_cn.md)
-
-# PP-YOLO
-
-## Table of Contents
-- [Introduction](#introduction)
-- [Model Zoo](#model-zoo)
-- [Getting Started](#getting-started)
-- [Future Work](#future-work)
-- [Appendix](#appendix)
-
-## Introduction
-
-[PP-YOLO](https://arxiv.org/abs/2007.12099) is an optimized model based on YOLOv3 in PaddleDetection, whose performance (mAP on COCO) and inference speed are better than [YOLOv4](https://arxiv.org/abs/2004.10934). PaddlePaddle 1.8.4 (available on pip now) or the [Daily Version](https://www.paddlepaddle.org.cn/documentation/docs/zh/install/Tables.html#whl-dev) is required to run PP-YOLO.
-
-PP-YOLO reaches an mAP(IoU=0.5:0.95) of 45.9% on the COCO test-dev2017 dataset, with an FP32 inference speed of 72.9 FPS on a single V100 and an FP16 inference speed with TensorRT of 155.6 FPS on a single V100.
-
-PP-YOLO and PP-YOLOv2 improve the performance and speed of YOLOv3 with the following methods:
-
-- Better backbone: ResNet50vd-DCN
-- Larger training batch size: 8 GPUs with a mini-batch size of 24 on each GPU
-- [Drop Block](https://arxiv.org/abs/1810.12890)
-- [Exponential Moving Average](https://www.investopedia.com/terms/e/ema.asp)
-- [IoU Loss](https://arxiv.org/pdf/1902.09630.pdf)
-- [Grid Sensitive](https://arxiv.org/abs/2004.10934)
-- [Matrix NMS](https://arxiv.org/pdf/2003.10152.pdf)
-- [CoordConv](https://arxiv.org/abs/1807.03247)
-- [Spatial Pyramid Pooling](https://arxiv.org/abs/1406.4729)
-- Better ImageNet pretrain weights
-- [PAN](https://arxiv.org/abs/1803.01534)
-- IoU Aware Loss
-- Larger input size
-
-## Model Zoo
-
-### PP-YOLO
-
-| Model | GPU number | images/GPU | backbone | input shape | Box AP<sup>val</sup> | Box AP<sup>test</sup> | V100 FP32(FPS) | V100 TensorRT FP16(FPS) | download | config |
-|:------------------------:|:----------:|:----------:|:----------:| :----------:| :------------------: | :-------------------: | :------------: | :---------------------: | :------: | :-----: |
-| YOLOv4(AlexyAB) | - | - | CSPDarknet | 608 | - | 43.5 | 62 | 105.5 | [model](https://paddlemodels.bj.bcebos.com/object_detection/yolov4_cspdarknet.pdparams) | [config](https://github.com/PaddlePaddle/PaddleDetection/tree/develop/static/configs/yolov4/yolov4_csdarknet.yml) |
-| YOLOv4(AlexyAB) | - | - | CSPDarknet | 512 | - | 43.0 | 83 | 138.4 | [model](https://paddlemodels.bj.bcebos.com/object_detection/yolov4_cspdarknet.pdparams) | [config](https://github.com/PaddlePaddle/PaddleDetection/tree/develop/static/configs/yolov4/yolov4_csdarknet.yml) |
-| YOLOv4(AlexyAB) | - | - | CSPDarknet | 416 | - | 41.2 | 96 | 164.0 | [model](https://paddlemodels.bj.bcebos.com/object_detection/yolov4_cspdarknet.pdparams) | [config](https://github.com/PaddlePaddle/PaddleDetection/tree/develop/static/configs/yolov4/yolov4_csdarknet.yml) |
-| YOLOv4(AlexyAB) | - | - | CSPDarknet | 320 | - | 38.0 | 123 | 199.0 | [model](https://paddlemodels.bj.bcebos.com/object_detection/yolov4_cspdarknet.pdparams) | [config](https://github.com/PaddlePaddle/PaddleDetection/tree/develop/static/configs/yolov4/yolov4_csdarknet.yml) |
-| PP-YOLO | 8 | 24 | ResNet50vd | 608 | 44.8 | 45.2 | 72.9 | 155.6 | [model](https://paddlemodels.bj.bcebos.com/object_detection/ppyolo.pdparams) | [config](https://github.com/PaddlePaddle/PaddleDetection/tree/develop/static/configs/ppyolo/ppyolo.yml) |
-| PP-YOLO | 8 | 24 | ResNet50vd | 512 | 43.9 | 44.4 | 89.9 | 188.4 | [model](https://paddlemodels.bj.bcebos.com/object_detection/ppyolo.pdparams) | [config](https://github.com/PaddlePaddle/PaddleDetection/tree/develop/static/configs/ppyolo/ppyolo.yml) |
-| PP-YOLO | 8 | 24 | ResNet50vd | 416 | 42.1 | 42.5 | 109.1 | 215.4 | [model](https://paddlemodels.bj.bcebos.com/object_detection/ppyolo.pdparams) | [config](https://github.com/PaddlePaddle/PaddleDetection/tree/develop/static/configs/ppyolo/ppyolo.yml) |
-| PP-YOLO | 8 | 24 | ResNet50vd | 320 | 38.9 | 39.3 | 132.2 | 242.2 | [model](https://paddlemodels.bj.bcebos.com/object_detection/ppyolo.pdparams) | [config](https://github.com/PaddlePaddle/PaddleDetection/tree/develop/static/configs/ppyolo/ppyolo.yml) |
-| PP-YOLO_2x | 8 | 24 | ResNet50vd | 608 | 45.3 | 45.9 | 72.9 | 155.6 | [model](https://paddlemodels.bj.bcebos.com/object_detection/ppyolo_2x.pdparams) | [config](https://github.com/PaddlePaddle/PaddleDetection/tree/develop/static/configs/ppyolo/ppyolo_2x.yml) |
-| PP-YOLO_2x | 8 | 24 | ResNet50vd | 512 | 44.4 | 45.0 | 89.9 | 188.4 | [model](https://paddlemodels.bj.bcebos.com/object_detection/ppyolo_2x.pdparams) | [config](https://github.com/PaddlePaddle/PaddleDetection/tree/develop/static/configs/ppyolo/ppyolo_2x.yml) |
-| PP-YOLO_2x | 8 | 24 | ResNet50vd | 416 | 42.7 | 43.2 | 109.1 | 215.4 | [model](https://paddlemodels.bj.bcebos.com/object_detection/ppyolo_2x.pdparams) | [config](https://github.com/PaddlePaddle/PaddleDetection/tree/develop/static/configs/ppyolo/ppyolo_2x.yml) |
-| PP-YOLO_2x | 8 | 24 | ResNet50vd | 320 | 39.5 | 40.1 | 132.2 | 242.2 | [model](https://paddlemodels.bj.bcebos.com/object_detection/ppyolo_2x.pdparams) | [config](https://github.com/PaddlePaddle/PaddleDetection/tree/develop/static/configs/ppyolo/ppyolo_2x.yml) |
-| PP-YOLO | 4 | 32 | ResNet18vd | 512 | 29.3 | 29.5 | 357.1 | 657.9 | [model](https://paddlemodels.bj.bcebos.com/object_detection/ppyolo_r18vd.pdparams) | [config](https://github.com/PaddlePaddle/PaddleDetection/tree/develop/static/configs/ppyolo/ppyolo_r18vd.yml) |
-| PP-YOLO | 4 | 32 | ResNet18vd | 416 | 28.6 | 28.9 | 409.8 | 719.4 | [model](https://paddlemodels.bj.bcebos.com/object_detection/ppyolo_r18vd.pdparams) | [config](https://github.com/PaddlePaddle/PaddleDetection/tree/develop/static/configs/ppyolo/ppyolo_r18vd.yml) |
-| PP-YOLO | 4 | 32 | ResNet18vd | 320 | 26.2 | 26.4 | 480.7 | 763.4 | [model](https://paddlemodels.bj.bcebos.com/object_detection/ppyolo_r18vd.pdparams) | [config](https://github.com/PaddlePaddle/PaddleDetection/tree/develop/static/configs/ppyolo/ppyolo_r18vd.yml) |
-| PP-YOLOv2 | 8 | 12 | ResNet50vd | 640 | 49.1 | 49.5 | 68.9 | 106.5 | [model](https://paddlemodels.bj.bcebos.com/object_detection/ppyolov2_r50vd_dcn.pdparams) | [config](https://github.com/PaddlePaddle/PaddleDetection/tree/develop/static/configs/ppyolo/ppyolov2_r50vd_dcn.yml) |
-| PP-YOLOv2 | 8 | 12 | ResNet101vd | 640 | 49.7 | 50.3 | 49.5 | 87.0 | [model](https://paddlemodels.bj.bcebos.com/object_detection/ppyolov2_r101vd_dcn.pdparams) | [config](https://github.com/PaddlePaddle/PaddleDetection/tree/develop/static/configs/ppyolo/ppyolov2_r101vd_dcn.yml) |
-
-
-**Notes:**
-
-- PP-YOLO is trained on the COCO train2017 dataset and evaluated on the val2017 & test-dev2017 datasets; Box AP<sup>test</sup> is the `mAP(IoU=0.5:0.95)` evaluation result.
-- PP-YOLO used 8 GPUs for training with a mini-batch size of 24 on each GPU; if the GPU number or mini-batch size is changed, the learning rate and number of iterations should be adjusted according to the [FAQ](../../docs/FAQ.md).
-- PP-YOLO inference speed is tested on a single Tesla V100 with batch size 1, using CUDA 10.2, CUDNN 7.5.1, and TensorRT 5.1.2.2 in TensorRT mode.
-- PP-YOLO FP32 inference speed testing uses the inference model exported by `tools/export_model.py` and is benchmarked by running `deploy/python/infer.py` with `--run_benchmark`. All testing results do not contain the time cost of data reading and post-processing (NMS), which is the same testing method as [YOLOv4(AlexyAB)](https://github.com/AlexeyAB/darknet).
-- TensorRT FP16 inference speed testing excludes the time cost of the bounding-box decoding (`yolo_box`) part compared with the FP32 testing above, which means that data reading, bounding-box decoding and post-processing (NMS) are all excluded (the test method is also the same as [YOLOv4(AlexyAB)](https://github.com/AlexeyAB/darknet)).
-- YOLOv4(AlexyAB) performance and inference speed are copied from the single Tesla V100 testing results in the [YOLOv4 github repo](https://github.com/AlexeyAB/darknet); the Tesla V100 TensorRT FP16 inference speed is tested with the tkDNN configuration and TensorRT 5.1.2.2 on a single Tesla V100, based on the [AlexyAB/darknet repo](https://github.com/AlexeyAB/darknet).
-- The download and configuration in the YOLOv4(AlexyAB) rows are for the YOLOv4 model reproduced in PaddleDetection, whose evaluation performance is the same as YOLOv4(AlexyAB). Finetune training is currently supported, and reproducing training from backbone pretrain weights is in progress; see [PaddleDetection YOLOv4](../yolov4/README.md) for details.
-- PP-YOLO is trained with `batch_size=24` on each GPU with 32 GB of memory. A configuration yaml with `batch_size=12`, which can be trained on GPUs with 16 GB of memory, is provided as `ppyolo_2x_bs12.yml`; training with `batch_size=12` reaches `mAP(IoU=0.5:0.95) = 45.1%` on the COCO val2017 dataset. Download the weights from the [ppyolo_2x_bs12 model](https://paddlemodels.bj.bcebos.com/object_detection/ppyolo_2x_bs12.pdparams).
-
-### PP-YOLO for mobile
-
-| Model | GPU number | images/GPU | Model Size | input shape | Box AP<sup>val</sup> | Box AP50<sup>val</sup> | Kirin 990 1xCore(FPS) | download | inference model download | config |
-|:----------------------------:|:----------:|:----------:| :--------: | :----------:| :------------------: | :--------------------: | :-------------------: | :------: | :----------------------: | :-----: |
-| PP-YOLO_MobileNetV3_large | 4 | 32 | 18MB | 320 | 23.2 | 42.6 | 15.6 | [model](https://paddlemodels.bj.bcebos.com/object_detection/ppyolo_mobilenet_v3_large.pdparams) | [model](https://paddlemodels.bj.bcebos.com/object_detection/ppyolo_mobilenet_v3_large.tar) | [config](https://github.com/PaddlePaddle/PaddleDetection/tree/develop/static/configs/ppyolo/ppyolo_mobilenet_v3_large.yml) |
-| PP-YOLO_MobileNetV3_small | 4 | 32 | 11MB | 320 | 17.2 | 33.8 | 28.6 | [model](https://paddlemodels.bj.bcebos.com/object_detection/ppyolo_mobilenet_v3_small.pdparams) | [model](https://paddlemodels.bj.bcebos.com/object_detection/ppyolo_mobilenet_v3_small.tar) | [config](https://github.com/PaddlePaddle/PaddleDetection/tree/develop/static/configs/ppyolo/ppyolo_mobilenet_v3_small.yml) |
-
-**Notes:**
-
-- PP-YOLO_MobileNetV3 is trained on the COCO train2017 dataset and evaluated on the val2017 dataset; Box AP<sup>val</sup> is the `mAP(IoU=0.5:0.95)` evaluation result, and Box AP50<sup>val</sup> is the `mAP(IoU=0.5)` evaluation result.
-- PP-YOLO_MobileNetV3 used 4 GPUs for training with a mini-batch size of 32 on each GPU; if the GPU number or mini-batch size is changed, the learning rate and number of iterations should be adjusted according to the [FAQ](../../docs/FAQ.md).
-- PP-YOLO_MobileNetV3 inference speed is tested on Kirin 990 with 1 thread; a sketch of the Paddle-Lite conversion step follows below.
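-
-As a hedged sketch of how the mobile inference model can be converted for such deployment (this is not from the original docs: `paddle_lite_opt` comes from `pip install paddlelite`, and the `__model__`/`__params__` file names inside the tar are assumptions based on the static-graph export format):
-
-```bash
-# Sketch only: convert the downloaded inference model to a Paddle-Lite
-# naive-buffer model for ARM deployment.
-tar xf ppyolo_mobilenet_v3_large.tar
-paddle_lite_opt --model_file=ppyolo_mobilenet_v3_large/__model__ \
-                --param_file=ppyolo_mobilenet_v3_large/__params__ \
-                --optimize_out=ppyolo_mobilenet_v3_large_lite \
-                --valid_targets=arm
-```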
-
-### Slim PP-YOLO
-
-| Model | GPU number | images/GPU | Prune Ratio | Teacher Model | Model Size | input shape | Box AP<sup>val</sup> | Kirin 990 1xCore(FPS) | download | inference model download | config |
-|:----------------------------:|:----------:|:----------:| :---------: | :-----------------------: | :--------: | :----------:| :------------------: | :-------------------: | :------: | :----------------------: | :-----: |
-| PP-YOLO_MobileNetV3_small | 4 | 32 | 75% | PP-YOLO_MobileNetV3_large | 4.2MB | 320 | 16.2 | 39.8 | [model](https://paddlemodels.bj.bcebos.com/object_detection/ppyolo_mobilenet_v3_small_prune75_distillby_mobilenet_v3_large.pdparams) | [model](https://paddlemodels.bj.bcebos.com/object_detection/ppyolo_mobilenet_v3_small_prune75_distillby_mobilenet_v3_large.tar) | [config](https://github.com/PaddlePaddle/PaddleDetection/tree/develop/static/configs/ppyolo/ppyolo_mobilenet_v3_small.yml) |
-
-- Slim PP-YOLO is trained with the slim training method from [Distill pruned model](../../slim/extensions/distill_pruned_model/README.md): the pruned PP-YOLO_MobileNetV3_small model is distill-trained with the PP-YOLO_MobileNetV3_large model as the teacher model.
-- The detection head of the PP-YOLO model is pruned with a ratio of 75%, using the arguments `--pruned_params="yolo_block.0.2.conv.weights,yolo_block.0.tip.conv.weights,yolo_block.1.2.conv.weights,yolo_block.1.tip.conv.weights" --pruned_ratios="0.75,0.75,0.75,0.75"`.
-- For Slim PP-YOLO training, evaluation, inference and model exporting, please see [Distill pruned model](../../slim/extensions/distill_pruned_model/README.md).
-
-### PP-YOLO tiny
-
-| Model | GPU number | images/GPU | Model Size | Post Quant Model Size | input shape | Box AP<sup>val</sup> | Kirin 990 4xCore(FPS) | download | config | post quant model |
-|:----------------------------:|:-------:|:-------------:|:----------:| :-------------------: | :----------:| :------------------: | :-------------------: | :------: | :----: | :--------------: |
-| PP-YOLO tiny | 8 | 32 | 4.2MB | **1.3M** | 320 | 20.6 | 92.3 | [model](https://paddlemodels.bj.bcebos.com/object_detection/ppyolo_tiny.pdparams) | [config](https://github.com/PaddlePaddle/PaddleDetection/tree/develop/static/configs/ppyolo/ppyolo_tiny.yml) | [inference model](https://paddledet.bj.bcebos.com/models/ppyolo_tiny_quant.tar) |
-| PP-YOLO tiny | 8 | 32 | 4.2MB | **1.3M** | 416 | 22.7 | 65.4 | [model](https://paddlemodels.bj.bcebos.com/object_detection/ppyolo_tiny.pdparams) | [config](https://github.com/PaddlePaddle/PaddleDetection/tree/develop/static/configs/ppyolo/ppyolo_tiny.yml) | [inference model](https://paddledet.bj.bcebos.com/models/ppyolo_tiny_quant.tar) |
-
-**Notes:**
-
-- PP-YOLO-tiny is trained on the COCO train2017 dataset and evaluated on the val2017 dataset; Box AP<sup>val</sup> is the `mAP(IoU=0.5:0.95)` evaluation result, and Box AP50<sup>val</sup> is the `mAP(IoU=0.5)` evaluation result.
-- PP-YOLO-tiny used 8 GPUs for training with a mini-batch size of 32 on each GPU; if the GPU number or mini-batch size is changed, the learning rate and number of iterations should be adjusted according to the [FAQ](https://github.com/PaddlePaddle/PaddleDetection/blob/develop/static/docs/FAQ.md).
-- PP-YOLO-tiny inference speed is tested on Kirin 990 with 4 threads on an ARMv8 architecture.
-- We also provide a PP-YOLO-tiny post-quantization inference model, which compresses the model to **1.3MB** with nearly no impact on inference speed or accuracy.
-
-### PP-YOLO on Pascal VOC
-
-PP-YOLO models trained on the Pascal VOC dataset are as follows:
-
-| Model | GPU number | images/GPU | backbone | input shape | Box AP50<sup>val</sup> | download | config |
-|:------------------:|:----------:|:----------:|:----------:| :----------:| :--------------------: | :------: | :-----: |
-| PP-YOLO | 8 | 12 | ResNet50vd | 608 | 84.9 | [model](https://paddlemodels.bj.bcebos.com/object_detection/ppyolo_voc.pdparams) | [config](https://github.com/PaddlePaddle/PaddleDetection/tree/develop/static/configs/ppyolo/ppyolo_voc.yml) |
-| PP-YOLO | 8 | 12 | ResNet50vd | 416 | 84.3 | [model](https://paddlemodels.bj.bcebos.com/object_detection/ppyolo_voc.pdparams) | [config](https://github.com/PaddlePaddle/PaddleDetection/tree/develop/static/configs/ppyolo/ppyolo_voc.yml) |
-| PP-YOLO | 8 | 12 | ResNet50vd | 320 | 82.2 | [model](https://paddlemodels.bj.bcebos.com/object_detection/ppyolo_voc.pdparams) | [config](https://github.com/PaddlePaddle/PaddleDetection/tree/develop/static/configs/ppyolo/ppyolo_voc.yml) |
-| PP-YOLO_EB | 8 | 8 | ResNet34vd | 480 | 86.4 | [model](https://bj.bcebos.com/v1/paddlemodels/object_detection/ppyolo_eb_voc.pdparams) | [config](https://github.com/PaddlePaddle/PaddleDetection/tree/develop/static/configs/ppyolo/ppyolo_eb_voc.yml) |
-
-**Notes:** PP-YOLO-EB is specially designed for [EdgeBoard](https://ai.baidu.com/tech/hardware/deepkit) hardware.
-
-## Getting Started
-
-### 1. Training
-
-Train PP-YOLO on 8 GPUs with the following command (all commands should be run from the PaddleDetection root directory by default); use `--eval` to enable alternating evaluation during training.
-
-```bash
-CUDA_VISIBLE_DEVICES=0,1,2,3,4,5,6,7 python tools/train.py -c configs/ppyolo/ppyolo.yml --eval
-```
-
-Optional: run `tools/anchor_cluster.py` to get anchors suitable for your dataset, and modify the anchor setting in `configs/ppyolo/ppyolo.yml`.
-
-```bash
-python tools/anchor_cluster.py -c configs/ppyolo/ppyolo.yml -n 9 -s 608 -m v2 -i 1000
-```
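-
-If you change the GPU number or per-GPU batch size, the FAQ's linear scaling rule applies. As a hedged sketch (the halving/doubling below is illustrative and the concrete values are not taken from the original config):
-
-```bash
-# Sketch only: training on 4 GPUs instead of 8 halves the total batch size,
-# so halve LearningRate.base_lr and double max_iters (and any decay
-# milestones) in configs/ppyolo/ppyolo.yml before launching. The values to
-# use depend on what your copy of the config currently contains.
-CUDA_VISIBLE_DEVICES=0,1,2,3 python tools/train.py -c configs/ppyolo/ppyolo.yml --eval
-```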
-
-### 2. Evaluation
-
-Evaluate PP-YOLO on the COCO val2017 dataset on a single GPU with the following commands:
-
-```bash
-# use weights released in PaddleDetection model zoo
-CUDA_VISIBLE_DEVICES=0 python tools/eval.py -c configs/ppyolo/ppyolo.yml -o weights=https://paddlemodels.bj.bcebos.com/object_detection/ppyolo.pdparams
-
-# use saved checkpoint in training
-CUDA_VISIBLE_DEVICES=0 python tools/eval.py -c configs/ppyolo/ppyolo.yml -o weights=output/ppyolo/best_model
-```
-
-For evaluation on the COCO test-dev2017 dataset, `configs/ppyolo/ppyolo_test.yml` should be used. Please download the COCO test-dev2017 dataset from the [COCO dataset download page](https://cocodataset.org/#download), decompress it to the paths configured by `EvalReader.dataset` in `configs/ppyolo/ppyolo_test.yml`, and run evaluation with the following commands:
-
-```bash
-# use weights released in PaddleDetection model zoo
-CUDA_VISIBLE_DEVICES=0 python tools/eval.py -c configs/ppyolo/ppyolo_test.yml -o weights=https://paddlemodels.bj.bcebos.com/object_detection/ppyolo.pdparams
-
-# use saved checkpoint in training
-CUDA_VISIBLE_DEVICES=0 python tools/eval.py -c configs/ppyolo/ppyolo_test.yml -o weights=output/ppyolo/best_model
-```
-
-Evaluation results will be saved in `bbox.json`; compress it into a `zip` package as shown below and upload it to the [COCO dataset evaluation page](https://competitions.codalab.org/competitions/20794#participate) to evaluate.
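-
-For example (a minimal sketch, assuming the standard `zip` utility is installed):
-
-```bash
-# Package the evaluation results for upload to the COCO evaluation server.
-zip bbox.zip bbox.json
-```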
-
-**NOTE:** `configs/ppyolo/ppyolo_test.yml` is only used for evaluation on the COCO test-dev2017 dataset; it cannot be used for training or for COCO val2017 evaluation.
-
-### 3. Inference
-
-Run inference on a single GPU with the following commands; use `--infer_img` to infer a single image and `--infer_dir` to infer all images in a directory.
-
-```bash
-# inference single image
-CUDA_VISIBLE_DEVICES=0 python tools/infer.py -c configs/ppyolo/ppyolo.yml -o weights=https://paddlemodels.bj.bcebos.com/object_detection/ppyolo.pdparams --infer_img=demo/000000014439_640x640.jpg

-# inference all images in the directory
-CUDA_VISIBLE_DEVICES=0 python tools/infer.py -c configs/ppyolo/ppyolo.yml -o weights=https://paddlemodels.bj.bcebos.com/object_detection/ppyolo.pdparams --infer_dir=demo
-```
-
-### 4. Inference deployment and benchmark
-
-For inference deployment or benchmarking, export the model with `tools/export_model.py` and perform inference with the Paddle inference library using the following commands:
-
-```bash
-# export model, model will be saved in output/ppyolo as default
-python tools/export_model.py -c configs/ppyolo/ppyolo.yml -o weights=https://paddlemodels.bj.bcebos.com/object_detection/ppyolo.pdparams
-
-# inference with Paddle Inference library
-CUDA_VISIBLE_DEVICES=0 python deploy/python/infer.py --model_dir=output/ppyolo --image_file=demo/000000014439_640x640.jpg --use_gpu=True
-```
-
-Benchmark testing for PP-YOLO uses the model without data reading and post-processing (NMS); export the model with `--exclude_nms` to prune the NMS part for benchmark testing, using the following commands:
-
-```bash
-# export model, --exclude_nms to prune NMS part, model will be saved in output/ppyolo as default
-python tools/export_model.py -c configs/ppyolo/ppyolo.yml -o weights=https://paddlemodels.bj.bcebos.com/object_detection/ppyolo.pdparams --exclude_nms
-
-# FP32 benchmark
-CUDA_VISIBLE_DEVICES=0 python deploy/python/infer.py --model_dir=output/ppyolo --image_file=demo/000000014439_640x640.jpg --use_gpu=True --run_benchmark=True
-
-# TensorRT FP16 benchmark
-CUDA_VISIBLE_DEVICES=0 python deploy/python/infer.py --model_dir=output/ppyolo --image_file=demo/000000014439_640x640.jpg --use_gpu=True --run_benchmark=True --run_mode=trt_fp16
-```
-
-## Appendix
-
-Optimization methods and ablation experiments of PP-YOLO compared with YOLOv3.
-
-| NO. | Model | Box AP<sup>val</sup> | Box AP<sup>test</sup> | Params(M) | FLOPs(G) | V100 FP32 FPS |
-| :--: | :--------------------------- | :------------------: |:--------------------: | :-------: | :------: | :-----------: |
-| A | YOLOv3-DarkNet53 | 38.9 | - | 59.13 | 65.52 | 58.2 |
-| B | YOLOv3-ResNet50vd-DCN | 39.1 | - | 43.89 | 44.71 | 79.2 |
-| C | B + LB + EMA + DropBlock | 41.4 | - | 43.89 | 44.71 | 79.2 |
-| D | C + IoU Loss | 41.9 | - | 43.89 | 44.71 | 79.2 |
-| E | D + IoU Aware | 42.5 | - | 43.90 | 44.71 | 74.9 |
-| F | E + Grid Sensitive | 42.8 | - | 43.90 | 44.71 | 74.8 |
-| G | F + Matrix NMS | 43.5 | - | 43.90 | 44.71 | 74.8 |
-| H | G + CoordConv | 44.0 | - | 43.93 | 44.76 | 74.1 |
-| I | H + SPP | 44.3 | 45.2 | 44.93 | 45.12 | 72.9 |
-| J | I + Better ImageNet Pretrain | 44.8 | 45.2 | 44.93 | 45.12 | 72.9 |
-| K | J + 2x Scheduler | 45.3 | 45.9 | 44.93 | 45.12 | 72.9 |
-
-**Notes:**
-
-- Performance and inference speed are measured with an input shape of 608.
-- All models are trained on the COCO train2017 dataset and evaluated on the val2017 & test-dev2017 datasets; `Box AP` is the `mAP(IoU=0.5:0.95)` evaluation result.
-- Inference speed is tested on a single Tesla V100 with batch size 1, following the test method and environment configuration of the benchmark above.
-- [YOLOv3-DarkNet53](../yolov3_darknet.yml) with 38.9 mAP is the optimized YOLOv3 model in PaddleDetection; see the [Model Zoo](../../docs/MODEL_ZOO.md) for details.
-
-
-## Citation
-
-```
-@article{huang2021pp,
-  title={PP-YOLOv2: A Practical Object Detector},
-  author={Huang, Xin and Wang, Xinxin and Lv, Wenyu and Bai, Xiaying and Long, Xiang and Deng, Kaipeng and Dang, Qingqing and Han, Shumin and Liu, Qiwen and Hu, Xiaoguang and others},
-  journal={arXiv preprint arXiv:2104.10419},
-  year={2021}
-}
-@misc{long2020ppyolo,
-  title={PP-YOLO: An Effective and Efficient Implementation of Object Detector},
-  author={Xiang Long and Kaipeng Deng and Guanzhong Wang and Yang Zhang and Qingqing Dang and Yuan Gao and Hui Shen and Jianguo Ren and Shumin Han and Errui Ding and Shilei Wen},
-  year={2020},
-  eprint={2007.12099},
-  archivePrefix={arXiv},
-  primaryClass={cs.CV}
-}
-@misc{ppdet2019,
-  title={PaddleDetection, Object detection and instance segmentation toolkit based on PaddlePaddle.},
-  author={PaddlePaddle Authors},
-  howpublished = {\url{https://github.com/PaddlePaddle/PaddleDetection}},
-  year={2019}
-}
-```
diff --git a/static/configs/ppyolo/README_cn.md b/static/configs/ppyolo/README_cn.md
deleted file mode 100644
index 6af1912dbb8c68fd59c3ec49822e77ab65349371..0000000000000000000000000000000000000000
--- a/static/configs/ppyolo/README_cn.md
+++ /dev/null
@@ -1,247 +0,0 @@
-简体中文 | [English](README.md)
-
-# PP-YOLO 模型
-
-## 内容
-- [简介](#简介)
-- [模型库](#模型库)
-- [使用说明](#使用说明)
-- [未来工作](#未来工作)
-- [附录](#附录)
-
-## 简介
-
-[PP-YOLO](https://arxiv.org/abs/2007.12099)是PaddleDetection优化和改进的YOLOv3模型,其精度(COCO数据集mAP)和推理速度均优于[YOLOv4](https://arxiv.org/abs/2004.10934)模型,要求使用PaddlePaddle 1.8.4(可使用pip安装)或适当的[develop版本](https://www.paddlepaddle.org.cn/documentation/docs/zh/install/Tables.html#whl-dev)。
-
-PP-YOLO在[COCO](http://cocodataset.org) test-dev2017数据集上精度达到45.9%,在单卡V100上FP32推理速度为72.9 FPS,V100上开启TensorRT下FP16推理速度为155.6 FPS。
-
-PP-YOLO和PP-YOLOv2从如下方面优化和提升YOLOv3模型的精度和速度:
-
-- 更优的骨干网络: ResNet50vd-DCN
-- 更大的训练batch size: 8 GPUs,每GPU batch_size=24,对应调整学习率和迭代轮数
-- [Drop Block](https://arxiv.org/abs/1810.12890)
-- [Exponential Moving Average](https://www.investopedia.com/terms/e/ema.asp)
-- [IoU Loss](https://arxiv.org/pdf/1902.09630.pdf)
-- [Grid Sensitive](https://arxiv.org/abs/2004.10934)
-- [Matrix NMS](https://arxiv.org/pdf/2003.10152.pdf)
-- [CoordConv](https://arxiv.org/abs/1807.03247)
-- [Spatial Pyramid Pooling](https://arxiv.org/abs/1406.4729)
-- 更优的预训练模型
-- [PAN](https://arxiv.org/abs/1803.01534)
-- IoU Aware Loss
-- 更大的输入尺寸
-
-## 模型库
-
-### PP-YOLO模型
-
-| 模型 | GPU个数 | 每GPU图片个数 | 骨干网络 | 输入尺寸 | Box AP<sup>val</sup> | Box AP<sup>test</sup> | V100 FP32(FPS) | V100 TensorRT FP16(FPS) | 模型下载 | 配置文件 |
-|:------------------------:|:-------:|:-------------:|:----------:| :-------:| :------------------: | :-------------------: | :------------: | :---------------------: | :------: | :------: |
-| YOLOv4(AlexyAB) | - | - | CSPDarknet | 608 | - | 43.5 | 62 | 105.5 | [下载链接](https://paddlemodels.bj.bcebos.com/object_detection/yolov4_cspdarknet.pdparams) | [配置文件](https://github.com/PaddlePaddle/PaddleDetection/tree/develop/static/configs/yolov4/yolov4_csdarknet.yml) |
-| YOLOv4(AlexyAB) | - | - | CSPDarknet | 512 | - | 43.0 | 83 | 138.4 | [下载链接](https://paddlemodels.bj.bcebos.com/object_detection/yolov4_cspdarknet.pdparams) | [配置文件](https://github.com/PaddlePaddle/PaddleDetection/tree/develop/static/configs/yolov4/yolov4_csdarknet.yml) |
-| YOLOv4(AlexyAB) | - | - | CSPDarknet | 416 | - | 41.2 | 96 | 164.0 | [下载链接](https://paddlemodels.bj.bcebos.com/object_detection/yolov4_cspdarknet.pdparams) | [配置文件](https://github.com/PaddlePaddle/PaddleDetection/tree/develop/static/configs/yolov4/yolov4_csdarknet.yml) |
-| YOLOv4(AlexyAB) | - | - | CSPDarknet | 320 | - | 38.0 | 123 | 199.0 | [下载链接](https://paddlemodels.bj.bcebos.com/object_detection/yolov4_cspdarknet.pdparams) | [配置文件](https://github.com/PaddlePaddle/PaddleDetection/tree/develop/static/configs/yolov4/yolov4_csdarknet.yml) |
-| PP-YOLO | 8 | 24 | ResNet50vd | 608 | 44.8 | 45.2 | 72.9 | 155.6 | [下载链接](https://paddlemodels.bj.bcebos.com/object_detection/ppyolo.pdparams) | [配置文件](https://github.com/PaddlePaddle/PaddleDetection/tree/develop/static/configs/ppyolo/ppyolo.yml) |
-| PP-YOLO | 8 | 24 | ResNet50vd | 512 | 43.9 | 44.4 | 89.9 | 188.4 | [下载链接](https://paddlemodels.bj.bcebos.com/object_detection/ppyolo.pdparams) | [配置文件](https://github.com/PaddlePaddle/PaddleDetection/tree/develop/static/configs/ppyolo/ppyolo.yml) |
-| PP-YOLO | 8 | 24 | ResNet50vd | 416 | 42.1 | 42.5 | 109.1 | 215.4 | [下载链接](https://paddlemodels.bj.bcebos.com/object_detection/ppyolo.pdparams) | [配置文件](https://github.com/PaddlePaddle/PaddleDetection/tree/develop/static/configs/ppyolo/ppyolo.yml) |
-| PP-YOLO | 8 | 24 | ResNet50vd | 320 | 38.9 | 39.3 | 132.2 | 242.2 | [下载链接](https://paddlemodels.bj.bcebos.com/object_detection/ppyolo.pdparams) | [配置文件](https://github.com/PaddlePaddle/PaddleDetection/tree/develop/static/configs/ppyolo/ppyolo.yml) |
-| PP-YOLO_2x | 8 | 24 | ResNet50vd | 608 | 45.3 | 45.9 | 72.9 | 155.6 | [下载链接](https://paddlemodels.bj.bcebos.com/object_detection/ppyolo_2x.pdparams) | [配置文件](https://github.com/PaddlePaddle/PaddleDetection/tree/develop/static/configs/ppyolo/ppyolo_2x.yml) |
-| PP-YOLO_2x | 8 | 24 | ResNet50vd | 512 | 44.4 | 45.0 | 89.9 | 188.4 | [下载链接](https://paddlemodels.bj.bcebos.com/object_detection/ppyolo_2x.pdparams) | [配置文件](https://github.com/PaddlePaddle/PaddleDetection/tree/develop/static/configs/ppyolo/ppyolo_2x.yml) |
-| PP-YOLO_2x | 8 | 24 | ResNet50vd | 416 | 42.7 | 43.2 | 109.1 | 215.4 | [下载链接](https://paddlemodels.bj.bcebos.com/object_detection/ppyolo_2x.pdparams) | [配置文件](https://github.com/PaddlePaddle/PaddleDetection/tree/develop/static/configs/ppyolo/ppyolo_2x.yml) |
-| PP-YOLO_2x | 8 | 24 | ResNet50vd | 320 | 39.5 | 40.1 | 132.2 | 242.2 | [下载链接](https://paddlemodels.bj.bcebos.com/object_detection/ppyolo_2x.pdparams) | [配置文件](https://github.com/PaddlePaddle/PaddleDetection/tree/develop/static/configs/ppyolo/ppyolo_2x.yml) |
-| PP-YOLO | 4 | 32 | ResNet18vd | 512 | 29.3 | 29.5 | 357.1 | 657.9 | [下载链接](https://paddlemodels.bj.bcebos.com/object_detection/ppyolo_r18vd.pdparams) | [配置文件](https://github.com/PaddlePaddle/PaddleDetection/tree/develop/static/configs/ppyolo/ppyolo_r18vd.yml) |
-| PP-YOLO | 4 | 32 | ResNet18vd | 416 | 28.6 | 28.9 | 409.8 | 719.4 | [下载链接](https://paddlemodels.bj.bcebos.com/object_detection/ppyolo_r18vd.pdparams) | [配置文件](https://github.com/PaddlePaddle/PaddleDetection/tree/develop/static/configs/ppyolo/ppyolo_r18vd.yml) |
-| PP-YOLO | 4 | 32 | ResNet18vd | 320 | 26.2 | 26.4 | 480.7 | 763.4 | [下载链接](https://paddlemodels.bj.bcebos.com/object_detection/ppyolo_r18vd.pdparams) | [配置文件](https://github.com/PaddlePaddle/PaddleDetection/tree/develop/static/configs/ppyolo/ppyolo_r18vd.yml) |
-| PP-YOLOv2 | 8 | 12 | ResNet50vd | 640 | 49.1 | 49.5 | 68.9 | 106.5 | [model](https://paddlemodels.bj.bcebos.com/object_detection/ppyolov2_r50vd_dcn.pdparams) | [config](https://github.com/PaddlePaddle/PaddleDetection/tree/develop/static/configs/ppyolo/ppyolov2_r50vd_dcn.yml) |
-| PP-YOLOv2 | 8 | 12 | ResNet101vd | 640 | 49.7 | 50.3 | 49.5 | 87.0 | [model](https://paddlemodels.bj.bcebos.com/object_detection/ppyolov2_r101vd_dcn.pdparams) | [config](https://github.com/PaddlePaddle/PaddleDetection/tree/develop/static/configs/ppyolo/ppyolov2_r101vd_dcn.yml) |
-
-**注意:**
-
-- PP-YOLO模型使用COCO数据集中train2017作为训练集,使用val2017和test-dev2017作为测试集,Box AP<sup>test</sup>为`mAP(IoU=0.5:0.95)`评估结果。
-- PP-YOLO模型训练过程中使用8 GPUs,每GPU batch size为24进行训练,如训练GPU数和batch size不使用上述配置,须参考[FAQ](../../docs/FAQ.md)调整学习率和迭代次数。
-- PP-YOLO模型推理速度测试采用单卡V100,batch size=1进行测试,使用CUDA 10.2, CUDNN 7.5.1,TensorRT推理速度测试使用TensorRT 5.1.2.2。
-- PP-YOLO模型FP32的推理速度测试数据为使用`tools/export_model.py`脚本导出模型后,使用`deploy/python/infer.py`脚本中的`--run_benchmark`参数使用Paddle预测库进行推理速度benchmark测试结果, 且测试的均为不包含数据预处理和模型输出后处理(NMS)的数据(与[YOLOv4(AlexyAB)](https://github.com/AlexeyAB/darknet)测试方法一致)。
-- TensorRT FP16的速度测试相比于FP32去除了`yolo_box`(bbox解码)部分耗时,即不包含数据预处理,bbox解码和NMS(与[YOLOv4(AlexyAB)](https://github.com/AlexeyAB/darknet)测试方法一致)。
-- YOLOv4(AlexyAB)模型精度和V100 FP32推理速度数据使用[YOLOv4 github库](https://github.com/AlexeyAB/darknet)提供的单卡V100上精度速度测试数据,V100 TensorRT FP16推理速度为使用[AlexyAB/darknet](https://github.com/AlexeyAB/darknet)库中tkDNN配置于单卡V100,TensorRT 5.1.2.2的测试结果。
-- YOLOv4(AlexyAB)行`模型下载`和`配置文件`为PaddleDetection复现的YOLOv4模型,目前评估精度已对齐,支持finetune,训练精度对齐中,可参见[PaddleDetection YOLOv4 模型](../yolov4/README.md)
-- PP-YOLO使用每GPU `batch_size=24`训练,需要使用显存为32G的GPU,我们也提供了`batch_size=12`的可以在显存为16G的GPU上训练的配置文件`ppyolo_2x_bs12.yml`,使用这个配置文件训练在COCO val2017数据集上评估结果为`mAP(IoU=0.5:0.95) = 45.1%`,可通过[ppyolo_2x_bs12模型](https://paddlemodels.bj.bcebos.com/object_detection/ppyolo_2x_bs12.pdparams)下载权重。
-
-### PP-YOLO 轻量级模型
-
-| 模型 | GPU个数 | 每GPU图片个数 | 模型体积 | 输入尺寸 | Box AP<sup>val</sup> | Box AP50<sup>val</sup> | Kirin 990 1xCore (FPS) | 模型下载 | 预测模型下载 | 配置文件 |
-|:----------------------------:|:-------:|:-------------:|:----------:| :-------:| :------------------: | :--------------------: | :--------------------: | :------: | :----------: | :------: |
-| PP-YOLO_MobileNetV3_large | 4 | 32 | 18MB | 320 | 23.2 | 42.6 | 14.1 | [下载链接](https://paddlemodels.bj.bcebos.com/object_detection/ppyolo_mobilenet_v3_large.pdparams) | [下载链接](https://paddlemodels.bj.bcebos.com/object_detection/ppyolo_mobilenet_v3_large.tar) | [配置文件](https://github.com/PaddlePaddle/PaddleDetection/tree/develop/static/configs/ppyolo/ppyolo_mobilenet_v3_large.yml) |
-| PP-YOLO_MobileNetV3_small | 4 | 32 | 11MB | 320 | 17.2 | 33.8 | 21.5 | [下载链接](https://paddlemodels.bj.bcebos.com/object_detection/ppyolo_mobilenet_v3_small.pdparams) | [下载链接](https://paddlemodels.bj.bcebos.com/object_detection/ppyolo_mobilenet_v3_small.tar) | [配置文件](https://github.com/PaddlePaddle/PaddleDetection/tree/develop/static/configs/ppyolo/ppyolo_mobilenet_v3_small.yml) |
-
-- PP-YOLO_MobileNetV3 模型使用COCO数据集中train2017作为训练集,使用val2017作为测试集,Box AP<sup>val</sup>为`mAP(IoU=0.5:0.95)`评估结果, Box AP50<sup>val</sup>为`mAP(IoU=0.5)`评估结果。
-- PP-YOLO_MobileNetV3 模型训练过程中使用4GPU,每GPU batch size为32进行训练,如训练GPU数和batch size不使用上述配置,须参考[FAQ](../../docs/FAQ.md)调整学习率和迭代次数。
-- PP-YOLO_MobileNetV3 模型推理速度测试环境配置为麒麟990芯片单线程。
-
-### PP-YOLO 轻量级裁剪模型
-
-| 模型 | GPU 个数 | 每GPU图片个数 | 裁剪率 | Teacher模型 | 模型体积 | 输入尺寸 | Box AP<sup>val</sup> | Kirin 990 1xCore (FPS) | 模型下载 | 预测模型下载 | 配置文件 |
-|:----------------------------:|:----------:|:-------------:| :---------: | :-----------------------: | :--------: | :----------:| :------------------: | :--------------------: | :------: | :----------: | :------: |
-| PP-YOLO_MobileNetV3_small | 4 | 32 | 75% | PP-YOLO_MobileNetV3_large | 4.2MB | 320 | 16.2 | 39.8 | [下载链接](https://paddlemodels.bj.bcebos.com/object_detection/ppyolo_mobilenet_v3_small_prune75_distillby_mobilenet_v3_large.pdparams) | [下载链接](https://paddlemodels.bj.bcebos.com/object_detection/ppyolo_mobilenet_v3_small_prune75_distillby_mobilenet_v3_large.tar) | [配置文件](https://github.com/PaddlePaddle/PaddleDetection/tree/develop/static/configs/ppyolo/ppyolo_mobilenet_v3_small.yml) |
-
-- PP-YOLO 轻量级裁剪模型采用[蒸馏通道剪裁模型](../../slim/extensions/distill_pruned_model/README.md)的方式训练得到,基于 PP-YOLO_MobileNetV3_small 模型对Head部分做卷积通道剪裁后使用 PP-YOLO_MobileNetV3_large 模型进行蒸馏训练
-- 对Head部分剪裁掉75%的卷积通道数,即剪裁参数为`--pruned_params="yolo_block.0.2.conv.weights,yolo_block.0.tip.conv.weights,yolo_block.1.2.conv.weights,yolo_block.1.tip.conv.weights" --pruned_ratios="0.75,0.75,0.75,0.75"`
-- PP-YOLO 轻量级裁剪模型的训练、评估、预测及模型导出方法见[蒸馏通道剪裁模型](../../slim/extensions/distill_pruned_model/README.md)
-
-### PP-YOLO tiny模型
-
-| 模型 | GPU 个数 | 每GPU图片个数 | 模型体积 | 后量化模型体积 | 输入尺寸 | Box AP<sup>val</sup> | Kirin 990 4xCore (FPS) | 模型下载 | 配置文件 | 量化后模型 |
-|:----------------------------:|:----------:|:-------------:| :--------: | :------------: | :----------:| :------------------: | :--------------------: | :------: | :------: | :--------: |
-| PP-YOLO tiny | 8 | 32 | 4.2MB | **1.3M** | 320 | 20.6 | 92.3 | [model](https://paddlemodels.bj.bcebos.com/object_detection/ppyolo_tiny.pdparams) | [config](https://github.com/PaddlePaddle/PaddleDetection/tree/develop/static/configs/ppyolo/ppyolo_tiny.yml) | [预测模型](https://paddledet.bj.bcebos.com/models/ppyolo_tiny_quant.tar) |
-| PP-YOLO tiny | 8 | 32 | 4.2MB | **1.3M** | 416 | 22.7 | 65.4 | [model](https://paddlemodels.bj.bcebos.com/object_detection/ppyolo_tiny.pdparams) | [config](https://github.com/PaddlePaddle/PaddleDetection/tree/develop/static/configs/ppyolo/ppyolo_tiny.yml) | [预测模型](https://paddledet.bj.bcebos.com/models/ppyolo_tiny_quant.tar) |
-
-- PP-YOLO-tiny 模型使用COCO数据集中train2017作为训练集,使用val2017作为测试集,Box AP<sup>val</sup>为`mAP(IoU=0.5:0.95)`评估结果, Box AP50<sup>val</sup>为`mAP(IoU=0.5)`评估结果。
-- PP-YOLO-tiny 模型训练过程中使用8GPU,每GPU batch size为32进行训练,如训练GPU数和batch size不使用上述配置,须参考[FAQ](https://github.com/PaddlePaddle/PaddleDetection/blob/develop/static/docs/FAQ.md)调整学习率和迭代次数。
-- PP-YOLO-tiny 模型推理速度测试环境配置为麒麟990芯片4线程,arm8架构。
-- 我们也提供了PP-YOLO-tiny的后量化压缩模型,将模型体积压缩到**1.3M**,对精度和预测速度基本无影响
-
-### Pascal VOC数据集上的PP-YOLO
-
-PP-YOLO在Pascal VOC数据集上训练模型如下:
-
-| 模型 | GPU个数 | 每GPU图片个数 | 骨干网络 | 输入尺寸 | Box AP50<sup>val</sup> | 模型下载 | 配置文件 |
-|:------------------:|:-------:|:-------------:|:----------:| :----------:| :--------------------: | :------: | :-----: |
-| PP-YOLO | 8 | 12 | ResNet50vd | 608 | 84.9 | [model](https://paddlemodels.bj.bcebos.com/object_detection/ppyolo_voc.pdparams) | [config](https://github.com/PaddlePaddle/PaddleDetection/tree/develop/static/configs/ppyolo/ppyolo_voc.yml) |
-| PP-YOLO | 8 | 12 | ResNet50vd | 416 | 84.3 | [model](https://paddlemodels.bj.bcebos.com/object_detection/ppyolo_voc.pdparams) | [config](https://github.com/PaddlePaddle/PaddleDetection/tree/develop/static/configs/ppyolo/ppyolo_voc.yml) |
-| PP-YOLO | 8 | 12 | ResNet50vd | 320 | 82.2 | [model](https://paddlemodels.bj.bcebos.com/object_detection/ppyolo_voc.pdparams) | [config](https://github.com/PaddlePaddle/PaddleDetection/tree/develop/static/configs/ppyolo/ppyolo_voc.yml) |
-| PP-YOLO_EB | 8 | 8 | ResNet34vd | 480 | 86.4 | [model](https://bj.bcebos.com/v1/paddlemodels/object_detection/ppyolo_eb_voc.pdparams) | [config](https://github.com/PaddlePaddle/PaddleDetection/tree/develop/static/configs/ppyolo/ppyolo_eb_voc.yml) |
-
-**注意:** PP-YOLO-EB是针对[EdgeBoard](https://ai.baidu.com/tech/hardware/deepkit)硬件专门设计的模型。
-
-
-## 使用说明
-
-### 1. 训练
-
-使用8GPU通过如下命令一键式启动训练(以下命令均默认在PaddleDetection根目录运行),通过`--eval`参数开启训练中交替评估。
-
-```bash
-CUDA_VISIBLE_DEVICES=0,1,2,3,4,5,6,7 python tools/train.py -c configs/ppyolo/ppyolo.yml --eval
-```
-可选:在训练之前使用`tools/anchor_cluster.py`得到适用于你的数据集的anchor,并修改`configs/ppyolo/ppyolo.yml`中的anchor设置
-```bash
-python tools/anchor_cluster.py -c configs/ppyolo/ppyolo.yml -n 9 -s 608 -m v2 -i 1000
-```
-
-### 2. 评估
-
-使用单GPU通过如下命令一键式评估模型在COCO val2017数据集效果
-
-```bash
-# 使用PaddleDetection发布的权重
-CUDA_VISIBLE_DEVICES=0 python tools/eval.py -c configs/ppyolo/ppyolo.yml -o weights=https://paddlemodels.bj.bcebos.com/object_detection/ppyolo.pdparams
-
-# 使用训练保存的checkpoint
-CUDA_VISIBLE_DEVICES=0 python tools/eval.py -c configs/ppyolo/ppyolo.yml -o weights=output/ppyolo/best_model
-```
-
-我们提供了`configs/ppyolo/ppyolo_test.yml`用于评估COCO test-dev2017数据集的效果,评估COCO test-dev2017数据集的效果须先从[COCO数据集下载页](https://cocodataset.org/#download)下载test-dev2017数据集,解压到`configs/ppyolo/ppyolo_test.yml`中`EvalReader.dataset`配置的路径,并使用如下命令进行评估
-
-```bash
-# 使用PaddleDetection发布的权重
-CUDA_VISIBLE_DEVICES=0 python tools/eval.py -c configs/ppyolo/ppyolo_test.yml -o weights=https://paddlemodels.bj.bcebos.com/object_detection/ppyolo.pdparams
-
-# 使用训练保存的checkpoint
-CUDA_VISIBLE_DEVICES=0 python tools/eval.py -c configs/ppyolo/ppyolo_test.yml -o weights=output/ppyolo/best_model
-```
-
-评估结果保存于`bbox.json`中,将其压缩为zip包后通过[COCO数据集评估页](https://competitions.codalab.org/competitions/20794#participate)提交评估(打包命令示例见下)。
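-
-例如,可使用如下命令打包评估结果(示例命令,假定已安装`zip`工具):
-
-```bash
-# 将评估结果打包,用于上传至COCO评估服务器
-zip bbox.zip bbox.json
-```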
- -### 3. 推理 - -使用单GPU通过如下命令一键式推理图像,通过`--infer_img`指定图像路径,或通过`--infer_dir`指定目录并推理目录下所有图像: - -```bash -# 推理单张图像 -CUDA_VISIBLE_DEVICES=0 python tools/infer.py -c configs/ppyolo/ppyolo.yml -o weights=https://paddlemodels.bj.bcebos.com/object_detection/ppyolo.pdparams --infer_img=demo/000000014439_640x640.jpg - -# 推理目录下所有图像 -CUDA_VISIBLE_DEVICES=0 python tools/infer.py -c configs/ppyolo/ppyolo.yml -o weights=https://paddlemodels.bj.bcebos.com/object_detection/ppyolo.pdparams --infer_dir=demo -``` - -### 4. 推理部署与benchmark - -PP-YOLO模型部署及推理benchmark需要通过`tools/export_model.py`导出模型后,使用Paddle预测库进行部署和推理,可通过如下命令一键式启动。 - -```bash -# 导出模型,默认存储于output/ppyolo目录 -python tools/export_model.py -c configs/ppyolo/ppyolo.yml -o weights=https://paddlemodels.bj.bcebos.com/object_detection/ppyolo.pdparams - -# 预测库推理 -CUDA_VISIBLE_DEVICES=0 python deploy/python/infer.py --model_dir=output/ppyolo --image_file=demo/000000014439_640x640.jpg --use_gpu=True -``` - -PP-YOLO模型benchmark测试仅统计网络结构部分的耗时,不包含数据预处理和网络输出后处理(NMS)部分,因此导出模型时须指定`--exclude_nms`来裁剪掉模型中后处理的NMS部分,通过如下命令进行模型导出和benchmark测试。 - -```bash -# 导出模型,通过--exclude_nms参数裁剪掉模型中的NMS部分,默认存储于output/ppyolo目录 -python tools/export_model.py -c configs/ppyolo/ppyolo.yml -o weights=https://paddlemodels.bj.bcebos.com/object_detection/ppyolo.pdparams --exclude_nms - -# FP32 benchmark测试 -CUDA_VISIBLE_DEVICES=0 python deploy/python/infer.py --model_dir=output/ppyolo --image_file=demo/000000014439_640x640.jpg --use_gpu=True --run_benchmark=True - -# TensorRT FP16 benchmark测试 -CUDA_VISIBLE_DEVICES=0 python deploy/python/infer.py --model_dir=output/ppyolo --image_file=demo/000000014439_640x640.jpg --use_gpu=True --run_benchmark=True --run_mode=trt_fp16 -``` - -## 附录 - -PP-YOLO模型相对于YOLOv3模型优化项消融实验数据如下表所示。 - -| 序号 | 模型 | Box AP<sup>val</sup> | Box AP<sup>test</sup> | 参数量(M) | FLOPs(G) | V100 FP32 FPS | -| :--: | :--------------------------- | :------------------: | :-------------------: | :-------: | :------: | :-----------: | -| A | YOLOv3-DarkNet53 | 38.9 | - | 59.13 | 65.52 | 58.2 | -| B | YOLOv3-ResNet50vd-DCN | 39.1 | - | 43.89 | 44.71 | 79.2 | -| C | B + LB + EMA + DropBlock | 41.4 | - | 43.89 | 44.71 | 79.2 | -| D | C + IoU Loss | 41.9 | - | 43.89 | 44.71 | 79.2 | -| E | D + IoU Aware | 42.5 | - | 43.90 | 44.71 | 74.9 | -| F | E + Grid Sensitive | 42.8 | - | 43.90 | 44.71 | 74.8 | -| G | F + Matrix NMS | 43.5 | - | 43.90 | 44.71 | 74.8 | -| H | G + CoordConv | 44.0 | - | 43.93 | 44.76 | 74.1 | -| I | H + SPP | 44.3 | 45.2 | 44.93 | 45.12 | 72.9 | -| J | I + Better ImageNet Pretrain | 44.8 | 45.2 | 44.93 | 45.12 | 72.9 | -| K | J + 2x Scheduler | 45.3 | 45.9 | 44.93 | 45.12 | 72.9 | - -**注意:** - -- 精度与推理速度数据均为使用输入图像尺寸为608的测试结果。 -- Box AP为在COCO train2017数据集上训练、在val2017和test-dev2017数据集上评估的`mAP(IoU=0.5:0.95)`数据。 -- 推理速度为单卡V100、batch size=1时使用上述benchmark测试方法得到的结果,测试环境配置为CUDA 10.2,cuDNN 7.5.1。 -- [YOLOv3-DarkNet53](../yolov3_darknet.yml)(精度38.9)为PaddleDetection优化后的YOLOv3模型,详情可参见[模型库](../../docs/MODEL_ZOO_cn.md)。
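Most rows of the ablation table map directly to switches in the YAML configs that follow (for example `iou_aware`, `drop_block`, `nms: MatrixNMS`). As one concrete example, the Grid Sensitive row corresponds to `scale_x_y: 1.05`; the sketch below shows the center decoding it implies, following the formulation in the PP-YOLO/YOLOv4 papers rather than PaddleDetection's exact implementation.

```python
import math

# Grid-Sensitive center decode implied by `scale_x_y: 1.05` (a sketch under
# the paper's formulation, not PaddleDetection's actual code).
def decode_center(tx, cx, scale_x_y=1.05):
    sigma = 1.0 / (1.0 + math.exp(-tx))
    # Stretch the sigmoid by scale_x_y and re-center it, so the predicted
    # center can actually reach (and slightly cross) the grid-cell borders.
    return cx + scale_x_y * sigma - 0.5 * (scale_x_y - 1.0)

print(decode_center(10.0, cx=0))   # ~1.025 (a plain sigmoid saturates below 1.0)
print(decode_center(-10.0, cx=0))  # ~-0.025 (a plain sigmoid saturates above 0.0)
```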
- - -## 引用 - -``` -@article{huang2021pp, - title={PP-YOLOv2: A Practical Object Detector}, - author={Huang, Xin and Wang, Xinxin and Lv, Wenyu and Bai, Xiaying and Long, Xiang and Deng, Kaipeng and Dang, Qingqing and Han, Shumin and Liu, Qiwen and Hu, Xiaoguang and others}, - journal={arXiv preprint arXiv:2104.10419}, - year={2021} -} -@misc{long2020ppyolo, -title={PP-YOLO: An Effective and Efficient Implementation of Object Detector}, -author={Xiang Long and Kaipeng Deng and Guanzhong Wang and Yang Zhang and Qingqing Dang and Yuan Gao and Hui Shen and Jianguo Ren and Shumin Han and Errui Ding and Shilei Wen}, -year={2020}, -eprint={2007.12099}, -archivePrefix={arXiv}, -primaryClass={cs.CV} -} -@misc{ppdet2019, -title={PaddleDetection, Object detection and instance segmentation toolkit based on PaddlePaddle.}, -author={PaddlePaddle Authors}, -howpublished = {\url{https://github.com/PaddlePaddle/PaddleDetection}}, -year={2019} -} -``` diff --git a/static/configs/ppyolo/ppyolo.yml b/static/configs/ppyolo/ppyolo.yml deleted file mode 100644 index 9fe271bd1cd24ffe73b81671fb19f18677cf0e78..0000000000000000000000000000000000000000 --- a/static/configs/ppyolo/ppyolo.yml +++ /dev/null @@ -1,90 +0,0 @@ -architecture: YOLOv3 -use_gpu: true -max_iters: 250000 -log_iter: 100 -save_dir: output -snapshot_iter: 10000 -metric: COCO -pretrain_weights: https://paddle-imagenet-models-name.bj.bcebos.com/ResNet50_vd_ssld_pretrained.tar -weights: output/ppyolo/model_final -num_classes: 80 -use_fine_grained_loss: true -use_ema: true -ema_decay: 0.9998 - -YOLOv3: - backbone: ResNet - yolo_head: YOLOv3Head - use_fine_grained_loss: true - -ResNet: - norm_type: sync_bn - freeze_at: 0 - freeze_norm: false - norm_decay: 0. - depth: 50 - feature_maps: [3, 4, 5] - variant: d - dcn_v2_stages: [5] - -YOLOv3Head: - anchor_masks: [[6, 7, 8], [3, 4, 5], [0, 1, 2]] - anchors: [[10, 13], [16, 30], [33, 23], - [30, 61], [62, 45], [59, 119], - [116, 90], [156, 198], [373, 326]] - norm_decay: 0.
- coord_conv: true - iou_aware: true - iou_aware_factor: 0.4 - scale_x_y: 1.05 - spp: true - yolo_loss: YOLOv3Loss - nms: MatrixNMS - drop_block: true - -YOLOv3Loss: - ignore_thresh: 0.7 - scale_x_y: 1.05 - label_smooth: false - use_fine_grained_loss: true - iou_loss: IouLoss - iou_aware_loss: IouAwareLoss - -IouLoss: - loss_weight: 2.5 - max_height: 608 - max_width: 608 - -IouAwareLoss: - loss_weight: 1.0 - max_height: 608 - max_width: 608 - -MatrixNMS: - background_label: -1 - keep_top_k: 100 - normalized: false - score_threshold: 0.01 - post_threshold: 0.01 - -LearningRate: - base_lr: 0.01 - schedulers: - - !PiecewiseDecay - gamma: 0.1 - milestones: - - 400000 - - 450000 - - !LinearWarmup - start_factor: 0. - steps: 4000 - -OptimizerBuilder: - optimizer: - momentum: 0.9 - type: Momentum - regularizer: - factor: 0.0005 - type: L2 - -_READER_: 'ppyolo_reader.yml' diff --git a/static/configs/ppyolo/ppyolo_2x_bs12.yml b/static/configs/ppyolo/ppyolo_2x_bs12.yml deleted file mode 100644 index 128fb55207f8764b2df795712ab387a226dc2fb5..0000000000000000000000000000000000000000 --- a/static/configs/ppyolo/ppyolo_2x_bs12.yml +++ /dev/null @@ -1,92 +0,0 @@ -architecture: YOLOv3 -use_gpu: true -max_iters: 500000 -log_iter: 100 -save_dir: output -snapshot_iter: 10000 -metric: COCO -pretrain_weights: https://paddle-imagenet-models-name.bj.bcebos.com/ResNet50_vd_ssld_pretrained.tar -weights: output/ppyolo/model_final -num_classes: 80 -use_fine_grained_loss: true -use_ema: true -ema_decay: 0.9998 - -YOLOv3: - backbone: ResNet - yolo_head: YOLOv3Head - use_fine_grained_loss: true - -ResNet: - norm_type: sync_bn - freeze_at: 0 - freeze_norm: false - norm_decay: 0. - depth: 50 - feature_maps: [3, 4, 5] - variant: d - dcn_v2_stages: [5] - -YOLOv3Head: - anchor_masks: [[6, 7, 8], [3, 4, 5], [0, 1, 2]] - anchors: [[10, 13], [16, 30], [33, 23], - [30, 61], [62, 45], [59, 119], - [116, 90], [156, 198], [373, 326]] - norm_decay: 0. - coord_conv: true - iou_aware: true - iou_aware_factor: 0.4 - scale_x_y: 1.05 - spp: true - yolo_loss: YOLOv3Loss - nms: MatrixNMS - drop_block: true - -YOLOv3Loss: - ignore_thresh: 0.7 - scale_x_y: 1.05 - label_smooth: false - use_fine_grained_loss: true - iou_loss: IouLoss - iou_aware_loss: IouAwareLoss - -IouLoss: - loss_weight: 2.5 - max_height: 608 - max_width: 608 - -IouAwareLoss: - loss_weight: 1.0 - max_height: 608 - max_width: 608 - -MatrixNMS: - background_label: -1 - keep_top_k: 100 - normalized: false - score_threshold: 0.01 - post_threshold: 0.01 - -LearningRate: - base_lr: 0.005 - schedulers: - - !PiecewiseDecay - gamma: 0.1 - milestones: - - 400000 - - 450000 - - !LinearWarmup - start_factor: 0. 
- steps: 4000 - -OptimizerBuilder: - optimizer: - momentum: 0.9 - type: Momentum - regularizer: - factor: 0.0005 - type: L2 - -_READER_: 'ppyolo_reader.yml' -TrainReader: - batch_size: 12 diff --git a/static/configs/ppyolo/ppyolo_eb.yml b/static/configs/ppyolo/ppyolo_eb.yml deleted file mode 100644 index f32634b4557024ef9ab7697c6d6874fbcb38ebf7..0000000000000000000000000000000000000000 --- a/static/configs/ppyolo/ppyolo_eb.yml +++ /dev/null @@ -1,74 +0,0 @@ -architecture: YOLOv3 -use_gpu: true -max_iters: 500000 -save_dir: output -snapshot_iter: 10000 -metric: COCO -pretrain_weights: https://paddle-imagenet-models-name.bj.bcebos.com/ResNet34_vd_pretrained.tar -weights: output/ppyolo_eb/best_model -num_classes: 80 -use_fine_grained_loss: true -log_iter: 1000 -use_ema: true -ema_decay: 0.9998 - -YOLOv3: - backbone: ResNet_EB - yolo_head: EBHead - -ResNet_EB: - norm_type: sync_bn - freeze_at: 0 - freeze_norm: false - norm_decay: 0. - depth: 34 - variant: d - feature_maps: [3, 4, 5] - -EBHead: - anchor_masks: [[6, 7, 8], [3, 4, 5], [0, 1, 2]] - anchors: [[10, 13], [16, 30], [33, 23], - [30, 61], [62, 45], [59, 119], - [116, 90], [156, 198], [373, 326]] - norm_decay: 0. - yolo_loss: YOLOv3Loss - nms: - background_label: -1 - keep_top_k: 100 - nms_threshold: 0.45 - nms_top_k: 1000 - normalized: false - score_threshold: 0.01 - -YOLOv3Loss: - ignore_thresh: 0.7 - label_smooth: false - use_fine_grained_loss: true - iou_loss: IouLoss - -IouLoss: - loss_weight: 2.5 - max_height: 608 - max_width: 608 - -LearningRate: - base_lr: 0.001 - schedulers: - - !PiecewiseDecay - gamma: 0.1 - milestones: - - 320000 - - 450000 - - !LinearWarmup - start_factor: 0. - steps: 4000 - -OptimizerBuilder: - optimizer: - momentum: 0.9 - type: Momentum - regularizer: - factor: 0.0005 - type: L2 - -_READER_: 'ppyolo_reader.yml'
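The PP-YOLO configs in this directory all enable a weight EMA via `use_ema: true` and `ema_decay: 0.9998`. A minimal sketch of the core update follows; PaddleDetection's actual `ExponentialMovingAverage` may add decay warmup or bias correction, which this sketch omits.

```python
# Minimal weight-EMA sketch matching `ema_decay: 0.9998` in these configs.
# Shadow weights track the training weights; evaluation and export then use
# the slowly moving shadow copy instead of the raw weights.
def ema_update(shadow, weights, decay=0.9998):
    for name, w in weights.items():
        shadow[name] = decay * shadow[name] + (1.0 - decay) * w
    return shadow

shadow = {"conv.w": 1.0}
for step in range(3):
    shadow = ema_update(shadow, {"conv.w": 0.0})
print(shadow)  # {'conv.w': ~0.9994}: the shadow drifts slowly toward new weights
```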
diff --git a/static/configs/ppyolo/ppyolo_eb_voc.yml b/static/configs/ppyolo/ppyolo_eb_voc.yml deleted file mode 100644 index 3d30c14afde86bb5bb12194afd630dea7e6caff5..0000000000000000000000000000000000000000 --- a/static/configs/ppyolo/ppyolo_eb_voc.yml +++ /dev/null @@ -1,103 +0,0 @@ -architecture: YOLOv3 -use_gpu: true -max_iters: 70000 -log_smooth_window: 20 -save_dir: output -snapshot_iter: 3000 -metric: VOC -map_type: integral -pretrain_weights: https://paddle-imagenet-models-name.bj.bcebos.com/ResNet34_vd_pretrained.tar -weights: output/ppyolo_eb_voc/best_model -num_classes: 20 -use_fine_grained_loss: true -log_iter: 1000 -use_ema: true -ema_decay: 0.9998 - -YOLOv3: - backbone: ResNet_EB - yolo_head: EBHead - -ResNet_EB: - norm_type: sync_bn - freeze_at: 0 - freeze_norm: false - norm_decay: 0. - depth: 34 - variant: d - feature_maps: [3, 4, 5] - -EBHead: - anchor_masks: [[6, 7, 8], [3, 4, 5], [0, 1, 2]] - anchors: [[10, 13], [16, 30], [33, 23], - [30, 61], [62, 45], [59, 119], - [116, 90], [156, 198], [373, 326]] - norm_decay: 0. - yolo_loss: YOLOv3Loss - nms: - background_label: -1 - keep_top_k: 100 - nms_threshold: 0.45 - nms_top_k: 1000 - normalized: false - score_threshold: 0.01 - -YOLOv3Loss: - ignore_thresh: 0.7 - label_smooth: false - use_fine_grained_loss: true - iou_loss: IouLoss - -IouLoss: - loss_weight: 2.5 - max_height: 608 - max_width: 608 - -LearningRate: - base_lr: 0.001 - schedulers: - - !PiecewiseDecay - gamma: 0.1 - milestones: - - 35000 - - 60000 - - !LinearWarmup - start_factor: 0.
- steps: 4000 - -OptimizerBuilder: - optimizer: - momentum: 0.9 - type: Momentum - regularizer: - factor: 0.0005 - type: L2 - -_READER_: 'ppyolo_reader.yml' -TrainReader: - inputs_def: - fields: ['image', 'gt_bbox', 'gt_class', 'gt_score'] - num_max_boxes: 90 - dataset: - !COCODataSet - image_dir: train2017 - anno_path: annotations/instances_train2017.json - dataset_dir: dataset/coco - with_background: false - sample_transforms: - - !DecodeImage - to_rgb: True - with_mixup: True - - !MixupImage - alpha: 1.5 - beta: 1.5 - - !ColorDistort {} - - !RandomExpand - fill_value: [123.675, 116.28, 103.53] - - !RandomCrop {} - - !RandomFlipImage - is_normalized: false - - !NormalizeBox {} - - !PadBox - num_max_boxes: 90 - - !BboxXYXY2XYWH {} - batch_transforms: - - !RandomShape - sizes: [224, 256, 288, 320, 352, 384, 416, 448, 480, 512] - random_inter: True - - !NormalizeImage - mean: [0.485, 0.456, 0.406] - std: [0.229, 0.224, 0.225] - is_scale: True - is_channel_first: false - - !Permute - to_bgr: false - channel_first: True - # Gt2YoloTarget is only used when use_fine_grained_loss set as true, - # this operator will be deleted automatically if use_fine_grained_loss - # is set as false - - !Gt2YoloTarget - anchor_masks: [[3, 4, 5], [0, 1, 2]] - anchors: [[11, 18], [34, 47], [51, 126], - [115, 71], [120, 195], [254, 235]] - downsample_ratios: [32, 16] - iou_thresh: 0.25 - num_classes: 80 - batch_size: 32 - shuffle: true - mixup_epoch: 200 - drop_last: true - worker_num: 8 - bufsize: 4 - use_process: true - -EvalReader: - inputs_def: - fields: ['image', 'im_size', 'im_id'] - num_max_boxes: 90 - dataset: - !COCODataSet - image_dir: val2017 - anno_path: annotations/instances_val2017.json - dataset_dir: dataset/coco - with_background: false - sample_transforms: - - !DecodeImage - to_rgb: True - - !ResizeImage - target_size: 320 - interp: 2 - - !NormalizeImage - mean: [0.485, 0.456, 0.406] - std: [0.229, 0.224, 0.225] - is_scale: True - is_channel_first: false - - !PadBox - num_max_boxes: 90 - - !Permute - to_bgr: false - channel_first: True - batch_size: 1 - drop_empty: false - worker_num: 2 - bufsize: 4 - -TestReader: - inputs_def: - image_shape: [3, 320, 320] - fields: ['image', 'im_size', 'im_id'] - dataset: - !ImageFolder - anno_path: annotations/instances_val2017.json - with_background: false - sample_transforms: - - !DecodeImage - to_rgb: True - - !ResizeImage - target_size: 320 - interp: 2 - - !NormalizeImage - mean: [0.485, 0.456, 0.406] - std: [0.229, 0.224, 0.225] - is_scale: True - is_channel_first: false - - !Permute - to_bgr: false - channel_first: True - batch_size: 1 diff --git a/static/configs/ppyolo/ppyolo_mobilenet_v3_small.yml b/static/configs/ppyolo/ppyolo_mobilenet_v3_small.yml deleted file mode 100755 index 9c3e27976e285386331d8d52004f1a79ac11aec0..0000000000000000000000000000000000000000 --- a/static/configs/ppyolo/ppyolo_mobilenet_v3_small.yml +++ /dev/null @@ -1,192 +0,0 @@ -architecture: YOLOv3 -use_gpu: true -max_iters: 250000 -log_smooth_window: 20 -log_iter: 20 -save_dir: output -snapshot_iter: 10000 -metric: COCO -pretrain_weights: https://paddle-imagenet-models-name.bj.bcebos.com/MobileNetV3_small_x1_0_ssld_pretrained.tar -weights: output/ppyolo_tiny/model_final -num_classes: 80 -use_fine_grained_loss: true -use_ema: true -ema_decay: 0.9998 - -YOLOv3: - backbone: MobileNetV3 - yolo_head: YOLOv3Head - use_fine_grained_loss: true - -MobileNetV3: - norm_type: sync_bn - norm_decay: 0. - model_name: small - scale: 1. 
- extra_block_filters: [] - feature_maps: [1, 2, 3, 4, 6] - - -YOLOv3Head: - anchor_masks: [[3, 4, 5], [0, 1, 2]] - anchors: [[11, 18], [34, 47], [51, 126], - [115, 71], [120, 195], [254, 235]] - norm_decay: 0. - conv_block_num: 0 - coord_conv: true - scale_x_y: 1.05 - yolo_loss: YOLOv3Loss - spp: true - nms: - background_label: -1 - keep_top_k: 100 - nms_threshold: 0.45 - nms_top_k: 1000 - normalized: false - score_threshold: 0.005 - drop_block: true - -YOLOv3Loss: - ignore_thresh: 0.5 - scale_x_y: 1.05 - label_smooth: false - use_fine_grained_loss: true - iou_loss: IouLoss - -IouLoss: - loss_weight: 2.5 - max_height: 512 - max_width: 512 - -LearningRate: - base_lr: 0.005 - schedulers: - - !PiecewiseDecay - gamma: 0.1 - milestones: - - 150000 - - 200000 - - !LinearWarmup - start_factor: 0. - steps: 4000 - -OptimizerBuilder: - optimizer: - momentum: 0.9 - type: Momentum - regularizer: - factor: 0.0005 - type: L2 - -_READER_: 'ppyolo_reader.yml' -TrainReader: - inputs_def: - fields: ['image', 'gt_bbox', 'gt_class', 'gt_score'] - num_max_boxes: 90 - dataset: - !COCODataSet - image_dir: train2017 - anno_path: annotations/instances_train2017.json - dataset_dir: dataset/coco - with_background: false - sample_transforms: - - !DecodeImage - to_rgb: True - with_mixup: True - - !MixupImage - alpha: 1.5 - beta: 1.5 - - !ColorDistort {} - - !RandomExpand - fill_value: [123.675, 116.28, 103.53] - - !RandomCrop {} - - !RandomFlipImage - is_normalized: false - - !NormalizeBox {} - - !PadBox - num_max_boxes: 90 - - !BboxXYXY2XYWH {} - batch_transforms: - - !RandomShape - sizes: [224, 256, 288, 320, 352, 384, 416, 448, 480, 512] - random_inter: True - - !NormalizeImage - mean: [0.485, 0.456, 0.406] - std: [0.229, 0.224, 0.225] - is_scale: True - is_channel_first: false - - !Permute - to_bgr: false - channel_first: True - # Gt2YoloTarget is only used when use_fine_grained_loss set as true, - # this operator will be deleted automatically if use_fine_grained_loss - # is set as false - - !Gt2YoloTarget - anchor_masks: [[3, 4, 5], [0, 1, 2]] - anchors: [[11, 18], [34, 47], [51, 126], - [115, 71], [120, 195], [254, 235]] - downsample_ratios: [32, 16] - iou_thresh: 0.25 - num_classes: 80 - batch_size: 32 - shuffle: true - mixup_epoch: 200 - drop_last: true - worker_num: 8 - bufsize: 4 - use_process: true - -EvalReader: - inputs_def: - fields: ['image', 'im_size', 'im_id'] - num_max_boxes: 90 - dataset: - !COCODataSet - image_dir: val2017 - anno_path: annotations/instances_val2017.json - dataset_dir: dataset/coco - with_background: false - sample_transforms: - - !DecodeImage - to_rgb: True - - !ResizeImage - target_size: 320 - interp: 2 - - !NormalizeImage - mean: [0.485, 0.456, 0.406] - std: [0.229, 0.224, 0.225] - is_scale: True - is_channel_first: false - - !PadBox - num_max_boxes: 90 - - !Permute - to_bgr: false - channel_first: True - batch_size: 1 - drop_empty: false - worker_num: 2 - bufsize: 4 - -TestReader: - inputs_def: - image_shape: [3, 320, 320] - fields: ['image', 'im_size', 'im_id'] - dataset: - !ImageFolder - anno_path: annotations/instances_val2017.json - with_background: false - sample_transforms: - - !DecodeImage - to_rgb: True - - !ResizeImage - target_size: 320 - interp: 2 - - !NormalizeImage - mean: [0.485, 0.456, 0.406] - std: [0.229, 0.224, 0.225] - is_scale: True - is_channel_first: false - - !Permute - to_bgr: false - channel_first: True - batch_size: 1 diff --git a/static/configs/ppyolo/ppyolo_r18vd.yml b/static/configs/ppyolo/ppyolo_r18vd.yml deleted file mode 100755 index 
e65adecbb29e214772bc959d33b8f73578e5572d..0000000000000000000000000000000000000000 --- a/static/configs/ppyolo/ppyolo_r18vd.yml +++ /dev/null @@ -1,133 +0,0 @@ -architecture: YOLOv3 -use_gpu: true -max_iters: 250000 -log_iter: 20 -save_dir: output -snapshot_iter: 10000 -metric: COCO -pretrain_weights: https://paddle-imagenet-models-name.bj.bcebos.com/ResNet18_vd_pretrained.tar -weights: output/ppyolo_tiny/model_final -num_classes: 80 -use_fine_grained_loss: true -use_ema: true -ema_decay: 0.9998 - -YOLOv3: - backbone: ResNet - yolo_head: YOLOv3Head - use_fine_grained_loss: true - -ResNet: - norm_type: sync_bn - freeze_at: 0 - freeze_norm: false - norm_decay: 0. - depth: 18 - feature_maps: [4, 5] - variant: d - -YOLOv3Head: - anchor_masks: [[3, 4, 5], [0, 1, 2]] - anchors: [[10, 14], [23, 27], [37, 58], - [81, 82], [135, 169], [344, 319]] - norm_decay: 0. - conv_block_num: 0 - scale_x_y: 1.05 - yolo_loss: YOLOv3Loss - nms: MatrixNMS - drop_block: true - -YOLOv3Loss: - ignore_thresh: 0.7 - scale_x_y: 1.05 - label_smooth: false - use_fine_grained_loss: true - iou_loss: IouLoss - -IouLoss: - loss_weight: 2.5 - max_height: 608 - max_width: 608 - -MatrixNMS: - background_label: -1 - keep_top_k: 100 - normalized: false - score_threshold: 0.01 - post_threshold: 0.01 - -LearningRate: - base_lr: 0.004 - schedulers: - - !PiecewiseDecay - gamma: 0.1 - milestones: - - 150000 - - 200000 - - !LinearWarmup - start_factor: 0. - steps: 4000 - -OptimizerBuilder: - optimizer: - momentum: 0.9 - type: Momentum - regularizer: - factor: 0.0005 - type: L2 - -_READER_: 'ppyolo_reader.yml' -TrainReader: - inputs_def: - fields: ['image', 'gt_bbox', 'gt_class', 'gt_score'] - num_max_boxes: 50 - dataset: - !COCODataSet - image_dir: train2017 - anno_path: annotations/instances_train2017.json - dataset_dir: train_data/dataset/coco - with_background: false - sample_transforms: - - !DecodeImage - to_rgb: True - with_mixup: True - - !MixupImage - alpha: 1.5 - beta: 1.5 - - !ColorDistort {} - - !RandomExpand - fill_value: [123.675, 116.28, 103.53] - - !RandomCrop {} - - !RandomFlipImage - is_normalized: false - - !NormalizeBox {} - - !PadBox - num_max_boxes: 50 - - !BboxXYXY2XYWH {} - batch_transforms: - - !RandomShape - sizes: [320, 352, 384, 416, 448, 480, 512, 544, 576, 608] - random_inter: True - - !NormalizeImage - mean: [0.485, 0.456, 0.406] - std: [0.229, 0.224, 0.225] - is_scale: True - is_channel_first: false - - !Permute - to_bgr: false - channel_first: True - # Gt2YoloTarget is only used when use_fine_grained_loss set as true, - # this operator will be deleted automatically if use_fine_grained_loss - # is set as false - - !Gt2YoloTarget - anchor_masks: [[3, 4, 5], [0, 1, 2]] - anchors: [[10, 14], [23, 27], [37, 58], - [81, 82], [135, 169], [344, 319]] - downsample_ratios: [32, 16] - batch_size: 32 - shuffle: true - mixup_epoch: 500 - drop_last: true - worker_num: 16 - bufsize: 8 - use_process: true diff --git a/static/configs/ppyolo/ppyolo_reader.yml b/static/configs/ppyolo/ppyolo_reader.yml deleted file mode 100644 index f03e47216f9ce2003e66c2032511acf05e6ec1d9..0000000000000000000000000000000000000000 --- a/static/configs/ppyolo/ppyolo_reader.yml +++ /dev/null @@ -1,111 +0,0 @@ -TrainReader: - inputs_def: - fields: ['image', 'gt_bbox', 'gt_class', 'gt_score'] - num_max_boxes: 50 - dataset: - !COCODataSet - image_dir: train2017 - anno_path: annotations/instances_train2017.json - dataset_dir: dataset/coco - with_background: false - sample_transforms: - - !DecodeImage - to_rgb: True - with_mixup: True - - 
!MixupImage - alpha: 1.5 - beta: 1.5 - - !ColorDistort {} - - !RandomExpand - ratio: 2.0 - fill_value: [123.675, 116.28, 103.53] - - !RandomCrop {} - - !RandomFlipImage - is_normalized: false - - !NormalizeBox {} - - !PadBox - num_max_boxes: 50 - - !BboxXYXY2XYWH {} - batch_transforms: - - !RandomShape - sizes: [320, 352, 384, 416, 448, 480, 512, 544, 576, 608] - random_inter: True - - !NormalizeImage - mean: [0.485, 0.456, 0.406] - std: [0.229, 0.224, 0.225] - is_scale: True - is_channel_first: false - - !Permute - to_bgr: false - channel_first: True - # Gt2YoloTarget is only used when use_fine_grained_loss set as true, - # this operator will be deleted automatically if use_fine_grained_loss - # is set as false - - !Gt2YoloTarget - anchor_masks: [[6, 7, 8], [3, 4, 5], [0, 1, 2]] - anchors: [[10, 13], [16, 30], [33, 23], - [30, 61], [62, 45], [59, 119], - [116, 90], [156, 198], [373, 326]] - downsample_ratios: [32, 16, 8] - batch_size: 24 - shuffle: true - mixup_epoch: 25000 - drop_last: true - worker_num: 8 - bufsize: 4 - use_process: true - -EvalReader: - inputs_def: - fields: ['image', 'im_size', 'im_id'] - num_max_boxes: 50 - dataset: - !COCODataSet - image_dir: val2017 - anno_path: annotations/instances_val2017.json - dataset_dir: dataset/coco - with_background: false - sample_transforms: - - !DecodeImage - to_rgb: True - - !ResizeImage - target_size: 608 - interp: 2 - - !NormalizeImage - mean: [0.485, 0.456, 0.406] - std: [0.229, 0.224, 0.225] - is_scale: True - is_channel_first: false - - !PadBox - num_max_boxes: 50 - - !Permute - to_bgr: false - channel_first: True - batch_size: 8 - drop_empty: false - worker_num: 8 - bufsize: 4 - -TestReader: - inputs_def: - image_shape: [3, 608, 608] - fields: ['image', 'im_size', 'im_id'] - dataset: - !ImageFolder - anno_path: annotations/instances_val2017.json - with_background: false - sample_transforms: - - !DecodeImage - to_rgb: True - - !ResizeImage - target_size: 608 - interp: 2 - - !NormalizeImage - mean: [0.485, 0.456, 0.406] - std: [0.229, 0.224, 0.225] - is_scale: True - is_channel_first: false - - !Permute - to_bgr: false - channel_first: True - batch_size: 1 diff --git a/static/configs/ppyolo/ppyolo_roadsign_kunlun.yml b/static/configs/ppyolo/ppyolo_roadsign_kunlun.yml deleted file mode 100644 index 46d6348416674e82e9a95f5356db86d995dd7d68..0000000000000000000000000000000000000000 --- a/static/configs/ppyolo/ppyolo_roadsign_kunlun.yml +++ /dev/null @@ -1,198 +0,0 @@ -architecture: YOLOv3 -use_gpu: false -use_xpu: true -max_iters: 5000 -log_iter: 1 -save_dir: output -snapshot_iter: 500 -metric: VOC -pretrain_weights: https://paddlemodels.bj.bcebos.com/object_detection/ppyolo.pdparams -weights: output/ppyolo_roadsign_kunlun/model_final -num_classes: 4 -finetune_exclude_pretrained_params: ['yolo_output'] -use_fine_grained_loss: true -use_ema: true -ema_decay: 0.9998 - -YOLOv3: - backbone: ResNet - yolo_head: YOLOv3Head - use_fine_grained_loss: true - -ResNet: - norm_type: 'bn' - freeze_at: 0 - freeze_norm: false - norm_decay: 0. - depth: 50 - feature_maps: [3, 4, 5] - variant: d - dcn_v2_stages: [5] - -YOLOv3Head: - anchor_masks: [[6, 7, 8], [3, 4, 5], [0, 1, 2]] - anchors: [[10, 13], [16, 30], [33, 23], - [30, 61], [62, 45], [59, 119], - [116, 90], [156, 198], [373, 326]] - norm_decay: 0. 
- coord_conv: true - iou_aware: true - iou_aware_factor: 0.4 - scale_x_y: 1.05 - spp: true - yolo_loss: YOLOv3Loss - nms: MatrixNMS - drop_block: true - -YOLOv3Loss: - ignore_thresh: 0.7 - scale_x_y: 1.05 - label_smooth: false - use_fine_grained_loss: true - iou_loss: IouLoss - iou_aware_loss: IouAwareLoss - -IouLoss: - loss_weight: 2.5 - max_height: 608 - max_width: 608 - -IouAwareLoss: - loss_weight: 1.0 - max_height: 608 - max_width: 608 - -MatrixNMS: - background_label: -1 - keep_top_k: 100 - normalized: false - score_threshold: 0.01 - post_threshold: 0.01 - -LearningRate: - base_lr: 0.0001 - schedulers: - - !PiecewiseDecay - gamma: 0.1 - milestones: - - 800 - - 110 - - !LinearWarmup - start_factor: 0 - steps: 100 - -OptimizerBuilder: - optimizer: - momentum: 0.9 - type: Momentum - regularizer: - factor: 0.0005 - type: L2 - -TrainReader: - inputs_def: - fields: ['image', 'gt_bbox', 'gt_class', 'gt_score'] - num_max_boxes: 50 - dataset: - !VOCDataSet - dataset_dir: dataset/roadsign_voc - anno_path: train.txt - with_background: false - sample_transforms: - - !DecodeImage - to_rgb: True - with_mixup: True - - !MixupImage - alpha: 1.5 - beta: 1.5 - - !ColorDistort {} - - !RandomExpand - fill_value: [123.675, 116.28, 103.53] - ratio: 1.5 - - !RandomCrop {} - - !RandomFlipImage - is_normalized: false - - !NormalizeBox {} - - !PadBox - num_max_boxes: 50 - - !BboxXYXY2XYWH {} - batch_transforms: - - !RandomShape - sizes: [320] - random_inter: True - - !NormalizeImage - mean: [0.485, 0.456, 0.406] - std: [0.229, 0.224, 0.225] - is_scale: True - is_channel_first: false - - !Permute - to_bgr: false - channel_first: True - - !Gt2YoloTarget - anchor_masks: [[6, 7, 8], [3, 4, 5], [0, 1, 2]] - anchors: [[10, 13], [16, 30], [33, 23], - [30, 61], [62, 45], [59, 119], - [116, 90], [156, 198], [373, 326]] - downsample_ratios: [32, 16, 8] - batch_size: 8 - shuffle: true - mixup_epoch: 250 - drop_last: true - worker_num: 2 - bufsize: 2 - use_process: false #true - - -EvalReader: - inputs_def: - fields: ['image', 'im_size', 'im_id', 'gt_bbox', 'gt_class', 'is_difficult'] - num_max_boxes: 50 - dataset: - !VOCDataSet - dataset_dir: dataset/roadsign_voc - anno_path: valid.txt - with_background: false - sample_transforms: - - !DecodeImage - to_rgb: True - - !ResizeImage - target_size: 608 - interp: 2 - - !NormalizeImage - mean: [0.485, 0.456, 0.406] - std: [0.229, 0.224, 0.225] - is_scale: True - is_channel_first: false - - !PadBox - num_max_boxes: 50 - - !Permute - to_bgr: false - channel_first: True - batch_size: 1 - drop_empty: false - worker_num: 4 - bufsize: 2 - -TestReader: - inputs_def: - image_shape: [3, 608, 608] - fields: ['image', 'im_size', 'im_id'] - dataset: - !ImageFolder - anno_path: dataset/roadsign_voc/label_list.txt - with_background: false - sample_transforms: - - !DecodeImage - to_rgb: True - - !ResizeImage - target_size: 608 - interp: 2 - - !NormalizeImage - mean: [0.485, 0.456, 0.406] - std: [0.229, 0.224, 0.225] - is_scale: True - is_channel_first: false - - !Permute - to_bgr: false - channel_first: True - batch_size: 1 diff --git a/static/configs/ppyolo/ppyolo_tiny.yml b/static/configs/ppyolo/ppyolo_tiny.yml deleted file mode 100755 index aa80ebf41ee035863ad76417de7a01915b383598..0000000000000000000000000000000000000000 --- a/static/configs/ppyolo/ppyolo_tiny.yml +++ /dev/null @@ -1,193 +0,0 @@ -architecture: YOLOv3 -use_gpu: true -max_iters: 300000 -log_smooth_window: 100 -log_iter: 100 -save_dir: output -snapshot_iter: 10000 -metric: COCO -pretrain_weights: 
https://paddle-imagenet-models-name.bj.bcebos.com/MobileNetV3_large_x0_5_pretrained.tar -weights: output/ppyolo_tiny/model_final -num_classes: 80 -use_fine_grained_loss: true -use_ema: true -ema_decay: 0.9998 - -YOLOv3: - backbone: MobileNetV3 - yolo_head: PPYOLOTinyHead - use_fine_grained_loss: true - -MobileNetV3: - norm_type: sync_bn - norm_decay: 0. - model_name: large - scale: .5 - extra_block_filters: [] - feature_maps: [1, 2, 3, 4, 6] - -PPYOLOTinyHead: - anchor_masks: [[6, 7, 8], [3, 4, 5], [0, 1, 2]] - anchors: [[10, 15], [24, 36], [72, 42], - [35, 87], [102, 96], [60, 170], - [220, 125], [128, 222], [264, 266]] - detection_block_channels: [160, 128, 96] - norm_decay: 0. - scale_x_y: 1.05 - yolo_loss: YOLOv3Loss - spp: true - drop_block: true - nms: - background_label: -1 - keep_top_k: 100 - nms_threshold: 0.45 - nms_top_k: 1000 - normalized: false - score_threshold: 0.01 - -YOLOv3Loss: - ignore_thresh: 0.5 - scale_x_y: 1.05 - label_smooth: false - use_fine_grained_loss: true - iou_loss: IouLoss - -IouLoss: - loss_weight: 2.5 - max_height: 512 - max_width: 512 - -LearningRate: - base_lr: 0.005 - schedulers: - - !PiecewiseDecay - gamma: 0.1 - milestones: - - 200000 - - 250000 - - 280000 - - !LinearWarmup - start_factor: 0. - steps: 4000 - -OptimizerBuilder: - optimizer: - momentum: 0.949 - type: Momentum - regularizer: - factor: 0.0005 - type: L2 - -TrainReader: - inputs_def: - fields: ['image', 'gt_bbox', 'gt_class', 'gt_score'] - num_max_boxes: 100 - dataset: - !COCODataSet - image_dir: train2017 - anno_path: annotations/instances_train2017.json - dataset_dir: train_data/dataset/coco - with_background: false - sample_transforms: - - !DecodeImage - to_rgb: True - with_mixup: True - - !MixupImage - alpha: 1.5 - beta: 1.5 - - !ColorDistort {} - - !RandomExpand - fill_value: [123.675, 116.28, 103.53] - ratio: 2 - - !RandomCrop {} - - !RandomFlipImage - is_normalized: false - - !NormalizeBox {} - - !PadBox - num_max_boxes: 100 - - !BboxXYXY2XYWH {} - batch_transforms: - - !RandomShape - sizes: [192, 224, 256, 288, 320, 352, 384, 416, 448, 480, 512] - random_inter: True - - !NormalizeImage - mean: [0.485, 0.456, 0.406] - std: [0.229, 0.224, 0.225] - is_scale: True - is_channel_first: false - - !Permute - to_bgr: false - channel_first: True - # Gt2YoloTarget is only used when use_fine_grained_loss set as true, - # this operator will be deleted automatically if use_fine_grained_loss - # is set as false - - !Gt2YoloTarget - anchor_masks: [[6, 7, 8], [3, 4, 5], [0, 1, 2]] - anchors: [[10, 15], [24, 36], [72, 42], - [35, 87], [102, 96], [60, 170], - [220, 125], [128, 222], [264, 266]] - downsample_ratios: [32, 16, 8] - iou_thresh: 0.25 - num_classes: 80 - batch_size: 32 - shuffle: true - mixup_epoch: 200 - drop_last: true - worker_num: 16 - bufsize: 4 - use_process: true - -EvalReader: - inputs_def: - fields: ['image', 'im_size', 'im_id'] - num_max_boxes: 100 - dataset: - !COCODataSet - image_dir: val2017 - anno_path: annotations/instances_val2017.json - dataset_dir: train_data/dataset/coco - with_background: false - sample_transforms: - - !DecodeImage - to_rgb: True - - !ResizeImage - target_size: 320 - interp: 2 - - !NormalizeImage - mean: [0.485, 0.456, 0.406] - std: [0.229, 0.224, 0.225] - is_scale: True - is_channel_first: false - - !PadBox - num_max_boxes: 100 - - !Permute - to_bgr: false - channel_first: True - batch_size: 1 - drop_empty: false - worker_num: 2 - bufsize: 4 - -TestReader: - inputs_def: - image_shape: [3, 320, 320] - fields: ['image', 'im_size', 'im_id'] - dataset: - 
!ImageFolder - anno_path: annotations/instances_val2017.json - with_background: false - sample_transforms: - - !DecodeImage - to_rgb: True - - !ResizeImage - target_size: 320 - interp: 2 - - !NormalizeImage - mean: [0.485, 0.456, 0.406] - std: [0.229, 0.224, 0.225] - is_scale: True - is_channel_first: false - - !Permute - to_bgr: false - channel_first: True - batch_size: 1 diff --git a/static/configs/ppyolo/ppyolo_voc.yml b/static/configs/ppyolo/ppyolo_voc.yml deleted file mode 100644 index cf138863d74fbf8406c0e78fae1c3e819c80bff1..0000000000000000000000000000000000000000 --- a/static/configs/ppyolo/ppyolo_voc.yml +++ /dev/null @@ -1,117 +0,0 @@ -architecture: YOLOv3 -use_gpu: true -max_iters: 70000 -log_smooth_window: 20 -log_iter: 20 -save_dir: output -snapshot_iter: 10000 -metric: VOC -pretrain_weights: https://paddle-imagenet-models-name.bj.bcebos.com/ResNet50_vd_ssld_pretrained.tar -weights: output/ppyolo/model_final -num_classes: 20 -use_fine_grained_loss: true -use_ema: true -ema_decay: 0.9998 - -YOLOv3: - backbone: ResNet - yolo_head: YOLOv3Head - use_fine_grained_loss: true - -ResNet: - norm_type: sync_bn - freeze_at: 0 - freeze_norm: false - norm_decay: 0. - depth: 50 - feature_maps: [3, 4, 5] - variant: d - dcn_v2_stages: [5] - -YOLOv3Head: - anchor_masks: [[6, 7, 8], [3, 4, 5], [0, 1, 2]] - anchors: [[10, 13], [16, 30], [33, 23], - [30, 61], [62, 45], [59, 119], - [116, 90], [156, 198], [373, 326]] - norm_decay: 0. - coord_conv: true - iou_aware: true - iou_aware_factor: 0.4 - scale_x_y: 1.05 - spp: true - yolo_loss: YOLOv3Loss - nms: MatrixNMS - drop_block: true - -YOLOv3Loss: - ignore_thresh: 0.7 - scale_x_y: 1.05 - label_smooth: false - use_fine_grained_loss: true - iou_loss: IouLoss - iou_aware_loss: IouAwareLoss - -IouLoss: - loss_weight: 2.5 - max_height: 608 - max_width: 608 - -IouAwareLoss: - loss_weight: 1.0 - max_height: 608 - max_width: 608 - -MatrixNMS: - background_label: -1 - keep_top_k: 100 - normalized: false - score_threshold: 0.01 - post_threshold: 0.01 - -LearningRate: - base_lr: 0.00333 - schedulers: - - !PiecewiseDecay - gamma: 0.1 - milestones: - - 56000 - - 62000 - - !LinearWarmup - start_factor: 0. 
- steps: 4000 - -OptimizerBuilder: - optimizer: - momentum: 0.9 - type: Momentum - regularizer: - factor: 0.0005 - type: L2 - -_READER_: 'ppyolo_reader.yml' -TrainReader: - dataset: - !VOCDataSet - dataset_dir: dataset/voc - anno_path: trainval.txt - use_default_label: true - with_background: false - mixup_epoch: 350 - batch_size: 12 - -EvalReader: - inputs_def: - fields: ['image', 'im_size', 'im_id', 'gt_bbox', 'gt_class', 'is_difficult'] - num_max_boxes: 50 - dataset: - !VOCDataSet - dataset_dir: dataset/voc - anno_path: test.txt - use_default_label: true - with_background: false - -TestReader: - dataset: - !ImageFolder - use_default_label: true - with_background: false diff --git a/static/configs/ppyolo/ppyolov2_r101vd_dcn.yml b/static/configs/ppyolo/ppyolov2_r101vd_dcn.yml deleted file mode 100644 index 9ba339912fe791ef3d9f48442f7e91bf1e2bec12..0000000000000000000000000000000000000000 --- a/static/configs/ppyolo/ppyolov2_r101vd_dcn.yml +++ /dev/null @@ -1,89 +0,0 @@ -architecture: YOLOv3 -use_gpu: true -max_iters: 450000 -log_iter: 100 -save_dir: output -snapshot_iter: 10000 -metric: COCO -pretrain_weights: https://paddle-imagenet-models-name.bj.bcebos.com/ResNet101_vd_ssld_pretrained.tar -weights: output/ppyolov2_r101vd_dcn/model_final -num_classes: 80 -use_fine_grained_loss: true -use_ema: true -ema_decay: 0.9998 - -YOLOv3: - backbone: ResNet - yolo_head: YOLOv3PANHead - use_fine_grained_loss: true - -ResNet: - norm_type: sync_bn - freeze_at: 0 - freeze_norm: false - norm_decay: 0. - depth: 101 - feature_maps: [3, 4, 5] - variant: d - dcn_v2_stages: [5] - -YOLOv3PANHead: - anchor_masks: [[6, 7, 8], [3, 4, 5], [0, 1, 2]] - anchors: [[10, 13], [16, 30], [33, 23], - [30, 61], [62, 45], [59, 119], - [116, 90], [156, 198], [373, 326]] - norm_decay: 0. - iou_aware: true - iou_aware_factor: 0.5 - scale_x_y: 1.05 - spp: true - yolo_loss: YOLOv3Loss - nms: MatrixNMS - drop_block: true - -YOLOv3Loss: - ignore_thresh: 0.7 - scale_x_y: 1.05 - label_smooth: false - use_fine_grained_loss: true - iou_loss: IouLoss - iou_aware_loss: IouAwareLoss - -IouLoss: - loss_weight: 2.5 - max_height: 768 - max_width: 768 - -IouAwareLoss: - loss_weight: 1.0 - max_height: 768 - max_width: 768 - -MatrixNMS: - background_label: -1 - keep_top_k: 100 - normalized: false - score_threshold: 0.01 - post_threshold: 0.01 - -LearningRate: - base_lr: 0.005 - schedulers: - - !PiecewiseDecay - gamma: 0.1 - milestones: - - 300000 - - !LinearWarmup - start_factor: 0. - steps: 4000 - -OptimizerBuilder: - clip_grad_by_norm: 35. - optimizer: - momentum: 0.9 - type: Momentum - regularizer: - factor: 0.0005 - type: L2 - -_READER_: 'ppyolov2_reader.yml' diff --git a/static/configs/ppyolo/ppyolov2_r50vd_dcn.yml b/static/configs/ppyolo/ppyolov2_r50vd_dcn.yml deleted file mode 100644 index 7ceb75833767b60c76528e6fb07786b40dbd6bdd..0000000000000000000000000000000000000000 --- a/static/configs/ppyolo/ppyolov2_r50vd_dcn.yml +++ /dev/null @@ -1,89 +0,0 @@ -architecture: YOLOv3 -use_gpu: true -max_iters: 450000 -log_iter: 100 -save_dir: output -snapshot_iter: 10000 -metric: COCO -pretrain_weights: https://paddle-imagenet-models-name.bj.bcebos.com/ResNet50_vd_ssld_pretrained.tar -weights: output/ppyolov2_r50vd_dcn/model_final -num_classes: 80 -use_fine_grained_loss: true -use_ema: true -ema_decay: 0.9998 - -YOLOv3: - backbone: ResNet - yolo_head: YOLOv3PANHead - use_fine_grained_loss: true - -ResNet: - norm_type: sync_bn - freeze_at: 0 - freeze_norm: false - norm_decay: 0. 
- depth: 50 - feature_maps: [3, 4, 5] - variant: d - dcn_v2_stages: [5] - -YOLOv3PANHead: - anchor_masks: [[6, 7, 8], [3, 4, 5], [0, 1, 2]] - anchors: [[10, 13], [16, 30], [33, 23], - [30, 61], [62, 45], [59, 119], - [116, 90], [156, 198], [373, 326]] - norm_decay: 0. - iou_aware: true - iou_aware_factor: 0.5 - scale_x_y: 1.05 - spp: true - yolo_loss: YOLOv3Loss - nms: MatrixNMS - drop_block: true - -YOLOv3Loss: - ignore_thresh: 0.7 - scale_x_y: 1.05 - label_smooth: false - use_fine_grained_loss: true - iou_loss: IouLoss - iou_aware_loss: IouAwareLoss - -IouLoss: - loss_weight: 2.5 - max_height: 768 - max_width: 768 - -IouAwareLoss: - loss_weight: 1.0 - max_height: 768 - max_width: 768 - -MatrixNMS: - background_label: -1 - keep_top_k: 100 - normalized: false - score_threshold: 0.01 - post_threshold: 0.01 - -LearningRate: - base_lr: 0.005 - schedulers: - - !PiecewiseDecay - gamma: 0.1 - milestones: - - 300000 - - !LinearWarmup - start_factor: 0. - steps: 4000 - -OptimizerBuilder: - clip_grad_by_norm: 35. - optimizer: - momentum: 0.9 - type: Momentum - regularizer: - factor: 0.0005 - type: L2 - -_READER_: 'ppyolov2_reader.yml' diff --git a/static/configs/ppyolo/ppyolov2_reader.yml b/static/configs/ppyolo/ppyolov2_reader.yml deleted file mode 100644 index d291ab1f7b02b4cca3dba589e552376e4d4d12f1..0000000000000000000000000000000000000000 --- a/static/configs/ppyolo/ppyolov2_reader.yml +++ /dev/null @@ -1,110 +0,0 @@ -TrainReader: - inputs_def: - fields: ['image', 'gt_bbox', 'gt_class', 'gt_score'] - num_max_boxes: 100 - dataset: - !COCODataSet - image_dir: train2017 - anno_path: annotations/instances_train2017.json - dataset_dir: dataset/coco - with_background: false - sample_transforms: - - !DecodeImage - to_rgb: True - with_mixup: True - - !MixupImage - alpha: 1.5 - beta: 1.5 - - !ColorDistort {} - - !RandomExpand - fill_value: [123.675, 116.28, 103.53] - - !RandomCrop {} - - !RandomFlipImage - is_normalized: false - - !NormalizeBox {} - - !PadBox - num_max_boxes: 100 - - !BboxXYXY2XYWH {} - batch_transforms: - - !RandomShape - sizes: [320, 352, 384, 416, 448, 480, 512, 544, 576, 608, 640, 672, 704, 736, 768] - random_inter: True - - !NormalizeImage - mean: [0.485, 0.456, 0.406] - std: [0.229, 0.224, 0.225] - is_scale: True - is_channel_first: false - - !Permute - to_bgr: false - channel_first: True - # Gt2YoloTarget is only used when use_fine_grained_loss set as true, - # this operator will be deleted automatically if use_fine_grained_loss - # is set as false - - !Gt2YoloTarget - anchor_masks: [[6, 7, 8], [3, 4, 5], [0, 1, 2]] - anchors: [[10, 13], [16, 30], [33, 23], - [30, 61], [62, 45], [59, 119], - [116, 90], [156, 198], [373, 326]] - downsample_ratios: [32, 16, 8] - batch_size: 12 - shuffle: true - mixup_epoch: 25000 - drop_last: true - worker_num: 8 - bufsize: 4 - use_process: true - -EvalReader: - inputs_def: - fields: ['image', 'im_size', 'im_id'] - num_max_boxes: 100 - dataset: - !COCODataSet - image_dir: val2017 - anno_path: annotations/instances_val2017.json - dataset_dir: dataset/coco - with_background: false - sample_transforms: - - !DecodeImage - to_rgb: True - - !ResizeImage - target_size: 640 - interp: 2 - - !NormalizeImage - mean: [0.485, 0.456, 0.406] - std: [0.229, 0.224, 0.225] - is_scale: True - is_channel_first: false - - !PadBox - num_max_boxes: 50 - - !Permute - to_bgr: false - channel_first: True - batch_size: 8 - drop_empty: false - worker_num: 8 - bufsize: 4 - -TestReader: - inputs_def: - image_shape: [3, 640, 640] - fields: ['image', 'im_size', 'im_id'] - 
dataset: - !ImageFolder - anno_path: annotations/instances_val2017.json - with_background: false - sample_transforms: - - !DecodeImage - to_rgb: True - - !ResizeImage - target_size: 640 - interp: 2 - - !NormalizeImage - mean: [0.485, 0.456, 0.406] - std: [0.229, 0.224, 0.225] - is_scale: True - is_channel_first: false - - !Permute - to_bgr: false - channel_first: True - batch_size: 1 diff --git a/static/configs/random_erasing/README.md b/static/configs/random_erasing/README.md deleted file mode 100644 index 2b101348e086a8115f5fe4e6a0a10a008e6228b3..0000000000000000000000000000000000000000 --- a/static/configs/random_erasing/README.md +++ /dev/null @@ -1,21 +0,0 @@ -# Random Erasing Data Augmentation - -## Introduction - -- Random Erasing Data Augmentation -: [https://arxiv.org/abs/1708.04896](https://arxiv.org/abs/1708.04896) - -``` -@article{zhong1708random, - title={Random erasing data augmentation. arXiv 2017}, - author={Zhong, Z and Zheng, L and Kang, G and Li, S and Yang, Y}, - journal={arXiv preprint arXiv:1708.04896} -} -``` - - -## Model Zoo - -| Backbone | Type | Image/gpu | Lr schd | Inf time (fps) | Box AP | Mask AP | Download | Configs | -| :---------------------- | :-------------: | :-------: | :-----: | :------------: | :----: | :-----: | :----------------------------------------------------------: | :-----: | -| ResNet50-vd-FPN | Faster | 2 | 4x | 21.847 | 39.0% | - | [model](https://paddlemodels.bj.bcebos.com/object_detection/faster_rcnn_r50_vd_fpn_random_erasing_4x.pdparams) | [config](https://github.com/PaddlePaddle/PaddleDetection/tree/develop/static/configs/random_erasing/faster_rcnn_r50_vd_fpn_random_erasing_4x.yml) | diff --git a/static/configs/random_erasing/faster_rcnn_r50_vd_fpn_random_erasing_4x.yml b/static/configs/random_erasing/faster_rcnn_r50_vd_fpn_random_erasing_4x.yml deleted file mode 100644 index 049b34b1936c76dd07698bc32f414f66e2d4047a..0000000000000000000000000000000000000000 --- a/static/configs/random_erasing/faster_rcnn_r50_vd_fpn_random_erasing_4x.yml +++ /dev/null @@ -1,144 +0,0 @@ -architecture: FasterRCNN -max_iters: 360000 -snapshot_iter: 40000 -use_gpu: true -log_iter: 20 -save_dir: output -pretrain_weights: https://paddle-imagenet-models-name.bj.bcebos.com/ResNet50_vd_pretrained.tar -weights: output/faster_rcnn_r50_vd_fpn_random_erasing_4x/model_final -metric: COCO -num_classes: 81 - -FasterRCNN: - backbone: ResNet - fpn: FPN - rpn_head: FPNRPNHead - roi_extractor: FPNRoIAlign - bbox_head: BBoxHead - bbox_assigner: BBoxAssigner - -ResNet: - depth: 50 - feature_maps: [2, 3, 4, 5] - freeze_at: 2 - norm_type: bn - variant: d - -FPN: - max_level: 6 - min_level: 2 - num_chan: 256 - spatial_scale: [0.03125, 0.0625, 0.125, 0.25] - -FPNRPNHead: - anchor_generator: - anchor_sizes: [32, 64, 128, 256, 512] - aspect_ratios: [0.5, 1.0, 2.0] - stride: [16.0, 16.0] - variance: [1.0, 1.0, 1.0, 1.0] - anchor_start_size: 32 - max_level: 6 - min_level: 2 - num_chan: 256 - rpn_target_assign: - rpn_batch_size_per_im: 256 - rpn_fg_fraction: 0.5 - rpn_negative_overlap: 0.3 - rpn_positive_overlap: 0.7 - rpn_straddle_thresh: 0.0 - train_proposal: - min_size: 0.0 - nms_thresh: 0.7 - post_nms_top_n: 2000 - pre_nms_top_n: 2000 - test_proposal: - min_size: 0.0 - nms_thresh: 0.7 - post_nms_top_n: 1000 - pre_nms_top_n: 1000 - -FPNRoIAlign: - canconical_level: 4 - canonical_size: 224 - max_level: 5 - min_level: 2 - box_resolution: 7 - sampling_ratio: 2 - -BBoxAssigner: - batch_size_per_im: 512 - bbox_reg_weights: [0.1, 0.1, 0.2, 0.2] - bg_thresh_hi: 0.5 - bg_thresh_lo: 0.0 - 
fg_fraction: 0.25 - fg_thresh: 0.5 - -BBoxHead: - head: TwoFCHead - nms: - keep_top_k: 100 - nms_threshold: 0.5 - score_threshold: 0.05 - -TwoFCHead: - mlp_dim: 1024 - -LearningRate: - base_lr: 0.02 - schedulers: - - !PiecewiseDecay - gamma: 0.1 - milestones: [240000, 320000] - - !LinearWarmup - start_factor: 0.1 - steps: 500 - -OptimizerBuilder: - optimizer: - momentum: 0.9 - type: Momentum - regularizer: - factor: 0.0001 - type: L2 - -_READER_: '../faster_fpn_reader.yml' - -TrainReader: - inputs_def: - fields: ['image', 'im_info', 'im_id', 'gt_bbox', 'gt_class', 'is_crowd'] - dataset: - !COCODataSet - image_dir: train2017 - anno_path: annotations/instances_train2017.json - dataset_dir: dataset/coco - sample_transforms: - - !DecodeImage - to_rgb: true - - !RandomFlipImage - prob: 0.5 - - !NormalizeImage - is_channel_first: false - is_scale: true - mean: [0.485,0.456,0.406] - std: [0.229, 0.224,0.225] - - !RandomErasingImage - prob: 0.5 - sl: 0.02 - sh: 0.4 - r1: 0.3 - - !ResizeImage - target_size: 800 - max_size: 1333 - interp: 1 - use_cv2: true - - !Permute - to_bgr: false - channel_first: true - batch_transforms: - - !PadBatch - pad_to_stride: 32 - use_padded_im_info: false - batch_size: 2 - shuffle: true - worker_num: 2 - use_process: false diff --git a/static/configs/rcnn_enhance/README.md b/static/configs/rcnn_enhance/README.md deleted file mode 100644 index 7fc757fc159e67ed1266c04c067467b2d5da111b..0000000000000000000000000000000000000000 --- a/static/configs/rcnn_enhance/README.md +++ /dev/null @@ -1,40 +0,0 @@ -## 服务器端实用目标检测方案 - -### 简介 - -* 近年来,学术界和工业界广泛关注图像中的目标检测任务。基于[PaddleClas](https://github.com/PaddlePaddle/PaddleClas)中SSLD蒸馏方案训练得到的ResNet50_vd预训练模型(ImageNet1k验证集上Top1 Acc为82.39%),结合PaddleDetection中的丰富算子,飞桨提供了一种面向服务器端实用的目标检测方案PSS-DET(Practical Server Side Detection)。基于COCO2017目标检测数据集,V100单卡预测速度为61FPS时,COCO mAP可达41.6%;预测速度为20FPS时,COCO mAP可达47.8%。 - -* 以标准的Faster RCNN ResNet50_vd FPN为例,下表给出了PSS-DET中不同模块的速度与精度收益。 - -| Trick | Train scale | Test scale | COCO mAP | Infer speed/FPS | -|- |:-: |:-: | :-: | :-: | -| `baseline` | 640x640 | 640x640 | 36.4% | 43.589 | -| +`test proposal=pre/post topk 500/300` | 640x640 | 640x640 | 36.2% | 52.512 | -| +`fpn channel=64` | 640x640 | 640x640 | 35.1% | 67.450 | -| +`ssld pretrain` | 640x640 | 640x640 | 36.3% | 67.450 | -| +`ciou loss` | 640x640 | 640x640 | 37.1% | 67.450 | -| +`DCNv2` | 640x640 | 640x640 | 39.4% | 60.345 | -| +`3x, multi-scale training` | 640x640 | 640x640 | 41.0% | 60.345 | -| +`auto augment` | 640x640 | 640x640 | 41.4% | 60.345 | -| +`libra sampling` | 640x640 | 640x640 | 41.6% | 60.345 | - - -基于该实验结论,PaddleDetection结合Cascade RCNN,使用更大的训练与评估尺度(1000x1500),最终在单卡V100上速度为20FPS,COCO mAP达47.8%。下图给出了目前类似速度的目标检测方法的速度与精度指标。 - - -![pssdet](../../docs/images/pssdet.png) - -**注意** -> 这里为了更方便地对比,统一将V100的预测耗时乘以1.2倍,近似转化为Titan V的预测耗时。
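The note above converts V100 latency to an approximate Titan V latency by multiplying it by 1.2. A worked example of that conversion follows; it is only the arithmetic the note implies, and the 61/20 FPS figures are the README's own.

```python
# Titan V latency ~= 1.2 x V100 latency, so FPS divides by 1.2.
def titan_v_fps(v100_fps, factor=1.2):
    return v100_fps / factor

for fps in (61.0, 20.0):
    print(f"{fps:.0f} FPS on V100 -> ~{titan_v_fps(fps):.1f} FPS Titan V equivalent")
# 61 FPS -> ~50.8 FPS, 20 FPS -> ~16.7 FPS
```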
- - -### 模型库 - -| 骨架网络 | 网络类型 | 每张GPU图片个数 | 学习率策略 | 推理时间(fps) | Box AP | Mask AP | 下载 | 配置文件 | -| :---------------------- | :-------------: | :-------: | :-----: | :------------: | :----: | :-----: | :-------------: | :-----: | -| ResNet50-vd-FPN-Dcnv2 | Faster | 2 | 3x | 61.425 | 41.6 | - | [下载链接](https://paddlemodels.bj.bcebos.com/object_detection/faster_rcnn_dcn_r50_vd_fpn_3x_server_side.tar) | [配置文件](https://github.com/PaddlePaddle/PaddleDetection/tree/develop/static/configs/rcnn_enhance/faster_rcnn_dcn_r50_vd_fpn_3x_server_side.yml) | -| ResNet50-vd-FPN-Dcnv2 | Cascade Faster | 2 | 3x | 20.001 | 47.8 | - | [下载链接](https://paddlemodels.bj.bcebos.com/object_detection/cascade_rcnn_dcn_r50_vd_fpn_3x_server_side.tar) | [配置文件](https://github.com/PaddlePaddle/PaddleDetection/tree/develop/static/configs/rcnn_enhance/cascade_rcnn_dcn_r50_vd_fpn_3x_server_side.yml) | -| ResNet101-vd-FPN-Dcnv2 | Cascade Faster | 2 | 3x | 19.523 | 49.4 | - | [下载链接](https://paddlemodels.bj.bcebos.com/object_detection/cascade_rcnn_dcn_r101_vd_fpn_3x_server_side.pdparams) | [配置文件](https://github.com/PaddlePaddle/PaddleDetection/tree/develop/static/configs/rcnn_enhance/cascade_rcnn_dcn_r101_vd_fpn_3x_server_side.yml) | - - -**注**:generic文件夹下面的配置文件对应的预训练模型均只支持预测,不支持训练与评估。 diff --git a/static/configs/rcnn_enhance/README_en.md b/static/configs/rcnn_enhance/README_en.md deleted file mode 100644 index 2572376450f55d6851a53da647b7ab27d754953b..0000000000000000000000000000000000000000 --- a/static/configs/rcnn_enhance/README_en.md +++ /dev/null @@ -1,44 +0,0 @@ -# Practical Server-side Detection Method Based on RCNN - -## Introduction - - -* In recent years, object detection tasks have attracted widespread attention. [PaddleClas](https://github.com/PaddlePaddle/PaddleClas) open-sourced the ResNet50_vd_SSLD pretrained model based on ImageNet (Top1 Acc 82.4%). Based on this pretrained model and the rich operators in PaddleDetection, PaddleDetection provides PSS-DET (Practical Server-side Detection). The inference speed can reach 61 FPS on a single V100 GPU when COCO mAP is 41.6%, and 20 FPS when COCO mAP is 47.8%. - -* We take the standard `Faster RCNN ResNet50_vd FPN` as an example. The following table shows the ablation study of PSS-DET. - -| Trick | Train scale | Test scale | COCO mAP | Infer speed/FPS | -|- |:-: |:-: | :-: | :-: | -| `baseline` | 640x640 | 640x640 | 36.4% | 43.589 | -| +`test proposal=pre/post topk 500/300` | 640x640 | 640x640 | 36.2% | 52.512 | -| +`fpn channel=64` | 640x640 | 640x640 | 35.1% | 67.450 | -| +`ssld pretrain` | 640x640 | 640x640 | 36.3% | 67.450 | -| +`ciou loss` | 640x640 | 640x640 | 37.1% | 67.450 | -| +`DCNv2` | 640x640 | 640x640 | 39.4% | 60.345 | -| +`3x, multi-scale training` | 640x640 | 640x640 | 41.0% | 60.345 | -| +`auto augment` | 640x640 | 640x640 | 41.4% | 60.345 | -| +`libra sampling` | 640x640 | 640x640 | 41.6% | 60.345 | - - -The following figure shows the `mAP-Speed` curves of some common detectors. - - -![pssdet](../../docs/images/pssdet.png) - - -**Note** -> For fair comparison, inference time for PSS-DET models on V100 GPU is transformed to Titan V GPU by multiplying by 1.2 times.
- - -## Model Zoo - -#### COCO dataset - -| Backbone | Type | Image/gpu | Lr schd | Inf time (fps) | Box AP | Mask AP | Download | Configs | -| :---------------------- | :-------------: | :-------: | :-----: | :------------: | :----: | :-----: | :----------------------------------------------------------: | :-----: | -| ResNet50-vd-FPN-Dcnv2 | Faster | 2 | 3x | 61.425 | 41.6 | - | [model](https://paddlemodels.bj.bcebos.com/object_detection/faster_rcnn_dcn_r50_vd_fpn_3x_server_side.tar) | [config](https://github.com/PaddlePaddle/PaddleDetection/tree/develop/static/configs/rcnn_enhance/faster_rcnn_dcn_r50_vd_fpn_3x_server_side.yml) | -| ResNet50-vd-FPN-Dcnv2 | Cascade Faster | 2 | 3x | 20.001 | 47.8 | - | [model](https://paddlemodels.bj.bcebos.com/object_detection/cascade_rcnn_dcn_r50_vd_fpn_3x_server_side.tar) | [config](https://github.com/PaddlePaddle/PaddleDetection/tree/develop/static/configs/rcnn_enhance/cascade_rcnn_dcn_r50_vd_fpn_3x_server_side.yml) | -| ResNet101-vd-FPN-Dcnv2 | Cascade Faster | 2 | 3x | 19.523 | 49.4 | - | [model](https://paddlemodels.bj.bcebos.com/object_detection/cascade_rcnn_dcn_r101_vd_fpn_3x_server_side.pdparams) | [config](https://github.com/PaddlePaddle/PaddleDetection/tree/develop/static/configs/rcnn_enhance/cascade_rcnn_dcn_r101_vd_fpn_3x_server_side.yml) | - - -**Attention**: Pretrained models whose configurations are in the directory `generic` only support inference and do not support training or evaluation for now. diff --git a/static/configs/rcnn_enhance/cascade_rcnn_dcn_r101_vd_fpn_3x_server_side.yml b/static/configs/rcnn_enhance/cascade_rcnn_dcn_r101_vd_fpn_3x_server_side.yml deleted file mode 100644 index 517b56b85545204e1ed8c412a29f57f71cc719fa..0000000000000000000000000000000000000000 --- a/static/configs/rcnn_enhance/cascade_rcnn_dcn_r101_vd_fpn_3x_server_side.yml +++ /dev/null @@ -1,215 +0,0 @@ -architecture: CascadeRCNN -max_iters: 270000 -snapshot_iter: 30000 -use_gpu: true -log_iter: 20 -save_dir: output -pretrain_weights: https://paddle-imagenet-models-name.bj.bcebos.com/ResNet101_vd_ssld_pretrained.tar -weights: output/cascade_rcnn_dcn_r101_vd_fpn_3x_server_side/model_final -metric: COCO -num_classes: 81 - -CascadeRCNN: - backbone: ResNet - fpn: FPN - rpn_head: FPNRPNHead - roi_extractor: FPNRoIAlign - bbox_head: CascadeBBoxHead - bbox_assigner: CascadeBBoxAssigner - -ResNet: - norm_type: bn - depth: 101 - feature_maps: [2, 3, 4, 5] - freeze_at: 2 - variant: d - dcn_v2_stages: [3, 4, 5] - lr_mult_list: [0.05, 0.05, 0.1, 0.15] - -FPN: - max_level: 6 - min_level: 2 - num_chan: 64 - spatial_scale: [0.03125, 0.0625, 0.125, 0.25] - -FPNRPNHead: - anchor_generator: - anchor_sizes: [32, 64, 128, 256, 512] - aspect_ratios: [0.5, 1.0, 2.0] - stride: [16.0, 16.0] - variance: [1.0, 1.0, 1.0, 1.0] - anchor_start_size: 32 - min_level: 2 - max_level: 6 - num_chan: 64 - rpn_target_assign: - rpn_batch_size_per_im: 256 - rpn_fg_fraction: 0.5 - rpn_positive_overlap: 0.7 - rpn_negative_overlap: 0.3 - rpn_straddle_thresh: 0.0 - train_proposal: - min_size: 0.0 - nms_thresh: 0.7 - pre_nms_top_n: 2000 - post_nms_top_n: 2000 - test_proposal: - min_size: 0.0 - nms_thresh: 0.7 - pre_nms_top_n: 500 - post_nms_top_n: 300 - -FPNRoIAlign: - canconical_level: 4 - canonical_size: 224 - min_level: 2 - max_level: 5 - box_resolution: 7 - sampling_ratio: 2 - -CascadeBBoxAssigner: - batch_size_per_im: 512 - bbox_reg_weights: [10, 20, 30] - bg_thresh_lo: [0.0, 0.0, 0.0] - bg_thresh_hi: [0.5, 0.6, 0.7] - fg_thresh: [0.5, 0.6, 0.7] - fg_fraction: 0.25 - -CascadeBBoxHead: - head: 
CascadeTwoFCHead - bbox_loss: BalancedL1Loss - nms: - keep_top_k: 100 - nms_threshold: 0.5 - score_threshold: 0.05 - -BalancedL1Loss: - alpha: 0.5 - gamma: 1.5 - beta: 1.0 - loss_weight: 1.0 - -CascadeTwoFCHead: - mlp_dim: 1024 - -LearningRate: - base_lr: 0.02 - schedulers: - - !PiecewiseDecay - gamma: 0.1 - milestones: [180000, 240000] - - !LinearWarmup - start_factor: 0.1 - steps: 1000 - -OptimizerBuilder: - optimizer: - momentum: 0.9 - type: Momentum - regularizer: - factor: 0.0001 - type: L2 - -TrainReader: - inputs_def: - fields: ['image', 'im_info', 'im_id', 'gt_bbox', 'gt_class', 'is_crowd'] - dataset: - !COCODataSet - image_dir: train2017 - anno_path: annotations/instances_train2017.json - dataset_dir: dataset/coco - sample_transforms: - - !DecodeImage - to_rgb: true - - !RandomFlipImage - prob: 0.5 - - !AutoAugmentImage - autoaug_type: v1 - - !NormalizeImage - is_channel_first: false - is_scale: true - mean: [0.485,0.456,0.406] - std: [0.229, 0.224,0.225] - - !ResizeImage - target_size: [640, 672, 704, 736, 768, 800, 832, 864, 896, 928, 960, 992, 1024] - max_size: 1500 - interp: 1 - use_cv2: true - - !Permute - to_bgr: false - channel_first: true - batch_transforms: - - !PadBatch - pad_to_stride: 32 - use_padded_im_info: false - batch_size: 2 - shuffle: true - worker_num: 2 - use_process: false - -EvalReader: - inputs_def: - fields: ['image', 'im_info', 'im_id', 'im_shape'] - # for voc - #fields: ['image', 'im_info', 'im_id', 'gt_bbox', 'gt_class', 'is_difficult'] - dataset: - !COCODataSet - image_dir: val2017 - anno_path: annotations/instances_val2017.json - dataset_dir: dataset/coco - sample_transforms: - - !DecodeImage - to_rgb: true - with_mixup: false - - !NormalizeImage - is_channel_first: false - is_scale: true - mean: [0.485,0.456,0.406] - std: [0.229, 0.224,0.225] - - !ResizeImage - interp: 1 - max_size: 1333 - target_size: 800 - use_cv2: true - - !Permute - channel_first: true - to_bgr: false - batch_transforms: - - !PadBatch - pad_to_stride: 32 - use_padded_im_info: true - batch_size: 1 - shuffle: false - drop_empty: false - worker_num: 2 - -TestReader: - inputs_def: - # set image_shape if needed - fields: ['image', 'im_info', 'im_id', 'im_shape'] - dataset: - !ImageFolder - anno_path: annotations/instances_val2017.json - sample_transforms: - - !DecodeImage - to_rgb: true - with_mixup: false - - !NormalizeImage - is_channel_first: false - is_scale: true - mean: [0.485,0.456,0.406] - std: [0.229, 0.224,0.225] - - !ResizeImage - interp: 1 - max_size: 1333 - target_size: 800 - use_cv2: true - - !Permute - channel_first: true - to_bgr: false - batch_transforms: - - !PadBatch - pad_to_stride: 32 - use_padded_im_info: true - batch_size: 1 - shuffle: false diff --git a/static/configs/rcnn_enhance/cascade_rcnn_dcn_r50_vd_fpn_3x_server_side.yml b/static/configs/rcnn_enhance/cascade_rcnn_dcn_r50_vd_fpn_3x_server_side.yml deleted file mode 100644 index 140bcd12b42ddb951037e906134e5f0db7a585fb..0000000000000000000000000000000000000000 --- a/static/configs/rcnn_enhance/cascade_rcnn_dcn_r50_vd_fpn_3x_server_side.yml +++ /dev/null @@ -1,218 +0,0 @@ -architecture: CascadeRCNN -max_iters: 270000 -snapshot_iter: 30000 -use_gpu: true -log_iter: 20 -save_dir: output -pretrain_weights: https://paddle-imagenet-models-name.bj.bcebos.com/ResNet50_vd_ssld_pretrained.tar -weights: output/cascade_rcnn_dcn_r50_vd_fpn_3x_server_side/model_final -metric: COCO -num_classes: 81 - -CascadeRCNN: - backbone: ResNet - fpn: FPN - rpn_head: FPNRPNHead - roi_extractor: FPNRoIAlign - bbox_head: 
CascadeBBoxHead - bbox_assigner: CascadeBBoxAssigner - -ResNet: - norm_type: bn - depth: 50 - feature_maps: [2, 3, 4, 5] - freeze_at: 2 - variant: d - dcn_v2_stages: [3, 4, 5] - lr_mult_list: [0.05, 0.05, 0.1, 0.15] - -FPN: - max_level: 6 - min_level: 2 - num_chan: 64 - spatial_scale: [0.03125, 0.0625, 0.125, 0.25] - -FPNRPNHead: - anchor_generator: - anchor_sizes: [32, 64, 128, 256, 512] - aspect_ratios: [0.5, 1.0, 2.0] - stride: [16.0, 16.0] - variance: [1.0, 1.0, 1.0, 1.0] - anchor_start_size: 32 - min_level: 2 - max_level: 6 - num_chan: 64 - rpn_target_assign: - rpn_batch_size_per_im: 256 - rpn_fg_fraction: 0.5 - rpn_positive_overlap: 0.7 - rpn_negative_overlap: 0.3 - rpn_straddle_thresh: 0.0 - train_proposal: - min_size: 0.0 - nms_thresh: 0.7 - pre_nms_top_n: 2000 - post_nms_top_n: 2000 - test_proposal: - min_size: 0.0 - nms_thresh: 0.7 - pre_nms_top_n: 500 - post_nms_top_n: 300 - -FPNRoIAlign: - canconical_level: 4 - canonical_size: 224 - min_level: 2 - max_level: 5 - box_resolution: 7 - sampling_ratio: 2 - -CascadeBBoxAssigner: - batch_size_per_im: 512 - bbox_reg_weights: [10, 20, 30] - bg_thresh_lo: [0.0, 0.0, 0.0] - bg_thresh_hi: [0.5, 0.6, 0.7] - fg_thresh: [0.5, 0.6, 0.7] - fg_fraction: 0.25 - -CascadeBBoxHead: - head: CascadeTwoFCHead - bbox_loss: BalancedL1Loss - nms: - keep_top_k: 100 - nms_threshold: 0.5 - score_threshold: 0.05 - -BalancedL1Loss: - alpha: 0.5 - gamma: 1.5 - beta: 1.0 - loss_weight: 1.0 - -CascadeTwoFCHead: - mlp_dim: 1024 - -LearningRate: - base_lr: 0.02 - schedulers: - - !PiecewiseDecay - gamma: 0.1 - milestones: [180000, 240000] - - !LinearWarmup - start_factor: 0.1 - steps: 1000 - -OptimizerBuilder: - optimizer: - momentum: 0.9 - type: Momentum - regularizer: - factor: 0.0001 - type: L2 - -TrainReader: - inputs_def: - fields: ['image', 'im_info', 'im_id', 'gt_bbox', 'gt_class', 'is_crowd'] - dataset: - !COCODataSet - image_dir: train2017 - anno_path: annotations/instances_train2017.json - dataset_dir: dataset/coco - sample_transforms: - - !DecodeImage - to_rgb: true - - !RandomFlipImage - prob: 0.5 - - !AutoAugmentImage - autoaug_type: v1 - - !NormalizeImage - is_channel_first: false - is_scale: true - mean: [0.485,0.456,0.406] - std: [0.229, 0.224,0.225] - - !ResizeImage - target_size: [640, 672, 704, 736, 768, 800, 832, 864, 896, 928, 960, 992, 1024] - max_size: 1500 - interp: 1 - use_cv2: true - - !Permute - to_bgr: false - channel_first: true - batch_transforms: - - !PadBatch - pad_to_stride: 32 - use_padded_im_info: false - batch_size: 2 - shuffle: true - worker_num: 2 - use_process: false - - -TestReader: - inputs_def: - # set image_shape if needed - fields: ['image', 'im_info', 'im_id', 'im_shape'] - dataset: - !ImageFolder - anno_path: annotations/instances_val2017.json - sample_transforms: - - !DecodeImage - to_rgb: true - with_mixup: false - - !NormalizeImage - is_channel_first: false - is_scale: true - mean: [0.485,0.456,0.406] - std: [0.229, 0.224,0.225] - - !ResizeImage - interp: 1 - max_size: 1500 - target_size: 1000 - use_cv2: true - - !Permute - channel_first: true - to_bgr: false - batch_transforms: - - !PadBatch - pad_to_stride: 32 - use_padded_im_info: true - batch_size: 1 - shuffle: false - - - -EvalReader: - inputs_def: - fields: ['image', 'im_info', 'im_id', 'im_shape'] - # for voc - #fields: ['image', 'im_info', 'im_id', 'gt_bbox', 'gt_class', 'is_difficult'] - dataset: - !COCODataSet - image_dir: val2017 - anno_path: annotations/instances_val2017.json - dataset_dir: dataset/coco - sample_transforms: - - !DecodeImage - to_rgb: true 
- with_mixup: false - - !NormalizeImage - is_channel_first: false - is_scale: true - mean: [0.485,0.456,0.406] - std: [0.229, 0.224,0.225] - - !ResizeImage - interp: 1 - max_size: 1500 - target_size: 1000 - use_cv2: true - - !Permute - channel_first: true - to_bgr: false - batch_transforms: - - !PadBatch - pad_to_stride: 32 - use_padded_im_info: true - batch_size: 1 - shuffle: false - drop_empty: false - worker_num: 2 diff --git a/static/configs/rcnn_enhance/faster_rcnn_dcn_r50_vd_fpn_3x_server_side.yml b/static/configs/rcnn_enhance/faster_rcnn_dcn_r50_vd_fpn_3x_server_side.yml deleted file mode 100644 index 45c63d84afaa83a6da635fef6205dc088c44ddc6..0000000000000000000000000000000000000000 --- a/static/configs/rcnn_enhance/faster_rcnn_dcn_r50_vd_fpn_3x_server_side.yml +++ /dev/null @@ -1,217 +0,0 @@ -architecture: FasterRCNN -max_iters: 270000 -snapshot_iter: 30000 -use_gpu: true -log_iter: 20 -save_dir: output -pretrain_weights: https://paddle-imagenet-models-name.bj.bcebos.com/ResNet50_vd_ssld_pretrained.tar -weights: output/faster_rcnn_dcn_r50_vd_fpn_3x_server_side/model_final -metric: COCO -num_classes: 81 - -FasterRCNN: - backbone: ResNet - fpn: FPN - rpn_head: FPNRPNHead - roi_extractor: FPNRoIAlign - bbox_head: BBoxHead - bbox_assigner: LibraBBoxAssigner - -ResNet: - depth: 50 - feature_maps: [2, 3, 4, 5] - freeze_at: 2 - norm_type: bn - variant: d - dcn_v2_stages: [3, 4, 5] - lr_mult_list: [0.05, 0.05, 0.1, 0.15] - -FPN: - max_level: 6 - min_level: 2 - num_chan: 64 - spatial_scale: [0.03125, 0.0625, 0.125, 0.25] - -FPNRPNHead: - anchor_generator: - anchor_sizes: [32, 64, 128, 256, 512] - aspect_ratios: [0.5, 1.0, 2.0] - stride: [16.0, 16.0] - variance: [1.0, 1.0, 1.0, 1.0] - anchor_start_size: 32 - max_level: 6 - min_level: 2 - num_chan: 64 - rpn_target_assign: - rpn_batch_size_per_im: 256 - rpn_fg_fraction: 0.5 - rpn_negative_overlap: 0.3 - rpn_positive_overlap: 0.7 - rpn_straddle_thresh: 0.0 - train_proposal: - min_size: 0.0 - nms_thresh: 0.7 - post_nms_top_n: 2000 - pre_nms_top_n: 2000 - test_proposal: - min_size: 0.0 - nms_thresh: 0.7 - post_nms_top_n: 300 - pre_nms_top_n: 500 - -FPNRoIAlign: - canconical_level: 4 - canonical_size: 224 - max_level: 5 - min_level: 2 - box_resolution: 7 - sampling_ratio: 2 - -LibraBBoxAssigner: - batch_size_per_im: 512 - bbox_reg_weights: [0.1, 0.1, 0.2, 0.2] - bg_thresh_hi: 0.5 - bg_thresh_lo: 0.0 - fg_fraction: 0.25 - fg_thresh: 0.5 - -BBoxHead: - head: TwoFCHead - nms: - keep_top_k: 100 - nms_threshold: 0.5 - score_threshold: 0.05 - bbox_loss: DiouLoss - -DiouLoss: - loss_weight: 10.0 - is_cls_agnostic: false - use_complete_iou_loss: true - -TwoFCHead: - mlp_dim: 1024 - -LearningRate: - base_lr: 0.02 - schedulers: - - !PiecewiseDecay - gamma: 0.1 - milestones: [180000, 240000] - - !LinearWarmup - start_factor: 0.1 - steps: 1000 - -OptimizerBuilder: - optimizer: - momentum: 0.9 - type: Momentum - regularizer: - factor: 0.0001 - type: L2 - -TrainReader: - inputs_def: - fields: ['image', 'im_info', 'im_id', 'gt_bbox', 'gt_class', 'is_crowd'] - dataset: - !COCODataSet - image_dir: train2017 - anno_path: annotations/instances_train2017.json - dataset_dir: dataset/coco - sample_transforms: - - !DecodeImage - to_rgb: true - - !RandomFlipImage - prob: 0.5 - - !AutoAugmentImage - autoaug_type: v1 - - !NormalizeImage - is_channel_first: false - is_scale: true - mean: [0.485,0.456,0.406] - std: [0.229, 0.224,0.225] - - !ResizeImage - target_size: [384, 416, 448, 480, 512, 544, 576, 608, 640, 672] - max_size: 1000 - interp: 1 - use_cv2: true - - !Permute 
- to_bgr: false - channel_first: true - batch_transforms: - - !PadBatch - pad_to_stride: 32 - use_padded_im_info: false - batch_size: 2 - shuffle: true - worker_num: 2 - use_process: false - - -TestReader: - inputs_def: - # set image_shape if needed - fields: ['image', 'im_info', 'im_id', 'im_shape'] - dataset: - !ImageFolder - anno_path: annotations/instances_val2017.json - sample_transforms: - - !DecodeImage - to_rgb: true - with_mixup: false - - !NormalizeImage - is_channel_first: false - is_scale: true - mean: [0.485,0.456,0.406] - std: [0.229, 0.224,0.225] - - !ResizeImage - interp: 1 - max_size: 640 - target_size: 640 - use_cv2: true - - !Permute - channel_first: true - to_bgr: false - batch_transforms: - - !PadBatch - pad_to_stride: 32 - use_padded_im_info: true - batch_size: 1 - shuffle: false - - - -EvalReader: - inputs_def: - fields: ['image', 'im_info', 'im_id', 'im_shape'] - # for voc - #fields: ['image', 'im_info', 'im_id', 'gt_bbox', 'gt_class', 'is_difficult'] - dataset: - !COCODataSet - image_dir: val2017 - anno_path: annotations/instances_val2017.json - dataset_dir: dataset/coco - sample_transforms: - - !DecodeImage - to_rgb: true - with_mixup: false - - !NormalizeImage - is_channel_first: false - is_scale: true - mean: [0.485,0.456,0.406] - std: [0.229, 0.224,0.225] - - !ResizeImage - interp: 1 - max_size: 640 - target_size: 640 - use_cv2: true - - !Permute - channel_first: true - to_bgr: false - batch_transforms: - - !PadBatch - pad_to_stride: 32 - use_padded_im_info: true - batch_size: 1 - shuffle: false - drop_empty: false - worker_num: 2 diff --git a/static/configs/rcnn_enhance/generic/cascade_rcnn_cbr101_vd_fpn_server_side.yml b/static/configs/rcnn_enhance/generic/cascade_rcnn_cbr101_vd_fpn_server_side.yml deleted file mode 100644 index f659c1bd1b97294793797a8db13e1e66cbaa2901..0000000000000000000000000000000000000000 --- a/static/configs/rcnn_enhance/generic/cascade_rcnn_cbr101_vd_fpn_server_side.yml +++ /dev/null @@ -1,219 +0,0 @@ -architecture: CascadeRCNN -max_iters: 1500000 -snapshot_iter: 100000 -use_gpu: true -log_iter: 20 -save_dir: output -pretrain_weights: https://paddle-imagenet-models-name.bj.bcebos.com/CBResNet101_vd_ssld_pretrained.tar -weights: output/cascade_rcnn_cbr101_vd_fpn_server_side/model_final -metric: VOC -num_classes: 677 - -CascadeRCNN: - backbone: CBResNet - fpn: FPN - rpn_head: FPNRPNHead - roi_extractor: FPNRoIAlign - bbox_head: CascadeBBoxHead - bbox_assigner: CascadeBBoxAssigner - -CBResNet: - norm_type: bn - norm_decay: 0. 
- depth: 101 - feature_maps: [2, 3, 4, 5] - freeze_at: 2 - variant: d - repeat_num: 2 - lr_mult_list: [0.05, 0.05, 0.1, 0.15] - -FPN: - max_level: 6 - min_level: 2 - num_chan: 256 - spatial_scale: [0.03125, 0.0625, 0.125, 0.25] - -FPNRPNHead: - anchor_generator: - anchor_sizes: [32, 64, 128, 256, 512] - aspect_ratios: [0.5, 1.0, 2.0] - stride: [16.0, 16.0] - variance: [1.0, 1.0, 1.0, 1.0] - anchor_start_size: 32 - min_level: 2 - max_level: 6 - num_chan: 256 - rpn_target_assign: - rpn_batch_size_per_im: 256 - rpn_fg_fraction: 0.5 - rpn_positive_overlap: 0.7 - rpn_negative_overlap: 0.3 - rpn_straddle_thresh: 0.0 - train_proposal: - min_size: 0.0 - nms_thresh: 0.7 - pre_nms_top_n: 2000 - post_nms_top_n: 2000 - test_proposal: - min_size: 0.0 - nms_thresh: 0.7 - pre_nms_top_n: 500 - post_nms_top_n: 300 - -FPNRoIAlign: - canconical_level: 4 - canonical_size: 224 - min_level: 2 - max_level: 5 - box_resolution: 14 - sampling_ratio: 2 - -CascadeBBoxAssigner: - batch_size_per_im: 512 - bbox_reg_weights: [10, 20, 30] - bg_thresh_lo: [0.0, 0.0, 0.0] - bg_thresh_hi: [0.5, 0.6, 0.7] - fg_thresh: [0.5, 0.6, 0.7] - fg_fraction: 0.25 - -CascadeBBoxHead: - head: CascadeTwoFCHead - bbox_loss: BalancedL1Loss - nms: - keep_top_k: 100 - nms_threshold: 0.5 - score_threshold: 0.05 - -BalancedL1Loss: - alpha: 0.5 - gamma: 1.5 - beta: 1.0 - loss_weight: 1.0 - -CascadeTwoFCHead: - mlp_dim: 1024 - -LearningRate: - base_lr: 0.005 - schedulers: - - !PiecewiseDecay - gamma: 0.1 - milestones: [1000000, 1400000] - - !LinearWarmup - start_factor: 0.1 - steps: 1000 - -OptimizerBuilder: - optimizer: - momentum: 0.9 - type: Momentum - regularizer: - factor: 0.0001 - type: L2 - -TrainReader: - inputs_def: - fields: ['image', 'im_info', 'im_id', 'gt_bbox', 'gt_class', 'is_crowd'] - dataset: - !COCODataSet - image_dir: train2017 - anno_path: annotations/instances_train2017.json - dataset_dir: dataset/coco - sample_transforms: - - !DecodeImage - to_rgb: true - - !RandomFlipImage - prob: 0.5 - - !AutoAugmentImage - autoaug_type: v1 - - !NormalizeImage - is_channel_first: false - is_scale: true - mean: [0.485,0.456,0.406] - std: [0.229, 0.224,0.225] - - !ResizeImage - target_size: [640, 672, 704, 736, 768, 800, 832, 864, 896, 928, 960, 992, 1024] - max_size: 1500 - interp: 1 - use_cv2: true - - !Permute - to_bgr: false - channel_first: true - batch_transforms: - - !PadBatch - pad_to_stride: 32 - use_padded_im_info: false - batch_size: 1 - shuffle: true - worker_num: 2 - use_process: false - -EvalReader: - inputs_def: - fields: ['image', 'im_info', 'im_id', 'im_shape'] - # for voc - #fields: ['image', 'im_info', 'im_id', 'gt_bbox', 'gt_class', 'is_difficult'] - dataset: - !COCODataSet - image_dir: val2017 - anno_path: annotations/instances_val2017.json - dataset_dir: dataset/coco - sample_transforms: - - !DecodeImage - to_rgb: true - with_mixup: false - - !NormalizeImage - is_channel_first: false - is_scale: true - mean: [0.485,0.456,0.406] - std: [0.229, 0.224,0.225] - - !ResizeImage - interp: 1 - max_size: 1300 - target_size: 800 - use_cv2: true - - !Permute - channel_first: true - to_bgr: false - batch_transforms: - - !PadBatch - pad_to_stride: 32 - use_padded_im_info: true - batch_size: 1 - shuffle: false - drop_empty: false - worker_num: 2 - - -TestReader: - inputs_def: - # set image_shape if needed - fields: ['image', 'im_info', 'im_id', 'im_shape'] - dataset: - !ImageFolder - use_default_label: false - with_background: true - anno_path: ./dataset/voc/generic_det_label_list.txt - sample_transforms: - - !DecodeImage - to_rgb: 
true - with_mixup: false - - !NormalizeImage - is_channel_first: false - is_scale: true - mean: [0.485,0.456,0.406] - std: [0.229, 0.224,0.225] - - !ResizeImage - interp: 1 - max_size: 1333 - target_size: 800 - use_cv2: true - - !Permute - channel_first: true - to_bgr: false - batch_transforms: - - !PadBatch - pad_to_stride: 32 - use_padded_im_info: true - batch_size: 1 - shuffle: false diff --git a/static/configs/rcnn_enhance/generic/cascade_rcnn_dcn_r101_vd_fpn_gen_server_side.yml b/static/configs/rcnn_enhance/generic/cascade_rcnn_dcn_r101_vd_fpn_gen_server_side.yml deleted file mode 100644 index 4d4f9ee262b6c64cc60cf5da61f0c4dddb369efd..0000000000000000000000000000000000000000 --- a/static/configs/rcnn_enhance/generic/cascade_rcnn_dcn_r101_vd_fpn_gen_server_side.yml +++ /dev/null @@ -1,217 +0,0 @@ -architecture: CascadeRCNN -max_iters: 1500000 -snapshot_iter: 100000 -use_gpu: true -log_iter: 20 -save_dir: output -pretrain_weights: https://paddle-imagenet-models-name.bj.bcebos.com/ResNet101_vd_ssld_pretrained.tar -weights: output/cascade_rcnn_dcn_r101_vd_fpn_gen_server_side/model_final -metric: VOC -num_classes: 677 - -CascadeRCNN: - backbone: ResNet - fpn: FPN - rpn_head: FPNRPNHead - roi_extractor: FPNRoIAlign - bbox_head: CascadeBBoxHead - bbox_assigner: CascadeBBoxAssigner - -ResNet: - norm_type: bn - depth: 101 - feature_maps: [2, 3, 4, 5] - freeze_at: 2 - variant: d - dcn_v2_stages: [3, 4, 5] - lr_mult_list: [0.05, 0.05, 0.1, 0.15] - -FPN: - max_level: 6 - min_level: 2 - num_chan: 64 - spatial_scale: [0.03125, 0.0625, 0.125, 0.25] - -FPNRPNHead: - anchor_generator: - anchor_sizes: [32, 64, 128, 256, 512] - aspect_ratios: [0.5, 1.0, 2.0] - stride: [16.0, 16.0] - variance: [1.0, 1.0, 1.0, 1.0] - anchor_start_size: 32 - min_level: 2 - max_level: 6 - num_chan: 64 - rpn_target_assign: - rpn_batch_size_per_im: 256 - rpn_fg_fraction: 0.5 - rpn_positive_overlap: 0.7 - rpn_negative_overlap: 0.3 - rpn_straddle_thresh: 0.0 - train_proposal: - min_size: 0.0 - nms_thresh: 0.7 - pre_nms_top_n: 2000 - post_nms_top_n: 2000 - test_proposal: - min_size: 0.0 - nms_thresh: 0.7 - pre_nms_top_n: 500 - post_nms_top_n: 300 - -FPNRoIAlign: - canconical_level: 4 - canonical_size: 224 - min_level: 2 - max_level: 5 - box_resolution: 7 - sampling_ratio: 2 - -CascadeBBoxAssigner: - batch_size_per_im: 512 - bbox_reg_weights: [10, 20, 30] - bg_thresh_lo: [0.0, 0.0, 0.0] - bg_thresh_hi: [0.5, 0.6, 0.7] - fg_thresh: [0.5, 0.6, 0.7] - fg_fraction: 0.25 - -CascadeBBoxHead: - head: CascadeTwoFCHead - bbox_loss: BalancedL1Loss - nms: - keep_top_k: 100 - nms_threshold: 0.5 - score_threshold: 0.05 - -BalancedL1Loss: - alpha: 0.5 - gamma: 1.5 - beta: 1.0 - loss_weight: 1.0 - -CascadeTwoFCHead: - mlp_dim: 1024 - -LearningRate: - base_lr: 0.01 - schedulers: - - !PiecewiseDecay - gamma: 0.1 - milestones: [1000000, 1400000] - - !LinearWarmup - start_factor: 0.1 - steps: 1000 - -OptimizerBuilder: - optimizer: - momentum: 0.9 - type: Momentum - regularizer: - factor: 0.0001 - type: L2 - -TrainReader: - inputs_def: - fields: ['image', 'im_info', 'im_id', 'gt_bbox', 'gt_class', 'is_crowd'] - dataset: - !COCODataSet - image_dir: train2017 - anno_path: annotations/instances_train2017.json - dataset_dir: dataset/coco - sample_transforms: - - !DecodeImage - to_rgb: true - - !RandomFlipImage - prob: 0.5 - - !AutoAugmentImage - autoaug_type: v1 - - !NormalizeImage - is_channel_first: false - is_scale: true - mean: [0.485,0.456,0.406] - std: [0.229, 0.224,0.225] - - !ResizeImage - target_size: [640, 672, 704, 736, 768, 800, 832, 864, 
896, 928, 960, 992, 1024] - max_size: 1500 - interp: 1 - use_cv2: true - - !Permute - to_bgr: false - channel_first: true - batch_transforms: - - !PadBatch - pad_to_stride: 32 - use_padded_im_info: false - batch_size: 1 - shuffle: true - worker_num: 2 - use_process: false - -EvalReader: - inputs_def: - fields: ['image', 'im_info', 'im_id', 'im_shape'] - # for voc - #fields: ['image', 'im_info', 'im_id', 'gt_bbox', 'gt_class', 'is_difficult'] - dataset: - !COCODataSet - image_dir: val2017 - anno_path: annotations/instances_val2017.json - dataset_dir: dataset/coco - sample_transforms: - - !DecodeImage - to_rgb: true - with_mixup: false - - !NormalizeImage - is_channel_first: false - is_scale: true - mean: [0.485,0.456,0.406] - std: [0.229, 0.224,0.225] - - !ResizeImage - interp: 1 - max_size: 1300 - target_size: 800 - use_cv2: true - - !Permute - channel_first: true - to_bgr: false - batch_transforms: - - !PadBatch - pad_to_stride: 32 - use_padded_im_info: true - batch_size: 1 - shuffle: false - drop_empty: false - worker_num: 2 - -TestReader: - inputs_def: - # set image_shape if needed - fields: ['image', 'im_info', 'im_id', 'im_shape'] - dataset: - !ImageFolder - use_default_label: false - with_background: true - anno_path: ./dataset/voc/generic_det_label_list.txt - sample_transforms: - - !DecodeImage - to_rgb: true - with_mixup: false - - !NormalizeImage - is_channel_first: false - is_scale: true - mean: [0.485,0.456,0.406] - std: [0.229, 0.224,0.225] - - !ResizeImage - interp: 1 - max_size: 1333 - target_size: 800 - use_cv2: true - - !Permute - channel_first: true - to_bgr: false - batch_transforms: - - !PadBatch - pad_to_stride: 32 - use_padded_im_info: true - batch_size: 1 - shuffle: false diff --git a/static/configs/rcnn_enhance/generic/cascade_rcnn_dcn_r50_vd_fpn_gen_server_side.yml b/static/configs/rcnn_enhance/generic/cascade_rcnn_dcn_r50_vd_fpn_gen_server_side.yml deleted file mode 100644 index 7eba812677a6477aa3c54a44a7a0604e921b35c4..0000000000000000000000000000000000000000 --- a/static/configs/rcnn_enhance/generic/cascade_rcnn_dcn_r50_vd_fpn_gen_server_side.yml +++ /dev/null @@ -1,217 +0,0 @@ -architecture: CascadeRCNN -max_iters: 750000 -snapshot_iter: 50000 -use_gpu: true -log_iter: 20 -save_dir: output -pretrain_weights: https://paddle-imagenet-models-name.bj.bcebos.com/ResNet50_vd_ssld_v2_pretrained.tar -weights: output/cascade_rcnn_dcn_r50_vd_fpn_gen_server_side/model_final -metric: VOC -num_classes: 677 - -CascadeRCNN: - backbone: ResNet - fpn: FPN - rpn_head: FPNRPNHead - roi_extractor: FPNRoIAlign - bbox_head: CascadeBBoxHead - bbox_assigner: CascadeBBoxAssigner - -ResNet: - norm_type: bn - depth: 50 - feature_maps: [2, 3, 4, 5] - freeze_at: 2 - variant: d - dcn_v2_stages: [3, 4, 5] - lr_mult_list: [0.05, 0.05, 0.1, 0.15] - -FPN: - max_level: 6 - min_level: 2 - num_chan: 64 - spatial_scale: [0.03125, 0.0625, 0.125, 0.25] - -FPNRPNHead: - anchor_generator: - anchor_sizes: [32, 64, 128, 256, 512] - aspect_ratios: [0.5, 1.0, 2.0] - stride: [16.0, 16.0] - variance: [1.0, 1.0, 1.0, 1.0] - anchor_start_size: 32 - min_level: 2 - max_level: 6 - num_chan: 64 - rpn_target_assign: - rpn_batch_size_per_im: 256 - rpn_fg_fraction: 0.5 - rpn_positive_overlap: 0.7 - rpn_negative_overlap: 0.3 - rpn_straddle_thresh: 0.0 - train_proposal: - min_size: 0.0 - nms_thresh: 0.7 - pre_nms_top_n: 2000 - post_nms_top_n: 2000 - test_proposal: - min_size: 0.0 - nms_thresh: 0.7 - pre_nms_top_n: 500 - post_nms_top_n: 300 - -FPNRoIAlign: - canconical_level: 4 - canonical_size: 224 - min_level: 2 - 
max_level: 5 - box_resolution: 7 - sampling_ratio: 2 - -CascadeBBoxAssigner: - batch_size_per_im: 512 - bbox_reg_weights: [10, 20, 30] - bg_thresh_lo: [0.0, 0.0, 0.0] - bg_thresh_hi: [0.5, 0.6, 0.7] - fg_thresh: [0.5, 0.6, 0.7] - fg_fraction: 0.25 - -CascadeBBoxHead: - head: CascadeTwoFCHead - bbox_loss: BalancedL1Loss - nms: - keep_top_k: 100 - nms_threshold: 0.5 - score_threshold: 0.05 - -BalancedL1Loss: - alpha: 0.5 - gamma: 1.5 - beta: 1.0 - loss_weight: 1.0 - -CascadeTwoFCHead: - mlp_dim: 1024 - -LearningRate: - base_lr: 0.02 - schedulers: - - !PiecewiseDecay - gamma: 0.1 - milestones: [500000, 700000] - - !LinearWarmup - start_factor: 0.1 - steps: 1000 - -OptimizerBuilder: - optimizer: - momentum: 0.9 - type: Momentum - regularizer: - factor: 0.0001 - type: L2 - -TrainReader: - inputs_def: - fields: ['image', 'im_info', 'im_id', 'gt_bbox', 'gt_class', 'is_crowd'] - dataset: - !COCODataSet - image_dir: train2017 - anno_path: annotations/instances_train2017.json - dataset_dir: dataset/coco - sample_transforms: - - !DecodeImage - to_rgb: true - - !RandomFlipImage - prob: 0.5 - - !AutoAugmentImage - autoaug_type: v1 - - !NormalizeImage - is_channel_first: false - is_scale: true - mean: [0.485,0.456,0.406] - std: [0.229, 0.224,0.225] - - !ResizeImage - target_size: [640, 672, 704, 736, 768, 800, 832, 864, 896, 928, 960, 992, 1024] - max_size: 1500 - interp: 1 - use_cv2: true - - !Permute - to_bgr: false - channel_first: true - batch_transforms: - - !PadBatch - pad_to_stride: 32 - use_padded_im_info: false - batch_size: 2 - shuffle: true - worker_num: 2 - use_process: false - -EvalReader: - inputs_def: - fields: ['image', 'im_info', 'im_id', 'im_shape'] - # for voc - #fields: ['image', 'im_info', 'im_id', 'gt_bbox', 'gt_class', 'is_difficult'] - dataset: - !COCODataSet - image_dir: val2017 - anno_path: annotations/instances_val2017.json - dataset_dir: dataset/coco - sample_transforms: - - !DecodeImage - to_rgb: true - with_mixup: false - - !NormalizeImage - is_channel_first: false - is_scale: true - mean: [0.485,0.456,0.406] - std: [0.229, 0.224,0.225] - - !ResizeImage - interp: 1 - max_size: 1500 - target_size: 1000 - use_cv2: true - - !Permute - channel_first: true - to_bgr: false - batch_transforms: - - !PadBatch - pad_to_stride: 32 - use_padded_im_info: true - batch_size: 1 - shuffle: false - drop_empty: false - worker_num: 2 - -TestReader: - inputs_def: - # set image_shape if needed - fields: ['image', 'im_info', 'im_id', 'im_shape'] - dataset: - !ImageFolder - use_default_label: false - with_background: true - anno_path: ./dataset/voc/generic_det_label_list.txt - sample_transforms: - - !DecodeImage - to_rgb: true - with_mixup: false - - !NormalizeImage - is_channel_first: false - is_scale: true - mean: [0.485,0.456,0.406] - std: [0.229, 0.224,0.225] - - !ResizeImage - interp: 1 - max_size: 1500 - target_size: 1000 - use_cv2: true - - !Permute - channel_first: true - to_bgr: false - batch_transforms: - - !PadBatch - pad_to_stride: 32 - use_padded_im_info: true - batch_size: 1 - shuffle: false diff --git a/static/configs/res2net/README.md b/static/configs/res2net/README.md deleted file mode 100644 index 4926264e1395540ec230a02e09892292cde99c8e..0000000000000000000000000000000000000000 --- a/static/configs/res2net/README.md +++ /dev/null @@ -1,36 +0,0 @@ -# Res2Net - -## Introduction - -- Res2Net: A New Multi-scale Backbone Architecture: [https://arxiv.org/abs/1904.01169](https://arxiv.org/abs/1904.01169) - -``` -@article{DBLP:journals/corr/abs-1904-01169, - author = {Shanghua Gao and - 
Ming{-}Ming Cheng and - Kai Zhao and - Xinyu Zhang and - Ming{-}Hsuan Yang and - Philip H. S. Torr}, - title = {Res2Net: {A} New Multi-scale Backbone Architecture}, - journal = {CoRR}, - volume = {abs/1904.01169}, - year = {2019}, - url = {http://arxiv.org/abs/1904.01169}, - archivePrefix = {arXiv}, - eprint = {1904.01169}, - timestamp = {Thu, 25 Apr 2019 10:24:54 +0200}, - biburl = {https://dblp.org/rec/bib/journals/corr/abs-1904-01169}, - bibsource = {dblp computer science bibliography, https://dblp.org} -} -``` - - -## Model Zoo - -| Backbone | Type | Deformable Conv | Image/gpu | Lr schd | Inf time (fps) | Box AP | Mask AP | Download | Configs | -| :---------------------- | :------------- | :---: | :-------: | :-----: | :------------: | :----: | :-----: | :----------------------------------------------------------: | :-----: | -| Res2Net50-FPN | Faster | False | 2 | 1x | 20.320 | 39.5 | - | [model](https://paddlemodels.bj.bcebos.com/object_detection/faster_rcnn_res2net50_vb_26w_4s_fpn_1x.tar) | [config](https://github.com/PaddlePaddle/PaddleDetection/tree/develop/static/configs/res2net/faster_rcnn_res2net50_vb_26w_4s_fpn_1x.yml) | -| Res2Net50-FPN | Mask | False | 2 | 2x | 16.069 | 40.7 | 36.2 | [model](https://paddlemodels.bj.bcebos.com/object_detection/mask_rcnn_res2net50_vb_26w_4s_fpn_2x.tar) | [config](https://github.com/PaddlePaddle/PaddleDetection/tree/develop/static/configs/res2net/mask_rcnn_res2net50_vb_26w_4s_fpn_2x.yml) | -| Res2Net50-vd-FPN | Mask | False | 2 | 2x | 15.816 | 40.9 | 36.2 | [model](https://paddlemodels.bj.bcebos.com/object_detection/mask_rcnn_res2net50_vd_26w_4s_fpn_2x.tar) | [config](https://github.com/PaddlePaddle/PaddleDetection/tree/develop/static/configs/res2net/mask_rcnn_res2net50_vd_26w_4s_fpn_2x.yml) | -| Res2Net50-vd-FPN | Mask | True | 2 | 2x | 14.478 | 43.5 | 38.4 | [model](https://paddlemodels.bj.bcebos.com/object_detection/mask_rcnn_res2net50_vd_26w_4s_fpn_dcnv2_1x.tar) | [config](https://github.com/PaddlePaddle/PaddleDetection/tree/develop/static/configs/res2net/mask_rcnn_res2net50_vd_26w_4s_fpn_dcnv2_1x.yml) | diff --git a/static/configs/res2net/faster_rcnn_res2net50_vb_26w_4s_fpn_1x.yml b/static/configs/res2net/faster_rcnn_res2net50_vb_26w_4s_fpn_1x.yml deleted file mode 100644 index f91914bbeced7af01342fe424328918d05bae71d..0000000000000000000000000000000000000000 --- a/static/configs/res2net/faster_rcnn_res2net50_vb_26w_4s_fpn_1x.yml +++ /dev/null @@ -1,108 +0,0 @@ -architecture: FasterRCNN -max_iters: 90000 -snapshot_iter: 10000 -use_gpu: true -log_iter: 20 -save_dir: output -pretrain_weights: https://paddle-imagenet-models-name.bj.bcebos.com/Res2Net50_26w_4s_pretrained.tar -weights: output/faster_rcnn_res2net50_vb_26w_4s_fpn_1x/model_final -metric: COCO -num_classes: 81 - -FasterRCNN: - backbone: Res2Net - fpn: FPN - rpn_head: FPNRPNHead - roi_extractor: FPNRoIAlign - bbox_head: BBoxHead - bbox_assigner: BBoxAssigner - -Res2Net: - depth: 50 - width: 26 - scales: 4 - feature_maps: [2, 3, 4, 5] - freeze_at: 2 - norm_type: bn - variant: b - -FPN: - max_level: 6 - min_level: 2 - num_chan: 256 - spatial_scale: [0.03125, 0.0625, 0.125, 0.25] - -FPNRPNHead: - anchor_generator: - anchor_sizes: [32, 64, 128, 256, 512] - aspect_ratios: [0.5, 1.0, 2.0] - stride: [16.0, 16.0] - variance: [1.0, 1.0, 1.0, 1.0] - anchor_start_size: 32 - max_level: 6 - min_level: 2 - num_chan: 256 - rpn_target_assign: - rpn_batch_size_per_im: 256 - rpn_fg_fraction: 0.5 - rpn_negative_overlap: 0.3 - rpn_positive_overlap: 0.7 - rpn_straddle_thresh: 0.0 - train_proposal: -
min_size: 0.0 - nms_thresh: 0.7 - post_nms_top_n: 2000 - pre_nms_top_n: 2000 - test_proposal: - min_size: 0.0 - nms_thresh: 0.7 - post_nms_top_n: 1000 - pre_nms_top_n: 1000 - -FPNRoIAlign: - canconical_level: 4 - canonical_size: 224 - max_level: 5 - min_level: 2 - box_resolution: 7 - sampling_ratio: 2 - -BBoxAssigner: - batch_size_per_im: 512 - bbox_reg_weights: [0.1, 0.1, 0.2, 0.2] - bg_thresh_hi: 0.5 - bg_thresh_lo: 0.0 - fg_fraction: 0.25 - fg_thresh: 0.5 - -BBoxHead: - head: TwoFCHead - nms: - keep_top_k: 100 - nms_threshold: 0.5 - score_threshold: 0.05 - -TwoFCHead: - mlp_dim: 1024 - -LearningRate: - base_lr: 0.02 - schedulers: - - !PiecewiseDecay - gamma: 0.1 - milestones: [60000, 80000] - - !LinearWarmup - start_factor: 0.1 - steps: 1000 - -OptimizerBuilder: - optimizer: - momentum: 0.9 - type: Momentum - regularizer: - factor: 0.0001 - type: L2 - -_READER_: '../faster_fpn_reader.yml' -TrainReader: - batch_size: 2 diff --git a/static/configs/res2net/mask_rcnn_res2net50_vb_26w_4s_fpn_2x.yml b/static/configs/res2net/mask_rcnn_res2net50_vb_26w_4s_fpn_2x.yml deleted file mode 100644 index 4f26fbd63a8fa2e2618b51e11138f7d0658d09e3..0000000000000000000000000000000000000000 --- a/static/configs/res2net/mask_rcnn_res2net50_vb_26w_4s_fpn_2x.yml +++ /dev/null @@ -1,117 +0,0 @@ -architecture: MaskRCNN -use_gpu: true -max_iters: 180000 -snapshot_iter: 10000 -log_iter: 20 -save_dir: output -pretrain_weights: https://paddle-imagenet-models-name.bj.bcebos.com/Res2Net50_26w_4s_pretrained.tar -metric: COCO -weights: output/mask_rcnn_res2net50_vb_26w_4s_fpn_2x/model_final -num_classes: 81 - -MaskRCNN: - backbone: Res2Net - fpn: FPN - rpn_head: FPNRPNHead - roi_extractor: FPNRoIAlign - bbox_head: BBoxHead - bbox_assigner: BBoxAssigner - -Res2Net: - depth: 50 - width: 26 - scales: 4 - feature_maps: [2, 3, 4, 5] - freeze_at: 2 - norm_type: bn - variant: b - -FPN: - max_level: 6 - min_level: 2 - num_chan: 256 - spatial_scale: [0.03125, 0.0625, 0.125, 0.25] - -FPNRPNHead: - anchor_generator: - aspect_ratios: [0.5, 1.0, 2.0] - variance: [1.0, 1.0, 1.0, 1.0] - anchor_start_size: 32 - max_level: 6 - min_level: 2 - num_chan: 256 - rpn_target_assign: - rpn_batch_size_per_im: 256 - rpn_fg_fraction: 0.5 - rpn_negative_overlap: 0.3 - rpn_positive_overlap: 0.7 - rpn_straddle_thresh: 0.0 - train_proposal: - min_size: 0.0 - nms_thresh: 0.7 - pre_nms_top_n: 2000 - post_nms_top_n: 2000 - test_proposal: - min_size: 0.0 - nms_thresh: 0.7 - pre_nms_top_n: 1000 - post_nms_top_n: 1000 - -FPNRoIAlign: - canconical_level: 4 - canonical_size: 224 - max_level: 5 - min_level: 2 - sampling_ratio: 2 - box_resolution: 7 - mask_resolution: 14 - -MaskHead: - dilation: 1 - conv_dim: 256 - num_convs: 4 - resolution: 28 - -BBoxAssigner: - batch_size_per_im: 512 - bbox_reg_weights: [0.1, 0.1, 0.2, 0.2] - bg_thresh_hi: 0.5 - bg_thresh_lo: 0.0 - fg_fraction: 0.25 - fg_thresh: 0.5 - -MaskAssigner: - resolution: 28 - -BBoxHead: - head: TwoFCHead - nms: - keep_top_k: 100 - nms_threshold: 0.5 - score_threshold: 0.05 - -TwoFCHead: - mlp_dim: 1024 - -LearningRate: - base_lr: 0.02 - schedulers: - - !PiecewiseDecay - gamma: 0.1 - milestones: [120000, 160000] - - !LinearWarmup - start_factor: 0.1 - steps: 1000 - -OptimizerBuilder: - optimizer: - momentum: 0.9 - type: Momentum - regularizer: - factor: 0.0001 - type: L2 - -_READER_: '../mask_fpn_reader.yml' -TrainReader: - batch_size: 2 diff --git a/static/configs/res2net/mask_rcnn_res2net50_vd_26w_4s_fpn_2x.yml
b/static/configs/res2net/mask_rcnn_res2net50_vd_26w_4s_fpn_2x.yml deleted file mode 100644 index 987e01ac10ac29f1ec8666234bd29dbcfa59cd96..0000000000000000000000000000000000000000 --- a/static/configs/res2net/mask_rcnn_res2net50_vd_26w_4s_fpn_2x.yml +++ /dev/null @@ -1,117 +0,0 @@ -architecture: MaskRCNN -use_gpu: true -max_iters: 180000 -snapshot_iter: 10000 -log_iter: 20 -save_dir: output -pretrain_weights: https://paddle-imagenet-models-name.bj.bcebos.com/Res2Net50_vd_26w_4s_pretrained.tar -metric: COCO -weights: output/mask_rcnn_res2net50_vd_26w_4s_fpn_2x/model_final -num_classes: 81 - -MaskRCNN: - backbone: Res2Net - fpn: FPN - rpn_head: FPNRPNHead - roi_extractor: FPNRoIAlign - bbox_head: BBoxHead - bbox_assigner: BBoxAssigner - -Res2Net: - depth: 50 - width: 26 - scales: 4 - feature_maps: [2, 3, 4, 5] - freeze_at: 2 - norm_type: bn - variant: d - -FPN: - max_level: 6 - min_level: 2 - num_chan: 256 - spatial_scale: [0.03125, 0.0625, 0.125, 0.25] - -FPNRPNHead: - anchor_generator: - aspect_ratios: [0.5, 1.0, 2.0] - variance: [1.0, 1.0, 1.0, 1.0] - anchor_start_size: 32 - max_level: 6 - min_level: 2 - num_chan: 256 - rpn_target_assign: - rpn_batch_size_per_im: 256 - rpn_fg_fraction: 0.5 - rpn_negative_overlap: 0.3 - rpn_positive_overlap: 0.7 - rpn_straddle_thresh: 0.0 - train_proposal: - min_size: 0.0 - nms_thresh: 0.7 - pre_nms_top_n: 2000 - post_nms_top_n: 2000 - test_proposal: - min_size: 0.0 - nms_thresh: 0.7 - pre_nms_top_n: 1000 - post_nms_top_n: 1000 - -FPNRoIAlign: - canconical_level: 4 - canonical_size: 224 - max_level: 5 - min_level: 2 - sampling_ratio: 2 - box_resolution: 7 - mask_resolution: 14 - -MaskHead: - dilation: 1 - conv_dim: 256 - num_convs: 4 - resolution: 28 - -BBoxAssigner: - batch_size_per_im: 512 - bbox_reg_weights: [0.1, 0.1, 0.2, 0.2] - bg_thresh_hi: 0.5 - bg_thresh_lo: 0.0 - fg_fraction: 0.25 - fg_thresh: 0.5 - -MaskAssigner: - resolution: 28 - -BBoxHead: - head: TwoFCHead - nms: - keep_top_k: 100 - nms_threshold: 0.5 - score_threshold: 0.05 - -TwoFCHead: - mlp_dim: 1024 - -LearningRate: - base_lr: 0.02 - schedulers: - - !PiecewiseDecay - gamma: 0.1 - milestones: [120000, 160000] - - !LinearWarmup - start_factor: 0.1 - steps: 1000 - -OptimizerBuilder: - optimizer: - momentum: 0.9 - type: Momentum - regularizer: - factor: 0.0001 - type: L2 - -_READER_: '../mask_fpn_reader.yml' -TrainReader: - batch_size: 2 diff --git a/static/configs/res2net/mask_rcnn_res2net50_vd_26w_4s_fpn_dcnv2_1x.yml b/static/configs/res2net/mask_rcnn_res2net50_vd_26w_4s_fpn_dcnv2_1x.yml deleted file mode 100644 index 986d672c95e85566f5ab9283381f8205e9b425e8..0000000000000000000000000000000000000000 --- a/static/configs/res2net/mask_rcnn_res2net50_vd_26w_4s_fpn_dcnv2_1x.yml +++ /dev/null @@ -1,118 +0,0 @@ -architecture: MaskRCNN -use_gpu: true -max_iters: 90000 -snapshot_iter: 10000 -log_iter: 20 -save_dir: output -pretrain_weights: https://paddle-imagenet-models-name.bj.bcebos.com/Res2Net50_vd_26w_4s_pretrained.tar -metric: COCO -weights: output/mask_rcnn_res2net50_vd_26w_4s_fpn_dcnv2_1x/model_final -num_classes: 81 - -MaskRCNN: - backbone: Res2Net - fpn: FPN - rpn_head: FPNRPNHead - roi_extractor: FPNRoIAlign - bbox_head: BBoxHead - bbox_assigner: BBoxAssigner - -Res2Net: - depth: 50 - width: 26 - scales: 4 - feature_maps: [2, 3, 4, 5] - freeze_at: 2 - norm_type: bn - variant: d - dcn_v2_stages: [3, 4, 5] - -FPN: - max_level: 6 - min_level: 2 - num_chan: 256 - spatial_scale: [0.03125, 0.0625, 0.125, 0.25] - -FPNRPNHead: -
anchor_generator: - aspect_ratios: [0.5, 1.0, 2.0] - variance: [1.0, 1.0, 1.0, 1.0] - anchor_start_size: 32 - max_level: 6 - min_level: 2 - num_chan: 256 - rpn_target_assign: - rpn_batch_size_per_im: 256 - rpn_fg_fraction: 0.5 - rpn_negative_overlap: 0.3 - rpn_positive_overlap: 0.7 - rpn_straddle_thresh: 0.0 - train_proposal: - min_size: 0.0 - nms_thresh: 0.7 - pre_nms_top_n: 2000 - post_nms_top_n: 2000 - test_proposal: - min_size: 0.0 - nms_thresh: 0.7 - pre_nms_top_n: 1000 - post_nms_top_n: 1000 - -FPNRoIAlign: - canconical_level: 4 - canonical_size: 224 - max_level: 5 - min_level: 2 - sampling_ratio: 2 - box_resolution: 7 - mask_resolution: 14 - -MaskHead: - dilation: 1 - conv_dim: 256 - num_convs: 4 - resolution: 28 - -BBoxAssigner: - batch_size_per_im: 512 - bbox_reg_weights: [0.1, 0.1, 0.2, 0.2] - bg_thresh_hi: 0.5 - bg_thresh_lo: 0.0 - fg_fraction: 0.25 - fg_thresh: 0.5 - -MaskAssigner: - resolution: 28 - -BBoxHead: - head: TwoFCHead - nms: - keep_top_k: 100 - nms_threshold: 0.5 - score_threshold: 0.05 - -TwoFCHead: - mlp_dim: 1024 - -LearningRate: - base_lr: 0.02 - schedulers: - - !PiecewiseDecay - gamma: 0.1 - milestones: [60000, 80000] - - !LinearWarmup - start_factor: 0.1 - steps: 1000 - -OptimizerBuilder: - optimizer: - momentum: 0.9 - type: Momentum - regularizer: - factor: 0.0001 - type: L2 - -_READER_: '../mask_fpn_reader.yml' -TrainReader: - batch_size: 2 diff --git a/static/configs/retinanet_r101_fpn_1x.yml b/static/configs/retinanet_r101_fpn_1x.yml deleted file mode 100644 index 8cbff0bbd358e5f0b3223310b30b3f4d8ba9b8ad..0000000000000000000000000000000000000000 --- a/static/configs/retinanet_r101_fpn_1x.yml +++ /dev/null @@ -1,90 +0,0 @@ -architecture: RetinaNet -max_iters: 90000 -use_gpu: true -pretrain_weights: https://paddle-imagenet-models-name.bj.bcebos.com/ResNet101_pretrained.tar -weights: output/retinanet_r101_fpn_1x/model_final -log_iter: 20 -snapshot_iter: 10000 -metric: COCO -save_dir: output -num_classes: 81 - -RetinaNet: - backbone: ResNet - fpn: FPN - retina_head: RetinaHead - -ResNet: - norm_type: affine_channel - norm_decay: 0. 
- depth: 101 - feature_maps: [3, 4, 5] - freeze_at: 2 - -FPN: - max_level: 7 - min_level: 3 - num_chan: 256 - spatial_scale: [0.03125, 0.0625, 0.125] - has_extra_convs: true - -RetinaHead: - num_convs_per_octave: 4 - num_chan: 256 - max_level: 7 - min_level: 3 - prior_prob: 0.01 - base_scale: 4 - num_scales_per_octave: 3 - anchor_generator: - aspect_ratios: [1.0, 2.0, 0.5] - variance: [1.0, 1.0, 1.0, 1.0] - target_assign: - positive_overlap: 0.5 - negative_overlap: 0.4 - gamma: 2.0 - alpha: 0.25 - sigma: 3.0151134457776365 - output_decoder: - score_thresh: 0.05 - nms_thresh: 0.5 - pre_nms_top_n: 1000 - detections_per_im: 100 - nms_eta: 1.0 - -LearningRate: - base_lr: 0.01 - schedulers: - - !PiecewiseDecay - gamma: 0.1 - milestones: [60000, 80000] - - !LinearWarmup - start_factor: 0.3333333333333333 - steps: 500 - -OptimizerBuilder: - optimizer: - momentum: 0.9 - type: Momentum - regularizer: - factor: 0.0001 - type: L2 - -_READER_: 'faster_fpn_reader.yml' -TrainReader: - batch_size: 2 - batch_transforms: - - !PadBatch - pad_to_stride: 128 - -EvalReader: - batch_size: 2 - batch_transforms: - - !PadBatch - pad_to_stride: 128 - -TestReader: - batch_size: 1 - batch_transforms: - - !PadBatch - pad_to_stride: 128 diff --git a/static/configs/retinanet_r50_fpn_1x.yml b/static/configs/retinanet_r50_fpn_1x.yml deleted file mode 100644 index 6cdfcad29d979a92583cacf8c5e59e14f4ecea96..0000000000000000000000000000000000000000 --- a/static/configs/retinanet_r50_fpn_1x.yml +++ /dev/null @@ -1,90 +0,0 @@ -architecture: RetinaNet -max_iters: 90000 -use_gpu: true -pretrain_weights: https://paddle-imagenet-models-name.bj.bcebos.com/ResNet50_cos_pretrained.tar -weights: output/retinanet_r50_fpn_1x/model_final -log_iter: 20 -snapshot_iter: 10000 -metric: COCO -save_dir: output -num_classes: 81 - -RetinaNet: - backbone: ResNet - fpn: FPN - retina_head: RetinaHead - -ResNet: - norm_type: affine_channel - norm_decay: 0. 
- depth: 50 - feature_maps: [3, 4, 5] - freeze_at: 2 - -FPN: - max_level: 7 - min_level: 3 - num_chan: 256 - spatial_scale: [0.03125, 0.0625, 0.125] - has_extra_convs: true - -RetinaHead: - num_convs_per_octave: 4 - num_chan: 256 - max_level: 7 - min_level: 3 - prior_prob: 0.01 - base_scale: 4 - num_scales_per_octave: 3 - anchor_generator: - aspect_ratios: [1.0, 2.0, 0.5] - variance: [1.0, 1.0, 1.0, 1.0] - target_assign: - positive_overlap: 0.5 - negative_overlap: 0.4 - gamma: 2.0 - alpha: 0.25 - sigma: 3.0151134457776365 - output_decoder: - score_thresh: 0.05 - nms_thresh: 0.5 - pre_nms_top_n: 1000 - detections_per_im: 100 - nms_eta: 1.0 - -LearningRate: - base_lr: 0.01 - schedulers: - - !PiecewiseDecay - gamma: 0.1 - milestones: [60000, 80000] - - !LinearWarmup - start_factor: 0.3333333333333333 - steps: 500 - -OptimizerBuilder: - optimizer: - momentum: 0.9 - type: Momentum - regularizer: - factor: 0.0001 - type: L2 - -_READER_: 'faster_fpn_reader.yml' -TrainReader: - batch_size: 2 - batch_transforms: - - !PadBatch - pad_to_stride: 128 - -EvalReader: - batch_size: 2 - batch_transforms: - - !PadBatch - pad_to_stride: 128 - -TestReader: - batch_size: 1 - batch_transforms: - - !PadBatch - pad_to_stride: 128 diff --git a/static/configs/retinanet_x101_vd_64x4d_fpn_1x.yml b/static/configs/retinanet_x101_vd_64x4d_fpn_1x.yml deleted file mode 100644 index acdad29f48b94436b5fd0d039fbc1e61a10a7d99..0000000000000000000000000000000000000000 --- a/static/configs/retinanet_x101_vd_64x4d_fpn_1x.yml +++ /dev/null @@ -1,89 +0,0 @@ -architecture: RetinaNet -max_iters: 180000 -use_gpu: true -pretrain_weights: https://paddle-imagenet-models-name.bj.bcebos.com/ResNeXt101_vd_64x4d_pretrained.tar -weights: output/retinanet_x101_vd_64x4d_fpn_1x/model_final -log_iter: 20 -snapshot_iter: 30000 -metric: COCO -save_dir: output -num_classes: 81 - -RetinaNet: - backbone: ResNeXt - fpn: FPN - retina_head: RetinaHead - -ResNeXt: - depth: 101 - feature_maps: [3, 4, 5] - freeze_at: 2 - group_width: 4 - groups: 64 - norm_type: bn - variant: d - -FPN: - max_level: 7 - min_level: 3 - num_chan: 256 - spatial_scale: [0.03125, 0.0625, 0.125] - has_extra_convs: true - -RetinaHead: - num_convs_per_octave: 4 - num_chan: 256 - max_level: 7 - min_level: 3 - prior_prob: 0.01 - base_scale: 4 - num_scales_per_octave: 3 - anchor_generator: - aspect_ratios: [1.0, 2.0, 0.5] - variance: [1.0, 1.0, 1.0, 1.0] - target_assign: - positive_overlap: 0.5 - negative_overlap: 0.4 - gamma: 2.0 - alpha: 0.25 - sigma: 3.0151134457776365 - output_decoder: - score_thresh: 0.05 - nms_thresh: 0.5 - pre_nms_top_n: 1000 - detections_per_im: 100 - nms_eta: 1.0 - -LearningRate: - base_lr: 0.005 - schedulers: - - !PiecewiseDecay - gamma: 0.1 - milestones: [120000, 160000] - - !LinearWarmup - start_factor: 0.1 - steps: 1000 - -OptimizerBuilder: - optimizer: - momentum: 0.9 - type: Momentum - regularizer: - factor: 0.0001 - type: L2 - -_READER_: 'faster_fpn_reader.yml' -TrainReader: - batch_transforms: - - !PadBatch - pad_to_stride: 128 - -EvalReader: - batch_transforms: - - !PadBatch - pad_to_stride: 128 - -TestReader: - batch_transforms: - - !PadBatch - pad_to_stride: 128 diff --git a/static/configs/solov2/README.md b/static/configs/solov2/README.md deleted file mode 100644 index c7581da229a16b24de831a9861aec327d6f337e4..0000000000000000000000000000000000000000 --- a/static/configs/solov2/README.md +++ /dev/null @@ -1,50 +0,0 @@ -# SOLOv2 for instance segmentation - -## Introduction - -SOLOv2 (Segmenting Objects by Locations) is a fast instance segmentation 
framework with strong performance. We reproduced the model from the paper, then further optimized the accuracy and speed of SOLOv2. - -**Highlights:** - -- Performance: the `Light-R50-VD-DCN-FPN` model reaches 38.6 FPS on a single Tesla V100, with a mask AP of 38.8 on the COCO val dataset, improving inference speed by 24% and mAP by 2.4 percentage points. -- Training time: training `solov2_r50_fpn_1x` takes only 10 hours on 8 Tesla V100 GPUs. - -
-## Model Zoo - -| Detector | Backbone | Multi-scale training | Lr schd | Mask AP<sup>val</sup> | V100 FP32(FPS) | GPU | Download | Configs | -| :-------: | :---------------------: | :-------------------: | :-----: | :--------------------: | :-------------: | :-----: | :---------: | :------------------------: | -| YOLACT++ | R50-FPN | False | 800k iter | 34.1 (test-dev) | 33.5 | Xp | - | - | -| CenterMask | R50-FPN | True | 2x | 36.4 | 13.9 | Xp | - | - | -| CenterMask | V2-99-FPN | True | 3x | 40.2 | 8.9 | Xp | - | - | -| PolarMask | R50-FPN | True | 2x | 30.5 | 9.4 | V100 | - | - | -| BlendMask | R50-FPN | True | 3x | 37.8 | 13.5 | V100 | - | - | -| SOLOv2 (Paper) | R50-FPN | False | 1x | 34.8 | 18.5 | V100 | - | - | -| SOLOv2 (Paper) | X101-DCN-FPN | True | 3x | 42.4 | 5.9 | V100 | - | - | -| SOLOv2 | Mobilenetv3-FPN | True | 3x | 30.0 | 50 | V100 | [model](https://paddlemodels.bj.bcebos.com/object_detection/solov2_mobilenetv3_fpn_448_3x.pdparams) | [config](https://github.com/PaddlePaddle/PaddleDetection/tree/develop/static/configs/solov2/solov2_mobilenetv3_fpn_448_3x.yml) | -| SOLOv2 | R50-FPN | False | 1x | 35.6 | 21.9 | V100 | [model](https://paddlemodels.bj.bcebos.com/object_detection/solov2_r50_fpn_1x.pdparams) | [config](https://github.com/PaddlePaddle/PaddleDetection/tree/develop/static/configs/solov2/solov2_r50_fpn_1x.yml) | -| SOLOv2 | R50-FPN | True | 3x | 37.9 | 21.9 | V100 | [model](https://paddlemodels.bj.bcebos.com/object_detection/solov2_r50_fpn_3x.pdparams) | [config](https://github.com/PaddlePaddle/PaddleDetection/tree/develop/static/configs/solov2/solov2_r50_fpn_3x.yml) | -| SOLOv2 | R101-VD-FPN | True | 3x | 42.6 | 12.1 | V100 | [model](https://paddlemodels.bj.bcebos.com/object_detection/solov2_r101_vd_fpn_3x.pdparams) | [config](https://github.com/PaddlePaddle/PaddleDetection/tree/develop/static/configs/solov2/solov2_r101_vd_fpn_3x.yml) | - -## Enhanced model -| Backbone | Input size | Lr schd | V100 FP32(FPS) | Mask AP<sup>val</sup> | Download | Configs | -| :---------------------: | :-------------------: | :-----: | :------------: | :-----: | :---------: | :------------------------: | -| Light-R50-VD-DCN-FPN | 512 | 3x | 38.6 | 38.8 | [model](https://paddlemodels.bj.bcebos.com/object_detection/solov2_light_r50_vd_fpn_dcn_512_3x.pdparams) | [config](https://github.com/PaddlePaddle/PaddleDetection/tree/develop/static/configs/solov2/solov2_light_r50_vd_fpn_dcn_512_3x.yml) | - -**Notes:** - -- SOLOv2 is trained on the COCO train2017 dataset and evaluated on val2017; results are reported as `mAP(IoU=0.5:0.95)`.
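The `MaskMatrixNMS` blocks in the SOLOv2 configs below correspond to the Matrix NMS step introduced in the paper cited under Citations: rather than discarding overlapping masks sequentially, every candidate keeps a score that is decayed in a single pass using the full pairwise IoU matrix. The following is a simplified NumPy sketch of the Gaussian-decay variant for illustration only; it is not the PaddleDetection implementation, and the function and variable names are ours:

```python
import numpy as np

def matrix_nms(masks: np.ndarray, scores: np.ndarray, sigma: float = 2.0) -> np.ndarray:
    """Simplified Gaussian Matrix NMS (SOLOv2, arXiv:2003.10152).

    masks:  (N, H, W) binary masks, assumed sorted by score, descending.
    scores: (N,) confidence scores.
    Returns decayed scores; callers then keep the post_nms_top_n best.
    """
    n = masks.shape[0]
    flat = masks.reshape(n, -1).astype(np.float32)
    inter = flat @ flat.T                      # pairwise intersection areas
    areas = flat.sum(axis=1)
    union = areas[:, None] + areas[None, :] - inter
    # Upper triangle: IoU of each mask with every higher-scored mask.
    iou = np.triu(inter / np.maximum(union, 1e-6), k=1)

    # Largest IoU each mask has with any higher-scored mask (compensation term).
    compensate = iou.max(axis=0)

    # Gaussian decay: mask j is penalised by its most suppressive higher-scored
    # mask i, relative to how suppressed mask i already is itself.
    decay = np.exp(-sigma * iou ** 2) / np.exp(-sigma * compensate[:, None] ** 2)
    return scores * decay.min(axis=0)
```

Because the decay is computed with matrix operations instead of a sequential loop, this step stays cheap even for the `pre_nms_top_n: 500` candidates configured in the files below.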
- -## Citations -``` -@article{wang2020solov2, - title={SOLOv2: Dynamic, Faster and Stronger}, - author={Wang, Xinlong and Zhang, Rufeng and Kong, Tao and Li, Lei and Shen, Chunhua}, - journal={arXiv preprint arXiv:2003.10152}, - year={2020} -} -``` diff --git a/static/configs/solov2/solov2_light_448_reader.yml b/static/configs/solov2/solov2_light_448_reader.yml deleted file mode 100644 index 07b2a484104702c1f3615a69ed47abde5a639eb5..0000000000000000000000000000000000000000 --- a/static/configs/solov2/solov2_light_448_reader.yml +++ /dev/null @@ -1,103 +0,0 @@ -TrainReader: - batch_size: 4 - worker_num: 2 - inputs_def: - fields: ['image', 'im_id', 'gt_segm'] - dataset: - !COCODataSet - dataset_dir: dataset/coco - anno_path: annotations/instances_train2017.json - image_dir: train2017 - sample_transforms: - - !DecodeImage - to_rgb: true - - !Poly2Mask {} - - !ColorDistort {} - - !RandomCrop - is_mask_crop: True - - !ResizeImage - target_size: [352, 384, 416, 448, 480, 512] - max_size: 768 - interp: 1 - use_cv2: true - resize_box: true - - !RandomFlipImage - prob: 0.5 - - !NormalizeImage - is_channel_first: false - is_scale: true - mean: [0.485,0.456,0.406] - std: [0.229, 0.224,0.225] - - !Permute - to_bgr: false - channel_first: true - batch_transforms: - - !PadBatch - pad_to_stride: 32 - - !Gt2Solov2Target - num_grids: [40, 36, 24, 16, 12] - scale_ranges: [[1, 56], [28, 112], [56, 224], [112, 448], [224, 896]] - coord_sigma: 0.2 - shuffle: True - -EvalReader: - inputs_def: - fields: ['image', 'im_info', 'im_id'] - dataset: - !COCODataSet - image_dir: val2017 - anno_path: annotations/instances_val2017.json - dataset_dir: dataset/coco - sample_transforms: - - !DecodeImage - to_rgb: true - - !ResizeImage - interp: 1 - max_size: 768 - target_size: 448 - use_cv2: true - - !NormalizeImage - is_channel_first: false - is_scale: true - mean: [0.485,0.456,0.406] - std: [0.229, 0.224,0.225] - - !Permute - channel_first: true - to_bgr: false - batch_transforms: - - !PadBatch - pad_to_stride: 32 - use_padded_im_info: false - # only support batch_size=1 when evaluation - batch_size: 1 - shuffle: false - drop_last: false - drop_empty: false - worker_num: 2 - -TestReader: - inputs_def: - fields: ['image', 'im_info', 'im_id', 'im_shape'] - dataset: - !ImageFolder - anno_path: dataset/coco/annotations/instances_val2017.json - sample_transforms: - - !DecodeImage - to_rgb: true - - !ResizeImage - interp: 1 - max_size: 768 - target_size: 448 - use_cv2: true - - !NormalizeImage - is_channel_first: false - is_scale: true - mean: [0.485,0.456,0.406] - std: [0.229, 0.224,0.225] - - !Permute - channel_first: true - to_bgr: false - batch_transforms: - - !PadBatch - pad_to_stride: 32 - use_padded_im_info: false diff --git a/static/configs/solov2/solov2_light_r50_vd_fpn_dcn_512_3x.yml b/static/configs/solov2/solov2_light_r50_vd_fpn_dcn_512_3x.yml deleted file mode 100644 index b45581bdd04c7c3687f062097381c014e7fa7b29..0000000000000000000000000000000000000000 --- a/static/configs/solov2/solov2_light_r50_vd_fpn_dcn_512_3x.yml +++ /dev/null @@ -1,80 +0,0 @@ -architecture: SOLOv2 -use_gpu: true -max_iters: 270000 -snapshot_iter: 30000 -log_smooth_window: 20 -save_dir: output -pretrain_weights: https://paddle-imagenet-models-name.bj.bcebos.com/ResNet50_vd_ssld_pretrained.tar -metric: COCO -weights: output/solov2_light_r50_vd_fpn_dcn_512_3x/model_final -num_classes: 81 -use_ema: true -ema_decay: 0.9998 - -SOLOv2: - backbone: ResNet - fpn: FPN - bbox_head: SOLOv2Head - mask_head: SOLOv2MaskHead - -ResNet: - depth: 50 - 
feature_maps: [2, 3, 4, 5] - freeze_at: 0 - freeze_norm: false - norm_type: sync_bn - dcn_v2_stages: [3, 4, 5] - variant: d - lr_mult_list: [0.05, 0.05, 0.1, 0.15] - -FPN: - max_level: 6 - min_level: 2 - num_chan: 256 - spatial_scale: [0.03125, 0.0625, 0.125, 0.25] - reverse_out: True - -SOLOv2Head: - seg_feat_channels: 256 - stacked_convs: 3 - num_grids: [40, 36, 24, 16, 12] - kernel_out_channels: 128 - solov2_loss: SOLOv2Loss - mask_nms: MaskMatrixNMS - dcn_v2_stages: [2,] - drop_block: True - -SOLOv2MaskHead: - in_channels: 128 - out_channels: 128 - start_level: 0 - end_level: 3 - -SOLOv2Loss: - ins_loss_weight: 3.0 - focal_loss_gamma: 2.0 - focal_loss_alpha: 0.25 - -MaskMatrixNMS: - pre_nms_top_n: 500 - post_nms_top_n: 100 - -LearningRate: - base_lr: 0.01 - schedulers: - - !PiecewiseDecay - gamma: 0.1 - milestones: [180000, 240000] - - !LinearWarmup - start_factor: 0. - steps: 1000 - -OptimizerBuilder: - optimizer: - momentum: 0.9 - type: Momentum - regularizer: - factor: 0.0001 - type: L2 - -_READER_: 'solov2_light_reader.yml' diff --git a/static/configs/solov2/solov2_light_reader.yml b/static/configs/solov2/solov2_light_reader.yml deleted file mode 100644 index 26228f6c565a6d1ab4e2d250bd878b777691884c..0000000000000000000000000000000000000000 --- a/static/configs/solov2/solov2_light_reader.yml +++ /dev/null @@ -1,103 +0,0 @@ -TrainReader: - batch_size: 2 - worker_num: 2 - inputs_def: - fields: ['image', 'im_id', 'gt_segm'] - dataset: - !COCODataSet - dataset_dir: dataset/coco - anno_path: annotations/instances_train2017.json - image_dir: train2017 - sample_transforms: - - !DecodeImage - to_rgb: true - - !Poly2Mask {} - - !ColorDistort {} - - !RandomCrop - is_mask_crop: True - - !ResizeImage - target_size: [352, 384, 416, 448, 480, 512] - max_size: 852 - interp: 1 - use_cv2: true - resize_box: true - - !RandomFlipImage - prob: 0.5 - - !NormalizeImage - is_channel_first: false - is_scale: true - mean: [0.485,0.456,0.406] - std: [0.229, 0.224,0.225] - - !Permute - to_bgr: false - channel_first: true - batch_transforms: - - !PadBatch - pad_to_stride: 32 - - !Gt2Solov2Target - num_grids: [40, 36, 24, 16, 12] - scale_ranges: [[1, 64], [32, 128], [64, 256], [128, 512], [256, 2048]] - coord_sigma: 0.2 - shuffle: True - -EvalReader: - inputs_def: - fields: ['image', 'im_info', 'im_id'] - dataset: - !COCODataSet - image_dir: val2017 - anno_path: annotations/instances_val2017.json - dataset_dir: dataset/coco - sample_transforms: - - !DecodeImage - to_rgb: true - - !ResizeImage - interp: 1 - max_size: 852 - target_size: 512 - use_cv2: true - - !NormalizeImage - is_channel_first: false - is_scale: true - mean: [0.485,0.456,0.406] - std: [0.229, 0.224,0.225] - - !Permute - channel_first: true - to_bgr: false - batch_transforms: - - !PadBatch - pad_to_stride: 32 - use_padded_im_info: false - # only support batch_size=1 when evaluation - batch_size: 1 - shuffle: false - drop_last: false - drop_empty: false - worker_num: 2 - -TestReader: - inputs_def: - fields: ['image', 'im_info', 'im_id', 'im_shape'] - dataset: - !ImageFolder - anno_path: dataset/coco/annotations/instances_val2017.json - sample_transforms: - - !DecodeImage - to_rgb: true - - !ResizeImage - interp: 1 - max_size: 852 - target_size: 512 - use_cv2: true - - !NormalizeImage - is_channel_first: false - is_scale: true - mean: [0.485,0.456,0.406] - std: [0.229, 0.224,0.225] - - !Permute - channel_first: true - to_bgr: false - batch_transforms: - - !PadBatch - pad_to_stride: 32 - use_padded_im_info: false diff --git 
a/static/configs/solov2/solov2_mobilenetv3_fpn_448_3x.yml b/static/configs/solov2/solov2_mobilenetv3_fpn_448_3x.yml deleted file mode 100644 index 11e005d26cfce67fad4a7723c9d181715f3618ed..0000000000000000000000000000000000000000 --- a/static/configs/solov2/solov2_mobilenetv3_fpn_448_3x.yml +++ /dev/null @@ -1,79 +0,0 @@ -architecture: SOLOv2 -use_gpu: true -max_iters: 135000 -snapshot_iter: 20000 -log_smooth_window: 20 -save_dir: output -pretrain_weights: https://paddle-imagenet-models-name.bj.bcebos.com/MobileNetV3_large_x1_0_ssld_pretrained.tar -metric: COCO -weights: output/solov2/solov2_mobilenetv3_fpn_448_3x/model_final -num_classes: 81 -use_ema: true -ema_decay: 0.9998 - -SOLOv2: - backbone: MobileNetV3RCNN - fpn: FPN - bbox_head: SOLOv2Head - mask_head: SOLOv2MaskHead - -MobileNetV3RCNN: - norm_type: bn - freeze_norm: true - norm_decay: 0.0 - feature_maps: [2, 3, 4, 6] - conv_decay: 0.00001 - lr_mult_list: [0.25, 0.25, 0.5, 0.5, 0.75] - scale: 1.0 - model_name: large - -FPN: - max_level: 6 - min_level: 2 - num_chan: 256 - spatial_scale: [0.03125, 0.0625, 0.125, 0.25] - reverse_out: True - -SOLOv2Head: - seg_feat_channels: 256 - stacked_convs: 2 - num_grids: [40, 36, 24, 16, 12] - kernel_out_channels: 128 - solov2_loss: SOLOv2Loss - mask_nms: MaskMatrixNMS - drop_block: True - -SOLOv2MaskHead: - in_channels: 128 - out_channels: 128 - start_level: 0 - end_level: 3 - -SOLOv2Loss: - ins_loss_weight: 3.0 - focal_loss_gamma: 2.0 - focal_loss_alpha: 0.25 - -MaskMatrixNMS: - pre_nms_top_n: 500 - post_nms_top_n: 100 - -LearningRate: - base_lr: 0.02 - schedulers: - - !PiecewiseDecay - gamma: 0.1 - milestones: [90000, 120000] - - !LinearWarmup - start_factor: 0. - steps: 1000 - -OptimizerBuilder: - optimizer: - momentum: 0.9 - type: Momentum - regularizer: - factor: 0.0001 - type: L2 - -_READER_: 'solov2_light_448_reader.yml' diff --git a/static/configs/solov2/solov2_r101_vd_fpn_3x.yml b/static/configs/solov2/solov2_r101_vd_fpn_3x.yml deleted file mode 100644 index 09f1e265bfa076c5637150121b688efd6980fe01..0000000000000000000000000000000000000000 --- a/static/configs/solov2/solov2_r101_vd_fpn_3x.yml +++ /dev/null @@ -1,107 +0,0 @@ -architecture: SOLOv2 -use_gpu: true -max_iters: 270000 -snapshot_iter: 30000 -log_smooth_window: 20 -save_dir: output -pretrain_weights: https://paddle-imagenet-models-name.bj.bcebos.com/ResNet101_vd_pretrained.tar -metric: COCO -weights: output/solov2_r101_vd_fpn_3x/model_final -num_classes: 81 -use_ema: true -ema_decay: 0.9998 - -SOLOv2: - backbone: ResNet - fpn: FPN - bbox_head: SOLOv2Head - mask_head: SOLOv2MaskHead - -ResNet: - depth: 101 - feature_maps: [2, 3, 4, 5] - freeze_at: 2 - norm_type: bn - dcn_v2_stages: [3, 4, 5] - variant: d - -FPN: - max_level: 6 - min_level: 2 - num_chan: 256 - spatial_scale: [0.03125, 0.0625, 0.125, 0.25] - reverse_out: True - -SOLOv2Head: - seg_feat_channels: 512 - stacked_convs: 4 - num_grids: [40, 36, 24, 16, 12] - kernel_out_channels: 256 - solov2_loss: SOLOv2Loss - mask_nms: MaskMatrixNMS - dcn_v2_stages: [0, 1, 2, 3] - -SOLOv2MaskHead: - in_channels: 128 - out_channels: 256 - start_level: 0 - end_level: 3 - use_dcn_in_tower: True - -SOLOv2Loss: - ins_loss_weight: 3.0 - focal_loss_gamma: 2.0 - focal_loss_alpha: 0.25 - -MaskMatrixNMS: - pre_nms_top_n: 500 - post_nms_top_n: 100 - -LearningRate: - base_lr: 0.01 - schedulers: - - !PiecewiseDecay - gamma: 0.1 - milestones: [180000, 240000] - - !LinearWarmup - start_factor: 0. 
- steps: 1000 - -OptimizerBuilder: - optimizer: - momentum: 0.9 - type: Momentum - regularizer: - factor: 0.0001 - type: L2 - -_READER_: 'solov2_reader.yml' -TrainReader: - batch_size: 2 - sample_transforms: - - !DecodeImage - to_rgb: true - - !Poly2Mask {} - - !ResizeImage - target_size: [640, 672, 704, 736, 768, 800] - max_size: 1333 - interp: 1 - use_cv2: true - resize_box: true - - !RandomFlipImage - prob: 0.5 - - !NormalizeImage - is_channel_first: false - is_scale: true - mean: [0.485,0.456,0.406] - std: [0.229, 0.224,0.225] - - !Permute - to_bgr: false - channel_first: true - batch_transforms: - - !PadBatch - pad_to_stride: 32 - - !Gt2Solov2Target - num_grids: [40, 36, 24, 16, 12] - scale_ranges: [[1, 96], [48, 192], [96, 384], [192, 768], [384, 2048]] - coord_sigma: 0.2 diff --git a/static/configs/solov2/solov2_r50_fpn_1x.yml b/static/configs/solov2/solov2_r50_fpn_1x.yml deleted file mode 100644 index 587661b29956c96d69e3cf18421a41587701a648..0000000000000000000000000000000000000000 --- a/static/configs/solov2/solov2_r50_fpn_1x.yml +++ /dev/null @@ -1,72 +0,0 @@ -architecture: SOLOv2 -use_gpu: true -max_iters: 90000 -snapshot_iter: 10000 -log_smooth_window: 20 -save_dir: output -pretrain_weights: https://paddle-imagenet-models-name.bj.bcebos.com/ResNet50_cos_pretrained.tar -metric: COCO -weights: output/solov2_r50_fpn_1x/model_final -num_classes: 81 - -SOLOv2: - backbone: ResNet - fpn: FPN - bbox_head: SOLOv2Head - mask_head: SOLOv2MaskHead - -ResNet: - depth: 50 - feature_maps: [2, 3, 4, 5] - freeze_at: 2 - norm_type: bn - -FPN: - max_level: 6 - min_level: 2 - num_chan: 256 - spatial_scale: [0.03125, 0.0625, 0.125, 0.25] - reverse_out: True - -SOLOv2Head: - seg_feat_channels: 512 - stacked_convs: 4 - num_grids: [40, 36, 24, 16, 12] - kernel_out_channels: 256 - solov2_loss: SOLOv2Loss - mask_nms: MaskMatrixNMS - -SOLOv2MaskHead: - in_channels: 128 - out_channels: 256 - start_level: 0 - end_level: 3 - -SOLOv2Loss: - ins_loss_weight: 3.0 - focal_loss_gamma: 2.0 - focal_loss_alpha: 0.25 - -MaskMatrixNMS: - pre_nms_top_n: 500 - post_nms_top_n: 100 - -LearningRate: - base_lr: 0.01 - schedulers: - - !PiecewiseDecay - gamma: 0.1 - milestones: [60000, 80000] - - !LinearWarmup - start_factor: 0. 
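In the Gt2Solov2Target transform above, each scale_ranges entry is paired with the matching num_grids entry, and the ranges overlap on purpose so borderline instances are supervised at two pyramid levels. A hedged sketch of that assignment, approximating instance scale as sqrt(w·h) and omitting the coord_sigma center-region shrinking that the real op also applies:

```python
import math

NUM_GRIDS = [40, 36, 24, 16, 12]
SCALE_RANGES = [(1, 96), (48, 192), (96, 384), (192, 768), (384, 2048)]

def assigned_grids(box_w, box_h):
    """Return the grid sizes whose scale range covers this instance.

    Instance scale is approximated as sqrt(w * h); because the ranges
    overlap, an instance can be assigned to up to two pyramid levels.
    """
    scale = math.sqrt(box_w * box_h)
    return [g for g, (lo, hi) in zip(NUM_GRIDS, SCALE_RANGES) if lo <= scale <= hi]

print(assigned_grids(100, 100))  # scale 100 -> [36, 24] (two overlapping levels)
print(assigned_grids(20, 20))    # scale 20  -> [40]
```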
- steps: 1000 - -OptimizerBuilder: - optimizer: - momentum: 0.9 - type: Momentum - regularizer: - factor: 0.0001 - type: L2 - -_READER_: 'solov2_reader.yml' diff --git a/static/configs/solov2/solov2_r50_fpn_3x.yml b/static/configs/solov2/solov2_r50_fpn_3x.yml deleted file mode 100644 index ecaeb7bbfa9045aaf29b83e396444c62f37cabce..0000000000000000000000000000000000000000 --- a/static/configs/solov2/solov2_r50_fpn_3x.yml +++ /dev/null @@ -1,101 +0,0 @@ -architecture: SOLOv2 -use_gpu: true -max_iters: 270000 -snapshot_iter: 30000 -log_smooth_window: 20 -save_dir: output -pretrain_weights: https://paddle-imagenet-models-name.bj.bcebos.com/ResNet50_cos_pretrained.tar -metric: COCO -weights: output/solov2/solov2_r50_fpn_3x/model_final -num_classes: 81 - -SOLOv2: - backbone: ResNet - fpn: FPN - bbox_head: SOLOv2Head - mask_head: SOLOv2MaskHead - -ResNet: - depth: 50 - feature_maps: [2, 3, 4, 5] - freeze_at: 2 - norm_type: bn - -FPN: - max_level: 6 - min_level: 2 - num_chan: 256 - spatial_scale: [0.03125, 0.0625, 0.125, 0.25] - reverse_out: True - -SOLOv2Head: - seg_feat_channels: 512 - stacked_convs: 4 - num_grids: [40, 36, 24, 16, 12] - kernel_out_channels: 256 - solov2_loss: SOLOv2Loss - mask_nms: MaskMatrixNMS - -SOLOv2MaskHead: - in_channels: 128 - out_channels: 256 - start_level: 0 - end_level: 3 - -SOLOv2Loss: - ins_loss_weight: 3.0 - focal_loss_gamma: 2.0 - focal_loss_alpha: 0.25 - -MaskMatrixNMS: - pre_nms_top_n: 500 - post_nms_top_n: 100 - -LearningRate: - base_lr: 0.01 - schedulers: - - !PiecewiseDecay - gamma: 0.1 - milestones: [180000, 240000] - - !LinearWarmup - start_factor: 0. - steps: 1000 - -OptimizerBuilder: - optimizer: - momentum: 0.9 - type: Momentum - regularizer: - factor: 0.0001 - type: L2 - -_READER_: 'solov2_reader.yml' -TrainReader: - batch_size: 2 - sample_transforms: - - !DecodeImage - to_rgb: true - - !Poly2Mask {} - - !ResizeImage - target_size: [640, 672, 704, 736, 768, 800] - max_size: 1333 - interp: 1 - use_cv2: true - resize_box: true - - !RandomFlipImage - prob: 0.5 - - !NormalizeImage - is_channel_first: false - is_scale: true - mean: [0.485,0.456,0.406] - std: [0.229, 0.224,0.225] - - !Permute - to_bgr: false - channel_first: true - batch_transforms: - - !PadBatch - pad_to_stride: 32 - - !Gt2Solov2Target - num_grids: [40, 36, 24, 16, 12] - scale_ranges: [[1, 96], [48, 192], [96, 384], [192, 768], [384, 2048]] - coord_sigma: 0.2 diff --git a/static/configs/solov2/solov2_reader.yml b/static/configs/solov2/solov2_reader.yml deleted file mode 100644 index 46f81958ea96708a55769665903b345afb2346ff..0000000000000000000000000000000000000000 --- a/static/configs/solov2/solov2_reader.yml +++ /dev/null @@ -1,100 +0,0 @@ -TrainReader: - batch_size: 2 - worker_num: 2 - inputs_def: - fields: ['image', 'im_id', 'gt_segm'] - dataset: - !COCODataSet - dataset_dir: dataset/coco - anno_path: annotations/instances_train2017.json - image_dir: train2017 - sample_transforms: - - !DecodeImage - to_rgb: true - - !Poly2Mask {} - - !ResizeImage - target_size: 800 - max_size: 1333 - interp: 1 - use_cv2: true - resize_box: true - - !RandomFlipImage - prob: 0.5 - - !NormalizeImage - is_channel_first: false - is_scale: true - mean: [0.485,0.456,0.406] - std: [0.229, 0.224,0.225] - - !Permute - to_bgr: false - channel_first: true - batch_transforms: - - !PadBatch - pad_to_stride: 32 - - !Gt2Solov2Target - num_grids: [40, 36, 24, 16, 12] - scale_ranges: [[1, 96], [48, 192], [96, 384], [192, 768], [384, 2048]] - coord_sigma: 0.2 - shuffle: True - -EvalReader: - inputs_def: - fields: 
['image', 'im_info', 'im_id'] - dataset: - !COCODataSet - image_dir: val2017 - anno_path: annotations/instances_val2017.json - dataset_dir: dataset/coco - sample_transforms: - - !DecodeImage - to_rgb: true - - !ResizeImage - interp: 1 - max_size: 1333 - target_size: 800 - use_cv2: true - - !NormalizeImage - is_channel_first: false - is_scale: true - mean: [0.485,0.456,0.406] - std: [0.229, 0.224,0.225] - - !Permute - channel_first: true - to_bgr: false - batch_transforms: - - !PadBatch - pad_to_stride: 32 - use_padded_im_info: false - # only support batch_size=1 when evaluation - batch_size: 1 - shuffle: false - drop_last: false - drop_empty: false - worker_num: 2 - -TestReader: - inputs_def: - fields: ['image', 'im_info', 'im_id', 'im_shape'] - dataset: - !ImageFolder - anno_path: dataset/coco/annotations/instances_val2017.json - sample_transforms: - - !DecodeImage - to_rgb: true - - !ResizeImage - interp: 1 - max_size: 1333 - target_size: 800 - use_cv2: true - - !NormalizeImage - is_channel_first: false - is_scale: true - mean: [0.485,0.456,0.406] - std: [0.229, 0.224,0.225] - - !Permute - channel_first: true - to_bgr: false - batch_transforms: - - !PadBatch - pad_to_stride: 32 - use_padded_im_info: false diff --git a/static/configs/ssd/ssd_mobilenet_v1_roadsign_kunlun.yml b/static/configs/ssd/ssd_mobilenet_v1_roadsign_kunlun.yml deleted file mode 100644 index bd6e24e1ed8933bd5acaa3441800f350c043f8fb..0000000000000000000000000000000000000000 --- a/static/configs/ssd/ssd_mobilenet_v1_roadsign_kunlun.yml +++ /dev/null @@ -1,143 +0,0 @@ -architecture: SSD -pretrain_weights: https://paddlemodels.bj.bcebos.com/object_detection/ssd_mobilenet_v1_voc.tar -use_gpu: false -use_xpu: true -max_iters: 3000 -snapshot_iter: 500 -log_iter: 1 -metric: VOC -map_type: 11point -save_dir: output -weights: output/ssd_mobilenet_v1_roadsign_kunlun/model_final -num_classes: 5 - -SSD: - backbone: MobileNet - multi_box_head: MultiBoxHead - output_decoder: - background_label: 0 - keep_top_k: 200 - nms_eta: 1.0 - nms_threshold: 0.45 - nms_top_k: 400 - score_threshold: 0.01 - -MobileNet: - norm_decay: 0. 
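PadBatch with pad_to_stride: 32 in the solov2 readers above zero-pads every image in a batch so height and width are multiples of the network's coarsest stride, keeping each FPN level's feature map an integer size. A sketch of the rounding, assuming zero padding on the bottom/right of a CHW tensor (these readers Permute to CHW before batching):

```python
import numpy as np

def pad_to_stride(img_chw, stride=32):
    """Zero-pad a CHW tensor so H and W become multiples of `stride`."""
    c, h, w = img_chw.shape
    ph = (h + stride - 1) // stride * stride  # round up to the next multiple
    pw = (w + stride - 1) // stride * stride
    out = np.zeros((c, ph, pw), dtype=img_chw.dtype)
    out[:, :h, :w] = img_chw  # original pixels top-left, zeros bottom/right
    return out

print(pad_to_stride(np.ones((3, 800, 1213), np.float32)).shape)  # (3, 800, 1216)
```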
- conv_group_scale: 1 - conv_learning_rate: 0.1 - extra_block_filters: [[256, 512], [128, 256], [128, 256], [64, 128]] - with_extra_blocks: true - -MultiBoxHead: - aspect_ratios: [[2.], [2., 3.], [2., 3.], [2., 3.], [2., 3.], [2., 3.]] - base_size: 300 - flip: true - max_ratio: 90 - max_sizes: [[], 150.0, 195.0, 240.0, 285.0, 300.0] - min_ratio: 20 - min_sizes: [60.0, 105.0, 150.0, 195.0, 240.0, 285.0] - offset: 0.5 - -LearningRate: - schedulers: - - !PiecewiseDecay - milestones: [2000, 3000, 4000, 5000] - values: [0.0001, 0.00005, 0.000025, 0.00001, 0.000001] - -OptimizerBuilder: - optimizer: - momentum: 0.0 - type: RMSPropOptimizer - regularizer: - factor: 0.00005 - type: L2 - -TrainReader: - inputs_def: - image_shape: [3, 300, 300] - fields: ['image', 'gt_bbox', 'gt_class'] - dataset: - !VOCDataSet - anno_path: train.txt - dataset_dir: dataset/roadsign_voc - #use_default_label: true - sample_transforms: - - !DecodeImage - to_rgb: true - - !RandomDistort - brightness_lower: 0.875 - brightness_upper: 1.125 - is_order: true - - !RandomExpand - fill_value: [127.5, 127.5, 127.5] - - !RandomCrop - allow_no_crop: false - - !NormalizeBox {} - - !ResizeImage - interp: 1 - target_size: 300 - use_cv2: false - - !RandomFlipImage - is_normalized: true - - !Permute {} - - !NormalizeImage - is_scale: false - mean: [127.5, 127.5, 127.5] - std: [127.502231, 127.502231, 127.502231] - batch_size: 32 - shuffle: true - drop_last: true - worker_num: 8 - bufsize: 16 - use_process: false - -EvalReader: - inputs_def: - image_shape: [3, 300, 300] - fields: ['image', 'gt_bbox', 'gt_class', 'im_shape', 'im_id', 'is_difficult'] - dataset: - !VOCDataSet - anno_path: valid.txt - dataset_dir: dataset/roadsign_voc - #use_default_label: true - sample_transforms: - - !DecodeImage - to_rgb: true - - !NormalizeBox {} - - !ResizeImage - interp: 1 - target_size: 300 - use_cv2: false - - !Permute {} - - !NormalizeImage - is_scale: false - mean: [127.5, 127.5, 127.5] - std: [127.502231, 127.502231, 127.502231] - batch_size: 32 - worker_num: 8 - bufsize: 16 - use_process: false - -TestReader: - inputs_def: - image_shape: [3,300,300] - fields: ['image', 'im_id', 'im_shape'] - dataset: - !ImageFolder - anno_path: test.txt - use_default_label: true - sample_transforms: - - !DecodeImage - to_rgb: true - - !ResizeImage - interp: 1 - max_size: 0 - target_size: 300 - use_cv2: true - - !Permute {} - - !NormalizeImage - is_scale: false - mean: [127.5, 127.5, 127.5] - std: [127.502231, 127.502231, 127.502231] - batch_size: 1 diff --git a/static/configs/ssd/ssd_mobilenet_v1_voc.yml b/static/configs/ssd/ssd_mobilenet_v1_voc.yml deleted file mode 100644 index de5bae42f859390ca233a4b30532da267f1105fa..0000000000000000000000000000000000000000 --- a/static/configs/ssd/ssd_mobilenet_v1_voc.yml +++ /dev/null @@ -1,143 +0,0 @@ -architecture: SSD -pretrain_weights: https://paddlemodels.bj.bcebos.com/object_detection/ssd_mobilenet_v1_coco_pretrained.tar -use_gpu: true -max_iters: 28000 -snapshot_iter: 2000 -log_iter: 1 -metric: VOC -map_type: 11point -save_dir: output -weights: output/ssd_mobilenet_v1_voc/model_final -# 20(label_class) + 1(background) -num_classes: 21 - -SSD: - backbone: MobileNet - multi_box_head: MultiBoxHead - output_decoder: - background_label: 0 - keep_top_k: 200 - nms_eta: 1.0 - nms_threshold: 0.45 - nms_top_k: 400 - score_threshold: 0.01 - -MobileNet: - norm_decay: 0. 
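The min_sizes/max_sizes in these MobileNet-SSD MultiBoxHead blocks follow a linear ramp of prior scales: with base_size 300, min_sizes[k] works out to 300 × (20 + 15k)%, and each layer's extra square prior lands at the following layer's min_size. The check below treats that ramp as an observation about the listed numbers, not as the library's internal formula:

```python
BASE = 300  # base_size of the 300x300 MobileNet-SSD configs
mins = [BASE * (20 + 15 * k) / 100 for k in range(6)]  # 20% .. 95% in 15% steps
# Each layer's extra square prior sits at the following layer's min_size; the
# first layer has no max_size prior in this config, the last is capped at BASE.
maxs = [[]] + mins[2:] + [float(BASE)]
print(mins)  # [60.0, 105.0, 150.0, 195.0, 240.0, 285.0]
print(maxs)  # [[], 150.0, 195.0, 240.0, 285.0, 300.0]
```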
- conv_group_scale: 1 - conv_learning_rate: 0.1 - extra_block_filters: [[256, 512], [128, 256], [128, 256], [64, 128]] - with_extra_blocks: true - -MultiBoxHead: - aspect_ratios: [[2.], [2., 3.], [2., 3.], [2., 3.], [2., 3.], [2., 3.]] - base_size: 300 - flip: true - max_ratio: 90 - max_sizes: [[], 150.0, 195.0, 240.0, 285.0, 300.0] - min_ratio: 20 - min_sizes: [60.0, 105.0, 150.0, 195.0, 240.0, 285.0] - offset: 0.5 - -LearningRate: - schedulers: - - !PiecewiseDecay - milestones: [10000, 15000, 20000, 25000] - values: [0.001, 0.0005, 0.00025, 0.0001, 0.00001] - -OptimizerBuilder: - optimizer: - momentum: 0.0 - type: RMSPropOptimizer - regularizer: - factor: 0.00005 - type: L2 - -TrainReader: - inputs_def: - image_shape: [3, 300, 300] - fields: ['image', 'gt_bbox', 'gt_class'] - dataset: - !VOCDataSet - anno_path: trainval.txt - dataset_dir: dataset/voc - use_default_label: true - sample_transforms: - - !DecodeImage - to_rgb: true - - !RandomDistort - brightness_lower: 0.875 - brightness_upper: 1.125 - is_order: true - - !RandomExpand - fill_value: [127.5, 127.5, 127.5] - - !RandomCrop - allow_no_crop: false - - !NormalizeBox {} - - !ResizeImage - interp: 1 - target_size: 300 - use_cv2: false - - !RandomFlipImage - is_normalized: true - - !Permute {} - - !NormalizeImage - is_scale: false - mean: [127.5, 127.5, 127.5] - std: [127.502231, 127.502231, 127.502231] - batch_size: 32 - shuffle: true - drop_last: true - worker_num: 8 - bufsize: 16 - use_process: true - -EvalReader: - inputs_def: - image_shape: [3, 300, 300] - fields: ['image', 'gt_bbox', 'gt_class', 'im_shape', 'im_id', 'is_difficult'] - dataset: - !VOCDataSet - anno_path: test.txt - dataset_dir: dataset/voc - use_default_label: true - sample_transforms: - - !DecodeImage - to_rgb: true - - !NormalizeBox {} - - !ResizeImage - interp: 1 - target_size: 300 - use_cv2: false - - !Permute {} - - !NormalizeImage - is_scale: false - mean: [127.5, 127.5, 127.5] - std: [127.502231, 127.502231, 127.502231] - batch_size: 32 - worker_num: 8 - bufsize: 16 - use_process: false - -TestReader: - inputs_def: - image_shape: [3,300,300] - fields: ['image', 'im_id', 'im_shape'] - dataset: - !ImageFolder - anno_path: test.txt - use_default_label: true - sample_transforms: - - !DecodeImage - to_rgb: true - - !ResizeImage - interp: 1 - max_size: 0 - target_size: 300 - use_cv2: true - - !Permute {} - - !NormalizeImage - is_scale: false - mean: [127.5, 127.5, 127.5] - std: [127.502231, 127.502231, 127.502231] - batch_size: 1 diff --git a/static/configs/ssd/ssd_vgg16_300.yml b/static/configs/ssd/ssd_vgg16_300.yml deleted file mode 100644 index 24100301041b17464d927fdf9a21bbc1959829f7..0000000000000000000000000000000000000000 --- a/static/configs/ssd/ssd_vgg16_300.yml +++ /dev/null @@ -1,149 +0,0 @@ -architecture: SSD -use_gpu: true -max_iters: 400000 -snapshot_iter: 10000 -log_iter: 20 -metric: COCO -pretrain_weights: https://paddle-imagenet-models-name.bj.bcebos.com/VGG16_caffe_pretrained.tar -save_dir: output -weights: output/ssd_vgg16_300/model_final -num_classes: 81 - -SSD: - backbone: VGG - multi_box_head: MultiBoxHead - output_decoder: - background_label: 0 - keep_top_k: 200 - nms_eta: 1.0 - nms_threshold: 0.45 - nms_top_k: 400 - score_threshold: 0.01 - -VGG: - depth: 16 - with_extra_blocks: true - normalizations: [20., -1, -1, -1, -1, -1] - -MultiBoxHead: - base_size: 300 - aspect_ratios: [[2.], [2., 3.], [2., 3.], [2., 3.], [2.], [2.]] - min_ratio: 15 - max_ratio: 90 - min_sizes: [30.0, 60.0, 111.0, 162.0, 213.0, 264.0] - max_sizes: [60.0, 111.0, 
162.0, 213.0, 264.0, 315.0] - steps: [8, 16, 32, 64, 100, 300] - offset: 0.5 - flip: true - kernel_size: 3 - pad: 1 - -LearningRate: - base_lr: 0.001 - schedulers: - - !PiecewiseDecay - gamma: 0.1 - milestones: [280000, 360000] - - !LinearWarmup - start_factor: 0.3333333333333333 - steps: 500 - -OptimizerBuilder: - optimizer: - momentum: 0.9 - type: Momentum - regularizer: - factor: 0.0005 - type: L2 - -TrainReader: - inputs_def: - image_shape: [3, 300, 300] - fields: ['image', 'gt_bbox', 'gt_class'] - dataset: - !COCODataSet - image_dir: train2017 - anno_path: annotations/instances_train2017.json - dataset_dir: dataset/coco - sample_transforms: - - !DecodeImage - to_rgb: true - - !RandomDistort - brightness_lower: 0.875 - brightness_upper: 1.125 - is_order: true - - !RandomExpand - fill_value: [104, 117, 123] - - !RandomCrop - allow_no_crop: true - - !NormalizeBox {} - - !ResizeImage - interp: 1 - target_size: 300 - use_cv2: false - - !RandomFlipImage - is_normalized: true - - !Permute - to_bgr: false - - !NormalizeImage - is_scale: false - mean: [104, 117, 123] - std: [1, 1, 1] - batch_size: 8 - shuffle: true - worker_num: 8 - bufsize: 16 - use_process: true - drop_empty: true - -EvalReader: - inputs_def: - image_shape: [3, 300, 300] - fields: ['image', 'gt_bbox', 'gt_class', 'im_shape', 'im_id'] - dataset: - !COCODataSet - image_dir: val2017 - anno_path: annotations/instances_val2017.json - dataset_dir: dataset/coco - sample_transforms: - - !DecodeImage - to_rgb: true - with_mixup: false - - !NormalizeBox {} - - !ResizeImage - interp: 1 - target_size: 300 - use_cv2: false - - !Permute - to_bgr: false - - !NormalizeImage - is_scale: false - mean: [104, 117, 123] - std: [1, 1, 1] - batch_size: 16 - worker_num: 8 - bufsize: 16 - -TestReader: - inputs_def: - image_shape: [3,300,300] - fields: ['image', 'im_id', 'im_shape'] - dataset: - !ImageFolder - anno_path: annotations/instances_val2017.json - sample_transforms: - - !DecodeImage - to_rgb: true - with_mixup: false - - !ResizeImage - interp: 1 - max_size: 0 - target_size: 300 - use_cv2: true - - !Permute - to_bgr: false - - !NormalizeImage - is_scale: false - mean: [104, 117, 123] - std: [1, 1, 1] - batch_size: 1 diff --git a/static/configs/ssd/ssd_vgg16_300_voc.yml b/static/configs/ssd/ssd_vgg16_300_voc.yml deleted file mode 100644 index 37d834780ef9070039b031f18abd5496643d0257..0000000000000000000000000000000000000000 --- a/static/configs/ssd/ssd_vgg16_300_voc.yml +++ /dev/null @@ -1,149 +0,0 @@ -architecture: SSD -use_gpu: true -max_iters: 120001 -snapshot_iter: 10000 -log_iter: 20 -metric: VOC -map_type: 11point -pretrain_weights: https://paddle-imagenet-models-name.bj.bcebos.com/VGG16_caffe_pretrained.tar -save_dir: output -weights: output/ssd_vgg16_300_voc/model_final -# 20(label_class) + 1(background) -num_classes: 21 - -SSD: - backbone: VGG - multi_box_head: MultiBoxHead - output_decoder: - background_label: 0 - keep_top_k: 200 - nms_eta: 1.0 - nms_threshold: 0.45 - nms_top_k: 400 - score_threshold: 0.01 - -VGG: - depth: 16 - with_extra_blocks: true - normalizations: [20., -1, -1, -1, -1, -1] - -MultiBoxHead: - base_size: 300 - aspect_ratios: [[2.], [2., 3.], [2., 3.], [2., 3.], [2.], [2.]] - min_ratio: 20 - max_ratio: 90 - min_sizes: [30.0, 60.0, 111.0, 162.0, 213.0, 264.0] - max_sizes: [60.0, 111.0, 162.0, 213.0, 264.0, 315.0] - steps: [8, 16, 32, 64, 100, 300] - offset: 0.5 - flip: true - min_max_aspect_ratios_order: true - kernel_size: 3 - pad: 1 - -LearningRate: - base_lr: 0.001 - schedulers: - - !PiecewiseDecay - gamma: 0.1 
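With flip: true, every aspect ratio a in MultiBoxHead also contributes its reciprocal 1/a, alongside a ratio-1 prior at min_size and, where a max_size is set, a second square prior at sqrt(min_size × max_size). So the VGG16-300 layers configured with [2., 3.] get 6 priors per location and those with [2.] get 4, per the usual SSD convention (hedged, since the exact prior enumeration lives inside the op):

```python
def priors_per_location(aspect_ratios, has_max_size=True, flip=True):
    """Count SSD priors per spatial location for one feature map."""
    ratios = {1.0}
    for a in aspect_ratios:
        ratios.add(a)
        if flip:
            ratios.add(1.0 / a)  # flip adds the reciprocal of each ratio
    # One prior per ratio at min_size, plus one extra square prior at
    # sqrt(min_size * max_size) when a max_size is configured.
    return len(ratios) + (1 if has_max_size else 0)

# ssd_vgg16_300 aspect_ratios: [[2.], [2., 3.], [2., 3.], [2., 3.], [2.], [2.]]
print(priors_per_location([2.0]))        # 4
print(priors_per_location([2.0, 3.0]))   # 6
```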
- milestones: [80000, 100000] - -OptimizerBuilder: - optimizer: - momentum: 0.9 - type: Momentum - regularizer: - factor: 0.0005 - type: L2 - -TrainReader: - inputs_def: - image_shape: [3, 300, 300] - fields: ['image', 'gt_bbox', 'gt_class'] - dataset: - !VOCDataSet - dataset_dir: dataset/voc - anno_path: trainval.txt - use_default_label: true - sample_transforms: - - !DecodeImage - to_rgb: true - - !RandomDistort - brightness_lower: 0.875 - brightness_upper: 1.125 - is_order: true - - !RandomExpand - fill_value: [104, 117, 123] - - !RandomCrop - allow_no_crop: true - - !NormalizeBox {} - - !ResizeImage - interp: 1 - target_size: 300 - use_cv2: false - - !RandomFlipImage - is_normalized: true - - !Permute - to_bgr: false - - !NormalizeImage - is_scale: false - mean: [104, 117, 123] - std: [1, 1, 1] - batch_size: 8 - shuffle: true - worker_num: 8 - bufsize: 16 - use_process: true - -EvalReader: - inputs_def: - image_shape: [3, 300, 300] - fields: ['image', 'gt_bbox', 'gt_class', 'im_shape', 'im_id', 'is_difficult'] - dataset: - !VOCDataSet - anno_path: test.txt - dataset_dir: dataset/voc - use_default_label: true - sample_transforms: - - !DecodeImage - to_rgb: true - with_mixup: false - - !NormalizeBox {} - - !ResizeImage - interp: 1 - target_size: 300 - use_cv2: false - - !Permute - to_bgr: false - - !NormalizeImage - is_scale: false - mean: [104, 117, 123] - std: [1, 1, 1] - batch_size: 32 - worker_num: 8 - bufsize: 16 - -TestReader: - inputs_def: - image_shape: [3,300,300] - fields: ['image', 'im_id', 'im_shape'] - dataset: - !ImageFolder - anno_path: test.txt - use_default_label: true - sample_transforms: - - !DecodeImage - to_rgb: true - with_mixup: false - - !ResizeImage - interp: 1 - max_size: 0 - target_size: 300 - use_cv2: true - - !Permute - to_bgr: false - - !NormalizeImage - is_scale: false - mean: [104, 117, 123] - std: [1, 1, 1] - batch_size: 1 diff --git a/static/configs/ssd/ssd_vgg16_512.yml b/static/configs/ssd/ssd_vgg16_512.yml deleted file mode 100644 index 0587fc626f4f72110bb7dc8eda7fb1bf2fe96ed7..0000000000000000000000000000000000000000 --- a/static/configs/ssd/ssd_vgg16_512.yml +++ /dev/null @@ -1,151 +0,0 @@ -architecture: SSD -use_gpu: true -max_iters: 400000 -snapshot_iter: 10000 -log_iter: 20 -metric: COCO -pretrain_weights: https://paddle-imagenet-models-name.bj.bcebos.com/VGG16_caffe_pretrained.tar -save_dir: output -weights: output/ssd_vgg16_512/model_final -num_classes: 81 - -SSD: - backbone: VGG - multi_box_head: MultiBoxHead - output_decoder: - background_label: 0 - keep_top_k: 200 - nms_eta: 1.0 - nms_threshold: 0.45 - nms_top_k: 400 - score_threshold: 0.01 - -VGG: - depth: 16 - with_extra_blocks: true - normalizations: [20., -1, -1, -1, -1, -1, -1] - extra_block_filters: [[256, 512, 1, 2, 3], [128, 256, 1, 2, 3], [128, 256, 1, 2, 3], [128, 256, 1, 2, 3], [128, 256, 1, 1, 4]] - - -MultiBoxHead: - base_size: 512 - aspect_ratios: [[2.], [2., 3.], [2., 3.], [2., 3.], [2., 3.], [2.], [2.]] - min_ratio: 15 - max_ratio: 90 - min_sizes: [20.0, 51.0, 133.0, 215.0, 296.0, 378.0, 460.0] - max_sizes: [51.0, 133.0, 215.0, 296.0, 378.0, 460.0, 542.0] - steps: [8, 16, 32, 64, 128, 256, 512] - offset: 0.5 - flip: true - kernel_size: 3 - pad: 1 - -LearningRate: - base_lr: 0.001 - schedulers: - - !PiecewiseDecay - gamma: 0.1 - milestones: [280000, 360000] - - !LinearWarmup - start_factor: 0.3333333333333333 - steps: 500 - -OptimizerBuilder: - optimizer: - momentum: 0.9 - type: Momentum - regularizer: - factor: 0.0005 - type: L2 - -TrainReader: - inputs_def: - image_shape: 
[3, 512, 512] - fields: ['image', 'gt_bbox', 'gt_class'] - dataset: - !COCODataSet - image_dir: train2017 - anno_path: annotations/instances_train2017.json - dataset_dir: dataset/coco - sample_transforms: - - !DecodeImage - to_rgb: true - with_mixup: false - - !RandomDistort - brightness_lower: 0.875 - brightness_upper: 1.125 - is_order: true - - !RandomExpand - fill_value: [104, 117, 123] - - !RandomCrop - allow_no_crop: true - - !NormalizeBox {} - - !ResizeImage - interp: 1 - target_size: 512 - use_cv2: false - - !RandomFlipImage - is_normalized: true - - !Permute - to_bgr: false - - !NormalizeImage - is_scale: false - mean: [104, 117, 123] - std: [1, 1, 1] - batch_size: 8 - shuffle: true - worker_num: 8 - bufsize: 16 - use_process: true - -EvalReader: - inputs_def: - image_shape: [3,512,512] - fields: ['image', 'gt_bbox', 'gt_class', 'im_shape', 'im_id'] - dataset: - !COCODataSet - image_dir: val2017 - anno_path: annotations/instances_val2017.json - dataset_dir: dataset/coco - sample_transforms: - - !DecodeImage - to_rgb: true - with_mixup: false - - !ResizeImage - interp: 1 - target_size: 512 - use_cv2: false - - !Permute - to_bgr: false - - !NormalizeImage - is_scale: false - mean: [104, 117, 123] - std: [1, 1, 1] - batch_size: 8 - worker_num: 8 - bufsize: 16 - drop_empty: false - -TestReader: - inputs_def: - image_shape: [3,512,512] - fields: ['image', 'im_id', 'im_shape'] - dataset: - !ImageFolder - anno_path: annotations/instances_val2017.json - sample_transforms: - - !DecodeImage - to_rgb: true - with_mixup: false - - !ResizeImage - interp: 1 - max_size: 0 - target_size: 512 - use_cv2: true - - !Permute - to_bgr: false - - !NormalizeImage - is_scale: false - mean: [104, 117, 123] - std: [1, 1, 1] - batch_size: 1 diff --git a/static/configs/ssd/ssd_vgg16_512_voc.yml b/static/configs/ssd/ssd_vgg16_512_voc.yml deleted file mode 100644 index e9dc59beb8026ae47a06ae508895925589080e82..0000000000000000000000000000000000000000 --- a/static/configs/ssd/ssd_vgg16_512_voc.yml +++ /dev/null @@ -1,153 +0,0 @@ -architecture: SSD -use_gpu: true -max_iters: 120000 -snapshot_iter: 10000 -log_iter: 20 -metric: VOC -map_type: 11point -pretrain_weights: https://paddle-imagenet-models-name.bj.bcebos.com/VGG16_caffe_pretrained.tar -save_dir: output -weights: output/ssd_vgg16_512_voc/model_final -# 20(label_class) + 1(background) -num_classes: 21 - -SSD: - backbone: VGG - multi_box_head: MultiBoxHead - output_decoder: - background_label: 0 - keep_top_k: 200 - nms_eta: 1.0 - nms_threshold: 0.45 - nms_top_k: 400 - score_threshold: 0.01 - -VGG: - depth: 16 - with_extra_blocks: true - normalizations: [20., -1, -1, -1, -1, -1, -1] - extra_block_filters: [[256, 512, 1, 2, 3], [128, 256, 1, 2, 3], [128, 256, 1, 2, 3], [128, 256, 1, 2, 3], [128, 256, 1, 1, 4]] - - -MultiBoxHead: - base_size: 512 - aspect_ratios: [[2.], [2., 3.], [2., 3.], [2., 3.], [2., 3.], [2.], [2.]] - min_ratio: 20 - max_ratio: 90 - min_sizes: [20.0, 51.0, 133.0, 215.0, 296.0, 378.0, 460.0] - max_sizes: [51.0, 133.0, 215.0, 296.0, 378.0, 460.0, 542.0] - steps: [8, 16, 32, 64, 128, 256, 512] - offset: 0.5 - flip: true - kernel_size: 3 - pad: 1 - -LearningRate: - base_lr: 0.001 - schedulers: - - !PiecewiseDecay - gamma: 0.1 - milestones: [80000, 100000] - - !LinearWarmup - start_factor: 0.3333333333333333 - steps: 500 - -OptimizerBuilder: - optimizer: - momentum: 0.9 - type: Momentum - regularizer: - factor: 0.0005 - type: L2 - -TrainReader: - inputs_def: - image_shape: [3, 512, 512] - fields: ['image', 'gt_bbox', 'gt_class'] - dataset: - 
!VOCDataSet - dataset_dir: dataset/voc - anno_path: trainval.txt - use_default_label: true - sample_transforms: - - !DecodeImage - to_rgb: true - - !RandomDistort - brightness_lower: 0.875 - brightness_upper: 1.125 - is_order: true - - !RandomExpand - fill_value: [123, 117, 104] - - !RandomCrop - allow_no_crop: true - - !NormalizeBox {} - - !ResizeImage - interp: 1 - target_size: 512 - use_cv2: false - - !RandomFlipImage - is_normalized: true - - !Permute - to_bgr: false - - !NormalizeImage - is_scale: false - mean: [123, 117, 104] - std: [1, 1, 1] - batch_size: 8 - shuffle: true - worker_num: 8 - bufsize: 16 - use_process: true - -EvalReader: - inputs_def: - image_shape: [3, 512, 512] - fields: ['image', 'gt_bbox', 'gt_class', 'im_shape', 'im_id', 'is_difficult'] - dataset: - !VOCDataSet - anno_path: test.txt - dataset_dir: dataset/voc - use_default_label: true - sample_transforms: - - !DecodeImage - to_rgb: true - with_mixup: false - - !NormalizeBox {} - - !ResizeImage - interp: 1 - target_size: 512 - use_cv2: false - - !Permute - to_bgr: false - - !NormalizeImage - is_scale: false - mean: [123, 117, 104] - std: [1, 1, 1] - batch_size: 32 - worker_num: 8 - bufsize: 16 - -TestReader: - inputs_def: - image_shape: [3,512,512] - fields: ['image', 'im_id', 'im_shape'] - dataset: - !ImageFolder - anno_path: test.txt - use_default_label: true - sample_transforms: - - !DecodeImage - to_rgb: true - with_mixup: false - - !ResizeImage - interp: 1 - max_size: 0 - target_size: 512 - use_cv2: true - - !Permute - to_bgr: false - - !NormalizeImage - is_scale: false - mean: [123, 117, 104] - std: [1, 1, 1] - batch_size: 1 diff --git a/static/configs/ssd/ssdlite_ghostnet.yml b/static/configs/ssd/ssdlite_ghostnet.yml deleted file mode 100644 index c76eb87833f08bc4a3dbf2637deeedb1be192789..0000000000000000000000000000000000000000 --- a/static/configs/ssd/ssdlite_ghostnet.yml +++ /dev/null @@ -1,161 +0,0 @@ -architecture: SSD -use_gpu: true -max_iters: 400000 -snapshot_iter: 20000 -log_iter: 20 -metric: COCO -pretrain_weights: https://paddle-imagenet-models-name.bj.bcebos.com/GhostNet_x1_3_ssld_pretrained.tar -save_dir: output -weights: output/ssdlite_ghostnet/model_final -# 80(label_class) + 1(background) -num_classes: 81 - -SSD: - backbone: GhostNet - multi_box_head: SSDLiteMultiBoxHead - output_decoder: - background_label: 0 - keep_top_k: 200 - nms_eta: 1.0 - nms_threshold: 0.45 - nms_top_k: 400 - score_threshold: 0.01 - - -GhostNet: - scale: 1.3 - extra_block_filters: [[256, 512], [128, 256], [128, 256], [64, 128]] - feature_maps: [5, 7, 8, 9, 10, 11] - conv_decay: 0.00004 - lr_mult_list: [0.25, 0.25, 0.5, 0.5, 0.75] - -SSDLiteMultiBoxHead: - aspect_ratios: [[2.], [2., 3.], [2., 3.], [2., 3.], [2., 3.], [2., 3.]] - base_size: 320 - steps: [16, 32, 64, 107, 160, 320] - flip: true - clip: true - max_ratio: 95 - min_ratio: 20 - offset: 0.5 - conv_decay: 0.00004 - -LearningRate: - base_lr: 0.2 - schedulers: - - !CosineDecay - max_iters: 400000 - - !LinearWarmup - start_factor: 0.33333 - steps: 2000 - -OptimizerBuilder: - optimizer: - momentum: 0.9 - type: Momentum - regularizer: - factor: 0.0005 - type: L2 - -TrainReader: - inputs_def: - image_shape: [3, 320, 320] - fields: ['image', 'gt_bbox', 'gt_class'] - dataset: - !COCODataSet - dataset_dir: dataset/coco - anno_path: annotations/instances_train2017.json - image_dir: train2017 - sample_transforms: - - !DecodeImage - to_rgb: true - - !RandomDistort - brightness_lower: 0.875 - brightness_upper: 1.125 - is_order: true - - !RandomExpand - fill_value: 
[123.675, 116.28, 103.53] - - !RandomCrop - allow_no_crop: false - - !NormalizeBox {} - - !ResizeImage - interp: 1 - target_size: 320 - use_cv2: false - - !RandomFlipImage - is_normalized: false - - !NormalizeImage - mean: [0.485, 0.456, 0.406] - std: [0.229, 0.224, 0.225] - is_scale: true - is_channel_first: false - - !Permute - to_bgr: false - channel_first: true - batch_size: 64 - shuffle: true - drop_last: true - # Number of working threads/processes. To speed up, can be set to 16 or 32 etc. - worker_num: 8 - # Size of shared memory used in result queue. After increasing `worker_num`, need expand `memsize`. - memsize: 8G - # Buffer size for multi threads/processes.one instance in buffer is one batch data. - # To speed up, can be set to 64 or 128 etc. - bufsize: 32 - use_process: true - - -EvalReader: - inputs_def: - image_shape: [3, 320, 320] - fields: ['image', 'gt_bbox', 'gt_class', 'im_shape', 'im_id'] - dataset: - !COCODataSet - dataset_dir: dataset/coco - anno_path: annotations/instances_val2017.json - image_dir: val2017 - sample_transforms: - - !DecodeImage - to_rgb: true - - !NormalizeBox {} - - !ResizeImage - interp: 1 - target_size: 320 - use_cv2: false - - !NormalizeImage - mean: [0.485, 0.456, 0.406] - std: [0.229, 0.224, 0.225] - is_scale: true - is_channel_first: false - - !Permute - to_bgr: false - channel_first: True - batch_size: 8 - worker_num: 8 - bufsize: 32 - use_process: false - -TestReader: - inputs_def: - image_shape: [3,320,320] - fields: ['image', 'im_id', 'im_shape'] - dataset: - !ImageFolder - anno_path: annotations/instances_val2017.json - sample_transforms: - - !DecodeImage - to_rgb: true - - !ResizeImage - interp: 1 - max_size: 0 - target_size: 320 - use_cv2: true - - !NormalizeImage - mean: [0.485, 0.456, 0.406] - std: [0.229, 0.224, 0.225] - is_scale: true - is_channel_first: false - - !Permute - to_bgr: false - channel_first: True - batch_size: 1 diff --git a/static/configs/ssd/ssdlite_mobilenet_v1.yml b/static/configs/ssd/ssdlite_mobilenet_v1.yml deleted file mode 100644 index c118bc2e489e2808fea798ebc33a04f70afb4c91..0000000000000000000000000000000000000000 --- a/static/configs/ssd/ssdlite_mobilenet_v1.yml +++ /dev/null @@ -1,158 +0,0 @@ -architecture: SSD -use_gpu: true -max_iters: 400000 -snapshot_iter: 20000 -log_iter: 20 -metric: COCO -pretrain_weights: https://paddle-imagenet-models-name.bj.bcebos.com/MobileNetV1_ssld_pretrained.tar -save_dir: output -weights: output/ssdlite_mobilenet_v1/model_final -num_classes: 81 - -SSD: - backbone: MobileNet - multi_box_head: SSDLiteMultiBoxHead - output_decoder: - background_label: 0 - keep_top_k: 200 - nms_eta: 1.0 - nms_threshold: 0.45 - nms_top_k: 400 - score_threshold: 0.01 - -MobileNet: - conv_decay: 0.00004 - conv_group_scale: 1 - extra_block_filters: [[256, 512], [128, 256], [128, 256], [64, 128]] - with_extra_blocks: true - -SSDLiteMultiBoxHead: - aspect_ratios: [[2.], [2., 3.], [2., 3.], [2., 3.], [2., 3.], [2., 3.]] - base_size: 300 - steps: [16, 32, 64, 100, 150, 300] - flip: true - clip: true - max_ratio: 95 - min_ratio: 20 - offset: 0.5 - conv_decay: 0.00004 - -LearningRate: - base_lr: 0.4 - schedulers: - - !CosineDecay - max_iters: 400000 - - !LinearWarmup - start_factor: 0.33333 - steps: 2000 - -OptimizerBuilder: - optimizer: - momentum: 0.9 - type: Momentum - regularizer: - factor: 0.0005 - type: L2 - -TrainReader: - inputs_def: - image_shape: [3, 300, 300] - fields: ['image', 'gt_bbox', 'gt_class'] - dataset: - !COCODataSet - dataset_dir: dataset/coco - anno_path: 
annotations/instances_train2017.json - image_dir: train2017 - sample_transforms: - - !DecodeImage - to_rgb: true - - !RandomDistort - brightness_lower: 0.875 - brightness_upper: 1.125 - is_order: true - - !RandomExpand - fill_value: [123.675, 116.28, 103.53] - - !RandomCrop - allow_no_crop: false - - !NormalizeBox {} - - !ResizeImage - interp: 1 - target_size: 300 - use_cv2: false - - !RandomFlipImage - is_normalized: false - - !NormalizeImage - mean: [0.485, 0.456, 0.406] - std: [0.229, 0.224, 0.225] - is_scale: true - is_channel_first: false - - !Permute - to_bgr: false - channel_first: true - batch_size: 64 - shuffle: true - drop_last: true - # Number of working threads/processes. To speed up, can be set to 16 or 32 etc. - worker_num: 8 - # Size of shared memory used in result queue. After increasing `worker_num`, need expand `memsize`. - memsize: 8G - # Buffer size for multi threads/processes.one instance in buffer is one batch data. - # To speed up, can be set to 64 or 128 etc. - bufsize: 32 - use_process: true - - -EvalReader: - inputs_def: - image_shape: [3, 300, 300] - fields: ['image', 'gt_bbox', 'gt_class', 'im_shape', 'im_id'] - dataset: - !COCODataSet - dataset_dir: dataset/coco - anno_path: annotations/instances_val2017.json - image_dir: val2017 - sample_transforms: - - !DecodeImage - to_rgb: true - - !NormalizeBox {} - - !ResizeImage - interp: 1 - target_size: 300 - use_cv2: false - - !NormalizeImage - mean: [0.485, 0.456, 0.406] - std: [0.229, 0.224, 0.225] - is_scale: true - is_channel_first: false - - !Permute - to_bgr: false - channel_first: True - batch_size: 8 - worker_num: 8 - bufsize: 32 - use_process: false - -TestReader: - inputs_def: - image_shape: [3,300,300] - fields: ['image', 'im_id', 'im_shape'] - dataset: - !ImageFolder - anno_path: annotations/instances_val2017.json - sample_transforms: - - !DecodeImage - to_rgb: true - - !ResizeImage - interp: 1 - max_size: 0 - target_size: 300 - use_cv2: true - - !NormalizeImage - mean: [0.485, 0.456, 0.406] - std: [0.229, 0.224, 0.225] - is_scale: true - is_channel_first: false - - !Permute - to_bgr: false - channel_first: True - batch_size: 1 diff --git a/static/configs/ssd/ssdlite_mobilenet_v3_large.yml b/static/configs/ssd/ssdlite_mobilenet_v3_large.yml deleted file mode 100644 index d75693575867c872496881efde9b829562d920e5..0000000000000000000000000000000000000000 --- a/static/configs/ssd/ssdlite_mobilenet_v3_large.yml +++ /dev/null @@ -1,162 +0,0 @@ -architecture: SSD -use_gpu: true -max_iters: 400000 -snapshot_iter: 20000 -log_iter: 20 -metric: COCO -pretrain_weights: https://paddle-imagenet-models-name.bj.bcebos.com/MobileNetV3_large_x1_0_ssld_pretrained.tar -save_dir: output -weights: output/ssdlite_mobilenet_v3_large/model_final -# 80(label_class) + 1(background) -num_classes: 81 - -SSD: - backbone: MobileNetV3 - multi_box_head: SSDLiteMultiBoxHead - output_decoder: - background_label: 0 - keep_top_k: 200 - nms_eta: 1.0 - nms_threshold: 0.45 - nms_top_k: 400 - score_threshold: 0.01 - -MobileNetV3: - scale: 1.0 - model_name: large - extra_block_filters: [[256, 512], [128, 256], [128, 256], [64, 128]] - feature_maps: [5, 7, 8, 9, 10, 11] - lr_mult_list: [0.25, 0.25, 0.5, 0.5, 0.75] - conv_decay: 0.00004 - multiplier: 0.5 - -SSDLiteMultiBoxHead: - aspect_ratios: [[2.], [2., 3.], [2., 3.], [2., 3.], [2., 3.], [2., 3.]] - base_size: 320 - steps: [16, 32, 64, 107, 160, 320] - flip: true - clip: true - max_ratio: 95 - min_ratio: 20 - offset: 0.5 - conv_decay: 0.00004 - -LearningRate: - base_lr: 0.4 - schedulers: - - 
!CosineDecay - max_iters: 400000 - - !LinearWarmup - start_factor: 0.33333 - steps: 2000 - -OptimizerBuilder: - optimizer: - momentum: 0.9 - type: Momentum - regularizer: - factor: 0.0005 - type: L2 - -TrainReader: - inputs_def: - image_shape: [3, 320, 320] - fields: ['image', 'gt_bbox', 'gt_class'] - dataset: - !COCODataSet - dataset_dir: dataset/coco - anno_path: annotations/instances_train2017.json - image_dir: train2017 - sample_transforms: - - !DecodeImage - to_rgb: true - - !RandomDistort - brightness_lower: 0.875 - brightness_upper: 1.125 - is_order: true - - !RandomExpand - fill_value: [123.675, 116.28, 103.53] - - !RandomCrop - allow_no_crop: false - - !NormalizeBox {} - - !ResizeImage - interp: 1 - target_size: 320 - use_cv2: false - - !RandomFlipImage - is_normalized: false - - !NormalizeImage - mean: [0.485, 0.456, 0.406] - std: [0.229, 0.224, 0.225] - is_scale: true - is_channel_first: false - - !Permute - to_bgr: false - channel_first: true - batch_size: 64 - shuffle: true - drop_last: true - # Number of working threads/processes. To speed up, can be set to 16 or 32 etc. - worker_num: 8 - # Size of shared memory used in result queue. After increasing `worker_num`, need expand `memsize`. - memsize: 8G - # Buffer size for multi threads/processes.one instance in buffer is one batch data. - # To speed up, can be set to 64 or 128 etc. - bufsize: 32 - use_process: true - - -EvalReader: - inputs_def: - image_shape: [3, 320, 320] - fields: ['image', 'gt_bbox', 'gt_class', 'im_shape', 'im_id'] - dataset: - !COCODataSet - dataset_dir: dataset/coco - anno_path: annotations/instances_val2017.json - image_dir: val2017 - sample_transforms: - - !DecodeImage - to_rgb: true - - !NormalizeBox {} - - !ResizeImage - interp: 1 - target_size: 320 - use_cv2: false - - !NormalizeImage - mean: [0.485, 0.456, 0.406] - std: [0.229, 0.224, 0.225] - is_scale: true - is_channel_first: false - - !Permute - to_bgr: false - channel_first: True - batch_size: 8 - worker_num: 8 - bufsize: 32 - use_process: false - -TestReader: - inputs_def: - image_shape: [3,320,320] - fields: ['image', 'im_id', 'im_shape'] - dataset: - !ImageFolder - anno_path: annotations/instances_val2017.json - sample_transforms: - - !DecodeImage - to_rgb: true - - !ResizeImage - interp: 1 - max_size: 0 - target_size: 320 - use_cv2: true - - !NormalizeImage - mean: [0.485, 0.456, 0.406] - std: [0.229, 0.224, 0.225] - is_scale: true - is_channel_first: false - - !Permute - to_bgr: false - channel_first: True - batch_size: 1 diff --git a/static/configs/ssd/ssdlite_mobilenet_v3_large_fpn.yml b/static/configs/ssd/ssdlite_mobilenet_v3_large_fpn.yml deleted file mode 100644 index 02abac3aa43bbd502a0f483c969710e0f50a458f..0000000000000000000000000000000000000000 --- a/static/configs/ssd/ssdlite_mobilenet_v3_large_fpn.yml +++ /dev/null @@ -1,169 +0,0 @@ -architecture: SSD -use_gpu: true -max_iters: 400000 -snapshot_iter: 20000 -log_iter: 20 -metric: COCO -pretrain_weights: https://paddle-imagenet-models-name.bj.bcebos.com/MobileNetV3_large_x1_0_ssld_pretrained.tar -save_dir: output -weights: output/ssdlite_mobilenet_v3_large_fpn/model_final -# 80(label_class) + 1(background) -num_classes: 81 - -SSD: - backbone: MobileNetV3 - fpn: FPN - multi_box_head: SSDLiteMultiBoxHead - output_decoder: - background_label: 0 - keep_top_k: 200 - nms_eta: 1.0 - nms_threshold: 0.45 - nms_top_k: 400 - score_threshold: 0.01 - -FPN: - num_chan: 256 - max_level: 7 - norm_type: bn - norm_decay: 0.00004 - reverse_out: true - -MobileNetV3: - scale: 1.0 - model_name: large 
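The ssdlite configs swap PiecewiseDecay for a half-cosine decay over the whole 400k-iteration run, warmed up linearly from a third of base_lr for 2000 steps. A standalone sketch of that shape, assuming the common cosine-with-warmup form (boundary handling in Paddle's scheduler may differ slightly):

```python
import math

def ssdlite_lr(step, base_lr=0.4, max_iters=400000,
               warmup_steps=2000, start_factor=0.33333):
    """Sketch of CosineDecay + LinearWarmup as configured above."""
    if step < warmup_steps:
        alpha = step / warmup_steps
        return base_lr * (start_factor + (1.0 - start_factor) * alpha)
    # Half-cosine from base_lr down to 0 over the full training run.
    return base_lr * 0.5 * (1.0 + math.cos(math.pi * step / max_iters))

print(round(ssdlite_lr(0), 5))        # 0.13333 (a third of base_lr)
print(round(ssdlite_lr(200000), 5))   # 0.2     (midpoint of the cosine)
print(round(ssdlite_lr(400000), 5))   # 0.0
```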
- extra_block_filters: [[256, 512], [128, 256], [128, 256], [64, 128]] - feature_maps: [5, 7, 8, 9, 10, 11] - lr_mult_list: [0.25, 0.25, 0.5, 0.5, 0.75] - conv_decay: 0.00004 - -SSDLiteMultiBoxHead: - aspect_ratios: [[2.], [2., 3.], [2., 3.], [2., 3.], [2., 3.], [2., 3.]] - base_size: 320 - steps: [16, 32, 64, 107, 160, 320] - flip: true - clip: true - max_ratio: 95 - min_ratio: 20 - offset: 0.5 - conv_decay: 0.00004 - -LearningRate: - base_lr: 0.4 - schedulers: - - !CosineDecay - max_iters: 400000 - - !LinearWarmup - start_factor: 0.33333 - steps: 2000 - -OptimizerBuilder: - optimizer: - momentum: 0.9 - type: Momentum - regularizer: - factor: 0.0005 - type: L2 - -TrainReader: - inputs_def: - image_shape: [3, 320, 320] - fields: ['image', 'gt_bbox', 'gt_class'] - dataset: - !COCODataSet - dataset_dir: dataset/coco - anno_path: annotations/instances_train2017.json - image_dir: train2017 - sample_transforms: - - !DecodeImage - to_rgb: true - - !RandomDistort - brightness_lower: 0.875 - brightness_upper: 1.125 - is_order: true - - !RandomExpand - fill_value: [123.675, 116.28, 103.53] - - !RandomCrop - allow_no_crop: false - - !NormalizeBox {} - - !ResizeImage - interp: 1 - target_size: 320 - use_cv2: false - - !RandomFlipImage - is_normalized: false - - !NormalizeImage - mean: [0.485, 0.456, 0.406] - std: [0.229, 0.224, 0.225] - is_scale: true - is_channel_first: false - - !Permute - to_bgr: false - channel_first: true - batch_size: 64 - shuffle: true - drop_last: true - # Number of worker threads/processes; can be raised to 16 or 32, etc., to speed up loading. - worker_num: 8 - # Size of the shared memory used by the result queue; after increasing `worker_num`, expand `memsize` as well. - memsize: 8G - # Buffer size for the worker threads/processes; one buffered instance is one batch of data. - # Can be raised to 64 or 128, etc., to speed up.
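Note the two normalization conventions side by side in this file: the VGG-SSD configs keep 0–255 pixels and subtract a per-channel mean with std 1, while these ssdlite readers set is_scale: true and apply ImageNet mean/std to 0–1 pixels. A quick sketch of the is_scale: true branch:

```python
import numpy as np

MEAN = np.array([0.485, 0.456, 0.406], dtype=np.float32)
STD = np.array([0.229, 0.224, 0.225], dtype=np.float32)

def normalize(img_u8):
    """is_scale: true — divide by 255, then apply ImageNet mean/std per channel."""
    x = img_u8.astype(np.float32) / 255.0
    return (x - MEAN) / STD  # HWC layout; Permute moves channels first afterwards

img = np.full((320, 320, 3), 128, dtype=np.uint8)
print(normalize(img)[0, 0])  # ≈ [0.074, 0.205, 0.426] for a mid-gray pixel
```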
- bufsize: 32 - use_process: true - - -EvalReader: - inputs_def: - image_shape: [3, 320, 320] - fields: ['image', 'gt_bbox', 'gt_class', 'im_shape', 'im_id'] - dataset: - !COCODataSet - dataset_dir: dataset/coco - anno_path: annotations/instances_val2017.json - image_dir: val2017 - sample_transforms: - - !DecodeImage - to_rgb: true - - !NormalizeBox {} - - !ResizeImage - interp: 1 - target_size: 320 - use_cv2: false - - !NormalizeImage - mean: [0.485, 0.456, 0.406] - std: [0.229, 0.224, 0.225] - is_scale: true - is_channel_first: false - - !Permute - to_bgr: false - channel_first: True - batch_size: 8 - worker_num: 8 - bufsize: 32 - use_process: false - -TestReader: - inputs_def: - image_shape: [3,320,320] - fields: ['image', 'im_id', 'im_shape'] - dataset: - !ImageFolder - anno_path: annotations/instances_val2017.json - sample_transforms: - - !DecodeImage - to_rgb: true - - !ResizeImage - interp: 1 - max_size: 0 - target_size: 320 - use_cv2: true - - !NormalizeImage - mean: [0.485, 0.456, 0.406] - std: [0.229, 0.224, 0.225] - is_scale: true - is_channel_first: false - - !Permute - to_bgr: false - channel_first: True - batch_size: 1 diff --git a/static/configs/ssd/ssdlite_mobilenet_v3_small.yml b/static/configs/ssd/ssdlite_mobilenet_v3_small.yml deleted file mode 100644 index 09dc73f368eae46d06b3f8c18ab79aa008bf38b8..0000000000000000000000000000000000000000 --- a/static/configs/ssd/ssdlite_mobilenet_v3_small.yml +++ /dev/null @@ -1,162 +0,0 @@ -architecture: SSD -use_gpu: true -max_iters: 400000 -snapshot_iter: 20000 -log_iter: 20 -metric: COCO -pretrain_weights: https://paddle-imagenet-models-name.bj.bcebos.com/MobileNetV3_small_x1_0_ssld_pretrained.tar -save_dir: output -weights: output/ssd_mobilenet_v3_small/model_final -# 80(label_class) + 1(background) -num_classes: 81 - -SSD: - backbone: MobileNetV3 - multi_box_head: SSDLiteMultiBoxHead - output_decoder: - background_label: 0 - keep_top_k: 200 - nms_eta: 1.0 - nms_threshold: 0.45 - nms_top_k: 400 - score_threshold: 0.01 - -MobileNetV3: - scale: 1.0 - model_name: small - extra_block_filters: [[256, 512], [128, 256], [128, 256], [64, 128]] - feature_maps: [5, 7, 8, 9, 10, 11] - lr_mult_list: [0.25, 0.25, 0.5, 0.5, 0.75] - conv_decay: 0.00004 - multiplier: 0.5 - -SSDLiteMultiBoxHead: - aspect_ratios: [[2.], [2., 3.], [2., 3.], [2., 3.], [2., 3.], [2., 3.]] - base_size: 320 - steps: [16, 32, 64, 107, 160, 320] - flip: true - clip: true - max_ratio: 95 - min_ratio: 20 - offset: 0.5 - conv_decay: 0.00004 - -LearningRate: - base_lr: 0.4 - schedulers: - - !CosineDecay - max_iters: 400000 - - !LinearWarmup - start_factor: 0.33333 - steps: 2000 - -OptimizerBuilder: - optimizer: - momentum: 0.9 - type: Momentum - regularizer: - factor: 0.0005 - type: L2 - -TrainReader: - inputs_def: - image_shape: [3, 320, 320] - fields: ['image', 'gt_bbox', 'gt_class'] - dataset: - !COCODataSet - dataset_dir: dataset/coco - anno_path: annotations/instances_train2017.json - image_dir: train2017 - sample_transforms: - - !DecodeImage - to_rgb: true - - !RandomDistort - brightness_lower: 0.875 - brightness_upper: 1.125 - is_order: true - - !RandomExpand - fill_value: [123.675, 116.28, 103.53] - - !RandomCrop - allow_no_crop: false - - !NormalizeBox {} - - !ResizeImage - interp: 1 - target_size: 320 - use_cv2: false - - !RandomFlipImage - is_normalized: false - - !NormalizeImage - mean: [0.485, 0.456, 0.406] - std: [0.229, 0.224, 0.225] - is_scale: true - is_channel_first: false - - !Permute - to_bgr: false - channel_first: true - batch_size: 64 - shuffle: true - 
drop_last: true - # Number of working threads/processes. To speed up, can be set to 16 or 32 etc. - worker_num: 8 - # Size of shared memory used in result queue. After increasing `worker_num`, need expand `memsize`. - memsize: 8G - # Buffer size for multi threads/processes.one instance in buffer is one batch data. - # To speed up, can be set to 64 or 128 etc. - bufsize: 32 - use_process: true - - -EvalReader: - inputs_def: - image_shape: [3, 320, 320] - fields: ['image', 'gt_bbox', 'gt_class', 'im_shape', 'im_id'] - dataset: - !COCODataSet - dataset_dir: dataset/coco - anno_path: annotations/instances_val2017.json - image_dir: val2017 - sample_transforms: - - !DecodeImage - to_rgb: true - - !NormalizeBox {} - - !ResizeImage - interp: 1 - target_size: 320 - use_cv2: false - - !NormalizeImage - mean: [0.485, 0.456, 0.406] - std: [0.229, 0.224, 0.225] - is_scale: true - is_channel_first: false - - !Permute - to_bgr: false - channel_first: True - batch_size: 8 - worker_num: 8 - bufsize: 32 - use_process: false - -TestReader: - inputs_def: - image_shape: [3,320,320] - fields: ['image', 'im_id', 'im_shape'] - dataset: - !ImageFolder - anno_path: annotations/instances_val2017.json - sample_transforms: - - !DecodeImage - to_rgb: true - - !ResizeImage - interp: 1 - max_size: 0 - target_size: 320 - use_cv2: true - - !NormalizeImage - mean: [0.485, 0.456, 0.406] - std: [0.229, 0.224, 0.225] - is_scale: true - is_channel_first: false - - !Permute - to_bgr: false - channel_first: True - batch_size: 1 diff --git a/static/configs/ssd/ssdlite_mobilenet_v3_small_fpn.yml b/static/configs/ssd/ssdlite_mobilenet_v3_small_fpn.yml deleted file mode 100644 index a8270ca220577b18cb68324fc5695818b8ecaad4..0000000000000000000000000000000000000000 --- a/static/configs/ssd/ssdlite_mobilenet_v3_small_fpn.yml +++ /dev/null @@ -1,169 +0,0 @@ -architecture: SSD -use_gpu: true -max_iters: 400000 -snapshot_iter: 20000 -log_iter: 20 -metric: COCO -pretrain_weights: https://paddle-imagenet-models-name.bj.bcebos.com/MobileNetV3_small_x1_0_ssld_pretrained.tar -save_dir: output -weights: output/ssdlite_mobilenet_v3_small_fpn/model_final -# 80(label_class) + 1(background) -num_classes: 81 - -SSD: - backbone: MobileNetV3 - fpn: FPN - multi_box_head: SSDLiteMultiBoxHead - output_decoder: - background_label: 0 - keep_top_k: 200 - nms_eta: 1.0 - nms_threshold: 0.45 - nms_top_k: 400 - score_threshold: 0.01 - -FPN: - num_chan: 256 - max_level: 7 - norm_type: bn - norm_decay: 0.00004 - reverse_out: true - -MobileNetV3: - scale: 1.0 - model_name: small - extra_block_filters: [[256, 512], [128, 256], [128, 256], [64, 128]] - feature_maps: [5, 7, 8, 9, 10, 11] - lr_mult_list: [0.25, 0.25, 0.5, 0.5, 0.75] - conv_decay: 0.00004 - -SSDLiteMultiBoxHead: - aspect_ratios: [[2.], [2., 3.], [2., 3.], [2., 3.], [2., 3.], [2., 3.]] - base_size: 320 - steps: [16, 32, 64, 107, 160, 320] - flip: true - clip: true - max_ratio: 95 - min_ratio: 20 - offset: 0.5 - conv_decay: 0.00004 - -LearningRate: - base_lr: 0.4 - schedulers: - - !CosineDecay - max_iters: 400000 - - !LinearWarmup - start_factor: 0.33333 - steps: 2000 - -OptimizerBuilder: - optimizer: - momentum: 0.9 - type: Momentum - regularizer: - factor: 0.0005 - type: L2 - -TrainReader: - inputs_def: - image_shape: [3, 320, 320] - fields: ['image', 'gt_bbox', 'gt_class'] - dataset: - !COCODataSet - dataset_dir: dataset/coco - anno_path: annotations/instances_train2017.json - image_dir: train2017 - sample_transforms: - - !DecodeImage - to_rgb: true - - !RandomDistort - brightness_lower: 0.875 - 
brightness_upper: 1.125 - is_order: true - - !RandomExpand - fill_value: [123.675, 116.28, 103.53] - - !RandomCrop - allow_no_crop: false - - !NormalizeBox {} - - !ResizeImage - interp: 1 - target_size: 320 - use_cv2: false - - !RandomFlipImage - is_normalized: false - - !NormalizeImage - mean: [0.485, 0.456, 0.406] - std: [0.229, 0.224, 0.225] - is_scale: true - is_channel_first: false - - !Permute - to_bgr: false - channel_first: true - batch_size: 64 - shuffle: true - drop_last: true - # Number of working threads/processes. To speed up, can be set to 16 or 32 etc. - worker_num: 8 - # Size of shared memory used in result queue. After increasing `worker_num`, need expand `memsize`. - memsize: 8G - # Buffer size for multi threads/processes.one instance in buffer is one batch data. - # To speed up, can be set to 64 or 128 etc. - bufsize: 32 - use_process: true - - -EvalReader: - inputs_def: - image_shape: [3, 320, 320] - fields: ['image', 'gt_bbox', 'gt_class', 'im_shape', 'im_id'] - dataset: - !COCODataSet - dataset_dir: dataset/coco - anno_path: annotations/instances_val2017.json - image_dir: val2017 - sample_transforms: - - !DecodeImage - to_rgb: true - - !NormalizeBox {} - - !ResizeImage - interp: 1 - target_size: 320 - use_cv2: false - - !NormalizeImage - mean: [0.485, 0.456, 0.406] - std: [0.229, 0.224, 0.225] - is_scale: true - is_channel_first: false - - !Permute - to_bgr: false - channel_first: True - batch_size: 8 - worker_num: 8 - bufsize: 32 - use_process: false - -TestReader: - inputs_def: - image_shape: [3,320,320] - fields: ['image', 'im_id', 'im_shape'] - dataset: - !ImageFolder - anno_path: annotations/instances_val2017.json - sample_transforms: - - !DecodeImage - to_rgb: true - - !ResizeImage - interp: 1 - max_size: 0 - target_size: 320 - use_cv2: true - - !NormalizeImage - mean: [0.485, 0.456, 0.406] - std: [0.229, 0.224, 0.225] - is_scale: true - is_channel_first: false - - !Permute - to_bgr: false - channel_first: True - batch_size: 1 diff --git a/static/configs/yolov3_darknet.yml b/static/configs/yolov3_darknet.yml deleted file mode 100644 index 0aa2fcac288f7ba06302e266e5eed8ef6a7aa542..0000000000000000000000000000000000000000 --- a/static/configs/yolov3_darknet.yml +++ /dev/null @@ -1,61 +0,0 @@ -architecture: YOLOv3 -use_gpu: true -max_iters: 500000 -log_iter: 20 -save_dir: output -snapshot_iter: 10000 -metric: COCO -pretrain_weights: https://paddle-imagenet-models-name.bj.bcebos.com/DarkNet53_pretrained.tar -weights: output/yolov3_darknet/model_final -num_classes: 80 -use_fine_grained_loss: false - -YOLOv3: - backbone: DarkNet - yolo_head: YOLOv3Head - -DarkNet: - norm_type: sync_bn - norm_decay: 0. - depth: 53 - -YOLOv3Head: - anchor_masks: [[6, 7, 8], [3, 4, 5], [0, 1, 2]] - anchors: [[10, 13], [16, 30], [33, 23], - [30, 61], [62, 45], [59, 119], - [116, 90], [156, 198], [373, 326]] - norm_decay: 0. - yolo_loss: YOLOv3Loss - nms: - background_label: -1 - keep_top_k: 100 - nms_threshold: 0.45 - nms_top_k: 1000 - normalized: false - score_threshold: 0.01 - -YOLOv3Loss: - ignore_thresh: 0.7 - label_smooth: true - -LearningRate: - base_lr: 0.001 - schedulers: - - !PiecewiseDecay - gamma: 0.1 - milestones: - - 400000 - - 450000 - - !LinearWarmup - start_factor: 0. 
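anchor_masks in the YOLOv3 heads index into the nine anchors: the coarsest head (stride 32) takes the three largest anchors via [6, 7, 8], and the finest (stride 8) takes the smallest via [0, 1, 2], matching the downsample_ratios [32, 16, 8] used by Gt2YoloTarget in the readers. Spelled out:

```python
ANCHORS = [(10, 13), (16, 30), (33, 23),
           (30, 61), (62, 45), (59, 119),
           (116, 90), (156, 198), (373, 326)]
ANCHOR_MASKS = [[6, 7, 8], [3, 4, 5], [0, 1, 2]]
DOWNSAMPLE_RATIOS = [32, 16, 8]

# Coarse feature maps predict with the largest anchors, fine maps the smallest.
for stride, mask in zip(DOWNSAMPLE_RATIOS, ANCHOR_MASKS):
    print(f"stride {stride:2d} -> anchors {[ANCHORS[i] for i in mask]}")
# stride 32 -> anchors [(116, 90), (156, 198), (373, 326)]
# stride 16 -> anchors [(30, 61), (62, 45), (59, 119)]
# stride  8 -> anchors [(10, 13), (16, 30), (33, 23)]
```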
- steps: 4000 - -OptimizerBuilder: - optimizer: - momentum: 0.9 - type: Momentum - regularizer: - factor: 0.0005 - type: L2 - -_READER_: 'yolov3_reader.yml' diff --git a/static/configs/yolov3_darknet_roadsign.yml b/static/configs/yolov3_darknet_roadsign.yml deleted file mode 100644 index 16fa37a3c9a779184ee3457f0d45a74fa5890a8a..0000000000000000000000000000000000000000 --- a/static/configs/yolov3_darknet_roadsign.yml +++ /dev/null @@ -1,172 +0,0 @@ -architecture: YOLOv3 -use_gpu: false -max_iters: 1200 -log_iter: 1 -save_dir: output -snapshot_iter: 200 -metric: VOC -map_type: integral -pretrain_weights: https://paddlemodels.bj.bcebos.com/object_detection/yolov3_darknet.tar -weights: output/yolov3_darknet_roadsign/model_final -num_classes: 4 -finetune_exclude_pretrained_params: ['yolo_output'] -use_fine_grained_loss: false - -YOLOv3: - backbone: DarkNet - yolo_head: YOLOv3Head - -DarkNet: - norm_type: sync_bn - norm_decay: 0. - depth: 53 - -YOLOv3Head: - anchor_masks: [[6, 7, 8], [3, 4, 5], [0, 1, 2]] - anchors: [[10, 13], [16, 30], [33, 23], - [30, 61], [62, 45], [59, 119], - [116, 90], [156, 198], [373, 326]] - norm_decay: 0. - yolo_loss: YOLOv3Loss - nms: - background_label: -1 - keep_top_k: 100 - nms_threshold: 0.45 - nms_top_k: 1000 - normalized: false - score_threshold: 0.01 - -YOLOv3Loss: - ignore_thresh: 0.7 - label_smooth: true - -LearningRate: - base_lr: 0.0001 - schedulers: - - !PiecewiseDecay - gamma: 0.1 - milestones: - - 800 - - 1100 - - !LinearWarmup - start_factor: 0. - steps: 100 - -OptimizerBuilder: - optimizer: - momentum: 0.9 - type: Momentum - regularizer: - factor: 0.0005 - type: L2 - -TrainReader: - inputs_def: - fields: ['image', 'gt_bbox', 'gt_class', 'gt_score'] - num_max_boxes: 50 - dataset: - !VOCDataSet - dataset_dir: dataset/roadsign_voc - anno_path: train.txt - with_background: false - sample_transforms: - - !DecodeImage - to_rgb: True - with_mixup: True - - !MixupImage - alpha: 1.5 - beta: 1.5 - - !ColorDistort {} - - !RandomExpand - fill_value: [123.675, 116.28, 103.53] - ratio: 1.5 - - !RandomCrop {} - - !RandomFlipImage - is_normalized: false - - !NormalizeBox {} - - !PadBox - num_max_boxes: 50 - - !BboxXYXY2XYWH {} - batch_transforms: - - !RandomShape - sizes: [320, 352, 384, 416, 448, 480, 512, 544, 576, 608] - random_inter: True - - !NormalizeImage - mean: [0.485, 0.456, 0.406] - std: [0.229, 0.224, 0.225] - is_scale: True - is_channel_first: false - - !Permute - to_bgr: false - channel_first: True - # Gt2YoloTarget is only used when use_fine_grained_loss set as true, - # this operator will be deleted automatically if use_fine_grained_loss - # is set as false - - !Gt2YoloTarget - anchor_masks: [[6, 7, 8], [3, 4, 5], [0, 1, 2]] - anchors: [[10, 13], [16, 30], [33, 23], - [30, 61], [62, 45], [59, 119], - [116, 90], [156, 198], [373, 326]] - downsample_ratios: [32, 16, 8] - batch_size: 4 - shuffle: true - mixup_epoch: 250 - drop_last: true - worker_num: 2 - bufsize: 2 - use_process: false #true - - -EvalReader: - inputs_def: - fields: ['image', 'im_size', 'im_id', 'gt_bbox', 'gt_class', 'is_difficult'] - num_max_boxes: 50 - dataset: - !VOCDataSet - dataset_dir: dataset/roadsign_voc - anno_path: valid.txt - with_background: false - sample_transforms: - - !DecodeImage - to_rgb: True - - !ResizeImage - target_size: 608 - interp: 2 - - !NormalizeImage - mean: [0.485, 0.456, 0.406] - std: [0.229, 0.224, 0.225] - is_scale: True - is_channel_first: false - - !PadBox - num_max_boxes: 50 - - !Permute - to_bgr: false - channel_first: True - batch_size: 4 - 
drop_empty: false - worker_num: 4 - bufsize: 2 - -TestReader: - inputs_def: - image_shape: [3, 608, 608] - fields: ['image', 'im_size', 'im_id'] - dataset: - !ImageFolder - anno_path: dataset/roadsign_voc/label_list.txt - with_background: false - sample_transforms: - - !DecodeImage - to_rgb: True - - !ResizeImage - target_size: 608 - interp: 2 - - !NormalizeImage - mean: [0.485, 0.456, 0.406] - std: [0.229, 0.224, 0.225] - is_scale: True - is_channel_first: false - - !Permute - to_bgr: false - channel_first: True - batch_size: 1 diff --git a/static/configs/yolov3_darknet_roadsign_kunlun.yml b/static/configs/yolov3_darknet_roadsign_kunlun.yml deleted file mode 100644 index 3def79eb187e5968f9589a9d681a9d46b7b00951..0000000000000000000000000000000000000000 --- a/static/configs/yolov3_darknet_roadsign_kunlun.yml +++ /dev/null @@ -1,173 +0,0 @@ -architecture: YOLOv3 -use_gpu: false -use_xpu: true -max_iters: 1200 -log_iter: 1 -save_dir: output -snapshot_iter: 200 -metric: VOC -map_type: integral -pretrain_weights: https://paddlemodels.bj.bcebos.com/object_detection/yolov3_darknet.tar -weights: output/yolov3_darknet_roadsign_xpu/model_final -num_classes: 4 -finetune_exclude_pretrained_params: ['yolo_output'] -use_fine_grained_loss: false - -YOLOv3: - backbone: DarkNet - yolo_head: YOLOv3Head - -DarkNet: - norm_type: bn - norm_decay: 0. - depth: 53 - -YOLOv3Head: - anchor_masks: [[6, 7, 8], [3, 4, 5], [0, 1, 2]] - anchors: [[10, 13], [16, 30], [33, 23], - [30, 61], [62, 45], [59, 119], - [116, 90], [156, 198], [373, 326]] - norm_decay: 0. - yolo_loss: YOLOv3Loss - nms: - background_label: -1 - keep_top_k: 100 - nms_threshold: 0.45 - nms_top_k: 1000 - normalized: false - score_threshold: 0.01 - -YOLOv3Loss: - ignore_thresh: 0.7 - label_smooth: true - -LearningRate: - base_lr: 0.000125 #0.00025 - schedulers: - - !PiecewiseDecay - gamma: 0.1 - milestones: - - 800 #400 - - 1100 #550 - - !LinearWarmup - start_factor: 0. 
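Both roadsign configs fine-tune the 80-class COCO yolov3_darknet weights down to num_classes: 4, so finetune_exclude_pretrained_params: ['yolo_output'] skips the class-shape-dependent output layers when loading and leaves them randomly initialized. A hedged sketch of that kind of name-based filtering over a generic parameter dict (illustrative only, not PaddleDetection's loader; the parameter names are hypothetical):

```python
import re

def filter_pretrained(params, exclude_patterns):
    """Drop pretrained parameters whose names match any exclude pattern."""
    keep = {}
    for name, value in params.items():
        if any(re.search(p, name) for p in exclude_patterns):
            continue  # shape depends on num_classes; leave randomly initialized
        keep[name] = value
    return keep

pretrained = {"darknet53.conv1.w": "...", "yolo_output.2.conv.w": "..."}
print(sorted(filter_pretrained(pretrained, ["yolo_output"])))
# ['darknet53.conv1.w']
```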
- steps: 200 #200 - -OptimizerBuilder: - optimizer: - momentum: 0.9 - type: Momentum - regularizer: - factor: 0.0005 - type: L2 - -TrainReader: - inputs_def: - fields: ['image', 'gt_bbox', 'gt_class', 'gt_score'] - num_max_boxes: 50 - dataset: - !VOCDataSet - dataset_dir: dataset/roadsign_voc - anno_path: train.txt - with_background: false - sample_transforms: - - !DecodeImage - to_rgb: True - with_mixup: True - - !MixupImage - alpha: 1.5 - beta: 1.5 - - !ColorDistort {} - - !RandomExpand - fill_value: [123.675, 116.28, 103.53] - ratio: 1.5 - - !RandomCrop {} - - !RandomFlipImage - is_normalized: false - - !NormalizeBox {} - - !PadBox - num_max_boxes: 50 - - !BboxXYXY2XYWH {} - batch_transforms: - - !RandomShape - sizes: [320, 352, 384, 416, 448, 480, 512, 544, 576, 608] - random_inter: True - - !NormalizeImage - mean: [0.485, 0.456, 0.406] - std: [0.229, 0.224, 0.225] - is_scale: True - is_channel_first: false - - !Permute - to_bgr: false - channel_first: True - # Gt2YoloTarget is only used when use_fine_grained_loss set as true, - # this operator will be deleted automatically if use_fine_grained_loss - # is set as false - - !Gt2YoloTarget - anchor_masks: [[6, 7, 8], [3, 4, 5], [0, 1, 2]] - anchors: [[10, 13], [16, 30], [33, 23], - [30, 61], [62, 45], [59, 119], - [116, 90], [156, 198], [373, 326]] - downsample_ratios: [32, 16, 8] - batch_size: 2 - shuffle: true - mixup_epoch: 250 - drop_last: true - worker_num: 2 - bufsize: 2 - use_process: false #true - - -EvalReader: - inputs_def: - fields: ['image', 'im_size', 'im_id', 'gt_bbox', 'gt_class', 'is_difficult'] - num_max_boxes: 50 - dataset: - !VOCDataSet - dataset_dir: dataset/roadsign_voc - anno_path: valid.txt - with_background: false - sample_transforms: - - !DecodeImage - to_rgb: True - - !ResizeImage - target_size: 608 - interp: 2 - - !NormalizeImage - mean: [0.485, 0.456, 0.406] - std: [0.229, 0.224, 0.225] - is_scale: True - is_channel_first: false - - !PadBox - num_max_boxes: 50 - - !Permute - to_bgr: false - channel_first: True - batch_size: 4 - drop_empty: false - worker_num: 4 - bufsize: 2 - -TestReader: - inputs_def: - image_shape: [3, 608, 608] - fields: ['image', 'im_size', 'im_id'] - dataset: - !ImageFolder - anno_path: dataset/roadsign_voc/label_list.txt - with_background: false - sample_transforms: - - !DecodeImage - to_rgb: True - - !ResizeImage - target_size: 608 - interp: 2 - - !NormalizeImage - mean: [0.485, 0.456, 0.406] - std: [0.229, 0.224, 0.225] - is_scale: True - is_channel_first: false - - !Permute - to_bgr: false - channel_first: True - batch_size: 1 diff --git a/static/configs/yolov3_darknet_voc.yml b/static/configs/yolov3_darknet_voc.yml deleted file mode 100644 index f8435a9b89a2fc629c805fcaf95e005af2200771..0000000000000000000000000000000000000000 --- a/static/configs/yolov3_darknet_voc.yml +++ /dev/null @@ -1,89 +0,0 @@ -architecture: YOLOv3 -use_gpu: true -max_iters: 70000 -log_iter: 20 -save_dir: output -snapshot_iter: 2000 -metric: VOC -map_type: 11point -pretrain_weights: https://paddle-imagenet-models-name.bj.bcebos.com/DarkNet53_pretrained.tar -weights: output/yolov3_darknet_voc/model_final -num_classes: 20 -use_fine_grained_loss: false - -YOLOv3: - backbone: DarkNet - yolo_head: YOLOv3Head - -DarkNet: - norm_type: sync_bn - norm_decay: 0. - depth: 53 - -YOLOv3Head: - anchor_masks: [[6, 7, 8], [3, 4, 5], [0, 1, 2]] - anchors: [[10, 13], [16, 30], [33, 23], - [30, 61], [62, 45], [59, 119], - [116, 90], [156, 198], [373, 326]] - norm_decay: 0. 
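Every LearningRate block in these configs pairs a PiecewiseDecay scheduler with a LinearWarmup. A minimal sketch of the combined schedule, defaulting to the Kunlun config's values above (an illustrative reconstruction for intuition, not PaddleDetection's actual scheduler classes):

```python
def lr_at(it, base_lr=0.000125, gamma=0.1, milestones=(800, 1100),
          warmup_steps=200, start_factor=0.0):
    """LinearWarmup ramps from start_factor * base_lr up to base_lr over
    warmup_steps iterations; PiecewiseDecay then multiplies the rate by
    gamma at each milestone passed."""
    if it < warmup_steps:
        alpha = it / warmup_steps
        return base_lr * (start_factor + (1.0 - start_factor) * alpha)
    lr = base_lr
    for m in milestones:
        if it >= m:
            lr *= gamma
    return lr

# lr_at(0) -> 0.0 (start_factor: 0.), lr_at(200) -> 1.25e-4,
# lr_at(900) -> 1.25e-5 (after the first milestone)
```

With start_factor: 0. the rate climbs linearly from zero to base_lr over the warmup steps, then drops by a factor of ten at each milestone.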
- yolo_loss: YOLOv3Loss - nms: - background_label: -1 - keep_top_k: 100 - nms_threshold: 0.45 - nms_top_k: 1000 - normalized: false - score_threshold: 0.01 - -YOLOv3Loss: - ignore_thresh: 0.7 - label_smooth: false - -LearningRate: - base_lr: 0.001 - schedulers: - - !PiecewiseDecay - gamma: 0.1 - milestones: - - 55000 - - 62000 - - !LinearWarmup - start_factor: 0. - steps: 1000 - -OptimizerBuilder: - optimizer: - momentum: 0.9 - type: Momentum - regularizer: - factor: 0.0005 - type: L2 - -_READER_: 'yolov3_reader.yml' -TrainReader: - inputs_def: - fields: ['image', 'gt_bbox', 'gt_class', 'gt_score'] - num_max_boxes: 50 - dataset: - !VOCDataSet - dataset_dir: dataset/voc - anno_path: trainval.txt - use_default_label: true - with_background: false - -EvalReader: - inputs_def: - fields: ['image', 'im_size', 'im_id', 'gt_bbox', 'gt_class', 'is_difficult'] - num_max_boxes: 50 - dataset: - !VOCDataSet - dataset_dir: dataset/voc - anno_path: test.txt - use_default_label: true - with_background: false - -TestReader: - dataset: - !ImageFolder - use_default_label: true - with_background: false diff --git a/static/configs/yolov3_darknet_voc_diouloss.yml b/static/configs/yolov3_darknet_voc_diouloss.yml deleted file mode 100644 index 979c825e18ac9cc315c45c209202f8586249cec6..0000000000000000000000000000000000000000 --- a/static/configs/yolov3_darknet_voc_diouloss.yml +++ /dev/null @@ -1,93 +0,0 @@ -architecture: YOLOv3 -use_gpu: true -max_iters: 70000 -log_iter: 20 -save_dir: output -snapshot_iter: 2000 -metric: VOC -map_type: 11point -pretrain_weights: https://paddle-imagenet-models-name.bj.bcebos.com/DarkNet53_pretrained.tar -weights: output/yolov3_darknet_voc/model_final -num_classes: 20 -use_fine_grained_loss: true - -YOLOv3: - backbone: DarkNet - yolo_head: YOLOv3Head - -DarkNet: - norm_type: sync_bn - norm_decay: 0. - depth: 53 - -YOLOv3Head: - anchor_masks: [[6, 7, 8], [3, 4, 5], [0, 1, 2]] - anchors: [[10, 13], [16, 30], [33, 23], - [30, 61], [62, 45], [59, 119], - [116, 90], [156, 198], [373, 326]] - norm_decay: 0. - yolo_loss: YOLOv3Loss - nms: - background_label: -1 - keep_top_k: 100 - nms_threshold: 0.45 - nms_top_k: 1000 - normalized: false - score_threshold: 0.01 - -YOLOv3Loss: - ignore_thresh: 0.7 - label_smooth: false - iou_loss: DiouLossYolo - -DiouLossYolo: - loss_weight: 5 - -LearningRate: - base_lr: 0.001 - schedulers: - - !PiecewiseDecay - gamma: 0.1 - milestones: - - 55000 - - 62000 - - !LinearWarmup - start_factor: 0. 
- steps: 1000 - -OptimizerBuilder: - optimizer: - momentum: 0.9 - type: Momentum - regularizer: - factor: 0.0005 - type: L2 - -_READER_: 'yolov3_reader.yml' -TrainReader: - inputs_def: - fields: ['image', 'gt_bbox', 'gt_class', 'gt_score'] - num_max_boxes: 50 - dataset: - !VOCDataSet - dataset_dir: dataset/voc - anno_path: trainval.txt - use_default_label: true - with_background: false - -EvalReader: - inputs_def: - fields: ['image', 'im_size', 'im_id', 'gt_bbox', 'gt_class', 'is_difficult'] - num_max_boxes: 50 - dataset: - !VOCDataSet - dataset_dir: dataset/voc - anno_path: test.txt - use_default_label: true - with_background: false - -TestReader: - dataset: - !ImageFolder - use_default_label: true - with_background: false diff --git a/static/configs/yolov3_mobilenet_v1.yml b/static/configs/yolov3_mobilenet_v1.yml deleted file mode 100644 index d39530341d2e7cfdaa302c7a3ff1136fa67b2080..0000000000000000000000000000000000000000 --- a/static/configs/yolov3_mobilenet_v1.yml +++ /dev/null @@ -1,62 +0,0 @@ -architecture: YOLOv3 -use_gpu: true -max_iters: 500000 -log_iter: 20 -save_dir: output -snapshot_iter: 10000 -metric: COCO -pretrain_weights: http://paddle-imagenet-models-name.bj.bcebos.com/MobileNetV1_pretrained.tar -weights: output/yolov3_mobilenet_v1/model_final -num_classes: 80 -use_fine_grained_loss: false - -YOLOv3: - backbone: MobileNet - yolo_head: YOLOv3Head - -MobileNet: - norm_type: sync_bn - norm_decay: 0. - conv_group_scale: 1 - with_extra_blocks: false - -YOLOv3Head: - anchor_masks: [[6, 7, 8], [3, 4, 5], [0, 1, 2]] - anchors: [[10, 13], [16, 30], [33, 23], - [30, 61], [62, 45], [59, 119], - [116, 90], [156, 198], [373, 326]] - norm_decay: 0. - yolo_loss: YOLOv3Loss - nms: - background_label: -1 - keep_top_k: 100 - nms_threshold: 0.45 - nms_top_k: 1000 - normalized: false - score_threshold: 0.01 - -YOLOv3Loss: - ignore_thresh: 0.7 - label_smooth: true - -LearningRate: - base_lr: 0.001 - schedulers: - - !PiecewiseDecay - gamma: 0.1 - milestones: - - 400000 - - 450000 - - !LinearWarmup - start_factor: 0. - steps: 4000 - -OptimizerBuilder: - optimizer: - momentum: 0.9 - type: Momentum - regularizer: - factor: 0.0005 - type: L2 - -_READER_: 'yolov3_reader.yml' diff --git a/static/configs/yolov3_mobilenet_v1_fruit.yml b/static/configs/yolov3_mobilenet_v1_fruit.yml deleted file mode 100644 index 757c106373ac0c1b0f85fd28a16ec7cdda2f7734..0000000000000000000000000000000000000000 --- a/static/configs/yolov3_mobilenet_v1_fruit.yml +++ /dev/null @@ -1,129 +0,0 @@ -architecture: YOLOv3 -use_gpu: true -max_iters: 20000 -log_iter: 20 -save_dir: output -snapshot_iter: 200 -metric: VOC -map_type: 11point -pretrain_weights: https://paddlemodels.bj.bcebos.com/object_detection/yolov3_mobilenet_v1.tar -weights: output/yolov3_mobilenet_v1_fruit/best_model -num_classes: 3 -finetune_exclude_pretrained_params: ['yolo_output'] -use_fine_grained_loss: false - -YOLOv3: - backbone: MobileNet - yolo_head: YOLOv3Head - -MobileNet: - norm_type: sync_bn - norm_decay: 0. - conv_group_scale: 1 - with_extra_blocks: false - -YOLOv3Head: - anchor_masks: [[6, 7, 8], [3, 4, 5], [0, 1, 2]] - anchors: [[10, 13], [16, 30], [33, 23], - [30, 61], [62, 45], [59, 119], - [116, 90], [156, 198], [373, 326]] - norm_decay: 0. 
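The diouloss variant above plugs DiouLossYolo into YOLOv3Loss. For intuition, DIoU extends IoU with a normalized center-distance penalty (Zheng et al., 2020); the sketch below computes it for two xyxy boxes and is illustrative only, since the Paddle op additionally applies the loss_weight: 5 factor and works on decoded predictions:

```python
def diou(a, b):
    """DIoU = IoU - rho^2 / c^2, where rho is the distance between box
    centers and c the diagonal of the smallest enclosing box; the loss
    itself is typically 1 - DIoU. Boxes are (x1, y1, x2, y2)."""
    iw = max(0.0, min(a[2], b[2]) - max(a[0], b[0]))
    ih = max(0.0, min(a[3], b[3]) - max(a[1], b[1]))
    inter = iw * ih
    union = ((a[2] - a[0]) * (a[3] - a[1]) +
             (b[2] - b[0]) * (b[3] - b[1]) - inter)
    iou = inter / union if union > 0 else 0.0
    rho2 = (((a[0] + a[2]) - (b[0] + b[2])) ** 2 +
            ((a[1] + a[3]) - (b[1] + b[3])) ** 2) / 4.0
    cw = max(a[2], b[2]) - min(a[0], b[0])
    ch = max(a[3], b[3]) - min(a[1], b[1])
    c2 = cw ** 2 + ch ** 2
    return iou - (rho2 / c2 if c2 > 0 else 0.0)
```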
- yolo_loss: YOLOv3Loss - nms: - background_label: -1 - keep_top_k: 100 - nms_threshold: 0.45 - nms_top_k: 1000 - normalized: false - score_threshold: 0.01 - -YOLOv3Loss: - ignore_thresh: 0.7 - label_smooth: true - -LearningRate: - base_lr: 0.00001 - schedulers: - - !PiecewiseDecay - gamma: 0.1 - milestones: - - 15000 - - 18000 - - !LinearWarmup - start_factor: 0. - steps: 100 - -OptimizerBuilder: - optimizer: - momentum: 0.9 - type: Momentum - regularizer: - factor: 0.0005 - type: L2 - -_READER_: 'yolov3_reader.yml' -# will merge TrainReader into yolov3_reader.yml -TrainReader: - inputs_def: - image_shape: [3, 608, 608] - fields: ['image', 'gt_bbox', 'gt_class', 'gt_score'] - num_max_boxes: 50 - dataset: - !VOCDataSet - dataset_dir: dataset/fruit - anno_path: train.txt - with_background: false - use_default_label: false - sample_transforms: - - !DecodeImage - to_rgb: true - with_mixup: false - - !NormalizeBox {} - - !ExpandImage - max_ratio: 4.0 - mean: [123.675, 116.28, 103.53] - prob: 0.5 - - !RandomInterpImage - max_size: 0 - target_size: 608 - - !RandomFlipImage - is_normalized: true - prob: 0.5 - - !NormalizeImage - mean: [0.485, 0.456, 0.406] - std: [0.229, 0.224, 0.225] - is_scale: true - is_channel_first: false - - !PadBox - num_max_boxes: 50 - - !BboxXYXY2XYWH {} - batch_transforms: - - !RandomShape - sizes: [608] - - !Permute - channel_first: true - to_bgr: false - batch_size: 1 - shuffle: true - mixup_epoch: -1 - -EvalReader: - batch_size: 1 - inputs_def: - image_shape: [3, 608, 608] - fields: ['image', 'im_size', 'im_id', 'gt_bbox', 'gt_class', 'is_difficult'] - num_max_boxes: 50 - dataset: - !VOCDataSet - dataset_dir: dataset/fruit - anno_path: val.txt - use_default_label: false - with_background: false - -TestReader: - batch_size: 1 - dataset: - !ImageFolder - anno_path: dataset/fruit/label_list.txt - use_default_label: false - with_background: false diff --git a/static/configs/yolov3_mobilenet_v1_roadsign.yml b/static/configs/yolov3_mobilenet_v1_roadsign.yml deleted file mode 100644 index 89fd3e7cb8705864a0d4e1ff95043e1c740caf48..0000000000000000000000000000000000000000 --- a/static/configs/yolov3_mobilenet_v1_roadsign.yml +++ /dev/null @@ -1,175 +0,0 @@ -architecture: YOLOv3 -use_gpu: true -max_iters: 3600 -log_smooth_window: 20 -save_dir: output -snapshot_iter: 200 -metric: VOC -map_type: integral -pretrain_weights: https://paddlemodels.bj.bcebos.com/object_detection/yolov3_mobilenet_v1.tar -weights: output/yolov3_mobilenet_v1_roadsign/best_model -num_classes: 4 -finetune_exclude_pretrained_params: ['yolo_output'] -use_fine_grained_loss: false - -YOLOv3: - backbone: MobileNet - yolo_head: YOLOv3Head - -MobileNet: - norm_decay: 0. 
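The NormalizeImage (is_scale: True) and Permute (channel_first: True) steps that recur in these readers amount to the following numpy operation on an RGB uint8 HWC image; this is a sketch of the effect, not the actual ppdet transform classes:

```python
import numpy as np

def normalize_and_permute(img,
                          mean=(0.485, 0.456, 0.406),
                          std=(0.229, 0.224, 0.225)):
    """Scale to [0, 1], normalize per channel, then reorder HWC -> CHW."""
    x = img.astype(np.float32) / 255.0                       # is_scale: True
    x = (x - np.array(mean, np.float32)) / np.array(std, np.float32)
    return x.transpose(2, 0, 1)                              # channel_first: True
```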
- conv_group_scale: 1 - with_extra_blocks: false - -YOLOv3Head: - anchor_masks: [[6, 7, 8], [3, 4, 5], [0, 1, 2]] - anchors: [[10, 13], [16, 30], [33, 23], - [30, 61], [62, 45], [59, 119], - [116, 90], [156, 198], [373, 326]] - yolo_loss: YOLOv3Loss - nms: - background_label: -1 - keep_top_k: 100 - nms_threshold: 0.45 - nms_top_k: 1000 - normalized: false - score_threshold: 0.01 - -YOLOv3Loss: - ignore_thresh: 0.7 - label_smooth: true - -LearningRate: - base_lr: 0.0001 - schedulers: - - !PiecewiseDecay - gamma: 0.1 - milestones: - - 2400 - - 3300 - - !LinearWarmup - start_factor: 0.3333333333333333 - steps: 100 - -OptimizerBuilder: - optimizer: - momentum: 0.9 - type: Momentum - regularizer: - factor: 0.0005 - type: L2 - -# _READER_: 'yolov3_reader.yml' -TrainReader: - inputs_def: - fields: ['image', 'gt_bbox', 'gt_class', 'gt_score'] - num_max_boxes: 50 - dataset: - !VOCDataSet - dataset_dir: dataset/roadsign_voc - anno_path: train.txt - with_background: false - use_default_label: false - sample_transforms: - - !DecodeImage - to_rgb: True - with_mixup: True - - !MixupImage - alpha: 1.5 - beta: 1.5 - - !ColorDistort {} - - !RandomExpand - fill_value: [123.675, 116.28, 103.53] - ratio: 1.5 - - !RandomCrop {} - - !RandomFlipImage - is_normalized: false - - !NormalizeBox {} - - !PadBox - num_max_boxes: 50 - - !BboxXYXY2XYWH {} - batch_transforms: - - !RandomShape - sizes: [320, 352, 384, 416, 448, 480, 512, 544, 576, 608] - random_inter: True - - !NormalizeImage - mean: [0.485, 0.456, 0.406] - std: [0.229, 0.224, 0.225] - is_scale: True - is_channel_first: false - - !Permute - to_bgr: false - channel_first: True - # Gt2YoloTarget is only used when use_fine_grained_loss set as true, - # this operator will be deleted automatically if use_fine_grained_loss - # is set as false - - !Gt2YoloTarget - anchor_masks: [[6, 7, 8], [3, 4, 5], [0, 1, 2]] - anchors: [[10, 13], [16, 30], [33, 23], - [30, 61], [62, 45], [59, 119], - [116, 90], [156, 198], [373, 326]] - downsample_ratios: [32, 16, 8] - batch_size: 8 - shuffle: true - mixup_epoch: 250 - drop_last: true - worker_num: 4 - bufsize: 2 - use_process: true - - -EvalReader: - inputs_def: - fields: ['image', 'im_size', 'im_id', 'gt_bbox', 'gt_class', 'is_difficult'] - num_max_boxes: 50 - dataset: - !VOCDataSet - dataset_dir: dataset/roadsign_voc - anno_path: valid.txt - with_background: false - use_default_label: false - sample_transforms: - - !DecodeImage - to_rgb: True - - !ResizeImage - target_size: 608 - interp: 2 - - !NormalizeImage - mean: [0.485, 0.456, 0.406] - std: [0.229, 0.224, 0.225] - is_scale: True - is_channel_first: false - - !PadBox - num_max_boxes: 50 - - !Permute - to_bgr: false - channel_first: True - batch_size: 1 - drop_empty: false - worker_num: 4 - bufsize: 2 - -TestReader: - inputs_def: - image_shape: [3, 608, 608] - fields: ['image', 'im_size', 'im_id'] - dataset: - !ImageFolder - anno_path: dataset/roadsign_voc/label_list.txt - with_background: false - use_default_label: false - sample_transforms: - - !DecodeImage - to_rgb: True - - !ResizeImage - target_size: 608 - interp: 2 - - !NormalizeImage - mean: [0.485, 0.456, 0.406] - std: [0.229, 0.224, 0.225] - is_scale: True - is_channel_first: false - - !Permute - to_bgr: false - channel_first: True - batch_size: 1 diff --git a/static/configs/yolov3_mobilenet_v1_voc.yml b/static/configs/yolov3_mobilenet_v1_voc.yml deleted file mode 100644 index d5d324dc46c70bb6f7c510160aa77e75a5877ad3..0000000000000000000000000000000000000000 --- a/static/configs/yolov3_mobilenet_v1_voc.yml +++ 
/dev/null @@ -1,87 +0,0 @@ -architecture: YOLOv3 -use_gpu: true -max_iters: 70000 -log_iter: 20 -save_dir: output -snapshot_iter: 2000 -metric: VOC -map_type: 11point -pretrain_weights: http://paddle-imagenet-models-name.bj.bcebos.com/MobileNetV1_pretrained.tar -weights: output/yolov3_mobilenet_v1_voc/model_final -num_classes: 20 -use_fine_grained_loss: false - -YOLOv3: - backbone: MobileNet - yolo_head: YOLOv3Head - -MobileNet: - norm_type: sync_bn - norm_decay: 0. - conv_group_scale: 1 - with_extra_blocks: false - -YOLOv3Head: - anchor_masks: [[6, 7, 8], [3, 4, 5], [0, 1, 2]] - anchors: [[10, 13], [16, 30], [33, 23], - [30, 61], [62, 45], [59, 119], - [116, 90], [156, 198], [373, 326]] - norm_decay: 0. - yolo_loss: YOLOv3Loss - nms: - background_label: -1 - keep_top_k: 100 - nms_threshold: 0.45 - nms_top_k: 1000 - normalized: false - score_threshold: 0.01 - -YOLOv3Loss: - ignore_thresh: 0.7 - label_smooth: false - -LearningRate: - base_lr: 0.001 - schedulers: - - !PiecewiseDecay - gamma: 0.1 - milestones: - - 55000 - - 62000 - - !LinearWarmup - start_factor: 0. - steps: 1000 - -OptimizerBuilder: - optimizer: - momentum: 0.9 - type: Momentum - regularizer: - factor: 0.0005 - type: L2 - -_READER_: 'yolov3_reader.yml' -TrainReader: - dataset: - !VOCDataSet - dataset_dir: dataset/voc - anno_path: trainval.txt - use_default_label: true - with_background: false - -EvalReader: - inputs_def: - fields: ['image', 'im_size', 'im_id', 'gt_bbox', 'gt_class', 'is_difficult'] - num_max_boxes: 50 - dataset: - !VOCDataSet - dataset_dir: dataset/voc - anno_path: test.txt - use_default_label: true - with_background: false - -TestReader: - dataset: - !ImageFolder - use_default_label: true - with_background: false diff --git a/static/configs/yolov3_mobilenet_v3.yml b/static/configs/yolov3_mobilenet_v3.yml deleted file mode 100644 index bc2b6b3a55e64c1f30fb98ab1ab220b02a2dce9c..0000000000000000000000000000000000000000 --- a/static/configs/yolov3_mobilenet_v3.yml +++ /dev/null @@ -1,64 +0,0 @@ -architecture: YOLOv3 -use_gpu: true -max_iters: 500000 -log_iter: 20 -save_dir: output -snapshot_iter: 10000 -metric: COCO -pretrain_weights: https://paddle-imagenet-models-name.bj.bcebos.com/MobileNetV3_large_x1_0_pretrained.tar -weights: output/yolov3_mobilenet_v3/model_final -num_classes: 80 -use_fine_grained_loss: false - -YOLOv3: - backbone: MobileNetV3 - yolo_head: YOLOv3Head - -MobileNetV3: - norm_type: sync_bn - norm_decay: 0. - model_name: large - scale: 1. - extra_block_filters: [] - feature_maps: [1, 2, 3, 4, 6] - -YOLOv3Head: - anchor_masks: [[6, 7, 8], [3, 4, 5], [0, 1, 2]] - anchors: [[10, 13], [16, 30], [33, 23], - [30, 61], [62, 45], [59, 119], - [116, 90], [156, 198], [373, 326]] - norm_decay: 0. - yolo_loss: YOLOv3Loss - nms: - background_label: -1 - keep_top_k: 100 - nms_threshold: 0.45 - nms_top_k: 1000 - normalized: false - score_threshold: 0.01 - -YOLOv3Loss: - ignore_thresh: 0.7 - label_smooth: false - -LearningRate: - base_lr: 0.001 - schedulers: - - !PiecewiseDecay - gamma: 0.1 - milestones: - - 400000 - - 450000 - - !LinearWarmup - start_factor: 0. 
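The nms blocks in these heads configure multiclass NMS (score_threshold: 0.01, nms_threshold: 0.45, keep_top_k: 100). A minimal single-class greedy NMS conveys the core suppression loop; this is a sketch for intuition, while the real MultiClassNMS op also handles per-class suppression, background_label and the nms_top_k pre-filter:

```python
import numpy as np

def greedy_nms(boxes, scores, iou_thresh=0.45, score_thresh=0.01,
               keep_top_k=100):
    """Greedy NMS over xyxy boxes: keep the highest-scoring box, drop
    everything overlapping it above iou_thresh, repeat."""
    order = scores.argsort()[::-1]
    order = order[scores[order] > score_thresh]
    keep = []
    while order.size > 0 and len(keep) < keep_top_k:
        i = order[0]
        keep.append(int(i))
        rest = order[1:]
        x1 = np.maximum(boxes[i, 0], boxes[rest, 0])
        y1 = np.maximum(boxes[i, 1], boxes[rest, 1])
        x2 = np.minimum(boxes[i, 2], boxes[rest, 2])
        y2 = np.minimum(boxes[i, 3], boxes[rest, 3])
        inter = np.clip(x2 - x1, 0, None) * np.clip(y2 - y1, 0, None)
        area_i = (boxes[i, 2] - boxes[i, 0]) * (boxes[i, 3] - boxes[i, 1])
        area_r = (boxes[rest, 2] - boxes[rest, 0]) * \
                 (boxes[rest, 3] - boxes[rest, 1])
        iou = inter / (area_i + area_r - inter)
        order = rest[iou <= iou_thresh]
    return keep
```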
- steps: 4000 - -OptimizerBuilder: - optimizer: - momentum: 0.9 - type: Momentum - regularizer: - factor: 0.0005 - type: L2 - -_READER_: 'yolov3_reader.yml' diff --git a/static/configs/yolov3_r34.yml b/static/configs/yolov3_r34.yml deleted file mode 100644 index 76edc41d0d178605956593ad7df0cfbfb8e2d5b4..0000000000000000000000000000000000000000 --- a/static/configs/yolov3_r34.yml +++ /dev/null @@ -1,64 +0,0 @@ -architecture: YOLOv3 -use_gpu: true -max_iters: 500000 -log_iter: 20 -save_dir: output -snapshot_iter: 10000 -metric: COCO -pretrain_weights: https://paddle-imagenet-models-name.bj.bcebos.com/ResNet34_pretrained.tar -weights: output/yolov3_r34/model_final -num_classes: 80 -use_fine_grained_loss: false - -YOLOv3: - backbone: ResNet - yolo_head: YOLOv3Head - -ResNet: - norm_type: sync_bn - freeze_at: 0 - freeze_norm: false - norm_decay: 0. - depth: 34 - feature_maps: [3, 4, 5] - -YOLOv3Head: - anchor_masks: [[6, 7, 8], [3, 4, 5], [0, 1, 2]] - anchors: [[10, 13], [16, 30], [33, 23], - [30, 61], [62, 45], [59, 119], - [116, 90], [156, 198], [373, 326]] - norm_decay: 0. - yolo_loss: YOLOv3Loss - nms: - background_label: -1 - keep_top_k: 100 - nms_threshold: 0.45 - nms_top_k: 1000 - normalized: false - score_threshold: 0.01 - -YOLOv3Loss: - ignore_thresh: 0.7 - label_smooth: true - -LearningRate: - base_lr: 0.001 - schedulers: - - !PiecewiseDecay - gamma: 0.1 - milestones: - - 400000 - - 450000 - - !LinearWarmup - start_factor: 0. - steps: 4000 - -OptimizerBuilder: - optimizer: - momentum: 0.9 - type: Momentum - regularizer: - factor: 0.0005 - type: L2 - -_READER_: 'yolov3_reader.yml' diff --git a/static/configs/yolov3_r34_voc.yml b/static/configs/yolov3_r34_voc.yml deleted file mode 100644 index 0e48498e4ccc88c251884137afb31974a7a83aa9..0000000000000000000000000000000000000000 --- a/static/configs/yolov3_r34_voc.yml +++ /dev/null @@ -1,89 +0,0 @@ -architecture: YOLOv3 -use_gpu: true -max_iters: 70000 -log_iter: 20 -save_dir: output -snapshot_iter: 2000 -metric: VOC -map_type: 11point -pretrain_weights: https://paddle-imagenet-models-name.bj.bcebos.com/ResNet34_pretrained.tar -weights: output/yolov3_r34_voc/model_final -num_classes: 20 -use_fine_grained_loss: false - -YOLOv3: - backbone: ResNet - yolo_head: YOLOv3Head - -ResNet: - norm_type: sync_bn - freeze_at: 0 - freeze_norm: false - norm_decay: 0. - depth: 34 - feature_maps: [3, 4, 5] - -YOLOv3Head: - anchor_masks: [[6, 7, 8], [3, 4, 5], [0, 1, 2]] - anchors: [[10, 13], [16, 30], [33, 23], - [30, 61], [62, 45], [59, 119], - [116, 90], [156, 198], [373, 326]] - norm_decay: 0. - yolo_loss: YOLOv3Loss - nms: - background_label: -1 - keep_top_k: 100 - nms_threshold: 0.45 - nms_top_k: 1000 - normalized: false - score_threshold: 0.01 - -YOLOv3Loss: - ignore_thresh: 0.7 - label_smooth: false - -LearningRate: - base_lr: 0.001 - schedulers: - - !PiecewiseDecay - gamma: 0.1 - milestones: - - 55000 - - 62000 - - !LinearWarmup - start_factor: 0. 
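YOLOv3Loss toggles label_smooth per config (true for the COCO r34 config above, false for the VOC ones). The usual formulation softens one-hot class targets toward a uniform distribution; the epsilon below is illustrative, as the exact value the implementation derives may differ:

```python
def smooth_onehot(onehot, eps=0.1):
    """Label smoothing: y <- y * (1 - eps) + eps / K for K classes."""
    k = len(onehot)
    return [y * (1.0 - eps) + eps / k for y in onehot]

# smooth_onehot([0, 1, 0, 0]) -> [0.025, 0.925, 0.025, 0.025]
```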
- steps: 1000 - -OptimizerBuilder: - optimizer: - momentum: 0.9 - type: Momentum - regularizer: - factor: 0.0005 - type: L2 - -_READER_: 'yolov3_reader.yml' -TrainReader: - dataset: - !VOCDataSet - dataset_dir: dataset/voc - anno_path: trainval.txt - use_default_label: true - with_background: false - -EvalReader: - inputs_def: - fields: ['image', 'im_size', 'im_id', 'gt_bbox', 'gt_class', 'is_difficult'] - num_max_boxes: 50 - dataset: - !VOCDataSet - dataset_dir: dataset/voc - anno_path: test.txt - use_default_label: true - with_background: false - -TestReader: - dataset: - !ImageFolder - use_default_label: true - with_background: false diff --git a/static/configs/yolov3_reader.yml b/static/configs/yolov3_reader.yml deleted file mode 100644 index 2a8463f1e6c2cb598ea4a55c6289f5b04b290d4a..0000000000000000000000000000000000000000 --- a/static/configs/yolov3_reader.yml +++ /dev/null @@ -1,111 +0,0 @@ -TrainReader: - inputs_def: - fields: ['image', 'gt_bbox', 'gt_class', 'gt_score'] - num_max_boxes: 50 - dataset: - !COCODataSet - image_dir: train2017 - anno_path: annotations/instances_train2017.json - dataset_dir: dataset/coco - with_background: false - sample_transforms: - - !DecodeImage - to_rgb: True - with_mixup: True - - !MixupImage - alpha: 1.5 - beta: 1.5 - - !ColorDistort {} - - !RandomExpand - fill_value: [123.675, 116.28, 103.53] - - !RandomCrop {} - - !RandomFlipImage - is_normalized: false - - !NormalizeBox {} - - !PadBox - num_max_boxes: 50 - - !BboxXYXY2XYWH {} - batch_transforms: - - !RandomShape - sizes: [320, 352, 384, 416, 448, 480, 512, 544, 576, 608] - random_inter: True - - !NormalizeImage - mean: [0.485, 0.456, 0.406] - std: [0.229, 0.224, 0.225] - is_scale: True - is_channel_first: false - - !Permute - to_bgr: false - channel_first: True - # Gt2YoloTarget is only used when use_fine_grained_loss set as true, - # this operator will be deleted automatically if use_fine_grained_loss - # is set as false - - !Gt2YoloTarget - anchor_masks: [[6, 7, 8], [3, 4, 5], [0, 1, 2]] - anchors: [[10, 13], [16, 30], [33, 23], - [30, 61], [62, 45], [59, 119], - [116, 90], [156, 198], [373, 326]] - downsample_ratios: [32, 16, 8] - batch_size: 8 - shuffle: true - mixup_epoch: 250 - drop_last: true - worker_num: 8 - bufsize: 16 - use_process: true - - -EvalReader: - inputs_def: - fields: ['image', 'im_size', 'im_id'] - num_max_boxes: 50 - dataset: - !COCODataSet - image_dir: val2017 - anno_path: annotations/instances_val2017.json - dataset_dir: dataset/coco - with_background: false - sample_transforms: - - !DecodeImage - to_rgb: True - - !ResizeImage - target_size: 608 - interp: 2 - - !NormalizeImage - mean: [0.485, 0.456, 0.406] - std: [0.229, 0.224, 0.225] - is_scale: True - is_channel_first: false - - !PadBox - num_max_boxes: 50 - - !Permute - to_bgr: false - channel_first: True - batch_size: 8 - drop_empty: false - worker_num: 8 - bufsize: 16 - -TestReader: - inputs_def: - image_shape: [3, 608, 608] - fields: ['image', 'im_size', 'im_id'] - dataset: - !ImageFolder - anno_path: annotations/instances_val2017.json - with_background: false - sample_transforms: - - !DecodeImage - to_rgb: True - - !ResizeImage - target_size: 608 - interp: 2 - - !NormalizeImage - mean: [0.485, 0.456, 0.406] - std: [0.229, 0.224, 0.225] - is_scale: True - is_channel_first: false - - !Permute - to_bgr: false - channel_first: True - batch_size: 1 diff --git a/static/configs/yolov4/README.md b/static/configs/yolov4/README.md deleted file mode 100644 index 
55e8a050dbbb43737c1792e8501ee37d1f724ff0..0000000000000000000000000000000000000000 --- a/static/configs/yolov4/README.md +++ /dev/null @@ -1,60 +0,0 @@
-# YOLO v4 Model
-
-## Contents
-- [Introduction](#introduction)
-- [Model Zoo](#model-zoo)
-- [Future Work](#future-work)
-- [How to Contribute](#how-to-contribute)
-
-## Introduction
-
-A Paddle implementation of [YOLO v4](https://arxiv.org/abs/2004.10934); PaddlePaddle 2.0.0 or above (or a suitable develop build) is required.
-
-The YOLO v4 weights from [darknet](https://github.com/AlexeyAB/darknet) have been converted and can be used for image prediction directly, reaching 43.5% on [test-dev2019](http://cocodataset.org/#detection-2019). Finetuning on the VOC dataset is also supported and reaches 85.5%.
-
-Several YOLO v4 modules are currently supported:
-
-- mish activation
-- PAN module
-- SPP module
-- ciou loss
-- label_smooth
-- grid_sensitive
-
-Anchor clustering for the YOLO series is also supported:
-``` bash
-python tools/anchor_cluster.py -c ${config} -m ${method} -s ${size}
-```
-The main parameters are described in the table below:
-| Parameter | Purpose | Default | Notes | |:------:|:------:|:------:|:------:| | -c/--config | model config file | none | required | | -n/--n | number of clusters | 9 | the number of anchors | | -s/--size | input image size | None | if given, the specified size is used; otherwise the size is read from the config file | | -m/--method | anchor clustering method | v2 | only the yolov2 clustering algorithm is currently supported | | -i/--iters | number of k-means iterations | 1000 | k-means stops on convergence or once the iteration limit is reached |
-
-## Model Zoo
-The table below lists the currently supported architectures.
-
-| | GPUs | Test set | Backbone | Accuracy | Download | Config | |:------------------------:|:-------:|:------:|:--------------------------:|:------------------------:| :---------:| :-----: | -| YOLO v4 | - |test-dev2019 | CSPDarkNet53 | 43.5 |[download](https://paddlemodels.bj.bcebos.com/object_detection/yolov4_cspdarknet.pdparams) | [config](https://github.com/PaddlePaddle/PaddleDetection/tree/develop/static/configs/yolov4/yolov4_cspdarknet.yml) | -| YOLO v4 VOC | 2 | VOC2007 | CSPDarkNet53 | 85.5 | [download](https://paddlemodels.bj.bcebos.com/object_detection/yolov4_cspdarknet_voc.pdparams) | [config](https://github.com/PaddlePaddle/PaddleDetection/tree/develop/static/configs/yolov4/yolov4_cspdarknet_voc.yml) |
-
-**Notes:**
-
-- The original YOLO v4 was trained on coco trainval2014, whose training samples overlap with part of the evaluation samples; evaluating on the val set would therefore inflate accuracy, so the model is evaluated on the coco test set instead.
-- The YOLO v4 model only supports coco test-set evaluation and image prediction. Because the test set carries no ground-truth boxes, predictions are written to a json file during evaluation; please submit the results to [cocodataset](http://cocodataset.org/#detection-2019) to obtain the final metrics.
-- The coco test set is test2017; see [coco2017](http://cocodataset.org/#download) for the download.
-
-
-## Future Work
-
-1. mish activation optimization
-2. mosaic data preprocessing implementation
-
-
-
-## How to Contribute
-Contributions to PaddleDetection are very welcome: submit a PR and we will review it. Feedback is equally appreciated; please open an issue and we will answer promptly. diff --git a/static/configs/yolov4/yolov4_cspdarknet.yml b/static/configs/yolov4/yolov4_cspdarknet.yml deleted file mode 100644 index e2299feee7b5dda667329c22f87ef0924442645d..0000000000000000000000000000000000000000 --- a/static/configs/yolov4/yolov4_cspdarknet.yml +++ /dev/null @@ -1,118 +0,0 @@ -architecture: YOLOv4 -use_gpu: true -max_iters: 500200 -log_iter: 20 -save_dir: output -snapshot_iter: 10000 -metric: COCO -pretrain_weights: https://paddlemodels.bj.bcebos.com/object_detection/yolov4_cspdarknet.pdparams -weights: output/yolov4_cspdarknet/model_final -num_classes: 80 -use_fine_grained_loss: true -save_prediction_only: True - -YOLOv4: - backbone: CSPDarkNet - yolo_head: YOLOv4Head - -CSPDarkNet: - norm_type: sync_bn - norm_decay: 0.
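The anchor clustering tool in the README above supports the yolov2-style method, which clusters ground-truth width/height pairs under a 1 - IoU distance. A self-contained sketch of that idea follows; the helper names are illustrative, not the internals of tools/anchor_cluster.py:

```python
import numpy as np

def wh_iou(boxes, centers):
    """IoU of (w, h) pairs against cluster centers, both anchored at the origin."""
    w = np.minimum(boxes[:, None, 0], centers[None, :, 0])
    h = np.minimum(boxes[:, None, 1], centers[None, :, 1])
    inter = w * h
    union = boxes.prod(1)[:, None] + centers.prod(1)[None, :] - inter
    return inter / union

def kmeans_anchors(boxes, n=9, iters=1000, seed=0):
    rng = np.random.default_rng(seed)
    centers = boxes[rng.choice(len(boxes), n, replace=False)].astype(np.float64)
    for _ in range(iters):
        assign = wh_iou(boxes, centers).argmax(1)   # nearest under 1 - IoU
        new = np.array([boxes[assign == k].mean(0) if (assign == k).any()
                        else centers[k] for k in range(n)])
        if np.allclose(new, centers):               # converged
            break
        centers = new
    return centers[np.argsort(centers.prod(1))]     # sort small to large
```

Run over a dataset's box sizes, this yields n anchors sorted by area, which is the shape of the anchors lists in these configs.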
- depth: 53 - -YOLOv4Head: - anchors: [[12, 16], [19, 36], [40, 28], [36, 75], [76, 55], - [72, 146], [142, 110], [192, 243], [459, 401]] - anchor_masks: [[0, 1, 2], [3, 4, 5], [6, 7, 8]] - nms: - background_label: -1 - keep_top_k: -1 - nms_threshold: 0.45 - nms_top_k: -1 - normalized: true - score_threshold: 0.001 - downsample: [8,16,32] - scale_x_y: [1.2, 1.1, 1.05] - -YOLOv3Loss: - ignore_thresh: 0.7 - label_smooth: true - downsample: [8,16,32] - scale_x_y: [1.2, 1.1, 1.05] - iou_loss: IouLoss - match_score: true - -IouLoss: - loss_weight: 0.07 - max_height: 608 - max_width: 608 - ciou_term: true - loss_square: false - -LearningRate: - base_lr: 0.0001 - schedulers: - - !PiecewiseDecay - gamma: 0.1 - milestones: - - 400000 - - 450000 - - !LinearWarmup - start_factor: 0. - steps: 1000 - -OptimizerBuilder: - clip_grad_by_norm: 10. - optimizer: - momentum: 0.949 - type: Momentum - regularizer: - factor: 0.0005 - type: L2 - -_READER_: '../yolov3_reader.yml' -EvalReader: - inputs_def: - fields: ['image', 'im_size', 'im_id'] - num_max_boxes: 90 - dataset: - !COCODataSet - image_dir: test2017 - anno_path: annotations/image_info_test-dev2017.json - dataset_dir: dataset/coco - with_background: false - sample_transforms: - - !DecodeImage - to_rgb: True - - !ResizeImage - target_size: 608 - interp: 1 - - !NormalizeImage - mean: [0., 0., 0.] - std: [1., 1., 1.] - is_scale: True - is_channel_first: false - - !Permute - to_bgr: false - channel_first: True - batch_size: 1 - -TestReader: - dataset: - !ImageFolder - use_default_label: true - with_background: false - sample_transforms: - - !DecodeImage - to_rgb: True - - !ResizeImage - target_size: 608 - interp: 1 - - !NormalizeImage - mean: [0., 0., 0.] - std: [1., 1., 1.] - is_scale: True - is_channel_first: false - - !Permute - to_bgr: false - channel_first: True diff --git a/static/configs/yolov4/yolov4_cspdarknet_coco.yml b/static/configs/yolov4/yolov4_cspdarknet_coco.yml deleted file mode 100644 index 4cb44c0777f5b0bd026e63d13ca45ee5b4e497e8..0000000000000000000000000000000000000000 --- a/static/configs/yolov4/yolov4_cspdarknet_coco.yml +++ /dev/null @@ -1,174 +0,0 @@ -architecture: YOLOv4 -use_gpu: true -max_iters: 500200 -log_iter: 20 -save_dir: output -snapshot_iter: 10000 -metric: COCO -pretrain_weights: https://paddlemodels.bj.bcebos.com/object_detection/CSPDarkNet53_pretrained.pdparams -weights: output/yolov4_cspdarknet_coco/model_final -num_classes: 80 -use_fine_grained_loss: true - -YOLOv4: - backbone: CSPDarkNet - yolo_head: YOLOv4Head - -CSPDarkNet: - norm_type: sync_bn - norm_decay: 0. - depth: 53 - -YOLOv4Head: - anchors: [[12, 16], [19, 36], [40, 28], [36, 75], [76, 55], - [72, 146], [142, 110], [192, 243], [459, 401]] - anchor_masks: [[0, 1, 2], [3, 4, 5], [6, 7, 8]] - nms: - background_label: -1 - keep_top_k: -1 - nms_threshold: 0.45 - nms_top_k: -1 - normalized: true - score_threshold: 0.001 - downsample: [8,16,32] - scale_x_y: [1.2, 1.1, 1.05] - -YOLOv3Loss: - ignore_thresh: 0.7 - label_smooth: true - downsample: [8,16,32] - scale_x_y: [1.2, 1.1, 1.05] - iou_loss: IouLoss - match_score: true - -IouLoss: - loss_weight: 0.07 - max_height: 608 - max_width: 608 - ciou_term: true - loss_square: true - -LearningRate: - base_lr: 0.0001 - schedulers: - - !PiecewiseDecay - gamma: 0.1 - milestones: - - 400000 - - 450000 - - !LinearWarmup - start_factor: 0. - steps: 4000 - -OptimizerBuilder: - clip_grad_by_norm: 10. 
- optimizer: - momentum: 0.949 - type: Momentum - regularizer: - factor: 0.0005 - type: L2 - -_READER_: '../yolov3_reader.yml' -TrainReader: - inputs_def: - fields: ['image', 'gt_bbox', 'gt_class', 'gt_score', 'im_id'] - num_max_boxes: 50 - dataset: - !COCODataSet - image_dir: train2017 - anno_path: annotations/instances_train2017.json - dataset_dir: dataset/coco - with_background: false - sample_transforms: - - !DecodeImage - to_rgb: True - - !ColorDistort {} - - !RandomExpand - fill_value: [123.675, 116.28, 103.53] - - !RandomCrop {} - - !RandomFlipImage - is_normalized: false - - !NormalizeBox {} - - !PadBox - num_max_boxes: 50 - - !BboxXYXY2XYWH {} - batch_transforms: - - !RandomShape - sizes: [320, 352, 384, 416, 448, 480, 512, 544, 576, 608] - random_inter: True - - !NormalizeImage - mean: [0.,0.,0.] - std: [1.,1.,1.] - is_scale: True - is_channel_first: false - - !Permute - to_bgr: false - channel_first: True - # Gt2YoloTarget is only used when use_fine_grained_loss set as true, - # this operator will be deleted automatically if use_fine_grained_loss - # is set as false - - !Gt2YoloTarget - anchor_masks: [[0, 1, 2], [3, 4, 5], [6, 7, 8]] - anchors: [[12, 16], [19, 36], [40, 28], - [36, 75], [76, 55], [72, 146], - [142, 110], [192, 243], [459, 401]] - downsample_ratios: [8, 16, 32] - batch_size: 8 - shuffle: true - drop_last: true - worker_num: 8 - bufsize: 16 - use_process: true - drop_empty: false - -EvalReader: - inputs_def: - fields: ['image', 'im_size', 'im_id'] - num_max_boxes: 90 - dataset: - !COCODataSet - image_dir: val2017 - anno_path: annotations/instances_val2017.json - dataset_dir: dataset/coco - with_background: false - sample_transforms: - - !DecodeImage - to_rgb: True - - !ResizeImage - target_size: 608 - interp: 1 - - !NormalizeImage - mean: [0., 0., 0.] - std: [1., 1., 1.] - is_scale: True - is_channel_first: false - - !PadBox - num_max_boxes: 90 - - !Permute - to_bgr: false - channel_first: True - batch_size: 4 - drop_empty: false - worker_num: 8 - bufsize: 16 - -TestReader: - dataset: - !ImageFolder - use_default_label: true - with_background: false - sample_transforms: - - !DecodeImage - to_rgb: True - - !ResizeImage - target_size: 608 - interp: 1 - - !NormalizeImage - mean: [0., 0., 0.] - std: [1., 1., 1.] - is_scale: True - is_channel_first: false - - !Permute - to_bgr: false - channel_first: True diff --git a/static/configs/yolov4/yolov4_cspdarknet_voc.yml b/static/configs/yolov4/yolov4_cspdarknet_voc.yml deleted file mode 100644 index 4bdc7669a6807fffaf1a2d4d340a1cfca70f4ede..0000000000000000000000000000000000000000 --- a/static/configs/yolov4/yolov4_cspdarknet_voc.yml +++ /dev/null @@ -1,173 +0,0 @@ -architecture: YOLOv4 -use_gpu: true -max_iters: 140000 -log_iter: 20 -save_dir: output -snapshot_iter: 1000 -metric: VOC -pretrain_weights: https://paddlemodels.bj.bcebos.com/object_detection/yolov4_cspdarknet.pdparams -weights: output/yolov4_cspdarknet_voc/model_final -num_classes: 20 -use_fine_grained_loss: true - -YOLOv4: - backbone: CSPDarkNet - yolo_head: YOLOv4Head - -CSPDarkNet: - norm_type: sync_bn - norm_decay: 0. 
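The scale_x_y entries implement YOLO v4's grid-sensitive decoding: stretching the sigmoid lets predicted centers actually reach cell borders, which plain sigmoid decoding only approaches at saturation. Sketched for one coordinate from the published formula, not the Paddle kernel:

```python
import math

def decode_center_x(tx, cell_x, grid_w, scale_x_y=1.05):
    """bx = (cx + s * sigmoid(tx) - (s - 1) / 2) / grid_w, normalized to
    [0, 1]. With s = 1.0 this reduces to standard YOLOv3 decoding."""
    s = scale_x_y
    return (cell_x + s / (1.0 + math.exp(-tx)) - (s - 1.0) / 2.0) / grid_w
```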
- depth: 53 - -YOLOv4Head: - anchors: [[12, 16], [19, 36], [40, 28], [36, 75], [76, 55], - [72, 146], [142, 110], [192, 243], [459, 401]] - anchor_masks: [[0, 1, 2], [3, 4, 5], [6, 7, 8]] - nms: - background_label: -1 - keep_top_k: -1 - nms_threshold: 0.45 - nms_top_k: -1 - normalized: true - score_threshold: 0.001 - downsample: [8,16,32] - scale_x_y: [1.2, 1.1, 1.05] - -YOLOv3Loss: - ignore_thresh: 0.7 - label_smooth: true - downsample: [8,16,32] - scale_x_y: [1.2, 1.1, 1.05] - iou_loss: IouLoss - match_score: true - -IouLoss: - loss_weight: 0.07 - max_height: 608 - max_width: 608 - ciou_term: true - loss_square: true - -LearningRate: - base_lr: 0.0001 - schedulers: - - !PiecewiseDecay - gamma: 0.1 - milestones: - - 110000 - - 130000 - - !LinearWarmup - start_factor: 0. - steps: 1000 - -OptimizerBuilder: - clip_grad_by_norm: 10. - optimizer: - momentum: 0.949 - type: Momentum - regularizer: - factor: 0.0005 - type: L2 - -_READER_: '../yolov3_reader.yml' -TrainReader: - inputs_def: - fields: ['image', 'gt_bbox', 'gt_class', 'gt_score', 'im_id'] - num_max_boxes: 50 - dataset: - !VOCDataSet - anno_path: trainval.txt - dataset_dir: dataset/voc - with_background: false - sample_transforms: - - !DecodeImage - to_rgb: True - - !ColorDistort {} - - !RandomExpand - fill_value: [123.675, 116.28, 103.53] - - !RandomCrop {} - - !RandomFlipImage - is_normalized: false - - !NormalizeBox {} - - !PadBox - num_max_boxes: 50 - - !BboxXYXY2XYWH {} - batch_transforms: - - !RandomShape - sizes: [320, 352, 384, 416, 448, 480, 512, 544, 576, 608] - random_inter: True - - !NormalizeImage - mean: [0.,0.,0.] - std: [1.,1.,1.] - is_scale: True - is_channel_first: false - - !Permute - to_bgr: false - channel_first: True - # Gt2YoloTarget is only used when use_fine_grained_loss set as true, - # this operator will be deleted automatically if use_fine_grained_loss - # is set as false - - !Gt2YoloTarget - anchor_masks: [[0, 1, 2], [3, 4, 5], [6, 7, 8]] - anchors: [[12, 16], [19, 36], [40, 28], - [36, 75], [76, 55], [72, 146], - [142, 110], [192, 243], [459, 401]] - downsample_ratios: [8, 16, 32] - batch_size: 4 - shuffle: true - drop_last: true - worker_num: 8 - bufsize: 16 - use_process: true - drop_empty: false - -EvalReader: - inputs_def: - fields: ['image', 'im_size', 'im_id', 'gt_bbox', 'gt_class', 'is_difficult'] - num_max_boxes: 90 - dataset: - !VOCDataSet - anno_path: test.txt - dataset_dir: dataset/voc - use_default_label: true - with_background: false - sample_transforms: - - !DecodeImage - to_rgb: True - - !ResizeImage - target_size: 608 - interp: 1 - - !NormalizeImage - mean: [0., 0., 0.] - std: [1., 1., 1.] - is_scale: True - is_channel_first: false - - !PadBox - num_max_boxes: 90 - - !Permute - to_bgr: false - channel_first: True - batch_size: 4 - drop_empty: false - worker_num: 8 - bufsize: 16 - -TestReader: - dataset: - !ImageFolder - use_default_label: true - with_background: false - sample_transforms: - - !DecodeImage - to_rgb: True - - !ResizeImage - target_size: 608 - interp: 1 - - !NormalizeImage - mean: [0., 0., 0.] - std: [1., 1., 1.] 
- is_scale: True - is_channel_first: false - - !Permute - to_bgr: false - channel_first: True diff --git a/static/contrib/PedestrianDetection/demo/001.png b/static/contrib/PedestrianDetection/demo/001.png deleted file mode 100644 index 63ae9167fd03e8a95756fe5f6195fc8d741b9cfa..0000000000000000000000000000000000000000 Binary files a/static/contrib/PedestrianDetection/demo/001.png and /dev/null differ diff --git a/static/contrib/PedestrianDetection/demo/002.png b/static/contrib/PedestrianDetection/demo/002.png deleted file mode 100644 index 0de905cf55e6b02487ee1b8220810df8eaa24c2c..0000000000000000000000000000000000000000 Binary files a/static/contrib/PedestrianDetection/demo/002.png and /dev/null differ diff --git a/static/contrib/PedestrianDetection/demo/003.png b/static/contrib/PedestrianDetection/demo/003.png deleted file mode 100644 index e9026e099df42d4267be07a71401eb5426b47745..0000000000000000000000000000000000000000 Binary files a/static/contrib/PedestrianDetection/demo/003.png and /dev/null differ diff --git a/static/contrib/PedestrianDetection/demo/004.png b/static/contrib/PedestrianDetection/demo/004.png deleted file mode 100644 index d8118ec3e0ef63bc74e825b5e7638a1886580604..0000000000000000000000000000000000000000 Binary files a/static/contrib/PedestrianDetection/demo/004.png and /dev/null differ diff --git a/static/contrib/PedestrianDetection/pedestrian.json b/static/contrib/PedestrianDetection/pedestrian.json deleted file mode 100644 index f72fe6dc65209ab3506d18556fb8b83b6ec832a9..0000000000000000000000000000000000000000 --- a/static/contrib/PedestrianDetection/pedestrian.json +++ /dev/null @@ -1,11 +0,0 @@ -{ - "images": [], - "annotations": [], - "categories": [ - { - "supercategory": "component", - "id": 1, - "name": "pedestrian" - } - ] -} diff --git a/static/contrib/PedestrianDetection/pedestrian_yolov3_darknet.yml b/static/contrib/PedestrianDetection/pedestrian_yolov3_darknet.yml deleted file mode 100644 index 379ece6820bc3e5152795a36a27401a7baee025c..0000000000000000000000000000000000000000 --- a/static/contrib/PedestrianDetection/pedestrian_yolov3_darknet.yml +++ /dev/null @@ -1,86 +0,0 @@ -architecture: YOLOv3 -use_gpu: true -max_iters: 200000 -log_iter: 20 -save_dir: output -snapshot_iter: 5000 -metric: COCO -pretrain_weights: https://paddle-imagenet-models-name.bj.bcebos.com/DarkNet53_pretrained.tar -weights: https://paddlemodels.bj.bcebos.com/object_detection/pedestrian_yolov3_darknet.tar -num_classes: 1 -use_fine_grained_loss: false - -YOLOv3: - backbone: DarkNet - yolo_head: YOLOv3Head - -DarkNet: - norm_type: sync_bn - norm_decay: 0. - depth: 53 - -YOLOv3Head: - anchor_masks: [[6, 7, 8], [3, 4, 5], [0, 1, 2]] - anchors: [[10, 13], [16, 30], [33, 23], - [30, 61], [62, 45], [59, 119], - [116, 90], [156, 198], [373, 326]] - norm_decay: 0. - yolo_loss: YOLOv3Loss - nms: - background_label: -1 - keep_top_k: 100 - nms_threshold: 0.45 - nms_top_k: 1000 - normalized: false - score_threshold: 0.01 - -YOLOv3Loss: - batch_size: 8 - ignore_thresh: 0.7 - label_smooth: false - -LearningRate: - base_lr: 0.001 - schedulers: - - !PiecewiseDecay - gamma: 0.1 - milestones: - - 150000 - - 180000 - - !LinearWarmup - start_factor: 0. 
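Gt2YoloTarget, used in the TrainReaders above whenever use_fine_grained_loss is true, precomputes per-head training targets. Its core routing step picks the anchor with the best width/height IoU and hands the box to the head whose anchor_mask owns that anchor; a simplified sketch, since the real op also fills the per-cell objectness, class and box targets:

```python
import numpy as np

def route_gt(gt_w, gt_h, anchors, anchor_masks, downsample_ratios):
    """Return the best-matching anchor index and the stride of the head
    whose mask contains it."""
    a = np.asarray(anchors, dtype=np.float32)
    inter = np.minimum(gt_w, a[:, 0]) * np.minimum(gt_h, a[:, 1])
    iou = inter / (gt_w * gt_h + a.prod(1) - inter)
    best = int(iou.argmax())
    for mask, ds in zip(anchor_masks, downsample_ratios):
        if best in mask:
            return best, ds

# route_gt(30, 40, [[12, 16], [19, 36], [40, 28], [36, 75], [76, 55],
#                   [72, 146], [142, 110], [192, 243], [459, 401]],
#          [[0, 1, 2], [3, 4, 5], [6, 7, 8]], [8, 16, 32]) -> (1, 8)
```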
- steps: 4000 - -OptimizerBuilder: - optimizer: - momentum: 0.9 - type: Momentum - regularizer: - factor: 0.0005 - type: L2 - -_READER_: '../../configs/yolov3_reader.yml' -TrainReader: - batch_size: 8 - dataset: - !COCODataSet - dataset_dir: dataset/pedestrian - anno_path: annotations/instances_train2017.json - image_dir: train2017 - with_background: false - -EvalReader: - batch_size: 8 - dataset: - !COCODataSet - dataset_dir: dataset/pedestrian - anno_path: annotations/instances_val2017.json - image_dir: val2017 - with_background: false - -TestReader: - batch_size: 1 - dataset: - !ImageFolder - anno_path: contrib/PedestrianDetection/pedestrian.json - with_background: false diff --git a/static/contrib/README.md b/static/contrib/README.md deleted file mode 100644 index 51f8b57e396ba892d401f29ec7ded657521b1065..0000000000000000000000000000000000000000 --- a/static/contrib/README.md +++ /dev/null @@ -1,2 +0,0 @@ -**文档教程请参考:** [CONTRIB_cn.md](../docs/featured_model/CONTRIB_cn.md)
    -**English document please refer:** [CONTRIB.md](../docs/featured_model/CONTRIB.md) diff --git a/static/contrib/VehicleDetection/demo/001.jpeg b/static/contrib/VehicleDetection/demo/001.jpeg deleted file mode 100644 index 8786db5eb6773931c363358bb39462b33db55369..0000000000000000000000000000000000000000 Binary files a/static/contrib/VehicleDetection/demo/001.jpeg and /dev/null differ diff --git a/static/contrib/VehicleDetection/demo/003.png b/static/contrib/VehicleDetection/demo/003.png deleted file mode 100644 index c01ab4ce769fb3b1c8863093a35d27da0ab10efd..0000000000000000000000000000000000000000 Binary files a/static/contrib/VehicleDetection/demo/003.png and /dev/null differ diff --git a/static/contrib/VehicleDetection/demo/004.png b/static/contrib/VehicleDetection/demo/004.png deleted file mode 100644 index 8907eb8d4d9b82e08ca214509c9fb41ca889db2a..0000000000000000000000000000000000000000 Binary files a/static/contrib/VehicleDetection/demo/004.png and /dev/null differ diff --git a/static/contrib/VehicleDetection/demo/005.png b/static/contrib/VehicleDetection/demo/005.png deleted file mode 100644 index bf17712809c2fe6fa8e7d4f093ec4ac94523537c..0000000000000000000000000000000000000000 Binary files a/static/contrib/VehicleDetection/demo/005.png and /dev/null differ diff --git a/static/contrib/VehicleDetection/vehicle.json b/static/contrib/VehicleDetection/vehicle.json deleted file mode 100644 index 5863a9a8c9e0d8b4daeff31e7fe7869e084d3fb4..0000000000000000000000000000000000000000 --- a/static/contrib/VehicleDetection/vehicle.json +++ /dev/null @@ -1,36 +0,0 @@ -{ - "images": [], - "annotations": [], - "categories": [ - { - "supercategory": "component", - "id": 1, - "name": "car" - }, - { - "supercategory": "component", - "id": 2, - "name": "truck" - }, - { - "supercategory": "component", - "id": 3, - "name": "bus" - }, - { - "supercategory": "component", - "id": 4, - "name": "motorbike" - }, - { - "supercategory": "component", - "id": 5, - "name": "tricycle" - }, - { - "supercategory": "component", - "id": 6, - "name": "carplate" - } - ] -} diff --git a/static/contrib/VehicleDetection/vehicle_yolov3_darknet.yml b/static/contrib/VehicleDetection/vehicle_yolov3_darknet.yml deleted file mode 100644 index 7c2ddbd1efcff7d5413168e4b9d9b62be1b1aa6f..0000000000000000000000000000000000000000 --- a/static/contrib/VehicleDetection/vehicle_yolov3_darknet.yml +++ /dev/null @@ -1,85 +0,0 @@ -architecture: YOLOv3 -use_gpu: true -max_iters: 120000 -log_iter: 20 -save_dir: output -snapshot_iter: 2000 -metric: COCO -pretrain_weights: https://paddle-imagenet-models-name.bj.bcebos.com/DarkNet53_pretrained.tar -weights: https://paddlemodels.bj.bcebos.com/object_detection/vehicle_yolov3_darknet.tar -num_classes: 6 - -YOLOv3: - backbone: DarkNet - yolo_head: YOLOv3Head - -DarkNet: - norm_type: sync_bn - norm_decay: 0. - depth: 53 - -YOLOv3Head: - anchor_masks: [[6, 7, 8], [3, 4, 5], [0, 1, 2]] - anchors: [[8, 9], [10, 23], [19, 15], - [23, 33], [40, 25], [54, 50], - [101, 80], [139, 145], [253, 224]] - norm_decay: 0. - yolo_loss: YOLOv3Loss - nms: - background_label: -1 - keep_top_k: 100 - nms_threshold: 0.45 - nms_top_k: 400 - normalized: false - score_threshold: 0.005 - -YOLOv3Loss: - batch_size: 8 - ignore_thresh: 0.7 - label_smooth: false - -LearningRate: - base_lr: 0.001 - schedulers: - - !PiecewiseDecay - gamma: 0.1 - milestones: - - 60000 - - 80000 - - !LinearWarmup - start_factor: 0. 
- steps: 4000 - -OptimizerBuilder: - optimizer: - momentum: 0.9 - type: Momentum - regularizer: - factor: 0.0005 - type: L2 - -_READER_: '../../configs/yolov3_reader.yml' -TrainReader: - batch_size: 8 - dataset: - !COCODataSet - dataset_dir: dataset/vehicle - anno_path: annotations/instances_train2017.json - image_dir: train2017 - with_background: false - -EvalReader: - batch_size: 8 - dataset: - !COCODataSet - dataset_dir: dataset/vehicle - anno_path: annotations/instances_val2017.json - image_dir: val2017 - with_background: false - -TestReader: - batch_size: 1 - dataset: - !ImageFolder - anno_path: contrib/VehicleDetection/vehicle.json - with_background: false diff --git a/static/dataset/coco/download_coco.py b/static/dataset/coco/download_coco.py deleted file mode 100644 index 47659fa76dd2c1183404667efac3a48de9b099c2..0000000000000000000000000000000000000000 --- a/static/dataset/coco/download_coco.py +++ /dev/null @@ -1,28 +0,0 @@ -# Copyright (c) 2019 PaddlePaddle Authors. All Rights Reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. - -import sys -import os.path as osp -import logging -# add python path of PadleDetection to sys.path -parent_path = osp.abspath(osp.join(__file__, *(['..'] * 3))) -if parent_path not in sys.path: - sys.path.append(parent_path) - -from ppdet.utils.download import download_dataset - -logging.basicConfig(level=logging.INFO) - -download_path = osp.split(osp.realpath(sys.argv[0]))[0] -download_dataset(download_path, 'coco') diff --git a/static/dataset/coco/objects365_label.txt b/static/dataset/coco/objects365_label.txt deleted file mode 100644 index f71ffd8c01dc7b4b148cf7bb28da4ea038fbb64b..0000000000000000000000000000000000000000 --- a/static/dataset/coco/objects365_label.txt +++ /dev/null @@ -1,365 +0,0 @@ -person -sneakers -chair -hat -lamp -bottle -cabinet/shelf -cup -car -glasses -picture/frame -desk -handbag -street lights -book -plate -helmet -leather shoes -pillow -glove -potted plant -bracelet -flower -tv -storage box -vase -bench -wine glass -boots -bowl -dining table -umbrella -boat -flag -speaker -trash bin/can -stool -backpack -couch -belt -carpet -basket -towel/napkin -slippers -barrel/bucket -coffee table -suv -toy -tie -bed -traffic light -pen/pencil -microphone -sandals -canned -necklace -mirror -faucet -bicycle -bread -high heels -ring -van -watch -sink -horse -fish -apple -camera -candle -teddy bear -cake -motorcycle -wild bird -laptop -knife -traffic sign -cell phone -paddle -truck -cow -power outlet -clock -drum -fork -bus -hanger -nightstand -pot/pan -sheep -guitar -traffic cone -tea pot -keyboard -tripod -hockey -fan -dog -spoon -blackboard/whiteboard -balloon -air conditioner -cymbal -mouse -telephone -pickup truck -orange -banana -airplane -luggage -skis -soccer -trolley -oven -remote -baseball glove -paper towel -refrigerator -train -tomato -machinery vehicle -tent -shampoo/shower gel -head phone -lantern -donut -cleaning products -sailboat -tangerine -pizza -kite -computer box -elephant -toiletries -gas stove -broccoli -toilet 
-stroller -shovel -baseball bat -microwave -skateboard -surfboard -surveillance camera -gun -life saver -cat -lemon -liquid soap -zebra -duck -sports car -giraffe -pumpkin -piano -stop sign -radiator -converter -tissue -carrot -washing machine -vent -cookies -cutting/chopping board -tennis racket -candy -skating and skiing shoes -scissors -folder -baseball -strawberry -bow tie -pigeon -pepper -coffee machine -bathtub -snowboard -suitcase -grapes -ladder -pear -american football -basketball -potato -paint brush -printer -billiards -fire hydrant -goose -projector -sausage -fire extinguisher -extension cord -facial mask -tennis ball -chopsticks -electronic stove and gas stove -pie -frisbee -kettle -hamburger -golf club -cucumber -clutch -blender -tong -slide -hot dog -toothbrush -facial cleanser -mango -deer -egg -violin -marker -ship -chicken -onion -ice cream -tape -wheelchair -plum -bar soap -scale -watermelon -cabbage -router/modem -golf ball -pine apple -crane -fire truck -peach -cello -notepaper -tricycle -toaster -helicopter -green beans -brush -carriage -cigar -earphone -penguin -hurdle -swing -radio -CD -parking meter -swan -garlic -french fries -horn -avocado -saxophone -trumpet -sandwich -cue -kiwi fruit -bear -fishing rod -cherry -tablet -green vegetables -nuts -corn -key -screwdriver -globe -broom -pliers -volleyball -hammer -eggplant -trophy -dates -board eraser -rice -tape measure/ruler -dumbbell -hamimelon -stapler -camel -lettuce -goldfish -meat balls -medal -toothpaste -antelope -shrimp -rickshaw -trombone -pomegranate -coconut -jellyfish -mushroom -calculator -treadmill -butterfly -egg tart -cheese -pig -pomelo -race car -rice cooker -tuba -crosswalk sign -papaya -hair drier -green onion -chips -dolphin -sushi -urinal -donkey -electric drill -spring rolls -tortoise/turtle -parrot -flute -measuring cup -shark -steak -poker card -binoculars -llama -radish -noodles -yak -mop -crab -microscope -barbell -bread/bun -baozi -lion -red cabbage -polar bear -lighter -seal -mangosteen -comb -eraser -pitaya -scallop -pencil case -saw -table tennis paddle -okra -starfish -eagle -monkey -durian -game board -rabbit -french horn -ambulance -asparagus -hoverboard -pasta -target -hotair balloon -chainsaw -lobster -iron -flashlight \ No newline at end of file diff --git a/static/dataset/fddb/download.sh b/static/dataset/fddb/download.sh deleted file mode 100755 index 7a40c8b0511f9f7bf45ef6b10bc2a8725145f381..0000000000000000000000000000000000000000 --- a/static/dataset/fddb/download.sh +++ /dev/null @@ -1,31 +0,0 @@ -# All rights `PaddleDetection` reserved -# References: -# @TechReport{fddbTech, -# author = {Vidit Jain and Erik Learned-Miller}, -# title = {FDDB: A Benchmark for Face Detection in Unconstrained Settings}, -# institution = {University of Massachusetts, Amherst}, -# year = {2010}, -# number = {UM-CS-2010-009} -# } - -DIR="$( cd "$(dirname "$0")" ; pwd -P )" -cd "$DIR" - -# Download the data. -echo "Downloading..." -# external link to the Faces in the Wild dataset and annotations file -wget http://tamaraberg.com/faceDataset/originalPics.tar.gz -wget http://vis-www.cs.umass.edu/fddb/FDDB-folds.tgz -wget http://vis-www.cs.umass.edu/fddb/evaluation.tgz - -# Extract the data. -echo "Extracting..." -tar -zxf originalPics.tar.gz -tar -zxf FDDB-folds.tgz -tar -zxf evaluation.tgz - -# Generate full image path list and groundtruth in FDDB-folds: -cd FDDB-folds -cat `ls|grep -v"ellipse"` > filePath.txt && cat *ellipse* > fddb_annotFile.txt -cd .. -echo "------------- All done! 
--------------" diff --git a/static/dataset/fruit/download_fruit.py b/static/dataset/fruit/download_fruit.py deleted file mode 100644 index 2db2e207210c4bab39e8dfdb3abe91a51c49af1f..0000000000000000000000000000000000000000 --- a/static/dataset/fruit/download_fruit.py +++ /dev/null @@ -1,28 +0,0 @@
-# Copyright (c) 2019 PaddlePaddle Authors. All Rights Reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-import sys
-import os.path as osp
-import logging
-# add python path of PaddleDetection to sys.path
-parent_path = osp.abspath(osp.join(__file__, *(['..'] * 3)))
-if parent_path not in sys.path:
- sys.path.append(parent_path)
-
-from ppdet.utils.download import download_dataset
-
-logging.basicConfig(level=logging.INFO)
-
-download_path = osp.split(osp.realpath(sys.argv[0]))[0]
-download_dataset(download_path, 'fruit') diff --git a/static/dataset/fruit/label_list.txt b/static/dataset/fruit/label_list.txt deleted file mode 100644 index 1f60d62c399939cd92e667c1fb938764b3ec2901..0000000000000000000000000000000000000000 --- a/static/dataset/fruit/label_list.txt +++ /dev/null @@ -1,3 +0,0 @@ -apple -banana -orange diff --git a/static/dataset/roadsign_voc/download_roadsign_voc.py b/static/dataset/roadsign_voc/download_roadsign_voc.py deleted file mode 100644 index 3cb517d3cf362e3ad2ec7b4ebf3bff54acb244d4..0000000000000000000000000000000000000000 --- a/static/dataset/roadsign_voc/download_roadsign_voc.py +++ /dev/null @@ -1,28 +0,0 @@
-# Copyright (c) 2020 PaddlePaddle Authors. All Rights Reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-import sys
-import os.path as osp
-import logging
-# add python path of PaddleDetection to sys.path
-parent_path = osp.abspath(osp.join(__file__, *(['..'] * 3)))
-if parent_path not in sys.path:
- sys.path.append(parent_path)
-
-from ppdet.utils.download import download_dataset
-
-logging.basicConfig(level=logging.INFO)
-
-download_path = osp.split(osp.realpath(sys.argv[0]))[0]
-download_dataset(download_path, 'roadsign_voc') diff --git a/static/dataset/roadsign_voc/label_list.txt b/static/dataset/roadsign_voc/label_list.txt deleted file mode 100644 index 1be460f457a2fdbec91d3a69377c232ae4a6beb0..0000000000000000000000000000000000000000 --- a/static/dataset/roadsign_voc/label_list.txt +++ /dev/null @@ -1,4 +0,0 @@ -speedlimit -crosswalk -trafficlight -stop \ No newline at end of file diff --git a/static/dataset/voc/create_list.py b/static/dataset/voc/create_list.py deleted file mode 100644 index a137bd38caf713f5930768f0018f16dfaaf6feea..0000000000000000000000000000000000000000 --- a/static/dataset/voc/create_list.py +++ /dev/null @@ -1,45 +0,0 @@
-# Copyright (c) 2019 PaddlePaddle Authors. All Rights Reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-import sys
-import os.path as osp
-import logging
-import argparse
-
-# add python path of PaddleDetection to sys.path
-parent_path = osp.abspath(osp.join(__file__, *(['..'] * 3)))
-if parent_path not in sys.path:
- sys.path.append(parent_path)
-
-from ppdet.utils.download import create_voc_list
-logging.basicConfig(level=logging.INFO)
-
-
-def main(config):
- voc_path = config.dataset_dir
- create_voc_list(voc_path)
-
-
-if __name__ == '__main__':
- parser = argparse.ArgumentParser()
- default_voc_path = osp.split(osp.realpath(sys.argv[0]))[0]
- parser.add_argument(
- "-d",
- "--dataset_dir",
- default=default_voc_path,
- type=str,
- help="VOC dataset directory, default is current directory.")
- config = parser.parse_args()
-
- main(config) diff --git a/static/dataset/voc/download_voc.py b/static/dataset/voc/download_voc.py deleted file mode 100644 index 080226ee94ffbcae59c4caf509fcb2d6c67f7161..0000000000000000000000000000000000000000 --- a/static/dataset/voc/download_voc.py +++ /dev/null @@ -1,29 +0,0 @@
-# Copyright (c) 2019 PaddlePaddle Authors. All Rights Reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
- -import sys -import os.path as osp -import logging -# add python path of PaddleDetection to sys.path -parent_path = osp.abspath(osp.join(__file__, *(['..'] * 3))) -if parent_path not in sys.path: - sys.path.append(parent_path) - -from ppdet.utils.download import download_dataset, create_voc_list - -logging.basicConfig(level=logging.INFO) - -download_path = osp.split(osp.realpath(sys.argv[0]))[0] -download_dataset(download_path, 'voc') -create_voc_list(download_path) diff --git a/static/dataset/voc/generic_det_label_list.txt b/static/dataset/voc/generic_det_label_list.txt deleted file mode 100644 index 410f9ae593ba501be091bc267491f6158c339a44..0000000000000000000000000000000000000000 --- a/static/dataset/voc/generic_det_label_list.txt +++ /dev/null @@ -1,676 +0,0 @@ -Infant bed -Rose -Flag -Flashlight -Sea turtle -Camera -Animal -Glove -Crocodile -Cattle -House -Guacamole -Penguin -Vehicle registration plate -Bench -Ladybug -Human nose -Watermelon -Flute -Butterfly -Washing machine -Raccoon -Segway -Taco -Jellyfish -Cake -Pen -Cannon -Bread -Tree -Shellfish -Bed -Hamster -Hat -Toaster -Sombrero -Tiara -Bowl -Dragonfly -Moths and butterflies -Antelope -Vegetable -Torch -Building -Power plugs and sockets -Blender -Billiard table -Cutting board -Bronze sculpture -Turtle -Broccoli -Tiger -Mirror -Bear -Zucchini -Dress -Volleyball -Guitar -Reptile -Golf cart -Tart -Fedora -Carnivore -Car -Lighthouse -Coffeemaker -Food processor -Truck -Bookcase -Surfboard -Footwear -Bench -Necklace -Flower -Radish -Marine mammal -Frying pan -Tap -Peach -Knife -Handbag -Laptop -Tent -Ambulance -Christmas tree -Eagle -Limousine -Kitchen & dining room table -Polar bear -Tower -Football -Willow -Human head -Stop sign -Banana -Mixer -Binoculars -Dessert -Bee -Chair -Wood-burning stove -Flowerpot -Beaker -Oyster -Woodpecker -Harp -Bathtub -Wall clock -Sports uniform -Rhinoceros -Beehive -Cupboard -Chicken -Man -Blue jay -Cucumber -Balloon -Kite -Fireplace -Lantern -Missile -Book -Spoon -Grapefruit -Squirrel -Orange -Coat -Punching bag -Zebra -Billboard -Bicycle -Door handle -Mechanical fan -Ring binder -Table -Parrot -Sock -Vase -Weapon -Shotgun -Glasses -Seahorse -Belt -Watercraft -Window -Giraffe -Lion -Tire -Vehicle -Canoe -Tie -Shelf -Picture frame -Printer -Human leg -Boat -Slow cooker -Croissant -Candle -Pancake -Pillow -Coin -Stretcher -Sandal -Woman -Stairs -Harpsichord -Stool -Bus -Suitcase -Human mouth -Juice -Skull -Door -Violin -Chopsticks -Digital clock -Sunflower -Leopard -Bell pepper -Harbor seal -Snake -Sewing machine -Goose -Helicopter -Seat belt -Coffee cup -Microwave oven -Hot dog -Countertop -Serving tray -Dog bed -Beer -Sunglasses -Golf ball -Waffle -Palm tree -Trumpet -Ruler -Helmet -Ladder -Office building -Tablet computer -Toilet paper -Pomegranate -Skirt -Gas stove -Cookie -Cart -Raven -Egg -Burrito -Goat -Kitchen knife -Skateboard -Salt and pepper shakers -Lynx -Boot -Platter -Ski -Swimwear -Swimming pool -Drinking straw -Wrench -Drum -Ant -Human ear -Headphones -Fountain -Bird -Jeans -Television -Crab -Microphone -Home appliance -Snowplow -Beetle -Artichoke -Jet ski -Stationary bicycle -Human hair -Brown bear -Starfish -Fork -Lobster -Corded phone -Drink -Saucer -Carrot -Insect -Clock -Castle -Tennis racket -Ceiling fan -Asparagus -Jaguar -Musical instrument -Train -Cat -Rifle -Dumbbell -Mobile phone -Taxi -Shower -Pitcher -Lemon -Invertebrate -Turkey -High heels -Bust -Elephant -Scarf -Barrel -Trombone -Pumpkin -Box -Tomato -Frog -Bidet -Human face -Houseplant -Van -Shark -Ice cream -Swim
cap -Falcon -Ostrich -Handgun -Whiteboard -Lizard -Pasta -Snowmobile -Light bulb -Window blind -Muffin -Pretzel -Computer monitor -Horn -Furniture -Sandwich -Fox -Convenience store -Fish -Fruit -Earrings -Curtain -Grape -Sofa bed -Horse -Luggage and bags -Desk -Crutch -Bicycle helmet -Tick -Airplane -Canary -Spatula -Watch -Lily -Kitchen appliance -Filing cabinet -Aircraft -Cake stand -Candy -Sink -Mouse -Wine -Wheelchair -Goldfish -Refrigerator -French fries -Drawer -Treadmill -Picnic basket -Dice -Cabbage -Football helmet -Pig -Person -Shorts -Gondola -Honeycomb -Doughnut -Chest of drawers -Land vehicle -Bat -Monkey -Dagger -Tableware -Human foot -Mug -Alarm clock -Pressure cooker -Human hand -Tortoise -Baseball glove -Sword -Pear -Miniskirt -Traffic sign -Girl -Roller skates -Dinosaur -Porch -Human beard -Submarine sandwich -Screwdriver -Strawberry -Wine glass -Seafood -Racket -Wheel -Sea lion -Toy -Tea -Tennis ball -Waste container -Mule -Cricket ball -Pineapple -Coconut -Doll -Coffee table -Snowman -Lavender -Shrimp -Maple -Cowboy hat -Goggles -Rugby ball -Caterpillar -Poster -Rocket -Organ -Saxophone -Traffic light -Cocktail -Plastic bag -Squash -Mushroom -Hamburger -Light switch -Parachute -Teddy bear -Winter melon -Deer -Musical keyboard -Plumbing fixture -Scoreboard -Baseball bat -Envelope -Adhesive tape -Briefcase -Paddle -Bow and arrow -Telephone -Sheep -Jacket -Boy -Pizza -Otter -Office supplies -Couch -Cello -Bull -Camel -Ball -Duck -Whale -Shirt -Tank -Motorcycle -Accordion -Owl -Porcupine -Sun hat -Nail -Scissors -Swan -Lamp -Crown -Piano -Sculpture -Cheetah -Oboe -Tin can -Mango -Tripod -Oven -Mouse -Barge -Coffee -Snowboard -Common fig -Salad -Marine invertebrates -Umbrella -Kangaroo -Human arm -Measuring cup -Snail -Loveseat -Suit -Teapot -Bottle -Alpaca -Kettle -Trousers -Popcorn -Centipede -Spider -Sparrow -Plate -Bagel -Personal care -Apple -Brassiere -Bathroom cabinet -studio couch -Computer keyboard -Table tennis racket -Sushi -Cabinetry -Street light -Towel -Nightstand -Rabbit -Dolphin -Dog -Jug -Wok -Fire hydrant -Human eye -Skyscraper -Backpack -Potato -Paper towel -Lifejacket -Bicycle wheel -Toilet -tuba -carpet -trolley -tv -fan -llama -stapler -tricycle -head_phone -air_conditioner -cookies -towel/napkin -boots -sausage -suv -bar_soap -baseball -luggage -poker_card -shovel -marker -earphone -projector -pencil_case -french_horn -tangerine -router/modem -folder -donut -durian -sailboat -nuts -coffee_machine -meat_balls -basket -extension_cord -green_beans -avocado -soccer -egg_tart -clutch -slide -fishing_rod -hanger -bread/bun -surveillance_camera -globe -blackboard/whiteboard -life_saver -pigeon -red_cabbage -cymbal -faucet -steak -swing -mangosteen -cheese -urinal -lettuce -hurdle -ring -basketball -potted_plant -rickshaw -target -race_car -bow_tie -iron -toiletries -donkey -saw -hammer -billiards -cutting/chopping_board -power_outlet -hair_drier -baozi -medal -liquid_soap -wild_bird -leather_shoes -dining_table -game_board -barbell -radio -street_lights -tape -hockey -spring_rolls -rice -golf_club -lighter -chips -microscope -cell_phone -fire_truck -noodles -cabinet/shelf -electronic_stove_and_gas_stove -key -comb -trash_bin/can -toothbrush -dates -electric_drill -cow -eggplant -broom -vent -tong -green_onion -scallop -facial_cleanser -toothpaste -hamimelon -eraser -shampoo/shower_gel -CD -skating_and_skiing_shoes -american_football -slippers -pitaya -pot/pan -calculator -tissue -table_tennis_paddle -board_eraser -speaker -papaya -cigar -notepaper -garlic 
-rice_cooker -canned -parking_meter -flashlight -paint_brush -cup -cue -crosswalk_sign -kiwi_fruit -radiator -mop -chainsaw -sandals -storage_box -onion -bracelet -fire_extinguisher -scale -okra -microwave -sneakers -pepper -corn -pomelo -computer_box -pliers -trophy -plum -brush -machinery_vehicle -yak -crane -converter -facial_mask -carriage -pickup_truck -traffic_cone -pie -pen/pencil -sports_car -frisbee -cleaning_products -remote -stroller diff --git a/static/dataset/voc/generic_det_label_list_zh.txt b/static/dataset/voc/generic_det_label_list_zh.txt deleted file mode 100644 index 0012d759df820f99d6fd814215a78453274b26fa..0000000000000000000000000000000000000000 --- a/static/dataset/voc/generic_det_label_list_zh.txt +++ /dev/null @@ -1,676 +0,0 @@ -婴儿床 -玫瑰 -旗 -手电筒 -海龟 -照相机 -动物 -手套 -鳄鱼 -牛 -房子 -鳄梨酱 -企鹅 -车辆牌照 -凳子 -瓢虫 -人鼻 -西瓜 -长笛 -蝴蝶 -洗衣机 -浣熊 -赛格威 -墨西哥玉米薄饼卷 -海蜇 -蛋糕 -笔 -加农炮 -面包 -树 -贝类 -床 -仓鼠 -帽子 -烤面包机 -帽帽 -冠状头饰 -碗 -蜻蜓 -飞蛾和蝴蝶 -羚羊 -蔬菜 -火炬 -建筑物 -电源插头和插座 -搅拌机 -台球桌 -切割板 -青铜雕塑 -乌龟 -西兰花 -老虎 -镜子 -熊 -西葫芦 -礼服 -排球 -吉他 -爬行动物 -高尔夫球车 -蛋挞 -费多拉 -食肉动物 -小型车 -灯塔 -咖啡壶 -食品加工厂 -卡车 -书柜 -冲浪板 -鞋类 -凳子 -项链 -花 -萝卜 -海洋哺乳动物 -煎锅 -水龙头 -桃 -刀 -手提包 -笔记本电脑 -帐篷 -救护车 -圣诞树 -鹰 -豪华轿车 -厨房和餐桌 -北极熊 -塔楼 -足球 -柳树 -人头 -停车标志 -香蕉 -搅拌机 -双筒望远镜 -甜点 -蜜蜂 -椅子 -烧柴炉 -花盆 -烧杯 -牡蛎 -啄木鸟 -竖琴 -浴缸 -挂钟 -运动服 -犀牛 -蜂箱 -橱柜 -鸡 -人 -冠蓝鸦 -黄瓜 -气球 -风筝 -壁炉 -灯笼 -导弹 -书 -勺子 -葡萄柚 -松鼠 -橙色 -外套 -打孔袋 -斑马 -广告牌 -自行车 -门把手 -机械风扇 -环形粘结剂 -桌子 -鹦鹉 -袜子 -花瓶 -武器 -猎枪 -玻璃杯 -海马 -腰带 -船舶 -窗口 -长颈鹿 -狮子 -轮胎 -车辆 -独木舟 -领带 -架子 -相框 -打印机 -人腿 -小船 -慢炖锅 -牛角包 -蜡烛 -煎饼 -枕头 -硬币 -担架 -凉鞋 -女人 -楼梯 -拨弦键琴 -凳子 -公共汽车 -手提箱 -人口学 -果汁 -颅骨 -门 -小提琴 -筷子 -数字时钟 -向日葵 -豹 -甜椒 -海港海豹 -蛇 -缝纫机 -鹅 -直升机 -座椅安全带 -咖啡杯 -微波炉 -热狗 -台面 -服务托盘 -狗床 -啤酒 -太阳镜 -高尔夫球 -华夫饼干 -棕榈树 -小号 -尺子 -头盔 -梯子 -办公楼 -平板电脑 -厕纸 -石榴 -裙子 -煤气炉 -曲奇饼干 -大车 -掠夺 -鸡蛋 -墨西哥煎饼 -山羊 -菜刀 -滑板 -盐和胡椒瓶 -猞猁 -靴子 -大浅盘 -滑雪板 -泳装 -游泳池 -吸管 -扳手 -鼓 -蚂蚁 -人耳 -耳机 -喷泉 -鸟 -牛仔裤 -电视机 -蟹 -话筒 -家用电器 -除雪机 -甲虫 -朝鲜蓟 -喷气式滑雪板 -固定自行车 -人发 -棕熊 -海星 -叉子 -龙虾 -有线电话 -饮料 -碟 -胡萝卜 -昆虫 -时钟 -城堡 -网球拍 -吊扇 -芦笋 -美洲虎 -乐器 -火车 -猫 -来复枪 -哑铃 -手机 -出租车 -淋浴 -投掷者 -柠檬 -无脊椎动物 -火鸡 -高跟鞋 -打破 -大象 -围巾 -枪管 -长号 -南瓜 -盒子 -番茄 -蛙 -坐浴盆 -人脸 -室内植物 -厢式货车 -鲨鱼 -冰淇淋 -游泳帽 -隼 -鸵鸟 -手枪 -白板 -蜥蜴 -面食 -雪车 -灯泡 -窗盲 -松饼 -椒盐脆饼 -计算机显示器 -喇叭 -家具 -三明治 -福克斯 -便利店 -鱼 -水果 -耳环 -帷幕 -葡萄 -沙发床 -马 -行李和行李 -书桌 -拐杖 -自行车头盔 -滴答声 -飞机 -金丝雀 -铲 -手表 -莉莉 -厨房用具 -文件柜 -飞机 -蛋糕架 -糖果 -水槽 -鼠标 -葡萄酒 -轮椅 -金鱼 -冰箱 -炸薯条 -抽屉 -单调的工作 -野餐篮子 -骰子 -甘蓝 -足球头盔 -猪 -人 -短裤 -贡多拉 -蜂巢 -炸圈饼 -抽屉柜 -陆地车辆 -蝙蝠 -猴子 -匕首 -餐具 -人足 -马克杯 -闹钟 -高压锅 -人手 -乌龟 -棒球手套 -剑 -梨 -迷你裙 -交通标志 -女孩 -旱冰鞋 -恐龙 -门廊 -胡须 -潜艇三明治 -螺丝起子 -草莓 -酒杯 -海鲜 -球拍 -车轮 -海狮 -玩具 -茶叶 -网球 -废物容器 -骡子 -板球 -菠萝 -椰子 -娃娃 -咖啡桌 -雪人 -薰衣草 -小虾 -枫树 -牛仔帽 -护目镜 -橄榄球 -毛虫 -海报 -火箭 -器官 -萨克斯 -交通灯 -鸡尾酒 -塑料袋 -壁球 -蘑菇 -汉堡包 -电灯开关 -降落伞 -泰迪熊 -冬瓜 -鹿 -音乐键盘 -卫生器具 -记分牌 -棒球棒 -包络线 -胶带 -公文包 -桨 -弓箭 -电话 -羊 -夹克 -男孩 -披萨 -水獭 -办公用品 -沙发 -大提琴 -公牛 -骆驼 -球 -鸭子 -鲸鱼 -衬衫 -坦克 -摩托车 -手风琴 -猫头鹰 -豪猪 -太阳帽 -钉子 -剪刀 -天鹅 -灯 -皇冠 -钢琴 -雕塑 -猎豹 -双簧管 -罐头罐 -芒果 -三脚架 -烤箱 -鼠标 -驳船 -咖啡 -滑雪板 -普通无花果 -沙拉 -无脊椎动物 -雨伞 -袋鼠 -人手臂 -量杯 -蜗牛 -相思 -西服 -茶壶 -瓶 -羊驼 -水壶 -裤子 -爆米花 -蜈蚣 -蜘蛛 -麻雀 -盘子 -百吉饼 -个人护理 -苹果 -胸罩 -浴室柜 -演播室沙发 -电脑键盘 -乒乓球拍 -寿司 -橱柜 -路灯 -毛巾 -床头柜 -兔 -海豚 -狗 -大罐 -炒锅 -消火栓 -人眼 -摩天大楼 -背包 -马铃薯 -纸巾 -小精灵 -自行车车轮 -卫生间 -大号 -地毯 -手推车 -电视 -风扇 -美洲驼 -订书机 -三轮车 -耳机 -空调器 -饼干 -毛巾/餐巾 -靴子 -香肠 -运动型多用途汽车 -肥皂 -棒球 -行李 -扑克牌 -铲子 -标记笔 -耳机 -投影机 -铅笔盒 -法国圆号 -橘子 -路由器 -文件夹 -甜甜圈 -榴莲 -帆船 -坚果 -咖啡机 -肉丸 -篮子 -插线板 -青豆 -鳄梨 -英式足球 -蛋挞 -离合器 -滑梯 -鱼竿 -衣架 -面包 -监控摄像头 -地球仪 -黑板/白板 -救生员 -鸽子 -红卷心菜 -铜钹 -水龙头 -牛排 -秋千 -山竹 -奶酪 -小便池 -生菜 -跨栏 -戒指 -篮球 -盆栽植物 -人力车 -目标 -赛车 -蝴蝶结 -熨斗 -化妆品 -驴 -锯 -铁锤 -台球 -切割/砧板 -电源插座 -吹风机 -包子 -奖章/奖牌 -液体肥皂 -野鸟 -皮鞋 -餐桌 -游戏板 -杠铃 -收音机 -路灯 -磁带 -曲棍球 -春卷 -大米 -高尔夫俱乐部 -打火机 -炸薯条 -显微镜 -手机 -消防车 -面条 -橱柜/架子 -电磁炉和煤气炉 -钥匙 -梳子 -垃圾箱/罐 -牙刷 -枣子 -电钻 -奶牛 -茄子 -扫帚 -抽油烟机 -钳子 -大葱 -扇贝 -洁面乳 -牙膏 -哈密瓜 -橡皮擦 
-洗发水/沐浴露 -光盘 -溜冰鞋和滑雪鞋 -美式足球 -拖鞋 -火龙果 -锅/平底锅 -计算器 -纸巾 -乒乓球拍 -板擦 -扬声器 -木瓜 -雪茄 -信纸 -大蒜 -电饭锅 -罐装的 -停车计时器 -手电筒 -画笔 -杯子 -球杆 -人行横道标志 -奇异果/猕猴桃 -散热器 -拖把 -电锯 -凉鞋拖鞋 -储物箱 -洋葱 -手镯 -灭火器 -秤 -秋葵 -微波炉 -运动鞋 -胡椒 -玉米 -柚子 -主机 -钳子 -奖杯 -李子/梅子 -刷子/画笔 -机械车辆 -牦牛 -起重机 -转换器 -面膜 -马车 -皮卡车 -交通锥 -馅饼 -钢笔/铅笔 -跑车 -飞盘 -清洁用品/洗涤剂/洗衣液 -遥控器 -婴儿车/手推车 diff --git a/static/dataset/voc/label_list.txt b/static/dataset/voc/label_list.txt deleted file mode 100644 index 8420ab35ede7400974f25836a6bb543024686a0e..0000000000000000000000000000000000000000 --- a/static/dataset/voc/label_list.txt +++ /dev/null @@ -1,20 +0,0 @@ -aeroplane -bicycle -bird -boat -bottle -bus -car -cat -chair -cow -diningtable -dog -horse -motorbike -person -pottedplant -sheep -sofa -train -tvmonitor diff --git a/static/dataset/wider_face/download.sh b/static/dataset/wider_face/download.sh deleted file mode 100755 index 59a2054def3dfa7e27a2ac7ba84b779800a32933..0000000000000000000000000000000000000000 --- a/static/dataset/wider_face/download.sh +++ /dev/null @@ -1,21 +0,0 @@ -# All rights `PaddleDetection` reserved -# References: -# @inproceedings{yang2016wider, -# Author = {Yang, Shuo and Luo, Ping and Loy, Chen Change and Tang, Xiaoou}, -# Booktitle = {IEEE Conference on Computer Vision and Pattern Recognition (CVPR)}, -# Title = {WIDER FACE: A Face Detection Benchmark}, -# Year = {2016}} - -DIR="$( cd "$(dirname "$0")" ; pwd -P )" -cd "$DIR" - -# Download the data. -echo "Downloading..." -wget https://dataset.bj.bcebos.com/wider_face/WIDER_train.zip -wget https://dataset.bj.bcebos.com/wider_face/WIDER_val.zip -wget https://dataset.bj.bcebos.com/wider_face/wider_face_split.zip -# Extract the data. -echo "Extracting..." -unzip -q WIDER_train.zip -unzip -q WIDER_val.zip -unzip -q wider_face_split.zip diff --git a/static/demo/000000014439.jpg b/static/demo/000000014439.jpg deleted file mode 100644 index 0abbdab06eb5950b93908cc91adfa640e8a3ac78..0000000000000000000000000000000000000000 Binary files a/static/demo/000000014439.jpg and /dev/null differ diff --git a/static/demo/000000014439_640x640.jpg b/static/demo/000000014439_640x640.jpg deleted file mode 100644 index 58e9d3e228af43c9b55d8d0cb385ce82ebb8b996..0000000000000000000000000000000000000000 Binary files a/static/demo/000000014439_640x640.jpg and /dev/null differ diff --git a/static/demo/000000087038.jpg b/static/demo/000000087038.jpg deleted file mode 100644 index 9f77f5d5f057b6f92dc096da704ecb8dee99bdf5..0000000000000000000000000000000000000000 Binary files a/static/demo/000000087038.jpg and /dev/null differ diff --git a/static/demo/000000570688.jpg b/static/demo/000000570688.jpg deleted file mode 100644 index cb304bd56c4010c08611a30dcca58ea9140cea54..0000000000000000000000000000000000000000 Binary files a/static/demo/000000570688.jpg and /dev/null differ diff --git a/static/demo/infer_cfg.yml b/static/demo/infer_cfg.yml deleted file mode 100644 index 99f1d63fa6b3159924ac781f488e9d12ee1b2192..0000000000000000000000000000000000000000 --- a/static/demo/infer_cfg.yml +++ /dev/null @@ -1,48 +0,0 @@ -draw_threshold: 0.5 -use_python_inference: false -mode: fluid -metric: VOC -arch: YOLO -min_subgraph_size: 3 -with_background: false -Preprocess: -- interp: 2 - max_size: 0 - target_size: 608 - type: Resize - use_cv2: true -- is_channel_first: false - is_scale: true - mean: - - 0.485 - - 0.456 - - 0.406 - std: - - 0.229 - - 0.224 - - 0.225 - type: Normalize -- channel_first: true - to_bgr: false - type: Permute -label_list: -- aeroplane -- bicycle -- bird -- boat -- bottle -- bus -- car -- cat -- chair -- cow -- 
diningtable -- dog -- horse -- motorbike -- person -- pottedplant -- sheep -- sofa -- train -- tvmonitor diff --git a/static/demo/mask_rcnn_demo.ipynb b/static/demo/mask_rcnn_demo.ipynb deleted file mode 100644 index 5027ce1e4518254d1ef30ea512df2a8319a6e9ff..0000000000000000000000000000000000000000 --- a/static/demo/mask_rcnn_demo.ipynb +++ /dev/null @@ -1,413 +0,0 @@ -{ - "cells": [ - { - "cell_type": "markdown", - "metadata": { - "ein.tags": "worksheet-0", - "slideshow": { - "slide_type": "-" - } - }, - "source": [ - "Change working directory to the project root" - ] - }, - { - "cell_type": "code", - "execution_count": 1, - "metadata": { - "autoscroll": false, - "ein.hycell": false, - "ein.tags": "worksheet-0", - "slideshow": { - "slide_type": "-" - } - }, - "outputs": [ - { - "name": "stdout", - "output_type": "stream", - "text": [ - "/home/yang/PaddleDetection\n" - ] - } - ], - "source": [ - "%cd .." - ] - }, - { - "cell_type": "markdown", - "metadata": { - "ein.tags": "worksheet-0", - "slideshow": { - "slide_type": "-" - } - }, - "source": [ - "Now let's take a look at the input image." - ] - }, - { - "cell_type": "code", - "execution_count": 2, - "metadata": { - "autoscroll": false, - "ein.hycell": false, - "ein.tags": "worksheet-0", - "slideshow": { - "slide_type": "-" - } - }, - "outputs": [ - { - "data": { - "image/png": "[base64-encoded PNG data elided; the cell output is an inline preview of the demo input image]
tRq47FzKf85RtmUTR4NNytLqavqweL6GWoupa2Bq4b5IUI5GcwwjRp9hmF+X69nVJ9W3w9TRdL02vtnXc5W8lVvDr9/Pu9S4mN+z2PZoj62AuD6vQVZuy1Ev2Ajbx981An8bLs1UpGNtUhIZONbu1NWI2DNyBtiOavuqob7p4d64O1Bs5ZG9LZ5rt6Y7qENRM6pIdboOgxCSfoy/jya6Ns42w+R4EBAk5ahb/nZq88ZwPms2jlqwVY9kG0IBtws7XvWMMcgeHl4eZDXaqFapRsHDRLmOcuRo820/HcgZDtFUcIbmLs85Lvt5YUIi4Pt+Lix2fpyVilPbbE88ws0BFgx3e+PpY3XuBFbFi5C8nQ6vbqaT9dg3U0m+uaoMFxPsAudLTbXKNNv5rYR988vs+V8OWYUDQ2kEUAKVLXWVuV/eAtRRHQqtVarMDOdimAWFMji6JrTC0G9PT+SAiTdW/HC+DJ6Xa5YAKLqJ1OxkyExfQWEDhwbiLG2iQx5E84nm4YTKn54j6gAFcVkQ0K9yF4xmqC4s8ZYLSIECricbz0AkRKaiygx+kWW8TamL0U9f7PLMWi+QpVJppRpSRGZpp3zkJa8KiKAjMi/wQwza6Hgriy4FtHzgN+CTAlKACiqUaRw+NIL0IeZjo4Xax002rAKySKsYNhGR5DF3bGZzWJcEJFXOY/IJHv75fLMauRhIczxK6gi6sFBgjZy0+dMUxEDNjmjI6G5eErXq1OkypuVxTjwYH+8zzG8RDlJJ7XexpHMarypnPVC/ezz2dIjDlMh6G6k3MGaou1oDM0Foi3QxH3SopHgh+hWBCCrfRENIwGsj2tXcIEWMxnc/QIBxaAitDk08StK3Pmdgsm5aGXE1QdjJUIPIMlgtANR8O1tOGJBpVToLQc8HKBRk8w7X5meHiXHItI4k+f2SLdFA2tcakOtx2np9SnwGTEIhRENzsBSr29H0vsDMKctAISSefc1FcmSpCKyYkQA8230LypUW/iVkRBG/KP4U83k1J17g0A2rOUCcGQIttjsu2xwmJoa1Moi0sRBo3fH6hET4uWSarkXnuYysZSjyEOVyVAUdSO69xCwzeA5pXoubIkgKyhtZa8RMbrM2oSwIYRQVEXEoJ6hTa0wihQXm6g0LtM0Pzw8lN18WpbdvDvVmuU2mHHj9DNk1B5Mk9MZZ67o27S+dA2HMiyFxOLRdqJbIAAYIuCCm4pmSdTx5vU7tywzFfdxtpJd6GhFJ9HmTWb1Prc9RCqvIkJy8SBKaKesCjj1JdEazacxzKvAhb9UmmyhUd5IYtfy3FlYXIYJi6iZt4AxQItkmpqYCFiZFBIAnLaUsRnDGttChqntpyQ4FzmfeJzLYGJFJE6iJB1wE5bX39Aej35GtrIaPcOlJwZoT4oAVtxXzg+2X70Gxdom1hV67QKLF8o4pwXn02s8YPTxcMzcfbRuh9f2iSRuj9B/S1VLyehVZrCoyz2FEt6jhJKDIZlZJ9/tQ4HnlySPIQAssj4PORNuLR7IZTVLZ9wvbg3SLhp/TxkuB1yR5vhgTEejL0gM7Aa6C3pSUsn14CGWyHBztORdxDW/rSYzykBxJpvo026yxGFmAjoZhhQNAhrG5V4XUwTr9jXDgIhO4GBIYxbghV6Gj2TUNLJKmvcNO4LTYFNxWaSCBWNNN8LdYaSrHu2wO7iUII0iJGrWiVJRFa1kxeCe71HvrT/nuC899evsp8u4cb6/yXctOe4F3WCz7X6/pGNiFGvKfn+0A05EkVoE17PwuBzvhfuL87nEmfrEzr+RPoGNBnbOrQXA4uYoFbcvRBcmACwsIvT4YXfSFRrJpZpRUm+jmqGVEPUNb2W9/AZ/f84jCI4347w4fxFxOTbomIdBRbjKuMWR8jOucTj1qQFH04JtSVRJHXGAySj0JPbaqYiA6ib6SdRF2EWRHl0ToapSWFE1sctPaE9+RwVVO2s2J78iM0MgTkXPRFSFNSRxt31RoyIQ3PWWU3UmaEtzC7YjEdBL0uH6d5FKQkWoitQOvPyPB21PI4AeQxoXVAPWLcDdz+54+H0bsULV4HD5XRVm4ukWZRvpB3poUHfit552XQpb8dfxdQ1N2JTFs9tEQ0XOX7vAO16thNg4z8a5OWgzj0n045Uxoi4dCFeW+9UlKwgKQW9hWKA14+Oayi5NFR7xezhubXeYgpHRVj81bd5sJT3nmB43ZBZtF5KdS/H2ha0lsAuPchmYyFgMB4E12WWwh58j4TiOc7sk0F3ciTsH8iCD+MXmgxwV/JyP/9lZLAjRSVbjI6kFped0jlu0ea/kVZclT1aTUVyiCmLmwFfJ/gM56zV1WwlYbDFWgHNxyUpNTfIwCBAZdJGRRRM/O6HEjziTVNdY6QzYtBRladFlCvE+p+7ClxNTzmEglwgo9WzXzpnlBnS5wH5PMjY2Htxo95lPPe/Pa5QFRaSKTIY9iwkebEHRHbUcCR1oNTAoURf0sPPre8qa49wwUNdaF6+kqIMMZM4Swr+YtXA88l3gqcD76+tlWXbzfFoWLUpreLWyCowyUMyEgIxBix3+7uxL8AY7T6Lu8RfajFMxVVstB40NCwCjycCrgiCs4NagvIafSASGeLsnUbY8NABmUFF/vwpU1UiDMCLXhk3JMMlSYqVCkFVhkAqYcd/DBlHIC56Csfxk0cnMxBjtomlOA2utqtqCmZOpR8hNC8MKSg9MWhbzwrAQkSIKYzWTv/jzRyGEGpEagsqwkl0kiGO/3hF8tfXyzJ1Wj+jIfIkEZXxulZsGa6CTJGsgaCjF1nYKQ/i0k+8y4dJ52DCk8eqlJ5IHmETec7OitxVuKEVDdKaK075fnEU2f/cwE5JWYu2TqKC3sLWlhw5yCKCtAyMfV9EYm2bCgr9rYS9gMtK+E6xAhBElW0GoUIWB3z3J0h8xl3xzqBaa5H15R8Lnn+cas23SgL97Qih/OrAn9pqXZ/s1FQaDXxHuk4SQLpI+hbBIXXAiIvO8z0HHMpHht5ZhVkpPSc3USQ1i41Y154tjnu4DevrguMsnqx6nigqiFgFhy3IE51KKqnoFShFxr/7EycyqRheAWk2pCkGmdYn0MpmkFxhDk/za9zvuABgqsw+308q93L6016Xg8HD3/s279a5+YlZur6/58rU9vT59OF8/uaulnu4OuH2iMhX97uuPrk8PX//ql67eff7qoX7zWx/fvsTy7rTfHa9qORzn19fK+nKvent8gh3t1f10tX9zrXeH43WVSct94Xxaih8BAGnrpvQcIYW0vr9irB6cnoKDNNpHUgTK7IximQ7ntYO6ZpOPdovUyBuwdOLTzHj+nDfs6pcRRi+tikG2zgG7d3AkgCceAITBln0OOrSXAnrWUHOZDygqAKTYstg87b2d0bwrh8NhLnqtu1OtleaJ3UwDjHMCC520IDtjkksTEC1OoFCws2KkpSGdqN7G2zx8dhUJpDQxt21lEy1VT/gW2jLmW7c1LkMwbA9uBao9iBZA3SY8OceqdgrEiLBkLxIqhLdtjvFHFaIGvfXvmwd98sbSzMgpI1mJ
aupWwxZRqEoVatWlwT/3UeGuHO0bFIZ9E9VCGnECqxIiRTmL6EkfFMV31ttii5IilQbW0jQTUzdIazmJq8eOV1TfxynjSCvprZNBGulSWrMFdcSVDBLbkrlH5MVNaOI5BncQrys4bt6yYS3ty9T/L799M0+5pJZJaiej3tf0bQz0bpyAnH3/dpEZA2q2i4wDT5+Bhx9nKYy3rELODN0X3972RdzYhfASbOj4aoYgBSrefY8VUAvOtLk5Fp7RyM21lON2XWdF5VK12V61E6wR1KnH9topZ2bkNQS4QpUVCjGtxH6SmUTWKZYKXJLwA4DtS9zSoIPbcwRIEG7zmElWUAWlzDYUqcxpCAblwEOHy5hQ5hZjtClwBDzX3n1zsarLQObxnafjm7Is+/nq+ZP3P/nkk9vb66cFy/Gl2Wn/5NPj6dn0IeZnh3ef7XaffPBid3334v4nf/LH/mPv73793/mt3/7rf0Nv5A//Z//IR4eXr36Vv/LRZ+9P+6c76sNynOb3X9mn5dOT3uyu5wPr6fXDtc63OttS68NJrkpHgSatDgFoWNfxL+o1qF2/R4O6ilSnbSl+ufyUJDMYKlMMddobLVuaWmbJts/2Kz4z5X6GIWLUFtq/TjrPByEpOtTKc0dE1qDIexwAMaVEzuGh+EamaaLZNE1mdjwe57lMRY6HxSulhNWuvVpSiaa76UpUXEsuNc5Wkhu2gycS3TMz8qOvyHFPVUX6NOmFnSL2fksDRxTXAVEBRBm4XnTB/627aeehBP69IvKkHmuC1g41mxtUBfB2kxDW+FujXeE26q8BbQ0Z/9/Nn0x1FmHst5a5K14vPRVfSeraHpSsYCEhBqkJVD1obsCrMCZlGpL0abgRxHRErBHomVQrZ+Vzc+i0TrY9Hn49f2T1+IZar0fm2qs/aiFvGZVrnKurnwbe48fY8ri0fzfLzOm5JrdBFxlx9q2XSJRHaL0eWqlkGcKvzleRkS4DCZbeL7LRNSSytiSu8QYlMLyAYYGPdbVccn/LWBzx/OoJrwDwNq/2xfvP2G2y8XGq/iCIVv8B0F5UJ6lAYPFWYksMNHrrgeRtHgzcknbGS9zAYP2AZSzGo3MmIJZeNBGymqvTRZ2ZM/0dBRLOG4+ToiuFTNLJ88lLujZG7uv/LsxKdg7YlNs+vH71OT7bc/rs44+u3r15+XCYXu0/P39NHr754sXt73nynS/92I/+6rde/NLf+Qcvfugfef3kgz/949+wv///5ey/42/ZsrpQ9DvGrFrrF3beJ5/OgUM3qRtpoMl0SxPFByp6FZ+I4So81Iveqz6z18B713QNeEXF+8EEoiJBUWK3CAgiTec+nU7H0yfss/MvrFU1x3h/jDHnHFW1frvbVxx2r99aVbPmHHPMkcOtr3ntb3rd0UPP7y7/Sn7mtT/8uHzOC85/44Pf/NjnvPXZ7e0PP/kc8MD6wdvp5s0Lw/kTob0eGRfADDodN5JxYe/gLm3v6YRp++47p43VGbrauhIToKOWIDsQNItqVxDCCFDk5Q1nLF9TVXVSGnMn8ZmpB8bCZ1IpgEp4IvBVSxJ1HNyD1Hqg0pvopZqLUOTaKnVdt83btEoEzcOwv7dPRKcYBc3AXkMxwGKh6h4q43WUSZInT5LL02jYWb5ElIAnbTzq0rLVXq7SUrlJdvYpFxFKDI9ZmgCKKBXDQEVpDz61oG0vcEZuw45pLTGS13V0l00B44UENaWXVHWk5qOuu+KnD6oyiblxj7Ut1uc5dfoYtESt5ZKHQRV5b6KUMrMgq5pdwcM6LZVZAWUICYeFBa2R6EfeNlbqztHwG4JrMMXdSN+xvEogDcOiiCcnZNr4Zcews+LgVRjp2PUVmWp7vCsKEWcz+7oNVESd+GW8LV51sfXV5sOIDDh673aYoMMh5CBf62B4UpSG8N57Cyv2+vox5kFGULv0S8WNvxCbIsIRUS1dqaF2R2EbGqlVfHY5VQ0udgpykwWfzqiP7qw97qJx9TBNOmSwLDBn16LCN5MzUyu4JSvi3nDBFBiyUpozLCWiURu15RI5QkRW4p+ZU0qAjOMoIsyk7C4JEas+7zTYzkXcjihH+zdx/gvTfdmOJihLIKYpH6+EaO88nex9/MlnX/aKq/ftPXkq73vdp359uoM7v4b/7f964wE9984bb772e79VuuGlhxg/fP3RG90X7z3v7uMf/vA73nn38fesjm58AB/7hq/9ze98zeFDb3jFaw6e99NPX79463CT+MpevimbYbM9h45Tf7qftGO6u4nHH5VQcWuvSZUZVLDs2rWOk6qa6bXFtYmWxA2p++hr19ZUigPtSsFLFw+XWwiLWIOQ41DRKW7N8tQ4QTDXRgtWKp477KkWzkVS/I51c+cmaBWrZL5lZoaCpO87ETnNLtW576aCMZU3FhCYGXZoQTaokzefYNZ2ZlSVraTalF/WdVVvCAANBKq6lhBOh4hwMN1HQJlEqer9ZxOUWEkhRgiCK8EfX1RIsfX1hLhf0XE531rv52YmXAn3TV7UIO9MtCtRzrY6a//M4nJBDvnTiZSVRji6JSJSEhHJyAmJWg9MiJL5DvpGS2uiuQLoqFbHAYQklRgE76qBasOss24O6jr7CVEuv5rksWS0//9dkd9P2ICnZyx4uYeknfne/9EpxTUaDcWc9H/iAav42SxjnUtttkBjmgBSOrNVc3vfVDed8SGa5qXA7a5KRF5gIpy6OVvdxWWXr1jeEK+SqO1UJ7Lh2WhVKNFyVTHCjSi7GNU9prQEnZZwZVXN2QuRlwDmSeQc6/+gGLRYd5jDhJQTBQs9OdZWC6qWO3fu+0zzWEADlfSrKqDEh3lcP3Usd596z1d//vM/76Un7/kn3/v+X/2pd/7hh3/oD3/o5z/6g//5gx94Pn/pybmTG9/948cHV9585WPrKxc/55GXv+nGm/dfeuH5L33Va8593QveduP0zW95x8+8+8f+/S/8H+//gx+68q4Xff0r5TPP00eGp6/dokuHfbefhMdxvHN60nWr/bPsV4urIN7ky7iVIb984ms0ONVjgiLLWlOQWYgWBV1DMTXkFLPsJ9zfeyBY+WwnhqwpFoCaLENzTbcYyRfyPXPKOaeUhmGzXnUHB4ebzekwDOjWFMXDMlTxqTdCQs7hogXSQBkMXJ/8MsPZBJqd3yp+RzLvKW+lKzZNtQhCmvCI+pmagNskHpd7J+pQiRqZ3E+FHSTiwKTsa1e4M5RKjfciFBajVtHFg6fASlQRwQraWwxa0XiVS5fPBEDJIlWDEkeg0qAuZh0TEayNpnZorN0DOgHQj7x1U8QcdquFwpzVc1nGjrc0iXtyQwGoFqpTG/PNxCUjMYKJ5lqC8YoUr3MJMZXwRglDobThw5IkTXTotoRaAAtFuANQWwikBQWJXH8CjQrCcmWXznZrwEoTwlrRcXb6K0ecuQCW84l/zuK060+VmkswbJreXGzcNKVuxGP2lKfFfJakSqfdpeI1qcQi7e2yg5UCJTrDtfZoep0O3iwTQUP9ZAA1qjCz6fFRVco5N2Y/lfN2DxtqX8d
dZu0zRnUy5PIGEiMXk6bXDkgW1TV2Fo/Gmiun0RplU1arjZekeWnDAqg0jXr1R4aj0/sOL93aH7/25Q8+eg44fuYvffv/9rf+3b87d/ApH9L/9sruf/7a9BVvvn36ljd84bOfdX113/724z3RsL57Q575+Pb0Grot5fGR/uDCy17yG/v7Pu/W4fW3Pfkz//U//uT4X771G77it/7p77y2Ov/+Jz/+zLPPXr5wse9Xwzju96vhzvG4t6oasKrWIKy4qEKJzIRQNKeZDWQcQWT+bG9+AxBR59YdzUECI6Jhm5k59sOw73OwfFQGTFYDzqaDovABDAw5z7iCfbBCGXOGoVX1Ep3Th7WqFg2vbVDOg6FB2EQC0HXdMAyrLm2H0/V6fXh4ePfu3XGzHVdriIe+VsZKzGPeYi7UtsAiF3ntFVlUtes4q+W6Ff8lkIilBl03BLJ+zFwl4YLwQjQnF/WpWQGZtuOiFvyrnABz61iKUo7ykJu4Anqn6dFWO0fUaILNqkPn4HX8MUOgA0Za76NsQV6SnKFW+PvaPQsJ3iHbKmKKu/ij0l9racHbBZUMYoiQsLIxetOhKSGrirhmqyGE08/vv3/r1jqi2JzYGTAkaTzwVK487WKkQWAv0l9BFNHIgGtpLbtlBJX4L9//T4YB2ybFPYhIMOFtiy5J7USZJaYwYCK3ElSEmIRJ76pMVBmPRoLIThp2MmBiFyqXMeEI2NzO9qJbVH317lACaVrjTOZQ9Y435I4OFwjK4xVuiYgoD9Lck1q3UnYxWg3pxTMrcbwnMmDs2hEEGegs/jcL7sAZY+4MvgAwopEqj69w18CEAaNq4WU+7OzE/6z1G3RqNmftQZJ1FHH6hZSgjLytVYSIiLlT1Zxz5sycWNk6QysTIFkzECYTGPByXX6Ai2ri35Zw1ttrufSx9z/26Y9ce/MTq/TQf/3vb3v/9/6Lbrj2C8P7X3rhaz//gW97/P3b51747Fd98+uHk/Te7r1PXMvvub/Xyxef6glj7m4cbe7evLzB9mMfOrpz7epDD9xP/SseePBrjva37/jgP/+ln7z/K5/3J//0n0oXu194z/to3XeC8XTYOzw4FanMLDLgOkkzQVcAajWhx8BCUs7OYkPEHUOUU8Vbrd3JAJQyJ3P8z1FzivsV8srqTwyMiwp39meWYXZC6xbNNqUM35Yc51NEtIWLrevysCVWZkopEXHeZhEZUrJet8ZSxKOxSILgqKU9HRH1ShaO5/K1vUI0JcoqAqVS94qARF1WY+TJSa53tSrd8ISsv57KaCDOKkvgGCAqMDVuuoqAVAipA2DyKJOOlsKrjaAJJiY9okn1Up/VlDaoKlHvLZja2wnKTFRbnNlGWX5N5hYXHYfinMxebQzIe8oo8yQ+vK23pKeaWiLuwbaqncpWgJCZUyKB5jxY963IoYwZ0o+/fcxQlxOVqWqu3UQbqJs9jvMNwMQkMmfAra0hTZQPb0NVa/Z+EgzYAdlMDaqqVTMo+1G47GSECbHeyYANsZZpSEtNy+lz36lqDpKR+Q6zYdhSAy5lSKWIIyi5LhGSgaBg57WT06hap1oKW+v8uCMeVcyXVhkwK0aqmkFjwADgFYV8qNrXbwjIgLAFnBUlMcA/BB5SN93ZGMd8jAlUM6Znq64LzZERGxrORmjDzgusFXByEhGDUp8SKUQk55z6WnK1CZoAxkIZKgM2tNlQS8+LuywDpY4AGa3zdpcAyqOylcor2glREjgDTrSTASebg+1pXaaE8MG4ahmdEBNNHBJHJ/iMFz7yN7/3/1z/8I/8g2/42u/+u39xgP4Ubj987ls/8+5v+9Xuvc+MT377X/2m3/wnXvwNq6+6/Iov/Oe/9U/+v77nu7/7+Nn7v+DL+SUvee7B/dXx6fFwa7Xe346EZ27u33xGbl7fW5979ODg+Wl48fveuVqt7v+sT3ns6153eq5/6smPr/f3Tjtab2lnGpJZGrzLJLlVgBW5+OG0cGWTKfa4G1XELUpERGq1GA1vvcI5qWpWUaUVzxu7GamJjbEj+U5ORouGIN5mpibgRUQFEE2m8crV7FwCtMtT8f6QnuR9nSngm4kgQpBxHA8O9lX15O7xarVHSCNDSzqfLcf1OyFmRjmGFT367N05wWQ1tux9DI0MWEQYiYmWDNhmbumjrJws0VYzQ5h50ObpiEEnLWVu2qwzsdWNTOAV4E3VEvFIQ5ODyTdCCDSGWJkKey3eXJkcfCIC9YosMqIGdiirUspq7mSrPkgQg/BQkWDKgDvtQUbNswBK7gNmHYoneIJco2gqpSRFLFBOlVWJSRgZanmMDNU8SmbuahErT/sXqCr98Ns2MZ1UCi51AgUyN0kqCQjI3lq4skCiUhiooqwdLSpK3jJ3XlU9LzMINVQ5maNUubhIHHaERGtyqqrqtDMXAvuZkWn7PmFu2p2d29k3Xdep1jIkHoZNU1NMfJZFZ/Zbmobzzd6ioNqJM9delQWAS1oQTdfx31TgOVv1rGZ1vadNO/QtQNHj4xGyP2M/1AifkYOGYtqYAsBSUkbYmqWgMNPgawomFulP8YpQjexQVdFSLgRAysmkBGsXoEW577RUGatqAcjYsxOyxt+aNh+B7MIBKMsopU2KUrJNW2mxuqsHvBjLIc1LODBzFWh8IaXGQiYmRQdiL2blyxtJjYWTKZ2uTOhptxo3d7/gM1587V/8q1/43//E945P3Mov+vKHv/Xqwee9Kz39wNXHPra+/pY3/s2/dOUzrt99+lvf+B0P/+L5n/zz//mt+Re/9Mu+8md+Uv7DFzz1wc/6olsvvXrwzHi8OsatC/ett8PpnVvHxEcfSh8d7+PjF189+IWf/K+/66te9B3f+i03Lp77IMnNJz7OevfB8w8MG7nG+b6uT4r3jHce6i5surxJ6eIdFR27fSYZnhtOH8Hh9dVxn1Y0KI3a9/0IGjRz19FmUyGsoQCfxeTX7uO22Ukg3dz7a5fkmPcQTG7sGl6wLiYAwqJZEnGyKmZElHiUzIImGMXdVyWisegwqdQsLBFMAioFNbySsBb05ogA0Kyqifuu65QwDIOIrX3SDxtFsFhxD1h3nYYnWbWnLnhYPEZJRDh8b4MYvsE1y1rEmy1OcLCayYVLCRSlH0csad7ObwBLvEZQgnofIJXSE0WDLO4H13/y855JMyCk8DxdQuTHdStNsI7sxo5sTYAmUrMfGccbJKG1AEh28IWgw2lEmzr/U1AvtCIBRKAjzCydko7Rbi8kqqKaNe/X0lEelmGiw8jcdyi52fZ9zpl++G0bFC0Nwdh1DwaMiaN0HrSGYJ2L1LZyAmdlZpGYmJ1cNZwxYCKyKMrKdKu6DNEaZY0po92pWarqrF9p/bWdhF0XhTvjHXGrKrtFo/2N0OMMb6WMRq9bI3e1xcqcUdUhd793UpPF5zajF3Hu7bZg85n9KkErDXx2yoBrJt8CcjsZ8BmLQglOoSg5oSDMfOg6Q4o7C5QDTBIYMAAgCVUGDKAicVf4olf6LNVzDA/NctBmm6X2J45rJCLztAmcrhHRqKqh32ewlz
KKLzBCyRc/yqQGS2HAwolBZrXjWjWFMFpNbuPBjLqQy3cOH9978jWvefQffOef+Y8/8LbHvvDbPuMVr/vQ9sZ704fx2CPjs8982cMvv/3DP/43/h9f/rz76bv++F84d3z+lekzP/yQkJzk9538jfGHX43XPvm8q+/6gudfeOmnPrvSAZTWfT66fXju8ilt863L2Pwsv//ChY9/+Ogd73z9G17z1X/4DS99xf23PnT3+gdPGJd5lU7STVqPNCTCwf643eNVPtrm83t3t3fOSRqZB+UDEe14Mw4bCHfOnfb6VfX5zXbZOoJ0IFU1KwUTdUrbSXRrwNuzowQQTJGV/Vg4ayoRCTZEzplBywNlf4M9Vd4SC4EpA57rwbHEQDsIkoeu6yyb3AwfjuEyzR8t9/fUWVvaSftLVZpEAAV3lfKMAvv8Q2GQosXagAKgCj7q9oId59QMQWf1thCBFfSoDLjS57KnJRrFfO0gQFhdF/dIKUwsWzsZcAuVB4ho0GopkcqAVUS5B7iyJyFYdWgrnLKkgVtwB+2897aMJcCpMzIiJN6HySIbhLBn664AKKMZ3QsB5CQi0hklKrUBWpqmLvx5Nd9USk1qZgYm21k2ft5XfHk5EMufjInW2G5z8QoQr0CpVh0iGOLnI8980lN9pYWlm7pWvo9IED+MpZTgTAKtFXPq+DMgnDWr2dwqNLw2E1l0HckukyyFXP7ZlztfisCGozObyoIbbybUGu67GV55Z/SUA+g8pbwJKV4p84ya2LNQACw48QyYs19xNoTLPLX+CzQrsY1SRaQ4hBnBuPzmj2axxgxlBFIAaR6JVv/MpQlEQRVipaxaEiGcgHo2I7yAyWxFRsRrbYQZby4ndBKVoG55s+YmZTKE44u3VC8c3L74wY+efM/xz37gGfzZP/D3v/xzvu6xbX/5Qzd/3aY/ePzXnnf105/7hff+6I99/ze/4KtPr1y+e267d/7wl+689/nf+Ir/6dyLjt99O+9d/+Ab3/zsm/87fdbn72H7mR873Xvq5D8/8ZO/7lVffvrCl733lQdXvvDLb8iN4dW/7id+8Zfe9C3f9ao3vP5Vb/is13/WK0D5yTe/SyStu8sXrx+t90+fwR1ZH2wvs2xvgka5cH57+5TXhzz0qrnX1KWUVt1G80aG4yQHQnVr6krVLGrekFilNKC1kxPhWdHmrKNRwdtEKFVAaEEB/A2L8eMN7DG3jlhKoNARqKAbRwF+isZChJTYqk6WMofk+UftcvGeyJvCL6dEaIsSGbWAR2R+s8PBjgMp1eBUYxKWS0PVV1rzqSy1uLzdmgDsOpLlFUHKrDisWkm+Kc/WeguAigdr2SFyFLAmoHVk+8HFMqFS1AJ26iyHI26fMmk2Y4TaWpwIMGCBcu6AKCaVMktV6lIlZWKNxVUhWVMCWFkJDLbkbOuQ7auO0jYAZhIRkRarD8vC+OG3b51ZxgJMRClrdeyVZVjfWZhFtiCuSzSzPWh1i6bKZUvRC3GAlRRpidrwN0YVofiYc9DUVecMODLO2jC5iXvUqFdkwMAOdmfPGgOu1d3gkTGTUxSPem0nELsqQZumPnu2l3Jagpg/ndrsmvCqMNoOzR7hsM0oUWf+s5ZX57+mUOkpZqnWRMCqb/mfzKgR4CF1GDmyiXZVezsC9EwUrbJCXBfv2l8qLgAJ4bVekrOKWC00konIRPSqAddHGLvFl1nUa/0wFlF/9lQeCSTMqAfMTNjCY51blF204535lH3qIgTqT6OFg5AEgMyOutdNsD83p8c5pU996MU/8IP/5vCbPyc9cO5f/q0f+bR/O/7+6zfvXh9fTAd39dqtPdo/3XvNZ37e25758On6wn24c7y+PN65dv3B8zfPn8uPrJ56uDuQ+z62//G38ZVu8xz9wx9+8Ihe/WVXf/3xp3zdr/ydLx+/IL38lf/+82/tverrn3t4D7dv4qee3n/zz5/we7/md33en/+2b3vyiVtvfu/TPa3HQ31ks98NeCpt9xKvSTeb4Wo6f61n5SOzq3dAEnMV82q1GsbRcMkcAVKPBiF59wgd4X92xDtjCLAIotwplMfCt64BW36aaXsWKq8TYhLH9+0upil1k++qBTRVDVhIMcTHa4wPs6PNOI45BBiWMJ/J/Cvd06IBU+kXMp2n1FkZ2u5Y/sRzw2hEW11ljwm1JCTFpuMaqlO7TKsZ5LUYxog8phgQ6/Jk8HG+63y4aIomeHEDZtmdOT00VcFKS3JYmkFmrO0+VRKRp/hCSp50zbYFjH+OQ4RPnb/2K8rCkoko1xKYoin1qqozKzRyp8GKEC0xzOM4qmbT3EiVSLuuox9++5YwMQXDT/pURArHnlpst4bY9xaIr8WUBzdB7PABx0D8CNZx6lttYM1ebMVrcZTPWNSmdj7dpEitIgKZlSmYTKNYO5tMFCCaolzu2MmDbcAlAyYghyYWcXUrddE1mjoVZ9RSm+1E4KxV1K0/lVW35IT44IwBV147Y8B1/FrTzxhwXJqrI6F2hy6KrcfdWRZjUXUpbMaAiYh1x+ZGBlwvQ7mhpDeoBu4Li8ZUBEHHiHgt1DCbZw3IjEAGmilsToiz9TWLoXAEZeFRi3m8DsgK6lIsfF3hYDEHFWPrr+NU0qxdHVOYZA23BnDcrVd8LMPpp1x+2buefHZ1/4XHt3f6f/X2rzzscN+jdzdHD9+/fw26Xt//r//E33nR3Z72jl9y5fnD6nmPP/1Wukiv0Adu719b3X7o9n2DPHou7e8dj93muY//4n/7d3/oj3773R/51Z96989/Pn/+j69+7Y3D49+Qv+EDV7sPfM2rP/YZ960efMHmY+85+kt/7Te/7AVf/b/+9td87Rdd++jxkx959nYS9Hvr593/wO3xzvVrdy6m/li243gKjJutDmMCJe6560V5yHqAhoSGpcaDU0pQpVHAlAnWQI5FY3pYxJblzlZ4I9Y2qVoAiYhQYMBsH9DoVRx2FuPSSnmjNwYcNpFVyEoSlnuawyIltuItRCTaHFJVw6kr8g8hfhjh4FiYfTk+Wl0YoWVpwzcAnKY2J53BrZB6FB7pxbubidXGlzPSJpMwsVqtcmfAfgT6Oo4UTtw2gpq72sO2PXh8TqVH3VIM1yJXgAY/9WB1tdjn6dWwuRpXBKwEY7FL+OSOOStlAdjqS0MyqYI7VcvrsvOuQlDNvdPMBZVg0mzKuuuTKVHf9/Qjb9/uDMJamnbLiGIKIdwYazKi5ynXvfeKP0DlXzMPhEe3BxIPd5hMJNbKyOs4Mw04pqPM0HRC3yvBwm7Gv9Rlyw+T6DsDJaqktmvPMGUMi+Cg6UvL3GKMNFGpvB50d3+c5kfRrnFXPqtq5cs+27bAEExEMcslrDTyvrqiGopcwNOWbPeXkjeFUky35iwSqbuCtqiYzuo8J6AL3DTOpM1fNTLg5gC20aSBfbnqVuBTGv5M8lnDclRVKTFQyhQA8ENet78SOPUmaPOVzin4FG0sUEA9HEmr9NOLqz5KPnUjyxf69e0B215v3f3Y+YP99V1eXb34gsv7717T1VvrkY+7UyG5i
AfxwK99/M3/z3/xBUmvb05fhPt/6eDmR48/8OkHL//ZzX95weEr9rh7U/devjZ+9me/4ZeeeNsrTvNFGc8fvPzw0YuZ7o77qw8+ePnc9YPH1x96+/qBX/4UloMXPXL/pa//x//tMz76zPfcfuOVV176/b/ta3/Db/nKZ9+jd56+/c63vXcv8/7l8+PDh0ebo4df9AJen8/nVuOV/ibr9bu3t3dP07HgZBzXkx2fVDHOYo5UTawEzUJZ0qrXcGFxQOIuwOXtevjqz+ztB4iIyCO3iXLOVTCaK0kzzdgqiqs4fZsx4GYOzBBV5ARLOoIybbdbEe26DkVIdTXr7BXF7ysDroJdDMKC8kwRKvez18tsAcaR9c4ZsDFCnQrQwMT+EGFKatXTc1YBxMKGK/2Pg5cDwlaLqngaCcrKicVtCTp9xaijLayRStOMiVU9ONHnadPmJmfYITPU6oIBsR1qVWFN6nm9uTSVYsnokoWyV88TDERTKtRA4SXHLZLJd6fjRD/y1g2ABC9FJEUr2om+AFrJ77M1YCBWJ74XAwYmdk4B1FXBwkRbvzliZkGTDeOYdbUVM2bCb8XjvrSvMlYX+cpOwLmJZsmWgkTZuFq1QS0Y8OzmyZsKn/S/mJjZY3imDBhA7WA426C8yFcuL9q9j7E0YJRXJqUBz1DC4xK6clO1Qs/umTHgs8ZBsODFB2UadVyvJQN2n7S6yc5ryIUlxH3xw6mQRFGkqNw3i3Cp/1e9M8o0M6033myl4IOcQV7EwF5ZZ1JuyDtOGZXCETafCjoSZebSBKZsXDChe4FPlChFBa/7o5PN/ef2h83RduhP5NzVdf+e43f368MXHXdPdVllde5Ub+7d/LzP+PQnfvrd177tnz4GeeTqIw9eeel/fO+PvuLR125uv/uNd9/98u6x7z730//rn/vHn/K1n/XUCX74t/8FffuvfPFjv/u9H/rvp1eefqF+wY+94ParXvroe/L1nz09+bQvfd0zH7/+tve/63W/cvvvXP2Sg5Pt47fe9R+uvfFHx7e8WF/yZz7tf3rhtfM3Unfp4pX7b27SQXoi37l2+8N3N3eO0nb70J5++qP9Fz529NgDT/X5ogdBT2rjqOqgkkwKZRImgWoWFVmtVpFuzvBEw75EfCuAbEeYTN9lNgYMwDRgC76rHLTSMVCyLaBCGXVSVKCd1oIJtYyDM2BmSsybnNv4E7/vPGaiXPO+YWUVkctqlecqvYr0U1sKLoCWog0nj86DiaiEMojVXyzvTUSkVo8i5d1nfDSXvVd5rAp9yKqobjoFSoQ/mSzLJQWIU+gbGFlGRmbmWtO0MmBY39+674XUd4xqaYfzOxt20pewYRFJgln2eAQpSafEpJRgUWW5aTgCoNTJmAdhgYtApuhTYmZIzjnTj71lgxIRby+WQFB2XbauyhgqXk5QJPiAJ4JS2/gQrVcdkAIvgVljFysDztNxZp/reYhcsH6OtWFXmoCWdlL1tnRG3m2UmBw+VjqHJm0QZ4ektucjnUd7NSAWSLXc6HA8kqQ45pIBT/YDQNRuJ+9q0nH8vv3ZJgLM2iCezYArJ1uJM8LJosKeLhlw3a9JHmFgjZHS4QwTsRHH2sGizn+Ve/GYPQhJYagljXAqGJF6GlW1q9eUJIEyvE0vxNNJjQHPBD6fW2JAiYiNRHlbctLRGkk5WaFipec8x1v7s1a7rWHnpg+wV/ax3NeacyqWSiEez0lUZIsLurqZjjbAfj68sJEnLtx++LhDd/GY7hxuTu50D53r1ooPnPbnbn/oztf/rlc+9zt/+qM/+B9eeeVVr7zx4n+8/ccPdy/aXrn6xOesf/e3/r5nPxfPfvTaLz3x7PNe/ILHNsd/9rf8wd9w7TO/6MIL/u7tf/m7Dn/LX9589zDS5zzv61/759/wsle/9p35+Iff+Jb/9Ivv/uw3H33j0/LZz3v0an9ppYd/a/M9T37o9l95/R86fvzORzfH79g8+7abH3257n3ZI5918NIH1p/54LVH00fOHZ9cXY0XVscyjF6S29wQDVG3mjvyvo2ZIFBYV4KyLzMyihT8u5MDEsqdlvpoACxvnpgt88JwAFmqibs+VfaeKwO2v4R9+1QVEJs3gYtfsNBrydZi1RrcHm+3XdcRUs7Z1FWiJLs6Q5RV1JLCE32jaO+uZ9fAwDy2BCSgFGEVURGi0p3T4phEAYxoDBjw4ouAeGVZS6WjpELGDjg133ac8Dgw1zYHhQGPIqtisvaOBWXM0sbRTi5ndcFlJwO26VYGrFZuViEiKa1VtbQcdwYs0CQhMF45WMXm6V7weQysxEiilEFK0oMTNFM2AzCU7QwaB5TcOtiHyDUQr1RVZCTFuu+t7spms/EgrInBmUmBrsayUPlSFUCvcyOAQW5wgb9ESpMr3TIqSm4lAoFLilxERbImPPZTtgwuqk8Znp/pYzhDwdJC20yTcOuo6s77iUjyhDE4JJglZ4TlOEO1owU4GVf7KgOQ1FEFizZAVT04kl1zILUtj4RYtX7jP/p2UBVuqv+biEQnwUEBWecM2x/R5uuFtkNujKeytMgeNLg/S/AktYRd+6mEhMykgfrqbvQqAZVnt+UsjBmq5qxmL42iZCE5Ci70cP7gWSUqx9Hy4tmOpW0uM29kayS+UtWKclUbbtunsHifemG63rgFfoVCMXUcg3/OWclJIUSRhe1rQy+CCU/VN+Thb81AJUKWFe91xKz0TU+ciE/UfY1EZLgjIiKytzbBjnLOIpL6jkg3m819f+xf3Xzvk59y5cUPvuzFT97X7/36V179ipd+jIePXjt69qmn9oWvPHDlbR95/Dd95Rdu/t1bfvwP/oVvvO83/qdrb9KrB8ML9j7/N3/tq77hy3/5metvu/bkhYPDRy88iLT3S7/2jrf82//8yJtvflF++EUH97/m4iv+0sf+/l87+ed5e+kSHjxOV37P133Z6z/rcy99+auOk95hOUYehiFvtvl0m8cx7a9NQkLBECOUMb2wIoBqS6CfwT8hcQ10UM3s5JiKDRolCMP2h3mrOgn8UbdPBb6+3GVDjqbjMtMINO0TRV+sTlNi78yhKiqSQymFEPnMpC661TdGU02ZlVRQ5NLHTlVJhVtyjgJepKIMLkSUU0pWaT2XvFXTgrgUzDKoVzsEefetGBZnc66ErXAmq2a/W7Ohad0Ph5uyYqgNlDyCjDPVsHPlkgfl7cM7ksqPp+auWjnLAsBY2VqTJR0zNFtlJ4/55WQ+YCoKfbUDKk8K4FTkG9g4liZSP1tYETrFNqJKJfV9t3dyetyTHB7uM9HR8fEoue/XZwZhzRqbV9n8LE1xJK3ct2wvlJyQsCIyYAAkajEUpB5Iks2vGqyG0QCYz4jy3f0tICiVPbQsSjWrdrxbo6o+3dmq7dZ5tG1hwMbfjQGboJ5rq8ipZllHnvGY9iJAY2xwMxAV9LLRysFoS7MRuN0ZxxxLbdvZTxXIqi3VYwaWyBpzqSgU7T8zcGGqYu4cs8+oDNhXsehrFMmop+J7M0Bu+LMrQ2k2WpxArcRZnzI5Y4DsZMAo
53nGgOuY9Z4leOOfZ0UDJOKg61Ct95vJ609GBkzW50wtXS2OLMaA65kidXfSQKSai7pDXieZJA9bgCv9ImZFPj09ftkTw+GVy3ThwnD/+e19uH739NZTH+tOt0erg/39/aR0fPf2+UvnHhz4733Nt3xdevhTz33a5jOvXP1dX5S+5KUf3B6/930f7sbu6sWrm1s3twcrynjB3n0X99Zv+eX3vO+f/6er//3J1+DS5Yde+R+fes8Pdt/PL730g//kB1eXL/3a8TPXPnw8qGw1i4JBXUqcejBlZFsvFbhpCZWfAflM0cfgHBgwgNHodbGx6YIBE4ZoxQ0fCnoEN2Rl/I4cgQETxjJ+i4KMOihIukIrRLJMahk1qpZKGGIJcp6+yq/oNmqmZqr54iKJCVYSsg0uAEZTUIHGgNkBEsDLKL6qGsWt7Y0AoObLQVmflURG9SjPd4eUl9wXqAyeVSzKJxPrPRhw4qYAxKpnLso0FcMUUuVuT8dsfyWi0RN8ucBqUk1BlZgnLr/IgJFBas7jCQOOGxMoJJPkxOj7pKrDMAgIzB3TBNx1MWqx0JHEB31uqWFYlYDaikMBVu8tHvcpbgaFzTFBUUW7lFS93lBJn7tXnuvsanfuCpaZzX85bE0zbw/ufAtKM2x/lXMmIoL59gBUAcIo+LRYWJnjHOyokFnqdstp0IQlzH+d7lRDyuLsXz6lmLRvs7w29oMX63NWLgjDz7q5teZXYDYTULdg77rYhQYzBU4RDuAoGLRzqC4o8k5HDmA+PEdIIhKrf660K6p5KVi0MT1tseYA7WjLgbMRbHZPM5bU8QEUYRdRclUFSAkSdr0Kqy4fMJN6fCynpEpAaKniBSDJZ6dEHuSSUuqf/uIXrpU3t+7cvf4RukPDsKVBDg8v7I+r7amA0WO1lw7e/fh7vuo7/sBjeu7B3/Llmwt4951nnnzve45ubS7vXb7/4pWnn3763KW98fh03fUfO3r6fUf5eV/64s9+7Xfc+JX3vPOv//z4wZ/9gv5FHxk+ffXZr77vtZf+8g/8WNKHXnjpfka3j/UKqROMWU+QN1lWPI8DnZ7LHcDE9LD4N2Jxb0FaIjSbX2mf3sKJdWlpa1g3m0GlFTtOXwll8adqvk2r+4EMLXoOhXM6KTAQuS/KkRfLNG7vbYEvQylJS2o6t0+eCmExvdw+wATWac2QgNjknhf1V3tK7pSYO36WpYFAFmt4xh6Va1zeAFurqWVsB60U21YOM7fbykIrKZuih6lPdo9ArbCpjQkP+ZRElFVJUaPHy56aUWF3wA0KdyMr8+W9lk20qwuvBI0B6HDar1Yp0TiOWQXEoKQE+rG3D2Ejg3KTah6Vu2Fcyli0zrUvDZPiNMU1D/8uYYLBUmqWwFIpREdSUV2nSRpGVSDOKuwwE1frVRFnpgGn6UGqm2dts+oGVHZV2VjkK6rq4fvqYTfKRdY7oyLMWRrb5J7ANb1k9FQUiAO0ndL4PwsJKeSMTd8eJdP2LU/5ZXukyH4x7glMPLTPiPBcpOe3c1JunqziDEKWlbiVR6hcXJLMCaXuMEOFN3bFpx4ZrWgJnmpPGSB3FkIBULtpVVpApdnqzvt3RvmR94GYFwRlxVDyt0g06g8jucWy8mMpJtm6424nzKKqWVNJpjQe7pGl5MEpBLCIpJ6ZaRzHE0r56Phiv1p16ej0KK3WB3sH43bQUUcVQur7/njc9gerF7/wyult+cj1p+6c3lXg0uFlnKBDGsYRK92MmytY3RlPbx7Qet2nZ+8AOHz5I5c2tP+B7VPf/SMf/W/vf7P88ur1n/ZH/8nfOPkY3vrkE55MIdojMXeaupxI8gZVAg+gm5lY6hbPKt+h4AoLlf4H9qza5lrwoBDUmrw7VnMHi+3IvgeNw7WCWVV2DPzA1dJCcLnYFFsxQjU7WRgHYRw4/jt1qnxFpMQx2HLKq6ZXs5yNhkiFoxMRlRKSRSaYFNbImtj4TXbfsAWRpWJ+19Yzs42GBYlQMM2Jqk1g9zkqFoIwJeff1YRb6osZeoeo6UortEQ1Y0E6Kj03JzEgo4qIMO+Ljgx3AdREHvBKREonKKPAoxYDRr2a9YBEJbHFZljCt3bKSfTU3m7CbmMp+bTveyIa8pizgjswqdCcAdfFLBmwfS/J31gpogUc9WZwiLpOYdht9lUWB6BsdS5V1RiwEjJhxWknA15y/TLV3VLxjAFXHzCHLQTCAaaYP9c22GSLGXCEYAEUhARQOdgWE7Bb845XHD+281uKLxwtKjagFFF6Ov5ODRJoNp7IgKPIbz+2T9Po3EroyVVeyk19bQyYptZpjTFEU39wZGz21EyyWUCLbQ01/cYPlXZxOXWBU99YG1Nagma7n7WZuNusAEx9MhNInbGhO0QWG4fmrNfePmSvsGY3qKpZoXPvZ9yqadbZeiB2wBHrcFVrZ1ZtzHy9OatZnZldl7SwHaIk2csuigglMLNI3ht6JRHKmfJIKjBKTJt9dKN23A0nW1JQwp3N8ban5+fDrQy8t8qgk+Ph4Nz+0fbu3uFqvZGBGYl1zBhzWq83rNfv3r560KfzDzz8wv1zv/LkM//6537kn/3SEa59y1/93fmzPmNDcqyysYAg6y0rhFUUEDVaobE4v3UHo33FtnvFvWoOTzma1ENd88YEjQEXB1PLT63ld6quWhjAxARdGfAkCrowclWpNdsxPRqyg45ZiGLy9DNbZnlVHaHeiYkgUmKnS0MtToiObSdWbPFmLdeOEgOcRVLQueP5oGjunhjtis94dhYWJ9H3i1pXqCkNb8FW1q3AuGAUNONoGmWg8r2qdimJSBYxBlz1KYHBX7zmdQkDzZomDNgqhUtm6uPggRuLSrIYpIyBLGkK/YhTAEyd17It5HTNIlBRqFJWUoLFpdO/f+sWmMQ8m64pyUL74jY7Y67F321vPbqMuIaTo+L01EOmFidi+8ndSP5tKmbMTIXlLLKplkaA8tOZDJiDKlaRmKYstq0au6sMLjUkT9NyA20SH8r9GQnNRzvJUq0NqwuhLPyg3c9hVkUwa9dMJ57x4BletkdngmozQbcvIwOmIoPXBAafZyFh6oYNHyfJ5IDVaBHGjn2pN9TwMYsiBObBUzPmWiCplRYzJsFNs/GX2JKnwSB1Q6v3tP1kDHJWPYP85njnWf7dOP+4kKgMDHlg5hqja4EFIoKu05Ii5YzTHp8SQiEubdoFga+jyEyT5hlBQzIilFIiSuM4WoxeziN36z4RtqOMY9d1Y8JGJKe0R5KPt2m92jD2V2u9c3z5/IW72+Mb1O8xd9thTUkG2cow8Cgp73XrTZb1iPUWp9Dj/b4j3j/J29W4Gjcf3fYvevDK5YMLj1zZHP3H9/2z//OffeGf+9ZtR8NeL6uUiJNCsyCLpCac+enQHQy4LrlGWtSqHZUBZ6sCYeeO1KNka6WgIt0amncx2LNpwK4qRAxxDneGBlxd9X7uimVsEkQWyNfMQmbSkqp64mxYbjWBBCC0IKwg0Hs8sIbIbcRmTd5Vng0zPXuYPSY5TbIKmxvbalPXOdf5GLMM35d
c3sDAFgy4yQpFUZgQAYFzX8BjtgGJ5ZUQCJFx07ovXbKE7pI1W+KnJBsBbwzYJHDR5IHlZk8uYbJKtZp7AYR/UBFWJGWoDopMQoQkhaWbz8tSyVNK6462w5CzgDuwF+FJxPRjb9nQMsp3ivpRvyFmAjgHDbgEOdRnHXwELdXJXe8RJzSsUErGcakEdtUA6cozilNfnfHvuiIDjsdyFoR17yhoANMGiG3VVT+LGj/gGrDEzE/ruiMtKKa+V9W7O2IhTETGHw+2k91IysM98ZslrZ8ApLwu/kpE4xBs5YEzUDEl1Zl7YY0CE5muIpU0pDaNKateTkmngU7miaBdbR8BMDWYIwQlzQhW4y675DFVFdJqyPHQJCIiGksZgHrzvRlwDULU0lgJgED7XS+ezTAy4Ky5Nt5Q1VQIeSaqOVGGwYZLfSZUG4myR4NblAoRl+qMNf+t77zrnIhIdtJsHEhEEndEZFFgRMg5o0+DinQJYBnziqkHD+P2UNJ6pC3J8V53+/jovr2D45u3u8M16V7X43g4HlW6vT1SXqdONicD076wjhmJ095q3GYd82q16vL65nAtHRzi5MZwspcuS3/u0v3Pu3L3Xc8Mkk90zDlbWjMYysTZXbYxWp7UK4ItBazKgA0ViwdP19SPOkp1LTEs4DyoHE0NrQ3Yp7vI9QUaxOLIgFV1xoATF610ZjSn7FbowvINl6YV2cT2TlUZyU8iNWx0N3BgwPVZmxURJebSXH7S+aA9QgKoIhFRUZTZVPcM7ULkWg3CQlU/FigtGKo92TLmvKY05iUqHQwTmzpUh3LSEgCPpoYn+RghLBGF1SKYiUhQl9aEMACsYrb0EOVl702io2UPA0pigY0JXDr7hi0gonEajBySl0U0KZISgbLolq09MKU6T1UVHVNKq1WvYx7HPArAyeCmmhNRR8FQUFUzaibBVhurQGpC8lCMk4ZLEm1B5lvILYzeSLvRwVGa1cxzSmKVorpPO735y+1cHEgt0T0IAa73eDznJlvF7wVCwW+osNAddIbTTRot94eJNGwnSsxGDMv9DmyrExNZfn21qmIRkCXS8h0r5pmdfDl5ACaKlLn790SIzSQiA64IO9vo5B2FPQpXagsG3aGJTpYwlTls4booZ1EFhdlmqWYjy0oIcWDstdxLWfI5UoYlaNGfqLJ/uDS0RJs4eacyaP9D5AJXjMQ+y1s8mw7XAEX7kxiJKylHzVbfZfvWYhkyxizqgCBhSloZsIdVMCnTMGyIyAyDznssDYYGU3iMOhMjpcQJ6Xg4WK3zlnPWpB1DRt0yyTNrvjzSnnRppMsXr2zv3D24dOmYhr2sYz49f9AdbbYdp2Grw2Y86A73RE6Rt/sMyGpzugLyHt+kk3UHwerycPvW+ly/P5y7c/m541sfevbm0HcJ1IHW3PVKAjkl2dK4p8ljeQJpMkqzE9SaxRi21qLrVHT/AsPpDky2yci3ArENW3myHPJK6KMFa4pCVEKWUKkZtTQkAIrsljjHzJIhmbUS/UpMABCIUGKu7G/1hYRz1yJXgFaSQktCHQlZTFM9XEQ2METyLO6MyoudL0hTY8KsHCTTciZSHEaNzS9F6jJOtDhW9VeA3v2RXuWWASIkPStoi4nEhRUNNYVUM+DaosHATlbqWIUiIqiq0qSUuAlvbRvO0iW0KRsEYiIGa+pVt+o03QzalFI6Pd1kQADN2RISEhEk04+9fVjy1Pp5SdBLEg5Q1EHrEV01gJnAF0w9k+9VxeLrq1DvtyUCUCwDcw1pOoJpoq1nGYqdihRbSK0uUoyWhFAjNHbkBkAlelDLZX/KVI+plzFgCUIlB7Usnkz7MymQSoZA4LiUpRnqC6Ao9BtmdeIbtbe4I/bvqDvyXyvJ0HC/OXFTqDeLciaZuaYtIZAYa3sVa5nVZ2toh210bfbXgTI0Nq4wy0fu5l2kfC0ygRgK2xDKXuKglKMir1jSlpliSfTkVlZyA4CXqatFASNXJiLr26UlxcXR1RbOnEuf144TsuRxtBpJy6lqYczO461KrGjuEoladrtvAZOym8vi2Z4NNdvlJHw6nCqh7zsCZDMkxbpf3R75woW9k6PrI4/U98jpIK31ZLslnwxV/wtBtQQftVAaeHl9Gnfif1I3e4xWTDsZAdBUitiYLMAgAhKS5G1R5mlsRbsSt+jEuEAm0mzKjVVFNgzPnsBKQQMumfElylfUiI8tMmUGSu/IaronWUu/1WzCFot6o8DEKgNzp4pxEABdz4CKjKCSGCIt9z2LEJo513K6nOgzVXcOWfCukZRWW5hVVdS6JNBas5qhwqHAbgPOY0ADDs96E5GG4f4/YoJXpZMuf0zpZCMIpWRpVZhs1xRDPRdajNhEtM3e/rWaMN0C6nQmBVQhVUUfjlWh9aQQ7sv3whArBUUMFY6TDJJBGdCB4KZsjWmowVrJpFXNrfkBDHRWnUasHwUDYjoKZCQLjxICkDrbWCFlsGkVLN5nWYhAYyEahfLYmjtvPiFSowSgACfqhnFEglCWcThc7625O717NLjCE+0cDKCbkeyKBDu/R9EsG+BgCZrgGm07sV184ksxF0u1Kq8o8T9nGzPPGrZLRfYXhTuAvc4cl6hQFGzWGGNXBt85bCO1LRiKZo8sP6hOKvLUcXxfCxzsMgrXRIcmiCsw8dYXtgrASzZSuNNHO2Mh1Re1ZEtLOCx/pXIF9hqBYeLzBAj+7S7uUlexZEggb79Fpa19OZtRZA0fjT7Ur0i0FM+LWF1ll6km0QwwCZRFuJgoMGZV7TlJqLwxm6r9bd/X8meGeMyezukC5VmRC7u2wL4ZZFitVpkx5KyEdLAeCadjvtJ3149v6XqPt3pwJ3Xr1TWcPHeYn7/pubwdRSqNJ7e9iMRKAoHgyWbkKoKaRh5hVRaVmEk1l6OpXi5D6s1a5FEiM3zrclGAMpsNjDWAkcLnlgcBzIwWZOZ6Kl5ItCy48pqJecaW5S8usQ61G0H5snJ6Lf9AVZl51oE0Cs04g2KUgmhg0mrFmd5fVeH4fR2qSDAh8ZdU5exzPbvqyFzz7aaaVcWEKFZWM9t8OQUx6gjkNk2yCobeZdNYNRK5aiGsprYTUVcU/Ra0VcYzuoqAAmWvNJTKAyYcg1GKcyl7CpSdlwyAOPn5BauokCQqXIsqRhc4+Nulpg+rKpmls2hrCBtXny1Ew9u6p5SyDKTjfr9KoGEYquo15XUEoIt7UIerxCVuQ6VNkWXaIhgINpLJvp5ln4seXRs35hkDE6P3PdjhHKErwlUKCKDZDM2qtTAczbjIdJ7aWF07coU9zOEzmUz4k0qBdQA1clJjFGUBrw+10GXV4mmnuhem8Fnyknr/HIZEhlsVAeUMrkDTEzsbbcnD4kYs4DANCQ7bV2nrjOVr5QjB1DwXBQJaEyaoy24ky5UfTB4M1G4mJyRKsEpVRAqM5qFMrIt9mTL1CNHSL6RxtErW2xlanj5avALWDYn9b/L6hxhUh6M7h/tJVrLt8kk34PTk4JQfOLx8l4+qQABMorQcThXkitJ+NexLuTIpWXUHq4nhcT
Egd6d6zUWCibQKavto1e7smldosV0WsCqXSCgURl6I4wLrKpKYORFeDnq6fUqtM+8OCCOQuCpH7oS/bVO82dcS+khaEPGSbpCyt+ZVJTM5WKnAM+wc4ZoEu8zgoK65lp8i4QpC4ZJgWnA5CqQ8PxjQVhi5wVxyrhrJzMNS3MmLQgXKABjF3t4WMwKYlVoUICE52pSKRk7kiFy6MOGJBEh0dhJqcxiYoYgIClJszLEOR5piBIUQLO3Am/4BqiTQRCaq21hsQCFRSSlmRBSZ0lxFCi8EUM6ychbpVwlZINhLfc55u91y11duFvZXAUxCvCYAvedlQQ2NkCiaN8BZXUHrM/rCElHMmJ0FkUqJ3gIsSnguuNWrWeEDu1eCjpmKBal57IrmV2GQqlyza5I7ef89Ds9ZsoK9ujJyBNxtQV5B0U+gmeBiL60WzuV7aw3I+C0AzCkv4m1xtMj/ZrcBrZBIfWNZ705IFHVwKlnHkxjJB4pBIt5cX2Gbs1PIQGXS9VdpB64EyClIzMaxJGdnzd9sbmRpVzZRbgy1DtIouKoCSSEhbN5ehsh+zjDnoJ3ticCkJQiA+y5nSVn3tEsKDJpImbvt5TWOhvTcsO6wPezzJaYNnw7DyAJ3oJF7DYlYW61voprXoYBXMC48qan4LqGUjrz1YkUOkSLOGBQ63aYEWjR3CQhWuqhV4QXFBFrC+pqNhar422ZCRMSqokpcvNqmfPtZY/NG1D0iC3/ZFZlYdpOjQKa7cLJeupBZA4tNgLBPQxLI/LhNoa93hvbqy5DSxlDVETKyWFqg01nU+0ySRQbjxkcUZs8sdL1cpULAbnrOcB5cHvFAM5It3P/sNxRqzB7x5CWx2khEREjFksrEStRl3e0DDlTCUZ1ATDY+iaolCInFSVA/yFACA6zCpff0Q08kpKIWOEyu2XMGK0WDFgA7R0DVGuN8mCDaceqYWXTIWSb1QwAIhYC7SUcBLKjA8qoY0GoT7traRtbPGqjgipWDFvKClNW0qDU/EvU4gQIFXKLsbIY7r7RYWjpDvbvHIBVK8Y4ZKsxGqOJ2lFKp1nye8ph6EPzZIPtnaImP2D3tGQVfdu+x0Wud9wjPGSbsXFd8JN4cyRaqJjjTLXYRiJ1wjjx75/TOGoSNypm24S4CqVJI27ipTrO8BhVOrEDOWVW5SwqMGqJhlnYjVM2tMYkUMsMrDJd1XhtKtPX64o0LJuZRPEpYFUojmFJHRyfHTHtYr7asR3ePOuT7uv3+eHty2JZMdYYaDyRTS7NpgTNRwlBVsHWQmOCJN402juI6Gdn/jcUUXMSPZoQtw07gbMbVOSZP5nnmZXucF/dSCbWL5rEp3bAsRYpSMVxE2X2llGr4hdmitURCUDH5RIJQX8h2fGlZP3J2uX0jfGMJZtQy6woPvscoZ/46XZgfaqBKJ3E5dbFnnVZa5pcuXuXo5/0362kEKytTs5CRBEnV+ysbm3eAn33qMd1Z87mYCECctGiJgKnRdhw82jksrQQ4WkCXL00TktIZBfbtIJGG8DEuc5Bx1P2+65k2m01W6bouz+W2moKFTqeEYLaks64KQVRhf9oq+ZNhwIBF91n8/qSwol2mBxNAonJGQUrlSTs5511MXOI/VTWVyCMyUmJRTo06gIqtbMI+AWDuG67XWYiePSIRmOJ8YMAgaS/iUHSzMlolK80atrxQUnaM3yF9oyHcjvcibgqRyoQxt62vwTLlWDoplknf09Bgaoe7SNV72Wj5s73lEwk38WYy25OPQtN4gd0jkIqT1maeYzBBPin0rhy6CjruD0sMII9jHyqLxQ+BohHgGcNqMTaA1l4CxZCbp/OPPHg5JQC0HXtOY9K7kpG0S0g0nmg+3/dXh3xpOLlz6+OXX/7Q6eHFN7/ryQsXn9flU8DJq9ViVLXOTJ58af+aCUYhrFRkXWf7bhY02kSTyVgKlos3kckVnLF6YVHiDDAPCKDK7Kprq8fSemQHuUqdEE8sPa6vIAO15HIxe5BPzgup7rhUFYtdaA2IgpYcrx27M904A7iSEMFivkiNIVsSxA5lei40A822WuQ3NbZn86G6gwH4QdhavmKx8DO/L+SiZa/4K8qOxJdQiAIpzl0m14O1LIkrrwfK2aRMu1SIIsh7A2PLNQqW5vllxTZzlRcVKl5x1hZi+auerqDEiUWMH5sgTsxGNgkQ80x7RU0IWdChqridvOA/N8QxwHvat6oqsozgPhO2ksFp1XUyCmhWV0taENYMChMmtGTPxZdVybqFUQSp5xMw79mlaHSVFdkEyXJ8EMSrhRhbcCWSwvJ99Qx5XFxjc+4GDmijChDvTqeptmPjJ20OReydwQe79KoZykb5cVZYQyuHW1CN2ek/SwJooFhIpvGqIeKxvocBcLnpFSV2y0CBx9dpMrMuqBtq0g5NfgCWTsrdK5pNrPwWvmexEE2JqLJAy4o/8fu6xcYYkAUibFGvWtwuuyCjqqbpapinnw4/u0hmSQZBC8+45zU/lVmyaE6UOmZFN25WOfcyvuQjT3zmqz/7P//q29ILz28u5F/9wFv3r7746M6wV9Vu2xECmnY9TaknJWhcXSTEIsJMqsjBm6PBZxRiMifgrfogmTVvwblQqaTFyRX+YSZZ0YaiNqc6t/oeJ7jksA0j7xBujD6bbZXCgDtXjXK+IxzqK0TO5AftslLG3NLPslHw2qB3JiVOcP9e49v8cjHkUPg+3jY/qiRlyQXKvsbqhazpwiSyI+QNxYI1e5HjuTcR8bp1BKiHSnLYemmVrsnaKTrhB1w3tfRZD9AGmJIlFE1JQziGqnag/JbCODpKbtkO6qay1RXzQE1yLCCmbpChLkYVgKvCHVRKsRaN5ILZY+39LbB4zZSpX62y6iYPOVEiFhGLDSQiUOwVL6hR0Etx/ixqWDfG8zLJu2+MObc7QpDtWT7gyZiNNJQ5hFNBPp8z803jzFGEwTHnHWlI2vapvoCCvjublWVHLL+PSk/9kgo1wfRUG8oqeUQ9lYxPSwtecaklVYazP2tlhpnv7R6yCML2VdCddQVCM3/83vfHV1jfE/LfTD5zUdRS8CaTrCRj1wGOsI0i4BkzAS2cEQU+tv/NnmLJGqLj8l2ogvPiFT3xmEcdc9d1CaxZAe2Zc2jXEUfjZmMrdLzgQ/EYOZ6zn9sd64rHcC4GrbrNZsOULnT7e2NOd48PTk8vdd3LX3PlTbefeMdLX7Hly/tv31we9/dyd5sGVnY3aqFyykTEGGui99T3JtZHx2OsyhLcUFSJrt0gqmm34K7RElYhTAqEwhdhXVxqZjgDZvYqyhR6FgGh/aSa+GpOaS20wshSiRluzFNK6e4yw8rQhUDC7EGQzlLd49gYsMnlFK5IIWkpV7S1CcEaQWuyYhHKKlKzabTo+OX0cK1ZHfe99ZmfccSd2sLCslVfxK3FupG7gmnhkDIzSpduxjwO1IYqpq/JS4kIXhmwlEyFdVK3tCOAtOQTlzUmAbQ0cbM+wYZdoyqpjoCVn7SKYN4eccclaqZ9LbZ0mzUjSaGxSgIwuOParVtzagjp1
LjQMF+pCXdWHEY9eiuEOABKQqi83PZHmdJ6vToZtqfDtlv1UBq2mc0KUMAV94X+wztHhhfA8iROgoZ+LMDEMkxnEEcJp85TbC1tTklLtSwzFydzYod807jBO78EJuk31TeMisBhhjZzN4JFvk4Qwkk6WqW9Lqd17oeTgVj78+ubm6N+3bP1XPasFWZQQtrqUJtXm3OeREUkpT6Tp0EbHpAoi2rXWFpuXApEKWjhtVG8MFmjZqGQR0REHVKGtlqu5HSqR5dLpkTFNxLV1LotaVBJl1xqEsMZbovPItI+BWr9r2m7SSCWppuUEgu1doMArLqCN8G2k19LJVPwo/hu+TO5narA/mNnOg0/xSsuJ1Gy2o1xIaSgRO7lnZVhmaJiBQ7mQHOk3YpiTXTSXRy69b48JQN4/8q4eZZz6kR0S0RrXo3bTHk83O+323XOOasgMZGKjglIHY26FSi0gybOPY/SY+gp39a981cuHiCPH/zg4bUnP/UFF69+yqXb+ydvevb+o5vHrF3f7x9vxqzjerVikeymVJR8GwVURDpuNc8nq6MJklScUW2thGzTbd/Xo9egrXdaMZDB7lJ4+DAsio0JGUAR15v5txvZaqk5iSC1t3BGFeBSsFrPWGCd6hjknvhU5naGOlO/zBrfkWYBqfX7k6xKrJQ4b2PbGLdqFIeC457LBmYaaQwvQtVYiC/WlS6oagc30fsRTkwWKzrk5SAAQGWZVh5JjX/oNIY1nIU8/9IXgpgH31IQ2w4WR54z7C6VvGe17BpVUSaWtdNbVaNjLhYYAy7r9ckQpTxRCeq/y7aw9llacwVXjv2phFpauL5CJuaNxpiIWkWkIv1KFUT8SQBgyyFWglX+IhcCSEv7wk5bqWBQk26IKOdMlFLqkSWPW6a8WvV932+2w5BFKYFJ8wiSjmjLKlk5p9o4WTBkz8EHdCp9RVFxKeMteWT9PgLUYeHYDNVQQfpMufFeV2C67UshlLh4IJAJKoIxptt8aTinA0Yi6UjWJJKJpOvS5Vu6TRgTjZ2nJG6BkYaDPFH77LQoaFRRhZToEiJiJmKoJXFPtWRVrcZXrkYOQJWsdigH7cnmPKpFz03gSeQV3BqeFc0yi0xuWwAN0wN5jxtm9xiPNPNUhKcEjLeH6oBEIe8zUHYu869XrR7HxBMWXm/rPAq+nIEiokSDx0zfCpJHW7LXMyv5xG1utARIPdUzYFKwbM/Y2NWRb6dnhvV9z6V+2HzkBXuHx8d3nrp859Lm0WHM/Wovq56eDHv9Xtetb1y/sT439KnbI+5MvCIeVMbttsf5lAbiIWN7QsNR16Hfw3p96eja/uPvufDME4++9NLF3/Lp7xzTv37P9eFo/9LekW5zx8k4XNd1AHLOk1S3sK3L7db5Pk5AAWDWD9TDpnhS5VRVxbs9ujZSxiwWp10uBlUdnZ2p3UnkAazT8qg77BYzmay5F2eXdQfiYiK1mypAFBaEYlohw1s6xhkqgBio7IF+u6MZwp+886CJRzIQl9FUFVl3Yi/O2JezKDAwqTU9uf8Mkjsj7zFTOc6BiNx67PsplYoRa+G/y8GnLqfAEYalK8pwxl/aXHNm+zyrImI8hvW0tmNLUsSBRK0nVXttecCb3JTudiYestaY7fLyCtxKb5FHE1M4JWbeDuMo2Q2nVqlFrQiEJhAx15fbJLsiOUwYmOc6zaBp7Q12gqGc7UiwKgM2sWXm1PzkEatSw9n3fjRqP5licNOS4GXLodKMwc5dr/3pqJlH6oGOZBAMeR9pZAyMIRGIOgHnnCA9lKlz/7qY9V6JCEwQKYHyQSkqXCfy4NnqiJSIvayMRf1NkashQFX0A1St/9WMjApUah/QQJh0qiUsgbyT+y6v6rao3QhqyftuUVP3Hq/7RDNp0EMtnZEaiUeD4e5BZ8wSZSOmODkxqNbvZ5NZrsVuSKFwWHz2bq96cvH43LsOH6DnbV/59LXNcCWl7YP98e2OOA/KnPbTOo86EJ2/8ghtrivTkcpJ3rDSKnV7xEl7ktPjnDcdeLV/rucHx+OLp9f3b9+5fPL05pHLq1//ho/y6ofe9oGTu/rQhQdpTcN4uur2EnoRMDO5EVG6lJwA+mxR6iAH8do0KzSGtLw8NqpsrnuzTPSk5hJGjUG1zQnAIyJSosqDS/yg/UudT8UtJQLVrIREqQG5HIJ7IBhNk/trBRsSZWYyR7d6jCQzq7jGVgwwpApoRqKYf1HxIZUYEe8WAy4m8N0UcYJOobxrLeYQTynQbN1YKsFAPQ4w2bOg8FlYulMsqGdhYkCOZN8orYFcM7kCQKSovlsLuvQujm54lbogfzyOvwsmRDTs5NhAp1jElykRC4qGPV1aY0m+L/5SE2unYzNmZjZvCUXlV7QPFkxNzkHKKWgQK5D0doYpUUpJCdtxyFBCMmd2fYTF0wQadSF0SB3qpNgBXyEV4xtj8Nbsmm25aihQ3m6YyB1Ck8GxE+fmr9iNauQGltKwrMivO4JEAFbc6vxmyQMgHXUYiDXdOk+ckbLyqLbKnpjUXPNQ9ZJVUkBkUe32Hzz4ruX/LzkB25khQRFEyCI2d/XNrUv1O4EaPhepQPu7SalzSmpNOSr3apsS5havmO8beZ54mb/Jg0wU8gInT6UiibtfwxUJLAU7G3MW2NIUaI2rbKQktfryE13BcCJOnoiEkFRqUJ+90g6zlujZuswZp8cURes37b2sgHbpyjPdhz/lIn3JK/e+66//qd/wG/7Ms9cvfmjonz68e7h/Pm90PD3Z7/oOmrcDtulGWq0o7Wm3T/sZeaB8m/O2GwG5IvrCYbjv5u3+5GTUrZwjvpBuPvZZN3X1znc+e/uGPO/cC84drk42d+mAKB+sqNOMnLPvgWhKqcDb0a2kHu5gtTR1zM8YT33A6w2R82CPFjZoFwARkYAIob6beutyDqx0EvBRfMYEy4rSwqpboGQ7Dosdr9PO7gohACmUkI/ExWmgG2/U47G0MGcTAKoGVF9hFHKaBw8tIR07FM7JNeF2u0xQmOmB4U4YLmtdjpR4JZ3dVi/TOD+5Cvo2vZKx52yoCE9kplmHcGgTLsUg0ap0NerMEwZZ4eUIUjNXp5U5pquYA1RV4SGAwIyeN0Y4IT7+5w6jC4fnhSjVNixE1vih3TmpmVbxoXxwi6BmhTJR13WiOoxZmbjU1k5EwlZ41lh5AmkxqttWagnCqmJsYb3RwPiJcGx+Lbkp6w7OjQmezQ2JZ91pL4jfa0m/oRKLX++ZtTMDsEnDXqfrETSKcJKUNqIqMpAeZDrcKpRPej7uQaQdiLcTXG8v5uoIaZegpnks5gxRF7QLrghhYTLSEKmkU/1Vp8aTGeNh5tn3/ut09Kp53OM6awtMumhZrZ9okHsLVbMZFnnGLy7Nr6T1J2kTW563hj9o5MC+9OolMimNuRMCOyl7W2nJTglTF1W1XkPrQ3nwxrW/+hf/xId+++e+/Yf+0aWPPv7Z3/SHXnrfF926c+nW06dd2r94eGHEZtQB6+7uZug6HWV7jOEg4WCQ/dPTC4OeY754YRzH7Wke7pA8
ucbddG483KfDvY+/L+twdG7/woseOTcg39wesQJ3Bt1bjyqlAjCZnNR13TA233klUmduwRkaMC2CjIxSW8ENG75QKYcMGbWZQlVbcFZgogTA5QYiMq98ReOWDhT2/ax5otSumSpIrlaoeow3lUY6UgrMs/gKE2rNcaLidkHV+M96K7CUaBzOhfWEG40uN0No+Z44uDYw2445o6o/nZmSsJigSZY7PBETgWZmuSxGoiXYC/8oFImBUhJ88l4FYRJLFKdXqd499jTef1aWSlymngXD6W0oWhKKiKDlz+WAUeGhsNnKyYzPTOg6TinJmAeRrl/LmI03W9FwG4MBohYpJaVgeUdTPzxQc+oxg9TOde68ovJR4T8v07qA0XLkuvc01YBnHILJCj6b8Aspt85eZ4dqX8Y95TWnYYR0lHsM26FbJQynKpypJyYi7pWgyuKWbT8kcPJPNTpR3Vbf4uB3XWS1F7wktfm8nFDx1K3oH7hyagWYJ2TLEaiCq7Zaw/RE2ZelT0jYkTO2YM65Z5LZwgdcNqid1Sg07BwNANR6F87hU4hAoQJYfCiTr0rqjDFPBqy6Vox1jCTxnigcAaXz3WmoWCA8qur27rXrDz/7h/7i7/wX//s//aY//M/RvfPP/unf+O3f9o8OXv6N1O3rdgXoMJyKblfr9aXLl69u7+bjgU82B+CVCDDunV8dnN97z97J7bvp6IjA57r+Ykcd8vHmuTsX9tf9uYsJtD0+3oxDt9fpIQ376LcqOTNRSmkk1WwOeFSySYFs2Jc7ZbuzCFbsydhQjtB5oQPnNJUNsyUglc6sXhVyJ1cgoLS5rMpKnUmpxOR3xwcrPsd9WcyeYVoGoc6vJKaY65eCVsdCRJBa5sLFuIBmORSFqCKvaovznGPjQgcpC5xP1LZEQwzH/I4JbZTJgwtePoPEvdlbvWp9mSpwqYKoOMfqwS9SrBZ3BmDNcCYb5ONY3E/zc0ymWoM0Z4BSzzua3K+auUSz2/qraVcXfKFcskNzLNMhwEqgohRKAVCkBXtKqowYy8LU5ArJVkgVqatSI0M9poo0K2rnOhpUekrwWERRKkESyp3p19pOETGQ4H1S40VEVPpQLsnrkvIaecohSrBaiXEGt4qi+llsfnr6gBIXV50Zsd6eL6qyCKADQTASbRNLok62682d+/r19nS4u0p390mV1iMfblmVRsrbrmQCq9ULVeOaIzKVRl112lp7M+1aWg0JI0rGQas+GuVf5d0nuKzAH7QqRq3jzZT1Tt67nEkA5ux7klD+PmxoLtnnVBQjsq2UXF46HXyhttqf85rARfuRMp+5dOXRM2GeVSDdtRDWFpprj1GJAHDQtRM4QTMt11JMwQIb1WJmiQAr3KM3NqsXveLcd/2Z7/6F7/+10+v7f/0/fe/V//qWP/pNv/c3/H/f8arHvvD551+yPunu6/pz4NOPPXtyezjoz50m3azT0eH66TXdIR1pm7Zy+tx9lHJaD9TpiJM8sibi/mIaR5Ys+ZRW2LvQb09V7uiFdGHTbSmhSwylPG7JOPF2JE5VfCpSIohI2cU4jUsrbRbjlvllQUxa9q7UXrZAfp7sM4EwQmlhYiGPW5i81y5rmlIDpFSdE3UcSEod5AxBQVWp5phOqZMVVKkz4JLGMSKbH8l7CFhw5a4z5yb34ERTVSr5hFIyupdUMY7RPpVVuK2bMCuQpbsOzk6JcHbR9FDcgyBMBuCqEZYbSq1Ol0N0tCOqqgwRK2tDKCUMZKn8RjvtsnubzeoTFQWLZYi8dmNFgAk1OBPkJiiHWGvz4pka5cIDkxEuAnTwb8zzXUpGxzISRfiwJdj6uGMmxZjHITNxp1kZxiNNVNesUOJELFAN7Mjg3GpBS3U1REN4gN3ZKW/lhqa+tC+Xt91zmDOvnWhnJ0GLvAzAC11ZHmopneSE2H5P6TgrM4+kHeX905PLd49ftqfr2/37L/J7DvQ4CUteCeVER6vUq6ooZLLxRGTRXrWJnpZ2b3GqlXAAUM2inkOJoti1McszhtVElHegKFVtJp7HGXxmMDcfcDwG9UwujygRqQqDWofEEBJcldR21AkyDuWNkw1KzWSk9UW6mF7ITpwL736DAmq9nloDaRH1Qus7jGNnXEXonECz1NmaUfYInCXrFcmBBNv9erA50qee//f+2D85fv2bPvLUr33nb/3CfX7Bhz72gR8/eeK5J7fPHJ1c3L/vWEAj+ssPpMv7bz+4Q5R69Mi9ZBIBlPOWzq2PoUoDp+16FBbk3I2gzXpYj4ycztFAvNFzKQ0ruTker9VqP3GWcRi2q/Ve3/fbrUXkoYoUZePcD7dc1E7gI8RSROCIqnUDbCojERMl0KDVzPaJDzqFtptNuCQWWPPLyTmq21G3JqyLVLJrvBP/J4uOVkpFVWEZrsY4rYJbudXlP7JTamQ4sDSC328gzQKAiasat0S8yD4j2zAZIU93YVTpdm8KIjeKwmIpmjFn/IU96AQM9zgXtUmSqWHmHt91kJK7f5Ptbaljyg5+3W1VDmaGABxQNU3NX+VZ4LVIv5yFmct3RdxgD93PKEnYBY+aIFVGSACYM5ydtyBZJlALLfDZhKc0cc9MOQ/DNmekru8xbikRgy1lTJxkcw2aNywi8saR9BPvnhj0grpaZAdRVU2udyG3rtETCKYCxijdZMt7ozb1ajoQneBo/TyWMr5coiXt1RtWiCcyGRQtB7cUzrb3NTPOVjODEpwuSOmSw7S6Pd69sOq74zT0XV7J1Y9/4AuGLe6/+u7rTz1z/rFrq7w+ODy9g74/JjoQyiojq5hoJEpKibseojln0pxSIqKsdqg4jYPhdF2vWnZd8sa0NauSLDh0LKfINrhwcdZAPak2IgWFXphSmIcArKmYMEzEM3h6WWSDg/cWbQfbpukzLdu2W2LNBLbseYuCZighqyZtmZ11K4kImiypT1VBKoRMoqrrUn58xuGoS6oaQ6wVEELvHeeK4VE9vduyuqPlgEuzgZ1KkjOMkJds1NOdNMWsnUCWELzJY0TyBnOCJYITQFbnR1VIVnRnOD2fV3rp4Wtf+ikXVrhzNx/87Luf0Gc/VYhAvSCJEkRBwqpMavnHRLXmSmkZUdDAIuQNgTetxnJZEQCgQ0Jty12S/QAIuJ41Ind/WLJrOHGKmq9PXMefpiu0vsgKShbMJcX86KjTHhkNqgGF7ISaJcyyfB2xRbVmeRdQaxEWtbbv1FLV0otKSt2+7CvVDO2oBklpIe5SZ6LCAeFBBFXOqiBxFCrlImpv6XoSgQmlipeqIpEZ4ZKNpSo0KRw9e5CCTg805dfKg8simXAYx5ihTsULk9usxaQfcy6MOZ7fWryijhBjDJ2SGx5GSZ2ZhUYoUzFn1P0V3r0uiNZ6CdrqdSilzvr7tfNOUPKWG/aNJ+LbJhPs/goBgWaVVJRgTOWwmiNnRh6tuybKFnNVBhGwqrKVgo59Ksmy51Y+eB4ZSJb6LJK5tw6DlJCHEYCpvHncrNdrSjwMwzBmazgoIkxdbdUMwNE+JWQUB4iXhTCxYG4l5xJaBlP7SmS
EMlVr5xIK7UtykQglSxr1ndMPZ10MP7FtAwhCsJbmM1RONAlNnEheoXF6DkqYnOj+aj8TH+VT6UC6fcnz7vuln/1xPTdcO9ms1nuHfLi5fXuvu32wljzcVRktCzallLrOEMuqiHSMRCAVSKa85byVYRMhU4Fjphgvy1IuEck5w+qckYAEJAwLC2k5RfECeDZ43bXZl3Z/jPCKD3rlmMme201UD8ZyIcsrzo3LVdAj+4Pl7LKyod1sE+3a+SU15XuyLiohWnWqyw+zJfDimoxf7hdChpaC3juE69kM/Y3KKzmPIV2iB557x/7f+b6ffttR/t5//8s3P/YiK1OsUipakLBqTZCIpwmWKTgFb53VPa6K3ijSKpGl10j8j0iJlRWz/5LJqTpJHIjmlwZ8I/plbrFUpFZtm8047/8BoprNTa6qhhV1dXEj6uu4hK7AhKoZQu6CQNz9GQLY7lDhN3UcIuqWOFEmYx/iv2eDn6BW8AR2IxFEdlfJuMcVS11WlLDOBBTobb0hpZRSiicOmJjopi+1wRsERMZKjmYH0LEaiQLPAwASYiVF/Y9B9fNZ12zH2k5V1DB2xWzdxvyp6YcZNZgNZahWh7WR7YZqDwjz2TlNP56aHVGlhCB0xJXmVzIiIn3f+3wEZEJ0obc7MHD6ZfzcmYY/Jy5w57aGh9V62qNZ5xB22uuEWufV8JJW84wK3DFJ955dJZqxWJKCybpM7BNfqiXGKerltmCogrYivF4r4/TujUuPXr52cf/mT/3yF33e696Lo2dO+fD+CzduHUEP+/Um8UpE1PBV1RXRRHkcrKIJQ4iQEqlCRMCdTAkFkZmu2hIA1FiACSsEYCYne8RuLCmNBmopzQ92O1FI3HTm2hQRKGGOEAGGXDVgOrNRXruEwIuuGE55p2zb72cKwXfkhc1nUzYhqUgFM5unLr0+BgeaPF6nF/jZ5GrZUM1TUF5g/xveiGDenJ7eVmyc0GySAI7SxaPzx5pOL11+9KFHv+kXfuZdz7/6xZe391/HcyXPRUHK5q30kZy6oNEUuMtSSgWDxCoNvDSZcp1S4X8FPAxS8pIj85N/hht1R/RCeRmZPQYOXFYQnOSRKErOz1levTovQErwJ4gIVphAJytq1EoyArT9Jm3+xgwFvGJRfXaKXaaECTwfuvFgS4mu71X1nE8EL6NWhJmM39RQZ12grGJTpWLcrngSgLkbOiHoaUJOUS0T0wJtdUM9M7s4dCtCh82dKVdN5otnqg7oEk/RXL2Pfc3GJpnQ0wCTHYsK8pAfrvKo1mxYeJ1IAJRc5LKtomokuDdGmfq++NXztIxZhHxuDk07UPtDkNFgsACU4wqV2Lt6EcGLu1HHTCoiY7+3n9U0KFf2FKhdFIkawY9gieeRiFThhTh0ul1EVDuNIIQI1RiwJehrEM0MHFQqMsY775HXxAEb0GR/b42BdhK9ieGE4MbjUWO+2lEBAF5RHgcGrVb9mEHoP3Trziu+9muu/8B/GD74lvseWx3cPbe5+ann9w7HAUInB+hQxHZSVihDVCzfka3CHkE6E9CExFmD1q4sqmjlgytSVVMJ61IA8uJPQK1MGy9V1RCobvvKTI0UNp6izGwdpVQdfGpN5oOzIb46Cu87T1f7tbSYqxw38uDkBTRE7CQrqco0XnL6iiK0UvE2abGmzshZtdNiPli5edc1U55ag8/6L3k3BQBKVhDAZzhjeGz0aMpwuueeuXjlgvSnd45vHT+394Krn33K45P9x1fjAQCWKuaqNeoaDBYLCJO5RVXNECUVFF41t4j29UjO+pKRB22mwhFj3Ic5INtyohBTDPsVyHZV6S15SA7s6NVkdjtcNbK9DN4YSQmDLWE1Lo1q3D1aaGCzSqjtpyK5LjZ6LvX6V9KI6oQa6lJioDKwI8eMRe2+hBiJSdhdTApIIb+f+Jqt+pN8RFV10R+3wh0AwHOk5XjMtboJ60/V+FxmNa+6pKSWaHnv6c0X0pqitF8Jzc5cgdzqCdSqUG5bbrQOC4h5nD2A0mrWRMISETAJCfBqWP54YgVqu2hGbdzrKjWVFkkiSmQmevdrMFNi2eQ8CjNzYvVQDOQgRFaqqOFfuFXc3xWDsIrxGUDxjZWIoVLspAgUEcpnkbzZsisg7r2BUdaLH2qBVvPk1ew9KbYvm2ddvIGjFI/wPUqgIY0yjnv9HrbKSucv3vf+mx8/ePRhfP1rn17n533Ghc99dv2rb33/9aGjw0fW99Odm7eJuO/7vu+Zupytgu+Wuz0wVJNaFQ6rIK0t42Jh/q1ZMFRJRZTo5xdJPACqav5uDXjsN1rIXkhUcBHZ+fsONlXdcT5CfXYawbK8lCoddUIcCxSYmFGmkT1BDG43MRJPPMGcJQFqiAtYaW0TszxqFIBqPptYLQVA/9/wBZWOvBTyUKs6u5Pshs8Z0wNmgXPnH0gnd7fDhmm91x/Q6e1ndJV0v8NYJmDEhBTQkn3aMFZLdP2sIMmM/U+UsGhvjBAo/GPnZdXsysyNGhYmWoAXU73ru422aVG4AzzneuHuF0/BSJWRBOsOAkqksBYtWbxY7m8hbYh3BlyasoN2GNjaIJqyG+6imla3aJa3RAlVVWQ2s7nHBojZ0LPshkPd9wgQDUFV9fKTVW6LN9xTJmjemUgQwh6FAGNy8xQtIVVtA5/ctTy/5fOOe2Axv0ymx5tbjtlqWmo9iZOnFuX//N8SjpCqaUpdsjXpq2mMU9BXwc4fKcSv/OU2XQblnF1A1zFn7TjtrdeSNecsIpSYKQET/71NuAJ2JxDJNeBSrLHOzepLELOTTYNOAV6ePH8mQrQoDMvct4+hNNe9N3ZJdMwdUmvZKLu7K+p8SzRCAaXBwOI2VTURZxlJOY+8WV38tee2h/e9QofTD777Rq+0fvDi4bHePd3cfYYyDb5S5a4jEu3MKkekkpWgCaosGSBw32fZTGJGXB0myjVisgnyllu8ICy0QzzXeQXECl67RCR6Fo2uElLOORYBrU4E/65sLJFJjeDgVp/IQLP9dRY5TaAKnNhrfhRxnJQpeQEHLA62qpW2AgFZm4xNRFBP36TiASqED5gysCbT7NInnH/H2EtVNW2AjOwi2TH3ELBPoJE4/yiT/cB276oeX+jXJ+nire148bzIrVvn6OIRGb1TEFjICx9PsZ+I2C3GTrcbHst8OTOGB7h9z2Ly/R7xLnAmsDYaUDbeoVc7FxWqVRtzoYDXBNwSg9mOfO1pYVfr0mFpLTrZE8DDTcrOVr0ngzosEFtbFOucFFiTlUYip9d03w2x299lyxRga0JmpYLrLTZyxTGcjU7tHSLEbEGuOlQLRW0+MX8wcoHIY3aKL0Q1l7StzgUUU1Xb0nYkQRm7DZaJ4p+Ci+0VLFjgWCk6H8QCpSo+71zXUg4jYzKGZuS6KQpT8MF0wmUqSVNq/MKEG+x6dVrOpLAt/5NbtdEib9g2CYyAFLEzSr6qqq738khEQJZBxwyilFLXdaebjTUCcHvlYuPwiTDHyE9nMy7p0g3nSn6b6+KA511IMDNO3ueQCpNQAK0GFocNOK
swFgpKVbkvlT93smyast4Y8k4o4oz7LjwSL2lP0KxjXquOWznRg4ODu6cnqzt7129vL1958dF4NMr2woXLw/aanq73r+zlrHmU7TggS2LuoImY8mbIozBxlzIDEKK06ul0cLcWdEKBmFNx14RTZwGNM6tAKUIWgFI9tXPddwYWJ6SAWTQqvYlnuEB+9ByBppSoagb6M3amzCUIlRXalfWiSOsWF9/epuLRy81+HKdNOsUoNrs9GrjaG6cKRElRnbOl3ZMvo4koLBC32ICoblVFvoWZZ6ElUC0Jdd9w6fyaGCf55Din1XPE+5fPaxaMypbWUBiGFCXNRH8iYiIthoTsaF+8BlYvYLqmKC5naZWaqviPXWfTR1tUTHMBdxfYWDEy0Kx5xdnMridq8BS4rJDmjLMpK9CsrTugklRRNOIPCv2ioI6gkHgqJnd/Z62cyCY7cZHSKuegooxYXX5jrgFvizTAagG+hXAXyZ6KmX0m+vjeJZCKhwr7PQxl1ky7LNEyrfg7Kzo9u6JcEgmdAdOObjgzwbJc3oaGAIpS3AeButRU3cg4mRnmNVcpPJiVLB+3CQRnXRVQKDJ0nFYJA6KsWsJoXI+qyZdaGJhgB72IE2C0k1XnX6NwyJNNwCXw0fQL8sLCNhWBieZqaE4JGMXpZ845pQRIzqLAqusT0zBsswqHqm3+OmXQHD6GzKoefVKsEQ6frsLIP5Q8MCOcrtcHsW0mG0ZwRK233g/VWsDdRJNlZuFsuhHV2vfmyrTGA+UA6Qwvw2Gend72/Zb399dHusmddCT7XZfGfIC07cZL9w356OjKuTSMF0/v4IH7Dxh8LDwO0iUBmMX6omdRObdizUMWVe40YxgyUWLSrkFG+qsAAQAASURBVOthAdseswXbXVXZmYZkR1uqs61a0orVn4iKKKmOTAHV6tJKukf7yQhu5FVoARCGAL7JAEqgLMkkjWGCSUvnPanXjDTY1oDGaj1TpYxsNNzPRvQhzYwoC0wgIuRsRV3jlKqAubwiHYkf/LQ0+EwFlyLKcKk1KaXJ1QzOYKcJJti0EdZPPsMXNqcHh93J5fXJce47HGxvn3QHXdEOBVBCUmuJ6DrGfBUCnQXNIZRwWl5aNHtHFMCqwolMZTi/u/TXq/S7QK0O5di2y4JqsK0Mf7L+1q2vzSo+NZtwpVMzBlwZjOQxcG6Ee3ZTf3eZZef09TJIu7TAFgbHTkjKYn2e4ZQBrWLGGYCvq7P4hmxYwdwJ8In8bHM1ESU3awYoLZanCB/71xIxqCTpEbksqEIzHlkfqcKE51wFS6kGT3Bl8Oo+2qo0t1KaZ61o+eVScfJlJoLohAerioUUL2y2cV/ii9oGqRNVALls/ZIRkmrJwJ5PCimBQEU6TuJvHIZtv16l1Fky0mq1gsrJyUlarZjZmLSIkIIptQrkZeFLsMyEKvqJd2+robDCl0gz3OtQ87ocL88Q2Wqfb2Z2s4A1Y7IYbm3ZxsZYIobGAV1CtPo7aObxoTzuzuDyoPYpFusxxGHF0JXED6AjpilCxyS/Up8rWdrl7GxYppPlGjJzAlkG8Fp11MwdUcLx5vTo+JQorfcPNI8MWq3We/t7RDgZsB2ydTxdcVplSVBV3cpAB2v0RHePz58/f+vmUZfWJ9sNM6/X62HcDOib1gtJYtGnGJlj8gAV+7wnaE59YDPYVvFSVTv2zri1sJGDInXlKVHVFgcoc02lEEoLiS1KUDGFCTIbUhfUsrTdnhrDjjS3elAsHNf5xDREM+JubFQe93FQ4UBxtPgdVtSNw5ZV+r5X1XEcMyh1/aiWX2themIxxwScdmxxIlxmmFWV0FWGUWO7yWIgR/uywJ8syNNKNkag+XRNGyEhUqKkQhmQDOqUFCRBHHEI2InJzdFgGcAt972MTwJg0PqeCXxMUOqIrXHhOI6jCoi0Y7u5niPjwUKiSqYbqCe4i8jI1BNgaGmbaw19u8nmNr1Q8oii3Ff62HHaaBYq9kkgCZKCFKfUIhcn9EEsWavqqTSCAK7xsFQWa2vZULZc6ij3W73scRy9kTZpzpkV6+Q1tEOgAqQFviSBG3tVszXhWPdJREYRtbQsBUkrZO2d2Ri2pxUmUpWZqEJgTlqptBmdcV8q7pglpymAhBXGUMcWjW1M66UWt1LApqoxxspAVl5dkCd0OKhvZ2YJ84lT9RdPkh5RJ7ZkJWdlNGmhGxUyjhtDyySfwKQ3pszQlmapmhWDcdi2BCFV7QhgApOIjOKcmJEytquulzwAulqtxnHcDmO3WmdkVlYhgoCykkBZCpqpanEtuiVGSmqfkABIcGG3BWHNFzwFZf2yamOz+41bc2SNUBMOZ4PUbWxAn4kqRTj3/9d2D4fP9kwWIS/ZQQ1BJ6JcCfALstWO+U+VonJCgJxhNo2SO+KT7DsMmkUT06rfw2EPJeoSjkZmOr1z68bJU12XLlw6f25vPeY8dr1utyenGxqFE/bO7X/4w088c/25+85ffOCBh05PtnsHtFqtVHUYBlh6cYOIRz95cZKScFb3KKVUG8btXF3dO4MvWZo5+eAoCB1cA87e1GOhtJbuWo4c3qih4/pSuwMVl6T/6ILtRFUTuDYad2q5NFrgjxua0Kzw7QaFQqmYeQv3JLjhoREVIWaIqiYlBXix3F0TUK0KdHuFAMTMoKpwuh42mXZh1RZhxwliSlu4J2LlYiZRAqkyij9YDVEaDL8ppWxJXznDKzVS13WDZBvCglraBGzoWMtTQF5zgCrhVtvxXZQBQcECLAiAKAsRKSGPWZnK6SUq4gwW6Wo+/4UGbDlXWiMLa4dBY346R9kMhYJFk8AkzEwgIIGSYFsCZ7kopkBt2eRmJI/vcKIvccm2xZUlVDJCaJQDmCA8xaDkxbXzxM2Y3ORmy+UBSuE38ncufOpFVAzaWNjQM6azY3p1ApELtjGZljvZcHMR/X4Py5ZOMxIdCMl94e4kJhR5YbYpsDJEpZJMmyQTE/M4bhls2NeRJ7Mo8nq93pycMune3prIA6JzzspNnGovQipRUjXawRvWW3qtkHAhu4kU1QS9xCFLzK+00nIHGg1fnDRGQ6v6k9SAk7hPC0UqvppcbmmXx/2al4cmj1BRJOp77ScrTcCYIITGaS+wugXLEeWpa7UlsRWpAsAIkkQiKlkpcd+ZSMV75y+tVhiHc0dHt0W3XZdPT27cunvr7ildvXjpwsG+QIZxc9+V+8APP/zw/QfrC3lUwjEl3L59m5nPnTs3yqyCWCISkHM1DqmBjR9PbYbhhqaonWU+mu3C4rbJg3NpiQhTe7Cdw5yFLMRMAYgUk8dcr60Mhic6Cltpz8VLl2gzmXcVU8r3VGoMZesKxjyW1ENSztDOObEt0LyhDEI3lcXvQZCM+7gFkxRCxURhvrRcGfB0smx03PFTFRbpmueafblKUI+bGd3qVhibVttv6bJVqtlqOb4WF8dEQqJay40Zn5jZnAM5QD09qXixWKlYWryVnzEYouLu8g2iOKBWE7eLWcqKLiWNTm71tp5cWAEKxXCQFGrQ5gehEiTBZUmoO0pc3
zipEi9qMre7BpU6ePCjx0bXGXGlISbHKDwZ2qLWycocelRTqM2pRdPQIlRRiaGZ6TRo9olJi2WdareTDQq2LnuRkYWsrfKC/Wt3EE80yDBQjZo2j4ihH3YHBdxTIKg3ROpkSTcL/lKZ4ic28scHd9xfdqeZfMMEJjVYpnkl8+Uwaamsl4gTcwkNy6o59auUus3pIEpd12+HTDwuxmnGpiBww/onJJDShIqaHtTVvyuj9DEWbfJ2gqP9XQ9wDTQlYiKUviizNZ8p8U1BLOXZKvnGX7XoHFRLn8DxMeYTF0B8gqvWIqBivSQicPEGaaPrSjjNQ0qJKalqHqXsO25ixCAM7S4crNJBJmFZXzx/kK6f7B3sK6XT8fT45PSZa3duH9/N0GvHty9dvTqq7KXVQw9dvXO0uXt8tFqtOrN3kXM/AbN1a4HC66MqSoicuTZmzKnkLVRwSfgXzAzX6Zs8pGpouhSwOB74AmP1t0BKzEsgB1Wy9mrvxpK8dxPCCbR/ucRKhKvIilN94d7i+ZJ1GQ8eACQm1WwKLid1qWUubRirWFsCt1npq9gXbHRzlK6kcOK8d+Oqy6YaAWu6brT7qYtT2lZhFCFD2RkFhaYYqkXfVW3kqZ6XCaCKIJZztrqofsBFMpREav9mm1LIRCIwVFusPiuAlMn7XjjVYUpWhOeM3al1QztvI+Krs/dWMVpVRcz4sHOYcDYLcGwZ3tBeA9BUlSzNw/HNQp5bFwRqscDKJMb8FCYQ1DxG925zYaSqVcKzkqRadrZuoYbQITHhswJmblD0rfHAYGrsqigeu69aJEvD5QTQHtytaMyP9jLyoljBzmD805vrNzVae/ZvpMBlAK3T20GUzxI4imSAQDQA1EzaXe+C9V8PUxdxLy8HIyJEhLtOC/YqsvHtRDg5OV2v1+v1ersdN8PY933iPhFQ8rA9/E4JsEJzO/CfWNW6lxTgepq+cjNBl3kXhWSalzajv8uLa1mMoqca51iQch+8+sFnb6nvmn2DRV58fTDNivgQEVrBJlZQiPycLTM8JS1Tt0LUynoSooHXSmO6zYxJRbjWjxXpzq3GzSAKIT4ZZbsdiajvDy5c3GcizbJ/8dL+xUsKXe2d2wzb1X5/48aNvu8BHB0fA3zu3DkRwZhJoVYW3GdsE9vhHdBiIp7CwD9hsWDVT5RO36SW+e7QlIJUg6nLgFrhPA/ZcF13F32e4dVSXGvz2YU28arrqj8vS2uhsUxdINscjFJseVxlsl0iP4ozHkDV9mUmsXOp8lZ2UFQJiZkAZGjOY7IKJrMJ1+TUksdXXz2bSXth4biR0Gm5u55lo/fmq54ufDIBW+HkG2YVERFGc4iIytREHOdWmnxxa69JofCLC1vMpZBzS+OJUCSvLMw16qu+rwXTTsNu67vI2kgQCWEkAWsiZNJMpAphypgvs3IxolRz4KnGHzRmE4/JPRnYIqwBC8SLR28nni83HVXwZbcdWNH4ynxyCHVe4vBk2EY5FjrMgvbO51zkgAZwiVWIMLvnk7+iPcoTKSdB3W1d5Z6k2FEmi0hmd8b51zWqKBSiyonW67UIttttSslKQbBlDboFlotIyTCT+8wrT2aBM1rjX45Fcu1oR0hYw60ZoHceeJ+3dbh1HcAlOCYy32tNyTlr5W2coB9XPlG0swKdskIh1IM4GdwIhhn6d61lulWNMNlVQ2GZaGvxFOWBWiWY2aPPE8j7MeSc8yB3xz6ljjoSJulWaQWABBmjQMFptepVcXx83Pd7/WqfQSmlzWZzuj05ODg4Pj69efPm/v7huqhAQLNFVPfmhKaUHi+Rsrd/S/F0AsVmDHHJBiDyZgaTrCjfYuWaAT5DCRGBl46ZnIFSI4XN8NLSz6aiVX0kizRx2hc7aYxYlxafamhT944KBAhwzlGeEAW5EwjwOItSsszJhL3TlqLk/sJq1GG0gic+sFdypwiiMjczPifLaFwkLIpHiBAoqEg1BqpesdYV/EebAJYXOV2AdVyLELOF1XAkLdE9CSShvOiMGHClGUZllM1t3BMLlzcW3GMFLSqV+lGt57e07kY4zm2xpRlspNo0gXnmitMGdwWA0USBglPZyr0RZkTYuC9UMyFxqGPJpMVATdP8ZtNxZcxEag5iJaSph6LaGyqRpULHmAik3rMcxQWz2LUdqmCRZetoFUTApNb9RFlSl5lLOrsHXi6Gr1i6+KGkVBU0m0oGZ7CAOofZOYU7E+ge/q9P5trJeiLcZmKNSc6qWVuARyHc05hz04azdXUzd55mQK2k6OHBgaqenJwKdH+9lpyHYbvqkpZEAYuzs2NizT4jzpV5mgWVol2fATINeLk2QK35AWr+H0AgBuXpXrZneZL0ZhIKTcac/TqHb/nkcqwWM3KJyTJfU3uuWcl24SiVsiZxV4RqRa05P87F0Unkx4kLDvIMBW3TICpiUob6lTlRB2Zi5DEPoqpdWnGXVFWhw9gKoK9Wq5xzx0lBx8enq1W3Wq0ArNfrw8PDISukcIJijzPuO4tuX659IeJEvJ946nfD3x8JGrKJ0rs00Si9qWoRn0yvssbMgLKZtWt9ErubChoY7zf/jE49wamOuGD8zeJSp6kKIAmUHEvNLCDkXQfMapmKb1FRShyj6etExCqA99iJZsBCEH2BVNQUpziqoOa4aU3FyRUR8U4JFVACEBMDjbukNM8c9fe44B48qYRSm1LqHqkrUhPEjoWnlZA8L969NsxuL50nlBePklojzhoW5Z5l551Wn69eVQjbwYPtjVq7NRWJvJsENDSyWBADMw14oTAQEamIdakt3xjVz4SYC6tF7jRnR6eUCjylrDolNkHN4Kih3J6NYRic1eQnaGuAGKbFpLGCStkTLRRMQ5pTDSMPC5zHwNPiFEQBpT4VLOJ1PgZlneadxGsW3lFuO8PCFK+dZCfuvqq/16Debi5lLuPNC8L1id9OboxwggM05bjUGzS8MgMGAyy0RZNSXPQhAjyMvWh6BE6JiJh5ux0AdF2nyKq5Sxq8KMFApYksgK9QvLIip34gZWVpFnhm/URR0FSuWYW8GSC05IBWQKTy4CgT3aiesbr+2VBuOnbtRKk5Vwy+zlalwHiJmkEIchwVkdrKqaJBw5j6SKC2xnRZYS1HCKg0wweQ4khTHiXnPFiW2OkgECVGt9cRY5Sc5VSg/cjrrusO9nLOd+7cOdi/dPf4aL1/bpu16zpKLIOs12tAtnm0DJMoLpX8EB7hYfcVkv5n1rr2unAiyroFYAwmQpu5VF4Mli61aGEiQGrDrKUgHAFoLRJLlxtVba4gneo3NmxqSbTNzECgXCZtIocW5lpDYGYSAImmAqIohZir3qQpI7WkUNN1SmM730xSIraa/gU6xJaJCAyFCpq/3ZORREc3gTQXuzMfEU5F25OxTWnqHXGi7iU5yUqmFsM1MScZ5vFuRFaqI56bBs+pADS/WrpLwW3JYv0N/Rt1N2c9nvHVKMGl4dz4h2EYamcYr8lHxMwptQQPoJ3BnlOuMfzBZzRIrpvipMaOap5UgJoitifDu1yhRjecESaF
C3/ckhhnAFHVRF48HYpkIq71XU2ldE8xnygoAZyYyLpQi3pypg8VrdYFianulvNCZ/6T9hJUtGQ0VXWy3rDkHVsc/yw4KH2XbMiShuS/c2ikNtniYg+qLy+QOoMBM+m0vLN9jqZmmkhgVT8pdntGOegT0eqTvCImGB20b8METVBy2XT2ijLP5gMWURFZcTJnn4gISUfc9X3fpdPjE1Ws12uBnp6errq0t7ceho2J3iZ2E1KR80oeSeHBJtOrqplnBNVf6aEx9NPvKc7kKQMbqzO4iCp1gXZDbRpY7nfy52XAPGgEHLqOREwSTgCoFE+uCYh5IahVEbWWsURBLCVC1jqNmiDLzKPOCZldRQGaLBYFaZbzjLWOZ0Mh8EiPilScjrHDaTg5iyhl+5BVZmP6HNBrq2bnMBSRPSSBZpHcNgAkOnThMKtycfJ2Ilqy0Gr5QCKyRtPaummWWck0jL58dtaireks+Yuk0k2fJFRrQ9CacE0iJErSy3oGeZ+wJ4jmYrJmgKGkkFkGp8JKe1rPGUZiVUXOUOvt2LOax5ZSSki8GU6FlE7A5/dxcrq3ok03jMdHF/jghPgEmtZdf3S67fquX/XEN+TWhY55s1cJVnBBYMOZSv6rqtb8VxZlUAezRWMkGkgV6NhKgHG9X5S0JtZXpGUWQlbpMpFpnCS27+KFXP1gl9QWIRUiGtz5YP3RXVJhokGb1cFRDiCigXb01daySwzqShJsFhGRxFwqs1EUdyB7mUZmls3xmraPPHRFVxce/8A13pJ21w4uXBUZ84iETHp9jYe2lAFYqNRoPW67lHMmwbIZgIgUZRVx8lhc9Z7ah9Eer3TgVP0Q1erflQtJKZNU/SPVxhY7PBr6pZGJFex5nKqWBkwnyCklNhXZtV4GMKoQtTRfLkWvshskQgsEJWYeAp4v7dFOZqdUqBGxcOeK06iSRTxvO4uqkiJFM3WgS6NkS2Is3andpNdxagWeQt5jB8o5jyrWpkmgIpKhXdeRKFRQNC4t3rLZS10FrM1XpooZzuLHDFVorTgAAGL9pOM21djPkbxxOAEqZDF3WbUvwysB1gOJiCjJeGpHUsasEGbuui6ldHyyRWF2Ru/LMevrCTWt0IdNXWFqDr0MUcJKCzyZmDurFpKzdmeptvHcxm+o1gouAqBfrdCQyctKXhV2RxAWUYmODuPfu+npWVdo3xENgDNFoV3LydTv7/HU7Msl6lc5LPpm4s3NlVZsUCgMfnJbVd9ZsPMcmp7KFK3JqIp7OaFRXxEyB4HWG1wprGN+EuYmc9Kqo7hX1Eol7OWsvatyfs1a2X0btfqWE+pDSiBZVE9jBYtZAy0tlYWVRJmIkgybLYi6rttsNkLj+mCd88D7eowtOB3yYb47HHaXCWlMaa8/3b/bH+RzN7C5k2+CcIHWvN1THtrcwk4wEhuGF/2BrGgZRlaY+iwEhTBYoCbPsIvL7kdtol57hX+IoLAXVoRh5VoImUp+hbENqnIOHIcoaFf1Mg40sSmhkZbodNS2L1Sluki4D/c3z928e+5Cv786ffXLX/YD/+znz1+9/HmveyXo5M7dlzz++EcEl/f2ldeiJy8f+5vRExJPR63wMztxs3PBi6OHqXKTizsDgEC1VsguJ02abdb/9Rrjijgy+Uoxu7wvGbXzYjifQpGfEodhgRHzEWBqBpewcRMIyvTmZQSDMB3np8WqNGnGHNQJEallk1mhpnCyNzlYnvQiCAalfGpq0tK5vGAUjSpVGjBs7IJ9e0b6quRUZRGUYO/Z5CNh/CQvIuqqiZsAN2wF8NmA7su0LnDZhAwycg1AlYpZ0UUx5j51BMqDV4+pfEo/0QytmntNGQAc+jaOV0NVDxgh0h0m6LITAJoraMJS7J5CRJqUpEaZ7eVK0sodLK+ZoquNgszvP4tDOFxKFBIK/bLD0GJePrktzUViSxP0mOvumH4/Y8AZYE47D8ZykBkpmS2t5qylWmFADaxhL8h9DkJIUzmqDme6r1HYZPGjIFbkyoALCQCm/sP5Je52LRsUfVeVWBjZM1RW1RrC49QLqSXDzdzwLXFjmv9GAoKIU9I6Bx+fBMQg4VI+QofDg70LJ0cboLtwcPHo5Hg8GZnXPfd8fHIeaYXtZnU8rHKf0xZdf5TvpDtH57ZpwAN7D5ycdBhVdYjpmCjHQdXrBUsJ32GABOzuZBaqeTlgCCuDLLGUBKVKiJ+eWsGKygtCr1wyyzeV/WF2dwDMltgiAwrCU5hoZJZnXmVJdRoEy9FpPqNZu89yogFgPF4frvTk1snLX/ayL/h1n3f92ttU0+UHP/N1X/FN3/ANf/C1n/uSG0fXH3/81ukmdb2u+5E3LV0E4SzUMxuPeTwpOyjPLnFZwlmrVFLCsHFwIurgfR2qjUb9+O8EldEap8OW+mwPGQMQEQKEwUwimqFMzbddxSyF1ugt8kJFbkPlRvd2qStloyPcsIuG1BPqCVcyiUKdSf+TjSjctxJVCoTQPjOzirh1rUh4voMy37sJSQtiHJWU6J2K2SelDPidAW5Etf6zuokuknEnUskz+sxLkIBimKxRvSIEdJx6TiIybgdNHRVegyjCGqwWk618oRzrIrcY6XZ3ls0cStTt1BpRyGsVhCt2tKZL07PUWK0htBZYp8n4s+2JvGrJ5CK4qQqnswWLxNNon4XQ7RKZZ39GDKt7EEvDLKeBYuDiku1Xo9LUYRKFpDoxReqqDFglQcDTDZsQw05VB2tUiTaYFYUsDK8cqiJbtTqkFnbkFRJULWmqsNZa4iAuvI5vY+8Wd9jquaBZPvyHANvwmT1KRSxCkETBHFPRZ5cCNI+TFA3MWEqWsM2TElugrzMgLwFLSkcnW6z2e6Lh6PTuqJmgJ9sx7a1Pe1V0w9Cv+ALdvXu4Ut4b9/vU48qNo6zKH7v+3MHl/cM1n2qmvKqwp6K1KNBlVkIGlM3qWKoeUhICuJQNUyQFqwilSoCrZipUIiUjlhr0J+IH13/Zen03DdZ2lD1mIhwu40U1+AsV39THN0G5opxTpZrDU2hNaKTbrgSvZTbk2/3BHo+b++7DIw9dee7Jc4noxkff8m/++dv+zT/945/22b/zO//k//L6173yiQ8Nj7/vY5xWe0bTfdETJal+jt9nzIvO2go4ZL7Wp2aSbhyq0Xo7vMWZbaJhNDVPpzEvusTMSsgEBWcVGxdVPiISyw8Op8nSQKRanrzYtJuPNB7bYpFSVZ4fQKJ26gGU8AiNdzTePPlXtAWf72Jvqkql4kQbTYFShaJSSCpXY+cGbVEt67KpaH1R+AbTfakjn7X7976aRAKtBctKMBVUVZk82YHcmS0oJltXPDysVcsOCtRq4CTmBBKRPI4iwn0peVXlutLEKS4nTE6KldH3zIU8P00cl58AjiCOGzORMady1lJdIyJIc3O6uAQKYt0cvhPmFzZ49v1O0McPqsIETkRs8oyI52iZB0BtHvXPnW+PE6jLnEP2jDtnM9dw7VzsjHYsb1guvz5C5M6MGej8w2yeCkZTYs6CajxgMArFZK7G+N/OpwBYAVWluY1
NbU2y1XgOeH5VJNIgvDUJIPIDi3gyB0ONezoUkFGRq4DIRIiErIKABmEKmgVwuPXE1zVurKOddbL8SZm9cOO+7+/YeeLHDZZpvGy0KEIfLZesdjLjIHgp5FAYGlabgWCqWUkqMm7OrgK2ZO1/fMrD6L44kESO1JwDlaIUkgSYkagBGCJMZQA7CM1PIZYSM1Lkr9lCXm/fSKx+W5M0/p8qyY1rprydXcvUTEEp+J6HWYUYYDQAzARKTrf0OaRKdU0OalFtJ0s4L05KIxTkQkmepiVLMC4ky5faDCAAeqoDOzZqqgByKTJuiKJP1G9FoNrvo5pngNEk1AZhIyU/EbGlwlYkT2Ke0Bnj7+OPuugSPSCb94ruU6xeb+73VPhh//2t6X/MENX/rYv+y999E8vCCXt579O7WPfuHdDz/e5flxL6jbUGE8sQGbTlhkFIEw95l2TzA1GU9nA45Teeg4N9KabSIOZDtWVKYQEBC9MK6qprRAidAgIA7JSwGD8GxeEWHI3a4E8v08t62eWMyhVwrHp2m5CE55tNwIhbPYtPlIHZaqtaHlpSZwK+cW/K6HAKHnw0h7rb1mjQtN2Z1us5zjjuCpAw986Gz4AdVaJ5cbzQZrd0d//lD3Mbv1uE+d/QDz4AyNTS05b2/suMwvrCydGodCWBgJX7Btx4f/Yk9x7cnjyyNejwLfGZmqzx+0bBe5y7jDLIdZNqRZ48wl4yo0TAdkEwBAqBIRQCrwHwAQbADQ5c8o1qFhHO6liT6L0i2gqaEholhRwU21BwNtOUjtt+S9crAEDHGspErlwXkcTY5O2GsXC3YvaHsUuvlSGDAObD2JY8u9VtktBJ1uvV7YsGFqvLpjCk4/sfSZ//POfXctP3nyrpVgeTO87iz3ok32jhOt+sZn/HrqSn7gxJP/94sPVNY856/e/Jwzrrn0Rc/Z6cv2fb+o33HzrZ/5+HcWg59Zzqbfff2fveVdL9+1Y82Bo+1Tp6jTDXv+0vB4jUFeQhMbAQenV3B9C8NuZyhXmF+av+LMjQ8+tPdv3/+Bb33vuy0PTs/OOEW+stJYv3nrwzfdWbCWzr/q6gMng2KO2VZXymHHEmEoup3AspjjcsaBoeX1QlVyAPpU0KZS15w6gaABmERiA2Zxpd6ICKgHCTlBRgrggICyaBUeXHjqXZdcUqxR+Mv79n75ps6rr7d3nnlxrfM/n/7P3Jq1tQsun/YLtiiDH0oIKS4KEmEwRb2KAwVR9VZKKYEEkbVaimIt8CgXJxYFTUg/JIQQ0MiApNAOtPhr7nODnKY4FWR9+nZFLowiAQlNNiReysg83Ey3SZqKQhCnootdqVXnLDsNwAZt1OeF4saJSKdAzvAoAwAYY1uv+pw5aWTKweZTMvbJwigwI1KZG7ZSPaFEpLYR9AHwQDdgGsQhRn9i4lQ1QAHed/X/auIHpFXTmmxFf67SJv42YmusI818Hfmbxl9ryWY1AO5fQvN7Mtya1JcslBLtwhgWrKV17vD0StepzL/vLe859ETv3l/f/fKX3/DF73z+jodOFYfXkQg4doMwb/QnC8Bg7LN4mSI1dZR+KDYlUFyOFNLrYuSRiFecSMiAAUdJnIFtW4yxMAwFSSAMIniIii8ZwC+N13Fd3tgRjhDAACxHeEEgJOc274m6TUOCgSgFXCzSctEu5l3k891mpZBfWV6cGB3zfT8MJWOWBMYtO+jN2jjUbobCEoWR0nJjuVqwyjkx3H5Adm4Zdx7l3vE2fzKorl236WcL6B3zHvv0V75y03du33n+5TNBN/jGo4de95nH//tHk595RfOj/3Hv3fe+8p4n95y7Qyw1zh6uHK2L9ZO82YITJxsz8x2JNncYt0Quz4XnDl53XaUqTgmktk/A4tjEdB0IBAeAVMgTGDtZAzDoU6OdyRMAFsZejeJAzKUEiEo19fOUjFI52JOLGWo9KYMgYIw5jiOZH3o+EpBlhYBAjBFCQF3RHBrP1UPK97DIax0BnZXlBQ/O3cTOPndi8eQCX2nd85P69374lXvv2WdB7kK2c+1EeJg//pNf/umbXvWeJ489+8mlg213YQQXN26+7Pm/c9E73/mnpcrSwsHqD3/wrX/9l08fWphG2/3Yv/+fq64/Z/2Gqaf2eYcOLhFSoeY4Qce1a02J0u5wD0k6oeP59c7FF21+7MGj//t9H/zs5z4i0D5w8PRFV2x/6Ff7H33grje/5Q8f2nfCyZcrXEi/2wtdx4HAF0SsWMxLCtrttsUdxmxgRl50QwWtPWb16kTnCKPYPCWxKABO0u7EeZeISNFjS2kskLRmWLnozYVye4Gee/66la9/NbzvgcOHT+euecY5f/0nsx/79H03/fJAGFz4nnfVx7a2TnqVQqmbF34Y54FQvnhxbkXBUkVzle+thES9AWmKypBDxGGTxRQPRwAgZVSkLwmpJ+SAARiJaAwnrH4Ajv5IpxbW9EHlQTOZgnjrJv005WAWh09FbGIsSMQZMKVxuhgAMB6PlBBMmmwQYWYa14z5SVnrtAo6K7HFtt4sq2vI42jIuzpsSQOwylEQ1WuUyb9RJzia66QJt0A03Tr0DljNueP/DwDOaCfMbgyAeYlkeBAMbB/72hlAmNAiw4kpeZGe9nSY1tMAcKaFzKC0p4baSchsDwT4Peq4lLeE5Y1zumBtLszBXXfdvWHq3KZXWgp8abMctyDoBYwb3UgAmMe6lMwYnwaAk76lNBNKYI39HRgjEEIIB20pJWfkWjYy6vlhIEIJCDyaNwbcLKeoMnxFzTKOsU+m5cpWq2FbaCEPfGRWnnFbytCxm916AVk+LCwXfcuBTs+v1avg9Oy8zXrNesHhyCCUMiDJLMcN8h2rQXkOXs7phnkeBJjvSgudZp6VXXmoQk8sHb+/vKVe2fmS7x75ZHFia6EwcsfnH3rkH35+zrbtP587ffqRE5+/7pUvOKvm3vwz+fj8E5OXXfzN7xw+o3jF5edbc/75Vz/r+hue9+znPWtq41gIcPSkf+p0w2JF11pFAk6cIfUWIgDwQGoAjpR1yuYXMgAyCoBDBoATjlOFIaEbVzIgtXax2jOuoyWj3atiQz1mbJF4lQEih7j+/a8lA705Va5mGXRHhseazU4owc7bPa/tstAmQbJGgZBO0LVzgcB8UB8v5+b9rsdLC7PBZM0ZzlvDm3u1kbGV480DP7vjb//soyf802dtGg7mO+s7F2+la63a2rnO4S6dvP6P83fN/vBL37tn+1m/e+75217x0ue/9EWXH763cee9d73n795X73iXPeea9/7DO86+cHun1z3ylJhudLgVDnEufO5Ucu2g4UDBy7H6XGP3lvXdxtInPvqZl738hiuuPfdHN95z4tF7X/bHbz0yF9pSVAvtdlMilSq5XpcYQ0fRLd/vAYDj5oUAwoR/MsOQLKMKh4zTThECJ2lIwFJLwKQF3wiDI3Kqw0Qpkj1VfhisBHkfZoa+9OVzRkW4eDw/PLH0xpevWXfOgbd+aufEuplhZ+kFV97bDstUsTCctuplUeKxOVlKqXYXY0x7v+sOqBC4
aJdKaS49InJmhWFIUiKCa3OOjEQYhiGz7BBIAgYk1eaxkDPEgFKZ/hQAM4rEYtVq6mAM8NNOABgAuAENkKZdRjwt2BJjLh919jdUZbeJiARFObMiMUBAGHPEaNJ/3R8FwIgY2c4NNbBprsWHZpO4tAwrkemuCUKZ0wUA2IcYJjcE0AfwgxqBPghM/hRZb+pIBZcsVRw3rJQ5fLDKuv+98RgNdVAK24zEF5g8xRSh0Y1AlMcgxMRrVD+ojlDcSioOKmOrMxYpW7UpMyGZSwf+mvdTn+pDCBEEgRDC4k6plAeAbtcHiHKxCiEkharoDUkMgkBKUHG6EM+DTO9dRpIQEDgmtgZGRMQEI4YRNjDF0irmjLFIk6xjhYkhtuu5Qsl13UDInu8LIkDOGDPdUjJD618sRISYt5VAElkUFxcXh1AZZVUBMrWatoz4P3VCVEgrAJBlJe9QMy8JCXzHZ17JYcg4OcPhXYf/9Vj4i9L4xExp1g6Cl2z4ne//+Zeu/Pd9o3ButzIx4d37TG9mbhjW5Cd6p2Yfy+GcR5NbdnxpZv9n2raXD6FHDh96yevf9vtveOEVF5935/3SKbU5r3lBD1mDE1iiyiwHrI4P0vdD1ykyxrxeq5x3W8uNSqnaYWBx3ml2XCfPGO/6Xi6f7/qeE6cgRaIoHQEBIgbEAKLyiPG0MgDGyZeGIT/KyEEkmM0U6ZFKNFHTxSzGiIQ2NCByAEYyMQMhYqSuU3TNigmTJk8QadpIRcqwiNwTIUhEHrPmaeMUs1C1JwS1GvUg8CbXTmzZNMwaLZvlvv+9n/zw+7fsuX+/06mUqLKxsPGc0s4TE1/93Hf+5L2vuPGhPZX91oEmruSD1lnn73rNH1519WUvbCzs/frnf/Tz2x5rMvfP/+5Nl5x37uYLNh2rBzPTcmX2WM0uOmwIeRNkEUoYTDfWbM+LnvvgrYdmxTFoyxe/9sVzc0tBELiuq8qzA2dBEKAExpjKLwoAgqQamxUwIkLl5Yzx5CAJnaM+TvIsYmQCgx1P4rwNP1+z1pZOQcoxFstUBg9eabX2vmCsiN+7+9gXbpy89rx7NuSuvuoG9ws/9EdY8KJnP7pm4/SpzkguZ5VzMnSXcEm03CHmIlv0rYCzIa8HkEObgijmjRLBiYg0qqCRTw0BfASGwAhQCsaYZVlCkB8EzEp8Bcy6nEILfgyjjNaSAECgH5cBjt9DBKQy/oDOlRFtaGKCCaWFNWYseR0RKaldrQMj8IyCMSz5gKFIPZuwrYliKfre3K79VDowsoml0rA/OJPNTZppNIOO/ZgWn/MBwEB94mbmqcyonuYiIXWCKp2ggwEEMRgjZgF44HsH9LzfqWoVL63fCMCMIIz3vUk4sh5YqxWWSF0D+r8aBidkzmBfDPiI9B66Y0QkQrJtG2L3bNCLhVGWf4ovNNo1dQARMyQFMNQlsiVEAMy4RBk5TKohR4DHrVCqtMaSMaVXk0KI0VJREAlBgQhDSaroppRSBbz3j/fpZi0m4/ooxdm7oqyrAsgEYONLU5HLMnlu1U8+ehS6Nko/DPK10uNz31pz3omePR3W8yf9Q/94z+nnu6//p5+snPjK15vN7mWQGx3Ke93Fld7p0TG7Pr84ubYkijbP5X551PnJpRd/8uEf/cNb3/+9G39+76O3/8e/f+4Zr/yDJx+fr9bWeuAPjzpLK412g7vF4lJrturwQqHWbnsAUCrku+26xTgJ6IheMVcMfaGiKolJy7Ja3ZZtufHYAeJ4FUSUxAGlRIPtBgbAmAzAUIrqeQhBeWkJAwmYiFOHpsVrRhIZJomZEnMSYmxqSbalAmAZCpXIwwRgJCZBaNDF+JJSImdCCCK0LMuxbCLR63V6Xse2K5MjpTXV/NoxnDvV+9Xdj3zjC9+/5dY7fBaeh/l1MufDOTX7kg254aONh+dLB//oT89/4MDtH//uvc946SvWjl1y6Zad5cLd//R3v+Kbh7bvDi679NrnPu9F67flDxzvHTo16wV2MVdptw6Njm/tLMnAb0+uW9NtLI2OjxyZnrWZjchU1SxEZAyREQFX4lJcXCQisxbxtDwTe0Jop9cYgGWcWzjah2kMlgYtirM/qtMatW8CMAAszncu31o6b/eYf3rGcXJze57s3vLkCAwfbJ4YllTftm7lGdfak2PLx07tO/CElSvvGKrkJtbPeNJrd5yeZPm875L06wzy+liZ9JDM1EmxMo+IBGdAkgNaHKM6xwKkTGXCNQGYKKtkjeg5C3RpP72VgBhgFL6oADjJkMWoH4AlJsK06WKNknxT6adegIiUcic3yU5/cYFo2o1yt+b9wqDZqwJwprl+Qv8bpVjs6xYCM7/JdDcrvqxOW1NEn5KEAyFLMjdp9zxGIFj22f4X6T6gkTpq4Hujz5h8qQFY/YlaAY4pxyIzLYbR6ABQMfsGkdCV1R9QbF3QmJiByf6eD5xVRJQiysmlfCLCMCQix3HUK5KkWoz0eAfwXhjX9SGW9AdVDjaBRvEQAiBCLQEzBgjK4zoKTclZzPNCzw+ISOWHVX6RSFmllh5v/7jAkAtk7KUC2paRfA8yVuSyOBGKiqBPAJhQqdfMxhkBWkQhR+b7nuBOdSXYJ4fv2jC1ssOvfYG13/3JX3/uun977u5hbC2HNz528o5frvvGr090mpN/+/ptb7jh8Le/+4m3v/WdV5674ZFDXxqpvfbwidf8xadeuntykZYf3H88R0sf/NgnTh5tzi+sPPHU48Pj1drQ+JYtW5fqAFa4POcBUKGSa7frfuCVSzUROJwsiS2QhMgtxqWUXtBjDGzb7oWKwVeTlhw6BlyJLmrqBESp55kUAAAscc1VZCKUjEX4rVRQytDAGIQGAEeUkSjJEJQF4LSmJzG1CEkoNT+kLHcagLUAoBlZQuDcBoDQD6SUGMGMdBg2Ol3BWMikbdFFZ28Zd+Hh2++ePt65/cZffeMnd7b8uTJfe6Y448KNZ4sqt8ZvvOYVF7znHTeWvecAnLLL7Jyrup1ZsXnn5YcO7WF58cATS5e84PKLn7Xj+mufPzoEB+otueI+enyuxiq5YpeqsyNBca5pFUtupx06VtHiea/Xsx1wXOx2VpAXk2NChJEwbGgFwKhqhUm6//jMxQ+SkT8yXbEtRZoo0mwzZpFSYLDU8ffRvrLT3LyjOrf/SOHXx/2FpaWllVJlvN7t9U6eOnDseHt4YvjC3dt+54rKBRu8wN+ay//PO/9x6trLF87avdSAkgeMei6HHjgQU7n4QAGBYAbCmNXAJEcpJUdwFbsfhEIq9V6yH2S/l64x0qg1rttUP2k6GReZUPQCInNMwJSYlABwnIgj1gazSG9KRIzAN2zGiZhLAH0+TNG6pMNZ9WVbFqRV8ZE+47cEYGNF+/y10iR+NQjJ3IAmq7PKUxkKO/CbVEKPGIDRSH1nAjDElGJAfwY5c2UE/dWeMt25M4lBKLZ0IrcyABz9imR6LSYJAp9W8u5nUwaqmgEAVxmv7gClLdBROL8Qusq6lNJxHBFGGSIRlQIXQQVCoA2DLsXCp6M/1TEwMj+ofxE
AQATSsizbYioDHQOwbe44drvlRa7QDAEgFKrkjqWcv56GmTAHCwAMk8Q0Oq8WxOulKF80kwgAwETMQas/YwBWQCUNbUXUjgwkcW5JAYRQYZxmGzedcYF/IfJvsNFv7su9nc5pz+2nwvDwxtHRMRheAa8juzX26IGTzzp3/TWXPHPPA7etc+yFIMjtuHANz73r9S/sFH0UxYN3H5UVvuuK87x2o5C3V1YanQ44VnX79p1rJkaKG85pd/3ZmflytVIsFxrtDmdu4IPFAzUzQS/kFuZyuUD4vt+zrJwxLcmeiZ1DJYCMeRQFwEAIZNSZICIGIAg1AGOcqkwCQ+rPH4BEZLGEQzJV0DKOktBykrGOMhU1DgyRE4mUqUz7tnBLSokUiRrKuGNZ1kroO8xylFISxOz8XCVfXLtm0sk56zZ2i838E7fOfeBf3nvr/V8Gi50/8qbrbjhzotJcU3zGzXfc35qHzkE856oHz73m3Pd/8he7do1W4fzGslUc5Xfu/WFtG9q5na/4vTMvOfeGdbvXcIITx+HQqVNlGgrshTwfcXPMk60w7Nm2LXwKPJZ3yxK78YQDgYgqfiIKNAprR8mIEBG1zTKbKJGI9P5M5zzIHAQ1hxZaas4jVVZMKHo2XWmRe/PdS/uPr6uNy06TrR1eOLlcbfM2+G6x0F7s3PvIg5NXnfnc176wMzfbO9G4+7ZvT184NvW/3nrqQG+0NNEkwZsgC+mUtLEqxZGJ851p21bcHkfGOJBEEoIkMhbZl9VtpkJQASQYpD6aEx7LrcnOQUQUqZ2TKMB9njhw9QMwxDxE5BGcFthSDa7i06Ojj7L3G/foFhBRrBIWiA9NC/NW/Vk7tum3qoZ+I4Jmvn8aoRkGDWy1+ykygQOkJWDJUvKldpmTWWs1mDdk3o5Geb6n6y2mHgEAkNm3EGFfSgqSUvLEJm2Iv32VYs32NfECYx/8NlOdmdXM93GDDADCMEREy4odnRgLg/iAMfVgLAHj4GIPZnG9+DVKJyESFa62USFYaIdhKIUAkJwzi7EoIzS6RIQWZ4yFgoIgAABuW5gkysnuRujbeH23JdlSTQA2n2JxOYqMChqJKUAydX2MgKEQknHOPdHhzIHALZZkL9wfFjs1e2TeG3eb5VwYuIGYLvHSwdnFbTzHcs6ycHuUL1i7dg195F//5es/+OqRxx6zBM9b0BbBs668fNf6zbXa2kMLB4cnpjasrZ48fihfGN51zmULKw0p5fSpkyea7ff93Yfzbv7Jp1a8oJcvWyF1uAW2NUyEzLKkBBXhI2Vo2zZqChjLqZFySLK4sFUi0QIwhQurATCiaYxkEhiSF81f/K96l3bWy0rAYWonQ+qACC2RA0TmZClT1X70ObJtV2lrFOMopQxESEQsb4MXMMlECHYu3w2kbTvtZmsU5fHeTGl884ZxOm9Dae+vnrr5x9/76Af/W8LM9q0v3bFj25c+/f4PfuB9H//8vz7zvNf9+R88L5TNuZkzTk2fHC72lo89XiuPH50donFnqXz05rsObd5sD20ZueIZ1157zeWUc3tNmpmeX1ppcjtXKg8JIaT0HZsBhtJnEauBiSkXQIY8PixKx6NUrohaIowL/sRHXkpQAT+YyJ1IEM9YNq0Qj+PEMgBM7fqrR0dWvvydZaRONT9UdLylxtqGO+OEIpALi4vl0SHHch+95Z4chpc86+KiRIaL35LH17zlbfXlSujbPdsqy3KX6mq/qOFQ7LPtCh5vj8Q1GgBACq5cr1QYEnAOnAhMMp9yUIqrQg0C4P4jj2ZyDzAA2GNkArAGXS5jjTfTM5mgcjRXKSgYDJzp4FPQUx3XHn46TWrqqYdnBscBm+5O/dDY39zTJLjIdFFd/UkNn+ZZRKS4nFbyZQRnA1QESnUwuKk+ANbS7SodMGQ7gxZTLFclYrdyLBJiIACzVLyaIZkZ35stm96k/cia4bBW+ylzT/otwDkXQigttFYSmiruuCm1X1NZQqOOxfMMlJh7UwAcG5XjYaOqpiClREhSLEkpHSenvWQh7dk4aLMNjsXquyIPbQFkxQ1mTk7kTd0HwFryUOpBHYZhuTwMQxFyooBZHooiBTlJXhAEgbNS5flGkBOVXNHvNC25Vlbm/MMOq9h82LZ5faVTzouLzipTVzDiN//oS1/5zk233H2Hv7Ty2v/1p+//xN/e9rPbj9x3eMNal6i9Z//JD3z0X/7sve9vdurnnLG7PtuZmFiza9euSqV80QVblpowP9/OF4ozy81uz+/2ArRsx81blkNEYRhaMeekAVipFphAJf5iEuDAAFikaTCq5ahYz1DCQBU0pzAGAM1ECuONUUih3n4UJFRPr1pMZCWRQDXlysuGcRkGWgWtdW8K2R3HAgDP88Iw5LZjWRZJDJrTxfIwsUI3BLtY7HXrOQgs4TUrZQya2Kn4Xacr2pumnHO3l3Mt+NkDhz/0R3/dOHYggOPjI8/74lc///3bf3JCnN4233hk8Sl3csfeB/eeeOBXf/jC639y491Ir7rkeeftfk5w+dk773r01J5DD5w6NOdXtzzv+s1XXHXt0FBlqQ7HTi4hZ7k8hrIJ5ENQZGikkYotsoIGHHAdk5kwHBSbxoSg9LqoXZocK0rHwevzEt+g/hvt+ufcev/yyceLG9YOYX5m9mStUiUsTNNiyanKetsXPbtaa51segsL41uH26M4KV248Nz716091XWL5BKXLlIglW9EpFmJ5RzJpaW7Z/KsHAXnXCIEQUBEnNkWWlISQYI7JsRxyUwUh1gnb7ZpXglFUgK5Nk3GeZFTVZsQLIqs8hkAzhibCFPUGPpASida6BcPMuRa3WCmE0mh2GoALGGAyKvVR5nBZxo1fzXz4aXu7wN1E0j6AYYJ1Stlb4+pv6LpaSXh07MLCWtpkPh+iTkDlpkxEpGMDQxJgwiostUYWt+BgprZpjnDqX5CkgBkoGjbfw38lbFs/DFFhmSm4xnUPTH0ml0zVsRgSKNeqb2rAthlMgqVqkqilKTq20UPIhED8EPBrKgKihJlOOeWZYswICIR6+1VHLMcVMPLXLgBk6B99GK7okqsYyV7w1Q2MYAEeEhl5aNogCZ+gGK2ADwioIYIXNeqWm6LMTY/G05MVO0AFjxRyWMgWlZHtCvFlU5jrV0AlrOAFhsLrODwfK7V8clH3oXacH7tiNiyhvsdaPZmcsND9z7SvfDMsX/9x//av/fmqy/d/eieo+Obznjy2FNTU2P12UXHGuJMbtu27o/f+vof/vD7Bw+c6rad7dvPn9o5tXHTNubwpRVvYakpAAv5Uq/XcxgHAIg8eqSmXyzi0E0+hEkEHSwRL2KktA9RhahK5QavPLYkAqcQiBGhMZ+6FmNMOswJjMlMiouNARhAKgYukliQYVwGMQPAQgilmUZEZnHGrDCUvhfyclW2Gq7scgx8GYSWm6uOdHvSWe7mRoc6Ypn3fBsZuu7R043hkdrZ24q1KlA9/Pj7PvvNH33VHav8weve9Zd/eYUjq8sCWuQfe8KzV+Dw/p98/FP/vXc/g14ooXbhjtHN2y
u7dl138dVbT7QffuLB0v4jezZumbjwqt1nXrCLu/b0qaXGirSgZOc8c09CXL1OhCmrduKEFRrbOxZ+GGMkIs9BrUFVpwm4QesgUf9GmVIQACUiypgQjdqF/M9/trGzPLX7LJ/8Tn1p7oH9fnloyGkvos0k5bnoBiLfLrdOz2y+aEPjyvOWnMoJO9eWSD465XI9aOWoB6Bs2wQAyEjEojaECR9g8gp21A0RRkvLGHEAFJCYMEwAtiI3e5IG6VOaGBh0cWmoqWJFPgMI44OsAVg1wGV00iOAM+7J9NzcqP0XSzvJJrcZsd3mh1UB+JHZwTldMyro6O40AKde8/8CwIro97+3H2z0nzyMXN2Uoswg9zHIxfMVb8rBGXl+SwA2nhoMwLFbSBLkmuL30zbg/ktPy2opA02kMfuj6ZE5Cop9OrKDjc3nOoTfkMi52ZqyAVuWxXhUGZuI1D3KXC0xrg9N6W0QOa/GZIVAUX2pyHYc9YhCRjb7yPmTYWxQBGASAELPsizVEzIdcCilTjDnAfowWK2jBmAgJiFKLWsnk8PSKxtqAqd47ehopY9fHE8FHne4PYd+GYISWu2V9tzY+MaV5lIgaLQw1myvQA5qoTNvyZE2bxZyVq9FOTvnk23bc9BxLO70AnR5NxD1oOEtjqwfLTbDOgvDcq60jI3NE0N3/+ILp598OGxZgopr1w8VWVBCd2/32L4njq0d21IrjNYbS2efs4u74vHH91r59Wg7laHxiy69enTN+qXlBhCTElT8GHLGGEOMk+smCR8izkZG9A14HIil11ENOWSRvIVRVVAgiRLBRpX7Ew3VV5I5S6+FxmNuFM9IatlGdjipSZ/Kd6YAuJ/ZIiLHsnt+l0DYtg0MA18g2rZt+8LP2Y7f85jFrVy+1e1AQK5lo9uilaqb9xoorRzzG+1aYUMLZ+eXgiE+PJzvnnHmaD4P3/mv+7742ffuefjAi17+nMuf/6pLX3w9FPnSaaJG94LdBb8ePnj/ySfvveN/vnCfWDjdDY43Yd2WiY3br9h99TPXSXQPn5g9Pb+8advWS664cN36UdsSK42IXfDDIAz9MPYldHlBZaXQIQCg1DBBdKo0ADPGiCETFLsNglq+yGzJjZNOmuMHxiIAxohvitbAypc3nT7Uu/WmS847z0UHCrZwnAPfvMctNoRwGSKryE7DXw+Tj3Rnan/xzMeDTV4nbzPXY4tYINnE8eJwq7mAzI2Wm0UYr9Y9EKiXycx7bKEMpSCUjDGJjEICgRy5YFkJOHpWUOR7kQbgQCR8mEkHeGyalAjKtqLU/pL4QADWErCZ55IZoJD4fKD5X4oaqHMAMXaYv2ZEZ02rmehrWY16z4zQtFTfISGb+lE3lNydlYDjwUMUEQsACCC4kRF+0LMmfCa/6kdiNUKYRv5kLsKo7mH0pSSVAERgSojRb6e4ZJv6U620Gq8euzlNtuRq8RKDljoD6Ylm8QxJaeC00VvOUvfrgQuEpB6cZsSkRGZhfA/G2aAyTFJqQjDZDSorcnSzToYuJeGAsCgynLMAgPPBGgtYxUauPGJ0a9oBUnunQ8SWqlkCiZZu+WmG8xt/jXxwgKSUICmqRY0o0sBpzDYHkDEDAACo8EOHcGhgjqU3maH+EMv9GlRMNgijt0TROZp2CAgRMdJnqPyszOLIPOEpkhSRYLQYgJRSsMba8XGH0anD+04dfLy7fCpsL3iduu1URc8/dPi4h5YzNHps+vTlF11w1aUX7Nn7VDFnL80vPPL4wVe+7o/PuviyJw/PWXauNpxfaZ/udSYBmlyMOMXDYTAWBrkctHzX9UFaUlaxEPTId3ttp06eU7VKjs9JCOFQiEEQCA6cASnODBFVJUHlsseAR4oozgAgkEJVZrVEEOdLASElEcZu9gmxjnUtxBgTXoiIhGhqp0FGycfjeU5JeAP3g7GOqXXn3Da1u4YPY8dxSu1OuLiyMjJU2X3GeLkAc7ML93z569/+wU/3Hjn2R3/5zhe++hWlodqpU/W5+fa6ijM1OTxUYLPHl++49dGbfvDrJx+cCRrWAnhnX8if8/ztk2unOqF7cml52RNrp84qlaAybJdKYFMwWihX3Vzot9w8m2uHftvhUOM5y2PLwLooOXkcnMLTbHuT+EV4DCzieAwnDACmM7xinOVbtel4GA7bW+db+PBjZ1+1G5sNttxtVdyTX76xuHbEOz3THHErdt5/ambJcar/9c7bjzTHlihfrMyJdokhEvickS8ZdyI5CIlAMGKKQWcchBASGOecSCAJm6Ft824XCEGwKF8FA+ASkEBiAAAJ96YTBXJjEjDydAKUFGYkmTjC0NDeRVOlLpsrCzWg5Ip5lvGpTGbTMIOGyBhTCSmllCLxTicAGChFaYDXsJ3l2g26wVlMT0jJHqCcFvExwwv6twlPXc1WmlAiLYsTAKS8y9IYmvaO/n8EYP12CpU0lWTgYhH3DWCgKRiIixSlR1GNyBiVMz2JOCbBNABnBqhb0HwZUhQpAWltEsXhGdC3PDrvKADwVGEGQ01NAKuczIETmPIokJSoxI1MCyap6t8xGckSAFZ7efJs+obUzMcwBwAEfOBAVvuyvyeZR9SCqqQzjKK6qoNuxhiA46WXqAiWcU6Md8UmHkgfJJ0MGQz+N+H0YoMCxUamKOVFJGlqKkmCwhQAM8aISSmlQCmlzXml4BZzIHqd5bmTS4tzZ15xBXneE7+648QT97nUsAulh0+svOqP3vWrW75/8823XHLp5es2bD1+/OTcqWP/5+/eM1p07rjtOJQqo1u2+C4sNUSrfdqmHLdIckmd0ZwtmAyCrgOFes9asJyNrNMNQiLuoMME+VKEjgCbmOSWWQVPcx4AsdmSM9DFwhnDwMM4qjVi7BhnjOk8LnrhIn4zVO5amOGVKVKxJkJP5F0/aHk1tPfvIh0Xi1l5OvRDYTlF13VGhqdyAAEAAElEQVRbrWa3uVRwYWioOLyuMjlSevjOp776ue89cM/ezTs2veGPX3rN88473bOXZzrd5UbOptGhXDnnzB07/eQjTzz1QO/uO/Y4Tu3Cy88dXud02EJp3BmdHB4p2MXCpqNH6sePHT5zZ3X7tpFWtxUE2MkPu7zsdyEQ0i7wQPjSF+VcpSe6mY2dGkUfAKsSk3rgEftILImvS7cQCrfU9ZtVKHJv+NYHr96+Hc6YWp5fgC9+tz7ONzBr2lteQjncK951668n/+XPHvWts7fsnpmvC9fhIiQRBgSlfMkXAoAgzuXCCIhQ1yOU0a6QHIkBIUghLUIQRpQBjwvGm8tHukog11766rfog0nTTPJoElXGGMRyY0zdIncHRowiPDLn1ojUEFHIaMSo6QZWIXyIcWFAszJjHIOaGprqGwo9RkQkHa336JzUY+ivtGxeZJpt+rXTqxHoQbmdMfEA7LttUMtPB8Cx2w7EdDDWQDJzHvV86beY62wS+oxFmUum1Rfmq9Gol2D2mYQc6GmyWjEJnUoG49xA8Z+plH7au1sY8XOpWTI10rGbJcbp63RPzNtMXNH39yfLjO8Z+FowNTnxV
+pWijqembq+jYWxnnzwC1a74qlRzFOECasDcPSE4RGmmW6Noxp11UcYdPwyzFD/eTMXFCCuM6pyPUXqB/V4tE8EGFo7SZYNQSgFEGOWEEIIr1TIV6uFoyemC/ny5FipwuSvfvGd22763ti6qQcf3b9zU23jpi37jhybW+lMTG7YfebO83ZtufuXv+h2bF9CrpzbuHH7mvWbNm2/YMWH2bpoenPDhUkGy92g7ebWIi5bvi8XJp2xVs8LJFicJBO+5TBwrJbvW+ioTCkqb6jy2rMsi2LWwRw7IgIFFCcXM2aG9e3/iOxwAonxmrGYqEFUBZRkFoD1O/X38eKmw1GS3ZuwOGBs9TipPgdAALIYMCkDr7sSOK4j106UNq8riJb/xc98/qbv37pv76Fr3viCyy4+96KLLhgZnRJoCQbFPFSrMIwwtyhv/OGv9z56tN5YOuucjbVa7aEHn9z3wP7tu6de8/oXXHbl5kazc3D/iYU6FUtrl8OOzcmxbELWCwmI2wgopWTZ/Z+ihzLZVNGBEjpeOAnFJolkxafb2KJI0My7k212WjbWTdVq37mt85WfbNq6fjZob3vGBXc/dt8FuVKl4FqWtTTTmj65fHTbmPu8Z+xdWtp0xvnksVyxADJkErzAB8uO3hhZmSDuGFfKMz8MGYBlMZAiDEPGHAXAevWZSrKmTZMaYhU955aBF1IH5gYik69Cr28yfFMIZhqFlRe6VDewEDPzrF3VMLrS1Q+TVF/pS69LAsBpsp8CBSIWHYckhwkRASE+Mh9bdwwARoAQyNj62U6YrWuanuliivIa4Tq6kobuXNIhSFngwKB3/XG9q11RO5QAMKYVuZn+r3apZzklbFJ6RPHkrKISyHy52iz13xlfqfHGQiQ8PQCrRkwAlqFRcDc959C3joioQln+HyTgtOCiLx1pT4pTodgnti9wC1bZY7/h0urjdBflKi0khwqjcOPoIRAqjyZEfEk8HkMCNh/XQDKgq4b/hYiPT5THUcSnO6KNJgATANMAzOxWKEUoEZll2zYx7Hme53dHC6PddrfrAzqFNZN53m03Dj3QOfnU/ulTnEQ+Z80uLM8st+dbQbPnb9m+Y8vG8VrZmj/xVNmROV49fHJxfNs5L33d73emnfmF+mK9Nzq1fn5lDgCKmC9apR5vB8ySlsUk8KBLMgw49ABKItI5E5HKVxqGIecczFSp0txOUkMvxvkliMgGy2D7IkW0lDJKXBBPvLbfM5bZpRF8ir74y994lvsPICIyyw6CQMQaERtdjlyG5DhL7ZYlqdru9oaGrO0bK2uH4OFf7zu4b/4H3//Z8WPTQ2uHsCidSnHblnPXrtnBiuHo2NDoWFgqiAMPtvbe0dm/954/ffvlooBOvnb7XQ8cOLD/qqsvuuLqSwIGS61WrjVB2MoXIUSr0QXAgmWD312y7aGBmz/aeDKZ0mibCR1fYORCISY5obE/9QmxQCBBvdMoT+Retn3jXa97V6HbmHrbi+bPvq637xhMz3EMn/qPr+2cnMqVcj9/+J6z3vmWwtWXfOemu55zyXUrnTZKWWK8JXyhmPg0+gKAypCs+8wYSCllGACzVRe152yEqbEmzMijKYhIYORNrR2YI0UJZnyVovOrijaJGNuMs4mMRSHsMs4wkEkAAgZTrhNl6IhtdVZlBjKNdQEAc10yQJDZllktZiIBL2j/jKTkJwMM4iiRjJxkplwY8KvRAy35md8ny5YOH9Kq2kyCCz14XV0q88aMGKoXgMXedMmviouUSZsAqwvug/T45oO6WAVk12YV1XHGzX0VnEjeq8Yb56X7jfiUojKGlGlKHmY/dQZKPcyMBNx3DSZzGRZH9yQDwBTniE+r2Z92BiLj0Kq3KQBO5sdUPfW9IsoDkAFUNGzh8a+IiMhNG7DZ4MD9Gf1pAHBS0EZENmBIdqyyjieJZaI7FZYBOpYNAL4fSim5zRhjADJgjPsBkyJXKp+anx8arlZcVmTi9MLC0uljjbmj6ydGDhzYt2/fk+VSweu2wk5n7fp1b3jDHzy05xNB3a0WK0/sXRzbQM+8JgDnhe7oiw/PtdesrTSb8NSJ055dZ4sly84jc1zLdjnz/Z7P0C6XveUFx3GEEGEYOo6DykuOCHkc52psTlJVzgwAVsdcSmkAcCxbxABMkLLvqB2vctjrXaclYDKz2RjXQKfOZKH7VNDIme/3mM0c1/a9wO+GnLmune92xVDNDXqLMgxz+aH5lWaP+eNrR9aVS6U8WAgry92jh5eeeuLI6aNHO80FWS55HrXb3uSa4YkRd+emNcf3Hd774OOt3Kbtu4evetbu3WfsvOXn9/3iZ/eddf6551927hpo5gqi5a/Uuyj4ZEClUPiuK8LQMrudPQVmRvQIgPlAFTQZknTiH0rkCokEDeyu2zi2ef/R2X/81Nqto8vP3/nAxueskXnm5oKatf7ux5/80hevPnPjL+768fJVz7rqz//4yEKHtbgP5HBuB2HIUSU+4sl7IYrUR0sZYi2GyJnOpqfIiTQIabxOMZ1MJGABAKEyAitvCSPGRGQ1BLENOO6GwmANJdEqG1mqFIsQZXxLoxgRSR7vDdLUTD1m+MTEuxQASOlQDXGCEFQQmL7HxA6W7NtEjUFE+NiCSEArloAhNuoMkoQGy8EZ1WKygQydu6l+l2kJL0FKSu6kvojsQdwu051XsVzq/Cub6wAJWKa4FRpk09XtYmxL1n0V8QIrydhkAjKInkI+ImDJQqZeJLPa7whRRBS3kPHuXsUhIB2/awYFpQE48VA1MmqR4VLUj/TxZlrlvZDatcZ8qv8wo8NRVaEyr4CYIzS/j1dtFc2HkCrOWE2RWlzFMJrdNvchGcCouHgwIFD/quYBUnOe3MMwOzPRfIoB3BgiSuWUFOcdlECYFJkQplEgCoymkpQSSHBkHEHKUAhBICw7IJYPQiaC9lg1v9Jqdni5i/mNZQCSUvijwzmv1Vk4tm//r29tzh0fqrEjB05fffnuzed87+iTR8NeJfR2TW4PLr1ufs5/xz985cAnv/y5Sy694RlnXfe6l7+KS6fCwfdgeqZ9emERbZ7P59GHsB1Ckas8LUIIHS8eUzTMnEcpVXSeVOxIMo0gGKmCx0bmeRBExNDq50IQQJPO+BVxSBKz+tEX0hpm8wYzxDbuoZIjSVKIiEQSkdmWi2h7HV/wEEBayCzLabe7jp2TCH7ohdjs1aHsjhSLeXRao2POyHAZekK0PcJCsQLLHa/jC6+TLxVxeAzas/CD7915+uTy0sLR889Zu37dxIMPHdm+6+oLz5omDH0h22HJhzUh1SQgQwHo9Q8q2Z9pQg8AKK3BEjDGTlgxCVWX5G4Fcx54jd4SP3Bgx3Lbc3y8Zsdsd60vIOiGS5b/grUbH/rsp5YeuH37iy+YfN4f7qs3GgEfKo8FQSBF4CDviaSIQnw8FZ8kkTtCCCRpWRYABCIkQmAMknIdaH4gmVVBq97qPxgoKS36U0vAqd+JWJxkJvL3xiTjveJOJMYZS4ghIieN92nA0n9Jg2j362vjDzLWAWjfXgAQ
QCz2SE+BIxHHtPDNonpxuHc+GZhIG8bNfugv+3d/9E1ss8w4YVHMWaQSYCKKVcavNQ+QBmBT4jSv6GbN5AIIIEHSYrbihvRcRDMVu63rZhXt1km0iUgbHhBRUAoBZExhVbyaShskDaYm8jo2hqAeF5hI8CkyIRNCptkFNGy3YNB0IqJVABjMJSdmjtoMC0mSPKf3h57bgdRNdXyV16Y4p+T7jEImlk5pULgUxIyaxjzU2pdVPAOT5DUGAMMqW1S1r2yxsbkoUj2lSpma8zYIgxOYNIagJlNleiLj+8gvHRKNd8xlMgBgMirNFGcyEQDAAMOo3LpUOV4kSMYYszjzWh7lfZYv5a1ea5mCbnFodLEV5oKWEKJUqy2sNG3HmRiujJfZEw/ef/TI3awb+CsPXnzRbZsnV4oF3HfQ63HoBVMf+e+5vaHlnFE8fZrCGYctyHM2XL77nI3XXfyM5131vGLFObTUXul60KXuQhtrQ1FWMs5V7tKooHoc7650/rqqt3JDJzSyrKAASNUJpljvh7HJXBFQjlFqQCRIEuykspwSYylJMVnf1Xdv//ZGRM55EASIyBjzfV8IYdu2bduATSGLIRYkk93uYtm2cpTjgdOA+UqxRITNNvnMlYh+0JSyM4TrQlpqe01kJZ5zfKrnCqXZmd6OMW9keLJWLvtdf/7UEUBRGx1pB2G3Mef1eq5bYU6t4dnE8g5nvte1HRjYf83Wm2w0AAANBmCCJFwwmRwCP2fbArHR5TnOynzIdYO2tyy8gKBn82HPakOIfmdNKVfrdRrjbGaRHN+S6LZI2ESubYFleR1faYgVQ6liHCJnbG6BJKVsJkIJJIEB46rIx6ArTcfi1LzIRPYGUm72mXaiXzmFwFCFMiqCH1UNFzKybETeVwoXE9JMsdY6umI6HPErQGpvRLni037OEqIAbg6oVa1xIoHBCZQMQSjJgUMk8PE5YaC67gxZg6o69O/yZFsYdWQT9X2sOiCj4rTCpzBN0BPTm5aZI84qyryBq6T4YhShr5KACSAkKYE0AOsrpo6RmBJnSouCQ7htqcqXCjK1RBiS0JxOxOYwAACLlC9rRGoTUUZIE4C1K4Qw6haDQdm5jOqfqFXXAGB6Taccs1eTCE1pO654iIgMQEksatW1iTeDIvq9WVxMblvttSkVgl7lILZlqDAk1QwDCGTqLQlfbCSW0d8/DQArzlQAUazzWM2aEHVMxQEpmEsAmEQ6z4MmBCZDZoJxPwBH1W2D5HtpjM62mCCpQqsjd31CIuKEqhojcgYolW2TI/MwKgAlpUTkyJkg9MOAOWVH+HbQQ4AAidsWiICHfovXcjb4rXq1XGx0eoK5vgjHx6u5MAAKZ4492jx1+3jx9hLctXVdbvlUb9ZqEjr1wHlspjWNW/ecLPXsoVMLx0492hzb03pNsPXFmy4d3rU997xLCtdf1hgrzB5oN5tNRHRd1/d9z/Ns23Zd1wJERAEkhCBIfKEtHfaAkmIxl4jShnbSod6hFIhRCU8WI40C4IQoKwe7aC1SdMBYi+w+iV8nTD5Jf5aSMUDlI01E3GZEwvO6wEsMPS59JLCtAoHV8tpkSccaEb0eo56TI2JBEAgmrDwvtW3uB/OlnGvBcLcelKvW0vJ0dajW6FlB2Ov1OpVCPm87nU4rYII5lkNVzljgEyGgA6HoouwWc9z3HXNT6avfUhaHIVkDw5CUasXkR6NQV6sddEJeLkspZasburke2DVZsFi35XcqltuwyUF7RvpjVIaWtzjU3tgrSIlL6NdyTtjzWlKU7EJIAYLyXFPmRSIQAFIQ2NwCkCL0hZTMsgm5ILBihiCyAKZXKROGBAAMAwCdQy2hdSILOgkAIyIhk4aGiRAdEe89VKpBrv4wBUGTsDJJiBjpUKWkmEvDINTSre5/PwATRkWWLEw0NCmeL1qROCIuzqGNe+Z8c8vqPSoM64vmCwCAJxrFNBLwrG0mvj8rSUcTrf1jo4LS8duE0PilmQ4i4gySWLHU2kjJQGm+LBEVwxEMVKH4TB4yqRKAEJlytrqBKLU/MM4puppFlIyC8CZlEZypQDftZU0IgkUB4zDggDHNtegOEEKfU2R0qTjsfkZEksJxjP6IkUPlpSKjxpGSYIJEFZPqT4DEEC1CkJE3h6KPJitmMgScsoXW1Ys4RMuq94+MI1VidEFz3laZZqOHae8tmTaRJOAnBkwOxAdmQLM4WCLXVX3AWF8ptdSd1R8EnmMNSb8d5HyCAms5vuh2R8WaBb81xHM8j0uy7tpOuZsj3w8Kbatb9DmDnC1YKIXPBDFyRGihE4IUijkAAEQmSUntgzeEzbiUMjRc55SeI6BOzi3nnTyXYejNYHDEYdM5p4kHPhj6JzojcPU5u97x2ad+9sPaX3QuurVxW/eiHceLEy+5864/OSLw/jvHW+Lh1/6vobf9lf/iZ05s2bbQg0OnT5+3e23gwb6jC40Wt2m52RFurpxzSyDCXrtVrVZXWm3XqRHrCtlDAIcXel2fuRCA55CqoqMSbKBKA+ML32VOPIcRIVZ/YlyOOcOQBUJyI14AiKkKTiR81UIknEXLqqojxpE5A0uTadYqYhJ55DNk5HoEAM5dlaxG7VspQwCJjEDwDIWEWMWt3hmxH6oGGGMi7KjvY5iR2iUtPhEpiZ9TXBQEIHJ9koQgCSxQfGRiA1ZzEmV2S5T2keBh6TYT7hZAyEALG2b/FWNkjivqXhgbbqJMO8reIAroEpEEoVBQolL5cjMuOXVeQHIJau4ESiJS3tGSUzRpUY4lBEJiaMlQax+jTkaCrYS0S2+keWUIpP0CmEQgQIlAYZtzDlH1L4YYxTED96NjrhTJ0aEDJ9bECFLSdEQ37fiVEiPuR805Z4rhMzNUJz4fZlSVCtnBvfNBZlXU+6Rh8zORnKW8yIy9sloKrkGklYhAxlExMQCr+5VqNwPAAIAgM1X84kAdofXJqwEwaEl6EADrLH3RSYn5A5UUKUhr4JPPLH1udQuMIWUB2ATU9BBQsxum1xghpDKemFP3tFWeNLel249kL+O2iIIMCnRDxBCjdIxIkfgu03pdNUvJI8a7zAYZIRHpGu8QHx4zh2oGKQfCsB7X08QZm8+a7k6p71cJKzIdK8xLqUwHUGoQ2AfwRMQKob1ie67VEmzHGpydn0F33XB+KXCrc7Oe7PWqRXc5wCaHoUrOroeAIeOFVqubdy0Q3VD45fJIvSVsKyQSaJRPeHoA5lESyXi8cdf8XsDQYTxnMW4jWAAW+UCBb81OBbfMrtz7nZ889pF7Hq7V5QeeGD576JqjucOPTc+98iXP3nD6pH3FTvnjn1aOnOgi7wrRfPb1B3jlI7f/Iv+CZ1xy2XXPf84LNpy3uTkX5krW/mPNerOVt5yc4zab7WK5IgI/QA8YWcwmjztoMxu6YdsiR+XiEIIAQIjAdW3HtQLhZQAxAoaYzmi6FH0vBGKiYVLe9ah0akrlb1SNJUKWqjU7QLuTAU5CNhCAkaI6xDGKq9hoEIIyTamuBsJHRKUwJ0LtfMAwiLuXAmCjD6mNxyR
L7GVRUm4lGKhpGQzASitrUgPsYzTRMIr3X4JiHxRDQ4aIWpUqQVBS+EjmmUtEgkJSmjzGgRik4+zN2SaLWTEtkyhJ5daUpMJNFQDrdYQYwDIAjIhIAhH7AVj5GKUAGEECoOwxZinCqPx84jkRYNhNIFIqMOVlTVG6CAbxdNvx/tQKZFJagYiexD7bMScEUiAipMKaGRHg4wthhgSrOTILCw8UKdTsJCe/D4CjBmWq8eQGDcB6gpTErPiXaGH0GKLydiknmqgDEQAjAZerAjCxKPJPAUA/AGspTQOwUi94cfxZRtjSVV80xkTPKolW0m8E4PgphHgzgQlvg8J1AEA7c5mwCrGziV4vLSKEIllETcWIlHfqgFcoU0gq2CaulB69NxOxLVMqXA2rqm19Z+L1N4jqZQB+4LhTZjCiKAd1+n4i6k/JGS1onGEn896B6aYRkbRTlbEoiEgyhPSJiMZorTR7JVfkzjnP+b8f/uRrz7tu76m7/uKnP33vn/3zc6/eePpk/dipcrXiFGiZhTJEu4OMM8hzO2h5pXwJOJtdnB5bO+63PSIyAZiArTYnel0SvhmiAgAu8kAGAfkSJWMWEKeQywBsq9AtQ96h9e0luSl/6yN3zf/9pyfvenRnc9gCbxFy48y26MhlyOrDYXU9B8ZhYWml0XGt3KHexBtaxx6s5DcVyhvOvnbtlu3v/YcPPXX4RD5f9Ls9GYDtFmxGAQUhUD5f9Jt+jts9r8FdxqRrWZYIyfO8oeFqGIa9XkeIgOcS1Wu8WxRfnnVajIaLEQAnVFJl1Y/Pcxzfksh/en9qVc3TcH4J1GUBWKv9EmW4mvbMZogZhSAFwBQZhgCVxjELwMY6ZgipioYw9md84mMsMP1EWUYChvTB6R+yqh9snuiIrEHaNKOaSjlYJJOJSEwyCSRlqCgGIhd9jGOqGwyRlCsrSZSAqBJ09JsmNUFOaGM8LoVsAwFYEI+mLXL1SJx+TH89vU8UG4EpBTUBRXUTtakRVJ5dAJtC0F5TRlG7yAYMAlQCdmLxKyIAjuaQmNoV+MSiGLgAFhsMwIQyrXzOOn1lFntACzFfRoZ4JONtzUzefxXpxLSDKmccyRIAFjEeQwz/0ZGA2ERPyTpBLBxnw5YIlCDopyUt3WEWc44yk/OStEQb8U1q5UwBzpwumVhT4v5o4/TAazVGB7kG4KgGiyQppUgizNFcI1wlc5BZXCURxzHKQTqgyKPMCrK6R3r/pBZaJpy1yRkM3DZg7B/TqxMAmJXNqBX9mbYZJ0OOjS4mNdFPZVAZACAQqKJo4vVVxAZWkRjstrWS667dlNt70y1/9LIX/PxHX7rjrj33/erQTw+eWrt+2y0//a/JIt/76EKD7ND1aoURgU7gNQE6Nq/0uq7rMLREz5tGPqzUjIBqU5veW4MuSimf9WcuSoRCsq4AP/L2AobIQ2jmgjUCnSZbKDpyQ3Foyzr7qbmnaidY+2c/mv6fO9mBlQLUtwMPHRjxnzgwad3lN1+zdUuufTK3UDw5dMnWp37+qf/6zJve8JaLr7j2Nf/rLa/8vd87cvzUpql1M9P1wJclu7LYrEPebbY6HBmQcFz0wm4hNzQ7O1soFFzXDYXf7XYrlYqUoZROlPogshbLiH00Upamhsski3lWolhOYqjmIe0hwSCl2o02ibnfEFO2FWMfZleZISlPWiKSUjVLZsWnzKWg2djk2gSeSMBSpZQCCTHAqJ8ye1L3ABJxdrCnoe55PwBnJFHQdN7KaeOUmSs+HkV2eMZWjHLQcs4ZgzAEVVdU21wBWMaDVbeGiL4UPK48JlgkASMlGj6KFcLRBKo4AoM+x66yBOlDj1G4aeJlLfW3AIqxzjAWan6j8UVuZXGYkyrEpMu3Y6RLiCTyvlXgkqkKNLEQzKQizuQn+zCypRoAnFkYSNvAoI+Q6R7rTZP2LjOvbMRV9EkKPRhQrAQxoiiFdj8Amwm+tRxMRAxRGJUuTAAmIjTgIRKyGUdK5PJEyFbfxPCm7kECYQ3QNwIkNl3tOh4JSVrXpNpfXaSLOgARk27eT0bc88C3R51MHQ/lFJaQG/NcqSQ1KY5KZtdLPZXJXQ6xNgKMOU+myIiz0xhPcbGH+IdUV2UYKmMYGBKJam/weOPHzaBG/f2A+2NTb4Z2yLQNeABlT9vGIBAqG36cMgk05TZb0G+xw9yibF1wVumGi688+OBjG8ecxfmVie0XHDhxzIMWOBv++9s3XnHtFptg6TQsLrfI65XLhZ7wpON6knNuW0HbpbDLbZVDRhVfgjheeTXvd0wM/kkoTiglUsiYjWADMCkQACzGGGO81w6sJjDhoVOCYnOlVUc+UR4Lq+0zx92cI2bueuj4Rz+/ZUXmt1QnH99T3/frbV7X9gtvWeuEszOve8+HT77gmcefemBbpfzz2+62KrXpheUdO3acc845hw4eJ7SGK9XK2ERtzbpGO1g/tenU9HwI5BSKXrtl21FCaTvnqtRaCEzlclfOempeFUsXpk32Ok2ggDDWPUYAjIiEQDI0eUf9HECiFsoICQNlhgGnVYlQofIa43EKcVQSsDAIcRrmMwCs3h5JqKYErCi1io/XByZhQNPZiCPkwMFyLUbG6UgFHX0Zu/+ktnfSZ66LtaCqv5RODp+5iFtxqKuIUv8yhoyEVHghNABTpDCIvY3SGsRACg7IJABDgUyq3KPSJBeRcVeTF4jVUaohJSApSX2ADVhyANCJObU+Q4TpkEutdpY8emkMwOaaUiSzGkHGkYYmuwQWIYFIfKwQpVprCCBmI/TkExE+uRhkSHD0Z18VmugB06hr1pNnWYqYedZ8K8ROWPEiJSxMxjdKX4xSkjdRXMlFhR6x2G0qDcDqG4iniiiq+2gCcKyaQETUwYtRjjFJkmf7P3Bakh1mMp+qYTW3g2AYEQOKOHpm3CyBGKyigjOCGtNXzDDFdSm0QKy8rgDAzOWbKadltJLI9Bm5IYX9+iNFQfcaVimKGbUgLbqp5cY4vrlP+BisaDXd4swOryp5ZG6LP/RX94p/GszQkEhV2dKPrwbAjk1H57rP2zpSHOmd/eb/ffjRo+LhbwBUarKx4tSA111J1q7nnvcnf/d7L77y+gL0PDh1bBkhlx/OL3Rmc3mbdVgZay2rTUSMGBKwSK4dkDpAX4yDoISAEqLKFmnnWygsEjmULgADKZACSYIKw7KxlC9Cz2VBk1XDYo9Th3o5QdjutoehMGmfPzYkQLZEb+sTTV5dfP9HP/ePn/zvguwxCFYAhodrS+2ll1xx2e+97g31Vhct3mgu1ev1am20GwSzCyecfLXdw6mp7UFoP+u5L+H5kaOn6+jNT05OElGr22HcDqXo9fxcroAsgNhogpHtQxIRYKIaNRdIBe8a+iRGAIioE8tkYEZzhBqAzdOaongpyTLx8YwyJQnBIqFcP5hA7yBEZ2Zn1EullDwqMoGmCpriBDXqwRRDYBS5icarIIo0SMSGVETFdQNkARgAyNCUpKeUm5p53dXMtCTjYnEGq1hXoWAsjOKSY406ZI3Z/Z9RVR6LARgkcY
JEkMPUU5F/ciQNxzk6CAQmv0bIEjWQ2GujOVTTJKIRIUY1OAGAMQahyq2dbLlobk3TVcoJKUOf4ywaLOZONH4pJFV1J6OnVDsSAAYAcGamIL1NU0TKTJC9mkSS4SPiLy3GtcGVCAeakNMOXKapI76TGKdUpm8NwAAAMgFgxDizEYvkPPWiCIAZMhnrstT9sQS8ahUmjU+SNNuFiLAKsPV/ox4PVaHA9DiNc/XbXqbpaICrQlrlG03WoF5xyZSuXtG1WG5NcB1S5CD2AEwH8xAR9KXeVX9aOsNMlh8fPFGJ8TjT09VU8X3tRB/Y4HUBIzBa0x0iQj5YxY0ImXbUEPJ22FkKrtiZ/+hXf/B/lnxx8Zmbnjrd+dTHTj+1N4AmoSgyKST1XDv/8nfUrnvpx6+5cPuEe2zPdKWUA9fqhSLPi9ATniNAKgBmsVwoJdBqAAxa0YAxOZeSiDjYUbwTqoqEgiAkEHXJiq4MvRBFiTOXQd3Oi5OyOeyNFwBDyYTXC8LWQtCpWNVapRoUg7O3OqcfuPNnP7t/z95DN37z8yFIBhbYoRfApRdd+NrXvIpjMDYxVu92QmR2qUKSjQ6NX/fMZ/x/73nvxNTW8y+/fnLj9p1r4M679i4tL2/dsTMEdNwiMt7x/BU/ACGllqXUwRSSHOXMnFoUiI01ho9ClAc+sxzmnwPX8TddGQAmII6IRDqDDREIKaWqbJ3Z/wDAICkCgYjqZiLCyDOXEaEJwJYOO8xkNLIEAKBkMUmL3CmYOhMG2VXBCpFPQySbRATqaUbdD8nZgwMpAkJxWUnGQCVmISElhYJsZMr8JqNgTuXrlBamTWhnMdqGjCmRiUliTLM1DCDRSEuUZpdMAMZBErAycSZpxiHO6YEpPwApldcEwxgRjU4iEANLmn1OZM4on13UO2O7AERwKwGi+tkAgBACMJ2HAGLleRaAkx3ft0hq+nnKgp0WggddqWSQxp0MknKhoEqCR29PnJ4YZXcDIqqNpTerCcDKgUsDMMb14JP+A8g4QKAfgBV0RW+nqCb5aoyFr4psICpZmZTOEBMFvYiGGc/TKgCcuQ30Eq7y4n4mKf6QLERiC0S0B+jBgIh0arcslsg4ZCi+W02j4IiDAFgOsqnoedDAQBgBg8stJTETkdYrAoA0c/2YXeIx15kxirOUAG0yHAMnTQN5ZrfLMNR90DcQEbMsMsqcYZwBivdZj9QNJ8PumePlXXz2+X/5ntsnXgrD6wuVY50zdo//5Mtzn/2vQmMxtAvkDGG7TWDR1JR75cv+40Nvv2pLtXFq+fiRMFcZY3nwg9PAKiChD4Dlb3BKiveSBgMuLRUeKTFQdeIIGHDuCi/EPCIvNprhcG5WyNEWFjmcKPAxb6HHXGJVm8mutEdCaxoPllrjvdApjufP2oVFChaWe48+dvDbn/j0XU/e/fF//fhFZ5+756H7v/XVL7h5a+Ou7YdOnNh19gs2rV/3q7tureSlzTu+FCcWmqw4fPGGjb4IZ2ZnC6Uyc9xcqcK4O7Vxi7tmt/CDIAhAOSsQk1JSKDyISh8qKq8BwGIWxMeTiITKBoEmDqUQV8aVsDOnIJOKVW9Uvb10AwqAQ5GIdIwxzpFACBFysDMtR/sHItWume4UAFYDYK4ZdyMbDABIWzIAXUWTgKmcDxY9HQD3S8AmK5OZqzS+JmNJ0E6XUyNizIoPbxQrH4ZhGPrcKjCuMngLpWlTZTdD43yZfUDOUJDSUGsAtgBJ9giTsOYI54CLyHs8EXOZBEQUq9iAGRCh6VXFlNlYonI6jq3dIlpQK15wIhGtjirqaodoCr6xmUPZ71m0LElmbOX8pbXQ2gmLoYid4RMARkb4xIIPAAQpIVKtdOzsQEQk1TIwJBocT7lqykCDUJoLwCOXbtJAqC6dMxYyOl5jis09xGUcYqT7T8AIJBtQ9g4RmaQgCNC2gDM19UHPs21bG0IsywJAoRIjcBYaJBsMDIvCIXSEa6zCETLUDo2IpsIeNepAHC8rjBIfJowluNUXHoaxn2Q8k9G8cK6qe1NyopQcHKacUHT7wC2ARD+vTUosfVyTF7GIH4pMvwghklrHuChL1BeViYHHTKFWqpvLygC1k1rM0ady0GgBV/EJiT5Kc0h9YVpksDJSc10x8DPQTj1aRQYAIEzrlCk9a11In59OZn3VzgkKBbc7PcW9oycW33Xi+IOCSh038FfstWe6o7j49x+o3H1jg/nAXBvskLVy/lj3ze++7Jkv/d0NC6+6cvcjj/AWd9Y7iyuiyjkXYVgoOM3Oom3xvMhbPaflKtthvJo6nqEX8IKDQRB4Prq5orTR9xtuyMECxvxAOI5DRH7Pcx2LAfm2kCGhRAaMSwahYMA5s+tOjwNakpgkKaVkSBYQZ27Ig0D0ur7neTbHaq0wPjE0NAQ5Bp0A7n/4yI5dm+dPn/zA//cnz7nszLA1Oz3j1JeaYyMjayZqpYptWzQ0POb1xNzcXK7oSqtzem7RtoZrNXHt1VO/uus2J7zy6tf/5X3HZ2rV5fXV7RazSqVmJXDmLTffAl/QkRMHbRhiNloUgl8hq2kxWZfStsu28EO3a3nYgLwTdPP5IiG0uz3HzcuQQt8vl6ptttwNZC5fFAE5ocMFOEwKahFWAUOggCgExglygpxQcovV9aLrHQsATNOieB+qnzhQLNlobY0ATGImM+d6YM5q87xn9jO3gKIcuhi73SiCkJiTKOYUOeemxJkhgBkiAJHqfoA2XuErxASBGEpVexuBCc6ALAaKbxDKHYrhqmGTPFETRjNpuLummZ5E0UWYQAMiIkWJjKIVQX2E0SIULFKvsbhSu0DAvuFHb4+/0+siI9IP3FD5qWIqZFDITGuuNisY6AYAchXTYRj7oDBjP0gp8amlEGIAJsNoKgm1bVIRSkUWZV9uz+harZgwyyo/1Z+RzcNQlqoGLBrgfZ0B4FTzqsyiYYFWACwwLalriVMIy7VCSYjYaDVrtTIJKBfB60AQUOQbgkxnjzKVGylOVkawFPWWIvuxFKG5oRWBJiJm5ILW0iGlF68fgM2eG+Qg49emno3phZGDVwIpPhH6Dpg6GEkUr2YU+9KURx8MANZlfCLFkeEeJROHuMgGgxmJBKJUneo1Kre2+obSubtVg6Z/eGoPDPISJyIeZ4U1ARgAFABDxOGqVxCsDsADE31QnwJA8yv5Du8UG72Vk8Wg8OFDR38gJSx65AK0Zpk1ktu1tfjA/Uuf+iexcsi1IPQsUQhzHad3xYvgwle9fnP1E29/9hOnrOlFGMv3AtEoWE5vhWOh3LIhkJ2yLULBiSgt4EsiIl4qdIMgbwmLsZZoQRgWndEAhV+3bdsPJXJOCCSEZbPA89HiFMfOEUZCMzCknrQY46jYbpJAIRIBsFAyxi1mA4AQIgh6JDxJgRRe16fxyfXNTneoXMhRp714cmXmxPTJPZ7fbtYbIKBZb51x5s5Od7HenNm981oQuenFgyNTa5/5nNfee+vXXnJ96+iDH4PSBj94RaF8+frtF7z5Xb/7uZ99b+PVbxxdU1hfowtLuzedc
f55F1x++PiKJyjXCXss7wyXVhamh4jX21AqVVwnDKlXwJxlsVa35eRcQZKILMYZY163lxce2jVhQ9tbYGTlnclGu2MVfSYCrkJ+hRQgkEV5AjiVzOOmt26k+TN2WrTPgSD2jtYhTKrEln7WbNCKNSvamUOnr8mcuGj/8/jgR+nttHNTqAEYDEF/oMrX7Ib5vebm+3/SWMgACJGM1IEMkKtQSxOA+45j9GqGuqmoG6qZPgDWhwuN16kxMsBAn820N6UJwDxm1hUAD7xMB+8MXTX9fCPX2rh/GUhCRJ0PQxi57gnBIjs1/PiDZNpJiFKL8uRiAOmNZUrAir+CmE4RAlAqLjahlasAcKY8qn6LrvqUyQixet3cwe1rjin7DaQ4Tf32EMn3fWbxUsVeXOxMTBS+/tUfTU9Pv+NP/qjXkyo9LOecCJUihTt2/0uJiJMERO0xIWIEJiOMSiGTOmyWZen1DimxQ0iDc/ytARjQrNGhdPgxY5RKgm9UH8oOIVNdK2EMUxycPo3cWBfNlygXbj1SMJ3aRGq9MLZzhzEA6ywfFFexThGs2LfXYokzRX/HTKge+Cuhnod+jUjUgWgnU+rxTORPhifQYgcjUN5nBWYd8ebHi/nu8eW3/PT781ddmMfq7J59uUpBTB9khZy3efuONZvyd98+/YOvzN1/T5VaQX6o02vVdlwgr3/v+ir/4u9uCEY2Tu9vVIYm0ZWtYLqaz8luRVKuiUGFSUV6jL1BANACWC/cWegtgr+VylzCHPPzwG0IifNAhJJxgSBBIGdChMU2Y4wBZ4RSIijziSDpcDeyU0ZUIkr6hVJEIa/AUeV9laGUoZOzQ4GhxCAglFQp5kqOhSRzufxK88Ti0sGF2bkzd15YKU3kC9a99//y3X/xxpc9+3llx5E4NLVjQ616x/lT9+wsLRwreeE0h2Ck6TdH1m8+suJ98seHvn8PWMMlvm7U+/nxq899yf985btPPHaIDxXOH6t2ZomtC8sle044Rxf8+uFZu1QY61oNx0LEUAbAGecoZMgIEMkJiTvlTtB28oSMVpo9lrN4iWBF2szh4ErBItqGBCiVl2z/pkJIbQhNcCMABgDSYUUAKFcDYKW2lUZBi6cB4Mw3JmboBk2ROsPsZocwWOU+mD5IKQ2aEFtw4nwGyAglKeyRivJR6nXJe+PoKd23foptPqIzO5o3MMAwYbhNYiCRmEbcyKto1fS1AKqGS980AgBDZEYnI1hVTr4Z0qQ0BxTlA9DSlPrXBSczIvVBATAqI4HJeykAhr6Fp1hYjuKa4w6ZQ8sKwYOuqPqTwaxBekuBAQYamAftwpQs0n9pu2/03lVCJ0NAtWVbnXY+n+ec5XLQ64CN4Pt+4AsFwABRXVPb5pmNqxcs+kYmdmJTYtbLqQ6bKhKiHg+NLD9RcYg+BqX/LEXzw8wDkxgnTM7OdCDkq2gCdBWjgb+ab4y+N0qCQyz6gwHYyW7DaFrA4KN5vHFDXT0ppkAUS2NJWdlI1E4Ilu6PPnyZGdMdMLE8vd/6957qbUIgohlWG55FN/evglmRUAEwIjb9jiznCoEcK+S+cnT/P93+U9kQo1NndHqyk2/mmNtblJDLwbZycevoxEMnvK987dRNXwE35F0QlRK88n0Vd/Ohj/3OXADecWg3en4FAtaqSGLo9AqubCSBDSbvS04PfYnIweKhDJpMAOM14QjmCD+QoXC5xQnACxmSzfhyIWCAQggRBCDJsiwVcGhBPiQpKEwKgROBlJbDI1MmMsYYV6YDEJJQSonIbOZ4XgAAQpAQAvMhQ1Er12ql/MkTPSmFT0vnnDf1kx/eeNt3f3zZOWtPH93jd8XGqe5l58858pHRser2te6KbNR2bFx5Yq4mxb6lUuGyH7/xj/7m7bU1L95w2es+9C//57/+u/TMS50aa7zw9fwnX/e2jva27ZjccR0967neMy88kIf9D84OObxYLHa9nud5jDHXdT3PcxwHrMAPiEQOfOZatgh6hWp+qVm3eJ4xZiGTUgAJBcBJ1KmxcyJGeZXInJjgsLjitUz7u6Y2JxFpuqEPeLTH+mTBZLsZapvEpkVMGilgdTsD6UZm55g3rKZajO7Xb8Ooz5hOrKEzLq6amU6JIiZupAM+M5gd3ckSwUxJwGEo0YhQVRNCRMotPPIBkoAEkoFEADF4vQQmlSvNzEUYU2zVrJaAoW8yMfK3SBLymBhsSwv6qCgABDpA1CCkiBipoPuv2KKX1GNSpIj1NZ3Zstl2DIVBxAIYfAckEzpYUM7ckJkFANC5lFO7Kp6WfmgRgCCk49hCyDAMCwWn1eop8TQKp0HLfONAKk9ENrJE+UAscbLljIjMBCDRJmaJVcl0+WbpsL9kftJWKH14uGmdlclkMp7aB0k43dNwg4OuQdAFAKnALfNXDrw/k7bZmhaRIQLgGPCISCUqYYix9owoqo2ROK8l+uHUBlg1UDHT7TQ9Su+iATNj0LcEfc3WtNCvVoTFhC8HfEV6LJSMyfU7a08cXPy/3/zmHQcPBDsvKpenOhvGxPKJstftSRk02zBZG7r0ypHHH+195b8XHvyVnDnpetQ88xmv/NTnLqjA1mr33LVTM6dhObB6cqEmZaGVb5UHuAsAQC8URSx4XuBDWCm6TiA9LxS2K3hXSgkMIU4VpIxV3Cfbtjm3ARCBc7SkhDCQjDyJEhgSZ4jIgEhIIQTYSESMopoyRJHR27LsXqfruFwGIVrccfOhlIKQeaHr2p3OColeseT2utLN1Rp1b9uuCSlgdnqv7J48+fjB9vIJLzjq+Y1nXPQ7V1939r99/m/ueeDmv3/3lq0jR++57+LLn3/fv/3lP49/6hvb8iMlp3pi5SiMju7g1TXFprWmVbxnT8cJwQ+aBbZIfMM73zP3l2994NHTnNuFQrFWHWbMWVqp5/PF+koTodPz7NrQKKOODSFIq9vz3bztMSIWEoUEAokh5UhaMmBoe+b27ufzMlcCwFpEVv66coC8oQE48woiQsYGblSdcBfALFNDaPjKmAk0+l9q7vV+WirTtxkXy1BmpfjhKKQunGKmgkiHrOhuKDcXE4BRkQU2oCwpGS4mplSjWEajWYO2S4wQNwZgQhAslYExgwIahkzqGpLUNmDSZUsRUQyOn9a+UNLoJyEwkd0t6kGde1/bgCP+SQHwgMoEMfibBRFl7AUN6V1l9i9zmchqjkH2sV6RczL2hSobsD2gfUPwXQ0G0v1B1+btds+1bMfhCwvLExND7Z5IpV6Tq75Ov4WZuxMgZlSJpYs0aH5FGt7IuvwRxardAbDHUxHx+icOSJScWwCIksXHCUOE2RrDPp+56FIJT/rzhKxmE7IAKd6aKQZFkhjERptxzNGMESCiCusyvFgjozVHpvVy0dSp85UJPDPcrDIzAzFlgphemN+zmMAZ480So/QpoMzMxNwuaScsFfis+DbBmcVYIZQN7LVb9e0jk7n19gMHZv7q3758RHiwY6K8bqIlSvn8ZO6Jx8WdP21ccEn+dc9xgiH31GH28J7uD7658uhPq+Hm+l9/YHiscnbR
+sSrbyg2w4ZjCQdyHnRSjCzEcAg8T4UehgCBA5YIOp1GCHbFrVAPbBu4DZ1OIChEx+r5vh8GjDlSyiCUXhAKkqp0DCIvckXBRRJorqIn04SSAcbVnwTnjESQyzmSwrmFheHhUT8MLJuCruXaw6HnMy6YFXQ9r1guB3PHRWmUVQscYYTLEl86cuD4zm1XngrFXHdh987x68++4OjBR9Ze455e9GDa2gaFj6979qagOvf4/iK2uiQmZXkMim5tzVMrj5/9hT9r3fyzJ77+rSumqr+c7l384x9611wfBNBs9B575DFu5fL58vDomOPmmS290D460yT3NLekLdeEvsucIOdJQCkhFBAgtxg6QLYQxK04i695xCBJlpfaEtHpxz4AJmmw4OZG0gmODIhFRNSiZAaQkpTAaGp/CCBFFRM371WrpWW9vtWr5dOSS43rAMAZ44CAgiSIJBUVAEBUgdA4jAkAUyaOOSny0w/AAJEzrwLghBSk7JIpyQqJKQBGSdphU7DBNQhUVyFNNNRnT4amE1aUoAmAySS+XHeY4nrzYBiz42eN9MwmoxMLYCYACyEiFTSjRDUf2X5iJIlUqQwVatqYJYhmjwde5lbT31BaABmYmVldEQCv8oIk4ohiRPkNIh+TUoZ+UC7kPS/I5+1ms8tsK+JMjAAn6JPIU63EKlRjoZkAMqpFJcwOAISm17Sxtzhky95F42JZJkDdYzNlQ0rOMCJGLlKRcSL5Hlf3crTM42OsEVst4QOiNIpi6kRjENt9zQguAGAC0VDIG++KnbBiAFYqaCtmUMwNAACZc55prX91zGR1eshIpAPqDQoVHQP1B0/jscQBJ5iIMv4mLH6kDlhyebHZWmSBdHK2D/Wgftaadfki/HDPIx/54teOrrTL51zRHC0Ba7xjfPuBj7/wxltC9srfX/sX77aKI72Te3JLy0e+/T/WPY/XLnrRYq525bU7X3jD8+584Kh0S7lcyEoiHnFMIyQQERNuy6MhLAN5c+VOZcfEEMvNPzlvBys24rCbLxFWLLuWKxbQKuScIdZxLF5w7KLNOIHodf1eRwThMi8CEYYSRRhxQgyBEaDDmGI1EqohpeTIAYhZvNtrc85zjss59zpdj3PbYd1Ow3E4AOOWS4StTm+Mk+Bj893ZXCl05Ri1rFzO98NmyFpdb2QkV71gt/Xt22++dfauE0u3Hbnbb3eO1A/NVPbDRz/0v1/3l399+u6HRh4++OD//tzljaXGuWeMPPK1a174qntu+sl1xdrcyky44+z/9WdvnZxcd+/d92zdujWfzx88dCRfKpYq5Zbordu8e9POS1tdaLdWqjmoFir1pW6ZFQgwBAxAChag7QF2CXsoRvs3Feq6xbHyQ39W/qlqLyiGVO0UISKvKEgTQIy5ZzDIy2oADACJE1Z6H1IcxSfTgeyr0cl+AI5OBww2UZEMI69SIFVilSOzkEkSJCEkxTgnSRllX7WxhGFNA3Akuhi0CMyDrKq5pr2gWcp+JJPeokRpacRlsQ2YcABlAEO0zb4UQEqZccJCRAKwgMx+ZmcpIRfR93yVYjCSx8OJm4lo/r5lVS47ThmqdZaI/QBMcbWi/kvbNTNXSloyO42IhmykfxUxQ2HCEmMslfTDmAs19SYAR3FymJ2yqFluSyEdzvxewAFDlc/FUmWzDFqvy6IZb0whmQr+1QkaGUZ5D8x9YzAcoRAqrpGIAilU6igTgDNd1TikZ0ORRYdb8edkUAg8jKqvMIiF4JjPSmksIV74DABDLD5y2Z/PDyBWHIVa2JWR+Ms5l+q0xA9FleSDODeTUs7Hv5IWpAwABgBLJLykWYZSA7BWcqS46b6uqmScYGi9Ys4zPhLJRkrtTGYcDAQjtD4tDDGLayhS0K6mvdi153mrbEHouNixbIntCliz867DnfW5seHCnXee/vKjv779R99o3Hrb5tfd8N4/2r2rd/DDH/jij35dXPNXH+7t3AqS5PZd1Ambt/yw2Gl3csPELXtNGfMln1WgFwIiIDJVb5yICElKTgvgVsSKBxM5aC3gP3ydOiF+4a/sZdtfWQKOUZ5k3wfGyxPj0OtyEZQBJwu59aXCxlp1w9DQSDk35rVyll1y3BwCEvhC9sIgINFu+SSkECEAMBXaiZaiy4IkIePc9n2fS7CRgSRuFbxgxbYEgkOiEAQSeMfKh210WVeWHJBB6Mk8L0pBjdDPW7ZdEizwFzxR2LZ5TdkNC45V9xqPNU/XDx1dN56bOvPyRx/hpaq1Zbw3dHT/96+5ZuNyb/zKK267d993S63jjcayA/XQAikdyw5Cb2x4aMOG9R/44N8/sf+JtWvXTI6c8f2bv/C2d/xz0KyVXDm59uZe49fYGj3GtkpZ8/3xTneo57sCkDAg8DrdFsaOUeYHTdMjDE52y6oAHKVkNw8RogyFeR71uUZmmftNf84AsGZMpZQqQtrMY/UbATgDGwBJqY8MiZMiQM60PgkJbGCccxEGkmIJmHEGGCeXWsXmugoAyzSqJXMSB3Rp98kIgKNKvWHSW5QAwKU9EIBX9SWy4/hsY7aJiDPWbwOmuBxhRjmnaVSMBckaWavE6wcQ54KWSTuMsQiAIdJtEkBEWyUfPAC+ivFNL5X+Rg/GVFpS/BPJVHy6vmyWSIRJmuVYNZF6Y2R7WJXTiSOyozkKkaSUDhkbHRNyz/ucLGLmIMqUqRc+Cj+InYMy25dLVXsyyRENRnR13HTCdYbcDAEC0CecJypZfcAQUUrSR06dQIiDEGiQyjqUUjkSm6ZoQWQxpmKuGGNx4DwDABAWt2TP7ziOEwbIwGYMOICU4OUA6n5QxJpkDS5KgnVscAIZEJFtEzBO4HueZduAUnZb5VI19EQogXNOKFt+q1DISWGbRx1iGpR2LjOutC2H4tQ2NiUaArOpUCd/zVhuWGSZU1NBsQpGQpKoIRWtSKhyQYeKWwKwohIXQp03tcGUJg4JwqT7qVLOkkHQlDbCyJp8ZRQsDt/75r/956ffuf9A4U9euubtf3j+HYenP/m3e/ZYzxj7zMdOPLGfKiV3ctybmWUr3erIWKtWDktVChhIshizgCFRKCVxVLHswP0hPhL4izQlRj7/68stefjxb+/94g9Hb35oxsoTt4NenRUr6z7307HHPn//E3vdv/mMt34U0HanW15R5jzoHTvlDJfkmRfbR/dsrPHJQm17cbTgyN2TE9t51R3qTli5no0ekbPsdXvBaZDtwLeFVWTcAulbYZcTOZbo+ENOyRayHXYdx5Hd0CoUuhxkR5RDRuCHBVsCWl1BQsqcLTgHP8xbIGXYEx7keIdkp9mp8WIZ3PGRise4Z8OpU3N5kgVu9TqtjVs3jQzDvje+rfKtL0zZ7YaEwwLuGq41X3rtZ398u7/UyEvWk/J5r/zdL37zyy999Zs3rtt85Xo40WmEhbGpkU3XXeBOjf5kqfWFkXU9OgqLo1uKS9urxefff3LnknPBKRlsy5e3jVXqE+H8sYYX5CtBs+uXO3a+6z3qwnZLtD0WYF3WkeXRsooQWEEBan7QQuQ5ZygM2sg
7SEAiSrGpt6W5RfVhjL6PwhcTGpJQTim1Zis5AhSxygOPCyepKojoqKEoLDN2KlTeDRBTNo0XGVLPuS2EkCJgDCzLspAp5UeISTUhin2+tOuMJkdkCvdxRnrdeR0ulYEAIgKwVDot5RCnpAvGLGUDJoxSYjGtololE9QgkxMAJFFMCFwm2kqGEPtCGeUsAeIiLkxFADEgQolEUcYu89IMRATG6WgUpAHmeURMADiV45mI+lL3qed/ewBWj4R9LyZTj5buEBFZhorbxLCIdPb7NazC8EULrBMxIihBzRKm40AsJCFqb7HsRgRu9kQDcH9PovtjAAYjt0lmjKaoPTAJdjLqfslYGoun2d74JAzAttgDOUIXFcJL5Np2GEaOfGpzKzgXgXQslNKXEkQIluUwxoiEJ3pYzNmB9PIOb4ZLwl/nFloBhTlkPvmyU3Acq+kND5UWGiLHeNuBblu4Di+40PMkQAgouMNDwWgQcK4WDqEBWCWplkBPD8C6PGf2bKcBGCBqiNDQ/BvMKYVSAbAgSSQZYMzHSA3AzCj8HCZ7NXU0QwocKPQ6bbdIK61muVTcvN76z0//1UThlv17fK8oXvvcoavP3flP//jIJx/e3H3Xh1qNel46jZMrDi9ArernOdgW83wJpNg6lRcLGIJtAWd5G9n4ZKfeq/aKQ1up4PScyfXLd3xt+0XX3S9HN4Qz28ojfMweXhDXjcz/x8fvOJzrXfqmP9izZ2k51x0bGp/Zd6rwxS+3fvb5ufOfvfYv/+r06Sb4dQjrzummn7dhamgYCxu6cNbQ8Jqyu7FW2lwrbq2VhxAWQHQ6PZtx4QkQZOfyK91uvdttYGvIKXLJbKfg9zxLgmAS805HcOp2ASQr2S6i3ekRBGGeuc1SYDMfpSWA9byhSnHFb3uW7Ha9qs1lL8znam0Ku7IrpNNegvENud1rKmPLJ+e+/7XgF7eHDXHmP/xD51kXdXpw6w9v/Pt3/VWh6O46b/cf/ukff/GLXz5jy45CtVHjI/VTJzq9EwExJ5/vyUcuuiJ47vPHO4u/AOeZX/7OL5+547rtV9z00Ck44+j8no/+/dye2gvf/w44s3foyPHRsZ091zq4Pnf62CHeFbnx8o6zN2yTeKgXnHp8mXJDbb8hyRMENq8QSsIOCenaw6Ho9B/eeKubqTQjSm0e3hThTru/6L369F7Z2iSEsWpTZ5QjqW2VKsXW4MQdKke08g/nnHGIUr5LnqCs7o8CYDA0l7odDYTmU4rpN0dkkEeuQ8KiMRJLQi0QJEqKc2709TmrJe3HI2nktiOExGQrBzvLUCyWKwDGxFY1+H406ibpCVH39wuciIgHl6XZSzPvRP/AEBEHJUAASKUrM7/vB+CsaJLRRcT1ZbM9WcU2uVq4kdp26hmKFQUAhqrUUIwgJglGMpeFFqUtB5l4pwwMJBJz/Gusss1sCGY+lRqR+kJkOcSYfUyOKKYV9cm4BgEwCWkCsGNZOktc5GTBORH5IAClY3EZEhAnZIHwBAYOuiRCF2AZaDhf9DjIZb/gcIFhEaw6CxtBb22x2un26kF73C42HSECWXALnXoXJa+U8wjQaAAvhJkZi7s6mJMiw486UU0DWHH4Vma81BcWFf2pYTHtppi8Nh1nBSJKw0Jx/ABHJBLKzTKZagIA4Ijay5EopYzjDvM6Qd5xW37LdvIUMpd3h4eDL3zmzRP2kYefOP7N/1l0z1x3wzMn8yf5j9Y+a+myV+D8omgtVqaGV2QHyGHWsO3Z/hARECADYFwwO5C2DzzEZq4rVtq5lUncJO3uoufnPWyxLdvkqWVbUlCrwPys/aWfBW88q4wXNUsyX17Ew7JTdYa2V4IDp9fuPLdyxtDyXXd3bnp0+pUXWStQccaWSiPDxYp/7w/b7/8beu7vuddd6PVaOSxZhL5Nw8QvHd84PirKLFdlDvNFzc2VGVRJ7Fo/0e2SXcCZpUWffK/dctEu2C4JCFxZRDcEXA58KViFFxjwlhQcPER0c3YoBDEuJEkJJMB1ljthuetRxZbDHEYqwx27m1+X4wfFYj7ACm0tF0aAArBP1mHvU7NuyTp/28gvf3Dz3gfvnl88vrA4fe2Vl5Udd2n5KeZMrFk7XnJObRoPO3X49RO447xX7Dib/e5bXj318ou+/eMfnDV25siaS1915bPecv5Lv/P3b7X+89RZkvbT/t2v/oMNPPe5r/7b7q99burVL2/OgbsCb3rfa33euubMa579xjdWPFF2hivDbP/B0/Um2PmCoA5jDGQReKA3laniCjEJ7zHpiXllABjSREZdZhK91CWjNk22EgB0mEZGldjvBR0DkipeS5rYgJBEKLm2bSUEiogUAGc0cCYDYQ4EtRth36sjfiRybxNSyjgDiabhKZvUaomhtAk58yslXs3cnHkSYf/NEOefEFqkQlS+WqsxQLo8q5loQQN/ppOIiIeWlAk9ujVRbRtxVCZkilUAz8LBbvSmTRfMcBQarILW5fCyTfUBcLyigyVRi0U6nOhXvacxtX6G89dgTlBJwCY3kAHgTJcU49CfHDzEwfVKdVydKTQDRGUNzV0bMzeY4ZEH9MH4lSLkIBI6SSyoOp0yLmGkG5FSOpYt/MB1bBEQIEqCEALLYfUerzBACNoy6K10qmtGO91WleMKK9Y8YEVY7vo15rQdaBfA7YHsdMvFfLOxUikVpAzDUEoBzHZI9rmlxLbqgRdhaqVEXKLYIuwfuwnAkF5HBcAR4ZPJOZZxph5988BszyzeTv11o6NzkfhAMDMeTAjJGEqgXCHv9UIpmQy9coEX1jtjVuMtL3v5j390F5NyXY5f8fa/+PnEFga0qTi67/H9DRIwta6wbqMXEhZLohEAItoOuBZYtsp+CgC8FYiS71SK/iNPwWc/B5dvLbzojzv7FyDXrg5vaEBQveMrm9atOfbBd237wFdqL3negfuO8NzonAPNG78+eezu6Ttvgle8Dl72VhgGePRYrTK0MncMvv1TWDxp//nvD1V2uz/5ZfO1V6+w3nCdN4+fDrxWUUL7Vw/Bmg3FqXVt13amJsliYas+bPHRcvEZ9sh569dtHS1PjBXcIi6vtJYanXYoa2G5BwFjULQsROxSgIJKgq+UXOZ1w1arUHQCTiuNdtUpF8m18vlKBd2cxRg8ceTUXG/+0PTS8dP+jF+ZD+dtalQb3uLJ2e3nn1OuFK456+x1kzxs1K86d8v0keMLc0dBdg8/tef4wUMT1a1nP//FC+BWrTn/8Meh/v18bnJy0+/NLW9+xQfeuLQWdk/tPNVaPnV0rjCDGzafM59fHr7z+Ju3XP7aNedN/M7Z7L57u26Qn/Ea11whxjbWvvmrJ27/+StO7qX1Z51/+VWze+480inUxks/uenm/UeXOh6gJTi3A4+hFe03tbs0nVEArC2Ump789peJfwMuSQNckGQiBxjfMgJQTtn9zZAAxlSAk+qnVlNF7804fylv+cyX0EeWk1EM+pKIAHkS4hBXK1IAHzeVUhGr8o4DoGeQaRIAIjIYl5GOB5VIwBFzFDt1E+gogLSpez
Vvcy1AJgQYEVHjoO5hRMcPr0R6N/WtttdmJF2NwasBMBqqUXMihClAG/dknLb6wTghiIAAEKb3mykCDnxKVd1J0mwhKoFYyz/qbHDd2z71hbrMoJe0BJxagMwYpcH+mIkdALIrxwlMUzfEGKy96UyUpXQiDnOBV0uxRshMAEbOEFVecJJGnnrNVnOwQimEDERIlpWzHAwFBYFnFXO82aysLdUKKBpw9FSHjRTyvqxzZrd9dAms0PJFIC1uF+yG7GBQqbpSgud30EJJhBb3Q2EzN5lJYyOuWoGURaHrGT/5jGIf+3RfmaXJ7FsWm+EprgqlnjU8IVVsZWygieqkhtwoFG/uh0RHQilrthTccnGltVKp1BqNJkdWGy7PTC9xL1y7js6bGn7Xv/xz0JQf//v3PcVgeqlTZcFErXpqSd62/9jXf3HHwcVOs1iGqQmnNA6BQCFBos+B8jZUClDKDy9gvXkKhzr4oW9e+I7r973+bXT9DeGb3148fWjWp2KVdyY3bxneeuj0ry5bu/GtIdQu3FjozIyUgezK+b3C7Y881EDvvAsvv+tQozqe8ztNK3TXu6X7brzR3jg+ctVFvA3zvdakVQznl0enhh++/faL161fWJz93Gd+cNbb3/BfJ/adWvZgmcrVidyuqXlqQtCC043CzIodemeuXfM751548ejEWAEWlv226IWh5yKELnZsSYTFEK1ml5WqK13hWrmclJvWF4Xlt2R9bpluefT4bUeOzdhNcHmxYztzi6969uVD425uKJ9jvOTn/AaseOLEwsnjp4/M1nmtmKNOd2rN+PhwNWexlcWFLZs2nb290mmylWW0LGt8qHnyiX/dWPwR6+4r4Pj6tc3vPOb6OHTydPubt/D28cbOycrI5ed87dM//9Uv9+88a73cuIMdP+lUy6zbBIENYc+NiK2cTs5ZL3bYI8JyRbBt5/lPHnn099/85n/88Md/9dCJkfHher3u2hWJgYJec/eiLuICBkTFO6df2AUA6nNujW8YDDAJIYqdalOHQlKm4HeG8CY3S2KMscj1ScvJWQDWik/GbTO/Zlbe7btolZSZaNhuEY2c5ynnsgiDiQhR9DcCkEj82RcnddBTCYuMGDMVfRfLJIlTVZTjLBoUDNa0qXkDSJLgAgBjrF8THI334HJKlWFIYAMA2Lwhe4nkBanZN8YJsXt3phMmOWPpgHQeMwqBFNC3O3H18CQmI+5SGhsC4+0cvcu4nwbtEvUIJCxZIgFnwkyNyU3pHzTzS8y8f1V1qx45N3wWzFHrFHeZzZ3ReOh7iGFktFALFKdxtpBrLRYRcc4ZwzAUQbM3NlkEGwKA+YUWCJa3chZnS53pMzaPfnvvTZ/+xL9/5C8+eub6Xfsa/pCwBF90WX5NsXT0SKu2puRLcpmUQWdyqnz0eGdxsT61frLVgZ5HuSL6IfC0c4oxdaumzDQFCM0m9sdPx38m7ZgnX0TOFwwSLFfGgiSskKeiHRSNBBkXtFFM6kAATu3BKI999A6bOT3ZZQ7rtINaqeT1Gq2eNzw2xnswt1Kv2famLQXp0tGTpzpLfALXhK7X6q5IHmzcPuXm4eAMfPmu+2/e/9SRroRq2R4ddcrlgJjf9sEHlEAjJQjA9TqcLXaePFXce7+Yvr/yjk8uN1kALqANjdP5T3+6y2bhFX8I1VGYnoYc2+EM7ywNt8etc665cKzrgdfZsHnkHL+AFtBQLxc0XVZxhMsXe4ccZneF2xbouE+2l4rDw9j1a/lidV1PrPgyzM8x64GF2YeOHf7+V7/B6t3g6qtgdBxHpoZyI17QbbdPDC/Ovnxq83VnrtvglvMC55srrZ6XZzkLnW4QskJR+M0tG8ftvDXf6dz2+P4fP7TneLM7/ZM7/+BPX7P9rB071mwaI1pbsooldvQEuL0mh7zXE4ErWZF6/sra6pjjsUWXe14YeD4Ixa1jKV8UQp4ITlkrwYbqeL0je1AcrfFc62fj1l1t//uTncMLdmGltHvL+rc9MbNj8cjMx3/33RtG6xuWefXKy3Inpl/1yIHaDn6aL+ZPtMQo9GbYsZ68eOP4wyfnzvjipyuvfC0w9ol//fpfvPetr37TH37kE/9+y+0HS5WybTEZALejfYJxYXJ1aRV0IjywLNCmGdOUKUoThFWdFuNEN/r+JFCVMnXQCZiZ2Cp1MUHIACK+U6qsZ4hc2zIzR0/ZmDP0GfoYi+RzpuqifpAnaRgQVXACgfJ9oZRUS9G/KVuy/mzUuk1z3ooYSlOPxQCArZY5UQrNMCluIHpqlZz5SfTjoDAcfXMywP0rA5yw+pseyMU8/W0RyMWMlnZdIcMGoLvYv3JmlzCyxiVzmmGy+i8mlCtgZA5BRJDJnFEaVlcbDsQKSTNSNgqbWcXNPTTylmiv5oyDm+qg+lNJ5Jh+Kawi0iFilPa9L7GJPgCZHS8RIr820wkLyMKkcIp2QvZ9f33RnW/OHJw5sP3sXbXyGAdszYUrc0ulYSs/Ic5//4sacye3Fc64/d9+fsfRfRuxCOX86PDYqy5/4Sf/+VPu7o3L7Xp3ZWH9OVs/++F/7rW9f/7A+/Y8Pjs5PtFsdiRSoZDzvSQeOs0xrMKUGAAczVLfGg2c2MwVA3A20FnrKnicnFx1TABZyJAiQxRjDBAlSpQ8DkSJmblBAJy8txdIN3Dyea+NOc5E2JSMhVDuoe8wz7bccMELHAirotSikijP26FAVnJd3vZaS4tjG0eH1jmdIPzVvpU7n3r8pgNPnJYCJtYXxtfZUJQ+NZsrEHCocpeG+Jqgd/rg0ClnEQ9AuZirnjWxc2zqC5+besErvvb1j7lf+jn/wEc6oZ+zS70gdELbLyMcn+EbNluy4NX3ubRW1EblGrETutvtnDtc3lAo7AQ7N1Zx8nytZe8cqwQMpuda4zJ3gnrU88NWu1guBOBPbpyYOXn07//uvRe/6d1ff+yh0x6KuVAy258srNs5CUtLVeFcvmbqWVt3bp1ySxa4LcAuAMBKbmmxI399eOb7v370yfm5sdGhV1599bW7to6y2RFnon6y0+u0fCkaHR8lFYvcJy5YjkGBh6EFgXT5si8DnpuQja7v5csVlNhqtUqFkh8EaHHu1Hrdjs1DGXrFkl2v14vuCIkCDR0p9B5zu2GIl3ZhqGfXN+yYXGPLf9/9su1H94neCQeqm6zJCkwfCeeue8e72bah5VP+t/Y8/pkHb394ZXZrbmrri170sX/7xNte9PLb7vvpfY8/7ONovc1t2wISohdy20lxaekUp2ZmiSgVK8U6T5M6G05DA/d8/6XCV00DmQoRxLhMHAIQkoQoZSOXg+lYHCsiBUQ1+4AxhhbJlONI4tuMWeIfaZ6EMIOh1Wepig+lBxUdQysJpooxjwgESBtQ6vwBWoI3OHgAU1BcJbc2i9idRJRSUJqpQaA/hyg4xmpppZXEpKZt/yswzhNC6YRFuo6hOdgEgDGOg1S9IhqQm1QLnTDoyjI4Mc7bxCKHagAJpPCGGGI6V2cycWa6VJm0w2xLA0z6kSxKqcsiS0KU6TDqoZBE5DAOM
Rdj9mAguwQGACd3xgA88BGBLFOcMto0IPoNwBDbICFGF82iZlSmCdbG7KGhICIpJbc5GTH+elBKAkZEpYJWnQ8pMnmboYRhGPZ6vcrO0n9+8zOf+uSH8tundmzffd7OSy/ZdckZm3ZeCIVT9vzu/3gZWY3aHLztwlc+6wUvXR/gmL3t83u+/bkv/uMLfv/6W07v2+ZP/edLP+SGo2/9rzdMjm9aO7R9rLxly/rdo5UqSDh+fMEtlLEvey2srhVQiS+i1J5qUOklg/SfjFnmT/rDQABGAsESADbYcwqRbMYZKvyVxJBZSIjCJzMJkOwP2EsDsE0UWl1PgINV5kvH8bpCCizaEmyrW+dtO3CHe+UGgRiBTrMxbFU66FPouUJYOWdZyKAZjmFpaG3TKlaWiN97ePa7Dzx6x9GTYb4Ek+tKNbtrFct8ZOXA/fzxo+L1r3eXWt4Td9rNnNx7vDKxcv2Oc697+bPnDjZfUNvTPfPs+uLIrUf2/fzAzKGFoPWdb+X23dtbsw7//L1EBBjyXFGQqD16bOW+e+APboBwGPI+iBBEfYxDaf+JrRs2bLn0vNmZucvH1owN1SaGqxWHD7lAzea6Unm4BB0L/B7M2eKB6RNOx3nX77z9JNpb3/Sqg6dm7cCnmYVgZi4/Ut580TnbzzhzsjD05PTS/OyJJ+785d+948+u2r5lQ4k3l7szc9OWqDaEJ2w757p5Lm2SxK2AkJHVy3no+Ha9V2LFDgFjMi9DgSVfCsmAGHKSROTk7Ga3YwUNu1j0wYbAcohzlG3pYd7JzxRza1vd5vCSHZaD5WG/tr93fPPw5jO3s6e+/q3q40/lA7/je4UWjb3+NZ3LL9zXhpER2NEJFluLG87YHTRaRcnaPORiSODytNd4bN+CL0pIxEg43JEikRZMg5QuTpAc1cgLWqYVOcrzIArXMYmeulZzwpJxIL6paiZUFYWlqqYBQJKhCcD9p8mJTqgISUaBvpwjcCkCzUCDwTEkco5xrhVt0aALpq9un+YyOr9Kwacdr1BKKSSFDFwNwCr3ByEAJnWyB6JAH1gApgJKDUTTqmmZIkdkC5QEKsklIcQBNVY6x77+kGQcNoNviRglNQVSeHpoJbXA+urP+awubUPNGOFWmwVijBkeLlpCl0lZ1gySGfoHY0Uz4TpG0YVsOEq8/KuoFGKvQ0RMczHRC4ShFwIATknLupOUcgpILPYAUoTJvCWuWJgK30pxTBg5ecXmRhmFnIcpEV/PhsqY04/9/fqAaIGNMn+MMa1IcDCJtwaDkXSKKxe+8wXdsy2ohfDDB17ykte+6Y1/vnRq+dJ1V/3el96xlw6T7AUH9z93y7prd5/Zde09xx8o1yrulvVu/cTBzvzV8uq/ufQTe05+/ebiTR+564G3b77huWO7/uTn3/WWR7/0og9Obdz22BFEalaqrgjREyRDKtlu0JVWjS2vSDfXs6EnZTG0kYJeWVR62EVExiKXReVIBgABBTa3RBDmHMf3Qi8MiuVirxeYtav0SqBMOD+IxRFOgIiBINCmcRBCCEJgjOU5E0IaVIMJIYQQzIoagqgnEdpGeeNQQlyIGwGAmKAwYbYGSed6yeK683HwN0ZbSH3d6nEIOiWL1oxUy6NwtAE/efD+mx966KGlfO/8zXDfbTBdvwS699/81epnv16nzXkqdm0HTz3qfvlHvScPWts3TK3dedW1u15w8VlbL5y6ONfs8Xyuax3I967ecsn5L3/1o1sunZ49CpYF3LVHRoJ9s9XxSXnBePPE/GhxYnFdjR5+dFPVnf/Qf7bPmnRfeJX92MlWU0JteHg5GGNB7YL1TtHZTJXRkfyaNbmLt05MVQtDPFfxwUJYZP4KOouzy4w7D9y75zs/uHWuVt1z6jj4BGeuf/OVV15eye+YmrSqtZCgxjFPPUu6QQ47Erorvbxwu20/5E692x0WvTnR7TnoWG5eWhZQID27YJHPgl5QdApSCECV30oiBxTU9XpurhCGoeM4QegVcvnA6wmK8idHNlrGVOVQL/TWT00MDVtCgrLYzM34hw8fd+0aUtCuL1947q5ee+6fP/S3jDonjh85MDvziX///BlnXrrv8HwuV7Fc7nuNfMHxusw8gCxjmEtb6wAgECnVrgYqicIQFo0MxsiRxZmTSYIkFThr8TyAJJIy9vokRCDGINBCGBjmG0GCMR6p1mRIRMgkY4xJHpJU6cRVHjQiAiGBZzNnRXs77RStSZZO8UvpLAXKn1ELdYl7ECRJsiJiZcipZGj+ZFTm1QKdnNJwpF1NUIS00yXFQRYcUMSyHHIGsR91akSa3tOq0Tepy5S4BiUqQcQsAK/a7/jSkpnOG/Kb7k9BtTadykEKYCJCg7PoB5VU7yON42Bv2NX6pvs/EIBFH4CxuMEsEGoGNDWzUibeVimbhQ6vMltTAwDDBzvZcH3J3GMAHiypr6Z6RVPZEjUMRGQbKejM1nKjXvXEzJnPP/+C1z/70bmDf/kHb3zo5L6NOzZ3Hz71qR98Gs6qwfDQlWvWXnP2ugNHHqrkcuVc1SsKu4fltfate+789vW/bLWnX3jP7z5j87YODcm23Zo9OrpxstkY+87f3f+pP/vGa1580cljlgxzPR6GVrtYqnbqXt6xGvXuxkKJMZgL25ZbDDqCl3p1sEbJDcOQ4qS4YSAFEOc8lAKkUMkxpABEVDG7lhlmliwuUWzIgBiAVQGyMGH4iIiUjwJjjJNIz3FsMqBQLW16zpnyouwHYGCUBPUTSeNzhvWMtofs8+pkCABMuohCSD/wO4jgVsv5kRwrwokF78477/7yP31q6aJnTZTlyX/52+pr39T7oz/rPXmztHYzPlw9Y+fyUGf4Oz9fuvspOHUKvC761bGz7Qu37Dr/kquff+HIOVtqFQlzLPjl4c708dbJVvObt94ycfCpX//oS/DCa/hz3yam2xbP84KcPHfk6N673viiF1+1YUejIdesYSdOT99+x20/efM/lF/y/OazLoL9dRATLjW9m35mkTXxrKtxtHLZ1MQlGyfO2zK1bm2pKsKhnOUBeAC//OUDI5s3PLTv8E1Hju45vhweXrBOLWzftUUMVUO3yGr1DWHhvNq6kaEqjeQKa6t5lGtKlQL3qu4YSTh8uuv53ZrrAGd1r1cVOXBlG1qsyNrdbjlXLUBxea6ZKwRDI8NLK3XbcrvdruNaMhQyDLid09KbRgIppW3bnU7L8zuAISICWbZVcJ2Ck3O77XatUjp17PDEWO2sMycZAWMAHI5PN48eX5I8bzt5RGQsbNZX8rmySYIo9vcUfaY6TKtqTUlRxtUhTeCMTrEu6xmRC02gYorFJBFFAAzAUj49ydsFysgqHfEixDhwQCkiQFL8aOwaPCA/hLmfM39inMIzTfEioQsNBMkYQMmQuKIzYqoVtVQWxx9DGoD1yRpwGRqI6JtM1u6Usw6ZucCSDtDqXtbmew0AzpQx1v2MADjzbbYh4+oHYH1/BvMiDis9uYl3AA4GEkyHM+mmdID5ar3th6uB/Q9jW0tmySWk3svixvXW0a9IAXCW
r0lqTGR+wCgMLAvAqllzt6myDdzIY5cixwNHtboK15jwNBsxqCEiCrvhjjWu31340Jte/+CBh97x4b87XQw2OKXHGyd2X7Lpe3tvOTI3e876SceqW6WA+eRDq1FpTXZGTwNb7DxxBl0QitNifUv6i3VZQL+GAaLlle3h5oHyA7cu/P7z//6G51zfnZW2tNpt6ninS6UKC8s44XXqYBO6aMsQ26Gfr/JuJ7QJ1WIRkcrlCRDl4QuCwLHtIAgYY9xm7Xab2zYzowaNSYm0K4gCKGpNTS+q1HRhvDxxYfMwUEkfJYKUUhKqkC2VvQT6t26cei3K1EMs8pNHaaazlhjpuCxIznakolSpc9Iu38nVk+hawmZCBBQGKCSFQggxOj4+MQQuh54LW4obZKm+5a8/ctI+H0awJaVVleFCl33j7sl3P396aiPtn5s6MXt89lBh37J3uCma+2C0WpgpP++G7a958dXXX7uLAQShZFLMC/feh/YdvveX3uXPvf/JA3N+uBcXYc+J3NG53mMnR599BR8dfd72iYsuGLpy6+SmkamhHngMTjswewAaY3Dzj39x+223nvHyFzx2fOH0ocZCF8Sh+8orwVnPuIxNDj1z287z1wxdc8E6u9FhlYIjoBtCHemx/YdJOF7efWzhhEPD3eOzD917/8SuLTPdRtDzOp1uvdfzlvJjE3j9xRtffMG5kyOT+48vdntQtQqtqo+CoeBMWNIPA7+Vy1m1obJbcL757e8+9/9H2XcGWpKUhX5fVXU44Z6b5s6dnHdnc2bZBZaclSCCGPCBIsIzgQnjAzGLAX1PEEEQEUVFFBVZ8i4L7LILm/PuzO7kcPPJHarqez+qu7q6zznD2j9mzj2nu7rqq6++HF780vV2hzOPtBaC+cKLpbKtXrOtJ9Bak0atNRfo+z4QS9PUFJVMWcIRZaprYTjo9WUcaa0DXyBJQhE0Wn5Q7/YGUspmo0ZKm7xSKtNSLGcrVI6eSxyKz/ldzv3mp9I4psUJACil8qBGl3pYtlxlwzrveWIG4cbIiqhkkfIHueLEKKtR766oQh7dC23oj9v4z2r/zlyskkasSvTyNM6qipgJT9YSOU7FHMPFRhiwGViVo5QLSy2Dyv0GGpM8ZtU3Wh6MYyzNiIhPtEuM0E5iUrpRMYmyBuxWGIEcOgZCJQbsvLyifuUP5iBwtgrA5m+Vsj4gzz+uMN2x2GDnOf57VuSGWys3IpooOHTcljkyMbveih4DlaAto/oTq6BO9mcWWQd5TRZmdFzmMPuK/Dj2yrEQ3a/M/8WeOmVjoVxZzI5AjLVVvL0Ou1r8F176ksu/94ptT9/+eHQiqOlV9cT8nA5Cdnj1WDRY2VSbpnB6KZT9aHV3unXJDzzVPk0P3vck9mfwyvlpnGbzPX9qsKVfq/NobUqlvfnW6fvXhod3/OJrP7DF2zfT5I0pWNmI40R328O2FlOzrRZC1IsgFCR1mPhRaNaU1d7OiaYSQmgNjLE4jj3P4xx7g14QBKRLloBRjC9JIQCaca2lieHnDDggICGilooxDogKSCmlCYkhY2y0z2g2GjFTgUOjBmDM7CExm8jkRkVqBJ/QxOdrIEvpXMGo4moRgJJBalLFCEOGHIhItVNJgxTT4dyOmcUdzUGnc8/JwUfuvu+eJ4+stObTOoP3feWKZ+098aEPrLz3V2c2XRuHU/6mhYE+FTx435aPfOZot51e/2xIz8Cx0Bf80qt2/MirrnvxBXtmZ6nZClHSkeU0DvzuWufEPDbObJxeW7v/P7/51x/52J7f/K0jDz4MJwde5OHm6akb9v/s3vOv2D6z95Lm+TwA0EuR8nv19RiWmpT6PbYq/+69f7nlZTd86pH7+o+texQsra1fSI1LnzF70fnXXLT74N6ZYN+ORgLx8dODEGfV6rC3GLIZnEmUN4z7GrBW00M5iE+daovP33f0G3fff/X2rW958Qs2b2ncd/Y46/o+oEeSa7mwsBDUa2fbw86g/5EP/8VFF1306u977Xq7I4RQqfY4i6JI+EU6XOlgKu17IRFEUUJEvu8zBlKmmidhWO90OoFfk1KHYU0RxXHsEQS1MI7jRKZBEIAGLZXneaSkJQWQK6xQNtWWMNNS/PyGDFHHaVqImGlU2jYXwVyTlhXKmbmfRoJGs/uzehYZy+GIJh44lbmyiwVD4oB5pcURuFExZomks8L4bKkZlHUAIrJuShNRAdZ0n99j02fsRY6vcJQ8jlK2UWjYFwFASlm0M1HR7owD2hiRp3jlgpFLhzNBvDKZjH082S1x5uIDjq81alW7isUzc9SXl01EwMs+YLtJzutoxHxhp2g3zLIT85/TfWJM+Ps5QDbpzsk+5qIOM+SOBxtZkE+7yF2jESyx58PCZFQYVERESESInMxPIwcpn+f4GqdZoagyA2aUIW5FuEZEcPYLHYuN9JMkUt4A9Vx40Tx+4h/fc3rly4MDw9nhst/sTUHHVyDmGzUfvQi82lxMw/bs9EZPnZC9qc0HvvDlW//ylR8/mq694bO/+oKLZ7e0WIjzrNcYJL3pTbXeyurOxuK9tx1624/+Vsu78o6vr3/hsw9cecllNzz7ysu8+ZltcJY69y6dqNfmZ9R02kt5E6MoNNa5QHhElCYJAPi+r9KIM6EUSdJCCACttfI8T06o6W0v13+DiIpy9RpJIEMGSKC1ROQGC4whTlOWAV8JvgDIjhYSs+ovZTXuGFKpd5LbMYZpRESdEzgb1emW3oQSHipiHIgxQqYJmU5QKVQ1wqGopULwbgRiqLna1qjPztU7ce/+jvyXD/3Dg6v9h8RR/Iu/3/7mX0rf8CPLD39LwwL0ksblu+dm9aYeWz3N+ycf7PWW4mM9uPcJiFTYql/4jAued8PVz7v06Xv3QLMNQQeOa+hRMjPnz6aR6q2vbqo9dqZ99FTv/lNLX3rg9rU/+kcR+v7VBwZ64ZKLDl53zUWv/J6nnbelNh+nGPtH1wYD1Z1amPOFDylCiG0F7dW1D/7F++Yuuvbz9zy2MuQLHM+rpfsXwhe+8MWL2/bVFzrNJKQ16A1xtTNUnBIhE0/Ph82pkDbXwlbN/+TNd33ia1+57Py9b3v5y7dgvT2E6a1wej2595GHTpw65fMGAF576Xnnnbf10cfOEmEY1ofDIQOs1+txmowlBUgSkTP0EZnWIGUCQMgoCIJ+v8sE933f2Ki6g36t1hDEpJaMgSmdyABJgZLki0zgdh2Z6LquytckBmz8kSWUy4ohZxQDwPYeNu9QMCLrE6iSqdYsGTUiSgWQ86SsQYvpMkPMxB/YdkYZQ4KiOMZYBcy+JactpdKzaNuuC+Y6Yq2CrjhzXa3mIJDTJM1dF0BF0S/dU7kyBp+b7l3IA4ByCqQYCJj1yqLCHY2awUdfpx3f5NgJgLsFAHikRwXzGJGPxizDWu0rptERmSifNC/5gO0NI9Ny3zs6sm1iNWZKIwwYCn5dvVzHQ6V+9SilRkQ3vaeyxvIcClbqzoSP3FxhwCaqWUEhJAJyRASZjjJgxCJSoDJbQZywVGaREyBBioRlqPJ
vZwPPFx0OsDgFi87Q/3/OjuP/xi5iwp1sfXfdzGrJJEHC64Xu9q4X09w0ZcWv+md6iHUdObTGWPXS011JVzDn3die6OlIBUSVQkRXS4rcp+21CAqbaljsTSlLFQuXPeZfVnXBCdu1irqBEAMwyqabgcwKVQ31A3Y8asdMp98/X3gAPG2DYtVVI5R5FIyHEsjIEIEAh6JBlzzoEDJhkspCUlAyxPZEclOT1zWapyRvbFB8o8YhCI7Q0Ii1eSzkPKcw/KBENmyFs7XZ/WyEMBKCtPE70GMF15XLRpPrnt1ivX73r56utPOP2sM209kAZTjAAEacM8n41ikRIn02/xhOuPmt/98QUd7evOOPGbGzekl50uqQFKeVxCHn/JMMYuRzkOue62rOaBTDwdG0SH9jvJBAOqjsbYYHt4xzuqIgQIcT5+OeMJGivX4BPP8px8AY7WD55yKUH2eBzw4bU8QF5AnSj7jklsrFjWOaw/LmbRmnLGP7JMqT+Ew/LTRDmpIPsWJ3PNlwmCb6HylNctTkJQPOOP5X6ZLAiOZ9EBgGL+4fFA0vyRw3QfX+DHBBNG7H8biqO1cJTjaBL8AxSp0CdfZcqmjiYQHvW6R4YPFSrn07UX4kcL983JEdSPh7vExiMXx23J+UKOkj2JHIU4ZXJvC8+Fo7FL5AXTseNHaWPyEBUfmUJbzjCMazw45wzGFgpCkZ/jlKI5KrJVC4LgcodPdhTA2HFo3grIXOq6rpDPcwuIeTmiqpNDzDYaG8KPPvrA1792CwBoRJo1b8FHb34+qEhielitiioZTH3pjZ9s+ee1p512cqRPqD0w8zixg37wwHN//+u9/35s6wKouOLYi15a+/HG3pY5y5o+XffGFVdf+J0/nO8vq8zlrESKlYTUfXt2/fDnP5uzsP7ee/7e0TMgKrLruvkOK6KUy2S9Hp/P58skUyOxIdc2wiHfr++688orrsg6+onHny5Kvp7ugZKSEkkVHMfhnI8O9wV85YrMs9lYabDihecezmZ6R4eSZ3zpqn1bttWvXj7bM5OqyBNRHv/K7eidF78bLNnS0uwt9QTv/W3FFTcbg93/PX31VyOVqHo4+3E/mn163z/vvHPOCQ+ettS/Z9s/hrQXp5Pjz0Eff5xFUk2wynrrlYFpM/FwH0mZDBidu7g6PcIy2X6fJ+S4ibJoaWooOzSsVzaA46JsSkilnZIqJR7z2G5cEqRwVB4dzTiWirHBABDzcjABhGCJVVIRFj2jEqruaOk3TXbiaR6/t/T9dzrB8ViGwCDjuFwQZMYt6gpAXFEFR4doJLjm9GPeemVtsDwaGxkBLmZzDkYy4xbmEgMbccACcAaMgzfoy6Yyslc677zznn/shfc/WHv6WWcsOW75c8+/ajvg2lTTtNpo8MCh7nc/fG/ZimWhUIggnElly6Ilok8Y6hq+74//WffhO1q0ORYTM64eCHtIWqA4HR8K+EpSHp/W2Qbzj0Mte/WaOlZRAfs3q9kMOMhw7AigUcx9oTJ3NGaIouRY+TeUEoFhDI4tAmAALGDBZTlZY3MWBtLZVENNybZNMYGgZIK7tgzAVEUwTQVLSdfh+RhVUZAdaig+bDuMUORRvalUBgDKKyLz5s15//3PZEl2HIsxABAEInBwKKUAAgASVGfeMbBvG/h83kRcr23wmzQ5c7Znw8f66edUffJ2n5mRfQG7vCzQ32nqprngeFiwTOwZ8K19bPQnd//q43fuFTXhuBWh9l3ZjVuG6+aUNjcPYwbnnzuTkYO9LaEN+6zqmfqZp1712Tudu3asP/Ucn2XSPbt0UYDhXqiq8TTN8W9ZN5rNYBCtadMizD9aUS51HrKTA5qRzQEi4IqCZnJLLa01Tl5T+vS/k5zZ3hIoKRe9AZYn3KMF9OWccp53CeETxLL8T4yxvBf0mPmq4CFZkKUmCXOTxdCC4HikHFMsBx8x6x1tQpyQHAOmcnadcmIt/guF9UeRIHhYOiyeBxGf3HhhDQCT4bNYyix0j09wnGbF4zMuRh+V+ProxZ3yRAI87y5e2MGcIUaLJdpiobZYSpuwTZDh8kcYZhOFaWAon12jCEMOD9lULvTjixw2QThGCArbhOMTRnvyoqpw0bx5YuxdzSuuGSvI5UcbzYKbbv5+8i0jDnl5/Yhgdw6IYUAMF7/zmOeHIv9xcaATdgQRA2KUOYy7+XFj3HVcy3GcsSwu4+7TnHPHcTADwhFzqSAImqqKouhYtsfjIaOqbCGALBcdByCXwhW+kgADUlK5fvO6+165//pz1lQGoyO7Dx6EXKrZPdg1+vwe63tPiPc+saPz80/60q3VVy28u3fvnOsqHmt/cKOwiVbqS2uqblt9xenHXPjHJ96du/DxmYtvee7xQ5XlWOS9qe6hjz5649XXdlg2lUSiiIIgYEWRBATJxGjTtBpZcB5+6G8eyXn9lcc2f/7Wqy/95zd3fnPb5jeQa/75nt/ZmfTCWQ2Jwd4tn36kjw5WBLXO5tbnnrj/vr/8zskO/c9vv7Jj2wtlUdrR/u5LD999/sUX5/Zn/nz3zz9/5THNVs676vrKEQOSsRlcGjhh1r5U9q4v/eStX//tmIjqzgmCoemmhq+/bv+vH/5WZdAf6oUAzFT1NZcGDh1KrTiNhssG9u0YuP3H1/W1SWUl5T/88Zcu/dKJy5bOjY32y7IoeRMl5dB2aNiyBAZAmSYreOmxfo1Eh/pM0WPX1gVFUTQM3XGBEFdRJUAgCAzArag1p80SZswMijQ00Nu7dIW0aEEoGdN727NzZlZatg4kTSnnAI4DzBUiERkoBDSsip7TTl/2znsfeYLi8GA2mwNLV0VREogGALX1pT//xQ0cACPCGCBAuXQOY2Tl7BeeffHKyy/76R23v/vuWxvWbdB1HTOwc7lf3/nTn93xw6WLG//94J8Uid5++7erKssaG+oOHWzdvmHb4oWVn3y8kXiHVCVaWs3srBwI8JIqipha05QNRmHRsSVzjs2pvqye0tKjEdtW43EDAFzbW1Yz2jQzglAmHqPAMHAHkEMEB4BRFzu2CMhBxJJVw+UZIjJLhx0bUvF+X3eHm8tCbIg7NgZAgJhuGQxSrsMJkWRJQyA4rqPIiplhzALH5rbtAgDCMDIy2traKkl5z0QBQJBE2aWUc0QIwQiLgkxNX+uOUG1NND6oV1ZzbktDbcKmD0SPDzaszZgZpWkm/Pw3JwJLXniV9PXvVFeH/X273N6OeHvPS3uad7TsRJIGIi5bvIB8+eoy4lj1dUqgFB7+T+ejj0KMJ+ct02VAD9z7AkW7FizT1n2Y+eRt7OTQcLciCgpgtGnDoGE6SLaAQWerUF4WGOzjCMHKc3LzVgC4SFXcWfVlay40jj0RkiPOrIV2aZWEnYqGOU5vN0UWs48UF8Z8OyeT541NNwVDFz4Cw8Z9RA/XnEBxNalMYR+ddPxwBwqCAhTN5oXjU55ylCOsGMuLGp+6o3nr7xE0IHk/au5wzvM+q5DXGHBcYCMZr8mK2hnbKbYB83FtwRQdPhrhw1FsioDy3uwT0z4WLa2OPJEL
cJQypWRZrFQoNr66iBcLkYUXgI5njP+/NJ4/fsSrMu4bRdjhCsX1izUfULQ+40e6CxTcjPPpIxlwhoBzBoUoskmm7rEG8/+Ov8CYHz4yXu2w2Rs4zmuzC4WN55+fksCEcy4IQp5aAY07NORzqLmWTQjhnLuuyxCIoogQopQqomKaJkJIkeREIhGNRi3Lcl0XawKljmVZJdHy3/zyd489/C/bHsLIFeQZpy1aumHrW1+5+69XnXTVtCpxfdf+lU1NTasvXMGzSmX63y/t/M7V180+ruSOW/8xY965tR4xhtJsTmTa0vnP3/Hg16Z/aU7NKY9+uGHZ967+1xMf6h0vEq3+7BVzfvy1Y777s780NZpPvPT8xs3bZM0TCIf8fr+eydZUlD7z1PMvv/bIo4/857jlC2fNrDuwf2t5mVeQHFnBx626MBqp3rervbZ2mtejvv3O616PfP7558Zj2d7BXYwLzzz3+qJl4aVLVjz+8FsLlkRXzT5h1oqLUjH+xJO/mdcUZdNWHnPsaS8ff9a/vnHRwSc+euf82QvPuv6vP/t3T9eWj1atiAy1AI7va4byt597/Pa7vzPLZPHWvfvlN9XMJ7Woq5MR5r38hrr2QyMnn3L8Iw+/2LYPlp++eGhoqKNjlDIrNgBLjiODPTQ1DPUNMBxT5i4p6+rs8ogAVOrrxxYxjQRwDhiDKAF1wXWBkLGZp7JeyGVg2mzafYhrQm2opNvvk00Tdm22Kiv9WSudTAJBqu4YRFCAyozrkuBQV1hz9szm5tRwvLd2utS8zQ5EQBYjQ4MJRBhnoCoyYMsyMGMMAEuSZNtmfX396Ggsk80BAGCYt2jea2+8MzKYmD19jibiU09afaD9k29865ZDXV0HW7s2bd4XG0iZWbj/Hw88+/Sji5fOiJbwt15f5w1DOmX0druVdS7NCbYpSB4TCxqW9MF+JntgxgKyc5sAljVtBrTvB9WLfGWc6WJqBDsOIwQocxACSQLLAlXRDMOWRc1ysoBcUQTHBUki5RVl3V39ZVUgCmSghzJXEUVsOzYAUhQEmJi6hYDIkmDZBgdQFI9pOQhczpGqyoZpiCI4DgAHSVJchzFuYzK2eMZIYtzOJ/ABFxAIAO6yFTXdPT2JEdUTNpIxEdygIsd//beK5MhAOq7MXZ4b6JSSfbYagFefx6pX/NGvLj9v5S/uvecv3f0v7Fo/OP94sFnE4YF1aznFIxbHnpB75jm50e6SkSFmmGkXO0SEwQ6xtIL1HIrGE0OAYfYibzyeNfUS004w5J5xZtiyhF3bhnMpUtMAElH2bxE4ShFWg4QsxtilORdMRVQkr5keCWJ0RBwqL9o/QvQsnt3GBAt++ODkWfUwN0VROWICBTKlhFpcuXinOCtOfqrKT8EFDVi+WnHY5RGXK4TfjN9dQTAqlpaO1qXDAUhF9YsrTDHLHlEOI3Gx3RR94XX/n8q4pFgsoxOEyGQ2knHb7RS25yNF8AkHASbB1cQ+FA0DOeLOprjHKX8qfvr5Z4TyozTuvF7Yjrz3KXp1tOseoUTBnGOU3/KS7uGdImhHU608EJr6unmFUOH9L1YzFDYBYWo7mINEBIkI+YOqJIf8/tJwuCQUCvv9PlUNKGrI5ysJBUvDYTObEQkO+n3d3Z1PPPHf3bt3apqKEHcZ564YkMNuKhWQ0jOqVcl2Z1XPyfa3bDr4wZMvPoqHN/a0ffDw4082RQIvPP7BofdfFBc2PfrMzktOXfD6R8/fcefD55UEwyMvyLH2FQ6Rn92afWzbKZFjqqWoZuWqkHHgn//51rmzgM+58NbvbTMCqy56Nhld9knzfD2p15RVbF2/cbin74O33qktL3328Wev/fJVPT0Hb7/96+l0rKtra1OTZ9ZsbcnSUCRq7N7/8q5dLw7FPt+//7Xy8lQ4OOTSvS+8+NvY4EeiuGvP3tebZiil1eKhjkNpE+RAZTvq+vDtv7/w8f0Ux4LKCE5uHvn8lcg1K2/LNj+xxt+fhvveefeMi5YZkcC3Nm7kLKFHS30YHnvwhesf+GeidR+WI+8NKNskLpeUqhEyY3l23drkR6+kvnvjiwES/dq3oh++dWDjJ/3UoaMDCKjSvE3FTDEMoaMVfCHz43cTnQfBK0cs3U2niJkFxFVAIBCPRML+oCSKhFMfY8CZlBgGWQyWBGYnhsC2+uIDsk+pJswNKP6BvmzE74t4A7ZDFEml1KQ4pUgKc2H1STXICfd0DOs5iCVYRbU0f1FDdYMAiM2cG50+ba5pCLYuMcYEQQIA27YRYAETRZYJAlDguz/57oatW/yB0JJF85574qk1q08llv3la07etef9ffvX79q975avX9fd3Wnp5puvvdvbOer36/WN/vJ6rXlffKBHUj08HZct5oRL1NI6XlrJo2V45gK8cJl3dMAHEpuzQqFuKWBi5KSqaj+lHn8Yqx5OqYORwJlomSIAMcwcYMehSYQocOzYIkGabaLurn7AMNzjTQxL1BUQdjnRAbsEi6bpmoYBwDAByzYwBlnCpmliEDAWRFE0DAshOOOM07xeEQBs22WcISwhpGAsAyIM7PwsILAgApCIKyJx+8YehERRQiLxzlmIsBATkdfj721rpm27ff/6lbR/i6B5wchUn3sRC9daf/7xk/c/daWRet2vZrw1wbUfCoPJ5Ix5UiLRQXO2OWKeeY7Zva+s7WCMCKMj/Y6ZUqgVKqngiWGazcXrposnnQ+yhw71y95IYvEJ7srThe5D8eHu1Eg31hOo/SAdHkQUpTgIDumxWcKkoxwsQlXHJOVhCKop5FCbjtOxMuYSQBghxpgNTCQC55xzRghBgF3quowKmEyYzv6XSfkov34BaBWQ/gjf5qJwjsJkDQBApk5XN0GCL1yuGMiP6CQ9nLsQirAcWIAIFJDBwSVEBI4YA8aYJCmWa7gy8ZnMRUTnEELCkKArLEAgS3iOEB/Has7MKKpoO8zrINdlWCZExmk7wxnSJB81AKs4z4wvEsF2TEkSHMchIkH8MOfwEZhxFC/io40nLiSmZizvw5W3UrtH4ZakSMAYA3Vd10UISSJBCFFKue0RpdyohKMJsIN2IKX2SaIiOZGU4/rlHj3pDwaw4chccDHYmOc1JXhcEY3QmAk2zwmcF/jGWuZcFEUKnLvUdV1N02zbJoQ4jiMIAqacAqecCYJg2BZiXNM0UzcwEjkChA5LqAAIcWACy0emMsYIIFEUMcamaRIQiYABwHFdBCAQEXFOHYYk5LhuPn03Q8AYk2XZNE1ChPFmx5TP+SJgVIBnfoTgjqBo3Vmo72IJA2WujREQQmzL0bxBXTclkqHIa2NBN0dLAj7J0ZBD333v6bTjnnf2OZ++v3PFMfMrpnlfev6t66//wVvvP3PMyhOY425dv/lP9/x2//6W+/9930mrT9/fvSXibbRMs6oiZBrZpsbZo/E4Zfy2b94y1N7//BufgpQ44cQlx8075fyzT2/u7Lj5tq/XB+u+cs259z309IIS/fyZx/QKerpv5PH9XdiA/7nslI92DQ8MxU5YuHxoKy2tbMpRKWT6lp1
70sdb1r0+2L/08qvf/veLvlOPmT+3/s3f/3PebMHNtc+Ym4sNx7u6UgyPhKOCt9xNDnjPPC+6dV3nzJkVPr+VSMQ9KgES7OvORAJaJKxR1yirlFNJ/cC+ZGWVWFZWd6ivFSGtozM31BO47fZj1r7RgVly1vxAOgGhqOjYrLtrIDHsXXPewv889J5JOeaerkN69Sw1VOWdEZ6Lk9vq1hJvWbLyjmWJFnXOM5+l49PeWNCe9IM/WuXIfe3NQcOhJVXG5k/c2nrQU2TpahoJRx97YIRgtabem8zEvD6wsvLwsEMQOfYU1t2i+UJmf4+dSoIkCCAin88XG4ojBEQAAlBS6s/lclZS1JH53Z9ds+rYFd+54ZvnnXH8229/ms6FKqcnRgY8sbiuyJrtKAyNUipgIJRYhHs8qn7xBcc/89Q6CqKoSg2LcvpwuKcrLomyYVorjpv/2GOPzpm9lDMBIUTzgSp5swvGnGJAGDhauWrBCcvPjHgrB0Y3vfTSS1lDv/03q598cLfJhgf64bSzTv3OD284cdnVd9/92x99/4dhv+f0Nce98tJnfp+q+Xl3j4kwbpyLme1tP5SMRgVfwA34vfF0VlFguF8qqbC5q3YdIhX1LkNmcjjCwQYpwy0sSZJpuIbhAh9TRKGiTC3oSNUUxgJjriAIhKBIJDI0FKOUAmDAjGCgrgAgYiwCtxixkKsJao5IAEw1cwbiPs5zGIlYsHxC1FNi9fVngGMMTMYSBWq7FLiMQeAkJ4KHiO70uVQQeVWdrKe9RClf++aer98RGWwxyktMI8NkH5LAs3+bu2SVFY5E//23kTnLIDEAkRCkHPCGYNt6bWjYUH2gZ1ldXbi00gasU8ooFUTN3b8LTBtqZwGANmM6SevpgCe69tWRyip1uM9wKaw8lWz9iNoWMBfywUYIC6ofh0qZKrmOHurqTCCOFMVjmjnO/QTbiHJ3TNlFCAAg4K7rIhhPVweQn8vysyRG2JmUhOCLAeD/HwAXgLYgEh2Nr6o4YvJo4FR88AjwLuokGg/iLAZgzrkojbvJUAQgYCQBMMZtC5Ie5LFRQHEcG+WwIhM9ZRIfk5DAKQZkUS4gLDAqcgQuiWlUcbjmYsrAkDiIRHCYaqEktwM+v0tdXdd9Pp/r2jCWS3FqAP7i+5qiFL6H8THMi2jOUczMCEhhEPD4OAAwpNg5Rw3YsqUhDCZOjjpRvzTCEhHVyeiV/gjP6pl4MlxSqru2gzgQjDkAOxzGMxaZwznGYznY8zv5lEr5zIaU0rxW1rWdPBYKCDuMMsYQwaIoMsZc1xWJQN289xPPM5wQTvIAbFBDVVXXdQVBQIy7rmuapqqqApIs22ScS5LEObdNW8BYVTTdzkmSxDjXdV2WZY6Rbdv5RQAUGVxQwZmrsCg88tEUM5lDkYzucM450xSVM2ZZlqyo6XRa1XzM0RHxu8jx+6Tu1s6WvXuf+O+/9+3f29nWrcnQ1DBTVsymacsDJUZvj/3qW5+89PLf1px5nUGNULTu5NPn3XPX36O+aE1DpH/YwhAYGRrduWtDZZUmivTaa27q7kohyEqqZhg2YCfo81x16S1vf/D6B+ufffyNDf/5zc/KIfnVE5e/s76Hm3o8h4JVFWmb2YnWaiyPKMbsBAw7KLBgxrHl9QO92ocHO09acOHBfnTZsvoh5vz97WHv6dOTQ/0Htj8AvA1AX7RgumWNBkJmKOD1lRqaJqbiibDWGIj2iETev51Hys2MztIp5hUahoY6Vq5WsjnTqzbu398mRcTyKiWdzgx1C3ZGdS27qjxyaH//SecFfVppV2dvpDRaUhrYs2PYteWWlu6oEr71N98//pQrew+2zaxc8Ps/3PPSyw+uXCVrNpDgiBLwfPYqvfIy5AsGfvvX5MFBa83x4ZLyTGZYbm02Ziz07N4eau1LYiV98w21Tz0UK68T29vS9U2+aKlvz7ZE9TRm54SerlztNJLLUlEE1wwmEkksgCj7crkcMC5JkmVbfr86fVp9e3srcklJg7J8xenP/ffN1cfW11QIL764e8a8matOdNrbrM7uzN7mNGXAEWARmC0I2EtZ8oYbz373rbXxGGNcspgbjEIu7Xh9SiJhCgIAwLHLl/3gh9/7yle+MhrTJUngnDsOlVXFtlyECKMUsJtPn3711cdv3rqu4wDUNAanz4ddG5KRUImOYloQ9n6unnLamv7hT0Na08qV5X+99x0AiJRaqkfo7qR1Tb7ShnTLVp9hZoCDNwBLVqjd7VZbMylvcKqjVYlc3+iAkogzSbMNA7jtBbA0L8dYsCzbtlhBmEFT2QHzBWMBgDHGEAJRFCjllFKMBcYIAACyMJYZt4BBSQXkYn7dBQBHUAxqy4JgO3YAUBIBzJlTZzm2zd0Zs6ref303Ag/gLDDMCUOMcK4SMYsAOMDSFaGBvkxvO7nxO4tfenYvN2nTNOu8C+o3b2zv61FqZ7rxmCsiQiTV0L2BykG/F8oj0c8/TNsMzVuhfPQWP//qsmf+faiiMuAPZX0ldLS7fPuG0cparaSaNzen5i4uNdlw30HIxEXuwLQ5YnWd2rI/MdwnMheAOYIghKK2nS0xDWrjeCTspSS7YKFnsM/WNK1lfyqbAgDQPAEspxBlFiDEGMvbJl3XRYznI/0ZYwTjPNkNYwwYp5QKHmXC1P/F5f9SZ3KZrJAUjpL9Jt/tyZj9f+8PGg84Kb5c4eqALQDASAIuUxdhjIngUGY62K+xeMYRfB7RMZWsI0Q0sA1HFjJZDCDIkLMxUZFHsXKmn1NbUbKm5fF4JAosYwoETNG1JeYVtEw2oyjSGLSIom3bkiQV26SL+zyF7P6FpXAzUGCu/kIJWIQx99S8kt9lFCEkCnLc6Q6TeupaDjc49rmygbNJPy63JYfaDlBgkmAJQDnTBBWZTh54C8+Mcz7OZYELuFswnyOEMBIYpT6flkplqO2UloYBIBZLaJqq67rsUR3quq6rKIrrupwySVQZp5SxwzwYnHMGoibmcjlZEB3HkSTJdV1Fkm3bzi88AMAFl3CCEAIGQCFnZYPBYD4lH8u7O2FsOrYsiMVjXqw6L34Kh034RwmXN1w94A3Yts1cLgiCYZqK5jHMnObxZ9I8neuf3Vjftr/rG7denjOGNm/pOX55pWW6q086YeeeV9Z9SH/w0zMOHux7/cV9Z15yYs8h+pM7vibJ6Kd33rp/T9oXCJ5y8qVPPX8/R2zDhk1nnHGaZdm//uWPX37+hY5DrTrA8pWlzduShs20oLRwwSkBv7B08YIf3fGttk3dxx235IZVx2060BuNxHNDtHp+zUufHTqtNKJ5c8wxfTnF7w122YP10+vf29IpB8hIumJmReNILF7fdIKo+FJJnqmv3rPzo4ZaSmlu+/YPQAQACSyvV07PXxCdtdCZ2VS+bUNPqNRMJW0nFx2MZVadUjo6aLqO4fWjgT7b40X+Emt0hHMnnLXiA91ipNKpbZC6DkpNs8PIznKckETUdYgFIjDSG+achsuMllZ72sw6cEus0YDqqUyyg9Nmbls57TjKup5/pyd1CM5Yg/
fHKkdGemvV0soVQqJLHIp1ffoOzF8OHa1qV7/h8WPDZqIJc5dApBKad/pi8VzTAtS8JeItGZ4zo2zDp0OlFWDkUCbNJRnADZXV5Do7XYFInHPGxmw0okgch9bWVi5ZWf3mK7tkyz9zRrqywtNyyLAQqgpILs+2tLCcy8vqQt1dKSAMSxgb5JwLlozE4uvXHwIQOTiYAKNaeZWzaMnslkPNQX/Z9i09wMV5c+d1dh1yXMMyKQBumj4TMD906AAACBK4tjZnSWThioG9W53e5hJPJDZrgW/H1kyJX5g+z/vm89krr73cJfvKwks+ePfj/t6h2kY9EpIHuklfl2E5nBC/6Rgz5qupEVY3KyuAtmtLLpf1Kpq5aJWYTdOD2+yaOn9fh2BBvG4aySS88VQKUYEQlzNEKQcAQsS8dJ7/oo8yA2FBwO4Y3+JY+DsAJqAByVLKFQ+YOviDsHCJt3lvNhVXqYs5IEBZSYG6Brmr3VIkTyqjz1swo7pe2bF5cGho+LSzprU36x0dg5KkWpYoCq7DdOCgebFuMO7KimZhp0aQstNmJK64dB6S9h06iA4dIPsPOYkhKVoqZoxcVa3n5HM9bc0jQc0nEaennQgeQQ6lug7B7AWyx+NZ+2oCYamijg71UoJUmxp1M8nubbSiRqurdQ7skIYGs6ESSMeBOsRfSn1+ogrUzGqeUI47QmuzK8lgWarmZ6tOFRMj2BuwbMfSk5GOgzybi68+D5BppCVFZhws18EYC0QggACwQ00EY0pCAJBFCQCo6wAhUyo/j4YERyOU+AL+g2IZFI4MN5qy2pSNTFbbTtBFT+hzsbTNi/TqjEqAXEHACCHmcoQ4FhxKbZOGVJJzsc90ByWggCKMZf1SVNc5lpFDHYlwBCKjyMVJQco62RKBAIBLKUUEC4oMCDvcNkczvoCfMTdvE7dtWxSlCbnbpgTgSQN+VEKPMXH2SFcpdrTnAmNqYSwQAEwpxUgghKii5KSMjC+jWIx5yzUOOnJVJqTNNOFAEGYuFySRI8hlsj7Na7PDEF9IoUg5z7Ni5ElU8gR740+ZIITWrl178QXnYYB//OOBiy680O/3Z3LpUDhMgacyaVVVBUEwdcPn89m2yzmn3D08FAwxxhxqq6pKCMlkMprqydPTA4AoKY7juODmFxaIImBAgGgBxXXdkZGRgNeX0XOSJAmSyDnP5zOeMMScc4bHBrMYlTEHiqaoDwAcuYyBa1OCBQAwLcPjVV3Xdiy3prIKY/eFZx6/7Ss/PO7E6qzbt3WLXllPHRslBjyWkywpKXNpLpU2zr5gwYfv7WqsrdyxKXbxJaUfre0SxcDpFxz7n4c3tjbvrGkoefm116LR8uuv+/JAz2CpP2yb8TiXJLCR43VBevmtZ0878/hF80o92PrFT/7nhX89sfuTHVXl0TLJu93s7B9VR7ixKBhuCmY6Op0vn9b41o5BFCstjXZ8OgI0ABEKJx0TTexnbw7FcwBqIPCra898aUN7SF0UCPnKy0/YsLN5y7793/z+TX+5/5tBI5czOwhHALxhWum5V3qGY92jA5pDMlZCC1fkSsu04UE9l+PZFIiKaGfkqkbWvEPJ5ugplxib1tvdzdHzrksN7MH+gApCcrgfQv6q/r6+6bPEeIwRIjTMkNe9mT71DP9LL2Qv/075aFd/vRjetD5nKeqCY0pTmUOKX4p1W4sX+vYeckTHtJnv/bWofH569yaYPRduumnpB680SyJac3HJk490bt8MFdMhNiR4QzQRE/waCkZwb5dpGRCJhIdH44oGiqjldJpXimCMRSI4jsWBE4I44dQWELilQVi5dAajVjLn7NrXXyZN85emWlvT3ohf9GUGB5huyAjspfNL5IC+/vOkJPss00aC1ThDGermOd1hnM2bXx8KBzZu2FVfV+u4VmfnEOLAOYiCSpkzb8Hs0URvNpfw+onlQsMsygy1p90oKccDvczIQeOsQPuelGHDgqXR9et3/vIXD/zh978Jl8GChVXxwcFFK9SP38v2dcihqKj4crEYLy31SyIbGsxOn0UYU0VBjedijIqjo47K8cgQ4xgCPl8qlYmWQ1YHM0PG07COedKMqccQHF3kwQjxvFxBCKI0r/cCSWaODT6tLK0PIQ6zZntEAfbvdijIkpitqMa1TUwRfGUV5U89cijgDcT11JyZjbpu9XUIZXXZOcus9R9mVq6M7N49OtwPqhz0hczBPl9JVLj2e1Z7W3ykH3Z/Dr+6twJsJ5OI797OQAZCAp98JlQ3jg52CuV1vr7hRHoIfIo6faaRG5Fkv92yMyBojqq5TfNdPSM5Oh6M62ZWJKLjmFA/A2VGA8OxdNN8lhyCWK8/kcxKkmDbkuLNzpyn6lnBNTKREsnIyvERSxCEVMKTyWUa59Cs7g52AeKiIDmnn18m+Ybig/JAN0aOqyOEGHAYd+illDJKsYAJIRww5UApxYgTBIwxQRAKYmIxsKGjAMD/Dwm4gIWFZObFK6ziS4+/ARPBuBitv1gvXdzPYoQ+jPpU5eAgYiPOGAPOcN5KjETQWZY6muZhClDm4BzOWI5KFdFruhhITuASRX4d2Zwmg2Kp4YIHZaykRwkS7o0P5CjI4RKJCJDJZoiIXe6oqmpZliarmWxWFI+QwI42UEV1jrYAoofTuBZOHLdfTq4/JpgSjBBhjDEKeWlVph5LyjDXDQZC+rNv7r7n0ePvus184VXld3eBVxqhhqipbsZUiYgxZhgxl0Ih6LboRtg4gXBe81ywB+uGEwmHfnL7HZqmjQzH/vXAA9/79nfuufdu23XTmYzDaDAY5AiMnA4AruvKgkzzbMyYY4xRnpmSAoiIUspcKklSXvmMGKeUOoyLkkQE5DCaz8HLKWeuazrGR+9/cP3111mmZZimIEsOdTk/agJqios8qMdDlTA/HG89sWCOOKa2W1oayqQMj1e1bN12Xa9Mdm5vr6/Tbr7xyk/Xbj734sWiz3n00b0VlaWGYS5dXr9xy+4zz17YvDPbcrDd5/NffWnTgYOjn67rmL0suGdbqiLqr60W58yc9uQzOx574pE9ezr+/Jd7Fi0NMkdPjzBE3dqVautm96TjZ9vQPZJj1eVLmze31pUK77+y++xF/nibuNsZrS0p/6BvkBO1wevU62iT5Xzvwrkffr4vFSdZkQYUSZV4MudkMb52eumGHWZVPWYz3KffT190rtq5y3Cz0bQz4gv5py2uGhr1aOqC2pqmt595lsm7nJzXtBwGUFpuKV5cO50tnreytXNzw3QZc//H78QiFXT6nOgT98fUKtQ4Ax/YQZcsVecuhM9eQ9Pn2PXTpA3r3WCJ09+hxGNQW49OPTeyZX1/ZwvcfEX153u64oNlq1ZlfDgiR3vffpFPb4JhorXtMDgwCiWeqlgqgUOyZ+SQEfB5RuJa3Bq0mWKZ5qJ5vmMWRvdt7Tru5Ppg1Hr2sYEzLqp+4tHukZhYP5u3NDMMLBgIBqKZrjappAJSKYMQMHLAADgDQpAoygghyzIYB1FElIUZjHqQtyToWXE8723Vtu5J/+KeVf/9x0Zdz
1U1uuVV4e1bB7MpyKXEoKfu5m+e8Ke/P6LrApJEn192LD1S6mQSPJOFcMibSRsIcdNie/fs9HpK6upnABgAIEmS65oMAyCYNU8KhmTDzCRHIBoOCLKRHPa3NKcwdryeMlVzB0dGBVbqwjAiwKlYP9PpPAQSA59f8PoCI6NOzsgqGhNlyCQBmFBVKweCUFafE0XS2UllVXTNkFdJyxpdv5ZHIoFMbhTcEJFTrh2wrAQhIkKIM0SZA+M2LihyVyzWIyJEOKfFEzjGGCECVMbE5QxTbsrYU1Prr65jrc3xHLV9XjjjzFnbth6oa/C1HTAHh5zEKLgIgl5sZ4gFjr8Equq0iurcaLd3YNDRvFJlY0ZC5WvfGgQm+L2ROccOhUrk0rDcUJuurw7s2kT6+mAwEe9okfoG8dxlppnxDMd0JIARD6062TrlTHfXVvvTdyCT0OYcm2uaWar4h3duAG8ISqdBesiz7TM9GUOKKgTLndRQMKdjgcQphcqqgGE48bgjCoLDDEBQHlLLqhWXJfQsJBNiOu0gQJSGKqrT6YSYyzkYPFhIV9VDNo1cByPOHdt1CCEYIYc6ru1IgigIAmUOZeC4FImSJEgMXEZdmRA2wR16bBY6usr3/zWiFR3h/zwGwAhTNmZ7niBwT14K5P/NX3cKORJNcToA5AX9CTLxOJC7LrWAgShoQCXOkSgKmEJOAs2BXH+mM5mZWVcpBMFGScnWQGI259hCAnE40RmOZpJY5rYQkAQEuZ2p3Q+t2/fGjpxDbZ98ymNnLF2+JGdkiYJN0yCEyKLIXAZHkYCPblM/2vGJidCnfFiHxwHG3KOA8jw0jq11FdBNW2ERj2p1z19Um4rFGBH7h3f+9C+r7/hmnKVSyPVIsuJgEWGHs3zShfG0g2MFA/DxhXPBBgwAgiAgIti2vXP7jgvOPS+Xztx7zx+CgUBfX9+Z5589Z95chFAqk0EICYKgKApzXUbzEMnGcr8znqeVJJJomqYkSRhj17INw1BVVRRFy6WHmVQYFwRJU1RZxL/4zW9+fecvvvWtb33t5q/Onju7q7vHFwzYriPn0wZPGtQJw3w4pJvxKesDEmzLDPq8+3bv8mjKs089ef1NN1ZUVDzyr3u/9/27mqZXTm+q2LRh24pVTfFsfMPG0Zu+fMrQ8GhFjec//9qw9Nia4b7c8uPKy6NN3X29r72yY+6i6p07exHwkD/iUXMrT2x686X9i5bVDA7GEDZPWVM/MNCXSUBVrW8obXXuo3PmEcNJuRYwC7J9iLikVmg4a5r21Y/2icyxEVAv1DPJy2zLxpFyuTdpf+NH11q9ubsfes5fC6Vpnwyo2UjXUIhRlWBjyezQ552ZE+eEMjg2ra5iND6w4SMorwFBgjVn1xhGX474P3ozueK4JhCHDuzWbaP80ME0hSwmrL4hqGlkxlw7GXdjQ5I3nJJUSMRDikfvarOapgcEMec6zLYQp7LoFSyTjQ4yy3Ji/c413wh3tsc3r1UfejT67/t6JBHm1pR3dg4SH7YMoSoUbR0a3bHR1Tg97yqpKxXZt3dgUV3F0MjAvhQkuhW/Zi9eKvk1IW1kRUH68F3RR3JzFyrHrS7dtU3ftHHEtUJaSSprMEsHWRK9Xu9oKtEwE1r2+kU1rclhy0wx4ADItihCIAiCQ13OADAQLgNj0SCybNvWNdnrHHuyjK3IYKwTqNa825o5xxsIqFs2DHz1GwuefrovldYtijx+GooG+rpGOGNlpdrQkCEKHkYBsEE5/fLVZx2z7ERJCN1xxx3ZbJYxlwOfPisiqbY/iAwz3bwxqASScxZDb6uSGhVyZpZTEIig+ng6RQFAlj2cCQhzy8zIChEZQZiZJnAiMjCIgB2LhCPhhlkpI6PZbDQcgdZmiA8LgNyZx0BVmdLRLPd154JhOZnM2SYIAgbMXBtgzLVq7LMmBBUmz8OKQ1b80TMA8HgUXTfzKmhCxFWnOF4/PrBDcbk+2CEfe5JV0whlFfDx26B5PNyhmYzluODVvJz5du8ecLh84YWhbDw9GmdDSbNpXgknsdF2j8vdull2OgWbPwIAlWFj2hy4/PrSV54fdhPCdV9zD2yOfL5u1HTEqkZ3+hy8Z7vS2pwTRLB0DZBYOz355WtDezcn0obH0rWzL/ceONihp8XuQ2p1vXxg7+iBFhSN4rJK3tsqJhOGSLw2zUoyUBsjUCintdMUTlKmDpGo5o/m4j3Q3iJi4mCs2A5dsJxqPkYkIATWvQMCCjo0iTBQFxAotY0ucpljOrYsiQSQ7VqyQAA4UAoOB0kBhEzLRaLAqeVSR5UlwOQIjvv/DYCnzGkOXyCPHk4beES87xeomie43o39yyZWK7Q/4Xj+p4Jf7kTpGdkCFiybAsOqogHYrpMQRNi9608Hu8WaVntm3fk3PPDIQ3975Km/NW/YtmeO5pFXGV//1WWZ1oyHQrhJ0zuHPMky6MM71366+9NdQ3uGfBD1lYX3ZJv35vbsK29tbT/ABEq5zTHL5XJhf4Ax5rh0is5/UfjT1BJYAf6OlsVhQjuYjOU/QBwQp8A4IUQQhJwRR8Tn9fgyD/8N/ePX3hmBoU0p+Ru3BX/8q2QqyRkKBYI0ZwCAyZiLQcR5bhHO0Hjmify1xhnH8qPtui7GWFXVj9Z9esePbrcM08jlSqMl3771tuuuu3bZkmUbtm2Jx+MOdT1eL2PMtm3GmGvbPm8AUD4mkDLGOEMCxhgJhm0RQizdkGU5EgkBgKGbCCFRkrLZrMOookiEiJZuGDmTueyJ55/8+U9+Sl1XEsR0Oi0pYldffyAUHEtsnh+6qV7hCe81pofHsLi+S0EgUBLSLjx/zUcffIQB/D7f5Zd/ae3bz/YPubLX4/HZhpkOhQLdfSlRDmX7krMXeFrbc1a2SvL2GVkgACedOPO9T7t9AaKnhLCXVtTilmbsr8wODzk+hG/77prP130aDgRLK8mGDd0Eq6Eyp64hUllftnlD88GtjuLC6SeXR3zKhs86NdeL0s6rYJ1eHd3bMlIysyy+a3RAjmre0VtXHP+7tz+5eM2JTmbkk3V7TT8IaTHgFxUU8vqHzKz/UMIqh1wOK9Uesz0LMxt98xZUbfl0QAqkFCwfu1JMDmb1ACAnMGsedridTtrtLW5FdZPob/nsYxg45DP0jIjxvGNZ01xIx6KeyIgmAKfCyIi74qToY/90jjkr9fyj4BowfxnYLoRDOJNkjdMrOBoIh317t2d2b4QVx4QWzs6FNPL6W0bVQqUsapULYksrDzYpDeUk3cviuhCJJtetZZ0ZedSwjFGY2RB46OnjLln9DlZI5fSq7du7505X9SwaHtIF5HVd0wIXQ8myU2PJgeDB/UmfTwAsBaPuQI9oOTkiCA21le2d3eNSAERLS4JBf1tbGwfwiJpl2oA4IEKZ0zQjGg45bdsTAi7zl8eCEXSomQaD+PyLTvz8wwO7WtOOiwQx59K8GgoQIl4/C4VC3d1xSYKSkuBAf5JRAJBDgdJUpq+0NPqTn97+z/v+PDTSW1kdFCQ3kchmhv0gpJMjgiS71Q2CY8NP
7vjZ44++7vFEaxrxs//9zHEMx6EAkWCQJpPJ+ko/8aTb24EzqJsJRATuSqrXprZQP4NtX8dGBmVAlixhAI1x5LhpQsB1MGAGDM+c62+c6372NjGMjOuyogmHjZt1iyWW4ikF56kX/H5vKpWRJCnvkDFvgVhSpvV3W/VzcmF/BSFDPS1kyzra0Fg6baYjiImyCmH/ThyLmbquWSw3Y0ZAJSk7A32dgXCDEmmMUwv7JLJruy6RKPGNSNjb0YLmHJMrryMtu8nOLfYpZ7HhTigtx1nOutuhtFTra88hAcqqwUyW9fXqS0/KrlpdPnem8Nkr9lPPDd3+J0gncHeLGovlwlF/e4tuZUP9Q2ZVnZEY9A70JxEHRIBRFQkGd1UAQ1SgaZYnnTLiQxoAa5rLReTu2el4PN5Eki4/URJlq6dFbpyfSsfCu7clZUlVPQKWU9NneRxT6RuII5dzyzEQQtx1OKOaR9bT6d07tnc3d5xw8ikV02cAABAMwKhrEQG79DATZIFIjx89GIlPYRfL/3A0PDhshT0iDOlI3o/CYy5kKi3UHH8PJjY6/pdPqF9ovxADzYvyLLmuK0seQ2dAic8vWG5fzjwgezIDh3ZwUsqffGTG4uvh/Gu79mW3PS4FAqFH3j1www/CC8/zv/JWy8aH9kZ2dp3fND8xbHV1HhpN5fwkVIvLa0KRLfGN+2oO3vTfW4874Zw//ekvt37nptHkiOpVMAAG5NoOFr4oDGkKq3BR2FJxKabSLF4o4Uk6gPwOQYgxJotEkGRwbTObc21HErAkBhJ+j/bRy/gbt9hSwlNWxYfq0LZPRvURL5HsnAUEc5EY1FEUDzAuIcgDMAVewH4E4Dpjq7d8Mj7HcQghXq+3alrdcHdP06zZ1199TSqRvPeee2RBuvrqqx94+N+u62KBJNNpxlgoFMoHUo+MxMeim/M+dIQooiIKMkPjvmYY7rzzlwsXLKisrLQsi1I6b/58f8A7ODho6FY6lXrhuRf/9cCDOTuNMfZ7vHf+7GcLFy/2eLVjVqygwDLxdGEMD3OvIhA4Kv638KvgTgHAHEEua9ZWl77x+jMvv/Bk28HmloMdtgGRsM/vR80taV+0JG3EZs8LpNLQcTAl+VWfYMRjXiWYFVVIxxRwUWUNGh7QNRXSGeACgADHnjA9p7dmeiqWLJcyuYG5c8OpQTrYE/NFoL5h9huvdTDCYm3qvJPkz9cN10cqany5GZVCZUX4ww/aypn2Vso8VoOk393fE5ozXeoZHBo2oURTpy2evu+jdsZyy8pF3lCxdygVTfKuuLNypW/D+mFLBgGBYPsjchoiMKu6EnF7MDZCCEYK62mRTz2Ll5bIbW0ZiQRnL9UP7rUVL+R04dABt2661jtiXHx12bb1A1371dZ9hubxrTiN1c/Jbf601BcZLq0AbiJmiS17oKzC47LMvmaaGAp5vAZG5qJlpYnEcHk1jvVpdsaascLmuagbzzXUIQPKt+1onzE9tO9AbjRmf/k6deCQ0X0In3Sh94mHtaqlcXsA7dkiBkO+pmP6171HKHAOkVBljKYDnAmmOVpWAQ3T/Rs/Tbs83DQHKQKynNG+DiFnOqKMbYuBAOD6EGQwQaxgf+AcYcy5i7gA4CIgDCRAhhoE5ILMlfm1ZQxnewcSILGRGFRUVJ5x9qIXnn9Lx0Im7i49ptYrV23etMehWY+mVtSjgU7KkRUKy6UlVbt3dtiWoCqaYWYBucGgdvbZZ+pG+uPP3g+GtFQ66/ORhSvpm095jl2xJFJhvP/23jt/cUdNfeDj9/f9675/XXTRBWvf/dC0dIEIBMuWmxMJqYgSw+QjCUqI5Am4lTWqyzIYQSoBZjqUTiUUUdLU0tF0LyBAXOMgAUpwDoII5ZVeLWCm0m5iQPVoQiqVYRQEQSik0Sye/ydISvkwJIxxQSomhFDKCbCAL7Rkpfrxu4PzF7PUkG/F6Zn0kNbSlluwICwKZtsBnTPgyCP59bJqj53Vd70PIZXMWUnrloaa2xLxflAQYCLv28ZG4+75V2qeSFbAwUQyaWSkjOOk+gLDnTmHMxuL2ax53OpQLp6xHVRe5XLgn66FhcuRz4tEm9MsPzgEie7SE89NI9Hc+TnkMv65x+Qsmx7cCyefD217YaBDs2mO2yHbdZCQra4XY4OOmcXhsJ9IuWzOMTOAwBfw27m07ICFiFVV5x8ZTZtZEGmDA8MYQNRywRDUVFe0tQ1QS1U1hga748GaEOeg5FdkAjCexY6RNfsOfLb1yR/87vILLp+95nSjrrRCC0FIAzkAYAFHwBRqW/FUZ7Q0hLjXIRIB7jq2KGAAYJw71MWiIHFPcjgZjKo2WBZSBMdVZQTUoUh2OEWYCyAihjASbGZSQrmLxtTOHPKU2xgQxkDH7W0IIQRjiXg558AIiIxjl1IqYwUDZpxzDFCUqRCNkw0xNhbcXCCULqrAJFHKZDM+b8B0LADAGNsuVYlMXKTLiWuu+OGLjz2qv/Urzyx3t5Zpe/H95Jb9ZzOlTF7Fl53/eU/DH+7N1p1+wlfOD824TWGMdGPWDOa6h/e9c9Nfv4nmphSlzq201WFJ9oJh/iP7zHuta+VG59mnXr7+hq8ZVo5TTp2sS01B0hwuENfGEnKY4zhOSAvlsjoWEBIJ4tR1XcYgb4xHCGFETNN0wfJrftt0AEASJNu1iSjYtimJqm3bIpFc1xUlgjGyLAMhZCOvLAyD5IesqoDDqGMC5orhN8v0QDxpbpU3DnikMrXOlxv9J/dc7J12Ztx4Vb7zR+prBr58SffTH5Tf9bbx5dXxeHODuBhUsASHZZMK4brrjVMSEW0dqKqqNJlVff40s/VMtlzQcNBLGHDTMmyLe4ScZbqWHdJ8iqL99O479u/d0RSceehAl1qprH3/DXfEzDJveUn4zddeXLBkcc9wLJXKxAcHX3j8yb/f/08mWhnXTg5ka2sqLAS//9VvB/f3XXPVhRTDph1bahsb/vzne3du37F985Y3Xn715NPOO+akpYCdjs6hk044LxQIx4ZaRCGd05VkeuSee+7q6enbsW3/x5998tH7Hz740D+efu6loaFRUdUAI4dSr0dJjoz4fVrazCqSDIANwxBliSAsASDbZarFqABcdmyQFQ9jLJkaCQYDlpgsV6p/c8ct67a+/fmOror6acsW+996Yu/yJUuv+dax3/jm3yorylYsr3v2sc0uAV+EGBa101BfWZ1NJtKZnNev2eBksk5I4VkGjgSQVn2qccllTWq0s6PTz/TRZExqnIN3bWbRSmnzJ9hkaUDgw/i8y2e+9fmhRdHoCTPltv39AhabR4y4zgUcUiWjq8+Uwt6RVM6LVMJZnJmIA1G93MgKAHf/8Q/3/fv3LV2jxMCBIA6FpOQoREpJd7vlMlvC0NgIiZTk80QCngFEYF8LzJnTUFnZsXhVpHdgtLMD+ntR02JxaNDevUGsme6ceJxEmK+301xzcaivdyCdiEYqMt1d+kvPidEqRxEDC1akNqyFkB9OPF1t2Ylc15XEkrffHFq02t2zQS2dZgy0hisbcktWYFM
3RBvqK7xD3dlEr2/Bsf6DB/rSZiQY0Y89wbd//zDYSqyvzBa6wlGNCPburayrg7qgSXIOwEtEHSh2TdHhDgHJBYIg1zBN49TKpSh2tKpZ7t4DuqH7AGcURbByskxy5tTRl4CJyBnjnIoCMAqXXHJSMt6fSAzVh0Mfftillajf/+lXXn3lnbra6EsvbKiq96bdLHIrEsM6JkzP2Yy7jFEkggQBF1IEY9sQRdUKB73+cHa4w69UpIkTTAxSw85FyiFYyjyktKt92ODgpLVFK8uuu/GU797871u+ef62z9u9gfSF59324ANPNB9s1vzUdBjG2DYlRqkkOcetDg8Nxlt2E4boyWd7k+msnQt2H3IymZwkCbbtSpJk225etEUgYEIpc1QP1NRUtrT0AwevV4mUCqlkNpMG6gDGeX+gcX9FDuPB/bxgGCYAHANjWJCZawkAsiDlXFsAQksUranSpTYvnWspIdTVzsHyMSuzYGnJ/n2xgX4oqRQHB4X+PsSJzsELlJ52gVlW4ulvz1EKw30oFfPKckb2EC0oSx51z77RmgqvSJRYbPTE1by0KtLeQde+rvsD9vd+EXnzGb1mprt/Mz72TPv91/CipaF4Pw1EEiCAYcCGjWBnhcbZ4uigNBJLTZ8DkbAa62UkyHwaTw2TeD/TTYdzWHicltVz02eJ2zfw/h5X8kBJqdZzKAc8giHNRAdcQBxE0BgQAFdQLdVHM3E8rjmA8REmHDj6y39eHHXYgOUkUsnj58y69OQVYT9xuL52w/DyGbW1IdJz4MMdn70t7B85/aqb//vgS6fcdCmRhiLB0Efvt81ZtLJyToR5EZZLuaUrsoIAGKV5kYUBw0R44JHHDjXv+cNdf3WRK0mW4yiOJciyzl0uyoRihhjilGMkWI4JBAmCiPJEQqhYxuWUH044X1Ayc86Bc1EUbW4ahhHwhBzHcSnFIiGT/LDyki5BuFjwLThz5WmuZVk1DENWFYww48y2bQqi7iRKvMErr7qlP5n59D8/b/7Vj3Z/2pwd7JsTJQ1VJ9mNJwUuODm46oRnnhm5+86d5dO7315/Y2+cJ7wo42ZVr/eci39Y9rL5Q7EhIanEok3E/z/w33P/eMF3bv7m4OCO8rqFF1xyeX3D9P/53V2OmVIViSHMQaK2rmlaMpeUiIA4GQtsJQhjcF0myzIAGIbBKBdFUVGUPIGqY9mSKOY1QoCIKIpZK+3xeJhDLcuSJIkDcxzH6/XQnJMDC0zC/H6N2kSVGYBiG7r+kafd7d336siTL86vlEgtcm/8nlB7eqblWV/fJ+B0j/4hDtcc51/fZn/5YW3FmTkRWva8h5LBnCgvmjOHmWAGEiEXxXNU9CiOaSmipBuGqKleTcMu2918YKCrp766ZsasmVnb5AQThINe31/v+9uvfveLpvqm5m0H6xvKa6aXvvXmuq9ee9ulV1wqIr5i+RIsi8SjfvWWW5585DEJ4Jabb00lrb8/8FdBM3fv23nGCReBw5LZ2Kymel/Av3Hr7hmz61avXv3aa2/kkukF8+b5fcE9B/Zv2rZ12649V1z+5WwqjTglAAgEhF0igChKx686+Zavf/MnP/lxIhl7883Xm2bNjicSitfnOI7tmF5FxhjLIAOA6vHmbNPFkDV03cyVRKLZuK6qqmOZiiIZZs513eqqSsM0PIL2wgtP3vOnb+/YkmqchRctn/7Be83JOL/6y8es/7z5UEv2zAun11WuePD+56rroafDPmN144Gd2YbpuZJIdCTeoyrLtjdvio2oDGRBTjuWVxLFmvrRGdMaMtlhG3I1kdLhAaezjYKUtnL+ujn26JC64jT/J83D3k5B0a1ary/cyN/cGG+s8A4P0m99qbJlV9sr26GksQG7vR1dTgoDIJjmVRYsq1pQY/QkQu991i4LtGOYzQhrvfFcpFQeGc3NmFndPzQYH3U9SqluDcsc5IDH66NBGXJxqWGZf/eOvhOPl0QxsH1nbNp03/qPoXGRGSp19Hg59gyWh4T9Wz3DI+kLrtD2782lE1BWTfRMaHg0zak3NhJXBX82Q4NB/ZyLI088MHLOFaLj+LxB/s4rLhaw7DEyCSljZufNlWbMluJdEV3vqirHmoY1rzvQA3OWiM1bQ/VzjJyeefMZnytkHBdsW2CuO2N2RA3mEkmz7SAAAwQEgFaFYO6Cxq3b23RdYJg5wKbPIMkkTQ1ARb0Ui9vppABAPB5k6KaIkX1UCljACAMAZ0wS4IILz/z0k/cNw22qlRcvPjPL9ntI049v/8E1V9y6c3+zqoFJlNPXLHjtxc3HrzzDF7LeefeTs9ecuWHDFsriK489++133pw2G82Z15DS25s3kXnHgKiFNr2fXTC/dvfO4YyerK4JRqJyYtCumcOaFrnvvqiXVnGfF7a+H7btOHDvuRdOt/hQX/8ApmW7tsdEQXGYrvkQtXlDYyARzw31u+UVVRRGHcfMpsGxsaIolmUdxtH8bXGRAxVFNGYL4yDLqmU5gFyMASNJILJtu4y7CEBWRMZN22JkjPYHFbj/JJHYTn7xogIxFAWcnF9R3XPPiRza3jNvmnZorzJsjdbNCQ3HUh0t7NzrYbhH6G5zEyMg4ogNow0zxFzW6eyE0gi+5aeB//4lkRoqk3xDkgq6AbNnyMMxy7JRoMRHeVoQUXKYD/ZC3Uyvtyy7axPU19YqYrdXg1K/Qpm97QCbtyBYU501Uu70Jti7C154Bij2e2RNNwcEL1x+AwCCzHDoraezmDhIATvnlQRKXQdjV5XCgsfMZHRMQBAkWRUuurp83br2rn2qqVtEZNQJV1XRaIkBzO7tAsZFCkiQWTzGAIAQhDEwTvNxixwAcT5ic39PiuzsGurrGdqzqzXhDW/OZmv37P/T+ecvP3+O6zUE19q0/tM6weMSO70//smux7cf3PPiOwPPPLP2jEtPpG4GowiTDGAom835vCHXZaJIXNc1Tf03d33/7t/9e6i7G0G8tHZGjjuiSAA8EmWUuw63GQJJkAiQnJHzql6HjjFx5yFznEeJ86JksQihAk8FpTSf2kYWZdu2ZEl1KKWUEmGKfLScczyenWYCPAsCsW17zEEXeN5RSJIkwiQuZvdvbzvn3l9dcdxl7z/4wHuP/7pUmwc+duPq3y0/9fLKk09wKZxaD9h2ciXiKw911n41uqN3gBHNG8ox3Un3WaMb+vff+8rJML8CRIST/xHeXWt9nKVuMIupnJMV3/VfvfnPf/xD0KdQ26IMEBaBu4Zl+v1+6rh5vQ0hJKvn8pqfvNpHURTOIJPJ5OOGFVmkjkOZIyBBVrVsNoeRQAWXMeY4riQptmGGw2FDz9q2HZL9hiyrAHZ8SIxy/Z3XxYdew7+7cfDAXdGBSrzyRLL1N0QQRtsSm/pOXPOvOdl3HgjU+cE2M2Kjb18zOIHEGU+z8JoPnnj12JPn1c+fG4bI92+47PJbr6qducxJJpVg2EpnPIoKkmCaJqM0k82Gysqamhq72zoRh+6u7khpWTyVLC8rvfziK1579zWfLKcS6WXHrkimhj0iGhzK+Px1c5sqqy
vLo9FwuCyyZdeOvqHBz9etm9M0c8eOPauXrrzvnw+NmMbQ/8fXW8fHcV3vw+fC8PJqxWTZkpnt2I4dcpg5adpQU8ZvIYWUmSFlSCFt0iZtmDlxYma2ZDFrpWUYnnvv+4ecNIXfO3/os5pdrWZ3Z+e555wHZsZvv+GKdas6SrhStgRBFADGx9PNzY2qqk5NTgshvLLV2NqMqH68uzcUI1hApcBS8XixWEaYNTc3TE6mJaoGPni+B8Bfee65My84HxBUqibnPBoNgxCWZZXzVYLxL37xs/b29js+8P5sqWDEYlMz03Wx+kqlpCo0YFYyGuPAjx459o4bbopJ6LKb1v/5b39jHrngwqbnHh4en6Ygc8LJpedfvGbT9E++u7toieWnGf37tC/fdW00/OLE5FgqZYwNsFhDdXAYZDn++D8LGRsQh40b2wcHyol6IXChpSkycNheuc63LQMhPVvMdB+U56/2auINLz8/JTeCnkmuXlnv5Y431MhM0Vri7uRUXFX1B3cN1abCWtVxsdvcEFFsPdZiLzstPtMT+/UjhxbPmZsrT5eAiYKHk7qoVroWNB46OPnkcz9/9pltDz3y5NSUl2rgNTF9cNQFxHUOWMTLKN81v3bwyMzpZypz5qcCPNV3VM8XfEKhvpXMjEE4pVpVd2rMzacjWMmlGvF4vxGIytkXQvdhCOkaUYL8lJKos5vbWX5SzUxFGZ2++pbGA3szYwPgexwQy6WVc84Ln3a2tGfb1O7XoasztGBR6KkH0hs31TR3lEFoe95gnDplJ2AYquVoqVxKxZFuSEQC05Qy02zzBbXHD2YlxT13jaKGlWye7dxTLhawGXABUFvXYGiVycmqpoJVJYyrHEwsAUbge/8bfynFQcBnRXSCc11TLdsCAF2Bm27dkBlTJvrz69bHjx/f23PSbmxp1+tT297Ygyk8+fTfuuZ1ckbmdy4DTh998vfXXHXtHe/96J///OCF16r7tjn5KRkAC5CJXL7o4mXpzJis4mI175li8Uo4tBeS8ZDrB+NDrkKjS9bYQ71BIgETUyyVCvkuPXmiqOuyZXvxmG6ZQlcN06z4gSsAYwxc8FmnZSzBnPbkyHCOBwBCfmv2x4Uz228HwJRQzgUXXKKKH1Rnd2qq7roOwoxxBgjQv/FtMcZv8bBiCBwsO8yXQXCEAgnD4pVK2HD7d8avuLnS3xds3Uaa5oXjSbXnYBpJYFYT8bBgUDAr0NSoLF1Zu/P1sbld0UCUJoaA+9qCNXa+CFMTUNekldLCsl3HE44LVAIMEAnJmqIXrKrLwPeDm95bl0pmVIiO9Je3Py/VLxXLVyRoUFWFOzkAtsAjOaVnyF293unb17Ti7AlNg/QwMBcDQ0MnpIopE+ojyWZOrRFxK+USFhrFPuPJAKaXrk7NpM30pAUAkqSzAJIJJxzlySSk02Ao2HZ1Kusz2ZlKUZp9k8TsaA5g9id1TaFIMFf3OzoBLY53K1732Mjnz75o6V1XFLYOH//E3+d+9PLKUO+6Ky8Gxn0hmtcF8bFG+09/efkTNy1csAoA+VjiBHTgpuNGI1HbCVRZHh4aa2lufumFV1etuwywE0+EZVEPSHDqci64l6U4jBGSkIQpLrsVIYQuG77nAZ2tVmdXYW8BJwIIZtWkIDAIAWiW3IsJRdw/FUVDCCmbpZARQQKDYOjfx89CCBDAxb+0xW+/13FcSZKEEIqiMMaCIFAkGQRYdimMFTtbjBXopg/dsnVyqnXFJd/+y8Mbr7/sL3OWPjG6H39ja7ab1vrxFlCveE+nX8UbjBZtRdORAdY94x072ZPdOvbOuZc7n/We+sUrV3t6hvWvWb0Ig0i41ZIRkmyfY2v5yiXPv/T8O6691g+4LMsBD7CMwzRULZqGodmujQjIsowEABe+64XDYYxxsViSZTkRizuu4wUO48wNbF3XOA+KlWlKVIEAc0CcRKLxwPcjifBg/1BjfW08Gp2eHv/Kz363eeGSG2++Yubv9+NPftxYkvTnG9XnJprr26Dbg+i5duLxqZ3QMThIRv1oOlyOR2SJGbhY6Ib48vXxupbxffd+4b3viUnti2qSTa1zEEv+4nsPhhu3fvPbn/njH/983mkbQ6HQ1gN7Lr30YuKxutqG3z7w12qlCAANjfUn+3obqubCRQsYgxtvesejT/7zQx+7fV5n07ZDh/70u51bnn3mG9/41qu7d37+0/dcfPHF6zdsmJiZBEmAhJtbWvrGx5J1tc3z6j/4kTuOnBjWNRqNgBqFmYyZzlmCARWQjMeWL1pW19C4bsP6n//85+N9PSrFw+MDdSk9EjNGhzPxiGpVTc5Zqiaey+U4F7JMTauiyrrneY7JytOlv953X+/QwM9//bOf/fSXD/7tgSeeeEJOxR55+J/17U0vvfzcHe+5HVXMXL5Ul6qdmRnQdUMwOaQYL7/06s4d277zrW+FQppSh/fve3HVioaJybzvRTS9JRabLJUJI97x4080NyaxLa68QcpORTed5bbOe/7Vx3PRmmDMKkWiqcwYbUjZpl14560QaZX2PCfdeEf18JHyy68HtksaW5N2bmZyTOgRi6BaN8isu0A6ujMRWw2+m/DzztLOIJPv62pSS8VoOFV95Tln8eLQvTutNYt4bT155EW3vsHgMxXilWsEDHW7J9MT51/a9cSzvVEdGmuhO6fJFQ850oGDk6l4w3vffSclqlUQMSWeDFuuGaKy5doQTWiNc/NHjiht86ud84H4DU3twe6djDGzfS4/cRhiMZkx742XTKBQXyclWnJOlUyPxeLNec+hYMUXLS30H4P1m8ihnb6u6yNDlZYmdSoz49mhP9w9qWlINxCBSNdiOiQ5V95Y/9u7j2lR0OPKdLnaTrz1F0nbXyi1jkVa5vojk1WJRpFRCvxEqZLX1YZEbaG3x5nFB0UNfJEpFuxYRH/9xVjemqxtAtcBIypHVWU6VzL9ac/TInFQacQsl7s6o0OjjutThtz/Db8AjM3S/TEXAgExbQcRDMCvvWH5q88fikWkiy857Tc/f3n+/GQ0ph85OTSPmgQpc9rrzz/vHN9RxoYzn/zEp+qSDd/+1g++NPezpgWtc4hVRdEkUERmpm2MZcYgnTkRq5Enxq0VpxuHd5ujvSoJpCP7KlQGEGDEjNGh4mi/vPAqZekmb3IQHz9cXHuGVFMLuQwc3mN5LvVdS4BLJIhGw+VyqSZJfZ9RHO5alGABGejPAYdZNa8QnLOAUAgZqu8J12UBCwhBwHyf+YBAkoTvBxyqHPFYDEdjobq6ZP/JQqVsMnaKPSPetI3j3EcYMx8AeVg0ychctanY2oofvS9c11DYu7flyPGxunZ9cKg4B8lEAtfGDXUwnSlwDpIM6Rl38tkxStDQSSlSIxeK+sbzigM9ZHJcpOrk6WFWKAUIQFI0EBwx3WNesRhUcak2kqqImbJb51egCPyNp91lZ9Cq7LplUdecHjgsGlNxOVHoPQx51/HLUJyhS86YKKRjr7xhAoQlUm5oZC3tbOCEFTC07AwI2MzYAIgS1DfYibgyPjEtkdqjB2ZUF
ZJ10NwBiSRSI+bJPTJz5cGTYFVYmvtIsn1WFQIw8RECFswuTygCKgQSQiBHlCZ6Bz5x9Tu+ePv18RR34/rHv/DTz3727otvumVSrtq7DzQtX7bl03/83q//8Mkb7jj34zdLK2VJjXJMGYAAmzmWDLJEwzbLYiKNT84MDAx/9zs//MB7P3DZRZcuX7r04GifZ0+Gtfiz97x24eWXsHrQHJOCakNFU0KByzzhEpUgxCWQuctAevvp/a+oQS7eXIKKt5e22Oa2hnUJJMGYC46iyJbj6bLhM/c/aPFv/tW/kbn+bc0GQAktV8qqqgCAJEmO46QVswk1PHP/X/+2e8jdcC4JuaX80JY7vw2V4cuu/ukiunjN4oTVJZcqXnaHfc89+9IweN0nbmxfHV7YEW5eOudwPzzzm5eMyvSmNat3p49P/ej5NlkoZ4l7X/wTmSpUaoTMZKzKhWr55ltue/LhR5njESwQAYt5BtVloQbcZ9hHMlSKlVgozoEjhEzTlCSZEDLr2Ow6vqrJs9bV7NQIQIqG437AVEJAoEKhdOedn5Uw+eOf/vC3P993w03v+Nx3PnX3N356Rtfil7uPZXsP1tz7vUk00/iN343+3+n6le+tmdcILfVHfvfX7F+2bf6a7R7wlATYNvClimSRYJyz/bxY19zy6TVQ17Dn5pd+/7eZ0fXNL+0/ZPh46sTRT//6H/f/9kfjJwduuf22nUf26Yr6nhvf9fW7fyDXJTTmblh/+kc+8rEf//AnW17bMj428fVvfqNYKj38yBN33Hjdho0rfnX/Hw/t6V7ctqRvqA8ZJG6Es/lCY1OLrNFwTAeCCsWKz+l0X1+AaH1KKU+X7vndF7fvPfm73z+8uKuub3I6Fglnpyqdc9umJ6ct1/UCEQopMZksW7k0nZs8eHisoT5iV3wkoGzaQgZN1cplW9MoQsiyfIRAoqoc+J5gK1cs23foyBuvv/qtb3939+69E+NTEFIefuxBblsxTcmOTT73zIsXXnbFez780aI5HQknb3nnHbff8u733HHbBedvaG2Lv/TykxU/nxmGdesbDh0uDE2YRIrNm4PXnya99lppeVc0RKoA6trLlZ/9cPxjH4+aU5WBNBk4ZlAJ2jrtg9uC2nrRuUBHwp23REz1SIKTnF+tBHXT5jQFCIFRUx/tPm62LvC3vGxlpxKNjZYsOTMj0dqO6k0Xh7t3hDauwy++TLf2j6w+Xd613z69I3T6BTUDA+jwrlLJzk970NKUumZT2yNPdufL5rJFMDYVmyoApcWYoZ25Xntuq9VQlyiXZ5YvWb59x/6YoRAUROIsmgrt3Fud0xqPaoWWOeSl55XTL/JaasKhmspon5zNWRKh9Q3R3u6q4/hdi+JaLTl6YKack8oFjkSovqNsxEX/wVRdrLJsgziyT/jIUxW1VHZ8B3wfAIOqyGbFk6geS1hTY+C5sO4srTBj93ZLDORUC1M11FhvpOqz258C04Fla5LLNub/8jMhQVyNFCplqG0AhLSpCVmPlgKX6LooFznmqqY7jpWkJNfQLJcrrFgQgQBKFT/gAFzVfeYCwVCTkiYmfQEakgQE7v9DiIFnWUUIkEDcCCmmaS1a3LFmsfz6SwNf+sbmu7+5tZCzFq+LOyI5VfA6Gpr/+ve7ly3dvGzxBemZoRMnDqqSouk4XmtLKsRTOJaID/Tn0sNEU+PT6Rw1RDxkdC2zXBuVKnxkEAIbcx8AuEITTlBu7ggqRbCL8aXrgtUbzZ3beH4GwnqcKm48Fu4/OVPOEEWOVv1cIqnKMh0dreoaEELNciCEJJCPAITAlMiMuwKEqkM4AoUsRCKRfK6sqrKqyeVKFWFoa4sN9hVn/TQkGfwAojEUjqg1NbGTJ7K25b95XT3lfiWEIBow91RnWxJabR055yIpM10d7PFb29R8wVm4EpeKvKUlVBNT/vzDXGoeAJL6usHyNYHLhADzw5g4AH5ITVAtv2AJ9OxNXPYOJoS9b2uQraDsDBNcA4DZkGaKgQeoq10MD8OnvrKybUnfb78BWKtmsnDpLdCkam9ssQcHpDlLyHh/QLE0MWGbVUNgL9XiN7fIQ72eYNA6Rz6215s/n4CQfOwEAKrWVC37Qydn3vvRxMLF7v1/Mu2KMthNGLLa5qmchUrVrFUlrs2QkAXIFKOAV4gMjIMkqb7jYHyqsMQY2JuUAmSzUq7sOCVvbmtjgKxRc6pOrjGCOOAqCAJcgIpK2L70llsv4IvCWlOhafrgjtGPffD/zjlniWSYoKgVj2NZVsG3XOeCiy5umzPXd4JNG866684vHtl/uGtuA0j0/V/7zJZ7Xj2x94jdwkhpTA+3D2R6Bo8Mnr/5EgCwg4qsUte1daozYLON5jfhE8Ns1iH/l551dno7i6OCEpnL4CMAHlA3QC5mBAWEYf/tEqa32tcY0/8JwAghy7IkSaJUIgjPgpwsq3Y1E4u1XXbHVc36GuX620fTfc2SctolK4/vevzVD9z9wP99CfQakkwRkQ3X10oLOr758S2PZQZDF8ybPHri/FUL62t1MoaDivW7Xz20cO6KJG/Lje7Zk3nw8V98/cqPXmSaviqwR7kH7OOf+PRHP/TxtcuXe67p+aYcik6NpEPYSDbUlO28EpICN8A+8cEPhcK2bc/6MrJAEEIURRGuQEACACqDHwCWYNeu/ddff73C3XUbTn/4kYdDeqRqVRuStYlopKm+YX/PrjMvPfeiJRsf+OFf/vL4T2tfu0860Q+f+hT7wY/HeybxZ27J/Ow3Qbly2icX2N6QSLbqbAI0HU6UfCajUEBbNLunWOKgLjRiqTnP/WXinkm6tz/zrjOuaDn33Pu//evWzo6OuXPv/s0vr7z2uoO79n7yIx975x3vvuD6q1pqog8+/PD1N7xj/+7977/jPUNDQ57w//n4w+3NzaW0my3mTz9zda4w2Xd0UMKKw30tTAKGfYHDhq5rqFwutnctOXa8vz7E5Ro9FqZLW5s2rt/wxsHjx48N6h6xgWUyuUrBrK9rmpqa3rVr11//eu/9999/7x9+9ZnPf8pjNhdkZDiPASQJImG14PqCY0VRisXqgoVzRkdHWQCGHnXN/NKlnavWrjp69PAVV1wxOTlz9VXX7d9/cOPqM+cv6/rF7366e/sbEVX/1Cc/W3b5aWdtrmSK99zz52985Zv9fSevufqCVAPhIrNy9fyxbPnI9kPLF9dnKxVPC159xr31hrVtTUfv+6Nz2YXLlq6cmJrOVWzS3FGbnyxMDzmDHtTXJEpmngWQn1CLadTZSRCpRgTIkiKr9Tt35Zady/KlZOeKwvMP2kZElExRLlEqS+lxWyXS+o2yKtsXr5t7/JWsHK8IMzQxFTqRHX/3LavTdu/okLznSK4hpgwPusvX1cwx/Cha8Letu6cCkGzqkSBi0Gol4LKeQpJe642OAgZbkgF74VQdq+RxJEY8t9S1CsYnJIKkDSsazKrbOzbjetK8Fj1XCFaeWawWxVCPhoiDFOGYsGxV/UsvpZeslkd6hFPVbb9UzIOmh6I1ZmO9CMeg+6BOVNs0YU6X
uv0lvua0OqTkp6eqhWmEBa1W/YYm+YrrVg6N785MAWPGwd0uMYKQIQcm8UyQkM9VuPim4MBWVJpJlkoBBUuAFzEQColKPhxABflxBhUZhxXZ96EaiFDgVwkB7mMARVWQ41qEAEeEBwaVyqdvbDh5vDiTsTEFFoQRqr518Xn7dUOW1VldjSQR33dntTmLl7YnOFuxvu61lw/PjOLWhW7zvMS6M99Z3975/H3/XLlq7Xe++5uKRQUgiXh1jUpzc8isBlo0YzvgM+g5KAH4CCQAMGKBV41A4PtgLd2A4snagzuyza3a4AnF8UoU1W+6YpxiJGGjsaP6wgN4fAIBMFkGz6UIAOEAEGAEmCDfpVTCiRpaV1d39NAwAKHUR5h6XjCr60VIaDqNRBUB/vSUSykEAUQimu3YNSmjpiY2PjFRymoAwAXHCLjwKYWA8TedfQAACCEIEYxh1tEWgS5ATta55YKNWIixqi4RL2CUhDxRXbI6QiTbsfyQQo/tMjiU1p+JxkbEyAjmWFKjbrUKwAB4HOFKJA7ti4LDWyUFEp0LK9PjTkjXxgpm4AHBKqWy61Ux5rIsBwFwFuaivPw05uTJihX80htYtaAHLDnW6+w+mDlwEC3bENvxSmHxEmXkRJijrBJSW+fSfM4s5oTnQVip71yWQUw5doDZnt88VyubZqWIG+rkK25ik/1oeiY4cQCXiqqiVesaIT1BbEemRDBwAIDQ2cBKYD6AoAAEQJEI4ZwJ8BEOGPMRhmjUQELMMA+IFBEBsoFhSXGYoyKZMxdJSGMcOIgAoQqGGgmQBzaaGisgRalvTzIwGfcEQ0hgSZb2HztQqBYCzqpl65EHnoipNU89+szv/+/r6vrTr/rZnz+8/sLPXLvylcNP33jdjZPBxFNPP9G9ve9DN32svW0OMgKsM0ABuHw23WiW6//WeoqDIKcQ8+2hhAQh5GJ+yZmX3XHD7ZdfeUm8JWKzKgqwijTvza7R2/lW/915fmsTQlBKOQdCKEZ4ZjpTW9sAQgBGFodHtjz2+MmB3lwNHwoMSopjE3T+cvfF3S09b1zdfO7c+tYD41sP5IrKssKKmgWNi646vKwuLUnVdHVbf2/htbGPrN1cCmWtY8E8dfHO/le2jWz95MXLf/zURwbSuYQvxZtqGIaX33jjxWdf++G3v1UozIQjspC1ay6+9vju43/40z3nXbU5XZwIKWGJy4LyQqGQSqUAwHVdiSqvvfbafffdhwq5WE3dqrWnS1p4KjPT3Xv8b3//c1tbs0yV/v7+uZ3zNE0/uP8wxTB/Ttu8jrleKetEpfLRiYmp9H2ffsc5S+Ij7/tN+2ltuaXR5NHg8NCJ5S31rDOECxmklcwIoPYavY745Yw0pyWwTWc0G+pUmK2XAprZZs6/+24RXoSq/a+8/OjXbnxyOyG1LisBSFGaiiQNLF1y+RU//eWvN5979nMvvjQ5PrZ89ZrPf/aub335y4VKcdf+neddct7N12166NE3auuaF82vLxbGvQAxj5Vz08vXX/zQI88pWtiuVGIG4QGT9FihGlxwVosRl9adtukf9z9/aP/gyjUpEP7UcLGxaZ7n84HBUUnTgoB3tM8pZGZSNclidSwzUzVtqElFjZBSLpcURaGIZMrVasW/4PwLXM98/Y3toRCKx1NjozOA4MKLNuSz6Xnz5j766Msrls8fHZn6xS9+c2Trrqb5TciAN159YbRvYP2aTXd84GOdS1ZfecW1zz//7NVXXW6EpC2vPadpHLDr+Uym9KLL2158fGpyqKap00qPZy+/NB7VC8kaNRyqqdjjgIGycHq6EktBY4d67HBNlY33noBsVmqa44OA7BhEw7CyK+ygytgExOsgM66pqsqIXSwRSvyxURga8xSiuw5SwF62lFJGOltsZxCQJhOYO1zprrj46pXzulrE954e6JiTmq4qsXh6XpT3Dde+cnAygqAgNBEmlFYDPwxVroLp4AgAC2Nz3Rn1AsjW1yZmZ4JCAMagsfjSzXyguxSlsHAFoDDseR2+8t3kjtcLjsPb25KpJnP7S9LEKAQs6uPxieHomrOrLz+GY+EQNUqlPGy+IsIFoSjXf0zu7RENrWBVlFDMHB+ia06PIynXs5+qMtZUyWNlztGFN4qXHqXpqaCxob62LZtKJV99ZjoeA8+BG+7A6SlOZXrkQFDMhCtlb+4CeWpILlf9SKzqe7rr2qFwvFTKzp1Xn89UiiVTICAUmA8y1lVNcf2C58FsEzaktgYwEath0+MAQhbIQ0J6M2/7v2U2wPms35MghDDmazq17eDisxpmstbwgLNorVuY1npOuozwsy5d9Mdv333WplvTpXR9cyyXZbZbmdMJTc0ywsbkRKFrOWx5wrBdUyJgqCoiTqGoIHDDBq5anItI+7JyTUwbPGnns9C2BJpaYKwf0kPyhs18+3Pc0CUffNvhIECVkkEQMF4BAIXGnMBFgAUEhDiCC0qRoshV052tTRWFuk4we1mdleq+Za8hK5gQTAgSwEyTIy7P5lVzzgUIjJCqybquYsIrZdP3WRAEs/aTCKHZTLNQWO1Y5ESiCvjxXCYdDmkhTd21nVx5h31sn9p/wrXtKpVIEMCcxXTdckiPk2JJms44U5OuJEE8rmfSFoAkAEJJ38rLHLywoVpVJxQC2wUBgDD4HggBCCQsMcY4gER0v6VJ/toP4juen57fpe/ZZpWKSEmIaEIfnrAIoW88G3Cfts4PMBiJhGlV9f5up3OZ6JhXy2C6kose3GYyEjiWUtcsonGpVLSKGVzXxMwqaLR2ZmamcyEaPCEQyHrMy+UAI2AcU0o592IxPWB2WA9VSp5gtOLab4mkJYn4vo8QSiQSyHWFjATjtkeERiSoeiATE3tEUlUrCIjIKjjCXRxw4nBPplTVEbgEIAgQD0BWZn0oFY7EkRMHPeR+8atfuPj8S5KRxp9//7eHDx6LRM5JXff9/hVL1hZODN798cf/+eNNG5a8MX30Yx+5+bQ5p+94cu/qZWv++uQ9LpiAPAmRt0EsftP4HnEQ5E2rv9k70ZsWlUtOW3v6ko2r5i27433vRhGfSz4Fgj1JyHz2ed5qPs8+/i38/o/vUsA8QwtPpaeee+7Fpobm7dt23nnnnWPDY8df3LvmtvOaEqnf33fvxz9yLyQ3yaUSoykWrlmuuueN0RWsvhLKV8w09fRhyA/ikwd4ekoe2bT2+qYLr5mOSjt/vY+7xrldibmNKfd1PB1Vp8sjxwZ33HxL257n/vT9X37rgusuM32LU/W2d93xsx/+tKE+TuXg/scfed+NH7xo48WObz3z+hMuclSkIJ+CxABAgOjv729taVcU5a677vre936weUmNHkmc6B6J1zX19A9qITp/cZtu0IOHRo1wKBJLHD108tLLzvn0Jz7xiY9+ZHpyan1n2/Hhqeiy1dYb275WU3PDxHPl8z4XbH81oQCsr4OYBIFpVxkxALmyhKpBRPdO5p1bNyXOm1vuO6wNDVHNRUrcGp3Sb/gMzL9IwFEvvUOp0SpS7VNf2P7AE1O3fvGTPZPjv/juj99
55fWvbdvW0zd+58c+sGr1hpvvuONzX/zS8aNH161cHjD3J7+5e+6SzqljA9OWu3HjusGje2Oh+LGB6XPOWbGqK/bP53aWLF6p+C11dc21kcxUulAJ9HhTV1eQ1FOZXHYwP1oTT1XHS4019SO5kdI0V4zQ+ExVEIjGwq5ZRUws7Oqo8jGroqanbEpxgLwzz17ce3I4PWZyBIKjNzWXsHrNouPHeiyTU0n2fA8BrFq1MD2RnpkuCEFuftdtX//qJwbSQ3/8++9CMtY9URjLvbF1/zU3vuuFA9sbG5KEBtMz4+vXrTtyqHtsZAqBElODmlbBsDp0gt58S71dPCJj1NSqVmbM8UkWAImEZCvPtLBc01kdHpYqk37jvNA/7ndoSGLErm2EgYO0qQW11vOiHRLh0tBheuaZ0NVFhgZCL75YvuSq+qcfG8vmMAKCwf/4R87IjhURPZrvDfcPQY9V2bhcznVjRp33fqruu7/MFwtizbK6JV2mYOo/nk5H400YnGKQcytQI2HTgpYGHenSieGCAsA5zG+InHd56wsvHvP8muFxs2NROJeH/Ew+qgRGilCIuqV8qhYKFpmaUFYtwxdcb732FDI02jpPbHvVW74+/NgDlZXromWvZJYAY0iPAKGa7dmKLKtRr3MOmRiKV52sbRGBGA/w4jWce8rJw24oFFq5Pjj73K6ffP2oaYmla+KH93lUd4lQIjE3M4Xq6nyzhF1HO/eiaGayWsj7FceuWKBGwHWwZXMIADGDSqbvQzRJS8UgUQMRo2ZipBwwCmARQAIQxjxWI+fzHgvCCFWEkGRF9nwTBIDQZMoZc/nbCCX/hsGzACYrrsvQ7A3PbmpsmN+h7NlR7FgWHD9U3nD6/ACmQ+FFf7r/9y/fu/UTn/sQyOD6iiPc1g7oWljrBTOZGehaJJ/Yk1R0Z2amkJukMoVla+i+A45wQwgsDpIgLnAgKMF4HpDUukAOh8PH984AEEx8wjAIOQCiSqrt505dPpkkSbIXmAjJAnmUQOBJMpGIbDkOSDjkMRcBCGAICYwBkCAEex4HQTHChCLfd+GUoyQimHJwTr36N/NHZn8l9NSAc3ZZ8hZ+YwRCRJK13oKVXjnPkzWwdJV2bA/RRbWuVX7+CY9zyBcUirCJ7FidWp1i8VqCZUdRopNDXHBCodjYJIVr8PHDlAOnkixQOQDR3Bgt5nlUp6WC57nCB19RwHF9AIlIUcSzjNFkXXDO2fWRRDqfUSbSbjYH4VA8ly0mYklFzU7045mMkmizU23yyC4aq7cISlos1zIXhk9QTROtXSwzY1RLePn6ysw4HNwBFKsBdySoD0Rh4Qq3phaAkaFueXpSI9RxAosSlbMgHqetc/3Fq2SEbKuEclMoa3PfB8cGDJhgPTtjFvJCUQCxwAGYddGF2UUcCMw5B0QRDgR4ggkECmNACHDkOkSoLgcfYV0XCMALsCxVwFVYREX5iYL1qe+99+JzLrvnT0/seHzLnOjpF/3+9b8T8HZ5QfbZzzXWffWba52qd/bNK9934btcgn74zfuv2Xz9z/7wLdspg6pTUiJMElhDxGLcoaKOBw5WLNfHMop6yFFFwDkPJAU8ISPyjxef/ukvvnPj+bddd9XVVjltxBsam5tMv2DI4HOJB4xioAp1PJcQIhNJuAGivOo4Rijq+i4IpsoqY+D7SEAgwB8aPnDXXR+K6rGN6y7a8tJLjU0Ry65M7Rp76NDRF/a+fvWN36/pvBpX8tjXZ0g9UYrndGc3VzVGpJjsJm3XqU3kZkZD5y4ejlgv7H4xKxleXctEmUjSBj899Ovbbhs9cmjXoDM60TdoACwrfWlF/NLT5uEFCxa2qWFjzl3f+nJDsuUjH38/YfDDP3zjz/f8/pLFm08MZD78qU+fuXFVxi0eeH7bSO9QpVn5wde+dt/Xfn3dB+646aPvO7zv4bNPa6SqFq4P7dnfP9hdsnP66jXzBib2Dg2EG+bUDvX2nLdp8YLGecJV12w8c9Kc7Jk4et+vt5y+uHbhhvUXbz7zn++566tXX1Zf3CO90q2BZrq20Rqi80M8imxgEnDZqjqyo47RkY5Y22QebA4baqoLeajegm7HrNuAL7gat5H8c79r6OuFufHq5i7/u+lb73fTMt/Xn/3RnZ/oXDvvxps/LCGwbPrlT33sq9/63oqLNh/ev6MlGb1gw/Ltuw6HEpH+nvL8ZeFYPLRr61B9gzY8UFi5ZOXM9MG2Oa1aONQ2p+MPf3o6GoGGuhoAXmDOTJ914RlzTbu6//A06CRksHY5annxvDJdlu3CKKhAa0NqbWty34kR7AEIaG1MFku5VCpq236x4JsOBnBramKmaboOE0ABcS4CTIAzkCh87JO3/OI39wkpoeheU0Kf7i9ef/2mT33s+39/6HcrVi785Ce+u3BpS+fC2IN/ey1KUzuObrnw6s3vvv29e3e8MT29Xw0pubzRs2e8vj22fmProZdHztrg1sQdUIwq56VJGBt1lm0Srh1h3ALBfBMRIZcFRbL02ksFIQNVwSnjziV874v62lVWJahxkVUfldVQ0fFa9+wYRQ50Lq4jpHBoD6qtSazYKDlW5dgOd/5CzyzKG04z2upaIo14rIi/+53eSMyUqH/RhYnhE4XT1zc989D48mXL0pneiy5Zc+xQ3/Fj06QteXK45KaD05fX7Tk8XWGarNiei5nMIxF94cK6maGh1vbk4cNm0XaAAabAfQDAsso9x6ipc8+8KAXVmZFROHFMrptjW1m9a5GfG8fL15GJCbZtiwjVe44FxDeWrHaqJXzu5ZHCFBcCP3RvLlVnXHCtePYfnqbTatGoVAOslhAzXI9QWl69LmRZXu9RlqiPpKcKICghxPddRcWGYeTzFSxBJAIUq9mMg4RECBHgMiGa2mByBAhSA+YRpDOoNrVo6QmHMUEpDYLgTXNE8uZK/ZTz/Fv7Z0UHGP+bxfFbMIxm/VgEBgRUEsyTw5K44prF/3zyMHMlQxNLV4jBXr+uKZKZFpNTJkH6htOaug+dpKqcLpHGeTgaQxiqPfspkQPmUlnCjHuyKkeSIpf1BdMEtusa5Lb2xu3bhpNxOZ/zBAck0H+M2GaPSleFqmjlqh0EQKgcMA8AVJ0yj8xCKaU4HDFkWSoW864HiEOqVrddy3UhpKvlkhuJGLpBx8ddwAGiDANvaKjNTOdcR1CkBoIBcEKAMR8QIAQE4yDgeNYiCmQAC2HGOQEQVAmEC0gBjCFi6EuW4+kRf3zUbe2Uok3+id2GVXEYMAYKUB8EQcivS0Jrm4aYdGhv+bzLdM7E6IBo7fR8ph3cbRYLIKtUVlC54kdiGmC7mjUYMmVF8lwVUEUPgWdRQgLXR5pk+K576RXxNWcFTz9SGR/nQHA+64e1Vo9PdSzxDZ2mx/DQSdY2T54YtJevoxwCs6ANDtrzV9B4TXhk2KltsUkQGeqtMFcoklzOa6WSpxq2isLFqhyL59cuFrUp/fGXLNlIeZWMZYUkyb78pmjv8crR/YpqVGUVnLJOJSea5OvOSFElqJZwz9Hq2KDHmI4E9xgLEB
YYE8YCx3EQEN0w/MAX3ENIEKIAV4IAEGJUEjZ4mieQrACGiuOECQWMAfMAofTMTFNt6wNP/OrYid0/uftpJQ9nf29PX2NL96uH/295fZPx6t+/+djuwaeuePdlfTu333L17fc8+Fe7ahx6/UBrV8JjlqTowMqCUw4EKOPMJ6BiAE58AVQw7gTFkKpyJtmMGbJ873333vXVb776yvaXXnytNCoWdzZecetZEkYAkhMoMnLNSllTVIa5rChVs6xLCuYi4FjSdNOyDcPYvu2NRx96+Md3/zKbKbz8xsPPP7Pl29/6yn1/+0EhN1hf3/jE40+3tDZWXfOsOed74Xhra2vXorNWb3zv4hXvyk6MuUGjSytnu2xjzo0gWeO8Vk9ssXp/iZ+8cP77P//gN+7N7itVS+Nbenfef0zl2Pe8S2tXLGho7baDA4d7Pv/w7V968dEm7o89/OeP3bj5a7/8kVmd1Fn04uvf9fKL//zU5z5RKIwN9xUqFb8wPbbn9W23vvuDY5ODlpI3fOpjT4C657WDN7/zirR//Jobz7VnygcOnejtpYwgWQqrXCqW+ymNT2dEpZw/Z8WSlE7LlrnlULcNcMuN77rh4ovb17cYDv7Vb3756D/+sX7t0kdfOnoOoOfWnCuGBrNmOhlgEbjQga15LtWpJtcDS8OMEtAw5bbVTKFiEuJDu6S009KMT6gW6oxXpIxHcNKKZ/aNpC5Y5Zzx7kP7YWb64Jc+/JejVRGvRWcvrj0+ZN/6jlusUupHv/sqNUSIxFhQ1aUOSZ6M1vLBk05jU9uCFf7ebVP5DGc+1KcSDJFQKDQ4OPih91z/mc9+oq6hGWR1KDPx/BNPfPzOrys+LFlQE25oq9js5PQRu59LBiTqUpNDGSQohiAU0cum1dJYLwKeyc7IEugh1Nk5d9fufoSBCKwbWqFkznZJAEDRcF1dDRGA6PTixSv3HRpwUNn3hVuAczbMsyp8ybK23pPpbdu6P3vXzXV1Td/81k8WL22Z6i/cdPsNN7//unPPvNnNq6nmaTXq7ttJbv9gy1h/4pVX9n/og3WrF6HxvrRHoWTGRgaKgavXd1hjA7S5Ta9WKvkZvO4sGBljzz8Ca89OrDizeGBb9OVnK0tPCwhPNIQCVEPUiPHYn8YxaFjzI1G5mPZau4Lxfrp+Y+N5V9GZGee3359cvzrZ1CAlE+yVxzOxFPgQPz5SdAKxcjm+7vrkP+8zFyzQVYkO9M1cedPcZ5/sG+mFXBZMDG319cwqShpYlBR9YeYdxZOQrNp+KS7A18NJ7I84TA6YhzgSMYCqJIPnoaY2FArBdTc39XQPiUJSSuSOHgfLAjBbFH2scyHs20Idj81bJU52w+rTIz0HypvOaC4UZ1SNYmqXK8Iu6BvPTYwM5btPWJ4NmTTYVUjUQ0ubun+7QyW45fb1zz6516xgTphZ5SBAllWAwA8CSkGWFc8NEsnYdDqnKIrruoBg6fKWWEI5bf2SH3/v8VmJBGeKAEtWUOCGiORgjD3Pm2Uy/r8AePY2xlgI9v8gYQHBCgMXI5Bp1HdLl1689tiREyNpYMxOJviS5YlCmnT3ZLiAcJxYJouqdNmypkOHRjZdsPZE90Q04aQS6qHdUiY3mkpGsjN2qjFonhM/sMsMGA1FnWo5qEnFLacAGNavX7Zr+wnmGa5b+p8A3N5Sn5nJWm4AAIAIAFCZ+J6HEeX81GoDE6AUex7HGCQZTtsYr6mTpiazTlXP58xUvUxkd3oGMEaKonBfUGIMnMy7NkgEczHbbEenMmrhTdN6AYQghETw5n8WTBIgayq2Ha7ozp3fmPvYX3one4ELUj+HrduQ3PZyLqShAEQuD5kcYJBUmVEcNp1Scwu0tKr1rd5Id70vTeqyks8EpUJoJhMI8DSDmpYjhCAycA8AQCAgBKIxLZFIzExWK9WSgGRjS27jGfIZF3uP/D402OcuPt0fHgItBPkpORxjx/ap8RozWYc9W9IibiVjhGOMSIFgcrliNbcnK26uoRXGemKyWhzshogeStRXGxrlA7s9q0LjNUF6CjZtiq9Y4PUcElv3Wx4CWWg+D1JNfjQOfcdAVhQ9xAs5aJ8nu7Y0NVEGUBD2ZS1wXQABVFIQZ04QBJjA7ClYrVYRkHA4EjCbUsn3PQQyRqoQiFAA8AWXEIJcfiZXLc9rn4udYGBy4KNf+8y7brz+kd/8Ugq1nXdu432v7Dnxj4Nd5/y5/4Ibilu2/OqjC95/SVNmomfPwb2Wm/3ja897g1NXbr5kwbqV01nr1qtvDpycUFQQisQZEMnxbUIVhBCASRF1AyLLCoIAwPJ54LpKSAsd7d+7c9+urrYVu1/Ysat7W0JCX/3MTxkxUvXyxOAASUU7muZgSQIA1zaxLEmUeLYpKyr32WxPWyA8PDD45JNPXnD++alU6rWD//zA7Z+/8tJzzj9vVcVMDw4ML1+z6pkXHk/FY4V+9PyOI5dt3HT7Hdf/Y3v6kd9ba1rmWyUT1cyNChybPDTHHO90wgSS3Yht5Vv3wMuk7rIzb/5S4oL2PbsOnPzblpZYpzu1xxnTGeTXrL0oCfwvr77zD4XcF+97RDx99Fe3nHvD1VcY9SURJD/xlS+df+6mXQe2PHb/P6699IrODeu+8bHPXXLOOb/4631XXbiqaqD85MTKlsS0WZ4xYXFLMpKsllgw0wsnB0eKRU0OIbvCvKrk2lUAGo0nPnXrZR5HDz705Ph0ZuHSZT0nT/z5j7+/7KKLfnjvH5/855a2TvTKP17WdDVyWszenv6mtOjWFZwfmMF2AJGgWrQ0gQiiIPmQCtthQjokmRQ8FmA9AvFGeiRd0YrhZfFS1I5GEDQJWw20cbUKKfqxL0qME5b18esSXbL9N8e/++Hnnc65xyeHBKIqDV9yVchn1gsv5SINYn5no8QSrXPDLz47ks5MxMNzBoeGFBVWr23dsi8PXnV+e83+nbsNowZkvVzM62GV4lAG+6UTQ6rvT1h7n//1TzuaO3Ky8+0/v5wd5821raZZrtglIxLGQCrlYttcuSZe39c7IhM1m3WoDCyAcDRaKpQ0Dfs+l6jiMzQrAo7FDENxFFkvlSuxBkUzYpbp2+V8Kqa7JgWpLOFQLute/84zY9GWH33v3lQDlMpQLktB4G3cuFrXszUN/uDIzPCQ/M6rz6uYRaT1LF5kDewkmQmztoPZAFMTcmOb19IS27OjGNJls6qHaopaSDm6NXTBO3Njvcg29UhSxkQ+0TuNg7bbbg595/sncnnj+pvVcjF7rAcam0OttU3Zav8zf9MWdDZMz0yWK2ZHW0O1MLV4PqltEELwdNromzQnZoC5xre/tm782MizewauuKb5mYfHFZV2roAXng7md9aGwtWh3qBjTurY4emSHyxfHyln8WCfD5oJNnS11KyZ21bxi1t3DhUFJwIMjfzwx5/47J2/LFtue2dkuM+6+GqdeDqmJQz2G8/HHeyed3n7aL9lusMLl4JgytSo2tQlcjnZMnNLVhC/issFb6gPIglUreKpcda5BA5viyxe2bB378mGRllwSrFkW
eVySXQtMmzTnBoFSQbLA8FBcHTK7eFNcaUs6eGIUSoVgiBQFMln/oqV7aadPXm8CkIJG9GKmQUAKnMBQBBGIM3qBd5yUny7MYB4Mynk7bf/1XV+ux0socAFBoYEIKCnb5wHJNi5ddAHrqsR1y6nUnIsgR3TGBvP1dRFC+VS3DA4M1O16m3vu+ng/pktW5656/O3qaGmD77n54ZeDcfBtYFIKDMtZCnsBxYhQgCXJOQ4QpaoqqrlcvV/HMnsagCAUMkPfFmmrhcAAKEqCwKAAABmi/63HqyqctscAwDGxwu1qURmJs85BAHU1ia4nk/VREWATxwrBB7IBCRqmKZ7ahYuACHy5rMxIQCBAuACAAZFgEDEEwKASwKIwI4RkjRJj8ZKpSwkQnUdy2dOvB6ZN19fc1ZxaABly1bHgtD257BgQc60hI+b241Y0jpxmAEzKqbd2iWyaaEbEI1JyXq5UDbHh6CQBQBj/sIYQxOTJ/WAWcvX0FA0UnLzgUfmL0zW1gZvPJdfu0Hv77OWrlJnZuRtr5bf9dHIz79eJqDOW+KnGvni5YkDe3KVTE3PsRwWoGokGlfL1aptE8dDIBgIgYSmUc0NiqrMNQMsCwIfMS6oojDPXdgse7Y+kK3qUUKw6/lACSlXGHAqU8MNShhTgQOMAWPwfUBoNtERz9IHUOBbACCACSEkSXYdTwghywoG33IdWVYFYIzxsWPHBod6J8ZGn31668WXX3jPvX+YP29RqS998abNn/zJV54+vu2hn/4tGWMvvfayR4ZKxTlP/fknF9+vhGBZz9eaSH3JRCafKtx7769vev/n3/XRj3/koluvvfZqFgIXBPVcih1MJdcNFEQY50QFz1MxxhjPIKAIxZ3AxD4RyCaqDkGIYghI9eWdW15/cmckVFqx4jpwp5O1ncd27Yu05F94esvO/Uc//pHPDg8MrVu37srrr/N54LqWhECRKVBkVspGODIyMva9H3z/C1+4a3xkYMcbW9L2vtpEanJ4cGp08tZ3fRyRxLd+9OVFq+vZSK5x7prLb77jyx//7PvefYXUuuzOd71ylnGlRCppCxxVjem4zXP94YER93i4ISR3LYqvmr9/4sktT/xt3aIbEqtvfe1kHjzLtKtxFloQTE8OeSU+duP7zjQ7Db1pye6hwrGH77vQzT759G9Ya7j35PZLz7n66X/+7sHX90/0HBmfnkkPZCfz6bXnL920cumDDz7e1FqvskIkldx7aGpOao5pz2zbMxTXtVAsqkYqdam2qcl8ftqSkDjr3MWDPdnp0Zl0uTxnQVs4ro4N9q5avvq22z72oQ9+zq4UmhKqw4RdVg0p50ZY1VISlvtRRD95/unugR1yKVBJmKtSIHlVs0odrAGXVYBOg7fG3Chx7UxsyubJJjDLtg5GnZJbXA0v16oHC4WlG+ee8x2QjlSP3WvtPZxoxrR5xczr/MMf37eHwfUfvECO9Bf6izPT+doFMQZxt5hhCnvyH/bGs9tefWkkWZNygtKSFal8BmaOT4Tqmo8Pdj/39z/teeSB9auXnXnzzeEFK/2KblhOuQ5+99MfPPaZr1+1KSYKRT2UmFyz8E9/2J63sBGVKpYbCgHzEHDheqDIQIkUicTK5WLZ9NvnJBjzp8YqqVTMdV3H9l2XCUS4CAgBwgGBhqkTqhGOjStFvnbtnHKpONhXWLKkUZLBdCrj45VkQm1tbR8anij4TNe9hQvb2hvW/Pn3z2NiceoLTzv7DNsLYPxYw00fmpI8aWqgkcZHxseMzuXcLOmEhxKtI74He3fAguXRv/0KrdtIVm9yenvMukbIT+s9Jy0tiicm+Kq5ynBGPtZdaW2UuVPTuNRkjJ3cgyy3smBhfX1zNZerHt0t3fbu1c0NxfTY4IEj3hVXrX/wz8M2KzLitbbqzDKJo249IRNkapKshvyy6XO3gYFHoBqrc0+b29BzeKpCajOVCghfAvCBt7VEa7Eda21935XtN37mxc994Lz7Xjo2PZA+d3P9jjfSnMibL5tjldj8lf2TvTCnjfSeZK+8QPQkW7Eu/vqztGthuZiDeL2bm9JMz7YtkBXoaNN8z25sMKhu7tkNFMI1LZVIOH54T5UxFq8J5zIlELPt3UhjIxUkXy6ATENWOahtCY2NZkFAOBy2bTMIuCRJQcAFMECwcEFHd/cgCJBlOWAe50DI7GwSE4LeqnERqIR6b4ei/5eh8duBGf7dP+BU3YkkRHwaUAy4syu2cOm8Rx/fwThggnhAKEYKVUPJSmEGAkYEsEQyblVtSWiSUqxURKpGTyQSmOa7B62wodXWs/YFXv9xsCt109lp31UIRow7gEBVqWsjAZiQ2TTN/1mNA0GUiwAANB0HjHseyJLqeQzAf/vLwRjP1vosCDDhXIBET5mNhKIQT0KlbBSLJghAIBFE/cAHAEVWjIhPiJyZrmJMOUcAAiGmqMRzZUzdIGAUy0HAZ/EeI1kID0BZsCiOabqrXbMLSjhZLGQjruusOcObGggdO1oNJ/SKbS1aBn4llLeD2lS4VMpoGsEQfuFJyw0Qpq4Kms+ZEH5dU8S0So6LHIuAUKIpMxqDjnZZkfjCFfLUjDUxrs1fGk3Upu/5LsxfCFe8Q5vTpT73WEFwbbgfdm8lF9xg5tORw3tLgim24+qaYsTdlhaqqGBbQSFD8hlRqvCG5kQ2m48moaUdNFp37MB0fYORzZm+TQM3RBS5Ys0oQqegccj7IELxUBD4iLuypClqLJfLeYEnUxkg5LI8OsUSp5KkcR4EzMUYBALEmSOAzebSSJJsVi2EsK7rgDgAcAgEsM984ZPbt25ZsWKZ57iuh/bv27lw8QLNiLEZxqbKa9avbl7Z0Xf4xfnXX/Wus275wLferfV75/7u8Ss++tQ3mxbf9eW6aXuiIdQFAVj+DNKiPQd2G6lFXaFwhQbY4CGMhM8dHmBACjEAOT5YLAgFPlBSoFhCEGHEkVHYEqZCDeJIh47sOTZ26O67737oD489u33L84++/s277qpdXvPbr37+SLr321/+xrOPP5qZ4seOHL/2mhuMaOy6m64HDARzCPwK8wxFASEA4aGx8b7+E3ZlJj3RPzDyfF2y1lDUiZEsFY3Jmvkr1i/+x+O/ZGMVkYw2tK+Ka1pDTDuangjMJekH54bBzRdmLEnPeipTMWoz17yjI9YshgSBcqDUxj1SPPHkr/f98fHW9o9Ww+dY1HZKmdPEHGo5+fCRM6+54EjxtUouP0Zi1mQltOPlWy+45GdPfSvAzte/99P3Xnfrd3/6hd/99oHN604P1+t7D+1dsWjNvp6DGqrO74qjWMgri6BA9x0bqq2vk8BrbKk5fjgNYCq4Lj09dtbm5TVJY8e27ZTjvIb9maCGJMqVslZj2LxqF9n5p68+lO4x+4MIpKpy2vY4AqW2XZkcLzKADxD67SUL4jMzbCqLOEcyAAHPBEWirsBSQJEqvHlYijk4hn1QJcuGZA14QUbLpzpCvkmlOz8WNC9wTvyEHpmQrHAJj0sZR1FYb3nTS6W6LQOP7NoKy7uoJFDZU2rbq95U
osqFF5B80Y4mQmZVnrsw+eQj3QBMyEFf9/b0wIlnn3r+y1+/W4012H7ARCB5WNW0P/zxB+/7xOfW+coFagTm4FLVbNzQ9trR4p7xSgWV4wZEJN2x0FTWbKwz0mkzmYhWTMe2XYwBE2AcFIISiZhpmmbVCwRQIiMkAuaHNM1ynYBRQv2NZy7sPzlTyFuhhJ1PAwEdqCUpEAlHuLAR9bln2MxsqA8Vp8KFrDl3UXXNpraJEWv//un339SCtanuA+zMi0Tv/ujO10oX3aQXq9bGs8OvPSWAViMR2bdbIg1DbqA8/4i76QI+fhKX8hQgNjVhVbxqsRTfdLEXckyfdhX83s45iVeeKOEoy06qwncwhSXLQ9Ppqm8my+Xy7Xds3Pbs4aVL/X8+5y1c7DWltJtvX/zkQ+l8eTwWXdK1ZujhJ8yBQwQJGSPGhYjFMfJdVYGli6WutuannhjKmAmi5n/6/Zu+/50H+iaVhe0xr1jsL7rvvbjx4dfLH71x3Xfvf535yjWXdg2PHz9y2FuyrKO+bZqwIGK4pclE/8kETvSHk7KiomO7tHCiappBtRjqXGaqUTExmNh8USSZwDtfG6xtgsm+xqoopMd5TbNbSMuhCGSznusAcFBosmleDvHQUF9VcFAUyXV9gFQsWbGtQHDi+z7nPBw2JEnJ58uAg1AIHAdi0XAuWxWCypLs+S5AQCkGwEEQIASyrLguYIQBubPp5rN9ZozxbCNaCMHeFGn+WxjMm9vszrf2I4UIjxGhqMQ99/xlz75wBLAaCAe4BuDOJq3P6VBnxkOWV44lfSyBrpGx4aAuqQkPOhbgSFQvlrMHDoj3fnzZ6y+czE4puXy5sTk1ncl4HhACggECnXE/FCahsJKeKklUCQLvf7bEMRBZwYTyiy+94IknnkOAPFdIVGPcfZsYhMBb2QiIYwIIAfMBQZhiFWHPC0qAZwGDgBCzvWWEuR/AnXd+sL9/6OWXXq9WPACCAAAFQghAgKgsmAcUwAcQGgAGsDsaDZtVlq6i9XUaqyiOnT15BBavMizJnByCaAIyM3IkhUBydVFf05AeGYH+E3DOeTQ9hjJpLggeHSSm78xppYYeHh0peD4IDlSWLYsT0Ila9hy6ahWEw0GpAJwbx7rNxatJ32GtscX5zj38x58OJ1JQ32Y/96jXuaB+spCe6ptr+wOqJnGmeC6NpYpt85TcKNQ2BI7HZibBsxTHCSI1LJyAmgZUyupDPU7gs0gMjBCVpNjMdMl2CHCPUs4CTLESTtgMIGDR9kYYHiw5Ng3HIFGLKqWgoTEpaHa6L2FEq/XNZHzCLuXAqSpcSAJ8xLn1FgeBEDI2OiFJcn1Dk+dVGeeqSi+97nwvqLzrXTc++PeHWpvaeroPnbl2mc2dJ7bvWHfGxd6MVR4bamlMHHht6jM/+PZ1177rx/d/1c+u3Rdb/tSPvkh6Xz7+0FPzr175k+/+5QdfvnP3q/sfeeXBQ9senXfWe7/yidt2dh8/7fRlk4P9LXVzmSII9cEDQXwfPIxDFOnAATwfJBHwAqVGb6WIqkFnKvXS1qfv+tZXL9503t6texsXNhfz3R8871Nj6rFj2188Pl1TOXZy56u7P/ilT9/1uS86Np/JFs644FzGPcssGYrskpBGkGdX39i2o2lO25x5TZ/79B23veOy3/728yGtXpVChUJhfGx68eL18+a37T/yOgqio7kJ4tfk0+k7P/Qxz/B//c9nooNXzyWbunMHpTaDRQ1jQSq8vn6smrOnfVTkJV0Wst001xgbEMPPPWeeuFdYanPwhdCyQJmZp5OpYwNHWteviZ6bcmh+URsZ2Nq986mXUXbm6LO/6Dr/nC/94JM///FD33jvO7X5bXfd+Tk1riPTblajI7giMGuP6FnsZYZK775h0wv7dqxbc9ZY99j+Y/0tdXG/YlhmxazasRo1EkqN9E5ec83CrccOrFzQGVeoXSkBgEAyBMpgz7DT0qTi6qFdJY45gK9AOKpxsE1JVkcQ21jxH1ixJBlMUsuXRy0AUZmvQsHlk0wDFYPHZK4gqSho5AwDx5irVYOKbyRr/JzFpm189mrcTGhqwCsLOUnBD9yJ3OjcMzov/Qy4FdB6Hnv8ta9+dFtSaws3FNaes/iBB3am4vP0mDM9XZpOs0hSlPMNU9ODCMgXv3TnN77yXVGxUURifp5RkGkcmAxgv//rdz79o19/sqaprzixL5A7A73GKJLOhjcG8sfNIBCsJqlU0m4qFTUDCOkVgbCmxPoHsgIkghSEmSRjxzYVhXgeQwg4B0Bk1hR31qlV0fVwCJtWFQMQGQQFt2h4rqdoyPU8jCGRVGzHJQQUDzc36oYuNbZJrfNlq+zLoMtqdtFy1N9dxYgbRnzra4WOxSiWUvu6nbAmBvth2SbAWNqz1W+dJw8NeIvXQmO4ZXRsrGtZ8tCh3PGDkMuhci582nnm2o7ky9vMk73Wwi6t54hlclizPpWdzIz0N3CYwhAxYna57EsC3/Z+jszoQ6+U3nlL+65n06dvpHteVWrm5sJ1Ndt3Z9PjwBxADAimQghVYwqLzJkjElxG8cSre/I+r37jM6fnBsceeKRf1MZkszhtggAtEpZYpQrAXQBPkiOK19SiD/X6ncv8s87VR/ut5sbYzFi54vPVG5OvbsmddU5rS3Pzj768IxxDU5MSwn4mLz50V+TRv5ZjYVi1IrV3V2ZqTI/UBqWi5zOgvNFyJ7UQ+K4yG789pzPk+tXJYZjT0ZiemXSqQCnmwAU/lbQiSSRgTHCgREVY+L4LCCIRjQUwq4dBSBAUQ8RxZ8MlUQAABCMmBAbMOdd13XGctydjAsBbVq+z8PwfPV74j8YvBQgMlXrvvGnznn1Hu7sngSAGBBjXVMN2KxgBpoC55Ad+JEp8h9kuECqjoA7DBAZe1wLjk8AZaZmjFnOmWQ5RqeoDMD9GADFUiEaNasVhjBGCBAgA4AzeLuh4e1GuKartOoAgkdQKBVtwIBgx9pYvFX/z4NGbo27EGAWgsoS9oDKr6JUkJIRgDEC8aUqI0OywfNZ7K/ABAGNEuQhmcxfkEAYW8mxfkkXg+4ahNLah+hanvYERGTw7XMxWCpMgIZxo4BjTGTt6dG+uUoWa2oTplSPx0PSQ3djkxZI18ZrCgS2so0u2CrVjU2MMQSCAgBYyiOtVFS1UrVYRhoDNct84FiQk1Tn+JEACk/y516rbXtBKpUJdXbS5RU7WF0DQvW9485fKw2POguWQywJwlJkRBEM0EhFg+a401s11Q62Ypab6SNUKSmVPAJIVEmtwp4Z1qpjnXa4XMvjI/qrrA3DgCCViYt7ciI1sLeLPbw/t3Vk92U+wKajEly6vl8LpaA14VqhUrDo2LFtuHDlgclerVpgs00zOMstAIIQYMxljs46mhJChwRFFUZtb2iynQGXyyFP/+MZ3vrBh02oWBL3HRykOJxI6uKVUUzRS27h32xFnpnL25rUz3lT3VrTnhd3bpp/7wqc+b0V+3334j87Ia0rdjU2lKrnx1pORNPz20tNqztwy/uz1H28SfUmpunQ
Prw1Z0XXDHv4r+XDWcOWR5uycao7Cm2bWCBMwBhBgNCiML/IzBCgIFByLZtIRQAnncc2/FcHJBZnqEAnEc9z6MYu65LKWUYhgGB5ThA2NE0Sinv9xHbRgyDkQcIw0hKHxtLxkd9kpwYHvLLsuMYrW2HZaR4suQQxs5bPl8go2VsozihpLS/JCCgQlRiwv7mUik2FMy7wci2V36fn06PSfpPewFgUOX5mG23gaTMP3XehDQMKzbHR2ZzEd0Cqsy8bso9xy08xomVN0z+8tBh2wP/3jZUErqcwpFoBmEyazz1JZieTttHQDMhZdsm8QQhVEoyuoipaaaKvlUvvTZt7t/6Pu2aNGXymy9+rwfQiQo+YunldV5Xa/fhovvW+nuFaCv56r91NTJMaNzxwMG9xfGXvHyOHV6jlFWupVd/s+2Bhi8OnHT2mavaL2qoYMr5nL/GntUvbOzsOqVarZx2LFo+o/W+D2vNaTPOv4D1HCgkm4IlpNDveMOa5AuPKwdbb6wKgy9gu0VwNHAQ5HOFvmxMFEHoHbD56jn/Gti3c0PXq/f8pfmbR18rbE8+9vrbYZK74cbzn/ljjTC2392fdsNSZahCGzPBUXIZrNRVYllFmMN5vyEjcekkZXQYlZbwJGBTDB2jpVC299edoIvZYSuV29BSg8rKSS7bxcKQljqkpdNiiIOIo6mA0tLhLWj1+mK8FypYofoY/7nHN3Yd3hlKMaCCKoXAV+EUGRriRR9rcDYTt2e3nJJBw8gtsfWInVDHzywGa4Unt/9xbNkxXzWrDJeYNOF4rjxWZhQY3rQNEfFlcnVWotEY1CIGAcMAQkAJeJ4CDJYlSl1V4CtNx9Md6+BRnyDncZLxqAxIwCwAAc/1EKEMq/s4fzhYIysgBCsagwmaL60pOe64BYuWNAI/Mm3KRI54ABLvZ/QCI8qc6VGRk8GTKOJxpBSFw8Wj7a179lY31oxmjp6w5Pj7b7tt7ZZ3uczWgFqqUYv6PG5ozzAp/O3m08+cXXrCNW+pgn3WGVNblC5pJEIjosuUaYeTm2AwlbV3HRpjmyZxrfv38A1LjxQiI+99jYukd99QyQTlglnj1X0/VSKu7sQJI/v2s31UrGM6jrJzoqpm52WeBewxiLJADKcgcby/MCSHhCQLcKA3O2ch+eTP/EPLY7fdCocPHNr8M6MRw9fZu2fLm2u+ZKOxNKOFnV7IjWrqNe+/c+0l/3hlnGVtl8A2e4Gq+NstInUnYfjnHX/vSrUz5eGCT50wufy97349eVFlMMRYqZFoSallQiFt6aVSTcukhWc/dd6k43aVis/8+MNrl59n6wX/VGHhtfeEGn2bPls9CQkTV9xg2sYbvz1oa0z7/k/Z8ZMv+OiR0r792SOj/mD91CuO//uFH7bMCx058N2ZNbXfHNr2wOrDL1x8zrpfflgIpYoc4UvLm5umQrans62z69Pflcrg5/csa/E1xp//+LFTTslu/vXg1jcqjlmA2xGUKMyeXp8JwAKUlxn9GuYlYFW3K+8R4tI0w7G8JFLX81ziioZm0c5fd9jZseA5gemnTWVnB5spN8wPVfxwGCqqtx385uRjm0r4ciM+uTs5sOD0GSxXGAr2xRpLN25//4FbT/vXo2v/88hvsZrxw3rcCcIPR7vTt7x6903nZ8yB1Hb65tFr65ZUVpezE8rMI60HPHttMs/cfO+6ssqGFYu79OSsj7/qv/2KpUyB/+utb8+e0fD1l6/IZcLLq375c+A3cUnLt0+sOmnK9iN9R6uqS0NiqK+vb8EpxyGPgkXIqMMKQn3lUssYXTjnJLDBA+pTGaAUsFTDiNYMg2GxnzoVDAuUBc/zXBMhisWQbdocJ1CGtahnea7CSZR6vImBreakgXw62dzQJDG+jRu3c6zcdqQ1HAhansfzAmHZoq4RQnhe7E+OjJvR8vlnP1Y3VwfKy3Z/u2b2ssk5o+ALBxVRKRaLiixaloEA8zxLCRvLWdQv2Qh1jg2dOf/4O9d8+/qOD96rXjhJi0JJio9WkUIvbZyaiPc/edcDVzU3jQ5t8k8qXXDMmaMOTPv0o4nTZ/3x8e81tYE/37/IEVxWqTISJNeTFyQ6ouINWSk25VgIV0lVRnVpLbaKPGNNK5timvqWHw9lsuTokKOD76XHXqaOqZegxpmhJs7nE9FQR8/R9e9tO3R45d2XzbnmHEaufnvH2uMmLLju1lsOuwnvxDlsboSvqxNzWaOmRB22NeL74NVPzrzlJH/fUbZOtDM5HgSOYcBxGIzBI57jiCLruW4xqtF0PsiPTZwZa9vXv681vX6vtHnP9icegMbpcOPZCyITZVxIZ6ryTCEv0ZDr2RQQoRQ8QqlHEGCMEYN5ypHCqG6aLM/xPOd5nu44gAiLWIyxKEk8ocTzgFDbtj2EBgcHq6qqWA4bhjkyPFgWK8FAGVU1FBoqC1lFPhhQItHxjmXKkjh+VosX4xkQneExD0AUZeo6KBY0WY+XKnHXKKQ7YNI0mtZrgyLX0LChyX3tTxwxIjFJixh2zLWRIP8gcTjig/1HAKtUMKwhwWYIcsdUpvrHg7/t3x9IPfFJKE54f+VHndqHbz64mmj4l11N0+rHWnvSBugmHxOtop/P5rIkQtgcFBHiJTmmMi1F59s1ax9yrmvfsePwZ9+sFfMXRMsvFyIPHxlCMwKyj7Zh5+lj/nPzqxP9y8fBevGn1w7uDFz/923/eq25xaFyeYVesqPtzq9+e/OFRXX2qpsvOG3L2tX8d0cG+9X/gXe1pCaOb5acvnm4of3ys1s3HZm1pKNnz8Fg0/TiqBEuq1aSkuBEM4EBgkWfiZmCi0Lh0T6t0DPkLy0jWn86B2O4o5qv1LNF/97s4rKy9p3FH/q1A4A5a2DTq58H0pzP0Ddu/jM4QwbRUgSyb91vM6ZMGN3fFcJS6ayahJ4vCRNJ8Q8WM1X+8rwpeEVRKg0d2LkmheKRQE+1AqLU42gbc2kzXYBAhCM2EkrtUA3k005YlEa6vZ++NHpaIyWlofk3lm14e3vn9qOdbNehPbChZyEQ+bl3H4hmOTJSIEHVHdZdhUEudkMBCQDKY3uufWLLz7+WNCYvWvOinh7ZkO1Jv/Dh+7/te+L2m0zKcmVRnbNZBxyU4zweKKWUeoj+v05DMaYYEYYjjokBscTEHgUWhKgCmPVjm1LPA+ohi1DwCAuYZ7BA3Yxn5AyHiCFeqCs3WWfjht+5XD49fQRYlEkUozLFTIh4usyGbSNJmRABz7RyOc+JloTBspomTHroP18e6Fv/aMWp5RXRJx843L9nV3kswMWH3Gi10Ntbsmjl6TR6UlP0+EtuXZ2J3Hvn2+/97X571jToPQB6QYonfCdWnyWYN6x7Jt2rsbOWLZ+84tg960ZBUdQhs2VmdevBPrSrff32P44CuvyyR4DAO7/9ItncBIETq+rsrMtgwkgWdTUWY5HhOV6iLpvnzahklOvWqIaGikZQyU2fe5+Qez/364drHnghzsKy8sl3bNpIRRjtPUx//bY928Mdc1bf5kce+efHFZzVGvTRrABigeGL
jM7pQC0SOnH6yW//9ll5SJ1WFb789ruaZtbe+NrtvRu+E3UvMTTksMhVeGEkR2Y3NtRHH/j28fP/c8ej999x9kMXmYd3ISbo6xt0dK35imuYggs+zyzmJKRCmaqfPC+YZyCXaiiZGD7nxHRKfPWDrdWlQ7MevNpcDxmbLDp1xcYz7ywoLfetqm2G+InVoY7PN9Qsv2rNLx+PDPDXvrWqN38QF5WCwiXWb9qx9s/TrrrGV9AcUxMjFIZzrkqdKMs5BMY6GEyxy/K6Y7IMz3KIuJ7mWiniuh7PCqyqFrVk4dCm/qHkgs+OYWXzhJjqsDKbidmQxcmu2eOriT/k6pxcPYcYPTjG2plZPJ8CfmDRhfeLYK774vTvXv7f3x9f7cgABVkH/Y+Do18svnjGGQPBMRu8Ps0M64V+uyc3v3KupzQhX+Tuu+aBlszxOJDMX3Rng8/hAZmEJEl391nXPD5w5PCTd12dY/njlp7ltA7UnnFcbXwFDXR7tlQxuQWLUHQKBBy/38caFqaCJETynSmB9xtGsZgacXU9nSiobCAU5ZLJhCSyAFDM6ZFIJBL2p1IJo5CM+MMuMB1Hu4eHh5saJ2ZsNzOaCN1yS33lBJdxE/kU0nS/P4REznNcvyxptkcxQz0PYxB5wfM8zAs8z6oyd8EF5/zrX7eEg8Hbbr4jGg4gBxWTmYGOnuaJk7pbOwIBn8AyhWzOc2x/XU2RFACj1ECyfs7kM8ZP23n2fTssZ3uxb0nJpFDrsDd+VkavuvPiq25966lv3vy3r3J5an929doP0lVVr33041/PWQFunnZtTG/p1RIgVahHdh4qmnhEI7ttmDHz4sbmeYNDY7LNGcmkwnlYpGP9Q4FAQJDyFpuwRw+F8WDNsWVgE47x0rkxayx3sOiWlzI1M/wv3PnQ6KgV7+w7vHZzxYSGi84546VXv9z79Os51ZaRwwb50Y7eeUvnjXI1AuRfXfXCYiW1IdfmK5oFYgXKKjOZTFlpaS6XYwAFg8GRkRGfzydpwYI9XDZ3fLim4Yv3d/1v3EknXzI3ubYnxPHt23ovWL91tgIJDboOTvVKElBIsv4AEEoJ8TzP8xxKKcEIYYwsWgCwOcrzgHjkukBsTAhQkcUYswJDKQUPAQBxgMG4NFInqKqpG55MyyKNsizrxSIokkTG7HSmNKZ6rGuxhCvxGy7yTE/NysTSPEn0fLjgmj4k6Yk4HwqQRIdRVUFLS7CTFizKOLK2tvXEXWNfAHqdT05CylVyYK+RUi3TZ+ljHQPS/Hnqtu2EV1nRVjIEl82getZlKjZt6xqc3NhhnLe2Y8uz563IleK/3HBC5/rhret6mnxMkKWK6EIAOxapMggdhWIlyDlaS9Pv98D51xz72M1Xyfmh+e89oG+3ShfN+udty3PHlpx368gH7ZpdpjWWhI50GJ8/OHLRP5Y8d+8PO6NwzMM1TG9Rbpl/uGNjZaWSyb45Tbkk8I//bl/1dKC8cWcK1v2348Tzpl5821/ogsWHetZtHG79+PaXJowN3Lji+sTXH4U83LZp30gCnfXKXaBymUwhRCN58JDIMDy1XQcQw0msoDgaX864brSuwR0cEqeeffKy8O68eYsz866bj7v9rvf649aUu+7Z+MGK1ODBhqK5t6APHjhAI0LF8pnynIkTq0sQpyT37SwJRgw0SnSuKhKBfB/SRyUR9R/YJvsPPXdUMX1rU4cIoGDzlADGTMbWgn6Hw5BJ85pBgjG3mET79yB/6YQFEyK1uGHjD/svvXDpn61dR4e7ly0MHjjibbGTx8y9eefWb/01sUJPd55hIylxOJ0KitmML1eqKUf2fTZnhfHa+/mm93+ccvXcvldfbW/rXLxkxpovv/7ncc8V2/t9JUGgPDEdi+oMwzAcy2JMKKWUUkIoAsG2gGcAA7Ut4BjEYJc6pmlIFssIAstRhjie53LUZShByLYZjpE51cplMkOhSn98YNf0qYsqmUh/3yAYHMN6mC037W6GqB4e4CWeZHXM2qLgcQzHaTmST5RUzXz68Rv+XPVaFRqcufK0W2dMfOTeN975+VGvPYNIXvR7Vn6wqio2c/J0c3TLeVfdp3+1a8nC8+ovPemzf14ql8jQPRyavVRgRY9Jzr9wBvvR+11oXJngOJPL8gsWHQNMds/6X7MunMf5g0zZuCVzeobHetuGrjtm5ea9G/d1jU2rrxV9fFHPBUOqazumCSyretSW/KVFd8SgxBcsrYyVNl1Xv7//wKOBxuevufLN1vbqqPz5kUPFjh18aab97Rfffe1PVDAd9+2tVKxjrQKSw1ohCxrj8h4FznVkDhicffX9N8di6O2XVtULmGhWtpP88cqHC+bVcVEfkhU9r8lAmZqKTG/hX49eK7Ij97e0KJGq0Y/+pLX5jNtNxhjOK8u27y4DLPoiBiK+gJv4dc8zn2yeUxI857yFldOmaR39G7oOPfDg22RbCjq20bln2ag8+8Omr35d/dgni94++47w9RPKgqWdO4ZIV4PkO4atUJ66/QO2zOB53hcKvvXyS5s+fS07Mpx3tJDupKnmSmURSTH0Ys4tSv5Ahewjjq2Fqexx1HOJafEqZgBZuoUpJgR7vLr0lstBQNCaEF2fe2TY8DF+DLYscYUkSohYAdvQRaaIZS8f1z13r+sF5fwwWL8nMmZ4+fGxOU0DEBN1J8SRjAjUtB5/4rUnXn0w0/+6TwnIbloCH45Iup73DMMaPFrcd0BmGEngbC1gqH/auiGzIuX9OCcao5/G27oLnenArh3ZwdS2b35csqjFTI3GRJ+BIOAKjq0Jfh5LUqGjl8NMJteGFCuIOaBucXRMLDhCQJUUj8N5LytWqIwaEk3PiZZEscA52JDK2ECwVOAlYNjmunn1HGYVhRV4GBoqqx4HDG6ZOe3PrX9IWlBV/AQR17E4oKrApPKG4g8CK9qmYbuOXSxyLu0bSkybNu/CM0/9/usvj5839723no8EKvvSQ599u/rfdz+mhtSipu/atbO+oba2tjqpZcVqmR1K5/WCC2xpdeP3m369gJPGl1VxLECJCCH10oYZL/yymQ/lpu294NcvfhorD112z+3nnXsxCrAWBs7mf2lNjE69AKzKOz77oirUkOdV/8T6uU2TY1ztcO+IyjE8AtXHaXpRlqNgeUbGLiuf8Mnbf9QHYxPEWEmB6e4ZzmgwYXLzgvuujLRMFSOqxqX37z9STLgyJzABKT88vPDCE65+5bFCX+9gT1esNFzIZypqK6ItS0Z+7wgJvgfuub/8rMVKX7sQVG1kyg7xPI/juP/XgTmOazZNAGAl1UskpdpJlZL/5vcPmPu/N02bg0hwXENVICr2HD3nyqWvvfrzb71bTzh2hj48INg2AkDEYyjFlCKMgcGAoeA5gWgQ2CgYJrEdQRIBM+B5LiWe5xHwXM9FCPEsJ4sy8LyASS6bDfj8QCXieYZlCSG/5bocP5VlKcIMYkxBcTyWYAdLPrlYTFoiiUSjZCiBqUg4Riip1ExTKlPc4ZGAjItckqmLAWMmvM76b0/49GRV79twwcqx2w4Vy/3
hOXm94PPKjj3lsb9fevez//YGjkghE5pDdgrzGYq/+OLs5tNHIvlD63qhkLr+pjnw1YYfn343M5xuiMi2Z3u8x7FizrBCAldfzgzGJdXIBuTA2gI95a/HXffi3V5mmMXjpOFecDO7XjhBJkPuy9+Omy8u3WtuoxXGGG4MMH8OJf986sdFfz9760ur/5JBXRv+sOLFJjGmmbFTG9zBA2t+v+++9w51Pf7gXYmJze/9dmootrTdHhnIbLSPdF8UnLv06Jd/PW0huuRaeaSLK/dPyBrzdZTV9+JsNqCUgo1UjqcegOFIsl+qqTTK/UyQKskE11IKCQTTZ0OcP2KYE6PyMTXc719vf+jGMyaeOoMd0sYjkQYmyuNGRjfuk2vLCO+Jqk/L5qzupC+KIqUVxNElhYF8e7Zrl0v2MuzIcFrz8gABcddOk/BQhnkDFTPpbFDkS8sq7dGhfp4Rkc05wGuVf/w6Sq3ymQ3z7r7tw4bxW9sSbAfVZsSkcH/QqmFSBTKNcpvBmX7hOe071wQ5BB51FCr6fCzGFfEIN86/frjn6p/ePir99ui//vvzzYO+6Xr4xmvHf7mt4YLzLnri7sqaABhjjihbnCDZAmBEMKKUuhQAIYbnecxQ1gIGFR0Ly4yiqCRXhKKp+oIgm8QxHYcAZXkpgFQfiAJQEtBDkBsCz1M8z+7qi1aUDA4dvub0K9/7fM0T/74kEitJp3LhilpgWZIvjiVGYsFS8DzTQcSRWYLBJaneA5G5ixb61V37d0x35j+366fzqse9++znV956kX9P2qJFYbzi+C2mlBeTZq6stXzGCS8c3/TFkQNDXFjw58OFFJM9dMaH/1RTAjOUZ1HiJ4TGsSE8sMfsGjKz1sAx4dBeOZ/V9UrEbF+3fsuHj21uPbj9++8Giv3nT7zUKjEEWWCoaukC9QSG5QFxFIGc7REiFJf4+rp6lEPS+weTbx/d3M6j8oZgLwebn/kBHfgVavWOzz996bUfhTScx+K3KXvBpaddObes6R+fEE+PAjvGmhET0qBWePZC4q8dG3Yb+aRrTmf9JuiJRILzRxHm3VyG5kzGZRDQRKErMkH88aeuY0+5fiy3oX5SS8lcx0yNxUSHn1Sr58wWrwqoUyiYFaEQE+KH2r/brveu/zG1ZMFCN+iJjeVnHT/umBm/Lztt3s/d+/dt/vGZ66448ezl5bMq/V/ARV89v+fyXQc3bM3JQmCka+O23x959b2b56/wxmDhgonvv7f6xicfgLqYv6/gD5RkiRYOKASKcsivGCIa9iKxcp1FjuNyNuRBFwTV8xyPUEopGw0ihs8VCjWCX2cCTtHmGw2eBLxxVNKpiRzkFjAq5wJRj3TxjOwYCNma7FOYQAx5AKTJC5iKo7ElUWP4GwxjLFc64sYBsyxIz3zw3tmXrZyz8GzjyE7wg17MyyKwgoqIgBjBdQ1wPV7jSchfEpEJ0h29IERIitMjMd9s36Kpp3n80aLIwLPvPmSOFq3SmFIwWJeAoiHW4lg+mx7hoiGRY0U/dTUZZNlgXXl8vSQoRU0TOFBdYJGiFzOUZZDtSpyMKDiORUBgKnnXBUGQkW5JDmVyDnbtyjw2dU/CriixalDJjmQpcSlFPg5lixlBDYJneo5+5OChoOpT/UHbtCQxQG23o/VQTWUUqL1r25a6qpqa2HimVnK+Ja4E0cqKXDq37LRTCPUc1zUlBKmchdyYGhrJahUXnz7rkuNXXHnj3zOISWYBBRKJA/989ryKmb6Db7z+7M/fHbfi7C/efs1jAYGOLQ04wVOkxSedu7/tqF4zfji6yM375vrlkEiT8XRfb4ILIUppIaVxjOTynIGIIHo8D2bOzJpMpEXt+HjspzZywS0rZ593SaxxHOb1oT8PZw+a4UUt4RkzZE5GmqeHWLmQA83KHN4ZYuUJVRGm4PgjIT4YhL4j1ePlkeQ60PPe9j9KQYF4ApAJrksdBwmCa5qUUobnka4jhDjGYrwyiA8yk+c3gPfP599dUDnj1cI7s0vmmXpqrMrn5AMnnHv6H+9+vXzh1CzSQkwYEQ9jFlPAGFMGgMUUIT8onm5T12Q5jDFjG45LHYQxjxmGIpbjOAEQQoCx5zjY8gixOcKCA45DAGEpXAEYGakUOCnkIV5mkWrt7NvZHR8ojdZOrJleWdashIN3P/jw9edeVF9Wga2saSb9CkfYXGB8AQSbk7MG9Lh6b31JB0zsBFORa/PvrBu/aP5IR3f6j5gYLQg+x5s/czbs6lQ3bfFMXj+4Xz26tRgoZ+PF6s7Xwww72eu+sv3jlHFwz1Pv/7pjNBkCftTmpWjA52mWJnGUp/wYmyMTLS4jVqnequHitctnFguMb3gsf2Cff+0Oe3gdHzBpPuiqwFRwJz88bt8rbeY4KdXhFWPlBZrq3bB/ga+2/vTSbe/uymX6c0jjBjZs6tasb998YNO7Rrhr4o0VFzQ0rB0tDgw+V6OXvXXBKn8K/nXo5xvqTn3xrW9evKrX6zjAdsthX8xjLL/I2WyFkWSGqI2Bt/JugPjcPP/hz79mWO+2m64YLowqgyamWTccsl56QAbuQFx/Z5f5ef/Rz044yec2QDl70hnHF3fvb91+4Mx/PdW59vC/X3owVtuo+KlSMgrqYRDH7GxHNr9D8qjhF0EzuIAXLQc3B61Jk5dA9xhDtuV6UHiUbrXbO4aqfWJpxix4kbiRgtjIhPpYX1vVs29/26eAO6qOFkizVfRkcW1PtqEQHvFMA0AANZ71nvrv1w/88zro6zdDbMjFHEFupItEFk477rh9H3/1t5ff6d++74ypNV8f6PKB/5Hrrvnh/TsqGTuVg7A6iXMcztEAGQD4/9MJLAIgBFwHiG0GOEczVFbAgLXhhOBT2OpoWiuE8y4RFBSJ4XA5ZQOG7uRHMloyY7txNxtnRK+geQEWIDfQtKT63if/c+bgSKikXqSlYoUClgiSjuVgzJ/Kj+72cYwsyUAEcF2kCqJdBLe44uorhvi9GHEWdF/ywCNfPv7w+bddoZSBUIgBlFc0Tw1aPiLYFGIFr+3rjw4//tw4VPjDGpmCx9VCJDqz7qyc2OVoFqtwbnFsPxtn8+Pr2ElRGDA60imVeEXsBNXczaue3v7P60Uu9976QzeUlgflcn+I1Y2CEiixbMqLMgFPt/VgLKBCNFfM+Soq3vzwPsLRMocMIXfx+Iav2veVyaGFJ0+DVFdxe8/VN36DCXsmL3T5nN9ScvNwRhs+xHJJC4fzVhoIkweeCuZ0v9top6DRLpnaEJKB1d0QgO3hJSedCk43MEKhYPqCpXo2HZ6hblmzZvfmP4CJPPr2u3ch+5TSU4pDJiepOW275HqDoXLGdSmfxRPKb15x4/rfO1Y0z71z85Md+7/45N6n7nvxi45kMTHa05rxXRWbffZNJ0w/vqapqu71T34s2KCzygUXn33mgmkvbPgja+cUgK9fv6fh2OOvu+ivsDU9+7RjLz7lpPZfPqyrK4G84WG3mOn3cSWZRB8n85IsmPm0W9QFXqAIgtHKfDKhW3o0FgMGFXTN55NDJRUatR
VBhawDLnZL6xi3lZUFDmqALUI96zg5R5omM3WQ7gKu38pVIzvuGQzvamAXPGdIY2KAT/DQH0U+IVJAlmtDgbLw3q3/mr3qWY4TGCYgcTpr5cAFYHnJFwEfA14CHBHjVNFQBTEsqOXu0FBEYt1UGcsb2bG8LxryC2BlCxAIBkjSViXBkDRfAXEcP2wH1Zg+ZGhaTjGTDMm7YyCV1DhggldQ/ZwrBlgSdAVL4hkk8bauIzVIGRaII7KIS3nEJUUzLfh8SBIpxxCemjbyS4HR9qGylmpFkZJWWhZEzcwHeaGvq33W/MWWT7Ycp66qsq6mds2aH2XF16UPnX7u0mQqU1JZlQc9recIIUOpzMLjZ9VXlFvFvBKtGMtr48dN+Ozz1aeccopg5zjGxa4znC3w8aIlFsc1lT156ikotQtSjdm4rqjOspaVpHA0lRIuW3LWXe89Qz3I9Cb94YjNyQJAfiRXzLlT6ycunjlxfkX1bf96g50xu71jmHW4WKlYZMRMsiCJZQwOOnYy4I+amX5VwiGtwAZYZ3Tv+StP/cc/bhq3OAYDw8XRg3RzsnRqzAMqbG+HbFarIaLuygWlIGCGpSGCC4LjA15TOb7TstkxHhMfyzmprpybjGIfcGHPND3F9TxW8imWYQiqRDFyXFeqLgPLciQPp8HiwRcru/CMhi++PXjvnVPCQT9FZomfLS+b8M1nm7/Z9v4rjz/NeHxFJAhSFIgLHgFCAYBiSih1KfEcwvv9QIlhaJRBUijMCwIBgjXb8zyCkEs8AAAGuyxGCGGC5aoqPT4mB30EM/FUsn94ZM6SxcTMYn+kYFobBo7sE0u7K0o+/3a7URw+Qxw9buEyUzzxl8NcS6ZYxiUap3BuMAfkAMEDLEgCxQSBJDse9OuM4yPpEZ0vjXa0b5lw2yVMvqTi+gnhpupG/49fxz/5JNa0wNm9XtVGMj45XMXD3KnO2h/SoRmVXz2DduzcefcTH+xy631i3uXYGkvU0qpOGEvxGHvMyEVLoIytoLHhNGbPKWUiWhK2/Y8aU+Rf/6fvGpTrwMYCW1FgiMBxdpYDZswVA3qwXEKZhN/12ocGugpubXThd/t//Oennz/54K3qlKoLLq9omR39Ux3duf2LtKy4R6zykLTSWrJnzfaHr71wdm2VUlL/r5dWLV/9KAjj+NnjUFJIdBU6DS13tLguaXWNmYwfGCTGPacvMSTyJZr/ZJcj3z6+jbauZaXCEHfIF2/ysBtY/tCi8sgQlzgfXfzq3p1vLru4ip9SwLEtA4NHsrmL55fEyrjXTvj30pVTNDSqlFtEsQnkbcmOoHEsVhAr+EyUQ0nDLAQaJs5iJwMVwB0PuAh8v2cUwzPl0Uyb4eVd08DEq0KViaNozfed8+aUTKgqdB5OMi3x1kMwM+n93OnEgSlh02IVHB0sC+BRJgOPP/XG36+6NFgd5hMDbDBgWZQZX0+6vtn92dc33PeQGz/05NcPX9d01uEHLqkZaJxu5pYeOwmGDkksyqd7/Eh2FR4TzvOQ5xCPEgwMxhgDYjGWskRiWAAKQBiGYQDRvOFzwaio48QAr5QASI7uFkZyTi4vY5DKIv4JMTfk59gyhQBgy6BOw9Sq+hbwPB045BLJkHQEKivYDKv4var0vk1edoQL+lyRVYNSOp0XRXHGDZfsvvQT8/D+BAqfFGt8OUfjR9oVL5/phncXNP4opc4YnT+jLoS/XrVr3fYBNvv1Pm7RjFCASXT/3L9p1UtP/7F+8vjwW6tuQ0w4MA4FUqlU5bGzhdrSHZ9+I9uuxLANHj8/wv+Yy16zZNb/1u0f51NOrFraWtoQqAyHykp010aICftCVi6jqlgUcJ43IiYyufzRjt071n7ZmIA+B/7xyH9+WLu6ZMqkR665aqTSv7xlZkWCmyo6oimIIQs1wut7uAnIueufy85/ZZ1WUDhWw67SyJp3VHhaqBQtXuqF+37asaf4m60K0GXBjg8+4xfw6d2bjxzoGt80JSIByBG1JIhrS7L9A8GKCtc2tN5RniArGikaZsBxs8WBysAcr0bJp+CRd79/5dkX1n/25OLpzGfvfLJ40vE5uWrvzk+PP2bp84c7fnz8h4aYL9QQ+m1tl04UqYQZS6RYTg07zFgpWfOfJ6ZMrIvOa1p68vU3Lhv/1U/9X6z/SN+9mtYG5QJnClgyJFcqsrYMhpdhciEFjRbdMrcCVM60BobXb6k6ZiVpXBTfv71m6gmUdg9t/2VwpLtcKOcj5ZAtyEFEBZbkES9Ljpd3TRN7lPEoJtSjNJ5OSsFAMBYBjrC2R9kAHevwTahEE487+NGaYy+9xeAl04YyzqjnIafB9X854+b7riwc2O5oNDyx1lU5zbEYj2IHEMebhCCHcMjjkcfxbKZQwKzAcgwhRCAcy2LdNGzPDYaDxPU0vaCqKiGEEWWKRc8m2EPguazI6lpelgAY6lGipzM+iommIwpFS/MZruMSTlYMj3C+AFZ9VOAYWSY8wiwPpkNthyIGIxYcV8vm+RXXE9sUeIFY7KsvfChgiSNWRGW/+nz9FZee39nT7jIc+GJ/7tkbTw08cP+dzz/2QSjAT53eOLml/s5776irabzk/CsO722dNr3lwQcfnNYydfmypYmR4YkTJ376ySdnnnl6BqdLCma3oY0MJcur6xQ+du78JZedu/iltW/XGH1uLsm6AM2Tetr+fPDmFz9oHQJWGh3sGxuojcYExjeKadnw8HBplRuIBgY6NNs0kjl0z+P/rWqYrmsp1ivRSU63GcG1Qj41XnBrY1IUxzmWbuoYqLHlk1cGrrr7dDsed9Zt+/mN9+taKvqOmPsHuxdfcPoJN19taFnXdWWEGQIWFQRVdcwiZ+mgchlXYwXV58qWkxR8AatY5CUZYQYsl5gES6pLXURNhnUBYVdzWVZ2iMdxDPAEDDJm5GKNjX+8sOGJO++bd+qso6ORSCnp8/hYUiutxBWU/8fmn3e/9O7M48Y5vcNIcEiMQ7qLDcbxiMhCQTBljUXYThe16Li5EJvhMrwHQMASiYBQATb8AMWUffwsbSCraHmHLzBuyBqzA+Nti5QIlmjJ7tb3VjeeeWkYM4+/timhzHUnTNioH5lIyr65qaVvNLO7y2Q8q64qBkaGp4ZPxHXVJa5jcYG8kRnRMpas1PZ29TNu0rAO1zYzStVCAaYCWDqUyToLfB6OdI8V8n6J4/s7zXSHNO90b/JshoJr2JZcoNaASrXsJ59v/OK7TNw5cCTeZjjhcEj1Mgr2e0yeBdbw2FBM2NOdK1KY45OHs9aC5RWzlqoqCtdNGMdLFa7aU+xpL4sudpDEKWbX+iOrv+4ZxrF4brhWHQsb0E0QBV//QOaEi8/tVhM3X/mXCc3TPcWLe9pQfFSzR1G5r1yYVdYRVdVAyj+Kxoh7ZOi1H1MH4mZ83yidflJP/1anx3BtHmI+XzkfKZErJ9SpsWid7JzQXM2F+ha01MkegAuAIYEtjuAQBoo5k4DEaGA4cUcRuXyeiGQoqRt2dyIJJlZjYcGH5WKucl5l/qsNrhSonxceBajKW8nSS
DRXcPgsxQ5fyGnFuABoEOww9kkWk8uaxZi/LlCRymcyg4NCSSBmccWApI3skQW/ljJ7e+IojhIHv/94zbA5u+Wnry4/rumOLQUQEarw8Q5jyS4cMMLgFQJBx8zAjRef+N+PHi2sH/A5SA+JMogpMRAZPwsoxHtaSziKHAeCPgNsUl4l2ojhFANRyaKmYHHU9hBPgLDAIEAYAAAoUM91EeYwxgQoAPl/XkyAsIBZKlH6/506FgAAAaWAACgClwDCYBPgMbiux1FEXY8SlxMEYLBnm5brIEQRgymlEqeke9v623aERMfRMuFQYHhkbGgkYWPVNtOTJtUF5KAkhN959wPNylbXlZ6ycIkS8qshJZ2Mj8XHKmrqfXUNmqErg0dBFl3E6qASX4wJqTzNCPlRxCDsYQjIQVMzLOQCoj5W9EzDQvQChvzsSUsbgtu64pMxqatZkpy0LIbMiroaj0GyrFq64eMZjnNZjqqMnHFNd2D3P/73yq+/3nrKincbFWPbR++tvO/2J4//65IXHrxs8YT2LZnzGqy+YbNBBqe6Sks4w6mxfQ5SBRLm8F5dzHhaCfhj4fyJNl561uU7ogMa25MeFX/98PBEMbrOcsd2HXDH5TkSZ3Whf2CYC+Dw+GpOM52D3ULU71m2O5rJhMRYQzUz1FfI6QjzvI/jW1YunN2S2Jtorh83fUb1o2dNuPO1j46fWi8IgexoRpeFCTOXT5lWISw+b+6sk+LtPcmiGZZLTMtIeqYiSYZhLJoQee3ux8dNDpx86z0zpAZR4S955aFJRtIb7E/ku0JckIgx2S6mVT6XGithZTEgMK5TlASVj5ie6dj5rgJpWXrTO6//47Z/vPDcwsqufd7Ca05trFPMYn+MqUwSXaAplmWV0vGKQRLucH7MqaysRCzT19cXCIYRQr29/YqqIiY/q+HkjQd+nXXyWYe79j3yzIZHV8567q33d1lQpXKS4dQEA1IW7nj+Sq7Yw2f9giAgnyCVRXbs3V0ejjmG3TBhwmg2RepKS8LBYjHPCaIvFEKcpGsaUC/raQLDcoBtw3RdlxcFRmApi1nLwxRnUumApDCUsALK2UUssWxWYBXeYwBjEDELAIwkWMQlDBU5XsvkVEECh4JLitmCKIpJASmSzCOO4VgPA1DMYOwaFm45n+eAOi5i1Q/f+HxkIFEZCQvY/fn3TStOP7Orr9/nC3iG4dkOI4qhWPS1V95bvnzxuMbydGZk646tmWTmxqv/tubrNdUtTbzA7duz94zTVx7cu0fk2JGh4VNXnuIMDbmIGxns5YOVgZrmQtvGs65cVLrwxN0b10yoTylxNkd5prlqcN2O25766dvffxrY24uRrPpCnueFS0WP5gHxsioCgr7O/tqqmoOd/edf9Nyk2ROox3Is4yDo7Buti4RUUWjrH5tYG27w66aRdwvmn1t2/rbmwaCYsvN5pkyKZzI+jvgCVamROOuQgM9n54u8IhOgDnV4PWWrqkBlx/JcAZsjY37E4IjPUUuyYwN+jvKYdV2GkSWN5l1kSJQFIrE4gBFn2rqgMC5yMcvwOi4ani3h8PSmD2595fAr7wSqIDXtLEEMUtYeam0nDlV5r9Qdu/HLR0pdSgqGwTKSxWAXQVgBAuCyDrgQIvkURHyNg72j+eRQfUlIcD1sUZIbyBRHEZuT7Cy7t5e96DQ0Y4k5SEaKbSPFoTwLg1iXBQIkVimb7z+2ZtPWzgfWbXhr647sL8KCkibPy9TH6JEuui5aq3C8m0srnhvhWKQXVIZTRcFxrVjUl09mN2zcravlWdedHGWYtq0luEetryyBvNXoe2zh7GBZXhG5kUNfR+pqmAyjuyDHZmqSTOqVID8+++3+wR9XbR/YVl6vlk6beuBAYvP3nVhhkeYSSfJTLkxNK+B0F+iq/s263rN97Z+bv/tGCEbvvPv6gx++teGdATHEXfPRv7RIwHLHwtJ8h220/T6Fk4ECgHH/tecc+PSnRROl0VG2YNkuI32Zyv7w80u7JumJQ3v8pVVeMuPweNTNqQJC/czrF39yYOd7g73G8nPvWq7kdQ1uffnx4VD16tc6t+yx/bNn2ZXldnmAqBSSORnHLF7lx4rs/j9PWlxymqq/+t3G06YHyqrH+yJsaVD5YMvu9tF4nVSOSOWutsGUkc4lksgpyI3V6dE+MNPApb556l//uevKqWfOee3j94/Bzc2Lml5b/RqMjWDGb9k6ETwBs2P5ZKlfBQlnegdDS5e8fvtD5pi+4Phjfrj1Pwf97kdH1kIxpeRyRCKUZy2tKMv1Wmfq7GOuqmWkP0HqhOJxdfYJpxzTPGqu2Z5eleg9vQLmhsY/tLPVQICpn4d8SBLzpjkyczzs7tbBAwAFoXi4RDjxOAFk+9tNnJ7PR/3+yeN1VZJcXhEjLweSFz16j98fZAczUBtx8wbGGAAIIZRShBDLssCwlAqEkP+vzxHqusShlAAADvgBIYZhGIYDAI8ihBBCGBWySBKAZVxNwxQQEKNQTI0l84kMx7I8y3CYIM8jrsNQwjEoF/aHA5LEe4ho3T0d/nAkXFouxcoK+ajAWamxHuS4PJbCsVLiapgnBX0QChrPIKpyDod9Dgs7BuI/bPXavySpAudwNFZHAuW+tMtmkkWuiEQAU2LApgEqmcS1MAHqIADK+K4SC9uLUFoVOpDPlXjCdfOu/nNoqGHmbE6ReJ+PgIc9T5VYReYMOy8WuWQV72vtO/Xi4xuWT/n2y4fOvPDDnY9efu79L277tbWt88crrr39tvKI6WZ4PyHA8H7vyCFxvGTmg76ujJNi3R0GXhJ0cqO0RIJJ5U1xYvkXRtiqNJjo29f6FcoccfGaq+5vOqXkkLF/XM0EH/LlimPZfT1cNDDppMV7Wg/WNTUG1ICbygseIpyAi3lgbEt2rz37qfOWrCi57JS/zTtpkxHPf3/jx5+tPXJE7rN0qyc3JCsX+PCSy49f+pdTmIoyLEjXXH/rD2s7zKIsOhpGts2wlu2GwH//FRd/8NMXF583T6ib6BspypnE5NNPDs0q4VMppyc1RhNhHPRzlC0LpdLZCPUDNVJe3tXd0qwEE6q3rfvkvBc3tAj+vqwzoBfunnfW3VtXaRufk8K1Viqn+KzuxJCCS2VBFEsotgXLc4FjJJ/fdQghROAEYDkzM+KMEHVJbevB5NNvv3fXaRc+eu+NpcCYI15HVjXBWcBZXQH4svvAaOumEEcFTyhkklRkfIqCPAoEuQhZQIS0zkqSZVoMK7Cc4HiUECLIArDgeC4IHPCs6xIeMIMYMCxwsEUdISAZVlHgWMc2LdvOFvJ+LoY5RMBhGYaaHvEQxdQFCq4jiqKkiDZ1XKBZLR8ujWimoViCovqI6XhAOUk0XZt6ROIFmPoX29I8y5DUcM/R4bdff39Sw/jU6NDAQP/UhccO501q2c7YSG1ZLBirfPLFN05bsXzpokUfr/qgsqo0WhLZunX7icce//tPv8UmNfI8Gwn4RZaxdD2bTjbW1iiSGOHh+yN7x6OQPL3aPtJ26YN3PvHCizt3bf161eejvz9HQ+WD
bcMzjpnz/EOPVFaceeFj99z195euv/56f9TA4Bd9RA6nXV3BWNaKRceAcIn6zHOfrfmtq6RiPCMWDcOwPJopuFPqqrRctns011QXafRbjqN1d++zRgpf/PjI9p2fY8qGEadgny8quo7DcZxpWZgBhgC4HrUccLx8teobKbIeGY1yTi7bIPhH7ByyvTAj63lMiOAi2+MLBAxwBBGHgxFeK7gM5gMhETE6MCiXc4grI1KwcULPUo+Hqsr6S+bfHATIKVCUJKD8ZmqcN3153fymex+7BdxusnufB1T0l7tiiGXZfKKfEsumyGHEynF/0WURcSi/eY394QvqtrUhHhds4quATAhCFlgBEEbBTonw+iPOKEdFT53RlBw6dOSPP4fqM3280rRssca5/QNDKyqvZwri/+5/ZnFT0/ITz0nHB1whEFXClmUA8TADtm2LspRKpXhB6Ggdeuf9L5WgcOW1p09paDSTeuW42OovX732pqdNCWwLOCb0Vwl9Vkxfc27D35+qCsII8LZoezA2yfXmsGm+eChdsMfSxWJRgrKYb2CsJ59Ibf/16IZO8HPlVWVJnjhqDkQRXk7Cgulzvv7xeae8ggLHx7/d8/pPW95c88kwPPTm/SdeMd/ceVAsKQUNg0Egb6Q7elOGUTF75lghf9tZ986v0xxLYtmKBBo5POqGT1t0xmfnft+6f7Iqj3gC7zGZkQNXNy3ff/vaf3/72/Dw57B3eNyyG6+4tCUb58Qx74GtDzuHBzi5dsgsWZeP/rLPGO6DBJLadR0KLkyonj4+etqeV/M7D51zUdm4uvof3n2vIjbzxKvPfO7F797eIrZXNvPVYVujIMhg+LHP5tbt5TNtL3379wsX+Xveee6hWx57cNXbTafP/vSeV//Y+sOb63+D/esNqVICyxKRQGTXNVjT1sCmFlKntWz49Ke9G3bd+tYbruM+8/ATYz9ufGbHZ+jQEVNyRNMGF2jLOXeduOKzDbuSnG5SeuepjTUSv/rTnvXgquA6LBBXKBPlAS/DOD4CJsUcqHqzzdxpehfMEEjIUUYUTSgolmy06QgwRCQxLIODjYG464IiiaZhDp61IPLe3f5BM9eTpAhJvMwwjEcJwzCSLAOlhmFQSjmBUEo5jgMATSs4xGMYBgCACTEMw7IsALZt27IdhBDDMJiCpMiEUtu2EUKKLJlFLZ1MRWrDBDOM7OcDUU9STMSDIHOs7Bm2IHAMRp7nZjI5RfYLvEIpw9iaFe8eG+uI+kXkUOJ6up5XfDwgBhgQBQYs08oVqCCgkrAh80FRMMwchxw2WYDhJASxmRvI/7aeNREg06OYyVFLAAYRj2LgCIyrEUe79SCS2UDjF5c1XPvk+khNLckMA/EMTeckmRJXVBhAjuU6lmW5IX8lwNF4saFmwn+feGLstx9PEgxkCHy4VG2KXrXizpUxlkepjAqBAKTsUOZwsipAhj1RGjaPQeyA4x4/3S/HKjYneo8pE7/oGUyp0rK0PUJSgWhFdQ1K9phIFlJuohjwqwND2HDcfk3y9FxHMpHLTmL42J6DY2u2Bk49RqDIcF1e9RvEsRZOPbBt71cHOsRw/fU/B05twY8vnvfb7mRsblW8ta2fA0KD87VCeOLUk29/hO5Z4/bHmabJLz/95Ce1K23KSawNYkWJZxQ5CZiRv7332rWnzWxqKd+x7vBbP/3emnOWvfdTU1XVk2teyPqH1SSDA1qRhPTBpM2IuaIRU7l8KhebM8ttmvPpMw889vTvgxq2qZ3gzQqB6Tm80zi8Oj9SSCU7YqxMpRCryUMDHTUzmotHU6al8zw/lkgFg2FOkAghBKBYLEqs2zDjHFBrK7gd7z7zn/89++FnrdIyP2N4xVMn4515+GQAtj39iBXfRLr7SH19PD1KEAR42bZcz3Y4jrMo9RhGkXirWESsQAjRC1lJECildlbjC3miG5hiLhjgWCZr69QvKdGIjhlVlFNjcZFlcgVLKIn6p45jqCf0j3lE51gXUwSUB2Co6yKBBcMFgFwqoSiK5JMVSXVsuyRaCS4GAhhhhMDFwDI8A8g0TIEAz/AZOy8Rp7yqxLSKpm25HvIBy5kOb9k+ny+eS4fLKx994qlps+fJyP7l+y8njW+Ip9LEww3jJnT0dp961mnFYpF4nlnMJUZGaivLpjQ1//7br7FISG6cVMdGQlMmD2/bMG9pY3L/kfv/9d85WPIO/MmFg7JdVHwuG1Y7d+6oO2vOltYjEl82kuhjxRLbtBoigp53qIMVFTiETIfbtu1gd+dYQ21F92C8pJphWd50TUkWAYhlmZIkOJYN1ONZTpUFOQRg5Er8il8JO3mNR1JR10OK5BBPkgXJ7zMcSzMN2RfxPKcqw0BVpeHjG7MYZEbjnFrdIArRtFSo1mdTEIJ+j8eeZfKWwzo0kyioFRLL4YKeLRb1UCigRHwYiURssIujpbMqaCiA2OaXchd25rtU3KiGKDMwEJWIn3Nh95Hk8lu4ebMCT9wEQInLGKzKOpZcMcd08r6BsRyxrPheORg28oXSplrvwX8O/TkxZAz5DmzXulK86xmcJ2UB/BLfYYy99D3955Vj73wRfPMHo3XbpP3G7EdO3bVw3lVzHr3p/vNKS6TNv77w9KNrM4Pxdxiyb9uMcaV7wOBMgwvzvFHUFJ8vX8yKIJUGxFQqdeIVi+LurGuvu+29r/6abv1N0tPD++H8K5bMbVnZfOyxDBee5BUeCZWAA/uMhjL9VjffjohujGetukm+fgy9+7qH+t0qTmRHRD5aW95YW9c8MNQZDlc2u+T77zfsPkwrS9haHfVoTjnw3xzcPXXCoial7Owyoa2zb5sGTVK0wBWTIzvsYd5oS4m5MttIdHW3ZYczfsCTj1sAVFeq6ibUyX3DWnWLm+4b1T37mEr5w9Xrjr+suWlOsLNPa2AIO9hzY8uxjQkmsGJq6Q9biI2tuiAgPLnlhFNff7SBD/5Ts6RkxvEKwZBweVC5agEHp1SaTKRTC0rl477+dANsPrp0ZTAyga8/rTI+bL/0wR/nzTyAmI2Xnnzx5OlDT2w7unm0RZxbRUeK1E3bm/pvunHhvTdfwTL7sdF9qK/npodvPtKzo96dfsq1l2FDA/CIGuJllXqcYLuga6xPAMASrxDNAcP2KSrDIhsOe1uO3vjoxU9v/rG9Y2dpiA8OZcHHF6bWks3fHt2wYUytrrK0GVXwx+7ePYPeIvCFwSr6GafgcQwzYOksYhgwROBzrB7LYxdC24P5S3qsVDvDawVZgTFdjwX8gEAz8jSpIQtYFSQZOT4Ew1B65EAAp7OJfldkg0rYE22PUs3QGYYRgoB5nrMBY8Ypuo7jIAExHMcKIo8xLwjE8zDPg2lSW0OYFVTW51GtqBuGEamupsh1bIv3M4R4wNssRyJ+P68Q17WRZ0E+x2Qob7mYYpYTwd9ACfVYjkpi1K8Cz3lg53IFZagNY72qXACnAAIFD6SwBIYGbBn1DNNzsSIwPomlCFwipE0LMrznMIQBXyjZXMLnXX+oRfz3ZSziQLWhQAmokl0wKYuAws0rz11
+9tKnbvgH8Whr78An3+0ySGQk0a+pKJ9N+cNRs1hgBUbLGeGwbBoWx/G0mEsXApHFlV989saqp549UvA931Sy7cMPPvz0w9defUQ16cwYHcrDhCll7W4RHczVNQWseP5IUqhlBdsBXhJDku+rdSNTFd7NmoHGBj5WatvDAV/FQE93rAzScTCLzjFzTggfO65ewoUQ71sWcojmqwpLggLxXOW587R0lrEYi9oiz2dHHSGISN9wdVBygf5vwy8jm7b+5/Vb/v7M6oXHcC9tbjuxLtxSgj7aUTzKet8fPHJFhwFKMMDa+YEOtgmfdlzot7VaAmHQhtMsVDq8jNkhxhkXLnP3D25Z/YvL8BwI2yRh81B/3ePvXPH0pfl0K+IF5FkcouWSaKjckJscf9FxmYMj0rq2oe7uPgpzEBmhbhUi1XZVhzX88+bhc5ZN7v9za6KsLNXaX62GZ54zve/AbkoxG45ywNQrYdewLcMOhIMFx+EQU7l4ZXxXG3dke3jFX/P73l7z8VvHMIxB7MkMjB02OwX73WdvDy9sObx2Y5WodCfGBMsxDK2YK7qmHQ2GPEoMywYGd+SSvmCktNyfGktYhUwsEjAtbyiemNJYpY6rAkptzaS6GWQE0DzaeQhcU5OFYEWsoJNQ7XgjS77+zwedPYPXrlxmWymGM23P5ZUgYYSio0uqiBiWZThFlW3HSvWNBWQ/omBmRm3kWrrhE2VBkLRCQRRFVpRyff2BZkuUuEwqEwqGRZGbMLFez2VNo8BzSljms+lRqhNR9vcNjp1/+hnI1VKj/cFgWOCYqqoKpIjYFCoqar9e++OEcCVQTy/kqWt1F7JGemzyxAk8x/Q5wyUTqnFQ2/ztl3+7+kuy/3dv9X2aWu907VbxsJAnnmfDrkNXzF+o1Td07u7mRDo0NLh588HpMyaNa2l66r5PrrjqPLmohgNyJp/r6wYXjJ3bOi684pQjnb2pbDFSVprMGJZl8iwWBda1TVn02VYhWFLa2XsYiGfEE8EwZm1L4lnWM1K5jG3biFBBEFzbAwAxFDQNQ3NcHesBx857toNxWUl5hphe0gHkOLbLEbZvLN8fT9eNGy9xmFh66ZKJujmsIRfHoiqaKqr1xEMUeUAOjf3xUvrjUjlyYm/8y6hfXVI3AUZ67SM/8F2H7J7usaSe90Gj6cCf31nvvW7ef2lACfr7x0Z+219+z9WHOw6qn2+oPnPuaFmkJCwKYDrAeUIoWDnfrRCtZafxXT9xO7NGzyjtOoo0QztbQIe2Rjpnlvj68uHD0Zmi7oB80TLBXFjY/8C/L/+ooHEgOz5/6JRTW9ZuOMTaYTseS/v4MrCcoiEwfDIeJ5TykpjRM5wqOIfb502f6yIAjWBNCJTUuJaZ39pVdvzZJy+ZsnZT637BqxkZ4m3poaUroDRYHHSFcHU+hUvliYO/fPfFU48d7jJKqstUMTlt/uRplXo2O3bgYJ9e1Gonl9x/wwmvPnNoV2/Ok4HR0ZhEjzFIZRFG3dE34lDOMZVlXiibnOxJ+17+ZfD3tpiBiuYmXRsYG0mYhOMqlH80R7GdyQ5nzjtn8kPPb2cZwx9xYkZdH9t7Xpn4yIWvfrbr3Vlz5pShKf9Z/VJNxUzYN1Q+efr0xm+wXCoFyxZPaWnff+gszj7mpEmdW1pnNZ6M3G4sCWbBUDDnZFIml5lSUde26Zdt19/0xo6Hen/Y2dU3Wn/xOclPvzkiwKRLJv36zCbfsXNPmaiuaKxpeWrf4d9MbqiXTKlZdPqwtf4L/8LTdEHv2D44s7y51+w3+vcJGbNj29a3X1i15Ky5PkbNp3Y6VBAtrXTSuEwibue1QDjEGB74ZK2/3Y33UgApaXJZe/aF59x71yNffPWZnutBpUHxvU35GfVODBY5QzYb/Kov69JAkKfFGjfUSdOGB4wqQtFmwCWcKzqW5yIbjwHOSenvs8GHb52m7k6iTWO2lY6xgpktuEBVngXLtRngCdiOrA1roWh0oDNJH18T/NeV0JbK+kW/RjDH8kR2PJcaDDEJJRiLohAIkaLuIYQwxwiUZVnAjO2Zei4nSZIUCQDGru26risFQookkWKKEMKwRBAl17Y96njYY3kWaxyPRECUEkSAIg5TjFxMIX+IIMQyIuQxIQhcj8FsGGHw+W3NsTRdUDlT1xBwyAFejjimxXkgUgQITMnTMBIcIjpARJ/nWp5IWcuJ5ixQeJtkyMYOlnXA+X9L29A5ABsACMxuHHfpVfdeLMJQRN4Rj//eCstaJnSPHJ1bOymhGwyDGcwwlMiywGJqU5LL5ImEyyHEutxPnesGkDgTm4YdPuflh3mhcNyTL93tY9IjYtXUyjHdZokFuEgV+F+c9vHmAgIKQNn4cruGp3utAOKLOFBWFTmUHO3VubIImRgTxVig73BcAPT5z79cclOLx3JqZY3ZERctx0vEiShiXnTGNF+kBDha0DKeB6VMYoyXYril1UuAqHIm9FeHS6fWzJyU7Oh1Kgk/y/Ztak/ZsuV6cIoFkfExwpTl4xmUj1rZ4uc/fjeScjt7tw2O6A8//oZ5MNlpoRmx0Ob1batHukcwJBzLRhDVxBR2x2IxVHVczfLxEC4D74jW2q7n80pjeevLP66+9/mdO7JejX9F06ypBqSIRMDIWiIjDk4gJNk2YB874bdPN223zU/Xd8wFePmj56qPWZj5aQdXGuCBcYq6KIuSgghmYsFwpars+PgtmiENkq/1padS8R3xhFNHnIsWRDeOpMbXN//3w8cJb5k/7alZXCHaRnlXzqxuGFddQy3DMY18LlcWjeZzRY7jqtwaYPlCUQ8E/dGqaDaTwUpo/vxlw8PZzbv3JgaHm8dPqKys0ywbBTgcq6vAluj3D3T3V1Y1PPLvlz7+/vekA2kX/nnGSvCDl+7DxEOYALEFragaRtFwMctpntvZ319aUyNGPcfzbNdxVUbxyYbnFCyT8MgEHWuGxpNSnnUN2ygYruGwslAaDQ6ke6ljSNFoPD3i9/OaZTGU4TwY7OyAYqpU8Ov9Xe2FA3XNTbrnchxXSBRnhaq8WFAr5GuqxgsMcmw9Eg6lEyMYwB8Sy0Tv92dehvjAxyedMlRiX3ziqQ99ffdLv68TrN1Gu8YKMvDmxBnN3LGn5Pb2WpAZHhVb2zpOXnnMQ/d81t2TragJZpNCKpXfubtjKF6wLbp4UdOd/zjp5jteHhkoZNKszxdVRJwtZkSe5xASGIZQZOWtQCAEoXBIERUZFxU27RkM60msGBKDnkMYhIEQ0zRtx1B9klQe7O5sk8pq5PHNkETFgZRTqUQse1/PUFAKuoRMPOeYRr/gFNIcovbgkD2wlWBTCatiAAM3BvwhE3Yy0MN6XdVXIOvAHnfvTxPGms2tRX33QI6H4BBjKJzQzMfGTwzFhyHAFm3b68uhZ54DUQBBCAwU6YGm6ecuh0nlRys0pc9IUKKKIb3gOsmkKqS9wUIhPyrwk2ILS/CiMSvfBgOt3v6+8HARn/Vi4S/UPyvoWHpo9hIoGLBrb3VJxSc71104Z84rn39x7gUXL1t8+vdbj4
wYv/jjWcr5R2TFNg3LNgQOMwjiyeGg32caeY8fC7LeqZNB61nnjhENeA1GMma+ch88/reLv9l6TwQk1yNZIKP5DhAXFhLpoCpJBwc7nlzz/Puf54DIQd9+czQ3qmwc3R8MVhAmV0iMqmxo67d7JMOriTlDKQfbdoGFqTbja/Kzg95EHcsyi3WD5L1QACayTrXkP7Qxl4CsrSTDEVQaFBhPSRb0fRv3l1eGe0Zz5y5duGL+0Ia9HWXlvGsPyjYoMaVi1NE3D3H1Z8+cWXHJ0gr90E4qKWrLlN4+bd+mw9MXT778hvOf/ui1O004ec6s/T9tnvXUWebvQxIjGIo4whCFwQh5kDw88NsHs06YEWmWlx3/w+1/mW2x0d1rPmnAysl1CzcqG9vfX9NJolLdxhfqK75gxYabZoeSg3tuvfevXz7+/bYtZy0+zppAuvp6xlVWtSyfZYy2x0ePnPav83hWK5jOyFiHolSmOS3feZiNBSI+WVA45Li5eP+MGePH1UTZtsHRCq7sYM8ZTVMmhyW0d49TzPntQCcEnzvzcb/GMrJ7ICkFJS9pFIrYqbbko4oEGo4AzoIfWF1BrmWqChg50RNNSXHcIT7906zYuc/dzb72NXln7dCuzkqECAXd82yBtXW3hJex4YTYgJvM+gRu8Okv/ded7pYJ/oQGsgIsZngBEQ6zrOu6juMRYgq6JREXMAbHJLZNKGVYVqRUlGXiuk4m8/+DbtjzjGLO8zyZIpZlWYzBKLAeBYxZAHCJgYd5TmQwRz0PE8QgBiimloeQBLbjUZ1iRFjsgUcoYTAj5IrINonnAiBwgBM42wbdNKlnuALLsCx4LpfxRAqAWcogUc+hsEwLhlc0HFUAzWMcytePYxEFl+fA8VReMl1NceH7Ve+///qbKaFYjSJ7CuCxcFJs4r3XX3Dsg/ddGpjti+F8Ie3mWZYlJTHJsZxgKObYGFNuDPXJGX9TzZSvuF2VxCs9cWb18ectbgwtoBgxhOW8USU0qg1MNCWjRdFzqZNbfL6o0nYwM4aCwVo+SxnRluKmwQcl10QBqR7oiGdmfL4KWcgvKVM6stooyeHhNDWc9G87lIqyQhBJKWIUijTKFDuGlJFUXmHaNmxvmjAOJk6KhgMQavrtkbctt8iBcOVpF+/LZJ7+3jrJ9S+R88NmXwsOTJ8I3r6yLnZ0cEit9IVU1qR+hUG+eG/eHwwv8Y/LN1VeePpN1mj/3559NhhvB7bym4+7wfOxvE6JZ1MAIEvrZ/7+2EeX3n/DsdPmnbhw6vUP3pJXhr1w4OONh3/fl/WBRPtpQ3FdDSnfjuIgs+CaDMMMmzDd7n30lnv/va59nCsFQdjg54+/5K6239YEFs2kTs5ybFMROL9f1zUOYdUv//nHOp8Zx7p751vv6SPOUS4w7LBKuf+5PSniKbMmGdktHRr1V06O8QlH1+rDjWlDZ4rtfbZlYAw8y4y192KPEoqRjx/LFDDHCywcbmtVQiU0ELr27Gt+2do3YdyE4WQikctyPPZcQj3gGZis+mrGN67fvc9CgCgGKkiCyrnm+vRoU02wNzVUEwurYHkuBpba1PH7Ap7nSoift3AhFHNg6oVC3s9xxYLjVyxNM1zP4xTFpqBbNnhE03IKL0+ePB1YBjyCiB3yC16eJHvaJjYu6xgcknwhkbpuIdlQX6UKdTnkVkej+YKm25YoCIbjGQ4JKIG8qZdWludyGVAVVlVGcplYRUU2m46Nn/ntf545tPXAhEBwC03mhfH/fPHdnt0blxxz7sbhT6RMghoanRn5+Z3XxeFo/TkXrV795/HHTQ6XsfmiuW79rn/cfbagSPEjOb2YyeW9ohMPRyP33n31s49/ftWlp3X1mC+9/ZkkBRAwtmkSnud4sAzNdazqkso98aR7sK18/yh4iWCFqMg+IZ0zamOi5UDBAMAACDNUY6nUWGm89pXcMtVZuPLZW7/O2eWf7tkqdxy99aHrrrj3ZmA0F7tPfvj7/u5gSG1mcvGJJeUXTqzSNTS810qmkqKiT59rBcrGHOYQExVcWaAzOP8sAE4Fq0UW5vFuzmMXDbzW8+XfP5l0VLM1unL7myrNevwQ8oXM/gQAll3R6c9yXX1jxZxi4SDlTaDgmapK5Chva6YoVXhqDAVHKCTTabsUz0lOXVAyPgVz90D3Qe73bPybbOm14/JLjmPNYHiKvDs3/Mx9T9fXSzf87cGhUajjrevmT5ozderIpl0iV5B0RrPNiF8xkeuCS3kmDRqSkcGkmlX/CzfOz3Sv85esTAxpaiDkFDgvK/oXVHMEUsgE3he20LKZlfbAdq5GBMMbaWt/67c1RCSTG0L9Y3ojlVCF19YHW7Z0Nc9uGBw9hPU+xIV7OsaSOggMK0XYkOcNFr3wsMYKkBBoARO/Ift5oY21jgwxJTg/c6oajARJ2jB94Gou2MQrMhvXbq0pDzl+4f33U9MVptVDpokVn106yo763cVVpT+899yxF581JTpViC3Jp9fH4gTSxl9ObV7zwarpiz6cfeFVh2584Kef91x60223XXUN8KzYGMbhgI/yzFhOLi878uPadC6/ZOUZM89pjW/sPFbkqi4cJ2z+77Y/Ro49ccZvX3xdMa7l5e3ZmvKOyXTWkS9/Ua+I//O0i7U/d5Y/ePHk5sbJk6rcoiapwvwlSzKjYzIVQHKOu+yCJRKmw72IweXz/+KNpJgSBbKWxXioaDsucXWDLVFd1402Vnl5vQxLBcHIG/nqqVVaMW+bWVQdKZQrhZG2069evuHrIwkYKRoQZspcb3RwRDdcYIArCILH6GASzAms5+SQJ1l+A+crKAo53KqHvrvgjGXmWbPZK0+I7DhivP6D8+02pFNFJ0HRb5oFkaEJxi5BEGRYw4HDf39s8qoH6dERU5cJIYQQRCjLsizLiiyLGSZtOCzL8jyPEHKJ6xLCYcrzvJ2P//9HCMMwyEMAwGHgOey6ACyDMbVti1BCHcSyLLigO0Eq8CyHqGdR4rDYxohS6mmsixlEXIfHLIsxz2DXdRFhPOx42EMc7xCb5cBxixzLU4pM5GHb5UyKEAaglOMoAy5xec+BXo1KEgT8rE0Ag+cXTcNmHQDgONb2TF3DIlNj4h9e+t9H+7awlPvUGBMEHiP4PH70k1vvKCdufqQDx8p4Xi6PVc2ZO51BRcvIYVYsFBzPg4g/auPE1gPriyn7F3A/mHbHvrWrDg54NwRDAu+OVesbdmyPMpwbc3p3w/jSEjaZ6O8pyGX1bInYkXUybTqDDE4CxqnjOZ8JLpeWuVhaLZP5YUeKgFQCbdkxyVcCSxdnv/vN5wti1nHLbexRCSO2pdxwXJ/jLrzqFCkWTBRD3zz9zgOvXzhtao1AQOHRrS/efeTn+37/8rjzzj/8ILLbqTlJ9bUeyQmM3eeq777zwv33nBlf11FZ25jhsj4EtKOnI5MeT5VVG14558Z/Xnre2ZctPf+4ik7WVV2pIBiSy7E5nKux2Qv/8fc4WOccd0I4Ji2cNRl5VrG3NVA5bnpTxQ9b+hTweGSWZoUj6gg4DKfxdeBal
nfO1JaZt9yw+b2nQn+0dwMGZBDXHgb55127z/3b9OIBKxoLo4BStDV/SOF0Sx8dU13PDKh33fNJXMCVrIicXAzU46fJ69L8wV2jv37VKTtfnfKvK1NdriOoYr0U73FcwVUkSY1ENFMHjkUSo/BivHegkM2HyyptQn2SIPglNVbVrXHf7OgL+QN721oBYMWpJ//8y28sz7isZ1E4lLf37DoEnMgwjGeZIo/zbg5x1KRCoLTKPIDESNQGwTA8jLGNzGS2oKiSYRTtvkRAERnPE2RUzI6F+ZjZP+JX1EA04rguoyq4VBntH+RYBDyfGUp1Hm0/fGR/Oj5UX1au8SjdetCeNVv1RQuuJTOuifJSSThtOJjnhvNpnuU8lgocFgBhzLCU8YuyZzkSL3iO61huKBzWTcuwHJ7qWS8dxXrONDM23NDcADve/O3XNx666unMv1aVX3XqwKYdVR3Tlt16HXRWbjqarCyf2jfYMWlaxSsvf1DbUH7GuQu0vDU4OFgs0oyWCoZis+aEd2/vnjl9HKaoptY3ccK4nsGEHyuqLGYdj0HAYmR5rhwIKKbLqkKyUY0Go1jTWAOgIiwVDeAsg/WowJuOrbA8TebhUKcbqT1630sTH/3o9juu2hXJzj5xsiXPQmODBza1hwJQUhmt4UKkpoQoWG70u/bAloE6IIbqI9WVIHk5dzQ/1r6IIzOKFUnezjBlmWJFQhQSyN0NvvonL+5qacL8rOOf8OzLZs3+buOXjQe7q6sEMVF0GGqzMk/ZfpIRFYPPdep+xa/xRWpjWaIspNPpuvKKgmVkXZtXVWKEs3nXBkVjnehArtPnBkpnS9XL1ZNHzd19UBk18jkbp8dFmt6cWXbDR+8hxj1rDhOZUbPHTP05qrhoUuWkRDKeSIpMTe0UQyt66ZTKB4xcwdLMWDCSG+Wgpu4QtgKZkaV1KAcpnmHqIlWsnxQ27fUTSY9UqGPdaUQPtHccN/3yqHe4OJg+sLcnP+bI5f6im+M8QkOyl0OhqLTnaFdvcUyknuLJKQJpL2iy1MfZJGNwPqhHvqzDFK1sicQxBnBIHy5wz295Y09rZ9faLZ2/7dpxoFgZAFaFEhYoKriiwEshkmNJstgmjsbjUqCETSqmKNVmynKRbHbnaG7JCZOZkXR8z+99bz5GKxjx/3h660e7quP9f9babkfvue73xt1DSEIS3KU4xR1aSosUKNBSaEuLFHeKOwQJEBzi7nJvrrscP2ef7Xuv9f2Bz/s7/8OamTUzr+eJ1ls7couOOuZg75YLpk575Im7TuKCa8459pwJFVVi5Z5tX/BHssiybcanHDMtWtP/3fqyhopoY9meD4shb3uG8Y6dvvzP5784XltzYWPEUeHxd/csWgANMPPRT9Y3iGLn+xsWtN7zu1uvPjERGjtyhOU0icc8tnOpImZDZNQpuUOMGnBmQVTDBRiR+sFhbKE7xcheoKm8hxFBMpVsAzFECCzqywwtmoImlXEhV7c0UPiGOFHLpOKWJbMrj7tq2pFNO6kPdpEjQVoE6AtEXgBwvMApAQMcgOU4PsMCZi0viFPoYZmF1Ns0iJ0C4bIOuz/HTpzivDubb+vinvum9MaPIaPIAVACPA9EB8fzy1QVf9mV/2hr7NyV0JEWeZ4TBCDELBmW53EUMwTHZJn4vmNZFBGeZXHgW0axFLgaF/vV3YsCDcAnhGCMGYaxfQshJIoiJYRlWMuygGUJYIVkOSKAg1yfeEHgIIRZDmNMM0We53zPpRQcoIwguRQYVmCwh4D1DQ9j7DqmJHBeUEIUS2HRs2yPUMpgGwLqYRax1PY4TTJkzPKIdV3H1jElrI1d22UBQDQYSyFgwwwPO7L37NadAQsQBLsEMRxgyWePrSZ5xIwP+6ccM2ewUTFtt3ew45/PfJTOZQ3XNh0rl8t4XtmEmuEgH/rjGadxnM95IF4z4flTL17IAotggEFDeXbFzImjuj7O0noWEc7mw+WJBbEfB0p1fcEFx859aWAdzwLjYb8slyyKdbwIoVSAC9Wx+V8d6dy4aTAlcKLsdg5kOl/dctzjv2fGdwrJ0SAbCWpyZmlIYOrF2ORiTdjrGfr7TS8/9fpbx8w78ZjpE/797iNN9Yvuv/ACMHNiauyJZ3/UXMUP2+liaK87VKSwgSkexfvx1V/CPbfXTJtogK15NpQiIzKtEVth0uSn7751wYxzUXpo3AUxp7GcTl2OAg9+AVhIg3/aoqM+2PQlMEUDehU77ezfWyaVu9mRW2+c/+RrW/sF/kSHDPDOjWeesPOdPVWSMQFgjwNzL7j8h23r//3vPYqGSkVMQWRNmzDk+b8+cO6fCkzlF+5Iie8eckOiWlfx+pqO9KdfX3ZSy8K/fR+BYEo0KmWNAYAS57VUNQetqfE9Y6lyvKf3UPXWPfNW/qbv8C/xOKJaTJE92Slzktv9oJ5VaFl0EZVK1a2E6+OyM6fzGwaK5k7LiuftNt2gELBB0U4OjALH/PDjOkAAgMEJEECAHZYFn/gBAQAAhsPAuJ4Fri+E6YwJNTniy4xapnCECbwUcWLIYlmkhdlAtRkmlcmVl1Xi8vpDvV0tzYlcVzsd2hcvL/dELc/KshY7Mjp20sQ5R2nFQjF3x/X3N6+c7xcxkfy6ObPHsmkuFA+LgmUHrBK1AobjWSAIMVwQEISx51EGGEDI8u2AEpZlXR8kWWARZ5qmLHKYBCEipQN1gEAzFnZTq0FKwKE2a/f2+/944kdL7lk51l9/6gIyMjr8yLuzXniVbho8qO+bxza1rRtacEzdNb+/oDiWWvvFVqrWua7V3FDe2lJvmUUryGlqNDVYdB1y5vKF767+PFssxENR1UyJLAG5wrQtSS8qYd4NWCERz9muzwok7PMcTznMCTzLsoVcEXNK0nXDtfUjphH/7TETT69D598Ljz08628XzTxrJd9VKjaxhfG1w12jeptwUiIUaxKMUtYFEjCMrWkhjQmoLvC4pNueS2icdwhiAsXy58tH7HiG7hgZeO2ND8tDeM3XcPEtFzdPrYGIOH1axei+svUHB363eM5gkmphwbMsxIuN4doAqOvaEhAmigyfhEJqPp9XQlFG0tQYwzCMJIgmxmoVr+qGUzK9aKIGs5ymOgHR/erg7AVmIQeWXzdh4bMvfPp6e1BRFTGy6Y92pUJcTvwWjY60zW6Z16wwj770R1kTvv60raZCFTg7cuFKZ3tQuVJh04fKp04sRGuUcRZPNGF6Ux1bAVjJcyOKRfEJK5//p3HFv972oCrgR+YtOBbqbdg45B4abdvXDSrLBaJhFpkKgALyNJvmApNAkHZUTci5rlvKSkhkWECBa0axWyQaU8IAURuy4AtANRGyGEc4aXFVw3F/nJ25tHcsm2z7rv2bz34M2KABB61I6E4VO0qB7YDBAq4KamWUGxb35gcnVVLSWvXAc+dNq20wxza/uPqvZskaGx3XWF3PjQ71ti9bPufoFXN1N/tC25tYlorJFM9y/b19rOhLlWGrxEOArDMX7nr6tWOmtab61+/YfnD1vtLHX13wj5MerllYcfpv
j15aTWedvHEc4S93MyjYzyCxE3uCCbf+/d3rb7gsbXisyIrUsnJWibABh3nFMqgBwPD5lKEyPknzNpPxRwWJT9lJ3hfCxLNdyKez5WVlqEBdz2QFNtAxcJw9mnNtR1VDOVcnhFiHj0yZMcn/4x9OXXb30npFy6M8R1weN/o4IqPtlsWBgIQAAkABeCzLED/gJADD8ACI3A96mHXCrN8xNMJxQnVb0ttSMsul+ENXhm8+Fd78xX/5K0g7EZsrikSyCediFcjwLU/HltUqOap7jq7IiiyzXuBR1wbK2BCwUd4DiVFAYkzq2hwSOY4zHTbOu66r60VZliVBti2PUsYniFMcFnEYiEcCjwaMymOEwA+wLOftIE5FkbFdNQhrGhm2PCrIUQh8ykmKbTmO50sM5TB23KIdBKIoetgLhUKBiW3iCZJgmibvIh88hmNYlvKUIYSQwPfAT+vZwCMhSXECSgjwsuxDgEWBBQAb+WACsJAl/tlVdV+Oj/UZUEmjY37GDgBx5TlWzg8kM4p822drupNbPaAADAWWIgioBwAsgwkeG0gLgeO//OXqIIDnH3rRHt311YbkRTLUlUcOlfyedMrmnZpIROaNMVXf0ZWdgcLB9+0xRlysVI70tunInuwwOiBAohCRsnmT5dRErOS4wz3bO6qAneyAvW7PBdMX1qDE6q9eWXrGgsvv++1Y7+dVU8+Fysm2Ndaz+vvH31v91rqtF0xY+d7rT59xwe8uOP+YlqYFp0yefNbiyXDg25fbutdsExNg7OaggWEGrcQ+xCyGsdZo05fDewtXXH73Fy8EHV8TWh3Ua1WxtN8fAJs+bdrcs5Yf3x0aDwtgN6X9vU4c5CRfSASg+HDLnKP6hwv5lK44XXJvr6+EfCT4TGU4XrfgkscBcVcEpfYAxcrqx3N9Npdqs9hxEBxw3lv9wX9e+WcaP3qxLJ59Ruzqb/sLeR5L3k8m979br7rqjjuT0kYuXCEKUrbz4OXntYYe3/7y/TsvCksEwS9jaV7FEiuEzcAc2B+vplMayiqb8tt+zFe+uXrWaWdVz5hiCpilRZ/BOCxKiRNNt0ekXLfTVtXvDeSKX7/1zsSmSafddjuMWAEeRTXTcpYHgCRNLa+vWTBv/q8+r77nAQBGmCPEJQxgBhAIguBZJQBCEchilEgRE8lVUpQCP25lUEisCzfKjmOapmUbkiCNj43XVFQpgppMZUDSUDQeW3gUFHOlXF7RYsgkqhRq6t0yM5aNZYw5ifoPXvwvfYU3ibfonGNwpImTxQAC6nssRphhCAmCgGLgEQBC/+ecB0ARAAD+1aoMkSDwKAIOUUSBxZjigFaoRwBUAY/YcP+BDe//91l/pK/56NubT591yu1njGXp8cv/fBCY657/+MWn/rPrcKolIRwZG1pWe9zTv3vr89Xf3nX3H2dOq0+DoSmyZRteQAjlirpTKNrDw+Oj43kWc4qAPceMhGVNoLZt86xgsqxaUcHX1wVjXdglIpalqFwqGC74ajRqWRajyIQQmRcYUYziymJbX2xahdDxUd/My/mrXqn+j5D/zeJIhuercN3ERqB+ITUOgqRwHHJcOVEJHjFzuZBWazlGrDLKSAJiBMACIUnMloFt+RTPXXj+xu7m1x578bdXPNS8cp5rMRqtfG31t6OZ5NJYKKhuaMjTQAQ1oJZj24g6jqOFVVZgWZ5XGcHS9cr6ZgAgjsOzPMdxvutxfkCACpVlYFs8z1OPYB9KmXxEiASHx3GNyngsuMzHL7zxr/deOeuMs1EgqkJEN5M7Dh464YQT1n+2d8gXvviqe+sv2386dNiCsvNPbP4zW//T559qU2defP7ioX3b31y78djpc52BThimpYGDJBKL2MhVQzVO6L/P/c8plVhgEggmT51k/LJrfHigc2/H8LjOhFjLMyWVNxwXI883QeREAGy7rq77AosIRo7vUYYRWFbksaQSCQRKKPVsjqcqy7gFr6ohiqN0bP+QMDYYrVGnNs+Z9rvmc5/654/33bbjs3VOSJ67pK5i2VFcdyoxp+bQF+v6ft7feubRV152ar1ihJpraMGzhnTHcSRJqqiqCYdinuPX19e3zJ7jZYYBYy4hjh7utl2nqqISiF3Fq2pLtenbkZDAeTxXZDb91HHRtWfa48myyVUrteD++36SKL5tVfXo1u7b7m3vQhRjhlLEMcAA47qWgFnH81999bUrf3NK7pfdQVMFO7NRHtFZi2RcQ0WcLRZp08yKnJ0vdTphrp5MCFyzosIrOphByKVBeUUZxthyTFEWOIGzTf9X4pZheYZlCSGIIFbg9fG2SSdcvBPuPpUvnx1N9xSMSg/6gIQUC0xggfE8gSGOBzxgRvFJ0bI4zAO4wNs5m4pmBCg0t9T5eWIRH8XVkA/Otm6HONJVK9j7zsEf/ZR/4iu8bxwAPFTkFbbcIG33fdnywSN0z141VQgEQFokjDli61T1PaC677LIl4kkBsAFmDIsRCTX8gUtyrOSYxqmXgLAgoCDIBDUBrAs4lg88SggxucYhkOYcfLjJCAuy/BEFFnFSnlSOGRjz8tbkqQwDMdxhOdFXuQIIZhBNCCiJLmOgzFD/IBhGI5ho9EYIUEQ8BgDxtjzPIxZXhKQIGVG+hP1DVY6Y7i2LKsBJqKiuJ7LUg7At8KUKQRBhtBqV9jx8+sVC35bwBkg4PMwwU2G5yzeJFh6fzKb3GJiCAggFLAs9jwPAWAAPyASx1qujUA576arn3zq+WNOXfSfe+6usvwFk+r2jo4zefYUTxvuHhrhA98nDIElEVl2CSMqw4L8r0xf4xAS3WgBxCibE2uYQ4UujpSpXlgRmc7+TK4oRMCjHKmrTBwY7A9oZldn+q6lDzhCuGrGjKGUtu2db8OMcf1/nj39xOMObb8jUSfHCrwTrKEkM3/2GRcuFkv9A+7cc2//y2m/u96sndXyTpFb5PiE5Oa2tlSFEuv2pk0OtJikMzLuL7mTFcwOZXdAgkHu+NiUZcsHHvuE09UnHnv4/DPRjzN+hxxhNiYVHnt8Be0nneVTp71wzskXP/r3eldlZ1U5TsZ4/+ewssLNiGHP3YaEe++7/dwH7rvn2IlXnbUq47G9Q/qj911Geg9t/vq9O6Pw0Rju3+rMkbhtGZc62POtdVsGzo654UJoIJ0N23b57AVb33/mwljlh5nghZJxZggqAbotWQqcFU1SnjWTI9UfdfX9ddYx35E95xU11ieuR8Ldo05LmHrEEqPZnm9tKkWKg4wXEJ658bpnoya6h93/U+2CRXddmNryTqS2JrV1BxKZxgkt33y9due+PZhhGJYN/P/n044oRwGAIAAIAosB0KJlWcPoC6wlSK5Ra3BTAvKuglXLd8dYWsYpnMCLTFwQuFB1DSfLEEB9ohzKY4WBfh/5Sn0dl8/ZZsBYPo6UqfPCV9+w4l+PrNHdVG2thIna7xRO/dMVrz77bYITCEU0CFggGIFHfQ8CkTIUASCKKaIAgBElQBEACxRRHGCEEBAfGIoIxUAJDizOL2LgHGYqKLv390O77Q3mOzc9q06Z4e7
YoFH06f5nI6LEWOF7bzn77TdCM2dMqKusOfnC3xx10oyNqU88Ssd7ki1StW27lZXxkk72H2gfSWY7jvSk0xnfpYT6gszxyBcwxZSALwiYdymOlVdBAAHwaiQEwHmOxUmKyrFjgyOVjY0yqwKlgFnqehAALGgSBnIQ+LWbXhm79GE/JiqKYnWPSc0JyzWlUCik1viuyTIaNt1UphCNxVyZYqYkxcLEYBGN0FLW8cZBF8QEm8n0M6FQWOQPHjkkK9akqeBmx5HDtzRHUU6oKwumTotydr40nlWr48jzrWSOYRjAiBMUR9cdCDwj8DxPqqnxSqXx8XFZlBRFsSwLi5yedcH3UsmkKIq+R3hRYBgWcJbEVaNvrG7mkqcffb5u0pQ1r75XXVs1PjbsmuMcT3/e+tMVF1w5aqSLnPbVL1u2DXRddNzMgZG+j37c/tG32+/84x3P3vPY2lcqSSj36DvPqHzihTtumnuhY8oI9HxRV2vD7uiBrfuHimHMZBnv8hULy+Mo3zOQSqf3tnWXHJAU1rc9jhNY16XYF1mR8THBACyPMPV8HxBmeZbFhAQksALXQ65tBAh7jucDBIBdQt1S1k73sizmKSnlc0MdBsl1lJdJ/giOOvyezmE8q+a0KxYp+wxm0py2t9Z9MQrXzZkxd/rRnfu3izuG864jSSrPS67tj+mjsqxSinKFbJDNiBL2SWCmk2XliSjDFAoFmRcQQqlcnhPYdGqwpX7q0Efr62IAxP7k041OlhdoJM+FHnv6/A1PvLe5n/3fKDV5H1wMgH0AAh6wJCRrxVIhnytJosw11BYKxWA4LUkKL/GyAyiulTnh/gO7vUQsVlnrm37BzfvY1lgF25YsSQJHMcMVi0XEYTGkpjMphVUFQZBl2fMJAAQuVTQ1JikdvXsmRoyP/v3IJbfd/+ox9Ws3HUkiLsKzx/DsN6xt+wHGjM8y4CLk+iZDhUBlAxcAAMpcPAKkZPmHC8mJEZsILOuz1JcEtryct43ikbTUkZYXLY18MNdP9pX++QH6ejDksBFgcms2d93+0MQ7rmEnTYfesVy6bySbrFEqioB8zYrKqpcv6alCRAu7GBk8aOXxga17E4kEy4LnuwymHMcFvm1Zll5KBh5wrKzIYZ5hXdd07TTChCX1LM7YQRZYLggUhiArn6Gshblyw7YC3+A4AWNIJ4sUkUgk4vmu6zi/wsQYY47jLMvieT7wiePalFJBEDiOCwJi2KafK0a1UG5oWAlpiVjM1Iu6bbq+x7IMC4BCCBVpABQSWIglaobGdrGAAkIFRXVsesPippv/eunMm29r7xSJ4AWeDwxQAp7vAQKGFTBCruvaGPEUVs1s/ePfboxtGYnOhuff/u7PU8IfWoWfkyYvc1OjSqjkTQzhqMQ7xHfTzJ442jxs2CVX5mFuqLoCpAOpgaOoioQ4DrDCaSrxsAXTlyzcuP6r2/7y1zsf+2vEThqaf9RVV3316kf+dwdGhOB/73726MdXupa7AkW/6dpc3yyhIz/n128rhGbSSUv6c/wc7GZKR9/6/N/Q6xtjhcnNV2snLpm/ce2BHsAlDU45fvE9z/13QaJ8bpo/9pxLNSYDUyTQ5sKWdZkWFStRrihOVrb9/paT/nXvSxAKw4E3f1pz41FnP6cZ0nS19NkYs2WMzFV2L5m1bPKqs/d880bF2g3omrsrtQlv/OcfE5PBegr3PfWni2/8y5LJ81so++pPX/NeMsvFOj779Kp7n4nY6k3HVc2z4Y5f0kGczpwQOXAEAmwc2LshelCHGC6ThWhljUmHH3l+zVgXLOVgIODf0GEGF0p4gcJ640lv+SkT66rUK3PG859uwCGyu7N4+Y8b8w1RJzdcm4mM6LpqbD7Q3XvMbOPzTw48vLr7KBFJZtUWPuNg/uL7Hui77PhCv5mIjhulIeLY2/bsOvWsM3hBcB0HKP21+gYk8AUWXCcsYMcmLMMjQcvkCgA03XeYjZ7Ym80wyKjRAzqhUhIiJF+kEiYBQhjl9JKiKLlsxnG8yurqXGc/5piCYYoCYxg0qoQllQkcZ/j9nw5+8U2FiwUGuWa2mC2WXXXBxCVnDd3/futUDgBhihDQIPAIIYhhEaGYAgWgv+rSUUopBDTAlAUAhkEMRgCYZQBogAC0jDuDja0n4FgWD1yfyBwYSc04eg450j7avcXRSK0jhVrDzKEj4UXLZ06uDhuvBj0ta7b2T5s+b83Xn9sU/IIdV8t9F6XHC7t3b9m562A2U0gkKsbHx1VVFlnMEY8jtsAGLPU5xOCAAADj48BzSLaETTeTHKQgdvW2TZk8Q5AEZyzbNTgeUlSeFx3L5jhucHCwIiL7Uqw0kmVSWf6pa5Dhb3nl3ekTFxT3JgU1NJg/DIgkolHfdhPV9awDw8O7EtEGYvBHDnQlYholfT//tH7ypBlN0bqh4fZw2GPB88f1IGPVJmqefeKZZctbY/NPPrSnp2HGYmdY7/pqaOWxJ+nuXjLqC5IUVTSWZV3XNTJ5yzYEQUACx3FcIZ8xTVOOaKqqOo5jEhdKTlQLYYY1gQ0MJ6ypWiya14uFsaGRLKpSasERXnr45RmLl7aWxe67+Q+9+cNuCr304Yf33Pufge4kYJQjhXX7OhyvXZTOXLy82d77wZtrf7z84n/K4HUMjPzyyp+c779A0xaDMRy3ixGXPzzaq7HxQBItzsdKhLgcsCmhoqLQ3+Ya2Xy2kNUtXmRYHLAyQ1yisVLJszDxEKI28QXMEob1KPIpeL4nczTw/cADBqHAoZzMsiLiJQZRHE9EBnI5UarQokYogIyXm1BXQRvj4JSj8qg5znmOe9kNl3AHzJG9nYlQ88jBznofGqtq+7v6VLZMC+Gw7o7bHi9JHvEwIwiijBBieY5SKqgixlj1fUlVqe9j3XB9wjOsqIrYsBoaq/249PT9Dy9dPF3GnGFrHQfGhrhczdTq5L79T3/Wt8FXCccCAMdgP/h1RQSAEPFcAFDUKCh8n1eqVFSV1VLppImCqvJaL+/v3rgr2ba/6qiZbqIpLIaDMA1FQ75FBFV0goAXJCwIyHFKeUNSiKZGiBM4rv/rGIwTBU6QKMMatjWxYToZGVq+uBmD9cVo6uX6xAul1HCOTYDAMHbgO9gHEABQIFMwOMDUsKmGqMvYY6oim4aF5NpETC7yIu8jjRN828uX9FhFpVJRSYPATOUVHPJbj6YfL1CH+kovf0lf2BDTgXtqc/KJb/yL54m/Pycya0qUzKS2XlZIoyHf001HYmmZWPSJSJiYSd22garaKpZlg8BjMc8x+NfQQoLh2pzA8jzr+VYqm3ctOxSNhCqqIZ8PUY7QuENcmxoSJ0oQkxjN5005JILnWZZDCNE0jSJiGAaPse+4QIhnOxzDQEAc0wpcjxNEUZAopb7v+54NABizkiQhRDDGuULeGh9XVTWsaIZeZDiGBcL4xKecIIi4Ug+OPf30jamfNcAhOdZnpUUqrrji3+zsiJN0KLCe4yPEUPJ/bmUAvu//mqaBBi4DOwYH3z7pxstfeeK7p17kIBhzs590AwOYtdj1xi
gA+HYAhBdI1OHS2iAwWKbghl1o4HiwUwzrekz5WD6biNQFjlf0BuR8eO+3/ScmyqXOkRnx1pJKnr7olhNuuuWBD9+e+vydp38+/YNtX7z440fpAJ9+we8mNVf7Ywfatxenn3Bdntk7ftfd7JH9L2aC+Zmhx15564et7ZMaZ+5s37zrmz0WSwyfcKY49lMXP0KnHjPjwBftA9+8N+Ws1/943aMrJ6499dH/1KXa3EzWE51pVb+575E6f2y/y1c6tdPKpsxNjp9+5rKTP92JP/jwofrFEz7dOPj+jfeaf3x49Y3/6e8fP2H3Zrj7T8HX47uCwpVlVSfHW8qFqlQQ6+TG35588mkn1nz+7rq/GIN1tuBRf/2ewkmCef9C5aX91t5MHjgRNIR1+PCJl09/9ylv38fgGAfW7drbBbNUzhWCRa7r6oBCRT3gYqzA1OLf/683bnkL5sZLvBgUzac5OGZd+2/efqJ97VuGotQIhNYwJ604/omb71nzXraaYX+IOgPMGOtR3nQHKPz84dsrjz8G2MBMA0KsIAi2ZbmOgxmGEA8ocDwXeH5AiCBgxya8gDxRsvTcBWet+Nefb/jxvfecXYfqZ8waPbCrGFdy+w4Joga2x2tiNpsVBCEcDruGHThOanQ0SKVrKhoH+gdZhM2CGxOkPds3VTc3VDTWWKNjju45bNi29EBFnQJzflmTY1Jd9xhOIV6AMUIUUEAxAkAMQujX7EAJAP61ABNKAwBMKWAMlFKAAID5tYfY7WXiDXUGAktgkO+RUtD+y/YZM0Ltbd2JGS0HDm+ylEZte9qxiubuHbPZ5ArZK0OZppqKH3p25vKZGBdj8lzPQOqXLduPdPZ09oyUV9QKvGYadiQSosTzrVw8LGOwA0/XFEGWed/zKAVJCSs813H40IRGnDNKtlWYN2NmUS91Z4ZbJjb/+oKyqWyspiw1nmye0qKj4kh/srF+Ui7K967+pZAIT25uLkhu15Z9vCr/7+13fvf763ZtXsdhSlwqSQpnkvWZD23fDEBa83U3r8Jzrz388Zo3DmhKRWXrHK1h17ZtpI8/3HNkSsukrRsG3nrxg2t45s77znj9uf1RL9retZsdOEjcXEnkAsQCgJnPhcNhNRxWIWxZlmMWVVU1bTukSaIoWpYBFFVVVQSW49CAjUdqG8pHRkYQgBSNFYv56nnT1JQVX37cQ/c/nReZRE3Nq23r2t98ThbhkfseA2fNZ+9/UxuOMUwAJPj4g8/OvfikjleunnDZLcm9B1eI0ixnqEMoq2r0l11+3dOXnveHG/5W/Ncd7d2djWrVlHlLCrbpKbGxrj2BUdKBQR5MmTtLkSOOyYx0DwLiWBaB51CMwAoURSi6ILBeNCzKLio5HkFsIMiuD/mizvOCD1SilGV4m3g8yzgM5jAhgYskTyjKAwNBfW1NoWNQLWtw8gM8qnXbRxadetIrD6+OzA7xkypzm3v4hhY25ENF/Vj3ULxmckSwAlsfzeYiWmNYsHhRIKYp8byuF36VSxRFMTM8piiK7/tWJo8psJSKvIA8MlBMNfBh3bLFiFIwzL7B4S3rdu06MLYL4Iy5kaExtOrmr1wWAKyoJ+cBEA4oMIABAwuBCzQAwIPDSS8i8IDlsnj7aF+M52PRGM8qX9/z5NK7zp9/7SVuWz8OsWx1RbEvx0VVKz9sWb7nBipLSclQQ1FV1TCAoeuu62iaxsuS57qeGwSUyBwHAIPpZAJjteBtffr+s//50KUy81xN4jPq/NcuGj6HOIbxbM4BjwfTZ5HtuywFKIoMtPCQ0qlSThE7ob97p2wKQqKswCKE2TItAjlvsK9HlKU4V2aypfzBoSqsotpm6R/3sld3wJad9L9r3P2+9tY+++3deoJnTp0bzJ0ajzUF5yyigYftkhRQnmJSNEqGQZGo8iwhhCUcJRwhxPUC8ANJkhSEXQ85AcMySiKWAOJYgZVJpWU2QljX5z0U8IqlSXwk5SUJk9Zc1vM813YCzwMANRTiQxGWLeH/U8S0LAtj7DgOy7KSJGULeU3TMGIIIQAYIRQEASHEATcci4qem8Cca9vpsTFVFFiGYWVeMa0CeJ7DMi08UywW561YWQ5rp9iZfz92++qdP7Snx2ehaKqfB9Z1MTAuYjFHEPE9F2MACjzD8QwbeMBRC5cqr/1+HeRW3/+X5yeFBZFP3DMNbxwf22O6uslhLABPMJiaaVFP9ZUSMc3L6qrCtt11pFcLsbyDihUERSkx9VyxUDe7cvWabXO5mhvrw/dsfPGVdz40Q0FVSw2PpEk1dSUlc+aZl3x6xz+ee+qhxx7834Zt33131Mcd7d1GgXzzGtfr9z357w98TQKwOF0YT9fUxsa/+OWrAWvI1uSFk2bt3L3RY7yc2+VsfnT9ulRM8A+m86vaN0tc34a3O1pPPSu547Pp8aPU6jpHMwBKqGk6y3KiHMbEdzPShzs66LNPiFWzgrqTr73QlT7e/OzDd62889apDz/wyX9ffHLNFi8ADWCzWfvqVU+ltZNnnfmbfe+clyof97G6JzM4EQNRnWET+oP42d/+8N4HN548sq9lvFKcIklOwQicf3/8nl7MX/nwjXTO6Tu/eB0Br5VEsWT2I2EqqhC9gYPE5+rrb7j2pF9uf0UHydMzD1XLL/QnBoPUhN+dC21ddTFQSGeJa1QDw5GaOwtlnfzgSjl0z7HVu3Y4h0ul5hb8fHeJY8ViFQ5pVbs7bESw6zgcz3uuKwgCCbDzf2MW8ILAowjhoouBlM75zYr7rjll7xePHf720KlPff7g3+846i+XBnt2bmk7dMqkxkINCiM2UVnue95g/6CmqVpYI+BblpV3i1kzJwrCwT37W+vqSr5VwG4oJI4mj9gaQ31rELBaJIJL87nhkbfeYzEnslLeKvIsokARRpgwxKMUUYr+r/ej/7/SK7AYBUFAAfmBCzSgDKIUMwzDMmIVVmUKnm1zHNQz7FDnAHhm2aypalhdNXE5ClcW7axXpIKJSiYBE3QzJdeB4dk+Zxm2t+7nA59/u1PiioKoJqKxwLExZhENbN1QFYHwxHVLGgeqojqeXbI9AB4ILhi6Ar5aXlsKutVYqFKJMjx1Cg4vsASCnp4eWZYJJW5gV9ZVuK5bVdfCVBvMoK1Z7uRFc1WGzTt2jJemLl+oRSKvX3GRPto//+SlYBuFfFGUtXxuFOHjC0W3srr6hqejfQPdVS2tvz/qP0bvXkVqzHT3rFh1XFZo5FXXokUppk0/fpXc3DrSnRskmYKsTMRNbkWDrO4iAKVMRtXkiCZB4Lft38exQiwW48B3i3ohkxEEyRZ4BAzLsrppBRBIqjbe3+8DKi8vN0t6IZmMJcpSoxlOjBMDffLJ2vmTZrft3HvNZecW5h318sdf3/KXO3wgFQnxhBMWvfnuWl6Aiy65/tFHJnYd7pj49upQT+rbMRP5gsiUtHG5vmp+eYz7A8cvm94akLEidZgujSq1Ns02NcbvPmsO0qT+3PCVVyxFhztNr5QcyAiKFpgeA5xhByIFcHzThHAMNEkQBU9TRddndIdKHMsig
WUkCkQDRBDYbhD4pscCG4DIocoapXdA/+DDL++49zrDdtyiqRpZq7pRzxxpnHPe1c/+uaoS80lTdHNhHGUYRp4yZduWI5F4mTDQKTO8R8MjfjYSAMszvu+zLEsp1TQNIWrZhhINI4QYjhEEAVPwPA8A/CCoilegjMuKUdZlwrFwxoUPPl4XAOQxlgu0M50eZyuUYJyBIIN0mWEd3wcEAQWPepgDBigADoDHNOBML3Wga/JJS63UMHaRlfM2j6WWjVjD3WtrElXrP9n20TdfLTx6IUXusctX1MyenkqlfN8XOJ5SWsgWEAli0aggC5RSyzQd+/9xPizLmqYps7IYlUfHxyuWxg9++f6dx5x/vlOqqba4fcABtjnBAV/0fC9ggTDAMAAE+0T0gzJWK0Dhf6+dDiTXWNVEK2NQdGTHB8S44CEGxcriyHRLiaRgSGWoMuBEo3dU6emD6njmgnPUE5YEbUfMV7/nvjig5jD7xh77tQ2j5RL37yZoLVfntXgCTmeyWmWZPKVpTEasImJgWIbHDMdxAs9CqVQyTVPzGF7EDnVMq4go4XkWSzzPa25z3Wj+2zz+gSkcasXl2dHmmsV/BGiBkXbPKLnIVlXFcZx8NqP4LmIwouD7/v8jo4ChGDEM41OCEP11KC0IgiBIAETXddu2JFXQdZ1hGADi+15VTTWIvJlLs46lA2LCPLZ8P+Z5Q4N9R+cWh4EoCFeOkUfPOuvnd//3xU3FSZPiXLh61+ZDBJMAAkQ8DiAqi27J5ognIlAp73JYQsqu3X3JnjfzJZilCIGOSmphicouU8VDlPlsIAemJEI4jWzA5kIV5se05sDansoLVZWTKyvGRo4ocl1RYAUPRSUxM5ypqitL6vaWrHV2yyIxFk53dO/Z23fy346bseyoay665cONG5587YWtnzz62FkL1YqImbeNq6447oKLtz5xbwNfuuz4WZ8NjsKAFY9FNqzf/PTXDwGVeD/DIOeKlacePX/ak5+8dePDDwtH7xlVxhrT7Bnz0boHV106D29MNdbOX15mlmW7fi5rmVqqSahMDnbuZ4pFJ9+zeU0b9gMXxvf35XN3v3nSicsUjl+2fFn528+8+eQd99ZNXTs8fKHGiU5ot5cBauvu6ZEZLYf2Dmg+hAzn658PbadQLidSeinMiCkrs/OlxyISN/nEhPbp2KBem7fceNg4XhQOfrf2bz+tvfrF1YIdV5DbLrpNViSAvECHxzzkItJqWl0ffntmM5kYCVARtCn4NwVY4/PZb/fA7X+i6QIYpNDTZlS0VrANx9157P827LGs4k1vY4PNNyCIJEH0oHNEX/DpJvs3J0yurSLI5RjBc12O4yzTZBgE/xc8Ih5lKWUwch7405U3nHusFpTaHS1tEB6ElX9/ZF+0qvrsKacsPcWRaOnwIamhIZVKSYJcVl+nxWKF5DgX0sob6zJZY/YxSxzPLm+p0VPZeVNaTUBSJJo2bGGAMcG1Vb7KCR2BrNZQ0SeZHFAW04C4BDM08BjqMwgTEhAWAf1/t1cIIQwIMCCCGQS/kvKEAqEUIUSAAkKWbciiIPAy49o6EI6wvxxqv5kQlDJz2VRURN7BEjRqVtFWJ7VII5zGSSJPx9OpegK0w3zrszf7+/u1kMsF8WJhTBGlADAhPs+yyEE8xVQOOcWCBKwsy8W8G3hIFBVKMGJ8QRICyvT1D4PrylJOEEGSFIlIPYePqKqKfUopGLmCXTIcx3GHk1rRGNJwXVkFD1IhX2IsV6iSOVFCATW6elHgdLa119XVyrxoZCziqIqMyxvC+eGR4faBmvLqjk9/qayOGoE5WupKKHJn20BsxtRC0p1WmdDH94UlLujZeNl1Zw+kmE0/7s/YQmHvD/q4oYYZ7HnFZKpQKKiSnFBDiqKZhu2U8eFwNKSKetEoWJYkKQgCDCjIZT3LLFfCvb39JcNHHFuiXl+2d+rcJdzUmeedeml7MjO9bhaITMe2Ha0LZ/Iaa2WJgCCXsncc2nnp5avqKhvXfr/xwK6OG6++ZvqCppOOu/zn/h9Gfvmpsk568M435jRFX3/9BTu7wavEg93paRdNLLYVqZ4TXYjUx5bU0s4dO0/5w2UaU8Dpoq06rkk95GMBKHABIgG1iQ+GjRyDGoxFwZUUlWWwQ2zEUUGlQ2OmT+1KWfZ4ylNgfcxqYggwOKal6w1heenRUywjwwuIjyhlXA0gi3EkPzsYq55VxuQKg0fYcB11HevA/qseur2ittrK9rYN7F3QUKtHcZ1gpzMM5zsUU17iw0zY932WwZqiElHQdT1WVWEWCqbjRGKRUqmkaRGP2Fxc4strYdDkXPeIFVigJRJ8vL/wU5+VClyR5Byq+YLOM5LpWgiAFSXfNihQjMF3fQCWIIZxCRfVutoOfHr3AyeddYrsS8US8+WRviO33X/titYvneKujclJExov/es1u776bM3Xa25obSnjpWJJ91jbZ5h4dQV4XjGdETUJY8zzPALGdV3fdWVRkliBL8OlTFAVl0qZEW9y00Nrn/nwz3dWVUy6qn9kQ97a5JUoDUQAFBAWfJbHjkUaIATgrlw+8Zf3bzGSBzJtw1V8hIykXUJt38M8q2hhjmE5IWrppcCjZgUf0S0YL7CqaoIayvrxHSMgBka8SX73EbO/j/1wHbN2d7DhSEWB6unD4UOH/dW/pKsYetZR6JgpQYUaNVyOMLbtWp7t+4TjOAZh0yphjAs658leoFpYckOqIkYSVI4QVj5EPwiFwxAsYiNNDAqSnHn6cWefPO93py/EE1taBRaTwJNElmUVwGA7DgZECDFNMxqNIoQEQSCE2LYtSDzLM74fuK7n+h4hBGGIlkV917Etm+Mx5pAc1lJ6bmw0XdvUwCIWRB8ZjpeQpGpsff3jlx1938dkNINXHnzi8Rziuioh3kc6Dn/8h5df3LqZciJ2TUfkIBHWZrZO6D542CjZsu8xnFfw4IQKqOnctOenIRkzcRlF4krnkCFHaIWlTwvHD8bZ/oxFqAUqzLdiJwROzrS7RcwoQpYEXVYfy9oRzk2RouuZoiWiIltZVv/N4O4jDntxFHV/9M0/dvy8vv1QC/FgS/tvf/rjV+u/ee6+v73z9OPnxcudTIGZPeu+Pc9cOG0eyQzOp+78U+e3jWsCSa89/GNw+GcBiMvmpiyYc0zz4u5BPDIuPHL+XTG+dd97D/z+7EnTi3lgfrSBNSN2iJWG93csv/DUHx95F2ZMlofa8z3jQnu/wcODtz+x0Ss2WEJGkqiRFxjY+8naBJUn9+TeuueOpImXQnCzzPZA0EVKg4wySg5UqlPzRjSghs7Dd3u6Z1NYqsm9eqoc836Qs4zQ/U9+spzAda/ev9r9IvvW3gYNowAroqw0oy7DfutP55580/ETEqitSNuxMYZCE6Co+FIY6ObhZEyG8xcKPRm814QFkcmLrjTHn0x98/Q/VlxyhjdyyKteUJ0IjxS3wf7M3KpzYvp/GxS8lPePWDAU8D0sYKBNzdMyc1xn3d5zb1h2/a3gex7Pcr+e19GAMAxDCKFAXQScJnkl+4bf
HHfvxSd9+OwTe9vS7UP+1jG7AsVdn73ruVc+vfgtMmaYo6PRmniqbySTyUiSEo1GB9t7amurVVlI9g2P9Y9j3weV5TGqjpaRohXhVbcjOVIU24pcPe8dLrkYg0hh+eITDmVGwXcw+JghDMcG1A+IL2CeEhoAogiAwv/9fTECihhAFDAgFjOEo4RgjDEAIUDdkl5TFgsCqgLnYYejzNBolnd5CIGIlSwh0cZKo9RbGZPzDEkyPBG9Md2XwvFKN1MGPdW1GReHAjsxOtJVXh5O53XXDgRB8T1P4lijUAxASkQSgWWns26Aw8DLeRMYRmiIa3Y2Vcro5VrM0Q2FU1xbB+KUCDCsyLAiw2Hf94HBbhAwgoi8IB1yy8PClr7DYbm8JVpHZG4sl5ZY3sg6iURFX9+AIoquBwMDA1X1zQwtjQymutu7EmVlmVImZxei5bG2wREFOy6jKuHQhEXNR/RhE0YpO9xQ7qm2tWPzrsUXPdg88cD27zcZXWO8hwqlgIkFlusIEleVaEAI+Y6Xc0yXuCRlgwPEI9gnCTUu8sLY2LgkSf1d3WV11XwkbGrcpGMWlXr60h3ds6ZMhzx99JRrdu04wJkwPjxSEQ3Thlh9dVVn3wCwyCcyC+jAvkxf509zJ03Zsafj+GNnnnv1CYe/3H7JqTM/eW/t/mnsMVdd+uPa19jdP3j2iKijnv76D37RT7pkpZP+nJuiBiWcbk8d3LHP7fayh216jOgFIUZUAsrlzaKico7tUVa0PVsSqB2gUo7yDCIAiBq8jCOKgERUMs1MwWcZ4EKYFZHHcIHu+ZgEHo0IjMBJfFzI5I5IygwiIAtZxGGztMSyJlsxTVCya1589/q/nT+SHcSeUjTSCS86+fTLhgY6WyfOEDjDKwpMOCKKBUESLauQyWQUUfI9x/eobdthLeIkc+2dfeXl5SzLpjMDkqqkMwWH90Ox8kJ/csMDL9Zpoe/7x4s+VFo6UqWeEiEYyohbBOIDUFOWwLIBfNsGBiPC+J7Pshg86OkbAAhMmVl8x7X6R6sPfLthy/fbhvJkvh+9+X9XLFq24I0//e2iG+flfefr51+aOn3isqUTb7rg6hNOOWbS9KlVDTUDw0OV1VWKqhb0rOFJCCFV1jiOYzG2XTeXzlqWRQOWl3uJx4JRA+27w7H6+vg03bXnLg6p24Jxw8y5MB0kOyLagV5Z8nv5xFGVKSkG+zt2gKqJHTM8I5Ut+ECy1PPLYnEIcP+RI7pptdY1Y9ePumxJdxAnFBnLCSE573hY7JQo5yjNQojZPerk0saSKdIFx4Ty+dS67Yl61eeQLTDl0TI+VA4uZ+UM4iLd8F3XFQSBoRBYluWZrmuHI6HYfAGUJqRNwFw5ALFgOAPdWeieDCfFPGYccgecnh4x/8aBd2suCZ1/amj8qxEA6rou9VxZlh3XYTghHNY8L+A4jiKQ4zEzl2NYBlHMI2BZ8DzPMCwEWNM0QNT3fYZBAIhnOceyGMI7nh3I/JRp81lNYycRsnAKaZ1Xn/+ouM2xnnr68n//sj6xc3Sn6f7ugTuOv+oEah7CSRULwZzpBYp91Q3HkduJg6f+8uCpN19x1ZUL3nmn82hBG3NgItIf+uwOzjxwaGyXRpTqgBsQ3LhUwrmQEYQxNpfavAn+CM9ONoKZctbWtVIiSkpFRFE2Y8jj3JxVi3uSxfSIKMerx40uyum2pwge2IC+yOz/6esjmZFcVJRGeLnXyjQrIj/Yt+npx++dPRUpZn+RzGuY+MHd5/YePnLW0lllkLz/660uRK++/jrOSGYHRsITjmqsORphLteXDBjuvQ/+fP3F3/z7gdfOmOY8+fe6tU91deikYNEJHHy0r2N2Irr65T/OPPcY0NNgu/s3rYnL8a8/2d5tFWeqEmKsuOjJvOT6iOruOOt+tm39HE7LK/5+m6417QTAzIiS87KsERpz8iyn0s4nwFWTIUbwnZJpNkRjxVxhQIYCLTaDPMS4PbsOpzoGQzx4Khdz1CYuM1rO1ichLtfPPnf59I51Rz4JZPAyDF9GsAQeAp9F8FOKX+wjrsBPYZj0tp1H3bL/XRvkAAEAAElEQVRyyhmDXav1zk0jLZcuSn+3oXz+KTXiMkdt9fZ+rwOr+UGVYwUcz2JXrmve292zc+1HK676IgitZ4ToNf+++IV73nVRgJEmooJFgDAIIVnzDR2J2NI1iPEF/qt//EutFsIzq7dsX1chigdLmdNC5V+3Hx46Ml7Lo1y+v6xxQfnyuni/GUBGYuWwxnNMeWlkqLxWSjTOQ4aRHR8VYs0eSwzFT9Q0+PyIM5oBMApUEcDIQIHF/KIli3q/+TxAeUbwMHieRRif4RCHMc8QFDClXxkkQgiiCDHAYGAxYIQAYyA+z2LXJxxDqe9zlGKbIVWsia2AKI7jYMSOc8Z4x5AcE7KBLZtBN22rcnijWuOslJPOE92PcGjI8Rqb6tE0bwXf/Of7Pl+56sy02dLXvqcmHJYYMUdNkzUYX5OESs8OUj5iWIWAy2IA00EERRSZ912Gi+3aP3DJeVP05LDnmmOFFAr4xvLykkkYlWMQNXMljVddSnhJYTm/Qo2KktTogizL/eO9HMeVxcspC2VaiAJMmjvT8zyGQTUi1vVRnlObE+HROBdYuKWyGuLY8eiUijkeEyDbUwK5bddBBzV98uCVe9ZtnX/2tCnLKs2u6Zt2rXvthc/KhFYUU7VFx8f7/oXY8ubGWk9gRIdkTJ1G5FopVAIXmW6hN8PVllHeozlDcrFr22WNsSl33hsNGHd4eJKJ6f2vMGu2NqixfGLP33at22ZpHbRw6aVn7fpoq+XVXj/j8t2HtzCYwR4KkPHb68547f31etHYtq8NWPj5x64lP59fwUHPvh9efe31r7ZvuuW3F7aUR/63aWDZojqAqZVz5nS++BKk9/uVVSFb7e7cMXXBCTN+c+rG59ZkIyKqkrjAjlOJK2kl1UVhz8gzlYVSvAy2DBOD5VnRNQNXkGQHAwHCIi+wrLDCnjZBHndNSfYYhgdHcsMRSlLVURFKHNGqsm4XyaIxyqhJXMaVPCYu0CJb2Ty4Z92BV3787+dfXv/mXWEr69iWEXA1jNdS3ZrasyOqCZYsM2yhFFCNU3p6esvKylmgY6P91Y31nheEhWjOdOP1ldEgYpcMLRTBJdd2bY8EVYpGSSyf7Dj9zlW/PF9Y2D++m0aYeDY94DAMpQHkWECAwMEeznoAgEAhnueDKwNryRY1AXGzWB9IaMsTj+6omb7grJN+fPLT6WfOml6Z6f1hkCt0tf9356JqrdtJEuomdx949e/r73jiArGstav78PZDB664/cGZk8wh2w8sgwc+oqkBISXbMDNGYLvhiEZREE5IgUeIrbFxzRNLnkHtSqfqogXJlz5umlTZnKjIfLz7O4CdjA26tTDEhHl5VjTFOaxaDOf7Mk/e+Pgfbr+5OJAv0yI6JhWRSQ+ufjV60uRJJ69kc/KPW7f6fOHoeGsm0CZXSmVmYs/BbQ2z57PVzqZ9bUKk+eh/XX/pnNmPnXctgEgJi2omRq9ZWOg
VOUSy+7qLFtcM63PpchrStQfGtY8xFWVTlwpjnOGkbGMvMgBRAIwAMJgDEXIAIoBCCB74KIMRFFIgY8CAIU+NCeEQEn3//vlrW3fMEkgzvefrs13T5nw9p0zxxnvDr6yXvd8+dx8Ic/OdjXNSu6fJ7n2WG16VuNo4f3p3MtuihyARuJuBM4SBUi82dZolcrl7zJaXVBYnrvkc6+VYxQsm//0395VE6ocgyriswYK/swNVVifatmBredvGbOfXfe+vT997fElkTimUZ9IoaUeESenp4+Yf3Jzs9+OIMRUHxwYnr9iiUeT1rTnqq2TcjSmq9+3nKqvtbqT43xwal4LgPZZL9rNZqW3plO61HPYVJPR1BuOmMz8WwaXFjcM1s8ZxYUKuXB4YisSq0ttuCsPPcUxAIhDDhlIlJ0XSPJeHFiLNPS0mJ016amggWxbqkTMHaRjyQxODyupVL5Ssn3GrFMqtRoeo7bMqtTNAxAXAk8tuNYtCVOm25QrLIIaYyOLbz4opuu/+WlV12VptJf/vqvq6+9NgxDQHDuaRuOW7QkpyYffeHl/+3fafoBIdAoV4kIBys1BRhIksPxwp7eBWNHDrrlgYlJpzKhJhU6FpC2DicbtnQpENio7mRA1nrT1sRQRlFcCbR1Sw88+bz/6o7lN/9IHD5Q6nHTruQrEm/IHzzz9ppvXCrrhO2bDvLVg3uP1UJfFcEKrIC5oYItlwkci4yxEHzgHjBOgCBOAIAxDAgCSigXCeGYc8JCyjhnGBDCQBAHAM4gDCGbidh2sxmAIsqe57gUJN3sjMH+117dcPmFUHPcMMAYgPHLb/l5lCvvfPjkt644J9Iza3QiH9piTNZAIpyrFAksCGVNdOxAwyQo5JlVjaWJoCm+47XFWybCZt22PYK9kIc+IgAcgAAigDBHABwjThAwhCgHz2V+CFwATDjCwBmrmWZbEnY6jbmrchLbAdH1J/bJ23ZJjYanZgTXClOyaojcmqih9sXxYr5JQ0FTQJBbOtrsejOcnLbMpqpHCBJAERvjpcHBwaULF8UiSbtWsj03TgQU1SuDo7l0CqdU2/QwBlHTwsCvm6aqIsWIeY0mppYWjYEgyFqb2HC9I3lNVeW+riP79/DBfKeR7V61FEti12nrBoeGe7p7gMGPf3ij6DNL9PvOXEdmHKypkVmtYFE7Thy33Nw3MX/Zcfr5LXZ+SqvWFEI8BRW4k0IEq/vU/EONYS4v7PPCgchlfacvvUjwEXEgVNKJuudx7smGVAncwPUkSePMT6azZr1JMUQjiVK1aIhK3a5k2tKj+SYhEASeIoPrMcBAJDX0nQBcHwIvCElIcnJ87ry2D16cioXgtEYrRwcDmQU1PYnDJEiiIPatWAFMzR9uCMXRmdpIV6q997j2z59+HjAKqgAEnEJJVXRAItTd3nPE8y9aC35o1ppGJAq66li2qMi4YVIMAAyHjDCAep0FPlZEQOKK89aedMEJYlrFLPRDGkqa1iQQVKmi8Zlw6uN9HZIeX7aCqmqXctiqDR566b6F511a21t5+b5Ho0Ol5w+Mj77zphdriRB21f/d7vqEqf8s+gBUj4JlYZB43HkvwMRql+K27ysAki6wQA8CS6jXGwJkQqi5psBgFIpbggWmOfbdxz46AjMX/f3Vtpb0+/eMTn1w+O6tz42C5956/8+u+9wF61c+8dYOROGlW3/5yptPJo/K04/d0nX2l7MSvXbOqc3U/Gt+deUfr/9sfn+xr2fRJ4ObDtk+RgkIbBziIY3upbAu4GNq9D3mvPL179y148D+Pcde+/1333rmy7M/88XTLr5l860/eeQf/6zNNGcByP3qYMNbuVZFNLnv0HjBEo5bc8r4rOXDh/ZLJBzavx0CE2xL4kwVsey6IRIoDwGDhAXKPPBDGUALw5c3nHXmXbeAeRjwyuXxE16Zgf7PJtIjyp9u//sN9/28gSGrxw7uaf/4kSc379v1fGliujiqEom6/vQgJNjModr+2oy18uTMeWeuktvmnL3xM3NyraMDB177OH/XA/s373WsqLjqlG/NDLw6sP+N9u4FFGmaygktEw4cM1WVOedIlHzfFxRF0pQj+0cSiYSsyUQIXd8KAkEWVLvuaFk5X7MQVYkkYvD279395SsvuvuBO6qjO1OVGl+1euex4WOV8JZbfzPv/DPDA8Ozo97uGuqw7csXHGftbyxYu+Dw7576wWUXf/GVZ003AaEpiU0xmjNLTQkgBJVxKgJEJWL6nAOjoQ9IAaA8CDggjyJBY6qeooHSsEvAtPvvferG26/FnbmTvnVlOFVza9V9H3y4OtnnFiuFxFQmlV3YP5cnDWDMnKpAxhCX9y5Y3BNNtvCxGeZ4Qb6iKLLt2da0qcWR1t6FtDQfOjR08MCl/7yTsPqHf/77yL6Z1g39VHOjIBYC2+hdrinaOTzZd943N5x83MtPP2yIkmFg2qhjVSuZdozEg0pDP05Z0rvwg+G9AoSgxTOJ9IsHP7r73v9cdO1VnSecxsoNbBjBpBPLZiAaEEUe3rvPbZjz160JfAcw5hSTvM0JlvraPNNzG/nErNZw37HpgbHO1YtCFLqOTcxAtjmTWYNaKARJ1Yojk7FKM5NM7nrrvXhrqmfuHCjWXNm2oRnvztXB1Ja0e02nJdICSs6q1lvae5AsQkA5Y14Q5Mcn44v6YaQoNW0U19VUizu39YPXPk7U5R/dctN/73gwkUgMHhtCgAWRv/3GB2HV3z9wTJEFkwarj1/Im+bY2IRrQwP/P8bQoWIBHBqWuY/lWX3W+IFkPF3Yf6R0ZLjoufN7e2xqxroUz/PtLXsWrZkDMfXw8x+0OWH32hVf+/r5Xfe9tnFNz4YnfgwuQTYTHHfJZWviX75MHipBwT20e/jAoSFTIMjl5dDpysKRAjMJkkOmExy4LBSpJ2DOOaKMSAInVBAEzw6AgqiIIQuAIAFxzDDmREBUFPin/ingGNumnQEzBAX5oQCKpENoRzk/+vbbEjAci+kVW8AcEGeSsOjEk7Vc1EV2cWRCCPUwlADzgDoMYRWJhFEQGBGozEJsmx5tpOdkmabEoqnpY/n88LQHYHHmAoQBFz5ty/t/+SKEEMccOGOfyjc4RxxxhIADIMZ5yFAEanViOezaH11Xe3J3fHF9abzrAz4QiyRqdtWQsGeLNGgU9084cw105IgWayGS5JVLvm1hxHjgJNMpoCHoEcd3tVyiHXeCyAEzrCuxTNIJ/EggtbZ1M+aTih/RY3WRKSGljGmRSL1eJ5LECQkYExWF+j4LUSgoKKpKImEz1tyeJdCN6uU6DtHgxFiqt3P2qtWVg8ecYrWjq8P1XFRoymIEWlsA+b5j+jHDQKqLWbQ3bpZL1uFhxsJINguez0qVloCH6VpUHg90ufQetMzFkIjJtWliZQXZxxAQgpEs6Y16lQVMYCBj0aWAkdhs2roRpz6tlBuSoiY7Ou94+N+79g2DIIUBBxBc3+EgAsahF4oENFUUMU8pBiY6Hy/O68w+hOgCFEmkJF1M1KAgWnoyFp3Gvhj6N568ACYGuzQV
oUK6NSOEgjAYVJSJIPTtehMRLCdi06ypaQbjnAznpy1XVVVZ1SbMSQagaFrIuMGoxUOuKQFlbsOWkKhFYxR4CGVtGE8rUpRanuNmdL1pjxZTROJMcUSTCpk5fa5fEX2nVY5nTlpbc+lpJ//2wLYjAjVjra3NSP6bX1m1dbX+21vfskTAPmaSCY5qEMckYQMgwpCIXZ+6lMM490BygIJnCRzVMhyMdp2r2EDicVrSi8p6JplQxhePm8umpZERkUQbnhe56E9/VkKJiv6S9mRfT8///e0JLmtCiBNYmL3qLMf1XPDSt0apqJ//678jChhe+P6vf09BBCzqH24iRrrVr5gIPM9zAF6/7evX/vHBzeP0mxvWlSLm5fc9d/2Kjs/eet7g6DsHO1acdv4tr3zu3H+8+MoikpjV0j9UO/b107p/+dCOfZvwisXpMsBiGgYNecde/+qrf7R83dKaNXnW2jmP/OfOm669zue+Fo34DQeIAMC8wFNB8LgvANxz710dkD1veQ7vnbDe3Kx3Da351tywVpjTIweqUdp24NLF3U8dHv3lNad0tS5A088KXQk0PeODLhC/b5ny8x/89LGdr//huu9qbV0Cj0KIDx7eyRz2g6/+75VdHjvtjNil6bBqDEw8t2ZBV8JFddNzQUDcVRQPuBGEnud5HEEQBCHl6UxqenqK+HbKEFUiEixxMELGkIhRlFEG8XhufGI8DMPp6kRXx6y77vvnxNjW+NiUFQMdRYwGv3rJOa/ueG9jNtM4ePikU6+Uy0MPvf/h0mfeuvwr3war+M7ffna1Of6tX/m/+8sWWY75LO8HdS2Sw6YfBIoItiZI6bgSFJoMI8pCREBEhAmEhQqjIuVVz4uKOArYnD17ydDIS3/89vd+esnGQ5XJXEdXwvE0CKt6edklZ3MBT1TKWjKGpiv5RiGZSoiTdYjLoKv++DTSJCETB84BYx1jf9qG9hANuyGRd370ausZK4X2xe7guzPHDmSVdoJoOqGNDoz2bDz+pAu+vGM8nyLUqqLd72zqSvdUKWsKmMyUaJY4spLhCd83fcm/5JwLP/zbzmW92e1j9eOXrpvJV/cf3nz0j0+49jBNxXXcnHfxaaFZwIYKbtCR6xCXpNxGXZQkG4NmAYupAgbBdn2fagGHwK916KLU5SooCFikvaU5WTJr9ZQWj/seqApHyEnEIGIwEae7O2RCoOmApLCmmcxmqWsTQzMnpizXiSdj9UkLO4HeknYsk5YaRjwBrueNFcSuNuI4wbxMMFFSDg+HozVRaLWzuR9/9cRlC+flkqnD+w8gRALKAgSbtu6xQVAgzCbgpsu/+ODdj5x66dcf/u8DFrEZAWC82xCfGNxzmKntBsRhmkdyLnJ6TjuO6Posi7ozM0JSB11lxcb8s1ZVNm9uJEn/xuOCRqDEYt/5xU2lZw78/Jnn//PMOUsvPE8ft4EnYp8/SwDw8sPBscmBg6OWy+vUlwMx151dc/6K1//xsouwTxlCQhhSLiGKMeGcB5SJjCMQJNH3AwRIEATbdikDTQVgwBhjiHEOCDgRCIQkKqBmBBuSqob1hg9qyFTgOpF4zfQaFdWIO82GW6oISR0EreL5o5PVlrTIKk3EFcYw6BgBF0JZpghxHtBQEAQCIAG44LTPbsG6JumxXR+/MnJsPBaLVRp1n4GIRYH4IeeYA+JAEBYQBwDGgDIAhjASJJFgKcRCKAgYGCCZTk1nT1/vEMHf99imY3/+p26rBhYaTUuMQbPJojFdU8xNz72+9ks/lhTVnshbVlM3FIkgOR1joW9bJUnVmBeo8Qi4loZ5UK9KjiNKMpFBVOIeciU5LmBWD5uKLMa4RAMugExEMasZgLEIAGHYbDQisRiVMWm6UkBotcYF3rAaWiQWSybsmekMiUWLNCiOJePpmqQGEZWEUuBiMwQ0UKiaM8lYVKS0IdrRqO41CoYs1e2GEtfd0Bc1VW5v82oWVw4FxgAvhb2JOYBnaF6FKBnc87EQyASJAkLU9axIXPW9gAJyAyoznwEgxD2nCZzHk+p0rZgjVBOVMAJxq6tGBgDjSACWGOCAqJjbgBNnLkUzuMG1CLJQIOYW9CouNMVYp57IW9sNRfZhnAJaGIOSSXKzlzj1yarip3k8cKihkSCOCBI5EwxNZohhziIMQ+AKHPxoTI3HFFkOPT8W14DgfLEgyqInJRVBwAQ8z5FlWRCEIAjCgCYVHSGuICiZhUg0XqaWlEoLFOxmhapiXJMd2xJFLRWJO57tFBxNUipTM61dMd/O3/7rLwL6IoT+Cb1tN3z38h/89Pf7js1QrEeiNB5J5lI9hMWX9uJjk1NmSE3TNoh0+UXne37TC62e3iVhGLa0tBiRiCTLoAoQeMACtzCuyFqxUv56tRKNxDRDdwM/DEMhYEcHa3ff+4oKEJEDm/Ci60PoiwKgEMrQiIPSwWWHhz6AD7QbiMlsk0FoNurAdKWqZJSevuN3Hys8+ePPCN/43fL+czWrzvrWJFaddP6JS2hX5tTsoj8sX/jRoYOLkZb3/Tn4WMWP3fa/3YIIzwKbmH36R7v+dM6JZx78+LkFIJz+s2/FZwdxNOfrX71l+NB2mbV4oSUGTQpAaNSWG8vF1jXx2HmzV74WFv+2440Te9Ov3/zI/PkNvZpOH3/SOb/+S/OJu8b2FXBpsy5VqyU6d2HPUoKHGu+sWTjXyPXafvnjrTv+8Lvbgub0P/5683e/e1s0t6RCCxHPBoGctmHdtCOmjI1nXnFltTjanBj0odUVOwfMgU7JCyQ9yhCxaxRHWYA8l6gx7tmWIUnZBC+aM1sPNHqSIDQd7pSEmCqmxFDmgR0in+qENANwjYZAzRgT/Sq1SS6S7jM6V3EHccn47n9/u/mlH6S02EU3/9+Rh5/i78384Lk/4gfueP2NrZd/5aydv7yv4+wLZC3sDSLJ4BgF1VVSUG+yZJpjAYsNxLy674QVJwSgGEAysRv3JROAAuWShhSbZyWsZLxKxRopvZzoIJ/5yoVWVDSg6DpjR+rVhQvmVfIV8KFw4IhIBFDSPuGyrPoeJemE6xdVL4Z8SRR1q+joCS2cnBg/sC/CWYRvLNa9juzRyr6ZZX/+qxS6B/95x56d+QWL20C1p4bqLV2LFp37lQ937E8gbYSjIAhbs61hgJr5mu+4NBEhIGUVkaJGDCcHd0yuveA0/Pdfz7hyeyJ5479/LpvCsnknfTC5tfr2892rVy/buIjozKdEFjTAnCPqNkNuRMK4FPLAViXNF4GI9YwR6WzDI9PVQ8N6f1aMRQUsK4IIVSuiaxA1GkeHo4IC/R0Bt3VDk0PsW2Fn90KIi/XSqKx0aDHzk3deW33yudxIKG2KIUqNUtU2C0YyDgF3bRf1ZinI3qTddtoazbKhPSvK8VKPMTOQnycZv/rerysXnf6V6348su3QqvSs/x7axhBWKLgqv+nKz//27scDEZfq7K5/3a92zHr+4aez2exUYUSWMPKCsbq9YlZuaDxfEtWj75YGvzXS1ZGjYzUTynJrQmtL+z73iD4TF9rLjpJLadjg01V
oi4IcSdL0/PlXnLNl+o33Pll65qUz0enkVEUci2J73M+PDRanZkaq4xbKUeaIyKHNi0+Yf/0d70MYOmD7hCtUg9BzCAMmZGTqUCaD5DUCIALVWNk3BQZxroS2yzFjAngBIAsQJiGmioZLLDx1YefIeDU6LUS9sGk4PAKFI/68s/urk1VxvqqLUer4ihspGXzZRSenefjn555aNHelXLExdSlKEerrchVZui+Bhljo2MzymqI3EWOXrV4uFPIfv7L91W2ja87Njh2WpEYdca3ObURFEUICnLOAMQg4tjjDAKoo1BuBj0AiTAIkMkQDBgh0iRRJ86wNC/GjT0ioVjpUqCT8OAsbWGB1SKmR+tR0NgmTm2Y+uONvp17708q7r6RiipTQwCOhHHGYFQGdhkQKHdsJNNkgPoG6C/MypRhLjs4ISPElXW5wMGgskfRqyFJAl2wfiaRcBxpwzfCwrCiRSEKllYLlOEHgRZIxtTPhYy64AXWpV61ZvivrShkcBlx2Id7aeeC9jwWE5yzsNn2TRYXW9l4iq7VaVZHFAIde06bcqvsNFApepZTpWhC4gZQmCDV8q8mN0G2lvDZLSR2DWiZx0VIBBQFCnAgSQQwCEVFQBFESGOGUAaecB0EgEOyHoaEaIKm1wJQqUEvmoxWjQbwG6EJgBbIe8EbMFzbM6m1Obe+I8AY2LFVJhQW5BWqTE/k6jfK2QrQuN+nibHrumvjMey2Z/uzI1o+xjDhxtEiHOe17ckHgiigrBAMPfcooJpgAAcYVDpxSZtLACbjABUnUkKEIigphuTBNCKpWq4KAM5kW33Fiul63XVkRAXgkGvU8ByHk20zTNNA0VVVd1+Wcc86bzabjOJlMxjYdLBDqU0Z5cWiqaTUYY7quq1D9w0++ZuR6gPqAfQh8wAqEIjiNwDLFdAo0A2YK4LmWVZdVKRizKpWKe7SQaG/73W23rlixbOGi+ZqmUoFzw5CDIEOFKIjYoZIXEkJ8w5rfo/z0+2c8cN8PS7W6F6LnX37P9kmNyhBSv2lv/fiTo0dnUkkxkU3HEgnRc5SY3NLemY7mYhI669Tj85OF+fPXTezcsiyee/dn//4mtP8lf7g6NTb59ofFCndHDv74S2fogFNayu0rr5t33Hee3BXy+pO/+MvN//77l+YJV599/U/unlp19Te48NA723fc+KMvb/zaTUcOfvTe/b/xAIgM3/y/b1xodp40kyzPWdJeasK7kxZTtFrNXfnOVU88lZ3aPDjFt7/8SDZo+C+9cfq+iyv9BrBD87vmbT8yOjrZ+Nbl59/wp3sPHn32D7/5O80lrr3s3Nuyf+3uXlieOrBhwymE6MydTCpREKtcpG9sf1em8hOPPLp32xvcnD89qddo0RCjbaneLCmXnCrIDjeQK4tiPcAypQwEosgKMaKR97bsAJACD5rYlzSDhaE3WY8kjYgebQjOh0caPbK5fFbGATq7K50cKr33l3+dfd1V4IsNzZeAfuuyK7a9uK+USEQP7Nn2zq7WGh+546Hv3vX9SfNemK6PPfDemQ/8dOiux2TLbsktP1hhEJZkGMpBeZTKnHZyKIBQzrAoEpTh0OkI64sQjPl+O/gFnbb50J3t6Fsh3fH+tjyVbr5+w/Xf+W46nqFHhyLdCyCt6s3iwPBw2CjHSZdjiAgLVEZYiUdxHCHEgZJ8jEqOhUpi6OqR1sAM8l4YXbEgNWd28aOBeKc0WbL7jz+pJb02/Puvn/nLM6kF7ZbcSIQS0rIn/+rHlV1HdAUxPZIKqCRqTctxnSCd0AJOHJ9xxGzbjulGtVazWbigp4NxVi6XaeBpDEKIRL0wvXTVN+/4GQWLNRslrxRtXRRIAgDGIMmAQ98TKZEpgmS5KHAZcORQfmzLrvbV86PLF0AYYFV3g1CKqgxDaNpgO5GVi2oaM4qOhIWaVXcMNd6dhULVG5uRdEKmx2wi1qp2Yfdm6iGmpyOpWLJFi649LhR4rWkn/BiM2BZrxDpbgSPHMLASyhJNmYzMWwIn9czc/dBnTllhTUxsH9g3eugQdZuAWYh4K5EK9dLKuamPBicIw5sOH330mi8c7NINlCo/9+pYeRIk3MLCXRA9eW5/lihPb9n8TzfgnlcOnZDRtiAVDE+gZIqbtWVC4q03Xm5vTa89eXG17hpRdey5nVv+NXVBT98jZOCiYhL0QD4w6JIwWipDre4dy0/sPjQ4OKFGNK4aIwVzQUvygzf3H9ceGZkod0skDHgdBzFOIyBaGIIAVBmzMOCUIxGJAhYEjDCjrisahCFGECDCBY4FjBAGgaAjlvZ/N/7xpneumS2LBIcSQiiIOOCvXbe6MjkiGW3NuJ8jHBxLcc0wpC2XnXXy8NFD+/b3LFyt1TxGSMnnEulwWRmAocARCQawEHZYOP2Z0+c23zmy98CuBcfn5nT073jxI8QRAo8KAIwjjDDmhADGGAAQAAIgWMQ4BAQcAQVKMGDMOUYcE4eanbNaDxw5MjQ2bSSVgldJJ+NHK/VcBjUbth5Csw79C9vuvPXuBcef27HyBPvIJ1PVQmdukVuuKZrtZ0U8Rh1DsveNFRXcvWBh1anJlbAl0BwjKUaikeFCIYWUwBbKlo9jcTvXzHXXX/0ws24etWtaTGGlIlhBnTk+pnhyJnvc8vLUmGtbiUQWl5shcDGiZaIJcG1QNcB85/btknQonc4KSEC2z5peJK6E1YYTVOPZrFeuVWq1lp5WxtwuIwkIgRwtDo2GIUtmAzk2yAQsMmP40EjHmllMJNjLZ+K9gq4Qn4aAOAAOaIgQEhCinDLgAf10BoeKqnpegDjzZmY2LFv0n76tpGZPoTDiEyS5EV9rBH5fLP3x2A5rbKfFpzVNky1cGp3OLc6tXqm+MOMUBibSZyTY/j2mG5fOWBOhf6hzDk2cDCWWECt526kdS6iSougBZT53GOUIcSxhQBCGfkgDScCEEAxIEmTOOQUqqgJD1CXIF0gsFmlLxzHGYRjWm9XAF0QlIesaQhwYbTRriiQTjAUUTJUKhw8fPv/884MgOHz4KAB0dnY6jicgwhgEXkCIIDIhriaCIFAF1To0ocfZxJ5R1/KielQQhACY5bkRYjCJ+IeLltXEHHRBMvRozaxIWaFOXECkPHnstMs+QykVWhKFWg1VPddknHOMMQ5tjDHGgmqofsVWBGVR13xrvCEHTBPFG752NYQhFx0kCCBJrH5Z6AeSplHftSxLT3UAYcSIgeXbMzMa8lUrePWRP7Q2cpvjb+68bdfGlq575mXag94Xhra/BzAD7MZ53XcfnfrppevOPbdt8+5XPrj/M6W8fsc//3L797774v3//vi9e2Zj47NnHp/602+3vPLPHU+/UXrg70snC7IErSfOPj9+0hnfvIPeO4GeKBv7mrsG3lU1ad6KsxtJ4YK/HTd25GO3Lmy46Nv8ku/zZv3ASw/8/G9/vxAWF8r2zEcVtxfnktbGVSc50OyZfe7D/73Q88Yl3IcUz2ke0RM81b4ewhQELgR1X9Ik0Ob09XpjMzf/+qZT1l2+rSQJqdW8teQ12Ei5YA
jQmohQt8R9KgWcythzLBrKUTnimg1TQ0EICgmTUaXuuuOVSlST2hNROSBOzSKi3DlH1vyY5kIbSEKS71PRrd//9tf++Y9rfvjlheddPbHt2XD7poBVGtMmDAfrvrzi9V+93VkA/95ju6br6Vvur07tcz7eeaQ4Fo+kMtEAVQdOxBCCmwiaNEYrjVGdu8fPMb7kpN8cPvAfRUYutEIzTphGxQimyxRJ8pvm5okXn7qrb+PFijxJBw42JgoeeAlfEIqB6vG+/qUwF1joxHNtjCNQZMAIi2Lg+44TGG1aEEiKq9FaqTKxV5DE9ll9TUlzxs1Em+gFahsW0Tf/AFtf/Ot3bo20JrSsFItAebj4zSceBhZBnU5fIjE25YSeZTmeWa/YdhjwOsOCGomHIWUhIoTIsiIIomu7J6478f2P30fADRHS3fPeHNza/2LLBdecj197C5IZdmgrio8Upmv58aoUqJKH/HypOj5WnZ6SMZXndpCWqANh66qFQiqKRBVKNYYcwVDDACOfmWagxnXuUWFgQpjb4QlaJNuCmjNB4yAXAgEMfkwUlmYtRs++8XpeK1oTJVVMOI4V+r5Vt2ImM1gYGoqQUXVMmBtim4lxETVwgbnZXNv0c5vGsnuee//V5rQPrnvC+tX03QOvbhtFYRAi/sVLLj20Z5eCs8cvadu3c3sgwc/vvvfw8LgWYFdRQYBOP3O0OjO3I/f3V14hMvz08susRTln+4jfNDv6Zm9/8+1YJIpdJ9PVd2xq9NIbLpoZd6zDk9H2tDcqTN9VOKntrDs++Bf1KrQU0uZotFTADJhjYRBH63ZxaEqMZkM07buqFsM3/vzGf/z8v+va5T0TLqOSxQMqMD0EhSAPA+aAQ8YYIhhR4NSjAgaJEEwAccQAYQQEgCCEEcUYJAGcmm3q2W/f8OPbbvzZaj1axK5Y9YW4OzSe75ld43qxbWlfcGjcjgoYiU4+jxTtM9++pvzXf+zavam7fyFFUlwTfbMZSyqNagl5llmlMiZOCEt6V5GGMvbB/p6+bqFV2ff2gZCzmJKIKlVVxBVgAuIYI4wRAAZGgSKCuOd4YQgAgDFGGABoyDlwVGvSRFbtnNWab5C6NVhu5uNt6sFDNQ/k6bpnANUkIwxdr1pZ2opf+NsdF37li3LX3M6OLOw9Am2qKKVpoeCotiFEhY4sd5pAbT2uAADka03qHHz1nZWXn5vePeQiaGgYVZqwYHZ1c/Hpn933vb0vHXn7HZWUc0kAq2oyWwQ5tayfl6Zln4my7NXqckRjjk0dC0ejns+bpfyuPbtXH79ai0YpBjWdntiyhxASa293p6bCMCwcO8pDGotEZwr5XC7DQxTYZlBzBEYQdWXBcrS66rmhZLX2zaLDeSU3q6QeSI97ghUEXhgAIkAIETBBwHgY8gCBIBDCGKNYCBBnAhaJ4LveyV2dM3s/+d43vvjsc8f0ZtOXlFW6uLNZf+A3/1Ji5uD09paWllojSAV2LCrIzJgepw0K+599Z9nl1/n+h31LFvzzmrsTc9adOXRgy+7di+csq9sH0poKagtFpmhZrmxwDhhjImDGWOgHIsKSHrEChxARAAQCQRCEYUiw4PsBCHIi3c459anPQ06I0tre7/s+YTQ/OQ4AsVgkEUsyxgRBAE4SicSaNWsGBgYMI9rW1sY5KIrqum61UTMMQ5Zl3/fDMMRYUCQVcYyTvSazUUTKtcTcJvcoxirS05TVKrKKeRjKiMuCCAywjmWicsvORVIcQcSISbIMnDMECSnqtxHHcYCyWCyBEMpPz3iezRQ5Fm2r5Mvvb31j9uz+eEwPLau4bSuRBWYhIgqmbSWTSUkS6mZek8SgWSuE06qhuoFfqVVjEaMOtKe/f+u7H5HIcDirdWcMflA+2GkDb5ay0eSD99689tz2F//5tw3/Gj99A/bodHlfCefEYnn3GT3l9166GePkq+888vpfrqrsumvDVe8vmRU9f03/eSf6grFi79H2jd+8jPzBfrnrp3vc9xKQihaVg+7YVy+94kDhwadf++Qr737TUJLJM77cZK5mMdDnL77stts+fxNY+288+7PP550ky2Y1+d677vvbgpV4SX/+6GvEM9K9NaAZKdrenBpxDr1pkEBsydi6NvPh4YEdA5xWnPrkivMvu+uvP52/4esAtTPOWS42/XVdx4/vKA7s3deeigpqRPTE6cq0GmUkRM1ipbVDw1h2HEhECEMwp7eHc+7apqxoHMCz7DAIuucwXgA/xAdrk22t6RAbFhjPDR8avO4/E9/48a//cJvc1n3vm3f85No/No16advMso7ovmO7h14P6HRlJDFIE/6WnR95Stw2nQtywtxqfikj1TLkmw0d1yl4fYCTHRlp19g9WuRER9oV7WgEM6JhidUg0VSGwVU7wKsI3StXSQc3DzSH+lYslSYmo7lWNjVhekhPtFAXISZihcc1iXo+WL4fBoKui4Ig6nEo1WUNV5EV7WjT0x322AyaqvDKXnH+YqEZF2QfllxpHnzzL8efb/Rm9I5cSraGBgqx088c3VeSwNkzOszxsZycmbYqgIggG9m2aNPxy7Wm4waYiCIRKpWqKCmTY1OLVq0894yzt2zfggmreB4dPyZwuP/t124e/mnbKaubTsXILqlZgHWCaxPFo/tt2433ze658sLj5s8nSYm5nhTTfJFLRIbJijddpjE1GBl1DptKIh1ZNC/RliQhhXLRyGSDKYQgpDqRkqtwx4m8mqexsj+r6ouZuBjwYoPRmNHe4k4Oiemo0NGlj0wEzBS7WyGuNUYmVD/wOCdxKaRWxFezasptmsrs3JxZ3bs/2b5w4cZ89cjwkWGcjQ6MHJWBBxw27d2tzFTTCpm3ZNmWwWPQqB45Nr44G9mTbwJFOoX+drkY2os9+QdXXfvjR+5sXd4dSSyXT4jrx8YatrVg4zrOuW6k84PTye6cWyjmWKsTm+0ZaOyZeriv5UP2+q5wSxLD5nc3u9UZXaHAtWB6VFLaD28/OjpRmJC0Dgm7FQcrQB1ItPXMMSZbD+J809dB0lBIOQ8glJAgEgSMc84xIQwBBJRIIIsIGFDGATHM4NNxhzgHAEy4QaB4+PDnvvP5Hffev+PgQL8E0J7sjfMHH3r72guuWBaJ0UNTFVVJZNuCkWIQ4D3/fPKEG7581c9/9NYDz27bvjee4J5VUiHNbMGr+smUpmrYsRozUxOpqBo2Kxa3JEGdPjgwcKQSYAhwLRfBpi1x8BgAxogBBgYIYZEywMB9gAA4AGeII845CIA4IMcD33P0lja/WaxWGoJEGg0nm4pPli0vAF2WGq7DgboWS2djRza/8bZsXHbvg3d+/1efaYklrjhpeku+Z3aP3diHamVtWXcmTwrVSXUi5Jm4G1QDx1XbozBRMWd30s7e3I6JLa8/4ud64utP7l42v2x6Xb3Hm8OHpa5YaI/HA09HRj1iKXqC1bxQlEBRQhnLJMY9x224KKprqnz6RRcGruv6jmZEgNNQV8ampnhejycjUVXzbFNCpDCT37Nzr7R6tSzpjPtTk1PZZDciLsC0K
kz4nIpesrK32Hl2W93PGwHa/CIXjPaFBqOAAATEQ98PHBpYhDNdiAEDCEIAToFhEYPnAGVVVnOmNr3y8s7jVRjzxNNbUh8OTp6xpGfxlScVd7w+Nxq3Q7Uv1lWtj+sLW5+/49+nnH/Zxbcs+u/n/vkl51vpaI5iqkXzXzz1GnLz7Htu/sGq/h+KjbgbFkzZ7rCiU7IQE+UwDH1GUYAIEYkkcc69kDOk+BR8GnLOGQOCVUVVicxqk9NY9wQB66qKCNCQiYhLmAS+mdBVQkRABChIksoR1EwzEtEURTEMY2amEIlEJienOjo6XNfVY6of+Jxy13OTyTQGRCmtViu5TGfNtIC7IQ28Co1EU2EYOM2GqEUo4aKoqFE9DEPb8gr1qirrRiIJAEYk4jieG4S+HxBRVBSDmbVsLOG7XnlmBgFEVdUHKI6Pq93zfRauWrtAlVG9UhCRFo1nB4bGJSKk0mlNFRgXHZf7AWiKLEuRlCKAolFuxpKCGo0VK9Xa2MzqFcudaP7vtz3buqz7vB4hWVILRL3+iyevXZX47MJrPxmuf/zmlUO7Sjf/du/dXzvh9p89m8zIZ16x8dmHN/XIk9d/68Lzb7r7+Fl9Tmv/NQ9/vnLsYNNsi2mCurgy9drOngtPWPP5hfp7Tv6VkemjE9+75bObxrYXnU2Lr+j9z713nbH+s+6OX67+3BfC/sTUW/eP73t/WefJ1f7ore+/8Ge68PZ//Xi21DartydI6QB2InsysQZmpkoD7z1lHjty1iWXVyXDbO+KZ3q55fVuXNi5PoJ8P9QlGUkTZu1b3z7jjNMuPPzhJ5SVzQODS/vazNFJUZCBmUEzn1ClEHujo+PLZvVHDXn/ZJECpHTDtKszI8PZeDwe1erUscMgEtVb9Wi54bQoOEBOQ404Ol6YSj+1+dAoF0aDMSWZ+OzvfvL7M86+4ZozFj78vdpoge050NnX192VHtm3NxOLpbmQ6p0ze0Hfx9ZkJpb57Okb7v3DkYGRGS6DpIiZejMCkEqxkZnKs+VAZCAA6I1yJqEM2DCNIQ48oysR8C2X2TPjEdWbnW2p7xulvlOZmM71dMiyFIgCohi80ByfjkQiRJFBkjUkgs990wTAea/eGW9JuGJYaHApovV0AzVFzRfUtopzKLn4Urpz8682XLq4Lcs79Bi3h/cMrfzxD07+zs833f6IqrNIZtHE0WPTwTgRVCIR2nTtqSYHodZsul7Q0dEF3GUcaUSJxVNbPt5x7tnn3fTLn85fuKAx1eie1dm/5IR7H/jH9Sd96elDb8etclPPGu2SunBB68aNgDXwPAjtwCw7VjHqZ8Dz/JGKJIksCBFHAWeaEpO7crE1bbQe7H7hvSMf7SYSyeZSx/btzY/uL0xZscSsXYcHFi+bd9rZ6weLA6HK0n76uCV9qq4k+udUmaNJASo41rStZ3S3LTa2Z5eQb6QyaTGXCR03cJyoFGVWGYdlnoxGly8lWueRv96ezs3uW7z40LZDK87amLrjL2OCIITBkfHxFUpmxq/gsb1Co3rhiUs0UUz4wsVn973+2lsfl6hZnFEL86TVc7c/+cCvTzlZAwST+0bHR3KRlChyJEo+ASCo7lhCIqkkZsH0tLQwN/zOSO3umXpoPFV7qSHwNlHd5JqFQ2M9CzvYoXJ9Ta/7UXH640NcNwzZFUwkxOVLLjnxpituFIl+zjdOvGBZ36MfHJVU3QjqDIOFwWAUI6ToUt32XEYpB0EAUSQYkI8CTDECAPL/ai8BOOKAMcYiRHMR7/Anv/79N6f3bf/f0y/VTG4Qw3C8t/7+3/W/XywhcCvmA3f9/sprriRWdXBkx+QtM1/++Q/P/PxF1gRIqumkozN73UyLg+JCvT4tgRZ4lVyLtuWDYzMzi1kCykfr27cUJCkpqZWYxA3Cx02RgxsiYADAKOHAAUTMCQZFUDBygbIwDBFjmAAhmAChHg+d4F+/+rN5lLKQUxQQCULH5CyMynEWeD74RAbXw+OD1e558uChw2JtPE69qp5tn5Zf/c/D33ji9uG/Ta665oLClv0tfa0RLapihzIea40IKm5PZvhHh/RkAk/Cjj1Hl972g/GxQmedTdhDntOQhJbbf/vAlb/68pwN85SZEajUtYoj6hrmkj3djAo+QtwjAegSBdATKdZogsO5wyNqHFzUmJ6ZNX/+rIXznVqdEGK6tpFIMrPZ0t155rz5/tSM7QSRuNo/Z67vilyoc63E7CZpSc7sLDemwG3mY9kEm9R9Z5nw4Pf/LAqEcypJAuUhiEiPR9va2w/Vp1si8bkdPfF4vO47FDhQaiiq2N6C084CXREqoUGD5qBb1eC2h/4HlSPghTiSxAzqjaIWSVtT1uIlS7qWnGXPOffDS1489sKzs2clcFi48Gu/23r1Dad/dGDJeV944U93rv3WV3JzloZH94/G6tjG1K4BQhCGXhhgjEWRMMY819aJBgSrCAU05JwiDrZZwRySuXizWQdMGq7HOWUMnIqjKIqCcTQSDyhzfdpsWrG4wBhjHNVqDdueyeVyLS0tvh8sXry42WxGo1Gfu0QimqYJtsQxbzabhhFNZVLNsCFGVYRF6jMlrRlJw2MeQTg0ReSFgRsygYWAkrHWJjNlIvmWI0pSrVTlnKuKJmmRYrFIooIiyVbTdB2HYCxJkkQEJWrE49FitcoJlqT49HRel9KYqLWa19YxC6CKsacqxHHqgiBEEpLjVrnAJzwqBCbzPUwCz6YMsNloZpK6x+fe/cDfINUH9W3gMCh3hvjwrlcevfSLJ/20R1WrWrE2fOPls2/6x1vf/uHX99Tfe+y5j87csER1p/fvbR63euN5Z8evXbBErTPFj8xfvswTG+FwqXVl11jQ2PWvu4yFzoObJ1ojnf7Y5nrp0LzFKzuSHaitKKn1nEIKB7YO/vp5vV9LLTCqZqm7dXHpvscrp377e9/+PZp47/CLm51c1xN/v+MrN32J5FiOnZjtXD7l1ux4uxg2WHGkse912WebX9vUderxbSs2OtjA/kBbRPjDDV/5y51/nB6LcHccGw6oc6YLTYFXWuJiThIaqjzVMOfO6VKYj7h88NgEBZiZqWZb9YiIcRg2ahVf17RETMGC65iaGhmrNJNpNcNR1BUPjQ9ccNHJJ3RH22enz7z465IiRfiB8t635p/4ubcfvqUuRXbtPKrvHQp18Z3SqL1js0q90fr4kF2Sptg7/y6duLBt1ANFkMd80wUIcwnwqv1V9YWY8AtZOOHzl8wx6Bu/eSjVLaZGY47SNP1Qp0xuBSkqgyhP7d3ZnurkLRk/1UcVVfQZ1OtebUqO6zwR54LAOCWB54cUCwIXRSC4M9MNJgOtBemerQOIqFERWnvW25VacvEZ3B7/z6nn6poRZOKjYweXZ1PdJyw6+bofMBpd9Z2rP7rnMczl+NzFuDZayTdCjwmS7KNQUuRcLOs4HhbleEQaGpsUI0gmajlfzsayZ5x85mubXmOQzHYG551yLpSq97567zHH7A8RFALVjwdeo16a1FSB8VA1IkhUI3qrzU1gnpZNA+FNpxlT
DaPmDLzw5uSuocGDA5O1ph0zepcsufue+86+/ILTv33lCeqsWcsTAsmPbNnDTCPS2ntKT3tolzwv35xxYwEpHN42WZ5Yc+HF1HRG9+3pTa0tHBrE0zUFY5aUQRfDciUAsNo1FWuOhsHQrIHh6OLM3u3bImXou/PnoSJNN8rdgj5AKkoIdqU5+5yNmw4cXd/V/etMROrKsiD6s7vvuYSL83OZLTPF/YD7UWFu0RKkeK5Xe/G1Vz7zuTWtFpLSht0ouSOFSG87xHVHQgmEPd90OsWI5w7+YqittvRx696SMOOCCq5zMkgv//S/X/nwDqUFZzbte+ffL0zSQKAkE4ZxHjsgWvXQaQ2hjdOx/WPnLO5+9KOjLqMoZIEMDCPMKDAAACwILGSfKnCCgCLMQESYcc4BA2CMMWDOPnUu4wBQbWJASrU5Hxz+YGwwHA54Wok1tfkdMD088NbHr1z61a+Mv/6JtWff0Nb3nnvh3nXntfzq/x7939P/+eMtP/TNwvAk37pnyw1XX7d/965IonWsXI1LCg8RcWF00n7ppcOnzE5Mk1qVg4S8FkNpjfrUZtMBFQGAw6cudMwBAAQGAkM+QMAhxARhgeHQozRwKGc05ApwOnasZtgaEZDrgYhINIpFEwIPOHhJCYCA5YXRaCQaoYPjBx+45ppLfvSzWt2Z+ujjthU92DQnS5OLhNSN53/v4fr+6lsfOS1R1YWQ8Mis7ukX9+L2HNk/mV6cWnnFRePbD5glMziJY9YMqvnI7FkRo/Xh3z92oXX26kWpEDc9UbJsN97a6lYqhcJMf/9sWTGAhaIsO5ajprN+uSJlY06xbJkNFgRyscoYUyMGIG42HJA16oYcKK/UGCXxdJphF4uioifBaHr+FAJRCoNYRiVz44qmep5/9FiTG75QInkZEde1JIpDTjDXE0Kkvb1dC8e3fvD61oanS1o0l5UMLQh8AZAcKLE+18CBFFPTFnwA5T/cfDddMG/w+TtznS1VgSogIiN0mdkYKLTPOo50nCYB/OauV903/zPT367x1i+sXR8XW/tvf2jd2i+pD32ttufpP3/zJzc+9CgMH4JW21fioiIjRQUEYRiEoS8IRJAF2qBEEEAi1PMQ5hjA9WxFVvhMI97RCwIyS4UwDHVddxzHcbyQhUOlauB66WQmkU75vhsGfsTQCInIskIpI0QQBFKplBOJxMzMTCIR88OA+hQxTn2KEPF9nzGmcxWD1LQbAiaqINCGTUPquJ6eoiJHkixjQ/VcmyNLEi1VDE0Xc2C6rmMMpmND6MdiEVkWXT9EhAi6LiJEEGaIhYxRSjNZwbQ9P7DDABFFA+BEtE13WoU0BaSKWuBhEhLMpMDxdV11Yl5SSUDgmc1pUVFi8QxitJY/rEqZ5rHS2IfH2vSctEDZvemxrsWr1FkLVifzLsIvPfDAnC99PTw68sOfXtIyW3ryD0fu/cfth/NvHN2HF8Z6Tv/SOVCvOIXRqfJE3j6cf6sYERMrT12eN5vDb3/QPr83DN1f/LXzRZMcShhXXPPbpx997uE//PPk1W1sfYJ1Lnr3v/9Y2ZMqj5jzelbO+sqN31qx6gc3rLvjuFlLj9vg4uGtB4XfnzfrsotPrlleVm/lgooFQRovkGpea41/+O4H688+tzZSr3TPXrPm7BAyMgPJDw6/tykmCklGCzrq7u7FEfHEMy596N5nfGbaPqkGguQ7shIXJPDNEgvUb37/ht/ffldg0+KMFUZJGBENI2JEYrYbWJ4ZVURklXOZziq3KWH+UHn9rFXfe+huTt9EQiYYz4u1OtUjqTlng+dqa1YPKbGh8aHdI4N2Kb/g9PlzovFGRhxhLCIu7o+1LejTjrz6qjc2JboohjDGNChVpwEgQQ4U3zNqAaTmvHL/XU2A5b60i5RkBBYA2NCUIZZMhawQm9Xma7KkGajSqDVmYqmkVS0ZgWcfnhTnLeKfQiyEYMQFTRUQAUCeK8lK4PsNx0axzDIQNYCirzMt3hGM1ir33+FlDY2CKVWO37jeLhRWXHYh1N0Zycoa+vLPnPLuI69VEpF4hWqRRK3ZCDggQbBsh3EgRCgUCqKQUQyjafuiGBEE+d23Nl3/revf3PSmirxtR7Z+9atnxLq7Xnny4dlJ3jgwFE22MLEoKqIGuqZFuOMhgYDbDMyCJuhARECi7VgxPeVUq2pca79wfd95Z8wbH9+7ZXtXa/tkpXj/c3+blWw1bMYzBTQwwwD3d/VjXTaPDuCJstzVIcZS0bY0cNTZHe2kDlDOGO6dswgkpSWbkxfPhYgIIRvZumdsYmbJmSeHKRlqODR9lWFJpQ6ZuexrF99x5S+EE+cKiuSazVTU+N21P77rn/+aLNX2HTnYnBiv9S3ce7S254VN3zj/3CsuOeO+p9+IRNuuu/z4Rzft7NVY0Rx/u1pbVWg9/sTzwo4NkvcOZOOaKlsHjoUhE2q2WLQycqtYrsmdi1648rFue8391lsfkq0Ss21WLUWkNtf/2YHDG94emj0n9cmf/vfu2LCUi2uUSo5LA0+ohtv+9+GZchx7zbE9Y0tWp3MIbC9oAiAGMhCEKQfwHY8hDAhzwMCD0ONI4KouES9kiCMEGGPECeOUUU4BFBe41IwU6XQD1GkBpERPTC0WKx+X648ePvjNUy+98KxLVp9/oVYQXnjgbjgp5dqNsZnGcg395ufXfe/mW2pbpvnM6KYtmza93qiGU8XaxMkr+9f2plhpLCrqb7zzycaeEyZGpygQgViqq7g6c5HgMRcBcIQ4AwZIACQwoIx7lFnMb1AWIlAQoRhhBhBAGIAoUZeCrGgaUcPQEpDMUMDA5wSAUhkzSsHyobVNL5WbLba4pC/90YsfnnudpUe1Y5Mj0dWz3X0j8zau2vu/V0669Hzg0tTOiVU/+tq7//pPZv3CRUWSCbRJOUrXtmMzcHZMbTlQjvW24rgoxOIlr54joPa3lPZV3SLbPjy5ZEGLwRBY5vjOnZ2zu1q6UrbZ5JWG7LOKiLKd3YHZBEV0acAMOZXoQIyGtinLMnAIXT+ZztpNkwXMSCRBIW7DrTfNSEoKKWO2Se1JkqiKOPBDtVSiQF2IRwM3HyOJrkszgirmZYKJb8qCwrEaYMDMErDfmiNTETuhK1FJMrJiyalYbo0yODhcW9w9/8u/vujmq57KYOhD8S13P3v6upOjl1wWvvd6EHgipHQqTMvTibltxcO8vOmuzv65h/bec/dtL3zhe79w5L+ed3ZXc1S+6P++bDhsjdq513FWLOy55/qfWs0wRHJhYqpYLlGEWzs7Eqmk77uO72CCymWbECIqkus7iUSsrbUlauidnZ2taQEhTmmQSqUA8eJgPh6PR/Q07lJb+2SQJVqpEGCVmXEM3DHLDRMlEgnTNBFCnDNCiG3bkUgk8KiqaIIgEC44nqurOiEkCAIkYaSIgFUQiR24mPqxdNatgeVTFYhlOqzmOTRQo6qixEIaRnWVEGJaDVEWVFHwfZ/5YdO2dS3KKONAAXhIfcYoQkgQBKuCjEQ6pHZ7a6xcKGGOk5mEWbbyznQmmQm4iwk
DxF1KsYQs6kSo0Zioh3YjmlIErkyPVYr5ib5ZkaZjMzc/W29FsyI/+9EvTkxlVi1bcsZZd//sX+dbWw++u1spl5+dtybYsPjKZvXoD66/5bWtW3ix9PG7208+uX3k9Y/2jO84deNnZ8/t7VlyIa0fc0R26IOj/33q0cWG6vRFZnd0rEie+oN+MRaqlU3HFk45uSuvT2bq0Uj0+f/+b9sbR2obl0fqxfKeu+7/3SXUhuiCdns+oI5g49q1lRWFA4+/t7x/qW871Xc/mrCmI0IwdnRY1toTnZ17P3x5/+uPx0gi3r/k+et+9uBD95x93LpjLu1fMKfnlCUNnLjws+v74pF8xf7F92/KKAoYcaaouAwhL0YSWQlXDFH6cMfhkX2jludLoMmE2Sw0NA1kxak5KpEVLe5QS4rkqDsTFSxV7TB55GBlGGRSOUqjat1tNgTPcGljprhfHPSXfGZFOnukP3sxjXnE8ejcFYR3OyilBq4vmp5dN0houfzNB3ZC0hivlJAEGT/eIoeHhxre/qPqzF533rxmYS9XjWroEIAOR5+M4To0k3NPxLj7yJMPtXYulDs0t16o5vMKY5Jv9V7S0wABAABJREFUShHdlDU0qw03KEI8oH4YBk3TMqIxUdUYYEVoBjgitiyWxLQ1U2lOHM31ddj7jpGoZpLx8clQGqqZC1myK6O2Z/a8vnfFFyQAp81QoRlm0y25k/o6jjaTc7sPlEeIIlRqVcSRKAmObSmSHDFUjrDrBRwL1UozaUSspr1q3bLPffaz7uiBvo6ON7a+/I9bf7Jm/YKdd/9+4VknVnZ+rEur5WiUhIHTrFheEG9pwfEcB8QC03NCUm1qIIaugxWZ1kw2PF6TIEpIm+LvevtJ5jvHd1xMmZdPkhara6RQmrV8uVevNAbHM5ms3/A/uvOJ9o45nl2vN0xB0lMR0ZDoWL3aumgNsRrteqw5VqiNT0hWYPpOeyoTbjqo+Ro+fQGdGc8fHY10dEYzyMiXNwvVK2OaV6ime7rinSk+Mnne+Wffef9jSxfMrdvm4LF9ZWp1G0ZUFmSPqxL4bv2Dx4989qINuWi7Juec6SeODqAf3/RVgU17BIf5ohriSHdbGFcAqXWzyduTuKfnrev+qz2dGdaO7YdHq6haRxDH0Nbkvg5dlvrsj/61NoYeHR+ek467FELPxBKA5c7iUguWhKAGAKIn0qDShmGUIkdGMuUkgFAEAQEPIaDMDZkgYVmSCPf4/8/LDgDAgFPGAQENOYHk0PRQX1uPK1YnqgO+IRSE5t6Gfdr3r89Kbeu0eXdc9/sTT71w1uIlHx48dFw0952Xdy0TUyOj5dQpsz9/1c2/v+ys7ii7/6HtvfOXBzaSIh0DpXyXFvTooMraeGn6k6177JopBiypAXLAUknBplFADgIATgFxQIBQCBAyLlAIROR54FDwA1/jICMkSaIgonrDQ1jxQ1qslRUKAQiyJCbSPJhmMqGROA7LLBZROXckGSQhItWMSG/p8d/96Wv3/a/qf3TKonmb3n33jHO/+pfbrlj3g69PvPdRya+DCLLLZ69Y2tx6SAvqsq/nYrPyg0OVihyyFi2QCa1oQsyvmrLP5vZ1uoMTB97YsujLK6b3j77x0tNfu/aajhVzw9CsTA1l01nGcCBgGJmZnqm0dHWDqmBZBix6tbpMRC5gkMWZ6elMrgUUVSSIAHI8z3XqiVQbiErglQUiCYoBWAio6UqgOjTVabBEAhoosJTcshVSbpfgFd1ACDi4tlVjHHOs44AOfMir1oCI7EZQccPY9GCRqzQkWrpr4ZfOWT777EV7C+4OeGqNGktK6ltj7zx7+pLXH3o5ecHlomRLdQsqldbsYtDUiQ8fTXTmBKOJ08tueeYz2bao6cR1nTpe7SoTJoqlfMOdna+NT1TeOTrgmwxCHGTTTjSBCNs6vju/vyGIUqPuUwoUC4gIGhcUiqKCLIYceCiCUAYTgEZBloC1xJLNeoVBoIkKSbaoObE9oXQEaHZHx3EXntC/YjZ41RbU4lQq/P8xC4AxYoyFYShIYrleEYjouq6iKJRTRv2mXUc1kmlpURhwnxJR86hnmk1NFTknQRAQQ5QQTmpJIoj5iQnDMLiEitUyIQQzTARJEIkd2IqmBl4YUp+IKGTBp71dQUCz2WxAQo64Y1GugQ/UcU1ooES8JZEkKKATx0ayLS1138ICJBOx/NhY0/GJrIEkmR4RoJ4vDtmOZXvzAruBsYZT5NCOLeedsqaju+vRN17/xd/O7krjBycOtHYH0c7Wj96v/e+Z3y9dteLxZ56+7oZFPb0rr/rRou6OuQ/e+XxL2tAc/tYrL5uFI72Lz4rl5FFz58lnLFm77vQPH39L3xJ98tYnc2aMQ7Y3tWBx61n7Fze+c+8Prj1v4baQvVGFy1f373q9MDtYxKfGn35y5743n7/wsxeddNppu7ZuPbN3LmPw1EePf+6Sz5fFOgMLx1J9J6yczs/sGnh36eq5C5csjudybr48NDZ87Zpv9WWTZ2GvY/G5jz23bdcH756z4YvXXPnV4VKjoz/aktJLDVVVmaIZiXinUC3pWmGkJG2vAUhOn6BWKWtJRgqGpePA822Ti51UqTUDqsiOMyNzwXSMLlEQA9OvIXB83RdsWZa0ONKQUC6W9h/NZZP6yLC9tzI0t9CbWVIaGOATuzLZyaBanwztuKBIWR1FO+edvtCRoDXqOhUY9MEiNiW4iOmWZ/efeN15xuReT8sGvp8BeliDYzbpqzsNBVrdyZJRn79mQ1NCYkjcJG6Nz6E6NKoVZapU0sksnA49E2WUsFKRmrh0dFjuiTjMj+dWOj0rVSsC9crAgz/C6WzvKWcdefpZMrO/+wc3x5/75PXnHzWWdM5t8YXOmDpta6phvbrzEGT7P9/jSCzihevTvfumjtYsc0kiWgM+hLHFeblYs10/BC6rUiBJkq4ShDUSAq8i6pbGB/ozqUMjYsKTxJK6dHV78/DTxHZDSwWmyx7jAbMqbrw9o0ZFmKq5UlnBmCV0gTFRVgDzwGsKbkA4ClEYN5LN0J21fkP/+edIkSgQKXTsCCG2SWf1dPmjozMjo+l0EpAkZaJBWpkKxrKx6LP/+U99pvTzX9zsMNFzzGfu+XNlz2A8k/ridV9zEKSW9yejClIkq15TbL/AJrLt86AvZ0xVwCBvvPPij67+9rzFyz559JFKvF1CSlfPvLfufxYAHnvpfS3T+MP/3fTbn/1BVsTZ83r/+sImm5G//fSCyfJMF+26/5HHd5o2AnHWuuUaqkOzRkmoGwbUeChSUrJgYdZmrqbppa2H64+TWBbuNR/eCTwaiotRkOJKQ3I7LFwmwSdHd3etOL6DQ8z3JIkqoE+YtsB5BuN6aGMOGSQFZsXKd5xxUmrL++WSlwygQmQ/JBlaqfoGRRxLHJCPQkSBIMx4YAdcAozAdYCoIRKQEBUohEwiCWigGWq24JnJgaAUYhGJgiRVg8+fvWznk+90rFo/OfbRn//x3b/8+blSxW9dOHepF+x/a+eFP+kqNga7WxKvpnZmW9WzZg
8cPkZb1bZqBTjzHMiMVqcRr7VSY9u7RVVRRMQLGBYLMuaentP3TpqmqMd82wCxDnJdckI1DDzQQEahh1URhxzCUBfAALBtvxYAB5hmHKzoialiY1puci/rSEKcZQmYoeu5PBshEniUslhc4m6topjzu1q3Dn2y9+E/p5b3EyK50zakZ1skfvyZF//9pAtO/fddsK+R68sJoO/YNXjChWsyIA5vOSiKscnGQFunagUzgOaThGo2LK7QNiW6v3/O4Ohm9viL7d++4Yqbzy7seirZaYGoRDM5LxBl35fTelZXrLqJCfWaNWQiKWIgCTncD7FIADRFIxRYtS6qGjAQCYJkq+NazXw1kSZIjYcOYGWCoaISqIEcItVyilEpEyb88ubX3ly9tEVAUGJ+iDlXAAcOcW07bNRR2BTlslmdqVQcghtaJJ2ItRixbDzeFTA++srHi3rRgTd/eMalf9xZra/si5++oHfO589e0dJTqs7kwQwEhG2aRAAiXP2163fv/p9Zbxw8sG9hf++1X7uynh/33WZrTki0ZlfNW3juhfNAk7ggmJ5DOdMaDBFkOw0sYlmWLNvxPMoZ2r/7oUMHCm+8fmTLtkohaIYAHAMToLMnkoqlIkrk4/f3DdUnFUnR9BTjqJ4f7q4qx513FhB47+ho6bFPOj+Jh7EgP3Z4/fr1XV1dQeAJgiRJku+7WECKIk1NTRFCMulsIpEwrSYA7uqc5TZcxpiiKJZjI8bi8Xi5WJyqVLq6unRVtW1bUZRGo7Frx87+/n7KOfU8VVUFQfj0BVtVVV3XEUIIYYFiREDCsiiST1uFHMuORKPVWjUIAgUAIWQ2GrIourbDLBtpspyLvf3xpg0rV+lEcpqNWFcLkwVJUgKXNus15Pn9/f2GEWvMVOXWBMK8ajd6581VFNn3/QXzFs+fP//DXU+fcdFn+tpmT08e7V5mLz7xRpaKfffJ2TOHD2ckdOcv7lrQX155fNfI4NTtf/zJ58/5/JMHjUd++cdrrzlrweyed3cfITg1Fx2Xf378K303iYIIuD2sahAxtx++7aqLViWjcHUGn6EvciaPDA3bO/e9dfEVc2/861eJXOtsWb1zy5Y5c+YMDAzNnzt/3tyFu3bvmTdvQTKTrDcb1WpF07TPXfFlEEjQqFXGJ1A8Nn/uglzRZCqVjxZL6elFJ66658Hw8qu/OLc7vaStS1AdwXPSiq7JAWs4ddvJdpCD75tX/eanI48+P/jx4TDCenNaqVkvlPx5LRmFCZ3tbVbdjMb0hlmPK9koIlhV6oE9XpsuDLnjw/vamlXF8u1anYoCx+z4U0/lIrJ9Z+mFFwKS2HgpPbs7BD54+JhQtaN5c8aqd69dao4Ot56wUENg1+yEEo+wWt0Pqa/rsjuaH8B6HJauxNuGgPpE0VpKvssbJkS7YwFrkrSFmC4YycgkC1qnAWIxv1KNti2BVnkWqBbyIJ3Vix9oLblmS8e8JacSPVkLZI8RNP4ufJJ/5z9/NL78+bTeHzrqP79xw6/3vCWK1Ttv/FXVdRbM1yWexIe9LUPbc+m2dz9+wRzed6lszF290ZoY94mrRZuFwuDdz+5UJXHp/IX16XwuljTi8RoPTRZGi44cQki4Eld9gTlh2NYxJyJvoXlxrz153spT9M4VrLV1+Wo/KOXli05gwnzsHE4yb3rnDtNF/WvmMdeCZAu2Qhw1HBpCGEiINIsVPRKJzZ1rN+oRIwW+Dy5iTsNumirDdqWu6FHuT+7bvKWzo003YuAF1ULljLPOhUbN95t/fudpt1pqVhvJdLo3Fl1inx/zOcokOXVTw3nBAzhchkQstrA/xHoWEB0sGnkPJ/tA7Mq0zz3/FzfJrenmRH7pxbNnL1v0q1/9YgywqCDmenYJHnngvktPWtDX3v+3+57/eHKwv03qMGIjb+wZVumAaSsAS+enezpz1kTRTQUatDuB6oig6IionO+rd6VW2n7s9c89vDK1+umx17YHe2cp8jqImm65pLhaCJ4oJAM+BfyxA1vWapGJRhMMyXPMAgMJALMgCgyDQAUcBjAx4CxcuWjXJ+8h3wUETgBcKkZS4LpAgTGCABDFgAEQAUQEjKlAJNHgAsK+70IQggACQUU5CMugkeHNr064iWZ7Vyc70Ohf0fnC4/eVGrmzzv0i6EWNqN/40hc2rtswdKzUkpxrpgvDn3hdPZlmzZ6q1CqU1yYhcJLpdEtALduuxbJdkdCYtspJhItZSbL8VlGls8XGoKnFoI3opaQ3WraaBCTEpbAZw0q1HsZVXXEtR0FBCB7llOAm5g5wm4DJiI9RNMD2eNGbD0Hgtam66Io0sA1DdGogSBxjQQAqqgiLRBQRsmkgo+N6Zz/zyD1f/PfdA3Yom+Kex+/VO7LNI3t4d2bWks6DT74T6W4Twe+Z1Y+MFDS9StXs7GzHRDIiCaduAxLlqIw8ACZ293SpR/avW7t25MCWZ55+7NirE1/8zsYN85fUCwW7WG/NpRxuWcOTEpejRqSZL1iW02w22zs7tZgBAgFBDJpWVBKDRkOUZbdeE0QJBFEV1MBt2PlitrXbnZmqVaq5/rKscxCZKHPfAXFiQsrmqJ1WapzEFwmhJzAGIQuwTNSMIWFxxpoZmBnpaJnfu2TJPF3V4i1KJMpEEJVoNNGuMK09lXn9f7/tXB5duRTMILa4u3NxZ8uNP+9JciFqaNCSrgaBV2tODA+M58cfeuMfyxYvbO1U+patFnz6wqsPiIy1pbKUpwIm8IA3hyYblZLvWbbVYEFYYzyZTGIBYYwlTZRlERGwbTMVaz115bz1q04zPUpkSY+qWOKYh519LbKkEyy9+cZ7laptuXSmXDdtd9H8PrVUjSlyk8CJn5nXJ0RKxfG67q5dcb6qqooiGRGVc85YKAgCAIRh2NXVIQiS63i2bSPAQRhWKg1NUevVqhGN6Lru+75jWYqizOrq5pwW8zOxWMyymvV6bXZ/r+PZSiDLgoYQIoSoqgqfQgmMYYxFQajUyq3d3UADs9lkjEmirGl6o17XdZ1Satt2W1tbe2ur73qmaY6UJxsz9ZNPP3sml7G5TzlybMvOz8TU5MGhsXnzFsiM69GYLiuFQtGIJVzbqdfrooA9imrTJU7Z8ced6BQKa9ZdHY2Gh/cdefaN7Vd+/qKZvZ8Qqvz2gq9eduLKyW59enT45I2rDaMyPlQ75ayNxxojZ33jzO8+9OMHbrtVnsRnfuZzU3E/nC8qOUZJKpDVIBoTFsbtLuu0lRc+/OzPTuk67rld+weGZq68YeO3Tphz/MaNez7+MD8jrF9/VmliZMHCJWEY5GcK+ZlCe3v3rJ7Zgii7vqNqRmtbGxA8Mzqu6zpgPjY2IY1Me7msaMQiDctc1Itdb1FPi9moSLmEllbL5Wro2AZjwJAfOAYTxQbW29r31+yu5Loz+4d+8/5By6aNSjXWAT+//jtP/fhv3UAaR6c9DETHWaSMsVIR06Qc3+1NPffCf+fNnetblerIpJjrEnPa6HShtavDwQBAgoC4jKPAlHUdPDbwxofZjja8cr6gyHNjhljnL
N+kpoMjkmv5LAwSgmqDG2DSTmD/229p1i8DVS/uHupWFdtyNS7JRmiHEFJAMqFIJZJkun4uoeL58fLWoeSKpaMDQ4nRyWhUeO7uO5Kd68++6btchgiSgWM6tj8+NkQHZw4OHN7896cuOrgnfPrh96/9VvWKr168Zn1s4Zq9//c1d0becPaGmfy2aHbZq28O9K3qYcnG+j8/ety6E3c9+/rAx+8SSSVtmdZFK1NzlwRdvd/44jX7d+/OiQlrpiRqaOXGlV2LeqgmmQ0zDFlg1kWO0rrqFiclEgyPTnZ0ywkGpWsfUS+5+ND+D5PzcK8w/85f3LR4yaknXvdF1fhI8avQtkIrB0We15t53iirqohFbFlWvL0LfIBaIIgqiDKT5MLoWHFwhLhhKpvKtLfy7iTifOXs8/16lQrEDUKmEhY6NdzAgevvOaDKikz96ug48XkimeO18hu3/WP5aevl3na1JSUu7Dv8/u7wnteREisdOBJfmOuOKdDR+9S510ldAkqIAtADdnmda5+yZu1P77jdA1FzuY1CJdSf3HJQXQpO3n9vcgoAbjz/XIxhLMitzci5/rZDx6Zygv7PRx9//KmXlLau2uCWeDKuxrXG+LDp1ORoApLCzMcDwnTXZnXv0eCT8+JxveFIrMmIoriuiFCZB0TUrCCwHe5rxAFcMv0CAMUQ50gBFACoiIocAKA4U50Vb5cMiBC72ABRyTZ5oWCDKgFQDIwxzigDjrkIQABRygPfE7BAOaMhEAE4Bx4ECRlMmDjyVBO7fiDKlYnhqqUu7egKteiq+ct8wushRFozi1Z1F8vj3OU1Ut24bvVUw937zruzZre1xJOeNZ1q6wqCzHS+IWNkec3DR49+/ty1HoCe1J3pZkc8Zvl2dtypajBvZXeIlDmh0iaUPzrgHisGGQ1KtqtIZNK14qIY9ymmHDMAATvAwpCbIQQYlQKWAAcjKAEkcnKz7gU61kSiyOBCYFPwECIsEAEoDShjOsiuKHGLxrFbLh1tbe/dtvlAv3/02r/f/cgfbj7z6i8hXrHzw/PPXs8hbF+/pp4fA8C1mtveKRZLzWg66XqUUuRLAjUD4AAia42JY4P5Z9/csndiT3sH/GrxGTA2Uhkbj8ZS4FmhzOVMLiKIIAiRiBKRtRbHQSI2m3URE6nmhq4rKFJ+clLVNVHVhGhMQMidKUiEdXe1g9Vs1GrxJAKc566DAvBc0NUUwWUkowarLLp6aRg5hm7dKKq6IahRSY8Z2Vwk0yJFIlgWrUmwrEauNVsr1yqFUksi7lkeYaTgNVyn3N4abdQOzOpGrcnZ1EsLAu6es8RJKEQz4kyDEEFbGrANhIOuhJ7HeAiMcwoiiEABI5HWqpViyazVm9VmRNN1XY9EIr7vY4TKpWou1y4IQq1el2TCwQdgEGYCMEU9iMYRZ4FZtpAtRJTYZHWCMaZpmh4xZF1DiRh1bRKNQuAyHjIWClykFriOIxMQODXNkFLKGPM8z/ddXdc/nX+NRkPXdUlSbNsGjjDGjDFJkoLQ/5SJkWU5CAJO2aecF+dUFMVPj7zAj6TT1LZ9GiKGgiBACEmShBAKgsD3fYSQhEVJkcfGx+v1+vz58xljvuflZ4qCiDItWc/zHMeKRCIIoUqpnE6lIm3J/MDI1PB4LJHwMdKiEfBDHQm0nWdTOWYxq9YUiZTPF7GoYUnVBLBtOxGLHzlyyHPdNWvW7N69u6enJxbGRmaOdM1fGCiw4/CmxfNWI9cQSfNYaSajJFra2z96/q1Vpyz1sUXC1O5tb1WLYzgQ5rfPm9U2d8uObY5QV4H7E076w6VqY1beKB1SnpqRJmc6VgaZ5LWnLZi9oZf4rZg1MchBreFaZiTWWanu0cWEZVnC/0fXf4XJddXZA+hvp5MrV+ckdStnyZaTHMAG22BjY4MTYPKQmSENzMDAzJBzHHKONphkG+ccJCfJyllqSZ2ru3KdvNN9aODeh/uvp/NQX9X+qs531l5rr99alNq2PTtTKRRKnpcFjRQBrfVCZZ4ybNu2ZVlRFGWyWY41EQgLXW/WzUQE+SLqHv75bbedmp2e3XnYdUshaBuIbZiRaLrEKEAydfTEWa+/4cOfel/12JGrbn3/tpdufeVrLnzZxW+IgV3Yu3ltZLGgE1KRKGActFFYSBs2kBOOfODPPxxc0hd16pkBAvMKeKpAJ1KeOjO5bNmyuBNhpW3HOzg5Pjq6hA13m9kM1H3JU9EOSMgP7T86dukFX3//p07+bY9l4QpXNQ29CpZbwAdLX3jqAdJVuONLP/jpJ75yaRnHkVnDUvN0OEs7Tv4Tx0/s/dzHVly27eCLndTuvOR1V86S9K6v/3j/w09989G/xW2oRDMjPdvo/EzzxUemnt9feWof8RvHmxUuzXYy9b7/++zEr59d0Wf96le/uWyhOlSf+nr/Jc75A9lsWOovV+uSGUYyXll33c1b/uMLaVgzaRoFHbvQq8FGGoMkAAKQvvGVL28uVDatXFOfbbTrwUD/yMpet2/tSlIuMjfDFDl69Oi7P/7Ry6542aNPPn2hXHk2OBkzT5NQovlTbAHznqKG7l7rvNdu83pyhxamOqZ+95vegEsZ6B0C6UPYTjt1jZFJjNZkNZftUj2uaPoGMQApsDQYRPohoaz98Au//9PvX33ja61y3u7vob1dQEkcJ1bkB62WQRkzTN7xWbkLpIy0sgmZnpvWc9VyqZCWC6ePTH7+nZ8IO7KdsiOKb04hAWhjeOMH/s1j8q3/+20wwu9+8Uury0vf+58fOtJoYkYU1TRQwgUjNDEGLhMLeVwzE+KXb1i9Ycy9wMu9+56HJyLIR3GLQDLzwlO3/WY473ZtGeUh9ztx12wUXLAi2+h/5MN38icb2DlwOtw7B7EE5QMsgBlksYwSl8tJk7aJ2Kq80VhQYj2hm2cQZBApKVZCyJRRN8AIxbZQVQpv+NSVj7+4Y3rW37dLtYWdLTIDEiwTISFJQWowGVAMhgZGGCMijbXJmJKAQZmm1qAKRTPwk2tuOOf+X+xTURoOeNji7QV57geubMTl5cvOCXxDE5ibPeyo8PBTL6bVpMOCsN0aXnFRhHCl3bAyBQ/JpB4GAJ1AKGVwlTSrp6699PyH//QHV6NSN3LmozzgqFdhG1733i3VE7PVeUY6JseNh1+o7qkjDNoFw6S5pq6PEpVqEkktQVIEQkIMIAH1gU3dcE7ABau6h0W9GoqOgrESzNWdnadDLwt9hunpxMkBoqSrV1FinQaSh3I3E43+7MXXfKIxk1z07i3AV088/WDf2ZtlzrLm/ShXgnZyes+01+sSQvfvO7z17Avuvu/BwZGehdbMzbe+5pdf+d+y1XPlO983t/3FQ7sP/v7PL9Zx81e3vU3R571mp314Lu4eyg8Nq+nKzEytb2CN6+qFSiWfzwshlFJRFLY79dFVq1rzDcuyTMNMeaqUsjJZUAoQVpTGURsj1urMF/LdRq6eoJ8y71AdOWUbxb6fTti8z3brvuqM6M096BdvWg5ADZpx7JxQuNZsBnFoWGaBpdVGpy2E192V7x9k
Tj5f6unq7TGsQlfJNzUnyDE8mqTKZAWkhNFoR0jFqTIkUZFQRIfQVpbMRq6UPEx8ahiYEimVwRzfD0tZ2m77IEEj7ObzElCsZbPVyWF/dma+qzxgma4QyvMchYRSImlGdsYVBPlRiCgpZYtIQLPeKnUPaZBh6FuukSSRRiLmCTWIrAZpwdYY5YBJgZqIZx27C1hLRLlcDiGktaYUB0EQRZHrupxLpZRSSmtNKUWApdSe53HFGWORHyyWegghFpEYY+04TqvVUqBzuVyj3Vr8kwxiaK0551JKhBDGGCEkpaSYtFqtbC4fx7HneVJKrRRBuBN0vGzGyeXmpycNw3Bt59jRw5OTkwUvu3z9uuLysc7UtCEQEkoTQjwbk749u15cvnQ005WbPXO4b2kvOEa91TT9KEziWrWxasP6NInbfmfRp9HpdCxbdOV6Oo06oX4aGko4Xi5dqKZU2ylMDa7ZeGIqmJ87tXVkdGrv83++88nXv/edSZLMnzjY3ZVdfsG2Oheug8Et41YblfOh5XlWIbY0UYE5Plc/3cln3Wp1Iu+OEEISOW1aOWJ5QHnU6RBCDMeZPHWqVO4mhEqhgUAchrlcjhCite502iYzOOdOIRsGkcmMQIqkGvSsWpZGqdG/bGFy78eue0ea6MlQlAtdFBNBqecya2726Rl179N3ZJw6nzrec+HGjlPILOD5vTu6z7noPz/+xe0/vO8sz6kkITeoCVYn8G0burm7hwaf/uy/XXLpuQjDtH/GkB4yaHmon4chYwZIIToBQQgN9M/NzBRSbGbd2kwly3ULeLnc1cwCTnR2zYrffuYrB793ZwaMubmghVBPhtZjvuoVF775ts95vtg70f7Pi6+7uduahXhmnizplaXuoR3PTv6wchSCNjHT5x57WG9aty31pvY8bV/7MkXGujIDIBvqhUPH/vanif2HKpWFRmWWZHp03s7nWt0DmfjMZE9o9vcPn4lmNr3lze51H//55jWVhYlVVy3papOjRxsTE5PdRciMLL/lvr8hMJROKAMMZqcWZLADBpo+cWjm0PGt2y6dqwUf/OinuNC9XaW0VePtto+JCNMtqzYSZDTanVvf8S+NTueWN7xudbdRmU+2erlW3OjTq7awUZ22ItuySWaqs1NovwZFgW2jWD3ZOTXat/LcS86+6pYry6vKGnOhpKKEO8wnsjfMgon9NHJMhlMe+m2cs32iZMfPOV7U6mANuVKXX214hgMKQKTAcKgFMihRQLlWsaDMqA9kirEEyYGn0EprzSZdvjSX6wpo+8iR+cd/9rNLb7lidHCZ01X45fu/tOUd//rYT//07v/+hDPdWn/+eUe0L2IJEoABCLOkoQVMUJ6xoBMmA6Olb//rWx77yS9r7fLfTo9/4d9umEySL/34jupTvyxt9qDShIGlMB+npjZERvVs+esN3ys/NHt+39RtR+44jiFAHrAYIV2PZADgA5YIj2uxibKtgnGIuWnM6/gFDovTQzlCukVaBt1j0Azg6RRufs+riuvbB57dO3OCPP3sLKW2VJFjQiogSgEwOBY2McJSM6CAUyXBtl2eCCkSQsBg4LmMFHiZDx84OWG62VIpOXkqGbtkbOnNF7TnB1csX98JSDOIlq/o3f7An1xfzO2bXiB8vlrxTJTvWm6Wh/ftPViwexWv+uk8pRYz7CRNBY8dJDpnThEeulQopofzXs70u0aGrnjblkM7d2XCfEtrEdfANR/e23zqhZAgK5EdbIDLgVLAmhCpbcxMRJBCQGisxLwO5gB5mL16jLo81DZkFSTK23HUxx7021YBxV4WMGWlLgE2ngdkm8NFzHI5tn0iHX31ra/7t5tZs/jXv9x70fplqcZLR5a14oiL/GyrMlDM1Gq148dOnnXWuU89/YzlGJXG5Nve9aYfff5ry/K5S9/z9ujgwve//NtnDvgPHrz78cc+uHmQ+fWKLpZt6YowDNN60c4DynUaM2EYFgoFIQRCWknRaNT6e7slMxFCcRx7uULcbEVBGIchpZRlXcPCCAzTMnCmO208qrK/AzZjkWyQNN3+/N772MDQshw7wYdeok5N04WpGDSWMoiSiUgkxER9I30jy7q9pf3blq6l1DCYLhe6tWAYsSSsI6CKRwR14kDgZFCnC8puBB0IM25WAtYiZVo7jKch1Rhh1VSyp7fXBtmsVQ3LopRiDZZjN5FiTo5q7Rgs9gMDg4lo90BPGzvrlo3GqRIcGYQBxhhUGsc9+XyUCEIMp9QrlExlRC1VymcT2aGYUBNxnZq2zZOkq5gPw7A8WGyGoZfPaK0TnuTtTBSG861aplDkUtbr9YWFhVwu19/fnzXtTqfjmLZhGBpUFEWcc8s0pEyCIHAyThRFhmFIIRZ9y1EU2bZtGCxNU2ZaYRgKoVwnU6s2FjF4UXmmlCql0jQ1DMMwjNnpmYGhQUoppRQhlCax5zkL8/OOm6lUKq7vNxqNYrFY6cyWSqXRkSV2b9/BnS+8uPdFAGBA/+/bv/vQh95y1tatl7/tjUeP1JcV4NpLL3rTTVd992P/Wy5kz95yVkeFpa7uVctWTO87nMnlCxlPYkjTFKNcqYxnZ2YILivuFko2dbL1U1NdBTh8ZO/h/fuWn5r70TefnJ2Orrpq/ZH9p97widceeXbX5a98RZ1UnaGe9lSYs/qq0UIwNd3bnU/3HvIwlU6xWY1zFgugVbT7wbKL3b3AENeRRQpJO4RWKCnzfZ8xhoLIyxcMy04S7uayURDZbk4jkiZCKeW5OWwxGse4k3o9fRylTivNXr5lz+9+/+hf7vrXT3718PaHvYVkuDcnFlqm6MR+iKg9n0QVwKtG1o0MwfFTp4aLS6oP7rO6egMLd1smRO1rrrv0yR/fZ2vopVYk0ULsWyYmkVqW8w63gvHK5Euc80U7KbCCkykASP/0rOBJ3vP8hapt2wIpznFvsQg9hfTASc1Tsnpptun7YZCHovBrMN8679yNnT8/tzAx2ee6nAcDNtvV4W+64JKS7YZnDm886+yRwe7T1XmHwXIw+VxIMo1hIHs/9YnV3/8hmj5ywWtuPfLNb97+/KGb73gAmij805927nzs6AN7kNlqG5X8UF9+qC8/nEvoHOWVEZI5vn3P9sP6oE7PKcxedMNq78qP7/v42/fvOfzSN1xy+uHdpyqy/9zlr/jMDcs3XpkbvJiEPhhTqtbQQRgu1Ek19DvcGx21gM8dmv/F098eXT56xebRR3c8K30YHBhtsnYQTiyxCu2TJ4xMZiZuv+Odt37tQ5+Y+esj1Q999cFgIfLnUwjuIy+q+EwR8mZiT4ACa2VX5PbmU583UlFaGB4+CBMv/PK3Lxzb/unPfoxRnekdgEbEWOg5tGmQXGp5EkOQcKSNbIYCsUIBvQMgtGVlIQiTTsBMK+SpYzpRxrARdYQAzwGHRWnMNIFEupHSxIxOV5DkMFwsDI2mtQ4g5I5X1rq5s778JQiTBT1f377zxo/+d5aVt3z583/81Lcvvu4lA2etf+OS9be9+NDuw+OeVD4CnUtuPXflHx7Y10EG0fCK7j46Nzv
n55+pznRw2mfLypE5JHG6dLgtvKh20mz15sEUxkm+Yu34/z3q/HHXxau7v3z4nicAahgc4TWlnyFQBFYC1IZ0PkNWtMmI0BwShXWbxyUFo8AqwMdBtbV0AFsgYyERBWaqPz/y3H/deP2ZEyfM0618CRrzEaIAmGmsNJKAQGMstQKlhBIKAwCESSyExBgAQCFSbfKlY7lHpidmBayjQWZeIheNXXdh0c9mu7s5kQFLiWusGFl/0NhJXX5w4WDqjVmFPtN/vj3+woCpy7S25+ChC1+61fHddquJZZsg7ebduN5BOswbWgEdJQoSv2zTZ07OD//l2KbzLzpy+GRfOjt+xvDjzuW9PaE3fsL3KQWSQg6ZCNNYqjbwOS18lAQYhIZQgsHsBCIprPum2m9cnms3gzpgj3FCIOIQM6kN0BIw1Zpr7eqMIgaRsWVYBBOjc+HlG+JaMr//ULFnrLIQFAaG01whbYa7n901uqqcpGYcR6bFarUFKdMoTngaK85zVletNgMqtfP9f3vwhCp0j6xZ8YNf3veDr31MzEtH5yi2aMamwCDFynMzmaEMxu1GI5MpSy4oI2YpL5RCGpTWElMpwfCyCrNCqQyAeBQwyoCaMpKtqTNOrmnlEtAu8HobwAXSOOplxXB5RTp1/wv50KesmxSLpUK5v9w72D80Wu7vt3M5xOjM0ZP7njtxzpazHCTq47MGccPIZ2akcF6iTArEplq0fY/RymSja3AlBAkVamFqvhr467ZsyRiO31hwLJdmzGNHT2U9r1zqJRiHnbZhGEKmLmNKqVRyhEgCsBgGkkqOYldwqjm3GGOadjodhFDOdauynoqEaGYTjjViikqOG1HsZViYREqBElJbVAgkCcPKq4qIAJudrNbCdrm7UAQqQ+XkuqTkvt8yTWvZsuVaayGkUopzEes0TYVpMtu2EULMoIQQwzD8KGSYWJbVbDS01qVyGRCam5srFAqe5wHnhXwxTVPTsTOZLKVMCb5osTYMQ2u9iMeLiZiLkI8xbjQaUeAvzIt2u7l23ZZSqTQ7OxtFURT6IuXYoI169U8P37e0p7c+Ndk90IczxmXXb11oVI7tefG737veJE67GjBg+6afW3vFlt6eIaWN5PTRw4f3//rXv7bdzOrVa5ctXz44OIikxEhUZwt9y86ptKfLvcuTmvifj3/4qivOY63OxPEDWdz9+M6Je2uNOqDn73lhlZl9b8kp9AyfqJ9e3bd+YWE6VH4Jt3pcZ258oRrOtBudsaUbAqQL/YzJVLDuthbV2alcMRuHJ23DFElZCQOpKBW81NWbpjEhhFIqlGaGJbi0DTuM40ajVSwWsdYKoFNvOo4TZ3E6v5DML/RsWv/lG95/yzuvueYtryZDxblmk6fgYDWYs4AjXwKSYnkx+1S9feEFfVAoZ1406nzK6fbsMD40URtYMxiPHz3vZWd7Q9CshLbhCT+wDEgTtbSXNRaaDsDFN1wNRHVaC4VyN7TrkeTAcCLTetjJDvX4Wiib5bwuaIR8vkqGyjmVx4iwOF2QCY5D27Mhk51tzx89PDlWJmk97rIMP0ikhjWbNsOJuXBqvrwsGFg5MDczv6YfTGGmKuzwUNrKVqEXt1KEU0t3X379q25+7e4vvv3w1/5oGP24bCSlM1RkbNoDc+2J5v5IWz1ubrbd+dbk7KRR/NpX//1bH3hffPIha+S6dPaFR7/w64tetfzAQ0+871f/l73ktZr0CAwsaSyMP9W10IRjLUPqoFPt6FZ9fi5YiE7+9kHSM3qgeTBnZJ77+XOZXGnj2ObxmTNpPJ/NoVVdve1GU/S4bVMv6Rpdu7xnWzl/53ve+6mZFzYzlrXMZ0vhmJ2TmdLhSGV0LI10ofJQSOIOQMcFX5nL+3pXLXO3vf99V790W9iYGxwdDbk2Ct0qjQ0k8oQlOmWeFTRDKrVNMyqKkEH5RMPABDAFIUzX5VgbeTMRHBBNEJjMPv34s/7EzPKlo8K1SXeBYoIIM7PZMIkzLR3ML7hD3VGzlcxMNIKa9WhUOuu8wiDqWrEiKS+t7d2djwfvO3Ww7+TQhqFlP/vlr256z+uWukv/+sJDxVL+zRePrRgZ/suD+0hHZQksWd5bsnKNM8e/8IErfvvi1Du+dtvW/LI8qEIusiaSRk3n+kniSGd4Wbqv89i/fu36lWd/5fAP7wMWI1MIUsPkpGK2lgXAiqlhAYNtuRrn6qqVOKwvFCbIMxjKGgBoRYsO6FlQHQAAsFPx5te/9NDx2UMTsn/p2jMHdqw9a3T6ROPIRINInWrMsdIY6OL4kQKmFCMUY50qqSlQEyEwqOnUmo2hbRe+6U0XnXv+fwy3pJIw8Oaz+odWHn5icsWFvQLjudnj61ZtTFPRCUl/T+n8K7d94fsPX3/dNe7xU3ZhfmHqQK+HR4bmx08+2VU6J5c1kqhpWbTZDny/owUXCIhNkz6SaUmazTUPyP/5y+ErjsftyeljYdrRkAqUR4nKiDkDKslAADOCJqAS0IAQYIRAAdJACQhMs2bm6g14x3MJASNohriFuBCkh9oW7aRCAWLUxJCYhBItpDJBMhq1W7oNtNcDWtn3/Nob19UK/lhP78EnX8h7XbST3HnHPd5AcVm5GAdxnKSZTIZzTgjpdNq+H2JqMI06cQTI/M2PH983Fzt6eusrVvz4F7/5yjcMjLHht9K8bAje4xVFo0YNolKBXdcmLrLyNGlCKCnJAAjNA2TaRiGvuMCUmJhpBGkUm8XesF2hmFDDzBUs8GScVKhJpYI+yxFRw0nrmXNfBWGr+dsX2P++nb7j0583KcOAQMg0iYKg2ZmZRwgKw9YVS7eFrbDRmscWaYdRplRICdaNarZYqiy07YypMlE7zmZKGzvBQhyGabszsKzPQCLGiVdLXGY2PWy25oa6KTNo6M8R0+CGSlEoqSrFjGtl2naScEZMmWqTOiKVhGjfDzzPDqOm1ipfyMdxHCRNl2RtLBk2CEcpjzVVBkXS1DwCAsxxbACd8NS0WZR2MMah4ZvYcF2naBbCoBXxdq5UmJtfYIx1d3cjhHzfX/ReSSk9z7MNV0rOeRLFYZqmrusqpMMw1KARRmEQRFE0PT29sLCwfMWK/qFBKbUfxYQQxaVlOaAg5+UWuTJlbPH9GGPHcSilvu9jSrqyXdVqFWNcLOZ13uNpmMnarU7bMIwkiZYsWTI5cVpLZZcK7U57y7Jla7adO1+ZSqIwQ83z16/fvXe/25NDFVaZm1+/aWMQdZZsXuqWco89+PCaNWtVcXkPNccuuyCfLaShSJMkzVthGKbz9eGBwh3f+vzaVRt/8NuvXnH1JZds7jm+Z5eRYT4Vds4vdRXu/tuHkYXK1rBNai5eT5z5uMmJ70ULycBYb76nu95e6D2nGySkdKzZiksIax4hJ4NTC5Gov2xaiHUqedO0tamRw8LUyblISi6ldl0rSlI7l+N+JJSmGAhChXwpTTihCAByhVISx7YAYCYr5sPT82tf9bLhpUshqXED4nI+9iAxcIyFTOKeDLQ6nCZQAnrt1WPHf/aL/LLluZWjxrzShc
76zSPxVFjUCGYWNg2W0kpNCIENjS3o0VYcxDUMI6tXjW1ekz7/dKGUFyBqqtXd3yu4klLk7Ew018qVS2Ej4bV5s6fYOHqye6D/2ScfX796VWawZyC1wLTC5gKery7bsNYsG9zBbYjnpHRiGMlli+es005iKV+Gs69+/+u/dvd+AsLvpOBgGZqZ4ehP27dPXfPa2w+PD8Oy1ecOhw89saNTO+KiUni0bGfRqfZLXtk1tnTdF374+y/89AM/+e0fP/fXKUDWFz9x05s/832k4IlXvWLzZz8ljYOHfnxv0G9MHJ18zR/vsi68qHP0aeT65jTAAi83p8Znp5pBfeFIXYbGVKvWDudVnDYSnMxModFhLdHg6pWxjFuyke3xRByBxMInGNtBkNicQVCPUvHFh//4Hw/+4DI7DkDk+jZxP4wn7lYLx7uvvCGGHkvmNG+caFTykLOpRT1taxvsEsjdSVARg4MRl45ZEjEQxxW2xJ2YYpIEvlPIYoOlXFI7m8YJL7gGtWunJg1MMo7HUi7TmCGENZdRHAX+krEljXLOJ8hzHCES5tqxC9jNZ6gVzsxDhMAXrWo9ZvneS5eGPbYx0w201UmtFOWmD4+zMYdsGps9ebo+Vx3btI5V/bnD41rDp65evePh3QOsuGrJwLOnmqFmP7hjx2fDp84uLW/OhZPbZ2vg3V8/tbLUZXFTkMbIhvUQNEBpFa8f/+wDLz/35gdP73sSRjTMxDZQHjoi7DagmUIECRN4GBtUwjgLC5JaIZ9jNEEsSblFeVmb/RLGkaxQPUcgB3gggXsffrp/ZMvPfrnzmx+9cmLp6ZA5oyuc5mOt6YNCA3ANGCOOEICiGBTBcSScrImxFACJ0FIklHmRBmtVf7Z3ye+++LZ3vfNPuYH2B298+WPPVDMbNwmnO661zECVLKvSrtHe0nxQ6R80b37NBfc8etcHzr+4tvCkYWjHSC8425ufdSea2VA0e7sHDx49ZLtOX9/Y7EKtp6ewoH3ZjLuWGhMLtb8KBpg9d/BUQUOuOFLOB7l8dcPKkd1/mynHwdCK5JmqZm0LSUE1MGwgxFKZaEhtTWIQiISbhgJ/P/iYzVKOEBQJAcEZNjgXQmkABBKwxkpC7OvUdvMQuEaqYrHcG7r7hz+46OUXrVq3deHYfJ1DjXfCw/t7l4wtO6uYBgsFZ02nc8ZkRiHvZL3s5ORkpTIPmppYVOIEJP3i5386MLC1gp/thLVS91kf+/BXvv+9/5p87M4hxXq0AbUaXTnSjoRruVyC9pwoCoASShnSst0JcrHgfttwM5wirRAzDSGkWS7KMHH6++M0TGNk5XLNhTlnGKjnUpRLScfQevSGTDl9YebgMxv/eGX06HbKo8BqOXHaabDUdnsdx7ZonbXpI/ee1s1dfdnsynWboGADWdDthXzsxKWuMEgKWTdNY60po0mcTFFKSzkHcp4EyEuiRZp6WGtl+5KystZaCkkIA6ENQkGDUsonUinlMSRTbhBkMMx5IJXAFJlMyTRkQJVWMpYUmc1OvcP9crmsCK82m4ZluswRUma8AhcJ51yIBGNMECANi2eKWezZhht0/CQRFHlay3Yr8GwHA2nXWqZpIgmcc4wxKC0k74iO4CqKojSOXdeeryzYjkEJMqnlt1vV2nx/b8+mdatqtYbfaoOm1CVS8mw2m0Yp51JKHoW+aRq1hebU1Izr5YaHRiVoqbDfCizH4TqVCDK5HEEo6ASUYsPwGPPiJAQQS5ctbfudUl9PPluYmZxy8yXHyzSnag51PddN07jhh8tWrrZMZ2J+enTD6Gx1QgkZ+PTMyckt687hXKpqTZiiUOoyBUxOTKxfs/bQgcMzU9Nbzz3n+NSxFZvX5rpz173jZcuXrzDtcyYmT3v5HKXUdV3W0xNOTYGmtukis+RPn5FNxMPYyJB1Z50VRUmnVjewIXymhPI7zYxtB5ITI4s5UtJnHGtk+yAyI30KlFDS6O2OT58W2ki4cBxPaUKJEbf8OI5d1xVKE9MgBHFQiUgNypRITYP5oDWTbrYLK3n10FkqiCot3BeFY5g8RMEoWpmFFqEZ2el0ZyDUmf41aPSqV03ceze3gmB26uTk6R6nEJ+aJiZ+9szCS9ylvVCrYivViYEKuaihjHS4nHnyVPLhG7agdMFInI5ps7CSQ/nJfRP9g32W4YBhRIZGKkJYmyaB2DcylON484XnOG5GCS0QQr6vfKXwXE839ZgZIVnPgx0Sw5BZbOiIPvXj75z7rveTjOgtdmITSOjEPQHE4LpebTrpX5Nf9vZzr9tRf/axvX9+eOfysfz1V1zZLmf7RvuySTWanZik7g9+tv2wn5Hmsm/95xvz6+4bXP+Wt9z4Xnr6qftvfNdDB49cvOmC94xeW0yr6y48yysnK0Vd3/Ub1A6YRlHDP3b05L7nd6edYIEJP46wYTDDMWwn193vUFz3/SQMzO7CVKPiZl1DibxlhmlqemYzSHBKPOnFLUkGegI3fuTAgQ8v3+xJ5FqAFLAcYsuurrLZRK+miCrCpMyt6FsC0AZAoDJcgJIBUyt5ZtDAqZg8yhemUc5KJHNadpCNXFqwhIVa1K/MVq10YO06c7Ji6orIsmyfBcjoGGCGyAgigDgFToRj9/QIv26sW+LaRThTATdNObamOVgLPMuYtOYOH1N5llm6TPNGlPLi0/U0bb3wwp1bX/F20z+gW5AdXv2d/9iMEHroPe986OG/JUlynC9ohx067BvlroKV33tq4bvvuPreXQfvOXLUAyu2owMvVt508egBTG9/4vm00Qk7XYZZ9Q+8qGtx5hU3zfz1WM7vDYPi4/W7TehoQJ3E8CUCCjSNLQQ+gXMVswET0OVERVgHmNocxSACKolAGMsC08NKzynoSGghFhHzfa+99IE/HP1e9dCXP3zj0NrRBR9cRG+8Mv117XRlJu1Q4oARI7lM0E6UmA5iGGScCkYUAq00lRBX672Yzt/1m/3Hrfz4+HJobnv39QtBVkdNjyjCG3nVnoKQ5jO8JlkkYoMdnKmfs9o69tiGKXW028XE9wLfz84PPP/codVnT+eKY3+658/nnL/s5zse+t4tn5zc/zyJq10yZ3bFZd/8dDPtAdZSaWzgNoYHvn9+DoV/uPPoqQOHHgshD2z4NCcpdMw4LwgC2Z+lRrM1A9AkmYaMEIJWK3zwhDnUA+1TSa3krMIyipNmZJ3djw8eZUmb103cxVAseMFw2xZSST1kOd12GBNRnstp5R+eyW5z83YmIzOhNot5q19ECGJM+1LVJFGG+6DL85e/Ymz6yP4zdS2lMlm9MWs99vPHZnWU76+QcTgzg9atX3nXr3d85b8bA6tHZMsGwYnHnvntHes3rIVVw0kQu9TiQWoSmqYxY6zY2w++j8IwCltW0SE5M0klk4jP+agkIcyYtIMA1/1OcaQ7wj3Tv25FB2ZZF9CVXQPX5rWo9I/RpJqMP3uKYjI8IycHuvssLLzRoad/c2dfNj9TTD7/i+9+9C1viKPob7+9ffnG1SvOWzmpEt1TdpFEWBOCFmd4MBCMFQAIQAgtH
kWABkAIACGlNCiJFtUHRBZNSQCAEBJSUmKAxosjs1LKxdEdpUXMU8YYY3TRYMw5z2TdoOkTQjqBb5qmbdlpmgLA7OysQbFt25ggUAoDJFHEOaeUhjwJ/SCbzbcaTc65m/EYI1ymXElCyLETx03THBgY0FojBCnnoHTvwADkCtBqd+oLWssoCohtVRdqoNWqlWvSJIrDyHK8arWayxXSEDKZTKfZAoBCodDpCEwI59zx3LMu3KYjrhXhnDPLdF273WmahgFKIwWmY0ouFpk3M42cXQ6CgBKrXHDiOMaIDQ6MMEqTNEUIKS15mhJsu7kCIUwptXLl+k69anWZSspcNgeG1VpYKPf2lru6oyiiptFsNtasXicEHxoaWL9hTbPhHz9+8rKXXTw3P9vV1f3EE0+laXrOOeecOHB83bp11amFdHy6VCp1Ou2T1RN9vf2MEtCQ8bJS63ar1fHDbDYrUpFIrjUq5DJAMMSKEpzwNAw6lcnZUlfZtm3GKKW0XqtN7n5xy5YtrXaQy+WEECKVSZIoBLlCCSHUnK8SRtyMYxq2ZVkikZxLKVMXGYjSYL5OKaZmEgphZmywqGg0t509aKckDMEDQQzgBE8365e98mZIYgRQPzPnmZnqfC3JpG65D3OxZllfe+p01KICJVoSQtsaQEvVqCbIEAPr1tYOjCeaswlf91vZTL7Ps1OeEEBpkhS7ulSSYEo7zU6maOTz+VRwx8vyJI2jNFMohSL2uopQzHY6p6anOj19uJBx4jB2smBmkx8tXbb7nJd67W8se/VrEoc6CeYOE3NgA4gk8SNVXDmw9MLNjzx+x1UffcWFL3059F0GaGjn7z5P/ZNJ13DSs+mJN3zhsluuGrq0sX/qyNpL3/6/G/9dmxI98rPD9/yss8r9wu8fRyg5d21p9+mT8932BWtGdt39l/rUbFBphZ242ooaMXe6euxyHyV+l92HDOp62U6nAwwMk/VkSjOnq8WeHCkWq+12JpsPWxGiXqXSxIorbXKRBpzbifvcgX0//uXvGEOh0XaAcKImX9zXOTXPsB3sml2+ZXOwNM1ke9JEhZHI5YsKg8DAqI07yGM5maTGUG881NaVqbQ2S0Zd92ibLxxlo+Xxp7YXbbN7qPfw958q9w4UzttA2xpphzLK4hiQiPNMdDSLRcLrlReODF99tVsPZrY/WV7a04yZ21I6mDL7VvkiX2BqaN1QikMjw+SxCiCvGUqeba27/AZJ009+9MNvf8/HfvbVzxx5dnzYGHzqxR0S8I7DexIbVviZubCy2Rv6j1/9JsbGaK5n1/FHiLIvyJvVqVp8mbd8U+/8Czt/9ZVbnvz5M62ayq62lTGWKayDeVH58/hZ4brP7P9xCrNNO2ERaACCMWikgWCQUkOicQiCAQjQEiAGhAAS0FxDHekMgC1Ij8YAQiE9r+IOxCcWWmvGBkh06Ntf/ul/fO6d4umnqZTN/MDrrlp1+72Pz063LSpzKaqrNMqjQGiHGZoijYREEmtwDHBdsKjY84i4577bhrzUdLpf87Ibntz+fJqqDVsujCXe+8zT87bRu3b53jt39GazE3567kuvmdn+/DP1P9OF85b20Xb1VD5HSIuev7bfW1J8/vY73/ru6z7+vY8FndOnbr/DMalvSbsVFTRMhzrqAAaJQQNQHMuX3/x7pL0EGAG6aV1vjxdXONtUXMqb7dN7KnUuDjfjEiKKWSl0QIMGGPO6To7PF0egmIVTZ8JiHxs2TT+OU2INeXbs88hXRt6MdJJoakiWAtFKgQYMWqVxzjR2PfXkZdturc+0u3uyQCFvWmYpe6ZRHcj1lkqZxyaPNRsTI1vOeuyRXYePnak0p4BhpHNM2Y88vUcjkfJOjGJbRpnu7oZLfnX/0++59YIgrbpmIalVfB4LRKNnDgMlaGjAUopYpqYGV1Ikvh2k1UY105U9M37KUGhwZDQJUkVpOzV6TCuBkig6RYfP/WR69ocV3VwJ/LwSRwGbOvHJduFzVu7aobvWPnbDn9ZSdfqYhSmMbradDPUPlxJ/5dKzh0eMJ55++o7vfdUaGXz6vnuuescthqW68kVMmUgSpQAhQAhhTDEgSg2MscQKY6QV0hgBACCktdaIch4jWAyiQBgBQloppUEJrihVcRw7jiOESNNUa81FwoWyPTcMQ0IIQijmqeu6CGPXdTElhDFKqdZaSskYMyilGGFAURByzm3bBi1BS4OZjXq7Vqv19fQVCiUv43DOCTGCgJu2ZZpmf3+/YRgAsKg2I4QcL7v9ySd37NjBMLv11ltNi2WzxU6rSYhpm8bCQi2JQyGEadj9/f2Uma1OM+z4nucFQVStVgmGxTPjRCXtWo0Adb1s4idRHGSznsWMJJUIg5RacsmY4WY8AAiT0K+3nYwnhBIiyeXySKGEx1HIhRKmaTJKMCJSyjTRWnOlFBZCSZwrljvVhTNnJrXWU1NTW7acrbWJECPACoXysSMHTZMN9PdOTZx2vK6XX35lnPiem+OpPm/rtvzISDw/u2FNkRGGXORrP2zHjukuX1putVpaqUwmgzCOgiBbKCZJwnmCMcZAMcEnTh5lhPYN9GsFXsa2bFr2sqkQGGO/2UAEF7OZqmURZmWzphCSc+1kMqaTabXbUmHDNAuFAqJYChFFiZvNIpKAQoxZKE1FnKiUU2qhVDAlEQGgWNTreUuGHc5TECziElyWaUFrZHRMY9zbNwCtsLvYu2T1cjBNQKwyM5sVC4/87v7GDAJbyzZVMjYZyRkw1UFDy3PDF26d37mTlHJuFLFCMQgCKaUQKSg9fXqip7vbpMS2baVgoVIplIppKvz5ec/Leq6r4tjpLkbzbVtobDgYQaelslleyNkhBLydo33R+WFjaKTrD2/799yarmxXilPwLGgJ8CVyOQ2nKMmsf/mVH+q68FIonTs/Xr3rZ58cHe1dfu6Hh9ZunKmc/Ned5w/3L0NJxwkiONFsx08x1bQtuvyLX+vjvcwdPHX06c6Jp1fp9OItF0TNxr5HnqnUqlY+o1wTLy0M54siFkTI1C0JDH4aap0oBpwnDBSl1CsagiYKESefDQVKEAuDVNNMdb5lmQWtdSIbx4+cHMku9Y81YmeBLSmBQtiAweE1xrL1WvoQ+pBJzGxOAMMm9Qw3SWsgq7ZVgiSfZkAn3DRsnSqLdOtiEfVsqsp2dskQDCyw9vToTVdEFrLHgw3XrYlLJk6A9Q1AJAFFO7/5zdXbNrrL13AnxxIBPOnNZ5MndjrlYoawxLG7Z6XOZtPBDGD1gfWXvOTV5yE7GNSZcz/5juzK5RCdjPpWFGopbB7c8/MHtmy7dviKc+ZPjD/+09+e/a7XlWaj17/yusd2bl+Sd/ozJCft+vTk+edtuvOFPX/dsV1GieJxqYesWpKNGsG3fnRquC93Vveq3+75U1gy+9C5YmaHYmT/tx8t7jMerj2yG+9CKkqiqgYPAVAFmiAAwjRgLbmGCAQHJDEsMlQBKAQlFDQBUiVKQAzAJWCCKK6FVNBZSLcnM+f68P1nDp1/cOqcnsH5TrVYZU/P7Nz2sq31
3z8aJ4JpEkptaxIqGWVSDIAlIA4oAVAgKZPMMrKmkrJqsLlw/vChAycmZq55yU3PPL9nSdh/zeZL7zv0YNYmzU4rTeTkxPzq9etf8qqL1Le/dNeudull4UXnL63sa9pOrayXnjm5+51ff+dL3vlq2P50enxuEjX6mOdSrMwoS8ztzaQtMAOpkATACHCLIdPo/OKbr7r5bW9pnRo/MTm58qKXen4D0LLtD+y68rUfiBFIQ6aKSwmmASiGatjkEupZOPvCUf/5xkK1USxiikBQmYf2BNCsTcIgdXqBIy06MSKgmACCpeIoDnuzmUPPPfsyKdtN33QF19BpNju86mW7lywfBRnWatWAV3qHenc/P9dsR8zVAqCnd/DwrsfuO/NCrms4UgF2vVqlPtdXXD4y9pufPPaum18JeFpwZPb2vfwtb4BUxlHDsl3gUNl/2K6bhaEBZhoAKixDd98ySKLe4UEnn+VCIZNRocvSTOcmrRTk6szcT559+H0vbLzoYwdzy/MX9x1FE/j5rSP1/vG3/n7ZJ+ilf3wnDN5N5SvfVhTtMy/uT6eO3/atT1XmW339uzvHW33neC+98bINLz3/qy+7NKrVg/lpBxlpqyMsCQBaL6IqRoAIQQgRDRIDBgxKKdAYAQIEGrSkBCH0d3asF9sttda6XC5rrVut1iL6Ki0WEdeyLGYYBldaQ5omSikNkMQxo7TdblPDEEIIIQghWkrHcZTglFLLNkyLGYYhJcUYY4zHxpbatjk1MT0yMtJotJRSTIHjeAp4EHQcx9Fax3EEAK1WQ0qJKArC1qWXXTw8MBzFnTBCCCGDMscxRcoBkGN7QgjOeRCFWYPNz88rpcbGxhgjUegvWq4Mw8CIGoahFJqvVCilSMOZU6ezOc9yCyZlAlGCUZL4KecSFOe83mgUkXIczzZMrOHU+Mkk4UtHliQq7oSdxSJrrZHWqWU6XjYbRRFjJGy2AZNyucwYKxQKjDGptGGw2dlJrXWpVIrDYG52/smnnnjjOz5QnZ0wGXadXFuESqHZI8c8z2HEWJQKLMtqNTtJkhQKBCHEqCmESNI0iiLP8yzLIIQwxpDS7VZrcKAnSZIobKecJ/NcSqm46OnpCcPYNG2uZBSEa1esOnX40PDIsjSJLcsKmk3DMrOOHadxmkSeabTqDdt1TGZ0Gq0wjAuFUpIITrVSYFqZ0O8gLr18dmZyKqrVRdRBEHJBuQDqIpVqqaiFYO2SFSeOnZzdvWd482YoeM1OS1YqzMqUunMnH37wuXtPRD4YOcCEgY4lkTnLfaoWvemiVVAqmoadYQ7qJlIR0zSpbes4BqWL5S6VpmHHN7yMoRG4LvAIC24QwzCtNIwR4KQZKoEAcBRywIiZWhOUqggHXuhOoSHn2AtHVhzrX7FE3HXb4ysvQO0mzikspGoLPyZyAc2Bm/YSSZrG9IHfP33XX889/5r119+SyFo0/ky/NdjTnJHP7zNIHHvM6lnnrV/dnDill2xyGtIu9B792Pf+8uWPlLb1FgvFYz95oDbbqmRVYXRJLusgAwKdkgwOKDcMo9tHfpoSjUUcm5bNkUwQcIS1hMrsfH5wycx8DSkDKaw4aoW+mykvtE+NT0z09axLuPPc3me3P/eO5cMXjFpdPctGRTtyBrrWbFmXpG1rabac9/pNyPbMuF3gFPspKQEpAVfPP/FMdLSdZejg9OGjE0cu2rLlgoteYhTLf/zyt4+MH7vwgrHXv++VzROnOpVO79AYw7l499FkobZv/20bh1d8/uufufGDtxoDZXl6qlEJyaBVQvk2Cp3lJD5xkhb77TgvN9jxZNsd2LT/I1992a2bX/XGy/O451gpl9kzHyzLW91rDIcsYLcLzEah/74vfPt1H3n3177xw+FtF1z/hU9HOW/TORu/9f1v/O6T3/nYz3845PuvOe+sjX2FexH+0fZ9vcwYy5h3TfjvX1Y6Pl9elTd+evTF5H8fesqI/3TdG6P5Y09Ntr7ytS/O/ma631jy09YdM1AREOcBBxCD0hSQVASBsgCoAgGCaxD/eAjGWiOAGAAAOIIOgNbSAWkg2gU0h0iCxAOPPZ1i6AUzTpKPffor23/4lXZ1NiSou2T1b1zy0oUL/nDP9pxn+Sm3GEs04g1kYmpRamBNKAedRphLJM2MT32tcI9EsGHDVtZ3/pL+tfft/+Mtn771fa955bd/8D1Q1voNF9Vm52kh6MlOLrnk1ve+9bIv/vzenz98LruGXn/Nltu/+9cD/rP3PfPD8pqNwW33QL114m/PIQNEHFNtiQxoRZ/zAw12aqXAATQHMLtV0kBw4IHO/iz7wldv/93Dj3/1ze+Edu0Xd90/MLC0DxsSqyQRAMrEdhJDButAxhuX940O8slKhTLSjkEpsCgwxkoWP51YPE5Mx0A4sR2v2agy1+RKIgw8Dqkhc9lipVZ/+m+3CT/rll3MvZofrD97JbV7NGsjle3pMo38SL3WaTdCRozTc3MKgFHZqrUXeHvMU6lFYh/35AsLVbF1ZNnzT57Z8fDBVUta05PHli1ZqW2lqGlJvf+5p8fWrO5fOwaUtBZqOZqVQWjkcrzRYSl3sAkchFQ2swGQbFRwNhPWT9NCf/MZ3ZV567MjW547MHnTqjXJi8fHobfz6rwzfmH8o93XXGWBTNHx99w+8LatRmbifddc87av/Wi3od9x5evu/sBHLrl6U2ZJodpegJSm1VZ3zkyVlEZGJeEiuQXQWuvFImglNcJ/16AXPcD/vNZUAQDSAABI6cVZWwQAGhuG4fs+pTSMfIyxZVmEECF1u932HNegNE1TwFgjQEgTjcMwNCyLMaa1xhirRSReRHat4yQMwxBjnMl4ruv6YafT8uMwQgjl84X5+erqNeuiJEZIE0LSNOWcZ7NZhFCn07EsK+J+d3f3kSNHPNvr7e1tNlrNZnNkaEnbD5BWGGPQEkADRqngSupioUAIiaJoUSr3/QAhxBhTGGq1es7LGIYluVj8NWzLWGgGfrsphBgeHLAsSyiuEBCCHcep1WoUY865yYw4ThBChULBD9tCCEKY53kGMxexHwC0YqbFwtBXSkglOOddpXKr1Vpozi0dGQGNhVBuJtuqNUzTsvr6WpMV22EGw0kcG8xMotQP2qVSkctUSun7bdM0ASBJOMbUdV2LOfV63bIN0zT9IMAYGybzfT+J4v6BgbDjI4TsfD5sNQ3L1Fp3Wm3LsjjnmFDDMNJUCCHq9frQwJIkTW3b7oSBYZmMMak1xhhpbRg0iKMoTNyMhzBjzAyiJJtnURRRhMNOO2o2KSG1VsPKuHvv/lVP0Xv+qandz1eHS4aRikhkT/vtL9zxg8KmPJmetPoH0iBt1uaieoPZmf4+7/df+OyLf6kqV3U4ZLDHQEgWZ5j16Hz820c/XVp2lpyetzGkJBbKtAmJeWpZFqJEK8WlMCyLc562QsMwuOJOIQ9KxmECClmWBSZAqIRpL4QTX7n0Y8yNJAKTQzbACky6LBL7YcUbzloyuuLhnzzoWxFRoZNAYprKyKkTvrF1/Ye33ylbDRfHnWo9s3R
5gX45A6DcBe3MVdQbIUtEGvoB1gBEiBQukYGC+/vxUCDRpjqpQSCgDSITSmBDBmWqb9aZ3i1oquCTBFL/qErBTBSp/qTmNI9xZIK4CVtaYgnDbWMSErGKy0aCV1o9TTUyGtkSYrCJ0WuFrr1Bn7VLcCY5IuAGQK8xhjRAgA1gqU0ulsHmkEWkuQSilKkJCJjFt8y2X9lTYNk1Gmlt8NadCTYTAhZeqJzSjGGEsplAKMqVAaY1qu1PxGM4piz8tVyrVMJmPnTM5FWjQrpRACKSWleGFu3nadTCbDGEuSJH1OqVVfX6+UAjAKgiBbyArJf3rrLe985zvdWnXCms3MJTfd/YPmidFjB3cPGPmlg398aoTccs/HSvO1TXZx9cs2vv9la8fn5ncfnRsdS+qV+h8f2VlajCbqdM+eZ2EPYWByuO8GVujbMbBxw5o/HTv6w127rx7ZZPm8jOG2b9929513NnAjqi7NzU4s+qW5k+Pf+evvacZm6nVpsOdeeCHet5cBEIAcQhRhrjgBcC07iKIAtAXIpxoUEACl04R1QAgiBQxAAxgUtIZQggBgFISCkoZBIFkNXqEz5moo764fcT/+T+/32hMpxPy8H4VJve4H0WxXd5/reDJsxoLnch5huOE33Y4Co+bC/PzgqpE4jHJOjgBZOFgqLS51d3cjjbCHwiBYt2a9TsLZ6RPn7DiTmsaTTz29cOIRbeX2lu956TWvvfR373HcgZ3/9avbf7Lzqz//q5l9B6qlh197faeqhudd8ZLhzat33f/gv3x/5yfedcMZw8Mdlvn4f/1h3669b//2p+enxnf/5hGjN+pY2jD23POPP/LrjWdtOvZ4CVVK289c+5Nf/9pvA5jr2bTDLggUTR5//Xl//vGvXHP2ZTfOHduZ33aG3TTHHtuVXHP9L3905zf/5eXHj87NVQ8PrL/u4Vv/dN6brq3tG03aQ29gaOcvf7Vj7XpFtZtzV0MbS0TgkUJXcWJ0wd5yLkt41pxanG8aC8XpxdmFxQnU0emM9KoOL9/XFYbxUFfvsWoQGdhSlmMY2qwzNzHtjEk4EGdo46psZqdfD2MOqMiI5rzJlYGDRj0Ta970uUiIRpgQLkUpbLZbXV1Dg3iiguL4zPXD35Z3vufaV3hn9N394V9Wyv4VI9vuPLQwaYhezOwIzzlwLIgSTZSCioYKaBvwgCIeiFGtI5NWy+Gvbvvate5lb/mrN27fcVPecyaqkQJ9Kid4+fP+f8VgBAgQ0umdBimMEAUgWhescoCaIZb57tXXnTVw21N/yg/vEHdVhTtB/bIEDgQI0iJRQGhN03YuG0xirhBIjWAuEW3EiFDCJba5hbUwrKwOK4mvKod5HdBgf9yVz4fNeTy1dFbXwCyvnjxcn1ClxIQRM5oRbEpxbgDEDRdcnrFF5DcAbACOZLclNdELnDUSq8kdE+brJvRbNg9454bcUr7Zk2wyob7ryNL0ZOmK80fC+Ej3gI0XDT9cqsydKHRuTaq1iCcTj94/d3gK3fX8JTfefOVFb/jxw7fddMNNT37v0zs/9PXu979rmIo/ff0LR++Zh0yPrFYRl6UoqwJ04dmbQ3emHKOCQzxqbFpv9T0cZ9qHv/v8AwlRhpiX2nAWw3B6luZYX3cvKBZHksuaQZlh2jKMFGhMmelktBCNOAIttRQMm4hi23IgVuFSo9Ddxzq6gGZxEIHSRCIMGIKQCr4i+saA5LJCB+GWB8bKt5DysNKJrdRISlBKg0IIMMVExFpJmSjNuYxjzjAhBFFKtcCYQKqbJoRoAlqDTpEXQSoCBgCFEXnR0PdFgKq1/m/C9JW3JlrRHL/YNVoBKKRxOv1NnS+RwhgDb+0wWtX38hgYQxqroCDV9S5X/0qDTrmIreIXUq8SQlLy18pOpcUd063pbYqaKH0SpBEgeTp8a61xGlOyvMVZWXkKh6m7tQaMW0gNBBFCSKwiUIpisswe0wBIKImAps3ndMytAKchVhKg9ULL5C+phQQd+AFCyDAM27FTXy2llFLSNC0pdBzH/+2Tn0qWLctQSgnOAZBpWoZhRDHHBBRo0zZqjapuCCfnZFwSK4kJSSt4yzQBaSGSmAvP84QQYRhmMpmWB6eSjuNgjGu1itY6iiMFUgrxF+//i1qtRvpytiTOfGn3g/dv6N2RzSViVdx/2Rs/TFG9Ok9tyqeawYNHva6OPrsr26Fu6G+rWPKzH3+98GuGNqDBDyzUwo6OPQ8c+qePfL1nDH37qT0JKNcljxw7lJHgYvjhI4/ckmkfALhisL+rr93uyvMoPBEuemA4Juvr6vb9EAC4krGIuUCOncFAJOeEkN5MRmgRBM28YROC/LBJMYmiKGPZWmjLsAMZGMwKw1Bo1WbZQRjWQ7+Qz/fU5yrahLah0fHj556xLj/gzTcbT/5p1nHrhmHsOPcc4jmAJWRMETWAAJVKB0HQqPqNipS8Ui8T28z2FTiXmuJABwRjIwO9mfZcxq5VquGCn83mn9u76/DhQy+57DxmW3f97ncbN2zOGB31KLrk4qsPP79XJU6l9HBfb/7L37p+16M7+zasyobZLau3nKg3Jk5Ozh7dtWHHxbf9+qJiPX7+6SMPjT7bv7r/A3/zzxOP77/9mw+edc1Fl1229dAjk3fcdc/fffGv8wm+41e/vf5DN0/+6cFv3fcbwNl3bLvxl7tnCXIbBN573tZVWy9fXHxG550H/v3OQ4++YHSgM1Hpwz9+7Pu/GHwqnrjr49c+8aN/f+z3u55/5taztr1scvTg9lddsVSdqLLurGOfXBgtdnWAvzRiZ6f3xcKs5QuVR+/Zc/b6jZXZ8e413sDmVV690DMwXK0t9QytJQqZptmMfMEglqEfYiIw0CQSYMQaqgt80QWmCUWOxUyXlVHEORhSJgJxHoMmQdj0I+yYdq5YcJkT4wbQ0D4Ww8iq6eeO8icPPnjLDy6+7jpxeHzz17/x2K23fe72dyvPerrRfDtAGUIVQAVBv8ZDsZ4HVQYoUtyhCFNQ0m3Teu47v/nMq1/1nkc+9+1bJqevvYYvNbVIkRcvd9hSHfD/nQWtoZXMqgGBBqI0BY2lrHAwUc2WJ4KF6NwzLvjtoakn9zxQnfjDtne89/j3fpLoeUABFhwBaK7GdLMHrBi4C9AEYWo2FfDhvCNQIplCochY2XpYWd1mblw7+NRTRxpgdXlN3ggg44HUk7MTXpu9zmNWNru0CHOTlXGgHDIIuNZNDb5KAAnAps25nOZJNgFlKxwjAKSpHyELqFwK6x2Qee5R//xVq2dPzo+/EFaA7j38LK80fvqTp97z9rcBQiZAMNQb4Vyxm2pmrOsFnm/LVox3vuLm2+aX/uUf3vjVH/0c73j33/bl1ipybGH2W1+r33hRx4HDEyNr1i7VZpCz1L3RPTp/oDPXb9V8PtDZU+s7eHAJWGHfSfzvb/ib39/x9BUfuuHycy/ZcOEO++yhaPJgyBtg4CwpElGHRHIVUUwIwdSigHCitEMtjJlGFCOiMQAzpyYnUQx9IyNcaIQ5pZQ3GyYgABJFIVUKMEaEIIyx
RlRrncrR0lScFn6knF6ltJZYcEwIIogwhDVBwBTTSoDUUnIueSKkBgESK0qpksCjWjrKMgyDMJoaMCOEmEFAYY2QhmW4BYDToFe/WDH34p4tPgW9arlTCoAArXRvJIaWojmFbiXTx4DSgHSrUbzStMUpUmpCKEo9JqVSSmqNEBgAMi1f00Z9atgp1TIuL/O/UgwmGDBBoFGLxrb8OmmhC8uOnrBSf2uFACFAK2eHMcYUKYUIIQBAAKXFKMZAKeUCK6lSp+7URAUwkqCBmIwRTBHGoDVWLaa3xji1H9FIA2gNSKWjbivDuIi11kKI5asrAUBKJbjiXKam1kphKXlKt0YIlFJCCASEEKyVDpqhxIZhMKWTMI6GhnsAaSU4RnHO6TAMI4m574daa4xRGsdk2IZadsoMw5AwmvqFhX7TMIxms9nZ2dlo1pSGSq0qpYymw4Gunnomt0HZk9PP9Jo94BfDxeMLSY7YqtNmE0PWauL86bFHOzv6lJkZr+7qaB+uHKwzz8IeAsQLCd7Snj0e6koEaG6pFzleJ2FRk/S1N8ByonqEIINtqfB9tQZMVTxlhgZZWxyKObcymYmJBddzEEUUg8lQYso6r1BkYEwxklGwSBgIkvBmpJFSSHOZZFy3ETSohDgKGq6kKpGIay7r5apj2wYSYXXRsjocHCfJZEcHXho7Mbc/UAb7rz/uFgBF0zET3u04/bnMYFuGqGigt2Nk27o1m9YObxxxh7tU4ofValgqR/Pz2CTtHR1RmNQbDUQIxvRkeTGO45HBtoiLi668bMdLzp2YHluS0Ute8XI74y3MTA8UehcWxm1Dt7U7tVmvs33zQmXuvLWXLU1OEtct15suV1t6hjq625944P4hL1vKb3PtDtxEF2y+au99My9/zw/e+Lk31f+0b/rkfoHN6//8lft2Hr33jkc+/62/lzq+9mXX3ffNRx659WeXv2ywFId/9eUPssLiBsZKR59o7z4zm4P+fm/NB24Y3NhTm5358Y9fK41kfeHVZo+Uo30fff2rJ+aOzYdRoSvTs3HDuq3bpB9ghrry1AKHYFRuhBlLGMbZ9Li+5OIz/fmGu2lV2a+zoMPIZPjsvOBRgk270zX7iq7t9GAr8syGRFIAZcjAmiiAoO6X4/Ub1xoElxqJtlVdy7wJRavIy2VCMCBFDaOjWGS2C5QxCoZphkFCO2B04Tl/19zH7n8Q//7ZXPu/vlA+6pqZMEze/Jprzjxry1c/8bk3UquBGwtAKJZDkW0DJBCUiVBEOBoB8Gk2d15nx2s7h+54yztfd8vdnRtXr7muq+1ONDalWxOuFHIRwP9zFlKLztnqrGHQGLRn2AcePwbhdqNnxOx8MojmP/CV1zx0Xqe1+vJjP74/iWugIpYKQwiOQDVF68mawC1gixwaBJsaFOUdhvVcVKMEhkfshaUjfXmyvyYTYTnY9jHKk6Ro0ibhfoJ3HSyd0+nMgopwCChkiiUpFVYWTKjEIsQKpEFQYvlB4gCnrEE4TkAizoa60GwYGNQpz+0ZPuOKmn9Ce8IsmO1u//MHZwRpzh892j44TDU2giWuSKM657V3b95iQ/Hs377hcz/7zQeue8Wlf/PZz3zvH3/xvq9+k2Wjt2wY/smesXXOwGvec90j9/9q65ZBVvFGS1PX/Pl7Do1OmFFu4mTy6z/8IlsdVASOTJ1474MnkIS7v/QTgm7NIfXmGy7/+o//xSiNy0BEOKacJ0kik9grFrUUiRIYadOxOVcIADMbMKKggdh+k6/qHuKmwRRWQmHDxkKBn0C9zrGmhtVykZZaaKUxxgQTrZVQWiQ89UaOwwhjbBu2ECJmQBGmBDDSSgmpkEIYCKUMMQO1Wo5aAlZKi0DGRoJQLCnSBlFKcw2J4UAmYyeJixCiDFNKMWgNQiJECFaaAtYahMSAkKYIA2AtEQYKCNQpqFumbqU4p7XSOkXgFMYoABCiVcuECyOGtJZcEIZb/GqdmpulO00NqdwoncIsCwC01oCSVBYglG6ZhyCCCEGIpjQxaPkia9Iy5yIIAcItZCfLltFSYCkF0rIF5BpJjRDCBkJwivlIFIDkSiQKkEx3Kun6ERAhNeeBJpgwUwghNWcOoyxlV3EDaaWSwI9zhWIcJaAAEOWJMA1FDQOACiEAWh5naUwKIiZvBZRigpDkmicJJRQRQAhJLUBqg5qMMSX00uKi1jqfz3MuLMtCCAkhAGuXgO83tdY5twiKcs4BTGIYUeA/eP99Z599NiGkpSwSwnEcjpRfb3R0tDWbTSyUbWApZLNcjaW//9ix7t6+hdnJNWvWNIIIKOQ9r6nJbGkRY8xNI5vt9hGmMgLW5lHNmAmUdQcQ4qhv3SAgXSiYqvu8ZrNZzDgIISkkQpRa7Fc/+o9HJ6rvedtLGlw3k+ipXc8ECfgTSwTcEHyTAZJ1m0DBsbp6i6GfkFjUZMJlMj0z77o2R8i1vNJS1fNyQie2a9XL5Xw2iwEHoZQh5tLWRj3yI8uyTexENTCog2hqJRaaJrUMqxEtOQ4zDaoTWvQ6ykkZae0yjwNfisNMb1cYxv3ForKIiID7+rCfPOcH5kwlh4RzcLz64FECcRuI9cX8W9/y+vNefn77lnXaX6rMjpajWqbQkSgOQmLp15cm+zvbTxw91tPbO3X0sNSAEkEBgqDGcfO5PzwyvP2yod7iwMiq2onmOZefszB5tClxbWGUUOgyCwW707RVk88fO/nC+h0XO45uM3uk6Do8O35y6sT9t//uhXs+dqB8ZD4eLE3u7RhqX9PT85Gf/eqMrf3mcP/8c4e+8fnvHhONri1rz37pdTd/5lyQDV6qNsdnwyBTHNoqZqd2XPIKyBlxZabQ330R6WwsNrp6usfHp1dt2VJeWlq/9YzBMJRyUzg1Obq01NHZPT0zW+jshAJ1XM92WOpt3ogClhCnoy2p1Lt6h8Nmkycwvrh05PDRgaHhM3oGYXbRyGfjzGLcsL04Cg3FGEQ+NDclMDBCj02aa1a19XUuTs0RSkYCUUcsAJ9jJCfNcrZCUVgv1R2Piyihha7R/ftqkydecv31+Vq0+S8u/Ga/+cVvfLOvzzvD3NDwF9pG1vzLuy+yTO9Tn3SmVVRgcFlYqKClmpF1k5lhQ51IAGkztJLQyDVG7zRddeJXh996y903vv+VP/rgxx67d9dmlHteL8aOkfOhltoRKmkRFikJVIAER0GAbcChJ3MNXAP1vzl6pOCLFSNEcSlBYcykFA2Ki0IO4XDfNP/Xj/z9W7ded+iWPcBFMWN/6J3vOJw0H73+0/6vPg1qZ2wQkeQNUTIMKFE1wqFqAoq1A4m06L5mcj4Dm3h21Bg0jBMYP7a7vKbN0An+s3709Ek/Ej5DsLq32EYcNVcf7ulpaw+fmKxmPBs1sAFCQQwEEpUBnUiKPaSEBg1KGLGRiARAKAMsgEQibfRs4mbBjg7zWuSMnTz+5psaLi8s1iof/uBrv/TLH3FmtHuxiU0VTtRrBsoSrxP7sun1XvCmHddtvHDTq1/z6qgB1uFGT4X3h1FJwe6pgGD4wa8O7h81X3h
+vv8FPNg/8Ojj+64l/ffdd9+STm0ZiYZpLQFjajYBCJgY1TWtqvpPHtj1ZbifNPo4Y0w3KI8x4rjHVTjiUhjY1AnoWGCTYOrxWo0Rg3HtU6nK8YEn/rR5gED7eqUR+BUMmGtAAnsNHz3z+wdbLVBoRdhijLVWGJ+KzBNCUEzSkHaNsZA8NSNMVUYpD0mm6UkACCHJwzgMpEoYRtjkkR/xkBuIOoZNUEsX5NAUGIExxhhhBjEM2orcwRpASyQRQiQtDSVoBctVYlo+ypUeNTrVr1Z4ecsolcAYI8DLRbyG5XY3wIvfva1C9tS/KzNXrXUaydf6/+XJKwDwVN+MWrETAEAAIYQSKf7bM7R+l6SJ7gq1Xg5JjQAjIlP/rNahlnlwCEuEEIKWwRhCSCukVVq/K4QQwilTTKZpviLRnMemxRLOtdacS9M0MTUohvRbrXUa2cF5LKXESFPGUg9IpUCJdBSrEBAuRVdXhx82kyTREiYnp5GGkdWrlFIYQxzHKT07XUOz3iJbua6LCYnjWEqJMUZY1Go1z/Mcx4njWCnlOE6z2UyCGAAYI9Vqtb+/d3zsxPT0tGmaa9aub2/vTGVRaX5UJpOJ49gwDIRQHMcaI8d1tUaJ4Iyx0A8QQlJyx3GETCjFrutWKiXQFmMsDqN0Y0YpzeZyhCLcr8F1olha2Z4gNG2vkPCGkD5F4pd3/V4g7+47Hzu+d4wie75as9vz9dEFPwktQiMpKFAERs5rc2xPVBeZQSIeANbAsJCKWbbSSMex5+UAkSCIGoHvOFYYR1rLATe3UC5jxuysE8mYS4GRAdjoyBZmZ2c8zxUySHiIsOjuaU+SqLxUdSgzKZOIckKJ4zFsaYlzzcXFxPfaCpVSbanuWxi6THTFljM+9I+v11FiA9FhgJDEGRKigHoMKlpjpEAzxorF4tzsQhLHjmnxfNee+3++9+GjUf+mv3zry6tipqiZ5SSNKNtR7KpWq1rzRPGM5/lR2NvfE8bB/mcONhqNvuGuZlgf6hv0ctklv9Jpjex8/uHNF59lCiya9ekDBwqFkSOH69sv39S+qhfnXHHs2O7HnnjhyNiZmy/43ld+1H9W79tvfsvkyXHP8TBCHZ3diBKltelKy3IWF5Z6Vq8f3X+gVCqdf/75R44cITjp6+sjBktibjkuYIoIbtR9BchxLCm5ZVm1Wg0AMKaelwMQ1HJAqLm5udJiafPGTc898ti+Pz1VqtRrJ6qBTkxTT09AuKntlse+PH/PkYGXnvPDD/7TY7cfsDuYEfG6xh5hTzXiH9zx9Q1n5vR0udnER0YPbF69pqpkhgAeLjAJJFJxLch1dsKqfoW5Mg2qQhm54sROuWrLTa/85NIjB75CYS8BUDDDYQagahsHI7GgWTfoNYhU+4cmzQbMwUSzAVB3ABqA//qia+5+8vdTRLdxc5GCAimEIACSYFAKNPawaiAbdFgkUBaAMVOS/68ADAgsQoSQEkAjYmjZg1Cb1iHB41IpgMGced7W6+49sGvDu695+8e+vvvA7DsvHrxs499HJ3+h4glJgQhbAulGcqOHG3U/oUYoklkA24Jrc4auJNrGj9bUBFDPZI6yQt4sum7o1wBAgGa2EYQJBVhVtPqzzuPT1YSryMlD4LsWrwooCKcOiQJiWqjXE1s6XQTZJ8eCBT8GK4aYEw1rC/aVl9mN6XJ9wc7mNdjRa9/xyZ5tV33tO//0zc/9TXf/e+er38vzShIIfbJasVR3z5o4qJl9w8/tVdtf+iodjcNTu77wyS989Q8PLVgEIgRUAVPFAMoGgASMQAugqKV/5RIsjQFALpdCCiCV11IEggIIOHf78NNPfjI61rTc1aCCINEMYUYwSA1CgW1LpYRS0gCIwbANEoc+gkznmi+//x82DazacPXgqoteHkfS1EIHIUqorjdVeYauIMrKJDKNhUnBOIWQ1Oc55HEKJwRhSg2KCcI6LXk5jxmSJkJCCB6FIEWGUkypBhmZjmPkaF4hziHmPOQ8SDiXM7VFxkjGc7J5L5v1MGGUGZhi0AIBAqyJRmkTJqVBaQCtNGiZUpA0KK01Ulot5yctA/PyTvA0YVLKlG75P7dG0LBMm5KgAZBGLx4yrwxlV9rFy2lHKyxleDE9WaMW+eZFr/6iXQICpSFlS6nUFae1NTjFi8aAgBCEkJACNNatKbk+tXINUgpCCAKsBCKEUWxoqQ1mCa5BMymTTMYBgCiKtIyDSFqWlcYppn9cjLFhGBjpVkqSUhhTigklRCHEOccYVaql9GQxQatWDZuG4TdDZhCllGWaQohUU62EzmQy6TsknSKvmGkzgw0NFZMkkVIahiWlbDYDrZHrmoyxsfETURTFSc71nPMvPG9ucV5qiohRrSy5lh3H3DJMy7ANajITY0ypaQRB0Gj4AMAYAwa2bfu+jzFtNpuEECXhyOH93T2dhXyhVqsiDa7pRlHEkziM5jnnxlimVj/a1dv963t+ec/dTzh2R0d3j0ZqXaard13vy995+Y3nn+MOtwPI6dET/mx1ocFnZmbCIG42g0cefrxY6Hlm115KrLirI4x8ls8YBnUMM6j5TGEtNLRbUut6vWxaZpvjmBbTlabjWLWoRPOIYMqB+0HdIDRX8EI/rC7MMi39asnzPNfJG4ZBEkoSGjOoVqs28AxBhBELYWyoSrlmYGuxKU6EJQGoc6Cvuztj+3Okg5tUQoZEcZxrL6bxl16mO6yHAnzXdhp+kxK6UC7lOwoGZbfeeqtbD274wDsvel0oR8dgUGZm8uMlf523oaPITTtnx/FiaS7j5hjKt2Xbx04c1fMR0ZJZeMuGs8qTk/PT03G14QciKYxfeeYFRx7Y/cTh0YzHOtZ2Pv7kY22da+1y6YGH/kCoqWNerZUuv+bPjKz7l1//i97+/JM7n2zU672s98JLLo+rwcJi+dChQ5dd+pID+w90dvQefW5/R1tXW74dKdzb1ecWaBhG+57bu3HL5jAJAz8ql8uDg4PMyjQatXw+X61WPS+bjjMwxpzr+sJCEESmafYP9terJQ6JmXMKAoQZEk00BAwQMiwDU8cwgZD+9Ws1OsCYCTGXQjFGDA3Hxye2vvSK+vE5g+V3nLXdYNhSMYn95sEpms3M1ytWNjM1PsrGjhAee6Y1g5rFuKeZTHXL4he/+PFrz377uOBnSXVU5ywkDe23h8kQoHEQh5g+RhI+eVhjcBS0bfRef+XFc/v3n3HFFc0nn1vjeAvNOoKYCEQYAgIgAaQiBCEwpIwAcQSw84lbb/rzz+59fhT9bzTptBygrRscEgAKINI6BCBgZ8APM8VRv7whH5x9/uX3fePWt7/vH2O1VAT7n27/p7/bvAfBBAHANJTYrifkRL05TFiiKUFg6aQSQ03g9hxkPc+tBQi4H9MYuGK84Tctxy2AsoNQhokNbkLJC+X6aCXCmjWwygdhFcnBTT1tJ2ePN4KMIk2FvThsZ7A4Wx+r1Goa24TwiFNA+Yy+cFMYjLHcLGArHGzrPFwhVLMnfvvzzqi/0HcWgym18CBYZ4RjhmEZ3UMbAj7vxIrz4f
e85MIf3fWN+77w+es+/gUNkBAAxUwNlGstdAwcsIEoVmHMDIMnCQgAjABDfLqa5rTOv0CAJSgb9u0eA0QF8cpTY/nOHiYUIiQU2vZygBBwiZPQVFpxjT0vqi/yWtVdv3Xu7heSRZXbUZg/fGLV2T7CNGwEdoJ0vYl4nERNupwlcIoglP5RkyjGtEWiwXQ5HgfAQJD2RaMoSmeQGGOEcQNpyzBs13WKRYsSBCoIgrDpk7IwmFY8rFfmqouzcegzhCmlmVyn7bqFYi5XyDqOiSnSWEuQFGmEYIXlDACgkVIK0jBDrdPaF6cTTS3T1vGKAEmmF/AUpC0nFmEAUDqlKCyD46krvlzprpz+Cp+rBa4aVqbLrSI7ZZOlP9Q6tbkAAERPzadPAefyVgBOswtZsdk6Hfu11kQDwTgWOo1ZRMvXQWutpSKEpKtLEqFUGpyHqtUqwoFlWVIpw7CCIGKMcS4zmYzGXIKO47gFXQjFSdwMg4xlpudICKGYAgBPkiSMDNeklEZRZBiGViqI40wmk4i43qh6npcWsumeJu1CUwO01kJozTVCiFJKKEEICSFPnhwrFAqGYVSrVdM0XTfj+z7S0vf9keHVlm3UGg3b9bDBBgaHkphU6zVKqQTleZ6WYmFhjhBiOWat2TBN2zAMUNqwTKXEiWNH2wrtGGPCMEE0iZJczlm7el0QBL7vu26GYlKrVS3L4pxrQI6TIcjvzLZrYlxx3bU3fPADP7vzob/8+28jluH8WYmBfev7ToLbgGUMs7Cmd9KvIBEJIRzbjaJkcaHC1YxFM5GIJUxoAMJQIiTRkLfNdjdnIOxX6o5tO4hGtSZR4OXzXSRrSBZ1ZCNfyBgZhBbbszIOwY+ZxNOq2dbRliQi4LxZqhuGkSQCY5zLhSMj7XFDVMtN28hisGulWsNvlO3Gui2d//mVf2hDce9A73zCSabYlu3CS/snTp40HVytzjBE816mUlnwvJzU2aYfmkbGzXhRFAR+zA35mte8jmb03HRNFDpHXvP2B7/6ne2rz9zQWVicedbKjSzOH+npzedzblvXIPdxvdk0iREIec7Vl+IO9/lH9nUxb+MF54dRo1EW+YJ3/PjBo7PHL7zxmoJd4DY550xx4Mkn7nn43tVbdvzugcdXr1771ve8txksRlF57VAn121X/9mNYdO3HWfixAlKcfdwr9vuPrb7D0ODI2aRTtfKOSuXcb26qBCPLNUbjFAn6y0uLeXz+e6B3oznZDwX3KISMom4yQwMyHQ8mSSSC6HAcTLZTC6IwlqtMjAybLpWqMLU0U9KmQoelj9rWi/VhoaGbA2OQFyBLSCDiafJ3NR4ODNdr9ZM05gYmyw4BHl2d94Frwi23ZdrD8Ows+CEtbKdZTKOcm4bLXPP3Vg7eGT99e9464ff8v1v3PoZjnOs1kkzgaZxxDGiL7mo775bPvVfd/7H4Jo1E3MLl555Hs4Rs2go8sZMfuODn/7G8Z2PEwDLzCgRJFylHoQMYQVISM0AAKuCV/juv332+ecPM8MSPIb/7aAAWAFBSAEIAAmQAAhEsExcghaaYTsizz32wKI5CLH7p699671f/+xd09WbN7Af3nz11K2PSyH8goB6GIJlMJCCY8BAmZRaaH6oFl2Yx81yzUO2YfI4iUYGBs44L//b30yFQVMDWAgogGlIEYcOIkjrGCizqenHgKEUNEb3PeGuubgpJbC4L2d1SD4XodhAUnKmlQ0oA8U6lH660+xoY6+/NFsbrc8FJcNEU8eP3vKDB37wk48mjakiwPj+hnvhZDznGj0smtpNIxsufvWv3vjlZ+jCW6+8YeM73qqAcJegUGVi2TSxiLmlsU8JiSQCiQGTRBvIjDVPIUKR02/ULedkSFUq3FTKB2D33/Ho1Ze/YWHq+aXFWmfW03V//Nh4rd70Ottygx354S6wiFFjflR3HCpyHTymj9zzNHLbjyzNDFo+hDWSzUulgQPiUkdNhTRdKelWSmEAQAgYoQghoaQQApQGggCAIOQ3awgR0BghlDr1WwbFGCzD9WvVqFLzg2Bsfm5xYYEAyefzRhJRpgHHCW+YVOTaMqn603A7HMf2sq5pGyhNLEAaYU3SSEHVUhphQBoU0kq3ZrJqmaglT32xctnQKWVQGiSilW6V8q3Gr0x5BqDRclhhqzWtxKmeNpxW2BJCYJkphlDLEhsAlEzSGhoBYCAAoGm6FThlvnH6ke4bQEPqmCFbYqc05OmUO+ZKKayW9dArxDSllFYq4QEhxLZt0zIFVxhDFAWNRq2jq5NQVavVTNMkhGlFDOYKjmgavIgxQshkDCGEQVGE0/wGSojWOhEclAatqWlorcIwyGS8dFW2bTebDcMwbNtmjIVhmCRJ2oU2DMMwjETwlTWnkBzHMefctu18Ph/HcZIknuctLS21rC6lzOWKdb8ecdFshuVKpVAoZLNZzuMoCjzPC4KmSYllW1JyTIAYhDFGaRrKzTnncRw2mrWN6zeVSiWEkOM4tm0nCWeM2bYjlNBarejXs9mcRhCGISTQViyWK0Exb5ZPPvvuN21+7Wu+MD47m1m9cebwycbEgo5lsbu9o78jW/C05M1alJp0AsDc3Fyx2F6t1C3L6nSTk1Nz02V/oa4OH104Mjp7/PDMUjXCLp0PlizTEZZOEn50cRqAYIxhHjRQE1lcJxgSywIpuWEwJ589uTSTy7naEINntK9aPeR5dqNZu/eOg2OVaq6QaRvuiYKY+812O7+qOCB5E0fsk3/3743FxcCPFmsNF2BDR9vrLrvyopdf2nXuOsiIKCo364tZx2YKeAM68zmhVeQnppmZmh3vbGuPeSLL5f07Zy57zbp3r3rVLbPNuz5bvOrms3yxIajXkBGVq7OVpUb96UNDI4Ojo6Nnbj/PszO42vzdL249/Mzx17/uzffe8WRXsbh9+MyZ2frw1rMB1AP3/urigY1VbNh2Ib/93KsuPkcq9JlXXgkKpg8eKRieFbYR0nlydB9Curun8+ToSTdjBfX6RK3s5bPrN2zo6xuQUp1z0QWgUVCrCRkbpmO7rmVZ23rOW5ieNpgxfvy4aZq+72NcI4Tl83nstgHnlcXFXC63uDjf8JuD/UO+Hy4szduuo+Ig157PdRSbYZlrHoSxYQJlhsJUKNkIA7NSKQ52A4IkjgUAokghRUEuTk3Zrk2VsEzSN9DrV0v1UGDdJG1t5VrZrOGs4UaNCrHYsek5K+e0hW4tL5wSt7u6Fh6+52Mfe919D//+l3sWPkjYZNjcCMYuB2LFzyzNz1RHhywVLh7Z1NkbvbBnYnHOp/aOgbOf3vno0QeO1cKkDqBEHCMCSAGjMlGgsVSCWlkQMSh1zjnn/OMn3vOdn97EZYQR1v+z/F2+p1CMcWooDTgBxTHGii9JILZiITH9OAqOGjD02//8zoe+8tnRJ+pTr8n/9Ot/ffbtd2SDnVAxmaN5mLziirOeeeQ5W0KJ+1UCDNsnRXi+siySIB3HHMDCczOTw5Pw+su2/v5PRxIpEyVXD+XNpL4wpwIFFW2HSDk+X8QKKTtHPKsr1wnmA
o6BqueXosQyN3YbuNbYW4MmMgigAGrdcecZGxfcRbbrbvKSizPjM/mO/rkoUW4vGz5/2PC23PTmV99yx8LnrrsBF39nF7ZOjT7fe/bLn/3BYzf/4tv9HWa/45VMkwFmvpQgmgCag2TIxwiEUkRrqRhjIY8RoFTQCRiDOFUBn94LlZzYIETsurZ/6GDj6teIzsF+cK2luOFkjNXFzbLSnB2fePKOO3oGe9Zv2dCgbU7WaNZKzqb1C4/NVisBzRt+ozEzX5qbnOza7BKt4opvYlqtVgVPaGrUAMtk4ZU/ahTH6Q0IaRBCYIW11kKITCaDUDqcRRihJAlm5yr1WilTWZwaH1uanURS2JbhWK7jeIbOoqJJDANRhJCltYkpldSIMM3ahFkYUw0glNIIodTqKu02K6XSglLrFLpUi8SrVav8XSlgZQuoNEYrtS8AGIbBOZcpHIICrZa9NVAqzGkVzS3vVbn8cvr0XUgqJ4blV8NppGELbMhpj1cIiPr/gd4VBAalMZAU/LHGSgtAIEEhQDhdCUp1xAqUQoik+ml02qdLa9VeaEv7xikLmovYsqyBwb5Suey6bibjcM7D0K+UZouFNtO0EeWGYaTOU2nMLULItm0pqVIqkYJzjjWmlBrUIBgrLJlQcRyveHQTQrkUtm1jjAlBQiRp24MQQggGgmG5YZAuNQ0tSfOaXNf1fR9jXCgU0h4D0hAlPJ9rm52fy+UKnV09jUaDUsMwwHaY7/taSz9K0r6L1rrp1y3LBsBxHFNKCUWWmTnzjG2lUindiMzPz9u2bds25wljzLGsJEmkBMdxjh8/3tbW1t7ZwRirWl4tnE2SStEvsITNPD5jW2y9VdDjfHh4Izr3zHplkdVjMdeIT5bcTN7EkZd15+bmMhmnaJi8Uu4xjEZ9MUtzg5YzMOQNrF8LthH7frq3aLeLmBAjkzkxPV2PIl/IpWpt/6EX6lJ3dfRuP2Ob5lHQLA30dLi2KaXszeQSHrW1tUkpENLUIFEYmiZDX4WPf+7f/uMX+4+OVQFlkE5wVJFVsBRFJUW4OqNveKbWKPSubQSNQxH629/c5tx+25pi7oJN6y+9YEf/QKeVt4prhyGjZseOY4wVUpJL1/J8P+zr6W0kbcUzph7d/V+v+ttrPreqcGi+8YUP/XAVRee+9rKiWxSJ6O7uKLaF9eb01de85NGH9+w8ePjjG/58c//IuVvOmdHJZTffNHfgyNN7npo9cXjbeVf94Y+7X/OuN0+e2KskLxTz1bGFjjXdlOPmvrFq07csI+JBU+knXzhaTCrt7e0+X8p1ZXL5vBVnDcvknOek4Ze4a9n7nt3TVuwwTIoQmpub71474vth0Aja2zqVlIvzS7lsJp/PP73r2YWFhVdef/3cyZOWZRkGrVbL7e3tuVwOI+1YRkehSCzaaDSGzzrzqaeftBzTdCwdxoQgCSjiCbWtbN4jDGdX92PPDlQMNihJmpDEAFEzAcziOPSwJhRZjl3s6Tt58IC7NJ/JF0zPa6g6dZng0VBXW9hs2kzVQdFsGAQGKR+ef6L8x5/8yyWvfb880rgK7IrhdSQLn7rprPtnF37+hd1/9+0vNV7YbRI82xw9+9pXJsRx56oL/uj77r5r4ub3PXHg8WYt3Qq3iKJpW1JIpSSAAQ89+MCH/zIEzYAJSP53sw6xrDgU0BrICQCBNGCglEkc18DKmrQ7FJKhRVGdHx332uDWb8/YbdzpvSoa28Mo5UECGIYGuhsbi3v3lwGwJkolXBJWUsoBkTMMxBOdWCCSY3vFaHgYATcITSR02WZ71hlug/1lNTOTANOCq6wJ1QC3ZbswMTPILktJI5FQciiOy4uiy2KDJp/nvGl7WifbdizYAnVk8r94ZireC2ElWD/lbtqyUJl2tm256Rtf+vbJ0aUXFqf+VQ8Uc1fBhuH+LVeDsF/3wc3v+9Znv/3eVz57x+3n3PiPYGEUKRPjjEllmMRKS8xthUKkDYslMYd0womAmlTE/MUq11OHaUISx5YN9QCynR0qCqvhNAncdmkCQYB5iOP+s9f3nbUqiYLZ2WlR45n+PAn0/MHys7/ayWVEhTBCNL/kH37+UPfWDbgRYKET3hBcNcsRTWONdcsduVV4nRKNGAYAaCHTKoRzHkYNhVIOkUBaCRkmvCFVUFJznRvyg1vzKgriKNIKEWpirM2MA5hqhLUChLBhmpmMa9t21m7pTzBuvYekFFJKCRqhFISR1hKQ1kooJQmiWr2oM31qw7IMTiv9Xr0Mk+npqFbjWiKsQafoi5d7US1O9Yp7hl42H1lpIOuWJu9UY1xrbVBTaw1atmRUIFRrwkpXnuf05WGQGFDqbJJuKzAAaJmqlXTLDHJ52KxhufxqVfwIIUIQIOoHMQAyDDP1kArDME4EIbqtoycO/TThIJMhBiMEEylj17CFEFEUplQpjHGSRJzzTMFLU5xN22LEAACZ8CiIqIld261VG7brUErDKMrlvYbf4JHQIKnBOOeGZRJCFCjQ0JJCASCEKCHpAizLiqKQUkYIDsPAcWxC8NLSYjabJYz6zSaX3PMycRw3Go1CoYAxFjxGCIFC+XxRcJW2DZWUKU0sSRKtkOM4gFQY+lJyw8goper1ek9Pz+LSfHWuvGbNmnK5rBNlmU61Wi4UCuvWryGYBVFIKS3yPGCSHciNjx+X2ohCsMGRvjSWFhceLblWoaOzZ7Zebu9vy/Z0xEkzrNMwjG2ru1ppYIyVoo5jeW6hXOKFQl8UNMefntU8zrhmbz6nQTXKDT/ygWIRNgsZZyBfKIysu+7iS4G5UG2CRry6WApiPFWnmBnUKsVHHCczN7pAiBEGvNmIjh07blnOtiu2fPoTH/jYpxI/jhiwhZnK+MQSQtbj+/c8d+gEj9zH7jtIsDe9sKhE0wVBALBb3NNUjz++/3OPPSeQ1FQJBX97nvfOd78rn88RRgFUvi1frVZPjo9ZQpwz8lIf6iznHziyt9/IXnpBezkflmpTDGVPjE5ffeWfnTixxzDJM888c/7FF67esnH3Q0+E5dkGxPOJztJsX1vHkvPCJZ+8ef/9JyPqFdz+cKBuGXax2FtcKB8+crw6Xc5l2zfuOLNcmmtzqOaNc88asPE23/cNy5FSz1VDg3lxSDHGwANMTAEMqJlIdfLo8S1bNrU5ztjxsfb29jAMRSIBYM26DVpK3w/OPmc7ANRKJdu2G42GYVDTYlzECU8alcCizLCsMIq5TPjYyTVr1x4v7bUMgwAiiAoe1yt1UMowDBmG5qDtuXYQJmCyJNYR566D//TsIdBGZ2+fQnp6Zqyr0Hbi6GhPR7vXOTIzMzM1O9PZ0x2p2M3apaDp9XjNRkIEazrKVjbbsH5m35G+9f4De+669xMfX/raUzKhoU2GLttqHdinEueZ+x4eIWieVLGLTj60vzIZjs3tFwG+7f1ffWH/gagZgMSEGoaSoeCAQGpgBDgXFKNE6J/+5/fP6jzvZ/deIqWPWvGp/2Ofj7DUqmXwm1YqChQCJABrTmOIDLQkRIeBZ8Q8ILNGwnddvOG6h5888OmH
obsCua24ustgTiLMJ59/YUNHzy4oI8e0ZZiAAGSONqPOPNiGNBIWA0sgnguB4YylKga1pW4eXyxFOeQwXMAwZIjFEMCGtRL2UVNItPjQ47MipIBAgwkypGgusRdDtcqEPheO1+oC4MAudu4gdtqqWyjxy6gjowPUaJajK3Zs+fFDx25+87scBA2MetgFfSzxzLZ3vOyaxTqdCpO/37ga5ubXbbroo5ef/52HnwoBIq2iKAEMDFET4VAlRIOMuIGR1lppYITwkBMAndoj/g8cTuKYmgZVcQy47HNsD8dsrsdZxaMmAwygMwpDNUAF12zLDLV3IG6X5o+0XXTeyV+/sHd6yR1yOJ+vN0XWLJ48fPyyiRnqE+C0Xq/HYRLVOE1j4QFSn41WI5oQzAwaRVGz2Qybfr1eNw3Dtm0hBGFATYNSQyONkLZsw3PzqsPFvV1J2Ij8GkDgmBI01YhpIBTZSmOEseuYls1sCzs2MhimzEppO1KeVk6BBBkDpIPplrEUIACkQEmUOpUrkOhUv6X1UEjDrlt+XgAqkkoIoWRqX5WG+yJCsBRaa7nc2U0PpVvKuf+dBX06oLbwWGkhROtrrDHG6TpTVtrpD155DUZaxTZS6V9atzyTTttGtDYQSqclJgBojdNKPRUvIYwUZgAQxsIPm63YJUSEAsJRpRZgkJmMpbR0HDMMQ8NkYRAQQigmUkMatuHYNslkOJKUUsBISplwDlojpTElUgrf9ymlYRBjzDFF1WpVKIGBSilN0zxdE6yUIgbFK1dMaSlbl8UwDCFErVYrFotJkgBAT09Ps9kUImEmxRRhBM1mvVAogJZ+s6kkUEot041Cng7sEMaYEMshcZhQSkHjIAgIRYQQIZK0B95sNsPIZ4x1dHREUQCgGGNKC8OgQgjOue/7biabJEneDJOITI1BW/v6MKl4udhx3DjkAHY1CSdmJ0b3Hp6bnL/55rclTaPOIahNao0K+TbXthHCUkqkSbMeGDRJAiH80KHMdAuLpXIt4DFPcl6nFBgL2cacDGL+RPXQkwcVF3Wle3r6MAZAysxYglqcOtLMZVleKWEwzUXsZlw3o9asW81F7C8FU7NjdtZxmKmi+pqMs37D+kpYv+rMG1Fnrlkrj74wOl2q3vPHP15zxRtq41NzKvP0rmcxpkhDd3th09qhvEu3rl8H00ttnbkgDA1iUEor9Yplm8zEFmVTlVlUrUWZTG94lrkpuwqr+kJkdMi7b9/3tS/dc9lLLqUW6swP1XOcUqjOzCzVa6u2bqDR3Mu2XPylT36nfaTn/V/+4KPff2h6bPEDn3zfk//1A9puD5kj882yZWOqVP9AR9/mkUU8yzuSmNOc0d9oUGHGnu1yzjXnRRuLxKcac86TnKOlrMWVbZefG1SqXl/G9pxGozEwMKCEbG9rr1YqGNOF2YVqtVooFMK4yZjJuQSAfD4rpVxaWtBadnd2IZPFQQwAhWJhZnG2u6dPCzn29H6CMCjtGKaNmeQiDsJqpQKLIe3tshQIIUEhGQkkoM1xDs43m7UEYZrwuLOraAjdUcjagP1F7qICUJqVrlKCBto1cjpREttOjA4/P90zlCVycOOW7NT+o/njc6/9+w+8etfhynhUs+TLXXT98BnfunvnuRdd6mfxwT8+4XUbJw9XwSfn37DluWf37bvzcVsDJUAkkiLB6U2PAiQKSQBAWCFE9Kc+8Q8fevU3ciy/qKv/E3pbB6FKJBoQAg0IAUJaSYRQJyooXclBbiLxGwgcDIEKM7pj/94THbmBay/um5jqkUf3xfn1zsmZeHEJW107nz/aOdjjmN4M9w0OkgASfJEC9gwHa9rUsWxoqnwBgCuACIkbBUSaC8Jxu/1aKeuxcze2V2ZiTpQx6+cw2/fsc6976ztDhYER4C5gArxJddMkcCKkLNSdGAwC89zYORP/YcoZQokyExVBGfKLxvNW9qqGUvnccDWZjLg0SfM4B5PNvu83t1BghARrr3rduoKlSuL8jRd4QGMkJAHQgCXBoEJQmmGtlwPyABDBUSpm0S3O8/88KDDOE6JcRKOPfOT756xbfdmrbxCHyyCSuogQkYZrNqphLpSyUrfsjBS1nGNO79m75+dPMTcXxUs0aRjYJZa7MDm27/Gd21afP3NiKmZydnKW1OH/A6H+gBX7V4TZAAAAAElFTkSuQmCC\n", - "text/plain": [ - "" - ] - }, - "execution_count": 2, - "metadata": {}, - "output_type": "execute_result" - } - ], - "source": [ - "from PIL import Image\n", - "\n", - "image_path = 'demo/000000570688.jpg'\n", - "img = Image.open(image_path)\n", - "img" - ] - }, - { - "cell_type": "markdown", - "metadata": { - "ein.tags": "worksheet-0", - "slideshow": { - "slide_type": "-" - } - }, - "source": [ - "For inference, preprocess only involves decoding, normalization and transposing to CHW.\n", - "\n", - "**NOTE:** in most cases, one should use the configuration based [data feed](../docs/DATA.md) API which greatly simplifies the data pipeline." 
- ]
- },
- {
- "cell_type": "code",
- "execution_count": 3,
- "metadata": {
- "autoscroll": false,
- "ein.hycell": false,
- "ein.tags": "worksheet-0",
- "slideshow": {
- "slide_type": "-"
- }
- },
- "outputs": [],
- "source": [
- "from ppdet.data.transform.operators import DecodeImage, NormalizeImage, Permute\n",
- "\n",
- "sample = {'im_file': image_path}\n",
- "decode = DecodeImage(to_rgb=True)\n",
- "normalize = NormalizeImage(\n",
- " mean=[0.485, 0.456, 0.406],\n",
- " std=[0.229, 0.224, 0.225],\n",
- " is_scale=True,\n",
- " is_channel_first=False)\n",
- "permute = Permute(to_bgr=False, channel_first=True)\n",
- "\n",
- "sample = permute(normalize(decode(sample)))"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {
- "ein.tags": "worksheet-0",
- "slideshow": {
- "slide_type": "-"
- }
- },
- "source": [
- "Some extra effort is needed to massage the data into the desired format: `im_info` holds `(height, width, scale)`, and the scale is 1 here since no resizing was applied.\n",
- "\n",
- "**NOTE:** Again, if the data feed API is used, these steps are handled automatically."
- ]
- },
- {
- "cell_type": "code",
- "execution_count": 4,
- "metadata": {
- "autoscroll": false,
- "ein.hycell": false,
- "ein.tags": "worksheet-0",
- "slideshow": {
- "slide_type": "-"
- }
- },
- "outputs": [],
- "source": [
- "import numpy as np\n",
- "\n",
- "h = sample['h']\n",
- "w = sample['w']\n",
- "im_info = np.array((h, w, 1), dtype=np.float32)\n",
- "\n",
- "sample['im_info'] = im_info\n",
- "sample['im_shape'] = im_info\n",
- "\n",
- "# these keys are no longer needed once im_info is built\n",
- "for key in ['im_file', 'h', 'w']:\n",
- " del sample[key]\n",
- "\n",
- "# prepend a batch dimension: a batch holding a single sample\n",
- "sample = {k: v[np.newaxis, ...] for k, v in sample.items()}\n",
- "\n",
- "feed_var_def = [\n",
- " {'name': 'image', 'shape': (h, w, 3)},\n",
- " {'name': 'im_info', 'shape': [3]},\n",
- " {'name': 'im_shape', 'shape': [3]},\n",
- "]"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {
- "ein.tags": "worksheet-0",
- "slideshow": {
- "slide_type": "-"
- }
- },
- "source": [
- "Next, build the [Mask R-CNN](https://arxiv.org/abs/1703.06870) model and the associated fluid programs."
- ]
- },
- {
- "cell_type": "code",
- "execution_count": 5,
- "metadata": {
- "autoscroll": false,
- "ein.hycell": false,
- "ein.tags": "worksheet-0",
- "slideshow": {
- "slide_type": "-"
- }
- },
- "outputs": [],
- "source": [
- "from paddle import fluid\n",
- "from ppdet.modeling import (MaskRCNN, ResNet, ResNetC5, RPNHead, RoIAlign,\n",
- " BBoxHead, MaskHead, BBoxAssigner, MaskAssigner)\n",
- "\n",
- "roi_size = 14\n",
- "\n",
- "model = MaskRCNN(\n",
- " ResNet(feature_maps=4),\n",
- " RPNHead(),\n",
- " BBoxHead(ResNetC5()),\n",
- " BBoxAssigner(),\n",
- " RoIAlign(resolution=roi_size),\n",
- " MaskAssigner(),\n",
- " MaskHead())\n",
- "\n",
- "startup_prog = fluid.Program()\n",
- "infer_prog = fluid.Program()\n",
- "with fluid.program_guard(infer_prog, startup_prog):\n",
- " with fluid.unique_name.guard():\n",
- " feed_vars = {\n",
- " var['name']: fluid.data(\n",
- " name=var['name'],\n",
- " shape=var['shape'],\n",
- " dtype='float32',\n",
- " lod_level=0) for var in feed_var_def\n",
- " }\n",
- " test_fetches = model.test(feed_vars)\n",
- "infer_prog = infer_prog.clone(for_test=True)\n",
- "\n",
- "# use GPU if available\n",
- "if fluid.core.get_cuda_device_count() > 0:\n",
- " place = fluid.CUDAPlace(0)\n",
- "else:\n",
- " place = fluid.CPUPlace()\n",
- "\n",
- "exe = fluid.Executor(place)\n",
- "_ = exe.run(startup_prog)"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {
- "ein.tags": "worksheet-0",
- "slideshow": {
- "slide_type": "-"
- }
- },
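- "source": [
- "A quick aside before loading the weights: the three preprocessing ops applied earlier (decode, normalize, transpose to CHW) boil down to the plain numpy below. This is an illustrative sketch only; the real operators also record the `h`/`w` metadata used above."
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {
- "autoscroll": false,
- "ein.hycell": false,
- "ein.tags": "worksheet-0",
- "slideshow": {
- "slide_type": "-"
- }
- },
- "outputs": [],
- "source": [
- "# illustration only: the manual equivalent of DecodeImage + NormalizeImage + Permute\n",
- "rgb = np.asarray(Image.open(image_path).convert('RGB'), dtype=np.float32)\n",
- "chw = ((rgb / 255. - [0.485, 0.456, 0.406]) / [0.229, 0.224, 0.225]).transpose(2, 0, 1)"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {
- "ein.tags": "worksheet-0",
- "slideshow": {
- "slide_type": "-"
- }
- },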
- "source": [ - "Load the checkpoint weights, just wait a couple of minutes for it to be downloaded." - ] - }, - { - "cell_type": "code", - "execution_count": 6, - "metadata": { - "autoscroll": false, - "ein.hycell": false, - "ein.tags": "worksheet-0", - "slideshow": { - "slide_type": "-" - } - }, - "outputs": [ - { - "name": "stderr", - "output_type": "stream", - "text": [ - "100%|██████████| 140690/140690 [00:12<00:00, 10843.70KB/s]\n" - ] - } - ], - "source": [ - "from ppdet.utils import checkpoint\n", - "\n", - "ckpt_url = 'https://paddlemodels.bj.bcebos.com/object_detection/mask_rcnn_r50_1x.tar'\n", - "checkpoint.load_checkpoint(exe, infer_prog, ckpt_url)" - ] - }, - { - "cell_type": "markdown", - "metadata": { - "ein.tags": "worksheet-0", - "slideshow": { - "slide_type": "-" - } - }, - "source": [ - "Run the program and fetch the result. " - ] - }, - { - "cell_type": "code", - "execution_count": 7, - "metadata": { - "autoscroll": false, - "ein.hycell": false, - "ein.tags": "worksheet-0", - "slideshow": { - "slide_type": "-" - } - }, - "outputs": [], - "source": [ - "output = exe.run(infer_prog, feed=sample,\n", - " fetch_list=[t.name for t in test_fetches.values()],\n", - " return_numpy=False)" - ] - }, - { - "cell_type": "markdown", - "metadata": { - "ein.tags": "worksheet-0", - "slideshow": { - "slide_type": "-" - } - }, - "source": [ - "Again, we need to massage the result a bit for visualization." - ] - }, - { - "cell_type": "code", - "execution_count": 8, - "metadata": { - "autoscroll": false, - "ein.hycell": false, - "ein.tags": "worksheet-0", - "slideshow": { - "slide_type": "-" - } - }, - "outputs": [], - "source": [ - "res = {\n", - " k: (np.array(v), v.recursive_sequence_lengths())\n", - " for k, v in zip(test_fetches.keys(), output)\n", - "}\n", - "# fake image id\n", - "res['im_id'] = [[[0] for _ in range(res['bbox'][1][0][0])]]\n", - "res['im_shape'] = [[im_info]]" - ] - }, - { - "cell_type": "markdown", - "metadata": { - "ein.tags": "worksheet-0", - "slideshow": { - "slide_type": "-" - } - }, - "source": [ - "Now overlay the bboxes and masks onto the image...\n", - "\n", - "And voila, we've successully built and run the Mask R-CNN inference pipeline." 
- ]
- },
- {
- "cell_type": "code",
- "execution_count": 9,
- "metadata": {
- "autoscroll": false,
- "ein.hycell": false,
- "ein.tags": "worksheet-0",
- "slideshow": {
- "slide_type": "-"
- }
- },
- "outputs": [
- {
- "data": {
- "image/png": "<base64-encoded PNG data elided; this output shows the demo image with the predicted bounding boxes and masks overlaid>",
w5iDxzcre2e1g1v+DuAZjHX+OfceEVxtDvZn2dPx3d87ffPf6/NVXX629Pz4+Pj8/P5wXkh8+fPjZz362ritgvcuxmi2BIXPoQ41GhKw1MqC0xWhE0YyKUdl0B7ERqSypLFeRapeLvQnkjDTdet6H2Vh7XRxvHDRgM6FNgMnLK/0xPdHRZjASMQRRXWuodRF26lGDKbE7e0uQG00YZsIYf6nSvwByIRVGEzccCF0FIeYlKHKlOrqbslz1tMtLmIintnEZLIRiaRy2BQGMjoEYcdipRe2pxOC+4Q4NPcUGXkno3pcN4iw58tAkdb5mBJ0P1GFDlhseo8MZIGozkgeEUWkDmu1OrDLPpPx2E0u7e3F4r9PIWGwpxYRtVvFhCBAo0pAAjyI1XgemNmzNaLSSbyL4PiN07lGxQyGCwjOnMh1uz2qr/OFw+g48K/LBjUBwuMOVE0bt1N3IgBowbou9C82P0poUnKkGNZJG31X4ms5fFFop1/JQhmhbicT9vmRA7zZJAnS6YdqmYfVLn9YNgg3DBkYolgChE7bjbpbl3JXWHQxz7Q1AxpEMpYe+UWok2s3hdztbfctqHqoQ0OTBBepIoghwRIGhSAONJW8T7hWMN8wuptikqpURumrJYWXrnr2YVsCLYTOTFCEn7SO2fsHrjUcOPH7LiTqIj7ckflyrubs3Rs+5P8iL3IJe5wnET3Ids0kIKpvWHG7Gnu6NMfMt7hIul8vS2vV6/fLLL58vl5eXC3k6nVp3791ba5D13i2dibPgZyT7ViFHM7SV2J315F0COqDWIrFHwpZAbODw8XkxthxnK0IUpiRi79wdvvuh6rwN2hlEMU6kAspBucO40Ehd3YMMD0TpZbgWkJE8c9kCoYHZEVJ5Z5rfOb4zoCcBGXO/VQ+yflGEZyJSk0G/QnAxm9CajM1BWDZjMO2UUZQ+qinHK8WEoXpOTMtu/CYjb2fAX1IDR+mUtTZjwHRGuIOWKW2epJkxU/DG8SU3TyUOauqt7L/xnSHzzJZhhHqTx6lXSBvCy17lpm9Ftk2wDRSoRe2cFlOaEEp7s6K38zjzK+Y33v7KEi5mgdojkXq6cwbLgQQn8boVQjN8PEzuG09SltCzIXzZzTG+d6n2c3iap2uCLe6Bd1wbupaAGOg6B6PN2kYhhin6KXDvKz5MUVOg2X5HfGixA3VdENredlPAHHXLqs5GjJM9NWz40oZMbXXuBhiT+yUo7IB4g5FOGZMU4MzwdxZH9LeFIkYYKbLfU9kSJ81SNqaUb+SSYe1BJUJOkoXlrcKbUSK9Kkzl7QlMmz7jz82fO6QdAL9lXYfxD984fTFbwBFc8pf+qf9+WPO08rwsC/ny8unDBwI/+/GPf689/u2tnXo3oJkZggmJMv9z7//Tf/Y/+ycB/A/+mb8UJmXukGHHhrf2wfOsjFrv198AjhQgrlR8Hx68d5fM7Hw+/+hHP2r21MxI88xzKcUUfpBO4hUtm7Fl7uwWK+Op51GMyvbZwijbFCUfTWksvPpe6gK2yDevqOCQvMlhiLsjXpAVV3hzia5kmfHCQENXNe0OEstmHsEhmVcNwnw6/vP4YsVEIH0lOUiRSmW5rbhG9CIAdC4poU5XEKW2HUavAnISHdGJChlhGG9Z4nj0osyTdnXI/j6CicwQRs5p2vPa7klwk2ay8bDgzcj4++PrPEFC06bClnhS+Y7FgFH8RgHTIs67uU0izJjOUDHm4lkjYOcAhwMzn0+LVxDN/IikpSpMldIwKBs1zZHMGAwVmuc45TLiGwL7XdB95oo0ragGMN6FMuFi3rjP5km3fhBEwonON22ddR1AVJ76TdFCdO0c3+9MICqNIaMcox8HAC8n1syGpXBX7cKUAKAMVzn1vVR0d87Yfz/QZrr/pvM8U5LbhLNY/rHAyqYhD66JDbV2ecB1P3yKy86KvuPnbHF9PIB3dzPTzUNjnnbcbLGQ4yJswkzohsjS35FySXJvy6J9cqQqFPH27W9N5jOCl90j3HiTs795cUWjTkDDDmGkE8BrX5/7a2vnb59xvb57ePq70ax3FzvNZa70hTuw6bu990FMbB8j8nmZ8oDbnHB+dKOaRyC5dp0fnqT+8PDw/PHT+/cfvv3Zx9PyILH3Dnc2a62RhHFZFsexJskGijemNPZxY7TkKr9tkeISKok0rruxt2TY1bbVcR/u/hk6kzdMgk3k74QGb4Bz61HWNdNJDyEjEiagFfeI21vy+Nuo6JEgVTE3Q9tuPZp0DoeOOUtwz4ZhdGZNXC2ZXz/mU3pYycOVnhgMKWJMII4uRqQqXfxAc7N0g2sEKnemFXeWrSd56EiUMXtnp/5Osz1uUO0tinhvIt6ow56PFoXbBiE5DCCDFn6GHB+ejTtt2bZSkbn7C5jdIpIQkwVmPocblfdJRN0fae1V/JrVxi80ayCzTK36UCR4FozySaXxcYxsAkCj7VlYlvgBsL5JEnNiB8BaHZLIEUo/je9EJUxLbrZr5B7kP+YEOdMTI0abUcJEySO2LpcQdMM2EXfyVCXHHnAO+MTPw3o2fDRpqTJuuq8U0WuuPgKPN2kP6N2NGL5pVD+4GZ7Bt0j2PYEIfSKA4SUxZRBNGWTjnsYNPWYhYIZ8HY4ekB9AcIWGVL+rCGi8V+sUyRDp9RRVLoYisrncN7jmL8CMjz/RD5mv8X7yeCLuvnETg7ovp7MBw2n9597/6WYnLS/vHh6++fFPH999zeX08fn1H/70z3Z06FVsZAPYr060/92X/7D4sevMeRL5mnmntplIYtUnGIQbA14hLk1s+zDz+fO6rg8PD68vr5fX5+99/T0A3336+P79+4a29i5ogSntST08niPDreYTgVEji3ybVSwh+Jo1SDQuiOIVa/Qnjoc89eb6u5r7bgaJeqmX4nFP632b9U4A7EU6GgCLODuLrobRiD0VlUouyhkylW/L2Ai08bog+EHZFmShkcE4Qwj2PVfb2CoyLrVoweA+M4SLr8GqP4ISyPQIylhGCn5YTqogZ40yDnCx+gZmIiW3HClgFyVVdeolYFESrBVydxnD3NFGKuQNxMdmzFbxDYP3qt4WvF2O6MGrcO+yyagPzQC/kwKU7NOYnjYpkmBGQvDhkpS+yZBeURECNav533zF3VnuBcz4olpf1BGNrbUJI+5d2zzDm1B/uvvBB5xIiejdKQyODABY1TcldbZi2fCsgLM77o0riqdLKpPO7naVCSh7D0wMuKCSf4dUWzkdkioupASgIRjFZY6efDTC6qt73eAws/BXfuj4HMMBIGi9VNMjwG328RIt5M7BfQd+1hpSw5itbmRGh2wpmAziYdTMn7ayDGJWlB26FwAjtFDdWBTQPRKgmvrrgExt5aYNVLnK1H2coPtkZEq7pBIewqGgBIGsfo+MykmcC5Hql/YBTyMfefBYwuxYL7EVmLB0HmR8NluWdla/rMoamX/2P/Pv+qWm9I/hz35m/AMfPax9IM+gJCRHtP881FwQbbfM5XS5XCSZ2ePj449//OOHh4d1XR9O59aaCBnV18iqlWQMEWpUmb
3TBmO+iq6QUsaDyChr5ODSHYSyQFLUfUkr5kR1hu1D0lwvsY9lDkVOe96DnVXPQ7OSyTIFr4psBKXKYx7gMlu8Zj0lMqXXp1dQcBbF1JSCm4sIXIc102Xa2anmCFtIm6l0d6TBNRmxpNHtLnT1XusRQE99Qr4UZcHoD80SveeNH/pHC5wZMg8qZeiAIhMQjealQ/U6sS2E7WLbG9gnKWnHqIZIhaN/ZXzwvAEojYFp7dtGamBwmO3xiDcYjBwJhHGDiTS2THmpkkZvcLwsQF1b2Gpb1wLu4J0Fh93jg3wUFcnvrYTkcE3MtQPGTlWK6A6AEcKKfZzd4GTxP/UZANY12rcdo4VIjslpNL0h1QxV/NlTzcKo7vKZa0KtmpgkqQMLjSCj6tK94BRJ67oCGKn3ox7yOndtmj+EyrjZgfPrWefQZHU45FUPcBk25NlPyclNZfciCcn/9q53uSLBY9bbMMkfVYIwJWAScLvbn6BHn6JqwkpGqjGutapcmkcywiTFTpekyM1g0TKv6BmqZXS9ZbhrSJbVXr6PvDcOSnIXPmlQulnAZ0Mab4A8znKvwo12uOfwYTx4uK3xZKelo7/2119wAr/g9XnGX2Luz7thYsCzKIk6JrQsRfabv/GbP/zhD5+fn7/63vdfXl567yStmVd1pmbs3b37jG8aomtLK8s+7xYlIAqINCRHNMNjmEEiUkPZ/Y1s03kxZNo6gMxTT+GMk/Wi8tTvqyB3oCrcUCVXa9bdSbZQtWUdas3SWh359wPOSltpVidkGJEBZf5VENFEswCfJWUlCbSRSx2H0bakhzRemkGiSPQ9GnjM3lwriRDIaVhsTUh5JC1UxOZDJBg7oj9g+IY70X0liegoHlA2qtky57lqU9y7WY+5GpfoUVpZpIjjmm8vGPlGBG3CRRXtmCMMAVhbhvqSOGQk6dVvxoZfMxhMgImbo05QJ1r2LZ32lhCx0CB4z3TDTeOsWzUmQ6DZsE8eEHpxhZ+zUz1K3ksku2zoByjqDCBaW42c8A0+q2dms9EjwECSZBXcmhWHMAe7TvMpTe4iB9kBg0GwVGKxLCdJvplCsx9GnL8OmJgaDQzCpUe3HGuM5BR1qQMnT7SZhoqY7TBRlsg5202IkZ/X0wRGm0iCE9v+tz6e7d7Ns+bOIgQsvBLg0dFkanS5oPB/hM/EwN4DOW8FjsC3KoHsJE6AeYuQjqwmkWkcuoyu0ZEtaHS5VuikM+jM24KINLMT/BWSr7lBFvKzAC3Rtt0gZTx0YIrP7vm6cFLU14KATkgp6S04y+DuvffQmCNnkycbuZ5RnwidZia/BuxXyQmzRi4mXvt1IRc17+gGRw9RxnjyiDVVsYeAE9JbnIpdC0MF6N3doyw+UrELWswN2tVBS3LwzDQeRgQQqorRJEjGCwEDrtpa/c2ZBRbUpi1kc6mvawiNjye/rp8ELefT/+h/8xdDtXfC3Cf97a1kuimgKfdXgEtXRaorGhcvI4s0+qyQa+CkqNAEKJM3oWrUmLUQLES4QWqIxZMLMvJZAOhdEmHPL+va9f7Dl8/PzyQdV8DUwyrbAMgD5djV3XsQgk6CRlvaZN7I3Yhd6GmD1aAbFNBV1QoAa6DcADeLhP6gLa7RLgtwLKF6mTy0xExtb2cTzGURNEN0SsQpjSzhzJCzhTvJVhi5ZASsOySjzADDYh4ok3UjolxU9CVrG6ZEaQH3ESXEnkY7SVwaYrWCIVkVe5Q4jRbp2chJAtSCIQnsAOgUzAW4NXN3uQV8kOdXw2er1rrkPTozL8stcoWY4DsFbIi5aZu4lXN73zh+sIf4dJBwJ7a3u0Y8Cs2Od74hUcZJ7je/TmLyL3TVuf2FHrkNF3/rtpzMMIwPosDNxApAW2x9svKQUm2S9ublBKHTzgqwhbeySvEOOZr7EcaHg5K60ZyeLZ93qzRmfq1nFJIioRY4LydJJRADxGLmiEasFdGW4tRR8t9tU4k/LFtLkfRt4bj3p8L9Q0jqvZch47hJAys0/b298faKyIkytyilC1hUoPPyN6dtjX1EGJSBpBSWlLUPKNlai2ILKA2+5EjfFjj0OR2xesgu3pFJH1t71FAXRjEKDlDnrjGA1tKvbxnqmlilHYQYtSy0VbxClkDdKWrjkQF43ejZg90eAT2WiRYKUyhUUp6FClPokwV1D4o8VXf2kYPyAQTMYLBTlAYc2aJhhCToWWZhLGgaabTgRbmce8ic4/DWAgm4RwDsPeo0G55ibVE0cS/S581my7De997dvbW2LMv18vLu6eHDhw8/+MEPHh8fv/vuu/fv318ul9ZOG7S1wcS9x+4MEXxSf+5c9w/F/oYZB4pbH7EiPjckZog+XJmB4EPHCJxKgCWABsQyaLRDdEd8iLS9TEgnANmNC/Wz1ohxz1hLYVpgfsycaTbL/D0ApFnHFvw7jdxcq1ACV50+YmQYpOl0kIgdAx7IfTCwbtwRvE7tKvMtQJvyJACgPPO4KUU5n5+fu8efAdy26zMSTHT2rcHf4s1hgh6Cxezivctxb9KU6wNvfpp4sIK1DgCOPpQTPYkjOqIfD5NXWT16xQ+lTX5iSJjgPGLl9ssfDZdj0Sj7vSIwyIqLF/0SsjHznBsT1avI7PGz8TcC0b9zUyUyLR7Wtr7Od/fI3sC9AZ/4H7+3WEmTNJI06O7zKvrHLQx2f1vGECvyewk4Ja0NKFvYFrEc5dZQQRnjSHOYZWaEFIR0qA8eOS8h/hTBqjKBGcH2hR0m+T7yLnxHUIoukOnKjazhkVic2W8MSR7F2YPiBCKGoSsK7YYVuhHufruuu8xmJtDbLcWBmKkLSDGi0igj1sIPQ85rP+zrW+fdqwxvaHiLtUZFSV1JYdYfjI9o4mzn38ZUCgRDLB6VycfqojZSZRjuZ3U7uWBYG/uP1CaOOAKXBJrZUohdhDvqKPn16fGL6/VV6tfr9fz40B2n82O/Ti6bKRDG4eUpzcVWhCvz922pd7wFM7udUWv7nACsxMvtDGYZDpLhOgt62IQOjSwfEwyNO6QF0+JlPqLBywIWBwQjVDarwwWtDkT6XCzKQYDDQLX62msdZq0SicbcQKrl6z1XX3nD3MpcYmB80KU61EkiIhppueUuOaEx17QcJn86SKChSJHoezdwsMZq2HGHfQ5bbhu6TzAMnyTiNz4MLqtduHl5rwtQb/Hau+sdE2Dl0cfjfnOwP9+d9+6v4ckbnCXf6MGU7s/ngCID+wncup+jx8iI3JnV9FsJMKC3VFpLnIG6nw+wnvKk+sh+ApYpxG3MTURfL4z2wPtSppr25ca9VHObWUVRjHEEPr97AwcauMUZHR4ZcQ2AmS2OHmFPpMsNTjsqphOcWkO4agd8Is43SKFhy/eJI9Amda2I3Db4iE5LsLtTWZCAzNzZKfc3oEFWpxaZHYst15/BsQZ/rhyD48IieVVhfAzZzUdvSIP3MHL44CgS4D0KH01pAQb4qDLzuR2apoqSAPcHeZOFJI02H2STUKI1MIqKpZh4P2FEG6XbnyAzsGpxg4vBYAvtKo8aI1FD1qvqvb+RI
C5PGasiiUZwLwbQaiadjKba2+Pj4yh9McMu4ujj4cP2qRTfxdpiJ6G7+3q9vnt4XNfrN9988+7d++fXy7und99+fH56etIluywMUEsCPPBHzMI5Bu9pMX5jEyezSk3yTgh9EKSxntuxwmKU0gMqP5OAd0rdsg6IgXQt4vVO1pxXqwNp5OlaumqNULhT5R58PN77SypgyDTRPq9OEnYCWZT+F8DbBrwO4aY10canJl1gRHNQWG7JfT7JvGNMqFSpYgYKESW1Rtduq+LfEeV/ENtZCT+J0/ODbwCokhpnRE/mPq0THGEvvtMDDnObB8nlYKN9s3130lt2oP3FiM/uvbPCEvsYhQzu3vxWP1fUFmxmiXIwYmrnwCrBfato5p+1swd2vhWuGYx8s1ntRshxeilNk/kE5LXwbWshzCyRODGPGUFnegFg60A8rt1uEi3NvwAy/cmDNebdd0BniJo9QTDuH9G809ATtThiOQjKV2SXb5S9tx9nvpvoxrq4hcD0axSsbtHQLAshgRAizaPEI2XtzLQ/TFCql/hufzcLdrxoS0mP+PPBCefFSoKbGBpez3j7VIWnRK/Eeg1775ycVloXjiOPqU1WyrtX/SSEEW04VlL/rtiBe9ftYU9lsaBPeKMZLPR3H2sCDnHUb41/b7aK4DtJFbKS+3uo9LKxrkgD8V6lpnzYAeotocJW7GT1qSUzB8zA1panp4cff/PT1pbe+8PDw8vzZVmW19frySbGP30ys+4OWZURtoZ0d70B0QBatnoPLk4iUow01Zu8ocnHLPD5w2RwqiDEHYR3ec9btBSmKh/GTDJUVByw4u6B3l6Jt3fop6a8auzJTpTe9OxYFrgdGtKcmp9PCWtJUDdVEw78u76JjoK8STNb5mdmsu7zcCXdbOldAtI+nrRgDmPZxPGbIzH+zc4Bw8dW9xyS1VRKkruPSNd5tNHHUdpMnrOA9gvK6V4BZduD8fnzlSdvrrsZSk60KKsamSc1adMuPnaavG4ZZ1zRWqRIL2y8bvDjacyInb4zyTIzAyk/VVAre3V0yK0aho3Vd22eAUBOtBaBdWXhDBmouNvBHtANS5L13Q8l2sbcXKrWK2/soKTFJr+HJy0EjpJ7yY6ewYueVdPDCivjm4UdInE4WaIBztB/acUM7YC6b18ZhVanrG9iH8mKrCkelppYsPnhR7x7lDYkmeTxfWO4GqeOnlo4Bs3dqx6AFOmBUfMIQG1cVRPgpFEEI+wW3qtbnndjuhzzqultC5cmMajMFTH/0be4BmlC/wywD5RhoI3JQDfKJDPQ4EBHXwcVnkVOYB94NSkVGyoPwN59apeIt8mXOzmyM2Q45sj0ed/r1ZnHpdaaGSD1foXrdDo9PJ4+vb727o9P73vvWtV7f/fw7vn5VU26wwOsTFlWLxKpxoyQ3V9hhzh8eYefDZ2CpGszA8wchlnSVKWDpKG4USKyMqpAyZ1dUcGHmPCQZKTDGdNik0iZmc2bXEvmz21yRO0PToZ8H7jgtpaNZZbWMOk5RWGKc2EulLV73UE6IVllLosy12PL8FCiIHSY2U4FAbJNd2lFA8NOtrmTOSo1AjPVHtw0JnyXwRyumZViQugYyix9OZgFhc/LdDfiSU5sz13eNOHGIFETeLo2AUJHHrxrOJFqiODKzKKjgyFz6d4CDoutKkMY8tVtgs8Mq/YGg7mZeJ6QOI5WOiU3KmzwGeEilhENzYdDyTOnK2YwLM8ER0AWOetkGxcZAqum5DRyK06NQvox6fH4htZm5cu2g40ozYdF8JlKXh2nm2pfm2ySmNGCBzMTxzhhZot4nIFYSQiwOYFiShIqi1tLxT8etmwSODzcXBVsuLMkYWMzwwqKgsZsI8XwVuaLbR+toazGIGsGgfSs2xNScoA/vbNkByn1Mdgtlh6+GVAdS5vXmzuFyQs1do3R4E3R6PAuHdjgAMwndT7joXGSDMXJDL1Hf8AC+M600/Ym2YNuh9J78rMyX3re/PppD5MxkHvP9tumMiwe1hIiTvaUrfhqEm4gmhZDA372si6np3X11pbr9fpwPvd1PbWseKWU22xaQv5ZbmzlezNbZBYgxtp3EzvAHIcrk9W4C0iptfs0bCzPF4erZbtxCCaqE5C39EwX5qBTMBhBI50uRWU2Yqs+3J0AvGivm52wPy8aAN1jYKHPZuwBskefgKWMajlNONMWBYR4UVbGCAc2pk1LpUSFEgPftcUcc1gAzDx4wPrWlxle21E4iFU00V3OyvSvrRssZDCqgyDfBr9kaOj1042pmXUlaIayld/40NIMmKIY7/hydGOCGP9mV5mJwRcD210/X2SI24oH+7z8STAfS9vLi9j/ND1bH8KPb0rA6t5TBBw7JjcWtY1ZMne1c8ivIrmH00rpgnFYJmJdvVbW1x7krZmZgQo2nJHJg+F59eKZQ39nYmqRGT4VUrgVfuYz0wtPhs4VAtnah1V/p4uQBNtAJoC0Jbsl1A0zfFat4dqOhCuDRxagp2ghVVisVMTgZnNTqPaZIsXC/LQsA5Pny/cCcf6r9OAeNjGJ2xi2hNFiDEcsknPFtbGRdHhjMzO6Ed7JyJ9KHbdG2IKfk5R4otcsMm4vEPfBgxP64fZSacBBva1SWWohQygZyvG2rtuTOIAzn+LgOS0aVQe+Ad2dWTJzDhI1kFonIWCHgRqrKPP+TpqMRjdVnzc3bBu60PrSV20UcveSoAYz6IJeresK9dba6WRmTd1fX1/bcjaz9XJd3R/OT+u6Xi6X88NiFqJenokadhs5wtyEVULv+ixF480ZzOSx2QQ9Tf5eFDQaAINEaBhYYB2rEfLJ5mqE4D27sbaCQAqstsTQJng1yrbMJALZplrNRyf6/IFTLt+YKkl5xsSPuZOUutlJujnXe8v9AAgAK46hrKYNMNJ+p9jbCT7LwVzcx8wqWpVV+XMkkirDbGpHja21OWhmDStgDNhl5EIuMBo70dHFrfEkKhQp5ndt4a+Wqo51tY8qbI1c2yCd6pHrtjmjCglGiWVGKiryT2Q9lUkUMCIiQEpjQ4pVdKJVKU2SmjR+OyjQBdn+RuogDZ1Z+K3JQHSqE2ecdlF28Y8SjcKV65alW9DMr+nEGbLfbcVQTBIArz3VZSv2H/aJxUOSq02Ij75UnsWYVYe0auNdJIAm24lBlTXixozt6I6IQU0WogWUp7AXEGPyRUpaK8hfHQYENwewIgqDBPw3BciU1api7dERSL5GfoORoahGT2gQy+oE1NC5jlW3zPI1aIQ6u7ACMD8hswkccFlQZTv1iA8HBh0ACFPvMGP5RIyGaERmkTHoo2M8lxO8vfbs/mQVPhYQXn1siouZLw6goXlCTln/T5LrQdYhEVwaYewO12IWOL9Gu+DsF9fM+OALEXWLmqTe1w51SjJmm2FFNY+uLtPZF4cu7KoQs8y+tBKcPKypLnVCfd3ZbwZTjOh3mpZmALzDfWXFsubBLApDNl9FwsxoYsGBpLKgxE6ZdhKuJauTlt8UWKxdF/OX796d7MPj43rF86X3yBzaDmp43z1TGg9yYZgf
em9zEYicdJ1VCejkiNck0Hzd1V4ehrpTJuw7vJzGhEiDzAgsWSUhGldTryvePTyt11d4X18vX375xQteXl5e2vJevlqDnFGU5nw+K9tpZwWNKCkQ01wiRsnYuQKgegOoLjSHRA/NxP1KhSA9uHgqfCSBdr30glB4OoDoI8RFcCGSzn2o1BprlHwr7NiXdZFccAX91mrAYgvQg5/1VYK1doIWdzhe2BaATnO6wQGn0enqTlczqywgY7M16yGk/Y5Ruxu8ripdq9dmQavLrmhsbA52d65qjOxppKZNrVql1JNba+6uclwyi92GS6c7vZVGJ5qxnS6dtsjo4iqX3MysYZl5+4wxm5qy16Iij3z0DRxOcvIoLMVl1e4rYmtLrt1M05uEMqLpKkliSBrlCcs7w6qU0LvxTRxEs3wElRUTslvBK/CLtyPUu2YVVtOEx1vuqAJvXsM8nXEWYdOode3mHJwqMts2oWyr3DrF1N3AfZ7Y4FsaSzPQt4Jis1KlclbJNwU6Bbd+p5sKANpGCiWx0tx9eukQUACM0qfBu5LNl7nGoghFbj0ktdYOag1uttiKjqdGsoeGZc/HQpgDrFJ0igabcWpNCjKINtr6lkIdYZczBG4/z1965a7U/JN/tIIeowVhpQceVMWhvmxnIaBUjRpFYo43jJixLMtZmWMRnCkMoeogvxOwLZyARXMrA5LzZNI60qG8kUCYXI0ATTsJdGLGx4OaPxUSbjuyN5BMvxZm7E0747qbQIHL6/eXp++f2rX7z67Xb9d+XpZ37fG1r2MiACZT131n56ww3Wh+d67D92M5js4tHGQH3iRxdVIBwHVup/VyXU5Lv/rju3fPl1ezNrSjeeSQyMBJiZvfji2tPF8XWEEy4/5qKDm5ZNQ0sg/cWHVrO+AfQFGxY8deDJuZaiAZUVrjYesZshYy0Apg9CCOdAADZWYmIxWldeb5DJoTtjzKs3j4Emhr7J2lLQGoIId4MfaoLgDcQhHQ7FQwKXoVETCTFW2j5hIsa1dKsqV5eOIQQE+acCcPOGGB5HxD3oy61bepWnX/fr9rMbTs14YUB9JXP7Bk5mqQGjOjphTeVPVGl4Lbeg6317YN45F608Eji/ntb1+DVR/fckMu7e2hKjtIaUNOR8N9lVlp5QHTvKJBmXZvrLclEty83Kss0Vi4gWhsDgyp6I0J7CjdnvQcKN3Mwg837GZ7h6pur7Mt8Gv6chp/TAkloIw/A7YDNTYeQyDjNDJNFsIoYAlN8N9nXBxwKN3g3G36vJZh0p/rQKk+sQIJ43WbfTKI7m6yd68jptkwliWaZLT5OE1WlKltuaU+V8rMEFNQiGJa1oDM1IzFRM0EdoBgJAOFQSPCanpNuIA52WNvz8Vbp2KC4VvfzxR2476Dgc3f3LLJR7Qv2R5e+sv14ify8aRufl3fKnL1eaHqrqBwd5y7XYxQEkukWqikRRHwTi7Ymhaka9/AtV9kp9Ya0S7XF2sE2yiVShOnJpVin+IXqU3A3KIEUJjmKEsbWTlggSEu9Kr+Ukc7Skns4TYDPxPzDgSqyEv0m9HkaD1w32gxr6jHBkXggldhlgQW3aIQYKUh3SdfLmJNiTMpd8gAEZ/hKNab/7ZIQwhr+XTGZWDEX8tDmSTlJvewGeTuJ5RCzqkVMsRtQCK7mamK4yClaki+vCnBzZCZcGhGwUOv363izIFybcK7YsW3OSDzNAxBKGJTUUEawKztTS2f5xHG9PzmkYGVcc9mc07JrYZLLYCIxIJ7BOWtmX/mtvDajrQuokRV7ajJNlTpXpm4VwwjGCqzXOU2+Cgkgok0hBQWX+HAwDzqHcb4I0gqjJOj6L7qUbCZjRM5KJ3QLTZpqyWbq7BNPpvXFZrThvsFRFPWai8JJuVTTUWgxmlnJZvNQcIKPZUbfLAJPahAJAxlN+dTMLESHHNlwbIEm52ywmg+MUP7Vv4YoJ5jC4riaAbI/jPeuvoUIRKPGFKHSBl77FcR4AyYp7dcFxmFpiqytGAPABlDMhJ+NAxSDkS4/iYmx/+XmX9cFsUV7q1LfIPhjeuA/4FPt/Cc5JZ8Kt9tNoLJxyOSHsGr95+8vnxcL+384Wl5vPbLa/e2DNVh5DcDANGnzdrmhv2X84t+7rr249TcsntVrYsWcRtlfMk9hPrJml/Xd+8fX15fyXa5XpaHs/q0UvRy1RPYpUozumKUlIaZHjLtDyS3hrmRRy64rwXkiJNqJe8dRat8ziuDNjPXp1VjOi+jfAmJ2W05hGxr3TvA1tL4T6BZZy+p00IN3OYWs8YIqpeCH8WfTkBWoZw+ujNt20KH0sXm8sDUoQasKu9aoh8LVUbxv938gzpJlunPniqxb+R5dy0VUHAcaNTG08al4uccZdeqaF93ZGcpDVU9Wdx2hkbViKOBq6f5IX2RSmaYRo2JoGeZw/16ZvVp9/00VRRN71v/rPvZSrHqw+GPz7ddSmLMn3ssh3XmKADu39tQtdtS9Es28vnBZ50gznZYIw3VWEsCMxvfx53cjYA9Ix+IFax6UzXCP11wmB9039pIfmaeuzeWe+gAkVtqiD26ToOII39m+mk20Y9xjh8ywUGAjTSMbctvXn3LAG5nNc7RCLbaAtmmYMDD0j6T/72tYpMpg/uC4RzFKPkyxUwg5Fk2ZI62yvK/ASEMnxaH1LqEkIdit6O8kOB5sAMfDzbbWM4u5WksVjd+gYEAO6b7i7G0+Zo34sjCSUnX6/Xbx9P1CernxWkfLwbH0wK/09nicM0y/WFuB3H/9goP982XiuLmnMrmiDCzqEl+GJBCI21pvXfvcHdbGnEiGrgGutb0UlAfz8+kAEX3drwwv8xelzsWgF6Vt1VUcEtM3wFhA45PxNjGROaO9qEllIaTKn5FARggQ3MeSGgU2YtQnBbOGlevjgENWm+3HgDZUnOhCbNfZJ1vU/lSLPI3NzkGw1doXLiUTCuRXJZz8BDffMAEOCP/ATPcfYSCh7IQWsSWB/wZZCpQAkAzyyKF05fRwmx+51yMyRTB0wmmRnUobaMzyPLu3MxhMW7xcWykMpPs81fIM7uRp4Woxu+QkRYJl8QYeehzbqWIAYec0ZnQ3D2l+9UZSuWtymKaeHA8vs0xvUQ1ySC10cZRqmq8pZxVi5GkxSi5Z3daNE1FULiRagd7ibazMbQWiLFAZ/ikaZngh+xWBKCqfQkDIwHsj+um4AIjZjK5+x0CiklFGHMY4leWuIs7iSW4aFfG1SdjFVIPkKRktBNRiO0dOOJJpYxyRqBzmbEQgmYSzQLKjc1zt+nk4EwZCbXZooE9Lo2h9uOMOgAl8EUHhDrqGQ2uxFKrGLfoD6CaNgFQrGDGXopkK1KRWTEksi2L4osILcu5uQQqu7wxnhomp825Nwlk01ruAIdTsMVh33nAYVlpUDuLyBAHPboKcouYYJRL6u2Zup7aotYu5Evn4miGY7Tkljh+GLymtDX/wHbeURWUjrLXjBibzDqEsKm4AM1IOiwytGUdLrKF2CSTa12W08vLSzufrut6Pp2vvVe5DVXcuOIMuWwLpqn
pzDM3bNu0v2wPhzYtRcIawdW0IxAARJGGeC42i1O2zHbz/p1HllmK+zxbVhc6ebOFNrzJ6tHntoG78wy0COr0sJzaRlkzzDNQtjSEMoZFFbj0l3LIFpbljZi7VufO0+IyTZg0d2da2a2x0tTojKDjya8VtKWNQhzYYx5KhunjpyI4dzkfI86F25/IxEm0ogNhwor6G7bFo9+QrapGr3TpMUoJzAgzzYG3BzuurQbF3ia2KfS2CSxRKOOWFtxOb/CA2cejOXP3zbodUdsnk7ip9AS/dbHyrZMMMSNrmpjeo4JSgKGY2Ua+x4eGyC8pHiMAWLk/DzUTHS0eqGUNS2fez7AG2SYa/1wZrgbckeb84CpHYyyIDm0Gujt6UlHJ/eAplsyomC15V4bmd9RkZhkoz+RBmmRVJ/XiCqMybWACR4+nqkiTAUFz+5odWUF2Ao9mU2Sk10uRfH0HPoxQfAFVJS36hl2gZbKphCzSoYa5ppsQ7jBleP047AEuEyRXRJ9WnSijGa1LHZN7fsuaGP05533ZUr9ufrqPG7f7W3zXi+Pe0Q0O2x73sxwTs1jTHh4u/oqr0Ngb8XSiLuvlmXq4O597nGmb2O033CZw0MBuuTUBrGGOMoZ9IbswAVBTIxVlPsNJ1+SS1u4ult4mc8coIRobPsp6xQ3x/ppHEpxoxnl3/iRDjk06FmFQGa4yb3Gm/MxrnE59acCa/j0C5ECdZqGnsNevjYQsTPQLLUTY1VAeXSdlZqI6uhV2xQmdQts7ZLaxZg/yS55KIC5Fz0kzqqckHrYvWVYEQrjeaqrBBH0dbsFxJBJ6RTpC/27sEoyROpDaQZT/MQCuZUbWt5AmBNWEdUmgirM7H/4B6+m0JYerATtViadHlB2kH9hCg4aSt7kVNilsx1/n19Vvk7J4cxstVeT6dRN452uUEJvnOTi3Jm3mLYl+vipGNKQDame5313cQZCCIrWiwXrFxw2VnUMVnvF7Om6sQ6YSjEbp0PxpaPPRHWyaiYbkHlXgFTbt3IJGkyqJbkTF8z4wUbEYAQIfsstkD79Fwnmc4HZFoDdxJ++cyAMn8UvDBzkr+DWf+HNjsRBoC3fjo6iFuOV0zlt0eC/r6utaJ2vIKCFRJTEL4Bur/0DNek/ddgKWRowVEFycValpSB7/6D/0m7fA/3f+9T/53/5b2BibBg8etPvGpw4M2ldYN+BGspOL40HNiRdf0ewsaxfBJloNTErUHT3s9vq5suY8N0zUtfc1++tNMpAHS0j/YtXCich3IlKBH56e1nU9n07XdbVm8oFXO6vALAPlTARwDlrcUDecfQVelPxWBhNFTRwfXEZSaH0zs2BZ4wCXc4qyAm5MlQNqt33Q087p3lqjyad8NvdoKauIgTEzlxxURq5Nm1Jhkq3lSilI3eBgB9z1sIUNokl3PAVz+clmi7vTRYtsBQ8a2Hs3sxHMXEw9Q25GGBZKD1ysrR6FYUGy0eDq7ktU2aeyn6uILkrq1V+TQ30PBaJ7NtJhnRGA4FzSrGdDU5yiFerkLnXPqMrGVO82b7gFHQwDV2R7xOjmk0ClLIddjLAt2J2HkieCHtX8NsNUWoAzEij5CdEXKJKegS3zhzhtNhuCqbYA9MrjXJb0oKwKF375uwduCZKurQccFlrULTHHYubrBhxNAbSXCUOT9IAALpZikCnfEpm766SFD65G8jqViHOpQzDK2Nbsx1M0K6Voz23m2PeFpLCOAodErC9GO/VNCUDQgppzhA+2iT0l507hLVt9xT6181nJ4FXoQLN2jRfG5akruxwVF3mMCmnlFRu13wiQ3RalzglKNhYo7zUhyUPvE/zqbma2kK7I0z2DJF8QhWcIBA0PztpX7601Y0OH0BdC6uv1Ap2sWcRAqowZks7d3L2bwkZ06d1gRkOxWEaq0qCtoVhLNI4yjS691xMARxcjTwGElp8b8vTv1IuD9kkiYaoM/jXyGb0bwOwOHlJcPtGiDbAEVkIwSF8vMHMSjgc2OCRweQqirk2la4is1DaBt1RLEGEs3EhiEa2rXgFE3YhRAUOQjfZSxtDhQuQ9pbZm2dUHiLLMXNZ1dVse+pUC7MzX19dTs3f2cO09eLBDvV9oNIrW3D16wBItS0xL0GoTHXBJRlFnP7nU1ZOVohu4wFwnuZPdKs9bMOnB4WZhbXEh8tpNILCOUIZZ91gzywC0nWHPdKE1IBJzEzPV/SoCS9YVghthXCGc2tkT3+nV5R4N1teQ7B2LI6qeiM3oDtkiW0Iwdkk9MwbpzoqFkpkvlMGulWoYpqwlrKUXtaLaHudMiFbKEfu1yrsiOEynhXa1F0NLB1OIcCaRL3JQLcqKhKjcm6yhXc0YZd28X1cZSWsPS8WRdqkl7silkNKGLSjwMGSfdFPPMuBB35qvQ2jizNRvzt6xivL8loFShxtK/7//9sM8eU8tG9rJrPdtYXx7SfYwn/n7z4vM8cixU6EUtZQVM4jw42Azc5enm1XwxtB99+1jXxjGrlCh9rM9jBw1a4zRfU8dsDCT2RH+ufCKRh6upRq3tgw71Dj0md6uvhUumEFdeuxWO+XGjLyHgHaoskMhlZWYDEkhiWxQrKrHEmSLOL4kLA02uT1ngMQ+wiNmUh0yorWTT0UqaxphidiiDTrU5oSysBhjTEEz4LX37jtil6K5GwC/7d/1t379T//pv/jF+fTK9bK0X/3kP2mX6/ndGSbpuV8XO723k6/9Zb3ysYWkGApLOJvFDT8pZK3a2NMlyu7ov/qn/w5MaGxklzSF+ElCReplNEbkkoaMTXr1UuNQy9wPXXGOe6cK11caImZtYfxLjnye3SCSNmYPVHWgiMgYETcANlQs5Jweym+4LIvcl2Vx98vlcjq1pfHyujrUUyTeuneJpUQr3HQtK67xJt83lS5OuZypcZbeubecQQDNjNymqSjsVNs6zzxBVHceSZyHVF6HOsWpfl7OoWmVdpF5Um81QRuHWsMNGgJZePbU8+8SbI5RfwNoe8jE/x7+VKmzSGO/j8xdRr30Unw5dLN6MIhMaQwEzAmzCJqbBbgIuag0pM0qkUYQtxmxZqBXUi1vyucegLXt8fT9W4/kDQdqvR8zSc/+LWl5e3tU7XGu736aeE8cY6/jMllpdnOu6UUw8wFdOOPsZ6/QhAwcvR5GqWRO4Ve3q6hIl4kEc+sXObgOCllHEtd8gwmYXqC0wOe6Ri4500QBfga8e4b6+V6Nt/ffsNti4/NU40EIo/4DYKOoznCCJBYfJbbCQFe0HijeFtqzeAfj0gri2wGrWIw35yyAXl40UuoeXYKbBTNXaeQNTOdNxEkpIjBVpFO3k2e5NmbuG/+uWneAPYY2/sFcf+zv/O2f/ZW//iPg1x5+/Wftpz/98vrFs/PxhI4vYQa+rK/e8eXju+94+awTJq+J7Owk8kHfmxHQmvaLgGB3aSmECAI0oLFD78jXlKRdacy7xOegHgQLP0ilAAbhmYEvVRL1PHgGqZ2AQW9mL9VRhGKZBpdlufRLOzdC/Xp9enwi+YI10pPiGqEYMA9tP0Nlso
4yvWXyJFOexoad9SVmCXjXxmMsrUftZZQrpG7yu33K3Z3NkDFLB5LYyjAwUDqDTyNoO8sqMm3Yc1rLHMnLzeYx5GmKyDJplLRy81GPXcnTB8l3MTfpsY7F5jz3Tp+AlitaLmUYVMl7O6XUzBxdCrtChnVGKrMAGZxu08ImrZHLEF5EjSo2S4lRt4joM33H/evAdA8f4jk/4MFnL9Yswx5yUCneemRH0+eh7t1sxXPim0O9jrExv8ir719967oyjpNQ7RrtOEncAz4mvhlHecw2ovLG4IfHN1f6VMgpf9r+jCc9cJfDoQuIWkpY3SAwgeUWFyKPmftobZKHEtZBEjTFiB0um+oVOAeX3clct/h2K6TD50w89WR4aBEMgVK2gCx83feOtHuGEM5b2bNhV2sN8HWFu1B2u/D9wKtENuB0HAmxSvcde7yp+3fTulSmKqDG3P9cID4+tvv+rdum692X/8QPf+Mf+Y+/+63/0+/97lf+/t233//qsf/08np9vXzAgnbyd2ct9u13L1npbEJ1AEOzrB8ykM10TLsavR2BZmBUFwgt1mnLIATwCBMa5GjfUWonG23nd1piVFiL0O3ywaYYdLvXB5Y8/xqlzCqEAMDw3IWwF5kwKL+jbjEzpyqLxKh1XQ16OC+Aeu+9pXe2Jb/P55mDaKusJyOj09Y2SWb7wCQX23FINZ3FDKLjxdCAEVWnIoo3BsrRcIwrTEH/DVJuUcO50CIqbVDwvrneN05wIz4OHhyMP0/cqHUhgKjYqUGXHGB4JrYkKGsUyuI9AM+xs9XaAUoJYxZXWEYxCKIZRbDFyiNcfGY3jHTIsiUL6FpP22kbieaSsHBUxwGc3ioGIbtqjPO/TXtzUOOguExXTv8eL/mbu2bZZBZaq6zJDbuagPvWgL/UBI5U/s7Cf/6AQ/zcLGNL8oVYoFcjqdbebNW8vW+vm96KGpryUhBCVYDOs0DHTGK2YTVVv3xDDthN440ZVqL2xrkPmvQsnNl0mGdKmkaUaX+HxPiZKd0RQSpcWVLvWYi8Aph3UvvW4fGzq3v72tSdjeGPwzm+YWLtEE1Ud97d94PmcUfCKNK/0dC3rl9eiPy9v/Bv/6nf+qdfmv+9f88Xf+1H/yX+29ff+/1v+L33p+Wpua3r+u3L87Kcn96yX91chXi3UytatEmTO19jwGkck8EYoinIIUSLk64h7A05ZZb9ufv7GQSrz3FiGE2xAIxkmb2ARaCM5PshJZm13ntr7Xp9fTgv7969f319uV6vWB5YVf3nt1fdzY2QBIdrQ1TbQLmXm3/BZe4Vpw1L1ScZMbkhKZTkROyOz8yZYqn5eaTVzbJIlpDfGdKqn9vufhY7aLSJSTGFAQlgD6WyNgSAoYxapYtPnoIoUUWCMoZODZXGK6sunw2AGJGq3PRQgtWgbo4AJxkNFqAlPC+AjUQfAEuQfQBBkjsjyMHq7uMOHdD1wAjzY8RJqVaeHrWAnRT1a/dn1YpWHcKCec+ou6NQ93j//vED/7hz3lh5JqNKxrC/D4lzB41fXp6YZXMd/Or7+d2b8/zm3U9ZUUHYYfn+pUxPYXG1yC+88UuNNyDdozlsYpm1u0rA3L9599NQTbCZ00cy21343IXHbFJuw1D/BnAOc5v/jJ5ZVXfFLNNvuIaTdr87/z/q/jzYum27C8N+Y8y19zlfe/v37nvoSUJ9g5GQBIS2TGcgNlRJOKCExgTHBJJKpeKqVMqpNGX+yp8pV5EUKafsgGPixDYUigOmqTLYwkIgAWqQnlqMnvTua+6797tfc87Ze83xyx9jjDnnavZ3L5KDnaWr7+2z91pzzTnmmKNvkJ1CVoNw2U1L8kuSBQfoTNZ5jtWrKopKWj7CHiBFjEZzv69IyUh40UilCLbtO7Xa1jU5y8X5/2TxqA1420lZKkf9p/Ge8ct8y9d95TfffHD77ntf+MLP/uyn3v5Tn/zVX/2VUn7o+37f57/wBX38yuF4fDzj3nQ837yYr4+SwGlCcyst3gaPn4ihoAeAbgriPEMk+jGnvcG8qlNsaIzWOISKqEQOGFL6r1xRlABjy3NyRG1+6/MyqaGBetc3TJKDajusUYJLSbP0+U8tyLkBI7ZLC6rZpMUUpZRpmm5vO01t9YhiWqpW61JOhgf6t7akXLzCQzUM0jWolLMtJ4ZcmgFQlSY25ywNENvAp+XddfAOIixZGZ21PbRNvFMb1MKoDLAldOQ31ipvIb+pkZ0xkkeSmEPyyJRdpYU/p0AsE2aIqt6s0BOUs3ICadEfqTKs3OopkQHGoepVjDRQqsbLa3pSiIigcUYrWgAWicJEBeJcfGr7MhVI9WZB7GjSpOl8U5LyJULLskrUSCb8SQzXSDsClP/UXCwV6+YP5mI2zIOK/j9Ld6Oj8Q6VD23SoxR6h75R1Rge4eBb6jdswpTGqzWQGG9oLuQVNgNDt6jVxR0BkxKlJVdsLEBNIss+F/RvfLjhT48vrd3ynKWYX5Yv1SpY+RH6MBVIOZD34fvq1GxdpbKbDbWdMn+eixH6mbw0ASk0r4Ij6vHF3AFyZ8OatqmQnePP7IS2Y1wpohVm5v3kBUVBBecoAJTjU713lJkWbenUoYzZEir9ulghq5UAjAVs5jZy1u3BbFx5ySDj+6Tx//D0/vFaH33lx75hftue3fzI3/rR41vX3/Yd/6FM8rf/2u86CSbDiydPrx/cP18ogtYIhUjPUhOAsrg5EZeqnm7H4SwqjbW0/QAzYhxp3sg1ddq1IF978EtitfhzvDdmHl1CxuI5QQDHB9tRIs7o56i/biuwtjcWgXE+Hg+qcnt7KyalHAxhtXKiaSm/iJdEDnrmRnSKyGQ9TiLoedO4gMZG2nRIDw5Kxpy3ehkG0hNSAL9tA972ZwuLi7cMRA4S9kjEpBSgIYq8BmuLsqkddQxYQEpJF+WTJAgihiOEg6gEYxE24Eg0gnn0X6EOeqoBUJQWZjEuzfU/71oRA7GtCuq5FcnKRKLbsXgDRhfsXA2Jrn1juRKCmDRK/Bpkqzsvjs3qmxX2XGQ8WdKy7VmMoIUdkz/SFe9a4boOAlFy0waml89tvMihXmofPUspLQ8k6A06sHKHjIdzdQ3FYjDevzqQH2WqYRRpJHcZO4AByCJSIDOCt7XLv9wFQk/AFBFvY05Acb4kTtWQWP1ntmeHZZpEDPNuBEeDz/58csej9JXHGe4NwgsWEb+ick01EdFShDCamZVDYV4YBM0Zg4jrIxAAztbT8xYCxFzLJEU03E5ZydZrXGfycBAp/+/CpXnwR34sg3N0wSSs1oGxLZgBhgcuvezitYTw6XBdz3cvnj39ok7X98q9+29eP3/68z/2U1ePH37rb/pLNund7e2nf+C739Pz1exYswSRoNYwEY8MWNPW0DbUzDyQ9FAm0qxjorBGZD6AbIshACqNlGPpFhqRfjaZIuYKJuIGFQCZ6CUtjHi0hQQwdpLU46emq4QO1KbRc0GzR1A/AhxMVhJa+6wF81yvru6RvHn2/Hi8LnqgRMpNKwIVRmz2pDWQiC4aZKXrmZ7Gl5IOV
3SJpMCr6uXBbR8AAJU0M2VPwlR4/ZlOfke/u6Zst85kUcBMS4nE6zCqlShV513pojgMKMC8Z7HgWnaUpq5IoScfIILthUqK1zaXFG2TZ6/Nuu3y9qeWDbEDDajKM6ibzAIzY5E41AkHHyhW5vhjRrKamWodilj5tgjJ6cTakl89UsUEVEwVACwtPgKWSgFUW5xnp/PNu9Zm54WoKKjzPNoompMmaHFZ8DwABUEQ+z4oAJxbfTWjeHwpQh5iHmAd06IHE3pijJFsTm4kO3f8nNKSAXSsBDAdD3FccrUuktc0SS04KDF5FHFio8OuAJapmR1TPS7HmJ04rYIdPhdogQ/TznP7txym/mcSOQCnUMg0C1H4Ye2ypmQgsHp7u0yCBGFZX4ZkEe3+SxdBAIHMh46bSleiASLyyIMUKPqiuVqX/znZYr3Ra48Rl8ExOl06R6mDKKjrRJEmghmAwwwTsEgFz5zjwEwhMVOFGSlVACGK0eN83ELtabkkJ0svI9NCJwLgcJRqs5nBW6sCpIniwGKRYSmVBponTmsFxDzxfIbXoRLVopVgJBqLRhC+0aqoY6kSEj4jADJPBUaN5Oa0Q6ykq5Wx8tK10ozHq9wrhyvafK6Y683zGxyn+/eP5Us//6VPnW5/+Zd/2dW9B7/iN/2l84s74fyFf/ivnu/si1rfnA6F+In56dvT4/N1vSvllac0ztO9g9j53fPtJ/HgS8cXh3KUc7z6UI5nVp2m+e4OIYyAtIgnELghUmcCqELHumI0rS1QwDelFBURq2uTb3xywd2E2WfTbaSmxlqLaInwWpGis1W10DgXh1FkoghkBt00WFzrNcLb+dEglIj9aUV44Pl9iTze7bgaOZWjVaGIHq9ORpHKOfpheziUU1KSh3IA3AwN8aKbZCXtMLV2yN5nkKDRIBNDpU1QiJOenl0jIkApmETkzLmINi3UvLYk4YnoACAQ1aadJ9WVFe7MPBYhUVnvhObJ7cZ5kuw14FHbgEpxe63DR1gBN1O79dzLlwBNAyFJmJoZlUVEPKZPQPVKc2p016wQgFiUtERvARANwU1wd76VvNKvTki9lXIwOYoBZuAMmmhBuYraZDMAVRcNjVZZ7/n8ROHVGSA2FcFMPRTA63hUgZRitdYJWFhf5UIG1QqJV7xtpb40yZeDaag9krzMNfmF4tYGXIxGFysoaMENjkAhT+0KNY0ytpemoLpzZ/uVmzl4kXQZ7uSwltUrsBc+HV9u1+VvnK1ZVQq8xnJ0wNouajXDFWBlo+tLK5o4PLu6s03rJRks200ZJ2Rpqr1oph6uFT50jsuo6CXu943mTiJ76svweJdCfEFsaZ391o3Rb1iSm8FasGULApimiWRN0h81wKqVsvaF+/Rq9TwECIpA1fP+oqJfEFCfldciaMX3ZXnBkz6bLNgFyjDQ2BCi4uKyqphRvW65WyPGCTZj8gaC8et4Q9Mz2v2uakynEyCTos5lelgf2e35cGtflDdef+fFzc//3X/05luvfuyr3nr4xnG+0Y99678rOHxKpcr5H/3Qd72ppYKPTvNbqvV8qo+un52ePrTyht57Rrx6e81J7+azv+0039HsCI+9a4X6wxLYcKYUpWemkqpalq6xkRzpBdQNCQ8WYcIZkhMdqUUM2QDKjLVHqq4JTohy0VJs51qmlKfMGaGzbUCrdZomLTDOtfrJHSnNjhZeB56H1OlHAbepEqragsJW5E76HEUgQKjQTWJONOjTsMHbJc73wEseErFeXbUd5/7rIsTHEMkK1nrOGtII/GESZJ+PG8wQDpQui5M0T6/VZOGVAKGwqMewpecCgVj23o4RDaauu5mYoAUUkJEt7Fq0Vx2MfKtIBFhXb518pc0a3CyQ3PjzWr6pZU1q1V64Z5y9yNrPugMvxov80qbfbFiIiEspbE7WmvsxBvisdmKk2iPjaXgS7Rbyew7Pjh/mLCXYvnEBMwovDONvmffurFZza9CI2kzi0XWyVmLaPPeMabsiSPvVb1hESeSCO28WMG0Mu6N1i3dTIZydREp5P6IhyV2oiT26JEYC1F+0BOZWwHqZNIBAiESLoctkO4iLycaKGDmCA/OuZhjrdQsB7HHfWHU2gUhUEaVUMhMhkjqkJs+NPyIm2U7iMj81TgFAWdA6RraYNzfJySRatxe8BGI7N2zvf/qiwB7fmt7Vd1984dXHb9b7958/LMfX3jjhbK+88oX33nv3H/zU47fefOWtx28+fgjg9oOnpHzzt/zFw6l+/u989+fx1K7un15TO70Pme3xo9MHt3r1QM8Hsh4Yp+l4PN7Z+UWx+xZ70cN5fLFmGg2Jo7ZUQmcwhgxoc+lorPhZHiKTDQWIN2zGH2/QiLkNxGIoq+3VEncNxrUlGpsISlGveJVlDqWx6uU4FEnuu5lSC9UCYDYzwWO2vjngEAIiJTOJov4zLT3tEYcEIFtLS1+INwH4aNwRDYfJRvJDhhZ3qoKmBUiPe+hc4k1A83Iwp/BtkkUt4KfOZZJx+6jCCssKahkBHHvkJks22pA447LSVBopM28sTsIqSwGUSoFCPVLVVbWgPAnhNHyqmJlZj9UH4KX2dpilS+Lx+QI0c8uXIB7u4V7+XOMHHAbR4ZG6xKr2uTShrlEradlluTHDoZrneatsLQWul127WpcPsmUh62cBDEk4PlxNlG27688eMnbENNw5ft848/FFKxltAO/OwR63YDVVNwFVWYxDssiCMbTxe28rAOxKWOhzzlQkugsDwIXo0zax9VQ1FGDGbAJKq/zCFV5xwEaNs+V5DGODTBURanUhfCCLQ2/UDVach+CmpPAiInO2sVsJDcx+R04ccnytQ5PjZr9RokxlrLuJ1C0OZRq31aM/mB4ZExuypAgXgJrpD/Cby86p/SVdjx88EDvJFz53rPj6N199q159/+d+6uPvvC4P6udfm6fHHz+98WV1Pr/3hbsPfuwX/jGef/xTr33dV3zl7c35ybO7WfXhb/x/vGlFDN/3X/6+63K8ksOzd2/eKI++OOtteT6hJTNDbu7uiR6PhzNna8iZljIOqlsggJvoN8qn7xf2jHPrtclY6NbJqEXcgbvbwqYi47NdHB/IFwbeL261W+EVexQ0Q7GjCAjRw0FEoqp6OnoBDE7lrt2O4mnMZzANNhClOO2gW0x7+4EOhwXB2Y809LshBlj6hqrJcQ3YPs4wSHahiHe1ZKScRkGnDK7SS3STWWwcka5HLtoLtk3q7T4pRQQsFIpYRVXC3Q3eLFigVOG8055SRCYvwZGFslsAWOyLj+MpwlFw26EtsuR97c9sZE4BpmnKAqcZeA2Ggh6mszztITmIj90Z2zCoDYhS285eko8ayo87PEoDo3zqE/CToI6hKpIT2LIcJNHslLoPux8HtGJ4WGQiDgi9RP3xSLxEDBwZzzjC6tnVibow1nrknEm3C7UXsdlEhp9Wy1yNc+l7Lt/cyspEdXbSJBiw31y2Y+WrV9VJ80VoRpD8b615YMURF2LuAlV8FHUSl48YFnsXAdUDkoyXTmU0EvTsiwuGeg/iaGbL
nNHO4I4CEsEL8ZX/+5I86Za4lwUs49YyGgyg2DM2/9KvL392Vyn26JUvvf/Oa698/O5zTz7Jq9fk+PmbL0wvnr39mfnFUV587JXbV4/68a+Q22ef+8mfLr/w9K2v/mWf+Njrp9t6d3M6F8NBf+3v+p6rmfP59Omf+YP1xfxgnm8h892Jc5DRej0Z9Xmt9xeSffSmEZGpFJAym6gWF8GN9D67C3k04XbRIoUQx0gMMbFOYxsiImnlivvunCDm85ubV1NCI1wwQlRVVUvReZ6d77q2xD3/XV+XfzkWrPX4pgEOkiHxZtYOwmoJSwpZk3s5HzHvOxoLkyFEP+w60TKIwY93Lg324tyBQPh7kZEqq9XlJPtWxjf5wna/78ook43QTu4LpULaPaqZqx12rzaw6mqnfIQqFmYrxmzFi46RDBtM7kYPp1kDGW6sFVFNRdJdJ6VMgogx085i0QLo0UAui2kt0CiozD6qabaGHeU4MlT+VWKoYTAagmgaJIYEwZDNssDsgPEjUyylyObskVkyHMiqpDnbDejj+4in6VJkID2X8kEDCBKM+cEb12jgw4JpCMTb0oGMCDIAHgVgC269PYcr4jLXHQmuQaM90kaY2VmFgzFecaHK9AI+Akmn6crUrGwFZ9eQCYAM+ZSj5OFdmFLQTVxPJWJX7nFYrkYelb9NVTypwzccMHBhR0k+F3FwtjDUyzAaBpRDgUarUd81AcTLikGykIjHDQoJrIKDRimBoMu945GZVFPb69MVoFDMN0T6Ktok/8x/+sUnz37+0f17V8/0+MYrX/7avR+/kjeeXM36Yro1sVfwcXzsH3z27/+Rf//X/8WfP1m9j6v3yvm2vnhUHrxrX7pXHqrIu/Jc3vm5V15568mLD17R66c/+9mpPHj90ccp81v6xoOrt6dzMb05n87Pb+f5jOurqy+7/8se3dz97N//6eOjf/Lln/zY25986+4Z5tv52Tvvkjgepl/x8f/bbLP+w/+RXj2qj4/z6wefcLl3dXxh16d5vgqwCDBBKtgSQ1ubjeYmNbMyHThcI0hHOK/Iej9QEumh422tp2StdZpStY2QnrhNNn0vfBNlNyJCLE3KVZIsFAQ3rbWacZomTTaTWS4LJIkZSmD/7hpHBozxmG8/p1I+pI0QaOta8BMgaks1sg8UgCsxeVc6yUSpDorFuppAADCYevVYW9KoxZtbSAZErynh8qUuEbBJtNbCdam99DcyrYngIj1sPHeoswxYQUDgvQsJiOa94vuquqWffZ5FxdUkEup97zk5YAp6HHnEgFzW57Y4MchQ+eVHE8NHH08Mnnu+4gfO4G1AIA/vwwovd7XJhbjQX4TRsrpnLuiDD5MctZsRICsmsV7pZlarVTdOQxXNVonbQXgBspck5UHMX+r3EZWRNW5Sftr1PWOw1saEEyClJS47zxBM2JnJap7xSFIK7MFnV35afzNiHQBgYpjsvIp6S8BgssD2FkehkJASDo37VjN6LVdBSOsqUFmVqOyzEsvMxZQtaCKSuQeW3sGG4GvUdYxa5tknIWbGuIno0Jo0zNGM7vK+Cg6b9dpBHt5/7XQ+3BwePrg7/M2f+fHD1YPHL6Z3pko7Prx98v6777/57b/i8b/1u/+zf+nPfT3s6o1Pvvn/fPfzz18cDsePzQ/fnZ890AcvyvOv+eZvefDxx48r3vnBnzg/ffL6/Y89f/7Ejnf38NoHcnp8/7qy3Njpjftvnm5PH3zpybN5/sr7r3+svvr8/PRzP/nOj3/6J+7j/tc9+mWvnqazyGHicb6TIi8e/59O55v3/rsfe15OMeF/8AvPv/5j77zCV+4Cui5RZTkknq2WLBDNpoggfMhbnNk9F+M3CwqGOpa7WInXq2FjhGxDFOkrTTGIyk1sG52N31IsCAFSAKro3VybdwtQjxb2z0iD0FIMvlQoabQtjWtcEKu+umbbdYAmXzRWRNkQzzhJH8igc4Zj0PtnlzqAdAlnNzhlsEic+iEaB8Bg/bZMOhJAQHWza/p3gsgMS4valq08pb+2RMPDJKoRCYtJBFy6n6EAZlv0JeykicyIKjdRWWHvVEFK7ahiiKxMoItf7UOB68HGQymqCqvn8zlqm7dXS57eS7HQzr9H8ywjLm5fJFlhfnukeW1k4IKSuLaNeTMsTtEW+RZiyzJDdzQvNEW/q1MbHQjDkQv+qx19nQWuit5thJJU5drXl3TZAmZZdXeqeF54uWjEXUNgNYcNAxuQfvi+L7BN+7Lstr16gF6D5yhyrsG5uBoONJVCeuj/EPcLYAnn8fIo4hZj3L5X834opgs7BFcWMkmaRKTfumnSLj2qwE1XRmS4X5y5QfAapioIO4aRVLfdEKgWdTiCNKcNOc/tCIdxLTpgbGSxDFWX2sooxaMHfNqyDHl+5/Tsnj14PNvnHn/uwYvpy+99xQt5ejreHKa3H05XfPgzjw6vfd/3/vjv/R9/07v/xW//zP/r/3OYXv/G3/3tf/v0f/3EdO/0+pf97Hdc/Q//2L/29q/BFz7zxb/xfV/41b/rr37qV3zlp3/ghx89e/r29OAf377/sfLqO3efef89vHr98S/7+rfuv/LoGes7X9Qv8NmPffDk7Tu+cn3vqx98+ddy+hn7r37u2Tvf8OYvn57NL863n7PT09ubByhvXD/+5H96+/jP/0H86f8NgPPbj5X26u08R6EJ1HR2eLYkSM9v9WaCAogqROrp3ADYcZNEWVDVAZfGaGHBYEHxH6Di0ewqUlqk4VLyxhLPI0g2JbaUzBa8MxG+QGpBa5WCWus0TYJSazWaqooUy9ZWoyjQVtEOfvteRMxaeCy87LD/Wef1CRpJd0SAqgEN4fvlRM9flBEw+VITxzcdslwXgr6Zdg9vXJVs2zLQQ3eWEYDAqApofWnKfEvYlVGqBkgW9wFbmyzcqmg1Y6ABeF+0YNKjWt8BbjTXj41hJ1TRAlZWs0qKUEWKZbJTyvIuj7XINYhkbIdX9i6lWp3neapF4JkknSUIs8ssGhXLeheH7C+RAkLknJ3RCVMkdRQFYDNdjNXkBO4sLDCvwOXnyquuU7IhnE8E0hLvetH5pdLQSq+lQJmIruEMcWebW2xqRLg3poUmOLe95BAwrN6v11OrMzEmzF7h/PcMTAL0zidVJ99uJDdq1bWAkNwbzpmZQlVcafQelZncrZ7Dhhy8yYfSq0FFhT8REcvgoG6VCvGi+otlifGavUvDFJNNvoiMx04vaWMP3QuQVENE5mUrmMgCymA3n3y8BRCRaaYJqYkhjMIalW1uPldGy7pp9oo2AnigiAkI1akwHRnxRgGAO55WSAJAKXM1VS0Sx9I3V4vOdnLQeDsXkFOsOgxUVZBxjtDKSUpNqKBvl+OAN0/QLj6AuNJGXAAIE+VUa60Uqqq4waOaQkTNEWZOXdmDNmbnQ0btZNdMxRMpqArPeyQOoi2S7rpOkNMH13zjdO9WzeYPrq/KfH3/mk/v5vfN7hXK65989Df+3z/15t/9O++f7754/vynv/XBV735Xde//Zve+B1f/Yqev+eL73/
hv3znnukn337rr3zPd/y+3/kbvuJn/+Ff/pP/5nf+la+7h+svlg9effjGa5/4+OO333j/dP7si6dTmd5+/fHbb7z23pOnn33n3feePHudV/fK8WsPX/uT0z/+6+/+AO1wwFWVw5e//cb9x6/Ob7xyJ/zgT33PV/9pAHjyxr16e6qzlXuHdJPHgfXSMaUURiATQ3J1I0G25ebIAATFvHqlEwNUJfzvPMegRPyEKqDUk9fRh0Vwa/X+Lx0/MbpFgKh7yTBHmFAFRWWGSGAoFYBKEZFoxWgmSkWYWGut0+FA0jhTqFG1sYqom1Caq45sDL2xHp9R6BtRViBtMCoK0qqpC4DUSJevRpiK1Km49OndEUSEEENF1gMQGqCFJV2CswAifjyN8J4NNJQmTw6FG+HkkPA6VMVdwActrfBWBg9DvOUwKVLAyVmfAKpVRMIFSzUzgagUkmI2iQTvTCmaALTMdhdUxxOOoRAppXAq81zBWoQQi5hfLcXyIHtKb9fPi7ldUKhhl7cKnFUFBcaiBM5iZjwKJshpkOpKk7AnOd7cvTgIHzy4pyLPX7yYrR6OVxmEtTXRDBWmnHAwt3p1Z4oMi7ghJcwiMcMJ0eKRPCqu+JQk1n7XqP42cRIXLm7m078fdcSWmbA3lM+liZYLjUq9pgjS29zjELDU70PqzAEbNCLpe6zYNbKNZG9AT6ZrdESG+0MoEe0zGEIwoq/wWhHHXNdWOLJtWf++fW5yElIq9F9rpvZ4wtIlg0Fb/rjAJZxXYN/ZwcXjntAXTDxojoCXUGL7fTxcfAGGlIxGSWt/nBSc0QSOsaUq0mj3EvvkKid1AJsKnXTkwBQhJIMwVgOmyDMqXyYtgmSpc4TRLL4nqSqqOk0TxOr5BC+SpOJJisZaa330J37v26+/Jo8ff/atR+c38blntz/0sz893Z6eH+8/evX1QvnCe0++9su+5nP/4At/+n/yv/6Xpk88+e6P81d+4xv/ym8sf+Avv7D6c8+fCfV6urZ5vsEZPL/24MHHvu7RB+8/f/KZL7x48l69ffbxqzf0fP0L8gvyQL79W75Vj9MH9fT85s7I5vioN3daDnp9qJ4P0eoOetXszZ43vecSLq1ubsCNk7W5hyZuLVwOxSZQStr6ErYXaqGnmuqcRtJEiRHt4/iHnjZc3YBZssCMJ8XJ4C4ZJtmPcMn+ds6o2rvULToLWio5xYVpGyrCEgatVn7LNaW9rBY/ljTNSjYEVLCjL613J3Ot/a0LQElYKMYCcPtwHqbeCKk4VxltfkKCtVYX1ETEPKog4RN1fzz0opOI7uHfJSySOoOTJbfIbCcpInM9H4sWVQvF2euMYtJNvlDsmXq8yDBK12l2ZqMY5opwRNmA4mtf74DKSFSlcSou2nS1Cbre9Zdc/c7lDFerW/Gkdo055nH/7luQzbDjVa6eQUS8tp4/X5JnY8h/XfHILVdebcfLhI+do7hecnt8xeyXUnxbV6Qh+Z0ummrU6FwUs0zbL6K2ef7U24cNqxxfNOaWIJG4EbI94AysXfqKvKb0kqBgJWCN0M72c0mrLDj76FzYvHrnar7k2Hruk/JLCLa6pwlYbdI+vcaQlOO5Ewps2PXWEznsK6qSLTTi+7H2p7dwaUSDbmYVsJRy+Nxv+oor6t2Tp8++9HPyVM7nk5ztwYPH9+bj6dagOOB4Xe7/+Kd/4nf9z/7E1/Phx/97v+XuMX786ed/4W/9C8+f3L12/dpbj15/4xv+9DQpqhWR23r3vOLeG/dfee0rT+8/f/YzX+KLd1/Tezf2SF95fHx9+smf/7zg6v7hCEhJev2gXN+g3lU76oL4IIncS4CJ5WGJb0yIsHn0sM0mH0e4S09jEm5dPx3rVjNotGLn9FmT2fylWbyMgeci8IYcvrPDOV0UGOjc17/x+McMB2zHKPVhOdemykEkCg366Hn2mxfX4cMksE0DdjjAqxFIs2Z5qYBo8zcAgQmj7L0oXV/b2aO85u0N8LVCRQgPcGqpXNRh5hjNz4NAs3ldWpWj36gTBLejiQBWRCopjKrO7FZoGyWhHX5Hn0EBmb2WKd1bD/R+wwqA59vD8ViKzPNcaRCFFAomaXJVs+UOoPP9ljB7AkCkwa9gJihLoPsKJKsj1V79K/DLncY99C4pbXGzU5q4E41FLyor+z7C1bVLUsfNW6UnNXblX2pWxnAlhiTdkIXEzvQyX0rB3HLf9qL2oc2nHUcdVdUlR2GqxQnDHQkDQIvSXEoei7ev/lz+FGG3mozKCy0l4Q9roLBnCISFfMkaR2E8phV8Zf3eNXAo2ssjEIAYTQxWxA/7IILIRs7r+74yA2hSnwvXKFKMX45F3JOa79uQsOQZrbgPACHMzAY93kmtkpZELIqQb5mQDxBaY690Jq3ye+2rUm/XJvSqDGRVPSRhUdJUtBQBrp++++T95y9eORxfOxye3z6/Pl7df3B/Pp0NFTSp5dG9B08/eP7a1/3yX/47vv32A/veL/3C0/efEXj1wWuvPsKE8uzZs2c/9Efu5rvXcfz4d/y5uUBV756/uAPK69f3X/1keWF3//hz99+//uCzn/+Rv3Pz1d/yTfUWH9zeAPzR7/t9vx3/SwC11kOZdBKraUIcLl0qwTIoDzsKit8zkOaeWtljQaRZc0gnPAcAkSEfxSgVI3ncU+kCExaJs1G/onNfHz8cLDFOBdmckPlUIwXidYaDB8fPbNa4fnAWQqTT8E5CvHxn6uL5VLgnxZ3kskDfFlHfgmS7puEUaaPS9L1iB9dLrp7b02E1/uTJru4gYBPTl6s2eK1cAURGBYGkeElj0C3t3oeYZmLZ2lfcQmDqUocU0ry8mLoriDMGoK3nD1OUPNQKVKVb0F1tKS1Ug+GGd3sHKq1WeicKVpk2cLkouccuqrhc0ihiBZltdi4BGm7tagXv0q2WTAWISrYXt+3Sjl6k4EvcXAiZH2H8xi9dtsBAiIMnDS3PWvFtYDBR582rDxiAzE1/ggsw7IwZWQdYmiopwBCsNHJ0YLDxDMKZiIygHgPo3Bk7TiPSZ/uYF2TbZUD7OI1xgc2qjiQ0jSzuXiVF9aZYs5Os1YHsirVs9JVm5ZMB/rIbVd55/c6lAwz7igaauD07IwsPyijwNohoiWoCGM3okRN5RQVKAMWJxvB28ZreDD4tOXOT0U9k2WqvuBgsIrRuN2tC/+vnwqsHJvWEWe4dZ8iz0y1M7h5gmjGp3N68EOJ0O/+9H/ri6SCfqg/UrvT6WG/qzYvz/Yf3XuDF9eH40PS52s/88B/mXL/62/59mQ4mvHtxa0Xl3vH6277s3pPbh5999M5n3vvJv/b3P/WNn7p+/PjnfvB/8OAq0HQWSj3jLHpcYGCKa4v97VCVDB2Sfptv91EP7futjqGMuhgrPMg7WyOhyIxxGLedxtLigs0IW9booSkNP9tnGw5PytnO/ksPuLn8lkavpjCJp70tK117fSl0/PQDmCXUDc4fPNSBZgtt6sOIErNt2q4kNK5reLyPnJu6UrHcXOU7qs2Ft1ISEpkX3JdkKWLZJIqkiGf6qN
ETwNxd3w9CNa/S3NL9SdKs6ibNDAFfNHSorEIC1cvkBGBFEFUSCMjxMBncz64ePwgTEZlCCRgTuo0NGA1TW4SONL1+UKoImix1heRMbWOQQdjSFaZcgfMTgZtlkGo3Wkj2ZUR/ybXgW61z4sgMNhR8e7BVhUsaGu3WVpBh8LfWEAItnjYgWduYuRam2rZjGOfGsxUll5sEulzjzpLzxW250ttHxkPjaYzbGyZr2tiXxSndiFldyLXlebO1FWgF5H4+/cGWs4Hwf4+sNJ+iC6u+cTUrq+6+YveKMTkW5M4CWz1GYU1uGtFfpSfJEhPaFFa+oj7/ZIYiGSYZe2oi6nQrSjEp0uJHZefpqRktpEAThQvcLmEtRu4W7WSxkcEkSco9yNZ/qrXWOr84lEMRnMAzr6ZpLrij1YNeV6s3J1zBrvTe8YpPX3zV6x9/dnrxJTlc6/V0Ol9LOV7p6e7uoHK+uy3T1anWq9t6fcLP/Bd/4MW9wyR676aejvNxvvvM6fCVH3/9tfuPv/ob757/lZ/6s//6v/cb/ve/5zQ9O19HHvDhUFgNCL1vlU9/aXMbtIuXhFweimiGESS/G2P6CHT7PkiGT8V/Gu33G+PQy69uyuWQa5tVcfweAszMtbIsJOIBzM45nDw2w7QbC7esbniWnj7U7HauAQLIpCYfyPJ4YSA1C1GkI1I7F7Y+KT49g40K3zKM+tKlw5QizWlzA511AkJacK0xxiKpaRPC8jLARGlRQEQjGLuirt9BUJQI015Ud44/LlEW9QqWiKANE+ebNY5Vh6GJSClapqmez2YGnbQUMw+C06kT+sxQbmBdmfLZ/F6ank5fukLG3kO5wx7VLPms/9BYIEkFqgxmOsnKkklB4sh5rGP5KEg/UMQlRjI/6PLE9vs3jDnOdht8WFr7dUyOClvNIIkD3TBbhpZk44QFC4m4TWDLitaLXSpp4+ldLGF5ULvoPZZeW6CZrIiaMy5NeS/4QbLaltbFVMXS07bnRnW7ErkOSduQtC5JgMNWBrI0LN2KULseADJUyZitZb/swdv6Ea9eGWaQ3Q3U5YvlApr1cVRVldnLzJdWSglnTQNO4tLBhsVRxTPaJcJqNGPj3aTZpKg/9rvf/qda3T/j6zcD+MP/h/Gb+XSGgpNojcDWMWOwcZTV1UlIQzAAaCm5TZOKE9EaV/QR/H8IjvQ5qIYC8LKUTN1jfDf2NlpaLLybLjKr0GObkQSq3bK0IJqIRqpnRl2NaY1bu017tpWZK0rHMg5mtuGohSk8Z+XkwouCjEmGo7zp0nhPnepiBgCc8+shA+fCJPM2NJN79MMFwMIenQZQRcpIeIbhTESy+k224AsaEYV9pNHt3CHVAs7Ve5q4G4fQSMjweo6RDSEQ0WnGHh0bIUMRKWQV8ZykKTu5kSTEtJTDUetcSZhX94nmkqawSQb9spFe6apnanjS3rsmeStbaGyWhLLmWncw1FT4lZiNCSpExtBgV2wj7XvzN9dWJmVzG2yI9e7jtdqqqaJ/bzAZUrToCqFgcpxOzaNJYaOPqgNExOPf2t7k0qHZlmu1ipjJJiBrKBLb9WPLA7wW7YEWe9kOm795bCYxMuAWgLna6EJzYdP7qlhrwUC9BNhxhgO2iIhwWc6irbS9bqEKeIKBYHDzaKtPwDGab4kHo8SjrXpDIgZfqsoMYmH/H5Eo2NJ7HA0RvLuDtM/a2SLglpKijTqg5W7uuaWZliQngsasKGIqheK9WqMXG0R7wtj/311H1Vuxk8zXHoiLBWlySrP7IKv3npOWvuEI0z0wgrV/KC9JqwOBIQK5kzz/t6FTE+K3aqhkyBIaNZMWCF0AENVb56RYGfFftbKFQY3m2eCT3esbGrAsCFoPwgJ6kQZmGqGYd4hferLj5rqKO5N8cfCFZSxC/hggWSooltx3LfGj3d+/H/uaNPXXgANCmXamq4AICi8FbamIZe/nIb/DyyGrSlrjxU9WmZQmIyKQ5KJEngPN+jZcCMnsUpGIQNSlgXIgTwya7q1gpZRye3tXXWGrVb2PsQisTixJQC1iOzlsf5705ruK7DkkCaugVRp5GNhv1NKqRAtKSkbu/1YBxZSiBnHwUwUojHRM75QKgJpxCxcqEGnqIgsfIVHdyhkGZAIoIgU4++oQCcSWmr0WhmM00RYifmiIBZFNQuA7zYaxJUlGP5n95NIIFC0yQZo/0jXROV0RqEncJXpXAQLNMmnZbCANEeTk2KaKLC25SoogPbiATTQEovtsAXuH5mRjqlpR2xnzWUYj7nKgRecTku7cUNVZolaAT6ZmAMcUwQGdbLnmMTcpeRAtpcmtgyrv35hQSGW2blcR9RPX92MMfLMCkvRy+ioePUbSDyRdlhJ1JDaiBB8Ph1CiKwspqhV0wE5aUI3nedbM/AVKCOQi6Utu6lq4Wox1KmJktcq0savQu3ZXikiBejWxmmXdyFbjVyQBC9Hb8y0Fh8MkoN2dC3F1OH5wKo8fX988/9JJZzkeUMv9csWb07/zn3zWMm2LWcOS5BQuv5anBLEJACS6fjEv/7N4BTpyphnocSckSyb0M6JQHCDFauRVAzL3ol1Fu2a5iFQSYXVUzNQoMVzNQQeyuznoecACxMQgxhoxOCIiRYt4LzuGj1cAUU7GUyORlfCg32mCnVUnEvPZAEwHhZI2R/l930EJUabmwI11qWq2qAupCAYR0BvmCKs1JqcEjUKhiFyxMKoNjgKxtKYCpEW7c4iIGiuWQkeWVnP9z2XPEnQ7RFtHJwCUEtlmsCi2ibBOASwKUTl31whnECpaRE4eTc0oKGNCxyKJDlRe6JdBbkl4UWM/uREe47NtLjlTMXr7eYUNJggRUZ2AiaSoxYDQ4tvMSt62NsQEIlIRSqKwWrKIpnIpMJVipFV3wypgLBCRud6KiFI9zE4ngZqZCRmdMVAMlRSBiUTnaT+SkqUYSYpcoUBhJhWAylTB6qFcFBSpUm22B1fXVzrdPnleS0EmkZIkOFMB74a0ubaS3eKnAXCSDGl0Vl8SGfYHHOWl4RVBuAFIt8iN18vfMmV/R3HBgnQHm5l5goP75pxCsFmIhsF3h20vzX6i6DqSXPxARnANh1MXnKZ3f47LKZw/O4bOMk0yHVAxFJC+GRnujNEuLESXhU06m9yDw/bXJnF32wBHYITVFytIJnNdjd9WMe5pDqgI/dgPl5BuSxlF1uGjs7nB0ZWlYtaKOBKBxykxDTAFUs00TRSYK8mDFuuVRriaqv8dUksCwxFPNYJdKNnRBPvXdgv8m7Odj8djVZxrpaDcv5oFt3N9/TB96cUTXl3rifeflunq+EXcvPugfuruoPl2pFQ6ntz+IjG06joZU9riHGgLHZS5qKIqZM2jyYh+t3Zzk9Wc+a00+3w7vSauDs1vF8efvcBSvH1pJfIk/WCAmzBpcBPh2MXXmKeW0dwiQPS3TjUjV6e66kAaR2McfHNFGWFChc2Ks7y/qcLj922olGCGCEf3hX5of9w2T/+gLd9uaWFqmNAkYKD3YlovJxGjjZCCglgUR/GyMSv7mSldb
ReRKRX9LPSR9C/pKgYUyL2iLJugDJ8VmjFomh5+tG5mWuL8Qmk0sSLJtaRhdMIh3m4tSI+kuKUztTUMG9eeTaKhFJhZKaXaWTjfOxwL5Hw+o+jw1GIV07gHbbhGXMZtaLRpZJm+CMXCutNEaWApvA3XMtYTzd/dD8NAyl/CDtcI3RCuUUAA3WaICIFbGY5WXGQ5T3ZW149csoc1fBaTGf6U7u3r3dd8g9vzGDnocgTS01T6G0fqL5svtzNcw1BEtPmpAO82g51Llid2NdqWh40bsYHDMiR42L5GW1csn40jDKbmtSgw4h4WqJu1L2rjB4sHB2q3khOKFNTqtl8Cc61KaFFu9qWt8V/57W/twe+f9fXv/IWf/9iD157p8yYQAN1rntfAzohsv5q/DZ+rUEBPCREEsXO1Mz1m6XM1/6HvY2G218GSLyLZgEHd8zZ4KxzJ97GuIYmbEyGmLc6o3UUZKm5jT1JvJK7JkeNPy9O0MAnEWoYyt6I68ps+VWrLTBI3ObBTxe2UhmuVk7OAA9Psh9Xx4VBraE8a8EhZJKQkjQTsRZg7zK3WppGsPCzpTl74xQBkHGHa2/tiZgCL3FaoAQUl0CYaiAXHdX4iksKTGFD2K+MjHx6mUURACHEHZoYVkMEfGiG0ypCJwqTiRTmMLWBSHShitFKwaaUs4q4iIvIPkhhSq9nhWFANhutyqLWeTiedDo2bDftLOAPebNWHy1aWEcWB6JvaSB2tN6bjdsMYjbYKcextZaMszVpwa1e3wg/sngLOVdKC1D12qfk1GJQm1+xNcpf3v+TwXJIV/NWNkY/Wp2Y5l0HRL5CV4OIvHQNSVu9tNSDHbwFgTXkx3jaONvK/1W1ALyTS3pjr3YVEqoNLyXo8iSP5QBokxpvbK4DFxPaI6fCr9QOXDTbo+Y5tkKXssj9/g4dhZLsqwJMHVhDoFPwjm3z+f33Zo+vb83lWz3jIo5ExaM3Z1RLf/ViYYuBJXcVvNq5VCTxP4GmRIsEY2DMJ/RsPS8YClwYEU41SR4MY56EYGdbXbSxd/O0zERFR0khRy0KG4kKb75l7I9oeiYST6JJY2VStYCt7ONkubmTWgcUWwDLm2AqicnVX6NudxDCf9enrDJWBkCOL3SLeJep9kWSJw7jzEcLtmUnX88oaefv0vOUm5CNJYq0i/M9xQ1Jjl98I6jIByWuAlwwJUlGKTJX7PuCBSgSqC0TFxxcjhRVQz8ZTOZzt3IpTOKqYl2w8iJh4jnDW4hURrVDKaNACIJm9thMlTRUYJy2TqhrPtdqifggiMzDnPK32Jjn8RR7cMKBiH4lHuOACY2tDAVHkzCQKUjbToit8HNRuALI2buwY814qXaJsllYuqHcvGaRBabxjhQqrEZq4PUqp0nt2L3hMOwjx7CD7e5qWH5jdaY/MFVgLQHHS0so08tQV3PYYIVaPjDev9Mi2yAUE9gjELpxHnr07vUuDqFM51zbCRWBNCukbt9RptteZpkUJ1FpJ6lQIzByiYTZ2o0tD/eKuP/vXv4DU1QDIYZrnKsYDJiFos6ocJ/1SvVVeg+Wk/J/+ztcBHM7z4cXp5kFfsuQMU5gJUEnPXeiBM6OEQXrFgAhbaAuPkH7nKKGTeey1zGkKDsmc3Qibwy6W6cbVNSYv5nnx8j2um3ulBUJz8U37DHh5DRmlYoSIsn+VUlrcQSv7OvbWxJIgtBdmwZ6FGXnvco41vt+joKUH5SQPfskoF39dLiwONdCkk3E5bbGXTqu07MEmiF94lQBRM6pxZipVuoUs2ww7uMxMNXq+B8BferLGnXWfi4sAooU979SicROQwV+jyScDHD2gK1N4CwpTSRypev7p3LeFj2nOweaZ9w7TQeXu7q7Spmmqa7mttvt3+k2+nNLFENolryyDt9iqj8KAgajqQrBF24+X68ECiNEuFKSkSgu8au+jihc097WUbPEgTkoEku70MOakrWzBPgFcbix/CdFrRCQCS0QcGDBai1kR0aHoZmO0FLAuDgCTkmpg/I70jY5wO+/FuCkiXFYW61vfgmXyWAYptkVv8Fb3o4cLLl/qcizzz/6WDxNuxpvFbU8xiizjBfZHEFqQ1m6eU6jAPhJ6Nw7dBJ3whxUFUOf5sKws1j78186AxykBkNN80DIXPrOKwqmgyPz8Mz85Qb7s7a9/9Xzz9Mln/f4PTl98/MqXTfUWCPLqtRhJztl2uiWBuAmGMM2SHoBzJYZZ0GmTLCZDyUBLLOJjmThTQc0sB+bVFtTBRaqG6tormAra7RyJjMN5tPSEvoIKtErCafaQmFwUUt25SK6wSDLLgOxh+VxeO7uzpM4OcHpwojF8GCKkJ0HsKNNroRnottXMq6CzPZ+PABtLdSMRywVeWvjF75Nc9OyVeEXuyPgSGaJA0rmbzb/6krTxeiDPplTZUyFSkDdQVZURE3oxq9iLbdYmLxK0qDjrC4moXpIGUrR49FxxbHUnr4hEZQ7v6huzNPGgQ0aLumZpy36GPmcC3vAj1I5qM/RQBSer0HKcJpsNMpNjXa3g2dOWHi2Y0JY9p5uykXUPo/hFkx+i01Wlx5+l0Ne2PCewFGO7ZanPPL9vniEfpNkQEKdyRBsSEN1Pp2m2Y+cnfQ4p9q7ggz29aoWyo/w4tuELaDiH21CN1el/uSC8K5mOl+b2jfU9HIDbTW8osS8DDTy+TVN1bD2QQ7WkHVn8AGydlPsrWk0sfxu+VxMp0k0LjevvPC7L79sWO2NANZh54wJlul32IEOyhU+3Fa1mv/j+0m2X18tq1Th/4Sc0MMNIHmj3b+7sCz/97MmL8hWP/M7rNx49f3q+bq/yHRGgCURL2g2hgOPqRkJsZqpCog7eHA4+oyEmcxgyyFsD1JYHhojkQlZPhUaYZI0dRTvEligdBDfr/g0jr/e6cU/Nep1YUpLVqjFsToNDe8VLenj0y0sZa08/q07Bh4IbCylxgQsvG9/nV9OQI8P3423royqWS04oxxqbF7KlC4vZTsgb0oK1elFwAa8VFmlIHRcoOmy99YIi4v2Dg/ADoZtq9LiPUuAqxfMXlkdlOIYkIje5cXcIMUkJy/agblKlUFqgpgQWiMp0tnNbDAlPNxGRCbQIUXB/TC7GKzSB+RZ4vGapcjgeK3lXz7VIEfUE5Uh1kSGqEAZnwONudcJ9gTq0jYm8TBHXjOa6KDDS82Eu+IAXY8rCHdj4KPK8kVzVat5eI2+mR800EcF5sDtWc5/aC0TGg7CYlWdHbL/fKj1BOyRL+I7zFwlweZuuzPj0tOCjlhRx0xooQLRiBLD2vb1EFsGwfQ10l66B0Kwff/n94yvMrIliEXKSoqin4C0m2UjG3gEeYTuKgBdmAtk4IxI+vv/dnuLJGsZ5+y40wXnzioPoXGfOdZqmAmUlwINqHdp1jKN9SFTqy3/du1bYdX73p6oZRCbRAsqZxeok8uDVw+HtRz/y5lec9LXfAgB49dn9D+Ss1HCjJpWjiohiboneS9+bRWvFeCRQOgxFjej6DUaWfcE9mlushHghhlp74xYos+2kUxVVjcRT
Tw5MuGVzUaeCIuJOaSatcLKUMcOdebaCZjnDxtBNIKYaQZDBUsPj2Bmwy+UyXCOFlK1c0ddmgkgBKl4sgkqzlizC1PHz9Oi280+QoCQvC464qy1sLFvtRdpAGeQuCf6AaaoK0kUNxU7xQfamKYuXigiiBnIUVuyPmLN6Zj5xrrEYwOjgQxPx+htCzqQnZXkVEa8IFu0Rdy6jm/aZtnSftaJY0liKAQqdslu3kLV0hAxqnDQsVurCnTJkAS+A2EMcAIoJGi/3/aFKubo63pxPt+fTdDyAcj5VldYdai3wTQCLQNyx7LgloOqoqUjfLHg6o6KbFxN1+0ZOLix7TRYIgaoxOyWKv6vWLEaZgwuqpiYxBLeRpEpLkAB6Q1yS8AxCSDvh3gSiqCqXFEZogpvywbFcT7Vc1cP55izKw6Or9++eH44HNWqNAuyAKuQohxPPqqouyPgWG804lUP1wgkeLGA0oxrLFBgu3sJPYOYhuEUjKhJApUQ/jCpKt3OmUOy2uwmlgmzt6aIYLQ+YamZKdE5gZOm2vqAOzUqMfs5FBGY0q/m439Ybojk4XUQhABRKgVYhaIXIYrMwQdFiNgMk6zw0IHNnjAKaMTh+ZGfwStQNmH7yLUtbiNtkQspGdhbh1BiedKxVY6tyiqQmsdAev9ZAQRgmLd4MZdGe2fPOayVZRFvpNwK1Vj1MepiQugsJ5mQGWSREiherOM2m4y5pVf9pvGf8MhdVVbUe6js/IjAFDeVwuCogb27K6fbhvcPx8fTtv/rjf/MLb33+/RevEAe99QefHlCsuKAASubbUEirtehEX0yPbxVSoxCmo2ATSQXBESPID96PWYqWGSbeLadtuihxNhGY0Cu8GSEVbudxjB0dGQTqVFWkTCJOIs5iKDCBVkjqmmXI1TkIon25x2b7/oNV8kSkzFogYKkTYebzn6bJD8tEzAdhFQOFSvdxi1KL1lNQmbQPEZCiaGWFw4qjAjeNGKQ7VnyZCq+pHjxsdnOMQjy4iKhIy98ECEyB87lh7YBE3sfWVynSakqQmhrt6CEehJD8MhNEjZHfz6gCVxLZIo/fAwuEcR51OpoZjSL07BrSqKI2hdxBAkaBMTAtZeOFNj8FdvfSeMF47LBYrMMVMIVI5By6r9Rvd19DCHPoDSqowqy90LQpEZHoj65Qr6FotBMBUSmeCeUiEcxQZ+LgqdSMjmFUGKSK582n+34Ixi6CWgmZSjmgWp1PKvV4PByuprvTHardm+4DQpvlwCI4KaxSa3H7nAgp5xo5+E5BO9AWouJWxnuJarL9MxQjAaPCLfZUjo90tcfGx52CM+fZXFaSgjGWOP3q+SHPmEVsErsSsypi01Ree8JTwVxkniIl8QTMcr5fF2qfiFCFkJlGwjK6RERURRREZap644PN+KrNyAGQYmLMKjxAt8bP9Oi5hRokItHFpOFZapbVbHHbBmjjU7tQHX8a77EsBsJUqf2RJgS0p9uAIkPep/QBNeffrlY9TkWXgnN+miIKnguz7SI4lit9a9DU+5KjnlnmE/e5yRYgMkRpjb/KYNnmEo/fmHUcok91xV8xcOWVNhM0NDWPz/6jM2Gi0ALVaT6V58+mu5t796fpE4+fEv/4lTd+9IevXr1+zlOdtMhQQ7HWukh1G7Z1u91c7+MCFABW/UAjbEp7ldNgS9HtMbSRHDMtTnsuBpJzsDN2thcC4PDGhdK82OU+SWn/LC9z/S9NpOgSRrxEspUFoYiWjuMMXSLreBWBfvvRDMOfunvQLCIZQngIxK7cxV5c2JdLFBhYl+Lp918guSvyPmYqj3MQkYFnY6jO7kU6calv78LlNHCE89YVFfqJv7RL1xJVPfcXMB7DrpC0YyuW4kCR3pOqvzYfyF7sbDE2IJQtZjtf3oDb6C3q7FK+lqKqp/M8Ww3DqZFeRKgowALXdvphFJGJOaORgUWu0/IKzWYXDJuzjYEBu9iycmp+dMSSwSw5XnE0vOVDi6VkdjJGLMedi04SRHDg4XZm1VkOwCR2NpzrPZRZcVaci0BkMmitBXYAVaZQ4Swa84kIVJANQ8iGoFkTZyDiOwARimiUlUGUMB3v6wjARlw6G6BGIsU4rIFWF8oZEjW3IH05e969xpr4LgW6HmnkNJgWdwWrlwy7mUmHXpZappRO4tFhuD/oilkiN2IpWCwMqu371WS2a/Ebyl7XKRF5duDq7kurvngt3zhLKYor1qmeynk+2J1dHfXNt974td/yd3/0Z26e8W2+IVdynm+P03XBoREED/aZSkHUbPDZIhI/RvHaZYOXzjZio3Jzw5vF6A/fXMJoMai+OQt5w1Wa5l8PzPd/ZYqphKXEQFYKipQO5DwEL0EwWSb3two2YlRVcUc3I0ZSVemFGjLuOeo3sKLImH+RIGXJGJHshqJpAt+niAt0Gsq7tmIO4ykFuq0bG6YOoB0HuOyZKHwJS3fFgnYWFtLgSPad0obEWCUUABGi+W496DLUbRmCnJsoNrDwSzARkfMFjj0Rm/gyiqhFN4t1kGlnSbEv8H8JbNor6RiuDCDc1UHBdLgtqC8kOEiegg6xhKR58EQpUkqh4DSfKygo7sxuj6hFmkCnLoIJZUKblHZRPXC66QnSy1xtPbqXUGEpxC3kjrGrzEWcW79iH9X8396wLOXXnSARQIknU9xs9QzYJBPOoixPHolWlEqd6as8iArdNe+2IJFMGkdGtft/iOC7nv+/5QQaFYEMKYiIR2xuNIPxTMedQAufG6nACOUV922XN6Jv3KtvyjC38RrzfUee53WGVw+qyJAXuHiqpCReF/ZhbAU7H3MV2NIVaI6r7KSk9PryC13BcWKcvIiYoPRq70lxXRfO6Nm2zBWnxxJF2zf9vUqAU3l9sSTuuNB2rpVmPFyvnU9ilTROkIOcr15585u/4R995r0f+89/7ssefvnDB8ebu2dyX6TeP8rEippBGGYspSS8A90y9XBnUrJ0zK8YT3sg6g1J8OCIFvZpJ4BEZCjIGOvz1uXdWtz+9Y/pMxZ4VhSTVfdAyX4cNjvepl3N80QFWPQmH4lL0MAw3jDisZjM2QWApgG1VziFXObBgxnSsaNwLq7lDnd0Gq8togy8Cugdiizjlbi6rV3hEbuoKG2nlxl7wYZSeBKv0hsQ1g5IS4NEr9LVqbOuGWRvxwS0CkhcVuZYrmINUJKIEEBgRc87I1wQn/hzx+iiw/MmUppVWUTAMZCjAWIhUnRV2AUmVoIqMk2Tkee5UkWztnYRMfXCs87K3ZfihMu3khmE1cTYZL2jgfHDcGx9bbmpcodzY4Fna0PipTv9BeP3zPQbyVj8ds/QKzBY8l05X0+8miGzmRYr5c5Is7PwfpUHJ4J6c9AXB4hwguhpgev9xdocIf0ycFWSc/yRIWgnrphgYzLiEKm00l+5NJ6sGI+qrr6PX5ejN83jJddFico5pfQ/Xz7Iy4Wq1QxTnolLsy2uwVYrZV4YzlvHH3Ry4F9G9ZJsldj59Obapex9pZmdMkzdSFolgKsHC+cbVnfmC+LX8YaGrgPVB1ClnhXXr3+
qPrwnD68/+/n68z/+/OG9x1/5yYdn1PdPz5XA0zOvr2baqrnFNE3nubZVNCK1t+gOk93vV884pfaCGz58UqmYvzi1WSya7MFZAxMVACE3iIiHPjQ07ulAw75fmieyds1SQQq1gowYb9FEiWjxJWqxwgKpfjQhkm4XNI3/0luxFpvQsCgF4+FGp8vdEJrfiw6uDay2Y82o2k8XUxI2E3TJcscTsRBoFlaiNoedqMnkH0mRFMiS4B96tek1qveSPR3vv5SlMi6Tl2C4vA2pJSGVBuaf2wFHhUeGzaYWNz6rYJq0lGJzPZtNhyubq/NmUU25LXpKWK7IQgThJENHtsjYSVV5Bandde5eo/LREGhdpnUDo+3Ibe9lqQGvOISKF3x24Tf6OAPr1/mhumfzNfVKy2/5VVcvX8V/g9d//PdOrQQeoIugs0SgBi7bZPoulL/oEzLsyHBtGfzqpy6ZbXzAQLNtNh7aj+vuaABA712I1a9JBJIKYPMhJ9+U1BVjXgzYdK0x1nEkiS9F4RFQ42KXYzcIzyRPz77oX7UhXvaC7Q2b+09vf+N0eIUyob64e/fp43tXh4evFMjpxYu7+TxdT3wg53s4nGi1qsiiUrGhkU0ZyIZ/uSvbXSJYY1PkjnKCKQodBKdpbFg9AclSMHKr0C5XEAAowa7WXDYrMa2BM5LFNnnsSFQK1zIEbX6ZmOKuXxm0OjURgbUyFyHGDWhWh6IQTeQl2SNUV9i40UFygeuJ+pZwiOFY37GgjbZ4cMPLV5B4OXtrV6sv0wQuEt7i0v+IQ5NSLNOdAXg7p8UGjeNo93MsptobqCwBwsg7WtxPVs1odothY7bc8IW8bEdzzK6hAngJVGShFADZ69KfsiYjjmVhWnKFVS+kijI1qVHBiKkSVqJ1rpMz7SDF44UqjJJBEtTJ9Wv2UyQKFMi8Z/J138+WU26PNJI81aF0WbMS4wKBGkX1S2x+efqAiIvrzoyx3l4sqrEIYILAMH80vPxv6loJHOMv/quE9oXe8WbJepdDbQePayszig3l74cTXjP7XFIxEt9Kq/nS5eAbtdX/XNcETu3Hcj5r6SqiZ4Z5NoF0byHKHt3rj0lGAATo+glcoBnz2oop2GAjycxX8cI9fO/uuF7YL+2qD6YZN3VWFtHDK2We1arVWzni+vHhdEt7ysfl8d10koKpdNtaKWU+zaKtz1PISb78yFYYVFsnNLrBhLg8iMmJjgBZe9kDUXWxzwLB3AKvh0sibmHxXr+8aUoLkCKDE006CHNtkAuCAklpOaZL6uQFVdoMNIQBmVHdjxQ9BDy4cu/Mhcl9cKKRlMwntIx9e6lm0jmB5irC1i1YFcji3sHZlQhXlywPxUsIwmIAbRph3pC1OkMO4exHlKTCzMvaCLKEgW2V30u5WePcPqwo2FiGKGo3NgRYUIOLIHcS2os+hhfPQ7FCeFBxwiUAz/GNe74FYehxpXSQ3dNHDk+UmlSFmOt8rio6sVLhPNJFdVaCokXUQA7syOHca0FbczUMuzUaCi6nvOUNSyMGlhvfvnnpMBevXbTzk8AmZwFR6MrzULN0UhBi/72UF5W7sYK/lOsv/eCtp116kfGYC4AEQqF5WHMlAfX2vZp5fo3DfOevufJH6g6KStNmxvPYQNFvGi73AY8e1nYmt0dUREhrXdCRHDF5U3t1HnWBzed842KDSjcZsb2Im+npAlV2DqUSoPd6Ynu3GaPQ+o5x7MKVQucCmllna0XZR+BsWa9ZHUiw38/7d88vvvoXdT14cTWbGmqdZsjd1flqVtTyUM6id3xYyvlo788vrui1nzSdoDgcDqeTR+ShiRRInOGeAD2cj/W1rc3ozMO7AXaVUURFCuTMZmb78IMuif/+V05SDRCzLkgN82w7slyX0GpovAv/pxpnL6VCEp7h6owzkmfivpD/vI2gR4wO5UvRkm0cpNUAqGhT47aIN7LPkW24jFCXuzDTpv1NwciNRmExi2asGX+yBy7A8JJz0ZokuRrWsq02N5Zw/xbf26xjqgF+7luVBzPDABz0ltXrV0UWeCvSb5cwc/uuETc0QvcrMgm7mWmwEji8+pVWBDvvQbIqkB5aELMZnmLRg6rUej6fakWZDgfMJymiUE8ZsyDZ2oLmHYtEoif6NGVNFGf7BszwtpAKbyJjJFkaSiUPX4gzZOF6m0me5nkSaTlbAARUeBTlWsTzz2eYqAhFGXVeCkQgtSgsEpkgoLciFqg3GXTgVnNIqcqJZ6UUBF1w7kjBwY43ePa4/OLEgIsX5WCgqGI+u7vLEaflrlkpkR3r1jmheBCp+YQXhM7MijcDiTIDxiT30eAMZCZuklYB9c7FaCKewxO3PIsEHNK2w3geEKGEWU7pJ1A5RndHZB7oeXie3A6gKgyoxqK9lssorVcqYMqIRzWBKUlO0Vwjj4qR3l5tmkgOIdakH3hRFtWpNOKrRBF17b8OgoVvdJWky8izBXivVi9H2tpQFggERQtJmrnmdRD1oNk7Cx/qiseboEjxkDQX8PyY3Xs10nD/3f/wr7/x4KCYZ5Z3n73g6WGAFz6plJPAw9tfR1Kk1VzJlhGerMJqdlZCRUo93hwqMrjXCgyzGB6oTFYgqJVNDznf3kmBAUoBijq9c9GhtUwdrCwkAXo2aJr5OjKaRl/kSiOkRIoQT17p1bUMAkYDDe4yUDS1VWWC44OJzyMz2sVA1A7lxCKzGcCpKKLzUla1pLeGDBvKJOIdog2s4KSHxECj14BBFeDoh8UoUKTh1EjloRohZrEzMwghjinMm/YaOAKZeiEgQVj7aZxRxM9Sgbi3xwSrWsraPTmZdevHOmmjP85BUG4AOc+zd6cYD5eZzVHXwof0GioQES+IFNprKzsFlekQwEndLfl6VjsvBaB3DdaiJjOoQiEKAaswCFBMx6YIcZSMUI82FzEz46DhTE63B4lWQOllzkjGBHyTlTB64VgRAYqBlea91jUMO4McllmF7gxjAErMqBAFXRU1qQYlqSVxikymNAOscoz51FnB4ttbrerBzIqqFNQzAbjKW+enV1dXUvR0Pp/nikkngdkLLZNbnpXe3NoELAXnGlUpNGwGhSLwKOjxCuqeeZxNwnQMcl4YsF/arGT4RiLSAU3GbGRyzzC/moC0eG1/jXnMSl1U4/S3FMiI6KMwqOotOAXodWtJ2i3vPbhXpXt0dqzhq+8v3TZcUk8KVmad+k5oxY/HuTW69JkEQUSJc7gSCHrNNkm113XEhcCYM8oMELZpNoDY4FBpZyBqAGfA+LBw0bKOgo4PF1xTLU5yLYl7I3F2I1HUZloWrBjFuO2XocOt3wjR8JmNhGn17GrY4p70lCCR8+pHGUCkae6U3Nu+xY1XDCTXI6MS5B/9l3/7FlD/LK/cDg581L+B0zemLJC/yUpfHQvBInY/v8mSPw06jQ6G7lIWz/rvZNopB4eUnwxuvAlOu2eXOB2/huy+5k3YLrl9WCGMJ864mNfGEZFJxmzW4GfCROZUUkc9Ye+SJk6qilnU3hm1LNlot9trtFGNEq1zNSwxUEQiy2
BxmobI/PVptXR5ttcFH12N31/hxH40+YoFJ+grH5TyC+vi0pLngAVSJMw27auNa7vc6i1ud6FNbEyscEbVYMUmcS5p+eaKXNkovAV4xTICk6hn+LuUUEpR1XqeD4eDT4AGEVEIPYRwE3rdyePyG78mV9R2uCn62rpwCjagjxJZQCrHHe33vebZcKpfkiuW0YxpSVrO+/Jzi4uM0O92dyOyE0jI6eXVXF/Ka3evQxESZgadbMk1nEvlahqpiH8kIbN8fxd0JFMaQ/jN5gf7ThSxkFFCmxKBFKwRYoBhO5YqH6Glngl00xUjFJqlRSTuVxmC7yQKm28WC69ti3DhrwZf0a+Agyweb9O7RC57NlQTmZkv8P8d3ojBvLmUKnqxcUG3SQJ4Xl65DLZ/pleSgyg5MsJHkqRun9JgPeNAAABS4A1UA7hKCCQog7mElTx470ryLYBl8CdEnMJrNjRcUn8RWMUA7bhpoKTVzQ8yWgVliV3u/zNEPrT0zBnRMY6XISBEIfQgrN08MI7fuSAjG1cqLTSDNG7vCKwXSMpCylmKfVFJcFB/20uRWWFIh25D6GFzV5pO163HM7WSMFqTlehj37KxxRb0dIDJzqI2XzaCz0G78jqRAKREtLBvVWfGL8coQJY9bfuyJexgHPK5dWjagdYfQpwGQw2QOq6QotHVSwQwD9ybVIVmNh+u71Wax/CHiQJoXRRFOsEfwTKeRxEhEYU4uNwuEWmdRoBuiukFdjdQvsTQRGTVjOnlSrAO2IAesxOtMdBPYjQxXBDc8Xi0mK9+VABAj1Lns672ttGIhX4p/afxnvHLUSUSoYkFa2DrykKilw9uSNXqMWm3CrarNPFDd8hl6gHjRlJVOinsPIWq6h2lyACf23/G2LhhxWJWxz9x+ZJsMdc47siDSxTQMPOTTHE74Arw/RVZQ1vS20SyU+cll22Wte38LqkaK016tAg3lc5aUo14QYCY4YoHq9OjJcOZ3v38n/9bT+zazk+elPP1vVc+fqvzzfzOcb6P2GTXBqiKAp6zLWoDoJNvScrbp5SaEIBRFwx8luKScdsCPyYlOeIY9xHaacfeYRfYCu4FkP1q0luJkJyw4rRkdp95i2zPwTsjiTPbxPKQRjnu3gjkYDD5+CiQNbl0e2RWO9zIArOWK5b4PC62fZcDB3KsWNT+ZaIoKqaq+awl+f3wa7Xqj/gISW764za4A8hk0/7IGPUi0tCn/+S8sDHgxqL6IEJPtHz59NYLGez47VcBYJShtw3HegKtKlQaV0arxwpiHKyHJSm/EBkRsAgJKCKDuFaUQGsXrWiNeyleOSZbJJlRxINpnNGpqhS1u1pnU1UtygjFQB2EyHYkOfyLSIaOd41BWJHX7H+GSMIQRKLYyaDdbyFy6fJlN0C8fANHWW/8UFILrINdszt3c55t8Q6OLB4Re1Qg5zLbPF8frsdX+j393xUbHm8Y72cvHHjyCtLsGRcjxrNJ1Ai7DdvSLp07sfEAkAS9TKq/fEFK3K2fo4Y7D8Hfd9iUYZC9R/K3jGDZXpRu3nBCPBYoCCkkplEjQQxhN3ESP7atbA+OK+qICxg5ZURYRI0CIOtlYrUV0eN/hy9ExF1uMuShNnV2l+wOnyuWB8zz9x99rNw8O53vVK6uD/fl9oPP81h4b8KcE3BiIgSY2acdY30yughKWkBjDZxBWBRZ0LzGP3Yvr2aXM3dqmEy0+yk7MNu7nbY5xo524K1euP/iJRilMZLBuoMBJcqwFmYWL7b7m6QN450LsXi8tx+GaIPoyu5wl7S0OlmvZYsSJImqHg4LMW84qKqq1fbh0PZ9BAiHoKp2xcnK28YbXioT9JLsI0EY9mgwUEqYp2QLqWYb+GjX9vy+5B54zK+K6/GWIiabhWbbamVT/i/+tQB0aaYpApFpz0XFpyXom2AXjyTxy7/CpquQWmtIw5xr5aTl+urKKmutZibuH8bCf+8TboDdBYeEBixRrLHNza3zohpk06GTwKuL5y8iRI+I88x9/ziU5nr5xm6JjjdqbrVsqCgpErT7t2iEBKXDwOM2SZZVAZuPLH6OUxz/MigEejhUu2sxI5L8kipSm1e7C/IqkvaAxWheIH75unUFxLwzJ2DWSq+5hAWhoNRaxyKgzYkQ3+XGirjU6HEuXB0/bkxqreP6WEBg5MRR8yPFcaFKiQIO2Bxs0ktbQYDKLmOLCBjpm5LB2En4gCUD6zLNnj4R/HuMvaRrmR7KC3GtESEpfWhhgeAfOdmfOV2/wRePD1c35ZUnp/mVR2ZPnjyUV56L0ztCoCZR+HiJ/SKiYTEOut3x2NbLWTE8IOx71O4rFSM0JO5oUN2gk/EcAHrnoqRarTEXErwZ2y9NbPefxp4W/iHm42ktXOwJAKqik4Wm91TIhA1is0exrkmBdxruJHJ5LffdEbv/nVtGQL0JmZcKbrf4yA3HcBmd+jvM3HAhAp6bDU6llU5cXiMXGHnMrvgi0nJJ++pCQNEe6QYg00G20LDBMpH+KYTY3sCCDY5l0flBLKA08Xl3XbtymDBQghGyGF/2wbjgMo2keThV2ovFuGRveZXtTJJtxZ/aq42mvOHbZHACkmLnKPmSZOi9OosIUO3MuUKklDJN0+3dnTcC8GCd7cbhwzDHyc/kM8506Y5zmd8WujgQeRfWjv1GHmnXWDOzpXjqsAGXCmMhUarJfSX/3GXZsmS9Y8i7IMWZ8F24j52FBwHryoCzYTAXJjfMYb3kWaQcD3J7zoBmLiiQakl3zXDqADOrG6vAmuENARSLc8I1WIKQAm7RaPRmPMMJ+TlyBLpSQrIChw+BwSBUYpCUR2qiqkAh0N9GUz9p3X48Tlu4xCh1uz3Wgg5GLcbfRQDRsRMvxXgMrzAjSLGeiCdtqxrybcw8Gy1BWkmoN8+vProSxU29eVHL8V3Re689YjXMVE9rSIZhqaS56C8iKsI0JNRA+7AxwINnlmsaxeVqvVJTE/+xdzZjtE3FtBBw98CmhEegpDUvgk6poSdy8BRkENaacXZlBazs3QEp1kTREX+Q9EsGdQRJ4iWLHcY7W+VEjy6jppQ2BC2HMhIx4K55dLxNaUCphjYwmmQvaWZfiT6xdwVCY4ZuiQipoKp3H9oQLVtW/F0VnV5do1wyEjoHph/d4cwMluV8GzoCEEMZvobIqjrGsQYoVCOknZY8WCmej9sFgktXAxRShh6nlWFAUsmSdDscJZl8yWRghh16MU5A0U9Wm//gkQkW0DrjuX4RrdRcYPV66Sqgo7kUYLagn7XWUgpgtRqB43QoKufzqdLGqm3xOqqHqq1n62CPmAyfZMBnajCKD5kH5oQz9PpBbFvJhiM4hmrW0u4H2Qq4u2iyzSxcTXdEtf69uzK98UAeIK7wcjjMq9Pbvz/pvXtXz3k3vhL8pRYOPJ9uRIoKp+mA6ukIUdkW8J5ZFrHKkYbkmOFSjdhYGgYgXQJth00aFWiO+fHcOsVpelPfoOFASqjFXRFPpt4q5JmI2FDUcIVJW+e9MGpGOmwj/r4VYANIqagRr+lnY/QhrYwoq8F97
4lFwStizY2fL7pmhEdIrikLIUEg6FRmdfBVjbjcH6jf2DGSuTrhfV9v6Yjm3q5PKgHDQtV6PhwfZuSbx+5OiN/fFKPNSJYEQ62vPwkkAml6gYv+2LbgXwS29+otsM2Q0FCTMTnBnIWhUiZmZLEpJ2amfT6WyUfdSd4dWE9NAX0HN2JzfJ9PBSSfCZWT2RSQaU+ppWAWmW0LtXi9RnaoAcUFiAJ+YIMOaIRzARZZmnbAF8Vw8HERvaix3opbeUAN53rkmPSEqVOyd3Ki8lwD0iFx9U7SY8T+GejAAvPLLEGK7LNO2R5XFhXeco4rL3Lg4vdStZUYe0oYlIWtH5N3fZR5cOda5nu2gpYNDfjgBGN8q7XobnvOIHL71z929+/au/9ftedfaVd12e4aEPQyotRlRVmE2vrg8BWfftoOUGOJQA5+yUDcAIMGufeQrRNbH1SmgUEd13B4YZ0np71xc+cwTgjz8xLSyRnEBVRVUIEn2HxXjbBAqr7qKqqux2fFtUvMXCCk/sfUugUoetTJUG5ZAu/N97/8f33l5PdviVT3l2M4aqOMdVhUsXL473dx2pb+ppjcGgEmmGw7IonGOqZ1OCQMaD4fjIUb8yEFLd3as2jn/zhYPnPvDo8Qs7MhHeGqy7XRRD2uYpr66exHR/Olvd2Jzsz9wA49lOAVorNup2AG6qwYRpbzqdQkfAZjMr3KBZuu7C3c5JfCQAR/q6F64D+K0P7iGcFFs7gz8TEZRz6quqPkOpHLBNLJuhvcx02kkYIU8h9dPexdJ3kWb0ir516y5kOg+xUJOyLNu2nU6nWlS+bQtC65uy0nvvue22k9i97H/yH/3jX/2ZH/v8E1/zMfcOvVq/gP/h9sjdsrPxLvpU637nyLHT3/rDf+k33/lH//yf/vun3AbC+E3v+8TbfuH9P/+v37anbx+iXDn7F7/yL770W7/li+957mbL7pHzev6Jvf0bBwX5k8c2X1z8G09ivjniHYHYtVAVoZZRwJF6ZfF1c/bk+oOfeujBhx/64i/5ou2d2akX/zCAc+/+x4Vzb9/55pI9Dzdaf37WDIajDdc+Orlyaf34M3eqgZselNXa3rhRwuu/8AyAX/uTa6rQtu3OSA6rrkyOUeKAK1gRJGCQ94EAe++9s/I8c4hCwEQ6ny34W15z06T0N22/8kfn+wtKadE76stzPh9QVSPAv/yH50w0DKgvRkyMihAa4FXiNgtVmKxbSWHQ2tERAEl/EL9FaTAjbMDywHahQIAXUTc7JLkoCRJE1FoW6e7ECUxhnXDaQlm5/HXpCkkXOJPj8GJuqdLgOFa9MENrknqJOwnSqLOxP53ypN+V5StI1RzTyAqRGOkkpMg+MyHw1yk5hjk3pkGjf7Y7Xn4B4oc9gj6RXhx5foNtgvmMHPlTy6TepUL23GdVJddZ8NPfw6jvYZ0DMPETQRkdsuSo6vBps/d/ZOdbn33nyTuf+/O/+aYf/89vf90rX/H/+4evftmr7nr0MbpyzReDmRuuTw8GRBDeZl49DES9EcavNytGf+jgNXGuiEF7obJkOg8qIE7pPqyYSJDKmAiui2c1g6myxVe2VSG6CqDV7bKQsliZtpOi0OvEJ6crUuLSbj2e0uf/8ws/8hf/zokbzz33UO1FuKLPueMXjhw/feTo8dl4fzoZu4OrpJ65kLY+mLWFK1cHvFLx1G2g3Xri3JXb7hQq2o3jT//Up39ZUX7OPa9Qch+7/k23HlO5UVyrdGswvPLAwdV6f4hi59zVk6ePb0/GleOBq5oJ8WDSNLOZwmFF2qosy7JkoJZDdnGXTN6SlkT01Eq2l6jTYbZtaw4sqgr4zmoT5YO5ExQcOkW0SznkiFlF0oIlXRuAZOxKwwp85CFeGSTqfeOcWxkMvffTgzEzb6yujZtrGyvDZqooBo7LD73/ofc1k1tPn/qBH/m+n/g/v7150P2zN/3xG3/rjX/05v9eY3QvNu+kL7j1zNc3Gx+cPvHRp6+3L7ydmF/6NT/wdV/3ta/9np/7rp/4ye+5+gh++j++9T/80s/+0s/9yi/9wuVCXvdtf/U7X/HFL3ztV59q6+r8ufpjH91908o3fP7mf5FWXaFepkVRiIeZ8EqGb+GYnaqrVh67vHPnPXcPytEf/v4ff8EXvibAv6C29a9Y/w/vnn3tdH9bRutHj27sPPKxjS2i43fsSlHCu2oNVDgX9PYA6rp2rliaCEg1FDmYy4BlmgYrOx+yOSWcThrDGPKl5JBLpydxHIYXP9OWSIjGKocpSliNzCTCnOhC367MoMBIE6JZ0Vjr0D9F7x9LwOERy6FgCSHtyR7xLTl2spsWJ8IacGtngO+KPXU9m988AGYyWTebTmQ6Ou+ZHl3owm0s62eK8+5bx9No6X3nl3PchH7Chyx9IPrKWI1aRwAvOusAvO9co9EiG+5JpCp+YOn1nD5bfdZEgO0nIThZHmecn/fPRBU8F+a0lJDn13PlRrhnkQPI++/r4OYk4GX9z+eUPmzkh7W5+/N+ROSWu0Y/+aM/dM9TRj/6A3/5S5/7d/ebb3jv/R+7o1h/9hfgp//ttxw/c/KDH8aE99a2Ru2sIAfvOwk42YBV1dHyd5kzUWLRMvLcScC9dem543cpKgHzsJQveOYqgHfeN03MvlW3blW8iBKb4i6quLnTKwipalPsOhyTVpV2nHOk616gvMcDt/5IiTNVMZr+xwf/yfbB6ve9+rsvf1LcGim7oqzG44lKu1rweuU214Znjv6zyfZ+O9nzmG4eO1FPajSXKlxQVxM2Gu82jtz1p/d9slrffOLyeRRrperRW2576ubx4rErk0t7T/z0nXd8+Sv0JS/dv2dlbwUXLs52H5psrW/MVsZ13a7ONl11RVE5WlEpJnUDaoibph1X1dbSVW6o48w4MoUAZuq/+kWbAH77A7s97vuQiucmvPpse3MmE+cPRltGdF3Jj7MqQbS/JcQY8SxhzhzTmUsAqUhlM2uGRTFwPB7vY+gGK6vj8ZjI7dVDX89OHJnec8epW48W5x6/9pEPXfytX/ip3/+9B27lk46vf8W3vPZv//PXfs6x158ovu/Bg48+jge2Vq6/4gufU9119sf/zv9269GVJ64cvPPn/vjnf+vX3/6Bdw6KVto7v+QrX/w9P/Rjz3jV1p7gxJ/+zHRaM7uyLLwX5hBVQnDEpEIQaqmh0rWT2akja9s3bnz04/e/9rvfAOCRd/xUVRWKFsrv9N8xqtYf+MgfPe+eE+PZ0G+d3jnYH3jM6nZ9bUuI27b9xlcdAfCf3nGladoqFhWgzEF9kWIh6XihJgETkfeeEVTQh3mfLuXdv/nVp5ff/Wdqv/yWJ7qlJAvEUUc9AgHgr3z+LYgSMBF5SPIkcI6KVgUq2qkBWAliHlJB1yJBLT8nyfQGo72f5pH5Ir4yZ0bp70wAiaWgvtY9Sa757lXV5BxngkDgIHLNHoBIgFW1ZLeI4VW1p4LOmy1knvkTffpB2TyJrDYdvfAMA3j/E22oWQFoiGKc3xcMurkNOLwoU0EvpUzaeYHSnPp6aTuMXC
0V3wEYgenuXCrlfyYd6pJMRgiRKt1dAZ56mCCxZIRLZ8TR2u+G+4Nqde9Tvz451/7f//FjX/VXXvfL/+SnPvauPx3hy4aj4gv/0tY//vc/+qGPT9zo5KzZKbHBrrMBd05YgGZFIPJ9lkKMEr7AzW3AMZVHinPTpNlWddBXP3sVwLvunxmrPGtDTlcxLVCXd94jKHm4Rxj8rHQbvm3dYCLQuh65spj6a0OarbUnL+iNVVTrx9d3W19d2yvdsR1c39w6cuP6HlwxHKzUkxkB7azGsYOz5enTA+zJ5MIBD6vBMXpiq/j7G7iEsp00bduQl+rKjt+ldk/gD4AaXFajmu9qV7fEtc20gnvgGw/aL3vqPT/5v3509ey5x24cm021qatjt+1cebAoB+QG7CouKi5K9NVTc004IG6OOZwNqi3LV77ACPB2fj+hBAASILlfmu5KjaNNSD8mmqeOlQnDMJ8+Z6o5BGqdLAe9/da9N2OUez8Ftimk8rC69KqqVLXTg9WVctoczLQdjNbahh34VvWP3pjurw9WmvFkZ2fl9ttvO7l5z204/4nr/+Infuj+d9y479w7tpsbd+EvP3vw4jvLex7f37njNe+/7fPcA4/f909/6X0bp7/oR77ri+591ed8xRc9vZaDd//hztve9JZ/8X//12vNG4vqzm96/fd/9w9/zYv4vxyM/XQK71W0LquS4BQttQKwOBYiFV+yq5vZkbWVnZ3dW1/5vwG4+qF/Np3N2FHTtJ84+aMf+oO3rxTXX/CKVz5wrlkdcllMRI5WhW9bPxk3RcHf9AUnAfz39+zMpq2VHMCCCjpX6uag84REgNV3NmAr+5vuTOoHp5iTAhzoG159aumm+rO1X/mj85axXMiIqnrVgucFpECA33I+HE8LkeAQNCF1q4QWlE40qVE7K+SoXbxoJpvajTmIaKE6X0AX1GWP7HCyZo/PyTyuG3+yqzIzmpCTKtRWiPnui7JPgDPcmM5LLp1bStec07KfloSoRnlCADDR3EnTTmPQi14IaQSyToItirv8agmg2rcV9cAXlc15jmLL3bKUqcnHjL7W9DNpi/cTLWGFwtdDesgVeouEPG+H8BB9GnzT4edLSP2cXGnAOY2k8dp0Um4++xvPPP/6f/nab72wPfmyL//Zv/3dP/bpT1x81/vfuXfwuiGvN81utY4RrTjaa9rRZzj+7nikrMjZllgEbIRqr58wFxKGI4XLnf7Um6uuxtlFK5Edz9CRmRcTLqtowzdwKApZmTUNC5iVvVO9ZZtRrR8Rf237vCtXR8Xo6OXJ7sbK+sXzl04dP1HXdTs9qFwh4Gpjvdkbb1P9xF7rC79ybO3SlevbK+vrw584evA+Gb/5ZPWnbvboxN23snn2aXe+8SrNHp195Off8IbXP5eqY0ceklbPX/+CO5679/il5zx4Z/vQozc+9l2v+LzX7L1gVeu20h98ZMe/+DnP2NvH4+d2L14ZS0Ou8q7ww5Hzs/liG9ZS1TVNO5OAzje9k4mDv4XZdg9fynSKkZj9dCVIs2GlUs+2EqZEFOp+7a9vxgzlZyGa8cqyFJGmaSy7lvCEy3Zat1pUjEFTEyuk8ff7vSO3D+u2KafV1vrTxg0e+viF934Qz7uT//4v/eK1c1fd9v6f/M7Or//mG976J29+5+ydL+Knb99ffvDBj//OH33vjY/+2H2PnvyJf/DfDgb/6hhdu+Oul33pX3rxD/3N7/3ffuLLrj64+Zv//dd++qd+/pd/4WeoHPyTn/uJ13/OfcPRcH+fDw4akLiCmJSJVYngSeG9situ7E22jm6EuUg7KIuD8XTryOpjH//UufMPfNd3f/sH7398sLq+4rzU3LQTD7S1LxyvroRqSNPxhLlcqhpN/EwvGhggIkdCpARhq2kb3OqCfGWVdnIlMINUwZEGOxChy8Oz5O1z1w+7LWscXgVWKJHF8ebIOT/4xJYKSEAoHBExQyEeHNyHu5B6BRM1CIx1IIGHD2NOMM0vBhocH09SDSKK0mg8TU+lVFQm9Zr6zg4bQIhcb1gTQnqaqEfgDF9ZHhWO9xlQ0sHgLKRwIQ74ECKxdNqUmGUra6HzdzLIk4KCh2ruzwaKsquVRoyfUzpNjTmKIb0yx58tiT1sCjch5DkNhnTUxdphD+YIaJFQ8bKNkq4DyEF085a/KL1urhhOosdSVjP42aXp9fHG4yP4gk/un/ylf/kf2yHe8Y533n7b8x74FAaDE3XdDl2J2h+25YuiSOzIzZmMHCZYOJPxQ+cHDhJSKskJJAXYq6r3XkBwBRQqSlHbbMPg3GYcs8QJka7Oxvu7ZUGeXAPiaqTOlbxaldcmOyvF9qhdGawOUflr03qLNgezqTt69PTezs5K5QqHVmZeRaRaoZPjYrc868rZqNib3V6UTTPanxUH1eeONl57IJ/eKD9x/bH3rq/vXCwv/7eHf3b11FP+wl/9ov/87z744X/w2//mP3zplarie2/d2b40vH61ag5OAjsf+NPNF714vObe/o6vpVr2Hj/+jg990Rd+yRe88pWnW+CRc/UT53flYDgolu+AbkdRgG2MMF7Ij88EwLcCKLJgsKWr060vlQov3jxiQr5JIvJeydCmqKVHsIwc5rWbLCga3SRzdjAfOYUEzaoqjrkYDL33zayWZnLs6Im9vXE7w8qonM4OBtyWzq/SEb3kN6tmUg6v1fWo2bnzxHC1nlyarX38ty6d2aqOjo497ztXX/1jP7n92N4Db3zb3/r+f/ymC+effefRlz7/R28dv+Tl+vxXbX3Z5fFDEzn32i8aveP+/3D81Pc97dnf9LwXPPVrv/pLP3LlrQ+9a/ft73rHj/2Nv/03xrOXfdGrfv3Hbz9+YtWLjPd11gLwFZF6UOG8tg6OBnx9Z2oU+KEHHzl95uTxYxsXL934+B//1td9z1//0CN7Vbm+WR0c7Anp2uZwOlEeDAaqEB9UQYPBwHtoDLREJoSpKlHm+5aUmQRLqRFqVoIIxCCJ6kAlgVqCYk5wBhEUHEoLL7dE5PvgSW5YtnM6J22CMmV5BjuJJd3ceq8iRLCEZerbtm25KBMTYQeZyQlBQnWQoA01L2jWIBYvDiaDWc9caPTUB4eJ3D+3e5aj2lmiSTRKliHhDIHgOO8QcWm8tp3Leh+39/iPSNplgXAEAemDl7oA4Rx8i/wF5XQ/m3+4XwnRBvyB8x4LmPcmWHtuxPnF7quf96YOKrj41RISISlz3JPT6T7JzNRBPXKbJb7IpDeLp0xHiAzFiLbUeY2mB02JFHvpxUHN2erSU8xubjffnPgF+0qfUdAF1YeVtfHeF65aWxsBmExqAGVZBpqnbVEUzjkVappGBJaHAREOElVeAQ4qSiDLT5mKEagqe1amkBWPbf8pAaLMQZPso7+PMtHBznBlbTAYvPBOBvCWTxxYUUMLQlpcuKX7yrhRG6RAhdgheFEGgRFJcR2saKV9ieGMVmoQgBrDEd8HhGCbuqp5tlYxsdPqaPuOh3760fYP106eurh2qWyar7r9L/3G3/jlv/clpytsSDGo5MZxm
c1KDN3AT2e7jFowWF09Nz141JM4hQdT+Yu/8YJv+bYvf/lLnv/290q1duDc1qyZEu86ReE3uahQjGtIXbeDapWZZ9P99dFg/8buxtrmmPG6564B+N0PjpndpJ4NR6NJPatiWToyJQEFODTKQCjhEMHKADutJVMyhWowqp7L4G8lHjElkIALZlWfDA1EDmCVDjVTKk9peK2Iqot4JThmelGLlElVopUgQbBKG7vLs1iQ9ee97u/uNM3szNlTd995lHf3Sx7+xq//zm/+xps/+t5PVeONNd24Y+WO5649/fFT/+Ff/9e/9je/9nc/+NGNTxUP7NH2qNl/9gue8Y3f/opXvuzLd69+7Ff/3W/9/h9/5Of+7pm7nnb71sbmyuZo0upsqk09KckRlUQttECBMy/+QQBXPvAzO1fHM53823e/+Cu/9SsvX77eNM1gMIAXAHDcNA0JmNniSr/u5ccA/Oo7L6lq0bCqhtwrFIFD6hFNXRyogLkH51oNRGpkHzshLGdb460uSqUQ/fpXn+lE27kPcf3SPj/0Yjxob/jjJxARZpcKIiI4ivnUvukLzwL4pbecZ8siKd7qHXmvddNw0fkK+Iyq+BhzbJ7VFPG5p9pAk8RaVYUywwtBA6MRpRplz95C8jKIda9TtRreYR1YMcs4Fe4+UNvPHJqwDXWWnXA9366LWLqh3MO0uz7vBd1xZ329cX5l6f2LzX5aNOum3hZntXSq3SPmEEs9zNER44U+PktZuZcF/TN+av6N1FeVM/fILQCQpJChpT3Yh8UbEn+3BDKZFnHuSm7AI6KiKIIbTqtN0yXuSEmsHDv71fKqM1uySMtMFwwQAS+E82lkD0DPh9NlSyU2ZSN4ZdmKF+8BYTZPAJHaHz9yzKu2bfQAokIJ3nu3LIfqPEgzOFB2Q6itS8FCA0gsgqBzT+WupAltaawJFzg8JlKwJ0UrKtNpo+O1Fb7r81/6pdPywr07d5+rP/0N//2/fumX/rXx72zf/4Zffc4b7lrDUMpV5+tJM62qalTXR4aFot5aHZwe8+WtrYd3Lz39zqec+/Dbvvrz/8G//Ll//Zqv+yv3fXy2tuXYlUePn7m+vXuwq4NBcX1/slm5jfWtg4MZIOtrRyYHO4PVtWnrp34ap+HVY1Cyg4efKg2y6XSwciAQSYZbNQCBSBE1+cb8ZwhNfcxgA1jJWfXLEQLH3OO2GTXmcYkaRe2vXQf8OA5VEMh7n2vq0l8R8r5VpaIojh07oerHB+P3f/DBstw4c4xf9zVf+V3f/VWXn5i+550f/k///jfe/Jbfe9v0t55/bfRtT/8nNZ57d/Xc1wxf88juh66sPfhXX/uC9/3RH7z0+//ea776a8+e+Jwf/qG/cqV5Z/XJD54fXVtd1yNHjp04cWp4fPVgIuPpzCs7hq9DZd/ZrN06se7b4d/66sd+/8KlksthOWjr1kI02UvJUGdVOLIKm+YC7JKNjRSk5BVKyxCOKS8cyLI3R6cbqKbsZlHrliAXJenF3hK4EVdxCdHNyfPc/Wr5Pex62AkW3pKWBhSVl/0RiAiBXOEY1IoXwdwge7bL+PbFmk5L0fJc3Hl8arncnwoqIGHLOODUd0+uVeSKRdVOn5QE1DQRI71J4EEfhx9GUXo24LTR08NLyMBCF1Eyzl6WuO9Ycjyn64vP9vo5bPdYSFLOCWZPEcIZTqzTTfpcvJiGdxMxffHmxa9CUXc0p4XoewrcnGuxDwvMaE9iTrRWY8upb37/4mraeAjw3ouI+US0bauqVVWpsoj4mI8muv4IWaqwrE9KLFE+zmgPDpkZ0kxZoaSEum2YuSiYwN43UC2dGw0GYGpnflY3WVfMzOgXG5hbqaVgDFxCtiFVlWJBbIU6UHJJn1uX7pQrGNAsasRsIoNioOSIvVPys+L0ygvl6vipt7l71rf+PZ+uH3//V3zx16z//NFn/dR31L/7kQ++7Y9u+U/vX//FzeEznuJuOzW9cOHjH/vo3Uc3Rn/h937v2Na3PvT43/tf//lXP+vMN3/Xc57+qVd/6L4Pf+3Wd9x5y/qVq4/f/8mPHz25uXXk5DPufsr1HWyuHbtxeTbZna5tDA8Odm7s7Kyvbfmmcq5YiSkJq6IUkVlTq6/XBoNpawx+thkIYvEjRC7gbvIhjEiSy0WIwqRwvlRIgzUtpOZBkHdz582IpSikUwhoJJ1WIoqano5jTguaJdSjsAoUT1JMgJpVmC7LAYC2bsZ1Q6REbjRcrVgev3jpkcvcflrKQl/8VS977de/7ENvfeeFx8Zv/d33/Kffeft+/aZ1/4ln7t77ojues7F5+q0f+N1Xfe0Lz/7OxQu/cfQC3vqR9bc/9xWT8SX/d3/s6MF4d3t69ZEHLmydPLJ1fO3EsRNViYPGaxPKERLTTPcqcgeT9jXDN3gv72q+syrWZ+20KFENaDLeJreKoJkPrWJHlsAkTzMXfNZIEl2OMp8zsULBCDUVgpLAau31JKIu6CCAVwJbQ0Tz3jafvc55HhnFgx9OEJHCsz+kiAtZ8T8lVyogjfdqmSZ7ppOM/QVg6hCjjUb9lamI+6R7KojBiXZqACBrnnogtOS10IVyJZIX7+mdF7uSa1LT/xnFTsmtFFBVl0lKyHrriWDZqOadsJYiccTFnmddD3kQc2twyA1zXR2GWG+CakmD5VmjqRvRPS+33+c9HDbBm5PehMI6aRIBX6h2tRpVlbhIS5t3Gy0atgM0BcsfrgnQ9NdECRhDEwW47EElOnT8GUXv0mgQkZWKgAoxs7N4fyVWaTNPLtYgxagufQF1JCqj92HgHQkM/7ES4EWY2TGreiIwqHRUlby/P5NQzgAAvFcicWVhUsXc2i1+yLd7QNkIpUtVk8tWd/xYOfLOfq4ra1afWDKPGFX1ANeNqDhSrgjA0eE9ly4+Mjx78kThbufjX/Olr3mqHv3I2z6lK0ePfv6rT379q9f/IWZj2d7iP73/3Bc879Zv+obP/+j7fu2Wqrz6+Lnvf9aLT7vhF9/y5bJav/COjQffef3vftcPPuPlz58d7K6MynMf3f3UGB8uNp/2tKefPnXszO3PPZjUl85fWt/c2Dq6tXswdiyzui5cGN94MnMFraysNb6e1tOiGGLhZNnqBFoLFahDKE5lKoc8rnGuCnkAUeC9KGg+ekFHdo+dRAvd1ygOUdpBxpx1R8kx5aoLu4WZo/Is593tfLVtSxqcEsy4UxTFdltX60dWTSkJ/9Z3fmJjtHr29pc85Z7qVd/xip/Y+1ufeMvlv/dTf/Mt7/3Z91ziF7Tf+cUve9W5T+/9H//f//NNb3vv/pXN8YN0pvnA8776BX/x+//wl3/qtMPGxoBpQg9/4uFzKw+TWztzy/rWxkkb42BQHUwEKMG1iHOueDn/27fVX1eulm2tkx0eDY4LJvOsRmBJJXJ6qdA9BW62L/lRCFJVRBrcgXohUQ+RU/VkrFUQVA6RZDpB9qZtUUGdvT01sQQviiILpxTuPc0MJvbwIW+jOYv18vZkJ1qsEkn3FqGY3tr0NLF+XeDr0lsAkCYFOKL9eGkL
e5A6pmFpJfKkPJujTUSU0nP2DlcfEaUPFDIZLBvJBy9k8ZrZIAQ9/VIiwMtljnjxhacdgA9e9PlTy2HQfxAZmlh+py20IXvtJGDhPrsRsADJIftvaawwER2WAKE3hmWKyoUZ0UJOKBUR13FSmTSs3a5dHFIihMjW/jDROQf1HFTnrscOGSFdA+W4rG0ikY7U1x5kKhcXy24Dgs9lfI2dBZ8S5XQ2KkJBZdu2poJ2jgtmkVZEmAaqSoV7xT0FgDd/bArAlQV1iXLmd+PcTLGw3zRmztLoImTEOH+KY0B9yLSa6JOyEaSOcQZYweS9sHNu5seOKzSD1TWZtp9qV8db5bErs5ODvfVh2wz+H/b+O96y6ywMhp/nWWvvffotc2fudM1oNJKsYlmWbEvuvWEbh4BD6CQhECCB0JLwhpI3IYSQhBBCCYEAIRgnNDvYuOMiN0kusrpkjWY0vdy57dRd1nq+P1bZa+9zzkgGky954/Wb351z9ll79fX0kqtzHdF54sLla0SDGvGGSibcbMnrr1/6N7/w8+9419uP33+/VKIpYajyV77ozuv3H15c3Hts7Ynl1QMH9y6cPnms2Vq+/tl3rG1ua63PnTl9qj/8yZ/4182k+cijm2k+aXZlwSMhIZLLb3hOFwD+7IsT4+GjdRFFEQZIyzkUIQCQMacFsO5JFm4SKjbzVcFRJADFSKARQ2UkaSDk1K6f+2v6Es7xEQP/Q0TUReUkQ+WCqMDK1LDXpHUl24+/R1GUGGmNsVTQWueqYGZqRpDmpEkVEDWa41xHUTzsD1ZQn5yc7+w6fHAXP+dg58G7H/3Qu//k3/6L/6Lh/NEjb7322mt+91d/6l/8s5/8xd/6hVc851t/4NteX+j+xfPPev7z/yQSOh/3IxmP0ghiyuX40vrwld/96wBw/KM/vbxjCYi0gnSSZXmBRFJGd+V/S+ssjgiw0Bk5UkN/3Yt2AsAf33URQBfCXRZEBwrMNJ3yxYki7Ak3EZ4RjKYzQOpmxUonTLNQwvmJAZab/raX7K4Ik+eVmlB6powa4Pc+es5x1ZodhZoo4Y6HvUom/Nbv/vkpYUyvjJILhABRowF0cCW9HbgH9XZNhNFm1K482uAe5Xd7hVNiCl63txhBaKdyppI8sljflaoL6GzEWXU+NWuDAKDVbCw2D27LqgQx5MbAE2WGyXNPZjSH1hK7xPkzsWkFwU8FNZyJP8oKNVCLFdxX4ifH6SBN79bs4nqfgV0AoGYJ76dmbUMAy3gjxrCozPfpOXIG0Fz6AavAgU85eZ3tF8uzKKb5vHCmtSdzBl9Z2/BdL3w2sVjt6dHaSGfR8L/MnjZn0lU3awYo/eqmN6jsPgiCKBi1d1wHZIZcaWbUGuOYWGufLVVKYmbQCqpnY7r5cMozzhsbmx4gn09XVyohzaEpQTEbX0vUCMKxLTKJsShUARIioglKMek3NB9VW/n5eHNBZNv5cNBrtLNJQWnrmn2D7MmIpOwt0w5xcXM0Ptn/Rz/1oz/2Yz9ELD70p7/7e3/0gQ9/8uOf/OxDh295+ff80j/96Ps+dvwzT+7ekawurzzw+Om3ffu3/P0f/6n+aOvZz7pRXhh94D3/8/rrr1/sdW9/7tXrfbh0adhstc9v9M2AJ+kgTppSxsxRURTT/oVl9GBmY5ZlzWhsrHtl1UhudaQ9Et4x0Iug7cn3BKVbMo0A2hnN1Q4eAQBDTRzKaFkdc3GZGcz9ogothUFRSiVJBABpmqbpWESxlJI15pfPtbvLHCUFAjeawFuCx8vNdKvXXcwXcaSeejx+9JHLhw7s++6f/Cc/8EP/5H33Pvkzf/effOa97zp06Jd27Xj9Xe9ff+fH3vP2C6evubR93+UPfnBw7YOfe/DUvXd/5PdefWF9HWDv0s6Fw4daZjxb6xsXTp3XUWvXztbS8o5Wq5XnMJ7kL6TfJAkfL74eOIOiTchEVMYgEwBA5GLqoQ3XDEaKpB3nUHPm5AAoG7M4+8FFZbWmSeZxKF91q1fJ0hd+mFdqFabqI5YcLluKQeswyXTwhiQQhoJjZmYiJCStmSEMgRw0zmRIsDBWDDpOZ+qqeigE7jADOOW0qTGtS8Z5KMZ/DnitUjRTk5CJwGrda8HBpjZz8yqJ4DCcSIi756aqnWYyOPAIRKj8BHYZ/FeNJfE7OzLOjGmHDP4UgjEiAm1Qb3BGiackJFyKHGfySTNxWP3cz8fc6IwXLUwJ+p2HC2c2eGWkgqF1w5w606Oa/omq+TdM0Vp7OwiPeg0apiAyplso+50rN6vqIB8sn7GM00TANuk2WKEFI3Be5CSFMbo2rIwQMokjVeTMrIIwTDArnJOfZg1A+wpeOmcubbizGJg4+MEGkRftr3b+4E65NVwy4XlhnCrgvsqTRC5IwRSpS1tbq6sLUZ6spc0owZ4cyK3tYa9djLaLaLwcH5bAl7fXslacLDXWR/r9n90WY1hcbj7rZd/w9r/xjdkI+pPzjeWlT983fNWr3vALd//mn3zwQy95wY2X1jZ++qf+7cWzZw8c2PnI5z4by6WNi2eK8fnv+d5v/8M/+O0nvnRmPIyPHr31wHUHzPhXlztr6/0sHbeaHZVnggQAAFl5jHY55Tw3XBMkGkIkzNxub5C1W9MBK2AiS2sEqgCAiiZvNn/gRYszrqQ57WB/NUna/a++Qa1VnptQHhglDSJZFDrPc7HzyGiwnejNFubZ5lkhE7GwYzRJmufHjZVdo8ZGA7Z2IKkx//H7Li3vWLz5Ofve99h/5a3iF3/y1//Hn77963/s9d/2rT/yH374m2O9sKFgwNlTD6fRJnz+8ff84m/9l5/6243hxct8MToMAABtsbjvUGustvtbdOziyWYrWVjudBc6SDSZ5C/G/44o746+ubYCln8tLKlq05oZ8hS1P7Rmiay3tQmFyDYrHECZU64MIIg6VAR4v3yzbsZUF76ipRSSEwKbKUzfLFsEEjIgW5M642VAgGq6qm3TnDZHlhkhHM7gOE2RbHP3ouMXjVgAwWo6HCJwpm4OrOkahWeJczeMKbRSL9qK+U2+wfI8l5GCKgilYmcXIuAr8J3zHobtPhMWc2az/naFCN7v4jQ6wUAr4FCgeUujN3Yt1QPVWX755WmnZsGDdpQ7WH2Xx+6hPfrU0WRPl0xLAtxnH6AIvBzYG1Z4kFSSYI5jrhMWzAFyJT8er2AzDx0uFCRsZmxmbQaJNuOjMuMwHL/XytSXpSTXkRCN7TMz24gRDEIIYLBh5aVJvUeZBtBaSimkpQg1F4ZJNgR+/TDMMQivEVIEpSYiqFln3dgdGsNJoJNnhJQyMgAwMqCIRaQjkpCjzml9eHHXvqs2+mu54pXWzu3hJjRgMUlSle0v2v24IceDSSNaSHoRRRfTUTuRMefYEeN8+Pmz259+YMf+lXa/6NFjabfRu+vhS9/w3d/+yQ/y2Ue+sLLQztcvv/RZ17Yp7yzseHD81GMPPxVR/nM/+ctb2+s3P/t6sVs99MA7H3ti/xsBAOCa5f8g5Q+tb2xzPokQjBWxM2WiMoeqzaBuT5R23wh
dtLnyPgaG0KitxQtq1lZs6imzcF+IpD/57I4nIgi/iVwhoB1sQg8WDfD0JH640cycRPEkGzOoKIo0czpOEaOkEWWTzUYzziaxkk3ZaE7Go8HGOJER78hHG2nSFNsykQ3KtocH9x4e4IU/v3+4JJaXm+Pv/df/8Id/6R/+0W9+5r/++o//yx/50pu/7tV3vuFtL3jLa+Oj3fWzvNh9y7vu/sZsq/jcPadvwl8zYz5x7JHiWKOVNNvL3Z072gy0vdG/eH691W4vLi+0mjEiv6R4OwN8Fv5WVli7wrTIASARLa21RRalPIgDLFCujEYbCgnBAje0R9zulU/aZxbHIp/6gf8KFyv3IgAoA2Dl5LSqUMkpLpAKrRg5ItJIXLDWWqAI06qGzJJRhUyXMJlNDdBhoCM39LdGXckXGw7e4mgAK3epEPHEtVDCgIEENERV2hGyNYTFAeMLAYAlFbQc4uIHzpcZ530NDfXQjx7oh/npwwpGsfTcXRIA7jtfWEoNQIkqdTb1rl3N6rRLMA5WrF9U9enlWhSlpR8zg2ZhA4AEEumgd3Yp2yzJA6zdfP3c3YwAACItjEKiVGixpQbChSa3QloHga+D0Qqq1PcTVzbdcRBvyJw2kugPJZeQCOYUxvI0sEs9DQDWo9z4I2HJWYaEV0gECDFPYjH7PmutSyG8U+EAQOGCyzvq3qySyRtbX4qwvOz6GAA+9mh2ZQLImPubSLmgmRBNsGJVZbCC1RYAmkGVFJ1Gz3Qxs9fNm+Vi1H4Nw7Xye1EjgzwQsejcO15DgYiGiLHxWUkKpFSlRKTR5QRDSQBaa0Xbe3ftionPPPnYmSceGm+cKYZr6WgrihfUJDv25MkUZby08tS5s//+Z/cuLy70+4PDV/9bAPj85/723v2HuouL/VFGSMe2v29zeHYy2gPQF2pH3H6yyHcWeaMBgyxJMtBS6wVs5RPOkskw3uI0XpCdOBOslIq5wDzPlQBBwFprKSUimkyCJqsggbD5agQBQK6VYSmkyg3E0RqU1szozOzNkdNQAlMmIpUWiMhYNdbVNvi4W2fh15/mnMNgHyv7LkRUEgQVsngUx53hqLi8ubljqXfjs3Z1W3Dxwtqn/ts7/vBd733w+FN/94f/4Zv+xl/vLC2eObN18dJwXy8+sGf5+r0CAI5/4Kcvnd8abE10Thno3iKs7Oo0koZiGud5rrnR6P3xuTf3lqNOByLOv+UVNwHAXfccS5p0cVhkw1jAomjIlDaAxqgFpwLiVm3kldkFz8yya6cLCI0wjMOYP8PkWvuGFz9NKMovfufvFjFJEjyYZEjRLVdfHhev/qcvB4Df/OjpDiEyZII40yRiNIJIZAZFTCbILglQSmkgIQSzQlZ/8+UHAOAdf36OERRZ5SUBCG1wZA4QKC5dDi7DzNtFsOIBBNRB3AtT7Fcj4QsAmovFEQmbUxm1MJbRLtRGsJoB81MgEZmAlFpr5c6JAV4zXSG9gtkRsnXpTgg3BDl4YjgKl89tdtydMKiNL/b96rPaK+UPpS/QHOuvOf6dz7BgwKGy95PjCjqv8UN+FobZcdpiWzO0oXef5nbq26+8YhsJ3C1KCoDnbGLJczAzYXmqytECzBjbjIbK+sxBZYd0TcxtS+hNuUXVJhhSmm4kc7pFYxLrCBq0opuKZ7yNaGrbmD6gMAvu4AyJcVm8fzN4OpTQY/r61lTGH4oZ0Lg7O7o4kAbNWekrMxYcXAm2jLTxNjGo125roackcGhtYjFfOHNqHAnRW3zWbS9+lpqMNi6eXr988YYXvpDT9OG7P37q4c8kvP2Pf2B1azyBZJ/qD0wDS7tW17c2zl84+6yjR2LBNP55WJFxq/Wlre9f31aDYRTxJInGSmgadRcjRTrPxhpaWwrXOngV6XE+GmUixgYpzrQqYoaIWQsJ4Php51ZuyBZ2ejmzLMIQ+7oAAO1s4BFNSP3KPvrPJVszxfw5KGm2xljCY3hPpzaFp56Y0c14jogAyXA8kUn74MGDg0H/3nu/1Epgaan9ou/+jq//8e//wl2Pvv03/uRvveb7Dl976Du+562vff1zzk6iU+dH1wMAwMY1/2Dppjh/6uwj9z386L2TT378gThevO3OW5b3xSNa6+yKV/YsX786bLcOnTi+9aWnTpuus3xrOMKsuZwsdLOxSvM8arVzFelMdZu9iRr7EV6Z9DSFvKTA3XYD2byFw3Qrn/zJu4oIBOp4bXO53YZOM88yOH0uj7GFlOosAx234+21Ld7qe36tIWNWhVaFLlSn2cmUAmPXgoxgM/chEyJLKTWUSWbtMMjKz50Vn79k3q3cSlMcvDLPncGB/SDCuNkcaMM888BWqur4UeW6Y0AEnytOzTGM1sYr20IDIOdcQHO2wq/8zAMWfi2BGBvnKnbTZwCQZWIpZtJlE3oKzngWGyrn2PUUDKOKBUvGsVqhfkIqoH/2rGcVAYhgDfaNHQAbJcgUjgmG4cOEetnWDLxbW1lzzo0LBjMGipbytk8jZiPmmAV9wifTmMazVuC4dtOR4nkEja3vWV9T38seQ/bZ83C10aLP3/x05uu+WDQTSqRdHg7TKpe0oeHOZyBamDrH4U9zOg6wL7irPL+gCxUQNl7NaQg+HOC83jGUUZcDCe6Fp4I97UEWJCFbtZ9LdI8AJo6Y+QAAmoDjBueFVqC2cr0+UUqlnZ37D19z7SMnzrWa3aMvft1tL31Nkv3E5UubcaNx7JEH200jxofDB/5lOKoV9+E2+PtXXpmvlumyB8AI9uEYwHv/JQDsDX597tXLAHAU4EXhOw8/TZubw2a7szdNR1pM4jgipEmhgOM4gixPQ/o8vBEOdtsrXEIbXWEbAIzVCFBgHBS2BgAoSKkibkYIuPb4iVarkbJq71jY3t5AIaWgNmKWq1Ycj85dbu7aYd7iSSbbLdBFoiGdjEBGjsdgFz+ZGRSwEEIQQKY1BYI0Q6BjYC9gCIXSNggtH8/GBNXiIzKW3uTiBuZcoVkDcGFYVb981m5G2EvpFLQmCgpSNXdqGcmVyYREtrjA493QCaJeqGoJ7GIG1OC5pajYYNxKOkoEkozW3idsGQMQXKPISmMB/8Qcl2leE4MKwUAFIDDoMhpPiRfAAeww4AZYu/Y6BzmDCAhGMk0Q1CZSCXw1i5Jw51f7h8GMjAwGsGrMwlxPMhGQqLNpW3Jmi9WlmNpywjmkW6U4KtLOF6de8sgYqlvs61faeQbFrTmAXzpCABDsj7YRTlAgDKgPyXf9TGh/X8iaWYDn/tX8FjwRamlqHxDUKtpM795HE7yjqn/dA0QOjOMq3ZE1+zAiKbuEHvuWE6ytrQ7PtoYBk9IaGWTUiCRF/XS4dm5tpbcyHo6fOJ5i3Nq952dEb7in+28WW4NBOoGvlv8TyvJVzfXBqV6+ymrQbKQFyiwHwBZGkI3XI7E0j1mvnbTgjs8SGrmTXV5JV+vSKw4vfOgRVfDOQ6vrG5dzPW4fWk27O5u9bjbJctSDp862k4ZsNdPN9e7ygnlrZWl5czREzU0SOZFiZW6NR1F+YIbQlEQBICm9if
x4LEqGirDNNMChOoyBTMC9qnG+685+N0poxRy4ljAzK0AiJLAGYkzaBK0WVWUnO4BgI6+ypWMQrZmJDQAHs3P6gcPQ05C2ti9QejCb0ZNhFaV3nA9kd0amq0sBAdrBAgBUvfPnQWpPgIUIBkILuipECz8AgPd4Md9Dx/OS3Qykwf55CRyRseTK2BABRprlHtmO5o0fyzfdkwoa1rbbQB9sZ+wXx47efJkx+FqprQAAgAsMa87CFRhTv4aIpQcCu1jQtvHAL9OYQc0gk+e0P7ffQOTrF2rm8MwBF1iVrlc/zxAVzEGoXK02s4UKBiUG7UlRt9SoCQ1lyi4+qFU6MtYlDeFBnTnBWmX7XBmhVmnUbcOmOBccYR3VDLABVXQSGSUEWVaoTIuI2pS0k07KFMkkIdXoJKdPXFhaXjg2/CdtUqu7fvnM+X9WpONGEg+Hg8FgIKXQqmClGs3G/v37t/onOCcp5WA7u/tT+IqX5hC/KVl5y5MXh7v39vp9ePTU2TTaossdGTWR4kRGiaAsm2SEUbebbqzFcayUKooijmN09noonJ9rQM8xMxH5bG6IaBgarXUE0sldSnittY6kZC82YJtEHAHQ5H8PbGSsG2iQgy8s00ad07vm+UJEREFZNqGI4iTK0jwbF4KSJGqOx2ppMcknl3VRNJpLlzb7E8p27d2xr9vpNOGaJQSAd3749KMPHz974sSov6a7nTTl4TDds3t5dUdy3aHdJx978sHPPTRoHDp64/KLX3njjc+67sPv/8wH3/eZm97/+K133LKzudVoqUG2uTXGptiTMxVp1my2i6LiHhfMwiyHR6R+TYT/zVSzUwslQ1zhi6gAJIqiaLy+TXkRt+KJyjfTrCET6LU4wjbEg9Onl5uRILW1vW3e2t7eLoBjIbIsI2FTKAqUTjgM1lMf0SgUJCEKDCXDELJ+HiaXkqQKmWs/aQMuSmvkKQDlwawGAGFFgwTmLDkRrHV/MolbCBmZdEl/u9YEMxsWHEIMYk5LyAoGoKaWasnOCdGvfwirPSIva5koQQzS3P4QgGJgvz2Lv6xZ7XpCqA6VHAoJglwH1H8YZgedshadUIXsWSo9soNgAv6EAYCzxmMmzUyIzvB1GrtUrqIPjQRcw8H+LetO4z6b/srkWYg6OF5+AdA71HvSzaK9ilVeOTwd7FNoSGUAdylaCEiaWaXCzs5XI1Xw9BT2mvnkClinLG4Wnkxxu+4iOKDVDuOcsfkL4L5b+g/n6c41a2B/0wyFhlUSp9oXA+rwlCIxQCnAYHLe+AgwhX0rAzP/V6OysKqISWw4BfRxj+zi2IxyRKDR28uApb01Igpu57kGVhJjIUGrQmWKQSVRzqKZc7S1uXl4pbc52D6fdsfY7OsfAdZaZc/Z/RuN3nJ3NBxurRXpOGrK0XDcv3y+09sY5WOdyWbced3X6KWdWao/8/iZD585c/Lv/crSy296zbd+3duE3ts7BFkK584Pz65dGkai2WzKDPKzl6ht3FfJCxWtY5tmcJDOnyWtNfsQWIYRsiGE2QHcAHoQGzeWkIYODllJTwOAD8Jg3KWmT2Od/g6eV7aphI9KSESAfJISUrfbRozSUSaaxXY6lhTJVnt9OGy2Wwm2+mujBy9fmGzBNQAAEC03XvHXbtmx/GKYKDVMGVvtHmyM0lGm0lHz2he86G0/DMML8K4/uev9f3Ts99c+fOuz9/6117/gc58/fvaha/bcdGEyKbTSBJ1CAbAkiFSmENPKdEo8VZmgfxLS2UGFQHPugK2fsmwsbbz21sUPfGEyTtu7do1JQ6sdKcxBc67yTHcWurjZvLBxub1nx+p3vBJ+FQAgaTUpz7XKMZJK5TYJjXPDNIEmAdjGM2CNKEFzEA7Bxrer2RtPwwCTed3m1CMD1unKYj9m9ogDjUaJyGB6r0rTqBkYkJgD+ICew7J9lwvs4JAxIzYBeLXj1rw+kQ2eocC2F0CXIVzrABOtuQmA11KzQgRpUJ2porwRizMegVnQeXoVAKAStRoAnRjdc12hNtsj2umWsYqnSwpvDkYhGyLLdGG94DRrp8yfcikumWLzn51CGETbkzaICNWAfKYFc8+FC2PIEES+Db2lEZktw6WCFqYxkCNuAorB48hA2czTbwYtBLgzIJ6C7DR+ghBwDGEJK9TGNq8EFoX+JAGAi9gCVtZEls+7kpB5Js000+jGzXdq/GC1J9ODD/AluboAVULVD4DnS8Wnof+rbuzMm9H/X8rOqSe18SUANwPcDPBmAID/Ct/2rf6nw3+1Q/s/r7zznnMtbnS5s83j1d12IbdHyeZJfPjYea1HS7iv4HPDtI/UEY0443ONVufCfZNrd6avfONzFrvdbJxdOnMcUP3NW64d5sXl7WY6mSRJj+LFNI2YOBaYpVkUzzhsvkwDDcbgsk0Vh5jsu+bDiFLcHm+98kbqiub7HtNK56yYWCPGKBBwKx0n+/d29qwOv/HQw5fHN5rJjocRcxJJlgJGygITE82KwfJxQAAgkABB6yLEbdO31E6kRuPaOEUoyENKj9xKM8bwBds+a6BwLbQJYUSAwJqN+wmgg0Ohi0pF9eO9RQzC0sBISETG7afmnmRYL+VSc4JBOhiy9dNzLwGloUsNDpY1/1o3GBYugMMVKEoITkyVnkByTLSCGS1cga4p4Sp7aRToaf46LA7Bz/Xxn/6KJbZzBIRNn1YOw4Jg65dkFoqM4xACcpWyxjoSNWVGNN3qGgqw+Qs96Dd/hTNUrqBEquqcZy8GQzAwQjAONohIgQxgPqc4Y8Awl/6xK1MO0H3OjZERIKAmsOQyAeTaDbLaPs5Rscwrxg9bgZVsa68gmDcR4FnnjsMFYRtcgqFqKD4TE3+1/N9QGtAdZEOWk1ju7Q+sur2XbOW56grZjFeGUQF5urrYlbA83sq7C531jXM371/cnsinLkwee2q912o2k9XRaLB2cUKxjOmo6NAoY86AEijUOFPjdkdkWQxzqHNTAg6YIQwZYTwwrULMpScKyG7vn8KjNVrqFlrp9cGxlxyaQLSoW5LGg2zUk8l2xDFG53W2k7twIr28ZFM/MWKrmRSTdJBnnbhVcG7iwoGVKhODAtBKqcgYzBeF0tq5+FekylBBE4ZDrdPRyBpM1HiPILFiVg0BjwEADhgjlvwDa4DYKnWBtUnwiDayXYDsAumYNdZ2oNiYcCChQC4cbL/S9fcVMLDA8nAjYCQ82WE/SB1kYzTOrIZp8zIEdMl8TBElH2MZRTeEYDRkwSIYkysACDyCzCLJkgelYFyISoEDlGFcEiGx9BUzg7V1tBZo9koqQACJpKyNNwhdtsMmG4wwHlgQOWENCNQIbDxDhBVmAgMym6Atbo/sYGybxD4DhBF6sNbMrAWZCjZUNaJxgxOWguMArAMAKElozbaBwKpYGEuKr3JC2crAp+lK7WR+AMaagY1JETAJRDZ+O85AkYjyqSxD5kNBQIiSETQjM9igvhimrCmpSARhQzMwOpdzq9sz22ooE6MIBNYAQhqlYIXF9LsaMqO2Dy+arpLAWVX37AJ4meCVM
2gUa4pM4LMvWESL0ixsTXLgTSPMVQQANtFzgDyd+zSk0F+ofPDBMWgFqN19Jm2Oxhx3voiE1rpQpTTb+EbnPGok3WbcFLoo0vOYH4/pXCPuH15+L+uximFHr/PgU4NLF6Kr1cI/+vb/Pr792pPt1a+96xN/77jCe+7aNVBf+Ja/vfT9P5q95RWrV1+zNoFjZ88+58a9eQqPnVjbHoiIN/ojlTS6jaQDqpgMBwsLC5uDYRIvMo2VniBALFqTcUYJ5JDGHAMAo3X0MFm2MpUlFLu9MLfUapTQpWPm4MAjYq60CJAKMAGhRmCV2e2zylEjMCdkrYPwajU5kG3DcRoAACRM2K+3PG8FADIaxQ0BIIRIIbEnp9BtxkJJPaQx5CLBbpFhzgNOeHsCstkdpVqKVApsJl1mnmhNSadJRESqGCkNJMkI7yIEEEma28gziAhYwTSCBSNoo9lCTQyoAUGjVckx2NkRmxBSUCCYsAZmJbW/N1//ymu+/PMITUGTLAeiphAaFSkAUIxW9qjRxKRWLUqYtQYFiCjIIxWTnxgR0Tn2mMXPQQsNwqBG1MwFaUYGRW5HgNimOkUmlFxoL09FAJe/y5hSmDzfSEDMxAUwKIHARIAIBMa/B5EJuBgKIQBIKwAgROvHrEXuOCgFRMDIQIXSsfTuM8xI4BwnYu1CBqEP7g1osk4wmAhNgTITTHTI0ixIG5sGlBUsHZS5X8v74HGwaXJ2mSnSZGYTFwLReVO5mnIW9DRUCVRujo3TZCQhdEVY6INxzyxhuhs/DDK2cGE6z+qa4KyHMMdvzIf4B6isFFZF0xAw0PMJ4bro2CESBI9mHDJDRKVtfla2EoVSdT2rcSRrFlQifl2dbEWYwaW0NmQTiSiUEHgYWpEZXFHxHP5kicr6HtdhqPlALoFi7fl0bHD0uhzwtzpsMCB7wmGDwoCa/KsozkChzhzMLNrqXUsyEQGFEGrSKBQOU5CURHiVkFcBZ5Ms/+Laqw/kH76w+el/9Z77/82nvrC4pf/Zw8vfvfSmE5968v5zj7z2a9+Wr5yOfu93zr37vdeePzX+ib83/nF15lWv/ZLo/ZuPfbD5xpc//47XvOHVb7zt1sP9i91GRz7+VH+r32/KuL2wuNEftrsLKk8LyEGSoChPoRm3SQIUICA2sTi0YgbIVZ4kUSfp5ip119osqTJHSamKh5hfDas/8PvIzMb+wKrxEKobh1X7rHmnziP4efuqi5w9T8KGV5GE4P1KQ0CKiHmRYxnYFZhZKaWUmgepgtcrE3QiGvCBbxERWVz5/KEB9tMM5l+ohLeVpGCL6pUnkTw1WhMa1aYGwRaQJOFgGSGyQdDGsKN8kY3ouPQXnJ4m17VRlq2rp/KyYJGIEIVjSY1umoGNbaYxuxFswoxoDVCKIX2P9jh62yb2TktoiHs0bs2OPrC1ZobtY5A4ywmVmSWVi1WNYxnWvJLPOLqjVHtiu0MjPvDhBn3f89qD0IgGjXB42uR4/mDY6U1mVrQRVbxqAZAMoR1IAsIPwjJIoEtQDuCk5leS2AZD8sP2FhNWzz//7RLUTp2MMjZFwCJosOogpBJN2rywFvSEzbPQiGgtyNhcPApMAwCAHEno9jfEvv4sGZIT3JPwIJadheLfOQbe0/XNBxOAKWyktho1rF9bN9+vZY+qVBEistcJmXmZ3SGCIMJoOEqYWsoZz+dVq5SZ1tdzjcJUEObJzM7weQvtVq7znLdy1IoksOBC6AIjuPr+3tXNlb/znX9n/dv/RfPP7/vEpf/3Vx/5xBevO7d8Naycf9djmqIbf+L80qufm0e3thsAiJDlR4v+m37ohUONr3zj9/9Mr3mo1T1488v2Xn30x//5z/QHmxQng/E2M6RpGhGjhkJD3IwzlbGgYX9bJMRaJJFUBRdFtrS8UBTFZDIaD8aiEVtAWmJHY2DhwtRz8NftDobn39JnpfFLSKY7srNyPqfRwxXW3z/3QlC2B4J1mdeobMoRCoaZNVQ1svMPmldKqxFnS2FmYfIH1yCCIbNqQ0Vki1HcaqGT0/zxJy/UugsIVlnC+WCVNHC41H6OtTgEAhCJEIm145CYkUr4Ey5vhdYxMicGDaxRg0FaYEUg6PFoCabqFxmdbexMBOyCQgCg9vpmAgCInBDLwG8PFsivs23OBu9w0AzM/2TMbw0CV1YJbsbMzEws3PprjYA+3125Td6PEQC45IBri1V6J0GIfg3HUJqA25QEzCHnF34Oj0JYDA62z5G9MTMHWbTQETPlW9ayqxwAsc3Q4IvjaLU5O2BpRmZwuWChzsFwEI7Wnw8T4ckD+nLY5hUH+DCIeWkpMrb17KpWselMZBPe4afjrvwFmEafQWuIoFnrEgDVz+mUbt4UAzhMI8bM2NpmmzMF9XNi25qifMPGtWPq0Pkq1PDlFUqdQeG5F9Kuqhfdh0sBFev96WH74seGWiEik4/gDs6trQRwT1++fPbDZj/VZEJ4BqnJ53Xhoud4UwnUDJAXTUZBxAyZ4gwAQBJGYgJrzcFuhfGDxO2Tg1cceMXVf/baRy8+uniKhu/702fd+QAOc4Y8ZeQCaHswbOC6VvvaLVCTXlbc9eff8ueDSzff8uyDBw4+784/+MA73/4N3/RNx0+eOXT1vvPntvJMd6Le5f4WNJP+YCQE5TxuduO0GCetzoULF1qtVtJNtifr4/G41+uhJq3J+YYqALCu2GyXbuYGmagvNnC1wYqEYAI1MDlgoAGojLbs3oWA4IMKNHRnqdojahecDtkLh5GBNQMy0kxTZAAw6YrD42r973zQUqhCIW9zMHUrdTmbEE+X86J65VnFZAsOxunJkSi8U/5ikoF+dqyegqwcRQMrBAkiKBhYF8zIVSuVigVrQCRlWhkEZ66UHQKXgSqcFadyx1+EsBFdOtoZgNRw2MFAjfTerUMZiJeZg3TgBCCADba2Cj1DJmgEYxqCbNCfIBsKtM5oIRrkbOkJsjBTIwo2kVMsSST8RGQ57to0gjKFMErfHLYBSmo6qvDQVxBYwMSYWGEuJwEU1hBuDsgSwQ+h/MmYJVlnaRe13ByZOsVk9sYRN7Zf16rlhAIJkY1yLGsD4hpQ8Fh22rG4pB+ncHDwTRtwYxUFYC3k54Xq5ADBVNphIEBVhvZizVqzBgQkEsIqhOxoyUWKnDI8JmAA6/0Vts726FQ8ldEZDXrGAp1+wYiCoUobmWU1umJAG23ZDBerG1+q3skBTSfuLlmfarETn73IdRIh4LfKdZhxC6booRDfT/dS8rhcOQflT2Gd8GHZjjVCIefoyUZcMc8dy0XvYhdzlJkLrRWsE0bIkYDYBMiWREQkJjKXa4SqDXEnaz918fL9J8Rq9/pLK8MbfugHIFbnPvH51eZvZTmItoR+nAw2H9H06GZ+VSPiLN1/cOX2XdePB5uXzp15+W03rD35wE/+4Pdce+21/Wc/+9gTJxnlcm+ht3N1cfc+KvJ9Bw6dOXdplOq4tbQ9GPQWFxBxkqVRI0lI5AyIsWFhsYT+xl8fC1WJfEQ2vAMq0LWdQielqAjS3EoSmWPv
yVCzQqVPI2Ip+C15jsoSawAoChXaAQkhEBldZLppWtCOqDxmyvNs4cjBOe76u2OG6ufIPhox+1ec392sK6B1cLqqP3n6IxikWSVtpK+OoSp9tadXA4VEfwGZQTP72G4mNE6w0GCJKY9B/CoZH10kBiBklJZ2rEh6NVguysg5NICF6hYSASJXkhqB/9UVHdham+e6CgeM7x8AsLbbYfGHW15zpkLzTIvybLKHGVtg0K7bRIed7H0PnW8ZgKWJrB2OKTS3qYwVwpMKYFJRGoaANVSvBJSHsmzHvG6em9DE7LynDbmBANNWvm7oJedtYY5DnsQmpQFMFww2xxBcjnIMRoVBXX90TNSOKu60MwJgZuUJakAHAMuRU/DBOn/N4vYQUc+C+wz1h/UxzPmVwDF8AYLxlX0gX/CGY7MEvyY/NpcTBLb2ByXP7WTmbM6lR8C1QYaLEgYSqSFsmEH4T82ay5avUIh8tcoxri1p0J1fJX+E7cXTCIZiMztikFu5j+4whKOEcoGq+BUCrOyHEWJrtj7NiKhBU2DqxUEsthnzFaRYs/fDRlRaM7NsjlHlrBqoE0EEWoHKi0LlnWW9nTfbUCS03dcLjZ2x4EExaFzkR46vD5ehdcP1qzt/vQ96oCZHh7+WRNnVx05+6cTJE5McAY996dHohHzDW9/9tS+845u+9Tu2BuMjB/du99fv/fifLSyujPP8ySdOxc2F4QQPHDj6hU9Fr3zd14rmjhNntzAdLu/Zw8ycaypIax4Ps0ajhZSzBfyIiATEpJlZShme4XIrzUI5OOogMhquxh8wt9dsCJQw3Yg/rrNJLreu5RPzkBB89ERiQGCXozY4ReVhNzpIn3PMdKp1IUSIaACmTn59YGiRrf/dx0VwGUW9jQ/O44ARjT3lXH8HfxnDdZgGIOyYejD0N5FLoMXaaM4QHUZQtfZ9g+w5VCfgNUUxC09oOETqYzhzCDecVY2BQuzuTmUZrX+tv0F2DuQJejTrh3abtKE/7OjcHlGZ7doIp+qrV+64E856+bRzY2aY2hdjJKwRcUY+YLtJVhQM/q9Bn1VyI3hlVjt+UaZhn4mYA75hjyRCsUy9UV25GMTANA/7Mlaok5mnPFgQtCsdHEFEQMKqUUAllAc4VA0uvqa/rmHjFbfTqfUxZ6iCsMEIp2ZD3KdFzMYzvsRw5piaYDXeppetYeCMXoypOVYgijWG8NOvEiUYJDq0syAXeNKR7X4ZQ39rCAhzxwjNLh4BT6/DPIBSw+ueO6y3oNkPwwMIZkYXQczOCKxyrmZsUu99/hTmlppApTy3BGyduPScwwCBUqB23XiyrBmZWRglnTSmirydbbcXqJ8W2G9JSsbxVtRU27ov012tpNvRpJ6c3P34mbV81JMLJ3vfl7fzm2+Kd43vet/77nngwWN/9j9+qwDdAPn+T3zmf37kMy+4/bZv+ca3taS86obrt8ajDtLeI1exppWlXa95xcv/8T/68bs+ML71ztfecOTodbsX7vrEg+sbG0euva6AIk7aSN1Rmm1mGpTWmrWBCeZsKM2xCo+cPzPk8GsJzR08qWxrtdBUlAK39tXFn0vesaCo2rJmUFprQZFf85C4JMMwWaMQZNSaNTm6FbFOkHvejnWVBREMAKjJMXDlFDyqMFMEKIlgDqdpeyKonzU7tRrpfOULxcpL0axYwiS9citAAJVEKbMb8UjBgCabsMHkhLYVjPaU0Uo1jApp5u3y2CeE6mjWsYwj4CJ8CO0QGVg2l8nlDqpxAmQss8q5YBlnoyrv1OQGp8tfDYB1oSCBAYg1O+xrd6eOgD39xdWntldmCu85BZYFIWgLAbR7sXZAmb2EhgEo8EsprR9xCtIhWum8a6Q6+Km5QID+zcHnOYbKpotSwMuGzEVRa9UtRWYivKDRDwADG42Idl2r6v2YV2bCi+mpTdefwgEhYeRmjRgZmtXFIvdboOqnzTZu3Mk9UWmWAtlK0KaLR8DhkIjK+GfoGTutmZmEMB+01p4/CN+tFRthB7HmhgTVkZefp5qxyBXrNc3zWp7RGtXv1UXomBh7n+YC6nLdrlRqnHH1t5kvzGvJZ3oGMGapltogjUiMpDWmhVKaiYFAiEVOi7wZi0Z72C+W8wsKVzaja8XyqRZ30ktaJNBZaNHiTr1zRyHPwROdC7s+d4bau175d3/iVW3O1371F754/xN/+Eu/+olHPvmLv/CLt998ywOfv+cP3v7bSVNedf3RY6dOXX/zGw/t3/fJu/78gc/es281ziYn3vkH/5bay887eFWmivMXLpx84jMUJ41Oj0Ry4Kqrk903qizXOmejnGPSmrnQaTEx4beEECHoMKoNZBCAzKzQW1z45awcDJt5mkoPRtOaeR5syFyZkymFVmX+AEISxACs6qqT8DCbQ+6FnFcuAWaqcMBc1UUhoon5MA+IBcOosLNEdYNeCKC9P97hKtUmZYpw9xcdeVRoVRS5kBEiElUWH6AMcOEHD0YcJQgVG1WYTdTGIAlZTxgNBgvYLRDOrLOuAqstbQDzDVKn8hcrjzSI1m4Ta6tNlw6FMitEYkZnAKZm5PWxBIF2rhk6kNlXVW91gqk6WmJpd8VqQ5m0f5FsUkJmNpJiIiTUVdkCoCOI5uCacONrdCuZ+IRmtO7c+pix4HCwq1+2WQqU0FpWEYBGs1QgGEiDJuGxiIephCg153mOkUQpmFkS5ZM0iiKtFWtNiFJKALTGpQI9UPPraA+0sqZ6DKAIgMkcNKWLkKnCcgPIYiAicP6y3tlaBS0bsiSQOlRwgyET/XjYRcQRArXWxuaZHMph1rkJaVlNw87MhvbybLw3SjSSG4mWfodAZGSNJpzqt0BmZiHJJCQ1TrIAoAgYWGorUTBbpdlmA8lVYQAiGQjlLM8pMLCC8hgDam3vo5Etu6mjsmfHgxk7TkeH2nuCxlFHE1t/X08TI7CX6rA7TQHhZzFZsPJoaBoPUzDImgJubyo4eAo8zH4elNc9qzvvp7/Scsczq9YFOAzwVvPl9a8BgH0Ar69U+W0A+Lqv4Mj+asoHPjsCLIBz5gJIMDQUx4UWkrYg2B9m6zwRBdYhZH3+AUGiFQVbLyE2Si7UmoGRUYB1evGSFQfSqmEOwrPh4wkxmHgJDIxaa7TMExs9rfaZNC2mByQSDoaAZ90cC+RJYTcvAADUuoQwtv9SeWkvvtHyamBNwEoRcEQAoLkoNCAiRUkDlQJlLeFEEHaXpADwkQkMT4IIqJVmB/eF8z7RzMjSiAlsRHFjO+bjPJs2UCsARYAIklE51zRywbmUsYXlMiQRgGIABUAavS0lIoE0MclVrn1YKxPAQBvrHzKhITzit/gbmohGHq8D7AYlxyVKEAYAAIUTqROWJmBaa2l5FwBw6WsdznPnwbOP/kQGz8sPV6TIwremqQkAEP48XrGd2lv+BfTUKZYm7DP7ylnLdlJoRsT+YLC42I0o6bYhHVGes1IqJAm1drKRYIlNm0bmXAoiLFFlVSPhujm8K7wczJ1GrrU8b6YlOWwPYEWTb/CeFfswo8vfALpExr6RslO
cdr8x61cR5ZVjc4NAsHJAArDBfB2FxCZ2d4C2tb/PYM0mjHtDaVESCqNC/Fo2guXXp3PQMqM1VjG1n7wZdkDgm1MfnOTgFS9VC7FvjTkoCYZngFm/Wv63KilsCSBADVorlSKNTHAJ5k4INziAg5X3dYUlqHBmU0x5WHwCee3cKIiozFlZa8fgIId2XPCDCgQAT+tP9VgDgPPGAzOwLzIH5szWb5SN8AEt31FraO7Jt9e/JGrmVvNjAOeWYgZDaCJCmp909aYG863LyGZU825UJaZDa2lVE2Z4/g1KwFsC5MK0g6AsQQOGG5JcvhJ+KMFyYJGAiCUHjFxaDrsRWhDpASUHzJdHcrabOSKLutVZ9deACfYJI3XQmH8XeM6iOyu1Silpy2mgKWk4npAUnV6EY9FowTve/qfnzp37wb/3d5lZKaW1FkIwo0HGIp6KqWZEYcas3BrloHL5LcIDTU6V6xgtTwj70FGo1dPIvmYVXUrD7CqRns7xDoA8W/uF3nB0qjDVTqGdr3DB7cz908Y0xFDaPgCeOWOOB/VrZXr0unB0CwH+shkb5+AOOI6cjdGEdg/DOUO4ra61sN9ystaluXIS/ILYao6Ess+nlyaACJ7teDr49hUoH36Q+5j3SDMYR52Qi4IBwH6VXIDJZciOcFdouEhZE0QEBQuRq0KTUAgalEkH2x4SEYEgRq3RMFKsWMciMUpNx39pG4BeK8MOIQijJ2NdaF3EjahQWGjMc0bNvXajE0tk3Wg0N/unLq8/sXbh4g3X3dbrrDZb8tP3fOTHfug7/9qrXt+NY41LB649uLjw8VsPfOq6ztpTnbQ4J/bf+DWFLtbPnzi+mf7Hdx9756dALnfEvpX0/SdfcsvX/s7v/SSd1bcAAQAASURBVPHD9x8TS61bdy6MLjDtK7qd6KKKT6xlW09eiDqtnWO5HUtELHQOgoRApQtiQOS4YBF3R/kwbjISb/Yn3/D8qwBAUU4UC2hoIDKe6shAWhWVHa1EQKzui3d8MMtkNVFOUhpC85kX/JlgR0RkbS+KbRaNHavVCJWMVyBjf4alRtzXnvumPLoj4y1qfiVATRoZq7TuNOHytKzXzN5rw0MbXNH+ElTXAGjNgNCgKYY5l9eOJ7jynnkFw1NwfZCeFple2NxtdGjTy8xepxuinimiyoFEIueGBNND1+CRBhgpXR1+VT4HwrjKZ5evICSCwtmYffVThamtsu1UOfLgBzd+z4waCFLqkmt9YqPRAID1tWGr2RwN4G1ve/NkBIUCzUWoEbTD44oKx3Ku5TA0MLJLKofO5hmMnZ47Q4bIhfnyg5AJDtGG77QcD4VGXU4/Gpj4ei4xPMHTR99YBQeMpi81gz2zI+DDaaFRZni+1sQjK7Meu/V3kadMF95pL9y8kMY0sbjDQu62cBDVJqxSO4ozD4/RBPFsiBCsUpCv15pPuvFzcEsrI5+azl9R6dCWbCV62xhlhyINAoCFeDJQaQdFR8YTPeiTAhKRihUlKsu1gsRkdEmBUEUkN7o5gVZKqTwHxVJKgYTApLlg0KwLd5ZQM2gtYqG11poBNRIJFCAEg1QambUkajbjNM0nqRqOCqUUNvuEcsf+O45c3zx9arJxWWVra3e+/o3/OPutj/7xu6969t6zJx545N4nrzowPtja89lLF25+xZs7+yiHotNutVtHrmZ+wU03i8Xn33f/o4ejZPePLn3hS8d2b//z5Wf9P/Eibb/p27vveUd6ZGXjmmsPXvuaA698XfqK277UhMc/d2EJuNVqj1OVpikrSpIkTdM4jotWPsr7jI1imxIZNYvELqtaUkyIpNlITs0Z0WBPtyfhLUAI7ajB5TI3F8lrKw35yVAP3QPh8dPWdgQD3ZzWpRfJFPQjh3Ft12COLpevh6G+5uG5kPoMoc2VcPZUDnIC9BGgmBDZZUFjHzHfzbfaUgXZT4mRpntGpy0GR82gs89ykSeUayns1nJ0iABK1xt1rU1HRpsJScLP87AeeLNZLznQZf0QpCsTGBgQmMiaXyOwC0U53W5AHVTwwTTMmfe8Nu1wSr41y0+wJyEcQg3QRkjv+w/l/qFLWRVSLmhVfdNnkQCgyOM4Wmg2i6JoxvFgYyKlNBHwZETk1sToNmdSiMwsAG28U9TI5OG1uVcYsEbChYVhV8ITINDd4qBxwNLsvoT1iIgoQsSs2UsmpHAGeKw9bU6EU1lEZhCJ05DiCvSpxvJ4GXSlHeEZmm+6kClc3R2s9lziYIGksTQ49DFuvC1+zehTz9qX6en48+a7C3+EgLCzhLD7alTUYAxTgwsZJrslKm08PvtotqlTKjSRfvlNS44emVrG2vN51YKyqSatczjqlhKOkEacTHQbW+kkzyDvtZPlVKdpMYlIib7WGgTmMEFC3bI3Wow4iiIpYhknCEKg1BqKXCtOGTURRYIQkYBZaWV80ggFkpGF5rowSjMpo2yUxomY5COUIk6ahWZiSSkkSXPrwtrm2Um7k0wy3Wss3v+pC6948RtfducbL5x78Nnj208/9MRw49TdD5146zc8K833dBa7p04+uvHE2nVHWiIZp6P2EnZ3yuX8+Nl1MbqauluffRDibxcYLf9Dxn/81+VGXxGD/kQhPqk/iTd+8o6dP/y9937x7GQYtVrt1aVlonh9c6vX6m1t9hFGkzRaXGpQMopgAs7vI5EZU8FcAClkQm6wljqPMUqnr8Mz4izLPJUIAKyvhOSmiewrfDb/Q8nYGB6Aauccqpe31qm1Hp8a9cwANQBgDTaxXAECRACBrL3ruSUOGJz7X/i6+6j9+O1PZK6hrI0QHXfoYGCtwRAu6fJeTyXq8S4b5RyDjiQgOk6XPK8CRqQ8hWURcUpHbheZHZ+HVuONaEzJPCarNIWWwjP2PwiAJoisPY6VuCHenW7WAj0j0il8iwJEiwDO893Lb9xBs1kIGWdQc1fE7t4DbRYamHqRGJIkGg4niYw6Sbx2YWN1dWk4UcYLwiQ9NbnZZ3bqF0E7X3hE5EB8SSY9tRORkpNF6YpXfkjQmGH7abrIAGSijLM7CRBuqsflAOAzzJtzoBzfaQnHOasm0JLuU/tYmbUfpyAjf6zI0hGRtdLM1b0EABdHAtDrhl3AVB10wwDsBS2hSzqT86tz42FvZjW1I5Vlce2GwfAM0TPFH9ThVCWCW3Wzyr9QknoY9D0g3ZBJi/Q2TuAK5cvnmVf2rzZSaAQaBn81mUE0uTXBAhp5DFLlo9E2dqJe0uRJM4pARDAa5YoLjOUky7IipzjWWueFTvNCsUYUjIgo2hKRSWulswIMfYMaEFBLYGOuZhYfSUpEVLlqt1us8kavo7m4uHZ+eXmFi1y2eTLmZmt/kWZKqaSVj9N0cbV37rHHVGeFFo/IpSO3HXhJR6wf/9LJ/Vd9YcJwfpTtOnTH8Y9+/CMf324s0yTbhPvf0wZx48JKxFHaHzSQVT4BLgAE5s0tKLrPPqzWLg7PnFtuyrUXfuqgyhde/M/yHPrbk/vvu1/IRrPZ7XSS5b3LFC2mRXTifJ+TC0LqSO+2e50joNDACj
QYQ2siluzTHodbD2AyodUpPGQH76sKYQBm1AEnXQpdnLGhDk8XEepABuO7YGYixwVVBUQMCl1QWAZlA2hQKRavgy8bCSuEOZa5nHPu2MnV2CUysQboqJUuQ1EBIoLmkPODYDnskyqMQUQoL3uIqyrGvCWnUZmODn8yaAURUFdiNkxTJ+5tG38LLXSyl7ioRkzzo3asULloBjYIdKbR4UoyQJBFLaTeZuoItNZOBxwEj+Zg0J4k8ZArlC675hDmx0Croe3KJIOvPgdwGYrSDtcC6CsnJDSjwSuEYgv6nWQFM0eRmEzynTuXtrbGFEkIwqZ79c68kQOA0yn6TdcApCsu81VlhTbWv9b/SntLaSgJjrAL86WWhIvZxLyt7KNxBptJqYSRYsLr7QlMg7pDNCxg+sKU1LrVuDsxN/pbF+yP/UlZMqXsOjhaHmKh6wo1U2C0iM51uDIQtgsdLhRUv7InZUKpMoARGxrWIZxXZeJY3wWoFkMYYXBUfO0RY0cSj0cZ5bV3TFsVctbeb67XCR+63n/iI2cajYI6yg3KPtcamJlUMkh5CbvA6cXuqHft6hI1Lj1yKco3I8TlpNlh7MlosdFuYdRqtJdwFCdRrxu1IxIMajLOJiOVpxuiDQzIKMHotpkJgZi1IDJW6wgAxnRfay1QKKVIRhuDgRBieXmXEILSYgwyatHW6FTcEAAkZEIyujTY3NntkuhdWrvQ6BSZ3smDXY3FxY89ePjWo7+rdTze5pe89KXn1tfW0vVJdnm0oTM1umd4Xg7hxuuv3Xf10XR9K9oebj12cinfbvS6eODA3ecubiDtzEWqC37g4YOHfzRpNP7jzxdHjhxpNptPHDve7LQ7ve5ATfYdvvHQdS8YjA8MtzYXGmOzekkRM2AByKCVyjlKAbdZTpRamYbd5jRy7aFhGOxj5zNThjys+NeWLwYnfB6VX6lv82KF96AETd56yzf1zNiiytRglqSQdQGEYLgLa27NhGiDjxgOCg1G1p7Qr+GeK/Rbp/pLmZCDJzUYVANf9qm20S6qOmCv/qsVZtaiTuV45iekRfx6ei+eUAeBaONAMFs/NO3mJCrMFVaGagNHl4tEBNYK2hpws43UcOW1m/k1jBdf284q2xESXxW5X41arHc0R6Qw7RJn+C2JMw4BIrIgUNDqNMeTXCAOxhlFkqQwdg01pgccngg3zKET5hCEEjo+NphgIC3wF4aZlVbeC3bmYk736Kknj/49LEZEBFGozPQoENXUSS2nM8f0w3aBAHPOrlkKX4fdRRRCAIIOjOQIjXJIA1QSLNpG3F57BGxf1Fy7cuDzRfgXyRhnzhxduW7hNoHTJV8ZNHjIiKVuu8pYu/UnJJ94ip0TAgDsT+NLxUDFohO3wtGYRsu/NTQ8TyLNpTvTkdvvZ5DgwoD86gd3uvOArLXgNUh6ajOF1QacXsNv+qc8KvC3fzTaiLLNdRBo4yRnGZDoru6CyViovAu4p9XY32ldtbhwcGlpR6+xMx00ZNyJkwYCMmRKT4o8ZzUcZCpXmUoBgIikQIFSSqGBFbNiTloLWZalKUeoieMOtNLhZk/GqGJWrXyoUYyWmriFDRpvrzabelKkWomFfMLbBTQ/d/q7O4rybO3ma/5kR29llXYIujbfX2yrSTEcNxJqdpYuboNoLrYWe72V3plPfbK5PYw/+em9G0MSNMzTnCAfDR568EFCfPUrdBI92Gw23vI11/3u22nvysKeHbe980O//fLn3ZrjYmepsWfvh8xK7zzwGa0Xs2zXaLw0ydqKOww5czqaDDBQE3pH8JpU88qHsDxUUx+080v2R8tYRCOVUYGvjL3YWym7coX6M4dUQ/whrKi07yPAEBoHRSQEZSuClwrYKl+eaKeOSv0qVRetrGDTHiuDpJhtqKiKvCKcaQCKy4cAJIWPE+L/MrMggqoQwjJYVF72ELDbOAo+e6yHD1NiY/NB6xw9jeDaEYLKpIYCramxyVyrhV0LdtpUsKzu7M0WwQaECJU160Bo6XkgDuwOvDcwAEQkPLjz4R6V1jiVS8RN2A2yBLiIFp3ar6ZCgay1jpUigEIpItIIAqVJrSnqobOMPxKQyWtrBUHMjtAQRh4j3B6zRgAkFJpMwBsFZW4lApBWuK0BIAIJpkGGQpjQpliuG5v2ycYHMtnmjTshkVLaR5Q0MQoAQGtlQvdZYVFwHzRoYcRTlqdGBlbAiKSUMtZhbOy9bRRHIYSeZKM4joscCSIiEABaQ9YA2MryNi5q2pa6o2jUQJkrrZmjiIEEQ5amMooAtVbDbmehSFVRsBCCQQ+yQavV0Cpik1qSEIzMiVlrrSRBQIGaCSAG8TfMrbMBbSCy6Q4rfD8bSVigiUAjQkIEE/lL+0xPDh+bJNyEmtn4h1nZgEYiYsJCK+PEHJnV1ort/bcpWE3pJ+MGiBwAsqoU5ssETGYalW/tjrnihkj/vrcOjUjSSoawFSH99mM38gG147fW7vz+tz350B8++OZXrXzo8+cP7mQR5ZMtavf2/cZ7d97/W/c8/GDy//xaun9lE6OLT12+p32pkcLkqTPxckff8LzoxANXLYo9rcWj7ZVWrG/cs3pULCwt6VXZmESYMscb6XiSnwU1zLJIyTaJCIpMTjLB3JRqlC01OqiGigohYz3OZSsv2qBHMhnFXciKVpQCSk2RynUWgdghddFUkOt8go17z3zLiPWoP1oU7Rdd+/utuKObHU3wa4+ugGP6SOymI0fUdh8nY7HEGkAB5IScxKM0A23vcNxo3NdfGL90+wThGfHo3r/5gj+6+CcCRazw+4tPmFW9fuHnL4y/1B4dXVh8wz2nr1uPn3tW5dc0u7fs27+1Wlx6attU60SdUdQcp19M4KgobI76ta1JE6VsQy7zFixm+QBRNOKeyodIKTKwagmRB/sZeDDa8ITsn5MUiKhViQK9waZHFSHCNn8JJZfoEcCFbJSgmZARvGutiSmkyATE08AuJpSBqGK2pEdGTaUUFzkRRFJKJK11URQFAgkjhdPMChCZWGtNID3LEUbCEmh5dC+DVVqHmSfCTpkZQCIygAZWYEwOQRAJGzIByYTwI7ChsgwCkOx4A+eGamSNOM2GFakwfCoIJ2NjQEIuAAypTgggoaTAAAAEAQIyATNqZGYRunG6fQMA8Bk4nN+R+RqxCMGUeQs5CEUZ0hpzuYQrwBOs1zQffHoAX8siApol66/6SGH4VoVLDs70zLEgos+Mi1amISx7WBdB1OQVdaLJ8eVlHbxCtbII52FlddsVDox9O56U9ny81WKG6zDFSdce1oYR/lQGiTTHwtUVLtRinbJTBQE2pNCFQgVCGps9lfGEqBF1AJrRpF8M0mIhaWHK44akjLNs1IpjOUhXlzpr26pBcthcXN9WSSxbDZikGkA1ZWz8LsFRdP5EUZ24mlGIQQe2kfPKNNsxr5o/UdM8BCJqZSLJeTTvWIdqtafHro6RffpqQbuVn8apyYHoSQujmzL/CEEl/E0HH5JKRN+7IFAfaHzf2Te//siR7mW9crA4f033gNgZLR/+ptfse
M1/+sWPP9kYvOBtX/fAA+sbzxnvXNp1/rEzrS/eO3jHb1289VW7fvhHHz3bf3S09ZHtR+Kz/awZwYGlZWwdHMNNS8u7u8lVi53Di+2ji4tLCGugRqNJREKlChRHSXNTj7fG40s4WErarDnudbNJ2sxACV10xEi1eDwG0LQQJSij0VjqvOgS9js6SkBGsoDFSXp4cc9mNrz7zDeNx+lCJPSk+I5n/cnvnN1tgjoWhSaBstvFTkunE0hzwRx3OhBHbYYsTQf9ASIQQJHlAkESAmhhSEwo0gL+7aPP+zH4dQAQ3Z2rCcDyjqfOvfeG6z7bWX5iawydcbZ97PEvvHjnm37qB83aP3drPEnyJ/Y/6+xTx8TY6uG/7aV7j03yMw9tsNg1zLZjVIoVF2MiYACtdRLHhSr8oZo+JiGj6XQwHix4aR8j1d1gQiQ38xwFqSFcfRO139tAaHRwjwBABZEHw1sQxI1nExDBPqE62JkJjsJr6D0z0QWeo0BRGjKd1VtZStdrcghm9mTENPTz44dZUNpP34k0bGBI0DUYVNW4O1iJ1twMeQoOuVW3EJir0ji/I2Yc/oMU1awaLrzQbFt2tAZHM0qIUK8Ml2gKSfi9xyA0oLbe2PUdrXUxDyIbJbk5LQzWZoB8KHMzhpK5gnkenf6q+KHWEGeo1DF/TVARNjmDq/ZJblXLKGtlqEv3v90C5RO2uH6qmNKPrXY/a2+hP+I1darD5eFbzFxIXaCOE6kLRikKpFylivJYJHqSAcBokC+32otC9jeyVixUUbRJbkm5nY/27lhYG0+2aJhEbRUp0dZx0treGqMWvW4TAba3QbSKOYs8c/nrwTes0xoATDnt2QpVXqHekfEZtJ2G/5WinXBNmBmZyeVS9t4gaNVfU8YmNfbVC5PrswpAra/gZdRcHikAAKVIozbu6AiG+ifNyICKC6Fhe5vyhm7l+WaWaqFx8DUv2QOTL1xHwG0Jk/TX7/xk/p03/T7e3l+9tdm9/K5f+dRoIV462nvink/uve6W3r/7p/h1rxAf+OLZaEMuQC8+tN7Z0Xl+L/v0/xz+4I+sv+6bhq+57b5LjzXWOpIxi3iZxQt2XbVrRXWpsUAxZWoxaXQHkwVW1+9fHY9XohaeX788KbbSdJBg1IoS7metRLeTpADaGI1HinqiRSCygVKYosJmIypQ6V5yUeeaYs6h1xoPi/aY+NNPftMdAnb0lkfRuLmvsX/4qznxb5/eLVstbAEAKg1FrhEhSpIOYFFkWuvRaBBHESIbz2YSQhg6z8HV//DY67e2N9+4695zF6AbT+KLX9y7tLKwsHuSFLf/wvEh/4M2AAAs/Kf/+Adv//c3/v5v3Pk3vq5/0b774m9560tveOmrvvM7e+nW6tJyb5kef+LsVn87araMainPc6MMtEHxAiK7sEGPAxw8B8QFh6gOjWee/MrR8kaICFB1iA/y/Vk+LGwZPJOtNKJGAsPYKctiE8yCt9NkQY1nmBpjPXFV2TV6c11B5PFoGTfUMu7u6jEXtR7d7MBLs8r2K6OgeT9VoK7HQQwACqxgbj688kMthc0VwqI2a2kcJWvR3hGdtt189cMqwVe9hKZB4fkI9x7RhrxGxEJX/GvL9TUy92fAM4R0HEwBXEklP+1POgfsiBVQO8RYQ3UzP8889/VZBEjdzsgQWDadlidT2JpAOZBgMXP1xIRXwj/3xz1EybUKENzqkL6xlAdAGPMrFHm1ZKyyPMFIFQyImoFAyCjamogexQh5Q+frZ9YWdq+MKEPGAbZlCq22SDWqIRRxA3qNrQno0bjbbve3NnudltbFJB9oBVE7Zk3oab7AUKUe6ZmD+bj1FCYDtpNqcHXXpvdlxmYxANk8WmQX3OqV2b1lHcMEgNEwoXHatNfJDPcK2RHc9Ss3cl7N2RWm66PmWIAJEEEEhECkvdVigdBE7iJvD/VTJ2GpJVavUttjICXiZqGV3Dr/I797w/hLD7aui6PVXb/zjobYsXIxho3//I49T33y8bs+AH/9W+GvfS+86U74zFOd3tL6xbvgD9+7fvl09APfvPOf/0ryno/0G1Haara2oH/yTJ4O+hre9Tu/B7sPtg/sGyZRfGAPSyoGW8tSrHTbL492PGf/viMr3f07l5ODuLE5WN8eDVEvpovrkBPBouyixDHnqIodLDY7XUzHo43tVjvOIetvDxfibpsTGe3bt4RJQxLBw8fPPHLqwWPn1k+ezc5nt18qLkW8vbCdXj594eitz/5/vuFzO7q9RgM5z5cXWpPROEvHwGo06I9Hw3emd0RJog0yVCOA1Czqty0/8Nm1+y6swUKnMymy7a1TGxdOPdnqZSKPGuODraVrTL09T3zzj7+u+cH/tp0+2d55lXm29gfv+qP9x558+EsXHrjr+Ki1uKvzng986PET66O0EFIIEeWp8jCTraZs1iGZQyZOs8517DIP/Jtfq+pqILS6FTB8hrenJwabWa46NtsFERAJdIiEybB/zuPXSfHQGaRMs+nhUGtAdd6tMNfLqSoNyDKsguSK/YcFpNNJViysI5wNHOxDk/rWjJnCBtHZl5rBEAhmloAu+InXcV1JaodOVlcyb8ZWzK2tWyXAJzft5pinpZenru2KxQdqDugx9a9ZlgBwbEP5blR4jAImIzT7hqkTBsGaGh6xqGvWSrw+8y0JyFxqYdnFc/AEhMHHXnUd+oeGbVowHTx3ET9mm1CZOepA5mzunioxb+VFweCXNMTBJp5qjcHlwIfY/2qOu54SWds3kYzrMBuWWhAiKmZ0gh0vm0JEpdRzD0Twf0e5+1jBbONu+EUOcm+YKNAWWrkIgoVACf4kMAPAHdc2AOCTT0zsE6YXX5tM9fYXL9//yB9BI0GZgGYT/lQjgECQAoSIMsiLCcYKvnR24fDO4X0P8s5dfPCwmIwyBiFBNVqtqD2abCw2WocYooWmUGksgUkuKNG8V25j+pzb7vzEse2FXY1s1JdFsj/pfObP/iy6ateOF98uhnBpMtgj28WljZUDy1/42Meet2//2uULv/Fr77rpH3zHb5567MxGChvcXVhtXH/gEvchH8DZ7db5zahIb9i7++tvue15K6s7W7C2kQ3VpCjSBKFIcBRpZmwXKPtj6ixsjlUiGw2tD+1vK5kN9NbFDf7wF09+9PhT56M+JKI9iuKLl9/2qjuXdiWNpWaDRCdrZNuwmapTa6dPnj1+YUssths8Gh/YvWvX8sLXP/+9eZa1Ws1eW6oC8xwQKYmK8eDJ1dVfAYDRxbc0k+JcnzREk4k6uwbFuOgkMl7unTlx6cV3vmJh4R8DwAQgXujSuA8Kt1XUgwwATkH8lpjuUzJR+TXX3frI8S9+83d917/817949+dP7di1vLW1lUQ9jblFGLpEwAiQmTy4UJLFPvRzjdp2X+uiZvbYaVYpAZFTi1YgpGbHD5RGErMaB9BMRFSaIJmnpQzBa3bNWySiML4mIn7tHbsA4J2fvjATEbKuy3jtsKu5A0qr8gpPZlEmMyOq6UYAwAdmqHesPSytJGihcj2NzxW76iJ8HuQ+
EP7dWvt2TbCUBxBREQR0Cl+RtVCRV6bXQ6Z++qeZCy1DqsWphLkivaugmVA9ANZEwKCUEqmDHyjW3ZPKSTqDGh1wjehiKlloO1/mE04sfB5cjxrNZV+xGaS9y42XPpXSkAr95yXtviGyrI6li9w/8xA9+2hasaEguL5uwUKxXS2nCwdgQDZGZOAW31w2/UwkD/8fKp5QNZ899mVmG+UO0CS0sX5jz6DBeZDxL1wO9jqXhoPxZAJSQhxTJBGQlYaJBs7zWIAQmAEe2r3Zn4hdu3iyKVWRYwxIShNsTyYnHgXMNvccuE/GcOYCELYp6si4iLF300Ks9In+x5qH4hsv3YlyB69MGvmlN3zjq2KViHOTYzHtGYtkOMS4+8iXLhy66YVr42zxmiP//Pefqzazrzt6zUWS965d+PxTT77zv/5StDXOX/JiWNk1vuU5jcaO+/Pxp08+tvyFj3/dgcOvuWHfwaTbVK1L/c3JOG2nDYnxOC90Z1Fl/euu3hU15aXR6B0P3ffuzz9wsj8+9567vu37vvGtr372tbsP7WTe25HtDp04BcmkLybNdKLyRNMKL2ebL77m1ji97XIi0rTI0wwUQKGPX/oHnWb78lifOn9GbuYHF3ZtjfQE2iuLYhV+BQA4ilVeLLYwF83duw7tWG1no/T45x/B8eAot9YffXwBAAAaH37r2e//UPMUqBWenC96EwCAsyL72G/9au8bvgWIfukX3vFDP/69WucygUmWbm0Poigq8rEw2wQV13P29HFwmBy0sdccsYKDa6JLf/dpfuxl8Oxp0BYimmDDTjcMgAyEMIuxAQACY99slVxaa0QbdsMzP+GLDAqQfaiQQAJkjW092wdVgDk1dA1cE6AaMkUHYXKNSYQ2MllfM5A6oQabLqOGIMlaU5ulNT9RdUsMQeTguXEfNZIv1OBjJ5eIaAbZVPvMzALKHpjL1cDHNx2NPwfAhAt3hWKqTXPA5Agt9sH63ckIxz1zAhRMUgem3hyILmfQOOZdZUwBrUAGEUGXvKdl959uOuCQdOgpayYkYXbvRekWVlo11yI8mAGar4Yjx2qn4JB9rX1EtPl6q16GzIzChaepktIaQRokbSgSQgBQwBKFJ9C8T1SWZc8/3Jy/Kl92ucD8wEMX9uxa7fdHGrnVamSpcrRQOTtE9ETJ86+KAOCep6wRqdEBewYCAm1NbXFqC1srCvw5pxcciQHg7mMZBLIKEbo7MytgiYQMJkIyEQGiRo1aWOjmeJOQAwYAYHrR0RgAPv2lDADUJNdJHjeb6RAbglTR10QFdCeYxZRGMinW0jyGYkF1BtxR3UtRoZA6SSKG6WD98s6rVpb2xaO8+FT6mcuD/qVhPwWApCHiBoJgDUrloBEiJI4wYZUOownlMAQpSHaTTtw8dbKxa8/Zs0/S6Ut43Q2KNaHUrNEkiBqn0GwRCJ0PCBocxZxwB9QjH+0ky92DrdZ1EDV29uKm2Cuj63b2coJzFwe7dOMUT3iSFYNhu9vKIdtz1er50yf+35/48ef9nR97x/2fP5uiulhoirI9rX3X7YH19QUV37n7wCuPXHfkQNKRkAwAxwAAm431yyP92SfPv/OzX3zk0sWdK0vf8JKXvOz6Iyt0YUe8unV6NBkNMq22RxlqbrdFxkJRg6AlikJCrhOxkelcNFb19jhLm90eahwMBp1WJ8tzlELEi5PxKBKFLtJ2J9ra2nrzHYcA4EPHnmhN7k/GxdXXnFAQa8yb7UZCfOKj97ZHQ9DjfW95LwAM/vQtn3zz/3zND/4YXbO0cSZb+tmfAgBswpHGgSNvfvO/+/e/9P1v/rqPfua9n3noCxmubA1FFElgpSaFiOJQTOJELVYSRoHlDVt9iobqrfeSrRAkzgN3ZQWDaEOmF9iGKrIpVoGRNYAm0AhCz4ZjzjpXK2BroU1EKFkXocjN2zbX5N7M/NY79gDAH3/qTCglNp+19vGjyknZayiR2SqJHM5jBgU6AtRoc7Kx5+BrjJ//7IHbbATsrabBiiTDHQnfKlAJdGJpsGF3a76vFbjtzOD8gthfmcKvdQSM6EMHuth7U7s+k/Cp1TmyJCBAwMwcMVlPUGNajm4warYIWgf6eSPOtSxaJD2Cqb5Sx1KmSJYajC+qw1JKM3NMApwOLxxBuC5h8Qi4rOkQ8MxXFFLIRqJzA2BQVeGzM3N3NucYTBkAaqL+gLN3WqVSQMRaaxEJrgbbsktDaKQIRgRtBl+wUUmDcWQy9o1FUUwmk5dcvzi9mH/h8ssf+d2d3auv3n/jSm8BNJw8uZa0uhhonQNyYTYCNgFpbcQPM6lgdmFfbllk+JP/MBMBI4OiEgGXN0RzgRyRIDT4VzMhSWRElXEZBAhAPx0CjpgLOU4VxLhAmY7jdKy0wnakIZLjLTGM8mR50t1mUDtg1N9elr0RZlykiVKyEW8onfeLndhZ2tuX7d46i3v0Xee2ttdHExYCkoaISKGQGBfDTeiP4MB+ypQerGNBvD2Oknxnp7eyeyUbFbtkX3W7RRavjQaXhukwY3XuLA03dNKAw0ftbIQEZrk9KjY24MAq6AiE/uV3S1BbOwV0Hj915ODBq1/wnAvnL965c/fOpcXV5YVeLJYS4H5/X6e73IGRhGwCFyN177lT8Sj+ka//B6cxOvJ33vbEmQtRnvH5tfz8xeaO7uHbn330WTfsaS09cm790oVTD9/1kZ/4wb//4qNXH+yI/sb4/MVzUi1sq1RFUSNJmkJHrFnInJFYThopxlm0NelQe8RApJu6UNjJtNIETChYM3PciPrjkcy3o3Y7gwhyGbMQqF/57CUA+PDH0sbewbi/vB4V3XxjOVt8fHLya259b6cNg7Pnov6ge+N/AIAnv/VHdn77N47uvO2xIezYATcKBIB4cTnfHrQ1DUUh1JLCjXPp9v2PrWWqg8zEKhaxViW3ECqkDDjC8KqSoUfLLHUe3oZxNszbXpbnM5nWirZ+mBCKmhkBFCJoE80OgDVhiICnb1Nsb6gq2OZ5QSEQhFa5J6AhoBhKPsdNwSDgd37mnEe6EIiseUrYZu+vcc5w0f0AtdZKc0GQeAQMyIazAtRl/t1ZWGAKWZQCBgN6yp+8aFpXwBFHCjWDNltmIpOUXM00nCFvF1vJvsM05YZk8emxzRnm2jDfsdrrUD3CDn81CPiJ9dLSlYkocLv0Ue/1lPm7q1NubbijutpvEICqbvDlwPpsUrFwwgVPc7hFsR0odyvMVzFPBBTGMvXRu0CroiJetm8hiACHVygmFGD5MGUnZNBqUWHx/WpoLh9W26lTsnaDDcEIFmd7QUKM3t+6FCoAwC37JTixV33tas/nVQvKzz3w/a/bef3fe/8fpxsrv/vmf3HgqmvuP47I/d5CogpMFeuCO1GSj7VcpI1NnTQmdxxsA8Cnz6ScT7qqN8ExIhIZd2lzaAgAcs4jIVVeNOI4S4u0yNvd9mSShxpsPzLUJeUHAC84FAPA546liJgrBq8aB6WUYgQiagrrcg0WapBSSilF0jYEdiR0xzURANzzRMbMjBpQ33GkDQB3PzE
GJsVFSWzN4s79ltm88+ySVZr5Org2mAjIRx3Ju3csdFfgxDa853P3fOjzn7/tLTv1Qgs2LkNaLILavHRGPvu5BbQIpEaEyTaduaD7Q2w3G43O8o7O6mK3tdBcFIVGQQqHQn/yw3ct7Nm73VpK0xEgAhLGMQ9SmTR4If4P/xNX2quX9y3yF754aCG59DP/eXjTnuRNL47uPz3oa1hcXt7Id1K++Nz9cTs+zL2VHc3duxvPO7J6YKG1JBq9DCTCZco2Mb58YYNEfO+nH/ijd/35xcWFB86chIzhhv3f9aIX3dlrXntgj1xYLBgWBTZ5InWSN3CkYbw5aapkPMwKEW+Nx8tqclGNJzHGMmlqKYFznUYtyRnlk7wdt7RSgJiDVqRRACoep5Ok0SqKIo7jvEjfeMsqALzvi2sW8mhHzjJordMi3X9gdWlZHkoQAE6kfPF89uSTJ5NoETl/60v2AsB9j577uZ/5p8SjUyePf+nC+V/6ld961g0veOzJS41GTyYiS7ebrTgdU3gBqXpDA3xgP+RKhRVKu0hUAbMYeD2gQGIX1lgb6ypmlqIJoJm1NspRJkYEJoLcM2G+HQBQrGwWYSbWBTMjaSIiLQrWWmtwcdCYGZQGUTd6smdbexhof/3aO/cAwLvuPu8RdminorUmR/IauObvLAfSO38RzI3j0GzTpnmVpe9VEK1zHqPog/mUcI8BAExsf7s7ggBsQuLKrnl475NPXLmEHBfOkLciBn7AtXHPaz7A+SXJU59jYCjkA+jVTt50gxiEdwkpi9rk2f21YgEKB2u6AIC59rFlHMJ6DQQA5QUaU/mrQ0RIRFpVsK+bCIe6jfKagR1SjcgNiqd/yeG18nRWK+vqTCvjnzdr36/P0zC97DNfrLXy9HWq5XB65OFjWz9889cURXzTd7/4l//+f//Gt9x++qlI9eOJKAo5ai8tbG+lzY5cvzy4qtUhtGGkxFiIjrgE6Qo3i6JgZuO4rHOtQAkhkKnIcklUFAUANKK4yAr006sWHxEzXEeTzsWL7gE0GkN9REJSSnElMo5RkxuEWmnb/mddddmfRmaTTpFKA0tmj4M5iGfETiGCAEZq4/JYAhqPR+aOjDBqKp09dekSrkGy0H3ri57/da99/qm19K67PvnffvZP1m9/5WpXv+nFgBfWxFWHVP8iYBdkU9x0k46UPHdpvD44c+bimVPnQMu4RwutzsLi8q6F+JWveplkSFFfHi5Mxmqiin//cw+vPvHoZ//0d+FNLxWv+/61Jx+XDzdFC+BoZ/hdz/vON7/lxQev3X6j3r2bTp0997GPf/Q93/XPu5M39F95+12PnwC1mnA//cD7JMvVV74EV3p3HFh9/lWrz7n6wL69Swuq+PavveMbv/aOFOAjH7l3x+GDn3/syQ8cf/xPTm4UT67JM2tHr79aLS0USZsWtw4Wrecs7tuxtMA7Gq29C02td+/ooVDXJvtYw5Nnx8NsvJjEcRRvDSYLKkmSaAsG1KPheNxtLLSgu3Gx32gVu3fvWt/cajQa4/E4TsqAU3ZzCaxJFAIDttvdM6cvPvnk6BAAAHz+c8ci2Vpe2hk3kvHQBuLop/wr/+U3iIEIQMDJc/0TT12M4iZJwRqTpL21vtlsdEv6D1Gz5dTqqQmCqzdNQzMzADHUpayIxp/eoiY0th1sLJZLnWiFQ9B1xGnBCzmHWjCiUCAiAVjmwy5HUr9BYQlxQdh1iCxr9QOdaHlpfaUyiq3BlGVaFiudRWLWeqYicQpmBoXrNcEhe2uL41gvKl2zAQBqM6/FMSybr/Qb7PPs0TgOuD6mORw9BKLRMN0eOIByeAEB4PhWyZ+pqna5xKNY5+Fc7zT90NMp09A17N1X9s+nS+F0LTWdt4ZKv+Qan+YR7QWwlEVt9bQ/S7UfXFqzOgI2zZa8MrNmZGbhA2jU7uTMWQUcfK0EC14lI2Y1xMwVDrj2wbZoEXi1m+pDN/I/+vzPowYgLTEqhnJzLdu367qr7njj+IKOtBwOeZSe7XR6VHRxNR1tQcT4goNfSRPir5avlq94+dN7zhbpRGudxBK5YJRJuxcnrf5gVBRFp91kpVkVAMBVWIpVb4WwzZDZqnx2tYL65qdKOz7Qg1LKVqxEzHDZuKfQg7aiajCNCCNkRVSFBfU2r5rxQGcTc5fDGdXAoy9vfsEqAPzpvRc5COlVwdPBWLw8lacCfZifCix79BU4lETOYjFnYLFABRAOQ1WtlEtJLUGtPoTxJJ62R88E4+zAIBKnjoj5MM8etkyZXj1DtabZFahZ6wXnKST/fdfhdlZIKmUjW3nRnCly6gRcifwJHH9rNZjKVDxeyo2ASlujIeFkL1orrRlcOjCsiNDtmM1ihysTBhUJ75j1jDbdMhuH6pCqqG/QvInZOxT8bs+lu6IAyqhOTJNc55jrVyiUME8j3RA91+pzGccYIZfcUBArlUVJsXxIbvQfuvyBJ6/ee3Oy46UHVkW7u29tM02zcf/ceKRld6k3d35fLV8t/3sUlFG301PMaZpGDEmzkabpaHszSRIpomySRlEEDhYYLwztvD/KCKm1NgMTBHPr5zMShuU1iJMta1uRXWKFv3QP67OoEgDoJJqmOfZBr7CEV2i9BthJ4BBc9jYIAMhM8IuulJjY9x0o1EqCA5Cd5RpAqbsMxZ0I6JOsTHfn/4a/ageNS+TiVwmtKZbfGzEnEtRc7FL+6KehQ0H0NL6T3vjKrYUnrOaFufYV6iPhKsrx3eAs0QdXcXCAa81Ggt9jO/96vslZU5/ig69Q6ntTJbh8LG9/bsJmmdlk53NHsEx1p6faFxbRQkmWmPgqtlmL29wpdydVGSWwDdDti8bSTy6cjsnLWyvknIzNYMi5zNcOn13k6TX98mXONc4YkVkqoUBpgFhCli22ku20T/HaZPDhR0/mFy/0j11eeclLb312smNxL1zg7Q+ceqjV3LGoFvJBLjo4mTRMzOpERsycZxkAxHGs8okgqRQXrKWUAFprFUVRMRWir051Mtx+uAEA9xwfI6IyOdyYCVkiIQGy8T0Q5hQYzZBm6wFfM74AgBccaQDAPccm7MKt3Hl1CwA+80SKXMlprssYCEAa0eSh8t4BZrH1jHNo3mUSwEQmDBbpDJVC1WQcy2YupehPQI61UHvbraXl1nY6eGC7+B+//nsPXR4+LJ/CX/zdfX/7R77hX70kG2wCxFBosdCMIogLyFMsxn2lMj0uYHsEiikSnaXOyvLCjt5SqwmyACpgzKBAy4gi1recvurySvPx81tPnR08cPbiBx+8e/3n3i4bcXzbNSO986Ybrrvj9hve8jXPO7q7uSPNMY2fWh+NVL+7czmWMeQIDdxSsHV5/dd+8ReWb3j+++57fG0sdgo82syP7Gy8+tWvXd17dWvndidr8DoMxnh5e6wEZ7LIIr2j0ek2eFez0WvGv//Rz/+3j3342dce/p43vnE3trbGsLAHzm1kX3z04dNnz8aiDYDPv/no0aN7Hnv8AjM2Gq3xePzGW3cBwAcf3JhJ+iMXiIIwfuVNXQB4/xc2ABiJky
QZDvskRRzHDUbmYjgaNpttyZTmGUlsRFLrnAhJQJalsSRPu3qAhohzWac5ZQZAQ81gYskbvlcb5s38TsL2498ylvwmEXIFr6NGtN6d5ok0R1sbQEQWRLvmLDiqgmU/SGdExn7K9icGYBAYRKNkrbVGGeIFmwTI4nkDe02+V7RmzyFChelLPYXYptewUmGaMSaTBtvsmR0ImcV1aHhmottad4G7osfBFiX7la8s3YkBh2zZdAe1EuQ8mFZXwNULEgCOb6uyTRShuVYZCK36Yq3f6ZZpGuEHFaZXfx7+xZBWdLhv9kgMSRi498ynSUsJsKqOZNrVOGwHEY1Vs/JJNxEBBSJC4bJnVBfHWwrURitZMFbilAkGZMixzO1jXgnJOt+s/3rjXllZu3k8cfh1Xn2Ad372ZwEwoUgxI0qZAqNioXOtlRaso8tn12+/4YWdeHHvNc//3OavfPLcb18cb995/Xfuyd4a968rMkAaxnEsURhdrxQxABRFEVOOJHMF1jSRGYmJUAXW9f4TOysYcxduO5QAwL0nJhpMdDk0EfIF2lwPmgvAiLVFjYjIQNboI6RLmZjZIODPPDEGh4BfeKQNAJ95IgewDTJbC5EAG5eEnTH30FhuxLRPYCE0K8BCSxaRlJp4hGrMmeRet9hKhdCFWlZUJHR5st3I90xa28/q9LqL42an+Tu/9V9/7dd+7dC3/MCfnjn9Pd97DZOgVo+3x3T2FEXEh6/S+TAumhDnLRSDJ89lm1sQNwCIYqB2c9fK0mK3sTNpCCmU1sO8iEScp+MoabRRFAQkdDqZnDp5Vuzc8fu/snHfpz/ePL02eOg4HDrwvDe84sW33/jal9x+08HeNgBvwejkYAR8sVhvt/Prrl6lpBWBuDTIzvQnX7j34Q/+wfubiwtni/5Wa/kVB/bevLvznDuv2XtgpZPFYij7fXV6YyAyGqnRJBkdPrLrwI7e3Z976l0f/dRNdzwrFsk7//DdvUZv/+49dz7vtgMHdiHqoj+5vLHZW1waDsdCiFar9bJr2wDwoQcuO5LXUEJ2xSVhUWhmft1zdgHAn33ughAiisSoP2l1W0oprYtC51rrZrtTFIVWgIhxIossVypvNFqsdJ7nAgRWVUt2r5W9cbX9ZSiNsDhkJT0jVLE1AckIjjoEAB/i2DFUgakBKwDgqXbMMFQBiGCCTlNom83C20D5i0MMeo7qECqwlwHgTc9fBYA/vfsCVIl+G1NaIAaWUJ69UjWIV/JH2uPgCqqqBNyYK5Etn2OlWf85DEviSJt6Xzwt0J2auzUZd4/rvU9z6sf7pWz6mSBgv3szddCHewIAjm+Xg1Bc0baWYoepdy3omZW4mJmnx+MpEI8dQyppHgKmakxkHykm1JVW6tsQSJUE2p7W83SNqQsABRd+qGGaB66SpeUAtAAABaqcowk8XNQ5OfOrv3K1IjQZBOxdkA0CLqhyCkpns1mDQcQb9ojZmHVqNObl2V/dk3d9+mfbjTgtJiQlK0IgJNJAWZFFicgnarW3ryd7xWR8fPhUJvvtXvuxnZ9Y7r/g6I5vhY09XZhg1BBCqILTNEekKIoKDWmaLjRgkuYybuUFK8Aoojyd+HuO1STHjCCh5EWfe1UMAHefmAAh5iyEEGDSLigCNMFuFEgO/OWdggDYRt4x8yUAeP7VCQDcfWwCqA1H+8LDLTCRtjQIG9TbeQe6z2TCXDqyMjhdora/BjEXikkAEWFRqKJgjRAJiOMdY7rUWhPjrtR6iNCLmlRsp9BTsLWpes3RZo6D5xxYWN3Z+VRKj3z2kU/df3zplUWxva42NtvX7M8eepRWV4vDh3k81lrGUqy0C6VUJEgApHmhC825AgQQFCVxJCiWgow0RXPO1imLCQ72lkUGRQxUQLJ5/fag/47v+cl3fuJT8Kybmyd4fKB57XOPvuZFt37zW1/XasqlBiYKttbWj5/htcmkm0Q79Hj/vmZj7/I6wPra+OKprY8/dezkqUvv+c+/f/To0V23Xn3LTUduXV49evCa3fsaW2uDXtS+sJX/0V2fgsXeY1967Nz48uTypX/xQz94sNdNCli7sJnrSGkZyUwBIiKJKMuyJIpffkMbAD76wLqyZrSELjAfIhaFllIi8qtuWgaADz2wwRqVUs1mU+lc6yIrciJqtVppmhe5jmJWirXWyBRFUZqm4/F4eXkxzxUEAB0cNJi61raECDh8UTu44cGoMbMSWvrPFb9HNOwxBVhNA2jl3HUCcKyZGZUk6+hqjj0bOEZaGFBjELCBIqiZqc7wzGJ+6gg4dAW29V1SMrs4Hr4F6LBSXI4AqPGKosyGbkZRWdVpCGYwaBhomSuVSx1BldAp2zQNBFbf4fSnIpc9DQ4ufWlq6zjP3E17h/Gg8rw+0ChBwipls2Us7ApCqu5u+dwbp3F1WoHiJHxlLgevK4yIh30iOODmzJljrZTVAYfHi10kmtpqIiKRKEUrjjIAAGY0V90Swh6j29jiRLZ/s1wVPVDluFfzQpd/nYEzOyWHG4/N92coXHRev7XIWZXil+5pRdC1ClP1xSildLzSTXLQGbESWoECBQ0ZcV5EqNYHp7G3h6J2nDTacaOJ3avuf5Ue4y2v23P8rMyaHT3aunz5slK8vGNno9Ha6vcRRa/Xmgw3Nra2FxblOFcAlCTNSVZEAo36rVwuZn+jzODC6C5G0SLRYBPDm2h7aAMj/+pCmUQaXLtaJMAi6XLx0DQsXHw00zIbIba2AwCoplbRNhmDzZVGVlwWo8y1SjlDwRFSrAlZ8lgP1IBHkcZke3kgB4WeNAdtDeOT7WhpNcXJ7gU5iD93Li9OrO9vrbzyuqv/+i1HNmX80fTDQ73/qfsfKqBoLPfUIIU4iQVkEZ5NBVwayEg2Vro9kXRbSa6GrFQ6GueDYU40IiISUSzjSDaIAGzqz81LQy2QBEZAk8YDoid++d2/+WsASsIHH/zCI59fu/vRBz7xn//HL/+tH8Mbr796YedLr7t59a13vvZ5+1+9ui/WcKkPX1rLJvdnarzRpElvaeVv33bn4svin/07f/389tZDDx371AOP/dn2ZONTH93ZXH75i17+no/+MbYb1x65el9v8RXP/ms37mnc94WnnvjEfSsvfFERxzpOQCkuhkUhWp3e5uZmoyUNgiyPR8CZGS8UAIijplKFBmvrrpQCFkLE26PtJEkKXTQajTTNR8McmSTGSveRMZZJOikUQhK3WKPWLryGA6T+1NVEYv5oFXM8EUJ8XDnV9kVCZCtGARuAr2JhZJVcaDD5dPuCyNTQlkBkRECSyBZ8uTVyUBqNrUxwiwwPE2DBGjwP4Zj5SkQ2UlUwKfvXsUNWPOlv0/TQLRpiQI1VE9oQC9R5uaBC+JMAVGh5X+OGZHmzkvWvdz2NfcEOPWROn4Ypx5ODMtUwB4Y500O3rcyByeacXdUjAHhqu1zcef5SFVFwgGaMyNfVcZ8ZkTQAGa2A1sCstaXaEIMgghVcP4W9AIC1ObLWLw3JRLSgPCugetA54Minx68D3WqVHpzBWYY1TVgl02xReOO+K
8WND48/vODw/W3Kr8IPv/fD3v/vlmnV9dfXywT9tCyehXHKmJlhLJ+MW9V2qhlxMTazaczNCC0HfSrJvNwXPW9AHI0fACKNIoxS2UsriIBCEABcBQqCq4Hlg6Kbj+JpiekEVEFMUCBioKvFcDgBN7aBQMjvJBdMVBfuBD4B0HQEmru0hIJpKPd+RALoecr0AAZMSGYbmuI6iQBAASFBVnQVCSB+T44tnjFQh/VoDH2CAgAKw9ZvaJyYnC1kjlHSK8wqwuK7lP/ed5mJ2tpzXl22wZsfU4rRvxODeO7ARVv7ps9desfkz3/jatyZm7tz/7NyKM8AXqUDGtj0qOc56EocS7JLXWLmJ+mxaOG6Z4YAoMDeqNDSLycG6fCENGJasDufzVdeud/2CQOyiS5KeR/fvzlgl0t4NKtGP7KQSlYhoR7SKMWbcYuDqiq6G3XI2jnx5nIR1KoFISnkqlpyKVaeWFb2ISYsATgHgl+MNvMRlAQVOdnc/BYBPAuGpyCqlpJTWtGAQQjVSvjzZkOeVqo9e7vue4OO8sEWIVwb+U0HiJPmZn8D9WodUBEwIcbJoREr5VwFYnEq24idkok/+fXlI9n8KwAjXAPgEqU0e75KIgZ2655MLDv4qC45XQ1+AFzV8PH7QFwPwi/kEL5VsPH76Lyaun7TEKSSsv9v/Bnv9pSv3HR0quPoVWJ0ncyljY1uicffRodiyld48PrdxY3N84R+ffb6k4lVvvvQr37nnmk+8Z/u27TNP7e07u9WeKR3b/rFCqfjgI39esWrl+Pj4G9/wuttv+/0Nb7l+/eaO9euW33/vfY0N0NYSam2NRONoLj3ncj0VbcnN26YRu/LyS+794x22UxaCr+hbKZTJXXtmfKZ3L425VvSRhwbPv3RlspXqeTJaJdbs0bOX1RdDTZ3hpUcOHSyjfMKxfa91NNZ8bn3j92797YZi6bZO5DTVZ5/J3HnVtW/7+EfkBzYlmlv+/UH23OJMsKJpcmS+q5tXZ1qP7ctVqu6ZZ9eddg584xtWperUNdNchguuGQZN1bGZSaarrKkLRgaiCMob1qVy2cLIuCFVC3xDIKf246urUyslzgNTQBmBGgqLSDS+el3DYw8caagjCqErVrbl82PDh8yyU+1aYBbzOFPkVOVu4AAGg0SYXzn77G5dbfvzo89z6qc6qOLjhYtabdve/Vx68Yo6ZjUOD49hFHDpU6rWatIQwMIF3aVyIZ/Pcw0+/LEPf+5zX/IcETdCv/rhbbf/6hfSdVafp49NTY1OzR8+WnzrTTe+460fielN11x93dHDh9781lV9Sxvv/uNTO7dPYhlTDYsQShU3FUuG6vJYmkC8gItYPDQ1Tiez1tJlxMpEx4dyIOmas7SxfoSRb1cCx2YYU3GCZFtzTzBArYErACFE5dyTwAAAQzhkcssKMAaissAHAiEuXEACJBCiCB5gDJRizweMNIQZxjgIAoTFZZdd8NRTT1UrAQAFgFrAU0opBDvZ9IOiOBdFlYAUCpdBfZtSKSmhGK5v9I/u900a+/IvSlsfgHK6aXY637OU9vbZlt2mGlN7j0FlEL/hPetyhzOeyO4dUIaGq4tWy9O39H7j00dNXava8Kab2cThurmZdLIOxgZAj+h6xABZKWbY3IzS3AFdy4NS1ti3Q7Qu5O09TDNofoQh0A7uDqTAismSyXB6qoqASsRqgosEEHAdAC/sszKTCAXc5ycSXUIwAggjJITwQSiESimlFIQQBJhxxgSnmJw6dZ7qGAGARl7wgGv2ink4+KsU5RMEK/QibjN/ocToReFo8srt6l7iwZ883Knt7V50kvyF3oW18TV0BxEjlANyJDBCFJBICBBCqKruMYdpJOIKhogtIYFomtq6iBGoEmkREpHYsNyKbih+IMIBYkxgjRANl/2KFMhUI9wBbGDf96WUCqF+4KoqDYKAKATJFwDs5Unil9/AV7ufmJCTIc2ammAtrcheRVuSI4oxBs5qFTKqQhBCnHPphxTVyqm4rgB+3I+VjGlV0dUgVQpYVJu0i9F4DDuBJinD4GNZi5TgE4FohJBEIPFxTeCTy6ZaCayiKLFQCADS+bxpmr7vE0KCIKCUYi45SC4FpdTxPSSkaZqu7WCkSAToRHwY175/CYKKWmWqEIIAUhQFY+y6LgGFUAwAAWMIgBIFSckDgVQUMEYIEUIIBEIITdNc1yWEntjt8eBzzShGJ4nWsvacnFgxwilxjpPjGVYxcMF8jIAQ4nuBGY7btquSCkdhH1PbzdXHImpgooD/+ZHflQN2xWWvefqxfZtOW9G8IHz3HQ/edNPHHnzs9tM2nykCtuvZHf/xtS8eOTLwg59+/+xzLjwysTMVXui5bmtzwnWqvQuX5PJ5LuQHPvje9MjMHfc/DWrhzLPWnr78vCsvu7B/bPSdH3hPV7zzHTdc/v2f/G5lvX3lotOmqF2ezt56ZBw78OU3nPfE/sxsev7MVRvSu3hDS6/F1YQbWX/52U/u3PanuZl1117/0E/vipx/2oplXQ985T+XL6HMGulbZs1n8uPjJYGzyToabmLF2fAlV9Tt2ja2aFFzJOoVCvmQQYDEpycqqZiZSpqcOY0tWqloHz1cbGlVGhs7B6eHEDJHx6z0ZOwDnzjt0ftHsSguXhErFyBRpwS+mBifLWTCF1+x6mc/ecTlEsvQ+KDdtthItIb7kstwcXfnoyTcWGy5ZX1hwFh6+9ZyfsH9K0eKUYjWtQba9Eh/3Al4fauz4ynW0QV2iaw7h6eSdb/6YZZgo70rXKzMhyPgVbVMJiCIbDxPTAyYkYQ7M+mXiqBSCgqKRCLpdB4AVAUIQH1D1LIsr6jYyP3wp27YsnHTzW/94BUXnfHQQ0+XrURLTyE7G5rP27pm+oEuUI5zioFw4hEZChn266464/bfbuOgKIbavdqyM8nJ8byqaI7rbTp9xa9+9culS9ZJQRFCvFaoUsM7jCXHgDBItHnLyjM3XJIKt8zmnr/77rurjv2JL5zzmx8fcEVmdgYuuPT8mz/+1rPWX//Vr37xnz768WQ0dOHFp//x7q3RiGFG5cSkizBeuAwLPzwyWKyro5EYi0XD+XJV1yEzo9Y3+5IZ44OkuYsJ5BYzKQk+qBXpYVVVXYc5DgN5oq7yFH7DyRn4OAkRUyEYpZQQlEql0ul5znmNx0YwcEYBFIwVkJ4gHmImNSyiAgjDtRwkI1JaGCmYehFaF6r3pmcqIDEGoWGVA/cZB6lhoJJYCoSIwnqWcarI1k7NLoeJ3vToAwffc0tqbsBpqneditAiSIXQkd1s7RYvmar76XeyS9dDYRZSCSgFEE7A7mfNdMYxImBXRWdnsqHFB2xzLjinismO7AfXh47FAGD29ZCyXY6F6h69N9vSamSmHcZh8/lk1xPc90AwoJgwwRGmRhQnGoShssBOjI8VkES6HnJdS8oowT7ikjHGBAKF1Bw7yRhDAJLgk9Hm2gyuKApGOHgFru8LAPC3AuBTw8vHAfXEPP7SwfgVOta9uhv3yhVKtdzkqYeWJ1ScFPUEz4sjAIqRCiC
E9D0ohlDIRzE9CHxkYV0jdsklEaEiKjkG5HFJEaaCKxIBI/Mm1wNpMswFOKoEhdBAGB4qSj8WiTLObNuORCKM+QDAOT9V9P+v3KuXXNcr2Emf9cQ9rOXsg1dJMyMgJ28CPnEfAATSfSswYr7mmQiDi4u5oC6qZkUhZQQVuyWaklW7ki8m6xts5gdIAsFYAogXynjkiTJojHHt3tZe1FoqJcJhAChUq7UgB/ODGhZShAPBhRCIYEVRhBCMMYVQzmpBF1lTOCGS1ADY4Y5hGIwxSikSkjHmuq5hGBSpnu8KKVVVlVL6rk8xNnTT9i1VVYWUtm1rmiYxqpFZfN+HUxIu6IUIyolF4Yu/mlOVzOEUhl4gpZTC1A0phOd5mm6Uy2XDjIjARiTKUBCNqBNDYwOHDt32658ePnJobHjC1KC3e5Gmu70LNsTqnalJ/94Hn7r7nu9cfMmNDncSdZ3nXrj8a5//bl2krr07NZPxMMSy6dy+/dtbWk1F4W+54e0T4yUEVdUwHccHHMQjoTe9/r0P/eVPf3n297fev/1nX/hUExTfddaGh5+dlK6dt1C8tbnsC78w1Ia1rO4sKUAmQLGVfRubumanzMePjZ298upjM+gN67vSIvjuQ5nwhT3F9MzRPT8EOQxgr17Z43m5WMJNxMKRBsc0lVK+kDQXxuomFaId2SNTTW7FFuWSCNPudHp08zl61XLDxsIjR4bVlNLUqpfLlfQE9SsG8/zWptTgkZmzr4hHzIbxsalUQ119Q+zg3gzztYGBiTo9+f4vfPSM866bOja8qGXlV77+tbvv+fHmLZrpA4ln9Vho6738ujegSDz2xW8Xj815F5+RrG+qVDLaUL/Ttyp0YE9iaLqI9fI739rx25/MN3UqI8Plrt5IXUPk4O5C2wLhW3Ry3OpYQKwqVxRgbrxQKGIKihaxLKtG7sUA0ajRs6BrZGQIMVLfrW/YdOEffv3AORu72pvpXXcd6Fu+aMtZwciwNzZROdRf5gIkAqyA8CnFYS6Kb33bZX9+8NH8vBBS9QSL14FVDsIRvVBwKQUA2Lhh/cc+/pF3vOMduXlbVamUMgi4Zui+xxAignPArNY+/frrz9ixa9voUWhfGO9ZAfu3F1OJehvNm3E49Ixx3gUXz2SeTpi9mzc3ffsbDwNAqsEzQnRijHf2Rhq6ywO7Io5bAQnhGKzdZEyMeMP9pKk7aKtrLVjTuVm9kBeq6TsOSD8M4JlhiTH1PN/3xEln5pXzgAAAQDAFEEIIhEBRKOeSc44xFYIAACAPY01IDwTUN4M1H7UZAARUd7ivUeoHfgxQEQEsXdrpBb4vWd/i1sf+dABBCHAVBJZEIEGkNIhSrZUerNuUmJ2uTI2Qt9285u7fH5Iu713gXXFV147nRqYn9Y5FLD/PFESIajh2ONYyFw1DU6rumcfLvkDLN+lPPCivvL7x9p8ONrfEoolqpJ7nJpr2bM+1dJj1bbK/v7RsTYMrMtPHoJJXZAALliptncbAkUJmWhEMQASU0kSd71frXYf7OJ9KhjmprlwVmpv2TdMcOFKqlgAAzFAMayXEhQcICSFqcVHGGBJSVVWo+RAY18RuhBAgJOechvSXTP2nmop1APCF+1fG/Hfs1JBybQ/0Vbrf1E775Zj9ant++VvoeNEUOvVwJ48O2AMAjFSQGmcIY0xowIUb4Kgp8pWARkJK4OrVgKZM8J1Ao5UqBqAaWD4mBgrpnuVGJfd1vep6oVBI5SAqLiXgKsxXRZialWpF19Xj0KIovu+rqnpqTvpFoemX++5/1U5eDJxUrv6rHrACNAgCKWVNHYIJjhBSqJYPJpKkizMvkI7EEaY5uFqM4iZfDbgfAAehUo8Cl8KkBnKDGvC+0M/5BDFYInwSd0+mzxFCyUgEAAIhS6UK94OGhiQAzM8XTNOwbVsLGQFnjDFd1xljkgtVMYTkXBwnuCEgtWMopmJZlkaVIAhUVWWM6arm+35t4QEADBiRBCEEAoCD5VXj8XitJZ+o0Z0wdgNfo8qp9/xUutep38ILKfxXKZd3mB0Lx3zfF0xSSh3X1c2Q41pmKFopy7I1s2Rh1/CR8fe9/1rLSe/YOXnGhhbPZeecfea+g3/c9jj/2CcvOnZs+k93Hb7kmrMmB/m/3PJuVUOf/PT7jxwsR2Lx8859/W/v+IFEYvv25y+66ALP8z/3b/98zx13jg4O2QAbNjf07y46vjDj6qqV58WidN2alf90y4eGn584/fS1b91y+vNHp+pSeSvN21a037118IKGlBm2ROBGLD0ajo/7c109XY/sHNNiJFtuXtS8MDuf7+o9U9EjpaKsdLUd3PdEdwfn3Nqz5y+gAIAKXjislVesrFu8KljU27R7+2SiwS0V/cCqm5uvbDmvITfnssAJR9HstB8Ko2i9l8tKGSSrXn52Qkm1BB3d6vgxtXdJEvlViQuqgsYHRSwF2amklDzZ6AwM+QsWdQKr93IxI9RSFMcWLNq9ecHpXIzf8fBkaRAuuhgfmW/JZqc6jIaWTbQwrqTnx59+GFZsgNEhY3zGCUWx4wvFhWVrIdUC/fsi83mrdyXq35kK12eW9jVufzrd0AyOhSplqWoALNHYbo2NMUrUgLsAgAEDCEUhQcA7OlrWbm574I/7NS+6qK/c0hwaGHQ8hFpjKpPVgQFhMdnYmZgYLwERWMXYIa+5am12Pv/ss4MAioQAExDcbGoNVq9dMjDYH4827tk5CVJZvmz52PhgwBzP5QC4t2cRYDk4eBQAqArMN5euTa3aNHtoVzDVXx9KzS9eGdm7q1IfpT3Lww/cUb3uLdcycrgxufYvf35yZirdsdBOJbTZCTI97niBJCTqBk7fCqOUFZ2LqxTM/TstqxrWTXf1FqVa5sd2++2d0elR6kG+cwGpFML5UglxSgiTAtVaIxOi1Lzz2i/6FWceBJhSzI7rLR4vfwfABEwgVc7X8EDxAAEAAElEQVSlHgLXhmgcVq0N9x+qlvIGZ1gCAlRVdejs1sZHPF0NlSr28pV9bV363h1z6XTmgksXjPTbo6Nzqmp4nqJQFggbJJhhbDtCMk03PRy0U7W6oK/wxtcvR+rhwWNo8Cg5MhgU0mpdg1JxrNaO0LmXh4b7s3EzopJgcoTQENUSpfFBWLJSC4VCj95bQFht7uTpKU6Q4XOncxE5sJs3t5udHcHRvWp6rpqoh3IeeECiDTwSJQblbtUMJSwZ0KF+pmrgeYYZFVvOVwpZHI55fuDZxdToMVm18udcAch1yqquCQkeCzDGlFACCAAH3EVwPEgIAJqiAgBnAZCX1pWe6oFRogMA4y8A8KsJSvwV/YNTfVA4hXL1itjzalj78rDtS2LR8GL0OtXbrsG5roXh/zWznepLtpzML7yEKiVe7XuB42FhTAkA5pxjRAkhhqIGJacSqeieEOEmU4KNmCFo2S0TCQRhwSRVFYnAqlQjZtgXL0D8yRaKXMqaKkZNRKWWy699y4loBADuvOe+1111BQb43vd++Nqrr45GoxWrnEgmOchSpWwYBqXUtZ1IJOL7TErJJYOT36
OoqSz5hmEQQiqVimmEavL0AKCoehAEDFhtYYE4AgEEiBnTGWPZbDYWjlRsS1VVqipSylo/45esbqSUAh+/maeiMpZQK7t6+WpIIiYEMJ8TTAHA9ZxQ2GDMDzzW3tKKMbvz9ls/8I6Pn35WW5VN79ppt3TxwEeF2ZAXFOvrGxm3SmXnsqtWPv7I/oUdLXufn3/dNQ1PPDquKLELr9r4s58/N9S/r727/p777qura7rpxjfPTs41RJO+m89LVQUfBWEG6j0P/v6CS85YvbwhhL3P/MuX7/zRbQee2tvaVNeohve4YzM5Iyud1fFkb7wyOha8+YKFD+6dQ/MNDXWjT2eBxyDF4ezT6gpHxAPpvAVgxGKffcsld28fSRirY4lIU9OZ2/f17zx85IMfffu3fvDBuGNZ7iiRCEB2L2i4/LpQZn4iN2sGpOIVzGSz1dBoZuZsy5LVEii64le01oWif69etfh51zjPP+tP9NddcWNp9iCOxgygxcwMJKKtM9PTPYuV/LwghHb3adseKJ9/UfTuO6vX3tyUG5/pUpLPP2t5urHytIZSZVCPqvMT3ppVkUODgRK4vog89ihqWlE+8DwsWQZvf/u6v/yxX1XQxa+r/80vxvbsgOYemE/TcIIX5mnURPEUnhp3PQdSqWQml9dN0BXTsrnvHy+A0RU9CDwJkhAkieQ+RcAa4rB5XZ/gXtEK9h+eaVQXRBtKQ0PlcCqqRCpzs8J2NAT+uhX1Wsx+9pmiqkU810fUW9inpyekZQdCiuUruhLJ2HPb93d1dgTMGxtLIwlSgkINLoLlK5fkClNVqxCOEo9B92IuHGNyxKlvwrNTwrFg4eLYyMGS48PKdXXPPrvv3z7zw69/5QvJRli5qjU/N7d6k/HkI9XpUS1Rp+gRa35eNjREVUWk56o9i4kQhkKNvDUvuJLLBYbE2bSQGGKRSKlUqWuCqg1uhZxow3qcEXI8PPYidbkTT35tqgGMkKz5FYQgzmtxL1A1EfgQMRvLdhpJWLwkpFA4ciDgoKlKtbkNd/QKnUYam5t++4vBWDiWt0tLFy20bW96lDZ2Vpeu9559vLJ5c+rAgVxmBgwtHkm4c9OR+jr6lo94I8P57AwceAY++41m8INKIX9gjwANCIk9tZW2LczNjdGmzsh0plBOQ0Q3ehY5VlbVov7Avhg1A8NkvSuYXVEDG8/lbbeqECUIXOjqQ5VcLDNf7l0himmYn4oWilVVpb6v6uHqouWGXaXMqaTqVaeq5bMepbRUCFWsysKlvGqzuXFAUqFqcOGVjWoknZ/TZicwCpiNEBIg4QShl3MuOMcUE0IkYC6Bc46RJAiEEDUJvZeHcxFgeCUA/v/CAz6JhSebmZ+6wjr10CeegJeC8alo/dfj0qee56kI/f9HAMxfaON6wtCJ/OXL93DcMSUYISKEEBxq3qrGQ55aEYzFYwn79w8c+Novz/j8B9w779W/9HkIq1nuKKbBKq5BFIyxwEgwfrwv6alMPSnFCQHhWuT5ZD44Fo4AwMc+9s+maWYz8z/64Q8/8o83f+0bX/UZK1cqgeDxeFwicCwbABhjGtV4jYiBJcYYSSylBA6gIM65YFxV1VrwGQnJOQ+EVFSVUBQIXuvBK7kUjLmB88Rjf7npphs913Ncl2pqwJmUr9qAmmMBp/ThkCdCC+JlEZrjhiWSmPusoSFRKTmhsOH5ts9YWCP79ox0dZrvfNt1Tz+64/LXrVEiwS9/eai5pcFx3HUbup7beeCSy1b176sOHBuJRKLXv7736LHc09tGl6yPH9xdaq6LdrQpSxct+M3te3912y8OHhz95re+tnpdXAR2OSsQZx2bjaEd7OwzlvgwkbVEW9O6/h1DnQ30sT8euGx1ND+sHAhyHfVNf5mek8ToDgddNnreCz5y9bLHnzlcypOqwmO6aqiyaAVVjN/S07B9r9vahUUf+91j5ddeboztd1i1rhxkI4nogjWt6VzINFZ2tPc+dPvvhbY/sMKuFwiAhiZPD+OOHrFm+eahsR3dPRqW0Scfnk81856ldbf9YN5oRQv78NG9fO06Y9kq2Hof6lnqdy1Qtz/L4vXBzKien4eOLnT+5amdz86MDcA739j2zMHx/Fzjli2VCE5pdVMP3SV7eiFDzOG9jgTBoT7UOl8q4IQWyg46sUgomzfz3pwvdM91Vy+PnLaq7vCu8dPP7YrXeb//1exFr2277ZcT2Xmla4kc6BcYRDwWj9VVxofV+mYolRxCwLFAAMgTi9aQYXieIyQoCuIiKSAXQuH6eGjTGXJqyNx1sPyZr2359fees22rdSFrak3u2TVXLYFVUuKhznd+8Mz/+O4vbJsiVYlEtcCzUw1BpSArVUgmwpWyg5B0PXHo4L5wqL6zqw/AAQBVVRlzBQZAsHi5Gk9ojlspZqEuGaOaU8xEB/pLGAfhUKNhsrlsjooGBhlEQHKla1EwNgiqgEiUhiOxbC6wnKpuCkWDShFA0NYOLRaHxi5LUcjYGNcMhbmJsF7WTP7sozKVilWsHLAE0UrMj3legRAFISQF4iKAEzkuOOEpwcnIZe3xR0RKfuoEjjFGiADXMGFSYC5dDYfaO6JtnWKoP29xPxKGiy5ZvHvX0c7uyPBRdy4dFHLAEMTD2K8QD4JoPbR2ms1tVm4iPDsXmGG1ZWFFRU2PPjgHgkbDqaUb04l6rSGpdXeUu9pi+58n09MwV8iPDqjTc3jZetethDLzNqLg5BNbzvXOu4Tt3+U//TBUCubSjVbvogY9mtm3HcIJaFgA5XRo91a7OI90g8abglI6btmYkjzn0NIac5wgnw8USgPhAIKmhNHYpjNRsKtQLCjlcoAAcZ5obiuXC4plBRhCmJZbu6BaRizASMrAZwEhBCMU8ID5gUoVSikXARcQMI4UVaWqACY40wgRp5Ckapzgk6JFAECxAQBMvJADfpmk9H9lSMIpcHgcgBHmgp+cxOFV6j5fBMAC4MU+7vF30St8HABqjv6p43U18j889b9mTt6jMZUisPaVDvxk2+H791oB9yPaeb+6aN2GtZZTJTp2XYcQoimKYAJefuYnLu0V92/oEXglAH6JsthL9maGYv+/X9r/TsuXirquC8YEr0GkON77XciarCRRFdd1VVXFGDPPdxzHMAxFUTzG4XgHAQAhKVVN3dAU/JkvfOFzn/7Mhz70oXe/811Lli0Zn5iMxGM+C7Ra2+CX3eyXkMdfKOkW8hXHA6K+58Yj4cMH9odM/fe//c1Nb39bc3PzL370jY989PO9PS09vc3Pb9+9aUtvvprf/lzu7W8+L53JNbeHfvaj7es2tmemrQ2nNzXV9U5MT933x73LVrft2zeFQCaiqZBhbT6r94G7j6xe3z43N4+we97FXbOz05UCtHZE0mVv7DBfupw4QYl5IDyoTiPCSAftvnSB+a4nDisi8BHwMHQJNSx8z8epJm2q6L/vn97iTVlf/ckfoh3QUI5ogPqdcjuHeW4Q7KxdknhmrHLW0kQFzy/obM7lZ7c/AU3tQFW4+LJ2x5m2SPSJB4qbTu8FJX30gO07T
YPHyhyqmIiu7rhpkr5lfjHP5tNqOFlSDSjkE3rIHh/2entiVLFYIHwPSa4pYeq5IjcnPC+YnwlueF9ybCS/41HjJ7+s++n3J1UFlrU3jY3NkQj2HNqaqBtK5/Y+x0zJr3iTOl5KHT40u7qzOZ2dPVyCwoQeNf0169SoSctOVaHq439WIsRatko//ZyG/bvt55/LMi9h1peqjvBs0FQlHA7nSoXuRTBwKKoYZVNLem5JgHRdAQAEA6U04EwKAAxEaiBEXRx5vu/bphYONp6rYS81Nz8G3Ow/4C1aGo7FjJ3bZ9/1vpW/+910qWx7HIWiPFEXmx7PSiEaG8x02lFoSHAA7HDJ33z9paetP0uliVtuuaVarQrBJMiexSnV8KNx5Ljl/ufieqy4dA1MDemlHLXcquRACTUislziAKBpISkowtJzK5pOFEEQFq4LkigCHEJx4JFkKtm9uORUTF/kkikY6od8hgJii06D1kZ9tF+bnrDiSa1YtHwXKMWABfMBADCmJwGVEHRy8jw51QghTnrAtSKXUEi3bbcWgiZE2XJeEI7io3t1Ju25UW3j2V77QmhshicfAjMUkgGvVLyAQdgMSxE5cGA2kNrVVyeq+XIuL9JFt3d5vSTzuZEQk6xzsV8uwY4nAMAQ2FmwFK69qeGPd2RYgd74bnZ0R+qZbTk3UFoXsp6l+OAefajfogp4tglI6egpvvktiUM7CmUn5NnmZdeGjx4btcvKxKDR1qUdPZQ7OoDq6nBji5waUooFRyFhn1dVDbiPEehc8o4FuiQl14ZUnRmts/KTMDKgYBJgrPsBX7mBmxFBVCAEtj0MFMUDXkQYOAMEesdChpgI3MDXVIUA8pmnUQIggXMIJKg6IOR6DClUco/xwNBUwMdZtTVo/C8B+NW0KF/VHz2FQ3cy+PxXxp9cCrzkUy8B/hc+/uLc58lPneTlnvSeNeVv6gH/Ue579OkDT+9PH0xHoC7SmDxY7T9kHTzcNDQ0clRQzqUvsbAsKxmNCSEC9gLZ7dUA+NTtISMKrwTAJ3sqv2Ibpf/nAHg+nxNCMN+PhGOAajWBXAghBaIYY0Qd3yOEeLajaVoqlQAAx3YRQoqqVqvVQHBdVwlRPNtxLFcwcdsdv/nXf/kkZ0ylSrlcVnVlfHomlogfb2wOAK9Sy/6S5xrzF27vqeMZB0qgPmFefeXFT/zlCQwQjUSuvfYfHn3o9zNppoVDoYjvuOVEIjYxXVK0RHW6uGRlaGjE8qqtanjaqQIBOPusRY88PRGJEbtEk2He3IEH+nG0pZpJBxGEP/Dhi5/Z9nQyFm9oIdu3TxBsJBqDzu5US1fjju39x3YFOoMLz21KRfTtW8dMFkbl4F7wLmyrOzSQrV/UmN+fm9XqzHDu/ZvO+NJDT73u4rOCSvapbYfcKNCyEosqOkqEo2m3Gh0seE1gWVhvC7kjVVi0MLJ8ZevOp2fVWEnH2sbNSnGuascABbHFy3Eg/XLRHxlgzW29SnRg65MwOxhx7IqC8fKNoncZlOfrQqmsSUFyms2yTWfX/eo/g9MuLd3xS2AOrFgPPoNkAleKYmFPs0SzyWTk0J7Kgedg02mJVUushEn+9KDTukpvrPOaqDIwJOO9encTKU+JvE1TdcVtj4qxipZzPCcHi7pjP/nd6dec8zDWSUtP6549E8t6DLuKMmmbojBjrgcMQ/368+eLs/FjR4qRCAWsxuvY7KTiBRahtLujZWRsgp9ITNQ11Mfj0eHhYQkQUkzP9QFJQISLoLevLpkIhvcUKG6MNs3HU2iwn8fj+MrXnvXM40f3D5UDhqhiMV4LQwFCJBwViURiYiKvqlBfH5+dKQoOAFoi1lCqTDc01P3LJz/xn9//Zjo71dIWpyorFKqVTBRouZilqsbaumngw7/c8qlbf/mnUKiufSH+/a+3BoETBBwgFY/zYrHY1RIlofLICEgBnYuAKCCZaoR97tOuPrFnm8jOaYA8TcUAppAoYGVCgAUYsACBFy2LLlzGtj5EHKfC2Knt8sSJtO6L9PVOQgI6oc0ajYZLpYqqqjVCxvKVSn2jOTPhdS21ktFmQtKTA2TnNt69sGHBooAqhcZmemQfnp93bdv0hNXXFzNIya/A9Fgs2a2nFua5hyMq2b/HVkkdiWRVHB4dQEtPs5o6ycABsm+nf96lIjMGDU24KsXECDQ0mNMjFqLQ2AZusXF6yl53dnXLOU3LFtGtf/R/+4f0J/4DygU8MWDMz1vJuujIgO1VEzNpt7XTKcyFZ2eKSAIiILiBqCOZAeAoOvQuDpVLTj5tAojeZVJB7OC+IBQKF4p8w1mqonmTA9rCFaXyfPLA7qKmGkaIYq3UszgUuPr0bB4xKb3AQQhJFkjBzZBml8sH9u6Z6B8989zzmnv6AAAIBhCceYRixsXJqC8+oQQpT3CgCDEAgJ/CgpavkBervfFqAeEXsrAvKkN6se7Hya/5ZKdSeLFr/lLZiRfCy/Il40/u/6SuVi36rSnh4yf5cqfzJdtfbdgpNvrQbUHAKVL2vY61J1I7888dbj/29l+///QzX/Mf//Gt99/89lwxa4R1DIABMT/A9K+VIZ26pfY6FIoDgG1VXnLcWrOBl/wq4ITkZ+hvCsDlUsHhga6HQEgVgQQQIDnIk9iPAFhwfPVWa8YXBAEhJBwO10h2fcuW3HT9DaVC8Rtf+5pG1euvv/6HP/8pYwxTQjABACZFrRI9m82jmprX8QeP6IquUE2gE1wzDJ/+9L+tWrmypaXF8zzO+fIVK6Kx8NzcnGN75VLpzj/c9aMf/tjyyxjjaCj86U99atWaNaGwedqmTRxEJV8+eV0v9JlAQCU69d+T71L2CgAsEVhVt6Ot4f4/3X7Pnb8ZPtY/cGzUdyCVjESjqH+gHKmrLzvzS5bHSmUYPVZSo0aEOvn5sB6vKgaU53VgqKUdZWZt04ByBSQFoLDxzB7LHqpMNq/doFas2WXLkqU5Pjc5H0lBV/eS++8bFUTMDxvLz9ae2ZbpSjW3R6y+FtrSnHz8L8NNwnyw5G40oRhlRyYTS3vUybl0xoV601iwpufwEyNCWOubFNndfChdqivK8XyweXNk+7MZTwOKgPrRlFaGFCxua0HSn5vPEoKRLiYHtPMvlQ312vBwRSXxJevsY4d8PQyWTQePss4ecyrrvO76xt3Pzo4fMYYOO2YosukC0bXU2vF0QySVaWgG6SLhKQMHobE5xETlcD8vpBOhsIORu3p9Q6GQaWrD89OmX/H6NvnSqmN5q7sTOdC0e+9IX0/i8FErN++/+UZjdtCZGMRnXx2+7edm67q8P4sO7lTiiUjvaTPbHiEcpIRUomWel2NSUNfNNTZDd0/0uafLTCZ7lyKdIi/ITY9Syw0UDfueAArAIggqmCDGa0t8AlIijKVkSFIAhoAIUAE5RhwQA03qKzoaBa5OzRZAFdl5aG5uueiy1Xfe8aCNaSXP1p3WEdZadzx/MODVkGk0d6HZMS6Rl0hqDfWtB/aN+h41dNNxq4BYPG5edtkltlN+cutj8YRZKlcjEbJqM3/gt6GN
m9ammp3HHjr06c/c0t4Ve/Kxwz/6/o9e+9qrHv3z465nU0IJ1jxmKYQ01xHHldkCJ0QNxVhLu8FEBSMoFcAtJ8qlgq6optGQK08BAiRNCSqggpRAFWhqCZsxt1RmhVkjZNJSqSI4UEpPttGUr6TSc2oZUq3nXs1nJoRwLgmIWCSxdrPx5J/nVqwRpXRk04WVctocGLZWrkwq1B0+aksBEoXUqN3YFvKr9v7HIGGQpZt557pE/3AhPwM6Aky0w7tFLs+uvM4MpaoUxwvFolNRK0FQmo5lxqxACh8r1ap7+jkJK1/xA9TUyiTIpx+FVRtQJIwUX/KqPJaGwkTDWZeXkeLuewasSnTZaZbn82OH4NwrYfgQzI6aPrekn/BZgGi1rUuZnwvcKk4mo0S1qlbgVgBBJBb1rbIWgIeI19oZzebKbhUU3h1ABgMophVPQHtb8/DwLPcMwxRobiIfb09ICXptRUZByCoOnKo7fXTrrt987EvXXnXtkosvdDobms0EJEzQYgAeSARC576XL43VNSSQDAdEJSAxUgFACltIGXCGFarKUDFTjNcZPnge0mnADA0BDzjSAskRlhQUJBBG1BcuJ1wydDzsfEJ3FwPCGPiJfBtCCMFxFQ4pJQgCipCYcc41rGPAQkqJX2iPeNKHruUya8XNx2t8XzRAqIpaqVYi4ZgbeFALQf83kLX20P2Xw4Ye+J0hdYE9jOnW12e+V739kaFHtYXB7397z01vfbfjWZJLHlQZd6lqBpIS5mMVBSIIgiBhJqyqjSlCCkGSM8aEgFoyHiGEEXFdNxZNAYBlFQBAparPfKJQ33dVxfB9XyEqY0xRCcbI8xyEkI/CGs2YWsd/cWn/E3MqGZ1Im4XznKQU3wZuGAYvVo1ItCx8u1JtoiaOh4kA6XqO78kQtTyXeX7CjBihMABc/eaLe+OLBo+OGy36o4/dz7JuVYSb6pMP3HfXqg1rAeBg/3B+bu7OW3/z3R/8p1C8CvOLs9WO9mYPwVc++8W5I9M3vOlqjuH5vTs7FnZ/85vf2Ldn754dO++/595zL7jitLPXAQ5Gx9Jnn3lFIpacTw8otGzZerGc/drXPj85Ob1395Entz71xGOP//gn3/vdH+5Op3OKYQJGAefhkF7MZqMRs+xWdVUDwI7jKJpKEFYBkM+E4QlOQWqBD5oeEkIUS9l4POYpxSa97Qu3vHfbroee2Tve3LVg/Zrog7cd2rB23Q0f2vi+D36npblx04bO3/9qByMQSRHH434ZulraqsVCuWKFo6YPQaUaJHRZFRCoAGUjYjjXvKHXqBsbHYsKO1ecVxcuxft3iLoWdcdT2BVlQBDB+IprFz34zODqurozF2nDR2YoVvqzTt6WFCcM1RmfdtVkOFuywsggUuSFiyQQIyydKgX46r9//fs//crAeI44OBbHiYRazEGqgUyMeEz4KoaFC6FQUiOhVCw0iwgcHoClS7tbWkbXbElNzebGRmFmCvWuUdJz/oHtSntPcNbpKhGRqTH34tclpqdmy4W6VHNlYty++w9KXWugK7GVm0rbH4VEFM660BjYhxhjqlL/0APp1eewg9uNhgXO7FCypdtauwm7tqP40NUcTk9UC1ORlRujx45Ol91UPGVvPDNy5EgGfH1+utGn48k6k1D/wC4xPsoZmKpmAYSJYgPHzFUCGRBQGRAEVvcCU3LPKnEcmK2L2aGjtmNHAFd0nXqWphHL5QAvEIteMEwUKYSUXKEgOFxzzdnF/EyhkO5KJh5/fNysNz76yXfc+8eHOzvq7r5ze2tXuMyqiDUXMjYmwrZ8IZkQHCmgQoxBiWDsO4pieMl4OJqsZkajenOZBPHCHHd8K9UE8QYRIg3jIxlHQlA2V29uvPFt5334nT997wev3P3MSDhWvvqKD/z4h7f1H+s3o9wNBMbYd1XBuaoGp5+TTM/lBw4Qgfi5l4WL5apvxScGg0rFUlXq+0xVVd9nx/uyA8WEcxEYIWhvbxkYmAEJ4bCeaqClYrVSBh4AxjU+0ImwgIQTxf0SoePLbgogMQiBqSaYRwE0qlrMp0B4vW72tjDuy4Zlnp5A4yMSvIjwKivX1R85PD87A/UtytwcnZlGktgSwsD5BVe5jfWhmRGLc8hMo9J8WNMqWoiYcU0NGQcP59qbwwrR5+dzZ50jG1pTI6P80T/Z0Zj/kc+kHrjdbl/EjuzAGy/xH7sPr16XyM/wWKoAFBwHtj8HfpUuXKLk5tTsfKlnKaSSxvyUIHERMWUpQ/IzwnYDKWHV6WbVtnoWK3u2y5lJpoagvsGcHLRApjCUhRIAAyRBAVMAAWDU8IwIr+TxicgBnLjDRIJE3/rZXblAzHpBoVQ8Y+ni15+7KRklgbQf3Z7Z0NfRkSCTRx/fu/UheiR74Zve+esf333e219P1HQqnnjiseGlqze3LE2JMMJag/RsXdMRUgDAd6uYEgECE/rDX/xqsP/g1z//bYaYqnpBoAce1TRbMqlohGOBBJJcYkS9wAWCKFUQQli+kAzGgBCSXL7QcP5kkFlKCVIqiuJL13GcWCgRBAHjHCuEvIyHVfN0CcKnOr4nyVw1mWtNMxzH0QwdI4yR9gKyvuTFif0CvMyVf8nGE6g8ff/vfISRABPRIZhqXNq4oOWNc3N7mzpXXXXNtV3dPV/+0ucDt2ToqkBYgsp92zTNolVUCUWSHC9sJQhjYExomgYAjuMILhVF0XW9tvSplIuqovDjkTKiKErVK4dCIRFwz/NUVZUggiAIh0PcCizwkpHGv6GXb5V84YIbKyQYyltcCemB6+mKajuOYhph08RMHOg/Ojs+2dXW3rd4UdV3JcEE4Xg48l+vcv5uf7f/O+wlhEaMMABIIVQKV119ydNPPeY4rLdDW7Pmkqo4EiK9//yJj93wxvfvO9JvmOAS/cKLV953144zNl8USXgP//mpyy6+ZPv2nVzkN2+87KGHH1iwBC1d3l2yR/qfJ8tPA8VMPP9YdeWKjgP7MhW72NYeT9VphTm/fanoXc3+fJfd0CojYdj1WNL38yDDl1/d48n09Mws5o3798wrVA+EbUYQ92X3wlghb6VnWFNzK4dcELjVMgQ+1nXd87wXcLR2WVKRwBUFHc+FSdA0w/MCQAxjwEilRPN9JiRDAJquCOn6niDHZX/QScU+TSF+UMumGUAcXYfAiuoGu/w1qcE9k8sXmIOH9IyX61yayMyXRgfE5TdBZpJODLNCFhSc8iHX3adY1WBsDBpS+L2fjP36W4VSulGNpFUDbAeW9GmZec/zUaw+wmWZKqiYkXNT0LkoHG6s7n8eujo6dGUibEJDVOfC331ULF8Zb2+rOiXW0wuH9sOdtwPH0ZBm2u4sDcO1bwVAUMkkHvxdFZMA6eBbYZVyzgKMmaEmacitVGxMgFJVM+hrr2/atm1k/LDh2h5RBA+Sra28rt4B4U+Ng5AKB0Q1kZ8XAEAIwhiE5LW6RQmApMz6MjpZIvvG09OT6YP7hwrh5I5qtePgkf+48soNVy5lYYcy7/lnn+6kIUb88pH8U/tv3XPs4F0Pz95
++6MXvf4szioYpYTqgEC1ELTvBYpCGGOua3/h8x/96pd+mp6YQJBv6OizZKAoBCCkcsElC6QvEKhUJUAsxwob4YAfV+KGUxSIalLNL87yHo/Qcs4BhJRSUzTf9zTVCDjnnBP6Cv1opZT4eNdeeAk8U0p83z9O0AWJEArpsVeA2xM7etGWV/v3lO3b7/txCqI6IEBsAmU2vWYzlxLKp3HN0vTITe965zf//evxiM59jwtAWAHJHM+NRqM8YABACCGEVG2rFvmphX10XZcCKpWKqqphMw4Arl3iIqCIaoZZrVoYUU6ZECIImKrqvuMmk0nHrvq+n9CijqYZBP0NvfwvffCD177/TR2L1gfFoh5PeuVKSDdApa7rCs4r1WqisbG3d+HE8BiSMDE+kWpozJeKTY0N177ujXfd84f/4gT+bn+3/zvs1N8ApZgxgRBBCEkhQoZuOzYAhDS47i2nz09q00P5jZsShw/vPHrMaWnvCjXVb3t6B6Zw3/2/6evpFZws6l0Jgt59349fd/U1b3vHB37xi9svvkbftc3Nz6oAWIJK1PIll66cm59UdVys5n1LLlsD+3ZCKhH2AjY16mk0tny9MzrAkkmYnuX19eHAo8eOFEMh1Xb8RDxkWzKkm5ZVCZgnAWMMQoqa0jJWoLsrNT6WEwxAqidzf0K6J0RHMCVUCCmkUKgWsGpto6GHPM9FmHPBa03KTqHdYIyhJlqCII7AxarLAxWkQIgpGJat0SKmN7Q9ceX1laFBtnUbae2JJFL60b1zSAGrmkxEJIeCVYHWFm3FmobtT00u7IsxWZoeBREYi9c7+SLMTkNjq1Gak7bjub50PaAKYIBoWDW0UMGuehyCgF33jsb61LwOsfGh8jMPK00r5KrVScqquvRmhsGReDynHR311m1yB3e1rj5n2jBgbgy4h4Gj0SNKxVIJDZDicLfBjHqVcglLg+KAixSD9Ip19Zk5a27GBgBFCXEGqaQbiYlUCubmwNSw44WoGspkM5WiUrtJspaaA6j9pZ4lNQUWhoIFvYCWJfo1v39y/J/PuWTFLVcWto4dvvm3Cz9wRWV0YONVlwIXgZRtG1lissX5+a8eu/m6JYvXAqAAK4JACITlejVBfYXSsdHJ9ra2R//8+NqNlwN2E8mIKpsASUE9IaTwsxRHMEIKUjDFZa8ipQypZuD7QGveKsApfCkpEQCruZYgMUgJqEbuxYQiERxvRUMIKVulsBlFEoPk6MVLVSklSBDyhdriU991XU9RFCmlpmmc85OV4yd+c/9z/+zF8B9a2JgeyzYJ4ksrHo8gAEUwFtkZWL0C26vWLH/40Yf/4ZprAiZUVWWCYRVHaLhatEzTcDwHEVBVFUkAIQPPj0QiGONisaSqajKecL3jdV8eq4ZChhCsWElToksEWAASJBpLsCCIJiMjQ6MtTQ2JWCydnvr0t3/0o5ef8F/38k+F4Zd5+YinvvuV2yMtWz//xY//7Ge/uGDDlnA4vHXPjte85lLi88aG5h/+7tfVShEAmluajg0ONFetJUsXcw5vvO4f0H1/+JePvaent3Xbvn0//9FdTz74wOc+94XHn9/+s2//5NJLL910+unjs5MAQCO0ra09ly1pWL30nDMmhqYPHBkLGRTY5Op1Cybm58bnbcmBSohF4ptO29TY3LLx9E3f+c53pgaPNja1jE1NK7oWjZsTY/OmpvuOcH2/viHhOJbj+JFwpFCs6GrI9/3777x30+bTf33rrQOjw9/5/re//a3v3f6b39177700Zd515x+CanXrY4/97ne/z81lnIAl6hsypdlQyFSJamj61q1btz+77Utf+EI4bLQ24gVLI9Sg0zP5Rb29zzxWmC/MlMpYEn9hM1x8Yer2X+bOfIOSna0PU+/Gt4Qf/2MuVldFCKJmfS4XhJOO5XgYQ7RD2fGQ8sa3hfYfKD72FHM8cs7ZHcN7M1x4oSgnqCtTGq1vNg9uj6xYhx59wIOYu2mBEvhOXwuuFGORjuqxR6xly3p/u8tev3S6oSl51yP5pmYzhm3iy7okcEs/Nse3XNx974MDsRC0NED/qKHqArkCIKhPNBMtR4leyHlIhps6bd+NpCsZz4G2pNGy0DlwQDvrUqKCTYKu5aex57dPFedxol4c2Q/tXWoh7x84CEChqVEhauBWiV2Jh+vzvkvWLE4gvTB0SNl0ody3XZphtWJV2lvj/f0l3zGrdtUwUMhEIKJ9y+nogPvxT3f98FuHjBhMjmmK6m3colay8rk/Q0dHtH1hsH1rWaExZJZYkCxU8iGtub2zMHDUBQQgqaazjWcaO55w4tGQ4cbz9kxDKxTKgFWd6lo6V9JCWJGGolo6jc5Nl3t6W0Yn0l5AJfLgBU7ki4zzWms8LKREQCzHRQQDiGuuXfX4w/viUeXSyzb84DuPLVqUisVDB46N9lCLIK27q+nCC84NXG1ybP7DN3+kMdX8xS987VML/8myoaOb2FUUSwFFJJN2MFY5h7n5I/E6dXrKXr3Z3P+8NTGgE6Yc2FWhKoAEM25OjBYnhtQlV2srzvBnRvDh/cXTzlTqGiA3D/t32L5HA8+W4BEFYrFIuVyqS9Eg4BRH+pYmOSPDQzkQUPvlSykEZ4RC2NQDX3oeZ5wRgoAHAQ8AgaLIIGACqgKJeBzH4uHGxtTQsUKlbHF+XLzhFOcnQBjzAAD5WLaqyFp7RrGjA999a6SxubBzZ/uBw5ONXaGR0WI3UokCnoObGyE9XxACFBXmMt7Mg5OUoNFjSrROLRRDWy4oDh8lM1OyvlFNj/FCiSEARTNACsRDPveLRVbFpYZofUVmyl5jUIEiiKfv91aeSauq55VlY9vc8H7ZUp9Qk4WB/ZD33KAMxQxdfuZ0YS7+l6ctgIhCys0tvL2LDx+xGUcrzwTGM5PDIEvQ1OwkE9rUdFohDQf3ZHQdUo3QtgCSKaRHrWM7VO6pI8fArvA5ESDFCXhVSsAkQAg4AwDAmCKgUiIpJXJlaXpg+ObX/sMnb3pDol54idCH/uWb//RP37r0uhtm1Krz/J7WVSuf/OjPvvL9n3742red/6HrlTWqoscEphxAgsNdWwVVoRGHZzFRNCUOAOdecOG73/Huyy95zaoVK/ZODPrOTMRIPPiTJy6+4jLeBIZrUdAdqBhamHnclx7RCUJCAVV4HJRTH+8XWg0K6Z/AgFNdW+wIx8AhBRTJuQeupqm264dUM+DeS2nxJ3l5r1TMUyN8UULLlbKuawCgquH/sQf8auMBBof+UvQq7nAmhAGnYPWmVcgNmArcX4V1tVAtX3/DjffdeTd3fYIlImBz36QhVepMBBwHSIVKsRIPJwQIhJBlWYqiEkJqis2eGySSdQBQsfK11QOmSiySCBjXCQGJCoXSxz72TwomP/v5T3/zi1uvve4fPvGlj3zrc9/0AP6GXj5SwAzw7JGDH/3+72/74b9PHRu+4aYbtx/YFdL0t7/xzZ/91tfUxqTBvdM3bX7/+z/4ja//x5NPPDk1Of3Zz3+uWCrdede9b3vj60/fsvo/b/vZvh39yzqXD44OIp
MkzEg2X2hpbR8fHwaAtVvWFoqVQND04CBDtKleK6dLP/nRJ5/ZeexHP75zWV/j4Ew6Ho1kZyu9CzvTM2nb83wmw2EtrpKVa1bM5Wb27p9sboo6lQBJKFuOVMHQjXLZMQyKELLtACFQqK6ywJd8zeqVu/YdePqpx7/wxS8///zO6alZCGt33nO7cOy4oWUnZx564JGLL7/y7e/7QNFKRyOpG970tptueOvb33bjRRee3tGZePSx+ypBfn4MNm5q3re/MDptESXe0403bVCeeKK0qi8WJlUA/bQrtG9/feqDH4pZs5XhOTJ8yKQKdPY6e7exhibZuziEpNezXM4eVaQguaBaYY1pK00BwmDWNcX6D1sdi4MnH7Ozs8mWFltV3Mx4rGFB9bpLI/3PhrdsxI88RrcOja/brD6329m8ILz5orrhYbT/uVLJyad9aG+tf90ZnXfd158vWyuXwuRsfLYAlBbjpnHWJuOhrXZzY7JczqxavuqZZ3fHTY0gFk3wWH14+85qd0ciZhTau8mjD2ubL/Hb6yLhusrEoJrN2QqhTc2xgf6q6wZ9SxNGAzm4J1POKeWCQDLctKBsJuTQ3vrGeGXl6fLALhkgX9f0UtkNXAgCAAy6ploVX6GheNKenQTfg41nG4WMM9CvcFDr27luoJYms74p+8yfwHJh5frUyi35X31bKpDQo4VKGRqaASFjdloNxUrMI6GQLBcFFroRcl07RUmuuU0tV3ixIJkESrWACQChhwLuAcFQV69MzwQSDKRIYN6rFGLgGqsIAZJImGHNsuylyxasX6Y+9ejwpz533rc+v7WQs5dtTLgyNVvwFzS3/fq331q54ryVyy6ay4weObJXVzQjhBMNjqJDoh7Hk4nhodzcGDH0RHouR02ZCJt9K23PQaWKGB8B5mARAIDQaNJl5bYFrFIEp5hYsZGt22Jt3ybyGYiEElTzEvHI0LFMeZ5oaqwa5JIpXVXpxEQ1ZAAh1CozKRWJAgQgJaZE5cKTIPUQRKJQyEI0Gs3nyrqu6oZarlQRhs7O+MhgsaanoagQMIjFUSSq19XFjx3JOnZwYl7FlGI/CACAGsC940EDRRoNjeTcS5T5dHXkaNDRqecL7pI1uFQU7e3hurj2i6/n6nsAkDLYD3ZgSFwmBHgQwcQFCMJ6khr5xcvh6M7k5f/ApXR2bWXZCspmuBQGAGDsCACKQTDU1yXHxuAjn17TuXzwh58DbFTns/CaG6BVN55+0hkZVrqXk6khRrEyPe1YVVNiv749aGtXRwd8yaGjWz2001+0iIBUAuwyAN1orZaD0WOZd3wguWSZd9vPLaeijfQTjuzOHl3wcKmatavEcziSqgSVYsREhajABSiKHrguxscnS4yBn1jPvYog4d+tZq+IrC+xV3QHXx2J/6+zlzmyL9r+1wH4pP1ffo1/t7/b3+3v9r/LFK3WsBJ4ACApAAHQFEKE4BIChBnnAcIQi5mv2hT27wZwCq78lwDzkgH/LwKSfKmg8asOO2n/L17m3+3v9nf7u/2vNBFAMh5i3ImkwpWSLzmteFUmjvOCFUw4B5CIYP3/hAdcZU+nD37w/ddv6N787H07161c/+v7fuKBBchXEAE42S4QnxC+RwIkOSH1V3vzpETl8g2nbV6+ZW3Pyre9860oGggloECwr0hV1PZzMviMT7STfKFS7RQsYdw3jcjs3OxDDz3S2tz2zLbtn/viF/6GF/389ntCuipy4FHiMadi59vajEJmcunyxfUtDUxwR66/8c1v+/bXv9nclKAqu+2Pd73zje+5ZMulbmA/8NS9HnJ1pKGAgsIBQIIcGhrqaO/SNO2WW275yle+VruShQkt0dh6dGjECNNFyzpDJt27b8KMhKPx5MF9x15z+bkfvfnmmz/w/vTM7KbezsNjs4MZ52/o5T948PtYJQxr6aP5P37iu2/55IePzkx998vfeNNVb3hi27ajg1Mf++C71647/fq3ve0Tn/zU4YMHN65Zxbj3Hz/41sLlvbOHhtO2t2XLxpGDO+PhxKHh9Lnnrl7bF//DQ9tLtqhUAs8BAFjWHilUWCjR2tfHUqH6+Vx2JD9Rl6ivTpVa6prGc+OltNDM8FSmKgnE4hHPqiIul/QtqIpJu6LPzTqUYob8s85ZNnBsbG7SEgikQCdqLmHd+qWHDx21LUEV1Q98BLB27ZK56blMuiAluf7NN372MzcPz43+7Lc/Cqs45MvCZO7prbtf98Y3/3nPMy3NKUJZOjO1aePGA/v6J8dnEWhxndV1SI710SP0+huanOIBFaPWDr2SsaZmOAMSDat2nhsRta63OjamVGaClp7w729zaVjhxGlogeG9tLUddTSJohOWkdLofnrWWdDXR0aHw488Ur7s6qb775nM5jACgiH40PvPzE4WET2YH4gMjcJRu7JllZrrx5y67/hI45e/ly8W5PqVjcv7LMn1398/F0u0YnCLLOdVoE7Blg3tzSEUUo6MFTQAIWBRc/SCKzr+/MghP6gbm7IWLI3k8pDP5GMaM+sJhZhXytc3QMEms9Pa2pX4ojfYT/wJmQbt6JHbHvdXbYrc87vKmo2xsl+ySoAxzI0DoYbjO5qq6jG/t5tMjyaqbtaxiURcMLxsvRC+dmy/Fw6H12xi55zf9x+fPWjZcsX6xP5dPg15RGrRuDc/ixobA6uEPdc4/5LY/Ey1kA8qrlOxQY+C52LbEcAAcZMqVhBALEVLRZasg6hZNz1eZpwC2ASQBISxiNep+bzPWQShipSKqql+YIEEkIZKBeeeOIVQ8qJYNAIA0FTN8ziqvfCd1pbmRQu0Hc8WF6xkh/eVT9+8iEE6HFn689t+/Ngvt978ifeCCl6gudLrWAB9Sxp8lpnPQN9S9ciOlBZyM5lCboaqFFaup7v2uNILI7AFKJJ4IICgJBd5QErHYjUSiRzemQEgmASEY5AqA6IruhPkjk+fXFEU1WcWQqpEPiXAfEUlClFt1wUFh33uIQAJHCGJMQCShGDfFyApRphQFAQeACBEpEQEUwHu8auXL5oGCD2e4AQAAEwIYpwDAEEgZTTV4C9e45fzIlUHK9Yah3aQkKw2dqgP3+sLAfmCRhG2kBNv1KuzPNFAsOpqWmxmVEhBKBRbWpVIHT68nwoQVFElKjOQbS2xYl7EQrRU8H1PBhBoGrheAKAQJYZElnOaamTnntMUTc7l57XpOS+bg0g4kcsWk/GUpmenh3BmXkt2OvWd6vhzNN5kE5Syea59IYwdoYYhO/r4fMaslvCqTZXMFOx9FijWmXAVaGKysGS1V9cAwMlov5qeMQh1XWbX6FX1Cb1jYbBsrYqQY5dQbhZlHREE4DqAARMcymasQl5q2v+JEHS1Yp1z/Zp3Xvxmj6Cvf/621533hm//9AuOWwY9REmJcEViAxGbC5fKRsFcrNlegFUU85GrSyaEYIoGvlQR+f0j93/zu19644U3vv7q19rlOTPR3NLWagUFU4VAKIJxioFq1PU9QohKFOkxREXVdc1wzAs8kFxXdc4hCJAEJiEYHdtzyy3vjYXiWzZe8
p5//OLf8Kr/9PBvVrS3OeVyweaOa9kEIMp6o0pj3IRwOBLClIRu+eyfmlPt7//QuwiHr//0c7/4yY8vW3bekeH5933ko2dtWTvvFfc8vG18YLTSpn3t3/7t1n/7/uvf/bbrPvDO/bvuPGdDy/d/egQALnxt60h/ycmF1q3vGZ7eOTocae5uGB04esEZyxa39EhPX7/lrBlr5uj0wVu//+TmZQ3PHB76G17jvc98RbOqEFJZnSmHvB9/8Dtzqtg1lP33j93ce1rPG69/n4LAdui/fuSDn/nCVyCk/Q0P/Tc0hAATEBwUCh/88A3f/cGtUklqIb81GUoPFd/whjM+8sGv/vaOH61es+TDN395yYr23iXx23/zRIzWP3vwyYtfe95bb3rHzmefTqd362EtlzeP7phq6opv2tKx77Hxs0/36hIuaGZViNIMTE64K8+QnhPlwgbJAwsRqZYlRaryxKMFqQLVwS3j3uVi5yOh09baFVbnIbsppurhout37Hh2ArnQu6yRkMK+HaihLrl6i+LalUPPeouW+FZRPX2D2dnYHm3Bk0X85S8NROOWQoNLLk6OHSls3tT6wB1Tq1aunJsfuOSy9Yf2DR4+lCadqWNjJW+ObV7VuGN/usINVXN8D3NVRKOhJUsaM6OjHV2p/futouMCB0xBBACAVV34rlnX6J11ST1UM+MTcOSQ2tjt2NlQ39IgN4VXbSTT03zbkzLc5Ls2kMBcvs6tlvD5V0QLs0JKfMcvc/WN5kXXyAd/7xshWi2alSrDeglx0/MJpeV1G8O27Q8c5Mmm6NxsASQlhASBp+nYNM18voIViEaBYj077yKpEEIkeFzK1k6YGQeCdMZ9gkIcqq3txty0y7mklDLGTogjkhMr9ePK8ye314oOMIZT1+5wUgS3psciMSCgiuS+GlHkla9b9of79nNPMQ25YrUcGQgaW6PzaTkzaxEUOn1Da/++Y1RX50qkpQfH4ghD9ehuSlTGPaoqmAtf1dVoSuaygeSGxE5js9rZ1fLMtrFUQs3nfCkAndQChhedWEiXumaUqw5jQKjKuA8Aeohyn9SglFIciZqqqhSLec8HJKC+IeR4tudBOKSXS140aoZMOjXlAWaIcgyiublhPp3zXEmRziQHEIQA5wEgQAgIxowJXJOIAhXARpgLQSQEAEABkAYYQ9QMLV+F0+PB1ITX0avEWoMjz5t2xeXAOWhAA5AEoaAxBR2dBuLKvp3lCy4PCS4nhmVHrx9wY+/zVrEAqk5VDZUrQTRuAHaqWZMjS9UU39MBVUJh8G1KCPMCZChm4HmvuTKx/mx2/12VqSkBBOezQcTo8MXsguWBGaJzk3j0GO/sUadHnFUbqQBmFYyREWfRapqoi4yPuQ3tDmHR0YEK96SmqOW8USr5uunoKFKsqvFE/rRlsqE+9MdHbdWs9yvzVRsA4PVvSQ4crhzcrelmVdXBLYeo4sZSYuOZ9VRj1RI+erA6OeJzHvo/AMCX3PSawe3P3PDam35y+6+dqrnvqT0dfUmf24oWAl6WggogQLngAQEdAwgSSKCSC5cVw7ouuOJwbqrqL2/95S2f+fzjf3nm0UeeKE3IZb0tV77lbAUjAMVlmoo8q1I2NJ1joWpa1SqHFA0LyQRWjJBlO6ZpPrPt6bvvuPMb3/pedr7w2NN3PvzAk1/8wqdv/c3XCrmRpqaWD3/ktwDw2je0f+Pf/lVQxTCMcCT19DP7I9FW33WF1ATiKSGTvqAIEQkqUXLcGkNz9WZnz7pFk36RMebkqoWpCgYkhWjQomHNqHJZKld71rcfm5/VpXRmJ7tb6vpWLOXMJZJe+JqvPvbIHz7yiZsLhcmxwUKlEhTSkzue2vaWt75ncmbE1vJmQAPsS9B3PLH3+jddORccft0bz3cy5c9+fisA9CxtUpWILpRieYjSRHpeVsr5c1cvrw/Rsm09ua/fAbjhjW++9tJLuza1my7uXn3m3/Cbvf+pfxMIqVzxi7ZaHxOpjuRsJpPe+6n3/epgVSYa0DnLGg6POm/5hxvsUv0Xv/Ppv+Gh/4ZmGLSxsY5IQDS9bNmaXfuGXVQOAukV4NzTe+yKWL6yc+DY3LZt/f90y/WNja2f/8J/LFvRPjtUuO6ma69/1+vPP+t6L6/Xt6X1mLdrO7npPe2TQ8m//GX3e9/TuG4pmhqc8ymUrPj4cJF5oaYF9uQwbesMVSuVfAZvPBvGJ/nDd8Fp5yRXn1Xcsy322IOVFRsYEcnmMEN1RI+a9/x8CoOBjSAaU4tzfkcfmxqim7a0XHA1zWTcH351ZtO6VGuzkkryv/xxPl4PASQOjxddJteswq9/Q+oPt1qLF4d0hQ4PZq66buGD9w2OD0AuCxaGzqYmbhcVA2xKioG08q7mK0jVnaCUkBCEIikcjLtcZdxHAsk4QFVRwfdRaycKh+H117ce7R+VhZSSzB08DLYNYLVrocneJbDrSer6vGetPNYP6zZHj+4pn3FmW6GY0Q2KqVOuSKcQ2nJ+cnw033/E9h2YnwOnCskmaO/Udz/jUgVuuGnTg/fttCpYEG5VBUhQVR2ABYxRCqqq+R5LpuLpuZymaZ7nAYIVq9rjSW3DpuXf+MofayUSgmsSbFVDzAsTxcUY+75fYzK+GgDXXmOMpeSvQsICgjUOHkag0ljglV5z6WmHDhwZnwPOnVRSLF+VLMyR/qPzQkIkQWyLx3S6cmXrvn3jZ1x02pH+6VjSrU/q+55X5nMT9aloNuPUt7C27sSe5yzGaTjmVsusrj5huwXAsGnTyueeOcJ90/NKrwjAXe1N85ms7TEAAEQAgKok8H2MqBDHVxuYAKXY9wXGoKiwYUuirlGZncm61VA+Z9U3qUT10hnAGGmaJgJJiTl8LO85oBAsJJFS1vSfT0giAgCABEIQQpKdOHKNZxTSI44rtJD7sc8tvOdXAzMDICRp6uYbT09teywXNhADmcvDfA4wKLrKKY5YbqmtHdo79KYOf7y/KVBmQqqWn2elQjgzzyT4hkkt25VSEhWEDwAgERACsbiRTCYzM9VKtSQh1dKe23Kmeual/l0/Do8Mess2B2OjYIQhP6tG4vzQLj1RZ6Uase8oRtSrzJuROCcKk1wtV+y2rlTFyzV3wOTRuKoXR/ohGgonm6rNLeqe5327QhN1bG4WzjgjsXqxf3Sf3Lrb9hGo0nCFAwCLlsPgIVA1LRQWhRx09aieo8xOlwE0hAPVYJ4HIIEq2v8JAH7vRf7I7FXnXbZ445p01n7La69nbk5qOkhNERyI4gYOoRpCCMCiiHqMqKqGgAHYgWCep4WN8MGhndt3PdfXufr5Pz/7XP+2pII+8/FvcmLWN6nTI8OkPragtRsrCgB4joVVRaHEdyxV00XAazFtifDY8Mh999130YUX1tfXP7H3D+++6Z+ves25F16wtmLNjQyP/cc3nwGA9//jire/6YZMvtyYTLa3t8wUvNlxHjNMHnCkmhRAccsh7picIlCrSOZkvgjzSGtMtfUq9aFioVSdyhmKKbwid4gEPx5vUEGuPr11IgiOTs1Cury8
ra6lqYnqgRTq4WNHx2Yantvz5D23/f6a11zZe/rGz33wE5ede+53f33r1RevrZooPzO9pj2ZtsoZC5a1p6KpaomzzADc//A4ALQuCDkV7lcVz6kC0Fgi+ZG3XO4LdPsd902l55esWHn02JFf/OzHl19yydd/+bP7/vDkzh1/+Rt+sw8/+RlOJHEwAxV39yIJSPrnL/i1Qpc/84PDX37fw27vwsMzoxJRnUbGirm/4aH/O7ZpY+fgwLhK9GzWpSpwBpFYrFQoGQYOAqFQreJ6AIAB4nHT1FxNDZXKlXizZphx2wqccr4+HvIsCkpZweFc1nvDm86Kx9r//Su/rG+GUhnKZYUxf8uWdaFQtq45GBnPjI2qb3rtBRWriIyjy5baw9vJ/LTVsIA7ALPTakun394e3/FsMRxSrWooXFc0wtrBreGL3pSbHECOFYqmVEzUIwNpzDpvvD78pa8eyeXNN1yvl4vZQ0ehpS3c0dCarQ498BtjcW9zOjNTrlgLOpurhdlli0hDs5RSzM2ZgzPWdAa4Z37x3zZOHRp/cMfwla9re+DOKU2nvavhz/ezRb0N4Uh1dIAt6K4/tD9dCtiqTdFyFo8MBmBY4EBfe936hZ2VoLh1+2hRCiLBNMjXv3HzP33se2Xb6+qNjg3al742RPwQpiUMztMPJ1zsXXBF18SQbXljS1aA5NrshN7aJ3M51bZyy1eToIrLBX90EKJJVK3i2Sneuxz2b4suW9O8c+ex5hZVCkqxYtvlckn2LTUdy5qdAEUF2wcpQApUU3uAE8WVqhKKRM1SqcAY0zQl4MHqNV2Wkz12uApSi5ixipUFAKoKCUAQRqDU6gVOKimeKgwgZa1rnjj19ckH6UVysISCkBg4koCAbt7SA4Rt3zoSgAjpUc8p19er8SR2LXNyKlfXGCuUSwnTFNyqb9BvfOd1e3dnnnzygVv++UY93Pqet3/HDFUjCfAcIAqaT0tViQTMJkRKEIqCXFeqCtV1vVyuvsKZAAAAASBUCVigqtTzGQAQqnPGABgA1Jz+k4N1Xe3sNgFgaqrQUJ+cz+SFAMagoSEpQvn6uphk+MihAvNBJaBQ07I8QAzguIT1ib1xKQGBBlD7+WgSJCJ+TcADgy6xa4YVQwnF4qVSFpLhxgWrMkeeivYsCq0/uzg6jLJle8Hi8DMPYclZzrJlgNu6zHjKPrKfAzcrltPRJ7NzMmRCLK6kmtRC2ZoahUIWAMxFS+IcTc8cCzFur1pPw7FoycsznyxakmpoYE8/lD/t9NDQoL1irZ7JqNseL7/5A9HvfLZMQO9ZHtS3iGWrknt25CrzdUcP5bAE3SCxhF6uVh2HuD4CyUFKJA2DGh4r6qowTLBtYAHiQlJN4763pE31ndBwthqKEYK9fAEAAGMAQVVqeqyEMZWYYQwYQxAAQrWOjhgwBiGOy029MEu9nOZ6aqT/1EzhX6m6Oa7z8QqVKgBw3j9c/f5L3nLNNa/lYfBAUt+j2MVU8TymIcKFIDr4vo4xxjiDgCKUcJmFAyKRQ/QQsDDFwEj1se1PPnXf9mi4tHr168FLpxp6Dz23K9qe//P9T27fffBD7/+nseHRjRs3XvWG1weCeZ6tINBUChRZlbIZiY6PT37la1/9l3+5ZWp8+Nmnn5xzdjUk62fGRmYnZt7y5g8hkrz06vcAwNuubmlZuP6j//r2Y4f6O9obsRE9sjebJE0YMY8DJ1ghyBBC2pYjKlSnyAwrsXDJTefmpuKRFjXWlq0GIDnnTJE0LD3XFky6LZ1JblKiRwp2UJmdqhf+hg0rpEGr1fyO7bsajWW3P7V7+uiBqXRmbjg7k5877cIVZ6xZcfvtf2ztaNJ5IVqf2rlvtru+23Iy23aMJkJGvuwAwJnnLp2dyefTtoLk2ecvGzmaTU9k5srl7sWdkYQ+OTKwdtW6G2/84Hvf8wmnUmhN6iP5EgC0qcSL8qqtJW3vlm9+ekFdQpQKiAmCqMRYYsEYwwJhkBgDmFQailBAcF/xuFQM4IEgQDTsRziNEl7yg2gylFoMuMIqk7xYVnW4NP5w5inxvg/t2sHhDe+5SI0OffkrI38NLf8XWNgASpRoNF4uF8tW0NWd5DyYnazU18c9z3OdoOoxAMAAhAARgMDA1A3XSdfBlaI47bTucqk4MlhYvrxFUcFyK1NTlVRS7+joGh2bLgQ8FPKXLOnsal7/ix8/jIktaCB945wzHZ/B1KHm6947q/jK7HALTYxPTZq9q4RVChERTnaMBz7sfBYWr4r95j/Rxi1k3RnuwFGrsQXy6dDRY7YRw9PTYu1CbWxePdRf6WhRhVvXssLinB/bgWyvsnhJU1NbNZerHnxeufGt69qai3OTI3sO+Fdeven2X4w5vMiJ39ER4rZFXH3rEZUgy1BUPRyUrUB4zRx8AtV4o7dhYfPR/bMV0jBfqYAMFIAARGd7rAE78Y6Od17V9caPP/KJd19w66OH0sNz55/X9OzTc4Ko513ebZf4ojVDMwPQ3UkGjvG//JmEUnz1xsRTD9K+JeViDhJNXm7WsHzHsUHVYEGnEfhOS7NJQ9aO54FCpK69Eo0k9u+ocs4TdZHcfAlkLbwbbWmhkuTLBVBp2C6zhvbw5EQWJEQiEcexGBOKojAmJHBAsGTxgv7+EZCgqirjvhBASC03iQlBJ31cBDqh/qlQ9GqCxqcCM7xYP+C434kURALKKAbc2xdfsqLn7j8+ywVgggQjFCON6uFUpZABxokEnkwl7KqjSEPRipWKrK8LJZNJTPP9I3bENBqaeNdif+gwOJXGdDYdeBrBiAsXEOg69RwkARNS66b5yg85QVRIBgBGCDMufB9URfd9DhCcejkY45qvzxnDRAgJCoXABwAIxyCRgkrZLBYtkIBAIYgGLAAATdXMaECIOp+uYkyFQAASIa7pxPdUTD3GOMUqYwLguDY6BgDQFi9NYDrX12U4BS2SKhayUc9z15/pzw6HDx2sRpKhimMvXQlBJZx3WEN9pFSaNwyCIfLn+2yPIUw9HYxAcCmDxtaoZZdcD7k2AanF6q1YHBZ0qZoilqxWZzP29JSxaEUs2TD3ky/DoiVw5T8Y3X36Q/cUpDDGhuD5reSia638XHT/zpLkmuN6IUMzE157O9V0cGxWmCf5eVmqiOa2ZDabj6WgvQsM2nhoT7qp2czmrMChzAsTTa3YGU2GKBgC8gHIcCLMWFCpeADQ2tScy+V85qtUBQh7PI9kTciUKoohBGPcwxgkglMA+BUh89WEF14Ns18RrV+8nz27nzLrl/aFIxXKsCnCGMlAuIJhQBoxAbkB2JyFWQCUFChWEEQ5cVUUsaWlUZO4yr4DOw5N7vvWt751x0/vefCZJx+++6nP33JLw6q6H37mnw/MDXzxXz/34B/vnp8Vhw4cvuZ115qx+OuvewNgIFgACyrcNzUNpASERyenBoeOOJXM3PTQ8PjDjakGU9Onx7NUtqTqFv3jp/4dAN59VZ9MxZq71t74po2aQiquK3nEmwkREEHgc0R8gSVBYLB4i6kY0pYImMSaIlBQnRsrTs4
ZRjejKY6EYF4cQoiLgJRTzfXlIMcC3wGFe4zms231Dcs3LBJIDAyNdDS3DY0cXbX6vedt3BxpCu3ct3P10vW7ju41UHVRXwLFw35ZsgLddWi0oalRAb+lve6px44AQGtjx1x68uzzVtWlzGe3PUMFzhs4yLA6kixXykad6YiqU+QXbl63b+6oNcSK4AGAARiB1tClzUwVOcBX/v3Ti6NhxfOk54OUCAMgEAwwxgIACYQICBMhypGCJGDEBagqCOGTQA1RyRFa0C2NMK8M47KLOGXIRT7HWFosOfX4M08O3/XcVkhn/xpY/q8wBUMqGatYruN4GAMmwAVoBCWTccuyrKrvSwAAjSqMB2HDsD2XcUposOWsJUPHMoW8HU46+TkgEAJqKxpEI1EhHUQD4ZsOt5qbwsXZSCFrLVxaXX9G5/S4vXt3+l3XtWNjtn8PP+sSObA7tv2J0iXXhYpVe8s5kSf+JIFWo1E1cNqjzaMe0x6+yzvjIjF1DJfyFCA+O21X/GqxlDjjUj/sWgHtKwQDvd3Jv9xbwjGendFl4GIKy1eF03PVwEqVy+Wb3rZl24P7V6wI/vCQv2SZ31pvXH/TsvvumMuXp+Kx5X3rR++81xreR5BUMeJCyngCo8DTNVixTOnrbPvTvaPzVpLo+W9+9bqvful3gzPakq64XywOFb13XNpy51PlD7xx45dve4oH2ute0zc2dfjAfn/5ygVNnWnCWdT0SjPJoWNJnByKpFRNR4eeMyLJqmWxajHcu9LSY3J6JHneJdFUEm9/YqShFWYGW6qyMDcl6tq8wpwajkI263sugACNplp7ckiERwerUoCmKZ4XANTHUxXHZlKQIAiEEJGIqShaPl8GzMJhcF2IxyK5bFVKqiqqH3gAjFIMgBljCIGqap4HGGFAXq27eS3OjDGuBaKllPxEkeaLmsGcsNrGk9uRRqTPidR04p1/4coH/3wAsM6kC8IA8Gqd1rsX6JmpsO2X46kAKxAyyOQYa0wZ0ocFi3E0FiqWs3v2yHd8aOVTfz6WndVy+XJLW316ft73gRCQHBCEuAjCERKOaHOzJYVqjPmvGBLHQFQNEyoufc1F9977EALke1KhBhfeSfHdmozgcZ8eCUwAIeABIIhQrCPs+6wEuAYYBKSsxZYRFgGDj33sPUNDo489+lS14gMQBACISSkBAaKq5D5QgABAGhIcAFjYEnF4ZcVa2tRo8IrmOtljB2DZWtNWrJlRiCVhPqNG6xEoXkg21TXPjY/D0BE49wI6N4nm54QkeGKEWIHb3UHNUGRivOAHIAVQVbVtQSBE9LLv0rVrIRJhpQIIYR7qt5atI4P7jZZ290s/Ed/4aCRZD02dzkN3+72Lm2YKc7ODC51gWDcUwTXfo/H6YmePlpuAhmbm+jwzA76tuS6L1vFIEuqaUSkbGj3qsoBH42CGqaLEM+mS4xIQPqWCM0yxFkk6HIDxWLVQAgAKNBKHZAOqlFhzS0rSbHowacaqTW1katop5cCtakIqEoK/CsAn7SWwetJeAsP/PQB+y9mn9Zz9jk/ffOP2/sMbNq+cGRlqb1zINUloAD5IEgTgYxymKAQCwA9AkUwUKDUHKkVUZb319Y9uvf+WL3zm0jMu2Ll1Z8uStmK+/z0XfGRSP3TomUcOp+sqh45tf/z593zqo7d84pOuIzLZwpkXnc+Fb1slU1M9EjYI8p3q09uebe3u7O5p/cRH33bjP1z+wx/+c9ho0pVwoVCYmkwvW7bpK999CADe97ZNE7lpEtTl5+buuOffBRFjMxnFbgqhZNUvIYOCQkhYpQnNYQH3BAogIAiw0E3iWOBkMqw6CRzropdGJfZNgtyKVTYScVqnChREDGTlK4V0FnzvnI3Lzbq6o8OHR4dnF3W0krCxcvXb9EQIWU6bHhvHFYl5VzSUxf78aOmt157x513Pblx/9mT/5O5DQ3YOAKApGbOqTrxOj4brxwdmXve6JVsP7VmzuDehUadSAgCJVGDayNExt71Vx9Xnt2UAQFFBg0jMEOBYiqqPI/6pz35ibSyqShdxgR0OAMzEEAhwJQaCQEgsMeAAEE0SpIAgTDJBVFX6XHoCUjFkIKRaggFWEQgpXN8JpczGhSAYkKpz8Mevfd22/x5u/s2svY0aWnxoOCtBIUhDmCsqdh1L04jvc4SgJpZee75rSq1aKBQJY8uuYgCigqTgFU3f8zUDeb6PMSRTmuN6hIDm47aWkBlSWjqVjkWqXQ5UCKl6dukqNNRfxUiYZmLrE4UFy1C8Xh/sdyOGHBmClWcAxsqOrUFHjzo67C87DVoi7ROTk30rU/v25Q7vhVwOlXORDRdYpy1IPbbNOjZgL+kzjh6wLQHrN9VnZ+bHh5oFzGKImnGnXA4UiW98l0BW7I6/lN50Q9dzD85t3kJ3PK7VLcxFGuueeT47NwXcBcSBYCql1A2u8Wh3t0wKFSWSj+/IB6L6uY9vzo1M/u6uIdkQV61i2gIJRjSi8EoVQHgAvqJGNb+1PTQ6EPSuDM4+PzQxZLe1xDOT5Uog1m1JPf5k7uxzO9rb2v79X5+NxNHsjIJwMJ+X770levevy/EIrF1dv/O5+dnJULSBlYp+wIGKFtubMcIQeBrjHgjo7g17QXVmDLoXtMxlZtwqUIoFCCmOd1pRFMI4lwIo0RGWQeABgmjU4AyCoBZelgTFEXG9WnNJxACAYMSlxICFEKFQyHXdWgsBxmpNe+Ck1GsNnuEltOeXBH4pADN16r/puvN27DrY3z8DBHEgwIWhm45XwQgwBSyUgAXRGAlc7nhAqIpYI4ZpDKKxHaZmQHDS3q0Xc5ZVDlOlGgDwIE4AcVSIxcxqxeWcE4IkSAAQHE4t6DjVKTc03fFcQJBMGYWCIwUQjDg/7vXWguoAtWZOtVQ34pwCUFXBPqvUeiooCpJScg4gT87cqJYsr2lvsQAAMEZUSFbrh6eGMfCw7wSKKlkQmKZWqdoAcONbgajgO5FitlKYAQXhZLPAmGac2MGduUoV6hqSll+OJsLpUael1Y+n6hJ1hT1P8gV9ql1omJyd5AiYBAJG2CSeX9WMcLVaRRgYr3HfBJYkrDS6wQxAEpP8+dfo2/5slEqFxsZYW7uaaiqApDuf9hetUMcm3cWrIJcFEGg+IwmGWDQqwQ48ZbJfhEy9YpVam6JVm5XKvgSkaiTe7M2OhahmXXBFqDCPD+yuegGAAIFQMi57FkYd5BjRYFFXeOf26rEhIiwOAKetb1Iic7E68O1wqVh1HVi5yjywxxKeUa1wVaXzOdsqA4Hwf+UBn2p/XbX/xLP5opGvtMPLPxiTgymlumIyXyF68NmPfeyi864kdeCJHHc8qVBNDxFErn/Te+Jq53/+5+dAF5XKeCTW9uk77j69a/Gla3tGKgPf/cFtDY5Z35Z847veftW7Vl9iwvbR0XMu/uHP7zlwBhxlkcJwxv7+d370q1/+9p3v+1BXT6/l2yGDIiQD0K1cZm5ytHPhgtv/eJflZw/sfBD5mUQsgomZmc+5XtUwE1OjmYeeyAHAu9+98uCRYY0llq
5ZO3no4Of+/b0W56oIgdPSuKJ93s+CZEJgTwAGigPJMAuHwr5jZSfL3FNwxJRCaG7O9UeBa03mCiBa2bdJDPEwCnyBFAE28yseuJbCy23tDVJWp6fS4VB05YbVA8ODs9NzH3jXj+viAAmtAvrcQbZubUOpPCg0MjGdXdTaOzE85WA1P1cAgOULUkMjOQYAoCQ0c8vZQWuDOl8tWAGYWmx2oNQUS2EKLe0N5Xzp0MzM8/sAAFQVBUwqKggvxnUv7gKOR9/z6fcuVZSowqihIwdJ3UELEv58UckLXuYECAKQGHPEwcAkjJEKIGRgBwqmEnFOJTIVYmJGGCXUy3vaojUAEoLpwLUU3bxkxT/+z3IfJ9/962KZr/4Yp6IEKTSX9zRV8zyPKsA4KEgFzAPGT+rHI4wIQRoWHje4cEJhCKrQtyQBetA/UEWOzoWLkYIxDkeR7/uxWMJnJXeOGrp7wzv7iEa//+0jV161ckFntTBZ6FlXSU8yRTZkKhkmYXQ4FGu0NRy18rhnbXnvLiJIZL6Q1zRKQQ9Fq00qWbGZ7D/gFwuJbLE0NSpcB+KmGeZuZKHMZKE+kpoYypedWFtP0cuG01kXEz/gICGOaUkyecF5sVio+ugT3LIUHUUufYN7ZJ+dniW5CpY0UFhkYR9MT1YYw06AATEi42G9GPjAhGYoXjwF0brIoUPW8gUtZ62L/+SOQyIE2AafRk1WdnWseyLRolacoJzXCeD3f1zNz1dDZrD/mXDfCuG4KJt3BEEyiFmVfGZcVzRdMa2ZsXjVn4/GqUE6stmxviUiUa/t3uWpOFGxC01N4Uy6yiXUN0FgJ62qI7DDfQ1AECI44qkkOBWNaF6lBIZucC5930UYcK23OSJSqhjxaFytVKqqqjh2DXEBAHoWJWYmy44NkahWLgUABqV2Lf4cjUbL5XKtprGGtTW5IkoJY5xSEgT8FQv0XojlIlAw7V3UHI4oO58fAUQ5MIRUCT4IBQDHUl57a/Ps7GypALFIsrlJ7z86x6XAEI6GqkgA6ODzcBBUuZs0NM/xLALhADwAompSBJQJS1FBCOAMAygYBYgIIV5UUXnKmgBTiimlruvWzlxRFB4IAfzUrDYA1JR3BScCPFUD3wOVmn6AFIIDXq6pEB6PAQgEgACwlAhht8ZKgxMxAIypECJVh1XFmJ+v6Ap1HdbQDNOzAAAXnweDh+LRhkokYhSz1Wo+NJezdYPYAW1qxliVkroeI8l4SlPLc6PubAZWrYkbiq1QMTfDJkeAqkmgXNH8YskBAZrS5AYcoKyqPAi4qhDfZ5KARkyJnDe+y7As68HfQbgerDJwK9LaW0kl0Xi/Ui5jP5C9i+D8a7zJQapp2p7nLCKjmmlhhYd0YL46M+abev38/LzlgR6CwAOpwLkXNUhUGTjolLNqoSQowQH3AWmXXR6ZnSwe7NeZ76tSRKJaosca3AMAcMmV5sSI5TowPqxLoBhXk3Ejn3cAQy0tIiUICVLCfyMHfOpc9opT3ks+eHKWhFdCYoDZkVuaQ0utcM4we3/wk5+N7J4TVk/r4hUf++QHQWoSeR6H23//q7e9+YMpdd3Xvvhvb/3HC6XIC5Ra+u6P/+4LX1/bIEb46EMP7sk+NfSZb93y7b/86Uu3Pnye9ftVZ970q6c2jbpPfm0DjMnZFtx04QWXRSP1e/YfveZNb+KIB75FQZa9yt2/vbWUmWrr7tx0/ll/uPvnI0eflPbUgu6zZuYnhiYHGxobKnldQXW/vWMPALzlTY0zc4Gmxg6m55c2xzZ09V765otjBlKVNq2vdziTZeVqwjRcxiWAYegco/Js3pmXWA2DjoTvYwmAMfIDzLLSz2CS0vVWopuQCqqeFVINq+BAwUMUB4iDm9FUDRM3pNHAY2XPChmKotArrvu3ZV0d3GQ7d098/Jq1hSD7k4dGYc5YmDJlOBvp0PdtdQEgBiYHe/G6uGEqSxeFFTQ3P4FS9fr4XI5EI3N59eDzudXt9S26VjCL9bHE7x+aBICLz23dv39alanGTlwfDp7eUcRCczBjnP/rNz9tArI47wihljUp6WSJZvqHLbWEGUiEAQuJgArgYEpsEAAkKRZYEKRL4SKFybjKKeeORnu6SKXE7Jx0PQz4NRd89n+W+/grEmP/vf00NwDWsG1DMS8AVJCC0ppgDeI8wBRxJgEAYQRSUhlmqIqxJrkSD7mNLdpMzlIjqeJsTtc12/KEwIqCUsl4qZznXK7fnDxtY9uDdw+PjYPZaCvSiBP+7vemBnZXkg327qdFfVs82SmeeARHUoXJIaV3MT//jfiX348VWW7BovDBHU4srPT0hK46Gx0dzGXLxsGjjhnVq7mYQtPM1jvq1f7p8rLTITcCKtUHRyRncuFCn4MxcNgJOCCKABkysHVoaUrMaAZ4AD5vcHk2UYemxrgQKge/3oQLL+29/87pim8zoEqUYaF41QCBJg0PPKKLCIMiw2Bg02RWFqJELZskUvYqSNck96iHF3VqdhSaYotHB/bd9H5SShuuVxk6HC2Uy1YFBNd8jlzHPfe8rvqWwtOPlUDqA4Nuz5KELwtjR7Vk/P/D3l+GyXFj++P4kYqbaZh5bI+ZIWaMndhxmJkZNpw4zMyMG2YnTuzYMTOPxx4PMzdTdaH0fzFObnazdy987+7+f8+zn1fd6qqWSlLp8DlqNIIxQ6wOdywsWx1USem6bgFAiEkQAggQpTzDq6bGAeh2uzUeF1gke9JpKKgbBgHAHMcMGiYxAkoZjFiEdYYlg/6lDMOZJjCYoWCIVkMSnMGADEhnGWwaDGV0MAEh5HK5wuEwHCtSa/62rQSB13UdYzxIgAEAob8hcQIAS4Xpc0pUVd22tYkCsCxLCWsSBWNMCPZ4heIhcPSgmkoZDAaPx1qa79m5txMYvmqEvbsliBDEEhbVJJgIFKKSCGPHF23f0krAynBJwxg8oLEoWBQ1wXFgmkAIcIzNoPLf9AuzSA45JTOYoRQINREARphQAxD9jcn4TVk5yEIc+wdMKAWMgZjAMqJpUoyoSbXfJ9RHCCjhACjLwm8WdJblDcOwCGTyVA8nyXoKBfwpt9O+YWscANJYtyUtXDKcAcYMdEJ5Zc7e/d09nQzVTQT8yEmaN8O3/ueAroIEHkJCGgt2IX34aCO/JLTqA1GhBssJOiR13cbxCd0ADJgQQCyhFMBkEbgZzm8QANMGSHGnG6WlaYYZOXwYMKO7nb5IKDB6Ip9dqNfv9yw6K9bfqPe2uu0uVFihHTmUqN5ijYSTGZmOaMwoKJM9aWj3JqrrLqtA42oUI5ETFbtV9GWZ8bDu75KApa402t+rUhb4Y4IOTJ7lwUqovwu6A1J/TwoASkuktGxSd9DErJhfnmytp9EIC2AAAow5YmIAAMYEavylBPxv/Bv/xr/xb/wb/0dI8zAWkRSV0fY6PuQXZRIbOY3bt0MvK0AdLbS43JuWLe7f1+2wePr7QzzjTJnxQZ22w2kBgFhUBgAAjFhCB6suUUbgBUVLARosg3HMKv9rTNQxRmT6HJi+gGmoJlXj+X271M4OK
Cy0jZyUhDD65XvbgYNJxmYqKmR53Swb1sIZWiqUYHSdcYmWpBbVsSJqoBkYOCIIXIq3gEkhleB1qk2YltneGgh1GYWFXlaKE1bLyMjYsL4fCMvySNcGnd1AlCA902p3CClZDQaS0QgwCDAFl8suJ1OaZhh0sBwh/OfZgP+Nf+Pf+Df+jX/jf4sCt9XuTIk8EjirycQ8Xm/tkUQ8ylZN5ndvC6s6cCJoCguAOF43DSAEDxJglsUAYBiDOnMEiAACFrOmQTiW0wwdMcAwjKHpv++OGXQYozQvK336AgWz4dWfpKcVyvV1CqLY5tDcVnCmgWFIbS1SRkGoo9aWVZiYPsfZdlQ5elQPRajTjYjCKgkdC1RWOQkYA6tIBFX2ABcEyoJJsgvwslOyD+7rUBJiQ72iK4hlrJqh6oZ+LDYNSRQ0hE3MACVgmgAUIwAAzGPWIAYB4HlB1dR/QRzwqUtHx/tSMrLk2j2NDQ1VQ/I5IZBbkOPMygoyipexbazp/vm9zU9edMZVbz+5eb9u8qiqiu9VhY667hkjyy5581sGTbJaSS6vZMmR9LRsPl9YfudXam/Z0ONyH1qWvmZfXYZNPGmReMFJpy8fP+O0Ky4pqxrGJHXKcWooIWYKu/fuP1K3oXrv1wzVVYNNRLVQRySrRHSnZR6ubYxHUqmYMnbMmFff3wkAS0/PjwTlRCRUXji5o/ewJGaGkt0mJUW56e1B9YGbrx0/ZWJ/UO5o6hUFt4YJEJNqhl2wgAEixyHQiW6wmFEAGbLCSyxys8hjRRyrh/sj7fUI8VZndlxhAXNgUjApMg0eAaaGk7Bujs0vyfAHujMK87HFimMmIBLfla1p7MF9td/v/2rN0R9b+3s0liG6AQBPPJJT29V7cDdJs4Mosg3tTCzAGIYcC0FFheByq5IAwU6X1RElQHPSvJ+uDQLAs48W7Nge4awJFpv19RAMgNWKh45lsZmfiNG6mnBXRyhpgeFMuktOzH/iJjvPmppRXu5EaSaTSlKrW9sb4lOMDKYVY4MQFrGUReAlCEBTTZbDGPGEMahoUIvECARRsnDmiv+Z7ePv14H4+1FzCAGAzyFGYgrCHC9yqiYLAifLOgLgeY5SquvGMRswAEKI4zgA0DQNADADNptAQU+lCIOAF8RkQiMEMEtFEUxCeQ4gKS05Lc3lth1tqg9HAShXt18pzc+cPF52Z5KG5oSJoasLHDZv+Zig2yPyptLRAZEkNNVZA2E5PZ1vqiEWmx4LCxynup2CIPFdPfGSSnd/d9hUuLQMfejItP6Qv+WoKMsKMRlN4ecu9O09HAr3JlkMrA5WgOJSS1BRXPneUSVyydAhjV1HQ/7MVV+1IMoaREecyCGF0sHYWWwYgxVbEcMwuvmX9TePzSX6Ten6e3MjxpiwrI1TbbogUQMY0+J19kT1UFIuyJBSSsqX4YxFU6kUAgCWN1OKISsg4VwTd1lcADoX8hOCTJ616EYK/tLLaTDU5zcXZYQoyyFKKTGRaTIIHUuIIQgcxljXdcMkAMd0p4MeyoQMSkIYIUSpyTCMxWKJx+OU0sED+jf16e+fC+PfnAAAwzHHJZ6zUGB03QBQEUKSBStJAwObl2UuWTJm2vh5a15/9LylI3/ZDw/+UD0k0zrgTwYxMBpjmlgQLKoW5XmMDJdhahyr6oaenynYXHpLPQicJ6YFCAKCADhIc4JF4gzDIieYSDTOcZxhyharoMqIUmpSgwIANREAC8BhkIkEKMUzHFDBpAmes+gaEEbO9LpisRhgTlE006QcL+q6Tqn5V8/7WzyVKIqadiwi649RxX8DCHJzrWG/LCuUE5Cmsjw2Jk12qmrElQHxMOTkQqQPQHOYbKLxKNcT0QBTjkiEpLIKIRmHcMCRns0SI6wkrYjDkXiM57GhMZRSShDLUkIIQnRw/lkWD/qL6fpg/Vky+Eb/OkT8a23dv9g8AEAJM2O+Pnm67Zl7UhklZjIJJSW22p2CIzORnqf2tEHYD6aBWOxE2NQgjhA4OIcAXFyOUjApw3s92f2BVsMGwEGugxs9Qg93uhtrmbAacWYQi4v0tmFNdSBIApgAFgSE5WVVAwDgsJ2CaRLl1+GxLMOapsFzvKYrHM8bhjEYLXaMAJtKihEYIyX39PTkFxf19/bffO2pR3uax40dpfalLOnpB8JdNTWdlrAwFJ+8OIrCiS86Lrt46t33jlcPzB9x4shZ45P+NLt9zI7qB1KJPqQyhb63RkyZ3dXx/WMXTJt75TCABLL7BNY6v3QyW067O/pLC0uivV0x1oNSSpozlp6WRnsSmjSQUVm2Z0tXYFfvVx/+ufi0+ZSBz9qOaEyxjcrLc4UUw8xf8ezWR3988c33ZU/8npVftj33kMw2pcK2c1/efOgoNzufk9Wj217409yLjp9oy7vuyhtff+vde194vDva60U6Y0MMOL9Z9fbqtc+47WwyyEoSJydjOZ6R+2v3+MNRVrCZKiIGpPvSvvqhDgCGD7diRAvy01sbE5lFalOrIrishEE4HCVxrysnb8+uo2t/fEK0SAmDQ6wIpuZGFoEQAshEBhCdBUlRDNMKVq9TtVAF6aAZhgGIZz0eV6KlMTzQiYFl2AzCuhADRjImYFYCsIMJxMi2+BgeDkcanJwjm8kyVaJQZeMZsdKKUoeX1aXIl60/fty2uiPRBQCXnwG1dZBgQGCyextkqyMydl6eqijhoL+nWdBTakaGbewEt8tNo8n+nEz2ljtSAHDBqTBkREVjZ33Ub4mbsp6wHT0ojx7vySlIZOZYDh/QOzriXo+vd2e4RuZA0sEwr35shR3T0Vk2KVvnWNbYFQcTM5m8aYARVATKUQSU6pjF4MaIZYBQwugIBJPqjAcQAwt/bwP+pyDb5wiGYjoByWJJpWTJwsuyNpjBh1JKyLG3GsGxJAOEGACEYYFl2cEAFUKoLGsAwAuCpugjR4/o7GhOyPFx44o7WpKJZP/k0ROy8vCfP9rJS2x+KcnMsFYVyKzN3LvTGldTgSAaf5yZneX96IXQ6ed7cyvVQwf0aFLhLHBwG/S2ARAwOSQgC8eRZJIwgoo0Z36Z0tqsjhubnzL7D+8znR6Uka83HITps3L7OvTOFr87m3R3s8BR0K0u3r3i1mISbW3qjyJnxGRyPvtz74ix2Skldqg6apgwfFh5d3dvKBgnBFiGHTTgGaZBfz3I/piS4o+NDMOY2LALVsZI6imwC0JYVb1ZtriaVMPg9nF2J/YHFEMDTUe8wNgdYqCXRXwEKOgacDxYLKKuiUk5wrIsIeSvzJmDYS2iwCvKMQaIEMAIE8JgfCwBE8MwAORX/16A3/nn/SUL99dZNX6/Jf5IcjALhAKigBD8NiiEMZiE5QBRDmGjqsrrsPgO729LyspT1889Z+HEU296YU1D/NQx+SsPd4iI51QtZYFUAoAyFEwAcLmFZFIlGsOAWTnE29qgGKapg8pLEmuVAQlVlfn79zUqKeBZXtUHOzYwA9RkAYACwQxDCAFKEQACyvKcphEEBgBIol1REqIFKQrhWHQsnA6AAnAcZxjG3wlr
FkVR1/XfE2D4uzQ4zZNVOUzcvqOVmIDBZpXsitw7cYIt3kesHoVi5HSJchRYe9LmwpwF4mGixDJ3bQvm5/MOF5MyYllpaW2tfkWDvj4QBWtSTwIFSlkAwrAEAWsYBsdxhqEDAM8Luq5zHKeqBAEMprfkBYyAMYxBf3X6N8fMi8AAoysmIMgqhvJhlkN75HCv5M7CupE8YWnRqIpTN284uOrHtWn5NBRAKqZuDC4MLrvvaGdAxyAJrvJh8UKv7dBW5bbbl3z9xbfeXO5Iu6JJkNAtgiJbJd/hgwECIEk4pRMMoBvHgq8QAELA/FrqCGMs8nw8rrA86DqwLBh/VY7wxafu7uxo7entkGVZEC3BSHjY2PwICTccap5cPmXv0abGZCTHU9bwU7t1wd3DK2MHvr3tpJveSpu3tKJj7UfXXMrZnS0c8WNV7ynKnjr+yGerblzw/N2r5lZU3I4aiu9+eoIhHb3nqgtlDAtGTGl2hfu6gj6DeHNLvKaBQDcT6ujykvELp5yy+LRzVlw4o3j2Zdfe7N/SsLKuviXcc8oZF7zx7ZZJIyecP18ASA9i6PP3XvHSyq0/dC4775wyPtA2cMAeZJFLclPnkzec6CfNhmhklw4/Yej4udNnXnrplVJutoKRSEyClMbG4Hdfv9nS/mMg2OKSMtN8rMNFGxsDTa1xT1pWNJZMJnSXw9PS2NrUBgAwboSbpR7OEpKVJGthZdMyauq4VT/WyPX9Y4d4FS7qT2aAFHn9+ZtF3qfHFIr5dCmNajpmRZay2DTipgE+nnWLOqjENAlhU9QUOcyZEBiIUgM4xNFkkJAQAWCwT+TSiA6EyE5g3YANZHKItSJuwPQnIJYGaW7Bs3ZBcxSaCTAcZEsWd5yErlbuAoCXn83o6+fr20I7tpORY1OzprqOdEQCAzB8aN6Rms7SMp41fMFwj6mBw8PrEdvL74YA4E/XQU0NjJhoD0Ti9U3iQBtrpFS7Q0/LgGTQgwVj3Awu6E+4HeKa/WbbdtYHOChEQINrnr6zCqN0K2URQMJiFmMm2zA6VNxLsQrA8oZhMshELhasDCAVdAFYE3TDJLD43If+ybYPmwgutz0YjivKXzndHDuVfiXA+Nc7CMIgCAzG2DBMBAzL8rIscwKr6zo1MSDkdIqpVLKgyBqTk5EeGDWkJJHqHTHendD9hcUFm9a25jrMIROEz95hAxHTl5e67t6cL15V+ruDNhuau5zftlltqIVhVUU7t7byHMNxXFJReMGlqRGR4wTGCjgVS6gFZdxAJ59SkizHTJnj2r8jKEcdPIMWncbu+ElOpEhc1wDTiqF2L5/K4QUzlLTnwshpBa39/d99pXh8vG7AuCmFe3b11jXEB+kKPkZgMMuyHMcpivw3ae0fGwejZi2cNZ7SBJGqGqGEZ3EqL9fCsUpLi26xQSoFxASOsxFCEJMyCAVDcnsMWdHVFI8Y4nJLgUCcwcygZ9BvBPg3GsBywLGCLCswKJuSY35VCBl/JA8MgwihTqcjFosRAr8nt79faI5jCCEcx+m6bpoMAPnN0XfwmsHKvseWngGMB32iTUqBMawEcHYeSc/leruUvh6FZZCpWd1IHucmCrVvUyVJDjA2h5YQCAQtdgOAi8exAbpoJS6vvb83bhgMz5gMWDEimE+pSXB7nIwUVWRnOBxlGMRxXErVMAagg3I8M5g2iQIgZlCg/9URjFAEyO7k43GVErAIrG4amZnOnt6oaQJCx6T5X+fnL3Y7/I5c/f6n3/DHcum/geeppoLIWnXDYkDkrPOLeb6tu1kVZQiH3LYMufqAaujAc+5AIga8yZlAKYhWJFhooF8oK7VoeiLQp7s9FoxtwVAU82ZahrehsZ/FFsCqYZqDK/hryDIyTZPjGFMHhmEMoiEELIeBYl0zAPDvGcffjx9xhKjCuAn2mn1JimluhaKEAVRL3CQCr+7b882ejbIk0QsuPXvS7Izmxqg9n9F7xVyH3NOeamjFSQCGAYuFTHI43CUx1Zy6ettuFfTx06A4jycKFm1kz1azp0siSFGpoegsBeCthpocHMBfaN94Hruc1nAkPhjYPaiRliy2lCwfo9H1DRtVLSlIZk5BmmIYnJ1r2NdUMMwzvGrMzzv3MoxQ6cvs6+9J2cQii52nCYfhKWkI+6ZEdw93bXSFVlSOLeqF7zauLblnrWlPm9Vf0DogfvXQGqa3Y+a559/2yldkqIS8EkRSXknTBQvviCaTzv5oJEHsai83ftSoJz59g7BhnejDrekFw/Pfe+dZqZ+euOKuW25/49C2wGNnzHZid4DVfIED1O83rfaRZd4dct8vr66+5rUTlKlpJ9z1EW3AYCta9/VbN100M7Mk8e22nx669k8792xv7AlQxlWVU/HFT19dctHZNz/8p3BnbOLYsgUnnrR58+r2ennYiCFdPf28YO3o6ATE+bxZiZjsS8+Atn4ASCaiLjtyu9IdkGxu7yosz9676UC4pT/Hyegu30BnfzLcHhHhmrs/dLAdD91+GyWCP54YOaKiqyOoJxgGkK8ox8xEqXhUVzHFPIDuFbl4MO4fSABhMBJ1FoHFw2AbSvmJ7lfUBMd5BfCYVAsh2YUpMlQdpHRwuVh7kPoNVTWF+HBunKFDE22PMp2SbB9cx9XfBa05TFuj7nKxlWPSD9UqB/ZIImtd82nvhGlQUuSsPdJjE3I4e3c4psnByOBdLR3Q02uJrDcxSCE9lUw6GcQP9BmAaDwS4UXy7Qfi4mWqV8vq7eujmI2TECXgoPDsiodvuv+B+oTqoCCDPNR026nJ5FCSLqq7FdHQWIbXKTBhgygUuXksIaSArgIriMf25j8RdjtmOTrYJ0Y8AYNhkWnqfxSMfgMloCgmwGDiIaKqJgCjqToAYBZxrBCNJjMyWTWVTEQE08DU0pzjk3Q9rkQ81dsjaT4jEkTvvyClzPBNj3jW/qA/u6J7aJV90gJY/SXdud7b2d1DVMfRQ36HG+Jhl66bAEBoRBCBGHpSJl6fYVDU06frqo450HW0aXVUEkTRGvf4xJ++jeoIDBOAOI6f6ZpYaRiycLgjsKWbGWN6VRho61SMsI3zJUwlq2aX1tkelyReVTWgmGV5ADBNkxBTVY3BKM/Bp/47E/Ib5JTMCbqicIAwsImcHC+HkqDprMAZJiZUJQCqngAAiRc0TRWwiqhNFAQ5lQATohEFI2wS869iaX4nhJFBb+fBzwBAKWUwaxL99yTkV301pRRYlqeDhRF+TeBsmuYg5abUHCTJoiiKohhPRE2ToEHFx69BNQCYEoQAEDAI8dSkpmmYYCAwAcBnt5SOYBmW37+3T1GQaQoYW0wcj3Fk2dlV84sKpz79Q48By8amfbutkZE4GuecDgGBzGFWV7TezjjP2w0aJ6bFncYHB+IWFsoquWSMRsNgmlGBdytaguMZnseaRuBY2kjgWEwBCCHEJPQvXhcsWEgsqiJgfGl8JJiyWcTx44ds2FSXTCZVVUcIYcwSYlBKf1NB/zbPgxM4SN7+uOi/J2Z/tRNUlbWIXEpNsiziqfuzj+rPPtvLgcZ
iarVQWVaLKqCrBZlGXORNYtripgpAiWkJBmPA6EfawgKPVFXQe2WXh9jtbDyl8mKSY8AwMJgsYBN+SxLy6wdKKQVCqDmYH01TCUYEADgOGSb+vWv6f9yowZU3jvBkduzfnVp8cnZboxaWA0RWysfa+rqV/bvbv/u6tih/CJhQs2sAs0ywRuNUoodTVguMHMMHo3xjs6olHWvjftSTq9P9AMK8E10i9rtYR3dXwDOOdaebVgdpb2MTUQJAMMMYmgVABjhWpQIopoCAEmJCYCBp/uqcLwgimExKVjn0az1ggyrAGIglgXiAESxNnS3ltiGsYdY1tjhtbqfb09LUZMhgI9lO75AWlcyM5918GBpe2O19bvmXExY8e2DLDF+24BgeL/Y033lL4OhLy5Y/+6d7Wi+/5U+PPzpqxAkbpfkX2zKqN7z1ZQcxGw/EqdPBEime7PQ7/EYouemRL3768o33Hri9wD3mjBNmtB+sufG+FzBU1l9871O3nXfKQ2/s3OZ69q1zSbgDVAvFBUogIXBuxqZveuO6EWnJi356A580B7b3mhHUEGQu+2otwn03XDCvqR1lFkUkqdrfHea1UfY0x1mX3zx1vnPu2QtFjHv9LYKVeKT0psaA5Evva2myS7ay8qH791enElpxSTlAPwBk5eak1IHmZl60JkQWzZ9ZunFd/VnL5io0/OiL+7ho2qnzispHlzz05KeUp/e//FX4YPDtj6/qiR4MG6bklBBl+lr7+AFRYB1OrxNsDCF8f0OvliSsZKcYAVExodREFAREsnhRoySiaO0q9FpQmo3YAACBisHUATEGFOP8IASGfmW9e8lTQ6GwOMc5kAilwDu4jm0b80qnCqNGpvYcaP/yNU0hsaJcT04R9fgyp8y2GnpClsW6A/7sfIstPYV0F0AIABImdPYCH1Q5bCIX6AbpD+iCSBMtSFegssLHOAL7aiHQ3AZ9nMmmkiLwMmh2JySjb970CBFTssJe9cy9Ne2xKYbA5ItEZjkMBsZUACoTDrFYMWmPSe0cFSlmYTAxHgD88OXdrFMEjCijJEPUWllMERjJAVYzMMNBArpNJ2vBavu+y85+HQA+eOF8xAASAVTDRIABqxqRHG6TY2LJiMUw+gj+4LnP+oKshZBYTMnK80aTQYhmxUlvpuAMBsOKAgCgGwbLMIah/E1mf9B2+PtzCmGEjh3YyOV0JeUoJVRVFJtVEDlMiMGAUFwslhV722oZmyvsTu/z+cDUYcR5zo5mpeYghP0hDoOF9WQVhepqLImU7Mnyqwl7MsUiNQQEW0QxaXSLLCgqYNarm8Fzzhz1+SfbODtFiAdKiI6cVp+iRX2ZNBb2EBxMaWAxwQABbLEf18eqtzJzJhXXHwosXpTrs/NffeZXNeA5LrcQQiH/jp2GDiLDaoMBnYOaycH0Dna7TVE00zR/jezEfyUE/9VX0zR5HgwdMDY96ToHLKjxrEKjswEUFSySaBJdkEhenkvT9P7OFAaHQRLRWEwnIImSZqQMYlKCMUOI+Rc2vN86MgwAqgGwAIQeUwgbDGZ+W5HfNI//oUcJ/JpcbZCTGCQt1ITfJDzNAM2AZOrXzn79h2MX/GqEAB2o8td7Iu6HbX/ZMlinzwRGcw8Em8dRvFITft7bKJqgJAVAEIwlWIbRTQ1hEDEQNWVlgOX4ihLProGI1y3anGp/T0xOcQboJoQBQNfpoGFeEARd1wEGXTt+mx1A6FjcECLEUKyAkwTMRJQlBJafOnfNqvWh0KDbMPzKYB3zsuUH7Y6E0L+VcRP+EwPwH/kwzBmKZlAKupkAKmME237WnBbnkEqqMZZ9WxJuFxGxo2R4XFGg7mDCZrGH5bgpx1gQMVaJ6aBGYmgp5/Vwe/YkOIbTTan2SMI0WRZT4FRDO6bwAMAACMActBA57KDqoCoAgBmGETgWYdNutYSi8qAG5a+fhYorv9kT7sYs4uYtTn96Ra0WE5HJJ2KmlgIZ1ao0jsVENC5k5KgeH7d7q8YKKQYLVreaV6QkaxSbgBJ6yBCBoz3pbjJyIigxZk81GwkHgDo8tclEUkAADg+TlukO+alqakASv80bpQQjDgNDQdP1wZ08OMmaqugIwCqwqpo6RoAplgA4QeRY0drc3pmWXeFIT4vhNoOlWYKzta3b5sqI9PReN/qUEQXa8ndXT0CaOg1V/rS28rNhD0TtgQa5353WEJKrtvZouTOOL3z1SLGbG53dW8cFQsn2xpphPf0bPv4mj688aPbpwKphLkdRl1w0KREM7upqpTT83tOP9vllsb/jqu0Pp7P2YRmuRL/n1pvePPvWhfku72sfr341fklj39Eol+1kJdaIKX2B9c/fkJPdtb+l8wBRnTYrKeNiR3oLFozhJPXsorNsiR+spEUWi9f9dKSqfEwsXZx5/BCtPHnkh5Z9eAsyYyDYKiqHb/1pY25BeV/Mz2N2yoSJTS0dAz2pxQtnrv1l4+DMEA4RPa2xqW/YaIRUV82B1WV51rJs98vfHa5Mt1bOMBee5P7yw8api4/buH5v34bW0TPY1z9+K6dc6z/CRmLJ0olZz6yo/emL+zQaklOSoROe4SVRkhgkJ4MECwD8YLYaaqo84jnTziCbxKVHzG6F9CmI1ak3EzkM0E2kCMD3kW7Jyllt5LHqs0+dd7fHzJx0gsdu2wqvAQBMYIfv3bbek2s9/+Lc/TtilOVXfxNKqZCR6dm2PuL2aZjVJs6FREhqa5VdaaHBZwxHLEiSc4u4gXYS6/LGUwGM7YqucZjDnBYYAJsTupqsdq9oUYIQgRQrajwS40k7Fg2UoohlsfHanx6f7hHEwlDljffYsYncEhtOQSGrGqA2aBxFmGUgrqMEEI8FoV+DCInFDMokDTgKvIiQrJsgcwmdmKpqKozEi8DoNVs8GbbB61VqMIA4k5oMYoGRU0aAghyPEEoRhRyWESVisRsQ1SqHObuOeDo6e+xuyMjsUwcsAX+cIrBa+ETSQMD8xlyTvzp0fjuIf2v/C7sigUjk2IUAkFQhOdgeg1gMBvNbV//u716N/OUJFoJXYJBNhnb9GCNCAICA0g0ArAEsthskkZddNG/28TYb3ri1+WhjUODAJHIs2etOB9OwRmPBaIzleYtMYh6XU9MHNADWZnpK0hy9Hf1t4Z09MX+CLStm9SR33IwR33zVAIgtHQmth4HSQbkHD1Ix06QUzMFsUH88fwdFzN+3D35WNS9mg6zh4OWYoRnAC/5OrCQlzpaUk1EAq67owUBMFChgis0UBxLLKxxrygmNUjfgMCsQU3UCROE/Of0BMMdyuqEPlqo1TQqIDEq3/6WA/s+Dwa2s7Zo4xz09y/ljT9JIgmaBXNXsZVNej8Xfp2T6XGPHO7o6uvtbaUIGXogc2pkQGJpfxNfVKYbJlVTZY2HR6aNtrb3JmMEykmEO6loIy4GhH/MzYhAmg6KgOajeFAyaZFiewbyixQWW+eqTNbygszyHENJVQ9ePqQowBowxz/Pwq1PhHzmqwSv/wxbzh+n9rYWaCCgVeJtqJAARTC26mZg4F3/2hinTqM
hziQS2p0faa1hEoKrYuT8QBd1lEp1ypoEoaACUKCywIjdspL2+VlUNQgkgxOkEMAUKwDIMIf+RgIxhGE03iktyg4F4Z1d08K1ViQqU6KpGEPv7/TDooMAwDCsqPW1WAZllw5Ubz2+cd5I288bid55u6WpjOY6Lh7NHjHEMHZplIjXqz+hu68/wsb0B6B0wvV7P9tUhZLgyM8jRjhhSrAZKIsSs/97ETFQ3wOV0ZGY7Wlp4DSUAyf4oAE0iAghAko692SwPxCRAVQoYkIEAGEQoEMPUCgq9hql4nJYV9965ecP6YwR47eoaMY9k5lilgH2AIHeZ/stPayrHzR0YOJJw9hCkOdz2dpo4sTU0OqyUN68uueo89OA1euaH6g9PTd+3fU1uua7HQWhqfn0OgYJqdNK3X1/tHq5PS//yw9xNl1/43OwT1p/+lH7GFcWN/vp2b2+yhyUpqrV069a2GdO9vF1Vs/K4ukiJz/BKuRJJIU32DMvmzxt/2Yrd99495qIzl82cePXjL92fnhnf13nk8wHzy2b9lU/+fOvsSUuOL/OQrrZwu4odpCKrIaSOUwKjshM//Bxqbwhh8Ld19Dkc2d39favXbzjtuEufWH8jJ2XJEXPSuLKahs0VE9L7+wYgSSsqXQf2Hdp8KDalomL1T3sUnwN6YgDA45w9dTuoBRxpGT1HjYYWmDzHeu5lNbMmlo6ZSlPxnu01G0ixDboz+aicNz0vljCWjJi/ZddbCxdP/PSDAZvhLc0o+Gn9RiUZMqFx6Qm35uWLqVjclmkaKVdETqUSelIxseGwIoYCNUAmlBJQHOA1UEKh0RDyp2gwD3lsxBbHEUhPpOwG4gwJ8NKb3F984n/lo75r/uQZXPkDeSu9fObHn/XNj/pmzIl199nvvt/aEFC2rI673fqWTS7N7yko7seZ0boGxhkgg1Sks0O2SEK/X43KOKkEPF5fLJowNWxiDXGg40gkCbLO9zcGGQZh3koVhQFDF0BXDY5xklQ03QFZGTrnkv3FEP/iQc/C+9ySxnJMPieAJAtjGKMekYSJWdY0DTakGg7m2JttygzLok5KLCZn5UCJg5Yi1GAYDrPEICbmTYeLM359wew2LpHSDR0EQAQQw/E2TfMxmKMUY0QkLiorZ1x73oo739u2I1rMKZwIARV4mSJGprrDYGMWSWNU3iQKRcAwTqrF/sOn8v8fgEUDxVkMq7961108YvMX38cb+oEzVWKKrN0qxpMDmVEIj5sIw8d4Vn0TiCR5m22grx/syN4eIs99tsMmO1Q2SkVnpTWZqzNtEOgasG9pV/LdbFutYPBElIED0QBFoHwSCFgNLYoUKWUlNAleYKKEM7AiOkCNMCxjggmmCFTlgdNYjQoiaAC6AjGW2kwm0pMAoBbQ5XEV3kA8pCcYoAhhlVIjEgEgwLGSAaBjBTQKCggC5flUPA6GDpiXkQGEYMpiwAbogAgWGcTwJtLYOBgYUgKAyiJDpwAApv5fzN4/EkiQGDX187pPZ8886aNnrjn71jcBAJCuqUJPsoOE4hYO2xFKyhJXLE/1uBP+sG7jMSQG2sTDzabDCQ6LxcrLlWXpKw/11B9QTR3sPqlQ1xrt8pEOFmJgRUzSxAyDWVNlESR1gQUVIdApNimDMAKqCZxo6OZgGn9qaCZoAgeqbpqItdgdRjQBAostFiIrIuIBcIqYROATWjRHsOiSNhCDkkJnjxyRg5xo6IpEGZkzRQtVoxwBYsOOBFapwQA+5fTFx88v/vjRt9Y0gcpyOTjeTczrLjlz39Yj247UAQeChnIKcrNKIh0dtmVnp+852Brs6S9yumpjCmaYoGEcbTJZRjTMBAMGAy5i2nk+zGrQX2e0NCackpDQ7GXZ9kWzHS98dhg4wsqcxpo6MiVgUoxpNfkk5qw0qSFcc7AX8yYAxzLM0OFep1WwChaXg4aIlsn7AmGlZssB1WcVge0YiLIIq3wuB50KQGevYODkmnVwZHXgyj8VvP15X1Odnqjf3rlXd1Qff9fspRt++a6Xc4k04mLEflPpb9Z7UhxBEUsHVIwtWzijnBPsY4YNs3gKfnjrkfwkd8qfbrnn9R+buzdl+RDvyRmSm52Wm260DTT2Nx+uPibSGAQkzmFFyWTK4HivpgV5J1GiUnoxKRtRNrmsctfmjR4r6C3Zxwjw5teeWLljqyub2bj+cNvheJrKytrEvraeIbmi22VDmK0Od7sMpthX1PDKmxOt5Jy7zkCBXcYVI2xod3JVj905prOlkckfkmo/MKpyiB4v/+XbA4Wdnrhx1Dnsop0724bmBZbPFjS0ryzHm5Xj2C5Hu+OWbQdihYX5Flm7bPKUicePS5160iPvfnxemuVwjIR49jgzaOs7mozAnLKy8x5+aFPIfcrdn9985bDX9hwKluSI+boZLH/itR8Z1+SuqConOLOulfPl6bHqCy+cbS9oHl08hDFZMIRcV+6ePfu6+4I0onz+w4s2D7//UHWoXRZxevnwMS3tjZEBszC3hBXNRLRp7mxJV4OJ1qSPFgPEAGDTwV3FOd7+o3T5/PmruK9Ly2dTo+PW2/iBUNQhulIWIbPIWP9a2IyVpef6pk8c/t7rW+ST64oqmY2baihHOvtbJ87jdFPhpf6oYb/20gceePK8rz749LmX7lIYv8VKM932lEGIzib8OqSQE1lNMA3MY0Q0wljBqVGShEQT08cYqCTbpXNqIsHoKWqzoGSrMbLIcuHp2fffXf8sAABccRczpMpz9BB+6s7I0BElGb7+XqpH61InLHYebo9yrUrGxMi+rTizk/dlYF3XB4uU2SR3OBQD4tBSMZZCIhTgOcQIrKZSXaf+sJGW7oRk2OewBYMJAxIY8SxjV1TDbrXGk9FZc702SYsMGMiKwWENxYLR9fcZJ61ImKYUNHy5NgBTdSApKSPDoBhTYjLRX7WCLAe6gYCCxhiqxhCTtTDAYIMSluCElGFQXDB6tMEe26iYZVmEgVWThFgNwrFYwqBqps4gXgfF0FMmNjVCQqjIJWYSN2/E28x4vyZZccrulEMhnFQtCOs8Fig1DSOKQAT4g77xH4xLLoeNW8RAn6LLYBggOYRoVB2005nYBTRhsOqX32+/6eYZToszaRLAJqNjCilTZ1Smr2iyEOsXDqyVQyqhcRSNZ1aN7NMG4ki3xQ1bthTvDIJLigYR16PJIb9Y90ZznuryaxGdM+xeTwKFKCFABVZVAQOYggoAKWcKRUBISJRNIZOAEcGUQYzBKWCyCFmpmQTBBD6pxAGAFQUw1BQAcAxPiIhNVQ3xaoxhGNMECpShFFgMJoBhaBwjEBPZLNaUHPMINkqpThWTIqqZGKwsR2Q9hYHjQGcRoSaIJMuVhlKhXlWnCCHQmcGN+q8Fr6Y+eee2Ydn8Xcvm/vLDrrMBAECgMHNGldC3MxU0GF3SWVsE4pIm9bTTpijwmhhLJLojfRaT5yNaStf6Ja7B3wOIC5lYQlK4N9UV4XGK5UHWEKTAtCA9ZWqqCCoBDIQVbUpcBYYwom6qwLCgGwpiARm6gZ2A4ywmimYDY
HQzGupWeVHQFZVFGsWgsQalwOrAp1IyhqhmESTTwkFXq6ECAC+YIAhygrfZ4on4eXPHZrqM8pETPl936OdNh0aOyjpzaOZjd7/Y2CtYIaUiSDo4CKCxI5XeLZ0WauSNKTuyZWvbnrZwsGeAOiRWfuP5kh8f/vzu1V+cOMrWVtdVk3ROrsxMxsKTp81s6jhSlGuv3dW1epvfkmuP9Mh2jKMJYCDoT0UKtbmCUcNQENItBS6OiaUag6n77pzx2sObkm7vRacvPe384x2QKigaLlnSTFMI+xOPPHBfZ0sjDQATRft6an0ZmTJrD/VogNUsD0pz6b2RziDiM7K1Ml9hbLRn0Umemjfa9zUzS70Fox8YlsFMb9n/fJ13n0eKxXloJBEIASDRxih6SuPALB0teiurpg/Nk0Lerh5+dWPH9xvfe/Xhyz+67+Wn5p4fBdvGnS+Xpo/sq++omjsC4dymdXsynPGCRafCYDEG04OMUMAKYIVUKop5QTUMyqbc2eWtLaHu1vffe+2VoUXHfWtLHjOlXDN1YsWYktp9NQfa244Y2KoW9svVF00aYQzNWL1234TCMYea9i3ulV6ZfnVg367bw5uubvjknvvef+/NB53Q9cZVV9/+2VGiwPVf9T5yctY/aPezf1rLREIrz57fblRf++dPIVYJUealq5ddelLa2Kfe37/T7gjG5X31zz9+qZjtu3jZ3B8/fzyW6G5r7Xnt1bfHT6w6uK/mjLNOWrn6OSYjM9DM9nd0zZ9XlJEtANvnH4hYUNGew63DyrJsnCdq6+xsV/Mky/frwwDAW1gGGU4HzJiSW1QaHFo58rNP6xxun0aaq/KKC0dlP//KlqYdcN/zJ69ev/7Gc87/4JuX3W4uEqauTKahXsnIYTqOaC6HMHmS5XBXSDQcKkEiNWaOLwgw/d//OfjMC3dbJZ5yJiWc2gdCymnoPGAWEcUEAYPGAo1gNcJEABHOBMnKINbkGMbqAMqab7/xlI3h4wx+9tEIABw3im9uhWVLxfHzMu66tfGsk9EXP9IpEyCW5COKlpNnqa2TmpuDjiSTTHDZedDQqQBAtgsKSh2NtUmXh++PpuQkCDyjKCbLsAghw9QpBZ8TsZDuj0YoqFYrq2qsbhqcYHApvGip0+fRQ37o6U/klFptLqW43Fr9XWxbbeYZd106xE2zixxQlzDjxLBL4MC4N8kSbtFNKwBg9dsrNB44hBClwDJADGIFYiEMhxArBsGLrLZCqd5IdUwf8z4AvP/i2Vanw1BiGgeCDrIBQDlNoxSABwMsuFVhTaIxgB698c+mAxwWMIOCrKsWHmOwqkaCYmoYHIAJQHgsqWBQ8s+Wqy5bnrn/CG1v7j/ljMrOzuCWTUFWkIJqEgCy3FDiQQebmQtOOd7hcu/9+VtPmbfhaKTVr5aO4ztawomYW9bDBXY208N3pGQ3A247cEba3Hm+LzceJYbPoaZMgT3YHk2ZACaLREQNY0YRjjnGvv7aIxZ3KNAdmDn7KjBxvoU4OOA9bF6J1d+a0k0Ds5aaBhnAZUDo+kuXtDQ1f7XpKGfFdkoiBmAFG5jDhEcci/QwADjsEkZCLJbAYJy4aMieHUf9BqtroGkEI2xSY9A/mVIADCwDLpczEY+qKiAMosjLssazds2MCyLYOAdHpaKijKONhzzp7nhQ0wwSVVPAiDYkxrXIP3mN/gjEsrMzjLQkbNXggZMqLvqoHgCcmL//xqmTYqmjjYcu3q/TJLgZymtGP8v6DC7ACayeEjAkLYxNT01aflJVulVE4uRpo5/76sO+b3acVjmsJccXsEDzlvreQDhhqAgBy7EIWalKMegKqICIwLGGJiAw3A7T5RH7ehOiBUIymDpwGHOYUwwBESKB4nPzfl1edt6iieNGrvzgwwtPOTczK/fHjZvyhww7fLjmSG97+dAJY8vz16z63iblmnJXZ+1AfUd3zJCVfVsevPd2EmXe2b+nJ5aaf8rks5ad1LF7b6w9/tSXaz745P2ujRs+/fnHd956bPvXP39X0zZp3pTvv/yq7miQ1RWVahjB2KHHPX/37SdctlSJG85y6ckX7oUALJg5is9Z2P1Dp3/DG182bnp2/S7MaSQKQAWHTY0RZtmscRVx+eltLWcsrDr5tuvLKwqH8pWXXfugt2bV5trGRW+e39zh7l13sDEcsoM3FWodP6J0y8Z9A6mQIYBhAvAACPFJakm3LJ85oWTy0NGFQxYtPQ3krpRSsWX7j1Uj87NzJm785P336tsmVo158ZZXzPSehqbDEMtmcYISzuT8Vp1NgYQZnfKKqQvjJ3hPOnPuA09sXZBd3Na+PSwLffHwW7+sXDxz2td3f/Loy/dWTikemqY990H1CaXD2oJH0tNndPSGrjp5+Le1/nW71gIAYs8A8qmLuGRIaG4G4mTG5KJnHnvr1PPP7+rpmzY2/5eNNQBcR4N+TLBYs+1I7aFdRtyqAl5+2rju1v69b3/8yyc/NvfXXzuv6Outm73lpdc/cCdMmMgdqvSf/tOORNUqW+X327rOnJd97ntfP1YzTt2fPqrE8w/c/tZ2TaWJQPyMERb++hMefKz2vWeu94YPHm6WSymtgVQsR7K0eff9siE+akjBbXdWHw1/8c1nx02dfsr5Z+fm4uxCbyzVef4l819772dMLffcfbovM7Fzz9ZkKloxLG3T6ta8rKyoGi/IZJobGJfkUkMsQBgAJk9xVe+LlJd78/N86bzv9Ud3Ljo7rzfZ0XKA7upoUXDPnLlw6uKqcGzLmFGZnjy/J0vXFb1mH2v1EG9m+pF9QT1O58/O9OR3lJvQFbarPd0UwbbDTVkOzSJan3znsYxU4fnXnqlRxe61YCOZ6lJsxAMMr1NWI6qEGUQYgSDJaY8kU4yGOGQSAIOa8Qjs+NEyd3EomhQHJ6mrVzz5oqLafa2x9R2eTOa1l8z7bx+JRjifWnGgKpfsXKN09miMCAMKI3JKJMEN3pVX6ti/NwaQhYUw4dIIjqcUEwAJHKcpqfw0t1USumJ9FIco6IIAhYX5R460IAAjBUV2lvRyIkfzbBQl2AK3S6GBQ1tiW46Ckupb8+TL+k1XJYIxrw28hYKqp6R8GyF8akAb7JpGTV5kwMaZnMIQBKwAisoyGFg2xHgpBQwmTfajyDEBKGIgUZENk6GAVCAcCyYxHTZB03VPdjqYSlNnDDDwDFPqtNYnkkkZsiiVBFdMjaSlJ8MRqmrA8brAADXsmh7PsMKvRtx/Hixi35Rh1uPHl1cOtbZIiUkVQ0FQ4MUmALjzurGB6n0V2c5cT/eBQ4eOP3FB5hihZt3mrmjIISZLJKs330ITsSy7q2S46913mmbPLLa42OQAjQaOpjph5Bgm22vU1htYhTwXM3VC2Y71vZZcXnSy4xcXrP7lkwfvfXtottNJwbCRV15+/MunXjv3wmFB6Nyzui7HlXOgN7qrh/AJwDw4m5ugI8wR3sXpF596wivvfu+S+ADRsKGmdLDyki/T09LRzYAiiUxJiSMg+62ZXG+TbhjAsQLGLKYMxkTVdEFkVI0gzAX8UUBg
kURZVmRZA4oRjguINWTDVxIrLeT9fc2GBu1tYZMB0QSeQ5qpaeRfL/4CQPm4SelFzhnuvmtnil/dXjfYGCM0lIyJ6dkV/bD2u3c7f9j+4evvbQZjSpZL7wmmjBT2wM71vyCOdfNsa1NrXXd/ruLJdaZ9eOuLH/bfkZ5B2/t7bjrlxOt2tsQN08JzE0sy2zu7WpJR4AAAeLCbBta0pMeTVBQIxCAQUwFDIm5lIYWJVSdUZxI5eWigX7VmZ81dNv6s0y7giG3bxv1r13R98sHn73743Xcrv590qqu8YjEZ2Lp3a+fR1QOdLf6e8C5gLEASFgqmHcoWLWyubcZBidx2VwsK/umik9bddP+Fd92zfPudN10wofqbD5/+4ucnXv/TqNnnj5182tLGlkuuvrquulUDrwasIIRYld15ZGe8ML3MVbKb1j960w3Bfc1N3U3+noRdCR/dFF44K7v9QKs96YqzAx7qjglhOcECb7hp4kCAgJkcObP47TueWbut7pHnbzlrzPUz33jmg9PnfPRRdXNvbuvharsjt1PuSEvjm8PJTkWnYAOdApFYI2pgfcy0EQ+98mxHXbuJxVmLTr/vvseXTp29t2H98w+8dSTys0g9hWNnT/cIidpghtZ1yRXX79m/7ZP3PpqUWTbutPLugcQXX+516vE01WxN2Zded9LlZ1QsmHNXjmXE/SvO/PhBYU3n0cxiJ61reerJV2t+2n/BqdO8Q5wydnhH4vXBDneR2GsEE+lGOF069FHtsb3CHPAZVtNunHHRjI9f+iXNlEwpc+2RJrfP9eS91+8/vP7lBy6TEhO/+8Z3TAI+0SEVs0hwZIUT7LnXLS/KiuVe+vK/YpP/j9FWn3zw21Xv+DmI2DPU1ORhkZ9uf+L4W+8bm9nZ3NhoKqamBFx2vaWugUGxsqFmxZCJvT0HwwOKYWCQQnmFo39eVTNkSMGRfbVRkCqzuETUkV0Ws3JZz79RDwDLLgW1LfPiCx0djWpbhxHXQ1bBFeoNS07G5BTZb06Y7Npfy5dXifk5ysvPDVSNtscT8aqJ0Nvh3bM72HQQLr8RVa+X4il5+kzJT1OZNoEqnvau4LQRw5JMx+7dwUxGVLESVm0PPHIrJ1ISY7ROJwIJI8OkhAEhjAI2B4nrSUe60NGm5uXaMtJQOKTrWKEav+vA+59v7fzlPQ0A5i8Qh41X+mK4fq/VaYlvXee868YMI0P99P32SHMGdfR7HW6G2Ovag8CaLId1WQaAsgrPQJ8iKzIxRGoqPMcTQ8MYmybBAFYL1nWS0LEoiqpiigxPSTwjE4ZXVdYeaZ08JFtA1CbErVaEBMGRxgZjwdAACiftyf5EKBpbNrPAWehShp/Es7TAKfFmgojIwI4Tp/0JAFY9dzclwLIInByVAIhGEQKCEKHhglzEQC5Tw4TCwElT5rwFAO+8cTYkAQschxgD6xymJqXUBMOgFSU5qNc/wLOqx0YRGEnl4JfrJYEzosnaWByp6rDhhTu2t9qsOflFlvbWjvy8ktx8JtQaePLrxn/ypn3+zjSXxYoNQVdjuial5VkCsa7L7o0AwGtPZbLBsThzRLxnoPbw2vGT1MM9bIHT0tTeU+UeceWLd+1qi7z/2DtzZ+a8/taHRdaM224495mXVo0cEuhuN3hvtpzop3oyJJuRAZ+cGoiGgBeN0xdfYClLb9i782Bz88ZGKCzwisRtd0U+eGP/8ePOGFa69c4bbhhx5m2PLK9ApfzB3YcP7oMk7yrXw7tjzqQeXbRw+qUnn3bapddkO8FeYgEVHalLMiayuMRgROVYl2nEvR42HE8ZBhTmuPr6IwhbUqoGQAAIx0NpcUmaRf3+++8vvvSiyspKh8O2bt36c88+Jy8vb299Z1VV1YZ1q05dfvxLT72+c3vNqEnD0vI4Lm2Y0dK3e+/BgmFDV375tWz8K+z0f5ko5uv1X4zwZr954SUTK/APn9S/M2i+Ya03Z3NLL5kaidfK2rz5CT79vddTBGucCibmCEiMJWbos0aPnzt+2l1vPgWs4dBtRYVZb3/8hdna+tJbD32xbV/L/o2nHH/V9mCHJZm4xOEaO6zSP7aslQauWDDz/ZVbGUHp7h34eXX1qNHjJx03ZMr0Md60LC+X3XdweyLeuvFg/YHqju7G5oWnTI7byzYfNI9W7yPhvrNOXTJl8pjbV9yZSCnX337TvhZ2655GYBMiirJhJtnf7HLHwiET7DwbAwNrUGS27T6QFfUd+bH1w8M1atPAN5sf7zXk3fu3ZyrR1R98f9+rn5909VkPv/jUhzc8/+rzH+npTHmxPMyd3aZaWtloeRI2tTf5rC6SiOse/MCcE3744fO88cMPN7XrXQGlsPKzLe+uuvnPP7+/f7u5NgSmEafU4mkPHlj19veXX3PNhy/eeNp1z545qmyrv9Hfxa24/5qr7r19821PNe3/Fi1YXrnw1CeuvGHucUNTfd0lWZnvf/jeFX+6f8ikuXc++sC6jT/OmDaWd3p++GitKFodKaqDPKNq7JlXzrv45sdGuPKvvuL8aGOf0dxx5UMnJZ7aHdjVEvziSql4WODb7SvuumvSKaWLioY++Ny3hxX5zTuHO0ePmTj7FDEWOdIULC+fdMLMk7y+oqffen3bnl++/XblI3PGdL//dVsJG144M9LpvvPqq1uaWnKyPF1BjfUnH33kpuo94R3xIwBwIuDDvHXFEzcvGZ8/ccZFD7z8NLQ3kewx09MqDm/85MnvP4l1JQuhIJqRfkwCFuNKmFKH0sOL3OPPPlFUUPbiv2C//28w5bP3xHplxNwlNWv2TxjjySl0GTxet+Hg7Ouy/HuatHj7jCnF1bs2lBZK48aU19Tsbm5czbKqroHPm+dIL+jsai4o5gpKFc7wUKdl7tR5kcCeI9VaRukxmTJHgCELSvbu3sOymjfNSkKpZCrlyxGAOFv9SS/nlLB11oICVRvYvblDUSzeTMVoyWmviW1aow2fCSNHMY2HzRGTuZodrk6/WZzFBfo9/X3+MWOMj1cdGDeS5UKMpYLXDUXQzWefuzeqwpMP3KNHExATAQCwSYhqMgZnZ9yCpJupgnyppyspSUhTOcklJfXUgV3N1l9no69L4Ux+3Q5kleILZrHX3Z1+/+ONl8+3nn6c/Wux34ozbr1pYs3R2jdeTWb5cLAzc9B1q8QVHVuQ6Uy3NDcFgq1gsZlWp6+pPWDzOOzeNM2E3r5ANueKRvsTpvrAfVfm5GqSo72p6UBRielzOmPBTs4UJYtdclDdVDIys4aNFEMDINFozJqJeWT0NpTGXt+9K2xOurUwyypCEmN1cMCIMghhYho0ajA6UAeDEQbFCOXnIINSAiimqTYkGMeupzxHUrpu6AzonMCoQMGkImUxB6YGmHJqQJapji2ixLOTLp5Hag9a47lTLQQgZhjizOG54WgkKcO0oeWCSBiG7bPZ4ev/pB7J309v+Vef/+qy3//6h8KIiE/JXILhUkhg9RTXHkn9Fm1SuyeALdIjjzyy4qLzjGhQiCZ9DC/LWsGQ7Mvveubrml1XXvDomNzyjd2
NCJVm1QcGkmtqGzssCBdVGmG1xy4iW1Z6lpjs6YqjmMtqZytGj/np+5VvP5VcMbt0bNR8+NsbxNIhrzz1PhdXLrv0ov19LdlV5u233m7VEQPKzt7emcNGr9yx3wiFH3lwXtXOXc+tAiyaa7euZSQxlhJbaiNWYAl1sUgOxlKiw6bEkgU5do+TdzgcKmH9/m4CYJHE8y+6eO36NaFA36KFC0VeUIPa0898+eOPh/ft9YfDQUVPbdl6/4q7737v5Y96w63DKiv7awcyPe5v//yKYdqLiqYyxcwLDzyy+qf1ZkODYv6LqO9vibUQAoAPn3/36wtnNO0/umY/uyTDgH4AgBwjebgDxm3YO/LMxbOueeNyjYmCFYC9/urzT5l+/NHgUUZyYyQ07di354dfRmZmNIW7Tzt78dY9Wz7a/+VJ46arrPWs62/2Vs4oHlJaXReYuGhSTnpRZvHQFn93W11Xqx8F/fLGrVscaeJlN5xcOWT03l3N9Ydje/Z9HyN4wxdrEdgSgPIq4bLbloRjwr7D6tRROVMmLMzIdGTbrNk2x9pvv3/xky/8utR05PDYquxAU7uomCkYMEngirNP+/PKVZ0dcd7rWFw1NTDckcFnfHrfy8Tue+WNhy358a0rP3nhrtvvvPzCtZvqinb13XdDfzz+ozN05UBTdyE48yqLlo1jYru3zll+Yfnxx19z/mWZGcKl51xyz+333rLg+OknzO+v79p0YNtbm+8bOBR46p4f1zy4afdB1BcPPDot5/SqguNf27GWNZ+94+HhmsRwtp6a5m+GTbuybU/W8OGPPDd/yczTpAHR39FYevy0/EnFQyqLasumffP2uwQL7/Zt0MB87JFng8G7l586/uWaNQ/e8VhOyZA5t5QFSWRITu7RYGDD+u0VojNy11vktOm43G38jH9e/iwz9wongBOgZO7GwRWeBQAf+gF2zBr8ft9ugN0ArwFAFQAArAGA+hYYUnoKwCkA8PWqfIDJuyH7k+ogQLBhbXVNwJt0DqgRPmVMnTv5rJvPgfuuAICfgWgkfsGf7gMdGARn33HzcjUrXrr64jp/CVVeXjqt78DujX2xZiXwa7Is5B0Qkik5RbTU6OtO/+DtVf9fIcC9ux2+nOLejl8WTS88roS//bv1mc9e0vPxtnsf6D112Zl7dz/c2Lpr6vQSqxRqaNzpS7eKXuvRgwNDKgsoaCyPCvIBcypjhhadMpHqpPrQ5u6m5tLKrI7e5sH/5zurrKWGJV2L9GNL0pdmV0oKixq6upsbexYtHh3ub+waGLBo0DnQHYjCsnMh4seN9b2TpqRXDOvTI+CPCdtXc57zos40Z8ivcIqeWdzLWcAfAF2HtjZ25ARm16FYmjc/qQ2YYUtOmrD2p2f37dKuPuVpREUEuoAAG4xLEENqgEe8y80wrCMUUAqLneF4mGPZoVkjJ9tzAH4EgMUzKwRHpLA82t8LNkGYOgbtGJ8Vj+DhU2Lj4p7p04u624/s3d583+0j9ZgS6I/DGwAAc0YV290ysGjWiGmaiVKyyopSQjHsrnSed1gcnvS0rPbmlqaGbjlpVhWl9/trult7G5sipi4mmCbEahaHBSOdCobA8kbKbGqss/Mei8e06UxbIpyM4UhNwC5xOIu0+VPeJLjxrypoF09MA2KUMagpI1AJY2MA4FgJOIJQVBFUZJjHSJedERWkE5OVscEqJisijEDicE9cazVI2dTpnlifHAwmo8FEIkyiYMktje9c59Czev2tghVSnZrV6fKmWXp7AiQsaHoyJdr+ekv9rlTbXxzBv89z+Fc5D/9mYcS/OsF/d5Trui5jU4sDqzs1XSUsdns9AAEAAN3gHfrcCVfaAvtHT8nq8jchCdUfBpsrMm/R5GAv3L7Ys3V3++t3+r9ecSX5ctv33/QUTUi2HQZTApMzkm06scq+QkVjoWq0rf1Qbyjcvak2i+K2sQsWon1vP7v4vh19ijuPHT02c8VtD44bfjDZ/8vulvUaCB0KY/hjyfKOs072BFvhq+/XVuY7C/O4nr5tViZXIYquKbwDW1kgJK5oFBhQkgkG4IXH7n/zhaevOvuiW+56fOyMiRdedOmatZs6+7qaGxsQgq+/+lZVNIaxUkRMgNZALy/y2qDHmYUlZWlDs4oUXfm0ulrxxx/75Gvg6Tvvfzi1e8zd9z+hg0n7+gED0P+qaur/NZP0R9x416Uf/+myS4YUj/3ktWenLRtkXtMkDClQm+NrP9511pSZ3nAAag/bCJx1wQXjh4yeNlCWzMm3UvbhL3/4tm1PiYubSeGmc5dde+WlzfUNj9x27+rd+585+Wz+aHzh0AkjC3PMAld1KNISDh7etKco0z35xNM/+bmhdAKngrrxgPHcm+9Zeezvfm/ejGmTxtm0eZkXX/HoScsvACCminkEkEoqbGr35gO56WUz5xx/7qWXzlnsrj94cM7sSfNHZqz9altWsndsEbOnN2wBcvaJlavXbdDsyetuv5Hpix9NRUQ+Y+LkebRUaLrh21kjpsy+9Or+7l7JTXR5w7gJ+cuv/6qoRFgW3PvAD0+PRRmbdu2Ko1HZYfOp226HW+5grVQP04Pbf3nglOM/fee7di08OYd1KYVDR68IBQKLJw8/8POLw9zDdmV1n7en5/Hq9lrR9sJ10656+I15s0afceb8fOfQI/73Ln70pocueAhYHJf9b9/zyqOfVn9/wwdDiydeWVTS7xu5/NJzc0sgJnfPnn+81zvc7q4EEMCAN9+drTEyD5wZARyjiVjoyhPCdes2We65Oece5cr7Lh/YbBaedy68+tB/trL/U9xRlPdlZ/eKBx/QcZrRHw7i6J8/+GZKdunn990zHQAAkl8/dtZlz4AN1rWFE1S3hbzfgX+Wlrt40cxxE1yNjW1/DsjbkrS8nB4jwAxN9mqpNMR+8HM1zMp3kbHwXMP/1XD/oWAD7WpWjqKbczMLyqCNHPFPGjYyMLxy3/7OVdtqZo694P2X79+Z47rsilEFFVLNgV4r1YtKKvbuqc/NldxeOaXEPPb0oqwR55y65tEbZ/X0dA8bMTqS6C+vdAMkAODT7w7v22edNKPYU9jS429PBiBDj0kK29TMdx4i46ZPr2vd3nyon3VLUU2v2U67OqnbScPB/rw8aG8R+4KpoRPk5noBcdE0HskG110DSLDrFjq8UtuyLenKhvQsV6yno7gUNq6BBZdaD61MyQ7m2Qc+V3cLd6461SCMCigQD1p9rKxokbiZnufr60qEexOSg336qSftQgRI2+BslOdzSXViobtVzfGG1PrqlYkHrmaXXtRlr5x66VLH4aZVXd2WBceNScn1yFKaNexY2Lgtm7F5hNqaYFCrdruykQXVHNk9YuiQUGeTlbe19gUDaZkDaguRven23HU/Pe3wIllTnTYmI8vHGSo20hBrKDTI8SJidMZAdtfYiD1JYpJBnA7i5zgSjLPBBDN83UO1vTByyUOGeMwLmlg1NtsiN+pimMUGQSYywlpseB7CmFJSGNwJMVATjDBpBsAHANAZiFs4LDJgGMAhjGQCEpfSiI
MT2rv609xBV15WuTcbUUxNw6CprvbmnooZW7dtmFYhHj4UHDq8vKWz+eedHbn5Nlda3GJNZ/v/rgv034w3/bVC7H9c8D9Mao1NG6NwjBFRkhFdM6mZsf/gsQAGV4E1s8ixZ2M96OoTnzeNHG1HSXLnVdc396xVdx29crrVkpZ504a2c0+d19PX+mj1ro9v/fDpP395tOa7wso8lfbF40hWlc5efvpsZuv+Xq+YGwm4Gzs3bt12e1tL4punIiePABu1RblEcdlQqwcWjPA9fHej36nLkr6qRTnJVfDO9vbjKsX8yuzOetSkkXnHZQYSnbOmlJ57yUXB7nbd6U/19+7e3rJ1j3vslJI1P22zmrZib36BrXDLl/tLpIrNm3e1dfWFI7F4JDxr/qwH7rs/HkpkerM6OxuuuOqKs849q6unM5VKtLU2xqPhF5+7f/Gik4vKRtW1tFx++jmnLl2aChuJaCIvP/3q6dcPHzYtySl11bsRg4Bo/zH5v+dp4B/FJP0Rzdc91KP5T2bC4tUPL7YiSAAAzDUcUdCOBOVDHUdKQRx2ypT2m2/95MfvnIQlppHobFNN1ZpbNLEg5+KioQoaqNID219+ZvS9jw0rLK10+moslsJw64jpvtln3fzca08bkGqNxVwylaeUH9y2cuef7359xXXRlD+Q7EccZfRTyzKKNqz8avfWn+eNit332E/7jjKvPvvUNdddx3B844ZdH9/5uObKGj1nYig7KNPWfXs/P7Dj89PmLzr9zHP3fLGpK6/xQF19Z7+Kh2acdeJJ3/2w7d2Hn08bMebRFc/OW7hE3blxwvSJ5kCyrq3x4hMXiSrb1C0LBJ6+7yFTmNTQvZZl4FAz4bms5MHEXLHkkLL/3Nvv6Xr77TuXL8wwxY92HQEtdf/M88YJjtyR3usWL77tghve2vPNFZ3esctmWK1IwejBDb94Lfk5snZAC4Ahj3Z4rssoqz1a9+XeI6f7zPHDqmadf/4rDyy/8OHP+3/uuvKFOxYXzGzZ+c4lJ04/c/GlVV7Ly6+8lTUsY/XmzaqSFo/LNooGuruNYA/B9k1fbf9lw87ZFaPW7Nu89cCmG++4q3Qo/9RLT2RN5K07OHyjd80nH/w3383/Dq5t7bwWAArmDH7NAHgEAADm/noBs/z2z+AYRw0AAEEAGC1p0ZYBvbMz2JQIRG0S4K7kr+miO0V1ogKlQ2c8+OBDq5d+MtRZ+X843P8u/leM6qXLSl5d86kvf1Ko5vCSa2dseEU6bdjJtz50momOeu0Lx8waPXT0S1u/3nbrXR+fsCh9wZLMRH+8tqNv1KhCifrjSMGiF2Varn55Z0S2hxubM8ttcl93T3PE7DvmoHTuiAkPbf952byyK2aMiRoHnRFr/WFu5lT2wnOMfc3VaF8174KkIAytQE17jLThbrsUDgUFxVCaDjJOO3WLNKfQEZSJTbRGWkNZ2Xqyk7fn0O5A2B63pZXD7lUwYnYkojpSQWViJTm0izjzFD1DGFfMrd7d/s7iAwOWUEPqSPHIjgUXcbvbo6VMelsgeN0dK+ZW3fPUSxVVQ70JTRf47MGEBnHCp2I/6tLQRPRwCbhCQ+mP+zs+/KD4jlsPHZ+fr4nDOEuTJlTLirv5aOOE0fmDz4glSwJxWbkSERxGUrWY8thJo0NmOy6UEhG9qNjSH0wyaRlFdltbuHlYgctvRkv17H6ZUIREKg5wSar7rS7R2kMjPGsTGFkN5KjWKI9NSWai5siJw9We/n6jzGf0pnmVZMPT2vjbB7s27QKTiFtLJbXP5LoMoBwGBhkUECno3WYETWPIELGg2DyWLBAqRw/pb2ikMnEiFKOEZ4FFlJgGQyhHSXN7S7kgSnYBcwhznB6BosJRRSUjRKdvw2fPz57i7ensfuI1czKF8oXCiOHj9w/U5BQWAvj/0z35X9LUP5Lh/wb4QDCebc/Nt3z/tjZ/ltChhEfm5AC0AcCBjdJo1Dy2UP+iIcDqbFuzUsqbKa23CFkOEsYy47an7392ySjrTRee8NUXqz0uW39sbT5uXXjHaZb0nlCP4hk7zOrJZM3g9qPbJVPMcLK79mzs63nH4zt5/KSV27a8tPYgfnzjhxfffd+73/wcqD60e3eI5umFWvphY6C3xjg0IiFFoKtJWTYzDXtbrXkeZC948PwHmvbtsoabY+37ag41+0P2s86+45Fbyo5sfmxZQHEvub5q3pkV32xuCuxZksafkHFOwfhJ5196MmbcIbXVbSvqqN5ZUJFVlJd5osezdOnU1v11liDKe+BaxuzGZkkmb+l7+OOez75Q06XQyKGNe2ui/aHvtx746vDK80um6DNGXHzi8drnm6FuzX86m/8YJumPCPbXTvfm3mY4njxtQeENmwYba2hkFAd9Sc+ZQnxAVUa/84zQ2l14qNZSf2j3rg2SJ0eiZltNOxqT/taNH6+44f6ioz+9sP7A9p/mtO7d1cnbu4Pc6Xe/r1PjuGjbuvUr45v2plsrlj353DNv3jX9xlvdfGF/MJZbgZR9fVs/qXEyyoov7jv70tmq1wJlzx7aQ996856S7InhLiWYam1sjJs2+4trX3nQ07zy2cYlk0ueePfKrOxLEl29/f7gc++8LpmmChbO4t7x+VeXX3JpWs5IVwCdX1x25ZLzfuhrJIfZA7t2CwAyAy99t47jLaIWRiwsnb8oLzd9Tt6o4pySNhpmQn24BA1okmHRZ8rVN365Qb5x5qwhZ9leXblwzqRm0oOM4JilSxv8a69+5Oo3Lj5dnXT583dcf6RxQze1DmjBQMqkEOERaDz+pHrNZYt8z72bWsdyzZW0Odw4KcJX5E7hgH56cP01M88foF2JjJEX3Xajl3D3vv1NS1hdMmsmgwaad2xw547R0kswUnVwpXmcO374lnPzmSWp7975PIH4p1+7r1MOXrpk3pUznpr51NLqjjYC8PT/euH/jxA+2NPOxnlNy3NbmhkFqFHcC8cI8IIMNxuzlV93ziMrv1t47sVDuCg8X/evGeb/kFE99bhSRQm+v2bv2a+9vOabF29/5aa3Xr7qk1demjqP/LImVrFgZvlxc66ZM3FU58FvtuYJ4fZxJ2dluhpN1CYTGJ8/+0j9VkkPOCIws2jkFUvHvxVaXfdzj8NYcMFNd8NrUwFAbu+67sxzJlonox3OeHj/JacXvhs7HAj52g845y6xdPQB6gsnYujIL5IjV16yyPb9HmbX+sjp52fF5V63x6zebLEIicI0tro9kc8IkQTuNlKF+6zpAqhOV6aofLj64Yuuei7c23soCXOGQiMJ9vaKLkX5IrC+YuwYsq83Wysl2Lrv4JHZ0Ym9B3aTwt5YFD7ffHfeEsufv+Zuv6J48y8d1qJjjrwZDAwUF4XCtY608m5gRUQyKafryuJFxtIV7Z+tsHfLopKMamZkmGgkBgbzNkF/70F7utvOE6sh9KdRQ7X4G+r9vamiKpNJOUwbYnIgg9gSfXJlfmV73RFFJOBCejLpAldSjzgk0W16AgNy3CZrXmpjPLFOTU35fVnDLB6uu697V2N3iU1z5HTGB
+Jc2M048lM7XxrsWkEm0pAgpkwAFmOMzHBFdn7DFs0A2eKwThvF2q2GobO/EmCXK6ffGehM+rMBCQxrNZGpEh1A5xhZNTN9vlg8aVDD5rAEBvqTCVnr6M7IShs7eoKVv5bsWNnDOAltPr+46PUdbeMWo8xYJKge/ds78I/y0+8TY/0Vg/j7G38vgf0n6OKha2P8vc2w4MKs91ZrmTbr2UvyBwlwLBz4/vPA8pvPGkmDo1HeiRMrCgqYDIxefmbbFSuOd3lSDCVVE2ePPv7S40oqL5wzJAPCJFjvkvIaO+sqCwram/tzgLfgcHn61EWnTt+0Y9eE0+enpV3YF/1y5+F3Ft5clAiFmvHPD71/7ot3dWSpNE+Ca58n3/1848nt+yYMrWyqH3CVetMyxsupge0NiU076n+4v+DQ52811vZ60hzLzj555YaH6ztizS+tOr0kVuCNV4xYsmnHzvVbTk6G1ev+dPXXP6yqrHDOO32a3t3b3LDX5nPu3rFt+vQxGz/8rKP+yIRLTm/8YUNmQdmemnWTZk51zL7a0d8585RTY/sa2iBetnXHOdffUjXTkxTQhPNPdT6f+egLr53U59Q1Q1f+rp/6P4ZJ+gsSDgAAo8Znte0LtTX75ezi7LRKGKgDgArW4VBi+cAXZVo+ZeJFHS2+D18++7lt518svP3yO7GOcLJ5d8WIUXvX+7Wv9hz/zar1du0iPn/xtLLC/AnVO7YcP2VMT2+wrtX/0/pdXe07l1SOufOLt2Zfft6CZaeAwQGGDDENosqTS+Ycf/e5wYTz6+6Ohtd/vu5U3+SxVe07D48SRjt9pjs7bu5AEzyWa9d/eNWYYW+vWmOzwZn37ktPH9O7c9KoObP+5GEaCrgNLQixieHF4s49a5998pG6jbv3f/nxygTT9XPNnesfN03lT/OmdnbWjynP/2hXd3U0wQAAxx358YPzSkecdvrC8x99ZGEe+8XrT5alR3JsSUFBf37tz0svmH/ys2s3eb5566679+/89qTrrt810HBaVblSv67M4fglJO1t2nnrdRdv3bd54vhh2zfVxpMRCoLLLY5zl731dePkS9HNH9/V9NqXOYuGbHu48YFLjj/97EuCe3a//9hdp529dPP6msIf+VvPn/fhc++rscDL98+fMKuCp7m5NhKtrms51GizOuSGgx/u3HfGKSOHTR72/TcbbYQuH178weGjM9LTvv3ql69/mBSmKQrgYC1gyP/dpf/HoPyOTZkJZeqogqrRR0DbH3hz/fGbjhzzgv4mo/ya/r5xi8+48OL5+sBhf+O2q55e+882uvzVXX+87G/dePE7HxeiVKbnuF/Wv861fdu0rXPGkuMHUrWIaxtVnv1VDb/pi/hd55541VM35ow73ZJWkd5c/9AtBTi942D3URtR5b6c25/8eMzE49Wm5B3nTapO1dUfjmxc24P2dMDSSf8Ps/2vwX0X8ZlZHozCkWQhJDsPdcnDp6QfPTrgyoe6Bre2i7v8Vm7vNo2gEOeREsT60mv9APDM5Xw4U1TCvpQcG5Il9fZkXnja0P4QbKzZnOgIFpUl+41MD49TshnRtDx32kAq6LGJiOeZFCNoSTbTHugIqoLo8jGxZJc9xbuLqhAJhxNsT2vPiILMN9fWjbWBtVTIdJUrGHE4HlXg5jtaAeC17x+01rwyz83Q1wP4oMI6rEefXRZJsPZhw5i8HMQAMjBlgYI2rfR8APhiwwucBceCoYb6eg4gC7EUTIVSHYBabcOHjO7q8Xu9zowMbzAcSiRSumEahs5gyPBZ6r55Jz9H++bR4KT8VEvKe/WKUz+t/67SZ11+1eH/d/Hof4R3b6lypVdZs7DTlrfn8Prb7jm0+8sLhp3yOgA8dM9oFPV9t3urjzXOmLdINPUPfvrpmpMulczYkHlVt1/79BDr8C554OPaeqvdOjXftmNXIsOenDI3HzFxD88JLhvvcRpxs8TNiQg1x4Kn3/qN7ln1xQd3cW4mYGeAs0er5SHHlWSanuz6zk37s79rXnig+o2vbphoxXx3f9+Y4+YUDB22buWagopcamHKkftIdevttz+WX87MXTpy9LRxu/d3V9cET5g9uqqkWE6b4LWpF51/2d6jyXOWjrn28Uf9PSKJprwO2yfPr+A4Ljs9Y8+X60pmDw009rur8sOHj2yrbb3quqtfe+ldYei03T98mqXbIx62j1F6O9ogEgOimWCSpPz+zffW+ns+qq5+7fnX73zm6aN1Df/FcQT/1cnzNxv/5iH2n2yDNye5m3ZG99rgE7HUGuixQgIAHgdI56BZh2VjbefU0mn57LD0lLYfFVao7/hGjMsdM9TfcO367a+On3Wi3/zoyF4VlC0cGTlzzuWXn/78818kWL/IGOvaG6dPmnDXhRe89f6mO97/7N2HLoT2jbqtoNQ3D0d4rnt/ZJSh9IYveWf1oZ8/P/Oiew7X1i+ZVJbOjS0ZZvuqeq3aDwMtrTE7nTOlxLerd2yl57YdXe6ZU0YdbZaQe+HSgsfeW7NwZv5TazpYp3tiqe/wvsbHb3psT83Rt9e9n5KAkdlCyehXYZjXOZrCpLn5Hxzs21IX9NmkUIJ56drRV7246dQRGT1dTFWVNm7EsgMvfxyg+idI91ikJfNHRBOm0tiyL0DOzTevfvKaQk/hOzc9/v6OrjoLO6Cboq4qDIiUv3zx/J9WbTXzheZOf47NPjUz2uKatmfrytab7nj8zTej2fmxjCKjv9pDUG9Uag5H8gsZJUyiJC7mQG+9W5bCsydXdDb0n3fCpVIqeaS5+ZvNW90OW+7wMp7EbQJqbIsEYqQv2VEh4bqY1WQ1glXggIlb01h96fL5r33+w//ubf2/wkVv/HT0m2/3rNthOo+kZ1sjDbGvX156jACfw8LekTPqW+KOWHvMjF4477h31m74b5LA/4L0wt/a339nx//xhfm7d3345NtzTpt10WXnrlm3LZcvKCvM3tSy26qO+dNFBpPZGgsqyHvCO09/UWnLPvOZx17d8H3XoT5rc/tJp/SPX8Dt/TkxMv2iqnmLJsw/a0nO0OPGZjvKVLsQlA5lkI29p4Rr/p/n/J+Ni0/wuZPxzEq2PkyHDrGlQuFIi4Nlgokse2Wa89rnus6ZnmaGQyxv5uYLaYUZ19/XAQA3z+SqTX5qMVtcZPnk094VT30ssnUbd9S2JfbZoiEnl4qxbLHXwTj41r6oRbdZLEBITLJZPU7P3u66Mp/HluQCUYW3CrqpOXirakRT4LbmceE94YBTchrQ2d41dfS4cCTSqze5M0okV+SKa4IA8NLTBf0hOtSaNmkgZnOg6tzxRoZoGTqNBQkAKKS0qMlhDkQ8bei5ALB61+sJRQGDYS0o6Pcn+vsNagCmFsnuyy0wFaIoJs8gp90CGALhCMYcizExDMoRAYyaVc+PLx3DP/fzZ6x79PDCwlNsTW3tN97V8U8mwG8ezyy84trs+Xe/8MACZX/HWXc+ahiRktm3AMC7CzPQGM85V15067X3dtery598Z0xZBVe7WmzeepiwtYmBWKRDlBA4kJlkLSGv
tu6IoaNUJll6/dzt+9o+W9U7/Li5BflsaWyTTcptSrVd9fj3n2w70cEqnI2r6bEoKp9htfkV6lGzrPs6LFAqDGEl01ZRXClyDkCqN324Fu/dtWF/ZgbbUHfUXVVZWVYei0T9vT2aovucaS5bJiVsT6CmpGRkL1tAWnZ/9/6HyDVs8YSKCBgH9jVZeWPByBMi5emTTzx/qKD2anTAxWhhkyWmbsIQL5y49NyVqza44sHWFJPvFvYFgwsnj0nVNieSMV+6R1ANC2J3+6MhMJICM8FdvHmgiZL/Wq/wj8Z1AnhVcAtg0cBK2TPBAIAVLJQabCvwFzx61pFftm/Y2VtxyrBHvt96VUXxLlkvHz1cDMoPrty/c8o02rz7qYHAotzC00ZWLl67unDR7BvGFQ2/7+3jC/JdOPlZa5wxtHlzJvyybXcFl0PVwCFTBTMbQ6+doZIJM6bk/rS9y2vHipcaSY8/HEIGpTyLNAsPRmP75tp1G2ddcPGy8qKf2iIWEy46f8mCXMdbT3130WnLbvj4o5sevG9YUdric65OSZxHMmmKUMSzDLXpXG9Kfvad686csOj1P38q9cqCFr36q/UnTCo7+ZQTz7358dceWHj9/as/fvLmR1/4erQ1tq3NnJ2bajjEarPMj266deSyB9JQGvYlDAHVt2OwqoIuzTRjR+wQU8S4YiDROH7ySIPEf9nZ4ksTQl0q8FhlWatsLDxr8ocfffFjWtFeTXslJmWm04duOuusO95mKVDRpqIEl+LHTK686IIiXBO++4MdJhIyM7TJlem9DcaGuj4+Hc67cC7XPJBKJU6auyRrePmLL72xrzFObdbRrmS/yHJQVF5adM/jNxtMBgsgIec/Q6T8Iy37tUUObg/U7cuBvFUfbV76ynvjF47afc/cYwS4SirqmLS8JB4VR9l7m5Qxbvj6m1f/U8r3N4fynxHR/7Ys+1885B+nBiEAqCyGESMXHvx2W8yRLUvCVcvGf7nyW3+PfN0ZEwtHIZKWJpA9heUzn3kNO0MD15173AcHt3qoYhVruyKJ3jCZln38mBGjz77rnssnntSUTJ4wq6rn6Lftf26fXzLDkpmXCPZtim5+toO8sOKWm156ct3l12fPslW/9eJVX5ATad+tF899YdNuJi0297Syl1+LKrIyZHiM6YBmC8QlQAEhx+3pk3tTYRiaZhugclW2TQ9C+sF4Z57N7zTvvCU/P2fCC8/FS8Ys0CpdF5544cBZ8CJk/PRx+7uHPmw+EH7lkWtHzuHDLrs7kd/4UihFhjSgw6N80cXXokMBxcVR1M2FC8TnXgrfvaQ00+r/9FBENvCJw9N/qO4b7balik0liiJdcmWF0JtSU118DJNtO+kT1xV+vbEtQzJ1Dr7ZDP/fwtbqd/tD0XhEF1iQ7BaORQY1MMZAUTiWoARTioiuMoiKFkk3iaYbPMsiAhRjltPyX9876Zb5vmmX86Y7A4czBPHUqwovffqfbW15YB5wZUOYcAgJ8SnLLpx6wnNa51d84RkAsOriYQfMIzOPG9XalbBbR+xqMasyLH19X04ZcrbBCrXtu5wehqSEWCyaQqmMgqoiDzUTQmuoL6j6h+aNxjj3aNs+JbafplKSzdvab7vrldt+rn/c6kH9XT1ZvuzuXpLiiEHS7KZk6ygd7mYO79n/xSddU2ZawsnwRRfeuGHjz9s31d1xzzUuq9vtwoFQygQjKyfTBMphIRyKIdMQRcFQNWp1/rRpy9SKcmLQA51dC2bNCPd1xOs7zl/xaXVP4OuHln63ftfbGzuG8tZtG+9dcNVr2460Tp0ycdMbd2WVnaghGJFpVbVkhLjaU5Gy9JxMt/dwvL8vNCACq5h6DmOTCrzZ2OJwONbs2KVpfzgN/sH45vlTKWJOvvE7wUxdm2PlbclAEIbE7A4tHmYs7Rz7ghIDgHvBzoOq8doJNy5e+cKWCw8+lF9+duDj6x44+7Mxy7xLzrvo3Q9/FE+afml34vsV36zU5DLQ7CK0KjDt7Am0qMRXmVXTP3D8zEpF9YWSwbnDfZ0DDUd7bIyW7U1nYv46ymdEOM+IjKyi3JzWPR1HBkxffrtHjljKJvv7aiuyihr9psXW4QgH7r9n+8bmzv6+xgMt2/PN9kTWGW3rqxfMH9XjAD5m0yFx+1WX3/H4RXtq1pULGRk2EfksNQ3+0UOnNiotrpZqLURySnPjEVlzS/79EU+xE0Dps0mlDkYxXR7MtCSSefnp8b6AXY2qTtbGmaQ3JxIPecaPjbQctPT1mEOnvX7/o7e98u0pJ4776Ls3Iw37W9QkO2CXWCJmaCnOlu52tm1r87ipaWP8zW3FI0YFL13T3t2cfl0xV1SsZ42s8DliAzVHdzbs75CnLBjfsWrlaZOntrdss1kzxTmzYom4q4ghPo1FKNJ9QE02OnOz2X1j1uyQj7and0Z6H7r/ZMkSZTIZoa8W+QsgLW5Qd7wNehsPYEatPO++f4ZI+Vf4Xbva+ef9P2+ZPHX2Aze9seKXX3Z9/eA3tzx6jACnOSeirPF8JrADB63FudE9tV39h/673fz9vn8/+v/+uP/b2PGN94En9JIkSk/LeeKXft3G8UzfgoL8kyaW+m360b59gW50wrwRrvEjTlr6+l1TS9Pz+Krhk3/4/u2ft4N76KQCm/jIbVcsvvS66ens1DEnoN27s3PCXmW42msGwnIQjKOhrY8KllcefPjI+m+b12wa7YE5c3I/35YcRwpyJxS8um7DxPmxs+afdvK9W0sSPSdNL+esyV/61E4W9beFJ03gjtQqbW24tNRuFRP13UJuN/eSpLzlMbZk0FEWMdyjXHzbC1MXjRk39uwq2n9TuX3JoeSM0ni+OWPsVWd88OqV48d799UFR09Iz7cLjbvc1UfEdb3VF0/IOveMggdXHQwlR1w8uauxxv/8puR9Q4TiRUth3MjPn7mzPwiTclw1XXK2wVcNSxC378vtoaMNZFahcCiOjWBq0TI4tBOMlHAoqv6vZ/5fgs2HPvSHQ0rCUEwDAzBAMUYIIUM3MMIsYhQgHMsQ06CUYoZFCBFCEACmpihYsnjknHNdZ1np/rruI25aZqVCmfeODT0A8PDt+K7HCACsfabo4OquL/oy+tu6PrvlJIdV7GhXqAF56W5D1zkWySmFY3ktpRimyYtCPKGme3zEpiU1HfGAqb76yEZPhntInn3TnsPEkJKqml+QU1/flFnERHXTZc2yUCdl2QJn6bbDq6495cbSacvDA9vdo28CAHXbrZtqmzfsWjuloKywalrkcGvU6OELfa0dcQ+iDou1L2VUjJgQ7OjPdNkNJnaoNeqkQbvdjrVMmyFxzgSbhrHCuTIKd+zfakm3HX/esAZ/T8yIbN/x2chSHIxnRTkVExuj4jL3HEsPzXFIETaHtCdVlkhWe1FOJu9K6+1pxIzBU5WXRUo5i9UZTkRYznQ4bVrKBMoJiIkg1WUTSVLH3nSCtSQYtlgcudO6WT7RPZBRweq8yMesSOewPmCT0mUJkBY3+5O2oYUxRXbgXAV1iAZjSAwLkqYlKS8JIIC
iAQMaZ+UNk7IY6Qg4C6Dyf/JOW/PiqQmCT7l+5YnT81Ftc37EKBziOxolE/uUfVT26vghIADwJECYZzAxbzi07bz5y29+4MI5J05TvttRu3ed1e7YLeQtv+gKa3b2tleeuv3G56ZwGV6hP5rIuvm8vJs3HUXOUa9Vv4EO7uPM4iBtslG5vzsmu7w+m+mzs/5kKg1JoUiAT7H7P3ph/zdtJ6YcnTYzlEge99QNvrl8sNWOSI2tvJLXC+tqeyeef9eN588Uwla7WNqqxL0TJ586Z86Fk+dVnZDuJ3HL6PKSPOd9Z58DXZG+ti4q0Syr2N7YXPPUz4vvvzw+Oc8RdsdTXUxRkUVwJExR0AYaVm8ssklYZ7pCzRkHere990nOkKHDP7oPGE/P4aPt4Z0V0h6xyji0oaxvhx3npEf5LP9eJSA5Oyu9uX2hMbn12ZlMPCYfIS11nc0LR46yJHK9mT6rLvrUuC0vLVlzSIqatLq1i7a1DhmWvfyWcFfLjOXXkHhTwr832lRdaBGDjS1qSPYwpAv5GB+NK3GLM+gsrE2zE605LRYpitnGW+2eNibzx2/DP+3srLT03H5apSNd0sQjwV7OxrASp2EmDTvduQuv/meIlH+F37cnvqhf31BxwikZJTNmDBny4slVmZe+eIwAswAEAbVUuMqKjYO/ZOWNbOjc8y83uvx38OpjOSldHVYwacXjaw4e0T0Asyo8p0wc++3Rg/leW5uRih20nDPGVjy9j1gyfVZHIBxItify3QyjTJPNtJbu7sVXnFFbvdPUzP0ffDN0FOSnx3214xNTpoBkOLWEaVPijL31YI8lXf0W93uiR3iMLnh1wGPiq84a+8zHBx8+O7ePYd/5pM2LdXcO5GZCXauLWCNuC2ITUne7YkkTKa+1cka8GfM8XqQZHXbIGQ5RRtq6O1Vd0/zGw0+u+fTt6ZOZzZ2OZGBg8hBrQEy2HoTps30ejirBIFhAt3McC17MW+NFG+rahw3N62U69w/MyuisX1HYMt2V41jZfu/MRXev/Ixz2a9cnhEe4vrxueYbbcZM2ftef9i3aPj93yyYOOIJi3NIQihPbl157cyiLDJwzWZly9EjjR0bzl1wxXHzPRLDZPycXDSsKO38qxTWGU+tve65909ceJptdPrB91edmNk6TM164JfeN7f8kDt1ajF2n3dpFZOvdW7uOp9N8+1p31TKTLlvXNW80ZcveC1YOGXAPu9q85sTjy87+suubdngNqx6ulOt8793uPW6kyeddcFOAHjwHC8Sgjk+x/7DWtUFlwwbPQ2bBgCbTCkJfwhYWzDi54Hx+RzYwscSKlZ1ahpJ06QIg0lYhAghgLFBTJ5lDF3nOH6wOijHspRQhJBh6LzIsyrRGezBRri98ay9/GdffPJBlGQiPl4of96aBIAV93jvfzAIAKceBwsmLTvziW+ev/328bgpv7CovjtkkeyJzi4Hw2Wm2UKRoFVgwUDBlB4HwAA+SbRl2AjSeJvTiIcbgvW/dLWOqGAySU6ET1ejgSHDR4I9jROpEW/buvZHhOxXXHKnPRo62HTYDNmrbr4jofRn2qYAwDOFeWNuXF6wuOTI1z8XlU0EgUbMgE+zBWmLZFpFzmFyQsXw4S11daXZOUQ1Q0mtt72NTZewnc2wOxJx1eHKdFMJDCGU2itYp6QcHRrqPdz4mCim6UII2LyIEkS8pa9bmVR0whjxMjNenRCtkmEaPG+RDKOfA4iYbLpBglarS09qQBFmkKakJIlRU3FEgRctJrCmiJg4UfQIZTGiNoNNOZEEWi/wbgNzoFtk0TRN6jKTcYnoCZfVGkOIGJHMYCroS6NM3K7bMUlFQORIXHUh3gSkGDrmWV1kaTJuM8wQq/jinIms7JJrAODgwh8lXuhWkzKgJWtOAICvFqzMyM/MzMqs7+lm5MjCjxcBQNOmx7LHzw/XRA58++GSOx++3V6SVZA7/KZxebl2DLHugNGvgkBBRcjFYQ+iYZVEMUPARBQEQIhSkcM2SvpM9owbPt351a35oxcMKZxzkVMCZ4ohkBHJaEoMPA8UAG7j2ULNDAEdPW/ikTmXfXHX5bfdesv0K09tX/1R7ed71h3Z8tjbl9iMket/XNO8afvUoOOjgd4hoJpD3Yex/RChWx64YsAmWeKRYIueM1Ia0AwTMnJNTbFq4crhWc6qqIW644ASnTglJ1K1dPMB+/JzIcMLXd2Qkw14t7zzCkvu2UbBa+/cfvWRprp5Ilq53jj+nivvuv/15WOnjnTnn7ry0TQqWgqUE2+tqN598MaFT00bXsUmGlFQ5rH7zWXn6SY+687zv/xli1pozUuv2LJmN+OZ+sxPD8itsTWfv3u47tDiCSXAOis0s+ejnw/srg9aEXfaCCoIucWRMeMbCGeED5499coPiduZDFOwxkAvv+qqR9hodUEuTJ7iTC90N3R3vv3OUy+ueMWmJxIB1eaxGELa0SeemfLiw+Hmzl2vrJH3dqVdUjjh9BOfXPL8QYH/ct1z4T2HBFVjvEmBK4C4bPCdCYfdlspAPRFkS0Y0zeMsAtYAEgNTAB+Azdmgim5VSiNaKjhgWmQbr4POgWkF3kwEg7bxV/4zRMq/eQtCAKAlv4hsaE8rHv3SNS9s3rvDKQ6sHPjVC5odObIgGOr0JxPN4alX3+CxZMCTe/6jm7/q9e98/Zstf+eu/2dtUka0uLl/i1BeOrlQvarMNzRTSs9lvFz+jLG9/pSZXzn82adrm5KWcTlTNtbs2ba/Lt3lsdkjwYkTLrvhs3DEmLtsybw5yy+/4Xr34a4p5cOXXnnN1jfunuCwecZlsvWdENSZ0TPXXXpqYMal+VHXLSUiykthtWjCLahHsqfqtl2UJoXrWrfsE6ygz6oQyKJsPqLvreuSE2ybbqgq5OST7h7Zl8Glx12eNAPzqDcZ77Ah4YBj+MwoZNvuOeP0w63BHIugcNAVVkfYIUaTRRnpuQvjckdA1lx8FmRmskqXFE/GGhTdTw/npcG7n3eU5lmyfCu3dzpPaeKWsW0nVsG9q3f2zxhfQtGun2M0yuYIZPUAakfBlT64wtbORZKHGxzlE+Lp00+o2fVdbX3/0qFKEJlKd380jqLYVTVm6Cd1NQaTun75cnJcYd/eFn9na3NYMCpmb4169u5/6ZHrs8pbXA4Iv/nNp2cOGaIgx5HGo4c3m940piXZPkyw+zUaNcM91T/keLkiV6jOLXV21NV/d+iIW9L8TJ9f9mk07g6fPye9o6Z7cPkKZnGBxjKaL4ydNDLdl4MJMcBkELJY2BhGGGSOZQyDYsRF+0OiwxWIhy0CTyliKcUMo5kmy3KUAoOAEpPhMKZAADBmTBMwBYyhMNLfmZaDeGKCGWcsYkbemontiXdTlSK0KFG12zk4El/4WO4zOQ6XPPTxu4889dPnv8y5fmZabgWRyMDAQP5QV7u/j5WDbpdTTciqYdhYxmkijRhOpy0tJy8e6JUsdt7pys+vmDQSH2RSIwpmZI0fO3Bod8PB7tnzL3z3pfdGedLuumBJf0r1JEKKo3joJWeYHz
+3evHZp7z4+mDv7/jNP9/w/P55vxTdMK1760qbBQ+zTdB0rsIyOhRhOUfcjmWjh+Y4cyhPVNXL+QZGZY8nBDDPQTjstXMpwqUSRGf7rFKpkGefMWtFa3Xo1dfYjCEDcSlblbtE7Akb4gA3oBHJMI8iKjlTAmBqyPFYhNi0EGa9GgpiRoz2dKaoJS1TUrSwjoFnc1VW4ngShzivE2QA5fM5SwpjjTV9Cb3VFC0aLiQssWhRhH2cEZCSgCTKpBwOavgN7EI2qxi2OtIVIyl7u91KOoiOONVtXklPKCnQqVvEFKSkxtnTkyTmFm2KB4ygYQcAgCA10hje5rZp0WMh7FEAMyJnjshK17RI8JirfOnUBRDRpk6c9+nRg+9eedUeNvX8TaMQmCy2NA8kogpmMaNRCsgQeGQkTRFYDUiKAEUMosSBWYoMFgFvmF+8eEpOJSPlO9a9fN/sq++7NsfZHdeJyOXbLNCXBIBSahAekGaz///Y+89wSaqyURi+V6rUuXvnPDkPk4AZcpCogKCABEUUEypGzIIoio+oqEgQUAEFJIOSc4ZhgMl59syenUPnUGml70f3AE/yPec9wnPe6/ruH7urq2tXrVrrXncOx0yfvPPHv/7Gp/Zre0N99Q9jWwr+kNnrR4W92DziQ13jEwN/eih+xOxNL0zNFMEDRfXU+ifXvvjoP/Jq2ctrR666PTjt0Bnf+EsPAfD2enY8ApmYN6kqm5v4MkiPe6kFthyL4t5w4ZGDm/eMPv9UrjAt76/Z/eTaBYeSWOc9jzxU+s0dt6GBV/rmn9DpdMurvtQ94Z9y/hfX/OouVpuaAmCb7Sd/OHDONz94/T2ff/ZvLV2y+5AjD2/taL1o7d9uOuj7bz67ae6Fn4htH3F7jN/NO3Pp2Vc88YvYCV+65KRTPvrRi86E3ODW0bK0rOnnrmwf2pGo9vA+YFafO5HTxmB58tpZHxorf+61f3znkdN+9ZMO0xip7fn0J8JlSz8O27ZARHgT5RkL9zv0u79/8bG/nXDUqUYKG029j37/8iUfOtm3F6ZmrETzWs05mzOZ3aZ1wH1vPbfFxZXR1amYLAdE+UUizAp4qVLULtuUTvosr8xU2qIQDgyHOIURAo5zbWhXdnangqo/aRlNyguKJri4Us31l9aD5cxo6nyHZ7ybWb6XyQ7/ATAT8XQqm6p87HsndDzU/pdrr1/YlWlowKi3Z+WMJdi3XnnltaPPPvGN558vjmx9n50u99z5QK00VMrVXKpBYEAqBebBU80Lf/VxALjuq5eeMKMYjj4S727ZMeEmrUgbkrKUL8VirenWF+/YUFo71hvaD/qVZQd3HrlfNEtC1/Tu2xBe+YQXM+HG7656auOz37j6mUR7eN3XjpsW63s92732iZ3fOuPTKw8+9Ndf+urJnzxtWXXroApT9mu33hP54Z5BEkI3Iilsf6ApjmL5l3f5FqAK6NOaSE9f59IDjpp24jzMXTCdPVvW6zdeLpfH2ay+55/fnprWVkHGeNZlkba9E9XNA6PZajgSAgewGdMIvJBLAA0YED5wevu4z2JOcHx35W/b/MHR0HBgRS9sn4CMAO2yssWnFDCNZneR3s4Emr5w1bK+i87/8KEnfv0X3/nW4ccfoDT/3m//dMOlN3z+tGm3PrLHncQVR4EPMQUOOB64zRFbcW/O3MjaDW4RaIA4YJaQelVX4qmJ3HEz57T3zbvp6b/Pi5Es6KmqePjfLmmet3z/jq5f3XvHN39x7ZdXHfOHrW/g0ZGv9EJbetYNa3fN2b+3e/qqp555dX7bQHYEYm1sWjxiyVrHvJZaZSwVITptGVZESq5AxeKJ4fEpNwhb21uamzOThUpvZ9fIyMgXLtsNAL/+fmtvZ4cWotS8au7ig4TAzDYUBqxhYnAEMbtaqXAu4/FktVqNRSPVahUTrJRGGCnNMVAEEpAGoABCaYwQrvcb7ytMaK1RvaYWaAQwmWkPTGJLVd26FXD43Pdv66vGH4TymwAA8MwFzlE3uQCgBt/cK6MnTTvmhstPisaaurp6H3368WRLezyWnhzaldZuW2vMc123ygkgpIivoHVaZ+v0aVxhw3a0BoQUgAQg5ITzH/ndyyxdPe2MQ3/ys4diOf/j35mXaiWCMiIylJLXf/aL8VdeeGjPtmii79evbgGAu754xusPvbp5eOLqB//csaIVTw5qX5pGXPCiNlpZIo1F0c3ucawkxCOFWjElIzwIGGMaASIEKNWcI0o18iTK6BmJXx537vNVfe8D5/3xqZ9m5jeTKLcLgVXtzrf3J6YOPmH6l1G1WgXqUFbvao4QQlgrpaSUSiklpGnamFAlpNaaYgJagxJg2cA5IKSkQAghSpRSGGOQSoIGhDRGfhhEbEdKSQDxqjISphQ1rQSlNmgEBvVVqE3HVhqEH1pATJNgGwQBoDpqo0INom3aqfhTNbv9CAB45vhHJFZN7U4l5If95TgAeP6sZyDwR2tinECHpmc+eiIArLnlsowf+c7nvnlXdc2TH/vs7JPmEESiZqSSSZXLw6PZoKixASpk0I5NBkEugN6MU6q4FQUpijTXCjNfKELlwv2XpuefNb76b22HfuG5Z5655kNfXD6z9ma+MrO56YpNWQD4A7B1mC+x4BN/P8toz5C9r3uTr9tGq+5eCLkZa29+a/qZ5ySXpGTH/ivTK/b/9LLxanXtHVu9SLjtmRdeu3dbNM3a197TV3uYmIl7hxc+wb3mgS1Nmem5wQkhyuMCPMSXLp//2IYthjLmCZiZTLygy4PJpmQL7ovOKNcGepauPHn+s0P2IWd+9Pt/n73/czO6/lSVr9x8ZaiDeft/4vMHnXzXqw+5TaTPahseGdm68/XOGU10cCdLGrxUY4pBW2pk15aRGx/d/zeXovESj3tMx1ynxcG97wPNfy9A9z/lh4KSCUyYVlgoDQQYI6T3jP/hOL7KDeEOaiSnlaqjn1rxzZeQf/LJVkMDXs5h9zN/n+zobWqhW9c/XcoW3v+BdnaIYiDnzopDW9ub6/ZWq4GvxdTKZP3XUokPMzux39Kt4yNrVm9dNW1eGMO4qUnSipnbtFZVzP3Su2314Vzf8jMXO02tHanBWnP7vEXBL24/Tj62lmwfOfO222D1Q/zJDT8+69vircHz58/Q55yHxkb6f3vRebPiCyefm8y0ZtymwUh84AAqdhqcpXfCOIhqh1bfuOBYc89zSxefEE+yrYXN2zYOSRi5+7Knw+EgTZwjzz98wQUXFPpz+Z7555+1t/DG1pHX3nTmZsaqYycv6Y5EOlva2wrJ6aVSsVirZIsFbBhONDY2lR0aHl23NTRG/VzV2++A4w84Oz6nPbm7Wt26fWjdrvW7X/f8vHH6OSu+8MXzg1qlc16cs54o7B3oJ4YzcfNtV3z9nMtffuvN6sTAgv17H3jpC+ctvu7QlTPCJdPeHJv43HkLMq3zr7vhzokdvs/4xNjQsYeceP1Tn+5tnv6bq279+WW/4alwp4aFvW0Pb93Ntm2fm4EgUBEKbQS23P7wePUva+Z0P71zF4Tuc288aGkwe
uNDHeqxzTu9NMuX8s/feYdAqDcDed8KQooDPdzP0+W8w1kSggUHtO9CNSnCeNQmhWD7lloiQmTJe/mxDYzAo5N75s5o6J27ns9vKE90Nkd6P3Vg4AeFQq21rUkgMKgRiyW8UJqmxYUrpVRKcyEYo54fGIYhuWSMYkQ5F5QQACy0ZphogI7caL1deYP7ao0QAEJNuVGEMQFUNIxoKA7/6bm3f+22WfPmw+YtALDw63+Fm04DABxJX/X1b1x76amMglLBpl2bUk3Oshndr7+6BoywqamlkM+lmpIcV6jkTErNybb+gY39g5g4La09SqKIZU6Mj82aM6d05c8j4yMnfPmaN17ZeO1NvxodfB7jDTDuqoFaMPrg6r8++Oz9b33m/t8dPGvuZYuOrU/IwAuvvzQ89JkrLvnet7/31wdvpKkmrhUec+mypc88vOGHF3/t5XVPC6EUo9LlsUS3UpzhRBAEjDGFIAxDMxoXUhKBwzB0QD39bOH0C092Oi5esvj2QtfewS3xFCJxjKsVm9b2ugnTqVWiiajwBQAghDQ0Jo5ojZSyMA2CQEtlRRzAOPR9pQSzTMm5RtowGGiiATRGnHOMEUYEE4IwFlJEolGEsVYSEWJ0ZAK3QqNJBLhc9W0nWg08bMS9ihp2a0krQrJ+qZIdD8vxZERUXKvsR2i4ZWAPaJnx2HIAAOjSaEqpyWEvltjXT5p5nMulS2ZVpB+zEDwKAGBP7G498djv/Oijf7nwR6df+pHi6+sCYuZLxeaWRBWbJQgAmACOpcY8aEphy1BQc+NgRkApyQEjzaVpoCoCXxGwSs37LXSHn5fPPGnI8ZzbsdSEtf3lBl1qNy4+e0nXKYeOjv+pb4vW8ZjtzwA8781THnqJ7nqrHa264q7Prr8Ldr/x9RnJ2x/Z2/yno2q3b+304aVHb2pKd5EOx+iaSb4L4soDexJHnB6yLiaHWRQpnY5iRVLheL6lK724PBnLtPaB7oriLxGAll6wnCldTIs+kl2d3910AKjf7H9a2zT15Usv2HjZb39x5c9+/b1vr73zqQMOP/mb3zr1nnvumzE99tyLz1W3Pmq3LdeFvX4ZEGKhpGhsqLM93nnC8vzqJ23CtG0FIY1Gyu85uX/PAHVkhFu10dFAsMQCyUAHFc29d67Q/22xs/cUPIdWykFyoQjeECbvpsntc5tSDQa8fpQde8hHzA/t98Sf71zcup9sN+GpPwPA3+44b/mceTTOtg7tMUJT1zxkWZQZQmkNGAEQjDRIQBpjA3Fx+HE/B4BNT361K3dG8mMHAcD6a/5KD5z17DPPUuXM6GmfrBSjhuGkO3joISAEVf2QJ0xmMFHbNsJNe7JCothrY1S3mEU3ENDYaV5Al3WtcJwe6PCPPPqCUHJjyWfffrdL33nNLLzZSB/a16LgBlL//MmfAeo9vp6sv3Z9EWbUf30SWgEAoBVgf4DrAD6qxud0t2wcmjzlxJP2P+tj4Q17jzrhc65VOMw8+RF8zxcvvOuAVnbnrie+/9Of42XLoW/O5vW3JTY9DUtWZj58Zuz8c3W52jY4lkkkQGu3WO5s7e4AjSwLTAZIh1woDVY6A1sHIJYYfuulro8c+ejPrnvikeGZZuRbnzsq/OF3QDoGiX/rws/+7ae/jatNx3501S9vHJ/RhQ5Yeshr962Wc4/GZnzP5l2X/uiwW1/822WrRosYaEFf8fNjChXr9gd2Kll99pa7mqZVq8H6qDkXePKOu657siQ+sHL/r05eCax7vCDl+FT/0NCegXUvPvxwTS5Y9/jOP9//243PP9kVTr/y1rs/d/FZj3zxwoonY7QINgFU2LVnoCndmRQS0m25ICzuWd/Utt/m7WuaLdwWa64I4gbCcNSQOxYbRl29HX7olWrVQIKmGLC2I2a5WvNrfiqVgad/BAAX/Oz7MvCHqXaMBGOGaXJMqF+u7B7Z3dvd49YqpmVqDX4QUkZ9PzAYNQxDSGFZVqmUx8hIpWK+79cqlXmirHXDaKO1RggBgAaNMNJaAwKsESiNCO61sEasTNiHrzpv6WEnwtIzACC6r9bHizf95ZiYQRnRKgSvhsenFrZmNr32Mgfoae+wInYpP1XL5SPxqMdRyEXoc4uiqsCZdNNksYoULRRdx4rv3DWICbFsZ/S2378x0VMZ6R/82WUT217ZuXbn2LapEaVMC2oOydWa1//hKWefsWlHWKhEYI6dWfzrXzx2/d+O/+bH5NBY3jbSo9nm1o6N/cOYZSJTPo4K5fk4rEgsfM5N0+QVAQAGpWGZW4YJSjqQkbqpJ1Eqk7yErZd9eteZnzzwnC+3jRY2BRIvKjZrR0bC0MMm8wBhgvfxX0AIEAIMBKEwDM1UAgyD85BLgRMONYgGYFJLKTXGQgitNSGEgEkZA4R5raa1NuIxFYaIEIwkIAxjOXdqKjsxHrOdydGRuONw7m3dttlBEenYXJGwHMRiiUKtEHrV2S2tgy0JE1WabJKOz2u1mwFuAoAiIynOQyInS43pCiZkFQhJtaQiHPzGRl94wQWgI8s+u7D82BXe3pzpxMYqFaWBlqvIZN0aKkYICJIRTQPk1jQFkBJCpRVojJmpuSQYlOSCjuzZm0g+sfcpMn/hght+9w83gY6N4C2B294nYSsAwFfu/IrZtFXd9+u+Hgdmz3j9ko1rq7PO/f1HwvRDkbbOp/SnH3jtkun3bDjutA/ftuPs6R2k3SjFe8Llg+YLu/pP7I527nc4Omrp1u//pdffb+kZZ1LJoZKdm2xRlUEuR2t2LD2zHUJ3RlcaYolQVIHXgCMolyp7i82WBebAUGh07/+DoXVrnu+/9rsfm/fAT66rrM7+9Pqb0qr0hQvP+OUzD5lT68X6vEjbnamO2pQPPFIq4EhfJ3aiCDNcjnsOso6YHxkfRa0thkqEGsPO8f8dzvIvAm/N4W2H3Xffj174+R3xj3zgsI8eh3fvJoYZetQw23SkqLVGAIggIAQQAq1ASgUEO7bWApkGNB8HAMHG7THTKeMskdoBZGIKoECKBmL/z4E9VtYZS5C22ubsizCcK4ZvbJpqMGBj+qwtlczYzffjqhpYN9Iyu8G8LMBOc8Qvl5EL2EG1Kk4aRChBMALQCAECLaUkBFHhE9YgYTxiq9+/0nis6ZXzUyC1pDWvlF98wApsmCEShvJABIL0DA+tX7t5wFGqSEBJF4Ci8TwGHUMwY0afGzaK4MxeMOfmvz9x/MGdyRQJcmNNLa3v9Xx96pyPPPv4wwdBa8TJGK3t8+YuhuKeXdt29f/t6Uhn8u7vHrkoNmPiN7f/9MQTiy89sfHVP3QctgrvQNASIW8NxHwACtDe5g3WsGEDjYr+slRK6Dxh1LAtLaQUSlheLdC7nng9LE4lP5JYctJiuiI5T7NRY6Tj4c3Q0f3axgeOP3xui9HuTSzYnR1adfJSyiojyb3NM1pfWH3LJV896Qc/eeoXP36yuWfWqDvBk/Dw9t35r1z7nS+dUfCHcqv1Dds/03doZ3c7nd3mb9m6QYZPZcvkou8929Y5/YRD+t3s8tvvG/zGJ48gFePz
X71pxdLp9997jdNm/v62x18ZetI6dNGDV9x23MLVW/Zu7+puTVmpvXv3rjrxaCQ1BEqNc2qa0zqPCLzxg/Y/DkKQoGNRAloDtnuIFSz1CMVxzTsIBU1BSil8hDS2UqEfMmYC/AgApq88JcLscNeLSCFANqp4IgyjTpQgms8VECbVStWgTGqNMUFICSm11ggTLwwi8fjo6IQdtahp9o4OgGMppTHGAEhDo2IaAkAAGgHWje0nlbKYWQmCrnQ8NpWfX2uqL7eqDNQPfv7tS372rY8HXp7GzFSmLdDw/OhwNJ5ctWylbbP88FsaK0RsGYBwBSbapyjHiRHLALOxJW3TRkpipOJJSypZmKxwrhfBJhdiV//095r7bguasSw1l8ViFhrZuWf7cze/tmnzh77zCfj5rQBw0+tPHT171We/+pXNYlIeuz8tjRl9fVap6PW0REfDmordeu0dH/7KcfG922mfFRZKBpiMEOCc1M2/nFsWlUJUm2o6X04aU3OWNW9bN7hua/65tfZLb62+4hKYsQQuPG1VZo6DK/lCV5lUyrZOCRlqQEprkEprqRBgjBHBhmaqMu76PjWYYTAppcs5IEURxRhbtm0oraQEpcMwlAgNDw93dXVRhj3PHxsdbmtuwaBJNOpFdKotFVSNZCKSaZrFA9+xrVnLF8lmg4DFR6ckgGU5WnDUnPSpNOxO3D8O+Z0wfz+db9QwCqJ6oIAMyQzSkMstTCYIRoxCuQqowYArQ3uRmKIkOPDwleUyDXeOsAAws4Zrcuni2aNawlQxGnfCqhtKIAoZWAuKheDa0IiDQAgTYlAUE3p8YmqO6nHaR+7+4QNPWeUzm9rPMzOXbRlBSxP14q9XHvaLi66dEz9qJjxnPXrdxjWJz335tR9cN28R1057h9vy+rZv3ffkDb89uC+87aIzT3r5qbuMv28ZHoz+EeSn7ejkB+bZfO+BePqO807b+uKW5Yfu3PPWxuTcJdVxL93WHcnaJm8qJIYUtmI+JhWBUunxvbXKnpF4a5uqDeZLMIV3dhudbrEaX1s8pK1tx5rqw4O1DYBZMPTitXcn8izmuS+89EpyqQNWEDHVumefXLpw9vj6/hS2W5f3TLrllrSyI/HhaqEr3l72TVm17NbUhjUPLX6vyet/Ajc/9nxxT/63f7nlyXVXfONLvqasrcllIeXAUYlJA7TWWku0T7bGWGOkCFPcx4CoashfZlMEMI3jUGspQUsUKA1S0XrTldKeO14prpcKPrT85wBw2UvnUsqQVoZlRqPW52f9DgAeW3uJRhIBUzrUiGDQSkmh1Ynth7COkwDgwnO+ed2dNzx4zZXtHU0HnHLunjXXtjc7bCJXaepOTDsJAE6BpguuO/mkz3/j7qv/8r1v3bTp+evC5fvFBjaAW7EnJmPHdp9q+l949pf5gVqDAbd1tEMkEh3xFy3r3rpxL3qj0YmhI5YEDUNTU0SjCEbYshXXCGkguk7dCMIIY9BIYPX2rvCl3BkpHQgAALqvZXL7nhBBxorNOOhgTcB3KzA1XuUuzrS5+R07toxYWFUZBY6BCIQEkliCVsCa4y2DU6P1e/76yh/MXdZ74XXfGHj+75YrJ0dG3uVVf0/gxNvuPREAwIdrr4Vrr20HeOhP1tiQ8Zkbbxsob8TVSCXCJp978fWnXjnpUxfEKjXu16yMhtGSiGreRBlXMLWTYI0FNVzuU2JQhpSQNRHklBDSoCaNRqu1bGXTi4Mj2VV3HkYd/5jmKKcOLTSHUMTZ/hWzuulBP/kvh3c0AMA1zwLAqwAAlwFAAQAA1u+Fz1x55NvX/an+cUcrQPXNX6JY5jvfPhBq2ZKBE9ny2d+aHuMGIF+prNq9+9QLfja0ZfPPv/3pEjWOPuJUvnWo95SjeydO0IndMrQ7FizCFlR5RQGPx2PUC7A2bTNT3pUzjbjnVau5MeG6+clKlCZSTSybnbQtCgDVkpvJZDLpeC436VWymXhaAHEAAKBw1R8LoSgvjLJp0xw7opEORQhSUsqAYFCaEiKVxnVuigBjrLVGGCuECEEdHe3bt29eZRLLiSCECMZaaaUkpUQKgTACQFor0BoTokADgFKKUGoSxotVDnrD4D8OAQAAu6Vu5oTfXP+98cGd1G4Ky3wsOxxa1qKlB/a2t4AW4OZ43hUBEItUimWpkC91SUEi0elEU74fEI1lGFIMgHXo+ZQxjIVCofIraTzcc3gbhIoRmS9NBVOljVXR3kp6lsZ/+60fjY8HdQa85803zv7IKVdfe+/aK68vRUMHcZo0xncOHHjEgeOsx4Tytbf99pBI7vnStljVr6gg0dZZKBTaWltLpRIBlEwmx8bGYrGYXUtWwtG2A2ale6bfc8sbf5x53PHnHpB9ak+KGTteGzjzuVdXRGCyBv0bF8uWSahkaTwBSmulpJRScq21wghhjAJdAQiZNgxABhICVIiVAm1RjDE1idYaJAIAxYFg3JrpM6NR3/Wko9syMxzHcatViNi2mgrzhdbmqKQioIq1xD2BpC+jRUcFNWlbMoYrwo8h252cMFIJNbnT6+rQrS2Y581A12lrcykcBdiLw+i+6NGaUlTJsObjVJIWivWTkaoIExok0sjMF1wvFqmp9mwtP7+9RZjQ2dfs5vxC1o1SxBAQrIEhrbQlNfggbCBcOxAOudDRk5k7rYcIP7Vk9vJfnNd68PKLv3ZU6fCW0786duuOhmKwJSruvnTs7K8fetX3Hl7TBIdd1kMGqs6ilZt3vtDZGSkUb9gvcm7i679efduVifYZa3Lw7K93Hnv64nO+9jG96pBNe559YXTr7d+4evbU0IUnfG7y/r+mJN724rqxSXTqNd+GKCsUKimdKYNEFiGGDgUHRJhNzQivGe1EiKa+6WJ4xFp82vFHpt8s+1/hy7590dHf+PbNgxPBwm9/94VbT8gNb5xe9ddW3OENG3TG7DhqmbP/nDndLYhFsuvWtCQzHhpXLuvKZKC8F7njtoUGN7zmxDf9HxDO/5fAE/7ea6/dsW3XIYcufeje+y8++qrqjsFYSxK0oXweaJcQQhilGKt6lIdSGoEZBmAQwKDDRkalUNz3PTugxDQp00RxKQXT+/p9BgwRSqHxtbk5k8vlCWApJeiGfqyUjxDVyMMEay4RUhhrAyFWaxhervzZF1657bouNLzsQyd9demcH3/vD3967CdyRwGphum+q6t52YIl/vjLp3/q++59bxx60OnTPn7cnRd/3GlxYPdoasURJrUkya48a2kDj0cDvqCtvOrgw4AU33ruieK+0UbSSTcI3arfm27Nl3NlN4w7NqZYSs4YVUpLKREmWgOhptSNKbAMc/pnj4TbfwIA4ZS3t1q1DLL8iCNErYBNXhvcM7S3AEJpNVgA4iAlgBhScEBIYY0Aa0UwIC0GhvaG5r5JKQ4Wd6lnrvnLqgP7WFMMOW/3wH3/wI4dRjsi//aNW2mbZxhGLJW88fdXv/i364pjo2VeS7k8r2vCbsvYEc+tlkTVjic6nJjiYS2tHcm0FMoPjCgmgAI3wBorhaURPeIr54GJYOukJWJ
iy6gXI3EMoWOzShZNWv/C8cusFwxvr67b4BBimyysJbzoK6HrOdTSRhyXLG/8bxPbdld25RNvvF4czr32wCOHHrzIz403WzEPQUKYPKyZcQPbdmXnAMOkUNqGIkESM9CiOj5lVbiZiNoRyXBZFq2OKImmLF/yppYmbDKOPbuNJpKtpmHDvi4grScfSE2jP9tv2RFAKJ6I5wtZIhklVIPWWmEAwEgJSSgDTZSSWmspBNLg+UE8nupsa3l9bOy4RMLzXIyI1NIPgqgTRRhpDZyHhBBCsNIaCIBsWKkxIUEYWICCOdMbs5Nq9CWMNKXj5Y6p0cnQMrpmTm9v70QMKwCs0FQlDOKdoOwto6MWiwhMaNRJRmMGsgPPpwhhDIRiKTkhBigtuTKtyMhg1mHGLV88o/Lmy7v3jBZqMHvBvFXfPz+zaLGVidZYfv36LdVJ0QYAAOXR0YPOOubT1/y0sndgeE9/c2u6Ui509HY0LTp07OmdKTN2yXd/2H7qIZG9O8xkNES+w5WUkjFWtwMzxub5PgBQOyons3bv/E47ftEtG/z1//D9kEEmOXN6V6LJ2rP9I+cfcd21jz058Ooxhy91R4fMMEQASEmiNdYaYQwEA4aK5ImmJNAm8HwVctO2ABOQUmglpVQghRQIIYMyx3LAMEysSsViIhYHbSspvSAwU/FACGYsplQjTBDxzQiXVGGO7ZhTrWYDS2WamtTIJNaWYsRs6az5vt0WEaNjCQdXWZb0NdeXhu7fvLyZSC/35uuN5khFQIKiJ15ZPXda18z5jRYy+KA+tT2PQ4CxsbZoW2CIcs4FwXunJWAsN9k/yH3uGERpBVhjIFwqhrFjaT8gRHJG2JTQLb3NvQtnau4jcIjvdtDH3vjtMY4aEb9/cOZK64i1jSZaMxJNr4xkX/m3Rw7+8mmvXn3Xxwqo//lngonqXKu55jd/cLoY3vDQ09///s2b+n926bcn58y7+ckPppqP2BGODRVeCLfsPjt5wBHb7/38SQehcz/jjPWz9vjsorfSRUV3LS4WE5FWCFGUGVoCeNx24nZPp9ceJ0kdyU6yRa0wiWDJCpgwtnj+nCbnsB729P2rf3ThKXM+uJSO1GYhSyfmODPHxl9Y5/S2KUNa0VitWAp2Z2NNKNPaobhrRwiUdxT73xBqLaFjo/maLAMk/pVk538REpFDYkvc9IWfmXXva9PPPP3sK77T2ZMAb4pbTsBMOzQBI4WR1lpoAISIYRiYaBoAQVUeYIc0WELZi8aS4PiK+5wr0NSwEygaazwlsd/xpSaQjeZjGHRzU6ZcLHt+EIs23hpbNiCshQgD32AmaH04W6ggon1c50aRHn1QPPrG+teX8JVXvfHo6d0z//yru8//6tnxtxq35fGAtBpW1i+1bW1fesxvPzD3ni0bRljajJfTlRwpbjrlLxdHcyYZKe9LQ6rsGHrL7x/xi8HQYenUWqcMrgSAQjZbGN6Zr5aLExOe8DqincqUmGCkqZJYa0AI12M2iHBxo4EQbH0w+aJ+8iIAALAc5iI4eP4BqDwFtqyNjuzZO4lD6EBoEFBHV2t30nx2y4jW0gAUYmlICIFaWqU0tcNARxoMOAlycnKSxZsQNkSpoEu+/d7hwn8DL7zy9I+vvfmilSfIKTho1Zxbbr7rwp9fAn3N8b2VeKKlqGrpRERB1UnFI56FRmWmud2liHPBQiiDa5pRKblUWmtNm5KIGKVKpceMuyTBq6ExwzNUQs7Utqt9xJGoYNTOEk3/wvEnjvvhfzjT8u+/WgArABbvvtHYXrUI/OrPP/LHq0Frc6TiUaEgUkM0YNQo5sdYU8pi1IprUXPAcTwqnFnTbDNSrdVMBlEBFEXcakFTgkJhMwdp4DxQYJJOQwgwTafxxBLHIrQRSKkxUpggyigPOIAGAIqAC44pA620lpVyhVFKKFVKMcxA61q1YlsGgB5MNLWFIxgRAASBDwgQJqCUYRpQbyaL6gZWwAhLrYllMst4YmrqEN5498nJDe0AABBMjfZPTjS1tC/fbxEgAJCgOGAMhKRb2kvVqrQjgZFSgtqUMAxhEHp+iBloABEKhLDG9dJbGmNQXHGFjDitjYSPbhs68ysfWnH6uc0zZmLDHXllc3Gjnz54UXrpUoc1JmS/I1ZCLShsXpOizuyuDKnweCZlJJOwd0v3LGcs+yy4Zbn6mVaIwMQkIB+E0Jwj0xS+r7UmhoFcFyHESEBkG0wMkwUrp4O8+Dd/XtW59NrKn1a0HOi7uamuGC8njvnoyc/8+f6jDlpcRLUUSSMlMaZYA8ZYEwCKNUJxiEg31MKnDGNMQo8LzRHGBiZEI8oYMwEhBBhLznEglQqZosCBcwUI2+kOwMjL5YDnkESGQ1E0WLN3ze6Jodam3jk9Szrb5kXSye9cetnnPnr2tLYOHBR9PxuPMEVLiVkVMEPmFD3Y09jszVlQhNhiyaoI3AMAkDXAEJhqnUokodTQSqv33kurBcEsFAi7ttdAKKbdniOXhbJS2jU8VQxCBigAjA1maakkwRoDDhHXUYVCYlFd80VvU0IIoH4gyiWaLSo/h1oSvmgC+QbpYMdfNhMu3QQAudCtNrdXdG7g+fWrYr3TTm597c9vlAqDJVRjQ8+/uLsWPHjDJS/+2Uv3z7mw48zp058arw4NX9Xjtt145m3xHPxg02Nf6Pvg72584HefGpA7N9DdTjrWLEkQt1hIO7wsGdEhBiMoi4SKibLxl8eeKFD5tS99crQyHhn2sS6KdCq4+hIH2IYJ909v+HcPbr/zmONiYjq00+NO+UD1zfVbV2/48A/+bddTmy+/+tLm3hmRuI60jEN0M1hTYXFnsfy6LbUXt6DmsYRsagdRgq1ZH+D9bsB8yuKe+zf0xyD+489e8PAt3+wkYa4E6eh8xjnjNUAeAK5HJ1AEoBQIDir0E4zXvCg1MTQcoLS7KV+rpMtCmRGUacbpdk0TnssbG6x91rsp3ufn/A7+Exy/4LL/fPLfgaie8OlPjhhrMWIB7D73kh/f+7PLzvjaJyNtjd875i1OBjFlhhqaK3Lb/X/d/LOrZqLKM8HYQjyzFzJNy/pOLVn9vBbsY8DbNpZn9dH5TTDk7cznoqphTN400n/ojF6M+VC20mealFiUYSkFYaZSgDEB0FJJalAChpANxfmbV363jas6Ax6rli3C0i0JCGui6K7fOA4atWFco3oqJFGfS7+MUKjBEDoEjQRgwDJBtaNDiCgj3tBLUgChxIce90Hgu4GYlco/beP63sCvrrr5/uu/O/3wD3z27M/Dq/kVJx1+zonH7Xj8L319LVD2JBbVwmCMtRQm9zLHsB3TL+dF1TUNUyNINnWWs5Nu4DY1NwNBFbcWizmplo6aDiNmFIocBBatfURspY7JoAdoFaZRzkvs/3lc/2IoTtViTam4CUGxAolkQmXDqG16di1WQYwZo2Ey2uyOeLVaKeJniSqLKbBbejj4ICvROBNWgqqkMAPbIMg2QtdF0aQmFBS3KGI5qYSq+vn6umpGlKFliCihQdU34zYhOFSKYCykYBi7bjWZSi
tKlFaObdm2PTExSQh1pd/angk5Ny1LgOSSDyYyBJu9tSmCidYKA1ZaU0L9wDcNE0AhQABaKQ1ID7e0pKPmXIyAN8xKkWyDFYccd6fbZiyZDxpCL2TMUJhgABFwwXXciaYT0aRlbd62FyWS1ZqPNDZMLBEWoSTYQogqFVJmKO5RjJgUiCEVlNpbW6743b91nnMMDI1Wxzfql7Kti5slaHP1DigWaz2qLr9X1m0jVKcUrpg8BkYtyoxdQUinDKxilPFcf0lkm3AMWFr6vowIKakdiwSeZ0ZtjREXwu5ugyDgtsR5CAyINbeddcr0ex7c+L1vLUwn4xr5LXHa3jb7gTtfeuC1W6752ZVEGh2ZJNhNoARIBUoDgMZaaS20klwZ8Tho5Xk1TZCdShumqUDhWiilVAiJOq0gWFCMEMIKO11d7sSUk4wpTCZy2cHRsf0PPUT5RRzPVPzg+aEt66zW3R0tdz+42quOnmKNH33Qkb517OOb2aJCtY1NzljIRLIEaoPCQxRsU2O1j/gL0BRxX2LTaPDajuZ0b8SI2g6dHIttPLt+ktbKISWGhSAZV9mJkCWsFfOhWCxu3TVc1A4lSCNsKyI4kRopqpEKJTcMMJEFps8BtZuIyRAKQyBjZGpIljxig0IYWUKdcwbEwMeRGGwCgJ52jAqTcSF3jAz1V0Rv00F/X//IxX+7++eXfjW6sOvM8zoWrWh6JTq+ZvU9eScitgTtKftDwaFvPbT6ss+ctaK3K9Iy7QdX33bUXT8Bc6axYibKmpP9lV1erbS9+mw26J/ySRwIsiYk3zs5YhkttfjxgqkHf/aa3voUtSsjbFNsYq7EInHUjw5uz4ywyTPQOdeuXXPDked0GQsruPnloeEtxdI5K1ua29h1x1x+xIcW1tB4pD1QkVBBObTDDJpJcQRRM+ajEsp6fiUxfc5yugBgH2d6O1P2P/BO/S9uwNyym2y+5NyeoRlL/NIRh8+HkU02ReX8njhyRMTAikmJJFdSKwwEY4wBUYztorIJBdAAqvG0shcT4HX0MSthRFoAbO6KyljJ+ZcRSJCjw0u/cO6bH7/D37x+EqWPa57x+5Ke2LIjIsv10KR7X3rklPGVS/tS+P7b3nh29RAt3r+OHbw0lSCTux8bfPG2q6985rkFs9I33va1fQwY8JyOFrO39fXNb5SUsIkBMgSACFLZUj6fLbdQEqHJClaaU2aZXGlAiBCiOGcUE619rIx9Msj0lnDGJAAHAMjX/O6OVu2VA4u+um6DpXGMaC411mpWUg3kcsWcWjGj6c2BrFQEawkAMQ3tBAQzheHofZ3IPBumglrx6deMVUb+tZe2bOg/Ft5vGW3Xaw81HTj3iOM/d9GZR9/36ODvr7jSffOuziMXsgrzTdzs2cKu0tAxPVkgpVQEjVdFW2wWRJkfDO1+7omuwz5kzDh4ZP3qnsXHJPTukdWPD4/tbjfbS5l2KFacJNLBNlXGhkO5AGXpLwABAABJREFU3CZ8H0tNpE79n2DK/ytoCgzXz+WlSKaTSnglV0WjBje4baa1bYURhSUypKAWdWtlxwZKtNTKzxeYblY1l2ioBBOxsuBCMSeCpJKS4GiMmAaxHWUgTI2o3+B2JATgEmGtEWZRCloloqnC1DBGBGsABIEnWYZxWUMIUdOaLBRDkLNnTNu9cyibLcYTEYNZ8ViiUCl1dXRXStWptj6rvFkKRfA+l44GQKDf9gaDHmlqpWBYND2R39o8u2GCjo832meP7h1YcuQxoFHgVUPf0RIQDSSYgS9NS2PCaxXJMJk/a8bWXf2Wk5AiVNpUigMiCBRBSmCspDIIYKQDpm2OKWOLVszRQVC748HH/nBL36KOvVv89cO7Dznz5GMu+rRXKyrREF6N2fPMaJT71VjgQpSFoqa6ozHhBDxrxhKiWs3YMwETCATylWFHhRZS+yYVgLCsCUYdriRzEsxQwFTeKzWH4wcdfN7zD35/w+rHlrYuC0qjQ8hqHqoddmzv3354+U9feuy0Jz687OiZfGAUmVw1M+QK7BEulUXBNX3Hpcqv5Ku1ppkHQPNSQQwJoCCwlElQBZ5/2Kjmwg8srw0VI26ZG2UkUqV12cQsHijD9K1ks7n14ZeHeg9O4/TPfvLCZOQAMfuUF9wtc1Rb7qeX7h0vvNnvExnsf+Z+4BUM7SGrRxktqhSwxJFeYaxQCJxI70D/4AL4KwAof7ayUhaJv01t91uyAqpuKISKt7acdCE8/BoA6EOPMjRopRQRWs20teSjY7nRcQ6mRYKykMxgiCsElCuBQEpFDAvna1yAn6TE5yrVZHF3SnvMiUjU0aW73cCtmkZaIYKJquUqY8PuQgAAyEnVxczdBEXNWFuxcOt3Ls/NLY96my6/5fcyIidkbWRiPFod7zni1HZzedvOpmg0kZs1fnrfx8SWkeseyV3/9OqJdQ/qJWdd8NvH+B5PhAY0x2LtRqbF6ZzdF53X1O3wY+Z1s9TeVYuOdSSAAMAwieczdVIKg8bMV2CTGnh8gkcsVi4rS41kXS/cPZkFH0eb02fF8OSeUueBs8l9fv9j1rQDFw7Dwq6pIN+aaSpVuFEMMY9VSrXqRATQMIQIx+yA/DO5/z8UpqjD23Ut3r7gfzPp9sY1qyEZ8yD88J+vNkIEnRGEdCLQvhkwHUpkKFAMiAEIN7a1lkIozDDGCjSAqjOzYPrhFLCtba2RVoAQZTa0zohppevxmEIBwhAqMDAIIZlGWkitBDNNIFiGfiA4QhoRrLW2WSQ/sG1w2+spi/NaIZ1KjI5Njdz/eIijiz5ywp5hkXBCryU4/iOH/fUXN3b3tZ4PAAAbX7g1n53Yvmmoo2faFc9eU/PcyPB2cCyBqNsaPeWoVaelfmroglkebzBgHjO3vfh68JIApInleH4ja4prXPQ8CVpLXlFhSOJIcCppPeBFcE4wgNZKa1NRvm9XXHLThWvX9cF3vwUAldCdi5pRIrX15WdZwFoc5QVgGkpbtvBVKwrLCu3ek21BqISAazABMaY9jjKZliL2pWw4eyzdNKlFZN4q0VRuOybedbAJlz/wztq/LzJa28oPAcBLAPD402cDAEq/LVXVTWR039c614y+ei0Yed/1eVgu98wjfSfccs3Xv/b13151UGf/OnnQBR+c0RfJVV9uJp1Z5Zo6RymNtM7CnpoSo+Up3tnZiSj5X0TffyEUTMlMS3n+ZKVoWCZJRioU0QBw6Bdyowk7QrSiJsqWq9imImfSiCEJ4FhSYgrJNLZNQwmPaIsZ1UIpatrANQhVLVYsKbMmitiOgZkJAABBlBFsIo8BAK4XeSCEMhp4gWUYSmvEUAg6RIhSpiVPODaKRtyKG/huNJOmhJRLJcd2eMi1VG61orDGXb3lUmk/FSqtlOAIA0IAEjSg4WgSM8swYqKaU8yYkN5s3UDa4rQZyfqQIhmFmV92AZhpgdaSUlNr4UQZIQQQYCxsJ1KpeTzAlqUwMjDWANT3AocxjJGUUgOhhErJU6ZRyBfbW2ZD1QMhrIMXrZr//RhTyxNdh49NUK5gfJyUq3akgUrG1
LbAi5ra4YEUAYaxKQcRnYmhaMvktoE40xpTLghx7Joui5pnawrKpjiBEQ7CECJSIqEFMXK46kliG2BEhkf2LKPY2by2Z79TTSuZouHIVG5sj64Y5Ut6ejtXYij1E+p5iNgjBAsE6QhRBAS1fKKaVDnHmtoWD28aL2f/PK0lZQqJA61KQ/nqOKIlOyzSL/8xefZJaOmhajg55m4b0yPl3TCM1zimgnxz58npS3/yhRdf3XXJs8+/+Orrxcd3fqBlrpS7r7yif0u/frapN8IMURqMSJFhFLmVKOmPWiYXQXNTrJwNnn/hDjfaXo8vfPVl10Q54lgm8JkAACCzewhGqjIOjq0qD5F9J7VDGIqoibI/MVLwC5ZD7WSclwNeUiZFiHNFCNZgAFZU+0ouOnillG4hW8hPjMepMW1mb3l4b26oWmS1nqWzpBFFThJIElBEUhrJkJmzAeAWANixxz14jh0bJ5Wg0Jm2f/LX2x5+7OrH2rO3vvSLeGuXzBa4gcdFKTqC0ODT159zx4Y1Nw+/4h310W8fFSm7Nfjq7382emr3Xdfdl90Uxlcsl53tYXuiFtXj2dKeihkE2pjyb7z2luMOaRmJvnHt3184aUmirXtWLENbk5FbX35zx/hEn92OVOcb24ZzXr40mUW84szozo/vBT8PLPfAv/3gF98+f/GH97/u9ltO/ciZ8w6ee91d13VMjQGJx/JjnilNTKfK2dZ4NGK3FgaG+4449Ppv/Mifcr/6TwjE/yNP/c9s+H8Bxg87M75glhu1bWEQK/P7RPbsn3yXxZPWzgL0ZnQ5W28KrpQSWiOEKKWYUK1NpRRG7zxFj+Y5gEzE6yoiIQwApEYIIYQwqhSZbQIlUKthDSYor1LNTWXLkwVGqUEJwwpJqQQnWjGCRtPxdMKeu2QWUrXde0IvlZgxd+7C5rZKuclkQW5qD+JCY/sbV12jRA0bCn5yHwAEFnPmTutYNCPGKby+q/rwq6Ud96pchXGmm/tUoj2WF6qQnWLVBssoucUE2CBxgFUlqCJU98GBCZIrrCj4mrQ48YJbZomkVhpRrJQC0AghTLBSQnIQdmNhjj3+x4Q+VT/2amFkYVd2cE8u7063DCWUY4EGhJEXhCSKtWVSl+sQKYF1hoIIhCEBzMhoPkdTBrIbMzsuqlrj1665be6JLZu89TN7Znf/k8V8b2S0/12IrroQACwAC2AJAMBXPgPwGQB4eQQA4Ld/fPvKHgAA2P3INxjOUNOa3tKFQzOQAhh5Z5zvfgt4DxX9uDLBZHYkLoQyABNEwAuA40Dzzu7pXlA1GeWhbwAuFstxlhIaKcERIW5NKIl0uSZAg+DSsuxIzONcgC4GlXRfJu97scCMsIjapwFrLrhUmGDAREuhlcJUpzOpwb3DFjL9IGCUSq2BMi6VDkPbNJhp7tqzt7WlKZNKj4wMW5bZlG4qFIpKStBaSQVap1PJEYTaJ0e10gZjoHW5s2uiWooAIVGqqlO982bs2rOnraU5EU/XR1J89Y0kAAB0xzNYwd6Bid7eXkASI6a1JkxpSUCD5IISBgimJgumaYA2MJFKSqWBYGqapuAcIYwxwhgDUDco2yalJisWpxCgwoadERzDTVapMGgz5ougWA6IgqBQq0eJjESs2GhRyPx4E+MTxel2fDjMo4nxdHYEyrioTIFCaVQUeMBNC6ftDNQqBYJriZTlEBdcVC1xJRykKiGedPfq/O7Xzjhr4bm/FckBKE3dXbVt0MZL2jt9yVEdK+d+76dfAbE7fHOdBO1E2kUihSgtTw5qFYQacWJ1Jj5mt1seQ2z8odiz9/ivPWUbuBKqWAfgFKQCCBLAxiH8zla4PirHWbMlp604LDuyyXlm28i0wl4j4qw45Oi7Tp81NLKok17TfNQf1/zykJmFo479SH5iSByf+G0kHQQeqDQmEIah5TTncjnDNHduLfzplr9Ekubtvzh54fQZMA8AIPaB2yf/cagMQDWkJnj69e097c60eXbP6WdzI6wzYEICVMWiXJE6gLgZS7VZBvVC17SQ5UCupim2bBaCllIA0ZAVsOa1zfsfsCDdnU53z8LBeGnvZH6oPOLDnNmzIRaBYtkwTQhcLGsgZFjzuJT1iB0cRHS5liZ2S7xrEo2tqhrX3nTfKXd+9I1yYkFUTiSbDUn42Phpc45af8NT8TibPi853Xf7WPmwMxcVJ9i6W++85NXLPr2oiTm9I37Ls+Wmx9d5o3thMt62w3WhUPPmdy855ZPz3rr2rTWbrvx228y+aQ//+eaoWnbwyR9+fV3w0njbS51zje50GO8B0wEvjmOhfnZtzLKvfvzLZx0c3/Onq3qo/spZnwUUXPidzz7z6sO4oxWmtnpGxDYgsChWToY1gx/WqiGLtYGw5i09fO3zb7xDN96vwo1tsaL38kOmANu2XM8/9tRVOrlbD/vZvVk9iGzDIYRIrQghtuOA1p7naa2ZqbTWjL2jsfOdTwIAkBQhBCiVgMMwDEKOECKEYA12xFFah2GIEIo4NqrWcDY3ozetMCFO3Ei0SDviIwNMh1HH8ULTZAQjKUWzPT/ixE0jIjmJkVowNiqnJpriFuJVrzDuuuVIzKhjhRliGKuhUsU3TbSgzdz/zKT1Cc8vMcRptgKjWUhivzSkn3zubZ0NlXRgAkFKagxsH34zoIhGVnQ563flDNvRPACtpZSUENCK1L3hGpRSyqBvR879+ooreprrPAUwM2nUWP/61lYDYQg5BUqBK8YroU11oAn2VRohX+mmBCGGlc+6GZOMuX5IcRNXATQotVI+csycmKwm4tGhEeztC575L+G9kdHea5h+4q/e/fUdp8W7me57r+irbTuxxiyZYJQUQ1fH7UhTxsUkajm5qQmLklIlMFua4otnEi3NwSmpXEYF1gi0AUC0EMik4AkAKOUmI5GIHXMidpSHYUtTJwgMCjDaFzFhGq+Ho0oqrAEhLLTAWlm2qZRQSmkNFBCWCknFKA0ENyxrx67+eCJJQE1NjMUiThCGoA0nEqm5bmtbm5BCa62ECANvMtNqmWZ2aso0WDSUNjJYLOYXc6mME5ar27btTiKiy3XlCiJU75str1CtEGwGgYewoZV0DCKFAk0IAYSQVqhYKNdqgWNbrhcYNkIIay0JwVA3fBKslQbQGCNKsGYUFDcZocRQRdfoaqm6bipicyVtx7TjMY8HNd+rM+AuNwZdnV7MmFHE4JAa472upyKqVsulemOhBjMZlwaWgW8EnHJdmKxEO2zKcMUtVqtuKpWIZGIYWcqaHlbHW5d36FQC0XlXl87aVe6P4hnRlCZDQ022ijMBb27JHvUVduDyxBVfAtBKEI9GKQ+cjv19Xo4NTZVUEEysdZJpr1xpndsrL7145JU5KW8ktmF1rT9nCOkxaRcB4rax05u6+h/64vOn/nRP8oaHva2vzV/vrfjxB9846MBP7f+TL/3w9NYW+6UnfnvlT54qDE/8iah1ry2d2foWeMz3WNowvGotEouVq0UL7NaElcvljv3kwRNi+Wc++7Wb7/t8fuuT9aU545OHbsgd9Nwr
ryDUILgdGMrSWfDBvBj/Amd5gMsBAKwUuGXX97SNMPMJNmzTsZ2o59UYs6JaT0zkSxVtmciWyJXKAjxeLj3/7MtRYraZpOq6RQFRYkgsw6CofKyqIRGmkoHrVrnPKUCsqSG3ze5z9o7WuheJ77ze39hldz0Ldz37sf+4uR86GOBCAIh8GAB2AcCf9rUbJx82AABgGsA0gE/9EwLxJADABQAAz8CvfnkxwMUA8N91LT/kKgCYA3AHAJxyAQB8DOBjAIDmA4Ax+rSWzAwFuDUaMwGwbURUjYMXxiJRQt9FP9+3XgCTEzQKtoN4DMEotG7ZkMD54uSgsGgykpZWKLWueS4hxEwCNgwWAsaEVwXnHJmIMFa/V7wjpaTEhgG+r8MawtSM0pjUtarreV6mu1sjwcPAiBOlJBghZSoTjxsRJUSIZADlEiloIxBYY8osiE/XSkvKtG01xaNgMAlhqVSJjGzD2O1qN4FXwNQgwU7b4DUcpr6UOGKSmE01AqHMvB9AwZCcKAKxVHZei1EW8dQi6/JP7GPAWkHUDiu+pgg0XPShj8JD9wCARlB1vZHxotJGEHiSgOAhNQwlBMJIcsEYUVIhhEEILhp3u+2KX33se1fWj5cuW7p37w4qIWGALyAaM6taorJwokwFvBISG2GtABFiYDqW9eMEay5pxMGGqZRPSd1UCdgBv8oP2/+Y9OEzp9m4kjLeWbn/oeLa//PwL1H03309AACY51/59nFy30F9ujMA8E6JscbB20Jc437vOk7sO8AA5n/1cDmRh6jknsdiEmPMQ84owxhFoo7kXEmBMGEEcx6ABEKo64Udra2gZRh4jBoYI8u2EMFIYcsxx7ITUWYBgBRca+UKLsMwFo1ghDztmxEbMZEfH+vrXgHl7IeWz5LU0fty+9C+PELmRGolF2PwfS+fL8cTMScW7d8x0t3dQQhhlHAhXBc0qGKh1NndWqm5nAvDMsNQKqUwAoyxUpJgqpSgpul6FdBaBiFlCCllS49KL1cqhGGIlDZNU4Ty7QkZH93jYjfBw7IMOcZtLe0F5cssB8R5KJiie6fKgxP5vpmzbIZV4LYeOsf1R2tI4OamKFpsRacpiTSSoDZNPXN1/vZWJ3PswMS9TfHooX2zYWwg3PKw0b8p3LN7KuuWYzDD5/DK34Obr/d/+PFEJBkfnBp7cn37dz+9eefG6N3Pd3/4gPG2TEvaMsHnwKSZSnauFB1WcORJRv+jbE3R2zOu+7ejmlc7zUSbXs3sWtYS21tOb25aZrkcnLOPNP2DKusvufy8v1ZqDBwei6dO/OCip57fRMN0ONGcjxltEPCqZxIjOzGhtDZsq+AWWNTkm3ccuOQAgQBqCtcauFN+td9Mf6A5HcvmG80YRiSZnWmBVlUdFqy5ESvtT1bH+ndWatK0TYrDeCoWtyQPwnLFk0I4MXN2X9NAf6XoCk0ASQgIZKS2BAQ6GAzAxMgyNRNhVJPSwKSXrZoShMpL4YVBIDXGFpkebeg6p39kwY9+s5qSd1U6/P8CTLy+hmvTCmqt82cWJifCci2RThFPQsypDe4QEwPvXPp+FW4MBRgKQu7URmuppqahXVn9s4eSPzgftuWKcSteU5hRQzlcCu0R5SutMLYsM5FSVVcihPC+OlA0GUrfLZVs27YzCcBYhEIIYSdSEdtW1ZxSilBlWrYIQ6m5xJIaFNeYgSxAWiukQCOGNUYCayhvUghRYkEZK4VASIJpGmGIxcMaD2quGWW+W0PAEAfDqRNIsDgCBL4taxiZXFkclBWTIpCWpgFvKgUQMUJVUC/sfJt4avBcBhACgIIVM2Y2bmSQYhBkq5CJR2pBNWnHQinr1nQEmhCMEIDWgnNNkAkNsfScS34RlQ0l2iHi1V17ZlLEfWzH7UAqBAqQ0oQMBuBhmdJAAayIpWwMZUUBC8QsyyiHgSuxaTR4jEXBBHT3Y4+f+6VFkrJoZ887S/e+yWj/4/DPR/Xfven/ZYr+u6EIbihDibWJsFZKCqmVRgSZBvNDV2tJDSPgPqVYKoUAYQC/VgMZmphJr1YTZTsaFVpjjEQgk8zWJhOC23YEA1JaGozxwJcAlBGT6Gz/AATeyOrVvqG6mlu3j29duGpVfSTI3hdvn2mZKrkSuB/gSrXW3JrZsW3UdYVpMxHikItiqeoHUimdTkdnTG/etGVPIAUPEaUGwUiBJhgwAEZIAyghKWXADEYwJUgQlLc8QqVNrZSVlFwRhEEp329E9bct6N69a5vd1uPMmgdZVB3K8c5IJgjX7RlJ2kmh1JyPHDYjbvJKniEdDo+EQ68q7EfSUSuBgU2BscmHNQT2UNnf/UkUbHhLrH109tQ8/9Wq++ZQyYDkCPEizJxnNM+ak5oYhQSthqHcW0K/vAosE0wzMVTVG+Yu+ehRML99e0ctsteb1CpqpdyK4Nls1MzL4UqlPG4a85sPasEHTwXlbTC0Va7fmx6t4lN/V/mYji9P8sBNrTgUKh68sba7peOONc+etf/+19x9z0fPPOfIQ07+x6tbxrzH4xNFzeJjTiT0vSD0TIYJgonsaDIe872yNKaSVH5wAdT2PCumGsRktLyl85A/qGZ//rnf/TIAAHzl8dO+s/L8Y6zztJ0nqmGD27VhqwBNGC2rgAc0F5QpswAJEfgUG4WJEpbaNrQfKqSUQBBXiEYp8iEqgRAEUoGQjEIUKZvQSo6HwBUNGQOTYaRpKGQ5X67HeSyft/SElSPPr935fu2Yd+DJL1yAtJGLrmsqddxDl04/ckUqO/zWVz/9xXt/tqN/26mHHD1Vzvbv3dOZabZ7WyPIeOXJF7cUql++/G4AGJvaGYl05lmtvGszbU5kYo4ZYYiL0sTg0qWzZvY0wf1vvc+krxhAi+Fgj6doQmSLMZMNX3lv/LMnizYzPlkDJwIUE8NEimFKhRCcS6V80w1sJQBj4PvyYiqTltaW4ygheKFQL+iGpfSqJSmloxGllGIMXoVKDRhTABDKw6MGswhmWkqsEEEENNaBRMiGkEvtaowUxRKk0opgYpaqKPSVFAAIODCThSG4vl83W4amAilYQVoaAFNNkOWWUNrRFU9WPR41oSYJ18a0mW9rwDhq2L6oRQT847Zbbrn+hjMBACAQoBE0m9FZvR2vbN/WRZPUBCFCDRghbRhEKcWYqTECQCF4+24G1r5SHm+8+WZaI4Q0wuATFkgvqoiMU8nD5jilBqmWeQCM2YgDwooEWmJGlAJGHAAf9qU2HdoW2VmsjasSHs1rj+effP2dDNn/oeLa/x+D/2sUfXHX92s1L3H+rwFAjudFOwWtpeQEk1gs0SghqRWjWAsI3Wq0qanm+5gyDFqL0HFsih0O2jIMIYVUimAslZYaGKFCSdOyBOeUEoRJ3WrNOTciiYld/eVCOcpYAUJOIubC/dxS7pVX3qxXwqrvRACYGhxy2jtHRwt2U8ywkJAylytOn9mOCQ4CIWUoOAgVGIYxc2bP7p2jPV1tNVcODI7Wwz2UkhpjhEBJqZWyDbsUhLpSNcsB6JBZODFUMPMlr7fZCjhUPAAMgDBpiEf
e9Q84ixbzgz70q6/eXwrb//bWq87O7V/90Wc/+b2LgNQEFj//y9PrdydT0XmkNDGnpf2sOV1uDY2uDbK5rBVxlxwQJNqmONlEmkzhmHopiy8HYFEIFjnmgYYoSXrw0HV77v3yHfO318Ka/tDqG6K6KI0RFEv5g5MA2BEWHyyy/r1T1VIkwElt+KBB+tGocpqMsOZbdoeMNqPkmIZsPh+24v2zi1e1zMrBAW/B7o3s6eLEA8XWz8wsH3o09ZPphc6bpdFffv/KadPsL3zx0pFx6DOCz66cv//ixWMvvmGxiu2SWuhn4hEfCQFCGyQPNeQgj+TmReO/vXBlYfez8ZYP1Sdn7kduBIB5jQAVAAABAVx2PVx2/b9LFiA65jAvlI4myNJVFwp5N5p0/CBE0gNsuLUglIARIgZiGjypDV8iDAEGjDRVhGJcRariIxOJRJwyg+lQKgpKaqS0lig3Vah3C7rlln8siZCtEr1rUO8TDBS1bdZisOL3N/45+smJi086p/bKmvZLz1kwb8aC+V2iWrOj5spDDy2MTznaBJsf/YkzD7UxXH43ACw/62NyLEdaIlAMAiJRNeRCCdejLVEhRNOMrvojKq9dX/bKTe0pUa24k4XmWTPe2iauOuWCU799bW68Oh76AoAgS4MfAShrAMAIY4UkSE0xUQoUSAJMAncQQho989rlf197FdQYTabE61v09Q/zB19DLliAW6yo71csoidJ2IIgSajHYfOXf7rgtkv19jHfdZRSSimkNKWUUmpRignJe5xSahgGQqhuJ3EVNwwjLE8AAGBECEESAQDDYDAsBAAlGOswDJRWmiNKKQhweVKbBmVIy0ArTnGIkdZa1qjABCnBDUwpxgbBQgikiMRcYomYwVVIGXBRZdTQ+xIvWLWCEAbQmjFNQChhSA4DNW3bkIjTUAEGGbd8L9yXhqTBd2vYIj0+fvjqP/513cs313/QHCEYC6qjm7aYoEVQBdPCmDDDSibjCKRSHCEihNIaytuPqWePiVBN7kPHsqf7GMNYh7bMFQsGwtpQbhEiponDwHMFMR1s4irXvCoRSEwAKRsjKkFjTpDR8PXaGdtugW3FKTvWAkccUvz7kwDvOxd5N+v6Pwl6ep/h3U/8fzQPvPcgBsfikYa62RaND9g68HyMADDmflir1qqVchj4jmViDLxaUYkkoYbQiiClQFCThVIjjAIRIoQ1AoQQxghpQIAoJlopgnG9Vh1jTCollcJacs0NJLlUXEFfswPFwZUrF+9Y33DdeX6lTsRTyXi+GlpW3PNr0bg1sGfYdqzW9pQUyvM9KYDLkDEjmWSloptIRADAtmkkEnH9gAKlhHClG6HXWhNGiNSI4NChBjNACqoAOtJ21QMWeFRq0/B5GKENl4rI9G7//tVzfvLXb3zzU29kiiuOXRA4y9HU8IYXd6QS0NLZ1MNSqqdFRbAzIy7CoZeH+kB50Zjq7gRblsR4eWrHwUwtrXZkjbBA2grVjknLnETiTYhN+/k5/YvmYmP5B66Q4SeWr/j7C/fO2Li7u8u0Jquc6JA6hqaDqmBFPKO0y41H4jWjqkPs2JpCPp/va++oBF5RhEY0qrx0sSxCiNQobxoq7YqJROsKu/uo6PHj/pt7obPJK5dCnJ+ZmXvDsrYv/PVmRMSp+5PM0p63/Nwr4xGB5nfOn8xOTGYt0tO70KtVZT4XNRJeqRLU/OZkpjTOoKdvEw4ShbEj+v73sBQbVGiBtAZGFAdmkFKt5sqAaCCahABcM4mAIqW5RBQcoFwjKbhR7xwB0hd4wcGLS9WaO1WoZYvFrLIYIAImBgVCY4z2VR3aNrJrYsJOtFAY+KdRKe8BWIZvQ6x/LH/saade+Ldb95/5vS999dPHNcfHt2+nLGYb2MB+YaqMaVyNBdVwmEQlc0t1PKu9vjcgvtk/RRwuY1GDY6SQo22/hogypafrr2farInFw4oXg4jRm1HRJrv86kFL2pq7Y7V8CRQogUGHGIGnCcYASmkl6t20pJIaYQAstTYAXEBJUAet+mFQUiwf0A0FOntecPsSY+sudu1j1VuejtfKDEArMAxQFQi4aIpG8UO7ine/lv7okbAjaxkGM01Qyq3WPM6ZxkThtOMoIQLP06hhJvGLuaoMYywNje5eWoKo98okhPjCQwhZlqWVooR6ngeUKsARlWfKhACFQnEpA4QwZRhjnSsbBhM81BoC0MS0Qw2EmgRzBFTUOMY4DFzbZFxWkW5I8dzzNME+SM0xRVT7nMXsmoOpgWgYBn4Fa0V9HPr7GLCIKPBhEceBw6957Q25TzGuaIQ1abI0BxT4siWT8GwilXa92q6BsZCHQimpJOehVsZVl1+xDQAAMNZ4XxhXCgEC8BB4HGXikUDIAGkbI8ASMdOJsKwnbVd2NCX2ejmMACmkDR4KYmMMNNSooQH/6eXslMksJ9w5mNv5x1c/8OsvA3wdAORzj8nOgqiOmaSHpGeUOxOwe/iKK2783c1/+daz51Kk5i2f/8xTL83p7Jy1eK47tmXzwGi5QKaZssgpRUJoyANKIZ1h8c8vuhEmHoMl74qf+M9xwv/nQU//f5jeXNi1rR7EIsd28Gg7oUY1CFY/80KKCi74jN7ZTiahBdJEW/F4wENEDYaxVBoRJjXCGIGuG6Q1QljruuMZSa201hgjpQFjjFG9UwhGoCngEIinwcG4BNImJlSqqlScPa1R5tCmDReSEXMg71VEOYGcas5PZuyeaR0iCCfHC0AtpaVjG5GILaWQklPKQk8opdsyyZGx8VBwgzIqQ6I0EFNKiYWgDGuNiMm4VBrhbAIZzNAMM9OglJYKZcwik2FYj+o3zj189knd6IwfwK9+vt+Pzlr84SONXdXyNFqaeHRk11hlq3l8czw9zaxV8yEoSYgfi8VjROqKaeBqxeeh0hkjUIjIiCdWONv9TE6vGR388y13tcTxPx6Bsy86e/r8TkhaCxe0jq1vemHT4JdWLh2a1LGEyT0PGVZfokuCDkPfBkVSqCZUPB4tFouReIrYsWiaEEJs03IxjrYb0UotqLo81dyJKYtFA6kqokOeur9bKoAnumcdcM3199+8Tba2J2v57N1vTsVZwXocjY1uXTJj+fQI+eUNX3Ni5iP3b+1sjZrMT37syOB12XZkhGY3t8yfXUp1RiYonu3Cwmmw6W7AkSIbjXt6rG3aKzc9dPr3fgMAyITnH7/lgJ5JPZLjJX94bLzKJdNESoFMAI40lZprqUGHilAMSmsRYiAIAQIlGVJCUyQRAFPApcagKQaOgGGSMp3m6Ymwyw14UJ2qTo5nNdI2aAewGzbo0sAewO2yy/kfEGQnSwh0BSFcA2368NUf3/65z38iW+PUopb2vIJXVVQybES8mq4BEKM4VYuSeqxfrjJm2saUP2kIM6G4H0Ixm29pakIlHXKXmo2I2vLYaOgH0Wi8EFaUUt6W7fMWzRFf+8rjj+9J25QKxJFWCNkaMQJFKREQhCVoQAAKYaSVJgSUEBpAEw80xTJBxY7hUcbMjq2T/NWq22JnLj8/8cUPwq3PiRsfhm
yQ9FnZUravWIijoEYuujp9aFekoCs8qESciONQLrkOfdDEB0lTBgebRMBuZGxasRhzA5oxwjCsVMqO49im43tcayIUYpGAIoZBcSW5liRqYIRASOw4RV9mtGURP4zKRCymRjyuTScFUmhmR3wvCLiwiWYYB2HZl9KyLI55PB6XLvYVN23Tdd26Fh4yTak2NFFKKSk4iGwlL7mK25FAaqXAcBwBElvmPk7rAlDIK3Fqe/dDE+MDjWAukFoibHJEuBeEhGwZn6gFhTpXqRM9DRrqLAYFg1kTwAcArWHR3EY7jTgB22QVod0wVFhZlBIsAxDFGo8D1VNVhkiSmr5bFUhFFRIAgAhmmHOJMDVYIz6lHejcAPzn15658IBO1HzfwzfdBAAA46Xn2ud/FNrm+t747vue/PUd9/3l+dfOnHXktXu+0drR9+abrz7z9Ist0WhbKgrlycFKbaKADZAlBDZCnjLKgFIQRJgz4Zev+OR53/379f8M5f9V2U3vb1rR/22AUpn0AasAHgQA2twKmFJCHbcQMzgLZdywR/fuhkEstUq2ZxBzEMEaNGiFECCEtK53F3z77dA7nxoQQgAIQIFWGgCjOqYiANAGrQIQDL6E7eXcsvmLVOA+/fKWgwAAIDAbHsRNe8cWLZhXrIYRA1cDL2017dk4PDE+NXPm9FjMDkFSSup5RwCICyWE9P3ADwRCmCLQSlJKKAYpFcJEIkRME9k2CmqYQGR2t51yqqVaCCKaSnmeRyKOUsox9sUZbR1IL2g1d9w9sPg841M3dfzCLH5kZTJnGO24e3YfaFGamgDTjjCGgtBpbgOu3EIhHuvyglq6LUVsExETsKnUJKZN4HtC42UHnPFS//Q//+oP537y8ulHLg89EtNtf77v8bHc5CHpuOzo7S1qaUFUai/wfaSDIIglotSk1DCixPQqlbae6QCggsCgBmNMhJwJqUCbbU3ge4ZhaK6wgGqumDSTcssE7owSTiEk91x/yxV33PThk09F0oqayYo7uWbT5mOPPfaFB9YNC/PvD/e/9tzrz2ze4kHTGcdN/zbteebB+2PzF599xsrh9a/f+uhLRy9cFgzuhBFdHdykkumkj8JovDOIX3Xtn04HAIBmBHPnz5F7BwLfq5WrfigQRVJLgrGQCoFSEggmAKCUAqHrkpvWCgBhhAhGmGhSryCkFMKaIqyENm2GGARljwces2jMScT6nPaFc7PbthTHc4qRRLoR8HXpnV/e/PfnB57dAPB+7+sGegMCJU1MAy7++Mc/n/+REwvPvSWntdLFfc5ohXoqF9aiiPlWWU9b3FpoOEp7Ur0ydFtbeTnABKFQy5bWJoyxF7iWYzKT7Ru7JtSoF39FClHTqExsnXPs2YvPX3zrp38ZZ6HLpanBA82RBAAMoFTdi44BEAEkpEQIAyjAkiuNJQMN02d0i6LylECZaFxAsLo/UIH9qSPoD0/Ddz9T/M3DeP0EAHBUNiK0paa2/vChGXdeqdeui06VpAkolkxgpvyKjgoOuiJCioSjGhyYIgOSdugJM5YyqB24NbdSBcCmiaWUZrQXPE8FnqG4BkQEI4QhTILihJIqpMRQlkUj3hS3E3Efc170bDtCCGNMGYZlWEwphQnSUlm2HQYBxkQJSQhhhKZSjdh427YxxpxzjKlhm8i0c6N7m3t6vWyuFvqOE5VYWZFIyPdpwAlNSlLmlO4IzTXP3ty6/7l1l4bG4KiAJlJ5LIUX8KAgG2igMQZVJ0IAWgPByAsbq8swyrQ0nDLJiF3yA8RRiyZ+zfex1lojDWlGiAJEiIfJrtB1fMDKEIAZ4tiCinARGFRRit+ms0oz1d3WvHFor9S5N3c26ha1L1o0PBVbfdvjCVL73C+uOem4D2x+/eIdi9YzgZWeAAiT8daOFJGepxId02e19vXKp154ZkSgpNJa80TEsaiRK4cSQyZtV4gTg/8e/rXZTf9ZM35v0oq83a/Y0w8CAHnvF4e4bDLISedek3LRI1Q/86NfHPSDb73PhMNPZQQS9Qhqv6UNkSowgyRYT29mV/+E0KFtYQDiKd06vXtwYMpAGOotfkEjBPVqVm8z4H3Ot32p+Ag0qr94/RcN9S4nSCusBQKsUAxIqexBVWlfHHXwovr/vR2kPbcnvX3zGyNDNB6PWKa1+q03Ui3xg45doUEHbhjBllSKmIYhdLlcDcOwWnXDMNQKADQmCIMmDdEUY0AawDBM0KABE0bteIoHHrMjUUbHh0bb+vocGgWtYZ8HOrr/NHOwAH0n99W/f/Oa5DevAYC3c/zeDix3/v3BfyiNjvcd1O/79bq96I8/gD/+wAAYgnozPYBv/Bi+8WMAIABkX6z7f4C374z3DYMAlJ//TSUfguBTk5OWZQmuDMskhALOq0y0NjDevfigq395Xfecef/44x0dXe0T4yOhO8EM/exrz3zyzPPHatkyiz383KurB3ed9YHFg6MDdz/9+t2Pv/6tr118zfd+9ehNbSpe+OVtv48azddffOGyjwWug6BSLFeiXYlwbONrG4YbbWdOPOKAeGU1r3lBGJaqrpSATayVQhhjpTTSBLDSSCMAwAhAaQ2wL4hUg5ZKaaSkBISUqqeOIaUBJJehixDCoCUP/ZrQvGYaRAeIKVyq+SjeWJB5J5659S/P/30Mvvn2fL1f+7qu/WhQgOCnp57z5XtvKRaqtuXYF/z23ctXr3Ae/w+LetgnyLvO/4e8hreh+YRL/iuM+Pndv7py41vBfhl7Kl8NADOEMxhNIqk0AAINDTFHIsCaokatG0OBDyA9saU0OTvpK5NSQbWwTdrSYvi18vasvSPrHHhI8s5lYnKg+rM70SND8YAmgRT+8cqub14+++IL6JyFsGe8kB0YzU92RlrLgETMSzlRXqxWpkpJAACoSBFryQy+tq65uZlS4CIkWDPGpPA9z6tUJyUHRp2IkzAIDUM39LMIK6p6KM75Mg+USRkhCnnFnKYeZi0135OixpiJMWQnyxqpZDLJRRgGQT2ZGGPMGPM8zzCMuuRSKVVN02SMSalqvisK5VQsXhgeicRjzem0WylXfDcUnNJ9TWnKIEFDMzbTzZ3D42/SfWRNa9Kbcvpmdz2/cXO1RjRWde0D9D7ui3E9F1gC7AtYhtnJNraPToxKngskIjjGKJUqShEjWGmtQlQyoOBLKTTBkKCWCaQcemlNARtIcooIBb2vuBZ89xeXfutXlyb9yVpMrPrUpx7+491QrQHAJZc+8Mt7zg+98AiUemzXKz3TbbT92W35rKBxHU17HMeRDkVi894dMJRnPBrpoc2pZH6y7AIICq3NqZmLFrz4+BOJELf9claM5N5Bs/c/u+lfpWH/ewjXvVKnoQ/e++TP7+tfZSHbbX/VyAXYOPuHlw28+0HvD+FQpFRz61vdchJUc62UP5qtTEyZCmEECrjgwujuiKTb/O2jkRgGDQgQAqXfdcN/H/eioVF0EsG+8e5TiDUAkFDFEMtpUFJhQC5BlSCIpeJQayS0aNngWcxh8ahF5aCuORMFLxZLHHDA/gpAc2UQQ2kIA1EqF
UqlShhy0zCDMCCEEIQ0KKQ1xhqBxtDobQ9Sa62AC5CKh152YHTXnq3z5i4ybTMYz+8amohHooZhBZ5fryJbevJVYaf/mRT4fwdUq9VULI4JdYHKWpCIRWPpVLFSLo0Pj+ZRe6QLAvOGn9+4aOUhM5vSP/ziV/YUt4RT6Ia77vreD34x2D8JGBVU6fn1OwK+zbJPWXnYdH/dnbc++vR5Z//MAb5jcPS5m74ePPl3tGAl1EYyfjkZGlvG9sRoRtqWxwSOJKFWBACztZV7FSW5CDkXEhGEkEYEgQKKiNASQGNQSmuCkEZY15mW1gS00lpLIAD1wHtEEMUAgAyTeCEn2KRMUA1c84htgm2AMsFgMsBaqa7eRpiSP1Ya3bSzR/yTqXpv6wKhuoCpFABEoimI/JdC1L8YDls5feNbm8f9cLFt7hWBz7EBGCGptUQaAANoSQAkAqSlAgqgkAoIIVIq5HQ1p52yYRkCxZgpfF6sVtKtbZHWNi2lO1WM4LiYebC+Z//o8ED1xof09S+mK8B+98rkbx4TZy+3vnxacr95KbVY+5WmUhYNC15xA5vqpoZIlHZ1uHWwvaudUiolp9hgBNchFjdroc9MahiUC28qXww9P55Kxls7oFiMa6Z0JlChr2s2s2xI2yQmDNeJW8C55wVKqVgsppGq1WoGxiIIQSnuB4wQkCpwPRnyOgO2TFsIIbgPABhT27YRUhjjQqnoTUxEo9FEJFarlAnbx4A1M00Lt1Xk0Sed9NLUszHA9WqrROum7nk4zlSoAJCuSzj1ZUcIALRqINeFX35CEwAJANC936ypPXvrdx6rCgQIKZQLfADQUgNgrKnCIfUAIQKgmAIbYVAhQkphM+TcYJZWmqsaabwR2DtHF2VmVqPq6rMuOvbCiy67669QBQD4wRX/9oen785KfNKZX5ozvUOMb7w/sjGW6eWoHGzdiqrlwVAnQm/+4qXZYjVqx4rVQmlyWCIttUYS+9ka9iGWiVUmqrFL9sD171d2038J/1oNex/EZzeaHl1z+0gHoU+lgkEyTrk23HDwn9/jvSEcpV1DdF8hjnXPvhwc2GPalvIDJZRCTEkREqhh1GE4SoIQCiGiYR/iaQUIIwRII92QEetCd13X1W+jZ31kb6NrSYeGbUsAiQG0BqGr2WIsxqqVRrP3WlCr24/8UjGOggxRBuKOZU65Jc5Dhg0kcM0Lc4VCrepWXd80bYKplIpRpkEpyQ1GECitBCWYEKy0AgBMKAVUrVSiDuJCVrITyxctLleq/bmRGbOnA4AQIj+VT3c2gvpVR2R07+Sc/8Wp/NfB6BM/tW0bANxKNZFIOI4DAJ7nBW45Go26vs+YaVmW53nRQ74OAC3pVKAlzSS7eltGR0cRgJ1Kl8vFjuULolNe5rAPXH7J1UWLNHd2/nHr89tuvdax4Mof/gqCfzzwt8e6EmlCJCh5z50PfPTs43fc9OlZn7hoct2mIyx7v2B4h9nU3icOPe+zV3/89K98/kflKy7e1r+zL9o+b/lBJd/lkfT4rrVyn9g0b9l+lLBQgu96ABgjBUoBwiA1oZgrwEgxRojSQilAoIEoDVwIhDEoRQAQwkorjJBCGiEArQArjIjnadu2eNUnhq24h7Gtqn6ytXlw1xhLUDJ3Xn0ANC6gtWe8fxig9t9M7Xu1rwHq8iYGLesG6aGRSZ4034f2LS0Hf+Syd3396/lX7FZCagwIIS2xAoVAagxKKaQBOEYQwRAIoIZGdPbe/jcc1zSbm0oUIUybYkko8KGB3ZZjZ1iTS6vFTcPtOIq6pts//QH99A549Q191T/CDSL2l/X+X9+qNBvkg8vksvmZ9DR52oFacuxXbdmYt2oCNLKiBlVKUcW0YkqpkEsQ0rbtCMIhR4EklESa082gAk96uamsQ5OKhsLgSBoRL2YbySk+qUg2FlLOeegHknMAiMbjRjxJaRUrpbVGCHmehzEOgoBSWt9BdVBKAWCEkJRSKRVAmEinLB42Yxb6fnZ8PGqZlOxjwMB5QMkMg5TL5eVHHNkCj9ZPt3VGq2EQRyx0MSClECBV1ypAKVVHG4wQRjjCTaYbaUiEutu3DRwFAACzYnY+CEpSCYkRwoA1AkmlFIpqIrSUXbbFpHRrLqEIKxCmRkwrKbngdtwamyjU7/m9P/7hptvucuOyfUangew5nd0wthUA7r/4p9f+7vJf/eRPL65+4olV9yy6tE9wvXIJdrW7e9eopgRAYoHD0LJYMJ6b9KQnKUlG46VSXiPNlavy/blcaGC9KVs8atsr76Dv2xrbewTvl4aN9tXT3mnUjnTi3zu64801wZZqdfoMfF1/Fcr/vfT+3hCOvFuwTLMDAACqwhOgGMV+WJUUaS09ACoQVsC57w+PIIQxIkIKwHV7cr2lUSP44N8PpjEi3eDM+2zVAAghjIiJCAHQSiEMNiCv5oGWRqIRjx2NJesHVIKUGiRIGRDLlEpqLKVSuWxlYqqIkcCEmMzQSgLCAEoKSSnWWCslKAJCqNKyIapqJEBI0NS0ha5Rg/bNnkYMHZQCw6QK5O7dux3HUVqF0q8jXvvsGaTjvyfl7xmYANVcLhpzkjEbpNi6YT2jZjqdZiDCcqWUy5mm7ZsGgkaoS6Wct6Oxib17BaCWlha3WilNTqabm6bGcszKqBq6995HV8xZsvWNdRd84qOl5atuvOeRi75/sQDV2mwde+yBt97+qGHCWed87pdXzt61Zcfsv94X3z31+LiLhGmRamzC6Wlf0ZJmX2HGoQtnSjVe1gHZFdORLl/np/VlvvvhpfDAGgD49AkJqNSkloHHMaFSKgRYSk0AQGkpgTGgBGOsKFClkVBAACHAGBEJ0Gijg7TWUiEgGjBGlkVcT4yOTcyY1SOVUkJSySVzRFh1Eh1VOqM9ho3JhtxGCXHmzVv96vZ38PF9s5wh0ABKK4QBAVx5ytkjUKz3s3mfwbI4KgMCrDFWoIlWGuptagmABq2J1gaiHPiSJW2gCn3t03RbGsqBEwhAJASOCEo3ZZAbVpsnzZrdhNoks2p7xiK7B6AjkzvztOixB8mt290/Psn+vjFawPSWtf6fXxxrsdm/TYOZLdHlM7iJ67q/M1UYdxCNWBgIJQYmjDHToFCtVl3XjXFiWDjQgeuVkVaGQbFtGEYsnN49Vny8iJ8ipc0zcUt+bHrnyq8BzIDRbbxWDZEfjUaCICjmcxERIoKRBiFEIzMKiMaIECK0qu9ijLFpmqZpA6hKpeL7nh01K5UKIQRACcHbOzvAMtxCdp8P2CSeEGnOh4cGDi6sTOyz/C4LFgzL7eMbRTRqIGaV8hWNtAaNtMYAjGAlFNYag7r+2iN/dNFjwBUATGaH+T6qLrRIUZSmuKLRuMdBEgwsBAlIJikkDepoWQw5tsyoaeb8GiG2wAhrYASHfnjrjb/6GgAAnDrjQCudyO7oX7tu4IQffWDRoasAtgLAh6+76rV7f/mrDx8QbU1u+YCQRrSpo7OwZ5uNZXdzfNwLwJPMYLl8fmByJ2iCdIhAdWda0snYnrHhvnlzcbockMAJ0ckr0PM/
OeoDb6PVex3E9J5q2O86PxGk6+bNKRM8r3zhX3GNFnsRJCfBenfqxPtFOJYcflCwL2t++RGHuIB2sCAUinhIglIUW4pUgVPH9LDEoDECDQoBariB684vhBokD739p3HQ8BA34gMbV0kpCMYYE1BKgMYa5SrVaVpD0CBYotJge8gysY8pIgRDEAY2ANTk8Piw53qEKqyZ4AEhRAMCrRDCgABrpAlVQmiMGll5CjChdaWcEawBuZ4PShV37zYtsO2IrezdW7ZHo1EstNZQK5TqNvm9r22Plf8HGLB0Xcx5eXKqVCpFbac5Go9EYm7ND5qMRCIVj1qVcq3kebYdqdsJKoU899yWSGLPnr3VmkCMVjUfyO+Zv+wgNn/x6R/8+LbJ3MLu/cAiO1avmXnAYiNGvbwyERSm/DWb3/j4eUd1t/U9+uRLG9/c8YVPX7Bw/2nHf+C8Z/c+NfrcM23d9k++dcvSaambb77ez7/I2/BQf3bBWbPLW8u6UrBCSPakD+pq4B4FgUKhiFJSK1CAAQBphLSWWikpkcJacqlBEUIRQkpLwMig4AVSa2kSojHg+v9QTAGBklIIh5F0KiolxxgwowayACRShNL2dEdLEymUhrbXix55Gzd86vJvtnZ1wI+ubUzl+7WvESZaNehs3betECGhgvcdDCfa4/p5ofJagNb1kA0ECiGklLaBAahMJrJq2TQZVHJbt7QbSTWaDZX2BccGjcQSjFBmprxKVXLtthrJigcTJRqNuhCN50VmzShYspaZ5tx+pbt3gN71PHn0Lfni9taSrmS3JDZvEfc9l20n9R0kW6OpWsgU8f3Q474QijFGEHa9Ksa4VGHc4TLqYTuMRyNWslk7SUWdzfrOeCIB8kCanEaQnGTuSR849YTlXzrpADx7xkyTYiW5bVFKI4DBDwIMSCnlum4qlUIImaaplPL9Rpd6CTIMeSi4UgphSDWlRBj4ns8MjBlyErGpSmF8LNs1rbfBgGsBb7btDuw98vRDOwaeTDsIXA0AH/74Tz//h6ONcTjyiOWbBwcLeY0xUlJhDAal8UikVqlIoYjWCMsS35eJNeWRfYgShIgwZEoRo0bFQF4oASRQSEqjWUsulYsBERxq7UoPIcmQCkEqJbEkmKO2RA9AFgB2UdR/92M/XfPsC9s2z1AcXq1nPMG1P/zRbVf/+vRMS5ArZZ778KzDF7713Is69JKgEi3JSkCwRpOVLFRyGLRCPJZMZJyk6yM/wPPaZzIcKY/smNYejXEOm57ygQI0cPrx449bNW9ZfHpSe1XhBrjqSQw7tuzOK2ErzDHRkmMERIMBJJpMDldyZzzy1CEg7/tfRNv3WsPeB6K8un6QrpDeCD7EENs9GJbGbgoYNOyrtv2+EY7CwAhEaT3oiZS9pBHVOPQFrnBkY6gKBYgQgEyyucID0ApAAdL1pAatFUYY632xVXVe//YD3nGPoEZw4D5QUjiGoTVQQHUO7QccKfx2IU20z3wkkQ4RBqIDoTA1TBka4FpWqICCMny/Zpgs5EIpjTHRWhKEhRAasEkNrVTIlQYKmAgJCGGbYhWGMhQmNZQQDouEfgVUUFVAqEWoRRgWQgBp2OQRGNl4mPkvZva9BY9ILwxMm7U39yKERMALgRuqUE35EIDiCgvVHM1Y+6K1h3f1N3V3GMmEG2NzDj+wunsgu6N/v3kLoah/eeIFb67ZyFyYGBltTSV0b7qno33nwCBQJJRDAW1cnxvY+cyyOfPWrN1xzNGLP/rpY7c89Po5H1x87x2PblhAD//Ux59+9M/0rae4P2pV0O69PXc+Vzn+nCOD7INsXlRWcXbb1KY16+sGNhBYaYoI0RoLJQjBUmlAWCrAGBSA4IDrzShBIgKMYCBISs65RggwQ0BAIayFAgDQmmKEMcEMh7xKSBwwSJBaIQ5S5T26eIEZyf/jD7d/7kdn1J+fL2SbeWruSZ94hwG/X/taKwmAACGtNUYINOweGGx4Ad9fUAonUowUw0AoDhADIilRIEyhXGSmrQAzKFeLQCmuxfa8udSc2Q8qr7loSmdA4r3bt1dcb2b3dByKVEirlQAxs0y8II6cYsCxtdPWLIhMN+PkrbGgkK0dNM8+8/B4sTj1/OvNPVHBkG+SllQTLP8FAMhKTYWoUhNhGJqmSTRIz/O4G4Z+IhlPrzAhMg3FZmHWAqA8GMlBfx7658LxaU4moLAx2L3bKt6y8fbOc+JnfDA+8fAogA7DUPPQcZwgDAgzE4kY55IxphE4mbRbKBBKkMbGvmWvVqsIcCwWA6SFEIQgAGRQFngeUUbAfekY8xasoLFYg/xcdm5P8e7y6sD73dXn/dtzLzS/MVY//8LOR7S7ORuMIgzxGAekqaIMVA30wllzWvq61697cXiklsY0UDB7H0GsBCWqG6YqgwjEqdQMkEwrPAXSxygqdZyESlBpUi0EAPBQkgAnmlJuIEKfEMMKZe2OP/zYZ424vL/nNjzzyPbcaCFl2aOGs8drREu9fPWvf7BkPoq4Yz89MOlER7e+6Vaqbem4AcH2yYIG1t3bg2UYej6LpBwrDQhxN9AIj45u7e08sH/7UGtMLZjDJvfUdlS8kvcOyV6xeHq8PQMiBKXK+QmDGBNjBVeKOCWApEE0wURpAKECpMaLuQSiB0aMDb4G2VDvLrzwc/8/3v47TJKjSB+AIzKzXPvumR7vZ72XtCsvIQkkJEBCcMJ77zng8HB3HOYwx2EO7705rJAEyCAJee1qtd7Pzux4096UrzTfH92z0vm77/t+5LPPPj013dU5VVERmRFvvK8STEEWUz3KmQDBsww/+ZUv/2Wez7Oj/4JXAHwaAJrAklz0Bp7QdEbC2ODYgcmpJ973l3Ic2bhWWWkbWCNO8/3D0DwogxBAcKAERAQckWRyWbewrCBCohCUFBIVEKAIBAAUtMiHEJRqY61aobddKlaEgFRAsLUYBxSoDBQolKJSAgUSIA8cj64yyLphu6yIIpRhpLhkiL5UsZgFCdWBscLJlc7O7lDEPbtuMo0iiZQQKBAYJYaQKlAtTAMiAggJAKwly0C0WsMb6EvwwAeDLNeLKPSRri7blTShUVRu1U7qq5nwrNmdyLZvx9nx/x6aTpg5NjIQGdQMZNltqkxswErZEKIb1s+UtYFOpUeq6lhh+3JtfO+Hs4KGCwvrXaL+7tv01keHE7lafv9HHr9vt5c8peove9mNj//yUS8aeMPWV+w79ggllEQo0Hnp62/43s/vbzac3QePA4N77z598b3P79Zg6uCfvvO97/9+z0Nvf+kLx7sy331o9rILBgE29ZxzzsQ3vgmlQ7ynN+UnJice27Trmq1/9UzYfTMAgEGJkpqiRDBOJWhSRGhEQtOh6iuBxCRKKEUole2VmVRCMIpdcRoqQakEIABEahpXgaFR4AjMCKSjIgwAaYA6cok6OpbRk5/bf9/hb9/9+d/d9oYfvr91HRyh9dNovG/Nv7lNf5FBQSkFkgJKKpQAJNsZB5kC+G+N5z+FTJ5953+FrIT/zniECGIJI2bo0WK9qKCGEoTIMNQITWk
BSsI442545tDU6PhI5EUWY00iuzPrP/ab72Sv3bD+uitZNXb3o49yvX5Jx5qySG7osTrd/P4ju4d37GR9wUMHjxuZsUs++YaXnbPjn5/3OgBTSYb967KvPb9++ph0m5qUoWe0UtA2wVhWMtYbixwTY5CQsiPiiQuptmZm+sQ95T+kBufvn/t4c4UtljjmFm7adcnW6Kqknvtl9SuaGT1Ijmbn4699yvP/bvfP3/yjr7+u513W+AblnmHUYskcsbLSwKMzh9br5OTh7MbLX3H7T/+lML3/xpdd7xSW4uk29jgbt6IoIMp1bIewWL0SKalrvJweHOOmFTq6yCyWmNsNmXYAHj3H/NJv5tdCSswetB+bzHXkoFwBAGUvh1VbWgFEIhlLUlWLQegSkmasK8OgdmbL1oFC9WTTFRGqwVgenCIAmLlUVFpN6EHbA0ouGNGG+7MTCyUdQBEaCk44KMZsL7ABEoxVS9V4PGEaigvfQC3lp7VVyV27QBtgcyBh6AN6Z4vdyXxibnkybkgVbtBiom7bcUtzA8eJhJSgpJyfmWUU41asUFuIaXUuDcevAnoEYbkQVAKKPMwsRR1fvd3ssb5yK7xklVCzY7A/9Bo6UKfe9Jyg7jZqFdeghAilJDDAkKsIJCEACixkoYIDjvfkgioRIJjIJmijNgOESJCEt5ME73z9IQVKrjbKfPmb2wHgHTfeKolSEWiAkqChw8Y7XvHKxhPA7Ic++ox6JJIIDHQhuYZKocrGMZ/Fuid1SQAoAWiWo/z4WHZNV1izv/vn2p9ueuevAADgklfu+tW9J04sl8xAANKn9Gw+GUiYn/4LOw6x8fzcLgLwMQAwpSweX4J+Q4SRRIikaG9nGQWKjFACGlEEJCABQlEKoSRflRsi0A537agLoJSSpEWDpYAgciEIAqGEc6FTpiSIVj+KkkIqp+nmetu7zfj6/tYLyjTNMD2AVIxJlxvxHDjCjwKqs0qlCRrVk0kvkKAIKE6BgFJCBRSVQiKRKkAkSJRExUkrDceoH4WORw0moVQfGehN5bJk0cnomcJSlYOgDNWqwJ6KwqW54ujZi/WXgqZ3d6W84lKpVrd5sCE/tDi7eEynnTQepczYxlRYKamSFiStWk5lAQAg+4Uf1o8s4M8eZVHJZZHO4Uza/8rJ41f9zcvf+8Bh9tCvv/a9D15+y9UYTC+ceeRjX/pNB4hyLgO2feNo7su3v+/uSuH1z/rBfPjHl7/iJVuyo+/91B+GOzIvesP1P/30W3/z4+8/8Pc3Xveidy04mqqfGRg8L5arwGiqcfMj9Z6MVit5HResu6Yf3nszACxPrPQO9AYrNRaX1FEQoSZBMaI8SZEKxRUwXwhGAUEpITREAygIEIToKAiRNMIAlDKpHhhKCaUxSmVCgPTBMKwI6lxjoS+oSYjj0ZVg8MqtG/fcHhWLrWrfunM23v7db0SJoesBAMD59d+LsUyyKVwNLK4RSj0NhIzCch0AqkJkkCFRdbueSCapxoiuVeq1rq4ux286jhgd3Dhx5vDu78ymdG1FRI4krRWmfFJAbddeCFAJBFAKjUKUIuAquX2bHloL+r95638wnv/42/+fjUfXDF5vuMToHO3cNNbdrNvHDs85nlQMgEOopCnCjqS1OFueL1Yue+oV950kWy9N3RHjv3/atjc95blxj80ofO/7/2XHj1/6q8Xjh4PDzmOL9I5abMr/weP/XHEWX3LhC2MarX7gjYnB+Ed/+vM10bqHZpdue+C3P/7YBy/bsd3zG/XKshbK1ho2r+UL09NNepRGoUZTK0F+0UhNa3cu+d//9tfu3fSSvg9svv6qnuun+vkuwhx56BQ/VoXjNzfFQ83FF8Yvu4JdlO/P54S+oW/sF9sPxv3vzTcO7lrby8GeWak4vFZYOTG3vEyx+/4/leTbvkGdnqMLX5bLf+5dd1E5OtO6bA83KyNd+axGHb+ZYtLQtEom1h+sO/TwSd+68G2f/ehNH3njOc/Q9wS/bQfgF73oZZ/6yWdmThnj//DB6p0vWTrd7rKdO3Y4o8eZNMQGVSl7ElREUEq5Jj9EMh2NpTOFmUroku1rR/IjHV927mh9KqmlENr7CSmkRnXJo9a2wKvVTAADUAlFGKFKcUBFCKDUkUhkBtMCIpSUv/zc5xK5/P7Fk63z2BSBsM6E0d2d7OnPGMqBe2YB4AfF9hfBNbfD/3nccfbV0uF/PPXAPWs3/QmOtY/ICPVms1yslIolJdH3IgqglOKogCEDNBkyIC6XGqFScgHqHBM9lYSg3aTIAJKacd656/700P0Msyrw6VkLX42+AKuAYABDsVDxAFWCImeSKAas8uTpqlBkgAWEmyRkgK7EjIHQZE0ZggGUSBkq1AhnlFoaDyK02I2Xa9p5l8EddwDAU5/6dHbBs2rOidOPF37y83vLdigL9daZLzOHv/uDD33sK5/80WMzugcMpasyoAc09BAArcRbX30JoBYCjnWlN21f++ifHq1LaVAMBcnGUFdQC/m2HTte9LJXztx62/BAr9s/EkGSzDzsqZ6aNxfdd2r6rr2GTmy34V7zNk6tlwMAQOmuP/7y1w9s/tirE/F4tdlQSKiESEkBVAL6rkuRtEiVAQgQbKGLhZLtTtd/m2eGFrHyahiWgjOmIUIUcQWSKNQYE0IpBRoqUGA3vNxou3vBsWutUIyEeH4okPAoIgqSsSTE43qNa4SGlsnDUEpJdMP3A4oaCKBEIRNKMQBFlUJUFCggCkkiARxBYwSJGUnD0pXiYcfApqlTxyQPNZQdI4P1WtViRNfaGaNkIua5jf/OZv/fQNNnlnDtFVen55ed8kKY0kZziWMzk7GRNIIu5hvdg2vdy7qifTPpO9rdDdF7v4wMJCHNRKwWGJPgPVSvv/pT79h+2Qvu/fpl1HDytx99bOI3t/3kN9c99cb3f+FHTQnbotpkCM1Tf1ruz+7YuP67337jptwFx87MFc88Go9rb3nuDa//m3d/5R0vefUbP3LjS6b3/OubzvziU7qIbdDKrAOKK4mOK4bdRtB7zTOVWPzFpz/cgoaUvbCbUeaEus64zzUOkQJDQh2VVEABBHJFAaVEqaiuIaBQKCmiDATVqABhKR5SvxyxGOqoQeT7HDECvWqHzQboWmiHpx+czo92jW3a4M2c3rJl40Ubx7/55X95CwAABIG2f2ai6i+3AnD86iv9h/fL7Tuae+8JchnHdnoTHVRikE6kUplcLB2gb2R6c1wDBt6ZE055qS+ecRZqmbiR0CnSyGMDBBcMBiwEArKFKCQEn9zHh6BQaUpFEhWASDGV4iAB1l6+XVv2/5O7+2Tj+a9qSf/OeP4vi/LcpnHZdPWcCShFLUhK84IN448fm2gGIs4IBwgJq9heXCNcqvvve6AB6uEzjEbi0JrHJzfvnLXNj375XeQl8T0Tx+HQAYAOUEm6w//lHx9c8g7vmZk+OPMDjLzFPQfv9wsFasEtd8ByCCOx5372Y5c/6ykbNvZff82u/RN/ehMAAPwk91j38Nhl5gWCegenl3728Ozj9caZ2uG3vv
R5N9/1jin47VPkxgVy6Ljz+5OVstOl1aBYqVJeaCRzvWvNdZvp+J2l3Q/xAwbMXNuTGTov7tHjtxQfpELRmLNUbhqJdHJrXIbhlf9kye3hT95420u/or3sLRf960+/tf2crR8GAADjilEFcTv0U2szdVmjhFHITsKJE0ngPPH+W1895f/iQf8UM9lqBYyObdjSk/ESguxYKdfSirRqCbqWznf0zNcLLO5WqgFBqqgwJYuQglQr1XqjUd/Slxpam6vOz3zgvZfAZx4CgEBqsVUCDY3onEc6o1wBFzIRSzRcG5TSJeUokFIUSkgZ1wB4BEoRpFEkOECmK7b30T3Pfd6z4czvAGD3nW8YHRjK949BLBe4YMQHAAf/9ybyP47T922asI8/5j5xhNhBo1LnMtRM0276EpBSRFBCAiEkkqKqoMU/GoLUABhBP1KueMI/+gTGh0etWDqb7ao2AACf/HAgIiqQoM6iJlylyRAiHjUBCIOKkJULf/fKP9xw9iMBAVNyQmiGiZJQKQCUYFmhlqRUEnAjzdJ1Kg0NTd2gyQQ26z0dPS+98gaADwNAfxn+uG/fu1994/b3vOpNL337zXvvC6I2yHyFhGvWjDU1/w/33PXb7372R9+6H7GmooSRzgf1IvGCEHSAEBQk8pnS4dNpJrMaepz4IIwINaVxAf7C3HW5je/95leGx4a03951//HdY1rX/oN/ruvZzNCG7h1rUuO5nkQ6Mci68ggX3AMA8I53vfcj3/rjA99uTEwaOvV8QQAJoYRQqaRt21QDBYIyAoQIIRAQCQOl8Gz6GeAsOrq9D1YKQCIgpVQpyYWklEZcIEA6mXDqDQqABKkE1+dydetpsjb+HakWBhFTSifMBR6GgSCyXK1yRQdHBmamFkM3YNKPWdQLBaChQFGMQkVRSoqKKdnqlRJAlcI4pYJzhppr81RcF1wsnpzLWJkwo1nxRKKrlyzqYbNBzqaCqTa4btN/Z6n/b6Dp19z4zjv/+L2V00cHBju8yblcMjYytvHonn3d/QP+Snjb5AP5XzTXLqnU61/e+vrd6X5o+k1ux0PbybC+TTs+dsHT8W/ecbPWdf5NV1o715BF1PO6YQux7by/zVufKWaOeIX3v+I5d8+7Gd67VRte3vfNv/mnf77hXS97qrXtfW+4sTs199fPverDb3tj32VrrrhgG5w8Es9sCCOn57qnr3nPnXt+8ch1r7jKPl1cXuw49fDDya3nw89+DwDrB/NRcd6JeOgIIpBQQjkJiIylrULZyyDBSFKdAkEJXKGUQkgOqMAA1JnkRDXL0gcY37U2oSXP7N9nWmaDR+u3rj1x4OSgEJIRpoy1m0Z4JB55bPcFOy8+dP9j9+4/fE3fjtZ1K52cufDCHZv6dsKPfgsAP/rIx5uR33nvfg3mcolELBZb1vVSsbZ1w5bHjj+ABp1zAj/QCU1Xaiu7do49/WkX+1W7Iz8aipqzVGmenOjCXEdCS+ige7wzCZWGku1eO9JqXQZord85IAABIkWHRaMmJEz0T28sJxud//kd/r8bz/96sEYAusGXbRlyJVREUWqwedP44szicsWjAChDwohUSAWjdpjUqcMCmjQ/IV71rl/+LRyfhkWT/MGHTUv6+flY2qjtPZlgiTe84w0zxYNmV2rzxZf0NeWDEwubN20dzVqV92ywbG/m8JFSBk8OWUtF94Gv3d052tEKwJ+7708feumbDW1NEapnBvYkzz/2nnXXChova7/8QuE9F6bPue5HP7r3ziKep138dKt8enq5CW/Zcf3fb3uhBsNNGN6jfqo62fLxmOrW9J5gpnnoTMNgUZAwoC8+mCSkXK8Lp+sVHdd3QTZ8UfM1L7pwvx/sLx+87gVr52kbmfTr8j85ohaopmVoKoC+7iETjEX7eNiZz/tfvDYTXxdka+Yzp8JiOwA/dvPPg/naieOLtDHbMHL9pAoyAoDOjl4CbrMWbO5Y+xg/QZWwpBmhH9Ur0ukaSGvW+BZlZBYOHRSRGJlr+7LqUjVutFf0ggtA5EIAI2KVm1dwwZUEpThKiJREyOgmFxEosJjmc37zD7+ZihsFgL/+xPvhl78DgJ2X3RAsFmGmKlZONSuF+w4euub/ZCP/07jsrTde1ha7Xh1/91H3DS8Ig0gI5QecABKlpFSAIIQUEmxGhxLxTWOdx5ZnnaL6zB/2cAoD0Dx7gl293bUzU5Ou59mcUlMADUkbIiFby1gABJSrHQxWknUOdPu1qgQRuBpNGOObtsEfnpiRJ8EFBpECFct3ieWS6opCjkaSynRHp/SU1Gjo10LiWL4nHRMCQUgz2runhZw57+WbJo7ufcsnPv7Gl9508U0v+vqPvnO2ddsNHTE69po3vS8+PXlkz7yPYTxmOQ64zeLaLee/7pXbxod65hfnvKI/OLJ+97G7YhJSJihfmAmN+1GkeF6n1WrtnJc//eJn36QKh+/4409TaqjvN/+4DhZBeOARiDQIXai7QZLwVbWsoQcfODN9N92kGZYZegFFBCS+FBYxGeLw0NDUiYlICAlA5CqjpFQEiAIBahURfXYgEAQeccaYlAIQEZESIqQkBMPQ7+rInKzWYoxyLhDBDSJC23G3a3R89SwqsB0daMilBIxkBDVbTyXWj48RhZmO5JLrUkJ54FuGYbsBJUYkmUQBiIpQrhQqyRAYRcaICn2dUVRSRpIHIUWVTVGdgdeUwraP7r23uztv6LTSrLX6sogeW1lY6T77F/2loOlnvAdLR4/H8+N6KAiKQsTNKD68+ZKlY4cU9Ua7kwvLdnj5jt89/w03AgDAF6KVKclTCWudTdwa/+zmNbUvf6Pxta8+DaB4x6Pah39aOFNuHNqvmi6M7Xr3e68b+OUDF7zlY/9y+/5/+tYrbrj0/R9405YXfP57Bsl86B1/83DPwtTkqV9+/eNTv/72BS9/yYE7br376O66faGTFr1jw0C2DGS1mJ2cPRL74hdn4msaW5JD6YHNAH8PAAR1XvMCL2p6IQBQnRWDcGjd8PDowModD/kS45QGkVAEEREixQilDIQSvoJiU0QS+oa7Nm/ZStCwpyecAIDhpk0bZaXBlBE2edwiStPTo+NefaE/E1uZmQEz+vVD36kcPAPfOQAAZ6b3NleqwWXbW5fx0mecszK7VF8Idlx53dzkRHe+UxDWtdawzHi38mtB49JY9+xcbaXguqE4c/hIc9OIV2kKvVEzyMT8XIzpI2uRxvWG00wnifQxsrDhRaFSbSgitNvfARVDUALjTIlQLSq5eaBTMeLXVp64qf+p8fzH4u5/ZTz/zuT+a+OpLC8rjep6HBlFGgrPj2oiEtKMaTmP2L5DGWGICiCU3CA0FqFR0yWJ8bT6RP3N+0bv10VxtrAvp8YTuYwdk1e/7kVv2PrmyalF0RUlEtFI9vK9j+y7t5pJxXng+56MZ3o2zG6MTy0eP33bAtYhbAQTiXYy79VXdk0+8Kn3LX9IWXJ6JinC+LC2EJHZuQIfMrK379/vz7M3vaj7/dc8bSh+3kOl4yKcGezb+GjxMFb2VFk8jD905/333
TtjEkZjk57Kw85z1tkFNnN6Qh8vdeSNDmtAGfbe099dnIyRIc3INeYXWBBZ1W5aatPLgZw+MJzJlWfon362vHQYC1PVZ7ydfPK9H87pnb71cBd2BuTKq8975XXPWyXA2/Wsm55ySJa9qmDBth7a3X8p3PUnANBqticLjChASbSWli9HBUQIiDOnBsXJYn35TMnzkkDf+LNHWtCaUqXWv0rVxqgWikgpFYURUk3TmZAuEsJBGZRGQjJArsD1fAYQNy0HQw4+CUsG73v2hm1Djfae9LNXvkVSddUNV/m6Y2QUz/0lwH4Rt5tVEYaKAVLKFKgIOABwqRSiwWXUcEvzxa5U7gyWq+AnBMwwBN6244OLKxkgXU4kokAQJICrKslt0ohWr+iqhhUslJvNqu/oMpdIZmMsyWLexPST50MJYyh9lH39+bXb1hXuvqvmYlaicHC2vGTGzEiTRkIP4tDwgrzEiCilNCs63fr4muymhoVqBj/8yS+OvOLlDEjMSIHbAIDOZBpE954HzvzgO1+shl1Ao9DRTWp/+p//6e1vfPPtv/8sGJbTjBZqtbHHH0tioDONOIpLRewoFiMBl6HA5Uh5CLd+85MLX/3Xw/NLp93KTee++M17/+jd94sYcnfQWDnxyNjoBUaZnOV9DLZvy4xq0DgYRgEoJRXojAkpBvv6QYFr20oAAaKUEkIwRihBIQSikkquEl8BrK5moBWeKVEgCcEo4pquKwUEMJKiVq/09Oen56Y5Bw0JKln3Xe4GrZmUZxZWsccovMADpQESQrnrUU3riKdBASjR25NUIM9MLaXjCcJlxmCuHyIzhJIKJFOKIBBUlCBBhUIIRZREJGg7Xu9gv+cW3cDpTXVauZTv+mWnmYGORr1RLCy3AvDs/GJXV9cTt/wvBU2/+ViG0ItvIA9A2d47e2ps/abyIw+nNo5tu3JLZWUli2b03Ks1V1o7PgTnvRUA3vidfzKZe97WYZtFtbJbOlVIXLOda+JYqdG45U8XNU3Vn9M6z40eOoC4uPCrfYOq9wV//aUXP+O8u297sFE+Mzmz6/xHp66//jl/+s3PHt27/4o1/ftO//nWr/3wtu89esX5l1/V/eHJ2si6oeyPv7mwVh0949FHGxNp+9pmgj79itcu+4e1sO0WeIRe6DuOjwiUscALiYXDo4NA5NBg39L0opCMATKGiqqQSy+EUBAAjdAo35Ma2TiuxVKkUqnNzM4uVuMSa02v9NiRThMiBU4UWb2dohlWmitxoQY2bWocm9xx+cWNYtlYhQ5cuvPicPPo9KG51o+mkTj/qisIpqQXdG8c506DFxsAMXNsLD7aY8oIOmIbaCJoBEa+A2oLC6eOh7rbNdjLHeOKqy4BkKCUmeXJ/iyY6b0PH+o2jJACF/CE51AAiKiUVAiAnWnDjBTjUa4jg0gH+vv+8sbjigh87gYNEXI7EIDAKCgFkUBmEFQAQgVSMgaUQiRAIjMg9MpVUibJTvOvx9562YeeLgCofaBpHEu63cJZWFk6vrE/D6K7snBCFI91xMIXPuupzepcx1CnJFrHopC0H65/FUvHGKUgfEAf6DMB4NXh9dZozDsnsDnXr87QgOtNT48MyNIw4PqNMT+u9Lo/f2D+eO3whkRS9pxXfGiRyqBL1XsZSn3rVbuudTeXezt7kXWH/DCjAzyIjGwOVryVxZnu8ZHIMSP76GKejA8M2yVfYxYXi+aI3jxpt+CuH+LvS1U1Ot4D7zc47ZxYqB6aK/IHSubovCSm6vYfuv+HX/37C0esSjsA3/G7X37hVw+/dd2lzdt+c+BB++9f2M7uVr0GQSPRSYOoY/fRwlAubgHUEOIDo4JYUbNSWooaIUGQJSrzehxCBwAQMHTazS1B4KNGCCiiIAwjnSWpAqkkZSSKBKGEaTQeSYMwM25ZuuZToIKuGVl3/wMH/GKzXpptAcv0NVTXYiQFl19wyb33/eHQgUPP+E+M5T/ARP/78T+u7BbBVoohUkAphaukRolSKJXylDQRK4rXy7ZZsiUokfGma6DzVSw4gK6gwRQ6pY39XYdni5qpcb/NidFazVIEKRVZnbIDwEHkpHm6WOUIoKoEo/c8aT5Mcp8QlJDvycjIvfSCrX984LAr/Fid5CnVIp9oTDic+WZkUkwmtXoAgkWh00pHEBBrzh2dOx2+8R++eM2asT9M/rnhtg3g8cr0Nec8012anvUAoBCnfR4s/epXX73h2tf+6gdvKZxZYh4sKJ4ATWgKGSgBEco4QwXUdkXCZCGgH0aKkKWJieDqbeZ3y5f1lW/Z/1Dqyme9/M4fcH0/bTzex5XsPeBGMhFvr8/o4M1Z2E73uYygD4oQBCEQIBIRINSLpZjGiOSIRKASUbiK0UIkbLX5V2EbyoYIyKXUNdbSA2aUCs4BkFKWTqRiBjFTxvBQ38mJRY1oGkgXVOC1ywJOKFo+VYRB3DBKtq+DRCSCQ0hpuFIuNwsdg/2JuN6Tz01PL0RCKqUsHTQioshhVJdCUiIZAiIQRYUAIRXqGueCMSWINr9UHx7p4bWKtmFoYeFEOtNx+atfUjlytD/RP7ZlM8C3AKCvu0eqJyH5/lLQdKYbwg+5YdHR5PZ4vORWkheNQKO+WA0THf1F28s9tD9yo8HuNj7ssvyCOHeLfaQAHvR0JG/f99CFV109sn5H0ZnPP/9ix9QS++flxjU9P71rmmv4wY9P/u7Hb7v8vKuu6vrg333xscKZR+66u/uKF0cr91M9WnPVswuYXhM79/a7v3Xr9z6z97bEP3zjrsm9B+4/ve49H/5drjN43nVf++CXLvzNDzesf875L3rNxy9/ZvdFXW0myDAqc3SEVEiZiqQiyHxo1BpJU46v6SvPLQrFQWIzUCEBVJBIm/09mb7OXMxKg6FL2yVTM5WmnRwYSvmiUKpnDM2NgOlK83itXOkc69dE1DgzmRtfuzy7jLGYqjWTTQ4D7URvADF6xOfFdsKPh+bcvhnLMqx4prb3QBQ1x7ZufXzvvk8//XXf+M7HzbWdzfvPlCrFfE/3Sq0qELr6e2P9edt1EkGw9PjioUN7smPn5JIdEDrTx+fH0kYUcYtCICBSSq3WWaQCBKAaEzyK6UiFynG5MLE8zIyq7Mz+xY3HdVzGUUi0AxkBVQBMIkhBAR0h4hohSvoIgQQCFFEqxYEgI0STKqw4f75vz+MP7/OSe9/5qtcmtwzXJ04BNrLxbPn0jJVejBkyqhRGhy2oAAxurkR2znchprx4iix70eThUNMiAQZNtHyK5XUCj5x6Z0I0LUlAU8rQItopVDoIV9BVxrRhazxl9QylNwGRShTj6fGYZDwLnl2JU9NzZIIO+kXUjRh3L6jxEjNMu+Lpqtnds8Yp6hERmcy5A34EU1XFpJbvMMuZ+lQps6YdN7Pp8307iEpctwBUYWM327BpPTa63eZKSu+jy/y6jWk2lPFKqynoZXH5bOG2p3zymd/j9emw/oIXvhV++D0AiNaPaFXTiK00Zo3XPPuG8zd/8F++fk1WIa9XFnfPGjmjqTCUXKNQ/Mw9g2E76Cpg
4eoGVdMYB94KSZSA22yYOgtCzrgEQoWUgYy6TNPQYraIXN+NONz72a+dKi4945XXrRnpXF5qB+DXve8VMyenvZr70L1/fPCuW9eveRJn5JPHv2vD+O/H/2Sdc1KmicalAFAClKLYkFK0qzAYKQgBAJSgaAn4/AVXv+2OuzxDg2BVlALAFMThQWFmeSiTxUyyUGmj2yIAVIq1KY7bQ4BylSBEZKk2Otjj6NpCvf7k+fR054c3DDBMW0RGS/OC8qdfc5FoNqrzC4XlRjKjNZei3GDHoor8+emMZZqjnTRsklVwPBSy73nPD0QpfNfbLj9+4ujYWP6+r98Plw8DAIvR7//9dzvz2vNe83pT06vh4jd/9P0bnvHChds/ym3fMAXJxJJlDF0ey2WTKiouVHs14qMwJS4LcDyhMZAAVJEmk5oOUSascrh8Tfq79+9Zftdz3/uVHc6xX6VF3j0dGI1aw2m0WOAX3v7rxcWfkTe+xtB1EVdNN2pxuWSy2aWlpWa1tmntWL3pCIXxeEKqFvaZRlwIzpVSCqRSLb6r1g4YGaW+H+gaa/Gk67qRTmdPTpyiDLfu3OIUZnt68icnloSUOkGuVKXUBpkPbdp89joHvmeApmPU4DyWjOss5hl+V7rHboYYSCtlXXDu5sf2nRDMAqEoUSZRXEZAQDEWAgopqSKEACEAQhiUAQjLMvym51XqKJ0HfvCjYm15uVQ95/wLL7r8srBuk1VadkqUaD08f1louttoxjs7bvcv2Th5y/CG9b2wVoRuOZynR0vpUR4fzavNm2C21DSCFk91TR9UD053xI1AQ3tpeWjrpsxItnjm0ZSIIkcuwRQJtfDP5erUgb2N5vmbvOf9zaue9VevT8tXf+rtz77r8z/7h899/bHfvu8Z17zo1h984vB9977u2c8yCpe8+e8//IMP/vM3fjv74jfuLhUrUiw0oWLXf/y1n90ynOv9xO//Zu/vJ/b1DN/6nR/Hrj6/NW0MtBjLLspiiQsDlKWgrEip6aRyOkh3dOfo7t1n4hQ6+nL9wz0ZTaMxU4IivlBuHauR8gKbi0QujZFbLNdSjNoRl5TqTCgNAo9T27PtWmxkxLFIT8N0u2OyWpxcnC0ePnUJAABUid/Tt95barfVDXZ2ye6OIBYyNz04mHUaBccNzrv80i/dvFlL61WvFiSzoQgwlxjq7yks16mejyQC01xtikDyosue/3jzz4lyZ6Pskpjf2dNRO13mQFweSQlIWjSqqKTqNEBwTgjRQ1ysBRbAYsUbYpA1/m/gu/+/DI0wwTAIBdWZDCMEjJTSDQpcmAoERY4aCqVLIZVoZaSEVLoOIZUxQTJEK8ro21+/+bNf+82Xv/GBFzzn6tn7Hh3KRop0x2gFhOEavLHcMIgKThVSVsomhMY1HtWTVj2MDesaBxn6fjv68Hi6WVvpZBokdc9wCBnAIBcFx+K5imkM1f2lNKST2VxRnvH5GQmGI6O8zyCZlWEhKfoj4jNwNDDAnPa9XMJIUB8aTiOdShnQB01lQRiPmVBhvueamUTKUw2buineU495hUSrPafSKJmmjgyErnERjziDxSW/GhjpUZc3TdvSYo2V+6aSZzVY3vO+jzMF937hK8fjisfgHW96w1cAAOBzL/vEeF++5tT2njnl+fy6kew333jv+7911cGVubHuHF/BjAytOKWculZqmJVbJBbf+fC73/XxT7XO7AmeVIYgHIjQIhLjNIhFTggAIAiCQKKUVKJGG7zODQO0AILF6WUZvuP65ySv3VC4s41tth+a7Mr3Guskkp5zL7/MIxS++Ob2/f9Pd73/adPb2SP/Va/kv+2E2zVsZjuyj+1b8JnpCR+EQiCASgBoSgFAGoERGgFwJk1FwRCXj3XA8XYA9gA0xJQCF3lQqVarhXSyTbPfqquEgBqsdkwDAJBIQs0PqZKxZi3b2WeSGDxpbNqwzlkulYPyzPzCSM9A3+a1pYkzgVunQiZ0fbkQbdq6VRvITd77qAK15+TxK7ourQUBC+0WQP/9H3j/xg1bB/L5H/3oY2OjFz7640fzW9vL5R995/ePff3Wrzz4dQHM04LXvuR5r7vppfDbv793+Uyj6FGSGN44fnhxP2o6MbQ45Q7BhQCyDOaEagChUsYJIKKuAJXyI1j3wmsXfndHWLWvXBs7+tX9R8pLvlZr2s3IFiwVAy1xFdgA0BgYSQ8nrX37VwbPYabqBFypYwgql0+cOLOwZDfrB4/ZioOCvlQ2ruuKUUJIjJm6qWuMKVAKZCS5IiiVQJAUiGYypFQ3NEY0z/GOnTy+VKvEqa4kNawYSyRHRrqmZlZ0NGPSL863A3BA21qexrpk7WFCIdIhHoCnJ0wgVio/iABmklVKzrH9E+vXbd64du3E5JRQukIzkooKHxFRSIIt8SZJAAlBoQCFQkL8SCjCZpac0ZGuy95zFURxr1zijgu5bPVMtTvTTjszrmCV7IKf+TUz9NL8DEZhUjN13SpUa7Uw6B4dSedzhdMTXYkUaNVI9GlV+uNS3/LKvG4lieKoQc12BAcaJ12dmdnpSi6TyPRlgiDgc0VPWSuBl2NEU4QykeyJOzUbT05mfTed7tqfuKL2uY/vd2uxdN+1V484KQJMHrvlnpVKsbi8QoV6QdtSnfzYGMTZsYf35DKpC67YWTh5Ktnbcfs370uPFnsHxnrG2czEUS3eu6m3/8IXPx1mT//dDc/55a23pgde97JPvO/Wb7zDqjgb/+rZH3rH942Bwd/+w/c/+Z5X/PLXd+161Qnm4krDFr4XsQ9u2Rh/7tOf/bEv/8NTLrrqzi9/YM3Qzttveem+vRfeeP1a0D4OAAYVdW5L3RwZjjm1qGk7fTk11FGDes31Rb5r/LJdffHufgYGOBEErpIeYSwsNTRkthsaZoyCEIKFodQYq0fcYFoCFBfIUEVhpAjqaaqnyhqMNu0l0+lT5eJg53rSO9+6R83Tp/3QwXL7ua4X59OZblJOTBy6E1DoW7r7jQFn7pAeH0woY2VRZDvSPWuNpeUzoSe6ugbtai1hxcCxSiUjlxOuWuyePBcI0bOZdINpmI6nIn+xaAEVTMQYdX1hUOkhJnWIgCkRNT2hUYiAUCH1uHbstmoLwnfgHy94xcd3n/83X+0c7ZtfaBbtJiMwqJtf+4dXA8C17/saEqUkJNEYsIIVj9gtMg/Ft527sacj/tannQ8Ai3d8tieWJkmrNj3t2oX8lh3ajlcDwB8+8JXA9b3It1IWjUDZ3A25EFIqUIRIJRlBEEoBcAQKikoulEIA0qpjIygEyYUOoKiMWJgj+KFXfuzR2z76ztd8eSzpXrBm6/17/7whsy6sUrtYlGssUa/3dq2XkIwE6Jpt16qpRPf+x6ZGRqmWTUupBdxr7YBZzIgFSSdwtchEbirwjFSTyA674CaSIi5ZEK9poZcnWdCNyHfyuhlC6K8UpZLcKiAqwijo4FTSmWwmchyfiygQoKgyFWpaZbls8khEklJSr3Pd1PWwYkQoDK1Zm28F4Fw+zf2QGQaEgnueUj6PRCaXRKiDIoL5rh1
RApDQ2gH4qx+5oCs1OF1afuDmPUlXfXXmaCsAn/fc86wI3DMY1lN7DhRSRnE90774ut+/9yfPlcSp8GAgb1RWAiNjeG51jHUBLwDAFgGJTKZ1ZpOAVFJAq2anCNJkzFqu1iIKknOdUUfC6Ka1rgjnapNSA86J0WWdG+in9xzbuj5proJU0085b3lxMlKm9NL9muXPTvz7QPvvIAb/YwPlfwzD/64rDuDxmcBcWvAAUHEBQBBBSQQABRRoCCJCEEpICQiQoPDUwcHSzOLZU2qAIfAQIMGRkaipoF5v164ogAIQoOhZHT2ASEmgoBRIIHPl5kz5NJB/U+o+eP9DDKCzL89QLCzMlhaXKzLQEYgCg4HDoVgsikrDDAITYg5xl48e7x4Z4Yn2NxQ6EjuTl373W1+V1Hvwdz/5+Xt+9LHf/LAAAADf+7t33ndmQSmII0+68Lnvfs9/8Ju3Ls7Pnyl7EhK6Vp1a5AI29PcypsXT2fHe7mOn5k6VAwMgTnRd+ugDAiICVcpQBKXs/atrKr/Yg161M239eN/CU173lKXTlp2tCz3ddNVVcAcAvP3rHeWllVfdeO2mzQYXfLlR5aAMAlyqeMwYHe2bXliiiCKUS40KQRDtiw8KQEewmJZMxHVdS8QTpqaDkFyJMIz8KHSj0G7ahmH6QgASM0GR6sxIAdBNW9fNLdcd349RCHi7KOAcmmzFvdqBqR9/8dcve/tzfRUpAqalASqQCgBRsVwmM89iU6dnmKYnTMvzXaQElJKEtPqRW+DjtlScBKEUVxFwaRqGacaiwPW9EAq8uHwsnkwcOvh4Ry43NjQY+n6rfTPIWdVavadl2oEImrWczxRji4sL2aFBoye3Lp7zVsrlycPZTLK0XIyNbGjqfu0799CrXprbuKFy6LjUtBgxmcGaAddsttSsZDt0JKJZa3Z0ZOwO3V/xAMAH6QppUaBl24zRQJPluRUeBDozjt307uUvvWPHeuGdLrOBvJuCFWw+dmDPi1/8YhU34FsPA4DFk3OlilmC0ycn/+qaG3753s8XNH79854r+9iatR0Dz177+9f+8z2/eCwcW388lJ/72YE/PnZ0bV/u9NLSa4e7Tjx469U3Xf+cC59106an/KLZ/NY9y9Kzf/rruWNBV1OfhvKKIR7R2TIXsDxjfe3bX1AAt91yT86676rnPe16P//gr7++Zce71rTumghTud5t61Is9GEQgFIlQsldwGwsGeOelzbXwXLZj5aYQqSG9Gh5asGr21KjSimhpJBAKQgBcZ2kYqzqRrGMUXeVzhiVshnWI0JTnROmWTJHGlKtBIMJLJfXiLaK5dpnb2ngUNI/1XZTlwyUD5Y7Rs3h9c8O6w1oOEYyLcfzctEAd5H0YGHxaEp0Epao8+XFmYM9+ZTi8rY/3nLTez9eq5fp7IzQQQkVS8dj3XG+UjQ74qxeF3WZJKgp7hFlAIQKJEdPRJoEcCOLAZiqaStFLJVoLyg/88HdH3rdmnvSxuGTixyBAeNKNlYpYAloIAUFQc2IaCQlJERoS2EinDpwsp5ue92GO5Hu2WAmk5ld56emF5Xf7opsXThGKQgII+l5fiRBrorDKwUgWygXYG2xZcRVWnZYRWwQRIpIENpN8AovuPbvxC8/9Lfv/8bNj//8sujSytzKROnAumdeHj8Sa8Y6VpaWo9A2dDI4NKRsNrMwM9rTt3Tk6J+OHjrvvHOTyTZl06k77+nsyC8WV/r7+8/MTC8uLo6OjqbTaadp2/VG/4Y1uVzn8uKS67q6bvb09kYBB4VhxkgnEwQkUVIEYeg2hO2U6k4yl8l0d9MiAamcpmsYRmcuD4bhVxtRFKTTWUAFSjmO57p20mrnGv26TakGiCAQBA2DINObF2GALcVLSsLQ6+wdUJK3A/BFVzxL2s01W3u2PG0LQMqquXDtPwLAwT2PXLp94xWXjGy6uOuSudPPe8ELXvbCf65NFtdt2wS6W3Lr1VMLKYMpJ/RVTg434SQAQCGW6tvQThETHYXPiVQ6go+gLEO3YgA1DZikCkChhHgiuXv37izVqIgcIep2PWVj/8YtzNIC0Sb0cGuFWJQkvCfVl6ir2dSWS1vl7n8TQf+r8e/go//9eFJXXDYuK44mQaLiAKAUEqRScQIsbAnEKwUABKlEWZX80TOLoD0RMjVQUoFAQKSAypCKr/b8UgAGJATJ8YlpEQAmEZUSoJIImbQV7/w3Mq8cAHR6arGYYoQrKVSYR00QSTXiCNlt6nOFZRQ0E48ZumdyOlspO41a/8hI6+Pffeurfvz9X//Qtx2Z+fT7Pn7hhi1Ffrj1q/tOzwgtYCzlBOJbn/1oUri/PbRncnaZAMYMyqLo1GJVEkjGMahVgig0NSOuaKIlR6C4AOAEpFSmQTSNNQKeoYAcem668PQP/6gTj4j+17/vsXnYgFpciQaYZktodL+/zfYfNTZsJmTad2xGjBC8rmSCmTHOF9esGZMUF2cWevId5XqdA2iAURgJAECQhMZSacdza543WyhpyFAqGzkoAkAIAireEzf78zl7yq7Vvftv+xMm6Ei+a2DL1rX9uVOTBQ24s2o1IV1NYeXStgEEGVfSlzKbSQJAs15VyuCen+rsME3TsX0ecakEQVBSMEpCqQCAqLPdUG24jMYU0XVD02QQNpvVcqXQ17+usTKXbgR6LHHxlm22bTMnBNn+kBFgT9QukGumBflstNVqFIud3UmKTGdGvVqdmJnauWsnKJmoMjO/7tuf/OTJsWvp7KwQsHPN8ES5uFBtmgQNg3ZZ3PChWAr7RvI+OnXXzXaNOCuTOkhfKEUQuUoaOmCo65oQKrBtkub5oc2Zj/xk5UvPP53opcTOrSydNzp+3t9snZqa0srtq7Tv97fOFKrVE8UEpj7/zb9dKi5/5oHv7Zs7cMObXnnf126d/MX3X/6rR7si+PzzN52X66wcmUpNJg87S1s2xA888KuOS0fv/snXz8xYF77qFRDmIZ0Bb+meSu+XvviFv/vgO2eNE1JEjgBAUvVRSAlg7ezviNzGbT9/YNPMt29443PthXYpR/O4MpUzdUqRBNMYYWDqceBpgYLzputw3nhIEi3kKooiEBD4nBIwYiCEoBRQgU6IFAqJQqWkEPk88exAITKh0nF99wOn3/TXjz1+/4yxRQ/cMKmDkd8ZddSh2d4By+AbYWmX6G973p9+/mcvfHv1zm8c/fofun5z58+BFLzopOY9nFwrI3mqM7EpAqrDcloFUD0FXhUUXViaevprVWP5rR2JjPeUHnZyF1IqeFlUKOsdsXxK5ksEnRTFGsikRIIqjqoZkM50Km3x8pIDHEAAI9CsTKhmG1T4KCTsO6az/S08gcYREfjIqltOQkQMQpUel6EEklBGpIIG0ROdfOO2LuDtxNiGp+8sTJZmH7rD4OhE0dbNV7SOi5BHPCSUho4IuRTi30ujKACF2CJkx5ZE2Wr/1GqvIBAEhi0HAlIopaSUcOWLPv7j7374N1/73EvPfQZfw9avfc6vPv+7h39a/uPUY02v3pOLVyrOeVsuGh3t0BILN7
3owvMv3r7uiu2U0lqtvThIjfasVKud+aTtlBvN4iWX7iwUCoFfTyVNS0/WV4qVxaViaWnjpk2VyvKRxenenoGm62WSqSKfc1w7nU5TiqZpitBLJBJLc9P+ZDA0NCQjCD0PueRa5JXKCSuWzKaF53meV282s9mObLoDVgVdAidgTAGVYRhSqiVM68zkmWQyiQpMXZdcmDFLBJHrrmJwMk2tVIZYMrstFUeNqq72Cuiz3/h7sVJxPX7B8JrLG056eDS/+dv7TxRnb8kOPZ/o0vUEyWWTzXKzv2pEWpsTw3AbmtMGkqBUBJG3GIukAsBQSQEghRQgESEbj/u2iyEQAjICBNrRgaattN7uSuEMs9q8LpUJD3GhO1a/60sTz3zZ605Onf73km3/DaLqP6aX/3cj8pQCwZjGhUAFCChUC+zTwuACUdhCMgMCB3hMiLVP2rJyAIrAALkUjOgGhu4qU+sIfn1WvVEDkArOoqApIZFUBlIBQmlmd767huGT52MToJFKKMojiZRoCkMZAYCIRFwgl6EQMDrG5qbdjevT8Y7kyoMLaGinT0+2lutvXHt+WcFfKbCtyk9/8WXjeW83MAdQAQCEAEDTg0Z/buCF73zJ7Pc/PjNXiyKI6+gEHJE0VZjNJfNjY83p0xqFyYMThQg00tIc4hSBUwJS6gRSVAPBA8XiVIqIjL/y6aXv3qEXlnf2bl9we1OJEZtLTaPg3AMALJHQjJj0fcKIUtIPhKLAQAHQSrmSTWfXrxkvLizaTjMWiwHSiHPLNIQUjhOAUCBlPBGv+24YBqGKCBAdiVRSgkQKUgBHXm40OSCC8oBHTjDpzw9u2JTt6xVnCgS1SLUjSk+ujWvtXb8+rsPPv3TH8978tJiuGZqulEomE6WqB6iq9QpShUQJKRSCAgJKRoEglEJLgRwJo5RRxigllPqeV6nVXc/hEQdABcoNZWLRdrKaHmPKNOO5FAkVkHY3FA9C0p9bXaeR8v4J8IOO/h5IW5VT00yQdFfXzh07q8VSImaaRjzYt+dt73jlr3/8k0Pm04SlnZydHY8neHdSVySSod9wJdA6Fabt5xNWtVziVoymNbMSOIoiCKGw6EgrgExMRCh5ACQEp1D0fPeybReuu/ry0w8/VPeckl3PoMoasVPLZ1oNNzgyunZwNPmcnrmpaSiuJBl75cte/bSrnz5y5ZaTx4+cnqpeuHXtRz/7NnvyT/Kx+57SyD41LptX7Ljk42+anTodcuzavP2PX//Una+4tr6y8JpbDmc273zKpTeuFOYv3fmUn//qTg2aigDINIFAGE2NgluvRAHxlJV7di9NVUSt7bjcQi0yeKSIlDVNEVAArOwHPnKQIQmVjCjRRGQAiRgSTaZTzA1Fw2cAUoaSEoKopFCaRiMhGKNBGYROqKZQaiVHXX7N5klVNbjymJXMhlBZ4VEt0Ial3m/AnwGA6KVGiGFxutU5ls/dSXBNdjT34ffJ73/+fVu3pTdfVXXJA6kYdcFI2ydIoNyKUZ5UhhaluhSPQX5tVs90R7yLMDr3rfVIpYw84Ixmc6LiqGIwvGGdu+ewH0aEAWiGLfw4wQaX4xuSSeYsLztEaabgcQmU6CTbJvLb3k2W5jM7ejNG1Z6tBwZgAJDvbYPXOgnqRFiaCoXOIgIm8JBaSoU1efpIdet57UTdbV9/Zyw1zITTv6ErnV0C3rXqUBXTKABxG6HbEoZf5RNq+UeCSBAQEaWEs1veVeI/UAAUULV/wpZ4ilJKgeezF7zqo3d++u/eW7//vPXQqJqPr8Azrt2hTyeH+p077voMJbWjhyYzib7JyWRtpdpIFYsrhXy+O72KdFF+FCem1ZPoyWT6No9PT00l+zosw6yUyv1jA16t4fvezGyhWrWSqeTo2OCRw6cYNWYPHbJdZ3R8ZGVmOp5M1N2GFY9t3rw5ndLdpXK1uNTfPygFrVaK1Wq1r6+v6tSdGadULaXTWV3Xy2E0M3XmwIEDLwIAgMBpWplscWnesiwA4vt+92Bep5rTtHVCBJOR764s1AcHV8UYzszO5Ec6WSNqLJVUQge3jQh1plzQjEiopSNHDMMAjbx0Y+IPLHP/Lx9/wSt2paysubBSLNTiOj3ccP6s+a8CAIAKQGWxjUoIBViUSRCtLYKRsCqhSxFCQpgCDpLFYk7oEQmEYsBACRUJNWp1JlNmKMKItCukVtajcjze2SF7T5574zNOFvgq1+X/qlOtZTX//sV/9ePqeUzTcKMg4gKAKcLhrNSXEhSw3RWPIECBgjSwrjgLfQoiWJ0akSiFUBwUqtBFlYi1NzdVkC3hHh3wLM8cV1Ih8UGiAhYzGjw8PTf35Km5ArNUSVRUqRBBU6TBgHFIIOEgmS57silEksoFlMH8YhOJMpM9zaiVZoa+LIx7EJiYgNiDBfzcbV8dhXa58b4vv/sndx3qvqDnGeM7+PKdN0+cLNukgzBQ4IKqKuSAm7v7wfM1TV9ZqDYixix9fX/myOnFpM50IFwqQFn3wxinGrO8KAIGiigQkDIMKYLBuNTKjisKzEgqdbZN3JWBPnWwvu1aTSkpgUgFyXgCkFqGJRWFIMp35peLRZNrggtd14XkUqpcLhU27Uq1YlkUCCRjzNB0u+76CEAJSAAhKQVCQCmvJ2+4NWGljS1bz8WgBEF1ZqYAihMO2ipjzCN33XMRAAB4CrpNaDrcl1IKikAREKje0aED564fJPsSR4+c1PSYQiJBUqoTJES1Fv2gQEVcNOym53lBEAqlCDBCiCSMMY2Hnm373X1pZpnAjCjkdqORYhYK0XoUWTwRVt32qjNlJtf2Q7kJphlUm7HBXicKIkvnzYaGxCcoO+L1+X1d0bo7P3vLuo9en8iYru2hlIZPAtWMQsk4C5XmKq9UbWrK0tKZ0KslOpOu7SdCmkiLYl3VJA8jYgYABjFM1qhEFil29A6dvOQdVuWOvg0b8qiSyWS9XOkfHB6/8EJ43y0AcNnLL20uNqJcz+Ybb+CoFPivfeerg+ICO+S86UsfDLLdxsqxhSNHncELO8cvHsgNJkZNkexbOXbAqNHhZ7/kjde+Ysd04alvfyk5oe3rEnct0aV7T+395T0vffNbMvHhqjqihVmFHmNhGGrMwIsu7jh+bO7Usnv8zs+cjy847y1fbl2hhWPH8+eOOq6CCD0eKlAKEWhrXShNkxiaVAIVhaQCzrHoKR4pgEgjQBAISgClMZRCMAZKCWCoaSzCSKCUSEH5Koz/+Aud1tLp3Npd21+2JauFWlDWjPMA/gwAkLqutNw4f3y4NZ+r37DWay57JZaqj7zr3fedMwZf/cXz+867wg1m9MgDEIykWN6NrY0JDpTlASi4aah2QlH/5Jvv3zgyNLTONJMJavRDXReFZUkwlNw044K5BudRGGgAEUcJqny6YHVaPZ25UqESSPAIoEqZqfbe6S0vecmHP/fNuYXqpq1d46qrfOqI46jlo22mCJTSCYF7yspIFCKkOiUgZbR9V6cUZbfQXpU+623ZCAuap0FlUcXc4Ey7TEMY0YgRelHERQhAcdVjtjzi6
t63tc0lAIhIUbVaP6RSbTl5bKepJSilEBCRKIacUHrxTZ945QX7D58OY/0dXx/2a/396UWWTGE0f1QIvGxsY7MSbth+PqXoNaoJpfkrFWbzlm+NRSSV6axWaidOHVZKpVIpr9mUWtAZS08dPOWE7sBw39j4WkKVZcUd1922fTsh2pnhrq6OTkbo0YOHGNU2jG+cOH78hz/4bq4jMTgwfHrixKGDj+/Z+/iuXbu2bt1aLMwtLszGYrGenh7baVhGmlHS153BczYD/BEAFmZPnzkdLi4ubj9nR63aqNfrG+MbU7F0YHul+RmmkTAMCZqLMwvtALzl+ucWjtzrDqSblYbeFJnOkdbx5YWZhVqxb6C3O5auFkoo5HM+duv1D1zUfcGlI31/tTD1xUxW373Au0SYeM3zdr/7Jlj3dAD47qe+4m9o1WhAIkjOQUKrdtcx2H/66EFDgQAJSoYK4olkJAINNQEcGbv9A+/Jdpj6cI9vOIuTs13pNtxfS3csnSq5kfP0v3qq2UfWb8jApd+E1UD35Cj1n3Sq/X+LCexLBSnJzhQ4B40gqpbpEFCizf+gVvONCGAqIh1+HJ7oIRFKogAg1FPinHPHd+87Pd63tvWrEgKRqAFGqPiTFgCpmOk4rgISNuplps5dvxZ+/8RvFaFMcMaIxlUg1ApwxYEw6iugiDpSKYTnOcmY0bARWWCD/qm33nL+m1+6AwAA7M3j00emIui+r1AuxNBomjvSdqt3/Hsz1X13P3D/X386fuUFv/nG1ypNSDDGuGgq6QBIlL3p2IlTk2eOT8YtjXgBY4AWdqzZok4v20JxEfrQ+kvldCgkeikFCQqSgwIgL3hq9dt/cAul0OwjjEUiQlyVUYuFAgM9bW2aXi7FKaEKJVhWHEA59WZsTXLi+EE38OKxeKNqU0VFKBTKWMKKIm7EYiwGYegzioRRVJBKmdIOLI2ahk4ZIYgqkiC1UMiIERr6lhUXmu0LsjA3ZzASSPREe1F10fOfBx/4FQBY/b2xTpwuOwGhyWQCmaZCf3G5nu/u1g0mo2h+YZ6hRhG55GHocx4pqSQXEeecRwLEah6aIBLQmOKKywgoEaBihnn61MT42h4TtMLRKUIIjyKVyWrJlsofVJMso7d3w/6JaRXTXQPiYVOPmUTXmVC8bluElYvljr6+2sHJRC7nVA+dsz1lMJfqMSsbd2xXj2wFcak8avDA5wJ0T4aproSPsjzX6BnqSXbGvSqP6XGW0Lht24qDD3kk9SgkAkvLdcOqJwZ7uwLd6MzWp+dXVhZ6N67hEYdGOyUT7S+xpJYMClApEa7KS0VjdCCR64ehVLRUNyLhlJu5fFfntsuMQyftTEo5jv7odFe+Gy8c/tM3f5GNW8cHc0Nv+dQ5JmR8raFZe+u7O1NjBf/5TQnKs0Jig6bxACii8MN77p4byVsZ6X39bqN/0NIf3X8uAACwohmUJFKpAKQGukaUL4VkJAZIuQpk0wZDVzqhZYcHCkIAg6EOFIAjoJCKEoLYerhRSBlSaUURaiAVmCryy6WXvfC2jSP6p79mfeJzd1+VSp05mhqx5oHvbV0Hp/C7/uCm0pzd9lOVFUq3FA4/klhTXZmChaW+6y/5xXvf2/GyF6po3ivLwCbFWAqI5vg+oCyqJuTpwC0/frC6lLrygteZmZiRiAlOSI2C77O0JSWZOXLUbggS04BqVIUEwKYyJWGqyKsNb9CK6yYVAMyXS6dnxne12guASjcEUf3oW5s/+VfaXIpsQhRWl9t5Wt2iwhMBaLofKk1DEcaZWBEQ6p1pkuWrBRF/z9XKJFXPSaWywZR74uDTLoCbAYAYlEji80ABUESByJTEts4YEAQEwBbqirRft0SpVom8Vh0aglJKSJCgCEFKiFKoKWV7QPlfvf1TMZz3HX2+d08Bh2hMT9bKmlsP5ueWFxYWDMMcHBheLiyNj49Xq3VhtZ+a5EB+9+7dI2vGN5x7bqVSznXlfdfVKFtaWlq361wmqBK8d3R9dXnBsqxG3Z6bLSDVfF4ruL4Io/Xj44lYvF6p7jh3+3m7djBUYRg1bb+jI3/FFVc07YbtOJRJ226Ypn748P6xsbHlpfmm7a5bt75cXGjNwbQoF8K06J13/X5sbGx0dPThP9+5Y/suS08m40a9Xkmk0mHIHrzvgXYAPvbHg8vMvyS2uXcoJRJhJWrnE8Yvfdo4FWAY0PRc7UxiuP+Rn39p333zP/7CdeoPe+S2rmyXuu66JPf9XU992vc+8JHWDvjUYw+ao73tqQARbdVKgqCE7XInRAAUUqdMSq4BiWQUqQgIgs9703qlLLuvPSfRY1jzesDbW6WoukJlmM/0n9o3cX7/+UnfhP/3wwkgxnmXyQo+l1JRQK4UIGllowGRACopCYAEaIIo/VsxME7gKeee8+C+A3HE2syKA2CX2zS/qIgAJUGhArW6PmBIGKWIKAjYgjUqTq0y+eT5EFANCnlBKyAZEIUyDki50AkxQXkBqCBIZ4QKYM5uXvihr/3NzqvKiaV/umIrfBUAYMNT3veae94PtAw0irvgEP7oKnH744/eotneAw6HW2+fPVOkoGkiClDVFAClKMRK3ZEAPoFKECEglUqrOI/f9aANEgS2kRZKQrurWTpAVmyR1ahBpS9ISCBBmiBnpT4M/hNLDqcSEsr6elIzxx42LxkC4lEJViIeeA3faSopS3Yzl80QSsp1W4AiSliW6UuuKSKlChCZHhciokhd302nU8wXumEwSiMhm00nZppMQ0E4CagKjf1/vvecK586M3MUFWqRjIhOVmFu9YXlVg7LqzcHxzqPThddKfrTcdBJeXHu+KlC3Q2mzpwCpRNo75oFCAJACBFSIrbEaVoreWzl1QAUcK4UEk0DpAQ1pARADx3BNmUSFJTnx7sG/NB10W7h3bMNGVnY8iW65NFSw68USzlz3Xm7/HI98jygUJd+5/rh2mLRoFoQSWug6zW/v+V7vzjdXCjl8hmGTLEk8iZHxTSTRCEVIVcQhCQW9xymySgYGBi0u9WhfSeFkaUkxUXdViImACXRiIyALS8vjmbTd+lXXHrsZpKOo85PP7A3iiKaS69rXSUaGvEkSLawUk5rZj7ZHVQc4vCFvbcuRez0bOOcc7etXzu88tVfw6UbOqs+9bww2fRVWNnrjG4Z/tsXfN6eXv5kvs81q4KXlk/aEwfuvfmOU7sf//KWC+MX73rZ9g7t+Oz9o33jGh39yR9v7cmlmsvhUK/RXSlme+3yLR9t3bWf/vC+7O1d13/8ciYZp6CQagbXCbcjDF1NI6pr1OrfuGn65Iw7tWwhshYjpeCKAKOIRAFRkeSEAgdgBqACJQBQlxgqhUTPfOF37xjsKUzt3/+hv73h9W+L8tp5lak/H927/zKYB4CY95xTQc+w0fa85Qc2fPKrpYufccW2Vz2vsudI71rzULUP/Sgoezi4YJaZFJUkaGEllA2S711bNxs8azz349bxuzIUkyIMIfKZFwGlMmVwx5Y+7erPdw9gvSFW5gpch45ULCvFYjXIUQyD8GgYEkWSoCiq
uYKn789sAACA/OYLP/AK9o5ffOccWxmmihQH0PlqvsfXTC9wLAQhqSFJJHgiYaSbIqgo6M6F9WLrbft3G8cO77v4aecXUWYzW4JVbTrd0iFUBDxCgCHxhdAQodUiAoogktWcMyLCKokIttW6ARFQKkLaSWlY7W9GJSkCUaAoWZxd9kvJR356z1AuTI2eE00HC/WCEtDV0Skx2DiyRqJaWVrIpRJ2tcpdf1ULFw489mhMJ+7xiYf+/MD84kKmIwcAhUIhHo+Xy+VSGMYS8VKptGbNmvn52Qsvvogxcvr06cu37ag3loCwY9MLmc6OP9zxh/N2ndvVk08nkrbtdnb03HXnn+Lx5MjIUDqVUiif+tSrAeTMzIyus6GhEdf3Htvz+KWXXtr25Iy4vnPuznNGxoaPnTzRdJvdXZmFuVkp6PDwsOO5uXyXBH3bORespqBLJ7721e++bqK2rX9tLk9OLCw+BAAA77jhb+OdGSWCbMzqHu45WvzNg/fe9bZLX+3f/S32zFfEV3r3//L7u/ceOOcl+jfe8LFdrI0YuuG2n31q5/ar2xdEaQZFXwpAD+TjB48gAgVCiNQVRgolofF8Bk+uMAQuoaGHkmavvvGG6NSJnAcy2T5nf3wD658qOEsbNmz2g2pZ+Xn4fz5kgEhpIAVHBS3xXFQgJAGKCoVqWRZSQElkQ4kGMwBDiFapJYGcnjztUJWKcLrkMIKnq22MdDtZiaAU1BZXD0pZazgSFRGKaYRJ4f5bvi8mpUmxpKIU1ZRSREoA0JEqKR0CKHkdiAoxbeCVf3v//nP3fOyHX7p4y7OM5zwX4OUA8PqPvD7B0OYIRHe1WDyqeXUDIACAV52z8ZwPfqYRLS7u3tewuaEDhOhQ8CRIAS20BFWhkECBKhBSgkfQE26LdkQgrubnZaueIwDrCBoXCQAu1YoNUjQ7TVaiFghXi9pZesMxCXePPHRbf/dM7NJhzkMCEIZhWkMK4PtOs+mPdJu+56UTZtMNY7rBFAgkJiWhFFxKQqgMhBIyG4v7jhsIGUNoOnVkmm6YthulMikFyNBTmlVuRMXpmUhGEhUnoMvoLAa97ritAJzt7ol1JZJa8bXfuP3o198teGBaNN8/mMjk1m6NlQvlSqHYrnoQVIhcAiKTINrKX6v/tVwQA8EBpZIMNCQkEBEHTili3GxOVqnr8siBjqSRafexVEulsNFoFRRL4OXWdvUbQycf2lPafTQ/2FdyGwOjo8ko9Kv1WNLSU0lLRP4jh6MLoq4N504++pgAzbCE8iXhGjcJ0YEF0uQggTQrzbAU6TFWrTVqBTFy6ba1zFqem2u6zGmilFAOZZ+lOb7UiJSOdBbm9e6e2HCXThiXQh/oGO0baJTaW6iqkR7m8ccefWDqyJEXvPE1RRnmtLwnZeKG5++069voYmQMVvcs1k07USj45kqhojq7tZTothSfWzmup6pd2OEaqrNUEzWhJ/zxt77yprcNQ+MIT57L/ApwBslXqYqrBLzqvc+WoFmSAC0Bl6K6VD+2BP/0IADMGTAYr/z03b974cdvjJSSwAkBLsFI05FN3ZnOHFALOKWR0iVQRkPgOhJAgiBRgZIghKKU8kggoZEETUmlaUKGmqROKJg04idh6p7RXO/Q0n3pXKz+4OzMjnNuGujc1Jb4xGeYYS2X2wLwTQBIr3ndp3/TTxfd8GA9hL6lucTgjPbgI8djfTt2bu9UPRfHg2kz0xEuTUGyCCPr0yIticG1kPgHFXepBCBMMk6ERzzF6ypolkISYSxWWW6ke3M9/WkjmwzOlKLKSkMKQMgSjARGICMAlNrRhaWW4y3PlbZcdvlN+3+yZ6pw0foBI64JPyKybe6B42UY9cIQGOMsUNLwfKaiYGVhrmuwK5ZtZ9Qveuaucy7fRvJ9CyemujGY9NsFZs3QFJdKKCGh3cCOCpGolvSnUgiKAFBCkKw+FojtJgFUiIQohUohKkTCCCpQ0OrrRyqVlsQgaDhs8tTQ+tEY51pZMztob09nqbAkZJ0xTSF1PbertzemqVqtMdDbw/T2lsy0qGnqMiYMLd2bNyRCKp1Oh722bQ/uXE9Db926DadOTwZBMD7e3dWVrduVvv44G8iNxEccxwtsFwEuvfTSbDqJUnpuFIWq0XTXrduUTCYB5PSZmUQqYeoBY2RkZKxQKNRqtVg8edVVVy0uLmUAACDXme8bGAyiqKdvoG9gqFarDV+w9ejjx2JGTqGKp1MN1z1yZCKV6m8H4M9996fHCjWXwu0LE/4inFUp/+L+h9tF1lUEE1VQDH7o44YN37pv78mHmggjMThxsuflmxJT5QQsuAAwcddnPL1dA+YcYpbiJga+GNiwPtup7b7/SAyJJaWmRypk6e5MZLhK6QIjUCotQXG/66oxcfABTHXq8fZUHLvenRmJ1euhbQNqlvZE92zz3n/x3YAY9EepGxGBABFhJAw6d3Kys399Mi3BiTCTmFueiUpRjdOd51/gLR+3ujMdv3wHfZl4/vBvAQD4pd7CKXPoesTvtE77tpGOdxJoSFWNFCqKALSl8qpAogBFgEiQGkAkiSQENa5SmtkQDKBdnk6m9MH+/jNHTkSI3TFS8TiuogV7hruWz7S5FJthO63HEVApogiCUlKEggF5srwh5BLIQtYUoUKlFFeAiiIHIQTtHuhes22dAB95UFpevvsL1y/tufXR3yz1DHTuMBIHAQDgZTo9LAQSto+GSW5emIQTTjvCH58JB2VTK56ZrAVdhDJFbQiNmJ5RULO5BBSyPUlxdpMv20l4ANWOvgpgFZ4GIAHBkeBIBkQkCXiCjZCJkh+LxzRnVVaWm8uxBlm24NxrnzlnSNM0Qy6Wl4v53mFiMQ6ICorFquO7fhCZhqkZhgq4RrRIekoqqggPORCiKGn6IWVMV5wAU6ibukWQBn4Q+n7Minma9APfsNBxaq3bGEqwkKpV2qkctp1L8yR/zjOetX/iEX/PEcvMUSYSiY7BHrLv0PTOc7dQ0CorZQ2ZUEoCKIJEcZSRaLlzIG21Q2yRzzAOCJSCpJIyoiRBiqDVKnUrkc5mQj+e41oiN9DvB+1psM2jcae9OukaHBZcNRr+8GWXB8WGjMWtjKkMzfOawWLB7EiFzQbJd5d7pNEkkS9AqsCtMgJIFYvFLXB5pBOAXgtchis1rzeDnVayWHdq9So8dmjt1s1VoMrAnDkwV5zTdcPx3KaCGAcF0PS9ztC7y7z8vOlbOvtHc8nY1OzpoaG2a84NdkMj3PnUK3a9+IZwetH0w4A6sURGBLV6s2nJlJmK4lduSMO6sFgjVb93jESe24waQSLotUbB18OA1+49VMbITCWK1cpGYTnFw7MzM//yrRddctUVL337axcfOmloZrajo3q4CkTRTJYSjJShtAxbZYLcdsO2x+89dOW56x85PMu6O3Kx2GVXrjVTMdOKKSHRiSKvRBMZqjNBQKMikgBKKKSIEFJJgRIQoCSTKDQpFEHCDBkxZFJwoCTUAYd6epmd6uo8Mzl5Yr520bZN3C1kO9ru3pspQ1ezVG1LINiNMjn
cOPa7+/u2jjVmluIjg43tG7rX9TWPP7R7qsrVv9r5gdD1bth1BR8cA0e368VEwzm9t0gCpUAAYeBzHrgsRqXESr3ZLNmCYq7HWLt9lDKicVU8Vjq+uMgBNACqIBJKISikRAmNRNaq7z588MRFpvWmV/7tzzt7K47DLWEJerYY16lrnh8iAaIg8BBpFEVBipFGyX3kT4/bYfBWAACYP1SKJ0lh3/39OTy1WBWkDaLWqGnzmgwFAlga1jlGqBjIllg3JaiU0hhFJaSkLdEZJdRqghrbYCxCFQiCUgBBRIpSAqFEOIHUidKIed+eXRu7H/E71xf37m4u6du3b5fK6unte+zRR5mhr1m3xgvcerOR7extONHuu+99FgAAbNx4juM2JZD+NcxMx1FHFUbNmpPKdTQbrkFkvVEd3bo1m8soKX0/zNnezo6O0PVc143H44l4MopEV+9QFImYleDQkFyJkIdc+iICkJ29Xd0duWYQrCyt6KaFwNKJTMijxfnFzq72rtA04ksrK6lUynN9pVQ63VFYKCWScaymIrMcaDg2GP75N8nmzmz7br3iBTvXbdmBRJubm9u6YZPvefCsDwHAbT98S7FYNHV93diars5OAkiNjGHxid33/ujN31+/M8WJlvKdhWbpqOx3/TbYZ7RnxIdVqSJJ/FCYCAIg29llpigxgPlcEiqEJKA0VCgjgYEmWEBRcHrhRc8ocxZzIyOXPFWsbQQAgHqW2MqWliSRoymMy7N0whCErmUakighOKXE9TwKxMwkXBGZBtJQKUZduzI0sP5QeSIfk5O7Hxu/cNfRBx9ycq967c/+sXWS5d8e7HrO1Y39e9NnT+uUiVAiQlOhh1KpFl4WDYAACYDQECSEiDSuiOLybXf8YXgdHJ58IjeuPD65sAgAiqDjA1CyCrkFXnMoogSQoAhtLyaQWEpEgFwCCAkIQBWDJw3ThaYKdcqAC52CRgClogoMQH+5eqp5MOKhRsntt01s2Tp+wTUXXXTjZW//ykde/dQXwkNzAPBQICTA2ph+I0ebhhkTMpECHwDgPZ/5ycnJ71ZtT0MVgoi4KANSCTGNSQgJIav92P/bwQA4QojgCB4TMPam6x7/1j1rjNhs5BVKNZppO1DhhU2oJOTIwOYLl377kdjOczZk2Kl6ZSxwMl1dJHQVQBAF3b1dXhj0DwzsffxARzIWgQ+KKkBCiVIgJIBSiEgAmca44EzTpFRAFGOMi0iCYAo1ykDKIIxKtXoLrMlBCmhf/MT4qwA+CgDdW5+6cPC3y/hDFvmapklU5WrRcVJCI0cnZqorEwAkkhEAQWoAolSATAexemtbdPmqlW9DRKIAkBKCKETr21ApiE/UwsV6amyAC6IWFspVuyVHnCzYkG8n+hpHJ5fLxXxPLyvY5kgPJOMdR6ZFXDeG83ys31poFCem8wnWkxijPak1v3oPXvE3RiwH2c7m8lJz8kQilQtZGPoaQUYJIuU8pHOzVVtByoq7K9V5mOjq73/0wOG41CgSLSSUaLrOOVc6oXU/DGcX+0dYLmlW68v57CizXbHUJs9LeiLqjpdKhe5p155Zpp2pGNUqR07GAhWiov3dvGafeHgvj2S8IzOwfjycW9HzGXOsy56cqtVqvYOjNVnz+lJdmUSyqzsfRuX5eTcuN1576ZeeeoFt2yuTM31bttampoGwVDqrmVrgeYl4nJJwZXYpG28nDP72mx+rBM10voMemgqiQF8/iMt1tzkT+qEIuZVKcVfTaAwjpBKRK4MSKaUgkgjCQgWgkCAgcNqipRNECKQgIk4JFVxSqaXzI7w+EzWD4b6RqE/STGby+JGu3larNlDLDCYmg46221hcWOrC2PDm9dib07tShsdrjx3q2TDW09+tCdFAPykQnHDFtfOdiWCuOvfACoIifsSjiHMe+iEPI6CYjWcLSyvVUp1SFgge2FHYDCSRy3OLTV9det1VUbW5//F9IhIEUUihKdAIUvIEpDTFYPrM6bXjuWxS7+jqcKv5uQOnxCq8vulHoJTOCJeCEJQtQKkCjVDH9+RqXuiTi9sHjzxULlQ2n2v2D6SSkFg1caXaHUagpLAoDQSnhLRpEkBRioSAavE/AAC0WpFaWehWT3BLhgaVBESJgKiQMQBBOEoNUAmZzHfmh8anlhtbzr/k5PTBamlhcWF5evZ471C/L6Kj0we6ensyMYtHURj6a9aOtea2tLxACJk9eYZZWs9Az+Li4uaNm5hSJx/dH4sn46Yho6jqVib2Hd24cWOpVKaM8bqPCizLAimDwLMsq1Zc7snnVxYmlWGqiOu6rhl6V6ZTShm6XrlQj3UkOjs7fd9HxEKpaBgG07VwdSvleV4uneOcD/T1ua5bKlZSqVxPZ09NBL1dG7W0cWDv/pVKz5uvfHbbv7/8Oc/2m05E4YLNA7zWYKtaRs88f3s7b+CHnm0zhY35xcym/oXiJMtDLGZWSl5xxY915It+UevVoewBwMfWv+YjD/2mdQZzIOsXSn4EGhjl41P9F6wzMnptJewFIRXhBKVkjBrIIBD88D/8nURRWzyTBmHXVo7Gnc10XduhO5wyhkRTVFcAvq6djcDJeFJyhUq9Fe/+mrzGNCzJBSBoQNHQROirEKnOUFGpQpXsjCfJA7f+fsMF58qg0V14PsBDAPDW5+lbU3e9+9SnAdoEWx00FgHRgQ/HwAMy7wJniFwGCEQqiShE29wiECGTay664Fe3fivWue1sEJKhKgcNSoivhCMZKKmtlkPK9YalMc00aw1brCoYMuEJQiQwJnlrs7xxyxp4kswx6e3I6bR6ZjnRAjJIJRCQAkiugwqqHgdwAe7588Hf/vkg/d7vG3awLaYPau2uPg8sYuiPu/VxI75LkceL/sBlF8MDDwDAiambYaVy5kxJU+Ar0OOxdSP9EsXKSgXQI5L/H+MvKJCgUCjlA6EAQsiqF4x5di8EZTBEo7Z6jcqElUfS4E8fYCI1dMF59ZMzsr5cWFnqTORmZ6cMg46OjR86dvTcnTurtZqUgKBL7hMkCkABCimkEhQZJYRSSpkAUJQSKSQhhFLkQkjJKSABgpSEQjZdHwAAMVQSVl3S+9717lZfuQR+/Ke31/Y2OMYDf0X3UOdQ52HCiqPOBzautzRzcXbBrTtMYiRaDeIAQFZrXatVrVXK0pavEkoqqaihi5CUa7WRrWuoOH1iz94NG7YV5yfNbBvE06gWnIlqC0BhbRxYZ6yvLi2RiDTMWqpUhCwslOeG9qxoGoad2dSannrDiVXrUrrnvv7lC34/CZlQkO7oVaGozk2Y8bSp61Xb4YalpTuYRdImS3GQnKuM5UpeWVwGybuzKVBRsxEAEwmqAg+QgO2rhh/2jqhbrUuekzv6+J4/rxsYde16m67EbSz/6Hex7lzwlO00bqZZzLeDQsPuHeiy8hmaSXnzKwmqKx4RpahOSX/ezKaWp2ZAyN61Y6HtZDIZ0mgIz7NPno4ajmaYqbgFlASlhr9S6RwcgIYbj6WJEYsxGYRewkra9SYInuns0TtXNfc8zFXJ3F1/To13+KYVHp
qVdT+RjgAxnkrKUBoWFRiq0EMCwqRRJJUGggNSxQSghiqUlKIAoAIUgqCAGnFtyVDqFITd1PI3aKtW3frb1z3Jzjt3veTJ8n+bnvfFs697/sNDEV99kVg928b/7Nl58B1va9Sd0kqdAQqiGIDj+hkRScm7ujuHsmlCxcrpySTVmBW3bRsZKiEZQ6kUrmIa4hL8ZkD1xCWH7r9z9CkCg9Ftw/MLba0k3WIaYsSlFBEQVLKdA6aEKKlMo40N8YHtnsS148/+w6FD13gbpF+9EAAAKKEEkVAKQigpLQquAAAkiBQUAUUJxSflyBDbojNKQesxQaRKiVYZDZlCpQhQQlXVlZpGZSSIrimQYVMYUXisUNx6xRW/+Pa3L7740oGh4VqlLGplSzfyuSz1YXJyqlKp7Np5QWvOkxOnNmzYYFp6Mp0Km6GlJUVE9u99TDfYYG/fY48eWLt2vCvXs7K4svvhPbppdnR0dHRYjuuvrBSbzeb27VuFDBFlza7EUnqpGA4PDQgRVWplICSRSLmB25MfmFg4mU6n4/G4bduc86npM6Ojw47Tzn06jhePJefnFpcWVzzPW7duXblYXFxooKFOP9688ILz73+k3rHmEkMV2gF4bv+8lUyEmioXliWPLCQtFzA9UZZCmJqOCkI/MDSdmImGx1m+t8ZYIaANYcbT0IhYwnXjwSBAHQD+iHj64mf+GAAABoaGeDJeKC05RTFbq3SBkWa6jyGVEFDChQCqARJGoS5gJJWY9ZLfWVl8+TEns+GccXtCdraJXdJayrDMYmmF6ZpSivtP5GbrNU9wmYjFFwvLz2M/+9euFxBkwEUgGAjRdBwQmO5Yf2bfgzvO3bg4O1+thnWgPATJ1cSeO1r9xL8mlft89bSv7blk9bTZpgvAhBliSDQlhyw2HQoEikpwlKBIWmNGVq8UHIboCdgyNEg6Ki3Z3NZwldCIHqowUMBAwpMgWvlYfHTktgPHr9UoxFb5yAQgKgkgBWEAsD2T6g7+jbz25nO27D90GAgBhZaBirC67QOhyCTEwQoBkAildnSBh9QlIHwou2GVtFkLeozIDrwBxmqBcxi0/Ql8y/pNrQDsLp8RAdgRpiiJhApCaThBJp2ca4ZE6QKC/8xR/LeDEhAqlkgGYSjDMIE0DeBhsK6zeLTkME1vnzKsSQ47BtavzE/3ZLaMOv0/qu/vlDg3M92X6sznunPddOL0qUsvu8R2vVMnT+c7O4XvM6YphSClUqqVCaYtZJTkCJIyBkoRkAQBUekaVTIC0LgQVNcCqQCAIgVQXAFR7Qj8xVtvaQXgABIv+JevvXZrz+H9kpg0IixhpqaPLTDWBTKorLiIbv/w2kqpVJqf1zQdRKQEF211q1YMfgJ136oJg1KgJBIkhBBCq7WGhtlKNbAG+soJ0HYMZzraGaN4gNJo84/Ou41RI6GVPDmQT/rJulcvL62Mda/1z8nPLi2MZLuNhjIGnYUCTYWmWw4kD4VFZbWq6vXsmo1WKlOfOlCsRUO7NmrpruLSdHG64CiSjcUoI8WQd2ZyGUvLd1qV6dmMzmgShVQaQAAUQBCghECpVBno7cHjk8fveqDvpm602kvz4p7D/KKx2OjQ6QPHt553HnTnaBSuHemntSYYTIRRYrRvQbo9uc50LLVw9ETMykyenk6nEr4MIRHzhB9RlerNh7W66zXMTFaPWUvlcpqrUEDP+BhPxgsrRVAyZ1BfSDOdtR03MdRfXywlEhmot5+xI396ZMt552Vz3aFudfcNy+Um6YkrGQT+KS4lQxJJXTMSbtOPAGQkTAmUIhHKZaAYjZTQGIIEg6EIAU1icYxQxA0iEUMuhHziVv7FhhWznFpIECVAJASh4ERKUCQKXdtWKjI0PZdJLhcrNIoQwKAIClBKpfDsgtIwGXd5o1BLdqWJZir0y4E7fn6bQ1vXuQilBEIoglKogCBShZEUDMA027sbU9e2b1/LQ9KX36hpyTBa5QUSUkklJLRavrQW0QO0888t+INqFWlIuy0JoFXEbHVuolISJEipWrSEFECjGEohAA1EiSClRAF7Do1su2wBjtZAGl0D6+wAp6eW7FrVYFpPoi/JOicWD2/avJmmUosTE63HZseOHQsLC2Pb10dBhIJS11ssFc67/OKZ6dN2YD/lxuv9RtVxmlvOP8/3fd3Sl5cLKsbSiU4znUwGgQ+qYXt6POmJCBWk0pnpmRkEPjg6cN/DD2bSHQPdgwcOHR5c05dIJCrFSk93nz6o5zo7EFUUtdNgnhukkrk1Y2sLhcJA32DoR4m4kcx0Ff3l7eduPLjn1HKh9wP//NSo/Iv27bKyOWpYKTOXYOlUsltLtVd1uVgsHY/F45aRMDVLJyareZ6yw/Fsdwa0RBglUaigmeIey3WcWAUTPffFo/mr2pnYI7v3FU/PUII14E4YaA1PMc2QQDTqKUhRJqgAHmTyKZDwsOBfqM3v582XvPmT0P/06bLpldugD1c4gXRAE2ZSj2csYj6xJXtk94OZjkTdKcZixDLh6Se+8oypbwMFoohBtLiG6d7BlcX5bL4TpZqbK9SW/Ouesa0wc2J6tjjxvH9uneT2O//u85++9l/vv+fsaY8LOBFEczbhlCgNacDXMjWgyRjFFEMTZEcikSJ6SMCm1FAgHj4MC9Prtp5dLgMzTVdGAEgJ0+MZjRnQxrdChcvScl0CUsArt7eTJ4oS0cqKSZmUYqZeuevU9JOfzCN/fLQyX9IlSiWjSMgwIgAyEhSBhUJTgEIigBZnVhy642JzFzl/0BpYpVWxgA9ZrI/A2oTu08i31fv+/k3tx9UylfAJKCGlJKoS+Y/PLt598Gg9cBV5cp/U/3YIISmlrutFCA6oUMmYHlsCWa0Wh4c1EdXa74smGDgLy7t1Xk92ZZ3Z09lczieKRrIUNhJZ89iRU5s3bi0WK4/t2deZy0geKBUpCEEBgqIIlIBGkYJCKXgUguQaQQoSURFQCIoQ5IJHoASCBAhbahpKtfzVWTHmANsPz+BA/s6ffnxsrKcr75vJHkMzCJj1apUHNSW9pJVCjqcOHystLgGjkQwjGXGIVqNv6586q4jV3g63uhwJCaMQCVUAUPSEpH3bNueGB1MsUZyYbb3f1mhmU7tXbdg2RbFpjg5MHZ9cOnoyHU+wrBUkhanJye//VJudasi5MNLYbDmxUkzpmvQCRqURBmYuK0VT7+iWAnPD/fHucStyc1LLSd9UrvAcr96gvrtyZubEiVkws4PrNtXqvgUYUzQUUAURMAApdCFry1U78PHSjS/9yPtz2XxCtANw54uvGO3okVV77VUX+oHrzi9Vlgt0xZFU7r/nPuqTpYcPjiU745ohmnZ//4CejbOBzs6nXdI7PgYrjXSZmyUH7Egn8WRnj63px+cWOjp7Id4RS+UgkZQ1l/qiK5ZiXmSGijqhyQFqDqYpGBx4W3U7MZB2EiHf3kOqdmPfsRW3JIRdqy1xGUrfjYSvpboLi7UQjUS2m8XTDcWcABxghiSoiAagKFEaBal0jWqRCBSXFCWhUsp0AuxK8f9u+/+/j
vP+8TOXf/ULN93845tu/vGLf/uTF/7qJ4DoOBFIFvnSrjT8YkNGkUmBgmIEKLbB2whwVvCUKE5Qgg7ExG2Tx9JdGzr6NwFtp8oNMwYUdB0oWWXJUEBpG+hjGm3vHdp+YazXsU9lmU0xknSVsSCKpFRCSQWABAyGFGBV4w0AQErgQp2l/UVsI6ChxUBJCaICSUEqJIogpQiEohdCBIpKQQH8MABCGMD8Y/s6B3tmj5284tLLRweGQKpKpTJ15owbRn+89Q9cKj+M/vDr31jxdnp8anrG9YPphZlyuZTp6ZqaODU9eyaS0fDwUBB4X/viZ/fs3b1cWIqiwIjrU1NThEB3d56oyLEbhq77TjQ2uiFhZvu7x0yarjRrE9On872dh44f3LFr+/Ca4VQ+W27WiisrDJkRs+p2s243kdJIitgqHebo6DgirdTqo6Pjtm0Dyt6+oY78ODMGEomh4lLfwPAz6mL/8txEe++VvHA7ARo2/QxlLU/ROm6NbNBAUY0RjSUBNMvqCn2/tLDxnHU19e2lalVLJFUCG/Uas1NxPQZeEwDe97dvgLEcaC8EgB4CESF2I6gBaCAhLsOqFyENpNBBC5SQbsh7LCZZNhbrv/q8X1x/wQP37/nCN/7wyzd97HmfeK49fc+qPQmn0eDcq5UdRGVZT3A0PvtjN8PHbh5Y/bHVULkOPvGfGncLXADv/PfHn/60fwCAlz7pSHn92g/OPvKPQxdNBTQBImWA64NUzCVSRkIHnKxWszGzo6urulKwAB+fWwJr2+jAGDx+sj1nSgUqpkg2m1tuuiiBnFWaC8LJMFBSAajb955ozcqUygPamYmlkyaRnJjmeYN9cMcTUwo1z4z0ACJJwI5UhAIIpQqUEFwAZYyiJErlYrTJI6bpSmc81swrgGUAgO3dAByLNaF7mqvDpsz2E7++uwWcZ5bJmAEICkgghQGgpDQoC6X0pHxCYfF/HqvvpSCEIACdiezGizc9cO99kywY1hKTjvulj331uhc/H6gGAJ/4h5fPT5eufuaG7mPz//rrX3sP1O9P1s75h+e7FLqQhq6vAKamTwsCqZQleCSFNKgWKSmlAGitnVulKNnu5xIKVaukJBUoJIQLKaRClEzT2gpFq3OUCGepuBOqnUM+Mvfo4rfffMvPD53RiD+wju75o5bOxQ3IdHTU/CZnKs5Mu7ZCQIFECaKVkGgzCoA421IBZ+kGkEgpUSnCdCklAEgJkaHyvd1gQ3Xfcbc4nxtt983TkS4RRm35yGQSUhZkzYw31LVhZyijofF1Ymm58ss9z3j56z0j0gBIoZGRjWgkp9v8merUHWKH49m6mdKViMpL+Q0bWGqIN1acmfnZmZKWAMsgCsAiNKETko3XfLZ374FrrniK2d/vLS7ELaMhQYHQCRFCeEgMIVZm52811l2vnTBS2VpYb/k5tn8u0qCrr0+uNMGNDu3ZIxhLnb+zycPtl15cX1gqFcq+6+XynXW7OTQ6Ek9k4/F44cE9yXjciic932fxBFoGCZRUMtvX19nTh1ypYs34/xD33wGSXNXdMHxuqNy5e7onh93ZHLS7WmUJZYEQCAQIhAkGA8ZEgwk2JtjYBAE2GJNzjiIIMCAJZSShVdiVNsfJqcN0rHzT90f3CDk8fsL7fO97/5nqqe6amuqqe+455xdS2ZBHpmHl+0zRak9Pn5nYOFmp1VJOQimSsE2IOEe0O3ON79gDnaCzVE0OjQcr9SwywTSVnk0MbF1+5Le6QDgjF89MO9kkQpqmjPl2W5OKaYowaSA5YDuB5ykCVMMB5wYBEzQpGVeSYESoObfsP73g/P/V0DC0600zm1JSMICVcjlymYZBgkIICSkFANWxVMpYgyIoAN3QmtV2Ml9CuiG1pP60AzbbImHbIvAFBowxVpgz3iVnEIwTyR6ZWCM6IKVpWmu1hgv5p4wylZAYE4y7vEzABFMkkVIYddNbpKRSABhjQOoplQ5AT1uXYsASKxC4a98JWEgZMLAAx0haJqExtGtNRDA1bMXC8qljreUZzTKLw4PPeN5V81NTmVzh8m2jKMSNxuq23XvtTK/R5qQzkxu3VFdmUMiP3Pfg3q3bmpHfrK4mLbMvnd+xZbOlkeFisVmvp9Pp3Zu3zc7OL5yawZwrPxwYGj9zeubXj/xK07QoDi67/PLBoWLC0QSozdu2Zwb6Qajpk1PXPP/ZzYWl1dVVLwwIIcX+0tzcTL1RGx8fywAAwKGjR0aGx3K5nEISEzh58uTd9/x6y7nPKA2NHT20dHKGRhlWLIiwbw0F/eCnfzK3MC8QTI5PzDx5lEbiZQAA8NBX7ohYbFjm0OhIMpmUUq50Krsv3P2pj316uRLsGRxeXKrEXFmGnjRSDwe9cvH0fY1CnOrCJDqGLr3YtBDG6pzNOwXhvkIKBEIQKZakltGf9+ozM0v1USvhONp9dx2tTZWzGN7088/f+OFXJy75E4APAoB+0cUaRkQpxGPpeYj9O3jw/z/G3r32G0efs6mdXV1ZqnWipQBSlHYk0zBFEhjgZ159Ybtczg6U7rin0uCqmTd++lefePC2E08doeF5GIGuET+KEHMVgOjNG8CRoIAZKAToKXJepCQAIVxuGUizNg80c/bo7NNP6ar33f/D911qYKRAEgJI19tBbBMiJSwJsJVwMJigtHpUNIGjoGPAQIzTuQRAGwAu7jPPzIVBzq4s+FmE71s++Yl3f6gbgEUkqZPSFBYAGGFNKYwkiK7AMSVScvhvusC9J2yNfPNHOg4BHHte0rYHxga+FohBjRAk7/ze9659ZreXBH/7vpvv/cW9b3jfa4ong/Fx4w9KffNdb/rnj90y/NeX1jvhtpHJDSLifpgu5KbPTEcBSyXTPBLd69bl2SqpumppCkBhBBJJobhgUikklZAQCwBEKYBBCIDQCcYEpAClEKg/WlFhTEHGAJCql+ey9ts/uKXvl+xz3/rpOy7fybgPSB07eXTnrj2+iN1OGwwsoxgAYcBYggQkn6YusBZ9MfT0+VRPZ09JgnUeRRgRtbz4my9/NTs4seO8c8KEOdWqdt0QE56ANbk0P2vI1To53ixeuBOefFLPJeJaXeVMvKvgi5pZifB44eijv9syfnmzlAun5+yBcdyhShJdB9FyVadFRjcq6JB2q9ZYRRQL09Y5QhpXioEyJA9LjmFY6U6zkRsYXanXwjiWHNJIaRyHSgUEaRgHnMWSg0OFwqXRHr8/OTLYPjENWR5jpZvarj99gUBKzlSKbT10mzzt7Hjl88EPGssVukrA1CLlr84vHnn8iaGhkeF149iwOo1m0Ui40rPTSS/wKdEZjxnE0A4dx4kN5bc61EajuzfHIPK5EarQ8uJK58mZjbv3RJVa70FqssZKI5vug1I26gQZmmAC5frHOscXs4XBVq0JSB8oDCy3a0IE/ZlkwjTXD/ZPryyZ2aypkUajnSjlK9VVGsqBXCpqeYFghABHQBTBgDvdgPG/IrT3Pxr/Wf72f/84lILk3O94QgkOGEkhKHQ13wSTgECjlMecEsBG78IYlNZdNpobp0YJ
RYwD1wEL1ivd+z63LAv1fBJQl0oISkmAdDZHtN4dyBUQqaJzzh44cQxUnLF7aY+UUvbqyUJIEFJhQEgphHBX108hhUDhntlCTxWrm6grUFIpjAFjBRIrJREIhHAQKQZgIjAxxZL7MdhOmmra5k17D81MZ4ZK+WIhNzTQqNXcVqvYP9hqer7HLTNRGh7ncdhu99AJk3vPmztyjBKjv69EJHX9MDfUr1lms9pYqbXPu+Qiz/OWFpf6+vqkAgV0bOO26vyi7dCiZjy8/7G+Qn/Db5TLK+PrxmIcZJMpDfDs7OzGLZvvv/1+hWD75q1TRw53Gu1CXwkhAoisrq5u3LxpZTlhr12f3WfvWphfWV1dnZo+ff45e9dNTpxz8cZyxAaHEsfua1W93JXXrveXTvzipwd739YjP7qlEraf+6qX3HvfL+cfnRpcm5aePHg3oqTdaHY6ndAPlBCmXQL88tT4IC6lOpGfTaearIZ1o724cMNVz4Q7bwcAYTE9tdY1ZEJqxIuEIbVkKUu8sh+EeQWCGFJEZtIKW43ZEyeSRH/5X77llve8/aLLxm97oP3Fz3212op+9JV/+MfP3noYAADeved1/f1FzdbWb5rIFNKKgvrY31z91zcDwNxtXxaCJSyatw0es1qjNXjxMyCjPnb96972y58HJ5uzjSOf/OCHb9z6jJlXfsBzm4gSouSJU6vD+WRu04Y3b0gCALSfbNbmXvKc599+VADASy8Y+cFX9124Z/ITo2dddNUVnufdfe+DukA5hD3OJUIayIXjp48urpCDJy2Aa+/9/UAU3fjtH+SU8xQNSWkwYCXbMWt7HYJAKgKyV1pACnGCiPq8UG96qm0jMUpqYFLy60dOb82kchln5N+TnQezRozAAGkqsBF1Q0YwKCl0QzNB+Uq6saKI/suyMawbZtwGHXCsVlD7eQAA8KH9gJR+GPlKB83HL7xq8vu//jEYWwCAGHm/dTxEEitkKSUAlNIIKIW4lPzfCVL/p9Ht8nTrur3ggxGWIAEYkgwJd2F569g6mYCDXjsP8MTv9rNqo1uL/8U3f/jFL37r3oMzf3rh9W949fgDX7xHoYWqwDtMuxL6wcLM+nU7MIIgcDv5YGFmPo6lEAp0TUqJ1/AdBGMFwKUUSiEFSqE4ZogQrBSXSgFBiGIMWAFSgJUkGDiXCFOlpFjrAbef+hpijpydiamp4gB87JOfe8d5/6w5wrIz6difPT1XHCimnYyf9tvVCiWa4JyDBIz/KCjandN60ZeAkoAAYwQKKymVEkrJZDKjT+ZGrt5VWLctURxNTNHqfK/UOTV7cmRkrHtxrPna7MqSU0ix+/clhoeQ57kpaoQ80z+gqk00lL//2z93xoqoMGriGJSgcRzXGx23jVmW1Ooym9AxFi23XF3thHE6keCxj3QpsEORoVSYyaSX5ls0368MPfagOLqucuxYglJlcCW4LkAijhVREohGIYqjdiBFvZsBt+sNkrTbrZaVSbAoCA+fIKlEIpmGvC47LeL60ZFZSlBG1zODg0LDYmpRE3D1c58thKotV0vDhUR/SSUd/WSTN8uWVNQ0rHQKKFo4eQYDjTXwOq2+XHamvGQ5Vl+hqDB22mE4prfYfGK0V+ur1k7nsrkwWG0dPFrqG6q3Z8Ucm/3FsYHkoLEOTs7N7ikNz87N5Yf6ql7ABBjK8ENJYmomIGVobakcw9o4se7YmZkl1/eENA0jlzAN21IdYadtWF34b+/9/4XxH9zY/o8GUwoBCWOJMI4Z0hQiCCEpkFQUAZMQKYEwCASC9zITjoSug1QCKFZK6oBAoKcia0I3gjC0DQJCSimURIgAAiy5SOeyXtR74omOMWIYhBt2EsXh8myj+3ullOBCgSQIIYwQJl0iolJIdk1qumy8rjaNUmhNoRKtfbx3joh06RVIQcSk7DqLxUKZgIHUy3U75+w7OnHR1YNcxpzzhu9lB9cxN9LAyCAdBCjT5FI1XS+3hssLG+1i/4CeSbdXyjiRrK0s5nMZYttU4PGxSTeqWcm+LWMjsecHXiidRHO1iZPpluoM9GW2nH92uVZ98V+8XIjIKuUaizNhPRCxQIIcP3xqx+azEumkRrAmIWWlDcscSCVjxhrtRrlclkqZZu/ylqvVpaUFXTfT6fTC8sJgfzEM8eDoeq/enjmpiv3neZ534K4ocjf1ArA7Sf76XR9aKi9PyvGrb7ywurwEf383ALzknS+WUqZSqUqtTAwYHR9qoOxHP3T71OLi3SfbB237phddM3NqxvM6i0L7xwsv7gbgoWzOgB6QhGOFBGVYEYvohgO0X0OzDAcmUiECO5VmvHmmJgcTaL2pv+SjX3gEveLCbc/+kxv/JtIsksjkt4zAwVkA+HHzFF8+Kpnkd0AsQAJw3Ksenn/92xxbowaaLzdTGvTrZgq0xdg4LWo3/GHhI5+++fAtv/3YFz505XMuOHLwqxuHC59qXE7S9tA647FHT2wlPbhBvSxvveXBZ+x8ARy9BQD++q8/8OC+r3/tH99fePjkvrvuOmvX1pGJ3PxUvQ9ZFIKAgiaUztS520aXqyutemzqyo3EVsAh8p4q19KH7sXXXd8Jwl5hBtBTRPhEyuq0AwQYkc8h8bzeoypVJxIBdynAjB+3JFjVfxf7go7/uo/d+YN3X4mwUYNIKmyAAqKXo9gFuP5jt//bu6/FiE+vj5ZwlKV2f3GCd0zXWoEHFwHgpk9uMRJjL372n8kEOXLHLVe86IMYenQgFXqz0xWigCLACgEoiZgE0BUSa6gyDF1KAVqrJHXxFJiCZEoZFBK5vIp5p+UaEoBIoKoTgxdEDzx5ss+M3n/+xHt/V9mjqQRbeea2C7qthbsffuKlf/+jt11wU4kfevTezAXK+sQHbpnIpSGMUrbllAYP7nuw2DcwO7MQC2HaSYVBIaVxyZWiFGMCEe/VCwmmgnGlYzcKFRBTtwPGgRCKCEYEyShWTJPEl9yyDM5i3hOR72X2murh3dqdo1bstezBO35+v9Cg3KiW7P6NI6V7ykdHRydmz8xu2TRZymfb9SoXsutwiiVXoGGEpeKAFHR7Y4CUEoAUBqoAK8RA6IahCbaqUBZqoZXYWBqerD8+n5vYlDu76+IKppbSBga72z7I8QvObS8seIi4S9OWbmSSKRHzwO9ESmT6kue98GoqQ59Kue9QoGJr1PKDpnD9cKWs6Y5dHJNRnVXKUa1tm2ZAha4Q1nQqmNRkWk/4rXg+iHYU+2UEMvZlNo3Ted5cdQw9VlxwqRNMDPB88CtubHqdgSir1ug32YzjpJsLC5qehoRjZAlYelBvyMUKT9lGIWeAFtVbhtJkJH0HyMBIHyVRq2UUcqWzhtt+qDVidKYu04Al4p0Opaq8/0RpfGRwfT/OJDkmpWSys1zZuHUjKBREkTY6rpaW9GNHDaCR6s0v2UyuwXwch51VduDBX42nk/q2gQ3bxiBZJAlvi2bcdvsdgwMDbc/PWk653tZMq1xvtSOf+hZ3PUPX/Lbv+XWMcBhzoGAaOgAyqF0LGvFihT+99/LfWIz/hxD7n9/
z9PHfW5U//Q1rGzrCXIhQCIoRkkphEAow7r4FKQDSXVxyeEofQSriEF6fmimObwKF1j9415mLrgTZC8/FDevmTx5WOgFFEOYKpJCoFYhsKYexhnnvTJhkNnWYYJ2z9uoNudLev3ZwKbngDAlAGigpWfcDXCmCMIAi3Ry3J36lEEZCKKUAIyyVQAg0AVwH4ACY2AI1hWognEVK9gR/EUE4duuZviRhrLL/lK0T4piGZULCYJoSnEUoXF6a3Zjd2Vosc68zPT/X7eI0fT/jZFaXqn17d3uzc6OZbFBrN5dP9g0MtVBbFyAjDgp0Yrs85gxyhaKUknsibgexFw/3j7iNDorD1unFjJGQg6WUral8mnntbDoBttPxXHvjOrpcwzqplOeJjfrHJ5E+KC2y2pzu1n0tmrzgkssYcKoBbzfjZiv0TW44v/ntr5r6X5jKb66CQSyqn9ULwB/61KfrrWZxSN993nncdzHIbgAuXf6e7hsm1m6eLMDHn7qTfB++fesfb6x/eH/3p/Wctz79ZqMgOEO6YpWTR2fdpqUzyYBxYSKw0hlqGQQwE6quYoXg2R/5+iuve+YP7/wlyXS0lXkOIYy9GQAW9v2T5iQkALZsEKpebSimYMsbAeDYoU/oplPu+Pc+fKDZ5BDrKERVmO0Lhw8vLPzi1tt+f/vX1+f58u23Do0OHasEb1v+lX5u3+2nph/75e3j7d7Ed9Elz/mXr3wuLXpLvHsfXqRL+1/3hr+SdGrX3s0zR2dNGxkIIhTEiJqCM2meXFm5rG9jzYe0kx3r8LIB2Sh5BNynwM7rNux9y9+/701v/RsJiGOBhciWekskgvBAwVpZ9b/99Vsuvey63rVCCJASUirQkAwX3NCRf9QbAQAdxy1uv/ej95xU8V2fv2lptfXx997eQZlzn3/Njj8Ze/zHL82Pq2RW9975TkgWQRQBz3vhqGYlACUA4L1v3A+aq5arKDgzdv03kAL43NqXmZIpizZaWCqpECEICckFgI4JSI7WeHw9KbluiqsERggpGWtoom9sy57tmK8uP7n/8aZIU2AcYcAJRFyksPQrITw6LU3TOx4CSeNWK9l1YbrxTd964Rt3v3LL1Z/4yt3XXXf1kwdnr0xac43WCIhCwiEEr9+8/vHHDhCsOwmTC6ZAEYoxwkQi1DWoAtCohjEJgoAgJAmAAFO3lAQCiFLC4lDXKJcYgaSU6JYtUNxqRYAURujmv/xuFyzw5a/eDK/5GwBIGXpNsoOP/2FMlY8weLJVumo0iZJm1tZmZqYZ49Mzp7ftOiuK5dyZM4AUIArqqWZytxPdk+HrzVZSdmVJEEGU4hgo55yXSgs/OTmZTybZ6r5fPjC2dV03sg1u3gSdXgXFkXj+4MGRPduW7nqIu+6mndt9JI1SBmGcUAhCSRgN+7JOx9/fKu9Ztx1sSg2dWzTyI5VIpYALL2rUmhwD1QhF0kCUMWkmEslEsl5rHys3wU4nHKddXhSgq0gW+/sbrdW6Fzs6th2jHbHIE+m+NDfJ7/qvvqD9S2j1zs3ExG3WUqlk3Gr5giUsu1Mtp/ryMjb0hq8lU43YzQ6VQKNxo5UMVDvqVHmYLeUZcPA7qWyJqXqL40Im63kuKSRXAzd1/o4QIa/lmSHXvJhXgmTabrmdZDKJgvDoz34+sXFyYOtG7vtc9h4x6liJGJuF/tyO5OTZWyMVcCScTHFutpbkrtQNANVXLExPT2ezOcPQW822BLF+bKJTrwdcuFFoGyqbyzigFlaWkSKdjttWaqiveCbyGHCKntZ5+e8txv/LDfivEt//6XHgP8ZyBYAx6jqxdUu9CJ6CL6g1k13UBT50hwi5bpEwEMceeSSxYZdSvRuxO9LDQ/NHDiqlSSmQQoRokeCaSZx0TgJWohenSfdEAIimeV79qcmIS4EwRgjhNSOcNZnJNSeb3vkp9PQFTBcUiRDBoDDGUkgiTI0yxT0GRHXp8lgJqRQo4CAtPeFE1U5xwzog0fz8fDppBH7bJDomNHC9gpV0g1a9U7c0qlu9qXX+2LGGZqWcxM0vedW1f/bSyR2bYb5TyGTr9XoOJzo2d113aHCYaNTUdUMj1Votn89rExsrU0u2mXTSfU5SB1uLZheVHwUdl4Y66oR9A0Mej6TnRWW3fWgxvWHAijRzcFh5LlJWs1K/53s/ufTZe7vnkChlqs1aOpuJI74wV8GMZSwVH53zauuH8ztd1WI5vbywWEknewG4M7cKgqc0vXp0QcgYYyjC/51BEeJMOE66MN4fNld0Q3N0s+Z6gupGGFDQH3v0kAD85ve/J+TOZptybv/Tbbefub58yxf+mVsDlPUsDjV9M3DSWlxpNxuu62NKBtdEedIjO+IgKCTwq15zMYDNYqnpzvH7f/vjz/7mmz//yPId+9BlCePUUfbc6xLL2tb+JthlGE3e++7v/ODvPrtvsSeqnmzUJhl6412PvhAAAD7x2a+U3dpbvn8bSAdQo0TNvih0THyCSSRTCRU0NEYBvXap4jfKv1tYkhoZSliPRi3s2OD1zLw+cMsPvnD3757xwhdgQBZBqVyyU+/t6jfpunPPmpueDVj97nte/NQtiru9RMwZ14bSKlT/LgCf/cKX/+RH340EWACjG/7k/b/+WAdgGZp7z+178YtuduGMFiMDtAgcg+FvvuO5Rx556MOfetd8JV4PAADh9Eenv3dXu72y5+N3aojN3fLxK97y/tMAAIBCWtw4Wm4cIzHGqguhQBJUDCoG6OKT/gjthR6/HgNwwEOZ1Obdm3B9qr1c1jKFXVl1+OhSQqMaE4hgV0R6Mte/bnDb3/5k149u+8I7/2QhYgp64PZ/+PDbKvtmt91w0aMPn/nmbb975p7z/3DqwNe/8AHQWtF4aWVhthX5515w4akTZxr1ukaQEBKwpgAwll2+A8YEAJSSSkkEGCtkaBolilLChYqiEJRECgiiCsVxHAoQAphlYC+WUoJIJqDjAsCr/vS6bgAOdcy52nj2ts89soAxACMqUqQZDRWGVhrHDbPYdpsHHj8QBgyB6npsKtAAuHrK8rRLeQTV7QGjbjEOiFJSSg6gM8aEDDbu3RpR96Q3P7p3k7GGA1g6fkx5YVeUQxaT9ES1cfjM4Llbo/m6NjDIa/VWrZMvFWMQ7TDCpmE9tuxn4lyx1JpbTfQlZWwQhAY2b2KEAA9iN/KjGFOEpcRYcSn70qmmZh08VXajyAPYtWm9CAOFCNEpC0VfLoctve3HpiJtP/I4uBi2JbPZ0YGw0srRNOzoPXREo4bCGCEuwlQu3ShX88WBMHSV5+H+LIDMtKIwqgR+mMll4+G0UeEpoS8+8gQyjP5tWwJvtj67mJIoIiUnk4EgMO0C8zyuZD5dUAqxQoLYOnRCbaaNITYtfesFu2XbhZDFMU/QNURR1TUYbwYVyCTT/SUjxG2/7dSDHM2ffOROI5s99+zd5Up5eGgIAao16lIIjLHnuplc1g1CrOmtdsu07DjwC7k+BKjVaYYyVl2EEJJC/A/6tf9bFuP/zfhfO45cE7KQSiGMEIKnIq1Sax5EAH
+UXQYAiRiTmYTRbDXcVjORyYBCayrmIAVLpzJB6FKdiFggpAWMFcYGOWiCAV5bOxJAGCGE0Fn1yG21npqMFFISlFSyy3HvhV0FEgCrLijyKVhIF62BVK/wLLurBS4VxVijNAhjX0IEgIFgJCQHTIhSimqq0/SFigmgUMZKRslsOpXNSD8CzpRQ2VI/MLEI9ZHdmzvNzmCuV4I+57pntU6caJXrGy/eYwPSV5raxEg4PZcEGuaspAaGYQRBID0pJWCMeRQfO3xk2yWXlIZGH39oX+WBBzeOjx0/fXx826aB9evy9QgwtKSoliv5fB8meqKQqEk9YSa8SDT+MD1x1sYv3fyvJ47MP7YybWw0nw0AAH67Ebue78aIELtQ6Ns4jkr97uGq+M7hVP6nlljNDI4GO9NofbEXgJlGEaWREMiwlaBm0m79/qucx4LFtm132q5h6s1OfXikdMvpR19+01eRAQjrMuCYSCkQgDIQjlSXcAnT3/ubiYnCAl6985GsjlSAYX1fLr9uSLjGaCIxPTXlLbiOjAkljx7Z70r2wfd8OEQxpWjG80KttS6X/vW+x/vPvey6iUmErK8CAMArL36DmU0dX16Yr1YwwiSWCcC9goj9DB3gKYxft3+2GeADAB8AgMt71LcusG+t0QcfBYBfXn/l2suBTcOvvvGGO+dnYHAUABbdRQAADr36qwgBAMLu7d0GAGAAoODQgd7nmeiKkDwVfQEglvJYq/IMpLCSe7ed3WBuvtB7QCb37izP144fnvn6267++p374WAIAAQ0AQqIQiAwcCM3uHvrONzyx+fwuz/47nefenHl9b+AtZv8PZ+D93wus7an+6W+qvviwk+sX/v921/z+cJA4e9/dJiI4MQ/vvh5H7r1k+9+K3z8XwHgzGOP1ZuMgGKgKMJSgUBIKiWlwD1NiX/X44Su05hUpURi13l7oLMCZiq1KQ8sVFbOSmX/sO9ICsCkhAjYuGFDcbDvnungta970dzp++/5/GcmzASELgDcfcvn/uTyd3/501/+/JfezRbkzZ/422de+7z1Gy8/1nyywzuOkwp5fOzIsVbHD0OWz6Uxl0IiwJrkAXQfVEqkEAhjXdPjONYJ1ihmLApZKBQwoSzT4VwSpMLIt6lOCNEolkJALAkisNbuOnnrrV28a7kjw4fKX7n/5I2vf++R0w8+/p1vv/AzX108+gM2mAIlNKrHGEVRAKBhjIXiAEQ9jf7bm1FRF4KNZFdkCPWW9woExnqj1Tr9tZ9ue95eaM6hqvjZt771xnf35F9SQ32JkV6Zd/rI8bxmaqsdxzSWimlwdEukfvvrX6ftxJVXXK241BUSoyUt56eVYYBFYhskIMmR6egUqdZK0GgLAdQADQGLYieTrrWCJ1ZXTZrEmtaXNfpTzuJChVgOkQKkZBhl86WKP9+JeSpt5zTLr9WX58rrRgaUgNbydHoNIDZ36ljGFWpiwN44Wrn3UZpyOGVmKORArnFmISQkvW5csjg7OOLWGke//8v8xsH1o6Opob6kk1o6cAQELM3Pxfksqy0XCgXbtACAaHoYBtamHGo3UdOdPzOVH+x3NowEcWAyFcehn9Cz1LSzDjR6NCSmIcRUZsM65kadeifVkflCdtVz83v2ZKb76h0vJwEi4YsO0nQmmASppKzUw/IqzqVTkwNDq7bV6PiMIxGGQ4U8Qmq5UY2FGOwrLFaX5f8IMPX/uK37v3ccpbom5FIBwV2BNdSD83drvIAw9JwQup8QEqNYxBBlTL18cr919qXQBVl1h0aNRCruCCZDIHojiLL9/YadihiBrkYcAAAQggAkIRiBjP1AfwrErFR3TdCl+qq1jlRPtbh7/681fBH0vDrXmlYACiiAkEiiOMQwctZQ6Kqpk0uggGLEhNAIxYT7HgeEqW2D7hJfxZ3QZTXLdIhhMuAMKWybQ0BVIPN2Vvi9ZKV24HDGsVLn7hi99ALZ8WZOnxotQ0yRPjno2EkcxajdBgCESKVSyWazA0PDi/MLpx+53wmQeXoh43dWeOOsrZP77r9zm/W8U9UFsPTC8GC2r9RcroBuZtaP9+3e1H7kIR6RiasvrNcrf/b379SKQ4AiECvwxm8BQNDyBocGY1MD00hypSrtppS17z4wQp7YtelkY6UsH5Hh/Qnjkst6AZjqRMZSSJZIJKK2H0URCMBY2baphKSE6LqeTiQRQrs276YSENcYUwQlJHMBDIQwhTBai26WaVSXqwPnr8MHOjJgSoCtCGuusnpL45ESimKgUr/tHz/4l/f9dLW9+MEP7JWOffj+6Z/fceKCQv+Z6goFYJJ8e+oMF6obgH88c1pNgaURio04jp1Eoqnk06Pd/8ORTYav+O63jxw5s/v/1hHXgv2Xvv3lN7/8z3m7Pr0wu3v3ru6uY0emzswuvOSSZ1x+5QU3vOEG2PBGALj9tpekAHEIItAoeINW5pnPu/D/3ulAcnLwb7+xT5Tv/c4b/uqff3X0jc+6xHB7JXcPzBQVHgdJUCCkhK6pHhI968X/MA11o690iLb3kvNQ4HE9RQyntjC9euj0YDKV37bpmqsvueOOfYwxAWRqunI0/d7fHVltzHnPfvdHfv2Vry+tQfQwIK2vNUS2/eNHv/1vD3x95+bhvZPr/+lvbn7vNz58f/UOQZVjmPnJItGNmdn5xcUlyzQxRVHsUUBKStmrxnVF3QGQIhS8wEdAGZeYUtPQAQEikiCqG1QJhTGmGBKO0/JaN7/9Bzft3gEH9gOAk+gt4R7+/iMLCL4xv/qFoPz6Zz3vZf/49reeOsNjXkrZGmhhEBFCOYtxV+hHIYxBKgaI9FrkUq0BoXvQT1irFAIozoVhmH7QbJ6/gxNCA7nu3N2bb3iuMHt+wEQweKhn2to30N+Zms+MlvysZs5UvbrvsegFr/8zYBy4MD0Ud9yo3k5CgkxNm5smQOoEU4KQQhRBLNpec7FiOKaiQgCYlqUEmvFwIlfw6vVM0tqxecvC/IqmmxQr4BIjGXBmO2ldr0iEK+3AMhQ29XoYlJudQjHPOvDw17/Zxa/nfEk2DETlph6IFRqN6in/wKm4L5EpDdn5jEXNyHVDxuxk4uSThwyES4MDQkcobQaS5yYGH9v32PYLdloJy2jHNbcVWya2bbOQt1cb5eMHk9Qgq20lfCdpVR58vG9oiCdMCiQboIDVmEFstiZ5jDGxzXa7mRocDQIvGraMqpfYM/nkt36cG7NLCScMwlQqgyj1olCBVCARgnw+22y4nu8tl5c4wh2/QzXd9X21EgmFARBWDPuRkgieXoL+z+O/j5r/Ibv9DxtPD+1P3/Vf5cS4x+6BXsYpVLe20t3VlVdGEgDUU1jASIkUxYAkwgJLJRgDgsgaCBQTyi2jtRzoFrihXxgctDPpOOZxKAima/KsQClWShGMQCrEYroWgBHCBGPU1f9Qiqie0FU3D+9hrrr4alBK9SRz1tjwCANQhIXgPtOM/lR+cJ2K+eJUg4nAIpgQIYREBECiOI4softuO1cYSmuOQbTF2Tms0YFdu2SrLgAJLpWlU8sia3dEb
PN371Ux/mXvwfH+odO3zBW95waP8jU82Dxx23ziR95U0b9t7eoSGdcuGp80m0876/XvmG615VLC1fuzlfCQ4ljJAViq4zww4ePkoMEHD79kehRTm/0Ih1X85vxCpU6HiUBwgi8Ae6MAmGHLdhzGyUSGBnbmEPHkVSnenhAW8yIzui9xAtZKAJQZYxd4EgKwA4QsaCQlqACTMLYIh0h3phgIARAANLlNI0EkVzRycLS5cuBFELKIAsmv8p0aX/dQxeUMNM/8csEpHj6ws2bd5199zMxJQgiFodlnAliJIG6NaMB2anNRtWddtETI5NdwvyhDMZd542EUCXUAocC4kDDkD0zMshduyYCD2Grg9ggFGCDIwRQawtWgaQAZ1wR+mEgY4VD4ExsMjAQoLESaefETUD0CB7i7o6SS5C0QEeO425Rk/RnZqPC4wQ8ZM//iAicGDaWgDgHK3VLrAmCWSOEJT6HPapdT/6/he2XHYm6dyPvvmLfz3/ojNeeIlJ0KicHHTDdhSDBJtvePWRVe6Rw/f2F9acePrlRx7bDpaFjNm42V0uTozVX/+md3/3h1+Fvz6cuGrJmRuh2JWpBRnhha2GyPK2MLmyz5h1LVEmqyUMn7J6mJYWm9CzZEllvj630rvo8lf85quBGprO1Y9vlCrloPeRo6x84tzSM5rrtiSjN0038lmBGIZJMwwaHFTel9mMIIjrjQhAZ3w347mMcUYGiUBbpQxKTM0sLXEg4mgFY1ZCqsmNhCg4WEIkJAaWrF1gByGmClmCcwBQShE920UkVYrBJElSyyQOnDEGnCEAY4wLFoeBMUoKlhZL88Wi19VdOXrE2AXDTURgjAAs2ZTdBGgdx7EGKtU5lZhMJus4EgAYZ4w5RCZFayNZQmutVYpSTwjPcwEsMkKUvi+NofPPPz+fz3PGM5ms63rGWmMtIucS08wVgHnGxGHkeK4rfU1aa63jkKy11qxZt0EplSgFxHL5ojEm1dTMZnwA0KTjONLWGMuRMSFYsVAKwzD1XnRdt9lscs49z4uCkBECEUeWKmERkUpULpex1mqt0w1pkiRhGAZBMLB0qB224iDyHF9rbUlrq4MgKA4UYpX4fiaKokTFpWLZaj03N+f6DkNCYGA1ENNaIzFXW20C38vUajXf97ds2RLHymhdLPeoKJLCmZ6ezufzAOC4cvfePfl8vquLeRm/0NvbnJubn5/P5XL5fGFuds73PWttzu/oxs3MzLie013uzjA2OjpqjFk+snQ+DIdGllanpzNZrzo15nl+IJvMouCgE0CmkcDxGUTGCuFyZhMFwjrMFUFD+yxMbLm3JHMZF738ccMqmGbzuiAL83NzPT09IBfqMZwjYkZ2Uk+BHQyq72SJqBZPzTaqb/vQOwxpIvPuTTcGjdrF1z13rj5983deVyy7Q1ctb9830d3bPVXbwQPJtQ7AUt6lZrtyZNzk3SQI3Uqzt7vr8GSFZwSAxXq7FtZNjp17w9WnnozYzGzf+WdzP+x5tHrpVaekr97yxcs//F7RU2xOz3mVMJqajxK19LQTMpvWhgenSsx96le/05V64AEMFwa3Lg8mAkfCD7/ytS4nc/WrXxm2ZpPEsFI2mp5tVCutZt0xtiubf/Kxx4+Oj13/mtdWnpwRftex6XvXrVxuMrDhnK317U9vPuvK7GrwpiRMTc+OHyz0rijkM+2xA2KyL2eZ9Nx6wS2BbE7MiEZSKztldN2iN9kMCs3tcjwP/Tk61J7yhgdWbfX2PJWUDA8b9jAfjVo2OoZ6orDpxJLXX5+c9kFwi34212jW262G6+RPXXnW38K/nH3dZe0m1MYqS3vNT77y0be+9eOwqfT1n91ROv3Siy8cevLA3aWBnBIDmeHjKBqF0fbpF10CH/szAJR7B/yuzJoLnwNHjp58wSXxwem77rpvw/EjBNotlQyNE2EUNAf6SsZEEdgwaiUxy3DBUS/pzWQA25q3k7BWaS7t6a+YONEAKLoyYlmuf7tXWLWyI71ZsaaAGCMlYAGQdYxfIUSyYOVCVHIBAJETMkIFFCMlQCa1oknj799bJO2aHt/UP0xAEdrq6NhJAAAgHZkkBtF2rA3/O8PgM3HYAiUEUOip1/bOhsEJJx+//YkdC08DR2DUboaGDeRwZP2qaGJ+cnRaA5pYD+U734UcKsuZY1Vo0/bcouR1upgDB7QIIJjWILnUJtIJGIsGCAEYgiHQWjFAIxlY4xhMkCFZRpyBEcxBMlnJW6F2pBCJADCS2f2x6UVTj+C4zWu8ZIZaMeccGbfWaJ3qgYExBoABJ05Mk7JJRwSjUC5d8/wXffuLP972rT3fvPVXbzvyyu98/q7f//LRV378+vM2rg8wKuLS8lB/PKtPWvPKI3u/3q6qAzsnxo49ramNzALB/Fy9p7v405/9cemST33kU++vjW8P5yZc0qJcmJie6B/u1VrndAbAVlrNclcvhhROzxWXdENO2HwQV1qiqoZ5ESqtq1/9qrH9P9WT2Wh3z6P6qeNfPxJ2N/v6PDHLK5OVt7/xpaLVrCFS1vc4MSDTagUhWQbkZzzOLWMGFZJAx3E9JgGtEgoRubVkU5NTS2hZeqdTTjAAMLRESARAHeOtDgk4ZdkCY8wSQwBDBACdxqtNJ2fnc7SpV4i1AGCBBKEjuPBlkiSMsUw299e//e1LX/7WD7//9bSIjQic87Qqbg0ppYzSyIhzro2OoiiJNWPMmgQAGIOF6AWpsikxLSQXEsiitQbQph1QrUAIUa/Us9lsFCXWWiEEF8iBpeVrADDGaN0xWnRcHwiiJCEiQAIuGBIgBrEiItfPGp1YS+12w/f9crncaDSklEIwx3M8xqSUSikAajabrVar1WqVy+VKpZJSgXO5XJZ7WhsVJ5o0AHDhOI4j0alWKkIIxljaOBdCFPL5Qj4fh1Eqw5oqRGZzWQvWWntsbHT58uVhECqjHcerzs9K6ZZ7uqMksWBUkhBDKZmUrrVggIyhTMYhyCZJgpgUCoV6vT4zc8RzvKWrV2eabWttrBUh23LiSZjPm3qj3W6H1bpKdF9vv1IqCSMOqOLYdd2xsbF0t88Qw3agcnnOhSN4ppCv1WrV6rxgKB1+7Nixcl+/X8qCAIqBISHr6N07UoIxAlUSGigVHM+pVwNiFjXXFkq9pTMzPdP3PP6Of7thy9ZN11915TiLenp6fvrTny5ZsuTciy+rz85kMjlkYnpish8AABq1VjcAAKhYSSlL/QPFvr5YK9Cs1Wp0FYqu68exKWYGzrn2jPYD+2YePjq0ZEVjaro/m51otKQGBuC6giF1Lx0mzyGtmOuKZjMGTLRtA9ZdvmTdOoiDjX3Ojj88qqjiOH2rtqwKs/d95D+++W0AADhz3QmtHUf3j412d3cbLsr9/YKYadvaPTsLXibpwikTbDl5kwLl5V01Op1TPp9pXXvZ84XvwrEZ22i0mbn5819Zsmr52c85p2vpsNLJnn27T736os2kJzGqBNUShbmBkb/c96fjr3plYXyivXrFCps9dtcTQ6ccP759fzJVCQt5kS9n8/3gex
QhOaK4Y+xIc37lmjXHcrFXi5P+PAuqg8vOgC4THhyDVSvtL37amno8ypynZkbzyzZAJusdHn/rG1/zvs9/6KzTzqlzUz82WswUwMm0xuZVnJRKfZm81QYe+f09SLK1N1EqETW8409/mxydHbKuccsT6+qff/e7V3zy9d7Gk3vXr28dmffXlt1D0Z7G5N07Hn8JAAAcuGMbxqrOaPNJl+zfc//csacueOcbJnbc25icqVXj4d7S0Wq1kBflXLZZqVvuDo+UDoyPdZVzjg4Ljr9/dp4peN7llx1+fHepf+BEsNXKzKHxynPe8JONHlwwvI5TZznSnDEDyMlhaAhBP0NzDIDchbAqCAiJwCBwAzaGVAArZRz9fTpLHa5RCqIZWL38qZ0H0iOxUgAMU3rPf3PggmUvAQnE/nyx9e1bgoOFDMDM4dHehfivDLguKmOPAqzvytpmg+Zrw47fSEIjAGzHdGF4qHRkrF6QlBFOqJPCAlEKiBigBdJgCSDUllswggOhtgTACBiAJkILzAGrEGNjuE2vixNaALIIylqfOdUgUQAASaqnGTg8tCo0YDnP9xZx536EojUtYw2lmlHIydpUdCkmBWSB2GJZPCRz059ubV37kugwXvb6K3buPvLDRx4dOfPcf/ngp772ybcerB9o0G16dCkkovekxvWvfP5f7vjz7l0/B0AAtUhZ1YppDV/69+8ef8G5Lzxnc+PQMWHC+cZsz3CBE/AoA04GwJYpmdy1R7qZTC47c+hoNpthOeGg9Pt7IeNGyayX9V/zD9e/+/pPnfTSt/CRwrT5I4Papac+v/qQPtgPy9Y4gpNQSRKq0BolwLqc+VJILshCFCRRM4qNVmAZYxwZM8gyxFG4QmYc6TqCCULUlilOnJCIDBCzpMnatJfKFt4TS9NTIp5ygpmBTr+VUpXUtGTDmOiQ5hYsHWzavzWawIQtlRjNGGu1wyUjSz/5yQ9jZ9AC3/eZtonruoA2DUjFYtFay5m0lFhrATHdoTFkqTsTGguARIgMuZSMMWu11olAhwPPZrOIqFSczWaTJAIAxI4EGOcckaeRGBGlkEopYwjAyjTZIk0agYTgHDlDaxljmUyWMYyiyPU9xhgZq7V2HIcjI8aFEIaou7sbEXO5XL1e932fiLTWpJTg0vUzZqGBHccxIfi+n2bDKUaAiNJ4jES+63LkcaSMMVprJpiUcmRkJApj3/cxipBsLpcLggCROOcZz2OMqXQkigiZlK4r6/V6qVRKdULCIMr42d6evlKpFDebQjhJkgghiHCuUoFKxZVeNpu31rJYaa3JWrI24/vCEUqp3p402MHAQD9YmpqaMsYMj4xYk6Quy0kcpj+E7cj1uPR9HQVpQzrFIDjZLLg+SyL0MpjJQ6VRm2xbhIgsMFbsLjTHRsfz8KGbvjS4cg0/sDevea1Wu/TSSznnrbk5FetqtW4NpWZWANC9cEmOI1HIaLruZbJMk8/dXP9Qe2LSlXnhZxKN8z++N5ge/4+f3vKO173R8XjPysHx6T2JQgTkYRIj2dg6jaietArlUtPEiIKj4YBRpQ4DvbX5SlivnHLB8c22s33fg0lZs/yGJWv3wY55APjFD3743CuutMoWc8WwHfzwu99/zqUXj89NhnsOX3D5VWMTY+tWrRM8y4m6i8Ptg6MHZg6IJT0u8i7g4xjEptHDcjd86oOwZrg1P9+szEMr2Hr2WTKxtYmxKAyzGkfW9sDMxMAb3jZ2962lK19p7tzz5907L/3guxrbbx04rqeysc/xiyzMgE9P7/ldfExufOEFcsvS4X0c8tmeCZNxuu774e13Hjr64ev5Z778yde+8f2P/uT3p7z9H3O7f/HErd9ed+4rvvPpXx++5bb+08SHv/+qE9Yus0XMVOZlvttEUWNiErQp5Etaq4MH9uw/dLietM4++aSpfbsPHTqy/enao9tns0WZKzSXU5IdEgW39K6PfeeHEw+XCmLyD9/6zSe+/eZ7vhnfNP3wr/+cBuCuc47f9bu7z7n2monKWIxhudh/99d/0bcOukrdbqk721VMDh0aWTowdmgUla3GTWni3lyO6rX52KosZGURdP3Q49vaU6E+cQNFKhfQyEAPZOGCk0R5MIduR913qLu7OjPrE5IiByBZCHiOZQmYZAEYk8pgEUCAVqU0XuqYwxH+XQBOVZ1TyZ8dN8UblzW8jPus4wQAyBiZ/5oK/L8cCxkzMabJTs/WrviHr73zhI23Pf9tO6rVxY1CyXG11mMaDMLYfHPFYNKOY7SxYAwdwgW9Dr+nGB+raQQJvJWonFzoASMRESMg5EiWM2RIRmubAoNAAxFDRgSIGggtYFqgtIyD6ZgYEgAitNOVk4OfkUaiB3Y6MgQscFk2lrVj+/v7lkyPzXz8px/ovD8LlEruLHyOrufFYcKRpZ83SjdQyW9v+l3PUjH6qQNve9WXNpVPvOfI/aefsuzECzf86o5bHc7bcwcswt33Hcnm8836rABuQCNCCk0Rwqk36oN9vTMzs9c891WHH/lNfyZHaHL9PU6vF7WaErPc8j2PbVs21KPazSBu9aw/w59kc6MTfl6YUk9gm45IMEmq+6Yzsue5Lzt9b3kXx8awcpatOWPF4EjfxmT1Fn7/zV8XrXZdSu75kohFYXO2Vm/WG0kUQdBkwD3Py5W6MqWCzGYdx3GENM2Ek/K41lnKoy9TfUfOJZMExsDCp9NBNJE1C0pVYNPu7IJwVRqfAYiwg0xO5w8CALJOXIV0QhMJMErZTCaTY6wdhRJ5saubEFQYpVn1Yj6a7oTSsrCxKkUL+75rbSqO7qeBk7HUQzMFSRsAIYRgKLQ2KtHWKmQEwMiaMAwdx2EcUghVHCshBOfMpqo2yNMae/qiYRgioiMYYwIAtNbGGAsgGAJAFCWIIMC6rh+F7XY7KJZ6ENGgAbLW8CSmONack4Ekm816npcqUzYajWKxWKvVfCnJQhp9DVlawJJlc36qGck5z2QyKbY5BVipOCFOGd83xkRJyCzL5XJxHHuuX6/XXcfxPM8S9fZ2R1GCRlOChgwDyAgBwokTrbWx3OZyOdd1tba+7wZBUK1Ws9ns5MREubu7Xq+Xu7vDOELE3u4erTXnMm1ml7q6kjhkjKViKSl0yy2k8lPQarWQQAjBkSpz0/lCARhaaz0/m8RhHEZoGDDMd3XNVQJPsliTtQTAsFVpNeuZvGusdqrzR596yjIAh0NkhOOwjKcmp7cuXWYOz93069tKjdoJl1zCOReMl7p64jBE5JlMtlqt0oJ2bhw20/WvWpkBgLxbDButRCtrLcxWunp66nNzzdGJruUDbC1bu3rrv5zYp2cz3DDbavuSNWJQALVmOy+kThSSMGCBKN8zqGmGNLgIQRKBQsf60uurNY3k8xedfv5Tj+0af/rBf3zL6+FXnwGA8992fcEplupVznn5pM3XLOuZOXB0wMj41DWHjz1tkN/7hz9cce5FzTD66x//eNrmE5actSUTGI0UtsNSJfT7+lU5Z9BM33RXO4mWrlqmiY8dGwXBVx23UTFK2hoG/GB4K
Rw5suyVbxq94773vfEjr3zRFfu//P0lL76Ii0rvoSO6GiTt6cMT+8trj+cnlT0hH/jVbV1LlizbuP7Tr/nwXX/bvzMPvF2cffiOF7/51X/+zQ2v+kX01en6m2983aZ1lf2/+cOff3PzeddefvVpw16txoIIJIW+I1FwhsVSPmmHczPzYKwJ1Za16w6MPxVD7cm9B4cH1/QOPvmNd7z6ngef+O1vt1UFmWOzPfmeiuH/9ry3fvSXX1z5oos3nn8GHJrdMLDkMz/9Fgy8EADaJXbKa6+s7TrMwnl3DR778V9KPesHl5zk58qmMHR01y5Vj+Oy8bys55mjE9MRyiRMhrP5hoq7s9mJep0TzIzP1wQkv/+TV3AHu/NQD0XLKfPlV151aT18IpX433rixnvu/mtooAt4E7R4puNKmJZhAQCAM5TINJhW2h9diBNpJEYAShM36jRzF+EQE/O1JaXCYhhFBmAZ2f9m9IVnUFhkLWNcg70H4NEdu4a2v2XFjd9QC8/aFccuQQyCg22GNF+vtUnFCGDJKGILNsj5/r6MPxmHiRTKR66SxR5w+j6oY8NgjM+RyDICJhgBGQOMLCKU8t5UO2bGIIAlbg0BWglMEWNIRBqQIUCX68iEjABe0LkIYs7PPWtLY3S0sOKE+OknW6CIgDNuaEE/GBHJIiPDZBxFjLFFMRRkAtAw8hjw8y+8+r4nqcQHsiOTW847OX569O4HdySGMsJt6dgaajZDIMmkNCqC1OkBQOsEARrtelfZn62Gr3jTu+9/8EfBzLGM2wfU5tIJI8yFkQVVgebICWsrB0f3/flPxZ7e7qX96BvOfVuPWDWUDgaBPljd99zrz88/9ajS3ilLLs4vK843ZwdWr4T2ow/cNSuWrOwLgtZ8ZW5ubqrVrHOkXCHbPdCbE8ttbBC5n8+LbFZzTkSGMRdcqxSiJYaANrVBAuJkDJGxKep+URsSMEUjp0knQKp4aK21JJ7lIQIADFkatTs163TKEpIhIiSIjPE8PwxDY1Qmn0uUqtUr2WzW2sV4v5BqM4bIrQWllLEKERmDRaYNAEfGGHLOOQAYo8haSwYMGk0AytpOQVswISRLYu26bpyEgoSU0hiTLWStTYvixlqtrSVjtNbWWmQkiBtrtQLOuRCCMTRogShJEldIJONIJ45DIjIW8oWCUiptaTuemzbCtTVKKSYpPWe6k0ibwY7jOK5rLUVJnB6SwuWcI0AURUQdh6XUhJiIlFKu61hrwWrLrRDCRRcYcM6TWAkuC/l8s9lUSoVh2Gq3ly1bVqvOW2uTKLTWZrNZ38sYQK2szDDGcWJiotlsZ7PZfD7PGHMcJ5P1pZQAMDY6WiqVcvn87NSUtdYiGxgY0HGSxGGr1crlswiIDMGA1royeiR17xSCRVFS7CohWAKoNerKmmw2q0LFkSVJ4mAWdOxmctpaxiUyBRqIIWZdZskKcDSZVot8KUNrtTEIDoFpaTRiX3V8oKu4+dwTj+saqdUbrusqZerVahBEmUxmbm7WcZxcwVtYUTorVyuol8tl4ygrDLPGl86RI0d4nprUMD5NP/2EXygGmWiyppZ4WWttQrhkZf9Tu+YCrUQukyvnw2ZTE/QWewxAjnm+wwzjltSxSnvirw+fvnFtPmG7du2qzNSbtTtXr+o69+TjantqJQAA6D069+Pffse4zsXnX/Tgx39+5olb9+3flRssL+samhw/YlyxeuPIkdl9zOXlYQjN6ODU0P7xI0Xpje09yPL+UqXqTzydZN3psZmRlSsqM7NDwyO5weUgBdSSqFGb3PbYtgO1s07Z8vUf/vqMix/b3Lvs81+54dff/PW+p3ePfubzL33ddaefd0Jz9NBNP/nxGVdetWL5iU//4Vd9azbGE43MkvzNb/rizx7b77hwUdN/CqLvzMG/nr318FPz/+Mlh+7/7c/fuvNPVy5fuuI5q398y7tnFbJDOIalwtoT8NB81ulqVqfrtfnuYiFshzaxCPDQA3+bnpm45lUvuu+2R596bKK+lrkDoo3zlSN7VwHt2jtvclBxo8Gzhnc8vO11x19841f+ee2yLMyYie1PFaNwGAAA4p/c5Z125tOH9tf3HYiLPZs+9bkHv/bDJe1mo9UYsLI+Odk9NDgxN7t8ybLq5CgQzDfayDFbLmajsaaJfE1SCNImsOR2Zft7cuMT0Wmv/+6yQbfZNv6aTcVFfWZuzzj3zL/cdX8emAdgF2DQGsk+4zQIBMwSRAs6jfgs0UokYABmkZex8AfpU+vNeLi0sIIRWCAGnOC/HYCfJeyAnBAIOLgtspPAjnzmzef/81fSY/d88X0XvOtznDSgYyGZbkX9nmhpJQGsBb3g101xsHHDiiOP7yPQGeTPFuCkNFViXOq0DUf5nINIWlvGJQMO1jJUZBVY4sh8xgJDFoAhWGsBCAgRyAICMRsnRGBccCJZTfTQ6gFAXhjZABWtjlJIHV3jtCPOOUc0RhNaQDSMIYGV0gFlACCJY+Q2MsFY3c7Wd55//ouor3Vm//E5x2vM72cs4BCCyTKWN3zO5yY0PMGAMWEtSOEaYywlyMGStsjLHB9+5ODTew9vWpFp7Bkd27Fv4+YVUnjaRhtPPbFdm4wata6hfieTcV1XSBHaWAZtz8kAhlDK5bKZFeWANeZPX9MXmRAaAavAgD8ATf+Jh5545MkZAehIJ9fdJ/uHh31XMAZGaZUkk5OTxJBbHgPThBwYMgYIcRAx0iCtZIKjRrBAjKOwSUwdGcjObUqngmAMAIgxAKvJWrAECMhSR4RUdDwtPhNix8QBqUNLIjJkwVgkIGRxHAvH4Ya3G03hOrmMEwYNV2Q67CFAeHYkZghAgFwIxjnXNkk7owYIiHHGUvwXdcSF2cTMZLPZjGOV8XOlUsnPuIkKTMuQgXK5rDTjnHOOqa1YHMe+50AHjGAQ0XEcgLSVbONYa62BuODIUYCUAJDNZZRSWhMipWLOAECEjiOUUmkQV9Y6QmYyHkCqqYmMsbTn7XleyrU1ZBnnnvRTrwgkZq01WiOBFBKETOvP6a+udKIkcqVjLaWcY+RgtAnDUEqn0WhIIQq5fL1eVyrpH+jTRnkZz/M8reKg2VIqCWJ0fT9XyCYmmJ+fz2Zyy5ePIPIwbAshgrDtWtluN62FfC6TK+bDeoMDSiEs8najOTs7XSwWy91dURTVmrVSqUSAc5WK1Z280/U9L5OtzleKxWK92Zier6xbt67dbjNEV7q5TD5OyCZNIV0nK5U2jmRGWcsoWxwKa7OCOfOzs8aa/PDK0an9rsc8yeaDuN6M86ecnNm7X3KxypPVasQ4T7ECcRJLKbPZbBAEvu83Gq00xfEW/ICldIUQ0vUFkfBlbXLK9ZxsNsMdSQBzVlnqDVT7+FXH1/bPMseX2W5gFeRzaJFpgMT4wDVn0FaRgbGndhUtHYltDOAAM9Zu37X3vE0rT101uK05dsLGtQYLew6PtcKnlgMAwJe+/Plzz79g6bKVu3Y/VZfJbEYNbV4+3Nt7cP+Bubm5M55z7tj8jGqHq07YcKw5W+4bePJvT8xX
qyuueq63akjNVlhiju6b1gcbI+ecUujvEULONWa7vOwTf7ufjFq1ds3f5pqPPPrAOS86/9r3Pefuj/zyjM++58Y3fOZoA8YZXPG8Vffc9/Pf/eg7r/3qx3PPffH7P/vDTxT6N59xFm1e2W3oA5e/Y5uGIU+oBHbxRBtYw8WJz/lHPrx6+y2XHRmYHFlx+ZlufqqYTdpJ/bEjm975oUx71tm9v1EQKmrZZphUmxFCq9Zu1EJN9oRTtvYNXni4xjaeelr3cJ6y5qpXv/Vtb/44azgv+YeLxC0PwMCq+2/fUZmrrz9u0K1ERydmkvzg5vVLMqbd3HmoEyHWDNT9ZNN5Z+Fr3+wyF2ZnR972unb1IA8MUXtJd94bKC9fvfT++x5qhzGARE5AenJiatPKJY/tGs0Ih1HMGWQsREoxa9tUv+j45ffuPPJXt/HFTafA5P3pC5mg5eS7Nqxauf/AoZUkZ6mTTDr4d46EobUMIAYQgARgF+NhpweMLC3ydRBZ8KylEtvRgpFRmn5gKhb9/yXsPnssVMMZKtKAYFiMApoaOMG9/35D53uXtCwR40DGAMBUJeruz0oPMGZIJo4X9IbjEJTJeFmDLTJW6YUXscBStI7RiEwDCAKGQJasMmgtMEBABtSKrEtADMlqAWAQXCY1aQNkiThwICTEtrGeD9LlM1XLgYJKFdath6fGKkcrT+Js/Pftc2NUuuI7DqikA8GNF0pZnkCl7fBKPOW81Wv0+pXKVxtwqc3dde+Ork9fW5/cVyr2JaQMhAK5jbTHbGQdogQAlO7EEWuBC6daC7u7HT1D3/3pzV/8+Bs01TZuORVyplWrZ7vzSaOSdX1wZQxgSiUdK2nQd3IgpGVW55wIlO+X5/dP5VRDlzygfPfgMmw0qanrUFu/6rytV90mEMuuSxKU0lG11o6jwBiFRLwrJy2XVnB0uHAZY1and8AKTp7LfBcEU0YbCwRWIiDrNFY5AEInc2X1So1zzgRHJIuQhitgKIhBh/sLaSU1NboBYwgA+EI9AZGlcrpcxHGMSK6UABaRBOfgurhQg8W0Z0yUzllDIBgiYNrKBGPTAjUDTqSNJgNgrTZGARrGIJf1clkfERG4Uipo14UQmYzP0E1zWSEYAHDO4zjOZn2GIkUda22JTFpiBSAi4zuS+x4iGmOU1kxw7sg4CtIyQBxGjuMYspxzpRSANUZL6XEulVJMYMracqUTBEGxWDx06NDw8LDjOClqLEqi1NrPGJOohBETyDrezAtl8PQDCYIgjmMv4xkwAJi2hA3pVMfDGioWi2Rts9lAhK6uLsa5EML6FhkwyzPFbMbPOY4TKx1EMYB1XVksFVrNluN4SZJ0d3e32k3OkYjnMhnG+PjhI8NDw5zI8bxWGGd8d2hwkAmmtbZou7u7rdVCOstWrFjA3UGj0XAcr2toULfb+WJx8/BQo1oprVgB9dbs0WO+42a7uslY6bhd5eLs+JzvMUIwYMP6jCMZEIi+vBkdtU0rfOBkjSYBeGz3EX7WRfXq9mZWLC2KFmTzTmKMkdL1/Wy72dA6GRjoM1q32p3vbRiolIbU29sfBK36fLMyO+O5cvnIsJv1D+8a7Rvot4BLC93jwYwUy8andnrYN1fRux9/TARxSbJ5S6YVQtkziVJSiEJhbPfhsZbukRQaGMhIBlBJqKnxqSPTZ1370g1rtux/6IFk7ohQc6dvPAFgFwC85K2v7fIKSRBv3Lx+81kntRo1pxVt+8PtzbL3omuvaTHcfNJJyXhtYmpu85bn2qPTIyvtuuLxh3ftnZuYMHFSm5mbmZpcvX7t2COPDp511oPbHq02W5lMdr5SOf+SC1vSvubVl7/m3VeP7j6y44d/e893PvSrr/8+6Fu6+blbfnTZ5kYrc+tPbhreqIbIefm5p3778/9x+qs/9f4LVrztlOf/7mc3HeRwSlf/gXbtmIjLIByDOyPmZvXf3rvi2GP4gS98afa2R698/c/ecvW6A9MzWz/wbtWqlPbvbntCxj3+bDw5N2Ha7bHqXK3RqjWT2fm5dZvXgMOG9eTemb3dg92XXXzth9709itPOWXrJTe86caPCtZ+x9VrLz9v2bf/svf2u8d6yWbuf+TzF77l0Padg5vXi5Ur4T3fA4BlW4832Uz1sf2zjz055A34F5xVeXRX75AH3KlNjHcNjFjPBxX1F7KtrFerxrGJLIOZSPPDcyHxXqsMB6XR58y0klouKBfwRf2B38xdeM1S8EZnI9nbCWdo2+3V61bOjU00Q+XTgucBYUJkFhFASIzAMuZa0ikXeAFkahEAgT0rp0XsyEGnsbnaaD8riFpiZkGZ/r8x7EIhh5ElACL4/BWv+Jc/3BI7xujIDUoANQBoeAAgrCFAAwBJiPXQeo4NYgYEcoER0Gw2TJ3qcZSTEADJBew3B84BEk7MWEQrHeYiU0ozEr6TRa60MkScmNDGSESyZDj4EpiCyFhG6R4kdVTRBkERAEObgMON42CjEh25495WrI9g+ikJ6JSHIa2jIUB/X27FihEk/sSTO5evGjp4aAICAACt1chA8eTe1tJ5r0YT29xW7/4l4261HU5tPf15t/7Hr84+aWUsiw2uLrxw1a+/9ZujUy4wQZg4Dktiiyik9LSOmo1IONJA4jnd2+7dZqJrMsU8iG4jKiJXoJkWuELnHCDhguMCBUk9ROCJdXK5UMVZjzkx6qYp9A5ocmQ4X1yyrqXn5uemI4jXXXDd7R/ZVRJMkG4Za43RROQx4WeLjAMAIAejtFKGLBoLxiimDdfaYY1CoZDNZohjhNbhUoI1WhlBnT4fIRkyhshaIqMpAHKReYxzDsak4GjLtVKIKbkdUjYSYwiMOBNpydrazoxFRBRIRrtSIIIyhpBbIhVoIkCkRdQVIi5KWQngVi84MgEgcikZIiZowFgggynOizMAZhBSix4iQ2CRSwekEI6ULoJjLHOdHHKudAyAhUI+iiJmG8YYImDEiBCsSd2kEitAMmJgrU7TfTJkAg2k0pCcqmQ4jqe1Fkwyzo2yWimtNRkDxmhtPM+LjQUhm2G0fMUqY0wcKdf1rDE95Z56va7CSCk1Ozvb3zeoiDKZTCafC8MwVfNwHQkAVtn0GxwlMUfmOI61BhAYY7FSkokgbLdajXKpy1rbqFfLXT02MQhMG3Icn4gI+Hyl7nleuVhqt13P01onjssRyM1kgTPhCkQpyKSMqe7e7rnqHJcsUka4LnEWhcoTGYYsDhMGQkq/VZ3VVhWKufSrzCRzfKc6N5sTHjJ26OkDKzds/P4XvjY9OfFPH/xAY3pyBkfPiVY94M/xctGMz6EFLoUNddhm/lBeV+eypWJ++cr6/KTjWaZ4bCjvw0zUvvnNP3zRV19Z/su9EfRwt65Ccn0ZhA1jhO/7gsl2vWFUwvyO918URWkAblTmPcdtBZMHRnev37ghYCHP82WFpWQsAwBT7untmTqwZ/mpzzv8+/v2Pr3nxDPPmBk/su/oNCLGUkKp2yZaMgZMTM3PWmBTijEw8wpVusFGPNqMfnL1F9791heefvnlU0efhPl
ANDoZla7p+Vy8f+zoxlXr9XTLNmNvoO/SG94czh+64eWfbY82P/Pzf57KaDHbfPCbX95y9bXzk1NZS7GP/WvW773tTimiwROPO+WaS6LZMTXbtWLdSRt71dKBIWroo4ePJS2x8+gTh48e7Sr4r7nhsh//cNtHv//wg098tKfsXLTsn3ZkdVcb0IWb7/unFbF/yOhVjvjyXUcev+vfu13IlUt/npxexpx+hKMMq476+Vffe/Vzu+Yfv+vA7OTsx3/6vo/9sb3On49nz/70O/pWrkge31s3ZOarCNW4Hs81Z/0sTM4dzhZ6Z4/N+sLnaO7ZdsfaJRtOOe2iHsefPBpf9LqrT97Q/5dbHnrDda84bvPAgT3bsj1Dv/rha25/6PYv//sf7n5gn3JLBofaO2cLXZ0I4c4zqDUnK7MsZ4ornhNOTeWWrGvXn8jyfEZmavOHnVZPpE3G8YfLGb0kmG81Z+bCakPPxEGX5AnjXFsGxDkAQbWit1xyxky9fPGuJy659kqYmi4FnaC4puf0PbMPArDeFSvHd+8aWAjAROBgamMDkOayjPkWQkzxSASIFgkJGCDZBXeFTgKc/kcWiRG14kVkdQo7XWD7/Kfxv82JFztxNi0nkiWkyLZZDAAZxUx6pX4Ef/7Muy678bMpDkcDTdXCfomANracsDMba3Nh1gADoxB5TCQWX8UkBGARAbjAnMuSiAWJNpD4XHNBTACSEUzkXa+dhArIQ5YRNitYvQ1t4AQGUiFCALTAgbcCS4lBhCBivRnYF2oNnIG0lBBoYMgZ11ojkWTMGMstDPf2d5vGuueslW5hZVbCw0cB4Hln5Mh4Dl+9e14WWY/IjjF1/K65u1/1L6+C1shr33zDrX/95Rte844//voLyzd1ffqd7/2nGz/+mW/fnhF+oiNAIkrQCGaZQG6t4bUsifmWSJhXkDgKs8h5nvc2TMEXAkAjQxnM1ffu2t07WB4cGeCYgaidFdxWEwskpC50cXCGYzkMUZIzMrdxc1xTkNjZ/vzUH5cKEBLJcMGJUqPcTvXXmERrbbUlIgsGjWKkOUDey7hCSMY75kMICMi4ILBAiGSNtWkATjFp+XyRkCHINDQiMGMsWWRiodzcMTS0RIR2wcewM7lp8V/suP2m0RQWeMXEGFgLHWWlhQZySil+9oxcnJdEf997BjBAQCCFtDbNnxlnAABSOlI6WutEKWNMai9ojFVJDGSNJWM7JyOy1mowgEiOzFmrU+ERzqW1VmujlfJcvpAlAyLv9IwRrU5r8LyDBQdMqUSJUblcLg7CZrPuuW6i4jBoOY6z46ntjLHjjjteKbV69ep2uy2FlySJMokQolAoIKJWSYpJLhaLXEihNYOOOCUhaGtMkqDD+vr6GAMmuCPcXCEfx8pomzaP01jueZ7rugDQaDSEcKwlxjgiCu4oZcJWux00kDuedBAXaE7ZLJcsJR2ZRBXz+bRiX8xnGWNxHMVxwgQPA5UG4DgySdzs7ulR7Waj2ezrLzerU719xdWrlibNquNyHspwfpq8JOu4nsOVMZKLDEClVi0N5jlzqBZEaDL9PdkoCWaaee5UbNKbkXd/77vnX7PZ3TwIB8ZKGEa8y5HcWGmMCcNQS8ulyOWz9VbYmQa6U1/zfR8RVx1/3KoTT4AonBgbS5Kku6vsCFmr1xM6UJzMrLzyRcEDe8cPVXuGehvjh+PaVHfWqbaTY2NHB/Ku1gE5HG1+44ZND29/CggZSKOIg0AgTQYAfvHEru+/5K53XffcT3/i7fuO/rm80HUr5HKTjcqSgb6JsSPHr93woz/89sHHD7/tfa/dee/NNx/JAwl866f+9XPvHzznnENO1qmN9RZPiXhtaffpr3vpVR98zXV33Plg3u781RenmqOj339s9sKt5RVS7JxZddfDD/7g8y+46XcPnHQu6xp5zrJBb2xf+zPfuvlQ1jn75A//y+rcjqwRbViS794fz8+0qLevpzVzrJTIcjn8Y8X1Ezsw2exl7m5rFJT7veq2bR9btWTFvgd2EK09fv26nRO/+NpNrz3wl99e+LEbu7uG2g/vAWuT7Xu9k1aGSRhHDaEx09e/ujdva9WB09fkeofG5uoXnHy1UxZSyYOwf3V39+bRk44cPrb5nOWOpltv/tVFl75m2Urxx5998aQTrvz9T780Hx88NPFoT1fR7/Frs5U+AACYiiplv7xq7XPGDt/0+Hc+V+/S515zJUGWEh1TiBbmDx0NEp0QVEdtoZTlLNm8YsDLulNHas3pmooV42AZpkhJQpArT72oa8NF2evCyiRolD3l9L7o2PB8pr1j/4oA25wr3UlOM8AMLMqppW0tGy9QMRZjJT1z/D/Hz503xemRZzF+Uz5npzf33xrP8IY7QoCQ8d0v/9vHfvbj7z7weFXZxjPXyZN0G3D7Z95qmLz6n/7NCAZAgMQWXjdMKOcDERhLTEC48B0BgpwQXsZphlHWFbVmohGIISNIjBUW8tJ1pNSxmldBWmlPLJtvgYsomCkxbBmhLBEQoIUUhSsABXIkkRD6ImdNvpQ5OtMUFjRwIFpwwgXOpTbx5Ezrt7fdzTTjYI9ftWzt8iUARwFg9ZKzKpVWkMETh5bGyfK+wuokrp7H+6947j/d8NzLnv/iV/ci7D043r95qL3nDqU3fuqb37373ou3790lpQMIDEWiFGdcgyIO6Lax2dXtDhpqtZSfyzqsGNQ0FJUgQ0SGOTyTzfb2dnd153nBMxEzBjhnmiwRWIZEVhgtHSdOlOv44DhtCF3Eq557UeasS4RKJT1TSDhgCkECa7VJbKJJG2SANkGjHDQZIXKexznjCzVjImuQkDOWMols594xRgAMCYQU1qI1i2StTkxFwRakIhEBOPBOOfpZITNlKKWgZQBAQGtpwXYwfR4arVKAFaBNtTs66pICFyZaOmE6e07GRXq7gXd6LZwWqFDI08YqdKi9WmsrJDkOV8oQKQ5SmyRSxhjjcIHIETvPtxY7YGxG1gByJh1HSmmt5cpK19okTiHK1lpEprXuODkCCuEIwVJKlpQy3fmiSiTjRgiVJAAgGGcSGWOrVq0yxlhrUp2NXC6HiMYSICQqjqLIGANEjDHXdV3XbQeBMUYKgcYqpay10nVymZwlqlarjuO5rmwGbZYIIQSTIt0ZGGMymUxH5UOpFH1myQJwYwyCSa0bXddlwjXGSM5c7qYwMUQSLmegAaBWrQohoigMw+lsNmuMUdoO9PSZBaJhT7kn0abVbKMOyoM99bm5SOnLr3sxGD178CBH0Ixly3lLIeO8UO6qzleM0VkXx8YmV60cSRxHJLHPOFg+sGTJ3uZeljAfedBW4zZ684vf/Yu//DxhraZuczBB0IpVgMg5dxhjSift0BSLXYvrVfpDo9Xq7u5uTE/Pz88vWbIECfbv3WdXrerr6Z2YmFjRVTwW9bxh5Gqi6lhV//GTr3f97J5RTEBL5lZjfWSmvmbrqvDIwebRA71LR846ec32HfuDSBEA59IYLQDe/6NbXphZMqG7/nrvg8DfHc7PHrGVnnTJC/Sdt/5l3a
rVJ285cWrf6AknnzZ85qnrLjt/Sje+2ff4h781Ic/ZGiaPmqOl5Rdd1qj+9Zef/bfRp5rnverUz//8q56Jz1/aGuk/RbKK9IqvHOk3qnbgkcfP6132r8dd7zFxyktfCb39h+790y+/9L2H/jhx1cXdF/bTz3/Q+sTB6gj07ZbN7c3mIM9VoPVI5VivB5NROFLJHZ+FmXacyeBkEPf39Vz10pUfeOfLe3rXP/rZr6478/z8oHp02yMrlq/fv23P2Zdc2+1m5//4YEHmZqRuHde7Trj5thrLyrUnDu59cH8xMzCZeLkSBO3ZoVzJLffXR6ORlcW+tQPheHDbH7750le+5KEHdvYs7zv5jOPU+NN/vueps6+8uBFq3DPVu2QZZqv12YMltlEUUvI2ZBM4dvCJ3jOPK5RP/+4D773xS5+0WEGrjVJISMR5l3BasTHkaTlRbzIGK/qHZo8eaM7OGwVSpkAn5AyltcSYq5IDu3YvW76UmqpeD2SxkFZIAqNWDJ/91Ohva6PHBgr5g7VaegGM2ZyFYGGdkYiUqiQDQKoCnSYY/ycJSfq7xWqBJfzfjr9/N5AxIGIM3/z2t9/3p1vPOmfde//x9fCCl0CKmDH2z595M0OwwICBMpAQCkSGi21kqAdquC+HEAJjQNYsXCJnzBhqtcPEQhtMQugQMQALxJBrMNV2TBgrC4DAUVoyMWnBIONyMpAYAms7LGgCAjBAhtAgFvPMJlwpww0sGeyZrscqSj7xkw8sfn4EECVaSG6tSQxY1+NJdGTm6GlbV6dPGD8ytm7j8oS1vIlE9I7baKDZOvTBH//7vodu/8btDxwbffwlp5/9wQ++AXEmYwuGe1qP//oH3156+lnAyQLXccQY86QbxdRfHPTzjVqjdfzKsrA653SBJuYmmSTHCQEpiZXVAROyd7BHZphuN5joRWSMCckFADBkSisgjJOWn8+TVkopt6ek5qeiRjO7dKnoMFrSu2+JrCVjrDXaajKGEXEiS5ZBIjlmHGJIYKxFw5EhYrqd56LD6LWdXRd0pJURDcXWoiUgspCC39Lgiin/h9LUNuUJ07O2OelIo286iTs3YAFinf4qHMks59YCdFBLzxCLn5VALw6BjFjHChtSFBgCETFkqWUuEjEOTArBGGNsfPRIX29/WjcGMEIIia7WmhZyWcYYgLXU2fSSTYM44xyTJInCJKXkEhBZi5wpY6XkwJAJThaTKBRCAFitLaBlKKyFtCPQaDQ451LKKIoYoOd5Ok5c181ms2EYaq1nZ2dXrlyJiFJKbdPAaTnnjpQpSj8M4vSDElJms9kwiFvNJsTalZ6Og7m5uZHlywQTvuM3m02ZzSZagbaIKDjP53LpLsRo7ToOAUtUh+6VviJnUkqmCKN2YIxBxuM4Nkpb0lJKpSmO42Kx0Gw2s1k/l8scPHhwZGQkl/XjJFy8fVEYOo4TxIkjeH12FhkrFHLzRw4lYTQ4NKTCiISnOTEDmqi7u7cyXdHGeq50jTm65/DI1jWURMh50I4yPeXeob7JPVNgkYRz7leuv/3GX37yNR/5wM1fGHeg52ibMZbKdSlFWhlrLUihVZTW1QqFDgkkVygm2sRxPDQ0FARBf3//4PCSRqXiOM6SJUvglFM+/Op3vfjGq9/wnk9/5R/ekL/uusPf+p61CMwSGAScbwRrxtpe/7APxsRxsad0/kXn7D8wunfvEWPMm79+69blI34T7qOxowqGmgBOfs2l5+loBuCPADBemTj1krPXn7C5Ol+ZDxpLhweXWoruf+KiU08Oh4tiIH/py981dv/2v93/mx1f//YJ6y5+5Wf+yTGtP/7i0T2//d2j99xz3dve5/Vye7g7m9+YTD/eHJOze4ZOOu+08YefRDtwJP8I/xubvu/O05csffUtr//tHXcefWR8GVRmhMjomTUMXCYmsTXvwBLmsggl2KeEk8QVkDAdCK8Ljkz/QlQPJtv23/Wt3y89btWOux7ryraHVpZHj032bR7xB1fM3bG9p38YMtno8e0jZ2+0M7WWDjNdxf1PjK5bs3L80KHD255edsKpsZtf2p0phdOljWvijJm/e/LOH/9xxflb2OaVpalmpl0Nmnws2L3z2GTXjsr6Ewbq9Ro74CWze4c3HWf3jy+mhnpq8hv/+pPeDc/fN3vwwou2jB2oH53efuK5x3FkKrTWKpUAKfIZMFKSQbaUp0RDK/FcESpNRMYQ44wBgQGNCahkdXcvVQIdmwJ66Hb4coWZACqjK89/+c5jn+tS0Ot1OheENgvPdHYlgUUQBAnDjmLlAhb6fx9M6Vlr2uJD/1cgrGefgYgoDCMAbMwmQ6sKV112TXokYYTWQ9ZittOctgCJRc4ZdtyKAQBaodLAMz5rJ9YC8AVoTj7vVBuxNQDAAgUApAHyrhvFkSLDGQdLhiwwhnaRCI1ggRTTYFuGBKZeDp1NBgJHQp0Yo6w2lpFIlGk3mwOl/mNT4wyFtVopla63xphEAXImHGaNNjw73Wx+95d3fgEAAI4/4ZzZyv58rk+XeI/vHDr6yIbnrASz9b1XrnnbP79iab/aOT79g2suYDPV+Wa2ZzDT3rt3eP3mW3/wixe/6hUAAGi5Y8HRjuBT8xO2Zs45B774mWv33/lwFkYGegdqR+fzXjnyEDmGOnKkl/EyUsqUWMuURmuRUj1RBG14GBGRX8yYOElTRO5yqU1PyW95LSExNS5Ku5+ERNwCWlQLXN7Fm5EmRkmMiMgtOCAYByJDiGg4GbtYvk6bDwv5KyGyBRUOIDAMOXK+oB/ZUc+w2DFBWJyn6Z8DIQIDICEW9c//bjAmGFsETVOahiIiY2LhDM8EbERE5GnNuLNJTbdhCFobxrjrupxzrZNms1mrV8IwLObyjDmMCUS0BhGRc8aZBN6J9JCypWxaRCAC07FdMqBVRwSbcQBIUbgAqZgLMcGFRes4nuMIRBJCWCDOJQCoxDBCay0RMhQMrdFaa2sA69VaFMWu6yJif38/AhdchGGIHKVwHJkS5GynxA3c9R1jTBRF1hgg5jhOyhqyWvuuW5+rTk9PF8td2WwWLElkzJWO4yRJ8myvpDiOXS9DqRIIdYyuDJFSkZCe53lKJzpRURRlPF8bbYzx3Kybk1abOIx8z8nmcoMDfflcJk5ss9lMZasBYL4yk8/nhRQAMgzCrq4iEJaLZe2rOFAAnJpKyGTzXOvptT31yiTE1nUdQBRaNaIQpcsMGGEzhUJrdt4TzuC63sOjTV1LfElePrz36NMfOFBdMewnvgmjpgVmjDEajSFjFDCam5lfBgAA7jn/mF5S9rz3AEC6vi4qI6TBOS1H3gQAf9gL7/32jQDw7VuW/u+WwM44FeBUAAC4HgCOHAEAaEHn3+HTMs965pbzz5JDQ419+wSDkZEljVpz5foNrWq10bBiyeYrTnp+sH9M8pkzXnyu7pu65767V96v+ITlR3fhsmUvfuublg3IO39536bVy4tLe/b++tHf/PbRByYqfvZAvjL+t/2VqNRVPTjavdr/+e3hoZ89rZuNbgs+ONkodwAqDSkdJ
iwJSJJxtLFLXuw+/wUlL+o6/7Qt9993/4ZNG3DveGP39J0/u2X9cf0N1fjLtm2vedmL7rvvDy980bU//d33X33OiysG9j68/entOy9+z+v8ltk7emzN4NIwUm5dbh97ar4yduHFZ8tSOb98na4G47sPjc7cefZZV9xxy5dOeMHpTZMZ/dPOueAAnHnSUJCsqQ6VMnvlSnc6jicefqQ1PrVk46Z9tz28ZFNu2Usu6Xz9uZ1Y4tz029+95uXHLy3lvvCOz77j468U0kuMBgZIEQdHobDWMMakthYMckiBG2RBcJSCx5aMtZzDpz/+25uvu6Y9MV5cuqrVmr73vocuu+ii9IXUcD5o1Iut9sjA6fXSxMpiL8AtAJC3AAjuwqqVbsNdwPTnGJ8l3vx/CqWLqUKq2gH0/xZ9O0sxWANHDx1cvXzLt3/9syPv2b08fTsAmUwmDlsAKTHIEEBoQPK0dPjMVqBSicvdXnOsLVPkFQAAcCTGCAHJGN9xkahpLTgcNSNjO0lWGtUFWq0QIO+LUo6FjaQdAQfPQESAQJYDMGCILE3PpPAZ2DCJXcmU0qVi/tgkMpnjtqlUKqTIkDGylgwoY4FpsKgl1FXHjGFw5ZreocKxJxo9J4XlmdIv7n/407+7+89f+9qds0drH/76R99xzdDK7rUbeuJdre7B7qRyMKtXTB+674pLNvz111+58IVv0jwbRqFKLHc8Y/AlL1v6k++9rHl0b0H2lXK5Wn3Wk0XOw1Abay0wdDzXoo5jxYi5xQIo0lazhbVeIDNgCCiamk0c5rse9xy03DQj7rpWCrGQ9VprDVibyiMDWS4kCkJLlgwxbUkqmzQj5TokkDkp3dYQICEnQHx29E355Z3gxzgAY5wB2Y52CjBERsDTBvAigdoi/B3vDRGRLU4ERv9FRouIUaIYS/WRMG0pIGMpmHkxzYUOORhTGBSkxZl0jiKkLiWu76SaGdYSAGaz2WzOBwDGPCmE1ppLr0OxJWCMkdULLPm0DYypoDUAWdKkKU1epZQdOcxYIxpKlbOAC7Gg85WyookILQBTSjEU1hKTju+6QRAYgmy+ELaDROmsn/Fz2Xa7nSjDhVMsFpNEa0uEjC/U3o3RWsXp2+ScS8M811FGN5rNOI59N+NImSQJR+u5vud5Q0NDBqhcLidJEgRtv5BJRZLTswVBkJagY50yqtFa2263tbLFQk4liYpNoVBIjHVd11qLjDnckVI2W+1CoVCtVPoHBxBxbm6uVOoaG5/M5XKZTIYtfJML+SxDRtZ4mbzWBoDVKjUpped5KXLbOOg5ArvKabciQZDcma23BOe+4xqtyBokkYRRxssgsxnu9y7dsu2+h7gNnveJV93zzj9cedVLX/uW665+08XtMFDaMGTZbIYxEcchoC0Wi/8PS9z//8c/XfGJ69746lWnHje06Yxq2PKD8NB8szc77LGgOWnbs2AIQK0ZPZK94PWnnvPGa5v7x0unlC7tfyU0mlCdNSK45LVXsMnGCde/7T2XnPUv274Kmax6fBQ46+GNnlbrwIG9cZH/5LOnX/38G49uK7ZLZk/Uqun6c87sP+OM06YP7Ft13MhqMs7sXPny05ZtXt/f3+9hOPq3J175gpffc8sj77n0Da9741VLz9g4m+nOzVVe8Y5zi3mv59HSje/7cjbXOLr38d9//rvdF55y0SdvyAew9ze/84rZJ8Yfm6/V50dnn9zx+JnPPX/b4UPdfn3Xd2/ecurpK8486eGv/rTNh3mmoGQi4tmRMy/va3TFh+KpZL7/ORf3FN1D23ebQ62d99/Xu2kJX7O2eddt/uBrdn7mvhMBAECuPekU+vWaK+zL33wyHql94d4XtrDixExFyqLWlhsdIGmBzFpES8AkGIFKKtsGAYasMUQIgjNOFEWJ6OqZb+88su3BLZddeNmypWaqkk5V5nnCM7Ujo0vXb+7lK481d6X3q0CiJixbUKbkSAkAIThEvEM6WkSj/Bct3Xe968+fu/rl6eHFePsss5L/9ngmi0YEIs650ebO2/+ycnhYa378KZsXmsD5IKgggoQ04OvbP/OWy9//HwlD9qzSt0SoVKKeYV8KZGkKBQAAlaayhCaVg0ZL6DIVBs02AciOvkZqmACkUTJwEU1gKoGNgAEIC1GHtgJMcjTGEFlAkA7GSWg0csESy6er7Q0l9FyhdU1gZ8U2RgGwlBVqrXVsosBaDpaVIJ4GgLe+/92feP8/bTmt59bf/hwaMy9757Weqrz07W/97A+/KZzZ39xy9/ve9/b29JNJrLDthHHNBHW/O/Pkrt+fcvn117/6Rbv27n/JS8/Z/vhN61YXX/qSqwbXDE0fPMRb/b3Day3qYk9PECWJG2VCwVypyCIXjLhfyIFVoNA4zCCziNaQQAZSCk8gY06UzVodhYFRyjKMY4WSFfyySE0RrNVEnX44MATgnAFyAIvWagTGQVqVRCpSsXYdhgYxUZwhQ8MtEhBjzHaaF/aZJi7ajtqUXZhZndTYMuCMcUqhX2nRmBFDlnZA4e8LMrQg5UFEi1MMEQlAMAeRFuNrR6IFUaWctcX0PW17EKT5dKr6sZhsW0j1leBZ5+GAmjFmgEdakTaOKxFTeWQBAHGsFtvGAKkEK0NEzowBYwxZZRljyMhorWwshJOiq1KfY855Gu2llJwzbY3gDgBoZZmUnLFEJ4LAGDJAjgFkgnG0gFYZ1/WNMVLKKNRppuv7vlKKwDLGXVd4rlRKRVGoVHLk8ERXV1exVC4U81plpJQchZRSUZIYy5EypdL87Fy92WCMcSnSdDn9KIgoCALGWCaTibUWDKy1jpBGJUDkOE6tPl+bqcnly9phO18sIGdKa44gADKZTK1WS0v31lrX9ZWy3d19AGCMTpIkZd060jOGUt9J3/eiKCAySWIQKZ8vRlEcOY4OmkLmTRwQw9WnnOAW+tiB/Xv2Hs5FoUhUzBhnDAkAwSYGct6hhx7NhIGRdjq0t7Tmcgj+nx59/msvjDXlMy4ZqxNjrbKkHU+ih5XbP5nL5YzS/nnvBoDp33y4f3AQcp5uNpMozng+EAHntfn5WKuslz3kNe587On5+VkHRmZlslTmXnHNmUOrN0/NPFWeH3WOzE+C7cc+NjQEg73gzap21IxUI44LXWXmZR59dNe2R5+crLbiwN79yFP7n2yePrQ624b5+nwrqruzcMdHvpskDR8EcBYYbYAB8tUUH4UClw2uoAEDPlS7Id66fm2lpbnSrK//rqe3/fNbXyOiaPfux05av+Wjl79osjX38w//GVrJUH8u39/nuaKvB48/8SxQA2O/uf/u734h6bHztVkWZ6SgzIphr6bCyoyfldGhY86y4cD1vf2Nvb/4Xc+q9Vkv+9RvjnbLoU/98n2Ts4C1uY3DvWFmcPSeX7sD7Pbb75vIe7/4w7f/+Nl/vvYL7+5i/n3f/fn9P/7j8BXHjdgeayHSZs+xo1uuOPnE56z/1Xd+95c/jJ1+/hmXrhq+92dfXDG4KhPg+75z36+fd+1pWwqP/OGR5qNPeiPeaR/+3Pfe+naYqO7X
7VOvPPnG//HJH3/rR9/7xudf9qLTD9394E03/yoNwB/88L8OrVz7pgtXoOf3HL+OVrklvdTu3k0q1jpBnkms4sCJEQdIFOQcAY7USmtNTAijNefAGEeypCgHnrW08rjjwM9VZ2bzzBMLpRpWibKSAs/VzTYVVJ9Ylz7eAsvtM+HSRVAEBCABJADDjiwywP+hpJyuXM/8Tosx9L9+/v9qPPN8eoZq/Nvf/nadV+hf2fuDr3wZnncNAIBpO2C1k4W4nYK9GNCfP/2PV3/g3zwGZgFrhRaiRFeakZCoIrsYmY0hQA6kgctmrCwYjtzgQq+RCIEJBJZm1wTKkgMAArWxAAkC+qxTfXQ5D40xAAYBpHAcW20a4WI1UEy6vkeSAwBobRExXTaJrDXWGgXAADwEhyLFFyL07XfefOmFr5oLm897zYkz4b3Ll3/yQ9e9bGDd0je/7Hn7bv/5wcnqq175fA7bbI6FARSKw1hUfmZgaUYAH1gxcuK2h3ff8NZ3wuyZ8dx+l3gyGvS6xzF/BLQw0axwpTREUlJsQTqcNBG3ljHhgiKwBhPghjHGrEWOCAq0Mgwsgq1MTRULGdnVnUzOZIb7bV8vaCsILQEhZwiWMck4pvNAJwFHAQzQMsGBM494aAjjqM6N1ZyYJotWcEJkaIhgQdY5DWMsLUQjMAYdNUpiyAE7JcwOvwgBMD1sUmB0Cs95dsc3HR3I0jOV5M5gC7QlACCysNA2foaQ/sxJLBHxhQidnkenXkVkOfIUPZYiywAp5UilxB7gqElblQAAMYyC0BUuEQNiDDupJxEBgoYEgKXSaGkrOm0Sp4qYRMRQKKsQMRVlNNYiotIdUFJijGO4NcCEUEpxKRAxTGKHi1SLg3GQwtXWxpESQriuay3FcUeHGcAaYxAsImQyHmOslPMNIGOQ+rdorS0aQEZS+NLVyraiMF8uxXGskzDjuamNUhRFURSVSqUUyUVEURQZFRmjuoolKSVYjKLo0P4DUTNatmwZAIRhaMg6jsM5BkHAOU/VspRSUjpCyCRJBHeMMSqxjHcIDUliPMe31tZqVUSSDvdcDgDC4WHYBmA8UU5i40Z0bqb012yExZwhWLVmAykzOXYMJBeMgyYGyBhPHHAs9m1aUjl8uHfl0uJY/ecPfuaJz/7p87feLbv7ZLNmwRqtHZGKoJk4VNLzXc9TalGkDxhjRw4cPDR+pFQqLR0Yyrjeww8+JF1naGiod6C/QZXjorXHv3iTpVnWJVUrl6hmdn9U2X9bMl8LXXc/IbSU8WohtksHdicAyBwvm80DJIeOHTlyZNPS5Rdd88o9o7vDGv7HO//189/46Sc++80SiBjsnLQ9cavsZZYNrGzMz/lZv+jIpkqcTPbRuLoqydg5qTNR/xJRD4eagv9izxEtk5LMV6ePrR9a840v/8KADn1584OHCowrylbUAxkIiwzmLGMyN8QgL3RXpjw7O9ZfKHzqM/88Xz1c7PXHpqrd7v7hjWucvBOMhcmGLW6ub/8Xf7vk7LOWb+169I+/Wrp0KQz61s/M7q3f+5NvDHTlpksjO59uPe/MDe/77LfPfPXJH/nch+r37b7grDeaXUce3rWnu6/nBZ97o2nUyyuWLB1cufuuB5deuWUlDExsP3zuBRfe8Kkthw8eCqbn7v3Zvv8xtWOVuG1awpE7b3/fK259OIL3LClWd0RHw9edeNrI0uPO69lwwrFHtz1+2x0veOG1592w4q6P/Mv6F6z/8o0/Ae9KAPjol988NnUwnjn6wA+2nX3CJfrR6PrXfPumP7+zUMw3ZwJrEm4BkNpkfQJAoMiAFDEo14BmHXmMxBrJ0JG8boyei7hipl3PoysUxFqnbQhDAlrVjN8NzGnpWgk6PKhxZpdbbhZKdS6wBK2wwBE4IENCBEYI/180rRZL0AteQ/8vFehFECsAXN694Y4nf3f86ZsueO4L06P5jKXASTR29IsAEEABPf6FN5/7vq8v7igEMk1Qb+kMs6mCUvp4VwbqgQZAMoYwBcFwWtgxSOAGQZMRFi3ZBDgwkoxZTQwAOAPDs66yQEpbREMABCyxtp1Y3+WOYXORsoBaUTNp9OW7AfJChCkkBQAE75ROheCRjtauGuju8R57eDK9trMuWPEvH3p3NocmmFouV80+vOcrv/zzPQ/eDvzQx959+/MuvNwU9rWP2VJmA3SjaUVcNxvJWLkt4OC+6y4+6cf/48Ph3JPt5mih0NcOnWxrFXmz2oZx3boQ6ErV8wZtC1jWM0qDFNzLBLPV5uRRY9TQkn5oGkbEuWDGKKWSJGm321rr0spexxFRFIjppD1f4yuHA0558AWDjsc5ACdjjbIEJr39yibWwGJ6CoDAPcFbaTvSlVxKwSwagwRccLJEKSC64xRJkMLSF0rJSMDISkgpv4jWWrAGOxbrSMQI2eLSnIKqFgYBsGf1SDrlbgAw9pleLCIDtCm7ThAiokXQVkGn6o0ExIABgCFLlmABL8WAoSWGDCyAMUREDC2C1pYDgiVrrVGpeDWZSLvCVZYDEiAhmJTIluqgkwLGgDFrOxfJrGHGcM6QDHAURmsEMFqnkOvEKkWaCWEBELnjcGsNAbjEAKxJoVuMDKokjpRVVlseh650GEOVBADABAcBGiNrgSxIJgEZIAMCIpJ+kVkwhEoREWfCciTGycWM1ppZwxgjbSRD1/OFEAy4U3RyuWxaG1dKRVHkuZmsFFZmjTFhGBPGrusKT5554YWQybSnpqSRJlHl7u4wUmGQlHqGZS4DQSto1IEMkE4ia4wRjCcmQoGLNi9REERBIITI5TOpKqfwHUAbRRFjzFpV9ItsZEswO61R8e4+AF8RcBsPFyGZE1opYxRaQOaRoqBV5flSq1qX2bKZ17kg7m5GfRcMvUOfC0Ond7eY9qaqE8daUw3IleJm2J3hLR3BTFNL5nTlU1sG9DMQJqvWLG/WG9qEc3MTHOKRgYFchrXnxg+OjXcV5tz5rCNcpRSRIVBRLpNz/Myg306SEa9P9DEgo4MgU/BCh9eq80ZFViuHI4TR3X+5/dc3/Gs7u3SV78bznzrzyvPe+ebT7n5y50lbTy/nuw/vn7r5V3+diupgBW8nDlDZ82SEawo5zCAMuwJcqVifYxmqvo2DAZCU7kDUG7aCsOD7TnYoX9aJVkxHreay8oCBuJB1YHauJ98jNJuojx2qVQLgu8LG8Tf80yCDF5689o0vPLu4ZmmR84nRuXzvkr9+/YE7737owYd3bV1350ff9eIzX/mqz37qe1ecsHLT+sKeyYmVp65r1sLciSu2rk4OVsd/cNeNUbzmljf/x8tfc5bqHaluO9q39eQRh88WxfDaJVHoPnj7H5dne9Rj1d8/8seV609evuWUP3z2F1s3bH5o//6X33DRCfH4plVLfnfHY9e+7Txnc3TOhos+/KH3Xf7SNx3fG0zPV4PZeeivzYxV1p984T2/u/Pph7/0vs/eeMPz3jaxY9cLAABAzc7mGmbJ6ouWn/ZyqNT23PfgWS/ttm5EWgJyQ8oQMcBsgiAsJx6hAXAll4GNUpQ
IN+ACWMSffPRPGYja0VRXI+GWB0bV6q1CvgPCEq3JKIrnhBlw86WmbChKYQGZoeFgbNxdiEy+tTGAYehYIiDfAofUmTXFYSECJ665EYaZ977zT1+46vruAZybZATq7ygbuCie9d8OvAtnIUAwZF3AMEweGa2fvGzWAKQV9avf9LYfffFbUAhNlUqENQSFAMLuT2Dtl/6pSM30HJIwBsNIaGMFh/aCRnQ97HihAxIjJOYQ6U7rmqE1BglKjtdKIgUOg8QSRBo4EADn1lEYzwXgSyow1tZGcKaszQuWEQiKwFFZA00QFlUrolIx45kmujkNCUDi6GwOyxUaBVeaOLnkzI23/O3H80f3r11+bXpten7yxo++5em/Pr5hyUm7Zh574euuu/j6l205pQqzw/cduv3j7/gCgoNxFVwLylUEXHdlqaD6Izl3tO/s11Rr+WMPVZdcvM7OUNa4ym0axk27IR0mCn3WWHBd5BTWQ4bCZZ5uRPXq/OTRI93FXHMyzrhcOxK8IjLHRtYreKZkvFKWzUWxA0ybhFTX1pWq6Pno2Fog0owxZfJYo55pmnaIqTxN/JGlFGGLUpDRmmxChgwIxtLZpQwHg0AGkRgQISASMCSt00SV8Wd2dLZjIoTAO5KQafkWGSe7qP7yDAk9TaBhIZddDMBEqbjbYsb8LLxVB2W1IAaXkpI6shwpwIHAmLSRa63lSIgcLFGnnYOpIVgHQMUYWAIQgKmTEqJd6Og8I1eD0AGFpSiwjhAO40CEbKGx2rnytK5ujBQddwogSiv1SAREKlWu7kz41AeIhBCO41ilU/ckpRTnnEshBE+9HzutaMAFTDiGYci5BCY6jhEpUN1CrGNIjZY5R0Rr0VobxyoKGlykDsdcSpkyppRSXLCw1QrD0PO8YldJKfXkk0/u27fvxJNO7ioWMo5X6u568MG/AmennnHm0WN7Jo9MdXUVu7pKrutGUaSV5ZynrRtrrV2gUXb39qg4aTQaQRAJway1YRi6rksGGMNCsbTn8T9P3lEelmXvtZee2r3+4UN3SW60CjKDw6tzGWrMu+XuJGpR0tIx5ETRUuS7bHq2sWt2NNQA05Uvv+/WonG/nCmuK/jf+9ynlg2NFNcXnAEPGlVVn+PHKvGmoanqVA/rWMCYuWM5QyZf7Fq1IlSapHPyS65rT8yG7VjbuJhrFgolyZxGrZbJZIRwhSus1TNRRAheJtcylM16rXpNuE5AxtTj3r6har1iXV7s7+0iOPukrS/78IeO7v9b95JVWbc7adTQS94BQZdf3nf7NnayvuGqq1qU/+S//3guQCc3MFuNxiq13KyxjmgY1U4MAEgAR4EDoBhoC1kBnsByPlPu8lR4tF6p+rncUHehFYzHUVhrcYFifOxAGKiahO5CcajoHx2dHxzO8qjwq0f2PbFncrqlmhBzSz1++WjY0mCGRc9dh+a//cZ/+8TLz9lShnd+4AfchT/d9xFVPokFPVQ95GgDshzMV6d33gF6shmr6MDumWYtaVb9oaGpce++O+9dGjf3TR39j6fC512zavAMu2fffdGT9cEBb8f2O8557iU3fuwnX//hB2e2PdFzKLjvwYfZnomf3PShr33ro/sOHCuxIR14UcBijSc/93lvfcErgtHmaSduPuG0t733I5fM3/N4erNyur9r5YY7/u0bpQ3LTjht6/pTTvzsy5/fPPhkODEngEVaM7DEgAQyDgyMiRIg7eZ8O93kiJwhGWKcWYBmvakZdA0NUW1Ut2Ke8XzKMblg9et0kVMdENrUphkWneUd7N2agf4nxsdX06JNEGQQZ611OyQbkABAoBdsVgkNN2BRcwDOMptPWJ5RlcpkfdHOYSF8Lv7z3xv/Ve+YiOxMTRf6BhfP+Jenes685mM23v/QH/9HjQwAc4UXa80JpuKmXJAz+v1PPn7uy/9ZWSsYGAb8mQUcECWRSaUHwSacSzI61dSUXMYmcTIs5zE0iePJdqKobRFFbLSlEBkQYaKozYEzcBmSgUjZtraCoFdyg6TQAFG9WisOFV3h1eMWgmA+JKbd0m0u4LgTyv/xlTecfnxSOXTL8MpXvf+Dr4dPfAcA9s5kN3Uf/OFdn7/4rFetwDUDS4/76dfPVTy+57bDjSB51fUnJJN7c2Jjo2q84izz6y0DTiTj0JW67MCxdbLw4z13fOSKa6tHZjwhZL4sWwEM9UBvobL3UL6n3KKgoLhHllSilRKeNzg0ONBTjqNWs1F1hwaE43CZCecaQpFIjJ2pxtP1tqAsF4wx7O2CYlkwR7UjkcmLOAk79VlrU0uiNFx29LCAp78C2lRwLQYGhEqbxKBkmsWUtn45CrTEGQjBpWCpwgNjsBCNWJr1wjOA5NQ9iqWtWA6QwtsWtWKeAQ92oi89G371rGEXOiX/U5GHDFJHBw4XQA72WfEvfZyIwFqLhEhIjBZoxqm4Ixm1WLIGAAJGZMkuWEY8MzqhElOnrg6CcaEpzpBgUa4OGMcF6NYzOwz6e9q9Sb2WiKVIfQRLwDhjtuOnzBD5AnFZWAuAHIGzZwrsnMDaRecQxgDQ2JQwTUQkhCQiY0ySJERpxBWIzHEcZGSMSfPRVPlEa03Ei8VSV1c5juM4jJCzFSuWDQ0NODLLGbieNEZtOeE4TXrbI/fPzM3m/X4L5HhuNpfr6JohGmPCsEOpSt/m3Nyc53n5fN5Qx0PClV4chYVsLo7jqBU6q7d2eTt6uzZ7pTMI9ZlDWx88dr9gQNqXBTm1e4dXhq7+fswWoGSBLESsmMkWB8qrAbtCNXZk4uIfv8q0m5qV/3B07OLXvbNQKhbz3rrArD95/YWnnDZyXN9wVOzhOejp+BL2P+fSKg97jElm5goK41rzyZtvDdrR6rVr2nHU19cXRZFBy4XO5kSlUmOJZIzlszmrtWm0OVGzWiOEgLQxJieyFMS+hdiYyb2H+np6PC8/+diOHMuqR/Zr71hbQVjVZekfaeyCsir467I5nA/rd/zhe4o0E3R09Ijv+22WN5rmZuYdxxscHD42Mb7jqZ2G7NhkMjo62awGjz349NFae7waxJp8ns34STA/gYg5P2MRPN+fUzbT3besaCpTM/V677KVQ+WhwWpzf93temrW9i3n3jj5RTbVrHR7nOXyDRv2OJ7ybI+0J11z4qcuW7bz4cpLXvLVk4dHrrju9MGh8qH9T/msR7EWSP2CK55v56oto1acc1p12+7hZac8+PN3ff722U/9y3WiObwq86dLrzoTWP7kS8vbfnUnH86ff8Wrf/Pem1/8wjP1zKFML73kM68+Wp/c8JJLT3vty5PxwBWDkdT+ku7GscmDd9zp5fIXPf+MfKnIWvG/XbH2yI5jd4zOvR4AAIJ88uXr37xi68hZWy+fHh/LOV5YnSuViXMeB4FFhqgtAwvAGHAAqwyAzpaL7NhsSsSwNnUtAE7ACFrtlrTEhZS+5woXZAcuGA4y1x+2Y1Wec6k73zh4LJ3BGwYu3j80PjU+kz7NJYyJ8owbMkRACJKAASZIBlJQMEfSQJzI3njDZSNLlx3924TlxOyz+J
f/T9Sj//xAutD4IKaa8yF2Suete8ce50cS/jggAkPUiJYkMc7EFOklkgMYAAg4AWCsrecww6xYwJ0urJ+YKlsXsyLjqiQBFUOU2IQYAKs3gnJBSIZxZHIOd4tirh4DAudgNeOAACYh8olZAJeLgKwl0ByNNCYGIM6BUUQZ/tcTR56z/JL57X/eS9l+byB/609v/N2t3zn51PNPOC5XP1YpNZbinkfe95Zz0gB81sYL//arn62+Zu1PvvzNpaduufyyXpF9HiST2z77qbPedLVx5jFMeKSx0pS1JMrY3PJhwBrXRrVnpImve897773/B/wtGwvl1nS7DaqaW1L26+Hjf7gz29+VR1HIuPPJfDZiQjhRrGQcJkmSy2W8Ut7rygdzLXINzwkTJb7vg+sWObGcX2CaKi30fVixLGbcJcdBFQZNkXYiGWNSCvbs7moqErkIUbbWWmOtRicDzKDQhGCBrE2SJNFaoQqRrCOY50okRzqIBClphzGGPD03ICKx9PRIhED2mdQWCK1ZNNZN4VTU0cn6L2YadvwOCdA+gz1YKEdTqnDVIQRbSAHKCxx5TIFbaXAisIAWVCrzhJg2lTVZslZzAkzT6U5gY0RIBLwT7xkAwKKudYr0sulfPONMjAudb3hWA7tTQjdm8cLTSZ++L86fUQXpNMEBiEhZhUSSi9RsGBEtkNaaA0dIIeAsdUFOe96O4yEiEFhrwHRKSIiYRLGU0hFScid9OQBmreUOICOllNZaa8NY2t7GJEmiKEoNplzXyWazwnE452AxSeLHdu3M+d7Q0FAcx2eceR5wDsyFODbGJEmslEpz91SJM4qixQ1NqlCtrfE8P44TVzphGEomraZ8Lh+F4cqetRBp2th13yc/sunS54ulzb6jM4679pDa5wx3dQ323blz36df+MnjI7zizK3rNi5bu2L9xuU9YObyq/tg2fJNJ55q0ShouqPNy7JnfOUtL5o+dPi+fQce2zf9y189+tEHn9wQ4oxqFAA25Eq3AgDAv7/4IytP2BAlwcoNq1dsXpvr6z3ueWuYtMHcdK9koKUXtVUSFEr5ZrM5vHIlCC9qtt1ihhKtlXJcT0WR9Nxqq1FrNLSf473ljM4XPde02ibSNqLB8gBkZaXYnDPa4ZTpj2uNWql70Fdiql4Vgi3v6j72wBNCCCllr8xIzmQmKWXy6wZWQ6TtTDScGz7z4rXgSSihCkKZL0xPzLUS9eRTe+crtS1bT1q/LKPCRIKTcWQQtAhYccUaAAfqEzsfe+w/vvE/fvrLiXjvDAPnxK3Sz1R3HJCOcDIzigHUhCmiyWTEHLbXJF1f/Pm9p2xYdtrZlxRO3/2615z+i5t3vu+zX7/8uDUbX3K+tUluttE/curueluwqZHiyEOf/tm3fv7Ybvbdd1114dc/2Fted9pFW5boN57bGA3yzkC8rbZucL3p4/UJaC9Pesz8n+9mL3v3P9SOjp5U3jJfn+1NegPWyAVBlG8VNMsAd9f2S19evXXdkw8+tOaCkw89vX3lhSde/sJT4cKvA4BpVV//gRf0rFjXGm0sWbYqnplRvgCXrCPAklRkEW1KyyBgLF1sjJvPcZdHbe350lqdNhI8LjnEcawyUjJrrbFKK8k6TLSjDz2gd1eWnnSaGCzN7a7s/cF9KRFqqj6xtH/40fHphchELkJiDQOmgTQCJxIAAiBJt9IWjQSrMNftv+adXx797s1+/zKo7v3P1eb/m+pz5wL+02nSCuGNV1/zpJjNJ/X08b7c+OH5HdKZtyA0t57CBBQAR+IzWrejztczYUYAi1PHhv/clbapsJ+b5wJsHLsIuiDRZVSPIo87sUkadd1TEMxgq6rbYBRwQDLGcgTGkKddNk0tAA7MMGJEJQmFXD6Jgzw3CUEzDHbsffDqfzzpH9/+gxuuvHT9WRd/79Z7+nvWvP76N0HSbUb3eaWt9fkkNx+42dQfC26653snXPKSn37tE+rpzKlrNj94946Xve9t3/vcTR/d9fvbvv/lqLJDuH06k/cGspizfojTU6RnPK0Fl3bJjvFTh3Jv+d22T+4em/vt3eHQ8BmXHw/VxkytsfbMEwqDA0ltRkuVNwIcAmY9xxW5ghOGcRzqmtYmKcYyrtfjZmisrqnIRoy5vJDJsJZWUohCibGcBuugRM4wCYUrnUX8cCpJQUBAhAsCGAsBwBBYAhIiB5ww5W+hsUYBRYZCl0dIIDgTUgiHc8kYR845IoMOKXkxsjJEYMhTWkBKDk7nybNz3EWc1H8ZgxdruZbMs6jDZvEQe1YFGwAW4zotqFR2EGC4YD0Mi0G2czTNMkUnEe9U4zu8YSJmF/eA6TxPLxjTBLrDhU61RphFRGZxUdJrEbSVvjV8VmGAFoBmaQkiTRwX3zJ1pLIMAKQgakREzhBxMfVNyxkAsKhtYq21HfIVMeRpBV3IDnTcWkOE1lprQWttKXIdH4EzBCC0BoAY4yTdDIDNAnKBAKBUErZDrXXqCHnWWWe16o0giKTIzI5WhetEqp3SiKWUWqs0ohMZIRwp5eKbKhaLxpipqalms7V+/fpGreY4Tr1adUtOvVYjoqkD2wepLOfUxJ++X2qPbbn+ukO7Zr1Mg99290t//3gEcNUlG7/69Tf+6HO//8wD2/xtj4QKMhxKAtwYusE/a+uWzRsG8gW7YfPqJRuWeSNd5Y3Hv3jzeZdPBv/64TW3ffuHr3zjv/z8N9/695/88KFHn0q5ue95dB/c//RawYW+rcBlFuH4TavXblxV7MsVuwrFwdLQYL/r+Xv2Hzn5rLPGDh5q1OctMhtWw1ZbIoCxXV1dbjaTKZRd7Vdnw8r4lC8c0rXK9OxAX18xX9r92IGoVd904omce4WBYpKJi4MjjGDfXX9z/WLGy0SNOCfygmFGZmenZoRwuE3m7JgG9PNZ4bhBM9CJKeSK49Pjfs7PlwvKxqtWDq06eZmBQW0btacrWeEf2Lmrr6uQz/nbHnmoWO6utYKR/vUJHPvo299+xWWHP/2pX77+zZddf+0L6jMH3vvF3xw6uP/T7//Qx97/salGEAaRqqtiT+nwZOOwxhNuvO11p/z+O194696HGlduXn7dvT8zlWDXw/etPm49O37tw7c9cdKp52WXjRx6fM8jTXXjt679/Id+8ek/3Zm7BZ52f36ZLN1815etf2TvXx8zIS8UWrIq5ED3De96k5qfeO3rvq8PfvXcK0/lS+Pu/m6Yn43LsnugkEyVnTVDuf55jFW1OrN9/PCmM86rHBwdWXt8vdGGuZRGDd28r72+yx4NZ1kMk6M+irEn9q08ZYQJboA0EoEFy5gBsKQJNIE1MQIrlApxWLHGpgqUnVWJ0OWMIQetGSAzmLTaacq4bHDDdNes6MLM8p6vfu7747/dkQbgASpXf+64GzoVnRgBUSBpADCICRIx4JY4gQNAAIYpUjzrmtdtfcUTt30/G5d7lw1l9hyI/ucC3v9tDH5mECAC6+hdSE4eE/vSI4fqB1/8ro//9b5vzz26zUNAIMM0AjPWAkBTibRkzq0VgDGAJpTPiu2dth6R4
AwsWiJJYAE0Ge65DlgTJ8UcKzqipRJXcml5kEACFpGjZQaMJbPo/W7BWjJkSQC0Y2zMtBlAlgsgi2BZEtC+x+vH6kf2jb3wFasnDnwDbOvw409isLLky9Jk5A5kWgGTM0fSa7v47Iu2je2/emRNJYEdx7YvH9y0en3v+T/yrzhr+MItm4B62rbXlb2ggr13Pn739+/Y/5dtK4YGHZgLDYVFOdi1IeLu266/fKAWbL10xdZz3+u6+VxXnjtuMj81W53p6R9ws0XI+RBFJtYmCYiMUspxHMEYdhdEEBij8l42ThIim+Ee1FoTk9O969ZCd3cY6qxfCnSbhYHHXbGYY1mbOtbbhZDQMeRNB+cCkCEKIIfAWLKJ1WiBLGgL2qIUiJYUWW4N02it5QIZs67rpokvIANGiIwBQ8CUUb2QY3OOncj2DNkb7P8cj6kzFhDXz1LOooXcd3GKdMyK/3+9vXecZFd1J37ODS9W7tw9WSPNjDTKESSQBAYRbAMiBxljFrwm2GtMTsYYjDHZwM9egjFGEtmSjGQkgQQooBxmNCNN0OTp3F35xRvO749X3TOw+I/dz3729j9d1a/r3Vf16p57zvmGVTQgQ0ALtmjuDpwQYcA/JsaEtbawtgSAAtvEmCRT1KepiFsneixM4G/sGFY3EPYktWkGhS4HkgVYnWuRCBeF9aI3XFydJV1IsyGiJTs4L/HVcIUrUl9FdeFEtYIxXkxgIBJJheUxACML1hKQwWL7QGTAWGsFR21yk9FAKmTAvyLXcaWQduVuKIyeHEdqY5RSxhgpGGOs6J4zJrQlxkSz1Yl6/TTNyuVqq9kplUp+1VVKFzCuwl/ZGCMEW5ib98OgVCoVV9Tv940xhuzU1KQxGhjGcVypV7RR/3btd9785jeHnfZRb7Y0l7/25m/2D+57evcja53a0u5f3b+Rf+eWDyzPd073G6e8YNvbX3Dqkbn5R/bP7Tucd1vdX9157/JierQrHn30YXiUS3AV3Hq1rE+dv3bb1s2/fnr/vzz4yFUbT/ci1WTwva987+Ybb+yxHgQbAeD22z87d+jIV//H10jKmW7XOPKxp57KdjwuAThAFVEgU1ZxgNDz4zSNgTzASBBY4ACWoEi0ECG1IAEIwBFABIkBDSAFaAvLBOuAVwjK9dFM2fW1cMvG8IN//fbycG60np+P0iTvdqM4nR0bnwqDskn6mVbVaplL1ov64UhdCndhfn7dpo1ZklaDKge+sHt5eXFpfHwcCVkZkzg+bfMWypPZ6YMXnn+OcJ377n9g4eCd5FUfb97y3Be+4vKf/GkQrr3333/w42/f+/nr/2Jmx6728h2veNmobScXP+fZG8445cHbfvapb9z74bdcfdaGDSOee/e//2LHg4+/6St/M3/8yCM/utOZTEeWth5+7Im77/zhtnNPf/ruZWwtn3fOqd/+4Q+jIYC5idPP9+sa02MHXnXxf/vg5154wRWvnnv63trZZ/l99/BdD+YvfNn3v3XjP37qRQf2z82196zd8tI7rv31xa9/cWfHvnw4Ka9df+/3f3D+qVusoLAangJDMtdxmdfHGkf3LfjbLwL4JwBYnF9wFhrTi7MLi0dxZDTYOGlHysJzjbG+60U6MgwZcY4MuGbCMsYZEiD3y4FcbBptrAVwGBqrUmUdFve6pYxUP1I654RsZTnytTe2fh072sIsO2fLhq+YG4vnb/76T1rNaOddK154DLmBjENUyMMSKAIFwAF9AgH0jnfdljrlYdurvLf/4vCKW3Y9Wq1OSsETbel/jbf/+zEYT+4CDyzmCAElVzH2E2YGtCq7t7bhfH1TW4dHRdQ0oIADR9K5BS46JIr9ADP2xus+8Xuvf39mycEVJwcoLF0EWWUNqb7VgL5nXSlNnmGaV10vIx33VQK5ZRAwkxJLinhrNQMBnJPVGoADEJKLBAiZZZqYJs4g0ww8xsmQU5JLsj+Rn+5C98G9S9PHlp9zycYk2zu+1meLTpQsteYO1kfPzNudVA2Yjfd84nPPevU1z7v0Nf96x/dee/Vr7/va39z7ri+Ov/0tG4T+9Rc/vf+WeShNmHYblVn45fYQAAAyfklEQVROKzbGZ15wRhLONDOsB7wsnNO3eFN3ZKXhDf/0xO05t46eN+QEi0kyPSuqcmp8EqzMUqNMxxHScX2TpBaICekGJdK6l6VAhoyWzEXBfC+AzCZLvfr4lBwZA1FhcQqWuEEGDOJEaLWaYzFAs5oQsgGwfvUhFDisomNrCI0BawksIjDBuM7IGpNbUspkmZKMc45CCNKMcSh405xz4lC0RoAsIhQkYACwDPlvNH1/I6AS/QZHDlaR+r+ZKJ9QjUa0ABaJrRacESxaxhiowQ7DrqSKRRwtDJmILBS83pXs3xJQ4dc5SH6h0CrhXK52QWAl+YZBCBwocQLgwGCBENCcHL6JiOHg23FyRXo1oBbq1gQFxbXosnPOeWZTsFYwviLjQQCorUEQRau+aHNbYEhYIMQGJ1ohaBnSBiiOYkR0HMcP/EJXq+gyuK5nNGVZ9lsd94Ky7HmOtVYrBYCu6zmOkxZy9kCu73R6berpoBqUQp5ZwzgvEnrPdQFJ6zxTulwua62TJCnQpdXfezesKEwBrHA7AADgnQBw/bsDgJGVZ0oAhd7rCMA2APjyQyfPcD3AevidQxcSOaDm4f55uP/hNwN8EwD2DrQU4Od3Qmm4vHL0/Z+/XqXJwWSxDE7gyqmx8ShKAEBZk+lMaQz8EgNulOKcT5ZKmnQc92uOzzlGSV8wnqZpyfNJk+f4sYkd6SVJoskOeX6cJN0kqtdqE925FrkwtH7fkQMXnXVabW15vt+779ezQdh1HOf8iy7k5QCYgZKr0x5wEMZSHMe9dtRrGaNa3Sb33cpUXSlDgsUUc8acEkyWhqslv9NqJwtRpVJ77PEH9+x58tlXXCx976af/GTb1jNKzkg3TZ912VV7nnjc5kFr+Y6pydpnv/yyB39579TWTZWksv2U7Qe7vaOHjs3uf3Dr+Zd974eXNrrZEw/s/fm+h9ecsuYdf/m3R+/e+eN//Nm5L7z0iivOfPLOYzfcdMu7/+F/1HJ2ww/+42XvuubYr3/25Vt/BKzyJ2e/+vuPzHIMexz+7OIzN5155eLiQ1QLbv/nG5/85VPOCJ6Dy+/517u+8d1192dHb/rgi+/51j/f9Z8PPvHQteee/YJj+3af94fPWWofbcvxSuAfWtjXGBuBaGmjX5nekWm3U6u3ig+rNzs3vrm89oxN5W59Yu2GdmdpYv2pivqMMWMNIdhih0kAZA0BswAqs7kobKk4Q8aZAlMYuuQalcqAeJz0o5QFrl9t1Ae3xbPe6K/cIa8HeP3K71d//Z9OvtWe+xe3/u57cGW8GQCyHgDAJ+8BuOc1v/Oglbj7f5AD/0YUH2TAAESKwMWObwY+ykFl3X2P3t4++ouz/+TPDnzt2znNA8ZMKwQgZQ/ToMDAAPqgObDEWF+uYrCAM66tCiUrhX6r3dfAXKGtNsAFECRpIiQPOeNS5BlkqUoACRwACwUczQIQIOPW
UmqtACBOaAvdTlMgx3OjHBDdZSPcxuyh+SNPJS0Qj+95WLV6//bt+//0TX8MiC5AvH4yZdXGuCA5WDw2bN3w5t+/5nvzS5/60Os+/63r2flv/aup6qmWP70w++UvdF996ciuPUc3bj51qTODwdL4tnD//K7R6hqvE6m1oxOdqd27l0DWdxxi//yav/zPGx54zruuvvKiZ2195vn+BevTY7sT1QOHVXiD6y7kRtlUMM45E54AZLmlQHiMSUIxKLFL9/ixY5jB1MaNShMyJYRQ/Z4LCMDTNMEH/uP2QriYMVaUoC1pAHBdsVqCRkQiU+CwmLaMc8sLsJFB0NbkVucmio1SRuXMkiDkDIUQknGktGhlOY7DpWBsIHYhHY6MWbQEGgo47ooWzEpye7IZpi1oSIOa7Uk0pEIfEk6q6xYPDQMyhq1EaGu1RUDGKDOcc2ArmunF/yKIQcxlWGhMmoIihQiFTSExXBH5MspaS0wO4vIA2jWIwbwAP5PgvPCwLFJfZNyuzMQac8LxyZ4UgAuzZM45F6jzweVzwCIZFYxL6SY6tkoX60ohogIMDZDHXSk5E8XroAVGVHhXICNAsFgojqIlZIQogJTOipq2EHz1Defc0coqZRhjhZmEMYqLEyhurTUSKxJxo7VhjuNwS3nc7zTqISBZrTiiE4w4jpNnKooSRGIci6tzmFP4KFee9+7/zbXl/9E4AxqJw9c0gkwpLygtLC6H5QAFCkaW8h4zRlmBDqNix2O4BG1ylXBCa5GUyUthaGIlDCCxXmgEd0yuKDcmV4Hvp2lKRJ7XsCxDoZjW5YzpbmwdeTjva4CGG7i5Gg+CNdXSuqESt+nayZGNZ5+2+fRTN2zbKOqezaOk3U6Wm3k/Zi4fHhlJ07zXi5BzxkSUpFmWbVw3lCpdHRpNs/jo9GG/4nEm/VJ5YWZ6qD5pTS/pNocqI088MnP2BWd11dyYP740c4yHMlKJUka6YmR8+J7bb1tfrkDtbM7m7r331pe/7k37DtgX/emnX/fJ15d/vWPzEGnmbrlgW3+x/dMb7vy7L7/fUBY4pVsfmLnz2uvOfO66H920/y8+++eyvrhVytb+peHxc4Iq7L7rHq+2Zt22ic7szK5exzj5lvp6d8KYfXp0eP3Ruafnk7Tf3PPMZ/5RWbgmipnEKI08N+CIvSjhWue2OuIQvOBdADD/q79XUSTdEeGlldR0VBoyF0d1bkz76JyOk1inqWLSADKT5ETonvOsTSphzOoDT+xTKRHHDEgC3PLhO+9uNq8/8NXxDkVLfafRkH4IQsKWV/0/vgM/W2hj/Z/Wn99903X/FyfzO8dfv/7jhDBUcbTKbY5djcMh48QMogTLwGgGRkM3szWHR6mJAACBAbNkOYBBh1FuByrZyIkbsLxwUCSwQACs6lKWASBvNj+44dTnXPjsN178x298739/bWvncWdz6Ytf+PP5e/cPr9uAxJy4pyzvtZcaV7wNACBZGPHP/PqP3vHS37/cuJNf++h33/b5f5R88XVbN3z70cMffc1ZL7/wBXfe9oMzN66TrfK+5eMXvv7KJ/cdddPq0UP5rb/4bqW97h7eWVatvArYAQCXI6+ifcPVV37xXz9ll4+YWBsWCNXP89zkWbnRIKMzq5jgjusqZbkxDAAkKksyrOx9bPem8fVw3iZpmdXIHJ/iFKMcut2eTYXjyWJhNaTJEmOMM05ktSWdq0IbOUtSxpjv+FrrTIJAJjgwJGu1sWiRARdConRwUHIkA8xa0rHJnBwxMwLJ4daSIsidAEolP89DRBSSCSEYEIE2iJwzSwIYEWjDAJEEMgBGBhkIQLAnQt0KE6mAaxNZWtE9J0JEAQCcDwwgABhKJDJKc8kGjVsqpFOKhjEBcgAAawoeFK4i/TAvfMG0pRUgGUfOEUVB9YGBLjLxgTgXRwQcGA8CX5GMNpoZo5HMoJRNaAgRmYN4kugrtwBGWZ1bQLPSowUAQODakFIxccalq7U2pGQgRYEXMcpBsjaPo6xab2RpDhYAhcq161jhOABCaw2AhcaZlNJajdxVRXMYGEc0ilSeCy6QAyIa0mDIEa6U0mpaWlwkolqtppT2PA8RtdbAKOQQRX0iqoYNsEIpBeByx0nj6Ge33XrBBRdwzgtmkdY6CAKFNur2RkaGOj/9B5VnQeAZY6JuLzPR008/PT45lef55s2be91ICFEul/v9uOAEC9cx1iJy4UiyAERSCteVUdxjDHq9DiDV63Wr3X6/Xy4FiFjkPsKRP/3pT+882na8ck9RP0/vf/ChuAtRCziECUSuBDTgc6gH3li54UY5z3TH5Mrk0zPzYegrxNArLy+1y+WqptwPvW6zWatUGLA4MSZhyvjkdNMo9TzfZUHaAUcEhd9XHCeuKzzH66VLQSBdR1AuGuWRZt5EolCWFailLClNjiVJtqbRsB7XKaiI9kT5Y1HszrSqqIPdR9o/288hGwK9pVF74x+96uIXXTK8/TSKllqz+5ppp1Qfya0CbZiJukvH1owOH9z/9MTk5PH9ewwB5loAxHFHsf5jv7hzw3lXrJ9srN24qXOwf+GVFy4c2983rLOwjwsYc+t1f9T1bV/NP33oqS3nXxYENOROGD22Z/bIoeMHb/vxT5665QO7mnvns3XLxx4fWT+8eWLivdf94Kwz17gb1sw/9uSX/u6fnta9se2nXvDcl17z8YvA9NRyu39kNolLjfVn6tnj5z/r96HqZK2Z+prxS/lob7E3NjF+5Mj0pu3bm0tLW848a12SGHN6cvzYvqWlkdHx6ZnZ+ugo1EUQlv1AFtrmvTQuKhY81WOTG5J+X+VwZHFp7579a9dvOG1sDLKcCWE5s5oLaw0jxsAa0CULfoBRwsLA8dw8yRAxMKSRxVmkGJpjbrPSEph0l7tBWek0F49dv2/njs6xg89+2cumFxemztj+w5tv+Ycv/eNDjxsAePFVjaGN6/7+ra+bPO+9APB5Juuu6ibDLVzqyDW9fGbasXfmUGfuOi9PnMp9+250Q3vkB3vOedPb/+Dtf/Ctd37goX/75fTBQzNJbAUTGgoqDxAxZLaovhFwAoMMwAoQGvTKuvW7xu/iIf1fHz6yCLHZyQOHAeGIB61YF6Kboec4yClTgedJxzRTJQRHXZT4DCBYEkC2yAEGLTm0jAqjHgaF/AEwr6SZ5LZPnTQ4fOjAG17bC1V9sdN6zztf8Znvf0tJZ7icucy1ydFux8EKL48OeCivufSl2555+kte/pK0B96e3kRLrUnSZQuPHI85g2/+YPfOfe5TT8yveYqtW7P2l3fveDFfc+utty5RIcvICabJAGPC7QNwcBl2SbRt99u3P/hZuI33ppSUknpCZQwVmwgtS5XRDnMpB8o0czkTZdXpSO5IRZEwtpntuufXZ6zlMLzFEkLUYsAUAWpW7kX40H/+bJCBwSD9YowRWcYGlnkAoLUWjDuOQ0TEmDaqECMsWEYFENhoPZBZRjQqyZLY2FwyZK5Ko1QlykEROD7HAS8oEEVgBCmllFw63HGE4ziMMWAEQAYN4kA6iwwUQOb
VRG3AETqJSlRkoisENjBWM8YQ2EoST7BS7gaAE1wngJMS2RNA5cFhRIV81eD5lc4rACht8CTYEwBwQETMjf6tVxj8L2dkLAOLg9OhIQSG3BTen4NhV/SrkRX5N8eVvJ8skrUEOOjTs0JA1AghhBA6J6Uy15O5UkSkVOEV6AgGxcMi2UVEpTJjDEMSUhYakNaC1UUr1iJwZfTY2EiU9PM8JwPHjk0jwcZTNllrGRtIgw16xoz63QHYKgxDxnmWZcYYxhgy3el0yuVyEARZlllrgyDo9/t5nAGAlLzdbq9ZM3nk8MHp6WnXdTefumV4eLSgRSVJYowplUpZljnOwGyYGAZhSIS5VlLKJIoR0RgVBIE2uRAsDMNWaxnIk1JmSVpszIQQlWqVC2RrCMIgzYxXmYgT1y/Xc9XTJhKov3/Tf2os33zjXQcePyzQn293/OFad99ClCceF6nRAgSCUy0PBX5Ztxelw1MVAyOQTBsrPd8SUpaVy1VAHsdpL46CwEuylMisDasLzSaT0q8EqcmU0QwdYM5IpT47O1Muh9rEuUqQ6fGJ4TxPm0vtQEhXSINCccGDsmQeGVbtLy7mUXmo3lruLHUjj8GYi8/Zfta7PvoqSnMfOCUxomElnmAsyhJaRAwtkJSy0WjMzS7kWRa4nqqNPXrb9Y/fsT9dc/qfv/FFbT3TIOkFeS+tjDTG2u02kcqtKpXLUZpMrplIsnjnQ7t7vd7UhrF+0l0/ta5crSxFrVFv471P3HHGZee6mul+d3rXrnp949493fOuPH140ySrhvrppx+5656n9h4+54xnfO1z31pz7uSbrvmjY4eOlIMyQxwZHUfBLZEbGs8LFheWJk7Zsm/nruXl5UsuuWTv3r2c5VNTU9yReaa8IAQmkLNeN7KAQeAZo0qXvxsAlm//JGOiXK4CaOEFoO3c3Bwvdcvlcmdpudts5UrrWBkgxihNwJSccy89PZvv+8O1o7v2Nmd73GVorAa8/oP/cX8v++YNX9x6TpWmm/0+27tv1xmnbG5bU+LANtSlAZ7arBNXR0dh0xoIngcAYH5i0lAfvNc98yMAcCnA5wQ8zgEszCiYAWj7zu5UL5AcB9qMvL1m/TG3B3NwtN8D6AYAH33J6zc1RudbCwmCY1lebMgL+sYAAAoCQQMDsBJBFYnGf61pyVZXOkRG5CI4BAYhIfjgTdcV9tcjtY1b3/rCN33gi4/smn3zZeuu2Pb+9NB3bXbUCODaN8AJ+gBwIUAunETnr3v9xzmDMckot8BxWVMCTDDkxA1pybk1GgAICBkz1iJAILkveDNVlshwCcYIZhWBJK6BCJAx8ASVXY4gmrHJjQVmC5vCUPKRIa7TXGX84Yf/Fvz0FX/ykYmzn/+Fr/71P37yL8fX/Nl8+2s11cpjTYfaLc+OT2zO4o677Q0AgLJG6RG4/8FPf+TTn//Fzxc8DimCsCBtI4amA2CAIZAGgQP+qzLgEQMAs7InsFAk4iAQtADQcNF5Gx647yPp030vPAVsHOckkUnOwBBoC75vrNXWGgcgA8d3eJZECKXRzZ99+4dOX7tp61XrNl36oiw1LmmKE8wFdfu2OSNWI8pqIbRgsBTBuAghhc5zorIinHBkQjiCcWRUpLxKZRKNi6i1VmkCRpeEYEIQmNQNAqcqahaVgkypRKk4V8rMdBal5KVyUKmVK5Uy41JIhwkGpBEQGHHCApFUwKAKnQkgQ0RIUChfoiW7Ao1fCcywGvlWg9oAAs1WOse00vMu/osA8ITd4eA2/k2aUHFSWIFkA8AKq5lWBw7AN79x9t/YJeCqDTFYoAILBgMv5sHmggEC54iojQZiNOiS04mZExijOecIzGrkXArmkCFHeloRkDQmL5UCAEjTlEwWp8bzPMdxivq8tZYx5jgOQxq4JFnLmBCMC84tolKKMWy1l4uLZRw3bdrgOk7UT6TDrbWe62qtC0611VQqlYo7pOgir4ppS0euX9/I89wY4zieMabfj4kwDF0p5eEjB9M0zfJqWA4ueebFc4vzhgRyp91aCj0/y5TnuJ7jO8KVLmNMCNeJ47jXiwBASgkSfN+Poogx0e/3OefWwN49O8cnRuu1eqfTRoLQDdM0VXmWpPNKKedwqdPdPzY5/sNbvn/LzfcE/sjI+AShPa00Nnna5IvefOWrL7kw3DAMYKb3HYxm2ws9NTMzk8RZvx/fecfdjfrEQw8+LriXjY0kaSRrJccRgePGnUhaRppg2DNE3W7T9dyhIHA9Sa1+EHiddFnUkDOhQEVx1+GiWi8nUdJemJVkovZyuVwOg5rjODwXPBeZhHa77YMqceSSe8iYY1vNjsO8xb4+mCxrwNG1U+PjJT+a4yPKFQZKPM2y6nAjz/N+Ly6XxpNuoiEK/aAX9QUXC83l2kjdEfLaa68Nu/HV73jzpa9MzL7DsM6UZmpHlqPTyltHGsr1q36WLS7PlcKqxNpQZfjwwf00n3Iy0mPbt57bPHZsfno6a/eiWOf1I8875xl7b3/knj37SmU5curo3ffdNTR6qt9cvv3nv+DCpUy1O8tXvvD3nEr451/875Nravfde1+v252Uk8981pVZO15YbD755JNXXP7sXTt3jY5M7n9s58jQ2FBtGC2bHJsK6yJJ0h2PPb5t+xlJnsRR2mw2161bJ71Sr9ep1WrF9ysISkWeoBR1FxbiOHVd1/N9rXICyySXBMQKvQiDUEi3I2cMEP1SCbCHhWMCkWTcIThw5OiZz31O98CcI2vnn3ueI5lnM55F/d3HRaU03215ldLxI/vk4b2FI/HR229vZBP9/Pg4AADMg3tEq3ON3U9VD41D0XCSrwc8AvpJSU/zXB3bQwwCC0Pbyq963mVzO3du2TKuW52Qi0zrAmRTKPINbPwQAFkhjgsAl1567qM79ne7/RVyxm9HXyBguCJ4MMCiggFA4AKMLDWg3wSACy658tYvXfumt300s0sN8P/6x3/97jMeRTjKAZhIDPMhBwAIucxJcAQOoCwoi44EIQTXBsBqyy1YQqsNcM4lEDNFyVFYxL5SkTIITANJYxWQX/ZknMbacEIDKKxxLOQpxUppQI5orUVAyaleMiZmIgNgZsNQZU+LC5L3/Mf1o+ma+tS5Eo7bhZ+Bd1Zy2HE8Z3z91ljNB9kgqfrWj75066f/7qUf/DQB5BzASpdAKCJNGShgDgpmk0w6jspz0IVgN2T2RE52ch9dIzAD1ocdjxwGFJqXm8cP10YnpLbIeaLJL1cBEZRheeJasopYuZx2F1WnHW45c+7mp/JFWz2/Pr/n4KYLImQi6cV+jtTto8rytC9WvAROAISKgJGnGRMDEA0TK/Y4AA5CURctWKGDxZexHpLnOH4YBo2GJziCjeM46Ue8qR1JViXd1lx7cTZLIolMCFGqjvphWG9Uq/VKELhMIDEyYAQS4qqudFGNQWvtgK9ENOjIUsHtM0XpeJWAZFbuypWQtgLbYwBgieAkx8OTINMrme7q5Z9MGVoJ+YO8dpBkF2iy4o9EhcwFAKA4ocx1InCubAXgJLkQIoPI4aR0eT
ATAs5YpqmwWcSV94GIyFjOeTG7PNfW2nK5jIjtdhtZ7HmesdZxvDhOpZRKmVKpREwZoCzLBqELMcuzfhKXPLe4Rs65YAIAVJ7nSeqErhAiTVPHccjaOMtKpVKus26vXS6Xi0S22NMUVWjhABFpTaQIEYUQXHBE1NocOnS4Xq87jtNut13XDcNSFEVIJoqijRtO8Xyn0+v5YZk5cu269XnG292OEMKALZfLZPTCwhzn3AvcTr/nur7jOGDJ8Vxr9cGn9w/VhxljXDKOIk/zajU49ZTT4jiOoigMS4LxTqfteZ5SigCDoMQxGq0ME3ee89IXX/3Od1x348///P1fQVlS6mHDQH75G0HOhkCWHLe+efJY1EKdaq0DP0zTfHGhpeyMJ0qpzgwcJQAuMdeGE9R8dzisOsiiVjfw/QBF2ulzC+VabYxXHCPTkUoaaZOhw0VjuGKyBKJMGjZt+0MjQ3muY6X6y13HcfJcM8aq1WTjxuGsp9vNvu9UGPid5U4v6jX93mnbR7/+uQ8NYTa5dnI+V7zUGKqMsaWdRw8dcgPWbs9IFLVyqdVaKJerhir9KHGdUlgqp2kcR5lyzMtf/kpRornpjq6Pbnz5m372+a+ed8o5W0frizMPe9WNi/N7JyZrtWo4NLZORazb77vcibW58KrL2Uj4xJ07xmR52zMuSdJer6lr9fKBA7v3zx545qtfWPfryucXnqN33XfPLXf89JTt5//k9rtPOeXUN/7pn/XjxTRtnrp+VNHQVb/36qQf+UFw9OBBIdj4hslwOLzrkV+sX7fRbYjpTrPqVUthuatbvMyXuj3JRVApLy4t1Wq18bWTpXJQKocQNqw2eaoK6Sk/KJs8N0prC0FQqpSqcZpo3fMDn3FuyK6wK2jwjV/9ruXa931GwAkI4F//8kfjlXKZ+NzxI8nMdLfdcV3n6OFj9YBj2R+vhVBugO9PVYeTJBmtB0mnWbxMtbpeNFU53FY8fON7/ugbX7r244pVZWdUlGISWaoYimdfOnXrdz727zf+z3WbNx+dW7j8nItZlbsNx/LXlWrb7nnfzXFzGQE44zkZa1cAzAWCc2W9kkIeOby/1+0jY/QbRbwTY0CLxJUmWhGJEZEsR+j1iwQYfvnI3ZCFv/7Cl//si5+4abp9zVb5L9dcdfzau43WUV1Dd3CYsYoBAyEJGIHta1OXaHIlCm1AawLfL9fk/GxijCUAXjCPGaE1BajWAiLnzFhAyI2+8tmX/vTOew0QoPUkd8hmFi1DIotAAlCA1JAfb3FH4uSQ1JGai5cdF48f2P+db97+zW+/L+8dbwAc2dkLn3ksmwudCZkef0SkPlz2ksFH8Lyrt/3JGy1wFXJMbCkzfZfpTHnEIsF5ahAMA8ZzctDNSBVvpOUnL9TF+w5QsFSUa20EIG+74ZdXXfmaheNPLC12Ritl6kZHnj7S6fbLo0PVdSO1DWPgcacjo7QbBEJXR1Qm7rzlAQyH9y7NrPMiSDq8UjOWQAEqQ2nfIonVlG41FS4+P8kFImprtNZgCTgCAEeM+h1EDsQQUUrpuq7nCMbAc8Ko005bnSiOD8/PLS4scOC1Ws3JUyEJWJarnit0dagUBIHruk44EgR+uRK6voMF5RUJGfHCUtAOmEYMkMAiFRh9IrIrQC1z4pfVtw1PMIMK6yGyNEjlB4XfgudWMH2KQwelaatP1LRX424RnwAHRmKIA0lsALAmH3DcARhwACBRbAVOAMROHsW+AQg4cAAwA7JTYfI0AGcVUa047wlBkpOpw9bmKuac+77veq5WljFI07jX64yMjXJhO52O67qcS7LckaFWKArjRcYQ0ZUSERlYgawQxxCcE1GuFVgCIuE6RDZJ4lKpXMzK9/1+v+c4ju/7UsokSfI8L6rQjuM4jpNrtTrnIiRnWaaU8n2/VqtlWZbneblcXlpaKuJ9aky12uhG3VTpfj9ptlr1er1SqSiVpWlcLpfjuO8K7vmeMYpx4A6XUgohGAoipZTKsqTX72zbcvry8jIiBkHg+36eKyml7wfa6oEdCwARVSpVQkiSBHIYajSarbhRc5uHHn7r6894xcs/fWR2tnTKtpk9h3pHFygzjfHhkTUjlXqZjOp30oFaKsDc3FyjMdxudT3PGw3zQ8fnppvRQtfu2b+wd9/sgT0zS+2UhWI+XvLcQHuU52r/4jQAZ4zBPBAIFz1FOYPc88AY5TgyqFUOLc1UqyE5et1Zw5tOWV8u+71+56c37D7calfrpaENE2mcqag/7Nc2NdYa1Wep/Mi7/7m3uBhH6WKnFwJsHRl65RXPu/RFl49ddBqUdJo2+93FSuBLC6oHo7WqJptGueuWjs8eGR0azlRums2d985c8fLT3rrpD78z27/pE43nX3NupLfG3Q46abM921rqdR94cv3Gdfv27TvnvIvLfom1+z/57rV7Hjrwqle+4ac33DfWaJy34ZyZ2e6GMy8AsLf/9AeXrd3WZo7v12vnXfT8yy40Fj/+B88DC9O799adspcMcT56aN8ORBqfGD2071BY8uJu92inWa5VtmzdOjW11hh74aXPAMK409Emc9zAD0PP886euHhhetqRzpEDB1zXjaKIsQ7ncjUDbi0tVavVxcX5XtRft2Z9FCULS/N+GHhrjXCkdKW2ygIZYwv4pwUkIG0MU7nju4Bgrf32+36EAi1aAWbx+HE/9IXVnsun1k5G7eVuohn1+dBQs9N0O6zihGmvxb2BRKWTiE5NB8sDM48PfOCVt97xn99/dOGdXB5L+tvAeTCAzKpzludn2vvWezZZ3Hv66GT61KNHF+ci4Z+/9oIH7v3lztufDDafrwGIrC2QLEWxrfjhokCy1Gq1005df/j4I4WH628vMScoFoAIuLpSFZEY6f03Xsd9FxIAgLS134H1//H1r77rc5/Yd0/3+Mtr//bF/3HBj2+oxPdCy5UBQZwDAAjwDSyr6GPf+7BA/wOv/kCNCl6GtQTAMU0TP4XJocpCq2+JLFDJl8zqPCNDoIAbAG5sBgTEBQruSgd4jgaQermxjJddRK07CgwyC2hAu9YtlzKRY3seF2b+od2aGFkzl+Y2nJQbLtnglLe/9g0v+c4NC5986dWs8RO/fubxfU9MXvCih7951wUAALAmKC+7rgQmI2NA9wFIgZEYMQRtLScyVkqZqAwBC0InMAba/q9vJAAYxX3QOgtDP3pyd++ql+vRdWsg9JayXlByTmmcYVr92SNH77vhhol1E1u2b+2JoaDi9DvLwelbFu6abbdiUXOiXm9mfnnu2LGxM0JONmtFLhPtdlurXAghVvoFcHLkSLOsWICQQGvNLCMirXWpVEIsmrPIEPM8np1rdTvLpdbi8SOHl2aPodG+5wReGARlhyrYcLnjoEBEj8hlQhjhpExUfC49xgQBaGsJEQu3haLabK0tEkqiInRZHKS8dpD+rt5/ZkVhY8XHslg0HcdRSpkiHBaaXVYPSsUDhQqz8icgMiuno5N3ISs7SFjBauGKnSLhIPMujrcI3P4XoXc1AoMlBrwI/oyYJQ0IBiwCsmImR
[... remainder of base64-encoded PNG output omitted ...]",
-      "text/plain": [
-       ""
-      ]
-     },
-     "execution_count": 9,
-     "metadata": {},
-     "output_type": "execute_result"
-    }
-   ],
-   "source": [
-    "from ppdet.utils.coco_eval import bbox2out, mask2out, coco17_category_info\n",
-    "from ppdet.utils.visualizer import visualize_results\n",
-    "\n",
-    "cls2cat, cat2name = coco17_category_info()\n",
-    "bboxes = bbox2out([res], cls2cat)\n",
-    "masks = mask2out([res], cls2cat, roi_size)\n",
-    "\n",
-    "visualize_results(img, 0, cat2name, 0.5, bboxes, masks)"
-   ]
-  }
- ],
- "metadata": {
-  "kernelspec": {
-   "display_name": "Python 2",
-   "language": "python",
-   "name": "python2"
-  },
-  "language_info": {
-   "codemirror_mode": {
-    "name": "ipython",
-    "version": 2
-   },
-   "file_extension": ".py",
-   "mimetype": "text/x-python",
-   "name": "python",
-   "nbconvert_exporter": "python",
-   "pygments_lexer": "ipython2",
-   "version": "2.7.16"
-  },
-  "name": "mask_rcnn_demo.ipynb"
- },
- "nbformat": 4,
- "nbformat_minor": 2
-}
diff --git a/static/demo/orange_71.jpg b/static/demo/orange_71.jpg
deleted file mode 100644
index da7974a1a1371298f1ca5f4ef9c82bd3824d7ac3..0000000000000000000000000000000000000000
Binary files a/static/demo/orange_71.jpg and /dev/null differ
diff --git a/static/demo/road554.png b/static/demo/road554.png
deleted file mode 100644
index 7733e57f922b0fee893775da4f698c202804966f..0000000000000000000000000000000000000000
Binary files a/static/demo/road554.png and /dev/null differ
diff --git a/static/deploy/README.md b/static/deploy/README.md
deleted file mode 100644
index 39595a7ff4cdbd99a4c1b6043212b40a542f23cc..0000000000000000000000000000000000000000
--- a/static/deploy/README.md
+++ /dev/null
@@ -1,28 +0,0 @@
-# PaddleDetection Inference Deployment
-
-`PaddleDetection` currently supports:
-- Deployment with `Python` and `C++`, running on `Windows` and `Linux`
-- [Online serving deployment](./serving/README.md)
-- [Mobile deployment](https://github.com/PaddlePaddle/Paddle-Lite-Demo)
-
-## Model Export
-Once training has produced a model that meets your requirements, export it with `tools/export_model.py` before plugging it into the C++ server-side or mobile inference libraries (a command sketch follows below).
-
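A minimal sketch of that export step, for orientation only: the config file, weights path, and flags below are illustrative assumptions rather than values taken from this README; consult the export tutorial linked next for the exact invocation.

```bash
# Hypothetical export of a trained YOLOv3 model into the deployable format.
python tools/export_model.py \
    -c configs/yolov3_darknet.yml \
    -o weights=output/yolov3_darknet/model_final \
    --output_dir=inference_model
```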
-- [Export tutorial](https://github.com/PaddlePaddle/PaddleDetection/blob/develop/static/docs/advanced_tutorials/deploy/EXPORT_MODEL.md)
-
-After export, the directory is laid out as follows (using `yolov3_darknet` as an example):
-```
-yolov3_darknet   # model directory
-├── infer_cfg.yml  # model configuration
-├── __model__      # model file
-└── __params__     # parameter file
-```
-
-At inference time, the path of this directory is passed to the program as its input argument.
-
-## Inference Deployment
-- [1. Python inference (Linux and Windows supported)](https://github.com/PaddlePaddle/PaddleDetection/blob/develop/static/deploy/python)
-- [2. C++ inference (Linux and Windows supported)](https://github.com/PaddlePaddle/PaddleDetection/blob/develop/static/deploy/cpp)
-- [3. Online serving deployment](./serving/README.md)
-- [4. Mobile deployment](https://github.com/PaddlePaddle/Paddle-Lite-Demo)
-- [5. Jetson deployment](./cpp/docs/Jetson_build.md)
diff --git a/static/deploy/android_demo/README.md b/static/deploy/android_demo/README.md
deleted file mode 100644
index 5d158baef396661f4783c56654af5f67f2a2e5fe..0000000000000000000000000000000000000000
--- a/static/deploy/android_demo/README.md
+++ /dev/null
@@ -1,38 +0,0 @@
-# PaddleDetection Android Demo
-
-### Download and Try
-Download the APK directly via the [download link](https://paddlemodels.bj.bcebos.com/object_detection/lite/paddledetection_app.apk), or scan the QR code with a mobile browser to download and install (a command-line sketch follows the QR code):
-
-[QR code download images omitted]
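For readers who prefer the command line to the QR code, a hypothetical equivalent using standard tooling (assumes `wget` is installed and an Android device is connected with USB debugging enabled):

```bash
# Fetch the prebuilt demo APK and side-load it onto a connected device.
wget https://paddlemodels.bj.bcebos.com/object_detection/lite/paddledetection_app.apk
adb install paddledetection_app.apk
```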
-### Environment Setup and Running the Code
-- Install the latest version of Android Studio, available from https://developer.android.com/studio. This demo was written with Android Studio 4.0.
-- Download NDK 20 or above; any NDK version from 20 onward builds successfully. You can install and test the NDK build environment as follows: click File -> New -> New Project and create a "Native C++" project.
-- Import the project: click File -> New -> Import Project... and follow Android Studio's guidance.
-- First open the `app/build.gradle` file and run the `downloadAndExtractArchives` task, which downloads and unpacks the PaddleLite inference libraries and models (see the command-line sketch after the demo section below).
-- Connect and select a device, then build and run the app.
-
-### Demo
-[demo screenshot images omitted]
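The setup steps above run `downloadAndExtractArchives` from inside Android Studio. Because `preBuild` is declared to depend on that task in `app/build.gradle` (shown later in this diff), a plain command-line build should trigger it as well; a hypothetical invocation from the `app` directory:

```bash
# Downloads and unpacks the Paddle-Lite libraries, OpenCV SDK, and demo models.
./gradlew downloadAndExtractArchives
```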
-### Updating the Inference Library and Models
-
-#### Updating the inference library
-
-- Follow the [Paddle-Lite documentation](https://github.com/PaddlePaddle/Paddle-Lite/wiki) to build the inference libraries for Android (or other targets), or download the latest [prebuilt Paddle Lite libraries](https://paddle-lite.readthedocs.io/zh/latest/quick_start/release_lib.html) directly.
-- Replace the `PaddlePredictor.jar` package under `app/libs`, and the `libpaddle_lite_jni.so` libraries under `app/src/main/jniLibs/arm64-v8a` and `app/src/main/jniLibs/armeabi-v7a`.
-
-#### Updating the models
-
-- This demo supports SSD and YOLO family models. To update a model, replace the corresponding `model.nb` weight file under `app/src/main/assets/models`.
-- To add a model for a new task, such as face detection or instance segmentation, place the new model under `app/src/main/assets/models` and adapt the data preprocessing code under `app/src/main/cpp` to the new algorithm.
-- If the updated model was not trained on the COCO dataset, also update the class label file under `app/src/main/assets/labels`.
-
-### Getting More Support
-- This demo depends on [Paddle-Lite](https://github.com/PaddlePaddle/Paddle-Lite); for Android project development, see the [Paddle-Lite Android demo tutorial](https://paddle-lite.readthedocs.io/zh/latest/demo_guides/android_app_demo.html#android-demo).
-- For more Paddle-Lite demos, see [Paddle-Lite-Demo](https://github.com/PaddlePaddle/Paddle-Lite-Demo).
-- Visit [EasyEdge](https://ai.baidu.com/easyedge/app/open_source_demo?referrerUrl=paddlelite), the edge model generation platform, for more development support.
diff --git a/static/deploy/android_demo/app/app.iml b/static/deploy/android_demo/app/app.iml
deleted file mode 100644
index 11aa7180158079e82ad75ace06073d9b379dd7fb..0000000000000000000000000000000000000000
--- a/static/deploy/android_demo/app/app.iml
+++ /dev/null
@@ -1,181 +0,0 @@
-[app.iml: 181 lines of Android Studio module XML; the markup was lost in extraction and is not reproduced here]
\ No newline at end of file
diff --git a/static/deploy/android_demo/app/build.gradle b/static/deploy/android_demo/app/build.gradle
deleted file mode 100644
index 759a09aba0315c13dea04cb3971dff11063ab415..0000000000000000000000000000000000000000
--- a/static/deploy/android_demo/app/build.gradle
+++ /dev/null
@@ -1,132 +0,0 @@
-import java.security.MessageDigest
-
-apply plugin: 'com.android.application'
-
-android {
-    compileSdkVersion 30
-    buildToolsVersion "30.0.2"
-
-    defaultConfig {
-        applicationId "com.baidu.paddledetection.detection"
-        minSdkVersion 23
-        targetSdkVersion 30
-        versionCode 1
-        versionName "1.0"
-        testInstrumentationRunner "android.support.test.runner.AndroidJUnitRunner"
-        externalNativeBuild {
-            cmake {
-                arguments '-DANDROID_PLATFORM=android-23', '-DANDROID_STL=c++_shared', "-DANDROID_TOOLCHAIN="
-                abiFilters 'arm64-v8a'
-                cppFlags "-std=c++11"
-            }
-        }
-    }
-    buildTypes {
-        release {
-            minifyEnabled false
-            proguardFiles getDefaultProguardFile('proguard-android-optimize.txt'), 'proguard-rules.pro'
-        }
-    }
-    externalNativeBuild {
-        cmake {
-            path "src/main/cpp/CMakeLists.txt"
-            version "3.10.2"
-        }
-    }
-}
-
-dependencies {
-    implementation fileTree(dir: "libs", include: ["*.jar"])
-    implementation 'androidx.appcompat:appcompat:1.1.0'
-    implementation 'com.google.android.material:material:1.0.0'
-    implementation 'androidx.constraintlayout:constraintlayout:1.1.3'
-    implementation 'androidx.navigation:navigation-fragment:2.1.0'
-    implementation 'androidx.navigation:navigation-ui:2.1.0'
-    testImplementation 'junit:junit:4.12'
-    androidTestImplementation 'androidx.test.ext:junit:1.1.1'
-    androidTestImplementation 'androidx.test.espresso:espresso-core:3.2.0'
-}
-
-def archives = [
-    [
-        'src' : 'https://paddlelite-demo.bj.bcebos.com/libs/android/paddle_lite_libs_v2_6_1.tar.gz',
-        'dest': 'PaddleLite'
-    ],
-    [
-        'src' :
'https://paddlelite-demo.bj.bcebos.com/libs/android/opencv-4.2.0-android-sdk.tar.gz', - 'dest': 'OpenCV' - ], - // yolov3_mobilenet_v3 - [ - 'src' : 'https://paddlelite-demo.bj.bcebos.com/models/yolov3_mobilenet_v3_prune86_FPGM_320_fp32_for_cpu_v2_6_1.tar.gz', - 'dest' : 'src/main/assets/models/yolov3_mobilenet_v3_for_cpu' - ], - [ - 'src' : 'https://paddlelite-demo.bj.bcebos.com/models/yolov3_mobilenet_v3_prune86_FPGM_320_fp32_for_hybrid_cpu_npu_v2_6_1.tar.gz', - 'dest' : 'src/main/assets/models/yolov3_mobilenet_v3_for_hybrid_cpu_npu' - ], - // pp-yolo tiny comming soon - // ssd_mobilenet_v1 voc - [ - 'src' : 'https://paddlelite-demo.bj.bcebos.com/models/ssdlite_mobilenet_v3_large_for_cpu_nb.tar.gz', - 'dest' : 'src/main/assets/models/ssdlite_mobilenet_v3_large_for_cpu_nb' - ], -] - - -task downloadAndExtractArchives(type: DefaultTask) { - doFirst { - println "Downloading and extracting archives including libs and models" - } - doLast { - // Prepare cache folder for archives - String cachePath = "cache" - if (!file("${cachePath}").exists()) { - mkdir "${cachePath}" - } - archives.eachWithIndex { archive, index -> - MessageDigest messageDigest = MessageDigest.getInstance('MD5') - messageDigest.update(archive.src.bytes) - String cacheName = new BigInteger(1, messageDigest.digest()).toString(32) - // Download the target archive if not exists - boolean copyFiles = !file("${archive.dest}").exists() - if (!file("${cachePath}/${cacheName}.tar.gz").exists()) { - ant.get(src: archive.src, dest: file("${cachePath}/${cacheName}.tar.gz")) - copyFiles = true; // force to copy files from the latest archive files - } - // Extract the target archive if its dest path does not exists - if (copyFiles) { - copy { - from tarTree("${cachePath}/${cacheName}.tar.gz") - into "${archive.dest}" - } - } - // Unpack libs - copy { - from tarTree("cache/${cacheName}.tar.gz") - into "cache/${cacheName}" - } - // Copy PaddlePredictor.jar - if (!file("libs/PaddlePredictor.jar").exists()) { - copy { - from "cache/${cacheName}/java/PaddlePredictor.jar" - into "libs" - } - } - // Copy libpaddle_lite_jni.so for armeabi-v7a and arm64-v8a - if (!file("src/main/jniLibs/armeabi-v7a/libpaddle_lite_jni.so").exists()) { - copy { - from "cache/${cacheName}/java/libs/armeabi-v7a/libpaddle_lite_jni.so" - into "src/main/jniLibs/armeabi-v7a" - } - } - if (!file("src/main/jniLibs/arm64-v8a/libpaddle_lite_jni.so").exists()) { - copy { - from "cache/${cacheName}/java/libs/arm64-v8a/libpaddle_lite_jni.so" - into "src/main/jniLibs/arm64-v8a" - } - } - } - } -} -preBuild.dependsOn downloadAndExtractArchives diff --git a/static/deploy/android_demo/app/gradlew b/static/deploy/android_demo/app/gradlew deleted file mode 100644 index cccdd3d517fc5249beaefa600691cf150f2fa3e6..0000000000000000000000000000000000000000 --- a/static/deploy/android_demo/app/gradlew +++ /dev/null @@ -1,172 +0,0 @@ -#!/usr/bin/env sh - -############################################################################## -## -## Gradle start up script for UN*X -## -############################################################################## - -# Attempt to set APP_HOME -# Resolve links: $0 may be a link -PRG="$0" -# Need this for relative symlinks. 
-while [ -h "$PRG" ] ; do - ls=`ls -ld "$PRG"` - link=`expr "$ls" : '.*-> \(.*\)$'` - if expr "$link" : '/.*' > /dev/null; then - PRG="$link" - else - PRG=`dirname "$PRG"`"/$link" - fi -done -SAVED="`pwd`" -cd "`dirname \"$PRG\"`/" >/dev/null -APP_HOME="`pwd -P`" -cd "$SAVED" >/dev/null - -APP_NAME="Gradle" -APP_BASE_NAME=`basename "$0"` - -# Add default JVM options here. You can also use JAVA_OPTS and GRADLE_OPTS to pass JVM options to this script. -DEFAULT_JVM_OPTS="" - -# Use the maximum available, or set MAX_FD != -1 to use that value. -MAX_FD="maximum" - -warn () { - echo "$*" -} - -die () { - echo - echo "$*" - echo - exit 1 -} - -# OS specific support (must be 'true' or 'false'). -cygwin=false -msys=false -darwin=false -nonstop=false -case "`uname`" in - CYGWIN* ) - cygwin=true - ;; - Darwin* ) - darwin=true - ;; - MINGW* ) - msys=true - ;; - NONSTOP* ) - nonstop=true - ;; -esac - -CLASSPATH=$APP_HOME/gradle/wrapper/gradle-wrapper.jar - -# Determine the Java command to use to start the JVM. -if [ -n "$JAVA_HOME" ] ; then - if [ -x "$JAVA_HOME/jre/sh/java" ] ; then - # IBM's JDK on AIX uses strange locations for the executables - JAVACMD="$JAVA_HOME/jre/sh/java" - else - JAVACMD="$JAVA_HOME/bin/java" - fi - if [ ! -x "$JAVACMD" ] ; then - die "ERROR: JAVA_HOME is set to an invalid directory: $JAVA_HOME - -Please set the JAVA_HOME variable in your environment to match the -location of your Java installation." - fi -else - JAVACMD="java" - which java >/dev/null 2>&1 || die "ERROR: JAVA_HOME is not set and no 'java' command could be found in your PATH. - -Please set the JAVA_HOME variable in your environment to match the -location of your Java installation." -fi - -# Increase the maximum file descriptors if we can. -if [ "$cygwin" = "false" -a "$darwin" = "false" -a "$nonstop" = "false" ] ; then - MAX_FD_LIMIT=`ulimit -H -n` - if [ $? -eq 0 ] ; then - if [ "$MAX_FD" = "maximum" -o "$MAX_FD" = "max" ] ; then - MAX_FD="$MAX_FD_LIMIT" - fi - ulimit -n $MAX_FD - if [ $? 
-ne 0 ] ; then - warn "Could not set maximum file descriptor limit: $MAX_FD" - fi - else - warn "Could not query maximum file descriptor limit: $MAX_FD_LIMIT" - fi -fi - -# For Darwin, add options to specify how the application appears in the dock -if $darwin; then - GRADLE_OPTS="$GRADLE_OPTS \"-Xdock:name=$APP_NAME\" \"-Xdock:icon=$APP_HOME/media/gradle.icns\"" -fi - -# For Cygwin, switch paths to Windows format before running java -if $cygwin ; then - APP_HOME=`cygpath --path --mixed "$APP_HOME"` - CLASSPATH=`cygpath --path --mixed "$CLASSPATH"` - JAVACMD=`cygpath --unix "$JAVACMD"` - - # We build the pattern for arguments to be converted via cygpath - ROOTDIRSRAW=`find -L / -maxdepth 1 -mindepth 1 -type d 2>/dev/null` - SEP="" - for dir in $ROOTDIRSRAW ; do - ROOTDIRS="$ROOTDIRS$SEP$dir" - SEP="|" - done - OURCYGPATTERN="(^($ROOTDIRS))" - # Add a user-defined pattern to the cygpath arguments - if [ "$GRADLE_CYGPATTERN" != "" ] ; then - OURCYGPATTERN="$OURCYGPATTERN|($GRADLE_CYGPATTERN)" - fi - # Now convert the arguments - kludge to limit ourselves to /bin/sh - i=0 - for arg in "$@" ; do - CHECK=`echo "$arg"|egrep -c "$OURCYGPATTERN" -` - CHECK2=`echo "$arg"|egrep -c "^-"` ### Determine if an option - - if [ $CHECK -ne 0 ] && [ $CHECK2 -eq 0 ] ; then ### Added a condition - eval `echo args$i`=`cygpath --path --ignore --mixed "$arg"` - else - eval `echo args$i`="\"$arg\"" - fi - i=$((i+1)) - done - case $i in - (0) set -- ;; - (1) set -- "$args0" ;; - (2) set -- "$args0" "$args1" ;; - (3) set -- "$args0" "$args1" "$args2" ;; - (4) set -- "$args0" "$args1" "$args2" "$args3" ;; - (5) set -- "$args0" "$args1" "$args2" "$args3" "$args4" ;; - (6) set -- "$args0" "$args1" "$args2" "$args3" "$args4" "$args5" ;; - (7) set -- "$args0" "$args1" "$args2" "$args3" "$args4" "$args5" "$args6" ;; - (8) set -- "$args0" "$args1" "$args2" "$args3" "$args4" "$args5" "$args6" "$args7" ;; - (9) set -- "$args0" "$args1" "$args2" "$args3" "$args4" "$args5" "$args6" "$args7" "$args8" ;; - esac -fi - -# Escape application args -save () { - for i do printf %s\\n "$i" | sed "s/'/'\\\\''/g;1s/^/'/;\$s/\$/' \\\\/" ; done - echo " " -} -APP_ARGS=$(save "$@") - -# Collect all arguments for the java command, following the shell quoting and substitution rules -eval set -- $DEFAULT_JVM_OPTS $JAVA_OPTS $GRADLE_OPTS "\"-Dorg.gradle.appname=$APP_BASE_NAME\"" -classpath "\"$CLASSPATH\"" org.gradle.wrapper.GradleWrapperMain "$APP_ARGS" - -# by default we should be in the correct project dir, but when run from Finder on Mac, the cwd is wrong -if [ "$(uname)" = "Darwin" ] && [ "$HOME" = "$PWD" ]; then - cd "$(dirname "$0")" -fi - -exec "$JAVACMD" "$@" diff --git a/static/deploy/android_demo/app/gradlew.bat b/static/deploy/android_demo/app/gradlew.bat deleted file mode 100644 index e95643d6a2ca62258464e83c72f5156dc941c609..0000000000000000000000000000000000000000 --- a/static/deploy/android_demo/app/gradlew.bat +++ /dev/null @@ -1,84 +0,0 @@ -@if "%DEBUG%" == "" @echo off -@rem ########################################################################## -@rem -@rem Gradle startup script for Windows -@rem -@rem ########################################################################## - -@rem Set local scope for the variables with windows NT shell -if "%OS%"=="Windows_NT" setlocal - -set DIRNAME=%~dp0 -if "%DIRNAME%" == "" set DIRNAME=. -set APP_BASE_NAME=%~n0 -set APP_HOME=%DIRNAME% - -@rem Add default JVM options here. You can also use JAVA_OPTS and GRADLE_OPTS to pass JVM options to this script. 
-set DEFAULT_JVM_OPTS= - -@rem Find java.exe -if defined JAVA_HOME goto findJavaFromJavaHome - -set JAVA_EXE=java.exe -%JAVA_EXE% -version >NUL 2>&1 -if "%ERRORLEVEL%" == "0" goto init - -echo. -echo ERROR: JAVA_HOME is not set and no 'java' command could be found in your PATH. -echo. -echo Please set the JAVA_HOME variable in your environment to match the -echo location of your Java installation. - -goto fail - -:findJavaFromJavaHome -set JAVA_HOME=%JAVA_HOME:"=% -set JAVA_EXE=%JAVA_HOME%/bin/java.exe - -if exist "%JAVA_EXE%" goto init - -echo. -echo ERROR: JAVA_HOME is set to an invalid directory: %JAVA_HOME% -echo. -echo Please set the JAVA_HOME variable in your environment to match the -echo location of your Java installation. - -goto fail - -:init -@rem Get command-line arguments, handling Windows variants - -if not "%OS%" == "Windows_NT" goto win9xME_args - -:win9xME_args -@rem Slurp the command line arguments. -set CMD_LINE_ARGS= -set _SKIP=2 - -:win9xME_args_slurp -if "x%~1" == "x" goto execute - -set CMD_LINE_ARGS=%* - -:execute -@rem Setup the command line - -set CLASSPATH=%APP_HOME%\gradle\wrapper\gradle-wrapper.jar - -@rem Execute Gradle -"%JAVA_EXE%" %DEFAULT_JVM_OPTS% %JAVA_OPTS% %GRADLE_OPTS% "-Dorg.gradle.appname=%APP_BASE_NAME%" -classpath "%CLASSPATH%" org.gradle.wrapper.GradleWrapperMain %CMD_LINE_ARGS% - -:end -@rem End local scope for the variables with windows NT shell -if "%ERRORLEVEL%"=="0" goto mainEnd - -:fail -rem Set variable GRADLE_EXIT_CONSOLE if you need the _script_ return code instead of -rem the _cmd.exe /c_ return code! -if not "" == "%GRADLE_EXIT_CONSOLE%" exit 1 -exit /b 1 - -:mainEnd -if "%OS%"=="Windows_NT" endlocal - -:omega diff --git a/static/deploy/android_demo/app/local.properties b/static/deploy/android_demo/app/local.properties deleted file mode 100644 index ae629f3566e49f127c50cb20f2449e3b5ea9fe52..0000000000000000000000000000000000000000 --- a/static/deploy/android_demo/app/local.properties +++ /dev/null @@ -1,8 +0,0 @@ -## This file must *NOT* be checked into Version Control Systems, -# as it contains information specific to your local configuration. -# -# Location of the SDK. This is only used by Gradle. -# For customization when using a Version Control System, please read the -# header note. -#Wed Sep 16 11:31:42 CST 2020 -sdk.dir=/Users/yuguanghua02/Library/Android/sdk diff --git a/static/deploy/android_demo/app/proguard-rules.pro b/static/deploy/android_demo/app/proguard-rules.pro deleted file mode 100644 index f1b424510da51fd82143bc74a0a801ae5a1e2fcd..0000000000000000000000000000000000000000 --- a/static/deploy/android_demo/app/proguard-rules.pro +++ /dev/null @@ -1,21 +0,0 @@ -# Add project specific ProGuard rules here. -# You can control the set of applied configuration files using the -# proguardFiles setting in build.gradle. -# -# For more details, see -# http://developer.android.com/guide/developing/tools/proguard.html - -# If your project uses WebView with JS, uncomment the following -# and specify the fully qualified class name to the JavaScript interface -# class: -#-keepclassmembers class fqcn.of.javascript.interface.for.webview { -# public *; -#} - -# Uncomment this to preserve the line number information for -# debugging stack traces. -#-keepattributes SourceFile,LineNumberTable - -# If you keep the line number information, uncomment this to -# hide the original source file name. 
-#-renamesourcefileattribute SourceFile
diff --git a/static/deploy/android_demo/app/src/androidTest/java/com/baidu/paddledetection/detection/ExampleInstrumentedTest.java b/static/deploy/android_demo/app/src/androidTest/java/com/baidu/paddledetection/detection/ExampleInstrumentedTest.java
deleted file mode 100644
index 2e9b169375455c89031b96a9c39574f1f10a6d17..0000000000000000000000000000000000000000
--- a/static/deploy/android_demo/app/src/androidTest/java/com/baidu/paddledetection/detection/ExampleInstrumentedTest.java
+++ /dev/null
@@ -1,26 +0,0 @@
-package com.baidu.paddledetection.detection;
-
-import android.content.Context;
-import android.support.test.InstrumentationRegistry;
-import android.support.test.runner.AndroidJUnit4;
-
-import org.junit.Test;
-import org.junit.runner.RunWith;
-
-import static org.junit.Assert.*;
-
-/**
- * Instrumented test, which will execute on an Android device.
- *
- * @see Testing documentation
- */
-@RunWith(AndroidJUnit4.class)
-public class ExampleInstrumentedTest {
-    @Test
-    public void useAppContext() {
-        // Context of the app under test.
-        Context appContext = InstrumentationRegistry.getTargetContext();
-
-        assertEquals("com.baidu.paddle.lite.demo", appContext.getPackageName());
-    }
-}
diff --git a/static/deploy/android_demo/app/src/main/AndroidManifest.xml b/static/deploy/android_demo/app/src/main/AndroidManifest.xml
deleted file mode 100644
index 1f33eb0e6c55fe006d791791f9b8db7f1f6b40b7..0000000000000000000000000000000000000000
--- a/static/deploy/android_demo/app/src/main/AndroidManifest.xml
+++ /dev/null
@@ -1,34 +0,0 @@
-[AndroidManifest.xml: 34 lines of manifest XML; the markup was lost in extraction and is not reproduced here]
\ No newline at end of file
diff --git a/static/deploy/android_demo/app/src/main/assets/images/home.jpg b/static/deploy/android_demo/app/src/main/assets/images/home.jpg
deleted file mode 100644
index 19023f718333c56c70776c79201dc03d742c1ed3..0000000000000000000000000000000000000000
Binary files a/static/deploy/android_demo/app/src/main/assets/images/home.jpg and /dev/null differ
diff --git a/static/deploy/android_demo/app/src/main/assets/images/kite.jpg b/static/deploy/android_demo/app/src/main/assets/images/kite.jpg
deleted file mode 100644
index cb304bd56c4010c08611a30dcca58ea9140cea54..0000000000000000000000000000000000000000
Binary files a/static/deploy/android_demo/app/src/main/assets/images/kite.jpg and /dev/null differ
diff --git a/static/deploy/android_demo/app/src/main/assets/labels/coco-labels-background.txt b/static/deploy/android_demo/app/src/main/assets/labels/coco-labels-background.txt
deleted file mode 100644
index e290149963116c873fe6c54d28337a78bc500d8a..0000000000000000000000000000000000000000
--- a/static/deploy/android_demo/app/src/main/assets/labels/coco-labels-background.txt
+++ /dev/null
@@ -1,81 +0,0 @@
-background
-person
-bicycle
-car
-motorcycle
-airplane
-bus
-train
-truck
-boat
-traffic light
-fire hydrant
-stop sign
-parking meter
-bench
-bird
-cat
-dog
-horse
-sheep
-cow
-elephant
-bear
-zebra
-giraffe
-backpack
-umbrella
-handbag
-tie
-suitcase
-frisbee
-skis
-snowboard
-sports ball
-kite
-baseball bat
-baseball glove
-skateboard
-surfboard
-tennis racket
-bottle
-wine glass
-cup
-fork
-knife
-spoon
-bowl
-banana
-apple
-sandwich
-orange
-broccoli
-carrot
-hot dog
-pizza
-donut
-cake
-chair
-couch
-potted plant
-bed
-dining table
-toilet
-tv
-laptop
-mouse
-remote
-keyboard
-cell phone
-microwave
-oven
-toaster
-sink
-refrigerator
-book
-clock
-vase
-scissors
-teddy bear
-hair drier
-toothbrush
\ No newline at end of file
diff --git
a/static/deploy/android_demo/app/src/main/cpp/CMakeLists.txt b/static/deploy/android_demo/app/src/main/cpp/CMakeLists.txt deleted file mode 100644 index 90903a3687e76736d056d382149c1fabee4f760b..0000000000000000000000000000000000000000 --- a/static/deploy/android_demo/app/src/main/cpp/CMakeLists.txt +++ /dev/null @@ -1,170 +0,0 @@ -# For more information about using CMake with Android Studio, read the -# documentation: https://d.android.com/studio/projects/add-native-code.html - -# Sets the minimum version of CMake required to build the native library. - -cmake_minimum_required(VERSION 3.4.1) - -# Creates and names a library, sets it as either STATIC or SHARED, and provides -# the relative paths to its source code. You can define multiple libraries, and -# CMake builds them for you. Gradle automatically packages shared libraries with -# your APK. - -set(PaddleLite_DIR "${CMAKE_CURRENT_SOURCE_DIR}/../../../PaddleLite") -include_directories(${PaddleLite_DIR}/cxx/include) - -set(OpenCV_DIR "${CMAKE_CURRENT_SOURCE_DIR}/../../../OpenCV/sdk/native/jni") -find_package(OpenCV REQUIRED) -message(STATUS "OpenCV libraries: ${OpenCV_LIBS}") -include_directories(${OpenCV_INCLUDE_DIRS}) - -set(CMAKE_CXX_FLAGS - "${CMAKE_CXX_FLAGS} -ffast-math -Ofast -Os -DNDEBUG -fno-exceptions -fomit-frame-pointer -fno-asynchronous-unwind-tables -fno-unwind-tables" -) -set(CMAKE_CXX_FLAGS - "${CMAKE_CXX_FLAGS} -fvisibility=hidden -fvisibility-inlines-hidden -fdata-sections -ffunction-sections" -) -set(CMAKE_SHARED_LINKER_FLAGS - "${CMAKE_SHARED_LINKER_FLAGS} -Wl,--gc-sections -Wl,-z,nocopyreloc") - -add_library( - # Sets the name of the library. - Native - # Sets the library as a shared library. - SHARED - # Provides a relative path to your source file(s). - Native.cc Pipeline.cc Utils.cc) - -find_library( - # Sets the name of the path variable. - log-lib - # Specifies the name of the NDK library that you want CMake to locate. - log) - -add_library( - # Sets the name of the library. - paddle_light_api_shared - # Sets the library as a shared library. - SHARED - # Provides a relative path to your source file(s). - IMPORTED) - -set_target_properties( - # Specifies the target library. - paddle_light_api_shared - # Specifies the parameter you want to define. - PROPERTIES - IMPORTED_LOCATION - ${PaddleLite_DIR}/cxx/libs/${ANDROID_ABI}/libpaddle_light_api_shared.so - # Provides the path to the library you want to import. -) - -# Comment the followings if libpaddle_light_api_shared.so is not fit for NPU -add_library( - # Sets the name of the library. - hiai - # Sets the library as a shared library. - SHARED - # Provides a relative path to your source file(s). - IMPORTED) - -set_target_properties( - # Specifies the target library. - hiai - # Specifies the parameter you want to define. - PROPERTIES IMPORTED_LOCATION - ${PaddleLite_DIR}/cxx/libs/${ANDROID_ABI}/libhiai.so - # Provides the path to the library you want to import. -) - -add_library( - # Sets the name of the library. - hiai_ir - # Sets the library as a shared library. - SHARED - # Provides a relative path to your source file(s). - IMPORTED) - -set_target_properties( - # Specifies the target library. - hiai_ir - # Specifies the parameter you want to define. - PROPERTIES IMPORTED_LOCATION - ${PaddleLite_DIR}/cxx/libs/${ANDROID_ABI}/libhiai_ir.so - # Provides the path to the library you want to import. -) - -add_library( - # Sets the name of the library. - hiai_ir_build - # Sets the library as a shared library. 
- SHARED - # Provides a relative path to your source file(s). - IMPORTED) - -set_target_properties( - # Specifies the target library. - hiai_ir_build - # Specifies the parameter you want to define. - PROPERTIES IMPORTED_LOCATION - ${PaddleLite_DIR}/cxx/libs/${ANDROID_ABI}/libhiai_ir_build.so - # Provides the path to the library you want to import. -) - -# Specifies libraries CMake should link to your target library. You can link -# multiple libraries, such as libraries you define in this build script, -# prebuilt third-party libraries, or system libraries. - -target_link_libraries( - # Specifies the target library. - Native - paddle_light_api_shared - ${OpenCV_LIBS} - GLESv2 - EGL - ${log-lib} - hiai - hiai_ir - hiai_ir_build - ) - -add_custom_command( - TARGET Native - POST_BUILD - COMMAND - ${CMAKE_COMMAND} -E copy - ${PaddleLite_DIR}/cxx/libs/${ANDROID_ABI}/libc++_shared.so - ${CMAKE_LIBRARY_OUTPUT_DIRECTORY}/libc++_shared.so) - -add_custom_command( - TARGET Native - POST_BUILD - COMMAND - ${CMAKE_COMMAND} -E copy - ${PaddleLite_DIR}/cxx/libs/${ANDROID_ABI}/libpaddle_light_api_shared.so - ${CMAKE_LIBRARY_OUTPUT_DIRECTORY}/libpaddle_light_api_shared.so) - -# Comment the followings if libpaddle_light_api_shared.so is not fit for NPU -add_custom_command( - TARGET Native - POST_BUILD - COMMAND - ${CMAKE_COMMAND} -E copy - ${PaddleLite_DIR}/cxx/libs/${ANDROID_ABI}/libhiai.so - ${CMAKE_LIBRARY_OUTPUT_DIRECTORY}/libhiai.so) - -add_custom_command( - TARGET Native - POST_BUILD - COMMAND - ${CMAKE_COMMAND} -E copy - ${PaddleLite_DIR}/cxx/libs/${ANDROID_ABI}/libhiai_ir.so - ${CMAKE_LIBRARY_OUTPUT_DIRECTORY}/libhiai_ir.so) - -add_custom_command( - TARGET Native - POST_BUILD - COMMAND - ${CMAKE_COMMAND} -E copy - ${PaddleLite_DIR}/cxx/libs/${ANDROID_ABI}/libhiai_ir_build.so - ${CMAKE_LIBRARY_OUTPUT_DIRECTORY}/libhiai_ir_build.so) diff --git a/static/deploy/android_demo/app/src/main/cpp/Native.cc b/static/deploy/android_demo/app/src/main/cpp/Native.cc deleted file mode 100644 index 1b2700a91c8b9bd2b6a186378b6bdc068e8927a9..0000000000000000000000000000000000000000 --- a/static/deploy/android_demo/app/src/main/cpp/Native.cc +++ /dev/null @@ -1,78 +0,0 @@ -// Copyright (c) 2019 PaddlePaddle Authors. All Rights Reserved. -// -// Licensed under the Apache License, Version 2.0 (the "License"); -// you may not use this file except in compliance with the License. -// You may obtain a copy of the License at -// -// http://www.apache.org/licenses/LICENSE-2.0 -// -// Unless required by applicable law or agreed to in writing, software -// distributed under the License is distributed on an "AS IS" BASIS, -// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -// See the License for the specific language governing permissions and -// limitations under the License. 
-
-#include "Native.h"
-#include "Pipeline.h"
-
-#ifdef __cplusplus
-extern "C" {
-#endif
-/*
- * Class:     com_baidu_paddledetection_detection_Native
- * Method:    nativeInit
- * Signature:
- * (Ljava/lang/String;Ljava/lang/String;ILjava/lang/String;II[F[FF)J
- */
-JNIEXPORT jlong JNICALL
-Java_com_baidu_paddledetection_detection_Native_nativeInit(
-    JNIEnv *env, jclass thiz, jstring jModelDir, jstring jLabelPath,
-    jint cpuThreadNum, jstring jCPUPowerMode, jint inputWidth, jint inputHeight,
-    jfloatArray jInputMean, jfloatArray jInputStd, jfloat scoreThreshold) {
-  std::string modelDir = jstring_to_cpp_string(env, jModelDir);
-  std::string labelPath = jstring_to_cpp_string(env, jLabelPath);
-  std::string cpuPowerMode = jstring_to_cpp_string(env, jCPUPowerMode);
-  std::vector<float> inputMean = jfloatarray_to_float_vector(env, jInputMean);
-  std::vector<float> inputStd = jfloatarray_to_float_vector(env, jInputStd);
-  return reinterpret_cast<jlong>(
-      new Pipeline(modelDir, labelPath, cpuThreadNum, cpuPowerMode, inputWidth,
-                   inputHeight, inputMean, inputStd, scoreThreshold));
-}
-
-/*
- * Class:     com_baidu_paddledetection_detection_Native
- * Method:    nativeRelease
- * Signature: (J)Z
- */
-JNIEXPORT jboolean JNICALL
-Java_com_baidu_paddledetection_detection_Native_nativeRelease(
-    JNIEnv *env, jclass thiz, jlong ctx) {
-  if (ctx == 0) {
-    return JNI_FALSE;
-  }
-  Pipeline *pipeline = reinterpret_cast<Pipeline *>(ctx);
-  delete pipeline;
-  return JNI_TRUE;
-}
-
-/*
- * Class:     com_baidu_paddledetection_detection_Native
- * Method:    nativeProcess
- * Signature: (JIIIILjava/lang/String;)Z
- */
-JNIEXPORT jboolean JNICALL
-Java_com_baidu_paddledetection_detection_Native_nativeProcess(
-    JNIEnv *env, jclass thiz, jlong ctx, jint inTextureId, jint outTextureId,
-    jint textureWidth, jint textureHeight, jstring jsavedImagePath) {
-  if (ctx == 0) {
-    return JNI_FALSE;
-  }
-  std::string savedImagePath = jstring_to_cpp_string(env, jsavedImagePath);
-  Pipeline *pipeline = reinterpret_cast<Pipeline *>(ctx);
-  return pipeline->Process(inTextureId, outTextureId, textureWidth,
-                           textureHeight, savedImagePath);
-}
-
-#ifdef __cplusplus
-}
-#endif
diff --git a/static/deploy/android_demo/app/src/main/cpp/Native.h b/static/deploy/android_demo/app/src/main/cpp/Native.h
deleted file mode 100644
index d595b8ea2c11c1b0a9219119abcc5678551c318f..0000000000000000000000000000000000000000
--- a/static/deploy/android_demo/app/src/main/cpp/Native.h
+++ /dev/null
@@ -1,116 +0,0 @@
-// Copyright (c) 2019 PaddlePaddle Authors. All Rights Reserved.
-//
-// Licensed under the Apache License, Version 2.0 (the "License");
-// you may not use this file except in compliance with the License.
-// You may obtain a copy of the License at
-//
-//     http://www.apache.org/licenses/LICENSE-2.0
-//
-// Unless required by applicable law or agreed to in writing, software
-// distributed under the License is distributed on an "AS IS" BASIS,
-// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-// See the License for the specific language governing permissions and
-// limitations under the License.
-
-#pragma once
-
-#include <jni.h>
-#include <string>
-#include <vector>
-
-inline std::string jstring_to_cpp_string(JNIEnv *env, jstring jstr) {
-  // In Java, a unicode char is encoded using 2 bytes (UTF-16), so a jstring
-  // holds UTF-16 characters. std::string in C++ is essentially a string of
-  // bytes, not characters, so to pass a jstring from JNI to C++ we have to
-  // convert the UTF-16 data to bytes.
-  if (!jstr) {
-    return "";
-  }
-  const jclass stringClass = env->GetObjectClass(jstr);
-  const jmethodID getBytes =
-      env->GetMethodID(stringClass, "getBytes", "(Ljava/lang/String;)[B");
-  const jbyteArray stringJbytes = (jbyteArray)env->CallObjectMethod(
-      jstr, getBytes, env->NewStringUTF("UTF-8"));
-
-  size_t length = (size_t)env->GetArrayLength(stringJbytes);
-  jbyte *pBytes = env->GetByteArrayElements(stringJbytes, NULL);
-
-  std::string ret = std::string(reinterpret_cast<char *>(pBytes), length);
-  env->ReleaseByteArrayElements(stringJbytes, pBytes, JNI_ABORT);
-
-  env->DeleteLocalRef(stringJbytes);
-  env->DeleteLocalRef(stringClass);
-  return ret;
-}
-
-inline jstring cpp_string_to_jstring(JNIEnv *env, std::string str) {
-  auto *data = str.c_str();
-  jclass strClass = env->FindClass("java/lang/String");
-  jmethodID strClassInitMethodID =
-      env->GetMethodID(strClass, "<init>", "([BLjava/lang/String;)V");
-
-  jbyteArray bytes = env->NewByteArray(strlen(data));
-  env->SetByteArrayRegion(bytes, 0, strlen(data),
-                          reinterpret_cast<const jbyte *>(data));
-
-  jstring encoding = env->NewStringUTF("UTF-8");
-  jstring res = (jstring)(
-      env->NewObject(strClass, strClassInitMethodID, bytes, encoding));
-
-  env->DeleteLocalRef(strClass);
-  env->DeleteLocalRef(encoding);
-  env->DeleteLocalRef(bytes);
-
-  return res;
-}
-
-inline jfloatArray cpp_array_to_jfloatarray(JNIEnv *env, const float *buf,
-                                            int64_t len) {
-  jfloatArray result = env->NewFloatArray(len);
-  env->SetFloatArrayRegion(result, 0, len, buf);
-  return result;
-}
-
-inline jintArray cpp_array_to_jintarray(JNIEnv *env, const int *buf,
-                                        int64_t len) {
-  jintArray result = env->NewIntArray(len);
-  env->SetIntArrayRegion(result, 0, len, buf);
-  return result;
-}
-
-inline jbyteArray cpp_array_to_jbytearray(JNIEnv *env, const int8_t *buf,
-                                          int64_t len) {
-  jbyteArray result = env->NewByteArray(len);
-  env->SetByteArrayRegion(result, 0, len, buf);
-  return result;
-}
-
-inline jlongArray int64_vector_to_jlongarray(JNIEnv *env,
-                                             const std::vector<int64_t> &vec) {
-  jlongArray result = env->NewLongArray(vec.size());
-  jlong *buf = new jlong[vec.size()];
-  for (size_t i = 0; i < vec.size(); ++i) {
-    buf[i] = (jlong)vec[i];
-  }
-  env->SetLongArrayRegion(result, 0, vec.size(), buf);
-  delete[] buf;
-  return result;
-}
-
-inline std::vector<int64_t> jlongarray_to_int64_vector(JNIEnv *env,
-                                                       jlongArray data) {
-  int data_size = env->GetArrayLength(data);
-  jlong *data_ptr = env->GetLongArrayElements(data, nullptr);
-  std::vector<int64_t> data_vec(data_ptr, data_ptr + data_size);
-  env->ReleaseLongArrayElements(data, data_ptr, 0);
-  return data_vec;
-}
-
-inline std::vector<float> jfloatarray_to_float_vector(JNIEnv *env,
                                                       jfloatArray data) {
-  int data_size = env->GetArrayLength(data);
-  jfloat *data_ptr = env->GetFloatArrayElements(data, nullptr);
-  std::vector<float> data_vec(data_ptr, data_ptr + data_size);
-  env->ReleaseFloatArrayElements(data, data_ptr, 0);
-  return data_vec;
-}
diff --git a/static/deploy/android_demo/app/src/main/cpp/Pipeline.cc b/static/deploy/android_demo/app/src/main/cpp/Pipeline.cc
deleted file mode 100644
index b3e3476d961cfd608186f9672786770b39da3268..0000000000000000000000000000000000000000
--- a/static/deploy/android_demo/app/src/main/cpp/Pipeline.cc
+++ /dev/null
@@ -1,243 +0,0 @@
-// Copyright (c) 2019 PaddlePaddle Authors. All Rights Reserved.
-//
-// Licensed under the Apache License, Version 2.0 (the "License");
-// you may not use this file except in compliance with the License.
-// You may obtain a copy of the License at
-//
-//     http://www.apache.org/licenses/LICENSE-2.0
-//
-// Unless required by applicable law or agreed to in writing, software
-// distributed under the License is distributed on an "AS IS" BASIS,
-// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-// See the License for the specific language governing permissions and
-// limitations under the License.
-
-#include "Pipeline.h"
-
-Detector::Detector(const std::string &modelDir, const std::string &labelPath,
-                   const int cpuThreadNum, const std::string &cpuPowerMode,
-                   int inputWidth, int inputHeight,
-                   const std::vector<float> &inputMean,
-                   const std::vector<float> &inputStd, float scoreThreshold)
-    : inputWidth_(inputWidth), inputHeight_(inputHeight), inputMean_(inputMean),
-      inputStd_(inputStd), scoreThreshold_(scoreThreshold) {
-  paddle::lite_api::MobileConfig config;
-  config.set_model_from_file(modelDir + "/model.nb");
-  config.set_threads(cpuThreadNum);
-  config.set_power_mode(ParsePowerMode(cpuPowerMode));
-  predictor_ =
-      paddle::lite_api::CreatePaddlePredictor<paddle::lite_api::MobileConfig>(
-          config);
-  labelList_ = LoadLabelList(labelPath);
-  colorMap_ = GenerateColorMap(labelList_.size());
-}
-
-std::vector<std::string> Detector::LoadLabelList(const std::string &labelPath) {
-  std::ifstream file;
-  std::vector<std::string> labels;
-  file.open(labelPath);
-  while (file) {
-    std::string line;
-    std::getline(file, line);
-    labels.push_back(line);
-  }
-  file.clear();
-  file.close();
-  return labels;
-}
-
-std::vector<cv::Scalar> Detector::GenerateColorMap(int numOfClasses) {
-  std::vector<cv::Scalar> colorMap = std::vector<cv::Scalar>(numOfClasses);
-  for (int i = 0; i < numOfClasses; i++) {
-    int j = 0;
-    int label = i;
-    int R = 0, G = 0, B = 0;
-    // Spread the bits of the class id across the high bits of the three
-    // color channels, so that neighboring class ids map to visually
-    // distinct colors.
-    while (label) {
-      R |= (((label >> 0) & 1) << (7 - j));
-      G |= (((label >> 1) & 1) << (7 - j));
-      B |= (((label >> 2) & 1) << (7 - j));
-      j++;
-      label >>= 3;
-    }
-    colorMap[i] = cv::Scalar(R, G, B);
-  }
-  return colorMap;
-}
-
-void Detector::Preprocess(const cv::Mat &rgbaImage) {
-  // Set the data of input image
-  auto inputTensor = predictor_->GetInput(0);
-  std::vector<int64_t> inputShape = {1, 3, inputHeight_, inputWidth_};
-  inputTensor->Resize(inputShape);
-  auto inputData = inputTensor->mutable_data<float>();
-  cv::Mat resizedRGBAImage;
-  cv::resize(rgbaImage, resizedRGBAImage,
-             cv::Size(inputShape[3], inputShape[2]));
-  cv::Mat resizedRGBImage;
-  cv::cvtColor(resizedRGBAImage, resizedRGBImage, cv::COLOR_BGRA2RGB);
-  resizedRGBImage.convertTo(resizedRGBImage, CV_32FC3, 1.0 / 255.0f);
-  NHWC3ToNC3HW(reinterpret_cast<const float *>(resizedRGBImage.data),
-               inputData, inputMean_.data(), inputStd_.data(), inputShape[3],
-               inputShape[2]);
-  // Set the size of input image
-  auto sizeTensor = predictor_->GetInput(1);
-  sizeTensor->Resize({1, 2});
-  auto sizeData = sizeTensor->mutable_data<int>();
-  sizeData[0] = inputShape[3];
-  sizeData[1] = inputShape[2];
-}
-
-void Detector::Postprocess(std::vector<RESULT> *results) {
-  auto outputTensor = predictor_->GetOutput(0);
-  auto outputData = outputTensor->data<float>();
-  auto outputShape = outputTensor->shape();
-  int outputSize = ShapeProduction(outputShape);
-  // Each detection occupies six floats: class id, confidence score, and the
-  // box corners (x1, y1, x2, y2) in input-image coordinates.
-  for (int i = 0; i < outputSize; i += 6) {
-    // Class id
-    auto class_id = static_cast<int>(round(outputData[i]));
-    // Confidence score
-    auto score = outputData[i + 1];
-    if (score < scoreThreshold_)
-      continue;
-    RESULT object;
-    object.class_name = class_id >= 0 && class_id < labelList_.size()
-                            ? labelList_[class_id]
-                            : "Unknown";
-    object.fill_color = class_id >= 0 && class_id < colorMap_.size()
-                            ? colorMap_[class_id]
-                            : cv::Scalar(0, 0, 0);
-    object.score = score;
-    object.x = outputData[i + 2] / inputWidth_;
-    object.y = outputData[i + 3] / inputHeight_;
-    object.w = (outputData[i + 4] - outputData[i + 2] + 1) / inputWidth_;
-    object.h = (outputData[i + 5] - outputData[i + 3] + 1) / inputHeight_;
-    results->push_back(object);
-  }
-}
-
-void Detector::Predict(const cv::Mat &rgbaImage, std::vector<RESULT> *results,
-                       double *preprocessTime, double *predictTime,
-                       double *postprocessTime) {
-  auto t = GetCurrentTime();
-  Preprocess(rgbaImage);
-  *preprocessTime = GetElapsedTime(t);
-  LOGD("Detector preprocess costs %f ms", *preprocessTime);
-
-  t = GetCurrentTime();
-  predictor_->Run();
-  *predictTime = GetElapsedTime(t);
-  LOGD("Detector predict costs %f ms", *predictTime);
-
-  t = GetCurrentTime();
-  Postprocess(results);
-  *postprocessTime = GetElapsedTime(t);
-  LOGD("Detector postprocess costs %f ms", *postprocessTime);
-}
-
-Pipeline::Pipeline(const std::string &modelDir, const std::string &labelPath,
-                   const int cpuThreadNum, const std::string &cpuPowerMode,
-                   int inputWidth, int inputHeight,
-                   const std::vector<float> &inputMean,
-                   const std::vector<float> &inputStd, float scoreThreshold) {
-  detector_.reset(new Detector(modelDir, labelPath, cpuThreadNum, cpuPowerMode,
                               inputWidth, inputHeight, inputMean, inputStd,
                               scoreThreshold));
-}
-
-void Pipeline::VisualizeResults(const std::vector<RESULT> &results,
-                                cv::Mat *rgbaImage) {
-  int w = rgbaImage->cols;
-  int h = rgbaImage->rows;
-  for (int i = 0; i < results.size(); i++) {
-    RESULT object = results[i];
-    cv::Rect boundingBox =
-        cv::Rect(object.x * w, object.y * h, object.w * w, object.h * h) &
-        cv::Rect(0, 0, w - 1, h - 1);
-    // Configure text size
-    std::string text = object.class_name + " ";
-    text += std::to_string(static_cast<int>(object.score * 100)) + "%";
-    int fontFace = cv::FONT_HERSHEY_PLAIN;
-    double fontScale = 1.5f;
-    float fontThickness = 1.0f;
-    cv::Size textSize =
-        cv::getTextSize(text, fontFace, fontScale, fontThickness, nullptr);
-    // Draw roi object, text, and background
-    cv::rectangle(*rgbaImage, boundingBox, object.fill_color, 2);
-    cv::rectangle(*rgbaImage,
-                  cv::Point2d(boundingBox.x,
-                              boundingBox.y - round(textSize.height * 1.25f)),
-                  cv::Point2d(boundingBox.x + boundingBox.width, boundingBox.y),
-                  object.fill_color, -1);
-    cv::putText(*rgbaImage, text, cv::Point2d(boundingBox.x, boundingBox.y),
-                fontFace, fontScale, cv::Scalar(255, 255, 255), fontThickness);
-  }
-}
-
-void Pipeline::VisualizeStatus(double readGLFBOTime, double writeGLTextureTime,
-                               double preprocessTime, double predictTime,
-                               double postprocessTime, cv::Mat *rgbaImage) {
-  char text[255];
-  cv::Scalar fontColor = cv::Scalar(255, 255, 255);
-  int fontFace = cv::FONT_HERSHEY_PLAIN;
-  double fontScale = 1.f;
-  float fontThickness = 1;
-  sprintf(text, "Read GLFBO time: %.1f ms", readGLFBOTime);
-  cv::Size textSize =
-      cv::getTextSize(text, fontFace, fontScale, fontThickness, nullptr);
-  textSize.height *= 1.25f;
-  cv::Point2d offset(10, textSize.height + 15);
-  cv::putText(*rgbaImage, text, offset, fontFace, fontScale, fontColor,
-              fontThickness);
-  sprintf(text, "Write GLTexture time: %.1f ms", writeGLTextureTime);
-  offset.y += textSize.height;
-  cv::putText(*rgbaImage, text, offset, fontFace, fontScale, fontColor,
-              fontThickness);
-  sprintf(text, "Preprocess time: %.1f ms", preprocessTime);
-  offset.y += textSize.height;
-  cv::putText(*rgbaImage, text, offset, fontFace, fontScale, fontColor,
-              fontThickness);
-  sprintf(text, "Predict time: %.1f ms", predictTime);
-  offset.y += textSize.height;
-  cv::putText(*rgbaImage, text, offset, fontFace, fontScale, fontColor,
-              fontThickness);
-  sprintf(text, "Postprocess time: %.1f ms", postprocessTime);
-  offset.y += textSize.height;
-  cv::putText(*rgbaImage, text, offset, fontFace, fontScale, fontColor,
-              fontThickness);
-}
-
-bool Pipeline::Process(int inTextureId, int outTextureId, int textureWidth,
-                       int textureHeight, std::string savedImagePath) {
-  static double readGLFBOTime = 0, writeGLTextureTime = 0;
-  double preprocessTime = 0, predictTime = 0, postprocessTime = 0;
-
-  // Read pixels from FBO texture to CV image
-  cv::Mat rgbaImage;
-  CreateRGBAImageFromGLFBOTexture(textureWidth, textureHeight, &rgbaImage,
-                                  &readGLFBOTime);
-
-  // Feed the image, run inference and parse the results
-  std::vector<RESULT> results;
-  detector_->Predict(rgbaImage, &results, &preprocessTime, &predictTime,
-                     &postprocessTime);
-
-  // Visualize the objects to the origin image
-  VisualizeResults(results, &rgbaImage);
-
-  // Visualize the status (performance data) to the origin image
-  VisualizeStatus(readGLFBOTime, writeGLTextureTime, preprocessTime,
-                  predictTime, postprocessTime, &rgbaImage);
-
-  // Dump modified image if savedImagePath is set
-  if (!savedImagePath.empty()) {
-    cv::Mat bgrImage;
-    cv::cvtColor(rgbaImage, bgrImage, cv::COLOR_RGBA2BGR);
-    imwrite(savedImagePath, bgrImage);
-  }
-
-  // Write back to texture2D
-  WriteRGBAImageBackToGLTexture(rgbaImage, outTextureId, &writeGLTextureTime);
-  return true;
-}
diff --git a/static/deploy/android_demo/app/src/main/cpp/Pipeline.h b/static/deploy/android_demo/app/src/main/cpp/Pipeline.h
deleted file mode 100644
index 91177d0417814cd60b01112674baf6387675f9a0..0000000000000000000000000000000000000000
--- a/static/deploy/android_demo/app/src/main/cpp/Pipeline.h
+++ /dev/null
@@ -1,112 +0,0 @@
-// Copyright (c) 2019 PaddlePaddle Authors. All Rights Reserved.
-//
-// Licensed under the Apache License, Version 2.0 (the "License");
-// you may not use this file except in compliance with the License.
-// You may obtain a copy of the License at
-//
-//     http://www.apache.org/licenses/LICENSE-2.0
-//
-// Unless required by applicable law or agreed to in writing, software
-// distributed under the License is distributed on an "AS IS" BASIS,
-// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-// See the License for the specific language governing permissions and
-// limitations under the License.
-
-#pragma once
-
-#include "Utils.h"
-#include "paddle_api.h"
-#include <EGL/egl.h>
-#include <GLES2/gl2.h>
-#include <memory>
-#include <opencv2/core.hpp>
-#include <opencv2/imgcodecs.hpp>
-#include <opencv2/imgproc.hpp>
-#include <string>
-#include <vector>
-
-struct RESULT {
-  std::string class_name;
-  cv::Scalar fill_color;
-  float score;
-  float x;
-  float y;
-  float w;
-  float h;
-};
-
-class Detector {
-public:
-  explicit Detector(const std::string &modelDir, const std::string &labelPath,
-                    const int cpuThreadNum, const std::string &cpuPowerMode,
-                    int inputWidth, int inputHeight,
-                    const std::vector<float> &inputMean,
-                    const std::vector<float> &inputStd, float scoreThreshold);
-
-  void Predict(const cv::Mat &rgbImage, std::vector<RESULT> *results,
-               double *preprocessTime, double *predictTime,
-               double *postprocessTime);
-
-private:
-  std::vector<std::string> LoadLabelList(const std::string &path);
-  std::vector<cv::Scalar> GenerateColorMap(int numOfClasses);
-  void Preprocess(const cv::Mat &rgbaImage);
-  void Postprocess(std::vector<RESULT> *results);
-
-private:
-  int inputWidth_;
-  int inputHeight_;
-  std::vector<float> inputMean_;
-  std::vector<float> inputStd_;
-  float scoreThreshold_;
-  std::vector<std::string> labelList_;
-  std::vector<cv::Scalar> colorMap_;
-  std::shared_ptr<paddle::lite_api::PaddlePredictor> predictor_;
-};
-
-class Pipeline {
-public:
-  Pipeline(const std::string &modelDir, const std::string &labelPath,
-           const int cpuThreadNum, const std::string &cpuPowerMode,
-           int inputWidth, int inputHeight, const std::vector<float> &inputMean,
-           const std::vector<float> &inputStd, float scoreThreshold);
-
-  bool Process(int inTextureId, int outTextureId, int textureWidth,
-               int textureHeight, std::string savedImagePath);
-
-private:
-  // Read pixels from FBO texture to CV image
-  void CreateRGBAImageFromGLFBOTexture(int textureWidth, int textureHeight,
-                                       cv::Mat *rgbaImage,
-                                       double *readGLFBOTime) {
-    auto t = GetCurrentTime();
-    rgbaImage->create(textureHeight, textureWidth, CV_8UC4);
-    glReadPixels(0, 0, textureWidth, textureHeight, GL_RGBA, GL_UNSIGNED_BYTE,
-                 rgbaImage->data);
-    *readGLFBOTime = GetElapsedTime(t);
-    LOGD("Read from FBO texture costs %f ms", *readGLFBOTime);
-  }
-
-  // Write back to texture2D
-  void WriteRGBAImageBackToGLTexture(const cv::Mat &rgbaImage, int textureId,
-                                     double *writeGLTextureTime) {
-    auto t = GetCurrentTime();
-    glActiveTexture(GL_TEXTURE0);
-    glBindTexture(GL_TEXTURE_2D, textureId);
-    glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, rgbaImage.cols, rgbaImage.rows,
-                    GL_RGBA, GL_UNSIGNED_BYTE, rgbaImage.data);
-    *writeGLTextureTime = GetElapsedTime(t);
-    LOGD("Write back to texture2D costs %f ms", *writeGLTextureTime);
-  }
-
-  // Visualize the results to origin image
-  void VisualizeResults(const std::vector<RESULT> &results, cv::Mat *rgbaImage);
-
-  // Visualize the status (performance data) to origin image
-  void VisualizeStatus(double readGLFBOTime, double writeGLTextureTime,
-                       double preprocessTime, double predictTime,
-                       double postprocessTime, cv::Mat *rgbaImage);
-
-private:
-  std::shared_ptr<Detector> detector_;
-};
diff --git a/static/deploy/android_demo/app/src/main/cpp/Utils.cc b/static/deploy/android_demo/app/src/main/cpp/Utils.cc
deleted file mode 100644
index 63ea54fd4b2de0e9822f5c392606c3e7dca1da0b..0000000000000000000000000000000000000000
--- a/static/deploy/android_demo/app/src/main/cpp/Utils.cc
+++ /dev/null
@@ -1,78 +0,0 @@
-// Copyright (c) 2019 PaddlePaddle Authors. All Rights Reserved.
-//
-// Licensed under the Apache License, Version 2.0 (the "License");
-// you may not use this file except in compliance with the License.
-// You may obtain a copy of the License at
-//
-//     http://www.apache.org/licenses/LICENSE-2.0
-//
-// Unless required by applicable law or agreed to in writing, software
-// distributed under the License is distributed on an "AS IS" BASIS,
-// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-// See the License for the specific language governing permissions and
-// limitations under the License.
-
-#include "Utils.h"
-#include <arm_neon.h>
-
-int64_t ShapeProduction(const std::vector<int64_t> &shape) {
-  int64_t res = 1;
-  for (auto i : shape)
-    res *= i;
-  return res;
-}
-
-// Convert packed NHWC RGB floats to planar NCHW, subtracting the per-channel
-// mean and dividing by the per-channel std. The main loop normalizes four
-// pixels at a time with NEON; the tail loop finishes the remainder.
-void NHWC3ToNC3HW(const float *src, float *dst, const float *mean,
-                  const float *std, int width, int height) {
-  int size = height * width;
-  float32x4_t vmean0 = vdupq_n_f32(mean ? mean[0] : 0.0f);
-  float32x4_t vmean1 = vdupq_n_f32(mean ? mean[1] : 0.0f);
-  float32x4_t vmean2 = vdupq_n_f32(mean ? mean[2] : 0.0f);
-  float32x4_t vscale0 = vdupq_n_f32(std ? (1.0f / std[0]) : 1.0f);
-  float32x4_t vscale1 = vdupq_n_f32(std ? (1.0f / std[1]) : 1.0f);
-  float32x4_t vscale2 = vdupq_n_f32(std ? (1.0f / std[2]) : 1.0f);
-  float *dst_c0 = dst;
-  float *dst_c1 = dst + size;
-  float *dst_c2 = dst + size * 2;
-  int i = 0;
-  for (; i < size - 3; i += 4) {
-    float32x4x3_t vin3 = vld3q_f32(src);
-    float32x4_t vsub0 = vsubq_f32(vin3.val[0], vmean0);
-    float32x4_t vsub1 = vsubq_f32(vin3.val[1], vmean1);
-    float32x4_t vsub2 = vsubq_f32(vin3.val[2], vmean2);
-    float32x4_t vs0 = vmulq_f32(vsub0, vscale0);
-    float32x4_t vs1 = vmulq_f32(vsub1, vscale1);
-    float32x4_t vs2 = vmulq_f32(vsub2, vscale2);
-    vst1q_f32(dst_c0, vs0);
-    vst1q_f32(dst_c1, vs1);
-    vst1q_f32(dst_c2, vs2);
-    src += 12;
-    dst_c0 += 4;
-    dst_c1 += 4;
-    dst_c2 += 4;
-  }
-  for (; i < size; i++) {
-    *(dst_c0++) = (*(src++) - mean[0]) / std[0];
-    *(dst_c1++) = (*(src++) - mean[1]) / std[1];
-    *(dst_c2++) = (*(src++) - mean[2]) / std[2];
-  }
-}
-
-void NHWC1ToNC1HW(const float *src, float *dst, const float *mean,
-                  const float *std, int width, int height) {
-  int size = height * width;
-  float32x4_t vmean = vdupq_n_f32(mean ? mean[0] : 0.0f);
-  float32x4_t vscale = vdupq_n_f32(std ? (1.0f / std[0]) : 1.0f);
-  int i = 0;
-  for (; i < size - 3; i += 4) {
-    float32x4_t vin = vld1q_f32(src);
-    float32x4_t vsub = vsubq_f32(vin, vmean);
-    float32x4_t vs = vmulq_f32(vsub, vscale);
-    vst1q_f32(dst, vs);
-    src += 4;
-    dst += 4;
-  }
-  for (; i < size; i++) {
-    *(dst++) = (*(src++) - mean[0]) / std[0];
-  }
-}
diff --git a/static/deploy/android_demo/app/src/main/cpp/Utils.h b/static/deploy/android_demo/app/src/main/cpp/Utils.h
deleted file mode 100644
index 74fa82a6423dd600f3aa711da4ca244d0d4c34eb..0000000000000000000000000000000000000000
--- a/static/deploy/android_demo/app/src/main/cpp/Utils.h
+++ /dev/null
@@ -1,92 +0,0 @@
-// Copyright (c) 2019 PaddlePaddle Authors. All Rights Reserved.
-//
-// Licensed under the Apache License, Version 2.0 (the "License");
-// you may not use this file except in compliance with the License.
-// You may obtain a copy of the License at
-//
-//     http://www.apache.org/licenses/LICENSE-2.0
-//
-// Unless required by applicable law or agreed to in writing, software
-// distributed under the License is distributed on an "AS IS" BASIS,
-// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-// See the License for the specific language governing permissions and
-// limitations under the License.
-
-#pragma once
-
-#include "paddle_api.h"
-#include <android/log.h>
-#include <fstream>
-#include <string>
-#include <sys/time.h>
-#include <vector>
-
-#define TAG "JNI"
-#define LOGD(...) __android_log_print(ANDROID_LOG_DEBUG, TAG, __VA_ARGS__)
-#define LOGI(...) __android_log_print(ANDROID_LOG_INFO, TAG, __VA_ARGS__)
-#define LOGW(...) __android_log_print(ANDROID_LOG_WARN, TAG, __VA_ARGS__)
-#define LOGE(...) __android_log_print(ANDROID_LOG_ERROR, TAG, __VA_ARGS__)
-#define LOGF(...) __android_log_print(ANDROID_LOG_FATAL, TAG, __VA_ARGS__)
-
-int64_t ShapeProduction(const std::vector<int64_t> &shape);
-
-template <typename T>
-bool ReadFile(const std::string &path, std::vector<T> *data) {
-  std::ifstream file(path, std::ifstream::binary);
-  if (file) {
-    file.seekg(0, file.end);
-    int size = file.tellg();
-    LOGD("file size=%d\n", size);
-    data->resize(size / sizeof(T));
-    file.seekg(0, file.beg);
-    file.read(reinterpret_cast<char *>(data->data()), size);
-    file.close();
-    return true;
-  } else {
-    LOGE("Can't read file from %s\n", path.c_str());
-  }
-  return false;
-}
-
-template <typename T>
-bool WriteFile(const std::string &path, const std::vector<T> &data) {
-  std::ofstream file{path, std::ios::binary};
-  if (!file.is_open()) {
-    LOGE("Can't write file to %s\n", path.c_str());
-    return false;
-  }
-  file.write(reinterpret_cast<const char *>(data.data()),
-             data.size() * sizeof(T));
-  file.close();
-  return true;
-}
-
-inline int64_t GetCurrentTime() {
-  struct timeval time;
-  gettimeofday(&time, NULL);
-  return 1000000LL * (int64_t)time.tv_sec + (int64_t)time.tv_usec;
-}
-
-inline double GetElapsedTime(int64_t time) {
-  return (GetCurrentTime() - time) / 1000.0f;
-}
-
-inline paddle::lite_api::PowerMode ParsePowerMode(std::string mode) {
-  if (mode == "LITE_POWER_HIGH") {
-    return paddle::lite_api::LITE_POWER_HIGH;
-  } else if (mode == "LITE_POWER_LOW") {
-    return paddle::lite_api::LITE_POWER_LOW;
-  } else if (mode == "LITE_POWER_FULL") {
-    return paddle::lite_api::LITE_POWER_FULL;
-  } else if (mode == "LITE_POWER_RAND_HIGH") {
-    return paddle::lite_api::LITE_POWER_RAND_HIGH;
-  } else if (mode == "LITE_POWER_RAND_LOW") {
-    return paddle::lite_api::LITE_POWER_RAND_LOW;
-  }
-  return paddle::lite_api::LITE_POWER_NO_BIND;
-}
-
-void NHWC3ToNC3HW(const float *src, float *dst, const float *mean,
-                  const float *std, int width, int height);
-
-void NHWC1ToNC1HW(const float *src, float *dst, const float *mean,
-                  const float *std, int width, int height);
diff --git a/static/deploy/android_demo/app/src/main/java/com/baidu/paddledetection/common/AppCompatPreferenceActivity.java b/static/deploy/android_demo/app/src/main/java/com/baidu/paddledetection/common/AppCompatPreferenceActivity.java
deleted file mode 100644
index 6ebdb43334fd412cf3f9a3e9de2ffedd9ba574cf..0000000000000000000000000000000000000000
--- a/static/deploy/android_demo/app/src/main/java/com/baidu/paddledetection/common/AppCompatPreferenceActivity.java
+++ /dev/null
@@ -1,127 +0,0 @@
-/*
- * Copyright (C) 2014 The Android Open Source Project
- *
- * Licensed under the Apache License, Version 2.0 (the "License");
- * you may not use this file except in compliance with the License.
- * You may obtain a copy of the License at
- *
- *      http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */ - -package com.baidu.paddledetection.common; - -import android.content.res.Configuration; -import android.os.Bundle; -import android.preference.PreferenceActivity; -import androidx.annotation.LayoutRes; -import androidx.annotation.Nullable; -import androidx.appcompat.app.ActionBar; -import androidx.appcompat.app.AppCompatDelegate; -import androidx.appcompat.widget.Toolbar; -import android.view.MenuInflater; -import android.view.View; -import android.view.ViewGroup; - -/** - * A {@link PreferenceActivity} which implements and proxies the necessary calls - * to be used with AppCompat. - *
<p>
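- * (The {@link AppCompatDelegate} created lazily in {@code getDelegate()} below
- * is the object that actually proxies each lifecycle and content-view call.)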
    - * This technique can be used with an {@link android.app.Activity} class, not just - * {@link PreferenceActivity}. - */ -public abstract class AppCompatPreferenceActivity extends PreferenceActivity { - private AppCompatDelegate mDelegate; - - @Override - protected void onCreate(Bundle savedInstanceState) { - getDelegate().installViewFactory(); - getDelegate().onCreate(savedInstanceState); - super.onCreate(savedInstanceState); - } - - @Override - protected void onPostCreate(Bundle savedInstanceState) { - super.onPostCreate(savedInstanceState); - getDelegate().onPostCreate(savedInstanceState); - } - - public ActionBar getSupportActionBar() { - return getDelegate().getSupportActionBar(); - } - - public void setSupportActionBar(@Nullable Toolbar toolbar) { - getDelegate().setSupportActionBar(toolbar); - } - - @Override - public MenuInflater getMenuInflater() { - return getDelegate().getMenuInflater(); - } - - @Override - public void setContentView(@LayoutRes int layoutResID) { - getDelegate().setContentView(layoutResID); - } - - @Override - public void setContentView(View view) { - getDelegate().setContentView(view); - } - - @Override - public void setContentView(View view, ViewGroup.LayoutParams params) { - getDelegate().setContentView(view, params); - } - - @Override - public void addContentView(View view, ViewGroup.LayoutParams params) { - getDelegate().addContentView(view, params); - } - - @Override - protected void onPostResume() { - super.onPostResume(); - getDelegate().onPostResume(); - } - - @Override - protected void onTitleChanged(CharSequence title, int color) { - super.onTitleChanged(title, color); - getDelegate().setTitle(title); - } - - @Override - public void onConfigurationChanged(Configuration newConfig) { - super.onConfigurationChanged(newConfig); - getDelegate().onConfigurationChanged(newConfig); - } - - @Override - protected void onStop() { - super.onStop(); - getDelegate().onStop(); - } - - @Override - protected void onDestroy() { - super.onDestroy(); - getDelegate().onDestroy(); - } - - public void invalidateOptionsMenu() { - getDelegate().invalidateOptionsMenu(); - } - - private AppCompatDelegate getDelegate() { - if (mDelegate == null) { - mDelegate = AppCompatDelegate.create(this, null); - } - return mDelegate; - } -} diff --git a/static/deploy/android_demo/app/src/main/java/com/baidu/paddledetection/common/CameraSurfaceView.java b/static/deploy/android_demo/app/src/main/java/com/baidu/paddledetection/common/CameraSurfaceView.java deleted file mode 100644 index dc074841c63e1796398cce26be980444faafdd1f..0000000000000000000000000000000000000000 --- a/static/deploy/android_demo/app/src/main/java/com/baidu/paddledetection/common/CameraSurfaceView.java +++ /dev/null @@ -1,317 +0,0 @@ -package com.baidu.paddledetection.common; - -import android.content.Context; -import android.graphics.SurfaceTexture; -import android.hardware.Camera; -import android.hardware.Camera.CameraInfo; -import android.hardware.Camera.Size; -import android.opengl.GLES11Ext; -import android.opengl.GLES20; -import android.opengl.GLSurfaceView; -import android.opengl.GLSurfaceView.Renderer; -import android.opengl.Matrix; -import android.util.AttributeSet; -import android.util.Log; - -import javax.microedition.khronos.egl.EGLConfig; -import javax.microedition.khronos.opengles.GL10; - -import java.io.IOException; -import java.nio.ByteBuffer; -import java.nio.ByteOrder; -import java.nio.FloatBuffer; -import java.util.List; - -public class CameraSurfaceView extends GLSurfaceView implements Renderer, - 
SurfaceTexture.OnFrameAvailableListener { - private static final String TAG = CameraSurfaceView.class.getSimpleName(); - - public static final int EXPECTED_PREVIEW_WIDTH = 1280; - public static final int EXPECTED_PREVIEW_HEIGHT = 720; - - - protected int numberOfCameras; - protected int selectedCameraId; - protected boolean disableCamera = false; - protected Camera camera; - - protected Context context; - protected SurfaceTexture surfaceTexture; - protected int surfaceWidth = 0; - protected int surfaceHeight = 0; - protected int textureWidth = 0; - protected int textureHeight = 0; - - // In order to manipulate the camera preview data and render the modified one - // to the screen, three textures are created and the data flow is shown as following: - // previewdata->camTextureId->fboTexureId->drawTexureId->framebuffer - protected int[] fbo = {0}; - protected int[] camTextureId = {0}; - protected int[] fboTexureId = {0}; - protected int[] drawTexureId = {0}; - - private final String vss = "" - + "attribute vec2 vPosition;\n" - + "attribute vec2 vTexCoord;\n" + "varying vec2 texCoord;\n" - + "void main() {\n" + " texCoord = vTexCoord;\n" - + " gl_Position = vec4 (vPosition.x, vPosition.y, 0.0, 1.0);\n" - + "}"; - - private final String fssCam2FBO = "" - + "#extension GL_OES_EGL_image_external : require\n" - + "precision mediump float;\n" - + "uniform samplerExternalOES sTexture;\n" - + "varying vec2 texCoord;\n" - + "void main() {\n" - + " gl_FragColor = texture2D(sTexture,texCoord);\n" + "}"; - - private final String fssTex2Screen = "" - + "precision mediump float;\n" - + "uniform sampler2D sTexture;\n" - + "varying vec2 texCoord;\n" - + "void main() {\n" - + " gl_FragColor = texture2D(sTexture,texCoord);\n" + "}"; - - private final float vertexCoords[] = { - -1, -1, - -1, 1, - 1, -1, - 1, 1}; - private float textureCoords[] = { - 0, 1, - 0, 0, - 1, 1, - 1, 0}; - - private FloatBuffer vertexCoordsBuffer; - private FloatBuffer textureCoordsBuffer; - - private int progCam2FBO = -1; - private int progTex2Screen = -1; - private int vcCam2FBO; - private int tcCam2FBO; - private int vcTex2Screen; - private int tcTex2Screen; - - public interface OnTextureChangedListener { - public boolean onTextureChanged(int inTextureId, int outTextureId, int textureWidth, int textureHeight); - } - - private OnTextureChangedListener onTextureChangedListener = null; - - public void setOnTextureChangedListener(OnTextureChangedListener listener) { - onTextureChangedListener = listener; - } - - public CameraSurfaceView(Context ctx, AttributeSet attrs) { - super(ctx, attrs); - context = ctx; - setEGLContextClientVersion(2); - setRenderer(this); - setRenderMode(RENDERMODE_WHEN_DIRTY); - - // Find the total number of available cameras and the ID of the default camera - numberOfCameras = Camera.getNumberOfCameras(); - CameraInfo cameraInfo = new CameraInfo(); - for (int i = 0; i < numberOfCameras; i++) { - Camera.getCameraInfo(i, cameraInfo); - if (cameraInfo.facing == CameraInfo.CAMERA_FACING_BACK) { - selectedCameraId = i; - } - } - } - - @Override - public void onSurfaceCreated(GL10 gl, EGLConfig config) { - // Create OES texture for storing camera preview data(YUV format) - GLES20.glGenTextures(1, camTextureId, 0); - GLES20.glBindTexture(GLES11Ext.GL_TEXTURE_EXTERNAL_OES, camTextureId[0]); - GLES20.glTexParameteri(GLES11Ext.GL_TEXTURE_EXTERNAL_OES, GLES20.GL_TEXTURE_WRAP_S, GLES20.GL_CLAMP_TO_EDGE); - GLES20.glTexParameteri(GLES11Ext.GL_TEXTURE_EXTERNAL_OES, GLES20.GL_TEXTURE_WRAP_T, GLES20.GL_CLAMP_TO_EDGE); - 
GLES20.glTexParameteri(GLES11Ext.GL_TEXTURE_EXTERNAL_OES, GLES20.GL_TEXTURE_MIN_FILTER, GLES20.GL_NEAREST); - GLES20.glTexParameteri(GLES11Ext.GL_TEXTURE_EXTERNAL_OES, GLES20.GL_TEXTURE_MAG_FILTER, GLES20.GL_NEAREST); - surfaceTexture = new SurfaceTexture(camTextureId[0]); - surfaceTexture.setOnFrameAvailableListener(this); - - // Prepare vertex and texture coordinates - int bytes = vertexCoords.length * Float.SIZE / Byte.SIZE; - vertexCoordsBuffer = ByteBuffer.allocateDirect(bytes).order(ByteOrder.nativeOrder()).asFloatBuffer(); - textureCoordsBuffer = ByteBuffer.allocateDirect(bytes).order(ByteOrder.nativeOrder()).asFloatBuffer(); - vertexCoordsBuffer.put(vertexCoords).position(0); - textureCoordsBuffer.put(textureCoords).position(0); - - // Create vertex and fragment shaders - // camTextureId->fboTexureId - progCam2FBO = Utils.createShaderProgram(vss, fssCam2FBO); - vcCam2FBO = GLES20.glGetAttribLocation(progCam2FBO, "vPosition"); - tcCam2FBO = GLES20.glGetAttribLocation(progCam2FBO, "vTexCoord"); - GLES20.glEnableVertexAttribArray(vcCam2FBO); - GLES20.glEnableVertexAttribArray(tcCam2FBO); - // fboTexureId/drawTexureId -> screen - progTex2Screen = Utils.createShaderProgram(vss, fssTex2Screen); - vcTex2Screen = GLES20.glGetAttribLocation(progTex2Screen, "vPosition"); - tcTex2Screen = GLES20.glGetAttribLocation(progTex2Screen, "vTexCoord"); - GLES20.glEnableVertexAttribArray(vcTex2Screen); - GLES20.glEnableVertexAttribArray(tcTex2Screen); - } - - @Override - public void onSurfaceChanged(GL10 gl, int width, int height) { - surfaceWidth = width; - surfaceHeight = height; - openCamera(); - } - - @Override - public void onDrawFrame(GL10 gl) { - if (surfaceTexture == null) return; - - GLES20.glClearColor(0.0f, 0.0f, 0.0f, 1.0f); - GLES20.glClear(GLES20.GL_COLOR_BUFFER_BIT | GLES20.GL_DEPTH_BUFFER_BIT); - surfaceTexture.updateTexImage(); - float matrix[] = new float[16]; - surfaceTexture.getTransformMatrix(matrix); - - // camTextureId->fboTexureId - GLES20.glBindFramebuffer(GLES20.GL_FRAMEBUFFER, fbo[0]); - GLES20.glViewport(0, 0, textureWidth, textureHeight); - GLES20.glClear(GLES20.GL_COLOR_BUFFER_BIT); - GLES20.glUseProgram(progCam2FBO); - GLES20.glVertexAttribPointer(vcCam2FBO, 2, GLES20.GL_FLOAT, false, 4 * 2, vertexCoordsBuffer); - textureCoordsBuffer.clear(); - textureCoordsBuffer.put(transformTextureCoordinates(textureCoords, matrix)); - textureCoordsBuffer.position(0); - GLES20.glVertexAttribPointer(tcCam2FBO, 2, GLES20.GL_FLOAT, false, 4 * 2, textureCoordsBuffer); - GLES20.glActiveTexture(GLES20.GL_TEXTURE0); - GLES20.glBindTexture(GLES11Ext.GL_TEXTURE_EXTERNAL_OES, camTextureId[0]); - GLES20.glUniform1i(GLES20.glGetUniformLocation(progCam2FBO, "sTexture"), 0); - GLES20.glDrawArrays(GLES20.GL_TRIANGLE_STRIP, 0, 4); - GLES20.glFlush(); - - // Check if the draw texture is set - int targetTexureId = fboTexureId[0]; - if (onTextureChangedListener != null) { - boolean modified = onTextureChangedListener.onTextureChanged(fboTexureId[0], drawTexureId[0], - textureWidth, textureHeight); - if (modified) { - targetTexureId = drawTexureId[0]; - } - } - - // fboTexureId/drawTexureId->Screen - GLES20.glBindFramebuffer(GLES20.GL_FRAMEBUFFER, 0); - GLES20.glViewport(0, 0, surfaceWidth, surfaceHeight); - GLES20.glClear(GLES20.GL_COLOR_BUFFER_BIT); - GLES20.glUseProgram(progTex2Screen); - GLES20.glVertexAttribPointer(vcTex2Screen, 2, GLES20.GL_FLOAT, false, 4 * 2, vertexCoordsBuffer); - textureCoordsBuffer.clear(); - textureCoordsBuffer.put(textureCoords); - textureCoordsBuffer.position(0); - 
GLES20.glVertexAttribPointer(tcTex2Screen, 2, GLES20.GL_FLOAT, false, 4 * 2, textureCoordsBuffer);
-        GLES20.glActiveTexture(GLES20.GL_TEXTURE0);
-        GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, targetTexureId);
-        GLES20.glUniform1i(GLES20.glGetUniformLocation(progTex2Screen, "sTexture"), 0);
-        GLES20.glDrawArrays(GLES20.GL_TRIANGLE_STRIP, 0, 4);
-        GLES20.glFlush();
-    }
-
-    private float[] transformTextureCoordinates(float[] coords, float[] matrix) {
-        float[] result = new float[coords.length];
-        float[] vt = new float[4];
-        for (int i = 0; i < coords.length; i += 2) {
-            float[] v = {coords[i], coords[i + 1], 0, 1};
-            Matrix.multiplyMV(vt, 0, matrix, 0, v, 0);
-            result[i] = vt[0];
-            result[i + 1] = vt[1];
-        }
-        return result;
-    }
-
-    @Override
-    public void onResume() {
-        super.onResume();
-    }
-
-    @Override
-    public void onPause() {
-        super.onPause();
-        releaseCamera();
-    }
-
-    @Override
-    public void onFrameAvailable(SurfaceTexture surfaceTexture) {
-        requestRender();
-    }
-
-    public void disableCamera() {
-        disableCamera = true;
-    }
-
-    public void switchCamera() {
-        releaseCamera();
-        selectedCameraId = (selectedCameraId + 1) % numberOfCameras;
-        openCamera();
-    }
-
-    public void openCamera() {
-        if (disableCamera) return;
-        camera = Camera.open(selectedCameraId);
-        List<Camera.Size> supportedPreviewSizes = camera.getParameters().getSupportedPreviewSizes();
-        Size previewSize = Utils.getOptimalPreviewSize(supportedPreviewSizes, EXPECTED_PREVIEW_WIDTH,
-                EXPECTED_PREVIEW_HEIGHT);
-        Camera.Parameters parameters = camera.getParameters();
-        parameters.setPreviewSize(previewSize.width, previewSize.height);
-        if (parameters.getSupportedFocusModes().contains(Camera.Parameters.FOCUS_MODE_CONTINUOUS_VIDEO)) {
-            parameters.setFocusMode(Camera.Parameters.FOCUS_MODE_CONTINUOUS_VIDEO);
-        }
-        camera.setParameters(parameters);
-        int degree = Utils.getCameraDisplayOrientation(context, selectedCameraId);
-        camera.setDisplayOrientation(degree);
-        boolean rotate = degree == 90 || degree == 270;
-        textureWidth = rotate ? previewSize.height : previewSize.width;
-        textureHeight = rotate ? previewSize.width : previewSize.height;
-        // Destroy FBO and draw textures
-        GLES20.glBindFramebuffer(GLES20.GL_FRAMEBUFFER, 0);
-        GLES20.glDeleteFramebuffers(1, fbo, 0);
-        GLES20.glDeleteTextures(1, drawTexureId, 0);
-        GLES20.glDeleteTextures(1, fboTexureId, 0);
-        // Normal texture for storing modified camera preview data(RGBA format)
-        GLES20.glGenTextures(1, drawTexureId, 0);
-        GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, drawTexureId[0]);
-        GLES20.glTexImage2D(GLES20.GL_TEXTURE_2D, 0, GLES20.GL_RGBA, textureWidth, textureHeight, 0,
-                GLES20.GL_RGBA, GLES20.GL_UNSIGNED_BYTE, null);
-        GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_WRAP_S, GLES20.GL_CLAMP_TO_EDGE);
-        GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_WRAP_T, GLES20.GL_CLAMP_TO_EDGE);
-        GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_MIN_FILTER, GLES20.GL_NEAREST);
-        GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_MAG_FILTER, GLES20.GL_NEAREST);
-        // FBO texture for storing camera preview data(RGBA format)
-        GLES20.glGenTextures(1, fboTexureId, 0);
-        GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, fboTexureId[0]);
-        GLES20.glTexImage2D(GLES20.GL_TEXTURE_2D, 0, GLES20.GL_RGBA, textureWidth, textureHeight, 0,
-                GLES20.GL_RGBA, GLES20.GL_UNSIGNED_BYTE, null);
-        GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_WRAP_S, GLES20.GL_CLAMP_TO_EDGE);
-        GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_WRAP_T, GLES20.GL_CLAMP_TO_EDGE);
-        GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_MIN_FILTER, GLES20.GL_NEAREST);
-        GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_MAG_FILTER, GLES20.GL_NEAREST);
-        // Generate FBO and bind to FBO texture
-        GLES20.glGenFramebuffers(1, fbo, 0);
-        GLES20.glBindFramebuffer(GLES20.GL_FRAMEBUFFER, fbo[0]);
-        GLES20.glFramebufferTexture2D(GLES20.GL_FRAMEBUFFER, GLES20.GL_COLOR_ATTACHMENT0, GLES20.GL_TEXTURE_2D,
-                fboTexureId[0], 0);
-        try {
-            camera.setPreviewTexture(surfaceTexture);
-        } catch (IOException exception) {
-            Log.e(TAG, "IOException caused by setPreviewDisplay()", exception);
-        }
-        camera.startPreview();
-    }
-
-    public void releaseCamera() {
-        if (camera != null) {
-            camera.setPreviewCallback(null);
-            camera.stopPreview();
-            camera.release();
-            camera = null;
-        }
-    }
-}
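CameraSurfaceView pushes every preview frame through the texture chain described near the top of the class (previewdata → camTextureId → fboTexureId → drawTexureId → framebuffer), and OnTextureChangedListener is the single hook where a detector may redraw the frame. A minimal listener is sketched below; the class name is a hypothetical example, only the interface itself comes from the file above.

```java
// Hypothetical no-op listener for CameraSurfaceView.
public class PassThroughListener implements CameraSurfaceView.OnTextureChangedListener {
    @Override
    public boolean onTextureChanged(int inTextureId, int outTextureId,
                                    int textureWidth, int textureHeight) {
        // A real implementation (e.g. CameraFragment in this demo) hands both
        // texture ids to native code, which reads inTextureId and draws the
        // detection overlay into outTextureId.
        return false; // false: the unmodified camera texture is drawn to screen
    }
}
```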
diff --git a/static/deploy/android_demo/app/src/main/java/com/baidu/paddledetection/common/Utils.java b/static/deploy/android_demo/app/src/main/java/com/baidu/paddledetection/common/Utils.java
deleted file mode 100644
index eb7bb743b5a1857d15d9db3f397932d381cc1909..0000000000000000000000000000000000000000
--- a/static/deploy/android_demo/app/src/main/java/com/baidu/paddledetection/common/Utils.java
+++ /dev/null
@@ -1,237 +0,0 @@
-package com.baidu.paddledetection.common;
-
-import android.content.Context;
-import android.content.res.Resources;
-import android.hardware.Camera;
-import android.opengl.GLES20;
-import android.os.Environment;
-import android.util.Log;
-import android.view.Surface;
-import android.view.WindowManager;
-
-import java.io.*;
-import java.util.List;
-
-public class Utils {
-    private static final String TAG = Utils.class.getSimpleName();
-
-    public static void RecursiveCreateDirectories(String fileDir) {
-        String[] fileDirs = fileDir.split("\\/");
-        String topPath = "";
-        for (int i = 0; i < fileDirs.length; i++) {
-            topPath += "/" + fileDirs[i];
-            File file = new File(topPath);
-            if (file.exists()) {
-                continue;
-            } else {
-                file.mkdir();
-            }
-        }
-    }
-
-    public static void copyFileFromAssets(Context appCtx, String srcPath, String dstPath) {
-        if (srcPath.isEmpty() || dstPath.isEmpty()) {
-            return;
-        }
-        String dstDir = dstPath.substring(0, dstPath.lastIndexOf('/'));
-        if (dstDir.length() > 0) {
-            RecursiveCreateDirectories(dstDir);
-        }
-        InputStream is = null;
-        OutputStream os = null;
-        try {
-            is = new BufferedInputStream(appCtx.getAssets().open(srcPath));
-            os = new BufferedOutputStream(new FileOutputStream(new File(dstPath)));
-            byte[] buffer = new byte[1024];
-            int length = 0;
-            while ((length = is.read(buffer)) != -1) {
-                os.write(buffer, 0, length);
-            }
-        } catch (FileNotFoundException e) {
-            e.printStackTrace();
-        } catch (IOException e) {
-            e.printStackTrace();
-        } finally {
-            try {
-                // Guard against streams that were never opened
-                if (os != null) os.close();
-                if (is != null) is.close();
-            } catch (IOException e) {
-                e.printStackTrace();
-            }
-        }
-    }
-
-    public static void copyDirectoryFromAssets(Context appCtx, String srcDir, String dstDir) {
-        if (srcDir.isEmpty() || dstDir.isEmpty()) {
-            return;
-        }
-        try {
-            if (!new File(dstDir).exists()) {
-                new File(dstDir).mkdirs();
-            }
-            for (String fileName : appCtx.getAssets().list(srcDir)) {
-                String srcSubPath = srcDir + File.separator + fileName;
-                String dstSubPath = dstDir + File.separator + fileName;
-                if (new File(srcSubPath).isDirectory()) {
-                    copyDirectoryFromAssets(appCtx, srcSubPath, dstSubPath);
-                } else {
-                    copyFileFromAssets(appCtx, srcSubPath, dstSubPath);
-                }
-            }
-        } catch (Exception e) {
-            e.printStackTrace();
-        }
-    }
-
-    public static float[] parseFloatsFromString(String string, String delimiter) {
-        String[] pieces = string.trim().toLowerCase().split(delimiter);
-        float[] floats = new float[pieces.length];
-        for (int i = 0; i < pieces.length; i++) {
-            floats[i] = Float.parseFloat(pieces[i].trim());
-        }
-        return floats;
-    }
-
-    public static long[] parseLongsFromString(String string, String delimiter) {
-        String[] pieces = string.trim().toLowerCase().split(delimiter);
-        long[] longs = new long[pieces.length];
-        for (int i = 0; i < pieces.length; i++) {
-            longs[i] = Long.parseLong(pieces[i].trim());
-        }
-        return longs;
-    }
-
-    public static String getSDCardDirectory() {
-        return Environment.getExternalStorageDirectory().getAbsolutePath();
-    }
-
-    public static String getDCIMDirectory() {
-        return Environment.getExternalStoragePublicDirectory(Environment.DIRECTORY_DCIM).getAbsolutePath();
-    }
-
-    public static Camera.Size getOptimalPreviewSize(List<Camera.Size> sizes, int w, int h) {
-        final double ASPECT_TOLERANCE = 0.1;
-        double targetRatio = (double) w / h;
-        if (sizes == null) return null;
-
-        Camera.Size optimalSize = null;
-        double minDiff = Double.MAX_VALUE;
-
-        int targetHeight = h;
-
-        // Try to find a size that matches both the aspect ratio and the target height
-        for (Camera.Size size : sizes) {
-            double ratio = (double) size.width / size.height;
-            if (Math.abs(ratio - targetRatio) > ASPECT_TOLERANCE) continue;
-            if (Math.abs(size.height - targetHeight) < minDiff) {
-                optimalSize = size;
-                minDiff = Math.abs(size.height - targetHeight);
-            }
-        }
-
-        // No size matches the aspect ratio; ignore that requirement and pick the closest height
-        if (optimalSize == null) {
-            minDiff = Double.MAX_VALUE;
-            for (Camera.Size size : sizes) {
-                if (Math.abs(size.height - targetHeight) < minDiff) {
-                    optimalSize = size;
-                    minDiff = Math.abs(size.height - targetHeight);
-                }
-            }
-        }
-        return optimalSize;
-    }
-
-    public static int getScreenWidth() {
-        return Resources.getSystem().getDisplayMetrics().widthPixels;
-    }
-
-    public static int getScreenHeight() {
-        return Resources.getSystem().getDisplayMetrics().heightPixels;
-    }
-
-    public static int
getCameraDisplayOrientation(Context context, int cameraId) { - android.hardware.Camera.CameraInfo info = new android.hardware.Camera.CameraInfo(); - android.hardware.Camera.getCameraInfo(cameraId, info); - WindowManager wm = (WindowManager) context.getSystemService(Context.WINDOW_SERVICE); - int rotation = wm.getDefaultDisplay().getRotation(); - int degrees = 0; - switch (rotation) { - case Surface.ROTATION_0: - degrees = 0; - break; - case Surface.ROTATION_90: - degrees = 90; - break; - case Surface.ROTATION_180: - degrees = 180; - break; - case Surface.ROTATION_270: - degrees = 270; - break; - } - int result; - if (info.facing == Camera.CameraInfo.CAMERA_FACING_FRONT) { - result = (info.orientation + degrees) % 360; - result = (360 - result) % 360; // compensate the mirror - } else { - // back-facing - result = (info.orientation - degrees + 360) % 360; - } - return result; - } - - public static int createShaderProgram(String vss, String fss) { - int vshader = GLES20.glCreateShader(GLES20.GL_VERTEX_SHADER); - GLES20.glShaderSource(vshader, vss); - GLES20.glCompileShader(vshader); - int[] status = new int[1]; - GLES20.glGetShaderiv(vshader, GLES20.GL_COMPILE_STATUS, status, 0); - if (status[0] == 0) { - Log.e(TAG, GLES20.glGetShaderInfoLog(vshader)); - GLES20.glDeleteShader(vshader); - vshader = 0; - return 0; - } - - int fshader = GLES20.glCreateShader(GLES20.GL_FRAGMENT_SHADER); - GLES20.glShaderSource(fshader, fss); - GLES20.glCompileShader(fshader); - GLES20.glGetShaderiv(fshader, GLES20.GL_COMPILE_STATUS, status, 0); - if (status[0] == 0) { - Log.e(TAG, GLES20.glGetShaderInfoLog(fshader)); - GLES20.glDeleteShader(vshader); - GLES20.glDeleteShader(fshader); - fshader = 0; - return 0; - } - - int program = GLES20.glCreateProgram(); - GLES20.glAttachShader(program, vshader); - GLES20.glAttachShader(program, fshader); - GLES20.glLinkProgram(program); - GLES20.glDeleteShader(vshader); - GLES20.glDeleteShader(fshader); - GLES20.glGetProgramiv(program, GLES20.GL_LINK_STATUS, status, 0); - if (status[0] == 0) { - Log.e(TAG, GLES20.glGetProgramInfoLog(program)); - program = 0; - return 0; - } - GLES20.glValidateProgram(program); - GLES20.glGetProgramiv(program, GLES20.GL_VALIDATE_STATUS, status, 0); - if (status[0] == 0) { - Log.e(TAG, GLES20.glGetProgramInfoLog(program)); - GLES20.glDeleteProgram(program); - program = 0; - return 0; - } - - return program; - } - - public static boolean isSupportedNPU() { - String hardware = android.os.Build.HARDWARE; - return hardware.equalsIgnoreCase("kirin810") || hardware.equalsIgnoreCase("kirin990"); - } -} diff --git a/static/deploy/android_demo/app/src/main/java/com/baidu/paddledetection/detection/CameraFragment.java b/static/deploy/android_demo/app/src/main/java/com/baidu/paddledetection/detection/CameraFragment.java deleted file mode 100644 index 238e5378c8216a581e136421820f56f41dc82a08..0000000000000000000000000000000000000000 --- a/static/deploy/android_demo/app/src/main/java/com/baidu/paddledetection/detection/CameraFragment.java +++ /dev/null @@ -1,206 +0,0 @@ -package com.baidu.paddledetection.detection; - -import android.Manifest; -import android.app.Activity; -import android.app.AlertDialog; -import android.content.DialogInterface; -import android.content.Intent; -import android.content.SharedPreferences; -import android.content.pm.PackageManager; -import android.os.Bundle; -import android.preference.PreferenceManager; -import android.view.LayoutInflater; -import android.view.View; -import android.view.ViewGroup; -import android.view.Window; 
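-// CameraFragment below glues CameraSurfaceView's onTextureChanged callback to
-// the native detector and overlays a frame rate averaged over 30 frames.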
-import android.view.WindowManager; -import android.widget.ImageButton; -import android.widget.TextView; -import android.widget.Toast; - -import androidx.annotation.NonNull; -import androidx.core.app.ActivityCompat; -import androidx.core.content.ContextCompat; -import androidx.fragment.app.Fragment; -import androidx.navigation.fragment.NavHostFragment; - -import com.baidu.paddledetection.common.CameraSurfaceView; -import com.baidu.paddledetection.common.Utils; - -import java.io.File; -import java.text.SimpleDateFormat; -import java.util.Date; - -public class CameraFragment extends Fragment implements View.OnClickListener, CameraSurfaceView.OnTextureChangedListener { - private static final String TAG = CameraFragment.class.getSimpleName(); - - CameraSurfaceView svPreview; - TextView tvStatus; - ImageButton btnSwitch; - ImageButton btnShutter; - ImageButton btnSettings; - - String savedImagePath = ""; - int lastFrameIndex = 0; - long lastFrameTime; - - Native predictor = new Native(); - - @Override - public View onCreateView(LayoutInflater inflater, ViewGroup container, - Bundle savedInstanceState) { - // Inflate the layout for this fragment - return inflater.inflate(R.layout.fragment_camera, container, false); - } - - public void onViewCreated(@NonNull View view, Bundle savedInstanceState) { - super.onViewCreated(view, savedInstanceState); - // Clear all setting items to avoid app crashing due to the incorrect settings - initSettings(); - // Init the camera preview and UI components - initView(view); - // Check and request CAMERA and WRITE_EXTERNAL_STORAGE permissions - if (!checkAllPermissions()) { - requestAllPermissions(); - } - } - - @Override - public void onClick(View v) { - switch (v.getId()) { - case R.id.btn_switch: - svPreview.switchCamera(); - break; - case R.id.btn_shutter: - SimpleDateFormat date = new SimpleDateFormat("yyyy_MM_dd_HH_mm_ss"); - synchronized (this) { - savedImagePath = Utils.getDCIMDirectory() + File.separator + date.format(new Date()).toString() + ".png"; - } - Toast.makeText(getActivity(), "Save snapshot to " + savedImagePath, Toast.LENGTH_SHORT).show(); - break; - case R.id.btn_settings: - startActivity(new Intent(getActivity(), SettingsActivity.class)); - break; - } - } - - @Override - public boolean onTextureChanged(int inTextureId, int outTextureId, int textureWidth, int textureHeight) { - String savedImagePath = ""; - synchronized (this) { - savedImagePath = CameraFragment.this.savedImagePath; - } - boolean modified = predictor.process(inTextureId, outTextureId, textureWidth, textureHeight, savedImagePath); - if (!savedImagePath.isEmpty()) { - synchronized (this) { - CameraFragment.this.savedImagePath = ""; - } - } - lastFrameIndex++; - if (lastFrameIndex >= 30) { - final int fps = (int) (lastFrameIndex * 1e9 / (System.nanoTime() - lastFrameTime)); - getActivity().runOnUiThread(new Runnable() { - public void run() { - tvStatus.setText(Integer.toString(fps) + "fps"); - } - }); - lastFrameIndex = 0; - lastFrameTime = System.nanoTime(); - } - return modified; - } - - @Override - public void onResume() { - super.onResume(); - // Reload settings and re-initialize the predictor - checkAndUpdateSettings(); - // Open camera until the permissions have been granted - if (!checkAllPermissions()) { - svPreview.disableCamera(); - } - svPreview.onResume(); - } - - @Override - public void onPause() { - super.onPause(); - svPreview.onPause(); - } - - @Override - public void onDestroy() { - if (predictor != null) { - predictor.release(); - } - super.onDestroy(); - } - - 
public void initView(@NonNull View view) { - svPreview = (CameraSurfaceView) view.findViewById(R.id.sv_preview); - svPreview.setOnTextureChangedListener(this); - tvStatus = (TextView) view.findViewById(R.id.tv_status); - btnSwitch = (ImageButton) view.findViewById(R.id.btn_switch); - btnSwitch.setOnClickListener(this); - btnShutter = (ImageButton) view.findViewById(R.id.btn_shutter); - btnShutter.setOnClickListener(this); - btnSettings = (ImageButton) view.findViewById(R.id.btn_settings); - btnSettings.setOnClickListener(this); - } - - public void initSettings() { - SharedPreferences sharedPreferences = PreferenceManager.getDefaultSharedPreferences(getActivity()); - SharedPreferences.Editor editor = sharedPreferences.edit(); - editor.clear(); - editor.commit(); - SettingsActivity.resetSettings(); - } - - public void checkAndUpdateSettings() { - if (SettingsActivity.checkAndUpdateSettings(getActivity())) { - String realModelDir = getActivity().getCacheDir() + "/" + SettingsActivity.modelDir; - Utils.copyDirectoryFromAssets(getActivity(), SettingsActivity.modelDir, realModelDir); - String realLabelPath = getActivity().getCacheDir() + "/" + SettingsActivity.labelPath; - Utils.copyFileFromAssets(getActivity(), SettingsActivity.labelPath, realLabelPath); - predictor.init( - realModelDir, - realLabelPath, - SettingsActivity.cpuThreadNum, - SettingsActivity.cpuPowerMode, - SettingsActivity.inputWidth, - SettingsActivity.inputHeight, - SettingsActivity.inputMean, - SettingsActivity.inputStd, - SettingsActivity.scoreThreshold); - } - } - - @Override - public void onRequestPermissionsResult(int requestCode, @NonNull String[] permissions, - @NonNull int[] grantResults) { - super.onRequestPermissionsResult(requestCode, permissions, grantResults); - if (grantResults[0] != PackageManager.PERMISSION_GRANTED || grantResults[1] != PackageManager.PERMISSION_GRANTED) { - new AlertDialog.Builder(getActivity()) - .setTitle("Permission denied") - .setMessage("Click to force quit the app, then open Settings->Apps & notifications->Target " + - "App->Permissions to grant all of the permissions.") - .setCancelable(false) - .setPositiveButton("Exit", new DialogInterface.OnClickListener() { - @Override - public void onClick(DialogInterface dialog, int which) { - getActivity().finish(); - } - }).show(); - } - } - - private void requestAllPermissions() { - ActivityCompat.requestPermissions(getActivity(), new String[]{Manifest.permission.WRITE_EXTERNAL_STORAGE, - Manifest.permission.CAMERA}, 0); - } - - private boolean checkAllPermissions() { - return ContextCompat.checkSelfPermission(getActivity(), Manifest.permission.WRITE_EXTERNAL_STORAGE) == PackageManager.PERMISSION_GRANTED - && ContextCompat.checkSelfPermission(getActivity(), Manifest.permission.CAMERA) == PackageManager.PERMISSION_GRANTED; - } -} diff --git a/static/deploy/android_demo/app/src/main/java/com/baidu/paddledetection/detection/ContentFragment.java b/static/deploy/android_demo/app/src/main/java/com/baidu/paddledetection/detection/ContentFragment.java deleted file mode 100644 index ed75d168685f47e5fba5ca2ca26d3c38e4d7af70..0000000000000000000000000000000000000000 --- a/static/deploy/android_demo/app/src/main/java/com/baidu/paddledetection/detection/ContentFragment.java +++ /dev/null @@ -1,43 +0,0 @@ -package com.baidu.paddledetection.detection; - -import android.os.Bundle; -import android.view.LayoutInflater; -import android.view.View; -import android.view.ViewGroup; - -import androidx.annotation.NonNull; -import androidx.fragment.app.Fragment; -import 
androidx.navigation.fragment.NavHostFragment; - -public class ContentFragment extends Fragment { - - @Override - public View onCreateView( - LayoutInflater inflater, ViewGroup container, - Bundle savedInstanceState - ) { - // Inflate the layout for this fragment - return inflater.inflate(R.layout.fragment_content, container, false); - } - - public void onViewCreated(@NonNull View view, Bundle savedInstanceState) { - super.onViewCreated(view, savedInstanceState); - - view.findViewById(R.id.camera).setOnClickListener(new View.OnClickListener() { - @Override - public void onClick(View view) { - NavHostFragment.findNavController(ContentFragment.this) - .navigate(R.id.action_ContentFragment_to_CameraFragment); - - } - }); - - view.findViewById(R.id.photo).setOnClickListener(new View.OnClickListener() { - @Override - public void onClick(View view) { - NavHostFragment.findNavController(ContentFragment.this) - .navigate(R.id.action_ContentFragment_to_PhotoFragment); - } - }); - } -} \ No newline at end of file diff --git a/static/deploy/android_demo/app/src/main/java/com/baidu/paddledetection/detection/MainActivity.java b/static/deploy/android_demo/app/src/main/java/com/baidu/paddledetection/detection/MainActivity.java deleted file mode 100644 index 5100c87cb0a68ef0871af1970444c19cf6f1e0af..0000000000000000000000000000000000000000 --- a/static/deploy/android_demo/app/src/main/java/com/baidu/paddledetection/detection/MainActivity.java +++ /dev/null @@ -1,50 +0,0 @@ -package com.baidu.paddledetection.detection; - -import android.os.Bundle; - -import com.google.android.material.floatingactionbutton.FloatingActionButton; -import com.google.android.material.navigation.NavigationView; -import com.google.android.material.snackbar.Snackbar; - -import androidx.appcompat.app.AppCompatActivity; -import androidx.appcompat.widget.Toolbar; - -import android.view.View; - -import android.view.Menu; -import android.view.MenuItem; - -public class MainActivity extends AppCompatActivity { - private NavigationView navigationView; - - @Override - protected void onCreate(Bundle savedInstanceState) { - super.onCreate(savedInstanceState); - setContentView(R.layout.activity_main); - Toolbar toolbar = findViewById(R.id.toolbar); - setSupportActionBar(toolbar); - - } - - @Override - public boolean onCreateOptionsMenu(Menu menu) { - // Inflate the menu; this adds items to the action bar if it is present. - getMenuInflater().inflate(R.menu.menu_main, menu); - return true; - } - - @Override - public boolean onOptionsItemSelected(MenuItem item) { - // Handle action bar item clicks here. The action bar will - // automatically handle clicks on the Home/Up button, so long - // as you specify a parent activity in AndroidManifest.xml. 
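-        // R.id.action_settings handled below is declared in res/menu/menu_main.xml,
-        // which onCreateOptionsMenu inflates above.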
-        int id = item.getItemId();
-
-        //noinspection SimplifiableIfStatement
-        if (id == R.id.action_settings) {
-            return true;
-        }
-
-        return super.onOptionsItemSelected(item);
-    }
-}
\ No newline at end of file
diff --git a/static/deploy/android_demo/app/src/main/java/com/baidu/paddledetection/detection/Native.java b/static/deploy/android_demo/app/src/main/java/com/baidu/paddledetection/detection/Native.java
deleted file mode 100644
index d55be1034358ddb2357b7dfa8c38a285beb159af..0000000000000000000000000000000000000000
--- a/static/deploy/android_demo/app/src/main/java/com/baidu/paddledetection/detection/Native.java
+++ /dev/null
@@ -1,59 +0,0 @@
-package com.baidu.paddledetection.detection;
-
-public class Native {
-    static {
-        System.loadLibrary("Native");
-    }
-
-    private long ctx = 0;
-
-    public boolean init(String modelDir,
-                        String labelPath,
-                        int cpuThreadNum,
-                        String cpuPowerMode,
-                        int inputWidth,
-                        int inputHeight,
-                        float[] inputMean,
-                        float[] inputStd,
-                        float scoreThreshold) {
-        ctx = nativeInit(
-                modelDir,
-                labelPath,
-                cpuThreadNum,
-                cpuPowerMode,
-                inputWidth,
-                inputHeight,
-                inputMean,
-                inputStd,
-                scoreThreshold);
-        // A zero handle means nativeInit failed; release() and process() below
-        // treat 0 as invalid, so init succeeds only for a non-zero handle.
-        return ctx != 0;
-    }
-
-    public boolean release() {
-        if (ctx == 0) {
-            return false;
-        }
-        return nativeRelease(ctx);
-    }
-
-    public boolean process(int inTextureId, int outTextureId, int textureWidth, int textureHeight, String savedImagePath) {
-        if (ctx == 0) {
-            return false;
-        }
-        return nativeProcess(ctx, inTextureId, outTextureId, textureWidth, textureHeight, savedImagePath);
-    }
-
-    public static native long nativeInit(String modelDir,
-                                         String labelPath,
-                                         int cpuThreadNum,
-                                         String cpuPowerMode,
-                                         int inputWidth,
-                                         int inputHeight,
-                                         float[] inputMean,
-                                         float[] inputStd,
-                                         float scoreThreshold);
-
-    public static native boolean nativeRelease(long ctx);
-
-    public static native boolean nativeProcess(long ctx, int inTextureId, int outTextureId, int textureWidth, int textureHeight, String savedImagePath);
-}
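Native is a thin JNI facade over the Paddle-Lite detector: load once, call process per camera frame with the two texture ids, then release. A minimal sketch of that lifecycle follows; the model/label paths and thresholds are hypothetical placeholders echoing defaults used elsewhere in the demo, not a documented contract.

```java
// Hypothetical caller of the JNI wrapper above; paths and values are placeholders.
public final class DetectorSession {
    public static void runOnce(int inTextureId, int outTextureId, int width, int height) {
        Native detector = new Native();
        boolean ok = detector.init(
                "models/yolov3_mobilenet_v3_for_cpu",   // modelDir (assumed asset dir)
                "labels/coco-labels-2014_2017.txt",     // labelPath
                1,                                      // cpuThreadNum
                "LITE_POWER_HIGH",                      // cpuPowerMode
                320, 320,                               // network input width/height
                new float[]{0.485f, 0.456f, 0.406f},    // inputMean
                new float[]{0.229f, 0.224f, 0.225f},    // inputStd
                0.5f);                                  // scoreThreshold
        if (ok) {
            // Texture ids come from CameraSurfaceView.onTextureChanged;
            // an empty path means "do not snapshot this frame".
            detector.process(inTextureId, outTextureId, width, height, "");
            detector.release();
        }
    }
}
```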
diff --git a/static/deploy/android_demo/app/src/main/java/com/baidu/paddledetection/detection/PhotoFragment.java b/static/deploy/android_demo/app/src/main/java/com/baidu/paddledetection/detection/PhotoFragment.java
deleted file mode 100644
index f37d13f27a778cfe984099a065ee0604b6483918..0000000000000000000000000000000000000000
--- a/static/deploy/android_demo/app/src/main/java/com/baidu/paddledetection/detection/PhotoFragment.java
+++ /dev/null
@@ -1,374 +0,0 @@
-package com.baidu.paddledetection.detection;
-
-import android.app.ProgressDialog;
-import android.content.ContentResolver;
-import android.content.Intent;
-import android.content.SharedPreferences;
-import android.database.Cursor;
-import android.graphics.Bitmap;
-import android.graphics.BitmapFactory;
-import android.net.Uri;
-import android.os.Bundle;
-import android.os.Handler;
-import android.os.HandlerThread;
-import android.os.Message;
-import android.preference.PreferenceManager;
-import android.provider.MediaStore;
-import android.text.method.ScrollingMovementMethod;
-import android.util.Log;
-import android.view.LayoutInflater;
-import android.view.View;
-import android.view.ViewGroup;
-import android.widget.ImageView;
-import android.widget.TextView;
-import android.widget.Toast;
-
-import androidx.annotation.NonNull;
-import androidx.fragment.app.Fragment;
-
-import com.baidu.paddledetection.common.Utils;
-import com.google.android.material.floatingactionbutton.FloatingActionButton;
-
-import java.io.File;
-import java.io.IOException;
-import java.io.InputStream;
-
-import static android.app.Activity.RESULT_OK;
-
-public class PhotoFragment extends Fragment implements View.OnClickListener {
-    private static final String TAG = PhotoFragment.class.getSimpleName();
-    public static final int OPEN_GALLERY_REQUEST_CODE = 0;
-    public static final int TAKE_PHOTO_REQUEST_CODE = 1;
-
-    public static final int REQUEST_LOAD_MODEL = 0;
-    public static final int REQUEST_RUN_MODEL = 1;
-    public static final int RESPONSE_LOAD_MODEL_SUCCESSED = 0;
-    public static final int RESPONSE_LOAD_MODEL_FAILED = 1;
-    public static final int RESPONSE_RUN_MODEL_SUCCESSED = 2;
-    public static final int RESPONSE_RUN_MODEL_FAILED = 3;
-
-    protected ProgressDialog pbLoadModel = null;
-    protected ProgressDialog pbRunModel = null;
-
-    public Handler receiver = null; // Receive messages from worker thread
-    protected Handler sender = null; // Send command to worker thread
-    protected HandlerThread worker = null; // Worker thread to load&run model
-
-    // UI components of object detection
-    protected TextView tvInputSetting;
-    protected ImageView ivInputImage;
-    protected TextView tvOutputResult;
-    protected TextView tvInferenceTime;
-
-    // Model settings of object detection
-    protected String modelPath = "";
-    protected String labelPath = "";
-    protected String imagePath = "";
-    protected int cpuThreadNum = 1;
-    protected String cpuPowerMode = "";
-    protected String inputColorFormat = "";
-    protected long[] inputShape = new long[]{};
-    protected float[] inputMean = new float[]{};
-    protected float[] inputStd = new float[]{};
-    protected float scoreThreshold = 0.5f;
-
-    protected Predictor predictor = new Predictor();
-
-    @Override
-    public View onCreateView(
-            LayoutInflater inflater, ViewGroup container,
-            Bundle savedInstanceState
-    ) {
-        // Inflate the layout for this fragment
-        return inflater.inflate(R.layout.fragment_photo, container, false);
-    }
-
-    public void onViewCreated(@NonNull View view, Bundle savedInstanceState) {
-        super.onViewCreated(view, savedInstanceState);
-        // Prepare the worker thread for model loading and inference
-        receiver = new Handler() {
-            @Override
-            public void handleMessage(Message msg) {
-                switch (msg.what) {
-                    case RESPONSE_LOAD_MODEL_SUCCESSED:
-                        pbLoadModel.dismiss();
-                        onLoadModelSuccessed();
-                        break;
-                    case RESPONSE_LOAD_MODEL_FAILED:
-                        pbLoadModel.dismiss();
-                        Toast.makeText(getActivity(), "Load model failed!", Toast.LENGTH_SHORT).show();
-                        onLoadModelFailed();
-                        break;
-                    case RESPONSE_RUN_MODEL_SUCCESSED:
-                        pbRunModel.dismiss();
-                        onRunModelSuccessed();
-                        break;
-                    case RESPONSE_RUN_MODEL_FAILED:
-                        pbRunModel.dismiss();
-                        Toast.makeText(getActivity(), "Run model failed!", Toast.LENGTH_SHORT).show();
-                        onRunModelFailed();
-                        break;
-                    default:
-                        break;
-                }
-            }
-        };
-
-        worker = new HandlerThread("Predictor Worker");
-        worker.start();
-        sender = new Handler(worker.getLooper()) {
-            public void handleMessage(Message msg) {
-                switch (msg.what) {
-                    case REQUEST_LOAD_MODEL:
-                        // Load model and reload test image
-                        if (onLoadModel()) {
-                            receiver.sendEmptyMessage(RESPONSE_LOAD_MODEL_SUCCESSED);
-                        } else {
-                            receiver.sendEmptyMessage(RESPONSE_LOAD_MODEL_FAILED);
-                        }
-                        break;
-                    case REQUEST_RUN_MODEL:
-                        // Run model if model is loaded
-                        if (onRunModel()) {
-                            receiver.sendEmptyMessage(RESPONSE_RUN_MODEL_SUCCESSED);
-                        } else {
-                            receiver.sendEmptyMessage(RESPONSE_RUN_MODEL_FAILED);
-                        }
-                        break;
-                    default:
-                        break;
-                }
-            }
-        };
-        initView(view);
-    }
-
-    @Override
-    public void onClick(View v) {
-        switch (v.getId()) {
-            case R.id.iv_input_image:
-                break;
-        }
-    }
-
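-    // Message flow wired in onViewCreated above: the UI thread posts
-    // REQUEST_LOAD_MODEL / REQUEST_RUN_MODEL to `sender` on the worker thread,
-    // which runs onLoadModel()/onRunModel() and replies through `receiver`
-    // with a RESPONSE_* code so the UI can dismiss its dialog and refresh.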
-    public void initView(@NonNull View view) {
-        // Setup the UI components
-        tvInputSetting = view.findViewById(R.id.tv_input_setting);
-        ivInputImage = view.findViewById(R.id.iv_input_image);
-        ivInputImage.setOnClickListener(this);
-        tvInferenceTime = view.findViewById(R.id.tv_inference_time);
-        tvOutputResult = view.findViewById(R.id.tv_output_result);
-        tvInputSetting.setMovementMethod(ScrollingMovementMethod.getInstance());
-        tvOutputResult.setMovementMethod(ScrollingMovementMethod.getInstance());
-        FloatingActionButton fab = view.findViewById(R.id.fab);
-        fab.setOnClickListener(new View.OnClickListener() {
-            @Override
-            public void onClick(View view) {
-                openGallery();
-                // You can take a photo instead:
-                // takePhoto();
-            }
-        });
-    }
-
-    public void loadModel() {
-        pbLoadModel = ProgressDialog.show(getActivity(), "", "Loading model...", false, false);
-        sender.sendEmptyMessage(REQUEST_LOAD_MODEL);
-    }
-
-    public void runModel() {
-        pbRunModel = ProgressDialog.show(getActivity(), "", "Running model...", false, false);
-        sender.sendEmptyMessage(REQUEST_RUN_MODEL);
-        Log.i(TAG, "Start running model");
-    }
-
-    public boolean onLoadModel() {
-        return predictor.init(getActivity(), modelPath, labelPath, cpuThreadNum,
-                cpuPowerMode,
-                inputColorFormat,
-                inputShape, inputMean,
-                inputStd, scoreThreshold);
-    }
-
-    public boolean onRunModel() {
-        return predictor.isLoaded() && predictor.runModel();
-    }
-
-    public void onLoadModelSuccessed() {
-        // Load test image from path and run model
-        try {
-            if (imagePath.isEmpty()) {
-                return;
-            }
-            Bitmap image = null;
-            // Read the test image from a custom location if its path starts with '/';
-            // otherwise read it from the app assets
-            if (!imagePath.substring(0, 1).equals("/")) {
-                InputStream imageStream = getActivity().getAssets().open(imagePath);
-                image = BitmapFactory.decodeStream(imageStream);
-            } else {
-                if (!new File(imagePath).exists()) {
-                    return;
-                }
-                image = BitmapFactory.decodeFile(imagePath);
-            }
-            if (image != null && predictor.isLoaded()) {
-                predictor.setInputImage(image);
-                runModel();
-            }
-        } catch (IOException e) {
-            Toast.makeText(getActivity(), "Load image failed!", Toast.LENGTH_SHORT).show();
-            e.printStackTrace();
-        }
-    }
-
-    public void onLoadModelFailed() {
-    }
-
-    public void onRunModelSuccessed() {
-        // Obtain results and update UI
-        tvInferenceTime.setText("Inference time: " + predictor.inferenceTime() + " ms");
-        Bitmap outputImage = predictor.outputImage();
-        if (outputImage != null) {
-            ivInputImage.setImageBitmap(outputImage);
-        }
-        tvOutputResult.setText(predictor.outputResult());
-        tvOutputResult.scrollTo(0, 0);
-    }
-
-    public void onRunModelFailed() {}
-
-    public void onImageChanged(Bitmap image) {
-        // Rerun model if users pick test image from gallery or camera
-        if (image != null && predictor.isLoaded()) {
-            predictor.setInputImage(image);
-            runModel();
-        }
-    }
-
-    @Override
-    public void onActivityResult(int requestCode, int resultCode, Intent data) {
-        super.onActivityResult(requestCode, resultCode, data);
-        if (resultCode == RESULT_OK && data != null) {
-            switch (requestCode) {
-                case OPEN_GALLERY_REQUEST_CODE:
-                    try {
-                        ContentResolver resolver = getActivity().getContentResolver();
-                        Uri uri = data.getData();
-                        Bitmap image = MediaStore.Images.Media.getBitmap(resolver, uri);
-                        String[] proj = {MediaStore.Images.Media.DATA};
-                        Cursor cursor = getActivity().managedQuery(uri, proj, null, null, null);
-                        cursor.moveToFirst();
-                        onImageChanged(image);
-                    } catch (IOException e) {
-                        Log.e(TAG, e.toString());
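-                        // Decoding the picked image failed; log and keep the previous preview.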
- } - break; - case TAKE_PHOTO_REQUEST_CODE: - Bundle extras = data.getExtras(); - Bitmap image = (Bitmap) extras.get("data"); - onImageChanged(image); - break; - default: - break; - } - } - } - - private void openGallery() { - Intent intent = new Intent(Intent.ACTION_PICK, null); - intent.setDataAndType(MediaStore.Images.Media.EXTERNAL_CONTENT_URI, "image/*"); - startActivityForResult(intent, OPEN_GALLERY_REQUEST_CODE); - } - - private void takePhoto() { - Intent takePhotoIntent = new Intent(MediaStore.ACTION_IMAGE_CAPTURE); - if (takePhotoIntent.resolveActivity(getActivity().getPackageManager()) != null) { - startActivityForResult(takePhotoIntent, TAKE_PHOTO_REQUEST_CODE); - } - } - - @Override - public void onResume() { - super.onResume(); - SharedPreferences sharedPreferences = PreferenceManager.getDefaultSharedPreferences(getActivity()); - boolean settingsChanged = false; - String model_path = sharedPreferences.getString("NULL", - getString(R.string.MODEL_PATH_DEFAULT)); - String label_path = sharedPreferences.getString(getString(R.string.LABEL_PATH_KEY), - getString(R.string.SSD_LABEL_PATH_DEFAULT)); - String image_path = sharedPreferences.getString("NULL", - getString(R.string.IMAGE_PATH_DEFAULT)); - settingsChanged |= !model_path.equalsIgnoreCase(modelPath); - settingsChanged |= !label_path.equalsIgnoreCase(labelPath); - settingsChanged |= !image_path.equalsIgnoreCase(imagePath); - int cpu_thread_num = Integer.parseInt(sharedPreferences.getString(getString(R.string.CPU_THREAD_NUM_KEY), - getString(R.string.CPU_THREAD_NUM_DEFAULT))); - settingsChanged |= cpu_thread_num != cpuThreadNum; - String cpu_power_mode = - sharedPreferences.getString(getString(R.string.CPU_POWER_MODE_KEY), - getString(R.string.CPU_POWER_MODE_DEFAULT)); - settingsChanged |= !cpu_power_mode.equalsIgnoreCase(cpuPowerMode); - String input_color_format = - sharedPreferences.getString("NULL", - getString(R.string.INPUT_COLOR_FORMAT_DEFAULT)); - settingsChanged |= !input_color_format.equalsIgnoreCase(inputColorFormat); - long[] input_shape = - Utils.parseLongsFromString(sharedPreferences.getString("NULL", - getString(R.string.INPUT_SHAPE_DEFAULT)), ","); - float[] input_mean = - Utils.parseFloatsFromString(sharedPreferences.getString(getString(R.string.INPUT_MEAN_KEY), - getString(R.string.INPUT_MEAN_DEFAULT)), ","); - float[] input_std = - Utils.parseFloatsFromString(sharedPreferences.getString(getString(R.string.INPUT_STD_KEY) - , getString(R.string.INPUT_STD_DEFAULT)), ","); - settingsChanged |= input_shape.length != inputShape.length; - settingsChanged |= input_mean.length != inputMean.length; - settingsChanged |= input_std.length != inputStd.length; - if (!settingsChanged) { - for (int i = 0; i < input_shape.length; i++) { - settingsChanged |= input_shape[i] != inputShape[i]; - } - for (int i = 0; i < input_mean.length; i++) { - settingsChanged |= input_mean[i] != inputMean[i]; - } - for (int i = 0; i < input_std.length; i++) { - settingsChanged |= input_std[i] != inputStd[i]; - } - } - float score_threshold = - Float.parseFloat(sharedPreferences.getString(getString(R.string.SCORE_THRESHOLD_KEY), - getString(R.string.SSD_SCORE_THRESHOLD_DEFAULT))); - settingsChanged |= scoreThreshold != score_threshold; - if (settingsChanged) { - modelPath = model_path; - labelPath = label_path; - imagePath = image_path; - cpuThreadNum = cpu_thread_num; - cpuPowerMode = cpu_power_mode; - inputColorFormat = input_color_format; - inputShape = input_shape; - inputMean = input_mean; - inputStd = input_std; - scoreThreshold = 
score_threshold;
-            // Update UI
-            tvInputSetting.setText("Model: " + modelPath.substring(modelPath.lastIndexOf("/") + 1) + "\n" + "CPU" +
-                    " Thread Num: " + Integer.toString(cpuThreadNum) + "\n" + "CPU Power Mode: " + cpuPowerMode);
-            tvInputSetting.scrollTo(0, 0);
-            // Reload the model if the configuration has changed
-            loadModel();
-        }
-    }
-
-    @Override
-    public void onDestroy() {
-        if (predictor != null) {
-            predictor.releaseModel();
-        }
-        worker.quit();
-        super.onDestroy();
-    }
-}
\ No newline at end of file
diff --git a/static/deploy/android_demo/app/src/main/java/com/baidu/paddledetection/detection/Predictor.java b/static/deploy/android_demo/app/src/main/java/com/baidu/paddledetection/detection/Predictor.java
deleted file mode 100644
index 07c1bf059fd1a4137c295205a4523ecbf8007d11..0000000000000000000000000000000000000000
--- a/static/deploy/android_demo/app/src/main/java/com/baidu/paddledetection/detection/Predictor.java
+++ /dev/null
@@ -1,369 +0,0 @@
-package com.baidu.paddledetection.detection;
-
-import android.content.Context;
-import android.graphics.Bitmap;
-import android.graphics.Canvas;
-import android.graphics.Paint;
-import android.util.Log;
-import com.baidu.paddle.lite.MobileConfig;
-import com.baidu.paddle.lite.PaddlePredictor;
-import com.baidu.paddle.lite.PowerMode;
-import com.baidu.paddle.lite.Tensor;
-import com.baidu.paddledetection.common.Utils;
-
-import java.io.File;
-import java.io.InputStream;
-import java.util.Date;
-import java.util.Vector;
-
-import static android.graphics.Color.*;
-
-public class Predictor {
-    private static final String TAG = Predictor.class.getSimpleName();
-    public boolean isLoaded = false;
-    public int warmupIterNum = 1;
-    public int inferIterNum = 1;
-    public int cpuThreadNum = 1;
-    public String cpuPowerMode = "LITE_POWER_HIGH";
-    public String modelPath = "";
-    public String modelName = "";
-    protected PaddlePredictor paddlePredictor = null;
-    protected float inferenceTime = 0;
-    // Only for object detection
-    protected Vector<String> wordLabels = new Vector<String>();
-    protected String inputColorFormat = "RGB";
-    protected long[] inputShape = new long[]{1, 3, 320, 320};
-    protected float[] inputMean = new float[]{0.485f, 0.456f, 0.406f};
-    protected float[] inputStd = new float[]{0.229f, 0.224f, 0.225f};
-    protected float scoreThreshold = 0.5f;
-    protected Bitmap inputImage = null;
-    protected Bitmap outputImage = null;
-    protected String outputResult = "";
-    protected float preprocessTime = 0;
-    protected float postprocessTime = 0;
-
-    public Predictor() {
-    }
-
-    public boolean init(Context appCtx, String modelPath, String labelPath, int cpuThreadNum, String cpuPowerMode,
-                        String inputColorFormat,
-                        long[] inputShape, float[] inputMean,
-                        float[] inputStd, float scoreThreshold) {
-        if (inputShape.length != 4) {
-            Log.i(TAG, "Size of input shape should be: 4");
-            return false;
-        }
-        if (inputMean.length != inputShape[1]) {
-            Log.i(TAG, "Size of input mean should be: " + Long.toString(inputShape[1]));
-            return false;
-        }
-        if (inputStd.length != inputShape[1]) {
-            Log.i(TAG, "Size of input std should be: " + Long.toString(inputShape[1]));
-            return false;
-        }
-        if (inputShape[0] != 1) {
-            Log.i(TAG, "Only one batch is supported in the object detection demo, you can use any batch size in " +
-                    "your Apps!");
-            return false;
-        }
-        if (inputShape[1] != 1 && inputShape[1] != 3) {
-            Log.i(TAG, "Only one/three channels are supported in the object detection demo, you can use any " +
-                    "channel size in your Apps!");
-            return false;
-        }
-        if
(!inputColorFormat.equalsIgnoreCase("RGB") && !inputColorFormat.equalsIgnoreCase("BGR")) { - Log.i(TAG, "Only RGB and BGR color format is supported."); - return false; - } - isLoaded = loadModel(appCtx, modelPath, cpuThreadNum, cpuPowerMode); - if (!isLoaded) { - return false; - } - isLoaded = loadLabel(appCtx, labelPath); - if (!isLoaded) { - return false; - } - this.inputColorFormat = inputColorFormat; - this.inputShape = inputShape; - this.inputMean = inputMean; - this.inputStd = inputStd; - this.scoreThreshold = scoreThreshold; - return true; - } - - protected boolean loadModel(Context appCtx, String modelPath, int cpuThreadNum, String cpuPowerMode) { - // Release model if exists - releaseModel(); - - // Load model - if (modelPath.isEmpty()) { - return false; - } - String realPath = modelPath; - if (!modelPath.substring(0, 1).equals("/")) { - // Read model files from custom path if the first character of mode path is '/' - // otherwise copy model to cache from assets - realPath = appCtx.getCacheDir() + "/" + modelPath; - Utils.copyDirectoryFromAssets(appCtx, modelPath, realPath); - } - if (realPath.isEmpty()) { - return false; - } - MobileConfig config = new MobileConfig(); - config.setModelFromFile(realPath + File.separator + "model.nb"); - config.setThreads(cpuThreadNum); - if (cpuPowerMode.equalsIgnoreCase("LITE_POWER_HIGH")) { - config.setPowerMode(PowerMode.LITE_POWER_HIGH); - } else if (cpuPowerMode.equalsIgnoreCase("LITE_POWER_LOW")) { - config.setPowerMode(PowerMode.LITE_POWER_LOW); - } else if (cpuPowerMode.equalsIgnoreCase("LITE_POWER_FULL")) { - config.setPowerMode(PowerMode.LITE_POWER_FULL); - } else if (cpuPowerMode.equalsIgnoreCase("LITE_POWER_NO_BIND")) { - config.setPowerMode(PowerMode.LITE_POWER_NO_BIND); - } else if (cpuPowerMode.equalsIgnoreCase("LITE_POWER_RAND_HIGH")) { - config.setPowerMode(PowerMode.LITE_POWER_RAND_HIGH); - } else if (cpuPowerMode.equalsIgnoreCase("LITE_POWER_RAND_LOW")) { - config.setPowerMode(PowerMode.LITE_POWER_RAND_LOW); - } else { - Log.e(TAG, "unknown cpu power mode!"); - return false; - } - paddlePredictor = PaddlePredictor.createPaddlePredictor(config); - - this.cpuThreadNum = cpuThreadNum; - this.cpuPowerMode = cpuPowerMode; - this.modelPath = realPath; - this.modelName = realPath.substring(realPath.lastIndexOf("/") + 1); - return true; - } - - public void releaseModel() { - paddlePredictor = null; - isLoaded = false; - cpuThreadNum = 1; - cpuPowerMode = "LITE_POWER_HIGH"; - modelPath = ""; - modelName = ""; - } - - protected boolean loadLabel(Context appCtx, String labelPath) { - wordLabels.clear(); - // Load word labels from file - try { - InputStream assetsInputStream = appCtx.getAssets().open(labelPath); - int available = assetsInputStream.available(); - byte[] lines = new byte[available]; - assetsInputStream.read(lines); - assetsInputStream.close(); - String words = new String(lines); - String[] contents = words.split("\n"); - for (String content : contents) { - wordLabels.add(content); - } - Log.i(TAG, "Word label size: " + wordLabels.size()); - } catch (Exception e) { - Log.e(TAG, e.getMessage()); - return false; - } - return true; - } - - public Tensor getInput(int idx) { - if (!isLoaded()) { - return null; - } - return paddlePredictor.getInput(idx); - } - - public Tensor getOutput(int idx) { - if (!isLoaded()) { - return null; - } - return paddlePredictor.getOutput(idx); - } - - public boolean runModel() { - if (inputImage == null || !isLoaded()) { - return false; - } - - // Set input shape - Tensor inputTensor = getInput(0); - 
inputTensor.resize(inputShape); - - // Pre-process image, and feed input tensor with pre-processed data - Date start = new Date(); - int channels = (int) inputShape[1]; - int width = (int) inputShape[3]; - int height = (int) inputShape[2]; - float[] inputData = new float[channels * width * height]; - if (channels == 3) { - int[] channelIdx = null; - if (inputColorFormat.equalsIgnoreCase("RGB")) { - channelIdx = new int[]{0, 1, 2}; - } else if (inputColorFormat.equalsIgnoreCase("BGR")) { - channelIdx = new int[]{2, 1, 0}; - } else { - Log.i(TAG, "Unknown color format " + inputColorFormat + ", only RGB and BGR color format is " + - "supported!"); - return false; - } - int[] channelStride = new int[]{width * height, width * height * 2}; - for (int y = 0; y < height; y++) { - for (int x = 0; x < width; x++) { - int color = inputImage.getPixel(x, y); - float[] rgb = new float[]{(float) red(color) / 255.0f, (float) green(color) / 255.0f, - (float) blue(color) / 255.0f}; - inputData[y * width + x] = (rgb[channelIdx[0]] - inputMean[0]) / inputStd[0]; - inputData[y * width + x + channelStride[0]] = (rgb[channelIdx[1]] - inputMean[1]) / inputStd[1]; - inputData[y * width + x + channelStride[1]] = (rgb[channelIdx[2]] - inputMean[2]) / inputStd[2]; - } - } - } else if (channels == 1) { - for (int y = 0; y < height; y++) { - for (int x = 0; x < width; x++) { - int color = inputImage.getPixel(x, y); - float gray = (float) (red(color) + green(color) + blue(color)) / 3.0f / 255.0f; - inputData[y * width + x] = (gray - inputMean[0]) / inputStd[0]; - } - } - } else { - Log.i(TAG, "Unsupported channel size " + Integer.toString(channels) + ", only channel 1 and 3 is " + - "supported!"); - return false; - } - inputTensor.setData(inputData); - Date end = new Date(); - preprocessTime = (float) (end.getTime() - start.getTime()); - - // Warm up - for (int i = 0; i < warmupIterNum; i++) { - paddlePredictor.run(); - } - // Run inference - start = new Date(); - for (int i = 0; i < inferIterNum; i++) { - paddlePredictor.run(); - } - end = new Date(); - inferenceTime = (end.getTime() - start.getTime()) / (float) inferIterNum; - - // Fetch output tensor - Tensor outputTensor = getOutput(0); - - // Post-process - start = new Date(); - long outputShape[] = outputTensor.shape(); - long outputSize = 1; - for (long s : outputShape) { - outputSize *= s; - } - outputImage = inputImage; - outputResult = new String(); - Canvas canvas = new Canvas(outputImage); - Paint rectPaint = new Paint(); - rectPaint.setStyle(Paint.Style.STROKE); - rectPaint.setStrokeWidth(1); - Paint txtPaint = new Paint(); - txtPaint.setTextSize(12); - txtPaint.setAntiAlias(true); - int txtXOffset = 4; - int txtYOffset = (int) (Math.ceil(-txtPaint.getFontMetrics().ascent)); - int imgWidth = outputImage.getWidth(); - int imgHeight = outputImage.getHeight(); - int objectIdx = 0; - final int[] objectColor = {0xFFFF00CC, 0xFFFF0000, 0xFFFFFF33, 0xFF0000FF, 0xFF00FF00, - 0xFF000000, 0xFF339933}; - for (int i = 0; i < outputSize; i += 6) { - float score = outputTensor.getFloatData()[i + 1]; - if (score < scoreThreshold) { - continue; - } - int categoryIdx = (int) outputTensor.getFloatData()[i]; - String categoryName = "Unknown"; - if (wordLabels.size() > 0 && categoryIdx >= 0 && categoryIdx < wordLabels.size()) { - categoryName = wordLabels.get(categoryIdx); - } - float rawLeft = outputTensor.getFloatData()[i + 2]; - float rawTop = outputTensor.getFloatData()[i + 3]; - float rawRight = outputTensor.getFloatData()[i + 4]; - float rawBottom = 
-            float clampedLeft = Math.max(Math.min(rawLeft, 1.f), 0.f);
-            float clampedTop = Math.max(Math.min(rawTop, 1.f), 0.f);
-            float clampedRight = Math.max(Math.min(rawRight, 1.f), 0.f);
-            float clampedBottom = Math.max(Math.min(rawBottom, 1.f), 0.f);
-            // Scale normalized coordinates back to pixels: x by the image width, y by the image height
-            float imgLeft = clampedLeft * imgWidth;
-            float imgTop = clampedTop * imgHeight;
-            float imgRight = clampedRight * imgWidth;
-            float imgBottom = clampedBottom * imgHeight;
-            int color = objectColor[objectIdx % objectColor.length];
-            rectPaint.setColor(color);
-            txtPaint.setColor(color);
-            canvas.drawRect(imgLeft, imgTop, imgRight, imgBottom, rectPaint);
-            canvas.drawText(objectIdx + "." + categoryName + ":" + String.format("%.3f", score),
-                    imgLeft + txtXOffset, imgTop + txtYOffset, txtPaint);
-            outputResult += objectIdx + "." + categoryName + " - " + String.format("%.3f", score) +
-                    " [" + String.format("%.3f", rawLeft) + "," + String.format("%.3f", rawTop) + "," +
-                    String.format("%.3f", rawRight) + "," + String.format("%.3f", rawBottom) + "]\n";
-            objectIdx++;
-        }
-        end = new Date();
-        postprocessTime = (float) (end.getTime() - start.getTime());
-        return true;
-    }
-
-    public boolean isLoaded() {
-        return paddlePredictor != null && isLoaded;
-    }
-
-    public String modelPath() {
-        return modelPath;
-    }
-
-    public String modelName() {
-        return modelName;
-    }
-
-    public int cpuThreadNum() {
-        return cpuThreadNum;
-    }
-
-    public String cpuPowerMode() {
-        return cpuPowerMode;
-    }
-
-    public float inferenceTime() {
-        return inferenceTime;
-    }
-
-    public Bitmap inputImage() {
-        return inputImage;
-    }
-
-    public Bitmap outputImage() {
-        return outputImage;
-    }
-
-    public String outputResult() {
-        return outputResult;
-    }
-
-    public float preprocessTime() {
-        return preprocessTime;
-    }
-
-    public float postprocessTime() {
-        return postprocessTime;
-    }
-
-    public void setInputImage(Bitmap image) {
-        if (image == null) {
-            return;
-        }
-        // Scale the image to the size of the input tensor
-        Bitmap rgbaImage = image.copy(Bitmap.Config.ARGB_8888, true);
-        Bitmap scaleImage = Bitmap.createScaledBitmap(rgbaImage, (int) inputShape[3], (int) inputShape[2], true);
-        this.inputImage = scaleImage;
-    }
-}
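Taken together, the deleted predictor class above is initialized once, then driven per image with setInputImage() and runModel(). A minimal sketch of a caller (hypothetical: the class name Predictor and the initializer's parameter order are inferred from the file's fields and assignments, ctx and bitmap are assumed to exist, and the default values mirror the demo's strings.xml below):

    Predictor predictor = new Predictor();
    boolean ok = predictor.init(ctx,
            "models/yolov3_mobilenet_v3_for_cpu",  // copied from assets into the cache dir
            "labels/coco-labels-2014_2017.txt",
            1,                                     // cpuThreadNum
            "LITE_POWER_HIGH",                     // cpuPowerMode
            "RGB",                                 // inputColorFormat
            new long[]{1, 3, 320, 320},            // NCHW inputShape
            new float[]{0.485f, 0.456f, 0.406f},   // inputMean
            new float[]{0.229f, 0.224f, 0.225f},   // inputStd
            0.2f);                                 // scoreThreshold
    if (ok) {
        predictor.setInputImage(bitmap);           // rescaled to the tensor size internally
        if (predictor.runModel()) {
            String detections = predictor.outputResult(); // e.g. "0.person - 0.953 [...]" lines
            Bitmap annotated = predictor.outputImage();   // boxes and labels drawn in
        }
    }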
diff --git a/static/deploy/android_demo/app/src/main/java/com/baidu/paddledetection/detection/SettingsActivity.java b/static/deploy/android_demo/app/src/main/java/com/baidu/paddledetection/detection/SettingsActivity.java
deleted file mode 100644
index f26cf05a05c5e34c2bb39d0e09164c07c862d720..0000000000000000000000000000000000000000
--- a/static/deploy/android_demo/app/src/main/java/com/baidu/paddledetection/detection/SettingsActivity.java
+++ /dev/null
@@ -1,272 +0,0 @@
-package com.baidu.paddledetection.detection;
-
-import android.content.Context;
-import android.content.SharedPreferences;
-import android.os.Bundle;
-import android.preference.EditTextPreference;
-import android.preference.ListPreference;
-import android.preference.PreferenceManager;
-import androidx.appcompat.app.ActionBar;
-
-import android.widget.Toast;
-
-import com.baidu.paddledetection.common.AppCompatPreferenceActivity;
-import com.baidu.paddledetection.common.Utils;
-import com.baidu.paddledetection.detection.R;
-
-import java.util.ArrayList;
-import java.util.List;
-
-public class SettingsActivity extends AppCompatPreferenceActivity implements SharedPreferences.OnSharedPreferenceChangeListener {
-    private static final String TAG = SettingsActivity.class.getSimpleName();
-
-    // Static snapshot of the current settings, polled by the rest of the app
-    static public int selectedModelIdx = -1;
-    static public String modelDir = "";
-    static public String labelPath = "";
-    static public int cpuThreadNum = 0;
-    static public String cpuPowerMode = "";
-    static public int inputWidth = 0;
-    static public int inputHeight = 0;
-    static public float[] inputMean = new float[]{};
-    static public float[] inputStd = new float[]{};
-    static public float scoreThreshold = 0.0f;
-
-    ListPreference lpChoosePreInstalledModel = null;
-    EditTextPreference etModelDir = null;
-    EditTextPreference etLabelPath = null;
-    ListPreference lpCPUThreadNum = null;
-    ListPreference lpCPUPowerMode = null;
-    EditTextPreference etInputWidth = null;
-    EditTextPreference etInputHeight = null;
-    EditTextPreference etInputMean = null;
-    EditTextPreference etInputStd = null;
-    EditTextPreference etScoreThreshold = null;
-
-    List<String> preInstalledModelDirs = null;
-    List<String> preInstalledLabelPaths = null;
-    List<String> preInstalledCPUThreadNums = null;
-    List<String> preInstalledCPUPowerModes = null;
-    List<String> preInstalledInputWidths = null;
-    List<String> preInstalledInputHeights = null;
-    List<String> preInstalledInputMeans = null;
-    List<String> preInstalledInputStds = null;
-    List<String> preInstalledScoreThresholds = null;
-
-    @Override
-    public void onCreate(Bundle savedInstanceState) {
-        super.onCreate(savedInstanceState);
-        addPreferencesFromResource(R.xml.settings);
-        ActionBar supportActionBar = getSupportActionBar();
-        if (supportActionBar != null) {
-            supportActionBar.setDisplayHomeAsUpEnabled(true);
-        }
-
-        // Initialize pre-installed models
-        preInstalledModelDirs = new ArrayList<String>();
-        preInstalledLabelPaths = new ArrayList<String>();
-        preInstalledCPUThreadNums = new ArrayList<String>();
-        preInstalledCPUPowerModes = new ArrayList<String>();
-        preInstalledInputWidths = new ArrayList<String>();
-        preInstalledInputHeights = new ArrayList<String>();
-        preInstalledInputMeans = new ArrayList<String>();
-        preInstalledInputStds = new ArrayList<String>();
-        preInstalledScoreThresholds = new ArrayList<String>();
-        preInstalledModelDirs.add(getString(R.string.MODEL_DIR_DEFAULT));
-        preInstalledLabelPaths.add(getString(R.string.LABEL_PATH_DEFAULT));
-        preInstalledCPUThreadNums.add(getString(R.string.CPU_THREAD_NUM_DEFAULT));
-        preInstalledCPUPowerModes.add(getString(R.string.CPU_POWER_MODE_DEFAULT));
-        preInstalledInputWidths.add(getString(R.string.INPUT_WIDTH_DEFAULT));
-        preInstalledInputHeights.add(getString(R.string.INPUT_HEIGHT_DEFAULT));
-        preInstalledInputMeans.add(getString(R.string.INPUT_MEAN_DEFAULT));
-        preInstalledInputStds.add(getString(R.string.INPUT_STD_DEFAULT));
-        preInstalledScoreThresholds.add(getString(R.string.SCORE_THRESHOLD_DEFAULT));
-        // Add yolov3_mobilenet_v3_for_hybrid_cpu_npu for CPU and Huawei NPU
-        if (Utils.isSupportedNPU()) {
-            preInstalledModelDirs.add("models/yolov3_mobilenet_v3_for_hybrid_cpu_npu");
-            preInstalledLabelPaths.add("labels/coco-labels-2014_2017.txt");
-            preInstalledCPUThreadNums.add("1"); // Unused for NPU
-            preInstalledCPUPowerModes.add("LITE_POWER_HIGH"); // Unused for NPU
-            preInstalledInputWidths.add("320");
-            preInstalledInputHeights.add("320");
-            preInstalledInputMeans.add("0.485,0.456,0.406");
-            preInstalledInputStds.add("0.229,0.224,0.225");
-            preInstalledScoreThresholds.add("0.2");
-        } else {
-            Toast.makeText(this, "NPU model is not supported by your device.", Toast.LENGTH_LONG).show();
-        }
-        // Set up UI components
-        lpChoosePreInstalledModel =
-                (ListPreference) findPreference(getString(R.string.CHOOSE_PRE_INSTALLED_MODEL_KEY));
-        String[] preInstalledModelNames = new String[preInstalledModelDirs.size()];
-        for (int i = 0; i < preInstalledModelDirs.size(); i++) {
-            preInstalledModelNames[i] = preInstalledModelDirs.get(i).substring(preInstalledModelDirs.get(i).lastIndexOf("/") + 1);
-        }
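        // The ListPreference pairs these short display names (the last path
        // component of each model dir) with the full asset paths set as entry
        // values just below.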
-        lpChoosePreInstalledModel.setEntries(preInstalledModelNames);
-        lpChoosePreInstalledModel.setEntryValues(preInstalledModelDirs.toArray(new String[preInstalledModelDirs.size()]));
-        lpCPUThreadNum = (ListPreference) findPreference(getString(R.string.CPU_THREAD_NUM_KEY));
-        lpCPUPowerMode = (ListPreference) findPreference(getString(R.string.CPU_POWER_MODE_KEY));
-        etModelDir = (EditTextPreference) findPreference(getString(R.string.MODEL_DIR_KEY));
-        etModelDir.setTitle("Model dir (SDCard: " + Utils.getSDCardDirectory() + ")");
-        etLabelPath = (EditTextPreference) findPreference(getString(R.string.LABEL_PATH_KEY));
-        etLabelPath.setTitle("Label path (SDCard: " + Utils.getSDCardDirectory() + ")");
-        etInputWidth = (EditTextPreference) findPreference(getString(R.string.INPUT_WIDTH_KEY));
-        etInputHeight = (EditTextPreference) findPreference(getString(R.string.INPUT_HEIGHT_KEY));
-        etInputMean = (EditTextPreference) findPreference(getString(R.string.INPUT_MEAN_KEY));
-        etInputStd = (EditTextPreference) findPreference(getString(R.string.INPUT_STD_KEY));
-        etScoreThreshold = (EditTextPreference) findPreference(getString(R.string.SCORE_THRESHOLD_KEY));
-    }
-
-    private void reloadSettingsAndUpdateUI() {
-        SharedPreferences sharedPreferences = getPreferenceScreen().getSharedPreferences();
-
-        String selected_model_dir = sharedPreferences.getString(getString(R.string.CHOOSE_PRE_INSTALLED_MODEL_KEY),
-                getString(R.string.MODEL_DIR_DEFAULT));
-        int selected_model_idx = lpChoosePreInstalledModel.findIndexOfValue(selected_model_dir);
-        if (selected_model_idx >= 0 && selected_model_idx < preInstalledModelDirs.size() && selected_model_idx != selectedModelIdx) {
-            SharedPreferences.Editor editor = sharedPreferences.edit();
-            editor.putString(getString(R.string.MODEL_DIR_KEY), preInstalledModelDirs.get(selected_model_idx));
-            editor.putString(getString(R.string.LABEL_PATH_KEY), preInstalledLabelPaths.get(selected_model_idx));
-            editor.putString(getString(R.string.CPU_THREAD_NUM_KEY), preInstalledCPUThreadNums.get(selected_model_idx));
-            editor.putString(getString(R.string.CPU_POWER_MODE_KEY), preInstalledCPUPowerModes.get(selected_model_idx));
-            editor.putString(getString(R.string.INPUT_WIDTH_KEY), preInstalledInputWidths.get(selected_model_idx));
-            editor.putString(getString(R.string.INPUT_HEIGHT_KEY), preInstalledInputHeights.get(selected_model_idx));
-            editor.putString(getString(R.string.INPUT_MEAN_KEY), preInstalledInputMeans.get(selected_model_idx));
-            editor.putString(getString(R.string.INPUT_STD_KEY), preInstalledInputStds.get(selected_model_idx));
-            editor.putString(getString(R.string.SCORE_THRESHOLD_KEY), preInstalledScoreThresholds.get(selected_model_idx));
-            editor.commit();
-            lpChoosePreInstalledModel.setSummary(selected_model_dir);
-            selectedModelIdx = selected_model_idx;
-        }
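        // Note: editor.commit() above persists the batched writes synchronously;
        // SharedPreferences.Editor.apply() is the asynchronous alternative. Either
        // way, the change listener registered in onResume() fires once per changed
        // key and re-runs reloadSettingsAndUpdateUI().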
-        String model_dir = sharedPreferences.getString(getString(R.string.MODEL_DIR_KEY),
-                getString(R.string.MODEL_DIR_DEFAULT));
-        String label_path = sharedPreferences.getString(getString(R.string.LABEL_PATH_KEY),
-                getString(R.string.LABEL_PATH_DEFAULT));
-        String cpu_thread_num = sharedPreferences.getString(getString(R.string.CPU_THREAD_NUM_KEY),
-                getString(R.string.CPU_THREAD_NUM_DEFAULT));
-        String cpu_power_mode = sharedPreferences.getString(getString(R.string.CPU_POWER_MODE_KEY),
-                getString(R.string.CPU_POWER_MODE_DEFAULT));
-        String input_width = sharedPreferences.getString(getString(R.string.INPUT_WIDTH_KEY),
-                getString(R.string.INPUT_WIDTH_DEFAULT));
-        String input_height = sharedPreferences.getString(getString(R.string.INPUT_HEIGHT_KEY),
-                getString(R.string.INPUT_HEIGHT_DEFAULT));
-        String input_mean = sharedPreferences.getString(getString(R.string.INPUT_MEAN_KEY),
-                getString(R.string.INPUT_MEAN_DEFAULT));
-        String input_std = sharedPreferences.getString(getString(R.string.INPUT_STD_KEY),
-                getString(R.string.INPUT_STD_DEFAULT));
-        String score_threshold = sharedPreferences.getString(getString(R.string.SCORE_THRESHOLD_KEY),
-                getString(R.string.SCORE_THRESHOLD_DEFAULT));
-
-        etModelDir.setSummary(model_dir);
-        etLabelPath.setSummary(label_path);
-        lpCPUThreadNum.setValue(cpu_thread_num);
-        lpCPUThreadNum.setSummary(cpu_thread_num);
-        lpCPUPowerMode.setValue(cpu_power_mode);
-        lpCPUPowerMode.setSummary(cpu_power_mode);
-        etInputWidth.setSummary(input_width);
-        etInputWidth.setText(input_width);
-        etInputHeight.setSummary(input_height);
-        etInputHeight.setText(input_height);
-        etInputMean.setSummary(input_mean);
-        etInputMean.setText(input_mean);
-        etInputStd.setSummary(input_std);
-        etInputStd.setText(input_std);
-        etScoreThreshold.setSummary(score_threshold);
-        etScoreThreshold.setText(score_threshold);
-    }
-
-    static boolean checkAndUpdateSettings(Context ctx) {
-        boolean settingsChanged = false;
-        SharedPreferences sharedPreferences = PreferenceManager.getDefaultSharedPreferences(ctx);
-
-        String model_dir = sharedPreferences.getString(ctx.getString(R.string.MODEL_DIR_KEY),
-                ctx.getString(R.string.MODEL_DIR_DEFAULT));
-        settingsChanged |= !modelDir.equalsIgnoreCase(model_dir);
-        modelDir = model_dir;
-
-        String label_path = sharedPreferences.getString(ctx.getString(R.string.LABEL_PATH_KEY),
-                ctx.getString(R.string.LABEL_PATH_DEFAULT));
-        settingsChanged |= !labelPath.equalsIgnoreCase(label_path);
-        labelPath = label_path;
-
-        String cpu_thread_num = sharedPreferences.getString(ctx.getString(R.string.CPU_THREAD_NUM_KEY),
-                ctx.getString(R.string.CPU_THREAD_NUM_DEFAULT));
-        settingsChanged |= cpuThreadNum != Integer.parseInt(cpu_thread_num);
-        cpuThreadNum = Integer.parseInt(cpu_thread_num);
-
-        String cpu_power_mode = sharedPreferences.getString(ctx.getString(R.string.CPU_POWER_MODE_KEY),
-                ctx.getString(R.string.CPU_POWER_MODE_DEFAULT));
-        settingsChanged |= !cpuPowerMode.equalsIgnoreCase(cpu_power_mode);
-        cpuPowerMode = cpu_power_mode;
-
-        String input_width = sharedPreferences.getString(ctx.getString(R.string.INPUT_WIDTH_KEY),
-                ctx.getString(R.string.INPUT_WIDTH_DEFAULT));
-        settingsChanged |= inputWidth != Integer.parseInt(input_width);
-        inputWidth = Integer.parseInt(input_width);
-
-        String input_height = sharedPreferences.getString(ctx.getString(R.string.INPUT_HEIGHT_KEY),
-                ctx.getString(R.string.INPUT_HEIGHT_DEFAULT));
-        settingsChanged |= inputHeight != Integer.parseInt(input_height);
-        inputHeight = Integer.parseInt(input_height);
-
-        String input_mean = sharedPreferences.getString(ctx.getString(R.string.INPUT_MEAN_KEY),
-                ctx.getString(R.string.INPUT_MEAN_DEFAULT));
-        float[] array_data = Utils.parseFloatsFromString(input_mean, ",");
-        settingsChanged |= array_data.length != inputMean.length;
-        if (!settingsChanged) {
-            for (int i = 0; i < array_data.length; i++) {
-                settingsChanged |= array_data[i] != inputMean[i];
-            }
-        }
-        inputMean = array_data;
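        // The length check plus the element-wise loop above can be collapsed into
        // a single standard-library call (a sketch; java.util.Arrays.equals
        // compares float[] contents, with the caveat that it treats NaN values as
        // equal where == does not):
        //
        //     settingsChanged |= !java.util.Arrays.equals(array_data, inputMean);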
-        String input_std = sharedPreferences.getString(ctx.getString(R.string.INPUT_STD_KEY),
-                ctx.getString(R.string.INPUT_STD_DEFAULT));
-        array_data = Utils.parseFloatsFromString(input_std, ",");
-        settingsChanged |= array_data.length != inputStd.length;
-        if (!settingsChanged) {
-            for (int i = 0; i < array_data.length; i++) {
-                settingsChanged |= array_data[i] != inputStd[i];
-            }
-        }
-        inputStd = array_data;
-
-        String score_threshold = sharedPreferences.getString(ctx.getString(R.string.SCORE_THRESHOLD_KEY),
-                ctx.getString(R.string.SCORE_THRESHOLD_DEFAULT));
-        settingsChanged |= scoreThreshold != Float.parseFloat(score_threshold);
-        scoreThreshold = Float.parseFloat(score_threshold);
-
-        return settingsChanged;
-    }
-
-    static void resetSettings() {
-        selectedModelIdx = -1;
-        modelDir = "";
-        labelPath = "";
-        cpuThreadNum = 0;
-        cpuPowerMode = "";
-        inputWidth = 0;
-        inputHeight = 0;
-        inputMean = new float[]{};
-        inputStd = new float[]{};
-        scoreThreshold = 0;
-    }
-
-    @Override
-    protected void onResume() {
-        super.onResume();
-        getPreferenceScreen().getSharedPreferences().registerOnSharedPreferenceChangeListener(this);
-        reloadSettingsAndUpdateUI();
-    }
-
-    @Override
-    protected void onPause() {
-        super.onPause();
-        getPreferenceScreen().getSharedPreferences().unregisterOnSharedPreferenceChangeListener(this);
-    }
-
-    @Override
-    public void onSharedPreferenceChanged(SharedPreferences sharedPreferences, String key) {
-        reloadSettingsAndUpdateUI();
-    }
-}
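Because SettingsActivity only mutates a static snapshot, consumers are expected to poll it; a plausible caller (hypothetical, not part of this diff) would check for changes before (re)creating the predictor:

    // e.g. in the main screen's onResume() (assumed caller, for illustration)
    if (SettingsActivity.checkAndUpdateSettings(this)) {
        // Some preference changed: rebuild the predictor from
        // SettingsActivity.modelDir, labelPath, inputWidth, inputHeight, ...
    }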
diff --git a/static/deploy/android_demo/app/src/main/res/drawable-v24/camera.png b/static/deploy/android_demo/app/src/main/res/drawable-v24/camera.png
deleted file mode 100644
index 6bbeb0a7514f2b0d3ed93b10d5e807416c691833..0000000000000000000000000000000000000000
Binary files a/static/deploy/android_demo/app/src/main/res/drawable-v24/camera.png and /dev/null differ
diff --git a/static/deploy/android_demo/app/src/main/res/drawable-v24/ic_launcher_foreground.xml b/static/deploy/android_demo/app/src/main/res/drawable-v24/ic_launcher_foreground.xml
deleted file mode 100644
index 1f6bb290603d7caa16c5fb6f61bbfdc750622f5c..0000000000000000000000000000000000000000
--- a/static/deploy/android_demo/app/src/main/res/drawable-v24/ic_launcher_foreground.xml
+++ /dev/null
@@ -1,34 +0,0 @@
diff --git a/static/deploy/android_demo/app/src/main/res/drawable-v24/photo.png b/static/deploy/android_demo/app/src/main/res/drawable-v24/photo.png
deleted file mode 100644
index 7a534189a9fb3ebd2e89e02dbd31144b391483af..0000000000000000000000000000000000000000
Binary files a/static/deploy/android_demo/app/src/main/res/drawable-v24/photo.png and /dev/null differ
diff --git a/static/deploy/android_demo/app/src/main/res/drawable-xxhdpi-v4/btn_switch_default.png b/static/deploy/android_demo/app/src/main/res/drawable-xxhdpi-v4/btn_switch_default.png
deleted file mode 100644
index b9e66c7f605dd5a02d13f04284a046810b292add..0000000000000000000000000000000000000000
Binary files a/static/deploy/android_demo/app/src/main/res/drawable-xxhdpi-v4/btn_switch_default.png and /dev/null differ
diff --git a/static/deploy/android_demo/app/src/main/res/drawable-xxhdpi-v4/btn_switch_pressed.png b/static/deploy/android_demo/app/src/main/res/drawable-xxhdpi-v4/btn_switch_pressed.png
deleted file mode 100644
index 9544133bdade8f57552f9ab22976be3172c95b86..0000000000000000000000000000000000000000
Binary files a/static/deploy/android_demo/app/src/main/res/drawable-xxhdpi-v4/btn_switch_pressed.png and /dev/null differ
diff --git a/static/deploy/android_demo/app/src/main/res/drawable-xxhdpi-v4/camera.png b/static/deploy/android_demo/app/src/main/res/drawable-xxhdpi-v4/camera.png
deleted file mode 100644
index 6bbeb0a7514f2b0d3ed93b10d5e807416c691833..0000000000000000000000000000000000000000
Binary files a/static/deploy/android_demo/app/src/main/res/drawable-xxhdpi-v4/camera.png and /dev/null differ
diff --git a/static/deploy/android_demo/app/src/main/res/drawable-xxhdpi-v4/photo.png b/static/deploy/android_demo/app/src/main/res/drawable-xxhdpi-v4/photo.png
deleted file mode 100644
index 7a534189a9fb3ebd2e89e02dbd31144b391483af..0000000000000000000000000000000000000000
Binary files a/static/deploy/android_demo/app/src/main/res/drawable-xxhdpi-v4/photo.png and /dev/null differ
diff --git a/static/deploy/android_demo/app/src/main/res/drawable/btn_settings.xml b/static/deploy/android_demo/app/src/main/res/drawable/btn_settings.xml
deleted file mode 100644
index 917897b99981d18082d18a87a4ad5176ad8e8f8d..0000000000000000000000000000000000000000
--- a/static/deploy/android_demo/app/src/main/res/drawable/btn_settings.xml
+++ /dev/null
@@ -1,6 +0,0 @@
diff --git a/static/deploy/android_demo/app/src/main/res/drawable/btn_settings_default.xml b/static/deploy/android_demo/app/src/main/res/drawable/btn_settings_default.xml
deleted file mode 100644
index e19589a97e419249eaacd05f3d75deeeada3e128..0000000000000000000000000000000000000000
--- a/static/deploy/android_demo/app/src/main/res/drawable/btn_settings_default.xml
+++ /dev/null
@@ -1,13 +0,0 @@
diff --git a/static/deploy/android_demo/app/src/main/res/drawable/btn_settings_pressed.xml b/static/deploy/android_demo/app/src/main/res/drawable/btn_settings_pressed.xml
deleted file mode 100644
index c4af2a042de3a8ae00ab253f889a20dedffa4874..0000000000000000000000000000000000000000
--- a/static/deploy/android_demo/app/src/main/res/drawable/btn_settings_pressed.xml
+++ /dev/null
@@ -1,13 +0,0 @@
diff --git a/static/deploy/android_demo/app/src/main/res/drawable/btn_shutter.xml b/static/deploy/android_demo/app/src/main/res/drawable/btn_shutter.xml
deleted file mode 100644
index 4f9826d3ae340b54046a48e4250a9d7e0b9d9139..0000000000000000000000000000000000000000
--- a/static/deploy/android_demo/app/src/main/res/drawable/btn_shutter.xml
+++ /dev/null
@@ -1,5 +0,0 @@
diff --git a/static/deploy/android_demo/app/src/main/res/drawable/btn_shutter_default.xml b/static/deploy/android_demo/app/src/main/res/drawable/btn_shutter_default.xml
deleted file mode 100644
index 234ca014a76b9647959814fa28e0c02324a8d814..0000000000000000000000000000000000000000
--- a/static/deploy/android_demo/app/src/main/res/drawable/btn_shutter_default.xml
+++ /dev/null
@@ -1,17 +0,0 @@
diff --git a/static/deploy/android_demo/app/src/main/res/drawable/btn_shutter_pressed.xml b/static/deploy/android_demo/app/src/main/res/drawable/btn_shutter_pressed.xml
deleted file mode 100644
index accc7acedb91cc4fb8171d78eeba24eaa6b0c2db..0000000000000000000000000000000000000000
--- a/static/deploy/android_demo/app/src/main/res/drawable/btn_shutter_pressed.xml
+++ /dev/null
@@ -1,17 +0,0 @@
diff --git a/static/deploy/android_demo/app/src/main/res/drawable/btn_switch.xml b/static/deploy/android_demo/app/src/main/res/drawable/btn_switch.xml
deleted file mode 100644
index 691e8c2e97d7a65d580e4d12d6b77608083b5617..0000000000000000000000000000000000000000
--- a/static/deploy/android_demo/app/src/main/res/drawable/btn_switch.xml
+++ /dev/null
@@ -1,5 +0,0 @@
diff --git a/static/deploy/android_demo/app/src/main/res/drawable/camera.png b/static/deploy/android_demo/app/src/main/res/drawable/camera.png
deleted file mode 100644
index 6bbeb0a7514f2b0d3ed93b10d5e807416c691833..0000000000000000000000000000000000000000
Binary files a/static/deploy/android_demo/app/src/main/res/drawable/camera.png and /dev/null differ
diff --git a/static/deploy/android_demo/app/src/main/res/drawable/ic_launcher_background.xml b/static/deploy/android_demo/app/src/main/res/drawable/ic_launcher_background.xml
deleted file mode 100644
index 0d025f9bf6b67c63044a36a9ff44fbc69e5c5822..0000000000000000000000000000000000000000
--- a/static/deploy/android_demo/app/src/main/res/drawable/ic_launcher_background.xml
+++ /dev/null
@@ -1,170 +0,0 @@
diff --git a/static/deploy/android_demo/app/src/main/res/drawable/photo.png b/static/deploy/android_demo/app/src/main/res/drawable/photo.png
deleted file mode 100644
index 7a534189a9fb3ebd2e89e02dbd31144b391483af..0000000000000000000000000000000000000000
Binary files a/static/deploy/android_demo/app/src/main/res/drawable/photo.png and /dev/null differ
diff --git a/static/deploy/android_demo/app/src/main/res/drawable/photo1.png b/static/deploy/android_demo/app/src/main/res/drawable/photo1.png
deleted file mode 100644
index 41ebaaab61702b751f0243455ca5cc1b6d6e8700..0000000000000000000000000000000000000000
Binary files a/static/deploy/android_demo/app/src/main/res/drawable/photo1.png and /dev/null differ
diff --git a/static/deploy/android_demo/app/src/main/res/layout-land/fragment_camera.xml b/static/deploy/android_demo/app/src/main/res/layout-land/fragment_camera.xml
deleted file mode 100644
index ef3da245f2e3169441fc3980de25e2889b558f00..0000000000000000000000000000000000000000
--- a/static/deploy/android_demo/app/src/main/res/layout-land/fragment_camera.xml
+++ /dev/null
@@ -1,99 +0,0 @@
\ No newline at end of file
diff --git a/static/deploy/android_demo/app/src/main/res/layout/activity_main.xml b/static/deploy/android_demo/app/src/main/res/layout/activity_main.xml
deleted file mode 100644
index 9c96440bc2b1bc566214b74b6fac1246bdec8655..0000000000000000000000000000000000000000
--- a/static/deploy/android_demo/app/src/main/res/layout/activity_main.xml
+++ /dev/null
@@ -1,25 +0,0 @@
\ No newline at end of file
diff --git a/static/deploy/android_demo/app/src/main/res/layout/content_main.xml b/static/deploy/android_demo/app/src/main/res/layout/content_main.xml
deleted file mode 100644
index c285383882907af11efef688be1a7cf96541d2c3..0000000000000000000000000000000000000000
--- a/static/deploy/android_demo/app/src/main/res/layout/content_main.xml
+++ /dev/null
@@ -1,20 +0,0 @@
\ No newline at end of file
diff --git a/static/deploy/android_demo/app/src/main/res/layout/fragment_camera.xml b/static/deploy/android_demo/app/src/main/res/layout/fragment_camera.xml
deleted file mode 100644
index e1e7f41a94ffdbfa2bcb72b603da9eb13a6a852f..0000000000000000000000000000000000000000
--- a/static/deploy/android_demo/app/src/main/res/layout/fragment_camera.xml
+++ /dev/null
@@ -1,98 +0,0 @@
\ No newline at end of file
diff --git a/static/deploy/android_demo/app/src/main/res/layout/fragment_content.xml b/static/deploy/android_demo/app/src/main/res/layout/fragment_content.xml
deleted file mode 100644
index 3534e92acb774f3c128e8c38d9e4c929498f5450..0000000000000000000000000000000000000000
--- a/static/deploy/android_demo/app/src/main/res/layout/fragment_content.xml
+++ /dev/null
@@ -1,37 +0,0 @@
\ No newline at end of file
diff --git a/static/deploy/android_demo/app/src/main/res/layout/fragment_photo.xml b/static/deploy/android_demo/app/src/main/res/layout/fragment_photo.xml
deleted file mode 100644
index 04871b37d59625762ee84b5befdd9cdab84de96a..0000000000000000000000000000000000000000
--- a/static/deploy/android_demo/app/src/main/res/layout/fragment_photo.xml
+++ /dev/null
@@ -1,121 +0,0 @@
\ No newline at end of file
diff --git a/static/deploy/android_demo/app/src/main/res/menu/menu_main.xml b/static/deploy/android_demo/app/src/main/res/menu/menu_main.xml
deleted file mode 100644
index 3a711c72f24fe60a993a45538df653a050d749b1..0000000000000000000000000000000000000000
--- a/static/deploy/android_demo/app/src/main/res/menu/menu_main.xml
+++ /dev/null
@@ -1,10 +0,0 @@
\ No newline at end of file
diff --git a/static/deploy/android_demo/app/src/main/res/mipmap-anydpi-v26/ic_launcher.xml b/static/deploy/android_demo/app/src/main/res/mipmap-anydpi-v26/ic_launcher.xml
deleted file mode 100644
index eca70cfe52eac1ba66ba280a68ca7be8fcf88a16..0000000000000000000000000000000000000000
--- a/static/deploy/android_demo/app/src/main/res/mipmap-anydpi-v26/ic_launcher.xml
+++ /dev/null
@@ -1,5 +0,0 @@
\ No newline at end of file
diff --git a/static/deploy/android_demo/app/src/main/res/mipmap-anydpi-v26/ic_launcher_round.xml b/static/deploy/android_demo/app/src/main/res/mipmap-anydpi-v26/ic_launcher_round.xml
deleted file mode 100644
index eca70cfe52eac1ba66ba280a68ca7be8fcf88a16..0000000000000000000000000000000000000000
--- a/static/deploy/android_demo/app/src/main/res/mipmap-anydpi-v26/ic_launcher_round.xml
+++ /dev/null
@@ -1,5 +0,0 @@
\ No newline at end of file
diff --git a/static/deploy/android_demo/app/src/main/res/mipmap-hdpi/ic_launcher.png b/static/deploy/android_demo/app/src/main/res/mipmap-hdpi/ic_launcher.png
deleted file mode 100644
index 898f3ed59ac9f3248734a00e5902736c9367d455..0000000000000000000000000000000000000000
Binary files a/static/deploy/android_demo/app/src/main/res/mipmap-hdpi/ic_launcher.png and /dev/null differ
diff --git a/static/deploy/android_demo/app/src/main/res/mipmap-hdpi/ic_launcher_round.png b/static/deploy/android_demo/app/src/main/res/mipmap-hdpi/ic_launcher_round.png
deleted file mode 100644
index dffca3601eba7bf5f409bdd520820e2eb5122c75..0000000000000000000000000000000000000000
Binary files a/static/deploy/android_demo/app/src/main/res/mipmap-hdpi/ic_launcher_round.png and /dev/null differ
diff --git a/static/deploy/android_demo/app/src/main/res/mipmap-mdpi/ic_launcher.png b/static/deploy/android_demo/app/src/main/res/mipmap-mdpi/ic_launcher.png
deleted file mode 100644
index 64ba76f75e9ce021aa3d95c213491f73bcacb597..0000000000000000000000000000000000000000
Binary files a/static/deploy/android_demo/app/src/main/res/mipmap-mdpi/ic_launcher.png and /dev/null differ
diff --git a/static/deploy/android_demo/app/src/main/res/mipmap-mdpi/ic_launcher_round.png b/static/deploy/android_demo/app/src/main/res/mipmap-mdpi/ic_launcher_round.png
deleted file mode 100644
index dae5e082342fcdeee5db8a6e0b27028e2d2808f5..0000000000000000000000000000000000000000
Binary files a/static/deploy/android_demo/app/src/main/res/mipmap-mdpi/ic_launcher_round.png and /dev/null differ
diff --git a/static/deploy/android_demo/app/src/main/res/mipmap-xhdpi/ic_launcher.png b/static/deploy/android_demo/app/src/main/res/mipmap-xhdpi/ic_launcher.png
deleted file mode 100644
index e5ed46597ea8447d91ab1786a34e30f1c26b18bd..0000000000000000000000000000000000000000
Binary files a/static/deploy/android_demo/app/src/main/res/mipmap-xhdpi/ic_launcher.png and /dev/null differ
diff --git a/static/deploy/android_demo/app/src/main/res/mipmap-xhdpi/ic_launcher_round.png b/static/deploy/android_demo/app/src/main/res/mipmap-xhdpi/ic_launcher_round.png
deleted file mode 100644
index 14ed0af35023e4f1901cf03487b6c524257b8483..0000000000000000000000000000000000000000
Binary files a/static/deploy/android_demo/app/src/main/res/mipmap-xhdpi/ic_launcher_round.png and /dev/null differ
diff --git a/static/deploy/android_demo/app/src/main/res/mipmap-xxhdpi/ic_launcher.png b/static/deploy/android_demo/app/src/main/res/mipmap-xxhdpi/ic_launcher.png
deleted file mode 100644
index b0907cac3bfd8fbfdc46e1108247f0a1055387ec..0000000000000000000000000000000000000000
Binary files a/static/deploy/android_demo/app/src/main/res/mipmap-xxhdpi/ic_launcher.png and /dev/null differ
diff --git a/static/deploy/android_demo/app/src/main/res/mipmap-xxhdpi/ic_launcher_round.png b/static/deploy/android_demo/app/src/main/res/mipmap-xxhdpi/ic_launcher_round.png
deleted file mode 100644
index d8ae03154975f397f8ed1b84f2d4bf9783ecfa26..0000000000000000000000000000000000000000
Binary files a/static/deploy/android_demo/app/src/main/res/mipmap-xxhdpi/ic_launcher_round.png and /dev/null differ
diff --git a/static/deploy/android_demo/app/src/main/res/mipmap-xxxhdpi/ic_launcher.png b/static/deploy/android_demo/app/src/main/res/mipmap-xxxhdpi/ic_launcher.png
deleted file mode 100644
index 2c18de9e66108411737e910f5c1972476f03ddbf..0000000000000000000000000000000000000000
Binary files a/static/deploy/android_demo/app/src/main/res/mipmap-xxxhdpi/ic_launcher.png and /dev/null differ
diff --git a/static/deploy/android_demo/app/src/main/res/mipmap-xxxhdpi/ic_launcher_round.png b/static/deploy/android_demo/app/src/main/res/mipmap-xxxhdpi/ic_launcher_round.png
deleted file mode 100644
index beed3cdd2c32af5114a7dc70b9ef5b698eb8797e..0000000000000000000000000000000000000000
Binary files a/static/deploy/android_demo/app/src/main/res/mipmap-xxxhdpi/ic_launcher_round.png and /dev/null differ
diff --git a/static/deploy/android_demo/app/src/main/res/navigation/nav_graph.xml b/static/deploy/android_demo/app/src/main/res/navigation/nav_graph.xml
deleted file mode 100644
index ab0abc2506c8f6a80ed19f63b0cac580acb9e8fa..0000000000000000000000000000000000000000
--- a/static/deploy/android_demo/app/src/main/res/navigation/nav_graph.xml
+++ /dev/null
@@ -1,34 +0,0 @@
\ No newline at end of file
diff --git a/static/deploy/android_demo/app/src/main/res/values/arrays.xml b/static/deploy/android_demo/app/src/main/res/values/arrays.xml
deleted file mode 100644
index 8c99734d1485615198f4ce3dd4e905305f7f0ce4..0000000000000000000000000000000000000000
--- a/static/deploy/android_demo/app/src/main/res/values/arrays.xml
+++ /dev/null
@@ -1,31 +0,0 @@
-    1 threads
-    2 threads
-    4 threads
-    8 threads
-    1
-    2
-    4
-    8
-    HIGH(only big cores)
-    LOW(only LITTLE cores)
-    FULL(all cores)
-    NO_BIND(depends on system)
-    RAND_HIGH
-    RAND_LOW
-    LITE_POWER_HIGH
-    LITE_POWER_LOW
-    LITE_POWER_FULL
-    LITE_POWER_NO_BIND
-    LITE_POWER_RAND_HIGH
-    LITE_POWER_RAND_LOW
\ No newline at end of file
diff --git a/static/deploy/android_demo/app/src/main/res/values/colors.xml b/static/deploy/android_demo/app/src/main/res/values/colors.xml
deleted file mode 100644
index 1fdccc1e88c3cb44fbd1526cdea829ebe413802b..0000000000000000000000000000000000000000
--- a/static/deploy/android_demo/app/src/main/res/values/colors.xml
+++ /dev/null
@@ -1,10 +0,0 @@
-    #6200EE
-    #3700B3
-    #1E90FF
-    #FF000000
-    #00000000
-    #00000000
-    #FFFFFFFF
diff --git a/static/deploy/android_demo/app/src/main/res/values/dimens.xml b/static/deploy/android_demo/app/src/main/res/values/dimens.xml
deleted file mode 100644
index 377274d30426f4d5e64e7d16383d581084b65a74..0000000000000000000000000000000000000000
--- a/static/deploy/android_demo/app/src/main/res/values/dimens.xml
+++ /dev/null
@@ -1,18 +0,0 @@
-    26dp
-    36dp
-    34dp
-    60dp
-    16dp
-    67dp
-    67dp
-    56dp
-    56dp
-    46dp
-    46dp
-    32dp
-    24dp
-    16dp
-    16dp
diff --git a/static/deploy/android_demo/app/src/main/res/values/strings.xml b/static/deploy/android_demo/app/src/main/res/values/strings.xml
deleted file mode 100644
index 1e2ee16e8bb482e9dfe79599315c30560043f69c..0000000000000000000000000000000000000000
--- a/static/deploy/android_demo/app/src/main/res/values/strings.xml
+++ /dev/null
@@ -1,37 +0,0 @@
-    PaddleDetection
-    models/yolov3_mobilenet_v3_for_cpu
-    labels/coco-labels-2014_2017.txt
-    1
-    LITE_POWER_HIGH
-    320
-    320
-    0.485,0.456,0.406
-    0.229,0.224,0.225
-    0.2
-    First Fragment
-    Second Fragment
-    New features coming soon
-    models/ssdlite_mobilenet_v3_large_for_cpu_nb
-    labels/coco-labels-background.txt
-    images/home.jpg
-    1,3,320,320
-    RGB
-    0.5
-    CHOOSE_INSTALLED_MODEL_KEY
-    MODEL_DIR_KEY
-    LABEL_PATH_KEY
-    CPU_THREAD_NUM_KEY
-    CPU_POWER_MODE_KEY
-    INPUT_WIDTH_KEY
-    INPUT_HEIGHT_KEY
-    INPUT_MEAN_KEY
-    INPUT_STD_KEY
-    SCORE_THRESHOLD_KEY
diff --git a/static/deploy/android_demo/app/src/main/res/values/styles.xml b/static/deploy/android_demo/app/src/main/res/values/styles.xml
deleted file mode 100644
index 853262016a2ffab26185bf9f3dcd59e10605630a..0000000000000000000000000000000000000000
--- a/static/deploy/android_demo/app/src/main/res/values/styles.xml
+++ /dev/null
@@ -1,25 +0,0 @@