Head posture detection

This example demonstrates how to run 2-stage inference with the DepthAI library. It estimates the head pose of every face detected in the frame. The demo uses the face-detection-retail-0004 model to detect faces, crops them on the device using a Script node, and then sends the face frames to the head-pose-estimation-adas-0001 model, which estimates the head pose (yaw, pitch, roll).
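
The head-pose-estimation-adas-0001 model reports each angle on its own output layer. The helper below is a minimal sketch of how those layers might be read from a DepthAI `NNData` result on the host; the layer names follow the OpenVINO model documentation, but the function itself is hypothetical and not taken from the demo's code.

```python
import depthai as dai

def decode_head_pose(nn_data: dai.NNData):
    """Sketch: extract (yaw, pitch, roll) in degrees from a
    head-pose-estimation-adas-0001 result. Layer names are taken from the
    OpenVINO model docs; the demo's own parsing may differ."""
    yaw = nn_data.getLayerFp16("angle_y_fc")[0]
    pitch = nn_data.getLayerFp16("angle_p_fc")[0]
    roll = nn_data.getLayerFp16("angle_r_fc")[0]
    return yaw, pitch, roll
```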

Demo

Head pose estimation

How it works

  1. The color camera produces high-res frames and sends them to the host, to the Script node, and to the downscale ImageManip node
  2. The downscale ImageManip node scales the high-res frame down to 300x300, the input size required by the 1st NN in this pipeline, the face detection model
  3. The 300x300 frames are sent from the downscale ImageManip node to the face detection model (MobileNetSpatialDetectionNetwork)
  4. Face detections are sent to the Script node
  5. The Script node first syncs the detections message with its frame. It then iterates over all detections and creates an ImageManipConfig for each detected face. These configs are sent to the crop ImageManip node together with the synced high-res frame
  6. The crop ImageManip node crops only the face out of the original frame and resizes the face frame to 60x60, the input size required by the head pose estimation NN model
  7. Face frames are sent to the 2nd NN, the head pose estimation model, whose results are sent back to the host
  8. Frames, face detections, and head pose results are all synced on the host side and then displayed to the user (a minimal pipeline sketch follows this list)
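
The sketch below mirrors the steps above as a DepthAI pipeline. Node choices, stream names, input/output names of the Script node, and blob paths are assumptions for illustration, not the demo's actual code; in particular it uses a plain MobileNetDetectionNetwork (the spatial variant named above would also need a depth input), and frame/detection syncing inside the Script node is only hinted at.

```python
import depthai as dai

pipeline = dai.Pipeline()

# 1. Color camera produces high-res frames
cam = pipeline.create(dai.node.ColorCamera)
cam.setPreviewSize(1080, 1080)
cam.setInterleaved(False)

# 2. Downscale ImageManip: high-res -> 300x300 for the face detector
downscale = pipeline.create(dai.node.ImageManip)
downscale.initialConfig.setResize(300, 300)
cam.preview.link(downscale.inputImage)

# 3. Face detection NN (face-detection-retail-0004); blob path is a placeholder
face_det = pipeline.create(dai.node.MobileNetDetectionNetwork)
face_det.setConfidenceThreshold(0.5)
face_det.setBlobPath("models/face-detection-retail-0004.blob")
downscale.out.link(face_det.input)

# 4./5. Script node syncs detections with frames and emits one crop config per face
script = pipeline.create(dai.node.Script)
cam.preview.link(script.inputs["frame"])
face_det.out.link(script.inputs["detections"])
script.setScript("""
# Runs on the device. Sequence-number syncing is omitted for brevity;
# the real demo handles it explicitly.
while True:
    frame = node.io['frame'].get()
    dets = node.io['detections'].get()
    for det in dets.detections:
        cfg = ImageManipConfig()
        cfg.setCropRect(det.xmin, det.ymin, det.xmax, det.ymax)
        cfg.setResize(60, 60)
        node.io['manip_cfg'].send(cfg)
        node.io['manip_img'].send(frame)
""")

# 6. Crop ImageManip: crops the face and resizes it to 60x60
crop = pipeline.create(dai.node.ImageManip)
crop.initialConfig.setResize(60, 60)
script.outputs["manip_cfg"].link(crop.inputConfig)
script.outputs["manip_img"].link(crop.inputImage)

# 7. Head pose NN (head-pose-estimation-adas-0001); results go back to the host
headpose = pipeline.create(dai.node.NeuralNetwork)
headpose.setBlobPath("models/head-pose-estimation-adas-0001.blob")
crop.out.link(headpose.input)

# 8. Stream head pose results to the host ("headpose" is an assumed stream name)
xout = pipeline.create(dai.node.XLinkOut)
xout.setStreamName("headpose")
headpose.out.link(xout.input)
```

On the host side, results would then be read from the "headpose" output queue and decoded per face, for example with the decode_head_pose helper sketched earlier.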

2-stage NN pipeline graph


DepthAI Pipeline Graph was used to generate this image.

Pre-requisites

python3 -m pip install -r requirements.txt

Usage

Run python3 main.py
