# Down-Recognition
**Repository Path**: liuhunck/down-recognition
## Basic Information
- **Project Name**: Down-Recognition
- **Description**: A project for HNU SIT; the body is forked from a GitHub repository. What I added to the project is the camera API from YingShi.
- **Primary Language**: Python
- **License**: MIT
- **Default Branch**: master
- **Homepage**: None
- **GVP Project**: No
## Statistics
- **Stars**: 0
- **Forks**: 0
- **Created**: 2024-03-12
- **Last Updated**: 2024-03-30
## Categories & Tags
**Categories**: Uncategorized
**Tags**: None
## README
Human Falling Detection and Tracking
Uses Tiny-YOLO (one-class) to detect each person in the frame, [AlphaPose](https://github.com/MVIG-SJTU/AlphaPose) to extract the skeleton pose, and an [ST-GCN](https://github.com/yysijie/st-gcn) model to predict the action from every 30 frames of each person's track.
It currently supports 7 actions: Standing, Walking, Sitting, Lying Down, Stand Up, Sit Down, Fall Down.
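The core mechanism is a per-track sliding window: each tracked person accumulates skeleton poses, and once 30 frames are buffered the window is passed to the action classifier. Below is a minimal sketch of that buffering, with a placeholder standing in for the real ST-GCN model (the function and variable names are illustrative, not the repository's actual API):
```
from collections import defaultdict, deque

WINDOW = 30  # ST-GCN consumes 30-frame skeleton sequences per person

# One fixed-length buffer per track id; old frames fall off
# automatically once the deque is full.
track_buffers = defaultdict(lambda: deque(maxlen=WINDOW))

def classify_action(skeleton_window):
    """Placeholder for the ST-GCN model; returns a dummy label."""
    return "Standing"

def on_new_frame(detections):
    """detections: iterable of (track_id, skeleton) pairs for one frame,
    where a skeleton is e.g. a list of (x, y, score) keypoints."""
    for track_id, skeleton in detections:
        buf = track_buffers[track_id]
        buf.append(skeleton)
        if len(buf) == WINDOW:  # enough history to classify
            action = classify_action(list(buf))
            print(f"track {track_id}: {action}")

# Example: feed 30 synthetic frames for a single tracked person.
for _ in range(WINDOW):
    on_new_frame([(1, [(0.0, 0.0, 1.0)] * 13)])  # 13 COCO-style keypoints
```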
## Prerequisites
- Python > 3.6
- PyTorch > 1.3.1
Originally tested on: NVIDIA Orin NX Developer Kit, CUDA 11.4
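A quick way to confirm an environment meets these requirements, using only standard `sys` and `torch` attributes:
```
import sys

import torch

# Fail early if the interpreter is too old for this project (README asks
# for Python > 3.6), then report the PyTorch and CUDA setup.
assert sys.version_info >= (3, 6), f"Python > 3.6 required, got {sys.version}"
print("Python:", sys.version.split()[0])
print("PyTorch:", torch.__version__)
print("CUDA available:", torch.cuda.is_available())
if torch.cuda.is_available():
    print("CUDA version:", torch.version.cuda)
```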
## Data
This project trained a new Tiny-YOLO one-class model to detect only person objects and to reduce model size. It was trained on the rotation-augmented [COCO](http://cocodataset.org/#home) person-keypoints dataset for more robust person detection over a wide range of pose angles.
For action recognition, data from the [Le2i](http://le2i.cnrs.fr/Fall-detection-Dataset?lang=fr) Fall Detection Dataset (Coffee room, Home) was used: skeleton poses were extracted with AlphaPose, and each action frame was labeled by hand to train the ST-GCN model.
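As an illustration of how per-frame hand labels can be turned into fixed-length training samples for a sequence model, here is a sketch under the assumption of one label per frame; this is not the repository's actual preprocessing code:
```
import numpy as np

# The 7 action classes named above, mapped to training indices.
ACTIONS = ["Standing", "Walking", "Sitting", "Lying Down",
           "Stand Up", "Sit Down", "Fall Down"]
LABEL_TO_ID = {name: i for i, name in enumerate(ACTIONS)}

def make_windows(skeletons, frame_labels, window=30, stride=15):
    """Slice a labeled pose sequence into (window, label_id) pairs.

    skeletons:    array of shape (T, K, C) - T frames, K joints, C channels
    frame_labels: list of T per-frame action names (hand-labeled)
    Each window's label is decided by majority vote over its frames.
    """
    samples = []
    for start in range(0, len(skeletons) - window + 1, stride):
        chunk = skeletons[start:start + window]
        labels = frame_labels[start:start + window]
        majority = max(set(labels), key=labels.count)
        samples.append((chunk, LABEL_TO_ID[majority]))
    return samples

# Example: 60 synthetic frames of 13 joints (x, y, score), all "Walking".
seq = np.zeros((60, 13, 3), dtype=np.float32)
pairs = make_windows(seq, ["Walking"] * 60)
print(len(pairs), "windows, first label id:", pairs[0][1])
```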
## Pre-Trained Models
- Tiny-YOLO oneclass - [.pth](https://drive.google.com/file/d/1obEbWBSm9bXeg10FriJ7R2cGLRsg-AfP/view?usp=sharing),
[.cfg](https://drive.google.com/file/d/19sPzBZjAjuJQ3emRteHybm2SG25w9Wn5/view?usp=sharing)
- SPPE FastPose (AlphaPose) - [resnet101](https://drive.google.com/file/d/1N2MgE1Esq6CKYA6FyZVKpPwHRyOCrzA0/view?usp=sharing),
[resnet50](https://drive.google.com/file/d/1IPfCDRwCmQDnQy94nT1V-_NVtTEi4VmU/view?usp=sharing)
- ST-GCN action recognition - [tsstg](https://drive.google.com/file/d/1mQQ4JHe58ylKbBqTjuKzpwN2nwKOWJ9u/view?usp=sharing)
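Once downloaded, these weights are ordinary PyTorch checkpoints, so they can be inspected before wiring them into the code. A minimal sketch (the filename under ./Models is hypothetical; use whatever name your downloaded file has):
```
import torch

# Hypothetical filename; substitute the actual downloaded checkpoint.
state = torch.load("Models/tsstg-model.pth", map_location="cpu")

# A .pth file may hold a bare state_dict or a wrapper dict; peek at keys.
if isinstance(state, dict):
    for key in list(state)[:5]:
        print(key)
```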
## Basic Use
1. Download all pre-trained models into the ./Models folder.
2. Run main.py:
```
python main.py ${video file or camera source or ./config/video.yaml}
```
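The positional argument can be a video file path, a camera source, or the YAML config. The sketch below shows one common way to resolve such an argument with OpenCV; it is illustrative only, not main.py's actual logic (a stream URL, e.g. one obtained via the YingShi camera API, would be passed the same way as a file path):
```
import sys

import cv2

# Treat a bare integer as a local camera index; anything else (a file
# path or a stream URL) is opened directly by OpenCV.
source = sys.argv[1] if len(sys.argv) > 1 else "0"
cap = cv2.VideoCapture(int(source) if source.isdigit() else source)
if not cap.isOpened():
    sys.exit(f"Cannot open source: {source}")
ok, frame = cap.read()
print("First frame shape:", frame.shape if ok else None)
cap.release()
```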
## Reference
- AlphaPose : https://github.com/Amanbhandula/AlphaPose
- ST-GCN : https://github.com/yysijie/st-gcn