# YOLOv7-Pose-Bytetrack-STGCN

**Repository Path**: nicky208/YOLOv7-Pose-Bytetrack-STGCN

## Basic Information

- **Project Name**: YOLOv7-Pose-Bytetrack-STGCN
- **Description**: No description available
- **Primary Language**: Unknown
- **License**: Not specified
- **Default Branch**: main
- **Homepage**: None
- **GVP Project**: No

## Statistics

- **Stars**: 0
- **Forks**: 0
- **Created**: 2024-06-30
- **Last Updated**: 2024-06-30

## Categories & Tags

**Categories**: Uncategorized
**Tags**: None

## README

# YOLOv7-Pose-Bytetrack-STGCN

YOLOv7-Pose is used for keypoint detection, Bytetrack for tracking, and STGCN for recognizing falls and other behaviors.

For keypoint detection only, run:

```
python detect.py --weights "yolov7-w6-pose.pt" --kpt-label --view-img
```

For keypoint detection + Bytetrack tracking, run:

```
python detect_track.py --weights "yolov7-w6-pose.pt" --kpt-label --view-img
```

For keypoint detection + Bytetrack + STGCN behavior recognition, run:

```
python detect_track_stgcn.py --weights "yolov7-w6-pose.pt" --kpt-label --view-img
```

YOLO-Pose: [https://github.com/Bigtuo/YOLO-POSE-Bytetrack-STGCN](https://github.com/Bigtuo/YOLO-POSE-Bytetrack-STGCN)

# yolov7-pose

Implementation of "YOLOv7: Trainable bag-of-freebies sets new state-of-the-art for real-time object detectors".

Pose estimation implementation is based on [YOLO-Pose](https://arxiv.org/abs/2204.06806).
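The full pipeline above passes per-track keypoint sequences from the detector and tracker into the ST-GCN action classifier. A minimal, hypothetical sketch of that buffering step (the class name `TrackKeypointBuffer` and the sequence length are illustrative, not taken from this repository's code):

```python
from collections import defaultdict, deque
import numpy as np

# Sketch of the data flow between the three stages: YOLOv7-Pose yields
# 17 COCO keypoints per person, ByteTrack assigns a persistent track ID,
# and ST-GCN consumes a fixed-length sequence of keypoints per track,
# shaped (T frames, V joints, C channels).

SEQ_LEN = 30      # frames fed to ST-GCN per prediction (assumed value)
NUM_JOINTS = 17   # COCO keypoint layout used by yolov7-w6-pose
CHANNELS = 3      # (x, y, confidence) per joint

class TrackKeypointBuffer:
    """Accumulates per-track keypoints until enough frames for ST-GCN."""

    def __init__(self, seq_len=SEQ_LEN):
        self.seq_len = seq_len
        # One sliding window of recent frames per track ID.
        self.buffers = defaultdict(lambda: deque(maxlen=seq_len))

    def update(self, track_id, keypoints):
        """Add one frame of (V, C) keypoints for a track.

        Returns a (T, V, C) array once the window is full, else None.
        """
        self.buffers[track_id].append(np.asarray(keypoints, dtype=np.float32))
        buf = self.buffers[track_id]
        if len(buf) == self.seq_len:
            return np.stack(buf)  # ready for the action classifier
        return None
```

Because the deque has a fixed `maxlen`, a full track keeps producing a sliding window each frame, so the classifier can re-score ongoing behavior continuously.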
## Dataset preparation

[[Keypoints Labels of MS COCO 2017]](https://github.com/WongKinYiu/yolov7/releases/download/v0.1/coco2017labels-keypoints.zip)

## Training

Pretrained weights: [yolov7-w6-person.pt](https://github.com/WongKinYiu/yolov7/releases/download/v0.1/yolov7-w6-person.pt)

``` shell
python -m torch.distributed.launch --nproc_per_node 8 --master_port 9527 train.py --data data/coco_kpts.yaml --cfg cfg/yolov7-w6-pose.yaml --weights weights/yolov7-w6-person.pt --batch-size 128 --img 960 --kpt-label --sync-bn --device 0,1,2,3,4,5,6,7 --name yolov7-w6-pose --hyp data/hyp.pose.yaml
```

## Deploy

TensorRT: [https://github.com/nanmi/yolov7-pose](https://github.com/nanmi/yolov7-pose)

## Testing

Trained weights: [yolov7-w6-pose.pt](https://github.com/WongKinYiu/yolov7/releases/download/v0.1/yolov7-w6-pose.pt)

``` shell
python test.py --data data/coco_kpts.yaml --img 960 --conf 0.001 --iou 0.65 --weights yolov7-w6-pose.pt --kpt-label
```

## Citation

```
@article{wang2022yolov7,
  title={{YOLOv7}: Trainable bag-of-freebies sets new state-of-the-art for real-time object detectors},
  author={Wang, Chien-Yao and Bochkovskiy, Alexey and Liao, Hong-Yuan Mark},
  journal={arXiv preprint arXiv:2207.02696},
  year={2022}
}
```

## Acknowledgements
* [https://github.com/AlexeyAB/darknet](https://github.com/AlexeyAB/darknet)
* [https://github.com/WongKinYiu/yolor](https://github.com/WongKinYiu/yolor)
* [https://github.com/WongKinYiu/PyTorch_YOLOv4](https://github.com/WongKinYiu/PyTorch_YOLOv4)
* [https://github.com/WongKinYiu/ScaledYOLOv4](https://github.com/WongKinYiu/ScaledYOLOv4)
* [https://github.com/Megvii-BaseDetection/YOLOX](https://github.com/Megvii-BaseDetection/YOLOX)
* [https://github.com/ultralytics/yolov3](https://github.com/ultralytics/yolov3)
* [https://github.com/ultralytics/yolov5](https://github.com/ultralytics/yolov5)
* [https://github.com/DingXiaoH/RepVGG](https://github.com/DingXiaoH/RepVGG)
* [https://github.com/JUGGHM/OREPA_CVPR2022](https://github.com/JUGGHM/OREPA_CVPR2022)
* [https://github.com/TexasInstruments/edgeai-yolov5/tree/yolo-pose](https://github.com/TexasInstruments/edgeai-yolov5/tree/yolo-pose)
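As a supplement to the Testing section above: the `test.py` command evaluates COCO keypoint AP, which scores a prediction against ground truth by Object Keypoint Similarity (OKS). A minimal sketch of the OKS computation, using the per-keypoint sigmas from the COCO keypoint evaluation code (the helper name `oks` and its argument layout are illustrative):

```python
import numpy as np

# Per-keypoint sigmas from the COCO keypoint evaluation code.
COCO_SIGMAS = np.array([.026, .025, .025, .035, .035, .079, .079,
                        .072, .072, .062, .062, .107, .107, .087,
                        .087, .089, .089])

def oks(gt, pred, area, visibility):
    """Object Keypoint Similarity between ground truth and a prediction.

    gt, pred: (17, 2) arrays of (x, y) coordinates; area: object area;
    visibility: (17,) array, > 0 for labeled (visible) keypoints.
    """
    variances = (2 * COCO_SIGMAS) ** 2
    d2 = np.sum((gt - pred) ** 2, axis=1)           # squared distances
    e = d2 / (variances * (area + np.spacing(1)) * 2)
    vis = visibility > 0
    # Average the per-keypoint similarities over labeled keypoints only.
    return np.sum(np.exp(-e[vis])) / np.sum(vis)
```

A perfect prediction (identical coordinates for every labeled keypoint) yields an OKS of 1.0; AP is then computed by thresholding OKS the same way box AP thresholds IoU.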