Getting Started

Flags

Check out run.md for all available flags.

Training / fine-tuning YOLOv8

The default detector is YOLOv8, chosen for its speed and the simple API offered by ultralytics. Here is how to train or fine-tune the model.

  • Preparing training data: Training data are obtained from Roboflow. When exporting the dataset, choose the YOLOv8 format under TXT. In the zip archive, you will find the following files and directories:

    • train/
    • valid/
    • test/
    • data.yaml

    Move data.yaml to detector/dataset/my_dataset_folder and update the train/val/test paths inside the file (a sketch of this file is shown after the training command below).

  • Setting up the training config file: The config file must live in detector/yolov8/cfg and should be a YAML file (e.g. yolov8_training_tennis.yaml). You must update the following key-value pairs (a minimal config sketch is also shown after the training command below):

    • data: the path to your Roboflow dataset, e.g. detector/dataset/my_dataset_folder/data.yaml
    • project: the directory where you want your experiment to be saved.
    • name: the name of your experiment; your logs and weights will be saved under project/name.
  • Training YOLOv8: Run the following command:

python detector/yolov8/train.py --cfg AlphaPose/detector/yolov8/cfg/my_custom_config
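
For reference, here is a minimal sketch of what the Roboflow-exported data.yaml typically looks like once it sits in detector/dataset/my_dataset_folder. The class count and class names below are illustrative placeholders; point the paths at wherever your train/, valid/ and test/ folders actually live:

# detector/dataset/my_dataset_folder/data.yaml (illustrative sketch)
train: detector/dataset/my_dataset_folder/train/images
val: detector/dataset/my_dataset_folder/valid/images
test: detector/dataset/my_dataset_folder/test/images
nc: 2                        # number of classes (placeholder)
names: ['player', 'ball']    # class names from your Roboflow export (placeholder)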
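
Similarly, here is a minimal sketch of a training config file in detector/yolov8/cfg, keeping only the three keys this guide asks you to set; the paths and names are illustrative placeholders:

# detector/yolov8/cfg/yolov8_training_tennis.yaml (illustrative sketch)
data: detector/dataset/my_dataset_folder/data.yaml    # path to your Roboflow dataset description
project: runs/yolov8_finetune                         # where the experiment is saved (placeholder)
name: tennis_v1                                       # logs and weights end up under project/name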

Motion project 2D skeleton extraction process

  • Extract 2D skeletons for MotionBERT: Run AlphaPose on a specific video and retrieve the visualization as well as the alphapose-result.json that will be fed to MotionBERT.
    Important: Our default detector is YOLOv8, so you need to download the YOLOv8 weights into detector/yolov8/data/ according to the DETECTOR key in the config file you choose (e.g. example_config.yaml). The commands below use two placeholders (a fully filled-in example is given after this list):
  • cfg_file: the config file to use; it contains information about the person detector and the 2D keypoints model
  • trained_model: the weights of the 2D keypoints model

  • Video: Run AlphaPose for a video and save the rendered video with:
    • use --save_video if you want to visualize the 2D skeleton on the video
python scripts/demo_inference.py --cfg ${cfg_file} --checkpoint ${trained_model} --video ${path to video} --outdir ${path to results dir} --save_video
  • Input dir: Run AlphaPose for all images in a folder with:
python scripts/demo_inference.py --cfg ${cfg_file} --checkpoint ${trained_model} --indir ${img_directory} --outdir ${output_directory}
  • Choose a different detector: Besides the default YOLOv8, yolov3-spp also works pretty well; if you want to use the YOLOX series, remember to download their weights according to our installation readme. Options include [yolox-x|yolox-l|yolox-m|yolox-s|yolox-darknet]:
python scripts/demo_inference.py --detector yolox-x --cfg ${cfg_file} --checkpoint ${trained_model} --indir ${img_directory} --outdir ${output_directory}
  • Webcam: Run AlphaPose using default webcam and visualize the results with:
python scripts/demo_inference.py --cfg ${cfg_file} --checkpoint ${trained_model} --outdir examples/res --vis --webcam 0
  • Input list: Run AlphaPose for images in a list and save the rendered images with:
python scripts/demo_inference.py --cfg ${cfg_file} --checkpoint ${trained_model} --list examples/list-coco-demo.txt --indir ${img_directory} --outdir examples/res --save_img
  • CPU-only / multi-GPU: Run AlphaPose for images in a list using the CPU only or multiple GPUs:
python scripts/demo_inference.py --cfg ${cfg_file} --checkpoint ${trained_model} --list examples/list-coco-demo.txt --indir ${img_directory} --outdir examples/res --gpus ${-1(cpu only)/0,1,2,3(multi-gpus)}
  • Re-ID Track (experimental): Run AlphaPose to track persons in a video with a human re-ID algorithm:
python scripts/demo_inference.py --cfg ${cfg_file} --checkpoint ${trained_model} --video ${path to video} --outdir examples/res --pose_track --save_video
  • Simple Track (experimental): Run AlphaPose to track persons in a video with an MOT tracking algorithm:
python scripts/demo_inference.py --cfg ${cfg_file} --checkpoint ${trained_model} --video ${path to video} --outdir examples/res --detector tracker --save_video
  • Pose Flow (not ready): Run AlphaPose to track persons in a video with the embedded PoseFlow algorithm:
python scripts/demo_inference.py --cfg ${cfg_file} --checkpoint ${trained_model} --video ${path to video} --outdir examples/res --pose_flow --save_video
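
As a concrete version of the ${cfg_file} and ${trained_model} placeholders used above, here is an illustrative extraction command. It assumes the FastPose ResNet-50 config and checkpoint from the AlphaPose model zoo and a placeholder video path; substitute the config, weights and video you actually use:

python scripts/demo_inference.py --cfg configs/coco/resnet/256x192_res50_lr1e-3_1x.yaml --checkpoint pretrained_models/fast_res50_256x192.pth --video examples/demo/my_clip.mp4 --outdir examples/res --save_video

The resulting keypoints JSON written to the output directory is the file you then feed to MotionBERT.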

Options

  • Note: If you run into an OOM (out of memory) error, decrease the detection and pose estimation batch sizes until the program fits on your machine:
python scripts/demo_inference.py --cfg ${cfg_file} --checkpoint ${trained_model} --indir ${img_directory} --outdir examples/res --detbatch 1 --posebatch 30
  • Getting more accurate: You can improve accuracy by using a larger input size for the pose network or by enabling flip testing (--flip), e.g.:
python scripts/demo_inference.py --cfg ${cfg_file} --checkpoint ${trained_model} --indir ${img_directory} --outdir ${output_directory} --flip
  • Speeding up: Check out speed_up.md for more details.

Output format

Check out output.md for more details.
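
For orientation, a single (truncated) entry of the results JSON typically looks like the sketch below, with the 17 COCO keypoints stored as flattened x, y, confidence triples. Field names can vary between versions, so treat output.md as the authoritative reference:

[
  {
    "image_id": "frame_000001.jpg",
    "category_id": 1,
    "keypoints": [367.2, 148.5, 0.91, 372.0, 143.1, 0.88, ...],
    "score": 2.97
  }
]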
