# Halpe-FullBody
**Repository Path**: james9/Halpe-FullBody
## Basic Information
- **Project Name**: Halpe-FullBody
- **Description**: Halpe: full body human pose estimation and human-object interaction detection dataset
- **Primary Language**: Unknown
- **License**: Not specified
- **Default Branch**: master
- **Homepage**: None
- **GVP Project**: No
## Statistics
- **Stars**: 0
- **Forks**: 0
- **Created**: 2021-10-27
- **Last Updated**: 2021-10-27
## Categories & Tags
**Categories**: Uncategorized
**Tags**: None
## README
# Halpe Full-Body Human Keypoints and HOI-Det dataset
## What is Halpe?
**Halpe** is a joint project under [AlphaPose](https://github.com/MVIG-SJTU/AlphaPose) and [HAKE](http://hake-mvig.cn/). It aims at pushing human understanding to the extreme. We provide detailed annotations of human keypoints, together with the human-object interaction triplets from HICO-DET. For each person, we annotate 136 keypoints in total, covering the head, face, body, hands, and feet. Below we provide some samples from the Halpe dataset.
## Download
- Train annotations [[Baidu](https://pan.baidu.com/s/1hWX-I-HpXXnLcprFskfriQ) | [Google](https://drive.google.com/file/d/13vj8H0GZ9yugLPhVVWV9fcH-3RyW5Wk5/view?usp=sharing)]
- Val annotations [[Baidu](https://pan.baidu.com/s/1yDvBkXTSwk20EjiYzimpPw) | [Google](https://drive.google.com/file/d/1FdyCgro2t9_nOhTlMPjEf3c0aLOz9wi6/view?usp=sharing)]
- Train images from [HICO-DET](https://drive.google.com/open?id=1QZcJmGVlF9f4h-XLWe9Gkmnmj2z1gSnk)
- Val images from [COCO](http://images.cocodataset.org/zips/val2017.zip)
## Realtime Demo with tracking
A trained model is available in [AlphaPose](https://github.com/MVIG-SJTU/AlphaPose)!
Check out its [MODEL_ZOO](https://github.com/MVIG-SJTU/AlphaPose/blob/master/docs/MODEL_ZOO.md).
## Keypoints format
We annotate 136 keypoints in total:
```
//26 body keypoints
{0, "Nose"},
{1, "LEye"},
{2, "REye"},
{3, "LEar"},
{4, "REar"},
{5, "LShoulder"},
{6, "RShoulder"},
{7, "LElbow"},
{8, "RElbow"},
{9, "LWrist"},
{10, "RWrist"},
{11, "LHip"},
{12, "RHip"},
{13, "LKnee"},
{14, "Rknee"},
{15, "LAnkle"},
{16, "RAnkle"},
{17, "Head"},
{18, "Neck"},
{19, "Hip"},
{20, "LBigToe"},
{21, "RBigToe"},
{22, "LSmallToe"},
{23, "RSmallToe"},
{24, "LHeel"},
{25, "RHeel"},
//face
{26-93, 68 Face Keypoints}
//left hand
{94-114, 21 Left Hand Keypoints}
//right hand
{115-135, 21 Right Hand Keypoints}
```
Illustration (figures omitted): 26 body keypoints, 68 face keypoints, 21 hand keypoints.
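The index ranges above make it easy to split a flat keypoint array into its parts. Below is a minimal sketch, assuming one person's keypoints are given as a NumPy array of shape `(136, 3)` with `(x, y, score)` rows; the variable names are illustrative, not part of the dataset API:

```python
import numpy as np

# Illustrative only: split one person's (136, 3) keypoint array into the
# groups listed above, using the documented index ranges.
keypoints = np.zeros((136, 3))   # placeholder; substitute real (x, y, score) rows

body       = keypoints[0:26]     # 26 body keypoints (feet are indices 20-25)
face       = keypoints[26:94]    # 68 face keypoints
left_hand  = keypoints[94:115]   # 21 left-hand keypoints
right_hand = keypoints[115:136]  # 21 right-hand keypoints
```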
## Usage
The annotations are in the same format as the COCO dataset. A good starting point is `vis.py`. We also provide related APIs: see [halpecocotools](https://github.com/HaoyiZhu/HalpeCOCOAPI), which can be installed with `pip install halpecocotools`.
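For a quick start, here is a minimal loading sketch, assuming `halpecocotools` mirrors the standard `pycocotools` API; the annotation file name is a placeholder for whichever file you downloaded above:

```python
# Minimal sketch, assuming halpecocotools mirrors the pycocotools API.
from halpecocotools.coco import COCO

coco = COCO("halpe_train_annotations.json")   # placeholder file name

img_ids = coco.getImgIds()                    # ids of all annotated images
ann_ids = coco.getAnnIds(imgIds=img_ids[0])   # annotations on the first image
anns = coco.loadAnns(ann_ids)

for ann in anns:
    # COCO-style keypoints are a flat list [x1, y1, v1, x2, y2, v2, ...],
    # so 136 keypoints give 408 numbers per person.
    print(len(ann["keypoints"]) // 3, "keypoints annotated for this person")
```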
## Evaluation
We adopt the same evaluation metrics as the COCO dataset.
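As a sketch of how such an evaluation might look, assuming `halpecocotools` provides a `COCOeval` class analogous to the one in pycocotools and that predictions are stored in the standard COCO results-JSON format (both file names below are placeholders):

```python
# Hypothetical evaluation sketch in the style of pycocotools' COCOeval.
from halpecocotools.coco import COCO
from halpecocotools.cocoeval import COCOeval

gt = COCO("halpe_val_annotations.json")      # ground truth (placeholder name)
dt = gt.loadRes("keypoint_results.json")     # your predictions (placeholder name)

evaluator = COCOeval(gt, dt, iouType="keypoints")
evaluator.evaluate()
evaluator.accumulate()
evaluator.summarize()                        # prints COCO-style AP/AR numbers
```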
## Related resources
A concurrent work, [COCO-WholeBody](https://github.com/jin-s13/COCO-WholeBody), also annotates full-body keypoints. HOI detection annotations for the COCO dataset are also available from [V-COCO](https://github.com/s-gupta/v-coco).
## Citation
The paper introducing this project is coming soon. If the data helps your research, please cite the following papers for now:
```
@inproceedings{fang2017rmpe,
  title={{RMPE}: Regional Multi-person Pose Estimation},
  author={Fang, Hao-Shu and Xie, Shuqin and Tai, Yu-Wing and Lu, Cewu},
  booktitle={ICCV},
  year={2017}
}

@inproceedings{li2020pastanet,
  title={PaStaNet: Toward Human Activity Knowledge Engine},
  author={Li, Yong-Lu and Xu, Liang and Liu, Xinpeng and Huang, Xijie and Xu, Yue and Wang, Shiyi and Fang, Hao-Shu and Ma, Ze and Chen, Mingyang and Lu, Cewu},
  booktitle={CVPR},
  year={2020}
}
```