# LHM
**Repository Path**: monkeycc/LHM
## Basic Information
- **Project Name**: LHM
- **Description**: No description available
- **Primary Language**: Python
- **License**: Apache-2.0
- **Default Branch**: main
- **Homepage**: None
- **GVP Project**: No
## Statistics
- **Stars**: 0
- **Forks**: 0
- **Created**: 2025-12-03
- **Last Updated**: 2025-12-03
## Categories & Tags
**Categories**: Uncategorized
**Tags**: None
## README
# LHM: Large Animatable Human Reconstruction Model from a Single Image in Seconds
- Official PyTorch implementation
[Lingteng Qiu*](https://lingtengqiu.github.io/), [Xiaodong Gu*](https://scholar.google.com.hk/citations?user=aJPO514AAAAJ&hl=zh-CN&oi=ao), [Peihao Li*](https://liphao99.github.io/), [Qi Zuo*](https://scholar.google.com/citations?user=UDnHe2IAAAAJ&hl=zh-CN)
[Weichao Shen](https://scholar.google.com/citations?user=7gTmYHkAAAAJ&hl=zh-CN), [Junfei Zhang](https://scholar.google.com/citations?user=oJjasIEAAAAJ&hl=en), [Kejie Qiu](https://sites.google.com/site/kejieqiujack/home), [Weihao Yuan](https://weihao-yuan.com/)
[Guanying Chen+](https://guanyingc.github.io/), [Zilong Dong+](https://baike.baidu.com/item/%E8%91%A3%E5%AD%90%E9%BE%99/62931048), [Liefeng Bo](https://scholar.google.com/citations?user=FJwtMf0AAAAJ&hl=zh-CN)
Tongyi Lab, Alibaba Group
[Project Page](https://aigc3d.github.io/projects/LHM/) [Paper](https://arxiv.org/pdf/2503.10625) [HuggingFace Space](https://huggingface.co/spaces/DyrusQZ/LHM) [ModelScope Space](https://www.modelscope.cn/studios/Damo_XR_Lab/LHM) [Motionshop2](https://modelscope.cn/studios/Damo_XR_Lab/Motionshop2) [License](https://www.apache.org/licenses/LICENSE-2.0)
```bash
# MODEL_NAME={LHM-500M, LHM-500M-HF, LHM-1B, LHM-1B-HF}
# bash ./inference.sh LHM-500M ./train_data/example_imgs/ ./train_data/motion_video/mimo1/smplx_params
# bash ./inference.sh LHM-1B ./train_data/example_imgs/ ./train_data/motion_video/mimo1/smplx_params
# bash ./inference.sh LHM-500M-HF ./train_data/example_imgs/ ./train_data/motion_video/mimo1/smplx_params
# bash ./inference.sh LHM-1B-HF ./train_data/example_imgs/ ./train_data/motion_video/mimo1/smplx_params
# export animation video
bash inference.sh ${MODEL_NAME} ${IMAGE_PATH_OR_FOLDER} ${MOTION_SEQ}
# export mesh
bash ./inference_mesh.sh ${MODEL_NAME}
```
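The `inference.sh` invocation above can be wrapped in a small helper that validates the model name against the four released variants before launching. This is a hypothetical convenience sketch, not part of the repository; only the variant names, script name, and example paths come from the commands above.

```python
import shlex

# The four released model variants listed in the comments above.
MODEL_VARIANTS = {"LHM-500M", "LHM-500M-HF", "LHM-1B", "LHM-1B-HF"}

def build_inference_cmd(model_name: str, image_path: str, motion_seq: str) -> str:
    """Assemble the animation-export command, rejecting unknown variants."""
    if model_name not in MODEL_VARIANTS:
        raise ValueError(f"unknown model variant: {model_name!r}")
    return " ".join(["bash", "inference.sh",
                     shlex.quote(model_name),
                     shlex.quote(image_path),
                     shlex.quote(motion_seq)])

print(build_inference_cmd("LHM-500M",
                          "./train_data/example_imgs/",
                          "./train_data/motion_video/mimo1/smplx_params"))
```

Catching a bad `MODEL_NAME` here fails fast, instead of partway through checkpoint loading inside the script.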
### Processing Video Motion Data
- Download the pretrained model weights for motion extraction
```bash
wget -P ./pretrained_models/human_model_files/pose_estimate https://virutalbuy-public.oss-cn-hangzhou.aliyuncs.com/share/aigc3d/data/LHM/yolov8x.pt
wget -P ./pretrained_models/human_model_files/pose_estimate https://virutalbuy-public.oss-cn-hangzhou.aliyuncs.com/share/aigc3d/data/LHM/vitpose-h-wholebody.pth
```
- Install the additional dependencies
```bash
cd ./engine/pose_estimation
pip install mmcv==1.3.9
pip install -v -e third-party/ViTPose
pip install ultralytics
```
- Run the following command to extract motion data from a video
```bash
# python ./engine/pose_estimation/video2motion.py --video_path ./train_data/demo.mp4 --output_path ./train_data/custom_motion
python ./engine/pose_estimation/video2motion.py --video_path ${VIDEO_PATH} --output_path ${OUTPUT_PATH}
# For half-body videos, e.g. ./train_data/xiaoming.mp4, we recommend the following command:
python ./engine/pose_estimation/video2motion.py --video_path ${VIDEO_PATH} --output_path ${OUTPUT_PATH} --fitting_steps 100 0
```
- Drive the digital human with the extracted motion data
```bash
# bash ./inference.sh LHM-500M-HF ./train_data/example_imgs/ ./train_data/custom_motion/demo/smplx_params
bash inference.sh ${MODEL_NAME} ${IMAGE_PATH_OR_FOLDER} ${OUTPUT_PATH}/${VIDEO_NAME}/smplx_params
```
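The motion-sequence argument is derived from the extraction step: `${OUTPUT_PATH}/${VIDEO_NAME}/smplx_params`, where `VIDEO_NAME` is the video filename without its extension (`demo.mp4` → `demo`, matching the commented example above). A minimal sketch of that path derivation (the helper function itself is hypothetical):

```python
from pathlib import Path

def smplx_params_dir(video_path: str, output_path: str) -> str:
    """Derive the motion-sequence directory that inference.sh expects:
    <OUTPUT_PATH>/<video file stem>/smplx_params."""
    stem = Path(video_path).stem  # "./train_data/demo.mp4" -> "demo"
    return (Path(output_path) / stem / "smplx_params").as_posix()

print(smplx_params_dir("./train_data/demo.mp4", "./train_data/custom_motion"))
```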
## Computing Metrics
We provide simple scripts for computing metrics:
```bash
# download pretrain model into ./pretrained_models/
wget https://virutalbuy-public.oss-cn-hangzhou.aliyuncs.com/share/aigc3d/data/LHM/arcface_resnet18.pth
# Face Similarity
python ./tools/metrics/compute_facesimilarity.py -f1 ${gt_folder} -f2 ${results_folder}
# PSNR
python ./tools/metrics/compute_psnr.py -f1 ${gt_folder} -f2 ${results_folder}
# SSIM LPIPS
python ./tools/metrics/compute_ssim_lpips.py -f1 ${gt_folder} -f2 ${results_folder}
```
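For reference, PSNR is computed as `10 * log10(MAX^2 / MSE)` in dB, where `MAX` is the peak pixel value (255 for 8-bit images). The script `compute_psnr.py` is the authoritative implementation; the following is only a minimal pure-Python sketch over flattened pixel sequences:

```python
import math

def psnr(img1, img2, max_val=255.0):
    """Peak signal-to-noise ratio between two equal-length pixel
    sequences: PSNR = 10 * log10(MAX^2 / MSE), in dB."""
    assert len(img1) == len(img2) and img1, "images must match and be non-empty"
    mse = sum((a - b) ** 2 for a, b in zip(img1, img2)) / len(img1)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * math.log10(max_val ** 2 / mse)

print(psnr([0, 0, 0], [255, 255, 255]))  # maximally different 8-bit pixels -> 0.0 dB
```

Higher is better; identical ground-truth and result frames give infinite PSNR.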
## Acknowledgements
This work builds on the following excellent research and open-source projects:
- [OpenLRM](https://github.com/3DTopia/OpenLRM)
- [ExAvatar](https://github.com/mks0601/ExAvatar_RELEASE)
- [DreamGaussian](https://github.com/dreamgaussian/dreamgaussian)
We thank these outstanding works for their important contributions to 3D generation and digital humans.
We would especially like to thank [站长推荐推荐](https://space.bilibili.com/175365958?spm_id_from=333.337.0.0), who generously made a Bilibili video showing how to install LHM.
## More Works
Feel free to explore more interesting works from our team:
- [AniGS](https://github.com/aigc3d/AniGS)
- [LAM](https://github.com/aigc3d/LAM)
## Star History
[Star History Chart](https://star-history.com/#aigc3d/LHM&Date)
## Citation
```bibtex
@article{qiu2025LHM,
  title={LHM: Large Animatable Human Reconstruction Model from a Single Image in Seconds},
  author={Lingteng Qiu and Xiaodong Gu and Peihao Li and Qi Zuo
          and Weichao Shen and Junfei Zhang and Kejie Qiu and Weihao Yuan
          and Guanying Chen and Zilong Dong and Liefeng Bo},
  journal={arXiv preprint arXiv:2503.10625},
  year={2025}
}
```