# TienKung-Lab
**Repository Path**: open_x_humanoid/TienKung-Lab
## Basic Information
- **Project Name**: TienKung-Lab
- **Description**: No description available
- **Primary Language**: Unknown
- **License**: BSD-3-Clause
- **Default Branch**: main
- **Homepage**: None
- **GVP Project**: No
## Statistics
- **Stars**: 9
- **Forks**: 3
- **Created**: 2025-07-07
- **Last Updated**: 2025-09-19
## Categories & Tags
**Categories**: Uncategorized
**Tags**: None
## README
# TienKung-Lab: Direct IsaacLab Workflow for TienKung
[Isaac Sim](https://docs.omniverse.nvidia.com/isaacsim/latest/overview.html) | [Isaac Lab](https://isaac-sim.github.io/IsaacLab) | [RSL-RL](https://github.com/leggedrobotics/rsl_rl) | [Python 3.10](https://docs.python.org/3/whatsnew/3.10.html) | [Ubuntu 22.04](https://releases.ubuntu.com/22.04/) | [BSD-3-Clause](https://opensource.org/licenses/BSD-3-Clause) | [pre-commit](https://pre-commit.com/)
The TienKung humanoid robot won the championship in the first Humanoid Robot Half Marathon.
## Overview
Demo clips are provided for the Walk and Run motions, covering the AMP animation, sensor visualization, RL + AMP policy, and Sim2Sim stages.
This framework is an RL-based locomotion control system designed for the full-sized humanoid robot TienKung. It integrates AMP-style rewards with periodic gait rewards, enabling natural, stable, and efficient walking and running behaviors.
The codebase is built on IsaacLab, supports Sim2Sim transfer to MuJoCo, and features a modular architecture for seamless customization and extension. Additionally, it incorporates ray-casting-based sensors for enhanced perception, enabling precise environmental interaction and obstacle avoidance.
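A common way to blend the two signals is a weighted sum of a discriminator-based AMP reward and a phase-based gait reward. The snippet below is a minimal illustrative sketch, not the repository's actual reward code; every function name, tensor shape, and weight is a placeholder assumption.

```python
# Illustrative sketch of blending an AMP-style reward with a periodic gait
# reward; names, shapes, and weights are assumptions, not TienKung-Lab code.
import torch


def blended_reward(disc_logits: torch.Tensor,   # (num_envs,) discriminator output
                   foot_contacts: torch.Tensor,  # (num_envs, 2) 1.0 if foot in contact
                   gait_phase: torch.Tensor,     # (num_envs, 2) per-foot phase in [0, 1)
                   w_amp: float = 0.5,
                   w_gait: float = 0.5) -> torch.Tensor:
    # AMP-style reward: large when the discriminator judges the transition
    # to look like the reference motion (least-squares formulation).
    r_amp = torch.clamp(1.0 - 0.25 * (disc_logits - 1.0) ** 2, min=0.0)

    # Periodic gait reward: contact is rewarded during the stance half of the
    # phase, absence of contact during the swing half.
    stance = (gait_phase < 0.5).float()
    r_gait = (stance * foot_contacts + (1.0 - stance) * (1.0 - foot_contacts)).mean(dim=-1)

    return w_amp * r_amp + w_gait * r_gait
```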
## TODO List
- [ ] Add motion dataset for TienKung
- [ ] Motion retargeting support
- [ ] Add more sensors
- [ ] Add Perceptive Control
## Installation
TienKung-Lab is built with IsaacSim 4.5.0 and IsaacLab 2.1.0.
- Install Isaac Lab by following the [installation guide](https://isaac-sim.github.io/IsaacLab/main/source/setup/installation/index.html). We recommend using the conda installation as it simplifies calling Python scripts from the terminal.
- Clone this repository separately from the Isaac Lab installation (i.e. outside the `IsaacLab` directory)
- Using a Python interpreter that has Isaac Lab installed, install this library:
```bash
cd TienKung-Lab
pip install -e .
```
- Install the rsl-rl library
```bash
cd TienKung-Lab/rsl_rl
pip install -e .
```
- Verify that the extension is correctly installed by running the following command:
```bash
python legged_lab/scripts/train.py --task=walk --logger=tensorboard --headless --num_envs=64
```
## Usage
### Visualize motion
Visualize the motion by updating the simulation with data from `tienkung/datasets/motion_visualization`.
```bash
python legged_lab/scripts/play_amp_animation.py --task=walk --num_envs=1
python legged_lab/scripts/play_amp_animation.py --task=run --num_envs=1
```
### Visualize motion with sensors
Visualize the motion with sensors by updating the simulation with data from `tienkung/datasets/motion_visualization`.
```bash
python legged_lab/scripts/play_amp_animation.py --task=walk_with_sensor --num_envs=1
python legged_lab/scripts/play_amp_animation.py --task=run_with_sensor --num_envs=1
```
### Train
Train the policy using AMP expert data from `tienkung/datasets/motion_amp_expert`.
```bash
python legged_lab/scripts/train.py --task=walk --headless --logger=tensorboard --num_envs=4096
python legged_lab/scripts/train.py --task=run --headless --logger=tensorboard --num_envs=4096
```
### Play
Run the trained policy.
```bash
python legged_lab/scripts/play.py --task=walk --num_envs=1
python legged_lab/scripts/play.py --task=run --num_envs=1
```
### Sim2Sim (MuJoCo)
Evaluate the trained policy in MuJoCo to perform cross-simulation validation.
`Exported_policy/` contains pretrained policies provided by the project. When you use the play script, the trained policy is exported automatically and saved to a path like `logs/run/[timestamp]/exported/policy.pt`.
```bash
python legged_lab/scripts/sim2sim.py --task walk --policy Exported_policy/walk.pt --duration 10
python legged_lab/scripts/sim2sim.py --task run --policy Exported_policy/run.pt --duration 10
```
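Conceptually, cross-simulation evaluation amounts to loading the exported TorchScript policy and stepping it against a MuJoCo model of the robot. The sketch below only illustrates that loop; the MJCF path, observation layout, and control mapping are placeholders, not what `sim2sim.py` actually does.

```python
# Minimal sketch of replaying an exported policy in MuJoCo. The MJCF path,
# observation builder, and action-to-control mapping are placeholders.
import mujoco
import numpy as np
import torch

policy = torch.jit.load("Exported_policy/walk.pt")             # exported TorchScript policy
model = mujoco.MjModel.from_xml_path("path/to/tienkung.xml")    # hypothetical robot MJCF
data = mujoco.MjData(model)


def build_obs(data: mujoco.MjData) -> torch.Tensor:
    # Placeholder: the real observation must match the layout used in training.
    return torch.from_numpy(np.concatenate([data.qpos, data.qvel]).astype(np.float32))


for _ in range(int(10.0 / model.opt.timestep)):                  # roughly a 10 s rollout
    with torch.no_grad():
        action = policy(build_obs(data).unsqueeze(0)).squeeze(0).numpy()
    data.ctrl[:] = action                                         # placeholder control mapping
    mujoco.mj_step(model, data)
```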
### Tensorboard
```bash
tensorboard --logdir=logs/walk
tensorboard --logdir=logs/run
```
## Code formatting
We have a pre-commit template to automatically format your code.
To install pre-commit:
```bash
pip install pre-commit
```
Then you can run pre-commit with:
```bash
pre-commit run --all-files
```
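Optionally, install the git hook so the checks run automatically on every commit:
```bash
pre-commit install
```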
## Troubleshooting
### Pylance Missing Indexing of Extensions
In some VS Code versions, Pylance fails to index some of the extensions. In this case, add the path to your extension in `.vscode/settings.json` under the key `"python.analysis.extraPaths"`.
```json
{
    "python.analysis.extraPaths": [
        "${workspaceFolder}/legged_lab",
        "/source/isaaclab_tasks",
        "/source/isaaclab_mimic",
        "/source/extensions",
        "/source/isaaclab_assets",
        "/source/isaaclab_rl",
        "/source/isaaclab"
    ]
}
```
## Acknowledgement
* [Legged Lab](https://github.com/Hellod035/LeggedLab): a direct IsaacLab workflow for legged robots.
* [Humanoid-Gym](https://github.com/roboterax/humanoid-gym): a reinforcement learning (RL) framework based on NVIDIA Isaac Gym, with Sim2Sim support.
* [RSL RL](https://github.com/leggedrobotics/rsl_rl): a fast and simple implementation of RL algorithms.
* [AMP_for_hardware](https://github.com/Alescontrela/AMP_for_hardware?tab=readme-ov-file): codebase for learning skills from short reference motions using Adversarial Motion Priors.
* [Omni-Perception](https://acodedog.github.io/OmniPerceptionPages/): a perception library for legged robots, which provides a set of sensors and perception algorithms.
* [Warp](https://github.com/NVIDIA/warp): a Python framework for writing high-performance simulation and graphics code.
## Discussions
If you're interested in TienKung-Lab, you are welcome to join our WeChat group for discussions.
