## 🏆 News
- We have released the code of PhysX-Anything and our new dataset PhysX-Mobility 🎉
## PhysX-Anything
### Installation
1. Clone the repo:
```bash
git clone --recurse-submodules https://github.com/ziangcao0312/PhysX-Anything.git
cd PhysX-Anything
```
2. Create a new conda environment named `physx-anything` and install the dependencies:
```bash
. ./setup.sh --new-env --basic --xformers --flash-attn --diffoctreerast --spconv --mipgaussian --kaolin --nvdiffrast
```
**Note**: Detailed usage of `setup.sh` is documented in the [TRELLIS](https://github.com/microsoft/TRELLIS) repository.
3. Install the dependencies for Qwen2.5:
```bash
pip install transformers==4.50.0
pip install qwen-vl-utils
pip install 'accelerate>=0.26.0'
```
**Note**: We also provide a `requirements.txt` file. Alternatively, you can install all dependencies by running:
```bash
conda create -n physx-anything python=3.10
conda activate physx-anything
pip install -r requirements.txt
```
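After installation, a quick sanity check that the key packages are importable can save debugging time later. A minimal sketch using only the standard library (the module list here is illustrative, not the complete dependency set):

```python
import importlib.util

def check_deps(modules):
    """Return the subset of module names that cannot be imported."""
    return [m for m in modules if importlib.util.find_spec(m) is None]

# Hypothetical core dependencies of the pipeline; adjust to your setup.
missing = check_deps(["torch", "transformers", "accelerate"])
if missing:
    print("Missing packages:", ", ".join(missing))
else:
    print("All core dependencies found.")
```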
### Inference
1. Download the pre-trained model from [huggingface_v1](https://huggingface.co/Caoza/PhysX-Anything):
```bash
python download.py
```
2. Run the inference pipeline:
```bash
python 1_vlm_demo.py            # VLM inference
    --demo_path ./demo          # input image directory
    --save_part_ply True        # save per-part geometry
    --remove_bg False           # set to False for RGBA images, True otherwise
    --ckpt ./pretrain/vlm       # checkpoint path
python 2_decoder.py             # decoder inference
python 3_split.py               # split the mesh into parts
python 4_simready_gen.py        # convert to URDF & XML
    --voxel_define 32           # voxel resolution
    --basepath ./test_demo      # results path
    --process 0                 # apply post-processing or not
    --fixed_base 0              # fix the object's base or not
    --deformable 0              # generate deformable parts or not
```
**Note**: Although our method can generate deformable parts with physical parameters, these components are not stable in MuJoCo. We therefore recommend setting `--deformable 0` to obtain more reliable simulation results.
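Before loading a generated asset into a simulator, it can help to inspect its kinematic structure. A minimal sketch using only the standard library; the URDF snippet below is illustrative, not actual output of `4_simready_gen.py`:

```python
import xml.etree.ElementTree as ET

# Tiny URDF standing in for a generated asset (real outputs also
# contain meshes, inertial data, and physical attributes).
URDF = """
<robot name="demo_object">
  <link name="base"/>
  <link name="lid"/>
  <joint name="lid_hinge" type="revolute">
    <parent link="base"/>
    <child link="lid"/>
  </joint>
</robot>
"""

def summarize_urdf(urdf_text):
    """Return the link names and a joint-name -> joint-type mapping."""
    root = ET.fromstring(urdf_text)
    links = [link.get("name") for link in root.findall("link")]
    joints = {j.get("name"): j.get("type") for j in root.findall("joint")}
    return links, joints

links, joints = summarize_urdf(URDF)
print(links)   # ['base', 'lid']
print(joints)  # {'lid_hinge': 'revolute'}
```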
### Evaluation
1. Render the generated URDF files:
```bash
python render_urdf.py
```
2. Run the VLM-based evaluations.
```bash
python evaluation_kine.py
```
**Note**: For all other physical attributes, PhysX-Anything adopts the same settings as [PhysX-3D](https://github.com/ziangcao0312/PhysX-3D).
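As an illustration of the kind of kinematic check such an evaluation performs, here is a hypothetical joint-type accuracy metric. This is a simplification for clarity; `evaluation_kine.py` defines the actual protocol:

```python
def joint_type_accuracy(pred, gt):
    """Fraction of parts whose predicted joint type matches the ground truth.

    `pred` and `gt` map part names to joint types; this helper is
    illustrative, not the metric implemented in evaluation_kine.py.
    """
    if not gt:
        return 0.0
    correct = sum(pred.get(name) == jtype for name, jtype in gt.items())
    return correct / len(gt)

gt = {"lid": "revolute", "drawer": "prismatic"}
pred = {"lid": "revolute", "drawer": "revolute"}
print(joint_type_accuracy(pred, gt))  # 0.5
```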
## PhysX-Mobility
For more details about our proposed dataset, including its structure and annotations, please see [PhysX-Mobility](https://huggingface.co/datasets/Caoza/PhysX-Mobility) and [PhysXNet](https://huggingface.co/datasets/Caoza/PhysX-3D).
## References
If you find PhysX-Anything and PhysX-3D useful for your work, please cite:
```
@article{physxanything,
  title={PhysX-Anything: Simulation-Ready Physical 3D Assets from Single Image},
  author={Cao, Ziang and Hong, Fangzhou and Chen, Zhaoxi and Pan, Liang and Liu, Ziwei},
  journal={arXiv preprint arXiv:2511.13648},
  year={2025}
}

@article{physx3d,
  title={PhysX-3D: Physical-Grounded 3D Asset Generation},
  author={Cao, Ziang and Chen, Zhaoxi and Pan, Liang and Liu, Ziwei},
  journal={arXiv preprint arXiv:2507.12465},
  year={2025}
}
```
### Acknowledgement
The data and code are based on [PartNet-Mobility](https://sapien.ucsd.edu/browse), [Qwen](https://github.com/QwenLM/Qwen3-VL), and [TRELLIS](https://github.com/microsoft/TRELLIS). We would like to express our sincere thanks to the contributors.
## :newspaper_roll: License
Distributed under the S-Lab License. See `LICENSE` for more information.