# VideoMimic

**Repository Path**: leonbdong/VideoMimic

## Basic Information

- **Project Name**: VideoMimic
- **Description**: No description available
- **Primary Language**: Unknown
- **License**: MIT
- **Default Branch**: main
- **Homepage**: None
- **GVP Project**: No

## Statistics

- **Stars**: 0
- **Forks**: 0
- **Created**: 2025-07-15
- **Last Updated**: 2025-07-15

## Categories & Tags

**Categories**: Uncategorized
**Tags**: None

## README

# VideoMimic

[[project page]](https://www.videomimic.net/) [[arxiv]](https://arxiv.org/pdf/2505.03729)

**Visual Imitation Enables Contextual Humanoid Control. arXiv, 2025.**
Arthur Allshire*, Hongsuk Choi*, Junyi Zhang*, David McAllister*, Anthony Zhang, Chung Min Kim, Trevor Darrell, Pieter Abbeel, Jitendra Malik, Angjoo Kanazawa (*Equal contribution)
University of California, Berkeley
## Updates

- **Jul 6, 2025:** Initial real-to-sim pipeline release.

## TODO

- [x] Release real-to-sim pipeline (July 15th, 2025)
- [ ] Release the video dataset (July 15th, 2025)
- [ ] Release sim-to-real pipeline (September 15th, 2025)

# VideoMimic Real-to-Sim

VideoMimic’s [real-to-sim pipeline](real2sim/README.md) reconstructs 3D environments and human motion from single-camera videos and retargets the motion to humanoid robots for imitation learning. It extracts human poses in world coordinates, maps them to robot configurations, and reconstructs environments as pointclouds later converted to meshes.
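To make the retargeting step concrete, here is a minimal, hypothetical sketch of mapping world-frame human joint positions onto a humanoid of different stature. The function name and the uniform-scaling strategy are illustrative assumptions for exposition, not the actual VideoMimic API, which additionally solves for robot joint configurations under kinematic constraints.

```python
# Hedged sketch: uniform scale retargeting of world-frame joints.
# `scale_retarget` is a hypothetical helper, not part of VideoMimic.

def scale_retarget(human_joints, human_height, robot_height):
    """Map 3D human joint positions (world frame) onto a robot of a
    different height by scaling all joints about the root (joint 0).
    A real retargeter would also solve inverse kinematics against the
    robot's joint limits; this shows only the coordinate mapping."""
    scale = robot_height / human_height
    root = human_joints[0]
    return [
        tuple(root[i] + scale * (j[i] - root[i]) for i in range(3))
        for j in human_joints
    ]

# Toy example: a 1.8 m human pose mapped onto a 1.2 m humanoid.
human = [(0.0, 0.0, 1.0), (0.3, 0.0, 1.6), (0.3, 0.0, 0.1)]
robot = scale_retarget(human, human_height=1.8, robot_height=1.2)
```

The root joint is kept fixed so the scaled pose stays anchored at the same world position, which matters when the motion is replayed inside the reconstructed environment mesh.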