# X-SAM
**Repository Path**: big-model/X-SAM
## Basic Information
- **Project Name**: X-SAM
- **Description**: No description available
- **Primary Language**: Unknown
- **License**: Apache-2.0
- **Default Branch**: main
- **Homepage**: None
- **GVP Project**: No
## Statistics
- **Stars**: 0
- **Forks**: 0
- **Created**: 2025-08-11
- **Last Updated**: 2025-08-11
## Categories & Tags
**Categories**: Uncategorized
**Tags**: None
## README
✨ X-SAM
From Segment Anything to Any Segmentation
[Hao Wang](https://github.com/wanghao9610)<sup>1,2</sup>, [Limeng Qiao](https://scholar.google.com/citations?user=3PFZAg0AAAAJ&hl=en)<sup>3</sup>, [Zequn Jie](https://scholar.google.com/citations?user=4sKGNB0AAAAJ&hl)<sup>3</sup>, [Zhijian Huang](https://zhijian11.github.io/)<sup>1</sup>, [Chengjian Feng](https://fcjian.github.io/)<sup>3</sup>,
[Qingfang Zheng](https://openreview.net/profile?id=%7EZheng_Qingfang1)<sup>1</sup>, [Lin Ma](https://forestlinma.com/)<sup>3</sup>, [Xiangyuan Lan](https://scholar.google.com/citations?user=c3iwWRcAAAAJ&hl)<sup>2</sup> :email:, [Xiaodan Liang](https://scholar.google.com/citations?user=voxznZAAAAAJ&hl)<sup>1</sup> :email:
<sup>1</sup> Sun Yat-sen University, <sup>2</sup> Peng Cheng Laboratory, <sup>3</sup> Meituan Inc.
:email: Corresponding author
## :eyes: Notice
X-SAM is under active development, and we will continue to update the code and documentation.
We recommend that everyone use English to communicate in issues, as this helps developers from around the world discuss, share experiences, and answer questions together.
*If you have any questions or want to collaborate, please feel free to open an issue or [contact us](mailto:wanghao9610@gmail.com) at `wanghao9610@gmail.com`.*
## :boom: Updates
- **`2025-08-11`**: We released the code for [Evaluation on All Segmentation Benchmarks](#evaluate-on-all-segmentation-benchmarks). All code has now been released except for [Training X-SAM](#stage-3-mixed-fine-tuning).
- **`2025-08-10`**: We released the detailed instructions for [Demo Deployment](#computer-demo).
- **`2025-08-09`**: We released the code for [Training LLaVA-based MLLMs](#llava).
- **`2025-08-08`**: We released the simple code for [Evaluation on All VLM Benchmarks](#evaluate-on-all-vlm-benchmarks).
- **`2025-08-06`**: We are excited to publish the [Technical Report](https://arxiv.org/pdf/2508.04655); please check it out for more technical details.
- **`2025-08-05`**: We provided the [Model Weights](https://huggingface.co/hao9610/X-SAM) on Hugging Face 🤗.
- **`2025-07-26`**: We deployed the [Online Demo](http://47.115.200.157:7861); you can try it now!
## :rocket: Introduction
This repository provides the official PyTorch implementation of X-SAM, together with pre-trained models and code for training, evaluation, visualization, and demos:
* X-SAM introduces a unified multimodal large language model (MLLM) framework, extending the segmentation paradigm from *segment anything* to *any segmentation*, thereby enhancing pixel-level perceptual understanding.
* X-SAM proposes a novel Visual GrounDed (VGD) segmentation task, which segments all instance objects using interactive visual prompts, empowering the model with visually grounded, pixel-wise interpretative capabilities.
* X-SAM presents a unified training strategy that enables co-training across multiple datasets. Experimental results demonstrate that X-SAM achieves state-of-the-art performance on various image segmentation benchmarks, highlighting its efficiency in multimodal, pixel-level visual understanding.
:sparkles: **HIGHLIGHT**: This repository provides unified and effective code for training, evaluation, and visualization of segmentation MLLMs, including LLaVA-based MLLMs. We hope this repository will promote further research on MLLMs.
## :bookmark: Abstract
Large Language Models (LLMs) demonstrate strong capabilities in broad knowledge representation, yet they are inherently deficient in pixel-level perceptual understanding. Although the Segment Anything Model (SAM) represents a significant advancement in visual-prompt-driven image segmentation, it exhibits notable limitations in multi-mask prediction and category-specific segmentation tasks, and it cannot integrate all segmentation tasks within a unified model architecture. To address these limitations, we present X-SAM, a streamlined Multimodal Large Language Model (MLLM) framework that extends the segmentation paradigm from *segment anything* to *any segmentation*. Specifically, we introduce a novel unified framework that enables more advanced pixel-level perceptual comprehension for MLLMs. Furthermore, we propose a new segmentation task, termed Visual GrounDed (VGD) segmentation, which segments all instance objects with interactive visual prompts and empowers MLLMs with visually grounded, pixel-wise interpretative capabilities. To enable effective training on diverse data sources, we present a unified training strategy that supports co-training across multiple datasets. Experimental results demonstrate that X-SAM achieves state-of-the-art performance on a wide range of image segmentation benchmarks, highlighting its efficiency for multimodal, pixel-level visual understanding.
## :mag: Overview
## :bar_chart: Benchmarks
Please refer to the [Benchmark Results](docs/benchmark_results.md) for more details.
## :checkered_flag: Getting Started
### 1. Structure
We provide a detailed project structure for X-SAM. Please follow this structure to organize the project.
📁 Structure (Click to collapse)
```bash
X-SAM
├── datas
│   ├── gcg_seg_data
│   ├── gen_seg_data
│   ├── img_conv_data
│   ├── inter_seg_data
│   ├── LMUData
│   ├── ov_seg_data
│   ├── rea_seg_data
│   ├── ref_seg_data
│   └── vgd_seg_data
├── inits
│   ├── huggingface
│   ├── mask2former-swin-large-coco-panoptic
│   ├── Phi-3-mini-4k-instruct
│   ├── sam-vit-large
│   └── xsam
├── xsam
│   ├── docs
│   ├── requirements
│   └── xsam
│       ├── configs
│       ├── dataset
│       ├── demo
│       ├── engine
│       ├── evaluation
│       ├── model
│       ├── structures
│       ├── tools
│       └── utils
├── wkdrs
│   ├── s1_seg_finetune
│   │   └── ...
│   ├── s2_align_pretrain
│   │   └── ...
│   ├── s3_mixed_finetune
│   │   └── ...
│   └── ...
...
```
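If you are starting from an empty checkout, the sketch below (our own convenience snippet, not part of the official setup) pre-creates the top-level `datas`, `inits`, and `wkdrs` directories from the tree above; their contents are then filled in by following the preparation docs.
```bash
# Convenience sketch (assumption): pre-create the expected top-level layout.
cd X-SAM
mkdir -p datas/{gcg_seg_data,gen_seg_data,img_conv_data,inter_seg_data,LMUData,ov_seg_data,rea_seg_data,ref_seg_data,vgd_seg_data}
mkdir -p inits/{huggingface,mask2former-swin-large-coco-panoptic,Phi-3-mini-4k-instruct,sam-vit-large,xsam}
mkdir -p wkdrs  # training work directories (see the tree above)
```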
### 2. Installation
We provide a detailed installation guide to create an environment for X-SAM; please follow the steps below.
⚙️ Installation (Click to collapse)
```bash
cd X-SAM
export root_dir=$(realpath ./)
cd $root_dir/xsam
# set CUDA_HOME for CUDA 12.4 (optional).
# X-SAM uses CUDA 12.4 by default; if your default CUDA is not 12.4, you need to export CUDA_HOME manually first.
export CUDA_HOME="your_cuda12.4_path"
export PATH=$CUDA_HOME/bin:$PATH
export LD_LIBRARY_PATH=$CUDA_HOME/lib64:$LD_LIBRARY_PATH
echo -e "cuda version:\n$(nvcc -V)"
# create conda env for X-SAM
conda create -n xsam python=3.10 -y
conda activate xsam
conda install pytorch==2.5.1 torchvision==0.20.1 torchaudio==2.5.1 pytorch-cuda=12.4 -c pytorch -c nvidia
# install gcc 11 (optional)
conda install gcc=11 gxx=11 -c conda-forge -y
# install xtuner v0.2.0
git clone -b v0.2.0 https://github.com/InternLM/xtuner.git
cd xtuner
pip install '.[all]'
cd $root_dir/xsam
# install deepspeed
pip install -r requirements/deepspeed.txt
# install xsam requirements
pip install -r requirements/xsam.txt
# install flash-attention
pip install https://github.com/Dao-AILab/flash-attention/releases/download/v2.7.3/flash_attn-2.7.3+cu12torch2.5cxx11abiFALSE-cp310-cp310-linux_x86_64.whl
# install VLMEvalKit for evaluation on VLM benchmarks (optional)
cd $root_dir
git clone -b v0.3rc1 https://github.com/open-compass/VLMEvalKit.git
cd VLMEvalKit
pip install -e .
# install aria2 for downloading datasets and models (optional)
pip install aria2
```
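Before moving on, a quick sanity check (our suggestion, not part of the official guide) can confirm that PyTorch, CUDA, and flash-attention are wired up correctly in the new environment:
```bash
# Optional sanity check for the freshly created xsam environment.
conda activate xsam
python -c "import torch; print('torch', torch.__version__, '| cuda available:', torch.cuda.is_available(), '| built for CUDA', torch.version.cuda)"
python -c "import flash_attn; print('flash_attn', flash_attn.__version__)"
nvcc -V  # should report CUDA 12.4 if CUDA_HOME was exported correctly
```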
### 3. Preparing
There are many datasets and models to prepare; please refer to [Dataset Preparing](docs/dataset_preparing.md) and [Model Preparing](docs/model_preparing.md) for more details.
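For the base checkpoints listed under `inits/` in the project structure, one option is to fetch them from the Hugging Face Hub as sketched below. The repo IDs for Phi-3, SAM, and Mask2Former are our assumptions, and the exact target directories should be cross-checked against [Model Preparing](docs/model_preparing.md).
```bash
# Hedged example: download base checkpoints into inits/ (verify repo IDs and target paths against docs/model_preparing.md).
cd $root_dir
huggingface-cli download microsoft/Phi-3-mini-4k-instruct --local-dir inits/Phi-3-mini-4k-instruct
huggingface-cli download facebook/sam-vit-large --local-dir inits/sam-vit-large
huggingface-cli download facebook/mask2former-swin-large-coco-panoptic --local-dir inits/mask2former-swin-large-coco-panoptic
# released X-SAM weights (target layout assumed; check the doc above)
huggingface-cli download hao9610/X-SAM --local-dir inits/X-SAM
```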
### 4. Training & Evaluation
:sparkles: **One Script for All!**
```bash
cd $root_dir
bash runs/run.sh --modes MODES --config CONFIG_FILE --work-dir WORK_DIR --suffix WORK_DIR_SUFFIX
# MODES: train, segeval, vlmeval, visualize, demo
# bash runs/run.sh -h  # print help
# Read runs/run.sh for more details.
```
Prepare the [Datasets](docs/dataset_preparing.md) and [Models](docs/model_preparing.md), and then refer to the following commands to start training and evaluation.
#### X-SAM
🔥 Training (Click to collapse)
##### Stage 1: Segmentor Fine-tuning
```bash
cd $root_dir
bash runs/run.sh --modes train --config xsam/configs/xsam/phi3_mini_4k_instruct_siglip2_so400m_p14_384/s1_seg_finetune/xsam_sam_large_m2f_e36_gpu16_seg_finetune.py
```
##### Stage 2: Alignment Pre-training
```bash
cd $root_dir
bash runs/run.sh --modes train --config xsam/configs/xsam/phi3_mini_4k_instruct_siglip2_so400m_p14_384/s2_align_pretrain/xsam_phi3_mini_4k_instruct_siglip2_so400m_p14_384_sam_large_e1_gpu16_align_pretrain.py
```
##### Stage 3: Mixed Fine-tuning
```bash
# 🫣 Coming soon...
# ‼️ NOTE: The training code for mixed fine-tuning will be released once the repository reaches more than 500 stars 🌟.
```
🧪 Evaluation (Click to collapse)
##### Evaluate on all segmentation benchmarks
```bash
cd $root_dir
# Evaluate on all segmentation benchmarks.
# NOTE: ONLY generic segmentation and VGD segmentation are supported NOW.
bash runs/run.sh --modes segeval --config xsam/configs/xsam/phi3_mini_4k_instruct_siglip2_so400m_p14_384/s3_mixed_finetune/xsam_phi3_mini_4k_instruct_siglip2_so400m_p14_384_sam_large_m2f_gpu16_mixed_finetune.py --work-dir $root_dir/inits/X-SAM/s3_mixed_finetune/xsam_phi3_mini_4k_instruct_siglip2_so400m_p14_384_sam_large_m2f_gpu16_mixed_finetune
```
##### Evaluate on all VLM benchmarks
```bash
cd $root_dir
# Evaluate on all VLM benchmarks.
bash runs/run.sh --modes vlmeval --config xsam/configs/xsam/phi3_mini_4k_instruct_siglip2_so400m_p14_384/s3_mixed_finetune/xsam_phi3_mini_4k_instruct_siglip2_so400m_p14_384_sam_large_m2f_gpu16_mixed_finetune.py --work-dir $root_dir/inits/X-SAM/s3_mixed_finetune/xsam_phi3_mini_4k_instruct_siglip2_so400m_p14_384_sam_large_m2f_gpu16_mixed_finetune
```
#### LLaVA
🔥 Training (Click to expand)
##### Stage 1: Alignment Pre-training
```bash
cd $root_dir
bash runs/run.sh --modes train --config xsam/configs/llava/phi3_mini_4k_instruct_siglip2_so400m_p14_384/s1_pretrain/llava_phi3_mini_4k_instruct_siglip2_so400m_p14_384_e1_gpu16_pretrain.py
```
##### Stage 2: Instruction Fine-tuning
```bash
cd $root_dir
bash runs/run.sh --modes train --config xsam/configs/llava/phi3_mini_4k_instruct_siglip2_so400m_p14_384/s2_finetune/llava_phi3_mini_4k_instruct_siglip2_so400m_p14_384_e1_gpu16_finetune.py
```
🧪 Evaluation (Click to expand)
##### Evaluate on all VLM benchmarks
```bash
cd $root_dir
bash runs/run.sh --modes vlmeval --config xsam/configs/llava/phi3_mini_4k_instruct_siglip2_so400m_p14_384/s2_finetune/llava_phi3_mini_4k_instruct_siglip2_so400m_p14_384_e1_gpu16_finetune.py
```
## :computer: Demo
We provide detailed instructions for demo deployment, and a demo video is shown below.
🛠️ Deployment (Click to collapse)
```bash
cd $root_dir
bash runs/run.sh --modes demo --config xsam/configs/xsam/phi3_mini_4k_instruct_siglip2_so400m_p14_384/s3_mixed_finetune/xsam_phi3_mini_4k_instruct_siglip2_so400m_p14_384_sam_large_m2f_gpu16_mixed_finetune.py --work-dir $root_dir/inits/X-SAM/s3_mixed_finetune/xsam_phi3_mini_4k_instruct_siglip2_so400m_p14_384_sam_large_m2f_gpu16_mixed_finetune
```
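If the demo runs on a remote server, a standard SSH port forward lets you open the web UI in a local browser. The port 7861 below is only an assumption based on the online demo URL, so adjust it to whatever the demo script actually reports.
```bash
# Assumed access pattern: forward the demo port from the remote server to your machine.
ssh -L 7861:localhost:7861 user@your-server
# then open http://localhost:7861 locally
```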
🎥 Video (Click to collapse)
## :white_check_mark: TODO
- [x] Release the [Online Demo](http://47.115.200.157:7861).
- [x] Release the [Model Weights](https://huggingface.co/hao9610/X-SAM).
- [x] Release the [Technical Report](https://arxiv.org/abs/2508.04655).
- [x] Release the code for [Training LLaVA-based MLLMs](#llava).
- [x] Release the code for [Evaluation on All VLM Benchmarks](#evaluate-on-all-vlm-benchmarks).
- [x] Release the code for [Demo Deployment](#computer-demo).
- [x] Release the code for [Evaluation on All Segmentation Benchmarks](#evaluate-on-all-segmentation-benchmarks).
- [ ] Release the code for [Training X-SAM](#stage-3-mixed-fine-tuning) (once the repository reaches more than 500 stars 🌟).
## :blush: Acknowledgements
This project references several excellent open-source repositories ([xtuner](https://github.com/InternLM/xtuner), [VLMEvalKit](https://github.com/open-compass/VLMEvalKit), [Sa2VA](https://github.com/magic-research/Sa2VA)). Thanks for their wonderful work and contributions to the community.
## :pushpin: Citation
If you find X-SAM helpful for your research or applications, please consider giving us a star 🌟 and citing it with the following BibTeX entry.
```bibtex
@article{wang2025xsam,
title={X-SAM: From Segment Anything to Any Segmentation},
author={Wang, Hao and Qiao, Limeng and Jie, Zequn and Huang, Zhijian and Feng, Chengjian and Zheng, Qingfang and Ma, Lin and Lan, Xiangyuan and Liang, Xiaodan},
journal={arXiv preprint arXiv:2508.04655},
year={2025}
}
```