---
license: apache-2.0
language:
- en
tags:
- MLLM
---

# ✨ X-SAM

**From Segment Anything to Any Segmentation**

[Hao Wang](https://github.com/wanghao9610)<sup>1,2</sup>, [Limeng Qiao](https://scholar.google.com/citations?user=3PFZAg0AAAAJ&hl=en)<sup>3</sup>, [Zequn Jie](https://scholar.google.com/citations?user=4sKGNB0AAAAJ&hl)<sup>3</sup>, [Zhijian Huang](https://zhijian11.github.io/)<sup>1</sup>, [Chengjian Feng](https://fcjian.github.io/)<sup>3</sup>, [Qingfang Zheng](https://openreview.net/profile?id=%7EZheng_Qingfang1)<sup>1</sup>, [Lin Ma](https://forestlinma.com/)<sup>3</sup>, [Xiangyuan Lan](https://scholar.google.com/citations?user=c3iwWRcAAAAJ&hl)<sup>2</sup> 📧, [Xiaodan Liang](https://scholar.google.com/citations?user=voxznZAAAAAJ&hl)<sup>1</sup> 📧

<sup>1</sup> Sun Yat-sen University, <sup>2</sup> Peng Cheng Laboratory, <sup>3</sup> Meituan Inc.

📧 Corresponding author
[arXiv](https://arxiv.org/abs/2508.04655) | [GitHub](https://github.com/wanghao9610/X-SAM)
## 🚀 Introduction

* X-SAM introduces a unified multimodal large language model (MLLM) framework that extends the segmentation paradigm from *segment anything* to *any segmentation*, enhancing pixel-level perceptual understanding.
* X-SAM proposes a novel Visual GrounDed (VGD) segmentation task, which segments all instance objects using interactive visual prompts, endowing the model with visually grounded, pixel-wise interpretative capabilities.
* X-SAM presents a unified training strategy that enables co-training across multiple datasets. Experimental results demonstrate that X-SAM achieves state-of-the-art performance on a wide range of image segmentation benchmarks, highlighting its effectiveness for multimodal, pixel-level visual understanding.

## 🔖 Abstract

Large Language Models (LLMs) demonstrate strong capabilities in broad knowledge representation, yet they are inherently deficient in pixel-level perceptual understanding. Although the Segment Anything Model (SAM) represents a significant advancement in visual-prompt-driven image segmentation, it exhibits notable limitations in multi-mask prediction and category-specific segmentation tasks, and it cannot integrate all segmentation tasks within a unified model architecture. To address these limitations, we present X-SAM, a streamlined Multimodal Large Language Model (MLLM) framework that extends the segmentation paradigm from *segment anything* to *any segmentation*. Specifically, we introduce a novel unified framework that enables more advanced pixel-level perceptual comprehension for MLLMs. Furthermore, we propose a new segmentation task, termed Visual GrounDed (VGD) segmentation, which segments all instance objects with interactive visual prompts and empowers MLLMs with visually grounded, pixel-wise interpretative capabilities. To enable effective training on diverse data sources, we present a unified training strategy that supports co-training across multiple datasets. Experimental results demonstrate that X-SAM achieves state-of-the-art performance on a wide range of image segmentation benchmarks, highlighting its effectiveness for multimodal, pixel-level visual understanding.

👉 **More details can be found in the [GitHub](https://github.com/wanghao9610/X-SAM) repository.** An illustrative usage sketch is provided at the end of this card.

## 📌 Citation

If you find X-SAM helpful for your research or applications, please consider giving us a like 💖 and citing it with the following BibTeX entry.

```bibtex
@article{wang2025xsam,
  title={X-SAM: From Segment Anything to Any Segmentation},
  author={Wang, Hao and Qiao, Limeng and Jie, Zequn and Huang, Zhijian and Feng, Chengjian and Zheng, Qingfang and Ma, Lin and Lan, Xiangyuan and Liang, Xiaodan},
  journal={arXiv preprint arXiv:2508.04655},
  year={2025}
}
```
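## 🧪 Usage Sketch

The sketch below is a minimal, hypothetical illustration of the two kinds of prompting described above: text-driven segmentation and VGD segmentation, where interactive visual prompts (points, boxes, scribbles) cause *all* matching instances to be segmented. The class and method names (`XSAMPredictor`, `segment_with_text`, `segment_with_visual_prompts`) are placeholders and are **not** the released X-SAM API; please refer to the [GitHub](https://github.com/wanghao9610/X-SAM) repository for the actual inference code.

```python
# Hypothetical interface sketch. Names below are NOT the official X-SAM API;
# they only illustrate the inputs and outputs implied by the tasks described
# in this card (text-driven segmentation and VGD segmentation).
from dataclasses import dataclass
from typing import List, Literal, Sequence

import numpy as np


@dataclass
class VisualPrompt:
    """An interactive visual prompt in image coordinates."""
    kind: Literal["point", "box", "scribble"]
    coords: Sequence[float]  # e.g. (x, y) for a point, (x1, y1, x2, y2) for a box


@dataclass
class SegmentationResult:
    masks: np.ndarray   # (num_instances, H, W) boolean instance masks
    labels: List[str]   # one category/description per predicted instance


class XSAMPredictor:  # hypothetical wrapper, not the released implementation
    def segment_with_text(self, image: np.ndarray, text: str) -> SegmentationResult:
        """Text/category-driven segmentation, e.g. 'segment every person'."""
        h, w = image.shape[:2]
        return SegmentationResult(np.zeros((0, h, w), dtype=bool), [])  # placeholder

    def segment_with_visual_prompts(
        self, image: np.ndarray, prompts: List[VisualPrompt]
    ) -> SegmentationResult:
        """VGD segmentation: visual prompts select objects, and all matching
        instances are segmented, not only the single prompted region."""
        h, w = image.shape[:2]
        return SegmentationResult(np.zeros((0, h, w), dtype=bool), [])  # placeholder


if __name__ == "__main__":
    image = np.zeros((480, 640, 3), dtype=np.uint8)  # dummy RGB image
    predictor = XSAMPredictor()
    out = predictor.segment_with_visual_prompts(
        image, [VisualPrompt(kind="point", coords=(320.0, 240.0))]
    )
    print(out.masks.shape, out.labels)
```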