Can Multimodal LLMs See Materials Clearly? A Multimodal Benchmark on Materials Characterization
Abstract
MatCha is a benchmark for evaluating multimodal large language models on materials characterization image understanding, revealing significant limitations compared to human experts.
Materials characterization is fundamental to acquiring materials information, revealing the processing-microstructure-property relationships that guide material design and optimization. While multimodal large language models (MLLMs) have recently shown promise in generative and predictive tasks within materials science, their capacity to understand real-world characterization imaging data remains underexplored. To bridge this gap, we present MatCha, the first benchmark for materials characterization image understanding, comprising 1,500 questions that demand expert-level domain knowledge. MatCha spans four key stages of materials research and 21 distinct tasks, each designed to reflect authentic challenges faced by materials scientists. Our evaluation of state-of-the-art MLLMs on MatCha reveals a significant performance gap relative to human experts: performance degrades on questions requiring higher-level expertise and sophisticated visual perception, and simple few-shot and chain-of-thought prompting does little to alleviate these limitations. These findings highlight that existing MLLMs still adapt poorly to real-world materials characterization scenarios. We hope MatCha will facilitate future research in areas such as new material discovery and autonomous scientific agents. MatCha is available at https://github.com/FreedomIntelligence/MatCha.
Community
Materials characterization plays a key role in understanding the processing–microstructure–property relationships that guide material design and optimization. While multimodal large language models (MLLMs) have shown promise in generative and predictive tasks, their ability to interpret real-world characterization imaging data remains underexplored.
MatCha is the first benchmark designed specifically for materials characterization image understanding. It provides a comprehensive evaluation framework that reflects real challenges faced by materials scientists.
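As a quick way to get started, below is a minimal sketch of a multiple-choice evaluation loop over MatCha. The Hub dataset id and the field names (`image`, `question`, `options`, `answer`) are assumptions for illustration, and `query_mllm` is a placeholder for whatever model API you use; see the GitHub repository for the official loading and evaluation code.

```python
# Minimal sketch of scoring an MLLM on MatCha-style multiple-choice questions.
# Assumptions (not confirmed by the paper): the dataset is on the Hugging Face Hub
# as "FreedomIntelligence/MatCha" with fields "image", "question", "options",
# and "answer" (a letter such as "A").
from datasets import load_dataset

def query_mllm(image, prompt: str) -> str:
    """Placeholder: send the image and prompt to your MLLM, return its raw answer."""
    raise NotImplementedError

def evaluate(split: str = "test") -> float:
    ds = load_dataset("FreedomIntelligence/MatCha", split=split)  # assumed dataset id
    correct = 0
    for ex in ds:
        # Format options as "A. ...", "B. ...", etc.
        options = "\n".join(f"{chr(65 + i)}. {opt}" for i, opt in enumerate(ex["options"]))
        prompt = f"{ex['question']}\n{options}\nAnswer with a single letter."
        pred = query_mllm(ex["image"], prompt).strip()[:1].upper()
        correct += pred == ex["answer"]
    return correct / len(ds)
```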
The following papers, similar to this one, were recommended by the Semantic Scholar API (via Librarian Bot):
- MatQnA: A Benchmark Dataset for Multi-modal Large Language Models in Materials Characterization and Analysis (2025)
- Towards Better Dental AI: A Multimodal Benchmark and Instruction Dataset for Panoramic X-ray Analysis (2025)
- MME-SCI: A Comprehensive and Challenging Science Benchmark for Multimodal Large Language Models (2025)
- Waste-Bench: A Comprehensive Benchmark for Evaluating VLLMs in Cluttered Environments (2025)
- UniEM-3M: A Universal Electron Micrograph Dataset for Microstructural Segmentation and Generation (2025)
- MedBLINK: Probing Basic Perception in Multimodal Language Models for Medicine (2025)
- GenExam: A Multidisciplinary Text-to-Image Exam (2025)