{ "url": "http://arxiv.org/abs/2404.16375v1", "title": "List Items One by One: A New Data Source and Learning Paradigm for Multimodal LLMs", "abstract": "Set-of-Mark (SoM) Prompting unleashes the visual grounding capability of\nGPT-4V, by enabling the model to associate visual objects with tags inserted on\nthe image. These tags, marked with alphanumerics, can be indexed via text\ntokens for easy reference. Despite the extraordinary performance from GPT-4V,\nwe observe that other Multimodal Large Language Models (MLLMs) struggle to\nunderstand these visual tags. To promote the learning of SoM prompting for\nopen-source models, we propose a new learning paradigm: \"list items one by\none,\" which asks the model to enumerate and describe all visual tags placed on\nthe image following the alphanumeric orders of tags. By integrating our curated\ndataset with other visual instruction tuning datasets, we are able to equip\nexisting MLLMs with the SoM prompting ability. Furthermore, we evaluate our\nfinetuned SoM models on five MLLM benchmarks. We find that this new dataset,\neven in a relatively small size (10k-30k images with tags), significantly\nenhances visual reasoning capabilities and reduces hallucinations for MLLMs.\nPerhaps surprisingly, these improvements persist even when the visual tags are\nomitted from input images during inference. This suggests the potential of\n\"list items one by one\" as a new paradigm for training MLLMs, which strengthens\nthe object-text alignment through the use of visual tags in the training stage.\nFinally, we conduct analyses by probing trained models to understand the\nworking mechanism of SoM. Our code and data are available at\n\\url{https://github.com/zzxslp/SoM-LLaVA}.", "authors": "An Yan, Zhengyuan Yang, Junda Wu, Wanrong Zhu, Jianwei Yang, Linjie Li, Kevin Lin, Jianfeng Wang, Julian McAuley, Jianfeng Gao, Lijuan Wang", "published": "2024-04-25", "updated": "2024-04-25", "primary_cat": "cs.CV", "cats": [ "cs.CV", "cs.AI", "cs.CL" ], "label": "Original Paper", "paper_cat": "LLM AND Reasoning", "gt": "Recent advances in Multimodal Large Language Models (MLLMs) such as GPT-4V (OpenAI, 2023a) show strong performance in multimodal perception and reasoning, enabling various new capabilities (Yang et al., 2023b). Among these, Set-of-Mark Prompting (SoM) (Yang et al., 2023a) is an interesting new working mode that enhances the connection between visual objects and textual tokens via visual prompting, i.e., placing alphanumeric tags on input images. It provides a natural interface for human-computer interaction, by linking visual locations to executable actions through visual tags, and enables various applications such as GUI navigation (Yan et al., 2023b) and robot interaction (Lin et al., 2023a). Furthermore, GPT-4V with SoM (Yang et al., 2023a) can implicitly align visual objects with their corresponding tags. Such alignments (Li et al., 2020; Yang et al., 2021) allow MLLMs to leverage index numbers to perform multi-hop visual reasoning (Yang et al., 2023a; Wei et al., 2022), thereby improving their abilities in multimodal understanding and reasoning tasks. Despite the significant interest in SoM prompting and its broad applications, it remains unclear why GPT-4V can benefit from SoM prompting, We find that other MLLMs, including the state-of- the-art open-sourced models such as LLaVA-v1.5 (Liu et al., 2024), and commercial systems like Gemini (Team et al., 2023), struggle to understand SoM prompts. 
This gap prevents them from leveraging the effectiveness of SoM prompting. In this study, we aim to deepen the understanding of SoM, with the goal of enabling arbitrary MLLMs to benefit from it. We break down SoM prompting into three core capabilities: (1) the ability to identify all tags and read the alphanumeric scene texts written on them; (2) the ability to recognize and pinpoint all objects in an image; (3) the ability to associate tags with corresponding objects in the image. Figure 1: Example conversations from LLaVA and SoM-LLaVA (LLaVA with SoM ability) to demonstrate the effectiveness of our paradigm. Left: Standard prompting on LLaVA-1.5, which fails to correctly answer the questions. Right: Set-of-Mark prompting on SoM-LLaVA. Simply placing tags on the input image can improve the visual reasoning of multimodal LLMs. Despite possessing skills such as OCR and visual recognition that meet the first two capabilities, most MLLMs still fail to fully understand SoM prompts. Therefore, we hypothesize that the crucial missing element is the third capability, associating tags with objects, which requires deliberate training. We further validate that SoM-style data are sparse in common MLLM training sources, and it may be necessary to create a specific dataset. To facilitate such training, we introduce a new learning paradigm named \u201clist items one by one\u201d. We show that by asking MLLMs to comprehensively list all tagged items following the alphanumeric order of visual tags, MLLMs can learn SoM prompting with a small number of item-listing samples. Specifically, we create a tailored dataset by tagging images with Semantic-SAM (Li et al., 2023c; Yang et al., 2023a) and prompting GPT-4V to generate paired text descriptions. With just 10k image-text pairs, MLLMs like LLaVA-1.5 (Liu et al., 2023a) can reliably understand SoM tags. Based on this initial finding, we conduct studies to explore effective recipes that help MLLMs best utilize SoM prompting. We enhance MLLMs with this \u201clist items one by one\u201d objective and assess their SoM performance from two aspects: the model\u2019s ability to recognize and describe the SoM tags, and its ability to use SoM to improve multimodal reasoning (Figure 1). For the first aspect, we design the tag listing task, which requires MLLMs to list and describe all tags in the image, evaluated by listing accuracy. For the second aspect, we evaluate finetuned models on five MLLM benchmarks, including POPE, MME, SEED-Bench, LLaVA-Bench, and MM-Vet, showcasing that MLLMs with SoM can significantly boost multimodal understanding performance. Moreover, our model trained with SoM data outperforms the original MLLM, even without additional visual tags during inference. 
This demonstrates the potential of incorporating our proposed dataset and learning paradigm to boost general MLLM training. Finally, we revisit our original question regarding the working mechanism of SoM. Our preliminary hypothesis is that the SoM capability may be related to OCR and the implicit association among text, tags, and objects. With our trained models, specifically SoM-LLaVA, we gain access to model features and attention maps for an in-depth analysis. We visualize the attention maps to verify tag association. Compared with the original LLaVA model, SoM-LLaVA indeed learns better visual-tag-text associations, reflected in the corresponding attention maps. Our contributions are summarized as follows. \u2022 We present a new training task and data source named \u201clist items one by one,\u201d which effectively bootstraps MLLMs for the SoM visual prompting ability. \u2022 We evaluate our finetuned SoM MLLMs on five multimodal understanding benchmarks, and show improved performance even when SoM tags are removed from the input image. \u2022 We probe the working mechanism of SoM through the trained MLLMs, showcasing the implicit association between visual objects and text tokens when performing SoM prompting.", "main_content": "Visual referring prompting. Other than text prompts, visual referring prompting (Yang et al., 2023b) is another effective approach for interacting with multimodal LLMs, where users directly draw on input images to specify their intent, such as drawing visual pointers or handwriting scene texts. Early studies show that vision-language models can understand visual pointers such as circles (Shtedritski et al., 2023) and dots (Mani et al., 2020). Recent studies (Yang et al., 2023b) show that more powerful multimodal LLMs (OpenAI, 2023a) can handle more complicated prompts such as arrows, boxes, circles, hand drawing, and scene text, as well as their combinations. Another major advancement is Set-of-Mark Prompting (SoM) (Yang et al., 2023a), where numbered tags can be placed on images to associate visual objects with text indices. Its effective visual grounding capability (Kazemzadeh et al., 2014; Yu et al., 2016; Mao et al., 2016) enables various applications (Yan et al., 2023b; Zhang et al., 2023). In this work, we aim to better understand SoM and extend its success from GPT-4V (OpenAI, 2023a) to other open-source multimodal LLMs. Multimodal LLMs. Multimodal LLMs (Alayrac et al., 2022; Zhu et al., 2022; OpenAI, 2023a; Liu et al., 2023b; Li et al., 2023b) extend large language models (OpenAI, 2023b; Gao et al., 2023; Touvron et al., 2023) with visual perception capabilities. Recent studies (Chen et al., 2023) show the effectiveness of training open-source models on GPT-4V-generated detailed description data. Another thread of studies explores having multimodal LLMs predict object locations as bounding boxes (Wang et al., 2023b; Peng et al., 2023) or masks (Rasheed et al., 2023). In contrast to most prior studies that pair images with different text instructions, our study explores a new direction: how visual prompts such as SoM can improve multimodal LLMs. Specifically, we show that the SoM visual tags provide fine-grained alignments between visual objects and text tokens, thereby improving various visual reasoning tasks, both with and without SoM prompting during inference. 
3 Preliminary Examination 3.1 Visualizing SoM Prompting on LLaVA In this section, we first investigate the capacity of LLaVA-1.5 for SoM, concerning its attention sensitivity to the numeric IDs tagged on objects and its answer to the SoM query. We show an example task of listing a series of objects tagged with numeric IDs in Figure 2, in which the attention map is extracted from LLaVA-1.5 based on the SoM query (e.g., \u201cI have labeled a bright numeric ID at the center for each visual object in the image. Please enumerate their names.\u201d). The top 20 image patches with the highest average attention weights across the user query tokens are highlighted in transparent red regions. We can observe from the highly attended regions that the numeric ID tags can be easily and correctly attended by LLaVA-1.5, along with their associated objects (e.g., bird, vase, and branches). Such capacity for locating numeric ID tags may have been acquired by LLaVA-1.5 from OCR-related pretraining tasks, and may also benefit from the strong OCR abilities of the ViT feature encoder (Radford et al., 2021) adopted by LLaVA-1.5. However, the response prompted by the user query in the first example of Figure 2 suggests that LLaVA-1.5 cannot follow the SoM instruction to list all the items. Instead of providing object descriptions corresponding to all the numeric ID tags, LLaVA-1.5 responds with a general image caption, due to the large portion of image captioning samples in its pretraining stage. From the second example of Figure 2, we can also observe that although LLaVA-1.5 generates a list of tag IDs with object names, it cannot accurately associate the tags with the corresponding objects, causing the model to hallucinate the descriptions of these objects. Figure 2: Two examples of SoM prompting in LLaVA-1.5. Left: Attention map extracted from LLaVA-1.5 on the image of a bird perching on a branch, where 3 objects are tagged. Right: Attention map extracted from LLaVA-1.5 on the image of a vase placed on a table, where 7 objects are tagged. However, LLaVA-1.5 lists more than 7 object names, which are repetitions of previous object names.
# | Dataset | #Text | Text w/ Listing | Source of Text
1 | LLaVA-Pretrain-CC3M-595K | 595.4K | 0% | Raw CC3M image captions.
2 | LLaVA-Pretrain-LCS-558K | 558.1K | 0% | Captioned by BLIP.
3 | LLaVA-v1.5-Mix665K | 3356.2K | 0.72% | Rule-based, or generated by ShareGPT or GPT4-0314.
4 | ShareGPT4V | 102.0K | 0.21% | Generated by GPT4-Vision.
5 | CogVLM | 333.5K | 7.16% | Generated by MiniGPT4 or by GPT4-0314.
Table 1: Examined pretraining (1-2) and instruction-tuning (3-5) datasets in our preliminary study. 
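To make the attention analysis of Section 3.1 concrete, below is a minimal sketch (not the paper's released code) of how the top-20 most-attended image patches could be extracted from a LLaVA-style model. It assumes a forward pass that returns per-layer attention tensors (e.g., output_attentions=True in a HuggingFace-style interface); the index bookkeeping (image_token_start, query_token_positions) and the 24x24 patch grid are hypothetical and depend on the actual prompt template and vision encoder.

```python
import torch

def top_attended_patches(attentions, image_token_start, num_image_tokens,
                         query_token_positions, top_k=20, grid_size=24):
    """Rank image patches by how strongly the user-query tokens attend to them.

    attentions: tuple of per-layer tensors of shape [batch, heads, seq, seq].
    image_token_start, num_image_tokens: where the visual tokens sit in the
        input sequence (depends on the model's prompt template).
    query_token_positions: indices of the user-query text tokens.
    """
    # Average over layers and heads -> [seq, seq] for the first batch element.
    attn = torch.stack(attentions).mean(dim=(0, 2))[0]

    # Attention from each query token to each image patch, averaged over query tokens.
    image_slice = slice(image_token_start, image_token_start + num_image_tokens)
    patch_scores = attn[query_token_positions][:, image_slice].mean(dim=0)

    # Top-k flat patch indices mapped back to a grid
    # (24x24 patches for a 336px CLIP ViT-L/14 encoder).
    top_idx = patch_scores.topk(top_k).indices
    rows, cols = top_idx // grid_size, top_idx % grid_size
    heatmap = patch_scores.reshape(grid_size, grid_size)
    return list(zip(rows.tolist(), cols.tolist())), heatmap
```

The returned patch coordinates can then be overlaid on the input image (e.g., as transparent red regions) to reproduce a visualization in the spirit of Figure 2.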
3.2 Finding SoM Data in Existing Training Sources We further look into existing pretraining and instruction-tuning (IT) datasets, aiming to inspect whether they contain text content with listings or images with SoM annotations. We examine the pretraining datasets of LLaVA-v1 and v1.5 (Liu et al., 2023b;a), and the IT datasets used by LLaVA-v1.5, ShareGPT4V (Chen et al., 2023), and CogVLM (Wang et al., 2023a). Table 1 shows the source of text in each dataset and the percentage of text content with a listing format. The text in the two pretraining datasets for LLaVA consists of image captions (either the raw captions or captions generated by BLIP (Dai et al., 2023)), and we did not find any text with listings in them using our parser. Aside from image captions, the IT datasets also contain instructions related to other visual tasks such as VQA. We notice that the answers provided by GPT-4(V) models sometimes structure the text in a listing manner (e.g., listing possible reasons for a question, or listing observed objects in the image). More examples can be found in Appendix A.6. The instruction-following dataset used by CogVLM has the highest percentage of text with listings (\u223c7%). Through our interactions with these models, we also find that CogVLM is better at generating listing-style data than LLaVA-1.5. We add tags to MSCOCO-2017 images following the SoM (Yang et al., 2023a) format, and train a binary classifier with ViT/B-16 (Dosovitskiy et al., 2020). We use the classifier to filter the images in the two LLaVA pretraining datasets and take the top 2k images with the highest scores for each dataset. We then manually check these top 2k images, finding 12 images with tagging in CC3M-595K (\u223c0.002%) and 86 images with tagging in LCS-558K (\u223c0.015%). Figure 15 shows a few images with tagging. Given that tagged images are sparse in those datasets and the SoM prompting performance of open-source MLLMs is unsatisfying, it may be worthwhile to design a tailored dataset that empowers open-source MLLMs with this emergent ability, similar to what GPT-4V is capable of. 4 Dataset Creation and Training Motivated by the above analysis, in this section we introduce the pipeline used to create our dataset. First, in Section 4.1, we use Semantic-SAM to generate semantic visual prompts in the form of numeric tags for each image. We then discuss the learning paradigm of \u201clist items one by one\u201d in Section 4.2. Finally, we use the visually prompted images to generate text data in Section 4.3. 4.1 Image Source and Visual Prompting Generation There are various open-source image datasets available (Deng et al., 2009; Lin et al., 2014; Schuhmann et al., 2022; Yan et al., 2023a). We use MS-COCO (Lin et al., 2014) as the image source to create our SoM dataset, since it contains comprehensive human annotations with bounding boxes, masks, and captions. It has also been widely used for visual instruction tuning (Liu et al., 2023b; Wang et al., 2023a; Chen et al., 2023), which benefits controlled experiments as well as comparisons with previous work. The first step is to create visual prompts by placing numeric tags at proper locations. Following SoM (Yang et al., 2023a), we experiment with segmentation models including SEEM (Zou et al., 2023), Semantic-SAM (Li et al., 2023c), and SAM (Kirillov et al., 2023). Empirically, we find that Semantic-SAM provides the annotation granularity that best fits COCO images, and thus use it to create tagged images for our dataset. 
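As a rough illustration of the tagging step in Section 4.1, the sketch below overlays numeric ID tags at mask centroids with PIL. It assumes masks are available as boolean arrays (e.g., exported from a segmentation model such as Semantic-SAM) and uses a simple centroid heuristic rather than the official SoM mark-allocation algorithm, so tag placement and styling are only approximate.

```python
import numpy as np
from PIL import Image, ImageDraw, ImageFont

def tag_image(image: Image.Image, masks: list) -> Image.Image:
    """Overlay numeric ID tags ("1", "2", ...) at the centroid of each mask.

    masks: list of boolean numpy arrays of shape (H, W), one per object,
    e.g., exported from Semantic-SAM (output format assumed here).
    """
    tagged = image.copy()
    draw = ImageDraw.Draw(tagged)
    font = ImageFont.load_default()  # swap in a TrueType font for larger tags

    for idx, mask in enumerate(masks, start=1):
        ys, xs = np.nonzero(mask)
        if xs.size == 0:
            continue  # skip empty masks
        # Simple centroid placement; the official SoM pipeline uses a more
        # careful mark-allocation strategy to keep tags inside objects and
        # avoid overlaps between marks.
        cx, cy = int(xs.mean()), int(ys.mean())
        draw.rectangle([cx - 8, cy - 8, cx + 8, cy + 8], fill="white", outline="black")
        draw.text((cx - 4, cy - 6), str(idx), fill="black", font=font)
    return tagged
```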
4.2 A Learning Paradigm: List Items One by One After obtaining the image data with semantic tags, the next question is how to design the instruction data that best distills the SoM visual prompting ability. A common approach (Liu et al., 2023b; Chen et al., 2023) in multimodal instruction-following data creation is to design and collect \u201cquestion-answering\u201d style samples. This is often done by prompting ChatGPT/GPT-4 or alternative open-source models. Given an image $I$ and optional metadata $M_I$ such as captions and bounding boxes, various questions or instructions $X_Q^{(i)}$ are posed, and the corresponding answers $X_A^{(i)}$ from large models are collected. However, such general question-answering data may not be the most effective for distilling the desired SoM prompting capability, due to the inadequate mention of objects in the text. For SoM prompting, one core ability of interest is to associate numbered tags with visual objects in the image, thereby enabling effective reference to visual objects via text tokens. In general QA data, however, it is rare for multiple objects to be mentioned, even in an extended multi-turn conversation. To enhance tag association, we propose a simple and effective approach: list items one by one, where the model is asked to comprehensively describe all tagged items within an image. Given an image $I_T$ with $N$ tags on it, we ask the model to enumerate all items in numerical order: $\{X_{obj}^1, X_{obj}^2, \cdots, X_{obj}^N\}$, where $X_{obj}^j$ is the textual description of the $j$-th item, tagged by ID $j$ in the image. Beyond promoting SoM learning, listing items one by one is also effective for general multimodal LLM training: if a model learns to list items in images in a specific order (in our case, the order determined by the visual numeric tags), it gains a comprehensive and fine-grained understanding of the images. This can directly benefit visual grounding and reasoning, which we verify through standard multimodal QA and chat evaluation benchmarks. Compared with existing visual instruction tuning datasets, such as LLaVA-665K (Liu et al., 2023a) and ShareGPT-4V (Chen et al., 2023), another difference is the implicit spatial information encoded by the visual tags in SoM prompting. Converting images into the language space inevitably loses information, especially spatial locations. For example, \u201ca girl on the right\u201d can only vaguely imply the position of the girl. However, with SoM visual prompting, we provide precise visual guidance on the image. Therefore, our data can be viewed as a form of dense captioning with a new way of encoding spatial information. 4.3 Text Data Generation via GPT-4V With the visually prompted images, the final step of dataset creation is to generate the corresponding text data. To automate this process, we leverage GPT-4V (OpenAI, 2023a) to generate the listing data $\{X_{obj}^1, X_{obj}^2, \cdots, X_{obj}^N\}$, following the order of visual tags in the images. However, we find that simply prompting the model to list items in a zero-shot manner can lead to noisy and biased generation results, where the model may refer a tag to a distant object that is easy to describe (see examples in Appendix A.4). To mitigate this problem, we seek two complementary solutions: (1) We modify the system message of GPT-4V to avoid assigning tags to distant objects.
(2) We manually design a few correct listing samples via human annotations, and use them as seed examples for in-context learning to query GPT-4V. The details of our template are in the Appendix. Figure 3: Performance analysis on tag listing, as the number of listing training samples grows from 10k to 100k. (a) Ablation on model sizes with LLaVA-1.5; (b) ablation on data sources with LLaVA-1.5-7B. list+mix-665k mixes the listing data with the 665k instruction tuning data from (Liu et al., 2023a); list+nonocr excludes the OCR and text data from the full 665k data, resulting in 563k samples; list+ocrtext mixes the listing data with only the OCR and text data from the full 665k data, resulting in 102k samples. The green dashed line in Figure 3a is the zero-shot result from GPT-4V. In addition to listing, we also consider conversational data similar to LLaVA (Liu et al., 2023b), where GPT-4V is asked to generate multi-turn question answering between an AI assistant and a person asking questions about the photo. Given a tagged image $I_T$, we use GPT-4V to generate instruction-following data in the form of $\{\text{Person}: I_T \, X_Q^{(i)}, \text{Assistant}: X_A^{(i)}\}$. 4.4 Model Training We take the model from the pretraining stage of LLaVA-1.5 (Liu et al., 2023a) as the base model, and continue finetuning by mixing the instruction tuning data of LLaVA-1.5 with our collected visual prompting data. For SoM-listing, we create 40 task templates as human instructions (e.g., \u201cplease enumerate object names in the tagged image\u201d), and treat them as standard conversational data. We use the same next-token prediction training objective to train on general QA, SoM-QA, and SoM-listing data. Specifically, we maximize the conditional log likelihood as follows: $-\log p(X_A \mid X_v, X_Q) = -\log \prod_{i=1}^{L} p_{\Theta}(x_i \mid I/I_T, X_Q, X_{A,<i})$, where $L$ is the length of the answer sequence $X_A$, $I/I_T$ denotes the input image without or with SoM tags, $X_{A,<i}$ denotes the answer tokens preceding the current prediction token $x_i$, and $\Theta$ are the trainable model parameters.
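For clarity, the following is a minimal sketch of this answer-only next-token objective, assuming the prompt and answer have already been tokenized into a single sequence and that a hypothetical answer_mask marks which positions belong to $X_A$; it mirrors the standard visual instruction tuning loss rather than the exact training code.

```python
import torch
import torch.nn.functional as F

def answer_only_nll(logits, input_ids, answer_mask):
    """Next-token prediction loss computed over answer tokens only.

    logits:      [batch, seq, vocab] from the MLLM, conditioned on the image
                 (tagged or untagged) and the question.
    input_ids:   [batch, seq] token ids of the full sequence (prompt + answer).
    answer_mask: [batch, seq] bool, True where the token belongs to the answer
                 X_A; image and question tokens are excluded from the loss.
    """
    # Shift so that position i predicts token i + 1.
    shift_logits = logits[:, :-1, :]
    shift_labels = input_ids[:, 1:].clone()
    shift_mask = answer_mask[:, 1:]

    # Mask out non-answer positions, mirroring -log p(X_A | I/I_T, X_Q).
    shift_labels[~shift_mask] = -100
    return F.cross_entropy(
        shift_logits.reshape(-1, shift_logits.size(-1)),
        shift_labels.reshape(-1),
        ignore_index=-100,
    )
```

The same loss is applied uniformly to general QA, SoM-QA, and SoM-listing samples; only the conditioning image (with or without tags) and the target text differ across data types.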