{
"url": "http://arxiv.org/abs/2404.16670v1",
"title": "EmoVIT: Revolutionizing Emotion Insights with Visual Instruction Tuning",
"abstract": "Visual Instruction Tuning represents a novel learning paradigm involving the\nfine-tuning of pre-trained language models using task-specific instructions.\nThis paradigm shows promising zero-shot results in various natural language\nprocessing tasks but is still unexplored in vision emotion understanding. In\nthis work, we focus on enhancing the model's proficiency in understanding and\nadhering to instructions related to emotional contexts. Initially, we identify\nkey visual clues critical to visual emotion recognition. Subsequently, we\nintroduce a novel GPT-assisted pipeline for generating emotion visual\ninstruction data, effectively addressing the scarcity of annotated instruction\ndata in this domain. Expanding on the groundwork established by InstructBLIP,\nour proposed EmoVIT architecture incorporates emotion-specific instruction\ndata, leveraging the powerful capabilities of Large Language Models to enhance\nperformance. Through extensive experiments, our model showcases its proficiency\nin emotion classification, adeptness in affective reasoning, and competence in\ncomprehending humor. The comparative analysis provides a robust benchmark for\nEmotion Visual Instruction Tuning in the era of LLMs, providing valuable\ninsights and opening avenues for future exploration in this domain. Our code is\navailable at \\url{https://github.com/aimmemotion/EmoVIT}.",
"authors": "Hongxia Xie, Chu-Jun Peng, Yu-Wen Tseng, Hung-Jen Chen, Chan-Feng Hsu, Hong-Han Shuai, Wen-Huang Cheng",
"published": "2024-04-25",
"updated": "2024-04-25",
"primary_cat": "cs.CV",
"cats": [
"cs.CV",
"cs.AI"
],
"label": "Original Paper",
"paper_cat": "LLM AND Reasoning",
"gt": "Visual emotion recognition, a key area within artificial intelligence and computer vision, aims to predict human emotions based on visual cues such as facial expressions and body language. This technology is essential in bridging the gap between human affective states and machine understanding. Its diverse applications [10, 13, 22, 39], spanning from improving human-computer interaction to aiding in mental health assessment, underscore its significance. Accurate emotion recognition is vital for enhancing user experience and ensuring information security, as it helps prevent emotional manipulation and misinformation [32]. Developing robust emotion recognition models is not only a technical challenge but also a step towards more empathetic and intuitive AI systems, paving the way for more efficient and natural human-computer interactions.
[Figure 1. Illustration of the importance of instruction-following ability in visual emotion understanding.]
The AI community has recently shown a growing interest in developing foundational vision models, e.g., Flamingo [8], LLaVA [7], BLIP2 [14]. These models excel in open-world visual understanding, tackling several vision tasks such as classification, detection, segmentation, and captioning. In contrast, current large-scale multimodal models are still in their infancy when it comes to emotion perception [20]. As illustrated in Fig. 1, when GPT-4 [29] is directly queried about the emotional category of an image, the model tends to provide incorrect responses. However, the model delivers accurate responses when provided with revised instructions. To fully leverage the potential of existing vision-based large models, our approach is based on the concept of Instruction Tuning. This effective strategy is aimed at teaching language models to follow natural language instructions, a technique proven to enhance their generalization performance across unseen tasks [7, 9, 21].
In this work, we focus on developing the model's proficiency in understanding and following instructions related to emotional contexts. This approach highlights the importance of fine-tuning the model's instruction-following capabilities, enabling it to interpret and respond to emotional content effectively. This is achieved by leveraging its pre-existing knowledge base, thereby eliminating the necessity for an emotion-specific architectural framework.
To address the notable challenges encountered in Instruction Tuning for visual emotion recognition, especially the lack of specific instruction data, we introduce a novel self-generation pipeline explicitly crafted for visual emotion recognition by using GPT-4 [29]. This innovative pipeline excels in generating a diverse array of (image, instruction, output) instances, thereby notably enhancing the dataset with a more extensive and task-oriented variety of examples. This approach not only overcomes the challenge of limited data availability but also reduces the dependence on human labor. Therefore, it streamlines the process, enabling more efficient and effective emotion recognition.
Additionally, Instruction Tuning has been criticized for its emphasis on surface-level features like output patterns and styles, rather than achieving a profound comprehension and assimilation of tasks [23].
To tackle this issue and enhance the diversity and creativity of instruction data, our dataset includes instructions that demand complex reasoning, going beyond basic question-and-answer formats. This is further enriched by incorporating visual cues such as brightness, colorfulness, scene type, object class, facial expressions, and human actions. These aspects are pivotal in fostering a nuanced comprehension of visual emotions, thus allowing the model to generate more precise and contextually appropriate interpretations [13].
After generating the emotion visual instruction data, we propose an Emotion Visual Instruction Tuning (EmoVIT) framework, leveraging the foundation of InstructBLIP [9]. This framework incorporates an emotion-centric, instruction-aware module that proficiently guides Large Language Models (LLMs) in assimilating the nuances of emotion instructions. Our work signifies a paradigm shift, presenting a new era of instruction-based learning for visual emotion understanding that relies less on explicit training data. Remarkably, as shown in Fig. 2, our approach requires only about 50% of the training data typically needed yet exceeds the performance of previous visual emotion recognition methods and popular Visual Instruction Tuning methods.
[Figure 2. Performance comparison on the EmoSet test set [13] (Accuracy %). Supervised emotion recognition methods: WSCNet [16] 76.32, StyleNet [19] 77.11, PDANet [17] 76.95, StimuliAware [10] 78.40, MDAN [12] 75.75. Visual Instruction Tuning methods: BLIP2 [14] 46.79, InstructBLIP [9] 42.20, Flamingo [8] 29.59, LLaVA [7] 44.03. Ours*: 83.36.]
Our contributions can be summarized as follows:
\u2022 We explore the potential of the Visual Instruction Tuning paradigm for emotion comprehension and introduce the concept of Emotion Visual Instruction Tuning.
\u2022 After thoroughly considering the unique characteristics of visual emotion recognition, we develop a novel GPT-assisted pipeline for generating emotion visual instruction data. This approach effectively bridges the gap in available annotated instruction data within this specific domain.
\u2022 Building upon the foundation of InstructBLIP, our EmoVIT architecture integrates emotion domain-specific instruction data, harnessing the robust capabilities of LLMs to boost performance. The extensive experiments demonstrate our model's proficiency in emotion classification, affective reasoning, and comprehension of humour.",
"main_content": "2.1. Visual Emotion Recognition
A key challenge in visual emotion recognition is bridging the gap between an image's visual cues and the emotions it portrays [11, 12, 35]. While traditional efforts, e.g., Xu et al.'s multi-level dependent attention network [12], focus on visual models for emotional feature learning, recent advancements like EmoSet [13] offer rich emotion-laden datasets with 3.3 million images. The rise of multimodal models, such as the GPT series [29], has further propelled Vision-Language Recognition. However, fully leveraging these models in emotion recognition is an area ripe for exploration. Our work leads the way in utilizing large-scale models for Emotion Visual Instruction Tuning.
2.2. Visual Instruction Tuning
Current Large Language Models (LLMs) have extensive knowledge bases, but their effectiveness depends on accurately interpreting human instructions due to a mismatch between training goals and user expectations. LLMs are trained to minimize prediction errors, whereas users expect helpful and safe instruction-following. Instruction Tuning addresses this by teaching models to follow natural language instructions, enhancing generalization to new tasks. FLAN [21] demonstrated that training a large model on instruction-based datasets improves zero-shot performance. This approach has extended to vision-language tasks, with BLIP2 [14] and LLaVA [7] adapting instruction-tuned LLMs for visual inputs. InstructBLIP [9] introduces instruction-aware visual feature extraction and the Q-Former, enabling more flexible, instruction-driven feature extraction. As a novel area, visual emotion instruction tuning lacks benchmarks or guidelines for creating emotion instruction data. Our work pioneers the use of large-scale models to develop an emotion instruction data pipeline, overcoming the limitations of manual annotation.
3. Method
3.1. Preliminary of Visual Instruction Tuning
In the deep learning era, visual tuning has experienced significant paradigm shifts, as depicted in Fig. 3.
[Figure 3. The comparison of different visual tuning paradigms.]
In Fig. 3(a), conventional tuning methodologies encompass Full fine-tuning, Head-oriented, and Backbone-oriented techniques, capitalizing on large-scale pre-trained models. Predominantly, thoroughly fine-tuning these models for specific tasks, conducted end-to-end, is recognized as a highly effective strategy. However, this method requires maintaining separate copies of the backbone parameters for each distinct task, posing challenges in storage and deployment. Alternatively, Visual Prompt Tuning (VPT) [24] presents an efficient substitute for full fine-tuning within large-scale vision Transformer models. It achieves this by employing a minimal fraction of trainable parameters in the input space while maintaining a frozen backbone model. The objective function for Visual Prompt Tuning is given by:
min_{θ_P} L(f(X, P; θ_P), Y)    (1)
where min_{θ_P} denotes the minimization over the prompt parameters P, L is the loss function, f represents the model function with the input image X, the prompt parameters P, and the learnable model parameters θ_P as input, and Y is the target output. Visual Prompt Tuning focuses on optimizing LLMs using a small set of parameters, whereas Visual Instruction Tuning (VIT) aims to improve the model's comprehension of instructions, thereby addressing the model's shortcomings in specific domains.
This type of method aims to enhance the model's proficiency in following instructions, leveraging the capabilities of the latest foundation models, e.g., Llama [25] and BLIP2 [14]. Instructions serve as guiding constraints, shaping the model's outputs to conform to specific response characteristics and domain-relevant knowledge. This approach enables human monitoring of the model's behavior, thereby assuring alignment with the desired outcomes. Moreover, Instruction Tuning is computationally efficient, allowing LLMs to swiftly adapt to particular domains without extensive retraining or architectural alterations. The objective function for Visual Instruction Tuning is given by:
min_{θ_tunable} L(g(X, I, C; θ_tunable), Y)    (2)
where min_{θ_tunable} denotes the minimization over the tunable parameters θ_tunable in the Instruction Tuning Module, L is the loss function, g is the model function with the instruction I, the image X, other contexts C, and the tunable parameters θ_tunable as input, and Y denotes the target output. The optional context C is not just raw data; it encompasses descriptive or directive information guiding the model on how to process the input or which task to execute, e.g., an image caption. It is integral to the model's understanding and execution of tasks based on specific instructions or guidelines.
[Figure 4. The overall architecture of our proposed method: (a) Emotion Visual Instruction Data Generation; (b) Emotion Visual Instruction Tuning Architecture; (c) the details of the Q-Former Module. The Emotion Instruction data generated by (a) will be used for Emotion Visual Instruction Tuning in (b). During Emotion Visual Instruction Tuning, given an input image, the frozen Image Encoder initiates the process by extracting visual features. Emotion Instructions generated by (a) subsequently interact with the query embeddings through the learnable Q-Former. This interaction is key to drawing out image features that are relevant to the task at hand. As a result, the frozen LLM receives visual information conducive to instruction following.]
3.2. GPT-assisted Emotion Visual Instruction Data Generation
Previous methodologies commonly employed a consistent template-based set of instructions for every image within a dataset across various specific tasks [9]. For instance, a standard instruction such as “Briefly describe the content of the image” was employed uniformly across all images for Image Captioning. In this way, the model may not be able to adequately capture the unique characteristics of each image. Moreover, this one-size-fits-all approach often leads to suboptimal performance in emotion recognition tasks that require nuanced perception and differentiation of ambiguous emotion classes. Since the topic of Emotion Visual Instruction Tuning is still in its infancy, no benchmarks or guidelines have been proposed so far for constructing emotion instruction data.
Based on the recent successes of machine-generated instructions demonstrated in LLaVA [7], our work pioneers the use of existing LLMs to create a pipeline for self-generating emotion instructions. Different from previous template-based and one-size-fits-all instruction data, we propose an instance-wise and LLM-assisted visual emotion instruction data pipeline. This methodology transcends the constraints of manual annotation by employing GPT-4 [29] to generate instance-wise, tailored instruction data that dynamically corresponds to visual content.
Prior to the development of instructional data for the visual emotion recognition task, it is imperative to confront a fundamental academic problem: What types of visual clues are pivotal in identifying emotions? This necessitates a careful consideration of the unique characteristics inherent to the task, along with a comprehensive understanding of the potential visual cues associated with human emotions. In this work, we propose a novel visual instruction data mechanism to reduce the inherent subjectivity and ambiguity in emotional interpretation. Specifically, we integrate a broad spectrum of emotion attributes across multiple levels: low-level attributes (e.g., brightness, colorfulness), mid-level attributes (e.g., scene type and object class), and high-level attributes (e.g., facial expressions and human actions), building upon insights from previous work [13]. This comprehensive strategy not only aligns with the intricate nature of emotions but also significantly enhances the model's capability to interpret and understand visual emotional cues more accurately and holistically.
The overall pipeline of our proposed emotion visual instruction data is shown in Fig. 4 (a). For an image X_img, three types of image-related contexts are essential for GPT-4 to generate emotion instruction data: (i) a caption X_c, (ii) an emotion attribute list X_attr, which includes emotion class, brightness, colorfulness, scene type, object class, facial expression, and human action, and (iii) the system prompt, designed to enable GPT-4 to comprehend the specific task requirement (a detailed description of the system prompt is provided in the supplementary materials). We first manually design a few examples which are used as seed examples for in-context learning to query GPT-4. This operation leverages the model's ability to extrapolate from given examples, enhancing its understanding and response accuracy based on the principles of few-shot learning [7].
Our generated emotion instruction data includes three types: Categorical, Conversation, and Reasoning. Building upon previous research [7], our generated instruction data adheres to the dialogue format, exemplified in Fig. 5. Our strategy for generating emotion instruction data adopts a progressive approach from simple to complex. Initially, for the Categorical data, we transform the associated emotion class of the image into a structured format. This process serves as the foundational component of our emotion instruction data. For the Conversation data, our framework is designed to create dialogues in which the GPT assistant interacts with an inquirer, focusing on the emotion attributes of the image. In this setup, the assistant's responses are tailored to interpret and describe the image as though it were within its own visual field, thereby providing insights from an observational viewpoint. The scope of questions posed is comprehensive, encompassing the types of objects depicted, their actions, and the dynamics of their interrelationships. The dialogues we generate fall into two categories: (i) Basic Interaction, focusing on the provided emotion attribute list with simple, direct characteristics, and (ii) Advanced Interaction, which builds on the first type to reach greater conversational complexity and sophistication. For the Reasoning data, our approach extends beyond mere visual content, prompting the model to generate in-depth reasoning questions. To enhance the dialogue's credibility and structure, detailed examples are incorporated alongside logical reasoning steps, ensuring that the discourse convincingly captures the intricacies of the visual content.
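To make the pipeline concrete, the following is a minimal sketch of the per-image generation step; call_gpt4() is a hypothetical wrapper around a GPT-4 chat-completion request, and the record fields, prompt wording, and seed-example format are illustrative assumptions rather than the exact implementation.
```python
# Minimal sketch of the per-image emotion instruction data generation step.
# call_gpt4() is a hypothetical wrapper around a GPT-4 chat-completion request;
# field names and message layout are assumptions for illustration only.
import json

def build_messages(caption, attributes, system_prompt, seed_examples):
    # System prompt + manually designed seed examples (in-context learning) + image context.
    messages = [{'role': 'system', 'content': system_prompt}]
    for ex in seed_examples:
        messages.append({'role': 'user', 'content': ex['context']})
        messages.append({'role': 'assistant', 'content': ex['instruction_data']})
    context = 'Caption: ' + caption + '\nEmotion attributes: ' + json.dumps(attributes)
    messages.append({'role': 'user', 'content': context})
    return messages

def generate_instruction_data(record, system_prompt, seed_examples, call_gpt4):
    # Categorical data comes directly from the annotated emotion class;
    # Conversation and Reasoning data are generated by GPT-4 from the image context.
    attributes = record['emotion_attributes']   # emotion class, brightness, colorfulness, ...
    categorical = {'question': 'Which emotion does this image convey?',
                   'answer': attributes['emotion_class']}
    messages = build_messages(record['caption'], attributes, system_prompt, seed_examples)
    generated = call_gpt4(messages)             # returns conversation + reasoning dialogues
    return {'categorical': categorical, 'conversation_and_reasoning': generated}
```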
3.3. Emotion Visual Instruction Tuning
After acquiring the emotion visual instruction data as detailed in Sec. 3.2, our goal is to employ this data in enhancing the existing Visual Instruction Tuning model. This enhancement aims to align the LLMs' existing knowledge with the emotion understanding domain. As shown in Fig. 4 (b), we have developed an Emotion Visual Instruction Tuning (EmoVIT) architecture based on InstructBLIP [9]. This architecture specifically leverages its Instruction-aware Q-Former Module, as depicted in Fig. 4 (c), for emotion-centric instructional tasks.
[Figure 5. A sample of our generated visual emotion instruction data.]
Specifically, the Instruction-aware Q-Former Module takes in the emotion instruction tokens, queries, and image embeddings as input. The image embeddings are extracted by a frozen image encoder. The learnable queries are initially produced by the pre-trained Q-Former of InstructBLIP. During training, the Instruction-aware module enhances task-specific feature extraction. It does this by integrating emotion instruction and query embeddings within self-attention layers, aligning visual information with the LLM's instruction-following requirements. Our approach adopts cross-entropy loss, tailoring it to the intricacies of visual emotion recognition tasks, thus ensuring precise and contextually relevant model training outcomes.
We note that the data generated by our approach is not confined to a single model but can also be applied to other Visual Instruction Tuning models, such as LLaVA [7]. Notably, when LLaVA is fine-tuned with our data, it exhibits a significant enhancement in emotion recognition capabilities, as detailed in Sec. 4.2. In this way, we demonstrate not only the effectiveness but also the transferability of our generated data, showing its broad applicability and impact.
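As a rough illustration of this setup (an instance of the objective in Eq. (2)), the PyTorch-style sketch below freezes the image encoder and the LLM and updates only the Q-Former and a projection layer with a cross-entropy objective; the module handles, their call signatures, and the dataloader are assumptions, not the authors' implementation.
```python
# PyTorch-style sketch of emotion visual instruction tuning: only the Q-Former
# (and a projection into the LLM input space) is trained; the image encoder and
# the LLM stay frozen. All module handles here are hypothetical placeholders.
import torch

def tune_emovit(image_encoder, q_former, projection, llm, dataloader, epochs=1, lr=1e-5):
    for module in (image_encoder, llm):
        for p in module.parameters():
            p.requires_grad = False                       # frozen backbone and frozen LLM
    params = list(q_former.parameters()) + list(projection.parameters())
    optimizer = torch.optim.AdamW(params, lr=lr)

    for _ in range(epochs):
        for images, instruction_tokens, target_tokens in dataloader:
            with torch.no_grad():
                image_embeds = image_encoder(images)      # frozen visual features
            # Queries interact with the emotion instruction inside the Q-Former.
            query_output = q_former(image_embeds, instruction_tokens)
            visual_prompt = projection(query_output)      # map into the LLM embedding space
            # Hypothetical frozen-LLM forward that returns the cross-entropy loss of the
            # target output given the visual prompt and the instruction.
            loss = llm(visual_prompt, instruction_tokens, target_tokens)
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
```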
4. Experimental Results
4.1. Implementation Details
Our implementation is based on the LAVIS library [31]. Our EmoVIT starts with a pre-trained InstructBLIP baseline and proceeds to fine-tune exclusively the Q-Former module, whilst keeping both the image encoder and the language model frozen. The parameters for our training adhere to the default settings established by InstructBLIP.
Datasets. We evaluate our framework on ten benchmark datasets annotated under different scenarios and class numbers, namely EmoSet [13], WEBEmo [11], Emotion6 [34], the Flickr and Instagram (FI) dataset [35], ArtPhoto [36], IAPS [37], Abstract [36], EmotionROI [38], UnbiasedEmo [11], and OxfordTVG-HIC [33].
Held-in Pretraining. Following previous work [9], we divide our dataset into two categories: held-in for pretraining and held-out for evaluation. (Unlike the setup in InstructBLIP, our dataset exclusively comprises emotion-related content; consequently, our held-out evaluation does not constitute a strict zero-shot evaluation in the conventional sense.) Considering the EmoSet dataset's comprehensive inclusion of emotion attributes for each image, it has been chosen as the primary resource for our held-in pretraining phase. Simultaneously, for a broader assessment, we perform held-out evaluations using the test sets from various other datasets. For the generation of emotion visual instruction data, we initially employ the BLIP2 model for image captioning, followed by leveraging the GPT-4 API to generate emotion instruction data. In total, our collection comprises Categorical, Conversation, and Reasoning instruction data derived from 51,200 unique images. This represents less than 50% of the entire EmoSet.
4.2. Held-out Evaluation
As shown in Tab. 1, our proposed methodology exhibits a marked superiority in performance relative to the burgeoning Visual Instruction Tuning methods. Although these methods have been pre-trained on dozens of large-scale datasets, our generated emotion visual instruction data proves particularly effective for emotional understanding. Our results signify a paradigm shift, heralding a new era of model training that relies less on explicit supervision and more on the robustness of emotion instruction-driven learning.
The Effectiveness of Our Proposed Emotion Visual Instruction Data. As the first to introduce the concept of emotion visual instruction data, our study seeks to evaluate the generalizability of this newly generated instruction data. Our goal is to test its efficacy not only with InstructBLIP but also across other Visual Instruction Tuning models, to understand its broader applicability. As depicted in Fig. 6, we employ two Visual Instruction Tuning models, LLaVA and InstructBLIP, which were fine-tuned on our specially generated emotion visual instruction data. Subsequent testing across five distinct datasets reveals notable improvements in both models, substantiating the efficacy of our generated data. Notably, InstructBLIP demonstrated a more substantial overall enhancement compared to LLaVA. This can be attributed to InstructBLIP's specialized Instruction-aware Q-Former Module, which adeptly extracts the salient features of our emotion instructions and synergizes them effectively with the corresponding images, thereby yielding improved performance.
[Figure 6. The improvement from tuning LLaVA [7] and InstructBLIP [9] on our proposed emotion visual instruction data.]
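For context, held-out classification accuracy for a generative model can be obtained by listing the candidate emotions in the instruction and parsing the generated reply, roughly as sketched below; model.generate() and the prompt wording are assumptions loosely adapted from the instruction quoted in Sec. 4.3.2, not the precise evaluation code.
```python
# Sketch of scoring a generative model on emotion classification: the candidate
# classes are listed in the instruction and the reply is parsed for the predicted label.
# model.generate(image, text) and the prompt wording are illustrative assumptions.
def classify_emotion(model, image, class_names):
    options = ', '.join(class_names)
    instruction = ('From the given options: ' + options + ', identify the emotion that most '
                   'accurately reflects the image. Respond in the format: Predicted emotion:')
    reply = model.generate(image, instruction)
    answer = reply.split('Predicted emotion:')[-1].strip().lower()
    for name in class_names:                  # map free-form output back to a candidate class
        if name.lower() in answer:
            return name
    return None                               # counted as incorrect if no candidate matches

def held_out_accuracy(model, dataset, class_names):
    correct = sum(classify_emotion(model, image, class_names) == label
                  for image, label in dataset)
    return 100.0 * correct / len(dataset)
```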
4.3. Effectiveness of Different Instruction Data
4.3.1 Ablation Study of Different Instruction Data
The ablation study outlined in Tab. 2 provides a comprehensive analysis of the impact that different instructional data types have on model performance, specifically concerning accuracy metrics on the EmoSet test set. Initially, the model, referred to as InstructBLIP [9], operates without the integration of the three types of instructional data and attains a baseline accuracy of 42.20%. This foundational performance is significantly enhanced with the inclusion of Categorical data, which alone contributes to a substantial increase in accuracy. The introduction of Conversation data further amplifies this effect, underscoring the value of conversational context in improving the model's predictive capabilities. The addition of Reasoning data notably boosts performance, achieving a peak accuracy of 83.36%. This indicates that the model significantly benefits from the nuanced cues in reasoning, aiding in understanding complex emotional instructions. The gradual improvements with each data type support the idea that a diverse approach to instructional data markedly enhances model comprehension and performance.
Method | WebEmo | FI | Emotion6 | Abstract | ArtPhoto | IAPSa | EmotionROI | EmoSet
Number of classes | 25 | 8 | 6 | 8 | 8 | 8 | 6 | 8
Flamingo [8] | 9.36 | 14.91 | 21.67 | 3.57 | 17.50 | 10.13 | 21.72 | 29.59
LLaVA [7] | 12.55 | 56.04 | 49.44 | 19.54 | 36.25 | 42.43 | 46.46 | 44.03
BLIP2 [14] | 20.10 | 57.72 | 50.00 | 28.57 | 36.25 | 39.24 | 50.51 | 46.79
InstructBLIP [9] | 12.80 | 37.97 | 46.11 | 21.42 | 26.25 | 34.18 | 46.13 | 42.20
Ours* | 21.12 | 68.09 | 57.81 | 32.34 | 44.90 | 44.13 | 53.87 | 83.36
Table 1. Held-out performance comparison on visual emotion datasets (%).
Categorical | Conversation | Reasoning | Accuracy (%)
 | | | 42.20
\u2713 | | | 80.90 (+38.70)
\u2713 | \u2713 | | 81.95 (+39.75)
\u2713 | \u2713 | \u2713 | 83.36 (+41.16)
Table 2. Ablation study of three types of instruction data. Accuracy (%) on EmoSet test set.
4.3.2 Instruction Sensitivity
This work is dedicated to the creation of a varied corpus of visual emotion instruction data, alongside the development of a robust instruction-based model. Our objective is for the model to demonstrate stability, producing consistent results in the face of minor variations in instruction phrasing, provided the core objective of the task persists unchanged. To this end, we employ the Sensitivity evaluation metric, as introduced by [30], to assess the model's fidelity in generating uniform outcomes irrespective of instructional nuances. We employ two semantically similar instructions as input prompts for the model, testing their impact on the Sensitivity score across three visual emotion datasets for different Visual Instruction Tuning models. The first instruction is: “From the given options: cls 1, cls 2, cls 3, etc., identify the emotion that most accurately reflects the image. Ensure your selection is confined to the listed options. Respond in the format: Predicted emotion:” The second one states: “Please choose the emotion that best corresponds to the image from the following options: cls 1, cls 2, cls 3, etc. (Do not provide answers beyond the provided candidates.) Please reply in the following format: Predict emotion:” As illustrated in Fig. 7, our approach, along with BLIP2, exhibited exceptionally low Sensitivity values, demonstrating robustness in understanding the instructions. Conversely, Flamingo and InstructBLIP displayed a higher degree of sensitivity, indicating a relative susceptibility to variations in instruction wording.
[Figure 7. The sensitivity score comparison (the lower the better).]
4.4. Robustness
Given that current emotion recognition datasets often exhibit category imbalances and labeling biases, our aim is to evaluate the generalization ability of various learning strategies more impartially. Hence, we selected the UnBiasedEmo test set [11], which is uniquely suited for recognizing intricate emotions, such as those associated with identical objects or scenes, e.g., landscapes, crowds, families, babies, and animals, where the emotional undertones can be particularly subtle and complex. As depicted in Tab. 3, our proposed methodology demonstrates superior performance when benchmarked against conventional supervised emotion recognition techniques, thereby underscoring the efficacy of our approach in more accurately discerning complex emotional contexts.
Method | Accuracy (%)
Direct Learning [11] | 71.64
Self-Directed Learning [11] | 72.45
Joint Learning [11] | 71.64
Curriculum Learning [11] | 74.27
Ours* | 74.72
Table 3. Performance comparison on the UnbiasedEmo dataset.
4.4.1 Affective Reasoning
In the domain of visual emotion recognition, where ambiguity and subjectivity are pervasive, the advent of an interpretable model is of considerable value. Such a model elucidates its cognitive processes, enhancing its trustworthiness and practicality in scenarios requiring a delicate grasp of emotional subtleties. Leveraging Visual Instruction Tuning, our model transcends mere categorization of emotions; it articulates the underlying rationale for its classifications. The output format for identifying emotions and elucidating the decision basis is illustrated below:
Predicted emotion: [emotion]. Reason: [explanation].
Our model delineates the visual features influencing its determinations, thereby addressing the complexities inherent in discerning and explaining emotion-related nuances. The explanations provide us with visual clues contained within the images, as exemplified in Fig. 8. They provide interpretable visual indicators that inform the model's outputs, as demonstrated in our example, by disambiguating the often abstract emotional categories.
[Figure 8. A sample of our generated explanation.]
4.5. Scaling Law
Pretraining data. As demonstrated in Tab. 4, there is a clear correlation between the size of the pre-training dataset and improved performance. Consequently, we anticipate that an increase in training data in the future could enhance the effectiveness of Emotion Visual Instruction Tuning.
Portion of pre-training data | 5% | 10% | 30% | 50%
Accuracy (%) | 79.00 | 81.00 | 79.34 | 83.36
Table 4. Ablation study of different portions of pre-training data. Accuracy (%) on EmoSet test set.
4.6. Humour Caption Generation
The comprehension of humor is intricately linked to the understanding of emotions. Leveraging our generative language model, we conduct a caption generation task without modifying the model's architecture, specifically testing the model's proficiency in generating humorous captions. For this purpose, we select 50 images from the OxfordTVG-HIC dataset [33] and generate corresponding captions using our model. Subsequently, the captions produced by our model are compared with manually annotated captions from the dataset in a user study. Thirty participants were asked to vote on which captions were more humorous. Our model-generated captions receive 60% of the votes, demonstrating its effective humor generation capabilities. One sample is visualized in Fig. 9.
[Figure 9. A sample of our generated humour caption vs. a human-written humour caption from OxfordTVG-HIC.]
5. Conclusion
In our study, drawing upon the distinctive visual cues key to visual emotion recognition, we present a GPT-assisted pipeline specifically designed for generating emotion visual instruction data. The developed EmoVIT model incorporates emotion-specific instructions, leveraging LLMs for enhanced performance. Our comprehensive experiments validate its effectiveness in emotion classification, affective reasoning, and humor understanding. This comparative analysis sets a benchmark for Emotion Visual Instruction Tuning with LLMs, providing valuable insights and directions for future research in this field.
Supplementary Material
[Figure 10. A sample of our generated visual emotion instruction data.]
6. More Emotion Visual Instruction Data Samples
Additional samples from our Emotion Visual Instruction Data collection are presented in Figures 10 and 11. Upon acceptance, the complete dataset will be made available on our project webpage.
7. Implementation Details
7.1. Our Experiment Settings
Held-out vs. supervised learning. We adopt the terminology held-in and held-out as defined in the work of InstructBLIP [9]. For the held-in setting, we utilize the training subset of the EmoSet dataset for Emotion Visual Instruction Tuning, with its corresponding test subset serving the purpose of held-in evaluation. The outcomes of this evaluation are depicted in Fig. 1 of the main manuscript.
[Figure 11. A sample of our generated visual emotion instruction data.]
In our held-out evaluation, we focus on determining how instruction tuning bolsters the model's ability to transfer learning to new and unseen data. It is crucial to highlight that our methodology sets a distinct path from InstructBLIP's framework. Our dataset is specifically curated with emotion-centric content, presenting unique categories such as cheerfulness and enthrallment found in WEBEmo, which are not typically included in other datasets. Conversely, common emotional categories like anger and fear are shared with other collections, such as FI and Emotion6. This distinctive mix in our dataset implies that our held-out evaluation operates on a cross-domain level, examining the model's ability to interpret and adapt to diverse emotional contexts not strictly confined to zero-shot scenarios.
7.2. System Prompt
The system prompt inputted into ChatGPT for the purpose of gathering instruction-based data is presented below.
“You are an AI visual assistant, and you are seeing a single image. What you see are provided with one caption and some emotion related attributes, describing the same image you are looking at. Answer all questions as you are seeing the image. The range of brightness is from 0 (darkest) to 1 (brightest), and the range of colorfulness is from 0 (black-and-white) to 1 (the most colorful). Design two questions for a conversation between you and a person asking about this photo. The answers should be in a tone that a visual AI assistant is seeing the image and answering the question. Ask diverse questions and give corresponding answers. Include questions asking about the visual content of the image, including the object types, object actions, relationship among objects, etc. Only include questions that have definite answers: (1) one can see the content in the image that the question asks about and can answer confidently; (2) one can determine confidently from the image that it is not in the image. Do not ask any question that cannot be answered confidently. Please answer with the format Question: Answer: Also include one complex question that is relevant to the content in the image, for example, asking about background knowledge of the objects in the image, asking to discuss about events happening in the image, etc. Again, do not ask about uncertain details. Provide detailed answers when answering complex questions. For example, give detailed examples or reasoning steps to make the content more convincing and well-organized. You can include multiple paragraphs if necessary.”
7.3. Details of the Q-Former
Similar to the approach in InstructBLIP, the Q-Former is a lightweight transformer architecture that utilizes a collection of trainable query vectors to distill visual features from a static image encoder.
The Q-Former acts as the trainable module to bridge the gap between a frozen image encoder and a frozen LLM. Its role is to curate and present the most pertinent visual information, thereby enabling the LLM to generate the targeted textual output efficiently. Following the default setting, in our experimental setup we employ 32 distinct queries, each with a dimensionality of 768.
7.4. Sensitivity Formula
As mentioned in Sec. 4.3.2 of the main paper, we employ the Sensitivity evaluation metric, as introduced by [30], to assess the model's fidelity in generating uniform outcomes irrespective of instructional nuances. Specifically, for each task t ∈ T, given its associated instances with task instructions D^t = {(I^t_j, x^t_j, y^t_j) ∈ T × X^t × Y^t}_{j=1}^{N}, sensitivity is defined as:
Sensitivity = E_{t ∈ T} [ σ_{i ∈ I^t}[ E_{(x,y) ∈ D^t}[ L(f_θ(i, x), y) ] ] / μ_{i ∈ I^t}[ E_{(x,y) ∈ D^t}[ L(f_θ(i, x), y) ] ] ]    (3)
where L denotes the evaluation metric, i.e., emotion classification accuracy, and f_θ(·) represents the Visual Instruction Tuning model. The standard deviation and mean of the model's performance across all instructions are denoted by σ_{i ∈ I^t}[·] and μ_{i ∈ I^t}[·], respectively.
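A small sketch of how this quantity can be computed is given below; evaluate_accuracy() is a hypothetical helper that scores one dataset with one instruction template, and is not part of the released code.
```python
# Sketch of the sensitivity metric in Eq. (3): per task, accuracy is computed once per
# instruction variant; sensitivity is the std/mean ratio of those accuracies, averaged
# over tasks. evaluate_accuracy() is a hypothetical helper.
import statistics

def sensitivity(tasks, instructions_per_task, evaluate_accuracy):
    ratios = []
    for task in tasks:
        accs = [evaluate_accuracy(task, instr) for instr in instructions_per_task[task]]
        ratios.append(statistics.pstdev(accs) / statistics.mean(accs))  # lower = more robust
    return statistics.mean(ratios)
```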
8. Ablation Study of LLM Model Size
In our attempts with the EmoVIT architecture's LLM, we explored the use of models of varying sizes (as shown in Tab. 5). The results indicated that the smaller model, Vicuna-7B, outperformed its larger counterparts. This may be attributed to the limited training data available for our task, which potentially underutilizes the capabilities of larger models. Consequently, we anticipate that an increase in training data in the future could enhance the effectiveness of Emotion Visual Instruction Tuning.
Vicuna-7B | Vicuna-13B | FlanT5-XL
83.36 | 82.21 | 80.98
Table 5. Ablation study of different LLM model sizes. Accuracy (%) on EmoSet test set.
9. GPT-4 vs GPT-4 Turbo
We conducted a comparative analysis of conversational datasets derived from GPT-4 (the model name is gpt-4 in the API) against the recently released GPT-4 Turbo (the model name is gpt-4-1106-preview in the API). The comparative metrics yielded negligible differences between the two models (83.36% vs. 82.96% on the EmoSet test set).
10. Adding In-context Samples in Held-out Evaluation
Recent LLMs are capable of in-context learning when provided with a limited number of examples in a few-shot manner. In this work, we have also embarked on such an exploration. For instance, Tab. 6 presents the in-context samples utilized within the EmotionROI dataset. During our held-out evaluation, we incorporated three in-context samples for each category, consisting of a caption paired with its corresponding emotion class. Nevertheless, in our experimental observations, we did not witness any enhancement in performance attributable to furnishing the LLM with these in-context examples. Consequently, our finalized methodology did not incorporate in-context samples during the held-out evaluation phase.
Description | Emotion
Unleashed Fury: A portrait of raw, unfiltered anger etched on the subject's face. | Anger
Volcanic Eruption in Human Form: A Portrait of Unrestrained Fury. | Anger
An explosive portrait of raw fury, where every clenched jaw and furrowed brow tells a tale of unchecked anger. | Anger
Face contorted in a grimace of pure disgust, as if they just tasted a year-old lemon. | Disgust
Caught in the throes of revulsion, a face grimaces as if it just tasted the world's sourest lemon. | Disgust
Picture Perfect: A Masterclass in the Art of Disgust Expression | Disgust
A chilling moment of pure terror, etched in every detail. | Fear
A chilling moment of pure terror etched on the face, a stark embodiment of fear. | Fear
someone with a wide smile, a group | Joy
Overflowing with joy, like a puppy at a park! | Joy
A poignant portrait of sorrow, where teardrops are the silent language of grief. | Sadness
An evocative portrayal of sorrow, with shadows seemingly swallowing the light, reflecting the heavy weight of sadness. | Sadness
An abstract portrayal of solitude, where the vivid hues of melancholy paint a poignant picture of sadness. | Sadness
Caught in a moment of pure astonishment, eyes wide and mouth agape. | Surprise
Caught in the headlights of astonishment: a jaw-dropping moment of surprise! | Surprise
Caught in the Act! A person's wide-eyed gasp of sheer surprise. | Surprise
Table 6. Illustrative examples of emotion descriptors in visual data.
11. Limitation and Future Work
Due to the reliance on the GPT-API and cost considerations, our held-in pretraining phase utilized less than 50% of the EmoSet dataset. Despite outperforming other methods, we recognize the potential for significant improvements in future work by expanding the data scale. We anticipate that advancements in visual emotion understanding will parallel increases in both data and model scale."
}