{ "url": "http://arxiv.org/abs/2404.16456v1", "title": "Correlation-Decoupled Knowledge Distillation for Multimodal Sentiment Analysis with Incomplete Modalities", "abstract": "Multimodal sentiment analysis (MSA) aims to understand human sentiment\nthrough multimodal data. Most MSA efforts are based on the assumption of\nmodality completeness. However, in real-world applications, some practical\nfactors cause uncertain modality missingness, which drastically degrades the\nmodel's performance. To this end, we propose a Correlation-decoupled Knowledge\nDistillation (CorrKD) framework for the MSA task under uncertain missing\nmodalities. Specifically, we present a sample-level contrastive distillation\nmechanism that transfers comprehensive knowledge containing cross-sample\ncorrelations to reconstruct missing semantics. Moreover, a category-guided\nprototype distillation mechanism is introduced to capture cross-category\ncorrelations using category prototypes to align feature distributions and\ngenerate favorable joint representations. Eventually, we design a\nresponse-disentangled consistency distillation strategy to optimize the\nsentiment decision boundaries of the student network through response\ndisentanglement and mutual information maximization. Comprehensive experiments\non three datasets indicate that our framework can achieve favorable\nimprovements compared with several baselines.", "authors": "Mingcheng Li, Dingkang Yang, Xiao Zhao, Shuaibing Wang, Yan Wang, Kun Yang, Mingyang Sun, Dongliang Kou, Ziyun Qian, Lihua Zhang", "published": "2024-04-25", "updated": "2024-04-25", "primary_cat": "cs.CV", "cats": [ "cs.CV" ], "label": "Original Paper", "paper_cat": "Distillation", "gt": "\u201cCorrelations serve as the beacon through the fog of the missingness.\u201d \u2013Lee & Dicken Multimodal sentiment analysis (MSA) has attracted wide attention in recent years. Different from the tradi- tional unimodal-based emotion recognition task [7], MSA \u00a7Corresponding author. Equal contribution. Modality Content Label Prediction Language Visual Audio Neutral Positive It was a great movie and I loved it. \u2026 Language Visual Audio It was a great movie and I loved it. \u2026 Positive Positive Figure 1. Traditional model outputs correct prediction when in- putting the sample with complete modalities, but incorrectly pre- dicts the sample with missing modalities. We define two missing modality cases: (i) intra-modality missingness (i.e., the pink areas) and (ii) inter-modality missingness (i.e., the yellow area). understands and recognizes human emotions through mul- tiple modalities, including language, audio, and visual [28]. Previous studies have shown that combining complemen- tary information among different modalities facilitates the generation of more valuable joint multimodal representa- tions [34, 36]. Under the deep learning paradigm [3, 17, 42, 43, 54, 59, 60], numerous studies assuming the avail- ability of all modalities during both training and inference stages [10, 19, 22, 49\u201353, 55\u201358, 62]. Nevertheless, this assumption often fails to align with real-world scenarios, where factors such as background noise, sensor constraints, and privacy concerns may lead to uncertain modality miss- ingness issues. Modality missingness can significantly im- pair the effectiveness of well-trained models based on com- plete modalities. 
For instance, as shown in Figure 1, the entire visual modality is missing, and some frame-level features in the language and audio modalities are missing, leading to an incorrect sentiment prediction.

In recent years, many works [20, 21, 23, 24, 32, 45, 46, 66] have attempted to address the problem of missing modalities in MSA. As a typical example, MCTN [32] guarantees the model\u2019s robustness to the missing-modality case by learning a joint representation through cyclic translation from the source modality to the target modality. However, these methods suffer from the following limitations: (i) inadequate interactions based on individual samples lack the mining of holistically structured semantics; (ii) failure to model cross-category correlations leads to loss of sentiment-relevant information and confusing distributions among categories; (iii) coarse supervision ignores semantic and distributional alignment.

To address the above issues, we present a Correlation-decoupled Knowledge Distillation (CorrKD) framework for the MSA task under uncertain missing modalities. There are three core contributions in CorrKD based on the tailored components. Specifically, (i) the proposed sample-level contrastive distillation mechanism captures holistic cross-sample correlations and transfers valuable supervision signals via sample-level contrastive learning. (ii) Meanwhile, we design a category-guided prototype distillation mechanism that leverages category prototypes to transfer intra- and inter-category feature variations, thus delivering sentiment-relevant information and learning robust joint multimodal representations. (iii) Furthermore, we introduce a response-disentangled consistency distillation strategy to optimize sentiment decision boundaries and encourage distribution alignment by decoupling heterogeneous responses and maximizing mutual information between homogeneous sub-responses. Based on these components, CorrKD significantly improves MSA performance under uncertain missing-modality and complete-modality testing conditions on three multimodal benchmarks.", "main_content": "2. Related Work

2.1. Multimodal Sentiment Analysis

MSA aims to understand and analyze human sentiment utilizing multiple modalities. Mainstream MSA studies [9, 10, 22, 37, 50, 53, 55\u201358] focus on designing complex fusion paradigms and interaction mechanisms to enhance the performance of sentiment recognition. For instance, CubeMLP [37] utilizes three independent multi-layer perceptron units for feature-mixing on three axes. However, these approaches based on complete modalities cannot be deployed in real-world applications. Mainstream solutions for the missing-modality problem can be summarized in two categories: (i) generative methods [6, 23, 25, 45] and (ii) joint learning methods [24, 32, 46, 66]. Generative (reconstruction) methods generate the missing features and semantics of modalities based on the available modalities. For example, TFR-Net [63] leverages a feature reconstruction module to guide the extractor to reconstruct missing semantics. MVAE [6] solves the modality missing problem with a semi-supervised multi-view deep generative framework. Joint learning efforts learn joint multimodal representations by utilizing correlations among modalities. For instance, MMIN [69] generates robust joint multimodal representations via cross-modality imagination. TATE [66] presents a tag encoding module to guide the network to focus on missing modalities.
However, the aforementioned approaches fail to account for the correlations among samples and categories, leading to inadequate compensation for the missing semantics in modalities. In contrast, we design effective learning paradigms to adequately capture potential inter-sample and inter-category correlations.

2.2. Knowledge Distillation

Knowledge distillation utilizes additional supervisory information from a pre-trained teacher network to assist in the training of the student network [11]. Knowledge distillation methods can be roughly categorized into two types: distillation from intermediate features [15, 29, 38, 61] and distillation from responses [4, 8, 27, 48, 68]. Many studies [13, 18, 33, 40, 47] employ knowledge distillation for MSA tasks with missing modalities. The core concept of these efforts is to transfer \u201cdark knowledge\u201d from teacher networks trained with complete modalities to student networks trained with missing modalities. The teacher model typically produces more valuable feature representations than the student model. For instance, [13] utilizes the complete-modality teacher network to supervise the unimodal student network at both the feature and response levels. Despite promising outcomes, these methods are subject to several significant limitations: (i) knowledge transfer is limited to individual samples, overlooking the exploitation of clear correlations among samples and among categories; (ii) supervision on the student network is coarse-grained and inadequate, without considering the potential alignment of feature distributions. To this end, we propose a correlation-decoupled knowledge distillation framework that facilitates the learning of robust joint representations by refining and transferring cross-sample, cross-category, and cross-target correlations.

3. Methodology

3.1. Problem Formulation

Each multimodal video segment contains three modalities, S = [X_L, X_A, X_V], where X_L \u2208 R^{T_L \u00d7 d_L}, X_A \u2208 R^{T_A \u00d7 d_A}, and X_V \u2208 R^{T_V \u00d7 d_V} denote the language, audio, and visual modalities, respectively. T_m is the sequence length and d_m is the embedding dimension, where m \u2208 {L, A, V}. Meanwhile, the incomplete modality is denoted as \u02c6X_m. We define two missing-modality cases to simulate the most natural and holistic challenges in real-world scenarios: (i) intra-modality missingness, which indicates that some frame-level features in the modality sequences are missing; (ii) inter-modality missingness, which denotes that some modalities are entirely missing. Our goal is to recognize the utterance-level sentiments by utilizing the multimodal data with missing modalities.
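The two missingness cases can be made concrete with a small masking routine. The following is an illustrative sketch (not the authors' released code) that zero-masks either a fraction of frames (intra-modality) or whole modalities (inter-modality), mirroring the zero-vector replacement used by the MRM strategy described in Sec. 3.2; the tensor shapes and argument names are assumptions for illustration.

```python
import torch

def simulate_missingness(x_l, x_a, x_v, p=0.0, drop_modalities=()):
    """x_m: (T_m, d_m) feature sequence of one modality (language/audio/visual).
    p: intra-modality missing ratio, i.e., the fraction of frames zeroed per modality.
    drop_modalities: subset of {'L', 'A', 'V'} removed entirely (inter-modality)."""
    masked = {}
    for name, x in zip("LAV", (x_l, x_a, x_v)):
        x = x.clone()
        if name in drop_modalities:              # inter-modality missingness
            x.zero_()
        elif p > 0:                              # intra-modality missingness
            num_drop = int(p * x.size(0))
            idx = torch.randperm(x.size(0))[:num_drop]
            x[idx] = 0.0                         # missing frames replaced by zero vectors
        masked[name] = x
    return masked["L"], masked["A"], masked["V"]

# Example: drop the whole visual modality and 30% of the language/audio frames.
x_l, x_a, x_v = torch.randn(50, 300), torch.randn(50, 74), torch.randn(50, 35)
x_l_m, x_a_m, x_v_m = simulate_missingness(x_l, x_a, x_v, p=0.3, drop_modalities={"V"})
```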
3.2. Overall Framework

Figure 2 illustrates the main workflow of CorrKD. The teacher network and the student network adopt a consistent structure but have different parameters.

Figure 2. The structure of our CorrKD, which consists of three core components: the Sample-level Contrastive Distillation (SCD) mechanism, the Category-guided Prototype Distillation (CPD) mechanism, and the Response-disentangled Consistency Distillation (RCD) strategy.

During the training phase, our CorrKD procedure is as follows: (i) we train the teacher network with complete-modality samples and then freeze its parameters. (ii) Given a video segment sample S, we generate a missing-modality sample \u02c6S with the Modality Random Missing (MRM) strategy. MRM simultaneously performs intra-modality missing and inter-modality missing, and the raw features of the missing portions are replaced with zero vectors. S and \u02c6S are fed into the trained teacher network and the initialized student network, respectively. (iii) We input the samples S and \u02c6S into the modality representation fusion module to obtain the joint multimodal representations H^t and H^s. (iv) The sample-level contrastive distillation mechanism and the category-guided prototype distillation mechanism are utilized to learn the feature consistency of H^t and H^s. (v) These representations are fed into task-specific fully-connected layers and the softmax function to obtain the network responses R^t and R^s. (vi) The response-disentangled consistency distillation strategy is applied to maintain consistency in the response distributions, and then R^s is used to perform classification. In the inference phase, testing samples are only fed into the student network for downstream tasks. Subsequent sections provide details of the proposed components.

3.3. Modality Representation Fusion

We introduce the extraction and fusion processes of modality representations using the student network as an example. The incomplete modality \u02c6X^s_m \u2208 R^{T_m \u00d7 d_m} with m \u2208 {L, A, V} is fed into the student network. Firstly, \u02c6X^s_m passes through a 1D temporal convolutional layer with kernel size 3 \u00d7 3 and adds the positional embedding [39] to obtain the preliminary representations, denoted as \u02c6F^s_m = W_{3\u00d73}(\u02c6X^s_m) + PE(T_m, d) \u2208 R^{T_m \u00d7 d}. Each \u02c6F^s_m is fed into a Transformer [39] encoder F^s_\u03d5(\u00b7), capturing the modality dynamics of each sequence through the self-attention mechanism to yield the representations E^s_m, denoted as E^s_m = F^s_\u03d5(\u02c6F^s_m). The representations E^s_m are concatenated to obtain Z^s, expressed as Z^s = [E^s_L, E^s_A, E^s_V] \u2208 R^{T_m \u00d7 3d}. Subsequently, Z^s is fed into Global Average Pooling (GAP) to further enhance and refine the features, yielding the joint multimodal representation H^s \u2208 R^{3d}. Similarly, the joint multimodal representation generated by the teacher network is represented as H^t \u2208 R^{3d}.
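A minimal sketch of this fusion path is given below (it is not the authors' implementation): each modality passes through a 1D temporal convolution plus a sinusoidal positional embedding, a per-modality Transformer encoder, concatenation, and global average pooling. The embedding dimension d = 40 and 10 attention heads mirror the settings reported in Sec. 4.2, and the input dimensions follow the feature extraction there; the encoder depth, the padding, and the use of a 1D kernel of size 3 are assumptions.

```python
import torch
import torch.nn as nn

class ModalityFusion(nn.Module):
    def __init__(self, dims=(300, 74, 35), d=40, n_heads=10, n_layers=2):
        super().__init__()
        # one temporal convolution per modality maps (T, d_m) -> (T, d)
        self.convs = nn.ModuleList(
            [nn.Conv1d(dm, d, kernel_size=3, padding=1) for dm in dims]
        )
        # one Transformer encoder per modality captures intra-sequence dynamics
        self.encoders = nn.ModuleList([
            nn.TransformerEncoder(
                nn.TransformerEncoderLayer(d_model=d, nhead=n_heads, batch_first=True),
                num_layers=n_layers,
            )
            for _ in dims
        ])
        self.d = d

    def positional_embedding(self, t, d, device):
        # standard sinusoidal positional embedding PE(T, d)
        pos = torch.arange(t, device=device, dtype=torch.float32).unsqueeze(1)
        i = torch.arange(0, d, 2, device=device, dtype=torch.float32)
        angle = pos / (10000 ** (i / d))
        pe = torch.zeros(t, d, device=device)
        pe[:, 0::2] = torch.sin(angle)
        pe[:, 1::2] = torch.cos(angle)
        return pe

    def forward(self, x_l, x_a, x_v):                          # each x_m: (B, T, d_m)
        feats = []
        for conv, enc, x in zip(self.convs, self.encoders, (x_l, x_a, x_v)):
            f = conv(x.transpose(1, 2)).transpose(1, 2)        # F_m: (B, T, d)
            f = f + self.positional_embedding(f.size(1), self.d, f.device)
            feats.append(enc(f))                               # E_m: (B, T, d)
        z = torch.cat(feats, dim=-1)                           # Z: (B, T, 3d)
        return z.mean(dim=1)                                   # GAP -> H: (B, 3d)
```

A forward pass over batched sequences for the three modalities returns the joint representation H of size (B, 3d), which is what the distillation objectives below operate on.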
3.4. Sample-level Contrastive Distillation

Most previous studies of MSA tasks with missing modalities [33, 40, 47] are sub-optimal, exploiting only one-sided information within a single sample and neglecting comprehensive knowledge across samples. To this end, we propose a Sample-level Contrastive Distillation (SCD) mechanism that enriches holistic knowledge encoding by implementing contrastive learning between sample-level representations of the student and teacher networks. This paradigm prompts models to sufficiently capture intra-sample dynamics and inter-sample correlations to generate and transfer valuable supervision signals, thus precisely recovering the missing semantics.

The rationale of SCD is to apply contrastive learning within each mini-batch, constraining the representations in the two networks originating from the same sample to be similar, and the representations originating from different samples to be distinct. Specifically, given a mini-batch with N samples B = {S_1, S_2, \u00b7\u00b7\u00b7, S_N}, we obtain their sets of joint multimodal representations in the teacher and student networks, denoted as {H^w_1, H^w_2, \u00b7\u00b7\u00b7, H^w_N} with w \u2208 {t, s}. For the same input sample, we narrow the distance between the joint representations of the teacher and student networks and enlarge the distance between the representations of different samples. The contrastive distillation loss is formulated as follows:

\\mathcal{L}_{SCD} = \\sum_{i=1}^{N} \\sum_{j=1, j \\neq i}^{N} \\mathcal{D}(\\bm{H}^s_i, \\bm{H}^t_i)^2 + \\max\\{0, \\eta - \\mathcal{D}(\\bm{H}^s_i, \\bm{H}^t_j)\\}^2, \\quad (1)

where D(H^s, H^t) = \u2225H^s \u2212 H^t\u2225_2, \u2225\u00b7\u2225_2 represents the \u21132 norm, and \u03b7 is the predefined distance boundary. When negative pairs are distant enough (i.e., farther apart than the boundary \u03b7), the loss is set to 0, allowing the model to focus on other pairs. Since the sample-level representation contains holistic emotion-related semantics, such a contrastive objective facilitates the student network to learn more valuable knowledge from the teacher network.

3.5. Category-guided Prototype Distillation

MSA data usually suffer from the dilemmas of high intra-category diversity and high inter-category similarity. Previous approaches [13, 18, 33] based on knowledge distillation to address the modality missing problem simply constrain the feature consistency of the teacher and student networks. This rough manner lacks consideration of cross-category correlations and feature variations, leading to ambiguous feature distributions. To this end, we propose a Category-guided Prototype Distillation (CPD) mechanism, whose core insight is to refine and transfer knowledge of intra- and inter-category feature variations via category prototypes, which are widely utilized in the field of few-shot learning [35]. The category prototype represents the embedding center of each sentiment category, denoted as:

\\bm{c}_k = \\frac{1}{|\\bm{B}_k|} \\sum_{\\bm{S}_i \\in \\bm{B}_k} \\bm{H}_i, \\quad (2)

where B_k denotes the set of samples labeled with category k in the mini-batch, and S_i denotes the i-th sample in B_k. The intra- and inter-category feature variation of the sample S_i is defined as follows:

\\bm{M}_k(i) = \\frac{\\bm{H}_i \\, \\bm{c}_k^\\top}{\\left\\| \\bm{H}_i \\right\\|_2 \\left\\| \\bm{c}_k \\right\\|_2}, \\quad (3)

where M_k(i) denotes the similarity between the sample S_i and the prototype c_k. If the sample S_i is of category k, M_k(i) represents the intra-category feature variation; otherwise, it represents the inter-category feature variation. The teacher and student networks compute similarity matrices M^t and M^s, respectively. We minimize the squared Euclidean distance between the two similarity matrices to maintain the consistency of the two multimodal representations. The prototype distillation loss is formulated as:

\\mathcal{L}_{CPD} = \\frac{1}{NK} \\sum_{i=1}^{N} \\sum_{k=1}^{K} \\left\\| \\bm{M}_k^s(i) - \\bm{M}_k^t(i) \\right\\|_2, \\quad (4)

where K is the number of categories in the mini-batch.
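Under the above definitions, the two feature-level distillation losses can be sketched as follows (illustrative only, not the authors' code). Eq. (1) is read as a margin-based hinge on negative pairs, max{0, \u03b7 - D}^2, which matches the description that sufficiently distant negative pairs contribute zero loss; the default boundary \u03b7 = 1.2 follows the MOSI setting in Sec. 4.2, and the batch and label handling are assumptions.

```python
import torch
import torch.nn.functional as F

def scd_loss(h_s, h_t, eta=1.2):
    """Eq. (1): sample-level contrastive distillation over one mini-batch.
    h_s, h_t: joint representations H^s, H^t of shape (N, 3d)."""
    n = h_s.size(0)
    dist = torch.cdist(h_s, h_t, p=2)                     # D(H^s_i, H^t_j), shape (N, N)
    pos = dist.diag().pow(2)                              # same-sample pairs pulled together
    neg = F.relu(eta - dist).pow(2)                       # hinge on different-sample pairs
    neg = neg * (1.0 - torch.eye(n, device=neg.device))   # exclude the j == i entries
    return (n - 1) * pos.sum() + neg.sum()                # double sum over i and j != i

def cpd_loss(h_s, h_t, labels):
    """Eqs. (2)-(4): category-guided prototype distillation."""
    cats = labels.unique()
    sims = []
    for h in (h_s, h_t):
        # Eq. (2): prototypes are per-category embedding centers within the mini-batch
        protos = torch.stack([h[labels == c].mean(dim=0) for c in cats])       # (K, 3d)
        # Eq. (3): cosine similarity between every sample and every prototype
        sims.append(F.normalize(h, dim=-1) @ F.normalize(protos, dim=-1).T)    # (N, K)
    m_s, m_t = sims
    # Eq. (4): average teacher-student gap over the N x K similarity entries
    return (m_s - m_t).abs().mean()

# Example usage on random features for a batch of 8 samples and 4 categories:
h_t = torch.randn(8, 120)
h_s = torch.randn(8, 120, requires_grad=True)
labels = torch.randint(0, 4, (8,))
loss = scd_loss(h_s, h_t) + cpd_loss(h_s, h_t, labels)
loss.backward()
```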
3.6. Response-disentangled Consistency Distillation

Most knowledge distillation studies [15, 29, 38, 61] focus on extracting knowledge from the intermediate features of networks. Although the model\u2019s response (i.e., the predicted probability of the model\u2019s output) presents a higher level of semantics than the intermediate features, response-based methods achieve significantly worse performance than feature-based methods [41]. Inspired by [67], the model\u2019s response consists of two parts: (i) the Target Category Response (TCR), which represents the prediction of the target category and describes the difficulty of identifying each training sample; (ii) the Non-Target Category Response (NTCR), which denotes the predictions of the non-target categories and reflects the decision boundaries of the remaining categories to some extent. The effects of TCR and NTCR in the traditional knowledge distillation loss are coupled, i.e., a high-confidence TCR leads to a low-impact NTCR, thus inhibiting effective knowledge transfer. Consequently, we disentangle the heterogeneous responses and constrain the consistency between the homogeneous responses. From the perspective of information theory, knowledge consistency between responses can be characterized as maintaining high mutual information between the teacher and student networks [1]. This schema captures beneficial semantics and encourages distributional alignment.

Specifically, the joint multimodal representation H^w with w \u2208 {t, s} of the teacher and student networks passes through fully-connected layers and the softmax function to obtain the response R^w. Based on the target indexes, we decouple the response R^w to obtain the TCR R^w_T and the NTCR R^w_{NT}. Define Q and U as two random variables taking values in \\mathcal{Q} and \\mathcal{U}. Formulaically, the marginal probability density functions of Q and U are denoted as P(Q) and P(U), and P(Q, U) is regarded as the joint probability density function. The mutual information between Q and U is represented as follows:

I(\\bm{Q}, \\bm{U}) = \\int_{\\mathcal{Q}} \\int_{\\mathcal{U}} P(\\bm{Q}, \\bm{U}) \\log \\left( \\frac{P(\\bm{Q}, \\bm{U})}{P(\\bm{Q}) P(\\bm{U})} \\right) d\\bm{Q} \\, d\\bm{U}. \\quad (5)

The mutual information I(Q, U) can be written as the Kullback-Leibler divergence between the joint probability distribution P_{QU} and the product of the marginal distributions P_Q P_U, denoted as I(\\bm{Q}, \\bm{U}) = D_{\\mathrm{KL}}(P_{QU} \\| P_Q P_U).
For efficient and stable computation, the Jensen-Shannon divergence [12] is employed in our case to estimate the mutual information, which is denoted as follows:

I(\\bm{Q}, \\bm{U}) \\geq \\hat{I}_{\\theta}^{(\\mathrm{JSD})}(\\bm{Q}, \\bm{U}) = \\mathbb{E}_{P(\\bm{Q}, \\bm{U})}\\left[ -\\log \\left( 1 + e^{-\\mathcal{F}_{\\theta}(\\bm{Q}, \\bm{U})} \\right) \\right] - \\mathbb{E}_{P(\\bm{Q}) P(\\bm{U})}\\left[ \\log \\left( 1 + e^{\\mathcal{F}_{\\theta}(\\bm{Q}, \\bm{U})} \\right) \\right], \\quad (6)

where \\mathcal{F}_{\\theta}: \\mathcal{Q} \\times \\mathcal{U} \\rightarrow \\mathbb{R} is formulated as an instantiated statistical network with parameters \u03b8. We only need to maximize the mutual information without focusing on its precise value. Consequently, the distillation loss based on the mutual information estimation is formulated as follows:

\\mathcal{L}_{RCD} = \\mathcal{L}_{RCD}^{T} + \\mathcal{L}_{RCD}^{NT} = -I(\\bm{R}^t_T, \\bm{R}^s_T) - I(\\bm{R}^t_{NT}, \\bm{R}^s_{NT}). \\quad (7)

Finally, the overall training objective is expressed as \\mathcal{L}_{total} = \\mathcal{L}_{task} + \\mathcal{L}_{SCD} + \\mathcal{L}_{CPD} + \\mathcal{L}_{RCD}, where \\mathcal{L}_{task} is the standard cross-entropy loss.
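The response-level strategy can be sketched as follows (again an illustration rather than the authors' code): the softmax responses are split into TCR and NTCR by the target index, and the mutual information of each homogeneous pair is estimated with the JSD lower bound of Eq. (6) using a small statistical network F_\u03b8. The critic architecture and the batch-shuffling approximation of the product of marginals are assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def decouple_response(resp, target):
    """Split softmax responses (N, K) into TCR (N, 1) and NTCR (N, K-1) by target index."""
    tcr = resp.gather(1, target.unsqueeze(1))
    mask = F.one_hot(target, num_classes=resp.size(1)).bool()
    ntcr = resp[~mask].view(resp.size(0), -1)
    return tcr, ntcr

class StatNet(nn.Module):
    """Statistical network F_theta used in the JSD lower bound of Eq. (6)."""
    def __init__(self, dim, hidden=64):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(2 * dim, hidden), nn.ReLU(), nn.Linear(hidden, 1))

    def mi_lower_bound(self, q, u):
        joint = self.net(torch.cat([q, u], dim=-1))              # samples from P(Q, U)
        perm = torch.randperm(u.size(0), device=u.device)
        marginal = self.net(torch.cat([q, u[perm]], dim=-1))     # approximates P(Q)P(U)
        # Eq. (6): E[-log(1 + e^{-F})] - E[log(1 + e^{F})]
        return (-F.softplus(-joint)).mean() - F.softplus(marginal).mean()

def rcd_loss(r_t, r_s, target, critic_t, critic_nt):
    """Eq. (7): maximize MI between homogeneous teacher/student sub-responses."""
    tcr_t, ntcr_t = decouple_response(r_t, target)
    tcr_s, ntcr_s = decouple_response(r_s, target)
    return -critic_t.mi_lower_bound(tcr_t, tcr_s) - critic_nt.mi_lower_bound(ntcr_t, ntcr_s)

# Example with K = 4 categories: the TCR critic sees 1-dim inputs, the NTCR critic 3-dim.
r_t = torch.softmax(torch.randn(8, 4), dim=-1)
r_s = torch.softmax(torch.randn(8, 4), dim=-1)
target = torch.randint(0, 4, (8,))
loss = rcd_loss(r_t, r_s, target, StatNet(dim=1), StatNet(dim=3))
```

During training, this term would be added to the task, SCD, and CPD losses to form the total objective above.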
4. Experiments

4.1. Datasets and Evaluation Metrics

We conduct extensive experiments on three MSA datasets with word-aligned data, including MOSI [64], MOSEI [65], and IEMOCAP [2]. MOSI is a realistic dataset that comprises 2,199 short monologue video clips, with 1,284, 229, and 686 video clips in the train, valid, and test data, respectively. MOSEI is a dataset consisting of 22,856 video clips, which has 16,326, 1,871, and 4,659 samples in the train, valid, and test data. Each sample of MOSI and MOSEI is labeled by human annotators with a sentiment score from -3 (strongly negative) to +3 (strongly positive). On the MOSI and MOSEI datasets, we utilize the weighted F1 score computed for positive/negative classification results as the evaluation metric. The IEMOCAP dataset consists of 4,453 samples of video clips. Its predetermined data partition has 2,717, 798, and 938 samples in the train, valid, and test data. As recommended by [44], four emotions (i.e., happy, sad, angry, and neutral) are selected for emotion recognition. For evaluation, we report the F1 score for each category.

4.2. Implementation Details

Feature Extraction. The GloVe embedding [31] is used to convert the video transcripts into 300-dimensional vectors for the language modality. For the audio modality, we employ the COVAREP toolkit [5] to extract 74-dimensional acoustic features, including 12 Mel-frequency cepstral coefficients (MFCCs), voiced/unvoiced segmenting features, and glottal source parameters. For the visual modality, we utilize Facet [14] to indicate 35 facial action units, recording facial movements that express emotions.

Experimental Setup. All models are built on the PyTorch [30] toolbox with NVIDIA Tesla V100 GPUs. The Adam optimizer [16] is employed for network optimization. For MOSI, MOSEI, and IEMOCAP, the detailed hyper-parameter settings are as follows: the learning rates are {4e-3, 2e-3, 4e-3}, the batch sizes are {64, 32, 64}, the epoch numbers are {50, 20, 30}, the attention heads are {10, 8, 10}, and the distance boundaries \u03b7 are {1.2, 1.0, 1.4}. The embedding dimension is 40 on all three datasets. The hyper-parameters are determined via the validation set. The raw features at the modality-missing positions are replaced by zero vectors. To ensure an equitable comparison, we re-implement the state-of-the-art (SOTA) methods using the publicly available codebases and combine them with our experimental paradigms. All experimental results are averaged over multiple experiments using five different random seeds.

Table 1. Comparison results under inter-modality missing and complete-modality testing conditions on MOSI and MOSEI.
Dataset: MOSI
Models          {l}    {a}    {v}    {l, a}  {l, v}  {a, v}  Avg.   {l, a, v}
Self-MM [62]    67.80  40.95  38.52  69.81   74.97   47.12   56.53  84.64
CubeMLP [37]    64.15  38.91  43.24  63.76   65.12   47.92   53.85  84.57
DMD [22]        68.97  43.33  42.26  70.51   68.45   50.47   57.33  84.50
MCTN [32]       75.21  59.25  58.57  77.81   74.82   64.21   68.31  80.12
TransM [46]     77.64  63.57  56.48  82.07   80.90   67.24   71.32  82.57
SMIL [26]       78.26  67.69  59.67  79.82   79.15   71.24   72.64  82.85
GCNet [23]      80.91  65.07  58.70  84.73   83.58   70.02   73.84  83.20
CorrKD          81.20  66.52  60.72  83.56   82.41   73.74   74.69  83.94
Dataset: MOSEI
Models          {l}    {a}    {v}    {l, a}  {l, v}  {a, v}  Avg.   {l, a, v}
Self-MM [62]    71.53  43.57  37.61  75.91   74.62   49.52   58.79  83.69
CubeMLP [37]    67.52  39.54  32.58  71.69   70.06   48.54   54.99  83.17
DMD [22]        70.26  46.18  39.84  74.78   72.45   52.70   59.37  84.78
MCTN [32]       75.50  62.72  59.46  76.64   77.13   64.84   69.38  81.75
TransM [46]     77.98  63.68  58.67  80.46   78.61   62.24   70.27  81.48
SMIL [26]       76.57  65.96  60.57  77.68   76.24   66.87   70.65  80.74
GCNet [23]      80.52  66.54  61.83  81.96   81.15   69.21   73.54  82.35
CorrKD          80.76  66.09  62.30  81.74   81.28   71.92   74.02  82.16

4.3. Comparison with State-of-the-art Methods

We compare CorrKD with seven representative and reproducible SOTA methods, including complete-modality methods (i.e., Self-MM [62], CubeMLP [37], and DMD [22]) and missing-modality methods, namely 1) joint learning methods (i.e., MCTN [32] and TransM [46]) and 2) generative methods (i.e., SMIL [26] and GCNet [23]). Extensive experiments are implemented to thoroughly evaluate the robustness and effectiveness of CorrKD in the cases of intra-modality and inter-modality missingness.

Robustness to Intra-modality Missingness. We randomly drop frame-level features in the modality sequences with ratio p \u2208 {0.1, 0.2, \u00b7\u00b7\u00b7, 1.0} to simulate testing conditions of intra-modality missingness. Figures 3 and 4 show the performance curves of the models at various p values, which intuitively reflect each model's robustness.

Figure 3. Comparison results of intra-modality missingness on IEMOCAP. We comprehensively report the F1 score for the happy, sad, angry, and neutral categories at various missing ratios.
Figure 4. Comparison results of intra-modality missingness on (a) MOSI and (b) MOSEI. We report the F1 score at various ratios.

We have the following important observations. (i) As the ratio p increases, the performance of all models decreases. This phenomenon demonstrates that intra-modality missingness leads to a considerable loss of sentiment semantics and fragile joint multimodal representations. (ii) Compared to the complete-modality methods (i.e., Self-MM, CubeMLP, and DMD), our CorrKD achieves significant performance advantages in the missing-modality testing conditions and competitive performance in the complete-modality testing conditions. The reason is that complete-modality methods are based on the assumption of data completeness, whereas customized training paradigms for missing modalities are better at capturing and reconstructing valuable sentiment semantics from incomplete multimodal data. (iii) Compared to the missing-modality methods, our CorrKD exhibits the strongest robustness. Benefiting from the decoupling and modeling of inter-sample, inter-category, and inter-response correlations by the proposed correlation decoupling schema, the student network acquires informative knowledge to reconstruct valuable missing semantics and produces robust multimodal representations.
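For concreteness, the intra-modality robustness protocol can be sketched as a simple evaluation sweep (an illustration under assumptions, not the authors' evaluation script): frames are zero-masked with ratio p and the weighted F1 score is recorded per ratio. The `model` and `test_loader` names are placeholders, and Bernoulli masking is used here instead of selecting exactly p*T frames.

```python
import torch
from sklearn.metrics import f1_score

def zero_mask_frames(x, p):
    """Zero a fraction p (in expectation) of time steps in a batched sequence x of shape (B, T, d)."""
    if p <= 0:
        return x
    keep = (torch.rand(x.shape[:2], device=x.device) >= p).unsqueeze(-1)
    return x * keep

@torch.no_grad()
def sweep_missing_ratio(model, test_loader, ratios=tuple(i / 10 for i in range(1, 11))):
    model.eval()
    results = {}
    for p in ratios:
        preds, golds = [], []
        for x_l, x_a, x_v, y in test_loader:
            logits = model(zero_mask_frames(x_l, p),
                           zero_mask_frames(x_a, p),
                           zero_mask_frames(x_v, p))
            preds += logits.argmax(dim=-1).tolist()
            golds += y.tolist()
        results[p] = f1_score(golds, preds, average="weighted")
    return results
```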
Robustness to Inter-modality Missingness. In Tables 1 and 2, we drop entire modalities from the samples to simulate testing conditions of inter-modality missingness. The notation \u201c{l}\u201d indicates that only the language modality is available, while the audio and visual modalities are missing. \u201c{l, a, v}\u201d represents the complete-modality testing condition where all modalities are available. \u201cAvg.\u201d indicates the average performance across the six missing-modality testing conditions.

Table 2. Comparison results under six testing conditions of inter-modality missingness and the complete-modality condition on IEMOCAP.
Models          Categories  {l}   {a}   {v}   {l, a}  {l, v}  {a, v}  Avg.  {l, a, v}
Self-MM [62]    Happy       66.9  52.2  50.1  69.9    68.3    56.3    60.6  90.8
                Sad         68.7  51.9  54.8  71.3    69.5    57.5    62.3  86.7
                Angry       65.4  53.0  51.9  69.5    67.7    56.6    60.7  88.4
                Neutral     55.8  48.2  50.4  58.1    56.5    52.8    53.6  72.7
CubeMLP [37]    Happy       68.9  54.3  51.4  72.1    69.8    60.6    62.9  89.0
                Sad         65.3  54.8  53.2  70.3    68.7    58.1    61.7  88.5
                Angry       65.8  53.1  50.4  69.5    69.0    54.8    60.4  87.2
                Neutral     53.5  50.8  48.7  57.3    54.5    51.8    52.8  71.8
DMD [22]        Happy       69.5  55.4  51.9  73.2    70.3    61.3    63.6  91.1
                Sad         65.0  54.9  53.5  70.7    69.2    61.1    62.4  88.4
                Angry       64.8  53.7  51.2  70.8    69.9    57.2    61.3  88.6
                Neutral     54.0  51.2  48.0  56.9    55.6    53.4    53.2  72.2
MCTN [32]       Happy       76.9  63.4  60.8  79.6    77.6    66.9    70.9  83.1
                Sad         76.7  64.4  60.4  78.9    77.1    68.6    71.0  82.8
                Angry       77.1  61.0  56.7  81.6    80.4    58.9    69.3  84.6
                Neutral     60.1  51.9  50.4  64.7    62.4    54.9    57.4  67.7
TransM [46]     Happy       78.4  64.5  61.1  81.6    80.2    66.5    72.1  85.5
                Sad         79.5  63.2  58.9  82.4    80.5    64.4    71.5  84.0
                Angry       81.0  65.0  60.7  83.9    81.7    66.9    73.2  86.1
                Neutral     60.2  49.9  50.7  65.2    62.4    52.4    56.8  67.1
SMIL [26]       Happy       80.5  66.5  63.8  83.1    81.8    68.2    74.0  86.8
                Sad         78.9  65.2  62.2  82.4    79.6    68.2    72.8  85.2
                Angry       79.6  67.2  61.8  83.1    82.0    67.8    73.6  84.9
                Neutral     60.2  50.4  48.8  65.4    62.2    52.6    56.6  68.9
GCNet [23]      Happy       81.9  67.3  66.6  83.7    82.5    69.8    75.3  87.7
                Sad         80.5  69.4  66.1  83.8    81.9    70.4    75.4  86.9
                Angry       80.1  66.2  64.2  82.5    81.6    68.1    73.8  85.2
                Neutral     61.8  51.1  49.6  66.2    63.5    53.3    57.6  71.1
CorrKD          Happy       82.6  69.6  68.0  84.1    82.0    70.0    76.1  87.5
                Sad         82.7  71.3  67.6  83.4    82.2    72.5    76.6  85.9
                Angry       82.2  67.0  65.8  83.9    82.8    67.3    74.8  86.1
                Neutral     63.1  54.2  52.3  68.5    64.3    57.2    59.9  71.5

We present the following significant insights. (i) Inter-modality missingness causes performance degradation for all models, suggesting that the integration of complementary information from heterogeneous modalities enhances the sentiment semantics within joint representations. (ii) In the testing conditions of inter-modality missingness, our CorrKD has superior performance on the majority of metrics, proving its strong robustness. For example, on the MOSI dataset, CorrKD\u2019s average F1 score is improved by 0.85% compared to GCNet, and in particular by 3.72% in the testing condition where the language modality is missing (i.e., {a, v}). The merit stems from the proposed framework\u2019s capability of decoupling and modeling potential correlations at multiple levels to capture discriminative and holistic sentiment semantics.
(iii) In the unimodal testing conditions, the performance of CorrKD with only the language modality favorably outperforms the other unimodal cases, with results comparable to the complete-modality case. In the bimodal testing conditions, the cases containing the language modality perform best, even surpassing the complete-modality case on individual metrics. This phenomenon proves that the language modality encompasses the richest knowledge and dominates sentiment inference and missing-semantics reconstruction.

4.4. Ablation Studies

To validate the effectiveness and necessity of the proposed mechanisms and strategies in CorrKD, we conduct ablation studies under the two missing-modality cases on the MOSI dataset, as shown in Table 3 and Figure 5.

Table 3. Ablation results for the testing conditions of inter-modality missingness on MOSI.
Models    {l}    {a}    {v}    {l, a}  {l, v}  {a, v}  Avg.   {l, a, v}
CorrKD    81.20  66.52  60.72  83.56   82.41   73.74   74.69  83.94
w/o SCD   78.80  64.96  57.49  81.95   80.53   71.05   72.46  82.13
w/o CPD   79.23  63.72  57.83  80.11   79.45   70.53   71.81  82.67
w/o RCD   79.73  65.32  59.21  82.14   81.05   72.18   73.27  83.05

Figure 5. Ablation results of intra-modality missingness using various missing ratios on MOSI.

The principal findings are outlined as follows. (i) When SCD is eliminated, there is a noticeable degradation in model performance under both missing cases. This phenomenon suggests that mining and transferring comprehensive cross-sample correlations is essential for recovering missing semantics in the student network. (ii) The worse results under the two missing-modality scenarios without CPD indicate that capturing cross-category feature variations and correlations facilitates deep alignment of the feature distributions between both networks to produce robust joint multimodal representations. (iii) Moreover, we substitute the KL divergence loss for the proposed RCD. The declining performance gains imply that decoupling heterogeneous responses and maximizing mutual information between homogeneous responses motivate the student network to adequately reconstruct meaningful sentiment semantics.

4.5. Qualitative Analysis

To intuitively show the robustness of the proposed framework against modality missingness, we randomly choose 100 samples from each emotion category on the IEMOCAP testing set for visualization analysis. The comparison models include Self-MM [62] (i.e., a complete-modality method), MCTN [32] (i.e., a joint learning-based missing-modality method), and GCNet [23] (i.e., a generative-based missing-modality method).

Figure 6. Visualization of representations from different methods with four emotion categories on the IEMOCAP testing set. The default testing conditions contain intra-modality missingness (i.e., missing ratio p = 0.5) and inter-modality missingness (i.e., only the language modality is available). The red, orange, green, and blue markers represent the happy, angry, neutral, and sad emotions, respectively.

(i) As shown in Figure 6, Self-MM cannot address the modality missing challenge, as the representations of different emotion categories are heavily confounded, leading to the least favorable outcomes. (ii) Although MCTN and GCNet somewhat alleviate the issue of indistinct emotion semantics, their effectiveness remains limited since the distribution boundaries of the different emotion representations are generally ambiguous and coupled.
(iii) Conversely, our CorrKD ensures that representations of the same emotion category form compact clusters, while representations of different categories are clearly separated. These observations confirm the robustness and superiority of our framework, as it sufficiently decouples inter-sample, inter-category, and inter-response correlations.

5. Conclusions

In this paper, we present a correlation-decoupled knowledge distillation framework (CorrKD) to address diverse missing-modality dilemmas in the MSA task. Concretely, we propose a sample-level contrastive distillation mechanism that utilizes contrastive learning to capture and transfer cross-sample correlations to precisely reconstruct missing semantics. Additionally, we present a category-guided prototype distillation mechanism that learns cross-category correlations through category prototypes, refining sentiment-relevant semantics for improved joint representations. Finally, a response-disentangled consistency distillation strategy is proposed to encourage distribution alignment between the teacher and student networks. Extensive experiments confirm the effectiveness of our framework.

Acknowledgements

This work is supported in part by the Shanghai Municipal Science and Technology Committee of Shanghai Outstanding Academic Leaders Plan (No. 21XD1430300), and in part by the National Key R&D Program of China (No. 2021ZD0113503)." }