diff --git "a/related_34K/test_related_short_2404.17862v1.json" "b/related_34K/test_related_short_2404.17862v1.json" new file mode 100644--- /dev/null +++ "b/related_34K/test_related_short_2404.17862v1.json" @@ -0,0 +1,1432 @@ +[ + { + "url": "http://arxiv.org/abs/2404.17862v1", + "title": "Revisiting Multimodal Emotion Recognition in Conversation from the Perspective of Graph Spectrum", + "abstract": "Efficiently capturing consistent and complementary semantic features in a\nmultimodal conversation context is crucial for Multimodal Emotion Recognition\nin Conversation (MERC). Existing methods mainly use graph structures to model\ndialogue context semantic dependencies and employ Graph Neural Networks (GNN)\nto capture multimodal semantic features for emotion recognition. However, these\nmethods are limited by some inherent characteristics of GNN, such as\nover-smoothing and low-pass filtering, resulting in the inability to learn\nlong-distance consistency information and complementary information\nefficiently. Since consistency and complementarity information correspond to\nlow-frequency and high-frequency information, respectively, this paper revisits\nthe problem of multimodal emotion recognition in conversation from the\nperspective of the graph spectrum. Specifically, we propose a\nGraph-Spectrum-based Multimodal Consistency and Complementary collaborative\nlearning framework GS-MCC. First, GS-MCC uses a sliding window to construct a\nmultimodal interaction graph to model conversational relationships and uses\nefficient Fourier graph operators to extract long-distance high-frequency and\nlow-frequency information, respectively. Then, GS-MCC uses contrastive learning\nto construct self-supervised signals that reflect complementarity and\nconsistent semantic collaboration with high and low-frequency signals, thereby\nimproving the ability of high and low-frequency information to reflect real\nemotions. Finally, GS-MCC inputs the collaborative high and low-frequency\ninformation into the MLP network and softmax function for emotion prediction.\nExtensive experiments have proven the superiority of the GS-MCC architecture\nproposed in this paper on two benchmark data sets.", + "authors": "Tao Meng, Fuchen Zhang, Yuntao Shou, Wei Ai, Nan Yin, Keqin Li", + "published": "2024-04-27", + "updated": "2024-04-27", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "label": "Original Paper", + "paper_cat": "Graph AND Structure AND Learning", + "gt": "Human-machine intelligent conversation systems have recently received significant attention and development [6, 25, 40, 49, 58], so understanding conversations is crucial. Driven by this, Multimodal Emotion Recognition in Conversation (MERC) has gradually developed into a new research hotspot. Many researchers [21, 60, 63] have explored and improved the effect of MERC from the semantic interaction between text, auditory, and visual modal data in conversational contexts. These methods [7, 27, 61] agree that the MERC task focuses on better capturing and fusing multimodal semantic information in the conversational context for emotion recognition. Therefore, we will review the literature closely related to the above topics from the two aspects of multimodal conversation context feature capture and fusion. (1) Multimodal conversational context feature capture. In early work, the MERC task mainly adopted GRU [34] or LSTM [38] to capture multimodal information in the conversational context. For example, Poria et al. 
[38] proposed a multimodal conversation emotion recognition model based on Bidirectional Long Short-Term Memory (Bi-LSTM), which captures multimodal contextual information at each time step to better understand conversational relationships in sequence data. Although methods based on GRU or LSTM can model the multimodal conversation context, they cannot capture long-distance information dependencies due to their limited memory capabilities. Transformer-based methods were therefore introduced to overcome this limitation. For instance, Ma et al. [32] used intra-modal and inter-modal Transformers to capture semantic information in a multimodal conversation context and designed a hierarchical gating mechanism to fuse the multimodal features. Although Transformer-based methods can capture long-distance semantic information through global sequence modeling, they underestimate the complexity of multimodal dialogue semantics. Due to the superiority of GNN in modeling complex relationships, most existing research chooses to use GNN for global semantic capture and has achieved remarkable results. For example, Li et al. [23] proposed directed Graph-based Cross-modal Feature Complementation (GraphCFC), which alleviates the heterogeneity gap problem in multimodal fusion by utilizing multiple subspace extractors and pairwise cross-modal complementation strategies. In addition, speaker information is vital in emotion recognition because emotions are usually subjective and individual experiences. Therefore, Ren et al. [41] built a graph model to incorporate conversational context information and speaker dependencies, and then introduced a multi-head attention mechanism to explore potential connections between speakers. (2) Multimodal conversational context feature fusion. Choosing an appropriate multimodal feature fusion strategy is another crucial step in multimodal dialogue emotion recognition [9, 63]. For example, Zadeh et al. [59] proposed the Tensor Fusion Network (TFN), which has advantages in processing higher-order data structures (such as multi-dimensional arrays) and is therefore better able to preserve relationships between data when integrating multimodal information. However, TFN's explicit outer-product fusion is computationally expensive, so Liu et al. [31] proposed a Low-rank Multimodal Fusion (LMF) method, which performs multimodal fusion with modality-specific low-rank factors by decomposing the tensors and weights in parallel. It avoids computing high-dimensional tensors, reduces memory overhead, and reduces exponential time complexity to linear. Tellamekala et al. [46] proposed Calibrated and Ordinal Latent Distribution Fusion (COLD Fusion). The proposed fusion framework learns latent distributions over unimodal temporal contexts by constraining their variance through calibration and ordinal ranking. Furthermore, contrastive learning has attracted increasing research attention due to its powerful ability to obtain meaningful representations through alignment fusion. Kim et al. [18] introduced a contrastive loss function to facilitate impactful adversarial learning. This approach enables the adversarial learning of weak emotional samples by leveraging strong emotional samples, thereby enhancing the comprehension of intricate emotional elements embedded in intense emotions. Wang et al. [48] proposed a multimodal feature fusion framework based on contrastive learning. The framework first improves the ability to capture emotional features through contrastive learning and then uses an attention mechanism to achieve the fusion of multimodal features. 
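To make the low-rank fusion idea above concrete, the following is a minimal PyTorch sketch in the spirit of LMF [31]; the class name, dimensions, rank, and initialization are illustrative assumptions, not the authors' code.

```python
# A minimal LMF-style low-rank fusion sketch (assumed, not the authors' code).
import torch
import torch.nn as nn

class LowRankFusion(nn.Module):
    """Fuses text/audio/vision vectors with modality-specific low-rank factors,
    avoiding the explicit outer-product tensor of TFN."""
    def __init__(self, dims, out_dim, rank=4):
        super().__init__()
        d_t, d_a, d_v = dims
        # One factor set per modality; +1 accounts for the appended bias term.
        self.f_t = nn.Parameter(torch.randn(rank, d_t + 1, out_dim))
        self.f_a = nn.Parameter(torch.randn(rank, d_a + 1, out_dim))
        self.f_v = nn.Parameter(torch.randn(rank, d_v + 1, out_dim))
        self.w = nn.Parameter(torch.randn(1, rank))  # mixes the rank slices

    def forward(self, x_t, x_a, x_v):
        one = x_t.new_ones(x_t.size(0), 1)
        # Append 1 so unimodal and bimodal interactions are retained.
        zt = torch.cat([x_t, one], dim=1) @ self.f_t   # (rank, B, out)
        za = torch.cat([x_a, one], dim=1) @ self.f_a
        zv = torch.cat([x_v, one], dim=1) @ self.f_v
        fused = zt * za * zv                            # elementwise product
        return (self.w @ fused.permute(1, 0, 2)).squeeze(1)  # (B, out)
```

The elementwise product of the per-modality projections reproduces the rank-decomposed outer product, which is why the memory cost stays linear in the rank rather than exponential in the number of modalities.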
Although multimodal conversational emotion recognition has made significant progress through contextual semantic modeling and feature fusion, the critical role of high-frequency information in MERC has been ignored. To this end, Hu et al. [14] proposed the Multimodal Fusion Graph Convolution Network (MMGCN), which not only captures high- and low-frequency information in multimodal conversations but also utilizes speaker information to model inter-speaker and intra-speaker dependencies. Similarly, Chen et al. [5] modeled MERC from multivariate information and high- and low-frequency information, further improving multimodal conversational emotion recognition. Nevertheless, as discussed earlier, these methods do not deeply explore the uses of high- and low-frequency signals, ignoring the consistency and complementary synergy between them. This paper instead starts from the graph-spectrum perspective: it reconstructs MERC with high- and low-frequency signals, captures and coordinates consistent and complementary semantic information, respectively, and thereby improves multimodal conversational emotion recognition.", "pre_questions": [], "main_content": "1 INTRODUCTION With the continuous development of Human-Computer Interaction (HCI), the multimodal emotion recognition in conversation (MERC) task has recently received extensive research attention [1, 8, 13, 29, 34, 50, 52]. MERC aims to identify the emotional state of each utterance using textual, acoustic, and visual information in the conversational context [25, 36, 44, 45, 53], which is crucial for multimodal conversational understanding and an essential component for building intelligent HCI systems [14, 33, 35]. As shown in Fig. 1, MERC needs to recognize the emotion of each multimodal utterance in the conversation. Unlike traditional unimodal or non-conversational emotion recognition [1, 10, 12, 43], MERC requires joint modeling of the conversational context and multimodal information to capture consistent and complementary semantics within and between modalities [61]. Fig. 1 gives an example of a multimodal conversation between two people, Ross and Carol, from the MELD dataset. As shown in utterance $u_4$, Carol has a "Joy" emotion, which is only vaguely reflected in the textual features but is more evident in the visual or auditory features, reflecting the complementary semantics between modalities. In addition, it is difficult to identify the "Surprise" emotion from utterance $u_7$ alone; however, due to the latent consistency of conversational emotions, it can be accurately inferred from the previous utterances. Therefore, the key to multimodal conversational emotion recognition is to capture the consistent and complementary semantics between multimodal information by exploiting the conversational context and the emotional dependencies between speakers, thereby revealing the speaker's genuine emotion. The current mainstream research uses the Transformer [26, 32, 62, 63] or GNN [2, 21, 23, 47] architecture to model the MERC task. Transformer-based methods mainly learn complex semantic information across modalities and conversational contexts through global sequence modeling. For example, CTNet [26] builds single-modal and cross-modal Transformers to capture long-distance context dependencies and realize intra-modal and inter-modal information interaction for multimodal conversational emotion recognition. 
Although Transformer-based methods have made progress from the perspective of global utterance-sequence modeling, this paradigm underestimates the complex emotional interactions between multimodal utterances [47] and ignores the multiple relationships between utterances [5], which limits the model's emotion recognition performance. Figure 1: An example of a multimodal conversation between Ross and Carol from the MELD dataset. MERC aims to identify each utterance's emotion label (e.g., Neutral, Surprise, Joy). Benefiting from GNN's ability to mine and represent complex relationships [56, 57], recent GNN-based methods [1, 14, 22] have made significant progress on the MERC task. For instance, MMGCN [14] fully connects all utterance nodes of the same modality and connects the different modal nodes of the same utterance to build a heterogeneous graph that models the complex semantic relationships between multimodal utterances, and then uses a deep spectral-domain GNN to capture long-distance contextual information for multimodal conversational emotion recognition. Although these GNN-based methods show promising performance, they still share some common limitations: (1) Insufficient long-distance dependency perception. Many methods [1, 13, 23] use sliding windows to limit the range of fully connected utterances and then use a GNN to learn multimodal utterance representations for emotion recognition. However, limited by the over-smoothing characteristics of GNN [28, 54], usually only two layers can be stacked to capture semantic information, making it difficult for these methods to capture long-distance emotional dependencies. Although methods [5, 14] without a sliding window can enhance long-distance dependency capture, they introduce many neighborhood nodes with different emotions, which is not conducive to GNN representation learning and puts enormous performance pressure on the GNN. Therefore, previous GNN-based methods still have limitations in long-distance dependency capture. (2) Underutilization of high-frequency features. Many studies have shown that GNN has low-pass filtering characteristics [4, 37, 55]: it mainly obtains node representations by aggregating the consistency features of the neighborhood (low-frequency information) while suppressing the dissimilarity features of the neighborhood (high-frequency information). However, consistency and dissimilarity features are equally important in the MERC task. When specific modalities express less obvious emotions, information from other modalities is needed to compensate, thereby revealing the speaker's genuine emotions. Inspired by this, M3Net [5] tried to use high-frequency information to improve the MERC task and improved emotion recognition by directly fusing high- and low-frequency features. However, essential differences exist between high- and low-frequency features, and direct fusion cannot establish efficient collaboration. 
Thus, previous GNN-based methods still have limitations in utilizing and coordinating high- and low-frequency features. Inspired by the above analysis, to efficiently learn the consistent and complementary semantic information in multimodal conversation, we revisit the problem of multimodal emotion recognition in conversation from the perspective of the graph spectrum. Specifically, we propose a Graph-Spectrum-based Multimodal Consistency and Complementary feature collaboration framework, GS-MCC. GS-MCC first uses RoBERTa [30], OpenSMILE [11], and 3D-CNN [16] to extract preliminary textual, acoustic, and visual features. A GRU and fully connected networks are then used to further encode the textual, auditory, and visual features into higher-order utterance representations. To capture long-distance dependency information more efficiently, a sliding window is used to construct a fully connected graph that models conversational relationships, and efficient Fourier graph operators are used to extract long-distance high- and low-frequency information, respectively. In addition, to promote the collaboration of high- and low-frequency information, we use contrastive learning to construct self-supervised signals that reflect complementary and consistent semantic collaboration between the high- and low-frequency signals, thereby improving the ability of the high- and low-frequency information to reflect genuine emotions. Finally, we feed the collaborative high- and low-frequency information into an MLP network and a softmax function for emotion prediction. The contributions of our work are summarized as follows: • We propose an efficient long-distance information learning module that designs Fourier graph operators to build a mixed-layer GNN capturing high- and low-frequency information, thereby obtaining consistent and complementary semantic dependencies in multimodal conversational contexts. • We propose an efficient high- and low-frequency information collaboration module that uses contrastive learning to construct self-supervised signals reflecting the collaboration of high- and low-frequency information in terms of complementary and consistent semantics, improving the ability to distinguish emotions across different frequency information. • We conducted extensive comparative and ablation experiments on two benchmark datasets, IEMOCAP and MELD. The results show that our proposed method can efficiently capture long-distance context dependencies and improve the performance of MERC. Text Feature Extraction: Word embeddings can capture the semantic relationships between words, making words with similar meanings closer in the embedding space. Inspired by previous work [9, 19, 42], we use the RoBERTa model [30] to extract text features, and the resulting embedding is denoted as $\varphi_t$. Audio and Vision Feature Extraction: Consistent with previous work [13, 24, 34], we employ openSMILE and 3D-CNN for audio and vision feature extraction, yielding the respective embeddings $\varphi_a$ and $\varphi_v$. 3.2 Speaker information embedding Speaker information can play an important role in emotion recognition. Emotion is not only related to the characteristic attributes of the utterance but also to the speaker's inherent expression manner. 
Inspired by previous work [5, 14, 61], we incorporate speaker information into each unimodal utterance to obtain unimodal representations that are aware of both context and speaker. Specifically, we first sort all speakers by name and then use the one-hot vector $s_i$ to represent the $i$-th speaker. Finally, we embed the speakers in a unified space so that similar speakers are closer together. The embedding of the $i$-th speaker is as follows: $$S_i = W_{speaker}\, s_i, \qquad (1)$$ where $W_{speaker}$ is a trainable weight. In addition, to obtain higher-order feature representations, we utilize bidirectional Gated Recurrent Units (GRU) to encode the conversational text features. We have observed in practice that using recurrent modules to encode the visual and auditory modalities has no positive performance impact. Therefore, we employ two multilayer perceptrons, each with a single hidden layer, to encode the auditory and visual modalities, respectively. The specific encoding is computed as follows: $$u_t = \overleftrightarrow{\mathrm{GRU}}(\varphi_t, u_t^{(+,-)1}), \quad u_a = W_a \varphi_a + b_a, \quad u_v = W_v \varphi_v + b_v, \qquad (2)$$ where $W_a$, $b_a$, $W_v$, and $b_v$ are the learnable parameters of the auditory and visual encoders, respectively. We then add the speaker embeddings to obtain speaker- and context-aware unimodal representations: $$x_m = u_m + S_i, \quad m \in \{t, a, v\}, \qquad (3)$$ where $t$, $a$, and $v$ denote the text, audio, and vision modalities, respectively.
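To make the encoding pipeline of Eqs. (1)-(3) concrete, here is a minimal PyTorch sketch; the module name, layer sizes, and batch-first tensor layout are illustrative assumptions rather than the authors' implementation.

```python
# A minimal sketch of the modality encoders of Eqs. (1)-(3), assuming
# pre-extracted RoBERTa/openSMILE/3D-CNN features (sizes are illustrative).
import torch
import torch.nn as nn

class ModalityEncoder(nn.Module):
    def __init__(self, d_t, d_a, d_v, d_model, n_speakers):
        super().__init__()
        # Eq. (1): one-hot speakers projected by a trainable weight W_speaker.
        self.speaker_emb = nn.Linear(n_speakers, d_model, bias=False)
        # Eq. (2): bidirectional GRU for text, single-layer MLPs for audio/vision.
        self.gru = nn.GRU(d_t, d_model // 2, bidirectional=True, batch_first=True)
        self.enc_a = nn.Linear(d_a, d_model)
        self.enc_v = nn.Linear(d_v, d_model)

    def forward(self, phi_t, phi_a, phi_v, speaker_onehot):
        # phi_*: (batch, seq_len, d_*); speaker_onehot: (batch, seq_len, n_speakers)
        u_t, _ = self.gru(phi_t)
        u_a = self.enc_a(phi_a)
        u_v = self.enc_v(phi_v)
        s = self.speaker_emb(speaker_onehot)
        # Eq. (3): add the speaker embedding to every unimodal representation.
        return u_t + s, u_a + s, u_v + s
```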
4 METHODOLOGY Fig. 2 shows the proposed Graph-Spectrum-based Multimodal Consistency and Complementary collaborative learning framework GS-MCC. GS-MCC contains five modules: feature encoding, multimodal interaction graph construction, Fourier graph neural network, contrastive learning, and emotion classification. 4.1 Multimodal Interaction Graph To model the latent semantic dependencies between multimodal utterances, we construct a multimodal interaction graph. Instead of fully connecting all nodes of the same modality, we use a sliding window as a restriction: although fully connecting all same-modality nodes is beneficial for building long-distance semantic dependencies, it introduces much noise, which is not conducive to subsequent GNN learning. Given a conversation sequence $U = \{u_1, ..., u_N\}$ with $N$ multimodal utterances, under the restriction of a sliding window $k$, we construct a multimodal interaction graph $G^k = (V^k, E^k, A^k, X^k)$, where a node $v \in V^k$ represents a single-modal utterance, an edge $e \in E^k$ represents one of two types of semantic interaction between unimodal utterances (intra-modal or inter-modal), $A^k$ is the adjacency matrix, and $X^k$ is the feature matrix. The multimodal semantic interaction graph is constructed as follows: Nodes: Since each utterance $u_i \in U$ contains three modalities, we treat each modality of each utterance as an independent node, denoted by the text node $x_t^i$, the auditory node $x_a^i$, and the visual node $x_v^i$, and use the corresponding feature $x_m^i$ as the initial node embedding. The constructed multimodal interaction graph $G^k$ therefore has $3N$ nodes. Edges: To avoid introducing noise or redundant information, we use a sliding window to limit the connections between nodes of the same modality. Specifically, we fully connect the same-modality nodes within the sliding window $k$. In addition, we connect the different modal nodes of the same utterance to model semantic interactions between modalities; for example, for utterance $u_i \in U$, edges are constructed between the nodes $x_t^i$, $x_a^i$, and $x_v^i$. Edge Weight Initialization: To better capture the similarity between nodes, we use different similarity measures for different types of edges; higher similarity indicates more critical information interaction between two nodes. For edges between nodes of the same modality, whose feature distributions are latently consistent, the weight is computed as: $$A^k_{ij} = 1 - \frac{\arccos\left(sim(x^i_m, x^j_m)\right)}{\pi}, \qquad (4)$$ where $x^i_m$ and $x^j_m$ are the feature representations of the $i$-th and $j$-th nodes in the graph. For edges between nodes of different modalities, whose feature distributions are not latently consistent, we use a hyperparameter $\phi$ to scale the similarity between cross-modal nodes: $$A^k_{ij} = \phi \left(1 - \frac{\arccos\left(sim(x^i_m, x^j_m)\right)}{\pi}\right). \qquad (5)$$
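The graph construction above can be sketched as follows; the window size k, the cross-modal scale phi, the symmetric ±k window, and the node ordering are assumptions made for illustration.

```python
# A minimal sketch of the multimodal interaction graph of Section 4.1.
import numpy as np

def angular_similarity(xi, xj):
    # 1 - arccos(cos_sim)/pi, as in Eqs. (4)-(5).
    cos = np.dot(xi, xj) / (np.linalg.norm(xi) * np.linalg.norm(xj) + 1e-8)
    return 1.0 - np.arccos(np.clip(cos, -1.0, 1.0)) / np.pi

def build_adjacency(feats, n_utt, k=5, phi=0.5):
    """feats: (3N, d) nodes ordered [text 0..N-1, audio N..2N-1, vision 2N..3N-1]."""
    A = np.zeros((3 * n_utt, 3 * n_utt))
    for m in range(3):                      # intra-modal edges inside window k
        for i in range(n_utt):
            for j in range(max(0, i - k), min(n_utt, i + k + 1)):
                if i != j:
                    a, b = m * n_utt + i, m * n_utt + j
                    A[a, b] = angular_similarity(feats[a], feats[b])   # Eq. (4)
    for i in range(n_utt):                  # inter-modal edges, same utterance
        for m1 in range(3):
            for m2 in range(m1 + 1, 3):
                a, b = m1 * n_utt + i, m2 * n_utt + i
                A[a, b] = A[b, a] = phi * angular_similarity(feats[a], feats[b])  # Eq. (5)
    return A
```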
4.2 Fourier Graph Neural Network As mentioned above, using a sliding window limits long-distance dependency learning, because a traditional GNN suffers from over-smoothing and cannot stack many layers. Different from the methods used by MMGCN [14] and M3Net [5], this paper is inspired by FourierGNN [54] and designs efficient Fourier graph operators for the high- and low-frequency signals, respectively, to capture long-distance dependency information. Fourier Graph Operator. Consider a given multimodal interaction graph $G^k = (V^k, E^k, A^k, X^k)$, where $A^k \in \mathbb{R}^{3N \times 3N}$ is the adjacency matrix, $X^k \in \mathbb{R}^{3N \times d}$ is the feature matrix, $N$ is the number of multimodal utterances, and $d$ is the feature dimension. Following FourierGNN, we can obtain a Green's kernel $\kappa \in \mathbb{R}^{d \times d}$ from the adjacency matrix $A^k$ and a weight matrix $W \in \mathbb{R}^{d \times d}$, which must satisfy $\kappa[i, j] = \kappa[i - j]$ and $\kappa[i, j] = A^k_{ij} \circ W$, with $i$ and $j$ ranging from 1 to $3N$. Based on the kernel $\kappa$, we obtain the following Fourier graph operator (FGO) $\mathcal{S}_G$: $$\mathcal{S}_G = \mathcal{F}(\kappa) \in \mathbb{C}^{3N \times d \times d}, \qquad (6)$$ where $\mathcal{F}$ is the Discrete Fourier Transform (DFT). According to graph convolution theory, the graph convolution operation can be expressed as: $$F_{\theta_G}\left(X^k, A^k\right) = A^k X^k W = \mathcal{F}^{-1}\left(\mathcal{F}(X^k)\, \mathcal{F}(\kappa)\right), \qquad (7)$$ Figure 2: The overall architecture of the proposed model GS-MCC. Feature embedding of multimodal utterances and speaker information is performed first, and the embedded features are then used to construct a multimodal semantic interaction graph. A Fourier graph neural network then captures long-distance high- and low-frequency information, and finally contrastive learning coordinates the high- and low-frequency information for emotion recognition.
where $\theta_G$ denotes the learnable parameters and $\mathcal{F}^{-1}$ is the Inverse Discrete Fourier Transform (IDFT). According to convolution theory and the conditions on the FGO, we can expand the frequency-domain term in Eq. (7) as follows: $$\mathcal{F}(X^k)\,\mathcal{F}(\kappa) = \mathcal{F}\left((X^k * \kappa)[i]\right) = \mathcal{F}\left(X^k[j]\, \kappa[i-j]\right) = \mathcal{F}\left(X^k[j]\, \kappa[i,j]\right) = \mathcal{F}\left(A^k_{ij} X^k[j] W\right) = \mathcal{F}\left(A^k X^k W\right). \qquad (8)$$ As seen from Eq. (8), the graph convolution operation is implemented as the product of the FGO and the features in the frequency domain. Moreover, according to convolution theory, the convolution of time-domain signals equals the product of the corresponding frequency-domain signals; the product in the frequency domain only requires $O(N)$ time complexity, while the convolution in the time domain requires $O(N^2)$ time complexity. Therefore, an efficient graph neural network can be constructed based on the Fourier graph operator. To efficiently capture high- and low-frequency information, we optimize the FGO accordingly and use high-pass and low-pass filters to extract the complementary and consistent semantic information. The filters are designed as follows: $$L_l = I + D_G^{-1/2} A^k D_G^{-1/2}, \qquad (9)$$ $$L_h = I - D_G^{-1/2} A^k D_G^{-1/2}, \qquad (10)$$ where $I$ is the identity matrix, $D_G$ and $A^k$ are the degree matrix and adjacency matrix of the multimodal interaction graph, respectively, and $L_l$ and $L_h$ are the low-pass and high-pass filters, respectively. Based on the low-pass and high-pass filters, we obtain the following low- and high-frequency Green's kernels and Fourier graph operators: $$\kappa^{l/h}[i, j] = L^{l/h}_{ij} \circ W, \qquad (11)$$ $$\mathcal{S}^{l/h}_G = \mathcal{F}\left(\kappa^{l/h}\right). \qquad (12)$$
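A minimal NumPy sketch of the filters in Eqs. (9)-(10) and the frequency-domain operator of Eqs. (11)-(12) follows; note that, as in FourierGNN, the existence of a translation-invariant Green's kernel is a modeling assumption, so the kernel slices passed to the DFT are taken as given here.

```python
# A minimal sketch of the low/high-pass filters and the FGO construction.
import numpy as np

def spectral_filters(A):
    deg = A.sum(axis=1)
    d_inv_sqrt = np.diag(1.0 / np.sqrt(np.maximum(deg, 1e-8)))
    A_norm = d_inv_sqrt @ A @ d_inv_sqrt
    I = np.eye(A.shape[0])
    return I + A_norm, I - A_norm   # L_l (low-pass, Eq. 9), L_h (high-pass, Eq. 10)

def fourier_graph_operator(kappa_offsets):
    """kappa_offsets: (3N, d, d) Green-kernel slices kappa[delta] = kappa[i - j]
    (assumed to exist for L_{l/h}, per the paper's FGO conditions); the FGO of
    Eq. (12) is their DFT over the offset axis, giving a complex (3N, d, d) tensor."""
    return np.fft.fft(kappa_offsets, axis=0)
```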
Finally, we build an $M$-layer Fourier graph neural network from these efficient Fourier graph operators to capture the long-distance high- and low-frequency dependency information in the multimodal interaction graph: $$F^{l/h}_{\theta_G}\left(X^k, A^k\right) = \sum_{m=0}^{M} \sigma\left(\mathcal{F}(X^k)\, \mathcal{S}^{l/h}_{G \Rightarrow [0:m]} + b^{l/h}\right), \qquad (13)$$ $$\mathcal{S}^{l/h}_{G \Rightarrow [0:m]} = \prod_{i=0}^{m} \mathcal{S}^{l/h}_{G \rightarrow i}, \qquad (14)$$ where $\sigma$ is the activation function, $b^{l/h}$ is the bias parameter, $\mathcal{S}^{l/h}_{G \rightarrow i}$ is the FGO in the $i$-th layer, and $l$ and $h$ denote low and high frequencies, respectively. By stacking $M$ layers of Fourier graph operators, our model captures long-distance dependency information and obtains each node's low-frequency feature representation $x^l_m$ and high-frequency feature representation $x^h_m$. 4.3 Contrastive Learning Low-frequency features reflect slow trends in emotion, while high-frequency features reflect rapid emotional changes. To make these two kinds of features cooperate, we employ contrastive learning to build self-supervised signals that promote the learning of consistent and complementary semantics in multimodal utterances. Inspired by the SpCo [28] method, increasing the frequency-domain difference between two contrasting views yields better contrastive learning effects. Unlike SpCo, our contrastive learning is performed directly in the frequency domain and does not rely on data augmentation to generate contrastive views. Specifically, we combine low-frequency contrastive learning with high-frequency contrastive learning to promote the synergy of the two kinds of features. In addition, we only use the strategy of pushing negative sample pairs apart to increase the frequency-domain difference between contrasting views and obtain better contrastive learning effects. LFCL: Low Frequency Contrastive Learning. LFCL uses low-frequency samples as anchor nodes and all high-frequency nodes as negative samples to construct a self-supervised signal that increases the frequency-domain difference between contrasting views, thereby promoting consistent and complementary semantic learning in multimodal conversations. For each low-frequency anchor node, the self-supervised contrastive loss is defined as: $$\mathcal{L}_{LFCL} = -\frac{1}{\tau} + \log\left(e^{1/\tau} + \sum_{i=1}^{3N} e^{\left((x^l_m)^{T} x^{h,i-}_m\right)/\tau}\right), \qquad (15)$$ where $\tau$ is the temperature coefficient, $x^l_m$ is the low-frequency anchor node, and $x^{h,i-}_m$ is the $i$-th high-frequency negative sample.
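Before turning to the symmetric high-frequency loss, here is a minimal PyTorch sketch of the full collaborative objective of Eqs. (15)-(17); the high-frequency term defined next simply swaps the two frequency bands. Averaging over anchor nodes and the temperature value are our assumptions.

```python
# A minimal sketch of the collaborative contrastive loss of Eqs. (15)-(17):
# each anchor pushes away all 3N nodes of the opposite frequency band
# (negatives only; the constant e^{1/tau} replaces a positive-pair term).
import torch

def band_contrast(anchor, negatives, tau=0.5):
    # anchor: (d,), negatives: (3N, d); Eq. (15)/(16) for a single anchor node.
    logits = negatives @ anchor / tau        # (3N,)
    return -1.0 / tau + torch.logsumexp(
        torch.cat([anchor.new_tensor([1.0 / tau]), logits]), dim=0)

def collaborative_loss(x_low, x_high, tau=0.5):
    # x_low, x_high: (3N, d) low/high-frequency node features.
    l_lfcl = torch.stack([band_contrast(x, x_high, tau) for x in x_low]).mean()
    l_hfcl = torch.stack([band_contrast(x, x_low, tau) for x in x_high]).mean()
    return l_lfcl + l_hfcl                   # Eq. (17)
```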
HFCL: High Frequency Contrastive Learning. HFCL is similar to LFCL, except that it uses high-frequency samples as anchor nodes and all low-frequency nodes as negative samples to construct a self-supervised signal that increases the frequency-domain difference between contrasting views. The contrastive loss is defined as: $$\mathcal{L}_{HFCL} = -\frac{1}{\tau} + \log\left(e^{1/\tau} + \sum_{i=1}^{3N} e^{\left((x^h_m)^{T} x^{l,i-}_m\right)/\tau}\right), \qquad (16)$$ where $x^h_m$ is the high-frequency anchor node and $x^{l,i-}_m$ is the $i$-th low-frequency negative sample. The overall collaborative contrastive learning loss is the sum of LFCL and HFCL: $$\mathcal{L}_{CCL} = \mathcal{L}_{LFCL} + \mathcal{L}_{HFCL}. \qquad (17)$$ Finally, we use the inverse discrete Fourier transform to convert the high- and low-frequency features into time-domain features and concatenate the two parts to obtain the final embedding of each unimodal utterance node: $$v_m = \mathrm{IDFT}\left(x^l_m\right) \oplus \mathrm{IDFT}\left(x^h_m\right), \qquad (18)$$ where $m \in \{t, a, v\}$ denotes the text, auditory, or visual modality. 4.4 Emotion Classifier For each utterance $U_i$, we concatenate the features of the three modalities for emotion classification: $$U_i = v^i_t \oplus v^i_a \oplus v^i_v, \qquad (19)$$ $$\tilde{U}_i = \mathrm{ReLU}(U_i), \qquad (20)$$ $$P_i = \mathrm{softmax}\left(W_u \tilde{U}_i + b_u\right), \qquad (21)$$ $$\hat{y}_i = \arg\max_k\left(P_i[k]\right), \qquad (22)$$ where $W_u$ and $b_u$ are learnable parameters and $\hat{y}_i$ is the predicted emotion label of utterance $U_i$. Finally, we employ the categorical cross-entropy loss together with the contrastive loss for model training. 5 EXPERIMENTS 5.1 Implementation Details Benchmark Datasets and Evaluation Metrics: In our experiments, we used two multimodal datasets widely used in multimodal emotion recognition, IEMOCAP [3] and MELD [39]. IEMOCAP (Interactive Emotional Dyadic Motion Capture Database) is a multimodal database for emotion recognition and analysis, consisting of dyadic conversations performed by 10 actors, with audio, video, and emotion annotations for the interactive dialogue scenes. MELD (Multimodal EmotionLines Dataset) contains dialogue text from movie and TV show clips, covering the characters' utterances and the conversational context, and also provides the corresponding audio and video recordings. We report the classification accuracy (Acc.) and F1 for each emotion category, as well as the overall weighted average accuracy (W-Acc.) and weighted average F1 (W-F1). 
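A minimal sketch of how these metrics can be computed with scikit-learn follows; reading per-class Acc. as per-class recall and W-Acc. as the support-weighted average (equivalently, overall accuracy) is our assumption about the paper's metric definitions.

```python
# A minimal sketch of the reported metrics (assumed definitions, sklearn-based).
from sklearn.metrics import accuracy_score, f1_score, recall_score

def report(y_true, y_pred, labels):
    per_class_acc = recall_score(y_true, y_pred, labels=labels, average=None)
    per_class_f1 = f1_score(y_true, y_pred, labels=labels, average=None)
    w_acc = accuracy_score(y_true, y_pred)        # == support-weighted recall
    w_f1 = f1_score(y_true, y_pred, average="weighted")
    return per_class_acc, per_class_f1, w_acc, w_f1
```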
Baseline Methods: We compare several baselines on the IEMOCAP and MELD datasets, including the RNN-based bc-LSTM [38] and A-DMN [51]; LFM [31], based on low-rank tensor fusion; the GCN-based DialogueGCN [13], LR-GCN [41], DER-GCN [1], MMGCN [14], AdaGIN [47], RGAT [15], and CoMPM [20]; and the Transformer-based EmoBERTa [19], CTNet [26], and COGMEN [17]. Experimental Setup: All experiments are conducted with Python 3.8 and the PyTorch 1.8 deep learning framework on a single NVIDIA RTX 3090 24G GPU. Our model is trained using AdamW with a learning rate of 1e-5, cross-entropy as the loss function, and a batch size of 32. The optimal parameters of all models were obtained by tuning with leave-one-out cross-validation on the validation set. 5.2 Comparison with the State-of-the-Art Table 1 and Table 2 show the emotion recognition results of the proposed GS-MCC method and the baseline methods on the IEMOCAP and MELD datasets, respectively. On the IEMOCAP dataset, GS-MCC achieves the best emotion recognition performance, outperforming all comparison baselines and beating AdaGIN by 3.3% and 3.2% on W-Acc and W-F1, respectively. In addition, GS-MCC significantly improves the Acc and F1 values in several emotion categories. Similarly, GS-MCC outperforms all comparison baselines on the MELD dataset, beating AdaGIN by 0.5% and 2.2% on W-Acc and W-F1, respectively. Furthermore, GS-MCC achieves the best Acc and F1 in most emotion categories. These experimental results demonstrate the effectiveness of GS-MCC. The performance improvement may be attributed to the proposed method's ability to utilize long-distance contextual semantic information from the high- and low-frequency signals while avoiding the over-smoothing phenomenon of GCN. Furthermore, the proposed GS-MCC has only 2.10M model parameters, far fewer than DialogueGCN and other GCN-based emotion recognition methods. The experimental results also demonstrate the potential application of our method in efficient computing. Table 1: Comparison with other baseline models on the IEMOCAP dataset. The best result in each column is in bold. Each emotion cell reports Acc./F1.
Methods | Params. | Happy | Sad | Neutral | Angry | Excited | Frustrated | Average(w)
bc-LSTM | 1.28M | 29.1/34.4 | 57.1/60.8 | 54.1/51.8 | 57.0/56.7 | 51.1/57.9 | 67.1/58.9 | 55.2/54.9
LFM | 2.34M | 25.6/33.1 | 75.1/78.8 | 58.5/59.2 | 64.7/65.2 | 80.2/71.8 | 61.1/58.9 | 63.4/62.7
A-DMN | – | 43.1/50.6 | 69.4/76.8 | 63.0/62.9 | 63.5/56.5 | 88.3/77.9 | 53.3/55.7 | 64.6/64.3
DialogueGCN | 12.92M | 40.6/42.7 | 89.1/84.5 | 62.0/63.5 | 67.5/64.1 | 65.5/63.1 | 64.1/66.9 | 65.2/64.1
RGAT | 15.28M | 60.1/51.6 | 78.8/77.3 | 60.1/65.4 | 70.7/63.0 | 78.0/68.0 | 64.3/61.2 | 65.0/65.2
CoMPM | – | 59.9/60.7 | 78.0/82.2 | 60.4/63.0 | 70.2/59.9 | 85.8/78.2 | 62.9/59.5 | 67.7/67.2
EmoBERTa | 499M | 56.9/56.4 | 79.1/83.0 | 64.0/61.5 | 70.6/69.6 | 86.0/78.0 | 63.8/68.7 | 67.3/67.3
COGMEN | – | 57.4/51.9 | 81.4/81.7 | 65.4/68.6 | 69.5/66.0 | 83.3/75.3 | 63.8/68.2 | 68.2/67.6
CTNet | 8.49M | 47.9/51.3 | 78.0/79.9 | 69.0/65.8 | 72.9/67.2 | 85.3/78.7 | 52.2/58.8 | 68.0/67.5
LR-GCN | 15.77M | 54.2/55.5 | 81.6/79.1 | 59.1/63.8 | 69.4/69.0 | 76.3/74.0 | 68.2/68.9 | 68.5/68.3
MMGCN | 0.46M | 43.1/42.3 | 79.3/78.7 | 63.5/61.7 | 69.6/69.0 | 75.8/74.3 | 63.5/62.3 | 67.4/66.2
AdaGIN | 6.3M | 53.0/– | 81.5/– | 71.3/– | 65.9/– | 76.3/– | 67.8/– | 70.5/70.7
DER-GCN | 78.59M | 60.7/58.8 | 75.9/79.8 | 66.5/61.5 | 71.3/72.1 | 71.1/73.3 | 66.1/67.8 | 69.7/69.4
GS-MCC | 2.10M | 60.2/65.4 | 86.2/81.2 | 75.7/70.9 | 71.7/70.8 | 83.2/81.4 | 66.0/71.0 | 73.8/73.9
Table 2: Comparison with other baseline models on the MELD dataset. The best result in each column is in bold. Each emotion cell reports Acc./F1.
Methods | Params. | Neutral | Surprise | Fear | Sadness | Joy | Disgust | Anger | Average(w)
bc-LSTM | 1.28M | 78.4/73.8 | 46.8/47.7 | 3.8/5.4 | 22.4/25.1 | 51.6/51.3 | 4.3/5.2 | 36.7/38.4 | 57.5/55.9
DialogueRNN | 14.47M | 72.1/73.5 | 54.4/49.4 | 1.6/1.2 | 23.9/23.8 | 52.0/50.7 | 1.5/1.7 | 41.0/41.5 | 56.1/55.9
DialogueGCN | 12.92M | 70.3/72.1 | 42.4/41.7 | 3.0/2.8 | 20.9/21.8 | 44.7/44.2 | 6.5/6.7 | 39.0/36.5 | 54.9/54.7
RGAT | 15.28M | 76.0/78.1 | 40.1/41.5 | 3.0/2.4 | 32.1/30.7 | 68.1/58.6 | 4.5/2.2 | 40.0/44.6 | 60.3/61.1
CoMPM | – | 78.3/82.0 | 48.3/49.2 | 1.7/2.9 | 35.9/32.3 | 71.4/61.5 | 3.1/2.8 | 42.2/45.8 | 64.1/65.3
EmoBERTa | 499M | 78.9/82.5 | 50.2/50.2 | 1.8/1.9 | 33.3/31.2 | 72.1/61.7 | 9.1/2.5 | 43.3/46.4 | 64.1/65.2
A-DMN | – | 76.5/78.9 | 56.2/55.3 | 8.2/8.6 | 22.1/24.9 | 59.8/57.4 | 1.2/3.4 | 41.3/40.9 | 61.5/60.4
LR-GCN | 15.77M | 76.7/80.0 | 53.3/55.2 | 0.0/0.0 | 49.6/35.1 | 68.0/64.4 | 10.7/2.7 | 48.0/51.0 | 65.7/65.6
MM-GCN | 0.46M | 64.8/77.1 | 67.4/53.9 | 0.0/0.0 | 72.4/17.7 | 68.7/56.9 | 0.0/0.0 | 54.4/42.6 | 64.4/59.4
AdaGIN | 6.3M | 79.8/– | 60.5/– | 15.2/– | 43.7/– | 64.5/– | 29.3/– | 56.2/– | 67.6/66.8
DER-GCN | 78.59M | 76.8/80.6 | 50.5/51.0 | 14.8/10.4 | 56.7/41.5 | 69.3/64.3 | 17.2/10.3 | 52.5/57.4 | 66.8/66.1
GS-MCC | 2.10M | 78.4/81.8 | 56.9/58.3 | 23.5/23.8 | 50.0/35.8 | 69.4/66.4 | 36.7/30.7 | 53.2/54.4 | 68.1/69.0
Figure 3: Loss trends during model training and inference on the IEMOCAP and MELD datasets. We compare DialogueGCN, GS-MCC without contrastive loss, and GS-MCC. 5.3 Trends of Losses To better understand the convergence of the model during training and inference, we show the loss trends of DialogueGCN, GS-MCC without contrastive loss, and GS-MCC on the IEMOCAP and MELD datasets. Fig. 3 shows the training-loss results. On the IEMOCAP dataset, we find that DialogueGCN quickly converges to a local optimum and keeps fluctuating around a loss value of 1.1. GS-MCC without contrastive loss converges better than DialogueGCN, settling around a loss value of 0.8. Although the loss of GS-MCC without contrastive loss is higher than that of GS-MCC at the beginning of training, as training continues the convergence of GS-MCC becomes increasingly better than that of GS-MCC without contrastive loss, and it converges around a loss value of 0.4. 
On the MELD dataset, the loss values of DialogueGCN and GS-MCC without contrastive loss fluctuate considerably and are difficult to converge, although the loss of GS-MCC without contrastive loss is lower than that of DialogueGCN. However, GS-MCC with contrastive loss converges better, settling around a loss value of 0.9. The experimental results show that the contrastive learning mechanism plays an essential role in the convergence of the GCN network and can coordinate high- and low-frequency features for better emotion recognition. 5.4 Over-smoothing Analysis It is challenging to train a deep GCN with strong feature expression ability, because a deep GCN is prone to over-smoothing, which limits the expressive power of node features. From a training perspective, over-smoothing removes discriminative semantic information from node features. Therefore, we stack 4-layer and 8-layer GCNs to explore the over-smoothing phenomenon of DialogueGCN and GS-MCC on the IEMOCAP and MELD datasets. Fig. 4 shows the experimental comparison results. Figure 4: Emotion recognition performance of DialogueGCN and GS-MCC on the IEMOCAP and MELD datasets. We stack 4-layer and 8-layer GCNs to explore the over-smoothing phenomenon of the models. On the IEMOCAP and MELD datasets, we observe that the training convergence of DialogueGCN-8 is poor and the model suffers from severe over-smoothing. The training convergence of DialogueGCN-4 is slightly better than that of DialogueGCN-8, but it only fluctuates around a local optimum; DialogueGCN-4 also suffers from serious over-smoothing, especially on the IEMOCAP dataset. Compared with DialogueGCN-8, GS-MCC-8 alleviates the over-smoothing problem to a certain extent and converges to a local optimum, while GS-MCC-4 converges well to a relatively stable optimal solution. The experimental results show that GS-MCC can alleviate the model's over-smoothing problem to a certain extent. This may be attributed to GS-MCC's ability to use node information of different orders in the graph to update the node feature representations: by mixing the feature information of different-order nodes in each layer, GS-MCC maintains the diversity of node features, thereby preventing feature over-smoothing. Therefore, GS-MCC can effectively capture long-distance dependency information in multimodal conversations. 5.5 Ablation Study Ablation studies for SE, Fourier GNN, and CL. Speaker embedding (SE), the Fourier graph neural network (Fourier GNN), and contrastive learning (CL) are the three critical components of our proposed multimodal emotion recognition model. We remove one proposed module at a time to verify the effectiveness of each component; note that when the Fourier GNN is removed, we use DialogueGCN as the backbone of the model. From the emotion recognition results in Table 3, we conclude: (1) All the proposed modules are helpful, because removing any of them degrades the model's emotion recognition performance. (2) Speaker embedding has a relatively significant impact on the model's emotion recognition performance, because removing the speaker embedding information substantially reduces the emotion recognition results on both the IEMOCAP and MELD datasets. 
The experimental results show that the speaker embedding information is essential for the model to understand emotions. (3) On the IEMOCAP and MELD datasets, the Fourier GNN is more critical than contrastive learning. We speculate that this is because the Fourier GNN captures the high- and low-frequency signals that provide most of the useful emotional semantic information, while the contrastive learning mechanism mainly assists the Fourier GNN in better coordinating the complementary and consistent semantic information. Table 3: Ablation studies for SE, Fourier GNN, and CL on the IEMOCAP and MELD datasets.
Methods | IEMOCAP W-Acc. | IEMOCAP W-F1 | MELD W-Acc. | MELD W-F1
GS-MCC | 73.1 | 73.3 | 68.1 | 69.0
w/o SE | 70.3 (↓2.8) | 70.6 (↓2.7) | 65.4 (↓2.7) | 64.6 (↓4.4)
w/o Fourier GNN | 68.7 (↓4.4) | 67.7 (↓5.6) | 64.2 (↓3.9) | 64.1 (↓4.9)
w/o CL | 70.3 (↓2.8) | 71.3 (↓2.0) | 66.1 (↓2.0) | 65.9 (↓3.1)
Table 4: The effect of our method on the IEMOCAP and MELD datasets using unimodal and multimodal features, respectively.
Modality | IEMOCAP W-Acc. | IEMOCAP W-F1 | MELD W-Acc. | MELD W-F1
T+A+V | 73.8 | 73.9 | 68.1 | 69.0
T | 66.3 (↓7.5) | 66.0 (↓7.9) | 63.7 (↓4.4) | 62.5 (↓6.5)
A | 57.7 (↓16.1) | 58.1 (↓15.8) | 53.8 (↓14.3) | 53.4 (↓15.6)
V | 50.4 (↓23.4) | 50.5 (↓23.4) | 41.4 (↓26.7) | 42.3 (↓26.7)
T+A | 71.6 (↓2.2) | 71.0 (↓2.9) | 66.3 (↓1.8) | 65.9 (↓3.1)
T+V | 69.5 (↓4.3) | 68.7 (↓5.2) | 64.2 (↓3.9) | 64.1 (↓4.9)
V+A | 63.7 (↓10.1) | 63.0 (↓10.9) | 54.6 (↓13.5) | 53.4 (↓15.6)
Ablation studies for multimodal features. We conduct ablation experiments on the multimodal features, comparing single-modal, bi-modal, and tri-modal results to explore the importance of each modality. The experimental results are listed in Table 4, with W-Acc and W-F1 as the evaluation metrics. In the single-modal experiments, the text modality achieves the best performance, which shows that text features play a decisive role in MERC. Video features yield the worst emotion recognition results; we speculate that video features contain more noise, making it difficult for the model to learn effective emotional feature representations. In the bi-modal experiments, every bi-modal combination outperforms its constituent single modalities, and the tri-modal setting performs best of all. The performance improvement may be attributed to the effective fusion of complementary multimodal semantic information, which improves the feature representation of emotions. Therefore, GS-MCC can effectively utilize the consistent and complementary semantic information in multimodal conversations to improve emotion recognition. 6 CONCLUSIONS In this paper, we rethink the problem of multimodal emotion recognition in conversation from the perspective of the graph spectrum, addressing some shortcomings of existing work. Specifically, we propose a Graph-Spectrum-based Multimodal Consistency and Complementary feature collaboration framework, GS-MCC. First, we use sliding windows to build a multimodal interaction graph that models the conversational relationships between utterances and speakers. Second, we design efficient Fourier graph operators to capture the long-distance consistent and complementary semantic dependencies of utterances. 
Finally, we adopt contrastive learning and construct self-supervised signals with all negative samples to promote the collaboration of the two semantic information. Extensive experiments on two widely used benchmark datasets, IEMOCAP and MELD, demonstrate the effectiveness and efficiency of our proposed method." + }, + { + "url": "http://arxiv.org/abs/2107.06779v1", + "title": "MMGCN: Multimodal Fusion via Deep Graph Convolution Network for Emotion Recognition in Conversation", + "abstract": "Emotion recognition in conversation (ERC) is a crucial component in affective\ndialogue systems, which helps the system understand users' emotions and\ngenerate empathetic responses. However, most works focus on modeling speaker\nand contextual information primarily on the textual modality or simply\nleveraging multimodal information through feature concatenation. In order to\nexplore a more effective way of utilizing both multimodal and long-distance\ncontextual information, we propose a new model based on multimodal fused graph\nconvolutional network, MMGCN, in this work. MMGCN can not only make use of\nmultimodal dependencies effectively, but also leverage speaker information to\nmodel inter-speaker and intra-speaker dependency. We evaluate our proposed\nmodel on two public benchmark datasets, IEMOCAP and MELD, and the results prove\nthe effectiveness of MMGCN, which outperforms other SOTA methods by a\nsignificant margin under the multimodal conversation setting.", + "authors": "Jingwen Hu, Yuchen Liu, Jinming Zhao, Qin Jin", + "published": "2021-07-14", + "updated": "2021-07-14", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.SD", + "eess.AS" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2203.02177v2", + "title": "GCNet: Graph Completion Network for Incomplete Multimodal Learning in Conversation", + "abstract": "Conversations have become a critical data format on social media platforms.\nUnderstanding conversation from emotion, content and other aspects also\nattracts increasing attention from researchers due to its widespread\napplication in human-computer interaction. In real-world environments, we often\nencounter the problem of incomplete modalities, which has become a core issue\nof conversation understanding. To address this problem, researchers propose\nvarious methods. However, existing approaches are mainly designed for\nindividual utterances rather than conversational data, which cannot fully\nexploit temporal and speaker information in conversations. To this end, we\npropose a novel framework for incomplete multimodal learning in conversations,\ncalled \"Graph Complete Network (GCNet)\", filling the gap of existing works. Our\nGCNet contains two well-designed graph neural network-based modules, \"Speaker\nGNN\" and \"Temporal GNN\", to capture temporal and speaker dependencies. To make\nfull use of complete and incomplete data, we jointly optimize classification\nand reconstruction tasks in an end-to-end manner. To verify the effectiveness\nof our method, we conduct experiments on three benchmark conversational\ndatasets. Experimental results demonstrate that our GCNet is superior to\nexisting state-of-the-art approaches in incomplete multimodal learning. 
Code is\navailable at https://github.com/zeroQiaoba/GCNet.", + "authors": "Zheng Lian, Lan Chen, Licai Sun, Bin Liu, Jianhua Tao", + "published": "2022-03-04", + "updated": "2023-01-04", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.CL" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2310.04456v1", + "title": "Multimodal Prompt Transformer with Hybrid Contrastive Learning for Emotion Recognition in Conversation", + "abstract": "Emotion Recognition in Conversation (ERC) plays an important role in driving\nthe development of human-machine interaction. Emotions can exist in multiple\nmodalities, and multimodal ERC mainly faces two problems: (1) the noise problem\nin the cross-modal information fusion process, and (2) the prediction problem\nof less sample emotion labels that are semantically similar but different\ncategories. To address these issues and fully utilize the features of each\nmodality, we adopted the following strategies: first, deep emotion cues\nextraction was performed on modalities with strong representation ability, and\nfeature filters were designed as multimodal prompt information for modalities\nwith weak representation ability. Then, we designed a Multimodal Prompt\nTransformer (MPT) to perform cross-modal information fusion. MPT embeds\nmultimodal fusion information into each attention layer of the Transformer,\nallowing prompt information to participate in encoding textual features and\nbeing fused with multi-level textual information to obtain better multimodal\nfusion features. Finally, we used the Hybrid Contrastive Learning (HCL)\nstrategy to optimize the model's ability to handle labels with few samples.\nThis strategy uses unsupervised contrastive learning to improve the\nrepresentation ability of multimodal fusion and supervised contrastive learning\nto mine the information of labels with few samples. Experimental results show\nthat our proposed model outperforms state-of-the-art models in ERC on two\nbenchmark datasets.", + "authors": "Shihao Zou, Xianying Huang, Xudong Shen", + "published": "2023-10-04", + "updated": "2023-10-04", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.SD", + "eess.AS" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2206.02187v1", + "title": "M2FNet: Multi-modal Fusion Network for Emotion Recognition in Conversation", + "abstract": "Emotion Recognition in Conversations (ERC) is crucial in developing\nsympathetic human-machine interaction. In conversational videos, emotion can be\npresent in multiple modalities, i.e., audio, video, and transcript. However,\ndue to the inherent characteristics of these modalities, multi-modal ERC has\nalways been considered a challenging undertaking. Existing ERC research focuses\nmainly on using text information in a discussion, ignoring the other two\nmodalities. We anticipate that emotion recognition accuracy can be improved by\nemploying a multi-modal approach. Thus, in this study, we propose a Multi-modal\nFusion Network (M2FNet) that extracts emotion-relevant features from visual,\naudio, and text modality. It employs a multi-head attention-based fusion\nmechanism to combine emotion-rich latent representations of the input data. We\nintroduce a new feature extractor to extract latent features from the audio and\nvisual modality. The proposed feature extractor is trained with a novel\nadaptive margin-based triplet loss function to learn emotion-relevant features\nfrom the audio and visual data. 
In the domain of ERC, the existing methods\nperform well on one benchmark dataset but not on others. Our results show that\nthe proposed M2FNet architecture outperforms all other methods in terms of\nweighted average F1 score on well-known MELD and IEMOCAP datasets and sets a\nnew state-of-the-art performance in ERC.", + "authors": "Vishal Chudasama, Purbayan Kar, Ashish Gudmalwar, Nirmesh Shah, Pankaj Wasnik, Naoyuki Onoe", + "published": "2022-06-05", + "updated": "2022-06-05", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV", + "cs.SD", + "eess.AS" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2305.17727v2", + "title": "Learning a Structural Causal Model for Intuition Reasoning in Conversation", + "abstract": "Reasoning, a crucial aspect of NLP research, has not been adequately\naddressed by prevailing models including Large Language Model. Conversation\nreasoning, as a critical component of it, remains largely unexplored due to the\nabsence of a well-designed cognitive model. In this paper, inspired by\nintuition theory on conversation cognition, we develop a conversation cognitive\nmodel (CCM) that explains how each utterance receives and activates channels of\ninformation recursively. Besides, we algebraically transformed CCM into a\nstructural causal model (SCM) under some mild assumptions, rendering it\ncompatible with various causal discovery methods. We further propose a\nprobabilistic implementation of the SCM for utterance-level relation reasoning.\nBy leveraging variational inference, it explores substitutes for implicit\ncauses, addresses the issue of their unobservability, and reconstructs the\ncausal representations of utterances through the evidence lower bounds.\nMoreover, we constructed synthetic and simulated datasets incorporating\nimplicit causes and complete cause labels, alleviating the current situation\nwhere all available datasets are implicit-causes-agnostic. Extensive\nexperiments demonstrate that our proposed method significantly outperforms\nexisting methods on synthetic, simulated, and real-world datasets. Finally, we\nanalyze the performance of CCM under latent confounders and propose theoretical\nideas for addressing this currently unresolved issue.", + "authors": "Hang Chen, Bingyu Liao, Jing Luo, Wenjing Zhu, Xinyu Yang", + "published": "2023-05-28", + "updated": "2024-01-16", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2207.12261v4", + "title": "GraphCFC: A Directed Graph Based Cross-Modal Feature Complementation Approach for Multimodal Conversational Emotion Recognition", + "abstract": "Emotion Recognition in Conversation (ERC) plays a significant part in\nHuman-Computer Interaction (HCI) systems since it can provide empathetic\nservices. Multimodal ERC can mitigate the drawbacks of uni-modal approaches.\nRecently, Graph Neural Networks (GNNs) have been widely used in a variety of\nfields due to their superior performance in relation modeling. In multimodal\nERC, GNNs are capable of extracting both long-distance contextual information\nand inter-modal interactive information. Unfortunately, since existing methods\nsuch as MMGCN directly fuse multiple modalities, redundant information may be\ngenerated and diverse information may be lost. In this work, we present a\ndirected Graph based Cross-modal Feature Complementation (GraphCFC) module that\ncan efficiently model contextual and interactive information. 
GraphCFC\nalleviates the problem of heterogeneity gap in multimodal fusion by utilizing\nmultiple subspace extractors and Pair-wise Cross-modal Complementary (PairCC)\nstrategy. We extract various types of edges from the constructed graph for\nencoding, thus enabling GNNs to extract crucial contextual and interactive\ninformation more accurately when performing message passing. Furthermore, we\ndesign a GNN structure called GAT-MLP, which can provide a new unified network\nframework for multimodal learning. The experimental results on two benchmark\ndatasets show that our GraphCFC outperforms the state-of-the-art (SOTA)\napproaches.", + "authors": "Jiang Li, Xiaoping Wang, Guoqing Lv, Zhigang Zeng", + "published": "2022-07-06", + "updated": "2023-11-22", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.LG", + "cs.MM" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/1811.00405v4", + "title": "DialogueRNN: An Attentive RNN for Emotion Detection in Conversations", + "abstract": "Emotion detection in conversations is a necessary step for a number of\napplications, including opinion mining over chat history, social media threads,\ndebates, argumentation mining, understanding consumer feedback in live\nconversations, etc. Currently, systems do not treat the parties in the\nconversation individually by adapting to the speaker of each utterance. In this\npaper, we describe a new method based on recurrent neural networks that keeps\ntrack of the individual party states throughout the conversation and uses this\ninformation for emotion classification. Our model outperforms the state of the\nart by a significant margin on two different datasets.", + "authors": "Navonil Majumder, Soujanya Poria, Devamanyu Hazarika, Rada Mihalcea, Alexander Gelbukh, Erik Cambria", + "published": "2018-11-01", + "updated": "2019-05-25", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/1806.00064v1", + "title": "Efficient Low-rank Multimodal Fusion with Modality-Specific Factors", + "abstract": "Multimodal research is an emerging field of artificial intelligence, and one\nof the main research problems in this field is multimodal fusion. The fusion of\nmultimodal data is the process of integrating multiple unimodal representations\ninto one compact multimodal representation. Previous research in this field has\nexploited the expressiveness of tensors for multimodal representation. However,\nthese methods often suffer from exponential increase in dimensions and in\ncomputational complexity introduced by transformation of input into tensor. In\nthis paper, we propose the Low-rank Multimodal Fusion method, which performs\nmultimodal fusion using low-rank tensors to improve efficiency. We evaluate our\nmodel on three different tasks: multimodal sentiment analysis, speaker trait\nanalysis, and emotion recognition. Our model achieves competitive results on\nall these tasks while drastically reducing computational complexity. 
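The low-rank trick in the LMF entry above replaces the explicit high-order fusion tensor with modality-specific rank-R factors whose projections are multiplied elementwise. A minimal PyTorch sketch follows; the dimensions, rank, and initialization scale are illustrative choices.

```python
import torch
import torch.nn as nn

class LowRankFusion(nn.Module):
    """Sketch of modality-specific low-rank fusion: each modality's augmented
    feature is projected by rank-R factors, the projections are multiplied
    elementwise across modalities, and the rank dimension is summed out, so
    the full outer-product fusion tensor is never materialized."""
    def __init__(self, dims=(64, 32, 32), out_dim=16, rank=4):
        super().__init__()
        # One (R, d_m + 1, out_dim) factor per modality; the +1 input carries a constant 1.
        self.factors = nn.ParameterList(
            [nn.Parameter(torch.randn(rank, d + 1, out_dim) * 0.1) for d in dims]
        )

    def forward(self, *xs):                               # each x_m: (B, d_m)
        fused = None
        for x, f in zip(xs, self.factors):
            x1 = torch.cat([x, torch.ones(x.size(0), 1)], dim=1)
            proj = torch.einsum('bd,rdo->bro', x1, f)     # (B, R, out_dim)
            fused = proj if fused is None else fused * proj
        return fused.sum(dim=1)                           # (B, out_dim)

m = LowRankFusion()
print(m(torch.randn(8, 64), torch.randn(8, 32), torch.randn(8, 32)).shape)
```

The memory saving is exactly the one the abstract describes: the per-modality factors grow linearly in the number of modalities rather than exponentially.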
Additional\nexperiments also show that our model can perform robustly for a wide range of\nlow-rank settings, and is indeed much more efficient in both training and\ninference compared to other methods that utilize tensor representations.", + "authors": "Zhun Liu, Ying Shen, Varun Bharadhwaj Lakshminarasimhan, Paul Pu Liang, Amir Zadeh, Louis-Philippe Morency", + "published": "2018-05-31", + "updated": "2018-05-31", + "primary_cat": "cs.AI", + "cats": [ + "cs.AI", + "cs.LG", + "stat.ML" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/1707.07250v1", + "title": "Tensor Fusion Network for Multimodal Sentiment Analysis", + "abstract": "Multimodal sentiment analysis is an increasingly popular research area, which\nextends the conventional language-based definition of sentiment analysis to a\nmultimodal setup where other relevant modalities accompany language. In this\npaper, we pose the problem of multimodal sentiment analysis as modeling\nintra-modality and inter-modality dynamics. We introduce a novel model, termed\nTensor Fusion Network, which learns both such dynamics end-to-end. The proposed\napproach is tailored for the volatile nature of spoken language in online\nvideos as well as accompanying gestures and voice. In the experiments, our\nmodel outperforms state-of-the-art approaches for both multimodal and unimodal\nsentiment analysis.", + "authors": "Amir Zadeh, Minghai Chen, Soujanya Poria, Erik Cambria, Louis-Philippe Morency", + "published": "2017-07-23", + "updated": "2017-07-23", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2206.05833v2", + "title": "COLD Fusion: Calibrated and Ordinal Latent Distribution Fusion for Uncertainty-Aware Multimodal Emotion Recognition", + "abstract": "Automatically recognising apparent emotions from face and voice is hard, in\npart because of various sources of uncertainty, including in the input data and\nthe labels used in a machine learning framework. This paper introduces an\nuncertainty-aware audiovisual fusion approach that quantifies modality-wise\nuncertainty towards emotion prediction. To this end, we propose a novel fusion\nframework in which we first learn latent distributions over audiovisual\ntemporal context vectors separately, and then constrain the variance vectors of\nunimodal latent distributions so that they represent the amount of information\neach modality provides w.r.t. emotion recognition. In particular, we impose\nCalibration and Ordinal Ranking constraints on the variance vectors of\naudiovisual latent distributions. When well-calibrated, modality-wise\nuncertainty scores indicate how much their corresponding predictions may differ\nfrom the ground truth labels. Well-ranked uncertainty scores allow the ordinal\nranking of different frames across the modalities. To jointly impose both these\nconstraints, we propose a softmax distributional matching loss. In both\nclassification and regression settings, we compare our uncertainty-aware fusion\nmodel with standard model-agnostic fusion baselines. Our evaluation on two\nemotion recognition corpora, AVEC 2019 CES and IEMOCAP, shows that audiovisual\nemotion recognition can considerably benefit from well-calibrated and\nwell-ranked latent uncertainty measures.", + "authors": "Mani Kumar Tellamekala, Shahin Amiriparian, Bj\u00f6rn W. 
Schuller, Elisabeth Andr\u00e9, Timo Giesbrecht, Michel Valstar", + "published": "2022-06-12", + "updated": "2023-10-16", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV", + "cs.HC", + "cs.MM" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2310.20494v1", + "title": "A Transformer-Based Model With Self-Distillation for Multimodal Emotion Recognition in Conversations", + "abstract": "Emotion recognition in conversations (ERC), the task of recognizing the\nemotion of each utterance in a conversation, is crucial for building empathetic\nmachines. Existing studies focus mainly on capturing context- and\nspeaker-sensitive dependencies on the textual modality but ignore the\nsignificance of multimodal information. Different from emotion recognition in\ntextual conversations, capturing intra- and inter-modal interactions between\nutterances, learning weights between different modalities, and enhancing modal\nrepresentations play important roles in multimodal ERC. In this paper, we\npropose a transformer-based model with self-distillation (SDT) for the task.\nThe transformer-based model captures intra- and inter-modal interactions by\nutilizing intra- and inter-modal transformers, and learns weights between\nmodalities dynamically by designing a hierarchical gated fusion strategy.\nFurthermore, to learn more expressive modal representations, we treat soft\nlabels of the proposed model as extra training supervision. Specifically, we\nintroduce self-distillation to transfer knowledge of hard and soft labels from\nthe proposed model to each modality. Experiments on IEMOCAP and MELD datasets\ndemonstrate that SDT outperforms previous state-of-the-art baselines.", + "authors": "Hui Ma, Jian Wang, Hongfei Lin, Bo Zhang, Yijia Zhang, Bo Xu", + "published": "2023-10-31", + "updated": "2023-10-31", + "primary_cat": "cs.AI", + "cats": [ + "cs.AI", + "cs.MM" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2308.04502v2", + "title": "Revisiting Disentanglement and Fusion on Modality and Context in Conversational Multimodal Emotion Recognition", + "abstract": "It has been a hot research topic to enable machines to understand human\nemotions in multimodal contexts under dialogue scenarios, which is tasked with\nmultimodal emotion analysis in conversation (MM-ERC). MM-ERC has received\nconsistent attention in recent years, where a diverse range of methods has been\nproposed for securing better task performance. Most existing works treat MM-ERC\nas a standard multimodal classification problem and perform multimodal feature\ndisentanglement and fusion for maximizing feature utility. Yet after revisiting\nthe characteristic of MM-ERC, we argue that both the feature multimodality and\nconversational contextualization should be properly modeled simultaneously\nduring the feature disentanglement and fusion steps. In this work, we target\nfurther pushing the task performance by taking full consideration of the above\ninsights. On the one hand, during feature disentanglement, based on the\ncontrastive learning technique, we devise a Dual-level Disentanglement\nMechanism (DDM) to decouple the features into both the modality space and\nutterance space. On the other hand, during the feature fusion stage, we propose\na Contribution-aware Fusion Mechanism (CFM) and a Context Refusion Mechanism\n(CRM) for multimodal and context integration, respectively. 
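The self-distillation scheme in the SDT entry above transfers soft labels from the fused model to each modality branch. Here is a generic knowledge-distillation sketch, assuming the fused logits act as the teacher; the temperature, weighting, and function name are illustrative stand-ins, not the paper's settings.

```python
import torch
import torch.nn.functional as F

def distill_loss(student_logits, teacher_logits, hard_labels, T=2.0, alpha=0.5):
    """Soft-label distillation for one unimodal branch: a KL term against the
    (detached) teacher's tempered distribution plus the usual hard-label
    cross-entropy. T and alpha are illustrative hyperparameters."""
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=-1),
        F.softmax(teacher_logits.detach() / T, dim=-1),
        reduction="batchmean",
    ) * (T * T)
    hard = F.cross_entropy(student_logits, hard_labels)
    return alpha * soft + (1 - alpha) * hard

s, t = torch.randn(4, 6), torch.randn(4, 6)   # 6 emotion classes, batch of 4
print(distill_loss(s, t, torch.randint(0, 6, (4,))))
```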
They together\nschedule the proper integrations of multimodal and context features.\nSpecifically, CFM explicitly manages the multimodal feature contributions\ndynamically, while CRM flexibly coordinates the introduction of dialogue\ncontexts. On two public MM-ERC datasets, our system achieves new\nstate-of-the-art performance consistently. Further analyses demonstrate that\nall our proposed mechanisms greatly facilitate the MM-ERC task by making full\nuse of the multimodal and context features adaptively. Note that our proposed\nmethods have the great potential to facilitate a broader range of other\nconversational multimodal tasks.", + "authors": "Bobo Li, Hao Fei, Lizi Liao, Yu Zhao, Chong Teng, Tat-Seng Chua, Donghong Ji, Fei Li", + "published": "2023-08-08", + "updated": "2023-08-12", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2402.02321v1", + "title": "Active Learning for Graphs with Noisy Structures", + "abstract": "Graph Neural Networks (GNNs) have seen significant success in tasks such as\nnode classification, largely contingent upon the availability of sufficient\nlabeled nodes. Yet, the excessive cost of labeling large-scale graphs led to a\nfocus on active learning on graphs, which aims for effective data selection to\nmaximize downstream model performance. Notably, most existing methods assume\nreliable graph topology, while real-world scenarios often present noisy graphs.\nGiven this, designing a successful active learning framework for noisy graphs\nis highly needed but challenging, as selecting data for labeling and obtaining\na clean graph are two tasks naturally interdependent: selecting high-quality\ndata requires clean graph structure while cleaning noisy graph structure\nrequires sufficient labeled data. Considering the complexity mentioned above,\nwe propose an active learning framework, GALClean, which has been specifically\ndesigned to adopt an iterative approach for conducting both data selection and\ngraph purification simultaneously with best information learned from the prior\niteration. Importantly, we summarize GALClean as an instance of the\nExpectation-Maximization algorithm, which provides a theoretical understanding\nof its design and mechanisms. This theory naturally leads to an enhanced\nversion, GALClean+. Extensive experiments have demonstrated the effectiveness\nand robustness of our proposed method across various types and levels of noisy\ngraphs.", + "authors": "Hongliang Chi, Cong Qi, Suhang Wang, Yao Ma", + "published": "2024-02-04", + "updated": "2024-02-04", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/1909.11594v1", + "title": "Structured Graph Learning Via Laplacian Spectral Constraints", + "abstract": "Learning a graph with a specific structure is essential for interpretability\nand identification of the relationships among data. It is well known that\nstructured graph learning from observed samples is an NP-hard combinatorial\nproblem. In this paper, we first show that for a set of important graph\nfamilies it is possible to convert the structural constraints into\neigenvalue constraints of the graph Laplacian matrix. Then we introduce a\nunified graph learning framework, lying at the integration of the spectral\nproperties of the Laplacian matrix with Gaussian graphical modeling that is\ncapable of learning structures of a large class of graph families. 
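The structure-to-spectrum correspondence that the Laplacian-spectral-constraints entry above relies on can be checked directly: a k-component graph is exactly a graph whose Laplacian has k zero eigenvalues. A minimal numpy illustration (the optimization machinery of the paper is omitted):

```python
import numpy as np

def laplacian_spectrum(A):
    """Eigenvalues of the combinatorial Laplacian L = D - A (A symmetric)."""
    L = np.diag(A.sum(axis=1)) - A
    return np.linalg.eigvalsh(L)

# Two disjoint triangles form a 2-component graph, so L must have exactly two
# zero eigenvalues -- the spectral fingerprint such methods impose as a
# constraint when learning k-component structures.
tri = np.ones((3, 3)) - np.eye(3)
A = np.block([[tri, np.zeros((3, 3))], [np.zeros((3, 3)), tri]])
eig = laplacian_spectrum(A)
print(np.round(eig, 6))                   # [0. 0. 3. 3. 3. 3.]
print((eig < 1e-8).sum(), "components")   # 2 components
```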
The proposed\nalgorithms are provably convergent and practically amenable for large-scale\nsemi-supervised and unsupervised graph-based learning tasks. Extensive\nnumerical experiments with both synthetic and real data sets demonstrate the\neffectiveness of the proposed methods. An R package containing code for all the\nexperimental results is available at\nhttps://cran.r-project.org/package=spectralGraphTopology.", + "authors": "Sandeep Kumar, Jiaxi Ying, Jos\u00e9 Vin\u00edcius de M. Cardoso, Daniel P. Palomar", + "published": "2019-09-24", + "updated": "2019-09-24", + "primary_cat": "stat.ML", + "cats": [ + "stat.ML", + "cs.LG", + "cs.SI", + "math.OC", + "stat.AP" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/2305.15843v1", + "title": "TabGSL: Graph Structure Learning for Tabular Data Prediction", + "abstract": "This work presents a novel approach to tabular data prediction leveraging\ngraph structure learning and graph neural networks. Despite the prevalence of\ntabular data in real-world applications, traditional deep learning methods\noften overlook the potentially valuable associations between data instances.\nSuch associations can offer beneficial insights for classification tasks, as\ninstances may exhibit similar patterns of correlations among features and\ntarget labels. This information can be exploited by graph neural networks,\nnecessitating robust graph structures. However, existing studies primarily\nfocus on improving graph structure from noisy data, largely neglecting the\npossibility of deriving graph structures from tabular data. We present a novel\nsolution, Tabular Graph Structure Learning (TabGSL), to enhance tabular data\nprediction by simultaneously learning instance correlation and feature\ninteraction within a unified framework. This is achieved through a proposed\ngraph contrastive learning module, along with transformer-based feature\nextractor and graph neural network. Comprehensive experiments conducted on 30\nbenchmark tabular datasets demonstrate that TabGSL markedly outperforms both\ntree-based models and recent deep learning-based tabular models. Visualizations\nof the learned instance embeddings further substantiate the effectiveness of\nTabGSL.", + "authors": "Jay Chiehen Liao, Cheng-Te Li", + "published": "2023-05-25", + "updated": "2023-05-25", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.SI" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/2206.08561v1", + "title": "Boosting Graph Structure Learning with Dummy Nodes", + "abstract": "With the development of graph kernels and graph representation learning, many\nsuperior methods have been proposed to handle scalability and oversmoothing\nissues on graph structure learning. However, most of those strategies are\ndesigned based on practical experience rather than theoretical analysis. In\nthis paper, we use a particular dummy node connecting to all existing vertices\nwithout affecting original vertex and edge properties. We further prove that\nsuch a dummy node can help build an efficient monomorphic edge-to-vertex\ntransform and an epimorphic inverse to recover the original graph back. It also\nindicates that adding dummy nodes can preserve local and global structures for\nbetter graph representation learning. We extend graph kernels and graph neural\nnetworks with dummy nodes and conduct experiments on graph classification and\nsubgraph isomorphism matching tasks. 
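The dummy-node construction in the entry above is a one-line adjacency operation, and its invertibility is visible directly: dropping the last row and column recovers the original graph. A minimal numpy sketch:

```python
import numpy as np

def add_dummy_node(A):
    """Append one dummy vertex connected to every existing vertex.

    Original entries are untouched, so the transform is trivially invertible
    by deleting the last row/column, matching the "recover the original graph
    back" property the abstract describes.
    """
    n = A.shape[0]
    out = np.zeros((n + 1, n + 1), dtype=A.dtype)
    out[:n, :n] = A
    out[n, :n] = out[:n, n] = 1   # dummy connects to all original vertices
    return out

A = np.array([[0, 1, 0],
              [1, 0, 1],
              [0, 1, 0]])
print(add_dummy_node(A))
```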
Empirical results demonstrate that taking\ngraphs with dummy nodes as input significantly boosts graph structure learning,\nand using their edge-to-vertex graphs can also achieve similar results. We also\ndiscuss the gain of expressive power from the dummy in neural networks.", + "authors": "Xin Liu, Jiayang Cheng, Yangqiu Song, Xin Jiang", + "published": "2022-06-17", + "updated": "2022-06-17", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/1904.09671v1", + "title": "DDGK: Learning Graph Representations for Deep Divergence Graph Kernels", + "abstract": "Can neural networks learn to compare graphs without feature engineering? In\nthis paper, we show that it is possible to learn representations for graph\nsimilarity with neither domain knowledge nor supervision (i.e.\\ feature\nengineering or labeled graphs). We propose Deep Divergence Graph Kernels, an\nunsupervised method for learning representations over graphs that encodes a\nrelaxed notion of graph isomorphism. Our method consists of three parts. First,\nwe learn an encoder for each anchor graph to capture its structure. Second, for\neach pair of graphs, we train a cross-graph attention network which uses the\nnode representations of an anchor graph to reconstruct another graph. This\napproach, which we call isomorphism attention, captures how well the\nrepresentations of one graph can encode another. We use the attention-augmented\nencoder's predictions to define a divergence score for each pair of graphs.\nFinally, we construct an embedding space for all graphs using these pair-wise\ndivergence scores.\n Unlike previous work, much of which relies on 1) supervision, 2) domain\nspecific knowledge (e.g. a reliance on Weisfeiler-Lehman kernels), and 3) known\nnode alignment, our unsupervised method jointly learns node representations,\ngraph representations, and an attention-based alignment between graphs.\n Our experimental results show that Deep Divergence Graph Kernels can learn an\nunsupervised alignment between graphs, and that the learned representations\nachieve competitive results when used as features on a number of challenging\ngraph classification tasks. Furthermore, we illustrate how the learned\nattention allows insight into the alignment of sub-structures across\ngraphs.", + "authors": "Rami Al-Rfou, Dustin Zelle, Bryan Perozzi", + "published": "2019-04-21", + "updated": "2019-04-21", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.IR", + "cs.SI", + "stat.ML" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/2106.03236v1", + "title": "Graph2Graph Learning with Conditional Autoregressive Models", + "abstract": "We present a graph neural network model for solving graph-to-graph learning\nproblems. Most deep learning on graphs considers ``simple'' problems such as\ngraph classification or regressing real-valued graph properties. For such\ntasks, the main requirement for intermediate representations of the data is to\nmaintain the structure needed for output, i.e., keeping classes separated or\nmaintaining the order indicated by the regressor. However, a number of learning\ntasks, such as regressing graph-valued output, generative models, or graph\nautoencoders, aim to predict a graph-structured output. In order to\nsuccessfully do this, the learned representations need to preserve far more\nstructure. 
We present a conditional auto-regressive model for graph-to-graph\nlearning and illustrate its representational capabilities via experiments on\nchallenging subgraph predictions from graph algorithmics; as a graph\nautoencoder for reconstruction and visualization; and on pretraining\nrepresentations that allow graph classification with limited labeled data.", + "authors": "Guan Wang, Francois Bernard Lauze, Aasa Feragen", + "published": "2021-06-06", + "updated": "2021-06-06", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/2009.00647v4", + "title": "Lifelong Graph Learning", + "abstract": "Graph neural networks (GNN) are powerful models for many graph-structured\ntasks. Existing models often assume that the complete structure of the graph is\navailable during training. In practice, however, graph-structured data is\nusually formed in a streaming fashion so that learning a graph continuously is\noften necessary. In this paper, we bridge GNN and lifelong learning by\nconverting a continual graph learning problem to a regular graph learning\nproblem so GNN can inherit the lifelong learning techniques developed for\nconvolutional neural networks (CNN). We propose a new topology, the feature\ngraph, which takes features as new nodes and turns nodes into independent\ngraphs. This successfully converts the original problem of node classification\nto graph classification. In the experiments, we demonstrate the efficiency and\neffectiveness of feature graph networks (FGN) by continuously learning a\nsequence of classical graph datasets. We also show that FGN achieves superior\nperformance in two applications, i.e., lifelong human action recognition with\nwearable devices and feature matching. To the best of our knowledge, FGN is the\nfirst method to bridge graph learning and lifelong learning via a novel graph\ntopology. Source code is available at https://github.com/wang-chen/LGL", + "authors": "Chen Wang, Yuheng Qiu, Dasong Gao, Sebastian Scherer", + "published": "2020-09-01", + "updated": "2022-03-26", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "stat.ML" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/2404.11869v1", + "title": "Multi-view Graph Structural Representation Learning via Graph Coarsening", + "abstract": "Graph Transformers (GTs) have made remarkable achievements in graph-level\ntasks. However, most existing works regard graph structures as a form of\nguidance or bias for enhancing node representations, which focuses on\nnode-central perspectives and lacks explicit representations of edges and\nstructures. One natural question is, can we treat graph structures node-like as\na whole to learn high-level features? Through experimental analysis, we explore\nthe feasibility of this assumption. Based on our findings, we propose a novel\nmulti-view graph structural representation learning model via graph coarsening\n(MSLgo) on GT architecture for graph classification. Specifically, we build\nthree unique views, original, coarsening, and conversion, to learn a thorough\nstructural representation. We compress loops and cliques via hierarchical\nheuristic graph coarsening and restrict them with well-designed constraints,\nwhich builds the coarsening view to learn high-level interactions between\nstructures. We also introduce line graphs for edge embeddings and switch to\nedge-central perspective to construct the conversion view. 
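The "feature graph" idea in the Lifelong Graph Learning entry above turns a node with d features into its own small graph over the d feature dimensions, converting node classification into graph classification. A minimal numpy sketch; the fully connected wiring is an illustrative choice, since the paper learns the cross-feature connectivity rather than fixing it.

```python
import numpy as np

def to_feature_graph(x):
    """Turn one node's d-dimensional feature vector into a d-vertex graph:
    vertices are feature dimensions, each carrying its scalar value as a
    1-d node signal. Fully connected adjacency is an assumed placeholder."""
    d = x.shape[0]
    A = np.ones((d, d)) - np.eye(d)   # feature-to-feature edges
    X = x.reshape(d, 1)               # per-feature node signals
    return A, X

node_features = np.array([0.3, -1.2, 0.7, 2.0])
A, X = to_feature_graph(node_features)
print(A.shape, X.shape)               # (4, 4) (4, 1)
```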
Experiments on six\nreal-world datasets demonstrate the improvements of MSLgo over 14 baselines\nfrom various architectures.", + "authors": "Xiaorui Qi, Qijie Bai, Yanlong Wen, Haiwei Zhang, Xiaojie Yuan", + "published": "2024-04-18", + "updated": "2024-04-18", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.SI" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/2006.02879v1", + "title": "Auto-decoding Graphs", + "abstract": "We present an approach to synthesizing new graph structures from empirically\nspecified distributions. The generative model is an auto-decoder that learns to\nsynthesize graphs from latent codes. The graph synthesis model is learned\njointly with an empirical distribution over the latent codes. Graphs are\nsynthesized using self-attention modules that are trained to identify likely\nconnectivity patterns. Graph-based normalizing flows are used to sample latent\ncodes from the distribution learned by the auto-decoder. The resulting model\ncombines accuracy and scalability. On benchmark datasets of large graphs, the\npresented model outperforms the state of the art by a factor of 1.5 in mean\naccuracy and average rank across at least three different graph statistics,\nwith a 2x speedup during inference.", + "authors": "Sohil Atul Shah, Vladlen Koltun", + "published": "2020-06-04", + "updated": "2020-06-04", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "stat.ML" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/2004.06846v1", + "title": "MxPool: Multiplex Pooling for Hierarchical Graph Representation Learning", + "abstract": "How to utilize deep learning methods for graph classification tasks has\nattracted considerable research attention in the past few years. Regarding\ngraph classification tasks, the graphs to be classified may have various graph\nsizes (i.e., different number of nodes and edges) and have various graph\nproperties (e.g., average node degree, diameter, and clustering coefficient).\nThe diverse property of graphs has imposed significant challenges on existing\ngraph learning techniques since diverse graphs have different best-fit\nhyperparameters. It is difficult to learn graph features from a set of diverse\ngraphs by a unified graph neural network. This motivates us to use a multiplex\nstructure in a diverse way and utilize a priori properties of graphs to guide\nthe learning. In this paper, we propose MxPool, which concurrently uses\nmultiple graph convolution/pooling networks to build a hierarchical learning\nstructure for graph representation learning tasks. Our experiments on numerous\ngraph classification benchmarks show that our MxPool has superiority over other\nstate-of-the-art graph representation learning methods.", + "authors": "Yanyan Liang, Yanfeng Zhang, Dechao Gao, Qian Xu", + "published": "2020-04-15", + "updated": "2020-04-15", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "stat.ML" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/2309.10134v1", + "title": "GDM: Dual Mixup for Graph Classification with Limited Supervision", + "abstract": "Graph Neural Networks (GNNs) require a large number of labeled graph samples\nto obtain good performance on the graph classification task. 
The performance of\nGNNs degrades significantly as the number of labeled graph samples decreases.\nTo reduce the annotation cost, it is therefore important to develop graph\naugmentation methods that can generate new graph instances to increase the size\nand diversity of the limited set of available labeled graph samples. In this\nwork, we propose a novel mixup-based graph augmentation method, Graph Dual\nMixup (GDM), that leverages both functional and structural information of the\ngraph instances to generate new labeled graph samples. GDM employs a graph\nstructural auto-encoder to learn structural embeddings of the graph samples,\nand then applies mixup to the structural information of the graphs in the\nlearned structural embedding space and generates new graph structures from the\nmixup structural embeddings. As for the functional information, GDM applies\nmixup directly to the input node features of the graph samples to generate\nfunctional node feature information for new mixup graph instances. Jointly, the\ngenerated input node features and graph structures yield new graph samples\nwhich can supplement the set of original labeled graphs. Furthermore, we\npropose two novel Balanced Graph Sampling methods to enhance the balanced\ndifficulty and diversity for the generated graph samples. Experimental results\non the benchmark datasets demonstrate that our proposed method substantially\noutperforms the state-of-the-art graph augmentation methods when the labeled\ngraphs are scarce.", + "authors": "Abdullah Alchihabi, Yuhong Guo", + "published": "2023-09-18", + "updated": "2023-09-18", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/2111.03262v2", + "title": "CGCL: Collaborative Graph Contrastive Learning without Handcrafted Graph Data Augmentations", + "abstract": "Unsupervised graph representation learning is a non-trivial topic. The\nsuccess of contrastive methods in the unsupervised representation learning on\nstructured data inspires similar attempts on the graph. Existing graph\ncontrastive learning (GCL) aims to learn the invariance across multiple\naugmentation views, which renders it heavily reliant on the handcrafted graph\naugmentations. However, inappropriate graph data augmentations can potentially\njeopardize such invariance. In this paper, we show the potential hazards of\ninappropriate augmentations and then propose a novel Collaborative Graph\nContrastive Learning framework (CGCL). This framework harnesses multiple graph\nencoders to observe the graph. Features observed from different encoders serve\nas the contrastive views in contrastive learning, which avoids inducing\nunstable perturbation and guarantees the invariance. To ensure the\ncollaboration among diverse graph encoders, we propose the concepts of\nasymmetric architecture and complementary encoders as the design principle. To\nfurther prove the rationality, we utilize two quantitative metrics to measure\nthe assembly of CGCL respectively. Extensive experiments demonstrate the\nadvantages of CGCL in unsupervised graph-level representation learning and the\npotential of collaborative framework. 
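The augmentation-free contrastive idea in the CGCL entry above treats the outputs of two different graph encoders as the two views of each graph. A minimal PyTorch sketch of the symmetric InfoNCE objective over such views; the encoder pairing (e.g., GIN vs. GCN readouts) and temperature are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def nt_xent(z1, z2, tau=0.5):
    """Contrastive loss between two encoders' embeddings of the same batch of
    graphs: the i-th row pair is the positive, all other rows are negatives.
    No handcrafted graph augmentation is involved -- the views come from the
    encoders themselves."""
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    sim = z1 @ z2.t() / tau                  # (B, B) cross-view similarities
    targets = torch.arange(z1.size(0))
    return 0.5 * (F.cross_entropy(sim, targets)
                  + F.cross_entropy(sim.t(), targets))

z_gin, z_gcn = torch.randn(16, 64), torch.randn(16, 64)  # two encoders' readouts
print(nt_xent(z_gin, z_gcn))
```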
The source code for reproducibility is\navailable at https://github.com/zhangtia16/CGCL", + "authors": "Tianyu Zhang, Yuxiang Ren, Wenzheng Feng, Weitao Du, Xuecang Zhang", + "published": "2021-11-05", + "updated": "2024-04-01", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/1912.10206v1", + "title": "How Robust Are Graph Neural Networks to Structural Noise?", + "abstract": "Graph neural networks (GNNs) are an emerging model for learning graph\nembeddings and making predictions on graph structured data. However, robustness\nof graph neural networks is not yet well-understood. In this work, we focus on\nnode structural identity predictions, where a representative GNN model is able\nto achieve near-perfect accuracy. We also show that the same GNN model is not\nrobust to addition of structural noise, through a controlled dataset and set of\nexperiments. Finally, we show that under the right conditions, graph-augmented\ntraining is capable of significantly improving robustness to structural noise.", + "authors": "James Fox, Sivasankaran Rajamanickam", + "published": "2019-12-21", + "updated": "2019-12-21", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "stat.ML" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/2210.02060v1", + "title": "Graph Classification via Discriminative Edge Feature Learning", + "abstract": "Spectral graph convolutional neural networks (GCNNs) have been producing\nencouraging results in graph classification tasks. However, most spectral GCNNs\nutilize fixed graphs when aggregating node features, while omitting edge\nfeature learning and failing to get an optimal graph structure. Moreover, many\nexisting graph datasets do not provide initialized edge features, further\nrestraining the ability of learning edge features via spectral GCNNs. In this\npaper, we try to address this issue by designing an edge feature scheme and an\nadd-on layer between every two stacked graph convolution layers in GCNN. Both\nare lightweight while effective in filling the gap between edge feature\nlearning and performance enhancement of graph classification. The edge feature\nscheme makes edge features adapt to node representations at different graph\nconvolution layers. The add-on layers help adjust the edge features to an\noptimal graph structure. To test the effectiveness of our method, we take\nEuclidean positions as initial node features and extract graphs with semantic\ninformation from point cloud objects. The node features of our extracted graphs\nare more scalable for edge feature learning than most existing graph datasets\n(in one-hot encoded label format). Three new graph datasets are constructed\nbased on ModelNet40, ModelNet10 and ShapeNet Part datasets. 
Experimental\nresults show that our method outperforms state-of-the-art graph classification\nmethods on the new datasets by reaching 96.56% overall accuracy on\nGraph-ModelNet40, 98.79% on Graph-ModelNet10 and 97.91% on Graph-ShapeNet Part.\nThe constructed graph datasets will be released to the community.", + "authors": "Yang Yi, Xuequan Lu, Shang Gao, Antonio Robles-Kelly, Yuejie Zhang", + "published": "2022-10-05", + "updated": "2022-10-05", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV", + "cs.LG" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/1902.10042v2", + "title": "Graph Neural Processes: Towards Bayesian Graph Neural Networks", + "abstract": "We introduce Graph Neural Processes (GNP), inspired by the recent work in\nconditional and latent neural processes. A Graph Neural Process is defined as a\nConditional Neural Process that operates on arbitrary graph data. It takes\nfeatures of sparsely observed context points as input, and outputs a\ndistribution over target points. We demonstrate graph neural processes in edge\nimputation and discuss benefits and drawbacks of the method for other\napplication areas. One major benefit of GNPs is the ability to quantify\nuncertainty in deep learning on graph structures. An additional benefit of this\nmethod is the ability to extend graph neural networks to inputs of dynamic\nsized graphs.", + "authors": "Andrew Carr, David Wingate", + "published": "2019-02-26", + "updated": "2019-10-01", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "stat.ML" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/1903.00614v1", + "title": "GAP: Generalizable Approximate Graph Partitioning Framework", + "abstract": "Graph partitioning is the problem of dividing the nodes of a graph into\nbalanced partitions while minimizing the edge cut across the partitions. Due to\nits combinatorial nature, many approximate solutions have been developed,\nincluding variants of multi-level methods and spectral clustering. We propose\nGAP, a Generalizable Approximate Partitioning framework that takes a deep\nlearning approach to graph partitioning. We define a differentiable loss\nfunction that represents the partitioning objective and use backpropagation to\noptimize the network parameters. Unlike baselines that redo the optimization\nper graph, GAP is capable of generalization, allowing us to train models that\nproduce performant partitions at inference time, even on unseen graphs.\nFurthermore, because we learn the representation of the graph while jointly\noptimizing for the partitioning loss function, GAP can be easily tuned for a\nvariety of graph structures. We evaluate the performance of GAP on graphs of\nvarying sizes and structures, including graphs of widely used machine learning\nmodels (e.g., ResNet, VGG, and Inception-V3), scale-free graphs, and random\ngraphs. 
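The differentiable-partitioning idea in the GAP entry above can be sketched with a soft assignment matrix and a loss that is optimizable by backpropagation. The surrogate below uses an expected-cut term plus a balance penalty; GAP's actual objective is the expected normalized cut, which normalizes differently, so this is an illustration of the approach rather than the paper's loss.

```python
import torch
import torch.nn.functional as F

def expected_cut_loss(logits, A, g=2):
    """Differentiable surrogate for balanced partitioning: S = softmax(logits)
    are soft node-to-partition assignments, the first term is the expected
    mass of cross-partition edges, the second penalizes unequal sizes."""
    S = F.softmax(logits, dim=1)                    # (n, g)
    cut = (A * (1.0 - S @ S.t())).sum() / 2
    n = A.size(0)
    balance = ((S.sum(dim=0) - n / g) ** 2).sum()
    return cut + balance

n, g = 10, 2
A = (torch.rand(n, n) > 0.7).float()
A = ((A + A.t()) > 0).float().fill_diagonal_(0)     # random symmetric graph
logits = torch.randn(n, g, requires_grad=True)
loss = expected_cut_loss(logits, A, g)
loss.backward()                                     # gradients flow to the assignments
print(loss.item())
```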
We show that GAP achieves competitive partitions while being up to 100\ntimes faster than the baseline and generalizes to unseen graphs.", + "authors": "Azade Nazi, Will Hang, Anna Goldie, Sujith Ravi, Azalia Mirhoseini", + "published": "2019-03-02", + "updated": "2019-03-02", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "stat.ML" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/2204.05258v1", + "title": "Multi-view graph structure learning using subspace merging on Grassmann manifold", + "abstract": "Many successful learning algorithms have been recently developed to represent\ngraph-structured data. For example, Graph Neural Networks (GNNs) have achieved\nconsiderable successes in various tasks such as node classification, graph\nclassification, and link prediction. However, these methods are highly\ndependent on the quality of the input graph structure. One commonly used approach to\nalleviate this problem is to learn the graph structure instead of relying on a\nmanually designed graph. In this paper, we introduce a new graph structure\nlearning approach using multi-view learning, named MV-GSL (Multi-View Graph\nStructure Learning), in which we aggregate different graph structure learning\nmethods using subspace merging on Grassmann manifold to improve the quality of\nthe learned graph structures. Extensive experiments are performed to evaluate\nthe effectiveness of the proposed method on two benchmark datasets, Cora and\nCiteseer. Our experiments show that the proposed method has promising\nperformance compared to single and other combined graph structure learning\nmethods.", + "authors": "Razieh Ghiasi, Hossein Amirkhani, Alireza Bosaghzadeh", + "published": "2022-04-11", + "updated": "2022-04-11", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/2401.00876v1", + "title": "Balanced Graph Structure Information for Brain Disease Detection", + "abstract": "Analyzing connections between brain regions of interest (ROI) is vital to\ndetect neurological disorders such as autism or schizophrenia. Recent\nadvancements employ graph neural networks (GNNs) to utilize graph structures in\nbrains, improving detection performances. Current methods use correlation\nmeasures between ROI's blood-oxygen-level-dependent (BOLD) signals to generate\nthe graph structure. Other methods use the training samples to learn the\noptimal graph structure through end-to-end learning. However, implementing\nthose methods independently leads to some issues with noisy data for the\ncorrelation graphs and overfitting problems for the optimal graph. In this\nwork, we propose Bargrain (balanced graph structure for brains), which models\ntwo graph structures: filtered correlation matrix and optimal sample graph\nusing graph convolution networks (GCNs). This approach aims to get advantages\nfrom both graphs and address the limitations of only relying on a single type\nof structure. 
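The filtered-correlation branch of the Bargrain entry above amounts to thresholding a correlation matrix computed from node time series. A minimal numpy sketch, assuming BOLD-like signals of shape (regions, timesteps); the 0.6 threshold is an illustrative choice, not the paper's setting.

```python
import numpy as np

def correlation_graph(signals, threshold=0.6):
    """Filtered correlation adjacency: keep only edges whose absolute Pearson
    correlation exceeds the threshold, with self-loops removed."""
    C = np.corrcoef(signals)               # (n, n) correlations between regions
    A = (np.abs(C) > threshold).astype(float)
    np.fill_diagonal(A, 0)
    return A

rng = np.random.default_rng(0)
bold = rng.standard_normal((8, 120))       # 8 regions, 120 timesteps
print(correlation_graph(bold).sum(), "directed entries (symmetric)")
```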
Based on our extensive experiment, Bargrain outperforms\nstate-of-the-art methods in classification tasks on brain disease datasets, as\nmeasured by average F1 scores.", + "authors": "Falih Gozi Febrinanto, Mujie Liu, Feng Xia", + "published": "2023-12-30", + "updated": "2023-12-30", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI", + "q-bio.NC" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/2101.00082v1", + "title": "Bosonic Random Walk Networks for Graph Learning", + "abstract": "The development of Graph Neural Networks (GNNs) has led to great progress in\nmachine learning on graph-structured data. These networks operate via diffusing\ninformation across the graph nodes while capturing the structure of the graph.\nRecently there has also been tremendous progress in quantum computing\ntechniques. In this work, we explore applications of multi-particle quantum\nwalks on diffusing information across graphs. Our model is based on learning\nthe operators that govern the dynamics of quantum random walkers on graphs. We\ndemonstrate the effectiveness of our method on classification and regression\ntasks.", + "authors": "Shiv Shankar, Don Towsley", + "published": "2020-12-31", + "updated": "2020-12-31", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/2109.11796v1", + "title": "Edge but not Least: Cross-View Graph Pooling", + "abstract": "Graph neural networks have emerged as a powerful model for graph\nrepresentation learning to undertake graph-level prediction tasks. Various\ngraph pooling methods have been developed to coarsen an input graph into a\nsuccinct graph-level representation through aggregating node embeddings\nobtained via graph convolution. However, most graph pooling methods are heavily\nnode-centric and are unable to fully leverage the crucial information contained\nin global graph structure. This paper presents a cross-view graph pooling\n(Co-Pooling) method to better exploit crucial graph structure information. The\nproposed Co-Pooling fuses pooled representations learnt from both node view and\nedge view. Through cross-view interaction, edge-view pooling and node-view\npooling seamlessly reinforce each other to learn more informative graph-level\nrepresentations. Co-Pooling has the advantage of handling various graphs with\ndifferent types of node attributes. Extensive experiments on a total of 15\ngraph benchmark datasets validate the effectiveness of our proposed method,\ndemonstrating its superior performance over state-of-the-art pooling methods on\nboth graph classification and graph regression tasks.", + "authors": "Xiaowei Zhou, Jie Yin, Ivor W. Tsang", + "published": "2021-09-24", + "updated": "2021-09-24", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/2103.10837v1", + "title": "Quantum machine learning of graph-structured data", + "abstract": "Graph structures are ubiquitous throughout the natural sciences. Here we\nconsider graph-structured quantum data and describe how to carry out its\nquantum machine learning via quantum neural networks. In particular, we\nconsider training data in the form of pairs of input and output quantum states\nassociated with the vertices of a graph, together with edges encoding\ncorrelations between the vertices. 
We explain how to systematically exploit\nthis additional graph structure to improve quantum learning algorithms. These\nalgorithms are numerically simulated and exhibit excellent learning behavior.\nScalable quantum implementations of the learning procedures are likely feasible\non the next generation of quantum computing devices.", + "authors": "Kerstin Beer, Megha Khosla, Julius K\u00f6hler, Tobias J. Osborne", + "published": "2021-03-19", + "updated": "2021-03-19", + "primary_cat": "quant-ph", + "cats": [ + "quant-ph" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/2212.04934v1", + "title": "Learning Graph Algorithms With Recurrent Graph Neural Networks", + "abstract": "Classical graph algorithms work well for combinatorial problems that can be\nthoroughly formalized and abstracted. Once the algorithm is derived, it\ngeneralizes to instances of any size. However, developing an algorithm that\nhandles complex structures and interactions in the real world can be\nchallenging. Rather than specifying the algorithm, we can try to learn it from\nthe graph-structured data. Graph Neural Networks (GNNs) are inherently capable\nof working on graph structures; however, they struggle to generalize well, and\nlearning on larger instances is challenging. In order to scale, we focus on a\nrecurrent architecture design that can learn simple graph problems end to end\non smaller graphs and then extrapolate to larger instances. As our main\ncontribution, we identify three essential techniques for recurrent GNNs to\nscale. By using (i) skip connections, (ii) state regularization, and (iii) edge\nconvolutions, we can guide GNNs toward extrapolation. This allows us to train\non small graphs and apply the same model to much larger graphs during\ninference. Moreover, we empirically validate the extrapolation capabilities of\nour GNNs on algorithmic datasets.", + "authors": "Florian Gr\u00f6tschla, Jo\u00ebl Mathys, Roger Wattenhofer", + "published": "2022-12-09", + "updated": "2022-12-09", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/2201.07409v2", + "title": "Dual Space Graph Contrastive Learning", + "abstract": "Unsupervised graph representation learning has emerged as a powerful tool to\naddress real-world problems and achieves huge success in the graph learning\ndomain. Graph contrastive learning is one of the unsupervised graph\nrepresentation learning methods, which recently attracts attention from\nresearchers and has achieved state-of-the-art performances on various tasks.\nThe key to the success of graph contrastive learning is to construct proper\ncontrasting pairs to acquire the underlying structural semantics of the graph.\nHowever, this key part is not fully explored currently, most of the ways\ngenerating contrasting pairs focus on augmenting or perturbating graph\nstructures to obtain different views of the input graph. But such strategies\ncould degrade the performances via adding noise into the graph, which may\nnarrow down the field of the applications of graph contrastive learning. In\nthis paper, we propose a novel graph contrastive learning method, namely\n\\textbf{D}ual \\textbf{S}pace \\textbf{G}raph \\textbf{C}ontrastive (DSGC)\nLearning, to conduct graph contrastive learning among views generated in\ndifferent spaces including the hyperbolic space and the Euclidean space. 
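The dual-space contrast in the DSGC entry above requires mapping Euclidean embeddings into hyperbolic space before comparing views. One standard way to do this, shown below as a minimal PyTorch sketch, is the exponential map at the origin of the Poincare ball; using this particular map and curvature is my illustrative assumption, not necessarily DSGC's exact construction.

```python
import torch

def exp_map_poincare(x, c=1.0):
    """Exponential map at the origin of the Poincare ball with curvature c:
    exp_0(v) = tanh(sqrt(c)*||v||) * v / (sqrt(c)*||v||), which places any
    Euclidean vector strictly inside the unit ball."""
    norm = x.norm(dim=-1, keepdim=True).clamp_min(1e-8)
    return torch.tanh(c ** 0.5 * norm) * x / (c ** 0.5 * norm)

z_euc = torch.randn(16, 32)          # Euclidean-view embeddings
z_hyp = exp_map_poincare(z_euc)      # hyperbolic-view counterparts
print(z_hyp.norm(dim=-1).max())      # < 1: all points lie inside the ball
```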
Since\nboth spaces have their own advantages to represent graph data in the embedding\nspaces, we hope to utilize graph contrastive learning to bridge the spaces and\nleverage advantages from both sides. The comparison experiment results show\nthat DSGC achieves competitive or better performances among all the datasets.\nIn addition, we conduct extensive experiments to analyze the impact of\ndifferent graph encoders on DSGC, giving insights about how to better leverage\nthe advantages of contrastive learning between different spaces.", + "authors": "Haoran Yang, Hongxu Chen, Shirui Pan, Lin Li, Philip S. Yu, Guandong Xu", + "published": "2022-01-19", + "updated": "2022-03-04", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.SI" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/2307.02126v1", + "title": "Robust Graph Structure Learning with the Alignment of Features and Adjacency Matrix", + "abstract": "To improve the robustness of graph neural networks (GNN), graph structure\nlearning (GSL) has attracted great interest due to the pervasiveness of noise\nin graph data. Many approaches have been proposed for GSL to jointly learn a\nclean graph structure and corresponding representations. To extend the previous\nwork, this paper proposes a novel regularized GSL approach, particularly with\nan alignment of feature information and graph information, which is motivated\nmainly by our derived lower bound of node-level Rademacher complexity for GNNs.\nAdditionally, our proposed approach incorporates sparse dimensional reduction\nto leverage low-dimensional node features that are relevant to the graph\nstructure. To evaluate the effectiveness of our approach, we conduct\nexperiments on real-world graphs. The results demonstrate that our proposed GSL\nmethod outperforms several competitive baselines, especially in scenarios where\nthe graph structures are heavily affected by noise. Overall, our research\nhighlights the importance of integrating feature and graph information\nalignment in GSL, as inspired by our derived theoretical result, and showcases\nthe superiority of our approach in handling noisy graph structures through\ncomprehensive experiments on real-world datasets.", + "authors": "Shaogao Lv, Gang Wen, Shiyu Liu, Linsen Wei, Ming Li", + "published": "2023-07-05", + "updated": "2023-07-05", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "stat.ML" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/2306.11264v1", + "title": "GraphGLOW: Universal and Generalizable Structure Learning for Graph Neural Networks", + "abstract": "Graph structure learning is a well-established problem that aims at\noptimizing graph structures adaptive to specific graph datasets to help message\npassing neural networks (i.e., GNNs) to yield effective and robust node\nembeddings. However, the common limitation of existing models lies in the\nunderlying \\textit{closed-world assumption}: the testing graph is the same as\nthe training graph. This premise requires independently training the structure\nlearning model from scratch for each graph dataset, which leads to prohibitive\ncomputation costs and potential risks for serious over-fitting. To mitigate\nthese issues, this paper explores a new direction that moves forward to learn a\nuniversal structure learning model that can generalize across graph datasets in\nan open world. 
We first introduce the mathematical definition of this novel\nproblem setting, and describe the model formulation from a probabilistic\ndata-generative aspect. Then we devise a general framework that coordinates a\nsingle graph-shared structure learner and multiple graph-specific GNNs to\ncapture the generalizable patterns of optimal message-passing topology across\ndatasets. The well-trained structure learner can directly produce adaptive\nstructures for unseen target graphs without any fine-tuning. Across diverse\ndatasets and various challenging cross-graph generalization protocols, our\nexperiments show that even without training on target graphs, the proposed\nmodel i) significantly outperforms expressive GNNs trained on input\n(non-optimized) topology, and ii) surprisingly performs on par with\nstate-of-the-art models that independently optimize adaptive structures for\nspecific target graphs, with notably orders-of-magnitude acceleration for\ntraining on the target graph.", + "authors": "Wentao Zhao, Qitian Wu, Chenxiao Yang, Junchi Yan", + "published": "2023-06-20", + "updated": "2023-06-20", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/1905.11691v1", + "title": "Triple2Vec: Learning Triple Embeddings from Knowledge Graphs", + "abstract": "Graph embedding techniques allow learning high-quality feature vectors from\ngraph structures and are useful in a variety of tasks, from node classification\nto clustering. Existing approaches have only focused on learning feature\nvectors for the nodes in a (knowledge) graph. To the best of our knowledge,\nnone of them has tackled the problem of embedding of graph edges, that is,\nknowledge graph triples. The approaches that are closer to this task have\nfocused on homogeneous graphs involving only one type of edge and obtain edge\nembeddings by applying some operation (e.g., average) on the embeddings of the\nendpoint nodes. The goal of this paper is to introduce Triple2Vec, a new\ntechnique to directly embed edges in (knowledge) graphs. Triple2Vec builds upon\nthree main ingredients. The first is the notion of line graph. The line graph\nof a graph is another graph representing the adjacency between edges of the\noriginal graph. In particular, the nodes of the line graph are the edges of the\noriginal graph. We show that directly applying existing embedding techniques on\nthe nodes of the line graph to learn edge embeddings is not enough in the\ncontext of knowledge graphs. Thus, we introduce the notion of triple line\ngraph. The second is an edge weighting mechanism both for line graphs derived\nfrom knowledge graphs and homogeneous graphs. The third is a strategy based on\ngraph walks on the weighted triple line graph that can preserve proximity\nbetween nodes. Embeddings are finally generated by adopting the SkipGram model,\nwhere sentences are replaced with graph walks. 
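The line-graph-plus-walks pipeline in the Triple2Vec entry above is easy to see on a toy graph: nodes of the line graph are edges of the original, so walks over it yield "sentences" of edges that a SkipGram model (e.g., gensim's Word2Vec) could consume. A minimal networkx sketch; the triple line graph and edge weighting are omitted, and the uniform walk is an illustrative simplification.

```python
import random
import networkx as nx

G = nx.karate_club_graph()
LG = nx.line_graph(G)                 # nodes of LG are edge tuples of G

def walk(graph, start, length=10):
    """Uniform random walk; weighted transition probabilities would replace
    random.choice in the full method."""
    path = [start]
    for _ in range(length - 1):
        nbrs = list(graph.neighbors(path[-1]))
        if not nbrs:
            break
        path.append(random.choice(nbrs))
    return path

random.seed(0)
start = next(iter(LG.nodes))
print(walk(LG, start, length=5))      # a walk over edges of the original graph
```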
We evaluate our approach on\ndifferent real world (knowledge) graphs and compare it with related work.", + "authors": "Valeria Fionda, Giuseppe Pirr\u00f3", + "published": "2019-05-28", + "updated": "2019-05-28", + "primary_cat": "cs.AI", + "cats": [ + "cs.AI", + "cs.LG" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/2003.03892v2", + "title": "COPT: Coordinated Optimal Transport for Graph Sketching", + "abstract": "We introduce COPT, a novel distance metric between graphs defined via an\noptimization routine, computing a coordinated pair of optimal transport maps\nsimultaneously. This gives an unsupervised way to learn general-purpose graph\nrepresentation, applicable to both graph sketching and graph comparison. COPT\ninvolves simultaneously optimizing dual transport plans, one between the\nvertices of two graphs, and another between graph signal probability\ndistributions. We show theoretically that our method preserves important global\nstructural information on graphs, in particular spectral information, and\nanalyze connections to existing studies. Empirically, COPT outperforms state of\nthe art methods in graph classification on both synthetic and real datasets.", + "authors": "Yihe Dong, Will Sawin", + "published": "2020-03-09", + "updated": "2020-06-15", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.DS", + "stat.ML" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/2108.04595v1", + "title": "Label-informed Graph Structure Learning for Node Classification", + "abstract": "Graph Neural Networks (GNNs) have achieved great success among various\ndomains. Nevertheless, most GNN methods are sensitive to the quality of graph\nstructures. To tackle this problem, some studies exploit different graph\nstructure learning strategies to refine the original graph structure. However,\nthese methods only consider feature information while ignoring available label\ninformation. In this paper, we propose a novel label-informed graph structure\nlearning framework which incorporates label information explicitly through a\nclass transition matrix. We conduct extensive experiments on seven node\nclassification benchmark datasets and the results show that our method\noutperforms or matches the state-of-the-art baselines.", + "authors": "Liping Wang, Fenyu Hu, Shu Wu, Liang Wang", + "published": "2021-08-10", + "updated": "2021-08-10", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/2008.10065v1", + "title": "Kernel-based Graph Learning from Smooth Signals: A Functional Viewpoint", + "abstract": "The problem of graph learning concerns the construction of an explicit\ntopological structure revealing the relationship between nodes representing\ndata entities, which plays an increasingly important role in the success of\nmany graph-based representations and algorithms in the field of machine\nlearning and graph signal processing. In this paper, we propose a novel graph\nlearning framework that incorporates the node-side and observation-side\ninformation, and in particular the covariates that help to explain the\ndependency structures in graph signals. 
To this end, we consider graph signals\nas functions in the reproducing kernel Hilbert space associated with a\nKronecker product kernel, and integrate functional learning with\nsmoothness-promoting graph learning to learn a graph representing the\nrelationship between nodes. The functional learning increases the robustness of\ngraph learning against missing and incomplete information in the graph signals.\nIn addition, we develop a novel graph-based regularisation method which, when\ncombined with the Kronecker product kernel, enables our model to capture both\nthe dependency explained by the graph and the dependency due to graph signals\nobserved under different but related circumstances, e.g. different points in\ntime. The latter means the graph signals are free from the i.i.d. assumptions\nrequired by the classical graph learning models. Experiments on both synthetic\nand real-world data show that our methods outperform the state-of-the-art\nmodels in learning a meaningful graph topology from graph signals, in\nparticular under heavy noise, missing values, and multiple dependency.", + "authors": "Xingyue Pu, Siu Lun Chau, Xiaowen Dong, Dino Sejdinovic", + "published": "2020-08-23", + "updated": "2020-08-23", + "primary_cat": "stat.ML", + "cats": [ + "stat.ML", + "cs.LG", + "cs.SI", + "eess.SP" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/1104.5256v1", + "title": "Learning Undirected Graphical Models with Structure Penalty", + "abstract": "In undirected graphical models, learning the graph structure and learning the\nfunctions that relate the predictive variables (features) to the responses\ngiven the structure are two topics that have been widely investigated in\nmachine learning and statistics. Learning graphical models in two stages will\nhave problems because graph structure may change after considering the\nfeatures. The main contribution of this paper is the proposed method that\nlearns the graph structure and functions on the graph at the same time. General\ngraphical models with binary outcomes conditioned on predictive variables are\nproved to be equivalent to multivariate Bernoulli model. The reparameterization\nof the potential functions in graphical model by conditional log odds ratios in\nmultivariate Bernoulli model offers advantage in the representation of the\nconditional independence structure in the model. Additionally, we impose a\nstructure penalty on groups of conditional log odds ratios to learn the graph\nstructure. These groups of functions are designed with overlaps to enforce\nhierarchical function selection. In this way, we are able to shrink higher\norder interactions to obtain a sparse graph structure. Simulation studies show\nthat the method is able to recover the graph structure. The analysis of county\ndata from Census Bureau gives interesting relations between unemployment rate,\ncrime and others discovered by the model.", + "authors": "Shilin Ding", + "published": "2011-04-27", + "updated": "2011-04-27", + "primary_cat": "cs.AI", + "cats": [ + "cs.AI", + "cs.LG" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/1910.01743v1", + "title": "Graph Generation with Variational Recurrent Neural Network", + "abstract": "Generating graph structures is a challenging problem due to the diverse\nrepresentations and complex dependencies among nodes. 
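The smoothness-promoting term at the heart of the kernel-based graph-learning entry above is the Dirichlet energy of the signals on the candidate graph. A minimal numpy illustration of why minimizing it links nodes with similar signals (the Kronecker kernel and functional machinery of the paper are omitted):

```python
import numpy as np

def dirichlet_energy(W, X):
    """tr(X^T L X) = 0.5 * sum_ij W_ij ||x_i - x_j||^2: the smoothness term
    that smooth-signal graph learning minimizes over W, alongside degree and
    sparsity regularizers that rule out the trivial empty graph."""
    L = np.diag(W.sum(axis=1)) - W
    return float(np.trace(X.T @ L @ X))

rng = np.random.default_rng(1)
X = rng.standard_normal((5, 3))       # 5 nodes, 3 observed signals each
W = np.zeros((5, 5))
W[0, 1] = W[1, 0] = 1.0               # a single candidate edge
# The edge's energy equals the squared distance between its endpoints, so
# low-energy graphs connect nodes whose signals are similar.
print(dirichlet_energy(W, X), ((X[0] - X[1]) ** 2).sum())
```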
In this paper, we\nintroduce Graph Variational Recurrent Neural Network (GraphVRNN), a\nprobabilistic autoregressive model for graph generation. Through modeling the\nlatent variables of graph data, GraphVRNN can capture the joint distributions\nof graph structures and the underlying node attributes. We conduct experiments\non the proposed GraphVRNN in both graph structure learning and attribute\ngeneration tasks. The evaluation results show that the variational component\nallows our network to model complicated distributions, as well as generate\nplausible structures and node attributes.", + "authors": "Shih-Yang Su, Hossein Hajimirsadeghi, Greg Mori", + "published": "2019-10-02", + "updated": "2019-10-02", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "stat.ML" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/1801.03226v1", + "title": "Adaptive Graph Convolutional Neural Networks", + "abstract": "Graph Convolutional Neural Networks (Graph CNNs) are generalizations of\nclassical CNNs to handle graph data such as molecular data, point clouds, and\nsocial networks. Current filters in graph CNNs are built for a fixed and shared\ngraph structure. However, for most real data, the graph structures vary in\nboth size and connectivity. The paper proposes a generalized and flexible graph\nCNN taking data of arbitrary graph structure as input. In that way, a\ntask-driven adaptive graph is learned for each graph during training. To\nefficiently learn the graph, a distance metric learning approach is proposed. Extensive\nexperiments on nine graph-structured datasets have demonstrated superior\nimprovements in both convergence speed and predictive accuracy.", + "authors": "Ruoyu Li, Sheng Wang, Feiyun Zhu, Junzhou Huang", + "published": "2018-01-10", + "updated": "2018-01-10", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "stat.ML" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/2101.06861v3", + "title": "Discrete Graph Structure Learning for Forecasting Multiple Time Series", + "abstract": "Time series forecasting is an extensively studied subject in statistics,\neconomics, and computer science. Exploration of the correlation and causation\namong the variables in a multivariate time series shows promise in enhancing\nthe performance of a time series model. When using deep neural networks as\nforecasting models, we hypothesize that exploiting the pairwise information\namong multiple (multivariate) time series also improves their forecast. If an\nexplicit graph structure is known, graph neural networks (GNNs) have been\ndemonstrated as powerful tools to exploit the structure. In this work, we\npropose learning the structure simultaneously with the GNN if the graph is\nunknown. We cast the problem as learning a probabilistic graph model through\noptimizing the mean performance over the graph distribution. The distribution\nis parameterized by a neural network so that discrete graphs can be sampled\ndifferentiably through reparameterization.
Empirical evaluations show that our\nmethod is simpler, more efficient, and better performing than a recently\nproposed bilevel learning approach for graph structure learning, as well as a\nbroad array of forecasting models, either deep or non-deep learning based, and\ngraph or non-graph based.", + "authors": "Chao Shang, Jie Chen, Jinbo Bi", + "published": "2021-01-18", + "updated": "2021-04-21", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "stat.ML" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/2111.04286v1", + "title": "Deep Unsupervised Active Learning on Learnable Graphs", + "abstract": "Recently deep learning has been successfully applied to unsupervised active\nlearning. However, current methods attempt to learn a nonlinear transformation\nvia an auto-encoder while ignoring sample relations, leaving considerable room to\ndesign more effective representation learning mechanisms for unsupervised\nactive learning. In this paper, we propose a novel deep unsupervised Active\nLearning model via Learnable Graphs, named ALLG. ALLG benefits from learning\noptimal graph structures to acquire better sample representation and select\nrepresentative samples. To make the learnt graph structure more stable and\neffective, we take the $k$-nearest neighbor graph as a prior, and\nlearn a relation propagation graph structure. We also incorporate shortcut\nconnections among different layers, which can alleviate the well-known\nover-smoothing problem to some extent. To the best of our knowledge, this is\nthe first attempt to leverage graph structure learning for unsupervised active\nlearning. Extensive experiments performed on six datasets demonstrate the\nefficacy of our method.", + "authors": "Handong Ma, Changsheng Li, Xinchu Shi, Ye Yuan, Guoren Wang", + "published": "2021-11-08", + "updated": "2021-11-08", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/1905.10715v1", + "title": "Graph Attention Auto-Encoders", + "abstract": "Auto-encoders have emerged as a successful framework for unsupervised\nlearning. However, conventional auto-encoders are incapable of utilizing\nexplicit relations in structured data. To take advantage of relations in\ngraph-structured data, several graph auto-encoders have recently been proposed,\nbut they neglect to reconstruct either the graph structure or node attributes.\nIn this paper, we present the graph attention auto-encoder (GATE), a neural\nnetwork architecture for unsupervised representation learning on\ngraph-structured data. Our architecture is able to reconstruct graph-structured\ninputs, including both node attributes and the graph structure, through stacked\nencoder/decoder layers equipped with self-attention mechanisms. In the encoder,\nby considering node attributes as initial node representations, each layer\ngenerates new representations of nodes by attending over their neighbors'\nrepresentations. In the decoder, we attempt to reverse the encoding process to\nreconstruct node attributes. Moreover, node representations are regularized to\nreconstruct the graph structure. Our proposed architecture does not need to\nknow the graph structure upfront, and thus it can be applied to inductive\nlearning.
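The "sampled differentiably through reparameterization" step in the Shang et al. abstract above is commonly realized with a Gumbel-softmax relaxation over per-edge Bernoulli logits. A hedged PyTorch sketch of that generic building block follows; the shapes, temperature, and symmetrization are our assumptions, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def sample_adjacency(edge_logits, tau=0.5, hard=True):
    """Differentiably sample a binary adjacency matrix from per-edge logits.

    edge_logits: (n, n) unnormalized log-odds that each edge exists.
    The Gumbel-softmax trick lets gradients flow back to the logits;
    hard=True yields discrete 0/1 edges via a straight-through estimator.
    """
    # Stack logits for the "edge" and "no edge" events on the last axis.
    logits = torch.stack([edge_logits, torch.zeros_like(edge_logits)], dim=-1)
    sample = F.gumbel_softmax(logits, tau=tau, hard=hard)[..., 0]
    # Keep the upper triangle and mirror it so the graph is undirected.
    upper = torch.triu(sample, diagonal=1)
    return upper + upper.transpose(0, 1)

n = 6
edge_logits = torch.nn.Parameter(torch.zeros(n, n))
A = sample_adjacency(edge_logits)
loss = A.sum()        # stand-in for a downstream forecasting loss
loss.backward()       # gradients reach edge_logits through the relaxation
print(edge_logits.grad.shape)
```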
Our experiments demonstrate competitive performance on several node\nclassification benchmark datasets for transductive and inductive tasks, even\nexceeding the performance of supervised learning baselines in most cases.", + "authors": "Amin Salehi, Hasan Davulcu", + "published": "2019-05-26", + "updated": "2019-05-26", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.SI", + "stat.ML" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/2203.09205v1", + "title": "SoK: Differential Privacy on Graph-Structured Data", + "abstract": "In this work, we study the applications of differential privacy (DP) in the\ncontext of graph-structured data. We discuss the formulations of DP applicable\nto the publication of graphs and their associated statistics as well as machine\nlearning on graph-based data, including graph neural networks (GNNs). The\nformulation of DP in the context of graph-structured data is difficult, as\nindividual data points are interconnected (often non-linearly or sparsely).\nThis connectivity complicates the computation of individual privacy loss in\ndifferentially private learning. The problem is exacerbated by an absence of a\nsingle, well-established formulation of DP in graph settings. This issue\nextends to the domain of GNNs, rendering private machine learning on\ngraph-structured data a challenging task. A lack of prior systematisation work\nmotivated us to study graph-based learning from a privacy perspective. In this\nwork, we systematise different formulations of DP on graphs, discuss challenges\nand promising applications, including the GNN domain. We compare and separate\nworks into graph analysis tasks and graph learning tasks with GNNs. Finally, we\nconclude our work with a discussion of open questions and potential directions\nfor further research in this area.", + "authors": "Tamara T. Mueller, Dmitrii Usynin, Johannes C. Paetzold, Daniel Rueckert, Georgios Kaissis", + "published": "2022-03-17", + "updated": "2022-03-17", + "primary_cat": "cs.CR", + "cats": [ + "cs.CR", + "cs.AI", + "cs.LG" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/2306.08201v1", + "title": "Graph Laplacian Learning with Exponential Family Noise", + "abstract": "A common challenge in applying graph machine learning methods is that the\nunderlying graph of a system is often unknown. Although different graph\ninference methods have been proposed for continuous graph signals, inferring\nthe graph structure underlying other types of data, such as discrete counts, is\nunder-explored. In this paper, we generalize a graph signal processing (GSP)\nframework for learning a graph from smooth graph signals to the exponential\nfamily noise distribution to model various data types. We propose an\nalternating algorithm that estimates the graph Laplacian as well as the\nunobserved smooth representation from the noisy signals. 
We demonstrate on\nsynthetic and real-world data that our new algorithm outperforms competing\nLaplacian estimation methods under noise model mismatch.", + "authors": "Changhao Shi, Gal Mishne", + "published": "2023-06-14", + "updated": "2023-06-14", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "eess.SP" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/2401.15665v1", + "title": "Learnability of a hybrid quantum-classical neural network for graph-structured quantum data", + "abstract": "Classical data with graph structure arise in many\nreal-world problems. In parallel, quantum data with graph structure also need\nto be investigated since they are always produced by structured quantum data\nsources. In this paper, we make use of a hybrid quantum-classical neural network\nwith deep residual learning (Res-HQCNN) to learn graph-structured quantum data.\nSpecifically, based on the special definition of graph-structured quantum data,\nwe first find suitable cost functions so that Res-HQCNN can learn\nsemisupervised quantum data with or without graphs. Moreover, the training\nalgorithm of Res-HQCNN for graph-structured training data is given in detail.\nNext, to show the learning ability of Res-HQCNN, we perform extensive\nexperiments showing that the use of information about graph structures for\nquantum data can lead to better learning efficiency compared with the state of\nthe art. At the same time, we design comparable experiments to show\nthat the use of residual learning can also bring better performance when\ntraining deep quantum neural networks.", + "authors": "Yan-Ying Liang, Si-Le Tang, Zhe-Hao Yi, Hao-Zhen Si-Tu, Zhu-Jun Zheng", + "published": "2024-01-28", + "updated": "2024-01-28", + "primary_cat": "quant-ph", + "cats": [ + "quant-ph" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/2202.08235v3", + "title": "Data Augmentation for Deep Graph Learning: A Survey", + "abstract": "Graph neural networks, a powerful deep learning tool to model\ngraph-structured data, have demonstrated remarkable performance on numerous\ngraph learning tasks. To address the data noise and data scarcity issues in\ndeep graph learning, the research on graph data augmentation has intensified\nlately. However, conventional data augmentation methods can hardly handle\ngraph-structured data which is defined in non-Euclidean space with\nmulti-modality. In this survey, we formally formulate the problem of graph data\naugmentation and further review the representative techniques and their\napplications in different deep graph learning problems. Specifically, we first\npropose a taxonomy for graph data augmentation techniques and then provide a\nstructured review by categorizing the related work based on the augmented\ninformation modalities. Moreover, we summarize the applications of graph data\naugmentation in two representative problems in data-centric deep graph\nlearning: (1) reliable graph learning which focuses on enhancing the utility of\ninput graph as well as the model capacity via graph data augmentation; and (2)\nlow-resource graph learning which targets enlarging the labeled training\ndata scale through graph data augmentation. For each problem, we also provide a\nhierarchical problem taxonomy and review the existing literature related to\ngraph data augmentation.
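The alternating scheme in the exponential-family Laplacian entry above follows a familiar template: fix the graph and denoise the signals, then fix the denoised signals and re-estimate the graph. A schematic numpy sketch of the Gaussian-noise special case follows; the update rules and names are our simplification, not the paper's algorithm.

```python
import numpy as np

def alternating_graph_learning(Y, n_iters=20, alpha=1.0):
    """Alternate between smoothing noisy signals Y and re-fitting a graph.

    Y: (n, d) noisy observations. Gaussian-noise special case only; the
    paper's exponential-family version replaces the smoothing step.
    """
    n = Y.shape[0]
    W = np.ones((n, n)) - np.eye(n)          # start from a complete graph
    X = Y.copy()
    for _ in range(n_iters):
        # Step 1: smooth representation given the graph (Tikhonov filter).
        L = np.diag(W.sum(1)) - W
        X = np.linalg.solve(np.eye(n) + alpha * L, Y)
        # Step 2: re-fit edge weights from pairwise distances of X.
        dist2 = ((X[:, None] - X[None, :]) ** 2).sum(-1)
        W = np.exp(-dist2 / (dist2.mean() + 1e-8))
        np.fill_diagonal(W, 0.0)
    return W, X

rng = np.random.default_rng(1)
Y = np.vstack([rng.normal(0, 1, (4, 2)), rng.normal(5, 1, (4, 2))])
W, X = alternating_graph_learning(Y)
print(W.round(2))   # block structure emerges for the two node groups
```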
Finally, we point out promising research directions\nand the challenges for future research.", + "authors": "Kaize Ding, Zhe Xu, Hanghang Tong, Huan Liu", + "published": "2022-02-16", + "updated": "2022-11-21", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/2312.04762v1", + "title": "The Graph Lottery Ticket Hypothesis: Finding Sparse, Informative Graph Structure", + "abstract": "Graph learning methods help utilize implicit relationships among data items,\nthereby reducing training label requirements and improving task performance.\nHowever, determining the optimal graph structure for a particular learning task\nremains a challenging research problem.\n In this work, we introduce the Graph Lottery Ticket (GLT) Hypothesis - that\nthere is an extremely sparse backbone for every graph, and that graph learning\nalgorithms attain comparable performance when trained on that subgraph as on\nthe full graph. We identify and systematically study 8 key metrics of interest\nthat directly influence the performance of graph learning algorithms.\nSubsequently, we define the notion of a \"winning ticket\" for graph structure -\nan extremely sparse subset of edges that can deliver a robust approximation of\nthe entire graph's performance. We propose a straightforward and efficient\nalgorithm for finding these GLTs in arbitrary graphs. Empirically, we observe\nthat the performance of different graph learning algorithms can be matched or even\nexceeded on graphs with an average degree as low as 5.", + "authors": "Anton Tsitsulin, Bryan Perozzi", + "published": "2023-12-08", + "updated": "2023-12-08", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI", + "cs.SI" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/2212.01749v1", + "title": "Semantic Graph Neural Network with Multi-measure Learning for Semi-supervised Classification", + "abstract": "Graph Neural Networks (GNNs) have attracted increasing attention in recent\nyears and have achieved excellent performance in semi-supervised node\nclassification tasks. The success of most GNNs relies on one fundamental\nassumption, i.e., that the original graph structure is available. However,\nrecent studies have shown that GNNs are vulnerable to the complex underlying\nstructure of the graph, making it necessary to learn comprehensive and robust\ngraph structures for downstream tasks, rather than relying only on the raw\ngraph structure. In light of this, we seek to learn optimal graph structures\nfor downstream tasks and propose a novel framework for semi-supervised\nclassification. Specifically, based on the structural context information of\ngraph and node representations, we encode the complex interactions in semantics\nand generate semantic graphs to preserve the global structure. Moreover, we\ndevelop a novel multi-measure attention layer to optimize the similarity rather\nthan prescribing it a priori, so that the similarity can be adaptively\nevaluated by integrating measures. These graphs are fused and optimized\ntogether with the GNN towards the semi-supervised classification objective.
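In its simplest form, the "winning ticket" idea in the GLT abstract above amounts to keeping a small budget of informative edges and checking that downstream performance holds. A toy sketch of degree-budgeted pruning follows; scoring edges by weight is our stand-in for the paper's richer metrics.

```python
import numpy as np

def sparse_backbone(W, avg_degree=5.0):
    """Keep the highest-weight edges so mean degree drops to a budget.

    W: (n, n) symmetric weighted adjacency. Edge weight serves as the
    importance score here; the GLT paper studies 8 dedicated metrics.
    """
    n = W.shape[0]
    rows, cols = np.triu_indices(n, k=1)
    weights = W[rows, cols]
    budget = int(avg_degree * n / 2)          # undirected edge budget
    keep = np.argsort(weights)[::-1][:budget]
    B = np.zeros_like(W)
    B[rows[keep], cols[keep]] = weights[keep]
    return B + B.T

rng = np.random.default_rng(2)
W = rng.random((20, 20)); W = (W + W.T) / 2; np.fill_diagonal(W, 0)
B = sparse_backbone(W, avg_degree=5)
print((B > 0).sum(1).mean())                  # ~5 neighbors per node
```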
Extensive\nexperiments and ablation studies on six real-world datasets clearly demonstrate\nthe effectiveness of our proposed model and the contribution of each component.", + "authors": "Junchao Lin, Yuan Wan, Jingwen Xu, Xingchen Qi", + "published": "2022-12-04", + "updated": "2022-12-04", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/2401.13769v1", + "title": "Multiview Graph Learning with Consensus Graph", + "abstract": "Graph topology inference, i.e., learning graphs from a given set of nodal\nobservations, is a significant task in many application domains. Existing\napproaches are mostly limited to learning a single graph assuming that the\nobserved data is homogeneous. This is problematic because many modern datasets\nare heterogeneous or mixed and involve multiple related graphs, i.e., multiview\ngraphs. Recent work proposing to learn multiview graphs ensures the similarity\nof learned view graphs through pairwise regularization, where each pair of\nviews is encouraged to have similar structures. However, this approach cannot\ninfer the shared structure across views. In this work, we propose an\nalternative method based on consensus regularization, where views are ensured\nto be similar through a learned consensus graph representing the common\nstructure of the views. In particular, we propose an optimization problem,\nwhere graph data is assumed to be smooth over the multiview graph and the\ntopology of the individual views and that of the consensus graph are learned,\nsimultaneously. Our optimization problem is designed to be general in the sense\nthat different regularization functions can be used depending on what the\nshared structure across views is. Moreover, we propose two regularization\nfunctions that extend fused and group graphical lasso to consensus based\nregularization. The proposed multiview graph learning method is evaluated on simulated\ndata and shown to have better performance than existing methods. It is also\nemployed to infer the functional brain connectivity networks of multiple\nsubjects from their electroencephalogram (EEG) recordings. The proposed method\nreveals the structure shared by subjects as well as the characteristics unique\nto each subject.", + "authors": "Abdullah Karaaslanli, Selin Aviyente", + "published": "2024-01-24", + "updated": "2024-01-24", + "primary_cat": "eess.SP", + "cats": [ + "eess.SP", + "cs.LG" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/2402.16374v2", + "title": "Graph Learning under Distribution Shifts: A Comprehensive Survey on Domain Adaptation, Out-of-distribution, and Continual Learning", + "abstract": "Graph learning plays a pivotal role and has gained significant attention in\nvarious application scenarios, from social network analysis to recommendation\nsystems, for its effectiveness in modeling complex data relations represented\nby graph structural data. In reality, real-world graph data typically shows\ndynamics over time, with changing node attributes and edge structure, leading\nto a severe graph data distribution shift issue.
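The consensus-regularized objective described in the multiview entry above combines per-view smoothness with a penalty tying each view to a shared graph. A schematic numpy sketch of evaluating such an objective follows; the squared-Frobenius consensus term and gamma weight are our illustrative choices, standing in for the paper's fused/group graphical lasso regularizers.

```python
import numpy as np

def multiview_objective(Ws, Xs, Wc, gamma=1.0):
    """Consensus-regularized multiview graph learning objective (schematic).

    Ws: list of per-view adjacencies, Xs: list of per-view signals,
    Wc: consensus adjacency. Each view pays a smoothness cost on its own
    graph plus a penalty for deviating from the shared consensus graph.
    """
    total = 0.0
    for W, X in zip(Ws, Xs):
        L = np.diag(W.sum(1)) - W
        total += np.trace(X.T @ L @ X)             # view-wise smoothness
        total += gamma * ((W - Wc) ** 2).sum()     # consensus regularizer
    return total

rng = np.random.default_rng(0)
n = 6
Xs = [rng.normal(size=(n, 3)) for _ in range(2)]
Ws = [np.ones((n, n)) - np.eye(n) for _ in range(2)]
Wc = sum(Ws) / len(Ws)
print(multiview_objective(Ws, Xs, Wc))
```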
This issue is compounded by\nthe diverse and complex nature of distribution shifts, which can significantly\ndegrade the generalization and adaptation capabilities of graph learning\nmethods, posing a substantial challenge to their effectiveness.\nIn this survey, we provide a comprehensive review and summary of the latest\napproaches, strategies, and insights that address distribution shifts within\nthe context of graph learning. Concretely, according to the observability of\ndistributions in the inference stage and the availability of sufficient\nsupervision information in the training stage, we categorize existing graph\nlearning methods into several essential scenarios, including graph domain\nadaptation learning, graph out-of-distribution learning, and graph continual\nlearning. For each scenario, a detailed taxonomy is proposed, with specific\ndescriptions and discussions of existing progress made in distribution-shifted\ngraph learning. Additionally, we discuss the potential applications and future\ndirections for graph learning under distribution shifts with a systematic\nanalysis of the current state in this field. The survey is positioned to\nprovide general guidance for the development of effective graph learning\nalgorithms in handling graph distribution shifts, and to stimulate future\nresearch and advancements in this area.", + "authors": "Man Wu, Xin Zheng, Qin Zhang, Xiao Shen, Xiong Luo, Xingquan Zhu, Shirui Pan", + "published": "2024-02-26", + "updated": "2024-03-07", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.SI" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/2304.13195v1", + "title": "Connector 0.5: A unified framework for graph representation learning", + "abstract": "Graph representation learning models aim to represent the graph structure and\nits features into low-dimensional vectors in a latent space, which can benefit\nvarious downstream tasks, such as node classification and link prediction. Due\nto their powerful graph data modelling capabilities, various graph embedding\nmodels and libraries have been proposed to learn embeddings and to help\nresearchers conduct experiments more easily. In this paper, we introduce Connector, a novel\ngraph representation framework covering various graph embedding models, ranging\nfrom shallow to state-of-the-art models. First, we consider\ngraph generation by constructing various types of graphs with different\nstructural relations, including homogeneous, signed, heterogeneous, and\nknowledge graphs. Second, we introduce various graph representation learning\nmodels, ranging from shallow to deep graph embedding models. Finally, we plan\nto build an efficient open-source framework that can provide deep graph\nembedding models to represent structural relations in graphs. The framework is\navailable at https://github.com/NSLab-CUK/Connector.", + "authors": "Thanh Sang Nguyen, Jooho Lee, Van Thuy Hoang, O-Joun Lee", + "published": "2023-04-25", + "updated": "2023-04-25", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.SI" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/2201.06367v1", + "title": "Towards Unsupervised Deep Graph Structure Learning", + "abstract": "In recent years, graph neural networks (GNNs) have emerged as a successful\ntool in a variety of graph-related applications.
However, the performance of\nGNNs can deteriorate when noisy connections occur in the original graph\nstructures; besides, the dependence on explicit structures prevents GNNs from\nbeing applied to general unstructured scenarios. To address these issues,\nrecently emerged deep graph structure learning (GSL) methods propose to jointly\noptimize the graph structure along with GNN under the supervision of a node\nclassification task. Nonetheless, these methods focus on a supervised learning\nscenario, which leads to several problems, i.e., the reliance on labels, the\nbias of edge distribution, and the limitation on application tasks. In this\npaper, we propose a more practical GSL paradigm, unsupervised graph structure\nlearning, where the learned graph topology is optimized by data itself without\nany external guidance (i.e., labels). To solve the unsupervised GSL problem, we\npropose a novel StrUcture Bootstrapping contrastive LearnIng fraMEwork (SUBLIME)\nwith the aid of self-supervised contrastive learning.\nSpecifically, we generate a learning target from the original data as an\n\"anchor graph\", and use a contrastive loss to maximize the agreement between\nthe anchor graph and the learned graph. To provide persistent guidance, we\ndesign a novel bootstrapping mechanism that upgrades the anchor graph with\nlearned structures during model learning. We also design a series of graph\nlearners and post-processing schemes to model the structures to learn.\nExtensive experiments on eight benchmark datasets demonstrate the significant\neffectiveness of our proposed SUBLIME and the high quality of the optimized graphs.", + "authors": "Yixin Liu, Yu Zheng, Daokun Zhang, Hongxu Chen, Hao Peng, Shirui Pan", + "published": "2022-01-17", + "updated": "2022-01-17", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/2403.03659v1", + "title": "Robust Graph Structure Learning under Heterophily", + "abstract": "The graph is a fundamental mathematical structure for characterizing relations\nbetween different objects and has been widely used in various learning tasks.\nMost methods implicitly assume a given graph to be accurate and complete.\nHowever, real data is inevitably noisy and sparse, which leads to inferior\nresults. Despite the remarkable success of recent graph representation learning\nmethods, they inherently presume that the graph is homophilic, and largely\noverlook heterophily, where most connected nodes are from different classes. In\nthis regard, we propose a novel robust graph structure learning method to\nachieve a high-quality graph from heterophilic data for downstream tasks. We\nfirst apply a high-pass filter to make each node more distinctive from its\nneighbors by encoding structure information into the node features. Then, we\nlearn a robust graph with an adaptive norm characterizing different levels of\nnoise. Afterwards, we propose a novel regularizer to further refine the graph\nstructure.
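SUBLIME's bootstrapping mechanism, slowly moving the anchor graph toward the learned structure, reduces to a moving-average update. A hedged PyTorch sketch follows; the decay rate tau and the exact update form follow the generic bootstrapping pattern, not the paper's released code.

```python
import torch

def update_anchor(anchor_adj, learned_adj, tau=0.99):
    """Upgrade the anchor graph with the learned structure (EMA update).

    The anchor supplies a stable contrastive target; detaching the
    learned adjacency keeps gradients out of the target branch.
    """
    return tau * anchor_adj + (1.0 - tau) * learned_adj.detach()

anchor = torch.eye(5)                      # e.g. start from a kNN graph
learned = torch.rand(5, 5, requires_grad=True)
for _ in range(3):                         # inside the training loop
    anchor = update_anchor(anchor, learned)
print(anchor.requires_grad)                # False: the target stays fixed
```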
Clustering and semi-supervised classification experiments on\nheterophilic graphs verify the effectiveness of our method.", + "authors": "Xuanting Xie, Zhao Kang, Wenyu Chen", + "published": "2024-03-06", + "updated": "2024-03-06", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/2104.08163v1", + "title": "Finding Motifs in Knowledge Graphs using Compression", + "abstract": "We introduce a method to find network motifs in knowledge graphs. Network\nmotifs are useful patterns or meaningful subunits of the graph that recur\nfrequently. We extend the common definition of a network motif to coincide with\na basic graph pattern. We introduce an approach, inspired by recent work for\nsimple graphs, to induce these from a given knowledge graph, and show that the\nmotifs found reflect the basic structure of the graph. Specifically, we show\nthat in random graphs, no motifs are found, and that when we insert a motif\nartificially, it can be detected. Finally, we show the results of motif\ninduction on three real-world knowledge graphs.", + "authors": "Peter Bloem", + "published": "2021-04-16", + "updated": "2021-04-16", + "primary_cat": "stat.ML", + "cats": [ + "stat.ML", + "cs.DS", + "cs.LG", + "cs.SI" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/2110.05018v2", + "title": "Time-varying Graph Learning Under Structured Temporal Priors", + "abstract": "This paper endeavors to learn time-varying graphs by using structured\ntemporal priors that assume underlying relations between any two graphs\nin the graph sequence. Different from many existing chain-structure-based\nmethods, in which priors like temporal homogeneity can only describe the\nvariation between two consecutive graphs, we propose a structure named\n\emph{temporal graph} to characterize the underlying real temporal relations.\nUnder this framework, the chain structure is actually a special case of our\ntemporal graph. We further propose a distributed algorithm based on the Alternating\nDirection Method of Multipliers (ADMM) to solve the induced optimization problem.\nNumerical experiments demonstrate the superiority of our method.", + "authors": "Xiang Zhang, Qiao Wang", + "published": "2021-10-11", + "updated": "2022-02-23", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "eess.SP" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/1906.02319v1", + "title": "DEMO-Net: Degree-specific Graph Neural Networks for Node and Graph Classification", + "abstract": "Graph data widely exist in many high-impact applications. Inspired by the\nsuccess of deep learning in grid-structured data, graph neural network models\nhave been proposed to learn powerful node-level or graph-level representation.\nHowever, most of the existing graph neural networks suffer from the following\nlimitations: (1) there is limited analysis regarding the graph convolution\nproperties, such as seed-oriented, degree-aware and order-free; (2) the node's\ndegree-specific graph structure is not explicitly expressed in graph\nconvolution for distinguishing structure-aware node neighborhoods; (3) the\ntheoretical explanation regarding the graph-level pooling schemes is unclear.\n To address these problems, we propose a generic degree-specific graph neural\nnetwork named DEMO-Net motivated by Weisfeiler-Lehman graph isomorphism test\nthat recursively identifies 1-hop neighborhood structures.
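The first step of the heterophily method above, a high-pass filter that makes each node more distinctive from its neighbors, can be written directly. A minimal numpy sketch, where the random-walk normalization I - D^-1 A is our assumed choice of filter:

```python
import numpy as np

def high_pass_filter(A, X):
    """Subtract the neighbor average from each node: X' = (I - D^-1 A) X.

    Low-frequency (locally constant) components are removed, so nodes
    that differ from their neighborhood, which is common under
    heterophily, stand out in the filtered features.
    """
    deg = A.sum(axis=1, keepdims=True).clip(min=1.0)
    return X - (A @ X) / deg

A = np.array([[0, 1, 1], [1, 0, 0], [1, 0, 0]], dtype=float)
X = np.array([[1.0], [0.0], [0.0]])
print(high_pass_filter(A, X))   # node 0 differs most from its neighbors
```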
In order to\nexplicitly capture the graph topology integrated with node attributes, we argue\nthat graph convolution should have three properties: seed-oriented,\ndegree-aware, and order-free. To this end, we propose multi-task graph convolution\nwhere each task represents node representation learning for nodes with a\nspecific degree value, thus leading to preserving the degree-specific graph\nstructure. In particular, we design two multi-task learning methods:\ndegree-specific weight and hashing functions for graph convolution. In\naddition, we propose a novel graph-level pooling/readout scheme for learning\ngraph representation provably lying in a degree-specific Hilbert kernel space.\nThe experimental results on several node and graph classification benchmark\ndata sets demonstrate the effectiveness and efficiency of our proposed DEMO-Net\nover state-of-the-art graph neural network models.", + "authors": "Jun Wu, Jingrui He, Jiejun Xu", + "published": "2019-06-05", + "updated": "2019-06-05", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "stat.ML" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/2109.11898v1", + "title": "Graph Learning Augmented Heterogeneous Graph Neural Network for Social Recommendation", + "abstract": "Social recommendation based on social networks has achieved great success in\nimproving the performance of recommendation systems. Since social network\n(user-user relations) and user-item interactions are both naturally represented\nas graph-structured data, Graph Neural Networks (GNNs) have thus been widely\napplied for social recommendation. In this work, we propose an end-to-end\nheterogeneous global graph learning framework, namely Graph Learning Augmented\nHeterogeneous Graph Neural Network (GL-HGNN) for social recommendation. GL-HGNN\naims to learn a heterogeneous global graph that makes full use of user-user\nrelations, user-item interactions and item-item similarities in a unified\nperspective. To this end, we design a Graph Learner (GL) method to learn and\noptimize user-user and item-item connections separately. Moreover, we employ a\nHeterogeneous Graph Neural Network (HGNN) to capture the high-order complex\nsemantic relations from our learned heterogeneous global graph. To scale up the\ncomputation of graph learning, we further present the Anchor-based Graph\nLearner (AGL) to reduce computational complexity. Extensive experiments on four\nreal-world datasets demonstrate the effectiveness of our model.", + "authors": "Yiming Zhang, Lingfei Wu, Qi Shen, Yitong Pang, Zhihua Wei, Fangli Xu, Ethan Chang, Bo Long", + "published": "2021-09-24", + "updated": "2021-09-24", + "primary_cat": "cs.IR", + "cats": [ + "cs.IR" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/2111.06679v2", + "title": "deepstruct -- linking deep learning and graph theory", + "abstract": "deepstruct connects deep learning models and graph theory such that different\ngraph structures can be imposed on neural networks or graph structures can be\nextracted from trained neural network models. For this, deepstruct provides\ndeep neural network models with different restrictions which can be created\nbased on an initial graph. Further, tools to extract graph structures from\ntrained models are available. This step of extracting graphs can be\ncomputationally expensive even for models with just a few tens of thousands of\nparameters and poses a challenging problem.
deepstruct supports research in\npruning, neural architecture search, automated network design and structure\nanalysis of neural networks.", + "authors": "Julian Stier, Michael Granitzer", + "published": "2021-11-12", + "updated": "2021-12-05", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI", + "cs.NE", + "I.2.0; F.0" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/1911.05954v3", + "title": "Hierarchical Graph Pooling with Structure Learning", + "abstract": "Graph Neural Networks (GNNs), which generalize deep neural networks to\ngraph-structured data, have drawn considerable attention and achieved\nstate-of-the-art performance in numerous graph related tasks. However, existing\nGNN models mainly focus on designing graph convolution operations. The graph\npooling (or downsampling) operations, that play an important role in learning\nhierarchical representations, are usually overlooked. In this paper, we propose\na novel graph pooling operator, called Hierarchical Graph Pooling with\nStructure Learning (HGP-SL), which can be integrated into various graph neural\nnetwork architectures. HGP-SL incorporates graph pooling and structure learning\ninto a unified module to generate hierarchical representations of graphs. More\nspecifically, the graph pooling operation adaptively selects a subset of nodes\nto form an induced subgraph for the subsequent layers. To preserve the\nintegrity of the graph's topological information, we further introduce a structure\nlearning mechanism to learn a refined graph structure for the pooled graph at\neach layer. By combining the HGP-SL operator with graph neural networks, we perform\ngraph-level representation learning with a focus on the graph classification task.\nExperimental results on six widely used benchmarks demonstrate the\neffectiveness of our proposed model.", + "authors": "Zhen Zhang, Jiajun Bu, Martin Ester, Jianfeng Zhang, Chengwei Yao, Zhi Yu, Can Wang", + "published": "2019-11-14", + "updated": "2019-12-25", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "stat.ML" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/2311.11821v1", + "title": "Cross-View Graph Consistency Learning for Invariant Graph Representations", + "abstract": "Graph representation learning is fundamental for analyzing graph-structured\ndata. Exploring invariant graph representations remains a challenge for most\nexisting graph representation learning methods. In this paper, we propose a\ncross-view graph consistency learning (CGCL) method that learns invariant graph\nrepresentations for link prediction. First, two complementary augmented views\nare derived from an incomplete graph structure through a bidirectional graph\nstructure augmentation scheme. This augmentation scheme mitigates the potential\ninformation loss that is commonly associated with various data augmentation\ntechniques involving raw graph data, such as edge perturbation, node removal,\nand attribute masking. Second, we propose a CGCL model that can learn invariant\ngraph representations. A cross-view training scheme is proposed to train the\nproposed CGCL model. This scheme attempts to maximize the consistency\ninformation between one augmented view and the graph structure reconstructed\nfrom the other augmented view. Furthermore, we offer a comprehensive\ntheoretical CGCL analysis.
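The pooling step in HGP-SL, adaptively selecting a subset of nodes and keeping the induced subgraph, has a compact skeleton. A toy numpy sketch follows; scoring nodes by feature norm is our stand-in for the learned criterion, and the subsequent structure-learning refinement is omitted.

```python
import numpy as np

def topk_pool(A, X, ratio=0.5):
    """Select the top-scoring nodes and return the induced subgraph.

    A: (n, n) adjacency, X: (n, d) node features. Scores here are
    feature norms, a stand-in for HGP-SL's learned scoring.
    """
    k = max(1, int(ratio * X.shape[0]))
    scores = np.linalg.norm(X, axis=1)
    idx = np.argsort(scores)[::-1][:k]
    return A[np.ix_(idx, idx)], X[idx], idx

A = np.ones((6, 6)) - np.eye(6)
X = np.arange(12, dtype=float).reshape(6, 2)
A_p, X_p, idx = topk_pool(A, X)
print(idx)        # indices of the 3 highest-scoring nodes
```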
This paper empirically\ndemonstrates the effectiveness of the proposed CGCL method, which achieves\ncompetitive results on graph datasets in comparison with several\nstate-of-the-art algorithms.", + "authors": "Jie Chen, Zhiming Li, Hua Mao, Wai Lok Woo, Xi Peng", + "published": "2023-11-20", + "updated": "2023-11-20", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.CV" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/1904.10146v2", + "title": "Exploring Structure-Adaptive Graph Learning for Robust Semi-Supervised Classification", + "abstract": "Graph Convolutional Neural Networks (GCNNs) are generalizations of CNNs to\ngraph-structured data, in which convolution is guided by the graph topology. In\nmany cases where graphs are unavailable, existing methods manually construct\ngraphs or learn task-driven adaptive graphs. In this paper, we propose Graph\nLearning Neural Networks (GLNNs), which exploit the optimization of graphs (the\nadjacency matrix in particular) from both data and tasks. Leveraging\nspectral graph theory, we formulate the objective of graph learning from a\nsparsity constraint, the properties of a valid adjacency matrix, and a graph\nLaplacian regularizer via maximum a posteriori estimation. The optimization\nobjective is then integrated into the loss function of the GCNN, which adapts\nthe graph topology to not only labels of a specific task but also the input\ndata. Experimental results show that our proposed GLNN outperforms\nstate-of-the-art approaches over widely adopted social network datasets and\ncitation network datasets for semi-supervised classification.", + "authors": "Xiang Gao, Wei Hu, Zongming Guo", + "published": "2019-04-23", + "updated": "2019-09-16", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "stat.ML" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/2202.10688v2", + "title": "Graph Lifelong Learning: A Survey", + "abstract": "Graph learning is a popular approach for performing machine learning on\ngraph-structured data. It has revolutionized the machine learning ability to\nmodel graph data to address downstream tasks. Its application is wide due to\nthe availability of graph data ranging from all types of networks to\ninformation systems. Most graph learning methods assume that the graph is\nstatic and its complete structure is known during training. This limits their\napplicability since they cannot be applied to problems where the underlying\ngraph grows over time and/or new tasks emerge incrementally. Such applications\nrequire a lifelong learning approach that can learn the graph continuously and\naccommodate new information whilst retaining previously learned knowledge.\nLifelong learning methods that enable continuous learning in regular domains\nlike images and text cannot be directly applied to continuously evolving graph\ndata, due to its irregular structure. As a result, graph lifelong learning is\ngaining attention from the research community.
This survey paper provides a\ncomprehensive overview of recent advancements in graph lifelong learning,\nincluding the categorization of existing methods, and the discussions of\npotential applications and open research problems.", + "authors": "Falih Gozi Febrinanto, Feng Xia, Kristen Moore, Chandra Thapa, Charu Aggarwal", + "published": "2022-02-22", + "updated": "2022-11-04", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI", + "68T07, 68T05", + "I.2.6" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/2006.13009v2", + "title": "Iterative Deep Graph Learning for Graph Neural Networks: Better and Robust Node Embeddings", + "abstract": "In this paper, we propose an end-to-end graph learning framework, namely\nIterative Deep Graph Learning (IDGL), for jointly and iteratively learning\ngraph structure and graph embedding. The key rationale of IDGL is to learn a\nbetter graph structure based on better node embeddings, and vice versa (i.e.,\nbetter node embeddings based on a better graph structure). Our iterative method\ndynamically stops when the learned graph structure approaches close enough to\nthe graph optimized for the downstream prediction task. In addition, we cast\nthe graph learning problem as a similarity metric learning problem and leverage\nadaptive graph regularization for controlling the quality of the learned graph.\nFinally, combining the anchor-based approximation technique, we further propose\na scalable version of IDGL, namely IDGL-Anch, which significantly reduces the\ntime and space complexity of IDGL without compromising the performance. Our\nextensive experiments on nine benchmarks show that our proposed IDGL models can\nconsistently outperform or match the state-of-the-art baselines. Furthermore,\nIDGL can be more robust to adversarial graphs and cope with both transductive\nand inductive learning.", + "authors": "Yu Chen, Lingfei Wu, Mohammed J. Zaki", + "published": "2020-06-21", + "updated": "2020-10-23", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "stat.ML" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/1912.07832v1", + "title": "Deep Iterative and Adaptive Learning for Graph Neural Networks", + "abstract": "In this paper, we propose an end-to-end graph learning framework, namely Deep\nIterative and Adaptive Learning for Graph Neural Networks (DIAL-GNN), for\njointly learning the graph structure and graph embeddings simultaneously. We\nfirst cast the graph structure learning problem as a similarity metric learning\nproblem and leverage an adapted graph regularization for controlling\nsmoothness, connectivity and sparsity of the generated graph. We further\npropose a novel iterative method for searching for a hidden graph structure\nthat augments the initial graph structure. Our iterative method dynamically\nstops when the learned graph structure approaches close enough to the optimal\ngraph. Our extensive experiments demonstrate that the proposed DIAL-GNN model\ncan consistently outperform or match state-of-the-art baselines in terms of\nboth downstream task performance and computational time. The proposed approach\ncan cope with both transductive learning and inductive learning.", + "authors": "Yu Chen, Lingfei Wu, Mohammed J. 
Zaki", + "published": "2019-12-17", + "updated": "2019-12-17", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "stat.ML" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/2212.08966v4", + "title": "Graph Learning and Its Advancements on Large Language Models: A Holistic Survey", + "abstract": "Graph learning is a prevalent domain that endeavors to learn the intricate\nrelationships among nodes and the topological structure of graphs. Over the\nyears, graph learning has evolved from graph theory to graph data mining.\nWith the advent of representation learning, it has attained remarkable\nperformance in diverse scenarios. Owing to its extensive application prospects,\ngraph learning attracts considerable attention. While some researchers have\naccomplished impressive surveys on graph learning, they fail to connect\nrelated objectives, methods, and applications in a coherent way. As a\nresult, they do not encompass the ample scenarios and challenging problems\nthat have emerged with the rapid expansion of graph learning. In particular, large language\nmodels have recently had a disruptive effect on human life, but they also show\nrelative weakness in structured scenarios. The question of how to make these\nmodels more powerful with graph learning remains open. Our survey focuses on\nthe most recent advancements in integrating graph learning with pre-trained\nlanguage models, specifically emphasizing their application within the domain\nof large language models. Different from previous surveys on graph learning, we\nprovide a holistic review that analyzes current works from the perspective of\ngraph structure, and discusses the latest applications, trends, and challenges\nin graph learning. Specifically, we commence by proposing a taxonomy and then\nsummarize the methods employed in graph learning. We then provide a detailed\nelucidation of mainstream applications. Finally, we propose future directions.", + "authors": "Shaopeng Wei, Yu Zhao, Xingyan Chen, Qing Li, Fuzhen Zhuang, Ji Liu, Fuji Ren, Gang Kou", + "published": "2022-12-17", + "updated": "2023-11-18", + "primary_cat": "cs.AI", + "cats": [ + "cs.AI" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/1911.08776v2", + "title": "Joint Embedding Learning of Educational Knowledge Graphs", + "abstract": "As an efficient model for knowledge organization, the knowledge graph has\nbeen widely adopted in several fields, e.g., biomedicine, sociology, and\neducation. There is a steady trend of learning embedding representations of\nknowledge graphs to facilitate knowledge graph construction and downstream\ntasks. In general, knowledge graph embedding techniques aim to learn vectorized\nrepresentations which preserve the structural information of the graph. Conventional\nembedding learning models rely on structural relationships among\nentities and relations. However, in educational knowledge graphs, structural\nrelationships are not the focus. Instead, rich literals of the graphs are more\nvaluable. In this paper, we focus on this problem and propose a novel model for\nembedding learning of educational knowledge graphs. Our model considers both\nstructural and literal information and jointly learns embedding\nrepresentations. Three experimental graphs were constructed based on an\neducational knowledge graph which has been applied in real-world teaching. We\nconducted two experiments on the three graphs and other common benchmark\ngraphs.
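The iteration shared by the IDGL and DIAL-GNN entries above, better embeddings yield a better graph, which yields better embeddings, until the learned graph stabilizes, has a compact skeleton. A schematic numpy sketch follows; the similarity-based graph learner, the single smoothing step standing in for a trained GNN, and the stopping threshold are all our simplifications.

```python
import numpy as np

def iterative_graph_learning(X, n_iters=10, eps=1e-3):
    """Alternate graph refinement and embedding smoothing until stable.

    X: (n, d) initial node features. The 'GNN' here is one smoothing
    step; IDGL/DIAL-GNN use a trained network plus graph regularizers.
    """
    Z, prev = X.copy(), None
    for _ in range(n_iters):
        S = Z @ Z.T                              # similarity metric learning
        A = np.exp(S - S.max(1, keepdims=True))  # row-softmax graph
        A /= A.sum(1, keepdims=True)
        Z = 0.5 * Z + 0.5 * (A @ Z)              # embedding update
        if prev is not None and np.abs(A - prev).mean() < eps:
            break                                # graph has stopped changing
        prev = A
    return A, Z

rng = np.random.default_rng(3)
A, Z = iterative_graph_learning(rng.normal(size=(8, 4)))
print(A.shape, Z.shape)
```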
The experimental results proved the effectiveness of our model and its\nsuperiority over other baselines when processing educational knowledge graphs.", + "authors": "Siyu Yao, Ruijie Wang, Shen Sun, Derui Bu, Jun Liu", + "published": "2019-11-20", + "updated": "2019-12-23", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI", + "cs.LG" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/2206.01152v2", + "title": "Causal Structure Learning: a Combinatorial Perspective", + "abstract": "In this review, we discuss approaches for learning causal structure from\ndata, also called causal discovery. In particular, we focus on approaches for\nlearning directed acyclic graphs (DAGs) and various generalizations which allow\nfor some variables to be unobserved in the available data. We devote special\nattention to two fundamental combinatorial aspects of causal structure\nlearning. First, we discuss the structure of the search space over causal\ngraphs. Second, we discuss the structure of equivalence classes over causal\ngraphs, i.e., sets of graphs which represent what can be learned from\nobservational data alone, and how these equivalence classes can be refined by\nadding interventional data.", + "authors": "Chandler Squires, Caroline Uhler", + "published": "2022-06-02", + "updated": "2022-12-19", + "primary_cat": "stat.ME", + "cats": [ + "stat.ME", + "cs.LG" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/1904.08915v2", + "title": "Decoding Molecular Graph Embeddings with Reinforcement Learning", + "abstract": "We present RL-VAE, a graph-to-graph variational autoencoder that uses\nreinforcement learning to decode molecular graphs from latent embeddings.\nMethods have been described previously for graph-to-graph autoencoding, but\nthese approaches require sophisticated decoders that increase the complexity of\ntraining and evaluation (such as requiring parallel encoders and decoders or\nnon-trivial graph matching). Here, we repurpose a simple graph generator to\nenable efficient decoding and generation of molecular graphs.", + "authors": "Steven Kearnes, Li Li, Patrick Riley", + "published": "2019-04-18", + "updated": "2019-06-04", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "stat.ML" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/1910.08057v1", + "title": "Graph Embedding VAE: A Permutation Invariant Model of Graph Structure", + "abstract": "Generative models of graph structure have applications in biology and social\nsciences. The state of the art is GraphRNN, which decomposes the graph\ngeneration process into a series of sequential steps. While effective for\nmodest sizes, it loses its permutation invariance for larger graphs. Instead,\nwe present a permutation invariant latent-variable generative model relying on\ngraph embeddings to encode structure. 
Using tools from the random graph\nliterature, our model is highly scalable to large graphs with likelihood\nevaluation and generation in $O(|V| + |E|)$.", + "authors": "Tony Duan, Juho Lee", + "published": "2019-10-17", + "updated": "2019-10-17", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "stat.ML" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/2104.09304v1", + "title": "A Tunable Model for Graph Generation Using LSTM and Conditional VAE", + "abstract": "With the development of graph applications, generative models for graphs have\nbecome more crucial. Classically, stochastic models that generate graphs with a\npre-defined probability of edges and nodes have been studied. Recently, some\nmodels that reproduce the structural features of graphs by learning from actual\ngraph data using machine learning have been studied. However, in these\nconventional studies based on machine learning, structural features of graphs\ncan be learned from data, but it is not possible to tune features and generate\ngraphs with specific features. In this paper, we propose a generative model\nthat can tune specific features, while learning structural features of a graph\nfrom data. With a dataset of graphs with various features generated by a\nstochastic model, we confirm that our model can generate a graph with specific\nfeatures.", + "authors": "Shohei Nakazawa, Yoshiki Sato, Kenji Nakagawa, Sho Tsugawa, Kohei Watabe", + "published": "2021-04-15", + "updated": "2021-04-15", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.NI", + "cs.SI" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/2209.07817v2", + "title": "SPGP: Structure Prototype Guided Graph Pooling", + "abstract": "While graph neural networks (GNNs) have been successful for node\nclassification tasks and link prediction tasks in graphs, learning graph-level\nrepresentations remains a challenge. For the graph-level representation,\nit is important to learn both representation of neighboring nodes, i.e.,\naggregation, and graph structural information. A number of graph pooling\nmethods have been developed for this goal. However, most of the existing\npooling methods utilize k-hop neighborhood without considering explicit\nstructural information in a graph. In this paper, we propose Structure\nPrototype Guided Pooling (SPGP) that utilizes prior graph structures to\novercome the limitation. SPGP formulates graph structures as learnable\nprototype vectors and computes the affinity between nodes and prototype\nvectors. This leads to a novel node scoring scheme that prioritizes informative\nnodes while encapsulating the useful structures of the graph. Our experimental\nresults show that SPGP outperforms state-of-the-art graph pooling methods on\ngraph classification benchmark datasets in both accuracy and scalability.", + "authors": "Sangseon Lee, Dohoon Lee, Yinhua Piao, Sun Kim", + "published": "2022-09-16", + "updated": "2023-03-01", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.SI" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/2003.04508v3", + "title": "Unsupervised Graph Embedding via Adaptive Graph Learning", + "abstract": "Graph autoencoders (GAEs) are powerful tools in representation learning for\ngraph embedding. However, the performance of GAEs is very dependent on the\nquality of the graph structure, i.e., of the adjacency matrix.
In other words,\nGAEs would perform poorly when the adjacency matrix is incomplete or\nperturbed. In this paper, two novel unsupervised graph embedding methods,\nunsupervised graph embedding via adaptive graph learning (BAGE) and\nunsupervised graph embedding via variational adaptive graph learning (VBAGE),\nare proposed. The proposed methods expand the application range of GAEs in\ngraph embedding, i.e., to general datasets without graph structure.\nMeanwhile, the adaptive learning mechanism can initialize the adjacency matrix\nwithout being affected by the parameters. Besides that, the latent representations\nare embedded in the Laplacian graph structure to preserve the topological\nstructure of the graph in the vector space. Moreover, the adjacency matrix can\nbe self-learned for better embedding performance when the original graph\nstructure is incomplete. With adaptive learning, the proposed method is much\nmore robust to the graph structure. Experimental studies on several datasets\nvalidate our design and demonstrate that our methods outperform baselines by a\nwide margin in node clustering, node classification, and graph visualization\ntasks.", + "authors": "Rui Zhang, Yunxing Zhang, Xuelong Li", + "published": "2020-03-10", + "updated": "2021-03-23", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "stat.ML" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/2105.00696v1", + "title": "Graph Learning: A Survey", + "abstract": "Graphs are widely used as a popular representation of the network structure\nof connected data. Graph data can be found in a broad spectrum of application\ndomains such as social systems, ecosystems, biological networks, knowledge\ngraphs, and information systems. With the continuous penetration of artificial\nintelligence technologies, graph learning (i.e., machine learning on graphs) is\ngaining attention from both researchers and practitioners. Graph learning\nproves effective for many tasks, such as classification, link prediction, and\nmatching. Generally, graph learning methods extract relevant features of graphs\nby taking advantage of machine learning algorithms. In this survey, we present\na comprehensive overview on the state-of-the-art of graph learning. Special\nattention is paid to four categories of existing graph learning methods,\nincluding graph signal processing, matrix factorization, random walk, and deep\nlearning. Major models and algorithms under these categories are reviewed\nrespectively. We examine graph learning applications in areas such as text,\nimages, science, knowledge graphs, and combinatorial optimization. In addition,\nwe discuss several promising research directions in this field.", + "authors": "Feng Xia, Ke Sun, Shuo Yu, Abdul Aziz, Liangtian Wan, Shirui Pan, Huan Liu", + "published": "2021-05-03", + "updated": "2021-05-03", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI", + "cs.SI", + "68T07", + "I.2.6" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/2403.04923v2", + "title": "Control-based Graph Embeddings with Data Augmentation for Contrastive Learning", + "abstract": "In this paper, we study the problem of unsupervised graph representation\nlearning by harnessing the control properties of dynamical networks defined on\ngraphs. Our approach introduces a novel framework for contrastive learning, a\nwidely prevalent technique for unsupervised representation learning.
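The adaptive-learning mechanism in the BAGE/VBAGE entry above, inferring an adjacency matrix from latent embeddings rather than requiring one as input, is often instantiated as a normalized similarity of latent codes. A hedged PyTorch sketch, where the row-softmax inner-product form is our assumption:

```python
import torch
import torch.nn.functional as F

def adaptive_adjacency(Z, self_loops=False):
    """Infer an adjacency matrix from latent embeddings Z: (n, k).

    Row-softmax over pairwise inner products; this lets an autoencoder
    operate on datasets that ship with no graph structure at all.
    """
    S = Z @ Z.t()
    if not self_loops:
        S = S - torch.eye(Z.shape[0]) * 1e9   # mask the diagonal
    return F.softmax(S, dim=1)

Z = torch.randn(10, 16)
A = adaptive_adjacency(Z)
print(A.sum(1))   # each row is a probability distribution over neighbors
```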
A crucial\nstep in contrastive learning is the creation of 'augmented' graphs from the\ninput graphs. Though different from the original graphs, these augmented graphs\nretain the original graph's structural characteristics. Here, we propose a\nunique method for generating these augmented graphs by leveraging the control\nproperties of networks. The core concept revolves around perturbing the\noriginal graph to create a new one while preserving the controllability\nproperties specific to networks and graphs. Compared to the existing methods,\nwe demonstrate that this innovative approach enhances the effectiveness of\ncontrastive learning frameworks, leading to superior results regarding the\naccuracy of the classification tasks. The key innovation lies in our ability to\ndecode the network structure using these control properties, opening new\navenues for unsupervised graph representation learning.", + "authors": "Obaid Ullah Ahmad, Anwar Said, Mudassir Shabbir, Waseem Abbas, Xenofon Koutsoukos", + "published": "2024-03-07", + "updated": "2024-04-18", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.MA", + "cs.SY", + "eess.SY" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/1905.06393v1", + "title": "IPC: A Benchmark Data Set for Learning with Graph-Structured Data", + "abstract": "Benchmark data sets are an indispensable ingredient of the evaluation of\ngraph-based machine learning methods. We release a new data set, compiled from\nInternational Planning Competitions (IPC), for benchmarking graph\nclassification, regression, and related tasks. Apart from the graph\nconstruction (based on AI planning problems) that is interesting in its own\nright, the data set possesses distinctly different characteristics from\npopularly used benchmarks. The data set, named IPC, consists of two\nself-contained versions, grounded and lifted, both including graphs of large\nand skewedly distributed sizes, posing substantial challenges for the\ncomputation of graph models such as graph kernels and graph neural networks.\nThe graphs in this data set are directed and the lifted version is acyclic,\noffering the opportunity of benchmarking specialized models for directed\n(acyclic) structures. Moreover, the graph generator and the labeling are\ncomputer programmed; thus, the data set may be extended easily if a larger\nscale is desired. The data set is accessible from\n\\url{https://github.com/IBM/IPC-graph-data}.", + "authors": "Patrick Ferber, Tengfei Ma, Siyu Huo, Jie Chen, Michael Katz", + "published": "2019-05-15", + "updated": "2019-05-15", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI", + "stat.ML" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/2106.15239v1", + "title": "Generating the Graph Gestalt: Kernel-Regularized Graph Representation Learning", + "abstract": "Recent work on graph generative models has made remarkable progress towards\ngenerating increasingly realistic graphs, as measured by global graph features\nsuch as degree distribution, density, and clustering coefficients. Deep\ngenerative models have also made significant advances through better modelling\nof the local correlations in the graph topology, which have been very useful\nfor predicting unobserved graph components, such as the existence of a link or\nthe class of a node, from nearby observed graph components. A complete\nscientific understanding of graph data should address both global and local\nstructure. 
In this paper, we propose a joint model for both as complementary\nobjectives in a graph VAE framework. Global structure is captured by\nincorporating graph kernels in a probabilistic model whose loss function is\nclosely related to the maximum mean discrepancy (MMD) between the global\nstructures of the reconstructed and the input graphs. The ELBO objective\nderived from the model regularizes a standard local link reconstruction term\nwith an MMD term. Our experiments demonstrate a significant improvement in the\nrealism of the generated graph structures, typically by 1-2 orders of magnitude\non graph structure metrics, compared to leading graph VAE and GAN models. Local\nlink reconstruction improves as well in many cases.", + "authors": "Kiarash Zahirnia, Ankita Sakhuja, Oliver Schulte, Parmis Nadaf, Ke Li, Xia Hu", + "published": "2021-06-29", + "updated": "2021-06-29", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/2210.01489v1", + "title": "Generative Models and Learning Algorithms for Core-Periphery Structured Graphs", + "abstract": "We consider core-periphery structured graphs, which are graphs with groups\nof densely and sparsely connected nodes, referred to as core and\nperiphery nodes, respectively. The so-called core score of a node is related to the\nlikelihood of it being a core node. In this paper, we focus on learning the\ncore scores of a graph from its node attributes and connectivity structure. To\nthis end, we propose two classes of probabilistic graphical models: affine and\nnonlinear. First, we describe affine generative models to model the dependence\nof node attributes on their core scores, which determine the graph structure.\nNext, we discuss nonlinear generative models in which the partial correlations\nof node attributes influence the graph structure through latent core scores. We\ndevelop algorithms for inferring the model parameters and core scores of a\ngraph when both the graph structure and node attributes are available. When\nonly the node attributes of graphs are available, we jointly learn a\ncore-periphery structured graph and its core scores. We provide results from\nnumerical experiments on several synthetic and real-world datasets to\ndemonstrate the efficacy of the developed models and algorithms.", + "authors": "Sravanthi Gurugubelli, Sundeep Prabhakar Chepuri", + "published": "2022-10-04", + "updated": "2022-10-04", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/2106.10124v1", + "title": "Graph Context Encoder: Graph Feature Inpainting for Graph Generation and Self-supervised Pretraining", + "abstract": "We propose the Graph Context Encoder (GCE), a simple but efficient approach\nfor graph representation learning based on graph feature masking and\nreconstruction.\n GCE models are trained to efficiently reconstruct input graphs similarly to a\ngraph autoencoder where node and edge labels are masked. In particular, our\nmodel is also allowed to change graph structures by masking and reconstructing\ngraphs augmented by random pseudo-edges.\n We show that GCE can be used for novel graph generation, with applications\nfor molecule generation.
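The MMD-based global-structure term above can be made concrete with a small sketch: compare degree histograms of input and reconstructed graphs under an RBF kernel. The descriptor choice and `gamma` are assumptions here; the paper couples a related term into the ELBO rather than computing it standalone.

```python
import numpy as np

def degree_histogram(A, max_degree=20):
    """Global-structure descriptor: normalized degree histogram of a graph."""
    deg = A.sum(axis=1).astype(int).clip(0, max_degree)
    h = np.bincount(deg, minlength=max_degree + 1).astype(float)
    return h / h.sum()

def mmd2_rbf(X, Y, gamma=10.0):
    """Squared MMD between two sample sets under an RBF kernel."""
    k = lambda a, b: np.exp(-gamma * ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1))
    return k(X, X).mean() + k(Y, Y).mean() - 2.0 * k(X, Y).mean()

# Toy data standing in for input graphs and VAE reconstructions.
rng = np.random.default_rng(0)
inputs = [(rng.random((30, 30)) < 0.10).astype(float) for _ in range(8)]
recons = [(rng.random((30, 30)) < 0.12).astype(float) for _ in range(8)]
loss_global = mmd2_rbf(np.stack([degree_histogram(A) for A in inputs]),
                       np.stack([degree_histogram(A) for A in recons]))
```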
We also show that, used as a pretraining method, GCE\nimproves baseline performance in supervised classification tasks on\nmultiple standard benchmark graph datasets.", + "authors": "Oriel Frigo, R\u00e9my Brossard, David Dehaene", + "published": "2021-06-18", + "updated": "2021-06-18", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "68T07" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/1811.09971v1", + "title": "Graph Learning-Convolutional Networks", + "abstract": "Recently, graph Convolutional Neural Networks (graph CNNs) have been widely\nused for graph data representation and semi-supervised learning tasks. However,\nexisting graph CNNs generally use a fixed graph which may not be optimal for\nsemi-supervised learning tasks. In this paper, we propose a novel Graph\nLearning-Convolutional Network (GLCN) for graph data representation and\nsemi-supervised learning. The aim of GLCN is to learn an optimal graph\nstructure that best serves graph CNNs for semi-supervised learning by\nintegrating both graph learning and graph convolution together in a unified\nnetwork architecture. The main advantage is that in GLCN, both given labels and\nthe estimated labels are incorporated and thus can provide useful 'weakly'\nsupervised information to refine (or learn) the graph construction and also to\nfacilitate the graph convolution operation in GLCN for unknown label\nestimation. Experimental results on seven benchmarks demonstrate that GLCN\nsignificantly outperforms state-of-the-art traditional fixed-structure-based\ngraph CNNs.", + "authors": "Bo Jiang, Ziyan Zhang, Doudou Lin, Jin Tang", + "published": "2018-11-25", + "updated": "2018-11-25", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/2306.07699v2", + "title": "Time-aware Graph Structure Learning via Sequence Prediction on Temporal Graphs", + "abstract": "Temporal Graph Learning, which aims to model the time-evolving nature of\ngraphs, has gained increasing attention and achieved remarkable performance\nrecently. However, in reality, graph structures are often incomplete and noisy,\nwhich hinders temporal graph networks (TGNs) from learning informative\nrepresentations. Graph contrastive learning uses data augmentation to generate\nplausible variations of existing data and learn robust representations.\nHowever, rule-based augmentation approaches may be suboptimal as they lack\nlearnability and fail to leverage rich information from downstream tasks. To\naddress these issues, we propose a Time-aware Graph Structure Learning (TGSL)\napproach via sequence prediction on temporal graphs, which learns better graph\nstructures for downstream tasks by adding potential temporal edges. In\nparticular, it predicts a time-aware context embedding based on previously\nobserved interactions and uses Gumbel-Top-K to select the candidate\nedges closest to this context embedding. Additionally, several candidate sampling\nstrategies are proposed to ensure both efficiency and diversity. Furthermore,\nwe jointly learn the graph structure and TGNs in an end-to-end manner and\nperform inference on the refined graph. Extensive experiments on temporal link\nprediction benchmarks demonstrate that TGSL yields significant gains for\npopular TGNs such as TGAT and GraphMixer, and it outperforms other contrastive\nlearning methods on temporal graphs.
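TGSL's Gumbel-Top-K edge selection admits a compact sketch: perturb log-probabilities with Gumbel noise and keep the top k, which samples k candidates without replacement in proportion to their scores. The scoring step and shapes below are illustrative assumptions, not TGSL's exact pipeline.

```python
import numpy as np

def gumbel_top_k(log_probs, k, seed=None):
    """Sample k indices without replacement via the Gumbel-Top-K trick."""
    rng = np.random.default_rng(seed)
    g = -np.log(-np.log(rng.random(log_probs.shape)))  # Gumbel(0, 1) noise
    return np.argsort(-(log_probs + g))[:k]            # top-k perturbed scores

# Toy scoring of candidate edges against a time-aware context embedding.
ctx = np.random.rand(64)            # context embedding (assumed shape)
cand = np.random.rand(100, 64)      # candidate edge representations (assumed)
scores = cand @ ctx
m = scores.max()
log_probs = scores - (m + np.log(np.exp(scores - m).sum()))  # log-softmax
chosen = gumbel_top_k(log_probs, k=10)
```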
We release the code at\nhttps://github.com/ViktorAxelsen/TGSL.", + "authors": "Haozhen Zhang, Xueting Han, Xi Xiao, Jing Bai", + "published": "2023-06-13", + "updated": "2023-08-15", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/1901.07439v1", + "title": "Multiple Graph Adversarial Learning", + "abstract": "Recently, Graph Convolutional Networks (GCNs) have been widely studied for\ngraph-structured data representation and learning. However, in many real\napplications, data often come with multiple graphs, and it is non-trivial to\nadapt GCNs to deal with data representation with multiple graph structures. One\nmain challenge for multi-graph representation is how to exploit both structure\ninformation of each individual graph and correlation information across\nmultiple graphs simultaneously. In this paper, we propose a novel Multiple\nGraph Adversarial Learning (MGAL) framework for multi-graph representation and\nlearning. MGAL aims to learn an optimal structure-invariant and consistent\nrepresentation for multiple graphs in a common subspace via a novel adversarial\nlearning framework, which thus incorporates both structure information of\nintra-graph and correlation information of inter-graphs simultaneously. Based\non MGAL, we then provide a unified network for the semi-supervised learning task.\nPromising experimental results demonstrate the effectiveness of the MGAL model.", + "authors": "Bo Jiang, Ziyan Zhang, Jin Tang, Bin Luo", + "published": "2019-01-22", + "updated": "2019-01-22", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/1904.09792v1", + "title": "A Unified Framework for Structured Graph Learning via Spectral Constraints", + "abstract": "Graph learning from data represents a canonical problem that has received\nsubstantial attention in the literature. However, insufficient work has been\ndone in incorporating prior structural knowledge into the learning of\nunderlying graphical models from data. Learning a graph with a specific\nstructure is essential for interpretability and identification of the\nrelationships among data. Useful structured graphs include the multi-component\ngraph, bipartite graph, connected graph, sparse graph, and regular graph. In\ngeneral, structured graph learning is an NP-hard combinatorial problem;\ntherefore, designing a general tractable optimization method is extremely\nchallenging. In this paper, we introduce a unified graph learning framework\nthat integrates Gaussian graphical models and spectral graph\ntheory. To impose a particular structure on a graph, we first show how to\nformulate the combinatorial constraints as an analytical property of the graph\nmatrix. Then we develop an optimization framework that leverages graph learning\nwith specific structures via spectral constraints on graph matrices. The\nproposed algorithms are provably convergent, computationally efficient, and\npractically amenable for numerous graph-based tasks. Extensive numerical\nexperiments with both synthetic and real data sets illustrate the effectiveness\nof the proposed algorithms. The code for all the simulations is made available\nas an open source repository.", + "authors": "Sandeep Kumar, Jiaxi Ying, Jos\u00e9 Vin\u00edcius de M.
Cardoso, Daniel Palomar", + "published": "2019-04-22", + "updated": "2019-04-22", + "primary_cat": "stat.ML", + "cats": [ + "stat.ML", + "cs.LG", + "cs.SI", + "math.OC" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/2306.02664v2", + "title": "Structure-free Graph Condensation: From Large-scale Graphs to Condensed Graph-free Data", + "abstract": "Graph condensation, which reduces the size of a large-scale graph by\nsynthesizing a small-scale condensed graph as its substitution, has immediate\nbenefits for various graph learning tasks. However, existing graph condensation\nmethods rely on the joint optimization of nodes and structures in the condensed\ngraph, and overlook critical issues in effectiveness and generalization\nability. In this paper, we advocate a new Structure-Free Graph Condensation\nparadigm, named SFGC, to distill a large-scale graph into a small-scale graph\nnode set without explicit graph structures, i.e., graph-free data. Our idea is\nto implicitly encode topology structure information into the node attributes in\nthe synthesized graph-free data, whose topology is reduced to an identity\nmatrix. Specifically, SFGC contains two collaborative components: (1) a\ntraining trajectory meta-matching scheme for effectively synthesizing\nsmall-scale graph-free data; (2) a graph neural feature score metric for\ndynamically evaluating the quality of the condensed data. Through training\ntrajectory meta-matching, SFGC aligns the long-term GNN learning behaviors\nbetween the large-scale graph and the condensed small-scale graph-free data,\nensuring comprehensive and compact transfer of informative knowledge to the\ngraph-free data. Afterward, the underlying condensed graph-free data is\ndynamically evaluated with the graph neural feature score, which is a\nclosed-form metric for ensuring the expressiveness of the condensed\ngraph-free data. Extensive experiments verify the superiority of SFGC across\ndifferent condensation ratios.", + "authors": "Xin Zheng, Miao Zhang, Chunyang Chen, Quoc Viet Hung Nguyen, Xingquan Zhu, Shirui Pan", + "published": "2023-06-05", + "updated": "2023-10-23", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.SI" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/2007.16002v1", + "title": "Graph Convolutional Networks using Heat Kernel for Semi-supervised Learning", + "abstract": "Graph convolutional networks have achieved remarkable success in semi-supervised\nlearning on graph-structured data. The key to graph-based semi-supervised\nlearning is capturing the smoothness of labels or features over nodes exerted\nby graph structure. Previous methods, both spectral and spatial,\nfocus on defining graph convolution as a weighted average over neighboring\nnodes, and then learn graph convolution kernels to leverage the smoothness to\nimprove the performance of graph-based semi-supervised learning. One open\nchallenge is how to determine an appropriate neighborhood that reflects the relevant\nsmoothness information manifested in the graph structure. In this paper, we\npropose GraphHeat, leveraging the heat kernel to enhance low-frequency filters and\nenforce smoothness in the signal variation on the graph. GraphHeat leverages\nthe local structure of the target node under heat diffusion to determine its\nneighboring nodes flexibly, without the constraint of order suffered by\nprevious methods.
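GraphHeat's heat-kernel filter exp(-sL) can be sketched directly from the normalized Laplacian's spectrum. The eigendecomposition below is for clarity only; practical implementations approximate the kernel rather than diagonalizing, and `s` is an assumed diffusion-scale parameter.

```python
import numpy as np

def heat_kernel_filter(A, X, s=1.0):
    """Smooth node features with the low-pass heat-kernel filter exp(-s * L)."""
    d = A.sum(axis=1).astype(float)
    d_inv_sqrt = np.zeros_like(d)
    nz = d > 0
    d_inv_sqrt[nz] = d[nz] ** -0.5
    # Symmetrically normalized Laplacian: I - D^-1/2 A D^-1/2.
    L = np.eye(A.shape[0]) - d_inv_sqrt[:, None] * A * d_inv_sqrt[None, :]
    lam, U = np.linalg.eigh(L)                         # graph spectrum
    # exp(-s * lam) damps high-frequency components, enforcing smoothness.
    return U @ (np.exp(-s * lam)[:, None] * (U.T @ X))

A = np.maximum((np.random.rand(50, 50) < 0.1).astype(float), 0).T
A = np.maximum(A, A.T)
X_smooth = heat_kernel_filter(A, np.random.randn(50, 8), s=2.0)
```

Larger `s` diffuses information further, so the effective neighborhood is set by diffusion rather than a fixed hop order.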
GraphHeat achieves state-of-the-art results in the task of\ngraph-based semi-supervised classification across three benchmark datasets:\nCora, Citeseer and Pubmed.", + "authors": "Bingbing Xu, Huawei Shen, Qi Cao, Keting Cen, Xueqi Cheng", + "published": "2020-07-27", + "updated": "2020-07-27", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.SI" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/2006.14002v1", + "title": "Bi-Level Graph Neural Networks for Drug-Drug Interaction Prediction", + "abstract": "We introduce Bi-GNN for modeling biological link prediction tasks such as\ndrug-drug interaction (DDI) and protein-protein interaction (PPI). Taking\ndrug-drug interaction as an example, existing methods using machine learning\neither only utilize the link structure between drugs without using the graph\nrepresentation of each drug molecule, or only leverage the individual drug\ncompound structures without using graph structure for the higher-level DDI\ngraph. The key idea of our method is to fundamentally view the data as a\nbi-level graph, where the highest level graph represents the interaction\nbetween biological entities (interaction graph), and each biological entity\nitself is further expanded to its intrinsic graph representation\n(representation graphs), where the graph is either flat like a drug compound or\nhierarchical like a protein with amino acid level graph, secondary structure,\ntertiary structure, etc. Our model not only allows the usage of information\nfrom both the high-level interaction graph and the low-level representation\ngraphs, but also offers a baseline for future research opportunities to address\nthe bi-level nature of the data.", + "authors": "Yunsheng Bai, Ken Gu, Yizhou Sun, Wei Wang", + "published": "2020-06-11", + "updated": "2020-06-11", + "primary_cat": "cs.CE", + "cats": [ + "cs.CE", + "cs.LG", + "stat.ML" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/1611.05181v3", + "title": "Graph Learning from Data under Structural and Laplacian Constraints", + "abstract": "Graphs are fundamental mathematical structures used in various fields to\nrepresent data, signals and processes. In this paper, we propose a novel\nframework for learning/estimating graphs from data. The proposed framework\nincludes (i) formulation of various graph learning problems, (ii) their\nprobabilistic interpretations and (iii) associated algorithms. Specifically,\ngraph learning problems are posed as estimation of graph Laplacian matrices\nfrom some observed data under given structural constraints (e.g., graph\nconnectivity and sparsity level). From a probabilistic perspective, the\nproblems of interest correspond to maximum a posteriori (MAP) parameter\nestimation of Gaussian-Markov random field (GMRF) models, whose precision\n(inverse covariance) is a graph Laplacian matrix. For the proposed graph\nlearning problems, specialized algorithms are developed by incorporating the\ngraph Laplacian and structural constraints. The experimental results\ndemonstrate that the proposed algorithms outperform the current\nstate-of-the-art methods in terms of accuracy and computational efficiency.", + "authors": "Hilmi E. 
Egilmez, Eduardo Pavez, Antonio Ortega", + "published": "2016-11-16", + "updated": "2017-07-06", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "stat.ML" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/2005.14403v1", + "title": "Deep graph learning for semi-supervised classification", + "abstract": "Graph learning (GL) can dynamically capture the distribution structure (graph\nstructure) of data based on graph convolutional networks (GCN), and the\nlearning quality of the graph structure directly influences GCN for\nsemi-supervised classification. Existing methods mostly combine the\ncomputational layer and the related losses into GCN for exploring the global\ngraph (measuring graph structure from all data samples) or the local graph\n(measuring graph structure from local data samples). The global graph emphasises\nthe whole-structure description of inter-class data, while the local graph\ntends toward the neighborhood structure representation of intra-class data.\nHowever, it is difficult to simultaneously balance these graphs during the learning\nprocess for semi-supervised classification because of the interdependence of\nthese graphs. To simulate the interdependence, deep graph learning (DGL) is\nproposed to find a better graph representation for semi-supervised\nclassification. DGL can not only learn the global structure by updating the\nmetric computation of the previous layer, but also mine the local structure by\nreassigning local weights in the next layer. Furthermore, DGL can fuse the different\nstructures by dynamically encoding the interdependence of these structures, and\ndeeply mine the relationship between the different structures through hierarchical\nprogressive learning, improving the performance of semi-supervised\nclassification. Experiments demonstrate that DGL outperforms state-of-the-art\nmethods on three benchmark datasets (Citeseer, Cora, and Pubmed) for citation\nnetworks and two benchmark datasets (MNIST and Cifar10) for images.", + "authors": "Guangfeng Lin, Xiaobing Kang, Kaiyang Liao, Fan Zhao, Yajun Chen", + "published": "2020-05-29", + "updated": "2020-05-29", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/1904.11883v2", + "title": "Robust Graph Data Learning via Latent Graph Convolutional Representation", + "abstract": "Graph Convolutional Representation (GCR) has achieved impressive performance\nfor graph data representation. However, existing GCR is generally defined on\na fixed input graph, which may restrict the representation capacity and also\nbe vulnerable to structural attacks and noise. To address this issue, we\npropose a novel Latent Graph Convolutional Representation (LatGCR) for robust\ngraph data representation and learning. Our LatGCR is derived by\nreformulating graph convolutional representation from the perspective of graph\nneighborhood reconstruction.
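The GMRF view above (a precision matrix constrained to be a graph Laplacian) suggests a crude baseline sketch: invert a regularized empirical covariance and keep strong negative couplings as edge weights. This stand-in skips the Laplacian-constrained MAP optimization the paper actually solves; `thresh` and the ridge term are assumed knobs.

```python
import numpy as np

def laplacian_from_precision(X, thresh=0.05):
    """Crude graph estimate from data, in the spirit of GMRF-based learning."""
    n = X.shape[1]
    S = np.cov(X, rowvar=False) + 1e-3 * np.eye(n)  # regularized covariance
    P = np.linalg.inv(S)                            # empirical precision matrix
    W = np.maximum(-P, 0.0)                         # negative couplings -> weights
    np.fill_diagonal(W, 0.0)
    W[W < thresh] = 0.0                             # sparsify weak edges
    L = np.diag(W.sum(axis=1)) - W                  # combinatorial Laplacian
    return W, L

# X: 500 observations of a 20-node graph signal (toy data).
W, L = laplacian_from_precision(np.random.randn(500, 20))
```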
Given an input graph $\\textbf{A}$, LatGCR aims to\ngenerate a flexible latent graph $\\widetilde{\\textbf{A}}$ for graph\nconvolutional representation, which clearly enhances the representation\ncapacity and also performs robustly w.r.t. graph structural attacks and noise.\nMoreover, LatGCR is implemented in a self-supervised manner and thus provides a\nbasic block for both supervised and unsupervised graph learning tasks.\nExperiments on several datasets demonstrate the effectiveness and robustness of\nLatGCR.", + "authors": "Bo Jiang, Ziyan Zhang, Bin Luo", + "published": "2019-04-26", + "updated": "2021-10-13", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV", + "cs.LG" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/1910.11390v2", + "title": "Deep Learning for Molecular Graphs with Tiered Graph Autoencoders and Graph Prediction", + "abstract": "Tiered graph autoencoders provide the architecture and mechanisms for\nlearning tiered latent representations and latent spaces for molecular graphs\nthat explicitly represent and utilize groups (e.g., functional groups). This\nenables the utilization and exploration of tiered molecular latent spaces,\neither individually - the node (atom) tier, the group tier, or the graph\n(molecule) tier - or jointly, as well as navigation across the tiers. In this\npaper, we discuss the use of tiered graph autoencoders together with graph\nprediction for molecular graphs. We show the molecular graph features used, and\nthe groups identified in molecular graphs for some sample molecules. We briefly\nreview graph prediction and the QM9 dataset for background information, and\ndiscuss the use of tiered graph embeddings for graph prediction, particularly\nweighted group pooling. We find that functional groups and ring groups\neffectively capture and represent the chemical essence of molecular graphs\n(structures). Further, tiered graph autoencoders and graph prediction together\nprovide effective, efficient and interpretable deep learning for molecular\ngraphs, with the former providing unsupervised, transferable learning and the\nlatter providing supervised, task-optimized learning.", + "authors": "Daniel T. Chang", + "published": "2019-10-24", + "updated": "2021-07-01", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "q-bio.BM" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/1611.07308v1", + "title": "Variational Graph Auto-Encoders", + "abstract": "We introduce the variational graph auto-encoder (VGAE), a framework for\nunsupervised learning on graph-structured data based on the variational\nauto-encoder (VAE). This model makes use of latent variables and is capable of\nlearning interpretable latent representations for undirected graphs. We\ndemonstrate this model using a graph convolutional network (GCN) encoder and a\nsimple inner product decoder. Our model achieves competitive results on a link\nprediction task in citation networks. In contrast to most existing models for\nunsupervised learning on graph-structured data and link prediction, our model\ncan naturally incorporate node features, which significantly improves\npredictive performance on a number of benchmark datasets.", + "authors": "Thomas N.
Kipf, Max Welling", + "published": "2016-11-21", + "updated": "2016-11-21", + "primary_cat": "stat.ML", + "cats": [ + "stat.ML", + "cs.LG" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/2204.01855v2", + "title": "A Survey on Graph Representation Learning Methods", + "abstract": "Graph representation learning has been a very active research area in recent\nyears. The goal of graph representation learning is to generate graph\nrepresentation vectors that capture the structure and features of large graphs\naccurately. This is especially important because the quality of the graph\nrepresentation vectors will affect the performance of these vectors in\ndownstream tasks such as node classification, link prediction and anomaly\ndetection. Many techniques have been proposed for generating effective graph\nrepresentation vectors. Two of the most prevalent categories of graph\nrepresentation learning are graph embedding methods without using graph neural\nnets (GNN), which we denote as non-GNN based graph embedding methods, and graph\nneural nets (GNN) based methods. Non-GNN graph embedding methods are based on\ntechniques such as random walks, temporal point processes and neural network\nlearning methods. GNN-based methods, on the other hand, are the application of\ndeep learning on graph data. In this survey, we provide an overview of these\ntwo categories and cover the current state-of-the-art methods for both static\nand dynamic graphs. Finally, we explore some open and ongoing research\ndirections for future work.", + "authors": "Shima Khoshraftar, Aijun An", + "published": "2022-04-04", + "updated": "2022-06-15", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.SI" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/2210.06126v1", + "title": "Regularized Graph Structure Learning with Semantic Knowledge for Multi-variates Time-Series Forecasting", + "abstract": "Multivariate time-series forecasting is a critical task for many\napplications, and graph time-series networks are widely studied due to their\ncapability to capture spatial-temporal correlations simultaneously. However,\nmost existing works focus more on learning with the explicit prior graph\nstructure, while ignoring potential information from the implicit graph\nstructure, yielding incomplete structure modeling. Some recent works attempt to\nlearn the intrinsic or implicit graph structure directly while lacking a way to\ncombine explicit prior structure with implicit structure together. In this\npaper, we propose the Regularized Graph Structure Learning (RGSL) model to\nincorporate both explicit prior structure and implicit structure together, and\nlearn the forecasting deep networks along with the graph structure. RGSL\nconsists of two innovative modules. First, we derive an implicit dense\nsimilarity matrix through node embedding, and learn the sparse graph structure\nusing the Regularized Graph Generation (RGG) based on the Gumbel Softmax trick.\nSecond, we propose a Laplacian Matrix Mixed-up Module (LM3) to fuse the\nexplicit graph and implicit graph together. We conduct experiments on three\nreal-world datasets. Results show that the proposed RGSL model outperforms\nexisting graph forecasting algorithms by a notable margin, while learning\nmeaningful graph structure simultaneously.
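The VGAE entry above is compact enough to sketch end to end: a GCN encoder produces a latent Gaussian per node, and an inner-product decoder scores edges. A minimal torch sketch follows; layer sizes and the toy normalization are illustrative, not the reference implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class VGAE(nn.Module):
    """Minimal VGAE: shared GCN layer, mu/logvar heads, inner-product decoder."""
    def __init__(self, in_dim, hid_dim=32, z_dim=16):
        super().__init__()
        self.shared = nn.Linear(in_dim, hid_dim, bias=False)
        self.mu = nn.Linear(hid_dim, z_dim, bias=False)
        self.logvar = nn.Linear(hid_dim, z_dim, bias=False)

    def forward(self, A_hat, X):
        h = F.relu(A_hat @ self.shared(X))                     # GCN layer: A_hat X W
        mu, logvar = A_hat @ self.mu(h), A_hat @ self.logvar(h)
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()   # reparameterize
        return torch.sigmoid(z @ z.t()), mu, logvar            # edge probabilities

def vgae_loss(A_rec, A_obs, mu, logvar):
    recon = F.binary_cross_entropy(A_rec, A_obs)               # link reconstruction
    kl = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
    return recon + kl

# Toy usage with a symmetrically normalized adjacency (self-loops added).
n = 10
X = torch.randn(n, 8)
A = ((torch.rand(n, n) < 0.2).float() + torch.eye(n)).clamp(max=1.0)
A = torch.max(A, A.t())
d = A.sum(1)
A_hat = A / (d.sqrt().unsqueeze(1) * d.sqrt().unsqueeze(0))
A_rec, mu, logvar = VGAE(in_dim=8)(A_hat, X)
loss = vgae_loss(A_rec, A, mu, logvar)
```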
Our code and models are made\npublicly available at https://github.com/alipay/RGSL.git.", + "authors": "Hongyuan Yu, Ting Li, Weichen Yu, Jianguo Li, Yan Huang, Liang Wang, Alex Liu", + "published": "2022-10-12", + "updated": "2022-10-12", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.CV" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/2302.02909v1", + "title": "Spectral Augmentations for Graph Contrastive Learning", + "abstract": "Contrastive learning has emerged as a premier method for learning\nrepresentations with or without supervision. Recent studies have shown its\nutility in graph representation learning for pre-training. Despite successes,\nthe understanding of how to design effective graph augmentations that can\ncapture structural properties common to many different types of downstream\ngraphs remains incomplete. We propose a set of well-motivated graph\ntransformation operations derived via graph spectral analysis to provide a bank\nof candidates when constructing augmentations for a graph contrastive\nobjective, enabling contrastive learning to capture useful structural\nrepresentations from pre-training graph datasets. We first present a spectral\ngraph cropping augmentation that involves filtering nodes by applying\nthresholds to the eigenvalues of the leading Laplacian eigenvectors. Our second\nnovel augmentation reorders the graph frequency components in a structural\nLaplacian-derived position graph embedding. Further, we introduce a method that\nleads to improved views of local subgraphs by performing alignment via global\nrandom walk embeddings. Our experimental results indicate consistent\nimprovements in out-of-domain graph data transfer compared to state-of-the-art\ngraph contrastive learning methods, shedding light on how to design a graph\nlearner that is able to learn structural properties common to diverse graph\ntypes.", + "authors": "Amur Ghose, Yingxue Zhang, Jianye Hao, Mark Coates", + "published": "2023-02-06", + "updated": "2023-02-06", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/2401.16176v1", + "title": "A Survey on Structure-Preserving Graph Transformers", + "abstract": "The transformer architecture has shown remarkable success in various domains,\nsuch as natural language processing and computer vision. When it comes to graph\nlearning, transformers are required not only to capture the interactions\nbetween pairs of nodes but also to preserve graph structures that connote the\nunderlying relations and proximity between nodes, demonstrating the expressive power\nto capture different graph structures. Accordingly, various\nstructure-preserving graph transformers have been proposed and widely used for\nvarious tasks, such as graph-level tasks in bioinformatics and\nchemoinformatics. However, strategies related to graph structure preservation\nhave not been well organized and systematized in the literature. In this paper,\nwe provide a comprehensive overview of structure-preserving graph transformers\nand generalize these methods from the perspective of their design objective.\nFirst, we divide strategies into four main groups: node feature modulation,\ncontext node sampling, graph rewriting, and transformer architecture\nimprovements. We then further divide the strategies according to the coverage\nand goals of graph structure preservation.
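The spectral graph cropping augmentation described above can be approximated in a few lines: take a leading Laplacian eigenvector and keep the induced subgraph on nodes whose entries pass a threshold. The published thresholding rule may differ; `which` and `thresh` are assumptions here.

```python
import numpy as np

def spectral_crop(A, which=1, thresh=0.0):
    """Crop a graph by thresholding entries of a leading Laplacian eigenvector."""
    L = np.diag(A.sum(axis=1)) - A              # combinatorial Laplacian
    _, U = np.linalg.eigh(L)                    # eigenvectors, ascending eigenvalues
    keep = U[:, which] >= thresh                # which=1 selects the Fiedler vector
    return A[np.ix_(keep, keep)], np.flatnonzero(keep)

A = (np.random.rand(40, 40) < 0.1).astype(float)
A = np.maximum(A, A.T)
A_crop, kept_nodes = spectral_crop(A, which=1)
```

Because the Fiedler vector roughly partitions the graph, thresholding it tends to retain one coherent region rather than a random node subset.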
Furthermore, we discuss\nchallenges and future directions for graph transformer models to preserve the\ngraph structure and understand the nature of graphs.", + "authors": "Van Thuy Hoang, O-Joun Lee", + "published": "2024-01-29", + "updated": "2024-01-29", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/2209.00793v2", + "title": "Structure-Preserving Graph Representation Learning", + "abstract": "Though graph representation learning (GRL) has made significant progress, it\nis still a challenge to extract and embed the rich topological structure and\nfeature information in an adequate way. Most existing methods focus on local\nstructure and fail to fully incorporate the global topological structure. To\nthis end, we propose a novel Structure-Preserving Graph Representation Learning\n(SPGRL) method to fully capture the structure information of graphs.\nSpecifically, to reduce the uncertainty and misinformation of the original\ngraph, we construct a feature graph as a complementary view via the k-Nearest\nNeighbor method. The feature graph can be used for node-level contrast to\ncapture local relations. Besides, we retain the global topological structure\ninformation by maximizing the mutual information (MI) of the whole graph and\nfeature embeddings, which theoretically reduces to exchanging the feature\nembeddings of the feature graph and the original graph to reconstruct themselves.\nExtensive experiments show that our method achieves superior performance on\nthe semi-supervised node classification task and excellent robustness under noise\nperturbations of the graph structure or node features.", + "authors": "Ruiyi Fang, Liangjian Wen, Zhao Kang, Jianzhuang Liu", + "published": "2022-09-02", + "updated": "2022-12-08", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI", + "cs.CV", + "cs.SI" + ], + "category": "Graph AND Structure AND Learning" + } +] \ No newline at end of file
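The SPGRL recipe closing the list, a kNN feature graph as a complementary view plus node-level contrast, also sketches cleanly. The InfoNCE form below is a common stand-in for the node-level contrast; SPGRL's graph-level MI term is omitted, and the embeddings are toy placeholders.

```python
import numpy as np

def knn_feature_graph(X, k=5):
    """Complementary view: a kNN graph built from node features."""
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    np.fill_diagonal(d2, np.inf)
    A = np.zeros_like(d2)
    nbrs = np.argsort(d2, axis=1)[:, :k]
    A[np.repeat(np.arange(len(X)), k), nbrs.ravel()] = 1.0
    return np.maximum(A, A.T)

def node_infonce(Z1, Z2, tau=0.5):
    """Node-level contrast between two views; positives sit on the diagonal."""
    Z1 = Z1 / np.linalg.norm(Z1, axis=1, keepdims=True)
    Z2 = Z2 / np.linalg.norm(Z2, axis=1, keepdims=True)
    logits = Z1 @ Z2.T / tau
    logits -= logits.max(axis=1, keepdims=True)       # numerical stability
    p = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)
    return -np.log(np.diag(p) + 1e-12).mean()

X = np.random.randn(50, 16)                 # node features (toy)
A_feat = knn_feature_graph(X, k=5)          # complementary feature-graph view
Z1, Z2 = np.random.randn(50, 32), np.random.randn(50, 32)  # view embeddings (toy)
loss = node_infonce(Z1, Z2)
```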