diff --git "a/related_53K/test_related_long_2404.17862v1.json" "b/related_53K/test_related_long_2404.17862v1.json"
new file mode 100644
--- /dev/null
+++ "b/related_53K/test_related_long_2404.17862v1.json"
@@ -0,0 +1,8593 @@
+[
+  {
+    "url": "http://arxiv.org/abs/2404.17862v1",
+    "title": "Revisiting Multimodal Emotion Recognition in Conversation from the Perspective of Graph Spectrum",
+    "abstract": "Efficiently capturing consistent and complementary semantic features in a\nmultimodal conversation context is crucial for Multimodal Emotion Recognition\nin Conversation (MERC). Existing methods mainly use graph structures to model\ndialogue context semantic dependencies and employ Graph Neural Networks (GNN)\nto capture multimodal semantic features for emotion recognition. However, these\nmethods are limited by some inherent characteristics of GNN, such as\nover-smoothing and low-pass filtering, resulting in the inability to learn\nlong-distance consistency information and complementary information\nefficiently. Since consistency and complementarity information correspond to\nlow-frequency and high-frequency information, respectively, this paper revisits\nthe problem of multimodal emotion recognition in conversation from the\nperspective of the graph spectrum. Specifically, we propose a\nGraph-Spectrum-based Multimodal Consistency and Complementary collaborative\nlearning framework GS-MCC. First, GS-MCC uses a sliding window to construct a\nmultimodal interaction graph to model conversational relationships and uses\nefficient Fourier graph operators to extract long-distance high-frequency and\nlow-frequency information, respectively. Then, GS-MCC uses contrastive learning\nto construct self-supervised signals that reflect complementarity and\nconsistent semantic collaboration with high and low-frequency signals, thereby\nimproving the ability of high and low-frequency information to reflect real\nemotions. Finally, GS-MCC inputs the collaborative high and low-frequency\ninformation into the MLP network and softmax function for emotion prediction.\nExtensive experiments have proven the superiority of the GS-MCC architecture\nproposed in this paper on two benchmark data sets.",
+    "authors": "Tao Meng, Fuchen Zhang, Yuntao Shou, Wei Ai, Nan Yin, Keqin Li",
+    "published": "2024-04-27",
+    "updated": "2024-04-27",
+    "primary_cat": "cs.CL",
+    "cats": [
+      "cs.CL"
+    ],
+    "label": "Original Paper",
+    "paper_cat": "Graph AND Structure AND Learning",
+    "gt": "Human-machine intelligent conversation systems have recently received significant attention and development [6, 25, 40, 49, 58], so understanding conversations is crucial. Driven by this, Multimodal Emotion Recognition in Conversation (MERC) has gradually developed into a new research hotspot. Many researchers [21, 60, 63] have explored and improved the effect of MERC from the semantic interaction between text, auditory, and visual modal data in conversational contexts. These methods [7, 27, 61] agree that the MERC task focuses on better capturing and fusing multimodal semantic information in the conversational context for emotion recognition. Therefore, we review the literature closely related to the above topics from two aspects: multimodal conversation context feature capture and feature fusion. (1) Multimodal conversational context feature capture. In early work, the MERC task mainly adopted GRU [34] or LSTM [38] to capture multimodal information in the conversational context. For example, Poria et al. 
[38] proposed a multimodal conversation emotion recognition model based on Bidirectional Long Short-Term Memory (Bi-LSTM), which captures multimodal contextual information at each time step to better understand conversational context relationships in sequence data. Although GRU- or LSTM-based methods can model the multimodal conversation context, their limited memory capacity prevents them from capturing long-distance information dependencies; later work therefore turned to Transformer architectures. For instance, Ma et al. [32] used intra-modal and inter-modal Transformers to capture semantic information in a multimodal conversation context and designed a hierarchical gating mechanism to fuse multimodal features. Although Transformer-based methods can capture long-distance semantic information through global sequence modeling, they underestimate the complexity of multimodal dialogue semantics. Owing to the superiority of GNNs in modeling complex relationships, most existing research uses GNNs for global semantic capture and has achieved remarkable results. For example, Li et al. [23] proposed directed Graph-based Cross-modal Feature Complementation (GraphCFC), which alleviates the heterogeneity-gap problem in multimodal fusion by utilizing multiple subspace extractors and pairwise cross-modal complementation strategies. In addition, speaker information is vital in emotion recognition because emotions are usually subjective, individual experiences. Therefore, Ren et al. [41] built a graph model to incorporate conversational context information and speaker dependencies, and then introduced a multi-head attention mechanism to explore potential connections between speakers. (2) Multimodal conversational context feature fusion. Choosing an appropriate multimodal feature fusion strategy is another crucial step in multimodal dialogue emotion recognition [9, 63]. For example, Zadeh et al. [59] proposed the Tensor Fusion Network (TFN), which has advantages in processing higher-order data structures (such as multi-dimensional arrays) and is therefore better able to preserve relationships between data when integrating multimodal information; however, its explicit tensor products are computationally expensive. Liu et al. [31] therefore proposed a Low-rank Multimodal Fusion (LMF) method, which performs multimodal fusion with modality-specific low-rank factors by decomposing tensors and weights in parallel; it avoids computing high-dimensional tensors, reduces memory overhead, and reduces exponential time complexity to linear. Tellamekala et al. [46] proposed Calibrated and Ordinal Latent Distribution Fusion (COLD Fusion), a framework that learns a latent distribution over the unimodal temporal context by constraining the variance through calibration and ordinal ordering. Furthermore, contrastive learning has attracted increasing research attention due to its powerful ability to obtain meaningful representations through alignment and fusion. Kim et al. [18] introduced a contrastive loss function to facilitate effective adversarial learning: weakly emotional samples are learned adversarially by leveraging strongly emotional samples, thereby enhancing the comprehension of the intricate emotional elements embedded in intense emotions. Wang et al. [48] proposed a multimodal feature fusion framework based on contrastive learning, which first improves the ability to capture emotional features through contrastive learning and then uses an attention mechanism to fuse the multimodal features. 
Although multimodal conversational emotion recognition has made significant progress by modeling contextual semantic information and feature fusion, the critical role of high-frequency information in MERC has been ignored. To this end, Hu et al. [14] proposed a Multimodal Fusion Graph Convolution Network (MMGCN). MMGCN not only captures high- and low-frequency information in multimodal conversations but also utilizes speaker information to model inter-speaker and intra-speaker dependencies. Similarly, Chen et al. [5] modeled MERC from multivariate information and high- and low-frequency information, further improving the effect of multimodal conversational emotion recognition. Nevertheless, as discussed earlier, these methods do not profoundly explore the uses of high- and low-frequency signals, ignoring the consistency and complementarity synergy between them. This paper instead starts from the graph-spectrum perspective: it uses low- and high-frequency signals to capture consistency and complementary semantic information, respectively, and makes the two cooperate to improve multimodal conversational emotion recognition.", "pre_questions": [], "main_content": "INTRODUCTION With the continuous development of Human-Computer Interaction (HCI), the multimodal emotion recognition task in conversation (MERC) has recently received extensive research attention [1, 8, 13, 29, 34, 50, 52]. MERC aims to identify the emotional state of each utterance using textual, acoustic, and visual information in the conversational context [25, 36, 44, 45, 53], which is crucial for multimodal conversational understanding and an essential component for building intelligent HCI systems [14, 33, 35]. As shown in Fig. 1, MERC needs to recognize the emotion of each multimodal utterance in the conversation. Unlike traditional unimodal or non-conversational emotion recognition [1, 10, 12, 43], MERC requires joint conversational-context and multimodal information modeling to achieve consistency and complementary semantic capture within and between modalities [61]. Fig. 1 gives an example of a multimodal conversation between two people, Ross and Carol, from the MELD dataset. As shown in utterance $u_4$, Carol has a "Joy" emotion, which is vaguely reflected in the textual features but more evident in the visual or auditory features, reflecting the complementary semantics between modalities. In addition, it is difficult to identify the "Surprise" emotion from utterance $u_7$ alone; however, due to the potential consistency of conversational emotions, it can be accurately inferred from previous utterances. Therefore, the key to multimodal conversational emotion recognition is to capture the consistency and complementary semantics between multimodal information by utilizing the conversational context and the emotional dependence between speakers, thereby revealing the speaker's genuine emotion. The current mainstream research methods use the Transformer [26, 32, 62, 63] or GNN [2, 21, 23, 47] architecture to model the MERC task. Transformer-based methods mainly learn complex semantic information between multimodal and conversational contexts from global sequence modeling. For example, CTNet [26] builds single-modal and cross-modal Transformers to capture long-distance context dependencies and realize intra-modal and inter-modal information interaction for multimodal conversational emotion recognition. 
Although Transformer-based methods have made progress from the perspective of global utterance sequence modeling, this paradigm underestimates the complex emotional interactions between multimodal utterances [47] and ignores the multiple relationships between utterances [5], which limits the model's emotion recognition performance. [Figure 1: An example of a multimodal conversation from the MELD dataset. MERC aims to identify each utterance's emotion label (e.g., Neutral, Surprise, Joy).] Benefitting from GNN's ability to mine and represent complex relationships [56, 57], recent GNN-based methods [1, 14, 22] have made significant progress in the MERC task. For instance, MMGCN [14] fully connects all utterance nodes of the same modality and connects the different modal nodes of the same utterance to build a heterogeneous graph that models the complex semantic relationships between multimodal utterances, then uses a deep spectral-domain GNN to capture long-distance contextual information for multimodal conversational emotion recognition. Although these GNN-based methods show promising performance, they still have some common limitations: (1) Insufficient long-distance dependence perception. Many methods [1, 13, 23] use sliding windows to limit the range of fully connected utterances and then use a GNN to learn multimodal utterance representations for emotion recognition. However, limited by the over-smoothing characteristics of GNN [28, 54], usually only two layers can be stacked for capturing semantic information, making it difficult for these methods to capture long-distance emotional dependencies. Although methods without a sliding window [5, 14] can enhance the capture of long-distance dependencies, they introduce many neighborhood nodes with different emotions, which is not conducive to GNN representation learning and puts enormous performance pressure on the GNN. Therefore, previous GNN-based methods still have limitations in long-distance dependency capture. (2) Underutilization of high-frequency features. Many studies have shown that GNNs have low-pass filtering characteristics [4, 37, 55]: they mainly obtain node representations by aggregating the consistency features of the neighborhood (low-frequency information) while suppressing the dissimilarity features of the neighborhood (high-frequency information). However, consistency and dissimilarity features are equally important in the MERC task. When specific modalities express less obvious emotions, information from other modalities is needed to compensate, thereby revealing the speaker's genuine emotions. Inspired by this, M3Net [5] tried to use high-frequency information to improve the MERC task and improved the emotion recognition effect by directly fusing high- and low-frequency features. However, essential differences exist between high- and low-frequency features, and direct fusion cannot establish efficient collaboration.
Thus, previous GNN-based methods still have limitations in utilizing and coordinating high- and low-frequency features. Inspired by the above analysis, to efficiently learn the consistency and complementary semantic information in multimodal conversation, we revisit the problem of multimodal emotion recognition in conversation from the perspective of the graph spectrum. Specifically, we propose a Graph-Spectrum-based Multimodal Consistency and Complementary feature collaboration framework, GS-MCC. GS-MCC first uses RoBERTa [30], OpenSMILE [11], and 3D-CNN [16] to extract preliminary textual, acoustic, and visual features. Then, a GRU and fully connected networks are used to further encode the text, auditory, and visual features and obtain higher-order utterance representations. To capture long-distance dependency information more efficiently, a sliding window is used to construct a fully connected graph that models conversational relationships, and efficient Fourier graph operators are used to extract long-distance high- and low-frequency information, respectively. In addition, to promote the collaboration of high- and low-frequency information, we use contrastive learning to construct self-supervised signals that reflect complementarity and consistent semantic collaboration between the high- and low-frequency signals, thereby improving the ability of both kinds of information to reflect real emotions. Finally, we input the collaborative high- and low-frequency information into an MLP network and softmax function for emotion prediction. The contributions of our work are summarized as follows: • We propose an efficient long-distance information learning module that designs Fourier graph operators to build a mixed-layer GNN that captures high- and low-frequency information, thereby obtaining consistency and complementary semantic dependencies in multimodal conversational contexts. • We propose an efficient high- and low-frequency information collaboration module that uses contrastive learning to construct self-supervised signals reflecting the collaboration of high- and low-frequency information in terms of complementarity and consistent semantics, improving the ability to distinguish emotions between different frequency information. • We conducted extensive comparative and ablation experiments on two benchmark datasets, IEMOCAP and MELD. The results show that our proposed method can efficiently capture long-distance context dependencies and improve the performance of MERC. Text Feature Extraction: Word embeddings can capture the semantic relationships between words, making words with similar meanings closer in the embedding space. Inspired by previous work [9, 19, 42], we use the RoBERTa model [30] to extract text features, and the embedding is denoted as $\varphi_t$. Audio and Vision Feature Extraction: Consistent with previous work [13, 24, 34], we employ openSMILE and 3D-CNN for audio and vision feature extraction, yielding embeddings $\varphi_a$ and $\varphi_v$, respectively.
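To make the feature-extraction step concrete, below is a minimal sketch of the textual branch using the Hugging Face transformers library. The paper does not specify the checkpoint or pooling strategy; taking the first-token hidden state of roberta-base is an assumption for illustration.

```python
import torch
from transformers import RobertaModel, RobertaTokenizer

# Encode one utterance with RoBERTa and take the <s>-position hidden state
# as the utterance embedding phi_t (pooling choice is an assumption).
tokenizer = RobertaTokenizer.from_pretrained("roberta-base")
encoder = RobertaModel.from_pretrained("roberta-base")

def extract_text_feature(utterance: str) -> torch.Tensor:
    inputs = tokenizer(utterance, return_tensors="pt", truncation=True)
    with torch.no_grad():
        hidden = encoder(**inputs).last_hidden_state  # (1, seq_len, 768)
    return hidden[:, 0]  # (1, 768): the first-token embedding as phi_t
```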
3.2 Speaker information embedding Speaker information can play an important role in emotion recognition: emotion is related not only to the characteristic attributes of the utterance but also to the speaker's inherent expression manner. Inspired by previous work [5, 14, 61], we incorporate speaker information into each unimodal utterance to obtain a unimodal representation that is aware of both context and speaker. Specifically, we first sort all speakers by name and use the one-hot vector $s_i$ to represent the $i$-th speaker. We then learn a unified embedding for the speakers so that similar speakers are closer together in the embedding space. The embedding of the $i$-th speaker is: $$S_i = W_{speaker}\, s_i, \quad (1)$$ where $W_{speaker}$ is a trainable weight. In addition, to obtain higher-order feature representations, we utilize a bidirectional Gated Recurrent Unit (GRU) to encode the conversational text features. We observed in practice that using recursive modules to encode the visual and auditory modalities has no positive performance impact; we therefore employ two perceptrons, one per modality, to encode the auditory and visual features. The specific encoding is: $$u_t = \overleftrightarrow{\mathrm{GRU}}(\varphi_t, u_t^{(+,-)1}), \quad u_a = W_a \varphi_a + b_a, \quad u_v = W_v \varphi_v + b_v, \quad (2)$$ where $W_a$, $b_a$, $W_v$, and $b_v$ are the learnable parameters of the auditory and visual encoders, respectively. We then add the speaker embeddings to obtain speaker- and context-aware unimodal representations: $$x_m = u_m + S_i, \quad m \in \{t, a, v\}, \quad (3)$$ where $t$, $a$, and $v$ denote the text, audio, and vision modalities, respectively.
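A minimal PyTorch sketch of Eqs. (1)-(3) follows: a bidirectional GRU for text, one linear encoder per non-text modality, and an additive speaker embedding. All dimensions are illustrative assumptions, not values taken from the paper.

```python
import torch
import torch.nn as nn

class UnimodalEncoder(nn.Module):
    """Sketch of Eqs. (1)-(3). phi_t: (B, T, d_text); phi_a/phi_v: (B, T, d_*)."""
    def __init__(self, d_text, d_audio, d_vision, d_hidden, n_speakers):
        super().__init__()
        # Bi-GRU output dim is 2 * (d_hidden // 2) = d_hidden
        self.gru = nn.GRU(d_text, d_hidden // 2, bidirectional=True, batch_first=True)
        self.audio_fc = nn.Linear(d_audio, d_hidden)    # u_a = W_a phi_a + b_a
        self.vision_fc = nn.Linear(d_vision, d_hidden)  # u_v = W_v phi_v + b_v
        self.speaker_emb = nn.Embedding(n_speakers, d_hidden)  # S_i = W_speaker s_i

    def forward(self, phi_t, phi_a, phi_v, speaker_ids):
        u_t, _ = self.gru(phi_t)            # Eq. (2), text branch
        u_a = self.audio_fc(phi_a)          # Eq. (2), audio branch
        u_v = self.vision_fc(phi_v)         # Eq. (2), vision branch
        s = self.speaker_emb(speaker_ids)   # Eq. (1)
        return u_t + s, u_a + s, u_v + s    # Eq. (3): x_m = u_m + S_i
```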
4 METHODOLOGY Fig. 2 shows the proposed Graph-Spectrum-based Multimodal Consistency and Complementary collaborative learning framework GS-MCC. GS-MCC contains five modules: feature encoding, multimodal interaction graph construction, the Fourier graph neural network, contrastive learning, and emotion classification. [Figure 2: The overall architecture of the proposed model GS-MCC. Feature embedding of multimodal utterances and speaker information is performed first, and the embedded features are then used to construct a multimodal semantic interaction graph. A Fourier graph neural network captures long-distance high- and low-frequency dependencies, and contrastive learning finally makes the high- and low-frequency information collaborate for emotion recognition.] 4.1 Multimodal Interaction Graph To model the latent semantic dependencies between multimodal utterances, we construct a multimodal interaction graph. Instead of fully connecting all nodes of the same modality, we use a sliding window as a restriction: although full connection is beneficial for building long-distance semantic dependencies, it introduces much noise, which is not conducive to subsequent GNN learning. Given a conversation sequence $U = \{u_1, \ldots, u_N\}$ with $N$ multimodal utterances, under the restriction of a sliding window $k$ we construct a multimodal interaction graph $G^k = (V^k, E^k, A^k, X^k)$, where a node $v \in V^k$ represents a single-modal utterance, an edge $e \in E^k$ represents a semantic interaction between unimodal utterances, $A^k$ is the adjacency matrix, and $X^k$ is the feature matrix. The multimodal semantic interaction graph is constructed as follows. Nodes: Since every utterance $u_i \in U$ contains three modalities, we treat each modality of each utterance as an independent node, denoted by the text node $x^i_t$, the auditory node $x^i_a$, and the visual node $x^i_v$, and use the corresponding feature $x^i_m$ as the node's initial embedding. The constructed multimodal interaction graph $G^k$ thus has $3N$ nodes. Edges: To avoid introducing noise or redundant information, we use the sliding window to limit connections between nodes of the same modality; specifically, we fully connect the same-modality nodes within the sliding window $k$. In addition, we connect the different modal nodes of the same utterance to build semantic interactions between modalities. For example, for an utterance $u_i \in U$, connections are built among the nodes $x^i_t$, $x^i_a$, and $x^i_v$. Edge Weight Initialization: To better capture the similarity between nodes, we use different similarity measures to determine the edge weights for different types of edges; higher similarity indicates a more important information interaction. For edges between nodes of the same modality, whose feature distributions are potentially consistent, the weight is: $$A^k_{ij} = 1 - \frac{\arccos\big(sim(x^i_m, x^j_m)\big)}{\pi}, \quad (4)$$ where $x^i_m$ and $x^j_m$ are the feature representations of the $i$-th and $j$-th nodes in the graph. For edges between nodes of different modalities, whose feature distributions are not potentially consistent, we use the hyperparameter $\phi$ to modulate the similarity between cross-modal nodes: $$A^k_{ij} = \phi \left( 1 - \frac{\arccos\big(sim(x^i_m, x^j_m)\big)}{\pi} \right). \quad (5)$$
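The graph construction of Section 4.1 can be sketched as below. The explicit loops are for clarity only, and treating edges as symmetric is our assumption.

```python
import torch
import torch.nn.functional as F

def build_interaction_graph(x, window, phi):
    """Sketch of Eqs. (4)-(5). `x` holds node features of shape (3N, d), ordered
    as [text 0..N-1, audio N..2N-1, vision 2N..3N-1]; `phi` is the cross-modal
    hyperparameter of Eq. (5). Returns the weighted adjacency A^k."""
    N = x.size(0) // 3
    A = torch.zeros(3 * N, 3 * N)
    sim = F.cosine_similarity(x.unsqueeze(1), x.unsqueeze(0), dim=-1).clamp(-1.0, 1.0)
    weight = 1.0 - torch.arccos(sim) / torch.pi            # core term of Eqs. (4)/(5)
    for m in range(3):                                     # intra-modal, windowed
        for i in range(N):
            for j in range(max(0, i - window), min(N, i + window + 1)):
                if i != j:
                    a, b = m * N + i, m * N + j
                    A[a, b] = weight[a, b]                 # Eq. (4)
    for i in range(N):                                     # inter-modal, same utterance
        for m1 in range(3):
            for m2 in range(3):
                if m1 != m2:
                    a, b = m1 * N + i, m2 * N + i
                    A[a, b] = phi * weight[a, b]           # Eq. (5)
    return A
```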
4.2 Fourier Graph Neural Network As mentioned above, using a sliding window limits long-distance dependency learning, because a traditional GNN has over-smoothing characteristics and cannot stack many layers. Different from the methods used by MMGCN [14] and M3Net [5], this paper is inspired by FourierGNN [54] and designs efficient Fourier graph operators for the high- and low-frequency signals, respectively, to capture long-distance dependency information. Fourier Graph Operator. Consider a multimodal interaction graph $G^k = (V^k, E^k, A^k, X^k)$, where $A^k \in \mathbb{R}^{3N \times 3N}$ is the adjacency matrix, $X^k \in \mathbb{R}^{3N \times d}$ is the feature matrix, $N$ is the number of multimodal utterances, and $d$ is the feature dimension. Following FourierGNN, from the adjacency matrix $A^k$ and a weight matrix $W \in \mathbb{R}^{d \times d}$ we can obtain a Green kernel $\kappa \in \mathbb{R}^{d \times d}$ that satisfies $\kappa[i, j] = \kappa[i - j]$ and $\kappa[i, j] = A^k_{ij} \circ W$, where $i$ and $j$ fall between 1 and $3N$. Based on the kernel $\kappa$, we obtain the following Fourier graph operator (FGO) $S_G$: $$S_G = \mathcal{F}(\kappa) \in \mathbb{C}^{3N \times d \times d}, \quad (6)$$ where $\mathcal{F}$ is the Discrete Fourier Transform (DFT). According to graph convolution theory, the graph convolution operation can be expressed as: $$F_{\theta_G}(X^k, A^k) = A^k X^k W = \mathcal{F}^{-1}\big(\mathcal{F}(X^k)\,\mathcal{F}(\kappa)\big), \quad (7)$$
where $\theta_G$ denotes the learnable parameters and $\mathcal{F}^{-1}$ is the Inverse Discrete Fourier Transform (IDFT). According to the convolution theorem and the conditions on the FGO, we can expand the frequency-domain term in Eq. (7) as follows: $$\mathcal{F}(X^k)\mathcal{F}(\kappa) = \mathcal{F}\big((X^k * \kappa)[i]\big) = \mathcal{F}\Big(\sum_j X^k[j]\,\kappa[i-j]\Big) = \mathcal{F}\Big(\sum_j X^k[j]\,\kappa[i,j]\Big) = \mathcal{F}\Big(\sum_j A^k_{ij} X^k[j]\, W\Big) = \mathcal{F}\big(A^k X^k W\big). \quad (8)$$ As seen from Eq. (8), the graph convolution operation is implemented as the product of the FGO and the features in the frequency domain. Moreover, by the convolution theorem, convolution in the time domain equals multiplication in the frequency domain: the frequency-domain product requires only $O(N)$ time, whereas the time-domain convolution requires $O(N^2)$ time. Therefore, an efficient graph neural network can be built on Fourier graph operators. To efficiently capture high- and low-frequency information, we optimize the FGO accordingly and use high-pass and low-pass filters to extract complementary and consistent semantic information, respectively. The filters are designed as follows: $$L^l = I + D_G^{-1/2} A^k D_G^{-1/2}, \quad (9)$$ $$L^h = I - D_G^{-1/2} A^k D_G^{-1/2}, \quad (10)$$ where $I$ is the identity matrix, $D_G$ and $A^k$ are the degree matrix and adjacency matrix of the multimodal interaction graph, respectively, and $L^l$ and $L^h$ are the low-pass and high-pass filters, respectively. Based on the low-pass and high-pass filters, we obtain the following low- and high-frequency Green kernels and Fourier graph operators: $$\kappa^{l/h}[i, j] = L^{l/h}_{ij} \circ W, \quad (11)$$ $$S^{l/h}_G = \mathcal{F}\big(\kappa^{l/h}\big). \quad (12)$$
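Below is a simplified sketch of the frequency-domain convolution that the FGOs enable (Eqs. (6)-(7)), with the low-/high-pass bias of Eqs. (9)-(12) absorbed into the operator's initialization. Representing the FGO as one learnable complex d-by-d matrix per frequency is our simplification of the paper's kernel construction, not the authors' exact implementation.

```python
import torch
import torch.nn as nn

class FourierGraphConv(nn.Module):
    """Simplified frequency-domain graph convolution: DFT over the node axis,
    per-frequency complex matrix multiply by the learnable operator S_G,
    then inverse DFT back to the time (node) domain."""
    def __init__(self, n_nodes, d):
        super().__init__()
        self.S = nn.Parameter(torch.randn(n_nodes, d, d, dtype=torch.cfloat) * 0.02)

    def forward(self, X):                                    # X: (3N, d) real features
        Xf = torch.fft.fft(X.to(torch.cfloat), dim=0)        # F(X^k)
        Yf = torch.einsum("nd,nde->ne", Xf, self.S)          # F(X^k) S_G
        return torch.fft.ifft(Yf, dim=0).real                # F^{-1}(.)
```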
Finally, we can build an $M$-layer Fourier graph neural network from these efficient Fourier graph operators to capture long-distance high- and low-frequency dependency information in the multimodal interaction graph: $$F^{l/h}_{\theta_G}(X^k, A^k) = \sum_{m=0}^{M} \sigma\big(\mathcal{F}(X^k)\, S^{l/h}_{G \Rightarrow [0:m]} + b^{l/h}\big), \quad (13)$$ $$S^{l/h}_{G \Rightarrow [0:m]} = \prod_{i=0}^{m} S^{l/h}_{G \rightarrow i}, \quad (14)$$ where $\sigma$ is the activation function, $b^{l/h}$ is the bias parameter, $S^{l/h}_{G \rightarrow i}$ is the FGO in the $i$-th layer, and $l$ and $h$ denote the low and high frequencies, respectively. By stacking $M$ layers of Fourier graph operators, our model can capture long-distance dependency information and obtain each node's low-frequency feature representation $x^l_m$ and high-frequency feature representation $x^h_m$, respectively. 4.3 Contrastive Learning Low-frequency features reflect slow changes in emotion, while high-frequency features reflect rapid changes in emotion. To make these two kinds of features cooperate, we employ contrastive learning to build self-supervised signals that promote consistent and complementary semantic learning in multimodal utterances. Inspired by the SpCo [28] method, increasing the frequency-domain difference between two contrastive views yields better contrastive learning effects. Unlike SpCo, our contrastive learning is performed directly in the frequency domain and does not rely on data augmentation to generate contrastive views. Specifically, we use a combination of low-frequency contrastive learning and high-frequency contrastive learning to promote the synergy of the two kinds of features, and we only use the strategy of pushing negative sample pairs apart to increase the frequency-domain difference between the contrastive views. LFCL: Low Frequency Contrastive Learning. LFCL uses low-frequency samples as anchor nodes and all high-frequency nodes as negative samples to construct a self-supervised signal that increases the frequency-domain difference between the contrastive views, thereby promoting consistent and complementary semantic learning in multimodal conversations. For each low-frequency anchor node, the self-supervised contrastive loss is defined as: $$\mathcal{L}_{LFCL} = -\frac{1}{\tau} + \log\Big(e^{1/\tau} + \sum_{i=1}^{3N} e^{\big((x^l_m)^{T} x^{h,i-}_m\big)/\tau}\Big), \quad (15)$$ where $\tau$ is the temperature coefficient, $x^l_m$ is the low-frequency anchor node, and $x^{h,i-}_m$ is the $i$-th high-frequency negative sample. HFCL: High Frequency Contrastive Learning. HFCL is similar to LFCL, except that it uses high-frequency samples as anchor nodes and all low-frequency nodes as negative samples: $$\mathcal{L}_{HFCL} = -\frac{1}{\tau} + \log\Big(e^{1/\tau} + \sum_{i=1}^{3N} e^{\big((x^h_m)^{T} x^{l,i-}_m\big)/\tau}\Big), \quad (16)$$ where $x^h_m$ is the high-frequency anchor node and $x^{l,i-}_m$ is the $i$-th low-frequency negative sample. The overall collaborative contrastive learning loss is the sum of LFCL and HFCL: $$\mathcal{L}_{CCL} = \mathcal{L}_{LFCL} + \mathcal{L}_{HFCL}. \quad (17)$$
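The two negative-only losses and their sum (Eqs. (15)-(17)) translate almost directly into code. The L2 normalization of the embeddings and the mean over anchors are our assumptions; the paper states the loss per anchor node.

```python
import torch
import torch.nn.functional as F

def collaborative_contrastive_loss(x_low, x_high, tau=0.5):
    """Sketch of Eqs. (15)-(17). Both inputs are (3N, d) embeddings; every
    cross-frequency pair is a negative, and there are no positive pairs."""
    x_low = F.normalize(x_low, dim=-1)
    x_high = F.normalize(x_high, dim=-1)
    logits = x_low @ x_high.t() / tau                       # (x^l)^T x^{h,i-} / tau
    base = torch.full((logits.size(0), 1), 1.0 / tau, device=logits.device)
    # log(e^{1/tau} + sum_i e^{logits_i}) via logsumexp over [1/tau, logits...]
    lfcl = (-1.0 / tau + torch.logsumexp(torch.cat([base, logits], 1), 1)).mean()      # Eq. (15)
    hfcl = (-1.0 / tau + torch.logsumexp(torch.cat([base, logits.t()], 1), 1)).mean()  # Eq. (16)
    return lfcl + hfcl                                      # Eq. (17)
```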
Finally, we use the inverse discrete Fourier transform to convert the high- and low-frequency features back into time-domain features and concatenate the two parts to obtain the final embedding of each unimodal utterance node: $$v_m = \mathrm{IDFT}(x^l_m) \oplus \mathrm{IDFT}(x^h_m), \quad (18)$$ where $m \in \{t, a, v\}$ denotes the text, auditory, or visual modality. 4.4 Emotion Classifier For each utterance $U_i$, we concatenate the features of the three modalities for emotion classification: $$U_i = v^i_t \oplus v^i_a \oplus v^i_v, \quad (19)$$ $$\tilde{U}_i = \mathrm{ReLU}(U_i), \quad (20)$$ $$\mathcal{P}_i = \mathrm{softmax}(W_u \tilde{U}_i + b_u), \quad (21)$$ $$\hat{y}_i = \operatorname{argmax}_k\big(\mathcal{P}_i[k]\big), \quad (22)$$ where $W_u$ and $b_u$ are learnable parameters and $\hat{y}_i$ is the predicted emotion label of utterance $U_i$. Finally, we employ the categorical cross-entropy loss together with the contrastive loss for model training.
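Eqs. (18)-(22) amount to concatenation followed by a one-layer classification head; a sketch follows (training would apply cross-entropy to the logits rather than to the argmax). The hidden size 2d per modality reflects the low/high concatenation of Eq. (18).

```python
import torch
import torch.nn as nn

class EmotionClassifier(nn.Module):
    """Sketch of Eqs. (19)-(22): concatenate the three modality embeddings
    (each of size 2d after Eq. (18)) and classify with ReLU + linear + softmax."""
    def __init__(self, d, n_classes):
        super().__init__()
        self.head = nn.Linear(6 * d, n_classes)

    def forward(self, v_t, v_a, v_v):
        U = torch.cat([v_t, v_a, v_v], dim=-1)   # Eq. (19)
        logits = self.head(torch.relu(U))        # Eqs. (20)-(21)
        return logits, logits.softmax(-1).argmax(-1)  # Eq. (22)
```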
5 EXPERIMENTS 5.1 Implementation Details Benchmark Datasets and Evaluation Metrics: In our experiments, we use two multimodal datasets widely adopted in multimodal emotion recognition, IEMOCAP [3] and MELD [39]. IEMOCAP (Interactive Emotional Dyadic Motion Capture Database) is a multimodal database for emotion recognition and analysis, consisting of dyadic dialogue clips with the voice, video, and emotion annotations of 10 actors in interactive dialogue scenes. MELD (Multimodal EmotionLines Dataset) contains dialogue text from movie and TV show clips, covering the characters' speech and the context of the dialogue; MELD also provides the corresponding audio and video recordings. We report the classification accuracy (Acc.) and F1 for each emotion category, as well as the overall weighted average accuracy (W-Acc.) and weighted average F1 (W-F1). Baseline Methods: We compare against several baselines on the IEMOCAP and MELD datasets, including bc-LSTM [38] and A-DMN [51] based on the RNN architecture; LFM [31] based on a low-rank tensor fusion network; DialogueGCN [13], LR-GCN [41], DER-GCN [1], MMGCN [14], AdaGIN [47], RGAT [15], and CoMPM [20] based on GCN; and EmoBERTa [19], CTNet [26], and COGMEN [17] based on the Transformer architecture. Experimental Setup: All experiments are conducted with Python 3.8 and the PyTorch 1.8 deep learning framework on a single NVIDIA RTX 3090 24G GPU. Our model is trained using AdamW with a learning rate of 1e-5, cross-entropy as the loss function, and a batch size of 32. The optimal parameters of all models are obtained by parameter tuning with leave-one-out cross-validation on the validation set. 5.2 Comparison with the State-of-the-Art Table 1 and Table 2 show the emotion recognition results of the proposed GS-MCC and the baseline methods on the IEMOCAP and MELD datasets, respectively. On the IEMOCAP dataset, GS-MCC achieves the best results, outperforming all baselines and exceeding AdaGIN by 3.3% and 3.2% on W-Acc and W-F1, respectively; GS-MCC also substantially improves Acc and F1 in several emotion categories. Similarly, on the MELD dataset GS-MCC again achieves the best results, outperforming AdaGIN by 0.5% and 2.2% on W-Acc and W-F1, respectively, and is optimal in Acc or F1 for most emotion categories. These results demonstrate the effectiveness of GS-MCC. The performance improvement may be attributed to the proposed method's ability to utilize long-distance contextual semantic information from the high- and low-frequency signals while avoiding the over-smoothing phenomenon of GCN. Furthermore, the proposed GS-MCC has only 2.10M model parameters, far fewer than DialogueGCN and other GCN-based emotion recognition methods, which also demonstrates the potential of our method for efficient computing.
Table 1: Comparison with other baseline models on the IEMOCAP dataset (Acc. / F1 per class; the best result in each column was bolded in the original).

| Methods | Params | Happy | Sad | Neutral | Angry | Excited | Frustrated | Average (w) |
|---|---|---|---|---|---|---|---|---|
| bc-LSTM | 1.28M | 29.1 / 34.4 | 57.1 / 60.8 | 54.1 / 51.8 | 57.0 / 56.7 | 51.1 / 57.9 | 67.1 / 58.9 | 55.2 / 54.9 |
| LFM | 2.34M | 25.6 / 33.1 | 75.1 / 78.8 | 58.5 / 59.2 | 64.7 / 65.2 | 80.2 / 71.8 | 61.1 / 58.9 | 63.4 / 62.7 |
| A-DMN | – | 43.1 / 50.6 | 69.4 / 76.8 | 63.0 / 62.9 | 63.5 / 56.5 | 88.3 / 77.9 | 53.3 / 55.7 | 64.6 / 64.3 |
| DialogueGCN | 12.92M | 40.6 / 42.7 | 89.1 / 84.5 | 62.0 / 63.5 | 67.5 / 64.1 | 65.5 / 63.1 | 64.1 / 66.9 | 65.2 / 64.1 |
| RGAT | 15.28M | 60.1 / 51.6 | 78.8 / 77.3 | 60.1 / 65.4 | 70.7 / 63.0 | 78.0 / 68.0 | 64.3 / 61.2 | 65.0 / 65.2 |
| CoMPM | – | 59.9 / 60.7 | 78.0 / 82.2 | 60.4 / 63.0 | 70.2 / 59.9 | 85.8 / 78.2 | 62.9 / 59.5 | 67.7 / 67.2 |
| EmoBERTa | 499M | 56.9 / 56.4 | 79.1 / 83.0 | 64.0 / 61.5 | 70.6 / 69.6 | 86.0 / 78.0 | 63.8 / 68.7 | 67.3 / 67.3 |
| COGMEN | – | 57.4 / 51.9 | 81.4 / 81.7 | 65.4 / 68.6 | 69.5 / 66.0 | 83.3 / 75.3 | 63.8 / 68.2 | 68.2 / 67.6 |
| CTNet | 8.49M | 47.9 / 51.3 | 78.0 / 79.9 | 69.0 / 65.8 | 72.9 / 67.2 | 85.3 / 78.7 | 52.2 / 58.8 | 68.0 / 67.5 |
| LR-GCN | 15.77M | 54.2 / 55.5 | 81.6 / 79.1 | 59.1 / 63.8 | 69.4 / 69.0 | 76.3 / 74.0 | 68.2 / 68.9 | 68.5 / 68.3 |
| MMGCN | 0.46M | 43.1 / 42.3 | 79.3 / 78.7 | 63.5 / 61.7 | 69.6 / 69.0 | 75.8 / 74.3 | 63.5 / 62.3 | 67.4 / 66.2 |
| AdaGIN | 6.3M | 53.0 / – | 81.5 / – | 71.3 / – | 65.9 / – | 76.3 / – | 67.8 / – | 70.5 / 70.7 |
| DER-GCN | 78.59M | 60.7 / 58.8 | 75.9 / 79.8 | 66.5 / 61.5 | 71.3 / 72.1 | 71.1 / 73.3 | 66.1 / 67.8 | 69.7 / 69.4 |
| GS-MCC | 2.10M | 60.2 / 65.4 | 86.2 / 81.2 | 75.7 / 70.9 | 71.7 / 70.8 | 83.2 / 81.4 | 66.0 / 71.0 | 73.8 / 73.9 |

Table 2: Comparison with other baseline models on the MELD dataset (Acc. / F1 per class; the best result in each column was bolded in the original).

| Methods | Params | Neutral | Surprise | Fear | Sadness | Joy | Disgust | Anger | Average (w) |
|---|---|---|---|---|---|---|---|---|---|
| bc-LSTM | 1.28M | 78.4 / 73.8 | 46.8 / 47.7 | 3.8 / 5.4 | 22.4 / 25.1 | 51.6 / 51.3 | 4.3 / 5.2 | 36.7 / 38.4 | 57.5 / 55.9 |
| DialogueRNN | 14.47M | 72.1 / 73.5 | 54.4 / 49.4 | 1.6 / 1.2 | 23.9 / 23.8 | 52.0 / 50.7 | 1.5 / 1.7 | 41.0 / 41.5 | 56.1 / 55.9 |
| DialogueGCN | 12.92M | 70.3 / 72.1 | 42.4 / 41.7 | 3.0 / 2.8 | 20.9 / 21.8 | 44.7 / 44.2 | 6.5 / 6.7 | 39.0 / 36.5 | 54.9 / 54.7 |
| RGAT | 15.28M | 76.0 / 78.1 | 40.1 / 41.5 | 3.0 / 2.4 | 32.1 / 30.7 | 68.1 / 58.6 | 4.5 / 2.2 | 40.0 / 44.6 | 60.3 / 61.1 |
| CoMPM | – | 78.3 / 82.0 | 48.3 / 49.2 | 1.7 / 2.9 | 35.9 / 32.3 | 71.4 / 61.5 | 3.1 / 2.8 | 42.2 / 45.8 | 64.1 / 65.3 |
| EmoBERTa | 499M | 78.9 / 82.5 | 50.2 / 50.2 | 1.8 / 1.9 | 33.3 / 31.2 | 72.1 / 61.7 | 9.1 / 2.5 | 43.3 / 46.4 | 64.1 / 65.2 |
| A-DMN | – | 76.5 / 78.9 | 56.2 / 55.3 | 8.2 / 8.6 | 22.1 / 24.9 | 59.8 / 57.4 | 1.2 / 3.4 | 41.3 / 40.9 | 61.5 / 60.4 |
| LR-GCN | 15.77M | 76.7 / 80.0 | 53.3 / 55.2 | 0.0 / 0.0 | 49.6 / 35.1 | 68.0 / 64.4 | 10.7 / 2.7 | 48.0 / 51.0 | 65.7 / 65.6 |
| MM-GCN | 0.46M | 64.8 / 77.1 | 67.4 / 53.9 | 0.0 / 0.0 | 72.4 / 17.7 | 68.7 / 56.9 | 0.0 / 0.0 | 54.4 / 42.6 | 64.4 / 59.4 |
| AdaGIN | 6.3M | 79.8 / – | 60.5 / – | 15.2 / – | 43.7 / – | 64.5 / – | 29.3 / – | 56.2 / – | 67.6 / 66.8 |
| DER-GCN | 78.59M | 76.8 / 80.6 | 50.5 / 51.0 | 14.8 / 10.4 | 56.7 / 41.5 | 69.3 / 64.3 | 17.2 / 10.3 | 52.5 / 57.4 | 66.8 / 66.1 |
| GS-MCC | 2.10M | 78.4 / 81.8 | 56.9 / 58.3 | 23.5 / 23.8 | 50.0 / 35.8 | 69.4 / 66.4 | 36.7 / 30.7 | 53.2 / 54.4 | 68.1 / 69.0 |

[Figure 3: Loss trends during model training and inference on the IEMOCAP and MELD datasets, comparing DialogueGCN, GS-MCC without contrastive loss, and GS-MCC.]

5.3 Trends of Losses During the training and inference process of the model, we show the loss trends of DialogueGCN, GS-MCC without contrastive loss, and GS-MCC on the IEMOCAP and MELD datasets to better understand the convergence of the model. Fig. 3 shows the training loss results. On the IEMOCAP dataset, we found that DialogueGCN quickly converges to a local optimum and keeps fluctuating around a loss value of 1.1. GS-MCC without contrastive loss converges better than DialogueGCN, settling around a loss value of 0.8. Although the loss value of GS-MCC without contrastive loss is higher than that of GS-MCC at the beginning of training, as training continues, the convergence of GS-MCC becomes better than that of GS-MCC without contrastive loss, settling around a loss value of 0.4. 
On the MELD dataset, the loss values of DialogueGCN and GS-MCC without contrastive loss fluctuate considerably and are difficult to converge, although the loss of GS-MCC without contrastive loss is lower than that of DialogueGCN. GS-MCC with contrastive loss converges better, settling around a loss value of 0.9. The experimental results show that the contrastive learning mechanism plays an essential role in the convergence of the GCN network and can make the high- and low-frequency features cooperate for better emotion recognition. [Figure 4: Emotion recognition performance of DialogueGCN and GS-MCC on the IEMOCAP and MELD datasets. We stack 4-layer and 8-layer GCNs to explore the over-smoothing phenomenon of the model.] 5.4 Over-smoothing Analysis It is challenging to train a deep GCN with strong feature expression ability, because deep GCNs are prone to over-smoothing, which limits the expressiveness of node features. From a training perspective, over-smoothing removes discriminative semantic information from node features. Therefore, we stack 4-layer and 8-layer GCNs to study the over-smoothing of DialogueGCN and GS-MCC on the IEMOCAP and MELD datasets; Fig. 4 shows the comparison results. On the IEMOCAP and MELD datasets, we observe that DialogueGCN-8 converges poorly during training and suffers from severe over-smoothing. DialogueGCN-4 converges slightly better than DialogueGCN-8 but can only fluctuate around a local optimum; it also suffers from serious overfitting, especially on the IEMOCAP dataset. Compared with DialogueGCN-8, GS-MCC-8 alleviates the over-smoothing problem to a certain extent and converges to a local optimum, while GS-MCC-4 converges well and reaches a relatively stable optimal solution. These results show that GS-MCC can alleviate the model's over-smoothing problem to a certain extent. This may be attributed to GS-MCC's ability to use node information of different orders to update node representations: by mixing the features of different orders in each layer, GS-MCC maintains the diversity of node features and thus prevents over-smoothing. Therefore, GS-MCC can effectively capture long-distance dependency information in multimodal conversations. 5.5 Ablation Study Ablation studies for SE, Fourier GNN, and CL. Speaker embedding (SE), the Fourier graph neural network (Fourier GNN), and contrastive learning (CL) are the three critical components of our proposed multimodal emotion recognition model. We remove one module at a time to verify its effectiveness; note that when Fourier GNN is removed, we use DialogueGCN as the backbone of the model. From the emotion recognition results in Table 3, we conclude: (1) All the proposed modules are helpful, since removing any of them degrades the model's emotion recognition performance. (2) Speaker embedding has a relatively significant impact on the model's emotion recognition performance, because removing the speaker embedding information from either the IEMOCAP or MELD setting significantly reduces the emotion recognition effect. 
The experimental results show that the speaker's embedded information is essential for the model to understand emotions. (3) On the IEMOCAP and MELD datasets, Fourier GNN is more critical than contrastive learning. We speculate that this is because Fourier GNN captures high- and low-frequency signals that provide more useful emotional semantic information, while the contrastive learning mechanism mainly assists Fourier GNN in achieving better collaboration between the complementary and consistent semantic information.

Table 3: Ablation studies for SE, Fourier GNN, and CL on the IEMOCAP and MELD datasets.

| Methods | IEMOCAP W-Acc. | IEMOCAP W-F1 | MELD W-Acc. | MELD W-F1 |
|---|---|---|---|---|
| GS-MCC | 73.1 | 73.3 | 68.1 | 69.0 |
| w/o SE | 70.3 (↓2.8) | 70.6 (↓2.7) | 65.4 (↓2.7) | 64.6 (↓4.4) |
| w/o Fourier GNN | 68.7 (↓4.4) | 67.7 (↓5.6) | 64.2 (↓3.9) | 64.1 (↓4.9) |
| w/o CL | 70.3 (↓2.8) | 71.3 (↓2.0) | 66.1 (↓2.0) | 65.9 (↓3.1) |

Table 4: The effect of our method on the IEMOCAP and MELD datasets using unimodal and multimodal features, respectively.

| Modality | IEMOCAP W-Acc. | IEMOCAP W-F1 | MELD W-Acc. | MELD W-F1 |
|---|---|---|---|---|
| T+A+V | 73.8 | 73.9 | 68.1 | 69.0 |
| T | 66.3 (↓7.5) | 66.0 (↓7.9) | 63.7 (↓4.4) | 62.5 (↓6.5) |
| A | 57.7 (↓16.1) | 58.1 (↓15.8) | 53.8 (↓14.3) | 53.4 (↓15.6) |
| V | 50.4 (↓23.4) | 50.5 (↓23.4) | 41.4 (↓26.7) | 42.3 (↓26.7) |
| T+A | 71.6 (↓2.2) | 71.0 (↓2.9) | 66.3 (↓1.8) | 65.9 (↓3.1) |
| T+V | 69.5 (↓4.3) | 68.7 (↓5.2) | 64.2 (↓3.9) | 64.1 (↓4.9) |
| V+A | 63.7 (↓10.1) | 63.0 (↓10.9) | 54.6 (↓13.5) | 53.4 (↓15.6) |

Ablation studies for multimodal features. We conduct ablation experiments comparing single-modal, bi-modal, and tri-modal results to explore the importance of each modality; the results are listed in Table 4, with W-Acc and W-F1 as the evaluation metrics. In the single-modal experiments, the textual features achieve the best performance, which shows that text plays a decisive role in MERC, whereas the visual features have the worst emotion recognition effect; we speculate that the video features contain more noise, making it difficult for the model to learn effective emotional feature representations. In the bi-modal experiments, every bi-modal combination outperforms its constituent single modalities, and the tri-modal setting performs best among all experiments. The performance improvement may be attributed to the effective fusion of complementary multimodal semantic information, which improves the feature representation ability for emotions. Therefore, GS-MCC can effectively utilize the consistent and complementary semantic information in multimodal conversations to improve the emotion recognition effect. 6 CONCLUSIONS In this paper, we rethink the problem of multimodal emotion recognition in conversation from the perspective of the graph spectrum, taking into account the shortcomings of existing work. Specifically, we propose a Graph-Spectrum-based Multimodal Consistency and Complementary feature collaboration framework, GS-MCC. First, we combine sliding windows to build a multimodal interaction graph that models the conversational relationships between utterances and speakers. Second, we design efficient Fourier graph operators to capture the long-distance consistency and complementary semantic dependencies among utterances. 
Finally, we adopt contrastive learning, constructing self-supervised signals with all negative samples to promote the collaboration of the two kinds of semantic information. Extensive experiments on two widely used benchmark datasets, IEMOCAP and MELD, demonstrate the effectiveness and efficiency of our proposed method.", "additional_info": [ [ { "url": "http://arxiv.org/abs/2403.06832v2", "title": "The Power of Noise: Toward a Unified Multi-modal Knowledge Graph Representation Framework", "abstract": "The advancement of Multi-modal Pre-training highlights the necessity for a\nrobust Multi-Modal Knowledge Graph (MMKG) representation learning framework.\nThis framework is crucial for integrating structured knowledge into multi-modal\nLarge Language Models (LLMs) at scale, aiming to alleviate issues like\nknowledge misconceptions and multi-modal hallucinations. In this work, to\nevaluate models' ability to accurately embed entities within MMKGs, we focus on\ntwo widely researched tasks: Multi-modal Knowledge Graph Completion (MKGC) and\nMulti-modal Entity Alignment (MMEA). Building on this foundation, we propose a\nnovel SNAG method that utilizes a Transformer-based architecture equipped with\nmodality-level noise masking for the robust integration of multi-modal entity\nfeatures in KGs. By incorporating specific training objectives for both MKGC\nand MMEA, our approach achieves SOTA performance across a total of ten datasets\n(three for MKGC and seven for MMEA), demonstrating its robustness and\nversatility. Besides, SNAG can not only function as a standalone model but also\nenhance other existing methods, providing stable performance improvements. Our\ncode and data are available at: https://github.com/zjukg/SNAG.", "authors": "Zhuo Chen, Yin Fang, Yichi Zhang, Lingbing Guo, Jiaoyan Chen, Huajun Chen, Wen Zhang", "published": "2024-03-11", "updated": "2024-03-20", "primary_cat": "cs.CL", "cats": [ "cs.CL", "cs.AI" ], "label": "Original Paper", "paper_cat": "Graph AND Structure AND Learning", "gt": "Typically, a KG is considered multi-modal when it contains knowledge symbols expressed across various modalities, including, but not limited to, text, images, sound, or video [12]. Current research primarily concentrates on the visual modality, assuming that other modalities can be processed similarly. 2.1 MMKG Representation The current mainstream approaches to MMKG representation learning, which focus on integrating entity modalities within MMKGs, can broadly be classified into two distinct categories: (i) Late Fusion methods focus on the interactions and weighting of different modalities, typically employing techniques like summation, concatenation, Multi-Layer Perceptrons (MLPs), or gating mechanisms to aggregate features just before generating outputs. For example, MKGRL-MS [52] crafts distinct single-modal embeddings, using multi-head self-attention to evaluate the contribution of each modality to the semantic composition and summing the weighted multi-modal features for MMKG entity representation. MMKRL [36] learns cross-modal embeddings in a unified translational semantic space, merging modality embeddings for each entity through concatenation. DuMF [29] adopts a dual-track strategy, utilizing a bilinear layer for feature projection and an attention block for modality preference learning in each track, with a gate network to synthesize these features into a unified representation. 
(ii) Early Fusion methods integrate multi-modal features at an initial stage, fostering the deeper interaction between modalities that is essential for complex reasoning. This yields a unified and potent entity representation and makes the resulting embeddings easier to integrate with other models. For example, CMGNN [16] first normalizes entity modalities into a unified embedding using an MLP, then refines them by contrasting with perturbed negative samples. MMRotatH [56] utilizes a gated encoder to merge textual and structural data, filtering irrelevant information within a rotational-dynamics-based KGE framework. Recent studies [8, 23, 31] utilize Pre-trained Language Models (PLMs) like BERT and Vision Transformers like ViT for multi-modal data integration. They format graph structures, text, and images into sequences or dense embeddings compatible with PLMs, thereby utilizing the PLMs' reasoning capabilities and the knowledge embedded in their parameters to support downstream tasks. In this paper, we propose a Transformer-based method, SnAg, that introduces fine-grained, entity-level modality preference to enhance entity representation. This strategy combines the benefits of Early Fusion, with its effective modality interaction, while also aligning with the Late Fusion modality integration paradigm. Furthermore, our model is lightweight, boasting a significantly lower parameter count than traditional PLM-based methods, which offers increased flexibility and wider applicability. 2.2 Multi-Modal Knowledge Graph Completion Multi-modal Knowledge Graph Completion (MKGC) is crucial for inferring missing triples in existing MMKGs, involving three subtasks: Entity Prediction, Relation Prediction, and Triple Classification. Currently, most research in MKGC focuses on Entity Prediction, also widely recognized as Link Prediction, with two main lines of methods: Embedding-based Approaches build on conventional Knowledge Graph Embedding (KGE) methods [2, 45], adapted to integrate multi-modal data and enhance entity embeddings. (i) Modality Fusion methods [21, 23, 32, 52, 57] integrate multi-modal and structural embeddings to assess triple plausibility. Early efforts, like IKRL [58], utilize multiple TransE-based scoring functions [2] for modal interaction. RSME [53] employs gates for selective modal information integration. OTKGE [3] leverages optimal transport for fusion, while CMGNN [17] implements a multi-modal GNN with cross-modal contrastive learning. (ii) Modality Ensemble methods train distinct models per modality, merging outputs for predictions. For example, MoSE [67] utilizes structural, textual, and visual data to train three KGC models and employs ensemble strategies for joint predictions. Similarly, IMF [27] proposes an interactive model that achieves modal disentanglement and entanglement to make robust predictions. (iii) Modality-aware Negative Sampling methods boost the differentiation between correct and erroneous triples by incorporating multi-modal context for superior negative sample selection. MMKRL [36] introduces adversarial training to MKGC, adding perturbations to modal embeddings. Following this, VBKGC [66] and MANS [62] develop fine-grained visual negative sampling to better align visual with structural embeddings for more nuanced comparison training. MMRNS [59] enhances this with relation-based sample selection. 
Finetune-based Approaches exploit the world-understanding capabilities of pre-trained Transformer models like BERT [15] and VisualBERT [25] for MKGC. These approaches reformat MMKG triples as token sequences for PLM processing [30], often framing KGC as a classification task. For example, MKGformer [8] integrates multi-modal fusion at multiple levels, treating MKGC as a Masked Language Modeling (MLM) task, while SGMPT [31] extends this by incorporating structural data and a dual-strategy fusion module. 2.3 Multi-Modal Entity Alignment Entity Alignment (EA) is pivotal for KG integration, aiming to identify identical entities across different KGs by leveraging relational, attributive, and literal (surface) features. Multi-Modal Entity Alignment (MMEA) enhances this process by incorporating visual data, thereby improving alignment accuracy [5, 35]. EVA [34] applies an attention mechanism to modulate the importance of each modality and introduces an unsupervised approach that utilizes visual similarities for alignment, reducing reliance on gold-standard labels. MSNEA [6] leverages visual cues to guide relational feature learning. MCLEA [33] employs KL divergence to mitigate the modality distribution gap between uni-modal and joint embeddings. PathFusion [68] and ASGEA [37] combine information from different modalities using the modality similarity or alignment path as an information carrier. MEAformer [9] adjusts mutual modality preferences dynamically for entity-level modality fusion, addressing inconsistencies in entities' surrounding modalities. [Figure 2: The overall framework of SnAg.] Despite nearly five years of development, tasks like MMEA and MKGC have evolved independently within the MMKG community without a unified representation learning framework to address both. With the advancement of multi-modal LLMs, it is timely to reconsider these challenges from a broader perspective, aiming for a holistic framework that addresses both tasks and delivers meaningful multi-modal entity representations.", "pre_questions": [], "main_content": "INTRODUCTION The exploration of multi-modal dimensions within Knowledge Graphs (KGs) has become a pivotal force in the semantic web domain, catalyzing advancements in various artificial intelligence applications. With the evolution of Large Language Models (LLMs) and Multi-modal Pre-training, the imperative for a robust and comprehensive Multi-Modal Knowledge Graph (MMKG) representation learning framework has become apparent. Such a framework is essential for the effective integration of structured knowledge into multi-modal LLMs at scale, addressing prevalent challenges like knowledge misconceptions and multi-modal hallucination. Current efforts to integrate MMKG with pre-training are scarce. Triple-level methods [38] treat triples as standalone knowledge units, embedding the (head entity, relationship, tail entity) structure into the Visual Language Model's space. [Figure 1: While existing works design models to refuse and combat noise in MMKGs, our SnAg accepts and deliberately incorporates noise to adapt to the noisy real-world scenarios.] On the other hand, Graph-level methods [18, 26] capitalize on the structural connections among entities in a global MMKG. 
By selectively gathering multi-modal neighbor nodes around each entity featured in the training corpus, they apply techniques such as Graph Neural Networks (GNNs) or concatenation to effectively incorporate knowledge during the pre-training process. However, these approaches predominantly view MMKG from a traditional KG perspective, not fully separating the MMKG representation process from downstream or pre-training tasks. In this work, we revisit MMKG representation learning from the MMKG perspective itself, employing two tasks, Multi-modal Knowledge Graph Completion (MKGC) and Multi-modal Entity Alignment (MMEA), to validate our method. Specifically, we introduce a unified Transformer-based framework (SnAg) that achieves SOTA results across an array of ten datasets by simply aligning our framework with Task-Specific Training targets. SnAg stands out for its lightweight design, efficiency, and adaptability, incorporating components like Entity-Level Modality Interaction that can be seamlessly upgraded with advanced technologies. A key aspect of our method is the Gauss Modality Noise Masking module, whose design sharply contrasts with previous MMKG-related efforts that primarily focus on designing methods to refuse and combat noise in MMKGs. In contrast, as shown in Figure 1, our SnAg accepts and deliberately incorporates noise, adapting to the noisy real-world scenarios. This strategy can significantly boost performance across various MKGC and MMEA approaches. Importantly, as the first MMKG effort to concurrently support both MKGC and MMEA tasks, this work demonstrates the adaptability of our strategy, highlighting its potential to interface with more training tasks in the future and paving the way for further research in MMKG Pre-training and Multi-modal Knowledge Injection. Drawing on the categorization proposed in [69], we distinguish between two types of MMKGs: A-MMKG and N-MMKG. In A-MMKGs, images are attached to entities as attributes, while in N-MMKGs, images are treated as standalone entities interconnected with others. A-MMKGs are more prevalent in current research and applications within the semantic web community due to their accessibility and similarity to traditional KGs [12]. Therefore, this paper will focus exclusively on A-MMKG, unless stated otherwise. Definition 1. Multi-modal Knowledge Graph. A KG is defined as $\mathcal{G} = \{\mathcal{E}, \mathcal{R}, \mathcal{A}, \mathcal{T}, \mathcal{V}\}$, where $\mathcal{T} = \{\mathcal{T}^A, \mathcal{T}^R\}$ with $\mathcal{T}^R = \mathcal{E} \times \mathcal{R} \times \mathcal{E}$ and $\mathcal{T}^A = \mathcal{E} \times \mathcal{A} \times \mathcal{V}$. An MMKG utilizes multi-modal data (e.g., images) as specific attribute values for entities or concepts, with $\mathcal{T}^A = \mathcal{E} \times \mathcal{A} \times (\mathcal{V}_{KG} \cup \mathcal{V}_{MM})$, where $\mathcal{V}_{KG}$ and $\mathcal{V}_{MM}$ are values of KG and multi-modal data, respectively. For instance, in an MMKG, an attribute triple $(e, a, v)$ in $\mathcal{T}^A$ might associate an image as $v$ to an entity $e$ via an attribute $a$, typically denoted as hasImage. Definition 2. MMKG Completion. The objective of MKGC is to augment the set of relational triples $\mathcal{T}^R$ within MMKGs by identifying and adding missing relational triples among existing entities and relations, potentially utilizing attribute triples $\mathcal{T}^A$. Specifically, our focus is on Entity Prediction, which involves determining the missing head or tail entities in queries of the form $(head, r, ?)$ or $(?, r, tail)$. 
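To make Definitions 1 and 2 concrete, the minimal sketch below models an A-MMKG (images attached to entities as attribute values) and an entity-prediction query in plain Python. All class, attribute, and entity names here are illustrative assumptions, not part of the paper or any released code.

```python
from dataclasses import dataclass, field

@dataclass
class MMKG:
    rel_triples: set = field(default_factory=set)    # T^R: (head, relation, tail)
    attr_triples: set = field(default_factory=set)   # T^A: (entity, attribute, value)

    def tail_query(self, head, rel):
        """Answers for an entity-prediction query of the form (head, rel, ?)."""
        return {t for (h, r, t) in self.rel_triples if h == head and r == rel}

kg = MMKG()
kg.rel_triples.add(("Q76", "spouse", "Q13133"))          # hypothetical Wikidata-style IDs
kg.attr_triples.add(("Q76", "hasImage", "img/Q76.jpg"))  # image as attribute value (A-MMKG)
print(kg.tail_query("Q76", "spouse"))                    # {'Q13133'}
```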
Definition 3. Multi-modal Entity Alignment. Given two aligned MMKGs $\mathcal{G}_1$ and $\mathcal{G}_2$, the objective of MMEA is to identify entity pairs $(e^1_i, e^2_i)$ from $\mathcal{E}_1$ and $\mathcal{E}_2$, respectively, that correspond to the same real-world entity $e_i$. This process utilizes a set of pre-aligned entity pairs, divided into a training set (seed alignments $\mathcal{S}$) and a testing set $\mathcal{S}_{te}$, following a pre-defined seed alignment ratio $R_{sa} = |\mathcal{S}| / |\mathcal{S} \cup \mathcal{S}_{te}|$. The modalities associated with an entity are denoted by $\mathcal{M} = \{g, r, a, v, s\}$, signifying graph structure, relation, attribute, vision, and surface (i.e., entity names) modalities, respectively. 3.2 Multi-Modal Knowledge Embedding 3.2.1 Graph Structure Embedding. Let $x^g_i \in \mathbb{R}^d$ denote the graph embedding of entity $e_i$, which is randomly initialized and learnable, with $d$ representing the predetermined hidden dimension. In MKGC, we follow prior work [64] and set $h^g_i = FC_g(W_g, x^g_i)$, where $FC_g$ is a KG-specific fully connected layer applied to $x^g_i$ with weights $W_g$. For MMEA, we follow [9, 10] and utilize the Graph Attention Network (GAT) [50], configured with two attention heads and two layers, to capture the structural information of $\mathcal{G}$. This is facilitated by a diagonal weight matrix [60] $W_g \in \mathbb{R}^{d \times d}$ for linear transformation. The structure embedding is thus defined as $h^g_i = GAT(W_g, M_g; x^g_i)$, where $M_g$ refers to the graph\u2019s adjacency matrix. 3.2.2 Relation and Attribute Embedding. Our study for MKGC, consistent with domain practices [8, 27, 53, 56, 67], focuses exclusively on relation triples. These are represented by learnable embeddings $x^r_j \in \mathbb{R}^{d/2}$, where $j$ uniquely identifies each relation $r_j$, distinguishing it from entity indices. We exclude attribute triples to maintain consistency with methodological practices in the field. The choice of dimensionality $d/2$ is informed by our use of the RotatE model [45] as the scoring function for assessing triple plausibility. RotatE models relations as rotations in a complex space, requiring the relation embedding\u2019s dimension to be half that of the entity embedding to account for the real and imaginary components of complex numbers. For MMEA, following Yang et al. [61], we use bag-of-words features for relation ($x^r$) and attribute ($x^a$) representations of entities (detailed in \u00a7 4.1.3). Separate FC layers, parameterized by $W_m \in \mathbb{R}^{d_m \times d}$, are employed for embedding-space harmonization: $h^m_i = FC_m(W_m, x^m_i)$, where $m \in \{r, a\}$ and $x^m_i \in \mathbb{R}^{d_m}$ represents the input feature of entity $e_i$ for modality $m$. 
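A minimal sketch of the per-modality harmonization $h^m_i = FC_m(W_m, x^m_i)$ just described, using one linear layer per modality. The 1000-dimensional bag-of-words inputs follow \u00a7 4.1.3; the batch size is an illustrative assumption.

```python
import torch
import torch.nn as nn

d = 256                                  # shared hidden dimension
feat_dims = {"r": 1000, "a": 1000}       # BoW relation / attribute feature sizes (see 4.1.3)
fc = nn.ModuleDict({m: nn.Linear(dim, d) for m, dim in feat_dims.items()})

x = {m: torch.rand(4, dim) for m, dim in feat_dims.items()}  # features of 4 entities
h = {m: fc[m](x[m]) for m in feat_dims}                      # h^r, h^a, each of shape (4, 256)
print({m: tuple(v.shape) for m, v in h.items()})
```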
3.2.3 Visual and Surface Embedding. For visual embeddings, a pre-trained (and thereafter frozen) visual encoder, denoted as $Enc_v$, is used to extract visual features $x^v_i$ for each entity $e_i$ with associated image data. In cases where entities lack corresponding image data, we synthesize random image features adhering to a normal distribution, parameterized by the mean and standard deviation observed across other entities\u2019 images [9, 10, 64]. Regarding surface embeddings, we leverage Sentence-BERT [40], a pre-trained textual encoder, to derive textual features from each entity\u2019s description. The [CLS] token serves to aggregate sentence-level textual features $x^s_i$. Consistent with the approach applied to other modalities, we utilize $FC_m$ parameterized by $W_m \in \mathbb{R}^{d_m \times d}$ to integrate the extracted features $x^v_i$ and $x^s_i$ into the embedding space, yielding the embeddings $h^v_i$ and $h^s_i$. 3.3 Gauss Modality Noise Masking Recent research in MMKG [10, 19, 64] suggests that models can tolerate certain noise levels without a noticeable decline in the expressive capability of multi-modal entity representations, a finding echoed across various machine learning domains [4, 22, 43]. Additionally, Cuconasu et al. [13] observe that in the Retrieval-Augmented Generation (RAG) process of LLMs, filling up the retrieved context with irrelevant documents consistently improves model performance in realistic scenarios. Similarly, Chen et al. [11] demonstrate that cross-modal masking and reconstruction can improve a model\u2019s cross-modal alignment capabilities. Inspired by this evidence of model noise resilience, we hypothesize that introducing noise during MMKG modality fusion training could enhance both modal feature robustness and real-world performance. 
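Before introducing the mechanism itself, a small sketch of the missing-image fallback from \u00a7 3.2.3: entities without images receive random features drawn from the per-dimension mean and standard deviation of the observed image features, the same kind of distribution-matched noise the mechanism below builds on. The feature dimension and the stand-in data are assumptions for illustration.

```python
import torch

d_v = 768                                # visual feature dim; depends on the chosen encoder
observed = torch.randn(100, d_v)         # stand-in for features of entities that do have images
mu, sigma = observed.mean(dim=0), observed.std(dim=0)

def visual_feature(feat=None):
    """Return the encoder output if an image exists, else a statistically coherent random feature."""
    if feat is not None:
        return feat
    return torch.randn(d_v) * sigma + mu  # per-dimension N(mu, sigma^2)

x_v = visual_feature()                   # entity without an image
print(x_v.shape)                         # torch.Size([768])
```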
In light of these observations, we propose a new mechanism termed Gauss Modality Noise Masking (GMNM), aimed at enhancing modality feature representations through controlled noise injection at the training stage for MMKG. This stochastic mechanism introduces a probabilistic transformation to each modality feature $x^m_i$ at the beginning of every training epoch, described as follows: $\hat{x}^m_i = x^m_i$ if $p > \rho$, and $\hat{x}^m_i = (1 - \epsilon)\, x^m_i + \epsilon\, \tilde{x}^m_i$ otherwise (1), where $p \sim U(0, 1)$ denotes a uniformly distributed random variable that determines whether noise is applied, with $\rho$ being the threshold probability for noise application to each $x^m_i$. Here, $\epsilon$ signifies the noise (mask) ratio. We define the generation of the noise vector $\tilde{x}^m_i$ as $\tilde{x}^m_i = \varphi_m \odot z + \mu_m$, $z \sim \mathcal{N}(0, I)$ (2), where $\varphi_m$ and $\mu_m$ represent the standard deviation and mean of the modality-specific non-noisy data for $m$, respectively, and $z$ denotes a sample drawn from a Gaussian distribution $\mathcal{N}(0, I)$ with zero mean vector and identity covariance matrix $I$, ensuring that the introduced noise is statistically coherent with the intrinsic data variability of the respective modality. Additionally, the intensity of noise ($\epsilon$) can be dynamically adjusted to simulate real-world data imperfections. This adaptive noise injection strategy is designed to foster a model resilient to data variability, capable of capturing and representing complex multi-modal interactions with enhanced fidelity in practical applications. Note that after the transformation from $x^m$ to $\hat{x}^m$, these modified features are still subject to further processing through $FC_m$ as detailed in \u00a7 3.2. This critical step secures the generation of the ultimate modal representation, symbolized as $\hat{h}^m$. For clarity in subsequent sections, we will treat $h^m$ and $h^m_i$ as representing their final states, $\hat{h}^m$ and $\hat{h}^m_i$, unless specified otherwise. 
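A sketch of GMNM as defined by Equations (1) and (2), applied once per training epoch to one modality. The defaults rho = 0.2 and eps = 0.7 follow the settings reported later (\u00a7 4.1.3); tensor shapes are illustrative.

```python
import torch

def gmnm(x: torch.Tensor, rho: float = 0.2, eps: float = 0.7) -> torch.Tensor:
    """Gauss Modality Noise Masking for one modality: x has shape (num_entities, d_m)."""
    mu, sigma = x.mean(dim=0), x.std(dim=0)      # modality-specific statistics
    z = torch.randn_like(x)                      # z ~ N(0, I)
    x_tilde = sigma * z + mu                     # noise vector, Eq. (2)
    p = torch.rand(x.size(0), 1)                 # one p ~ U(0, 1) per entity
    masked = (1.0 - eps) * x + eps * x_tilde     # noisy branch of Eq. (1)
    return torch.where(p > rho, x, masked)       # keep the clean feature when p > rho

x_m = torch.randn(1000, 256)
x_hat = gmnm(x_m)   # roughly 20% of entities receive softly masked features each epoch
```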
3.4 Entity-Level Modality Interaction This phase is designed for instance-level modality weighting and fusion, enabling dynamic adjustment of training weights based on each modality\u2019s signal strength and noise-induced uncertainty. We utilize a Transformer architecture [49] for this purpose, noted for its efficacy in modality fusion and its ability to derive confidence-based weighting for modalities, which improves interpretability and adaptability. The Transformer\u2019s self-attention mechanism is crucial for ensuring the model evaluates and prioritizes accurate and relevant modal inputs. Specifically, we adapt the vanilla Transformer by integrating three key components: Multi-Head Cross-Modal Attention (MHCA), Fully Connected Feed-Forward Networks (FFN), and Instance-level Confidence (ILC). (i) MHCA operates its attention function across $N_h$ parallel heads. Each head, indexed by $i$, employs shared matrices $W^{(i)}_q, W^{(i)}_k, W^{(i)}_v \in \mathbb{R}^{d \times d_h}$ (where $d_h = d / N_h$) to transform the input $h_m$ into queries $Q^{(i)}_m$, keys $K^{(i)}_m$, and values $V^{(i)}_m$: $Q^{(i)}_m, K^{(i)}_m, V^{(i)}_m = h_m W^{(i)}_q,\, h_m W^{(i)}_k,\, h_m W^{(i)}_v$ (3). The output for modality $m$\u2019s feature is then generated by combining the outputs from all heads and applying a linear transformation: $MHCA(h_m) = \big( \Vert_{i=1}^{N_h} head^i_m \big) W_0$ (4), with $head^i_m = \sum_{j \in \mathcal{M}} \beta^{(i)}_{mj} V^{(i)}_j$ (5), where $W_0 \in \mathbb{R}^{d \times d}$. The attention weight $\beta_{mj}$ calculates the relevance between modalities $m$ and $j$: $\beta_{mj} = \exp(Q^\top_m K_j / \sqrt{d_h}) / \sum_{i \in \mathcal{M}} \exp(Q^\top_m K_i / \sqrt{d_h})$ (6). Besides, layer normalization (LN) and residual connections (RC) are incorporated to stabilize training: $\bar{h}_m = LayerNorm(MHCA(h_m) + h_m)$ (7). (ii) FFN: This network, consisting of two linear transformations and a ReLU activation, further processes the MHCA output: $FFN(\bar{h}_m) = ReLU(\bar{h}_m W_1 + b_1) W_2 + b_2$ (8), $\bar{h}_m \leftarrow LayerNorm(FFN(\bar{h}_m) + \bar{h}_m)$ (9), where $W_1 \in \mathbb{R}^{d \times d_{in}}$ and $W_2 \in \mathbb{R}^{d_{in} \times d}$. (iii) ILC: We calculate the confidence $\tilde{w}_m$ for each modality via $\tilde{w}_m = \exp\big( \sum_{j \in \mathcal{M}} \sum_{i=0}^{N_h} \beta^{(i)}_{mj} / \sqrt{|\mathcal{M}| \times N_h} \big) / \sum_{k \in \mathcal{M}} \exp\big( \sum_{j \in \mathcal{M}} \sum_{i=0}^{N_h} \beta^{(i)}_{kj} / \sqrt{|\mathcal{M}| \times N_h} \big)$ (10), which captures crucial inter-modal interactions and tailors the model\u2019s confidence for each entity\u2019s modality. 
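A compact sketch of the interaction layer described by Equations (3) through (10): multi-head attention over the $|\mathcal{M}|$ modality tokens of each entity, an FFN, and the instance-level confidence. The module layout and hyper-parameters are assumptions for illustration, not the authors' exact implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ModalityInteraction(nn.Module):
    def __init__(self, d: int = 256, n_heads: int = 2, d_in: int = 1024):
        super().__init__()
        self.d, self.n_heads, self.d_h = d, n_heads, d // n_heads
        self.wq = nn.Linear(d, d, bias=False)   # stacks the per-head W_q^(i)
        self.wk = nn.Linear(d, d, bias=False)
        self.wv = nn.Linear(d, d, bias=False)
        self.w0 = nn.Linear(d, d, bias=False)
        self.ffn = nn.Sequential(nn.Linear(d, d_in), nn.ReLU(), nn.Linear(d_in, d))
        self.ln1, self.ln2 = nn.LayerNorm(d), nn.LayerNorm(d)

    def forward(self, h):                        # h: (batch, |M|, d) modality tokens
        b, M, _ = h.shape
        q = self.wq(h).view(b, M, self.n_heads, self.d_h).transpose(1, 2)
        k = self.wk(h).view(b, M, self.n_heads, self.d_h).transpose(1, 2)
        v = self.wv(h).view(b, M, self.n_heads, self.d_h).transpose(1, 2)
        beta = F.softmax(q @ k.transpose(-2, -1) / self.d_h ** 0.5, dim=-1)      # Eq. (6)
        heads = (beta @ v).transpose(1, 2).reshape(b, M, self.d)                 # Eqs. (4)-(5)
        h_bar = self.ln1(self.w0(heads) + h)                                     # Eq. (7)
        h_bar = self.ln2(self.ffn(h_bar) + h_bar)                                # Eqs. (8)-(9)
        w = F.softmax(beta.sum(dim=(1, 3)) / (M * self.n_heads) ** 0.5, dim=-1)  # Eq. (10)
        return h_bar, w                          # fused tokens and confidences w-tilde

layer = ModalityInteraction()
h_bar, w_tilde = layer(torch.randn(8, 5, 256))   # 5 modalities: g, r, a, v, s
```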
3.5 Task-Specific Training Building upon the foundational processes detailed in previous sections, we have derived multi-modal KG representations denoted as $h^m$ (discussed in \u00a7 3.3) and $\bar{h}_m$ (elaborated in \u00a7 3.4), along with confidence scores $\tilde{w}_m$ for each modality $m$ within the MMKG (introduced in \u00a7 3.4). 3.5.1 MMKG Completion. Within MKGC, we consider two methods for entity representation as candidates: (i) $\bar{h}_g$: Reflecting insights from previous research [9, 64], the graph structure embedding emerges as crucial for model performance. After being processed by the Transformer layer, $\bar{h}_g$ not only maintains its structural essence but also blends in other modal insights (refer to Equations (4) and (5)), offering a comprehensive multi-modal entity representation. (ii) $\bar{h}_{avg}$: For an equitable multi-modal representation, we average all modality-specific representations via $\bar{h}_{avg} = \frac{1}{|\mathcal{M}|} \sum_{m \in \mathcal{M}} \bar{h}_m$, where $\mathcal{M}$ is the set of all modalities. This averaging ensures equal modality contribution, leveraging the rich, diverse information within MMKGs. For consistency in the following descriptions, we will refer to both using the notation $\bar{h}$. We apply the RotatE model [45] as our score function to assess the plausibility of triples. It is defined as $F(e_h, r, e_t) = \| \bar{h}_{head} \circ x_r - \bar{h}_{tail} \|$ (11), where $\circ$ represents the rotation operation in complex space, which transforms the head entity\u2019s embedding by the relation to approximate the tail entity\u2019s embedding. To prioritize positive triples, we optimize the embeddings using a sigmoid-based loss function [45]: $\mathcal{L}_{kgc} = \frac{1}{|\mathcal{T}^R|} \sum_{(e_h, r, e_t) \in \mathcal{T}^R} \big( -\log \sigma(\lambda - F(e_h, r, e_t)) - \sum_{i=1}^{K} \upsilon_i \log \sigma(F(e'_h, r', e'_t) - \lambda) \big)$ (12), where $\sigma$ denotes the sigmoid function, $\lambda$ is the margin, $K$ is the number of negative samples per positive triple, and $\upsilon_i$ represents the self-adversarial weight for each negatively sampled triple $(e'_h, r', e'_t)$. Concretely, $\upsilon_i$ is calculated as $\upsilon_i = \exp(\tau_{kgc} F(e'_{h_i}, r'_i, e'_{t_i})) / \sum_{j=1}^{K} \exp(\tau_{kgc} F(e'_{h_j}, r'_j, e'_{t_j}))$ (13), with $\tau_{kgc}$ being the temperature parameter. 
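A sketch of the RotatE-style scoring of Eq. (11) and the self-adversarial weighting of Eqs. (12)-(13), with the weights computed exactly as written in Eq. (13). The margin and temperature follow the settings reported in \u00a7 4.1.3; the embeddings are random stand-ins.

```python
import torch
import torch.nn.functional as F

def rotate_score(h_head, rel_phase, h_tail):
    """F(e_h, r, e_t) = || h_head o r - h_tail ||, rotation in complex space (Eq. (11))."""
    re_h, im_h = h_head.chunk(2, dim=-1)
    re_t, im_t = h_tail.chunk(2, dim=-1)
    re_r, im_r = torch.cos(rel_phase), torch.sin(rel_phase)   # unit-modulus relation
    re = re_h * re_r - im_h * im_r - re_t
    im = re_h * im_r + im_h * re_r - im_t
    return torch.stack([re, im]).norm(dim=0).sum(dim=-1)      # sum of complex moduli

d, K, lam, tau_kgc = 256, 32, 12.0, 2.0          # settings reported in section 4.1.3
pos = rotate_score(torch.randn(1, d), torch.randn(1, d // 2), torch.randn(1, d))
neg = rotate_score(torch.randn(K, d), torch.randn(K, d // 2), torch.randn(K, d))
ups = torch.softmax(tau_kgc * neg.detach(), dim=0)            # Eq. (13), as written
loss = -F.logsigmoid(lam - pos).mean() - (ups * F.logsigmoid(neg - lam)).sum()
```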
Our primary objective is to minimize $\mathcal{L}_{kgc}$, thereby refining the embeddings to accurately capture the MMKG\u2019s underlying relationships. 3.5.2 Multi-modal Entity Alignment. In MMEA, following [9, 10], we adopt the Global Modality Integration (GMI) derived multi-modal features as the representations for entities. GMI emphasizes global alignment by concatenating and aligning multi-modal embeddings with a learnable global weight, enabling adaptive learning of each modality\u2019s quality across two MMKGs. The GMI joint embedding $h^{GMI}_i$ for entity $e_i$ is defined as $h^{GMI}_i = \bigoplus_{m \in \mathcal{M}} [w_m h^m_i]$ (14), where $\bigoplus$ signifies vector concatenation and $w_m$ is the global weight for modality $m$, distinct from the entity-level dynamic modality weights $\tilde{w}_m$ in Equation (10). The distinction between MMEA and MKGC lies in their focus: MMEA emphasizes aligning modal features between entities and distinguishing non-aligned entities, prioritizing original feature retention. In contrast, MKGC emphasizes the inferential benefits of modality fusion across different multi-modal entities. As demonstrated by Chen et al. [10], the modality feature is often smoothed by the Transformer layer in MMEA, potentially reducing entity distinction. GMI addresses this by preserving essential information, aiding alignment stability. Moreover, as a unified MMKG representation framework, the modal features extracted earlier are optimized through MMEA-specific training objectives [33]. Specifically, for each aligned entity pair $(e^1_i, e^2_i)$ in the training set (seed alignments $\mathcal{S}$), we define a negative entity set $\mathcal{N}^{ng}_i = \{e^1_j \mid \forall e^1_j \in \mathcal{E}_1, j \neq i\} \cup \{e^2_j \mid \forall e^2_j \in \mathcal{E}_2, j \neq i\}$ and utilize in-batch ($\mathcal{B}$) negative sampling [7] to enhance efficiency. The alignment probability distribution is $p_m(e^1_i, e^2_i) = \gamma_m(e^1_i, e^2_i) / \big( \gamma_m(e^1_i, e^2_i) + \sum_{e_j \in \mathcal{N}^{ng}_i} \gamma_m(e^1_i, e_j) \big)$ (15), where $\gamma_m(e_i, e_j) = \exp(h^{m\top}_i h^m_j / \tau_{ea})$ and $\tau_{ea}$ is the temperature hyper-parameter. 
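A sketch of the in-batch version of Eq. (15) for one modality, together with a symmetric loss in the spirit of the bi-directional objective introduced next. The temperature value and the L2 normalization are assumptions added for the demo; Eq. (16) itself averages the two probabilities inside the logarithm rather than averaging two losses.

```python
import torch
import torch.nn.functional as F

B, d, tau_ea = 64, 300, 0.1                     # tau_ea here is an assumed value
h1 = F.normalize(torch.randn(B, d), dim=-1)     # h^m for KG1 entities in the batch
h2 = F.normalize(torch.randn(B, d), dim=-1)     # h^m for their KG2 counterparts

gamma = (h1 @ h2.t()) / tau_ea                  # logits of gamma_m(e^1_i, e_j)
target = torch.arange(B)                        # i-th row matches i-th column
loss_12 = F.cross_entropy(gamma, target)        # -log p_m(e^1_i, e^2_i), Eq. (15)
loss_21 = F.cross_entropy(gamma.t(), target)    # reverse alignment direction
loss_m = 0.5 * (loss_12 + loss_21)              # symmetric variant of Eq. (16)
```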
We establish a bi-directional alignment objective to account for both MMEA directions: $\mathcal{L}_m = -\mathbb{E}_{i \in \mathcal{B}} \log [\, p_m(e^1_i, e^2_i) + p_m(e^2_i, e^1_i) \,] / 2$ (16). (i) The training objective is denoted as $\mathcal{L}_{GMI}$ when using GMI joint embeddings, i.e., $\gamma_{GMI}(e_i, e_j)$ is set to $\exp(h^{GMI\top}_i h^{GMI}_j / \tau_{ea})$. To integrate dynamic confidences into the training process and enhance multi-modal entity alignment, we adopt two specialized training objectives from UMAEA [10]: (ii) Explicit Confidence-augmented Intra-modal Alignment (ECIA): This objective modifies Equation (16) to incorporate explicit confidence levels within the same modality, defined as $\mathcal{L}_{ECIA} = \sum_{m \in \mathcal{M}} \widetilde{\mathcal{L}}_m$, where $\widetilde{\mathcal{L}}_m = -\mathbb{E}_{i \in \mathcal{B}} \log [\, \phi_m(e^1_i, e^2_i) * (p_m(e^1_i, e^2_i) + p_m(e^2_i, e^1_i)) \,] / 2$ (17). Here, $\phi_m(e^1_i, e^2_i)$ represents the minimum confidence value between entities $e^1_i$ and $e^2_i$ in modality $m$, i.e., $\phi_m(e_i, e_j) = \mathrm{Min}(\tilde{w}^m_i, \tilde{w}^m_j)$, addressing the issue of aligning high-quality features with potentially lower-quality ones or noise. (iii) Implicit Inter-modal Refinement (IIR) refines entity-level modality alignment by leveraging the Transformer layer outputs $\bar{h}_m$, aiming to align output hidden states directly and adjust attention scores adaptively. The corresponding loss function is $\mathcal{L}_{IIR} = \sum_{m \in \mathcal{M}} \bar{\mathcal{L}}_m$, where $\bar{\mathcal{L}}_m$ is also a variant of $\mathcal{L}_m$ (Equation (16)) with $\bar{\gamma}_m(e_i, e_j) = \exp(\bar{h}^{m\top}_i \bar{h}^m_j / \tau_{ea})$. The comprehensive training objective is formulated as $\mathcal{L}_{ea} = \mathcal{L}_{GMI} + \mathcal{L}_{ECIA} + \mathcal{L}_{IIR}$. Note that our SnAg framework can not only function as a standalone model but also enhance other existing methods, providing stable performance improvements in MMEA, as demonstrated in Table 4 from \u00a7 4.2.2. 4 EXPERIMENTS 4.1 Experiment Setup In MMKG datasets like DBP15KJA-EN, where 67.58% of entities have images, the image association ratio ($R_{img}$) varies due to the data collection process [12]. 4.1.1 Datasets. MKGC: (i) DB15K [35] is constructed from DBPedia [24], enriched with images obtained via a search engine. (ii) MKG-W and MKG-Y [59] are subsets of Wikidata [51] and YAGO [44], respectively. 
Text descriptions are aligned with the corresponding entities using the additional sameAs links provided by the OpenEA benchmarks [48]. Detailed statistics are available in the Appendix. Table 1: MKGC performance on the DB15K [35], MKG-W and MKG-Y [59] datasets (per dataset, the columns are MRR, H@1, H@3, H@10). The best results are highlighted in bold, and the third-best results are underlined for each column. IKRL (IJCAI \u201917) [58]: .268 .141 .349 .491 | .324 .261 .348 .441 | .332 .304 .343 .383; TBKGC (NAACL \u201918) [41]: .284 .156 .370 .499 | .315 .253 .340 .432 | .340 .305 .353 .401; TransAE (IJCNN \u201919) [55]: .281 .213 .312 .412 | .300 .212 .349 .447 | .281 .253 .291 .330; RSME (ACM MM \u201921) [53]: .298 .242 .321 .403 | .292 .234 .320 .404 | .344 .318 .361 .391; VBKGC (KDD \u201922) [66]: .306 .198 .372 .494 | .306 .249 .330 .409 | .370 .338 .388 .423; OTKGE (NeurIPS \u201922) [3]: .239 .185 .259 .342 | .344 .289 .363 .449 | .355 .320 .372 .414; IMF (WWW \u201923) [27]: .323 .242 .360 .482 | .345 .288 .366 .454 | .358 .330 .371 .406; QEB (ACM MM \u201923) [54]: .282 .148 .367 .516 | .324 .255 .351 .453 | .344 .295 .370 .423; VISTA (EMNLP \u201923) [23]: .304 .225 .336 .459 | .329 .261 .354 .456 | .305 .249 .324 .415; MANS (IJCNN \u201923) [62]: .288 .169 .366 .493 | .309 .249 .336 .418 | .290 .253 .314 .345; MMRNS (ACM MM \u201922) [59]: .297 .179 .367 .510 | .341 .274 .375 .468 | .359 .306 .391 .455; AdaMF (COLING \u201924) [64]: .325 .213 .397 .517 | .343 .272 .379 .472 | .381 .335 .404 .455; SnAg (Ours): .363 .274 .411 .530 | .373 .302 .405 .503 | .395 .354 .411 .471; w/o GMNM: .357 .269 .406 .523 | .365 .296 .398 .490 | .387 .345 .407 .457. Table 2: Statistics for the MKGC datasets, where the symbol definitions align with Definition 1 (columns: $|\mathcal{E}|$, $|\mathcal{R}|$, $|\mathcal{T}^R|$ Train / Valid / Test). DB15K: 12842, 279, 79222 / 9902 / 9904; MKG-W: 15000, 169, 34196 / 4276 / 4274; MKG-Y: 15000, 28, 21310 / 2665 / 2663. Table 3: Statistics for the MMEA datasets (columns per KG: $|\mathcal{E}|$, $|\mathcal{R}|$, $|\mathcal{A}|$, $|\mathcal{T}^R|$, $|\mathcal{T}^A|$, $|\mathcal{V}_{MM}|$). Each dataset contains 15,000 pre-aligned entity pairs ($|\mathcal{S}| = 15000$). Note that not every entity is paired with associated images or equivalent counterparts in the other KG. Additional abbreviations: DB (DBpedia), WD (Wikidata), ZH (Chinese), JA (Japanese), FR (French), EN (English), DE (German). DBP15KZH-EN, ZH: 19,388, 1,701, 8,111, 70,414, 248,035, 15,912; EN: 19,572, 1,323, 7,173, 95,142, 343,218, 14,125. DBP15KJA-EN, JA: 19,814, 1,299, 5,882, 77,214, 248,991, 12,739; EN: 19,780, 1,153, 6,066, 93,484, 320,616, 13,741. DBP15KFR-EN, FR: 19,661, 903, 4,547, 105,998, 273,825, 14,174; EN: 19,993, 1,208, 6,422, 115,722, 351,094, 13,858. OpenEAEN-FR, EN: 15,000, 267, 308, 47,334, 73,121, 15,000; FR: 15,000, 210, 404, 40,864, 67,167, 15,000. OpenEAEN-DE, EN: 15,000, 215, 286, 47,676, 83,755, 15,000; DE: 15,000, 131, 194, 50,419, 156,150, 15,000. OpenEAD-W-V1, DB: 15,000, 248, 342, 38,265, 68,258, 15,000; WD: 15,000, 169, 649, 42,746, 138,246, 15,000. OpenEAD-W-V2, DB: 15,000, 167, 175, 73,983, 66,813, 15,000; WD: 15,000, 121, 457, 83,365, 175,686, 15,000. MMEA: (i) Multi-modal DBP15K [34] extends DBP15K [46] by adding images from DBpedia and Wikipedia [14], covering three bilingual settings (DBP15KZH-EN, DBP15KJA-EN, DBP15KFR-EN) and featuring around 400K triples and 15K aligned entity pairs per setting. 
(ii) MMEA-UMVM [10] includes two bilingual datasets (EN-FR-15K, EN-DE-15K) and two monolingual datasets (D-W-15K-V1, D-W-15K-V2) derived from the Multi-OpenEA datasets ($R_{sa} = 0.2$) [28], plus all three bilingual datasets from DBP15K [34]. It offers variability in visual information by randomly removing images, resulting in 97 distinct dataset splits with different $R_{img}$. For this study, we focus on representative $R_{img}$ values of $\{0.4, 0.6, maximum\}$ to validate our experiments. When $R_{img} = maximum$, the dataset corresponds to the original Standard dataset (as shown in Table 4). Note that for the Multi-modal DBP15K dataset, the \u201cmaximum\u201d value is not 1.0. 4.1.2 Iterative Training for MMEA. We employ a probation technique for iterative training, which acts as a buffering mechanism, temporarily storing a cache of mutual nearest entity pairs across KGs from the testing set [33]. Specifically, every $K_e$ (= 5) epochs, models identify mutual nearest neighbor entity pairs from different KGs and add them to a candidate list $\mathcal{N}_{cd}$. An entity pair in $\mathcal{N}_{cd}$ is then added to the training set if it remains a mutual nearest neighbor for $K_s$ (= 10) consecutive iterations. This iterative expansion of the training dataset serves as data augmentation in the EA domain, enabling further evaluation of the model\u2019s robustness across various scenarios. 4.1.3 Implementation Details. MKGC: (i) Following Zhang et al. [64], the vision encoders $Enc_v$ are configured with VGG [42] for DB15K, and BEiT [1] for MKG-W and MKG-Y. For entities associated with multiple images, the feature vectors of these images are averaged to obtain a singular representation. (ii) The head number $N_h$ in MHCA is set to 2. For entity representation on DB15K, the graph structure embedding $\bar{h}_g$ is used, while for MKG-W and MKG-Y, mean pooling across modality-specific representations ($\bar{h}_{avg}$) is employed. This distinction is made due to DB15K\u2019s denser KG and greater absence of modality information compared to MKG-W and MKG-Y. (iii) We simply adopted a set of candidate parameters from AdaMF [64]. Specifically, the number of negative samples $K$ per positive triple is 32, the hidden dimension $d$ is 256, and the training Table 4: Non-iterative MMEA results across three degrees of visual modality missing. Results are underlined when the baseline, equipped with the Gauss Modality Noise Masking (GMNM) module, surpasses its own original performance, and highlighted in bold when achieving SOTA performance. 
Models \ud835\udc79\ud835\udc8a\ud835\udc8e\ud835\udc88=0.4 \ud835\udc79\ud835\udc8a\ud835\udc8e\ud835\udc88=0.6 Standard H@1 H@10 MRR H@1 H@10 MRR H@1 H@10 MRR DBP15KZH-EN EVA [34] .623 .876 .715 .625 .877 .717 .683 .906 .762 w/ GMNM .629 .883 .724 .625 .881 .717 .680 .907 .760 MCLEA [33] .627 .880 .715 .670 .899 .751 .732 .926 .801 w/ GMNM .652 .895 .740 .699 .912 .775 .754 .933 .819 MEAformer [9] .678 .924 .766 .720 .938 .798 .776 .953 .840 w/ GMNM .680 .925 .767 .719 .939 .798 .777 .955 .841 SnAg (Ours) .735 .945 .812 .757 .953 .830 .798 .963 .858 DBP15KJA-EN EVA [34] .546 .829 .644 .552 .829 .647 .587 .851 .678 w/ GMNM .618 .876 .709 .625 .874 .714 .664 .902 .748 MCLEA [33] .568 .848 .665 .639 .882 .723 .678 .897 .755 w/ GMNM .659 .901 .745 .723 .924 .795 .752 .935 .818 MEAformer [9] .677 .933 .768 .736 .953 .815 .767 .959 .837 w/ GMNM .678 .937 .770 .738 .953 .816 .767 .958 .837 SnAg (Ours) .735 .952 .814 .771 .961 .841 .795 .963 .857 DBP15KFR-EN EVA [34] .622 .895 .719 .634 .899 .728 .686 .926 .771 w/ GMNM .628 .897 .725 .634 .900 .728 .686 .929 .772 MCLEA [33] .622 .892 .722 .694 .915 .774 .734 .926 .805 w/ GMNM .663 .916 .756 .726 .934 .802 .759 .942 .827 MEAformer [9] .676 .944 .774 .734 .958 .816 .776 .967 .846 w/ GMNM .678 .946 .776 .735 .965 .819 .779 .969 .849 SnAg (Ours) .757 .963 .835 .790 .970 .858 .814 .974 .875 OpenEAEN-FR EVA [34] .532 .830 .635 .553 .835 .652 .784 .931 .836 w/ GMNM .537 .829 .638 .554 .833 .652 .787 .935 .839 MCLEA [33] .535 .842 .641 .607 .858 .696 .821 .945 .866 w/ GMNM .554 .848 .658 .624 .873 .714 .830 .950 .874 MEAformer [9] .582 .891 .690 .645 .904 .737 .846 .862 .889 w/ GMNM .588 .895 .696 .647 .905 .738 .847 .963 .890 SnAg (Ours) .621 .905 .721 .667 .922 .757 .848 .964 .891 OpenEAEN-DE EVA [34] .718 .918 .789 .734 .921 .800 .922 .982 .945 w/ GMNM .728 .919 .794 .740 .921 .803 .923 .983 .946 MCLEA [33] .702 .910 .774 .748 .912 .805 .940 .988 .957 w/ GMNM .711 .912 .782 .762 .928 .821 .942 .990 .960 MEAformer [9] .749 .938 .816 .789 .951 .847 .955 .994 .971 w/ GMNM .753 .939 .817 .791 .952 .848 .957 .995 .971 SnAg (Ours) .776 .948 .837 .810 .958 .862 .958 .995 .972 OpenEAD-W-V1 EVA [34] .567 .796 .651 .592 .810 .671 .859 .945 .890 w/ GMNM .597 .826 .678 .611 .826 .688 .870 .953 .900 MCLEA [33] .586 .821 .672 .663 .854 .732 .882 .955 .909 w/ GMNM .604 .841 .689 .678 .869 .748 .889 .960 .915 MEAformer [9] .640 .877 .725 .706 .898 .776 .902 .969 .927 w/ GMNM .656 .884 .738 .718 .905 .786 .904 .971 .929 SnAg (Ours) .678 .897 .758 .728 .915 .796 .905 .971 .930 OpenEAD-W-V2 EVA [34] .774 .949 .838 .789 .953 .848 .889 .981 .922 w/ GMNM .787 .956 .848 .799 .958 .856 .892 .983 .924 MCLEA [33] .751 .941 .822 .801 .950 .856 .929 .984 .950 w/ GMNM .766 .956 .836 .811 .965 .868 .938 .990 .957 MEAformer [9] .807 .976 .869 .834 .980 .886 .939 .994 .960 w/ GMNM .833 .980 .886 .857 .983 .903 .942 .995 .962 SnAg (Ours) .852 .986 .901 .870 .988 .913 .946 .996 .965 batch size is 1024, the margin \ud835\udf06is 12, the temperature\ud835\udf0f\ud835\udc58\ud835\udc54\ud835\udc50is 2.0, and the learning rate is set to 1\ud835\udc52\u22124. No extensive parameter tuning was conducted; theoretically, SnAg could achieve better performance with parameter optimization. (iv) The probability \ud835\udf0cof applying noise in GMNM is set at 0.2, with a noise ratio \ud835\udf16of 0.7. MMEA: (i) Following Yang et al. 
[61], Bag-of-Words (BoW) is employed for encoding relations (\ud835\udc65\ud835\udc5f) and attributes (\ud835\udc65\ud835\udc4e) into fixed-length vectors (\ud835\udc51\ud835\udc5f= \ud835\udc51\ud835\udc4e= 1000). This process entails sorting relations and attributes by frequency, followed by truncation or padding to Table 5: Iterative MMEA results. Models \ud835\udc79\ud835\udc8a\ud835\udc8e\ud835\udc88=0.4 \ud835\udc79\ud835\udc8a\ud835\udc8e\ud835\udc88=0.6 Standard H@1 H@10 MRR H@1 H@10 MRR H@1 H@10 MRR DBP15KZH-EN EVA [34] .696 .902 .773 .699 .903 .775 .749 .914 .810 w/ GMNM .708 .906 .780 .705 .911 .778 .752 .919 .813 MCLEA [33] .719 .921 .796 .764 .941 .831 .818 .956 .871 w/ GMNM .741 .945 .818 .782 .954 .846 .830 .968 .882 MEAformer [9] .754 .953 .829 .788 .958 .853 .843 .966 .890 w/ GMNM .763 .947 .832 .799 .959 .860 .845 .970 .891 SnAg (Ours) .798 .957 .859 .821 .963 .876 .857 .972 .900 DBP15KJA-EN EVA [34] .646 .888 .733 .657 .892 .743 .695 .904 .770 w/ GMNM .696 .910 .773 .700 .912 .776 .745 .916 .807 MCLEA [33] .690 .922 .778 .756 .948 .828 .788 .955 .851 w/ GMNM .739 .937 .815 .796 .959 .858 .820 .969 .877 MEAformer [9] .759 .957 .833 .808 .969 .868 .831 .972 .882 w/ GMNM .769 .953 .838 .817 .967 .872 .842 .974 .890 SnAg (Ours) .808 .959 .864 .839 .975 .890 .861 .976 .904 DBP15KFR-EN EVA [34] .710 .931 .792 .716 .935 .797 .769 .946 .834 w/ GMNM .714 .929 .794 .720 .932 .798 .777 .950 .841 MCLEA [33] .731 .943 .814 .789 .958 .854 .814 .967 .873 w/ GMNM .759 .964 .840 .806 .974 .871 .837 .980 .893 MEAformer [9] .763 .963 .842 .811 .976 .874 .844 .980 .897 w/ GMNM .779 .968 .847 .817 .974 .876 .852 .981 .899 SnAg (Ours) .826 .976 .885 .852 .983 .904 .875 .987 .919 OpenEAEN-FR EVA [34] .605 .869 .700 .619 .870 .710 .848 .973 .896 w/ GMNM .606 .870 .701 .621 .874 .713 .856 .971 .898 MCLEA [33] .613 .889 .714 .702 .928 .785 .893 .983 .928 w/ GMNM .625 .902 .726 .707 .934 .790 .893 .983 .928 MEAformer [9] .660 .913 .751 .729 .947 .810 .895 .984 .930 w/ GMNM .666 .916 .755 .741 .943 .815 .905 .984 .937 SnAg (Ours) .692 .927 .778 .743 .945 .817 .907 .986 .939 OpenEAEN-DE EVA [34] .776 .935 .833 .784 .937 .839 .954 .984 .965 w/ GMNM .779 .936 .837 .789 .938 .843 .955 .984 .966 MCLEA [33] .766 .942 .829 .821 .956 .871 .969 .994 .979 w/ GMNM .779 .948 .840 .829 .959 .876 .971 .995 .980 MEAformer [9] .803 .950 .854 .835 .958 .878 .963 .994 .976 w/ GMNM .807 .949 .856 .841 .961 .882 .975 .995 .982 SnAg (Ours) .826 .962 .874 .859 .970 .899 .977 .998 .984 OpenEAD-W-V1 EVA [34] .647 .856 .727 .669 .860 .741 .916 .984 .943 w/ GMNM .663 .859 .735 .673 .862 .743 .927 .986 .950 MCLEA [33] .686 .896 .766 .770 .941 .836 .947 .991 .965 w/ GMNM .699 .907 .778 .776 .946 .840 .949 .991 .966 MEAformer [9] .718 .901 .787 .785 .934 .841 .943 .990 .962 w/ GMNM .728 .901 .793 .803 .942 .855 .956 .991 .970 SnAg (Ours) .753 .930 .820 .808 .953 .864 .958 .993 .972 OpenEAD-W-V2 EVA [34] .854 .980 .904 .859 .983 .908 .925 .996 .951 w/ GMNM .866 .980 .909 .872 .981 .913 .948 .997 .969 MCLEA [33] .841 .984 .899 .877 .990 .923 .971 .998 .983 w/ GMNM .845 .987 .902 .882 .992 .926 .973 .999 .984 MEAformer [9] .886 .990 .926 .904 .992 .938 .965 .999 .979 w/ GMNM .902 .990 .936 .918 .993 .948 .975 .999 .985 SnAg (Ours) .904 .994 .939 .924 .994 .952 .980 .999 .988 standardize vector lengths, thus streamlining representation and prioritizing significant features. For any entity \ud835\udc52\ud835\udc56, vector positions correspond to the presence or frequency of top-ranked attributes and relations, respectively. 
(ii) Following [5, 33], the vision encoders $Enc_v$ are selected as ResNet-152 [20] for DBP15K, and CLIP [39] for Multi-OpenEA. (iii) An alignment editing method is applied to minimize error accumulation [47]. (iv) The head number $N_h$ in MHCA is set to 1. The hidden layer dimensions $d$ for all networks are unified into 300. The total epochs for baselines are set to 500, with an option for an additional 500 epochs of iterative training [33]. Our training strategy incorporates a cosine warm-up schedule (15% of steps for LR warm-up), early stopping, and gradient accumulation, using the AdamW optimizer ($\beta_1 = 0.9$, $\beta_2 = 0.999$) with a consistent batch size of 3500. (v) The total learnable parameters of our model are comparable to those of baseline models. For instance, under the DBP15KJA-EN dataset: EVA has 13.27M, MCLEA has 13.22M, and our SnAg has 13.82M learnable parameters. Table 6: Component analysis for SnAg on the MKGC datasets (per dataset, the columns are MRR, H@1, H@10 for DB15K [35], MKG-W [59], MKG-Y [59]). The icon v indicates the activation of the Gauss Modality Noise Masking (GMNM) module; u denotes its deactivation. By default, GMNM\u2019s noise application probability $\rho$ is set to 0.2, with a noise ratio $\epsilon$ of 0.7. Our Transformer-based structure serves as the default fusion method for SnAg. Alternatives include: \u201cFC\u201d (concatenating features from various modalities followed by a fully connected layer); \u201cWS\u201d (summing features weighted by a global learnable weight per modality); \u201cAT\u201d (leveraging an Attention network for entity-level weighting); \u201cTS\u201d (using a Transformer for weighting to obtain confidence scores $\tilde{w}_m$ for weighted summing); \u201cw/ Only $h_g$\u201d (using the graph structure embedding for uni-modal KGC). \u201cDropout\u201d is an experimental adjustment where Equation (1) is replaced with the Dropout function to randomly zero modal input features, based on a defined probability. v SnAg (Full): .363 .274 .530 | .373 .302 .503 | .395 .354 .471; v $\rho$ = 0.3, $\epsilon$ = 0.6: .361 .272 .528 | .373 .302 .502 | .393 .353 .468; v $\rho$ = 0.1, $\epsilon$ = 0.8: .360 .272 .525 | .371 .299 .496 | .391 .348 .463; v $\rho$ = 0.4, $\epsilon$ = 0.4: .358 .268 .526 | .365 .296 .492 | .388 .346 .458; v $\rho$ = 0.5, $\epsilon$ = 0.2: .360 .270 .528 | .368 .299 .493 | .389 .348 .457; v $\rho$ = 0.7, $\epsilon$ = 0.2: .359 .270 .526 | .367 .299 .490 | .387 .345 .456; u SnAg: .357 .269 .523 | .365 .296 .490 | .387 .345 .457; u FC Fusion: .327 .210 .522 | .350 .287 .467 | .378 .340 .442; u WS Fusion: .334 .218 .529 | .361 .298 .480 | .384 .345 .449; u AT Fusion: .336 .225 .528 | .361 .296 .481 | .379 .343 .445; u TS Fusion: .335 .221 .529 | .358 .292 .472 | .378 .344 .437; u w/ Only $h_g$: .293 .179 .497 | .337 .268 .467 | .350 .291 .453; u Dropout (0.1): .349 .252 .527 | .361 .297 .479 | .382 .344 .446; u Dropout (0.2): .346 .249 .526 | .359 .294 .478 | .381 .343 .446; u Dropout (0.3): .343 .242 .524 | .356 .290 .477 | .381 .343 .445; u Dropout (0.4): .341 .238 .521 | .356 .295 .467 | .379 .341 .442. 4.2 Overall Results 4.2.1 MKGC Results. As shown in Table 1, SnAg achieves SOTA performance across all metrics on the three MKGC datasets, which is especially notable when compared with recent works like MANS [62] and MMRNS [59], which have refined negative sampling techniques. 
Our Entity-level Modality Interaction approach for MMKG representation learning not only demonstrates a significant advantage but also benefits from the consistent performance enhancement provided by our Gauss Modality Noise Masking (GMNM) module, maintaining superior performance even in its absence. 4.2.2 MMEA Results. As illustrated in the third segment of Table 4, our SnAg achieves SOTA performance across all metrics on seven standard MMEA datasets. Notably, in the latter four datasets of the OpenEA series (EN-FR-15K, EN-DE-15K, D-W-15K-V1, D-W-15K-V2) under the Standard setting, where $R_{img} = 1.0$ indicates full image coverage for each entity, our GMNM module maintains or even boosts performance. This suggests that strategic noise integration can lead to beneficial results, demonstrating the module\u2019s effectiveness even in scenarios where visual data is abundant and complete. This aligns with findings from related work [10, 12], which suggest that image ambiguities and multi-aspect visual information can sometimes misguide the use of MMKGs. Unlike these studies that typically design models to refuse and combat noise, our SnAg accepts and intentionally integrates noise to better align with the inherently noisy conditions of real-world scenarios. Most importantly, as a versatile MMKG representation learning approach, it is compatible with both MMEA and MKGC tasks, illustrating its robust adaptability in diverse operational contexts. 4.3 Uncertainly Missing Modality. The first two segments of Table 4 present entity alignment performance with $R_{img} = 0.4$ and $0.6$, where 60%/40% of entities lack image data. These missing images are substituted with random image features following a normal distribution based on the observed mean and standard deviation across other entities\u2019 images (details in \u00a7 3.2.3). This simulates uncertain modality absence in real-world scenarios. Our method outperforms baselines more significantly when the modality absence is greater (i.e., $R_{img} = 0.4$), with the GMNM module providing notable benefits. This demonstrates that intentionally introducing noise can increase training challenges while enhancing model robustness in realistic settings. 4.4 Ablation Studies. In Table 6, we dissect the influence of various components on our model\u2019s performance, focusing on three key aspects: (i) Noise Parameters: The noise application probability $\rho$ and noise ratio $\epsilon$ are pivotal. Optimal values of $\rho = 0.2$ and $\epsilon = 0.7$ were determined empirically, suggesting that the model tolerates up to 20% of entities missing images and that a modality-mask ratio of 0.7 acts as a soft mask. For optimal performance, we recommend empirically adjusting these parameters to suit specific scenarios. Generally, conducting a grid search on a smaller dataset subset can quickly identify suitable parameter combinations. (ii) Entity-Level Modality Interaction: Our exploration shows that the absence of image information (w/ Only $h_g$) markedly reduces performance, underscoring the importance of multi-modal information for MKGC. Weighted summing methods (WS, AT, TS) surpass simple FC-based approaches, indicating the superiority of nuanced modality integration. Purely using the Transformer modality weights $\tilde{w}_m$ for weighting does not show a clear advantage over Attention-based or globally learnable weight methods in MKGC. 
In contrast, our approach, using $\bar{h}_g$ (for DB15K) and $\bar{h}_{avg}$ (for MKG-W and MKG-Y), significantly outperforms the others, demonstrating its efficacy. (iii) Modality-Mask vs. Dropout: In assessing their differential impacts, we observe that even minimal dropout (0.1) adversely affects performance, likely because dropout to some extent distorts the original modal feature distribution, thereby hindering model optimization toward the alignment objective. Conversely, our modality-mask\u2019s noise is inherent, replicating the feature distribution seen when a modality is absent, and consequently enhancing model robustness more effectively. 5 CONCLUSION AND FUTURE WORK In this work, we introduce a unified multi-modal knowledge graph representation framework that accepts and intentionally integrates noise, thereby aligning with the complexities of real-world scenarios. This initiative also stands out as the first in the MMKG domain to support both MKGC and MMEA tasks simultaneously, showcasing the adaptability of our approach. Building on this foundation, we encourage future researchers to adopt a broader perspective on MMKG representation learning, extending beyond the focus on individual sub-tasks. As the field evolves, there is a promising avenue for integrating this unified representation into multi-modal knowledge pre-training, which could facilitate diverse downstream tasks, including but not limited to Multi-modal Knowledge Injection and Multi-modal Retrieval-Augmented Generation (RAG). Such advancements have the potential to make significant contributions to the community, especially with the rapid development of Large Language Models [63, 65]."  },  {    "url": "http://arxiv.org/abs/1810.04805v2",    "title": "BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding",    "abstract": "We introduce a new language representation model called BERT, which stands\nfor Bidirectional Encoder Representations from Transformers. Unlike recent\nlanguage representation models, BERT is designed to pre-train deep\nbidirectional representations from unlabeled text by jointly conditioning on\nboth left and right context in all layers. As a result, the pre-trained BERT\nmodel can be fine-tuned with just one additional output layer to create\nstate-of-the-art models for a wide range of tasks, such as question answering\nand language inference, without substantial task-specific architecture\nmodifications.\n BERT is conceptually simple and empirically powerful. 
It obtains new\nstate-of-the-art results on eleven natural language processing tasks, including\npushing the GLUE score to 80.5% (7.7% point absolute improvement), MultiNLI\naccuracy to 86.7% (4.6% absolute improvement), SQuAD v1.1 question answering\nTest F1 to 93.2 (1.5 point absolute improvement) and SQuAD v2.0 Test F1 to 83.1\n(5.1 point absolute improvement).", + "authors": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, Kristina Toutanova", + "published": "2018-10-11", + "updated": "2019-05-24", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/1908.03557v1", + "title": "VisualBERT: A Simple and Performant Baseline for Vision and Language", + "abstract": "We propose VisualBERT, a simple and flexible framework for modeling a broad\nrange of vision-and-language tasks. VisualBERT consists of a stack of\nTransformer layers that implicitly align elements of an input text and regions\nin an associated input image with self-attention. We further propose two\nvisually-grounded language model objectives for pre-training VisualBERT on\nimage caption data. Experiments on four vision-and-language tasks including\nVQA, VCR, NLVR2, and Flickr30K show that VisualBERT outperforms or rivals with\nstate-of-the-art models while being significantly simpler. Further analysis\ndemonstrates that VisualBERT can ground elements of language to image regions\nwithout any explicit supervision and is even sensitive to syntactic\nrelationships, tracking, for example, associations between verbs and image\nregions corresponding to their arguments.", + "authors": "Liunian Harold Li, Mark Yatskar, Da Yin, Cho-Jui Hsieh, Kai-Wei Chang", + "published": "2019-08-09", + "updated": "2019-08-09", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV", + "cs.CL", + "cs.LG" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2209.00891v1", + "title": "Multi-modal Contrastive Representation Learning for Entity Alignment", + "abstract": "Multi-modal entity alignment aims to identify equivalent entities between two\ndifferent multi-modal knowledge graphs, which consist of structural triples and\nimages associated with entities. Most previous works focus on how to utilize\nand encode information from different modalities, while it is not trivial to\nleverage multi-modal knowledge in entity alignment because of the modality\nheterogeneity. In this paper, we propose MCLEA, a Multi-modal Contrastive\nLearning based Entity Alignment model, to obtain effective joint\nrepresentations for multi-modal entity alignment. Different from previous\nworks, MCLEA considers task-oriented modality and models the inter-modal\nrelationships for each entity representation. In particular, MCLEA firstly\nlearns multiple individual representations from multiple modalities, and then\nperforms contrastive learning to jointly model intra-modal and inter-modal\ninteractions. 
Extensive experimental results show that MCLEA outperforms\nstate-of-the-art baselines on public datasets under both supervised and\nunsupervised settings.", + "authors": "Zhenxi Lin, Ziheng Zhang, Meng Wang, Yinghui Shi, Xian Wu, Yefeng Zheng", + "published": "2022-09-02", + "updated": "2022-09-02", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI", + "cs.LG", + "cs.MM" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2212.14454v4", + "title": "MEAformer: Multi-modal Entity Alignment Transformer for Meta Modality Hybrid", + "abstract": "Multi-modal entity alignment (MMEA) aims to discover identical entities\nacross different knowledge graphs (KGs) whose entities are associated with\nrelevant images. However, current MMEA algorithms rely on KG-level modality\nfusion strategies for multi-modal entity representation, which ignores the\nvariations of modality preferences of different entities, thus compromising\nrobustness against noise in modalities such as blurry images and relations.\nThis paper introduces MEAformer, a multi-modal entity alignment transformer\napproach for meta modality hybrid, which dynamically predicts the mutual\ncorrelation coefficients among modalities for more fine-grained entity-level\nmodality fusion and alignment. Experimental results demonstrate that our model\nnot only achieves SOTA performance in multiple training scenarios, including\nsupervised, unsupervised, iterative, and low-resource settings, but also has a\nlimited number of parameters, efficient runtime, and interpretability. Our code\nis available at https://github.com/zjukg/MEAformer.", + "authors": "Zhuo Chen, Jiaoyan Chen, Wen Zhang, Lingbing Guo, Yin Fang, Yufeng Huang, Yichi Zhang, Yuxia Geng, Jeff Z. Pan, Wenting Song, Huajun Chen", + "published": "2022-12-29", + "updated": "2023-07-30", + "primary_cat": "cs.AI", + "cats": [ + "cs.AI", + "cs.CL" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2209.07084v1", + "title": "Knowledge Graph Completion with Pre-trained Multimodal Transformer and Twins Negative Sampling", + "abstract": "Knowledge graphs (KGs) that modelings the world knowledge as structural\ntriples are inevitably incomplete. Such problems still exist for multimodal\nknowledge graphs (MMKGs). Thus, knowledge graph completion (KGC) is of great\nimportance to predict the missing triples in the existing KGs. As for the\nexisting KGC methods, embedding-based methods rely on manual design to leverage\nmultimodal information while finetune-based approaches are not superior to\nembedding-based methods in link prediction. To address these problems, we\npropose a VisualBERT-enhanced Knowledge Graph Completion model (VBKGC for\nshort). VBKGC could capture deeply fused multimodal information for entities\nand integrate them into the KGC model. Besides, we achieve the co-design of the\nKGC model and negative sampling by designing a new negative sampling strategy\ncalled twins negative sampling. Twins negative sampling is suitable for\nmultimodal scenarios and could align different embeddings for entities. 
We\nconduct extensive experiments to show the outstanding performance of VBKGC on\nthe link prediction task and make further exploration of VBKGC.", + "authors": "Yichi Zhang, Wen Zhang", + "published": "2022-09-15", + "updated": "2022-09-15", + "primary_cat": "cs.AI", + "cats": [ + "cs.AI", + "cs.CL" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/1902.10197v1", + "title": "RotatE: Knowledge Graph Embedding by Relational Rotation in Complex Space", + "abstract": "We study the problem of learning representations of entities and relations in\nknowledge graphs for predicting missing links. The success of such a task\nheavily relies on the ability of modeling and inferring the patterns of (or\nbetween) the relations. In this paper, we present a new approach for knowledge\ngraph embedding called RotatE, which is able to model and infer various\nrelation patterns including: symmetry/antisymmetry, inversion, and composition.\nSpecifically, the RotatE model defines each relation as a rotation from the\nsource entity to the target entity in the complex vector space. In addition, we\npropose a novel self-adversarial negative sampling technique for efficiently\nand effectively training the RotatE model. Experimental results on multiple\nbenchmark knowledge graphs show that the proposed RotatE model is not only\nscalable, but also able to infer and model various relation patterns and\nsignificantly outperform existing state-of-the-art models for link prediction.", + "authors": "Zhiqing Sun, Zhi-Hong Deng, Jian-Yun Nie, Jian Tang", + "published": "2019-02-26", + "updated": "2019-02-26", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.CL", + "stat.ML" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2310.05364v3", + "title": "Universal Multi-modal Entity Alignment via Iteratively Fusing Modality Similarity Paths", + "abstract": "The objective of Entity Alignment (EA) is to identify equivalent entity pairs\nfrom multiple Knowledge Graphs (KGs) and create a more comprehensive and\nunified KG. The majority of EA methods have primarily focused on the structural\nmodality of KGs, lacking exploration of multi-modal information. A few\nmulti-modal EA methods have made good attempts in this field. Still, they have\ntwo shortcomings: (1) inconsistent and inefficient modality modeling that\ndesigns complex and distinct models for each modality; (2) ineffective modality\nfusion due to the heterogeneous nature of modalities in EA. 
To tackle these\nchallenges, we propose PathFusion, consisting of two main components: (1) MSP,\na unified modeling approach that simplifies the alignment process by\nconstructing paths connecting entities and modality nodes to represent multiple\nmodalities; (2) IRF, an iterative fusion method that effectively combines\ninformation from different modalities using the path as an information carrier.\nExperimental results on real-world datasets demonstrate the superiority of\nPathFusion over state-of-the-art methods, with 22.4%-28.9% absolute improvement\non Hits@1, and 0.194-0.245 absolute improvement on MRR.", + "authors": "Bolin Zhu, Xiaoze Liu, Xin Mao, Zhuo Chen, Lingbing Guo, Tao Gui, Qi Zhang", + "published": "2023-10-09", + "updated": "2023-10-13", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/1903.05485v1", + "title": "MMKG: Multi-Modal Knowledge Graphs", + "abstract": "We present MMKG, a collection of three knowledge graphs that contain both\nnumerical features and (links to) images for all entities as well as entity\nalignments between pairs of KGs. Therefore, multi-relational link prediction\nand entity matching communities can benefit from this resource. We believe this\ndata set has the potential to facilitate the development of novel multi-modal\nlearning approaches for knowledge graphs. We validate the utility of MMKG in the\nsameAs link prediction task with an extensive set of experiments. These\nexperiments show that the task at hand benefits from learning of multiple\nfeature types.", + "authors": "Ye Liu, Hui Li, Alberto Garcia-Duran, Mathias Niepert, Daniel Onoro-Rubio, David S. Rosenblum", + "published": "2019-03-13", + "updated": "2019-03-13", + "primary_cat": "cs.AI", + "cats": [ + "cs.AI", + "cs.CL" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2309.01169v1", + "title": "End-to-End Learning on Multimodal Knowledge Graphs", + "abstract": "Knowledge graphs enable data scientists to learn end-to-end on heterogeneous\nknowledge. However, most end-to-end models solely learn from the relational\ninformation encoded in graphs' structure: raw values, encoded as literal nodes,\nare either omitted completely or treated as regular nodes without consideration\nfor their values. In either case we lose potentially relevant information which\ncould have otherwise been exploited by our learning methods. We propose a\nmultimodal message passing network which not only learns end-to-end from the\nstructure of graphs, but also from their possibly diverse set of multimodal node\nfeatures. Our model uses dedicated (neural) encoders to naturally learn\nembeddings for node features belonging to five different types of modalities,\nincluding numbers, texts, dates, images and geometries, which are projected\ninto a joint representation space together with their relational information.\nWe implement and demonstrate our model on node classification and link\nprediction for artificial and real-world datasets, and evaluate the effect\nthat each modality has on the overall performance in an inverse ablation study.\nOur results indicate that end-to-end multimodal learning from any arbitrary\nknowledge graph is indeed possible, and that including multimodal information\ncan significantly affect performance, but that much depends on the\ncharacteristics of the data.", + "authors": "W. X. Wilcke, P. Bloem, V. de Boer, R. H.
van t Veer", + "published": "2023-09-03", + "updated": "2023-09-03", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI", + "I.2.6; I.2.4" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2009.13603v2", + "title": "Visual Pivoting for (Unsupervised) Entity Alignment", + "abstract": "This work studies the use of visual semantic representations to align\nentities in heterogeneous knowledge graphs (KGs). Images are natural components\nof many existing KGs. By combining visual knowledge with other auxiliary\ninformation, we show that the proposed new approach, EVA, creates a holistic\nentity representation that provides strong signals for cross-graph entity\nalignment. Besides, previous entity alignment methods require human labelled\nseed alignment, restricting availability. EVA provides a completely\nunsupervised solution by leveraging the visual similarity of entities to create\nan initial seed dictionary (visual pivots). Experiments on benchmark data sets\nDBP15k and DWY15k show that EVA offers state-of-the-art performance on both\nmonolingual and cross-lingual entity alignment tasks. Furthermore, we discover\nthat images are particularly useful to align long-tail KG entities, which\ninherently lack the structural contexts necessary for capturing the\ncorrespondences.", + "authors": "Fangyu Liu, Muhao Chen, Dan Roth, Nigel Collier", + "published": "2020-09-28", + "updated": "2020-12-17", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2307.03591v1", + "title": "Structure Guided Multi-modal Pre-trained Transformer for Knowledge Graph Reasoning", + "abstract": "Multimodal knowledge graphs (MKGs), which intuitively organize information in\nvarious modalities, can benefit multiple practical downstream tasks, such as\nrecommendation systems, and visual question answering. However, most MKGs are\nstill far from complete, which motivates the flourishing of MKG reasoning\nmodels. Recently, with the development of general artificial architectures, the\npretrained transformer models have drawn increasing attention, especially for\nmultimodal scenarios. However, the research of multimodal pretrained\ntransformer (MPT) for knowledge graph reasoning (KGR) is still at an early\nstage. As the biggest difference between MKG and other multimodal data, the\nrich structural information underlying the MKG still cannot be fully leveraged\nin existing MPT models. Most of them only utilize the graph structure as a\nretrieval map for matching images and texts connected with the same entity.\nThis manner hinders their reasoning performances. To this end, we propose the\ngraph Structure Guided Multimodal Pretrained Transformer for knowledge graph\nreasoning, termed SGMPT. Specifically, the graph structure encoder is adopted\nfor structural feature encoding. Then, a structure-guided fusion module with\ntwo different strategies, i.e., weighted summation and alignment constraint, is\nfirst designed to inject the structural information into both the textual and\nvisual features. To the best of our knowledge, SGMPT is the first MPT model for\nmultimodal KGR, which mines the structural information underlying the knowledge\ngraph. 
Extensive experiments on FB15k-237-IMG and WN18-IMG demonstrate that\nour SGMPT outperforms existing state-of-the-art models and prove the\neffectiveness of the designed strategies.", + "authors": "Ke Liang, Sihang Zhou, Yue Liu, Lingyuan Meng, Meng Liu, Xinwang Liu", + "published": "2023-07-06", + "updated": "2023-07-06", + "primary_cat": "cs.AI", + "cats": [ + "cs.AI", + "cs.IR" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2205.02357v5", + "title": "Hybrid Transformer with Multi-level Fusion for Multimodal Knowledge Graph Completion", + "abstract": "Multimodal Knowledge Graphs (MKGs), which organize visual-text factual\nknowledge, have recently been successfully applied to tasks such as information\nretrieval, question answering, and recommendation systems. Since most MKGs are\nfar from complete, extensive knowledge graph completion studies have been\nproposed focusing on the multimodal entity, relation extraction and link\nprediction. However, different tasks and modalities require changes to the\nmodel architecture, and not all images/objects are relevant to text input,\nwhich hinders the applicability to diverse real-world scenarios. In this paper,\nwe propose a hybrid transformer with multi-level fusion to address those\nissues. Specifically, we leverage a hybrid transformer architecture with\nunified input-output for diverse multimodal knowledge graph completion tasks.\nMoreover, we propose multi-level fusion, which integrates visual and text\nrepresentation via coarse-grained prefix-guided interaction and fine-grained\ncorrelation-aware fusion modules. We conduct extensive experiments to validate\nthat our MKGformer can obtain SOTA performance on four datasets of multimodal\nlink prediction, multimodal RE, and multimodal NER. Code is available at\nhttps://github.com/zjunlp/MKGformer.", + "authors": "Xiang Chen, Ningyu Zhang, Lei Li, Shumin Deng, Chuanqi Tan, Changliang Xu, Fei Huang, Luo Si, Huajun Chen", + "published": "2022-05-04", + "updated": "2023-09-18", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI", + "cs.CV", + "cs.LG", + "cs.MM" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2303.10816v1", + "title": "IMF: Interactive Multimodal Fusion Model for Link Prediction", + "abstract": "Link prediction aims to identify potential missing triples in knowledge\ngraphs. To get better results, some recent studies have introduced multimodal\ninformation to link prediction. However, these methods utilize multimodal\ninformation separately and neglect the complicated interaction between\ndifferent modalities. In this paper, we aim at better modeling the\ninter-modality information and thus introduce a novel Interactive Multimodal\nFusion (IMF) model to integrate knowledge from different modalities. To this\nend, we propose a two-stage multimodal fusion framework to preserve\nmodality-specific knowledge as well as take advantage of the complementarity\nbetween different modalities. Instead of directly projecting different\nmodalities into a unified space, our multimodal fusion module keeps the\nrepresentations of different modalities independent while leveraging bilinear\npooling for fusion and incorporating contrastive learning as additional\nconstraints. Furthermore, the decision fusion module delivers the learned\nweighted average over the predictions of all modalities to better incorporate\nthe complementarity of different modalities.
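A note on the IMF entry above: bilinear pooling, the fusion operator it relies on, can be sketched in a few lines. The projection tensor, shapes, and random inputs below are illustrative assumptions; IMF's actual module adds contrastive constraints and a decision-fusion stage on top.

```python
import numpy as np

def bilinear_fuse(x, y, W):
    """Fuse two modality vectors via bilinear pooling: take the outer
    product (capturing all pairwise feature interactions) and project it
    down with a learned tensor W of shape (d_out, d_x, d_y)."""
    outer = np.outer(x, y)                   # (d_x, d_y) pairwise interactions
    return np.tensordot(W, outer, axes=2)    # (d_out,) fused representation

rng = np.random.default_rng(0)
x = rng.normal(size=16)                  # e.g. structural embedding
y = rng.normal(size=16)                  # e.g. visual embedding
W = rng.normal(size=(8, 16, 16)) / 16.0  # hypothetical learned projection
print(bilinear_fuse(x, y, W).shape)      # (8,)
```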
Our approach has been demonstrated\nto be effective through empirical evaluations on several real-world datasets.\nThe implementation code is available online at\nhttps://github.com/HestiaSky/IMF-Pytorch.", + "authors": "Xinhang Li, Xiangyu Zhao, Jiaxing Xu, Yong Zhang, Chunxiao Xing", + "published": "2023-03-20", + "updated": "2023-03-20", + "primary_cat": "cs.AI", + "cats": [ + "cs.AI", + "cs.IR" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2402.05391v4", + "title": "Knowledge Graphs Meet Multi-Modal Learning: A Comprehensive Survey", + "abstract": "Knowledge Graphs (KGs) play a pivotal role in advancing various AI\napplications, with the semantic web community's exploration into multi-modal\ndimensions unlocking new avenues for innovation. In this survey, we carefully\nreview over 300 articles, focusing on KG-aware research in two principal\naspects: KG-driven Multi-Modal (KG4MM) learning, where KGs support multi-modal\ntasks, and Multi-Modal Knowledge Graph (MM4KG), which extends KG studies into\nthe MMKG realm. We begin by defining KGs and MMKGs, then explore their\nconstruction progress. Our review includes two primary task categories:\nKG-aware multi-modal learning tasks, such as Image Classification and Visual\nQuestion Answering, and intrinsic MMKG tasks like Multi-modal Knowledge Graph\nCompletion and Entity Alignment, highlighting specific research trajectories.\nFor most of these tasks, we provide definitions, evaluation benchmarks, and\nadditionally outline essential insights for conducting relevant research.\nFinally, we discuss current challenges and identify emerging trends, such as\nprogress in Large Language Modeling and Multi-modal Pre-training strategies.\nThis survey aims to serve as a comprehensive reference for researchers already\ninvolved in or considering delving into KG and multi-modal learning research,\noffering insights into the evolving landscape of MMKG research and supporting\nfuture work.", + "authors": "Zhuo Chen, Yichi Zhang, Yin Fang, Yuxia Geng, Lingbing Guo, Xiang Chen, Qian Li, Wen Zhang, Jiaoyan Chen, Yushan Zhu, Jiaqi Li, Xiaoze Liu, Jeff Z. Pan, Ningyu Zhang, Huajun Chen", + "published": "2024-02-08", + "updated": "2024-02-26", + "primary_cat": "cs.AI", + "cats": [ + "cs.AI", + "cs.CV", + "cs.IR", + "cs.LG" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/1609.07028v2", + "title": "Image-embodied Knowledge Representation Learning", + "abstract": "Entity images could provide significant visual information for knowledge\nrepresentation learning. Most conventional methods learn knowledge\nrepresentations merely from structured triples, ignoring rich visual\ninformation extracted from entity images. In this paper, we propose a novel\nImage-embodied Knowledge Representation Learning model (IKRL), where knowledge\nrepresentations are learned with both triple facts and images. More\nspecifically, we first construct representations for all images of an entity\nwith a neural image encoder. These image representations are then integrated\ninto an aggregated image-based representation via an attention-based method. We\nevaluate our IKRL models on knowledge graph completion and triple\nclassification. 
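A note on the IKRL entry above: its attention-based aggregation of multiple image representations per entity follows the standard attention pattern. The sketch below is a plain dot-product-attention stand-in; the query choice and shapes are assumptions for illustration, not the paper's exact formulation.

```python
import numpy as np

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def aggregate_images(image_embs, query):
    """Combine several image embeddings of one entity into a single
    representation, weighting each image by its dot-product attention
    score against a query vector (here: the structural embedding)."""
    scores = image_embs @ query        # (n_images,) relevance scores
    weights = softmax(scores)
    return weights @ image_embs        # attention-weighted average

rng = np.random.default_rng(0)
imgs = rng.normal(size=(5, 8))    # 5 images, 8-dim encoder features
entity = rng.normal(size=8)       # structural embedding as the query
print(aggregate_images(imgs, entity).shape)   # (8,)
```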
Experimental results demonstrate that our models outperform all\nbaselines on both tasks, which indicates the significance of visual information\nfor knowledge representations and the capability of our models in learning\nknowledge representations with images.", + "authors": "Ruobing Xie, Zhiyuan Liu, Huanbo Luan, Maosong Sun", + "published": "2016-09-22", + "updated": "2017-05-22", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV", + "cs.CL" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2210.08821v2", + "title": "MoSE: Modality Split and Ensemble for Multimodal Knowledge Graph Completion", + "abstract": "Multimodal knowledge graph completion (MKGC) aims to predict missing entities\nin MKGs. Previous works usually share relation representation across\nmodalities. This results in mutual interference between modalities during\ntraining, since for a pair of entities, the relation from one modality probably\ncontradicts that from another modality. Furthermore, making a unified\nprediction based on the shared relation representation treats the input in\ndifferent modalities equally, while their importance to the MKGC task should be\ndifferent. In this paper, we propose MoSE, a Modality Split representation\nlearning and Ensemble inference framework for MKGC. Specifically, in the\ntraining phase, we learn modality-split relation embeddings for each modality\ninstead of a single modality-shared one, which alleviates the modality\ninterference. Based on these embeddings, in the inference phase, we first make\nmodality-split predictions and then exploit various ensemble methods to combine\nthe predictions with different weights, which models the modality importance\ndynamically. Experimental results on three KG datasets show that MoSE\noutperforms state-of-the-art MKGC methods. Codes are available at\nhttps://github.com/OreOZhao/MoSE4MKGC.", + "authors": "Yu Zhao, Xiangrui Cai, Yike Wu, Haiwei Zhang, Ying Zhang, Guoqing Zhao, Ning Jiang", + "published": "2022-10-17", + "updated": "2022-11-01", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.MM" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2304.11618v1", + "title": "Modality-Aware Negative Sampling for Multi-modal Knowledge Graph Embedding", + "abstract": "Negative sampling (NS) is widely used in knowledge graph embedding (KGE),\nwhich aims to generate negative triples to make a positive-negative contrast\nduring training. However, existing NS methods are unsuitable when multi-modal\ninformation is considered in KGE models. They are also inefficient due to their\ncomplex design. In this paper, we propose Modality-Aware Negative Sampling\n(MANS) for multi-modal knowledge graph embedding (MMKGE) to address the\nmentioned problems. MANS could align structural and visual embeddings for\nentities in KGs and learn meaningful embeddings to perform better in\nmulti-modal KGE while keeping lightweight and efficient. 
Empirical results on\ntwo benchmarks demonstrate that MANS outperforms existing NS methods.\nMeanwhile, we make further explorations about MANS to confirm its\neffectiveness.", + "authors": "Yichi Zhang, Mingyang Chen, Wen Zhang", + "published": "2023-04-23", + "updated": "2023-04-23", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2206.13163v1", + "title": "Endowing Language Models with Multimodal Knowledge Graph Representations", + "abstract": "We propose a method to make natural language understanding models more\nparameter efficient by storing knowledge in an external knowledge graph (KG)\nand retrieving from this KG using a dense index. Given (possibly multilingual)\ndownstream task data, e.g., sentences in German, we retrieve entities from the\nKG and use their multimodal representations to improve downstream task\nperformance. We use the recently released VisualSem KG as our external\nknowledge repository, which covers a subset of Wikipedia and WordNet entities,\nand compare a mix of tuple-based and graph-based algorithms to learn entity and\nrelation representations that are grounded on the KG multimodal information. We\ndemonstrate the usefulness of the learned entity representations on two\ndownstream tasks, and show improved performance on the multilingual named\nentity recognition task by $0.3\\%$--$0.7\\%$ F1, while we achieve up to $2.5\\%$\nimprovement in accuracy on the visual sense disambiguation task. All our code\nand data are available in: \\url{https://github.com/iacercalixto/visualsem-kg}.", + "authors": "Ningyuan Huang, Yash R. Deshpande, Yibo Liu, Houda Alberts, Kyunghyun Cho, Clara Vania, Iacer Calixto", + "published": "2022-06-27", + "updated": "2022-06-27", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI", + "68T50", + "I.2.7; I.2.10; I.2.4" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2402.11000v2", + "title": "ASGEA: Exploiting Logic Rules from Align-Subgraphs for Entity Alignment", + "abstract": "Entity alignment (EA) aims to identify entities across different knowledge\ngraphs that represent the same real-world objects. Recent embedding-based EA\nmethods have achieved state-of-the-art performance in EA yet faced\ninterpretability challenges as they purely rely on the embedding distance and\nneglect the logic rules behind a pair of aligned entities. In this paper, we\npropose the Align-Subgraph Entity Alignment (ASGEA) framework to exploit logic\nrules from Align-Subgraphs. ASGEA uses anchor links as bridges to construct\nAlign-Subgraphs and spreads along the paths across KGs, which distinguishes it\nfrom the embedding-based methods. Furthermore, we design an interpretable\nPath-based Graph Neural Network, ASGNN, to effectively identify and integrate\nthe logic rules across KGs. We also introduce a node-level multi-modal\nattention mechanism coupled with multi-modal enriched anchors to augment the\nAlign-Subgraph. 
Our experimental results demonstrate the superior performance\nof ASGEA over the existing embedding-based methods in both EA and Multi-Modal\nEA (MMEA) tasks.", + "authors": "Yangyifei Luo, Zhuo Chen, Lingbing Guo, Qian Li, Wenxuan Zeng, Zhixin Cai, Jianxin Li", + "published": "2024-02-16", + "updated": "2024-03-05", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2009.00647v4", + "title": "Lifelong Graph Learning", + "abstract": "Graph neural networks (GNN) are powerful models for many graph-structured\ntasks. Existing models often assume that the complete structure of the graph is\navailable during training. In practice, however, graph-structured data is\nusually formed in a streaming fashion so that learning a graph continuously is\noften necessary. In this paper, we bridge GNN and lifelong learning by\nconverting a continual graph learning problem to a regular graph learning\nproblem so GNN can inherit the lifelong learning techniques developed for\nconvolutional neural networks (CNN). We propose a new topology, the feature\ngraph, which takes features as new nodes and turns nodes into independent\ngraphs. This successfully converts the original problem of node classification\nto graph classification. In the experiments, we demonstrate the efficiency and\neffectiveness of feature graph networks (FGN) by continuously learning a\nsequence of classical graph datasets. We also show that FGN achieves superior\nperformance in two applications, i.e., lifelong human action recognition with\nwearable devices and feature matching. To the best of our knowledge, FGN is the\nfirst method to bridge graph learning and lifelong learning via a novel graph\ntopology. Source code is available at https://github.com/wang-chen/LGL", + "authors": "Chen Wang, Yuheng Qiu, Dasong Gao, Sebastian Scherer", + "published": "2020-09-01", + "updated": "2022-03-26", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "stat.ML" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/2104.09304v1", + "title": "A Tunable Model for Graph Generation Using LSTM and Conditional VAE", + "abstract": "With the development of graph applications, generative models for graphs have\nbeen more crucial. Classically, stochastic models that generate graphs with a\npre-defined probability of edges and nodes have been studied. Recently, some\nmodels that reproduce the structural features of graphs by learning from actual\ngraph data using machine learning have been studied. However, in these\nconventional studies based on machine learning, structural features of graphs\ncan be learned from data, but it is not possible to tune features and generate\ngraphs with specific features. In this paper, we propose a generative model\nthat can tune specific features, while learning structural features of a graph\nfrom data. 
With a dataset of graphs with various features generated by a\nstochastic model, we confirm that our model can generate a graph with specific\nfeatures.", + "authors": "Shohei Nakazawa, Yoshiki Sato, Kenji Nakagawa, Sho Tsugawa, Kohei Watabe", + "published": "2021-04-15", + "updated": "2021-04-15", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.NI", + "cs.SI" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/2005.14403v1", + "title": "Deep graph learning for semi-supervised classification", + "abstract": "Graph learning (GL) can dynamically capture the distribution structure (graph\nstructure) of data based on graph convolutional networks (GCN), and the\nlearning quality of the graph structure directly influences GCN for\nsemi-supervised classification. Existing methods mostly combine the\ncomputational layer and the related losses into GCN for exploring the global\ngraph (measuring graph structure from all data samples) or local graph\n(measuring graph structure from local data samples). The global graph emphasizes\nthe whole-structure description of the inter-class data, while the local graph\ntends toward the neighborhood-structure representation of intra-class data.\nHowever, it is difficult to simultaneously balance these graphs of the learning\nprocess for semi-supervised classification because of the interdependence of\nthese graphs. To simulate the interdependence, deep graph learning (DGL) is\nproposed to find the better graph representation for semi-supervised\nclassification. DGL can not only learn the global structure by the previous\nlayer metric computation updating, but also mine the local structure by next\nlayer local weight reassignment. Furthermore, DGL can fuse the different\nstructures by dynamically encoding the interdependence of these structures, and\ndeeply mine the relationship of the different structures by the hierarchical\nprogressive learning for improving the performance of semi-supervised\nclassification. Experiments demonstrate that DGL outperforms state-of-the-art\nmethods on three benchmark datasets (Citeseer, Cora, and Pubmed) for citation\nnetworks and two benchmark datasets (MNIST and Cifar10) for images.", + "authors": "Guangfeng Lin, Xiaobing Kang, Kaiyang Liao, Fan Zhao, Yajun Chen", + "published": "2020-05-29", + "updated": "2020-05-29", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/1104.5256v1", + "title": "Learning Undirected Graphical Models with Structure Penalty", + "abstract": "In undirected graphical models, learning the graph structure and learning the\nfunctions that relate the predictive variables (features) to the responses\ngiven the structure are two topics that have been widely investigated in\nmachine learning and statistics. Learning graphical models in two stages is\nproblematic because the graph structure may change once the features are\nconsidered. The main contribution of this paper is the proposed method that\nlearns the graph structure and functions on the graph at the same time. General\ngraphical models with binary outcomes conditioned on predictive variables are\nproved to be equivalent to the multivariate Bernoulli model. The reparameterization\nof the potential functions in the graphical model by conditional log odds ratios in\nthe multivariate Bernoulli model offers an advantage in the representation of the\nconditional independence structure in the model.
Additionally, we impose a\nstructure penalty on groups of conditional log odds ratios to learn the graph\nstructure. These groups of functions are designed with overlaps to enforce\nhierarchical function selection. In this way, we are able to shrink higher\norder interactions to obtain a sparse graph structure. Simulation studies show\nthat the method is able to recover the graph structure. The analysis of county\ndata from Census Bureau gives interesting relations between unemployment rate,\ncrime and others discovered by the model.", + "authors": "Shilin Ding", + "published": "2011-04-27", + "updated": "2011-04-27", + "primary_cat": "cs.AI", + "cats": [ + "cs.AI", + "cs.LG" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/1904.11883v2", + "title": "Robust Graph Data Learning via Latent Graph Convolutional Representation", + "abstract": "Graph Convolutional Representation (GCR) has achieved impressive performance\nfor graph data representation. However, existing GCR is generally defined on\nthe input fixed graph which may restrict the representation capacity and also\nbe vulnerable to the structural attacks and noises. To address this issue, we\npropose a novel Latent Graph Convolutional Representation (LatGCR) for robust\ngraph data representation and learning. Our LatGCR is derived based on\nreformulating graph convolutional representation from the aspect of graph\nneighborhood reconstruction. Given an input graph $\\textbf{A}$, LatGCR aims to\ngenerate a flexible latent graph $\\widetilde{\\textbf{A}}$ for graph\nconvolutional representation which obviously enhances the representation\ncapacity and also performs robustly w.r.t graph structural attacks and noises.\nMoreover, LatGCR is implemented in a self-supervised manner and thus provides a\nbasic block for both supervised and unsupervised graph learning tasks.\nExperiments on several datasets demonstrate the effectiveness and robustness of\nLatGCR.", + "authors": "Bo Jiang, Ziyan Zhang, Bin Luo", + "published": "2019-04-26", + "updated": "2021-10-13", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV", + "cs.LG" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/2209.00793v2", + "title": "Structure-Preserving Graph Representation Learning", + "abstract": "Though graph representation learning (GRL) has made significant progress, it\nis still a challenge to extract and embed the rich topological structure and\nfeature information in an adequate way. Most existing methods focus on local\nstructure and fail to fully incorporate the global topological structure. To\nthis end, we propose a novel Structure-Preserving Graph Representation Learning\n(SPGRL) method, to fully capture the structure information of graphs.\nSpecifically, to reduce the uncertainty and misinformation of the original\ngraph, we construct a feature graph as a complementary view via k-Nearest\nNeighbor method. The feature graph can be used to contrast at node-level to\ncapture the local relation. 
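A note on the SPGRL entry above: the complementary feature-graph view is built with a k-nearest-neighbour graph over node features, which is easy to sketch. The distance metric and value of k below are illustrative assumptions.

```python
import numpy as np

def knn_graph(X, k=3):
    """Build a k-nearest-neighbour graph from node features: connect each
    node to its k closest nodes in Euclidean distance, then symmetrize,
    giving a feature-based complementary view of the graph."""
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)              # forbid self loops
    A = np.zeros_like(d)
    idx = np.argsort(d, axis=1)[:, :k]       # k nearest per node
    rows = np.repeat(np.arange(len(X)), k)
    A[rows, idx.ravel()] = 1.0
    return np.maximum(A, A.T)                # undirected adjacency

X = np.random.default_rng(0).normal(size=(10, 5))
print(knn_graph(X).sum(axis=1))              # node degrees (>= k)
```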
Besides, we retain the global topological structure\ninformation by maximizing the mutual information (MI) of the whole graph and\nfeature embeddings, which is theoretically reduced to exchanging the feature\nembeddings of the feature and the original graphs to reconstruct themselves.\nExtensive experiments show that our method has quite superior performance on\nsemi-supervised node classification task and excellent robustness under noise\nperturbation on graph structure or node features.", + "authors": "Ruiyi Fang, Liangjian Wen, Zhao Kang, Jianzhuang Liu", + "published": "2022-09-02", + "updated": "2022-12-08", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI", + "cs.CV", + "cs.SI" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/2103.10837v1", + "title": "Quantum machine learning of graph-structured data", + "abstract": "Graph structures are ubiquitous throughout the natural sciences. Here we\nconsider graph-structured quantum data and describe how to carry out its\nquantum machine learning via quantum neural networks. In particular, we\nconsider training data in the form of pairs of input and output quantum states\nassociated with the vertices of a graph, together with edges encoding\ncorrelations between the vertices. We explain how to systematically exploit\nthis additional graph structure to improve quantum learning algorithms. These\nalgorithms are numerically simulated and exhibit excellent learning behavior.\nScalable quantum implementations of the learning procedures are likely feasible\non the next generation of quantum computing devices.", + "authors": "Kerstin Beer, Megha Khosla, Julius K\u00f6hler, Tobias J. Osborne", + "published": "2021-03-19", + "updated": "2021-03-19", + "primary_cat": "quant-ph", + "cats": [ + "quant-ph" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/2210.02060v1", + "title": "Graph Classification via Discriminative Edge Feature Learning", + "abstract": "Spectral graph convolutional neural networks (GCNNs) have been producing\nencouraging results in graph classification tasks. However, most spectral GCNNs\nutilize fixed graphs when aggregating node features, while omitting edge\nfeature learning and failing to get an optimal graph structure. Moreover, many\nexisting graph datasets do not provide initialized edge features, further\nrestraining the ability of learning edge features via spectral GCNNs. In this\npaper, we try to address this issue by designing an edge feature scheme and an\nadd-on layer between every two stacked graph convolution layers in GCNN. Both\nare lightweight while effective in filling the gap between edge feature\nlearning and performance enhancement of graph classification. The edge feature\nscheme makes edge features adapt to node representations at different graph\nconvolution layers. The add-on layers help adjust the edge features to an\noptimal graph structure. To test the effectiveness of our method, we take\nEuclidean positions as initial node features and extract graphs with semantic\ninformation from point cloud objects. The node features of our extracted graphs\nare more scalable for edge feature learning than most existing graph datasets\n(in one-hot encoded label format). Three new graph datasets are constructed\nbased on ModelNet40, ModelNet10 and ShapeNet Part datasets. 
Experimental\nresults show that our method outperforms state-of-the-art graph classification\nmethods on the new datasets by reaching 96.56% overall accuracy on\nGraph-ModelNet40, 98.79% on Graph-ModelNet10 and 97.91% on Graph-ShapeNet Part.\nThe constructed graph datasets will be released to the community.", + "authors": "Yang Yi, Xuequan Lu, Shang Gao, Antonio Robles-Kelly, Yuejie Zhang", + "published": "2022-10-05", + "updated": "2022-10-05", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV", + "cs.LG" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/1906.02319v1", + "title": "DEMO-Net: Degree-specific Graph Neural Networks for Node and Graph Classification", + "abstract": "Graph data widely exist in many high-impact applications. Inspired by the\nsuccess of deep learning in grid-structured data, graph neural network models\nhave been proposed to learn powerful node-level or graph-level representation.\nHowever, most of the existing graph neural networks suffer from the following\nlimitations: (1) there is limited analysis regarding the graph convolution\nproperties, such as seed-oriented, degree-aware and order-free; (2) the node's\ndegree-specific graph structure is not explicitly expressed in graph\nconvolution for distinguishing structure-aware node neighborhoods; (3) the\ntheoretical explanation regarding the graph-level pooling schemes is unclear.\n To address these problems, we propose a generic degree-specific graph neural\nnetwork named DEMO-Net motivated by Weisfeiler-Lehman graph isomorphism test\nthat recursively identifies 1-hop neighborhood structures. In order to\nexplicitly capture the graph topology integrated with node attributes, we argue\nthat graph convolution should have three properties: seed-oriented,\ndegree-aware, order-free. To this end, we propose multi-task graph convolution\nwhere each task represents node representation learning for nodes with a\nspecific degree value, thus leading to preserving the degree-specific graph\nstructure. In particular, we design two multi-task learning methods:\ndegree-specific weight and hashing functions for graph convolution. In\naddition, we propose a novel graph-level pooling/readout scheme for learning\ngraph representation provably lying in a degree-specific Hilbert kernel space.\nThe experimental results on several node and graph classification benchmark\ndata sets demonstrate the effectiveness and efficiency of our proposed DEMO-Net\nover state-of-the-art graph neural network models.", + "authors": "Jun Wu, Jingrui He, Jiejun Xu", + "published": "2019-06-05", + "updated": "2019-06-05", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "stat.ML" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/1611.07308v1", + "title": "Variational Graph Auto-Encoders", + "abstract": "We introduce the variational graph auto-encoder (VGAE), a framework for\nunsupervised learning on graph-structured data based on the variational\nauto-encoder (VAE). This model makes use of latent variables and is capable of\nlearning interpretable latent representations for undirected graphs. We\ndemonstrate this model using a graph convolutional network (GCN) encoder and a\nsimple inner product decoder. Our model achieves competitive results on a link\nprediction task in citation networks. 
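An aside on the VGAE entry just described: the model is small enough to write out as a single forward pass, with a GCN encoder producing the latent mean and log-variance, the reparameterization trick, and a sigmoid inner-product decoder. The toy graph and weight shapes below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def normalize(A):
    """Symmetric normalization D^-1/2 (A + I) D^-1/2 used by GCN layers."""
    A_hat = A + np.eye(len(A))
    d = 1.0 / np.sqrt(A_hat.sum(axis=1))
    return A_hat * d[:, None] * d[None, :]

def vgae_forward(A, X, W1, W_mu, W_logvar):
    """One VGAE forward pass: a two-layer GCN encoder outputs the mean and
    log-variance of latent node embeddings, a sample is drawn via the
    reparameterization trick, and an inner-product decoder reconstructs
    edge probabilities."""
    A_n = normalize(A)
    H = np.maximum(A_n @ X @ W1, 0)            # GCN layer + ReLU
    mu, logvar = A_n @ H @ W_mu, A_n @ H @ W_logvar
    Z = mu + np.exp(0.5 * logvar) * rng.normal(size=mu.shape)
    return 1.0 / (1.0 + np.exp(-(Z @ Z.T)))    # sigmoid(Z Z^T)

A = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=float)
X = np.eye(3)                                  # featureless: identity input
W1, W_mu, W_lv = (rng.normal(size=s) for s in [(3, 8), (8, 4), (8, 4)])
print(vgae_forward(A, X, W1, W_mu, W_lv).round(2))
```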
In contrast to most existing models for\nunsupervised learning on graph-structured data and link prediction, our model\ncan naturally incorporate node features, which significantly improves\npredictive performance on a number of benchmark datasets.", + "authors": "Thomas N. Kipf, Max Welling", + "published": "2016-11-21", + "updated": "2016-11-21", + "primary_cat": "stat.ML", + "cats": [ + "stat.ML", + "cs.LG" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/2011.01412v1", + "title": "Sampling and Recovery of Graph Signals based on Graph Neural Networks", + "abstract": "We propose interpretable graph neural networks for sampling and recovery of\ngraph signals, respectively. To take informative measurements, we propose a new\ngraph neural sampling module, which aims to select those vertices that\nmaximally express their corresponding neighborhoods. Such expressiveness can be\nquantified by the mutual information between vertices' features and\nneighborhoods' features, which are estimated via a graph neural network. To\nreconstruct an original graph signal from the sampled measurements, we propose\na graph neural recovery module based on the algorithm-unrolling technique.\nCompared to previous analytical sampling and recovery, the proposed methods are\nable to flexibly learn a variety of graph signal models from data by leveraging\nthe learning ability of neural networks; compared to previous\nneural-network-based sampling and recovery, the proposed methods are designed\nthrough exploiting specific graph properties and provide interpretability. We\nfurther design a new multiscale graph neural network, which is a trainable\nmultiscale graph filter bank and can handle various graph-related learning\ntasks. The multiscale network leverages the proposed graph neural sampling and\nrecovery modules to achieve multiscale representations of a graph. In the\nexperiments, we illustrate the effects of the proposed graph neural sampling\nand recovery modules and find that the modules can flexibly adapt to various\ngraph structures and graph signals. In the task of active-sampling-based\nsemi-supervised learning, the graph neural sampling module improves the\nclassification accuracy over 10% in Cora dataset. We further validate the\nproposed multiscale graph neural network on several standard datasets for both\nvertex and graph classification. The results show that our method consistently\nimproves the classification accuracies.", + "authors": "Siheng Chen, Maosen Li, Ya Zhang", + "published": "2020-11-03", + "updated": "2020-11-03", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.SI", + "eess.SP" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/2202.10688v2", + "title": "Graph Lifelong Learning: A Survey", + "abstract": "Graph learning is a popular approach for performing machine learning on\ngraph-structured data. It has revolutionized the machine learning ability to\nmodel graph data to address downstream tasks. Its application is wide due to\nthe availability of graph data ranging from all types of networks to\ninformation systems. Most graph learning methods assume that the graph is\nstatic and its complete structure is known during training. This limits their\napplicability since they cannot be applied to problems where the underlying\ngraph grows over time and/or new tasks emerge incrementally. 
Such applications\nrequire a lifelong learning approach that can learn the graph continuously and\naccommodate new information whilst retaining previously learned knowledge.\nLifelong learning methods that enable continuous learning in regular domains\nlike images and text cannot be directly applied to continuously evolving graph\ndata, due to its irregular structure. As a result, graph lifelong learning is\ngaining attention from the research community. This survey paper provides a\ncomprehensive overview of recent advancements in graph lifelong learning,\nincluding the categorization of existing methods, and the discussions of\npotential applications and open research problems.", + "authors": "Falih Gozi Febrinanto, Feng Xia, Kristen Moore, Chandra Thapa, Charu Aggarwal", + "published": "2022-02-22", + "updated": "2022-11-04", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI", + "68T07, 68T05", + "I.2.6" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/2206.08561v1", + "title": "Boosting Graph Structure Learning with Dummy Nodes", + "abstract": "With the development of graph kernels and graph representation learning, many\nsuperior methods have been proposed to handle scalability and oversmoothing\nissues on graph structure learning. However, most of those strategies are\ndesigned based on practical experience rather than theoretical analysis. In\nthis paper, we use a particular dummy node connecting to all existing vertices\nwithout affecting original vertex and edge properties. We further prove that\nsuch the dummy node can help build an efficient monomorphic edge-to-vertex\ntransform and an epimorphic inverse to recover the original graph back. It also\nindicates that adding dummy nodes can preserve local and global structures for\nbetter graph representation learning. We extend graph kernels and graph neural\nnetworks with dummy nodes and conduct experiments on graph classification and\nsubgraph isomorphism matching tasks. Empirical results demonstrate that taking\ngraphs with dummy nodes as input significantly boosts graph structure learning,\nand using their edge-to-vertex graphs can also achieve similar results. We also\ndiscuss the gain of expressive power from the dummy in neural networks.", + "authors": "Xin Liu, Jiayang Cheng, Yangqiu Song, Xin Jiang", + "published": "2022-06-17", + "updated": "2022-06-17", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/2209.07817v2", + "title": "SPGP: Structure Prototype Guided Graph Pooling", + "abstract": "While graph neural networks (GNNs) have been successful for node\nclassification tasks and link prediction tasks in graph, learning graph-level\nrepresentations still remains a challenge. For the graph-level representation,\nit is important to learn both representation of neighboring nodes, i.e.,\naggregation, and graph structural information. A number of graph pooling\nmethods have been developed for this goal. However, most of the existing\npooling methods utilize k-hop neighborhood without considering explicit\nstructural information in a graph. In this paper, we propose Structure\nPrototype Guided Pooling (SPGP) that utilizes prior graph structures to\novercome the limitation. SPGP formulates graph structures as learnable\nprototype vectors and computes the affinity between nodes and prototype\nvectors. 
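A note on the SPGP entry above: the node-versus-prototype affinity it describes can be sketched with cosine similarity. The exact affinity and scoring functions in the paper may differ, so treat this as a schematic stand-in with made-up shapes.

```python
import numpy as np

def prototype_scores(H, P):
    """Score each node by its best cosine affinity to a set of learnable
    structure-prototype vectors, so nodes matching useful structural
    patterns are prioritized when pooling."""
    Hn = H / np.linalg.norm(H, axis=1, keepdims=True)
    Pn = P / np.linalg.norm(P, axis=1, keepdims=True)
    return (Hn @ Pn.T).max(axis=1)     # (n_nodes,) affinity scores

rng = np.random.default_rng(0)
H = rng.normal(size=(6, 8))            # node embeddings
P = rng.normal(size=(3, 8))            # 3 hypothetical prototype vectors
keep = np.argsort(prototype_scores(H, P))[::-1][:3]
print(keep)                            # pooling keeps the top-3 nodes
```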
This leads to a novel node scoring scheme that prioritizes informative\nnodes while encapsulating the useful structures of the graph. Our experimental\nresults show that SPGP outperforms state-of-the-art graph pooling methods on\ngraph classification benchmark datasets in both accuracy and scalability.", + "authors": "Sangseon Lee, Dohoon Lee, Yinhua Piao, Sun Kim", + "published": "2022-09-16", + "updated": "2023-03-01", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.SI" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/2006.02879v1", + "title": "Auto-decoding Graphs", + "abstract": "We present an approach to synthesizing new graph structures from empirically\nspecified distributions. The generative model is an auto-decoder that learns to\nsynthesize graphs from latent codes. The graph synthesis model is learned\njointly with an empirical distribution over the latent codes. Graphs are\nsynthesized using self-attention modules that are trained to identify likely\nconnectivity patterns. Graph-based normalizing flows are used to sample latent\ncodes from the distribution learned by the auto-decoder. The resulting model\ncombines accuracy and scalability. On benchmark datasets of large graphs, the\npresented model outperforms the state of the art by a factor of 1.5 in mean\naccuracy and average rank across at least three different graph statistics,\nwith a 2x speedup during inference.", + "authors": "Sohil Atul Shah, Vladlen Koltun", + "published": "2020-06-04", + "updated": "2020-06-04", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "stat.ML" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/2309.10134v1", + "title": "GDM: Dual Mixup for Graph Classification with Limited Supervision", + "abstract": "Graph Neural Networks (GNNs) require a large number of labeled graph samples\nto obtain good performance on the graph classification task. The performance of\nGNNs degrades significantly as the number of labeled graph samples decreases.\nTo reduce the annotation cost, it is therefore important to develop graph\naugmentation methods that can generate new graph instances to increase the size\nand diversity of the limited set of available labeled graph samples. In this\nwork, we propose a novel mixup-based graph augmentation method, Graph Dual\nMixup (GDM), that leverages both functional and structural information of the\ngraph instances to generate new labeled graph samples. GDM employs a graph\nstructural auto-encoder to learn structural embeddings of the graph samples,\nand then applies mixup to the structural information of the graphs in the\nlearned structural embedding space and generates new graph structures from the\nmixup structural embeddings. As for the functional information, GDM applies\nmixup directly to the input node features of the graph samples to generate\nfunctional node feature information for new mixup graph instances. Jointly, the\ngenerated input node features and graph structures yield new graph samples\nwhich can supplement the set of original labeled graphs. Furthermore, we\npropose two novel Balanced Graph Sampling methods to enhance the balanced\ndifficulty and diversity for the generated graph samples. 
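A note on the GDM entry above: the core operation is mixup, interpolating two labeled samples with a Beta-distributed weight. GDM applies it both in a learned structural embedding space and on input node features; the sketch below shows only the basic operation, on illustrative data.

```python
import numpy as np

def mixup(z1, z2, y1, y2, alpha=1.0, rng=np.random.default_rng(0)):
    """Mix two (embedding, label) pairs with a Beta(alpha, alpha) weight,
    producing a synthetic labeled sample between the originals."""
    lam = rng.beta(alpha, alpha)
    return lam * z1 + (1 - lam) * z2, lam * y1 + (1 - lam) * y2

z1, z2 = np.ones(4), np.zeros(4)                     # two sample embeddings
y1, y2 = np.array([1.0, 0.0]), np.array([0.0, 1.0])  # one-hot labels
z_mix, y_mix = mixup(z1, z2, y1, y2)
print(z_mix, y_mix)   # both interpolated with the same weight
```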
Experimental results\non the benchmark datasets demonstrate that our proposed method substantially\noutperforms the state-of-the-art graph augmentation methods when the labeled\ngraphs are scarce.", + "authors": "Abdullah Alchihabi, Yuhong Guo", + "published": "2023-09-18", + "updated": "2023-09-18", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/2402.16374v2", + "title": "Graph Learning under Distribution Shifts: A Comprehensive Survey on Domain Adaptation, Out-of-distribution, and Continual Learning", + "abstract": "Graph learning plays a pivotal role and has gained significant attention in\nvarious application scenarios, from social network analysis to recommendation\nsystems, for its effectiveness in modeling complex data relations represented\nby graph structural data. In reality, the real-world graph data typically show\ndynamics over time, with changing node attributes and edge structure, leading\nto the severe graph data distribution shift issue. This issue is compounded by\nthe diverse and complex nature of distribution shifts, which can significantly\nimpact the performance of graph learning methods in degraded generalization and\nadaptation capabilities, posing a substantial challenge to their effectiveness.\nIn this survey, we provide a comprehensive review and summary of the latest\napproaches, strategies, and insights that address distribution shifts within\nthe context of graph learning. Concretely, according to the observability of\ndistributions in the inference stage and the availability of sufficient\nsupervision information in the training stage, we categorize existing graph\nlearning methods into several essential scenarios, including graph domain\nadaptation learning, graph out-of-distribution learning, and graph continual\nlearning. For each scenario, a detailed taxonomy is proposed, with specific\ndescriptions and discussions of existing progress made in distribution-shifted\ngraph learning. Additionally, we discuss the potential applications and future\ndirections for graph learning under distribution shifts with a systematic\nanalysis of the current state in this field. The survey is positioned to\nprovide general guidance for the development of effective graph learning\nalgorithms in handling graph distribution shifts, and to stimulate future\nresearch and advancements in this area.", + "authors": "Man Wu, Xin Zheng, Qin Zhang, Xiao Shen, Xiong Luo, Xingquan Zhu, Shirui Pan", + "published": "2024-02-26", + "updated": "2024-03-07", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.SI" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/1905.06393v1", + "title": "IPC: A Benchmark Data Set for Learning with Graph-Structured Data", + "abstract": "Benchmark data sets are an indispensable ingredient of the evaluation of\ngraph-based machine learning methods. We release a new data set, compiled from\nInternational Planning Competitions (IPC), for benchmarking graph\nclassification, regression, and related tasks. Apart from the graph\nconstruction (based on AI planning problems) that is interesting in its own\nright, the data set possesses distinctly different characteristics from\npopularly used benchmarks. 
The data set, named IPC, consists of two\nself-contained versions, grounded and lifted, both including graphs of large\nand skewedly distributed sizes, posing substantial challenges for the\ncomputation of graph models such as graph kernels and graph neural networks.\nThe graphs in this data set are directed and the lifted version is acyclic,\noffering the opportunity of benchmarking specialized models for directed\n(acyclic) structures. Moreover, the graph generator and the labeling are\ncomputer programmed; thus, the data set may be extended easily if a larger\nscale is desired. The data set is accessible from\n\\url{https://github.com/IBM/IPC-graph-data}.", + "authors": "Patrick Ferber, Tengfei Ma, Siyu Huo, Jie Chen, Michael Katz", + "published": "2019-05-15", + "updated": "2019-05-15", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI", + "stat.ML" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/1911.05954v3", + "title": "Hierarchical Graph Pooling with Structure Learning", + "abstract": "Graph Neural Networks (GNNs), which generalize deep neural networks to\ngraph-structured data, have drawn considerable attention and achieved\nstate-of-the-art performance in numerous graph related tasks. However, existing\nGNN models mainly focus on designing graph convolution operations. The graph\npooling (or downsampling) operations, that play an important role in learning\nhierarchical representations, are usually overlooked. In this paper, we propose\na novel graph pooling operator, called Hierarchical Graph Pooling with\nStructure Learning (HGP-SL), which can be integrated into various graph neural\nnetwork architectures. HGP-SL incorporates graph pooling and structure learning\ninto a unified module to generate hierarchical representations of graphs. More\nspecifically, the graph pooling operation adaptively selects a subset of nodes\nto form an induced subgraph for the subsequent layers. To preserve the\nintegrity of graph's topological information, we further introduce a structure\nlearning mechanism to learn a refined graph structure for the pooled graph at\neach layer. By combining HGP-SL operator with graph neural networks, we perform\ngraph level representation learning with focus on graph classification task.\nExperimental results on six widely used benchmarks demonstrate the\neffectiveness of our proposed model.", + "authors": "Zhen Zhang, Jiajun Bu, Martin Ester, Jianfeng Zhang, Chengwei Yao, Zhi Yu, Can Wang", + "published": "2019-11-14", + "updated": "2019-12-25", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "stat.ML" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/2204.05258v1", + "title": "Multi-view graph structure learning using subspace merging on Grassmann manifold", + "abstract": "Many successful learning algorithms have been recently developed to represent\ngraph-structured data. For example, Graph Neural Networks (GNNs) have achieved\nconsiderable successes in various tasks such as node classification, graph\nclassification, and link prediction. However, these methods are highly\ndependent on the quality of the input graph structure. One used approach to\nalleviate this problem is to learn the graph structure instead of relying on a\nmanually designed graph. 
In this paper, we introduce a new graph structure\nlearning approach using multi-view learning, named MV-GSL (Multi-View Graph\nStructure Learning), in which we aggregate different graph structure learning\nmethods using subspace merging on the Grassmann manifold to improve the quality of\nthe learned graph structures. Extensive experiments are performed to evaluate\nthe effectiveness of the proposed method on two benchmark datasets, Cora and\nCiteseer. Our experiments show that the proposed method has promising\nperformance compared to single and other combined graph structure learning\nmethods.", + "authors": "Razieh Ghiasi, Hossein Amirkhani, Alireza Bosaghzadeh", + "published": "2022-04-11", + "updated": "2022-04-11", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/2212.04934v1", + "title": "Learning Graph Algorithms With Recurrent Graph Neural Networks", + "abstract": "Classical graph algorithms work well for combinatorial problems that can be\nthoroughly formalized and abstracted. Once the algorithm is derived, it\ngeneralizes to instances of any size. However, developing an algorithm that\nhandles complex structures and interactions in the real world can be\nchallenging. Rather than specifying the algorithm, we can try to learn it from\nthe graph-structured data. Graph Neural Networks (GNNs) are inherently capable\nof working on graph structures; however, they struggle to generalize well, and\nlearning on larger instances is challenging. In order to scale, we focus on a\nrecurrent architecture design that can learn simple graph problems end to end\non smaller graphs and then extrapolate to larger instances. As our main\ncontribution, we identify three essential techniques for recurrent GNNs to\nscale. By using (i) skip connections, (ii) state regularization, and (iii) edge\nconvolutions, we can guide GNNs toward extrapolation. This allows us to train\non small graphs and apply the same model to much larger graphs during\ninference. Moreover, we empirically validate the extrapolation capabilities of\nour GNNs on algorithmic datasets.", + "authors": "Florian Gr\u00f6tschla, Jo\u00ebl Mathys, Roger Wattenhofer", + "published": "2022-12-09", + "updated": "2022-12-09", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/2403.03659v1", + "title": "Robust Graph Structure Learning under Heterophily", + "abstract": "The graph is a fundamental mathematical structure for characterizing relations\nbetween different objects and has been widely used in various learning tasks.\nMost methods implicitly assume a given graph to be accurate and complete.\nHowever, real data is inevitably noisy and sparse, which leads to inferior\nresults. Despite the remarkable success of recent graph representation learning\nmethods, they inherently presume that the graph is homophilic, and largely\noverlook heterophily, where most connected nodes are from different classes. In\nthis regard, we propose a novel robust graph structure learning method to\nachieve a high-quality graph from heterophilic data for downstream tasks. We\nfirst apply a high-pass filter to make each node more distinctive from its\nneighbors by encoding structure information into the node features. Then, we\nlearn a robust graph with an adaptive norm characterizing different levels of\nnoise.
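A note on the heterophily entry just above: the high-pass filtering step corresponds to the standard high-pass graph filter I - D^{-1}A, which keeps each node's difference from its neighbourhood average. The paper's exact filter may differ; this is a generic sketch on a toy graph.

```python
import numpy as np

def high_pass(A, X):
    """Apply the high-pass graph filter (I - D^-1 A) to node features,
    keeping the difference between each node and the average of its
    neighbours, which sharpens distinctions under heterophily."""
    d = A.sum(axis=1)
    d[d == 0] = 1.0                            # guard isolated nodes
    L_rw = np.eye(len(A)) - A / d[:, None]     # random-walk Laplacian
    return L_rw @ X

A = np.array([[0, 1, 1], [1, 0, 0], [1, 0, 0]], dtype=float)
X = np.array([[1.0], [0.0], [0.0]])
print(high_pass(A, X))   # node 0 keeps 1 - mean(neighbours) = 1.0
```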
Afterwards, we propose a novel regularizer to further refine the graph\nstructure. Clustering and semi-supervised classification experiments on\nheterophilic graphs verify the effectiveness of our method.", + "authors": "Xuanting Xie, Zhao Kang, Wenyu Chen", + "published": "2024-03-06", + "updated": "2024-03-06", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/2006.14002v1", + "title": "Bi-Level Graph Neural Networks for Drug-Drug Interaction Prediction", + "abstract": "We introduce Bi-GNN for modeling biological link prediction tasks such as\ndrug-drug interaction (DDI) and protein-protein interaction (PPI). Taking\ndrug-drug interaction as an example, existing methods using machine learning\neither only utilize the link structure between drugs without using the graph\nrepresentation of each drug molecule, or only leverage the individual drug\ncompound structures without using graph structure for the higher-level DDI\ngraph. The key idea of our method is to fundamentally view the data as a\nbi-level graph, where the highest level graph represents the interaction\nbetween biological entities (interaction graph), and each biological entity\nitself is further expanded to its intrinsic graph representation\n(representation graphs), where the graph is either flat like a drug compound or\nhierarchical like a protein with amino acid level graph, secondary structure,\ntertiary structure, etc. Our model not only allows the usage of information\nfrom both the high-level interaction graph and the low-level representation\ngraphs, but also offers a baseline for future research opportunities to address\nthe bi-level nature of the data.", + "authors": "Yunsheng Bai, Ken Gu, Yizhou Sun, Wei Wang", + "published": "2020-06-11", + "updated": "2020-06-11", + "primary_cat": "cs.CE", + "cats": [ + "cs.CE", + "cs.LG", + "stat.ML" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/2306.07699v2", + "title": "Time-aware Graph Structure Learning via Sequence Prediction on Temporal Graphs", + "abstract": "Temporal Graph Learning, which aims to model the time-evolving nature of\ngraphs, has gained increasing attention and achieved remarkable performance\nrecently. However, in reality, graph structures are often incomplete and noisy,\nwhich hinders temporal graph networks (TGNs) from learning informative\nrepresentations. Graph contrastive learning uses data augmentation to generate\nplausible variations of existing data and learn robust representations.\nHowever, rule-based augmentation approaches may be suboptimal as they lack\nlearnability and fail to leverage rich information from downstream tasks. To\naddress these issues, we propose a Time-aware Graph Structure Learning (TGSL)\napproach via sequence prediction on temporal graphs, which learns better graph\nstructures for downstream tasks through adding potential temporal edges. In\nparticular, it predicts a time-aware context embedding based on previously\nobserved interactions and uses the Gumbel-Top-K trick to select the closest candidate\nedges to this context embedding. Additionally, several candidate sampling\nstrategies are proposed to ensure both efficiency and diversity. Furthermore,\nwe jointly learn the graph structure and TGNs in an end-to-end manner and\nperform inference on the refined graph.
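A note on the TGSL entry above: Gumbel-Top-K is a generic trick for sampling k items without replacement, in proportion to softmax probabilities, by perturbing logits with Gumbel noise and taking the top k. The candidate-edge scores below are made up for illustration.

```python
import numpy as np

def gumbel_top_k(logits, k, rng=np.random.default_rng(0)):
    """Sample k items without replacement, with probability proportional
    to softmax(logits), by adding Gumbel(0, 1) noise to the logits and
    keeping the k largest (the Gumbel-Top-K trick)."""
    g = -np.log(-np.log(rng.uniform(size=logits.shape)))  # Gumbel noise
    return np.argsort(logits + g)[::-1][:k]

scores = np.array([2.0, 0.5, 1.5, -1.0, 0.0])  # e.g. candidate-edge scores
print(gumbel_top_k(scores, k=2))                # indices of 2 sampled edges
```

Because the argmax of Gumbel-perturbed logits is a softmax sample, repeating the top-k selection yields stochastic but score-biased edge choices, which is what makes the selection trainable end to end in methods of this kind.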
Extensive experiments on temporal link\nprediction benchmarks demonstrate that TGSL yields significant gains for the\npopular TGNs such as TGAT and GraphMixer, and it outperforms other contrastive\nlearning methods on temporal graphs. We release the code at\nhttps://github.com/ViktorAxelsen/TGSL.", + "authors": "Haozhen Zhang, Xueting Han, Xi Xiao, Jing Bai", + "published": "2023-06-13", + "updated": "2023-08-15", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/2111.04286v1", + "title": "Deep Unsupervised Active Learning on Learnable Graphs", + "abstract": "Recently deep learning has been successfully applied to unsupervised active\nlearning. However, current methods attempt to learn a nonlinear transformation\nvia an auto-encoder while ignoring the sample relation, leaving huge room to\ndesign more effective representation learning mechanisms for unsupervised\nactive learning. In this paper, we propose a novel deep unsupervised Active\nLearning model via Learnable Graphs, named ALLG. ALLG benefits from learning\noptimal graph structures to acquire better sample representation and select\nrepresentative samples. To make the learnt graph structure more stable and\neffective, we take the $k$-nearest neighbor graph into account as a prior, and\nlearn a relation propagation graph structure. We also incorporate shortcut\nconnections among different layers, which can alleviate the well-known\nover-smoothing problem to some extent. To the best of our knowledge, this is\nthe first attempt to leverage graph structure learning for unsupervised active\nlearning. Extensive experiments performed on six datasets demonstrate the\nefficacy of our method.", + "authors": "Handong Ma, Changsheng Li, Xinchu Shi, Ye Yuan, Guoren Wang", + "published": "2021-11-08", + "updated": "2021-11-08", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/2106.15239v1", + "title": "Generating the Graph Gestalt: Kernel-Regularized Graph Representation Learning", + "abstract": "Recent work on graph generative models has made remarkable progress towards\ngenerating increasingly realistic graphs, as measured by global graph features\nsuch as degree distribution, density, and clustering coefficients. Deep\ngenerative models have also made significant advances through better modelling\nof the local correlations in the graph topology, which have been very useful\nfor predicting unobserved graph components, such as the existence of a link or\nthe class of a node, from nearby observed graph components. A complete\nscientific understanding of graph data should address both global and local\nstructure. In this paper, we propose a joint model for both as complementary\nobjectives in a graph VAE framework. Global structure is captured by\nincorporating graph kernels in a probabilistic model whose loss function is\nclosely related to the maximum mean discrepancy (MMD) between the global\nstructures of the reconstructed and the input graphs. The ELBO objective\nderived from the model regularizes a standard local link reconstruction term\nwith an MMD term. Our experiments demonstrate a significant improvement in the\nrealism of the generated graph structures, typically by 1-2 orders of magnitude\nof graph structure metrics, compared to leading graph VAE and GAN models.
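The k-nearest-neighbor prior that ALLG (above) builds on is a standard construction. Here is a minimal NumPy sketch, assuming rows of X are sample feature vectors; the authors' actual pipeline learns a relation propagation structure on top of such a prior rather than using it directly.

```python
import numpy as np

def knn_prior_graph(X: np.ndarray, k: int = 10) -> np.ndarray:
    """Symmetric k-nearest-neighbor adjacency from feature vectors,
    usable as a structural prior for graph structure learning."""
    d2 = np.square(X[:, None, :] - X[None, :, :]).sum(-1)  # pairwise squared distances
    np.fill_diagonal(d2, np.inf)                           # exclude self-loops
    nn = np.argsort(d2, axis=1)[:, :k]                     # k closest nodes per row
    A = np.zeros(d2.shape)
    A[np.repeat(np.arange(len(X)), k), nn.ravel()] = 1.0
    return np.maximum(A, A.T)                              # symmetrize
```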
Local\nlink reconstruction improves as well in many cases.", + "authors": "Kiarash Zahirnia, Ankita Sakhuja, Oliver Schulte, Parmis Nadaf, Ke Li, Xia Hu", + "published": "2021-06-29", + "updated": "2021-06-29", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/1910.08057v1", + "title": "Graph Embedding VAE: A Permutation Invariant Model of Graph Structure", + "abstract": "Generative models of graph structure have applications in biology and social\nsciences. The state of the art is GraphRNN, which decomposes the graph\ngeneration process into a series of sequential steps. While effective for\nmodest sizes, it loses its permutation invariance for larger graphs. Instead,\nwe present a permutation invariant latent-variable generative model relying on\ngraph embeddings to encode structure. Using tools from the random graph\nliterature, our model is highly scalable to large graphs with likelihood\nevaluation and generation in $O(|V| + |E|)$.", + "authors": "Tony Duan, Juho Lee", + "published": "2019-10-17", + "updated": "2019-10-17", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "stat.ML" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/1910.01743v1", + "title": "Graph Generation with Variational Recurrent Neural Network", + "abstract": "Generating graph structures is a challenging problem due to the diverse\nrepresentations and complex dependencies among nodes. In this paper, we\nintroduce Graph Variational Recurrent Neural Network (GraphVRNN), a\nprobabilistic autoregressive model for graph generation. Through modeling the\nlatent variables of graph data, GraphVRNN can capture the joint distributions\nof graph structures and the underlying node attributes. We conduct experiments\non the proposed GraphVRNN in both graph structure learning and attribute\ngeneration tasks. The evaluation results show that the variational component\nallows our network to model complicated distributions, as well as generate\nplausible structures and node attributes.", + "authors": "Shih-Yang Su, Hossein Hajimirsadeghi, Greg Mori", + "published": "2019-10-02", + "updated": "2019-10-02", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "stat.ML" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/2111.03262v2", + "title": "CGCL: Collaborative Graph Contrastive Learning without Handcrafted Graph Data Augmentations", + "abstract": "Unsupervised graph representation learning is a non-trivial topic. The\nsuccess of contrastive methods in the unsupervised representation learning on\nstructured data inspires similar attempts on the graph. Existing graph\ncontrastive learning (GCL) aims to learn the invariance across multiple\naugmentation views, which renders it heavily reliant on the handcrafted graph\naugmentations. However, inappropriate graph data augmentations can potentially\njeopardize such invariance. In this paper, we show the potential hazards of\ninappropriate augmentations and then propose a novel Collaborative Graph\nContrastive Learning framework (CGCL). This framework harnesses multiple graph\nencoders to observe the graph. Features observed from different encoders serve\nas the contrastive views in contrastive learning, which avoids inducing\nunstable perturbation and guarantees the invariance.
To ensure the\ncollaboration among diverse graph encoders, we propose the concepts of\nasymmetric architecture and complementary encoders as the design principle. To\nfurther prove the rationality, we utilize two quantitative metrics to measure\nthe assembly of CGCL. Extensive experiments demonstrate the\nadvantages of CGCL in unsupervised graph-level representation learning and the\npotential of the collaborative framework. The source code for reproducibility is\navailable at https://github.com/zhangtia16/CGCL", + "authors": "Tianyu Zhang, Yuxiang Ren, Wenzheng Feng, Weitao Du, Xuecang Zhang", + "published": "2021-11-05", + "updated": "2024-04-01", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/2201.07409v2", + "title": "Dual Space Graph Contrastive Learning", + "abstract": "Unsupervised graph representation learning has emerged as a powerful tool to\naddress real-world problems and achieves huge success in the graph learning\ndomain. Graph contrastive learning is one of the unsupervised graph\nrepresentation learning methods, which recently attracts attention from\nresearchers and has achieved state-of-the-art performances on various tasks.\nThe key to the success of graph contrastive learning is to construct proper\ncontrasting pairs to acquire the underlying structural semantics of the graph.\nHowever, this key part is not fully explored currently; most of the ways of\ngenerating contrasting pairs focus on augmenting or perturbing graph\nstructures to obtain different views of the input graph. But such strategies\ncould degrade the performances via adding noise into the graph, which may\nnarrow down the field of the applications of graph contrastive learning. In\nthis paper, we propose a novel graph contrastive learning method, namely\n\\textbf{D}ual \\textbf{S}pace \\textbf{G}raph \\textbf{C}ontrastive (DSGC)\nLearning, to conduct graph contrastive learning among views generated in\ndifferent spaces including the hyperbolic space and the Euclidean space. Since\nboth spaces have their own advantages to represent graph data in the embedding\nspaces, we hope to utilize graph contrastive learning to bridge the spaces and\nleverage advantages from both sides. The comparison experiment results show\nthat DSGC achieves competitive or better performances among all the datasets.\nIn addition, we conduct extensive experiments to analyze the impact of\ndifferent graph encoders on DSGC, giving insights about how to better leverage\nthe advantages of contrastive learning between different spaces.
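CGCL and DSGC (above) both contrast embeddings of the same graph produced by different encoders or spaces. A generic NT-Xent loss of the kind typically used in such frameworks is sketched below, assuming z1 and z2 are row-aligned embeddings of the two views; neither paper's exact objective is reproduced here.

```python
import torch
import torch.nn.functional as F

def nt_xent(z1: torch.Tensor, z2: torch.Tensor, tau: float = 0.5) -> torch.Tensor:
    """InfoNCE-style loss: row i of z1 and row i of z2 are views of the same
    graph; all other rows in the batch act as negatives."""
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    logits = z1 @ z2.t() / tau                       # cross-view cosine similarities
    targets = torch.arange(z1.size(0), device=z1.device)
    return 0.5 * (F.cross_entropy(logits, targets) +
                  F.cross_entropy(logits.t(), targets))
```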
To address these issues,\nrecently emerged deep graph structure learning (GSL) methods propose to jointly\noptimize the graph structure along with GNN under the supervision of a node\nclassification task. Nonetheless, these methods focus on a supervised learning\nscenario, which leads to several problems, i.e., the reliance on labels, the\nbias of edge distribution, and the limitation on application tasks. In this\npaper, we propose a more practical GSL paradigm, unsupervised graph structure\nlearning, where the learned graph topology is optimized by data itself without\nany external guidance (i.e., labels). To solve the unsupervised GSL problem, we\npropose a novel StrUcture Bootstrapping contrastive LearnIng fraMEwork (SUBLIME\nfor abbreviation) with the aid of self-supervised contrastive learning.\nSpecifically, we generate a learning target from the original data as an\n\"anchor graph\", and use a contrastive loss to maximize the agreement between\nthe anchor graph and the learned graph. To provide persistent guidance, we\ndesign a novel bootstrapping mechanism that upgrades the anchor graph with\nlearned structures during model learning. We also design a series of graph\nlearners and post-processing schemes to model the structures to learn.\nExtensive experiments on eight benchmark datasets demonstrate the significant\neffectiveness of our proposed SUBLIME and high quality of the optimized graphs.", + "authors": "Yixin Liu, Yu Zheng, Daokun Zhang, Hongxu Chen, Hao Peng, Shirui Pan", + "published": "2022-01-17", + "updated": "2022-01-17", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/2402.02321v1", + "title": "Active Learning for Graphs with Noisy Structures", + "abstract": "Graph Neural Networks (GNNs) have seen significant success in tasks such as\nnode classification, largely contingent upon the availability of sufficient\nlabeled nodes. Yet, the excessive cost of labeling large-scale graphs led to a\nfocus on active learning on graphs, which aims for effective data selection to\nmaximize downstream model performance. Notably, most existing methods assume\nreliable graph topology, while real-world scenarios often present noisy graphs.\nGiven this, designing a successful active learning framework for noisy graphs\nis highly needed but challenging, as selecting data for labeling and obtaining\na clean graph are two tasks naturally interdependent: selecting high-quality\ndata requires clean graph structure while cleaning noisy graph structure\nrequires sufficient labeled data. Considering the complexity mentioned above,\nwe propose an active learning framework, GALClean, which has been specifically\ndesigned to adopt an iterative approach for conducting both data selection and\ngraph purification simultaneously with best information learned from the prior\niteration. Importantly, we summarize GALClean as an instance of the\nExpectation-Maximization algorithm, which provides a theoretical understanding\nof its design and mechanisms. This theory naturally leads to an enhanced\nversion, GALClean+. 
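SUBLIME's bootstrapping mechanism, described above as upgrading the anchor graph with learned structures during model learning, can be illustrated with an exponential-moving-average update; the decay rate below is an assumed value, not taken from the paper.

```python
import torch

def bootstrap_anchor(A_anchor: torch.Tensor, A_learned: torch.Tensor,
                     decay: float = 0.99) -> torch.Tensor:
    """Slowly absorb the learned structure into the anchor graph, so the
    contrastive target keeps improving as training proceeds."""
    with torch.no_grad():
        return decay * A_anchor + (1.0 - decay) * A_learned
```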
Extensive experiments have demonstrated the effectiveness\nand robustness of our proposed method across various types and levels of noisy\ngraphs.", + "authors": "Hongliang Chi, Cong Qi, Suhang Wang, Yao Ma", + "published": "2024-02-04", + "updated": "2024-02-04", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/1611.04687v2", + "title": "Intrinsic Geometric Information Transfer Learning on Multiple Graph-Structured Datasets", + "abstract": "Graphs provide a powerful means for representing complex interactions between\nentities. Recently, deep learning approaches are emerging for representing and\nmodeling graph-structured data, although the conventional deep learning methods\n(such as convolutional neural networks and recurrent neural networks) have\nmainly focused on grid-structured inputs (image and audio). Leveraged by the\ncapability of representation learning, deep learning based techniques are\nreporting promising results for graph applications by detecting structural\ncharacteristics of graphs in an automated fashion. In this paper, we attempt to\nadvance deep learning for graph-structured data by incorporating another\ncomponent, transfer learning. By transferring the intrinsic geometric\ninformation learned in the source domain, our approach can help us to construct\na model for a new but related task in the target domain without collecting new\ndata and without training a new model from scratch. We thoroughly test our\napproach with large-scale real corpora and confirm the effectiveness of the\nproposed transfer learning framework for deep learning on graphs. According to\nour experiments, transfer learning is most effective when the source and target\ndomains bear a high level of structural similarity in their graph\nrepresentations.", + "authors": "Jaekoo Lee, Hyunjae Kim, Jongsun Lee, Sungroh Yoon", + "published": "2016-11-15", + "updated": "2016-12-05", + "primary_cat": "cs.NE", + "cats": [ + "cs.NE" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/2111.06679v2", + "title": "deepstruct -- linking deep learning and graph theory", + "abstract": "deepstruct connects deep learning models and graph theory such that different\ngraph structures can be imposed on neural networks or graph structures can be\nextracted from trained neural network models. For this, deepstruct provides\ndeep neural network models with different restrictions which can be created\nbased on an initial graph. Further, tools to extract graph structures from\ntrained models are available. This step of extracting graphs can be\ncomputationally expensive even for models of just a few dozen thousand\nparameters and poses a challenging problem. 
deepstruct supports research in\npruning, neural architecture search, automated network design and structure\nanalysis of neural networks.", + "authors": "Julian Stier, Michael Granitzer", + "published": "2021-11-12", + "updated": "2021-12-05", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI", + "cs.NE", + "I.2.0; F.0" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/1904.08915v2", + "title": "Decoding Molecular Graph Embeddings with Reinforcement Learning", + "abstract": "We present RL-VAE, a graph-to-graph variational autoencoder that uses\nreinforcement learning to decode molecular graphs from latent embeddings.\nMethods have been described previously for graph-to-graph autoencoding, but\nthese approaches require sophisticated decoders that increase the complexity of\ntraining and evaluation (such as requiring parallel encoders and decoders or\nnon-trivial graph matching). Here, we repurpose a simple graph generator to\nenable efficient decoding and generation of molecular graphs.", + "authors": "Steven Kearnes, Li Li, Patrick Riley", + "published": "2019-04-18", + "updated": "2019-06-04", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "stat.ML" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/2108.04595v1", + "title": "Label-informed Graph Structure Learning for Node Classification", + "abstract": "Graph Neural Networks (GNNs) have achieved great success among various\ndomains. Nevertheless, most GNN methods are sensitive to the quality of graph\nstructures. To tackle this problem, some studies exploit different graph\nstructure learning strategies to refine the original graph structure. However,\nthese methods only consider feature information while ignoring available label\ninformation. In this paper, we propose a novel label-informed graph structure\nlearning framework which incorporates label information explicitly through a\nclass transition matrix. We conduct extensive experiments on seven node\nclassification benchmark datasets and the results show that our method\noutperforms or matches the state-of-the-art baselines.", + "authors": "Liping Wang, Fenyu Hu, Shu Wu, Liang Wang", + "published": "2021-08-10", + "updated": "2021-08-10", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/2302.02909v1", + "title": "Spectral Augmentations for Graph Contrastive Learning", + "abstract": "Contrastive learning has emerged as a premier method for learning\nrepresentations with or without supervision. Recent studies have shown its\nutility in graph representation learning for pre-training. Despite successes,\nthe understanding of how to design effective graph augmentations that can\ncapture structural properties common to many different types of downstream\ngraphs remains incomplete. We propose a set of well-motivated graph\ntransformation operations derived via graph spectral analysis to provide a bank\nof candidates when constructing augmentations for a graph contrastive\nobjective, enabling contrastive learning to capture useful structural\nrepresentation from pre-training graph datasets. We first present a spectral\ngraph cropping augmentation that involves filtering nodes by applying\nthresholds to the eigenvalues of the leading Laplacian eigenvectors. Our second\nnovel augmentation reorders the graph frequency components in a structural\nLaplacian-derived position graph embedding. 
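The spectral cropping augmentation described in the entry above filters nodes via leading Laplacian eigenvectors. Below is a toy NumPy sketch, assuming a dense adjacency matrix and thresholding the Fiedler vector; the specific eigenvector and threshold are illustrative choices, not the paper's.

```python
import numpy as np

def spectral_crop(A: np.ndarray, thresh: float = 0.0):
    """Drop nodes whose entry in a leading Laplacian eigenvector falls below
    a threshold, yielding a spectrally cropped subgraph view."""
    L = np.diag(A.sum(1)) - A               # combinatorial Laplacian
    _, vecs = np.linalg.eigh(L)             # eigenvectors, ascending eigenvalues
    keep = vecs[:, 1] >= thresh             # threshold the Fiedler vector
    return A[np.ix_(keep, keep)], keep
```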
Further, we introduce a method that\nleads to improved views of local subgraphs by performing alignment via global\nrandom walk embeddings. Our experimental results indicate consistent\nimprovements in out-of-domain graph data transfer compared to state-of-the-art\ngraph contrastive learning methods, shedding light on how to design a graph\nlearner that is able to learn structural properties common to diverse graph\ntypes.", + "authors": "Amur Ghose, Yingxue Zhang, Jianye Hao, Mark Coates", + "published": "2023-02-06", + "updated": "2023-02-06", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/2101.06861v3", + "title": "Discrete Graph Structure Learning for Forecasting Multiple Time Series", + "abstract": "Time series forecasting is an extensively studied subject in statistics,\neconomics, and computer science. Exploration of the correlation and causation\namong the variables in a multivariate time series shows promise in enhancing\nthe performance of a time series model. When using deep neural networks as\nforecasting models, we hypothesize that exploiting the pairwise information\namong multiple (multivariate) time series also improves their forecast. If an\nexplicit graph structure is known, graph neural networks (GNNs) have been\ndemonstrated as powerful tools to exploit the structure. In this work, we\npropose learning the structure simultaneously with the GNN if the graph is\nunknown. We cast the problem as learning a probabilistic graph model through\noptimizing the mean performance over the graph distribution. The distribution\nis parameterized by a neural network so that discrete graphs can be sampled\ndifferentiably through reparameterization. Empirical evaluations show that our\nmethod is simpler, more efficient, and better performing than a recently\nproposed bilevel learning approach for graph structure learning, as well as a\nbroad array of forecasting models, either deep or non-deep learning based, and\ngraph or non-graph based.", + "authors": "Chao Shang, Jie Chen, Jinbo Bi", + "published": "2021-01-18", + "updated": "2021-04-21", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "stat.ML" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/2008.10065v1", + "title": "Kernel-based Graph Learning from Smooth Signals: A Functional Viewpoint", + "abstract": "The problem of graph learning concerns the construction of an explicit\ntopological structure revealing the relationship between nodes representing\ndata entities, which plays an increasingly important role in the success of\nmany graph-based representations and algorithms in the field of machine\nlearning and graph signal processing. In this paper, we propose a novel graph\nlearning framework that incorporates the node-side and observation-side\ninformation, and in particular the covariates that help to explain the\ndependency structures in graph signals. To this end, we consider graph signals\nas functions in the reproducing kernel Hilbert space associated with a\nKronecker product kernel, and integrate functional learning with\nsmoothness-promoting graph learning to learn a graph representing the\nrelationship between nodes. 
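The time-series forecasting entry above samples discrete graphs differentiably through reparameterization. One standard way to realize this is the binary Concrete (Gumbel-sigmoid) relaxation, sketched here in PyTorch with an assumed temperature; the paper's exact parameterization may differ.

```python
import torch

def sample_soft_adjacency(edge_logits: torch.Tensor, tau: float = 0.5) -> torch.Tensor:
    """Reparameterized Bernoulli edges: add Logistic noise to the logits and
    squash, giving a differentiable 'soft' adjacency matrix in (0, 1)."""
    u = torch.rand_like(edge_logits).clamp_(1e-10, 1 - 1e-10)
    logistic = torch.log(u) - torch.log1p(-u)        # Logistic(0, 1) noise
    return torch.sigmoid((edge_logits + logistic) / tau)
```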
The functional learning increases the robustness of\ngraph learning against missing and incomplete information in the graph signals.\nIn addition, we develop a novel graph-based regularisation method which, when\ncombined with the Kronecker product kernel, enables our model to capture both\nthe dependency explained by the graph and the dependency due to graph signals\nobserved under different but related circumstances, e.g. different points in\ntime. The latter means the graph signals are free from the i.i.d. assumptions\nrequired by the classical graph learning models. Experiments on both synthetic\nand real-world data show that our methods outperform the state-of-the-art\nmodels in learning a meaningful graph topology from graph signals, in\nparticular under heavy noise, missing values, and multiple dependency.", + "authors": "Xingyue Pu, Siu Lun Chau, Xiaowen Dong, Dino Sejdinovic", + "published": "2020-08-23", + "updated": "2020-08-23", + "primary_cat": "stat.ML", + "cats": [ + "stat.ML", + "cs.LG", + "cs.SI", + "eess.SP" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/1609.04350v2", + "title": "Time-Variant Graph Classification", + "abstract": "Graphs are commonly used to represent objects, such as images and text, for\npattern classification. In a dynamic world, an object may continuously evolve\nover time, and so does the graph extracted from the underlying object. These\nchanges in graph structure with respect to the temporal order present a new\nrepresentation of the graph, in which an object corresponds to a set of\ntime-variant graphs. In this paper, we formulate a novel time-variant graph\nclassification task and propose a new graph feature, called a graph-shapelet\npattern, for learning and classifying time-variant graphs. Graph-shapelet\npatterns are compact and discriminative graph transformation subsequences. A\ngraph-shapelet pattern can be regarded as a graphical extension of a shapelet\n-- a class of discriminative features designed for vector-based temporal data\nclassification. To discover graph-shapelet patterns, we propose to convert a\ntime-variant graph sequence into time-series data and use the discovered\nshapelets to find graph transformation subsequences as graph-shapelet patterns.\nBy converting each graph-shapelet pattern into a unique tokenized graph\ntransformation sequence, we can measure the similarity between two\ngraph-shapelet patterns and therefore classify time-variant graphs. Experiments\non both synthetic and real-world data demonstrate the superior performance of\nthe proposed algorithms.", + "authors": "Haishuai Wang", + "published": "2016-09-14", + "updated": "2017-06-12", + "primary_cat": "cs.DS", + "cats": [ + "cs.DS" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/2307.02126v1", + "title": "Robust Graph Structure Learning with the Alignment of Features and Adjacency Matrix", + "abstract": "To improve the robustness of graph neural networks (GNN), graph structure\nlearning (GSL) has attracted great interest due to the pervasiveness of noise\nin graph data. Many approaches have been proposed for GSL to jointly learn a\nclean graph structure and corresponding representations. 
To extend the previous\nwork, this paper proposes a novel regularized GSL approach, particularly with\nan alignment of feature information and graph information, which is motivated\nmainly by our derived lower bound of node-level Rademacher complexity for GNNs.\nAdditionally, our proposed approach incorporates sparse dimensional reduction\nto leverage low-dimensional node features that are relevant to the graph\nstructure. To evaluate the effectiveness of our approach, we conduct\nexperiments on real-world graphs. The results demonstrate that our proposed GSL\nmethod outperforms several competitive baselines, especially in scenarios where\nthe graph structures are heavily affected by noise. Overall, our research\nhighlights the importance of integrating feature and graph information\nalignment in GSL, as inspired by our derived theoretical result, and showcases\nthe superiority of our approach in handling noisy graph structures through\ncomprehensive experiments on real-world datasets.", + "authors": "Shaogao Lv, Gang Wen, Shiyu Liu, Linsen Wei, Ming Li", + "published": "2023-07-05", + "updated": "2023-07-05", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "stat.ML" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/2403.07294v1", + "title": "Graph Data Condensation via Self-expressive Graph Structure Reconstruction", + "abstract": "With the increasing demands of training graph neural networks (GNNs) on\nlarge-scale graphs, graph data condensation has emerged as a critical technique\nto relieve the storage and time costs during the training phase. It aims to\ncondense the original large-scale graph to a much smaller synthetic graph while\npreserving the essential information necessary for efficiently training a\ndownstream GNN. However, existing methods concentrate either on optimizing node\nfeatures exclusively or endeavor to independently learn node features and the\ngraph structure generator. They could not explicitly leverage the information\nof the original graph structure and failed to construct an interpretable graph\nstructure for the synthetic dataset. To address these issues, we introduce a\nnovel framework named \\textbf{G}raph Data \\textbf{C}ondensation via\n\\textbf{S}elf-expressive Graph Structure \\textbf{R}econstruction\n(\\textbf{GCSR}). Our method stands out by (1) explicitly incorporating the\noriginal graph structure into the condensing process and (2) capturing the\nnuanced interdependencies between the condensed nodes by reconstructing an\ninterpretable self-expressive graph structure. Extensive experiments and\ncomprehensive analysis validate the efficacy of the proposed method across\ndiverse GNN models and datasets. Our code is available at\nhttps://www.dropbox.com/scl/fi/2aonyp5ln5gisdqtjimu8/GCSR.zip?rlkey=11cuwfpsf54wxiiktu0klud0x&dl=0", + "authors": "Zhanyu Liu, Chaolv Zeng, Guanjie Zheng", + "published": "2024-03-12", + "updated": "2024-03-12", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI", + "cs.SI" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/2203.09205v1", + "title": "SoK: Differential Privacy on Graph-Structured Data", + "abstract": "In this work, we study the applications of differential privacy (DP) in the\ncontext of graph-structured data. We discuss the formulations of DP applicable\nto the publication of graphs and their associated statistics as well as machine\nlearning on graph-based data, including graph neural networks (GNNs). 
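GCSR's self-expressive reconstruction (above) admits a simple ridge-regularized reference point: solve min_C ||X - CX||_F^2 + lam ||C||_F^2 in closed form. The sketch below uses that simplified objective; the zeroed diagonal is a common heuristic, and none of this is the paper's full condensation procedure.

```python
import numpy as np

def self_expressive_graph(X: np.ndarray, lam: float = 1.0) -> np.ndarray:
    """Closed-form ridge solution C = X X^T (X X^T + lam I)^{-1}, so each row
    expresses a node's features as a combination of the other nodes."""
    G = X @ X.T
    C = G @ np.linalg.inv(G + lam * np.eye(len(X)))
    np.fill_diagonal(C, 0.0)    # heuristic: discourage trivial self-expression
    return C
```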
The\nformulation of DP in the context of graph-structured data is difficult, as\nindividual data points are interconnected (often non-linearly or sparsely).\nThis connectivity complicates the computation of individual privacy loss in\ndifferentially private learning. The problem is exacerbated by an absence of a\nsingle, well-established formulation of DP in graph settings. This issue\nextends to the domain of GNNs, rendering private machine learning on\ngraph-structured data a challenging task. A lack of prior systematisation work\nmotivated us to study graph-based learning from a privacy perspective. In this\nwork, we systematise different formulations of DP on graphs, discuss challenges\nand promising applications, including the GNN domain. We compare and separate\nworks into graph analysis tasks and graph learning tasks with GNNs. Finally, we\nconclude our work with a discussion of open questions and potential directions\nfor further research in this area.", + "authors": "Tamara T. Mueller, Dmitrii Usynin, Johannes C. Paetzold, Daniel Rueckert, Georgios Kaissis", + "published": "2022-03-17", + "updated": "2022-03-17", + "primary_cat": "cs.CR", + "cats": [ + "cs.CR", + "cs.AI", + "cs.LG" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/1904.09792v1", + "title": "A Unified Framework for Structured Graph Learning via Spectral Constraints", + "abstract": "Graph learning from data represents a canonical problem that has received\nsubstantial attention in the literature. However, insufficient work has been\ndone in incorporating prior structural knowledge onto the learning of\nunderlying graphical models from data. Learning a graph with a specific\nstructure is essential for interpretability and identification of the\nrelationships among data. Useful structured graphs include the multi-component\ngraph, bipartite graph, connected graph, sparse graph, and regular graph. In\ngeneral, structured graph learning is an NP-hard combinatorial problem,\ntherefore, designing a general tractable optimization method is extremely\nchallenging. In this paper, we introduce a unified graph learning framework\nlying at the integration of Gaussian graphical models and spectral graph\ntheory. To impose a particular structure on a graph, we first show how to\nformulate the combinatorial constraints as an analytical property of the graph\nmatrix. Then we develop an optimization framework that leverages graph learning\nwith specific structures via spectral constraints on graph matrices. The\nproposed algorithms are provably convergent, computationally efficient, and\npractically amenable for numerous graph-based tasks. Extensive numerical\nexperiments with both synthetic and real data sets illustrate the effectiveness\nof the proposed algorithms. The code for all the simulations is made available\nas an open source repository.", + "authors": "Sandeep Kumar, Jiaxi Ying, Jos\u00e9 Vin\u00edcius de M. Cardoso, Daniel Palomar", + "published": "2019-04-22", + "updated": "2019-04-22", + "primary_cat": "stat.ML", + "cats": [ + "stat.ML", + "cs.LG", + "cs.SI", + "math.OC" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/1903.00614v1", + "title": "GAP: Generalizable Approximate Graph Partitioning Framework", + "abstract": "Graph partitioning is the problem of dividing the nodes of a graph into\nbalanced partitions while minimizing the edge cut across the partitions. 
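GAP, introduced in the entry that begins above, trains with a differentiable loss representing the partitioning objective. Here is a sketch of an expected normalized-cut surrogate in PyTorch, assuming Y holds soft partition assignments; GAP also balances partition sizes, which is omitted here.

```python
import torch

def expected_ncut_loss(Y: torch.Tensor, A: torch.Tensor) -> torch.Tensor:
    """Differentiable normalized-cut surrogate for soft assignments Y [n, g]:
    sum over partitions of cut(S, S-complement) / vol(S)."""
    d = A.sum(dim=1, keepdim=True)           # node degrees [n, 1]
    vol = Y.t() @ d                           # expected partition volumes [g, 1]
    Y_norm = Y / vol.t().clamp_min(1e-9)      # divide each column by its volume
    return (Y_norm.t() @ A @ (1.0 - Y)).diagonal().sum()
```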
Due to\nits combinatorial nature, many approximate solutions have been developed,\nincluding variants of multi-level methods and spectral clustering. We propose\nGAP, a Generalizable Approximate Partitioning framework that takes a deep\nlearning approach to graph partitioning. We define a differentiable loss\nfunction that represents the partitioning objective and use backpropagation to\noptimize the network parameters. Unlike baselines that redo the optimization\nper graph, GAP is capable of generalization, allowing us to train models that\nproduce performant partitions at inference time, even on unseen graphs.\nFurthermore, because we learn the representation of the graph while jointly\noptimizing for the partitioning loss function, GAP can be easily tuned for a\nvariety of graph structures. We evaluate the performance of GAP on graphs of\nvarying sizes and structures, including graphs of widely used machine learning\nmodels (e.g., ResNet, VGG, and Inception-V3), scale-free graphs, and random\ngraphs. We show that GAP achieves competitive partitions while being up to 100\ntimes faster than the baseline and generalizes to unseen graphs.", + "authors": "Azade Nazi, Will Hang, Anna Goldie, Sujith Ravi, Azalia Mirhoseini", + "published": "2019-03-02", + "updated": "2019-03-02", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "stat.ML" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/2106.10124v1", + "title": "Graph Context Encoder: Graph Feature Inpainting for Graph Generation and Self-supervised Pretraining", + "abstract": "We propose the Graph Context Encoder (GCE), a simple but efficient approach\nfor graph representation learning based on graph feature masking and\nreconstruction.\n GCE models are trained to efficiently reconstruct input graphs similarly to a\ngraph autoencoder where node and edge labels are masked. In particular, our\nmodel is also allowed to change graph structures by masking and reconstructing\ngraphs augmented by random pseudo-edges.\n We show that GCE can be used for novel graph generation, with applications\nfor molecule generation. Used as a pretraining method, we also show that GCE\nimproves baseline performances in supervised classification tasks tested on\nmultiple standard benchmark graph datasets.", + "authors": "Oriel Frigo, R\u00e9my Brossard, David Dehaene", + "published": "2021-06-18", + "updated": "2021-06-18", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "68T07" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/2304.13195v1", + "title": "Connector 0.5: A unified framework for graph representation learning", + "abstract": "Graph representation learning models aim to represent the graph structure and\nits features into low-dimensional vectors in a latent space, which can benefit\nvarious downstream tasks, such as node classification and link prediction. Due\nto its powerful graph data modelling capabilities, various graph embedding\nmodels and libraries have been proposed to learn embeddings and help\nresearchers ease conducting experiments. In this paper, we introduce a novel\ngraph representation framework covering various graph embedding models, ranging\nfrom shallow to state-of-the-art models, namely Connector. First, we consider\ngraph generation by constructing various types of graphs with different\nstructural relations, including homogeneous, signed, heterogeneous, and\nknowledge graphs. 
Second, we introduce various graph representation learning\nmodels, ranging from shallow to deep graph embedding models. Finally, we plan\nto build an efficient open-source framework that can provide deep graph\nembedding models to represent structural relations in graphs. The framework is\navailable at https://github.com/NSLab-CUK/Connector.", + "authors": "Thanh Sang Nguyen, Jooho Lee, Van Thuy Hoang, O-Joun Lee", + "published": "2023-04-25", + "updated": "2023-04-25", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.SI" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/1802.04407v2", + "title": "Adversarially Regularized Graph Autoencoder for Graph Embedding", + "abstract": "Graph embedding is an effective method to represent graph data in a low\ndimensional space for graph analytics. Most existing embedding algorithms\ntypically focus on preserving the topological structure or minimizing the\nreconstruction errors of graph data, but they have mostly ignored the data\ndistribution of the latent codes from the graphs, which often results in\ninferior embedding in real-world graph data. In this paper, we propose a novel\nadversarial graph embedding framework for graph data. The framework encodes the\ntopological structure and node content in a graph to a compact representation,\non which a decoder is trained to reconstruct the graph structure. Furthermore,\nthe latent representation is enforced to match a prior distribution via an\nadversarial training scheme. To learn a robust embedding, two variants of\nadversarial approaches, adversarially regularized graph autoencoder (ARGA) and\nadversarially regularized variational graph autoencoder (ARVGA), are developed.\nExperimental studies on real-world graphs validate our design and demonstrate\nthat our algorithms outperform baselines by a wide margin in link prediction,\ngraph clustering, and graph visualization tasks.", + "authors": "Shirui Pan, Ruiqi Hu, Guodong Long, Jing Jiang, Lina Yao, Chengqi Zhang", + "published": "2018-02-13", + "updated": "2019-01-08", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "stat.ML" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/1904.09671v1", + "title": "DDGK: Learning Graph Representations for Deep Divergence Graph Kernels", + "abstract": "Can neural networks learn to compare graphs without feature engineering? In\nthis paper, we show that it is possible to learn representations for graph\nsimilarity with neither domain knowledge nor supervision (i.e.\\ feature\nengineering or labeled graphs). We propose Deep Divergence Graph Kernels, an\nunsupervised method for learning representations over graphs that encodes a\nrelaxed notion of graph isomorphism. Our method consists of three parts. First,\nwe learn an encoder for each anchor graph to capture its structure. Second, for\neach pair of graphs, we train a cross-graph attention network which uses the\nnode representations of an anchor graph to reconstruct another graph. This\napproach, which we call isomorphism attention, captures how well the\nrepresentations of one graph can encode another. We use the attention-augmented\nencoder's predictions to define a divergence score for each pair of graphs.\nFinally, we construct an embedding space for all graphs using these pair-wise\ndivergence scores.\n Unlike previous work, much of which relies on 1) supervision, 2) domain\nspecific knowledge (e.g. 
a reliance on Weisfeiler-Lehman kernels), and 3) known\nnode alignment, our unsupervised method jointly learns node representations,\ngraph representations, and an attention-based alignment between graphs.\n Our experimental results show that Deep Divergence Graph Kernels can learn an\nunsupervised alignment between graphs, and that the learned representations\nachieve competitive results when used as features on a number of challenging\ngraph classification tasks. Furthermore, we illustrate how the learned\nattention allows insight into the alignment of sub-structures across\ngraphs.", + "authors": "Rami Al-Rfou, Dustin Zelle, Bryan Perozzi", + "published": "2019-04-21", + "updated": "2019-04-21", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.IR", + "cs.SI", + "stat.ML" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/2306.11307v3", + "title": "Transforming Graphs for Enhanced Attribute Clustering: An Innovative Graph Transformer-Based Method", + "abstract": "Graph Representation Learning (GRL) is an influential methodology, enabling a\nmore profound understanding of graph-structured data and aiding graph\nclustering, a critical task across various domains. The recent incursion of\nattention mechanisms, originally an artifact of Natural Language Processing\n(NLP), into the realm of graph learning has spearheaded a notable shift in\nresearch trends. Consequently, Graph Attention Networks (GATs) and Graph\nAttention Auto-Encoders have emerged as preferred tools for graph clustering\ntasks. Yet, these methods primarily employ a local attention mechanism, thereby\ncurbing their capacity to apprehend the intricate global dependencies between\nnodes within graphs. Addressing these impediments, this study introduces an\ninnovative method known as the Graph Transformer Auto-Encoder for Graph\nClustering (GTAGC). By melding the Graph Auto-Encoder with the Graph\nTransformer, GTAGC is adept at capturing global dependencies between nodes.\nThis integration amplifies the graph representation and surmounts the\nconstraints posed by the local attention mechanism. The architecture of GTAGC\nencompasses graph embedding, integration of the Graph Transformer within the\nautoencoder structure, and a clustering component. It strategically alternates\nbetween graph embedding and clustering, thereby tailoring the Graph Transformer\nfor clustering tasks, whilst preserving the graph's global structural\ninformation. Through extensive experimentation on diverse benchmark datasets,\nGTAGC has exhibited superior performance against existing state-of-the-art\ngraph clustering methodologies.", + "authors": "Shuo Han, Jiacheng Liu, Jiayun Wu, Yinan Chen, Li Tao", + "published": "2023-06-20", + "updated": "2023-08-12", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/2401.16176v1", + "title": "A Survey on Structure-Preserving Graph Transformers", + "abstract": "The transformer architecture has shown remarkable success in various domains,\nsuch as natural language processing and computer vision. When it comes to graph\nlearning, transformers are required not only to capture the interactions\nbetween pairs of nodes but also to preserve graph structures connoting the\nunderlying relations and proximity between them, showing the expressive power\nto capture different graph structures.
Accordingly, various\nstructure-preserving graph transformers have been proposed and widely used for\nvarious tasks, such as graph-level tasks in bioinformatics and\nchemoinformatics. However, strategies related to graph structure preservation\nhave not been well organized and systematized in the literature. In this paper,\nwe provide a comprehensive overview of structure-preserving graph transformers\nand generalize these methods from the perspective of their design objective.\nFirst, we divide strategies into four main groups: node feature modulation,\ncontext node sampling, graph rewriting, and transformer architecture\nimprovements. We then further divide the strategies according to the coverage\nand goals of graph structure preservation. Furthermore, we also discuss\nchallenges and future directions for graph transformer models to preserve the\ngraph structure and understand the nature of graphs.", + "authors": "Van Thuy Hoang, O-Joun Lee", + "published": "2024-01-29", + "updated": "2024-01-29", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/2012.05980v1", + "title": "CommPOOL: An Interpretable Graph Pooling Framework for Hierarchical Graph Representation Learning", + "abstract": "Recent years have witnessed the emergence and flourishing of hierarchical\ngraph pooling neural networks (HGPNNs) which are effective graph representation\nlearning approaches for graph level tasks such as graph classification.\nHowever, current HGPNNs do not take full advantage of the graph's intrinsic\nstructures (e.g., community structure). Moreover, the pooling operations in\nexisting HGPNNs are difficult to be interpreted. In this paper, we propose a\nnew interpretable graph pooling framework - CommPOOL, that can capture and\npreserve the hierarchical community structure of graphs in the graph\nrepresentation learning process. Specifically, the proposed community pooling\nmechanism in CommPOOL utilizes an unsupervised approach for capturing the\ninherent community structure of graphs in an interpretable manner. CommPOOL is\na general and flexible framework for hierarchical graph representation learning\nthat can further facilitate various graph-level tasks. Evaluations on five\npublic benchmark datasets and one synthetic dataset demonstrate the superior\nperformance of CommPOOL in graph representation learning for graph\nclassification compared to the state-of-the-art baseline methods, and its\neffectiveness in capturing and preserving the community structure of graphs.", + "authors": "Haoteng Tang, Guixiang Ma, Lifang He, Heng Huang, Liang Zhan", + "published": "2020-12-10", + "updated": "2020-12-10", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/1901.07439v1", + "title": "Multiple Graph Adversarial Learning", + "abstract": "Recently, Graph Convolutional Networks (GCNs) have been widely studied for\ngraph-structured data representation and learning. However, in many real\napplications, data are coming with multiple graphs, and it is non-trivial to\nadapt GCNs to deal with data representation with multiple graph structures. One\nmain challenge for multi-graph representation is how to exploit both structure\ninformation of each individual graph and correlation information across\nmultiple graphs simultaneously. 
In this paper, we propose a novel Multiple\nGraph Adversarial Learning (MGAL) framework for multi-graph representation and\nlearning. MGAL aims to learn an optimal structure-invariant and consistent\nrepresentation for multiple graphs in a common subspace via a novel adversarial\nlearning framework, which thus incorporates both structure information of\nintra-graph and correlation information of inter-graphs simultaneously. Based\non MGAL, we then provide a unified network for semi-supervised learning tasks.\nPromising experimental results demonstrate the effectiveness of the MGAL model.", + "authors": "Bo Jiang, Ziyan Zhang, Jin Tang, Bin Luo", + "published": "2019-01-22", + "updated": "2019-01-22", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/2110.05018v2", + "title": "Time-varying Graph Learning Under Structured Temporal Priors", + "abstract": "This paper endeavors to learn time-varying graphs by using structured\ntemporal priors that assume underlying relations between any two graphs\nin the graph sequence. Different from many existing chain structure based\nmethods in which the priors like temporal homogeneity can only describe the\nvariations of two consecutive graphs, we propose a structure named\n\\emph{temporal graph} to characterize the underlying real temporal relations.\nUnder this framework, the chain structure is actually a special case of our\ntemporal graph. We further propose the Alternating Direction Method of Multipliers\n(ADMM), a distributed algorithm, to solve the induced optimization problem.\nNumerical experiments demonstrate the superiority of our method.", + "authors": "Xiang Zhang, Qiao Wang", + "published": "2021-10-11", + "updated": "2022-02-23", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "eess.SP" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/2312.04762v1", + "title": "The Graph Lottery Ticket Hypothesis: Finding Sparse, Informative Graph Structure", + "abstract": "Graph learning methods help utilize implicit relationships among data items,\nthereby reducing training label requirements and improving task performance.\nHowever, determining the optimal graph structure for a particular learning task\nremains a challenging research problem.\n In this work, we introduce the Graph Lottery Ticket (GLT) Hypothesis - that\nthere is an extremely sparse backbone for every graph, and that graph learning\nalgorithms attain comparable performance when trained on that subgraph as on\nthe full graph. We identify and systematically study 8 key metrics of interest\nthat directly influence the performance of graph learning algorithms.\nSubsequently, we define the notion of a \"winning ticket\" for graph structure -\nan extremely sparse subset of edges that can deliver a robust approximation of\nthe entire graph's performance. We propose a straightforward and efficient\nalgorithm for finding these GLTs in arbitrary graphs.
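The GLT entry above does not spell out its algorithm here, but the target is easy to state: retain a sparse edge subset with a given average degree. Below is a crude weight-based stand-in in NumPy; the scoring rule is a placeholder of mine, not the paper's algorithm, which studies 8 structural metrics.

```python
import numpy as np

def sparsify_to_avg_degree(A: np.ndarray, avg_degree: float = 5.0) -> np.ndarray:
    """Keep only the m = avg_degree * n / 2 strongest edges of an undirected
    weighted graph, a toy stand-in for extracting a sparse 'winning ticket'."""
    n = A.shape[0]
    iu = np.triu_indices(n, k=1)              # each undirected edge once
    w = A[iu]
    m = min(int(avg_degree * n / 2), int((w > 0).sum()))
    keep = np.argsort(w)[::-1][:m]            # indices of the m largest weights
    S = np.zeros_like(A)
    S[iu[0][keep], iu[1][keep]] = w[keep]
    return S + S.T
```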
Empirically, we observe\nthat the performance of different graph learning algorithms can be matched or even\nexceeded on graphs with an average degree as low as 5.", + "authors": "Anton Tsitsulin, Bryan Perozzi", + "published": "2023-12-08", + "updated": "2023-12-08", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI", + "cs.SI" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/2109.11796v1", + "title": "Edge but not Least: Cross-View Graph Pooling", + "abstract": "Graph neural networks have emerged as a powerful model for graph\nrepresentation learning to undertake graph-level prediction tasks. Various\ngraph pooling methods have been developed to coarsen an input graph into a\nsuccinct graph-level representation through aggregating node embeddings\nobtained via graph convolution. However, most graph pooling methods are heavily\nnode-centric and are unable to fully leverage the crucial information contained\nin global graph structure. This paper presents a cross-view graph pooling\n(Co-Pooling) method to better exploit crucial graph structure information. The\nproposed Co-Pooling fuses pooled representations learnt from both node view and\nedge view. Through cross-view interaction, edge-view pooling and node-view\npooling seamlessly reinforce each other to learn more informative graph-level\nrepresentations. Co-Pooling has the advantage of handling various graphs with\ndifferent types of node attributes. Extensive experiments on a total of 15\ngraph benchmark datasets validate the effectiveness of our proposed method,\ndemonstrating its superior performance over state-of-the-art pooling methods on\nboth graph classification and graph regression tasks.", + "authors": "Xiaowei Zhou, Jie Yin, Ivor W. Tsang", + "published": "2021-09-24", + "updated": "2021-09-24", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/1905.11691v1", + "title": "Triple2Vec: Learning Triple Embeddings from Knowledge Graphs", + "abstract": "Graph embedding techniques allow one to learn high-quality feature vectors from\ngraph structures and are useful in a variety of tasks, from node classification\nto clustering. Existing approaches have only focused on learning feature\nvectors for the nodes in a (knowledge) graph. To the best of our knowledge,\nnone of them has tackled the problem of embedding graph edges, that is,\nknowledge graph triples. The approaches that are closer to this task have\nfocused on homogeneous graphs involving only one type of edge and obtain edge\nembeddings by applying some operation (e.g., average) on the embeddings of the\nendpoint nodes. The goal of this paper is to introduce Triple2Vec, a new\ntechnique to directly embed edges in (knowledge) graphs. Triple2Vec builds upon\nthree main ingredients. The first is the notion of line graph. The line graph\nof a graph is another graph representing the adjacency between edges of the\noriginal graph. In particular, the nodes of the line graph are the edges of the\noriginal graph. We show that directly applying existing embedding techniques on\nthe nodes of the line graph to learn edge embeddings is not enough in the\ncontext of knowledge graphs. Thus, we introduce the notion of triple line\ngraph. The second is an edge weighting mechanism both for line graphs derived\nfrom knowledge graphs and homogeneous graphs.
The third is a strategy based on\ngraph walks on the weighted triple line graph that can preserve proximity\nbetween nodes. Embeddings are finally generated by adopting the SkipGram model,\nwhere sentences are replaced with graph walks. We evaluate our approach on\ndifferent real-world (knowledge) graphs and compare it with related work.", + "authors": "Valeria Fionda, Giuseppe Pirr\u00f3", + "published": "2019-05-28", + "updated": "2019-05-28", + "primary_cat": "cs.AI", + "cats": [ + "cs.AI", + "cs.LG" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/2404.11869v1", + "title": "Multi-view Graph Structural Representation Learning via Graph Coarsening", + "abstract": "Graph Transformers (GTs) have made remarkable achievements in graph-level\ntasks. However, most existing works regard graph structures as a form of\nguidance or bias for enhancing node representations, which focuses on\nnode-central perspectives and lacks explicit representations of edges and\nstructures. One natural question is, can we treat graph structures node-like as\na whole to learn high-level features? Through experimental analysis, we explore\nthe feasibility of this assumption. Based on our findings, we propose a novel\nmulti-view graph structural representation learning model via graph coarsening\n(MSLgo) on GT architecture for graph classification. Specifically, we build\nthree unique views, original, coarsening, and conversion, to learn a thorough\nstructural representation. We compress loops and cliques via hierarchical\nheuristic graph coarsening and restrict them with well-designed constraints,\nwhich builds the coarsening view to learn high-level interactions between\nstructures. We also introduce line graphs for edge embeddings and switch to\nan edge-central perspective to construct the conversion view. Experiments on six\nreal-world datasets demonstrate the improvements of MSLgo over 14 baselines\nfrom various architectures.", + "authors": "Xiaorui Qi, Qijie Bai, Yanlong Wen, Haiwei Zhang, Xiaojie Yuan", + "published": "2024-04-18", + "updated": "2024-04-18", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.SI" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/2306.08201v1", + "title": "Graph Laplacian Learning with Exponential Family Noise", + "abstract": "A common challenge in applying graph machine learning methods is that the\nunderlying graph of a system is often unknown. Although different graph\ninference methods have been proposed for continuous graph signals, inferring\nthe graph structure underlying other types of data, such as discrete counts, is\nunder-explored. In this paper, we generalize a graph signal processing (GSP)\nframework for learning a graph from smooth graph signals to the exponential\nfamily noise distribution to model various data types. We propose an\nalternating algorithm that estimates the graph Laplacian as well as the\nunobserved smooth representation from the noisy signals.
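Both Triple2Vec and MSLgo (above) rely on line graphs whose nodes are the edges of the base graph. Here is a minimal construction sketch, quadratic in the number of edges and intended only for illustration, not either paper's weighted or triple variant.

```python
import itertools
import numpy as np

def line_graph(edges: list[tuple[int, int]]) -> np.ndarray:
    """Adjacency of the line graph: each original edge becomes a node, and two
    such nodes are connected iff the original edges share an endpoint."""
    m = len(edges)
    L = np.zeros((m, m))
    for i, j in itertools.combinations(range(m), 2):
        if set(edges[i]) & set(edges[j]):   # shared endpoint in the base graph
            L[i, j] = L[j, i] = 1.0
    return L
```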
We demonstrate in\nsynthetic and real-world data that our new algorithm outperforms competing\nLaplacian estimation methods under noise model mismatch.", + "authors": "Changhao Shi, Gal Mishne", + "published": "2023-06-14", + "updated": "2023-06-14", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "eess.SP" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/2403.04923v2", + "title": "Control-based Graph Embeddings with Data Augmentation for Contrastive Learning", + "abstract": "In this paper, we study the problem of unsupervised graph representation\nlearning by harnessing the control properties of dynamical networks defined on\ngraphs. Our approach introduces a novel framework for contrastive learning, a\nwidely prevalent technique for unsupervised representation learning. A crucial\nstep in contrastive learning is the creation of 'augmented' graphs from the\ninput graphs. Though different from the original graphs, these augmented graphs\nretain the original graph's structural characteristics. Here, we propose a\nunique method for generating these augmented graphs by leveraging the control\nproperties of networks. The core concept revolves around perturbing the\noriginal graph to create a new one while preserving the controllability\nproperties specific to networks and graphs. Compared to the existing methods,\nwe demonstrate that this innovative approach enhances the effectiveness of\ncontrastive learning frameworks, leading to superior results regarding the\naccuracy of the classification tasks. The key innovation lies in our ability to\ndecode the network structure using these control properties, opening new\navenues for unsupervised graph representation learning.", + "authors": "Obaid Ullah Ahmad, Anwar Said, Mudassir Shabbir, Waseem Abbas, Xenofon Koutsoukos", + "published": "2024-03-07", + "updated": "2024-04-18", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.MA", + "cs.SY", + "eess.SY" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/2305.15843v1", + "title": "TabGSL: Graph Structure Learning for Tabular Data Prediction", + "abstract": "This work presents a novel approach to tabular data prediction leveraging\ngraph structure learning and graph neural networks. Despite the prevalence of\ntabular data in real-world applications, traditional deep learning methods\noften overlook the potentially valuable associations between data instances.\nSuch associations can offer beneficial insights for classification tasks, as\ninstances may exhibit similar patterns of correlations among features and\ntarget labels. This information can be exploited by graph neural networks,\nnecessitating robust graph structures. However, existing studies primarily\nfocus on improving graph structure from noisy data, largely neglecting the\npossibility of deriving graph structures from tabular data. We present a novel\nsolution, Tabular Graph Structure Learning (TabGSL), to enhance tabular data\nprediction by simultaneously learning instance correlation and feature\ninteraction within a unified framework. This is achieved through a proposed\ngraph contrastive learning module, along with transformer-based feature\nextractor and graph neural network. Comprehensive experiments conducted on 30\nbenchmark tabular datasets demonstrate that TabGSL markedly outperforms both\ntree-based models and recent deep learning-based tabular models. 
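Several entries here (the exponential-family model above, and the Laplacian-constrained formulation that follows) estimate a graph from smooth signals. Below is a toy projected-gradient sketch of the classical smoothness objective, roughly sum_ij W_ij ||x_i - x_j||^2 - alpha * sum_i log d_i + beta ||W||_F^2; the gradient of the log-degree barrier is simplified, and the papers' actual algorithms are alternating or specialized solvers.

```python
import numpy as np

def graph_from_smooth_signals(X, alpha=1.0, beta=0.5, lr=0.01, iters=500):
    """Projected gradient descent on a smoothness objective: edge weights pay
    for connecting dissimilar signals, a log-degree barrier keeps nodes
    connected, and a Frobenius term controls sparsity/scale."""
    n = X.shape[0]
    Z = np.square(X[:, None, :] - X[None, :, :]).sum(-1)   # signal distances
    W = np.full((n, n), 1.0 / n)
    np.fill_diagonal(W, 0.0)
    for _ in range(iters):
        d = W.sum(axis=1, keepdims=True)
        grad = Z - alpha / np.maximum(d, 1e-8) + 2.0 * beta * W
        W = np.maximum(W - lr * grad, 0.0)                  # project onto W >= 0
        W = 0.5 * (W + W.T)                                 # keep symmetry
        np.fill_diagonal(W, 0.0)
    return W
```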
Visualizations\nof the learned instance embeddings further substantiate the effectiveness of\nTabGSL.", + "authors": "Jay Chiehen Liao, Cheng-Te Li", + "published": "2023-05-25", + "updated": "2023-05-25", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.SI" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/1611.05181v3", + "title": "Graph Learning from Data under Structural and Laplacian Constraints", + "abstract": "Graphs are fundamental mathematical structures used in various fields to\nrepresent data, signals and processes. In this paper, we propose a novel\nframework for learning/estimating graphs from data. The proposed framework\nincludes (i) formulation of various graph learning problems, (ii) their\nprobabilistic interpretations and (iii) associated algorithms. Specifically,\ngraph learning problems are posed as estimation of graph Laplacian matrices\nfrom some observed data under given structural constraints (e.g., graph\nconnectivity and sparsity level). From a probabilistic perspective, the\nproblems of interest correspond to maximum a posteriori (MAP) parameter\nestimation of Gaussian-Markov random field (GMRF) models, whose precision\n(inverse covariance) is a graph Laplacian matrix. For the proposed graph\nlearning problems, specialized algorithms are developed by incorporating the\ngraph Laplacian and structural constraints. The experimental results\ndemonstrate that the proposed algorithms outperform the current\nstate-of-the-art methods in terms of accuracy and computational efficiency.", + "authors": "Hilmi E. Egilmez, Eduardo Pavez, Antonio Ortega", + "published": "2016-11-16", + "updated": "2017-07-06", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "stat.ML" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/1910.11390v2", + "title": "Deep Learning for Molecular Graphs with Tiered Graph Autoencoders and Graph Prediction", + "abstract": "Tiered graph autoencoders provide the architecture and mechanisms for\nlearning tiered latent representations and latent spaces for molecular graphs\nthat explicitly represent and utilize groups (e.g., functional groups). This\nenables the utilization and exploration of tiered molecular latent spaces,\neither individually - the node (atom) tier, the group tier, or the graph\n(molecule) tier - or jointly, as well as navigation across the tiers. In this\npaper, we discuss the use of tiered graph autoencoders together with graph\nprediction for molecular graphs. We show features of molecular graphs used, and\ngroups in molecular graphs identified for some sample molecules. We briefly\nreview graph prediction and the QM9 dataset for background information, and\ndiscuss the use of tiered graph embeddings for graph prediction, particularly\nweighted group pooling. We find that functional groups and ring groups\neffectively capture and represent the chemical essence of molecular graphs\n(structures). Further, tiered graph autoencoders and graph prediction together\nprovide effective, efficient and interpretable deep learning for molecular\ngraphs, with the former providing unsupervised, transferable learning and the\nlatter providing supervised, task-optimized learning.", + "authors": "Daniel T. 
Chang", + "published": "2019-10-24", + "updated": "2021-07-01", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "q-bio.BM" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/2302.03596v3", + "title": "Graph Generation with Diffusion Mixture", + "abstract": "Generation of graphs is a major challenge for real-world tasks that require\nunderstanding the complex nature of their non-Euclidean structures. Although\ndiffusion models have achieved notable success in graph generation recently,\nthey are ill-suited for modeling the topological properties of graphs since\nlearning to denoise the noisy samples does not explicitly learn the graph\nstructures to be generated. To tackle this limitation, we propose a generative\nframework that models the topology of graphs by explicitly learning the final\ngraph structures of the diffusion process. Specifically, we design the\ngenerative process as a mixture of endpoint-conditioned diffusion processes\nwhich is driven toward the predicted graph that results in rapid convergence.\nWe further introduce a simple parameterization of the mixture process and\ndevelop an objective for learning the final graph structure, which enables\nmaximum likelihood training. Through extensive experimental validation on\ngeneral graph and 2D/3D molecule generation tasks, we show that our method\noutperforms previous generative models, generating graphs with correct topology\nwith both continuous (e.g. 3D coordinates) and discrete (e.g. atom types)\nfeatures. Our code is available at https://github.com/harryjo97/DruM.", + "authors": "Jaehyeong Jo, Dongki Kim, Sung Ju Hwang", + "published": "2023-02-07", + "updated": "2024-02-05", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/2105.00696v1", + "title": "Graph Learning: A Survey", + "abstract": "Graphs are widely used as a popular representation of the network structure\nof connected data. Graph data can be found in a broad spectrum of application\ndomains such as social systems, ecosystems, biological networks, knowledge\ngraphs, and information systems. With the continuous penetration of artificial\nintelligence technologies, graph learning (i.e., machine learning on graphs) is\ngaining attention from both researchers and practitioners. Graph learning\nproves effective for many tasks, such as classification, link prediction, and\nmatching. Generally, graph learning methods extract relevant features of graphs\nby taking advantage of machine learning algorithms. In this survey, we present\na comprehensive overview on the state-of-the-art of graph learning. Special\nattention is paid to four categories of existing graph learning methods,\nincluding graph signal processing, matrix factorization, random walk, and deep\nlearning. Major models and algorithms under these categories are reviewed\nrespectively. We examine graph learning applications in areas such as text,\nimages, science, knowledge graphs, and combinatorial optimization. 
In addition,\nwe discuss several promising research directions in this field.", + "authors": "Feng Xia, Ke Sun, Shuo Yu, Abdul Aziz, Liangtian Wan, Shirui Pan, Huan Liu", + "published": "2021-05-03", + "updated": "2021-05-03", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI", + "cs.SI", + "68T07", + "I.2.6" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/1911.08776v2", + "title": "Joint Embedding Learning of Educational Knowledge Graphs", + "abstract": "As an efficient model for knowledge organization, the knowledge graph has\nbeen widely adopted in several fields, e.g., biomedicine, sociology, and\neducation. And there is a steady trend of learning embedding representations of\nknowledge graphs to facilitate knowledge graph construction and downstream\ntasks. In general, knowledge graph embedding techniques aim to learn vectorized\nrepresentations which preserve the structural information of the graph. And\nconventional embedding learning models rely on structural relationships among\nentities and relations. However, in educational knowledge graphs, structural\nrelationships are not the focus. Instead, rich literals of the graphs are more\nvaluable. In this paper, we focus on this problem and propose a novel model for\nembedding learning of educational knowledge graphs. Our model considers both\nstructural and literal information and jointly learns embedding\nrepresentations. Three experimental graphs were constructed based on an\neducational knowledge graph which has been applied in real-world teaching. We\nconducted two experiments on the three graphs and other common benchmark\ngraphs. The experimental results proved the effectiveness of our model and its\nsuperiority over other baselines when processing educational knowledge graphs.", + "authors": "Siyu Yao, Ruijie Wang, Shen Sun, Derui Bu, Jun Liu", + "published": "2019-11-20", + "updated": "2019-12-23", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI", + "cs.LG" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/2202.08235v3", + "title": "Data Augmentation for Deep Graph Learning: A Survey", + "abstract": "Graph neural networks, a powerful deep learning tool to model\ngraph-structured data, have demonstrated remarkable performance on numerous\ngraph learning tasks. To address the data noise and data scarcity issues in\ndeep graph learning, the research on graph data augmentation has intensified\nlately. However, conventional data augmentation methods can hardly handle\ngraph-structured data which is defined in non-Euclidean space with\nmulti-modality. In this survey, we formally formulate the problem of graph data\naugmentation and further review the representative techniques and their\napplications in different deep graph learning problems. Specifically, we first\npropose a taxonomy for graph data augmentation techniques and then provide a\nstructured review by categorizing the related work based on the augmented\ninformation modalities. Moreover, we summarize the applications of graph data\naugmentation in two representative problems in data-centric deep graph\nlearning: (1) reliable graph learning which focuses on enhancing the utility of\ninput graph as well as the model capacity via graph data augmentation; and (2)\nlow-resource graph learning which targets on enlarging the labeled training\ndata scale through graph data augmentation. 
For each problem, we also provide a\nhierarchical problem taxonomy and review the existing literature related to\ngraph data augmentation. Finally, we point out promising research directions\nand the challenges in future research.", + "authors": "Kaize Ding, Zhe Xu, Hanghang Tong, Huan Liu", + "published": "2022-02-16", + "updated": "2022-11-21", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/2401.13769v1", + "title": "Multiview Graph Learning with Consensus Graph", + "abstract": "Graph topology inference, i.e., learning graphs from a given set of nodal\nobservations, is a significant task in many application domains. Existing\napproaches are mostly limited to learning a single graph assuming that the\nobserved data is homogeneous. This is problematic because many modern datasets\nare heterogeneous or mixed and involve multiple related graphs, i.e., multiview\ngraphs. Recent work proposing to learn multiview graphs ensures the similarity\nof learned view graphs through pairwise regularization, where each pair of\nviews is encouraged to have similar structures. However, this approach cannot\ninfer the shared structure across views. In this work, we propose an\nalternative method based on consensus regularization, where views are ensured\nto be similar through a learned consensus graph representing the common\nstructure of the views. In particular, we propose an optimization problem,\nwhere graph data is assumed to be smooth over the multiview graph and the\ntopology of the individual views and that of the consensus graph are learned,\nsimultaneously. Our optimization problem is designed to be general in the sense\nthat different regularization functions can be used depending on what the\nshared structure across views is. Moreover, we propose two regularization\nfunctions that extend fused and group graphical lasso to consensus based\nregularization. Proposed multiview graph learning is evaluated on simulated\ndata and shown to have better performance than existing methods. It is also\nemployed to infer the functional brain connectivity networks of multiple\nsubjects from their electroencephalogram (EEG) recordings. The proposed method\nreveals the structure shared by subjects as well as the characteristics unique\nto each subject.", + "authors": "Abdullah Karaaslanli, Selin Aviyente", + "published": "2024-01-24", + "updated": "2024-01-24", + "primary_cat": "eess.SP", + "cats": [ + "eess.SP", + "cs.LG" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/2007.16002v1", + "title": "Graph Convolutional Networks using Heat Kernel for Semi-supervised Learning", + "abstract": "Graph convolutional networks gain remarkable success in semi-supervised\nlearning on graph structured data. The key to graph-based semisupervised\nlearning is capturing the smoothness of labels or features over nodes exerted\nby graph structure. Previous methods, spectral methods and spatial methods,\ndevote to defining graph convolution as a weighted average over neighboring\nnodes, and then learn graph convolution kernels to leverage the smoothness to\nimprove the performance of graph-based semi-supervised learning. One open\nchallenge is how to determine appropriate neighborhood that reflects relevant\ninformation of smoothness manifested in graph structure. 
In this paper, we\npropose GraphHeat, leveraging heat kernel to enhance low-frequency filters and\nenforce smoothness in the signal variation on the graph. GraphHeat leverages\nthe local structure of target node under heat diffusion to determine its\nneighboring nodes flexibly, without the constraint of order suffered by\nprevious methods. GraphHeat achieves state-of-the-art results in the task of\ngraph-based semi-supervised classification across three benchmark datasets:\nCora, Citeseer and Pubmed.", + "authors": "Bingbing Xu, Huawei Shen, Qi Cao, Keting Cen, Xueqi Cheng", + "published": "2020-07-27", + "updated": "2020-07-27", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.SI" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/2005.03675v3", + "title": "Machine Learning on Graphs: A Model and Comprehensive Taxonomy", + "abstract": "There has been a surge of recent interest in learning representations for\ngraph-structured data. Graph representation learning methods have generally\nfallen into three main categories, based on the availability of labeled data.\nThe first, network embedding (such as shallow graph embedding or graph\nauto-encoders), focuses on learning unsupervised representations of relational\nstructure. The second, graph regularized neural networks, leverages graphs to\naugment neural network losses with a regularization objective for\nsemi-supervised learning. The third, graph neural networks, aims to learn\ndifferentiable functions over discrete topologies with arbitrary structure.\nHowever, despite the popularity of these areas there has been surprisingly\nlittle work on unifying the three paradigms. Here, we aim to bridge the gap\nbetween graph neural networks, network embedding and graph regularization\nmodels. We propose a comprehensive taxonomy of representation learning methods\nfor graph-structured data, aiming to unify several disparate bodies of work.\nSpecifically, we propose a Graph Encoder Decoder Model (GRAPHEDM), which\ngeneralizes popular algorithms for semi-supervised learning on graphs (e.g.\nGraphSage, Graph Convolutional Networks, Graph Attention Networks), and\nunsupervised learning of graph representations (e.g. DeepWalk, node2vec, etc)\ninto a single consistent approach. To illustrate the generality of this\napproach, we fit over thirty existing methods into this framework. We believe\nthat this unifying view both provides a solid foundation for understanding the\nintuition behind these methods, and enables future research in the area.", + "authors": "Ines Chami, Sami Abu-El-Haija, Bryan Perozzi, Christopher R\u00e9, Kevin Murphy", + "published": "2020-05-07", + "updated": "2022-04-12", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.NE", + "cs.SI", + "stat.ML" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/1912.07832v1", + "title": "Deep Iterative and Adaptive Learning for Graph Neural Networks", + "abstract": "In this paper, we propose an end-to-end graph learning framework, namely Deep\nIterative and Adaptive Learning for Graph Neural Networks (DIAL-GNN), for\njointly learning the graph structure and graph embeddings simultaneously. We\nfirst cast the graph structure learning problem as a similarity metric learning\nproblem and leverage an adapted graph regularization for controlling\nsmoothness, connectivity and sparsity of the generated graph. 
We further\npropose a novel iterative method for searching for a hidden graph structure\nthat augments the initial graph structure. Our iterative method dynamically\nstops when the learned graph structure approaches close enough to the optimal\ngraph. Our extensive experiments demonstrate that the proposed DIAL-GNN model\ncan consistently outperform or match state-of-the-art baselines in terms of\nboth downstream task performance and computational time. The proposed approach\ncan cope with both transductive learning and inductive learning.", + "authors": "Yu Chen, Lingfei Wu, Mohammed J. Zaki", + "published": "2019-12-17", + "updated": "2019-12-17", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "stat.ML" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/1909.11594v1", + "title": "Structured Graph Learning Via Laplacian Spectral Constraints", + "abstract": "Learning a graph with a specific structure is essential for interpretability\nand identification of the relationships among data. It is well known that\nstructured graph learning from observed samples is an NP-hard combinatorial\nproblem. In this paper, we first show that for a set of important graph\nfamilies it is possible to convert the structural constraints of structure into\neigenvalue constraints of the graph Laplacian matrix. Then we introduce a\nunified graph learning framework, lying at the integration of the spectral\nproperties of the Laplacian matrix with Gaussian graphical modeling that is\ncapable of learning structures of a large class of graph families. The proposed\nalgorithms are provably convergent and practically amenable for large-scale\nsemi-supervised and unsupervised graph-based learning tasks. Extensive\nnumerical experiments with both synthetic and real data sets demonstrate the\neffectiveness of the proposed methods. An R package containing code for all the\nexperimental results is available at\nhttps://cran.r-project.org/package=spectralGraphTopology.", + "authors": "Sandeep Kumar, Jiaxi Ying, Jos'e Vin'icius de M. Cardoso, Daniel P. Palomar", + "published": "2019-09-24", + "updated": "2019-09-24", + "primary_cat": "stat.ML", + "cats": [ + "stat.ML", + "cs.LG", + "cs.SI", + "math.OC", + "stat.AP" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/2006.13009v2", + "title": "Iterative Deep Graph Learning for Graph Neural Networks: Better and Robust Node Embeddings", + "abstract": "In this paper, we propose an end-to-end graph learning framework, namely\nIterative Deep Graph Learning (IDGL), for jointly and iteratively learning\ngraph structure and graph embedding. The key rationale of IDGL is to learn a\nbetter graph structure based on better node embeddings, and vice versa (i.e.,\nbetter node embeddings based on a better graph structure). Our iterative method\ndynamically stops when the learned graph structure approaches close enough to\nthe graph optimized for the downstream prediction task. In addition, we cast\nthe graph learning problem as a similarity metric learning problem and leverage\nadaptive graph regularization for controlling the quality of the learned graph.\nFinally, combining the anchor-based approximation technique, we further propose\na scalable version of IDGL, namely IDGL-Anch, which significantly reduces the\ntime and space complexity of IDGL without compromising the performance. 
Our\nextensive experiments on nine benchmarks show that our proposed IDGL models can\nconsistently outperform or match the state-of-the-art baselines. Furthermore,\nIDGL can be more robust to adversarial graphs and cope with both transductive\nand inductive learning.", + "authors": "Yu Chen, Lingfei Wu, Mohammed J. Zaki", + "published": "2020-06-21", + "updated": "2020-10-23", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "stat.ML" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/2210.06126v1", + "title": "Regularized Graph Structure Learning with Semantic Knowledge for Multi-variates Time-Series Forecasting", + "abstract": "Multivariate time-series forecasting is a critical task for many\napplications, and graph time-series network is widely studied due to its\ncapability to capture the spatial-temporal correlation simultaneously. However,\nmost existing works focus more on learning with the explicit prior graph\nstructure, while ignoring potential information from the implicit graph\nstructure, yielding incomplete structure modeling. Some recent works attempt to\nlearn the intrinsic or implicit graph structure directly while lacking a way to\ncombine explicit prior structure with implicit structure together. In this\npaper, we propose Regularized Graph Structure Learning (RGSL) model to\nincorporate both explicit prior structure and implicit structure together, and\nlearn the forecasting deep networks along with the graph structure. RGSL\nconsists of two innovative modules. First, we derive an implicit dense\nsimilarity matrix through node embedding, and learn the sparse graph structure\nusing the Regularized Graph Generation (RGG) based on the Gumbel Softmax trick.\nSecond, we propose a Laplacian Matrix Mixed-up Module (LM3) to fuse the\nexplicit graph and implicit graph together. We conduct experiments on three\nreal-word datasets. Results show that the proposed RGSL model outperforms\nexisting graph forecasting algorithms with a notable margin, while learning\nmeaningful graph structure simultaneously. Our code and models are made\npublicly available at https://github.com/alipay/RGSL.git.", + "authors": "Hongyuan Yu, Ting Li, Weichen Yu, Jianguo Li, Yan Huang, Liang Wang, Alex Liu", + "published": "2022-10-12", + "updated": "2022-10-12", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.CV" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/2109.11898v1", + "title": "Graph Learning Augmented Heterogeneous Graph Neural Network for Social Recommendation", + "abstract": "Social recommendation based on social network has achieved great success in\nimproving the performance of recommendation system. Since social network\n(user-user relations) and user-item interactions are both naturally represented\nas graph-structured data, Graph Neural Networks (GNNs) have thus been widely\napplied for social recommendation. In this work, we propose an end-to-end\nheterogeneous global graph learning framework, namely Graph Learning Augmented\nHeterogeneous Graph Neural Network (GL-HGNN) for social recommendation. GL-HGNN\naims to learn a heterogeneous global graph that makes full use of user-user\nrelations, user-item interactions and item-item similarities in a unified\nperspective. To this end, we design a Graph Learner (GL) method to learn and\noptimize user-user and item-item connections separately. 
Moreover, we employ a\nHeterogeneous Graph Neural Network (HGNN) to capture the high-order complex\nsemantic relations from our learned heterogeneous global graph. To scale up the\ncomputation of graph learning, we further present the Anchor-based Graph\nLearner (AGL) to reduce computational complexity. Extensive experiments on four\nreal-world datasets demonstrate the effectiveness of our model.", + "authors": "Yiming Zhang, Lingfei Wu, Qi Shen, Yitong Pang, Zhihua Wei, Fangli Xu, Ethan Chang, Bo Long", + "published": "2021-09-24", + "updated": "2021-09-24", + "primary_cat": "cs.IR", + "cats": [ + "cs.IR" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/2204.01855v2", + "title": "A Survey on Graph Representation Learning Methods", + "abstract": "Graphs representation learning has been a very active research area in recent\nyears. The goal of graph representation learning is to generate graph\nrepresentation vectors that capture the structure and features of large graphs\naccurately. This is especially important because the quality of the graph\nrepresentation vectors will affect the performance of these vectors in\ndownstream tasks such as node classification, link prediction and anomaly\ndetection. Many techniques are proposed for generating effective graph\nrepresentation vectors. Two of the most prevalent categories of graph\nrepresentation learning are graph embedding methods without using graph neural\nnets (GNN), which we denote as non-GNN based graph embedding methods, and graph\nneural nets (GNN) based methods. Non-GNN graph embedding methods are based on\ntechniques such as random walks, temporal point processes and neural network\nlearning methods. GNN-based methods, on the other hand, are the application of\ndeep learning on graph data. In this survey, we provide an overview of these\ntwo categories and cover the current state-of-the-art methods for both static\nand dynamic graphs. Finally, we explore some open and ongoing research\ndirections for future work.", + "authors": "Shima Khoshraftar, Aijun An", + "published": "2022-04-04", + "updated": "2022-06-15", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.SI" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/2004.06846v1", + "title": "MxPool: Multiplex Pooling for Hierarchical Graph Representation Learning", + "abstract": "How to utilize deep learning methods for graph classification tasks has\nattracted considerable research attention in the past few years. Regarding\ngraph classification tasks, the graphs to be classified may have various graph\nsizes (i.e., different number of nodes and edges) and have various graph\nproperties (e.g., average node degree, diameter, and clustering coefficient).\nThe diverse property of graphs has imposed significant challenges on existing\ngraph learning techniques since diverse graphs have different best-fit\nhyperparameters. It is difficult to learn graph features from a set of diverse\ngraphs by a unified graph neural network. This motivates us to use a multiplex\nstructure in a diverse way and utilize a priori properties of graphs to guide\nthe learning. In this paper, we propose MxPool, which concurrently uses\nmultiple graph convolution/pooling networks to build a hierarchical learning\nstructure for graph representation learning tasks. 
Our experiments on numerous\ngraph classification benchmarks show that our MxPool has superiority over other\nstate-of-the-art graph representation learning methods.", + "authors": "Yanyan Liang, Yanfeng Zhang, Dechao Gao, Qian Xu", + "published": "2020-04-15", + "updated": "2020-04-15", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "stat.ML" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/2306.11264v1", + "title": "GraphGLOW: Universal and Generalizable Structure Learning for Graph Neural Networks", + "abstract": "Graph structure learning is a well-established problem that aims at\noptimizing graph structures adaptive to specific graph datasets to help message\npassing neural networks (i.e., GNNs) to yield effective and robust node\nembeddings. However, the common limitation of existing models lies in the\nunderlying \\textit{closed-world assumption}: the testing graph is the same as\nthe training graph. This premise requires independently training the structure\nlearning model from scratch for each graph dataset, which leads to prohibitive\ncomputation costs and potential risks for serious over-fitting. To mitigate\nthese issues, this paper explores a new direction that moves forward to learn a\nuniversal structure learning model that can generalize across graph datasets in\nan open world. We first introduce the mathematical definition of this novel\nproblem setting, and describe the model formulation from a probabilistic\ndata-generative aspect. Then we devise a general framework that coordinates a\nsingle graph-shared structure learner and multiple graph-specific GNNs to\ncapture the generalizable patterns of optimal message-passing topology across\ndatasets. The well-trained structure learner can directly produce adaptive\nstructures for unseen target graphs without any fine-tuning. Across diverse\ndatasets and various challenging cross-graph generalization protocols, our\nexperiments show that even without training on target graphs, the proposed\nmodel i) significantly outperforms expressive GNNs trained on input\n(non-optimized) topology, and ii) surprisingly performs on par with\nstate-of-the-art models that independently optimize adaptive structures for\nspecific target graphs, with notably orders-of-magnitude acceleration for\ntraining on the target graph.", + "authors": "Wentao Zhao, Qitian Wu, Chenxiao Yang, Junchi Yan", + "published": "2023-06-20", + "updated": "2023-06-20", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/2306.02664v2", + "title": "Structure-free Graph Condensation: From Large-scale Graphs to Condensed Graph-free Data", + "abstract": "Graph condensation, which reduces the size of a large-scale graph by\nsynthesizing a small-scale condensed graph as its substitution, has immediate\nbenefits for various graph learning tasks. However, existing graph condensation\nmethods rely on the joint optimization of nodes and structures in the condensed\ngraph, and overlook critical issues in effectiveness and generalization\nability. In this paper, we advocate a new Structure-Free Graph Condensation\nparadigm, named SFGC, to distill a large-scale graph into a small-scale graph\nnode set without explicit graph structures, i.e., graph-free data. Our idea is\nto implicitly encode topology structure information into the node attributes in\nthe synthesized graph-free data, whose topology is reduced to an identity\nmatrix. 
Specifically, SFGC contains two collaborative components: (1) a\ntraining trajectory meta-matching scheme for effectively synthesizing\nsmall-scale graph-free data; (2) a graph neural feature score metric for\ndynamically evaluating the quality of the condensed data. Through training\ntrajectory meta-matching, SFGC aligns the long-term GNN learning behaviors\nbetween the large-scale graph and the condensed small-scale graph-free data,\nensuring comprehensive and compact transfer of informative knowledge to the\ngraph-free data. Afterward, the underlying condensed graph-free data would be\ndynamically evaluated with the graph neural feature score, which is a\nclosed-form metric for ensuring the excellent expressiveness of the condensed\ngraph-free data. Extensive experiments verify the superiority of SFGC across\ndifferent condensation ratios.", + "authors": "Xin Zheng, Miao Zhang, Chunyang Chen, Quoc Viet Hung Nguyen, Xingquan Zhu, Shirui Pan", + "published": "2023-06-05", + "updated": "2023-10-23", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.SI" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/2206.01152v2", + "title": "Causal Structure Learning: a Combinatorial Perspective", + "abstract": "In this review, we discuss approaches for learning causal structure from\ndata, also called causal discovery. In particular, we focus on approaches for\nlearning directed acyclic graphs (DAGs) and various generalizations which allow\nfor some variables to be unobserved in the available data. We devote special\nattention to two fundamental combinatorial aspects of causal structure\nlearning. First, we discuss the structure of the search space over causal\ngraphs. Second, we discuss the structure of equivalence classes over causal\ngraphs, i.e., sets of graphs which represent what can be learned from\nobservational data alone, and how these equivalence classes can be refined by\nadding interventional data.", + "authors": "Chandler Squires, Caroline Uhler", + "published": "2022-06-02", + "updated": "2022-12-19", + "primary_cat": "stat.ME", + "cats": [ + "stat.ME", + "cs.LG" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/1803.03324v1", + "title": "Learning Deep Generative Models of Graphs", + "abstract": "Graphs are fundamental data structures which concisely capture the relational\nstructure in many important real-world domains, such as knowledge graphs,\nphysical and social interactions, language, and chemistry. Here we introduce a\npowerful new approach for learning generative models over graphs, which can\ncapture both their structure and attributes. Our approach uses graph neural\nnetworks to express probabilistic dependencies among a graph's nodes and edges,\nand can, in principle, learn distributions over any arbitrary graph. In a\nseries of experiments our results show that once trained, our models can\ngenerate good quality samples of both synthetic graphs as well as real\nmolecular graphs, both unconditionally and conditioned on data. Compared to\nbaselines that do not use graph-structured representations, our models often\nperform far better. We also explore key challenges of learning generative\nmodels of graphs, such as how to handle symmetries and ordering of elements\nduring the graph generation process, and offer possible solutions. 
Our work is\nthe first and most general approach for learning generative models over\narbitrary graphs, and opens new directions for moving away from restrictions of\nvector- and sequence-like knowledge representations, toward more expressive and\nflexible relational data structures.", + "authors": "Yujia Li, Oriol Vinyals, Chris Dyer, Razvan Pascanu, Peter Battaglia", + "published": "2018-03-08", + "updated": "2018-03-08", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "stat.ML" + ], + "category": "Graph AND Structure AND Learning" + } + ], + [ + { + "url": "http://arxiv.org/abs/2404.10305v2", + "title": "TC-OCR: TableCraft OCR for Efficient Detection & Recognition of Table Structure & Content", + "abstract": "The automatic recognition of tabular data in document images presents a\nsignificant challenge due to the diverse range of table styles and complex\nstructures. Tables offer valuable content representation, enhancing the\npredictive capabilities of various systems such as search engines and Knowledge\nGraphs. Addressing the two main problems, namely table detection (TD) and table\nstructure recognition (TSR), has traditionally been approached independently.\nIn this research, we propose an end-to-end pipeline that integrates deep\nlearning models, including DETR, CascadeTabNet, and PP OCR v2, to achieve\ncomprehensive image-based table recognition. This integrated approach\neffectively handles diverse table styles, complex structures, and image\ndistortions, resulting in improved accuracy and efficiency compared to existing\nmethods like Table Transformers. Our system achieves simultaneous table\ndetection (TD), table structure recognition (TSR), and table content\nrecognition (TCR), preserving table structures and accurately extracting\ntabular data from document images. The integration of multiple models addresses\nthe intricacies of table recognition, making our approach a promising solution\nfor image-based table understanding, data extraction, and information retrieval\napplications. Our proposed approach achieves an IOU of 0.96 and an OCR Accuracy\nof 78%, showcasing a remarkable improvement of approximately 25% in the OCR\nAccuracy compared to the previous Table Transformer approach.", + "authors": "Avinash Anand, Raj Jaiswal, Pijush Bhuyan, Mohit Gupta, Siddhesh Bangar, Md. Modassir Imam, Rajiv Ratn Shah, Shin'ichi Satoh", + "published": "2024-04-16", + "updated": "2024-04-19", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV" + ], + "label": "Original Paper", + "paper_cat": "Graph AND Structure AND Learning", + "gt": "The task of table structure identification has been a challenging and unresolved issue within the document-parsing community, leading to the organization of several public challenges to address it [7, 10, 16]. The difficulty of this problem can be attributed to various factors. Firstly, tables exhibit a wide range of shapes and sizes, necessitating a flexible approach to effectively handle their diversity. This is particularly crucial when dealing with complex column and row headers, which can be highly intricate and demanding. Secondly, one of the complexities arises from the scarcity of data specifically tailored for table structure analysis. Nevertheless, there has been significant progress in recent years with the introduction of valuable datasets such as PubTabNet [40], FinTabNet [39], and TableBank [19], addressing this data deficiency. 
2.1 Table Detection Several significant contributions have been made in the field of table detection for document analysis. Hao et al. [11] proposed a table detection method based on convolutional neural networks (CNN) specifically designed for PDF documents. Siddiqui et al. [30] introduced an innovative strategy that combines deformable CNN with faster region-based convolutional neural network (R-CNN) or feature pyramid network (FPN) to address the complexities arising from variable table sizes and orientations. Anand et al. [1] proposed a noisy-document-image dataset for document layout detection and showed improved performance in detecting tables in document images. Hole\u010dek et al. [13] extended the application of graph neural networks to structured documents, focusing on bills, where they utilized graph convolutions to facilitate table understanding. Casado et al. [5] extensively explored object detection techniques, including Mask R-CNN, YOLO, SSD, and RetinaNet, and demonstrated that fine-tuning from a domain closer to the target domain can significantly improve table detection performance. Nguyen et al. [25] proposed TableSegNet, a compact fully convolutional network capable of simultaneously performing table separation and detection. Zhang et al. [38] introduced a YOLO-based table detection methodology that improves spatial arrangement learning and efficiency by incorporating an involution operator into the network\u2019s core and using a straightforward feature pyramid network. These studies collectively showcase the effectiveness of deep learning models, such as CNN and YOLO, in the context of table detection. Moreover, they highlight the benefits of incorporating specific techniques like deformable CNN, graph convolutions, and involution, which have proven instrumental in overcoming the inherent challenges associated with this task. 2.2 Table Structure Recognition Early approaches to table structure recognition heavily relied on hand-crafted features and heuristic rules [14, 17, 36]. These methods were particularly suitable for simple table structures or predefined data formats. However, in recent times, inspired by the remarkable success of deep learning in various computer vision tasks like object detection and semantic segmentation, several novel deep learning-based methods [27, 29] have emerged for table structure recognition. Figure 1: Our TC-OCR achieves simultaneous Table Detection (TD), Table Structure Recognition (TSR), and Table Content Recognition (TCR), preserving table structures and accurately extracting tabular data from document images. Schreiber et al. (2017) [29] introduced DeepDeSRT, a two-fold system that effectively combines Faster R-CNN and FCN for accurate table detection and precise row/column segmentation. On the other hand, Raja et al. (2020) [27] presented TabStruct-Net, a table structure recognition framework that incorporates customized cell detection and interaction modules to precisely identify cells and predict their row and column relationships with other detected cells.
These cutting-edge deep learning-based methods, exemplified by DeepDeSRT and TabStruct-Net, leverage the intrinsic capabilities of neural networks to significantly enhance table structure recognition by automatically learning relevant and discriminative features while capturing complex interrelationships within the tables. 2.3 Table Recognition Prior studies in table recognition have predominantly focused on non-end-to-end methodologies, dividing the problem into two distinct sub-tasks: table structure recognition and cell-content recognition. These approaches attempted to tackle each sub-problem independently using separate systems. TableMASTER, introduced by [12, 20, 21, 37], is a Transformer-based model specifically designed for table structure recognition. The method combines the Transformer model with a text line detector to identify text lines within individual table cells. Furthermore, they employed a text line recognizer based on the work of [21] to extract text content from the identified lines. Another Transformer-based model called TableFormer was proposed by [24], which not only recognizes table structure but also predicts the bounding boxes of each table cell. These predicted bounding boxes were then utilized to extract the cell contents from PDF documents, resulting in a comprehensive table recognition system. Recently, researchers have been shifting towards end-to-end approaches due to the advancements in deep learning and the increased availability of tabular data [22]. As an example, [22] introduced the encoder-dual-decoder (EDD) model, which is capable of jointly recognizing both table structure and content for each cell. In addition to the model, they also introduced the PubTabNet dataset, which specifically focuses on table recognition and is made accessible to the research community. Notably, the ICDAR2021 competition on scientific literature parsing organized by IBM Research in collaboration with IEEE ICDAR [2] has further contributed to advancements in table recognition. In summary, the field of table recognition has witnessed significant progress through various techniques, from non-end-to-end to end-to-end approaches, and the development of new datasets and competitions has been instrumental in driving further advancements.", "pre_questions": [], "main_content": "INTRODUCTION As the global digital transformation continues to progress, there is a notable and accelerating trend toward replacing traditional physical paper-based documents with their digitized counterparts. These digital documents frequently contain tables that display various formats and layouts, presenting a diverse range of information. Tables play a pivotal role in succinctly conveying extensive data, allowing readers to efficiently explore, compare, and comprehend the content. Nevertheless, the compact nature of tables often presents significant challenges for machine parsing and comprehension processes. Automatic Information Extraction from tables involves two essential sub-tasks: Table Identification and Table Structure Recognition. Several studies [9, 11, 29, 32, 33] have made significant contributions to the advancement of table detection, while others [15, 23, 28] have focused on improving table structure recognition. These tasks are of utmost importance in the field of image analysis, as they facilitate the extraction of critical information from tables in a digital format.
Table detection is concerned with accurately identifying the precise spatial region within an image that contains the table. Conversely, table structure recognition involves the precise identification of table rows and columns, thereby enabling the extraction of individual table cells. In the field of table recognition (TR), computer vision-based pattern recognition methods are used to efficiently exploit the data contained in table images. Table detection (TD), table structure recognition (TSR), and table content recognition (TCR) are the three main tasks involved in TR. TD focuses on locating tables within images, TSR aims to recognize their internal structures, and TCR involves extracting textual contents from the tables. The current emphasis is on developing end-to-end TR systems capable of seamlessly integrating all three sub-tasks. The primary goal is to address real-world scenarios where the system performs TD, TSR, and TCR simultaneously, thus enhancing the efficiency and effectiveness of table recognition in practical applications. Despite the advancements in current open-source and commercial document analysis algorithms, such as the Table Transformer Model, certain limitations persist. For instance, due to the computational complexity and maximum sequence length constraint of Transformers, capturing long-range dependencies between cells can be challenging. As a result, lengthy tables may suffer from information loss, affecting the model\u2019s ability to understand the context accurately. Additionally, when encountering tables with numerous empty cells or sparse content, the model might struggle to distinguish meaningful empty cells from those with missing data. To address these limitations, we present our innovative solution that aims to overcome these challenges and enhance the overall performance of table analysis and recognition. With the help of our proposed approach, table extraction methods can gain a better understanding of the inherent characteristics of tables, leading to improved accuracy in detecting and extracting table structures from document images. The main contributions of this paper can be summarized as follows: \u2022 We have proposed a novel integrated pipeline that combines three state-of-the-art models: DETR, CascadeTabNet, and PP OCR v2, to achieve end-to-end table recognition from image-based data. This innovative pipeline effectively addresses the significant challenges posed by variations in table styles, intricate structures, and image distortions commonly encountered in document images. \u2022 Through rigorous experimentation and evaluation, we have demonstrated that our integrated pipeline outperforms existing methods in terms of both accuracy and efficiency for table recognition. The results highlight the pipeline\u2019s remarkable ability to preserve complex table structures and accurately extract tabular data from document images. These findings contribute to the advancement of image-based table recognition techniques and offer practical insights for handling diverse table layouts in real-world scenarios. Researchers have developed TableBank [19], an extensive standardized open-domain table benchmark dataset, to address the need for large-scale table analysis in various domains. The dataset surpasses existing human-labeled datasets in terms of size and contains 417,234 tables, each with its original document.
TableBank includes a diverse range of domains, such as business documents, official filings, and research papers. The dataset is created by manipulating the mark-up tags for tables present in electronic documents like Microsoft Word (.docx) and LaTeX (.tex) files. Bounding boxes are added using the mark-up language to provide high-quality labeled data. The image-based table analysis approach used in TableBank is versatile, as it can handle different document types, including PDF, HTML, PowerPoint, and scanned versions. This robustness allows for the extraction of tables from various sources, enabling large-scale table analysis tasks. Figure 2: Architecture of the proposed methodology, which incorporates three distinct models: DETR for table detection, CascadeTabNet for table structure recognition, and PP OCRv2 for text detection and recognition. 4 METHODOLOGY We have developed a comprehensive pipeline that integrates three distinct models to address various challenges associated with diverse table styles, complex structures, and image distortions commonly encountered in document images. 4.1 DETR Object Detection Model The DEtection TRansformer (DETR) [4] revolves around key elements, including a set-based global loss that ensures unique predictions through bipartite matching and a transformer encoder-decoder architecture. Its authors presented a method for tackling object detection by formulating it as a direct set prediction problem. The approach employs an encoder-decoder architecture based on transformers, which are renowned for their effectiveness in sequence prediction tasks. Transformers [34] leverage self-attention mechanisms to explicitly model interactions between elements within a sequence. This characteristic makes transformers highly suitable for handling specific constraints in set prediction, such as eliminating duplicate predictions. This strategy reduces the need for manually designed components, such as anchor generation or non-maximum suppression, which frequently require prior task-specific expertise. We leverage DETR as an end-to-end, transformer-based solution for object detection, directly producing sets of bounding boxes and class labels. This ensures clear and distinct predictions, addressing issues related to duplicate detections. Moreover, the transformer encoder-decoder architecture significantly boosts detection performance by effectively capturing contextual relationships within the images. 4.2 CascadeTabNet We used CascadeTabNet [26], an advanced end-to-end deep learning framework, which effectively tackles both table recognition sub-problems using a unified model. This methodology accomplishes pixel-level table segmentation, accurately identifying each table instance within an input image. Additionally, it performs table cell segmentation, predicting segmented regions corresponding to individual cells, thereby enabling the extraction of the table\u2019s structural information. The model accomplishes cell region predictions collectively in a single inference pass.
Moreover, the model has the capability to classify tables into two types: bordered (ruling-based) and borderless (non-ruling-based) tables. For borderless tables, the model predicts cell segmentation directly. The key components in the architecture involve leveraging the Cascade R-CNN [3], which is a multi-stage model specifically designed to address the challenges of high-quality object detection in convolutional neural networks (CNNs). Additionally, a modified version of HRNet [35] is incorporated, providing reliable high-resolution representations and multi-level features that prove beneficial for the semantic segmentation tasks related to table recognition. Through the fusion of these two approaches, CascadeTabNet achieves state-of-the-art performance in table recognition, effectively delivering precise table segmentation, cell segmentation, and accurate classification of table types. 4.3 PP OCRv2 The PP OCRv2 system [8] is designed to achieve high accuracy and computational efficiency for practical OCR applications. For text detection, two strategies are proposed: Collaborative Mutual Learning (CML) and Copy-Paste, a data enhancement method that has been successful in object detection and instance segmentation tasks. CML involves training two student networks and a teacher network to develop a more robust text detector. Moreover, in text recognition, they introduced the Lightweight CPU Network (PP-LCNet) [6], Unified-Deep Mutual Learning (U-DML), and CenterLoss. U-DML makes use of two student networks to improve text recognition precision. CenterLoss helps reduce errors caused by similar characters. We used the PP OCRv2 model to perform text-to-cell mapping in three phases. In the first phase, the mapping process links words to table cells $TC_{i,j}$ using centroid coordinates, ensuring accurate associations within the table boundary, as shown in the equation below, where the table cell centroid is denoted by $TC^{N}_{i,j}$: $$TC^{N}_{i,j} = \left( \frac{TC_{i,j}(x_1) + TC_{i,j}(x_2)}{2}, \frac{TC_{i,j}(y_1) + TC_{i,j}(y_2)}{2} \right) \quad (1)$$ In the next phase, a flexible threshold, set at half the cell\u2019s width and height, accommodates variations in word positioning; here, the centroid coordinates $EC^{N}_{k}$ of a text cell $k$ are computed as the average of its corner coordinates $EC_k$: $$EC^{N}_{k} = \left( \frac{EC_{k}(x_1) + EC_{k}(x_2)}{2}, \frac{EC_{k}(y_1) + EC_{k}(y_2)}{2} \right) \quad (2)$$ Lastly, this approach preserves empty cells and avoids incorrect mappings, preventing text misalignment and enhancing word-to-cell precision: $$EC_k = \begin{cases} (R_i, C_j), & \text{if } |EC^{N}_{k}(x) - TC^{N}_{i,j}(x)| \le \frac{W(TC_{i,j})}{2} \text{ and } |EC^{N}_{k}(y) - TC^{N}_{i,j}(y)| \le \frac{H(TC_{i,j})}{2} \\ \varnothing, & \text{otherwise} \end{cases} \quad (3)$$ In the pipeline, we utilize PP OCRv2 for text detection and recognition purposes.
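To make the three-phase mapping concrete, the sketch below translates Equations (1)-(3) into a minimal Python routine. It is an illustrative sketch rather than the exact implementation: the (x1, y1, x2, y2) box format, the dictionary of cells keyed by (row, col), and all helper names are assumptions introduced for clarity.

```python
# Minimal sketch of the centroid-based word-to-cell mapping of Eqs. (1)-(3).
# Boxes are assumed to be (x1, y1, x2, y2) tuples; names are illustrative.

def centroid(box):
    # Eqs. (1)/(2): the centroid of an axis-aligned box.
    x1, y1, x2, y2 = box
    return ((x1 + x2) / 2.0, (y1 + y2) / 2.0)

def map_word_to_cell(word_box, cells):
    # cells: dict mapping (row, col) -> cell box from structure recognition.
    wx, wy = centroid(word_box)
    for (row, col), cell_box in cells.items():
        cx, cy = centroid(cell_box)
        half_w = (cell_box[2] - cell_box[0]) / 2.0  # half the cell width
        half_h = (cell_box[3] - cell_box[1]) / 2.0  # half the cell height
        # Eq. (3): accept the cell whose extent contains the word centroid.
        if abs(wx - cx) <= half_w and abs(wy - cy) <= half_h:
            return (row, col)
    return None  # no match: the word is discarded and the cell stays empty
```

Because unmatched words return no cell, empty cells are preserved rather than being filled with misaligned text.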
The text cells detected by PP OCRv2 are compared with the cells identified by CascadeTabNet. Once a correspondence is found between the detected text and the cells, we calculate their centroids. By determining the minimum distance between any two cells, we are able to identify the structure or placement of the text within the rows $R$ and columns $C$ accurately. In our proposed methodology for image-based table recognition, we present a comprehensive pipeline that incorporates three distinct models: DETR for table detection, CascadeTabNet for table structure recognition, and PP OCR for text detection and recognition, as shown in Figure 2. This pipeline is specifically designed to tackle the challenges arising from various table styles, complex structures, and image distortions commonly encountered in document images. Initially, the input document, which can be in image or PDF format, is preprocessed to ensure a standardized input for subsequent analysis. The document image is then fed into the DETR model, an object detection approach, which accurately localizes tables by generating a fixed-size set of $S$ predictions. It is crucial for $S$ to be larger than the typical number of objects in an image. During training, the loss computation involves an optimization procedure that finds the optimal bipartite matching between the predicted and ground truth objects. To address the limitations of existing table-structure identification models, we evaluated the Table Transformer [31], which introduces a robust table-structure decomposition algorithm. This algorithm is designed to be language agnostic and effectively utilizes data from original PDF documents, enabling faster and more accurate text-cell extraction while establishing a direct link between table cells and their corresponding bounding boxes in the image. However, it is worth noting that the performance of the object detection decoder for table cells heavily relies on the availability of high-quality programmatic PDFs containing well-structured tabular content. In cases where the PDFs are poorly formatted or include non-standard table layouts, the model\u2019s performance may suffer, leading to less accurate content extraction. $$\hat{\sigma} = \arg\min_{\sigma \in \mathfrak{S}_S} \sum_{i=1}^{S} \mathcal{L}_{match}(y_i, \hat{y}_{\sigma(i)}) \quad (4)$$ In Equation (4), $y$ denotes the ground truth set of objects, and $\hat{y} = \{\hat{y}_i\}_{i=1}^{S}$ denotes the set of $S$ predictions. $\mathcal{L}_{match}(y_i, \hat{y}_{\sigma(i)})$ is a pairwise matching cost between ground truth $y_i$ and a prediction with index $\sigma(i)$. The matching cost takes into account both the class predictions and the similarity of predicted and ground truth boxes. Each element $i$ of the ground truth set can be seen as $y_i = (c_i, b_i)$, where $c_i$ is the target class label and $b_i \in [0, 1]^4$ is a vector that specifies the center coordinates of the ground truth box as well as its height and width relative to the image size.
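In practice, the bipartite matching of Equation (4) can be computed with the Hungarian algorithm. The sketch below is a simplified stand-in rather than the exact matching cost used by DETR: it combines a negative class-probability term with an L1 box term, and the array shapes and function names are assumptions.

```python
# Sketch of the set matching in Eq. (4): the optimal assignment sigma-hat is
# found with the Hungarian algorithm over a pairwise matching-cost matrix.
import numpy as np
from scipy.optimize import linear_sum_assignment

def hungarian_match(pred_probs, pred_boxes, gt_classes, gt_boxes):
    # pred_probs: (S, num_classes); pred_boxes: (S, 4), normalized to [0, 1]
    # gt_classes: (M,); gt_boxes: (M, 4), with M <= S
    class_cost = -pred_probs[:, gt_classes]                        # (S, M)
    box_cost = np.abs(pred_boxes[:, None, :] - gt_boxes[None, :, :]).sum(-1)
    cost = class_cost + box_cost                                   # L_match
    pred_idx, gt_idx = linear_sum_assignment(cost)                 # sigma-hat
    return pred_idx, gt_idx  # matched prediction / ground-truth index pairs
```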
For index $\sigma(i)$, we define the probability of class $c_i$ as $\hat{p}_{\sigma(i)}(c_i)$ and the predicted box as $\hat{b}_{\sigma(i)}$:

$$\mathcal{L}_{Hungarian}(y, \hat{y}) = \sum_{i=1}^{S} \left[ -\log \hat{p}_{\hat{\sigma}(i)}(c_i) + \mathbb{1}_{\{c_i \neq \emptyset\}} \mathcal{L}_{box}(b_i, \hat{b}_{\hat{\sigma}(i)}) \right] \quad (5)$$

The second part of the matching cost and of the Hungarian loss (5) is $\mathcal{L}_{box}(\cdot)$, which scores the bounding boxes. In Equation (6), the $\ell_1$ loss and the generalized IoU loss $\mathcal{L}_{iou}$ are combined, where $\lambda_{iou}, \lambda_{L1} \in \mathbb{R}$:

$$\mathcal{L}_{box}(b_i, \hat{b}_{\sigma(i)}) = \lambda_{iou} \mathcal{L}_{iou}(b_i, \hat{b}_{\sigma(i)}) + \lambda_{L1} \|b_i - \hat{b}_{\sigma(i)}\|_1 \quad (6)$$

In this study, we propose a comprehensive approach for automatic table understanding in images. The process involves several key steps, starting with the detection of table regions through a region-proposal technique. The identified table regions are then isolated from the original image and used as input for the CascadeTabNet model, a specialized deep-learning architecture designed for precise table structure recognition. CascadeTabNet accurately determines the number of rows and columns within a table and their corresponding spatial coordinates. Subsequently, we employ the PPOCR method for precise text detection and recognition within the identified table cells. PPOCR extracts the spatial coordinates of the detected text, and we establish a mapping process based on the nearest-neighbor approach to align this text with the original coordinates of the table cells obtained from CascadeTabNet. This integrated methodology offers a robust and efficient solution for the automatic extraction and understanding of tabular data from images, enhancing the organization and accessibility of such information in various applications.

$$Loss_{total} = Loss_{truth} + Loss_{dml} + Loss_{distill} \quad (7)$$

Equation (7) consists of three losses: 1) Truth Loss, 2) DML Loss, and 3) Distill Loss. Truth Loss ensures that training is supervised by the true labels. The DML Loss is computed with the KL divergence, which measures the distance between the output distributions of the two student models. The third component, Distill Loss, reflects the supervision of the teacher model over the sub-student models.
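As a rough illustration of Equation (7), the sketch below combines the three terms for a pair of student networks and a frozen teacher, using cross-entropy as a stand-in truth loss and KL divergence for the mutual and distillation terms. This is a simplified, classification-style sketch under our own assumptions; the actual PP OCRv2 CML losses operate on text-detection maps (see [8] for the exact formulation).

```python
import torch.nn.functional as F

def cml_total_loss(student1_logits, student2_logits, teacher_logits, targets):
    """Sketch of Loss_total in Equation (7): truth + DML + distill terms."""
    # Truth loss: both students are supervised by the ground-truth labels.
    loss_truth = (F.cross_entropy(student1_logits, targets) +
                  F.cross_entropy(student2_logits, targets))
    # DML loss: symmetric KL divergence keeps the two student
    # output distributions consistent with each other.
    log_p1 = F.log_softmax(student1_logits, dim=-1)
    log_p2 = F.log_softmax(student2_logits, dim=-1)
    loss_dml = 0.5 * (F.kl_div(log_p1, log_p2.exp(), reduction="batchmean") +
                      F.kl_div(log_p2, log_p1.exp(), reduction="batchmean"))
    # Distill loss: the (detached) teacher supervises both sub-students.
    p_teacher = F.softmax(teacher_logits.detach(), dim=-1)
    loss_distill = (F.kl_div(log_p1, p_teacher, reduction="batchmean") +
                    F.kl_div(log_p2, p_teacher, reduction="batchmean"))
    return loss_truth + loss_dml + loss_distill
```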
For the text recognition branch, the total loss is:

$$Loss_{total} = Loss_{ctc} + Loss_{dml} + Loss_{feat} \quad (8)$$

The total loss function in Equation (8) consists of three parts (see Section 2.2 of [8]):

• CTC Loss: since both networks are trained from scratch, they can converge using the CTC loss.
• DML Loss: because the final output distributions of the two networks are expected to be identical, the DML loss is needed to keep the distributions of the two networks consistent.
• Feature Loss: since the two network designs are similar, their feature maps are expected to be similar as well; the feature loss reduces the gap between the intermediate feature maps of the two networks.

| Column No. | No. of Images | Rows (ground truth) | TATR (rows detected) | TC-OCR (rows detected) | TATR Accuracy (%) | TC-OCR Accuracy (%) | Improvement (TC-OCR vs. TATR) |
|---|---|---|---|---|---|---|---|
| Total | 240 | 2785 | 1818 | 2485 | 65 | 89 | 24 |
| 2 | 100 | 1130 | 838 | 1075 | 74 | 95 | 21 |
| 3 | 100 | 1085 | 760 | 1010 | 70 | 91 | 21 |
| 4 | 40 | 570 | 220 | 400 | 39 | 70 | 31 |

Table 1: Comprehensive comparison of results between the Table Transformer (TATR) model and our proposed method

By leveraging the known structural characteristics of tables, we have devised a systematic pipeline for the precise extraction of text in a structured manner from document images, while preserving the original table organization. The pipeline consists of three interconnected models: table localization, structure recognition, and structured text detection and recognition. The extracted data is then presented in a CSV file, adhering to the same structure as the original table in the document. The word-level accuracy $WAcc$ is computed with the following formula:

$$WAcc = \frac{X}{Y} \times 100 \quad (9)$$

where $X$ is the number of words correctly recognized by the OCR and $Y$ is the total number of words in the ground truth. The proposed end-to-end solution demonstrates its effectiveness in image-based table recognition, addressing various challenges in the process. These challenges encompass table localization, structure recognition, and the accurate detection and recognition of text within the structured table. The successful implementation of this comprehensive approach allows for the accurate extraction of tabular data from document images, which in turn enhances data analysis and search-engine capabilities and contributes to knowledge-graph enrichment. 5 EXPERIMENT We conducted a comparative analysis of inference time for our proposed model and the Table Transformer (TATR) [31] on the TableBank dataset [19], comprising 47,053 table images, as shown in Table 2. As observed, our model outperforms TATR in terms of efficiency, demonstrating faster inference times across all measured aspects. Specifically, our model achieves a maximum inference time of 12.7 seconds, a minimum of 5.42 seconds, and an average of 8.23 seconds. In contrast, TATR's corresponding figures are 15.48 seconds, 4.95 seconds, and 12.43 seconds, respectively.
| Model | Max (sec.) | Min (sec.) | Avg (sec.) |
|---|---|---|---|
| TATR [31] | 15.48 | 4.95 | 12.43 |
| TC-OCR | 12.7 | 5.42 | 8.23 |

Table 2: Inference time in seconds for our model compared against the Table Transformer (TATR) on the TableBank [19] dataset of 47,053 images

These findings underscore the effectiveness of our approach in jointly representing and integrating textual and visual information within tables, leading to enhanced performance and reduced inference times. The superior inference speed of our model positions it as a promising solution for real-world applications, where time-sensitive tasks demand swift and accurate data comprehension. We also carried out a comparative analysis of our proposed model against the state-of-the-art (SOTA) Table Transformer model on 8,000 samples taken from TableBank [19]. Table 3 summarizes the evaluation results in terms of Intersection over Union (IOU) and Optical Character Recognition (OCR) accuracy. As shown in Table 3, our model outperforms the Table Transformer in both metrics, showcasing its superior performance. Specifically, our model achieves an impressive IOU of 0.96, indicating its effectiveness in accurately delineating and localizing table elements. Moreover, our model demonstrates a significant advancement in OCR accuracy, reaching 78%, thereby excelling in the crucial task of accurately recognizing and understanding the textual content within tables. Another comprehensive comparison between the Table Transformer (TATR) model and our proposed method, shown in Table 1, presents the performance evaluation on different column counts over a dataset containing a total of 240 images and 2,785 rows. Our method demonstrates superior accuracy across all column counts, outperforming TATR significantly. Particularly noteworthy is the overall improvement achieved by our approach, with an impressive 24% increase in accuracy compared to TATR. These findings underscore the effectiveness of our proposed method in tackling the problem of multimodal tables, indicating its potential for enhancing data comprehension and the extraction of meaningful insights from diverse tabular data. 6 CONCLUSION In conclusion, we propose an integrated pipeline for end-to-end image-based table recognition, leveraging the capabilities of three state-of-the-art models: DETR, CascadeTabNet, and PP OCRv2. By combining these models, we effectively tackle the challenges posed by diverse table styles and complex structures in document images. Our approach facilitates the accurate reconstruction of table layouts and the extraction of cell content from PDFs or OCR through bounding boxes. Empirical evaluations demonstrate the superior performance and efficiency of our method compared to existing techniques, as it excels in preserving table structures and extracting tabular data with high efficacy. It is important to note that while our research serves as a strong foundation for advancing image-based table recognition, further refinements and optimizations are essential to enhance its applicability across a wider range of scenarios. Ultimately, our work contributes to the advancement of data extraction and comprehension in digitized documents, fostering innovation in the field of document analysis.
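For reference, the two evaluation metrics reported in Table 3 below can be computed as in the following sketch. The IoU follows the standard definition for axis-aligned boxes; for word-level OCR accuracy we assume a position-wise exact-match rule per Equation (9), since the word-matching criterion is not fixed above.

```python
def iou(box_a, box_b):
    """Intersection over Union of two (x1, y1, x2, y2) boxes."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

def word_accuracy(recognized, ground_truth):
    """WAcc of Equation (9): percentage of ground-truth words that the OCR
    output reproduces exactly (position-wise comparison assumed)."""
    if not ground_truth:
        return 0.0
    correct = sum(r == g for r, g in zip(recognized, ground_truth))
    return correct / len(ground_truth) * 100.0
```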
| Model | IOU | OCR Accuracy |
|---|---|---|
| Table Transformer [31] | 0.94 | 62% |
| Our Model (TC-OCR) | 0.96 | 78% |

Table 3: Comparison of our model (TC-OCR) with the SOTA, trained on 8,000 samples of the TableBank [19] dataset

7 FUTURE SCOPE The multi-modal tables problem presents a significant challenge in the realm of AI research, necessitating effective understanding and processing of tables that incorporate both textual and visual elements, such as images or graphs [18]. Successfully addressing this challenge requires AI models to not only interpret the content within individual cells but also grasp the intricate relationships between textual and visual information. Therefore, the primary objective of this research is to devise novel methods that can jointly represent and seamlessly integrate these modalities, leading to more comprehensive data comprehension and extraction of meaningful insights across diverse domains. By delving into this unexplored territory, this study aims to pave the way for innovative approaches that advance the capabilities of AI systems in handling multimodal tables and offer valuable contributions to real-world applications. 8 ACKNOWLEDGMENT Dr. Rajiv Ratn Shah is partly supported by the Infosys Center for AI, the Center of Design and New Media, and the Center of Excellence in Healthcare at Indraprastha Institute of Information Technology, Delhi. We sincerely appreciate the guidance and unwavering support provided by Ms. Astha Verma and Mr. Naman Lal throughout our research. Their expertise and insightful feedback have greatly influenced the direction and quality of our study. We are grateful for their time, dedication, and willingness to share knowledge, which significantly contributed to the completion of this work. Their encouragement and constructive discussions served as a constant source of motivation, and we feel privileged to have benefited from their wisdom and mentorship. 9 LIMITATIONS One notable limitation of our proposed approach is its inability to accurately recognize complex tables with merged cells, nested tables, or irregular structures. Dealing with such intricate layouts poses challenges in comprehending the intricate relationships between cells and headers. As a result, our current method may not be suitable for handling these specialized cases, and further research and enhancements are required to address these complexities effectively."
  },
  {
    "url": "http://arxiv.org/abs/1912.05846v1",
    "title": "The Benefits of Close-Domain Fine-Tuning for Table Detection in Document Images",
    "abstract": "A correct localisation of tables in a document is instrumental for\ndetermining their structure and extracting their contents; therefore, table\ndetection is a key step in table understanding. Nowadays, the most successful\nmethods for table detection in document images employ deep learning algorithms;\nand, particularly, a technique known as fine-tuning. In this context, such a\ntechnique exports the knowledge acquired to detect objects in natural images to\ndetect tables in document images. However, there is only a vague relation\nbetween natural and document images, and fine-tuning works better when there is\na close relation between the source and target task. In this paper, we show\nthat it is more beneficial to employ fine-tuning from a closer domain. 
To this\naim, we train different object detection algorithms (namely, Mask R-CNN,\nRetinaNet, SSD and YOLO) using the TableBank dataset (a dataset of images of\nacademic documents designed for table detection and recognition), and fine-tune\nthem for several heterogeneous table detection datasets. Using this approach,\nwe considerably improve the accuracy of the detection models fine-tuned from\nnatural images (in mean a 17%, and, in the best case, up to a 60%).", + "authors": "\u00c1ngela Casado-Garc\u00eda, C\u00e9sar Dom\u00ednguez, J\u00f3nathan Heras, Eloy Mata, Vico Pascual", + "published": "2019-12-12", + "updated": "2019-12-12", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV", + "cs.CL", + "cs.LG" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/1904.12577v2", + "title": "Table understanding in structured documents", + "abstract": "Abstract--- Table detection and extraction has been studied in the context of\ndocuments like reports, where tables are clearly outlined and stand out from\nthe document structure visually. We study this topic in a rather more\nchallenging domain of layout-heavy business documents, particularly invoices.\nInvoices present the novel challenges of tables being often without outlines -\neither in the form of borders or surrounding text flow - with ragged columns\nand widely varying data content. We will also show, that we can extract\nspecific information from structurally different tables or table-like\nstructures with one model. We present a comprehensive representation of a page\nusing graph over word boxes, positional embeddings, trainable textual features\nand rephrase the table detection as a text box labeling problem. We will work\non our newly presented dataset of pro forma invoices, invoices and debit note\ndocuments using this representation and propose multiple baselines to solve\nthis labeling problem. We then propose a novel neural network model that\nachieves strong, practical results on the presented dataset and analyze the\nmodel performance and effects of graph convolutions and self-attention in\ndetail.", + "authors": "Martin Hole\u010dek, Anton\u00edn Hoskovec, Petr Baudi\u0161, Pavel Klinger", + "published": "2019-03-22", + "updated": "2019-07-09", + "primary_cat": "cs.IR", + "cats": [ + "cs.IR", + "cs.LG" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2005.00589v2", + "title": "Global Table Extractor (GTE): A Framework for Joint Table Identification and Cell Structure Recognition Using Visual Context", + "abstract": "Documents are often used for knowledge sharing and preservation in business\nand science, within which are tables that capture most of the critical data.\nUnfortunately, most documents are stored and distributed as PDF or scanned\nimages, which fail to preserve logical table structure. Recent vision-based\ndeep learning approaches have been proposed to address this gap, but most still\ncannot achieve state-of-the-art results. We present Global Table Extractor\n(GTE), a vision-guided systematic framework for joint table detection and cell\nstructured recognition, which could be built on top of any object detection\nmodel. With GTE-Table, we invent a new penalty based on the natural cell\ncontainment constraint of tables to train our table network aided by cell\nlocation predictions. GTE-Cell is a new hierarchical cell detection network\nthat leverages table styles. 
Further, we design a method to automatically label\ntable and cell structure in existing documents to cheaply create a large corpus\nof training and test data. We use this to enhance PubTabNet with cell labels\nand create FinTabNet, real-world and complex scientific and financial datasets\nwith detailed table structure annotations to help train and test structure\nrecognition. Our framework surpasses previous state-of-the-art results on the\nICDAR 2013 and ICDAR 2019 table competition in both table detection and cell\nstructure recognition with a significant 5.8% improvement in the full table\nextraction system. Further experiments demonstrate a greater than 45%\nimprovement in cell structure recognition when compared to a vanilla RetinaNet\nobject detection model in our new out-of-domain FinTabNet.", + "authors": "Xinyi Zheng, Doug Burdick, Lucian Popa, Xu Zhong, Nancy Xin Ru Wang", + "published": "2020-05-01", + "updated": "2020-12-02", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2203.01017v2", + "title": "TableFormer: Table Structure Understanding with Transformers", + "abstract": "Tables organize valuable content in a concise and compact representation.\nThis content is extremely valuable for systems such as search engines,\nKnowledge Graph's, etc, since they enhance their predictive capabilities.\nUnfortunately, tables come in a large variety of shapes and sizes. Furthermore,\nthey can have complex column/row-header configurations, multiline rows,\ndifferent variety of separation lines, missing entries, etc. As such, the\ncorrect identification of the table-structure from an image is a non-trivial\ntask. In this paper, we present a new table-structure identification model. The\nlatter improves the latest end-to-end deep learning model (i.e.\nencoder-dual-decoder from PubTabNet) in two significant ways. First, we\nintroduce a new object detection decoder for table-cells. In this way, we can\nobtain the content of the table-cells from programmatic PDF's directly from the\nPDF source and avoid the training of the custom OCR decoders. This\narchitectural change leads to more accurate table-content extraction and allows\nus to tackle non-english tables. Second, we replace the LSTM decoders with\ntransformer based decoders. This upgrade improves significantly the previous\nstate-of-the-art tree-editing-distance-score (TEDS) from 91% to 98.5% on simple\ntables and from 88.7% to 95% on complex tables.", + "authors": "Ahmed Nassar, Nikolaos Livathinos, Maksym Lysak, Peter Staar", + "published": "2022-03-02", + "updated": "2022-03-11", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV", + "cs.LG" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/1903.01949v2", + "title": "TableBank: A Benchmark Dataset for Table Detection and Recognition", + "abstract": "We present TableBank, a new image-based table detection and recognition\ndataset built with novel weak supervision from Word and Latex documents on the\ninternet. Existing research for image-based table detection and recognition\nusually fine-tunes pre-trained models on out-of-domain data with a few thousand\nhuman-labeled examples, which is difficult to generalize on real-world\napplications. With TableBank that contains 417K high quality labeled tables, we\nbuild several strong baselines using state-of-the-art models with deep neural\nnetworks. 
We make TableBank publicly available and hope it will empower more\ndeep learning approaches in the table detection and recognition task. The\ndataset and models are available at\n\\url{https://github.com/doc-analysis/TableBank}.", + "authors": "Minghao Li, Lei Cui, Shaohan Huang, Furu Wei, Ming Zhou, Zhoujun Li", + "published": "2019-03-05", + "updated": "2020-07-06", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/1806.02559v1", + "title": "Shape Robust Text Detection with Progressive Scale Expansion Network", + "abstract": "The challenges of shape robust text detection lie in two aspects: 1) most\nexisting quadrangular bounding box based detectors are difficult to locate\ntexts with arbitrary shapes, which are hard to be enclosed perfectly in a\nrectangle; 2) most pixel-wise segmentation-based detectors may not separate the\ntext instances that are very close to each other. To address these problems, we\npropose a novel Progressive Scale Expansion Network (PSENet), designed as a\nsegmentation-based detector with multiple predictions for each text instance.\nThese predictions correspond to different `kernels' produced by shrinking the\noriginal text instance into various scales. Consequently, the final detection\ncan be conducted through our progressive scale expansion algorithm which\ngradually expands the kernels with minimal scales to the text instances with\nmaximal and complete shapes. Due to the fact that there are large geometrical\nmargins among these minimal kernels, our method is effective to distinguish the\nadjacent text instances and is robust to arbitrary shapes. The state-of-the-art\nresults on ICDAR 2015 and ICDAR 2017 MLT benchmarks further confirm the great\neffectiveness of PSENet. Notably, PSENet outperforms the previous best record\nby absolute 6.37\\% on the curve text dataset SCUT-CTW1500. Code will be\navailable in https://github.com/whai362/PSENet.", + "authors": "Xiang Li, Wenhai Wang, Wenbo Hou, Ruo-Ze Liu, Tong Lu, Jian Yang", + "published": "2018-06-07", + "updated": "2018-06-07", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2404.09530v2", + "title": "RanLayNet: A Dataset for Document Layout Detection used for Domain Adaptation and Generalization", + "abstract": "Large ground-truth datasets and recent advances in deep learning techniques\nhave been useful for layout detection. However, because of the restricted\nlayout diversity of these datasets, training on them requires a sizable number\nof annotated instances, which is both expensive and time-consuming. As a\nresult, differences between the source and target domains may significantly\nimpact how well these models function. To solve this problem, domain adaptation\napproaches have been developed that use a small quantity of labeled data to\nadjust the model to the target domain. In this research, we introduced a\nsynthetic document dataset called RanLayNet, enriched with automatically\nassigned labels denoting spatial positions, ranges, and types of layout\nelements. The primary aim of this endeavor is to develop a versatile dataset\ncapable of training models with robustness and adaptability to diverse document\nformats. Through empirical experimentation, we demonstrate that a deep layout\nidentification model trained on our dataset exhibits enhanced performance\ncompared to a model trained solely on actual documents. 
Moreover, we conduct a\ncomparative analysis by fine-tuning inference models using both PubLayNet and\nIIIT-AR-13K datasets on the Doclaynet dataset. Our findings emphasize that\nmodels enriched with our dataset are optimal for tasks such as achieving 0.398\nand 0.588 mAP95 score in the scientific document domain for the TABLE class.", + "authors": "Avinash Anand, Raj Jaiswal, Mohit Gupta, Siddhesh S Bangar, Pijush Bhuyan, Naman Lal, Rajeev Singh, Ritika Jha, Rajiv Ratn Shah, Shin'ichi Satoh", + "published": "2024-04-15", + "updated": "2024-04-19", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV", + "cs.AI" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/1911.10683v5", + "title": "Image-based table recognition: data, model, and evaluation", + "abstract": "Important information that relates to a specific topic in a document is often\norganized in tabular format to assist readers with information retrieval and\ncomparison, which may be difficult to provide in natural language. However,\ntabular data in unstructured digital documents, e.g., Portable Document Format\n(PDF) and images, are difficult to parse into structured machine-readable\nformat, due to complexity and diversity in their structure and style. To\nfacilitate image-based table recognition with deep learning, we develop the\nlargest publicly available table recognition dataset PubTabNet\n(https://github.com/ibm-aur-nlp/PubTabNet), containing 568k table images with\ncorresponding structured HTML representation. PubTabNet is automatically\ngenerated by matching the XML and PDF representations of the scientific\narticles in PubMed Central Open Access Subset (PMCOA). We also propose a novel\nattention-based encoder-dual-decoder (EDD) architecture that converts images of\ntables into HTML code. The model has a structure decoder which reconstructs the\ntable structure and helps the cell decoder to recognize cell content. In\naddition, we propose a new Tree-Edit-Distance-based Similarity (TEDS) metric\nfor table recognition, which more appropriately captures multi-hop cell\nmisalignment and OCR errors than the pre-established metric. The experiments\ndemonstrate that the EDD model can accurately recognize complex tables solely\nrelying on the image representation, outperforming the state-of-the-art by 9.7%\nabsolute TEDS score.", + "authors": "Xu Zhong, Elaheh ShafieiBavani, Antonio Jimeno Yepes", + "published": "2019-11-25", + "updated": "2020-03-04", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2104.14272v2", + "title": "Current Status and Performance Analysis of Table Recognition in Document Images with Deep Neural Networks", + "abstract": "The first phase of table recognition is to detect the tabular area in a\ndocument. Subsequently, the tabular structures are recognized in the second\nphase in order to extract information from the respective cells. Table\ndetection and structural recognition are pivotal problems in the domain of\ntable understanding. However, table analysis is a perplexing task due to the\ncolossal amount of diversity and asymmetry in tables. Therefore, it is an\nactive area of research in document image analysis. Recent advances in the\ncomputing capabilities of graphical processing units have enabled deep neural\nnetworks to outperform traditional state-of-the-art machine learning methods.\nTable understanding has substantially benefited from the recent breakthroughs\nin deep neural networks. 
However, there has not been a consolidated description\nof the deep learning methods for table detection and table structure\nrecognition. This review paper provides a thorough analysis of the modern\nmethodologies that utilize deep neural networks. This work provided a thorough\nunderstanding of the current state-of-the-art and related challenges of table\nunderstanding in document images. Furthermore, the leading datasets and their\nintricacies have been elaborated along with the quantitative results. Moreover,\na brief overview is given regarding the promising directions that can serve as\na guide to further improve table analysis in document images.", + "authors": "Khurram Azeem Hashmi, Marcus Liwicki, Didier Stricker, Muhammad Adnan Afzal, Muhammad Ahtsham Afzal, Muhammad Zeshan Afzal", + "published": "2021-04-29", + "updated": "2021-05-08", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2303.07641v1", + "title": "Rethinking Image-based Table Recognition Using Weakly Supervised Methods", + "abstract": "Most of the previous methods for table recognition rely on training datasets\ncontaining many richly annotated table images. Detailed table image annotation,\ne.g., cell or text bounding box annotation, however, is costly and often\nsubjective. In this paper, we propose a weakly supervised model named WSTabNet\nfor table recognition that relies only on HTML (or LaTeX) code-level\nannotations of table images. The proposed model consists of three main parts:\nan encoder for feature extraction, a structure decoder for generating table\nstructure, and a cell decoder for predicting the content of each cell in the\ntable. Our system is trained end-to-end by stochastic gradient descent\nalgorithms, requiring only table images and their ground-truth HTML (or LaTeX)\nrepresentations. To facilitate table recognition with deep learning, we create\nand release WikiTableSet, the largest publicly available image-based table\nrecognition dataset built from Wikipedia. WikiTableSet contains nearly 4\nmillion English table images, 590K Japanese table images, and 640k French table\nimages with corresponding HTML representation and cell bounding boxes. The\nextensive experiments on WikiTableSet and two large-scale datasets: FinTabNet\nand PubTabNet demonstrate that the proposed weakly supervised model achieves\nbetter, or similar accuracies compared to the state-of-the-art models on all\nbenchmark datasets.", + "authors": "Nam Tuan Ly, Atsuhiro Takasu, Phuc Nguyen, Hideaki Takeda", + "published": "2023-03-14", + "updated": "2023-03-14", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2010.04565v1", + "title": "Table Structure Recognition using Top-Down and Bottom-Up Cues", + "abstract": "Tables are information-rich structured objects in document images. While\nsignificant work has been done in localizing tables as graphic objects in\ndocument images, only limited attempts exist on table structure recognition.\nMost existing literature on structure recognition depends on extraction of\nmeta-features from the PDF document or on the optical character recognition\n(OCR) models to extract low-level layout features from the image. However,\nthese methods fail to generalize well because of the absence of meta-features\nor errors made by the OCR when there is a significant variance in table layouts\nand text organization. 
In our work, we focus on tables that have complex\nstructures, dense content, and varying layouts with no dependency on\nmeta-features and/or OCR.\n We present an approach for table structure recognition that combines cell\ndetection and interaction modules to localize the cells and predict their row\nand column associations with other detected cells. We incorporate structural\nconstraints as additional differential components to the loss function for cell\ndetection. We empirically validate our method on the publicly available\nreal-world datasets - ICDAR-2013, ICDAR-2019 (cTDaR) archival, UNLV, SciTSR,\nSciTSR-COMP, TableBank, and PubTabNet. Our attempt opens up a new direction for\ntable structure recognition by combining top-down (table cells detection) and\nbottom-up (structure recognition) cues in visually understanding the tables.", + "authors": "Sachin Raja, Ajoy Mondal, C. V. Jawahar", + "published": "2020-10-09", + "updated": "2020-10-09", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2105.01846v1", + "title": "PingAn-VCGroup's Solution for ICDAR 2021 Competition on Scientific Table Image Recognition to Latex", + "abstract": "This paper presents our solution for the ICDAR 2021 Competition on Scientific\nTable Image Recognition to LaTeX. This competition has two sub-tasks: Table\nStructure Reconstruction (TSR) and Table Content Reconstruction (TCR). We treat\nboth sub-tasks as two individual image-to-sequence recognition problems. We\nleverage our previously proposed algorithm MASTER \\cite{lu2019master}, which is\noriginally proposed for scene text recognition. We optimize the MASTER model\nfrom several perspectives: network structure, optimizer, normalization method,\npre-trained model, resolution of input image, data augmentation, and model\nensemble. Our method achieves 0.7444 Exact Match and 0.8765 Exact Match @95\\%\non the TSR task, and obtains 0.5586 Exact Match and 0.7386 Exact Match 95\\% on\nthe TCR task.", + "authors": "Yelin He, Xianbiao Qi, Jiaquan Ye, Peng Gao, Yihao Chen, Bingcong Li, Xin Tang, Rong Xiao", + "published": "2021-05-05", + "updated": "2021-05-05", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV", + "cs.AI" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2105.14426v2", + "title": "ICDAR 2021 Competition on Scientific Table Image Recognition to LaTeX", + "abstract": "Tables present important information concisely in many scientific documents.\nVisual features like mathematical symbols, equations, and spanning cells make\nstructure and content extraction from tables embedded in research documents\ndifficult. This paper discusses the dataset, tasks, participants' methods, and\nresults of the ICDAR 2021 Competition on Scientific Table Image Recognition to\nLaTeX. Specifically, the task of the competition is to convert a tabular image\nto its corresponding LaTeX source code. We proposed two subtasks. In Subtask 1,\nwe ask the participants to reconstruct the LaTeX structure code from an image.\nIn Subtask 2, we ask the participants to reconstruct the LaTeX content code\nfrom an image. This report describes the datasets and ground truth\nspecification, details the performance evaluation metrics used, presents the\nfinal results, and summarizes the participating methods. Submission by team\nVCGroup got the highest Exact Match accuracy score of 74% for Subtask 1 and 55%\nfor Subtask 2, beating previous baselines by 5% and 12%, respectively. 
Although\nimprovements can still be made to the recognition capabilities of models, this\ncompetition contributes to the development of fully automated table recognition\nsystems by challenging practitioners to solve problems under specific\nconstraints and sharing their approaches; the platform will remain available\nfor post-challenge submissions at\nhttps://competitions.codalab.org/competitions/26979 .", + "authors": "Pratik Kayal, Mrinal Anand, Harsh Desai, Mayank Singh", + "published": "2021-05-30", + "updated": "2021-11-11", + "primary_cat": "cs.IR", + "cats": [ + "cs.IR", + "cs.AI", + "cs.CV" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/1910.02562v3", + "title": "MASTER: Multi-Aspect Non-local Network for Scene Text Recognition", + "abstract": "Attention-based scene text recognizers have gained huge success, which\nleverages a more compact intermediate representation to learn 1d- or 2d-\nattention by a RNN-based encoder-decoder architecture. However, such methods\nsuffer from attention-drift problem because high similarity among encoded\nfeatures leads to attention confusion under the RNN-based local attention\nmechanism. Moreover, RNN-based methods have low efficiency due to poor\nparallelization. To overcome these problems, we propose the MASTER, a\nself-attention based scene text recognizer that (1) not only encodes the\ninput-output attention but also learns self-attention which encodes\nfeature-feature and target-target relationships inside the encoder and decoder\nand (2) learns a more powerful and robust intermediate representation to\nspatial distortion, and (3) owns a great training efficiency because of high\ntraining parallelization and a high-speed inference because of an efficient\nmemory-cache mechanism. Extensive experiments on various benchmarks demonstrate\nthe superior performance of our MASTER on both regular and irregular scene\ntext. Pytorch code can be found at https://github.com/wenwenyu/MASTER-pytorch,\nand Tensorflow code can be found at https://github.com/jiangxiluning/MASTER-TF.", + "authors": "Ning Lu, Wenwen Yu, Xianbiao Qi, Yihao Chen, Ping Gong, Rong Xiao, Xiang Bai", + "published": "2019-10-07", + "updated": "2021-04-11", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2105.01848v1", + "title": "PingAn-VCGroup's Solution for ICDAR 2021 Competition on Scientific Literature Parsing Task B: Table Recognition to HTML", + "abstract": "This paper presents our solution for ICDAR 2021 competition on scientific\nliterature parsing taskB: table recognition to HTML. In our method, we divide\nthe table content recognition task into foursub-tasks: table structure\nrecognition, text line detection, text line recognition, and box assignment.Our\ntable structure recognition algorithm is customized based on MASTER [1], a\nrobust image textrecognition algorithm. PSENet [2] is used to detect each text\nline in the table image. For text linerecognition, our model is also built on\nMASTER. Finally, in the box assignment phase, we associatedthe text boxes\ndetected by PSENet with the structure item reconstructed by table structure\nprediction,and fill the recognized content of the text line into the\ncorresponding item. 
Our proposed methodachieves a 96.84% TEDS score on 9,115\nvalidation samples in the development phase, and a 96.32%TEDS score on 9,064\nsamples in the final evaluation phase.", + "authors": "Jiaquan Ye, Xianbiao Qi, Yelin He, Yihao Chen, Dengyi Gu, Peng Gao, Rong Xiao", + "published": "2021-05-05", + "updated": "2021-05-05", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV", + "cs.AI" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2306.07699v2", + "title": "Time-aware Graph Structure Learning via Sequence Prediction on Temporal Graphs", + "abstract": "Temporal Graph Learning, which aims to model the time-evolving nature of\ngraphs, has gained increasing attention and achieved remarkable performance\nrecently. However, in reality, graph structures are often incomplete and noisy,\nwhich hinders temporal graph networks (TGNs) from learning informative\nrepresentations. Graph contrastive learning uses data augmentation to generate\nplausible variations of existing data and learn robust representations.\nHowever, rule-based augmentation approaches may be suboptimal as they lack\nlearnability and fail to leverage rich information from downstream tasks. To\naddress these issues, we propose a Time-aware Graph Structure Learning (TGSL)\napproach via sequence prediction on temporal graphs, which learns better graph\nstructures for downstream tasks through adding potential temporal edges. In\nparticular, it predicts time-aware context embedding based on previously\nobserved interactions and uses the Gumble-Top-K to select the closest candidate\nedges to this context embedding. Additionally, several candidate sampling\nstrategies are proposed to ensure both efficiency and diversity. Furthermore,\nwe jointly learn the graph structure and TGNs in an end-to-end manner and\nperform inference on the refined graph. Extensive experiments on temporal link\nprediction benchmarks demonstrate that TGSL yields significant gains for the\npopular TGNs such as TGAT and GraphMixer, and it outperforms other contrastive\nlearning methods on temporal graphs. We release the code at\nhttps://github.com/ViktorAxelsen/TGSL.", + "authors": "Haozhen Zhang, Xueting Han, Xi Xiao, Jing Bai", + "published": "2023-06-13", + "updated": "2023-08-15", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/1803.03324v1", + "title": "Learning Deep Generative Models of Graphs", + "abstract": "Graphs are fundamental data structures which concisely capture the relational\nstructure in many important real-world domains, such as knowledge graphs,\nphysical and social interactions, language, and chemistry. Here we introduce a\npowerful new approach for learning generative models over graphs, which can\ncapture both their structure and attributes. Our approach uses graph neural\nnetworks to express probabilistic dependencies among a graph's nodes and edges,\nand can, in principle, learn distributions over any arbitrary graph. In a\nseries of experiments our results show that once trained, our models can\ngenerate good quality samples of both synthetic graphs as well as real\nmolecular graphs, both unconditionally and conditioned on data. Compared to\nbaselines that do not use graph-structured representations, our models often\nperform far better. 
We also explore key challenges of learning generative\nmodels of graphs, such as how to handle symmetries and ordering of elements\nduring the graph generation process, and offer possible solutions. Our work is\nthe first and most general approach for learning generative models over\narbitrary graphs, and opens new directions for moving away from restrictions of\nvector- and sequence-like knowledge representations, toward more expressive and\nflexible relational data structures.", + "authors": "Yujia Li, Oriol Vinyals, Chris Dyer, Razvan Pascanu, Peter Battaglia", + "published": "2018-03-08", + "updated": "2018-03-08", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "stat.ML" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/2110.05018v2", + "title": "Time-varying Graph Learning Under Structured Temporal Priors", + "abstract": "This paper endeavors to learn time-varying graphs by using structured\ntemporal priors that assume underlying relations between arbitrary two graphs\nin the graph sequence. Different from many existing chain structure based\nmethods in which the priors like temporal homogeneity can only describe the\nvariations of two consecutive graphs, we propose a structure named\n\\emph{temporal graph} to characterize the underlying real temporal relations.\nUnder this framework, the chain structure is actually a special case of our\ntemporal graph. We further proposed Alternating Direction Method of Multipliers\n(ADMM), a distributed algorithm, to solve the induced optimization problem.\nNumerical experiments demonstrate the superiorities of our method.", + "authors": "Xiang Zhang, Qiao Wang", + "published": "2021-10-11", + "updated": "2022-02-23", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "eess.SP" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/2401.16176v1", + "title": "A Survey on Structure-Preserving Graph Transformers", + "abstract": "The transformer architecture has shown remarkable success in various domains,\nsuch as natural language processing and computer vision. When it comes to graph\nlearning, transformers are required not only to capture the interactions\nbetween pairs of nodes but also to preserve graph structures connoting the\nunderlying relations and proximity between them, showing the expressive power\nto capture different graph structures. Accordingly, various\nstructure-preserving graph transformers have been proposed and widely used for\nvarious tasks, such as graph-level tasks in bioinformatics and\nchemoinformatics. However, strategies related to graph structure preservation\nhave not been well organized and systematized in the literature. In this paper,\nwe provide a comprehensive overview of structure-preserving graph transformers\nand generalize these methods from the perspective of their design objective.\nFirst, we divide strategies into four main groups: node feature modulation,\ncontext node sampling, graph rewriting, and transformer architecture\nimprovements. We then further divide the strategies according to the coverage\nand goals of graph structure preservation. 
Furthermore, we also discuss\nchallenges and future directions for graph transformer models to preserve the\ngraph structure and understand the nature of graphs.", + "authors": "Van Thuy Hoang, O-Joun Lee", + "published": "2024-01-29", + "updated": "2024-01-29", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/2202.10688v2", + "title": "Graph Lifelong Learning: A Survey", + "abstract": "Graph learning is a popular approach for performing machine learning on\ngraph-structured data. It has revolutionized the machine learning ability to\nmodel graph data to address downstream tasks. Its application is wide due to\nthe availability of graph data ranging from all types of networks to\ninformation systems. Most graph learning methods assume that the graph is\nstatic and its complete structure is known during training. This limits their\napplicability since they cannot be applied to problems where the underlying\ngraph grows over time and/or new tasks emerge incrementally. Such applications\nrequire a lifelong learning approach that can learn the graph continuously and\naccommodate new information whilst retaining previously learned knowledge.\nLifelong learning methods that enable continuous learning in regular domains\nlike images and text cannot be directly applied to continuously evolving graph\ndata, due to its irregular structure. As a result, graph lifelong learning is\ngaining attention from the research community. This survey paper provides a\ncomprehensive overview of recent advancements in graph lifelong learning,\nincluding the categorization of existing methods, and the discussions of\npotential applications and open research problems.", + "authors": "Falih Gozi Febrinanto, Feng Xia, Kristen Moore, Chandra Thapa, Charu Aggarwal", + "published": "2022-02-22", + "updated": "2022-11-04", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI", + "68T07, 68T05", + "I.2.6" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/2401.13769v1", + "title": "Multiview Graph Learning with Consensus Graph", + "abstract": "Graph topology inference, i.e., learning graphs from a given set of nodal\nobservations, is a significant task in many application domains. Existing\napproaches are mostly limited to learning a single graph assuming that the\nobserved data is homogeneous. This is problematic because many modern datasets\nare heterogeneous or mixed and involve multiple related graphs, i.e., multiview\ngraphs. Recent work proposing to learn multiview graphs ensures the similarity\nof learned view graphs through pairwise regularization, where each pair of\nviews is encouraged to have similar structures. However, this approach cannot\ninfer the shared structure across views. In this work, we propose an\nalternative method based on consensus regularization, where views are ensured\nto be similar through a learned consensus graph representing the common\nstructure of the views. In particular, we propose an optimization problem,\nwhere graph data is assumed to be smooth over the multiview graph and the\ntopology of the individual views and that of the consensus graph are learned,\nsimultaneously. Our optimization problem is designed to be general in the sense\nthat different regularization functions can be used depending on what the\nshared structure across views is. 
Moreover, we propose two regularization\nfunctions that extend fused and group graphical lasso to consensus based\nregularization. Proposed multiview graph learning is evaluated on simulated\ndata and shown to have better performance than existing methods. It is also\nemployed to infer the functional brain connectivity networks of multiple\nsubjects from their electroencephalogram (EEG) recordings. The proposed method\nreveals the structure shared by subjects as well as the characteristics unique\nto each subject.", + "authors": "Abdullah Karaaslanli, Selin Aviyente", + "published": "2024-01-24", + "updated": "2024-01-24", + "primary_cat": "eess.SP", + "cats": [ + "eess.SP", + "cs.LG" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/1911.05954v3", + "title": "Hierarchical Graph Pooling with Structure Learning", + "abstract": "Graph Neural Networks (GNNs), which generalize deep neural networks to\ngraph-structured data, have drawn considerable attention and achieved\nstate-of-the-art performance in numerous graph related tasks. However, existing\nGNN models mainly focus on designing graph convolution operations. The graph\npooling (or downsampling) operations, that play an important role in learning\nhierarchical representations, are usually overlooked. In this paper, we propose\na novel graph pooling operator, called Hierarchical Graph Pooling with\nStructure Learning (HGP-SL), which can be integrated into various graph neural\nnetwork architectures. HGP-SL incorporates graph pooling and structure learning\ninto a unified module to generate hierarchical representations of graphs. More\nspecifically, the graph pooling operation adaptively selects a subset of nodes\nto form an induced subgraph for the subsequent layers. To preserve the\nintegrity of graph's topological information, we further introduce a structure\nlearning mechanism to learn a refined graph structure for the pooled graph at\neach layer. By combining HGP-SL operator with graph neural networks, we perform\ngraph level representation learning with focus on graph classification task.\nExperimental results on six widely used benchmarks demonstrate the\neffectiveness of our proposed model.", + "authors": "Zhen Zhang, Jiajun Bu, Martin Ester, Jianfeng Zhang, Chengwei Yao, Zhi Yu, Can Wang", + "published": "2019-11-14", + "updated": "2019-12-25", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "stat.ML" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/2201.07409v2", + "title": "Dual Space Graph Contrastive Learning", + "abstract": "Unsupervised graph representation learning has emerged as a powerful tool to\naddress real-world problems and achieves huge success in the graph learning\ndomain. Graph contrastive learning is one of the unsupervised graph\nrepresentation learning methods, which recently attracts attention from\nresearchers and has achieved state-of-the-art performances on various tasks.\nThe key to the success of graph contrastive learning is to construct proper\ncontrasting pairs to acquire the underlying structural semantics of the graph.\nHowever, this key part is not fully explored currently, most of the ways\ngenerating contrasting pairs focus on augmenting or perturbating graph\nstructures to obtain different views of the input graph. But such strategies\ncould degrade the performances via adding noise into the graph, which may\nnarrow down the field of the applications of graph contrastive learning. 
In\nthis paper, we propose a novel graph contrastive learning method, namely\n\\textbf{D}ual \\textbf{S}pace \\textbf{G}raph \\textbf{C}ontrastive (DSGC)\nLearning, to conduct graph contrastive learning among views generated in\ndifferent spaces including the hyperbolic space and the Euclidean space. Since\nboth spaces have their own advantages to represent graph data in the embedding\nspaces, we hope to utilize graph contrastive learning to bridge the spaces and\nleverage advantages from both sides. The comparison experiment results show\nthat DSGC achieves competitive or better performances among all the datasets.\nIn addition, we conduct extensive experiments to analyze the impact of\ndifferent graph encoders on DSGC, giving insights about how to better leverage\nthe advantages of contrastive learning between different spaces.", + "authors": "Haoran Yang, Hongxu Chen, Shirui Pan, Lin Li, Philip S. Yu, Guandong Xu", + "published": "2022-01-19", + "updated": "2022-03-04", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.SI" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/2402.02321v1", + "title": "Active Learning for Graphs with Noisy Structures", + "abstract": "Graph Neural Networks (GNNs) have seen significant success in tasks such as\nnode classification, largely contingent upon the availability of sufficient\nlabeled nodes. Yet, the excessive cost of labeling large-scale graphs led to a\nfocus on active learning on graphs, which aims for effective data selection to\nmaximize downstream model performance. Notably, most existing methods assume\nreliable graph topology, while real-world scenarios often present noisy graphs.\nGiven this, designing a successful active learning framework for noisy graphs\nis highly needed but challenging, as selecting data for labeling and obtaining\na clean graph are two tasks naturally interdependent: selecting high-quality\ndata requires clean graph structure while cleaning noisy graph structure\nrequires sufficient labeled data. Considering the complexity mentioned above,\nwe propose an active learning framework, GALClean, which has been specifically\ndesigned to adopt an iterative approach for conducting both data selection and\ngraph purification simultaneously with best information learned from the prior\niteration. Importantly, we summarize GALClean as an instance of the\nExpectation-Maximization algorithm, which provides a theoretical understanding\nof its design and mechanisms. This theory naturally leads to an enhanced\nversion, GALClean+. Extensive experiments have demonstrated the effectiveness\nand robustness of our proposed method across various types and levels of noisy\ngraphs.", + "authors": "Hongliang Chi, Cong Qi, Suhang Wang, Yao Ma", + "published": "2024-02-04", + "updated": "2024-02-04", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/2305.15843v1", + "title": "TabGSL: Graph Structure Learning for Tabular Data Prediction", + "abstract": "This work presents a novel approach to tabular data prediction leveraging\ngraph structure learning and graph neural networks. 
Despite the prevalence of\ntabular data in real-world applications, traditional deep learning methods\noften overlook the potentially valuable associations between data instances.\nSuch associations can offer beneficial insights for classification tasks, as\ninstances may exhibit similar patterns of correlations among features and\ntarget labels. This information can be exploited by graph neural networks,\nnecessitating robust graph structures. However, existing studies primarily\nfocus on improving graph structure from noisy data, largely neglecting the\npossibility of deriving graph structures from tabular data. We present a novel\nsolution, Tabular Graph Structure Learning (TabGSL), to enhance tabular data\nprediction by simultaneously learning instance correlation and feature\ninteraction within a unified framework. This is achieved through a proposed\ngraph contrastive learning module, along with transformer-based feature\nextractor and graph neural network. Comprehensive experiments conducted on 30\nbenchmark tabular datasets demonstrate that TabGSL markedly outperforms both\ntree-based models and recent deep learning-based tabular models. Visualizations\nof the learned instance embeddings further substantiate the effectiveness of\nTabGSL.", + "authors": "Jay Chiehen Liao, Cheng-Te Li", + "published": "2023-05-25", + "updated": "2023-05-25", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.SI" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/1912.10206v1", + "title": "How Robust Are Graph Neural Networks to Structural Noise?", + "abstract": "Graph neural networks (GNNs) are an emerging model for learning graph\nembeddings and making predictions on graph structured data. However, robustness\nof graph neural networks is not yet well-understood. In this work, we focus on\nnode structural identity predictions, where a representative GNN model is able\nto achieve near-perfect accuracy. We also show that the same GNN model is not\nrobust to addition of structural noise, through a controlled dataset and set of\nexperiments. Finally, we show that under the right conditions, graph-augmented\ntraining is capable of significantly improving robustness to structural noise.", + "authors": "James Fox, Sivasankaran Rajamanickam", + "published": "2019-12-21", + "updated": "2019-12-21", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "stat.ML" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/2108.04595v1", + "title": "Label-informed Graph Structure Learning for Node Classification", + "abstract": "Graph Neural Networks (GNNs) have achieved great success among various\ndomains. Nevertheless, most GNN methods are sensitive to the quality of graph\nstructures. To tackle this problem, some studies exploit different graph\nstructure learning strategies to refine the original graph structure. However,\nthese methods only consider feature information while ignoring available label\ninformation. In this paper, we propose a novel label-informed graph structure\nlearning framework which incorporates label information explicitly through a\nclass transition matrix. 
We conduct extensive experiments on seven node\nclassification benchmark datasets and the results show that our method\noutperforms or matches the state-of-the-art baselines.", + "authors": "Liping Wang, Fenyu Hu, Shu Wu, Liang Wang", + "published": "2021-08-10", + "updated": "2021-08-10", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/2006.14002v1", + "title": "Bi-Level Graph Neural Networks for Drug-Drug Interaction Prediction", + "abstract": "We introduce Bi-GNN for modeling biological link prediction tasks such as\ndrug-drug interaction (DDI) and protein-protein interaction (PPI). Taking\ndrug-drug interaction as an example, existing methods using machine learning\neither only utilize the link structure between drugs without using the graph\nrepresentation of each drug molecule, or only leverage the individual drug\ncompound structures without using graph structure for the higher-level DDI\ngraph. The key idea of our method is to fundamentally view the data as a\nbi-level graph, where the highest level graph represents the interaction\nbetween biological entities (interaction graph), and each biological entity\nitself is further expanded to its intrinsic graph representation\n(representation graphs), where the graph is either flat like a drug compound or\nhierarchical like a protein with amino acid level graph, secondary structure,\ntertiary structure, etc. Our model not only allows the usage of information\nfrom both the high-level interaction graph and the low-level representation\ngraphs, but also offers a baseline for future research opportunities to address\nthe bi-level nature of the data.", + "authors": "Yunsheng Bai, Ken Gu, Yizhou Sun, Wei Wang", + "published": "2020-06-11", + "updated": "2020-06-11", + "primary_cat": "cs.CE", + "cats": [ + "cs.CE", + "cs.LG", + "stat.ML" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/2309.10134v1", + "title": "GDM: Dual Mixup for Graph Classification with Limited Supervision", + "abstract": "Graph Neural Networks (GNNs) require a large number of labeled graph samples\nto obtain good performance on the graph classification task. The performance of\nGNNs degrades significantly as the number of labeled graph samples decreases.\nTo reduce the annotation cost, it is therefore important to develop graph\naugmentation methods that can generate new graph instances to increase the size\nand diversity of the limited set of available labeled graph samples. In this\nwork, we propose a novel mixup-based graph augmentation method, Graph Dual\nMixup (GDM), that leverages both functional and structural information of the\ngraph instances to generate new labeled graph samples. GDM employs a graph\nstructural auto-encoder to learn structural embeddings of the graph samples,\nand then applies mixup to the structural information of the graphs in the\nlearned structural embedding space and generates new graph structures from the\nmixup structural embeddings. As for the functional information, GDM applies\nmixup directly to the input node features of the graph samples to generate\nfunctional node feature information for new mixup graph instances. Jointly, the\ngenerated input node features and graph structures yield new graph samples\nwhich can supplement the set of original labeled graphs. 
Furthermore, we\npropose two novel Balanced Graph Sampling methods to enhance the balanced\ndifficulty and diversity of the generated graph samples. Experimental results\non the benchmark datasets demonstrate that our proposed method substantially\noutperforms the state-of-the-art graph augmentation methods when the labeled\ngraphs are scarce.", + "authors": "Abdullah Alchihabi, Yuhong Guo", + "published": "2023-09-18", + "updated": "2023-09-18", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/1904.11883v2", + "title": "Robust Graph Data Learning via Latent Graph Convolutional Representation", + "abstract": "Graph Convolutional Representation (GCR) has achieved impressive performance\nfor graph data representation. However, existing GCR is generally defined on a\nfixed input graph, which may restrict the representation capacity and also be\nvulnerable to structural attacks and noises. To address this issue, we propose\na novel Latent Graph Convolutional Representation (LatGCR) for robust graph\ndata representation and learning. Our LatGCR is derived based on reformulating\ngraph convolutional representation from the aspect of graph neighborhood\nreconstruction. Given an input graph $\\textbf{A}$, LatGCR aims to generate a\nflexible latent graph $\\widetilde{\\textbf{A}}$ for graph convolutional\nrepresentation, which clearly enhances the representation capacity and also\nperforms robustly w.r.t. graph structural attacks and noises. Moreover, LatGCR\nis implemented in a self-supervised manner and thus provides a basic block for\nboth supervised and unsupervised graph learning tasks. Experiments on several\ndatasets demonstrate the effectiveness and robustness of LatGCR.", + "authors": "Bo Jiang, Ziyan Zhang, Bin Luo", + "published": "2019-04-26", + "updated": "2021-10-13", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV", + "cs.LG" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/1905.11691v1", + "title": "Triple2Vec: Learning Triple Embeddings from Knowledge Graphs", + "abstract": "Graph embedding techniques allow us to learn high-quality feature vectors\nfrom graph structures and are useful in a variety of tasks, from node\nclassification to clustering. Existing approaches have only focused on learning\nfeature vectors for the nodes in a (knowledge) graph. To the best of our\nknowledge, none of them has tackled the problem of embedding graph edges, that\nis, knowledge graph triples. The approaches that are closer to this task have\nfocused on homogeneous graphs involving only one type of edge and obtain edge\nembeddings by applying some operation (e.g., average) on the embeddings of the\nendpoint nodes. The goal of this paper is to introduce Triple2Vec, a new\ntechnique to directly embed edges in (knowledge) graphs. Triple2Vec builds upon\nthree main ingredients. The first is the notion of a line graph. The line graph\nof a graph is another graph representing the adjacency between edges of the\noriginal graph. In particular, the nodes of the line graph are the edges of the\noriginal graph. We show that directly applying existing embedding techniques on\nthe nodes of the line graph to learn edge embeddings is not enough in the\ncontext of knowledge graphs. Thus, we introduce the notion of the triple line\ngraph. The second is an edge weighting mechanism for both line graphs derived\nfrom knowledge graphs and homogeneous graphs.
The third is a strategy based on\ngraph walks on the weighted triple line graph that can preserve proximity\nbetween nodes. Embeddings are finally generated by adopting the SkipGram model,\nwhere sentences are replaced with graph walks. We evaluate our approach on\ndifferent real world (knowledge) graphs and compared it with related work.", + "authors": "Valeria Fionda, Giuseppe Pirr\u00f3", + "published": "2019-05-28", + "updated": "2019-05-28", + "primary_cat": "cs.AI", + "cats": [ + "cs.AI", + "cs.LG" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/2212.08966v4", + "title": "Graph Learning and Its Advancements on Large Language Models: A Holistic Survey", + "abstract": "Graph learning is a prevalent domain that endeavors to learn the intricate\nrelationships among nodes and the topological structure of graphs. Over the\nyears, graph learning has transcended from graph theory to graph data mining.\nWith the advent of representation learning, it has attained remarkable\nperformance in diverse scenarios. Owing to its extensive application prospects,\ngraph learning attracts copious attention. While some researchers have\naccomplished impressive surveys on graph learning, they failed to connect\nrelated objectives, methods, and applications in a more coherent way. As a\nresult, they did not encompass current ample scenarios and challenging problems\ndue to the rapid expansion of graph learning. Particularly, large language\nmodels have recently had a disruptive effect on human life, but they also show\nrelative weakness in structured scenarios. The question of how to make these\nmodels more powerful with graph learning remains open. Our survey focuses on\nthe most recent advancements in integrating graph learning with pre-trained\nlanguage models, specifically emphasizing their application within the domain\nof large language models. Different from previous surveys on graph learning, we\nprovide a holistic review that analyzes current works from the perspective of\ngraph structure, and discusses the latest applications, trends, and challenges\nin graph learning. Specifically, we commence by proposing a taxonomy and then\nsummarize the methods employed in graph learning. We then provide a detailed\nelucidation of mainstream applications. Finally, we propose future directions.", + "authors": "Shaopeng Wei, Yu Zhao, Xingyan Chen, Qing Li, Fuzhen Zhuang, Ji Liu, Fuji Ren, Gang Kou", + "published": "2022-12-17", + "updated": "2023-11-18", + "primary_cat": "cs.AI", + "cats": [ + "cs.AI" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/2302.03596v3", + "title": "Graph Generation with Diffusion Mixture", + "abstract": "Generation of graphs is a major challenge for real-world tasks that require\nunderstanding the complex nature of their non-Euclidean structures. Although\ndiffusion models have achieved notable success in graph generation recently,\nthey are ill-suited for modeling the topological properties of graphs since\nlearning to denoise the noisy samples does not explicitly learn the graph\nstructures to be generated. To tackle this limitation, we propose a generative\nframework that models the topology of graphs by explicitly learning the final\ngraph structures of the diffusion process. 
Specifically, we design the\ngenerative process as a mixture of endpoint-conditioned diffusion processes\nwhich is driven toward the predicted graph that results in rapid convergence.\nWe further introduce a simple parameterization of the mixture process and\ndevelop an objective for learning the final graph structure, which enables\nmaximum likelihood training. Through extensive experimental validation on\ngeneral graph and 2D/3D molecule generation tasks, we show that our method\noutperforms previous generative models, generating graphs with correct topology\nwith both continuous (e.g. 3D coordinates) and discrete (e.g. atom types)\nfeatures. Our code is available at https://github.com/harryjo97/DruM.", + "authors": "Jaehyeong Jo, Dongki Kim, Sung Ju Hwang", + "published": "2023-02-07", + "updated": "2024-02-05", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/2203.09205v1", + "title": "SoK: Differential Privacy on Graph-Structured Data", + "abstract": "In this work, we study the applications of differential privacy (DP) in the\ncontext of graph-structured data. We discuss the formulations of DP applicable\nto the publication of graphs and their associated statistics as well as machine\nlearning on graph-based data, including graph neural networks (GNNs). The\nformulation of DP in the context of graph-structured data is difficult, as\nindividual data points are interconnected (often non-linearly or sparsely).\nThis connectivity complicates the computation of individual privacy loss in\ndifferentially private learning. The problem is exacerbated by an absence of a\nsingle, well-established formulation of DP in graph settings. This issue\nextends to the domain of GNNs, rendering private machine learning on\ngraph-structured data a challenging task. A lack of prior systematisation work\nmotivated us to study graph-based learning from a privacy perspective. In this\nwork, we systematise different formulations of DP on graphs, discuss challenges\nand promising applications, including the GNN domain. We compare and separate\nworks into graph analysis tasks and graph learning tasks with GNNs. Finally, we\nconclude our work with a discussion of open questions and potential directions\nfor further research in this area.", + "authors": "Tamara T. Mueller, Dmitrii Usynin, Johannes C. Paetzold, Daniel Rueckert, Georgios Kaissis", + "published": "2022-03-17", + "updated": "2022-03-17", + "primary_cat": "cs.CR", + "cats": [ + "cs.CR", + "cs.AI", + "cs.LG" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/1909.11594v1", + "title": "Structured Graph Learning Via Laplacian Spectral Constraints", + "abstract": "Learning a graph with a specific structure is essential for interpretability\nand identification of the relationships among data. It is well known that\nstructured graph learning from observed samples is an NP-hard combinatorial\nproblem. In this paper, we first show that for a set of important graph\nfamilies it is possible to convert the structural constraints of structure into\neigenvalue constraints of the graph Laplacian matrix. Then we introduce a\nunified graph learning framework, lying at the integration of the spectral\nproperties of the Laplacian matrix with Gaussian graphical modeling that is\ncapable of learning structures of a large class of graph families. 
The proposed\nalgorithms are provably convergent and practically amenable for large-scale\nsemi-supervised and unsupervised graph-based learning tasks. Extensive\nnumerical experiments with both synthetic and real data sets demonstrate the\neffectiveness of the proposed methods. An R package containing code for all the\nexperimental results is available at\nhttps://cran.r-project.org/package=spectralGraphTopology.", + "authors": "Sandeep Kumar, Jiaxi Ying, Jos\u00e9 Vin\u00edcius de M. Cardoso, Daniel P. Palomar", + "published": "2019-09-24", + "updated": "2019-09-24", + "primary_cat": "stat.ML", + "cats": [ + "stat.ML", + "cs.LG", + "cs.SI", + "math.OC", + "stat.AP" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/1905.06393v1", + "title": "IPC: A Benchmark Data Set for Learning with Graph-Structured Data", + "abstract": "Benchmark data sets are an indispensable ingredient of the evaluation of\ngraph-based machine learning methods. We release a new data set, compiled from\nInternational Planning Competitions (IPC), for benchmarking graph\nclassification, regression, and related tasks. Apart from the graph\nconstruction (based on AI planning problems) that is interesting in its own\nright, the data set possesses distinctly different characteristics from\npopularly used benchmarks. The data set, named IPC, consists of two\nself-contained versions, grounded and lifted, both including graphs of large\nand skewedly distributed sizes, posing substantial challenges for the\ncomputation of graph models such as graph kernels and graph neural networks.\nThe graphs in this data set are directed and the lifted version is acyclic,\noffering the opportunity of benchmarking specialized models for directed\n(acyclic) structures. Moreover, the graph generator and the labeling are\ncomputer programmed; thus, the data set may be extended easily if a larger\nscale is desired. The data set is accessible from\n\\url{https://github.com/IBM/IPC-graph-data}.", + "authors": "Patrick Ferber, Tengfei Ma, Siyu Huo, Jie Chen, Michael Katz", + "published": "2019-05-15", + "updated": "2019-05-15", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI", + "stat.ML" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/2401.15665v1", + "title": "Learnability of a hybrid quantum-classical neural network for graph-structured quantum data", + "abstract": "Classical data with graph structure always exists when dealing with many\nreal-world problems. In parallel, quantum data with graph structure also need\nto be investigated, since they are always produced by structured quantum data\nsources. In this paper, we make use of a hybrid quantum-classical neural\nnetwork with deep residual learning (Res-HQCNN) to learn graph-structured\nquantum data. Specifically, based on the special definition of graph-structured\nquantum data, we first find suitable cost functions so that Res-HQCNN can learn\nboth semi-supervised quantum data with or without graphs. Moreover, the\ntraining algorithm of Res-HQCNN for graph-structured training data is given in\ndetail. Next, in order to show the learning ability of Res-HQCNN, we perform\nextensive experiments to show that using information about graph structures for\nquantum data can lead to better learning efficiency compared with the state of\nthe art.
At the same time, we also design comparable experiments to explain\nthat using residual learning can also bring better performance when training\ndeep quantum neural networks.", + "authors": "Yan-Ying Liang, Si-Le Tang, Zhe-Hao Yi, Hao-Zhen Si-Tu, Zhu-Jun Zheng", + "published": "2024-01-28", + "updated": "2024-01-28", + "primary_cat": "quant-ph", + "cats": [ + "quant-ph" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/2006.13009v2", + "title": "Iterative Deep Graph Learning for Graph Neural Networks: Better and Robust Node Embeddings", + "abstract": "In this paper, we propose an end-to-end graph learning framework, namely\nIterative Deep Graph Learning (IDGL), for jointly and iteratively learning\ngraph structure and graph embedding. The key rationale of IDGL is to learn a\nbetter graph structure based on better node embeddings, and vice versa (i.e.,\nbetter node embeddings based on a better graph structure). Our iterative method\ndynamically stops when the learned graph structure comes close enough to\nthe graph optimized for the downstream prediction task. In addition, we cast\nthe graph learning problem as a similarity metric learning problem and leverage\nadaptive graph regularization for controlling the quality of the learned graph.\nFinally, by combining the anchor-based approximation technique, we further\npropose a scalable version of IDGL, namely IDGL-Anch, which significantly\nreduces the time and space complexity of IDGL without compromising the\nperformance. Our extensive experiments on nine benchmarks show that our\nproposed IDGL models can consistently outperform or match the state-of-the-art\nbaselines. Furthermore, IDGL can be more robust to adversarial graphs and cope\nwith both transductive and inductive learning.", + "authors": "Yu Chen, Lingfei Wu, Mohammed J. Zaki", + "published": "2020-06-21", + "updated": "2020-10-23", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "stat.ML" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/2005.03675v3", + "title": "Machine Learning on Graphs: A Model and Comprehensive Taxonomy", + "abstract": "There has been a surge of recent interest in learning representations for\ngraph-structured data. Graph representation learning methods have generally\nfallen into three main categories, based on the availability of labeled data.\nThe first, network embedding (such as shallow graph embedding or graph\nauto-encoders), focuses on learning unsupervised representations of relational\nstructure. The second, graph regularized neural networks, leverages graphs to\naugment neural network losses with a regularization objective for\nsemi-supervised learning. The third, graph neural networks, aims to learn\ndifferentiable functions over discrete topologies with arbitrary structure.\nHowever, despite the popularity of these areas there has been surprisingly\nlittle work on unifying the three paradigms. Here, we aim to bridge the gap\nbetween graph neural networks, network embedding and graph regularization\nmodels. We propose a comprehensive taxonomy of representation learning methods\nfor graph-structured data, aiming to unify several disparate bodies of work.\nSpecifically, we propose a Graph Encoder Decoder Model (GRAPHEDM), which\ngeneralizes popular algorithms for semi-supervised learning on graphs (e.g.\nGraphSage, Graph Convolutional Networks, Graph Attention Networks), and\nunsupervised learning of graph representations (e.g.
DeepWalk, node2vec, etc)\ninto a single consistent approach. To illustrate the generality of this\napproach, we fit over thirty existing methods into this framework. We believe\nthat this unifying view both provides a solid foundation for understanding the\nintuition behind these methods, and enables future research in the area.", + "authors": "Ines Chami, Sami Abu-El-Haija, Bryan Perozzi, Christopher R\u00e9, Kevin Murphy", + "published": "2020-05-07", + "updated": "2022-04-12", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.NE", + "cs.SI", + "stat.ML" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/1910.01743v1", + "title": "Graph Generation with Variational Recurrent Neural Network", + "abstract": "Generating graph structures is a challenging problem due to the diverse\nrepresentations and complex dependencies among nodes. In this paper, we\nintroduce Graph Variational Recurrent Neural Network (GraphVRNN), a\nprobabilistic autoregressive model for graph generation. Through modeling the\nlatent variables of graph data, GraphVRNN can capture the joint distributions\nof graph structures and the underlying node attributes. We conduct experiments\non the proposed GraphVRNN in both graph structure learning and attribute\ngeneration tasks. The evaluation results show that the variational component\nallows our network to model complicated distributions, as well as generate\nplausible structures and node attributes.", + "authors": "Shih-Yang Su, Hossein Hajimirsadeghi, Greg Mori", + "published": "2019-10-02", + "updated": "2019-10-02", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "stat.ML" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/1611.04687v2", + "title": "Intrinsic Geometric Information Transfer Learning on Multiple Graph-Structured Datasets", + "abstract": "Graphs provide a powerful means for representing complex interactions between\nentities. Recently, deep learning approaches are emerging for representing and\nmodeling graph-structured data, although the conventional deep learning methods\n(such as convolutional neural networks and recurrent neural networks) have\nmainly focused on grid-structured inputs (image and audio). Leveraged by the\ncapability of representation learning, deep learning based techniques are\nreporting promising results for graph applications by detecting structural\ncharacteristics of graphs in an automated fashion. In this paper, we attempt to\nadvance deep learning for graph-structured data by incorporating another\ncomponent, transfer learning. By transferring the intrinsic geometric\ninformation learned in the source domain, our approach can help us to construct\na model for a new but related task in the target domain without collecting new\ndata and without training a new model from scratch. We thoroughly test our\napproach with large-scale real corpora and confirm the effectiveness of the\nproposed transfer learning framework for deep learning on graphs. 
According to\nour experiments, transfer learning is most effective when the source and target\ndomains bear a high level of structural similarity in their graph\nrepresentations.", + "authors": "Jaekoo Lee, Hyunjae Kim, Jongsun Lee, Sungroh Yoon", + "published": "2016-11-15", + "updated": "2016-12-05", + "primary_cat": "cs.NE", + "cats": [ + "cs.NE" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/1904.09792v1", + "title": "A Unified Framework for Structured Graph Learning via Spectral Constraints", + "abstract": "Graph learning from data represents a canonical problem that has received\nsubstantial attention in the literature. However, insufficient work has been\ndone in incorporating prior structural knowledge onto the learning of\nunderlying graphical models from data. Learning a graph with a specific\nstructure is essential for interpretability and identification of the\nrelationships among data. Useful structured graphs include the multi-component\ngraph, bipartite graph, connected graph, sparse graph, and regular graph. In\ngeneral, structured graph learning is an NP-hard combinatorial problem,\ntherefore, designing a general tractable optimization method is extremely\nchallenging. In this paper, we introduce a unified graph learning framework\nlying at the integration of Gaussian graphical models and spectral graph\ntheory. To impose a particular structure on a graph, we first show how to\nformulate the combinatorial constraints as an analytical property of the graph\nmatrix. Then we develop an optimization framework that leverages graph learning\nwith specific structures via spectral constraints on graph matrices. The\nproposed algorithms are provably convergent, computationally efficient, and\npractically amenable for numerous graph-based tasks. Extensive numerical\nexperiments with both synthetic and real data sets illustrate the effectiveness\nof the proposed algorithms. The code for all the simulations is made available\nas an open source repository.", + "authors": "Sandeep Kumar, Jiaxi Ying, Jos\u00e9 Vin\u00edcius de M. Cardoso, Daniel Palomar", + "published": "2019-04-22", + "updated": "2019-04-22", + "primary_cat": "stat.ML", + "cats": [ + "stat.ML", + "cs.LG", + "cs.SI", + "math.OC" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/1903.00614v1", + "title": "GAP: Generalizable Approximate Graph Partitioning Framework", + "abstract": "Graph partitioning is the problem of dividing the nodes of a graph into\nbalanced partitions while minimizing the edge cut across the partitions. Due to\nits combinatorial nature, many approximate solutions have been developed,\nincluding variants of multi-level methods and spectral clustering. We propose\nGAP, a Generalizable Approximate Partitioning framework that takes a deep\nlearning approach to graph partitioning. We define a differentiable loss\nfunction that represents the partitioning objective and use backpropagation to\noptimize the network parameters. Unlike baselines that redo the optimization\nper graph, GAP is capable of generalization, allowing us to train models that\nproduce performant partitions at inference time, even on unseen graphs.\nFurthermore, because we learn the representation of the graph while jointly\noptimizing for the partitioning loss function, GAP can be easily tuned for a\nvariety of graph structures. 
We evaluate the performance of GAP on graphs of\nvarying sizes and structures, including graphs of widely used machine learning\nmodels (e.g., ResNet, VGG, and Inception-V3), scale-free graphs, and random\ngraphs. We show that GAP achieves competitive partitions while being up to 100\ntimes faster than the baseline and generalizes to unseen graphs.", + "authors": "Azade Nazi, Will Hang, Anna Goldie, Sujith Ravi, Azalia Mirhoseini", + "published": "2019-03-02", + "updated": "2019-03-02", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "stat.ML" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/1811.09971v1", + "title": "Graph Learning-Convolutional Networks", + "abstract": "Recently, graph Convolutional Neural Networks (graph CNNs) have been widely\nused for graph data representation and semi-supervised learning tasks. However,\nexisting graph CNNs generally use a fixed graph, which may not be optimal for\nsemi-supervised learning tasks. In this paper, we propose a novel Graph\nLearning-Convolutional Network (GLCN) for graph data representation and\nsemi-supervised learning. The aim of GLCN is to learn an optimal graph\nstructure that best serves graph CNNs for semi-supervised learning by\nintegrating both graph learning and graph convolution together in a unified\nnetwork architecture. The main advantage is that in GLCN, both the given labels\nand the estimated labels are incorporated and thus can provide useful 'weakly'\nsupervised information to refine (or learn) the graph construction and also to\nfacilitate the graph convolution operation in GLCN for unknown label\nestimation. Experimental results on seven benchmarks demonstrate that GLCN\nsignificantly outperforms state-of-the-art traditional fixed-structure-based\ngraph CNNs.", + "authors": "Bo Jiang, Ziyan Zhang, Doudou Lin, Jin Tang", + "published": "2018-11-25", + "updated": "2018-11-25", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/2111.06679v2", + "title": "deepstruct -- linking deep learning and graph theory", + "abstract": "deepstruct connects deep learning models and graph theory such that different\ngraph structures can be imposed on neural networks or graph structures can be\nextracted from trained neural network models. For this, deepstruct provides\ndeep neural network models with different restrictions, which can be created\nbased on an initial graph. Further, tools to extract graph structures from\ntrained models are available. This step of extracting graphs can be\ncomputationally expensive even for models of just a few tens of thousands of\nparameters and poses a challenging problem.
deepstruct supports research in\npruning, neural architecture search, automated network design and structure\nanalysis of neural networks.", + "authors": "Julian Stier, Michael Granitzer", + "published": "2021-11-12", + "updated": "2021-12-05", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI", + "cs.NE", + "I.2.0; F.0" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/1904.08915v2", + "title": "Decoding Molecular Graph Embeddings with Reinforcement Learning", + "abstract": "We present RL-VAE, a graph-to-graph variational autoencoder that uses\nreinforcement learning to decode molecular graphs from latent embeddings.\nMethods have been described previously for graph-to-graph autoencoding, but\nthese approaches require sophisticated decoders that increase the complexity of\ntraining and evaluation (such as requiring parallel encoders and decoders or\nnon-trivial graph matching). Here, we repurpose a simple graph generator to\nenable efficient decoding and generation of molecular graphs.", + "authors": "Steven Kearnes, Li Li, Patrick Riley", + "published": "2019-04-18", + "updated": "2019-06-04", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "stat.ML" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/2106.15239v1", + "title": "Generating the Graph Gestalt: Kernel-Regularized Graph Representation Learning", + "abstract": "Recent work on graph generative models has made remarkable progress towards\ngenerating increasingly realistic graphs, as measured by global graph features\nsuch as degree distribution, density, and clustering coefficients. Deep\ngenerative models have also made significant advances through better modelling\nof the local correlations in the graph topology, which have been very useful\nfor predicting unobserved graph components, such as the existence of a link or\nthe class of a node, from nearby observed graph components. A complete\nscientific understanding of graph data should address both global and local\nstructure. In this paper, we propose a joint model for both as complementary\nobjectives in a graph VAE framework. Global structure is captured by\nincorporating graph kernels in a probabilistic model whose loss function is\nclosely related to the maximum mean discrepancy (MMD) between the global\nstructures of the reconstructed and the input graphs. The ELBO objective\nderived from the model regularizes a standard local link reconstruction term\nwith an MMD term. Our experiments demonstrate a significant improvement in the\nrealism of the generated graph structures, typically by 1-2 orders of magnitude\non graph structure metrics, compared to leading graph VAE and GAN models. Local\nlink reconstruction improves as well in many cases.", + "authors": "Kiarash Zahirnia, Ankita Sakhuja, Oliver Schulte, Parmis Nadaf, Ke Li, Xia Hu", + "published": "2021-06-29", + "updated": "2021-06-29", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/2007.16002v1", + "title": "Graph Convolutional Networks using Heat Kernel for Semi-supervised Learning", + "abstract": "Graph convolutional networks have gained remarkable success in semi-supervised\nlearning on graph-structured data. The key to graph-based semi-supervised\nlearning is capturing the smoothness of labels or features over nodes exerted\nby the graph structure.
Previous methods, both spectral and spatial,\nare devoted to defining graph convolution as a weighted average over\nneighboring nodes, and then learn graph convolution kernels to leverage the\nsmoothness to improve the performance of graph-based semi-supervised learning.\nOne open challenge is how to determine an appropriate neighborhood that\nreflects the relevant smoothness information manifested in the graph structure.\nIn this paper, we propose GraphHeat, leveraging the heat kernel to enhance\nlow-frequency filters and enforce smoothness in the signal variation on the\ngraph. GraphHeat leverages the local structure of the target node under heat\ndiffusion to determine its neighboring nodes flexibly, without the constraint\nof order suffered by previous methods. GraphHeat achieves state-of-the-art\nresults in the task of graph-based semi-supervised classification across three\nbenchmark datasets: Cora, Citeseer and Pubmed.", + "authors": "Bingbing Xu, Huawei Shen, Qi Cao, Keting Cen, Xueqi Cheng", + "published": "2020-07-27", + "updated": "2020-07-27", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.SI" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/2101.06861v3", + "title": "Discrete Graph Structure Learning for Forecasting Multiple Time Series", + "abstract": "Time series forecasting is an extensively studied subject in statistics,\neconomics, and computer science. Exploration of the correlation and causation\namong the variables in a multivariate time series shows promise in enhancing\nthe performance of a time series model. When using deep neural networks as\nforecasting models, we hypothesize that exploiting the pairwise information\namong multiple (multivariate) time series also improves their forecast. If an\nexplicit graph structure is known, graph neural networks (GNNs) have been\ndemonstrated as powerful tools to exploit the structure. In this work, we\npropose learning the structure simultaneously with the GNN if the graph is\nunknown. We cast the problem as learning a probabilistic graph model through\noptimizing the mean performance over the graph distribution. The distribution\nis parameterized by a neural network so that discrete graphs can be sampled\ndifferentiably through reparameterization. Empirical evaluations show that our\nmethod is simpler, more efficient, and better performing than a recently\nproposed bilevel learning approach for graph structure learning, as well as a\nbroad array of forecasting models, either deep or non-deep learning based, and\ngraph or non-graph based.", + "authors": "Chao Shang, Jie Chen, Jinbo Bi", + "published": "2021-01-18", + "updated": "2021-04-21", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "stat.ML" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/2304.13195v1", + "title": "Connector 0.5: A unified framework for graph representation learning", + "abstract": "Graph representation learning models aim to represent the graph structure and\nits features into low-dimensional vectors in a latent space, which can benefit\nvarious downstream tasks, such as node classification and link prediction. Due\nto its powerful graph data modelling capabilities, various graph embedding\nmodels and libraries have been proposed to learn embeddings and help\nresearchers conduct experiments with ease.
In this paper, we introduce a novel\ngraph representation framework covering various graph embedding models, ranging\nfrom shallow to state-of-the-art models, namely Connector. First, we consider\ngraph generation by constructing various types of graphs with different\nstructural relations, including homogeneous, signed, heterogeneous, and\nknowledge graphs. Second, we introduce various graph representation learning\nmodels, ranging from shallow to deep graph embedding models. Finally, we plan\nto build an efficient open-source framework that can provide deep graph\nembedding models to represent structural relations in graphs. The framework is\navailable at https://github.com/NSLab-CUK/Connector.", + "authors": "Thanh Sang Nguyen, Jooho Lee, Van Thuy Hoang, O-Joun Lee", + "published": "2023-04-25", + "updated": "2023-04-25", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.SI" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/2307.02126v1", + "title": "Robust Graph Structure Learning with the Alignment of Features and Adjacency Matrix", + "abstract": "To improve the robustness of graph neural networks (GNN), graph structure\nlearning (GSL) has attracted great interest due to the pervasiveness of noise\nin graph data. Many approaches have been proposed for GSL to jointly learn a\nclean graph structure and corresponding representations. To extend the previous\nwork, this paper proposes a novel regularized GSL approach, particularly with\nan alignment of feature information and graph information, which is motivated\nmainly by our derived lower bound of node-level Rademacher complexity for GNNs.\nAdditionally, our proposed approach incorporates sparse dimensional reduction\nto leverage low-dimensional node features that are relevant to the graph\nstructure. To evaluate the effectiveness of our approach, we conduct\nexperiments on real-world graphs. The results demonstrate that our proposed GSL\nmethod outperforms several competitive baselines, especially in scenarios where\nthe graph structures are heavily affected by noise. Overall, our research\nhighlights the importance of integrating feature and graph information\nalignment in GSL, as inspired by our derived theoretical result, and showcases\nthe superiority of our approach in handling noisy graph structures through\ncomprehensive experiments on real-world datasets.", + "authors": "Shaogao Lv, Gang Wen, Shiyu Liu, Linsen Wei, Ming Li", + "published": "2023-07-05", + "updated": "2023-07-05", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "stat.ML" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/2103.10837v1", + "title": "Quantum machine learning of graph-structured data", + "abstract": "Graph structures are ubiquitous throughout the natural sciences. Here we\nconsider graph-structured quantum data and describe how to carry out its\nquantum machine learning via quantum neural networks. In particular, we\nconsider training data in the form of pairs of input and output quantum states\nassociated with the vertices of a graph, together with edges encoding\ncorrelations between the vertices. We explain how to systematically exploit\nthis additional graph structure to improve quantum learning algorithms. 
These\nalgorithms are numerically simulated and exhibit excellent learning behavior.\nScalable quantum implementations of the learning procedures are likely feasible\non the next generation of quantum computing devices.", + "authors": "Kerstin Beer, Megha Khosla, Julius K\u00f6hler, Tobias J. Osborne", + "published": "2021-03-19", + "updated": "2021-03-19", + "primary_cat": "quant-ph", + "cats": [ + "quant-ph" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/1104.5256v1", + "title": "Learning Undirected Graphical Models with Structure Penalty", + "abstract": "In undirected graphical models, learning the graph structure and learning the\nfunctions that relate the predictive variables (features) to the responses\ngiven the structure are two topics that have been widely investigated in\nmachine learning and statistics. Learning graphical models in two stages will\nhave problems because graph structure may change after considering the\nfeatures. The main contribution of this paper is the proposed method that\nlearns the graph structure and functions on the graph at the same time. General\ngraphical models with binary outcomes conditioned on predictive variables are\nproved to be equivalent to multivariate Bernoulli model. The reparameterization\nof the potential functions in graphical model by conditional log odds ratios in\nmultivariate Bernoulli model offers advantage in the representation of the\nconditional independence structure in the model. Additionally, we impose a\nstructure penalty on groups of conditional log odds ratios to learn the graph\nstructure. These groups of functions are designed with overlaps to enforce\nhierarchical function selection. In this way, we are able to shrink higher\norder interactions to obtain a sparse graph structure. Simulation studies show\nthat the method is able to recover the graph structure. The analysis of county\ndata from Census Bureau gives interesting relations between unemployment rate,\ncrime and others discovered by the model.", + "authors": "Shilin Ding", + "published": "2011-04-27", + "updated": "2011-04-27", + "primary_cat": "cs.AI", + "cats": [ + "cs.AI", + "cs.LG" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/2006.02879v1", + "title": "Auto-decoding Graphs", + "abstract": "We present an approach to synthesizing new graph structures from empirically\nspecified distributions. The generative model is an auto-decoder that learns to\nsynthesize graphs from latent codes. The graph synthesis model is learned\njointly with an empirical distribution over the latent codes. Graphs are\nsynthesized using self-attention modules that are trained to identify likely\nconnectivity patterns. Graph-based normalizing flows are used to sample latent\ncodes from the distribution learned by the auto-decoder. The resulting model\ncombines accuracy and scalability. 
On benchmark datasets of large graphs, the\npresented model outperforms the state of the art by a factor of 1.5 in mean\naccuracy and average rank across at least three different graph statistics,\nwith a 2x speedup during inference.", + "authors": "Sohil Atul Shah, Vladlen Koltun", + "published": "2020-06-04", + "updated": "2020-06-04", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "stat.ML" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/2401.00876v1", + "title": "Balanced Graph Structure Information for Brain Disease Detection", + "abstract": "Analyzing connections between brain regions of interest (ROI) is vital to\ndetect neurological disorders such as autism or schizophrenia. Recent\nadvancements employ graph neural networks (GNNs) to utilize graph structures in\nbrains, improving detection performance. Current methods use correlation\nmeasures between ROI's blood-oxygen-level-dependent (BOLD) signals to generate\nthe graph structure. Other methods use the training samples to learn the\noptimal graph structure through end-to-end learning. However, implementing\nthose methods independently leads to some issues with noisy data for the\ncorrelation graphs and overfitting problems for the optimal graph. In this\nwork, we propose Bargrain (balanced graph structure for brains), which models\ntwo graph structures: filtered correlation matrix and optimal sample graph\nusing graph convolution networks (GCNs). This approach aims to gain the\nadvantages of both graphs and address the limitations of relying on only a\nsingle type of structure. Based on our extensive experiments, Bargrain\noutperforms state-of-the-art methods in classification tasks on brain disease\ndatasets, as measured by average F1 scores.", + "authors": "Falih Gozi Febrinanto, Mujie Liu, Feng Xia", + "published": "2023-12-30", + "updated": "2023-12-30", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI", + "q-bio.NC" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/2003.04508v3", + "title": "Unsupervised Graph Embedding via Adaptive Graph Learning", + "abstract": "Graph autoencoders (GAEs) are powerful tools in representation learning for\ngraph embedding. However, the performance of GAEs is very dependent on the\nquality of the graph structure, i.e., of the adjacency matrix. In other words,\nGAEs would perform poorly when the adjacency matrix is incomplete or disturbed.\nIn this paper, two novel unsupervised graph embedding methods, unsupervised\ngraph embedding via adaptive graph learning (BAGE) and unsupervised graph\nembedding via variational adaptive graph learning (VBAGE), are proposed. The\nproposed methods expand the application range of GAEs on graph embedding, i.e.,\nto general datasets without graph structure. Meanwhile, the adaptive learning\nmechanism can initialize the adjacency matrix without being affected by the\nparameters. Besides that, the latent representations are embedded in the\nLaplacian graph structure to preserve the topology structure of the graph in\nthe vector space. Moreover, the adjacency matrix can be self-learned for better\nembedding performance when the original graph structure is incomplete. With\nadaptive learning, the proposed method is much more robust to the graph\nstructure.
Experimental studies on several datasets\nvalidate our design and demonstrate that our methods outperform baselines by a\nwide margin in node clustering, node classification, and graph visualization\ntasks.", + "authors": "Rui Zhang, Yunxing Zhang, Xuelong Li", + "published": "2020-03-10", + "updated": "2021-03-23", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "stat.ML" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/2209.00793v2", + "title": "Structure-Preserving Graph Representation Learning", + "abstract": "Though graph representation learning (GRL) has made significant progress, it\nis still a challenge to extract and embed the rich topological structure and\nfeature information in an adequate way. Most existing methods focus on local\nstructure and fail to fully incorporate the global topological structure. To\nthis end, we propose a novel Structure-Preserving Graph Representation Learning\n(SPGRL) method, to fully capture the structure information of graphs.\nSpecifically, to reduce the uncertainty and misinformation of the original\ngraph, we construct a feature graph as a complementary view via k-Nearest\nNeighbor method. The feature graph can be used to contrast at node-level to\ncapture the local relation. Besides, we retain the global topological structure\ninformation by maximizing the mutual information (MI) of the whole graph and\nfeature embeddings, which is theoretically reduced to exchanging the feature\nembeddings of the feature and the original graphs to reconstruct themselves.\nExtensive experiments show that our method has quite superior performance on\nsemi-supervised node classification task and excellent robustness under noise\nperturbation on graph structure or node features.", + "authors": "Ruiyi Fang, Liangjian Wen, Zhao Kang, Jianzhuang Liu", + "published": "2022-09-02", + "updated": "2022-12-08", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI", + "cs.CV", + "cs.SI" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/2403.03659v1", + "title": "Robust Graph Structure Learning under Heterophily", + "abstract": "Graph is a fundamental mathematical structure in characterizing relations\nbetween different objects and has been widely used on various learning tasks.\nMost methods implicitly assume a given graph to be accurate and complete.\nHowever, real data is inevitably noisy and sparse, which will lead to inferior\nresults. Despite the remarkable success of recent graph representation learning\nmethods, they inherently presume that the graph is homophilic, and largely\noverlook heterophily, where most connected nodes are from different classes. In\nthis regard, we propose a novel robust graph structure learning method to\nachieve a high-quality graph from heterophilic data for downstream tasks. We\nfirst apply a high-pass filter to make each node more distinctive from its\nneighbors by encoding structure information into the node features. Then, we\nlearn a robust graph with an adaptive norm characterizing different levels of\nnoise. Afterwards, we propose a novel regularizer to further refine the graph\nstructure. 
Clustering and semi-supervised classification experiments on\nheterophilic graphs verify the effectiveness of our method.", + "authors": "Xuanting Xie, Zhao Kang, Wenyu Chen", + "published": "2024-03-06", + "updated": "2024-03-06", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/2210.02060v1", + "title": "Graph Classification via Discriminative Edge Feature Learning", + "abstract": "Spectral graph convolutional neural networks (GCNNs) have been producing\nencouraging results in graph classification tasks. However, most spectral GCNNs\nutilize fixed graphs when aggregating node features, while omitting edge\nfeature learning and failing to get an optimal graph structure. Moreover, many\nexisting graph datasets do not provide initialized edge features, further\nrestraining the ability of learning edge features via spectral GCNNs. In this\npaper, we try to address this issue by designing an edge feature scheme and an\nadd-on layer between every two stacked graph convolution layers in GCNN. Both\nare lightweight while effective in filling the gap between edge feature\nlearning and performance enhancement of graph classification. The edge feature\nscheme makes edge features adapt to node representations at different graph\nconvolution layers. The add-on layers help adjust the edge features to an\noptimal graph structure. To test the effectiveness of our method, we take\nEuclidean positions as initial node features and extract graphs with semantic\ninformation from point cloud objects. The node features of our extracted graphs\nare more scalable for edge feature learning than most existing graph datasets\n(in one-hot encoded label format). Three new graph datasets are constructed\nbased on ModelNet40, ModelNet10 and ShapeNet Part datasets. Experimental\nresults show that our method outperforms state-of-the-art graph classification\nmethods on the new datasets by reaching 96.56% overall accuracy on\nGraph-ModelNet40, 98.79% on Graph-ModelNet10 and 97.91% on Graph-ShapeNet Part.\nThe constructed graph datasets will be released to the community.", + "authors": "Yang Yi, Xuequan Lu, Shang Gao, Antonio Robles-Kelly, Yuejie Zhang", + "published": "2022-10-05", + "updated": "2022-10-05", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV", + "cs.LG" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/2306.11264v1", + "title": "GraphGLOW: Universal and Generalizable Structure Learning for Graph Neural Networks", + "abstract": "Graph structure learning is a well-established problem that aims at\noptimizing graph structures adaptive to specific graph datasets to help message\npassing neural networks (i.e., GNNs) to yield effective and robust node\nembeddings. However, the common limitation of existing models lies in the\nunderlying \\textit{closed-world assumption}: the testing graph is the same as\nthe training graph. This premise requires independently training the structure\nlearning model from scratch for each graph dataset, which leads to prohibitive\ncomputation costs and potential risks for serious over-fitting. To mitigate\nthese issues, this paper explores a new direction that moves forward to learn a\nuniversal structure learning model that can generalize across graph datasets in\nan open world. We first introduce the mathematical definition of this novel\nproblem setting, and describe the model formulation from a probabilistic\ndata-generative aspect. 
Then we devise a general framework that coordinates a\nsingle graph-shared structure learner and multiple graph-specific GNNs to\ncapture the generalizable patterns of optimal message-passing topology across\ndatasets. The well-trained structure learner can directly produce adaptive\nstructures for unseen target graphs without any fine-tuning. Across diverse\ndatasets and various challenging cross-graph generalization protocols, our\nexperiments show that even without training on target graphs, the proposed\nmodel i) significantly outperforms expressive GNNs trained on input\n(non-optimized) topology, and ii) surprisingly performs on par with\nstate-of-the-art models that independently optimize adaptive structures for\nspecific target graphs, with notably orders-of-magnitude acceleration for\ntraining on the target graph.", + "authors": "Wentao Zhao, Qitian Wu, Chenxiao Yang, Junchi Yan", + "published": "2023-06-20", + "updated": "2023-06-20", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/2312.04762v1", + "title": "The Graph Lottery Ticket Hypothesis: Finding Sparse, Informative Graph Structure", + "abstract": "Graph learning methods help utilize implicit relationships among data items,\nthereby reducing training label requirements and improving task performance.\nHowever, determining the optimal graph structure for a particular learning task\nremains a challenging research problem.\n In this work, we introduce the Graph Lottery Ticket (GLT) Hypothesis - that\nthere is an extremely sparse backbone for every graph, and that graph learning\nalgorithms attain comparable performance when trained on that subgraph as on\nthe full graph. We identify and systematically study 8 key metrics of interest\nthat directly influence the performance of graph learning algorithms.\nSubsequently, we define the notion of a \"winning ticket\" for graph structure -\nan extremely sparse subset of edges that can deliver a robust approximation of\nthe entire graph's performance. We propose a straightforward and efficient\nalgorithm for finding these GLTs in arbitrary graphs. Empirically, we observe\nthat performance of different graph learning algorithms can be matched or even\nexceeded on graphs with the average degree as low as 5.", + "authors": "Anton Tsitsulin, Bryan Perozzi", + "published": "2023-12-08", + "updated": "2023-12-08", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI", + "cs.SI" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/2106.03236v1", + "title": "Graph2Graph Learning with Conditional Autoregressive Models", + "abstract": "We present a graph neural network model for solving graph-to-graph learning\nproblems. Most deep learning on graphs considers ``simple'' problems such as\ngraph classification or regressing real-valued graph properties. For such\ntasks, the main requirement for intermediate representations of the data is to\nmaintain the structure needed for output, i.e., keeping classes separated or\nmaintaining the order indicated by the regressor. However, a number of learning\ntasks, such as regressing graph-valued output, generative models, or graph\nautoencoders, aim to predict a graph-structured output. In order to\nsuccessfully do this, the learned representations need to preserve far more\nstructure. 
We present a conditional auto-regressive model for graph-to-graph\nlearning and illustrate its representational capabilities via experiments on\nchallenging subgraph predictions from graph algorithmics; as a graph\nautoencoder for reconstruction and visualization; and on pretraining\nrepresentations that allow graph classification with limited labeled data.", + "authors": "Guan Wang, Francois Bernard Lauze, Aasa Feragen", + "published": "2021-06-06", + "updated": "2021-06-06", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/2209.07817v2", + "title": "SPGP: Structure Prototype Guided Graph Pooling", + "abstract": "While graph neural networks (GNNs) have been successful for node\nclassification tasks and link prediction tasks in graph, learning graph-level\nrepresentations still remains a challenge. For the graph-level representation,\nit is important to learn both representation of neighboring nodes, i.e.,\naggregation, and graph structural information. A number of graph pooling\nmethods have been developed for this goal. However, most of the existing\npooling methods utilize k-hop neighborhood without considering explicit\nstructural information in a graph. In this paper, we propose Structure\nPrototype Guided Pooling (SPGP) that utilizes prior graph structures to\novercome the limitation. SPGP formulates graph structures as learnable\nprototype vectors and computes the affinity between nodes and prototype\nvectors. This leads to a novel node scoring scheme that prioritizes informative\nnodes while encapsulating the useful structures of the graph. Our experimental\nresults show that SPGP outperforms state-of-the-art graph pooling methods on\ngraph classification benchmark datasets in both accuracy and scalability.", + "authors": "Sangseon Lee, Dohoon Lee, Yinhua Piao, Sun Kim", + "published": "2022-09-16", + "updated": "2023-03-01", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.SI" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/2212.01749v1", + "title": "Semantic Graph Neural Network with Multi-measure Learning for Semi-supervised Classification", + "abstract": "Graph Neural Networks (GNNs) have attracted increasing attention in recent\nyears and have achieved excellent performance in semi-supervised node\nclassification tasks. The success of most GNNs relies on one fundamental\nassumption, i.e., the original graph structure data is available. However,\nrecent studies have shown that GNNs are vulnerable to the complex underlying\nstructure of the graph, making it necessary to learn comprehensive and robust\ngraph structures for downstream tasks, rather than relying only on the raw\ngraph structure. In light of this, we seek to learn optimal graph structures\nfor downstream tasks and propose a novel framework for semi-supervised\nclassification. Specifically, based on the structural context information of\ngraph and node representations, we encode the complex interactions in semantics\nand generate semantic graphs to preserve the global structure. Moreover, we\ndevelop a novel multi-measure attention layer to optimize the similarity rather\nthan prescribing it a priori, so that the similarity can be adaptively\nevaluated by integrating measures. These graphs are fused and optimized\ntogether with GNN towards semi-supervised classification objective. 
Extensive\nexperiments and ablation studies on six real-world datasets clearly demonstrate\nthe effectiveness of our proposed model and the contribution of each component.", + "authors": "Junchao Lin, Yuan Wan, Jingwen Xu, Xingchen Qi", + "published": "2022-12-04", + "updated": "2022-12-04", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/2009.00647v4", + "title": "Lifelong Graph Learning", + "abstract": "Graph neural networks (GNN) are powerful models for many graph-structured\ntasks. Existing models often assume that the complete structure of the graph is\navailable during training. In practice, however, graph-structured data is\nusually formed in a streaming fashion so that learning a graph continuously is\noften necessary. In this paper, we bridge GNN and lifelong learning by\nconverting a continual graph learning problem to a regular graph learning\nproblem so GNN can inherit the lifelong learning techniques developed for\nconvolutional neural networks (CNN). We propose a new topology, the feature\ngraph, which takes features as new nodes and turns nodes into independent\ngraphs. This successfully converts the original problem of node classification\nto graph classification. In the experiments, we demonstrate the efficiency and\neffectiveness of feature graph networks (FGN) by continuously learning a\nsequence of classical graph datasets. We also show that FGN achieves superior\nperformance in two applications, i.e., lifelong human action recognition with\nwearable devices and feature matching. To the best of our knowledge, FGN is the\nfirst method to bridge graph learning and lifelong learning via a novel graph\ntopology. Source code is available at https://github.com/wang-chen/LGL", + "authors": "Chen Wang, Yuheng Qiu, Dasong Gao, Sebastian Scherer", + "published": "2020-09-01", + "updated": "2022-03-26", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "stat.ML" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/1609.04350v2", + "title": "Time-Variant Graph Classification", + "abstract": "Graphs are commonly used to represent objects, such as images and text, for\npattern classification. In a dynamic world, an object may continuously evolve\nover time, and so does the graph extracted from the underlying object. These\nchanges in graph structure with respect to the temporal order present a new\nrepresentation of the graph, in which an object corresponds to a set of\ntime-variant graphs. In this paper, we formulate a novel time-variant graph\nclassification task and propose a new graph feature, called a graph-shapelet\npattern, for learning and classifying time-variant graphs. Graph-shapelet\npatterns are compact and discriminative graph transformation subsequences. A\ngraph-shapelet pattern can be regarded as a graphical extension of a shapelet\n-- a class of discriminative features designed for vector-based temporal data\nclassification. To discover graph-shapelet patterns, we propose to convert a\ntime-variant graph sequence into time-series data and use the discovered\nshapelets to find graph transformation subsequences as graph-shapelet patterns.\nBy converting each graph-shapelet pattern into a unique tokenized graph\ntransformation sequence, we can measure the similarity between two\ngraph-shapelet patterns and therefore classify time-variant graphs. 
Experiments\non both synthetic and real-world data demonstrate the superior performance of\nthe proposed algorithms.", + "authors": "Haishuai Wang", + "published": "2016-09-14", + "updated": "2017-06-12", + "primary_cat": "cs.DS", + "cats": [ + "cs.DS" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/1902.10042v2", + "title": "Graph Neural Processes: Towards Bayesian Graph Neural Networks", + "abstract": "We introduce Graph Neural Processes (GNP), inspired by the recent work in\nconditional and latent neural processes. A Graph Neural Process is defined as a\nConditional Neural Process that operates on arbitrary graph data. It takes\nfeatures of sparsely observed context points as input, and outputs a\ndistribution over target points. We demonstrate graph neural processes in edge\nimputation and discuss benefits and drawbacks of the method for other\napplication areas. One major benefit of GNPs is the ability to quantify\nuncertainty in deep learning on graph structures. An additional benefit of this\nmethod is the ability to extend graph neural networks to inputs of dynamic\nsized graphs.", + "authors": "Andrew Carr, David Wingate", + "published": "2019-02-26", + "updated": "2019-10-01", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "stat.ML" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/2201.06367v1", + "title": "Towards Unsupervised Deep Graph Structure Learning", + "abstract": "In recent years, graph neural networks (GNNs) have emerged as a successful\ntool in a variety of graph-related applications. However, the performance of\nGNNs can be deteriorated when noisy connections occur in the original graph\nstructures; besides, the dependence on explicit structures prevents GNNs from\nbeing applied to general unstructured scenarios. To address these issues,\nrecently emerged deep graph structure learning (GSL) methods propose to jointly\noptimize the graph structure along with GNN under the supervision of a node\nclassification task. Nonetheless, these methods focus on a supervised learning\nscenario, which leads to several problems, i.e., the reliance on labels, the\nbias of edge distribution, and the limitation on application tasks. In this\npaper, we propose a more practical GSL paradigm, unsupervised graph structure\nlearning, where the learned graph topology is optimized by data itself without\nany external guidance (i.e., labels). To solve the unsupervised GSL problem, we\npropose a novel StrUcture Bootstrapping contrastive LearnIng fraMEwork (SUBLIME\nfor abbreviation) with the aid of self-supervised contrastive learning.\nSpecifically, we generate a learning target from the original data as an\n\"anchor graph\", and use a contrastive loss to maximize the agreement between\nthe anchor graph and the learned graph. To provide persistent guidance, we\ndesign a novel bootstrapping mechanism that upgrades the anchor graph with\nlearned structures during model learning. 
We also design a series of graph\nlearners and post-processing schemes to model the structures to learn.\nExtensive experiments on eight benchmark datasets demonstrate the significant\neffectiveness of our proposed SUBLIME and high quality of the optimized graphs.", + "authors": "Yixin Liu, Yu Zheng, Daokun Zhang, Hongxu Chen, Hao Peng, Shirui Pan", + "published": "2022-01-17", + "updated": "2022-01-17", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/2306.02664v2", + "title": "Structure-free Graph Condensation: From Large-scale Graphs to Condensed Graph-free Data", + "abstract": "Graph condensation, which reduces the size of a large-scale graph by\nsynthesizing a small-scale condensed graph as its substitution, has immediate\nbenefits for various graph learning tasks. However, existing graph condensation\nmethods rely on the joint optimization of nodes and structures in the condensed\ngraph, and overlook critical issues in effectiveness and generalization\nability. In this paper, we advocate a new Structure-Free Graph Condensation\nparadigm, named SFGC, to distill a large-scale graph into a small-scale graph\nnode set without explicit graph structures, i.e., graph-free data. Our idea is\nto implicitly encode topology structure information into the node attributes in\nthe synthesized graph-free data, whose topology is reduced to an identity\nmatrix. Specifically, SFGC contains two collaborative components: (1) a\ntraining trajectory meta-matching scheme for effectively synthesizing\nsmall-scale graph-free data; (2) a graph neural feature score metric for\ndynamically evaluating the quality of the condensed data. Through training\ntrajectory meta-matching, SFGC aligns the long-term GNN learning behaviors\nbetween the large-scale graph and the condensed small-scale graph-free data,\nensuring comprehensive and compact transfer of informative knowledge to the\ngraph-free data. Afterward, the underlying condensed graph-free data would be\ndynamically evaluated with the graph neural feature score, which is a\nclosed-form metric for ensuring the excellent expressiveness of the condensed\ngraph-free data. Extensive experiments verify the superiority of SFGC across\ndifferent condensation ratios.", + "authors": "Xin Zheng, Miao Zhang, Chunyang Chen, Quoc Viet Hung Nguyen, Xingquan Zhu, Shirui Pan", + "published": "2023-06-05", + "updated": "2023-10-23", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.SI" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/2306.08201v1", + "title": "Graph Laplacian Learning with Exponential Family Noise", + "abstract": "A common challenge in applying graph machine learning methods is that the\nunderlying graph of a system is often unknown. Although different graph\ninference methods have been proposed for continuous graph signals, inferring\nthe graph structure underlying other types of data, such as discrete counts, is\nunder-explored. In this paper, we generalize a graph signal processing (GSP)\nframework for learning a graph from smooth graph signals to the exponential\nfamily noise distribution to model various data types. We propose an\nalternating algorithm that estimates the graph Laplacian as well as the\nunobserved smooth representation from the noisy signals. 
We demonstrate in\nsynthetic and real-world data that our new algorithm outperforms competing\nLaplacian estimation methods under noise model mismatch.", + "authors": "Changhao Shi, Gal Mishne", + "published": "2023-06-14", + "updated": "2023-06-14", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "eess.SP" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/1611.05181v3", + "title": "Graph Learning from Data under Structural and Laplacian Constraints", + "abstract": "Graphs are fundamental mathematical structures used in various fields to\nrepresent data, signals and processes. In this paper, we propose a novel\nframework for learning/estimating graphs from data. The proposed framework\nincludes (i) formulation of various graph learning problems, (ii) their\nprobabilistic interpretations and (iii) associated algorithms. Specifically,\ngraph learning problems are posed as estimation of graph Laplacian matrices\nfrom some observed data under given structural constraints (e.g., graph\nconnectivity and sparsity level). From a probabilistic perspective, the\nproblems of interest correspond to maximum a posteriori (MAP) parameter\nestimation of Gaussian-Markov random field (GMRF) models, whose precision\n(inverse covariance) is a graph Laplacian matrix. For the proposed graph\nlearning problems, specialized algorithms are developed by incorporating the\ngraph Laplacian and structural constraints. The experimental results\ndemonstrate that the proposed algorithms outperform the current\nstate-of-the-art methods in terms of accuracy and computational efficiency.", + "authors": "Hilmi E. Egilmez, Eduardo Pavez, Antonio Ortega", + "published": "2016-11-16", + "updated": "2017-07-06", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "stat.ML" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/2003.03892v2", + "title": "COPT: Coordinated Optimal Transport for Graph Sketching", + "abstract": "We introduce COPT, a novel distance metric between graphs defined via an\noptimization routine, computing a coordinated pair of optimal transport maps\nsimultaneously. This gives an unsupervised way to learn general-purpose graph\nrepresentation, applicable to both graph sketching and graph comparison. COPT\ninvolves simultaneously optimizing dual transport plans, one between the\nvertices of two graphs, and another between graph signal probability\ndistributions. We show theoretically that our method preserves important global\nstructural information on graphs, in particular spectral information, and\nanalyze connections to existing studies. Empirically, COPT outperforms state of\nthe art methods in graph classification on both synthetic and real datasets.", + "authors": "Yihe Dong, Will Sawin", + "published": "2020-03-09", + "updated": "2020-06-15", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.DS", + "stat.ML" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/1901.07439v1", + "title": "Multiple Graph Adversarial Learning", + "abstract": "Recently, Graph Convolutional Networks (GCNs) have been widely studied for\ngraph-structured data representation and learning. However, in many real\napplications, data are coming with multiple graphs, and it is non-trivial to\nadapt GCNs to deal with data representation with multiple graph structures. 
One\nmain challenge for multi-graph representation is how to exploit both structure\ninformation of each individual graph and correlation information across\nmultiple graphs simultaneously. In this paper, we propose a novel Multiple\nGraph Adversarial Learning (MGAL) framework for multi-graph representation and\nlearning. MGAL aims to learn an optimal structure-invariant and consistent\nrepresentation for multiple graphs in a common subspace via a novel adversarial\nlearning framework, which thus incorporates both structure information of\nintra-graph and correlation information of inter-graphs simultaneously. Based\non MGAL, we then provide a unified network for semi-supervised learning task.\nPromising experimental results demonstrate the effectiveness of MGAL model.", + "authors": "Bo Jiang, Ziyan Zhang, Jin Tang, Bin Luo", + "published": "2019-01-22", + "updated": "2019-01-22", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/2403.04923v2", + "title": "Control-based Graph Embeddings with Data Augmentation for Contrastive Learning", + "abstract": "In this paper, we study the problem of unsupervised graph representation\nlearning by harnessing the control properties of dynamical networks defined on\ngraphs. Our approach introduces a novel framework for contrastive learning, a\nwidely prevalent technique for unsupervised representation learning. A crucial\nstep in contrastive learning is the creation of 'augmented' graphs from the\ninput graphs. Though different from the original graphs, these augmented graphs\nretain the original graph's structural characteristics. Here, we propose a\nunique method for generating these augmented graphs by leveraging the control\nproperties of networks. The core concept revolves around perturbing the\noriginal graph to create a new one while preserving the controllability\nproperties specific to networks and graphs. Compared to the existing methods,\nwe demonstrate that this innovative approach enhances the effectiveness of\ncontrastive learning frameworks, leading to superior results regarding the\naccuracy of the classification tasks. The key innovation lies in our ability to\ndecode the network structure using these control properties, opening new\navenues for unsupervised graph representation learning.", + "authors": "Obaid Ullah Ahmad, Anwar Said, Mudassir Shabbir, Waseem Abbas, Xenofon Koutsoukos", + "published": "2024-03-07", + "updated": "2024-04-18", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.MA", + "cs.SY", + "eess.SY" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/2011.01412v1", + "title": "Sampling and Recovery of Graph Signals based on Graph Neural Networks", + "abstract": "We propose interpretable graph neural networks for sampling and recovery of\ngraph signals, respectively. To take informative measurements, we propose a new\ngraph neural sampling module, which aims to select those vertices that\nmaximally express their corresponding neighborhoods. Such expressiveness can be\nquantified by the mutual information between vertices' features and\nneighborhoods' features, which are estimated via a graph neural network. 
To\nreconstruct an original graph signal from the sampled measurements, we propose\na graph neural recovery module based on the algorithm-unrolling technique.\nCompared to previous analytical sampling and recovery, the proposed methods are\nable to flexibly learn a variety of graph signal models from data by leveraging\nthe learning ability of neural networks; compared to previous\nneural-network-based sampling and recovery, the proposed methods are designed\nthrough exploiting specific graph properties and provide interpretability. We\nfurther design a new multiscale graph neural network, which is a trainable\nmultiscale graph filter bank and can handle various graph-related learning\ntasks. The multiscale network leverages the proposed graph neural sampling and\nrecovery modules to achieve multiscale representations of a graph. In the\nexperiments, we illustrate the effects of the proposed graph neural sampling\nand recovery modules and find that the modules can flexibly adapt to various\ngraph structures and graph signals. In the task of active-sampling-based\nsemi-supervised learning, the graph neural sampling module improves the\nclassification accuracy by over 10% on the Cora dataset. We further validate the\nproposed multiscale graph neural network on several standard datasets for both\nvertex and graph classification. The results show that our method consistently\nimproves the classification accuracies.", + "authors": "Siheng Chen, Maosen Li, Ya Zhang", + "published": "2020-11-03", + "updated": "2020-11-03", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.SI", + "eess.SP" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/1905.10715v1", + "title": "Graph Attention Auto-Encoders", + "abstract": "Auto-encoders have emerged as a successful framework for unsupervised\nlearning. However, conventional auto-encoders are incapable of utilizing\nexplicit relations in structured data. To take advantage of relations in\ngraph-structured data, several graph auto-encoders have recently been proposed,\nbut they neglect to reconstruct either the graph structure or node attributes.\nIn this paper, we present the graph attention auto-encoder (GATE), a neural\nnetwork architecture for unsupervised representation learning on\ngraph-structured data. Our architecture is able to reconstruct graph-structured\ninputs, including both node attributes and the graph structure, through stacked\nencoder/decoder layers equipped with self-attention mechanisms. In the encoder,\nby considering node attributes as initial node representations, each layer\ngenerates new representations of nodes by attending over their neighbors'\nrepresentations. In the decoder, we attempt to reverse the encoding process to\nreconstruct node attributes. Moreover, node representations are regularized to\nreconstruct the graph structure. Our proposed architecture does not need to\nknow the graph structure upfront, and thus it can be applied to inductive\nlearning.
Our experiments demonstrate competitive performance on several node\nclassification benchmark datasets for transductive and inductive tasks, even\nexceeding the performance of supervised learning baselines in most cases.", + "authors": "Amin Salehi, Hasan Davulcu", + "published": "2019-05-26", + "updated": "2019-05-26", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.SI", + "stat.ML" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/2106.10124v1", + "title": "Graph Context Encoder: Graph Feature Inpainting for Graph Generation and Self-supervised Pretraining", + "abstract": "We propose the Graph Context Encoder (GCE), a simple but efficient approach\nfor graph representation learning based on graph feature masking and\nreconstruction.\n GCE models are trained to efficiently reconstruct input graphs similarly to a\ngraph autoencoder where node and edge labels are masked. In particular, our\nmodel is also allowed to change graph structures by masking and reconstructing\ngraphs augmented by random pseudo-edges.\n We show that GCE can be used for novel graph generation, with applications\nfor molecule generation. Used as a pretraining method, we also show that GCE\nimproves baseline performances in supervised classification tasks tested on\nmultiple standard benchmark graph datasets.", + "authors": "Oriel Frigo, R\u00e9my Brossard, David Dehaene", + "published": "2021-06-18", + "updated": "2021-06-18", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "68T07" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/2404.11869v1", + "title": "Multi-view Graph Structural Representation Learning via Graph Coarsening", + "abstract": "Graph Transformers (GTs) have made remarkable achievements in graph-level\ntasks. However, most existing works regard graph structures as a form of\nguidance or bias for enhancing node representations, which focuses on\nnode-central perspectives and lacks explicit representations of edges and\nstructures. One natural question is, can we treat graph structures node-like as\na whole to learn high-level features? Through experimental analysis, we explore\nthe feasibility of this assumption. Based on our findings, we propose a novel\nmulti-view graph structural representation learning model via graph coarsening\n(MSLgo) on GT architecture for graph classification. Specifically, we build\nthree unique views, original, coarsening, and conversion, to learn a thorough\nstructural representation. We compress loops and cliques via hierarchical\nheuristic graph coarsening and restrict them with well-designed constraints,\nwhich builds the coarsening view to learn high-level interactions between\nstructures. We also introduce line graphs for edge embeddings and switch to\nedge-central perspective to construct the conversion view. Experiments on six\nreal-world datasets demonstrate the improvements of MSLgo over 14 baselines\nfrom various architectures.", + "authors": "Xiaorui Qi, Qijie Bai, Yanlong Wen, Haiwei Zhang, Xiaojie Yuan", + "published": "2024-04-18", + "updated": "2024-04-18", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.SI" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/1906.02319v1", + "title": "DEMO-Net: Degree-specific Graph Neural Networks for Node and Graph Classification", + "abstract": "Graph data widely exist in many high-impact applications. 
Inspired by the\nsuccess of deep learning in grid-structured data, graph neural network models\nhave been proposed to learn powerful node-level or graph-level representation.\nHowever, most of the existing graph neural networks suffer from the following\nlimitations: (1) there is limited analysis regarding the graph convolution\nproperties, such as seed-oriented, degree-aware and order-free; (2) the node's\ndegree-specific graph structure is not explicitly expressed in graph\nconvolution for distinguishing structure-aware node neighborhoods; (3) the\ntheoretical explanation regarding the graph-level pooling schemes is unclear.\n To address these problems, we propose a generic degree-specific graph neural\nnetwork named DEMO-Net motivated by Weisfeiler-Lehman graph isomorphism test\nthat recursively identifies 1-hop neighborhood structures. In order to\nexplicitly capture the graph topology integrated with node attributes, we argue\nthat graph convolution should have three properties: seed-oriented,\ndegree-aware, order-free. To this end, we propose multi-task graph convolution\nwhere each task represents node representation learning for nodes with a\nspecific degree value, thus leading to preserving the degree-specific graph\nstructure. In particular, we design two multi-task learning methods:\ndegree-specific weight and hashing functions for graph convolution. In\naddition, we propose a novel graph-level pooling/readout scheme for learning\ngraph representation provably lying in a degree-specific Hilbert kernel space.\nThe experimental results on several node and graph classification benchmark\ndata sets demonstrate the effectiveness and efficiency of our proposed DEMO-Net\nover state-of-the-art graph neural network models.", + "authors": "Jun Wu, Jingrui He, Jiejun Xu", + "published": "2019-06-05", + "updated": "2019-06-05", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "stat.ML" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/2306.11307v3", + "title": "Transforming Graphs for Enhanced Attribute Clustering: An Innovative Graph Transformer-Based Method", + "abstract": "Graph Representation Learning (GRL) is an influential methodology, enabling a\nmore profound understanding of graph-structured data and aiding graph\nclustering, a critical task across various domains. The recent incursion of\nattention mechanisms, originally an artifact of Natural Language Processing\n(NLP), into the realm of graph learning has spearheaded a notable shift in\nresearch trends. Consequently, Graph Attention Networks (GATs) and Graph\nAttention Auto-Encoders have emerged as preferred tools for graph clustering\ntasks. Yet, these methods primarily employ a local attention mechanism, thereby\ncurbing their capacity to apprehend the intricate global dependencies between\nnodes within graphs. Addressing these impediments, this study introduces an\ninnovative method known as the Graph Transformer Auto-Encoder for Graph\nClustering (GTAGC). By melding the Graph Auto-Encoder with the Graph\nTransformer, GTAGC is adept at capturing global dependencies between nodes.\nThis integration amplifies the graph representation and surmounts the\nconstraints posed by the local attention mechanism. The architecture of GTAGC\nencompasses graph embedding, integration of the Graph Transformer within the\nautoencoder structure, and a clustering component. 
It strategically alternates\nbetween graph embedding and clustering, thereby tailoring the Graph Transformer\nfor clustering tasks, whilst preserving the graph's global structural\ninformation. Through extensive experimentation on diverse benchmark datasets,\nGTAGC has exhibited superior performance against existing state-of-the-art\ngraph clustering methodologies.", + "authors": "Shuo Han, Jiacheng Liu, Jiayun Wu, Yinan Chen, Li Tao", + "published": "2023-06-20", + "updated": "2023-08-12", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/2210.06126v1", + "title": "Regularized Graph Structure Learning with Semantic Knowledge for Multi-variates Time-Series Forecasting", + "abstract": "Multivariate time-series forecasting is a critical task for many\napplications, and graph time-series network is widely studied due to its\ncapability to capture the spatial-temporal correlation simultaneously. However,\nmost existing works focus more on learning with the explicit prior graph\nstructure, while ignoring potential information from the implicit graph\nstructure, yielding incomplete structure modeling. Some recent works attempt to\nlearn the intrinsic or implicit graph structure directly while lacking a way to\ncombine explicit prior structure with implicit structure together. In this\npaper, we propose Regularized Graph Structure Learning (RGSL) model to\nincorporate both explicit prior structure and implicit structure together, and\nlearn the forecasting deep networks along with the graph structure. RGSL\nconsists of two innovative modules. First, we derive an implicit dense\nsimilarity matrix through node embedding, and learn the sparse graph structure\nusing the Regularized Graph Generation (RGG) based on the Gumbel Softmax trick.\nSecond, we propose a Laplacian Matrix Mixed-up Module (LM3) to fuse the\nexplicit graph and implicit graph together. We conduct experiments on three\nreal-world datasets. Results show that the proposed RGSL model outperforms\nexisting graph forecasting algorithms by a notable margin, while learning\nmeaningful graph structure simultaneously. Our code and models are made\npublicly available at https://github.com/alipay/RGSL.git.", + "authors": "Hongyuan Yu, Ting Li, Weichen Yu, Jianguo Li, Yan Huang, Liang Wang, Alex Liu", + "published": "2022-10-12", + "updated": "2022-10-12", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.CV" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/2311.11821v1", + "title": "Cross-View Graph Consistency Learning for Invariant Graph Representations", + "abstract": "Graph representation learning is fundamental for analyzing graph-structured\ndata. Exploring invariant graph representations remains a challenge for most\nexisting graph representation learning methods. In this paper, we propose a\ncross-view graph consistency learning (CGCL) method that learns invariant graph\nrepresentations for link prediction. First, two complementary augmented views\nare derived from an incomplete graph structure through a bidirectional graph\nstructure augmentation scheme. This augmentation scheme mitigates the potential\ninformation loss that is commonly associated with various data augmentation\ntechniques involving raw graph data, such as edge perturbation, node removal,\nand attribute masking. Second, we propose a CGCL model that can learn invariant\ngraph representations.
A cross-view training scheme is proposed to train the\nproposed CGCL model. This scheme attempts to maximize the consistency\ninformation between one augmented view and the graph structure reconstructed\nfrom the other augmented view. Furthermore, we offer a comprehensive\ntheoretical CGCL analysis. This paper empirically and experimentally\ndemonstrates the effectiveness of the proposed CGCL method, achieving\ncompetitive results on graph datasets in comparisons with several\nstate-of-the-art algorithms.", + "authors": "Jie Chen, Zhiming Li, Hua Mao, Wai Lok Woo, Xi Peng", + "published": "2023-11-20", + "updated": "2023-11-20", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.CV" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/2202.08235v3", + "title": "Data Augmentation for Deep Graph Learning: A Survey", + "abstract": "Graph neural networks, a powerful deep learning tool to model\ngraph-structured data, have demonstrated remarkable performance on numerous\ngraph learning tasks. To address the data noise and data scarcity issues in\ndeep graph learning, the research on graph data augmentation has intensified\nlately. However, conventional data augmentation methods can hardly handle\ngraph-structured data which is defined in non-Euclidean space with\nmulti-modality. In this survey, we formally formulate the problem of graph data\naugmentation and further review the representative techniques and their\napplications in different deep graph learning problems. Specifically, we first\npropose a taxonomy for graph data augmentation techniques and then provide a\nstructured review by categorizing the related work based on the augmented\ninformation modalities. Moreover, we summarize the applications of graph data\naugmentation in two representative problems in data-centric deep graph\nlearning: (1) reliable graph learning which focuses on enhancing the utility of\ninput graph as well as the model capacity via graph data augmentation; and (2)\nlow-resource graph learning which targets on enlarging the labeled training\ndata scale through graph data augmentation. For each problem, we also provide a\nhierarchical problem taxonomy and review the existing literature related to\ngraph data augmentation. Finally, we point out promising research directions\nand the challenges in future research.", + "authors": "Kaize Ding, Zhe Xu, Hanghang Tong, Huan Liu", + "published": "2022-02-16", + "updated": "2022-11-21", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/2212.04934v1", + "title": "Learning Graph Algorithms With Recurrent Graph Neural Networks", + "abstract": "Classical graph algorithms work well for combinatorial problems that can be\nthoroughly formalized and abstracted. Once the algorithm is derived, it\ngeneralizes to instances of any size. However, developing an algorithm that\nhandles complex structures and interactions in the real world can be\nchallenging. Rather than specifying the algorithm, we can try to learn it from\nthe graph-structured data. Graph Neural Networks (GNNs) are inherently capable\nof working on graph structures; however, they struggle to generalize well, and\nlearning on larger instances is challenging. In order to scale, we focus on a\nrecurrent architecture design that can learn simple graph problems end to end\non smaller graphs and then extrapolate to larger instances. 
As our main\ncontribution, we identify three essential techniques for recurrent GNNs to\nscale. By using (i) skip connections, (ii) state regularization, and (iii) edge\nconvolutions, we can guide GNNs toward extrapolation. This allows us to train\non small graphs and apply the same model to much larger graphs during\ninference. Moreover, we empirically validate the extrapolation capabilities of\nour GNNs on algorithmic datasets.", + "authors": "Florian Gr\u00f6tschla, Jo\u00ebl Mathys, Roger Wattenhofer", + "published": "2022-12-09", + "updated": "2022-12-09", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/2108.01660v3", + "title": "Graph Neural Networks With Lifting-based Adaptive Graph Wavelets", + "abstract": "Spectral-based graph neural networks (SGNNs) have been attracting increasing\nattention in graph representation learning. However, existing SGNNs are limited\nin implementing graph filters with rigid transforms (e.g., graph Fourier or\npredefined graph wavelet transforms) and cannot adapt to signals residing on\ngraphs and tasks at hand. In this paper, we propose a novel class of graph\nneural networks that realizes graph filters with adaptive graph wavelets.\nSpecifically, the adaptive graph wavelets are learned with neural\nnetwork-parameterized lifting structures, where structure-aware attention-based\nlifting operations (i.e., prediction and update operations) are developed to\njointly consider graph structures and node features. We propose to lift based\non diffusion wavelets to alleviate the structural information loss induced by\npartitioning non-bipartite graphs. By design, the locality and sparsity of the\nresulting wavelet transform as well as the scalability of the lifting structure\nare guaranteed. We further derive a soft-thresholding filtering operation by\nlearning sparse graph representations in terms of the learned wavelets,\nyielding localized, efficient, and scalable wavelet-based graph filters. To\nensure that the learned graph representations are invariant to node\npermutations, a layer is employed at the input of the networks to reorder the\nnodes according to their local topology information. We evaluate the proposed\nnetworks in both node-level and graph-level representation learning tasks on\nbenchmark citation and bioinformatics graph datasets. Extensive experiments\ndemonstrate the superiority of the proposed networks over existing SGNNs in\nterms of accuracy, efficiency, and scalability.", + "authors": "Mingxing Xu, Wenrui Dai, Chenglin Li, Junni Zou, Hongkai Xiong, Pascal Frossard", + "published": "2021-08-03", + "updated": "2022-01-04", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/1904.10146v2", + "title": "Exploring Structure-Adaptive Graph Learning for Robust Semi-Supervised Classification", + "abstract": "Graph Convolutional Neural Networks (GCNNs) are generalizations of CNNs to\ngraph-structured data, in which convolution is guided by the graph topology. In\nmany cases where graphs are unavailable, existing methods manually construct\ngraphs or learn task-driven adaptive graphs. In this paper, we propose Graph\nLearning Neural Networks (GLNNs), which exploit the optimization of graphs (the\nadjacency matrix in particular) from both data and tasks.
Leveraging\nspectral graph theory, we propose the objective of graph learning from a\nsparsity constraint, properties of a valid adjacency matrix as well as a graph\nLaplacian regularizer via maximum a posteriori estimation. The optimization\nobjective is then integrated into the loss function of the GCNN, which adapts\nthe graph topology to not only labels of a specific task but also the input\ndata. Experimental results show that our proposed GLNN outperforms\nstate-of-the-art approaches over widely adopted social network datasets and\ncitation network datasets for semi-supervised classification.", + "authors": "Xiang Gao, Wei Hu, Zongming Guo", + "published": "2019-04-23", + "updated": "2019-09-16", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "stat.ML" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/2211.07970v1", + "title": "Adaptive Multi-Neighborhood Attention based Transformer for Graph Representation Learning", + "abstract": "By incorporating the graph structural information into Transformers, graph\nTransformers have exhibited promising performance for graph representation\nlearning in recent years. Existing graph Transformers leverage specific\nstrategies, such as Laplacian eigenvectors and shortest paths of the node\npairs, to preserve the structural features of nodes and feed them into the\nvanilla Transformer to learn the representations of nodes. It is hard for such\npredefined rules to extract informative graph structural features for arbitrary\ngraphs whose topology structure varies greatly, limiting the learning capacity\nof the models. To this end, we propose an adaptive graph Transformer, termed\nMulti-Neighborhood Attention based Graph Transformer (MNA-GT), which captures\nthe graph structural information for each node from the multi-neighborhood\nattention mechanism adaptively. By defining the input to perform scaled-dot\nproduct as an attention kernel, MNA-GT constructs multiple attention kernels\nbased on different hops of neighborhoods such that each attention kernel can\ncapture specific graph structural information of the corresponding neighborhood\nfor each node pair. In this way, MNA-GT can preserve the graph structural\ninformation efficiently by incorporating node representations learned by\ndifferent attention kernels. MNA-GT further employs an attention layer to learn\nthe importance of different attention kernels to enable the model to adaptively\ncapture the graph structural information for different nodes. Extensive\nexperiments are conducted on a variety of graph benchmarks, and the empirical\nresults show that MNA-GT outperforms many strong baselines.", + "authors": "Gaichao Li, Jinsong Chen, Kun He", + "published": "2022-11-15", + "updated": "2022-11-15", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.SI" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/2109.11898v1", + "title": "Graph Learning Augmented Heterogeneous Graph Neural Network for Social Recommendation", + "abstract": "Social recommendation based on social networks has achieved great success in\nimproving the performance of recommendation systems. Since social network\n(user-user relations) and user-item interactions are both naturally represented\nas graph-structured data, Graph Neural Networks (GNNs) have thus been widely\napplied for social recommendation.
In this work, we propose an end-to-end\nheterogeneous global graph learning framework, namely Graph Learning Augmented\nHeterogeneous Graph Neural Network (GL-HGNN) for social recommendation. GL-HGNN\naims to learn a heterogeneous global graph that makes full use of user-user\nrelations, user-item interactions and item-item similarities in a unified\nperspective. To this end, we design a Graph Learner (GL) method to learn and\noptimize user-user and item-item connections separately. Moreover, we employ a\nHeterogeneous Graph Neural Network (HGNN) to capture the high-order complex\nsemantic relations from our learned heterogeneous global graph. To scale up the\ncomputation of graph learning, we further present the Anchor-based Graph\nLearner (AGL) to reduce computational complexity. Extensive experiments on four\nreal-world datasets demonstrate the effectiveness of our model.", + "authors": "Yiming Zhang, Lingfei Wu, Qi Shen, Yitong Pang, Zhihua Wei, Fangli Xu, Ethan Chang, Bo Long", + "published": "2021-09-24", + "updated": "2021-09-24", + "primary_cat": "cs.IR", + "cats": [ + "cs.IR" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/2403.07294v1", + "title": "Graph Data Condensation via Self-expressive Graph Structure Reconstruction", + "abstract": "With the increasing demands of training graph neural networks (GNNs) on\nlarge-scale graphs, graph data condensation has emerged as a critical technique\nto relieve the storage and time costs during the training phase. It aims to\ncondense the original large-scale graph to a much smaller synthetic graph while\npreserving the essential information necessary for efficiently training a\ndownstream GNN. However, existing methods concentrate either on optimizing node\nfeatures exclusively or endeavor to independently learn node features and the\ngraph structure generator. They could not explicitly leverage the information\nof the original graph structure and failed to construct an interpretable graph\nstructure for the synthetic dataset. To address these issues, we introduce a\nnovel framework named \\textbf{G}raph Data \\textbf{C}ondensation via\n\\textbf{S}elf-expressive Graph Structure \\textbf{R}econstruction\n(\\textbf{GCSR}). Our method stands out by (1) explicitly incorporating the\noriginal graph structure into the condensing process and (2) capturing the\nnuanced interdependencies between the condensed nodes by reconstructing an\ninterpretable self-expressive graph structure. Extensive experiments and\ncomprehensive analysis validate the efficacy of the proposed method across\ndiverse GNN models and datasets. Our code is available at\nhttps://www.dropbox.com/scl/fi/2aonyp5ln5gisdqtjimu8/GCSR.zip?rlkey=11cuwfpsf54wxiiktu0klud0x&dl=0", + "authors": "Zhanyu Liu, Chaolv Zeng, Guanjie Zheng", + "published": "2024-03-12", + "updated": "2024-03-12", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI", + "cs.SI" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/2210.01489v1", + "title": "Generative Models and Learning Algorithms for Core-Periphery Structured Graphs", + "abstract": "We consider core-periphery structured graphs, which are graphs with a group\nof densely and sparsely connected nodes, respectively, referred to as core and\nperiphery nodes. The so-called core score of a node is related to the\nlikelihood of it being a core node. 
In this paper, we focus on learning the\ncore scores of a graph from its node attributes and connectivity structure. To\nthis end, we propose two classes of probabilistic graphical models: affine and\nnonlinear. First, we describe affine generative models to model the dependence\nof node attributes on its core scores, which determine the graph structure.\nNext, we discuss nonlinear generative models in which the partial correlations\nof node attributes influence the graph structure through latent core scores. We\ndevelop algorithms for inferring the model parameters and core scores of a\ngraph when both the graph structure and node attributes are available. When\nonly the node attributes of graphs are available, we jointly learn a\ncore-periphery structured graph and its core scores. We provide results from\nnumerical experiments on several synthetic and real-world datasets to\ndemonstrate the efficacy of the developed models and algorithms.", + "authors": "Sravanthi Gurugubelli, Sundeep Prabhakar Chepuri", + "published": "2022-10-04", + "updated": "2022-10-04", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/2111.03262v2", + "title": "CGCL: Collaborative Graph Contrastive Learning without Handcrafted Graph Data Augmentations", + "abstract": "Unsupervised graph representation learning is a non-trivial topic. The\nsuccess of contrastive methods in the unsupervised representation learning on\nstructured data inspires similar attempts on the graph. Existing graph\ncontrastive learning (GCL) aims to learn the invariance across multiple\naugmentation views, which renders it heavily reliant on the handcrafted graph\naugmentations. However, inappropriate graph data augmentations can potentially\njeopardize such invariance. In this paper, we show the potential hazards of\ninappropriate augmentations and then propose a novel Collaborative Graph\nContrastive Learning framework (CGCL). This framework harnesses multiple graph\nencoders to observe the graph. Features observed from different encoders serve\nas the contrastive views in contrastive learning, which avoids inducing\nunstable perturbation and guarantees the invariance. To ensure the\ncollaboration among diverse graph encoders, we propose the concepts of\nasymmetric architecture and complementary encoders as the design principle. To\nfurther prove the rationality, we utilize two quantitative metrics to measure\nthe assembly of CGCL respectively. Extensive experiments demonstrate the\nadvantages of CGCL in unsupervised graph-level representation learning and the\npotential of collaborative framework. The source code for reproducibility is\navailable at https://github.com/zhangtia16/CGCL", + "authors": "Tianyu Zhang, Yuxiang Ren, Wenzheng Feng, Weitao Du, Xuecang Zhang", + "published": "2021-11-05", + "updated": "2024-04-01", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/2104.09304v1", + "title": "A Tunable Model for Graph Generation Using LSTM and Conditional VAE", + "abstract": "With the development of graph applications, generative models for graphs have\nbeen more crucial. Classically, stochastic models that generate graphs with a\npre-defined probability of edges and nodes have been studied. Recently, some\nmodels that reproduce the structural features of graphs by learning from actual\ngraph data using machine learning have been studied. 
However, in these\nconventional studies based on machine learning, structural features of graphs\ncan be learned from data, but it is not possible to tune features and generate\ngraphs with specific features. In this paper, we propose a generative model\nthat can tune specific features, while learning structural features of a graph\nfrom data. With a dataset of graphs with various features generated by a\nstochastic model, we confirm that our model can generate a graph with specific\nfeatures.", + "authors": "Shohei Nakazawa, Yoshiki Sato, Kenji Nakagawa, Sho Tsugawa, Kohei Watabe", + "published": "2021-04-15", + "updated": "2021-04-15", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.NI", + "cs.SI" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/2206.08561v1", + "title": "Boosting Graph Structure Learning with Dummy Nodes", + "abstract": "With the development of graph kernels and graph representation learning, many\nsuperior methods have been proposed to handle scalability and oversmoothing\nissues on graph structure learning. However, most of those strategies are\ndesigned based on practical experience rather than theoretical analysis. In\nthis paper, we use a particular dummy node connecting to all existing vertices\nwithout affecting original vertex and edge properties. We further prove that\nsuch a dummy node can help build an efficient monomorphic edge-to-vertex\ntransform and an epimorphic inverse to recover the original graph back. It also\nindicates that adding dummy nodes can preserve local and global structures for\nbetter graph representation learning. We extend graph kernels and graph neural\nnetworks with dummy nodes and conduct experiments on graph classification and\nsubgraph isomorphism matching tasks. Empirical results demonstrate that taking\ngraphs with dummy nodes as input significantly boosts graph structure learning,\nand using their edge-to-vertex graphs can also achieve similar results. We also\ndiscuss the gain of expressive power from the dummy in neural networks.", + "authors": "Xin Liu, Jiayang Cheng, Yangqiu Song, Xin Jiang", + "published": "2022-06-17", + "updated": "2022-06-17", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/1910.08057v1", + "title": "Graph Embedding VAE: A Permutation Invariant Model of Graph Structure", + "abstract": "Generative models of graph structure have applications in biology and social\nsciences. The state of the art is GraphRNN, which decomposes the graph\ngeneration process into a series of sequential steps. While effective for\nmodest sizes, it loses its permutation invariance for larger graphs. Instead,\nwe present a permutation invariant latent-variable generative model relying on\ngraph embeddings to encode structure. Using tools from the random graph\nliterature, our model is highly scalable to large graphs with likelihood\nevaluation and generation in $O(|V| + |E|)$.", + "authors": "Tony Duan, Juho Lee", + "published": "2019-10-17", + "updated": "2019-10-17", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "stat.ML" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/2104.08163v1", + "title": "Finding Motifs in Knowledge Graphs using Compression", + "abstract": "We introduce a method to find network motifs in knowledge graphs. Network\nmotifs are useful patterns or meaningful subunits of the graph that recur\nfrequently.
We extend the common definition of a network motif to coincide with\na basic graph pattern. We introduce an approach, inspired by recent work for\nsimple graphs, to induce these from a given knowledge graph, and show that the\nmotifs found reflect the basic structure of the graph. Specifically, we show\nthat in random graphs, no motifs are found, and that when we insert a motif\nartificially, it can be detected. Finally, we show the results of motif\ninduction on three real-world knowledge graphs.", + "authors": "Peter Bloem", + "published": "2021-04-16", + "updated": "2021-04-16", + "primary_cat": "stat.ML", + "cats": [ + "stat.ML", + "cs.DS", + "cs.LG", + "cs.SI" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/2008.10065v1", + "title": "Kernel-based Graph Learning from Smooth Signals: A Functional Viewpoint", + "abstract": "The problem of graph learning concerns the construction of an explicit\ntopological structure revealing the relationship between nodes representing\ndata entities, which plays an increasingly important role in the success of\nmany graph-based representations and algorithms in the field of machine\nlearning and graph signal processing. In this paper, we propose a novel graph\nlearning framework that incorporates the node-side and observation-side\ninformation, and in particular the covariates that help to explain the\ndependency structures in graph signals. To this end, we consider graph signals\nas functions in the reproducing kernel Hilbert space associated with a\nKronecker product kernel, and integrate functional learning with\nsmoothness-promoting graph learning to learn a graph representing the\nrelationship between nodes. The functional learning increases the robustness of\ngraph learning against missing and incomplete information in the graph signals.\nIn addition, we develop a novel graph-based regularisation method which, when\ncombined with the Kronecker product kernel, enables our model to capture both\nthe dependency explained by the graph and the dependency due to graph signals\nobserved under different but related circumstances, e.g. different points in\ntime. The latter means the graph signals are free from the i.i.d. assumptions\nrequired by the classical graph learning models. Experiments on both synthetic\nand real-world data show that our methods outperform the state-of-the-art\nmodels in learning a meaningful graph topology from graph signals, in\nparticular under heavy noise, missing values, and multiple dependency.", + "authors": "Xingyue Pu, Siu Lun Chau, Xiaowen Dong, Dino Sejdinovic", + "published": "2020-08-23", + "updated": "2020-08-23", + "primary_cat": "stat.ML", + "cats": [ + "stat.ML", + "cs.LG", + "cs.SI", + "eess.SP" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/1911.08776v2", + "title": "Joint Embedding Learning of Educational Knowledge Graphs", + "abstract": "As an efficient model for knowledge organization, the knowledge graph has\nbeen widely adopted in several fields, e.g., biomedicine, sociology, and\neducation. And there is a steady trend of learning embedding representations of\nknowledge graphs to facilitate knowledge graph construction and downstream\ntasks. In general, knowledge graph embedding techniques aim to learn vectorized\nrepresentations which preserve the structural information of the graph. And\nconventional embedding learning models rely on structural relationships among\nentities and relations. 
However, in educational knowledge graphs, structural\nrelationships are not the focus. Instead, rich literals of the graphs are more\nvaluable. In this paper, we focus on this problem and propose a novel model for\nembedding learning of educational knowledge graphs. Our model considers both\nstructural and literal information and jointly learns embedding\nrepresentations. Three experimental graphs were constructed based on an\neducational knowledge graph which has been applied in real-world teaching. We\nconducted two experiments on the three graphs and other common benchmark\ngraphs. The experimental results proved the effectiveness of our model and its\nsuperiority over other baselines when processing educational knowledge graphs.", + "authors": "Siyu Yao, Ruijie Wang, Shen Sun, Derui Bu, Jun Liu", + "published": "2019-11-20", + "updated": "2019-12-23", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI", + "cs.LG" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/2402.16374v2", + "title": "Graph Learning under Distribution Shifts: A Comprehensive Survey on Domain Adaptation, Out-of-distribution, and Continual Learning", + "abstract": "Graph learning plays a pivotal role and has gained significant attention in\nvarious application scenarios, from social network analysis to recommendation\nsystems, for its effectiveness in modeling complex data relations represented\nby graph structural data. In reality, the real-world graph data typically show\ndynamics over time, with changing node attributes and edge structure, leading\nto the severe graph data distribution shift issue. This issue is compounded by\nthe diverse and complex nature of distribution shifts, which can significantly\nimpact the performance of graph learning methods in degraded generalization and\nadaptation capabilities, posing a substantial challenge to their effectiveness.\nIn this survey, we provide a comprehensive review and summary of the latest\napproaches, strategies, and insights that address distribution shifts within\nthe context of graph learning. Concretely, according to the observability of\ndistributions in the inference stage and the availability of sufficient\nsupervision information in the training stage, we categorize existing graph\nlearning methods into several essential scenarios, including graph domain\nadaptation learning, graph out-of-distribution learning, and graph continual\nlearning. For each scenario, a detailed taxonomy is proposed, with specific\ndescriptions and discussions of existing progress made in distribution-shifted\ngraph learning. Additionally, we discuss the potential applications and future\ndirections for graph learning under distribution shifts with a systematic\nanalysis of the current state in this field. 
The survey is positioned to\nprovide general guidance for the development of effective graph learning\nalgorithms in handling graph distribution shifts, and to stimulate future\nresearch and advancements in this area.", + "authors": "Man Wu, Xin Zheng, Qin Zhang, Xiao Shen, Xiong Luo, Xingquan Zhu, Shirui Pan", + "published": "2024-02-26", + "updated": "2024-03-07", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.SI" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/2109.11796v1", + "title": "Edge but not Least: Cross-View Graph Pooling", + "abstract": "Graph neural networks have emerged as a powerful model for graph\nrepresentation learning to undertake graph-level prediction tasks. Various\ngraph pooling methods have been developed to coarsen an input graph into a\nsuccinct graph-level representation through aggregating node embeddings\nobtained via graph convolution. However, most graph pooling methods are heavily\nnode-centric and are unable to fully leverage the crucial information contained\nin global graph structure. This paper presents a cross-view graph pooling\n(Co-Pooling) method to better exploit crucial graph structure information. The\nproposed Co-Pooling fuses pooled representations learnt from both node view and\nedge view. Through cross-view interaction, edge-view pooling and node-view\npooling seamlessly reinforce each other to learn more informative graph-level\nrepresentations. Co-Pooling has the advantage of handling various graphs with\ndifferent types of node attributes. Extensive experiments on a total of 15\ngraph benchmark datasets validate the effectiveness of our proposed method,\ndemonstrating its superior performance over state-of-the-art pooling methods on\nboth graph classification and graph regression tasks.", + "authors": "Xiaowei Zhou, Jie Yin, Ivor W. Tsang", + "published": "2021-09-24", + "updated": "2021-09-24", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/2204.05258v1", + "title": "Multi-view graph structure learning using subspace merging on Grassmann manifold", + "abstract": "Many successful learning algorithms have been recently developed to represent\ngraph-structured data. For example, Graph Neural Networks (GNNs) have achieved\nconsiderable successes in various tasks such as node classification, graph\nclassification, and link prediction. However, these methods are highly\ndependent on the quality of the input graph structure. One used approach to\nalleviate this problem is to learn the graph structure instead of relying on a\nmanually designed graph. In this paper, we introduce a new graph structure\nlearning approach using multi-view learning, named MV-GSL (Multi-View Graph\nStructure Learning), in which we aggregate different graph structure learning\nmethods using subspace merging on Grassmann manifold to improve the quality of\nthe learned graph structures. Extensive experiments are performed to evaluate\nthe effectiveness of the proposed method on two benchmark datasets, Cora and\nCiteseer. 
Our experiments show that the proposed method has promising\nperformance compared to single and other combined graph structure learning\nmethods.", + "authors": "Razieh Ghiasi, Hossein Amirkhani, Alireza Bosaghzadeh", + "published": "2022-04-11", + "updated": "2022-04-11", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG" + ], + "category": "Graph AND Structure AND Learning" + } + ], + [ + { + "url": "http://arxiv.org/abs/2402.11354v1", + "title": "Probabilistic Routing for Graph-Based Approximate Nearest Neighbor Search", + "abstract": "Approximate nearest neighbor search (ANNS) in high-dimensional spaces is a\npivotal challenge in the field of machine learning. In recent years,\ngraph-based methods have emerged as the superior approach to ANNS, establishing\na new state of the art. Although various optimizations for graph-based ANNS\nhave been introduced, they predominantly rely on heuristic methods that lack\nformal theoretical backing. This paper aims to enhance routing within\ngraph-based ANNS by introducing a method that offers a probabilistic guarantee\nwhen exploring a node's neighbors in the graph. We formulate the problem as\nprobabilistic routing and develop two baseline strategies by incorporating\nlocality-sensitive techniques. Subsequently, we introduce PEOs, a novel\napproach that efficiently identifies which neighbors in the graph should be\nconsidered for exact distance computation, thus significantly improving\nefficiency in practice. Our experiments demonstrate that equipping PEOs can\nincrease throughput on a commonly utilized graph index (HNSW) by a factor of\n1.6 to 2.5, and its efficiency consistently outperforms the leading-edge\nrouting technique by 1.1 to 1.4 times.", + "authors": "Kejing Lu, Chuan Xiao, Yoshiharu Ishikawa", + "published": "2024-02-17", + "updated": "2024-02-17", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI", + "cs.CV", + "cs.DB", + "cs.DS" + ], + "label": "Original Paper", + "paper_cat": "Graph AND Structure AND Learning", + "gt": "Various types of ANNS approaches have been proposed, encompassing tree-based approaches [9], hashing-based approaches [2, 22, 1, 3, 28], quantization-based approaches [21, 16, 4, 17, 32], learn-to-index-based approaches [18, 23], and graph-based approaches [25, 31, 14, 13]. Among these, graph-based methods are predominantly considered state-of-the-art (SOTA). To enhance the efficiency of graph-based ANNS, optimizations can be broadly categorized into: (1) routing, (2) edge-selection, and (3) quantization, with these optimizations generally being orthogonal to one another. Given our focus on routing, we briefly review relevant studies in this domain. TOGG-KMC [34] and HCNNG [26] employ KD trees to determine the direction of the query, thereby restricting the search to vectors within that specified direction. Despite fast estimation, this tends to yield suboptimal query accuracy, limiting its effectiveness.

Algorithm 1: Graph-based ANNS with routing
Input: query q, # results K, graph index G
Output: K-NN of q
1 R \u2190\u2205; /* an ordered list of results, |R| \u2264efs */
2 P \u2190{ entry node v0 \u2208G }; /* a priority queue */
3 while P \u0338= \u2205do
4   v \u2190P.pop();
5   foreach unvisited neighbor u of v do
6     if |R| < efs then \u03b4 \u2190\u221e;
7     else p \u2190R[efs], \u03b4 \u2190dist(p, q);
8     if RoutingTest(u, v, q, \u03b4) = true then
9       if dist(u, q) < \u03b4 then
10        R.push(u), P.push(u);
11 return ({ R[1], . . . , R[K] })
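A minimal Python rendering of Algorithm 1 with a pluggable routing test follows. The names (`graph`, `dist`, `routing_test`) are ours, not a library API; nodes are assumed to be hashable ids, and the snippet is a sketch for illustration only.

```python
# Minimal sketch of Algorithm 1: beam search over a proximity graph with a
# routing test gating exact distance computations (lines 6-9 of Algorithm 1).
import heapq

def graph_search(graph, dist, routing_test, q, K, efs, entry):
    results = []                               # max-heap via negated distances
    frontier = [(dist(entry, q), entry)]       # min-heap on distance to q
    visited = {entry}
    while frontier:
        _, v = heapq.heappop(frontier)
        for u in graph[v]:
            if u in visited:
                continue
            visited.add(u)
            # delta: distance of the current efs-th result (lines 6-7)
            delta = -results[0][0] if len(results) == efs else float("inf")
            if not routing_test(u, v, q, delta):
                continue                       # skip exact distance computation
            d_u = dist(u, q)
            if d_u < delta:
                heapq.heappush(frontier, (d_u, u))
                heapq.heappush(results, (-d_u, u))
                if len(results) > efs:
                    heapq.heappop(results)     # drop the furthest result
    return sorted((-d, u) for d, u in results)[:K]
```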
FINGER [8] estimates the distance of each neighbor to the query. Specifically, for each node, it generates promising projected vectors locally to establish a subspace, and then applies collision counting, as in SimHash, to approximate the distance in each visited subspace. Learn-to-route [5] learns a routing function, utilizing additional representations to facilitate optimal routing from the starting node to the nearest neighbor.", + "pre_questions": [], + "main_content": "Introduction Nearest neighbor search (NNS) is the process of identifying the vector in a dataset that is closest to a given query vector. This technique has found widespread application in machine learning, with numerous solutions to NNS having been proposed. Given the challenge of finding exact answers, practical efforts have shifted towards approximate NNS (ANNS) for efficiency. While some approaches provide theoretical guarantees for approximate answers \u2013 typically through locality-sensitive hashing (LSH) [7, 11, 3] \u2013 others prioritize faster search speeds for a desired recall level by utilizing quantization [21, 16] or graph indexes [25, 14]. Graph-based ANNS, distinguished by its exceptional empirical performance, has become the leading method for ANNS and is widely implemented in vector search tools and databases. During the indexing phase, graph-based ANNS constructs a proximity graph where nodes represent data vectors and edges connect closely located vectors. In the search phase, it maintains a priority queue and a result list while exploring the graph. Nodes are repeatedly popped from the priority queue until it is empty, computing the distance from their neighbors to the query. Neighbors that are closer than the furthest element in the result list are added to the queue, and the results are accordingly updated. To further improve the performance of graph-based ANNS, practitioners have introduced various empirical optimization techniques, including routing [34, 26, 5, 8], edge selection [13, 31], and quantization [24, 20]. The primary objective is to minimize distance computations during neighbor exploration. However, most of these optimizations are heuristic, based on empirical observations (e.g., over 80% of data vectors are less relevant than the furthest element in the results list and thus should be pruned before exact distance computations [8]), making them challenging to quantitatively analyze. Although analyses elucidate the effectiveness of graph-based ANNS [29, 19], they concentrate on theoretical aspects rather than empirical improvements. In this paper, we investigate routing in graph-based ANNS, aiming to efficiently identify which neighbors should be evaluated for their distance to the query during the search phase. Our objective is to bridge the theoretical and practical aspects of ANNS by providing a theoretical guarantee. While achieving an LSH-like guarantee for graph-based ANNS is challenging, we demonstrate that it is feasible to establish a probabilistic guarantee for exploring a node\u2019s neighbors. Specifically, we introduce probabilistic routing in graph-based ANNS: for a given top node v in the priority queue, an error bound \u03f5, and a distance threshold \u03b4, any neighbor u of v whose distance to the query is less than \u03b4 will have its exact distance computed with a probability of at least 1 \u2212\u03f5.
Addressing the probabilistic routing problem yields several benefits: First, it ensures that ANNS explores the most promising neighbors (less than 20% of all neighbors, as observed in [8]) with high probability and facilitates quantitative analysis of a search algorithm\u2019s effectiveness. Second, by devising probabilistic routing algorithms that accurately and efficiently estimate distances, we can significantly enhance practical efficiency. Third, the theoretical framework ensures consistent performance across different datasets, contrasting with heuristic approaches that may result in high estimation errors and, consequently, lower recall rates. To address the probabilistic routing problem, we initially integrate two foundational algorithms from existing research \u2013 SimHash [7] and (reverse) CEOs [27] \u2013 into graph-based ANNS. Subsequently, we introduce a novel routing algorithm, namely Partitioned Extreme Order Statistics (PEOs), characterized by the following features: (1) PEOs utilizes space partitioning and random projection techniques to estimate a random variable which represents the angle between each neighbor and the query vector. By aggregating projection data from multiple subspaces, we substantially reduce the variance of the estimated random variable\u2019s distribution, thereby enhancing the accuracy of neighbor-to-query distance estimations. (2) Through comprehensive analysis, we show that PEOs addresses the probabilistic routing problem within a user-defined error bound \u03f5 (\u03f5 \u22640.5). The algorithm introduces a parameter L, denoting the number of subspaces in partitioning. An examination of L\u2019s influence on routing enables us to identify an optimal parameter configuration for PEOs. Comparative analysis with baseline algorithms reveals that an appropriately selected L value yields a lower variance for PEOs\u2019 estimated random variable than that obtained via SimHash-based probabilistic routing, and that reverse-CEOs-based probabilistic routing is a special case of PEOs with L = 1. (3) The implementation of PEOs is optimized using pre-computed data and lookup tables, facilitating fast and accurate estimations. The use of SIMD further enhances processing speed, allowing for the simultaneous estimation of 16 neighbors and leveraging data locality, a significant improvement over conventional methods that require accessing raw vector data stored disparately in memory. Our empirical validation encompasses tests on six publicly accessible datasets. By integrating PEOs with HNSW [25], the predominant graph index for ANNS, we achieve a reduction in the necessity for exact distance computations by 70% to 80%, thereby augmenting queries per second (QPS) rates by 1.6 to 2.5 times under various recall criteria. Moreover, PEOs demonstrates superior performance to FINGER [8], a leading-edge routing technique, consistently enhancing efficiency by 1.1 to 1.4 times while reducing space requirements. Definition 3.1 (Nearest Neighbor Search (NNS)). Given a query vector q \u2208Rd and a dataset of vectors O, find the vector o\u2217\u2208O such that dist(q, o\u2217) is the smallest. For the distance function dist(\u00b7, \u00b7), two widely utilized metrics are \u21132 distance and angular distance. Maximum inner product search (MIPS), closely related to NNS, aims to identify the vector that yields the maximum inner product with the query vector. We elaborate on the extension to MIPS in Appendix C.1 and plan to assess its performance in future work.
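As a point of reference for Definition 3.1, a brute-force oracle is a few lines of NumPy. This is a sketch useful only for computing ground truth (e.g., when measuring recall), not a practical search method:

```python
# Brute-force exact NNS per Definition 3.1 (l2 metric); ground-truth oracle.
import numpy as np

def exact_nns(O, q):
    # O: (n, d) dataset matrix, q: (d,) query; returns the index of o*
    return int(np.argmin(np.linalg.norm(O - q, axis=1)))
```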
There is significant interest not only in identifying a single nearest neighbor but also in locating the top-K nearest neighbors, a task referred to as K-NN search. It is a prevailing view that computing exact NNS results poses a considerable challenge, whereas determining approximate results suffices for addressing many practical applications [30]. Notably, many SOTA ANNS algorithms leverage a graph index, where each vector in O is linked to its nearby vectors. The construction of a graph index can be approached in various ways, e.g., through a KNN graph [12], HNSW [25], NSG [14], and the improved variant NSSG [13]. Among these, HNSW stands out as the most extensively adopted model, implemented in many ANNS platforms such as Faiss and Milvus. Given a graph index G = (V, E) built upon O, traversal of this graph enables the discovery of ANNS results, as delineated in Algorithm 1. The search initiates from an entry node v0 \u2208G, maintaining R, an ordered list that contains no more than efs (efs \u2265K) \u2013 a list size parameter \u2013 results identified thus far. Neighbors of the entry node in the graph are examined against q for proximity and added to a priority queue if they are closer to the query than the most distant element in R or if R is not full. Nodes are popped from the priority queue to further explore their adjacent nodes in the graph, until the priority queue becomes empty. It is noteworthy that practical search operations may extend beyond those depicted in Algorithm 1, e.g., by employing pruning strategies to expedite the termination of the search process [25]. Algorithms designed for graph-based ANNS with angular distance can be adapted to accommodate \u21132 distance, since calculating the \u21132 distance between q and o \u2208O merely involves determining the angle between them during graph traversal: $\ell_2(q, o)^2 = \|q\|^2 + \|o\|^2 - 2q^\top o$, where $\|q\|$ is fixed and $\|o\|$ can be pre-computed.
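The identity above is easy to sanity-check in code. The helper below is a sketch with our own naming, showing that with $\|o\|$ precomputed at indexing time and $\|q\|$ computed once per query, ranking by \u21132 distance during traversal reduces to one inner product per neighbor:

```python
# l2-to-inner-product reduction: squared l2 distance from precomputed norms.
import numpy as np

def l2_sq_from_ip(q_norm_sq, o_norm_sq, ip):
    # ||q - o||^2 = ||q||^2 + ||o||^2 - 2 q^T o
    return q_norm_sq + o_norm_sq - 2.0 * ip

q, o = np.random.randn(128), np.random.randn(128)
assert np.isclose(l2_sq_from_ip(q @ q, o @ o, q @ o),
                  np.linalg.norm(q - o) ** 2)
```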
While naive graph exploration entails computing the exact distance for all neighbors, a routing test can be applied to assess whether a neighbor warrants exact distance computation. An efficient routing algorithm can substantially enhance the performance of graph-based ANNS. We study routing algorithms with the following probability guarantee. Definition 3.2 (Probabilistic Routing). Given a query vector q, a node v in the graph index, an error bound \u03f5, and a distance threshold \u03b4, for an arbitrary neighbor u of v such that dist(u, q) < \u03b4, if a routing algorithm returns true for u with a probability of at least 1 \u2212\u03f5, then the algorithm is deemed to be (\u03b4, 1 \u2212\u03f5)-routing. Given our interest in determining whether a neighbor has the potential to refine the temporary results, our focus narrows to the case when \u03b4 equals the distance between q and the most distant element in the result list R, with |R| = efs (Lines 6 \u2013 7, Algorithm 1). 4 Baseline Algorithms To devise baseline algorithms for probabilistic routing, we adapt SimHash [7] and CEOs [27] for the routing test; these were designed for approximation with a recall guarantee for ANNS and MIPS, respectively. 4.1 SimHash Test SimHash, a classical random projection-based LSH method for approximating angular distance, has the following result [7]. Lemma 4.1. (SimHash) Given u, q, and m random vectors $\{a_i\}_{i=1}^{m} \sim N(0, I_d)$, the angle $\theta$ between u and q can be estimated as $\hat{\theta} = \frac{\pi}{m} \sum_{i=1}^{m} \big[\operatorname{sgn}(u^\top a_i) \neq \operatorname{sgn}(q^\top a_i)\big]$. (1) Based on the above lemma, we design a routing test: SimHash Test: $\#\mathrm{Col}(u, q) \geq T^{\mathrm{SimHash}}_{\epsilon}(u, q, \delta, m)$ (2) where $\#\mathrm{Col}$ denotes the collision number of the above random projection along $\{a_i\}_{i=1}^{m}$, and $T^{\mathrm{SimHash}}_{\epsilon}(u, q, \delta, m)$ denotes a threshold determined by \u03f5 and \u03b4. A neighbor u of v passes the routing test iff. it satisfies the above condition. By careful setting of the threshold, we can obtain the desired probability guarantee. Besides probabilistic routing, SimHash has also been used in FINGER [8] to speed up graph-based ANNS, serving as one of its building blocks but utilized in a heuristic way. It can be seen that the result of the above SimHash test is independent of v. This means that if a node u fails in a test, it will never be inserted into the result set or the priority queue. Since this will compromise the recall of ANNS, we design the following remedy by regarding the neighbors of v as residuals w.r.t. v: for each neighbor u of v, let e = u \u2212v; this residual can be associated with the edge e from v to u in the graph index. e is then used instead of u in the SimHash test. As such, the routing test becomes dependent on v, and a threshold $T^{\mathrm{SimHash}}_{\epsilon}(u, v, q, \delta, m)$ can be derived by applying Hoeffding\u2019s inequality to the binomial distribution. Hence a neighbor u may fail to pass the test w.r.t. v but succeed in the test w.r.t. another node v\u2032 in the graph index. Moreover, to model the angle between e and q in the routing test, we normalize q to q\u2032 (and optionally, e to e\u2032) to simplify computation. We discuss algorithms in the context of the above remedy hereafter.
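A hedged sketch of the residual SimHash test follows. The paper only states that the threshold can be derived from Hoeffding's inequality; the concrete form below (expected sign agreements at the boundary angle minus a Hoeffding slack) is one natural instantiation and should be read as our assumption, not the authors' exact constant:

```python
# Sketch of the residual SimHash routing test (Section 4.1).
import numpy as np

def signs(x, A):
    return (A @ x) > 0                       # A: (m, d) Gaussian projections

def simhash_test(e, q, A, theta_delta, eps):
    # e = u - v (residual); theta_delta: angle corresponding to threshold delta
    m = A.shape[0]
    collisions = int(np.sum(signs(e, A) == signs(q, A)))   # agreeing signs
    expected = m * (1.0 - theta_delta / np.pi)   # E[#Col] if angle(e,q)=theta_delta
    slack = np.sqrt(0.5 * m * np.log(1.0 / eps)) # Hoeffding deviation for prob. eps
    return collisions >= expected - slack        # pass => compute exact distance
```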
4.2 RCEOs Test In the realm of LSH, Andoni et al. proposed Falconn [3] for angular distance, whose basic idea is to find the closest or furthest projected vector to the query and record that vector as a hash value, leading to better search performance than SimHash. Pham and Liu [28] employed Concomitants of Extreme Order Statistics (CEOs) [27] to record the minimum or maximum projection value, further improving the performance of Falconn. By swapping the roles of query and data vectors in CEOs, we obtain Reverse CEOs (RCEOs), which can be applied to graph-based ANNS with probabilistic routing: Lemma 4.2. (RCEOs) Given two normalized vectors e\u2032, q\u2032, and m random vectors $\{a_i\}_{i=1}^{m} \sim N(0, I_d)$, and m is sufficiently large, assuming that $a_1 = \arg\max_{a_i} |e'^\top a_i|$, we have the following result: $q'^\top a_1 \sim N\big(\operatorname{sgn}(e'^\top a_1)\, e'^\top q' \sqrt{2 \ln m},\ 1 - (e'^\top q')^2\big)$. (3) Despite being an asymptotic result, it has been shown that m does not need to be very large (\u2265128) to ensure sufficient closeness to the objective normal distribution [27]. Based on Lemma 4.2, we design a routing test: RCEOs Test: $q'^\top a_1 \geq T^{\mathrm{RCEOs}}_{\epsilon}(u, v, q, \delta, m)$ (4) where $a_1 = \arg\max_{a_i} |e'^\top a_i|$ and $T^{\mathrm{RCEOs}}_{\epsilon}(u, v, q, \delta, m)$ denotes a threshold related to \u03f5 and \u03b4. 5 Partitioned Extreme Order Statistics (PEOs) 5.1 Space Partitioning In PEOs, the data space is partitioned into L subspaces. Let M \u2286Rd be the original data space. We decompose M into L orthogonal d\u2032-dimensional subspaces M1, M2, . . . , ML, d\u2032 = d/L. When L > 1, we can significantly decrease the variance of the normal distribution in RCEOs (Eq. (3)), hence delivering a better routing test. Specifically, (1) the RCEOs test is a special case of the PEOs test with L = 1, and (2) by choosing an appropriate L (L > 1), the PEOs test outperforms the SimHash test while the RCEOs test cannot (Appendix B). 5.2 PEOs Test Following the space partitioning, for each neighbor u of v, e is partitioned into [e1, e2, . . . , eL], where ei is the sub-vector of e in Mi. The PEOs test consists of the following steps. (1) (Orthogonal Decomposition) We decompose e as e = ereg + eres such that ereg \u22a5eres and the direction of ereg is determined as follows: $\frac{e_{\mathrm{reg}}}{\|e_{\mathrm{reg}}\|} = \Big[\frac{e_1}{\sqrt{L}\,\|e_1\|}, \frac{e_2}{\sqrt{L}\,\|e_2\|}, \ldots, \frac{e_L}{\sqrt{L}\,\|e_L\|}\Big]$. (5) We call ereg the regular part of e and eres the residual part of e. Besides, we introduce two weights wreg and wres such that $w_{\mathrm{reg}} = \|e_{\mathrm{reg}}\|/\|e\|$ and $w_{\mathrm{res}} = \|e_{\mathrm{res}}\|/\|e\|$. (2) (Generating Projected Vectors) In each Mi, we independently generate m projected vectors $\{a^i_j\}_{j=1}^{m}$, where $a^i_j \sim N(0, I_{d' \times d'})$. In the original space M, we independently generate m projected vectors $\{b_j\}_{j=1}^{m}$ such that $b_j \sim N(0, I_{d \times d})$. (3) (Collection of Extreme Values) Across the L subspaces and the residual part, we collect L + 1 signed extreme indices that yield the greatest absolute inner products with the projected vectors: in each $M_i$, $e[i] = \operatorname{sgn}(e_i^\top a^i_j)\, j$ where $j = \arg\max_j |e_i^\top a^i_j|$, and $e[0] = \operatorname{sgn}(e_{\mathrm{res}}^\top b_j)\, j$ where $j = \arg\max_j |e_{\mathrm{res}}^\top b_j|$. (4) (CDF of Normal Distribution) Let $N^{e,x}_{\min}$ be the following normal distribution associated with e: $N^{e,x}_{\min} = N\Big(x\sqrt{2L\ln m},\ \frac{w_{\mathrm{reg}}^2 + L w_{\mathrm{res}}^2 - L x^2}{L+1}\Big)$ (6) where 0 < x < 1. Let $F_{e,x}$ be the CDF of $N^{e,x}_{\min}$. We define $F^{-1}_e(x, z)$ such that $F_{e,x}(F^{-1}_e(x, z)) = z$, where 0 < z \u22640.5. Note that $F^{-1}_e$ is well-defined since $F_{e,x}(z)$ is a monotone function of x when z is fixed. When setting z = \u03f5, we write $F^{-1}_{e,\epsilon}(x) = F^{-1}_e(x, \epsilon)$. (5) (Query Projection) Given query q, we normalize it to q\u2032 and compute the inner products with the projected vectors to obtain two values H1(e) and H2(e) w.r.t. e: $H_1(e) = \sum_i \operatorname{sgn}(e[i])\, q'^{\top}_i a^i_{|e[i]|}$, (7) $H_2(e) = \operatorname{sgn}(e[0])\, q'^{\top} b_{|e[0]|}$. (8) (6) (Routing Test) With H1(e) and H2(e), we compute Ar(e) as follows for \u21132 distance: $A_r(e) = \frac{\|u\|^2/2 - r - v^\top q}{\|q\|\,\|e\|}$ (9) where $r = \|p\|^2/2 - p^\top q$ and p is the furthest element to q in the temporary result list R. It is easy to see that $\delta^2 - 2r = \|q\|^2$ when \u03b4 captures the \u21132 distance from q to the furthest element in R. For angular distance, we remove the norms $\|u\|^2/2$ and $\|p\|^2/2$ in r to obtain Ar(e). With Ar(e), we design a routing test for u: If Ar(e) \u22651, it returns false. If Ar(e) \u22640, it returns true. In case 0 < Ar(e) < 1, we compute H(e) and Tr(e) as follows: $H(e) = w_{\mathrm{reg}} H_1(e) + \sqrt{L}\, w_{\mathrm{res}} H_2(e)$, (10) $T_r(e) = F^{-1}_{e,\epsilon}(A_r(e))$. (11) Then, the test returns true iff. the following condition is met: $H(e) - T_r(e) \geq 0$. (12)
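The index-time side of the PEOs test (Steps (1)-(3)) can be sketched as follows. Variable names are ours, the code assumes d = L * d_sub with non-zero sub-vectors, and it illustrates the definitions above rather than the authors' implementation:

```python
# Index-time preprocessing of one edge e = u - v for the PEOs test.
import numpy as np

def preprocess_edge(e, A_sub, B):
    # A_sub: (L, m, d_sub) per-subspace Gaussian projections; B: (m, d) for e_res
    L, m, d_sub = A_sub.shape
    parts = e.reshape(L, d_sub)                        # [e_1, ..., e_L]
    sub_norms = np.linalg.norm(parts, axis=1)
    # unit direction of the regular part, Eq. (5)
    reg_dir = (parts / (np.sqrt(L) * sub_norms[:, None])).reshape(-1)
    e_reg = (e @ reg_dir) * reg_dir                    # orthogonal projection
    e_res = e - e_reg                                  # residual part
    w_reg = np.linalg.norm(e_reg) / np.linalg.norm(e)
    w_res = np.linalg.norm(e_res) / np.linalg.norm(e)
    ids, sgns = [], []                                 # signed extreme indices
    for i in range(L):
        proj = A_sub[i] @ parts[i]
        j = int(np.argmax(np.abs(proj)))
        ids.append(j); sgns.append(float(np.sign(proj[j])))
    proj_res = B @ e_res
    j0 = int(np.argmax(np.abs(proj_res)))
    return ids, sgns, j0, float(np.sign(proj_res[j0])), w_reg, w_res
```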
[Figure 1: Illustration of the PEOs test. There are n neighbors of v. \u03b81, . . . , \u03b8n denote the angles between e1, . . . , en and q, respectively. u2 and un\u22121 pass the test (indicated by \u201c+\u201d). We access their raw vectors from the dataset and compute their distances to q.]

Algorithm 2: PEOs Test
Input: query q, edge e = (v, u), threshold r, projected vectors { a^i_j } and { b_j } (1 \u2264i \u2264L, 1 \u2264j \u2264m), quantile table Q
Output: whether u passes the routing test
1 Compute Ar(e);
2 if Ar(e) \u22651 then return (false);
3 if Ar(e) \u22640 then return (true);
4 Normalize q to q\u2032;
5 Build a projection table TI for all $q'^{\top}_i a^i_j$ and $q'^{\top} b_j$; /* only once and used throughout the search */
6 Compute H(e) with TI;
7 Compute Tr(e) with Q;
8 if H(e) \u2265Tr(e) then return (true);
9 else return (false);

5.3 Implementation of PEOs In the PEOs test, Steps (1), (2), and (3) can be pre-computed during index construction. Since space partitioning may result in unbalanced norms in subspaces, we can optionally permute the dimensions so that the norms of all ei\u2019s (1 \u2264i \u2264L, e \u2208E) in the graph are as close to each other as possible (Appendix C.2). Such permutation does not affect the topology of the graph or the theoretical guarantee of PEOs. Steps (4), (5), and (6) are computed during the search. In practice, H(e) and Tr(e) can be computed efficiently because (i) the inner products $q'^{\top}_i a^i_j$ and $q'^{\top} b_j$ can be computed for q only once and stored in a projection table, and (ii) although the online computation of $F^{-1}_{e,\epsilon}(x)$ is costly, we can build a lookup table containing the quantiles corresponding to the different values of the variance in $N^{e,x}_{\min}$, since such variance is bounded. In particular, from the lookup table, we can choose a quantile slightly smaller than the true quantile by employing the monotonicity, which does not affect the correctness of the probability guarantee. An illustration of the implementation of the PEOs test is depicted in Figure 1. The pseudo-code is given in Algorithm 2. Another optimization is the use of SIMD, where 16 edges can be processed at a time. This significantly accelerates ANNS, because the raw vectors of neighbors are stored separately in memory and loading them into the CPU is costly.
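Mirroring Algorithm 2, a query-time sketch might look as follows. Here `proj_table`/`res_table` play the role of the projection table TI (built once per query) and `quantile` stands in for the precomputed quantile lookup table Q approximating $F^{-1}_{e,\epsilon}$; all names are our assumptions:

```python
# Query-time PEOs test (cf. Algorithm 2); edge_meta is produced at index time.
from math import sqrt

def peos_test(edge_meta, proj_table, res_table, a_r, quantile, L):
    ids, sgns, j0, s0, w_reg, w_res = edge_meta
    if a_r >= 1.0:          # cannot beat the current efs-th result
        return False
    if a_r <= 0.0:          # always promising
        return True
    h1 = sum(sgns[i] * proj_table[i][ids[i]] for i in range(L))  # Eq. (7)
    h2 = s0 * res_table[j0]                                      # Eq. (8)
    h = w_reg * h1 + sqrt(L) * w_res * h2                        # Eq. (10)
    t_r = quantile(w_reg, w_res, a_r)                            # Eq. (11)
    return h >= t_r                                              # Eq. (12)
```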
6 Analysis of PEOs 6.1 Probability Guarantee of PEOs To analyze PEOs, we assume $\|e\| = \|q\| = 1$ and let \u03b8 denote the angle between them. By the independence of projected vectors in different subspaces and the result in Lemma 4.2, we have $H_1(e) \sim N^{\theta}_{e,q}$, where $N^{\theta}_{e,q}$ is defined as follows: $N^{\theta}_{e,q} = N\big(\eta \sum_i \|q_i\|\cos\theta_i,\ 1 - \sum_i \|q_i\|^2\cos^2\theta_i\big)$ (13) where $\eta = \sqrt{2 \ln m}$ and $\theta_i$ denotes the angle between $e_i$ and $q_i$. Next, we analyze the relationship between \u03b8 and $N^{\theta}_{e,q}$. To this end, we introduce the following definition. Definition 6.1. We define two partial orders $\prec$ and $\preceq$ such that, for two normal distributions $N(\mu_1, \sigma_1^2)$ and $N(\mu_2, \sigma_2^2)$, $N(\mu_1, \sigma_1^2) \prec N(\mu_2, \sigma_2^2)$ iff. $\mu_1 \leq \mu_2$ and $\sigma_1^2 \geq \sigma_2^2$, and $N(\mu_1, \sigma_1^2) \preceq N(\mu_2, \sigma_2^2)$ iff. $\mu_1 = \mu_2$ and $\sigma_1^2 \geq \sigma_2^2$. With the notations defined above, we want to find an appropriate normal distribution $\tilde{N}^{\theta}_{e,q}$ such that $\tilde{N}^{\theta}_{e,q} \prec N^{\theta}_{e,q}$ ($\tilde{N}^{\theta}_{e,q} \preceq N^{\theta}_{e,q}$ is more favorable) for all adequate pairs (e, q). We define $\tilde{N}^{\theta}_{e,q}$ as follows, where $e_{\min} = \min_{1 \leq i \leq L} \|e_i\|$ and $e_{\max} = \max_{1 \leq i \leq L} \|e_i\|$: $\tilde{N}^{\theta}_{e,q} = N\Big(\frac{(\cos\theta + (e_{\min} - e_{\max}) \sum_i \|q_i\|)\,\eta}{e_{\max}},\ 1 - \cos^2\theta\Big)$. (14) Then, we have the following lemma. Lemma 6.2. $\tilde{N}^{\theta}_{e,q} \prec N^{\theta}_{e,q}$. When $\|e_1\| = \cdots = \|e_L\|$, $\tilde{N}^{\theta}_{e,q} = N(\cos\theta \sqrt{2L \ln m},\ 1 - \cos^2\theta) \preceq N^{\theta}_{e,q}$. From Lemma 6.2, we can see that, for the case that $e_{\max} - e_{\min}$ is large, we can only get a loose lower bound of $\mathbb{E}[N^{\theta}_{e,q}]$ due to the impact of the unknown $\theta_i$'s (although it is possible to get a strict lower bound by solving a linear programming problem, the computation is too costly for a fast test), while the estimation of $\mathbb{E}[N^{\theta}_{e,q}]$ is always accurate when $\|e_1\| = \cdots = \|e_L\|$ holds. This explains why we decompose the vector e into ereg and eres and deal with them separately. Then, we have the following theorem for the probability guarantee of PEOs. Theorem 6.3. (1) (Probabilistic Guarantee) Suppose that m is sufficiently large. The PEOs test is (\u03b4, 1 \u2212\u03f5)-routing. (2) (False Positives) Consider a neighbor u whose distance to q is at least \u03b4, and suppose $\cos\theta \leq \tilde{F}^{-1}_{\tilde{\theta}}(\epsilon)$ ($\epsilon \leq 0.5$), where $\cos\tilde{\theta} = A_r(e)$ and $\tilde{F}_{\theta}$ is the CDF of the distribution $N^{e,\cos\theta}_{\min} / \sqrt{2L \ln m}$. Then the probability that u passes the PEOs test is at most $1 - \tilde{F}_{\theta}(\tilde{F}^{-1}_{\tilde{\theta}}(\epsilon))$. (3) (Variance of Estimation and Comparison to RCEOs) Suppose that $H(e)/\sqrt{2L \ln m} \sim N_H$, where $N_H$ is an unknown normal distribution, and let $N^{\theta}_{\mathrm{opt}} = N(\cos\theta, \sin^2\theta/(2L \ln m))$. When $w_{\mathrm{res}} \leq 1/(L+1)$, $-\frac{L+2}{L(L+1)^2 \ln m} \leq \mathrm{Var}[N_H] - \mathrm{Var}[N^{\theta}_{\mathrm{opt}}] \leq \frac{1}{(L+1)^2 \ln m}$ (15) where \u03b8 is the (unknown) angle between e and q. Remarks. (1) The first statement of Theorem 6.3 guarantees that promising neighbors can be explored with high confidence. (2) The second statement shows that the routing efficiency is determined by the variance of $N^{e,\cos\theta}_{\min} / \sqrt{2L \ln m}$. Such variance is expected to be as small as possible, since a smaller variance leads to a smaller probability of a false positive. (3) It is easy to see that, for RCEOs with mL projected vectors, the distribution associated with $\cos\theta$ is $N^{\theta}_{\mathrm{opt}}$. For a comparison, we use $N_H$ to denote the distribution of $H(e)/\sqrt{2L \ln m}$. Clearly, $\mathbb{E}[N_H] = \mathbb{E}[N^{\theta}_{\mathrm{opt}}]$. On the other hand, the third statement shows that, if $w_{\mathrm{res}}$ is a small value, the variances of these two distributions are very close.
In this situation, the effect of PEOs is close to that of RCEOs with mL projected vectors, which explains why PEOs can perform much better than RCEOs empirically. 6.2 Impact of L Based on the second and third statements of Theorem 6.3, $w_{\mathrm{reg}}$ and $w_{\mathrm{res}}$ are critical values that control the routing efficiency. First, we want to show that $w_{\mathrm{reg}}$ is generally close to 1. To this end, we calculate $\mathbb{E}[w_{\mathrm{reg}}]$ under the assumption that the vector e obeys an isotropic distribution. Because prevalent graph indexes (e.g., HNSW) diversify the selected edges in the indexing phase, this assumption is not very strong for real datasets. Besides, we can permute the dimensions (Section 5.3) to make e follow an isotropic distribution. Let $\bar{w}_{\mathrm{reg}}(L, d)$ denote $\mathbb{E}_{e \sim U(S^{d-1})}[w_{\mathrm{reg}}]$ (the notation reflects that $w_{\mathrm{reg}}$ is affected by L and d). Then, we have the following lemma ($d' > 3$). Lemma 6.4. Given L, d\u2032, and d\u2032 = d/L, $\bar{w}_{\mathrm{reg}}(L, d) \geq \frac{(d'-1)\sqrt{2Ld - 3L}}{(d-1)\sqrt{2d' + 2\sqrt{3} - 6}}$. (16) As an example, $\bar{w}_{\mathrm{reg}}(L, d) \geq 0.978$ when d = 128 and L = 8. For other reasonable choices of L w.r.t. d, we can also obtain a $\bar{w}_{\mathrm{reg}}(L, d)$ close to 1. Next, we analyze the relationship between L and $w_{\mathrm{res}}$ more accurately. To this end, we consider the distribution $N^{e,\cos\theta}_{\min} / \sqrt{2L \ln m}$. Its expected value is $\cos\theta$, and its variance, which is expected to be as small as possible, as shown in Theorem 6.3, is of great interest to us. For its variance, we particularly focus on the remaining part $J_{\mathrm{rel}}$ obtained by removing the part regarding $\cos\theta$, which is a value close to 0 for most e's. Specifically, $J_{\mathrm{rel}}$ is defined as follows: $J_{\mathrm{rel}}(L) = \frac{1 + (L-1)\,\mathbb{E}[w_{\mathrm{res}}^2(L, d)]}{L}$. (17) On the other hand, for $N^{\theta}_{\mathrm{opt}}$, we have $J_{\mathrm{opt}}(L) = 1/L$, which corresponds to RCEOs with mL projected vectors. Due to the effect of $w_{\mathrm{res}}$, there is a difference between $J_{\mathrm{opt}}(L)$ and $J_{\mathrm{rel}}(L)$. Thus, an appropriate value of L should satisfy the following two requirements: (1) (Major) $J_{\mathrm{rel}}(L)$ should be as small as possible. (2) (Minor) $w_{\mathrm{res}}$ is close to $1/(L+1)$. Here, the first requirement is to improve the routing efficiency of PEOs, which is more important, and the second one is to measure the deviation in the condition of the third statement (Theorem 6.3). In Figure 3, under the isotropic-distribution assumption, we plot the curves of $J_{\mathrm{rel}}(L)$, $J_{\mathrm{opt}}(L)$, and $\Delta = |w_{\mathrm{res}} - 1/(L+1)|$ for d = 128, 384, and 960, respectively. It can be seen that (1) $w_{\mathrm{res}}$ increases as L grows, which means that L should not be overly large because $J_{\mathrm{rel}}(L') > J_{\mathrm{rel}}(L)$ may occur when $L' > L$, and (2) due to the closeness of $J_{\mathrm{opt}}$ and $J_{\mathrm{rel}}$, the effect of PEOs is very close to that of RCEOs with mL projected vectors when L is small (e.g., L \u22648). Based on the analysis above, we set L to 8, 15, and 16 for these three dimensions. By varying the value of L, the performance of PEOs on real datasets with these dimensions is consistent with our analysis, which will be elaborated in Section 7.3.
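The selection rule for L can be explored numerically. The sketch below estimates $J_{\mathrm{rel}}(L)$ by Monte Carlo under the isotropic assumption of Section 6.2 and compares it against $J_{\mathrm{opt}}(L) = 1/L$; the paper evaluates these curves analytically (Figure 3), so our sampling-based estimate is an approximation:

```python
# Monte-Carlo estimate of J_rel(L) vs. J_opt(L) for e ~ U(S^{d-1}).
import numpy as np

def j_curves(d, L, trials=2000, rng=np.random.default_rng(0)):
    e = rng.standard_normal((trials, d))
    e /= np.linalg.norm(e, axis=1, keepdims=True)   # uniform on the sphere
    parts = e.reshape(trials, L, d // L)
    sub_norms = np.linalg.norm(parts, axis=2)       # ||e_i||
    # ||e_reg|| = (sum_i ||e_i||) / sqrt(L) for a unit-norm e
    w_reg = sub_norms.sum(axis=1) / np.sqrt(L)
    w_res_sq = np.clip(1.0 - w_reg ** 2, 0.0, None)
    j_rel = (1.0 + (L - 1) * w_res_sq.mean()) / L   # Eq. (17)
    return j_rel, 1.0 / L

for L in (1, 2, 4, 8, 16, 32):
    print(L, j_curves(128, L))
```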
6.3 Computational Cost As for the time cost of PEOs, computing H(e) requires at most two multiplications, L + 1 additions, and L + 1 reads. For Tr(e), we need one addition, one read, two subtractions, and two divisions, because all the inner products and $F^{-1}_{e,\epsilon}(\cdot)$ can be obtained directly from the pre-computed tables. Empirically, the number of exact distance computations can be reduced by 70% \u2013 80% by equipping graph-based ANNS with PEOs (Section 7.6). As for the space cost of PEOs, for every edge, we need L + 1 bytes for vector IDs, three bytes for weights, and two or four bytes for $\|u\|^2/2$ and $\|e\|$, where scalar quantization is used. Such space cost is affordable for 10M-scale and smaller datasets on a single PC (Section 7.7). For 100M-scale and larger datasets, we can slightly sacrifice efficiency to significantly reduce space consumption (Section 7.8).
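A quick back-of-the-envelope for the per-edge overhead follows. It is a sketch: `avg_degree` is an assumption you would read off your own graph index, and the interpretation of "two or four bytes" as the total for both stored norms is our reading of the text:

```python
# Rough PEOs index overhead: (L+1) ID bytes + 3 weight bytes + norm bytes/edge.
def peos_overhead_bytes(num_nodes, avg_degree, L, norm_bytes=2):
    per_edge = (L + 1) + 3 + norm_bytes
    return num_nodes * avg_degree * per_edge

# e.g., a 10M-node graph with average degree 32 and L = 15:
print(peos_overhead_bytes(10_000_000, 32, L=15) / 2**30, "GiB")
```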
7 Experiments All the experiments were performed on a PC with an Intel(R) Xeon(R) Gold 6258R CPU @ 2.70GHz. All the compared methods were implemented in C++, with 64 threads for indexing and a single CPU for searching, following the standard setup in ANN-Benchmarks [6]. 7.1 Datasets and Methods We used six million-scale datasets. Their statistics can be found in Table 1. Although PEOs can be used for any graph index for ANNS, we implemented PEOs on HNSW because (1) HNSW is the most widely used state-of-the-art graph index, available in many ANNS tools, and (2) the update of HNSW is much easier than that of other graph indexes. Besides, a comprehensive evaluation [33] shows that no graph index can always outperform the others in ANNS.

Table 1: Data statistics.
Dataset  | Size (|O|)  | Dim. (d) | Type  | Metric
GloVe200 | 1,183,514   | 200      | Text  | angular
GloVe300 | 2,196,017   | 300      | Text  | \u21132
DEEP10M  | 9,990,000   | 96       | Image | angular
SIFT10M  | 10,000,000  | 128      | Image | \u21132
Tiny5M   | 5,000,000   | 384      | Image | \u21132
GIST     | 1,000,000   | 960      | Image | \u21132

Besides PEOs, we select five competitors: (vanilla) HNSW, NSSG [13], Glass [35], FINGER [8], and RCEOs. The reasons for the selection are: (1) NSSG is another efficient graph index, proposed to improve the performance of NSG [14]; (2) Glass is one of the most competitive open-source ANNS implementations; (3) FINGER is a SOTA routing technique that also works on HNSW and has been shown to outperform SimHash+HNSW (a baseline in Section 4.1); and (4) RCEOs is a baseline probabilistic routing algorithm. We use the following parameter setup for the competitors: (1) HNSW: M = 32, efc = 2000 for the two GloVe datasets and efc = 1000 for the other datasets. We also used this setting for the HNSW index of FINGER, RCEOs, and PEOs. (2) NSSG: (L, R, C) was set to (100, 50, 60) on SIFT10M, (500, 70, 60) on GIST, and (500, 60, 60) on the other datasets. The parameter K of the prepared KNN graph was set to 400. (3) Glass: For DEEP10M, we used Glass (+HNSW) since Glass (+NSG) failed to finish the index construction. For the other datasets, we used Glass (+NSG) since it worked better than Glass (+HNSW), especially for high recall rates. R was set to 32, and L was set to an experimentally optimal value in [200, 2000]. (4) FINGER (+HNSW): All parameters were set to the recommended values in its source code. In particular, the dimension of the subspace was set to 64. (5) RCEOs (+HNSW): The only difference from PEOs is that L = 1 in RCEOs. (6) PEOs (+HNSW): Based on the analysis in Section 6.2, L was set to 8, 8, 10, 15, 16, and 20 on the six datasets sorted by ascending order of dimension. \u03f5 was set to 0.2 and m = 128 so that every vector ID can be encoded in one byte. While there are many other SOTA ANNS solutions, they are not compared here because (1) the training time of learn-to-route [5] is very long on million-scale and larger datasets; (2) Adsampling [15] is only effective in environments without SIMD optimization; (3) Falconn++ [28] and BATLearn [23] are designed for multi-threaded environments; (4) ScaNN [17] does not outperform FINGER on million-scale datasets; and (5) NGTQG [20] is a vector quantization technique for graph-based ANNS orthogonal to our routing and is generally less competitive than Glass in high-recall (\u22650.5) settings [6]. 7.2 Queries Per Second (QPS) Evaluation Figure 2 reports the recall-QPS curves of all the competitors on the six datasets. We have the following observations. (1) On all the datasets, the winner is PEOs, trailed by FINGER in most cases. This demonstrates that our routing is effective. In particular, PEOs accelerates HNSW by 1.6 to 2.5 times, and it is faster than FINGER by 1.1 to 1.4 times. (2) The improvement of PEOs over HNSW is more significant on the datasets with more dimensions, because computing the exact distance to the query is more costly on these datasets. (3) On GloVe200, FINGER and Glass report very low recall (< 30%), while the improvement of PEOs over HNSW is still obvious under high-recall settings. Specifically, FINGER is also based on routing, but it incurs unbounded estimation errors, and the errors might be very large on GloVe200, resulting in many false negatives and rendering the graph under-explored. This result evidences the importance of the probability guarantee of routing.

[Figure 2: Recall-QPS evaluation on the six datasets, K = 100; panels (a)\u2013(f): GloVe200-angular, GloVe300-\u21132, DEEP10M-angular, SIFT10M-\u21132, Tiny5M-\u21132, GIST-\u21132 (axes: Recall@100 vs. QPS). The recalls of Glass and FINGER are lower than 30% on GloVe200 and thus not shown.]

7.3 Effect of Space Partition Size L We evaluate the effect of L in PEOs and report the results in Figure 3 on the SIFT10M, Tiny5M, and GIST datasets. On each dataset, theoretical results (top) are accompanied by their empirical counterparts (bottom). The empirical results are consistent with our analysis in Section 6.2. That is, the smaller Jrel is, the better the performance. We also have the following observations. (1) The performance under L > 1 is obviously better than that under L = 1, showcasing the effectiveness of space partitioning. (2) An L of 32 leads to worse performance than an L of 8 on the SIFT10M dataset, because the variance is larger when L = 32. (3) When L > 16 on the GIST dataset, the performance tends to be stable since the variance barely changes when L exceeds 16. 7.4 Effect of Error Bound \u03f5 We vary the value of \u03f5 and report the results in Figure 4. The performance of PEOs under \u03f5 = 0.1 is slightly worse than under the other \u03f5 settings.
On the other hand, \u03f5 = 0.2 is consistently the best choice, leading to the best recall-QPS curve. Based on this observation, we suggest users choose \u03f5 = 0.2 to seek the best performance with guaranteed routing. 7.5 Effect of Result Number K In Figure 5, we show the recall-QPS comparison under K = 10 and K = 1. We have the following observations. (1) PEOs still performs the best on all the datasets, showcasing the robustness of PEOs for different values of K. In particular, the performance improvement of PEOs over HNSW under a small K is almost consistent with that under K = 100. (2) The improvement of PEOs over FINGER is marginal when K = 1. This is because the search under K = 1 is much easier than the search under a large K value. When K grows, we have to accordingly increase the size of the result list, under which situation the routing becomes harder and a more accurate estimation is important for the performance improvement.

[Figure 3: Effect of L. We plot the approximate values of Jopt, Jrel, and \u2206 under the isotropic distribution (see Section 6.2 for interpretations) as well as the empirical performance comparison. Panels (a)\u2013(c), (g)\u2013(i): theoretical curves for dimensions 200 (GloVe200), 300 (GloVe300), 96 (DEEP10M), 128 (SIFT10M), 384 (Tiny5M), and 960 (GIST); panels (d)\u2013(f), (j)\u2013(l): recall-QPS curves under varying L, K = 100.]

[Figure 4: Effect of \u03f5 \u2208 {0.1, 0.2, 0.3, 0.4}. Panels: (a) DEEP100M-angular, (b) GloVe200-angular, (c) GloVe300-\u21132; K = 100.]
[Figure 5: Recall-QPS evaluation, K = 10 and K = 1. Panels (a)\u2013(f): GloVe200-angular, GloVe300-\u21132, DEEP10M-angular, SIFT10M-\u21132, Tiny5M-\u21132, GIST-\u21132 at K = 10; panels (g)\u2013(l): the same datasets at K = 1.]

7.6 Effect of List Size efs To evaluate the effectiveness of PEOs in reducing exact distance computation, we also plot the efs-number of distance computations and efs-recall curves, where efs is the size of the temporary result list, adjusted to achieve a different trade-off between efficiency and accuracy. From the results in Figure 6, we have the following observations. (1) PEOs saves around 75% of exact distance computations on each dataset, and this is the main reason for the improvement in QPS. On the other hand, such improvement is not sensitive to the value of efs. (2) Due to the existence of an estimation error, for the same efs, the recall of PEOs is smaller than that of HNSW. Nonetheless, the difference is quite small, especially for efs > 100 (note that efs \u2265100 for K = 100), thanks to the theoretical guarantee of PEOs.

[Figure 6: Evaluation of efs-number of distance computations and efs-recall, K = 100. Panels (a)\u2013(c), (g)\u2013(i): efsearch vs. number of distance computations; panels (d)\u2013(f), (j)\u2013(l): efsearch vs. recall, on GloVe200, GloVe300, DEEP10M, SIFT10M, Tiny5M, and GIST.]
(3) As efs grows, the difference between the recalls of HNSW and PEOs is very small, while the distance computation saved by PEOs is still large. This explains why PEOs works better for larger efs values, which generally correspond to larger K.

Table 2: Index size and indexing time.
Dataset  | Index Size (GB): HNSW / HNSW+FINGER / HNSW+PEOs | Indexing Time (s): HNSW / HNSW+FINGER / HNSW+PEOs
GloVe200 | 1.19 / 3.89 (+2.27x) / 2.27 (+0.91x)  | 737 / 463+38 / 794+33
GloVe300 | 3.02 / 8.04 (+1.66x) / 3.70 (+0.23x)  | 1310 / 1408+178 / 1346+24
DEEP10M  | 6.14 / 31.15 (+4.07x) / 12.13 (+0.98x) | 1245 / 1103+849 / 1296+208
SIFT10M  | 7.34 / 31.44 (+3.28x) / 14.39 (+0.96x) | 1490 / 1308+1025 / 1536+204
Tiny5M   | 8.44 / 20.49 (+1.43x) / 12.89 (+0.53x) | 2738 / 2959+1220 / 2880+158
GIST     | 3.83 / 6.12 (+0.60x) / 4.64 (+0.21x)  | 738 / 790+706 / 793+40

7.7 Indexing Performance Since the construction time of a graph index highly depends on the parameter efc and PEOs does not rely on the underlying graph, we focus on the indexing time of PEOs itself. From the results in Table 2, we can see that the indexing time for PEOs is much shorter than the time of graph construction. On the other hand, FINGER requires more indexing time because it needs to additionally construct a subspace for each node. As for the index size, we can see that the size of HNSW+PEOs is 1.2x \u2013 2.0x larger than that of HNSW. On the two datasets with lower dimensions, the additional space overheads are more obvious. Meanwhile, FINGER requires more space than PEOs due to storing the information of the generated subspaces. Next, we give a detailed discussion on coping with the scalability issue.

[Figure 7: Recall-QPS evaluation on DEEP100M. L = 2 for PEOs. Panels: (a) K = 1, (b) K = 10, (c) K = 100.]

Table 3: Index size and indexing time on DEEP100M.
Method    | Index Size (GB)  | Indexing Time (s)
HNSW      | 66.0             | 8877
HNSW+PEOs | 67.9 (+0.029x)   | 5832+622

7.8 Scalability We discuss how to tackle the scalability issue of PEOs on datasets larger than the million scale. Generally, there are three ways to reduce the space cost: (1) decreasing M, (2) decreasing L, and (3) using only one byte for scalar quantization. In fact, when L \u22644, we know that wres is generally very small, which means that we can set wreg to 1 and wres to 0 for every e. In this case, for each neighbor u, we only need L bytes (2 \u2264L \u22644) for sub-vector IDs, one byte for the norm of u, and one byte for the norm of e. Although such a setting is not optimal for search performance, it can significantly reduce the additional space cost empirically, as shown below. Following the analysis above, we use L = 2 and M = 16 for PEOs on DEEP100M, with 100M vectors, 96 dimensions, and angular distance as the metric.
Here, we only compare PEOs with HNSW since the index size of FINGER is too large to be stored on our PC. From the results in Figure 7 and Table 3, we can see that, in most cases, PEOs still yields a 30% performance improvement over HNSW with a 3% additional space cost. For datasets with higher dimensions, the percentage of additional space cost can be smaller.

[Figure 8: Comparison with PQ-based routing; X in PQX denotes the number of sub-codebooks. Panels (a)\u2013(f): GloVe200-angular, GloVe300-\u21132, DEEP10M-angular, SIFT10M-\u21132, Tiny5M-\u21132, GIST-\u21132; K = 100.]

7.9 Comparison with Product Quantization (PQ) We can see that PEOs has similarities with PQ [21]: both partition the original space into orthogonal subspaces and combine the information from the different subspaces. Thus, an interesting question is whether quantization techniques can be used to accelerate routing. First, we note that, since |E| is much larger than the data size |O|, and the local intrinsic dimensionality (LID) of the e's is also very large due to the edge-selection strategy, the quantization of e's is not as effective as the quantization of raw vectors. On the other hand, apart from the probability guarantee, PEOs has the following two advantages. (1) The impact of $w_{\mathrm{res}}$ is fully considered in PEOs, while the impact of the individual quantization error is hard to measure. (2) PEOs applies a non-linear transformation, i.e., $F^{-1}_{e,\epsilon}$, to the threshold $\cos\theta$. After such a transformation, as the threshold \u03b8 decreases from \u03c0/2 to 0, Tr(e) grows rapidly, so that passing the PEOs test becomes much harder, because the probability that a small angle \u03b8 exists between two high-dimensional vectors is very small. That is, PEOs takes the potential impact of the threshold into consideration thanks to the probability guarantee. On the other hand, quantization-based techniques focus on the minimization of quantization errors and do not consider the impact of the threshold. For an empirical evaluation, we first need to adapt an existing quantization-based technique for routing. Specifically, we use norm-explicit product quantization [10] \u2013 which has shown competitive performance for the estimation of inner products \u2013 to quantize the e's and compute the approximate inner product between q and each e. Since standard quantization generally does not consider the effect of the quantization error in the search phase, to obtain better search performance, we also maintain the norm of the residual part of e after quantization, denoted by $\|e\|_{\mathrm{res}}$. Then we introduce a coefficient c and write the quantization-based test as follows: $c\,\|e\|_{\mathrm{res}}\,\|q\| \geq I$ (18) where I denotes the threshold of the inner product for being added into the temporary result list, which is computed in a similar way to Ar(e).
In practice, although we do not know the optimal value of c for each dataset, we experimentally adjust it in (0, 1) to get a near-optimal value. With the above setting, we can compare the performance of HNSW+PEOs and HNSW+PQ. From the results in Figure 8, we have the following three observations. (1) Except for GloVe300, PEOs obviously performs better than PQ, as analyzed in the previous discussion. (2) Due to the hardness of residual quantization, the quantization error may be quite large, especially for high-dimensional datasets such as GIST, which makes the estimation inaccurate. (3) The impact of the number of sub-codebooks is hard to predict, as shown on GloVe300. 8 Conclusion We studied the problem of probabilistic routing in graph-based ANNS, which yields a probabilistic guarantee of estimating whether the distance between a node and the query will be computed when exploring the graph index for ANNS, thereby preventing unnecessary distance computation. We considered two baseline algorithms by adapting locality-sensitive approaches to routing in graph-based ANNS, and devised PEOs, a novel approach to this problem. We proved the probabilistic guarantee of PEOs and conducted experiments on six datasets. The results showed that PEOs is effective in enhancing the performance of graph-based ANNS and consistently outperforms the SOTA by 1.1 to 1.4 times. Acknowledgements This work is supported by JSPS Kakenhi 22H03594, 22H03903, 23H03406, 23K17456, and CREST JPMJCR22M2. We thank Prof. Makoto Onizuka and Yuya Sasaki for providing financial support for completing this research." + }, + { + "url": "http://arxiv.org/abs/1707.00143v9", + "title": "Fast Approximate Nearest Neighbor Search With The Navigating Spreading-out Graph", + "abstract": "Approximate nearest neighbor search (ANNS) is a fundamental problem in\ndatabases and data mining. A scalable ANNS algorithm should be both\nmemory-efficient and fast. Some early graph-based approaches have shown\nattractive theoretical guarantees on search time complexity, but they all\nsuffer from the problem of high indexing time complexity. Recently, some\ngraph-based methods have been proposed to reduce indexing complexity by\napproximating the traditional graphs; these methods have achieved revolutionary\nperformance on million-scale datasets. Yet, they still can not scale to\nbillion-node databases. In this paper, to further improve the search-efficiency\nand scalability of graph-based methods, we start by introducing four aspects:\n(1) ensuring the connectivity of the graph; (2) lowering the average out-degree\nof the graph for fast traversal; (3) shortening the search path; and (4)\nreducing the index size. Then, we propose a novel graph structure called\nMonotonic Relative Neighborhood Graph (MRNG) which guarantees very low search\ncomplexity (close to logarithmic time). To further lower the indexing\ncomplexity and make it practical for billion-node ANNS problems, we propose a\nnovel graph structure named Navigating Spreading-out Graph (NSG) by\napproximating the MRNG. The NSG takes the four aspects into account\nsimultaneously. Extensive experiments show that NSG outperforms all the\nexisting algorithms significantly.
In addition, NSG shows superior performance\nin the E-commerce search scenario of Taobao (Alibaba Group) and has been\nintegrated into their search engine at billion-node scale.", "authors": "Cong Fu, Chao Xiang, Changxu Wang, Deng Cai", "published": "2017-07-01", "updated": "2018-12-11", "primary_cat": "cs.LG", "cats": [ "cs.LG" ], "label": "Related Work" }, { "url": "http://arxiv.org/abs/2206.01382v3", "title": "Falconn++: A Locality-sensitive Filtering Approach for Approximate Nearest Neighbor Search", "abstract": "We present Falconn++, a novel locality-sensitive filtering approach for\napproximate nearest neighbor search on angular distance. Falconn++ can filter\nout potential far away points in any hash bucket \\textit{before} querying,\nwhich results in higher quality candidates compared to other hashing-based\nsolutions. Theoretically, Falconn++ asymptotically achieves lower query time\ncomplexity than Falconn, an optimal locality-sensitive hashing scheme on\nangular distance. Empirically, Falconn++ achieves higher recall-speed tradeoffs\nthan Falconn on many real-world data sets. Falconn++ is also competitive with\nHNSW, an efficient representative of graph-based solutions on high search\nrecall regimes.", "authors": "Ninh Pham, Tao Liu", "published": "2022-06-03", "updated": "2022-10-22", "primary_cat": "cs.DS", "cats": [ "cs.DS", "cs.CV" ], "label": "Related Work" }, { "url": "http://arxiv.org/abs/1603.09320v4", "title": "Efficient and robust approximate nearest neighbor search using Hierarchical Navigable Small World graphs", "abstract": "We present a new approach for the approximate K-nearest neighbor search based\non navigable small world graphs with controllable hierarchy (Hierarchical NSW,\nHNSW). The proposed solution is fully graph-based, without any need for\nadditional search structures, which are typically used at the coarse search\nstage of most proximity graph techniques. Hierarchical NSW incrementally\nbuilds a multi-layer structure consisting of a hierarchical set of proximity\ngraphs (layers) for nested subsets of the stored elements. The maximum layer in\nwhich an element is present is selected randomly with an exponentially decaying\nprobability distribution. This allows producing graphs similar to the\npreviously studied Navigable Small World (NSW) structures while additionally\nhaving the links separated by their characteristic distance scales. Starting\nsearch from the upper layer together with utilizing the scale separation boosts\nthe performance compared to NSW and allows a logarithmic complexity scaling.\nAdditional employment of a heuristic for selecting proximity graph neighbors\nsignificantly increases performance at high recall and in case of highly\nclustered data. Performance evaluation has demonstrated that the proposed\ngeneral metric space search index is able to strongly outperform previous\nopen-source state-of-the-art vector-only approaches. Similarity of the algorithm\nto the skip list structure allows straightforward balanced distributed\nimplementation.", "authors": "Yu. A.
Yashunin", + "published": "2016-03-30", + "updated": "2018-08-14", + "primary_cat": "cs.DS", + "cats": [ + "cs.DS", + "cs.CV", + "cs.IR", + "cs.SI" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/1905.10987v1", + "title": "Learning to Route in Similarity Graphs", + "abstract": "Recently similarity graphs became the leading paradigm for efficient nearest\nneighbor search, outperforming traditional tree-based and LSH-based methods.\nSimilarity graphs perform the search via greedy routing: a query traverses the\ngraph and in each vertex moves to the adjacent vertex that is the closest to\nthis query. In practice, similarity graphs are often susceptible to local\nminima, when queries do not reach its nearest neighbors, getting stuck in\nsuboptimal vertices. In this paper we propose to learn the routing function\nthat overcomes local minima via incorporating information about the graph\nglobal structure. In particular, we augment the vertices of a given graph with\nadditional representations that are learned to provide the optimal routing from\nthe start vertex to the query nearest neighbor. By thorough experiments, we\ndemonstrate that the proposed learnable routing successfully diminishes the\nlocal minima problem and significantly improves the overall search performance.", + "authors": "Dmitry Baranchuk, Dmitry Persiyanov, Anton Sinitsin, Artem Babenko", + "published": "2019-05-27", + "updated": "2019-05-27", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "stat.ML" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/1210.6287v2", + "title": "Fast Exact Max-Kernel Search", + "abstract": "The wide applicability of kernels makes the problem of max-kernel search\nubiquitous and more general than the usual similarity search in metric spaces.\nWe focus on solving this problem efficiently. We begin by characterizing the\ninherent hardness of the max-kernel search problem with a novel notion of\ndirectional concentration. Following that, we present a method to use an $O(n\n\\log n)$ algorithm to index any set of objects (points in $\\Real^\\dims$ or\nabstract objects) directly in the Hilbert space without any explicit feature\nrepresentations of the objects in this space. We present the first provably\n$O(\\log n)$ algorithm for exact max-kernel search using this index. Empirical\nresults for a variety of data sets as well as abstract objects demonstrate up\nto 4 orders of magnitude speedup in some cases. Extensions for approximate\nmax-kernel search are also presented.", + "authors": "Ryan R. Curtin, Parikshit Ram, Alexander G. Gray", + "published": "2012-10-23", + "updated": "2012-10-26", + "primary_cat": "cs.DS", + "cats": [ + "cs.DS", + "cs.IR", + "cs.LG" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/1907.06146v3", + "title": "High Dimensional Similarity Search with Satellite System Graph: Efficiency, Scalability, and Unindexed Query Compatibility", + "abstract": "Approximate Nearest Neighbor Search (ANNS) in high dimensional space is\nessential in database and information retrieval. Recently, there has been a\nsurge of interest in exploring efficient graph-based indices for the ANNS\nproblem. Among them, Navigating Spreading-out Graph (NSG) provides fine\ntheoretical analysis and achieves state-of-the-art performance. 
However, we\nfind there are several limitations with NSG: 1) NSG has no theoretical\nguarantee on nearest neighbor search when the query is not indexed in the\ndatabase; 2) NSG is too sparse which harms the search performance. In addition,\nNSG suffers from high indexing complexity. To address the above problems, we\npropose the Satellite System Graphs (SSG) and a practical variant NSSG.\nSpecifically, we propose a novel pruning strategy to produce SSGs from the\ncomplete graph. SSGs define a new family of MSNETs in which the out-edges of\neach node are distributed evenly in all directions. Each node in the graph\nbuilds effective connections to its neighborhood omnidirectionally, whereupon\nwe derive SSG's excellent theoretical properties for both indexed and unindexed\nqueries. We can adaptively adjust the sparsity of an SSG with a hyper-parameter\nto optimize the search performance. Further, NSSG is proposed to reduce the\nindexing complexity of the SSG for large-scale applications. Both theoretical\nand extensive experimental analyses are provided to demonstrate the strengths\nof the proposed approach over the existing representative algorithms. Our code\nhas been released at https://github.com/ZJULearning/SSG.", + "authors": "Cong Fu, Changxu Wang, Deng Cai", + "published": "2019-07-13", + "updated": "2021-03-18", + "primary_cat": "cs.IR", + "cats": [ + "cs.IR", + "cs.DB" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/1908.10396v5", + "title": "Accelerating Large-Scale Inference with Anisotropic Vector Quantization", + "abstract": "Quantization based techniques are the current state-of-the-art for scaling\nmaximum inner product search to massive databases. Traditional approaches to\nquantization aim to minimize the reconstruction error of the database points.\nBased on the observation that for a given query, the database points that have\nthe largest inner products are more relevant, we develop a family of\nanisotropic quantization loss functions. Under natural statistical assumptions,\nwe show that quantization with these loss functions leads to a new variant of\nvector quantization that more greatly penalizes the parallel component of a\ndatapoint's residual relative to its orthogonal component. The proposed\napproach achieves state-of-the-art results on the public benchmarks available\nat \\url{ann-benchmarks.com}.", + "authors": "Ruiqi Guo, Philip Sun, Erik Lindgren, Quan Geng, David Simcha, Felix Chern, Sanjiv Kumar", + "published": "2019-08-27", + "updated": "2020-12-04", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "stat.ML" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2108.05433v5", + "title": "Learning to Hash Robustly, Guaranteed", + "abstract": "The indexing algorithms for the high-dimensional nearest neighbor search\n(NNS) with the best worst-case guarantees are based on the randomized Locality\nSensitive Hashing (LSH), and its derivatives. In practice, many heuristic\napproaches exist to \"learn\" the best indexing method in order to speed-up NNS,\ncrucially adapting to the structure of the given dataset.\n Oftentimes, these heuristics outperform the LSH-based algorithms on real\ndatasets, but, almost always, come at the cost of losing the guarantees of\neither correctness or robust performance on adversarial queries, or apply to\ndatasets with an assumed extra structure/model. 
In this paper, we design an NNS\nalgorithm for the Hamming space that has worst-case guarantees essentially\nmatching that of theoretical algorithms, while optimizing the hashing to the\nstructure of the dataset (think instance-optimal algorithms) for performance on\nthe minimum-performing query. We evaluate the algorithm's ability to optimize\nfor a given dataset both theoretically and practically. On the theoretical\nside, we exhibit a natural setting (dataset model) where our algorithm is much\nbetter than the standard theoretical one. On the practical side, we run\nexperiments that show that our algorithm has a 1.8x and 2.1x better recall on\nthe worst-performing queries to the MNIST and ImageNet datasets.", + "authors": "Alexandr Andoni, Daniel Beaglehole", + "published": "2021-08-11", + "updated": "2022-07-07", + "primary_cat": "cs.DS", + "cats": [ + "cs.DS", + "cs.LG" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2206.11408v1", + "title": "FINGER: Fast Inference for Graph-based Approximate Nearest Neighbor Search", + "abstract": "Approximate K-Nearest Neighbor Search (AKNNS) has now become ubiquitous in\nmodern applications, for example, as a fast search procedure with two tower\ndeep learning models. Graph-based methods for AKNNS in particular have received\ngreat attention due to their superior performance. These methods rely on greedy\ngraph search to traverse the data points as embedding vectors in a database.\nUnder this greedy search scheme, we make a key observation: many distance\ncomputations do not influence search updates so these computations can be\napproximated without hurting performance. As a result, we propose FINGER, a\nfast inference method to achieve efficient graph search. FINGER approximates\nthe distance function by estimating angles between neighboring residual vectors\nwith low-rank bases and distribution matching. The approximated distance can be\nused to bypass unnecessary computations, which leads to faster searches.\nEmpirically, accelerating a popular graph-based method named HNSW by FINGER is\nshown to outperform existing graph-based methods by 20%-60% across different\nbenchmark datasets.", + "authors": "Patrick H. Chen, Chang Wei-cheng, Yu Hsiang-fu, Inderjit S. Dhillon, Hsieh Cho-jui", + "published": "2022-06-22", + "updated": "2022-06-22", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2104.04902v2", + "title": "Sublinear Time Nearest Neighbor Search over Generalized Weighted Manhattan Distance", + "abstract": "Nearest Neighbor Search (NNS) over generalized weighted distances is\nfundamental to a wide range of applications. The problem of NNS over the\ngeneralized weighted square Euclidean distance has been studied in previous\nwork. However, numerous studies have shown that the Manhattan distance could be\nmore effective than the Euclidean distance for high-dimensional NNS, which\nindicates that the generalized weighted Manhattan distance is possibly more\npractical than the generalized weighted square Euclidean distance in high\ndimensions. To the best of our knowledge, no prior work solves the problem of\nNNS over the generalized weighted Manhattan distance in sublinear time. 
This\npaper achieves the goal by proposing two novel hashing schemes\n($d_w^{l_1},l_2$)-ALSH and ($d_w^{l_1},\\theta$)-ALSH.", + "authors": "Huan Hu, Jianzhong Li", + "published": "2021-04-11", + "updated": "2021-10-18", + "primary_cat": "cs.DB", + "cats": [ + "cs.DB" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2301.01702v2", + "title": "Automating Nearest Neighbor Search Configuration with Constrained Optimization", + "abstract": "The approximate nearest neighbor (ANN) search problem is fundamental to\nefficiently serving many real-world machine learning applications. A number of\ntechniques have been developed for ANN search that are efficient, accurate, and\nscalable. However, such techniques typically have a number of parameters that\naffect the speed-recall tradeoff, and exhibit poor performance when such\nparameters aren't properly set. Tuning these parameters has traditionally been\na manual process, demanding in-depth knowledge of the underlying search\nalgorithm. This is becoming an increasingly unrealistic demand as ANN search\ngrows in popularity. To tackle this obstacle to ANN adoption, this work\nproposes a constrained optimization-based approach to tuning quantization-based\nANN algorithms. Our technique takes just a desired search cost or recall as\ninput, and then generates tunings that, empirically, are very close to the\nspeed-recall Pareto frontier and give leading performance on standard\nbenchmarks.", + "authors": "Philip Sun, Ruiqi Guo, Sanjiv Kumar", + "published": "2023-01-04", + "updated": "2023-02-22", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/1509.02897v1", + "title": "Practical and Optimal LSH for Angular Distance", + "abstract": "We show the existence of a Locality-Sensitive Hashing (LSH) family for the\nangular distance that yields an approximate Near Neighbor Search algorithm with\nthe asymptotically optimal running time exponent. Unlike earlier algorithms\nwith this property (e.g., Spherical LSH [Andoni, Indyk, Nguyen, Razenshteyn\n2014], [Andoni, Razenshteyn 2015]), our algorithm is also practical, improving\nupon the well-studied hyperplane LSH [Charikar, 2002] in practice. We also\nintroduce a multiprobe version of this algorithm, and conduct experimental\nevaluation on real and synthetic data sets.\n We complement the above positive results with a fine-grained lower bound for\nthe quality of any LSH family for angular distance. Our lower bound implies\nthat the above LSH family exhibits a trade-off between evaluation time and\nquality that is close to optimal for a natural class of LSH functions.", + "authors": "Alexandr Andoni, Piotr Indyk, Thijs Laarhoven, Ilya Razenshteyn, Ludwig Schmidt", + "published": "2015-09-09", + "updated": "2015-09-09", + "primary_cat": "cs.DS", + "cats": [ + "cs.DS", + "cs.CG", + "cs.IR" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2402.02321v1", + "title": "Active Learning for Graphs with Noisy Structures", + "abstract": "Graph Neural Networks (GNNs) have seen significant success in tasks such as\nnode classification, largely contingent upon the availability of sufficient\nlabeled nodes. Yet, the excessive cost of labeling large-scale graphs led to a\nfocus on active learning on graphs, which aims for effective data selection to\nmaximize downstream model performance. 
Notably, most existing methods assume\nreliable graph topology, while real-world scenarios often present noisy graphs.\nGiven this, designing a successful active learning framework for noisy graphs\nis highly needed but challenging, as selecting data for labeling and obtaining\na clean graph are two tasks naturally interdependent: selecting high-quality\ndata requires clean graph structure while cleaning noisy graph structure\nrequires sufficient labeled data. Considering the complexity mentioned above,\nwe propose an active learning framework, GALClean, which has been specifically\ndesigned to adopt an iterative approach for conducting both data selection and\ngraph purification simultaneously with best information learned from the prior\niteration. Importantly, we summarize GALClean as an instance of the\nExpectation-Maximization algorithm, which provides a theoretical understanding\nof its design and mechanisms. This theory naturally leads to an enhanced\nversion, GALClean+. Extensive experiments have demonstrated the effectiveness\nand robustness of our proposed method across various types and levels of noisy\ngraphs.", + "authors": "Hongliang Chi, Cong Qi, Suhang Wang, Yao Ma", + "published": "2024-02-04", + "updated": "2024-02-04", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/2302.03596v3", + "title": "Graph Generation with Diffusion Mixture", + "abstract": "Generation of graphs is a major challenge for real-world tasks that require\nunderstanding the complex nature of their non-Euclidean structures. Although\ndiffusion models have achieved notable success in graph generation recently,\nthey are ill-suited for modeling the topological properties of graphs since\nlearning to denoise the noisy samples does not explicitly learn the graph\nstructures to be generated. To tackle this limitation, we propose a generative\nframework that models the topology of graphs by explicitly learning the final\ngraph structures of the diffusion process. Specifically, we design the\ngenerative process as a mixture of endpoint-conditioned diffusion processes\nwhich is driven toward the predicted graph that results in rapid convergence.\nWe further introduce a simple parameterization of the mixture process and\ndevelop an objective for learning the final graph structure, which enables\nmaximum likelihood training. Through extensive experimental validation on\ngeneral graph and 2D/3D molecule generation tasks, we show that our method\noutperforms previous generative models, generating graphs with correct topology\nwith both continuous (e.g. 3D coordinates) and discrete (e.g. atom types)\nfeatures. Our code is available at https://github.com/harryjo97/DruM.", + "authors": "Jaehyeong Jo, Dongki Kim, Sung Ju Hwang", + "published": "2023-02-07", + "updated": "2024-02-05", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/2203.09205v1", + "title": "SoK: Differential Privacy on Graph-Structured Data", + "abstract": "In this work, we study the applications of differential privacy (DP) in the\ncontext of graph-structured data. We discuss the formulations of DP applicable\nto the publication of graphs and their associated statistics as well as machine\nlearning on graph-based data, including graph neural networks (GNNs). 
The\nformulation of DP in the context of graph-structured data is difficult, as\nindividual data points are interconnected (often non-linearly or sparsely).\nThis connectivity complicates the computation of individual privacy loss in\ndifferentially private learning. The problem is exacerbated by an absence of a\nsingle, well-established formulation of DP in graph settings. This issue\nextends to the domain of GNNs, rendering private machine learning on\ngraph-structured data a challenging task. A lack of prior systematisation work\nmotivated us to study graph-based learning from a privacy perspective. In this\nwork, we systematise different formulations of DP on graphs, discuss challenges\nand promising applications, including the GNN domain. We compare and separate\nworks into graph analysis tasks and graph learning tasks with GNNs. Finally, we\nconclude our work with a discussion of open questions and potential directions\nfor further research in this area.", + "authors": "Tamara T. Mueller, Dmitrii Usynin, Johannes C. Paetzold, Daniel Rueckert, Georgios Kaissis", + "published": "2022-03-17", + "updated": "2022-03-17", + "primary_cat": "cs.CR", + "cats": [ + "cs.CR", + "cs.AI", + "cs.LG" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/2005.03675v3", + "title": "Machine Learning on Graphs: A Model and Comprehensive Taxonomy", + "abstract": "There has been a surge of recent interest in learning representations for\ngraph-structured data. Graph representation learning methods have generally\nfallen into three main categories, based on the availability of labeled data.\nThe first, network embedding (such as shallow graph embedding or graph\nauto-encoders), focuses on learning unsupervised representations of relational\nstructure. The second, graph regularized neural networks, leverages graphs to\naugment neural network losses with a regularization objective for\nsemi-supervised learning. The third, graph neural networks, aims to learn\ndifferentiable functions over discrete topologies with arbitrary structure.\nHowever, despite the popularity of these areas there has been surprisingly\nlittle work on unifying the three paradigms. Here, we aim to bridge the gap\nbetween graph neural networks, network embedding and graph regularization\nmodels. We propose a comprehensive taxonomy of representation learning methods\nfor graph-structured data, aiming to unify several disparate bodies of work.\nSpecifically, we propose a Graph Encoder Decoder Model (GRAPHEDM), which\ngeneralizes popular algorithms for semi-supervised learning on graphs (e.g.\nGraphSage, Graph Convolutional Networks, Graph Attention Networks), and\nunsupervised learning of graph representations (e.g. DeepWalk, node2vec, etc)\ninto a single consistent approach. To illustrate the generality of this\napproach, we fit over thirty existing methods into this framework. 
We believe\nthat this unifying view both provides a solid foundation for understanding the\nintuition behind these methods, and enables future research in the area.", + "authors": "Ines Chami, Sami Abu-El-Haija, Bryan Perozzi, Christopher R\u00e9, Kevin Murphy", + "published": "2020-05-07", + "updated": "2022-04-12", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.NE", + "cs.SI", + "stat.ML" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/1906.02319v1", + "title": "DEMO-Net: Degree-specific Graph Neural Networks for Node and Graph Classification", + "abstract": "Graph data widely exist in many high-impact applications. Inspired by the\nsuccess of deep learning in grid-structured data, graph neural network models\nhave been proposed to learn powerful node-level or graph-level representation.\nHowever, most of the existing graph neural networks suffer from the following\nlimitations: (1) there is limited analysis regarding the graph convolution\nproperties, such as seed-oriented, degree-aware and order-free; (2) the node's\ndegree-specific graph structure is not explicitly expressed in graph\nconvolution for distinguishing structure-aware node neighborhoods; (3) the\ntheoretical explanation regarding the graph-level pooling schemes is unclear.\n To address these problems, we propose a generic degree-specific graph neural\nnetwork named DEMO-Net motivated by Weisfeiler-Lehman graph isomorphism test\nthat recursively identifies 1-hop neighborhood structures. In order to\nexplicitly capture the graph topology integrated with node attributes, we argue\nthat graph convolution should have three properties: seed-oriented,\ndegree-aware, order-free. To this end, we propose multi-task graph convolution\nwhere each task represents node representation learning for nodes with a\nspecific degree value, thus leading to preserving the degree-specific graph\nstructure. In particular, we design two multi-task learning methods:\ndegree-specific weight and hashing functions for graph convolution. In\naddition, we propose a novel graph-level pooling/readout scheme for learning\ngraph representation provably lying in a degree-specific Hilbert kernel space.\nThe experimental results on several node and graph classification benchmark\ndata sets demonstrate the effectiveness and efficiency of our proposed DEMO-Net\nover state-of-the-art graph neural network models.", + "authors": "Jun Wu, Jingrui He, Jiejun Xu", + "published": "2019-06-05", + "updated": "2019-06-05", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "stat.ML" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/2111.06679v2", + "title": "deepstruct -- linking deep learning and graph theory", + "abstract": "deepstruct connects deep learning models and graph theory such that different\ngraph structures can be imposed on neural networks or graph structures can be\nextracted from trained neural network models. For this, deepstruct provides\ndeep neural network models with different restrictions which can be created\nbased on an initial graph. Further, tools to extract graph structures from\ntrained models are available. This step of extracting graphs can be\ncomputationally expensive even for models of just a few dozen thousand\nparameters and poses a challenging problem. 
deepstruct supports research in\npruning, neural architecture search, automated network design and structure\nanalysis of neural networks.", + "authors": "Julian Stier, Michael Granitzer", + "published": "2021-11-12", + "updated": "2021-12-05", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI", + "cs.NE", + "I.2.0; F.0" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/1909.11594v1", + "title": "Structured Graph Learning Via Laplacian Spectral Constraints", + "abstract": "Learning a graph with a specific structure is essential for interpretability\nand identification of the relationships among data. It is well known that\nstructured graph learning from observed samples is an NP-hard combinatorial\nproblem. In this paper, we first show that for a set of important graph\nfamilies it is possible to convert the structural constraints of structure into\neigenvalue constraints of the graph Laplacian matrix. Then we introduce a\nunified graph learning framework, lying at the integration of the spectral\nproperties of the Laplacian matrix with Gaussian graphical modeling that is\ncapable of learning structures of a large class of graph families. The proposed\nalgorithms are provably convergent and practically amenable for large-scale\nsemi-supervised and unsupervised graph-based learning tasks. Extensive\nnumerical experiments with both synthetic and real data sets demonstrate the\neffectiveness of the proposed methods. An R package containing code for all the\nexperimental results is available at\nhttps://cran.r-project.org/package=spectralGraphTopology.", + "authors": "Sandeep Kumar, Jiaxi Ying, Jos'e Vin'icius de M. Cardoso, Daniel P. Palomar", + "published": "2019-09-24", + "updated": "2019-09-24", + "primary_cat": "stat.ML", + "cats": [ + "stat.ML", + "cs.LG", + "cs.SI", + "math.OC", + "stat.AP" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/2004.06846v1", + "title": "MxPool: Multiplex Pooling for Hierarchical Graph Representation Learning", + "abstract": "How to utilize deep learning methods for graph classification tasks has\nattracted considerable research attention in the past few years. Regarding\ngraph classification tasks, the graphs to be classified may have various graph\nsizes (i.e., different number of nodes and edges) and have various graph\nproperties (e.g., average node degree, diameter, and clustering coefficient).\nThe diverse property of graphs has imposed significant challenges on existing\ngraph learning techniques since diverse graphs have different best-fit\nhyperparameters. It is difficult to learn graph features from a set of diverse\ngraphs by a unified graph neural network. This motivates us to use a multiplex\nstructure in a diverse way and utilize a priori properties of graphs to guide\nthe learning. In this paper, we propose MxPool, which concurrently uses\nmultiple graph convolution/pooling networks to build a hierarchical learning\nstructure for graph representation learning tasks. 
Our experiments on numerous\ngraph classification benchmarks show that our MxPool has superiority over other\nstate-of-the-art graph representation learning methods.", + "authors": "Yanyan Liang, Yanfeng Zhang, Dechao Gao, Qian Xu", + "published": "2020-04-15", + "updated": "2020-04-15", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "stat.ML" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/2211.07970v1", + "title": "Adaptive Multi-Neighborhood Attention based Transformer for Graph Representation Learning", + "abstract": "By incorporating the graph structural information into Transformers, graph\nTransformers have exhibited promising performance for graph representation\nlearning in recent years. Existing graph Transformers leverage specific\nstrategies, such as Laplacian eigenvectors and shortest paths of the node\npairs, to preserve the structural features of nodes and feed them into the\nvanilla Transformer to learn the representations of nodes. It is hard for such\npredefined rules to extract informative graph structural features for arbitrary\ngraphs whose topology structure varies greatly, limiting the learning capacity\nof the models. To this end, we propose an adaptive graph Transformer, termed\nMulti-Neighborhood Attention based Graph Transformer (MNA-GT), which captures\nthe graph structural information for each node from the multi-neighborhood\nattention mechanism adaptively. By defining the input to perform scaled-dot\nproduct as an attention kernel, MNA-GT constructs multiple attention kernels\nbased on different hops of neighborhoods such that each attention kernel can\ncapture specific graph structural information of the corresponding neighborhood\nfor each node pair. In this way, MNA-GT can preserve the graph structural\ninformation efficiently by incorporating node representations learned by\ndifferent attention kernels. MNA-GT further employs an attention layer to learn\nthe importance of different attention kernels to enable the model to adaptively\ncapture the graph structural information for different nodes. Extensive\nexperiments are conducted on a variety of graph benchmarks, and the empirical\nresults show that MNA-GT outperforms many strong baselines.", + "authors": "Gaichao Li, Jinsong Chen, Kun He", + "published": "2022-11-15", + "updated": "2022-11-15", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.SI" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/2006.02879v1", + "title": "Auto-decoding Graphs", + "abstract": "We present an approach to synthesizing new graph structures from empirically\nspecified distributions. The generative model is an auto-decoder that learns to\nsynthesize graphs from latent codes. The graph synthesis model is learned\njointly with an empirical distribution over the latent codes. Graphs are\nsynthesized using self-attention modules that are trained to identify likely\nconnectivity patterns. Graph-based normalizing flows are used to sample latent\ncodes from the distribution learned by the auto-decoder. The resulting model\ncombines accuracy and scalability. 
On benchmark datasets of large graphs, the\npresented model outperforms the state of the art by a factor of 1.5 in mean\naccuracy and average rank across at least three different graph statistics,\nwith a 2x speedup during inference.", + "authors": "Sohil Atul Shah, Vladlen Koltun", + "published": "2020-06-04", + "updated": "2020-06-04", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "stat.ML" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/2306.08201v1", + "title": "Graph Laplacian Learning with Exponential Family Noise", + "abstract": "A common challenge in applying graph machine learning methods is that the\nunderlying graph of a system is often unknown. Although different graph\ninference methods have been proposed for continuous graph signals, inferring\nthe graph structure underlying other types of data, such as discrete counts, is\nunder-explored. In this paper, we generalize a graph signal processing (GSP)\nframework for learning a graph from smooth graph signals to the exponential\nfamily noise distribution to model various data types. We propose an\nalternating algorithm that estimates the graph Laplacian as well as the\nunobserved smooth representation from the noisy signals. We demonstrate in\nsynthetic and real-world data that our new algorithm outperforms competing\nLaplacian estimation methods under noise model mismatch.", + "authors": "Changhao Shi, Gal Mishne", + "published": "2023-06-14", + "updated": "2023-06-14", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "eess.SP" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/2006.13009v2", + "title": "Iterative Deep Graph Learning for Graph Neural Networks: Better and Robust Node Embeddings", + "abstract": "In this paper, we propose an end-to-end graph learning framework, namely\nIterative Deep Graph Learning (IDGL), for jointly and iteratively learning\ngraph structure and graph embedding. The key rationale of IDGL is to learn a\nbetter graph structure based on better node embeddings, and vice versa (i.e.,\nbetter node embeddings based on a better graph structure). Our iterative method\ndynamically stops when the learned graph structure approaches close enough to\nthe graph optimized for the downstream prediction task. In addition, we cast\nthe graph learning problem as a similarity metric learning problem and leverage\nadaptive graph regularization for controlling the quality of the learned graph.\nFinally, combining the anchor-based approximation technique, we further propose\na scalable version of IDGL, namely IDGL-Anch, which significantly reduces the\ntime and space complexity of IDGL without compromising the performance. Our\nextensive experiments on nine benchmarks show that our proposed IDGL models can\nconsistently outperform or match the state-of-the-art baselines. Furthermore,\nIDGL can be more robust to adversarial graphs and cope with both transductive\nand inductive learning.", + "authors": "Yu Chen, Lingfei Wu, Mohammed J. Zaki", + "published": "2020-06-21", + "updated": "2020-10-23", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "stat.ML" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/1904.09671v1", + "title": "DDGK: Learning Graph Representations for Deep Divergence Graph Kernels", + "abstract": "Can neural networks learn to compare graphs without feature engineering? 
In\nthis paper, we show that it is possible to learn representations for graph\nsimilarity with neither domain knowledge nor supervision (i.e.\\ feature\nengineering or labeled graphs). We propose Deep Divergence Graph Kernels, an\nunsupervised method for learning representations over graphs that encodes a\nrelaxed notion of graph isomorphism. Our method consists of three parts. First,\nwe learn an encoder for each anchor graph to capture its structure. Second, for\neach pair of graphs, we train a cross-graph attention network which uses the\nnode representations of an anchor graph to reconstruct another graph. This\napproach, which we call isomorphism attention, captures how well the\nrepresentations of one graph can encode another. We use the attention-augmented\nencoder's predictions to define a divergence score for each pair of graphs.\nFinally, we construct an embedding space for all graphs using these pair-wise\ndivergence scores.\n Unlike previous work, much of which relies on 1) supervision, 2) domain\nspecific knowledge (e.g. a reliance on Weisfeiler-Lehman kernels), and 3) known\nnode alignment, our unsupervised method jointly learns node representations,\ngraph representations, and an attention-based alignment between graphs.\n Our experimental results show that Deep Divergence Graph Kernels can learn an\nunsupervised alignment between graphs, and that the learned representations\nachieve competitive results when used as features on a number of challenging\ngraph classification tasks. Furthermore, we illustrate how the learned\nattention allows insight into the alignment of sub-structures across\ngraphs.", "authors": "Rami Al-Rfou, Dustin Zelle, Bryan Perozzi", "published": "2019-04-21", "updated": "2019-04-21", "primary_cat": "cs.LG", "cats": [ "cs.LG", "cs.IR", "cs.SI", "stat.ML" ], "category": "Graph AND Structure AND Learning" }, { "url": "http://arxiv.org/abs/2105.00696v1", "title": "Graph Learning: A Survey", "abstract": "Graphs are widely used as a popular representation of the network structure\nof connected data. Graph data can be found in a broad spectrum of application\ndomains such as social systems, ecosystems, biological networks, knowledge\ngraphs, and information systems. With the continuous penetration of artificial\nintelligence technologies, graph learning (i.e., machine learning on graphs) is\ngaining attention from both researchers and practitioners. Graph learning\nproves effective for many tasks, such as classification, link prediction, and\nmatching. Generally, graph learning methods extract relevant features of graphs\nby taking advantage of machine learning algorithms. In this survey, we present\na comprehensive overview on the state-of-the-art of graph learning. Special\nattention is paid to four categories of existing graph learning methods,\nincluding graph signal processing, matrix factorization, random walk, and deep\nlearning. Major models and algorithms under these categories are reviewed\nrespectively. We examine graph learning applications in areas such as text,\nimages, science, knowledge graphs, and combinatorial optimization.
In addition,\nwe discuss several promising research directions in this field.", "authors": "Feng Xia, Ke Sun, Shuo Yu, Abdul Aziz, Liangtian Wan, Shirui Pan, Huan Liu", "published": "2021-05-03", "updated": "2021-05-03", "primary_cat": "cs.LG", "cats": [ "cs.LG", "cs.AI", "cs.SI", "68T07", "I.2.6" ], "category": "Graph AND Structure AND Learning" }, { "url": "http://arxiv.org/abs/1905.11691v1", "title": "Triple2Vec: Learning Triple Embeddings from Knowledge Graphs", "abstract": "Graph embedding techniques allow learning high-quality feature vectors from\ngraph structures and are useful in a variety of tasks, from node classification\nto clustering. Existing approaches have only focused on learning feature\nvectors for the nodes in a (knowledge) graph. To the best of our knowledge,\nnone of them has tackled the problem of embedding of graph edges, that is,\nknowledge graph triples. The approaches that are closer to this task have\nfocused on homogeneous graphs involving only one type of edge and obtain edge\nembeddings by applying some operation (e.g., average) on the embeddings of the\nendpoint nodes. The goal of this paper is to introduce Triple2Vec, a new\ntechnique to directly embed edges in (knowledge) graphs. Triple2Vec builds upon\nthree main ingredients. The first is the notion of line graph. The line graph\nof a graph is another graph representing the adjacency between edges of the\noriginal graph. In particular, the nodes of the line graph are the edges of the\noriginal graph. We show that directly applying existing embedding techniques on\nthe nodes of the line graph to learn edge embeddings is not enough in the\ncontext of knowledge graphs. Thus, we introduce the notion of triple line\ngraph. The second is an edge weighting mechanism both for line graphs derived\nfrom knowledge graphs and homogeneous graphs. The third is a strategy based on\ngraph walks on the weighted triple line graph that can preserve proximity\nbetween nodes. Embeddings are finally generated by adopting the SkipGram model,\nwhere sentences are replaced with graph walks. We evaluate our approach on\ndifferent real world (knowledge) graphs and compare it with related work.", "authors": "Valeria Fionda, Giuseppe Pirr\u00f3", "published": "2019-05-28", "updated": "2019-05-28", "primary_cat": "cs.AI", "cats": [ "cs.AI", "cs.LG" ], "category": "Graph AND Structure AND Learning" }, { "url": "http://arxiv.org/abs/2101.00082v1", "title": "Bosonic Random Walk Networks for Graph Learning", "abstract": "The development of Graph Neural Networks (GNNs) has led to great progress in\nmachine learning on graph-structured data. These networks operate via diffusing\ninformation across the graph nodes while capturing the structure of the graph.\nRecently there has also been tremendous progress in quantum computing\ntechniques. In this work, we explore applications of multi-particle quantum\nwalks on diffusing information across graphs. Our model is based on learning\nthe operators that govern the dynamics of quantum random walkers on graphs.
We\ndemonstrate the effectiveness of our method on classification and regression\ntasks.", + "authors": "Shiv Shankar, Don Towsley", + "published": "2020-12-31", + "updated": "2020-12-31", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/1611.07308v1", + "title": "Variational Graph Auto-Encoders", + "abstract": "We introduce the variational graph auto-encoder (VGAE), a framework for\nunsupervised learning on graph-structured data based on the variational\nauto-encoder (VAE). This model makes use of latent variables and is capable of\nlearning interpretable latent representations for undirected graphs. We\ndemonstrate this model using a graph convolutional network (GCN) encoder and a\nsimple inner product decoder. Our model achieves competitive results on a link\nprediction task in citation networks. In contrast to most existing models for\nunsupervised learning on graph-structured data and link prediction, our model\ncan naturally incorporate node features, which significantly improves\npredictive performance on a number of benchmark datasets.", + "authors": "Thomas N. Kipf, Max Welling", + "published": "2016-11-21", + "updated": "2016-11-21", + "primary_cat": "stat.ML", + "cats": [ + "stat.ML", + "cs.LG" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/2302.02909v1", + "title": "Spectral Augmentations for Graph Contrastive Learning", + "abstract": "Contrastive learning has emerged as a premier method for learning\nrepresentations with or without supervision. Recent studies have shown its\nutility in graph representation learning for pre-training. Despite successes,\nthe understanding of how to design effective graph augmentations that can\ncapture structural properties common to many different types of downstream\ngraphs remains incomplete. We propose a set of well-motivated graph\ntransformation operations derived via graph spectral analysis to provide a bank\nof candidates when constructing augmentations for a graph contrastive\nobjective, enabling contrastive learning to capture useful structural\nrepresentation from pre-training graph datasets. We first present a spectral\ngraph cropping augmentation that involves filtering nodes by applying\nthresholds to the eigenvalues of the leading Laplacian eigenvectors. Our second\nnovel augmentation reorders the graph frequency components in a structural\nLaplacian-derived position graph embedding. Further, we introduce a method that\nleads to improved views of local subgraphs by performing alignment via global\nrandom walk embeddings. Our experimental results indicate consistent\nimprovements in out-of-domain graph data transfer compared to state-of-the-art\ngraph contrastive learning methods, shedding light on how to design a graph\nlearner that is able to learn structural properties common to diverse graph\ntypes.", + "authors": "Amur Ghose, Yingxue Zhang, Jianye Hao, Mark Coates", + "published": "2023-02-06", + "updated": "2023-02-06", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/2306.07699v2", + "title": "Time-aware Graph Structure Learning via Sequence Prediction on Temporal Graphs", + "abstract": "Temporal Graph Learning, which aims to model the time-evolving nature of\ngraphs, has gained increasing attention and achieved remarkable performance\nrecently. 
However, in reality, graph structures are often incomplete and noisy,\nwhich hinders temporal graph networks (TGNs) from learning informative\nrepresentations. Graph contrastive learning uses data augmentation to generate\nplausible variations of existing data and learn robust representations.\nHowever, rule-based augmentation approaches may be suboptimal as they lack\nlearnability and fail to leverage rich information from downstream tasks. To\naddress these issues, we propose a Time-aware Graph Structure Learning (TGSL)\napproach via sequence prediction on temporal graphs, which learns better graph\nstructures for downstream tasks through adding potential temporal edges. In\nparticular, it predicts time-aware context embedding based on previously\nobserved interactions and uses the Gumbel-Top-K to select the closest candidate\nedges to this context embedding. Additionally, several candidate sampling\nstrategies are proposed to ensure both efficiency and diversity. Furthermore,\nwe jointly learn the graph structure and TGNs in an end-to-end manner and\nperform inference on the refined graph. Extensive experiments on temporal link\nprediction benchmarks demonstrate that TGSL yields significant gains for the\npopular TGNs such as TGAT and GraphMixer, and it outperforms other contrastive\nlearning methods on temporal graphs. We release the code at\nhttps://github.com/ViktorAxelsen/TGSL.", "authors": "Haozhen Zhang, Xueting Han, Xi Xiao, Jing Bai", "published": "2023-06-13", "updated": "2023-08-15", "primary_cat": "cs.LG", "cats": [ "cs.LG", "cs.AI" ], "category": "Graph AND Structure AND Learning" }, { "url": "http://arxiv.org/abs/2108.01660v3", "title": "Graph Neural Networks With Lifting-based Adaptive Graph Wavelets", "abstract": "Spectral-based graph neural networks (SGNNs) have been attracting increasing\nattention in graph representation learning. However, existing SGNNs are limited\nin implementing graph filters with rigid transforms (e.g., graph Fourier or\npredefined graph wavelet transforms) and cannot adapt to signals residing on\ngraphs and tasks at hand. In this paper, we propose a novel class of graph\nneural networks that realizes graph filters with adaptive graph wavelets.\nSpecifically, the adaptive graph wavelets are learned with neural\nnetwork-parameterized lifting structures, where structure-aware attention-based\nlifting operations (i.e., prediction and update operations) are developed to\njointly consider graph structures and node features. We propose to lift based\non diffusion wavelets to alleviate the structural information loss induced by\npartitioning non-bipartite graphs. By design, the locality and sparsity of the\nresulting wavelet transform as well as the scalability of the lifting structure\nare guaranteed. We further derive a soft-thresholding filtering operation by\nlearning sparse graph representations in terms of the learned wavelets,\nyielding localized, efficient, and scalable wavelet-based graph filters. To\nensure that the learned graph representations are invariant to node\npermutations, a layer is employed at the input of the networks to reorder the\nnodes according to their local topology information. We evaluate the proposed\nnetworks in both node-level and graph-level representation learning tasks on\nbenchmark citation and bioinformatics graph datasets.
Extensive experiments\ndemonstrate the superiority of the proposed networks over existing SGNNs in\nterms of accuracy, efficiency, and scalability.", + "authors": "Mingxing Xu, Wenrui Dai, Chenglin Li, Junni Zou, Hongkai Xiong, Pascal Frossard", + "published": "2021-08-03", + "updated": "2022-01-04", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/2212.04934v1", + "title": "Learning Graph Algorithms With Recurrent Graph Neural Networks", + "abstract": "Classical graph algorithms work well for combinatorial problems that can be\nthoroughly formalized and abstracted. Once the algorithm is derived, it\ngeneralizes to instances of any size. However, developing an algorithm that\nhandles complex structures and interactions in the real world can be\nchallenging. Rather than specifying the algorithm, we can try to learn it from\nthe graph-structured data. Graph Neural Networks (GNNs) are inherently capable\nof working on graph structures; however, they struggle to generalize well, and\nlearning on larger instances is challenging. In order to scale, we focus on a\nrecurrent architecture design that can learn simple graph problems end to end\non smaller graphs and then extrapolate to larger instances. As our main\ncontribution, we identify three essential techniques for recurrent GNNs to\nscale. By using (i) skip connections, (ii) state regularization, and (iii) edge\nconvolutions, we can guide GNNs toward extrapolation. This allows us to train\non small graphs and apply the same model to much larger graphs during\ninference. Moreover, we empirically validate the extrapolation capabilities of\nour GNNs on algorithmic datasets.", + "authors": "Florian Gr\u00f6tschla, Jo\u00ebl Mathys, Roger Wattenhofer", + "published": "2022-12-09", + "updated": "2022-12-09", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/1910.11390v2", + "title": "Deep Learning for Molecular Graphs with Tiered Graph Autoencoders and Graph Prediction", + "abstract": "Tiered graph autoencoders provide the architecture and mechanisms for\nlearning tiered latent representations and latent spaces for molecular graphs\nthat explicitly represent and utilize groups (e.g., functional groups). This\nenables the utilization and exploration of tiered molecular latent spaces,\neither individually - the node (atom) tier, the group tier, or the graph\n(molecule) tier - or jointly, as well as navigation across the tiers. In this\npaper, we discuss the use of tiered graph autoencoders together with graph\nprediction for molecular graphs. We show features of molecular graphs used, and\ngroups in molecular graphs identified for some sample molecules. We briefly\nreview graph prediction and the QM9 dataset for background information, and\ndiscuss the use of tiered graph embeddings for graph prediction, particularly\nweighted group pooling. We find that functional groups and ring groups\neffectively capture and represent the chemical essence of molecular graphs\n(structures). Further, tiered graph autoencoders and graph prediction together\nprovide effective, efficient and interpretable deep learning for molecular\ngraphs, with the former providing unsupervised, transferable learning and the\nlatter providing supervised, task-optimized learning.", + "authors": "Daniel T. 
Chang", + "published": "2019-10-24", + "updated": "2021-07-01", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "q-bio.BM" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/2404.11869v1", + "title": "Multi-view Graph Structural Representation Learning via Graph Coarsening", + "abstract": "Graph Transformers (GTs) have made remarkable achievements in graph-level\ntasks. However, most existing works regard graph structures as a form of\nguidance or bias for enhancing node representations, which focuses on\nnode-central perspectives and lacks explicit representations of edges and\nstructures. One natural question is, can we treat graph structures node-like as\na whole to learn high-level features? Through experimental analysis, we explore\nthe feasibility of this assumption. Based on our findings, we propose a novel\nmulti-view graph structural representation learning model via graph coarsening\n(MSLgo) on GT architecture for graph classification. Specifically, we build\nthree unique views, original, coarsening, and conversion, to learn a thorough\nstructural representation. We compress loops and cliques via hierarchical\nheuristic graph coarsening and restrict them with well-designed constraints,\nwhich builds the coarsening view to learn high-level interactions between\nstructures. We also introduce line graphs for edge embeddings and switch to\nedge-central perspective to construct the conversion view. Experiments on six\nreal-world datasets demonstrate the improvements of MSLgo over 14 baselines\nfrom various architectures.", + "authors": "Xiaorui Qi, Qijie Bai, Yanlong Wen, Haiwei Zhang, Xiaojie Yuan", + "published": "2024-04-18", + "updated": "2024-04-18", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.SI" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/1910.08057v1", + "title": "Graph Embedding VAE: A Permutation Invariant Model of Graph Structure", + "abstract": "Generative models of graph structure have applications in biology and social\nsciences. The state of the art is GraphRNN, which decomposes the graph\ngeneration process into a series of sequential steps. While effective for\nmodest sizes, it loses its permutation invariance for larger graphs. Instead,\nwe present a permutation invariant latent-variable generative model relying on\ngraph embeddings to encode structure. Using tools from the random graph\nliterature, our model is highly scalable to large graphs with likelihood\nevaluation and generation in $O(|V | + |E|)$.", + "authors": "Tony Duan, Juho Lee", + "published": "2019-10-17", + "updated": "2019-10-17", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "stat.ML" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/2003.04508v3", + "title": "Unsupervised Graph Embedding via Adaptive Graph Learning", + "abstract": "Graph autoencoders (GAEs) are powerful tools in representation learning for\ngraph embedding. However, the performance of GAEs is very dependent on the\nquality of the graph structure, i.e., of the adjacency matrix. In other words,\nGAEs would perform poorly when the adjacency matrix is incomplete or be\ndisturbed. In this paper, two novel unsupervised graph embedding methods,\nunsupervised graph embedding via adaptive graph learning (BAGE) and\nunsupervised graph embedding via variational adaptive graph learning (VBAGE)\nare proposed. 
The proposed methods expand the application range of GAEs on\ngraph embedding, i.e., on general datasets without graph structure.\nMeanwhile, the adaptive learning mechanism can initialize the adjacency matrix\nwithout being affected by the parameter. Besides that, the latent representations\nare embedded in the Laplacian graph structure to preserve the topology\nstructure of the graph in the vector space. Moreover, the adjacency matrix can\nbe self-learned for better embedding performance when the original graph\nstructure is incomplete. With adaptive learning, the proposed method is much\nmore robust to the graph structure. Experimental studies on several datasets\nvalidate our design and demonstrate that our methods outperform baselines by a\nwide margin in node clustering, node classification, and graph visualization\ntasks.", "authors": "Rui Zhang, Yunxing Zhang, Xuelong Li", "published": "2020-03-10", "updated": "2021-03-23", "primary_cat": "cs.LG", "cats": [ "cs.LG", "stat.ML" ], "category": "Graph AND Structure AND Learning" }, { "url": "http://arxiv.org/abs/1609.04350v2", "title": "Time-Variant Graph Classification", "abstract": "Graphs are commonly used to represent objects, such as images and text, for\npattern classification. In a dynamic world, an object may continuously evolve\nover time, and so does the graph extracted from the underlying object. These\nchanges in graph structure with respect to the temporal order present a new\nrepresentation of the graph, in which an object corresponds to a set of\ntime-variant graphs. In this paper, we formulate a novel time-variant graph\nclassification task and propose a new graph feature, called a graph-shapelet\npattern, for learning and classifying time-variant graphs. Graph-shapelet\npatterns are compact and discriminative graph transformation subsequences. A\ngraph-shapelet pattern can be regarded as a graphical extension of a shapelet\n-- a class of discriminative features designed for vector-based temporal data\nclassification. To discover graph-shapelet patterns, we propose to convert a\ntime-variant graph sequence into time-series data and use the discovered\nshapelets to find graph transformation subsequences as graph-shapelet patterns.\nBy converting each graph-shapelet pattern into a unique tokenized graph\ntransformation sequence, we can measure the similarity between two\ngraph-shapelet patterns and therefore classify time-variant graphs. Experiments\non both synthetic and real-world data demonstrate the superior performance of\nthe proposed algorithms.", "authors": "Haishuai Wang", "published": "2016-09-14", "updated": "2017-06-12", "primary_cat": "cs.DS", "cats": [ "cs.DS" ], "category": "Graph AND Structure AND Learning" }, { "url": "http://arxiv.org/abs/2403.07294v1", "title": "Graph Data Condensation via Self-expressive Graph Structure Reconstruction", "abstract": "With the increasing demands of training graph neural networks (GNNs) on\nlarge-scale graphs, graph data condensation has emerged as a critical technique\nto relieve the storage and time costs during the training phase. It aims to\ncondense the original large-scale graph to a much smaller synthetic graph while\npreserving the essential information necessary for efficiently training a\ndownstream GNN. However, existing methods concentrate either on optimizing node\nfeatures exclusively or endeavor to independently learn node features and the\ngraph structure generator.
They could not explicitly leverage the information\nof the original graph structure and failed to construct an interpretable graph\nstructure for the synthetic dataset. To address these issues, we introduce a\nnovel framework named \\textbf{G}raph Data \\textbf{C}ondensation via\n\\textbf{S}elf-expressive Graph Structure \\textbf{R}econstruction\n(\\textbf{GCSR}). Our method stands out by (1) explicitly incorporating the\noriginal graph structure into the condensing process and (2) capturing the\nnuanced interdependencies between the condensed nodes by reconstructing an\ninterpretable self-expressive graph structure. Extensive experiments and\ncomprehensive analysis validate the efficacy of the proposed method across\ndiverse GNN models and datasets. Our code is available at\nhttps://www.dropbox.com/scl/fi/2aonyp5ln5gisdqtjimu8/GCSR.zip?rlkey=11cuwfpsf54wxiiktu0klud0x&dl=0", + "authors": "Zhanyu Liu, Chaolv Zeng, Guanjie Zheng", + "published": "2024-03-12", + "updated": "2024-03-12", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI", + "cs.SI" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/1611.04687v2", + "title": "Intrinsic Geometric Information Transfer Learning on Multiple Graph-Structured Datasets", + "abstract": "Graphs provide a powerful means for representing complex interactions between\nentities. Recently, deep learning approaches are emerging for representing and\nmodeling graph-structured data, although the conventional deep learning methods\n(such as convolutional neural networks and recurrent neural networks) have\nmainly focused on grid-structured inputs (image and audio). Leveraged by the\ncapability of representation learning, deep learning based techniques are\nreporting promising results for graph applications by detecting structural\ncharacteristics of graphs in an automated fashion. In this paper, we attempt to\nadvance deep learning for graph-structured data by incorporating another\ncomponent, transfer learning. By transferring the intrinsic geometric\ninformation learned in the source domain, our approach can help us to construct\na model for a new but related task in the target domain without collecting new\ndata and without training a new model from scratch. We thoroughly test our\napproach with large-scale real corpora and confirm the effectiveness of the\nproposed transfer learning framework for deep learning on graphs. According to\nour experiments, transfer learning is most effective when the source and target\ndomains bear a high level of structural similarity in their graph\nrepresentations.", + "authors": "Jaekoo Lee, Hyunjae Kim, Jongsun Lee, Sungroh Yoon", + "published": "2016-11-15", + "updated": "2016-12-05", + "primary_cat": "cs.NE", + "cats": [ + "cs.NE" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/2403.03659v1", + "title": "Robust Graph Structure Learning under Heterophily", + "abstract": "Graph is a fundamental mathematical structure in characterizing relations\nbetween different objects and has been widely used on various learning tasks.\nMost methods implicitly assume a given graph to be accurate and complete.\nHowever, real data is inevitably noisy and sparse, which will lead to inferior\nresults. Despite the remarkable success of recent graph representation learning\nmethods, they inherently presume that the graph is homophilic, and largely\noverlook heterophily, where most connected nodes are from different classes. 
In\nthis regard, we propose a novel robust graph structure learning method to\nachieve a high-quality graph from heterophilic data for downstream tasks. We\nfirst apply a high-pass filter to make each node more distinctive from its\nneighbors by encoding structure information into the node features. Then, we\nlearn a robust graph with an adaptive norm characterizing different levels of\nnoise. Afterwards, we propose a novel regularizer to further refine the graph\nstructure. Clustering and semi-supervised classification experiments on\nheterophilic graphs verify the effectiveness of our method.", + "authors": "Xuanting Xie, Zhao Kang, Wenyu Chen", + "published": "2024-03-06", + "updated": "2024-03-06", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/2101.06861v3", + "title": "Discrete Graph Structure Learning for Forecasting Multiple Time Series", + "abstract": "Time series forecasting is an extensively studied subject in statistics,\neconomics, and computer science. Exploration of the correlation and causation\namong the variables in a multivariate time series shows promise in enhancing\nthe performance of a time series model. When using deep neural networks as\nforecasting models, we hypothesize that exploiting the pairwise information\namong multiple (multivariate) time series also improves their forecast. If an\nexplicit graph structure is known, graph neural networks (GNNs) have been\ndemonstrated as powerful tools to exploit the structure. In this work, we\npropose learning the structure simultaneously with the GNN if the graph is\nunknown. We cast the problem as learning a probabilistic graph model through\noptimizing the mean performance over the graph distribution. The distribution\nis parameterized by a neural network so that discrete graphs can be sampled\ndifferentiably through reparameterization. Empirical evaluations show that our\nmethod is simpler, more efficient, and better performing than a recently\nproposed bilevel learning approach for graph structure learning, as well as a\nbroad array of forecasting models, either deep or non-deep learning based, and\ngraph or non-graph based.", + "authors": "Chao Shang, Jie Chen, Jinbo Bi", + "published": "2021-01-18", + "updated": "2021-04-21", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "stat.ML" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/2401.00876v1", + "title": "Balanced Graph Structure Information for Brain Disease Detection", + "abstract": "Analyzing connections between brain regions of interest (ROI) is vital to\ndetect neurological disorders such as autism or schizophrenia. Recent\nadvancements employ graph neural networks (GNNs) to utilize graph structures in\nbrains, improving detection performances. Current methods use correlation\nmeasures between ROI's blood-oxygen-level-dependent (BOLD) signals to generate\nthe graph structure. Other methods use the training samples to learn the\noptimal graph structure through end-to-end learning. However, implementing\nthose methods independently leads to some issues with noisy data for the\ncorrelation graphs and overfitting problems for the optimal graph. In this\nwork, we proposed Bargrain (balanced graph structure for brains), which models\ntwo graph structures: filtered correlation matrix and optimal sample graph\nusing graph convolution networks (GCNs). 
This approach aims to gain advantages\nfrom both graphs and address the limitations of relying on only a single type\nof structure. Based on our extensive experiments, Bargrain outperforms\nstate-of-the-art methods in classification tasks on brain disease datasets, as\nmeasured by average F1 scores.", + "authors": "Falih Gozi Febrinanto, Mujie Liu, Feng Xia", + "published": "2023-12-30", + "updated": "2023-12-30", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI", + "q-bio.NC" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/2008.10065v1", + "title": "Kernel-based Graph Learning from Smooth Signals: A Functional Viewpoint", + "abstract": "The problem of graph learning concerns the construction of an explicit\ntopological structure revealing the relationship between nodes representing\ndata entities, which plays an increasingly important role in the success of\nmany graph-based representations and algorithms in the field of machine\nlearning and graph signal processing. In this paper, we propose a novel graph\nlearning framework that incorporates the node-side and observation-side\ninformation, and in particular the covariates that help to explain the\ndependency structures in graph signals. To this end, we consider graph signals\nas functions in the reproducing kernel Hilbert space associated with a\nKronecker product kernel, and integrate functional learning with\nsmoothness-promoting graph learning to learn a graph representing the\nrelationship between nodes. The functional learning increases the robustness of\ngraph learning against missing and incomplete information in the graph signals.\nIn addition, we develop a novel graph-based regularisation method which, when\ncombined with the Kronecker product kernel, enables our model to capture both\nthe dependency explained by the graph and the dependency due to graph signals\nobserved under different but related circumstances, e.g. different points in\ntime. The latter means the graph signals are free from the i.i.d. assumptions\nrequired by the classical graph learning models. Experiments on both synthetic\nand real-world data show that our methods outperform the state-of-the-art\nmodels in learning a meaningful graph topology from graph signals, in\nparticular under heavy noise, missing values, and multiple dependencies.", + "authors": "Xingyue Pu, Siu Lun Chau, Xiaowen Dong, Dino Sejdinovic", + "published": "2020-08-23", + "updated": "2020-08-23", + "primary_cat": "stat.ML", + "cats": [ + "stat.ML", + "cs.LG", + "cs.SI", + "eess.SP" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/2403.04923v2", + "title": "Control-based Graph Embeddings with Data Augmentation for Contrastive Learning", + "abstract": "In this paper, we study the problem of unsupervised graph representation\nlearning by harnessing the control properties of dynamical networks defined on\ngraphs. Our approach introduces a novel framework for contrastive learning, a\nwidely prevalent technique for unsupervised representation learning. A crucial\nstep in contrastive learning is the creation of 'augmented' graphs from the\ninput graphs. Though different from the original graphs, these augmented graphs\nretain the original graph's structural characteristics. Here, we propose a\nunique method for generating these augmented graphs by leveraging the control\nproperties of networks.
The core concept revolves around perturbing the\noriginal graph to create a new one while preserving the controllability\nproperties specific to networks and graphs. Compared to the existing methods,\nwe demonstrate that this innovative approach enhances the effectiveness of\ncontrastive learning frameworks, leading to superior results regarding the\naccuracy of the classification tasks. The key innovation lies in our ability to\ndecode the network structure using these control properties, opening new\navenues for unsupervised graph representation learning.", + "authors": "Obaid Ullah Ahmad, Anwar Said, Mudassir Shabbir, Waseem Abbas, Xenofon Koutsoukos", + "published": "2024-03-07", + "updated": "2024-04-18", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.MA", + "cs.SY", + "eess.SY" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/1904.09792v1", + "title": "A Unified Framework for Structured Graph Learning via Spectral Constraints", + "abstract": "Graph learning from data represents a canonical problem that has received\nsubstantial attention in the literature. However, insufficient work has been\ndone in incorporating prior structural knowledge onto the learning of\nunderlying graphical models from data. Learning a graph with a specific\nstructure is essential for interpretability and identification of the\nrelationships among data. Useful structured graphs include the multi-component\ngraph, bipartite graph, connected graph, sparse graph, and regular graph. In\ngeneral, structured graph learning is an NP-hard combinatorial problem,\ntherefore, designing a general tractable optimization method is extremely\nchallenging. In this paper, we introduce a unified graph learning framework\nlying at the integration of Gaussian graphical models and spectral graph\ntheory. To impose a particular structure on a graph, we first show how to\nformulate the combinatorial constraints as an analytical property of the graph\nmatrix. Then we develop an optimization framework that leverages graph learning\nwith specific structures via spectral constraints on graph matrices. The\nproposed algorithms are provably convergent, computationally efficient, and\npractically amenable for numerous graph-based tasks. Extensive numerical\nexperiments with both synthetic and real data sets illustrate the effectiveness\nof the proposed algorithms. The code for all the simulations is made available\nas an open source repository.", + "authors": "Sandeep Kumar, Jiaxi Ying, Jos\u00e9 Vin\u00edcius de M. Cardoso, Daniel Palomar", + "published": "2019-04-22", + "updated": "2019-04-22", + "primary_cat": "stat.ML", + "cats": [ + "stat.ML", + "cs.LG", + "cs.SI", + "math.OC" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/1912.10206v1", + "title": "How Robust Are Graph Neural Networks to Structural Noise?", + "abstract": "Graph neural networks (GNNs) are an emerging model for learning graph\nembeddings and making predictions on graph structured data. However, robustness\nof graph neural networks is not yet well-understood. In this work, we focus on\nnode structural identity predictions, where a representative GNN model is able\nto achieve near-perfect accuracy. We also show that the same GNN model is not\nrobust to addition of structural noise, through a controlled dataset and set of\nexperiments. 
Finally, we show that under the right conditions, graph-augmented\ntraining is capable of significantly improving robustness to structural noise.", + "authors": "James Fox, Sivasankaran Rajamanickam", + "published": "2019-12-21", + "updated": "2019-12-21", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "stat.ML" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/1910.01743v1", + "title": "Graph Generation with Variational Recurrent Neural Network", + "abstract": "Generating graph structures is a challenging problem due to the diverse\nrepresentations and complex dependencies among nodes. In this paper, we\nintroduce Graph Variational Recurrent Neural Network (GraphVRNN), a\nprobabilistic autoregressive model for graph generation. Through modeling the\nlatent variables of graph data, GraphVRNN can capture the joint distributions\nof graph structures and the underlying node attributes. We conduct experiments\non the proposed GraphVRNN in both graph structure learning and attribute\ngeneration tasks. The evaluation results show that the variational component\nallows our network to model complicated distributions, as well as generate\nplausible structures and node attributes.", + "authors": "Shih-Yang Su, Hossein Hajimirsadeghi, Greg Mori", + "published": "2019-10-02", + "updated": "2019-10-02", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "stat.ML" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/2106.15239v1", + "title": "Generating the Graph Gestalt: Kernel-Regularized Graph Representation Learning", + "abstract": "Recent work on graph generative models has made remarkable progress towards\ngenerating increasingly realistic graphs, as measured by global graph features\nsuch as degree distribution, density, and clustering coefficients. Deep\ngenerative models have also made significant advances through better modelling\nof the local correlations in the graph topology, which have been very useful\nfor predicting unobserved graph components, such as the existence of a link or\nthe class of a node, from nearby observed graph components. A complete\nscientific understanding of graph data should address both global and local\nstructure. In this paper, we propose a joint model for both as complementary\nobjectives in a graph VAE framework. Global structure is captured by\nincorporating graph kernels in a probabilistic model whose loss function is\nclosely related to the maximum mean discrepancy (MMD) between the global\nstructures of the reconstructed and the input graphs. The ELBO objective\nderived from the model regularizes a standard local link reconstruction term\nwith an MMD term. Our experiments demonstrate a significant improvement in the\nrealism of the generated graph structures, typically by 1-2 orders of magnitude\nin graph structure metrics, compared to leading graph VAE and GAN models.
Local\nlink reconstruction improves as well in many cases.", + "authors": "Kiarash Zahirnia, Ankita Sakhuja, Oliver Schulte, Parmis Nadaf, Ke Li, Xia Hu", + "published": "2021-06-29", + "updated": "2021-06-29", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/2212.01749v1", + "title": "Semantic Graph Neural Network with Multi-measure Learning for Semi-supervised Classification", + "abstract": "Graph Neural Networks (GNNs) have attracted increasing attention in recent\nyears and have achieved excellent performance in semi-supervised node\nclassification tasks. The success of most GNNs relies on one fundamental\nassumption, i.e., the original graph structure data is available. However,\nrecent studies have shown that GNNs are vulnerable to the complex underlying\nstructure of the graph, making it necessary to learn comprehensive and robust\ngraph structures for downstream tasks, rather than relying only on the raw\ngraph structure. In light of this, we seek to learn optimal graph structures\nfor downstream tasks and propose a novel framework for semi-supervised\nclassification. Specifically, based on the structural context information of\ngraph and node representations, we encode the complex interactions in semantics\nand generate semantic graphs to preserve the global structure. Moreover, we\ndevelop a novel multi-measure attention layer to optimize the similarity rather\nthan prescribing it a priori, so that the similarity can be adaptively\nevaluated by integrating measures. These graphs are fused and optimized\ntogether with the GNN towards the semi-supervised classification objective. Extensive\nexperiments and ablation studies on six real-world datasets clearly demonstrate\nthe effectiveness of our proposed model and the contribution of each component.", + "authors": "Junchao Lin, Yuan Wan, Jingwen Xu, Xingchen Qi", + "published": "2022-12-04", + "updated": "2022-12-04", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/2204.01855v2", + "title": "A Survey on Graph Representation Learning Methods", + "abstract": "Graph representation learning has been a very active research area in recent\nyears. The goal of graph representation learning is to generate graph\nrepresentation vectors that capture the structure and features of large graphs\naccurately. This is especially important because the quality of the graph\nrepresentation vectors will affect the performance of these vectors in\ndownstream tasks such as node classification, link prediction and anomaly\ndetection. Many techniques are proposed for generating effective graph\nrepresentation vectors. Two of the most prevalent categories of graph\nrepresentation learning are graph embedding methods without using graph neural\nnets (GNN), which we denote as non-GNN based graph embedding methods, and graph\nneural nets (GNN) based methods. Non-GNN graph embedding methods are based on\ntechniques such as random walks, temporal point processes and neural network\nlearning methods. GNN-based methods, on the other hand, are the application of\ndeep learning on graph data. In this survey, we provide an overview of these\ntwo categories and cover the current state-of-the-art methods for both static\nand dynamic graphs.
Finally, we explore some open and ongoing research\ndirections for future work.", + "authors": "Shima Khoshraftar, Aijun An", + "published": "2022-04-04", + "updated": "2022-06-15", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.SI" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/1811.09971v1", + "title": "Graph Learning-Convolutional Networks", + "abstract": "Recently, graph Convolutional Neural Networks (graph CNNs) have been widely\nused for graph data representation and semi-supervised learning tasks. However,\nexisting graph CNNs generally use a fixed graph which may not be optimal for\nsemi-supervised learning tasks. In this paper, we propose a novel Graph\nLearning-Convolutional Network (GLCN) for graph data representation and\nsemi-supervised learning. The aim of GLCN is to learn an optimal graph\nstructure that best serves graph CNNs for semi-supervised learning by\nintegrating both graph learning and graph convolution together in a unified\nnetwork architecture. The main advantage is that in GLCN, both given labels and\nthe estimated labels are incorporated and thus can provide useful 'weakly'\nsupervised information to refine (or learn) the graph construction and also to\nfacilitate the graph convolution operation in GLCN for unknown label\nestimation. Experimental results on seven benchmarks demonstrate that GLCN\nsignificantly outperforms state-of-the-art traditional fixed structure based\ngraph CNNs.", + "authors": "Bo Jiang, Ziyan Zhang, Doudou Lin, Jin Tang", + "published": "2018-11-25", + "updated": "2018-11-25", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/2306.11307v3", + "title": "Transforming Graphs for Enhanced Attribute Clustering: An Innovative Graph Transformer-Based Method", + "abstract": "Graph Representation Learning (GRL) is an influential methodology, enabling a\nmore profound understanding of graph-structured data and aiding graph\nclustering, a critical task across various domains. The recent incursion of\nattention mechanisms, originally an artifact of Natural Language Processing\n(NLP), into the realm of graph learning has spearheaded a notable shift in\nresearch trends. Consequently, Graph Attention Networks (GATs) and Graph\nAttention Auto-Encoders have emerged as preferred tools for graph clustering\ntasks. Yet, these methods primarily employ a local attention mechanism, thereby\ncurbing their capacity to apprehend the intricate global dependencies between\nnodes within graphs. Addressing these impediments, this study introduces an\ninnovative method known as the Graph Transformer Auto-Encoder for Graph\nClustering (GTAGC). By melding the Graph Auto-Encoder with the Graph\nTransformer, GTAGC is adept at capturing global dependencies between nodes.\nThis integration amplifies the graph representation and surmounts the\nconstraints posed by the local attention mechanism. The architecture of GTAGC\nencompasses graph embedding, integration of the Graph Transformer within the\nautoencoder structure, and a clustering component. It strategically alternates\nbetween graph embedding and clustering, thereby tailoring the Graph Transformer\nfor clustering tasks, whilst preserving the graph's global structural\ninformation.
Through extensive experimentation on diverse benchmark datasets,\nGTAGC has exhibited superior performance against existing state-of-the-art\ngraph clustering methodologies.", + "authors": "Shuo Han, Jiacheng Liu, Jiayun Wu, Yinan Chen, Li Tao", + "published": "2023-06-20", + "updated": "2023-08-12", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/2202.10688v2", + "title": "Graph Lifelong Learning: A Survey", + "abstract": "Graph learning is a popular approach for performing machine learning on\ngraph-structured data. It has revolutionized the machine learning ability to\nmodel graph data to address downstream tasks. Its application is wide due to\nthe availability of graph data ranging from all types of networks to\ninformation systems. Most graph learning methods assume that the graph is\nstatic and its complete structure is known during training. This limits their\napplicability since they cannot be applied to problems where the underlying\ngraph grows over time and/or new tasks emerge incrementally. Such applications\nrequire a lifelong learning approach that can learn the graph continuously and\naccommodate new information whilst retaining previously learned knowledge.\nLifelong learning methods that enable continuous learning in regular domains\nlike images and text cannot be directly applied to continuously evolving graph\ndata, due to its irregular structure. As a result, graph lifelong learning is\ngaining attention from the research community. This survey paper provides a\ncomprehensive overview of recent advancements in graph lifelong learning,\nincluding the categorization of existing methods, and the discussions of\npotential applications and open research problems.", + "authors": "Falih Gozi Febrinanto, Feng Xia, Kristen Moore, Chandra Thapa, Charu Aggarwal", + "published": "2022-02-22", + "updated": "2022-11-04", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI", + "68T07, 68T05", + "I.2.6" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/1901.07439v1", + "title": "Multiple Graph Adversarial Learning", + "abstract": "Recently, Graph Convolutional Networks (GCNs) have been widely studied for\ngraph-structured data representation and learning. However, in many real\napplications, data are coming with multiple graphs, and it is non-trivial to\nadapt GCNs to deal with data representation with multiple graph structures. One\nmain challenge for multi-graph representation is how to exploit both structure\ninformation of each individual graph and correlation information across\nmultiple graphs simultaneously. In this paper, we propose a novel Multiple\nGraph Adversarial Learning (MGAL) framework for multi-graph representation and\nlearning. MGAL aims to learn an optimal structure-invariant and consistent\nrepresentation for multiple graphs in a common subspace via a novel adversarial\nlearning framework, which thus incorporates both structure information of\nintra-graph and correlation information of inter-graphs simultaneously. 
Based\non MGAL, we then provide a unified network for the semi-supervised learning task.\nPromising experimental results demonstrate the effectiveness of the MGAL model.", + "authors": "Bo Jiang, Ziyan Zhang, Jin Tang, Bin Luo", + "published": "2019-01-22", + "updated": "2019-01-22", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/2110.05018v2", + "title": "Time-varying Graph Learning Under Structured Temporal Priors", + "abstract": "This paper endeavors to learn time-varying graphs by using structured\ntemporal priors that assume underlying relations between any two graphs\nin the graph sequence. Different from many existing chain structure based\nmethods in which the priors like temporal homogeneity can only describe the\nvariations of two consecutive graphs, we propose a structure named\n\\emph{temporal graph} to characterize the underlying real temporal relations.\nUnder this framework, the chain structure is actually a special case of our\ntemporal graph. We further propose the Alternating Direction Method of Multipliers\n(ADMM), a distributed algorithm, to solve the induced optimization problem.\nNumerical experiments demonstrate the superiority of our method.", + "authors": "Xiang Zhang, Qiao Wang", + "published": "2021-10-11", + "updated": "2022-02-23", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "eess.SP" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/2402.16374v2", + "title": "Graph Learning under Distribution Shifts: A Comprehensive Survey on Domain Adaptation, Out-of-distribution, and Continual Learning", + "abstract": "Graph learning plays a pivotal role and has gained significant attention in\nvarious application scenarios, from social network analysis to recommendation\nsystems, for its effectiveness in modeling complex data relations represented\nby graph structural data. In reality, the real-world graph data typically show\ndynamics over time, with changing node attributes and edge structure, leading\nto the severe graph data distribution shift issue. This issue is compounded by\nthe diverse and complex nature of distribution shifts, which can significantly\nimpact the performance of graph learning methods in degraded generalization and\nadaptation capabilities, posing a substantial challenge to their effectiveness.\nIn this survey, we provide a comprehensive review and summary of the latest\napproaches, strategies, and insights that address distribution shifts within\nthe context of graph learning. Concretely, according to the observability of\ndistributions in the inference stage and the availability of sufficient\nsupervision information in the training stage, we categorize existing graph\nlearning methods into several essential scenarios, including graph domain\nadaptation learning, graph out-of-distribution learning, and graph continual\nlearning. For each scenario, a detailed taxonomy is proposed, with specific\ndescriptions and discussions of existing progress made in distribution-shifted\ngraph learning. Additionally, we discuss the potential applications and future\ndirections for graph learning under distribution shifts with a systematic\nanalysis of the current state in this field.
The survey is positioned to\nprovide general guidance for the development of effective graph learning\nalgorithms in handling graph distribution shifts, and to stimulate future\nresearch and advancements in this area.", + "authors": "Man Wu, Xin Zheng, Qin Zhang, Xiao Shen, Xiong Luo, Xingquan Zhu, Shirui Pan", + "published": "2024-02-26", + "updated": "2024-03-07", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.SI" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/2210.02060v1", + "title": "Graph Classification via Discriminative Edge Feature Learning", + "abstract": "Spectral graph convolutional neural networks (GCNNs) have been producing\nencouraging results in graph classification tasks. However, most spectral GCNNs\nutilize fixed graphs when aggregating node features, while omitting edge\nfeature learning and failing to get an optimal graph structure. Moreover, many\nexisting graph datasets do not provide initialized edge features, further\nrestraining the ability of learning edge features via spectral GCNNs. In this\npaper, we try to address this issue by designing an edge feature scheme and an\nadd-on layer between every two stacked graph convolution layers in GCNN. Both\nare lightweight while effective in filling the gap between edge feature\nlearning and performance enhancement of graph classification. The edge feature\nscheme makes edge features adapt to node representations at different graph\nconvolution layers. The add-on layers help adjust the edge features to an\noptimal graph structure. To test the effectiveness of our method, we take\nEuclidean positions as initial node features and extract graphs with semantic\ninformation from point cloud objects. The node features of our extracted graphs\nare more scalable for edge feature learning than most existing graph datasets\n(in one-hot encoded label format). Three new graph datasets are constructed\nbased on ModelNet40, ModelNet10 and ShapeNet Part datasets. Experimental\nresults show that our method outperforms state-of-the-art graph classification\nmethods on the new datasets by reaching 96.56% overall accuracy on\nGraph-ModelNet40, 98.79% on Graph-ModelNet10 and 97.91% on Graph-ShapeNet Part.\nThe constructed graph datasets will be released to the community.", + "authors": "Yang Yi, Xuequan Lu, Shang Gao, Antonio Robles-Kelly, Yuejie Zhang", + "published": "2022-10-05", + "updated": "2022-10-05", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV", + "cs.LG" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/1802.04407v2", + "title": "Adversarially Regularized Graph Autoencoder for Graph Embedding", + "abstract": "Graph embedding is an effective method to represent graph data in a low\ndimensional space for graph analytics. Most existing embedding algorithms\ntypically focus on preserving the topological structure or minimizing the\nreconstruction errors of graph data, but they have mostly ignored the data\ndistribution of the latent codes from the graphs, which often results in\ninferior embedding in real-world graph data. In this paper, we propose a novel\nadversarial graph embedding framework for graph data. The framework encodes the\ntopological structure and node content in a graph to a compact representation,\non which a decoder is trained to reconstruct the graph structure. Furthermore,\nthe latent representation is enforced to match a prior distribution via an\nadversarial training scheme. 
To learn a robust embedding, two variants of\nadversarial approaches, adversarially regularized graph autoencoder (ARGA) and\nadversarially regularized variational graph autoencoder (ARVGA), are developed.\nExperimental studies on real-world graphs validate our design and demonstrate\nthat our algorithms outperform baselines by a wide margin in link prediction,\ngraph clustering, and graph visualization tasks.", + "authors": "Shirui Pan, Ruiqi Hu, Guodong Long, Jing Jiang, Lina Yao, Chengqi Zhang", + "published": "2018-02-13", + "updated": "2019-01-08", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "stat.ML" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/2209.07817v2", + "title": "SPGP: Structure Prototype Guided Graph Pooling", + "abstract": "While graph neural networks (GNNs) have been successful for node\nclassification tasks and link prediction tasks in graph, learning graph-level\nrepresentations still remains a challenge. For the graph-level representation,\nit is important to learn both representation of neighboring nodes, i.e.,\naggregation, and graph structural information. A number of graph pooling\nmethods have been developed for this goal. However, most of the existing\npooling methods utilize k-hop neighborhood without considering explicit\nstructural information in a graph. In this paper, we propose Structure\nPrototype Guided Pooling (SPGP) that utilizes prior graph structures to\novercome the limitation. SPGP formulates graph structures as learnable\nprototype vectors and computes the affinity between nodes and prototype\nvectors. This leads to a novel node scoring scheme that prioritizes informative\nnodes while encapsulating the useful structures of the graph. Our experimental\nresults show that SPGP outperforms state-of-the-art graph pooling methods on\ngraph classification benchmark datasets in both accuracy and scalability.", + "authors": "Sangseon Lee, Dohoon Lee, Yinhua Piao, Sun Kim", + "published": "2022-09-16", + "updated": "2023-03-01", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.SI" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/2312.04762v1", + "title": "The Graph Lottery Ticket Hypothesis: Finding Sparse, Informative Graph Structure", + "abstract": "Graph learning methods help utilize implicit relationships among data items,\nthereby reducing training label requirements and improving task performance.\nHowever, determining the optimal graph structure for a particular learning task\nremains a challenging research problem.\n In this work, we introduce the Graph Lottery Ticket (GLT) Hypothesis - that\nthere is an extremely sparse backbone for every graph, and that graph learning\nalgorithms attain comparable performance when trained on that subgraph as on\nthe full graph. We identify and systematically study 8 key metrics of interest\nthat directly influence the performance of graph learning algorithms.\nSubsequently, we define the notion of a \"winning ticket\" for graph structure -\nan extremely sparse subset of edges that can deliver a robust approximation of\nthe entire graph's performance. We propose a straightforward and efficient\nalgorithm for finding these GLTs in arbitrary graphs. 
Empirically, we observe\nthat the performance of different graph learning algorithms can be matched or even\nexceeded on graphs with an average degree as low as 5.", + "authors": "Anton Tsitsulin, Bryan Perozzi", + "published": "2023-12-08", + "updated": "2023-12-08", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI", + "cs.SI" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/2202.08235v3", + "title": "Data Augmentation for Deep Graph Learning: A Survey", + "abstract": "Graph neural networks, a powerful deep learning tool to model\ngraph-structured data, have demonstrated remarkable performance on numerous\ngraph learning tasks. To address the data noise and data scarcity issues in\ndeep graph learning, the research on graph data augmentation has intensified\nlately. However, conventional data augmentation methods can hardly handle\ngraph-structured data which is defined in non-Euclidean space with\nmulti-modality. In this survey, we formally formulate the problem of graph data\naugmentation and further review the representative techniques and their\napplications in different deep graph learning problems. Specifically, we first\npropose a taxonomy for graph data augmentation techniques and then provide a\nstructured review by categorizing the related work based on the augmented\ninformation modalities. Moreover, we summarize the applications of graph data\naugmentation in two representative problems in data-centric deep graph\nlearning: (1) reliable graph learning which focuses on enhancing the utility of\nthe input graph as well as the model capacity via graph data augmentation; and (2)\nlow-resource graph learning which targets enlarging the labeled training\ndata scale through graph data augmentation. For each problem, we also provide a\nhierarchical problem taxonomy and review the existing literature related to\ngraph data augmentation. Finally, we point out promising research directions\nand the challenges in future research.", + "authors": "Kaize Ding, Zhe Xu, Hanghang Tong, Huan Liu", + "published": "2022-02-16", + "updated": "2022-11-21", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/2210.01489v1", + "title": "Generative Models and Learning Algorithms for Core-Periphery Structured Graphs", + "abstract": "We consider core-periphery structured graphs, which are graphs with a group\nof densely and sparsely connected nodes, respectively, referred to as core and\nperiphery nodes. The so-called core score of a node is related to the\nlikelihood of it being a core node. In this paper, we focus on learning the\ncore scores of a graph from its node attributes and connectivity structure. To\nthis end, we propose two classes of probabilistic graphical models: affine and\nnonlinear. First, we describe affine generative models to model the dependence\nof node attributes on its core scores, which determine the graph structure.\nNext, we discuss nonlinear generative models in which the partial correlations\nof node attributes influence the graph structure through latent core scores. We\ndevelop algorithms for inferring the model parameters and core scores of a\ngraph when both the graph structure and node attributes are available. When\nonly the node attributes of graphs are available, we jointly learn a\ncore-periphery structured graph and its core scores.
We provide results from\nnumerical experiments on several synthetic and real-world datasets to\ndemonstrate the efficacy of the developed models and algorithms.", + "authors": "Sravanthi Gurugubelli, Sundeep Prabhakar Chepuri", + "published": "2022-10-04", + "updated": "2022-10-04", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/2306.11264v1", + "title": "GraphGLOW: Universal and Generalizable Structure Learning for Graph Neural Networks", + "abstract": "Graph structure learning is a well-established problem that aims at\noptimizing graph structures adaptive to specific graph datasets to help message\npassing neural networks (i.e., GNNs) to yield effective and robust node\nembeddings. However, the common limitation of existing models lies in the\nunderlying \\textit{closed-world assumption}: the testing graph is the same as\nthe training graph. This premise requires independently training the structure\nlearning model from scratch for each graph dataset, which leads to prohibitive\ncomputation costs and potential risks for serious over-fitting. To mitigate\nthese issues, this paper explores a new direction that moves forward to learn a\nuniversal structure learning model that can generalize across graph datasets in\nan open world. We first introduce the mathematical definition of this novel\nproblem setting, and describe the model formulation from a probabilistic\ndata-generative aspect. Then we devise a general framework that coordinates a\nsingle graph-shared structure learner and multiple graph-specific GNNs to\ncapture the generalizable patterns of optimal message-passing topology across\ndatasets. The well-trained structure learner can directly produce adaptive\nstructures for unseen target graphs without any fine-tuning. Across diverse\ndatasets and various challenging cross-graph generalization protocols, our\nexperiments show that even without training on target graphs, the proposed\nmodel i) significantly outperforms expressive GNNs trained on input\n(non-optimized) topology, and ii) surprisingly performs on par with\nstate-of-the-art models that independently optimize adaptive structures for\nspecific target graphs, with notably orders-of-magnitude acceleration for\ntraining on the target graph.", + "authors": "Wentao Zhao, Qitian Wu, Chenxiao Yang, Junchi Yan", + "published": "2023-06-20", + "updated": "2023-06-20", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/2212.08966v4", + "title": "Graph Learning and Its Advancements on Large Language Models: A Holistic Survey", + "abstract": "Graph learning is a prevalent domain that endeavors to learn the intricate\nrelationships among nodes and the topological structure of graphs. Over the\nyears, graph learning has transcended from graph theory to graph data mining.\nWith the advent of representation learning, it has attained remarkable\nperformance in diverse scenarios. Owing to its extensive application prospects,\ngraph learning attracts copious attention. While some researchers have\naccomplished impressive surveys on graph learning, they failed to connect\nrelated objectives, methods, and applications in a more coherent way. As a\nresult, they did not encompass current ample scenarios and challenging problems\ndue to the rapid expansion of graph learning. 
Particularly, large language\nmodels have recently had a disruptive effect on human life, but they also show\nrelative weakness in structured scenarios. The question of how to make these\nmodels more powerful with graph learning remains open. Our survey focuses on\nthe most recent advancements in integrating graph learning with pre-trained\nlanguage models, specifically emphasizing their application within the domain\nof large language models. Different from previous surveys on graph learning, we\nprovide a holistic review that analyzes current works from the perspective of\ngraph structure, and discusses the latest applications, trends, and challenges\nin graph learning. Specifically, we commence by proposing a taxonomy and then\nsummarize the methods employed in graph learning. We then provide a detailed\nelucidation of mainstream applications. Finally, we propose future directions.", + "authors": "Shaopeng Wei, Yu Zhao, Xingyan Chen, Qing Li, Fuzhen Zhuang, Ji Liu, Fuji Ren, Gang Kou", + "published": "2022-12-17", + "updated": "2023-11-18", + "primary_cat": "cs.AI", + "cats": [ + "cs.AI" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/1902.10042v2", + "title": "Graph Neural Processes: Towards Bayesian Graph Neural Networks", + "abstract": "We introduce Graph Neural Processes (GNP), inspired by the recent work in\nconditional and latent neural processes. A Graph Neural Process is defined as a\nConditional Neural Process that operates on arbitrary graph data. It takes\nfeatures of sparsely observed context points as input, and outputs a\ndistribution over target points. We demonstrate graph neural processes in edge\nimputation and discuss benefits and drawbacks of the method for other\napplication areas. One major benefit of GNPs is the ability to quantify\nuncertainty in deep learning on graph structures. An additional benefit of this\nmethod is the ability to extend graph neural networks to inputs of dynamic\nsized graphs.", + "authors": "Andrew Carr, David Wingate", + "published": "2019-02-26", + "updated": "2019-10-01", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "stat.ML" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/2109.11796v1", + "title": "Edge but not Least: Cross-View Graph Pooling", + "abstract": "Graph neural networks have emerged as a powerful model for graph\nrepresentation learning to undertake graph-level prediction tasks. Various\ngraph pooling methods have been developed to coarsen an input graph into a\nsuccinct graph-level representation through aggregating node embeddings\nobtained via graph convolution. However, most graph pooling methods are heavily\nnode-centric and are unable to fully leverage the crucial information contained\nin global graph structure. This paper presents a cross-view graph pooling\n(Co-Pooling) method to better exploit crucial graph structure information. The\nproposed Co-Pooling fuses pooled representations learnt from both node view and\nedge view. Through cross-view interaction, edge-view pooling and node-view\npooling seamlessly reinforce each other to learn more informative graph-level\nrepresentations. Co-Pooling has the advantage of handling various graphs with\ndifferent types of node attributes. 
Extensive experiments on a total of 15\ngraph benchmark datasets validate the effectiveness of our proposed method,\ndemonstrating its superior performance over state-of-the-art pooling methods on\nboth graph classification and graph regression tasks.", + "authors": "Xiaowei Zhou, Jie Yin, Ivor W. Tsang", + "published": "2021-09-24", + "updated": "2021-09-24", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/2307.02126v1", + "title": "Robust Graph Structure Learning with the Alignment of Features and Adjacency Matrix", + "abstract": "To improve the robustness of graph neural networks (GNN), graph structure\nlearning (GSL) has attracted great interest due to the pervasiveness of noise\nin graph data. Many approaches have been proposed for GSL to jointly learn a\nclean graph structure and corresponding representations. To extend the previous\nwork, this paper proposes a novel regularized GSL approach, particularly with\nan alignment of feature information and graph information, which is motivated\nmainly by our derived lower bound of node-level Rademacher complexity for GNNs.\nAdditionally, our proposed approach incorporates sparse dimensional reduction\nto leverage low-dimensional node features that are relevant to the graph\nstructure. To evaluate the effectiveness of our approach, we conduct\nexperiments on real-world graphs. The results demonstrate that our proposed GSL\nmethod outperforms several competitive baselines, especially in scenarios where\nthe graph structures are heavily affected by noise. Overall, our research\nhighlights the importance of integrating feature and graph information\nalignment in GSL, as inspired by our derived theoretical result, and showcases\nthe superiority of our approach in handling noisy graph structures through\ncomprehensive experiments on real-world datasets.", + "authors": "Shaogao Lv, Gang Wen, Shiyu Liu, Linsen Wei, Ming Li", + "published": "2023-07-05", + "updated": "2023-07-05", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "stat.ML" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/1801.03226v1", + "title": "Adaptive Graph Convolutional Neural Networks", + "abstract": "Graph Convolutional Neural Networks (Graph CNNs) are generalizations of\nclassical CNNs to handle graph data such as molecular data, point clouds and\nsocial networks. Current filters in graph CNNs are built for a fixed and shared\ngraph structure. However, for most real data, the graph structures vary in\nboth size and connectivity. The paper proposes a generalized and flexible graph\nCNN taking data of arbitrary graph structure as input. In that way, a\ntask-driven adaptive graph is learned for each graph sample during training. To\nefficiently learn the graph, a distance metric learning approach is proposed.
Extensive\nexperiments on nine graph-structured datasets have demonstrated superior\nimprovements in both convergence speed and predictive accuracy.", + "authors": "Ruoyu Li, Sheng Wang, Feiyun Zhu, Junzhou Huang", + "published": "2018-01-10", + "updated": "2018-01-10", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "stat.ML" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/2206.01152v2", + "title": "Causal Structure Learning: a Combinatorial Perspective", + "abstract": "In this review, we discuss approaches for learning causal structure from\ndata, also called causal discovery. In particular, we focus on approaches for\nlearning directed acyclic graphs (DAGs) and various generalizations which allow\nfor some variables to be unobserved in the available data. We devote special\nattention to two fundamental combinatorial aspects of causal structure\nlearning. First, we discuss the structure of the search space over causal\ngraphs. Second, we discuss the structure of equivalence classes over causal\ngraphs, i.e., sets of graphs which represent what can be learned from\nobservational data alone, and how these equivalence classes can be refined by\nadding interventional data.", + "authors": "Chandler Squires, Caroline Uhler", + "published": "2022-06-02", + "updated": "2022-12-19", + "primary_cat": "stat.ME", + "cats": [ + "stat.ME", + "cs.LG" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/2012.05980v1", + "title": "CommPOOL: An Interpretable Graph Pooling Framework for Hierarchical Graph Representation Learning", + "abstract": "Recent years have witnessed the emergence and flourishing of hierarchical\ngraph pooling neural networks (HGPNNs) which are effective graph representation\nlearning approaches for graph level tasks such as graph classification.\nHowever, current HGPNNs do not take full advantage of the graph's intrinsic\nstructures (e.g., community structure). Moreover, the pooling operations in\nexisting HGPNNs are difficult to interpret. In this paper, we propose a\nnew interpretable graph pooling framework - CommPOOL, that can capture and\npreserve the hierarchical community structure of graphs in the graph\nrepresentation learning process. Specifically, the proposed community pooling\nmechanism in CommPOOL utilizes an unsupervised approach for capturing the\ninherent community structure of graphs in an interpretable manner. CommPOOL is\na general and flexible framework for hierarchical graph representation learning\nthat can further facilitate various graph-level tasks. Evaluations on five\npublic benchmark datasets and one synthetic dataset demonstrate the superior\nperformance of CommPOOL in graph representation learning for graph\nclassification compared to the state-of-the-art baseline methods, and its\neffectiveness in capturing and preserving the community structure of graphs.", + "authors": "Haoteng Tang, Guixiang Ma, Lifang He, Heng Huang, Liang Zhan", + "published": "2020-12-10", + "updated": "2020-12-10", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/2006.14002v1", + "title": "Bi-Level Graph Neural Networks for Drug-Drug Interaction Prediction", + "abstract": "We introduce Bi-GNN for modeling biological link prediction tasks such as\ndrug-drug interaction (DDI) and protein-protein interaction (PPI).
Taking\ndrug-drug interaction as an example, existing methods using machine learning\neither only utilize the link structure between drugs without using the graph\nrepresentation of each drug molecule, or only leverage the individual drug\ncompound structures without using graph structure for the higher-level DDI\ngraph. The key idea of our method is to fundamentally view the data as a\nbi-level graph, where the highest level graph represents the interaction\nbetween biological entities (interaction graph), and each biological entity\nitself is further expanded to its intrinsic graph representation\n(representation graphs), where the graph is either flat like a drug compound or\nhierarchical like a protein with amino acid level graph, secondary structure,\ntertiary structure, etc. Our model not only allows the usage of information\nfrom both the high-level interaction graph and the low-level representation\ngraphs, but also offers a baseline for future research opportunities to address\nthe bi-level nature of the data.", + "authors": "Yunsheng Bai, Ken Gu, Yizhou Sun, Wei Wang", + "published": "2020-06-11", + "updated": "2020-06-11", + "primary_cat": "cs.CE", + "cats": [ + "cs.CE", + "cs.LG", + "stat.ML" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/2003.03892v2", + "title": "COPT: Coordinated Optimal Transport for Graph Sketching", + "abstract": "We introduce COPT, a novel distance metric between graphs defined via an\noptimization routine, computing a coordinated pair of optimal transport maps\nsimultaneously. This gives an unsupervised way to learn general-purpose graph\nrepresentation, applicable to both graph sketching and graph comparison. COPT\ninvolves simultaneously optimizing dual transport plans, one between the\nvertices of two graphs, and another between graph signal probability\ndistributions. We show theoretically that our method preserves important global\nstructural information on graphs, in particular spectral information, and\nanalyze connections to existing studies. Empirically, COPT outperforms state of\nthe art methods in graph classification on both synthetic and real datasets.", + "authors": "Yihe Dong, Will Sawin", + "published": "2020-03-09", + "updated": "2020-06-15", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.DS", + "stat.ML" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/2311.11821v1", + "title": "Cross-View Graph Consistency Learning for Invariant Graph Representations", + "abstract": "Graph representation learning is fundamental for analyzing graph-structured\ndata. Exploring invariant graph representations remains a challenge for most\nexisting graph representation learning methods. In this paper, we propose a\ncross-view graph consistency learning (CGCL) method that learns invariant graph\nrepresentations for link prediction. First, two complementary augmented views\nare derived from an incomplete graph structure through a bidirectional graph\nstructure augmentation scheme. This augmentation scheme mitigates the potential\ninformation loss that is commonly associated with various data augmentation\ntechniques involving raw graph data, such as edge perturbation, node removal,\nand attribute masking. Second, we propose a CGCL model that can learn invariant\ngraph representations. A cross-view training scheme is proposed to train the\nproposed CGCL model. 
This scheme attempts to maximize the consistency\ninformation between one augmented view and the graph structure reconstructed\nfrom the other augmented view. Furthermore, we offer a comprehensive\ntheoretical CGCL analysis. This paper empirically and experimentally\ndemonstrates the effectiveness of the proposed CGCL method, achieving\ncompetitive results on graph datasets in comparisons with several\nstate-of-the-art algorithms.", + "authors": "Jie Chen, Zhiming Li, Hua Mao, Wai Lok Woo, Xi Peng", + "published": "2023-11-20", + "updated": "2023-11-20", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.CV" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/2306.02664v2", + "title": "Structure-free Graph Condensation: From Large-scale Graphs to Condensed Graph-free Data", + "abstract": "Graph condensation, which reduces the size of a large-scale graph by\nsynthesizing a small-scale condensed graph as its substitution, has immediate\nbenefits for various graph learning tasks. However, existing graph condensation\nmethods rely on the joint optimization of nodes and structures in the condensed\ngraph, and overlook critical issues in effectiveness and generalization\nability. In this paper, we advocate a new Structure-Free Graph Condensation\nparadigm, named SFGC, to distill a large-scale graph into a small-scale graph\nnode set without explicit graph structures, i.e., graph-free data. Our idea is\nto implicitly encode topology structure information into the node attributes in\nthe synthesized graph-free data, whose topology is reduced to an identity\nmatrix. Specifically, SFGC contains two collaborative components: (1) a\ntraining trajectory meta-matching scheme for effectively synthesizing\nsmall-scale graph-free data; (2) a graph neural feature score metric for\ndynamically evaluating the quality of the condensed data. Through training\ntrajectory meta-matching, SFGC aligns the long-term GNN learning behaviors\nbetween the large-scale graph and the condensed small-scale graph-free data,\nensuring comprehensive and compact transfer of informative knowledge to the\ngraph-free data. Afterward, the underlying condensed graph-free data would be\ndynamically evaluated with the graph neural feature score, which is a\nclosed-form metric for ensuring the excellent expressiveness of the condensed\ngraph-free data. Extensive experiments verify the superiority of SFGC across\ndifferent condensation ratios.", + "authors": "Xin Zheng, Miao Zhang, Chunyang Chen, Quoc Viet Hung Nguyen, Xingquan Zhu, Shirui Pan", + "published": "2023-06-05", + "updated": "2023-10-23", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.SI" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/2111.03262v2", + "title": "CGCL: Collaborative Graph Contrastive Learning without Handcrafted Graph Data Augmentations", + "abstract": "Unsupervised graph representation learning is a non-trivial topic. The\nsuccess of contrastive methods in the unsupervised representation learning on\nstructured data inspires similar attempts on the graph. Existing graph\ncontrastive learning (GCL) aims to learn the invariance across multiple\naugmentation views, which renders it heavily reliant on the handcrafted graph\naugmentations. However, inappropriate graph data augmentations can potentially\njeopardize such invariance. 
In this paper, we show the potential hazards of\ninappropriate augmentations and then propose a novel Collaborative Graph\nContrastive Learning framework (CGCL). This framework harnesses multiple graph\nencoders to observe the graph. Features observed from different encoders serve\nas the contrastive views in contrastive learning, which avoids inducing\nunstable perturbation and guarantees the invariance. To ensure the\ncollaboration among diverse graph encoders, we propose the concepts of\nasymmetric architecture and complementary encoders as the design principle. To\nfurther justify this design, we utilize two quantitative metrics to measure\nthe assembly of CGCL. Extensive experiments demonstrate the\nadvantages of CGCL in unsupervised graph-level representation learning and the\npotential of the collaborative framework. The source code for reproducibility is\navailable at https://github.com/zhangtia16/CGCL", + "authors": "Tianyu Zhang, Yuxiang Ren, Wenzheng Feng, Weitao Du, Xuecang Zhang", + "published": "2021-11-05", + "updated": "2024-04-01", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/2309.10134v1", + "title": "GDM: Dual Mixup for Graph Classification with Limited Supervision", + "abstract": "Graph Neural Networks (GNNs) require a large number of labeled graph samples\nto obtain good performance on the graph classification task. The performance of\nGNNs degrades significantly as the number of labeled graph samples decreases.\nTo reduce the annotation cost, it is therefore important to develop graph\naugmentation methods that can generate new graph instances to increase the size\nand diversity of the limited set of available labeled graph samples. In this\nwork, we propose a novel mixup-based graph augmentation method, Graph Dual\nMixup (GDM), that leverages both functional and structural information of the\ngraph instances to generate new labeled graph samples. GDM employs a graph\nstructural auto-encoder to learn structural embeddings of the graph samples,\nand then applies mixup to the structural information of the graphs in the\nlearned structural embedding space and generates new graph structures from the\nmixup structural embeddings. As for the functional information, GDM applies\nmixup directly to the input node features of the graph samples to generate\nfunctional node feature information for new mixup graph instances. Jointly, the\ngenerated input node features and graph structures yield new graph samples\nwhich can supplement the set of original labeled graphs. Furthermore, we\npropose two novel Balanced Graph Sampling methods to enhance the balanced\ndifficulty and diversity for the generated graph samples. Experimental results\non the benchmark datasets demonstrate that our proposed method substantially\noutperforms the state-of-the-art graph augmentation methods when the labeled\ngraphs are scarce.", + "authors": "Abdullah Alchihabi, Yuhong Guo", + "published": "2023-09-18", + "updated": "2023-09-18", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/1904.11883v2", + "title": "Robust Graph Data Learning via Latent Graph Convolutional Representation", + "abstract": "Graph Convolutional Representation (GCR) has achieved impressive performance\nfor graph data representation.
However, existing GCR is generally defined on\na fixed input graph, which may restrict the representation capacity and also\nbe vulnerable to structural attacks and noises. To address this issue, we\npropose a novel Latent Graph Convolutional Representation (LatGCR) for robust\ngraph data representation and learning. Our LatGCR is derived based on\nreformulating graph convolutional representation from the aspect of graph\nneighborhood reconstruction. Given an input graph $\\textbf{A}$, LatGCR aims to\ngenerate a flexible latent graph $\\widetilde{\\textbf{A}}$ for graph\nconvolutional representation which clearly enhances the representation\ncapacity and also performs robustly w.r.t. graph structural attacks and noises.\nMoreover, LatGCR is implemented in a self-supervised manner and thus provides a\nbasic block for both supervised and unsupervised graph learning tasks.\nExperiments on several datasets demonstrate the effectiveness and robustness of\nLatGCR.", + "authors": "Bo Jiang, Ziyan Zhang, Bin Luo", + "published": "2019-04-26", + "updated": "2021-10-13", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV", + "cs.LG" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/2206.08561v1", + "title": "Boosting Graph Structure Learning with Dummy Nodes", + "abstract": "With the development of graph kernels and graph representation learning, many\nsuperior methods have been proposed to handle scalability and oversmoothing\nissues on graph structure learning. However, most of those strategies are\ndesigned based on practical experience rather than theoretical analysis. In\nthis paper, we use a particular dummy node connecting to all existing vertices\nwithout affecting original vertex and edge properties. We further prove that\nsuch a dummy node can help build an efficient monomorphic edge-to-vertex\ntransform and an epimorphic inverse to recover the original graph back. It also\nindicates that adding dummy nodes can preserve local and global structures for\nbetter graph representation learning. We extend graph kernels and graph neural\nnetworks with dummy nodes and conduct experiments on graph classification and\nsubgraph isomorphism matching tasks. Empirical results demonstrate that taking\ngraphs with dummy nodes as input significantly boosts graph structure learning,\nand using their edge-to-vertex graphs can also achieve similar results. We also\ndiscuss the gain of expressive power from the dummy in neural networks.", + "authors": "Xin Liu, Jiayang Cheng, Yangqiu Song, Xin Jiang", + "published": "2022-06-17", + "updated": "2022-06-17", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/2106.03236v1", + "title": "Graph2Graph Learning with Conditional Autoregressive Models", + "abstract": "We present a graph neural network model for solving graph-to-graph learning\nproblems. Most deep learning on graphs considers ``simple'' problems such as\ngraph classification or regressing real-valued graph properties. For such\ntasks, the main requirement for intermediate representations of the data is to\nmaintain the structure needed for output, i.e., keeping classes separated or\nmaintaining the order indicated by the regressor. However, a number of learning\ntasks, such as regressing graph-valued output, generative models, or graph\nautoencoders, aim to predict a graph-structured output.
In order to\nsuccessfully do this, the learned representations need to preserve far more\nstructure. We present a conditional auto-regressive model for graph-to-graph\nlearning and illustrate its representational capabilities via experiments on\nchallenging subgraph predictions from graph algorithmics; as a graph\nautoencoder for reconstruction and visualization; and on pretraining\nrepresentations that allow graph classification with limited labeled data.", + "authors": "Guan Wang, Francois Bernard Lauze, Aasa Feragen", + "published": "2021-06-06", + "updated": "2021-06-06", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/1905.06393v1", + "title": "IPC: A Benchmark Data Set for Learning with Graph-Structured Data", + "abstract": "Benchmark data sets are an indispensable ingredient of the evaluation of\ngraph-based machine learning methods. We release a new data set, compiled from\nInternational Planning Competitions (IPC), for benchmarking graph\nclassification, regression, and related tasks. Apart from the graph\nconstruction (based on AI planning problems) that is interesting in its own\nright, the data set possesses distinctly different characteristics from\npopularly used benchmarks. The data set, named IPC, consists of two\nself-contained versions, grounded and lifted, both including graphs of large\nand skewedly distributed sizes, posing substantial challenges for the\ncomputation of graph models such as graph kernels and graph neural networks.\nThe graphs in this data set are directed and the lifted version is acyclic,\noffering the opportunity of benchmarking specialized models for directed\n(acyclic) structures. Moreover, the graph generator and the labeling are\ncomputer programmed; thus, the data set may be extended easily if a larger\nscale is desired. The data set is accessible from\n\\url{https://github.com/IBM/IPC-graph-data}.", + "authors": "Patrick Ferber, Tengfei Ma, Siyu Huo, Jie Chen, Michael Katz", + "published": "2019-05-15", + "updated": "2019-05-15", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI", + "stat.ML" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/2111.04286v1", + "title": "Deep Unsupervised Active Learning on Learnable Graphs", + "abstract": "Recently deep learning has been successfully applied to unsupervised active\nlearning. However, current methods attempt to learn a nonlinear transformation\nvia an auto-encoder while ignoring the sample relation, leaving huge room to\ndesign more effective representation learning mechanisms for unsupervised\nactive learning. In this paper, we propose a novel deep unsupervised Active\nLearning model via Learnable Graphs, named ALLG. ALLG benefits from learning\noptimal graph structures to acquire better sample representation and select\nrepresentative samples. To make the learnt graph structure more stable and\neffective, we take the $k$-nearest neighbor graph into account as a prior, and\nlearn a relation propagation graph structure. We also incorporate shortcut\nconnections among different layers, which can alleviate the well-known\nover-smoothing problem to some extent. To the best of our knowledge, this is\nthe first attempt to leverage graph structure learning for unsupervised active\nlearning.
Extensive experiments performed on six datasets demonstrate the\nefficacy of our method.", + "authors": "Handong Ma, Changsheng Li, Xinchu Shi, Ye Yuan, Guoren Wang", + "published": "2021-11-08", + "updated": "2021-11-08", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/2305.15843v1", + "title": "TabGSL: Graph Structure Learning for Tabular Data Prediction", + "abstract": "This work presents a novel approach to tabular data prediction leveraging\ngraph structure learning and graph neural networks. Despite the prevalence of\ntabular data in real-world applications, traditional deep learning methods\noften overlook the potentially valuable associations between data instances.\nSuch associations can offer beneficial insights for classification tasks, as\ninstances may exhibit similar patterns of correlations among features and\ntarget labels. This information can be exploited by graph neural networks,\nnecessitating robust graph structures. However, existing studies primarily\nfocus on improving graph structure from noisy data, largely neglecting the\npossibility of deriving graph structures from tabular data. We present a novel\nsolution, Tabular Graph Structure Learning (TabGSL), to enhance tabular data\nprediction by simultaneously learning instance correlation and feature\ninteraction within a unified framework. This is achieved through a proposed\ngraph contrastive learning module, along with transformer-based feature\nextractor and graph neural network. Comprehensive experiments conducted on 30\nbenchmark tabular datasets demonstrate that TabGSL markedly outperforms both\ntree-based models and recent deep learning-based tabular models. Visualizations\nof the learned instance embeddings further substantiate the effectiveness of\nTabGSL.", + "authors": "Jay Chiehen Liao, Cheng-Te Li", + "published": "2023-05-25", + "updated": "2023-05-25", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.SI" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/1903.00614v1", + "title": "GAP: Generalizable Approximate Graph Partitioning Framework", + "abstract": "Graph partitioning is the problem of dividing the nodes of a graph into\nbalanced partitions while minimizing the edge cut across the partitions. Due to\nits combinatorial nature, many approximate solutions have been developed,\nincluding variants of multi-level methods and spectral clustering. We propose\nGAP, a Generalizable Approximate Partitioning framework that takes a deep\nlearning approach to graph partitioning. We define a differentiable loss\nfunction that represents the partitioning objective and use backpropagation to\noptimize the network parameters. Unlike baselines that redo the optimization\nper graph, GAP is capable of generalization, allowing us to train models that\nproduce performant partitions at inference time, even on unseen graphs.\nFurthermore, because we learn the representation of the graph while jointly\noptimizing for the partitioning loss function, GAP can be easily tuned for a\nvariety of graph structures. We evaluate the performance of GAP on graphs of\nvarying sizes and structures, including graphs of widely used machine learning\nmodels (e.g., ResNet, VGG, and Inception-V3), scale-free graphs, and random\ngraphs. 
We show that GAP achieves competitive partitions while being up to 100\ntimes faster than the baseline and generalizing to unseen graphs.", + "authors": "Azade Nazi, Will Hang, Anna Goldie, Sujith Ravi, Azalia Mirhoseini", + "published": "2019-03-02", + "updated": "2019-03-02", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "stat.ML" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/2103.10837v1", + "title": "Quantum machine learning of graph-structured data", + "abstract": "Graph structures are ubiquitous throughout the natural sciences. Here we\nconsider graph-structured quantum data and describe how to carry out its\nquantum machine learning via quantum neural networks. In particular, we\nconsider training data in the form of pairs of input and output quantum states\nassociated with the vertices of a graph, together with edges encoding\ncorrelations between the vertices. We explain how to systematically exploit\nthis additional graph structure to improve quantum learning algorithms. These\nalgorithms are numerically simulated and exhibit excellent learning behavior.\nScalable quantum implementations of the learning procedures are likely feasible\non the next generation of quantum computing devices.", + "authors": "Kerstin Beer, Megha Khosla, Julius K\u00f6hler, Tobias J. Osborne", + "published": "2021-03-19", + "updated": "2021-03-19", + "primary_cat": "quant-ph", + "cats": [ + "quant-ph" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/2007.16002v1", + "title": "Graph Convolutional Networks using Heat Kernel for Semi-supervised Learning", + "abstract": "Graph convolutional networks have gained remarkable success in semi-supervised\nlearning on graph structured data. The key to graph-based semi-supervised\nlearning is capturing the smoothness of labels or features over nodes exerted\nby graph structure. Previous methods, spectral methods and spatial methods,\nare devoted to defining graph convolution as a weighted average over neighboring\nnodes, and then learn graph convolution kernels to leverage the smoothness to\nimprove the performance of graph-based semi-supervised learning. One open\nchallenge is how to determine an appropriate neighborhood that reflects the\nrelevant smoothness information manifested in the graph structure. In this paper, we\npropose GraphHeat, leveraging heat kernel to enhance low-frequency filters and\nenforce smoothness in the signal variation on the graph. GraphHeat leverages\nthe local structure of the target node under heat diffusion to determine its\nneighboring nodes flexibly, without the constraint of order suffered by\nprevious methods. GraphHeat achieves state-of-the-art results in the task of\ngraph-based semi-supervised classification across three benchmark datasets:\nCora, Citeseer and Pubmed.", + "authors": "Bingbing Xu, Huawei Shen, Qi Cao, Keting Cen, Xueqi Cheng", + "published": "2020-07-27", + "updated": "2020-07-27", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.SI" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/2204.05258v1", + "title": "Multi-view graph structure learning using subspace merging on Grassmann manifold", + "abstract": "Many successful learning algorithms have been recently developed to represent\ngraph-structured data. For example, Graph Neural Networks (GNNs) have achieved\nconsiderable successes in various tasks such as node classification, graph\nclassification, and link prediction.
However, these methods are highly\ndependent on the quality of the input graph structure. One common approach to\nalleviating this problem is to learn the graph structure instead of relying on a\nmanually designed graph. In this paper, we introduce a new graph structure\nlearning approach using multi-view learning, named MV-GSL (Multi-View Graph\nStructure Learning), in which we aggregate different graph structure learning\nmethods using subspace merging on the Grassmann manifold to improve the quality of\nthe learned graph structures. Extensive experiments are performed to evaluate\nthe effectiveness of the proposed method on two benchmark datasets, Cora and\nCiteseer. Our experiments show that the proposed method has promising\nperformance compared to single and other combined graph structure learning\nmethods.", + "authors": "Razieh Ghiasi, Hossein Amirkhani, Alireza Bosaghzadeh", + "published": "2022-04-11", + "updated": "2022-04-11", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/1911.05954v3", + "title": "Hierarchical Graph Pooling with Structure Learning", + "abstract": "Graph Neural Networks (GNNs), which generalize deep neural networks to\ngraph-structured data, have drawn considerable attention and achieved\nstate-of-the-art performance in numerous graph related tasks. However, existing\nGNN models mainly focus on designing graph convolution operations. The graph\npooling (or downsampling) operations, which play an important role in learning\nhierarchical representations, are usually overlooked. In this paper, we propose\na novel graph pooling operator, called Hierarchical Graph Pooling with\nStructure Learning (HGP-SL), which can be integrated into various graph neural\nnetwork architectures. HGP-SL incorporates graph pooling and structure learning\ninto a unified module to generate hierarchical representations of graphs. More\nspecifically, the graph pooling operation adaptively selects a subset of nodes\nto form an induced subgraph for the subsequent layers. To preserve the\nintegrity of the graph's topological information, we further introduce a structure\nlearning mechanism to learn a refined graph structure for the pooled graph at\neach layer. By combining the HGP-SL operator with graph neural networks, we perform\ngraph level representation learning with a focus on the graph classification task.\nExperimental results on six widely used benchmarks demonstrate the\neffectiveness of our proposed model.", + "authors": "Zhen Zhang, Jiajun Bu, Martin Ester, Jianfeng Zhang, Chengwei Yao, Zhi Yu, Can Wang", + "published": "2019-11-14", + "updated": "2019-12-25", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "stat.ML" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/1912.07832v1", + "title": "Deep Iterative and Adaptive Learning for Graph Neural Networks", + "abstract": "In this paper, we propose an end-to-end graph learning framework, namely Deep\nIterative and Adaptive Learning for Graph Neural Networks (DIAL-GNN), for\njointly learning the graph structure and graph embeddings simultaneously. We\nfirst cast the graph structure learning problem as a similarity metric learning\nproblem and leverage an adapted graph regularization for controlling\nsmoothness, connectivity and sparsity of the generated graph. We further\npropose a novel iterative method for searching for a hidden graph structure\nthat augments the initial graph structure.
Our iterative method dynamically\nstops when the learned graph structure comes close enough to the optimal\ngraph. Our extensive experiments demonstrate that the proposed DIAL-GNN model\ncan consistently outperform or match state-of-the-art baselines in terms of\nboth downstream task performance and computational time. The proposed approach\ncan cope with both transductive learning and inductive learning.", + "authors": "Yu Chen, Lingfei Wu, Mohammed J. Zaki", + "published": "2019-12-17", + "updated": "2019-12-17", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "stat.ML" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/2401.16176v1", + "title": "A Survey on Structure-Preserving Graph Transformers", + "abstract": "The transformer architecture has shown remarkable success in various domains,\nsuch as natural language processing and computer vision. When it comes to graph\nlearning, transformers are required not only to capture the interactions\nbetween pairs of nodes but also to preserve graph structures connoting the\nunderlying relations and proximity between them, showing the expressive power\nto capture different graph structures. Accordingly, various\nstructure-preserving graph transformers have been proposed and widely used for\nvarious tasks, such as graph-level tasks in bioinformatics and\nchemoinformatics. However, strategies related to graph structure preservation\nhave not been well organized and systematized in the literature. In this paper,\nwe provide a comprehensive overview of structure-preserving graph transformers\nand generalize these methods from the perspective of their design objective.\nFirst, we divide strategies into four main groups: node feature modulation,\ncontext node sampling, graph rewriting, and transformer architecture\nimprovements. We then further divide the strategies according to the coverage\nand goals of graph structure preservation. Furthermore, we also discuss\nchallenges and future directions for graph transformer models to preserve the\ngraph structure and understand the nature of graphs.", + "authors": "Van Thuy Hoang, O-Joun Lee", + "published": "2024-01-29", + "updated": "2024-01-29", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/2401.13769v1", + "title": "Multiview Graph Learning with Consensus Graph", + "abstract": "Graph topology inference, i.e., learning graphs from a given set of nodal\nobservations, is a significant task in many application domains. Existing\napproaches are mostly limited to learning a single graph assuming that the\nobserved data is homogeneous. This is problematic because many modern datasets\nare heterogeneous or mixed and involve multiple related graphs, i.e., multiview\ngraphs. Recent work proposing to learn multiview graphs ensures the similarity\nof learned view graphs through pairwise regularization, where each pair of\nviews is encouraged to have similar structures. However, this approach cannot\ninfer the shared structure across views. In this work, we propose an\nalternative method based on consensus regularization, where views are ensured\nto be similar through a learned consensus graph representing the common\nstructure of the views. In particular, we propose an optimization problem,\nwhere graph data is assumed to be smooth over the multiview graph and the\ntopology of the individual views and that of the consensus graph are learned\nsimultaneously.
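The smoothness-plus-consensus objective just described can be made concrete with a short numpy fragment. This is a minimal sketch under assumed conventions (unnormalized Laplacians, a squared-Frobenius consensus penalty, and illustrative names such as `consensus_objective`), not the authors' formulation:

```python
import numpy as np

def laplacian(W):
    # Unnormalized graph Laplacian L = D - W.
    return np.diag(W.sum(axis=1)) - W

def consensus_objective(views, Ws, Wc, gamma=1.0):
    # Smoothness of each view's signals over its learned view graph,
    # plus a consensus penalty tying every view graph to a shared graph.
    smooth = sum(np.trace(X.T @ laplacian(W) @ X) for X, W in zip(views, Ws))
    tie = sum(np.linalg.norm(W - Wc) ** 2 for W in Ws)
    return smooth + gamma * tie

# Toy example: two views over 5 nodes, 8 nodal observations each.
rng = np.random.default_rng(0)
views = [rng.standard_normal((5, 8)) for _ in range(2)]
Ws = [np.abs(rng.standard_normal((5, 5))) for _ in range(2)]
Ws = [(W + W.T) / 2 for W in Ws]   # symmetric view graphs
Wc = sum(Ws) / len(Ws)             # crude consensus initializer
print(consensus_objective(views, Ws, Wc))
```

Swapping the squared-Frobenius `tie` term for fused or group graphical-lasso penalties, as the abstract proposes, changes only that regularization function.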
Our optimization problem is designed to be general in the sense\nthat different regularization functions can be used depending on what the\nshared structure across views is. Moreover, we propose two regularization\nfunctions that extend fused and group graphical lasso to consensus based\nregularization. The proposed multiview graph learning method is evaluated on simulated\ndata and shown to have better performance than existing methods. It is also\nemployed to infer the functional brain connectivity networks of multiple\nsubjects from their electroencephalogram (EEG) recordings. The proposed method\nreveals the structure shared by subjects as well as the characteristics unique\nto each subject.", + "authors": "Abdullah Karaaslanli, Selin Aviyente", + "published": "2024-01-24", + "updated": "2024-01-24", + "primary_cat": "eess.SP", + "cats": [ + "eess.SP", + "cs.LG" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/2201.07409v2", + "title": "Dual Space Graph Contrastive Learning", + "abstract": "Unsupervised graph representation learning has emerged as a powerful tool to\naddress real-world problems and achieves huge success in the graph learning\ndomain. Graph contrastive learning is one of the unsupervised graph\nrepresentation learning methods, which has recently attracted attention from\nresearchers and achieved state-of-the-art performance on various tasks.\nThe key to the success of graph contrastive learning is to construct proper\ncontrasting pairs to acquire the underlying structural semantics of the graph.\nHowever, this key part is not yet fully explored: most approaches to\ngenerating contrasting pairs focus on augmenting or perturbing graph\nstructures to obtain different views of the input graph. But such strategies\ncould degrade performance by adding noise to the graph, which may\nnarrow the range of applications of graph contrastive learning. In\nthis paper, we propose a novel graph contrastive learning method, namely\n\\textbf{D}ual \\textbf{S}pace \\textbf{G}raph \\textbf{C}ontrastive (DSGC)\nLearning, to conduct graph contrastive learning among views generated in\ndifferent spaces including the hyperbolic space and the Euclidean space. Since\nboth spaces have their own advantages to represent graph data in the embedding\nspaces, we hope to utilize graph contrastive learning to bridge the spaces and\nleverage advantages from both sides. The comparison experiment results show\nthat DSGC achieves competitive or better performances among all the datasets.\nIn addition, we conduct extensive experiments to analyze the impact of\ndifferent graph encoders on DSGC, giving insights about how to better leverage\nthe advantages of contrastive learning between different spaces.", + "authors": "Haoran Yang, Hongxu Chen, Shirui Pan, Lin Li, Philip S. Yu, Guandong Xu", + "published": "2022-01-19", + "updated": "2022-03-04", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.SI" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/2109.11898v1", + "title": "Graph Learning Augmented Heterogeneous Graph Neural Network for Social Recommendation", + "abstract": "Social recommendation based on social networks has achieved great success in\nimproving the performance of recommendation systems.
Since social networks\n(user-user relations) and user-item interactions are both naturally represented\nas graph-structured data, Graph Neural Networks (GNNs) have been widely\napplied for social recommendation. In this work, we propose an end-to-end\nheterogeneous global graph learning framework, namely Graph Learning Augmented\nHeterogeneous Graph Neural Network (GL-HGNN), for social recommendation. GL-HGNN\naims to learn a heterogeneous global graph that makes full use of user-user\nrelations, user-item interactions and item-item similarities in a unified\nperspective. To this end, we design a Graph Learner (GL) method to learn and\noptimize user-user and item-item connections separately. Moreover, we employ a\nHeterogeneous Graph Neural Network (HGNN) to capture the high-order complex\nsemantic relations from our learned heterogeneous global graph. To scale up the\ncomputation of graph learning, we further present the Anchor-based Graph\nLearner (AGL) to reduce computational complexity. Extensive experiments on four\nreal-world datasets demonstrate the effectiveness of our model.", + "authors": "Yiming Zhang, Lingfei Wu, Qi Shen, Yitong Pang, Zhihua Wei, Fangli Xu, Ethan Chang, Bo Long", + "published": "2021-09-24", + "updated": "2021-09-24", + "primary_cat": "cs.IR", + "cats": [ + "cs.IR" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/1611.05181v3", + "title": "Graph Learning from Data under Structural and Laplacian Constraints", + "abstract": "Graphs are fundamental mathematical structures used in various fields to\nrepresent data, signals and processes. In this paper, we propose a novel\nframework for learning/estimating graphs from data. The proposed framework\nincludes (i) formulation of various graph learning problems, (ii) their\nprobabilistic interpretations and (iii) associated algorithms. Specifically,\ngraph learning problems are posed as estimation of graph Laplacian matrices\nfrom some observed data under given structural constraints (e.g., graph\nconnectivity and sparsity level). From a probabilistic perspective, the\nproblems of interest correspond to maximum a posteriori (MAP) parameter\nestimation of Gaussian-Markov random field (GMRF) models, whose precision\n(inverse covariance) is a graph Laplacian matrix. For the proposed graph\nlearning problems, specialized algorithms are developed by incorporating the\ngraph Laplacian and structural constraints. The experimental results\ndemonstrate that the proposed algorithms outperform the current\nstate-of-the-art methods in terms of accuracy and computational efficiency.", + "authors": "Hilmi E. Egilmez, Eduardo Pavez, Antonio Ortega", + "published": "2016-11-16", + "updated": "2017-07-06", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "stat.ML" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/1904.08915v2", + "title": "Decoding Molecular Graph Embeddings with Reinforcement Learning", + "abstract": "We present RL-VAE, a graph-to-graph variational autoencoder that uses\nreinforcement learning to decode molecular graphs from latent embeddings.\nMethods have been described previously for graph-to-graph autoencoding, but\nthese approaches require sophisticated decoders that increase the complexity of\ntraining and evaluation (such as requiring parallel encoders and decoders or\nnon-trivial graph matching).
Here, we repurpose a simple graph generator to\nenable efficient decoding and generation of molecular graphs.", + "authors": "Steven Kearnes, Li Li, Patrick Riley", + "published": "2019-04-18", + "updated": "2019-06-04", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "stat.ML" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/2108.04595v1", + "title": "Label-informed Graph Structure Learning for Node Classification", + "abstract": "Graph Neural Networks (GNNs) have achieved great success across various\ndomains. Nevertheless, most GNN methods are sensitive to the quality of graph\nstructures. To tackle this problem, some studies exploit different graph\nstructure learning strategies to refine the original graph structure. However,\nthese methods only consider feature information while ignoring available label\ninformation. In this paper, we propose a novel label-informed graph structure\nlearning framework which incorporates label information explicitly through a\nclass transition matrix. We conduct extensive experiments on seven node\nclassification benchmark datasets and the results show that our method\noutperforms or matches the state-of-the-art baselines.", + "authors": "Liping Wang, Fenyu Hu, Shu Wu, Liang Wang", + "published": "2021-08-10", + "updated": "2021-08-10", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/2304.13195v1", + "title": "Connector 0.5: A unified framework for graph representation learning", + "abstract": "Graph representation learning models aim to represent the graph structure and\nits features into low-dimensional vectors in a latent space, which can benefit\nvarious downstream tasks, such as node classification and link prediction. Due\nto its powerful graph data modelling capabilities, various graph embedding\nmodels and libraries have been proposed to learn embeddings and help\nresearchers easily conduct experiments. In this paper, we introduce a novel\ngraph representation framework covering various graph embedding models, ranging\nfrom shallow to state-of-the-art models, namely Connector. First, we consider\ngraph generation by constructing various types of graphs with different\nstructural relations, including homogeneous, signed, heterogeneous, and\nknowledge graphs. Second, we introduce various graph representation learning\nmodels, ranging from shallow to deep graph embedding models. Finally, we plan\nto build an efficient open-source framework that can provide deep graph\nembedding models to represent structural relations in graphs. The framework is\navailable at https://github.com/NSLab-CUK/Connector.", + "authors": "Thanh Sang Nguyen, Jooho Lee, Van Thuy Hoang, O-Joun Lee", + "published": "2023-04-25", + "updated": "2023-04-25", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.SI" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/2201.06367v1", + "title": "Towards Unsupervised Deep Graph Structure Learning", + "abstract": "In recent years, graph neural networks (GNNs) have emerged as a successful\ntool in a variety of graph-related applications. However, the performance of\nGNNs can deteriorate when noisy connections occur in the original graph\nstructures; besides, the dependence on explicit structures prevents GNNs from\nbeing applied to general unstructured scenarios.
To address these issues,\nrecently emerged deep graph structure learning (GSL) methods propose to jointly\noptimize the graph structure along with the GNN under the supervision of a node\nclassification task. Nonetheless, these methods focus on a supervised learning\nscenario, which leads to several problems, i.e., the reliance on labels, the\nbias of edge distribution, and the limitation on application tasks. In this\npaper, we propose a more practical GSL paradigm, unsupervised graph structure\nlearning, where the learned graph topology is optimized by data itself without\nany external guidance (i.e., labels). To solve the unsupervised GSL problem, we\npropose a novel StrUcture Bootstrapping contrastive LearnIng fraMEwork (SUBLIME\nfor short) with the aid of self-supervised contrastive learning.\nSpecifically, we generate a learning target from the original data as an\n\"anchor graph\", and use a contrastive loss to maximize the agreement between\nthe anchor graph and the learned graph. To provide persistent guidance, we\ndesign a novel bootstrapping mechanism that upgrades the anchor graph with\nlearned structures during model learning. We also design a series of graph\nlearners and post-processing schemes to model the structures to learn.\nExtensive experiments on eight benchmark datasets demonstrate the significant\neffectiveness of our proposed SUBLIME and the high quality of the optimized graphs.", + "authors": "Yixin Liu, Yu Zheng, Daokun Zhang, Hongxu Chen, Hao Peng, Shirui Pan", + "published": "2022-01-17", + "updated": "2022-01-17", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/2401.15665v1", + "title": "Learnability of a hybrid quantum-classical neural network for graph-structured quantum data", + "abstract": "Classical data with graph structure always exists when dealing with many\nreal-world problems. In parallel, quantum data with graph structure also need\nto be investigated since they are always produced by structured quantum data\nsources. In this paper, we make use of a hybrid quantum-classical neural network\nwith deep residual learning (Res-HQCNN) to learn graph-structured quantum data.\nSpecifically, based on the special definition of graph-structured quantum data,\nwe first find suitable cost functions so that Res-HQCNN can learn\nsemi-supervised quantum data both with and without graphs. Moreover, the training\nalgorithm of Res-HQCNN for graph-structured training data is given in detail.\nNext, in order to show the learning ability of Res-HQCNN, we perform extensive\nexperiments to show that using information about graph structures for\nquantum data can lead to better learning efficiency compared with the state of\nthe art.
At the same time, we also design comparative experiments to show\nthat using residual learning can also bring better performance when\ntraining deep quantum neural networks.", + "authors": "Yan-Ying Liang, Si-Le Tang, Zhe-Hao Yi, Hao-Zhen Si-Tu, Zhu-Jun Zheng", + "published": "2024-01-28", + "updated": "2024-01-28", + "primary_cat": "quant-ph", + "cats": [ + "quant-ph" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/1904.10146v2", + "title": "Exploring Structure-Adaptive Graph Learning for Robust Semi-Supervised Classification", + "abstract": "Graph Convolutional Neural Networks (GCNNs) are generalizations of CNNs to\ngraph-structured data, in which convolution is guided by the graph topology. In\nmany cases where graphs are unavailable, existing methods manually construct\ngraphs or learn task-driven adaptive graphs. In this paper, we propose Graph\nLearning Neural Networks (GLNNs), which exploit the optimization of graphs (the\nadjacency matrix in particular) from both data and tasks. Leveraging\nspectral graph theory, we propose the objective of graph learning from a\nsparsity constraint, properties of a valid adjacency matrix as well as a graph\nLaplacian regularizer via maximum a posteriori estimation. The optimization\nobjective is then integrated into the loss function of the GCNN, which adapts\nthe graph topology to not only labels of a specific task but also the input\ndata. Experimental results show that our proposed GLNN outperforms\nstate-of-the-art approaches over widely adopted social network datasets and\ncitation network datasets for semi-supervised classification.", + "authors": "Xiang Gao, Wei Hu, Zongming Guo", + "published": "2019-04-23", + "updated": "2019-09-16", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "stat.ML" + ], + "category": "Graph AND Structure AND Learning" + } + ], + [ + { + "url": "http://arxiv.org/abs/2402.03025v1", + "title": "Understanding and Guiding Weakly Supervised Entity Alignment with Potential Isomorphism Propagation", + "abstract": "Weakly Supervised Entity Alignment (EA) is the task of identifying equivalent\nentities across diverse knowledge graphs (KGs) using only a limited number of\nseed alignments. Despite substantial advances in aggregation-based weakly\nsupervised EA, the underlying mechanisms in this setting remain unexplored. In\nthis paper, we present a propagation perspective to analyze weakly supervised\nEA and explain the existing aggregation-based EA models. Our theoretical\nanalysis reveals that these models essentially seek propagation operators for\npairwise entity similarities. We further prove that, despite the structural\nheterogeneity of different KGs, the potentially aligned entities within\naggregation-based EA models have isomorphic subgraphs, which is the core\npremise of EA but has not been investigated. Leveraging this insight, we\nintroduce a potential isomorphism propagation operator to enhance the\npropagation of neighborhood information across KGs. We develop a general EA\nframework, PipEA, incorporating this operator to improve the accuracy of every\ntype of aggregation-based model without altering the learning process.\nExtensive experiments substantiate our theoretical findings and demonstrate\nPipEA's significant performance gains over state-of-the-art weakly supervised\nEA methods.
Our work not only advances the field but also enhances our\ncomprehension of aggregation-based weakly supervised EA.", + "authors": "Yuanyi Wang, Wei Tang, Haifeng Sun, Zirui Zhuang, Xiaoyuan Fu, Jingyu Wang, Qi Qi, Jianxin Liao", + "published": "2024-02-05", + "updated": "2024-02-05", + "primary_cat": "cs.IR", + "cats": [ + "cs.IR", + "cs.LG" + ], + "label": "Original Paper", + "paper_cat": "Graph AND Structure AND Learning", + "gt": "Aggregation-based EA. The adoption of aggregation-based models, featuring graph neural networks (GNNs), has gained significant traction in the domain of EA [33, 34, 40]. These models harness the power of GNNs to generate entity representations by aggregating information from neighboring entities [40]. Diverse GNN-based variants, such as RDGCN [39], RNM [49], KEGCN [10], MRAEA [19], and RREA [20], have emerged to address the capture of structural information and neighborhood heterogeneity. Some of these models focus on optimizing the proximity of positive entity pairs (e.g., PSR [17], Dual-AMN [18]) or the distance between negative pairs (e.g., SEA [22], TEA [14]). Furthermore, attribute-enhanced techniques incorporate entity attributes such as names and textual descriptions [15, 31, 42] to enhance entity embeddings. Notably, ACK [12] constructs an attribute-consistent graph to mitigate contextual gaps. Our work contributes by shedding light on these models\u2019 underlying principles, revealing their quest for pairwise similarity propagation operators, and justifying the existence of isomorphic subgraphs within potentially aligned entities. Additionally, we introduce a novel general framework designed to augment the performance of these aggregation-based models. Weakly Supervised EA. In scenarios with limited labeled data, many EA models suffer a drastic decline in alignment accuracy [37, 48]. Existing weakly supervised models, including ALEA [1], ActiveEA [13], RCL [46], and PEEA [30], primarily address the challenge of enhancing model generalization during the learning process. However, these methods often overlook the fundamental limitations of the original models when applied in weakly supervised settings. To bridge this gap, our method takes a novel angle, analyzing the weakly supervised EA problem from the perspective of information propagation. It offers an effective enhancement method without altering the underlying model, thereby complementing existing weakly supervised EA methods.", + "pre_questions": [], + "main_content": "INTRODUCTION Knowledge Graphs (KGs) have emerged as pivotal resources across diverse domains, such as information retrieval [45], question answering [2], and recommendation systems [35]. Despite their growing importance, KGs suffer from limitations in coverage, constraining their utility in downstream applications. The integration of heterogeneous KGs presents a significant challenge, at the core of which lies Entity Alignment (EA). EA aims to identify corresponding entities across different KGs. Contemporary EA solutions, particularly aggregation-based models, adhere to established pipelines. They rely on abundant seed alignments as supervised signals to learn entity representations, projecting diverse KGs into a unified embedding space, and subsequently predicting alignment results using these unified embeddings.
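To make the final step of this standard pipeline concrete (predicting alignments by nearest-neighbour search in the unified embedding space), here is a minimal numpy sketch; the function name, shapes, and the choice of cosine similarity are illustrative assumptions rather than any particular model's API:

```python
import numpy as np

def predict_alignment(src_emb, tgt_emb):
    """Greedy nearest-neighbour alignment from a unified embedding space:
    each source entity is matched to the target entity with the highest
    cosine similarity.
    src_emb: (n, d) source-KG entity embeddings.
    tgt_emb: (m, d) target-KG entity embeddings.
    Returns the (n, m) similarity matrix and the per-source argmax match.
    """
    src = src_emb / np.linalg.norm(src_emb, axis=1, keepdims=True)
    tgt = tgt_emb / np.linalg.norm(tgt_emb, axis=1, keepdims=True)
    sim = src @ tgt.T          # pairwise cosine similarities
    return sim, sim.argmax(axis=1)

rng = np.random.default_rng(0)
sim, match = predict_alignment(rng.standard_normal((4, 16)),
                               rng.standard_normal((6, 16)))
print(sim.shape, match)        # (4, 6) and one target index per source entity
```

Row i of `sim` plays the role of entity i's row in the pairwise similarity matrix that the analysis in Section 3 revolves around.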
However, these methods heavily hinge on the availability of substantial seed alignments, which can be unrealistic or expensive to obtain. This has led to increased interest in weakly supervised EA, a scenario where only a limited number of seed alignments are accessible. Recent studies [1, 13, 24] have explored strategies like active learning and additional information incorporation to enhance the performance of aggregation-based EA models, which are state-of-the-art weakly supervised EA methods. While these efforts have shown some promise, it remains unclear why the adaptation of aggregation-based EA models for effective information propagation in weakly supervised settings is challenging. This paper delves into weakly supervised EA, focusing on how to enhance aggregation-based EA models in weakly supervised settings through the perspective of information propagation. We unveil that potentially aligned entities exhibit isomorphic subgraphs. We analyze the limitations of existing models in weakly supervised contexts, noting their reliance on local structural information and iterative neighbor aggregation. These models aim to minimize the distances between output embeddings of seed alignments, as further elaborated in Section 3. Figure 1 (left) depicts the key challenge: reliance on a sufficient quantity of seed alignments like (G, g). Limited seed alignments mean that many unlabeled entities cannot utilize the prior knowledge from labeled ones, hampering the propagation of alignment information due to restricted aggregation steps. For instance, in a two-layer GNN, only entities within a two-hop radius (like A, B) of seed entities participate in aggregation-based propagation, leaving distant entities isolated. While the concept of employing higher-order aggregation-based models to expand neighborhood propagation has been considered, empirical studies [30] show their limited effectiveness in weakly supervised settings. Furthermore, research [5] establishes a relationship between aggregation-based models and random walks, revealing that as the number of layers increases, these models converge to the limit distribution of random walks, a property of the entire graph. Consequently, their performance deteriorates significantly with a high number of layers in weakly supervised settings. In light of these challenges, we conduct a theoretical analysis that unveils the learning process of aggregation-based models as a search for propagation operators of pairwise entity similarities. Based on this insight, we establish a key theoretical result: potentially aligned entities in aggregation-based EA models possess isomorphic subgraphs, enabling the propagation of neighborhood information through these isomorphic subgraphs. For example, as shown in Figure 1 (right), potentially aligned entities (B, b) share isomorphic subgraphs, enabling the propagation of neighborhood information through these isomorphic subgraphs. Leveraging this insight, we introduce Potential isomorphism propagation Entity Alignment (PipEA), a general aggregation-based EA framework specifically designed to bridge the propagation gap in weakly supervised settings. PipEA constructs a propagation operator with two components: intra-graph propagation based on the original single-graph connectivity and inter-graph propagation grounded in potential alignment results represented by similarity matrices.
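The propagation gap described above can also be checked directly: in an L-layer aggregation model, only entities within L hops of a seed receive any alignment signal. The sketch below (with a hypothetical helper name `reachable_within`) illustrates this on a toy path graph:

```python
import numpy as np

def reachable_within(adj, seeds, num_layers):
    """Entities that can receive seed-alignment signal through an L-layer
    aggregation model, i.e., exactly the nodes within L hops of a seed.
    adj: (n, n) boolean adjacency matrix; seeds: list of seed node indices.
    """
    frontier = np.zeros(adj.shape[0], dtype=bool)
    frontier[seeds] = True
    covered = frontier.copy()
    for _ in range(num_layers):
        frontier = adj[frontier].any(axis=0) & ~covered
        covered |= frontier
    return covered

# A 6-node path graph with one seed at node 0: with two layers,
# nodes 3..5 stay outside the propagation radius.
n = 6
adj = np.zeros((n, n), dtype=bool)
for i in range(n - 1):
    adj[i, i + 1] = adj[i + 1, i] = True
print(reachable_within(adj, seeds=[0], num_layers=2))  # [T T T F F F]
```

With `num_layers=2` and a single seed at node 0, nodes 3 through 5 remain isolated, mirroring the distant entities in Figure 1 (left).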
This operator facilitates the generation of a new similarity matrix. We further propose a refinement scheme to better fuse the new and original similarity matrices. To reduce the complexity, we adopt randomized low-rank SVD [7] and the Sinkhorn operator [3]. Extensive experiments demonstrate PipEA\u2019s effectiveness, not only in weakly supervised settings but also in some normal supervised scenarios. In particular, our framework even improves the most dominant metric, Hit@1, by nearly two times compared to the original model and also achieves state-of-the-art results on both cross-lingual and mono-lingual datasets. Our main contributions are summarized as follows: \u2022 Theoretical Analysis: We perform a theoretical analysis of aggregation-based EA models, illustrating their operation in terms of information propagation. Our analysis reveals that these models inherently seek propagation operators governing pairwise entity similarities. Furthermore, we establish that potentially aligned entities within these models exhibit isomorphic subgraphs, forming the theoretical foundation for our method. \u2022 Innovative Method: We introduce PipEA, a theoretically grounded method designed to address the propagation gap prevalent in weakly supervised scenarios. To the best of our knowledge, PipEA is the first method capable of facilitating neighborhood information propagation between potentially aligned entities across heterogeneous graphs. \u2022 Extensive Experiments: Experimental results validate our theoretical analysis and indicate our method achieves state-of-the-art performance on real-world datasets. 2 PRELIMINARY 2.1 Problem Definition Definition 2.1. A knowledge graph, denoted as G = (E, R, T), comprises a set of entities E, a set of relations R, and a set of triples T = {(h, r, t) | h, t \u2208 E, r \u2208 R}. Each triple represents an edge from the head entity h to the tail entity t with the relation r. Definition 2.2. The EA task aims to discover a one-to-one mapping of entities \u03a6 from a source KG G_s = (E_s, R_s, T_s) to a target KG G_t = (E_t, R_t, T_t). Formally, the seed alignment is denoted as \u03a6 = {(e_s, e_t) | e_s \u2208 E_s, e_t \u2208 E_t, e_s \u2261 e_t}, where \u2261 represents an equivalence relation between e_s and e_t. We delve into basic aggregation-based models and introduce propagation operators. Then we prove that potentially aligned entities have isomorphic subgraphs and design a new method based on it. 3.1 Aggregation-based EA In aggregation-based EA models, entities are initially represented by aggregating their neighbors in the unified space. For simplicity, we consider a one-layer GCN [9] with mean-pooling as the aggregation function.
The entity embedding in GCNs and the primary objective of alignment learning can be expressed as follows: $\\mathbf{e} = \\frac{1}{|N(e)|} \\sum_{e' \\in N(e)} \\mathbf{e}'$ (1) and $\\min \\sum_{(e_s, e_t) \\in \\Phi} d(\\mathbf{e}_s, \\mathbf{e}_t)$ (2), where $d(\\cdot)$ represents a distance measure. The objective is to minimize the embedding distance between identical entities in seed alignments. While negative sampling methods aim to generate dissimilar entity pairs and train to distinguish the embeddings of dissimilar entities, Eq. 2 is the fundamental and commonly employed learning objective, which is the focus of our analysis. 3.2 Propagation Operators The propagation operator is the core of propagation algorithms, governing information or influence spread in graphs. It is represented as a matrix or mathematical function. In general, propagation algorithms can be expressed as: $\\boldsymbol{\\pi}_u = \\boldsymbol{\\pi}_u \\hat{P}$ (3), where $\\pi_u$ signifies the transition probabilities from node $u$ to other nodes. One example is personalized PageRank (PPR) [21], where the propagation operator $\\hat{P}$ often takes the form $\\hat{P} = D^{-1} A$. Here, $D$ is a diagonal matrix, with $D(i, i)$ representing node $i$\u2019s out-degree (for directed) or degree (for undirected graphs), and $A$ is the adjacency matrix. PPR $\\pi_u(v)$ for node $v$ regarding node $u$ quantifies the probability that a random walk with an $\\alpha$ discount initiated from $u$ ends at $v$, with $\\alpha$ denoting the probability of stopping at the current node and $(1 - \\alpha)$ the probability of transitioning to a random out-neighbor. PPR propagation can be expressed as: $\\pi_{PPR} = \\sum_{\\ell=0}^{\\infty} \\alpha (1 - \\alpha)^{\\ell} \\hat{P}^{\\ell}$ (4) 3.3 Analysis of Aggregation-based EA The Aggregation-based EA model is commonly evaluated on heterogeneous graphs [4, 11, 29, 41]. Entity similarities are computed using embeddings from aggregation-based models, denoted as $(x_1, x_2, \\ldots, x_n)$ and $(y_1, y_2, \\ldots, y_m)$, with $n = |E_s|$ and $m = |E_t|$. The pairwise similarity matrix, crucial for entity alignment, is defined as: $\\Omega = (x_1; x_2; \\ldots; x_n)^{\\top} (y_1; y_2; \\ldots; y_m) \\in \\mathbb{R}^{n \\times m}$ (5) Similar to most EA settings, we assume that each entity aligns with at most one entity in another KG. Proposition 3.1. In the Aggregation-based EA model, the primary objective is to derive a propagation operator governing pairwise entity similarities via embedding learning. Proof. Please refer to Appendix A.1. \u25a1 This proposition clarifies that the Aggregation-based EA model seeks a propagation operator operating on the similarity matrix.
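As a concrete illustration of Eqs. (3)-(4), the following numpy sketch materializes the truncated PPR propagation operator with P_hat = D^{-1} A; the truncation depth and the function name are illustrative choices, not part of the paper:

```python
import numpy as np

def ppr_operator(A, alpha=0.15, num_terms=50):
    """Truncated personalized-PageRank propagation (cf. Eq. (4)):
    pi_PPR = sum_l alpha * (1 - alpha)^l * P_hat^l, with P_hat = D^{-1} A.
    """
    P_hat = A / A.sum(axis=1, keepdims=True)   # row-stochastic D^{-1} A
    power = np.eye(A.shape[0])                 # P_hat^0
    pi = np.zeros_like(P_hat)
    for l in range(num_terms):
        pi += alpha * (1 - alpha) ** l * power
        power = power @ P_hat
    return pi

# Toy 4-node cycle: every row of the truncated PPR matrix sums to ~1.
A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]], dtype=float)
print(ppr_operator(A).sum(axis=1))  # ~[1. 1. 1. 1.]
```

Each row of the result approximates the PPR distribution pi_u over all nodes, i.e., the propagation operator applied from node u.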
Recent studies employ such operators on proximity matrices [43, 44, 47], representing entity proximity within a single graph. Proposition 3.1 suggests that the similarity matrix can also be seen as a specialized proximity matrix, where $\Omega(i, j)$ measures the proximity between node $i$ and node $j$ in another graph within a unified space. This implies that $\Omega(i, j)$ captures potential structural information between cross-graph entities in the unified space. Moreover, the propagation operator derived from the learning process unveils the structure of subgraphs for potentially aligned entity pairs within this unified space. Therefore, we provide the following proposition.
Proposition 3.2. Let $\Lambda = [\lambda_1, \ldots, \lambda_n] \in \mathbb{R}^{n \times n}$ represent a matrix comprising $n$ arbitrary orthonormal vectors, signifying the propagation operator derived from aggregation-based EA models. Then potentially aligned entities have isomorphic subgraphs, and it follows that $Pr(rank(\Lambda \Lambda^{\top}) = n) = 1$.
Proof. Please refer to Appendix A.2. □
3.4 Potential Isomorphism Propagation
Entities communicate through information propagation, facilitated by isomorphic subgraphs and graph connectivity [8]. Traditionally, this process was confined to individual graphs due to heterogeneity and varying intra-graph connectivity. However, Proposition 3.2 establishes that, in the aggregation-based EA model, potentially aligned entities indeed share isomorphic subgraphs, a property inherently reflected in the similarity matrix. This leads us to propose Potential Isomorphism Propagation. This concept harnesses the unified space's similarity matrix to control information propagation between potentially aligned entities with isomorphic subgraphs. To implement this idea, we introduce a propagation operator containing inter-graph and intra-graph propagation, and its effectiveness is formally proven. In this model, information flow between potentially aligned entities is governed by the similarity matrix, while intra-graph propagation relies on the graph's connectivity represented as $D^{-1}A$. Let $D_{\gamma}$ and $A_{\gamma}$ denote the degree matrix and adjacency matrix for $G_{\gamma}$, where $\gamma \in \{s, t\}$.
Figure 2: PipEA starts with initial pairwise similarities from aggregation-based models. We construct a propagation operator and apply matrix factorization to derive new entity embeddings. A refinement scheme is then introduced to effectively integrate various pairwise similarities.
Proposition 3.3. Let $\Lambda_1 = [D_s^{-1} A_s, \Omega] \in \mathbb{R}^{n \times (n+m)}$, $\Lambda_2 = [\Omega^{\top}, D_t^{-1} A_t] \in \mathbb{R}^{m \times (n+m)}$, and $\hat{\Lambda} = [\Lambda_1; \Lambda_2] \in \mathbb{R}^{(n+m) \times (n+m)}$.
Consider $\hat{\Lambda} \in \mathbb{R}^{(n+m) \times (n+m)}$ as a symmetric graph operator with $\lambda_1, \ldots, \lambda_d$ as its $d$ dominant eigenvalues (in decreasing order of magnitude), where $|\lambda_i| > |\lambda_{i+1}|$, $1 \le i \le d$. Then, the Potential Isomorphism Propagation strategy, $\hat{\Lambda} \hat{\Lambda}^{\top}$, converges to the $d$-dominant eigenvectors.
Proof. Please refer to Appendix A.3. □
4 METHOD
PipEA introduces a novel approach to enhance weakly supervised EA by leveraging potential isomorphism, as detailed in Section 1. This section outlines PipEA's framework, grounded in the principles established in Proposition 3.3.
4.1 Overview of the Framework
PipEA unfolds in several phases, starting with an initial similarity matrix $\Omega_0$ produced by an aggregation-based EA model, as illustrated in Fig. 2. The core of PipEA is isomorphism propagation, which advances information sharing across KGs through a sequence of steps: constructing the propagation operator, iterative propagation, matrix factorization, and a refinement scheme. Matrix factorization yields new entity embeddings, enabling the creation of a refined similarity matrix. This new matrix surpasses the initial one by capturing both local and cross-graph structural similarities, thanks to the GNN layers that encode neighborhood information. The propagation operator thus facilitates linking entities in distinct graphs with analogous local structures. Our refinement scheme enhances this process by integrating the newly derived similarity matrix with the initial one, further refined through a multiplication operation. An advanced refinement scheme is also introduced to ensure neighborhood consistency within each graph, thus maintaining the integrity of subgraph structures in the unified space.
In this paper, we utilize two leading EA models, DualAMN [18] and PEEA [30], to generate $\Omega_0$. DualAMN, the state-of-the-art (SOTA) model in normal supervised EA, employs proxy matching and hard negative sampling with GCNs. PEEA, the SOTA model in weakly supervised EA, leverages anchor positioning for dependency mapping and has shown exemplary performance among structure-only aggregation-based EA methods. The encoding process is formalized as:
$\Omega_0 = Encoder(A_s, A_t) \quad (6)$
The implementation details of PipEA are provided in Algorithm 1.
4.2 Isomorphism Propagation
4.2.1 Constructing the Propagation Operator. Our propagation operator combines intra-graph and inter-graph propagation. Intra-graph propagation is based on the normalized adjacency matrix $D^{-1}A$ of the single-graph structure, while inter-graph propagation uses the similarity matrix $\Omega \in \mathbb{R}^{n \times m}$. The operator is defined as:
$\hat{\Lambda} = \begin{bmatrix} \beta \cdot D_s^{-1} A_s & (1 - \beta) \cdot \mathcal{N}_{\ell_2}(\Omega_0) \\ (1 - \beta) \cdot \mathcal{N}_{\ell_2}(\Omega_0^{\top}) & \beta \cdot D_t^{-1} A_t \end{bmatrix} \in \mathbb{R}^{(n+m) \times (n+m)} \quad (7)$
Here, the parameter $\beta$ balances intra-graph and inter-graph propagation.
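As a rough illustration of Eq. (7) (not the released code), the sketch below assembles the block operator from the two row-normalized adjacencies and the top-$k$ $\ell_2$-normalized similarity blocks; the normalization helper anticipates Eq. (8), the one-hot handling of seed rows from Eq. (10) is omitted for brevity, and the defaults for $k$ and $\beta$ are assumptions.

import numpy as np

def topk_l2_normalize(Omega, k=2):
    """Keep the top-k entries per row (phi_k of Eq. 8) and l2-normalize the row."""
    out = np.zeros_like(Omega)
    for i, row in enumerate(Omega):
        idx = np.argsort(row)[-k:]          # indices of the k largest similarities
        out[i, idx] = row[idx]
    norms = np.linalg.norm(out, axis=1, keepdims=True)
    norms[norms == 0] = 1.0
    return out / norms

def build_propagation_operator(P_s, P_t, Omega0, beta=0.5, k=2):
    """Assemble the (n+m) x (n+m) operator of Eq. 7: intra-graph blocks D^{-1}A
    weighted by beta, inter-graph blocks through normalized similarities by 1-beta.
    Seed rows should additionally be pinned to one-hot vectors per Eq. 10."""
    top = np.hstack([beta * P_s, (1 - beta) * topk_l2_normalize(Omega0, k)])
    bot = np.hstack([(1 - beta) * topk_l2_normalize(Omega0.T, k), beta * P_t])
    return np.vstack([top, bot])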
Notably, we focus on similarities between highly confident potentially aligned pairs using a normalization operation denoted as $\mathcal{N}_{\ell_2}(\cdot)$, which is applied row-wise as follows:
$\mathcal{N}_{\ell_2}(\Omega(i, :)) = \frac{\varphi_k(\Omega(i, :))}{\|\varphi_k(\Omega(i, :))\|_2} \quad (8)$
where $\varphi_k$ denotes a ranking scheme that preserves the top $k$ candidates $(\omega^c_1, \ldots, \omega^c_k)$ and sets the others to zero:
$\mathcal{N}_{\ell_2}(\Omega(i, :)) = [0, \ldots, \omega^c_1, \ldots, 0, \ldots, \omega^c_k, \ldots, 0] \in \mathbb{R}^m \quad (9)$
It is worth noting that for any $(e_j, e_{j'}) \in \Phi$ within the seed alignments, inter-graph propagation occurs exclusively between the aligned entity pairs, and their $\mathcal{N}_{\ell_2}(\Omega(j, :))$ is precisely defined as a vector with only one nonzero value, at the $j'$-th element:
$\mathcal{N}_{\ell_2}(\Omega(j, :)) = I_{j'} = [0, \ldots, 1, \ldots, 0] \in \mathbb{R}^m \quad (10)$
4.2.2 Propagation Strategy. Inspired by the PPR formulation in Eq. 4, we introduce a random-walk propagation method to harness the potential isomorphism phenomenon, denoted as:
$S = \sum_{\ell=0}^{\infty} \alpha (1 - \alpha)^{\ell} \hat{\Lambda}^{\ell} \quad (11)$
This method facilitates the propagation of neighborhood information through isomorphic subgraphs between potentially aligned entities.
4.2.3 Matrix Factorization. In experiments, we observed that matrix $S$ contains many small values. To adapt to large-scale datasets, we introduce a threshold $\delta$ below which values are set to zero. Then, we compute $\log(\frac{S}{\delta})$ for the non-zero entries, obtaining a sparse matrix approximating the propagation results. Next, using a differentiable Singular Value Decomposition (SVD) [7] with input dimension $d$, we factorize the matrix $S$. This produces $U$ and $V$ matrices, both of size $(n + m) \times d$, along with a diagonal matrix $\Sigma$, such that $U \Sigma V^{\top} \approx S$:
$U, \Sigma, V^{\top} = SVD(Sparse(S, \delta), d) \quad (12)$
Finally, we compute entity embeddings $X$ as:
$X = U \sqrt{\Sigma} \quad (13)$
This method ensures robust and informative information propagation within the unified space. Subsequently, we derive global pairwise similarities using the source embeddings $X_s = X[:n]$ and target embeddings $X_t = X[n : n + m]$:
$\Omega'_0 = X_s X_t^{\top} \quad (14)$
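A compact sketch of Eqs. (11)-(14), under stated assumptions: the infinite series is truncated at L terms, numpy's dense SVD stands in for the randomized low-rank SVD [7] used at scale, and the parameter defaults are ours, not the paper's exact configuration.

import numpy as np

def propagate_and_factorize(Lambda_hat, n, alpha=0.7, L=8, delta=1e-4, d=128):
    """Random-walk propagation (Eq. 11), sparsification with threshold delta and
    elementwise log (Sec. 4.2.3), rank-d factorization (Eq. 12), embeddings
    X = U sqrt(Sigma) (Eq. 13), and new similarities Omega' = X_s X_t^T (Eq. 14)."""
    size = Lambda_hat.shape[0]
    S, R = np.zeros((size, size)), np.eye(size)
    for _ in range(L):                       # truncated version of Eq. 11
        S += alpha * R
        R = (1.0 - alpha) * (Lambda_hat @ R)
    S[S < delta] = 0.0                       # sparse threshold (Sec. 4.3.1)
    mask = S > 0
    S[mask] = np.log(S[mask] / delta)        # log(S / delta) on non-zero entries
    U, sig, _ = np.linalg.svd(S)             # dense stand-in for low-rank SVD
    r = min(d, size)
    X = U[:, :r] * np.sqrt(sig[:r])          # X = U sqrt(Sigma)
    return X[:n] @ X[n:].T                   # Omega' = X_s X_t^T, shape (n, m)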
4.2.4 Refinement Scheme. Two similarity matrices are generated in the previous process. The first is the initial similarity matrix produced by the aggregation-based EA model, emphasizing local information via n-hop neighborhood aggregation. The second comes from the propagation scheme, which focuses on global information in the unified space. To integrate these matrices, we employ the element-wise Hadamard product, resulting in a new matrix denoted as $\Omega_0$:
$\Omega_0 = \Omega_0 \circ \Omega'_0 \quad (15)$
However, the direct fusion of these matrices can potentially compromise topological consistency information within the original similarity matrices [4]. To mitigate this loss, we introduce a refinement scheme that centers on preserving topological consistency. This refinement scheme leverages the concept of matched neighborhood consistency (MNC) scores [6] to quantify topological consistency and iteratively enhances these scores. The MNC score was originally defined as:
$S_{MNC} = A_s \Omega A_t \oslash (A_s \Omega 1_m \otimes 1_m + 1_n A_t 1_m - A_s \Omega A_t) \quad (16)$
Here, $\oslash$ denotes element-wise division, and $\otimes$ signifies the Kronecker product. However, for simplification purposes, we approximate it as follows:
$S_{MNC} \approx A_s \Omega A_t \quad (17)$
To iteratively update the similarity matrix $\Omega$ in the $k$-th iteration, we add a small $\epsilon$ to every element of $\Omega$ to assign every pair of nodes a token match score, irrespective of whether the initial EA model identified them as matches. This enables us to rectify potential false negatives present in the initial similarity matrix. The similarities for aligned entity pairs are set as a one-nonzero-value vector, as illustrated in Eq. 19:
$\Omega_k = \varphi(\Omega_{k-1} \circ A_s \Omega_{k-1} A_t + \epsilon) \quad (18)$
Here, the function $\varphi$ selects the entity pairs from the seed alignments $\Phi$ and assigns them a similarity score of 1, while setting all other values in the row to zero. It is defined as:
$\varphi(\Omega, (e_i, e_j) \in \Phi) := \Omega(i) = I_j = [0, \ldots, 1, \ldots, 0] \quad (19)$
This iterative refinement process is a critical component of our PipEA method, enabling the correction of potential biases and enhancing the alignment accuracy.
Remark: The refinement scheme is distinct from RefiNA [6]. While RefiNA is an unsupervised graph matching technique, our scheme is supervised and introduces dynamic influence on the MNC score in each iteration through the seed alignments.
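A minimal sketch of the fusion and refinement loop of Eqs. (15)-(19), assuming non-negative similarity entries so that the row/column normalizations are well defined; the iteration count and epsilon defaults are assumptions mirroring the settings reported later.

import numpy as np

def refine(Omega0, Omega_new, A_s, A_t, seeds, L2=8, eps=1e-5):
    """Hadamard fusion (Eq. 15) followed by MNC-style refinement (Eqs. 17-19):
    pin seed rows to one-hot vectors, reweight by the approximate MNC score
    A_s Omega A_t, add the token score eps, then normalize rows and columns."""
    Omega = Omega0 * Omega_new               # element-wise Hadamard product
    for _ in range(L2):
        for i, j in seeds:                   # phi: seed pairs become one-hot rows
            Omega[i, :] = 0.0
            Omega[i, j] = 1.0
        Omega = Omega * (A_s @ Omega @ A_t) + eps
        Omega = Omega / Omega.sum(axis=1, keepdims=True)   # row normalization
        Omega = Omega / Omega.sum(axis=0, keepdims=True)   # then column-wise
    return Omega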
Algorithm 1 Potential Isomorphism Propagation Strategy
Input: The adjacency matrices $A_s, A_t$, numbers of iterations $L_1, L_2$, embedding dimension $d$, seed alignments $\Phi$
Output: The final refined similarity matrix $\Omega$.
1: Initialize $S = 0$, $R = I$ ($I$ is the identity matrix)
2: $\Omega_0 \leftarrow Encoder(A_s, A_t)$
3: Construct the propagation operator $\hat{\Lambda}$
4: for $k = 1 \to L_1$ do
5:   $S \leftarrow S + \alpha \cdot R$
6:   $R \leftarrow (1 - \alpha) \cdot \hat{\Lambda} \cdot R$
7: end for
8: for all $S(i, j) \in S$ do
9:   if $S(i, j) < \delta$ then $S(i, j) \leftarrow 0$
10: end for
11: Compute $\log(\frac{S}{\delta})$ for the non-zero entries
12: $[U, \Sigma, V^{\top}] \leftarrow$ DifferentiableSparseSVD$(\log(\frac{S}{\delta}), d)$
13: Obtain the entity embedding matrix $X \leftarrow U \sqrt{\Sigma}$
14: $\Omega'_0 \leftarrow X_s X_t^{\top}$
15: $\Omega_0 \leftarrow \Omega_0 \circ \Omega'_0$
16: for $k = 1 \to L_2$ do
17:   $\Omega_{k-1} \leftarrow \varphi(\Omega_{k-1}, \Phi)$
18:   $\Omega_k \leftarrow \Omega_{k-1} \circ A_s \Omega_{k-1} A_t + \epsilon$
19:   Normalize $\Omega_k$ by row, then by column
20: end for
21: return $\Omega$
4.3 Reducing Time Complexity
The PipEA method introduces potential isomorphism propagation to intricately map the inter- and intra-structural nuances of KGs. Despite its comprehensive approach, this technique inherently increases time complexity, primarily due to the computation of $S \in \mathbb{R}^{(n+m) \times (n+m)}$. To mitigate this, we employ the following strategies to streamline the computational process.
4.3.1 Sparse threshold. As referenced in Section 4.2.3, a sparse threshold $\delta$ is applied, setting all scores below $\delta$ to zero. This filters out insignificant entries, focusing on those most likely to contribute to accurate propagation outcomes. An in-depth analysis of the impact of $\delta$ is presented in our experimental section.
4.3.2 Low-rank SVD. Following the observations of [16, 36] that the significant information in $S$ is concentrated in the top singular values, we employ randomized low-rank SVD [7]. This method approximates the matrix decomposition, retaining only the top 1% of singular values, thus reducing both space and time complexity.
4.3.3 Sinkhorn operator. Traditional EA methodologies calculate entity pair similarities directly, leading to potential violations of the one-to-one alignment constraint. To circumvent this, transforming the EA decoding process into an assignment problem has shown promise [16, 30], markedly improving performance:
$\arg \max_{P \in \mathcal{P}_{|E|}} \langle P, \Omega \rangle_F \quad (20)$
Here, $\Omega$ denotes the similarity matrix, and $P$ is a permutation matrix that outlines the alignment strategy. While the Hungarian algorithm offers a precise solution, its $O(|E|^3)$ complexity is prohibitive for large KGs. Adopting the Sinkhorn operator [3], we apply a scalable and parallelizable algorithm, significantly reducing the computational load to $O(q|E|^2)$ with $q$ set to 10 iterations. The details of the Sinkhorn operator are given in the Appendix.
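A sketch of Sinkhorn-style decoding for Eq. (20), under assumptions: the exponentiation with a temperature tau is common practice but not specified by the paper, and rectangular similarity matrices are only approximately doubly stochastic after alternating normalization.

import numpy as np

def sinkhorn(Omega, q=10, tau=0.05):
    """Approximate the assignment problem of Eq. 20 in O(q |E|^2): q rounds of
    alternating row/column normalization push exp(Omega/tau) toward a
    (near-)doubly-stochastic matrix."""
    K = np.exp(Omega / tau)
    for _ in range(q):
        K = K / K.sum(axis=1, keepdims=True)
        K = K / K.sum(axis=0, keepdims=True)
    return K

# Decoding: each source entity is aligned to its highest-scoring target.
# predictions = sinkhorn(Omega).argmax(axis=1)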
Table 1: Main results on the cross-lingual and mono-lingual datasets. Each cell reports H@1 / H@10 / MRR; underline (in the original layout) means the best existing models. PEEA is the SOTA of weakly supervised EA and DualAMN is the SOTA of normal supervised EA. "Improv." represents the percentage increase compared with the original model (a dash marks cells where no improvement was reported). PipEA(P) means PEEA is the encoder and PipEA(D) means DualAMN is the encoder.
Model | 15K EN-DE | 15K EN-FR | 100K EN-FR | 15K DBP-Wiki | 15K DBP-Yago | 100K DBP-Wiki
Basic:
GCN-Align | 10.9 / 26.7 / 16.4 | 3.6 / 15.2 / 7.1 | 2.5 / 9.4 / 5.0 | 3.1 / 11.0 / 5.8 | 40.1 / 60.6 / 47.1 | 3.5 / 11.4 / 6.2
PSR | 21.5 / 49.7 / 31.0 | 15.1 / 38.1 / 22.9 | 13.2 / 32.9 / 19.9 | 19.5 / 44.2 / 27.9 | 25.3 / 51.6 / 34.2 | 14.6 / 33.5 / 21.0
MRAEA | 28.6 / 58.7 / 38.7 | 14.4 / 38.5 / 22.4 | 13.5 / 36.1 / 21.0 | 19.4 / 45.4 / 28.1 | 42.9 / 72.4 / 53.1 | 17.1 / 39.7 / 24.7
RREA | 48.5 / 72.5 / 56.8 | 26.3 / 56.4 / 36.4 | 16.4 / 40.6 / 24.5 | 41.8 / 67.5 / 50.7 | 82.1 / 92.8 / 86.0 | 21.4 / 45.9 / 29.7
DualAMN | 51.9 / 75.4 / 60.1 | 25.2 / 52.1 / 34.3 | 15.0 / 38.6 / 22.8 | 40.0 / 64.5 / 48.5 | 76.2 / 88.3 / 80.7 | 16.2 / 37.7 / 23.5
PipEA(D) | 82.3 / 86.4 / 83.9 | 48.5 / 58.9 / 52.4 | 36.8 / 64.6 / 46.6 | 71.6 / 76.3 / 73.4 | 96.7 / 97.8 / 97.2 | 31.9 / 54.6 / 39.6
Improv. | 58.6% / 14.6% / 39.6% | 92.5% / 13.1% / 52.8% | 145.3% / 67.3% / 104.4% | 79.0% / 18.3% / 51.3% | 26.9% / 10.8% / 20.4% | 96.9% / 44.8% / 68.5%
PEEA | 68.6 / 88.1 / 75.4 | 44.0 / 72.4 / 53.6 | 20.3 / 47.6 / 29.4 | 59.4 / 81.2 / 67.1 | 92.6 / 97.4 / 94.4 | 24.4 / 50.6 / 33.2
PipEA(P) | 85.4 / 92.0 / 87.8 | 58.1 / 74.4 / 63.7 | 49.6 / 54.0 / 38.5 | 75.4 / 82.5 / 77.9 | 96.9 / 98.4 / 97.5 | 37.2 / 58.5 / 44.3
Improv. | 24.5% / 4.4% / 16.4% | 32.0% / 2.8% / 18.8% | 144.3% / 13.4% / 31.0% | 26.9% / 1.6% / 16.1% | 4.6% / 1.0% / 3.3% | 52.5% / 15.6% / 33.4%
Iterative:
BootEA | 0.6 / 3.6 / 1.7 | 2.7 / 10.1 / 5.2 | 2.1 / 4.4 / 5.4 | 1.8 / 7.4 / 3.7 | 2.7 / 1.6 / 6.6 | 3.3 / 40.5 / 28.2
KECG | 42.5 / 64.6 / 50.2 | 14.1 / 43.3 / 23.7 | 11.1 / 30.5 / 17.7 | 23.8 / 45.8 / 31.3 | 57.8 / 78.8 / 65.1 | 20.2 / 42.4 / 27.8
SEA | 43.1 / 66.5 / 51.2 | 18.9 / 49.4 / 29.1 | 12.5 / 34.5 / 19.9 | 15.6 / 40.4 / 24.0 | 81.4 / 92.7 / 85.5 | 13.6 / 33.6 / 20.4
PSR | 79.9 / 91.4 / 84.1 | 52.8 / 75.3 / 60.5 | 55.4 / 72.6 / 62.0 | 72.1 / 85.5 / 77.1 | 95.2 / 97.9 / 96.3 | 59.3 / 68.4 / 61.7
MRAEA | 64.7 / 84.5 / 72.3 | 35.9 / 61.7 / 44.7 | 38.2 / 64.1 / 55.1 | 58.6 / 78.4 / 66.5 | 88.5 / 97.8 / 92.7 | 45.8 / 60.2 / 48.4
RREA | 76.5 / 90.7 / 81.8 | 39.6 / 68.0 / 49.5 | 55.2 / 74.1 / 63.3 | 66.8 / 83.5 / 73.1 | 95.6 / 98.3 / 96.7 | 58.0 / 71.8 / 62.7
Dual-AMN | 77.1 / 93.0 / 83.6 | 48.4 / 79.0 / 59.4 | 57.3 / 76.2 / 65.9 | 67.2 / 86.6 / 75.1 | 92.4 / 98.4 / 95.2 | 59.6 / 72.1 / 63.8
PipEA(D) | 86.6 / 93.5 / 88.1 | 61.3 / 70.6 / 64.9 | 67.7 / 84.9 / 73.3 | 78.4 / 82.5 / 80.0 | 97.0 / 98.5 / 97.6 | 71.8 / 70.6 / 68.1
Improv. | 12.26% / 0.52% / 5.41% | 26.74% / – / 9.33% | 18.15% / 11.40% / 11.29% | 16.61% / – / 6.48% | 1.56% / 0.1% / 0.89% | 20.45% / – / 6.68%
PEEA | 83.6 / 93.2 / 87.1 | 55.6 / 79.4 / 64.0 | 59.6 / 77.6 / 66.3 | 78.5 / 90.0 / 82.7 | 96.6 / 98.6 / 97.3 | 65.3 / 78.2 / 70.6
PipEA(P) | 90.7 / 95.4 / 92.5 | 80.1 / 89.8 / 83.6 | 71.6 / 86.1 / 76.7 | 84.3 / 90.9 / 86.1 | 97.2 / 98.9 / 97.7 | 77.6 / 86.8 / 80.8
Improv. | 8.5% / 2.4% / 6.2% | 44.1% / 13.1% / 30.6% | 20.1% / 11.0% / 15.7% | 7.4% / 1.0% / 4.1% | 0.6% / 0.3% / 0.4% | 18.8% / 11.0% / 14.4%
5 EXPERIMENTS
In this section, we conduct a rigorous evaluation of our PipEA method on real-world datasets, benchmarking its performance against state-of-the-art (SOTA) EA models. We introduce our experimental settings and present detailed results below.
5.1 Experimental Settings
5.1.1 Datasets. To assess PipEA's effectiveness, we turn to the OpenEA benchmark dataset (V2), thoughtfully designed to closely mirror the data distribution found in real knowledge graphs.
Our evaluation encompasses two cross-lingual settings (English-to-French and English-to-German), sourced from the multilingual DBpedia, and two monolingual settings (DBpedia-to-Wikidata and DBpedia-to-YAGO), extracted from popular knowledge bases. In each setting, we consider two sizes: one with 15K pairs of reference entities and another with 100K pairs. In contrast to the conventional use of 30% of seed alignments for training, we adopt weakly supervised scenarios, employing only 1% of seed alignments randomly sampled from the datasets. For more details, please refer to Appendix D.
5.1.2 Evaluation Metrics. Performance assessment is based on the official Mean Reciprocal Rank (MRR), H@1, and H@10 metrics, widely recognized and embraced in EA studies. Higher H@1, H@10, and MRR scores signify superior EA performance. Our default alignment direction is from left to right (e.g., EN as the source KG and FR as the target KG for the EN-FR dataset).
5.1.3 Baselines. We compare our PipEA with 9 prominent methods that rely solely on the original structure information, divided into two groups:
• Basic models: GCN-Align [38], MRAEA [19], RREA [20], PSR [17], Dual-AMN [18] and PEEA [30].
• Iterative strategy: The iterative training strategy is also applied to the basic models to improve their performance and further evaluate the ability of PipEA. In addition to the basic models described above, we add models that specialize in iterative processing, namely BootEA [27], KECG [10], and SEA [22].
We adhere to the default hyper-parameters as reported in their respective literature.
5.1.4 Iterative training strategy. After training the base model with fundamental settings, every $K_e$ epochs (in our paper $K_e = 10$), cross-KG entity pairs that are mutual nearest neighbors in the vector space are proposed and added to a candidate list $N_{cd}$. An entity pair in $N_{cd}$ is incorporated into the training set if it remains a mutual nearest neighbor for $K_c$ consecutive rounds (where $K_c = 10$); a sketch of this mining step follows after Section 5.1.6.
5.1.5 Parameter Settings. Following previous studies [23, 32, 44], we set the embedding dimensionality to $d = 128$ and the token match score to $\epsilon = 0.00001$. We employ 8 propagation iterations and 8 refinement iterations ($L_1 = 8$ and $L_2 = 8$) in the experiments. For further insights into hyper-parameters, kindly refer to Section 5.5 and Appendix E.
5.1.6 Computational Resources. All experiments were executed on a server equipped with NVIDIA A100 and NVIDIA A800 GPUs. A comprehensive analysis of the computational complexity of our method is detailed in Appendix C.
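The following sketch (our illustration of the mutual-nearest-neighbor proposal step in Section 5.1.4, not the released code) mines candidate pairs from the current similarity matrix and promotes a pair only after it survives $K_c$ consecutive rounds; the data structures are assumptions.

import numpy as np

def propose_mutual_nearest_pairs(Omega):
    """One round of candidate mining: cross-KG pairs that are mutual nearest
    neighbors under the current similarity matrix Omega."""
    s2t = Omega.argmax(axis=1)              # best target for each source entity
    t2s = Omega.argmax(axis=0)              # best source for each target entity
    return {(i, int(s2t[i])) for i in range(Omega.shape[0]) if t2s[s2t[i]] == i}

def update_training_set(history, Omega, K_c=10):
    """Track consecutive survival counts per pair (resetting pairs that drop
    out) and promote a pair once it stays a mutual nearest neighbor K_c times."""
    current = propose_mutual_nearest_pairs(Omega)
    history = {p: history.get(p, 0) + 1 for p in current}
    promoted = {p for p, cnt in history.items() if cnt >= K_c}
    return history, promoted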
5.2 Main Results
Our evaluation of PipEA across six cross-lingual and monolingual datasets, under the condition of 1% seed alignments, is summarized in Table 1. The results lead to several key insights: (1) PipEA significantly surpasses standard alignment models using various encoders in both cross-lingual and monolingual contexts. Notably, on the 100K EN-FR dataset, it records substantial increases of 145.3% and 144.3% in the H@1 metric for PEEA and DualAMN, respectively. These outcomes confirm that (i) isomorphic subgraphs among potentially aligned entities facilitate robust information propagation, and (ii) our propagation approach effectively leverages this characteristic in a unified embedding space. (2) In terms of Mean Reciprocal Rank (MRR), PipEA demonstrates enhancements from 20.4% to 52.8% on the 15K datasets and from 68.5% to 104.4% on the 100K datasets, underscoring its adaptability and effectiveness across different dataset sizes. These results underscore PipEA's capacity to substantially elevate the performance of existing models in weakly supervised scenarios, making it a superior option for weakly supervised entity alignment tasks. (3) It is evident that the iterative strategy significantly enhances entity alignment by iteratively incorporating high-quality aligned entity pairs into the training data to refine the model. Specifically, PipEA demonstrates remarkable improvements of 17.3% and 9.4% in accuracy on the 15K EN-FR and 100K DBP-Wiki datasets. In terms of H@1 and MRR, improvements are observed across various datasets. Notably, even without the iterative strategy, PipEA surpasses most iterative models on all 15K datasets, and on the 15K DBP-Wiki dataset it outperforms LightEA (Iter.). This exceptional performance underscores the effectiveness of our method in identifying high-quality aligned entity pairs for iterative training. (4) PipEA consistently outperforms existing weakly supervised aligners across all datasets. While these methods employ techniques to enhance accuracy, they primarily propagate neighborhoods within individual graphs. As discussed in Section 1, the key challenge in weakly supervised EA is limited neighborhood propagation on the graph. We find that potentially aligned entities indeed possess isomorphic subgraphs capable of propagating neighborhoods. PipEA, with its cross-graph propagation operator, successfully bridges this gap by facilitating neighborhood propagation between potentially aligned entities in different graphs, resulting in improved alignment accuracy. (5) Our case study analysis, focused on the 15K EN-DE dataset, examines the pairwise similarity quality of SOTA models versus PipEA. By analyzing a subgraph of 25 entities, we visualized the similarity matrices of PipEA and PEEA through heat maps (Fig. 3). PipEA exhibits markedly clearer and more distinct similarity distributions than PEEA, underscoring its superior ability to refine similarity matrices for accurate entity pair identification. This clarity in similarity distribution explains PipEA's improved H@1 performance contrasted with its H@10 performance, indicating its effectiveness in establishing confident one-to-one alignments without necessitating an extensive candidate pool.
Figure 3: Similarity matrices of (a) our method PipEA and (b) the SOTA encoder PEEA on a sub-graph from 15K EN-DE.
Table 2: H@1 of different methods without the iterative strategy under different supervised settings. PEEA is the encoder.
Model | 15K EN-DE (seed ratio 1% / 5% / 10% / 20% / 30%) | 15K EN-FR (seed ratio 1% / 5% / 10% / 20% / 30%)
Ours | 85.4 / 90.64 / 92.47 / 94.62 / 95.46 | 58.12 / 79.87 / 83.56 / 88.07 / 90.24
PEEA | 68.67 / 84.58 / 91.13 / 94.75 / 95.94 | 44.07 / 68.73 / 77.4 / 84.88 / 90.04
RREA | 48.5 / 76.1 / 83.44 / 88.74 / 90.53 | 26.23 / 55.64 / 68.64 / 78.2 / 81.91
Dual-AMN | 51.95 / 76.46 / 86.13 / 91.2 / 93.17 | 25.28 / 55.51 / 68.11 / 78.78 / 84.11
MRAEA | 28.63 / 68.56 / 81.42 / 88.1 / 90.72 | 14.42 / 45.82 / 61.75 / 73.82 / 79.76
PSR | 21.59 / 62.17 / 76.83 / 85.61 / 89.09 | 15.18 / 45.81 / 60.69 / 73.06 / 79.66
GCN-Align | 10.99 / 20.76 / 24.21 / 27.43 / 30.5 | 3.68 / 12.84 / 18.76 / 23.52 / 28.83
5.3 Discussions for Supervised Settings
To assess the generality of our method under different supervised settings, we conducted comprehensive experiments comparing PipEA with baseline methods at different training data ratios (1%, 5%, 10%, 20%, and 30%) using the 15K EN-DE and 15K EN-FR datasets. The results summarized in Table 2 reveal several key insights: (1) Our method demonstrates remarkable performance even in conventional supervised settings. While its advantage over the other baselines may diminish as the training ratio increases, PipEA consistently outperforms them. Notably, it achieves superior performance compared to the baselines on the 15K EN-FR dataset even when the seed alignment ratio is as high as 30%. This phenomenon can be attributed to the nature of aggregation-based EA models, which operate by propagating neighborhood information. PipEA's unique capability to propagate information across graphs extends the size of the neighborhood involved in this propagation process, contributing to its sustained success across varying degrees of supervision. (2) The performance gap between PipEA and the baselines in weakly supervised scenarios gradually narrows as the training ratio increases. The alignment performance on the 15K EN-DE dataset is lower than that of PEEA [30] when the seed alignment ratio exceeds 10%. This is because the propagation gap diminishes when substantial seed alignments are available, and PipEA was originally designed to tackle the challenges of limited seed alignments.
Table 3: Ablation experiments without the iterative training strategy on cross-lingual and mono-lingual datasets. Each cell reports H@1 / H@10 / MRR.
Method | 15K EN-DE | 15K EN-FR
Ours | 85.40 / 92.03 / 87.80 | 58.12 / 74.42 / 63.69
w\o RS | 79.99 / 91.24 / 81.24 | 49.72 / 72.72 / 58.95
w\o IS | 78.48 / 88.84 / 81.98 | 48.27 / 70.4 / 55.53
w\o PS | 75.64 / 88.06 / 79.92 | 49.53 / 70.18 / 56.52
Method | 15K DBP-Yago | 15K DBP-Wiki
Ours | 96.90 / 98.48 / 97.51 | 75.40 / 82.55 / 77.94
w\o RS | 95.32 / 98.2 / 96.47 | 69.89 / 81.57 / 74.51
w\o IS | 92.46 / 96.63 / 93.95 | 67.93 / 77.49 / 71.08
w\o PS | 92.32 / 97.99 / 94.43 | 67.36 / 82.72 / 73.07
5.4 Ablation Study
To comprehensively evaluate the contributions of each component within PipEA, we conducted an ablation study based on the PEEA encoder, introducing three variants of the model, as presented in Table 3: (1) w/o RS (Refinement Scheme): In this variant, we excluded the refinement scheme, directly multiplying the initial similarity matrix $\Omega_0$ with the generated similarity matrix $\Omega'_0$. As discussed in Section 4.2.4, directly fusing different similarity matrices can result in a loss of topological consistency information. Compared to our complete PipEA method, w/o RS led to a reduction in MRR ranging from 1.04% to 6.56% on the 15K datasets. Notably, H@1 exhibited a significant drop of 8.4% on 15K EN-FR. This suggests that without the refinement scheme, the model may produce more erroneous predictions, especially when only one outcome can be predicted. (2) w/o IS (Initial Similarity): In this case, we excluded the initial similarity matrix $\Omega_0$ and solely utilized $\Omega'_0$ as the final similarity matrix. It is crucial to note that the initial similarity matrix generated by the aggregation-based EA model primarily captures local similarities arising from its n-hop neighborhood information aggregation. Conversely, our method provides global similarities.
By removing these local similarities, we observed a notable decrease in model performance, ranging from 4.44% to 9.85% in terms of H@1. (3) w/o PS (Potential Isomorphism Propagation): In this variant, we omitted the generated similarity matrix $\Omega'_0$. These experiments offered insights into the effectiveness of potential isomorphism propagation. Across the 15K datasets, w/o PS resulted in a substantial reduction in H@1, ranging from 4.58% to 9.76%. This outcome underscores the critical role of potential isomorphism propagation in our method. Furthermore, it highlights that our refinement scheme can effectively preserve topological consistency within the similarity matrix, leading to improved alignment accuracy.
Figure 4: Hyper-parameter experiments of our method: (a) top-$k$ on 15K EN-DE; (b) top-$k$ on 15K DBP-Wiki; (c) stop probability $\alpha$ on 15K EN-DE; (d) transition probability $\beta$ on 15K EN-DE; (e) sparse threshold $\delta$ on 15K DBP-Wiki; (f) sparse threshold $\delta$ on 15K DBP-Yago. Each panel reports Hits@1, Hits@10, and MRR.
5.5 Hyper-parameters
5.5.1 Top-K Selection. PipEA employs a parameter, denoted as $k$, to select potentially aligned entities for propagation through the normalization operation; it appears as $\varphi_k$ in Eq. 8. Fig. 4(a)(b) illustrates the sensitivity of alignment accuracy to the number of potentially aligned entities. Interestingly, our method achieves peak performance on both the EN-DE and DBP-Wiki datasets across all metrics when $k = 2$. Further increasing the value of $k$ leads to diminishing accuracy, suggesting that allowing too many entities to share isomorphic subgraphs within the unified space may negatively impact performance. Consequently, we set $k = 2$ as the optimal choice for our study.
5.5.2 Random Walk Probability. The parameter $\alpha$ determines the probability of stopping at the current entity during a random walk. As depicted in Fig. 4(c), PipEA exhibits robust performance within the range of $\alpha$ from 0.1 to 0.9. The highest H@1 score (85.4%) is achieved at approximately $\alpha = 0.7$, which we adopt in our study. Notably, our experiments reveal that PipEA's effectiveness is compromised under two extreme conditions: when $\alpha = 0$, indicating constant random walks among all entities across graphs, and when $\alpha = 1$, signifying a lack of neighborhood information propagation. These findings empirically validate the effectiveness of our method.
5.5.3 Inter-Intra Graph Balance. Eq. 7 outlines the propagation operator, consisting of intra-graph and inter-graph propagation components, with $\beta$ controlling the balance between these aspects. Specifically, $\beta$ determines the probability of propagation occurring within the intra-graph part compared to the inter-graph part. Fig.
4(d) demonstrates that PipEA consistently performs well for $\beta$ values ranging from 0.1 to 0.9. We set $\beta = 0.5$ in this work. Notably, when $\beta = 0$, signifying propagation solely across graphs, H@1 exhibits a slight degradation, affirming PipEA's ability to capture neighborhood information in the unified space. Conversely, when $\beta = 1$, indicating exclusive intra-graph propagation, H@1 drops significantly (by 1.36%), underscoring the challenge of capturing entity dependencies between different graphs when relying solely on single-graph structures. This emphasizes the rationale behind our method, which combines intra-graph and inter-graph propagation.
5.5.4 Threshold. To ensure the non-negativity of the generated matrix and its applicability across graphs of varying scales, we utilize the threshold $\delta$, as outlined in Section 4.2. Our experiments indicate that accuracy declines as $\delta$ decreases, signifying the loss of valuable neighborhood information, as depicted in Fig. 4(e)(f). This reaffirms the effectiveness of PipEA on the 100K datasets and its significant performance improvements compared to baseline methods.
6 CONCLUSION
This research addresses the challenges of weakly supervised EA characterized by limited seed alignments. Through a propagation perspective, we analyze how aggregation-based EA models utilize operators to propagate pairwise entity similarities. A key insight from our theoretical analysis is the existence of isomorphic subgraphs among potentially aligned entities, facilitating the effective information propagation essential for EA tasks. We develop PipEA, a general and novel approach that synergizes intra-graph and inter-graph propagation techniques while refining similarity matrices to enhance alignment accuracy. Our experimental results affirm PipEA's effectiveness, showcasing its superiority over state-of-the-art models in both weakly supervised and fully supervised contexts. PipEA thus serves as a pivotal step forward, reconciling recent advancements in aggregation-based EA with traditional propagation-based graph learning methodologies." }, { "url": "http://arxiv.org/abs/2010.03249v2", "title": "Exploring and Evaluating Attributes, Values, and Structures for Entity Alignment", "abstract": "Entity alignment (EA) aims at building a unified Knowledge Graph (KG) of rich\ncontent by linking the equivalent entities from various KGs. GNN-based EA\nmethods present promising performances by modeling the KG structure defined by\nrelation triples. However, attribute triples can also provide crucial alignment\nsignal but have not been well explored yet. In this paper, we propose to\nutilize an attributed value encoder and partition the KG into subgraphs to\nmodel the various types of attribute triples efficiently. Besides, the\nperformances of current EA methods are overestimated because of the name-bias\nof existing EA datasets. To make an objective evaluation, we propose a hard\nexperimental setting where we select equivalent entity pairs with very\ndifferent names as the test set. Under both the regular and hard settings, our\nmethod achieves significant improvements ($5.10\\%$ on average Hits@$1$ in\nDBP$15$k) over $12$ baselines in cross-lingual and monolingual datasets.\nAblation studies on different subgraphs and a case study about attribute types\nfurther demonstrate the effectiveness of our method. 
Source code and data can\nbe found at https://github.com/thunlp/explore-and-evaluate.", + "authors": "Zhiyuan Liu, Yixin Cao, Liangming Pan, Juanzi Li, Zhiyuan Liu, Tat-Seng Chua", + "published": "2020-10-07", + "updated": "2021-01-02", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/1710.10903v3", + "title": "Graph Attention Networks", + "abstract": "We present graph attention networks (GATs), novel neural network\narchitectures that operate on graph-structured data, leveraging masked\nself-attentional layers to address the shortcomings of prior methods based on\ngraph convolutions or their approximations. By stacking layers in which nodes\nare able to attend over their neighborhoods' features, we enable (implicitly)\nspecifying different weights to different nodes in a neighborhood, without\nrequiring any kind of costly matrix operation (such as inversion) or depending\non knowing the graph structure upfront. In this way, we address several key\nchallenges of spectral-based graph neural networks simultaneously, and make our\nmodel readily applicable to inductive as well as transductive problems. Our GAT\nmodels have achieved or matched state-of-the-art results across four\nestablished transductive and inductive graph benchmarks: the Cora, Citeseer and\nPubmed citation network datasets, as well as a protein-protein interaction\ndataset (wherein test graphs remain unseen during training).", + "authors": "Petar Veli\u010dkovi\u0107, Guillem Cucurull, Arantxa Casanova, Adriana Romero, Pietro Li\u00f2, Yoshua Bengio", + "published": "2017-10-30", + "updated": "2018-02-04", + "primary_cat": "stat.ML", + "cats": [ + "stat.ML", + "cs.AI", + "cs.LG", + "cs.SI" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2110.06474v1", + "title": "ActiveEA: Active Learning for Neural Entity Alignment", + "abstract": "Entity Alignment (EA) aims to match equivalent entities across different\nKnowledge Graphs (KGs) and is an essential step of KG fusion. Current\nmainstream methods -- neural EA models -- rely on training with seed alignment,\ni.e., a set of pre-aligned entity pairs which are very costly to annotate. In\nthis paper, we devise a novel Active Learning (AL) framework for neural EA,\naiming to create highly informative seed alignment to obtain more effective EA\nmodels with less annotation cost. Our framework tackles two main challenges\nencountered when applying AL to EA: (1) How to exploit dependencies between\nentities within the AL strategy. Most AL strategies assume that the data\ninstances to sample are independent and identically distributed. However,\nentities in KGs are related. To address this challenge, we propose a\nstructure-aware uncertainty sampling strategy that can measure the uncertainty\nof each entity as well as its impact on its neighbour entities in the KG. (2)\nHow to recognise entities that appear in one KG but not in the other KG (i.e.,\nbachelors). Identifying bachelors would likely save annotation budget. To\naddress this challenge, we devise a bachelor recognizer paying attention to\nalleviate the effect of sampling bias. 
Empirical results show that our proposed\nAL strategy can significantly improve sampling quality with good generality\nacross different datasets, EA models and amount of bachelors.", + "authors": "Bing Liu, Harrisen Scells, Guido Zuccon, Wen Hua, Genghong Zhao", + "published": "2021-10-13", + "updated": "2021-10-13", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/1908.08210v1", + "title": "Relation-Aware Entity Alignment for Heterogeneous Knowledge Graphs", + "abstract": "Entity alignment is the task of linking entities with the same real-world\nidentity from different knowledge graphs (KGs), which has been recently\ndominated by embedding-based methods. Such approaches work by learning KG\nrepresentations so that entity alignment can be performed by measuring the\nsimilarities between entity embeddings. While promising, prior works in the\nfield often fail to properly capture complex relation information that commonly\nexists in multi-relational KGs, leaving much room for improvement. In this\npaper, we propose a novel Relation-aware Dual-Graph Convolutional Network\n(RDGCN) to incorporate relation information via attentive interactions between\nthe knowledge graph and its dual relation counterpart, and further capture\nneighboring structures to learn better entity representations. Experiments on\nthree real-world cross-lingual datasets show that our approach delivers better\nand more robust results over the state-of-the-art alignment methods by learning\nbetter KG representations.", + "authors": "Yuting Wu, Xiao Liu, Yansong Feng, Zheng Wang, Rui Yan, Dongyan Zhao", + "published": "2019-08-22", + "updated": "2019-08-22", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2108.05278v2", + "title": "Are Negative Samples Necessary in Entity Alignment? An Approach with High Performance, Scalability and Robustness", + "abstract": "Entity alignment (EA) aims to find the equivalent entities in different KGs,\nwhich is a crucial step in integrating multiple KGs. However, most existing EA\nmethods have poor scalability and are unable to cope with large-scale datasets.\nWe summarize three issues leading to such high time-space complexity in\nexisting EA methods: (1) Inefficient graph encoders, (2) Dilemma of negative\nsampling, and (3) \"Catastrophic forgetting\" in semi-supervised learning. To\naddress these challenges, we propose a novel EA method with three new\ncomponents to enable high Performance, high Scalability, and high Robustness\n(PSR): (1) Simplified graph encoder with relational graph sampling, (2)\nSymmetric negative-free alignment loss, and (3) Incremental semi-supervised\nlearning. 
Furthermore, we conduct detailed experiments on several public\ndatasets to examine the effectiveness and efficiency of our proposed method.\nThe experimental results show that PSR not only surpasses the previous SOTA in\nperformance but also has impressive scalability and robustness.", + "authors": "Xin Mao, Wenting Wang, Yuanbin Wu, Man Lan", + "published": "2021-08-11", + "updated": "2021-08-12", + "primary_cat": "cs.AI", + "cats": [ + "cs.AI", + "cs.IR" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2008.07962v1", + "title": "Relational Reflection Entity Alignment", + "abstract": "Entity alignment aims to identify equivalent entity pairs from different\nKnowledge Graphs (KGs), which is essential in integrating multi-source KGs.\nRecently, with the introduction of GNNs into entity alignment, the\narchitectures of recent models have become more and more complicated. We even\nfind two counter-intuitive phenomena within these methods: (1) The standard\nlinear transformation in GNNs is not working well. (2) Many advanced KG\nembedding models designed for link prediction task perform poorly in entity\nalignment. In this paper, we abstract existing entity alignment methods into a\nunified framework, Shape-Builder & Alignment, which not only successfully\nexplains the above phenomena but also derives two key criteria for an ideal\ntransformation operation. Furthermore, we propose a novel GNNs-based method,\nRelational Reflection Entity Alignment (RREA). RREA leverages Relational\nReflection Transformation to obtain relation specific embeddings for each\nentity in a more efficient way. The experimental results on real-world datasets\nshow that our model significantly outperforms the state-of-the-art methods,\nexceeding by 5.8%-10.9% on Hits@1.", + "authors": "Xin Mao, Wenting Wang, Huimin Xu, Yuanbin Wu, Man Lan", + "published": "2020-08-18", + "updated": "2020-08-18", + "primary_cat": "cs.IR", + "cats": [ + "cs.IR", + "cs.CL", + "cs.LG" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2001.08943v3", + "title": "Active Learning for Entity Alignment", + "abstract": "In this work, we propose a novel framework for the labeling of entity\nalignments in knowledge graph datasets. Different strategies to select\ninformative instances for the human labeler build the core of our framework. We\nillustrate how the labeling of entity alignments is different from assigning\nclass labels to single instances and how these differences affect the labeling\nefficiency. Based on these considerations we propose and evaluate different\nactive and passive learning strategies. One of our main findings is that\npassive learning approaches, which can be efficiently precomputed and deployed\nmore easily, achieve performance comparable to the active learning strategies.", + "authors": "Max Berrendorf, Evgeniy Faerman, Volker Tresp", + "published": "2020-01-24", + "updated": "2021-03-17", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "stat.ML" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2009.07111v2", + "title": "Contrastive and Generative Graph Convolutional Networks for Graph-based Semi-Supervised Learning", + "abstract": "Graph-based Semi-Supervised Learning (SSL) aims to transfer the labels of a\nhandful of labeled data to the remaining massive unlabeled data via a graph. 
As\none of the most popular graph-based SSL approaches, the recently proposed Graph\nConvolutional Networks (GCNs) have gained remarkable progress by combining the\nsound expressiveness of neural networks with graph structure. Nevertheless, the\nexisting graph-based methods do not directly address the core problem of SSL,\ni.e., the shortage of supervision, and thus their performances are still very\nlimited. To accommodate this issue, a novel GCN-based SSL algorithm is\npresented in this paper to enrich the supervision signals by utilizing both\ndata similarities and graph structure. Firstly, by designing a semi-supervised\ncontrastive loss, improved node representations can be generated via maximizing\nthe agreement between different views of the same data or the data from the\nsame class. Therefore, the rich unlabeled data and the scarce yet valuable\nlabeled data can jointly provide abundant supervision information for learning\ndiscriminative node representations, which helps improve the subsequent\nclassification result. Secondly, the underlying determinative relationship\nbetween the data features and input graph topology is extracted as\nsupplementary supervision signals for SSL via using a graph generative loss\nrelated to the input features. Intensive experimental results on a variety of\nreal-world datasets firmly verify the effectiveness of our algorithm compared\nwith other state-of-the-art methods.", + "authors": "Sheng Wan, Shirui Pan, Jian Yang, Chen Gong", + "published": "2020-09-15", + "updated": "2020-09-19", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "stat.ML" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/1901.00596v4", + "title": "A Comprehensive Survey on Graph Neural Networks", + "abstract": "Deep learning has revolutionized many machine learning tasks in recent years,\nranging from image classification and video processing to speech recognition\nand natural language understanding. The data in these tasks are typically\nrepresented in the Euclidean space. However, there is an increasing number of\napplications where data are generated from non-Euclidean domains and are\nrepresented as graphs with complex relationships and interdependency between\nobjects. The complexity of graph data has imposed significant challenges on\nexisting machine learning algorithms. Recently, many studies on extending deep\nlearning approaches for graph data have emerged. In this survey, we provide a\ncomprehensive overview of graph neural networks (GNNs) in data mining and\nmachine learning fields. We propose a new taxonomy to divide the\nstate-of-the-art graph neural networks into four categories, namely recurrent\ngraph neural networks, convolutional graph neural networks, graph autoencoders,\nand spatial-temporal graph neural networks. We further discuss the applications\nof graph neural networks across various domains and summarize the open source\ncodes, benchmark data sets, and model evaluation of graph neural networks.\nFinally, we propose potential research directions in this rapidly growing\nfield.", + "authors": "Zonghan Wu, Shirui Pan, Fengwen Chen, Guodong Long, Chengqi Zhang, Philip S. 
Yu", + "published": "2019-01-03", + "updated": "2019-12-04", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "stat.ML" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2304.01563v1", + "title": "Attribute-Consistent Knowledge Graph Representation Learning for Multi-Modal Entity Alignment", + "abstract": "The multi-modal entity alignment (MMEA) aims to find all equivalent entity\npairs between multi-modal knowledge graphs (MMKGs). Rich attributes and\nneighboring entities are valuable for the alignment task, but existing works\nignore contextual gap problems that the aligned entities have different numbers\nof attributes on specific modality when learning entity representations. In\nthis paper, we propose a novel attribute-consistent knowledge graph\nrepresentation learning framework for MMEA (ACK-MMEA) to compensate the\ncontextual gaps through incorporating consistent alignment knowledge.\nAttribute-consistent KGs (ACKGs) are first constructed via multi-modal\nattribute uniformization with merge and generate operators so that each entity\nhas one and only one uniform feature in each modality. The ACKGs are then fed\ninto a relation-aware graph neural network with random dropouts, to obtain\naggregated relation representations and robust entity representations. In order\nto evaluate the ACK-MMEA facilitated for entity alignment, we specially design\na joint alignment loss for both entity and attribute evaluation. Extensive\nexperiments conducted on two benchmark datasets show that our approach achieves\nexcellent performance compared to its competitors.", + "authors": "Qian Li, Shu Guo, Yangyifei Luo, Cheng Ji, Lihong Wang, Jiawei Sheng, Jianxin Li", + "published": "2023-04-04", + "updated": "2023-04-04", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2012.08128v1", + "title": "Relation-Aware Neighborhood Matching Model for Entity Alignment", + "abstract": "Entity alignment which aims at linking entities with the same meaning from\ndifferent knowledge graphs (KGs) is a vital step for knowledge fusion. Existing\nresearch focused on learning embeddings of entities by utilizing structural\ninformation of KGs for entity alignment. These methods can aggregate\ninformation from neighboring nodes but may also bring noise from neighbors.\nMost recently, several researchers attempted to compare neighboring nodes in\npairs to enhance the entity alignment. However, they ignored the relations\nbetween entities which are also important for neighborhood matching. In\naddition, existing methods paid less attention to the positive interactions\nbetween the entity alignment and the relation alignment. To deal with these\nissues, we propose a novel Relation-aware Neighborhood Matching model named RNM\nfor entity alignment. Specifically, we propose to utilize the neighborhood\nmatching to enhance the entity alignment. Besides comparing neighbor nodes when\nmatching neighborhood, we also try to explore useful information from the\nconnected relations. Moreover, an iterative framework is designed to leverage\nthe positive interactions between the entity alignment and the relation\nalignment in a semi-supervised manner. 
Experimental results on three real-world\ndatasets demonstrate that the proposed model RNM performs better than\nstate-of-the-art methods.", + "authors": "Yao Zhu, Hongzhi Liu, Zhonghai Wu, Yingpeng Du", + "published": "2020-12-15", + "updated": "2020-12-15", + "primary_cat": "cs.AI", + "cats": [ + "cs.AI", + "cs.CL" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2103.15452v1", + "title": "Boosting the Speed of Entity Alignment 10*: Dual Attention Matching Network with Normalized Hard Sample Mining", + "abstract": "Seeking the equivalent entities among multi-source Knowledge Graphs (KGs) is\nthe pivotal step to KGs integration, also known as \\emph{entity alignment}\n(EA). However, most existing EA methods are inefficient and poor in\nscalability. A recent summary points out that some of them even require several\ndays to deal with a dataset containing 200,000 nodes (DWY100K). We believe\nover-complex graph encoder and inefficient negative sampling strategy are the\ntwo main reasons. In this paper, we propose a novel KG encoder -- Dual\nAttention Matching Network (Dual-AMN), which not only models both intra-graph\nand cross-graph information smartly, but also greatly reduces computational\ncomplexity. Furthermore, we propose the Normalized Hard Sample Mining Loss to\nsmoothly select hard negative samples with reduced loss shift. The experimental\nresults on widely used public datasets indicate that our method achieves both\nhigh accuracy and high efficiency. On DWY100K, the whole running process of our\nmethod could be finished in 1,100 seconds, at least 10* faster than previous\nwork. The performances of our method also outperform previous works across all\ndatasets, where Hits@1 and MRR have been improved from 6% to 13%.", + "authors": "Xin Mao, Wenting Wang, Yuanbin Wu, Man Lan", + "published": "2021-03-29", + "updated": "2021-03-29", + "primary_cat": "cs.AI", + "cats": [ + "cs.AI" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2401.17859v2", + "title": "Towards Semantic Consistency: Dirichlet Energy Driven Robust Multi-Modal Entity Alignment", + "abstract": "In Multi-Modal Knowledge Graphs (MMKGs), Multi-Modal Entity Alignment (MMEA)\nis crucial for identifying identical entities across diverse modal attributes.\nHowever, semantic inconsistency, mainly due to missing modal attributes, poses\na significant challenge. Traditional approaches rely on attribute\ninterpolation, but this often introduces modality noise, distorting the\noriginal semantics. Moreover, the lack of a universal theoretical framework\nlimits advancements in achieving semantic consistency. This study introduces a\nnovel approach, DESAlign, which addresses these issues by applying a\ntheoretical framework based on Dirichlet energy to ensure semantic consistency.\nWe discover that semantic inconsistency leads to model overfitting to modality\nnoise, causing performance fluctuations, particularly when modalities are\nmissing. DESAlign innovatively combats over-smoothing and interpolates absent\nsemantics using existing modalities. Our approach includes a multi-modal\nknowledge graph learning strategy and a propagation technique that employs\nexisting semantic features to compensate for missing ones, providing explicit\nEuler solutions. Comprehensive evaluations across 60 benchmark splits,\nincluding monolingual and bilingual scenarios, demonstrate that DESAlign\nsurpasses existing methods, setting a new standard in performance. 
Further\ntesting with high rates of missing modalities confirms its robustness, offering\nan effective solution to semantic inconsistency in real-world MMKGs.", + "authors": "Yuanyi Wang, Haifeng Sun, Jiabo Wang, Jingyu Wang, Wei Tang, Qi Qi, Shaoling Sun, Jianxin Liao", + "published": "2024-01-31", + "updated": "2024-03-19", + "primary_cat": "cs.IR", + "cats": [ + "cs.IR" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2204.05258v1", + "title": "Multi-view graph structure learning using subspace merging on Grassmann manifold", + "abstract": "Many successful learning algorithms have been recently developed to represent\ngraph-structured data. For example, Graph Neural Networks (GNNs) have achieved\nconsiderable successes in various tasks such as node classification, graph\nclassification, and link prediction. However, these methods are highly\ndependent on the quality of the input graph structure. One used approach to\nalleviate this problem is to learn the graph structure instead of relying on a\nmanually designed graph. In this paper, we introduce a new graph structure\nlearning approach using multi-view learning, named MV-GSL (Multi-View Graph\nStructure Learning), in which we aggregate different graph structure learning\nmethods using subspace merging on Grassmann manifold to improve the quality of\nthe learned graph structures. Extensive experiments are performed to evaluate\nthe effectiveness of the proposed method on two benchmark datasets, Cora and\nCiteseer. Our experiments show that the proposed method has promising\nperformance compared to single and other combined graph structure learning\nmethods.", + "authors": "Razieh Ghiasi, Hossein Amirkhani, Alireza Bosaghzadeh", + "published": "2022-04-11", + "updated": "2022-04-11", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/2111.04286v1", + "title": "Deep Unsupervised Active Learning on Learnable Graphs", + "abstract": "Recently deep learning has been successfully applied to unsupervised active\nlearning. However, current method attempts to learn a nonlinear transformation\nvia an auto-encoder while ignoring the sample relation, leaving huge room to\ndesign more effective representation learning mechanisms for unsupervised\nactive learning. In this paper, we propose a novel deep unsupervised Active\nLearning model via Learnable Graphs, named ALLG. ALLG benefits from learning\noptimal graph structures to acquire better sample representation and select\nrepresentative samples. To make the learnt graph structure more stable and\neffective, we take into account $k$-nearest neighbor graph as a priori, and\nlearn a relation propagation graph structure. We also incorporate shortcut\nconnections among different layers, which can alleviate the well-known\nover-smoothing problem to some extent. To the best of our knowledge, this is\nthe first attempt to leverage graph structure learning for unsupervised active\nlearning. 
Extensive experiments performed on six datasets demonstrate the\nefficacy of our method.", + "authors": "Handong Ma, Changsheng Li, Xinchu Shi, Ye Yuan, Guoren Wang", + "published": "2021-11-08", + "updated": "2021-11-08", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/2109.11796v1", + "title": "Edge but not Least: Cross-View Graph Pooling", + "abstract": "Graph neural networks have emerged as a powerful model for graph\nrepresentation learning to undertake graph-level prediction tasks. Various\ngraph pooling methods have been developed to coarsen an input graph into a\nsuccinct graph-level representation through aggregating node embeddings\nobtained via graph convolution. However, most graph pooling methods are heavily\nnode-centric and are unable to fully leverage the crucial information contained\nin global graph structure. This paper presents a cross-view graph pooling\n(Co-Pooling) method to better exploit crucial graph structure information. The\nproposed Co-Pooling fuses pooled representations learnt from both node view and\nedge view. Through cross-view interaction, edge-view pooling and node-view\npooling seamlessly reinforce each other to learn more informative graph-level\nrepresentations. Co-Pooling has the advantage of handling various graphs with\ndifferent types of node attributes. Extensive experiments on a total of 15\ngraph benchmark datasets validate the effectiveness of our proposed method,\ndemonstrating its superior performance over state-of-the-art pooling methods on\nboth graph classification and graph regression tasks.", + "authors": "Xiaowei Zhou, Jie Yin, Ivor W. Tsang", + "published": "2021-09-24", + "updated": "2021-09-24", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/2011.01412v1", + "title": "Sampling and Recovery of Graph Signals based on Graph Neural Networks", + "abstract": "We propose interpretable graph neural networks for sampling and recovery of\ngraph signals, respectively. To take informative measurements, we propose a new\ngraph neural sampling module, which aims to select those vertices that\nmaximally express their corresponding neighborhoods. Such expressiveness can be\nquantified by the mutual information between vertices' features and\nneighborhoods' features, which are estimated via a graph neural network. To\nreconstruct an original graph signal from the sampled measurements, we propose\na graph neural recovery module based on the algorithm-unrolling technique.\nCompared to previous analytical sampling and recovery, the proposed methods are\nable to flexibly learn a variety of graph signal models from data by leveraging\nthe learning ability of neural networks; compared to previous\nneural-network-based sampling and recovery, the proposed methods are designed\nthrough exploiting specific graph properties and provide interpretability. We\nfurther design a new multiscale graph neural network, which is a trainable\nmultiscale graph filter bank and can handle various graph-related learning\ntasks. The multiscale network leverages the proposed graph neural sampling and\nrecovery modules to achieve multiscale representations of a graph. In the\nexperiments, we illustrate the effects of the proposed graph neural sampling\nand recovery modules and find that the modules can flexibly adapt to various\ngraph structures and graph signals. 
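For context on the sampling-and-recovery problem addressed above, here is the classical analytical baseline that such neural modules are contrasted with: least-squares recovery of a bandlimited graph signal from a few vertex samples. A textbook sketch with our own variable names, not the paper's modules.

```python
import numpy as np

rng = np.random.default_rng(1)
n, K, M = 60, 8, 12                        # nodes, bandwidth, sample count
A = (rng.random((n, n)) < 0.1).astype(float)
A = np.triu(A, 1); A = A + A.T             # random undirected graph
L = np.diag(A.sum(1)) - A                  # combinatorial Laplacian
_, V = np.linalg.eigh(L)
VK = V[:, :K]                              # low-frequency Fourier basis
x = VK @ rng.normal(size=K)                # a K-bandlimited graph signal
S = rng.choice(n, size=M, replace=False)   # sampled vertices
x_hat = VK @ np.linalg.pinv(VK[S]) @ x[S]  # least-squares recovery
print(np.linalg.norm(x - x_hat))           # ~0 when VK[S] has full column rank
```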
In the task of active-sampling-based\nsemi-supervised learning, the graph neural sampling module improves the\nclassification accuracy by over 10% on the Cora dataset. We further validate the\nproposed multiscale graph neural network on several standard datasets for both\nvertex and graph classification. The results show that our method consistently\nimproves the classification accuracies.", + "authors": "Siheng Chen, Maosen Li, Ya Zhang", + "published": "2020-11-03", + "updated": "2020-11-03", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.SI", + "eess.SP" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/2109.11898v1", + "title": "Graph Learning Augmented Heterogeneous Graph Neural Network for Social Recommendation", + "abstract": "Social recommendation based on social networks has achieved great success in\nimproving the performance of recommendation systems. Since social networks\n(user-user relations) and user-item interactions are both naturally represented\nas graph-structured data, Graph Neural Networks (GNNs) have thus been widely\napplied for social recommendation. In this work, we propose an end-to-end\nheterogeneous global graph learning framework, namely Graph Learning Augmented\nHeterogeneous Graph Neural Network (GL-HGNN) for social recommendation. GL-HGNN\naims to learn a heterogeneous global graph that makes full use of user-user\nrelations, user-item interactions and item-item similarities in a unified\nperspective. To this end, we design a Graph Learner (GL) method to learn and\noptimize user-user and item-item connections separately. Moreover, we employ a\nHeterogeneous Graph Neural Network (HGNN) to capture the high-order complex\nsemantic relations from our learned heterogeneous global graph. To scale up the\ncomputation of graph learning, we further present the Anchor-based Graph\nLearner (AGL) to reduce computational complexity. Extensive experiments on four\nreal-world datasets demonstrate the effectiveness of our model.", + "authors": "Yiming Zhang, Lingfei Wu, Qi Shen, Yitong Pang, Zhihua Wei, Fangli Xu, Ethan Chang, Bo Long", + "published": "2021-09-24", + "updated": "2021-09-24", + "primary_cat": "cs.IR", + "cats": [ + "cs.IR" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/2312.04762v1", + "title": "The Graph Lottery Ticket Hypothesis: Finding Sparse, Informative Graph Structure", + "abstract": "Graph learning methods help utilize implicit relationships among data items,\nthereby reducing training label requirements and improving task performance.\nHowever, determining the optimal graph structure for a particular learning task\nremains a challenging research problem.\n In this work, we introduce the Graph Lottery Ticket (GLT) Hypothesis - that\nthere is an extremely sparse backbone for every graph, and that graph learning\nalgorithms attain comparable performance when trained on that subgraph as on\nthe full graph. We identify and systematically study 8 key metrics of interest\nthat directly influence the performance of graph learning algorithms.\nSubsequently, we define the notion of a \"winning ticket\" for graph structure -\nan extremely sparse subset of edges that can deliver a robust approximation of\nthe entire graph's performance. We propose a straightforward and efficient\nalgorithm for finding these GLTs in arbitrary graphs.
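To make the "winning ticket" notion tangible, the sketch below is a deliberately naive global sparsifier that keeps the heaviest edges until a target average degree (such as the figure of 5 reported next) is met. It is a baseline of our own construction, not the paper's ticket-finding algorithm.

```python
import numpy as np

def sparsify_to_avg_degree(A: np.ndarray, target_avg_deg: float = 5.0) -> np.ndarray:
    """Keep only the heaviest edges until the average degree hits the target."""
    n = len(A)
    iu = np.triu_indices(n, k=1)
    w = A[iu]
    m_keep = int(target_avg_deg * n / 2)      # undirected edge budget
    keep = np.argsort(w)[::-1][:m_keep]       # indices of the heaviest edges
    B = np.zeros_like(A)
    B[iu[0][keep], iu[1][keep]] = w[keep]
    return B + B.T

A = np.random.default_rng(2).random((50, 50))
A = np.triu(A, 1) + np.triu(A, 1).T
B = sparsify_to_avg_degree(A, 5.0)
print((B > 0).sum(1).mean())                  # ~5 edges per node on average
```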
Empirically, we observe\nthat the performance of different graph learning algorithms can be matched or even\nexceeded on graphs with an average degree as low as 5.", + "authors": "Anton Tsitsulin, Bryan Perozzi", + "published": "2023-12-08", + "updated": "2023-12-08", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI", + "cs.SI" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/1910.11390v2", + "title": "Deep Learning for Molecular Graphs with Tiered Graph Autoencoders and Graph Prediction", + "abstract": "Tiered graph autoencoders provide the architecture and mechanisms for\nlearning tiered latent representations and latent spaces for molecular graphs\nthat explicitly represent and utilize groups (e.g., functional groups). This\nenables the utilization and exploration of tiered molecular latent spaces,\neither individually - the node (atom) tier, the group tier, or the graph\n(molecule) tier - or jointly, as well as navigation across the tiers. In this\npaper, we discuss the use of tiered graph autoencoders together with graph\nprediction for molecular graphs. We show features of molecular graphs used, and\ngroups in molecular graphs identified for some sample molecules. We briefly\nreview graph prediction and the QM9 dataset for background information, and\ndiscuss the use of tiered graph embeddings for graph prediction, particularly\nweighted group pooling. We find that functional groups and ring groups\neffectively capture and represent the chemical essence of molecular graphs\n(structures). Further, tiered graph autoencoders and graph prediction together\nprovide effective, efficient and interpretable deep learning for molecular\ngraphs, with the former providing unsupervised, transferable learning and the\nlatter providing supervised, task-optimized learning.", + "authors": "Daniel T. Chang", + "published": "2019-10-24", + "updated": "2021-07-01", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "q-bio.BM" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/2101.00082v1", + "title": "Bosonic Random Walk Networks for Graph Learning", + "abstract": "The development of Graph Neural Networks (GNNs) has led to great progress in\nmachine learning on graph-structured data. These networks operate via diffusing\ninformation across the graph nodes while capturing the structure of the graph.\nRecently, there has also been tremendous progress in quantum computing\ntechniques. In this work, we explore applications of multi-particle quantum\nwalks for diffusing information across graphs. Our model is based on learning\nthe operators that govern the dynamics of quantum random walkers on graphs. We\ndemonstrate the effectiveness of our method on classification and regression\ntasks.", + "authors": "Shiv Shankar, Don Towsley", + "published": "2020-12-31", + "updated": "2020-12-31", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/1905.10715v1", + "title": "Graph Attention Auto-Encoders", + "abstract": "Auto-encoders have emerged as a successful framework for unsupervised\nlearning. However, conventional auto-encoders are incapable of utilizing\nexplicit relations in structured data.
To take advantage of relations in\ngraph-structured data, several graph auto-encoders have recently been proposed,\nbut they neglect to reconstruct either the graph structure or node attributes.\nIn this paper, we present the graph attention auto-encoder (GATE), a neural\nnetwork architecture for unsupervised representation learning on\ngraph-structured data. Our architecture is able to reconstruct graph-structured\ninputs, including both node attributes and the graph structure, through stacked\nencoder/decoder layers equipped with self-attention mechanisms. In the encoder,\nby considering node attributes as initial node representations, each layer\ngenerates new representations of nodes by attending over their neighbors'\nrepresentations. In the decoder, we attempt to reverse the encoding process to\nreconstruct node attributes. Moreover, node representations are regularized to\nreconstruct the graph structure. Our proposed architecture does not need to\nknow the graph structure upfront, and thus it can be applied to inductive\nlearning. Our experiments demonstrate competitive performance on several node\nclassification benchmark datasets for transductive and inductive tasks, even\nexceeding the performance of supervised learning baselines in most cases.", + "authors": "Amin Salehi, Hasan Davulcu", + "published": "2019-05-26", + "updated": "2019-05-26", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.SI", + "stat.ML" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/2111.06679v2", + "title": "deepstruct -- linking deep learning and graph theory", + "abstract": "deepstruct connects deep learning models and graph theory such that different\ngraph structures can be imposed on neural networks or graph structures can be\nextracted from trained neural network models. For this, deepstruct provides\ndeep neural network models with different restrictions which can be created\nbased on an initial graph. Further, tools to extract graph structures from\ntrained models are available. This step of extracting graphs can be\ncomputationally expensive even for models of just a few dozen thousand\nparameters and poses a challenging problem. deepstruct supports research in\npruning, neural architecture search, automated network design and structure\nanalysis of neural networks.", + "authors": "Julian Stier, Michael Granitzer", + "published": "2021-11-12", + "updated": "2021-12-05", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI", + "cs.NE", + "I.2.0; F.0" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/1911.05954v3", + "title": "Hierarchical Graph Pooling with Structure Learning", + "abstract": "Graph Neural Networks (GNNs), which generalize deep neural networks to\ngraph-structured data, have drawn considerable attention and achieved\nstate-of-the-art performance in numerous graph related tasks. However, existing\nGNN models mainly focus on designing graph convolution operations. The graph\npooling (or downsampling) operations, that play an important role in learning\nhierarchical representations, are usually overlooked. In this paper, we propose\na novel graph pooling operator, called Hierarchical Graph Pooling with\nStructure Learning (HGP-SL), which can be integrated into various graph neural\nnetwork architectures. HGP-SL incorporates graph pooling and structure learning\ninto a unified module to generate hierarchical representations of graphs. 
More\nspecifically, the graph pooling operation adaptively selects a subset of nodes\nto form an induced subgraph for the subsequent layers. To preserve the\nintegrity of graph's topological information, we further introduce a structure\nlearning mechanism to learn a refined graph structure for the pooled graph at\neach layer. By combining HGP-SL operator with graph neural networks, we perform\ngraph level representation learning with focus on graph classification task.\nExperimental results on six widely used benchmarks demonstrate the\neffectiveness of our proposed model.", + "authors": "Zhen Zhang, Jiajun Bu, Martin Ester, Jianfeng Zhang, Chengwei Yao, Zhi Yu, Can Wang", + "published": "2019-11-14", + "updated": "2019-12-25", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "stat.ML" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/2311.11821v1", + "title": "Cross-View Graph Consistency Learning for Invariant Graph Representations", + "abstract": "Graph representation learning is fundamental for analyzing graph-structured\ndata. Exploring invariant graph representations remains a challenge for most\nexisting graph representation learning methods. In this paper, we propose a\ncross-view graph consistency learning (CGCL) method that learns invariant graph\nrepresentations for link prediction. First, two complementary augmented views\nare derived from an incomplete graph structure through a bidirectional graph\nstructure augmentation scheme. This augmentation scheme mitigates the potential\ninformation loss that is commonly associated with various data augmentation\ntechniques involving raw graph data, such as edge perturbation, node removal,\nand attribute masking. Second, we propose a CGCL model that can learn invariant\ngraph representations. A cross-view training scheme is proposed to train the\nproposed CGCL model. This scheme attempts to maximize the consistency\ninformation between one augmented view and the graph structure reconstructed\nfrom the other augmented view. Furthermore, we offer a comprehensive\ntheoretical CGCL analysis. This paper empirically and experimentally\ndemonstrates the effectiveness of the proposed CGCL method, achieving\ncompetitive results on graph datasets in comparisons with several\nstate-of-the-art algorithms.", + "authors": "Jie Chen, Zhiming Li, Hua Mao, Wai Lok Woo, Xi Peng", + "published": "2023-11-20", + "updated": "2023-11-20", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.CV" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/1902.10042v2", + "title": "Graph Neural Processes: Towards Bayesian Graph Neural Networks", + "abstract": "We introduce Graph Neural Processes (GNP), inspired by the recent work in\nconditional and latent neural processes. A Graph Neural Process is defined as a\nConditional Neural Process that operates on arbitrary graph data. It takes\nfeatures of sparsely observed context points as input, and outputs a\ndistribution over target points. We demonstrate graph neural processes in edge\nimputation and discuss benefits and drawbacks of the method for other\napplication areas. One major benefit of GNPs is the ability to quantify\nuncertainty in deep learning on graph structures. 
An additional benefit of this\nmethod is the ability to extend graph neural networks to inputs of dynamically\nsized graphs.", + "authors": "Andrew Carr, David Wingate", + "published": "2019-02-26", + "updated": "2019-10-01", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "stat.ML" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/2212.08966v4", + "title": "Graph Learning and Its Advancements on Large Language Models: A Holistic Survey", + "abstract": "Graph learning is a prevalent domain that endeavors to learn the intricate\nrelationships among nodes and the topological structure of graphs. Over the\nyears, graph learning has transcended from graph theory to graph data mining.\nWith the advent of representation learning, it has attained remarkable\nperformance in diverse scenarios. Owing to its extensive application prospects,\ngraph learning attracts copious attention. While some researchers have\naccomplished impressive surveys on graph learning, they failed to connect\nrelated objectives, methods, and applications in a more coherent way. As a\nresult, they did not encompass current ample scenarios and challenging problems\ndue to the rapid expansion of graph learning. Particularly, large language\nmodels have recently had a disruptive effect on human life, but they also show\nrelative weakness in structured scenarios. The question of how to make these\nmodels more powerful with graph learning remains open. Our survey focuses on\nthe most recent advancements in integrating graph learning with pre-trained\nlanguage models, specifically emphasizing their application within the domain\nof large language models. Different from previous surveys on graph learning, we\nprovide a holistic review that analyzes current works from the perspective of\ngraph structure, and discusses the latest applications, trends, and challenges\nin graph learning. Specifically, we commence by proposing a taxonomy and then\nsummarize the methods employed in graph learning. We then provide a detailed\nelucidation of mainstream applications. Finally, we propose future directions.", + "authors": "Shaopeng Wei, Yu Zhao, Xingyan Chen, Qing Li, Fuzhen Zhuang, Ji Liu, Fuji Ren, Gang Kou", + "published": "2022-12-17", + "updated": "2023-11-18", + "primary_cat": "cs.AI", + "cats": [ + "cs.AI" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/1905.11691v1", + "title": "Triple2Vec: Learning Triple Embeddings from Knowledge Graphs", + "abstract": "Graph embedding techniques allow learning high-quality feature vectors from\ngraph structures and are useful in a variety of tasks, from node classification\nto clustering. Existing approaches have only focused on learning feature\nvectors for the nodes in a (knowledge) graph. To the best of our knowledge,\nnone of them has tackled the problem of embedding graph edges, that is,\nknowledge graph triples. The approaches that are closest to this task have\nfocused on homogeneous graphs involving only one type of edge and obtain edge\nembeddings by applying some operation (e.g., average) on the embeddings of the\nendpoint nodes. The goal of this paper is to introduce Triple2Vec, a new\ntechnique to directly embed edges in (knowledge) graphs. Triple2Vec builds upon\nthree main ingredients. The first is the notion of line graph. The line graph\nof a graph is another graph representing the adjacency between edges of the\noriginal graph.
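For readers unfamiliar with the line-graph construction Triple2Vec starts from, a minimal networkx sketch follows; the paper's triple line graph for knowledge graphs adds edge weighting on top of this plain version.

```python
import networkx as nx

# Path graph 0-1-2-3: its edges become the nodes of the line graph, and two
# such nodes are adjacent iff the original edges share an endpoint.
G = nx.path_graph(4)
LG = nx.line_graph(G)
print(sorted(LG.nodes()))  # the three edges of G, now acting as nodes
print(sorted(LG.edges()))  # adjacency between edges sharing a vertex
```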
In particular, the nodes of the line graph are the edges of the\noriginal graph. We show that directly applying existing embedding techniques on\nthe nodes of the line graph to learn edge embeddings is not enough in the\ncontext of knowledge graphs. Thus, we introduce the notion of triple line\ngraph. The second is an edge weighting mechanism both for line graphs derived\nfrom knowledge graphs and homogeneous graphs. The third is a strategy based on\ngraph walks on the weighted triple line graph that can preserve proximity\nbetween nodes. Embeddings are finally generated by adopting the SkipGram model,\nwhere sentences are replaced with graph walks. We evaluate our approach on\ndifferent real-world (knowledge) graphs and compare it with related work.", + "authors": "Valeria Fionda, Giuseppe Pirr\u00f3", + "published": "2019-05-28", + "updated": "2019-05-28", + "primary_cat": "cs.AI", + "cats": [ + "cs.AI", + "cs.LG" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/2006.02879v1", + "title": "Auto-decoding Graphs", + "abstract": "We present an approach to synthesizing new graph structures from empirically\nspecified distributions. The generative model is an auto-decoder that learns to\nsynthesize graphs from latent codes. The graph synthesis model is learned\njointly with an empirical distribution over the latent codes. Graphs are\nsynthesized using self-attention modules that are trained to identify likely\nconnectivity patterns. Graph-based normalizing flows are used to sample latent\ncodes from the distribution learned by the auto-decoder. The resulting model\ncombines accuracy and scalability. On benchmark datasets of large graphs, the\npresented model outperforms the state of the art by a factor of 1.5 in mean\naccuracy and average rank across at least three different graph statistics,\nwith a 2x speedup during inference.", + "authors": "Sohil Atul Shah, Vladlen Koltun", + "published": "2020-06-04", + "updated": "2020-06-04", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "stat.ML" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/2004.06846v1", + "title": "MxPool: Multiplex Pooling for Hierarchical Graph Representation Learning", + "abstract": "How to utilize deep learning methods for graph classification tasks has\nattracted considerable research attention in the past few years. Regarding\ngraph classification tasks, the graphs to be classified may have various graph\nsizes (i.e., different numbers of nodes and edges) and various graph\nproperties (e.g., average node degree, diameter, and clustering coefficient).\nThese diverse properties of graphs have imposed significant challenges on existing\ngraph learning techniques since diverse graphs have different best-fit\nhyperparameters. It is difficult to learn graph features from a set of diverse\ngraphs with a unified graph neural network. This motivates us to use a multiplex\nstructure in a diverse way and utilize a priori properties of graphs to guide\nthe learning. In this paper, we propose MxPool, which concurrently uses\nmultiple graph convolution/pooling networks to build a hierarchical learning\nstructure for graph representation learning tasks.
Our experiments on numerous\ngraph classification benchmarks show that our MxPool has superiority over other\nstate-of-the-art graph representation learning methods.", + "authors": "Yanyan Liang, Yanfeng Zhang, Dechao Gao, Qian Xu", + "published": "2020-04-15", + "updated": "2020-04-15", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "stat.ML" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/2111.03262v2", + "title": "CGCL: Collaborative Graph Contrastive Learning without Handcrafted Graph Data Augmentations", + "abstract": "Unsupervised graph representation learning is a non-trivial topic. The\nsuccess of contrastive methods in the unsupervised representation learning on\nstructured data inspires similar attempts on the graph. Existing graph\ncontrastive learning (GCL) aims to learn the invariance across multiple\naugmentation views, which renders it heavily reliant on the handcrafted graph\naugmentations. However, inappropriate graph data augmentations can potentially\njeopardize such invariance. In this paper, we show the potential hazards of\ninappropriate augmentations and then propose a novel Collaborative Graph\nContrastive Learning framework (CGCL). This framework harnesses multiple graph\nencoders to observe the graph. Features observed from different encoders serve\nas the contrastive views in contrastive learning, which avoids inducing\nunstable perturbation and guarantees the invariance. To ensure the\ncollaboration among diverse graph encoders, we propose the concepts of\nasymmetric architecture and complementary encoders as the design principle. To\nfurther prove the rationality, we utilize two quantitative metrics to measure\nthe assembly of CGCL respectively. Extensive experiments demonstrate the\nadvantages of CGCL in unsupervised graph-level representation learning and the\npotential of collaborative framework. The source code for reproducibility is\navailable at https://github.com/zhangtia16/CGCL", + "authors": "Tianyu Zhang, Yuxiang Ren, Wenzheng Feng, Weitao Du, Xuecang Zhang", + "published": "2021-11-05", + "updated": "2024-04-01", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/1906.02319v1", + "title": "DEMO-Net: Degree-specific Graph Neural Networks for Node and Graph Classification", + "abstract": "Graph data widely exist in many high-impact applications. Inspired by the\nsuccess of deep learning in grid-structured data, graph neural network models\nhave been proposed to learn powerful node-level or graph-level representation.\nHowever, most of the existing graph neural networks suffer from the following\nlimitations: (1) there is limited analysis regarding the graph convolution\nproperties, such as seed-oriented, degree-aware and order-free; (2) the node's\ndegree-specific graph structure is not explicitly expressed in graph\nconvolution for distinguishing structure-aware node neighborhoods; (3) the\ntheoretical explanation regarding the graph-level pooling schemes is unclear.\n To address these problems, we propose a generic degree-specific graph neural\nnetwork named DEMO-Net motivated by Weisfeiler-Lehman graph isomorphism test\nthat recursively identifies 1-hop neighborhood structures. In order to\nexplicitly capture the graph topology integrated with node attributes, we argue\nthat graph convolution should have three properties: seed-oriented,\ndegree-aware, order-free. 
To this end, we propose multi-task graph convolution\nwhere each task represents node representation learning for nodes with a\nspecific degree value, thus preserving the degree-specific graph\nstructure. In particular, we design two multi-task learning methods:\ndegree-specific weight and hashing functions for graph convolution. In\naddition, we propose a novel graph-level pooling/readout scheme for learning\ngraph representation provably lying in a degree-specific Hilbert kernel space.\nThe experimental results on several node and graph classification benchmark\ndata sets demonstrate the effectiveness and efficiency of our proposed DEMO-Net\nover state-of-the-art graph neural network models.", + "authors": "Jun Wu, Jingrui He, Jiejun Xu", + "published": "2019-06-05", + "updated": "2019-06-05", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "stat.ML" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/2401.15665v1", + "title": "Learnability of a hybrid quantum-classical neural network for graph-structured quantum data", + "abstract": "Classical data with graph structure always exist when dealing with many\nreal-world problems. In parallel, quantum data with graph structure also need\nto be investigated since they are always produced by structured quantum data\nsources. In this paper, we make use of a hybrid quantum-classical neural network\nwith deep residual learning (Res-HQCNN) to learn graph-structured quantum data.\nSpecifically, based on the special definition of graph-structured quantum data,\nwe first find suitable cost functions so that Res-HQCNN can learn\nsemi-supervised quantum data both with and without graphs. Moreover, the training\nalgorithm of Res-HQCNN for graph-structured training data is given in detail.\nNext, to show the learning ability of Res-HQCNN, we perform extensive\nexperiments showing that the use of information about graph structures for\nquantum data can lead to better learning efficiency compared with the state of\nthe art. At the same time, we also design comparable experiments to show\nthat the use of residual learning can also bring better performance when\ntraining deep quantum neural networks.", + "authors": "Yan-Ying Liang, Si-Le Tang, Zhe-Hao Yi, Hao-Zhen Si-Tu, Zhu-Jun Zheng", + "published": "2024-01-28", + "updated": "2024-01-28", + "primary_cat": "quant-ph", + "cats": [ + "quant-ph" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/2201.06367v1", + "title": "Towards Unsupervised Deep Graph Structure Learning", + "abstract": "In recent years, graph neural networks (GNNs) have emerged as a successful\ntool in a variety of graph-related applications. However, the performance of\nGNNs can deteriorate when noisy connections occur in the original graph\nstructures; besides, the dependence on explicit structures prevents GNNs from\nbeing applied to general unstructured scenarios. To address these issues,\nrecently emerged deep graph structure learning (GSL) methods propose to jointly\noptimize the graph structure along with the GNN under the supervision of a node\nclassification task. Nonetheless, these methods focus on a supervised learning\nscenario, which leads to several problems, i.e., the reliance on labels, the\nbias of edge distribution, and the limitation on application tasks.
In this\npaper, we propose a more practical GSL paradigm, unsupervised graph structure\nlearning, where the learned graph topology is optimized by data itself without\nany external guidance (i.e., labels). To solve the unsupervised GSL problem, we\npropose a novel StrUcture Bootstrapping contrastive LearnIng fraMEwork (SUBLIME\nfor abbreviation) with the aid of self-supervised contrastive learning.\nSpecifically, we generate a learning target from the original data as an\n\"anchor graph\", and use a contrastive loss to maximize the agreement between\nthe anchor graph and the learned graph. To provide persistent guidance, we\ndesign a novel bootstrapping mechanism that upgrades the anchor graph with\nlearned structures during model learning. We also design a series of graph\nlearners and post-processing schemes to model the structures to learn.\nExtensive experiments on eight benchmark datasets demonstrate the significant\neffectiveness of our proposed SUBLIME and high quality of the optimized graphs.", + "authors": "Yixin Liu, Yu Zheng, Daokun Zhang, Hongxu Chen, Hao Peng, Shirui Pan", + "published": "2022-01-17", + "updated": "2022-01-17", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/2304.13195v1", + "title": "Connector 0.5: A unified framework for graph representation learning", + "abstract": "Graph representation learning models aim to represent the graph structure and\nits features into low-dimensional vectors in a latent space, which can benefit\nvarious downstream tasks, such as node classification and link prediction. Due\nto its powerful graph data modelling capabilities, various graph embedding\nmodels and libraries have been proposed to learn embeddings and help\nresearchers ease conducting experiments. In this paper, we introduce a novel\ngraph representation framework covering various graph embedding models, ranging\nfrom shallow to state-of-the-art models, namely Connector. First, we consider\ngraph generation by constructing various types of graphs with different\nstructural relations, including homogeneous, signed, heterogeneous, and\nknowledge graphs. Second, we introduce various graph representation learning\nmodels, ranging from shallow to deep graph embedding models. Finally, we plan\nto build an efficient open-source framework that can provide deep graph\nembedding models to represent structural relations in graphs. The framework is\navailable at https://github.com/NSLab-CUK/Connector.", + "authors": "Thanh Sang Nguyen, Jooho Lee, Van Thuy Hoang, O-Joun Lee", + "published": "2023-04-25", + "updated": "2023-04-25", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.SI" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/1611.05181v3", + "title": "Graph Learning from Data under Structural and Laplacian Constraints", + "abstract": "Graphs are fundamental mathematical structures used in various fields to\nrepresent data, signals and processes. In this paper, we propose a novel\nframework for learning/estimating graphs from data. The proposed framework\nincludes (i) formulation of various graph learning problems, (ii) their\nprobabilistic interpretations and (iii) associated algorithms. Specifically,\ngraph learning problems are posed as estimation of graph Laplacian matrices\nfrom some observed data under given structural constraints (e.g., graph\nconnectivity and sparsity level). 
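To ground the preceding formulation, the sketch below evaluates the smoothness term tr(X^T L X) that such graph-learning objectives minimize, with a naive kernel estimate of the weights; it is an illustration under our own naming, not the paper's MAP algorithms.

```python
import numpy as np

def smoothness(L: np.ndarray, X: np.ndarray) -> float:
    """tr(X^T L X) = 0.5 * sum_ij W_ij ||x_i - x_j||^2, the data-fit term
    minimized when learning a Laplacian from smooth signals."""
    return float(np.trace(X.T @ L @ X))

rng = np.random.default_rng(3)
X = rng.normal(size=(30, 10))                       # 10 signals on 30 nodes
D = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)  # pairwise squared distances
W = np.exp(-D / D.mean()); np.fill_diagonal(W, 0)   # naive weight estimate
L = np.diag(W.sum(1)) - W                           # combinatorial Laplacian
print(smoothness(L, X))
```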
From a probabilistic perspective, the\nproblems of interest correspond to maximum a posteriori (MAP) parameter\nestimation of Gaussian-Markov random field (GMRF) models, whose precision\n(inverse covariance) is a graph Laplacian matrix. For the proposed graph\nlearning problems, specialized algorithms are developed by incorporating the\ngraph Laplacian and structural constraints. The experimental results\ndemonstrate that the proposed algorithms outperform the current\nstate-of-the-art methods in terms of accuracy and computational efficiency.", + "authors": "Hilmi E. Egilmez, Eduardo Pavez, Antonio Ortega", + "published": "2016-11-16", + "updated": "2017-07-06", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "stat.ML" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/2103.10837v1", + "title": "Quantum machine learning of graph-structured data", + "abstract": "Graph structures are ubiquitous throughout the natural sciences. Here we\nconsider graph-structured quantum data and describe how to carry out its\nquantum machine learning via quantum neural networks. In particular, we\nconsider training data in the form of pairs of input and output quantum states\nassociated with the vertices of a graph, together with edges encoding\ncorrelations between the vertices. We explain how to systematically exploit\nthis additional graph structure to improve quantum learning algorithms. These\nalgorithms are numerically simulated and exhibit excellent learning behavior.\nScalable quantum implementations of the learning procedures are likely feasible\non the next generation of quantum computing devices.", + "authors": "Kerstin Beer, Megha Khosla, Julius K\u00f6hler, Tobias J. Osborne", + "published": "2021-03-19", + "updated": "2021-03-19", + "primary_cat": "quant-ph", + "cats": [ + "quant-ph" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/2201.07409v2", + "title": "Dual Space Graph Contrastive Learning", + "abstract": "Unsupervised graph representation learning has emerged as a powerful tool to\naddress real-world problems and has achieved huge success in the graph learning\ndomain. Graph contrastive learning is one of the unsupervised graph\nrepresentation learning methods, which has recently attracted attention from\nresearchers and achieved state-of-the-art performance on various tasks.\nThe key to the success of graph contrastive learning is to construct proper\ncontrasting pairs to acquire the underlying structural semantics of the graph.\nHowever, this key part is currently not fully explored; most ways of\ngenerating contrasting pairs focus on augmenting or perturbing graph\nstructures to obtain different views of the input graph. Such strategies\ncould degrade performance by adding noise into the graph, which may\nnarrow the range of applications of graph contrastive learning. In\nthis paper, we propose a novel graph contrastive learning method, namely\n\textbf{D}ual \textbf{S}pace \textbf{G}raph \textbf{C}ontrastive (DSGC)\nLearning, to conduct graph contrastive learning among views generated in\ndifferent spaces, including the hyperbolic space and the Euclidean space. Since\nboth spaces have their own advantages for representing graph data in the embedding\nspace, we hope to utilize graph contrastive learning to bridge the spaces and\nleverage advantages from both sides.
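As a rough illustration of the contrastive machinery involved, here is a generic two-view InfoNCE (NT-Xent) loss in numpy. It deliberately ignores DSGC's hyperbolic geometry; names and the toy check are our own.

```python
import numpy as np

def nt_xent(z1: np.ndarray, z2: np.ndarray, tau: float = 0.5) -> float:
    """InfoNCE over two views: row i of z1 should match row i of z2
    and repel every other row."""
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    logits = z1 @ z2.T / tau                    # pairwise similarities
    log_prob = logits - np.log(np.exp(logits).sum(1, keepdims=True))
    return float(-np.mean(np.diag(log_prob)))   # positives on the diagonal

rng = np.random.default_rng(4)
z = rng.normal(size=(32, 64))
print(nt_xent(z, z + 0.01 * rng.normal(size=z.shape)))  # aligned views -> low loss
```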
The comparison experiments show\nthat DSGC achieves competitive or better performance across all the datasets.\nIn addition, we conduct extensive experiments to analyze the impact of\ndifferent graph encoders on DSGC, giving insights about how to better leverage\nthe advantages of contrastive learning between different spaces.", + "authors": "Haoran Yang, Hongxu Chen, Shirui Pan, Lin Li, Philip S. Yu, Guandong Xu", + "published": "2022-01-19", + "updated": "2022-03-04", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.SI" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/1803.03324v1", + "title": "Learning Deep Generative Models of Graphs", + "abstract": "Graphs are fundamental data structures which concisely capture the relational\nstructure in many important real-world domains, such as knowledge graphs,\nphysical and social interactions, language, and chemistry. Here we introduce a\npowerful new approach for learning generative models over graphs, which can\ncapture both their structure and attributes. Our approach uses graph neural\nnetworks to express probabilistic dependencies among a graph's nodes and edges,\nand can, in principle, learn distributions over any arbitrary graph. In a\nseries of experiments our results show that once trained, our models can\ngenerate good quality samples of both synthetic graphs as well as real\nmolecular graphs, both unconditionally and conditioned on data. Compared to\nbaselines that do not use graph-structured representations, our models often\nperform far better. We also explore key challenges of learning generative\nmodels of graphs, such as how to handle symmetries and ordering of elements\nduring the graph generation process, and offer possible solutions. Our work is\nthe first and most general approach for learning generative models over\narbitrary graphs, and opens new directions for moving away from restrictions of\nvector- and sequence-like knowledge representations, toward more expressive and\nflexible relational data structures.", + "authors": "Yujia Li, Oriol Vinyals, Chris Dyer, Razvan Pascanu, Peter Battaglia", + "published": "2018-03-08", + "updated": "2018-03-08", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "stat.ML" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/1811.09971v1", + "title": "Graph Learning-Convolutional Networks", + "abstract": "Recently, graph Convolutional Neural Networks (graph CNNs) have been widely\nused for graph data representation and semi-supervised learning tasks. However,\nexisting graph CNNs generally use a fixed graph which may not be optimal for\nsemi-supervised learning tasks. In this paper, we propose a novel Graph\nLearning-Convolutional Network (GLCN) for graph data representation and\nsemi-supervised learning. The aim of GLCN is to learn an optimal graph\nstructure that best serves graph CNNs for semi-supervised learning by\nintegrating both graph learning and graph convolution together in a unified\nnetwork architecture. The main advantage is that in GLCN, both given labels and\nthe estimated labels are incorporated and thus can provide useful 'weakly'\nsupervised information to refine (or learn) the graph construction and also to\nfacilitate the graph convolution operation in GLCN for unknown label\nestimation.
Experimental results on seven benchmarks demonstrate that GLCN\nsignificantly outperforms state-of-the-art traditional fixed structure based\ngraph CNNs.", + "authors": "Bo Jiang, Ziyan Zhang, Doudou Lin, Jin Tang", + "published": "2018-11-25", + "updated": "2018-11-25", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/1903.00614v1", + "title": "GAP: Generalizable Approximate Graph Partitioning Framework", + "abstract": "Graph partitioning is the problem of dividing the nodes of a graph into\nbalanced partitions while minimizing the edge cut across the partitions. Due to\nits combinatorial nature, many approximate solutions have been developed,\nincluding variants of multi-level methods and spectral clustering. We propose\nGAP, a Generalizable Approximate Partitioning framework that takes a deep\nlearning approach to graph partitioning. We define a differentiable loss\nfunction that represents the partitioning objective and use backpropagation to\noptimize the network parameters. Unlike baselines that redo the optimization\nper graph, GAP is capable of generalization, allowing us to train models that\nproduce performant partitions at inference time, even on unseen graphs.\nFurthermore, because we learn the representation of the graph while jointly\noptimizing for the partitioning loss function, GAP can be easily tuned for a\nvariety of graph structures. We evaluate the performance of GAP on graphs of\nvarying sizes and structures, including graphs of widely used machine learning\nmodels (e.g., ResNet, VGG, and Inception-V3), scale-free graphs, and random\ngraphs. We show that GAP achieves competitive partitions while being up to 100\ntimes faster than the baseline and generalizes to unseen graphs.", + "authors": "Azade Nazi, Will Hang, Anna Goldie, Sujith Ravi, Azalia Mirhoseini", + "published": "2019-03-02", + "updated": "2019-03-02", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "stat.ML" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/2402.02321v1", + "title": "Active Learning for Graphs with Noisy Structures", + "abstract": "Graph Neural Networks (GNNs) have seen significant success in tasks such as\nnode classification, largely contingent upon the availability of sufficient\nlabeled nodes. Yet, the excessive cost of labeling large-scale graphs led to a\nfocus on active learning on graphs, which aims for effective data selection to\nmaximize downstream model performance. Notably, most existing methods assume\nreliable graph topology, while real-world scenarios often present noisy graphs.\nGiven this, designing a successful active learning framework for noisy graphs\nis highly needed but challenging, as selecting data for labeling and obtaining\na clean graph are two tasks naturally interdependent: selecting high-quality\ndata requires clean graph structure while cleaning noisy graph structure\nrequires sufficient labeled data. Considering the complexity mentioned above,\nwe propose an active learning framework, GALClean, which has been specifically\ndesigned to adopt an iterative approach for conducting both data selection and\ngraph purification simultaneously with best information learned from the prior\niteration. Importantly, we summarize GALClean as an instance of the\nExpectation-Maximization algorithm, which provides a theoretical understanding\nof its design and mechanisms. This theory naturally leads to an enhanced\nversion, GALClean+. 
Extensive experiments have demonstrated the effectiveness\nand robustness of our proposed method across various types and levels of noisy\ngraphs.", + "authors": "Hongliang Chi, Cong Qi, Suhang Wang, Yao Ma", + "published": "2024-02-04", + "updated": "2024-02-04", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/2003.04508v3", + "title": "Unsupervised Graph Embedding via Adaptive Graph Learning", + "abstract": "Graph autoencoders (GAEs) are powerful tools in representation learning for\ngraph embedding. However, the performance of GAEs is very dependent on the\nquality of the graph structure, i.e., of the adjacency matrix. In other words,\nGAEs would perform poorly when the adjacency matrix is incomplete or\ndisturbed. In this paper, two novel unsupervised graph embedding methods,\nunsupervised graph embedding via adaptive graph learning (BAGE) and\nunsupervised graph embedding via variational adaptive graph learning (VBAGE),\nare proposed. The proposed methods expand the application range of GAEs in\ngraph embedding, i.e., to general datasets without a graph structure.\nMeanwhile, the adaptive learning mechanism can initialize the adjacency matrix\nwithout being affected by the parameter. Besides that, the latent representations\nare embedded in the Laplacian graph structure to preserve the topology\nstructure of the graph in the vector space. Moreover, the adjacency matrix can\nbe self-learned for better embedding performance when the original graph\nstructure is incomplete. With adaptive learning, the proposed method is much\nmore robust to the graph structure. Experimental studies on several datasets\nvalidate our design and demonstrate that our methods outperform baselines by a\nwide margin in node clustering, node classification, and graph visualization\ntasks.", + "authors": "Rui Zhang, Yunxing Zhang, Xuelong Li", + "published": "2020-03-10", + "updated": "2021-03-23", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "stat.ML" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/1801.03226v1", + "title": "Adaptive Graph Convolutional Neural Networks", + "abstract": "Graph Convolutional Neural Networks (Graph CNNs) are generalizations of\nclassical CNNs to handle graph data such as molecular data, point clouds and\nsocial networks. Current filters in graph CNNs are built for fixed and shared\ngraph structures. However, for most real data, the graph structures vary in\nboth size and connectivity. The paper proposes a generalized and flexible graph\nCNN taking data of arbitrary graph structure as input. In that way, a\ntask-driven adaptive graph is learned for each graph sample while training. To\nefficiently learn the graph, a distance metric learning approach is proposed. Extensive\nexperiments on nine graph-structured datasets have demonstrated superior\nimprovements in both convergence speed and predictive accuracy.", + "authors": "Ruoyu Li, Sheng Wang, Feiyun Zhu, Junzhou Huang", + "published": "2018-01-10", + "updated": "2018-01-10", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "stat.ML" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/2306.08201v1", + "title": "Graph Laplacian Learning with Exponential Family Noise", + "abstract": "A common challenge in applying graph machine learning methods is that the\nunderlying graph of a system is often unknown.
Although different graph\ninference methods have been proposed for continuous graph signals, inferring\nthe graph structure underlying other types of data, such as discrete counts, is\nunder-explored. In this paper, we generalize a graph signal processing (GSP)\nframework for learning a graph from smooth graph signals to the exponential\nfamily noise distribution to model various data types. We propose an\nalternating algorithm that estimates the graph Laplacian as well as the\nunobserved smooth representation from the noisy signals. We demonstrate in\nsynthetic and real-world data that our new algorithm outperforms competing\nLaplacian estimation methods under noise model mismatch.", + "authors": "Changhao Shi, Gal Mishne", + "published": "2023-06-14", + "updated": "2023-06-14", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "eess.SP" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/1904.11883v2", + "title": "Robust Graph Data Learning via Latent Graph Convolutional Representation", + "abstract": "Graph Convolutional Representation (GCR) has achieved impressive performance\nfor graph data representation. However, existing GCR is generally defined on\nthe input fixed graph which may restrict the representation capacity and also\nbe vulnerable to the structural attacks and noises. To address this issue, we\npropose a novel Latent Graph Convolutional Representation (LatGCR) for robust\ngraph data representation and learning. Our LatGCR is derived based on\nreformulating graph convolutional representation from the aspect of graph\nneighborhood reconstruction. Given an input graph $\\textbf{A}$, LatGCR aims to\ngenerate a flexible latent graph $\\widetilde{\\textbf{A}}$ for graph\nconvolutional representation which obviously enhances the representation\ncapacity and also performs robustly w.r.t graph structural attacks and noises.\nMoreover, LatGCR is implemented in a self-supervised manner and thus provides a\nbasic block for both supervised and unsupervised graph learning tasks.\nExperiments on several datasets demonstrate the effectiveness and robustness of\nLatGCR.", + "authors": "Bo Jiang, Ziyan Zhang, Bin Luo", + "published": "2019-04-26", + "updated": "2021-10-13", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV", + "cs.LG" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/1609.04350v2", + "title": "Time-Variant Graph Classification", + "abstract": "Graphs are commonly used to represent objects, such as images and text, for\npattern classification. In a dynamic world, an object may continuously evolve\nover time, and so does the graph extracted from the underlying object. These\nchanges in graph structure with respect to the temporal order present a new\nrepresentation of the graph, in which an object corresponds to a set of\ntime-variant graphs. In this paper, we formulate a novel time-variant graph\nclassification task and propose a new graph feature, called a graph-shapelet\npattern, for learning and classifying time-variant graphs. Graph-shapelet\npatterns are compact and discriminative graph transformation subsequences. A\ngraph-shapelet pattern can be regarded as a graphical extension of a shapelet\n-- a class of discriminative features designed for vector-based temporal data\nclassification. 
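Since the graph-shapelet pattern generalizes the classical shapelet, a minimal sketch of the vector-based original may help: the distance from a series to a shapelet is its best sliding-window match. Names are ours; the paper lifts this idea to graph transformation subsequences.

```python
import numpy as np

def shapelet_distance(series: np.ndarray, shapelet: np.ndarray) -> float:
    """Minimum Euclidean distance between the shapelet and any window
    of the series with the same length."""
    m = len(shapelet)
    return float(min(np.linalg.norm(series[i:i + m] - shapelet)
                     for i in range(len(series) - m + 1)))

t = np.sin(np.linspace(0, 6 * np.pi, 100))  # toy time series
s = t[40:50]                                # a subsequence reused as shapelet
print(shapelet_distance(t, s))              # 0.0: the pattern occurs exactly
```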
To discover graph-shapelet patterns, we propose to convert a\ntime-variant graph sequence into time-series data and use the discovered\nshapelets to find graph transformation subsequences as graph-shapelet patterns.\nBy converting each graph-shapelet pattern into a unique tokenized graph\ntransformation sequence, we can measure the similarity between two\ngraph-shapelet patterns and therefore classify time-variant graphs. Experiments\non both synthetic and real-world data demonstrate the superior performance of\nthe proposed algorithms.", + "authors": "Haishuai Wang", + "published": "2016-09-14", + "updated": "2017-06-12", + "primary_cat": "cs.DS", + "cats": [ + "cs.DS" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/2210.02060v1", + "title": "Graph Classification via Discriminative Edge Feature Learning", + "abstract": "Spectral graph convolutional neural networks (GCNNs) have been producing\nencouraging results in graph classification tasks. However, most spectral GCNNs\nutilize fixed graphs when aggregating node features, while omitting edge\nfeature learning and failing to get an optimal graph structure. Moreover, many\nexisting graph datasets do not provide initialized edge features, further\nrestraining the ability of learning edge features via spectral GCNNs. In this\npaper, we try to address this issue by designing an edge feature scheme and an\nadd-on layer between every two stacked graph convolution layers in GCNN. Both\nare lightweight while effective in filling the gap between edge feature\nlearning and performance enhancement of graph classification. The edge feature\nscheme makes edge features adapt to node representations at different graph\nconvolution layers. The add-on layers help adjust the edge features to an\noptimal graph structure. To test the effectiveness of our method, we take\nEuclidean positions as initial node features and extract graphs with semantic\ninformation from point cloud objects. The node features of our extracted graphs\nare more scalable for edge feature learning than most existing graph datasets\n(in one-hot encoded label format). Three new graph datasets are constructed\nbased on ModelNet40, ModelNet10 and ShapeNet Part datasets. Experimental\nresults show that our method outperforms state-of-the-art graph classification\nmethods on the new datasets by reaching 96.56% overall accuracy on\nGraph-ModelNet40, 98.79% on Graph-ModelNet10 and 97.91% on Graph-ShapeNet Part.\nThe constructed graph datasets will be released to the community.", + "authors": "Yang Yi, Xuequan Lu, Shang Gao, Antonio Robles-Kelly, Yuejie Zhang", + "published": "2022-10-05", + "updated": "2022-10-05", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV", + "cs.LG" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/2401.13769v1", + "title": "Multiview Graph Learning with Consensus Graph", + "abstract": "Graph topology inference, i.e., learning graphs from a given set of nodal\nobservations, is a significant task in many application domains. Existing\napproaches are mostly limited to learning a single graph assuming that the\nobserved data is homogeneous. This is problematic because many modern datasets\nare heterogeneous or mixed and involve multiple related graphs, i.e., multiview\ngraphs. Recent work proposing to learn multiview graphs ensures the similarity\nof learned view graphs through pairwise regularization, where each pair of\nviews is encouraged to have similar structures. 
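The contrast drawn here between pairwise and consensus coupling can be written down in a few lines; the penalties below are schematic Frobenius-norm versions under our own naming, not the paper's full smoothness-based objective. Note how the consensus variant exposes the shared structure as an explicit variable C.

```python
import numpy as np

def pairwise_reg(views):
    """Sum of ||A_v - A_u||_F^2 over view pairs: views pulled together
    pair by pair, with no explicit shared graph."""
    return sum(np.linalg.norm(views[v] - views[u]) ** 2
               for v in range(len(views)) for u in range(v + 1, len(views)))

def consensus_reg(views, C):
    """Sum of ||A_v - C||_F^2: every view pulled toward one consensus
    graph C, which itself is optimized and exposes the common structure."""
    return sum(np.linalg.norm(V - C) ** 2 for V in views)

rng = np.random.default_rng(5)
views = [rng.random((20, 20)) for _ in range(3)]
C = np.mean(views, axis=0)   # the minimizing consensus for fixed views
print(pairwise_reg(views), consensus_reg(views, C))
```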
However, this approach cannot\ninfer the shared structure across views. In this work, we propose an\nalternative method based on consensus regularization, where views are ensured\nto be similar through a learned consensus graph representing the common\nstructure of the views. In particular, we propose an optimization problem,\nwhere graph data is assumed to be smooth over the multiview graph and the\ntopology of the individual views and that of the consensus graph are learned,\nsimultaneously. Our optimization problem is designed to be general in the sense\nthat different regularization functions can be used depending on what the\nshared structure across views is. Moreover, we propose two regularization\nfunctions that extend fused and group graphical lasso to consensus based\nregularization. Proposed multiview graph learning is evaluated on simulated\ndata and shown to have better performance than existing methods. It is also\nemployed to infer the functional brain connectivity networks of multiple\nsubjects from their electroencephalogram (EEG) recordings. The proposed method\nreveals the structure shared by subjects as well as the characteristics unique\nto each subject.", + "authors": "Abdullah Karaaslanli, Selin Aviyente", + "published": "2024-01-24", + "updated": "2024-01-24", + "primary_cat": "eess.SP", + "cats": [ + "eess.SP", + "cs.LG" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/2108.01660v3", + "title": "Graph Neural Networks With Lifting-based Adaptive Graph Wavelets", + "abstract": "Spectral-based graph neural networks (SGNNs) have been attracting increasing\nattention in graph representation learning. However, existing SGNNs are limited\nin implementing graph filters with rigid transforms (e.g., graph Fourier or\npredefined graph wavelet transforms) and cannot adapt to signals residing on\ngraphs and tasks at hand. In this paper, we propose a novel class of graph\nneural networks that realizes graph filters with adaptive graph wavelets.\nSpecifically, the adaptive graph wavelets are learned with neural\nnetwork-parameterized lifting structures, where structure-aware attention-based\nlifting operations (i.e., prediction and update operations) are developed to\njointly consider graph structures and node features. We propose to lift based\non diffusion wavelets to alleviate the structural information loss induced by\npartitioning non-bipartite graphs. By design, the locality and sparsity of the\nresulting wavelet transform as well as the scalability of the lifting structure\nare guaranteed. We further derive a soft-thresholding filtering operation by\nlearning sparse graph representations in terms of the learned wavelets,\nyielding a localized, efficient, and scalable wavelet-based graph filters. To\nensure that the learned graph representations are invariant to node\npermutations, a layer is employed at the input of the networks to reorder the\nnodes according to their local topology information. We evaluate the proposed\nnetworks in both node-level and graph-level representation learning tasks on\nbenchmark citation and bioinformatics graph datasets. 
Extensive experiments\ndemonstrate the superiority of the proposed networks over existing SGNNs in\nterms of accuracy, efficiency, and scalability.", + "authors": "Mingxing Xu, Wenrui Dai, Chenglin Li, Junni Zou, Hongkai Xiong, Pascal Frossard", + "published": "2021-08-03", + "updated": "2022-01-04", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/1911.08776v2", + "title": "Joint Embedding Learning of Educational Knowledge Graphs", + "abstract": "As an efficient model for knowledge organization, the knowledge graph has\nbeen widely adopted in several fields, e.g., biomedicine, sociology, and\neducation. And there is a steady trend of learning embedding representations of\nknowledge graphs to facilitate knowledge graph construction and downstream\ntasks. In general, knowledge graph embedding techniques aim to learn vectorized\nrepresentations which preserve the structural information of the graph. And\nconventional embedding learning models rely on structural relationships among\nentities and relations. However, in educational knowledge graphs, structural\nrelationships are not the focus. Instead, rich literals of the graphs are more\nvaluable. In this paper, we focus on this problem and propose a novel model for\nembedding learning of educational knowledge graphs. Our model considers both\nstructural and literal information and jointly learns embedding\nrepresentations. Three experimental graphs were constructed based on an\neducational knowledge graph which has been applied in real-world teaching. We\nconducted two experiments on the three graphs and other common benchmark\ngraphs. The experimental results proved the effectiveness of our model and its\nsuperiority over other baselines when processing educational knowledge graphs.", + "authors": "Siyu Yao, Ruijie Wang, Shen Sun, Derui Bu, Jun Liu", + "published": "2019-11-20", + "updated": "2019-12-23", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI", + "cs.LG" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/1910.08057v1", + "title": "Graph Embedding VAE: A Permutation Invariant Model of Graph Structure", + "abstract": "Generative models of graph structure have applications in biology and social\nsciences. The state of the art is GraphRNN, which decomposes the graph\ngeneration process into a series of sequential steps. While effective for\nmodest sizes, it loses its permutation invariance for larger graphs. Instead,\nwe present a permutation invariant latent-variable generative model relying on\ngraph embeddings to encode structure. Using tools from the random graph\nliterature, our model is highly scalable to large graphs with likelihood\nevaluation and generation in $O(|V | + |E|)$.", + "authors": "Tony Duan, Juho Lee", + "published": "2019-10-17", + "updated": "2019-10-17", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "stat.ML" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/2006.13009v2", + "title": "Iterative Deep Graph Learning for Graph Neural Networks: Better and Robust Node Embeddings", + "abstract": "In this paper, we propose an end-to-end graph learning framework, namely\nIterative Deep Graph Learning (IDGL), for jointly and iteratively learning\ngraph structure and graph embedding. 
The key rationale of IDGL is to learn a better graph structure based on better node embeddings, and vice versa (i.e., better node embeddings based on a better graph structure). Our iterative method dynamically stops when the learned graph structure comes close enough to the graph optimized for the downstream prediction task. In addition, we cast the graph learning problem as a similarity metric learning problem and leverage adaptive graph regularization for controlling the quality of the learned graph. Finally, by combining an anchor-based approximation technique, we further propose a scalable version of IDGL, namely IDGL-Anch, which significantly reduces the time and space complexity of IDGL without compromising performance. Our extensive experiments on nine benchmarks show that our proposed IDGL models can consistently outperform or match the state-of-the-art baselines. Furthermore, IDGL can be more robust to adversarial graphs and cope with both transductive and inductive learning.", + "authors": "Yu Chen, Lingfei Wu, Mohammed J. Zaki", + "published": "2020-06-21", + "updated": "2020-10-23", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "stat.ML" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/2306.02664v2", + "title": "Structure-free Graph Condensation: From Large-scale Graphs to Condensed Graph-free Data", + "abstract": "Graph condensation, which reduces the size of a large-scale graph by synthesizing a small-scale condensed graph as its substitution, has immediate benefits for various graph learning tasks. However, existing graph condensation methods rely on the joint optimization of nodes and structures in the condensed graph, and overlook critical issues in effectiveness and generalization ability. In this paper, we advocate a new Structure-Free Graph Condensation paradigm, named SFGC, to distill a large-scale graph into a small-scale graph node set without explicit graph structures, i.e., graph-free data. Our idea is to implicitly encode topology structure information into the node attributes in the synthesized graph-free data, whose topology is reduced to an identity matrix. Specifically, SFGC contains two collaborative components: (1) a training trajectory meta-matching scheme for effectively synthesizing small-scale graph-free data; (2) a graph neural feature score metric for dynamically evaluating the quality of the condensed data. Through training trajectory meta-matching, SFGC aligns the long-term GNN learning behaviors between the large-scale graph and the condensed small-scale graph-free data, ensuring comprehensive and compact transfer of informative knowledge to the graph-free data. Afterward, the underlying condensed graph-free data would be dynamically evaluated with the graph neural feature score, which is a closed-form metric for ensuring the expressiveness of the condensed graph-free data. 
Extensive experiments verify the superiority of SFGC across\ndifferent condensation ratios.", + "authors": "Xin Zheng, Miao Zhang, Chunyang Chen, Quoc Viet Hung Nguyen, Xingquan Zhu, Shirui Pan", + "published": "2023-06-05", + "updated": "2023-10-23", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.SI" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/2404.11869v1", + "title": "Multi-view Graph Structural Representation Learning via Graph Coarsening", + "abstract": "Graph Transformers (GTs) have made remarkable achievements in graph-level\ntasks. However, most existing works regard graph structures as a form of\nguidance or bias for enhancing node representations, which focuses on\nnode-central perspectives and lacks explicit representations of edges and\nstructures. One natural question is, can we treat graph structures node-like as\na whole to learn high-level features? Through experimental analysis, we explore\nthe feasibility of this assumption. Based on our findings, we propose a novel\nmulti-view graph structural representation learning model via graph coarsening\n(MSLgo) on GT architecture for graph classification. Specifically, we build\nthree unique views, original, coarsening, and conversion, to learn a thorough\nstructural representation. We compress loops and cliques via hierarchical\nheuristic graph coarsening and restrict them with well-designed constraints,\nwhich builds the coarsening view to learn high-level interactions between\nstructures. We also introduce line graphs for edge embeddings and switch to\nedge-central perspective to construct the conversion view. Experiments on six\nreal-world datasets demonstrate the improvements of MSLgo over 14 baselines\nfrom various architectures.", + "authors": "Xiaorui Qi, Qijie Bai, Yanlong Wen, Haiwei Zhang, Xiaojie Yuan", + "published": "2024-04-18", + "updated": "2024-04-18", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.SI" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/2003.03892v2", + "title": "COPT: Coordinated Optimal Transport for Graph Sketching", + "abstract": "We introduce COPT, a novel distance metric between graphs defined via an\noptimization routine, computing a coordinated pair of optimal transport maps\nsimultaneously. This gives an unsupervised way to learn general-purpose graph\nrepresentation, applicable to both graph sketching and graph comparison. COPT\ninvolves simultaneously optimizing dual transport plans, one between the\nvertices of two graphs, and another between graph signal probability\ndistributions. We show theoretically that our method preserves important global\nstructural information on graphs, in particular spectral information, and\nanalyze connections to existing studies. Empirically, COPT outperforms state of\nthe art methods in graph classification on both synthetic and real datasets.", + "authors": "Yihe Dong, Will Sawin", + "published": "2020-03-09", + "updated": "2020-06-15", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.DS", + "stat.ML" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/2005.14403v1", + "title": "Deep graph learning for semi-supervised classification", + "abstract": "Graph learning (GL) can dynamically capture the distribution structure (graph\nstructure) of data based on graph convolutional networks (GCN), and the\nlearning quality of the graph structure directly influences GCN for\nsemi-supervised classification. 
Existing methods mostly combine the computational layer and the related losses into GCN for exploring the global graph (measuring graph structure from all data samples) or the local graph (measuring graph structure from local data samples). The global graph emphasises the whole-structure description of inter-class data, while the local graph tends toward the neighborhood structure representation of intra-class data. However, it is difficult to simultaneously balance these graphs during the learning process for semi-supervised classification because of their interdependence. To simulate the interdependence, deep graph learning (DGL) is proposed to find a better graph representation for semi-supervised classification. DGL can not only learn the global structure through metric computation updating in the previous layer, but also mine the local structure through local weight reassignment in the next layer. Furthermore, DGL can fuse the different structures by dynamically encoding their interdependence, and deeply mine the relationship between the different structures through hierarchical progressive learning, improving the performance of semi-supervised classification. Experiments demonstrate that DGL outperforms state-of-the-art methods on three benchmark datasets (Citeseer, Cora, and Pubmed) for citation networks and two benchmark datasets (MNIST and Cifar10) for images.", + "authors": "Guangfeng Lin, Xiaobing Kang, Kaiyang Liao, Fan Zhao, Yajun Chen", + "published": "2020-05-29", + "updated": "2020-05-29", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/2106.10124v1", + "title": "Graph Context Encoder: Graph Feature Inpainting for Graph Generation and Self-supervised Pretraining", + "abstract": "We propose the Graph Context Encoder (GCE), a simple but efficient approach for graph representation learning based on graph feature masking and reconstruction. GCE models are trained to efficiently reconstruct input graphs similarly to a graph autoencoder where node and edge labels are masked. In particular, our model is also allowed to change graph structures by masking and reconstructing graphs augmented by random pseudo-edges. We show that GCE can be used for novel graph generation, with applications for molecule generation. Used as a pretraining method, we also show that GCE improves baseline performance in supervised classification tasks tested on multiple standard benchmark graph datasets.", + "authors": "Oriel Frigo, R\u00e9my Brossard, David Dehaene", + "published": "2021-06-18", + "updated": "2021-06-18", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "68T07" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/2202.08235v3", + "title": "Data Augmentation for Deep Graph Learning: A Survey", + "abstract": "Graph neural networks, a powerful deep learning tool to model graph-structured data, have demonstrated remarkable performance on numerous graph learning tasks. To address the data noise and data scarcity issues in deep graph learning, the research on graph data augmentation has intensified lately. However, conventional data augmentation methods can hardly handle graph-structured data, which is defined in non-Euclidean space with multi-modality. 
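A sketch of two standard graph augmentations of the kind this survey covers, random edge dropping and node-feature masking; the drop rates and the toy graph are arbitrary assumptions.

```python
# Illustrative sketch of two common graph augmentations: random edge
# dropping and feature masking; rates here are arbitrary assumptions.
import numpy as np

def drop_edges(A, rate, rng):
    """Randomly remove a fraction of edges from a symmetric adjacency."""
    mask = rng.random(A.shape) > rate
    mask = np.triu(mask, 1)            # decide once per undirected edge
    return A * (mask | mask.T)

def mask_features(X, rate, rng):
    """Zero out a random subset of feature columns."""
    keep = rng.random(X.shape[1]) > rate
    return X * keep

rng = np.random.default_rng(0)
A = (rng.random((6, 6)) > 0.5).astype(float)
A = np.triu(A, 1); A = A + A.T         # symmetric toy adjacency
X = rng.random((6, 4))
print(drop_edges(A, 0.2, rng).sum() <= A.sum(), mask_features(X, 0.3, rng).shape)
```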
In this survey, we formally formulate the problem of graph data augmentation and further review the representative techniques and their applications in different deep graph learning problems. Specifically, we first propose a taxonomy for graph data augmentation techniques and then provide a structured review by categorizing the related work based on the augmented information modalities. Moreover, we summarize the applications of graph data augmentation in two representative problems in data-centric deep graph learning: (1) reliable graph learning, which focuses on enhancing the utility of the input graph as well as the model capacity via graph data augmentation; and (2) low-resource graph learning, which targets enlarging the labeled training data scale through graph data augmentation. For each problem, we also provide a hierarchical problem taxonomy and review the existing literature related to graph data augmentation. Finally, we point out promising research directions and the challenges in future research.", + "authors": "Kaize Ding, Zhe Xu, Hanghang Tong, Huan Liu", + "published": "2022-02-16", + "updated": "2022-11-21", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/2212.01749v1", + "title": "Semantic Graph Neural Network with Multi-measure Learning for Semi-supervised Classification", + "abstract": "Graph Neural Networks (GNNs) have attracted increasing attention in recent years and have achieved excellent performance in semi-supervised node classification tasks. The success of most GNNs relies on one fundamental assumption, i.e., that the original graph structure data is available. However, recent studies have shown that GNNs are vulnerable to the complex underlying structure of the graph, making it necessary to learn comprehensive and robust graph structures for downstream tasks, rather than relying only on the raw graph structure. In light of this, we seek to learn optimal graph structures for downstream tasks and propose a novel framework for semi-supervised classification. Specifically, based on the structural context information of the graph and node representations, we encode the complex interactions in semantics and generate semantic graphs to preserve the global structure. Moreover, we develop a novel multi-measure attention layer to optimize the similarity rather than prescribing it a priori, so that the similarity can be adaptively evaluated by integrating measures. These graphs are fused and optimized together with the GNN towards the semi-supervised classification objective. 
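A small numpy sketch of the multi-measure idea just described: several similarity measures are combined into one fused graph with attention-style weights; the choice of two measures and the fixed softmax weights are illustrative assumptions standing in for the learned attention coefficients.

```python
# Sketch of fusing several similarity measures into one graph; the fixed
# softmax weights stand in for learned attention coefficients.
import numpy as np

def cosine_sim(X):
    Xn = X / np.linalg.norm(X, axis=1, keepdims=True)
    return Xn @ Xn.T

def gaussian_sim(X, sigma=1.0):
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma ** 2))

X = np.random.default_rng(0).random((5, 3))
measures = np.stack([cosine_sim(X), gaussian_sim(X)])
logits = np.array([0.3, 0.7])              # would be learned in practice
w = np.exp(logits) / np.exp(logits).sum()  # softmax over measures
S = np.tensordot(w, measures, axes=1)      # fused similarity graph
print(S.shape)
```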
Extensive\nexperiments and ablation studies on six real-world datasets clearly demonstrate\nthe effectiveness of our proposed model and the contribution of each component.", + "authors": "Junchao Lin, Yuan Wan, Jingwen Xu, Xingchen Qi", + "published": "2022-12-04", + "updated": "2022-12-04", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/2012.05980v1", + "title": "CommPOOL: An Interpretable Graph Pooling Framework for Hierarchical Graph Representation Learning", + "abstract": "Recent years have witnessed the emergence and flourishing of hierarchical\ngraph pooling neural networks (HGPNNs) which are effective graph representation\nlearning approaches for graph level tasks such as graph classification.\nHowever, current HGPNNs do not take full advantage of the graph's intrinsic\nstructures (e.g., community structure). Moreover, the pooling operations in\nexisting HGPNNs are difficult to be interpreted. In this paper, we propose a\nnew interpretable graph pooling framework - CommPOOL, that can capture and\npreserve the hierarchical community structure of graphs in the graph\nrepresentation learning process. Specifically, the proposed community pooling\nmechanism in CommPOOL utilizes an unsupervised approach for capturing the\ninherent community structure of graphs in an interpretable manner. CommPOOL is\na general and flexible framework for hierarchical graph representation learning\nthat can further facilitate various graph-level tasks. Evaluations on five\npublic benchmark datasets and one synthetic dataset demonstrate the superior\nperformance of CommPOOL in graph representation learning for graph\nclassification compared to the state-of-the-art baseline methods, and its\neffectiveness in capturing and preserving the community structure of graphs.", + "authors": "Haoteng Tang, Guixiang Ma, Lifang He, Heng Huang, Liang Zhan", + "published": "2020-12-10", + "updated": "2020-12-10", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/1912.10206v1", + "title": "How Robust Are Graph Neural Networks to Structural Noise?", + "abstract": "Graph neural networks (GNNs) are an emerging model for learning graph\nembeddings and making predictions on graph structured data. However, robustness\nof graph neural networks is not yet well-understood. In this work, we focus on\nnode structural identity predictions, where a representative GNN model is able\nto achieve near-perfect accuracy. We also show that the same GNN model is not\nrobust to addition of structural noise, through a controlled dataset and set of\nexperiments. Finally, we show that under the right conditions, graph-augmented\ntraining is capable of significantly improving robustness to structural noise.", + "authors": "James Fox, Sivasankaran Rajamanickam", + "published": "2019-12-21", + "updated": "2019-12-21", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "stat.ML" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/2210.01489v1", + "title": "Generative Models and Learning Algorithms for Core-Periphery Structured Graphs", + "abstract": "We consider core-periphery structured graphs, which are graphs with a group\nof densely and sparsely connected nodes, respectively, referred to as core and\nperiphery nodes. The so-called core score of a node is related to the\nlikelihood of it being a core node. 
In this paper, we focus on learning the\ncore scores of a graph from its node attributes and connectivity structure. To\nthis end, we propose two classes of probabilistic graphical models: affine and\nnonlinear. First, we describe affine generative models to model the dependence\nof node attributes on its core scores, which determine the graph structure.\nNext, we discuss nonlinear generative models in which the partial correlations\nof node attributes influence the graph structure through latent core scores. We\ndevelop algorithms for inferring the model parameters and core scores of a\ngraph when both the graph structure and node attributes are available. When\nonly the node attributes of graphs are available, we jointly learn a\ncore-periphery structured graph and its core scores. We provide results from\nnumerical experiments on several synthetic and real-world datasets to\ndemonstrate the efficacy of the developed models and algorithms.", + "authors": "Sravanthi Gurugubelli, Sundeep Prabhakar Chepuri", + "published": "2022-10-04", + "updated": "2022-10-04", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/2403.04923v2", + "title": "Control-based Graph Embeddings with Data Augmentation for Contrastive Learning", + "abstract": "In this paper, we study the problem of unsupervised graph representation\nlearning by harnessing the control properties of dynamical networks defined on\ngraphs. Our approach introduces a novel framework for contrastive learning, a\nwidely prevalent technique for unsupervised representation learning. A crucial\nstep in contrastive learning is the creation of 'augmented' graphs from the\ninput graphs. Though different from the original graphs, these augmented graphs\nretain the original graph's structural characteristics. Here, we propose a\nunique method for generating these augmented graphs by leveraging the control\nproperties of networks. The core concept revolves around perturbing the\noriginal graph to create a new one while preserving the controllability\nproperties specific to networks and graphs. Compared to the existing methods,\nwe demonstrate that this innovative approach enhances the effectiveness of\ncontrastive learning frameworks, leading to superior results regarding the\naccuracy of the classification tasks. The key innovation lies in our ability to\ndecode the network structure using these control properties, opening new\navenues for unsupervised graph representation learning.", + "authors": "Obaid Ullah Ahmad, Anwar Said, Mudassir Shabbir, Waseem Abbas, Xenofon Koutsoukos", + "published": "2024-03-07", + "updated": "2024-04-18", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.MA", + "cs.SY", + "eess.SY" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/2211.07970v1", + "title": "Adaptive Multi-Neighborhood Attention based Transformer for Graph Representation Learning", + "abstract": "By incorporating the graph structural information into Transformers, graph\nTransformers have exhibited promising performance for graph representation\nlearning in recent years. Existing graph Transformers leverage specific\nstrategies, such as Laplacian eigenvectors and shortest paths of the node\npairs, to preserve the structural features of nodes and feed them into the\nvanilla Transformer to learn the representations of nodes. 
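As a concrete instance of one of the predefined strategies just mentioned, a short sketch of Laplacian-eigenvector positional encodings that can be appended to node features before a Transformer; the toy graph and the choice of k are assumptions for illustration.

```python
# Sketch of Laplacian-eigenvector positional encodings: the k smallest
# non-trivial eigenvectors of the normalized Laplacian per node.
import numpy as np

def laplacian_pe(A, k):
    deg = A.sum(axis=1)
    d_inv_sqrt = np.zeros_like(deg)
    d_inv_sqrt[deg > 0] = deg[deg > 0] ** -0.5
    L = np.eye(len(A)) - d_inv_sqrt[:, None] * A * d_inv_sqrt[None, :]
    vals, vecs = np.linalg.eigh(L)   # eigenvalues in ascending order
    return vecs[:, 1:k + 1]          # drop the first (trivial) eigenvector

A = np.array([[0, 1, 1, 0, 0],
              [1, 0, 1, 0, 0],
              [1, 1, 0, 1, 0],
              [0, 0, 1, 0, 1],
              [0, 0, 0, 1, 0]], dtype=float)
pe = laplacian_pe(A, k=2)
print(pe.shape)  # (5, 2) positional features per node
```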
It is hard for such predefined rules to extract informative graph structural features for arbitrary graphs whose topology varies greatly, limiting the learning capacity of the models. To this end, we propose an adaptive graph Transformer, termed Multi-Neighborhood Attention based Graph Transformer (MNA-GT), which adaptively captures the graph structural information for each node from the multi-neighborhood attention mechanism. By defining the input to the scaled dot-product as an attention kernel, MNA-GT constructs multiple attention kernels based on different hops of neighborhoods, such that each attention kernel can capture the specific graph structural information of the corresponding neighborhood for each node pair. In this way, MNA-GT can preserve the graph structural information efficiently by incorporating node representations learned by different attention kernels. MNA-GT further employs an attention layer to learn the importance of the different attention kernels, enabling the model to adaptively capture the graph structural information for different nodes. Extensive experiments are conducted on a variety of graph benchmarks, and the empirical results show that MNA-GT outperforms many strong baselines.", + "authors": "Gaichao Li, Jinsong Chen, Kun He", + "published": "2022-11-15", + "updated": "2022-11-15", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.SI" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/2209.07817v2", + "title": "SPGP: Structure Prototype Guided Graph Pooling", + "abstract": "While graph neural networks (GNNs) have been successful for node classification and link prediction tasks in graphs, learning graph-level representations remains a challenge. For a graph-level representation, it is important to learn both the representation of neighboring nodes (i.e., aggregation) and graph structural information. A number of graph pooling methods have been developed for this goal. However, most of the existing pooling methods utilize the k-hop neighborhood without considering explicit structural information in a graph. In this paper, we propose Structure Prototype Guided Pooling (SPGP) that utilizes prior graph structures to overcome the limitation. SPGP formulates graph structures as learnable prototype vectors and computes the affinity between nodes and prototype vectors. This leads to a novel node scoring scheme that prioritizes informative nodes while encapsulating the useful structures of the graph. Our experimental results show that SPGP outperforms state-of-the-art graph pooling methods on graph classification benchmark datasets in both accuracy and scalability.", + "authors": "Sangseon Lee, Dohoon Lee, Yinhua Piao, Sun Kim", + "published": "2022-09-16", + "updated": "2023-03-01", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.SI" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/1904.08915v2", + "title": "Decoding Molecular Graph Embeddings with Reinforcement Learning", + "abstract": "We present RL-VAE, a graph-to-graph variational autoencoder that uses reinforcement learning to decode molecular graphs from latent embeddings. Methods have been described previously for graph-to-graph autoencoding, but these approaches require sophisticated decoders that increase the complexity of training and evaluation (such as requiring parallel encoders and decoders or non-trivial graph matching). 
Here, we repurpose a simple graph generator to\nenable efficient decoding and generation of molecular graphs.", + "authors": "Steven Kearnes, Li Li, Patrick Riley", + "published": "2019-04-18", + "updated": "2019-06-04", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "stat.ML" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/2402.16374v2", + "title": "Graph Learning under Distribution Shifts: A Comprehensive Survey on Domain Adaptation, Out-of-distribution, and Continual Learning", + "abstract": "Graph learning plays a pivotal role and has gained significant attention in\nvarious application scenarios, from social network analysis to recommendation\nsystems, for its effectiveness in modeling complex data relations represented\nby graph structural data. In reality, the real-world graph data typically show\ndynamics over time, with changing node attributes and edge structure, leading\nto the severe graph data distribution shift issue. This issue is compounded by\nthe diverse and complex nature of distribution shifts, which can significantly\nimpact the performance of graph learning methods in degraded generalization and\nadaptation capabilities, posing a substantial challenge to their effectiveness.\nIn this survey, we provide a comprehensive review and summary of the latest\napproaches, strategies, and insights that address distribution shifts within\nthe context of graph learning. Concretely, according to the observability of\ndistributions in the inference stage and the availability of sufficient\nsupervision information in the training stage, we categorize existing graph\nlearning methods into several essential scenarios, including graph domain\nadaptation learning, graph out-of-distribution learning, and graph continual\nlearning. For each scenario, a detailed taxonomy is proposed, with specific\ndescriptions and discussions of existing progress made in distribution-shifted\ngraph learning. Additionally, we discuss the potential applications and future\ndirections for graph learning under distribution shifts with a systematic\nanalysis of the current state in this field. The survey is positioned to\nprovide general guidance for the development of effective graph learning\nalgorithms in handling graph distribution shifts, and to stimulate future\nresearch and advancements in this area.", + "authors": "Man Wu, Xin Zheng, Qin Zhang, Xiao Shen, Xiong Luo, Xingquan Zhu, Shirui Pan", + "published": "2024-02-26", + "updated": "2024-03-07", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.SI" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/2202.10688v2", + "title": "Graph Lifelong Learning: A Survey", + "abstract": "Graph learning is a popular approach for performing machine learning on\ngraph-structured data. It has revolutionized the machine learning ability to\nmodel graph data to address downstream tasks. Its application is wide due to\nthe availability of graph data ranging from all types of networks to\ninformation systems. Most graph learning methods assume that the graph is\nstatic and its complete structure is known during training. This limits their\napplicability since they cannot be applied to problems where the underlying\ngraph grows over time and/or new tasks emerge incrementally. 
Such applications require a lifelong learning approach that can learn the graph continuously and accommodate new information whilst retaining previously learned knowledge. Lifelong learning methods that enable continuous learning in regular domains like images and text cannot be directly applied to continuously evolving graph data, due to its irregular structure. As a result, graph lifelong learning is gaining attention from the research community. This survey paper provides a comprehensive overview of recent advancements in graph lifelong learning, including the categorization of existing methods and discussions of potential applications and open research problems.", + "authors": "Falih Gozi Febrinanto, Feng Xia, Kristen Moore, Chandra Thapa, Charu Aggarwal", + "published": "2022-02-22", + "updated": "2022-11-04", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI", + "68T07, 68T05", + "I.2.6" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/2110.05018v2", + "title": "Time-varying Graph Learning Under Structured Temporal Priors", + "abstract": "This paper endeavors to learn time-varying graphs by using structured temporal priors that assume underlying relations between any two graphs in the graph sequence. Different from many existing chain-structure-based methods, in which priors like temporal homogeneity can only describe the variations of two consecutive graphs, we propose a structure named \emph{temporal graph} to characterize the underlying real temporal relations. Under this framework, the chain structure is actually a special case of our temporal graph. We further propose a distributed algorithm based on the Alternating Direction Method of Multipliers (ADMM) to solve the induced optimization problem. Numerical experiments demonstrate the superiority of our method.", + "authors": "Xiang Zhang, Qiao Wang", + "published": "2021-10-11", + "updated": "2022-02-23", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "eess.SP" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/2203.09205v1", + "title": "SoK: Differential Privacy on Graph-Structured Data", + "abstract": "In this work, we study the applications of differential privacy (DP) in the context of graph-structured data. We discuss the formulations of DP applicable to the publication of graphs and their associated statistics as well as machine learning on graph-based data, including graph neural networks (GNNs). The formulation of DP in the context of graph-structured data is difficult, as individual data points are interconnected (often non-linearly or sparsely). This connectivity complicates the computation of individual privacy loss in differentially private learning. The problem is exacerbated by the absence of a single, well-established formulation of DP in graph settings. This issue extends to the domain of GNNs, rendering private machine learning on graph-structured data a challenging task. A lack of prior systematisation work motivated us to study graph-based learning from a privacy perspective. In this work, we systematise different formulations of DP on graphs, discuss challenges and promising applications, including the GNN domain. We compare and separate works into graph analysis tasks and graph learning tasks with GNNs. Finally, we conclude our work with a discussion of open questions and potential directions for further research in this area.", + "authors": "Tamara T. 
Mueller, Dmitrii Usynin, Johannes C. Paetzold, Daniel Rueckert, Georgios Kaissis", + "published": "2022-03-17", + "updated": "2022-03-17", + "primary_cat": "cs.CR", + "cats": [ + "cs.CR", + "cs.AI", + "cs.LG" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/1912.07832v1", + "title": "Deep Iterative and Adaptive Learning for Graph Neural Networks", + "abstract": "In this paper, we propose an end-to-end graph learning framework, namely Deep Iterative and Adaptive Learning for Graph Neural Networks (DIAL-GNN), for jointly learning the graph structure and graph embeddings simultaneously. We first cast the graph structure learning problem as a similarity metric learning problem and leverage an adapted graph regularization for controlling smoothness, connectivity and sparsity of the generated graph. We further propose a novel iterative method for searching for a hidden graph structure that augments the initial graph structure. Our iterative method dynamically stops when the learned graph structure comes close enough to the optimal graph. Our extensive experiments demonstrate that the proposed DIAL-GNN model can consistently outperform or match state-of-the-art baselines in terms of both downstream task performance and computational time. The proposed approach can cope with both transductive learning and inductive learning.", + "authors": "Yu Chen, Lingfei Wu, Mohammed J. Zaki", + "published": "2019-12-17", + "updated": "2019-12-17", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "stat.ML" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/2204.01855v2", + "title": "A Survey on Graph Representation Learning Methods", + "abstract": "Graph representation learning has been a very active research area in recent years. The goal of graph representation learning is to generate graph representation vectors that capture the structure and features of large graphs accurately. This is especially important because the quality of the graph representation vectors will affect the performance of these vectors in downstream tasks such as node classification, link prediction and anomaly detection. Many techniques have been proposed for generating effective graph representation vectors. Two of the most prevalent categories of graph representation learning are graph embedding methods without using graph neural nets (GNN), which we denote as non-GNN based graph embedding methods, and graph neural nets (GNN) based methods. Non-GNN graph embedding methods are based on techniques such as random walks, temporal point processes and neural network learning methods. GNN-based methods, on the other hand, are the application of deep learning on graph data. In this survey, we provide an overview of these two categories and cover the current state-of-the-art methods for both static and dynamic graphs. Finally, we explore some open and ongoing research directions for future work.", + "authors": "Shima Khoshraftar, Aijun An", + "published": "2022-04-04", + "updated": "2022-06-15", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.SI" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/1611.07308v1", + "title": "Variational Graph Auto-Encoders", + "abstract": "We introduce the variational graph auto-encoder (VGAE), a framework for unsupervised learning on graph-structured data based on the variational auto-encoder (VAE). 
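A minimal numpy sketch of the encoder-decoder pattern just introduced: a two-layer GCN encoder produces mu/logvar, a reparameterized sample is decoded by an inner product; the random weights are stand-ins for trained parameters, and the toy graph is an assumption.

```python
# Minimal sketch of the VGAE pattern: GCN encoder -> reparameterize ->
# sigmoid inner-product decoder; weights are random stand-ins.
import numpy as np

rng = np.random.default_rng(0)
A = np.array([[0, 1, 1], [1, 0, 0], [1, 0, 0]], dtype=float)
X = rng.random((3, 4))

A_hat = A + np.eye(3)                        # add self-loops
d = A_hat.sum(1) ** -0.5
A_norm = d[:, None] * A_hat * d[None, :]     # symmetric normalization

W0 = rng.normal(size=(4, 8))
W_mu, W_logvar = rng.normal(size=(8, 2)), rng.normal(size=(8, 2))
H = np.maximum(A_norm @ X @ W0, 0)           # GCN layer with ReLU
mu, logvar = A_norm @ H @ W_mu, A_norm @ H @ W_logvar

Z = mu + np.exp(0.5 * logvar) * rng.normal(size=mu.shape)  # reparameterize
A_rec = 1 / (1 + np.exp(-(Z @ Z.T)))         # sigmoid inner-product decoder
print(A_rec.round(2))
```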
This model makes use of latent variables and is capable of\nlearning interpretable latent representations for undirected graphs. We\ndemonstrate this model using a graph convolutional network (GCN) encoder and a\nsimple inner product decoder. Our model achieves competitive results on a link\nprediction task in citation networks. In contrast to most existing models for\nunsupervised learning on graph-structured data and link prediction, our model\ncan naturally incorporate node features, which significantly improves\npredictive performance on a number of benchmark datasets.", + "authors": "Thomas N. Kipf, Max Welling", + "published": "2016-11-21", + "updated": "2016-11-21", + "primary_cat": "stat.ML", + "cats": [ + "stat.ML", + "cs.LG" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/2401.00876v1", + "title": "Balanced Graph Structure Information for Brain Disease Detection", + "abstract": "Analyzing connections between brain regions of interest (ROI) is vital to\ndetect neurological disorders such as autism or schizophrenia. Recent\nadvancements employ graph neural networks (GNNs) to utilize graph structures in\nbrains, improving detection performances. Current methods use correlation\nmeasures between ROI's blood-oxygen-level-dependent (BOLD) signals to generate\nthe graph structure. Other methods use the training samples to learn the\noptimal graph structure through end-to-end learning. However, implementing\nthose methods independently leads to some issues with noisy data for the\ncorrelation graphs and overfitting problems for the optimal graph. In this\nwork, we proposed Bargrain (balanced graph structure for brains), which models\ntwo graph structures: filtered correlation matrix and optimal sample graph\nusing graph convolution networks (GCNs). This approach aims to get advantages\nfrom both graphs and address the limitations of only relying on a single type\nof structure. Based on our extensive experiment, Bargrain outperforms\nstate-of-the-art methods in classification tasks on brain disease datasets, as\nmeasured by average F1 scores.", + "authors": "Falih Gozi Febrinanto, Mujie Liu, Feng Xia", + "published": "2023-12-30", + "updated": "2023-12-30", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI", + "q-bio.NC" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/2105.00696v1", + "title": "Graph Learning: A Survey", + "abstract": "Graphs are widely used as a popular representation of the network structure\nof connected data. Graph data can be found in a broad spectrum of application\ndomains such as social systems, ecosystems, biological networks, knowledge\ngraphs, and information systems. With the continuous penetration of artificial\nintelligence technologies, graph learning (i.e., machine learning on graphs) is\ngaining attention from both researchers and practitioners. Graph learning\nproves effective for many tasks, such as classification, link prediction, and\nmatching. Generally, graph learning methods extract relevant features of graphs\nby taking advantage of machine learning algorithms. In this survey, we present\na comprehensive overview on the state-of-the-art of graph learning. Special\nattention is paid to four categories of existing graph learning methods,\nincluding graph signal processing, matrix factorization, random walk, and deep\nlearning. Major models and algorithms under these categories are reviewed\nrespectively. 
We examine graph learning applications in areas such as text,\nimages, science, knowledge graphs, and combinatorial optimization. In addition,\nwe discuss several promising research directions in this field.", + "authors": "Feng Xia, Ke Sun, Shuo Yu, Abdul Aziz, Liangtian Wan, Shirui Pan, Huan Liu", + "published": "2021-05-03", + "updated": "2021-05-03", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI", + "cs.SI", + "68T07", + "I.2.6" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/1802.04407v2", + "title": "Adversarially Regularized Graph Autoencoder for Graph Embedding", + "abstract": "Graph embedding is an effective method to represent graph data in a low\ndimensional space for graph analytics. Most existing embedding algorithms\ntypically focus on preserving the topological structure or minimizing the\nreconstruction errors of graph data, but they have mostly ignored the data\ndistribution of the latent codes from the graphs, which often results in\ninferior embedding in real-world graph data. In this paper, we propose a novel\nadversarial graph embedding framework for graph data. The framework encodes the\ntopological structure and node content in a graph to a compact representation,\non which a decoder is trained to reconstruct the graph structure. Furthermore,\nthe latent representation is enforced to match a prior distribution via an\nadversarial training scheme. To learn a robust embedding, two variants of\nadversarial approaches, adversarially regularized graph autoencoder (ARGA) and\nadversarially regularized variational graph autoencoder (ARVGA), are developed.\nExperimental studies on real-world graphs validate our design and demonstrate\nthat our algorithms outperform baselines by a wide margin in link prediction,\ngraph clustering, and graph visualization tasks.", + "authors": "Shirui Pan, Ruiqi Hu, Guodong Long, Jing Jiang, Lina Yao, Chengqi Zhang", + "published": "2018-02-13", + "updated": "2019-01-08", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "stat.ML" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/2101.06861v3", + "title": "Discrete Graph Structure Learning for Forecasting Multiple Time Series", + "abstract": "Time series forecasting is an extensively studied subject in statistics,\neconomics, and computer science. Exploration of the correlation and causation\namong the variables in a multivariate time series shows promise in enhancing\nthe performance of a time series model. When using deep neural networks as\nforecasting models, we hypothesize that exploiting the pairwise information\namong multiple (multivariate) time series also improves their forecast. If an\nexplicit graph structure is known, graph neural networks (GNNs) have been\ndemonstrated as powerful tools to exploit the structure. In this work, we\npropose learning the structure simultaneously with the GNN if the graph is\nunknown. We cast the problem as learning a probabilistic graph model through\noptimizing the mean performance over the graph distribution. The distribution\nis parameterized by a neural network so that discrete graphs can be sampled\ndifferentiably through reparameterization. 
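A sketch of one standard way to realize the differentiable edge sampling mentioned above, the binary-concrete (Gumbel) relaxation of per-edge Bernoulli variables; the edge logits and temperature are illustrative assumptions, not the paper's parameterization.

```python
# Sketch of differentiable edge sampling via the binary-concrete trick:
# relaxed Bernoulli edges so gradients can flow through the adjacency.
import numpy as np

def sample_soft_adjacency(theta, temperature, rng):
    """theta: edge logits; returns a relaxed 0/1 adjacency sample."""
    u = rng.uniform(1e-6, 1 - 1e-6, size=theta.shape)
    gumbel = np.log(u) - np.log(1 - u)       # logistic noise
    return 1 / (1 + np.exp(-(theta + gumbel) / temperature))

rng = np.random.default_rng(0)
theta = rng.normal(size=(4, 4))
theta = (theta + theta.T) / 2                # symmetric edge logits
A_soft = sample_soft_adjacency(theta, temperature=0.5, rng=rng)
np.fill_diagonal(A_soft, 0.0)                # no self-loops
print(A_soft.round(2))                       # near-binary as temperature -> 0
```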
Empirical evaluations show that our\nmethod is simpler, more efficient, and better performing than a recently\nproposed bilevel learning approach for graph structure learning, as well as a\nbroad array of forecasting models, either deep or non-deep learning based, and\ngraph or non-graph based.", + "authors": "Chao Shang, Jie Chen, Jinbo Bi", + "published": "2021-01-18", + "updated": "2021-04-21", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "stat.ML" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/2401.16176v1", + "title": "A Survey on Structure-Preserving Graph Transformers", + "abstract": "The transformer architecture has shown remarkable success in various domains,\nsuch as natural language processing and computer vision. When it comes to graph\nlearning, transformers are required not only to capture the interactions\nbetween pairs of nodes but also to preserve graph structures connoting the\nunderlying relations and proximity between them, showing the expressive power\nto capture different graph structures. Accordingly, various\nstructure-preserving graph transformers have been proposed and widely used for\nvarious tasks, such as graph-level tasks in bioinformatics and\nchemoinformatics. However, strategies related to graph structure preservation\nhave not been well organized and systematized in the literature. In this paper,\nwe provide a comprehensive overview of structure-preserving graph transformers\nand generalize these methods from the perspective of their design objective.\nFirst, we divide strategies into four main groups: node feature modulation,\ncontext node sampling, graph rewriting, and transformer architecture\nimprovements. We then further divide the strategies according to the coverage\nand goals of graph structure preservation. Furthermore, we also discuss\nchallenges and future directions for graph transformer models to preserve the\ngraph structure and understand the nature of graphs.", + "authors": "Van Thuy Hoang, O-Joun Lee", + "published": "2024-01-29", + "updated": "2024-01-29", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/2302.02909v1", + "title": "Spectral Augmentations for Graph Contrastive Learning", + "abstract": "Contrastive learning has emerged as a premier method for learning\nrepresentations with or without supervision. Recent studies have shown its\nutility in graph representation learning for pre-training. Despite successes,\nthe understanding of how to design effective graph augmentations that can\ncapture structural properties common to many different types of downstream\ngraphs remains incomplete. We propose a set of well-motivated graph\ntransformation operations derived via graph spectral analysis to provide a bank\nof candidates when constructing augmentations for a graph contrastive\nobjective, enabling contrastive learning to capture useful structural\nrepresentation from pre-training graph datasets. We first present a spectral\ngraph cropping augmentation that involves filtering nodes by applying\nthresholds to the eigenvalues of the leading Laplacian eigenvectors. Our second\nnovel augmentation reorders the graph frequency components in a structural\nLaplacian-derived position graph embedding. Further, we introduce a method that\nleads to improved views of local subgraphs by performing alignment via global\nrandom walk embeddings. 
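A rough sketch of a spectral crop in the spirit of the augmentation described above: thresholding the entries of the Fiedler vector keeps one smooth side of the graph as an augmented view; the zero threshold and toy graph are assumptions, and the paper's operator differs in detail.

```python
# Rough sketch of a spectral crop: threshold the Fiedler vector (second
# Laplacian eigenvector) to keep one "half" of the graph as a view.
import numpy as np

def spectral_crop(A):
    L = np.diag(A.sum(1)) - A                # combinatorial Laplacian
    _, vecs = np.linalg.eigh(L)
    fiedler = vecs[:, 1]
    keep = np.where(fiedler >= 0)[0]         # one side of the partition
    return A[np.ix_(keep, keep)], keep

A = np.array([[0, 1, 1, 0, 0, 0],
              [1, 0, 1, 0, 0, 0],
              [1, 1, 0, 1, 0, 0],
              [0, 0, 1, 0, 1, 1],
              [0, 0, 0, 1, 0, 1],
              [0, 0, 0, 1, 1, 0]], dtype=float)
sub, kept = spectral_crop(A)
print(kept, sub.shape)
```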
Our experimental results indicate consistent improvements in out-of-domain graph data transfer compared to state-of-the-art graph contrastive learning methods, shedding light on how to design a graph learner that is able to learn structural properties common to diverse graph types.", + "authors": "Amur Ghose, Yingxue Zhang, Jianye Hao, Mark Coates", + "published": "2023-02-06", + "updated": "2023-02-06", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/1904.09671v1", + "title": "DDGK: Learning Graph Representations for Deep Divergence Graph Kernels", + "abstract": "Can neural networks learn to compare graphs without feature engineering? In this paper, we show that it is possible to learn representations for graph similarity with neither domain knowledge nor supervision (i.e., feature engineering or labeled graphs). We propose Deep Divergence Graph Kernels, an unsupervised method for learning representations over graphs that encodes a relaxed notion of graph isomorphism. Our method consists of three parts. First, we learn an encoder for each anchor graph to capture its structure. Second, for each pair of graphs, we train a cross-graph attention network which uses the node representations of an anchor graph to reconstruct another graph. This approach, which we call isomorphism attention, captures how well the representations of one graph can encode another. We use the attention-augmented encoder's predictions to define a divergence score for each pair of graphs. Finally, we construct an embedding space for all graphs using these pair-wise divergence scores.\n Unlike previous work, much of which relies on 1) supervision, 2) domain-specific knowledge (e.g. a reliance on Weisfeiler-Lehman kernels), and 3) known node alignment, our unsupervised method jointly learns node representations, graph representations, and an attention-based alignment between graphs.\n Our experimental results show that Deep Divergence Graph Kernels can learn an unsupervised alignment between graphs, and that the learned representations achieve competitive results when used as features on a number of challenging graph classification tasks. Furthermore, we illustrate how the learned attention allows insight into the alignment of sub-structures across graphs.", + "authors": "Rami Al-Rfou, Dustin Zelle, Bryan Perozzi", + "published": "2019-04-21", + "updated": "2019-04-21", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.IR", + "cs.SI", + "stat.ML" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/2403.07294v1", + "title": "Graph Data Condensation via Self-expressive Graph Structure Reconstruction", + "abstract": "With the increasing demand for training graph neural networks (GNNs) on large-scale graphs, graph data condensation has emerged as a critical technique to relieve the storage and time costs during the training phase. It aims to condense the original large-scale graph to a much smaller synthetic graph while preserving the essential information necessary for efficiently training a downstream GNN. However, existing methods concentrate either on optimizing node features exclusively or endeavor to independently learn node features and the graph structure generator. They cannot explicitly leverage the information of the original graph structure and fail to construct an interpretable graph structure for the synthetic dataset. 
To address these issues, we introduce a\nnovel framework named \\textbf{G}raph Data \\textbf{C}ondensation via\n\\textbf{S}elf-expressive Graph Structure \\textbf{R}econstruction\n(\\textbf{GCSR}). Our method stands out by (1) explicitly incorporating the\noriginal graph structure into the condensing process and (2) capturing the\nnuanced interdependencies between the condensed nodes by reconstructing an\ninterpretable self-expressive graph structure. Extensive experiments and\ncomprehensive analysis validate the efficacy of the proposed method across\ndiverse GNN models and datasets. Our code is available at\nhttps://www.dropbox.com/scl/fi/2aonyp5ln5gisdqtjimu8/GCSR.zip?rlkey=11cuwfpsf54wxiiktu0klud0x&dl=0", + "authors": "Zhanyu Liu, Chaolv Zeng, Guanjie Zheng", + "published": "2024-03-12", + "updated": "2024-03-12", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI", + "cs.SI" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/2009.00647v4", + "title": "Lifelong Graph Learning", + "abstract": "Graph neural networks (GNN) are powerful models for many graph-structured\ntasks. Existing models often assume that the complete structure of the graph is\navailable during training. In practice, however, graph-structured data is\nusually formed in a streaming fashion so that learning a graph continuously is\noften necessary. In this paper, we bridge GNN and lifelong learning by\nconverting a continual graph learning problem to a regular graph learning\nproblem so GNN can inherit the lifelong learning techniques developed for\nconvolutional neural networks (CNN). We propose a new topology, the feature\ngraph, which takes features as new nodes and turns nodes into independent\ngraphs. This successfully converts the original problem of node classification\nto graph classification. In the experiments, we demonstrate the efficiency and\neffectiveness of feature graph networks (FGN) by continuously learning a\nsequence of classical graph datasets. We also show that FGN achieves superior\nperformance in two applications, i.e., lifelong human action recognition with\nwearable devices and feature matching. To the best of our knowledge, FGN is the\nfirst method to bridge graph learning and lifelong learning via a novel graph\ntopology. Source code is available at https://github.com/wang-chen/LGL", + "authors": "Chen Wang, Yuheng Qiu, Dasong Gao, Sebastian Scherer", + "published": "2020-09-01", + "updated": "2022-03-26", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "stat.ML" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/2309.10134v1", + "title": "GDM: Dual Mixup for Graph Classification with Limited Supervision", + "abstract": "Graph Neural Networks (GNNs) require a large number of labeled graph samples\nto obtain good performance on the graph classification task. The performance of\nGNNs degrades significantly as the number of labeled graph samples decreases.\nTo reduce the annotation cost, it is therefore important to develop graph\naugmentation methods that can generate new graph instances to increase the size\nand diversity of the limited set of available labeled graph samples. In this\nwork, we propose a novel mixup-based graph augmentation method, Graph Dual\nMixup (GDM), that leverages both functional and structural information of the\ngraph instances to generate new labeled graph samples. 
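A simplified mixup sketch for two equal-size labeled graphs, interpolating node features and one-hot labels; GDM's structural mixup in a learned embedding space (described next) is omitted, and the Beta parameter is an assumption.

```python
# Simplified mixup sketch for two equal-size labeled graphs: node
# features and one-hot labels are interpolated with a Beta-drawn lambda.
import numpy as np

def graph_mixup(X1, y1, X2, y2, alpha=0.2, rng=None):
    rng = rng or np.random.default_rng()
    lam = rng.beta(alpha, alpha)
    return lam * X1 + (1 - lam) * X2, lam * y1 + (1 - lam) * y2

rng = np.random.default_rng(0)
X1, X2 = rng.random((5, 3)), rng.random((5, 3))
y1, y2 = np.array([1.0, 0.0]), np.array([0.0, 1.0])  # one-hot graph labels
X_new, y_new = graph_mixup(X1, y1, X2, y2, rng=rng)
print(X_new.shape, y_new)
```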
GDM employs a graph structural auto-encoder to learn structural embeddings of the graph samples, and then applies mixup to the structural information of the graphs in the learned structural embedding space and generates new graph structures from the mixup structural embeddings. As for the functional information, GDM applies mixup directly to the input node features of the graph samples to generate functional node feature information for new mixup graph instances. Jointly, the generated input node features and graph structures yield new graph samples which can supplement the set of original labeled graphs. Furthermore, we propose two novel Balanced Graph Sampling methods to enhance the balanced difficulty and diversity of the generated graph samples. Experimental results on the benchmark datasets demonstrate that our proposed method substantially outperforms the state-of-the-art graph augmentation methods when the labeled graphs are scarce.", + "authors": "Abdullah Alchihabi, Yuhong Guo", + "published": "2023-09-18", + "updated": "2023-09-18", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/2106.15239v1", + "title": "Generating the Graph Gestalt: Kernel-Regularized Graph Representation Learning", + "abstract": "Recent work on graph generative models has made remarkable progress towards generating increasingly realistic graphs, as measured by global graph features such as degree distribution, density, and clustering coefficients. Deep generative models have also made significant advances through better modelling of the local correlations in the graph topology, which have been very useful for predicting unobserved graph components, such as the existence of a link or the class of a node, from nearby observed graph components. A complete scientific understanding of graph data should address both global and local structure. In this paper, we propose a joint model for both as complementary objectives in a graph VAE framework. Global structure is captured by incorporating graph kernels in a probabilistic model whose loss function is closely related to the maximum mean discrepancy (MMD) between the global structures of the reconstructed and the input graphs. The ELBO objective derived from the model regularizes a standard local link reconstruction term with an MMD term. Our experiments demonstrate a significant improvement in the realism of the generated graph structures, typically by 1-2 orders of magnitude in graph structure metrics, compared to leading graph VAE and GAN models. Local link reconstruction improves as well in many cases.", + "authors": "Kiarash Zahirnia, Ankita Sakhuja, Oliver Schulte, Parmis Nadaf, Ke Li, Xia Hu", + "published": "2021-06-29", + "updated": "2021-06-29", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/1611.04687v2", + "title": "Intrinsic Geometric Information Transfer Learning on Multiple Graph-Structured Datasets", + "abstract": "Graphs provide a powerful means for representing complex interactions between entities. Recently, deep learning approaches are emerging for representing and modeling graph-structured data, although the conventional deep learning methods (such as convolutional neural networks and recurrent neural networks) have mainly focused on grid-structured inputs (image and audio). 
Leveraged by the\ncapability of representation learning, deep learning based techniques are\nreporting promising results for graph applications by detecting structural\ncharacteristics of graphs in an automated fashion. In this paper, we attempt to\nadvance deep learning for graph-structured data by incorporating another\ncomponent, transfer learning. By transferring the intrinsic geometric\ninformation learned in the source domain, our approach can help us to construct\na model for a new but related task in the target domain without collecting new\ndata and without training a new model from scratch. We thoroughly test our\napproach with large-scale real corpora and confirm the effectiveness of the\nproposed transfer learning framework for deep learning on graphs. According to\nour experiments, transfer learning is most effective when the source and target\ndomains bear a high level of structural similarity in their graph\nrepresentations.", + "authors": "Jaekoo Lee, Hyunjae Kim, Jongsun Lee, Sungroh Yoon", + "published": "2016-11-15", + "updated": "2016-12-05", + "primary_cat": "cs.NE", + "cats": [ + "cs.NE" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/1104.5256v1", + "title": "Learning Undirected Graphical Models with Structure Penalty", + "abstract": "In undirected graphical models, learning the graph structure and learning the\nfunctions that relate the predictive variables (features) to the responses\ngiven the structure are two topics that have been widely investigated in\nmachine learning and statistics. Learning graphical models in two stages will\nhave problems because graph structure may change after considering the\nfeatures. The main contribution of this paper is the proposed method that\nlearns the graph structure and functions on the graph at the same time. General\ngraphical models with binary outcomes conditioned on predictive variables are\nproved to be equivalent to multivariate Bernoulli model. The reparameterization\nof the potential functions in graphical model by conditional log odds ratios in\nmultivariate Bernoulli model offers advantage in the representation of the\nconditional independence structure in the model. Additionally, we impose a\nstructure penalty on groups of conditional log odds ratios to learn the graph\nstructure. These groups of functions are designed with overlaps to enforce\nhierarchical function selection. In this way, we are able to shrink higher\norder interactions to obtain a sparse graph structure. Simulation studies show\nthat the method is able to recover the graph structure. The analysis of county\ndata from Census Bureau gives interesting relations between unemployment rate,\ncrime and others discovered by the model.", + "authors": "Shilin Ding", + "published": "2011-04-27", + "updated": "2011-04-27", + "primary_cat": "cs.AI", + "cats": [ + "cs.AI", + "cs.LG" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/2403.03659v1", + "title": "Robust Graph Structure Learning under Heterophily", + "abstract": "Graph is a fundamental mathematical structure in characterizing relations\nbetween different objects and has been widely used on various learning tasks.\nMost methods implicitly assume a given graph to be accurate and complete.\nHowever, real data is inevitably noisy and sparse, which will lead to inferior\nresults. 
Despite the remarkable success of recent graph representation learning\nmethods, they inherently presume that the graph is homophilic, and largely\noverlook heterophily, where most connected nodes are from different classes. In\nthis regard, we propose a novel robust graph structure learning method to\nachieve a high-quality graph from heterophilic data for downstream tasks. We\nfirst apply a high-pass filter to make each node more distinctive from its\nneighbors by encoding structure information into the node features. Then, we\nlearn a robust graph with an adaptive norm characterizing different levels of\nnoise. Afterwards, we propose a novel regularizer to further refine the graph\nstructure. Clustering and semi-supervised classification experiments on\nheterophilic graphs verify the effectiveness of our method.", + "authors": "Xuanting Xie, Zhao Kang, Wenyu Chen", + "published": "2024-03-06", + "updated": "2024-03-06", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/2005.03675v3", + "title": "Machine Learning on Graphs: A Model and Comprehensive Taxonomy", + "abstract": "There has been a surge of recent interest in learning representations for\ngraph-structured data. Graph representation learning methods have generally\nfallen into three main categories, based on the availability of labeled data.\nThe first, network embedding (such as shallow graph embedding or graph\nauto-encoders), focuses on learning unsupervised representations of relational\nstructure. The second, graph regularized neural networks, leverages graphs to\naugment neural network losses with a regularization objective for\nsemi-supervised learning. The third, graph neural networks, aims to learn\ndifferentiable functions over discrete topologies with arbitrary structure.\nHowever, despite the popularity of these areas there has been surprisingly\nlittle work on unifying the three paradigms. Here, we aim to bridge the gap\nbetween graph neural networks, network embedding and graph regularization\nmodels. We propose a comprehensive taxonomy of representation learning methods\nfor graph-structured data, aiming to unify several disparate bodies of work.\nSpecifically, we propose a Graph Encoder Decoder Model (GRAPHEDM), which\ngeneralizes popular algorithms for semi-supervised learning on graphs (e.g.\nGraphSage, Graph Convolutional Networks, Graph Attention Networks), and\nunsupervised learning of graph representations (e.g. DeepWalk, node2vec, etc)\ninto a single consistent approach. To illustrate the generality of this\napproach, we fit over thirty existing methods into this framework. 
We believe\nthat this unifying view both provides a solid foundation for understanding the\nintuition behind these methods, and enables future research in the area.", + "authors": "Ines Chami, Sami Abu-El-Haija, Bryan Perozzi, Christopher R\u00e9, Kevin Murphy", + "published": "2020-05-07", + "updated": "2022-04-12", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.NE", + "cs.SI", + "stat.ML" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/2008.10065v1", + "title": "Kernel-based Graph Learning from Smooth Signals: A Functional Viewpoint", + "abstract": "The problem of graph learning concerns the construction of an explicit\ntopological structure revealing the relationship between nodes representing\ndata entities, which plays an increasingly important role in the success of\nmany graph-based representations and algorithms in the field of machine\nlearning and graph signal processing. In this paper, we propose a novel graph\nlearning framework that incorporates the node-side and observation-side\ninformation, and in particular the covariates that help to explain the\ndependency structures in graph signals. To this end, we consider graph signals\nas functions in the reproducing kernel Hilbert space associated with a\nKronecker product kernel, and integrate functional learning with\nsmoothness-promoting graph learning to learn a graph representing the\nrelationship between nodes. The functional learning increases the robustness of\ngraph learning against missing and incomplete information in the graph signals.\nIn addition, we develop a novel graph-based regularisation method which, when\ncombined with the Kronecker product kernel, enables our model to capture both\nthe dependency explained by the graph and the dependency due to graph signals\nobserved under different but related circumstances, e.g. different points in\ntime. The latter means the graph signals are free from the i.i.d. assumptions\nrequired by the classical graph learning models. Experiments on both synthetic\nand real-world data show that our methods outperform the state-of-the-art\nmodels in learning a meaningful graph topology from graph signals, in\nparticular under heavy noise, missing values, and multiple dependency.", + "authors": "Xingyue Pu, Siu Lun Chau, Xiaowen Dong, Dino Sejdinovic", + "published": "2020-08-23", + "updated": "2020-08-23", + "primary_cat": "stat.ML", + "cats": [ + "stat.ML", + "cs.LG", + "cs.SI", + "eess.SP" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/2212.04934v1", + "title": "Learning Graph Algorithms With Recurrent Graph Neural Networks", + "abstract": "Classical graph algorithms work well for combinatorial problems that can be\nthoroughly formalized and abstracted. Once the algorithm is derived, it\ngeneralizes to instances of any size. However, developing an algorithm that\nhandles complex structures and interactions in the real world can be\nchallenging. Rather than specifying the algorithm, we can try to learn it from\nthe graph-structured data. Graph Neural Networks (GNNs) are inherently capable\nof working on graph structures; however, they struggle to generalize well, and\nlearning on larger instances is challenging. In order to scale, we focus on a\nrecurrent architecture design that can learn simple graph problems end to end\non smaller graphs and then extrapolate to larger instances. As our main\ncontribution, we identify three essential techniques for recurrent GNNs to\nscale. 
By using (i) skip connections, (ii) state regularization, and (iii) edge\nconvolutions, we can guide GNNs toward extrapolation. This allows us to train\non small graphs and apply the same model to much larger graphs during\ninference. Moreover, we empirically validate the extrapolation capabilities of\nour GNNs on algorithmic datasets.", + "authors": "Florian Gr\u00f6tschla, Jo\u00ebl Mathys, Roger Wattenhofer", + "published": "2022-12-09", + "updated": "2022-12-09", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/1905.06393v1", + "title": "IPC: A Benchmark Data Set for Learning with Graph-Structured Data", + "abstract": "Benchmark data sets are an indispensable ingredient of the evaluation of\ngraph-based machine learning methods. We release a new data set, compiled from\nInternational Planning Competitions (IPC), for benchmarking graph\nclassification, regression, and related tasks. Apart from the graph\nconstruction (based on AI planning problems) that is interesting in its own\nright, the data set possesses distinctly different characteristics from\npopularly used benchmarks. The data set, named IPC, consists of two\nself-contained versions, grounded and lifted, both including graphs of large\nand skewedly distributed sizes, posing substantial challenges for the\ncomputation of graph models such as graph kernels and graph neural networks.\nThe graphs in this data set are directed and the lifted version is acyclic,\noffering the opportunity of benchmarking specialized models for directed\n(acyclic) structures. Moreover, the graph generator and the labeling are\ncomputer programmed; thus, the data set may be extended easily if a larger\nscale is desired. The data set is accessible from\n\\url{https://github.com/IBM/IPC-graph-data}.", + "authors": "Patrick Ferber, Tengfei Ma, Siyu Huo, Jie Chen, Michael Katz", + "published": "2019-05-15", + "updated": "2019-05-15", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI", + "stat.ML" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/2306.11264v1", + "title": "GraphGLOW: Universal and Generalizable Structure Learning for Graph Neural Networks", + "abstract": "Graph structure learning is a well-established problem that aims at\noptimizing graph structures adaptive to specific graph datasets to help message\npassing neural networks (i.e., GNNs) to yield effective and robust node\nembeddings. However, the common limitation of existing models lies in the\nunderlying \\textit{closed-world assumption}: the testing graph is the same as\nthe training graph. This premise requires independently training the structure\nlearning model from scratch for each graph dataset, which leads to prohibitive\ncomputation costs and potential risks for serious over-fitting. To mitigate\nthese issues, this paper explores a new direction that moves forward to learn a\nuniversal structure learning model that can generalize across graph datasets in\nan open world. We first introduce the mathematical definition of this novel\nproblem setting, and describe the model formulation from a probabilistic\ndata-generative aspect. Then we devise a general framework that coordinates a\nsingle graph-shared structure learner and multiple graph-specific GNNs to\ncapture the generalizable patterns of optimal message-passing topology across\ndatasets. 
The well-trained structure learner can directly produce adaptive\nstructures for unseen target graphs without any fine-tuning. Across diverse\ndatasets and various challenging cross-graph generalization protocols, our\nexperiments show that even without training on target graphs, the proposed\nmodel i) significantly outperforms expressive GNNs trained on input\n(non-optimized) topology, and ii) surprisingly performs on par with\nstate-of-the-art models that independently optimize adaptive structures for\nspecific target graphs, with notably orders-of-magnitude acceleration for\ntraining on the target graph.", + "authors": "Wentao Zhao, Qitian Wu, Chenxiao Yang, Junchi Yan", + "published": "2023-06-20", + "updated": "2023-06-20", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/2106.03236v1", + "title": "Graph2Graph Learning with Conditional Autoregressive Models", + "abstract": "We present a graph neural network model for solving graph-to-graph learning\nproblems. Most deep learning on graphs considers ``simple'' problems such as\ngraph classification or regressing real-valued graph properties. For such\ntasks, the main requirement for intermediate representations of the data is to\nmaintain the structure needed for output, i.e., keeping classes separated or\nmaintaining the order indicated by the regressor. However, a number of learning\ntasks, such as regressing graph-valued output, generative models, or graph\nautoencoders, aim to predict a graph-structured output. In order to\nsuccessfully do this, the learned representations need to preserve far more\nstructure. We present a conditional auto-regressive model for graph-to-graph\nlearning and illustrate its representational capabilities via experiments on\nchallenging subgraph predictions from graph algorithmics; as a graph\nautoencoder for reconstruction and visualization; and on pretraining\nrepresentations that allow graph classification with limited labeled data.", + "authors": "Guan Wang, Francois Bernard Lauze, Aasa Feragen", + "published": "2021-06-06", + "updated": "2021-06-06", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/2305.15843v1", + "title": "TabGSL: Graph Structure Learning for Tabular Data Prediction", + "abstract": "This work presents a novel approach to tabular data prediction leveraging\ngraph structure learning and graph neural networks. Despite the prevalence of\ntabular data in real-world applications, traditional deep learning methods\noften overlook the potentially valuable associations between data instances.\nSuch associations can offer beneficial insights for classification tasks, as\ninstances may exhibit similar patterns of correlations among features and\ntarget labels. This information can be exploited by graph neural networks,\nnecessitating robust graph structures. However, existing studies primarily\nfocus on improving graph structure from noisy data, largely neglecting the\npossibility of deriving graph structures from tabular data. We present a novel\nsolution, Tabular Graph Structure Learning (TabGSL), to enhance tabular data\nprediction by simultaneously learning instance correlation and feature\ninteraction within a unified framework. This is achieved through a proposed\ngraph contrastive learning module, along with transformer-based feature\nextractor and graph neural network. 
Comprehensive experiments conducted on 30\nbenchmark tabular datasets demonstrate that TabGSL markedly outperforms both\ntree-based models and recent deep learning-based tabular models. Visualizations\nof the learned instance embeddings further substantiate the effectiveness of\nTabGSL.", + "authors": "Jay Chiehen Liao, Cheng-Te Li", + "published": "2023-05-25", + "updated": "2023-05-25", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.SI" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/1904.09792v1", + "title": "A Unified Framework for Structured Graph Learning via Spectral Constraints", + "abstract": "Graph learning from data represents a canonical problem that has received\nsubstantial attention in the literature. However, insufficient work has been\ndone in incorporating prior structural knowledge onto the learning of\nunderlying graphical models from data. Learning a graph with a specific\nstructure is essential for interpretability and identification of the\nrelationships among data. Useful structured graphs include the multi-component\ngraph, bipartite graph, connected graph, sparse graph, and regular graph. In\ngeneral, structured graph learning is an NP-hard combinatorial problem,\ntherefore, designing a general tractable optimization method is extremely\nchallenging. In this paper, we introduce a unified graph learning framework\nlying at the integration of Gaussian graphical models and spectral graph\ntheory. To impose a particular structure on a graph, we first show how to\nformulate the combinatorial constraints as an analytical property of the graph\nmatrix. Then we develop an optimization framework that leverages graph learning\nwith specific structures via spectral constraints on graph matrices. The\nproposed algorithms are provably convergent, computationally efficient, and\npractically amenable for numerous graph-based tasks. Extensive numerical\nexperiments with both synthetic and real data sets illustrate the effectiveness\nof the proposed algorithms. The code for all the simulations is made available\nas an open source repository.", + "authors": "Sandeep Kumar, Jiaxi Ying, Jos\u00e9 Vin\u00edcius de M. Cardoso, Daniel Palomar", + "published": "2019-04-22", + "updated": "2019-04-22", + "primary_cat": "stat.ML", + "cats": [ + "stat.ML", + "cs.LG", + "cs.SI", + "math.OC" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/2209.00793v2", + "title": "Structure-Preserving Graph Representation Learning", + "abstract": "Though graph representation learning (GRL) has made significant progress, it\nis still a challenge to extract and embed the rich topological structure and\nfeature information in an adequate way. Most existing methods focus on local\nstructure and fail to fully incorporate the global topological structure. To\nthis end, we propose a novel Structure-Preserving Graph Representation Learning\n(SPGRL) method, to fully capture the structure information of graphs.\nSpecifically, to reduce the uncertainty and misinformation of the original\ngraph, we construct a feature graph as a complementary view via k-Nearest\nNeighbor method. The feature graph can be used to contrast at node-level to\ncapture the local relation. 
Besides, we retain the global topological structure\ninformation by maximizing the mutual information (MI) of the whole graph and\nfeature embeddings, which is theoretically reduced to exchanging the feature\nembeddings of the feature and the original graphs to reconstruct themselves.\nExtensive experiments show that our method has quite superior performance on\nthe semi-supervised node classification task and excellent robustness under noise\nperturbation on graph structure or node features.", + "authors": "Ruiyi Fang, Liangjian Wen, Zhao Kang, Jianzhuang Liu", + "published": "2022-09-02", + "updated": "2022-12-08", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI", + "cs.CV", + "cs.SI" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/2206.08561v1", + "title": "Boosting Graph Structure Learning with Dummy Nodes", + "abstract": "With the development of graph kernels and graph representation learning, many\nsuperior methods have been proposed to handle scalability and oversmoothing\nissues on graph structure learning. However, most of those strategies are\ndesigned based on practical experience rather than theoretical analysis. In\nthis paper, we use a particular dummy node connecting to all existing vertices\nwithout affecting original vertex and edge properties. We further prove that\nsuch a dummy node can help build an efficient monomorphic edge-to-vertex\ntransform and an epimorphic inverse to recover the original graph back. It also\nindicates that adding dummy nodes can preserve local and global structures for\nbetter graph representation learning. We extend graph kernels and graph neural\nnetworks with dummy nodes and conduct experiments on graph classification and\nsubgraph isomorphism matching tasks. Empirical results demonstrate that taking\ngraphs with dummy nodes as input significantly boosts graph structure learning,\nand using their edge-to-vertex graphs can also achieve similar results. We also\ndiscuss the gain of expressive power from the dummy in neural networks.", + "authors": "Xin Liu, Jiayang Cheng, Yangqiu Song, Xin Jiang", + "published": "2022-06-17", + "updated": "2022-06-17", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/2007.16002v1", + "title": "Graph Convolutional Networks using Heat Kernel for Semi-supervised Learning", + "abstract": "Graph convolutional networks gain remarkable success in semi-supervised\nlearning on graph-structured data. The key to graph-based semi-supervised\nlearning is capturing the smoothness of labels or features over nodes exerted\nby graph structure. Previous methods, spectral methods and spatial methods,\nare devoted to defining graph convolution as a weighted average over neighboring\nnodes, and then learn graph convolution kernels to leverage the smoothness to\nimprove the performance of graph-based semi-supervised learning. One open\nchallenge is how to determine an appropriate neighborhood that reflects relevant\ninformation of smoothness manifested in graph structure. In this paper, we\npropose GraphHeat, leveraging heat kernel to enhance low-frequency filters and\nenforce smoothness in the signal variation on the graph. GraphHeat leverages\nthe local structure of the target node under heat diffusion to determine its\nneighboring nodes flexibly, without the constraint of order suffered by\nprevious methods. 
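As a concrete illustration of the heat-kernel filtering idea sketched in this abstract, the following is a minimal NumPy example (my own sketch, not the GraphHeat code; the toy graph, the diffusion time s, the Taylor truncation order, and the 0.05 threshold are all illustrative assumptions). It approximates the heat kernel e^{-sL} and derives a flexible, diffusion-based neighborhood by thresholding it.

```python
import numpy as np

# Toy undirected graph: adjacency matrix (an assumed example, not from the paper).
A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)

# Symmetrically normalized Laplacian L = I - D^{-1/2} A D^{-1/2}.
d = A.sum(axis=1)
D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
L = np.eye(len(A)) - D_inv_sqrt @ A @ D_inv_sqrt

def heat_kernel(L, s=1.0, order=10):
    """Heat kernel e^{-sL} approximated by a truncated Taylor series,
    in the spirit of a low-frequency (smoothing) spectral filter."""
    K, term = np.eye(len(L)), np.eye(len(L))
    for k in range(1, order + 1):
        term = term @ (-s * L) / k  # accumulates (-sL)^k / k!
        K = K + term
    return K

K = heat_kernel(L, s=1.0)
# Diffusion-based neighborhood of node 0: nodes whose kernel weight
# exceeds a small threshold (the threshold value is an illustrative choice).
print(np.where(K[0] > 0.05)[0])
```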
GraphHeat achieves state-of-the-art results in the task of\ngraph-based semi-supervised classification across three benchmark datasets:\nCora, Citeseer and Pubmed.", + "authors": "Bingbing Xu, Huawei Shen, Qi Cao, Keting Cen, Xueqi Cheng", + "published": "2020-07-27", + "updated": "2020-07-27", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.SI" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/2306.11307v3", + "title": "Transforming Graphs for Enhanced Attribute Clustering: An Innovative Graph Transformer-Based Method", + "abstract": "Graph Representation Learning (GRL) is an influential methodology, enabling a\nmore profound understanding of graph-structured data and aiding graph\nclustering, a critical task across various domains. The recent incursion of\nattention mechanisms, originally an artifact of Natural Language Processing\n(NLP), into the realm of graph learning has spearheaded a notable shift in\nresearch trends. Consequently, Graph Attention Networks (GATs) and Graph\nAttention Auto-Encoders have emerged as preferred tools for graph clustering\ntasks. Yet, these methods primarily employ a local attention mechanism, thereby\ncurbing their capacity to apprehend the intricate global dependencies between\nnodes within graphs. Addressing these impediments, this study introduces an\ninnovative method known as the Graph Transformer Auto-Encoder for Graph\nClustering (GTAGC). By melding the Graph Auto-Encoder with the Graph\nTransformer, GTAGC is adept at capturing global dependencies between nodes.\nThis integration amplifies the graph representation and surmounts the\nconstraints posed by the local attention mechanism. The architecture of GTAGC\nencompasses graph embedding, integration of the Graph Transformer within the\nautoencoder structure, and a clustering component. It strategically alternates\nbetween graph embedding and clustering, thereby tailoring the Graph Transformer\nfor clustering tasks, whilst preserving the graph's global structural\ninformation. Through extensive experimentation on diverse benchmark datasets,\nGTAGC has exhibited superior performance against existing state-of-the-art\ngraph clustering methodologies.", + "authors": "Shuo Han, Jiacheng Liu, Jiayun Wu, Yinan Chen, Li Tao", + "published": "2023-06-20", + "updated": "2023-08-12", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI" + ], + "category": "Graph AND Structure AND Learning" + } + ], + [ + { + "url": "http://arxiv.org/abs/2404.04072v1", + "title": "Label Propagation for Zero-shot Classification with Vision-Language Models", + "abstract": "Vision-Language Models (VLMs) have demonstrated impressive performance on\nzero-shot classification, i.e. classification when provided merely with a list\nof class names. In this paper, we tackle the case of zero-shot classification\nin the presence of unlabeled data. We leverage the graph structure of the\nunlabeled data and introduce ZLaP, a method based on label propagation (LP)\nthat utilizes geodesic distances for classification. We tailor LP to graphs\ncontaining both text and image features and further propose an efficient method\nfor performing inductive inference based on a dual solution and a\nsparsification step. We perform extensive experiments to evaluate the\neffectiveness of our method on 14 common datasets and show that ZLaP\noutperforms the latest related works. 
Code:\nhttps://github.com/vladan-stojnic/ZLaP", + "authors": "Vladan Stojni\u0107, Yannis Kalantidis, Giorgos Tolias", + "published": "2024-04-05", + "updated": "2024-04-05", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV", + "cs.LG" + ], + "label": "Original Paper", + "paper_cat": "Graph AND Structure AND Learning", + "gt": "In this section, we discuss related works that improve the already impressive zero-shot classification performance of vision-language models [4, 16, 31] even further. This is achieved by devising better distance metrics, utilizing external knowledge to learn more expressive textual prompts, or by leveraging synthetic and unlabeled data. Improved distance metrics. Zero-shot classification can be improved by devising a better distance metric between image and text representations [8, 46]. CALIP [8] uses a parameter-free attention mechanism and a local patch representation, i.e. instead of global representations, to improve the estimation of class-to-image similarity. CLIP-DN [46] improves the test-time similarity estimation by alignment with the similarity used during contrastive pre-training of VLMs. To achieve this, the method assumes access to unlabeled data from the target distribution. TPT [35] optimizes textual prompts via a consistency objective across test image augmentations. Our method can be considered a part of this line of work, as label propagation is a similarity measure in a geodesic space instead of the Euclidean space. Improved textual prompts using language models. CLIP [31] uses hand-crafted prompts that are specialized for each domain. Instead of using hand-crafted prompts, generating them with large language models (LLMs) is shown to be promising [24, 29]. VisDesc [24] and CuPL [29] query LLMs to generate diverse descriptions of all classes, while WaffleCLIP [32] operates on top of VisDesc to systematically analyze which parts of the generated prompts are the most important. Instead of generating class descriptions, CHiLS [26] targets diversifying the set of classes by generating sub-classes per class, either through an existing class hierarchy or by querying an LLM. It then performs zero-shot classification using sub-classes and linking them to the parent class. We show that methods improving the textual prompts are complementary to our approach. Synthetic data. Recent methods [9, 38, 43] demonstrate that the use of synthetic data is beneficial for zero-shot classification. CLIP+SYN [9] uses a stable-diffusion-based model to generate synthetic images using class names and uses them to train a linear classifier, initialized by the VLM class representations. SuS-X [38] considers a similar approach, but relies on a non-parametric classifier. CaFO [43] follows the same path, but additionally includes text prompts generated by LLMs. External datasets. Besides the use of synthetic data, SuS-X proposes a variant that operates on an extensive unlabeled image dataset (LAION-5B [34]). This dataset encompasses a distribution that is a superset of the target one. The method generates pseudo-labels using the zero-shot approach, which are then incorporated within the non-parametric classifier. NeuralPriming [40] additionally assumes that images have captions, which are used to improve the pseudo-labeling. Unlabeled images from the target distribution. Another line of research [11, 12, 27, 30] proposes operating on unlabeled datasets from the target distribution. 
The main ingredient of all these methods is the prediction of pseudo-labels for unlabeled examples, which are later used for further processing. UPL [12] optimizes learnable text prompts based on the pseudo-labels. SVL-Adapter [27] first trains a self-supervised model on unlabeled data, and then an adapter module to align its outputs to the pseudo-labels. ReCLIP [11] performs transductive label propagation to obtain the pseudo-labels and uses them to fine-tune the VLM visual and textual encoders. In contrast to that, we do not fine-tune the model, to which we may not even have access, and we use label propagation efficiently for inductive inference too. InMaP [30] is a concurrent work that uses pseudo-labels to update the class representations such that they are now closer to image representations. We show in the experiments that this approach is complementary to ours. In contrast to all those methods, we do not explicitly require pseudo-label prediction, but rather capture interactions between all unlabeled examples through a proximity graph and label propagation.", "pre_questions": [], "main_content": "Introduction Vision-Language Models (VLMs) have demonstrated impressive performance on a variety of computer vision tasks. They are usually trained on large datasets of image-text pairs and contain visual and textual encoders that map to a common feature space. Visual encoders from such models have been shown to produce strong visual representations for perception tasks [31]. Given labeled data from a downstream dataset, one can fine-tune the model or learn classifiers and achieve really high classification accuracy. Besides using the visual encoder in isolation, the joint text and visual encoder feature space of VLMs enables us to define text-based \u201cclassifiers\u201d, e.g. using the class names as textual prompts. This means that we only need a list of class names to perform zero-shot classification for a target dataset, i.e. without access to any labeled images. Although utilizing priors or devising better textual prompts can improve zero-shot performance [8, 24, 29, 46], here, we are interested in the case where we further have access to unlabeled data. Our goal is to find the best way of utilizing such data for zero-shot classification. Figure 1. Zero-shot classification performance over 14 datasets using the proposed ZLaP classifier over CLIP [31], as well as over the (concurrent) InMaP [30] approach. Our method offers performance gains for both transductive (left) and inductive (right) inference. Average accuracy over 14 common datasets is reported. In this paper, we leverage the inherent structure of the unlabeled data represented by a proximity graph and apply label propagation (LP) between the text-based classifiers and unlabeled images to derive geodesic distances we then use for classification. We tailor LP to VLMs and graphs containing both text and image features, and show that without proper handling of the bimodality, vanilla application of LP fails dramatically. We introduce ZLaP, a novel classification method based on label propagation that can perform both transductive and inductive inference. We perform the former with the standard (primal) solution of LP for classification and devise a more efficient dual solution for the latter. Our method is not only highly effective but also efficient, making LP a more attractive inductive classifier in terms of complexity. 
We implement our methods using publicly available VLMs as feature encoders, primarily the ResNet and ViT CLIP [31] models, and perform extensive experiments to evaluate the effectiveness of our method on 14 common datasets. We show that we are able to achieve top performance on two zero-shot inference setups, i.e. inductive and transductive inference. Figure 1 summarizes our gains over 14 datasets on both setups when applying our LP-powered classifiers on top of CLIP, as well as after incorporating the class proxies from the recent InMaP [30] zero-shot approach. It is worth highlighting that ZLaP is a non-parametric method, i.e. it does not involve a learning step. In fact, our approach does not even require access to the VLM model weights and can therefore be used to improve the zero-shot performance even of a black-box model, e.g. one provided only via an API. In summary, our contributions are as follows. \u2022 We tailor label propagation to VLMs and zero-shot classification over bi-modal graphs, proposing per-modality neighbor search and balancing of contributions. \u2022 We propose an efficient way for performing inductive inference with label propagation via a dual solution and through sparsification. This not only improves the test-time efficiency of our method but also its performance. \u2022 We complement our method with the class proxies presented in concurrent work [30] and achieve state-of-the-art results for zero-shot classification on 14 common datasets. We first define the task of zero-shot classification with access to unlabeled examples, present label propagation with our contributions, and then present the proposed approach for zero-shot classification using unlabeled examples. 3.1. Problem formulation Vision-language models consist of an image encoder f : I \u2192 Rd and a text encoder g : T \u2192 Rd, where I and T represent the space of images and text, respectively. We consider the outputs of these encoders to be \u21132-normalized. Let C denote a set of known classes with associated class names {l1, ..., lC} and P = {p1, ..., pP} a set of prompt templates. Each prompt is combined with a class name to produce a textual description of the class, i.e. pi(lc) for the i-th template used for class c. Class representations wc = (1/P) \u2211_{i=1}^{P} g(pi(lc)) are obtained using the VLM. Then, for a test image u, we extract representation u = f(u), and perform zero-shot classification by argmax_c u\u22a4wc. We further assume access to a set U of M unlabeled images. Let {u1, . . . , uM} denote the representations of the unlabeled images. In this work, we assume no direct access to the VLM model weights, i.e. VLM training is neither possible nor desired. This allows us to consider the underlying VLM as a black box, possibly only available through an API that generates features. We consider two inference setups: Inductive inference: We consider an inductive setup, where we need to construct a classifier that can operate on new examples. This classifier should take advantage of the unlabeled examples in U. Transductive inference: We consider U to be the test set, i.e. all test examples are jointly provided in advance. Prediction for test example ui may therefore depend on the representations and predictions of all other test examples. In this transductive setup the models are not required to provide predictions for any example that is not in U. 
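As a concrete illustration of the zero-shot baseline defined above, the following is a minimal NumPy sketch (my own illustration, not the authors' code): random unit vectors stand in for the encoder outputs g(pi(lc)) and f(u), and all array shapes are assumptions.

```python
import numpy as np

def l2n(x, axis=-1):
    # l2-normalize along the last axis, matching the unit-norm encoder outputs
    return x / np.linalg.norm(x, axis=axis, keepdims=True)

rng = np.random.default_rng(0)
C, P, d, M = 10, 7, 512, 100               # classes, prompt templates, dim, images

# Stand-ins for g(p_i(l_c)) and f(u): random unit vectors for illustration.
text_emb = l2n(rng.normal(size=(C, P, d)))  # per-class, per-template text features
image_emb = l2n(rng.normal(size=(M, d)))    # unlabeled image features

# Class representations w_c = (1/P) sum_i g(p_i(l_c)).
W = text_emb.mean(axis=1)                   # (C, d)

# Zero-shot prediction: argmax_c u^T w_c for every image.
pred = (image_emb @ W.T).argmax(axis=1)     # (M,)
print(pred[:10])
```

Averaging the per-template text embeddings before the argmax mirrors the prompt-ensembling step of the formulation above.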
3.2. Label propagation (LP) Let {x1, ..., xN}, with xi \u2208 Rd, be a set of features for N examples. Each feature represents a graph node. We construct an adjacency matrix S \u2208 R^{N\u00d7N} with zero diagonal, and sij equal to x\u22a4i xj if xj is in the k-nearest neighbors of xi (denoted by xj \u2208 kNN(xi)), and 0 otherwise. We obtain a symmetric adjacency matrix by \u00afS = S + S\u22a4, and its symmetrically normalized version by \u02c6S = D^{-1/2} \u00afS D^{-1/2}, where D = diag(\u00afS 1N) is the degree matrix, and 1N is the all-ones N-dimensional vector. We assume the first C examples, and the corresponding nodes, to be labeled among C classes; each class is assigned to a single node (the theory in this section also holds for the case of more labeled examples per class; we consider this specific case for simplicity of the presentation and because it corresponds to the task of zero-shot classification with unlabeled examples). Transductive inference. Label propagation [44] is originally proposed for the transductive inference setup; we need to predict labels for the unlabeled nodes of the graph. Given the normalized adjacency matrix \u02c6S, label propagation is an iterative process given by \\hat{\\vy}^{(t+1)}_{c} = \\alpha \\hat{S} \\hat{\\vy}^{(t)}_{c} + (1 - \\alpha) \\vy_c \\quad \\forall c \\in \\left\\{1,...,C\\right\\} \\label{equ:iterativelp} (1) until convergence, where \u03b1 \u2208 (0, 1) is a propagation hyper-parameter, yc = ec \u2208 {0, 1}^N is a one-hot vector with the non-zero element at index c, and t is the current iteration. Prediction of the label for an unlabeled node j \u2208 {C+1, ..., N} is then given by \\hat{y}_j = \\argmax_{c} \\hat{\\vy}_c(j) , \\label{equ:predictionlp} (2) where \u02c6yc(j) = e\u22a4j \u02c6yc is the j-th element of the vector \u02c6yc. One can show [44] that this iterative solution is equivalent to solving C linear systems L \\hat{\\vy}_c = \\vy_c \\quad \\forall c \\in \\left\\{1,...,C\\right\\} , \\label{equ:cglp} (3) where L = I \u2212 \u03b1\u02c6S is the graph Laplacian. These linear systems have a closed-form solution \\hat{\\vy}_c = L^{-1} \\vy_c = \\Linv \\vy_c . \\label{equ:closedformlp} (4) However, this closed-form solution is not practical for large datasets as the inverse graph Laplacian Linv is a non-sparse R^{N\u00d7N} matrix. For this reason it is usual [3, 7, 13, 15] to solve (3) using the conjugate-gradient (CG) method, which is known to be faster than running the iterative solution [13]. Using CG is possible because L is positive-definite. Observe that (4) simply picks one of the columns of Linv. Matrix element Linv(j, c) is the confidence of example j belonging to class c. Its values are similarities, after label propagation, between each node pair. It is a type of geodesic similarity that captures the geometry of the feature space as this is indicated by the graph structure. Focusing on a classification task, we are only interested in similarities between an unlabeled example and a class node. Dual solution. Herein, we show that solving C linear systems of the form in (4) to obtain predictions for all unlabeled nodes using (2) is equivalent to solving N \u2212 C linear systems of the form \\hat{\\vz}_{j} = L^{-1} \\ve_j \\quad \\forall j \\in \\left\\{C+1,...,N\\right\\} , \\label{equ:duallp} (5) and obtaining the unlabeled node prediction using \\hat{y}_j = \\argmax_{c} \\hat{\\vz}_j(c) . \\label{equ:predictiondual} (6) This comes from the fact that \\hat{\\vz}_j(c) = \\ve_c^{T} L^{-1} \\ve_j = \\ve_j^{T} L^{-1} \\ve_c = \\hat{\\vy}_c(j) . \\label{equ:dualproof} (7) Although we present the dual solution using the closed-form (4), the same holds with the CG solution of (3). Using the dual solution (5) is not practical for transductive learning as usually the unlabeled nodes are many more than the labeled ones. However, we show that this dual solution is efficiently used for inductive inference. As discussed, we can view Linv as a pairwise similarity matrix. The confidence of example j belonging to class c, due to symmetry of Linv, is equivalently obtained either by Linv(j, c) or Linv(c, j). This constitutes an additional interpretation of the duality in the solution. Inductive inference. Test examples now come individually and are not known during graph construction. A possible way to perform inductive inference is by adding the new node to the graph, which is expensive for a test-time operation as \u02c6S would have to be updated for each new test example. Instead, inspired by [13], which uses LP for retrieval, we construct indicator vector yx \u2208 R^N for test example x such that \\vy_{\\vx}(j) = \\begin{cases} \\vx^{T} \\vx_j, & \\text{if $\\vx_j \\in \\knn(\\vx)$} \\\\ 0, & \\text{otherwise}. \\end{cases} \\label{equ:inductivey} (8) Then, we solve the linear system \\hat{\\vz}_{\\vx} = L^{-1} \\vy_{\\vx}, \\label{equ:inductivez} (9) as in the dual formulation (5) and get a prediction with \u02c6yx = argmax_c \u02c6zx(c). With the usual formulation of label propagation in (4), C linear systems need to be solved to get a prediction for a single test example. The dual formulation allows us to do it by solving only a single linear system. Fast inductive inference with sparsification. We further introduce an additional off-line step, where we solve (4), get \u02c6yc for all c \u2208 {1, ..., C}, and store them in a matrix \u02c6Y = [\u02c6y1; ...; \u02c6yC] \u2208 R^{N\u00d7C}. Then, the solution for a test example is equivalent to a weighted sum of rows of \u02c6Y [1], which is a byproduct of using the indicator vector (8) for representing a test example. Its prediction is given by \u02c6zx = y\u22a4x \u02c6Y , and is equivalent to that obtained via (9). However, storing the whole \u02c6Y can be expensive for very large values of N and C. We propose to sparsify \u02c6Y by keeping only the largest values in each row, column, or over the whole matrix. Note that [14] proposes a low-rank decomposition of the inverse graph Laplacian for the task of retrieval. Our solution is tailored to zero-shot classification, and we choose to obtain and sparsify the first C rows of Linv instead of approximating the whole matrix. Additionally, our solution requires one (sparse) vector-to-matrix multiplication at test-time instead of two. 
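The primal solve of Eq. (3) and the single-system dual/inductive solve of Eq. (9) can be sketched in a few lines of SciPy. The following is my own illustration under assumptions (a tiny random graph, arbitrary neighbor similarities in the test indicator vector), not the authors' implementation.

```python
import numpy as np
from scipy.sparse import identity, csr_matrix
from scipy.sparse.linalg import cg

rng = np.random.default_rng(0)
N, C, alpha = 50, 3, 0.3   # nodes, class nodes, propagation hyper-parameter

# Illustrative sparse non-negative similarities; class nodes come first.
S = rng.random((N, N)) * (rng.random((N, N)) < 0.1)
S_bar = S + S.T                          # symmetric adjacency (Sec. 3.2)
np.fill_diagonal(S_bar, 0.0)
d = S_bar.sum(axis=1) + 1e-12            # degrees (epsilon guards isolated nodes)
S_hat = csr_matrix(S_bar / np.sqrt(np.outer(d, d)))  # D^-1/2 S_bar D^-1/2
L = identity(N) - alpha * S_hat          # graph Laplacian of Eq. (3), SPD

# Primal (transductive): one CG solve per class, Eq. (3).
I = np.eye(N)
Y_hat = np.stack([cg(L, I[c])[0] for c in range(C)], axis=1)  # (N, C)
pred_unlabeled = Y_hat[C:].argmax(axis=1)                     # Eq. (2)

# Dual (inductive): a single CG solve for one test example, Eq. (9).
y_x = np.zeros(N)
y_x[[10, 25]] = 0.8   # assumed similarities to the test example's neighbors
z_x = cg(L, y_x)[0]
print(pred_unlabeled[:5], z_x[:C].argmax())                   # Eq. (6)
```

Because L is positive-definite, conjugate gradients applies directly, and the dual view reduces test-time inference to one solve instead of C.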
3.3. LP for zero-shot VLM classification We are given a set of classes C with extracted VLM representations {w1, ..., wC} and a set of unlabeled images U with extracted VLM representations {u1, ..., uM}. We use them as nodes {w1, ..., wC, u1, ..., uM} of the graph for label propagation. Nodes of class representations (text nodes) are labeled and image nodes are unlabeled. To construct the adjacency matrix S, we need to perform the k-nearest neighbor search between nodes. However, it is known that there exists a large modality gap between image and text representations coming from VLMs [21, 38, 47]. The respective similarity distributions for CLIP are shown in Figure 3a. This modality gap makes standard kNN search between nodes not useful for label propagation; image nodes mostly get connected to image nodes, and text nodes mostly get connected to text nodes. As a consequence, few edges exist between labeled and unlabeled nodes. To alleviate this problem, we perform the kNN search separately for connecting image nodes to image nodes and for connecting image nodes to text nodes. We do not perform the search using text nodes as queries, i.e. text nodes get linked only if they appear in the kNN list of an image. This way we also avoid linking text nodes with each other, which is beneficial as each of them is labeled with a different class. Formally, the values of the adjacency matrix are s_{ij} = \\begin{cases} \\vu_i^{T} \\vu_j, & \\text{if $\\vu_j \\in \\knn_{\\vu}(\\vu_i)$} \\\\ \\vu_i^{T} \\vw_j, & \\text{if $\\vw_j \\in \\knn_{\\vw}(\\vu_i)$} \\\\ 0, & \\text{otherwise}, \\end{cases} \\label{equ:adj_bi} (10) where kNNu and kNNw denote that the search is performed within the image or class features only, respectively. Moreover, during inductive inference for image u, we perform the kNN search in a similar way to construct the indicator vector yu, whose elements are given by \\vy_{\\vu}(i) = \\begin{cases} \\vu^{T} \\vu_j, & \\text{if $\\vu_j \\in \\knn_{\\vu}(\\vu)$} \\\\ \\vu^{T} \\vw_j, & \\text{if $\\vw_j \\in \\knn_{\\vw}(\\vu)$} \\\\ 0, & \\text{otherwise}. \\end{cases} \\label{equ:inductivey_bi} (11) Due to the two types of edges, i.e. image-to-image and image-to-text, we use the power function h(v) = v^{\u03b3} to transform the image-to-text (cross-modal) similarities. This way we effectively balance their contribution in the graph and the indicator vector. To that end, we use h(u\u22a4i wj) and h(u\u22a4wj) in (10) and (11), respectively, instead of u\u22a4i wj and u\u22a4wj. We refer to the proposed method described above as Zero-shot classification with Label Propagation (ZLaP). We further denote the variant of our method after sparsifying the \u02c6Y matrix for inductive inference as ZLaP\u2217. Figure 2. t-SNE visualization for the original CLIP features (left) and our geodesic similarity (right). The former is estimated with the features as input, while the latter with Linv used as a pairwise similarity matrix. \u22c6: class representation, \u2022: image representation. Figure generated for five random classes from the CUB dataset. Figure 3. Similarity distributions among features of the same or different modality, using 7 textual templates [38] (left, (a) using text prompts) or the InMaP proxies (right, (b) using proxies from InMaP [30]) as class representations. 
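To make the bi-modal construction of Eqs. (10)-(11) concrete, here is a minimal NumPy sketch (my own illustration, not the released ZLaP code): the stand-in features, the split of the neighbor budget into k_img/k_txt, and the clipping of negative cross-modal similarities before the power transform are all assumptions.

```python
import numpy as np

def bimodal_adjacency(U, W, k_img=4, k_txt=1, gamma=5.0):
    """Sketch of Eq. (10): kNN search done separately per modality, with
    image-to-text similarities transformed by h(v) = v**gamma."""
    M, C = len(U), len(W)
    S = np.zeros((C + M, C + M))         # text nodes first, then image nodes
    sim_ii = U @ U.T                     # image-to-image similarities
    np.fill_diagonal(sim_ii, -np.inf)    # no self-edges (zero diagonal)
    sim_it = U @ W.T                     # image-to-text similarities
    for i in range(M):
        for j in np.argsort(-sim_ii[i])[:k_img]:   # kNN_u(u_i)
            S[C + i, C + j] = sim_ii[i, j]
        for c in np.argsort(-sim_it[i])[:k_txt]:   # kNN_w(u_i)
            # clip before the power to keep h well-defined for negatives
            S[C + i, c] = max(sim_it[i, c], 0.0) ** gamma
    return S + S.T                       # symmetrize as in Sec. 3.2

rng = np.random.default_rng(0)
l2n = lambda x: x / np.linalg.norm(x, axis=1, keepdims=True)
U = l2n(rng.normal(size=(20, 64)))       # stand-in image features
W = l2n(rng.normal(size=(4, 64)))        # stand-in class (text) features
S = bimodal_adjacency(U, W)
print(int((S[4:, :4] > 0).sum()), "image-to-text edges")
```

Restricting the cross-modal search to image queries, as in the text above, is what guarantees that text nodes never link to each other.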
t-SNE visualization of the bi-modal space. In Figure 2 we visualize the bi-modal feature space for CLIP features using t-SNE [39] in two cases, i.e. the Euclidean case and using geodesic similarities obtained by Linv, i.e. after label propagation. When using Euclidean affinities (left), we see that due to the large differences in the similarity distributions (text-to-text, image-to-image, and text-to-image, as shown in Figure 3a) all class representations (stars) are clustered together far from the image nodes. However, using the geodesic affinities from Linv (right), we see that class representations are more spread out. 4. Experiments In this section, we first present the datasets we use, our experimental setup, and the competing methods. We then present a component analysis for ZLaP and results for transductive and inductive zero-shot classification on 14 datasets. 4.1. Datasets We evaluate the proposed method on 14 diverse image classification datasets: ImageNet ILSVRC2012 [33], Describable Textures Dataset (DTD) [5], EuroSAT [10], FGVC-Aircraft [23], Oxford Flowers 102 [25], Food101 [2], Oxford-IIIT Pet [28], SUN397 [42], Stanford Cars [17], Caltech101 [6], UCF101 [36], CIFAR10 [18], CIFAR100 [18], and CUB-200-2011 [41]. For the first 11 datasets we borrow the train and test splits from CoOp [45]. We use the official training and test splits for CIFAR10, CIFAR100 and CUB-200-2011. 4.2. Experimental setup In the transductive (inductive) inference setup, the unlabeled nodes in the graph are the test (train) images. We always measure classification accuracy over the test images. VLMs and textual prompts. We report results using the publicly available ResNet50 and ViT-B/16 CLIP [31] models. We adopt the 7 templates from SuS-X [38] as class prompts for all results apart from Table 3, where we utilize the LLM-generated prompts from [29]. Compared methods. Our baseline is zero-shot recognition with CLIP [31] using text encoder features as class representations. TPT [35] is based on test-time prompt tuning such that different image augmentations produce consistent predictions. The aforementioned methods do not exploit unlabeled data; their performance is therefore unchanged in both inference setups. CLIP-DN [46] normalizes feature distributions during test-time and assumes access to the mean feature vector of the target distribution. In the transductive (inductive) setup the mean vector is estimated on the test (training) set. InMaP [30] is a concurrent work that extracts updated class representations using pseudo-labels on the unlabeled set. In the transductive (inductive) setup the learning is performed on the test (training) images. Implementation details. We reproduce results for CLIP (https://github.com/OpenAI/CLIP), CLIP-DN (https://github.com/fengyuli-dev/distributionnormalization), and InMaP (https://github.com/idstcv/InMaP) using their public implementations. For TPT [35] we report the numbers provided in [30]. We run InMaP using a single set of hyper-parameters for all 14 datasets, i.e. the default values reported in the official implementation. We also fix the values of k, \u03b3, and \u03b1 for ZLaP across all datasets to 5, 5.0, and 0.3, respectively, for CLIP, and to 10, 3.0, and 0.3, respectively, for InMaP. ZLaP variants. We refer to ZLaP using text class representations as CLIP + ZLaP. Since InMaP is complementary to our work, we further evaluate the performance of ZLaP when {w1, . . . , wC} are the InMaP proxies. We refer to this as InMaP + ZLaP in the results. We refer to ZLaP with a sparse \u02c6Y for inductive inference as ZLaP\u2217. 
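For reference, the fixed hyper-parameter settings just listed can be collected in a small configuration object. The dataclass packaging below is my own choice; the values are the ones stated above.

```python
from dataclasses import dataclass

@dataclass
class ZLaPConfig:
    k: int        # neighbors searched per image node
    gamma: float  # power transform h(v) = v**gamma for image-to-text edges
    alpha: float  # propagation hyper-parameter in L = I - alpha * S_hat

CLIP_CFG = ZLaPConfig(k=5, gamma=5.0, alpha=0.3)    # used with CLIP text proxies
INMAP_CFG = ZLaPConfig(k=10, gamma=3.0, alpha=0.3)  # used with InMaP proxies
```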
4.3. Components of ZLaP Bi-modal graph adjustments. In Table 1 we show the importance of two design choices to adapt LP to bi-modal graphs, i.e. separating the nearest neighbor search across modalities using (10) and (11), and transforming cross-modal similarities using a power function h(\u00b7). We see that separate search is crucial; without it, LP is not effective at all. The power function gives an extra boost in both setups, especially in the case of transductive inference. Table 1. Adjusting LP to bi-modal graphs: impact of using separate kNN search for constructing the graph (Eq. (10)) or the indicator vector (Eq. (11)), as well as the power function h(\u00b7) for balancing the contributions of the two types of edges in the graph. Accuracy on ImageNet / DTD / CUB. (a) Transductive inference: (separate search \u2717, h(\u00b7) \u2717) 0.1 / 2.1 / 0.5; (\u2717, \u2713) 0.1 / 2.1 / 0.5; (\u2713, \u2717) 50.2 / 32.3 / 41.6; (\u2713, \u2713) 61.8 / 41.9 / 52.1. (b) Inductive inference: (\u2717, \u2717) 0.1 / 2.1 / 0.5; (\u2717, \u2713) 0.1 / 2.1 / 0.5; (\u2713, \u2717) 60.8 / 42.4 / 49.6; (\u2713, \u2713) 62.2 / 42.8 / 49.7. In Table 2 we report the percentage of images that are connected to their ground-truth class nodes within a path of length n, with and without our adjustments. We see that for any such paths to exist in the case without adjustments, k needs to be extremely high. With adjustments, k = 5 is enough for 71.4% of the nodes to be connected to the correct class nodes. Table 2. Impact of the separate kNN search on the shortest paths between image nodes and the text node of their class: we report the percentage of images whose shortest path to the text node of their ground-truth class has length equal to or less than n, for the vanilla joint search versus our separate kNN search using Eq. (10); analysis on DTD for the transductive setup. Joint kNN search (n=1 / n=2 / n=3) vs. separate kNN search (n=1 / n=2 / n=3): k=5: 0.0 / 0.0 / 0.0 vs. 71.4 / 85.1 / 100.0; k=10: 0.0 / 0.0 / 0.0 vs. 82.9 / 95.4 / 100.0; k=100: 40.1 / 100.0 / 100.0 vs. 100.0 / 100.0 / 100.0. Sparsifying matrix \u02c6Y for inductive inference. We explore three ways of approximating \u02c6Y by sparsification, i.e. either keeping only the largest \u03be elements per row, the largest \u03be elements per column, or the largest \u03be elements of the whole matrix. In all cases, the rest of the elements are set to zero. In Figure 4 we show the influence that these three variants have on performance. Not only do these variants speed up inference, but we also see improvements in performance when the sparsification percentage is high, i.e. at a low percentage of non-zero elements. We attribute this to the fact that less confident predictions in \u02c6Y, many of them erroneous, are now set to zero. Although the best variant to choose seems to vary per dataset, we found that keeping the top element per row performs well across different datasets. We therefore use the \u03be = 1 top element per row for our experiments. This amounts to different percentages of sparsity per dataset; we are keeping approximately 2.3% of the elements on average across all datasets. Regarding the inference speed-up, the primal solution takes \u223c2.6 sec per image, the dual takes \u223c4.4 ms, while the sparsified approach takes \u223c0.6 ms, measured on the ImageNet dataset. Using the class proxies from InMaP. We observe that many class-to-image similarities (e.g. u\u22a4i wj in (10)) become negative on some datasets when using ZLaP with InMaP proxies (see Figure 3b). We therefore perform min-max normalization in the range [0, 1] after constructing the adjacency matrix S or the indicator vector, for the transductive and inductive inference setups respectively. 
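The row-wise top-\u03be sparsification and the resulting fast inductive prediction can be sketched as follows (my own illustration; the matrix sizes and the toy indicator vector are assumptions, not values from the paper):

```python
import numpy as np

def sparsify_rows(Y_hat, xi=1):
    """Keep only the largest xi elements per row of Y_hat (the variant
    reported to work well above); everything else is set to zero."""
    Y_sp = np.zeros_like(Y_hat)
    idx = np.argsort(-Y_hat, axis=1)[:, :xi]
    np.put_along_axis(Y_sp, idx, np.take_along_axis(Y_hat, idx, axis=1), axis=1)
    return Y_sp

rng = np.random.default_rng(0)
Y_hat = rng.random((1000, 10))      # illustrative LP output, shape (N, C)
Y_sp = sparsify_rows(Y_hat, xi=1)

# Fast inductive prediction as a weighted sum of rows: z_x = y_x^T Y_hat.
y_x = np.zeros(1000)
y_x[[3, 7, 42]] = 0.9               # sparse indicator vector of the test example
print((y_x @ Y_sp).argmax())
```

Since both y_x and the sparsified matrix are sparse, this reduces test-time inference to a single cheap vector-to-matrix product.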
4.4. Results Transductive inference. We present results for transductive zero-shot classification in Figure 5. ZLaP improves the zero-shot performance of CLIP significantly on all datasets. It also outperforms the recent TPT and CLIP-DN approaches in the vast majority of cases, with large gains in average accuracy. Figure 4. Sparsifying matrix \u02c6Y for inductive CLIP+ZLaP: effect of maintaining only the top elements per row/column/matrix (accuracy vs. percentage of non-zero elements, shown for ImageNet, CUB, and DTD). Figure 5. Zero-shot classification accuracy averaged over 14 datasets for the transductive (top) and inductive (bottom) setups; results per dataset are reported in the supplementary material. (a) Transductive ResNet50: CLIP-DN 57.3, CLIP 56.6, CLIP+ZLaP 60.0, InMaP 60.4, InMaP+ZLaP 61.3. (b) Transductive ViT-B/16: CLIP-DN 67.3, CLIP 66.8, CLIP+ZLaP 68.9, InMaP 70.7, InMaP+ZLaP 71.7. (c) Inductive ResNet50: CLIP-DN 57.3, CLIP 56.6, CLIP+ZLaP 58.7, CLIP+ZLaP* 59.5, InMaP 60.4, InMaP+ZLaP 61.0, InMaP+ZLaP* 60.9. (d) Inductive ViT-B/16: CLIP-DN 67.3, CLIP 66.8, CLIP+ZLaP 69.1, CLIP+ZLaP* 69.4, InMaP 70.7, InMaP+ZLaP 71.7, InMaP+ZLaP* 71.6. Compared to InMaP, ZLaP offers lower accuracy on average. However, by incorporating InMaP\u2019s class representations into our graph, we can improve our results even further and outperform all other methods, for an improvement of approximately +5% over CLIP with both backbones. Inductive inference. We report results for the inductive inference setup in Figure 5. ZLaP achieves noticeable improvements over the CLIP baseline in this setup as well. Gains are more prominent for the case of ZLaP using the InMaP proxies, where the gains over CLIP are +4.4% and +4.9% for the two backbones. We also observe that, although InMaP slightly outperforms ZLaP when used over CLIP, the combination of the two achieves state-of-the-art performance in this case as well. We further see that ZLaP\u2217 retains the state-of-the-art performance of our method, while sparsifying \u02c6Y offers a significant speed-up at inference time. Table 3. Zero-shot classification using prompts generated by LLMs [29]: we report average accuracy on 12 datasets using prompts from CuPL [29] together with our 7 standard prompts (Transductive / Inductive); results per dataset are reported in the supplementary material. Results with RN50: CLIP 63.0 / 63.0, CLIP+ZLaP 64.6 / 64.2, InMaP 64.8 / 64.6, InMaP+ZLaP 65.8 / 65.0. Results with ViT-B/16: CLIP 71.9 / 71.9, CLIP+ZLaP 72.6 / 73.3, InMaP 73.9 / 74.0, InMaP+ZLaP 74.8 / 74.2. Leveraging LLM-generated prompts. In Table 3 we report average zero-shot classification accuracy for ZLaP using the prompts recently proposed in CuPL [29]. These are prompts generated by LLMs that are available on the CuPL Github page (https://github.com/sarahpratt/CuPL) for 12 of the datasets we use (all datasets besides CUB and EuroSAT). ZLaP improves zero-shot performance in this case as well, for both the transductive and inductive setups. This verifies that our method is complementary to improved prompt engineering. Multi-label classification. We apply ZLaP for multi-label classification on the MS-COCO [22] dataset. ZLaP improves the zero-shot performance of CLIP by +6.0% mAP (56.8% vs. 50.8%) for inductive inference without any modification of the approach or its hyper-parameters. Web-crawled unlabeled images. All previous experiments use unlabeled images that come from the target distribution, i.e. they are known to depict one of the classes of interest but their labels are discarded. 
To see the impact of ZLaP in a more realistic setup using web-crawled images, we rely on LAION-400M [34], composed of image-caption pairs. We construct the set of unlabeled images with 10,000 images per class, chosen either randomly or based on the proximity of their image or text features to the class representation. Random selection fails, but the other two options provide some improvement compared to CLIP, with the caption-based neighbors being a bit better. The complete set of results is presented in the supplementary material.

Table 4. Accuracy on ImageNet using different VLMs.
                   Transductive   Inductive
BLIP [20]          54.6           54.6
+ ZLaP             59.6           57.9
ALBEF [19]         36.0           36.0
+ ZLaP             41.2           46.8
EVA-CLIP-8B [37]   83.6           83.6
+ ZLaP             84.6           84.5
EVA-CLIP-18B [37]  83.9           83.9
+ ZLaP             84.8           84.7

Different VLMs. We use CLIP as the VLM of choice throughout our experiments. In Table 4, we present results when ZLaP is applied on top of four recent VLMs, namely BLIP [20], ALBEF [19], and two versions of EVA-CLIP [37]. We use the implementations of BLIP and ALBEF that are available in the LAVIS library (https://github.com/salesforce/LAVIS), while for EVA-CLIP we use the implementation from the official GitHub repository (https://github.com/baaivision/EVA/tree/master/EVA-CLIP-18B). ZLaP improves the results of all four VLMs in both the transductive and inductive setups.

5. Conclusions

Label propagation is an intuitive way of encoding the global structure of unlabeled data into geodesic distances over a locally Euclidean space. In this paper, we show that this method can be successfully tailored to both transductive and inductive zero-shot classification with vision-language models, and achieves state-of-the-art performance in both setups. To that end, we show that it is highly important to take proper care of the peculiarities of the bi-modal nature of the task during graph construction. We further carefully design an efficient variant of label propagation for the inductive inference case, which may enable label propagation to be applied to other tasks beyond zero-shot classification. Vision-language models trained on billion-scale datasets are redefining computer vision research. The proposed ZLaP is a training-free approach able to improve the generalization performance of black-box VLMs using only unlabeled data, for an annotation-free, text-based and open-world classification paradigm that will inevitably be ubiquitous in the near future.

Acknowledgements. This work was supported by the Junior Star GACR GM 2128830M and the Czech Technical University in Prague grant No. SGS23/173/OHK3/3T/13. We thank Ahmet Iscen for many helpful comments." }, { "url": "http://arxiv.org/abs/2310.19752v1", "title": "Intra-Modal Proxy Learning for Zero-Shot Visual Categorization with CLIP", "abstract": "Vision-language pre-training methods, e.g., CLIP, demonstrate an impressive\nzero-shot performance on visual categorizations with the class proxy from the text embedding of the class name. However, the modality gap between the text\nand vision space can result in a sub-optimal performance. We theoretically show\nthat the gap cannot be reduced sufficiently by minimizing the contrastive loss\nin CLIP and the optimal proxy for vision tasks may reside only in the vision\nspace.
Therefore, given unlabeled target vision data, we propose to learn the\nvision proxy directly with the help from the text proxy for zero-shot transfer.\nMoreover, according to our theoretical analysis, strategies are developed to\nfurther refine the pseudo label obtained by the text proxy to facilitate the\nintra-modal proxy learning (InMaP) for vision. Experiments on extensive\ndownstream tasks confirm the effectiveness and efficiency of our proposal.\nConcretely, InMaP can obtain the vision proxy within one minute on a single GPU\nwhile improving the zero-shot accuracy from $77.02\\%$ to $80.21\\%$ on ImageNet\nwith ViT-L/14@336 pre-trained by CLIP. Code is available at\n\\url{https://github.com/idstcv/InMaP}.", + "authors": "Qi Qian, Yuanhong Xu, Juhua Hu", + "published": "2023-10-30", + "updated": "2023-10-30", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV", + "cs.LG" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2210.07574v2", + "title": "Is synthetic data from generative models ready for image recognition?", + "abstract": "Recent text-to-image generation models have shown promising results in\ngenerating high-fidelity photo-realistic images. Though the results are\nastonishing to human eyes, how applicable these generated images are for\nrecognition tasks remains under-explored. In this work, we extensively study\nwhether and how synthetic images generated from state-of-the-art text-to-image\ngeneration models can be used for image recognition tasks, and focus on two\nperspectives: synthetic data for improving classification models in data-scarce\nsettings (i.e. zero-shot and few-shot), and synthetic data for large-scale\nmodel pre-training for transfer learning. We showcase the powerfulness and\nshortcomings of synthetic data from existing generative models, and propose\nstrategies for better applying synthetic data for recognition tasks. Code:\nhttps://github.com/CVMI-Lab/SyntheticData.", + "authors": "Ruifei He, Shuyang Sun, Xin Yu, Chuhui Xue, Wenqing Zhang, Philip Torr, Song Bai, Xiaojuan Qi", + "published": "2022-10-14", + "updated": "2023-02-15", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2209.03320v3", + "title": "What does a platypus look like? Generating customized prompts for zero-shot image classification", + "abstract": "Open-vocabulary models are a promising new paradigm for image classification.\nUnlike traditional classification models, open-vocabulary models classify among\nany arbitrary set of categories specified with natural language during\ninference. This natural language, called \"prompts\", typically consists of a set\nof hand-written templates (e.g., \"a photo of a {}\") which are completed with\neach of the category names. This work introduces a simple method to generate\nhigher accuracy prompts, without relying on any explicit knowledge of the task\ndomain and with far fewer hand-constructed sentences. To achieve this, we\ncombine open-vocabulary models with large language models (LLMs) to create\nCustomized Prompts via Language models (CuPL, pronounced \"couple\"). In\nparticular, we leverage the knowledge contained in LLMs in order to generate\nmany descriptive sentences that contain important discriminating\ncharacteristics of the image categories. This allows the model to place a\ngreater importance on these regions in the image when making predictions. 
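The mechanism CuPL describes here, averaging many LLM-generated descriptions into a single class proxy, fits in a few lines. A minimal sketch assuming a CLIP-like `encode_text` that returns a 1-D feature; the helper names are ours, not CuPL's code:

```python
import numpy as np

def cupl_proxies(descriptions: dict[str, list[str]], encode_text) -> dict[str, np.ndarray]:
    """Build one class proxy by averaging the embeddings of many
    LLM-generated descriptions of that class."""
    proxies = {}
    for cls, sentences in descriptions.items():
        T = np.stack([encode_text(s) for s in sentences])   # (num_sentences, d)
        T /= np.linalg.norm(T, axis=1, keepdims=True)       # unit length
        proxies[cls] = T.mean(axis=0)                       # ensemble proxy
    return proxies

def predict(image_emb: np.ndarray, proxies: dict[str, np.ndarray]) -> str:
    """Nearest class proxy under the dot product."""
    v = image_emb / np.linalg.norm(image_emb)
    return max(proxies, key=lambda c: float(v @ proxies[c]))
```

The same scaffold covers hand-written template ensembles; only the source of the sentences changes.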
We\nfind that this straightforward and general approach improves accuracy on a\nrange of zero-shot image classification benchmarks, including over one\npercentage point gain on ImageNet. Finally, this simple baseline requires no\nadditional training and remains completely zero-shot. Code available at\nhttps://github.com/sarahpratt/CuPL.", + "authors": "Sarah Pratt, Ian Covert, Rosanne Liu, Ali Farhadi", + "published": "2022-09-07", + "updated": "2023-12-03", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV", + "cs.LG" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2212.07143v1", + "title": "Reproducible scaling laws for contrastive language-image learning", + "abstract": "Scaling up neural networks has led to remarkable performance across a wide\nrange of tasks. Moreover, performance often follows reliable scaling laws as a\nfunction of training set size, model size, and compute, which offers valuable\nguidance as large-scale experiments are becoming increasingly expensive.\nHowever, previous work on scaling laws has primarily used private data \\&\nmodels or focused on uni-modal language or vision learning. To address these\nlimitations, we investigate scaling laws for contrastive language-image\npre-training (CLIP) with the public LAION dataset and the open-source OpenCLIP\nrepository. Our large-scale experiments involve models trained on up to two\nbillion image-text pairs and identify power law scaling for multiple downstream\ntasks including zero-shot classification, retrieval, linear probing, and\nend-to-end fine-tuning. We find that the training distribution plays a key role\nin scaling laws as the OpenAI and OpenCLIP models exhibit different scaling\nbehavior despite identical model architectures and similar training recipes. We\nopen-source our evaluation workflow and all models, including the largest\npublic CLIP models, to ensure reproducibility and make scaling laws research\nmore accessible. Source code and instructions to reproduce this study will be\navailable at https://github.com/LAION-AI/scaling-laws-openclip", + "authors": "Mehdi Cherti, Romain Beaumont, Ross Wightman, Mitchell Wortsman, Gabriel Ilharco, Cade Gordon, Christoph Schuhmann, Ludwig Schmidt, Jenia Jitsev", + "published": "2022-12-14", + "updated": "2022-12-14", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI", + "cs.CV" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2210.07183v2", + "title": "Visual Classification via Description from Large Language Models", + "abstract": "Vision-language models (VLMs) such as CLIP have shown promising performance\non a variety of recognition tasks using the standard zero-shot classification\nprocedure -- computing similarity between the query image and the embedded\nwords for each category. By only using the category name, they neglect to make\nuse of the rich context of additional information that language affords. The\nprocedure gives no intermediate understanding of why a category is chosen, and\nfurthermore provides no mechanism for adjusting the criteria used towards this\ndecision. We present an alternative framework for classification with VLMs,\nwhich we call classification by description. We ask VLMs to check for\ndescriptive features rather than broad categories: to find a tiger, look for\nits stripes; its claws; and more. By basing decisions on these descriptors, we\ncan provide additional cues that encourage using the features we want to be\nused. 
In the process, we can get a clear idea of what features the model uses\nto construct its decision; it gains some level of inherent explainability. We\nquery large language models (e.g., GPT-3) for these descriptors to obtain them\nin a scalable way. Extensive experiments show our framework has numerous\nadvantages past interpretability. We show improvements in accuracy on ImageNet\nacross distribution shifts; demonstrate the ability to adapt VLMs to recognize\nconcepts unseen during training; and illustrate how descriptors can be edited\nto effectively mitigate bias compared to the baseline.", + "authors": "Sachit Menon, Carl Vondrick", + "published": "2022-10-13", + "updated": "2022-12-01", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV", + "cs.LG" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2209.07511v1", + "title": "Test-Time Prompt Tuning for Zero-Shot Generalization in Vision-Language Models", + "abstract": "Pre-trained vision-language models (e.g., CLIP) have shown promising\nzero-shot generalization in many downstream tasks with properly designed text\nprompts. Instead of relying on hand-engineered prompts, recent works learn\nprompts using the training data from downstream tasks. While effective,\ntraining on domain-specific data reduces a model's generalization capability to\nunseen new domains. In this work, we propose test-time prompt tuning (TPT), a\nmethod that can learn adaptive prompts on the fly with a single test sample.\nFor image classification, TPT optimizes the prompt by minimizing the entropy\nwith confidence selection so that the model has consistent predictions across\ndifferent augmented views of each test sample. In evaluating generalization to\nnatural distribution shifts, TPT improves the zero-shot top-1 accuracy of CLIP\nby 3.6% on average, surpassing previous prompt tuning approaches that require\nadditional task-specific training data. In evaluating cross-dataset\ngeneralization with unseen categories, TPT performs on par with the\nstate-of-the-art approaches that use additional training data. Project page:\nhttps://azshue.github.io/TPT.", + "authors": "Manli Shu, Weili Nie, De-An Huang, Zhiding Yu, Tom Goldstein, Anima Anandkumar, Chaowei Xiao", + "published": "2022-09-15", + "updated": "2022-09-15", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2210.08402v1", + "title": "LAION-5B: An open large-scale dataset for training next generation image-text models", + "abstract": "Groundbreaking language-vision architectures like CLIP and DALL-E proved the\nutility of training on large amounts of noisy image-text data, without relying\non expensive accurate labels used in standard vision unimodal supervised\nlearning. The resulting models showed capabilities of strong text-guided image\ngeneration and transfer to downstream tasks, while performing remarkably at\nzero-shot classification with noteworthy out-of-distribution robustness. Since\nthen, large-scale language-vision models like ALIGN, BASIC, GLIDE, Flamingo and\nImagen made further improvements. Studying the training and capabilities of\nsuch models requires datasets containing billions of image-text pairs. Until\nnow, no datasets of this size have been made openly available for the broader\nresearch community. 
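The test-time objective that TPT describes two entries above, minimizing the entropy of the marginal prediction over confident augmented views, can be sketched as follows; `model`, `augment`, and the confidence fraction are placeholders of ours, not the authors' implementation:

```python
import torch

def tpt_step(model, prompt, image, augment, n_views=8, keep=0.5):
    """One TPT-style step: average the predictive distributions of the most
    confident augmented views and minimize the entropy of that marginal."""
    views = torch.stack([augment(image) for _ in range(n_views)])
    probs = model(views, prompt).softmax(dim=-1)              # (n_views, n_classes)
    ent = -(probs * probs.clamp_min(1e-12).log()).sum(dim=-1)
    confident = probs[ent.argsort()[: int(keep * n_views)]]   # confidence selection
    marginal = confident.mean(dim=0)
    loss = -(marginal * marginal.clamp_min(1e-12).log()).sum()
    loss.backward()   # update only the learnable prompt; other parameters stay frozen
    return loss.item()
```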
To address this problem and democratize research on\nlarge-scale multi-modal models, we present LAION-5B - a dataset consisting of\n5.85 billion CLIP-filtered image-text pairs, of which 2.32B contain English\nlanguage. We show successful replication and fine-tuning of foundational models\nlike CLIP, GLIDE and Stable Diffusion using the dataset, and discuss further\nexperiments enabled with an openly available dataset of this scale.\nAdditionally we provide several nearest neighbor indices, an improved\nweb-interface for dataset exploration and subset generation, and detection\nscores for watermark, NSFW, and toxic content detection. Announcement page\nhttps://laion.ai/laion-5b-a-new-era-of-open-large-scale-multi-modal-datasets/", + "authors": "Christoph Schuhmann, Romain Beaumont, Richard Vencu, Cade Gordon, Ross Wightman, Mehdi Cherti, Theo Coombes, Aarush Katta, Clayton Mullis, Mitchell Wortsman, Patrick Schramowski, Srivatsa Kundurthy, Katherine Crowson, Ludwig Schmidt, Robert Kaczmarczyk, Jenia Jitsev", + "published": "2022-10-16", + "updated": "2022-10-16", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV", + "cs.AI", + "cs.LG" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2103.00020v1", + "title": "Learning Transferable Visual Models From Natural Language Supervision", + "abstract": "State-of-the-art computer vision systems are trained to predict a fixed set\nof predetermined object categories. This restricted form of supervision limits\ntheir generality and usability since additional labeled data is needed to\nspecify any other visual concept. Learning directly from raw text about images\nis a promising alternative which leverages a much broader source of\nsupervision. We demonstrate that the simple pre-training task of predicting\nwhich caption goes with which image is an efficient and scalable way to learn\nSOTA image representations from scratch on a dataset of 400 million (image,\ntext) pairs collected from the internet. After pre-training, natural language\nis used to reference learned visual concepts (or describe new ones) enabling\nzero-shot transfer of the model to downstream tasks. We study the performance\nof this approach by benchmarking on over 30 different existing computer vision\ndatasets, spanning tasks such as OCR, action recognition in videos,\ngeo-localization, and many types of fine-grained object classification. The\nmodel transfers non-trivially to most tasks and is often competitive with a\nfully supervised baseline without the need for any dataset specific training.\nFor instance, we match the accuracy of the original ResNet-50 on ImageNet\nzero-shot without needing to use any of the 1.28 million training examples it\nwas trained on. We release our code and pre-trained model weights at\nhttps://github.com/OpenAI/CLIP.", + "authors": "Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, Gretchen Krueger, Ilya Sutskever", + "published": "2021-02-26", + "updated": "2021-02-26", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV", + "cs.LG" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2211.16198v4", + "title": "SuS-X: Training-Free Name-Only Transfer of Vision-Language Models", + "abstract": "Contrastive Language-Image Pre-training (CLIP) has emerged as a simple yet\neffective way to train large-scale vision-language models. 
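For reference, the standard zero-shot procedure that the works in this section build on can be written directly against the open-source `clip` package (https://github.com/OpenAI/CLIP); the class names and image path below are illustrative:

```python
import clip
import torch
from PIL import Image

# Class proxies from text embeddings of the class names; prediction by
# highest cosine similarity.
model, preprocess = clip.load("ViT-B/32", device="cpu")
classnames = ["dog", "cat", "car"]
text = clip.tokenize([f"a photo of a {c}." for c in classnames])
image = preprocess(Image.open("example.jpg")).unsqueeze(0)

with torch.no_grad():
    t = model.encode_text(text)
    t = t / t.norm(dim=-1, keepdim=True)   # one proxy per class
    v = model.encode_image(image)
    v = v / v.norm(dim=-1, keepdim=True)
print(classnames[(v @ t.T).argmax(dim=-1).item()])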
CLIP demonstrates\nimpressive zero-shot classification and retrieval on diverse downstream tasks.\nHowever, to leverage its full potential, fine-tuning still appears to be\nnecessary. Fine-tuning the entire CLIP model can be resource-intensive and\nunstable. Moreover, recent methods that aim to circumvent this need for\nfine-tuning still require access to images from the target distribution. In\nthis paper, we pursue a different approach and explore the regime of\ntraining-free \"name-only transfer\" in which the only knowledge we possess about\nthe downstream task comprises the names of downstream target categories. We\npropose a novel method, SuS-X, consisting of two key building blocks -- SuS and\nTIP-X, that requires neither intensive fine-tuning nor costly labelled data.\nSuS-X achieves state-of-the-art zero-shot classification results on 19\nbenchmark datasets. We further show the utility of TIP-X in the training-free\nfew-shot setting, where we again achieve state-of-the-art results over strong\ntraining-free baselines. Code is available at\nhttps://github.com/vishaal27/SuS-X.", + "authors": "Vishaal Udandarao, Ankush Gupta, Samuel Albanie", + "published": "2022-11-28", + "updated": "2023-08-15", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV", + "cs.CL", + "cs.MM" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2306.10191v3", + "title": "Neural Priming for Sample-Efficient Adaptation", + "abstract": "We propose Neural Priming, a technique for adapting large pretrained models\nto distribution shifts and downstream tasks given few or no labeled examples.\nPresented with class names or unlabeled test samples, Neural Priming enables\nthe model to recall and conditions its parameters on relevant data seen\nthroughout pretraining, thereby priming it for the test distribution. Neural\nPriming can be performed at test time, even for pretraining datasets as large\nas LAION-2B. Performing lightweight updates on the recalled data significantly\nimproves accuracy across a variety of distribution shift and transfer learning\nbenchmarks. Concretely, in the zero-shot setting, we see a 2.45% improvement in\naccuracy on ImageNet and 3.81% accuracy improvement on average across standard\ntransfer learning benchmarks. Further, using Neural Priming at inference to\nadapt to distribution shift, we see a 1.41% accuracy improvement on ImageNetV2.\nThese results demonstrate the effectiveness of Neural Priming in addressing the\nchallenge of limited labeled data and changing distributions. Code is available\nat github.com/RAIVNLab/neural-priming.", + "authors": "Matthew Wallingford, Vivek Ramanujan, Alex Fang, Aditya Kusupati, Roozbeh Mottaghi, Aniruddha Kembhavi, Ludwig Schmidt, Ali Farhadi", + "published": "2023-06-16", + "updated": "2023-12-05", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI", + "cs.CV" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2308.03793v2", + "title": "ReCLIP: Refine Contrastive Language Image Pre-Training with Source Free Domain Adaptation", + "abstract": "Large-scale Pre-Training Vision-Language Model such as CLIP has demonstrated\noutstanding performance in zero-shot classification, e.g. achieving 76.3% top-1\naccuracy on ImageNet without seeing any example, which leads to potential\nbenefits to many tasks that have no labeled data. However, while applying CLIP\nto a downstream target domain, the presence of visual and text domain gaps and\ncross-modality misalignment can greatly impact the model performance. 
To\naddress such challenges, we propose ReCLIP, the first source-free domain\nadaptation method for vision-language models, which does not require any source\ndata or target labeled data. ReCLIP first learns a projection space to mitigate\nthe misaligned visual-text embeddings and learns pseudo labels, and then\ndeploys cross-modality self-training with the pseudo labels, to update visual\nand text encoders, refine labels and reduce domain gaps and misalignments\niteratively. With extensive experiments, we demonstrate ReCLIP reduces the\naverage error rate of CLIP from 30.17% to 25.06% on 22 image classification\nbenchmarks. Code available at https://github.com/michiganleon/ReCLIP_WACV.", + "authors": "Xuefeng Hu, Ke Zhang, Lu Xia, Albert Chen, Jiajia Luo, Yuyin Sun, Ken Wang, Nan Qiao, Xiao Zeng, Min Sun, Cheng-Hao Kuo, Ram Nevatia", + "published": "2023-08-04", + "updated": "2023-12-14", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV", + "cs.LG" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2302.02551v3", + "title": "CHiLS: Zero-Shot Image Classification with Hierarchical Label Sets", + "abstract": "Open vocabulary models (e.g. CLIP) have shown strong performance on zero-shot\nclassification through their ability generate embeddings for each class based\non their (natural language) names. Prior work has focused on improving the\naccuracy of these models through prompt engineering or by incorporating a small\namount of labeled downstream data (via finetuning). However, there has been\nlittle focus on improving the richness of the class names themselves, which can\npose issues when class labels are coarsely-defined and are uninformative. We\npropose Classification with Hierarchical Label Sets (or CHiLS), an alternative\nstrategy for zero-shot classification specifically designed for datasets with\nimplicit semantic hierarchies. CHiLS proceeds in three steps: (i) for each\nclass, produce a set of subclasses, using either existing label hierarchies or\nby querying GPT-3; (ii) perform the standard zero-shot CLIP procedure as though\nthese subclasses were the labels of interest; (iii) map the predicted subclass\nback to its parent to produce the final prediction. Across numerous datasets\nwith underlying hierarchical structure, CHiLS leads to improved accuracy in\nsituations both with and without ground-truth hierarchical information. CHiLS\nis simple to implement within existing zero-shot pipelines and requires no\nadditional training cost. Code is available at:\nhttps://github.com/acmi-lab/CHILS.", + "authors": "Zachary Novack, Julian McAuley, Zachary C. Lipton, Saurabh Garg", + "published": "2023-02-06", + "updated": "2023-05-31", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV", + "cs.LG" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2209.14169v2", + "title": "CALIP: Zero-Shot Enhancement of CLIP with Parameter-free Attention", + "abstract": "Contrastive Language-Image Pre-training (CLIP) has been shown to learn visual\nrepresentations with great transferability, which achieves promising accuracy\nfor zero-shot classification. To further improve its downstream performance,\nexisting works propose additional learnable modules upon CLIP and fine-tune\nthem by few-shot training sets. However, the resulting extra training cost and\ndata requirement severely hinder the efficiency for model deployment and\nknowledge transfer. 
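CHiLS's three steps, described a little earlier, collapse to a handful of lines once subclass proxies are available; the `parent_of` mapping and function name below are our own illustrative choices:

```python
def chils_predict(image_emb, subclass_proxies, parent_of):
    """CHiLS in miniature: score every subclass proxy as in standard
    zero-shot CLIP, then map the winning subclass back to its parent."""
    best = max(subclass_proxies, key=lambda s: float(image_emb @ subclass_proxies[s]))
    return parent_of[best]

# e.g. parent_of = {"beagle": "dog", "tabby": "cat"}, with subclass names
# taken from an existing label hierarchy or generated by GPT-3.
```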
In this paper, we introduce a free-lunch enhancement\nmethod, CALIP, to boost CLIP's zero-shot performance via a parameter-free\nAttention module. Specifically, we guide visual and textual representations to\ninteract with each other and explore cross-modal informative features via\nattention. As the pre-training has largely reduced the embedding distances\nbetween two modalities, we discard all learnable parameters in the attention\nand bidirectionally update the multi-modal features, enabling the whole process\nto be parameter-free and training-free. In this way, the images are blended\nwith textual-aware signals and the text representations become visual-guided\nfor better adaptive zero-shot alignment. We evaluate CALIP on various\nbenchmarks of 14 datasets for both 2D image and 3D point cloud few-shot\nclassification, showing consistent zero-shot performance improvement over CLIP.\nBased on that, we further insert a small number of linear layers in CALIP's\nattention module and verify our robustness under the few-shot settings, which\nalso achieves leading performance compared to existing methods. Those extensive\nexperiments demonstrate the superiority of our approach for efficient\nenhancement of CLIP.", + "authors": "Ziyu Guo, Renrui Zhang, Longtian Qiu, Xianzheng Ma, Xupeng Miao, Xuming He, Bin Cui", + "published": "2022-09-28", + "updated": "2022-12-18", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV", + "cs.AI", + "cs.MM" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2210.03794v1", + "title": "SVL-Adapter: Self-Supervised Adapter for Vision-Language Pretrained Models", + "abstract": "Vision-language models such as CLIP are pretrained on large volumes of\ninternet sourced image and text pairs, and have been shown to sometimes exhibit\nimpressive zero- and low-shot image classification performance. However, due to\ntheir size, fine-tuning these models on new datasets can be prohibitively\nexpensive, both in terms of the supervision and compute required. To combat\nthis, a series of light-weight adaptation methods have been proposed to\nefficiently adapt such models when limited supervision is available. In this\nwork, we show that while effective on internet-style datasets, even those\nremedies under-deliver on classification tasks with images that differ\nsignificantly from those commonly found online. To address this issue, we\npresent a new approach called SVL-Adapter that combines the complementary\nstrengths of both vision-language pretraining and self-supervised\nrepresentation learning. We report an average classification accuracy\nimprovement of 10% in the low-shot setting when compared to existing methods,\non a set of challenging visual classification tasks. Further, we present a\nfully automatic way of selecting an important blending hyperparameter for our\nmodel that does not require any held-out labeled validation data. Code for our\nproject is available here: https://github.com/omipan/svl_adapter.", + "authors": "Omiros Pantazis, Gabriel Brostow, Kate Jones, Oisin Mac Aodha", + "published": "2022-10-07", + "updated": "2022-10-07", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2204.03649v2", + "title": "Unsupervised Prompt Learning for Vision-Language Models", + "abstract": "Contrastive vision-language models like CLIP have shown great progress in\ntransfer learning. 
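One plausible reading of the parameter-free, bidirectional cross-modal attention that CALIP describes above is sketched below; the residual updates and temperature are our assumptions, not the paper's exact formulation:

```python
import torch
import torch.nn.functional as F

def parameter_free_attention(Fv: torch.Tensor, Ft: torch.Tensor, tau: float = 1.0):
    """Bidirectionally update visual tokens Fv (n_v, d) and textual tokens
    Ft (n_t, d) via attention with no learnable parameters."""
    A = Fv @ Ft.t() / tau                        # cross-modal affinities
    Fv_new = Fv + F.softmax(A, dim=-1) @ Ft      # text-aware visual features
    Ft_new = Ft + F.softmax(A.t(), dim=-1) @ Fv  # visual-guided text features
    return Fv_new, Ft_new
```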
In the inference stage, the proper text description, also\nknown as prompt, needs to be carefully designed to correctly classify the given\nimages. In order to avoid laborious prompt engineering, recent works such as\nCoOp, CLIP-Adapter and Tip-Adapter propose to adapt vision-language models for\ndownstream image recognition tasks on a small set of labeled data. Though\npromising improvements are achieved, requiring labeled data from the target\ndatasets may restrict the scalability. In this paper, we explore a different\nscenario, in which the labels of the target datasets are unprovided, and we\npresent an unsupervised prompt learning (UPL) approach to avoid prompt\nengineering while simultaneously improving transfer performance of CLIP-like\nvision-language models. As far as we know, UPL is the first work to introduce\nunsupervised learning into prompt learning. Experimentally, our UPL outperforms\noriginal CLIP with prompt engineering on ImageNet as well as other 10 datasets.\nAn enhanced version of UPL is even competitive with the 8-shot CoOp and the\n8-shot TIP-Adapter on most datasets. Code and models are available at\nhttps://github.com/tonyhuang2022/UPL.", + "authors": "Tony Huang, Jack Chu, Fangyun Wei", + "published": "2022-04-07", + "updated": "2022-08-22", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2303.02151v1", + "title": "Prompt, Generate, then Cache: Cascade of Foundation Models makes Strong Few-shot Learners", + "abstract": "Visual recognition in low-data regimes requires deep neural networks to learn\ngeneralized representations from limited training samples. Recently, CLIP-based\nmethods have shown promising few-shot performance benefited from the\ncontrastive language-image pre-training. We then question, if the more diverse\npre-training knowledge can be cascaded to further assist few-shot\nrepresentation learning. In this paper, we propose CaFo, a Cascade of\nFoundation models that incorporates diverse prior knowledge of various\npre-training paradigms for better few-shot learning. Our CaFo incorporates\nCLIP's language-contrastive knowledge, DINO's vision-contrastive knowledge,\nDALL-E's vision-generative knowledge, and GPT-3's language-generative\nknowledge. Specifically, CaFo works by 'Prompt, Generate, then Cache'. Firstly,\nwe leverage GPT-3 to produce textual inputs for prompting CLIP with rich\ndownstream linguistic semantics. Then, we generate synthetic images via DALL-E\nto expand the few-shot training data without any manpower. At last, we\nintroduce a learnable cache model to adaptively blend the predictions from CLIP\nand DINO. By such collaboration, CaFo can fully unleash the potential of\ndifferent pre-training methods and unify them to perform state-of-the-art for\nfew-shot classification. Code is available at\nhttps://github.com/ZrrSkywalker/CaFo.", + "authors": "Renrui Zhang, Xiangfei Hu, Bohao Li, Siyuan Huang, Hanqiu Deng, Hongsheng Li, Yu Qiao, Peng Gao", + "published": "2023-03-03", + "updated": "2023-03-03", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV", + "cs.CL" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2102.05918v2", + "title": "Scaling Up Visual and Vision-Language Representation Learning With Noisy Text Supervision", + "abstract": "Pre-trained representations are becoming crucial for many NLP and perception\ntasks. 
While representation learning in NLP has transitioned to training on raw\ntext without human annotations, visual and vision-language representations\nstill rely heavily on curated training datasets that are expensive or require\nexpert knowledge. For vision applications, representations are mostly learned\nusing datasets with explicit class labels such as ImageNet or OpenImages. For\nvision-language, popular datasets like Conceptual Captions, MSCOCO, or CLIP all\ninvolve a non-trivial data collection (and cleaning) process. This costly\ncuration process limits the size of datasets and hence hinders the scaling of\ntrained models. In this paper, we leverage a noisy dataset of over one billion\nimage alt-text pairs, obtained without expensive filtering or post-processing\nsteps in the Conceptual Captions dataset. A simple dual-encoder architecture\nlearns to align visual and language representations of the image and text pairs\nusing a contrastive loss. We show that the scale of our corpus can make up for\nits noise and leads to state-of-the-art representations even with such a simple\nlearning scheme. Our visual representation achieves strong performance when\ntransferred to classification tasks such as ImageNet and VTAB. The aligned\nvisual and language representations enables zero-shot image classification and\nalso set new state-of-the-art results on Flickr30K and MSCOCO image-text\nretrieval benchmarks, even when compared with more sophisticated\ncross-attention models. The representations also enable cross-modality search\nwith complex text and text + image queries.", + "authors": "Chao Jia, Yinfei Yang, Ye Xia, Yi-Ting Chen, Zarana Parekh, Hieu Pham, Quoc V. Le, Yunhsuan Sung, Zhen Li, Tom Duerig", + "published": "2021-02-11", + "updated": "2021-06-11", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV", + "cs.CL", + "cs.LG" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2302.11084v2", + "title": "Test-Time Distribution Normalization for Contrastively Learned Vision-language Models", + "abstract": "Advances in the field of vision-language contrastive learning have made it\npossible for many downstream applications to be carried out efficiently and\naccurately by simply taking the dot product between image and text\nrepresentations. One of the most representative approaches proposed recently\nknown as CLIP has garnered widespread adoption due to its effectiveness. CLIP\nis trained with an InfoNCE loss that takes into account both positive and\nnegative samples to help learn a much more robust representation space. This\npaper reveals that the common downstream practice of taking a dot product is\nonly a zeroth-order approximation of the optimization goal, resulting in a loss\nof information during test-time. Intuitively, since the model has been\noptimized based on the InfoNCE loss, test-time procedures should also be in\nalignment. The question lies in how one can retrieve any semblance of negative\nsamples information during inference in a computationally efficient way. To\nthis end, we propose Distribution Normalization (DN), where we approximate the\nmean representation of a batch of test samples and use such a mean to represent\nwhat would be analogous to negative samples in the InfoNCE loss. 
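A rough picture of the Distribution Normalization idea just described: center embeddings by their test-batch mean before the usual dot product, so the mean stands in for the negatives of the InfoNCE loss. Whether one or both modalities are centered, and any scaling, are our assumptions here; DN's exact estimator is given in the paper:

```python
import numpy as np

def dn_scores(image_embs: np.ndarray, text_embs: np.ndarray) -> np.ndarray:
    """Distribution-normalized similarities (sketch): subtract each modality's
    test-batch mean, then take the usual image-text dot products."""
    v = image_embs - image_embs.mean(axis=0, keepdims=True)
    t = text_embs - text_embs.mean(axis=0, keepdims=True)
    return v @ t.T
```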
DN requires no\nretraining or fine-tuning and can be effortlessly applied during inference.\nExtensive experiments on a wide variety of downstream tasks exhibit a clear\nadvantage of DN over the dot product on top of other existing test-time\naugmentation methods.", + "authors": "Yifei Zhou, Juntao Ren, Fengyu Li, Ramin Zabih, Ser-Nam Lim", + "published": "2023-02-22", + "updated": "2023-10-18", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.CL", + "cs.CV" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2306.07282v2", + "title": "Waffling around for Performance: Visual Classification with Random Words and Broad Concepts", + "abstract": "The visual classification performance of vision-language models such as CLIP\nhas been shown to benefit from additional semantic knowledge from large\nlanguage models (LLMs) such as GPT-3. In particular, averaging over\nLLM-generated class descriptors, e.g. \"waffle, which has a round shape\", can\nnotably improve generalization performance. In this work, we critically study\nthis behavior and propose WaffleCLIP, a framework for zero-shot visual\nclassification which simply replaces LLM-generated descriptors with random\ncharacter and word descriptors. Without querying external models, we achieve\ncomparable performance gains on a large number of visual classification tasks.\nThis allows WaffleCLIP to both serve as a low-cost alternative, as well as a\nsanity check for any future LLM-based vision-language model extensions. We\nconduct an extensive experimental study on the impact and shortcomings of\nadditional semantics introduced with LLM-generated descriptors, and showcase\nhow - if available - semantic context is better leveraged by querying LLMs for\nhigh-level concepts, which we show can be done to jointly resolve potential\nclass name ambiguities. Code is available here:\nhttps://github.com/ExplainableML/WaffleCLIP.", + "authors": "Karsten Roth, Jae Myung Kim, A. Sophia Koepke, Oriol Vinyals, Cordelia Schmid, Zeynep Akata", + "published": "2023-06-12", + "updated": "2023-08-17", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV", + "cs.LG" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/1911.05954v3", + "title": "Hierarchical Graph Pooling with Structure Learning", + "abstract": "Graph Neural Networks (GNNs), which generalize deep neural networks to\ngraph-structured data, have drawn considerable attention and achieved\nstate-of-the-art performance in numerous graph related tasks. However, existing\nGNN models mainly focus on designing graph convolution operations. The graph\npooling (or downsampling) operations, that play an important role in learning\nhierarchical representations, are usually overlooked. In this paper, we propose\na novel graph pooling operator, called Hierarchical Graph Pooling with\nStructure Learning (HGP-SL), which can be integrated into various graph neural\nnetwork architectures. HGP-SL incorporates graph pooling and structure learning\ninto a unified module to generate hierarchical representations of graphs. More\nspecifically, the graph pooling operation adaptively selects a subset of nodes\nto form an induced subgraph for the subsequent layers. To preserve the\nintegrity of graph's topological information, we further introduce a structure\nlearning mechanism to learn a refined graph structure for the pooled graph at\neach layer. 
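The node-selection step at the core of pooling operators like HGP-SL can be sketched independently of the structure-learning refinement: keep the k highest-scoring nodes and take the induced subgraph. A generic sketch, not HGP-SL's exact operator; `scores` would come from a learnable projection of node features:

```python
import numpy as np

def topk_pool(X: np.ndarray, A: np.ndarray, scores: np.ndarray, k: int):
    """Keep the k highest-scoring nodes and return the induced subgraph
    (node features and adjacency)."""
    idx = np.sort(np.argsort(scores)[-k:])   # retained nodes, original order
    return X[idx], A[np.ix_(idx, idx)]
```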
By combining HGP-SL operator with graph neural networks, we perform\ngraph level representation learning with focus on graph classification task.\nExperimental results on six widely used benchmarks demonstrate the\neffectiveness of our proposed model.", + "authors": "Zhen Zhang, Jiajun Bu, Martin Ester, Jianfeng Zhang, Chengwei Yao, Zhi Yu, Can Wang", + "published": "2019-11-14", + "updated": "2019-12-25", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "stat.ML" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/2311.11821v1", + "title": "Cross-View Graph Consistency Learning for Invariant Graph Representations", + "abstract": "Graph representation learning is fundamental for analyzing graph-structured\ndata. Exploring invariant graph representations remains a challenge for most\nexisting graph representation learning methods. In this paper, we propose a\ncross-view graph consistency learning (CGCL) method that learns invariant graph\nrepresentations for link prediction. First, two complementary augmented views\nare derived from an incomplete graph structure through a bidirectional graph\nstructure augmentation scheme. This augmentation scheme mitigates the potential\ninformation loss that is commonly associated with various data augmentation\ntechniques involving raw graph data, such as edge perturbation, node removal,\nand attribute masking. Second, we propose a CGCL model that can learn invariant\ngraph representations. A cross-view training scheme is proposed to train the\nproposed CGCL model. This scheme attempts to maximize the consistency\ninformation between one augmented view and the graph structure reconstructed\nfrom the other augmented view. Furthermore, we offer a comprehensive\ntheoretical CGCL analysis. This paper empirically and experimentally\ndemonstrates the effectiveness of the proposed CGCL method, achieving\ncompetitive results on graph datasets in comparisons with several\nstate-of-the-art algorithms.", + "authors": "Jie Chen, Zhiming Li, Hua Mao, Wai Lok Woo, Xi Peng", + "published": "2023-11-20", + "updated": "2023-11-20", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.CV" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/2106.15239v1", + "title": "Generating the Graph Gestalt: Kernel-Regularized Graph Representation Learning", + "abstract": "Recent work on graph generative models has made remarkable progress towards\ngenerating increasingly realistic graphs, as measured by global graph features\nsuch as degree distribution, density, and clustering coefficients. Deep\ngenerative models have also made significant advances through better modelling\nof the local correlations in the graph topology, which have been very useful\nfor predicting unobserved graph components, such as the existence of a link or\nthe class of a node, from nearby observed graph components. A complete\nscientific understanding of graph data should address both global and local\nstructure. In this paper, we propose a joint model for both as complementary\nobjectives in a graph VAE framework. Global structure is captured by\nincorporating graph kernels in a probabilistic model whose loss function is\nclosely related to the maximum mean discrepancy(MMD) between the global\nstructures of the reconstructed and the input graphs. The ELBO objective\nderived from the model regularizes a standard local link reconstruction term\nwith an MMD term. 
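The MMD term just mentioned has a compact standard (biased) estimator once kernel matrices between generated graphs (x) and input graphs (y) are computed; this is the textbook form, not necessarily the paper's exact loss:

```python
import numpy as np

def mmd_squared(Kxx: np.ndarray, Kyy: np.ndarray, Kxy: np.ndarray) -> float:
    """Biased estimator of squared maximum mean discrepancy from precomputed
    graph-kernel matrices: within-x, within-y, and cross terms."""
    return float(Kxx.mean() + Kyy.mean() - 2.0 * Kxy.mean())
```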
Our experiments demonstrate a significant improvement in the\nrealism of the generated graph structures, typically by 1-2 orders of magnitude\nof graph structure metrics, compared to leading graph VAE and GAN models. Local\nlink reconstruction improves as well in many cases.", "authors": "Kiarash Zahirnia, Ankita Sakhuja, Oliver Schulte, Parmis Nadaf, Ke Li, Xia Hu", "published": "2021-06-29", "updated": "2021-06-29", "primary_cat": "cs.LG", "cats": [ "cs.LG" ], "category": "Graph AND Structure AND Learning" }, { "url": "http://arxiv.org/abs/2404.11869v1", "title": "Multi-view Graph Structural Representation Learning via Graph Coarsening", "abstract": "Graph Transformers (GTs) have made remarkable achievements in graph-level\ntasks. However, most existing works regard graph structures as a form of\nguidance or bias for enhancing node representations, which focuses on\nnode-central perspectives and lacks explicit representations of edges and\nstructures. One natural question is, can we treat graph structures node-like as\na whole to learn high-level features? Through experimental analysis, we explore\nthe feasibility of this assumption. Based on our findings, we propose a novel\nmulti-view graph structural representation learning model via graph coarsening\n(MSLgo) on GT architecture for graph classification. Specifically, we build\nthree unique views, original, coarsening, and conversion, to learn a thorough\nstructural representation. We compress loops and cliques via hierarchical\nheuristic graph coarsening and restrict them with well-designed constraints,\nwhich builds the coarsening view to learn high-level interactions between\nstructures. We also introduce line graphs for edge embeddings and switch to\nedge-central perspective to construct the conversion view. Experiments on six\nreal-world datasets demonstrate the improvements of MSLgo over 14 baselines\nfrom various architectures.", "authors": "Xiaorui Qi, Qijie Bai, Yanlong Wen, Haiwei Zhang, Xiaojie Yuan", "published": "2024-04-18", "updated": "2024-04-18", "primary_cat": "cs.LG", "cats": [ "cs.LG", "cs.SI" ], "category": "Graph AND Structure AND Learning" }, { "url": "http://arxiv.org/abs/2111.06679v2", "title": "deepstruct -- linking deep learning and graph theory", "abstract": "deepstruct connects deep learning models and graph theory such that different\ngraph structures can be imposed on neural networks or graph structures can be\nextracted from trained neural network models. For this, deepstruct provides\ndeep neural network models with different restrictions which can be created\nbased on an initial graph. Further, tools to extract graph structures from\ntrained models are available. This step of extracting graphs can be\ncomputationally expensive even for models of just a few dozen thousand\nparameters and poses a challenging problem.
deepstruct supports research in\npruning, neural architecture search, automated network design and structure\nanalysis of neural networks.", + "authors": "Julian Stier, Michael Granitzer", + "published": "2021-11-12", + "updated": "2021-12-05", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI", + "cs.NE", + "I.2.0; F.0" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/2005.14403v1", + "title": "Deep graph learning for semi-supervised classification", + "abstract": "Graph learning (GL) can dynamically capture the distribution structure (graph\nstructure) of data based on graph convolutional networks (GCN), and the\nlearning quality of the graph structure directly influences GCN for\nsemi-supervised classification. Existing methods mostly combine the\ncomputational layer and the related losses into GCN for exploring the global\ngraph(measuring graph structure from all data samples) or local graph\n(measuring graph structure from local data samples). Global graph emphasises on\nthe whole structure description of the inter-class data, while local graph\ntrend to the neighborhood structure representation of intra-class data.\nHowever, it is difficult to simultaneously balance these graphs of the learning\nprocess for semi-supervised classification because of the interdependence of\nthese graphs. To simulate the interdependence, deep graph learning(DGL) is\nproposed to find the better graph representation for semi-supervised\nclassification. DGL can not only learn the global structure by the previous\nlayer metric computation updating, but also mine the local structure by next\nlayer local weight reassignment. Furthermore, DGL can fuse the different\nstructures by dynamically encoding the interdependence of these structures, and\ndeeply mine the relationship of the different structures by the hierarchical\nprogressive learning for improving the performance of semi-supervised\nclassification. Experiments demonstrate the DGL outperforms state-of-the-art\nmethods on three benchmark datasets (Citeseer,Cora, and Pubmed) for citation\nnetworks and two benchmark datasets (MNIST and Cifar10) for images.", + "authors": "Guangfeng Lin, Xiaobing Kang, Kaiyang Liao, Fan Zhao, Yajun Chen", + "published": "2020-05-29", + "updated": "2020-05-29", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/2008.10065v1", + "title": "Kernel-based Graph Learning from Smooth Signals: A Functional Viewpoint", + "abstract": "The problem of graph learning concerns the construction of an explicit\ntopological structure revealing the relationship between nodes representing\ndata entities, which plays an increasingly important role in the success of\nmany graph-based representations and algorithms in the field of machine\nlearning and graph signal processing. In this paper, we propose a novel graph\nlearning framework that incorporates the node-side and observation-side\ninformation, and in particular the covariates that help to explain the\ndependency structures in graph signals. To this end, we consider graph signals\nas functions in the reproducing kernel Hilbert space associated with a\nKronecker product kernel, and integrate functional learning with\nsmoothness-promoting graph learning to learn a graph representing the\nrelationship between nodes. 
The functional learning increases the robustness of\ngraph learning against missing and incomplete information in the graph signals.\nIn addition, we develop a novel graph-based regularisation method which, when\ncombined with the Kronecker product kernel, enables our model to capture both\nthe dependency explained by the graph and the dependency due to graph signals\nobserved under different but related circumstances, e.g. different points in\ntime. The latter means the graph signals are free from the i.i.d. assumptions\nrequired by the classical graph learning models. Experiments on both synthetic\nand real-world data show that our methods outperform the state-of-the-art\nmodels in learning a meaningful graph topology from graph signals, in\nparticular under heavy noise, missing values, and multiple dependency.", + "authors": "Xingyue Pu, Siu Lun Chau, Xiaowen Dong, Dino Sejdinovic", + "published": "2020-08-23", + "updated": "2020-08-23", + "primary_cat": "stat.ML", + "cats": [ + "stat.ML", + "cs.LG", + "cs.SI", + "eess.SP" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/1909.11594v1", + "title": "Structured Graph Learning Via Laplacian Spectral Constraints", + "abstract": "Learning a graph with a specific structure is essential for interpretability\nand identification of the relationships among data. It is well known that\nstructured graph learning from observed samples is an NP-hard combinatorial\nproblem. In this paper, we first show that for a set of important graph\nfamilies it is possible to convert the structural constraints of structure into\neigenvalue constraints of the graph Laplacian matrix. Then we introduce a\nunified graph learning framework, lying at the integration of the spectral\nproperties of the Laplacian matrix with Gaussian graphical modeling that is\ncapable of learning structures of a large class of graph families. The proposed\nalgorithms are provably convergent and practically amenable for large-scale\nsemi-supervised and unsupervised graph-based learning tasks. Extensive\nnumerical experiments with both synthetic and real data sets demonstrate the\neffectiveness of the proposed methods. An R package containing code for all the\nexperimental results is available at\nhttps://cran.r-project.org/package=spectralGraphTopology.", + "authors": "Sandeep Kumar, Jiaxi Ying, Jos'e Vin'icius de M. Cardoso, Daniel P. Palomar", + "published": "2019-09-24", + "updated": "2019-09-24", + "primary_cat": "stat.ML", + "cats": [ + "stat.ML", + "cs.LG", + "cs.SI", + "math.OC", + "stat.AP" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/2201.07409v2", + "title": "Dual Space Graph Contrastive Learning", + "abstract": "Unsupervised graph representation learning has emerged as a powerful tool to\naddress real-world problems and achieves huge success in the graph learning\ndomain. Graph contrastive learning is one of the unsupervised graph\nrepresentation learning methods, which recently attracts attention from\nresearchers and has achieved state-of-the-art performances on various tasks.\nThe key to the success of graph contrastive learning is to construct proper\ncontrasting pairs to acquire the underlying structural semantics of the graph.\nHowever, this key part is not fully explored currently, most of the ways\ngenerating contrasting pairs focus on augmenting or perturbating graph\nstructures to obtain different views of the input graph. 
But such strategies\ncould degrade the performances via adding noise into the graph, which may\nnarrow down the field of the applications of graph contrastive learning. In\nthis paper, we propose a novel graph contrastive learning method, namely\n\\textbf{D}ual \\textbf{S}pace \\textbf{G}raph \\textbf{C}ontrastive (DSGC)\nLearning, to conduct graph contrastive learning among views generated in\ndifferent spaces including the hyperbolic space and the Euclidean space. Since\nboth spaces have their own advantages to represent graph data in the embedding\nspaces, we hope to utilize graph contrastive learning to bridge the spaces and\nleverage advantages from both sides. The comparison experiment results show\nthat DSGC achieves competitive or better performances among all the datasets.\nIn addition, we conduct extensive experiments to analyze the impact of\ndifferent graph encoders on DSGC, giving insights about how to better leverage\nthe advantages of contrastive learning between different spaces.", + "authors": "Haoran Yang, Hongxu Chen, Shirui Pan, Lin Li, Philip S. Yu, Guandong Xu", + "published": "2022-01-19", + "updated": "2022-03-04", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.SI" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/2306.07699v2", + "title": "Time-aware Graph Structure Learning via Sequence Prediction on Temporal Graphs", + "abstract": "Temporal Graph Learning, which aims to model the time-evolving nature of\ngraphs, has gained increasing attention and achieved remarkable performance\nrecently. However, in reality, graph structures are often incomplete and noisy,\nwhich hinders temporal graph networks (TGNs) from learning informative\nrepresentations. Graph contrastive learning uses data augmentation to generate\nplausible variations of existing data and learn robust representations.\nHowever, rule-based augmentation approaches may be suboptimal as they lack\nlearnability and fail to leverage rich information from downstream tasks. To\naddress these issues, we propose a Time-aware Graph Structure Learning (TGSL)\napproach via sequence prediction on temporal graphs, which learns better graph\nstructures for downstream tasks through adding potential temporal edges. In\nparticular, it predicts time-aware context embedding based on previously\nobserved interactions and uses the Gumble-Top-K to select the closest candidate\nedges to this context embedding. Additionally, several candidate sampling\nstrategies are proposed to ensure both efficiency and diversity. Furthermore,\nwe jointly learn the graph structure and TGNs in an end-to-end manner and\nperform inference on the refined graph. Extensive experiments on temporal link\nprediction benchmarks demonstrate that TGSL yields significant gains for the\npopular TGNs such as TGAT and GraphMixer, and it outperforms other contrastive\nlearning methods on temporal graphs. We release the code at\nhttps://github.com/ViktorAxelsen/TGSL.", + "authors": "Haozhen Zhang, Xueting Han, Xi Xiao, Jing Bai", + "published": "2023-06-13", + "updated": "2023-08-15", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/1811.09971v1", + "title": "Graph Learning-Convolutional Networks", + "abstract": "Recently, graph Convolutional Neural Networks (graph CNNs) have been widely\nused for graph data representation and semi-supervised learning tasks. 
However,\nexisting graph CNNs generally use a fixed graph which may be not optimal for\nsemi-supervised learning tasks. In this paper, we propose a novel Graph\nLearning-Convolutional Network (GLCN) for graph data representation and\nsemi-supervised learning. The aim of GLCN is to learn an optimal graph\nstructure that best serves graph CNNs for semi-supervised learning by\nintegrating both graph learning and graph convolution together in a unified\nnetwork architecture. The main advantage is that in GLCN, both given labels and\nthe estimated labels are incorporated and thus can provide useful 'weakly'\nsupervised information to refine (or learn) the graph construction and also to\nfacilitate the graph convolution operation in GLCN for unknown label\nestimation. Experimental results on seven benchmarks demonstrate that GLCN\nsignificantly outperforms state-of-the-art traditional fixed structure based\ngraph CNNs.", + "authors": "Bo Jiang, Ziyan Zhang, Doudou Lin, Jin Tang", + "published": "2018-11-25", + "updated": "2018-11-25", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/2306.11264v1", + "title": "GraphGLOW: Universal and Generalizable Structure Learning for Graph Neural Networks", + "abstract": "Graph structure learning is a well-established problem that aims at\noptimizing graph structures adaptive to specific graph datasets to help message\npassing neural networks (i.e., GNNs) to yield effective and robust node\nembeddings. However, the common limitation of existing models lies in the\nunderlying \\textit{closed-world assumption}: the testing graph is the same as\nthe training graph. This premise requires independently training the structure\nlearning model from scratch for each graph dataset, which leads to prohibitive\ncomputation costs and potential risks for serious over-fitting. To mitigate\nthese issues, this paper explores a new direction that moves forward to learn a\nuniversal structure learning model that can generalize across graph datasets in\nan open world. We first introduce the mathematical definition of this novel\nproblem setting, and describe the model formulation from a probabilistic\ndata-generative aspect. Then we devise a general framework that coordinates a\nsingle graph-shared structure learner and multiple graph-specific GNNs to\ncapture the generalizable patterns of optimal message-passing topology across\ndatasets. The well-trained structure learner can directly produce adaptive\nstructures for unseen target graphs without any fine-tuning. 
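GLCN above and GraphGLOW's structure learner both revolve around the same primitive: mapping node features to a soft adjacency. A GLCN-style sketch; the exact parameterization differs across the two papers:

```python
import torch
import torch.nn.functional as F

def learn_adjacency(X: torch.Tensor, a: torch.Tensor) -> torch.Tensor:
    """Produce a soft adjacency from node features X (n, d) using a learnable
    vector a (d,): score each pair via the absolute feature difference, then
    row-normalize so each node's outgoing weights sum to one."""
    diff = (X.unsqueeze(1) - X.unsqueeze(0)).abs()  # (n, n, d), |x_i - x_j|
    return F.softmax(F.relu(diff @ a), dim=1)       # (n, n) learned graph
```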
Across diverse\ndatasets and various challenging cross-graph generalization protocols, our\nexperiments show that even without training on target graphs, the proposed\nmodel i) significantly outperforms expressive GNNs trained on input\n(non-optimized) topology, and ii) surprisingly performs on par with\nstate-of-the-art models that independently optimize adaptive structures for\nspecific target graphs, with notably orders-of-magnitude acceleration for\ntraining on the target graph.", + "authors": "Wentao Zhao, Qitian Wu, Chenxiao Yang, Junchi Yan", + "published": "2023-06-20", + "updated": "2023-06-20", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/2302.02909v1", + "title": "Spectral Augmentations for Graph Contrastive Learning", + "abstract": "Contrastive learning has emerged as a premier method for learning\nrepresentations with or without supervision. Recent studies have shown its\nutility in graph representation learning for pre-training. Despite successes,\nthe understanding of how to design effective graph augmentations that can\ncapture structural properties common to many different types of downstream\ngraphs remains incomplete. We propose a set of well-motivated graph\ntransformation operations derived via graph spectral analysis to provide a bank\nof candidates when constructing augmentations for a graph contrastive\nobjective, enabling contrastive learning to capture useful structural\nrepresentation from pre-training graph datasets. We first present a spectral\ngraph cropping augmentation that involves filtering nodes by applying\nthresholds to the eigenvalues of the leading Laplacian eigenvectors. Our second\nnovel augmentation reorders the graph frequency components in a structural\nLaplacian-derived position graph embedding. Further, we introduce a method that\nleads to improved views of local subgraphs by performing alignment via global\nrandom walk embeddings. Our experimental results indicate consistent\nimprovements in out-of-domain graph data transfer compared to state-of-the-art\ngraph contrastive learning methods, shedding light on how to design a graph\nlearner that is able to learn structural properties common to diverse graph\ntypes.", + "authors": "Amur Ghose, Yingxue Zhang, Jianye Hao, Mark Coates", + "published": "2023-02-06", + "updated": "2023-02-06", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/2206.08561v1", + "title": "Boosting Graph Structure Learning with Dummy Nodes", + "abstract": "With the development of graph kernels and graph representation learning, many\nsuperior methods have been proposed to handle scalability and oversmoothing\nissues on graph structure learning. However, most of those strategies are\ndesigned based on practical experience rather than theoretical analysis. In\nthis paper, we use a particular dummy node connecting to all existing vertices\nwithout affecting original vertex and edge properties. We further prove that\nsuch the dummy node can help build an efficient monomorphic edge-to-vertex\ntransform and an epimorphic inverse to recover the original graph back. It also\nindicates that adding dummy nodes can preserve local and global structures for\nbetter graph representation learning. 
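The spectral graph cropping described in the abstract above filters nodes using leading Laplacian eigenvectors. A rough sketch of the idea, with all details assumed: threshold the Fiedler vector of the Laplacian and keep the induced subgraph as the augmented view.

```python
# Rough sketch of a spectral-crop-style augmentation; details are assumptions.
import numpy as np

def spectral_crop(adj: np.ndarray, threshold: float = 0.0) -> np.ndarray:
    """Return the index set of nodes kept by the crop."""
    deg = adj.sum(axis=1)
    lap = np.diag(deg) - adj                      # combinatorial Laplacian L = D - A
    _, eigvecs = np.linalg.eigh(lap)              # eigenvalues in ascending order
    fiedler = eigvecs[:, 1]                       # eigenvector of 2nd-smallest eigenvalue
    return np.flatnonzero(fiedler >= threshold)   # one "side" of the spectral cut

# Toy graph: two triangles joined by a single bridge edge.
adj = np.zeros((6, 6))
for i, j in [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]:
    adj[i, j] = adj[j, i] = 1.0
kept = spectral_crop(adj)
print("cropped view keeps nodes:", kept)          # roughly one of the two triangles
view = adj[np.ix_(kept, kept)]                    # induced subgraph as the augmentation
```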
We extend graph kernels and graph neural\nnetworks with dummy nodes and conduct experiments on graph classification and\nsubgraph isomorphism matching tasks. Empirical results demonstrate that taking\ngraphs with dummy nodes as input significantly boosts graph structure learning,\nand using their edge-to-vertex graphs can also achieve similar results. We also\ndiscuss the gain of expressive power from the dummy in neural networks.", + "authors": "Xin Liu, Jiayang Cheng, Yangqiu Song, Xin Jiang", + "published": "2022-06-17", + "updated": "2022-06-17", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/2109.11796v1", + "title": "Edge but not Least: Cross-View Graph Pooling", + "abstract": "Graph neural networks have emerged as a powerful model for graph\nrepresentation learning to undertake graph-level prediction tasks. Various\ngraph pooling methods have been developed to coarsen an input graph into a\nsuccinct graph-level representation through aggregating node embeddings\nobtained via graph convolution. However, most graph pooling methods are heavily\nnode-centric and are unable to fully leverage the crucial information contained\nin global graph structure. This paper presents a cross-view graph pooling\n(Co-Pooling) method to better exploit crucial graph structure information. The\nproposed Co-Pooling fuses pooled representations learnt from both node view and\nedge view. Through cross-view interaction, edge-view pooling and node-view\npooling seamlessly reinforce each other to learn more informative graph-level\nrepresentations. Co-Pooling has the advantage of handling various graphs with\ndifferent types of node attributes. Extensive experiments on a total of 15\ngraph benchmark datasets validate the effectiveness of our proposed method,\ndemonstrating its superior performance over state-of-the-art pooling methods on\nboth graph classification and graph regression tasks.", + "authors": "Xiaowei Zhou, Jie Yin, Ivor W. Tsang", + "published": "2021-09-24", + "updated": "2021-09-24", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/2202.10688v2", + "title": "Graph Lifelong Learning: A Survey", + "abstract": "Graph learning is a popular approach for performing machine learning on\ngraph-structured data. It has revolutionized the machine learning ability to\nmodel graph data to address downstream tasks. Its application is wide due to\nthe availability of graph data ranging from all types of networks to\ninformation systems. Most graph learning methods assume that the graph is\nstatic and its complete structure is known during training. This limits their\napplicability since they cannot be applied to problems where the underlying\ngraph grows over time and/or new tasks emerge incrementally. Such applications\nrequire a lifelong learning approach that can learn the graph continuously and\naccommodate new information whilst retaining previously learned knowledge.\nLifelong learning methods that enable continuous learning in regular domains\nlike images and text cannot be directly applied to continuously evolving graph\ndata, due to its irregular structure. As a result, graph lifelong learning is\ngaining attention from the research community. 
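The dummy-node construction discussed in the abstract above is easy to state concretely: append one extra vertex connected to every existing vertex, leaving original vertex and edge properties untouched. A minimal sketch (the dummy node's feature vector is our assumption):

```python
# Minimal sketch of the dummy-node augmentation.
import numpy as np

def add_dummy_node(adj: np.ndarray, feats: np.ndarray) -> tuple[np.ndarray, np.ndarray]:
    n = adj.shape[0]
    new_adj = np.zeros((n + 1, n + 1), dtype=adj.dtype)
    new_adj[:n, :n] = adj                 # original edges are preserved
    new_adj[n, :n] = 1.0                  # dummy node connects to all vertices
    new_adj[:n, n] = 1.0
    # An all-zero feature row for the dummy node; this choice is an assumption.
    new_feats = np.vstack([feats, np.zeros((1, feats.shape[1]))])
    return new_adj, new_feats

adj = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=float)
feats = np.eye(3)
aug_adj, aug_feats = add_dummy_node(adj, feats)
print(aug_adj.shape, aug_feats.shape)     # (4, 4) (4, 3)
```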
This survey paper provides a\ncomprehensive overview of recent advancements in graph lifelong learning,\nincluding a categorization of existing methods and a discussion of potential\napplications and open research problems.", + "authors": "Falih Gozi Febrinanto, Feng Xia, Kristen Moore, Chandra Thapa, Charu Aggarwal", + "published": "2022-02-22", + "updated": "2022-11-04", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI", + "68T07, 68T05", + "I.2.6" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/2007.16002v1", + "title": "Graph Convolutional Networks using Heat Kernel for Semi-supervised Learning", + "abstract": "Graph convolutional networks have gained remarkable success in\nsemi-supervised learning on graph-structured data. The key to graph-based\nsemi-supervised learning is capturing the smoothness of labels or features over\nnodes exerted by graph structure. Previous methods, both spectral and spatial,\nare devoted to defining graph convolution as a weighted average over\nneighboring nodes, and then learn graph convolution kernels to leverage the\nsmoothness to improve the performance of graph-based semi-supervised learning.\nOne open challenge is how to determine an appropriate neighborhood that\nreflects the relevant smoothness information manifested in the graph structure.\nIn this paper, we propose GraphHeat, leveraging the heat kernel to enhance\nlow-frequency filters and enforce smoothness in the signal variation on the\ngraph. GraphHeat leverages the local structure of the target node under heat\ndiffusion to determine its neighboring nodes flexibly, without the constraint\nof order suffered by previous methods. GraphHeat achieves state-of-the-art\nresults in the task of graph-based semi-supervised classification across three\nbenchmark datasets: Cora, Citeseer and Pubmed.", + "authors": "Bingbing Xu, Huawei Shen, Qi Cao, Keting Cen, Xueqi Cheng", + "published": "2020-07-27", + "updated": "2020-07-27", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.SI" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/1912.07832v1", + "title": "Deep Iterative and Adaptive Learning for Graph Neural Networks", + "abstract": "In this paper, we propose an end-to-end graph learning framework, namely Deep\nIterative and Adaptive Learning for Graph Neural Networks (DIAL-GNN), for\njointly learning the graph structure and graph embeddings. We\nfirst cast the graph structure learning problem as a similarity metric learning\nproblem and leverage an adapted graph regularization for controlling the\nsmoothness, connectivity and sparsity of the generated graph. We further\npropose a novel iterative method for searching for a hidden graph structure\nthat augments the initial graph structure. Our iterative method dynamically\nstops when the learned graph structure approaches the optimal graph closely\nenough. Our extensive experiments demonstrate that the proposed DIAL-GNN model\ncan consistently outperform or match state-of-the-art baselines in terms of\nboth downstream task performance and computational time. The proposed approach\ncan cope with both transductive learning and inductive learning.", + "authors": "Yu Chen, Lingfei Wu, Mohammed J.
Zaki", + "published": "2019-12-17", + "updated": "2019-12-17", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "stat.ML" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/2106.03236v1", + "title": "Graph2Graph Learning with Conditional Autoregressive Models", + "abstract": "We present a graph neural network model for solving graph-to-graph learning\nproblems. Most deep learning on graphs considers ``simple'' problems such as\ngraph classification or regressing real-valued graph properties. For such\ntasks, the main requirement for intermediate representations of the data is to\nmaintain the structure needed for output, i.e., keeping classes separated or\nmaintaining the order indicated by the regressor. However, a number of learning\ntasks, such as regressing graph-valued output, generative models, or graph\nautoencoders, aim to predict a graph-structured output. In order to\nsuccessfully do this, the learned representations need to preserve far more\nstructure. We present a conditional auto-regressive model for graph-to-graph\nlearning and illustrate its representational capabilities via experiments on\nchallenging subgraph predictions from graph algorithmics; as a graph\nautoencoder for reconstruction and visualization; and on pretraining\nrepresentations that allow graph classification with limited labeled data.", + "authors": "Guan Wang, Francois Bernard Lauze, Aasa Feragen", + "published": "2021-06-06", + "updated": "2021-06-06", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/2101.00082v1", + "title": "Bosonic Random Walk Networks for Graph Learning", + "abstract": "The development of Graph Neural Networks (GNNs) has led to great progress in\nmachine learning on graph-structured data. These networks operate via diffusing\ninformation across the graph nodes while capturing the structure of the graph.\nRecently, there has also been tremendous progress in quantum computing\ntechniques. In this work, we explore applications of multi-particle quantum\nwalks to diffusing information across graphs. Our model is based on learning\nthe operators that govern the dynamics of quantum random walkers on graphs. We\ndemonstrate the effectiveness of our method on classification and regression\ntasks.", + "authors": "Shiv Shankar, Don Towsley", + "published": "2020-12-31", + "updated": "2020-12-31", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/2401.15665v1", + "title": "Learnability of a hybrid quantum-classical neural network for graph-structured quantum data", + "abstract": "Classical data with graph structure always exist when dealing with many\nreal-world problems. In parallel, quantum data with graph structure also need\nto be investigated, since they are always produced by structured quantum data\nsources. In this paper, we make use of a hybrid quantum-classical neural network\nwith deep residual learning (Res-HQCNN) to learn graph-structured quantum data.\nSpecifically, based on the special definition of graph-structured quantum data,\nwe first find suitable cost functions so that Res-HQCNN can learn\nsemi-supervised quantum data both with and without graphs.
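The GraphHeat abstract above enhances low-frequency filters with a heat kernel. A small sketch of heat-kernel filtering in that spirit: smooth node signals with exp(-sL), which damps high graph frequencies; the normalization and the scale s below are assumptions.

```python
# Sketch of heat-kernel low-pass filtering in the spirit of GraphHeat.
import numpy as np
from scipy.linalg import expm

def heat_kernel_filter(adj: np.ndarray, x: np.ndarray, s: float = 1.0) -> np.ndarray:
    deg = adj.sum(axis=1)
    d_inv_sqrt = np.diag(1.0 / np.sqrt(np.maximum(deg, 1e-12)))
    lap = np.eye(len(adj)) - d_inv_sqrt @ adj @ d_inv_sqrt   # normalized Laplacian
    return expm(-s * lap) @ x                                # low-pass heat diffusion

rng = np.random.default_rng(2)
adj = np.array([[0, 1, 1, 0], [1, 0, 1, 0], [1, 1, 0, 1], [0, 0, 1, 0]], dtype=float)
x = rng.normal(size=(4, 3))                                  # node signals
print(heat_kernel_filter(adj, x, s=2.0))
```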
Moreover, the training\nalgorithm of Res-HQCNN for graph-structured training data is given in detail.\nNext, in order to show the learning ability of Res-HQCNN, we perform extensive\nexperiments to show that using information about graph structures for\nquantum data can lead to better learning efficiency compared with the state of\nthe art. At the same time, we also design comparative experiments to show\nthat using residual learning can also bring better performance when\ntraining deep quantum neural networks.", + "authors": "Yan-Ying Liang, Si-Le Tang, Zhe-Hao Yi, Hao-Zhen Si-Tu, Zhu-Jun Zheng", + "published": "2024-01-28", + "updated": "2024-01-28", + "primary_cat": "quant-ph", + "cats": [ + "quant-ph" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/1911.08776v2", + "title": "Joint Embedding Learning of Educational Knowledge Graphs", + "abstract": "As an efficient model for knowledge organization, the knowledge graph has\nbeen widely adopted in several fields, e.g., biomedicine, sociology, and\neducation. There is a steady trend of learning embedding representations of\nknowledge graphs to facilitate knowledge graph construction and downstream\ntasks. In general, knowledge graph embedding techniques aim to learn vectorized\nrepresentations which preserve the structural information of the graph.\nConventional embedding learning models rely on structural relationships among\nentities and relations. However, in educational knowledge graphs, structural\nrelationships are not the focus. Instead, the rich literals of the graphs are\nmore valuable. In this paper, we focus on this problem and propose a novel\nmodel for embedding learning of educational knowledge graphs. Our model\nconsiders both structural and literal information and jointly learns embedding\nrepresentations. Three experimental graphs were constructed based on an\neducational knowledge graph which has been applied in real-world teaching. We\nconducted two experiments on the three graphs and other common benchmark\ngraphs. The experimental results proved the effectiveness of our model and its\nsuperiority over other baselines when processing educational knowledge graphs.", + "authors": "Siyu Yao, Ruijie Wang, Shen Sun, Derui Bu, Jun Liu", + "published": "2019-11-20", + "updated": "2019-12-23", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI", + "cs.LG" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/2307.02126v1", + "title": "Robust Graph Structure Learning with the Alignment of Features and Adjacency Matrix", + "abstract": "To improve the robustness of graph neural networks (GNN), graph structure\nlearning (GSL) has attracted great interest due to the pervasiveness of noise\nin graph data. Many approaches have been proposed for GSL to jointly learn a\nclean graph structure and corresponding representations. To extend the previous\nwork, this paper proposes a novel regularized GSL approach, particularly with\nan alignment of feature information and graph information, which is motivated\nmainly by our derived lower bound of node-level Rademacher complexity for GNNs.\nAdditionally, our proposed approach incorporates sparse dimensional reduction\nto leverage low-dimensional node features that are relevant to the graph\nstructure. To evaluate the effectiveness of our approach, we conduct\nexperiments on real-world graphs.
The results demonstrate that our proposed GSL\nmethod outperforms several competitive baselines, especially in scenarios where\nthe graph structures are heavily affected by noise. Overall, our research\nhighlights the importance of integrating feature and graph information\nalignment in GSL, as inspired by our derived theoretical result, and showcases\nthe superiority of our approach in handling noisy graph structures through\ncomprehensive experiments on real-world datasets.", + "authors": "Shaogao Lv, Gang Wen, Shiyu Liu, Linsen Wei, Ming Li", + "published": "2023-07-05", + "updated": "2023-07-05", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "stat.ML" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/2105.00696v1", + "title": "Graph Learning: A Survey", + "abstract": "Graphs are widely used as a popular representation of the network structure\nof connected data. Graph data can be found in a broad spectrum of application\ndomains such as social systems, ecosystems, biological networks, knowledge\ngraphs, and information systems. With the continuous penetration of artificial\nintelligence technologies, graph learning (i.e., machine learning on graphs) is\ngaining attention from both researchers and practitioners. Graph learning\nproves effective for many tasks, such as classification, link prediction, and\nmatching. Generally, graph learning methods extract relevant features of graphs\nby taking advantage of machine learning algorithms. In this survey, we present\na comprehensive overview on the state-of-the-art of graph learning. Special\nattention is paid to four categories of existing graph learning methods,\nincluding graph signal processing, matrix factorization, random walk, and deep\nlearning. Major models and algorithms under these categories are reviewed\nrespectively. We examine graph learning applications in areas such as text,\nimages, science, knowledge graphs, and combinatorial optimization. In addition,\nwe discuss several promising research directions in this field.", + "authors": "Feng Xia, Ke Sun, Shuo Yu, Abdul Aziz, Liangtian Wan, Shirui Pan, Huan Liu", + "published": "2021-05-03", + "updated": "2021-05-03", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI", + "cs.SI", + "68T07", + "I.2.6" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/2011.01412v1", + "title": "Sampling and Recovery of Graph Signals based on Graph Neural Networks", + "abstract": "We propose interpretable graph neural networks for sampling and recovery of\ngraph signals, respectively. To take informative measurements, we propose a new\ngraph neural sampling module, which aims to select those vertices that\nmaximally express their corresponding neighborhoods. Such expressiveness can be\nquantified by the mutual information between vertices' features and\nneighborhoods' features, which are estimated via a graph neural network. To\nreconstruct an original graph signal from the sampled measurements, we propose\na graph neural recovery module based on the algorithm-unrolling technique.\nCompared to previous analytical sampling and recovery, the proposed methods are\nable to flexibly learn a variety of graph signal models from data by leveraging\nthe learning ability of neural networks; compared to previous\nneural-network-based sampling and recovery, the proposed methods are designed\nthrough exploiting specific graph properties and provide interpretability. 
We\nfurther design a new multiscale graph neural network, which is a trainable\nmultiscale graph filter bank and can handle various graph-related learning\ntasks. The multiscale network leverages the proposed graph neural sampling and\nrecovery modules to achieve multiscale representations of a graph. In the\nexperiments, we illustrate the effects of the proposed graph neural sampling\nand recovery modules and find that the modules can flexibly adapt to various\ngraph structures and graph signals. In the task of active-sampling-based\nsemi-supervised learning, the graph neural sampling module improves the\nclassification accuracy over 10% in Cora dataset. We further validate the\nproposed multiscale graph neural network on several standard datasets for both\nvertex and graph classification. The results show that our method consistently\nimproves the classification accuracies.", + "authors": "Siheng Chen, Maosen Li, Ya Zhang", + "published": "2020-11-03", + "updated": "2020-11-03", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.SI", + "eess.SP" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/2401.13769v1", + "title": "Multiview Graph Learning with Consensus Graph", + "abstract": "Graph topology inference, i.e., learning graphs from a given set of nodal\nobservations, is a significant task in many application domains. Existing\napproaches are mostly limited to learning a single graph assuming that the\nobserved data is homogeneous. This is problematic because many modern datasets\nare heterogeneous or mixed and involve multiple related graphs, i.e., multiview\ngraphs. Recent work proposing to learn multiview graphs ensures the similarity\nof learned view graphs through pairwise regularization, where each pair of\nviews is encouraged to have similar structures. However, this approach cannot\ninfer the shared structure across views. In this work, we propose an\nalternative method based on consensus regularization, where views are ensured\nto be similar through a learned consensus graph representing the common\nstructure of the views. In particular, we propose an optimization problem,\nwhere graph data is assumed to be smooth over the multiview graph and the\ntopology of the individual views and that of the consensus graph are learned,\nsimultaneously. Our optimization problem is designed to be general in the sense\nthat different regularization functions can be used depending on what the\nshared structure across views is. Moreover, we propose two regularization\nfunctions that extend fused and group graphical lasso to consensus based\nregularization. Proposed multiview graph learning is evaluated on simulated\ndata and shown to have better performance than existing methods. It is also\nemployed to infer the functional brain connectivity networks of multiple\nsubjects from their electroencephalogram (EEG) recordings. 
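The consensus-regularization idea in the multiview graph learning abstract above can be made concrete with a toy objective: per-view signals should be smooth on their view graphs, while view graphs stay close to a shared consensus graph. The sketch below only evaluates such an objective (the paper solves an optimization problem); all weights are assumptions.

```python
# Toy evaluation of a consensus-regularized multiview objective; assumptions only.
import numpy as np

def laplacian(w: np.ndarray) -> np.ndarray:
    return np.diag(w.sum(axis=1)) - w

def smoothness(x: np.ndarray, w: np.ndarray) -> float:
    # tr(X^T L X) = (1/2) * sum_ij w_ij ||x_i - x_j||^2
    return float(np.trace(x.T @ laplacian(w) @ x))

def consensus_objective(xs, ws, consensus, alpha=1.0):
    fit = sum(smoothness(x, w) for x, w in zip(xs, ws))          # per-view smoothness
    tie = sum(np.linalg.norm(w - consensus) ** 2 for w in ws)    # views near consensus
    return fit + alpha * tie

rng = np.random.default_rng(3)
ws = [rng.uniform(size=(5, 5)) for _ in range(3)]
ws = [(w + w.T) / 2 for w in ws]                     # symmetric view graphs
xs = [rng.normal(size=(5, 2)) for _ in range(3)]     # per-view nodal observations
consensus = sum(ws) / len(ws)                        # minimizer of the tie term alone
print("objective:", consensus_objective(xs, ws, consensus))
```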
The proposed method\nreveals the structure shared by subjects as well as the characteristics unique\nto each subject.", + "authors": "Abdullah Karaaslanli, Selin Aviyente", + "published": "2024-01-24", + "updated": "2024-01-24", + "primary_cat": "eess.SP", + "cats": [ + "eess.SP", + "cs.LG" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/2402.02321v1", + "title": "Active Learning for Graphs with Noisy Structures", + "abstract": "Graph Neural Networks (GNNs) have seen significant success in tasks such as\nnode classification, largely contingent upon the availability of sufficient\nlabeled nodes. Yet, the excessive cost of labeling large-scale graphs led to a\nfocus on active learning on graphs, which aims for effective data selection to\nmaximize downstream model performance. Notably, most existing methods assume\nreliable graph topology, while real-world scenarios often present noisy graphs.\nGiven this, designing a successful active learning framework for noisy graphs\nis highly needed but challenging, as selecting data for labeling and obtaining\na clean graph are two tasks naturally interdependent: selecting high-quality\ndata requires clean graph structure while cleaning noisy graph structure\nrequires sufficient labeled data. Considering the complexity mentioned above,\nwe propose an active learning framework, GALClean, which has been specifically\ndesigned to adopt an iterative approach for conducting both data selection and\ngraph purification simultaneously with best information learned from the prior\niteration. Importantly, we summarize GALClean as an instance of the\nExpectation-Maximization algorithm, which provides a theoretical understanding\nof its design and mechanisms. This theory naturally leads to an enhanced\nversion, GALClean+. Extensive experiments have demonstrated the effectiveness\nand robustness of our proposed method across various types and levels of noisy\ngraphs.", + "authors": "Hongliang Chi, Cong Qi, Suhang Wang, Yao Ma", + "published": "2024-02-04", + "updated": "2024-02-04", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/1802.04407v2", + "title": "Adversarially Regularized Graph Autoencoder for Graph Embedding", + "abstract": "Graph embedding is an effective method to represent graph data in a low\ndimensional space for graph analytics. Most existing embedding algorithms\ntypically focus on preserving the topological structure or minimizing the\nreconstruction errors of graph data, but they have mostly ignored the data\ndistribution of the latent codes from the graphs, which often results in\ninferior embedding in real-world graph data. In this paper, we propose a novel\nadversarial graph embedding framework for graph data. The framework encodes the\ntopological structure and node content in a graph to a compact representation,\non which a decoder is trained to reconstruct the graph structure. Furthermore,\nthe latent representation is enforced to match a prior distribution via an\nadversarial training scheme. 
To learn a robust embedding, two variants of\nadversarial approaches, adversarially regularized graph autoencoder (ARGA) and\nadversarially regularized variational graph autoencoder (ARVGA), are developed.\nExperimental studies on real-world graphs validate our design and demonstrate\nthat our algorithms outperform baselines by a wide margin in link prediction,\ngraph clustering, and graph visualization tasks.", + "authors": "Shirui Pan, Ruiqi Hu, Guodong Long, Jing Jiang, Lina Yao, Chengqi Zhang", + "published": "2018-02-13", + "updated": "2019-01-08", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "stat.ML" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/1910.11390v2", + "title": "Deep Learning for Molecular Graphs with Tiered Graph Autoencoders and Graph Prediction", + "abstract": "Tiered graph autoencoders provide the architecture and mechanisms for\nlearning tiered latent representations and latent spaces for molecular graphs\nthat explicitly represent and utilize groups (e.g., functional groups). This\nenables the utilization and exploration of tiered molecular latent spaces,\neither individually - the node (atom) tier, the group tier, or the graph\n(molecule) tier - or jointly, as well as navigation across the tiers. In this\npaper, we discuss the use of tiered graph autoencoders together with graph\nprediction for molecular graphs. We show features of molecular graphs used, and\ngroups in molecular graphs identified for some sample molecules. We briefly\nreview graph prediction and the QM9 dataset for background information, and\ndiscuss the use of tiered graph embeddings for graph prediction, particularly\nweighted group pooling. We find that functional groups and ring groups\neffectively capture and represent the chemical essence of molecular graphs\n(structures). Further, tiered graph autoencoders and graph prediction together\nprovide effective, efficient and interpretable deep learning for molecular\ngraphs, with the former providing unsupervised, transferable learning and the\nlatter providing supervised, task-optimized learning.", + "authors": "Daniel T. Chang", + "published": "2019-10-24", + "updated": "2021-07-01", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "q-bio.BM" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/2212.04934v1", + "title": "Learning Graph Algorithms With Recurrent Graph Neural Networks", + "abstract": "Classical graph algorithms work well for combinatorial problems that can be\nthoroughly formalized and abstracted. Once the algorithm is derived, it\ngeneralizes to instances of any size. However, developing an algorithm that\nhandles complex structures and interactions in the real world can be\nchallenging. Rather than specifying the algorithm, we can try to learn it from\nthe graph-structured data. Graph Neural Networks (GNNs) are inherently capable\nof working on graph structures; however, they struggle to generalize well, and\nlearning on larger instances is challenging. In order to scale, we focus on a\nrecurrent architecture design that can learn simple graph problems end to end\non smaller graphs and then extrapolate to larger instances. As our main\ncontribution, we identify three essential techniques for recurrent GNNs to\nscale. By using (i) skip connections, (ii) state regularization, and (iii) edge\nconvolutions, we can guide GNNs toward extrapolation. 
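The adversarial training scheme behind ARGA/ARVGA, described in the abstract above, can be compressed into a toy PyTorch loop: a discriminator separates prior samples from encoder outputs, and the encoder is trained to fool it while reconstructing the adjacency with an inner-product decoder. The linear encoder and the made-up adjacency target below are stand-ins, not the paper's GCN encoder.

```python
# Toy sketch of ARGA-style adversarial latent regularization; stand-ins only.
import torch
import torch.nn as nn

n, d_in, d_z = 20, 8, 4
x = torch.randn(n, d_in)                       # stand-in for graph-convolved features
target_adj = (x @ x.T > 0).float()             # hypothetical adjacency target
encoder = nn.Linear(d_in, d_z)                 # toy encoder (ARGA uses a GCN here)
disc = nn.Sequential(nn.Linear(d_z, 16), nn.ReLU(), nn.Linear(16, 1))
opt_e = torch.optim.Adam(encoder.parameters(), lr=1e-2)
opt_d = torch.optim.Adam(disc.parameters(), lr=1e-2)
bce = nn.BCEWithLogitsLoss()

for step in range(100):
    # Discriminator: real = samples from the Gaussian prior, fake = encodings.
    z = encoder(x)
    d_loss = bce(disc(torch.randn(n, d_z)), torch.ones(n, 1)) \
           + bce(disc(z.detach()), torch.zeros(n, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()
    # Encoder: fool the discriminator and reconstruct edges via inner product.
    recon = torch.sigmoid(encoder(x) @ encoder(x).T)
    g_loss = bce(disc(encoder(x)), torch.ones(n, 1)) \
           + nn.functional.binary_cross_entropy(recon, target_adj)
    opt_e.zero_grad(); g_loss.backward(); opt_e.step()
print("final losses:", float(d_loss), float(g_loss))
```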
This allows us to train\non small graphs and apply the same model to much larger graphs during\ninference. Moreover, we empirically validate the extrapolation capabilities of\nour GNNs on algorithmic datasets.", + "authors": "Florian Gr\u00f6tschla, Jo\u00ebl Mathys, Roger Wattenhofer", + "published": "2022-12-09", + "updated": "2022-12-09", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/2306.02664v2", + "title": "Structure-free Graph Condensation: From Large-scale Graphs to Condensed Graph-free Data", + "abstract": "Graph condensation, which reduces the size of a large-scale graph by\nsynthesizing a small-scale condensed graph as its substitution, has immediate\nbenefits for various graph learning tasks. However, existing graph condensation\nmethods rely on the joint optimization of nodes and structures in the condensed\ngraph, and overlook critical issues in effectiveness and generalization\nability. In this paper, we advocate a new Structure-Free Graph Condensation\nparadigm, named SFGC, to distill a large-scale graph into a small-scale graph\nnode set without explicit graph structures, i.e., graph-free data. Our idea is\nto implicitly encode topology structure information into the node attributes in\nthe synthesized graph-free data, whose topology is reduced to an identity\nmatrix. Specifically, SFGC contains two collaborative components: (1) a\ntraining trajectory meta-matching scheme for effectively synthesizing\nsmall-scale graph-free data; (2) a graph neural feature score metric for\ndynamically evaluating the quality of the condensed data. Through training\ntrajectory meta-matching, SFGC aligns the long-term GNN learning behaviors\nbetween the large-scale graph and the condensed small-scale graph-free data,\nensuring comprehensive and compact transfer of informative knowledge to the\ngraph-free data. Afterward, the underlying condensed graph-free data would be\ndynamically evaluated with the graph neural feature score, which is a\nclosed-form metric for ensuring the excellent expressiveness of the condensed\ngraph-free data. Extensive experiments verify the superiority of SFGC across\ndifferent condensation ratios.", + "authors": "Xin Zheng, Miao Zhang, Chunyang Chen, Quoc Viet Hung Nguyen, Xingquan Zhu, Shirui Pan", + "published": "2023-06-05", + "updated": "2023-10-23", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.SI" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/2104.09304v1", + "title": "A Tunable Model for Graph Generation Using LSTM and Conditional VAE", + "abstract": "With the development of graph applications, generative models for graphs have\nbeen more crucial. Classically, stochastic models that generate graphs with a\npre-defined probability of edges and nodes have been studied. Recently, some\nmodels that reproduce the structural features of graphs by learning from actual\ngraph data using machine learning have been studied. However, in these\nconventional studies based on machine learning, structural features of graphs\ncan be learned from data, but it is not possible to tune features and generate\ngraphs with specific features. In this paper, we propose a generative model\nthat can tune specific features, while learning structural features of a graph\nfrom data. 
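The recurrent-GNN recipe in the abstract above (one shared cell iterated many times, with skip connections) extrapolates by running more steps on larger graphs at inference. A bare-bones NumPy sketch under our own assumptions about the update rule:

```python
# Bare-bones recurrent message passing with a skip connection; sizes assumed.
import numpy as np

def normalize(adj: np.ndarray) -> np.ndarray:
    deg = adj.sum(axis=1, keepdims=True)
    return adj / np.maximum(deg, 1e-12)                  # row-normalized propagation

def recurrent_gnn(adj, x, w, w_skip, steps):
    p = normalize(adj)
    h = x
    for _ in range(steps):                               # the same cell every step
        h = np.tanh(p @ h @ w + x @ w_skip)              # skip connection to input x
    return h

rng = np.random.default_rng(4)
d = 6
w, w_skip = rng.normal(size=(d, d)) * 0.3, rng.normal(size=(d, d)) * 0.3
small = (rng.uniform(size=(10, 10)) > 0.6).astype(float)
large = (rng.uniform(size=(200, 200)) > 0.9).astype(float)
# Train-time graphs are small; at inference the very same weights run for more
# steps on a much larger graph.
print(recurrent_gnn(small, rng.normal(size=(10, d)), w, w_skip, steps=10).shape)
print(recurrent_gnn(large, rng.normal(size=(200, d)), w, w_skip, steps=40).shape)
```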
With a dataset of graphs with various features generated by a\nstochastic model, we confirm that our model can generate a graph with specific\nfeatures.", + "authors": "Shohei Nakazawa, Yoshiki Sato, Kenji Nakagawa, Sho Tsugawa, Kohei Watabe", + "published": "2021-04-15", + "updated": "2021-04-15", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.NI", + "cs.SI" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/1906.02319v1", + "title": "DEMO-Net: Degree-specific Graph Neural Networks for Node and Graph Classification", + "abstract": "Graph data widely exist in many high-impact applications. Inspired by the\nsuccess of deep learning in grid-structured data, graph neural network models\nhave been proposed to learn powerful node-level or graph-level representation.\nHowever, most of the existing graph neural networks suffer from the following\nlimitations: (1) there is limited analysis regarding the graph convolution\nproperties, such as seed-oriented, degree-aware and order-free; (2) the node's\ndegree-specific graph structure is not explicitly expressed in graph\nconvolution for distinguishing structure-aware node neighborhoods; (3) the\ntheoretical explanation regarding the graph-level pooling schemes is unclear.\n To address these problems, we propose a generic degree-specific graph neural\nnetwork named DEMO-Net motivated by Weisfeiler-Lehman graph isomorphism test\nthat recursively identifies 1-hop neighborhood structures. In order to\nexplicitly capture the graph topology integrated with node attributes, we argue\nthat graph convolution should have three properties: seed-oriented,\ndegree-aware, order-free. To this end, we propose multi-task graph convolution\nwhere each task represents node representation learning for nodes with a\nspecific degree value, thus leading to preserving the degree-specific graph\nstructure. In particular, we design two multi-task learning methods:\ndegree-specific weight and hashing functions for graph convolution. In\naddition, we propose a novel graph-level pooling/readout scheme for learning\ngraph representation provably lying in a degree-specific Hilbert kernel space.\nThe experimental results on several node and graph classification benchmark\ndata sets demonstrate the effectiveness and efficiency of our proposed DEMO-Net\nover state-of-the-art graph neural network models.", + "authors": "Jun Wu, Jingrui He, Jiejun Xu", + "published": "2019-06-05", + "updated": "2019-06-05", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "stat.ML" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/2009.00647v4", + "title": "Lifelong Graph Learning", + "abstract": "Graph neural networks (GNN) are powerful models for many graph-structured\ntasks. Existing models often assume that the complete structure of the graph is\navailable during training. In practice, however, graph-structured data is\nusually formed in a streaming fashion so that learning a graph continuously is\noften necessary. In this paper, we bridge GNN and lifelong learning by\nconverting a continual graph learning problem to a regular graph learning\nproblem so GNN can inherit the lifelong learning techniques developed for\nconvolutional neural networks (CNN). We propose a new topology, the feature\ngraph, which takes features as new nodes and turns nodes into independent\ngraphs. This successfully converts the original problem of node classification\nto graph classification. 
In the experiments, we demonstrate the efficiency and\neffectiveness of feature graph networks (FGN) by continuously learning a\nsequence of classical graph datasets. We also show that FGN achieves superior\nperformance in two applications, i.e., lifelong human action recognition with\nwearable devices and feature matching. To the best of our knowledge, FGN is the\nfirst method to bridge graph learning and lifelong learning via a novel graph\ntopology. Source code is available at https://github.com/wang-chen/LGL", + "authors": "Chen Wang, Yuheng Qiu, Dasong Gao, Sebastian Scherer", + "published": "2020-09-01", + "updated": "2022-03-26", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "stat.ML" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/1803.03324v1", + "title": "Learning Deep Generative Models of Graphs", + "abstract": "Graphs are fundamental data structures which concisely capture the relational\nstructure in many important real-world domains, such as knowledge graphs,\nphysical and social interactions, language, and chemistry. Here we introduce a\npowerful new approach for learning generative models over graphs, which can\ncapture both their structure and attributes. Our approach uses graph neural\nnetworks to express probabilistic dependencies among a graph's nodes and edges,\nand can, in principle, learn distributions over any arbitrary graph. In a\nseries of experiments our results show that once trained, our models can\ngenerate good quality samples of both synthetic graphs as well as real\nmolecular graphs, both unconditionally and conditioned on data. Compared to\nbaselines that do not use graph-structured representations, our models often\nperform far better. We also explore key challenges of learning generative\nmodels of graphs, such as how to handle symmetries and ordering of elements\nduring the graph generation process, and offer possible solutions. Our work is\nthe first and most general approach for learning generative models over\narbitrary graphs, and opens new directions for moving away from restrictions of\nvector- and sequence-like knowledge representations, toward more expressive and\nflexible relational data structures.", + "authors": "Yujia Li, Oriol Vinyals, Chris Dyer, Razvan Pascanu, Peter Battaglia", + "published": "2018-03-08", + "updated": "2018-03-08", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "stat.ML" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/2108.01660v3", + "title": "Graph Neural Networks With Lifting-based Adaptive Graph Wavelets", + "abstract": "Spectral-based graph neural networks (SGNNs) have been attracting increasing\nattention in graph representation learning. However, existing SGNNs are limited\nin implementing graph filters with rigid transforms (e.g., graph Fourier or\npredefined graph wavelet transforms) and cannot adapt to signals residing on\ngraphs and tasks at hand. In this paper, we propose a novel class of graph\nneural networks that realizes graph filters with adaptive graph wavelets.\nSpecifically, the adaptive graph wavelets are learned with neural\nnetwork-parameterized lifting structures, where structure-aware attention-based\nlifting operations (i.e., prediction and update operations) are developed to\njointly consider graph structures and node features. We propose to lift based\non diffusion wavelets to alleviate the structural information loss induced by\npartitioning non-bipartite graphs. 
By design, the locality and sparsity of the\nresulting wavelet transform as well as the scalability of the lifting structure\nare guaranteed. We further derive a soft-thresholding filtering operation by\nlearning sparse graph representations in terms of the learned wavelets,\nyielding localized, efficient, and scalable wavelet-based graph filters. To\nensure that the learned graph representations are invariant to node\npermutations, a layer is employed at the input of the networks to reorder the\nnodes according to their local topology information. We evaluate the proposed\nnetworks in both node-level and graph-level representation learning tasks on\nbenchmark citation and bioinformatics graph datasets. Extensive experiments\ndemonstrate the superiority of the proposed networks over existing SGNNs in\nterms of accuracy, efficiency, and scalability.", + "authors": "Mingxing Xu, Wenrui Dai, Chenglin Li, Junni Zou, Hongkai Xiong, Pascal Frossard", + "published": "2021-08-03", + "updated": "2022-01-04", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/2003.03892v2", + "title": "COPT: Coordinated Optimal Transport for Graph Sketching", + "abstract": "We introduce COPT, a novel distance metric between graphs defined via an\noptimization routine, computing a coordinated pair of optimal transport maps\nsimultaneously. This gives an unsupervised way to learn general-purpose graph\nrepresentations, applicable to both graph sketching and graph comparison. COPT\ninvolves simultaneously optimizing dual transport plans, one between the\nvertices of two graphs, and another between graph signal probability\ndistributions. We show theoretically that our method preserves important global\nstructural information on graphs, in particular spectral information, and\nanalyze connections to existing studies. Empirically, COPT outperforms\nstate-of-the-art methods in graph classification on both synthetic and real\ndatasets.", + "authors": "Yihe Dong, Will Sawin", + "published": "2020-03-09", + "updated": "2020-06-15", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.DS", + "stat.ML" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/2110.05018v2", + "title": "Time-varying Graph Learning Under Structured Temporal Priors", + "abstract": "This paper endeavors to learn time-varying graphs by using structured\ntemporal priors that assume underlying relations between any two graphs\nin the graph sequence. Different from many existing chain-structure-based\nmethods, in which priors like temporal homogeneity can only describe the\nvariations of two consecutive graphs, we propose a structure named\n\emph{temporal graph} to characterize the underlying real temporal relations.\nUnder this framework, the chain structure is actually a special case of our\ntemporal graph.
We further propose a distributed algorithm based on the Alternating Direction\nMethod of Multipliers (ADMM) to solve the induced optimization problem.\nNumerical experiments demonstrate the superiority of our method.", + "authors": "Xiang Zhang, Qiao Wang", + "published": "2021-10-11", + "updated": "2022-02-23", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "eess.SP" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/2210.02060v1", + "title": "Graph Classification via Discriminative Edge Feature Learning", + "abstract": "Spectral graph convolutional neural networks (GCNNs) have been producing\nencouraging results in graph classification tasks. However, most spectral GCNNs\nutilize fixed graphs when aggregating node features, while omitting edge\nfeature learning and failing to get an optimal graph structure. Moreover, many\nexisting graph datasets do not provide initialized edge features, further\nrestricting the ability to learn edge features via spectral GCNNs. In this\npaper, we try to address this issue by designing an edge feature scheme and an\nadd-on layer between every two stacked graph convolution layers in GCNN. Both\nare lightweight yet effective in filling the gap between edge feature\nlearning and performance enhancement of graph classification. The edge feature\nscheme makes edge features adapt to node representations at different graph\nconvolution layers. The add-on layers help adjust the edge features to an\noptimal graph structure. To test the effectiveness of our method, we take\nEuclidean positions as initial node features and extract graphs with semantic\ninformation from point cloud objects. The node features of our extracted graphs\nare more scalable for edge feature learning than most existing graph datasets\n(in one-hot encoded label format). Three new graph datasets are constructed\nbased on the ModelNet40, ModelNet10 and ShapeNet Part datasets. Experimental\nresults show that our method outperforms state-of-the-art graph classification\nmethods on the new datasets by reaching 96.56% overall accuracy on\nGraph-ModelNet40, 98.79% on Graph-ModelNet10 and 97.91% on Graph-ShapeNet Part.\nThe constructed graph datasets will be released to the community.", + "authors": "Yang Yi, Xuequan Lu, Shang Gao, Antonio Robles-Kelly, Yuejie Zhang", + "published": "2022-10-05", + "updated": "2022-10-05", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV", + "cs.LG" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/2111.04286v1", + "title": "Deep Unsupervised Active Learning on Learnable Graphs", + "abstract": "Recently, deep learning has been successfully applied to unsupervised active\nlearning. However, current methods attempt to learn a nonlinear transformation\nvia an auto-encoder while ignoring sample relations, leaving huge room to\ndesign more effective representation learning mechanisms for unsupervised\nactive learning. In this paper, we propose a novel deep unsupervised Active\nLearning model via Learnable Graphs, named ALLG. ALLG benefits from learning\noptimal graph structures to acquire better sample representations and select\nrepresentative samples. To make the learnt graph structure more stable and\neffective, we take the $k$-nearest neighbor graph into account as a prior and\nlearn a relation propagation graph structure. We also incorporate shortcut\nconnections among different layers, which can alleviate the well-known\nover-smoothing problem to some extent.
To the best of our knowledge, this is\nthe first attempt to leverage graph structure learning for unsupervised active\nlearning. Extensive experiments performed on six datasets demonstrate the\nefficacy of our method.", + "authors": "Handong Ma, Changsheng Li, Xinchu Shi, Ye Yuan, Guoren Wang", + "published": "2021-11-08", + "updated": "2021-11-08", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/2104.08163v1", + "title": "Finding Motifs in Knowledge Graphs using Compression", + "abstract": "We introduce a method to find network motifs in knowledge graphs. Network\nmotifs are useful patterns or meaningful subunits of the graph that recur\nfrequently. We extend the common definition of a network motif to coincide with\na basic graph pattern. We introduce an approach, inspired by recent work for\nsimple graphs, to induce these from a given knowledge graph, and show that the\nmotifs found reflect the basic structure of the graph. Specifically, we show\nthat in random graphs, no motifs are found, and that when we insert a motif\nartificially, it can be detected. Finally, we show the results of motif\ninduction on three real-world knowledge graphs.", + "authors": "Peter Bloem", + "published": "2021-04-16", + "updated": "2021-04-16", + "primary_cat": "stat.ML", + "cats": [ + "stat.ML", + "cs.DS", + "cs.LG", + "cs.SI" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/1903.00614v1", + "title": "GAP: Generalizable Approximate Graph Partitioning Framework", + "abstract": "Graph partitioning is the problem of dividing the nodes of a graph into\nbalanced partitions while minimizing the edge cut across the partitions. Due to\nits combinatorial nature, many approximate solutions have been developed,\nincluding variants of multi-level methods and spectral clustering. We propose\nGAP, a Generalizable Approximate Partitioning framework that takes a deep\nlearning approach to graph partitioning. We define a differentiable loss\nfunction that represents the partitioning objective and use backpropagation to\noptimize the network parameters. Unlike baselines that redo the optimization\nper graph, GAP is capable of generalization, allowing us to train models that\nproduce performant partitions at inference time, even on unseen graphs.\nFurthermore, because we learn the representation of the graph while jointly\noptimizing for the partitioning loss function, GAP can be easily tuned for a\nvariety of graph structures. We evaluate the performance of GAP on graphs of\nvarying sizes and structures, including graphs of widely used machine learning\nmodels (e.g., ResNet, VGG, and Inception-V3), scale-free graphs, and random\ngraphs. We show that GAP achieves competitive partitions while being up to 100\ntimes faster than the baseline and generalizes to unseen graphs.", + "authors": "Azade Nazi, Will Hang, Anna Goldie, Sujith Ravi, Azalia Mirhoseini", + "published": "2019-03-02", + "updated": "2019-03-02", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "stat.ML" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/2108.04595v1", + "title": "Label-informed Graph Structure Learning for Node Classification", + "abstract": "Graph Neural Networks (GNNs) have achieved great success among various\ndomains. Nevertheless, most GNN methods are sensitive to the quality of graph\nstructures. 
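The differentiable partitioning loss that GAP (abstract above) backpropagates through can be sketched as an expected normalized cut over soft softmax assignments plus a balance penalty; the exact terms below are our assumptions.

```python
# Sketch of a GAP-style differentiable normalized-cut objective; terms assumed.
import numpy as np

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def gap_style_loss(logits, adj):
    y = softmax(logits)                               # (n, g) soft assignments
    deg = adj.sum(axis=1)
    vol = y.T @ deg                                   # expected partition volumes
    cut = np.einsum('ik,ij,jk->k', y, adj, 1.0 - y)   # expected cut per partition
    ncut = (cut / np.maximum(vol, 1e-12)).sum()
    n, g = y.shape
    balance = ((y.sum(axis=0) - n / g) ** 2).sum()    # penalize unbalanced sizes
    return ncut + balance / n

rng = np.random.default_rng(5)
adj = (rng.uniform(size=(12, 12)) > 0.7).astype(float)
adj = np.triu(adj, 1)
adj = adj + adj.T                                     # symmetric toy graph
logits = rng.normal(size=(12, 3))                     # would come from a GNN in GAP
print("loss:", gap_style_loss(logits, adj))
```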
To tackle this problem, some studies exploit different graph\nstructure learning strategies to refine the original graph structure. However,\nthese methods only consider feature information while ignoring available label\ninformation. In this paper, we propose a novel label-informed graph structure\nlearning framework which incorporates label information explicitly through a\nclass transition matrix. We conduct extensive experiments on seven node\nclassification benchmark datasets and the results show that our method\noutperforms or matches the state-of-the-art baselines.", + "authors": "Liping Wang, Fenyu Hu, Shu Wu, Liang Wang", + "published": "2021-08-10", + "updated": "2021-08-10", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/2309.10134v1", + "title": "GDM: Dual Mixup for Graph Classification with Limited Supervision", + "abstract": "Graph Neural Networks (GNNs) require a large number of labeled graph samples\nto obtain good performance on the graph classification task. The performance of\nGNNs degrades significantly as the number of labeled graph samples decreases.\nTo reduce the annotation cost, it is therefore important to develop graph\naugmentation methods that can generate new graph instances to increase the size\nand diversity of the limited set of available labeled graph samples. In this\nwork, we propose a novel mixup-based graph augmentation method, Graph Dual\nMixup (GDM), that leverages both functional and structural information of the\ngraph instances to generate new labeled graph samples. GDM employs a graph\nstructural auto-encoder to learn structural embeddings of the graph samples,\nand then applies mixup to the structural information of the graphs in the\nlearned structural embedding space and generates new graph structures from the\nmixup structural embeddings. As for the functional information, GDM applies\nmixup directly to the input node features of the graph samples to generate\nfunctional node feature information for new mixup graph instances. Jointly, the\ngenerated input node features and graph structures yield new graph samples\nwhich can supplement the set of original labeled graphs. Furthermore, we\npropose two novel Balanced Graph Sampling methods to enhance the balanced\ndifficulty and diversity for the generated graph samples. Experimental results\non the benchmark datasets demonstrate that our proposed method substantially\noutperforms the state-of-the-art graph augmentation methods when the labeled\ngraphs are scarce.", + "authors": "Abdullah Alchihabi, Yuhong Guo", + "published": "2023-09-18", + "updated": "2023-09-18", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/1609.04350v2", + "title": "Time-Variant Graph Classification", + "abstract": "Graphs are commonly used to represent objects, such as images and text, for\npattern classification. In a dynamic world, an object may continuously evolve\nover time, and so does the graph extracted from the underlying object. These\nchanges in graph structure with respect to the temporal order present a new\nrepresentation of the graph, in which an object corresponds to a set of\ntime-variant graphs. In this paper, we formulate a novel time-variant graph\nclassification task and propose a new graph feature, called a graph-shapelet\npattern, for learning and classifying time-variant graphs. 
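The functional half of Graph Dual Mixup, per the abstract above, applies mixup directly to input node features. A toy sketch; the Beta-distributed coefficient and zero-padding to a common node count are our assumptions.

```python
# Toy sketch of feature-level mixup for graph samples; padding scheme assumed.
import numpy as np

def feature_mixup(x1, x2, y1, y2, rng, alpha=1.0):
    n = max(len(x1), len(x2))
    pad = lambda x: np.vstack([x, np.zeros((n - len(x), x.shape[1]))])
    lam = rng.beta(alpha, alpha)
    x_mix = lam * pad(x1) + (1 - lam) * pad(x2)   # mixed node features
    y_mix = lam * y1 + (1 - lam) * y2             # mixed (one-hot) graph labels
    return x_mix, y_mix, lam

rng = np.random.default_rng(6)
x1, x2 = rng.normal(size=(5, 3)), rng.normal(size=(7, 3))
y1, y2 = np.array([1.0, 0.0]), np.array([0.0, 1.0])
x_mix, y_mix, lam = feature_mixup(x1, x2, y1, y2, rng)
print(lam, x_mix.shape, y_mix)
```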
Graph-shapelet\npatterns are compact and discriminative graph transformation subsequences. A\ngraph-shapelet pattern can be regarded as a graphical extension of a shapelet\n-- a class of discriminative features designed for vector-based temporal data\nclassification. To discover graph-shapelet patterns, we propose to convert a\ntime-variant graph sequence into time-series data and use the discovered\nshapelets to find graph transformation subsequences as graph-shapelet patterns.\nBy converting each graph-shapelet pattern into a unique tokenized graph\ntransformation sequence, we can measure the similarity between two\ngraph-shapelet patterns and therefore classify time-variant graphs. Experiments\non both synthetic and real-world data demonstrate the superior performance of\nthe proposed algorithms.", + "authors": "Haishuai Wang", + "published": "2016-09-14", + "updated": "2017-06-12", + "primary_cat": "cs.DS", + "cats": [ + "cs.DS" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/2304.13195v1", + "title": "Connector 0.5: A unified framework for graph representation learning", + "abstract": "Graph representation learning models aim to represent the graph structure and\nits features into low-dimensional vectors in a latent space, which can benefit\nvarious downstream tasks, such as node classification and link prediction. Due\nto its powerful graph data modelling capabilities, various graph embedding\nmodels and libraries have been proposed to learn embeddings and help\nresearchers ease conducting experiments. In this paper, we introduce a novel\ngraph representation framework covering various graph embedding models, ranging\nfrom shallow to state-of-the-art models, namely Connector. First, we consider\ngraph generation by constructing various types of graphs with different\nstructural relations, including homogeneous, signed, heterogeneous, and\nknowledge graphs. Second, we introduce various graph representation learning\nmodels, ranging from shallow to deep graph embedding models. Finally, we plan\nto build an efficient open-source framework that can provide deep graph\nembedding models to represent structural relations in graphs. The framework is\navailable at https://github.com/NSLab-CUK/Connector.", + "authors": "Thanh Sang Nguyen, Jooho Lee, Van Thuy Hoang, O-Joun Lee", + "published": "2023-04-25", + "updated": "2023-04-25", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.SI" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/1611.07308v1", + "title": "Variational Graph Auto-Encoders", + "abstract": "We introduce the variational graph auto-encoder (VGAE), a framework for\nunsupervised learning on graph-structured data based on the variational\nauto-encoder (VAE). This model makes use of latent variables and is capable of\nlearning interpretable latent representations for undirected graphs. We\ndemonstrate this model using a graph convolutional network (GCN) encoder and a\nsimple inner product decoder. Our model achieves competitive results on a link\nprediction task in citation networks. In contrast to most existing models for\nunsupervised learning on graph-structured data and link prediction, our model\ncan naturally incorporate node features, which significantly improves\npredictive performance on a number of benchmark datasets.", + "authors": "Thomas N. 
Kipf, Max Welling", + "published": "2016-11-21", + "updated": "2016-11-21", + "primary_cat": "stat.ML", + "cats": [ + "stat.ML", + "cs.LG" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/1912.10206v1", + "title": "How Robust Are Graph Neural Networks to Structural Noise?", + "abstract": "Graph neural networks (GNNs) are an emerging model for learning graph\nembeddings and making predictions on graph structured data. However, robustness\nof graph neural networks is not yet well-understood. In this work, we focus on\nnode structural identity predictions, where a representative GNN model is able\nto achieve near-perfect accuracy. We also show that the same GNN model is not\nrobust to addition of structural noise, through a controlled dataset and set of\nexperiments. Finally, we show that under the right conditions, graph-augmented\ntraining is capable of significantly improving robustness to structural noise.", + "authors": "James Fox, Sivasankaran Rajamanickam", + "published": "2019-12-21", + "updated": "2019-12-21", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "stat.ML" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/2003.04508v3", + "title": "Unsupervised Graph Embedding via Adaptive Graph Learning", + "abstract": "Graph autoencoders (GAEs) are powerful tools in representation learning for\ngraph embedding. However, the performance of GAEs is very dependent on the\nquality of the graph structure, i.e., of the adjacency matrix. In other words,\nGAEs would perform poorly when the adjacency matrix is incomplete or be\ndisturbed. In this paper, two novel unsupervised graph embedding methods,\nunsupervised graph embedding via adaptive graph learning (BAGE) and\nunsupervised graph embedding via variational adaptive graph learning (VBAGE)\nare proposed. The proposed methods expand the application range of GAEs on\ngraph embedding, i.e, on the general datasets without graph structure.\nMeanwhile, the adaptive learning mechanism can initialize the adjacency matrix\nwithout be affected by the parameter. Besides that, the latent representations\nare embedded in the laplacian graph structure to preserve the topology\nstructure of the graph in the vector space. Moreover, the adjacency matrix can\nbe self-learned for better embedding performance when the original graph\nstructure is incomplete. With adaptive learning, the proposed method is much\nmore robust to the graph structure. Experimental studies on several datasets\nvalidate our design and demonstrate that our methods outperform baselines by a\nwide margin in node clustering, node classification, and graph visualization\ntasks.", + "authors": "Rui Zhang, Yunxing Zhang, Xuelong Li", + "published": "2020-03-10", + "updated": "2021-03-23", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "stat.ML" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/1904.08915v2", + "title": "Decoding Molecular Graph Embeddings with Reinforcement Learning", + "abstract": "We present RL-VAE, a graph-to-graph variational autoencoder that uses\nreinforcement learning to decode molecular graphs from latent embeddings.\nMethods have been described previously for graph-to-graph autoencoding, but\nthese approaches require sophisticated decoders that increase the complexity of\ntraining and evaluation (such as requiring parallel encoders and decoders or\nnon-trivial graph matching). 
Here, we repurpose a simple graph generator to\nenable efficient decoding and generation of molecular graphs.", + "authors": "Steven Kearnes, Li Li, Patrick Riley", + "published": "2019-04-18", + "updated": "2019-06-04", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "stat.ML" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/2401.00876v1", + "title": "Balanced Graph Structure Information for Brain Disease Detection", + "abstract": "Analyzing connections between brain regions of interest (ROI) is vital to\ndetect neurological disorders such as autism or schizophrenia. Recent\nadvancements employ graph neural networks (GNNs) to utilize graph structures in\nbrains, improving detection performances. Current methods use correlation\nmeasures between ROI's blood-oxygen-level-dependent (BOLD) signals to generate\nthe graph structure. Other methods use the training samples to learn the\noptimal graph structure through end-to-end learning. However, implementing\nthose methods independently leads to some issues with noisy data for the\ncorrelation graphs and overfitting problems for the optimal graph. In this\nwork, we proposed Bargrain (balanced graph structure for brains), which models\ntwo graph structures: filtered correlation matrix and optimal sample graph\nusing graph convolution networks (GCNs). This approach aims to get advantages\nfrom both graphs and address the limitations of only relying on a single type\nof structure. Based on our extensive experiment, Bargrain outperforms\nstate-of-the-art methods in classification tasks on brain disease datasets, as\nmeasured by average F1 scores.", + "authors": "Falih Gozi Febrinanto, Mujie Liu, Feng Xia", + "published": "2023-12-30", + "updated": "2023-12-30", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI", + "q-bio.NC" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/1904.09792v1", + "title": "A Unified Framework for Structured Graph Learning via Spectral Constraints", + "abstract": "Graph learning from data represents a canonical problem that has received\nsubstantial attention in the literature. However, insufficient work has been\ndone in incorporating prior structural knowledge onto the learning of\nunderlying graphical models from data. Learning a graph with a specific\nstructure is essential for interpretability and identification of the\nrelationships among data. Useful structured graphs include the multi-component\ngraph, bipartite graph, connected graph, sparse graph, and regular graph. In\ngeneral, structured graph learning is an NP-hard combinatorial problem,\ntherefore, designing a general tractable optimization method is extremely\nchallenging. In this paper, we introduce a unified graph learning framework\nlying at the integration of Gaussian graphical models and spectral graph\ntheory. To impose a particular structure on a graph, we first show how to\nformulate the combinatorial constraints as an analytical property of the graph\nmatrix. Then we develop an optimization framework that leverages graph learning\nwith specific structures via spectral constraints on graph matrices. The\nproposed algorithms are provably convergent, computationally efficient, and\npractically amenable for numerous graph-based tasks. Extensive numerical\nexperiments with both synthetic and real data sets illustrate the effectiveness\nof the proposed algorithms. 
The code for all the simulations is made available\nas an open source repository.", + "authors": "Sandeep Kumar, Jiaxi Ying, Jos\u00e9 Vin\u00edcius de M. Cardoso, Daniel Palomar", + "published": "2019-04-22", + "updated": "2019-04-22", + "primary_cat": "stat.ML", + "cats": [ + "stat.ML", + "cs.LG", + "cs.SI", + "math.OC" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/1801.03226v1", + "title": "Adaptive Graph Convolutional Neural Networks", + "abstract": "Graph Convolutional Neural Networks (Graph CNNs) are generalizations of\nclassical CNNs to handle graph data such as molecular data, point clouds and\nsocial networks. Current filters in graph CNNs are built for fixed and shared\ngraph structures. However, for most real data, the graph structures vary in\nboth size and connectivity. The paper proposes a generalized and flexible graph\nCNN taking data of arbitrary graph structure as input. In this way, a\ntask-driven adaptive graph is learned for each graph sample during training. To\nefficiently learn the graph, distance metric learning is proposed. Extensive\nexperiments on nine graph-structured datasets have demonstrated superior\nperformance improvement on both convergence speed and predictive accuracy.", + "authors": "Ruoyu Li, Sheng Wang, Feiyun Zhu, Junzhou Huang", + "published": "2018-01-10", + "updated": "2018-01-10", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "stat.ML" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/2204.05258v1", + "title": "Multi-view graph structure learning using subspace merging on Grassmann manifold", + "abstract": "Many successful learning algorithms have been recently developed to represent\ngraph-structured data. For example, Graph Neural Networks (GNNs) have achieved\nconsiderable successes in various tasks such as node classification, graph\nclassification, and link prediction. However, these methods are highly\ndependent on the quality of the input graph structure. One common approach to\nalleviate this problem is to learn the graph structure instead of relying on a\nmanually designed graph. In this paper, we introduce a new graph structure\nlearning approach using multi-view learning, named MV-GSL (Multi-View Graph\nStructure Learning), in which we aggregate different graph structure learning\nmethods using subspace merging on the Grassmann manifold to improve the quality of\nthe learned graph structures. Extensive experiments are performed to evaluate\nthe effectiveness of the proposed method on two benchmark datasets, Cora and\nCiteseer. Our experiments show that the proposed method has promising\nperformance compared to single and other combined graph structure learning\nmethods.", + "authors": "Razieh Ghiasi, Hossein Amirkhani, Alireza Bosaghzadeh", + "published": "2022-04-11", + "updated": "2022-04-11", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/2402.16374v2", + "title": "Graph Learning under Distribution Shifts: A Comprehensive Survey on Domain Adaptation, Out-of-distribution, and Continual Learning", + "abstract": "Graph learning plays a pivotal role and has gained significant attention in\nvarious application scenarios, from social network analysis to recommendation\nsystems, for its effectiveness in modeling complex data relations represented\nby graph structural data.
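The Adaptive Graph Convolutional Neural Networks entry above learns a task-driven adjacency through distance metric learning. A hedged sketch of that idea follows; the softmax kernel and the single learnable metric matrix `W` are assumptions for illustration, not the paper's exact parameterization.

```python
# Sketch: edge weights derived from a learned distance metric over node
# features, so the adjacency adapts to the task during training.
import torch

class AdaptiveAdjacency(torch.nn.Module):
    def __init__(self, d):
        super().__init__()
        self.W = torch.nn.Parameter(torch.eye(d))  # learnable metric

    def forward(self, x):                          # x: (n, d) node features
        xw = x @ self.W
        dist2 = torch.cdist(xw, xw).pow(2)         # learned pairwise distances
        return torch.softmax(-dist2, dim=-1)       # row-normalized adjacency

adj = AdaptiveAdjacency(8)(torch.randn(5, 8))
```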
In reality, the real-world graph data typically show\ndynamics over time, with changing node attributes and edge structure, leading\nto the severe graph data distribution shift issue. This issue is compounded by\nthe diverse and complex nature of distribution shifts, which can significantly\nimpact the performance of graph learning methods in degraded generalization and\nadaptation capabilities, posing a substantial challenge to their effectiveness.\nIn this survey, we provide a comprehensive review and summary of the latest\napproaches, strategies, and insights that address distribution shifts within\nthe context of graph learning. Concretely, according to the observability of\ndistributions in the inference stage and the availability of sufficient\nsupervision information in the training stage, we categorize existing graph\nlearning methods into several essential scenarios, including graph domain\nadaptation learning, graph out-of-distribution learning, and graph continual\nlearning. For each scenario, a detailed taxonomy is proposed, with specific\ndescriptions and discussions of existing progress made in distribution-shifted\ngraph learning. Additionally, we discuss the potential applications and future\ndirections for graph learning under distribution shifts with a systematic\nanalysis of the current state in this field. The survey is positioned to\nprovide general guidance for the development of effective graph learning\nalgorithms in handling graph distribution shifts, and to stimulate future\nresearch and advancements in this area.", + "authors": "Man Wu, Xin Zheng, Qin Zhang, Xiao Shen, Xiong Luo, Xingquan Zhu, Shirui Pan", + "published": "2024-02-26", + "updated": "2024-03-07", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.SI" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/2211.07970v1", + "title": "Adaptive Multi-Neighborhood Attention based Transformer for Graph Representation Learning", + "abstract": "By incorporating the graph structural information into Transformers, graph\nTransformers have exhibited promising performance for graph representation\nlearning in recent years. Existing graph Transformers leverage specific\nstrategies, such as Laplacian eigenvectors and shortest paths of the node\npairs, to preserve the structural features of nodes and feed them into the\nvanilla Transformer to learn the representations of nodes. It is hard for such\npredefined rules to extract informative graph structural features for arbitrary\ngraphs whose topology structure varies greatly, limiting the learning capacity\nof the models. To this end, we propose an adaptive graph Transformer, termed\nMulti-Neighborhood Attention based Graph Transformer (MNA-GT), which captures\nthe graph structural information for each node from the multi-neighborhood\nattention mechanism adaptively. By defining the input to perform scaled-dot\nproduct as an attention kernel, MNA-GT constructs multiple attention kernels\nbased on different hops of neighborhoods such that each attention kernel can\ncapture specific graph structural information of the corresponding neighborhood\nfor each node pair. In this way, MNA-GT can preserve the graph structural\ninformation efficiently by incorporating node representations learned by\ndifferent attention kernels. MNA-GT further employs an attention layer to learn\nthe importance of different attention kernels to enable the model to adaptively\ncapture the graph structural information for different nodes. 
Extensive\nexperiments are conducted on a variety of graph benchmarks, and the empirical\nresults show that MNA-GT outperforms many strong baselines.", + "authors": "Gaichao Li, Jinsong Chen, Kun He", + "published": "2022-11-15", + "updated": "2022-11-15", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.SI" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/2206.01152v2", + "title": "Causal Structure Learning: a Combinatorial Perspective", + "abstract": "In this review, we discuss approaches for learning causal structure from\ndata, also called causal discovery. In particular, we focus on approaches for\nlearning directed acyclic graphs (DAGs) and various generalizations which allow\nfor some variables to be unobserved in the available data. We devote special\nattention to two fundamental combinatorial aspects of causal structure\nlearning. First, we discuss the structure of the search space over causal\ngraphs. Second, we discuss the structure of equivalence classes over causal\ngraphs, i.e., sets of graphs which represent what can be learned from\nobservational data alone, and how these equivalence classes can be refined by\nadding interventional data.", + "authors": "Chandler Squires, Caroline Uhler", + "published": "2022-06-02", + "updated": "2022-12-19", + "primary_cat": "stat.ME", + "cats": [ + "stat.ME", + "cs.LG" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/2005.03675v3", + "title": "Machine Learning on Graphs: A Model and Comprehensive Taxonomy", + "abstract": "There has been a surge of recent interest in learning representations for\ngraph-structured data. Graph representation learning methods have generally\nfallen into three main categories, based on the availability of labeled data.\nThe first, network embedding (such as shallow graph embedding or graph\nauto-encoders), focuses on learning unsupervised representations of relational\nstructure. The second, graph regularized neural networks, leverages graphs to\naugment neural network losses with a regularization objective for\nsemi-supervised learning. The third, graph neural networks, aims to learn\ndifferentiable functions over discrete topologies with arbitrary structure.\nHowever, despite the popularity of these areas there has been surprisingly\nlittle work on unifying the three paradigms. Here, we aim to bridge the gap\nbetween graph neural networks, network embedding and graph regularization\nmodels. We propose a comprehensive taxonomy of representation learning methods\nfor graph-structured data, aiming to unify several disparate bodies of work.\nSpecifically, we propose a Graph Encoder Decoder Model (GRAPHEDM), which\ngeneralizes popular algorithms for semi-supervised learning on graphs (e.g.\nGraphSage, Graph Convolutional Networks, Graph Attention Networks), and\nunsupervised learning of graph representations (e.g. DeepWalk, node2vec, etc)\ninto a single consistent approach. To illustrate the generality of this\napproach, we fit over thirty existing methods into this framework. 
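The MNA-GT entry above builds one attention kernel per hop of neighborhood. A small sketch of the hop-mask bookkeeping such a model needs (the attention kernels themselves are omitted); the exact construction in the paper may differ.

```python
# Sketch: per-hop boolean masks from powers of the adjacency matrix; masks[k]
# marks node pairs whose shortest-path distance is exactly k+1.
import numpy as np

def hop_masks(adj, max_hop=3):
    n = adj.shape[0]
    reach = np.eye(n, dtype=bool)         # pairs already reached (incl. self)
    masks, power = [], np.eye(n)
    for _ in range(max_hop):
        power = power @ adj               # walk counts of the next length
        new = (power > 0) & ~reach        # first reached at this hop
        masks.append(new)
        reach |= new
    return masks

a = np.array([[0, 1, 0, 0], [1, 0, 1, 0], [0, 1, 0, 1], [0, 0, 1, 0]], dtype=float)
m = hop_masks(a)   # m[0]: 1-hop pairs, m[1]: 2-hop pairs, ...
```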
We believe\nthat this unifying view both provides a solid foundation for understanding the\nintuition behind these methods, and enables future research in the area.", + "authors": "Ines Chami, Sami Abu-El-Haija, Bryan Perozzi, Christopher R\u00e9, Kevin Murphy", + "published": "2020-05-07", + "updated": "2022-04-12", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.NE", + "cs.SI", + "stat.ML" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/1902.10042v2", + "title": "Graph Neural Processes: Towards Bayesian Graph Neural Networks", + "abstract": "We introduce Graph Neural Processes (GNP), inspired by the recent work in\nconditional and latent neural processes. A Graph Neural Process is defined as a\nConditional Neural Process that operates on arbitrary graph data. It takes\nfeatures of sparsely observed context points as input, and outputs a\ndistribution over target points. We demonstrate graph neural processes in edge\nimputation and discuss benefits and drawbacks of the method for other\napplication areas. One major benefit of GNPs is the ability to quantify\nuncertainty in deep learning on graph structures. An additional benefit of this\nmethod is the ability to extend graph neural networks to inputs of dynamic\nsized graphs.", + "authors": "Andrew Carr, David Wingate", + "published": "2019-02-26", + "updated": "2019-10-01", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "stat.ML" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/2403.07294v1", + "title": "Graph Data Condensation via Self-expressive Graph Structure Reconstruction", + "abstract": "With the increasing demands of training graph neural networks (GNNs) on\nlarge-scale graphs, graph data condensation has emerged as a critical technique\nto relieve the storage and time costs during the training phase. It aims to\ncondense the original large-scale graph to a much smaller synthetic graph while\npreserving the essential information necessary for efficiently training a\ndownstream GNN. However, existing methods concentrate either on optimizing node\nfeatures exclusively or endeavor to independently learn node features and the\ngraph structure generator. They could not explicitly leverage the information\nof the original graph structure and failed to construct an interpretable graph\nstructure for the synthetic dataset. To address these issues, we introduce a\nnovel framework named \\textbf{G}raph Data \\textbf{C}ondensation via\n\\textbf{S}elf-expressive Graph Structure \\textbf{R}econstruction\n(\\textbf{GCSR}). Our method stands out by (1) explicitly incorporating the\noriginal graph structure into the condensing process and (2) capturing the\nnuanced interdependencies between the condensed nodes by reconstructing an\ninterpretable self-expressive graph structure. Extensive experiments and\ncomprehensive analysis validate the efficacy of the proposed method across\ndiverse GNN models and datasets. 
Our code is available at\nhttps://www.dropbox.com/scl/fi/2aonyp5ln5gisdqtjimu8/GCSR.zip?rlkey=11cuwfpsf54wxiiktu0klud0x&dl=0", + "authors": "Zhanyu Liu, Chaolv Zeng, Guanjie Zheng", + "published": "2024-03-12", + "updated": "2024-03-12", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI", + "cs.SI" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/2212.01749v1", + "title": "Semantic Graph Neural Network with Multi-measure Learning for Semi-supervised Classification", + "abstract": "Graph Neural Networks (GNNs) have attracted increasing attention in recent\nyears and have achieved excellent performance in semi-supervised node\nclassification tasks. The success of most GNNs relies on one fundamental\nassumption, i.e., the original graph structure data is available. However,\nrecent studies have shown that GNNs are vulnerable to the complex underlying\nstructure of the graph, making it necessary to learn comprehensive and robust\ngraph structures for downstream tasks, rather than relying only on the raw\ngraph structure. In light of this, we seek to learn optimal graph structures\nfor downstream tasks and propose a novel framework for semi-supervised\nclassification. Specifically, based on the structural context information of\ngraph and node representations, we encode the complex interactions in semantics\nand generate semantic graphs to preserve the global structure. Moreover, we\ndevelop a novel multi-measure attention layer to optimize the similarity rather\nthan prescribing it a priori, so that the similarity can be adaptively\nevaluated by integrating measures. These graphs are fused and optimized\ntogether with GNN towards semi-supervised classification objective. Extensive\nexperiments and ablation studies on six real-world datasets clearly demonstrate\nthe effectiveness of our proposed model and the contribution of each component.", + "authors": "Junchao Lin, Yuan Wan, Jingwen Xu, Xingchen Qi", + "published": "2022-12-04", + "updated": "2022-12-04", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/1611.04687v2", + "title": "Intrinsic Geometric Information Transfer Learning on Multiple Graph-Structured Datasets", + "abstract": "Graphs provide a powerful means for representing complex interactions between\nentities. Recently, deep learning approaches are emerging for representing and\nmodeling graph-structured data, although the conventional deep learning methods\n(such as convolutional neural networks and recurrent neural networks) have\nmainly focused on grid-structured inputs (image and audio). Leveraged by the\ncapability of representation learning, deep learning based techniques are\nreporting promising results for graph applications by detecting structural\ncharacteristics of graphs in an automated fashion. In this paper, we attempt to\nadvance deep learning for graph-structured data by incorporating another\ncomponent, transfer learning. By transferring the intrinsic geometric\ninformation learned in the source domain, our approach can help us to construct\na model for a new but related task in the target domain without collecting new\ndata and without training a new model from scratch. We thoroughly test our\napproach with large-scale real corpora and confirm the effectiveness of the\nproposed transfer learning framework for deep learning on graphs. 
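The GCSR entry above reconstructs a self-expressive graph structure, i.e., a coefficient matrix C with X ≈ CX. A ridge-regularized variant admits the standard closed form sketched below; this is a textbook derivation for illustration, not code from the paper.

```python
# Sketch: solve min_C ||X - C X||_F^2 + lam ||C||_F^2 in closed form,
# C = G (G + lam I)^{-1} with Gram matrix G = X X^T.
import numpy as np

def self_expressive_graph(x, lam=1e-2):
    g = x @ x.T                                        # Gram matrix
    c = g @ np.linalg.inv(g + lam * np.eye(g.shape[0]))
    np.fill_diagonal(c, 0.0)   # zero the diagonal post hoc (usual C_ii = 0 constraint)
    return c

c = self_expressive_graph(np.random.default_rng(0).normal(size=(6, 4)))
```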
According to\nour experiments, transfer learning is most effective when the source and target\ndomains bear a high level of structural similarity in their graph\nrepresentations.", + "authors": "Jaekoo Lee, Hyunjae Kim, Jongsun Lee, Sungroh Yoon", + "published": "2016-11-15", + "updated": "2016-12-05", + "primary_cat": "cs.NE", + "cats": [ + "cs.NE" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/2210.01489v1", + "title": "Generative Models and Learning Algorithms for Core-Periphery Structured Graphs", + "abstract": "We consider core-periphery structured graphs, which are graphs with a group\nof densely and sparsely connected nodes, respectively, referred to as core and\nperiphery nodes. The so-called core score of a node is related to the\nlikelihood of it being a core node. In this paper, we focus on learning the\ncore scores of a graph from its node attributes and connectivity structure. To\nthis end, we propose two classes of probabilistic graphical models: affine and\nnonlinear. First, we describe affine generative models to model the dependence\nof node attributes on its core scores, which determine the graph structure.\nNext, we discuss nonlinear generative models in which the partial correlations\nof node attributes influence the graph structure through latent core scores. We\ndevelop algorithms for inferring the model parameters and core scores of a\ngraph when both the graph structure and node attributes are available. When\nonly the node attributes of graphs are available, we jointly learn a\ncore-periphery structured graph and its core scores. We provide results from\nnumerical experiments on several synthetic and real-world datasets to\ndemonstrate the efficacy of the developed models and algorithms.", + "authors": "Sravanthi Gurugubelli, Sundeep Prabhakar Chepuri", + "published": "2022-10-04", + "updated": "2022-10-04", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/2202.08235v3", + "title": "Data Augmentation for Deep Graph Learning: A Survey", + "abstract": "Graph neural networks, a powerful deep learning tool to model\ngraph-structured data, have demonstrated remarkable performance on numerous\ngraph learning tasks. To address the data noise and data scarcity issues in\ndeep graph learning, the research on graph data augmentation has intensified\nlately. However, conventional data augmentation methods can hardly handle\ngraph-structured data which is defined in non-Euclidean space with\nmulti-modality. In this survey, we formally formulate the problem of graph data\naugmentation and further review the representative techniques and their\napplications in different deep graph learning problems. Specifically, we first\npropose a taxonomy for graph data augmentation techniques and then provide a\nstructured review by categorizing the related work based on the augmented\ninformation modalities. Moreover, we summarize the applications of graph data\naugmentation in two representative problems in data-centric deep graph\nlearning: (1) reliable graph learning which focuses on enhancing the utility of\ninput graph as well as the model capacity via graph data augmentation; and (2)\nlow-resource graph learning which targets on enlarging the labeled training\ndata scale through graph data augmentation. For each problem, we also provide a\nhierarchical problem taxonomy and review the existing literature related to\ngraph data augmentation. 
Finally, we point out promising research directions\nand the challenges in future research.", + "authors": "Kaize Ding, Zhe Xu, Hanghang Tong, Huan Liu", + "published": "2022-02-16", + "updated": "2022-11-21", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/1104.5256v1", + "title": "Learning Undirected Graphical Models with Structure Penalty", + "abstract": "In undirected graphical models, learning the graph structure and learning the\nfunctions that relate the predictive variables (features) to the responses\ngiven the structure are two topics that have been widely investigated in\nmachine learning and statistics. Learning graphical models in two stages will\nhave problems because graph structure may change after considering the\nfeatures. The main contribution of this paper is the proposed method that\nlearns the graph structure and functions on the graph at the same time. General\ngraphical models with binary outcomes conditioned on predictive variables are\nproved to be equivalent to multivariate Bernoulli model. The reparameterization\nof the potential functions in graphical model by conditional log odds ratios in\nmultivariate Bernoulli model offers advantage in the representation of the\nconditional independence structure in the model. Additionally, we impose a\nstructure penalty on groups of conditional log odds ratios to learn the graph\nstructure. These groups of functions are designed with overlaps to enforce\nhierarchical function selection. In this way, we are able to shrink higher\norder interactions to obtain a sparse graph structure. Simulation studies show\nthat the method is able to recover the graph structure. The analysis of county\ndata from Census Bureau gives interesting relations between unemployment rate,\ncrime and others discovered by the model.", + "authors": "Shilin Ding", + "published": "2011-04-27", + "updated": "2011-04-27", + "primary_cat": "cs.AI", + "cats": [ + "cs.AI", + "cs.LG" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/1901.07439v1", + "title": "Multiple Graph Adversarial Learning", + "abstract": "Recently, Graph Convolutional Networks (GCNs) have been widely studied for\ngraph-structured data representation and learning. However, in many real\napplications, data are coming with multiple graphs, and it is non-trivial to\nadapt GCNs to deal with data representation with multiple graph structures. One\nmain challenge for multi-graph representation is how to exploit both structure\ninformation of each individual graph and correlation information across\nmultiple graphs simultaneously. In this paper, we propose a novel Multiple\nGraph Adversarial Learning (MGAL) framework for multi-graph representation and\nlearning. MGAL aims to learn an optimal structure-invariant and consistent\nrepresentation for multiple graphs in a common subspace via a novel adversarial\nlearning framework, which thus incorporates both structure information of\nintra-graph and correlation information of inter-graphs simultaneously. 
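As a concrete instance of the structural augmentations surveyed in the data-augmentation entry above, a DropEdge-style perturbation can be written in a few lines; the drop probability here is an arbitrary choice.

```python
# Sketch: randomly drop a fraction of edges to create a perturbed graph view.
import random
import networkx as nx

def drop_edges(g, p=0.2, seed=0):
    random.seed(seed)
    aug = g.copy()
    aug.remove_edges_from([e for e in g.edges if random.random() < p])
    return aug

view = drop_edges(nx.karate_club_graph())
```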
Based\non MGAL, we then provide a unified network for the semi-supervised learning task.\nPromising experimental results demonstrate the effectiveness of the MGAL model.", + "authors": "Bo Jiang, Ziyan Zhang, Jin Tang, Bin Luo", + "published": "2019-01-22", + "updated": "2019-01-22", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/1611.05181v3", + "title": "Graph Learning from Data under Structural and Laplacian Constraints", + "abstract": "Graphs are fundamental mathematical structures used in various fields to\nrepresent data, signals and processes. In this paper, we propose a novel\nframework for learning/estimating graphs from data. The proposed framework\nincludes (i) formulation of various graph learning problems, (ii) their\nprobabilistic interpretations and (iii) associated algorithms. Specifically,\ngraph learning problems are posed as estimation of graph Laplacian matrices\nfrom some observed data under given structural constraints (e.g., graph\nconnectivity and sparsity level). From a probabilistic perspective, the\nproblems of interest correspond to maximum a posteriori (MAP) parameter\nestimation of Gaussian-Markov random field (GMRF) models, whose precision\n(inverse covariance) is a graph Laplacian matrix. For the proposed graph\nlearning problems, specialized algorithms are developed by incorporating the\ngraph Laplacian and structural constraints. The experimental results\ndemonstrate that the proposed algorithms outperform the current\nstate-of-the-art methods in terms of accuracy and computational efficiency.", + "authors": "Hilmi E. Egilmez, Eduardo Pavez, Antonio Ortega", + "published": "2016-11-16", + "updated": "2017-07-06", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "stat.ML" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/2306.11307v3", + "title": "Transforming Graphs for Enhanced Attribute Clustering: An Innovative Graph Transformer-Based Method", + "abstract": "Graph Representation Learning (GRL) is an influential methodology, enabling a\nmore profound understanding of graph-structured data and aiding graph\nclustering, a critical task across various domains. The recent incursion of\nattention mechanisms, originally an artifact of Natural Language Processing\n(NLP), into the realm of graph learning has spearheaded a notable shift in\nresearch trends. Consequently, Graph Attention Networks (GATs) and Graph\nAttention Auto-Encoders have emerged as preferred tools for graph clustering\ntasks. Yet, these methods primarily employ a local attention mechanism, thereby\ncurbing their capacity to apprehend the intricate global dependencies between\nnodes within graphs. Addressing these impediments, this study introduces an\ninnovative method known as the Graph Transformer Auto-Encoder for Graph\nClustering (GTAGC). By melding the Graph Auto-Encoder with the Graph\nTransformer, GTAGC is adept at capturing global dependencies between nodes.\nThis integration amplifies the graph representation and surmounts the\nconstraints posed by the local attention mechanism. The architecture of GTAGC\nencompasses graph embedding, integration of the Graph Transformer within the\nautoencoder structure, and a clustering component. It strategically alternates\nbetween graph embedding and clustering, thereby tailoring the Graph Transformer\nfor clustering tasks, whilst preserving the graph's global structural\ninformation.
Through extensive experimentation on diverse benchmark datasets,\nGTAGC has exhibited superior performance against existing state-of-the-art\ngraph clustering methodologies.", + "authors": "Shuo Han, Jiacheng Liu, Jiayun Wu, Yinan Chen, Li Tao", + "published": "2023-06-20", + "updated": "2023-08-12", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/2101.06861v3", + "title": "Discrete Graph Structure Learning for Forecasting Multiple Time Series", + "abstract": "Time series forecasting is an extensively studied subject in statistics,\neconomics, and computer science. Exploration of the correlation and causation\namong the variables in a multivariate time series shows promise in enhancing\nthe performance of a time series model. When using deep neural networks as\nforecasting models, we hypothesize that exploiting the pairwise information\namong multiple (multivariate) time series also improves their forecast. If an\nexplicit graph structure is known, graph neural networks (GNNs) have been\ndemonstrated as powerful tools to exploit the structure. In this work, we\npropose learning the structure simultaneously with the GNN if the graph is\nunknown. We cast the problem as learning a probabilistic graph model through\noptimizing the mean performance over the graph distribution. The distribution\nis parameterized by a neural network so that discrete graphs can be sampled\ndifferentiably through reparameterization. Empirical evaluations show that our\nmethod is simpler, more efficient, and better performing than a recently\nproposed bilevel learning approach for graph structure learning, as well as a\nbroad array of forecasting models, either deep or non-deep learning based, and\ngraph or non-graph based.", + "authors": "Chao Shang, Jie Chen, Jinbo Bi", + "published": "2021-01-18", + "updated": "2021-04-21", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "stat.ML" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/2209.00793v2", + "title": "Structure-Preserving Graph Representation Learning", + "abstract": "Though graph representation learning (GRL) has made significant progress, it\nis still a challenge to extract and embed the rich topological structure and\nfeature information in an adequate way. Most existing methods focus on local\nstructure and fail to fully incorporate the global topological structure. To\nthis end, we propose a novel Structure-Preserving Graph Representation Learning\n(SPGRL) method, to fully capture the structure information of graphs.\nSpecifically, to reduce the uncertainty and misinformation of the original\ngraph, we construct a feature graph as a complementary view via k-Nearest\nNeighbor method. The feature graph can be used to contrast at node-level to\ncapture the local relation. 
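The discrete graph structure learning entry above samples graphs differentiably through reparameterization. The Gumbel-softmax trick is the usual realization of that step; the free-standing edge logits below stand in for the neural network that parameterizes the distribution in the paper.

```python
# Sketch: differentiable sampling of a near-binary adjacency with the
# Gumbel-softmax (straight-through) relaxation.
import torch

n = 5
edge_logits = torch.nn.Parameter(torch.zeros(n, n, 2))  # per-pair {off, on} logits
soft_sample = torch.nn.functional.gumbel_softmax(edge_logits, tau=0.5, hard=True)
adj = soft_sample[..., 1]        # (n, n) near-binary adjacency
adj.sum().backward()             # gradients reach edge_logits via the relaxation
```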
Besides, we retain the global topological structure\ninformation by maximizing the mutual information (MI) of the whole graph and\nfeature embeddings, which is theoretically reduced to exchanging the feature\nembeddings of the feature and the original graphs to reconstruct themselves.\nExtensive experiments show that our method has quite superior performance on\nsemi-supervised node classification task and excellent robustness under noise\nperturbation on graph structure or node features.", + "authors": "Ruiyi Fang, Liangjian Wen, Zhao Kang, Jianzhuang Liu", + "published": "2022-09-02", + "updated": "2022-12-08", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI", + "cs.CV", + "cs.SI" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/2006.14002v1", + "title": "Bi-Level Graph Neural Networks for Drug-Drug Interaction Prediction", + "abstract": "We introduce Bi-GNN for modeling biological link prediction tasks such as\ndrug-drug interaction (DDI) and protein-protein interaction (PPI). Taking\ndrug-drug interaction as an example, existing methods using machine learning\neither only utilize the link structure between drugs without using the graph\nrepresentation of each drug molecule, or only leverage the individual drug\ncompound structures without using graph structure for the higher-level DDI\ngraph. The key idea of our method is to fundamentally view the data as a\nbi-level graph, where the highest level graph represents the interaction\nbetween biological entities (interaction graph), and each biological entity\nitself is further expanded to its intrinsic graph representation\n(representation graphs), where the graph is either flat like a drug compound or\nhierarchical like a protein with amino acid level graph, secondary structure,\ntertiary structure, etc. Our model not only allows the usage of information\nfrom both the high-level interaction graph and the low-level representation\ngraphs, but also offers a baseline for future research opportunities to address\nthe bi-level nature of the data.", + "authors": "Yunsheng Bai, Ken Gu, Yizhou Sun, Wei Wang", + "published": "2020-06-11", + "updated": "2020-06-11", + "primary_cat": "cs.CE", + "cats": [ + "cs.CE", + "cs.LG", + "stat.ML" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/1904.09671v1", + "title": "DDGK: Learning Graph Representations for Deep Divergence Graph Kernels", + "abstract": "Can neural networks learn to compare graphs without feature engineering? In\nthis paper, we show that it is possible to learn representations for graph\nsimilarity with neither domain knowledge nor supervision (i.e.\\ feature\nengineering or labeled graphs). We propose Deep Divergence Graph Kernels, an\nunsupervised method for learning representations over graphs that encodes a\nrelaxed notion of graph isomorphism. Our method consists of three parts. First,\nwe learn an encoder for each anchor graph to capture its structure. Second, for\neach pair of graphs, we train a cross-graph attention network which uses the\nnode representations of an anchor graph to reconstruct another graph. This\napproach, which we call isomorphism attention, captures how well the\nrepresentations of one graph can encode another. 
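The SPGRL entry above builds a complementary feature graph with k-nearest neighbors. A sketch under assumed choices (cosine similarity, symmetrized binary adjacency, k=3):

```python
# Sketch: binary kNN adjacency over node features under cosine similarity.
import numpy as np

def knn_feature_graph(x, k=3):
    xn = x / np.linalg.norm(x, axis=1, keepdims=True)
    sim = xn @ xn.T
    np.fill_diagonal(sim, -np.inf)                 # exclude self-similarity
    nbrs = np.argsort(-sim, axis=1)[:, :k]         # top-k neighbors per node
    adj = np.zeros_like(sim)
    rows = np.repeat(np.arange(x.shape[0]), k)
    adj[rows, nbrs.ravel()] = 1.0
    return np.maximum(adj, adj.T)                  # symmetrize

a = knn_feature_graph(np.random.default_rng(0).normal(size=(8, 5)))
```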
We use the attention-augmented\nencoder's predictions to define a divergence score for each pair of graphs.\nFinally, we construct an embedding space for all graphs using these pair-wise\ndivergence scores.\n Unlike previous work, much of which relies on 1) supervision, 2) domain\nspecific knowledge (e.g. a reliance on Weisfeiler-Lehman kernels), and 3) known\nnode alignment, our unsupervised method jointly learns node representations,\ngraph representations, and an attention-based alignment between graphs.\n Our experimental results show that Deep Divergence Graph Kernels can learn an\nunsupervised alignment between graphs, and that the learned representations\nachieve competitive results when used as features on a number of challenging\ngraph classification tasks. Furthermore, we illustrate how the learned\nattention allows insight into the alignment of sub-structures across\ngraphs.", + "authors": "Rami Al-Rfou, Dustin Zelle, Bryan Perozzi", + "published": "2019-04-21", + "updated": "2019-04-21", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.IR", + "cs.SI", + "stat.ML" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/1905.06393v1", + "title": "IPC: A Benchmark Data Set for Learning with Graph-Structured Data", + "abstract": "Benchmark data sets are an indispensable ingredient of the evaluation of\ngraph-based machine learning methods. We release a new data set, compiled from\nInternational Planning Competitions (IPC), for benchmarking graph\nclassification, regression, and related tasks. Apart from the graph\nconstruction (based on AI planning problems) that is interesting in its own\nright, the data set possesses distinctly different characteristics from\npopularly used benchmarks. The data set, named IPC, consists of two\nself-contained versions, grounded and lifted, both including graphs of large\nand skewedly distributed sizes, posing substantial challenges for the\ncomputation of graph models such as graph kernels and graph neural networks.\nThe graphs in this data set are directed and the lifted version is acyclic,\noffering the opportunity of benchmarking specialized models for directed\n(acyclic) structures. Moreover, the graph generator and the labeling are\ncomputer programmed; thus, the data set may be extended easily if a larger\nscale is desired. The data set is accessible from\n\\url{https://github.com/IBM/IPC-graph-data}.", + "authors": "Patrick Ferber, Tengfei Ma, Siyu Huo, Jie Chen, Michael Katz", + "published": "2019-05-15", + "updated": "2019-05-15", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI", + "stat.ML" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/1904.10146v2", + "title": "Exploring Structure-Adaptive Graph Learning for Robust Semi-Supervised Classification", + "abstract": "Graph Convolutional Neural Networks (GCNNs) are generalizations of CNNs to\ngraph-structured data, in which convolution is guided by the graph topology. In\nmany cases where graphs are unavailable, existing methods manually construct\ngraphs or learn task-driven adaptive graphs. In this paper, we propose Graph\nLearning Neural Networks (GLNNs), which exploit the optimization of graphs (the\nadjacency matrix in particular) from both data and tasks. Leveraging on\nspectral graph theory, we propose the objective of graph learning from a\nsparsity constraint, properties of a valid adjacency matrix as well as a graph\nLaplacian regularizer via maximum a posteriori estimation.
The optimization\nobjective is then integrated into the loss function of the GCNN, which adapts\nthe graph topology to not only labels of a specific task but also the input\ndata. Experimental results show that our proposed GLNN outperforms\nstate-of-the-art approaches over widely adopted social network datasets and\ncitation network datasets for semi-supervised classification.", + "authors": "Xiang Gao, Wei Hu, Zongming Guo", + "published": "2019-04-23", + "updated": "2019-09-16", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "stat.ML" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/2403.04923v2", + "title": "Control-based Graph Embeddings with Data Augmentation for Contrastive Learning", + "abstract": "In this paper, we study the problem of unsupervised graph representation\nlearning by harnessing the control properties of dynamical networks defined on\ngraphs. Our approach introduces a novel framework for contrastive learning, a\nwidely prevalent technique for unsupervised representation learning. A crucial\nstep in contrastive learning is the creation of 'augmented' graphs from the\ninput graphs. Though different from the original graphs, these augmented graphs\nretain the original graph's structural characteristics. Here, we propose a\nunique method for generating these augmented graphs by leveraging the control\nproperties of networks. The core concept revolves around perturbing the\noriginal graph to create a new one while preserving the controllability\nproperties specific to networks and graphs. Compared to the existing methods,\nwe demonstrate that this innovative approach enhances the effectiveness of\ncontrastive learning frameworks, leading to superior results regarding the\naccuracy of the classification tasks. The key innovation lies in our ability to\ndecode the network structure using these control properties, opening new\navenues for unsupervised graph representation learning.", + "authors": "Obaid Ullah Ahmad, Anwar Said, Mudassir Shabbir, Waseem Abbas, Xenofon Koutsoukos", + "published": "2024-03-07", + "updated": "2024-04-18", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.MA", + "cs.SY", + "eess.SY" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/1905.11691v1", + "title": "Triple2Vec: Learning Triple Embeddings from Knowledge Graphs", + "abstract": "Graph embedding techniques allow us to learn high-quality feature vectors from\ngraph structures and are useful in a variety of tasks, from node classification\nto clustering. Existing approaches have only focused on learning feature\nvectors for the nodes in a (knowledge) graph. To the best of our knowledge,\nnone of them has tackled the problem of embedding graph edges, that is,\nknowledge graph triples. The approaches that are closer to this task have\nfocused on homogeneous graphs involving only one type of edge and obtain edge\nembeddings by applying some operation (e.g., average) on the embeddings of the\nendpoint nodes. The goal of this paper is to introduce Triple2Vec, a new\ntechnique to directly embed edges in (knowledge) graphs. Triple2Vec builds upon\nthree main ingredients. The first is the notion of line graph. The line graph\nof a graph is another graph representing the adjacency between edges of the\noriginal graph. In particular, the nodes of the line graph are the edges of the\noriginal graph.
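The GLNN entry above regularizes graph learning with a graph Laplacian term. The worked form of such a regularizer is the Dirichlet energy tr(X^T L X), checked numerically below, independent of the GLNN architecture.

```python
# Sketch: Dirichlet energy tr(X^T L X) with L = D - A; small when connected
# nodes carry similar features, which is what a Laplacian regularizer rewards.
import numpy as np

def dirichlet_energy(adj, x):
    lap = np.diag(adj.sum(1)) - adj
    return np.trace(x.T @ lap @ x)

a = np.array([[0., 1.], [1., 0.]])
smooth, rough = np.array([[1.], [1.]]), np.array([[1.], [-1.]])
assert dirichlet_energy(a, smooth) < dirichlet_energy(a, rough)
```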
We show that directly applying existing embedding techniques on\nthe nodes of the line graph to learn edge embeddings is not enough in the\ncontext of knowledge graphs. Thus, we introduce the notion of triple line\ngraph. The second is an edge weighting mechanism both for line graphs derived\nfrom knowledge graphs and homogeneous graphs. The third is a strategy based on\ngraph walks on the weighted triple line graph that can preserve proximity\nbetween nodes. Embeddings are finally generated by adopting the SkipGram model,\nwhere sentences are replaced with graph walks. We evaluate our approach on\ndifferent real world (knowledge) graphs and compared it with related work.", + "authors": "Valeria Fionda, Giuseppe Pirr\u00f3", + "published": "2019-05-28", + "updated": "2019-05-28", + "primary_cat": "cs.AI", + "cats": [ + "cs.AI", + "cs.LG" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/2306.08201v1", + "title": "Graph Laplacian Learning with Exponential Family Noise", + "abstract": "A common challenge in applying graph machine learning methods is that the\nunderlying graph of a system is often unknown. Although different graph\ninference methods have been proposed for continuous graph signals, inferring\nthe graph structure underlying other types of data, such as discrete counts, is\nunder-explored. In this paper, we generalize a graph signal processing (GSP)\nframework for learning a graph from smooth graph signals to the exponential\nfamily noise distribution to model various data types. We propose an\nalternating algorithm that estimates the graph Laplacian as well as the\nunobserved smooth representation from the noisy signals. We demonstrate in\nsynthetic and real-world data that our new algorithm outperforms competing\nLaplacian estimation methods under noise model mismatch.", + "authors": "Changhao Shi, Gal Mishne", + "published": "2023-06-14", + "updated": "2023-06-14", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "eess.SP" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/2201.06367v1", + "title": "Towards Unsupervised Deep Graph Structure Learning", + "abstract": "In recent years, graph neural networks (GNNs) have emerged as a successful\ntool in a variety of graph-related applications. However, the performance of\nGNNs can be deteriorated when noisy connections occur in the original graph\nstructures; besides, the dependence on explicit structures prevents GNNs from\nbeing applied to general unstructured scenarios. To address these issues,\nrecently emerged deep graph structure learning (GSL) methods propose to jointly\noptimize the graph structure along with GNN under the supervision of a node\nclassification task. Nonetheless, these methods focus on a supervised learning\nscenario, which leads to several problems, i.e., the reliance on labels, the\nbias of edge distribution, and the limitation on application tasks. In this\npaper, we propose a more practical GSL paradigm, unsupervised graph structure\nlearning, where the learned graph topology is optimized by data itself without\nany external guidance (i.e., labels). 
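The Triple2Vec entry above starts from the line graph, whose nodes are the edges of the original graph. networkx provides the plain construction directly; the paper's weighted triple line graph is not reproduced here.

```python
# Sketch: line graph of a path graph; nodes of the line graph are the edges
# of the original graph, joined when they share an endpoint.
import networkx as nx

g = nx.path_graph(4)                # edges: (0, 1), (1, 2), (2, 3)
lg = nx.line_graph(g)
print(sorted(lg.nodes()))           # [(0, 1), (1, 2), (2, 3)]
print(sorted(lg.edges()))           # adjacent edges share an endpoint
```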
To solve the unsupervised GSL problem, we\npropose a novel StrUcture Bootstrapping contrastive LearnIng fraMEwork (SUBLIME\nfor abbreviation) with the aid of self-supervised contrastive learning.\nSpecifically, we generate a learning target from the original data as an\n\"anchor graph\", and use a contrastive loss to maximize the agreement between\nthe anchor graph and the learned graph. To provide persistent guidance, we\ndesign a novel bootstrapping mechanism that upgrades the anchor graph with\nlearned structures during model learning. We also design a series of graph\nlearners and post-processing schemes to model the structures to learn.\nExtensive experiments on eight benchmark datasets demonstrate the significant\neffectiveness of our proposed SUBLIME and high quality of the optimized graphs.", + "authors": "Yixin Liu, Yu Zheng, Daokun Zhang, Hongxu Chen, Hao Peng, Shirui Pan", + "published": "2022-01-17", + "updated": "2022-01-17", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/2004.06846v1", + "title": "MxPool: Multiplex Pooling for Hierarchical Graph Representation Learning", + "abstract": "How to utilize deep learning methods for graph classification tasks has\nattracted considerable research attention in the past few years. Regarding\ngraph classification tasks, the graphs to be classified may have various graph\nsizes (i.e., different number of nodes and edges) and have various graph\nproperties (e.g., average node degree, diameter, and clustering coefficient).\nThe diverse property of graphs has imposed significant challenges on existing\ngraph learning techniques since diverse graphs have different best-fit\nhyperparameters. It is difficult to learn graph features from a set of diverse\ngraphs by a unified graph neural network. This motivates us to use a multiplex\nstructure in a diverse way and utilize a priori properties of graphs to guide\nthe learning. In this paper, we propose MxPool, which concurrently uses\nmultiple graph convolution/pooling networks to build a hierarchical learning\nstructure for graph representation learning tasks. Our experiments on numerous\ngraph classification benchmarks show that our MxPool has superiority over other\nstate-of-the-art graph representation learning methods.", + "authors": "Yanyan Liang, Yanfeng Zhang, Dechao Gao, Qian Xu", + "published": "2020-04-15", + "updated": "2020-04-15", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "stat.ML" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/1905.10715v1", + "title": "Graph Attention Auto-Encoders", + "abstract": "Auto-encoders have emerged as a successful framework for unsupervised\nlearning. However, conventional auto-encoders are incapable of utilizing\nexplicit relations in structured data. To take advantage of relations in\ngraph-structured data, several graph auto-encoders have recently been proposed,\nbut they neglect to reconstruct either the graph structure or node attributes.\nIn this paper, we present the graph attention auto-encoder (GATE), a neural\nnetwork architecture for unsupervised representation learning on\ngraph-structured data. Our architecture is able to reconstruct graph-structured\ninputs, including both node attributes and the graph structure, through stacked\nencoder/decoder layers equipped with self-attention mechanisms. 
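The SUBLIME entry above maximizes agreement between the anchor graph and the learned graph with a contrastive loss. A minimal NT-Xent-style stand-in, with random tensors in place of the two encoders' node embeddings; the bootstrapping mechanism is omitted.

```python
# Sketch: NT-Xent contrastive loss between two views' node embeddings;
# (z1[i], z2[i]) are positives, all other rows serve as negatives.
import torch
import torch.nn.functional as F

def nt_xent(z1, z2, tau=0.5):
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    logits = z1 @ z2.t() / tau
    targets = torch.arange(z1.size(0))
    return F.cross_entropy(logits, targets)

loss = nt_xent(torch.randn(16, 32), torch.randn(16, 32))
```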
In the encoder,\nby considering node attributes as initial node representations, each layer\ngenerates new representations of nodes by attending over their neighbors'\nrepresentations. In the decoder, we attempt to reverse the encoding process to\nreconstruct node attributes. Moreover, node representations are regularized to\nreconstruct the graph structure. Our proposed architecture does not need to\nknow the graph structure upfront, and thus it can be applied to inductive\nlearning. Our experiments demonstrate competitive performance on several node\nclassification benchmark datasets for transductive and inductive tasks, even\nexceeding the performance of supervised learning baselines in most cases.", + "authors": "Amin Salehi, Hasan Davulcu", + "published": "2019-05-26", + "updated": "2019-05-26", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.SI", + "stat.ML" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/2209.07817v2", + "title": "SPGP: Structure Prototype Guided Graph Pooling", + "abstract": "While graph neural networks (GNNs) have been successful for node\nclassification tasks and link prediction tasks in graph, learning graph-level\nrepresentations still remains a challenge. For the graph-level representation,\nit is important to learn both representation of neighboring nodes, i.e.,\naggregation, and graph structural information. A number of graph pooling\nmethods have been developed for this goal. However, most of the existing\npooling methods utilize k-hop neighborhood without considering explicit\nstructural information in a graph. In this paper, we propose Structure\nPrototype Guided Pooling (SPGP) that utilizes prior graph structures to\novercome the limitation. SPGP formulates graph structures as learnable\nprototype vectors and computes the affinity between nodes and prototype\nvectors. This leads to a novel node scoring scheme that prioritizes informative\nnodes while encapsulating the useful structures of the graph. Our experimental\nresults show that SPGP outperforms state-of-the-art graph pooling methods on\ngraph classification benchmark datasets in both accuracy and scalability.", + "authors": "Sangseon Lee, Dohoon Lee, Yinhua Piao, Sun Kim", + "published": "2022-09-16", + "updated": "2023-03-01", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.SI" + ], + "category": "Graph AND Structure AND Learning" + } + ] + ] + }, + { + "url": "http://arxiv.org/abs/2107.06779v1", + "title": "MMGCN: Multimodal Fusion via Deep Graph Convolution Network for Emotion Recognition in Conversation", + "abstract": "Emotion recognition in conversation (ERC) is a crucial component in affective\ndialogue systems, which helps the system understand users' emotions and\ngenerate empathetic responses. However, most works focus on modeling speaker\nand contextual information primarily on the textual modality or simply\nleveraging multimodal information through feature concatenation. In order to\nexplore a more effective way of utilizing both multimodal and long-distance\ncontextual information, we propose a new model based on multimodal fused graph\nconvolutional network, MMGCN, in this work. MMGCN can not only make use of\nmultimodal dependencies effectively, but also leverage speaker information to\nmodel inter-speaker and intra-speaker dependency. 
We evaluate our proposed\nmodel on two public benchmark datasets, IEMOCAP and MELD, and the results prove\nthe effectiveness of MMGCN, which outperforms other SOTA methods by a\nsignificant margin under the multimodal conversation setting.", + "authors": "Jingwen Hu, Yuchen Liu, Jinming Zhao, Qin Jin", + "published": "2021-07-14", + "updated": "2021-07-14", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.SD", + "eess.AS" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2203.02177v2", + "title": "GCNet: Graph Completion Network for Incomplete Multimodal Learning in Conversation", + "abstract": "Conversations have become a critical data format on social media platforms.\nUnderstanding conversation from emotion, content and other aspects also\nattracts increasing attention from researchers due to its widespread\napplication in human-computer interaction. In real-world environments, we often\nencounter the problem of incomplete modalities, which has become a core issue\nof conversation understanding. To address this problem, researchers propose\nvarious methods. However, existing approaches are mainly designed for\nindividual utterances rather than conversational data, which cannot fully\nexploit temporal and speaker information in conversations. To this end, we\npropose a novel framework for incomplete multimodal learning in conversations,\ncalled \"Graph Complete Network (GCNet)\", filling the gap of existing works. Our\nGCNet contains two well-designed graph neural network-based modules, \"Speaker\nGNN\" and \"Temporal GNN\", to capture temporal and speaker dependencies. To make\nfull use of complete and incomplete data, we jointly optimize classification\nand reconstruction tasks in an end-to-end manner. To verify the effectiveness\nof our method, we conduct experiments on three benchmark conversational\ndatasets. Experimental results demonstrate that our GCNet is superior to\nexisting state-of-the-art approaches in incomplete multimodal learning. Code is\navailable at https://github.com/zeroQiaoba/GCNet.", + "authors": "Zheng Lian, Lan Chen, Licai Sun, Bin Liu, Jianhua Tao", + "published": "2022-03-04", + "updated": "2023-01-04", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.CL" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2310.04456v1", + "title": "Multimodal Prompt Transformer with Hybrid Contrastive Learning for Emotion Recognition in Conversation", + "abstract": "Emotion Recognition in Conversation (ERC) plays an important role in driving\nthe development of human-machine interaction. Emotions can exist in multiple\nmodalities, and multimodal ERC mainly faces two problems: (1) the noise problem\nin the cross-modal information fusion process, and (2) the prediction problem\nof less sample emotion labels that are semantically similar but different\ncategories. To address these issues and fully utilize the features of each\nmodality, we adopted the following strategies: first, deep emotion cues\nextraction was performed on modalities with strong representation ability, and\nfeature filters were designed as multimodal prompt information for modalities\nwith weak representation ability. Then, we designed a Multimodal Prompt\nTransformer (MPT) to perform cross-modal information fusion. 
MPT embeds\nmultimodal fusion information into each attention layer of the Transformer,\nallowing prompt information to participate in encoding textual features and\nbeing fused with multi-level textual information to obtain better multimodal\nfusion features. Finally, we used the Hybrid Contrastive Learning (HCL)\nstrategy to optimize the model's ability to handle labels with few samples.\nThis strategy uses unsupervised contrastive learning to improve the\nrepresentation ability of multimodal fusion and supervised contrastive learning\nto mine the information of labels with few samples. Experimental results show\nthat our proposed model outperforms state-of-the-art models in ERC on two\nbenchmark datasets.", + "authors": "Shihao Zou, Xianying Huang, Xudong Shen", + "published": "2023-10-04", + "updated": "2023-10-04", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.SD", + "eess.AS" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2206.02187v1", + "title": "M2FNet: Multi-modal Fusion Network for Emotion Recognition in Conversation", + "abstract": "Emotion Recognition in Conversations (ERC) is crucial in developing\nsympathetic human-machine interaction. In conversational videos, emotion can be\npresent in multiple modalities, i.e., audio, video, and transcript. However,\ndue to the inherent characteristics of these modalities, multi-modal ERC has\nalways been considered a challenging undertaking. Existing ERC research focuses\nmainly on using text information in a discussion, ignoring the other two\nmodalities. We anticipate that emotion recognition accuracy can be improved by\nemploying a multi-modal approach. Thus, in this study, we propose a Multi-modal\nFusion Network (M2FNet) that extracts emotion-relevant features from visual,\naudio, and text modality. It employs a multi-head attention-based fusion\nmechanism to combine emotion-rich latent representations of the input data. We\nintroduce a new feature extractor to extract latent features from the audio and\nvisual modality. The proposed feature extractor is trained with a novel\nadaptive margin-based triplet loss function to learn emotion-relevant features\nfrom the audio and visual data. In the domain of ERC, the existing methods\nperform well on one benchmark dataset but not on others. Our results show that\nthe proposed M2FNet architecture outperforms all other methods in terms of\nweighted average F1 score on well-known MELD and IEMOCAP datasets and sets a\nnew state-of-the-art performance in ERC.", + "authors": "Vishal Chudasama, Purbayan Kar, Ashish Gudmalwar, Nirmesh Shah, Pankaj Wasnik, Naoyuki Onoe", + "published": "2022-06-05", + "updated": "2022-06-05", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV", + "cs.SD", + "eess.AS" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2305.17727v2", + "title": "Learning a Structural Causal Model for Intuition Reasoning in Conversation", + "abstract": "Reasoning, a crucial aspect of NLP research, has not been adequately\naddressed by prevailing models including Large Language Model. Conversation\nreasoning, as a critical component of it, remains largely unexplored due to the\nabsence of a well-designed cognitive model. In this paper, inspired by\nintuition theory on conversation cognition, we develop a conversation cognitive\nmodel (CCM) that explains how each utterance receives and activates channels of\ninformation recursively. 
Besides, we algebraically transformed CCM into a\nstructural causal model (SCM) under some mild assumptions, rendering it\ncompatible with various causal discovery methods. We further propose a\nprobabilistic implementation of the SCM for utterance-level relation reasoning.\nBy leveraging variational inference, it explores substitutes for implicit\ncauses, addresses the issue of their unobservability, and reconstructs the\ncausal representations of utterances through the evidence lower bounds.\nMoreover, we constructed synthetic and simulated datasets incorporating\nimplicit causes and complete cause labels, alleviating the current situation\nwhere all available datasets are implicit-causes-agnostic. Extensive\nexperiments demonstrate that our proposed method significantly outperforms\nexisting methods on synthetic, simulated, and real-world datasets. Finally, we\nanalyze the performance of CCM under latent confounders and propose theoretical\nideas for addressing this currently unresolved issue.", + "authors": "Hang Chen, Bingyu Liao, Jing Luo, Wenjing Zhu, Xinyu Yang", + "published": "2023-05-28", + "updated": "2024-01-16", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2207.12261v4", + "title": "GraphCFC: A Directed Graph Based Cross-Modal Feature Complementation Approach for Multimodal Conversational Emotion Recognition", + "abstract": "Emotion Recognition in Conversation (ERC) plays a significant part in\nHuman-Computer Interaction (HCI) systems since it can provide empathetic\nservices. Multimodal ERC can mitigate the drawbacks of uni-modal approaches.\nRecently, Graph Neural Networks (GNNs) have been widely used in a variety of\nfields due to their superior performance in relation modeling. In multimodal\nERC, GNNs are capable of extracting both long-distance contextual information\nand inter-modal interactive information. Unfortunately, since existing methods\nsuch as MMGCN directly fuse multiple modalities, redundant information may be\ngenerated and diverse information may be lost. In this work, we present a\ndirected Graph based Cross-modal Feature Complementation (GraphCFC) module that\ncan efficiently model contextual and interactive information. GraphCFC\nalleviates the problem of heterogeneity gap in multimodal fusion by utilizing\nmultiple subspace extractors and Pair-wise Cross-modal Complementary (PairCC)\nstrategy. We extract various types of edges from the constructed graph for\nencoding, thus enabling GNNs to extract crucial contextual and interactive\ninformation more accurately when performing message passing. Furthermore, we\ndesign a GNN structure called GAT-MLP, which can provide a new unified network\nframework for multimodal learning. The experimental results on two benchmark\ndatasets show that our GraphCFC outperforms the state-of-the-art (SOTA)\napproaches.", + "authors": "Jiang Li, Xiaoping Wang, Guoqing Lv, Zhigang Zeng", + "published": "2022-07-06", + "updated": "2023-11-22", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.LG", + "cs.MM" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/1811.00405v4", + "title": "DialogueRNN: An Attentive RNN for Emotion Detection in Conversations", + "abstract": "Emotion detection in conversations is a necessary step for a number of\napplications, including opinion mining over chat history, social media threads,\ndebates, argumentation mining, understanding consumer feedback in live\nconversations, etc. 
Currently, systems do not treat the parties in the\nconversation individually by adapting to the speaker of each utterance. In this\npaper, we describe a new method based on recurrent neural networks that keeps\ntrack of the individual party states throughout the conversation and uses this\ninformation for emotion classification. Our model outperforms the state of the\nart by a significant margin on two different datasets.", + "authors": "Navonil Majumder, Soujanya Poria, Devamanyu Hazarika, Rada Mihalcea, Alexander Gelbukh, Erik Cambria", + "published": "2018-11-01", + "updated": "2019-05-25", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/1806.00064v1", + "title": "Efficient Low-rank Multimodal Fusion with Modality-Specific Factors", + "abstract": "Multimodal research is an emerging field of artificial intelligence, and one\nof the main research problems in this field is multimodal fusion. The fusion of\nmultimodal data is the process of integrating multiple unimodal representations\ninto one compact multimodal representation. Previous research in this field has\nexploited the expressiveness of tensors for multimodal representation. However,\nthese methods often suffer from exponential increase in dimensions and in\ncomputational complexity introduced by transformation of input into tensor. In\nthis paper, we propose the Low-rank Multimodal Fusion method, which performs\nmultimodal fusion using low-rank tensors to improve efficiency. We evaluate our\nmodel on three different tasks: multimodal sentiment analysis, speaker trait\nanalysis, and emotion recognition. Our model achieves competitive results on\nall these tasks while drastically reducing computational complexity. Additional\nexperiments also show that our model can perform robustly for a wide range of\nlow-rank settings, and is indeed much more efficient in both training and\ninference compared to other methods that utilize tensor representations.", + "authors": "Zhun Liu, Ying Shen, Varun Bharadhwaj Lakshminarasimhan, Paul Pu Liang, Amir Zadeh, Louis-Philippe Morency", + "published": "2018-05-31", + "updated": "2018-05-31", + "primary_cat": "cs.AI", + "cats": [ + "cs.AI", + "cs.LG", + "stat.ML" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/1707.07250v1", + "title": "Tensor Fusion Network for Multimodal Sentiment Analysis", + "abstract": "Multimodal sentiment analysis is an increasingly popular research area, which\nextends the conventional language-based definition of sentiment analysis to a\nmultimodal setup where other relevant modalities accompany language. In this\npaper, we pose the problem of multimodal sentiment analysis as modeling\nintra-modality and inter-modality dynamics. We introduce a novel model, termed\nTensor Fusion Network, which learns both such dynamics end-to-end. The proposed\napproach is tailored for the volatile nature of spoken language in online\nvideos as well as accompanying gestures and voice. 
In the experiments, our\nmodel outperforms state-of-the-art approaches for both multimodal and unimodal\nsentiment analysis.", + "authors": "Amir Zadeh, Minghai Chen, Soujanya Poria, Erik Cambria, Louis-Philippe Morency", + "published": "2017-07-23", + "updated": "2017-07-23", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2206.05833v2", + "title": "COLD Fusion: Calibrated and Ordinal Latent Distribution Fusion for Uncertainty-Aware Multimodal Emotion Recognition", + "abstract": "Automatically recognising apparent emotions from face and voice is hard, in\npart because of various sources of uncertainty, including in the input data and\nthe labels used in a machine learning framework. This paper introduces an\nuncertainty-aware audiovisual fusion approach that quantifies modality-wise\nuncertainty towards emotion prediction. To this end, we propose a novel fusion\nframework in which we first learn latent distributions over audiovisual\ntemporal context vectors separately, and then constrain the variance vectors of\nunimodal latent distributions so that they represent the amount of information\neach modality provides w.r.t. emotion recognition. In particular, we impose\nCalibration and Ordinal Ranking constraints on the variance vectors of\naudiovisual latent distributions. When well-calibrated, modality-wise\nuncertainty scores indicate how much their corresponding predictions may differ\nfrom the ground truth labels. Well-ranked uncertainty scores allow the ordinal\nranking of different frames across the modalities. To jointly impose both these\nconstraints, we propose a softmax distributional matching loss. In both\nclassification and regression settings, we compare our uncertainty-aware fusion\nmodel with standard model-agnostic fusion baselines. Our evaluation on two\nemotion recognition corpora, AVEC 2019 CES and IEMOCAP, shows that audiovisual\nemotion recognition can considerably benefit from well-calibrated and\nwell-ranked latent uncertainty measures.", + "authors": "Mani Kumar Tellamekala, Shahin Amiriparian, Bj\u00f6rn W. Schuller, Elisabeth Andr\u00e9, Timo Giesbrecht, Michel Valstar", + "published": "2022-06-12", + "updated": "2023-10-16", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV", + "cs.HC", + "cs.MM" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2310.20494v1", + "title": "A Transformer-Based Model With Self-Distillation for Multimodal Emotion Recognition in Conversations", + "abstract": "Emotion recognition in conversations (ERC), the task of recognizing the\nemotion of each utterance in a conversation, is crucial for building empathetic\nmachines. Existing studies focus mainly on capturing context- and\nspeaker-sensitive dependencies on the textual modality but ignore the\nsignificance of multimodal information. Different from emotion recognition in\ntextual conversations, capturing intra- and inter-modal interactions between\nutterances, learning weights between different modalities, and enhancing modal\nrepresentations play important roles in multimodal ERC. 
In this paper, we\npropose a transformer-based model with self-distillation (SDT) for the task.\nThe transformer-based model captures intra- and inter-modal interactions by\nutilizing intra- and inter-modal transformers, and learns weights between\nmodalities dynamically by designing a hierarchical gated fusion strategy.\nFurthermore, to learn more expressive modal representations, we treat soft\nlabels of the proposed model as extra training supervision. Specifically, we\nintroduce self-distillation to transfer knowledge of hard and soft labels from\nthe proposed model to each modality. Experiments on IEMOCAP and MELD datasets\ndemonstrate that SDT outperforms previous state-of-the-art baselines.", + "authors": "Hui Ma, Jian Wang, Hongfei Lin, Bo Zhang, Yijia Zhang, Bo Xu", + "published": "2023-10-31", + "updated": "2023-10-31", + "primary_cat": "cs.AI", + "cats": [ + "cs.AI", + "cs.MM" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2308.04502v2", + "title": "Revisiting Disentanglement and Fusion on Modality and Context in Conversational Multimodal Emotion Recognition", + "abstract": "It has been a hot research topic to enable machines to understand human\nemotions in multimodal contexts under dialogue scenarios, which is tasked with\nmultimodal emotion analysis in conversation (MM-ERC). MM-ERC has received\nconsistent attention in recent years, where a diverse range of methods has been\nproposed for securing better task performance. Most existing works treat MM-ERC\nas a standard multimodal classification problem and perform multimodal feature\ndisentanglement and fusion for maximizing feature utility. Yet after revisiting\nthe characteristic of MM-ERC, we argue that both the feature multimodality and\nconversational contextualization should be properly modeled simultaneously\nduring the feature disentanglement and fusion steps. In this work, we target\nfurther pushing the task performance by taking full consideration of the above\ninsights. On the one hand, during feature disentanglement, based on the\ncontrastive learning technique, we devise a Dual-level Disentanglement\nMechanism (DDM) to decouple the features into both the modality space and\nutterance space. On the other hand, during the feature fusion stage, we propose\na Contribution-aware Fusion Mechanism (CFM) and a Context Refusion Mechanism\n(CRM) for multimodal and context integration, respectively. They together\nschedule the proper integrations of multimodal and context features.\nSpecifically, CFM explicitly manages the multimodal feature contributions\ndynamically, while CRM flexibly coordinates the introduction of dialogue\ncontexts. On two public MM-ERC datasets, our system achieves new\nstate-of-the-art performance consistently. Further analyses demonstrate that\nall our proposed mechanisms greatly facilitate the MM-ERC task by making full\nuse of the multimodal and context features adaptively. 
Note that our proposed\nmethods have the great potential to facilitate a broader range of other\nconversational multimodal tasks.", + "authors": "Bobo Li, Hao Fei, Lizi Liao, Yu Zhao, Chong Teng, Tat-Seng Chua, Donghong Ji, Fei Li", + "published": "2023-08-08", + "updated": "2023-08-12", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2402.02321v1", + "title": "Active Learning for Graphs with Noisy Structures", + "abstract": "Graph Neural Networks (GNNs) have seen significant success in tasks such as\nnode classification, largely contingent upon the availability of sufficient\nlabeled nodes. Yet, the excessive cost of labeling large-scale graphs led to a\nfocus on active learning on graphs, which aims for effective data selection to\nmaximize downstream model performance. Notably, most existing methods assume\nreliable graph topology, while real-world scenarios often present noisy graphs.\nGiven this, designing a successful active learning framework for noisy graphs\nis highly needed but challenging, as selecting data for labeling and obtaining\na clean graph are two tasks naturally interdependent: selecting high-quality\ndata requires clean graph structure while cleaning noisy graph structure\nrequires sufficient labeled data. Considering the complexity mentioned above,\nwe propose an active learning framework, GALClean, which has been specifically\ndesigned to adopt an iterative approach for conducting both data selection and\ngraph purification simultaneously with best information learned from the prior\niteration. Importantly, we summarize GALClean as an instance of the\nExpectation-Maximization algorithm, which provides a theoretical understanding\nof its design and mechanisms. This theory naturally leads to an enhanced\nversion, GALClean+. Extensive experiments have demonstrated the effectiveness\nand robustness of our proposed method across various types and levels of noisy\ngraphs.", + "authors": "Hongliang Chi, Cong Qi, Suhang Wang, Yao Ma", + "published": "2024-02-04", + "updated": "2024-02-04", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/1909.11594v1", + "title": "Structured Graph Learning Via Laplacian Spectral Constraints", + "abstract": "Learning a graph with a specific structure is essential for interpretability\nand identification of the relationships among data. It is well known that\nstructured graph learning from observed samples is an NP-hard combinatorial\nproblem. In this paper, we first show that for a set of important graph\nfamilies it is possible to convert the structural constraints into\neigenvalue constraints of the graph Laplacian matrix. Then we introduce a\nunified graph learning framework, lying at the integration of the spectral\nproperties of the Laplacian matrix with Gaussian graphical modeling that is\ncapable of learning structures of a large class of graph families. The proposed\nalgorithms are provably convergent and practically amenable for large-scale\nsemi-supervised and unsupervised graph-based learning tasks. Extensive\nnumerical experiments with both synthetic and real data sets demonstrate the\neffectiveness of the proposed methods. An R package containing code for all the\nexperimental results is available at\nhttps://cran.r-project.org/package=spectralGraphTopology.", + "authors": "Sandeep Kumar, Jiaxi Ying, Jos\u00e9 Vin\u00edcius de M. Cardoso, Daniel P.
Palomar", + "published": "2019-09-24", + "updated": "2019-09-24", + "primary_cat": "stat.ML", + "cats": [ + "stat.ML", + "cs.LG", + "cs.SI", + "math.OC", + "stat.AP" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/2305.15843v1", + "title": "TabGSL: Graph Structure Learning for Tabular Data Prediction", + "abstract": "This work presents a novel approach to tabular data prediction leveraging\ngraph structure learning and graph neural networks. Despite the prevalence of\ntabular data in real-world applications, traditional deep learning methods\noften overlook the potentially valuable associations between data instances.\nSuch associations can offer beneficial insights for classification tasks, as\ninstances may exhibit similar patterns of correlations among features and\ntarget labels. This information can be exploited by graph neural networks,\nnecessitating robust graph structures. However, existing studies primarily\nfocus on improving graph structure from noisy data, largely neglecting the\npossibility of deriving graph structures from tabular data. We present a novel\nsolution, Tabular Graph Structure Learning (TabGSL), to enhance tabular data\nprediction by simultaneously learning instance correlation and feature\ninteraction within a unified framework. This is achieved through a proposed\ngraph contrastive learning module, along with transformer-based feature\nextractor and graph neural network. Comprehensive experiments conducted on 30\nbenchmark tabular datasets demonstrate that TabGSL markedly outperforms both\ntree-based models and recent deep learning-based tabular models. Visualizations\nof the learned instance embeddings further substantiate the effectiveness of\nTabGSL.", + "authors": "Jay Chiehen Liao, Cheng-Te Li", + "published": "2023-05-25", + "updated": "2023-05-25", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.SI" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/2206.08561v1", + "title": "Boosting Graph Structure Learning with Dummy Nodes", + "abstract": "With the development of graph kernels and graph representation learning, many\nsuperior methods have been proposed to handle scalability and oversmoothing\nissues on graph structure learning. However, most of those strategies are\ndesigned based on practical experience rather than theoretical analysis. In\nthis paper, we use a particular dummy node connecting to all existing vertices\nwithout affecting original vertex and edge properties. We further prove that\nsuch the dummy node can help build an efficient monomorphic edge-to-vertex\ntransform and an epimorphic inverse to recover the original graph back. It also\nindicates that adding dummy nodes can preserve local and global structures for\nbetter graph representation learning. We extend graph kernels and graph neural\nnetworks with dummy nodes and conduct experiments on graph classification and\nsubgraph isomorphism matching tasks. Empirical results demonstrate that taking\ngraphs with dummy nodes as input significantly boosts graph structure learning,\nand using their edge-to-vertex graphs can also achieve similar results. 
We also\ndiscuss the gain of expressive power from the dummy in neural networks.", + "authors": "Xin Liu, Jiayang Cheng, Yangqiu Song, Xin Jiang", + "published": "2022-06-17", + "updated": "2022-06-17", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/1904.09671v1", + "title": "DDGK: Learning Graph Representations for Deep Divergence Graph Kernels", + "abstract": "Can neural networks learn to compare graphs without feature engineering? In\nthis paper, we show that it is possible to learn representations for graph\nsimilarity with neither domain knowledge nor supervision (i.e.\\ feature\nengineering or labeled graphs). We propose Deep Divergence Graph Kernels, an\nunsupervised method for learning representations over graphs that encodes a\nrelaxed notion of graph isomorphism. Our method consists of three parts. First,\nwe learn an encoder for each anchor graph to capture its structure. Second, for\neach pair of graphs, we train a cross-graph attention network which uses the\nnode representations of an anchor graph to reconstruct another graph. This\napproach, which we call isomorphism attention, captures how well the\nrepresentations of one graph can encode another. We use the attention-augmented\nencoder's predictions to define a divergence score for each pair of graphs.\nFinally, we construct an embedding space for all graphs using these pair-wise\ndivergence scores.\n Unlike previous work, much of which relies on 1) supervision, 2) domain\nspecific knowledge (e.g. a reliance on Weisfeiler-Lehman kernels), and 3) known\nnode alignment, our unsupervised method jointly learns node representations,\ngraph representations, and an attention-based alignment between graphs.\n Our experimental results show that Deep Divergence Graph Kernels can learn an\nunsupervised alignment between graphs, and that the learned representations\nachieve competitive results when used as features on a number of challenging\ngraph classification tasks. Furthermore, we illustrate how the learned\nattention allows insight into the alignment of sub-structures across\ngraphs.", + "authors": "Rami Al-Rfou, Dustin Zelle, Bryan Perozzi", + "published": "2019-04-21", + "updated": "2019-04-21", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.IR", + "cs.SI", + "stat.ML" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/2106.03236v1", + "title": "Graph2Graph Learning with Conditional Autoregressive Models", + "abstract": "We present a graph neural network model for solving graph-to-graph learning\nproblems. Most deep learning on graphs considers ``simple'' problems such as\ngraph classification or regressing real-valued graph properties. For such\ntasks, the main requirement for intermediate representations of the data is to\nmaintain the structure needed for output, i.e., keeping classes separated or\nmaintaining the order indicated by the regressor. However, a number of learning\ntasks, such as regressing graph-valued output, generative models, or graph\nautoencoders, aim to predict a graph-structured output. In order to\nsuccessfully do this, the learned representations need to preserve far more\nstructure.
We present a conditional auto-regressive model for graph-to-graph\nlearning and illustrate its representational capabilities via experiments on\nchallenging subgraph predictions from graph algorithmics; as a graph\nautoencoder for reconstruction and visualization; and on pretraining\nrepresentations that allow graph classification with limited labeled data.", + "authors": "Guan Wang, Francois Bernard Lauze, Aasa Feragen", + "published": "2021-06-06", + "updated": "2021-06-06", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/2009.00647v4", + "title": "Lifelong Graph Learning", + "abstract": "Graph neural networks (GNN) are powerful models for many graph-structured\ntasks. Existing models often assume that the complete structure of the graph is\navailable during training. In practice, however, graph-structured data is\nusually formed in a streaming fashion so that learning a graph continuously is\noften necessary. In this paper, we bridge GNN and lifelong learning by\nconverting a continual graph learning problem to a regular graph learning\nproblem so GNN can inherit the lifelong learning techniques developed for\nconvolutional neural networks (CNN). We propose a new topology, the feature\ngraph, which takes features as new nodes and turns nodes into independent\ngraphs. This successfully converts the original problem of node classification\nto graph classification. In the experiments, we demonstrate the efficiency and\neffectiveness of feature graph networks (FGN) by continuously learning a\nsequence of classical graph datasets. We also show that FGN achieves superior\nperformance in two applications, i.e., lifelong human action recognition with\nwearable devices and feature matching. To the best of our knowledge, FGN is the\nfirst method to bridge graph learning and lifelong learning via a novel graph\ntopology. Source code is available at https://github.com/wang-chen/LGL", + "authors": "Chen Wang, Yuheng Qiu, Dasong Gao, Sebastian Scherer", + "published": "2020-09-01", + "updated": "2022-03-26", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "stat.ML" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/2404.11869v1", + "title": "Multi-view Graph Structural Representation Learning via Graph Coarsening", + "abstract": "Graph Transformers (GTs) have made remarkable achievements in graph-level\ntasks. However, most existing works regard graph structures as a form of\nguidance or bias for enhancing node representations, which focuses on\nnode-central perspectives and lacks explicit representations of edges and\nstructures. One natural question is, can we treat graph structures node-like as\na whole to learn high-level features? Through experimental analysis, we explore\nthe feasibility of this assumption. Based on our findings, we propose a novel\nmulti-view graph structural representation learning model via graph coarsening\n(MSLgo) on GT architecture for graph classification. Specifically, we build\nthree unique views, original, coarsening, and conversion, to learn a thorough\nstructural representation. We compress loops and cliques via hierarchical\nheuristic graph coarsening and restrict them with well-designed constraints,\nwhich builds the coarsening view to learn high-level interactions between\nstructures. We also introduce line graphs for edge embeddings and switch to\nedge-central perspective to construct the conversion view. 
Experiments on six\nreal-world datasets demonstrate the improvements of MSLgo over 14 baselines\nfrom various architectures.", + "authors": "Xiaorui Qi, Qijie Bai, Yanlong Wen, Haiwei Zhang, Xiaojie Yuan", + "published": "2024-04-18", + "updated": "2024-04-18", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.SI" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/2006.02879v1", + "title": "Auto-decoding Graphs", + "abstract": "We present an approach to synthesizing new graph structures from empirically\nspecified distributions. The generative model is an auto-decoder that learns to\nsynthesize graphs from latent codes. The graph synthesis model is learned\njointly with an empirical distribution over the latent codes. Graphs are\nsynthesized using self-attention modules that are trained to identify likely\nconnectivity patterns. Graph-based normalizing flows are used to sample latent\ncodes from the distribution learned by the auto-decoder. The resulting model\ncombines accuracy and scalability. On benchmark datasets of large graphs, the\npresented model outperforms the state of the art by a factor of 1.5 in mean\naccuracy and average rank across at least three different graph statistics,\nwith a 2x speedup during inference.", + "authors": "Sohil Atul Shah, Vladlen Koltun", + "published": "2020-06-04", + "updated": "2020-06-04", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "stat.ML" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/2309.10134v1", + "title": "GDM: Dual Mixup for Graph Classification with Limited Supervision", + "abstract": "Graph Neural Networks (GNNs) require a large number of labeled graph samples\nto obtain good performance on the graph classification task.
The performance of\nGNNs degrades significantly as the number of labeled graph samples decreases.\nTo reduce the annotation cost, it is therefore important to develop graph\naugmentation methods that can generate new graph instances to increase the size\nand diversity of the limited set of available labeled graph samples. In this\nwork, we propose a novel mixup-based graph augmentation method, Graph Dual\nMixup (GDM), that leverages both functional and structural information of the\ngraph instances to generate new labeled graph samples. GDM employs a graph\nstructural auto-encoder to learn structural embeddings of the graph samples,\nand then applies mixup to the structural information of the graphs in the\nlearned structural embedding space and generates new graph structures from the\nmixup structural embeddings. As for the functional information, GDM applies\nmixup directly to the input node features of the graph samples to generate\nfunctional node feature information for new mixup graph instances. Jointly, the\ngenerated input node features and graph structures yield new graph samples\nwhich can supplement the set of original labeled graphs. Furthermore, we\npropose two novel Balanced Graph Sampling methods to enhance the balanced\ndifficulty and diversity for the generated graph samples. Experimental results\non the benchmark datasets demonstrate that our proposed method substantially\noutperforms the state-of-the-art graph augmentation methods when the labeled\ngraphs are scarce.", + "authors": "Abdullah Alchihabi, Yuhong Guo", + "published": "2023-09-18", + "updated": "2023-09-18", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/2111.03262v2", + "title": "CGCL: Collaborative Graph Contrastive Learning without Handcrafted Graph Data Augmentations", + "abstract": "Unsupervised graph representation learning is a non-trivial topic. The\nsuccess of contrastive methods in the unsupervised representation learning on\nstructured data inspires similar attempts on the graph. Existing graph\ncontrastive learning (GCL) aims to learn the invariance across multiple\naugmentation views, which renders it heavily reliant on the handcrafted graph\naugmentations. However, inappropriate graph data augmentations can potentially\njeopardize such invariance. In this paper, we show the potential hazards of\ninappropriate augmentations and then propose a novel Collaborative Graph\nContrastive Learning framework (CGCL). This framework harnesses multiple graph\nencoders to observe the graph. Features observed from different encoders serve\nas the contrastive views in contrastive learning, which avoids inducing\nunstable perturbation and guarantees the invariance. To ensure the\ncollaboration among diverse graph encoders, we propose the concepts of\nasymmetric architecture and complementary encoders as the design principle. To\nfurther prove the rationality, we utilize two quantitative metrics to measure\nthe assembly of CGCL respectively. Extensive experiments demonstrate the\nadvantages of CGCL in unsupervised graph-level representation learning and the\npotential of collaborative framework. 
The source code for reproducibility is\navailable at https://github.com/zhangtia16/CGCL", + "authors": "Tianyu Zhang, Yuxiang Ren, Wenzheng Feng, Weitao Du, Xuecang Zhang", + "published": "2021-11-05", + "updated": "2024-04-01", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/1912.10206v1", + "title": "How Robust Are Graph Neural Networks to Structural Noise?", + "abstract": "Graph neural networks (GNNs) are an emerging model for learning graph\nembeddings and making predictions on graph structured data. However, robustness\nof graph neural networks is not yet well-understood. In this work, we focus on\nnode structural identity predictions, where a representative GNN model is able\nto achieve near-perfect accuracy. We also show that the same GNN model is not\nrobust to addition of structural noise, through a controlled dataset and set of\nexperiments. Finally, we show that under the right conditions, graph-augmented\ntraining is capable of significantly improving robustness to structural noise.", + "authors": "James Fox, Sivasankaran Rajamanickam", + "published": "2019-12-21", + "updated": "2019-12-21", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "stat.ML" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/2210.02060v1", + "title": "Graph Classification via Discriminative Edge Feature Learning", + "abstract": "Spectral graph convolutional neural networks (GCNNs) have been producing\nencouraging results in graph classification tasks. However, most spectral GCNNs\nutilize fixed graphs when aggregating node features, while omitting edge\nfeature learning and failing to get an optimal graph structure. Moreover, many\nexisting graph datasets do not provide initialized edge features, further\nrestraining the ability of learning edge features via spectral GCNNs. In this\npaper, we try to address this issue by designing an edge feature scheme and an\nadd-on layer between every two stacked graph convolution layers in GCNN. Both\nare lightweight while effective in filling the gap between edge feature\nlearning and performance enhancement of graph classification. The edge feature\nscheme makes edge features adapt to node representations at different graph\nconvolution layers. The add-on layers help adjust the edge features to an\noptimal graph structure. To test the effectiveness of our method, we take\nEuclidean positions as initial node features and extract graphs with semantic\ninformation from point cloud objects. The node features of our extracted graphs\nare more scalable for edge feature learning than most existing graph datasets\n(in one-hot encoded label format). Three new graph datasets are constructed\nbased on ModelNet40, ModelNet10 and ShapeNet Part datasets. 
Experimental\nresults show that our method outperforms state-of-the-art graph classification\nmethods on the new datasets by reaching 96.56% overall accuracy on\nGraph-ModelNet40, 98.79% on Graph-ModelNet10 and 97.91% on Graph-ShapeNet Part.\nThe constructed graph datasets will be released to the community.", + "authors": "Yang Yi, Xuequan Lu, Shang Gao, Antonio Robles-Kelly, Yuejie Zhang", + "published": "2022-10-05", + "updated": "2022-10-05", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV", + "cs.LG" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/1902.10042v2", + "title": "Graph Neural Processes: Towards Bayesian Graph Neural Networks", + "abstract": "We introduce Graph Neural Processes (GNP), inspired by the recent work in\nconditional and latent neural processes. A Graph Neural Process is defined as a\nConditional Neural Process that operates on arbitrary graph data. It takes\nfeatures of sparsely observed context points as input, and outputs a\ndistribution over target points. We demonstrate graph neural processes in edge\nimputation and discuss benefits and drawbacks of the method for other\napplication areas. One major benefit of GNPs is the ability to quantify\nuncertainty in deep learning on graph structures. An additional benefit of this\nmethod is the ability to extend graph neural networks to inputs of dynamic\nsized graphs.", + "authors": "Andrew Carr, David Wingate", + "published": "2019-02-26", + "updated": "2019-10-01", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "stat.ML" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/1903.00614v1", + "title": "GAP: Generalizable Approximate Graph Partitioning Framework", + "abstract": "Graph partitioning is the problem of dividing the nodes of a graph into\nbalanced partitions while minimizing the edge cut across the partitions. Due to\nits combinatorial nature, many approximate solutions have been developed,\nincluding variants of multi-level methods and spectral clustering. We propose\nGAP, a Generalizable Approximate Partitioning framework that takes a deep\nlearning approach to graph partitioning. We define a differentiable loss\nfunction that represents the partitioning objective and use backpropagation to\noptimize the network parameters. Unlike baselines that redo the optimization\nper graph, GAP is capable of generalization, allowing us to train models that\nproduce performant partitions at inference time, even on unseen graphs.\nFurthermore, because we learn the representation of the graph while jointly\noptimizing for the partitioning loss function, GAP can be easily tuned for a\nvariety of graph structures. We evaluate the performance of GAP on graphs of\nvarying sizes and structures, including graphs of widely used machine learning\nmodels (e.g., ResNet, VGG, and Inception-V3), scale-free graphs, and random\ngraphs. 
We show that GAP achieves competitive partitions while being up to 100\ntimes faster than the baseline and generalizes to unseen graphs.", + "authors": "Azade Nazi, Will Hang, Anna Goldie, Sujith Ravi, Azalia Mirhoseini", + "published": "2019-03-02", + "updated": "2019-03-02", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "stat.ML" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/2204.05258v1", + "title": "Multi-view graph structure learning using subspace merging on Grassmann manifold", + "abstract": "Many successful learning algorithms have been recently developed to represent\ngraph-structured data. For example, Graph Neural Networks (GNNs) have achieved\nconsiderable successes in various tasks such as node classification, graph\nclassification, and link prediction. However, these methods are highly\ndependent on the quality of the input graph structure. One common approach to\nalleviate this problem is to learn the graph structure instead of relying on a\nmanually designed graph. In this paper, we introduce a new graph structure\nlearning approach using multi-view learning, named MV-GSL (Multi-View Graph\nStructure Learning), in which we aggregate different graph structure learning\nmethods using subspace merging on Grassmann manifold to improve the quality of\nthe learned graph structures. Extensive experiments are performed to evaluate\nthe effectiveness of the proposed method on two benchmark datasets, Cora and\nCiteseer. Our experiments show that the proposed method has promising\nperformance compared to single and other combined graph structure learning\nmethods.", + "authors": "Razieh Ghiasi, Hossein Amirkhani, Alireza Bosaghzadeh", + "published": "2022-04-11", + "updated": "2022-04-11", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/2401.00876v1", + "title": "Balanced Graph Structure Information for Brain Disease Detection", + "abstract": "Analyzing connections between brain regions of interest (ROI) is vital to\ndetect neurological disorders such as autism or schizophrenia. Recent\nadvancements employ graph neural networks (GNNs) to utilize graph structures in\nbrains, improving detection performances. Current methods use correlation\nmeasures between ROI's blood-oxygen-level-dependent (BOLD) signals to generate\nthe graph structure. Other methods use the training samples to learn the\noptimal graph structure through end-to-end learning. However, implementing\nthose methods independently leads to some issues with noisy data for the\ncorrelation graphs and overfitting problems for the optimal graph. In this\nwork, we propose Bargrain (balanced graph structure for brains), which models\ntwo graph structures: filtered correlation matrix and optimal sample graph\nusing graph convolution networks (GCNs). This approach aims to get advantages\nfrom both graphs and address the limitations of only relying on a single type\nof structure.
Based on our extensive experiments, Bargrain outperforms\nstate-of-the-art methods in classification tasks on brain disease datasets, as\nmeasured by average F1 scores.", + "authors": "Falih Gozi Febrinanto, Mujie Liu, Feng Xia", + "published": "2023-12-30", + "updated": "2023-12-30", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI", + "q-bio.NC" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/2101.00082v1", + "title": "Bosonic Random Walk Networks for Graph Learning", + "abstract": "The development of Graph Neural Networks (GNNs) has led to great progress in\nmachine learning on graph-structured data. These networks operate via diffusing\ninformation across the graph nodes while capturing the structure of the graph.\nRecently there has also been tremendous progress in quantum computing\ntechniques. In this work, we explore applications of multi-particle quantum\nwalks on diffusing information across graphs. Our model is based on learning\nthe operators that govern the dynamics of quantum random walkers on graphs. We\ndemonstrate the effectiveness of our method on classification and regression\ntasks.", + "authors": "Shiv Shankar, Don Towsley", + "published": "2020-12-31", + "updated": "2020-12-31", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/2109.11796v1", + "title": "Edge but not Least: Cross-View Graph Pooling", + "abstract": "Graph neural networks have emerged as a powerful model for graph\nrepresentation learning to undertake graph-level prediction tasks. Various\ngraph pooling methods have been developed to coarsen an input graph into a\nsuccinct graph-level representation through aggregating node embeddings\nobtained via graph convolution. However, most graph pooling methods are heavily\nnode-centric and are unable to fully leverage the crucial information contained\nin global graph structure. This paper presents a cross-view graph pooling\n(Co-Pooling) method to better exploit crucial graph structure information. The\nproposed Co-Pooling fuses pooled representations learnt from both node view and\nedge view. Through cross-view interaction, edge-view pooling and node-view\npooling seamlessly reinforce each other to learn more informative graph-level\nrepresentations. Co-Pooling has the advantage of handling various graphs with\ndifferent types of node attributes. Extensive experiments on a total of 15\ngraph benchmark datasets validate the effectiveness of our proposed method,\ndemonstrating its superior performance over state-of-the-art pooling methods on\nboth graph classification and graph regression tasks.", + "authors": "Xiaowei Zhou, Jie Yin, Ivor W. Tsang", + "published": "2021-09-24", + "updated": "2021-09-24", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/2103.10837v1", + "title": "Quantum machine learning of graph-structured data", + "abstract": "Graph structures are ubiquitous throughout the natural sciences. Here we\nconsider graph-structured quantum data and describe how to carry out its\nquantum machine learning via quantum neural networks. In particular, we\nconsider training data in the form of pairs of input and output quantum states\nassociated with the vertices of a graph, together with edges encoding\ncorrelations between the vertices.
We explain how to systematically exploit\nthis additional graph structure to improve quantum learning algorithms. These\nalgorithms are numerically simulated and exhibit excellent learning behavior.\nScalable quantum implementations of the learning procedures are likely feasible\non the next generation of quantum computing devices.", + "authors": "Kerstin Beer, Megha Khosla, Julius K\u00f6hler, Tobias J. Osborne", + "published": "2021-03-19", + "updated": "2021-03-19", + "primary_cat": "quant-ph", + "cats": [ + "quant-ph" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/2212.04934v1", + "title": "Learning Graph Algorithms With Recurrent Graph Neural Networks", + "abstract": "Classical graph algorithms work well for combinatorial problems that can be\nthoroughly formalized and abstracted. Once the algorithm is derived, it\ngeneralizes to instances of any size. However, developing an algorithm that\nhandles complex structures and interactions in the real world can be\nchallenging. Rather than specifying the algorithm, we can try to learn it from\nthe graph-structured data. Graph Neural Networks (GNNs) are inherently capable\nof working on graph structures; however, they struggle to generalize well, and\nlearning on larger instances is challenging. In order to scale, we focus on a\nrecurrent architecture design that can learn simple graph problems end to end\non smaller graphs and then extrapolate to larger instances. As our main\ncontribution, we identify three essential techniques for recurrent GNNs to\nscale. By using (i) skip connections, (ii) state regularization, and (iii) edge\nconvolutions, we can guide GNNs toward extrapolation. This allows us to train\non small graphs and apply the same model to much larger graphs during\ninference. Moreover, we empirically validate the extrapolation capabilities of\nour GNNs on algorithmic datasets.", + "authors": "Florian Gr\u00f6tschla, Jo\u00ebl Mathys, Roger Wattenhofer", + "published": "2022-12-09", + "updated": "2022-12-09", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/2201.07409v2", + "title": "Dual Space Graph Contrastive Learning", + "abstract": "Unsupervised graph representation learning has emerged as a powerful tool to\naddress real-world problems and achieves huge success in the graph learning\ndomain. Graph contrastive learning is one of the unsupervised graph\nrepresentation learning methods, which recently attracts attention from\nresearchers and has achieved state-of-the-art performances on various tasks.\nThe key to the success of graph contrastive learning is to construct proper\ncontrasting pairs to acquire the underlying structural semantics of the graph.\nHowever, this key part is not fully explored currently; most of the ways\ngenerating contrasting pairs focus on augmenting or perturbing graph\nstructures to obtain different views of the input graph. But such strategies\ncould degrade the performances via adding noise into the graph, which may\nnarrow down the field of the applications of graph contrastive learning. In\nthis paper, we propose a novel graph contrastive learning method, namely\n\textbf{D}ual \textbf{S}pace \textbf{G}raph \textbf{C}ontrastive (DSGC)\nLearning, to conduct graph contrastive learning among views generated in\ndifferent spaces including the hyperbolic space and the Euclidean space.
Since\nboth spaces have their own advantages to represent graph data in the embedding\nspaces, we hope to utilize graph contrastive learning to bridge the spaces and\nleverage advantages from both sides. The comparison experiment results show\nthat DSGC achieves competitive or better performances among all the datasets.\nIn addition, we conduct extensive experiments to analyze the impact of\ndifferent graph encoders on DSGC, giving insights about how to better leverage\nthe advantages of contrastive learning between different spaces.", + "authors": "Haoran Yang, Hongxu Chen, Shirui Pan, Lin Li, Philip S. Yu, Guandong Xu", + "published": "2022-01-19", + "updated": "2022-03-04", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.SI" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/2307.02126v1", + "title": "Robust Graph Structure Learning with the Alignment of Features and Adjacency Matrix", + "abstract": "To improve the robustness of graph neural networks (GNN), graph structure\nlearning (GSL) has attracted great interest due to the pervasiveness of noise\nin graph data. Many approaches have been proposed for GSL to jointly learn a\nclean graph structure and corresponding representations. To extend the previous\nwork, this paper proposes a novel regularized GSL approach, particularly with\nan alignment of feature information and graph information, which is motivated\nmainly by our derived lower bound of node-level Rademacher complexity for GNNs.\nAdditionally, our proposed approach incorporates sparse dimensional reduction\nto leverage low-dimensional node features that are relevant to the graph\nstructure. To evaluate the effectiveness of our approach, we conduct\nexperiments on real-world graphs. The results demonstrate that our proposed GSL\nmethod outperforms several competitive baselines, especially in scenarios where\nthe graph structures are heavily affected by noise. Overall, our research\nhighlights the importance of integrating feature and graph information\nalignment in GSL, as inspired by our derived theoretical result, and showcases\nthe superiority of our approach in handling noisy graph structures through\ncomprehensive experiments on real-world datasets.", + "authors": "Shaogao Lv, Gang Wen, Shiyu Liu, Linsen Wei, Ming Li", + "published": "2023-07-05", + "updated": "2023-07-05", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "stat.ML" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/2306.11264v1", + "title": "GraphGLOW: Universal and Generalizable Structure Learning for Graph Neural Networks", + "abstract": "Graph structure learning is a well-established problem that aims at\noptimizing graph structures adaptive to specific graph datasets to help message\npassing neural networks (i.e., GNNs) to yield effective and robust node\nembeddings. However, the common limitation of existing models lies in the\nunderlying \\textit{closed-world assumption}: the testing graph is the same as\nthe training graph. This premise requires independently training the structure\nlearning model from scratch for each graph dataset, which leads to prohibitive\ncomputation costs and potential risks for serious over-fitting. To mitigate\nthese issues, this paper explores a new direction that moves forward to learn a\nuniversal structure learning model that can generalize across graph datasets in\nan open world. 
We first introduce the mathematical definition of this novel\nproblem setting, and describe the model formulation from a probabilistic\ndata-generative aspect. Then we devise a general framework that coordinates a\nsingle graph-shared structure learner and multiple graph-specific GNNs to\ncapture the generalizable patterns of optimal message-passing topology across\ndatasets. The well-trained structure learner can directly produce adaptive\nstructures for unseen target graphs without any fine-tuning. Across diverse\ndatasets and various challenging cross-graph generalization protocols, our\nexperiments show that even without training on target graphs, the proposed\nmodel i) significantly outperforms expressive GNNs trained on input\n(non-optimized) topology, and ii) surprisingly performs on par with\nstate-of-the-art models that independently optimize adaptive structures for\nspecific target graphs, with notably orders-of-magnitude acceleration for\ntraining on the target graph.", + "authors": "Wentao Zhao, Qitian Wu, Chenxiao Yang, Junchi Yan", + "published": "2023-06-20", + "updated": "2023-06-20", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/1905.11691v1", + "title": "Triple2Vec: Learning Triple Embeddings from Knowledge Graphs", + "abstract": "Graph embedding techniques allow one to learn high-quality feature vectors from\ngraph structures and are useful in a variety of tasks, from node classification\nto clustering. Existing approaches have only focused on learning feature\nvectors for the nodes in a (knowledge) graph. To the best of our knowledge,\nnone of them has tackled the problem of embedding graph edges, that is,\nknowledge graph triples. The approaches that are closer to this task have\nfocused on homogeneous graphs involving only one type of edge and obtain edge\nembeddings by applying some operation (e.g., average) on the embeddings of the\nendpoint nodes. The goal of this paper is to introduce Triple2Vec, a new\ntechnique to directly embed edges in (knowledge) graphs. Triple2Vec builds upon\nthree main ingredients. The first is the notion of line graph. The line graph\nof a graph is another graph representing the adjacency between edges of the\noriginal graph. In particular, the nodes of the line graph are the edges of the\noriginal graph. We show that directly applying existing embedding techniques on\nthe nodes of the line graph to learn edge embeddings is not enough in the\ncontext of knowledge graphs. Thus, we introduce the notion of triple line\ngraph. The second is an edge weighting mechanism both for line graphs derived\nfrom knowledge graphs and homogeneous graphs. The third is a strategy based on\ngraph walks on the weighted triple line graph that can preserve proximity\nbetween nodes. Embeddings are finally generated by adopting the SkipGram model,\nwhere sentences are replaced with graph walks. 
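The walk-then-SkipGram pipeline is straightforward to sketch with off-the-shelf tools. Below is a simplified homogeneous, unweighted variant (plain nx.line_graph, uniform walks); the paper's weighted triple line graph and its edge-weighting mechanism are not reproduced here.

```python
# Line graph + random walks + SkipGram, the pipeline sketched above.
import random
import networkx as nx
from gensim.models import Word2Vec

G = nx.karate_club_graph()
L = nx.line_graph(G)                     # nodes of L are the edges of G

def walks(graph, num_walks=10, walk_len=20):
    out = []
    for _ in range(num_walks):
        for start in graph.nodes():
            walk, cur = [start], start
            for _ in range(walk_len - 1):
                nbrs = list(graph.neighbors(cur))
                if not nbrs:
                    break
                cur = random.choice(nbrs)
                walk.append(cur)
            out.append([str(n) for n in walk])   # string tokens for SkipGram
    return out

# "Sentences" are walks over edge-nodes; sg=1 selects the SkipGram objective.
model = Word2Vec(walks(L), vector_size=64, window=5, sg=1, min_count=1)
edge_vec = model.wv[str(list(L.nodes())[0])]     # embedding of one edge/triple
```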
We evaluate our approach on\ndifferent real-world (knowledge) graphs and compare it with related work.", + "authors": "Valeria Fionda, Giuseppe Pirr\u00f3", + "published": "2019-05-28", + "updated": "2019-05-28", + "primary_cat": "cs.AI", + "cats": [ + "cs.AI", + "cs.LG" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/2003.03892v2", + "title": "COPT: Coordinated Optimal Transport for Graph Sketching", + "abstract": "We introduce COPT, a novel distance metric between graphs defined via an\noptimization routine, computing a coordinated pair of optimal transport maps\nsimultaneously. This gives an unsupervised way to learn general-purpose graph\nrepresentation, applicable to both graph sketching and graph comparison. COPT\ninvolves simultaneously optimizing dual transport plans, one between the\nvertices of two graphs, and another between graph signal probability\ndistributions. We show theoretically that our method preserves important global\nstructural information on graphs, in particular spectral information, and\nanalyze connections to existing studies. Empirically, COPT outperforms state-of-the-art\nmethods in graph classification on both synthetic and real datasets.", + "authors": "Yihe Dong, Will Sawin", + "published": "2020-03-09", + "updated": "2020-06-15", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.DS", + "stat.ML" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/2108.04595v1", + "title": "Label-informed Graph Structure Learning for Node Classification", + "abstract": "Graph Neural Networks (GNNs) have achieved great success among various\ndomains. Nevertheless, most GNN methods are sensitive to the quality of graph\nstructures. To tackle this problem, some studies exploit different graph\nstructure learning strategies to refine the original graph structure. However,\nthese methods only consider feature information while ignoring available label\ninformation. In this paper, we propose a novel label-informed graph structure\nlearning framework which incorporates label information explicitly through a\nclass transition matrix. We conduct extensive experiments on seven node\nclassification benchmark datasets and the results show that our method\noutperforms or matches the state-of-the-art baselines.", + "authors": "Liping Wang, Fenyu Hu, Shu Wu, Liang Wang", + "published": "2021-08-10", + "updated": "2021-08-10", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/2008.10065v1", + "title": "Kernel-based Graph Learning from Smooth Signals: A Functional Viewpoint", + "abstract": "The problem of graph learning concerns the construction of an explicit\ntopological structure revealing the relationship between nodes representing\ndata entities, which plays an increasingly important role in the success of\nmany graph-based representations and algorithms in the field of machine\nlearning and graph signal processing. In this paper, we propose a novel graph\nlearning framework that incorporates the node-side and observation-side\ninformation, and in particular the covariates that help to explain the\ndependency structures in graph signals. 
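Frameworks of this kind build on the classical smooth-signal objective for graph learning, where edge weights trade off signal smoothness against a log-degree barrier that keeps nodes connected. The projected-gradient sketch below implements only that base objective (the step size and the alpha/beta weights are illustrative assumptions), not the paper's kernel-based functional extension.

```python
# Smooth-signal graph learning:
#   min_W  sum_ij W_ij Z_ij - alpha * sum_i log(deg_i) + beta/2 * ||W||_F^2
# with W symmetric, non-negative, zero-diagonal.
import numpy as np

def learn_graph(X, alpha=1.0, beta=0.5, steps=500, lr=0.01):
    # X: (n_nodes, n_signals); Z_ij = squared distance between node signals
    Z = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    n = X.shape[0]
    W = np.ones((n, n)) - np.eye(n)
    for _ in range(steps):
        deg = W.sum(1, keepdims=True).clip(1e-8)
        # gradient of the objective above with respect to W
        grad = Z - alpha * (1.0 / deg + 1.0 / deg.T) + beta * W
        W -= lr * grad
        W = np.clip((W + W.T) / 2, 0, None)      # project: symmetric, >= 0
        np.fill_diagonal(W, 0)
    return W

X = np.random.randn(12, 30)
W = learn_graph(X)   # larger weight = nodes with more similar signals
```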
To this end, we consider graph signals\nas functions in the reproducing kernel Hilbert space associated with a\nKronecker product kernel, and integrate functional learning with\nsmoothness-promoting graph learning to learn a graph representing the\nrelationship between nodes. The functional learning increases the robustness of\ngraph learning against missing and incomplete information in the graph signals.\nIn addition, we develop a novel graph-based regularisation method which, when\ncombined with the Kronecker product kernel, enables our model to capture both\nthe dependency explained by the graph and the dependency due to graph signals\nobserved under different but related circumstances, e.g. different points in\ntime. The latter means the graph signals are free from the i.i.d. assumptions\nrequired by the classical graph learning models. Experiments on both synthetic\nand real-world data show that our methods outperform the state-of-the-art\nmodels in learning a meaningful graph topology from graph signals, in\nparticular under heavy noise, missing values, and multiple dependency.", + "authors": "Xingyue Pu, Siu Lun Chau, Xiaowen Dong, Dino Sejdinovic", + "published": "2020-08-23", + "updated": "2020-08-23", + "primary_cat": "stat.ML", + "cats": [ + "stat.ML", + "cs.LG", + "cs.SI", + "eess.SP" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/1104.5256v1", + "title": "Learning Undirected Graphical Models with Structure Penalty", + "abstract": "In undirected graphical models, learning the graph structure and learning the\nfunctions that relate the predictive variables (features) to the responses\ngiven the structure are two topics that have been widely investigated in\nmachine learning and statistics. Learning graphical models in two stages will\nhave problems because graph structure may change after considering the\nfeatures. The main contribution of this paper is the proposed method that\nlearns the graph structure and functions on the graph at the same time. General\ngraphical models with binary outcomes conditioned on predictive variables are\nproved to be equivalent to multivariate Bernoulli model. The reparameterization\nof the potential functions in graphical model by conditional log odds ratios in\nmultivariate Bernoulli model offers advantage in the representation of the\nconditional independence structure in the model. Additionally, we impose a\nstructure penalty on groups of conditional log odds ratios to learn the graph\nstructure. These groups of functions are designed with overlaps to enforce\nhierarchical function selection. In this way, we are able to shrink higher\norder interactions to obtain a sparse graph structure. Simulation studies show\nthat the method is able to recover the graph structure. The analysis of county\ndata from Census Bureau gives interesting relations between unemployment rate,\ncrime and others discovered by the model.", + "authors": "Shilin Ding", + "published": "2011-04-27", + "updated": "2011-04-27", + "primary_cat": "cs.AI", + "cats": [ + "cs.AI", + "cs.LG" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/1910.01743v1", + "title": "Graph Generation with Variational Recurrent Neural Network", + "abstract": "Generating graph structures is a challenging problem due to the diverse\nrepresentations and complex dependencies among nodes. 
In this paper, we\nintroduce Graph Variational Recurrent Neural Network (GraphVRNN), a\nprobabilistic autoregressive model for graph generation. Through modeling the\nlatent variables of graph data, GraphVRNN can capture the joint distributions\nof graph structures and the underlying node attributes. We conduct experiments\non the proposed GraphVRNN in both graph structure learning and attribute\ngeneration tasks. The evaluation results show that the variational component\nallows our network to model complicated distributions, as well as generate\nplausible structures and node attributes.", + "authors": "Shih-Yang Su, Hossein Hajimirsadeghi, Greg Mori", + "published": "2019-10-02", + "updated": "2019-10-02", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "stat.ML" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/1801.03226v1", + "title": "Adaptive Graph Convolutional Neural Networks", + "abstract": "Graph Convolutional Neural Networks (Graph CNNs) are generalizations of\nclassical CNNs to handle graph data such as molecular data, point clouds and\nsocial networks. Current filters in graph CNNs are built for fixed and shared\ngraph structure. However, for most real data, the graph structures vary in\nboth size and connectivity. The paper proposes a generalized and flexible graph\nCNN taking data of arbitrary graph structure as input. In that way, a\ntask-driven adaptive graph is learned for each input graph during training. To\nefficiently learn the graph, a distance metric learning approach is proposed. Extensive\nexperiments on nine graph-structured datasets have demonstrated the superior\nperformance improvement on both convergence speed and predictive accuracy.", + "authors": "Ruoyu Li, Sheng Wang, Feiyun Zhu, Junzhou Huang", + "published": "2018-01-10", + "updated": "2018-01-10", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "stat.ML" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/2101.06861v3", + "title": "Discrete Graph Structure Learning for Forecasting Multiple Time Series", + "abstract": "Time series forecasting is an extensively studied subject in statistics,\neconomics, and computer science. Exploration of the correlation and causation\namong the variables in a multivariate time series shows promise in enhancing\nthe performance of a time series model. When using deep neural networks as\nforecasting models, we hypothesize that exploiting the pairwise information\namong multiple (multivariate) time series also improves their forecast. If an\nexplicit graph structure is known, graph neural networks (GNNs) have been\ndemonstrated as powerful tools to exploit the structure. In this work, we\npropose learning the structure simultaneously with the GNN if the graph is\nunknown. We cast the problem as learning a probabilistic graph model through\noptimizing the mean performance over the graph distribution. The distribution\nis parameterized by a neural network so that discrete graphs can be sampled\ndifferentiably through reparameterization. 
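The reparameterization step can be sketched in a few lines with the Gumbel-softmax trick: edge logits come from a network over node-embedding pairs, and hard 0/1 samples still pass gradients through the soft relaxation. The two-logit parameterization and the sizes below are our own assumptions.

```python
# Differentiable sampling of a discrete graph via Gumbel reparameterization.
import torch
import torch.nn.functional as F

n, d = 8, 16
z = torch.randn(n, d, requires_grad=True)            # node embeddings from a NN
pair = torch.cat([z.unsqueeze(1).expand(n, n, d),
                  z.unsqueeze(0).expand(n, n, d)], dim=-1)
edge_logits = torch.nn.Linear(2 * d, 2)(pair)        # logits for {edge, no-edge}

# hard=True returns one-hot samples in the forward pass but lets gradients
# flow through the soft relaxation in the backward pass.
sample = F.gumbel_softmax(edge_logits, tau=0.5, hard=True)
A = sample[..., 0]                                    # (n, n) 0/1 adjacency
loss = A.sum()                                        # placeholder downstream loss
loss.backward()                                       # gradients reach z
```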
Empirical evaluations show that our\nmethod is simpler, more efficient, and better performing than a recently\nproposed bilevel learning approach for graph structure learning, as well as a\nbroad array of forecasting models, either deep or non-deep learning based, and\ngraph or non-graph based.", + "authors": "Chao Shang, Jie Chen, Jinbo Bi", + "published": "2021-01-18", + "updated": "2021-04-21", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "stat.ML" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/2111.04286v1", + "title": "Deep Unsupervised Active Learning on Learnable Graphs", + "abstract": "Recently, deep learning has been successfully applied to unsupervised active\nlearning. However, current methods attempt to learn a nonlinear transformation\nvia an auto-encoder while ignoring the sample relation, leaving huge room to\ndesign more effective representation learning mechanisms for unsupervised\nactive learning. In this paper, we propose a novel deep unsupervised Active\nLearning model via Learnable Graphs, named ALLG. ALLG benefits from learning\noptimal graph structures to acquire better sample representation and select\nrepresentative samples. To make the learnt graph structure more stable and\neffective, we take the $k$-nearest neighbor graph into account as a prior, and\nlearn a relation propagation graph structure. We also incorporate shortcut\nconnections among different layers, which can alleviate the well-known\nover-smoothing problem to some extent. To the best of our knowledge, this is\nthe first attempt to leverage graph structure learning for unsupervised active\nlearning. Extensive experiments performed on six datasets demonstrate the\nefficacy of our method.", + "authors": "Handong Ma, Changsheng Li, Xinchu Shi, Ye Yuan, Guoren Wang", + "published": "2021-11-08", + "updated": "2021-11-08", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/1905.10715v1", + "title": "Graph Attention Auto-Encoders", + "abstract": "Auto-encoders have emerged as a successful framework for unsupervised\nlearning. However, conventional auto-encoders are incapable of utilizing\nexplicit relations in structured data. To take advantage of relations in\ngraph-structured data, several graph auto-encoders have recently been proposed,\nbut they neglect to reconstruct either the graph structure or node attributes.\nIn this paper, we present the graph attention auto-encoder (GATE), a neural\nnetwork architecture for unsupervised representation learning on\ngraph-structured data. Our architecture is able to reconstruct graph-structured\ninputs, including both node attributes and the graph structure, through stacked\nencoder/decoder layers equipped with self-attention mechanisms. In the encoder,\nby considering node attributes as initial node representations, each layer\ngenerates new representations of nodes by attending over their neighbors'\nrepresentations. In the decoder, we attempt to reverse the encoding process to\nreconstruct node attributes. Moreover, node representations are regularized to\nreconstruct the graph structure. Our proposed architecture does not need to\nknow the graph structure upfront, and thus it can be applied to inductive\nlearning. 
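A condensed sketch of such an attention-based graph auto-encoder: a single-head attention layer as encoder, a mirrored layer as decoder, and a loss combining attribute reconstruction with inner-product structure reconstruction. This is our simplification, not GATE's stacked multi-head architecture.

```python
# Attention-based graph auto-encoder, reduced to one encoder/decoder layer.
import torch
import torch.nn as nn
import torch.nn.functional as F

class AttnLayer(nn.Module):
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.w = nn.Linear(in_dim, out_dim, bias=False)
        self.a = nn.Linear(2 * out_dim, 1, bias=False)

    def forward(self, x, adj):
        h = self.w(x)
        n = h.size(0)
        pair = torch.cat([h.unsqueeze(1).expand(n, n, -1),
                          h.unsqueeze(0).expand(n, n, -1)], dim=-1)
        e = F.leaky_relu(self.a(pair).squeeze(-1))
        e = e.masked_fill(adj == 0, float('-inf'))     # attend only to neighbors
        return torch.softmax(e, dim=-1) @ h

enc, dec = AttnLayer(16, 8), AttnLayer(8, 16)
x, adj = torch.randn(10, 16), (torch.rand(10, 10) < 0.3).float()
adj.fill_diagonal_(1)                                  # keep self-loops
z = enc(x, adj)
x_hat = dec(z, adj)                                    # attribute reconstruction
a_hat = torch.sigmoid(z @ z.t())                       # structure reconstruction
loss = F.mse_loss(x_hat, x) + F.binary_cross_entropy(a_hat, adj)
```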
Our experiments demonstrate competitive performance on several node\nclassification benchmark datasets for transductive and inductive tasks, even\nexceeding the performance of supervised learning baselines in most cases.", + "authors": "Amin Salehi, Hasan Davulcu", + "published": "2019-05-26", + "updated": "2019-05-26", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.SI", + "stat.ML" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/2203.09205v1", + "title": "SoK: Differential Privacy on Graph-Structured Data", + "abstract": "In this work, we study the applications of differential privacy (DP) in the\ncontext of graph-structured data. We discuss the formulations of DP applicable\nto the publication of graphs and their associated statistics as well as machine\nlearning on graph-based data, including graph neural networks (GNNs). The\nformulation of DP in the context of graph-structured data is difficult, as\nindividual data points are interconnected (often non-linearly or sparsely).\nThis connectivity complicates the computation of individual privacy loss in\ndifferentially private learning. The problem is exacerbated by an absence of a\nsingle, well-established formulation of DP in graph settings. This issue\nextends to the domain of GNNs, rendering private machine learning on\ngraph-structured data a challenging task. A lack of prior systematisation work\nmotivated us to study graph-based learning from a privacy perspective. In this\nwork, we systematise different formulations of DP on graphs, discuss challenges\nand promising applications, including the GNN domain. We compare and separate\nworks into graph analysis tasks and graph learning tasks with GNNs. Finally, we\nconclude our work with a discussion of open questions and potential directions\nfor further research in this area.", + "authors": "Tamara T. Mueller, Dmitrii Usynin, Johannes C. Paetzold, Daniel Rueckert, Georgios Kaissis", + "published": "2022-03-17", + "updated": "2022-03-17", + "primary_cat": "cs.CR", + "cats": [ + "cs.CR", + "cs.AI", + "cs.LG" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/2306.08201v1", + "title": "Graph Laplacian Learning with Exponential Family Noise", + "abstract": "A common challenge in applying graph machine learning methods is that the\nunderlying graph of a system is often unknown. Although different graph\ninference methods have been proposed for continuous graph signals, inferring\nthe graph structure underlying other types of data, such as discrete counts, is\nunder-explored. In this paper, we generalize a graph signal processing (GSP)\nframework for learning a graph from smooth graph signals to the exponential\nfamily noise distribution to model various data types. We propose an\nalternating algorithm that estimates the graph Laplacian as well as the\nunobserved smooth representation from the noisy signals. 
We demonstrate in\nsynthetic and real-world data that our new algorithm outperforms competing\nLaplacian estimation methods under noise model mismatch.", + "authors": "Changhao Shi, Gal Mishne", + "published": "2023-06-14", + "updated": "2023-06-14", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "eess.SP" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/2401.15665v1", + "title": "Learnability of a hybrid quantum-classical neural network for graph-structured quantum data", + "abstract": "Classical data with graph structure always exists when dealing with many\nreal-world problems. In parallel, quantum data with graph structure also need\nto be investigated since they are always produced by structured quantum data\nsources.In this paper, we make use of a hybrid quantum-classical neural network\nwith deep residual learning (Res-HQCNN) to learn graph-structured quantum data.\nSpecifically, based on the special definition of graph-structured quantum data,\nwe first find suitable cost functions so that Res-HQCNN can learn both\nsemisupervised quantum data with or without graphs. Moreover, the training\nalgorithm of Res-HQCNN for graph-structured training data is given in detail.\nNext, in order to show the learning ability of Res-HQCNN,we perform extensive\nexperiments to show that the using of information about graph structures for\nquantum data can lead to better learning efficiency compared with the state of\nthe arts. At the same time, we also design comparable experiments to explain\nthat the using of residual learning can also bring better performance when\ntraining for deep quantum neural networks.", + "authors": "Yan-Ying Liang, Si-Le Tang, Zhe-Hao Yi, Hao-Zhen Si-Tu, Zhu-Jun Zheng", + "published": "2024-01-28", + "updated": "2024-01-28", + "primary_cat": "quant-ph", + "cats": [ + "quant-ph" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/2202.08235v3", + "title": "Data Augmentation for Deep Graph Learning: A Survey", + "abstract": "Graph neural networks, a powerful deep learning tool to model\ngraph-structured data, have demonstrated remarkable performance on numerous\ngraph learning tasks. To address the data noise and data scarcity issues in\ndeep graph learning, the research on graph data augmentation has intensified\nlately. However, conventional data augmentation methods can hardly handle\ngraph-structured data which is defined in non-Euclidean space with\nmulti-modality. In this survey, we formally formulate the problem of graph data\naugmentation and further review the representative techniques and their\napplications in different deep graph learning problems. Specifically, we first\npropose a taxonomy for graph data augmentation techniques and then provide a\nstructured review by categorizing the related work based on the augmented\ninformation modalities. Moreover, we summarize the applications of graph data\naugmentation in two representative problems in data-centric deep graph\nlearning: (1) reliable graph learning which focuses on enhancing the utility of\ninput graph as well as the model capacity via graph data augmentation; and (2)\nlow-resource graph learning which targets on enlarging the labeled training\ndata scale through graph data augmentation. For each problem, we also provide a\nhierarchical problem taxonomy and review the existing literature related to\ngraph data augmentation. 
Finally, we point out promising research directions\nand the challenges in future research.", + "authors": "Kaize Ding, Zhe Xu, Hanghang Tong, Huan Liu", + "published": "2022-02-16", + "updated": "2022-11-21", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/2312.04762v1", + "title": "The Graph Lottery Ticket Hypothesis: Finding Sparse, Informative Graph Structure", + "abstract": "Graph learning methods help utilize implicit relationships among data items,\nthereby reducing training label requirements and improving task performance.\nHowever, determining the optimal graph structure for a particular learning task\nremains a challenging research problem.\n In this work, we introduce the Graph Lottery Ticket (GLT) Hypothesis - that\nthere is an extremely sparse backbone for every graph, and that graph learning\nalgorithms attain comparable performance when trained on that subgraph as on\nthe full graph. We identify and systematically study 8 key metrics of interest\nthat directly influence the performance of graph learning algorithms.\nSubsequently, we define the notion of a \"winning ticket\" for graph structure -\nan extremely sparse subset of edges that can deliver a robust approximation of\nthe entire graph's performance. We propose a straightforward and efficient\nalgorithm for finding these GLTs in arbitrary graphs. Empirically, we observe\nthat performance of different graph learning algorithms can be matched or even\nexceeded on graphs with the average degree as low as 5.", + "authors": "Anton Tsitsulin, Bryan Perozzi", + "published": "2023-12-08", + "updated": "2023-12-08", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI", + "cs.SI" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/2212.01749v1", + "title": "Semantic Graph Neural Network with Multi-measure Learning for Semi-supervised Classification", + "abstract": "Graph Neural Networks (GNNs) have attracted increasing attention in recent\nyears and have achieved excellent performance in semi-supervised node\nclassification tasks. The success of most GNNs relies on one fundamental\nassumption, i.e., the original graph structure data is available. However,\nrecent studies have shown that GNNs are vulnerable to the complex underlying\nstructure of the graph, making it necessary to learn comprehensive and robust\ngraph structures for downstream tasks, rather than relying only on the raw\ngraph structure. In light of this, we seek to learn optimal graph structures\nfor downstream tasks and propose a novel framework for semi-supervised\nclassification. Specifically, based on the structural context information of\ngraph and node representations, we encode the complex interactions in semantics\nand generate semantic graphs to preserve the global structure. Moreover, we\ndevelop a novel multi-measure attention layer to optimize the similarity rather\nthan prescribing it a priori, so that the similarity can be adaptively\nevaluated by integrating measures. These graphs are fused and optimized\ntogether with GNN towards semi-supervised classification objective. 
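The multi-measure idea above can be sketched compactly: evaluate several similarity measures and combine them with learnable weights instead of prescribing one a priori. The two measures and the softmax mixing below are our own illustrative choices.

```python
# Adaptive fusion of multiple similarity measures into one adjacency.
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiMeasureGraph(nn.Module):
    def __init__(self):
        super().__init__()
        self.mix = nn.Parameter(torch.zeros(2))   # one logit per measure

    def forward(self, h):
        cos = F.normalize(h, dim=-1) @ F.normalize(h, dim=-1).t()
        rbf = torch.exp(-torch.cdist(h, h) ** 2)
        w = torch.softmax(self.mix, dim=0)        # adaptive measure weights
        a = w[0] * cos + w[1] * rbf
        return torch.softmax(a, dim=-1)           # row-stochastic adjacency

adj = MultiMeasureGraph()(torch.randn(10, 16))
```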
Extensive\nexperiments and ablation studies on six real-world datasets clearly demonstrate\nthe effectiveness of our proposed model and the contribution of each component.", + "authors": "Junchao Lin, Yuan Wan, Jingwen Xu, Xingchen Qi", + "published": "2022-12-04", + "updated": "2022-12-04", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/2401.13769v1", + "title": "Multiview Graph Learning with Consensus Graph", + "abstract": "Graph topology inference, i.e., learning graphs from a given set of nodal\nobservations, is a significant task in many application domains. Existing\napproaches are mostly limited to learning a single graph assuming that the\nobserved data is homogeneous. This is problematic because many modern datasets\nare heterogeneous or mixed and involve multiple related graphs, i.e., multiview\ngraphs. Recent work proposing to learn multiview graphs ensures the similarity\nof learned view graphs through pairwise regularization, where each pair of\nviews is encouraged to have similar structures. However, this approach cannot\ninfer the shared structure across views. In this work, we propose an\nalternative method based on consensus regularization, where views are ensured\nto be similar through a learned consensus graph representing the common\nstructure of the views. In particular, we propose an optimization problem,\nwhere graph data is assumed to be smooth over the multiview graph and the\ntopology of the individual views and that of the consensus graph are learned,\nsimultaneously. Our optimization problem is designed to be general in the sense\nthat different regularization functions can be used depending on what the\nshared structure across views is. Moreover, we propose two regularization\nfunctions that extend fused and group graphical lasso to consensus based\nregularization. Proposed multiview graph learning is evaluated on simulated\ndata and shown to have better performance than existing methods. It is also\nemployed to infer the functional brain connectivity networks of multiple\nsubjects from their electroencephalogram (EEG) recordings. The proposed method\nreveals the structure shared by subjects as well as the characteristics unique\nto each subject.", + "authors": "Abdullah Karaaslanli, Selin Aviyente", + "published": "2024-01-24", + "updated": "2024-01-24", + "primary_cat": "eess.SP", + "cats": [ + "eess.SP", + "cs.LG" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/2402.16374v2", + "title": "Graph Learning under Distribution Shifts: A Comprehensive Survey on Domain Adaptation, Out-of-distribution, and Continual Learning", + "abstract": "Graph learning plays a pivotal role and has gained significant attention in\nvarious application scenarios, from social network analysis to recommendation\nsystems, for its effectiveness in modeling complex data relations represented\nby graph structural data. In reality, the real-world graph data typically show\ndynamics over time, with changing node attributes and edge structure, leading\nto the severe graph data distribution shift issue. 
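The consensus-based multiview formulation above (as opposed to pairwise view regularization) admits a very small toy version: alternate between averaging the view graphs into a consensus and pulling each view graph toward that consensus. The closed-form view update below follows from a quadratic penalty and is only a heuristic sketch.

```python
# Toy consensus-regularized multiview graph learning.
import numpy as np

def multiview_consensus(Zs, lam=1.0, iters=20):
    # Zs: list of per-view squared-distance matrices between nodes
    Ws = [np.exp(-Z) for Z in Zs]                 # initial view graphs
    for _ in range(iters):
        C = np.mean(Ws, axis=0)                   # consensus = shared structure
        # view-step: argmin_W  <W, Z_v> + lam * ||W - C||_F^2   (W >= 0)
        Ws = [np.clip(C - Z / (2 * lam), 0, None) for Z in Zs]
    return Ws, C

Zs = [np.random.rand(8, 8) for _ in range(3)]
Zs = [(Z + Z.T) / 2 for Z in Zs]                  # symmetric view distances
views, consensus = multiview_consensus(Zs)
```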
This issue is compounded by\nthe diverse and complex nature of distribution shifts, which can significantly\nimpact the performance of graph learning methods in degraded generalization and\nadaptation capabilities, posing a substantial challenge to their effectiveness.\nIn this survey, we provide a comprehensive review and summary of the latest\napproaches, strategies, and insights that address distribution shifts within\nthe context of graph learning. Concretely, according to the observability of\ndistributions in the inference stage and the availability of sufficient\nsupervision information in the training stage, we categorize existing graph\nlearning methods into several essential scenarios, including graph domain\nadaptation learning, graph out-of-distribution learning, and graph continual\nlearning. For each scenario, a detailed taxonomy is proposed, with specific\ndescriptions and discussions of existing progress made in distribution-shifted\ngraph learning. Additionally, we discuss the potential applications and future\ndirections for graph learning under distribution shifts with a systematic\nanalysis of the current state in this field. The survey is positioned to\nprovide general guidance for the development of effective graph learning\nalgorithms in handling graph distribution shifts, and to stimulate future\nresearch and advancements in this area.", + "authors": "Man Wu, Xin Zheng, Qin Zhang, Xiao Shen, Xiong Luo, Xingquan Zhu, Shirui Pan", + "published": "2024-02-26", + "updated": "2024-03-07", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.SI" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/2304.13195v1", + "title": "Connector 0.5: A unified framework for graph representation learning", + "abstract": "Graph representation learning models aim to represent the graph structure and\nits features into low-dimensional vectors in a latent space, which can benefit\nvarious downstream tasks, such as node classification and link prediction. Due\nto its powerful graph data modelling capabilities, various graph embedding\nmodels and libraries have been proposed to learn embeddings and help\nresearchers ease conducting experiments. In this paper, we introduce a novel\ngraph representation framework covering various graph embedding models, ranging\nfrom shallow to state-of-the-art models, namely Connector. First, we consider\ngraph generation by constructing various types of graphs with different\nstructural relations, including homogeneous, signed, heterogeneous, and\nknowledge graphs. Second, we introduce various graph representation learning\nmodels, ranging from shallow to deep graph embedding models. Finally, we plan\nto build an efficient open-source framework that can provide deep graph\nembedding models to represent structural relations in graphs. The framework is\navailable at https://github.com/NSLab-CUK/Connector.", + "authors": "Thanh Sang Nguyen, Jooho Lee, Van Thuy Hoang, O-Joun Lee", + "published": "2023-04-25", + "updated": "2023-04-25", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.SI" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/2201.06367v1", + "title": "Towards Unsupervised Deep Graph Structure Learning", + "abstract": "In recent years, graph neural networks (GNNs) have emerged as a successful\ntool in a variety of graph-related applications. 
However, the performance of\nGNNs can be deteriorated when noisy connections occur in the original graph\nstructures; besides, the dependence on explicit structures prevents GNNs from\nbeing applied to general unstructured scenarios. To address these issues,\nrecently emerged deep graph structure learning (GSL) methods propose to jointly\noptimize the graph structure along with GNN under the supervision of a node\nclassification task. Nonetheless, these methods focus on a supervised learning\nscenario, which leads to several problems, i.e., the reliance on labels, the\nbias of edge distribution, and the limitation on application tasks. In this\npaper, we propose a more practical GSL paradigm, unsupervised graph structure\nlearning, where the learned graph topology is optimized by data itself without\nany external guidance (i.e., labels). To solve the unsupervised GSL problem, we\npropose a novel StrUcture Bootstrapping contrastive LearnIng fraMEwork (SUBLIME\nfor abbreviation) with the aid of self-supervised contrastive learning.\nSpecifically, we generate a learning target from the original data as an\n\"anchor graph\", and use a contrastive loss to maximize the agreement between\nthe anchor graph and the learned graph. To provide persistent guidance, we\ndesign a novel bootstrapping mechanism that upgrades the anchor graph with\nlearned structures during model learning. We also design a series of graph\nlearners and post-processing schemes to model the structures to learn.\nExtensive experiments on eight benchmark datasets demonstrate the significant\neffectiveness of our proposed SUBLIME and high quality of the optimized graphs.", + "authors": "Yixin Liu, Yu Zheng, Daokun Zhang, Hongxu Chen, Hao Peng, Shirui Pan", + "published": "2022-01-17", + "updated": "2022-01-17", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/2403.03659v1", + "title": "Robust Graph Structure Learning under Heterophily", + "abstract": "Graph is a fundamental mathematical structure in characterizing relations\nbetween different objects and has been widely used on various learning tasks.\nMost methods implicitly assume a given graph to be accurate and complete.\nHowever, real data is inevitably noisy and sparse, which will lead to inferior\nresults. Despite the remarkable success of recent graph representation learning\nmethods, they inherently presume that the graph is homophilic, and largely\noverlook heterophily, where most connected nodes are from different classes. In\nthis regard, we propose a novel robust graph structure learning method to\nachieve a high-quality graph from heterophilic data for downstream tasks. We\nfirst apply a high-pass filter to make each node more distinctive from its\nneighbors by encoding structure information into the node features. Then, we\nlearn a robust graph with an adaptive norm characterizing different levels of\nnoise. Afterwards, we propose a novel regularizer to further refine the graph\nstructure. 
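The high-pass filtering step that this heterophily-oriented pipeline starts from is essentially one line: subtract the normalized neighborhood average, so that what remains is how each node differs from its neighbors.

```python
# High-pass graph filter: keep the component of X that differs from neighbors.
import torch

def high_pass(X, A):
    deg = A.sum(1, keepdim=True).clamp_min(1)
    return X - (A @ X) / deg        # (I - D^{-1} A) X

X, A = torch.randn(10, 16), (torch.rand(10, 10) < 0.2).float()
X_hp = high_pass(X, A)              # distinctive features fed to graph learning
```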
Clustering and semi-supervised classification experiments on\nheterophilic graphs verify the effectiveness of our method.", + "authors": "Xuanting Xie, Zhao Kang, Wenyu Chen", + "published": "2024-03-06", + "updated": "2024-03-06", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/2104.08163v1", + "title": "Finding Motifs in Knowledge Graphs using Compression", + "abstract": "We introduce a method to find network motifs in knowledge graphs. Network\nmotifs are useful patterns or meaningful subunits of the graph that recur\nfrequently. We extend the common definition of a network motif to coincide with\na basic graph pattern. We introduce an approach, inspired by recent work for\nsimple graphs, to induce these from a given knowledge graph, and show that the\nmotifs found reflect the basic structure of the graph. Specifically, we show\nthat in random graphs, no motifs are found, and that when we insert a motif\nartificially, it can be detected. Finally, we show the results of motif\ninduction on three real-world knowledge graphs.", + "authors": "Peter Bloem", + "published": "2021-04-16", + "updated": "2021-04-16", + "primary_cat": "stat.ML", + "cats": [ + "stat.ML", + "cs.DS", + "cs.LG", + "cs.SI" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/2110.05018v2", + "title": "Time-varying Graph Learning Under Structured Temporal Priors", + "abstract": "This paper endeavors to learn time-varying graphs by using structured\ntemporal priors that assume underlying relations between arbitrary two graphs\nin the graph sequence. Different from many existing chain structure based\nmethods in which the priors like temporal homogeneity can only describe the\nvariations of two consecutive graphs, we propose a structure named\n\\emph{temporal graph} to characterize the underlying real temporal relations.\nUnder this framework, the chain structure is actually a special case of our\ntemporal graph. We further proposed Alternating Direction Method of Multipliers\n(ADMM), a distributed algorithm, to solve the induced optimization problem.\nNumerical experiments demonstrate the superiorities of our method.", + "authors": "Xiang Zhang, Qiao Wang", + "published": "2021-10-11", + "updated": "2022-02-23", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "eess.SP" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/1906.02319v1", + "title": "DEMO-Net: Degree-specific Graph Neural Networks for Node and Graph Classification", + "abstract": "Graph data widely exist in many high-impact applications. Inspired by the\nsuccess of deep learning in grid-structured data, graph neural network models\nhave been proposed to learn powerful node-level or graph-level representation.\nHowever, most of the existing graph neural networks suffer from the following\nlimitations: (1) there is limited analysis regarding the graph convolution\nproperties, such as seed-oriented, degree-aware and order-free; (2) the node's\ndegree-specific graph structure is not explicitly expressed in graph\nconvolution for distinguishing structure-aware node neighborhoods; (3) the\ntheoretical explanation regarding the graph-level pooling schemes is unclear.\n To address these problems, we propose a generic degree-specific graph neural\nnetwork named DEMO-Net motivated by Weisfeiler-Lehman graph isomorphism test\nthat recursively identifies 1-hop neighborhood structures. 
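The Weisfeiler-Lehman relabeling that motivates such degree-specific designs fits in a few lines: each round, a node's new color hashes its current color together with the sorted multiset of its neighbors' colors.

```python
# One-hop Weisfeiler-Lehman color refinement.
def wl_refine(colors, adj_list, rounds=3):
    for _ in range(rounds):
        colors = {
            v: hash((colors[v], tuple(sorted(colors[u] for u in adj_list[v]))))
            for v in adj_list
        }
    return colors

adj_list = {0: [1, 2], 1: [0], 2: [0, 3], 3: [2]}
print(wl_refine({v: 1 for v in adj_list}, adj_list))  # structural node colors
```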
In order to\nexplicitly capture the graph topology integrated with node attributes, we argue\nthat graph convolution should have three properties: seed-oriented,\ndegree-aware, order-free. To this end, we propose multi-task graph convolution\nwhere each task represents node representation learning for nodes with a\nspecific degree value, thus leading to preserving the degree-specific graph\nstructure. In particular, we design two multi-task learning methods:\ndegree-specific weight and hashing functions for graph convolution. In\naddition, we propose a novel graph-level pooling/readout scheme for learning\ngraph representation provably lying in a degree-specific Hilbert kernel space.\nThe experimental results on several node and graph classification benchmark\ndata sets demonstrate the effectiveness and efficiency of our proposed DEMO-Net\nover state-of-the-art graph neural network models.", + "authors": "Jun Wu, Jingrui He, Jiejun Xu", + "published": "2019-06-05", + "updated": "2019-06-05", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "stat.ML" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/2109.11898v1", + "title": "Graph Learning Augmented Heterogeneous Graph Neural Network for Social Recommendation", + "abstract": "Social recommendation based on social network has achieved great success in\nimproving the performance of recommendation system. Since social network\n(user-user relations) and user-item interactions are both naturally represented\nas graph-structured data, Graph Neural Networks (GNNs) have thus been widely\napplied for social recommendation. In this work, we propose an end-to-end\nheterogeneous global graph learning framework, namely Graph Learning Augmented\nHeterogeneous Graph Neural Network (GL-HGNN) for social recommendation. GL-HGNN\naims to learn a heterogeneous global graph that makes full use of user-user\nrelations, user-item interactions and item-item similarities in a unified\nperspective. To this end, we design a Graph Learner (GL) method to learn and\noptimize user-user and item-item connections separately. Moreover, we employ a\nHeterogeneous Graph Neural Network (HGNN) to capture the high-order complex\nsemantic relations from our learned heterogeneous global graph. To scale up the\ncomputation of graph learning, we further present the Anchor-based Graph\nLearner (AGL) to reduce computational complexity. Extensive experiments on four\nreal-world datasets demonstrate the effectiveness of our model.", + "authors": "Yiming Zhang, Lingfei Wu, Qi Shen, Yitong Pang, Zhihua Wei, Fangli Xu, Ethan Chang, Bo Long", + "published": "2021-09-24", + "updated": "2021-09-24", + "primary_cat": "cs.IR", + "cats": [ + "cs.IR" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/2111.06679v2", + "title": "deepstruct -- linking deep learning and graph theory", + "abstract": "deepstruct connects deep learning models and graph theory such that different\ngraph structures can be imposed on neural networks or graph structures can be\nextracted from trained neural network models. For this, deepstruct provides\ndeep neural network models with different restrictions which can be created\nbased on an initial graph. Further, tools to extract graph structures from\ntrained models are available. This step of extracting graphs can be\ncomputationally expensive even for models of just a few dozen thousand\nparameters and poses a challenging problem. 
deepstruct supports research in\npruning, neural architecture search, automated network design and structure\nanalysis of neural networks.", + "authors": "Julian Stier, Michael Granitzer", + "published": "2021-11-12", + "updated": "2021-12-05", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI", + "cs.NE", + "I.2.0; F.0" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/1911.05954v3", + "title": "Hierarchical Graph Pooling with Structure Learning", + "abstract": "Graph Neural Networks (GNNs), which generalize deep neural networks to\ngraph-structured data, have drawn considerable attention and achieved\nstate-of-the-art performance in numerous graph related tasks. However, existing\nGNN models mainly focus on designing graph convolution operations. The graph\npooling (or downsampling) operations, that play an important role in learning\nhierarchical representations, are usually overlooked. In this paper, we propose\na novel graph pooling operator, called Hierarchical Graph Pooling with\nStructure Learning (HGP-SL), which can be integrated into various graph neural\nnetwork architectures. HGP-SL incorporates graph pooling and structure learning\ninto a unified module to generate hierarchical representations of graphs. More\nspecifically, the graph pooling operation adaptively selects a subset of nodes\nto form an induced subgraph for the subsequent layers. To preserve the\nintegrity of graph's topological information, we further introduce a structure\nlearning mechanism to learn a refined graph structure for the pooled graph at\neach layer. By combining HGP-SL operator with graph neural networks, we perform\ngraph level representation learning with focus on graph classification task.\nExperimental results on six widely used benchmarks demonstrate the\neffectiveness of our proposed model.", + "authors": "Zhen Zhang, Jiajun Bu, Martin Ester, Jianfeng Zhang, Chengwei Yao, Zhi Yu, Can Wang", + "published": "2019-11-14", + "updated": "2019-12-25", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "stat.ML" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/2311.11821v1", + "title": "Cross-View Graph Consistency Learning for Invariant Graph Representations", + "abstract": "Graph representation learning is fundamental for analyzing graph-structured\ndata. Exploring invariant graph representations remains a challenge for most\nexisting graph representation learning methods. In this paper, we propose a\ncross-view graph consistency learning (CGCL) method that learns invariant graph\nrepresentations for link prediction. First, two complementary augmented views\nare derived from an incomplete graph structure through a bidirectional graph\nstructure augmentation scheme. This augmentation scheme mitigates the potential\ninformation loss that is commonly associated with various data augmentation\ntechniques involving raw graph data, such as edge perturbation, node removal,\nand attribute masking. Second, we propose a CGCL model that can learn invariant\ngraph representations. A cross-view training scheme is proposed to train the\nproposed CGCL model. This scheme attempts to maximize the consistency\ninformation between one augmented view and the graph structure reconstructed\nfrom the other augmented view. Furthermore, we offer a comprehensive\ntheoretical CGCL analysis. 
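The node-scoring pooling step at the heart of HGP-SL-style operators (described earlier on this line) can be skeletonized as: score nodes with a learnable vector, keep the top-k, gate their features, and induce the subgraph on the survivors. The structure-refinement stage is omitted here.

```python
# Top-k node-scoring graph pooling, minus structure refinement.
import torch

def topk_pool(x, adj, score_w, ratio=0.5):
    score = torch.tanh(x @ score_w)               # (n,) learnable node scores
    k = max(1, int(ratio * x.size(0)))
    idx = score.topk(k).indices
    x_pooled = x[idx] * score[idx].unsqueeze(-1)  # gate features by score
    return x_pooled, adj[idx][:, idx], idx        # induced subgraph

x, adj = torch.randn(10, 16), (torch.rand(10, 10) < 0.3).float()
w = torch.randn(16)
x_p, adj_p, kept = topk_pool(x, adj, w)
```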
This paper empirically and experimentally\ndemonstrates the effectiveness of the proposed CGCL method, achieving\ncompetitive results on graph datasets in comparisons with several\nstate-of-the-art algorithms.", + "authors": "Jie Chen, Zhiming Li, Hua Mao, Wai Lok Woo, Xi Peng", + "published": "2023-11-20", + "updated": "2023-11-20", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.CV" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/1904.10146v2", + "title": "Exploring Structure-Adaptive Graph Learning for Robust Semi-Supervised Classification", + "abstract": "Graph Convolutional Neural Networks (GCNNs) are generalizations of CNNs to\ngraph-structured data, in which convolution is guided by the graph topology. In\nmany cases where graphs are unavailable, existing methods manually construct\ngraphs or learn task-driven adaptive graphs. In this paper, we propose Graph\nLearning Neural Networks (GLNNs), which exploit the optimization of graphs (the\nadjacency matrix in particular) from both data and tasks. Leveraging on\nspectral graph theory, we propose the objective of graph learning from a\nsparsity constraint, properties of a valid adjacency matrix as well as a graph\nLaplacian regularizer via maximum a posteriori estimation. The optimization\nobjective is then integrated into the loss function of the GCNN, which adapts\nthe graph topology to not only labels of a specific task but also the input\ndata. Experimental results show that our proposed GLNN outperforms\nstate-of-the-art approaches over widely adopted social network datasets and\ncitation network datasets for semi-supervised classification.", + "authors": "Xiang Gao, Wei Hu, Zongming Guo", + "published": "2019-04-23", + "updated": "2019-09-16", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "stat.ML" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/2202.10688v2", + "title": "Graph Lifelong Learning: A Survey", + "abstract": "Graph learning is a popular approach for performing machine learning on\ngraph-structured data. It has revolutionized the machine learning ability to\nmodel graph data to address downstream tasks. Its application is wide due to\nthe availability of graph data ranging from all types of networks to\ninformation systems. Most graph learning methods assume that the graph is\nstatic and its complete structure is known during training. This limits their\napplicability since they cannot be applied to problems where the underlying\ngraph grows over time and/or new tasks emerge incrementally. Such applications\nrequire a lifelong learning approach that can learn the graph continuously and\naccommodate new information whilst retaining previously learned knowledge.\nLifelong learning methods that enable continuous learning in regular domains\nlike images and text cannot be directly applied to continuously evolving graph\ndata, due to its irregular structure. As a result, graph lifelong learning is\ngaining attention from the research community. 
This survey paper provides a\ncomprehensive overview of recent advancements in graph lifelong learning,\nincluding the categorization of existing methods, and the discussions of\npotential applications and open research problems.", + "authors": "Falih Gozi Febrinanto, Feng Xia, Kristen Moore, Chandra Thapa, Charu Aggarwal", + "published": "2022-02-22", + "updated": "2022-11-04", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI", + "68T07, 68T05", + "I.2.6" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/2006.13009v2", + "title": "Iterative Deep Graph Learning for Graph Neural Networks: Better and Robust Node Embeddings", + "abstract": "In this paper, we propose an end-to-end graph learning framework, namely\nIterative Deep Graph Learning (IDGL), for jointly and iteratively learning\ngraph structure and graph embedding. The key rationale of IDGL is to learn a\nbetter graph structure based on better node embeddings, and vice versa (i.e.,\nbetter node embeddings based on a better graph structure). Our iterative method\ndynamically stops when the learned graph structure approaches close enough to\nthe graph optimized for the downstream prediction task. In addition, we cast\nthe graph learning problem as a similarity metric learning problem and leverage\nadaptive graph regularization for controlling the quality of the learned graph.\nFinally, combining the anchor-based approximation technique, we further propose\na scalable version of IDGL, namely IDGL-Anch, which significantly reduces the\ntime and space complexity of IDGL without compromising the performance. Our\nextensive experiments on nine benchmarks show that our proposed IDGL models can\nconsistently outperform or match the state-of-the-art baselines. Furthermore,\nIDGL can be more robust to adversarial graphs and cope with both transductive\nand inductive learning.", + "authors": "Yu Chen, Lingfei Wu, Mohammed J. Zaki", + "published": "2020-06-21", + "updated": "2020-10-23", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "stat.ML" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/1912.07832v1", + "title": "Deep Iterative and Adaptive Learning for Graph Neural Networks", + "abstract": "In this paper, we propose an end-to-end graph learning framework, namely Deep\nIterative and Adaptive Learning for Graph Neural Networks (DIAL-GNN), for\njointly learning the graph structure and graph embeddings simultaneously. We\nfirst cast the graph structure learning problem as a similarity metric learning\nproblem and leverage an adapted graph regularization for controlling\nsmoothness, connectivity and sparsity of the generated graph. We further\npropose a novel iterative method for searching for a hidden graph structure\nthat augments the initial graph structure. Our iterative method dynamically\nstops when the learned graph structure approaches close enough to the optimal\ngraph. Our extensive experiments demonstrate that the proposed DIAL-GNN model\ncan consistently outperform or match state-of-the-art baselines in terms of\nboth downstream task performance and computational time. The proposed approach\ncan cope with both transductive learning and inductive learning.", + "authors": "Yu Chen, Lingfei Wu, Mohammed J. 
Zaki", + "published": "2019-12-17", + "updated": "2019-12-17", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "stat.ML" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/2212.08966v4", + "title": "Graph Learning and Its Advancements on Large Language Models: A Holistic Survey", + "abstract": "Graph learning is a prevalent domain that endeavors to learn the intricate\nrelationships among nodes and the topological structure of graphs. Over the\nyears, graph learning has transcended from graph theory to graph data mining.\nWith the advent of representation learning, it has attained remarkable\nperformance in diverse scenarios. Owing to its extensive application prospects,\ngraph learning attracts copious attention. While some researchers have\naccomplished impressive surveys on graph learning, they failed to connect\nrelated objectives, methods, and applications in a more coherent way. As a\nresult, they did not encompass current ample scenarios and challenging problems\ndue to the rapid expansion of graph learning. Particularly, large language\nmodels have recently had a disruptive effect on human life, but they also show\nrelative weakness in structured scenarios. The question of how to make these\nmodels more powerful with graph learning remains open. Our survey focuses on\nthe most recent advancements in integrating graph learning with pre-trained\nlanguage models, specifically emphasizing their application within the domain\nof large language models. Different from previous surveys on graph learning, we\nprovide a holistic review that analyzes current works from the perspective of\ngraph structure, and discusses the latest applications, trends, and challenges\nin graph learning. Specifically, we commence by proposing a taxonomy and then\nsummarize the methods employed in graph learning. We then provide a detailed\nelucidation of mainstream applications. Finally, we propose future directions.", + "authors": "Shaopeng Wei, Yu Zhao, Xingyan Chen, Qing Li, Fuzhen Zhuang, Ji Liu, Fuji Ren, Gang Kou", + "published": "2022-12-17", + "updated": "2023-11-18", + "primary_cat": "cs.AI", + "cats": [ + "cs.AI" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/1911.08776v2", + "title": "Joint Embedding Learning of Educational Knowledge Graphs", + "abstract": "As an efficient model for knowledge organization, the knowledge graph has\nbeen widely adopted in several fields, e.g., biomedicine, sociology, and\neducation. And there is a steady trend of learning embedding representations of\nknowledge graphs to facilitate knowledge graph construction and downstream\ntasks. In general, knowledge graph embedding techniques aim to learn vectorized\nrepresentations which preserve the structural information of the graph. And\nconventional embedding learning models rely on structural relationships among\nentities and relations. However, in educational knowledge graphs, structural\nrelationships are not the focus. Instead, rich literals of the graphs are more\nvaluable. In this paper, we focus on this problem and propose a novel model for\nembedding learning of educational knowledge graphs. Our model considers both\nstructural and literal information and jointly learns embedding\nrepresentations. Three experimental graphs were constructed based on an\neducational knowledge graph which has been applied in real-world teaching. We\nconducted two experiments on the three graphs and other common benchmark\ngraphs. 
The experimental results proved the effectiveness of our model and its\nsuperiority over other baselines when processing educational knowledge graphs.", + "authors": "Siyu Yao, Ruijie Wang, Shen Sun, Derui Bu, Jun Liu", + "published": "2019-11-20", + "updated": "2019-12-23", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI", + "cs.LG" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/2206.01152v2", + "title": "Causal Structure Learning: a Combinatorial Perspective", + "abstract": "In this review, we discuss approaches for learning causal structure from\ndata, also called causal discovery. In particular, we focus on approaches for\nlearning directed acyclic graphs (DAGs) and various generalizations which allow\nfor some variables to be unobserved in the available data. We devote special\nattention to two fundamental combinatorial aspects of causal structure\nlearning. First, we discuss the structure of the search space over causal\ngraphs. Second, we discuss the structure of equivalence classes over causal\ngraphs, i.e., sets of graphs which represent what can be learned from\nobservational data alone, and how these equivalence classes can be refined by\nadding interventional data.", + "authors": "Chandler Squires, Caroline Uhler", + "published": "2022-06-02", + "updated": "2022-12-19", + "primary_cat": "stat.ME", + "cats": [ + "stat.ME", + "cs.LG" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/1904.08915v2", + "title": "Decoding Molecular Graph Embeddings with Reinforcement Learning", + "abstract": "We present RL-VAE, a graph-to-graph variational autoencoder that uses\nreinforcement learning to decode molecular graphs from latent embeddings.\nMethods have been described previously for graph-to-graph autoencoding, but\nthese approaches require sophisticated decoders that increase the complexity of\ntraining and evaluation (such as requiring parallel encoders and decoders or\nnon-trivial graph matching). Here, we repurpose a simple graph generator to\nenable efficient decoding and generation of molecular graphs.", + "authors": "Steven Kearnes, Li Li, Patrick Riley", + "published": "2019-04-18", + "updated": "2019-06-04", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "stat.ML" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/1910.08057v1", + "title": "Graph Embedding VAE: A Permutation Invariant Model of Graph Structure", + "abstract": "Generative models of graph structure have applications in biology and social\nsciences. The state of the art is GraphRNN, which decomposes the graph\ngeneration process into a series of sequential steps. While effective for\nmodest sizes, it loses its permutation invariance for larger graphs. Instead,\nwe present a permutation invariant latent-variable generative model relying on\ngraph embeddings to encode structure. 
Using tools from the random graph\nliterature, our model is highly scalable to large graphs with likelihood\nevaluation and generation in $O(|V| + |E|)$.", + "authors": "Tony Duan, Juho Lee", + "published": "2019-10-17", + "updated": "2019-10-17", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "stat.ML" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/2104.09304v1", + "title": "A Tunable Model for Graph Generation Using LSTM and Conditional VAE", + "abstract": "With the development of graph applications, generative models for graphs have\nbecome increasingly important. Classically, stochastic models that generate\ngraphs with a pre-defined probability of edges and nodes have been studied.\nRecently, some models that reproduce the structural features of graphs by\nlearning from actual graph data using machine learning have been studied.\nHowever, in these conventional studies based on machine learning, structural\nfeatures of graphs can be learned from data, but it is not possible to tune\nfeatures and generate graphs with specific features. In this paper, we propose\na generative model that can tune specific features, while learning structural\nfeatures of a graph from data. With a dataset of graphs with various features\ngenerated by a stochastic model, we confirm that our model can generate a graph\nwith specific features.", + "authors": "Shohei Nakazawa, Yoshiki Sato, Kenji Nakagawa, Sho Tsugawa, Kohei Watabe", + "published": "2021-04-15", + "updated": "2021-04-15", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.NI", + "cs.SI" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/2209.07817v2", + "title": "SPGP: Structure Prototype Guided Graph Pooling", + "abstract": "While graph neural networks (GNNs) have been successful for node\nclassification tasks and link prediction tasks in graphs, learning graph-level\nrepresentations still remains a challenge. For the graph-level representation,\nit is important to learn both the representation of neighboring nodes, i.e.,\naggregation, and graph structural information. A number of graph pooling\nmethods have been developed for this goal. However, most of the existing\npooling methods utilize the k-hop neighborhood without considering explicit\nstructural information in a graph. In this paper, we propose Structure\nPrototype Guided Pooling (SPGP) that utilizes prior graph structures to\novercome this limitation. SPGP formulates graph structures as learnable\nprototype vectors and computes the affinity between nodes and prototype\nvectors. This leads to a novel node scoring scheme that prioritizes informative\nnodes while encapsulating the useful structures of the graph. Our experimental\nresults show that SPGP outperforms state-of-the-art graph pooling methods on\ngraph classification benchmark datasets in both accuracy and scalability.", + "authors": "Sangseon Lee, Dohoon Lee, Yinhua Piao, Sun Kim", + "published": "2022-09-16", + "updated": "2023-03-01", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.SI" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/2003.04508v3", + "title": "Unsupervised Graph Embedding via Adaptive Graph Learning", + "abstract": "Graph autoencoders (GAEs) are powerful tools in representation learning for\ngraph embedding. However, the performance of GAEs is highly dependent on the\nquality of the graph structure, i.e., of the adjacency matrix.
In other words,\nGAEs would perform poorly when the adjacency matrix is incomplete or\ndisturbed. In this paper, two novel unsupervised graph embedding methods,\nunsupervised graph embedding via adaptive graph learning (BAGE) and\nunsupervised graph embedding via variational adaptive graph learning (VBAGE),\nare proposed. The proposed methods expand the application range of GAEs for\ngraph embedding, i.e., to general datasets without a graph structure.\nMeanwhile, the adaptive learning mechanism can initialize the adjacency matrix\nwithout being affected by the parameters. Besides that, the latent\nrepresentations are embedded in the Laplacian graph structure to preserve the\ntopological structure of the graph in the vector space. Moreover, the adjacency\nmatrix can be self-learned for better embedding performance when the original\ngraph structure is incomplete. With adaptive learning, the proposed method is\nmuch more robust to the graph structure. Experimental studies on several\ndatasets validate our design and demonstrate that our methods outperform\nbaselines by a wide margin in node clustering, node classification, and graph\nvisualization tasks.", + "authors": "Rui Zhang, Yunxing Zhang, Xuelong Li", + "published": "2020-03-10", + "updated": "2021-03-23", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "stat.ML" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/2105.00696v1", + "title": "Graph Learning: A Survey", + "abstract": "Graphs are widely used as a popular representation of the network structure\nof connected data. Graph data can be found in a broad spectrum of application\ndomains such as social systems, ecosystems, biological networks, knowledge\ngraphs, and information systems. With the continuous penetration of artificial\nintelligence technologies, graph learning (i.e., machine learning on graphs) is\ngaining attention from both researchers and practitioners. Graph learning\nproves effective for many tasks, such as classification, link prediction, and\nmatching. Generally, graph learning methods extract relevant features of graphs\nby taking advantage of machine learning algorithms. In this survey, we present\na comprehensive overview of the state of the art in graph learning. Special\nattention is paid to four categories of existing graph learning methods,\nincluding graph signal processing, matrix factorization, random walk, and deep\nlearning. Major models and algorithms under these categories are reviewed\nrespectively. We examine graph learning applications in areas such as text,\nimages, science, knowledge graphs, and combinatorial optimization. In addition,\nwe discuss several promising research directions in this field.", + "authors": "Feng Xia, Ke Sun, Shuo Yu, Abdul Aziz, Liangtian Wan, Shirui Pan, Huan Liu", + "published": "2021-05-03", + "updated": "2021-05-03", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI", + "cs.SI", + "68T07", + "I.2.6" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/2403.04923v2", + "title": "Control-based Graph Embeddings with Data Augmentation for Contrastive Learning", + "abstract": "In this paper, we study the problem of unsupervised graph representation\nlearning by harnessing the control properties of dynamical networks defined on\ngraphs. Our approach introduces a novel framework for contrastive learning, a\nwidely prevalent technique for unsupervised representation learning.
A crucial\nstep in contrastive learning is the creation of 'augmented' graphs from the\ninput graphs. Though different from the original graphs, these augmented graphs\nretain the original graph's structural characteristics. Here, we propose a\nunique method for generating these augmented graphs by leveraging the control\nproperties of networks. The core concept revolves around perturbing the\noriginal graph to create a new one while preserving the controllability\nproperties specific to networks and graphs. Compared to the existing methods,\nwe demonstrate that this innovative approach enhances the effectiveness of\ncontrastive learning frameworks, leading to superior results regarding the\naccuracy of the classification tasks. The key innovation lies in our ability to\ndecode the network structure using these control properties, opening new\navenues for unsupervised graph representation learning.", + "authors": "Obaid Ullah Ahmad, Anwar Said, Mudassir Shabbir, Waseem Abbas, Xenofon Koutsoukos", + "published": "2024-03-07", + "updated": "2024-04-18", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.MA", + "cs.SY", + "eess.SY" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/1905.06393v1", + "title": "IPC: A Benchmark Data Set for Learning with Graph-Structured Data", + "abstract": "Benchmark data sets are an indispensable ingredient of the evaluation of\ngraph-based machine learning methods. We release a new data set, compiled from\nInternational Planning Competitions (IPC), for benchmarking graph\nclassification, regression, and related tasks. Apart from the graph\nconstruction (based on AI planning problems) that is interesting in its own\nright, the data set possesses distinctly different characteristics from\npopularly used benchmarks. The data set, named IPC, consists of two\nself-contained versions, grounded and lifted, both including graphs of large\nand skewedly distributed sizes, posing substantial challenges for the\ncomputation of graph models such as graph kernels and graph neural networks.\nThe graphs in this data set are directed and the lifted version is acyclic,\noffering the opportunity of benchmarking specialized models for directed\n(acyclic) structures. Moreover, the graph generator and the labeling are\ncomputer programmed; thus, the data set may be extended easily if a larger\nscale is desired. The data set is accessible from\n\\url{https://github.com/IBM/IPC-graph-data}.", + "authors": "Patrick Ferber, Tengfei Ma, Siyu Huo, Jie Chen, Michael Katz", + "published": "2019-05-15", + "updated": "2019-05-15", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI", + "stat.ML" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/2106.15239v1", + "title": "Generating the Graph Gestalt: Kernel-Regularized Graph Representation Learning", + "abstract": "Recent work on graph generative models has made remarkable progress towards\ngenerating increasingly realistic graphs, as measured by global graph features\nsuch as degree distribution, density, and clustering coefficients. Deep\ngenerative models have also made significant advances through better modelling\nof the local correlations in the graph topology, which have been very useful\nfor predicting unobserved graph components, such as the existence of a link or\nthe class of a node, from nearby observed graph components. A complete\nscientific understanding of graph data should address both global and local\nstructure. 
In this paper, we propose a joint model for both as complementary\nobjectives in a graph VAE framework. Global structure is captured by\nincorporating graph kernels in a probabilistic model whose loss function is\nclosely related to the maximum mean discrepancy (MMD) between the global\nstructures of the reconstructed and the input graphs. The ELBO objective\nderived from the model regularizes a standard local link reconstruction term\nwith an MMD term. Our experiments demonstrate a significant improvement in the\nrealism of the generated graph structures, typically by 1-2 orders of magnitude\nin graph structure metrics, compared to leading graph VAE and GAN models. Local\nlink reconstruction improves as well in many cases.", + "authors": "Kiarash Zahirnia, Ankita Sakhuja, Oliver Schulte, Parmis Nadaf, Ke Li, Xia Hu", + "published": "2021-06-29", + "updated": "2021-06-29", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/2210.01489v1", + "title": "Generative Models and Learning Algorithms for Core-Periphery Structured Graphs", + "abstract": "We consider core-periphery structured graphs, which are graphs with a group\nof densely and sparsely connected nodes, respectively, referred to as core and\nperiphery nodes. The so-called core score of a node is related to the\nlikelihood of it being a core node. In this paper, we focus on learning the\ncore scores of a graph from its node attributes and connectivity structure. To\nthis end, we propose two classes of probabilistic graphical models: affine and\nnonlinear. First, we describe affine generative models to model the dependence\nof node attributes on their core scores, which determine the graph structure.\nNext, we discuss nonlinear generative models in which the partial correlations\nof node attributes influence the graph structure through latent core scores. We\ndevelop algorithms for inferring the model parameters and core scores of a\ngraph when both the graph structure and node attributes are available. When\nonly the node attributes of graphs are available, we jointly learn a\ncore-periphery structured graph and its core scores. We provide results from\nnumerical experiments on several synthetic and real-world datasets to\ndemonstrate the efficacy of the developed models and algorithms.", + "authors": "Sravanthi Gurugubelli, Sundeep Prabhakar Chepuri", + "published": "2022-10-04", + "updated": "2022-10-04", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/2106.10124v1", + "title": "Graph Context Encoder: Graph Feature Inpainting for Graph Generation and Self-supervised Pretraining", + "abstract": "We propose the Graph Context Encoder (GCE), a simple but efficient approach\nfor graph representation learning based on graph feature masking and\nreconstruction.\n GCE models are trained to efficiently reconstruct input graphs similarly to a\ngraph autoencoder where node and edge labels are masked. In particular, our\nmodel is also allowed to change graph structures by masking and reconstructing\ngraphs augmented by random pseudo-edges.\n We show that GCE can be used for novel graph generation, with applications\nfor molecule generation.
Used as a pretraining method, we also show that GCE\nimproves baseline performances in supervised classification tasks tested on\nmultiple standard benchmark graph datasets.", + "authors": "Oriel Frigo, R\u00e9my Brossard, David Dehaene", + "published": "2021-06-18", + "updated": "2021-06-18", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "68T07" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/1811.09971v1", + "title": "Graph Learning-Convolutional Networks", + "abstract": "Recently, Graph Convolutional Neural Networks (graph CNNs) have been widely\nused for graph data representation and semi-supervised learning tasks. However,\nexisting graph CNNs generally use a fixed graph which may not be optimal for\nsemi-supervised learning tasks. In this paper, we propose a novel Graph\nLearning-Convolutional Network (GLCN) for graph data representation and\nsemi-supervised learning. The aim of GLCN is to learn an optimal graph\nstructure that best serves graph CNNs for semi-supervised learning by\nintegrating both graph learning and graph convolution together in a unified\nnetwork architecture. The main advantage is that in GLCN, both given labels and\nthe estimated labels are incorporated and thus can provide useful 'weakly'\nsupervised information to refine (or learn) the graph construction and also to\nfacilitate the graph convolution operation in GLCN for unknown label\nestimation. Experimental results on seven benchmarks demonstrate that GLCN\nsignificantly outperforms state-of-the-art traditional fixed-structure-based\ngraph CNNs.", + "authors": "Bo Jiang, Ziyan Zhang, Doudou Lin, Jin Tang", + "published": "2018-11-25", + "updated": "2018-11-25", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/2306.07699v2", + "title": "Time-aware Graph Structure Learning via Sequence Prediction on Temporal Graphs", + "abstract": "Temporal Graph Learning, which aims to model the time-evolving nature of\ngraphs, has gained increasing attention and achieved remarkable performance\nrecently. However, in reality, graph structures are often incomplete and noisy,\nwhich hinders temporal graph networks (TGNs) from learning informative\nrepresentations. Graph contrastive learning uses data augmentation to generate\nplausible variations of existing data and learn robust representations.\nHowever, rule-based augmentation approaches may be suboptimal as they lack\nlearnability and fail to leverage rich information from downstream tasks. To\naddress these issues, we propose a Time-aware Graph Structure Learning (TGSL)\napproach via sequence prediction on temporal graphs, which learns better graph\nstructures for downstream tasks by adding potential temporal edges. In\nparticular, it predicts a time-aware context embedding based on previously\nobserved interactions and uses Gumbel-Top-K to select the closest candidate\nedges to this context embedding. Additionally, several candidate sampling\nstrategies are proposed to ensure both efficiency and diversity. Furthermore,\nwe jointly learn the graph structure and TGNs in an end-to-end manner and\nperform inference on the refined graph. Extensive experiments on temporal link\nprediction benchmarks demonstrate that TGSL yields significant gains for\npopular TGNs such as TGAT and GraphMixer, and it outperforms other contrastive\nlearning methods on temporal graphs.
We release the code at\nhttps://github.com/ViktorAxelsen/TGSL.", + "authors": "Haozhen Zhang, Xueting Han, Xi Xiao, Jing Bai", + "published": "2023-06-13", + "updated": "2023-08-15", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/1901.07439v1", + "title": "Multiple Graph Adversarial Learning", + "abstract": "Recently, Graph Convolutional Networks (GCNs) have been widely studied for\ngraph-structured data representation and learning. However, in many real\napplications, data often come with multiple graphs, and it is non-trivial to\nadapt GCNs to deal with data representation with multiple graph structures. One\nmain challenge for multi-graph representation is how to exploit both the\nstructure information of each individual graph and the correlation information\nacross multiple graphs simultaneously. In this paper, we propose a novel\nMultiple Graph Adversarial Learning (MGAL) framework for multi-graph\nrepresentation and learning. MGAL aims to learn an optimal structure-invariant\nand consistent representation for multiple graphs in a common subspace via a\nnovel adversarial learning framework, which thus incorporates both intra-graph\nstructure information and inter-graph correlation information simultaneously.\nBased on MGAL, we then provide a unified network for the semi-supervised\nlearning task. Promising experimental results demonstrate the effectiveness of\nthe MGAL model.", + "authors": "Bo Jiang, Ziyan Zhang, Jin Tang, Bin Luo", + "published": "2019-01-22", + "updated": "2019-01-22", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/1904.09792v1", + "title": "A Unified Framework for Structured Graph Learning via Spectral Constraints", + "abstract": "Graph learning from data represents a canonical problem that has received\nsubstantial attention in the literature. However, insufficient work has been\ndone in incorporating prior structural knowledge into the learning of\nunderlying graphical models from data. Learning a graph with a specific\nstructure is essential for interpretability and identification of the\nrelationships among data. Useful structured graphs include the multi-component\ngraph, bipartite graph, connected graph, sparse graph, and regular graph. In\ngeneral, structured graph learning is an NP-hard combinatorial problem;\ntherefore, designing a general tractable optimization method is extremely\nchallenging. In this paper, we introduce a unified graph learning framework\nthat integrates Gaussian graphical models and spectral graph theory. To impose\na particular structure on a graph, we first show how to formulate the\ncombinatorial constraints as an analytical property of the graph matrix. Then\nwe develop an optimization framework that leverages graph learning with\nspecific structures via spectral constraints on graph matrices. The proposed\nalgorithms are provably convergent, computationally efficient, and practically\namenable for numerous graph-based tasks. Extensive numerical experiments with\nboth synthetic and real data sets illustrate the effectiveness of the proposed\nalgorithms. The code for all the simulations is made available as an open\nsource repository.", + "authors": "Sandeep Kumar, Jiaxi Ying, Jos\u00e9 Vin\u00edcius de M.
Cardoso, Daniel Palomar", + "published": "2019-04-22", + "updated": "2019-04-22", + "primary_cat": "stat.ML", + "cats": [ + "stat.ML", + "cs.LG", + "cs.SI", + "math.OC" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/2306.02664v2", + "title": "Structure-free Graph Condensation: From Large-scale Graphs to Condensed Graph-free Data", + "abstract": "Graph condensation, which reduces the size of a large-scale graph by\nsynthesizing a small-scale condensed graph as its substitution, has immediate\nbenefits for various graph learning tasks. However, existing graph condensation\nmethods rely on the joint optimization of nodes and structures in the condensed\ngraph, and overlook critical issues in effectiveness and generalization\nability. In this paper, we advocate a new Structure-Free Graph Condensation\nparadigm, named SFGC, to distill a large-scale graph into a small-scale graph\nnode set without explicit graph structures, i.e., graph-free data. Our idea is\nto implicitly encode topology structure information into the node attributes in\nthe synthesized graph-free data, whose topology is reduced to an identity\nmatrix. Specifically, SFGC contains two collaborative components: (1) a\ntraining trajectory meta-matching scheme for effectively synthesizing\nsmall-scale graph-free data; (2) a graph neural feature score metric for\ndynamically evaluating the quality of the condensed data. Through training\ntrajectory meta-matching, SFGC aligns the long-term GNN learning behaviors\nbetween the large-scale graph and the condensed small-scale graph-free data,\nensuring comprehensive and compact transfer of informative knowledge to the\ngraph-free data. Afterward, the underlying condensed graph-free data is\ndynamically evaluated with the graph neural feature score, which is a\nclosed-form metric for ensuring the excellent expressiveness of the condensed\ngraph-free data. Extensive experiments verify the superiority of SFGC across\ndifferent condensation ratios.", + "authors": "Xin Zheng, Miao Zhang, Chunyang Chen, Quoc Viet Hung Nguyen, Xingquan Zhu, Shirui Pan", + "published": "2023-06-05", + "updated": "2023-10-23", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.SI" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/2007.16002v1", + "title": "Graph Convolutional Networks using Heat Kernel for Semi-supervised Learning", + "abstract": "Graph convolutional networks have achieved remarkable success in\nsemi-supervised learning on graph-structured data. The key to graph-based\nsemi-supervised learning is capturing the smoothness of labels or features over\nnodes exerted by the graph structure. Previous methods, both spectral and\nspatial, are devoted to defining graph convolution as a weighted average over\nneighboring nodes, and then learn graph convolution kernels to leverage the\nsmoothness to improve the performance of graph-based semi-supervised learning.\nOne open challenge is how to determine an appropriate neighborhood that\nreflects the relevant smoothness information manifested in the graph structure.\nIn this paper, we propose GraphHeat, leveraging the heat kernel to enhance\nlow-frequency filters and enforce smoothness in the signal variation on the\ngraph. GraphHeat leverages the local structure of the target node under heat\ndiffusion to determine its neighboring nodes flexibly, without the constraint\nof order suffered by previous methods.
GraphHeat achieves state-of-the-art results in the task of\ngraph-based semi-supervised classification across three benchmark datasets:\nCora, Citeseer and Pubmed.", + "authors": "Bingbing Xu, Huawei Shen, Qi Cao, Keting Cen, Xueqi Cheng", + "published": "2020-07-27", + "updated": "2020-07-27", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.SI" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/2006.14002v1", + "title": "Bi-Level Graph Neural Networks for Drug-Drug Interaction Prediction", + "abstract": "We introduce Bi-GNN for modeling biological link prediction tasks such as\ndrug-drug interaction (DDI) and protein-protein interaction (PPI). Taking\ndrug-drug interaction as an example, existing methods using machine learning\neither only utilize the link structure between drugs without using the graph\nrepresentation of each drug molecule, or only leverage the individual drug\ncompound structures without using graph structure for the higher-level DDI\ngraph. The key idea of our method is to fundamentally view the data as a\nbi-level graph, where the highest level graph represents the interaction\nbetween biological entities (interaction graph), and each biological entity\nitself is further expanded to its intrinsic graph representation\n(representation graphs), where the graph is either flat like a drug compound or\nhierarchical like a protein with amino acid level graph, secondary structure,\ntertiary structure, etc. Our model not only allows the usage of information\nfrom both the high-level interaction graph and the low-level representation\ngraphs, but also offers a baseline for future research opportunities to address\nthe bi-level nature of the data.", + "authors": "Yunsheng Bai, Ken Gu, Yizhou Sun, Wei Wang", + "published": "2020-06-11", + "updated": "2020-06-11", + "primary_cat": "cs.CE", + "cats": [ + "cs.CE", + "cs.LG", + "stat.ML" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/1611.05181v3", + "title": "Graph Learning from Data under Structural and Laplacian Constraints", + "abstract": "Graphs are fundamental mathematical structures used in various fields to\nrepresent data, signals and processes. In this paper, we propose a novel\nframework for learning/estimating graphs from data. The proposed framework\nincludes (i) formulation of various graph learning problems, (ii) their\nprobabilistic interpretations and (iii) associated algorithms. Specifically,\ngraph learning problems are posed as estimation of graph Laplacian matrices\nfrom some observed data under given structural constraints (e.g., graph\nconnectivity and sparsity level). From a probabilistic perspective, the\nproblems of interest correspond to maximum a posteriori (MAP) parameter\nestimation of Gaussian-Markov random field (GMRF) models, whose precision\n(inverse covariance) is a graph Laplacian matrix. For the proposed graph\nlearning problems, specialized algorithms are developed by incorporating the\ngraph Laplacian and structural constraints. The experimental results\ndemonstrate that the proposed algorithms outperform the current\nstate-of-the-art methods in terms of accuracy and computational efficiency.", + "authors": "Hilmi E. 
Egilmez, Eduardo Pavez, Antonio Ortega", + "published": "2016-11-16", + "updated": "2017-07-06", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "stat.ML" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/2005.14403v1", + "title": "Deep graph learning for semi-supervised classification", + "abstract": "Graph learning (GL) can dynamically capture the distribution structure (graph\nstructure) of data based on graph convolutional networks (GCN), and the\nlearning quality of the graph structure directly influences GCN for\nsemi-supervised classification. Existing methods mostly combine the\ncomputational layer and the related losses into GCN for exploring the global\ngraph (measuring graph structure from all data samples) or the local graph\n(measuring graph structure from local data samples). The global graph\nemphasizes the whole-structure description of inter-class data, while the local\ngraph tends toward the neighborhood-structure representation of intra-class\ndata. However, it is difficult to simultaneously balance these graphs during\nthe learning process for semi-supervised classification because of their\ninterdependence. To simulate the interdependence, deep graph learning (DGL) is\nproposed to find a better graph representation for semi-supervised\nclassification. DGL can not only learn the global structure through metric\ncomputation updates in the previous layer, but also mine the local structure\nthrough local weight reassignment in the next layer. Furthermore, DGL can fuse\nthe different structures by dynamically encoding their interdependence, and\ndeeply mine the relationship of the different structures through hierarchical\nprogressive learning to improve the performance of semi-supervised\nclassification. Experiments demonstrate that DGL outperforms state-of-the-art\nmethods on three benchmark datasets (Citeseer, Cora, and Pubmed) for citation\nnetworks and two benchmark datasets (MNIST and Cifar10) for images.", + "authors": "Guangfeng Lin, Xiaobing Kang, Kaiyang Liao, Fan Zhao, Yajun Chen", + "published": "2020-05-29", + "updated": "2020-05-29", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/1904.11883v2", + "title": "Robust Graph Data Learning via Latent Graph Convolutional Representation", + "abstract": "Graph Convolutional Representation (GCR) has achieved impressive performance\nfor graph data representation. However, existing GCR is generally defined on a\nfixed input graph, which may restrict the representation capacity and also be\nvulnerable to structural attacks and noise. To address this issue, we propose a\nnovel Latent Graph Convolutional Representation (LatGCR) for robust graph data\nrepresentation and learning. Our LatGCR is derived by reformulating graph\nconvolutional representation from the aspect of graph neighborhood\nreconstruction.
Given an input graph $\\textbf{A}$, LatGCR aims to\ngenerate a flexible latent graph $\\widetilde{\\textbf{A}}$ for graph\nconvolutional representation which clearly enhances the representation\ncapacity and also performs robustly w.r.t. graph structural attacks and noise.\nMoreover, LatGCR is implemented in a self-supervised manner and thus provides a\nbasic block for both supervised and unsupervised graph learning tasks.\nExperiments on several datasets demonstrate the effectiveness and robustness of\nLatGCR.", + "authors": "Bo Jiang, Ziyan Zhang, Bin Luo", + "published": "2019-04-26", + "updated": "2021-10-13", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV", + "cs.LG" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/1910.11390v2", + "title": "Deep Learning for Molecular Graphs with Tiered Graph Autoencoders and Graph Prediction", + "abstract": "Tiered graph autoencoders provide the architecture and mechanisms for\nlearning tiered latent representations and latent spaces for molecular graphs\nthat explicitly represent and utilize groups (e.g., functional groups). This\nenables the utilization and exploration of tiered molecular latent spaces,\neither individually - the node (atom) tier, the group tier, or the graph\n(molecule) tier - or jointly, as well as navigation across the tiers. In this\npaper, we discuss the use of tiered graph autoencoders together with graph\nprediction for molecular graphs. We show features of molecular graphs used, and\ngroups in molecular graphs identified for some sample molecules. We briefly\nreview graph prediction and the QM9 dataset for background information, and\ndiscuss the use of tiered graph embeddings for graph prediction, particularly\nweighted group pooling. We find that functional groups and ring groups\neffectively capture and represent the chemical essence of molecular graphs\n(structures). Further, tiered graph autoencoders and graph prediction together\nprovide effective, efficient and interpretable deep learning for molecular\ngraphs, with the former providing unsupervised, transferable learning and the\nlatter providing supervised, task-optimized learning.", + "authors": "Daniel T. Chang", + "published": "2019-10-24", + "updated": "2021-07-01", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "q-bio.BM" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/1611.07308v1", + "title": "Variational Graph Auto-Encoders", + "abstract": "We introduce the variational graph auto-encoder (VGAE), a framework for\nunsupervised learning on graph-structured data based on the variational\nauto-encoder (VAE). This model makes use of latent variables and is capable of\nlearning interpretable latent representations for undirected graphs. We\ndemonstrate this model using a graph convolutional network (GCN) encoder and a\nsimple inner product decoder. Our model achieves competitive results on a link\nprediction task in citation networks. In contrast to most existing models for\nunsupervised learning on graph-structured data and link prediction, our model\ncan naturally incorporate node features, which significantly improves\npredictive performance on a number of benchmark datasets.", + "authors": "Thomas N.
Kipf, Max Welling", + "published": "2016-11-21", + "updated": "2016-11-21", + "primary_cat": "stat.ML", + "cats": [ + "stat.ML", + "cs.LG" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/2204.01855v2", + "title": "A Survey on Graph Representation Learning Methods", + "abstract": "Graph representation learning has been a very active research area in recent\nyears. The goal of graph representation learning is to generate graph\nrepresentation vectors that capture the structure and features of large graphs\naccurately. This is especially important because the quality of the graph\nrepresentation vectors will affect the performance of these vectors in\ndownstream tasks such as node classification, link prediction and anomaly\ndetection. Many techniques have been proposed for generating effective graph\nrepresentation vectors. Two of the most prevalent categories of graph\nrepresentation learning are graph embedding methods without using graph neural\nnets (GNN), which we denote as non-GNN based graph embedding methods, and graph\nneural nets (GNN) based methods. Non-GNN graph embedding methods are based on\ntechniques such as random walks, temporal point processes and neural network\nlearning methods. GNN-based methods, on the other hand, are the application of\ndeep learning on graph data. In this survey, we provide an overview of these\ntwo categories and cover the current state-of-the-art methods for both static\nand dynamic graphs. Finally, we explore some open and ongoing research\ndirections for future work.", + "authors": "Shima Khoshraftar, Aijun An", + "published": "2022-04-04", + "updated": "2022-06-15", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.SI" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/2210.06126v1", + "title": "Regularized Graph Structure Learning with Semantic Knowledge for Multi-variates Time-Series Forecasting", + "abstract": "Multivariate time-series forecasting is a critical task for many\napplications, and graph time-series networks are widely studied due to their\ncapability to capture spatial-temporal correlations simultaneously. However,\nmost existing works focus more on learning with the explicit prior graph\nstructure, while ignoring potential information from the implicit graph\nstructure, yielding incomplete structure modeling. Some recent works attempt to\nlearn the intrinsic or implicit graph structure directly, while lacking a way\nto combine explicit prior structure with implicit structure together. In this\npaper, we propose the Regularized Graph Structure Learning (RGSL) model to\nincorporate both explicit prior structure and implicit structure together, and\nlearn the forecasting deep networks along with the graph structure. RGSL\nconsists of two innovative modules. First, we derive an implicit dense\nsimilarity matrix through node embedding, and learn the sparse graph structure\nusing the Regularized Graph Generation (RGG) based on the Gumbel Softmax trick.\nSecond, we propose a Laplacian Matrix Mixed-up Module (LM3) to fuse the\nexplicit graph and implicit graph together. We conduct experiments on three\nreal-world datasets. Results show that the proposed RGSL model outperforms\nexisting graph forecasting algorithms by a notable margin, while learning\nmeaningful graph structure simultaneously.
Our code and models are made\npublicly available at https://github.com/alipay/RGSL.git.", + "authors": "Hongyuan Yu, Ting Li, Weichen Yu, Jianguo Li, Yan Huang, Liang Wang, Alex Liu", + "published": "2022-10-12", + "updated": "2022-10-12", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.CV" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/2302.02909v1", + "title": "Spectral Augmentations for Graph Contrastive Learning", + "abstract": "Contrastive learning has emerged as a premier method for learning\nrepresentations with or without supervision. Recent studies have shown its\nutility in graph representation learning for pre-training. Despite successes,\nthe understanding of how to design effective graph augmentations that can\ncapture structural properties common to many different types of downstream\ngraphs remains incomplete. We propose a set of well-motivated graph\ntransformation operations derived via graph spectral analysis to provide a bank\nof candidates when constructing augmentations for a graph contrastive\nobjective, enabling contrastive learning to capture useful structural\nrepresentation from pre-training graph datasets. We first present a spectral\ngraph cropping augmentation that involves filtering nodes by applying\nthresholds to the eigenvalues of the leading Laplacian eigenvectors. Our second\nnovel augmentation reorders the graph frequency components in a structural\nLaplacian-derived position graph embedding. Further, we introduce a method that\nleads to improved views of local subgraphs by performing alignment via global\nrandom walk embeddings. Our experimental results indicate consistent\nimprovements in out-of-domain graph data transfer compared to state-of-the-art\ngraph contrastive learning methods, shedding light on how to design a graph\nlearner that is able to learn structural properties common to diverse graph\ntypes.", + "authors": "Amur Ghose, Yingxue Zhang, Jianye Hao, Mark Coates", + "published": "2023-02-06", + "updated": "2023-02-06", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/2401.16176v1", + "title": "A Survey on Structure-Preserving Graph Transformers", + "abstract": "The transformer architecture has shown remarkable success in various domains,\nsuch as natural language processing and computer vision. When it comes to graph\nlearning, transformers are required not only to capture the interactions\nbetween pairs of nodes but also to preserve graph structures connoting the\nunderlying relations and proximity between them, showing the expressive power\nto capture different graph structures. Accordingly, various\nstructure-preserving graph transformers have been proposed and widely used for\nvarious tasks, such as graph-level tasks in bioinformatics and\nchemoinformatics. However, strategies related to graph structure preservation\nhave not been well organized and systematized in the literature. In this paper,\nwe provide a comprehensive overview of structure-preserving graph transformers\nand generalize these methods from the perspective of their design objective.\nFirst, we divide strategies into four main groups: node feature modulation,\ncontext node sampling, graph rewriting, and transformer architecture\nimprovements. We then further divide the strategies according to the coverage\nand goals of graph structure preservation. 
Furthermore, we also discuss\nchallenges and future directions for graph transformer models to preserve the\ngraph structure and understand the nature of graphs.", + "authors": "Van Thuy Hoang, O-Joun Lee", + "published": "2024-01-29", + "updated": "2024-01-29", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI" + ], + "category": "Graph AND Structure AND Learning" + }, + { + "url": "http://arxiv.org/abs/2209.00793v2", + "title": "Structure-Preserving Graph Representation Learning", + "abstract": "Though graph representation learning (GRL) has made significant progress, it\nis still a challenge to extract and embed the rich topological structure and\nfeature information in an adequate way. Most existing methods focus on local\nstructure and fail to fully incorporate the global topological structure. To\nthis end, we propose a novel Structure-Preserving Graph Representation Learning\n(SPGRL) method, to fully capture the structure information of graphs.\nSpecifically, to reduce the uncertainty and misinformation of the original\ngraph, we construct a feature graph as a complementary view via the k-Nearest\nNeighbor method. The feature graph can be used for node-level contrast to\ncapture local relations. Besides, we retain the global topological structure\ninformation by maximizing the mutual information (MI) of the whole graph and\nfeature embeddings, which is theoretically reduced to exchanging the feature\nembeddings of the feature and the original graphs to reconstruct themselves.\nExtensive experiments show that our method achieves superior performance on the\nsemi-supervised node classification task and excellent robustness under noise\nperturbation on graph structure or node features.", + "authors": "Ruiyi Fang, Liangjian Wen, Zhao Kang, Jianzhuang Liu", + "published": "2022-09-02", + "updated": "2022-12-08", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI", + "cs.CV", + "cs.SI" + ], + "category": "Graph AND Structure AND Learning" + } +] \ No newline at end of file