diff --git "a/abs_29K_G/test_abstract_long_2405.03188v1.json" "b/abs_29K_G/test_abstract_long_2405.03188v1.json" new file mode 100644--- /dev/null +++ "b/abs_29K_G/test_abstract_long_2405.03188v1.json" @@ -0,0 +1,458 @@ +{ + "url": "http://arxiv.org/abs/2405.03188v1", + "title": "Hyperbolic Geometric Latent Diffusion Model for Graph Generation", + "abstract": "Diffusion models have made significant contributions to computer vision,\nsparking a growing interest in the community recently regarding the application\nof them to graph generation. Existing discrete graph diffusion models exhibit\nheightened computational complexity and diminished training efficiency. A\npreferable and natural way is to directly diffuse the graph within the latent\nspace. However, due to the non-Euclidean structure of graphs is not isotropic\nin the latent space, the existing latent diffusion models effectively make it\ndifficult to capture and preserve the topological information of graphs. To\naddress the above challenges, we propose a novel geometrically latent diffusion\nframework HypDiff. Specifically, we first establish a geometrically latent\nspace with interpretability measures based on hyperbolic geometry, to define\nanisotropic latent diffusion processes for graphs. Then, we propose a\ngeometrically latent diffusion process that is constrained by both radial and\nangular geometric properties, thereby ensuring the preservation of the original\ntopological properties in the generative graphs. Extensive experimental results\ndemonstrate the superior effectiveness of HypDiff for graph generation with\nvarious topologies.", + "authors": "Xingcheng Fu, Yisen Gao, Yuecen Wei, Qingyun Sun, Hao Peng, Jianxin Li, Xianxian Li", + "published": "2024-05-06", + "updated": "2024-05-06", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG" + ], + "label": "Original Paper", + "paper_cat": "Diffusion AND Model", + "gt": "Diffusion models have made significant contributions to computer vision,\nsparking a growing interest in the community recently regarding the application\nof them to graph generation. Existing discrete graph diffusion models exhibit\nheightened computational complexity and diminished training efficiency. A\npreferable and natural way is to directly diffuse the graph within the latent\nspace. However, due to the non-Euclidean structure of graphs is not isotropic\nin the latent space, the existing latent diffusion models effectively make it\ndifficult to capture and preserve the topological information of graphs. To\naddress the above challenges, we propose a novel geometrically latent diffusion\nframework HypDiff. Specifically, we first establish a geometrically latent\nspace with interpretability measures based on hyperbolic geometry, to define\nanisotropic latent diffusion processes for graphs. Then, we propose a\ngeometrically latent diffusion process that is constrained by both radial and\nangular geometric properties, thereby ensuring the preservation of the original\ntopological properties in the generative graphs. 
Extensive experimental results\ndemonstrate the superior effectiveness of HypDiff for graph generation with\nvarious topologies.", + "main_content": "1. Introduction. Graphs in the real world contain a variety of important topologies, and these topological properties often reflect physical laws and growth patterns, such as rich-clubs, small-worlds, hierarchies, fractal structures, etc. Traditional random graph models based on graph theory, such as Erdős-Rényi (Erdős et al., 1960), Watts-Strogatz (Watts & Strogatz, 1998) and Barabási-Albert (Barabási & Albert, 1999), etc., require hand-crafted heuristics tailored to a single type of topology and lack the flexibility to model various complex graphs. Therefore, many deep learning models have been developed for graph generation, such as the Variational Graph Auto-Encoder (VGAE) (Kipf & Welling, 2016), Generative Adversarial Networks (GAN) (Goodfellow et al., 2014), and other technologies. Recently, the Denoising Diffusion Probabilistic Model (DDPM) (Ho et al., 2020) has demonstrated great power and potential in image generation, attracting huge attention from the graph learning community. For graph generation, one straightforward idea is to design discretized diffusion methods for the graph structural information (Vignac et al., 2022; Jo et al., 2022; Luo et al., 2022), and the other is to develop advanced graph encoders that preserve structural information throughout the diffusion process within a continuous latent space (Xu et al., 2021; 2023). However, because of the irregular and non-Euclidean structure of graph data, realizing diffusion models for graphs still faces two main limitations: (1) High computational complexity. The core of graph generation is handling the discreteness, sparsity and other topological properties of the non-Euclidean structure. Since the Gaussian noise perturbation used in the vanilla diffusion model is not suitable for discrete data, discrete graph diffusion models usually have high time and space complexity due to structural sparsity. Moreover, the discrete graph diffusion model relies on a continuous Gaussian noise process to create fully connected, noisy graphs (Zhang et al., 2023; Ingraham et al., 2019), which loses structural information and underlying topological properties. (2) Anisotropy of non-Euclidean structure. Different from regular structured data (e.g. a pixel matrix or grid structure), the “irregular” non-Euclidean structure embeddings of graph data are anisotropic in continuous latent space (Elhag et al., 2022). As shown in Figure 1(b), the node embeddings of a graph in Euclidean space exhibit significant anisotropy in several specific directions. Figure 1. Visualization of node embeddings by singular value decomposition (SVD): (a) original structure of the NCAA football graph, where colors indicate labels (teams); (b) node embeddings in 2D Euclidean space with planar projection; (c) node embeddings in 2D hyperbolic space with Poincaré disk projection. 
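The anisotropy visualized in Figure 1 can also be checked numerically. The sketch below, which assumes a node-embedding matrix Z and is not taken from the paper's code, measures how much of the embedding variance falls along the leading singular direction:

```python
import numpy as np

def anisotropy_ratio(Z: np.ndarray) -> float:
    """Share of total embedding variance carried by the leading singular direction.

    Z: (num_nodes, dim) node-embedding matrix (hypothetical input).
    Values near 1 indicate strong anisotropy; values near 1/dim indicate isotropy.
    """
    Zc = Z - Z.mean(axis=0, keepdims=True)      # center the embeddings
    s = np.linalg.svd(Zc, compute_uv=False)     # singular values, as used for Figure 1
    energy = s ** 2
    return float(energy[0] / energy.sum())

# For an isotropic 2D Gaussian cloud this is close to 0.5; embeddings that
# concentrate along a few directions give values much closer to 1.
print(anisotropy_ratio(np.random.randn(1000, 2)))
```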
Recently, some studies (Yang et al., 2023) have shown that isotropic diffusion of the node embeddings of a graph in the latent space treats this anisotropic structural information as noise, so the useful structural information is lost in the denoising process. Hyperbolic geometric space is widely recognized as an ideal continuous manifold for representing discrete tree-like or hierarchical structures (Cannon et al., 1997; Ungar, 1999; Krioukov et al., 2010; Sun et al., 2024b), and has been widely studied and applied to various graph learning tasks (Sun et al., 2021; Tifrea et al., 2019; Nickel & Kiela, 2017; Sala et al., 2018; Chami et al., 2019; Sun et al., 2024a). Inspired by these studies, we find that hyperbolic geometry has great potential to address non-Euclidean structural anisotropy in graph latent diffusion processes. As shown in Figure 1(c), in hyperbolic space the distribution of node embeddings tends to be isotropic globally, while anisotropy is preserved locally. In addition, hyperbolic geometry unifies the angular and radial measures of polar coordinates, as shown in Figure 2(a), and can provide geometric measures with physical semantics and interpretability (Papadopoulos et al., 2012). It is exciting that hyperbolic geometry can provide a geometric latent space with graph geometric priors, able to help deal with the anisotropy of graph structures through special geometric measures. Based on the above insights, we aim to establish a suitable geometric latent space based on hyperbolic geometry and to design an efficient diffusion process for the non-Euclidean structure in topology-preserving graph generation tasks. However, there are two primary challenges: (1) the additivity of continuous Gaussian distributions is undefined in hyperbolic latent space; (2) devising an effective anisotropic diffusion process for non-Euclidean structures. Contributions. To address these challenges, we propose a novel Hyperbolic Geometric Latent Diffusion (HypDiff) model for graph generation. For the additivity issue of the continuous Gaussian distribution in hyperbolic space, we propose an approximate diffusion process based on radial measures. An angular constraint is then utilized to shape the anisotropic noise so that more structural priors are preserved, guiding the diffusion model toward finer details of the graph structure. Our contributions are summarized as follows: • We are the first to study the anisotropy of non-Euclidean structures for graph latent diffusion models from a geometric perspective, and propose a novel hyperbolic geometric latent diffusion model, HypDiff. • We propose a novel geometric latent diffusion process based on radial and angular geometric constraints in hyperbolic space, which addresses the additivity of continuous Gaussian distributions and the issue of anisotropic noise addition in hyperbolic space. 
• Extensive experiments on synthetic and real-world datasets demonstrate a significant and consistent improvement of HypDiff and provide insightful analysis for graph generation. 2. Related Works 2.1. Graph Generative Diffusion Model. Different from models that learn to generate samples in a single pass, such as GAN (Goodfellow et al., 2014; Wang et al., 2018; Dai et al., 2018), VGAE (Yu et al., 2018; Xu & Durrett, 2018; Grattarola et al., 2019) or GraphRNN (You et al., 2018), the diffusion model (Ho et al., 2020) gradually converts the sample into pure noise through a parameterized Markov chain process. Some recent works (Xu et al., 2021; 2023) employ advanced graph encoders to effectively preserve the inherent structural information throughout the diffusion process within a continuous latent space. Gaussian noise is added to the distributions of nodes and edges of the graph (Vignac et al., 2022), and Gaussian processes are performed on the neighborhood or spectral domain of the graph (Vignac et al., 2022; Jo et al., 2022; Luo et al., 2022). However, existing discrete diffusion models face many challenges in capturing the non-Euclidean structure and preserving underlying topological properties. Figure 2. (a) Geometric interpretation of hyperbolic geometry, which unifies the radius and angle measurements of polar coordinates and interprets them as popularity and similarity, respectively; (b) hyperbolic latent diffusion with isotropic/anisotropic noise. 2.2. Hyperbolic Graph Learning. Hyperbolic geometric space was introduced into complex networks early on to represent small-world and scale-free complex networks (Krioukov et al., 2010; Papadopoulos et al., 2012). With its high capacity and hierarchical-structure-preserving ability, hyperbolic geometry is also used in NLP (Nickel & Kiela, 2017; Tifrea et al., 2019) to learn word representations with hypernym structure. Hyperbolic space has also recently been introduced into graph neural networks (Liu et al., 2019; Chami et al., 2019; Sun et al., 2021; 2022). P-VAE (Mathieu et al., 2019) and Hyper-ANE (Liu et al., 2018) extend VAE and GAN to hyperbolic versions to learn hierarchical representations. To sum up, hyperbolic geometry provides an intuitive and efficient way of understanding the underlying structural properties of a graph. 3. Methodology. In this section, we present our Hyperbolic geometric latent Diffusion model (HypDiff) for addressing the two main challenges. The key insight is that we leverage hyperbolic geometry to abstract the implicit hierarchy of nodes in the graph and introduce two geometric constraints to preserve important topological properties, such as scale-freeness, navigability, and modularity. Considering the successful experiences of graph latent diffusion models (Xu et al., 2023), we adopt a two-stage training strategy in practice. We first train the hyperbolic autoencoder to obtain pre-trained node embeddings, and then train the hyperbolic geometric latent diffusion process. The architecture is shown in Figure 3. 3.1. Hyperbolic Geometric Autoencoding. We first need to embed the graph data G = (X, A) into a low-dimensional hyperbolic geometric space to improve the graph latent diffusion process. Hyperbolic Encoder and Decoder. 
We consider a hyperbolic variant of the auto-encoder, consisting of a hyperbolic geometric encoder and a Fermi-Dirac decoder. The hyperbolic geometric encoder maps the graph G = (X, A) into hyperbolic geometric space to obtain a suitable hyperbolic representation, and the Fermi-Dirac decoder maps the hyperbolic representation back into the graph data domain. The hyperbolic manifold H^d and the tangent space T_x can be mapped to each other via the exponential and logarithmic maps (Ganea et al., 2018b). We can therefore leverage Multi-Layer Perceptrons (MLPs) or Graph Neural Networks (GNNs) together with exponential and logarithmic mappings as hyperbolic geometric encoders. In this paper, we use Hyperbolic Graph Convolutional Neural Networks (HGCN) (Chami et al., 2019) as the hyperbolic geometric encoder. Optimization of Autoencoding. Because additivity of the Gaussian distribution fails in hyperbolic space, we cannot directly use the Riemannian normal or wrapped normal distribution. Instead of following hyperbolic diffusion embedding (Lin et al.), which uses the product space of multiple manifolds, we propose a new diffusion process in hyperbolic space, described in detail in Section 3.2. Following P-VAE (Mathieu et al., 2019), for computational efficiency, the Gaussian distribution in hyperbolic space is approximated by the Gaussian distribution on the tangent plane T_\mu (see Figure 3 for an illustration of the HypDiff architecture). The optimization of hyperbolic geometric autoencoding is as follows: $\mathcal{L}_{\mathrm{HAE}} = -\mathbb{E}_{q_\phi(z_x|x)}\,\mathrm{logmap}^c_o\, p_\xi(x|z_x)$, (1) where $\mathrm{logmap}^c_o$ is the logarithmic map at the north pole (origin) o of hyperbolic space, used to simplify the computation. 3.2. Hyperbolic Geometric Latent Diffusion Process. Unlike the linear addition in Euclidean space, hyperbolic space relies on Möbius addition, posing challenges for diffusion over a hyperbolic manifold. Furthermore, isotropic noise leads to a rapid reduction of the signal-to-noise ratio, making it difficult to preserve topological information; detailed results and analysis are given in Appendix B. In light of these issues, we propose a novel diffusion process to address both problems. Hyperbolic Anisotropic Diffusion. The anisotropy of the graph in the latent space contains an inductive bias of the graph structure, and the most critical challenge is how to determine the dominant directions of the anisotropic features. Additionally, on hyperbolic manifolds, the wrapped normal distribution satisfies additivity in neither the isotropic nor the anisotropic setup: $\eta \nsim \eta_1 \oplus_c \eta_2$, where $\eta \sim \mathcal{N}^c_{\mathbb{H}}\!\left(0, (\sigma_1^2 + \sigma_2^2)I\right)$, $\eta_1 \sim \mathcal{N}^c_{\mathbb{H}}(0, \sigma_1^2 I)$, $\eta_2 \sim \mathcal{N}^c_{\mathbb{H}}(0, \sigma_2^2 I)$, (2) where c is the hyperbolic curvature and $\mathcal{N}^c_{\mathbb{H}}$ is the wrapped Gaussian distribution. We propose a hyperbolic anisotropic diffusion framework to solve both challenges; the detailed proof can be found in Appendix C.1. The core idea is to select the main diffusion direction (i.e., angle) based on similarity clustering of nodes, which is equivalent to dividing the hyperbolic latent space into multiple sectors. Then we project the nodes of each cluster onto its center's tangent plane for diffusion. 
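For reference, the origin-anchored exponential and logarithmic maps on the Poincaré ball used by HGCN-style encoders take the standard closed form below (Ganea et al., 2018b). This is a generic sketch of those two maps under curvature c > 0, not the authors' released implementation:

```python
import torch

def expmap0(v: torch.Tensor, c: float = 1.0, eps: float = 1e-7) -> torch.Tensor:
    # Exponential map at the origin o: tangent vector v in T_o -> point on the ball.
    sqrt_c = c ** 0.5
    norm = v.norm(dim=-1, keepdim=True).clamp_min(eps)
    return torch.tanh(sqrt_c * norm) * v / (sqrt_c * norm)

def logmap0(y: torch.Tensor, c: float = 1.0, eps: float = 1e-7) -> torch.Tensor:
    # Logarithmic map at the origin o: point y on the ball -> tangent vector in T_o,
    # the operation written as logmap^c_o in Eq. (1) and Eq. (3).
    sqrt_c = c ** 0.5
    norm = y.norm(dim=-1, keepdim=True).clamp_min(eps)
    scaled = (sqrt_c * norm).clamp(max=1.0 - 1e-5)
    return torch.atanh(scaled) * y / (sqrt_c * norm)
```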
Let h denote the embedding of the graph in hyperbolic space and h_i its i-th node. Let h_i belong to the k-th cluster with clustering center \mu_k; then the node h_i is represented in the tangent space of \mu_k as $x_{0i}$: $x_{0i} = \mathrm{logmap}^c_{\mu_k}(h_i)$, (3) where \mu_k is the center of cluster k obtained by the Hyperbolic-Kmeans (h-kmeans) algorithm (Hajri et al., 2019). Note that the clusters can be obtained by any similarity-based clustering algorithm in the pre-processing stage. Moreover, the hyperbolic clustering parameter k has the following property: Theorem 3.1. Given the hyperbolic clustering parameter k ∈ [1, n], which represents the number of sectors dividing the hyperbolic space (disk), the hyperbolic anisotropic diffusion is equivalent to directional diffusion in the Klein model $\mathbb{K}^n_c$ with multiple curvatures $c_{i\in|k|}$, which is an approximate projection onto the set of tangent planes $\mathcal{T}_{o_{i\in\{|k|\}}}$ of the centroids $o_{i\in\{|k|\}}$. The proof is in Appendix C.2. This property elegantly establishes the relationship between our approximation algorithm and the Klein model with multiple curvatures. Our algorithm exhibits specific behaviors based on the value of k: it allows for a more flexible and nuanced representation of anisotropy grounded in the underlying hyperbolic geometry, enabling improved accuracy and efficiency in the subsequent noise addition and training. Geometric Constraints. Hyperbolic geometry can naturally and geometrically describe the connection pattern of nodes during graph growth (Papadopoulos et al., 2012). As shown in Figure 2(a), the popularity of a node can be abstracted by its radial coordinate and its similarity can be expressed by angular coordinate distances in hyperbolic space; more details can be found in Appendix D. Our goal is to model diffusion with geometric radial growth, where this radial growth is consistent with hyperbolic properties. Considering that we need to maintain this hyperbolic growth tendency in the tangent plane, we use the following formula: $x_t = \sqrt{\alpha_t}\,x_0 + \sqrt{1-\alpha_t}\,\epsilon + \delta \tanh[\sqrt{c}\,\lambda^c_o t/T_0]\,x_0$, (4) where $\epsilon$ is Gaussian noise, $\delta$ is the radial popularity coefficient that controls the diffusion strength of each node in hyperbolic space, $T_0$ is a constant that controls the radial growth rate, and $\lambda^c_x = \frac{2}{1+c\|x\|^2}$. In the following, we restrict the discussion to a single cluster's tangent plane. The main reason why general diffusion models do not perform well on graphs is the rapid decline of the signal-to-noise ratio. Inspired by the directional diffusion model (Yang et al., 2023), we designate the direction of the geodesic between each cluster's center and the north pole o as the target diffusion direction while imposing constraints on the forward diffusion process. Specifically, the angular similarity constraint for each node i can be obtained by: $z = \mathrm{sgn}(\mathrm{logmap}^c_o(h_{\mu_i})) \ast \epsilon, \quad \epsilon \sim \mathcal{N}(0, I)$, (5) where z represents the angle-constrained noise, $\epsilon$ is the Gaussian noise, and $h_{\mu_i}$ is the clustering center corresponding to the i-th node. Combining the radial and angular constraints, our geometric diffusion process can be described as: $x_t = \sqrt{\alpha_t}\,x_0 + \sqrt{1-\alpha_t}\,z + \delta \tanh[\sqrt{c}\,\lambda^c_o t/T_0]\,x_0$. (6) 
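A minimal sketch of one draw from the forward process in Eq. (6), on a cluster's tangent plane, is given below. It assumes tangent-space embeddings x0 from Eq. (3) and log-mapped cluster centres h_mu; the variable names and schedule handling are illustrative rather than the authors' code:

```python
import math
import torch

def forward_step(x0, h_mu, t, alpha_bar, c=1.0, delta=0.5, T0=1000):
    """Sample x_t per Eq. (6): angle-constrained noise plus radial growth.

    x0        : (n, d) tangent-space node embeddings (Eq. (3)).
    h_mu      : (n, d) log-mapped cluster centres, one row per node (Eq. (5)).
    t         : integer diffusion step.
    alpha_bar : 1-D tensor of cumulative noise-schedule products.
    delta, T0 : radial popularity coefficient and growth-rate constant.
    """
    eps = torch.randn_like(x0)
    z = torch.sign(h_mu) * eps                       # Eq. (5): angular constraint
    lam_o = 2.0                                      # lambda^c_o = 2/(1 + c*0) at the origin
    radial = delta * math.tanh(math.sqrt(c) * lam_o * t / T0) * x0
    a = alpha_bar[t]
    return torch.sqrt(a) * x0 + torch.sqrt(1.0 - a) * z + radial
```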
Theorem 3.2. Let $x_t$ denote node x at the t-th step of the forward diffusion process in Eq. (6). As t → ∞, the low-dimensional latent representation $x_t$ of node x satisfies: $\lim_{t\to\infty} x_t \sim \mathcal{N}_f(\delta x_0, I)$, (7) where $\mathcal{N}_f$ is an approximate folded normal distribution. More details and the proof can be found in Appendix E. Figure 2(b) illustrates examples of the diffusion process with/without geometric constraints in hyperbolic space. We can observe that, when isotropic noise is added in the hyperbolic latent diffusion process, the final diffusion result is completely random noise. In contrast, the hyperbolic latent diffusion process with geometric constraints significantly preserves the anisotropy of the graph. In other words, after graph diffusion the result still preserves the important inductive bias of the underlying graph rather than becoming completely random noise, which directly affects the performance and generation quality of the denoising process. Training and generation. We then follow the standard denoising process (Ho et al., 2020; Yang et al., 2023) and train a denoising network to simulate reverse diffusion. We use a UNet-based DDM denoising network trained to predict $x_0$, as follows: $\mathcal{L}_{\mathrm{HDM}} = \mathbb{E}\,\|f_\theta(X_t, A, t) - X_0\|^2$. (8) Algorithm 1 (Training of HypDiff). Input: graph G = {X, A}; number of training epochs E. Parameter: initialization of θ. Output: predicted raw embedding $\hat{x}_H$. Steps: encode the nodes into hyperbolic space $x_H$ via Eq. (1); compute k clusters by h-Kmeans; project the embeddings onto each $\mathcal{T}_{o_{i\in\{|k|\}}}$; then for e = 1 to E: obtain the t-step embeddings $x_{Ht}$ via Eq. (6), predict the raw embeddings $\hat{x}_H$, compute the loss L = $\mathcal{L}_{\mathrm{HDM}}$ via Eq. (8), and update θ ← θ − η∇θ. Note that the loss function of our geometric diffusion model remains consistent with DDPM (Ho et al., 2020) based on Theorem 3.2; the proof is given in Appendix F. Regarding generation, we propose an efficient sampling method based on Theorem 3.1. Furthermore, we demonstrate that it is possible to sample once in the same tangent space instead of sampling in different cluster-center tangent spaces, to improve efficiency. For the reverse process, we adopt a denoising procedure applicable to generalized diffusion models (Yang et al., 2023), in which a recovery operator and a noise-addition operator are abstracted for use with various diffusion methods. All the specifics of each stage of the diffusion process, along with the theoretical derivations, are documented in Appendix F. Similar to other hyperbolic learning models (Krioukov et al., 2010; Chami et al., 2019; Ganea et al., 2018a), we utilize the Fermi-Dirac decoder (Krioukov et al., 2010; Nickel & Kiela, 2017) to compute the connection probability. The diffusion and reverse processes are summarized in Algorithm 1 and Algorithm 2. Complexity Analysis. Let G = (X, E) be one graph in the graph set G_s, where X is the n-dimensional node feature vector and E is the m × m adjacency matrix of the graph; s is the number of graphs in G_s. Time complexity: the time complexity of hyperbolic graph encoding is O((1(t) + k)md), and the forward diffusion process costs O(md). 
The training of the denoising network is essentially the same as in other diffusion models and does not require additional computing time beyond O(md)·1(t). Overall, the total time complexity of the diffusion process is O(1(t)·2md) + O((k + 2)md) in one epoch. Space complexity: in our approach, since we embed the graphs in hyperbolic space, each graph is represented as an m × d-dimensional vector in hyperbolic space, which means that our diffusion scale is O(smd). For a more detailed complexity analysis, please refer to Appendix G. 4. Experiment. In this section, we conduct comprehensive experiments to demonstrate the effectiveness and adaptability of HypDiff (the code is available at https://github.com/RingBDStack/HypDiff) on various datasets and tasks. We first present the experimental settings and then showcase the results. 4.1. Datasets. We evaluate the capabilities of HypDiff on various downstream tasks while conducting experiments on synthetic and real-world datasets. In addition, we construct and apply node-level and graph-level datasets for node classification and graph generation tasks. Statistics of the real-world datasets can be found in Table H in Appendix H. We elaborate on the details as follows. Synthetic Datasets. We first use two well-accepted graph-theoretical models, the Stochastic Block Model (SBM) and Barabási-Albert (BA), to generate node-level synthetic datasets with 1000 nodes for node classification. (1) SBM portrays five equally partitioned communities with intra-community edge-creation probability p = 0.21 and inter-community probability q = 0.025. (2) BA is grown by attaching new nodes, each with between 1 and 10 random edges. We then employ four generic datasets with different node scales |V| for the graph-level generation task. (3) Community contains 500 two-community small graphs with 12 ≤ |V| ≤ 20. Each graph is generated by the Erdős-Rényi model with edge-creation probability p = 0.3, and 0.05|V| inter-community edges are added with uniform probability. (4) Ego comprises 1050 3-hop ego-networks extracted from the PubMed network with |V| ≤ 20. Nodes indicate documents and edges represent their citation relationships. (5) Barabási-Albert (G) is a graph-level dataset generated by the Barabási-Albert model (aka. BA-G, to distinguish it from the node-level BA) with 500 graphs in which the degree of each node is greater than four. (6) Grid describes 100 standard 2D grid graphs in which each node is connected to its four nearest neighbors. Real-world Datasets. We also carry out experiments on several real-world datasets. For the node classification task, we utilize (1) two citation networks of academic papers, Cora and Citeseer, where nodes represent documents and edges represent citation links, and (2) the Polblogs dataset of political blogs, which is the largest dataset we use. For the graph generation task, we exploit four datasets from different fields. (3) MUTAG is a molecular dataset in which each graph denotes a nitro-compound molecule. (4) IMDB-B is a social network symbolizing the co-starring of actors. (5) PROTEINS is a protein dataset in which nodes represent amino acids and two nodes are connected by an edge if they are less than 6 Angstroms apart. (6) COLLAB is a scientific collaboration dataset reflecting the collaboration of scientists. 4.2. Experimental Setup. Baselines. 
To evaluate the proposed HypDiff , we compare it with well-known or state-of-the-art graph learning methods which include: (1) Euclidean graph representation methods: VGAE (Kipf & Welling, 2016) designs a variational autoencoder for graph representation learning. ANE (Dai et al., 2018) trains a discriminator to align the embedding distribution with a predetermined fixed prior. GraphGAN (Wang et al., 2018) learns the sampling distribution for negative node sampling from the graph. (2) Hyperbolic graph representation learning: P-VAE (Mathieu et al., 2019) is a variational autoencoder utilizing the Poincar\u00b4 e ball model within hyperbolic geometric space. Hype-ANE (Liu et al., 2018) is a hyperbolic adversarial network embedding model that extends ANE into hyperbolic geometric space. (3) Deep graph generative models: VGAE (Kipf & Welling, 2016) can be used for graph generation tasks by treating each graph as a batch size. GraphRNN (You et al., 2018) is a deep auto-regressive generative model that focuses on graph representations under different node orderings. (4) Graph diffusion generative models: GDSS (Jo et al., 2022) simultaneously diffuses node features and adjacency matrices to learn their scoring functions within the neural network correspondingly. DiGress (Vignac et al., 2022) is a discrete denoising diffusion model that progressively recovers graph properties by manipulating edges. GraphGDP (Huang et al., 2022) is a position-enhanced graph score-based diffusion model for graph generation. EDGE (Chen et al., 2023) is a discrete diffusion process for large graph generation. Settings. A fair parameter setting for the baselines is the default value in the original papers and for the training on new datasets make appropriate adjustments. For HypDiff, the encoder is 2-layer HGCN with 256 representation dimensions, the edge dropping probability to 2%, the learning rate to 0.001, and hyperbolic curvature c = 1. Additionally, the diffusion processing set diffusion strength \u03b4 as 0.5, and the number of 6 latent layers in denoising is 64, 128, 256, 128, 256, 128. We use Adam as an optimizer and set L2 regularization strength as 1e-5. For the metric, we use the F1 scores of the node classification task and the maximum mean discrepancy scores of Degree, Cluster, and Spectre and the F1 score of precision-recall and density-coverage (F1 pr and F1 dc) to evaluate graph generation results. The richer experimental results under the other indicators are shown in Appendix J. All experiments adopt the implementations from the PyTorch Geometric Library and Deep 6 \fHyperbolic Geometric Latent Diffusion Model for Graph Generation Table 1. Summary of node classification Micro-F1 and Macro-F1 scores (%) based on the average of five runs on synthetic and real-world datasets. (Result: average score \u00b1 standard deviation (rank); Bold: best; Underline: runner-up.) Method Synthetic Datasets Real-world Datasets Avg. R. 
SBM BA Cora Citeseer Polblogs Mi-F1 Ma-F1 Mi-F1 Ma-F1 Mi-F1 Ma-F1 Mi-F1 Ma-F1 Mi-F1 Ma-F1 VGAE 20.5\u00b12.1 15.4\u00b11.1 37.4\u00b11.7 15.9\u00b12.3 79.7\u00b10.4 78.1\u00b10.2 63.8\u00b11.4 55.5\u00b11.3 79.4\u00b10.8 79.4\u00b10.8 4.6 ANE 39.9\u00b11.1 33.9\u00b11.8 46.0\u00b13.0 19.3\u00b12.7 69.3\u00b10.1 66.4\u00b10.1 50.2\u00b10.1 49.5\u00b10.6 80.8\u00b10.1 80.7\u00b10.1 4.3 GraphGAN 38.6\u00b10.5 38.9\u00b10.3 43.6\u00b10.6 24.6\u00b10.5 71.7\u00b10.1 69.8\u00b10.1 49.8\u00b11.0 45.7\u00b10.1 77.5\u00b10.6 76.9\u00b10.4 4.8 P-VAE 57.9\u00b11.3 53.0\u00b11.5 38.4\u00b11.4 20.0\u00b10.3 79.6\u00b12.2 77.5\u00b12.5 67.9\u00b11.7 60.2\u00b11.9 79.4\u00b10.1 79.4\u00b10.1 3.2 Hype-ANE 18.8\u00b10.3 11.9\u00b10.1 56.9\u00b12.4 31.6\u00b11.2 80.7\u00b10.1 79.2\u00b10.3 64.4\u00b10.3 58.7\u00b10.0 83.6\u00b10.4 83.6\u00b10.4 3.0 HypDiff 70.5\u00b10.1 69.4\u00b10.1 58.3\u00b10.1 40.0\u00b10.1 82.4\u00b10.1 81.2\u00b10.1 67.8\u00b10.2 60.4\u00b10.3 85.7\u00b10.1 85.4\u00b10.1 1.1 Table 2. Generation results about the MMD distance between the original and generated graphs. (Result: scores (rank) and average rank;Bold: best; Underline: runner-up.) Method Synthetic Datasets Real-world Datasets Community BA-G MUTAG PROTRINS Degree Cluster Spectre Degree Cluster Spectre Degree Cluster Spectre Degree Cluster Spectre VGAE 0.365 0.025 0.507 0.775 1.214 0.398 0.255 2.000 0.744 0.705 0.979 0.700 GraphRNN 0.002 0.027 0.004 0.122 0.262 0.007 0.537 0.013 0.476 0.009 0.071 0.017 GDSS 0.094 0.031 0.052 0.978 0.468 0.917 0.074 0.021 0.003 1.463 0.168 0.013 DiGress 0.226 0.158 0.194 0.654 1.171 0.268 0.100 0.351 0.082 0.108 0.062 0.079 GraphGDP 0.046 0.016 0.042 0.698 0.188 0.053 0.127 0.057 0.050 0.103 0.240 0.088 EDGE 0.021 0.013 0.040 0.282 0.010 0.090 0.024 0.597 0.468 0.033 0.523 0.024 HypDiff 0.002 0.010 0.028 0.216 0.021 0.004 0.048 0.001 0.040 0.133 0.004 0.012 Graph Library. The reported results are the average scores and standard deviations over 5 runs. All models were trained and tested on a single Nvidia A100 40GB GPU. 4.3. Performance Evaluation We show the F1 scores of the node classification task in Table 1 and the statistics of MMD distance and F1 scores between the original and generated graph in the graph generation task in Table 2 and Table C.4. A higher score reported in F1 indicates a more accurate prediction of the node and fidelity of the generated graph. At the same time, a smaller MMD distance suggests better generative capabilities of the model from the perspective of graph topological properties. Node classification. HypDiff demonstrates superior performance which outperforms nearly all baseline models, achieving the highest ranking and revealing excellent generalization. This implies that HypDiff can preserve essential properties within complex structures, enabling better distinctive and utility of the dependencies between nodes across hierarchical levels in hyperbolic space. Graph Generation. Successively, we focused on validating the graph generation capability of HypDiff. Using the finer-grained metrics, we consistently observed our approach\u2019s outstanding performance. More results are shown in Table C.3. We are further concerned with the fidelity and diversity of the generated results which yielded conclusions consistent with the previous and are reported in Table C.4. Specifically, HypDiff depicts superior overall performance compared to the state-of-the-art model autoregressive model GraphRNN and discrete diffusion method DiGress. 
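For context on the MMD metrics reported in Table 2, a degree-distribution MMD between a set of reference graphs and a set of generated graphs can be computed roughly as follows. This is a generic sketch using an RBF kernel over degree histograms; the exact kernels and normalizations in the paper's evaluation pipeline may differ:

```python
import numpy as np
import networkx as nx

def degree_hist(G: nx.Graph, max_deg: int = 50) -> np.ndarray:
    # Normalized degree histogram, with degrees above max_deg clipped.
    h = np.zeros(max_deg + 1)
    for _, d in G.degree():
        h[min(d, max_deg)] += 1
    return h / max(h.sum(), 1)

def degree_mmd(ref_graphs, gen_graphs, sigma: float = 1.0) -> float:
    # Squared MMD with a Gaussian (RBF) kernel between the two histogram sets.
    X = np.stack([degree_hist(g) for g in ref_graphs])
    Y = np.stack([degree_hist(g) for g in gen_graphs])

    def k(a, b):
        d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / (2 * sigma ** 2))

    return float(k(X, X).mean() + k(Y, Y).mean() - 2 * k(X, Y).mean())
```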
Furthermore, our model can effectively capture local structure through the similarity constraint and achieves competitive performance on highly connected graph data (Community). 4.4. Analysis of HypDiff. In this subsection, we present experimental results to intuitively convey our findings and initiate a series of discussions and analyses. Ablation Study. This study highlights the roles of the radial popularity and angular similarity diffusion constraints of HypDiff. We conducted experiments on three real-world datasets to validate node classification performance, removing the radial popularity (HypDiff (w/o P)), angular similarity (HypDiff (w/o S)), and total geometric prior (HypDiff (w/o PS)) components as variant models. We show the results in Figure 4. The radial popularity clearly facilitates the hyperbolic diffusion process, showcasing the advantage of hyperbolic geometry in capturing the underlying graph topology. Furthermore, the angular similarity also significantly preserves the local structure of the graph, compensating for the limitations of hyperbolic space in capturing local connectivity patterns. In summary, the hyperbolic geometric prior plays a crucial role in capturing non-Euclidean structures. Figure 4. Ablation study results. Figure 5. Sensitivity analysis of geometric constraints. Figure 6. Efficiency analysis on IMDB-B for graph generation (average time per 1000 timesteps vs. GPU memory: HypDiff 2519 MB / 11.2 s; GDSS 3501 MB / 12.5 s; DiGress 5800 MB / 12.1 s; GraphGDP 5902 MB / 13.6 s; EDGE 6205 MB / 11.8 s). Sensitivity Analysis of Geometric Constraints. To investigate the impact of both the number of clusters k and the geometric prior coefficient δ on model performance, we conducted sensitivity analyses on real-world and synthetic graph datasets, respectively. The number of clusters k can be understood as the strength of the angular constraint; the results on three datasets with different structures are shown in Fig. 5 (left). Specifically, Cora has a real-world connected structure, SBM has a complex community structure, and Fractal has self-similarity and hierarchy properties. It can be observed that k has different sensitivities on differently structured datasets, indicating that different graph structures admit different approximation accuracies for anisotropy capture. Correspondingly, the geometric prior coefficient δ can be understood as the strength of the radial constraint; the results on three real-world datasets are shown in Fig. 5 (right). The stronger the constraint, the smaller the diffusion step in the radial direction of hyperbolic space. It can be observed that datasets with a tree-like structure require weaker radial constraints, while highly connected graphs require stronger ones. For the experimental setup and a more detailed analysis of the results, please refer to Appendix I. Diffusion Efficiency Analysis. We report the training time for HypDiff and other graph diffusion baselines with the same configurations on IMDB-B. We conduct experiments with the hardware and software configurations listed in Section 4.2, and report results comprehensively in terms of both the time and memory costs of the diffusion process. 
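Diffusion time and GPU memory figures of the kind plotted in Figure 6 can be collected with standard PyTorch utilities, for example along these lines (a generic measurement sketch, not the authors' benchmarking script; run_diffusion is an assumed callable wrapping one diffusion pass):

```python
import time
import torch

def profile_diffusion(run_diffusion) -> tuple[float, float]:
    """Wall-clock time (s) and peak GPU memory (MB) for one diffusion pass."""
    torch.cuda.reset_peak_memory_stats()
    torch.cuda.synchronize()
    start = time.perf_counter()
    run_diffusion()                    # e.g. 1000 forward/reverse timesteps
    torch.cuda.synchronize()
    elapsed = time.perf_counter() - start
    peak_mb = torch.cuda.max_memory_allocated() / (1024 ** 2)
    return elapsed, peak_mb
```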
The result is shown in Figure 6, our HypDiff comprehensively outperforms other baselines in diffusion time and GPU memory cost. Compared with the discrete graph diffusion model, our model directly diffuses each node of the graph with structure-preserving based on the latent diffusion model, so the space complexity is much lower than that of the direct diffusion of discrete and sparse structural information(e.g. adjacent/Laplace matrix). The performance of each dataset is in the Appendix K, Visualization. We compare the contributions of two diffusion generation models, HypDiff and GDSS, to graph generation tasks by visualizing networks generated by five well-accepted graph theoretical models. We discuss and show the visualization as Figure C.3 in the Appendix J.3. 5.", + "additional_graph_info": { + "graph": [ + [ + "Xingcheng Fu", + "Qingyun Sun" + ], + [ + "Xingcheng Fu", + "Hao Peng" + ], + [ + "Xingcheng Fu", + "Jianxin Li" + ], + [ + "Xingcheng Fu", + "Yuecen Wei" + ], + [ + "Qingyun Sun", + "Jianxin Li" + ], + [ + "Qingyun Sun", + "Hao Peng" + ], + [ + "Hao Peng", + "Xiang Huang" + ], + [ + "Hao Peng", + "Zhifeng Hao" + ], + [ + "Hao Peng", + "Angsheng Li" + ], + [ + "Jianxin Li", + "Hao Peng" + ], + [ + "Jianxin Li", + "Lifang He" + ], + [ + "Yuecen Wei", + "Qingyun Sun" + ], + [ + "Yuecen Wei", + "Hao Peng" + ], + [ + "Yuecen Wei", + "Haonan Yuan" + ] + ], + "node_feat": { + "Xingcheng Fu": [ + { + "url": "http://arxiv.org/abs/2405.03188v1", + "title": "Hyperbolic Geometric Latent Diffusion Model for Graph Generation", + "abstract": "Diffusion models have made significant contributions to computer vision,\nsparking a growing interest in the community recently regarding the application\nof them to graph generation. Existing discrete graph diffusion models exhibit\nheightened computational complexity and diminished training efficiency. A\npreferable and natural way is to directly diffuse the graph within the latent\nspace. However, due to the non-Euclidean structure of graphs is not isotropic\nin the latent space, the existing latent diffusion models effectively make it\ndifficult to capture and preserve the topological information of graphs. To\naddress the above challenges, we propose a novel geometrically latent diffusion\nframework HypDiff. Specifically, we first establish a geometrically latent\nspace with interpretability measures based on hyperbolic geometry, to define\nanisotropic latent diffusion processes for graphs. Then, we propose a\ngeometrically latent diffusion process that is constrained by both radial and\nangular geometric properties, thereby ensuring the preservation of the original\ntopological properties in the generative graphs. 
SBM BA Cora Citeseer Polblogs Mi-F1 Ma-F1 Mi-F1 Ma-F1 Mi-F1 Ma-F1 Mi-F1 Ma-F1 Mi-F1 Ma-F1 VGAE 20.5\u00b12.1 15.4\u00b11.1 37.4\u00b11.7 15.9\u00b12.3 79.7\u00b10.4 78.1\u00b10.2 63.8\u00b11.4 55.5\u00b11.3 79.4\u00b10.8 79.4\u00b10.8 4.6 ANE 39.9\u00b11.1 33.9\u00b11.8 46.0\u00b13.0 19.3\u00b12.7 69.3\u00b10.1 66.4\u00b10.1 50.2\u00b10.1 49.5\u00b10.6 80.8\u00b10.1 80.7\u00b10.1 4.3 GraphGAN 38.6\u00b10.5 38.9\u00b10.3 43.6\u00b10.6 24.6\u00b10.5 71.7\u00b10.1 69.8\u00b10.1 49.8\u00b11.0 45.7\u00b10.1 77.5\u00b10.6 76.9\u00b10.4 4.8 P-VAE 57.9\u00b11.3 53.0\u00b11.5 38.4\u00b11.4 20.0\u00b10.3 79.6\u00b12.2 77.5\u00b12.5 67.9\u00b11.7 60.2\u00b11.9 79.4\u00b10.1 79.4\u00b10.1 3.2 Hype-ANE 18.8\u00b10.3 11.9\u00b10.1 56.9\u00b12.4 31.6\u00b11.2 80.7\u00b10.1 79.2\u00b10.3 64.4\u00b10.3 58.7\u00b10.0 83.6\u00b10.4 83.6\u00b10.4 3.0 HypDiff 70.5\u00b10.1 69.4\u00b10.1 58.3\u00b10.1 40.0\u00b10.1 82.4\u00b10.1 81.2\u00b10.1 67.8\u00b10.2 60.4\u00b10.3 85.7\u00b10.1 85.4\u00b10.1 1.1 Table 2. Generation results about the MMD distance between the original and generated graphs. (Result: scores (rank) and average rank;Bold: best; Underline: runner-up.) Method Synthetic Datasets Real-world Datasets Community BA-G MUTAG PROTRINS Degree Cluster Spectre Degree Cluster Spectre Degree Cluster Spectre Degree Cluster Spectre VGAE 0.365 0.025 0.507 0.775 1.214 0.398 0.255 2.000 0.744 0.705 0.979 0.700 GraphRNN 0.002 0.027 0.004 0.122 0.262 0.007 0.537 0.013 0.476 0.009 0.071 0.017 GDSS 0.094 0.031 0.052 0.978 0.468 0.917 0.074 0.021 0.003 1.463 0.168 0.013 DiGress 0.226 0.158 0.194 0.654 1.171 0.268 0.100 0.351 0.082 0.108 0.062 0.079 GraphGDP 0.046 0.016 0.042 0.698 0.188 0.053 0.127 0.057 0.050 0.103 0.240 0.088 EDGE 0.021 0.013 0.040 0.282 0.010 0.090 0.024 0.597 0.468 0.033 0.523 0.024 HypDiff 0.002 0.010 0.028 0.216 0.021 0.004 0.048 0.001 0.040 0.133 0.004 0.012 Graph Library. The reported results are the average scores and standard deviations over 5 runs. All models were trained and tested on a single Nvidia A100 40GB GPU. 4.3. Performance Evaluation We show the F1 scores of the node classification task in Table 1 and the statistics of MMD distance and F1 scores between the original and generated graph in the graph generation task in Table 2 and Table C.4. A higher score reported in F1 indicates a more accurate prediction of the node and fidelity of the generated graph. At the same time, a smaller MMD distance suggests better generative capabilities of the model from the perspective of graph topological properties. Node classification. HypDiff demonstrates superior performance which outperforms nearly all baseline models, achieving the highest ranking and revealing excellent generalization. This implies that HypDiff can preserve essential properties within complex structures, enabling better distinctive and utility of the dependencies between nodes across hierarchical levels in hyperbolic space. Graph Generation. Successively, we focused on validating the graph generation capability of HypDiff. Using the finer-grained metrics, we consistently observed our approach\u2019s outstanding performance. More results are shown in Table C.3. We are further concerned with the fidelity and diversity of the generated results which yielded conclusions consistent with the previous and are reported in Table C.4. Specifically, HypDiff depicts superior overall performance compared to the state-of-the-art model autoregressive model GraphRNN and discrete diffusion method DiGress. 
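For readers reproducing the MMD columns of Table 2, the sketch below shows one common way to compute a degree-distribution MMD between a reference set and a generated set of graphs. The Gaussian kernel over degree histograms, the bin count, and the toy graphs are assumptions for illustration and not necessarily the exact evaluation protocol behind these numbers.

```python
import numpy as np
import networkx as nx

def degree_hist(g, max_deg=50):
    """Normalized degree histogram of a graph (degrees above max_deg are truncated)."""
    h = np.bincount([d for _, d in g.degree()], minlength=max_deg + 1)[: max_deg + 1]
    return h / max(h.sum(), 1)

def gaussian_mmd(samples_a, samples_b, sigma=1.0):
    """Squared MMD between two sets of histograms under a Gaussian kernel."""
    k = lambda x, y: np.exp(-np.sum((x - y) ** 2) / (2 * sigma ** 2))
    kaa = np.mean([k(x, y) for x in samples_a for y in samples_a])
    kbb = np.mean([k(x, y) for x in samples_b for y in samples_b])
    kab = np.mean([k(x, y) for x in samples_a for y in samples_b])
    return kaa + kbb - 2 * kab

ref = [degree_hist(nx.barabasi_albert_graph(30, 2, seed=s)) for s in range(5)]   # "real" set
gen = [degree_hist(nx.erdos_renyi_graph(30, 0.1, seed=s)) for s in range(5)]     # "generated" set
print(gaussian_mmd(ref, gen))   # smaller values indicate closer degree statistics
```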
Furthermore, our model can effectively capture the local structure through similarity constraints and achieve competitive performance on highly connected graph data (Community). 4.4. Analysis of HypDiff In this subsection, we present the experimental results to intuitively convey our discovery and initiate a series of discussions and analyses. Ablation Study. This study is to highlight the role of radial popularity diffusion and angular similarity diffusion constraints of HypDiff. We conducted experiments on three real-world datasets to validate the node classification performance and removed radial popularity (HypDiff (w/o P)), angular similarity (HypDiff (w/o S)) and total geometric prior(HypDiff (w/o PS)) components as the variant models. We show the results in Figure 4. The radial popularity is evident in facilitating hyperbolic diffusion processes, thereby showcasing the advantage of hyperbolic geometry in capturing the underlying graph topology. Furthermore, the angular 7 \fHyperbolic Geometric Latent Diffusion Model for Graph Generation Figure 4. Ablation study results. Figure 5. Sensitivity analysis of geometric constraints. 11 12 13 14 15 Average Time of 1000 Timesteps (s) 3000 4000 5000 6000 7000 GPU Memory (MB) HypDiff GPU: 2519MB Time: 11.2 s GDSS GPU: 3501MB Time: 12.5 s GraphGDP GPU: 5902MB Time: 13.6 s EDGE GPU: 6205MB Time: 11.8 s DiGress GPU: 5800MB Time: 12.1 s Figure 6. Efficiency analysis on IMDB-B for graph generation. similarity also significantly preserves the local structure of the graph, compensating for the limitations of hyperbolic space in capturing local connectivity patterns. In summary, the hyperbolic geometric prior plays a crucial role in capturing non-Euclidean structures. Sensitivity Analysis of Geometric Constraints. To investigate the impact of both the number of clusters k and the geometric prior coefficient \u03b4 on the model performance, we conducted the sensitivity analysis on the real-world and synthetic graph datasets, respectively. The number of clusters k can be understood as the strength of the angular constraint, the results of three datasets with different structures are shown in Fig 5 (Left). Specifically, Cora has a realworld connected structure, SBM has a complex community structure, and Fractal has self-similarity and hierarchy properties. It can be observed that k has different sensitivities in different structured datasets, indicating that different graph structures have different approximate accuracies for anisotropy capture. Correspondingly, the geometric prior coefficient \u03b4 can be understood as the strength of the radial constraint, the results of three real-world datasets are shown in Fig 5 (Right). The stronger the constraint, the smaller the diffusion step in the radial direction of the hyperbolic space. It can be observed that the data set with a tree-like structure requires lower radial constraints, while the graph with high connectivity requires stronger radial constraints. For the experimental setup and a more detailed analysis of the results please refer to Appendix I. Diffusion Efficiency Analysis. We report the training time for our HypDiff and other graph diffusion baselines with the same configurations on IMDB-B. We conduct experiments with the hardware and software configurations listed in Section 4.2. We comprehensively report the results from the time and space costs of the diffusion process. 
The result is shown in Figure 6, our HypDiff comprehensively outperforms other baselines in diffusion time and GPU memory cost. Compared with the discrete graph diffusion model, our model directly diffuses each node of the graph with structure-preserving based on the latent diffusion model, so the space complexity is much lower than that of the direct diffusion of discrete and sparse structural information(e.g. adjacent/Laplace matrix). The performance of each dataset is in the Appendix K, Visualization. We compare the contributions of two diffusion generation models, HypDiff and GDSS, to graph generation tasks by visualizing networks generated by five well-accepted graph theoretical models. We discuss and show the visualization as Figure C.3 in the Appendix J.3. 5." + }, + { + "url": "http://arxiv.org/abs/2304.05059v1", + "title": "Hyperbolic Geometric Graph Representation Learning for Hierarchy-imbalance Node Classification", + "abstract": "Learning unbiased node representations for imbalanced samples in the graph\nhas become a more remarkable and important topic. For the graph, a significant\nchallenge is that the topological properties of the nodes (e.g., locations,\nroles) are unbalanced (topology-imbalance), other than the number of training\nlabeled nodes (quantity-imbalance). Existing studies on topology-imbalance\nfocus on the location or the local neighborhood structure of nodes, ignoring\nthe global underlying hierarchical properties of the graph, i.e., hierarchy. In\nthe real-world scenario, the hierarchical structure of graph data reveals\nimportant topological properties of graphs and is relevant to a wide range of\napplications. We find that training labeled nodes with different hierarchical\nproperties have a significant impact on the node classification tasks and\nconfirm it in our experiments. It is well known that hyperbolic geometry has a\nunique advantage in representing the hierarchical structure of graphs.\nTherefore, we attempt to explore the hierarchy-imbalance issue for node\nclassification of graph neural networks with a novelty perspective of\nhyperbolic geometry, including its characteristics and causes. Then, we propose\na novel hyperbolic geometric hierarchy-imbalance learning framework, named\nHyperIMBA, to alleviate the hierarchy-imbalance issue caused by uneven\nhierarchy-levels and cross-hierarchy connectivity patterns of labeled\nnodes.Extensive experimental results demonstrate the superior effectiveness of\nHyperIMBA for hierarchy-imbalance node classification tasks.", + "authors": "Xingcheng Fu, Yuecen Wei, Qingyun Sun, Haonan Yuan, Jia Wu, Hao Peng, Jianxin Li", + "published": "2023-04-11", + "updated": "2023-04-11", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.SI" + ], + "main_content": "INTRODUCTION In recent years, graph representation learning has shown its effectiveness in capturing the irregular but related complex structures in graph data [12, 16, 21, 39, 51]. With the intensive studies and wide applications of graphs [22, 24, 40, 50], some recent works [8, 25, 44] show that the geometric properties of graph topology play a crucial role in graph representation learning. Among the variety of topological properties, the hierarchy is a ubiquitous and significant property of graphs. In this work, we focus on the semi-supervised unbalanced node classification task for a graph with hierarchy. Due to the cost of labeling in the real-world, the number of labels in different classes is always imbalanced. 
Most of the existing studies on imbalanced learning of graphs focus on the imbalanced number of labeled nodes in different classes, i.e., quantityimbalance [15, 17, 23, 28, 42, 48], and a toy example as shown in arXiv:2304.05059v1 [cs.LG] 11 Apr 2023 \fWWW \u201923, May 1\u20135, 2023, Austin, TX, USA Xingcheng Fu, et al. (a) Quantity-imbalance. (b) Position-imbalance. (c) Hierarchy-imbalance. Figure 1: Representative cases of imbalanced node classification on the graphs by label propagation. Colors represents different types of nodes, where the fill colors of nodes represent the ground truth and the border colors represent the classification result by label propagation. Incorrect results are marked with the dashed regions. The star and circular symbols represent the labeled nodes and unlabeled nodes, respectively. Figure 1 (a). With the in-depth study of the topological properties and the learning mechanism of GNNs, some recent works have focused on the problem of an unbalanced distribution of labeled nodes in topological positions, i.e., topological position imbalance. The (topological) position-imbalance [9, 38, 41] is caused by the uneven location distribution of labeled nodes in the topological space. As shown in Figure 1 (b), since the graph neural networks [16, 19, 47] learn node representations in the message-passing paradigm [14], even if the labeled nodes have balanced quantity, the uneven position distribution of nodes leads to low-quality of label propagation. However, the existing works [9, 38, 41] only focus on the position and neighborhood information of nodes in the topology structure, and they are difficult to handle the label imbalance issue caused by implicit topological properties of graphs. To sum up, the imbalance issue exploring the topological properties is still in its infancy. Hierarchy-imbalance issue. Hierarchy is an important topological property of graphs, which simultaneously reproduces the unique properties of scale-free topology and the high clustering of nodes [31, 46] and can be widely observed in nature, from biology to language to some social networks [2, 31], and reflects the important role of nodes in the network. Since graphs with the hierarchical structures are scale-free (with an exponential number growth of nodes and power-law degree distribution) [3] and highly modularity (with high connectivity) [11], it is difficult to measure label imbalance on hierarchical graphs simply by the quantity and location of nodes. For example, the graph in Figure 1 (c) with a hierarchical structure has a balanced quantity (with the same number of labeled nodes in each class) and balanced position (with the same shortest distance to the graph center and degree distribution) of labeled nodes. However, the blueand green-class labeled nodes occupy higher hierarchy-level roles, resulting in a large number of errors in the classification results. It can be observed that the distribution of labeled nodes in the hierarchy can seriously affect the decision boundary shift of the classifier. Compared with quantityand position-imbalance issues, exploring the hierarchyimbalance issue has two major challenges: (1) Implicit topology: hierarchy-imbalance is caused by the uneven distribution of labeled nodes in implicit topological properties, which is difficult to measure intuitively. (2) Hierarchical connectivity: hierarchy of the graph introduces more complex connectivity patterns for nodes, which are difficult to be directly observed and quantified in a single way. 
Therefore, a natural problem is, \"How to effectively and efficiently measure the hierarchy for each node of graphs, and how does hierarchy-imbalanced label information affect the classification results by the message-passing mechanism? \" Present work. To solve the above problem, we first give a quantitative analysis for understanding and explore the hierarchyimbalance issue. Furthermore, inspired by the success of hyperbolic graph learning [8, 25, 43] for hierarchy-preserving [20, 27], we propose an effective metric to measure the hierarchy of labeled nodes using hyperbolic geometric embedding. Then we propose a novel Hyperbolic Geometric Hierarchy-IMBAlance Learning (HyperIMBA) framework to re-weight the label information propagation and adjust the objective margin accordingly based on the node hierarchy. The key insight of HyperIMBA is to use the graph geometric method to deal with the imbalance issue caused by the topological geometric properties. Specifically, based on the Poincar\u00e9 model, we design a novel Hierarchy-Aware Margin (HAM) to reduce the decision boundary bias caused by hierarchyimbalance labeled nodes. Then we design a Hierarchy-aware Message-Passing Neural Network (HMPNN) mechanism based on the class-aware Ricci curvature weight, which measures the influence from the label information and connectivity of neighborhood, alleviating the over-squashing caused by message-passing of crosshierarchy connectivity pattern by re-weight the \"backbone\" paths. Overall, the contributions are summarized as follows: \u2022 For the first time, we explore the hierarchy-imbalance node representation learning as a new issue of topological imbalance topic for semi-supervised node classification. \u2022 We propose a novel training framework, named HyperIMBA, to alleviate the hierarchy-imbalance issue by designing two key mechanisms: HAM captures the implicit hierarchy of labeled nodes to adjust the decision boundary margins, and HMPNN reweights the path of supervision information passing according to the cross-hierarchy connectivity pattern. \u2022 Extensive experiments on synthetic and real-world datasets demonstrate a significant and consistent improvement and provide insightful analysis for the hierarchy-imbalance issue. \fHyperbolic Geometric Graph Representation Learning for Hierarchy-imbalance Node Classification WWW \u201923, May 1\u20135, 2023, Austin, TX, USA (a) Quantitative analysis of hierarchical network. (b) Tree, hierarchical structure in hyperbolic space. Figure 2: (a) The quantitative analysis of the hierarchical graph and the Barab\u00e1si\u2013Albert graph. With the growth of the network scale, the cluster coefficients of BA networks are referred to as the power-law distribution, and the cluster coefficients of the hierarchical network vary independently of the scale of the network. The connectivity correlation analysis shows that the nodes tend to connect nodes with similar connectivity and betweenness in BA networks, and the low-degree nodes of the hierarchical network tend to connect to the core nodes. (b) Compared with the Euclidean embedding method, the Poincar\u00e9 Model can better represent the hierarchy of nodes in hyperbolic space. 2 PRELIMINARY In this section, we briefly introduce some notations and key definitions. Our work focuses on exploring the relationship between the imbalance issue and hierarchical geometric properties of labeled nodes in the semi-supervised node classification task on graphs. 
2.1 Semi-supervised Node Classification Given a graph G = {V, E} with the node set V of \ud835\udc41nodes and the edge set E. Let A \u2208R\ud835\udc41\u00d7\ud835\udc41be the adjacency matrix and X \u2208R\ud835\udc41\u00d7\ud835\udc51 be the node feature matrix, where \ud835\udc51denotes the dimension of node features. For node \ud835\udc63, its neighbors set is \ud835\udc41(\ud835\udc63) : {\ud835\udc62\u2208V|\ud835\udc62, \ud835\udc63\u2208E}. \ud835\udc51\ud835\udc63: |N (\ud835\udc63)| is the degree of node \ud835\udc63. Given the labeled node set V \ud835\udc3f and their labels Y \ud835\udc3fwhere each node \ud835\udc63\ud835\udc56is associated with a label \ud835\udc66\ud835\udc56, semi-supervised node classification aims to train a node classifier \ud835\udc53\ud835\udf03: \ud835\udc63\u2192R\ud835\udc36to predict the labels Y \ud835\udc48of remaining unlabeled nodes V \ud835\udc48= V \\ V \ud835\udc3f, where \ud835\udc36denotes the number of classes. We separate the labeled node set V \ud835\udc3finto {V1 \ud835\udc3f, V2 \ud835\udc3f, \u00b7 \u00b7 \u00b7 , V\ud835\udc36 \ud835\udc3f}, where V\ud835\udc56 \ud835\udc3fdenotes the nodes of class \ud835\udc56in V \ud835\udc3f. We focus on semi-supervised node classification based on GNNs methods. 2.2 Hyperbolic Geometric Model Hyperbolic space is commonly referred to a manifold with constant negative curvature and is used for modeling complex networks. In hyperbolic geometry, five common isometric models are used to describe hyperbolic spaces [6]. In this work, we use the Poincar\u00e9 disk model to reveal the underlying hierarchy of the graph. Definition 1 (Poincar\u00e9 disk Model). The Poincar\u00e9 Disk Model is a two-dimensional model of hyperbolic geometry with nodes located in the unit disk interior, and the generalization of n-dimensional with standard negative curvature \ud835\udc50is the Poincar\u00e9 ball B\ud835\udc5b \ud835\udc50= {\ud835\udc99\u2208R\ud835\udc5b: \u2225\ud835\udc99\u22252 < 1/\ud835\udc50}. For any point pair (\ud835\udc99,\ud835\udc9a) \u2208B\ud835\udc5b \ud835\udc50, \ud835\udc99\u2260\ud835\udc9a, the distance on this manifold is defined as: \ud835\udc51B\ud835\udc5b \ud835\udc50(\ud835\udc99,\ud835\udc9a) = 2 \u221a\ud835\udc50tanh\u22121(\u221a\ud835\udc50\u2225\u2212\ud835\udc99\u2295\ud835\udc50\ud835\udc9a\u2225), (1) where \u2295\ud835\udc50is M\u00f6bius addition and \u2225\u00b7\u2225is \ud835\udc3f2 norm. Definition 2 (Poincar\u00e9 norm). The Poincar\u00e9 Norm is defined as the distance of any point \ud835\udc99\u2208B\ud835\udc5b \ud835\udc50from the origin of Poincar\u00e9 ball: \u2225\ud835\udc99\u2225B\ud835\udc5b \ud835\udc50= 2 \u221a\ud835\udc50tanh\u22121(\u221a\ud835\udc50\u2225\ud835\udc99\u2225). (2) 3 UNDERSTANDING HIERARCHY-IMBALANCE In this section, we present a novel hierarchy-imbalance issue for semi-supervised node classification on graphs. Then a quantitative analysis of how the hierarchical nature of the graph affects the representation learning of nodes is presented. Finally, we present a new insight on the hierarchy-imbalance issue of graphs from a hyperbolic geometric perspective. 3.1 Hierarchy-imbalance of Node Classification The quantity and quality of the information received by a node determine the expressiveness of its representation in GNNs. In graphs with an intrinsic hierarchical structure, hierarchy is highly correlated with both the quantity and quality of information a node can receive. 
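Definitions 1 and 2 above translate directly into code; a small NumPy sketch follows, using curvature c = 1 as in the later experiments (variable names are illustrative).

```python
import numpy as np

def mobius_add(x, y, c=1.0):
    """Mobius addition on the Poincare ball (used inside Eq. (1))."""
    xy, x2, y2 = np.dot(x, y), np.dot(x, x), np.dot(y, y)
    num = (1 + 2 * c * xy + c * y2) * x + (1 - c * x2) * y
    return num / (1 + 2 * c * xy + c ** 2 * x2 * y2)

def poincare_dist(x, y, c=1.0):
    """Eq. (1): hyperbolic distance between two points of the Poincare ball."""
    return (2.0 / np.sqrt(c)) * np.arctanh(np.sqrt(c) * np.linalg.norm(mobius_add(-x, y, c)))

def poincare_norm(x, c=1.0):
    """Eq. (2): distance of a point x from the origin of the Poincare ball."""
    return (2.0 / np.sqrt(c)) * np.arctanh(np.sqrt(c) * np.linalg.norm(x))

x, y = np.array([0.1, 0.2]), np.array([-0.3, 0.4])
print(poincare_dist(x, y), poincare_norm(x))
```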
We argue that the imbalance of the node hierarchical tag affects the performance of GNNs in two aspects: (1) Implicit hierarchy-level: Considering the node label quality, the topological roles of labeled nodes are also highly relevant to the propagation of supervision information. Under the condition that the supervision information decays with the topological distance [5], the further the quality supervision information can achieve by propagation, the more significant influence the nodes can receive. (2) Cross-hierarchy connectivity pattern: The hierarchical structure will introduce extra correlations in the graph topology, with the potential to cause nodes at different levels to have different patterns of neighborhood connectivity. The message-passing of supervision and other information may cause an over-squashing problem when the messages are across different hierarchy-levels \fWWW \u201923, May 1\u20135, 2023, Austin, TX, USA Xingcheng Fu, et al. with narrow connectivity [44]. The reason is that there are narrow \"bottlenecks\" between hierarchy-levels with different connectivity. 3.2 Quantitative Analysis of Hierarchical Structure To further understand the hierarchy-imbalance issue, we use wellknown graph models to generate two types of synthetic graphs for quantitative analysis: (a) Hierarchical graph: It is a deterministic fractal and scale-free graph with a 4-nodes module and 4-levels hierarchy and is generated by the Hierarchical Network Model [31], (b) Barab\u00e1si\u2013Albert (BA) graph: It is a random scalefree graph with power-law distributions and generated by the extended Barab\u00e1si\u2013Albert Model [1]. Hierarchy and topological properties. Most classification methods based on graph topology rely on the modularity of graphs: the model can easily identify a set of nodes that are closely connected to each other within the class, but with few or no links to nodes outside the class [1]. These clearly identifiable modular organizations have intuitive decision boundaries on the topology (Figure 1 (b)). As shown in Figure 2 (a) left plot, unlike the BA scale-free graph model in which the clustering coefficients are independent of the degree of a particular node, the clustering coefficients in a hierarchical network can be expressed by a function of degree, i.e., \ud835\udc36(\ud835\udc58) = \ud835\udc58\u2212\ud835\udefd, and the exponent \ud835\udefd= 1 in deterministic scale-free networks [11]. It indicates that the decision boundaries may be implicit in the hierarchical topology (Figure 1 (b)). To sum up, the topological role importance of labeled nodes in the hierarchy can more effectively affect the decision boundaries between node classes. Hierarchy and correlations. The nodes with different hierarchylevels have different connection patterns on the hierarchical graph. To quantitatively analyze the local topological properties of nodes in the hierarchical network to reveal the connection patterns of different nodes, we consider two important local topological properties, connectivity(degree) and betweenness of nodes, where high connectivity represents nodes that are easier to propagate information, and high betweenness represents nodes that are on the \"backbone\" paths of the graph. 
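These per-node statistics (clustering as a function of degree, neighbour connectivity, and betweenness) can be reproduced with networkx as sketched below; the graph choice and size are illustrative, and the neighbour statistics are the quantities formalized in Eq. (3) just after this block.

```python
import networkx as nx
import numpy as np
from collections import defaultdict

g = nx.barabasi_albert_graph(512, 3, seed=0)   # illustrative BA graph; an HNM graph could be swapped in

# <C(k)>: average clustering coefficient per degree (Figure 2(a), left).
clust, buckets = nx.clustering(g), defaultdict(list)
for node, k in g.degree():
    buckets[k].append(clust[node])
c_of_k = {k: float(np.mean(v)) for k, v in sorted(buckets.items())}

# <k_nn>(k): average degree of the neighbours of degree-k nodes.
knn = nx.average_degree_connectivity(g)

# <b_nn>: mean betweenness of each node's neighbours, paired with its own betweenness.
btw = nx.betweenness_centrality(g)
bnn = {v: float(np.mean([btw[u] for u in g.neighbors(v)])) for v in g}

print(list(c_of_k.items())[:5], list(knn.items())[:5], list(bnn.items())[:3])
```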
To quantify the graph propagation patterns across levels of hierarchy, we compute the corresponding average nearest neighbor connectivity \u27e8\ud835\udc58\ud835\udc5b\ud835\udc5b\u27e9and betweenness \u27e8\ud835\udc4f\ud835\udc5b\ud835\udc5b\u27e9of nodes with connectivity \ud835\udc58and betweenness \ud835\udc4fas: \u27e8\ud835\udc58\ud835\udc5b\ud835\udc5b\u27e9= \u2211\ufe01 \ud835\udc58\u2032 \ud835\udc58\u2032 prob \u0000\ud835\udc58\u2032 | \ud835\udc58\u0001 , \u27e8\ud835\udc4f\ud835\udc5b\ud835\udc5b\u27e9= \u2211\ufe01 \ud835\udc4f\u2032 \ud835\udc4f\u2032 prob \u0000\ud835\udc4f\u2032 | \ud835\udc4f\u0001 , (3) where \ud835\udc58\u2032 and \ud835\udc4f\u2032 are connectivity and betweenness of other nodes, respectively. The results are shown in Figure 2 (a) right plots. For BA graph (blue lines), both \u27e8\ud835\udc58\ud835\udc5b\ud835\udc5b\u27e9and \u27e8\ud835\udc4f\ud835\udc5b\ud835\udc5b\u27e9show that nodes tend to connect with other nodes whose connectivity and betweenness are similar to themselves. For the hierarchical graph (red lines), the \u27e8\ud835\udc58\ud835\udc5b\ud835\udc5b\u27e9results indicate that the nodes are more likely to connect to nodes at other different levels, and the \u27e8\ud835\udc4f\ud835\udc5b\ud835\udc5b\u27e9results reveal that more nodes on \"backbone\" paths are more frequently connected with the nodes in the local group. For example, these hierarchical properties on the Internet are likely driven by several additional factors, such as economic market demand. In conclusion, the quantitative analysis shows that the local connectivity and betweenness are closely related to the hierarchy of nodes, and the topological bottleneck of the graph may exist between different hierarchy-levels, which aggravates the oversquashing problem in the message-passing of supervision information. 3.3 Hierarchy of Hyperbolic Geometry Perspective Hyperbolic space can be understood as smooth versions of trees abstracting the hierarchical organization of complex networks [20]. Figure 2 (b) shows the node embeddings of the tree, hierarchical graphs on Euclidean and hyperbolic space. We can observe that the graph size grows as the radius of the Poincar\u00e9 disk increases, and the hierarchy deepens as the graph size grows in hyperbolic space. Even though the hierarchical graph has a more complex structure, the position distribution of its hyperbolic embeddings is similar to a tree in hyperbolic space. In the GNNs community, learning the geometric properties of graphs has attracted much attention, and a typical case is learning the hierarchical structure of graphs using hyperbolic geometry [13, 18, 25]. In summary, hyperbolic geometry provides us with exciting ways to capture and measure the implicit hierarchical structure of graphs. 4 HYPERIMBA MODEL In this section, we present a novel hyperbolic geometric hierarchyimbalance learning (HyperIMBA) training framework to address the two main challenges of hierarchy-imbalance. The key insight is that we leverage hyperbolic geometry to abstract the implicit hierarchy of nodes in the graph and introduce a discrete geometric metric to deal with the over-squashing problem of supervision information propagated between hierarchy-levels. The architecture is shown in Figure 3, and the overall process of HyperIMBA is shown in Algorithm 1. 
4.1 Hyperbolic Hierarchy-aware Margin Our goal is to capture the implicit hierarchy of each labeled node, which is an important global property in a hierarchical graph to adjust the decision boundaries in the learning process. To this end, we design the Hyperbolic Hierarchy-Aware Margin (HAM), which consists of three steps: First, we use the topological information of the graph to learn a hyperbolic embedding of the graph by using the Poincar\u00e9 model. The hierarchical weights of nodes are then learned using their hyperbolic embeddings. Finally, a hyperbolic level-aware margin is designed to modify the objective function. Step-1: Hyperbolic Embedding of Labeled Nodes. Poincar\u00e9 embedding [25] is a shallow method of learning embedding into an \ud835\udc5b-dimensional Poincar\u00e9 ball B\ud835\udc5b \ud835\udc50. In our work, we utilize Poincar\u00e9 embedding to find the optimal embeddings of nodes by minimizing a hyperbolic distance-based loss function. Based on the hyperbolic distance in Equation 1, the loss function of Poincar\u00e9 embedding is defined as follows: LB\ud835\udc5b \ud835\udc50(\u0398) = \u2211\ufe01 (\ud835\udc62,\ud835\udc63) \u2208E log \ud835\udc52\u2212\ud835\udc51B\ud835\udc5b \ud835\udc50(\ud835\udc96,\ud835\udc97) \u00cd \ud835\udc97\u2032\u2208Neg(\ud835\udc62) \ud835\udc52\u2212\ud835\udc51B\ud835\udc5b \ud835\udc50(\ud835\udc96,\ud835\udc97\u2032) , (4) where the negative examples Neg(\ud835\udc62) of \ud835\udc62is Neg(\ud835\udc62) = {\ud835\udc63|\ud835\udc62, \ud835\udc63\u2209 E} \u222a{\ud835\udc62}. Then we utilize the stochastic Riemannian optimization method to solve the optimization problem as: \u0398\u2032 \u2190arg min \u0398 L(\u0398) s.t. \u2200\ud835\udf03\ud835\udc56\u2208\u0398 : \u2225\ud835\udf03\ud835\udc56\u2225< 1/\ud835\udc50. (5) \fHyperbolic Geometric Graph Representation Learning for Hierarchy-imbalance Node Classification WWW \u201923, May 1\u20135, 2023, Austin, TX, USA Figure 3: An illustration of HyperIMBA architecture. (1) HyperIMBA learns the hyperbolic embedding and the hierarchy of each node by Poincar\u00e9 model, then gets the hierarchy-aware margin and adds it to the GNNs loss to adjust the decision boundaries; (2) HyperIMBA calculates class-aware Ricci curvature for each edge, then transforms the Ricci curvatures to aggregated weights by an MLP and Softmax function. (3) HyperIMBA performs as GNNs with HAM and HMPNN for the node classification. We follow Poincar\u00e9 embedding using Riemannian stochastic gradient descent [4] to update the model parameters. For each labeled node \ud835\udc63\u2208V \ud835\udc3f, we get the hyperbolic embedding \ud835\udc52\ud835\udc63 B by Poincar\u00e9 embedding method to capture the hierarchy of the node. Step-2: Hyperbolic Hierarchy-aware Margin. In hyperbolic geometric space, the hyperbolic distance (radius) \ud835\udc45of an embedded node from the hyperbolic disk origin (North Pole) is able to abstract the depth of the hidden tree-like hierarchy [20]. In our work, we compute the hyperbolic radius according to Equation 2 as the hierarchy of nodes by computing the Poincar\u00e9 norm of the hyperbolic node embedding, and then we use a Multi-layer Perceptron (MLP) to transform the Poincar\u00e9 norm into the hierarchy weights of the nodes. For each node \ud835\udc63\u2208V, the Hierarchy-aware Margin is defined as: HAM\ud835\udc63\ud835\udc66= |V\ud835\udc66 \ud835\udc3f| |V \ud835\udc3f| Softmax \u0010 MLP \u0010\r \r\ud835\udc52\ud835\udc63 B \r \r B \u0011\u0011 . 
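A compact PyTorch sketch of the distance-based softmax objective in Eq. (4) is given below, written as a negative log-likelihood to be minimized. Negative examples are drawn uniformly at random (without excluding true neighbours), and the gradient step is a plain Euclidean one rather than the Riemannian SGD of Eq. (5), so this is a simplification for illustration only.

```python
import torch

def poincare_dist(u, v, c=1.0, eps=1e-7):
    """Eq. (1): hyperbolic distance between (broadcastable) batches of Poincare-ball points."""
    uv = (u * v).sum(-1, keepdim=True)
    u2 = (u * u).sum(-1, keepdim=True)
    v2 = (v * v).sum(-1, keepdim=True)
    num = (1 - 2 * c * uv + c * v2) * (-u) + (1 - c * u2) * v      # (-u) (+)_c v
    den = 1 - 2 * c * uv + c ** 2 * u2 * v2
    norm = (num / den).norm(dim=-1).clamp(max=1 - eps)
    return (2.0 / c ** 0.5) * torch.atanh(c ** 0.5 * norm)

def poincare_embedding_loss(emb, edges, num_neg=5):
    """Negative log-likelihood form of the distance-based softmax in Eq. (4)."""
    u, v = emb[edges[:, 0]], emb[edges[:, 1]]
    pos = -poincare_dist(u, v)                                      # [E]
    neg_idx = torch.randint(0, emb.size(0), (edges.size(0), num_neg))
    neg = -poincare_dist(u.unsqueeze(1), emb[neg_idx])              # [E, num_neg]
    return -(pos - torch.logsumexp(torch.cat([pos.unsqueeze(1), neg], dim=1), dim=1)).mean()

emb = (0.001 * torch.randn(100, 2)).requires_grad_()                # small init keeps points in the ball
edges = torch.randint(0, 100, (300, 2))
loss = poincare_embedding_loss(emb, edges)
loss.backward()                                                     # Euclidean gradient; RSGD would rescale it
print(loss.item())
```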
(6) Step-3: Objective Function Adjustment. In Section 3.2, we observe the hierarchy of a labeled node represents its global topological role and importance. Inspired by the margin-based imbalance handling methods [38], we design the Hierarchy-Aware Margin to adaptively handle the intensity of supervision information based on a hierarchy to adjust the decision boundaries. The HyperIMBA learning objective function is formulated as: L = 1 \u2225V\u2225 \u2211\ufe01 \ud835\udc63\u2208V LGNNs (HMPNN(\ud835\udc89\ud835\udc63) + \ud835\udefcHAM,\ud835\udc66\ud835\udc63) , (7) 4.2 Hierarchy-aware Message-passing Although HAM can adjust the intensity of the supervision information on global topology, it cannot recognize the hierarchical connectivity patterns of individual nodes. Based on the observations in Section 3.2, we draw a conclusion that a node of hierarchical graph tends to connect nodes with different connectivity (degree) and betweenness, and this cross-hierarchy connectivity pattern is more likely to lead to topology bottlenecks in message-passing. Class-aware Ricci curvature. Recently, Ricci curvature has been introduced to analyze and measure the over-squashing problem caused by topological bottlenecks [44]. Inspired by this work, we extend Ollivier-Ricci curvature [26] as the edge weights to affect message-passing, which can alleviate the over-squashing problem. Specifically, we first consider the label \ud835\udc56distribution in the one-hop neighborhood of a node \ud835\udc62is defined as: \ud835\udc37\ud835\udc62,\ud835\udc56= |{\ud835\udc63\u2208N (\ud835\udc62) | \ud835\udc66= \ud835\udc56}| \u2225N (\ud835\udc62)\u2225 . (8) Our class-aware Ricci curvature \ud835\udf05(\ud835\udc62, \ud835\udc63)\ud835\udc50of the edge (\ud835\udc62, \ud835\udc63) is defined as: \ud835\udf05(\ud835\udc62, \ud835\udc63) = \ud835\udc4a(\ud835\udc5a\ud835\udc66\ud835\udc62 \ud835\udc62,\ud835\udc5a\ud835\udc66\ud835\udc63 \ud835\udc63) \ud835\udc51(\ud835\udc62, \ud835\udc63) , (9) where \ud835\udc4a(\u00b7, \u00b7) is the Wasserstein distance, \ud835\udc51(\u00b7, \u00b7) is the geodesic distance (embedding distance), and \ud835\udc5a\ud835\udc66\ud835\udc62 \ud835\udc62 is the mass distribution of node \ud835\udc62. The mass distribution represents the important distribution of a node and its one-hop neighborhood [26], and we further consider the label distribution in the neighborhood as: \ud835\udc5a\ud835\udefc,\ud835\udc5d \ud835\udc62 (\ud835\udc62\ud835\udc56, \ud835\udc37\ud835\udc66\ud835\udc62\ud835\udc56 \ud835\udc62\ud835\udc56) = \uf8f1 \uf8f4 \uf8f4 \uf8f4 \uf8f2 \uf8f4 \uf8f4 \uf8f4 \uf8f3 \ud835\udefc if \ud835\udc62\ud835\udc56= \ud835\udc65 1\u2212\ud835\udefc \ud835\udc36 \u00b7 \ud835\udc4f\u2212\ud835\udc37 \ud835\udc66\ud835\udc62\ud835\udc56 \ud835\udc62\ud835\udc56\ud835\udc51(\ud835\udc62,\ud835\udc62\ud835\udc56)\ud835\udc5d if \ud835\udc62\ud835\udc56\u2208N (\ud835\udc62) 0 otherwise , where \ud835\udc36= \u00cd \ud835\udc62\ud835\udc56\u2208N(\ud835\udc62) \ud835\udc4f\u2212\ud835\udc51(\ud835\udc62,\ud835\udc62\ud835\udc56)\ud835\udc5d. \ud835\udefcand \ud835\udc5dare hyper-parameters that represent the importance of node \ud835\udc65and we take \ud835\udefc= 0.5 and \ud835\udc5d= 2 following the existing works [37, 49]. Curvature-Aware Message-Passing. Class-aware Ricci curvature measures how easily the label information flows through an edge and can be used to guide message-passing. 
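The sketch below illustrates Eqs. (8)-(9), using the POT library (`ot`) for the Wasserstein term and hop counts as the ground metric instead of embedding distances. Because the exact exponent in the mass distribution above is hard to read in this extraction, the neighbour weighting here is a simplified stand-in; note also that the classical Ollivier-Ricci curvature is 1 - W/d, whereas the ratio below follows Eq. (9) as written.

```python
import numpy as np
import networkx as nx
import ot  # POT (Python Optimal Transport), assumed available

def label_distribution(g, labels, u, i):
    """Eq. (8): fraction of u's one-hop neighbours that carry label i."""
    nbrs = list(g.neighbors(u))
    return sum(labels[n] == i for n in nbrs) / max(len(nbrs), 1)

def mass(g, labels, u, alpha=0.5):
    """Simplified class-aware mass: alpha stays on u, the remainder is spread over neighbours
    with a weight growing with their agreement with u's class (illustrative only)."""
    nbrs = list(g.neighbors(u))
    w = np.array([1.0 + label_distribution(g, labels, n, labels[u]) for n in nbrs])
    return [u] + nbrs, np.concatenate([[alpha], (1 - alpha) * w / w.sum()])

def class_aware_ricci(g, labels, u, v, hop=None):
    """Eq. (9) as written: kappa(u, v) = W(m_u, m_v) / d(u, v), with hop counts as ground metric."""
    hop = hop or dict(nx.all_pairs_shortest_path_length(g))
    su, pu = mass(g, labels, u)
    sv, pv = mass(g, labels, v)
    M = np.array([[hop[a][b] for b in sv] for a in su], dtype=float)
    return ot.emd2(pu, pv, M) / hop[u][v]

g = nx.karate_club_graph()
labels = {n: int(g.nodes[n]["club"] == "Officer") for n in g}
print(class_aware_ricci(g, labels, 0, 1))
```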
We follow [49] by using an MLP to learn the mapping function from the curvature to the aggregated weights \ud835\udf0f\ud835\udc62\ud835\udc63of MPNN. We have Hierarchy-aware \fWWW \u201923, May 1\u20135, 2023, Austin, TX, USA Xingcheng Fu, et al. Algorithm 1: HyperIMBA Input: Graph G = {V, E} with node labels Y; Number of training epochs \ud835\udc38; Loss hyperparameter \ud835\udefc 1 Parameter \ud835\udf03initialization; Output: Predicted label \u02c6 Y. // Learn hierarchy of nodes 2 Learning node Poincar\u00e9 embedding \ud835\udc52B \u2190Equation (4); 3 Calculate node label distribution \ud835\udc37\u2190Equation (8); 4 Calculate class-aware Ricci curvature \ud835\udf05\u2190Equation (9); 5 for \ud835\udc52= 1, 2, \u00b7 \u00b7 \u00b7 , \ud835\udc38do // Learn hyperbolic hierarchy-imbalance margin 6 Calculate the HAM \u2190Equation (6); // Hierarchy-aware message-passing 7 Learning curvature-aware HMPNN \u2190Eq. (10); 8 Predict node labels \u02c6 Y \u2190Eq. (7); // Optimize 9 Calculate the classification loss L \u2190Eq. (7), 10 Update model parameters \ud835\udf03\u2190\ud835\udf03\u2212\ud835\udf02\u2207\ud835\udf03. 11 end Table 1: Statistics of real-world datasets. Dataset #Node #Edge #Label #Avg. Deg #H(G) Cora 2,708 5,429 7 4.01 0.83 Citeseer 3,327 4,732 6 2.85 0.72 Photo 7,487 119,043 8 31.80 0.83 Actor 7,600 33,544 5 8.83 0.24 Chameleon 2,277 31,421 5 27.60 0.25 Squirrel 5,201 198,493 5 76.33 0.22 Massage-Passing Neural Networks (HMPNN) as follow: \ud835\udf0f\ud835\udc62\ud835\udc63= Softmax (MLP (\ud835\udf05(\ud835\udc62, \ud835\udc63))) , \ud835\udc89\ud835\udc59+1 \ud835\udc56 = \ud835\udc48\ud835\udc59\u00a9 \u00ad \u00ab \ud835\udc89\ud835\udc59 \ud835\udc56, \u2211\ufe01 \ud835\udc57\u2208N(\ud835\udc56) \ud835\udc40\ud835\udc59\u0010 \ud835\udc89\ud835\udc59 \ud835\udc56, \ud835\udc89\ud835\udc59 \ud835\udc57,\ud835\udf0f\ud835\udc56\ud835\udc57\ud835\udc52\ud835\udc56\ud835\udc57 \u0011\u00aa \u00ae \u00ac . (10) 5 EXPERIMENT In this section, we conduct comprehensive experiments to demonstrate the effectiveness and adaptability of HyperIMBA 1 on various datasets and tasks. We further analyze the robustness to investigate the expressiveness of HyperIMBA. 5.1 Datasets We conduct experiments on synthetic and real-world datasets to evaluate our method, and analyze the model\u2019s capabilities in terms of both graph theory and real-world scenarios. The statistics of the datasets are summarized in Table 1. The edge homophily H (G) is computed according to [29]. Synthetic Datasets. We generate the hierarchical synthetic graphs for essential verification and analysis of our method by the well-accepted graph theoretical model: Hierarchical Network 1The code is available at https://github.com/RingBDStack/HyperIMBA. Figure 4: Performances of different hierarchy-level training setup on the synthetic graph. Model (HNM) [31]. For each dataset, we create 1,024 nodes and subsequently perform the graph generation algorithm on these nodes. For the hierarchical graph, we consider an initial network of \ud835\udc41= 4 fully interconnected nodes as the fractal, and derive \ud835\udc58= 5 times in an iterative way by replicating the initial fractal of the graph according to the fractal structure. For each generated graph, we randomly select 80% nodes as the test set, 10% nodes as the training set, and the other 10% nodes as the validation set. Real-world Datasets. 
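Before listing the real-world datasets, here is a PyTorch Geometric sketch of the curvature-weighted aggregation in Eq. (10) above. It assumes PyG is available, takes the class-aware curvatures as precomputed edge values (e.g., from the curvature sketch earlier), and normalizes tau with a softmax over each target node's incoming edges, which is an assumption about the softmax scope.

```python
import torch
from torch import nn
from torch_geometric.nn import MessagePassing
from torch_geometric.utils import softmax

class HMPNNConv(MessagePassing):
    """Sketch of Eq. (10): tau_uv = softmax(MLP(kappa_uv)) re-weights neighbour messages."""
    def __init__(self, in_dim, out_dim):
        super().__init__(aggr="add")
        self.curv_mlp = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1))
        self.lin = nn.Linear(in_dim, out_dim)

    def forward(self, x, edge_index, edge_curvature):
        # Map each edge curvature to a weight, normalized over the edges entering each target node.
        tau = softmax(self.curv_mlp(edge_curvature.view(-1, 1)), edge_index[1])
        return self.lin(self.propagate(edge_index, x=x, tau=tau))

    def message(self, x_j, tau):
        return tau * x_j            # curvature-aware re-weighting of neighbour features

# Toy usage with random data.
x = torch.randn(5, 8)
edge_index = torch.tensor([[0, 1, 2, 3], [1, 2, 3, 4]])
kappa = torch.randn(4)              # precomputed class-aware Ricci curvatures, one per edge
out = HMPNNConv(8, 8)(x, edge_index, kappa)
print(out.shape)
```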
We also conducted experiments on several real-world datasets: (1) Citation network: Cora and Citeseer [35] are citation networks of academic papers. (2) Co-occurrence network: Photo [36] is segment of the Amazon co-purchase graph and Actor [30] is an actor co-occurrence network. (3) Page-page network: Chameleon and Squirrel [34] are page-page networks on Wikipedia. Since we focus on the imbalance issue of topological properties, we set the same number of labeled nodes for each class. 5.2 Experimental Setup Baselines. We choose well-known GNNs as backbones, including GCN [19], GAT [47], and GraphSAGE [16]. To evaluate the proposed HyperIMBA, we compare it with a variety of baselines, including: the most relevant baselines of the topology-imbalance issue are ReNode [9] and TAM [38]. ReNode is a position-aware and reweighted [7, 10, 32] method, and TAM is a neighborhood class information-aware margin-based method. DropEdge [33] randomly removes a certain number of edges from the input graph at each training epoch, which acts as data augmentation [28] in terms of structure. SDRF [44] is a structure rewiring method for the oversquashing issue, which modifies edges with Ricci curvatures. Settings. We set the depth of GNN backbones as 2 layers and adopt the implementations from the PyTorch Geometric Library2 in all experiments. We set the representation dimension of all baselines and HyperIMBA to 256. The parameters of baselines are set as the suggested value in their papers or carefully tuned. For DropEdge, we set the edge dropping/adding probability to 10%. For HyperIMBA, we set the hyperbolic curvature \ud835\udc50= 1 of Poinca\u00e9 model. 2https://github.com/rusty1s/pytorch_geometric \fHyperbolic Geometric Graph Representation Learning for Hierarchy-imbalance Node Classification WWW \u201923, May 1\u20135, 2023, Austin, TX, USA Table 2: Weighted-F1 score and Micro-F1 score (% \u00b1 standard deviation) of node classification on real-world graph datasets. (Result: average score \u00b1 standard deviation; Bold: best; Underline: runner-up; The hyphen symbol indicates experiments results are not accessible due to memory issue or time limits.) 
Model Cora Citeseer Photo Actor Chameleon Squirrel W-F1 M-F1 W-F1 M-F1 W-F1 M-F1 W-F1 M-F1 W-F1 M-F1 W-F1 M-F1 GCN original 79.4\u00b10.9 77.5\u00b11.5 66.3\u00b11.3 62.2\u00b11.2 85.4\u00b12.8 84.6\u00b11.3 21.8\u00b11.3 20.9\u00b11.4 30.5\u00b13.4 30.5\u00b13.3 21.9\u00b11.2 21.9\u00b11.2 ReNode 80.0\u00b10.7 78.4\u00b11.3 66.4\u00b11.0 62.4\u00b11.1 86.2\u00b12.4 85.3\u00b11.6 21.2\u00b11.2 20.2\u00b11.6 30.3\u00b13.2 30.4\u00b12.8 22.4\u00b11.1 22.4\u00b11.1 DropEdge 79.8\u00b10.8 77.8\u00b11.0 66.6\u00b11.4 63.4\u00b11.6 86.8\u00b11.7 85.4\u00b11.3 22.4\u00b11.0 21.4\u00b11.3 30.6\u00b13.5 30.6\u00b13.3 22.8\u00b11.2 22.8\u00b11.2 SDRF 82.1\u00b10.8 79.3\u00b11.0 69.6\u00b10.4 66.6\u00b10.3 32.3\u00b10.7 39.0\u00b11.2 ReNode+TAM 80.1\u00b10.9 78.2\u00b11.6 67.1\u00b11.4 62.3\u00b10.9 87.6\u00b11.3 86.9\u00b11.0 23.1\u00b10.9 22.2\u00b11.3 32.3\u00b10.9 32.1\u00b10.8 22.1\u00b10.4 22.1\u00b10.3 HyperIMBA 83.0\u00b10.3 83.1\u00b10.4 76.3\u00b10.2 73.4\u00b10.3 92.8\u00b10.3 92.5\u00b10.3 30.7\u00b10.2 29.3\u00b10.4 44.1\u00b10.7 42.3\u00b11.1 31.2\u00b12.4 28.4\u00b12.0 GAT original 78.3\u00b11.5 76.4\u00b11.7 64.4\u00b11.7 60.6\u00b11.7 88.2\u00b12.9 86.2\u00b12.6 21.8\u00b11.2 20.9\u00b11.1 29.9\u00b13.5 29.9\u00b13.1 20.5\u00b11.4 20.5\u00b11.4 ReNode 78.9\u00b11.2 77.2\u00b11.5 64.9\u00b11.6 61.0\u00b11.5 89.1\u00b12.4 87.1\u00b12.6 21.5\u00b11.2 20.5\u00b11.1 29.2\u00b12.3 29.1\u00b12.0 20.4\u00b11.8 20.4\u00b11.8 DropEdge 78.7\u00b11.3 76.9\u00b11.5 64.5\u00b11.4 60.5\u00b11.3 88.9\u00b11.9 87.1\u00b12.1 22.9\u00b11.2 21.8\u00b11.1 30.3\u00b11.6 30.2\u00b11.2 21.2\u00b11.5 21.2\u00b11.5 SDRF 77.9\u00b10.7 75.9\u00b10.9 64.9\u00b10.6 61.9\u00b10.9 43.0\u00b11.9 42.5\u00b11.9 ReNode+TAM 78.4\u00b11.3 77.3\u00b11.3 64.2\u00b11.3 63.1\u00b10.8 89.0\u00b11.8 87.3\u00b11.7 21.3\u00b11.2 20.7\u00b11.1 30.9\u00b11.5 30.2\u00b11.8 20.0\u00b11.4 19.4 \u00b11.2 HyperIMBA 83.5\u00b10.3 83.6\u00b10.3 75.0\u00b10.4 73.0\u00b10.4 92.5\u00b10.5 92.1\u00b10.8 30.9\u00b11.0 29.8\u00b11.0 43.2\u00b10.7 42.5\u00b10.6 31.1\u00b11.0 28.7\u00b11.3 GraphSAGE original 75.4\u00b11.6 74.1\u00b11.6 64.8\u00b11.6 60.7\u00b11.6 86.1\u00b12.5 83.3\u00b12.4 24.0\u00b11.2 23.2\u00b11.0 36.5\u00b11.6 36.2\u00b11.6 27.2\u00b11.7 27.2\u00b11.7 ReNode 76.4\u00b10.9 75.0\u00b11.1 65.4\u00b11.7 61.2\u00b11.7 86.5\u00b11.7 84.1\u00b11.7 23.7\u00b11.2 22.8\u00b11.0 36.4\u00b11.9 36.1\u00b11.9 27.7\u00b11.8 27.7\u00b11.8 DropEdge 76.0\u00b11.6 74.5\u00b11.6 65.1\u00b11.4 60.9\u00b11.4 86.2\u00b11.6 83.5\u00b11.4 24.1\u00b11.0 23.3\u00b10.9 37.5\u00b11.4 37.2\u00b11.4 27.5\u00b11.8 27.5\u00b11.8 SDRF 75.7\u00b10.8 74.6\u00b10.8 65.3\u00b10.6 61.4\u00b10.6 41.5\u00b12.6 41.6\u00b12.7 ReNode+TAM 76.0\u00b11.1 74.9\u00b11.0 67.1\u00b12.0 63.4\u00b11.2 86.4\u00b11.4 83.8\u00b11.2 23.6\u00b11.2 22.5\u00b11.3 38.3\u00b11.8 38.1\u00b11.8 27.8\u00b11.4 27.8\u00b11.4 HyperIMBA 72.4\u00b10.3 71.5\u00b10.5 72.8\u00b10.2 70.5\u00b10.3 80.9\u00b11.2 78.2\u00b11.2 35.6\u00b10.6 34.3\u00b11.1 42.9\u00b10.6 42.5\u00b10.6 38.6\u00b11.1 36.9\u00b10.7 Table 3: Weighted-F1 scores (% \u00b1 standard deviation) and improvements (%) results of Ablation Study. (Result: average score \u00b1 standard deviation; Bold: best.) 
Model Cora Citeseer Photo Actor Chameleon Squirrel W-F1 (%) \u0394 (%) W-F1 (%) \u0394 (%) W-F1 (%) \u0394 (%) W-F1 (%) \u0394 (%) W-F1 (%) \u0394 (%) W-F1 (%) \u0394 (%) GCN 79.4\u00b10.9 66.3\u00b11.3 85.4\u00b12.8 21.8\u00b11.3 30.5\u00b13.4 21.9\u00b11.2 HyperIMBA (w/o HMPNN) 82.3\u00b10.3 \u21912.9 71.3\u00b10.7 \u21915.0 92.4\u00b10.3 \u21917.0 29.6\u00b12.9 \u21917.8 39.6\u00b10.9 \u21919.1 24.9\u00b10.8 \u21913.0 HyperIMBA (w/o HAM) 82.9\u00b10.5 \u21913.5 75.8\u00b10.5 \u21919.5 92.6\u00b10.4 \u21917.2 30.1\u00b10.5 \u21918.3 42.4\u00b10.9 \u219111.9 26.3\u00b10.8 \u21914.4 HyperIMBA 83.0\u00b10.3 \u21913.6 76.3\u00b10.2 \u219110.0 92.8\u00b10.3 \u21917.4 30.7\u00b10.2 \u21918.9 44.1\u00b10.7 \u219113.6 31.2\u00b12.4 \u21919.3 5.3 Performance Evaluation Performance on Synthetic Graphs. To verify the hierarchy capturing ability, we evaluate our method on hierarchical organization synthetic graph HNM. The node classes of HNM are three communities with the same hierarchical structure and are evenly distributed in three directions of the graph, as shown in Figure 4. We divide the hierarchical graph into three levels according to the hierarchylevel: the top-level (1, 2, 3-order fractals of HNM), the middle-level (4-order fractals of HNM), and the bottom-level (5-order fractals of HNM), respectively, to verify the effect of the model using labeled nodes at different hierarchy-levels as training samples. We allow randomly sample labeled nodes in low-level to supplement the high-level labeled nodes together as training nodes to reach a consensus for each class. According to Figure 4, HyperIMBA significantly outperforms all baselines, especially with top-level or bottom-level training setup. In addition, unlike the performance of ReNode and TAM increases monotonically from the topto bottom-level, the performance of HyperIMBA and vanilla GCN is higher in the topand bottom-level than in the middle-level. This phenomenon matches perfectly with the hierarchical connectivity pattern which we discussed in Section 3.2, i.e., nodes of a hierarchical graph tend to connect with nodes of different connectivity and betweenness rather than the nodes with similar properties. HyperIMBA benefits from the discrete curvature-aware re-weighting, which effectively alleviates the over-squashing problem caused by cross-hierarchy connectivity pattern. Performance on Real-world Graphs. Table 2 summarizes the performance of HyperIMBA and all baselines on six real-world datasets. Our HyperIMBA shows significant superiority in improving the performance of GCN and GAT on all datasets. It demonstrates that HyperIMBA is capable of capturing the underlying topology and important connectivity patterns, especially for the backbones that can thoroughly learn the topology. Our method has only a few improvements for the backbone on high homophily and weak hierarchical graphs such as Cora. By contrast, our method achieves an overwhelming advantage on graphs with high heterophily datasets (Actor, Chameleon, and Squirrel). TAM improves the performance of ReNode by considering the label connectivity of the neighborhood, but it still does not work well on the graph with poor connectivity (Citeseer). The reason for the poor performance of ReNode is that topological boundaries are difficult to be directly used as decision boundaries on real-world graphs. Compared with SDRF, HyperIMBA further considers the over-squashing problem of supervision information, and the improvement of results also confirms our intuition. 
Note that the performance of HyperIMBA depends on whether global and higher-order topological properties play an important role in learning. For the subgraph-sampling method (GraphSAGE), HyperIMBA can still obtain significant improvement in most cases of incomplete topology information. \fWWW \u201923, May 1\u20135, 2023, Austin, TX, USA Xingcheng Fu, et al. Figure 5: Performances and analyis on Photo with different hierarchy-levels training setting. 5.4 Analysis of HyperIMBA In this subsection, we conduct ablation studies for HAM and HMPNN, to provide further performance analysis for our model. Then we perform a case study of hierarchy-imbalance learning, and provide the observation of the learning performances under different hierarchy-level training settings to explore the intrinsic mechanism of the hierarchy-imbalance issue. We also visualize the learning results to provide further insights into the impact caused by the hierarchy-imbalance issue more intuitively. Ablation Study. We conduct ablation studies for the two main mechanisms of HyperIMBA, hierarchy-aware margin and hierarchyaware message-passing. We choose GCN as the backbone, and the results are shown in Table 3. HAM plays a key role in alleviating the hierarchy-imbalance issue, and demonstrate the superiority of hyperbolic geometry for capturing the underlying hierarchy of graphs. Moreover, HMPNN also significantly alleviates the overcompression problem of label supervision information. In summary, HyperIMBA consistently outperforms the GCN and the other two variants on all real-world datasets. Case study and Visualization. We construct a case study based on the real-world dataset Photo to explore how the labeled nodes at different hierarchy-levels will affect the learning models. We divide five regions in the embedded Poincar\u00e9 disk of Photo, and randomly sample labeled nodes in the regions as training samples, respectively. In order to satisfy the quantity-balanced setting of labeled nodes for each node class, we also perform a random supplement selection of nodes according to a certain probability as in Subsection 5.3. Figure 5 shows the training setting of five hierarchylevels and reports the performances and visualizations of GCN and HyperIMBA for each hierarchy-level using t-SNE[45]. As we can observe in Figure 5, the labeled nodes with different hierarchy-levels significantly affect the shapes and boundaries of the node embedding clusters, which indicates that the hierarchical properties can directly affect the decision boundary of the model by handling the connectivity pattern on the graph. An interesting observation is that the top-level labeled nodes make the embedding distribution much more compact and produce a large number of false positives, which indicates that it has a severe oversquashing problem in the message-passing process. It is consistent with the quantitative analysis in Section 3.2, i.e., the nodes with high connectivity and betweenness refer to connect nodes with low connectivity and betweenness according to hierarchical connectivity patterns. In addition, we observe that the bottom-level nodes with more diverse information, resulting in node clusters with diffuse shapes and wider boundaries, may easily lead to conflicts or overlaps between different node classes. The visualization of the results in Figure 5 shows that HyperIMBA consistently maintains the appropriate node cluster shapes and boundaries under different hierarchy-level training settings. 
6" + } + ], + "Qingyun Sun": [ + { + "url": "http://arxiv.org/abs/2301.00015v1", + "title": "Self-organization Preserved Graph Structure Learning with Principle of Relevant Information", + "abstract": "Most Graph Neural Networks follow the message-passing paradigm, assuming the\nobserved structure depicts the ground-truth node relationships. However, this\nfundamental assumption cannot always be satisfied, as real-world graphs are\nalways incomplete, noisy, or redundant. How to reveal the inherent graph\nstructure in a unified way remains under-explored. We proposed PRI-GSL, a Graph\nStructure Learning framework guided by the Principle of Relevant Information,\nproviding a simple and unified framework for identifying the self-organization\nand revealing the hidden structure. PRI-GSL learns a structure that contains\nthe most relevant yet least redundant information quantified by von Neumann\nentropy and Quantum Jensen-Shannon divergence. PRI-GSL incorporates the\nevolution of quantum continuous walk with graph wavelets to encode node\nstructural roles, showing in which way the nodes interplay and self-organize\nwith the graph structure. Extensive experiments demonstrate the superior\neffectiveness and robustness of PRI-GSL.", + "authors": "Qingyun Sun, Jianxin Li, Beining Yang, Xingcheng Fu, Hao Peng, Philip S. Yu", + "published": "2022-12-30", + "updated": "2022-12-30", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI" + ], + "main_content": "Introduction Graph Neural Networks (GNNs) (Wu et al. 2020) have gained popularity in recent years due to their remarkable success in representing graph data in diverse tasks and applications. Most of the existing GNNs follow the messagepassing paradigm (Gilmer et al. 2017), i.e., exchanging information between neighbors along the graph structure. They take the raw graph structure as the path of information \ufb02ow, assuming the observed structure perfectly depicts the ground-truth relations between nodes. However, these raw graphs are naturally collected from network-structure data (e.g., social networks), which are often noisy, incomplete, and independent of the downstream tasks. There is a gap between the raw structure and the optimal structure for speci\ufb01c tasks. The poor quality of graph structure leads to the poor quality of representations produced by GNNs, making GNNs prone to noise and adversarial attacks (Z\u00a8 ugner, Akbarnejad, and G\u00a8 unnemann 2018; Sun et al. 2018, 2021). Graph structure learning (Zhu et al. 2022) aims to learn a new structure of high quality simultaneously with the graph representations, which has received growing attention for its utility for improving representation quality and robustness. Copyright \u00a9 2023, Association for the Advancement of Arti\ufb01cial Intelligence (www.aaai.org). All rights reserved. Ordered Network Disordered Network Cooperative or Competitive Spontaneous Interactions political party opinion leader Figure 1: Self-organization in political network. Most existing methods optimize the structure with heuristic assumptions (e.g., community (Wang et al. 2021)) or certain structure constraints (e.g., sparsity, low-rank, and smoothness (Jin et al. 2020; Sun et al. 2022b)). However, these assumptions and constraints cannot always be applicable to all graphs and tasks. How to reveal the inherent graph structure in a uni\ufb01ed way remains an under-explored question. 
Most of the graph data in the real-world shows \u201cselforganization\u201d property from molecules (Eigen and Schuster 1977) to social networks (Bonabeau et al. 1997), where the nodes organize their interactions spontaneously through the structure to create a global order amongst themselves. As the example in Fig. 1, the emergence of opinion leaders and the cooperation/competitive behaviors between people form political parties, making the political network more ordered. Since the structure navigates the information \ufb02ow between nodes and decides the graph\u2019s fundamental mechanism, it can be optimized to identify the organization and reduce the disorder of the noisy graph. In this paper, we introduce the self-organized Principle of Relevant Information (PRI) to quantify the structure from an information-theoretic point of view. We propose a novel graph structure learning framework named PRI-GSL, which inherits the merits of PRI to identify the self-organization and reveal the inherent structure of graph. Rather than imposing statistical constraints on the graph data, PRIGSL takes the structure learning as a trade-off between structure redundancy reduction and information preservation, and then use the von Neumann entropy and the Quantum Jensen-Shannon divergence to quantify them. To better capture the contribution of nodes in the self-organization evolution process, we use the quantum continuous walk evolution with multi-scale graph wavelets to characterize node arXiv:2301.00015v1 [cs.LG] 30 Dec 2022 \fstructural roles and incorporate them into structure learning. In this way, PRI-GSL enumerates the potential edges and preserves the most relevant yet least redundant ones, showing in which way the nodes interplay and self-organize with the graph structure. \u2022 We propose PRI-GSL, an information-theoretic graph structure learning framework with the Principle of Relevant Information, providing a simple yet uni\ufb01ed way to quantify the learned structure and unravel the graph selforganization. \u2022 We use the quantum continuous walk with graph wavelets to encode node structural roles in a continuous and timevarying way, which is incorporated in structure learning to fully characterize the nodes in self-organization. \u2022 Extensive experiment results demonstrate the superior effectiveness and robustness of PRI-GSL. Related Work Graph structure learning has gained more attention in recent years (Zhu et al. 2022) to improve the quality of graph representations by learning a better graph structure. Most existing works (Jin et al. 2020; Wang et al. 2021) optimize the structure with assumptions or certain constraints in a heuristic way. Substantial efforts have been made to give a theoretical quanti\ufb01cation for the learned structure. SDRF (Topping et al. 2022) re\ufb01nes the structure based on the Ricci curvature of edges in a greedy pre-process strategy. SIB (Yu et al. 2020a) utilizes the information bottleneck principle to \ufb01nd the most predictive subgraph. VIB-GSL (Sun et al. 2022a) proposes a variational information bottleneck principle to learn a new structure for graph classi\ufb01cation, which is not applicable to the node-level tasks. Graph-PRI (Yu et al. 2022) advances the Principle of Relevant Information for graph sparsi\ufb01cation in an unsupervised way without considering node features and the speci\ufb01c downstream task. Information theory provides a powerful methodology to describe general properties of arbitrarily complex systems. 
In information theory, there are two representative selforganizing principles: Information Bottleneck (IB) (Tishby, Pereira, and Bialek 2000) and Principle of Relevant Information (PRI) (Principe 2010). Both IB and PRI describe different forms of redundancy reduction and information preservation. The famed IB is formulated on the mutual information between independent and identically distributed (i.i.d.) data, which is dif\ufb01cult to model the complex node interactions imposed by the graph structure. PRI shares the spirit of the IB method but its formulation addresses the entropy and relative entropy of a single dataset (Principe 2010), which can be applied to graph data with well-de\ufb01ned informationtheoretic tools. Preliminary Notions Given a graph G = {V, E} where V is the set of N nodes and E is the edge set. A \u2208RN\u00d7N is the adjacency matrix and D is the degree matrix. The Laplacian matrix of the graph G can be de\ufb01ned as L = D \u2212A = U\u039bUT, where U is the eigenvector matrix, \u039b = Diag(\u03bb1, \u00b7 \u00b7 \u00b7 , \u03bbN) and \u03bb1 < \u03bb2 \u2264\u00b7 \u00b7 \u00b7 \u2264\u03bbN are the eigenvalues of L. Graph Structure Learning Given a graph G, graph structure learning (Zhu et al. 2022) aims to learn a new structure \u02dc G simultaneously with the graph representations with the objective function: L = Ltask( \u02dc G, Y ) + \u03b1Lreg( \u02dc G, G), (1) where Ltask is the task-speci\ufb01c objective with respect to the learned graph \u02dc G and the ground truth Y , Lreg imposes constraints on the learned graph and \u03b1 is a hyper-parameter. Principle of Relevant Information PRI (Principe 2010) is a self-organized information-theoretic principle that aims to perform mode decomposition of a random variable to obtain a reduced statistical representation. PRI formulates the redundancy reduction and information preservation as a trade-off between the entropy of reduced representation and its relative entropy given the original data. De\ufb01nition 1 (Principle of Relevant Information). Given a random variable X, the Principle of Relevant Information aims to obtain a reduced representation T with: LPRI = arg min T H(T) + \u03b2D(P(T)||P(X)), (2) where H(T) is the entropy of T and D(P(T)||P(X)) is the divergence of distributions P(T) and P(X). The \ufb01rst term H(T) measures the redundancy of representation T and the second term D(P(T)||P(X)) measures the allowable distortion of the original data. The hyperparameter \u03b2 controls the level of distortion in T. PRI was commonly used in scalar random variables (Wei et al. 2021; Hoyos-Osorio et al. 2021) and de\ufb01ned by R\u00b4 enyi\u2019s formulation of entropy and divergence (R\u00b4 enyi et al. 1961). Graph Structure Learning with Principle of Relevant Information In this work, we propose a graph structure learning framework named PRI-GSL, which merits the Principle of Relevant Information as a guideline for controlling the structure quality. The overall architecture of PRI-GSL is shown in Figure 2. In this section, we \ufb01rst formulate the PRI loss for structure learning, then introduce the Role-aware graph learner and learning process of PRI-GSL. PRI for Graph Structure Learning In the PRI-GSL framework, PRI performs as a selfsupervised regularizer for the quality of the learned graph structure. Motivates by the objective of PRI, the Graph Structure Learning Principle of Relevant Information is: De\ufb01nition 2 (PRI for Graph Structure Learning). 
Given a graph G, the Principle of Relevant Information for graph structure learning aims to learn a re\ufb01ned graph \u02dc G with: LPRI = H( \u02dc G) + \u03b2D( \u02dc G||G), (3) The \ufb01rst term H( \u02dc G) is the redundancy term, which measures the disorder of the learned graph \u02dc G. The larger the H( \u02dc G), the more disordered \u02dc G is. The second term D( \u02dc G||G) \ft Refined Graph Original Graph + Role-aware Structure Learner QCW Evolution Wavelets \u2026 Probability t \u2026 Role Characteriztion Figure 2: Overall Architecture of PRI-GSL. is the distortion term, which measures the discrepancy between two graphs. The smaller the D( \u02dc G||G), the more similar are the distributions of \u02dc G and G. \u03b2 denotes the trade-off between the redundancy reduction of \u02dc G and its discriminative description power of G. As \u03b2 becomes larger, the emphasis is laid more on the distortion term, and more information from G is preserved in \u02dc G. Formulate PRI by von Neumann Entropy and Quantum Jensen-Shannon Divergence The choice of entropy and divergence in PRI is applicationspeci\ufb01c. In this paper, we formulate the PRI loss LPRI by the von Neumann entropy (VNE) (Nielsen and Chuang 2002) and quantum Jensen-Shannon (QJS) divergence (Lamberti et al. 2008) for graph data with complex interactions as in (Yu et al. 2022). For the \ufb01rst redundancy term H( \u02dc G), we propose to measure the structure redundancy by von Neumann entropy (VNE) (Nielsen and Chuang 2002), which has been used in a variety of graph learning studies (Dasoulas et al. 2020; Yu et al. 2022). von Neumann entropy quanti\ufb01es the spectral complexity (or disorder) of graph structure by taking the graph as a quantum system through a mapping between discrete Laplacians and quantum states. A density matrix \u03c1 is a Hermitian and positive semi-de\ufb01nite matrix that is used to encode the probability distributions and describe the state of a quantum mechanical system. For the graph \u02dc G, the von Neumann entropy HvN( \u02dc G) is de\ufb01ned as: HvN( \u02dc G) = HvN(\u02dc \u03c1) = \u2212tr (\u02dc \u03c1 log \u02dc \u03c1) = \u2212 N X i=1 (\u03bbi log \u03bbi) , (4) where \u02dc \u03c1 is the graph density matrix of \u02dc G, tr(\u00b7) denotes trace and {\u03bbi} are the eigenvalues of \u02dc \u03c1. Typically, both the Laplacian matrix and the normalized Laplacian matrix can be used for the mapping from graphs to states (Minello, Rossi, and Torsello 2019). We de\ufb01ne the density matrix \u02dc \u03c1 = \u02dc L tr(\u02dc L) = \u02dc L 2|E| based on the Laplacian matrix \u02dc L of \u02dc G, which models the continuous information diffusion process (De Domenico and Biamonte 2016). For the second distortion term D( \u02dc G||G), we use the quantum Jensen-Shannon (QJS) divergence (Lamberti et al. 2008) between the graph density matrices (\u02dc \u03c1 and \u03c1) of G and \u02dc G. The quantum Jensen-Shannon divergence has been widely used as a generalization of the classical Jensen-Shannon divergence to quantum states of graph data (De Domenico et al. 2015; Bai et al. 2015), which is symmetric, negative de\ufb01nite and bounded (0 \u2264DQJS \u22641). DQJS( \u02dc G||G) = HvN \u0012 \u02dc \u03c1 + \u03c1 2 \u0013 \u22121 2HvN (\u02dc \u03c1) \u22121 2HvN (\u03c1) . (5) Combining the redundancy term in Eq. (4) and the distortion term in Eq. 
(5), we can obtain the following objective: LPRI = HvN( \u02dc G) + \u03b2DQJS( \u02dc G||G) = \u03b2HvN \u0012 \u02dc \u03c1 + \u03c1 2 \u0013 + 2 \u2212\u03b2 2 HvN (\u02dc \u03c1) \u2212\u03b2 2 HvN (\u03c1) \u2261\u03b2HvN \u0012 \u02dc \u03c1 + \u03c1 2 \u0013 + 2 \u2212\u03b2 2 HvN (\u02dc \u03c1) . (6) We neglect HvN (\u03c1) in the last line because it\u2019s a constant value during optimization. The above formalism provides a uni\ufb01ed way of quanti\ufb01cation and comparison for the learned structure. Then we use a graph structure learner to obtain \u02dc G. Role-aware Graph Structure Learner In this section, we introduce the Role-aware Graph Structure Learner in PRI-GSL, which aims to learn a better graph \u02dc G that preserves the graph self-organization. Considering the evolution process in self-organization, we characterize the nodes\u2019 roles in a continuous and time-varying way and then incorporate both the merits of features as well as structural roles to re\ufb01ne the graph. Structural Role Encoding The structural role of the node represents its contribution to the overall information \ufb02ow of the graph, which can provide key insights into the identi\ufb01cation of graph organization. We propose to model graph state by quantum continuous walk and use the time-evolution operator with graph wavelets to generate role encodings. (1) Model graph state by QCW. Recall that we apply the von Neumann Entropy and the QJS divergence for PRI formulation, which takes the whole graph as a quantum system. To investigate the nodes\u2019 roles in this quantum system, we use the quantum continuous walk (QCW) (Childs 2010; Bai \fet al. 2015) to build maps of how information \ufb02ows through the graph in the perspective of graph state evolution. QCW is the quantum mechanical counterpart of the continuous-time random walk in a graph, which describes the propagation of a quantum particle evolving continuously in time on the nodes. The QCW on a graph G is de\ufb01ned as a dynamical process over the nodes that takes place on a N-dimensional Hilbert space H = span({|v\u27e9, v \u2208V }). The evolution of the walker is governed by the Schr\u00a8 odinger equation d dt |\u03c8t\u27e9= \u2212iH |\u03c8t\u27e9. (7) |\u03c8t\u27e9represents the state of the walk at time t, which is a time-dependent amplitude vector on nodes. H is the Hamiltonian operator, which accounts for the total energy of the graph and governs the time evolution of the quantum continuous walk. Given an initial state |\u03c80\u27e9\u2208H, the state of walker |\u03c8t\u27e9evolves in time according to |\u03c8t\u27e9= U(t) |\u03c80\u27e9, U(t) := e\u2212iHt, (8) where U(t) is the unitary time-evolution operator. (2) Diffusion by graph wavelets. To give a full characterization of the structural properties, we use the spectral graph wavelets \u03a8 (Hammond, Vandergheynst, and Gribonval 2011) as the Hamiltonian H in QCW. Then the structural information residing in the graph is encoded in U(t). U(t) is a polynomial in \u03a8 for all t, thus any matrix that commutes with \u03a8 also commutes with U(t) (Coutinho and Godsil 2021). We adopt the heat kernel gs(\u039b) = e\u2212\u039bs to obtain spectral graph wavelets, where the scaling parameter s controls the spread radii of the diffusion process and larger s allows farther diffusion. The spectral wavelet basis is \u03a8s = U\u039bsUT = (\u03a8s (1) |\u03a8s (2) | \u00b7 \u00b7 \u00b7 |\u03a8s (N)) , (9) where \u039bs = gs(\u039b). 
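As a concrete reference for the quantities above, the following is a minimal NumPy sketch (not the authors' implementation; all function names are illustrative) of the graph density matrix, the von Neumann entropy of Eq. (4), the QJS divergence of Eq. (5), the resulting PRI regularizer of Eq. (6), and the heat-kernel wavelet basis Ψs of Eq. (9):

import numpy as np

def graph_laplacian(adj):
    # L = D - A for a symmetric adjacency matrix A.
    return np.diag(adj.sum(axis=1)) - adj

def density_matrix(adj):
    # rho = L / tr(L) = L / (2|E|), the graph state used throughout.
    lap = graph_laplacian(adj)
    return lap / np.trace(lap)

def von_neumann_entropy(rho, eps=1e-12):
    # H_vN(rho) = -sum_i lambda_i log lambda_i over the eigenvalues of rho (Eq. 4).
    lam = np.linalg.eigvalsh(rho)
    lam = lam[lam > eps]
    return float(-(lam * np.log(lam)).sum())

def qjs_divergence(rho, sigma):
    # D_QJS(rho || sigma) = H_vN((rho + sigma)/2) - H_vN(rho)/2 - H_vN(sigma)/2 (Eq. 5).
    return (von_neumann_entropy((rho + sigma) / 2.0)
            - 0.5 * von_neumann_entropy(rho)
            - 0.5 * von_neumann_entropy(sigma))

def pri_loss(adj_refined, adj_original, beta=1.0):
    # L_PRI = H_vN(G_tilde) + beta * D_QJS(G_tilde || G) (first line of Eq. 6).
    rho_t, rho = density_matrix(adj_refined), density_matrix(adj_original)
    return von_neumann_entropy(rho_t) + beta * qjs_divergence(rho_t, rho)

def heat_wavelet_basis(adj, s=1.0):
    # Psi_s = U g_s(Lambda) U^T with the heat kernel g_s(lambda) = exp(-lambda * s) (Eq. 9).
    lam, U = np.linalg.eigh(graph_laplacian(adj))
    return U @ np.diag(np.exp(-s * lam)) @ U.T

The dense eigendecomposition is used here only for readability; as discussed later, both the entropy and the wavelets admit cheaper approximations, and the trainable version of the loss would be implemented with differentiable operators rather than this standalone sketch.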
In this way, the spectral graph wavelet \u03a8s(a) centered at node va associated with \ufb01lter gs will be given by an N-dimensional vector: \u03a8s(a) = U\u039bsUT\u03b4a, (10) where \u03b4a is the one-hot vector of node va. \u03a8s(a) is a Ndimensional vector where the b-th wavelet coef\ufb01cient of \u03a8s(a) represents the information that va received from vb. Nodes playing similar roles have similar wavelet coef\ufb01cient. (3) Characterize by the time-evolution operator. Since the time-evolution operator U(t) re\ufb02ects the graph state evolution, we treat the wavelets \u03a8 as probability distributions over graph and use \u03c6(\u03a8, t) = E[U(t)] = E[e\u2212i\u03a8t] as the characteristic function to uncover nodes\u2019 roles in information diffusion. The empirical characteristic function of va is: \u03c6s(va, t) = 1 N N X n=1 e\u2212i\u03a8s(a)t. (11) \u03c6s(va, t) can capture all the moments (including higherorder moments) of the given distribution \u03a8s(a). We sample at T different time points on the time-evolution operator and then concatenate the values: hs(va) = [Re (\u03c6s (va, t)) , Im (\u03c6s (va, t))]t=t1,t2,\u00b7\u00b7\u00b7 ,T . (12) In general, the nodes play different roles across different scales. Hence we utilize a multi-scale wavelet diffusion strategy to capture both the local and global structural roles of nodes. We integrate information across different radii of neighborhoods by jointly considering a set of different values of s. We can obtain multi-scale structure role encodings ha \u2208R2T M by concatenating the structural encodings at several different scales S : {s1, s2, \u00b7 \u00b7 \u00b7 , sM}: ha = (hs1 (va) |hs2 (va) | \u00b7 \u00b7 \u00b7 |hsM (va)) . (13) Iterative Structure Learning After obtaining the structural role encodings, we use a metric function that accounts for both feature information and the role-based similarities to measure the possibility of edge existence. PRI-GSL is agnostic to various metric functions and we choose the multihead cosine similarity function here: aR(e\u22121) ij = 1 m m X h=1 cos (Wh \u00b7 \u0010 z(e\u22121) i |h(e\u22121) i \u0011 , Wh \u00b7 \u0010 z(e\u22121) j |h(e\u22121) j \u0011\u0011 , (14) where m is the number of heads, Wh is the weight matrix of the h-th head, z(e\u22121) i and hR(e\u22121) i denote the representation vector and the structural role encoding vector of node vi in the (e \u22121)-th epoch, and | denotes the concatenation operation. With the above structure learning strategy, we can obtain a role-aware adjacency matrix in the e-th epoch: A(e) R = n aR(e\u22121) ij o , i, j \u2208{1, 2, \u00b7 \u00b7 \u00b7 , N}. (15) Learning Process of PRI-GSL Dynamic Structure Fusion The input graph structure determines the learning performance to a certain extent. To avoid the non-convergence or unstable training brought by the poor quality of learned structure at the beginning of training, we hence incorporate the original graph structure A as supplementary to formulate an optimized graph structure \u02dc A: \u02dc A(e) = \u03b3D\u22121 2 AD\u22121 2 +(1 \u2212\u03b3)\u00b7RowNorm \u0010 A(e) R \u0011 , (16) where RowNorm(\u00b7) denotes the row-wise normalization function, \u03b3 is a constant that control the contribution of original structure. Here we use a dynamic decay mechanism for \u03b3 to enable the role-aware structure A(e) R to play a more and more important role during training. 
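Putting the pieces of this subsection together, the sketch below is an illustrative NumPy rendering (not the released code) of the multi-scale role encodings of Eqs. (10)-(13), a single-head, untrained cosine similarity standing in for the learned multi-head metric of Eq. (14), and the dynamic structure fusion of Eq. (16). How negative similarity scores are handled before row normalization is an implementation choice not fixed by the text, so they are simply clipped here:

import numpy as np

def structural_role_encodings(adj, scales=(0.5, 1.0), t_points=(1.0, 2.0, 3.0, 4.0)):
    # Multi-scale role encodings h_a (Eqs. 10-13): sample the empirical
    # characteristic function of each node's wavelet coefficients at several
    # time points t and scales s, then concatenate real and imaginary parts.
    lam, U = np.linalg.eigh(np.diag(adj.sum(axis=1)) - adj)
    blocks = []
    for s in scales:
        psi = U @ np.diag(np.exp(-s * lam)) @ U.T      # column a of psi is Psi_s(a)
        for t in t_points:
            phi = np.exp(-1j * psi * t).mean(axis=0)   # phi_s(v_a, t), Eq. (11)
            blocks.append(np.stack([phi.real, phi.imag], axis=1))
    return np.concatenate(blocks, axis=1)              # shape (N, 2*T*M), Eq. (13)

def role_aware_scores(z, h):
    # Untrained, single-head stand-in for the multi-head cosine metric of Eq. (14):
    # score every node pair by the cosine similarity of [z_i || h_i] and [z_j || h_j].
    x = np.concatenate([z, h], axis=1)
    x = x / (np.linalg.norm(x, axis=1, keepdims=True) + 1e-12)
    return x @ x.T

def fuse_with_original(adj, scores, gamma=0.8):
    # Dynamic structure fusion of Eq. (16): gamma * D^{-1/2} A D^{-1/2}
    # + (1 - gamma) * RowNorm(role-aware scores); gamma is decayed over epochs.
    d_inv_sqrt = np.diag(1.0 / np.sqrt(np.maximum(adj.sum(axis=1), 1e-12)))
    a_norm = d_inv_sqrt @ adj @ d_inv_sqrt
    pos = np.maximum(scores, 0.0)                      # clip negative cosines (a simplification)
    return gamma * a_norm + (1.0 - gamma) * pos / (pos.sum(axis=1, keepdims=True) + 1e-12)

In the full model, the head-wise weight matrices of Eq. (14) and the decay schedule of γ are learned or scheduled per epoch, as summarized in Algorithm 1.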
Then the re\ufb01ned structure is inputted into a GNN encoder for node representation vectors Z \u2208RN\u00d7d and classi\ufb01cation: Z(e) = GNN-Encoder( \u02dc A(e), X). (17) Objective of PRI-GSL The overall loss L of PRI-GSL is composed of two terms, the classi\ufb01cation loss Lcls and the structure PRI loss LPRI in Eq. (6), given by: L = Lcls + \u03b1LPRI = HCE \u0010 \u02dc G, Y \u0011 + \u03b1 \u0010 HvN \u0010 \u02dc G \u0011 + \u03b2DQJS \u0010 \u02dc G||G \u0011\u0011 , (18) where HCE(\u00b7) is the cross-entropy loss for classi\ufb01cation and \u03b1 is a hyper-parameter to balance the two loss terms. The overall process of PRI-GSL is shown in Algorithm 1. \fAlgorithm 1: The overall process of PRI-GSL for node classi\ufb01cation Input: Graph G with node labels Y ; Number of training epochs Epochs; Wavelet scale set S; Timepoints T; Hyper-parameters \u03b1, \u03b2 and \u03b3. Output: Re\ufb01ned graph \u02dc G; Predicted labels \u02c6 Y . 1 Parameter initialization; 2 for e = 1, 2, \u00b7 \u00b7 \u00b7 , Epochs do // Structural Role Encoding 3 for s \u2208S do 4 \u03a8(e\u22121) s \u2190Eq. (9), \u03c6s(va, t) \u2190Eq. (11); 5 h(e\u22121) s (va) \u2190Eq. (12); 6 end 7 h(e\u22121) a \u2190Eq. (13); // Graph Structure Learning 8 aR(e) ij \u2190Eq. (14), A(e) R = n aR(e) ij o ; 9 \u02dc A(e) \u2190Eq. (16), \u02dc G(e) \u2190(X, \u02dc A(e)); // Learn Node Representations 10 Z(e) = GNN-Encoder( \u02dc A(e), X); // Optimize 11 L(e) PRI = HvN( \u02dc G(e)) + \u03b2DQJS( \u02dc G(e)||G); 12 L(e) = L(e) cls( \u02dc G(e), Y ) + \u03b1L(e) PRI; 13 Update model parameters to minimize L(e). 14 end Approximation Recall that the computation of von Neumann entropy and spectral wavelets requires the full eigenvalue decomposition of the Laplacian matrix, which takes O(N 3) time. The von Neumann entropy can be approximated with linear complexity O(|V | + |E|) (Chen et al. 2019). In the experiments, we still use the basic von Neumann entropy. As for the graph wavelets, we use the Chebyshev polynomial approximation (Shuman, Vandergheynst, and Frossard 2011) to compute \u03a8, reducing the computational complexity to O(K|E|), where K is the order of Chebyshev polynomials. Properties of \u02dc G Learned by PRI-GSL PRI-GSL provides a uni\ufb01ed way to control the quality of learned graph structure in terms of sparsity, centrality, and nuisance invariance property. Sparsity and Centrality The von Neumann entropy has close connections with the structure sparsity and centrality. As indicated in (Passerini and Severini 2008), given a graph G, let G\u2032 = G + {x, y} with V (G\u2032) = V (G) and E(G\u2032) = E(G) \u222a{x, y}, then HvN(\u03c1G\u2032) \u2265dG\u2032\u22122 dG\u2032 HvN(\u03c1G). The von Neumann entropy tends to grow with the increasing number of edges. The graph centrality (i.e., the extent to which a graph is organized around some central nodes) can be measured as a quantum relative entropy between the relative degree distribution and the uniform distribution (Simmons, Coon, and Datta 2018), which is given by D(\u03c1G||IN) := tr(\u03c1G(log \u03c1G \u2212log I/N)) = log N \u2212HvN(G). The von Neumann entropy tends to grow with the increasing regularity of the graph. The above conclusions suggest that minimizing HvN( \u02dc G) leads to a sparse and centralized structure. Nuisance Invariance \u02dc G only preserves the most relevant yet least redundant information in the observed graph G and is invariant to nuisances in data. 
Suppose Gn \u2208G the taskirrelevant nuisance in G, the relevance of \u02dc G and Gn can be formulated as the divergence between their conditional distributions predictions of the desired labels Y (Yu et al. 2020b): E h DKL \u0010 p \u0010 Y | \u02dc G \u0011\u0011 ||p (Y |Gn) i . Since Gn is irrelevant with Y , we have p(Y |Gn) = p(Y ). Minimizing the cross-entropy loss (i.e., the mutual information I(Y ; \u02dc G) between \u02dc G and Y ) is equivalent to minimizing the relevance between \u02dc G and Gn: E h DKL \u0010 p \u0010 Y | \u02dc G \u0011\u0011 ||p (Y |Gn) i = E h DKL \u0010 p \u0010 Y | \u02dc G \u0011\u0011 ||p (Y ) i = ZZ \uf8eb \uf8edp \u0010 Y | \u02dc G \u0011 log p \u0010 Y | \u02dc G \u0011 p (Y ) \uf8f6 \uf8f8p \u0010 \u02dc G \u0011 = ZZ p \u0010 Y | \u02dc G \u0011 log p \u0010 Y, \u02dc G \u0011 p (Y ) p \u0010 \u02dc G \u0011 = I(Y ; \u02dc G). (19) Experiments We evaluate PRI-GSL on node classi\ufb01cation and graph denoising tasks to verify its capability of improving the effectiveness and robustness of graph representation learning. Then we provide the analyses of the PRI loss, the structural role encodings, and the learned structure. Experimental Settings Datasets We select datasets with different homophily ratios h (Pei et al. 2020) to analyze methods\u2019 generalization on graphs with different properties. The evaluation datasets are Squirrel, Chameleon (Rozemberczki, Allen, and Sarkar 2021), Actor (Pei et al. 2020), CiteSeer, PubMed, Cora (Sen et al. 2008) and Photo (Shchur et al. 2018). Baselines We consider three types of baselines: (1) Graph neural networks: GCN (Kipf and Welling 2016), GAT (Veli\u02c7 ckovi\u00b4 c et al. 2017) and GraphSAGE (Hamilton, Ying, and Leskovec 2017); (2) Graph sparsi\ufb01cation methods: DropEdge (Rong et al. 2019), NeuralSparse (Zheng et al. 2020), and Graph-PRI (Yu et al. 2022); (3) Graph structure learning methods: IDGL (Chen, Wu, and Zaki 2020), Pro-GNN (Jin et al. 2020), SDRF (Topping et al. 2022), and SLAPS (Fatemi, El Asri, and Kazemi 2021). Parameter Settings We re-implement the NeuralSparse (Zheng et al. 2020) and SDRF (Topping et al. 2022). The parameters of baseline methods are set to the suggested value in their papers or carefully tuned for \fTable 1: Accuracy \u00b1 standard deviation (%) of node classi\ufb01cation. (Bold: best result; Underlined: runner up. 
)
Method | Squirrel (h=0.22) | Actor (h=0.24) | Chameleon (h=0.25) | CiteSeer (h=0.72) | PubMed (h=0.79) | Cora (h=0.83) | Photo (h=0.83)
GCN | 22.22±1.24 | 22.85±1.64 | 32.16±2.76 | 66.31±1.12 | 74.11±3.65 | 79.10±0.77 | 85.61±2.20
GAT | 22.64±1.25 | 23.53±1.33 | 32.05±2.56 | 63.85±2.45 | 73.02±2.52 | 77.86±1.48 | 87.97±2.73
GraphSAGE | 28.79±1.74 | 24.22±1.44 | 37.05±2.35 | 64.80±1.83 | 72.61±2.95 | 75.23±1.31 | 86.23±2.53
DropEdge | 22.35±1.12 | 23.84±1.20 | 32.62±2.75 | 66.68±1.38 | 75.97±0.82 | 79.30±0.84 | 86.05±1.78
NeuralSparse | 29.02±1.10 | 24.50±1.42 | 47.30±2.22 | 67.82±1.18 | 74.87±2.77 | 81.47±1.43 | 89.40±1.85
Graph-PRI | 28.44±2.10 | 23.81±2.31 | 42.39±1.99 | 69.24±1.25 | 76.25±1.44 | 79.07±1.12 | 88.30±2.11
IDGL | 29.13±2.94 | 27.44±5.80 | 49.80±4.80 | 67.94±0.28 | 75.32±1.45 | 83.23±0.62 | 88.89±2.55
Pro-GNN | 27.18±1.28 | 24.82±2.81 | 48.54±4.87 | 66.68±2.02 | 75.44±3.54 | 82.14±0.58 | 87.28±1.85
SDRF | >1 day | >1 day | 41.05±1.17 | 69.97±0.28 | >1 day | 81.94±0.59 | >1 day
SLAPS | 25.29±1.06 | 23.10±3.39 | 40.24±1.80 | 68.58±1.46 | 75.64±0.77 | 79.27±1.54 | 88.48±2.47
PRI-GSL | 33.87±2.08 | 28.55±2.04 | 51.83±2.44 | 69.34±2.64 | 76.77±3.20 | 83.67±2.09 | 92.40±1.15
Figure 3: PRI-GSL on noisy graphs (accuracy under different ratios of deleted and added edges).
fairness. For the GNN encoders, we use a 2-layer GCN for node classification. We set the representation dimension d=32, the Chebyshev polynomial order K=10, the number of time points T=4, the number of scales M=2, and the number of heads m=4. The other hyper-parameters (α, β, and γ) are tuned for each dataset.
Evaluation Results and Analysis
Node Classification. We set the number of nodes in each class to 20/30 for training/validation and take the remaining nodes for testing. The accuracy and standard deviation over 10 random splits are shown in Table 1. The best results are shown in bold and the runner-ups are underlined. Our PRI-GSL achieves the best performance on both homophilic and heterophilic datasets, showing the effectiveness of utilizing the self-organization property when mining the latent inherent structure. Generally, the graph structure learning methods show better performance than GNNs and graph sparsification methods. Pro-GNN and SLAPS perform well on the homophilic graphs but show unsatisfactory performance on the heterophilic graphs (Actor, Chameleon, and Squirrel), demonstrating the limitation of using heuristic assumptions for structural constraints. Although Graph-PRI also uses PRI for graph sparsification, it achieves fewer improvements since it does not take the node features and the downstream task into consideration.
Figure 4: Ablation study results of PRI-GSL on Cora, CiteSeer, and Chameleon.
Graph Denoising. To evaluate PRI-GSL's ability to remove noisy information, we generate synthetic noisy datasets by adding/deleting edges on Cora following (Chen, Wu, and Zaki 2020; Sun et al. 2022a).
Speci\ufb01cally, we randomly add/delete 25%, 50%, 75% edges for 5 times and show the mean accuracy (solid line) and standard deviation (shaded region) in Fig. 3. The performance of GCN and GraphPRI decreases dramatically with the increasing noise level. The structure learning methods, IDGL and PRI-GSL, show more robustness compared to vanilla GCN and Graph-PRI. Adding edge hurts more than deleting edges, indicating the importance of removing redundant information in the structure. PRI-GSL consistently shows better performance under different levels of external noise than the other baselines. Even though Graph-PRI shares the same spirit with PRIGSL, it fails to distinguish whether the noise is task-relevant and shows the same poor robustness as GCN. The performance of PRI-GSL on noisy graphs also demonstrates the nuisance invariance property. Ablation Study To illustrate the advantages of the guidance of PRI and structural role information, we compare PRI-GSL with three variants: (1) PRI-GSL (w/o PRI) that removes the PRI loss, (2) PRI-GSL (w/o RE) that removes the role encodings, and (3) GSL that removes both the PRI loss and the role encoding. The results of variants on 5 random split datasets are shown in Fig. 4. As we can observe, both the PRI loss and the structural role encoding bene\ufb01t \f(a) Original Graph. (b) Graph-PRI. (c) Pro-GNN. (d) IDGL. (e) PRI-GSL. Figure 5: Visualization of the original graph of Cora and learned graphs by Graph-PRI, Pro-GNN, IDGL, and PRI-GSL. 0 25 50 75 100 125 150 Epochs 0.000029 0.000030 0.000031 0.000032 0.000033 HvN( \u0303 G) +2.9187 converge at 2.9187301 0 25 50 75 100 125 150 Epochs 2.9185 2.9190 2.9195 DQJS( \u0303 G||G) Figure 6: The variations of HvN( \u02dc G) and DQJS( \u02dc G||G). the classi\ufb01cation, where the structural role encoding brings more improvement. This suggests that it\u2019s important to capture the node\u2019s contribution to the graph information \ufb02ow when identifying the graph organization. Empirical Behavior of HvN( \u02dc G) and DQJS( \u02dc G||G) We analyze the learning dynamics of PRI-GSL by measuring the variations of HvN( \u02dc G) and DQJS( \u02dc G||G) on Cora with \u03b1=0.1 and \u03b2=1 in Fig. 6. The shadowed area is enclosed by the min and max value of four training runs. The solid line in the middle is the mean value of each epoch. HvN( \u02dc G) \ufb01rst increases for about 50 epochs with the structure exploration and then decreases to converge after about the 80-th epoch, indicating that the learned structure is with high certainty. DQJS( \u02dc G||G) bumps during the training process. This may be because the model continues to seek a balance of structure redundancy and distortion during the training process. Hyper-parameter Analysis We analyze the impact of hyper-parameters including \u03b1 controlling the importance of the PRI loss in Eq. (18) and \u03b2 trading off redundancy and distortion in Eq. (6). The results are shown in Fig. 7. PRIGSL achieves the best performance with \u03b1=1 on Cora and \u03b1=0.4 on CiteSeer, which indicates that PRI-GSL bene\ufb01ts from the PRI loss. As for the \u03b2 in the PRI loss, when the distortion term has a weight more than 2 compared to the redundancy term, PRI-GSL could reach satisfactory performance on both datasets. This suggests that the distortion term dominates the PRI loss in PRI-GSL. 
That is to say, the learned 0.1 0.2 0.4 1.0 2.0 \u03b1 65 70 75 80 85 90 Accuracy (%) 83.28 83.22 83.33 84.1 83.28 69.67 69.03 70.42 69.03 69.03 Cora CiteSeer 1 2 3 4 5 \u03b2 65 70 75 80 85 90 Accuracy (%) 82.33 83.05 83.24 83.34 83.24 69.11 69.11 69.56 70.11 69.67 Cora CiteSeer Figure 7: Parameter sensitivity of \u03b1 and \u03b2. structure should preserve enough information from the original graph to perform well on the downstream task. Visualization In Figure 5, we visualize the original graph structure of Cora and the graphs learned by PRI-Graph, Pro-GNN, IDGL, and PRI-GSL using networkx. The nodes\u2019 colors indicate their classes, the labeled nodes are solid and the unlabeled nodes are hollow. The edges are not shown for clarity and the layout of nodes represents their connectivities. Graph-PRI has little effect on the overall property of graph structure. Even though Pro-GNN and IDGL can make nodes within different classes more separate, there are still some overlapping and entangled areas. Bene\ufb01ting from the structural role encoding, PRI-GSL can obtain the structure with separate clusters with similar shapes and clearer class boundaries, showing how the nodes within a class are organized." + }, + { + "url": "http://arxiv.org/abs/2208.08302v1", + "title": "Position-aware Structure Learning for Graph Topology-imbalance by Relieving Under-reaching and Over-squashing", + "abstract": "Topology-imbalance is a graph-specific imbalance problem caused by the uneven\ntopology positions of labeled nodes, which significantly damages the\nperformance of GNNs. What topology-imbalance means and how to measure its\nimpact on graph learning remain under-explored. In this paper, we provide a new\nunderstanding of topology-imbalance from a global view of the supervision\ninformation distribution in terms of under-reaching and over-squashing, which\nmotivates two quantitative metrics as measurements. In light of our analysis,\nwe propose a novel position-aware graph structure learning framework named\nPASTEL, which directly optimizes the information propagation path and solves\nthe topology-imbalance issue in essence. Our key insight is to enhance the\nconnectivity of nodes within the same class for more supervision information,\nthereby relieving the under-reaching and over-squashing phenomena.\nSpecifically, we design an anchor-based position encoding mechanism, which\nbetter incorporates relative topology position and enhances the intra-class\ninductive bias by maximizing the label influence. We further propose a\nclass-wise conflict measure as the edge weights, which benefits the separation\nof different node classes. Extensive experiments demonstrate the superior\npotential and adaptability of PASTEL in enhancing GNNs' power in different data\nannotation scenarios.", + "authors": "Qingyun Sun, Jianxin Li, Haonan Yuan, Xingcheng Fu, Hao Peng, Cheng Ji, Qian Li, Philip S. Yu", + "published": "2022-08-17", + "updated": "2022-08-17", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI" + ], + "main_content": "INTRODUCTION Graph learning [13, 24, 52] has gained popularity over the past years due to its versatility and success in representing graph data across a wide range of domains [9, 14, 26, 40, 50]. Graph Neural Networks (GNNs) [39, 47] have been the \u201cbattle horse\u201d of graph learning, which propagate the features on the graph by exchanging information between neighbors in a message-passing paradigm [15]. 
Due to the asymmetric and uneven topology, learning on graphs by GNNs suffers a specific imbalance problem, i.e., topology-imbalance. Topology-imbalance [7] is caused by the uneven position distribution of labeled nodes in the topology space, which is inevitable in real-world applications due to data availability and the labeling costs. For example, we may only have information for a small group of users within a local community in social networks, resulting in a serious imbalance of labeled node positions. The uneven position distribution of labeled nodes leads to uneven information propagation, resulting in the poor quality of learned representations. Although the imbalance learning on graphs has attracted many research interests in recent years, most of them focus on the classimbalance issue [30, 46], i.e., the imbalanced number of labeled arXiv:2208.08302v1 [cs.LG] 17 Aug 2022 \fCIKM \u201922, October 17\u201321, 2022, Atlanta, GA, USA Sun et al. va vb vc under-reaching over-squashing X unlabeled nodes labeled nodes Figure 1: Schematic diagram of under-reaching and oversquashing in the topology-imbalance issue. nodes of each class. The topology-imbalance issue is proposed recently and is still under-explored. The only existing work, ReNode [7], provides an understanding of the topology-imbalance issue from the perspective of label propagation and proposes a sample re-weighting method. However, ReNode takes the node topological boundaries as decision boundaries based on a homophily assumption, which does not work with real-world graphs. The strong assumption leads to poor generalization and unsatisfied performance of ReNode (see Section 5.2.1). There are two remaining questions: (1) Why does topology-imbalance affect the performance of graph representation learning? and (2) What kind of graphs are susceptible to topology-imbalance? To answer the above two questions, how to measure the influence of labeled nodes is the key challenge in handling topology-imbalance due to the complex graph connections and the unknown class labels for most nodes in the graph. New understanding for topology-imbalance. In this work, we provide a new understanding of the topology-imbalance issue from a global view of the supervision information distribution in terms of under-reaching and over-squashing: (1) Under-reaching: the influence of labeled nodes decays with the topology distance [3], resulting in the nodes far away from labeled nodes lack of supervision information. In Figure 1, the node \ud835\udc63\ud835\udc4ecannot reach the valuable labeled node \ud835\udc63\ud835\udc50within the receptive field of the GNN model, resulting in the quantity of information it received is limited. (2) Over-squashing: the supervision information of valuable labeled nodes is squashed when passing across the narrow path together with other useless information. In Figure 1, the valuable supervision information of \ud835\udc63\ud835\udc4fto \ud835\udc63\ud835\udc4eis compressed into a vector together with the information of many nodes belonging to other classes, resulting in the quality of supervision information that \ud835\udc63\ud835\udc4ereceived being poor. Then we introduce two metrics (reaching coefficient and squashing coefficient) to give a quantitative analysis of the relation between the learning performance, label positions, and graph structure properties. We further draw a conclusion that better reachability and lower squashing to labeled nodes lead to better classification performance for GNN models. 
Present work. In light of the above analysis, we propose a Position-Aware STructurE Learning method named PASTEL, which directly optimizes the information propagation path and solves the problem of topology-imbalance issue in essence. The key insight of PASTEL is to enable nodes within the same class to connect more closely with each other for more supervision information. Specifically, we design a novel anchor-based position encoding mechanism to capture the relative position between nodes and incorporate the position information into structure learning. Then we design a class-wise conflict measure based on the Group PageRank, which measures the influence from labeled nodes of each class and acts as a guide to increase the intra-class connectivity via adjusting edge weight. The main contributions are as follows: \u2022 We provide a new understanding of the topology-imbalance issue from the perspective of supervision information distribution in terms of under-reaching and over-squashing and provide two new quantitative metrics for them. \u2022 Equipped with the proposed position encodings and class-wise conflict measure, PASTEL can better model the relationships of node pairs and enhance the intra-class inductive bias by maximizing the label influence. \u2022 Experimental results demonstrate that the proposed PASTEL enjoys superior effectiveness and indeed enhances the GNN model\u2019s power for in-the-wild extrapolation. 2 RELATED WORK 2.1 Imbalance Learning Imbalanced classification problems [18, 41] have attracted extensive research attention. Most existing works [16, 25] focus on the class-imbalance problem, where the model performance is dominated by the majority class. The class-imbalance learning methods can be roughly divided into two types: data-level resampling and algorithm-level re-weighting. Re-sampling methods re-sample [2, 5, 48] or augment data [30] to balance the number of data for each class during the data selection phase. Re-weighting methods [4, 11, 32] adjust different weights to different data samples according to the number of data during the training phase. For the graph-specific topology-imbalance issue as mentioned in Section 1, directly applying these methods to the graph data fails to take the special topology properties into consideration. ReNode [7] is the first work for the graph topology-imbalance issue, which follows the paradigm of classical re-weighting methods. Specifically, ReNode defines an influence conflict detection based metric and re-weights the labeled nodes based on their relative positions to class boundaries. However, ReNode is limited by its homophily assumption and only has a slight performance improvement. In this paper, PASTEL alleviates topology-imbalance by learning a new structure that maximizes the intra-class label influence, which can be seen as \u201clabel re-distribution\u201d in the topology space. 2.2 Graph Structure Learning Graph structure learning [55] learns an optimized graph structure for representation learning and most of them aim to improve the robustness [20, 54] of GNN models. There are also some works [8, 10, 12, 38, 42] that utilize the structure learning to improve the graph representation quality. As for the over-squashing problem, [45] assigns different weights to edges connected to two nodes of the same class for better representations. However, [45] still fails with the issue of under-reaching. 
SDRF [42] rewires edges according \fPosition-aware Structure Learning for Graph Topology-imbalance by Relieving Under-reaching and Over-squashing CIKM \u201922, October 17\u201321, 2022, Atlanta, GA, USA to the Ricci curvatures to solve the over-squashing problem by only considering topology properties. Multiple measurements in existing structure learning works are leveraged for modeling node relations, including node features [53], node degrees [20], node encodings [51] and edge attributes [54]. The node positions play an important role in generating discriminative representations [49] and are seldom considered in structure learning. In this work, we advance the structure learning strategy for the graph topology-imbalance issue and introduce a position-aware framework to better capture the nodes\u2019 underlying relations. 3 UNDERSTANDING TOPOLOGY-IMBALANCE In this section, we provide a new understanding of the topologyimbalance issue in terms of under-reaching and over-squashing. Then we perform a quantitative analysis of the relations between them to answer two questions: Q1: Why does topology-imbalance affect the performance of graph representation learning? Q2: What kind of graphs are susceptible to topology-imbalance? 3.1 Notations and Preliminaries Consider a graph G = {V, E}, where V is the set of \ud835\udc41nodes and E is the edge set. Let A \u2208R\ud835\udc41\u00d7\ud835\udc41be the adjacency matrix and X \u2208R\ud835\udc41\u00d7\ud835\udc510 be the node attribute matrix, where \ud835\udc510 denotes the dimension of node attributes. The diagonal degree matrix is denoted as D \u2208R\ud835\udc41\u00d7\ud835\udc41where D\ud835\udc56\ud835\udc56= \u00cd\ud835\udc41 \ud835\udc57=1 A\ud835\udc56\ud835\udc57. The graph diameter is denoted as \ud835\udc37G. Given the labeled node set V \ud835\udc3fand their labels Y \ud835\udc3f where each node \ud835\udc63\ud835\udc56is associated with a label \ud835\udc66\ud835\udc56, semi-supervised node classification aims to train a node classifier \ud835\udc53\ud835\udf03: \ud835\udc63\u2192R\ud835\udc36to predict the labels Y \ud835\udc48of remaining nodes V \ud835\udc48= V \\ V \ud835\udc3f, where \ud835\udc36 denotes the number of classes. we separate the labeled node set V \ud835\udc3f into {V1 \ud835\udc3f, V2 \ud835\udc3f, \u00b7 \u00b7 \u00b7 , V\ud835\udc36 \ud835\udc3f}, where V\ud835\udc56 \ud835\udc3fis the nodes of class \ud835\udc56in V \ud835\udc3f. 3.2 Understanding Topology-Imbalance via Under-reaching and Over-squashing In GNNs, node representations are learned by aggregating information from valuable neighbors. The quantity and quality of the information received by the nodes decide the expressiveness of their representations. We perceive the imbalance of the labeled node positions affects the performance of GNNs for two reasons: (1) Under-reaching: The influence from labeled nodes decays with the topology distance [3], resulting in that the nodes far away from labeled nodes lack supervision information. When the node can\u2019t reach enough valuable labeled nodes within the receptive field of the model, the quantity of information it received is limited. (2) Over-squashing: The receptive field of GNNs is exponentiallygrowing and all information is compressed into fixed-length vectors [1]. The supervision information of valuable labeled nodes is squashed when passing across the narrow path together with other useless information. 
3.3 Quantitative Analysis To provide quantitative analysis for topology-imbalance, we propose two metrics for reachability and squashing. First, we define a reaching coefficient based on the shortest path, which determines the minimum layers of GNNs to obtain supervision information: Definition 1 (Reaching coefficient). Given a graph G and labeled node set V \ud835\udc3f, the reaching coefficient \ud835\udc45\ud835\udc36of G is the mean length of the shortest path from unlabeled nodes to the labeled nodes of their corresponding classes: \ud835\udc45\ud835\udc36= 1 |V \ud835\udc48| \u2211\ufe01 \ud835\udc63\ud835\udc56\u2208V \ud835\udc48 1 |V\ud835\udc66\ud835\udc56 \ud835\udc3f| \u2211\ufe01 \ud835\udc63\ud835\udc57\u2208V\ud835\udc66\ud835\udc56 \ud835\udc3f \u0012 1 \u2212log |P\ud835\udc60\ud835\udc5d(\ud835\udc63\ud835\udc56, \ud835\udc63\ud835\udc57)| log \ud835\udc37G \u0013 , (1) where V\ud835\udc66\ud835\udc56 \ud835\udc3f denotes the nodes in V \ud835\udc3fwhose label is \ud835\udc66\ud835\udc56, P\ud835\udc60\ud835\udc5d(\ud835\udc63\ud835\udc56, \ud835\udc63\ud835\udc57) denotes the shortest path between \ud835\udc63\ud835\udc56and \ud835\udc63\ud835\udc57, and |P\ud835\udc60\ud835\udc5d(\ud835\udc63\ud835\udc56, \ud835\udc63\ud835\udc57)| denotes its length, and \ud835\udc37G is the diameter of graph G. Specifically, for the unconnected \ud835\udc63\ud835\udc56and \ud835\udc63\ud835\udc57, we set the length of their shortest path as \ud835\udc37G. The reaching coefficient reflects how long the the distance when the GNNs passes the valuable information to the unlabeled nodes. Note that \ud835\udc45\ud835\udc36\u2208[0, 1) and larger \ud835\udc45\ud835\udc36means better reachability. For the quantitative metric of over-squashing, we define a squashing coefficient using the Ricci curvature to formulate it from a geometric perspective. The Ricci curvature [28] reflects the change of topology properties of the two endpoints of an edge, where the negative \ud835\udc45\ud835\udc56\ud835\udc50(\ud835\udc63\ud835\udc56, \ud835\udc63\ud835\udc57) means that the edge behaves locally as a shortcut or bridge and positive \ud835\udc45\ud835\udc56\ud835\udc50(\ud835\udc63\ud835\udc56, \ud835\udc63\ud835\udc57) indicates that locally there are more triangles in the neighborhood of \ud835\udc63\ud835\udc56and \ud835\udc63\ud835\udc57[27, 42]. Definition 2 (Sqashing coefficient). 
Given a graph G, the squashing coefficient \ud835\udc46\ud835\udc36of G is the mean Ricci curvature of edges on the shortest path from unlabeled nodes to the labeled nodes of their corresponding classes: \ud835\udc46\ud835\udc36= 1 |V \ud835\udc48| \u2211\ufe01 \ud835\udc63\ud835\udc56\u2208V \ud835\udc48 1 |N\ud835\udc66\ud835\udc56(\ud835\udc63\ud835\udc56)| \u2211\ufe01 \ud835\udc63\ud835\udc57\u2208N\ud835\udc66\ud835\udc56(\ud835\udc63\ud835\udc56) \u00cd \ud835\udc52\ud835\udc58\ud835\udc61\u2208P\ud835\udc60\ud835\udc5d(\ud835\udc63\ud835\udc56,\ud835\udc63\ud835\udc57) \ud835\udc45\ud835\udc56\ud835\udc50(\ud835\udc63\ud835\udc58, \ud835\udc63\ud835\udc61) |P\ud835\udc60\ud835\udc5d(\ud835\udc63\ud835\udc56, \ud835\udc63\ud835\udc57)| , (2) where N\ud835\udc66\ud835\udc56(\ud835\udc63\ud835\udc56) denotes the labeled nodes of class \ud835\udc66\ud835\udc56that can reach \ud835\udc63\ud835\udc56, \ud835\udc45\ud835\udc56\ud835\udc50(\u00b7, \u00b7) denotes the Ricci curvature, and |P\ud835\udc60\ud835\udc5d(\ud835\udc63\ud835\udc56, \ud835\udc63\ud835\udc57)| denotes the length of shortest path between \ud835\udc63\ud835\udc56and \ud835\udc63\ud835\udc57. We leverage the Ollivier-Ricci curvature [28] as \ud835\udc45\ud835\udc56\ud835\udc50(\u00b7, \u00b7) here: \ud835\udc45\ud835\udc56\ud835\udc50(\ud835\udc63\ud835\udc58, \ud835\udc63\ud835\udc61) = \ud835\udc4a\ud835\udc4e\ud835\udc60\ud835\udc60\ud835\udc52\ud835\udc5f\ud835\udc60\ud835\udc61\ud835\udc52\ud835\udc56\ud835\udc5b(\ud835\udc5a\ud835\udc4e\ud835\udc60\ud835\udc60\ud835\udc58,\ud835\udc5a\ud835\udc4e\ud835\udc60\ud835\udc60\ud835\udc61) \ud835\udc51\ud835\udc54\ud835\udc52\ud835\udc5c(\ud835\udc63\ud835\udc58, \ud835\udc63\ud835\udc61) , (3) where \ud835\udc4a\ud835\udc4e\ud835\udc60\ud835\udc60\ud835\udc52\ud835\udc5f\ud835\udc60\ud835\udc61\ud835\udc52\ud835\udc56\ud835\udc5b(\u00b7, \u00b7) is the Wasserstein distance, \ud835\udc51\ud835\udc54\ud835\udc52\ud835\udc5c(\u00b7, \u00b7) is the geodesic distance function, and \ud835\udc5a\ud835\udc4e\ud835\udc60\ud835\udc60\ud835\udc58is the mass distribution [28] of node \ud835\udc63\ud835\udc58. Note that \ud835\udc46\ud835\udc36can be either positive or negative and larger \ud835\udc46\ud835\udc36means lower squashing because the ring structures are more friendly for information sharing. In Figure 2 and Figure 3, we show the relation between the reaching coefficient \ud835\udc45\ud835\udc36, the squashing coefficient \ud835\udc46\ud835\udc36, and the classification accuracy. The higher the accuracy, the darker and larger the corresponding scatter. First, we analyze the performance of GCN when trained on the same graph structure but with different labeled nodes. In Figure 2, we generate a synthetic graph by the Stochastic Block Model (SBM) [19] with 4 classes and 3,000 nodes. We randomly sample some nodes as the labeled nodes 10 times and scatter the classification accuracy in Figure 2. We can observe that even for the same graph structure, the difference in positions of labeled nodes may bring up to 15.42% difference in accuracy. There \fCIKM \u201922, October 17\u201321, 2022, Atlanta, GA, USA Sun et al. acc.=65.31% acc.=62.86% acc.=49.89% Figure 2: Predictions of GCN with the same graph structure and different labeled nodes. is a significant positive correlation between the reaching coefficient, the squashing coefficient, and the model performance. Then we analyze the performance of GCN when trained with the same labeled nodes but on different graph structures. 
In Figure 3, we set the labeled nodes to be the same and generate different structures between them by controlling the edge probability between communities in the SBM model. We can observe that with the same supervision information, there is up to a 26.26% difference in accuracy because of the difference in graph structures. There is also a significant positive correlation between the reaching coefficient, the squashing coefficient, and the model performance. When the graph shows better community structure among nodes of the same class, the node representations can be learned better. Therefore, we make the following conclusions: (1) Topologyimbalance hurts the performance of graph learning in the way of under-reaching and over-squashing. (for Q1) (2) The proposed two quantitative metrics can effectively reflect the degree of topologyimbalance. Graph with poor reachability (i.e., smaller \ud835\udc45\ud835\udc36) and stronger squashing (i.e., smaller \ud835\udc46\ud835\udc36) is more susceptible to topologyimbalance. (for Q2) (3) Optimizing the graph structure can effectively solve the topology-imbalance issue. The above conclusions provide the guideline for designing the framework of PASTEL, i.e., balance the supervision information distribution by learning a structure with better reachability and lower squashing. 4 ALLEVIATE TOPOLOGY-IMBALANCE BY STRUCTURE LEARNING In this section, we introduce PASTEL, a Position-Aware STructurE Learning framework, to optimize the information propagation path directly and address the topology-imbalance issue in essence. In light of the analysis in Section 3.2, PASTEL aims to learn a better structure that increases the intra-class label influence for each class and thus relieves the under-reaching and over-squashing phenomena. The overall architecture of PASTEL is shown in Figure 4. 4.1 Position-aware Structure Learning To form structure with better intra-class connectivity, we use an anchor-based position encoding method to capture the topology acc.=51.24% acc.=34.88% acc.=24.98% Figure 3: Predictions of GCN with and the same labeled nodes and different graph structures. distance between unlabeled nodes to labeled nodes. Then we incorporate both the merits of feature information as well as topology information to learn the refined structure. Anchor-based Position Encoding. Inspired by the position in transformer [36, 43], we use an anchor-based position encoding method to capture the relative position of unlabeled nodes with respect to all the labeled nodes of the graph. Since we focus on maximizing the reachability between unlabeled nodes and labeled nodes within the same class, we directly separate the labeled node set V \ud835\udc3finto \ud835\udc36anchor sets {V1 \ud835\udc3f, V2 \ud835\udc3f, \u00b7 \u00b7 \u00b7 , V\ud835\udc36 \ud835\udc3f}, where each subset V\ud835\udc50 \ud835\udc3fdenotes the labeled nodes whose labels are \ud835\udc50. The class-wise anchor sets help distinguish the information from different classes rather than treating all the anchor nodes the same and ignoring the class difference as in [49]. Concretely, for any node \ud835\udc63\ud835\udc56, we consider a function \ud835\udf19(\u00b7, \u00b7) which measures the position relations between \ud835\udc63\ud835\udc56 and the anchor sets in graph G. The function can be defined by the connectivity between the nodes in the graph. 
p\ud835\udc56= \u0010 \ud835\udf19 \u0010 \ud835\udc63\ud835\udc56, V1 \ud835\udc3f \u0011 ,\ud835\udf19 \u0010 \ud835\udc63\ud835\udc56, V2 \ud835\udc3f \u0011 , \u00b7 \u00b7 \u00b7 ,\ud835\udf19 \u0010 \ud835\udc63\ud835\udc56, V\ud835\udc36 \ud835\udc3f \u0011\u0011 , (4) where \ud835\udf19(\ud835\udc63\ud835\udc56, V\ud835\udc50 \ud835\udc3f) is the position encoding function defined by the connectivity between the node \ud835\udc63\ud835\udc56and the anchor set V\ud835\udc50 \ud835\udc3fin graph. Here we choose \ud835\udf19(\ud835\udc63\ud835\udc56, V\ud835\udc50 \ud835\udc3f) to be the mean length of shortest path between \ud835\udc63\ud835\udc56and nodes in V\ud835\udc50 \ud835\udc3fif two nodes are connected: \ud835\udf19(\ud835\udc63\ud835\udc56, V\ud835\udc50 \ud835\udc3f) = \u00cd \ud835\udc63\ud835\udc57\u2208N\ud835\udc50(\ud835\udc63\ud835\udc56) |P\ud835\udc60\ud835\udc5d(\ud835\udc63\ud835\udc56, \ud835\udc63\ud835\udc57)| |N\ud835\udc50(\ud835\udc63\ud835\udc56)| , (5) where N\ud835\udc50(\ud835\udc63\ud835\udc56) is the nodes connected with \ud835\udc63\ud835\udc56in V\ud835\udc50 \ud835\udc3fand |P\ud835\udc60\ud835\udc5d(\ud835\udc63\ud835\udc56, \ud835\udc63\ud835\udc57)| is the length of shortest path between \ud835\udc63\ud835\udc56and \ud835\udc63\ud835\udc57. Then we transform the position encoding into the \ud835\udc510 dimensional space: h\ud835\udc5d \ud835\udc56= W\ud835\udf19\u00b7 p\ud835\udc56, (6) where W\ud835\udf19is a trainable vector. If two nodes have similar shortest paths to the anchor sets, their position encodings are similar. Position-aware Metric Learning. After obtaining the position encoding, we use a metric function that accounts for both node feature information and the position-based similarities to measure the possibility of edge existence. PASTEL is agnostic to various \fPosition-aware Structure Learning for Graph Topology-imbalance by Relieving Under-reaching and Over-squashing CIKM \u201922, October 17\u201321, 2022, Atlanta, GA, USA position encoding Representatiom space with better separation Graph with topology-imbalance Graph with better intra-class connectivity conflict measure edge weights PASTEL Figure 4: Overall architecture of PASTEL. PASTEL encodes the relative position between nodes with the labeled nodes as anchor sets {S} and incorporates the position information with node features for structure learning. For each pair of nodes, PASTEL uses the class-wise conflict measure as the edge weights to learn a graph with better intra-class connectivity. similarity metric functions and we choose the widely used multihead cosine similarity function here: \ud835\udc4e\ud835\udc43 \ud835\udc56\ud835\udc57= 1 \ud835\udc5a \ud835\udc5a \u2211\ufe01 \u210e=1 cos \u0010 W\u210e\u00b7 \u0010 z\ud835\udc56||h\ud835\udc5d \ud835\udc56 \u0011 , W\u210e\u00b7 \u0010 z\ud835\udc57||h\ud835\udc5d \ud835\udc57 \u0011\u0011 , (7) where\ud835\udc5ais the number of heads, W\u210eis the weight matrix of the\u210e-th head, z\ud835\udc56denotes the representation vector of node \ud835\udc63\ud835\udc56and || denotes concatenation. The effectiveness of the position-aware structure learning is evaluated in Section 5.3.1. 4.2 Class-wise Conflict Measure We aim to increase the intra-class connectivity among nodes, thereby increasing the supervision information they received and their influence on each other. Here we propose a class-wise conflict measure to guide what nodes should be more closely connected. 
According to the inherent relation of GNNs with Label Propagation [7, 45], we use the Group PageRank [6] as a conflict measure between nodes. Group PageRank (GPR) extends the traditional PageRank[29] into a label-aware version to measure the supervision information from labeled nodes of each class. Specifically, for class \ud835\udc50\u2208{1, 2, \u00b7 \u00b7 \u00b7 ,\ud835\udc36}, the corresponding GPR matrix is P\ud835\udc54\ud835\udc5d\ud835\udc5f(\ud835\udc50) = (1 \u2212\ud835\udefc)A\u2032P\ud835\udc54\ud835\udc5d\ud835\udc5f(\ud835\udc50) + \ud835\udefcI\ud835\udc50, (8) where A\u2032 = AD\u22121, \ud835\udefcis the random walk restart probability at a random node in the group and I\ud835\udc50\u2208R\ud835\udc5bis the teleport vector: I\ud835\udc56 \ud835\udc50= ( 1 |V\ud835\udc50 \ud835\udc3f|, \ud835\udc56\ud835\udc53\ud835\udc66\ud835\udc56= \ud835\udc50 0, \ud835\udc5c\ud835\udc61\u210e\ud835\udc52\ud835\udc5f\ud835\udc64\ud835\udc56\ud835\udc60\ud835\udc52 (9) where |V\ud835\udc50 \ud835\udc3f| is the number of labeled nodes with class\ud835\udc50. We calculate the GPR for each group individually and then concatenate all the GPR vectors to form a final GPR matrix P\ud835\udc54\ud835\udc5d\ud835\udc5f\u2208R\ud835\udc41\u00d7\ud835\udc36as in [6]: P\ud835\udc54\ud835\udc5d\ud835\udc5f= \ud835\udefc\u0000E \u2212(1 \u2212\ud835\udefc) A\u2032\u0001\u22121 I\u2217, (10) where E is the unit matrix of nodes and I\u2217is the concatenation of {I\ud835\udc50,\ud835\udc50= 1, 2, \u00b7 \u00b7 \u00b7 ,\ud835\udc36}. Under P\ud835\udc54\ud835\udc5d\ud835\udc5f, node \ud835\udc63\ud835\udc56corresponds to a GPR vector P\ud835\udc54\ud835\udc5d\ud835\udc5f \ud835\udc56 (the \ud835\udc56-th row of P\ud835\udc54\ud835\udc5d\ud835\udc5f), where the \ud835\udc50-th dimension represents the the supervision influence of labeled nodes of class \ud835\udc50on node \ud835\udc63\ud835\udc56. The GPR value contains not only the global topology information but also the annotation information. For each node pair nodes \ud835\udc63\ud835\udc56and \ud835\udc63\ud835\udc57, we use the Kullback Leiber (KL) divergence of their GPR vectors to measure their conflict when forming an edge: \ud835\udf05\ud835\udc56\ud835\udc57= KL \u0010 P\ud835\udc54\ud835\udc5d\ud835\udc5f \ud835\udc56 , P\ud835\udc54\ud835\udc5d\ud835\udc5f \ud835\udc57 \u0011 . (11) The distance of GPR vectors reflects the influence conflict of different classes when exchanging information. We use a cosine annealing mechanism to calculate the edge weights by the relative ranking of the conflict measure: \ud835\udc64\ud835\udc56\ud835\udc57= 1 2 \u0014 \u2212cos Rank(\ud835\udf05\ud835\udc56\ud835\udc57) |V| \u00d7 |V| \u2217\ud835\udf0b+ 1 \u0015 , (12) where \ud835\udc45\ud835\udc4e\ud835\udc5b\ud835\udc58(\u00b7) is the ranking function according to the magnitude. The more conflicting the edge is, the less weight is assigned to it. With the class-wise conflict measure, we aim to learn a graph structure that makes the GPR vectors of nodes have \u201csharp\u201d distributions focusing on their ground-truth classes. Then \ud835\udc64\ud835\udc56\ud835\udc57is used as the connection strength of edge \ud835\udc52\ud835\udc56\ud835\udc57, with the corresponding element \u02dc \ud835\udc4e\ud835\udc43 \ud835\udc56\ud835\udc57in the adjacency matrix being: \u02dc \ud835\udc4e\ud835\udc43 \ud835\udc56\ud835\udc57= \ud835\udc64\ud835\udc56\ud835\udc57\u00b7 \ud835\udc4e\ud835\udc43 \ud835\udc56\ud835\udc57. 
(13) The effectiveness of the class-wise conflict measure is evaluated in Section 5.3.2 and the change of GPR vectors is shown in Section 5.4.3. 4.3 Learning with the Optimized Structure With the above structure learning strategy, we can obtain a positionaware adjacency A\ud835\udc43with maximum intra-class connectivities: A\ud835\udc43= { \u02dc \ud835\udc4e\ud835\udc43 \ud835\udc56\ud835\udc57,\ud835\udc56, \ud835\udc57\u2208{1, 2, \u00b7 \u00b7 \u00b7 , \ud835\udc41}}. (14) The input graph structure determines the learning performance to a certain extent. Since the structure learned at the beginning is of poor quality, directly using it may lead to non-convergence or unstable training of the whole framework. We hence incorporate the original graph structure A and a structure in a node feature view A\ud835\udc41as supplementary to formulate an optimized graph structure A\u2217. Specifically, we also learn a graph structure A\ud835\udc41= {\ud835\udc4e\ud835\udc41 \ud835\udc56\ud835\udc57,\ud835\udc56, \ud835\udc57\u2208 \fCIKM \u201922, October 17\u201321, 2022, Atlanta, GA, USA Sun et al. Algorithm 1: The overall process of PASTEL Input: Graph G = {V, E} with node labels Y; Number of heads \ud835\udc5a; Number of training epochs \ud835\udc38; Structure fusing coefficients \ud835\udf061, \ud835\udf062; Loss coefficients \ud835\udefd1, \ud835\udefd2, \ud835\udefd3 Output: Optimized graph G\u2217= (A\u2217, X), predicted label \u02c6 Y 1 Parameter initialization; 2 for \ud835\udc52= 1, 2, \u00b7 \u00b7 \u00b7 , \ud835\udc38do // Learn position-aware graph structure 3 Learn position encodings h\ud835\udc5d \ud835\udc56\u2190Eq. (6); 4 Learn edge possibility \ud835\udc4e\ud835\udc43 \ud835\udc56\ud835\udc57\u2190Eq. (7); 5 Calculate the Group PageRank matrix P\ud835\udc54\ud835\udc5d\ud835\udc5f\u2190Eq. (10); 6 Calculate the class-wise conflict measure \ud835\udc64\ud835\udc56\ud835\udc57\u2190Eq. (12); 7 Obtain position-aware structure A\ud835\udc43\u2190Eq. (14); // Learn node representations 8 Obtain the optimized structure A\u2217\u2190Eq. (16); 9 Calculate representations and labels Z, \u02c6 Y \u2190Eq. (20); // Optimize 10 Calculate the losses L\ud835\udc50\ud835\udc59\ud835\udc60\u2190Eq. (21), L\ud835\udc60\ud835\udc5a\ud835\udc5c\ud835\udc5c\ud835\udc61\u210e\u2190Eq. (17), L\ud835\udc50\ud835\udc5c\ud835\udc5b\u2190Eq. (18), and L\ud835\udc60\ud835\udc5d\ud835\udc4e\ud835\udc5f\u2190Eq. (19); 11 Update model parameters to minimize L \u2190Eq. (22). 12 end {1, 2, \u00b7 \u00b7 \u00b7 , \ud835\udc41}} in a node feature view with each element being: \ud835\udc4e\ud835\udc41 \ud835\udc56\ud835\udc57= 1 \ud835\udc5a \ud835\udc5a \u2211\ufe01 \u210e=1 cos \u0010 W\u210e\u00b7 \u0010 x\ud835\udc56||h\ud835\udc5d0 \ud835\udc56 \u0011 , W\u210e\u00b7 \u0010 x\ud835\udc57||h\ud835\udc5d0 \ud835\udc57 \u0011\u0011 , (15) where x\ud835\udc56is the feature vector of node \ud835\udc63\ud835\udc56and h\ud835\udc5d0 \ud835\udc56 is the position encoding with the original structure. Then we can formulate an optimized graph structure A\u2217with respect to the downstream task: A\u2217= \ud835\udf061D\u22121 2 AD\u22121 2 + (1 \u2212\ud835\udf061) (\ud835\udf062\ud835\udc53(A\ud835\udc41) + (1 \u2212\ud835\udf062) \ud835\udc53(A\ud835\udc43)) , (16) where \ud835\udc53(\u00b7) denotes the row-wise normalization function, \ud835\udf061 and \ud835\udf062 are two constants that control the contributions of original structure and feature view structure, respectively. 
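Before moving on, the computations of Section 4.2 and Eq. (16) above (Group PageRank, the KL-based conflict, its cosine-annealed edge weights, and the fusion of the three structure views) could be sketched as follows. This is a hedged illustration under our own naming; in particular, the direction of the ranking in Eq. (12) is implemented so that more conflicting pairs receive smaller weights, as stated in the text, and A_P is assumed to be the weighted matrix of Eqs. (13)-(14).

import math
import torch

def group_pagerank(A, labels, labeled_mask, num_classes, alpha=0.15):
    # P_gpr = alpha * (E - (1 - alpha) * A')^{-1} I*, with A' = A D^{-1} (Eqs. 8-10).
    n = A.size(0)
    A_col = A / A.sum(dim=0).clamp(min=1.0)            # column-normalized adjacency
    I_star = torch.zeros(n, num_classes)
    for c in range(num_classes):
        idx = (labels == c) & labeled_mask
        if idx.any():
            I_star[idx, c] = 1.0 / idx.sum()           # teleport vector of Eq. (9)
    return alpha * torch.linalg.solve(torch.eye(n) - (1 - alpha) * A_col, I_star)

def conflict_edge_weights(P_gpr, eps=1e-12):
    # kappa_ij = KL(P_i || P_j) (Eq. 11); w_ij from a cosine annealing over its rank (Eq. 12).
    P = P_gpr.clamp(min=eps)
    P = P / P.sum(dim=1, keepdim=True)
    logP = P.log()
    kl = (P * logP).sum(dim=1, keepdim=True) - P @ logP.t()
    n = kl.size(0)
    # Rank 0 is assigned to the most conflicting pair, hence the smallest weight.
    rank = (-kl).flatten().argsort().argsort().float().view(n, n)
    return 0.5 * (-torch.cos(rank / (n * n) * math.pi) + 1.0)

def fuse_structures(A, A_N, A_P, lam1, lam2):
    # A* = lam1 * D^{-1/2} A D^{-1/2} + (1 - lam1) * (lam2 * f(A_N) + (1 - lam2) * f(A_P)) (Eq. 16).
    d_inv_sqrt = A.sum(dim=1).clamp(min=1.0).pow(-0.5)
    A_sym = d_inv_sqrt.unsqueeze(1) * A * d_inv_sqrt.unsqueeze(0)
    row_norm = lambda M: M / M.sum(dim=1, keepdim=True).clamp(min=1e-12)
    return lam1 * A_sym + (1 - lam1) * (lam2 * row_norm(A_N) + (1 - lam2) * row_norm(A_P))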
Here we use a dynamic decay mechanism for \ud835\udf061 and \ud835\udf062 to enable the position-aware structure A\ud835\udc43to play a more and more important role during training. To control the quality of learned graph structure, we impose additional constraints on it following [10, 21] in terms of smoothness, connectivity, and sparsity: L\ud835\udc60\ud835\udc5a\ud835\udc5c\ud835\udc5c\ud835\udc61\u210e= 1 \ud835\udc412 tr \u0010 X\ud835\udc47L\u2217X \u0011 , (17) L\ud835\udc50\ud835\udc5c\ud835\udc5b= 1 \ud835\udc411\ud835\udc47log(A\u22171), (18) L\ud835\udc60\ud835\udc5d\ud835\udc4e\ud835\udc5f= 1 \ud835\udc412 ||A\u2217||2 \ud835\udc39, (19) where L\u2217= D\u2217\u2212A\u2217is the Laplacian of A\u2217and D\u2217is the degree matrix of A\u2217. To speed up the computation, we extract a symmetric sparse non-negative adjacency matrix by masking off (i.e., set to zero) those elements in A\u2217which are smaller than a predefined non-negative threshold \ud835\udc4e0. Then G\u2217= (A\u2217, X) is input into the GNN-Encoder for the node representations Z \u2208R\ud835\udc41\u00d7\ud835\udc51, predicted labels \u02c6 \ud835\udc66and classification loss L\ud835\udc50\ud835\udc59\ud835\udc60: Z = GNN-Encoder(A\u2217, X), \u02c6 Y = Classifier(Z), (20) L\ud835\udc50\ud835\udc59\ud835\udc60= Cross-Entropy(Y, \u02c6 Y). (21) The overall loss is defined as the combination of the node classification loss and graph regularization loss: L = L\ud835\udc50\ud835\udc59\ud835\udc60+ \ud835\udefd1L\ud835\udc60\ud835\udc5a\ud835\udc5c\ud835\udc5c\ud835\udc61\u210e+ \ud835\udefd2L\ud835\udc50\ud835\udc5c\ud835\udc5b+ \ud835\udefd3L\ud835\udc60\ud835\udc5d\ud835\udc4e\ud835\udc5f. (22) The overall process of PASTEL is shown in Algorithm 1. 5 EXPERIMENT In this section, we first evaluate PASTEL1 on both real-world graphs and synthetic graphs. Then we analyze the main mechanisms of PASTEL and the learned structure. We mainly focus on the following research questions: \u2022 RQ1. How does PASTEL perform in the node classification task? (Section 5.2) \u2022 RQ2. How does the position encoding and the class-wise conflict measure influence the performance of PASTEL? (Section 5.3) \u2022 RQ3. What graph structure PASTEL tend to learn? (Section 5.4) 5.1 Experimental Setups 5.1.1 Datasets. We conduct experiments on synthetic and realworld datasets to analyze the model\u2019s capabilities in terms of both graph theory and real-world scenarios. The real-word datasets include various networks with different heterophily degrees to demonstrate the generalization of PASTEL. Cora and Citeseer [35] are citation networks. Photo [37] and and Actor [31] are co-occurrence network. Chameleon and Squirrel [34] are page-page networks in Wikipedia. Since we focus on the topology-imbalance issue in this work, we set the number of labeled nodes in each class to be 20. 5.1.2 Baselines. We choose representative GNNs as backbones including GCN [22], GAT [44], APPNP [23], and GraphSAGE [17]. The most important baseline is ReNode [7], which is the only existing work for the topology-imbalance issue. We also include some graph structure learning baselines to illustrate the specific effectiveness of PASTEL for the topology-imbalance issue. DropEdge [33] randomly removes edges at each epoch as structure augmentation. To evaluate the effect of increasing the reachability randomly, we use a adding edges method named AddEdge, whose adding strategy is similar to DropEdge. 
SDRF [42] rewires edges according to their curvatures for the over-squashing issue. NeuralSparse [54] removes potentially task-irrelevant edges for clearer class boundaries. IDGL [10] updates the node representations and structure based on these representations iteratively. 5.1.3 Parameter Settings. For the GNN backbones, we set their depth to be 2 layers and adopt the implementations from the PyTorch Geometric Library in all experiments. We set the representation dimension of all baselines and PASTEL to be 256. We reimplement the NeuralSparse [54] and SDRF [42] and the parameters of baseline methods are set as the suggested value in their papers or carefully tuned for fairness. For DropEdge and AddEdge, we set the edge dropping/adding probability to 10%. For PASTEL, we set the number of heads \ud835\udc5a= 4 and the random walk restart probability 1The code of PASTEL is available at https://github.com/RingBDStack/PASTEL. \fPosition-aware Structure Learning for Graph Topology-imbalance by Relieving Under-reaching and Over-squashing CIKM \u201922, October 17\u201321, 2022, Atlanta, GA, USA Table 1: Weighted-F1 score and Macro-F1 score (% \u00b1 standard deviation) of node classification on real-world graph datasets. Cora Citeseer Photo Actor Chameleon Squirrel Backbone Model W-F1 M-F1 W-F1 M-F1 W-F1 M-F1 W-F1 M-F1 W-F1 M-F1 W-F1 M-F1 GCN original 79.4\u00b10.9 77.5\u00b11.5 66.3\u00b11.3 62.2\u00b11.2 85.4\u00b12.8 84.6\u00b11.3 21.8\u00b11.3 20.9\u00b11.4 30.5\u00b13.4 30.5\u00b13.3 21.9\u00b11.2 21.9\u00b11.2 ReNode 80.0\u00b10.7 78.4\u00b11.3 66.4\u00b11.0 62.4\u00b11.1 86.2\u00b12.4 85.3\u00b11.6 21.2\u00b11.2 20.2\u00b11.6 30.3\u00b13.2 30.4\u00b12.8 22.4\u00b11.1 22.4\u00b11.1 AddEdge 79.0\u00b10.9 77.0\u00b11.4 66.2\u00b11.3 62.2\u00b11.3 85.5\u00b11.5 86.1\u00b11.8 21.2\u00b11.3 20.3\u00b11.5 30.6\u00b11.6 30.4\u00b11.7 21.7\u00b11.5 21.7\u00b11.5 DropEdge 79.8\u00b10.8 77.8\u00b11.0 66.6\u00b11.4 63.4\u00b11.6 86.8\u00b11.7 85.4\u00b11.3 22.4\u00b11.0 21.4\u00b11.3 30.6\u00b13.5 30.6\u00b13.3 22.8\u00b11.2 22.8\u00b11.2 SDRF 82.1\u00b10.8 80.6\u00b10.8 69.6\u00b10.4 66.6\u00b10.3 > 5 days > 5 days > 5 days > 5 days 39.1\u00b11.2 39.0\u00b11.2 > 5 days > 5 days NeuralSparse 81.7\u00b11.4 80.9\u00b11.4 71.8\u00b11.2 69.0\u00b11.0 89.7\u00b11.9 88.7\u00b11.8 24.4\u00b11.5 23.6\u00b11.6 44.9\u00b13.0 44.9\u00b12.8 28.1\u00b11.8 28.1\u00b11.8 IDGL 82.3\u00b10.6 81.0\u00b10.9 71.7\u00b11.0 68.0\u00b11.3 88.6\u00b12.3 88.8\u00b11.4 24.9\u00b10.8 22.0\u00b10.7 55.4\u00b11.8 55.0\u00b11.7 28.8\u00b12.3 28.9\u00b12.2 PASTEL 82.5\u00b10.3 81.2\u00b10.3 72.9\u00b10.8 69.3\u00b10.9 91.4\u00b12.7 91.3\u00b12.2 26.4\u00b11.0 24.4\u00b11.2 57.8\u00b12.4 57.3\u00b12.4 37.5\u00b10.6 37.5\u00b10.7 GAT original 78.3\u00b11.5 76.4\u00b11.7 64.4\u00b11.7 60.6\u00b11.7 88.2\u00b12.9 86.2\u00b12.6 21.8\u00b11.2 20.9\u00b11.1 29.9\u00b13.5 29.9\u00b13.1 20.5\u00b11.4 20.5\u00b11.4 ReNode 78.9\u00b11.2 77.2\u00b11.5 64.9\u00b11.6 61.0\u00b11.5 89.1\u00b12.4 87.1\u00b12.6 21.5\u00b11.2 20.5\u00b11.1 29.2\u00b12.3 29.1\u00b12.0 20.4\u00b11.8 20.4\u00b11.8 AddEdge 78.0\u00b11.6 76.2\u00b11.6 64.0\u00b11.3 60.2\u00b11.3 88.2\u00b12.4 86.2\u00b12.5 21.3\u00b11.2 20.3\u00b11.1 29.8\u00b11.7 29.6\u00b11.5 20.7\u00b11.6 20.7\u00b11.6 DropEdge 78.7\u00b11.3 76.9\u00b11.5 64.5\u00b11.4 60.5\u00b11.3 88.9\u00b11.9 87.1\u00b12.1 22.9\u00b11.2 21.8\u00b11.1 30.3\u00b11.6 30.2\u00b11.2 21.2\u00b11.5 21.2\u00b11.5 SDRF 77.9\u00b10.7 75.9\u00b10.9 64.9\u00b10.6 61.9\u00b10.9 > 5 days > 5 days > 5 days > 5 days 
43.0\u00b11.9 42.5\u00b11.9 > 5 days > 5 days NerualSparse 81.4\u00b14.8 79.4\u00b14.8 64.8\u00b11.5 61.9\u00b11.3 90.2\u00b12.5 88.0\u00b12.3 23.4\u00b11.7 22.4\u00b11.5 45.6\u00b12.1 45.5\u00b11.8 28.8\u00b11.3 28.8\u00b11.3 IDGL 80.6\u00b11.0 79.7\u00b10.9 66.5\u00b11.5 61.9\u00b11.9 89.9\u00b13.1 87.7\u00b12.6 22.4\u00b11.5 21.8\u00b11.2 48.4\u00b14.0 47.8\u00b13.1 27.0\u00b12.6 27.0\u00b12.6 PASTEL 81.9\u00b11.4 80.7\u00b11.2 66.6\u00b11.9 62.0\u00b11.7 91.8\u00b13.2 89.4\u00b12.9 24.4\u00b12.6 22.1\u00b12.6 52.1\u00b12.7 52.5\u00b12.8 35.3\u00b10.9 35.3\u00b10.8 APPNP original 80.6\u00b11.6 79.3\u00b11.2 66.5\u00b11.5 62.3\u00b11.5 89.3\u00b11.6 86.3\u00b11.7 21.1\u00b11.5 20.7\u00b11.1 35.3\u00b14.0 35.0\u00b13.8 23.1\u00b11.6 23.1\u00b11.6 ReNode 81.1\u00b10.9 79.9\u00b10.9 66.6\u00b11.7 62.4\u00b11.6 89.6\u00b11.4 87.2\u00b11.3 20.2\u00b12.0 20.0\u00b11.7 33.5\u00b12.5 33.3\u00b12.3 23.9\u00b12.0 23.9\u00b12.0 AddEdge 80.3\u00b11.3 78.8\u00b11.1 66.6\u00b12.1 62.5\u00b12.1 89.3\u00b11.2 86.4\u00b11.2 21.5\u00b11.3 20.7\u00b11.4 35.7\u00b11.7 35.4\u00b11.2 23.1\u00b11.6 23.2\u00b11.7 DropEdge 80.9\u00b11.4 79.4\u00b11.2 66.7\u00b12.0 63.0\u00b11.9 90.0\u00b11.2 87.0\u00b11.2 21.8\u00b11.8 20.8\u00b11.4 36.0\u00b11.7 35.7\u00b11.6 23.3\u00b11.7 23.3\u00b11.7 SDRF 80.7\u00b10.9 79.1\u00b10.8 67.1\u00b10.6 63.1\u00b10.8 > 5 days > 5 days > 5 days > 5 days 36.5\u00b12.1 35.8\u00b12.1 > 5 days > 5 days NerualSparse 81.1\u00b11.4 79.9\u00b11.2 66.8\u00b11.9 62.7\u00b11.9 91.3\u00b11.8 89.4\u00b11.6 21.8\u00b11.9 21.4\u00b11.5 39.1\u00b12.9 38.7\u00b12.8 28.3\u00b11.5 28.3\u00b11.5 IDGL 81.3\u00b10.9 80.2\u00b10.9 67.0\u00b11.3 62.9\u00b11.3 91.6\u00b11.3 88.6\u00b12.2 21.4\u00b12.4 20.1\u00b12.4 41.2\u00b12.2 40.6\u00b12.6 29.6\u00b12.3 29.7\u00b12.2 PASTEL 82.0\u00b11.0 80.0\u00b10.9 67.3\u00b11.3 63.2\u00b11.5 92.3\u00b13.1 89.9\u00b12.5 22.5\u00b12.0 20.9\u00b12.1 44.2\u00b13.2 43.8\u00b13.4 34.6\u00b11.6 34.6\u00b11.6 GraphSAGE original 75.4\u00b11.6 74.1\u00b11.6 64.8\u00b11.6 60.7\u00b11.6 86.1\u00b12.5 83.3\u00b12.4 24.0\u00b11.2 23.2\u00b11.0 36.5\u00b11.6 36.2\u00b11.6 27.2\u00b11.7 27.2\u00b11.7 ReNode 76.4\u00b10.9 75.0\u00b11.1 65.4\u00b11.7 61.2\u00b11.7 86.5\u00b11.7 84.1\u00b11.7 23.7\u00b11.2 22.8\u00b11.0 36.4\u00b11.9 36.1\u00b11.9 27.7\u00b11.8 27.7\u00b11.8 AddEdge 75.2\u00b11.2 73.7\u00b11.2 65.0\u00b11.4 60.9\u00b11.3 86.1\u00b12.8 83.4\u00b12.6 23.8\u00b11.7 23.2\u00b11.6 36.5\u00b11.5 36.2\u00b11.3 26.9\u00b12.1 26.9\u00b12.1 DropEdge 76.0\u00b11.6 74.5\u00b11.6 65.1\u00b11.4 60.9\u00b11.4 86.2\u00b11.6 83.5\u00b11.4 24.1\u00b11.0 23.3\u00b10.9 37.5\u00b11.4 37.2\u00b11.4 27.5\u00b11.8 27.5\u00b11.8 SDRF 75.7\u00b10.8 74.6\u00b10.8 65.3\u00b10.6 61.4\u00b10.6 > 5 days > 5 days > 5 days > 5 days 41.5\u00b12.6 41.6\u00b12.7 > 5 days > 5 days NerualSparse 79.7\u00b11.8 77.8\u00b11.6 64.7\u00b11.4 61.1\u00b11.3 89.1\u00b15.4 86.7\u00b15.5 25.1\u00b11.2 24.4\u00b11.1 39.1\u00b11.9 39.0\u00b11.9 32.2\u00b12.4 32.2\u00b12.4 IDGL 79.2\u00b10.9 78.4\u00b10.8 65.6\u00b10.9 61.3\u00b11.2 90.0\u00b11.0 86.3\u00b11.3 24.0\u00b12.6 22.4\u00b12.7 43.8\u00b13.4 43.0\u00b13.2 33.9\u00b10.9 33.9\u00b10.8 PASTEL 81.1\u00b10.8 79.8\u00b10.7 65.7\u00b11.1 61.4\u00b11.4 92.0\u00b10.6 89.0\u00b11.0 26.0\u00b12.4 23.6\u00b12.7 47.7\u00b10.9 46.9\u00b10.9 35.5\u00b11.4 35.5\u00b11.4 Table 2: Weighted-F1 scores and improvements on graphs with different levels of topology-imbalance. 
Cora-L Cora-M Cora-H \ud835\udc45\ud835\udc36 0.4130 \ud835\udc46\ud835\udc36 -0.6183 \ud835\udc45\ud835\udc36 0.4100 \ud835\udc46\ud835\udc36 -0.6204 \ud835\udc45\ud835\udc36 0.4060 \ud835\udc46\ud835\udc36 -0.6302 W-F1 (%) \u0394 (%) W-F1 (%) \u0394 (%) W-F1 (%) \u0394 (%) GCN 80.9\u00b10.9 \u2014 78.8\u00b10.8 \u2014 77.5\u00b11.0 \u2014 ReNode 81.3\u00b10.7 \u21910.4 79.3\u00b10.8 \u21910.5 78.3\u00b11.1 \u21910.8 SDRF 81.0\u00b10.7 \u21910.1 78.9\u00b10.8 \u21910.1 77.9\u00b10.7 \u21910.4 IDGL 82.5\u00b11.0 \u21911.6 80.4\u00b11.0 \u21911.6 81.6\u00b11.1 \u21914.1 PASTEL 82.7\u00b10.9 \u21911.8 81.0\u00b10.9 \u21912.2 81.9\u00b11.1 \u21914.4 \ud835\udefc= 0.15. The structure fusing coefficients (\ud835\udf061 and \ud835\udf062) and the loss coefficients (\ud835\udefd1, \ud835\udefd2 and \ud835\udefd3) are tuned for each dataset. 5.2 Evaluation (RQ1) 5.2.1 PASTEL for Real-world Graphs. We compare PASTEL with the baselines on several datasets on node classification. The overall Weighted-F1 (W-F1) scores and the class-balance Macro-F1 (MF1) scores on different backbones are shown in Table 1. The best results are shown in bold and the runner-ups are underlined. PASTEL shows overwhelming superiority in improving the performance of backbones on all datasets. It demonstrates that PASTEL is capable of learning better structures with a more balanced label distribution that reinforces the GNN models. ReNode [7] achieves fewer improvements on datasets of poor connectivity (e.g., CiteSeer) and even damages the performance of backbones on heterophilic datasets (e.g., Chameleon and Actor). We think it\u2019s because ReNode [7] detects conflicts by Personalized PageRank and fails to reflect the node topological position well when the graph connectivity is poor. Besides, ReNode takes the topology boundary as the decision boundary, which is not applicable for heterophilic graphs. AddEdge doesn\u2019t work in most cases, demonstrating that randomly adding edge is not effective in boosting the reachability. The structure augmentation strategy should be carefully designed considering the node relations. SDRF [42] can improve the performance, supporting our intuition that relieving over-squashing helps graph learning. But SDRF is still less effective than PASTEL because it only considers the topological properties rather than the supervision information. Both NeuralSparse [54] and IDGL [10] show good performance among the baselines, showing the effectiveness of learning better structures for downstream tasks. However, they are still less effective than PASTEL which takes the supervision information distribution into consideration. 5.2.2 PASTEL under Different Levels of Topology-imbalance. To further analyze PASTEL\u2019s ability in alleviating the topology-imbalance issue, we verify the PASTEL under different levels of topologyimbalance. We randomly sampled 1,000 training sets and calculate \fCIKM \u201922, October 17\u201321, 2022, Atlanta, GA, USA Sun et al. Table 3: Weighted-F1 scores (%) and improvements (\u0394) on synthetic SBM graphs with different community structures. 
SBM-1 SBM-2 SBM-3 SBM-4 SBM-5 SBM-6 SBM-7 \ud835\udc5d 0.5000 0.5000 0.5000 0.5000 0.5000 0.5000 0.5000 \ud835\udc5e 0.0300 0.0100 0.0083 0.0071 0.0063 0.0056 0.0050 \ud835\udc45\ud835\udc36 0.4979 0.4984 0.4990 0.4994 0.5002 0.5004 0.5009 \ud835\udc46\ud835\udc36 0.0998 0.0999 0.1000 0.1001 0.1007 0.1017 0.1144 W-F1 \u0394 W-F1 \u0394 W-F1 \u0394 W-F1 \u0394 W-F1 \u0394 W-F1 \u0394 W-F1 \u0394 GCN 40.29 \u2014 42.37 \u2014 42.99 \u2014 44.13 \u2014 45.19 \u2014 45.21 \u2014 45.22 \u2014 ReNode 41.33 \u21911.04 42.40 \u21910.03 43.21 \u21910.22 44.56 \u21910.43 45.20 \u21910.01 45.08 \u21930.13 44.89 \u21930.33 PASTEL 45.67 \u21915.38 57.61 \u219115.24 58.33 \u219115.34 60.29 \u219116.16 66.41 \u219121.22 66.45 \u219121.24 66.57 \u219121.35 Cora Citeseer Chameleon Squirrel 20 40 60 80 100 Weighted-F1 (%) 79.4 66.3 30.5 21.9 81.5 71.7 56.0 35.4 82.5 72.9 57.8 37.5 GCN PASTEL(w/o PE) PASTEL Figure 5: The impact of position encoding. the reaching coefficient \ud835\udc45\ud835\udc36and squashing coefficient \ud835\udc46\ud835\udc36as introduced in Section 3.2. Then we choose 3 training sets with different levels of topology-imbalance according to the conclusion in Section 3.3 and we denote them as Cora-L, Cora-M, and Cora-H, according to the degree of topology imbalance. Note that larger \ud835\udc45\ud835\udc36means better reachability and larger \ud835\udc46\ud835\udc36means lower squashing. We evaluate PASTEL and several baselines with the GCN as the backbone and show the dataset information, the Weighted-F1 scores, and their improvements (\u0394) over the backbones in Table 2. The performance of node representation learning generally gets worse with the increase of the topology-imbalance degree of the dataset. Both the node re-weighting method (i.e., ReNode [7]) and the structure learning methods (i.e., IDGL [10], SDRF [42] and PASTEL) can achieve more improvement with the increase of dataset topology-imbalance. PASTEL performs best on all datasets with different degrees of topology-imbalance and it can achieve up to 4.4% improvement on the highly topology-imbalance dataset. 5.2.3 PASTEL for Synthetic Graphs. We generate 7 synthetic graph datasets with different community structures using the Stochastic Block Model (SBM) G(\ud835\udc41,\ud835\udc36, \ud835\udc5d,\ud835\udc5e) [19], where the number of nodes \ud835\udc41= 3000, the number of community \ud835\udc36= 6, \ud835\udc5ddenotes the edge probability within a community and \ud835\udc5edenotes the edge probability between communities. We show the classification Weighted-F1 scores and improvements are shown in Table 3. With a more clear community structure, the reaching coefficient \ud835\udc45\ud835\udc36increases and the squashing coefficient \ud835\udc46\ud835\udc36also increases, leading to the increase of GCN\u2019s performance, which agrees with the conclusion obtained in Section 3.3. ReNode shows unsatisfied performance in boosting the Cora Citeseer Chameleon Squirrel 20 40 60 80 100 Weighted-F1 (%) 79.4 66.3 30.5 21.9 81.9 72.0 56.2 35.9 82.0 72.2 56.5 36.5 82.5 72.9 57.8 37.5 GCN PASTEL(Totoro) PASTEL(w/o CCM) PASTEL Figure 6: The impact of class-wise conflict measure. node classification. PASTEL can increase the classification weightedF1 score by 5.38%-21.35% on SBM graphs with different community structures, showing superior effectiveness. 5.3 Analysis of PASTEL (RQ2) We conduct ablation studies for the two main mechanisms of PASTEL, position encoding and class-wise conflict measure. 5.3.1 Impact of the Position Encoding. 
We design an anchor-based position encoding mechanism in Section 4.1, which reflects the relative topological position to labeled nodes and further maximizes the label influence within a class. To evaluate the effectiveness of position encoding, we compare PASTEL with a variant PASTEL (w/o PE), which removes the position encoding and directly take the node features for metric learning in Eq. (7). Here we use the GCN as the backbone. As shown in Figure 5, the structure learning strategy of PASTEL contributes the most, which can achieve at most 25.5% improvement in terms of Weighted-F1 score with only node features. Although PASTEL (w/o PE) effectively improves the performance of backbones to some extent, the position encoding still benefits learning better structure to relieve the topology-imbalance with 1.0%-1.8% improvements than PASTEL (w/o PE). 5.3.2 Impact of the Class-wise Conflict Measure. We designed a class-wise conflict measure in Section 4.2 as edge weights to guide learning structures with better intra-class connectivity. Here, we compare PASTEL with its two variants to analyze the impact of the class-wise conflict measure: (1) PASTEL (w/o CCM), which removes the class-wise conflict measure and directly takes the learned edge possibilities in Eq. (7) as the edge weights. (2) PASTEL (Totoro), which takes the Totoro metric introduced in ReNode [7] as \fPosition-aware Structure Learning for Graph Topology-imbalance by Relieving Under-reaching and Over-squashing CIKM \u201922, October 17\u201321, 2022, Atlanta, GA, USA (a) Original Graph. (b) ReNode. (c) SDRF. (d) IDGL. (e) PASTEL. Figure 7: Structure visualization. (a) Original graph of Cora and learned graphs by (b) ReNode, (c) SDRF, (d) IDGL and (e) PASTEL. Table 4: Properties and performance of the original graph and learned graphs of Cora. Original Graph ReNode SDRF IDGL PASTEL \ud835\udc45\ud835\udc36 0.4022 0.4022 0.4686 0.5028 0.5475 \ud835\udc46\ud835\udc36 -0.6299 -0.6299 -0.4942 -0.4069 -0.3389 W-F1 (%) 79.44 80.34 82.01 82.38 82.86 the conflict measure of nodes in Eq. (13). Here we use the GCN as the backbone. The comparison results are shown in Figure 6. On four datasets, PASTEL consistently outperforms the other two variants. Even without the conflict measure, PASTEL (w/o CCM) still shows better performance than PASTEL (Totoro), indicating the limitation of ReNode when capturing the relative topology positions without clear homophily structures. 5.4 Analysis of Learned Structure (RQ3) We analyze the learned graph by PASTEL in terms of visualization and structural properties. 5.4.1 Structure Visualization. In Figure 7, we visualize the original graph of Cora and the graphs learned by ReNode [7], SDRF [42], IDGL [10] and PASTEL using networkx. For clarity, the edges are not shown. The solid points denote the labeled nodes, the hollow points denote the unlabeled nodes, and the layout of nodes denotes their connectivities. The node size in Figure 7(b) denotes the learned node weight in ReNode, and the solid lines and dashed lines in Figure 7(c) denote the added and deleted edges by SDRF, respectively. As we can observe, ReNode gives more weights to nodes in the topology center of each class and SDRF tends to build connections between distant or isolated nodes. Even though the structure learned by IDGL can make the nodes of a class close, there are still some overlapping and entangled areas between classes. 
Benefiting from the position encoding and class-wise conflict measure, PASTEL can obtain graph structure with clearer class boundaries. 5.4.2 Change of \ud835\udc45\ud835\udc36and \ud835\udc46\ud835\udc36. We also show the reaching coefficient \ud835\udc45\ud835\udc36and the squashing coefficient \ud835\udc46\ud835\udc36of the above graphs in Figure 7 and the Weighted-F1 score learned on them in Table 4. Here we choose the GCN as the model backbone. All of the structure learning methods (SDRF [42], IDGL [10] and PASTEL) learn structures with larger reaching coefficient and larger squashing coefficient, leading the performance improvement of node classification. This phenomenon supports our propositions in Section 3.3 again. C1 C2 C3 C4 C5 C6 C7 V1 V2 V3 V4 V5 V6 V7 (a) Original Graph. C1 C2 C3 C4 C5 C6 C7 V1 V2 V3 V4 V5 V6 V7 (b) Learned Graph. Figure 8: Heat maps for the Group PageRank value of (a) the original graph and (b) the learned graph by PASTEL. 5.4.3 Change of GPR Vector. The class-wise conflict measure is calculated by the Group PageRank (GPR), which reflects the label influence of each class. We randomly choose 10 nodes for each class in Cora and show their GPR vectors P\ud835\udc54\ud835\udc5d\ud835\udc5f \ud835\udc56 in the original graph in Figure 8(a) and the learned graph in Figure 8(b), respectively, where the color shade denotes the magnitude, \ud835\udc49\ud835\udc56denotes 10 nodes of class \ud835\udc56and \ud835\udc36\ud835\udc56denotes the \ud835\udc56-th class. In Figure 8(a), the off-diagonal color blocks are also dark, indicating that the label influence of each class that nodes obtained from the original graph is still entangled to some extent, which could bring difficulties to the GNN optimization. After the structure learning guided by the proposed class-wise conflict measure, Figure 8(b) exhibits 7 clear diagonal blocks and the gaps between the diagonal and off-diagonal block are widened, indicating that nodes can receive more supervision information of its ground-truth class. We can further make a conclusion that the class-wise conflict measure plays an important role on giving guidance for more class connectivity orthogonality. 6" + }, + { + "url": "http://arxiv.org/abs/2112.08903v1", + "title": "Graph Structure Learning with Variational Information Bottleneck", + "abstract": "Graph Neural Networks (GNNs) have shown promising results on a broad spectrum\nof applications. Most empirical studies of GNNs directly take the observed\ngraph as input, assuming the observed structure perfectly depicts the accurate\nand complete relations between nodes. However, graphs in the real world are\ninevitably noisy or incomplete, which could even exacerbate the quality of\ngraph representations. In this work, we propose a novel Variational Information\nBottleneck guided Graph Structure Learning framework, namely VIB-GSL, in the\nperspective of information theory. VIB-GSL advances the Information Bottleneck\n(IB) principle for graph structure learning, providing a more elegant and\nuniversal framework for mining underlying task-relevant relations. VIB-GSL\nlearns an informative and compressive graph structure to distill the actionable\ninformation for specific downstream tasks. VIB-GSL deduces a variational\napproximation for irregular graph data to form a tractable IB objective\nfunction, which facilitates training stability. 
Extensive experimental results\ndemonstrate that the superior effectiveness and robustness of VIB-GSL.", + "authors": "Qingyun Sun, Jianxin Li, Hao Peng, Jia Wu, Xingcheng Fu, Cheng Ji, Philip S. Yu", + "published": "2021-12-16", + "updated": "2021-12-16", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI" + ], + "main_content": "Introduction Recent years have seen a signi\ufb01cant growing amount of interest in graph representation learning (Zhang et al. 2018; Tong et al. 2021), especially in efforts devoted to developing more effective graph neural networks (GNNs) (Zhou et al. 2020). Despite GNNs\u2019 powerful ability in learning graph representations, most of them directly take the observed graph as input, assuming the observed structure perfectly depicts the accurate and complete relations between nodes. However, these raw graphs are naturally admitted from network-structure data (e.g., social network) or constructed from the original feature space by some pre-de\ufb01ned rules, which are usually independent of the downstream tasks and lead to the gap between the raw graph and the optimal graph for speci\ufb01c tasks. Moreover, most of graphs in the real-word are noisy or incomplete due to the error-prone data collection (Chen, Wu, and Zaki 2020), which could even exacerbate the quality of representations produced by GNNs (Z\u00a8 ugner, Akbarnejad, and G\u00a8 unnemann 2018; Sun et al. 2018). It\u2019s also found that the properties of a graph Copyright \u00a9 2022, Association for the Advancement of Arti\ufb01cial Intelligence (www.aaai.org). All rights reserved. are mainly determined by some critical structures rather than the whole graph (Sun et al. 2021; Peng et al. 2021). Furthermore, many graph enhanced applications (e.g., text classi\ufb01cation (Li et al. 2020) and vision navigation (Gao et al. 2021)) may only have data without graph-structure and require additional graph construction to perform representation learning. The above issues pose a great challenge for applying GNNs to real-world applications, especially in some risk-critical scenarios. Therefore, learning a task-relevant graph structure is a fundamental problem for graph representation learning. To adaptively learn graph structures for GNNs, many graph structure learning methods (Zhu et al. 2021; Franceschi et al. 2019; Chen, Wu, and Zaki 2020) are proposed, most of which optimize the adjacency matrix along with the GNN parameters toward downstream tasks with assumptions (e.g., community) or certain constraints (e.g., sparsity, low-rank, and smoothness) on the graphs. However, these assumptions or explicit certain constraints may not be applicable to all datasets and tasks. There is still a lack of a general framework that can mine underlying relations from the essence of representation learning. Recalling the above problems, the key of structure learning problem is learning the underlying relations invariant to task-irrelevant information. Information Bottleneck (IB) principle (Tishby, Pereira, and Bialek 2000) provides a framework for constraining such task-irrelevant information retained at the output by trading off between prediction and compression. Speci\ufb01cally, the IB principle seeks for a representation Z that is maximally informative about target Y (i.e., maximize mutual information I(Y ; Z)) while being minimally informative about input data X (i.e., minimize mutual information I(X; Z)). Based on the IB principle, the learned representation is naturally more robust to data noise. 
IB has been applied to representation learning (Kim et al. 2021; Jeon et al. 2021; Pan et al. 2020; Bao 2021; Dubois et al. 2020) and numerous deep learning tasks such as model ensemble (Sinha et al. 2020), \ufb01ne-tuning (Mahabadi, Belinkov, and Henderson 2021), salient region discovery (Zhmoginov, Fischer, and Sandler 2020). In this paper, we advance the IB principle for graph to solve the graph structure learning problem. We propose a novel Variational Information Bottleneck guided Graph Structure Learning framework, namely VIB-GSL. VIBarXiv:2112.08903v1 [cs.LG] 16 Dec 2021 \fGSL employs the irrelevant feature masking and structure learning method to generate a new IB-Graph GIB as a bottleneck to distill the actionable information for the downstream task. VIB-GSL consists of three steps: (1) the IB-Graph generator module learns the IB-graph GIB by masking irrelevant node features and learning a new graph structure based on the masked feature; (2) the GNN module takes the IBgraph GIB as input and learns the distribution of graph representations; (3) the graph representation is sampled from the learned distribution with a reparameterization trick and then used for classi\ufb01cation. The overall framework can be trained ef\ufb01ciently with the supervised classi\ufb01cation loss and the distribution KL-divergence loss for the IB objective. The main contributions are summarized as follows: \u2022 VIB-GSL advances the Information Bottleneck principle for graph structure learning, providing an elegant and universal framework in the perspective of information theory. \u2022 VIB-GSL is model-agnostic and has a tractable variational optimization upper bound that is easy and stable to optimize. It is suf\ufb01cient to plug existing GNNs into the VIBGSL framework to enhance their performances. \u2022 Extensive experiment results in graph classi\ufb01cation and graph denoising demonstrate that the proposed VIBGSL enjoys superior effectiveness and robustness compared to other strong baselines. 2 Background and Problem Formulation 2.1 Graph Structure Learning Graph structure learning (Zhu et al. 2021) targets jointly learning an optimized graph structure and corresponding representations to improving the robustness of GNN models. In this work, we focus on graph structure learning for graph-level tasks. Let G \u2208G be a graph with label Y \u2208Y. Given a graph G = (X, A) with node set V , node feature matrix X \u2208R|V |\u00d7d, and adjacency matrix A \u2208R|V |\u00d7|V |, or only given a feature matrix X, the graph structure learning problem we consider in this paper can be formulated as producing an optimized graph G\u2217= (X\u2217, A\u2217) and its corresponding node/graph representations Z\u2217= f(G\u2217), with respect to the downstream graph-level tasks. 2.2 Information Bottleneck The Information Bottleneck (Tishby, Pereira, and Bialek 2000) seeks the balance between data \ufb01t and generalization using the mutual information as both cost function and regularizer. We will use the following standard quantities in the information theory (Cover 1999) frequently: Shannon entropy H(X) = EX\u223cp(X)[\u2212log p(X)], cross entropy H(p(X), q(X)) = EX\u223cp(X)[\u2212log q(X)], Shannon mutual information I(X; Y ) = H(X) \u2212H(X|Y ), and Kullback Leiber divergence DKL(p(X)||q(X) = EX\u223cp(X) log p(X) q(X). 
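As a quick numerical illustration of the quantities just defined (entropy, KL divergence and mutual information) for discrete distributions, one could write the following; the toy joint distribution is arbitrary and only meant to make the definitions concrete.

import torch

def entropy(p):                       # H(X) = E[-log p(X)]
    return -(p * p.log()).sum()

def kl_div(p, q):                     # D_KL(p || q)
    return (p * (p.log() - q.log())).sum()

def mutual_information(joint):        # I(X;Y) = KL( p(x,y) || p(x) p(y) )
    px = joint.sum(dim=1, keepdim=True)
    py = joint.sum(dim=0, keepdim=True)
    return kl_div(joint.flatten(), (px * py).flatten())

joint = torch.tensor([[0.4, 0.1],
                      [0.1, 0.4]])    # toy joint distribution p(x, y)
print(entropy(joint.sum(dim=1)), mutual_information(joint))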
Following standard practice in the IB literature (Tishby, Pereira, and Bialek 2000), given data X, representation Z of X and target Y , (X, Y, Z) are following the Markov Chain < Y \u2192X \u2192Z >. De\ufb01nition 1 (Information Bottleneck). For the input data X and its label Y , the Information Bottleneck principle aims to learn the minimal suf\ufb01cient representation Z: Z = arg min Z \u2212I(Z; Y ) + \u03b2I(Z; X), (1) where \u03b2 is the Lagrangian multiplier trading off suf\ufb01ciency and minimality. Deep VIB (Alemi et al. 2016) proposed a variational approximation to the IB objective by parameterizing the distribution via a neural network: L = 1 N N X i=1 Z dZp(Z|Xi) log q(Yi|Z) + \u03b2DKL (p(Z|Xi), r(Z)) , (2) where q(Yi|Z) is the variational approximation to p(Yi|Z) and r(Z) is the variational approximation of p(Z). The IB framework has received signi\ufb01cant attention in machine learning and deep learning (Alemi et al. 2016; Saxe et al. 2019). As for irregular graph data, there are some recent works (Wu et al. 2020; Yu et al. 2020; Yang et al. 2021; Yu et al. 2021) introducing the IB principle to graph learning. GIB (Wu et al. 2020) extends the general IB to graph data with regularization of the structure and feature information for robust node representations. SIB (Yu et al. 2020, 2021) was proposed for the subgraph recognition problem. HGIB (Yang et al. 2021) was proposed to implement the consensus hypothesis of heterogeneous information networks in an unsupervised manner. We illustrate the difference between related graph IB methods and our method in Section 3.3. 3 Variational Information Bottleneck Guided Graph Structure learning In this section, we elaborate the proposed VIB-GSL, a novel variational information bottleneck principle guided graph structure learning framework. First, we formally de\ufb01ne the IB-Graph and introduce a tractable upper bound for IB objective. Then, we introduce the graph generator to learn the optimal IB-Graph as a bottleneck and give the overall framework of VIB-GSL. Lastly, we compare VIB-GSL with two graph IB methods to illustrate its difference and properties. 3.1 Graph Information Bottleneck In this work, we focus on learning an optimal graph GIB = (XIB, AIB) named IB-Graph for G, which is compressed with minimum information loss in terms of G\u2019s properties. De\ufb01nition 2 (IB-Graph). For a graph G = (X, A) and its label Y , the optimal graph GIB = (XIB, AIB) found by Information Bottleneck is denoted as IB-Graph: GIB = arg min GIB \u2212I(GIB; Y ) + \u03b2I(GIB; G), (3) where XIB is the task-relevant feature set and AIB is the learned task-relevant graph adjacency matrix. \fIntuitively, the \ufb01rst term \u2212I(GIB; Y ) is the prediction term, which encourages that essential information to the graph property is preserved. The second term I(GIB; G) is the compression term, which encourages that labelirrelevant information in G is dropped. And the Lagrangian multiplier \u03b2 indicates the degree of information compression, where larger \u03b2 indicates more information in G was retained to GIB. Suppose Gn \u2208G is a task-irrelevant nuisance in G, the learning procedure of GIB follows the Markov Chain < (Y, Gn) \u2192G \u2192GIB >. IB-Graph only preserves the task-relevant information in the observed graph G and is invariant to nuisances in data. Lemma 1 (Nuisance Invariance). Given a graph G \u2208G with label Y \u2208Y, let Gn \u2208G be a task-irrelevant nuisance for Y . 
Denote GIB as the IB-Graph learned from G, then the following inequality holds: I(GIB; Gn) \u2264I(GIB; G) \u2212I(GIB; Y ) (4) Please refer to the Technical Appendix for the detailed proof. Lemma 1 indicates that optimizing the IB objective in Eq. (3) is equivalent to encourage GIB to be less related to task-irrelevant information in G, leading to the nuisanceinvariant property of IB-Graph. Due to the non-Euclidean nature of graph data and the intractability of mutual information, the IB objective in Eq. (3) is hard to optimize directly. Therefore, we introduce two tractable variational upper bounds of \u2212I(GIB; Y ) and I(GIB; G), respectively. First, we examine the prediction term \u2212I(GIB; Y ) in Eq. (3), which encourages GIB is informative of Y . Please refer to Technical Appendix for the detailed proof of Proposition 1. Proposition 1 (Upper bound of \u2212I(GIB; Y )). For graph G \u2208G with label Y \u2208Y and IB-Graph GIB learned from G, we have \u2212I(Y ; GIB) \u2264\u2212 ZZ p(Y, GIB) log q\u03b8(Y |GIB)dY dGIB + H(Y ), (5) where q\u03b8(Y |GIB) is the variational approximation of the true posterior p(Y |GIB). Then we examine the compression term I(GIB; G) in Eq. (3), which constrains the information that GIB receives from G. Please refer to the Technical Appendix for the detailed proof of Proposition 2. Proposition 2 (Upper bound of I(GIB; G) ). For graph G \u2208G and IB-Graph GIB learned from G, we have I(GIB; G) \u2264 ZZ p(GIB, G) log p(GIB|G) r(GIB) dGIBdG, (6) where r(GIB) is the variational approximation to the prior distribution p(GIB) of GIB. Finally, plug Eq. (5) and Eq. (6) into Eq. (3) to derive the following objective function, which we try to minimize: \u2212I(GIB; Y ) + \u03b2I(GIB; G) \u2264\u2212 ZZ p(Y, GIB) log q\u03b8(Y |GIB)dY dGIB + \u03b2 ZZ p(GIB, G) log p(GIB|G) r(GIB) dGIBdG. (7) 3.2 Instantiating the VIB-GSL Framework Following the theory discussed in Section 3.1, we \ufb01rst obtain the graph representation ZIB of GIB to optimize the IB objective in Eq. (7). We assume that there is no information loss during this process, which is the general practice of mutual information estimation (Tian et al. 2020). Therefore, we have I(GIB; Y ) \u2248I(ZIB; Y ) and I(GIB; G) \u2248I(ZIB; G). In practice, the integral over GIB and G can be approximated by Monte Carlo sampling (Shapiro 2003) on all training samples {Gi \u2208G, Yi \u2208Y, i = 1, . . . , N}. \u2212I(GIB; Y ) + \u03b2I(GIB; G) \u2248\u2212I(ZIB; Y ) + \u03b2I(ZIB; G) \u22641 N N X i=1 \u001a \u2212log q\u03b8(Yi|ZIBi)+\u03b2p(ZIBi|Gi)logp(ZIBi|Gi) r(ZIB) \u001b . (8) As shown in Figure 1, VIB-GSL consists of three steps: Step-1: Generate IB-Graph GIB. We introduce an IB-Graph generator to generate the IBgraph GIB for the input graph G. Following the assumption that nuisance information exists in both irrelevant feature and structure, the generation procedure consists of feature masking and structure learning. Feature Masking. We \ufb01rst use a feature masking scheme to discretely drop features that are irrelevant to the downstream task, which is formulated as: XIB = {Xi \u2299M, i = 1, 2, \u00b7 \u00b7 \u00b7 , |V |}, (9) where M \u2208Rd is a learnable binary feature mask and \u2299 is the element-wise product. Intuitively, if a particular feature is not relevant to task, the corresponding weight in M takes value close to zero. 
We can reparameterize XIB using the reparameterization trick (Kingma and Welling 2013) to backpropagate through a d-dimensional random variable: XIB = Xr + (X \u2212Xr) \u2299M, (10) where Xr is a random variable sampled from the empirical distribution of X. Structure Learning. We model all possible edges as a set of mutually independent Bernoulli random variables parameterized by the learned attention weights \u03c0: AIB = [ u,v\u2208V {au,v \u223cBer (\u03c0u,v)} . (11) For each pair of nodes, we optimized the edge sampling probability \u03c0 jointly with the graph representation learning. \u03c0u,v describes the task-speci\ufb01c quality of edge (u, v) and smaller \u03c0u,v indicates that the edge (u, v) is more likely to \fnoisy relation underlying relation (3) Sample GNN feature masking structure learning (1) Generate IB-Graph (2) Learn Distribution Figure 1: Overview of VIB-GSL. Given G as input, VIB-GSL consists of the following three steps: (1) Generate IB-Graph: the IB-Graph generator learns an IB-Graph GIB by masking irrelevant features and learning a new structure; (2) Learn distribution of IB-Graph representation: the GNN module learns the distribution of IB-Graph representation ZIB; (3) Sample IB-Graph representation: ZIB is sampled from the learned distribution by a reparameterization trick for classi\ufb01cation. be noise and should be assigned small weight or even be removed. For a pair of nodes (u, v), the edge sampling probability \u03c0u,v is calculated by: Z(u) = NN (XIB (u)) , \u03c0u,v = sigmoid \u0000Z(u)Z(v)T\u0001 , (12) where NN(\u00b7) denotes a neural network and we use a twolayer perceptron in this work. One issue is that AIB is not differentiable with respect to \u03c0 as Bernoulli distribution. We thus use the concrete relaxation (Jang, Gu, and Poole 2017) of the Bernoulli distribution to update \u03c0: Ber(\u03c0u,v) \u2248sigmoid \u00121 t \u0012 log \u03c0u,v 1 \u2212\u03c0u,v + log \u03f5 1 \u2212\u03f5 \u0013\u0013 , (13) where \u03f5 \u223cUniform(0, 1) and t \u2208R+ is the temperature for the concrete distribution. After concrete relaxation, the binary entries au,v from a Bernoulli distribution are transformed into a deterministic function of \u03c0u,v and \u03f5. The graph structure after the concrete relaxation is a weighted fully connected graph, which is computationally expensive. We hence extract a symmetric sparse adjacency matrix by masking off those elements which are smaller than a non-negative threshold a0. Step-2: Learn Distribution of IB-Graph Representation. For the compression term I(ZIB; G) in Eq. (8), we consider a parametric Gaussian distribution as prior r(ZIB) and p(ZIB|G) to allow an analytic computation of Kullback Leibler (KL) divergence (Hershey and Olsen 2007): r (ZIB) = N (\u00b50, \u03a30) , p (ZIB|G) = N \u0010 f \u00b5 \u03c6 (GIB) , f \u03a3 \u03c6 (GIB) \u0011 , (14) where \u00b5 \u2208RK and \u03a3 \u2208RK\u00d7K is the mean vector and the diagonal co-variance matrix of ZIB encoded by f\u03c6(GIB). The dimensionality of ZIB is denoted as K, which speci\ufb01es the bottleneck size. 
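An illustrative sketch of the Step-1 generator described by Eqs. (9)-(13) is given below; the module name, the relaxed (continuous) feature mask standing in for the binary mask M, and the symmetrization before thresholding are our assumptions rather than details taken from the paper's implementation.

import torch
import torch.nn as nn

class IBGraphGenerator(nn.Module):
    def __init__(self, feat_dim, hidden_dim, temperature=0.1, threshold=0.1):
        super().__init__()
        self.mask_logits = nn.Parameter(torch.zeros(feat_dim))   # relaxed feature mask M
        self.edge_mlp = nn.Sequential(nn.Linear(feat_dim, hidden_dim), nn.ReLU(),
                                      nn.Linear(hidden_dim, hidden_dim))
        self.t = temperature      # temperature t of Eq. (13)
        self.a0 = threshold       # sparsification threshold a0

    def forward(self, X):
        # Feature masking with the reparameterization of Eq. (10).
        M = torch.sigmoid(self.mask_logits)
        X_r = X[torch.randint(0, X.size(0), (X.size(0),))]       # rows drawn from the empirical distribution of X
        X_ib = X_r + (X - X_r) * M
        # Edge probabilities pi_uv (Eq. 12) and concrete relaxation of Bernoulli sampling (Eq. 13).
        Z = self.edge_mlp(X_ib)
        pi = torch.sigmoid(Z @ Z.t())
        eps = torch.rand_like(pi).clamp(1e-6, 1 - 1e-6)
        logit = torch.log(pi / (1 - pi + 1e-12) + 1e-12) + torch.log(eps / (1 - eps))
        A = torch.sigmoid(logit / self.t)
        A = 0.5 * (A + A.t())                                     # symmetrize (our choice)
        return X_ib, A * (A > self.a0)                            # mask off entries below a0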
We model the f\u03c6(GIB) as a graph neural network (GNN) with weights \u03c6, where f \u00b5 \u03c6 (GIB) and f \u03a3 \u03c6 (GIB) are the 2K-dimensional output value of the GNN: \u2200u \u2208V, ZIB(u) = GNN (XIB, AIB) , \u0010 f \u00b5 \u03c6 (GIB) , f \u03a3 \u03c6 (GIB) \u0011 = Pooling ({ZIB (u) , \u2200u \u2208V }) , (15) where the \ufb01rst K-dimension outputs encode \u00b5 and the remaining K-dimension outputs encode \u03a3 (we use a softplus transform for f \u03a3 \u03c6 (GIB) to ensure the non-negativity). We treat r(ZIB) as a \ufb01xed d-dimensional spherical Gaussian r(ZIB) = N(ZIB|0, I) as in (Alemi et al. 2016). Step-3: Sample IB-Graph Representation. To obtain ZIB, we can use the reparameterization trick (Kingma and Welling 2013) for gradients estimation: ZIB = f \u00b5 \u03c6 (GIB) + f \u03a3 \u03c6 (GIB) \u2299\u03b5, (16) where \u03b5 \u2208N(0, I) is an independent Gaussian noise and \u2299 denotes the element-wise product. By using the reparameterization trick, randomness is transferred to \u03b5, which does not affect the back-propagation. For the \ufb01rst term I(ZIB, Y ) in Eq. (8), q\u03b8(Y |ZIB) outputs the label distribution of learned graph GIB and we model it as a multi-layer perceptron classi\ufb01er with parameters \u03b8. The multi-layer perceptron classi\ufb01er takes ZIB as input and outputs the predicted label. Training Objective. We can ef\ufb01ciently compute the upper bounds in Eq. (8) on the training data samples using the gradient descent based backpropagation techniques, as illustrated in Algorithm 1. The overall loss is: L = LCE(ZIB, Y ) + \u03b2DKL (p (ZIB|G) ||r (ZIB)) , (17) where LCE is the cross-entropy loss and DKL(\u00b7||\u00b7) is the KL divergence. The variational approximation proposed above \fAlgorithm 1: The overall process of VIB-GSL Input: Graph G = (X, A) with label Y ; Number of training epochs E; Output: IB-graph GIB, predicted label \u02c6 Y 1 Parameter initialization; 2 for e = 1, 2, \u00b7 \u00b7 \u00b7 , E do // Learn IB-Graph 3 XIB \u2190{Xi \u2299M, i \u2208|V |}; 4 AIB \u2190S u,v\u2208V {au,v \u223cBer(\u03c0u,v)}; 5 GIB \u2190(XIB, AIB); // Learn distribution 6 Encode (f \u00b5 \u03c6 (GIB), f \u03a3 \u03c6 (GIB)) by a GNN; // Sample graph representation 7 Reparameterize ZIB = f \u00b5 \u03c6 (GIB) + f \u03a3 \u03c6 (GIB) \u2299\u03b5; // Optimize 8 L = LCE(ZIB, Y )+\u03b2DKL (p (ZIB|G) ||r (ZIB)); 9 Update model parameters to minimize L. 10 end facilitates the training stability effectively, as shown in Section 4.2. We also analyze the impact of compression coef\ufb01cient \u03b2 on performance and learned structure in Section 4.2. Property of VIB-GSL Different with traditional GNNs and graph structure learning methods (e.g., IDGL (Chen, Wu, and Zaki 2020), NeuralSparse (Zheng et al. 2020)), VIB-GSL is independent of the original graph structure since it learns a new graph structure. This property renders VIB-GSL extremely robust to noisy information and structure perturbations, which is veri\ufb01ed in Section 4.2. 3.3 Comparison with multiple related methods. In this subsection, we discuss the relationship between the proposed VIB-GSL and two related works using the IB principle for graph representation learning, i.e., GIB (Wu et al. 2020) and SIB (Yu et al. 2020). Remark that VIB-GSL follows the Markov Chain < (Y, Gn) \u2192G \u2192GIB >. VIB-GSL vs. GIB GIB (Wu et al. 2020) aims to learn robust node representations Z by the IB principle following the Markov Chain < (Y, Gn) \u2192G \u2192Z >. 
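Before turning to the detailed comparison with GIB and SIB, the encoder of Eqs. (14)-(15), the sampling step of Eq. (16) and the objective of Eq. (17) can be summarized in the following minimal sketch. The GNN that produces node embeddings and the exact pooling/statistics layer are placeholders standing in for any backbone; this is our reading, not the released implementation.

import torch
import torch.nn as nn
import torch.nn.functional as F

class VIBReadout(nn.Module):
    def __init__(self, node_dim, bottleneck_K, num_classes):
        super().__init__()
        self.to_stats = nn.Linear(node_dim, 2 * bottleneck_K)    # outputs [mu, sigma] (Eq. 15)
        self.classifier = nn.Sequential(nn.Linear(bottleneck_K, bottleneck_K), nn.ReLU(),
                                        nn.Linear(bottleneck_K, num_classes))
        self.K = bottleneck_K

    def forward(self, node_embeddings):                           # [N, node_dim] from any GNN
        g = node_embeddings.mean(dim=0)                           # mean pooling
        stats = self.to_stats(g)
        mu, sigma = stats[:self.K], F.softplus(stats[self.K:])
        z = mu + sigma * torch.randn_like(sigma)                  # reparameterization (Eq. 16)
        return self.classifier(z), mu, sigma

def vib_loss(logits, y, mu, sigma, beta=1e-3):
    # y: scalar LongTensor holding the graph label.
    ce = F.cross_entropy(logits.unsqueeze(0), y.view(1))
    # KL( N(mu, diag(sigma^2)) || N(0, I) ) in closed form.
    kl = 0.5 * (sigma.pow(2) + mu.pow(2) - 1 - 2 * sigma.log()).sum()
    return ce + beta * kl                                         # Eq. (17)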
Speci\ufb01cally, GIB regularizes and controls the structure and feature information in the computation \ufb02ow of latent representations layer by layer. Our VIB-GSL differs in that we aim to learn an optimal graph explicitly, which is more interpretable than denoising in the latent space. Besides, our VIB-GSL focuses on graph-level tasks while GIB focuses on node-level ones. VIB-GSL vs. SIB SIB (Yu et al. 2020) aims to recognise the critical subgraph Gsub for input graph following the Markov Chain < (Y, Gn) \u2192G \u2192Gsub >. Our VIBGSL aims to learn a new graph structure and can be applied for non-graph structured data. Moreover, SIB directly estimates the mutual information between subgraph and graph by MINE (Belghazi et al. 2018) and uses a bi-level optimization scheme for the IB objective, leading to an unstable and inef\ufb01cient training process. Our VIB-GSL is more stable to train with the tractable variational approximation, which is demonstrated by experiments in Figure 5. 4 Experiments We evaluate VIB-GSL1 on two tasks: graph classi\ufb01cation and graph denoising, to verify whether VIB-GSL can improve the effectiveness and robustness of graph representation learning. Then we analyze the impact of information compression quantitatively and qualitatively. 4.1 Experimental Setups Datasets. We empirically perform experiments on VIBGSL on four widely-used social datasets including IMDBB, IMDB-M, REDDIT-B, and COLLAB (Rossi and Ahmed 2015). We choose the social datasets for evaluation because much noisy information may exist in social interactions. Baselines. We compare the proposed VIB-GSL with a number of graph-level structure learning baselines, including NeuralSparse (Zheng et al. 2020), SIB (Yu et al. 2020) and IDGL (Chen, Wu, and Zaki 2020), to demonstrate the effectiveness and robustness of VIB-GSL. We do not include GIB in our baselines since it focuses on node-level representation learning. Similar with SIB (Yu et al. 2020), we plug various GNN backbones2 into VIB-GSL including GCN (Kipf and Welling 2016), GAT (Veli\u02c7 ckovi\u00b4 c et al. 2017), GIN (Xu et al. 2019) to see whether the VIB-GSL can boost the performance of graph classi\ufb01cation or not. For a fair comparison, we use the mean pooling operation to obtain the graph representation and use a 2-layer perceptron as the graph classi\ufb01er for all baselines. Parameter Settings. We set both the information bottleneck size K and the embedding dimension of baseline methods as 16. For VIB-GSL, we set t = 0.1 in Eq. (13), a0 = 0.1 and perform hyperparameter search of \u03b2 \u2208 {10\u22121, 10\u22122, 10\u22123, 10\u22124, 10\u22125, 10\u22126} for each dataset. 4.2 Results and Analysis Graph Classi\ufb01cation. We \ufb01rst examine VIB-GSL\u2019s capability of improving graph classi\ufb01cation. We perform 10fold cross-validation and report the average accuracy and the standard deviation across the 10 folds in Table 1, where \u2206 denotes the performance improvement for speci\ufb01c backbone and \u201c\u2013\u201d indicates that there is no performance improvement for backbones without structure learner. The best results in each backbone group are underlined and the best results of each dataset are shown in bold. As shown in Table 1, the proposed VIB-GSL consistently outperforms all baselines on all datasets by a large margin. 
Generally, the graph sparsi\ufb01cation models (i.e., NeuralSparse and SIB) show only a small improvement in accuracy and even have a negative impact on performance (e.g., on COLLAB), which is because they are constrained by the observed structures without mining underlying relations. The performance superiority of VIB-GSL over different GNN backbones implies that 1Code is available at https://github.com/RingBDStack/VIBGSL. 2We follow the protocol in https://github.com/rusty1s/pytorch geometric/tree/master/benchmark/kernel. \fTable 1: Summary of graph classi\ufb01cation results: \u201caverage accuracy \u00b1 standard deviation\u201d and \u201cimprovements\u201d (%). Underlined: best performance of speci\ufb01c backbones, bold: best results of each dataset. Structure Learner Backbone IMDB-B IMDB-M REDDIT-B COLLAB Accuracy \u2206 Accuracy \u2206 Accuracy \u2206 Accuracy \u2206 N/A GCN 70.7\u00b13.7 49.7\u00b12.1 73.6\u00b14.5 77.6\u00b12.6 GAT 71.3\u00b13.5 50.9\u00b12.7 73.1\u00b12.6 75.4\u00b12.4 GIN 72.1\u00b13.8 49.7\u00b10.4 85.4\u00b13.0 78.8\u00b11.4 NeuralSparse GCN 72.0\u00b12.6 \u21911.3 50.1\u00b13.1 \u21910.4 72.1\u00b15.2 \u21931.5 76.0\u00b12.0 \u21931.6 GAT 73.4\u00b12.2 \u21912.1 53.7\u00b13.1 \u21912.8 74.3\u00b13.1 \u21911.2 75.4\u00b15.8 0.0 GIN 73.8\u00b11.6 \u21911.7 54.2\u00b15.4 \u21914.5 86.2\u00b12.7 \u21910.8 76.6\u00b12.1 \u21932.2 SIB GCN 72.2\u00b13.9 \u21911.5 51.8\u00b13.9 \u21912.1 76.7\u00b13.0 \u21913.1 76.3\u00b12.3 \u21931.3 GAT 72.9\u00b14.6 \u21911.6 51.3\u00b12.4 \u21910.4 75.3\u00b14.7 \u21912.2 77.3\u00b11.9 \u21911.9 GIN 73.7\u00b17.0 \u21911.6 51.6\u00b14.8 \u21911.9 85.7\u00b13.5 \u21910.3 77.2\u00b12.3 \u21931.6 IDGL GCN 72.2\u00b14.2 \u21911.5 52.1\u00b12.4 \u21912.4 75.1\u00b11.4 \u21911.5 78.1\u00b12.1 \u21910.5 GAT 71.5\u00b14.6 \u21910.2 51.8\u00b12.4 \u21910.9 76.2\u00b12.5 \u21913.1 76.8\u00b14.4 \u21911.4 GIN 74.1\u00b13.2 \u21912.0 51.1\u00b12.1 \u21911.4 85.7\u00b13.5 \u21910.3 76.7\u00b13.8 \u21932.1 VIB-GSL GCN 74.1\u00b13.3 \u21913.4 54.3\u00b11.7 \u21914.6 77.5\u00b12.4 \u21913.9 78.3\u00b11.4 \u21910.7 GAT 75.2\u00b12.7 \u21913.9 54.1\u00b12.7 \u21913.2 78.1\u00b12.5 \u21915.0 79.1\u00b11.2 \u21913.7 GIN 77.1\u00b11.4 \u21915.0 55.6\u00b12.0 \u21915.9 88.5\u00b11.8 \u21913.1 79.3\u00b12.1 \u21910.5 10 6 10 5 10 4 10 3 10 2 10 1 55 60 65 70 75 80 85 90 Accuracy (%) 68.0 70.0 74.0 76.0 72.0 62.0 IMDB-B 10 6 10 5 10 4 10 3 10 2 10 1 55 60 65 70 75 80 85 90 Accuracy (%) 82.0 85.0 80.0 70.5 68.5 58.0 REDDIT-B Figure 2: Impact of \u03b2 on IMDB-B and REDDIT-B. VIB-GSL can learn better graph structure to improve the representation quality. Graph Denoise. To evaluate the robustness of VIB-GSL, we generate a synthetics dataset by deleting or adding edges on REDDIT-B. Speci\ufb01cally, for each graph in the dataset, we randomly remove (if edges exist) or add (if no such edges) 25%, 50%, 75% edges. The reported results are the mean accuracy (solid lines) and standard deviation (shaded region) over 5 runs. As shown in Figure 3, the classi\ufb01cation accuracy of GCN dropped by 5% with 25% missing edges and dropped by 10% with 25% noisy edges, indicating that GNNs are indeed sensitive to structure noise. Since the proposed VIB-GSL does not depend on the original graph structure, it achieves better results without performance degradation. IDGL is still sensitive to structure noise since it iteratively updates graph structure based on node embeddings, which is tightly dependent on the observed structure. 
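The edge-perturbation protocol used in this denoising study (randomly deleting existing edges, or adding edges between previously unconnected pairs, at a given ratio) could be reproduced with a routine such as the one below; the function name and seeding are illustrative, and materializing all non-edges is only practical for graphs of the size used here.

import random
import networkx as nx

def perturb_edges(G, ratio, mode="add", seed=0):
    rng = random.Random(seed)
    H = G.copy()
    k = int(ratio * H.number_of_edges())
    if mode == "remove":
        H.remove_edges_from(rng.sample(list(H.edges()), k))
    else:
        non_edges = list(nx.non_edges(H))
        H.add_edges_from(rng.sample(non_edges, min(k, len(non_edges))))
    return H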
Parameter Sensitivity: Trade Off between Prediction and Compression. We explore the in\ufb02uence of the Lagrangian multiplier \u03b2 trading off prediction and compression in Eq. (3) and Eq. (8). Note that there is a relationship between increasing \u03b2 and decreasing K (Shamir, Sabato, and 0% 25% 50% 75% Missing edges 60 65 70 75 80 85 90 95 Accuracy (%) VIB-GSL IDGL GCN 0% 25% 50% 75% Adding edges 60 65 70 75 80 85 90 95 Accuracy (%) VIB-GSL IDGL GCN Figure 3: Test accuracy (\u00b1 standard deviation) in percent for the edge attack scenarios on REDDIT-B. Tishby 2010), and the following analysis is with K = 16. Figure 2 depicts the changing trend of graph classi\ufb01cation accuracy on IMDB-B and REDDIT-B. Based on the results, we make the following observations: (1) Remarkably, the graph classi\ufb01cation accuracies of VIB-GSL variation across different \u03b2 collapsed onto a hunchback shape on both datasets. The accuracy \ufb01rst increases with the increase of \u03b2, indicating that removing irrelevant information indeed enhances the graph representation learning. Then the accuracy progressively decreases and reaches very low values, indicating that excessive information compression will lose effective information. (2) Appropriate value of \u03b2 can greatly increase the model\u2019s performance. VIB-GSL achieves the best balance of prediction and compression with \u03b2 = 10\u22123 and \u03b2 = 10\u22125 on IMDB-B and REDDIT-B, respectively. This indicates that different dataset consists of different percent of task-irrelevant information and hence needs a different degree of information compression. Graph Visualization. To examine the graph structure changes brought by VIB-GSL intuitively, we present two samples from the IMDB-B dataset and visualize the origi\fFigure 4: Original graph and IB-Graphs with different \u03b2 when VIB-GSL achieves the same testing performance. nal graph and IB-Graphs learned by VIB-GSL in Figure 4, where |E| indicates the number of edges. To further analyze the impact of information compression degree, we visualize the learned IB-Graph with different \u03b2 when VIBGSL achieves the same testing performance. Note that VIBGSL does not set sparsity constraint as in most structure learning methods. As shown in Figure 4, we make the following observations: (1)VIB-GSL tends to generate edges that connect nodes playing the same structure roles, which is consistent with the homophily assumption. (2)When achieving the same testing performance, VIB-GSL with larger \u03b2 will generate a more dense graph structure. It is because with the degree of information compression increasing, the nodes need more neighbors to obtain enough information. Training Stability. As mentioned in Section 3.3, VIBGSL deduces a tractable variational approximation for the IB objective, which facilitates the training stability. In this subsection, we analyze the convergence of VIB-GSL and SIB (Yu et al. 2020) on REDDIT-B with a learning rate of 0.001. The IB objective in (Yu et al. 2020) is L = LCE + \u03b2LMI + \u03b1Lcon, where LCE is the cross-entropy loss, LMI is the MINE loss of estimating mutual information between original graph and learned subgraph and Lcon is a connectivity regularizer. Figure 5(a) depicts the losses of VIB-GSL (i.e., overall loss L, cross-entropy loss LCE for classi\ufb01cation, and the KL-divergence loss DKL) with \u03b2 = 10\u22123, where the dash lines indicates the mean value in the last 10 epochs when VIB-GSL converges. 
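For readers who want to see the loss decomposition concretely, the following PyTorch sketch shows one plausible form of the tractable objective L = L_CE + beta * D_KL; the diagonal-Gaussian bottleneck parameterization and the function name are our assumptions for illustration, not taken from the released code:

```python
import torch
import torch.nn.functional as F

def vib_objective(logits, labels, mu, logvar, beta=1e-3):
    """Tractable IB surrogate L = L_CE + beta * D_KL.

    Assumes the graph-level bottleneck is a diagonal Gaussian N(mu, diag(exp(logvar)))
    compressed toward a standard-normal prior; returns (total, CE term, KL term),
    mirroring the three curves plotted for VIB-GSL's training dynamics.
    """
    ce = F.cross_entropy(logits, labels)
    kl = 0.5 * torch.mean(torch.sum(mu.pow(2) + logvar.exp() - logvar - 1.0, dim=-1))
    return ce + beta * kl, ce, kl

# Toy usage: a batch of 8 graphs, 2 classes, bottleneck size K = 16.
logits = torch.randn(8, 2)
labels = torch.randint(0, 2, (8,))
mu, logvar = torch.randn(8, 16), torch.randn(8, 16)
loss, ce, kl = vib_objective(logits, labels, mu, logvar, beta=1e-3)
```

Because both terms are closed-form and differentiable, the total loss can be minimized end-to-end with a single optimizer, in contrast to the bi-level scheme discussed next.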
As mentioned in Section 3.3, SIB adopted a bi-level optimization scheme for IB objective. Figure 5(b) depicts the losses of SIB (i.e., overall loss L, classi\ufb01cation loss LCE, the MI estimation loss LMI, and the connectivity loss Lcon) with \u03b2 = 0.2 and \u03b1 = 5 as suggested in its source code. As shown in Figure 5(a), VIB-GSL converge steadily, showing the effectiveness of the variational approximation. As shown in Figure 5(b), the MI estimation loss LMI is very unstable because of the bi-level optimization scheme, making SIB is 0 10 20 30 40 50 60 70 80 90 100 110120130140150 Epochs 0.2 0.4 0.6 0.8 1.0 1.2 1.4 Loss converges at 0.81\u00b10.02 CE converges at 0.55\u00b10.02 KL converges at 0.26\u00b10.01 CE KL (a) VIB-GSL. 0 10 20 30 40 50 60 70 80 90 100 110120 130 140 150 Epochs 2 1 0 1 2 3 4 Loss CE MI con (b) SIB. Figure 5: Training dynamics of VIB-GSL and SIB. very dif\ufb01cult to converge. 5" + }, + { + "url": "http://arxiv.org/abs/2106.07053v1", + "title": "Convex Sparse Blind Deconvolution", + "abstract": "In the blind deconvolution problem, we observe the convolution of an unknown\nfilter and unknown signal and attempt to reconstruct the filter and signal. The\nproblem seems impossible in general, since there are seemingly many more\nunknowns than knowns . Nevertheless, this problem arises in many application\nfields; and empirically, some of these fields have had success using heuristic\nmethods -- even economically very important ones, in wireless communications\nand oil exploration. Today's fashionable heuristic formulations pose non-convex\noptimization problems which are then attacked heuristically as well. The fact\nthat blind deconvolution can be solved under some repeatable and\nnaturally-occurring circumstances poses a theoretical puzzle.\n To bridge the gulf between reported successes and theory's limited\nunderstanding, we exhibit a convex optimization problem that -- assuming signal\nsparsity -- can convert a crude approximation to the true filter into a\nhigh-accuracy recovery of the true filter. Our proposed formulation is based on\nL1 minimization of inverse filter outputs. We give sharp guarantees on\nperformance of the minimizer assuming sparsity of signal, showing that our\nproposal precisely recovers the true inverse filter, up to shift and rescaling.\nThere is a sparsity/initial accuracy tradeoff: the less accurate the initial\napproximation, the greater we rely on sparsity to enable exact recovery. To our\nknowledge this is the first reported tradeoff of this kind. We consider it\nsurprising that this tradeoff is independent of dimension.\n We also develop finite-$N$ guarantees, for highly accurate reconstruction\nunder $N\\geq O(k \\log(k) )$ with high probability. We further show stable\napproximation when the true inverse filter is infinitely long and extend our\nguarantees to the case where the observations are contaminated by stochastic or\nadversarial noise.", + "authors": "Qingyun Sun, David Donoho", + "published": "2021-06-13", + "updated": "2021-06-13", + "primary_cat": "cs.IT", + "cats": [ + "cs.IT", + "cs.AI", + "cs.SY", + "eess.SY", + "math.IT", + "math.ST", + "stat.OT", + "stat.TH" + ], + "main_content": "Introduction 4 1.1 Blind Deconvolution . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4 1.2 The Promise of Sparsity . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4 1.3 Translating Heuristics into E\ufb00ective Algorithms . . . . . . . . . . . . . . . . . . . . . . 4 1.4 Prior Work . . 
1.5 Mathematical setup 8
2 Main Results Overview 11
2.1 Main Result 1: Phase Transition Phenomenon for Sparse Blind Deconvolution 11
2.2 Main Result 2: Finite Observation Window, Finite-length Inverse 16
2.3 Main Result 3: Stability Guarantee with Finite Length Inverse Filter 16
2.4 Main Result 4: Robustness Against Stochastic Noise and Adversarial Noise 17
2.5 Non-convex Blind deconvolution initialization method 19
3 Numerical Experiments 19
4 Technical Overview and Proof Sketch for Phase Transition Theorems 29
4.1 Main Result 1: Population (Large-N) Phase Transition 29
4.2 Technical Tools: Landscape of Expected Homogeneous Function over Bernoulli Support on Sphere 31
4.3 Exact representation of val(Q1) 33
4.4 Technical Tool: Relation between Finite Difference of Objective and Euclidean Distance 36
5 Conclusion 37
6 Supplementary: Landscape of Expected Homogeneous Function over Bernoulli Support on Sphere 38
6.1 Expectation of Inner Product for Sparse Signal 38
6.2 Expectation over Bernoulli Support 39
6.3 Upper and Lower bound on Expectation of Norm over Bernoulli Support 41
6.4 Proofs of Upper and Lower Bounds 42
6.5 Bound on Harmonic Expectation 46
7 Supplementary: Background for Convex Blind Deconvolution Problem 48
7.1 Technical Background: Wiener's Lemma and Inverse Filter 48
7.2 Change of Variable and Reduction to Projection Pursuit 48
7.3 Technical background: Directional Derivative and Projected Subgradient 49
8 Main Result 1 and Its Proof: Phase Transition 51
8.1 KKT Condition for Exact Recovery 51
8.2 Formula for Phase Transition Parameter 52
9 Supplementary: Tight Upper and Lower Bound of Phase Transition Parameter 53
9.1 Upper and Lower Bound from Optimization Point of View 53
9.2 Upper and Lower Bound from Geometric Point of View 55
9.3 Tighter Upper and Lower Bound from Refined Analysis 56
10 Technical Tool: Tight Bound for Finite Difference of Objective 58
11 Supplementary: Main Result 2 Proof: Guarantee for Finite Observation Window, Finite-Length Inverse 60
11.1 Phase Transition in Finite Observation Window, Finite-Length Inverse Setting 60
11.2 Concentration of Objective 60
11.3 Concentration for Directional Derivatives 61
11.4 Proof of Main Result 2
63 12 Supplementary: Main Result 3 Proof: Stability Guarantee with Finite Length Approximation to In\ufb01nite Length Inverse 66 12.1 Finite Length Approximation based on Z Transform . . . . . . . . . . . . . . . . . . . 67 12.2 Stability Theorem . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 67 12.3 Proof of Stability Theorem . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 68 13 Supplementary: Main Result 4 Proof: Robustness Guarantee against Stochastic and Adversarial Noises 70 13.1 Robustness Theorem against Stochastic Noise: Moving Average Gaussian Noise . . . 70 13.2 Robustness Theorem against Adversarial Noise . . . . . . . . . . . . . . . . . . . . . . 70 13.3 Proof of Robustness Theorem against Stochastic Noise . . . . . . . . . . . . . . . . . 71 13.4 Proof of Robustness Theorem against Adversarial Noise . . . . . . . . . . . . . . . . . 72 13.5 Technical Tool: Folded Gaussian Mean Formula . . . . . . . . . . . . . . . . . . . . . 73 3 \f1 Introduction 1.1 Blind Deconvolution Suppose we are interested in an underlying time series x = (x(t)) which we cannot observe directly. What we can observe is y = a \u2217x where a is an unknown \u2018blurring\u2019 \ufb01lter. Blind deconvolution is the problem of recovering x merely from the observed y without knowing either a or x. This problem occurs naturally in seismology and digital communications as well as astronomy, satellite imaging, and computer vision. In its most ambitious form, the problem is literally impossible; there are simply too few data and too many unknowns. Indeed, imagine that x, y and a all have N entries; we observe only N pieces of information (y) but there are 2N unknowns (x and a). Nevertheless, in some (not all) \ufb01elds, heuristic approaches have occasionally led to consistent success in isolated applications; presumably such success stories exploit specialized assumptions \u2013 although not always in an explicit or recognized way, and not with rigorous understanding. This paper, in contrast, will exhibit a set of assumptions enabling practical algorithms for blind deconvolution, backed by rigorous theoretical analysis. 1.2 The Promise of Sparsity Central to our approach is an assumption about the sparsity of the signal x to be recovered, in which case the problem can be called sparse blind deconvolution. Sparse signals \u2013 i.e. signals having relatively few nonzero entries, arise frequently in many \ufb01elds, including seismology, microscopy, astronomy, neuroscience spike identi\ufb01cation. Even in more abstract settings such as representation learning for computer vision, it surfaces in recently popular research trends, such as singlechannel convolutional dictionary learning [Bristow et al., 2013, Heide et al., 2015, Zhang et al., 2017, Zhang et al., 2018]. The sparsity of x if it holds would constrain the recovery problem signi\ufb01cantly; and so possibly, sparsity can play a role in enabling useful solutions to an otherwise hopeless problem. An inspiring precedent can be found in modern commercial medical imaging, where sparsity of an image\u2019s wavelet coe\ufb03cients enables MRIs from fewer observations than unknowns. Taking fewer observations speeds up data collection, a principle known as compressed sensing, which already bene\ufb01ts tens of millions of patients yearly. 1.3 Translating Heuristics into E\ufb00ective Algorithms For sparsity to reliably enable blind deconvolution, there are two apparent hurdles. 
First, develop an objective function which promotes sparsity of the solution. Second, develop an algorithm which can reliably optimize the objective. Many sparsity-promoting objectives have been proposed over the years; typically they imply nonconvex optimization problems. Indeed sparsity is quanti\ufb01ed by the \u21130 pseudo norm \u2225x\u22250 = #{t : x(t) \u0338= 0}, which is the limit of \u2225x\u2225p p as p \u21920 of concave \u2113p pseudo-norms where p < 1. Traditionally, non-convex problems have been viewed by mathematical scientists with skepticism; for such problems, gradient descent and its various re\ufb01nements lack any guarantee of e\ufb00ectiveness. Still, the lack of guarantees has not stopped engineers from trying! 4 \fA noticeable success in blind signal processing was scored in digital communications, where blind equalization today bene\ufb01ts billions of smartphone users. Blind equalization is a form of blind deconvolution where one exploits the known discrete-valued nature of the signal x (for example the signal entries might take only two values {\u22121, 1}). Practitioners found that if an initial guess of the equalizer (i.e. our inverse \ufb01lter a\u22121) is \u2018fairly good\u2019 (in engineer-speak \u2018opening the eye\u2019 so that a \u2018hint\u2019 of the \u2018digital constellation\u2018 becomes \u2018visible\u2019), then certain \u2018discreteness-promoting\u2019 on-line gradient methods can reliably \u2018focus\u2019 the result better and better and allow reliable recovery. Our work identi\ufb01es an analogous phenomenon in the sparsity-promoting blind deconvolution setting, however it exposes and crystallizes the phenomenon in a rigorous and dependably exploitable form. Namely, we show that if sparsity of x holds, and if an initial guess of the \ufb01lter a is \u2018fairly good\u2019 in a precise sense, then a speci\ufb01c convex optimization algorithm will accurately recover both the \ufb01lter and the original signal1. In retrospect, our insights on sparsity-promoting blind deconvolution can be cross-applied to explain the major successes of discreteness-promoting blind equalization in modern digital communications. Namely, a direct variation of our arguments provide a related convex optimization problem for discrete-valued signals which rigorously converts a a rough initial approximation into precise recovery. In our view these new arguments clear away some persistent fog, mystery and misunderstandings in blind signal processing; and pave the way for future success stories. 1.4 Prior Work Searching for an inverse \ufb01lter that promotes desired output properties Instead of trying to recover a and x together from a\u2217x, we could formulate this problem as looking for an approximate inverse \ufb01lter w so that the output w \u2217y exhibits extremal properties. Under this formulation, our goal is to \ufb01nd w \u0338= 0, so that optimize w\u0338=0 J(w \u22c6y) (1) where the functional J quanti\ufb01es the properties we seek to promote. (Depending on J, we might either prefer its maximum or minimum ). Working in exploration seismology, Wiggins [Wiggins, 1978] adopted this approach with the normalized 4-norm, J(z) = J4,2 = \u2225z\u22254/\u2225z\u22252 and gave a few successful data-processing case studies. His successful examples all clearly exhibit sparsity, although this was not discussed at the time. 
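As a quick numerical illustration of this output-property principle, the NumPy sketch below evaluates the normalized 4-norm J_{4,2}(z) = ||z||_4 / ||z||_2 on a toy sparse Bernoulli-Gaussian signal and on a smoothed version of it; the toy filter and sparsity level are our choices for illustration only:

```python
import numpy as np

def j42(z):
    """Wiggins-style normalized 4-norm: J_{4,2}(z) = ||z||_4 / ||z||_2."""
    z = np.asarray(z, dtype=float)
    return np.linalg.norm(z, 4) / np.linalg.norm(z, 2)

# Toy comparison: a sparse Bernoulli-Gaussian signal versus a blurred version of it.
rng = np.random.default_rng(0)
x = rng.normal(size=2000) * (rng.random(2000) < 0.05)   # sparse spike train
a = 0.9 ** np.arange(30)                                # smoothing (blurring) filter
y = np.convolve(x, a, mode="same")                      # observed, non-sparse output
print(j42(x), j42(y))   # the sparse x typically scores noticeably higher
```

A candidate inverse filter that "re-sparsifies" the output drives this score back up, which is exactly why maximizing such functionals over filters was proposed, despite the non-convexity discussed next.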
Other objectives considered at that time included J2,1(z) = \u2225z\u22252/\u2225z\u22251 and J\u221e,2(z) = \u2225z\u2225\u221e/\u2225z\u22252 [Cabrelli, 1985]. It was fully understood at that time that output property optimization could succeed in principle, if one did not have to worry about an e\ufb00ective algorithm. [Donoho, 1981] showed that if the signal x is a realization of independent and identically distributed entries from any nonGaussian distribution, optimizing J of the output w \u22c6y is successful in the large-N setting as long as the functional J belongs to a certain large family of non-Gaussianity measures, for example including J4,2 and J2,1 as well as many others. 2 1As we explain below, recover means: recover up to rescaling and time shift. 2Sparsity is of course a form of non-Gaussianity, this very explicitly in the Bernoulli-Gaussian mixture model 5 \fThe issue left unresolved in those days was how to solve such optimization problems. Indeed optimizations like J4,2 are badly nonconvex, as we see clearly by rewriting the J4,2 problem as maximize w \u2225w \u2217y\u22254 subject to \u2225w \u2217y\u22252 = 1. The theory cited above derived favorable properties of a would-be procedure which truly \ufb01nds the optimum of a badly nonconvex objective. It clari\ufb01es that blind deconvolution is possible in principle but does not by itself help us algorithmically, i.e. in practice. In the intellectual climate of the time, solving badly nonconvex optimization problems was considered a pipe dream, a time-wasting charade for non-serious people. Even today, blind deconvolution continues to be studied as an non-convex optimization problem; see recent work [Kuo et al., 2019, Kuo et al., 2020, Lau et al., 2019], who study the problem of recovering short a and sparse x Blind equalization In digital communications, the transmitted signal x can be viewed as discretealphabet valued; for example, in PAM signaling, where x(t) \u2208{\u00b11}, and QAM signaling, where the signal alphabet has equally spaced points on unit sphere in complex space [Kennedy and Ding, 1992, Ding and Luo, 2000]. [Vembu et al., 1994] considered the non-convex problem maximize w \u2225w \u2217y\u22258 subject to \u2225w \u2217y\u22252 = 1 If the data y were preprocessed to be serially uncorrelated, this optimization is e\ufb00ectively of the earlier form J8,2. The authors attack this nonconvex problem using projected gradient descent and give suggestive experimental results. They apparently view the \u21138 norm objective as an approximation of the \u2113\u221e norm objective \u2225w\u2217y\u2225\u221e. Later, [Ding and Luo, 2000] used linear programming to solve the \u221enorm problem directly. Searching for a projection with desired output properties Here is another setting for property-promoting output optimization. We have a data matrix Y \u2208Rn,p which we think of as n points in Rp, and we have a unit vector w \u2208Rp called the projection direction. Our output vector Y \u00b7 w contains the projection of the n-points on the projection direction q; it has n entries. We seek \u2018interesting\u2019 projections; i.e. directions where the projection displays some structure. We adopt a functional J which measures properties we seek to promote in the output, and we seek to solve: optimize \u2225w\u2225=1 J(Y \u00b7 w). (2) This was implemented by [Friedman and Tukey, 1974], who proposed a functional that promotes \u2018clumping\u2019 or \u2019clustering\u2019 of the output. 
They called it projection pursuit and were motivated by exploratory high dimensional data analysis; for p-dimensional data involve p > 2 and we can\u2019t easily considered below. 6 \fget a visual sense of what\u2019s in the data. It was hoped at the time that looking at selected lowdimensional projections might lead to better insights. For the most part, such hopes for exploration of high-dimensional data never materialized. However, output optimization of this type has proven to be useful in important problems in blind signal processing, where Y has known structure that can be exploited systematically. In blind source separation we observe Y = XA, Y \u2208Rn,p, X \u2208Rn,p and A \u2208Rp,p is an invertible matrix. (We don\u2019t observe X or A separately). Think of the X matrix as containing columns giving successive time samples of p clean source signals, for example p individual acoustic signals. These signals arrive at array of p spatially distributed acoustic sensors, each of which records the acoustic information it receives. The matrix Y contains in its columns what was obtained by each of the p di\ufb00erent sensors. In general, each sensor receives information from each of the sources. This is colorfully called the cocktail party problem, referring to the setting where X records the sources are human speakers at a cocktail party, and Y records what is heard at various locations in a room. Each column of Y then contains a superposition of di\ufb00erent speakers; while we would prefer to separate these and pay attention to just the ones of most immediate interest to us. In this separation problem, sparsity of the signal might be valuable. Suppose that each individual speaker is listening quite a bit and so not speaking much of the time. Then the columns of X are each sparse. On the other hand, if there are many people in the room, the room as a whole may still always be noisy, and so each column of Y may be fully dense. Assuming A is invertible, and the vector w obeys Aw = ej, then Y \u00b7 w extracts column j of X, which will be sparse. Hence, we may hope that the projection pursuit principle, with an appropriate measure J, might identify the \u2018sparse\u2019 projections we seek. Ju Sun, Qu Qing, Yu Bai, John Wright, Zibulevsky and Pearlmutter [Sun et al., 2015, Sun et al., 2016, Bai et al., 2018, Zibulevsky and Pearlmutter, 2000] proposed the optimization problem minimize w \u2225Y w\u22251 subject to \u2225w\u22252 = 1. Assuming the Y data pre-processed so that 1 nY \u2032Y = Ip this is equivalent to projection pursuit (2) applied to the objective J2,1. Notably, this is again a highly non-convex optimization problem.. The authors proposed projected gradient descent and recited some favorable empirical results. There is a close relation between the blind deconvolution optimization (1) and the projection pursuit optimization (2). Indeed, if the Y matrix is \ufb01lled in from an observed time series appropriately, then output optimization in blind deconvolution and in projection pursuit are essentially identical. Namely, let yser = (yser(t))N t=1 denote a time series of interest to us, and Y mat = (Y mat i,j ) denote an n \u00d7 p matrix where n = N \u2212p constructed using yser like so: Y mat i,j = yser(p + i \u2212(j \u22121)), 1 \u2264i \u2264n = N \u2212p; 1 \u2264j \u2264p. Now suppose the \ufb01lter vector w in the blind deconvolution optimization and the projection direction w in the projection pursuit optimization are chosen identically. 
Then the blind deconvolution output objective J(w \u22c6yser) is identical to the projection pursuit objective J(Y matw), except for possibly di\ufb00erent treatment of the \ufb01rst p entries of yser. 7 \fConvex Projection Pursuit In view of the connection between blind deconvolution and projection pursuit, and in view of our results in this paper, it is quite interesting to consider the work of [Spielman et al., 2012] and [Gottlieb and Neylon, 2010]. They propose to solve the following linearconstrained convex optimization problem. Given an n \u00d7 p data matrix Y and a constraint vector u, they propose to solve: minimize w \u2225Y w\u22251 subject to uT w = 1. As it turns out, when the matrix Y in this problem and y in the time series deconvolution problem are related by the Y mat-yser construction just mentioned, the objective we propose in this paper is essentially identical, when \u02dc a = u. As we will show, our setting permits much more thorough studies and more penetrating analyses, and we \ufb01nd that success in sparse blind deconvolution is more broadly prevalent than one might have expected, based on earlier analyses such as [Spielman et al., 2012] or [Sun et al., 2015, Sun et al., 2016, Bai et al., 2018, Zibulevsky and Pearlmutter, 2000] . 1.5 Mathematical setup Sequence Space, and Filtering To make our results concrete, let\u2019s discuss things formally. Let X denote the collection of bilaterally in\ufb01nite sequences x = (x(t) : t = 0, \u00b11, \u00b12, . . . ); for short we call such objects bisequences. Then \u21131(Z) \u2282X denotes the collection of bisequences obeying \u2225x\u22251 = P t |x(t)| < \u221e. For a bisequence x we denote time reversal operator t \u2194\u2212t by x\u2020. For whole number k > 0 let Xk denote the subspace of bisequences supported in \u2212k \u2264t \u2264k. We sometimes abuse notation: for a bisequence x we might write x = (1, .3) when we really mean x = (. . . , 0, 0, 1, .3, 0, 0, . . .). Let \u22c6denote the convolution product on pairs of bisequences in \u21131(Z) (x\u22c6y) = P u x(t\u2212u)y(u). Let e0 denote the \u2018delta\u2019 or \u2018Kronecker\u2019 bisequence: e0(t) = 1{t=0}; e0 is the unit of convolution. The convolution inverse of x \u2212x\u22121 \u2212is a bisequence obeying x \u22c6x\u22121 = e0. For example, the \ufb01lter x = (. . . , 0, 1, 1/2, 1/4, 1/8, . . .) anchored at the time origin so x(0) = 1, has inverse x\u22121 = (. . . , 0, 1, \u22121/2, 0, . . . ), again anchored at the time origin. Abusing notation we may simply write x\u22121 = (1, \u22121/2). Our approach to blind deconvolution searches among candidates for a \ufb01lter (\u2261bisequence) that extremizes a certain objective function. We then show that the extremal is in fact the desired inverse \ufb01lter to our (unknown) true underlying \ufb01lter. Hence, it helps know conditions under which an inverse \ufb01lter actually exists! For a bisequence x \u2208\u21131(Z), we de\ufb01ne the Fourier transform \u02c6 x(w) = Fx(w) \u2261P t x(t) exp{i2\u03c0wt}. For the inverse transform, we use x(t) = (F\u22121\u02c6 x)(t) = 1 2\u03c0 R 2\u03c0 0 \u02c6 x(w) exp{\u2212i2\u03c0wt}dw. Lemma 1.1 (Wiener\u2019s lemma). If a \u2208\u21131(Z), and also (Fa)( e w) \u0338= 0, \u2200e w \u2208T, then an inverse \ufb01lter exists in \u21131(Z). The bilaterally in\ufb01nite sequence de\ufb01ned formally by a\u22121 := F\u22121( 1 Fa) exists as an element of \u21131(Z) and obeys a\u22121 \u2217a = e0. 
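On a finite grid, Wiener's lemma can be seen in action numerically: the sketch below (NumPy, using circular/DFT convolution, so only an approximation to the bilateral-sequence statement) builds a^{-1} = F^{-1}(1/Fa) for the geometric example above and checks that a * a^{-1} is essentially the Kronecker delta e_0; the grid size is our choice:

```python
import numpy as np

# Wiener's lemma on a finite grid: if F a never vanishes, 1/(F a) transforms back to
# an (approximate, circular) inverse filter.  Here a is the geometric example above.
N = 256
a = np.zeros(N)
a[:4] = [1.0, 0.5, 0.25, 0.125]               # a = (1, 1/2, 1/4, 1/8, ...) truncated
A = np.fft.fft(a)
assert np.min(np.abs(A)) > 0                   # Wiener's condition on the unit circle
a_inv = np.real(np.fft.ifft(1.0 / A))          # a^{-1} ~= F^{-1}(1 / F a) on the grid

delta = np.real(np.fft.ifft(np.fft.fft(a) * np.fft.fft(a_inv)))   # circular a * a^{-1}
print(np.round(a_inv[:4], 3))                  # close to (1, -0.5, 0, 0), i.e. a^{-1} = (1, -1/2)
print(np.max(np.abs(delta - np.eye(N)[0])))    # essentially zero: a * a^{-1} = e_0
```

The same DFT-based construction is what the numerical experiments later use to define filter inverses on 2-D grids.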
In the engineering literature, we say that the bisequence a has so-called Z-transform A(z), de\ufb01ned by: A(z) = P\u221e t=\u2212\u221eatz\u2212t. 8 \fEvaluating A on the unit circle in the complex plane, at z of the form z = exp{\u2212i2\u03c0w}, we see that the Z-transform is e\ufb00ectively the Fourier transform A = Fa. Applying Wiener\u2019s lemma, we see that, if a \u2208\u21131 and minw |A(exp{\u22122\u03c0iw})| > 0, i.e. A is never zero on the unit circle, then a\u22121 \u2208\u21131 and 1/A is the Z-transform of a\u22121. Finite-sample observation model and \ufb01nite-length inverse \ufb01lter In searching for an inverse \ufb01lter, our initial results assume existence of a \ufb01nite-length inverse. Namely, we assume that a \u2208\u21131(Z) is a forward \ufb01lter with an inverse \ufb01lter a\u22121 \u2208\u21131(Z) supported in a centered window of radius k. In practice we only have a \ufb01nite dataset! Suppose that there is an underlying bisequence y \u2208X of the form y = a\u2217x, and let y[N] denote the restriction to an N-long centered window T = {\u2212T, . . . , T} of radius T and size N = 2T + 1. Our goal is seemingly to recover x or a from the observed data y[N]. However, in statistical theory we generally don\u2019t expect to exactly recover the true underlying representation (i.e. the generating a and x) exactly. Our goal is instead to \ufb01nd an inverse \ufb01lter w \u2208\u2113k 1 such that the convolution w \u2217y would be close3 to x, and where the closeness improves with increasing data size N \u2192\u221e. Finite-length \ufb01ltering and practical algorithms While our analysis framework concerns bisequences (bilaterally in\ufb01nite sequences), our data have \ufb01nite length (as just mentioned). The algorithms we discuss are often motivated by convolutions on bisequences; however, they reduce in practice to truncated convolutions involving \ufb01nite data windows. A certain ambiguity is helpful for e\ufb03cient communication. Suppose we have y[N], an N-long observed window of y, and we also have w, a k-long \ufb01lter, by which we mean a bisequence nonzero only within a \ufb01xed k-long window. We might encounter discussion both of w \u22c6y[N] as well as w \u22c6y, both using the same \ufb01lter w. In the \ufb01rst case, we would actually be thinking of y[N] as zero-padded out to a bisequence, so that both situations involve bisequence convolutions. Finite N e\ufb00ects are important in practice but tedious to discuss. It can be important to account for end e\ufb00ects in truncated convolution. In a setting where we initially think to consider a norm \u2225w\u22c6 y[N]\u2225\u2113p(Z), we might instead next think to rather consider the windowed norm \u2225w\u22c6y[N]\u2225\u2113p({\u2212T,...,T}), while \ufb01nally we realize \u2225w \u22c6y[N]\u2225\u2113p({\u2212T+k,...,T\u2212k}) is more correct for our purposes, as it includes only the terms which do not su\ufb00er from truncation of the convolution. Algorithmic formulation Under assumptions we will be making, the sequence x underlying our observed data will be either exactly sparse \u2013 having few nonzeros \u2013 or approximately so. Moreover, there will be either an exact length k inverse \ufb01lter w, or approximately such. It follows that the \ufb01lter output w \u22c6y is sparse. This suggests the would-be optimization principle minimize w\u0338=0 \u2225w \u22c6y[N]\u2225\u21130({\u2212T+k,...,T\u2212k}) where the \u21130 quasi-norm simply counts the number of nonzero entries. 
Unfortunately, this objective, though well-motivated, is not suitable for numerical optimization. Inspired by this, we perform convex relaxation of the \u21130 norm, replacing it with the \u21131 norm, which is convex. 3modulo time shift and rescaling 9 \fWe also need to \ufb01x the scale to get a unique output. One might think to constrain \u2225w\u22252 = 1, however, this would give a non-convex constraint and again is not suitable for e\ufb00ective algorithms. We instead suppose given a rough initial approximation e a of the forward \ufb01lter a, and impose an \u2113\u221econstraint on the \u2018pseudo-delta\u2019 w \u22c6e a, forcing it to \u2018peak\u2019 at target entry t. (e a \u2217w)t = 1, \u2225e a \u2217w\u2225\u221e\u22641. (3) Combining these steps, we obtain a convex optimization problem associated to each possible target coordinate t: minimize w\u2208\u2113k 1 1 N\u22122k\u2225w \u22c6y[N]\u2225\u21131({\u2212T+k,T\u2212k}) subject to (e a \u2217w)t = 1, \u2225e a \u2217w\u2225\u221e\u22641. (4) It will be convenient to reformulate slightly, hide consideration of end e\ufb00ects, and force the peak to occur at target coordinate t = 0. Abusing notation somewhat, we then write: minimize w\u2208\u2113k 1 1 N \u2225w \u2217y\u2225\u2113N 1 subject to \u27e8e a, w\u2020\u27e9= 1, (Ce a) (Again w\u2020 denotes time-reversal of w). In practice we might truncate the convolution due to end e\ufb00ects, or truncate the window over which we take the norm, but we will hide such practical details in the coming material, for ease of exposition; they would not change our results. Stochastic models for sparse signals Although our algorithms make sense in the absence of any theory, our theoretical results concern properties of our algorithm for data generated under a probabilistic generative model i.e. a stochastic signal model. Let X = (Xt) be a bisequence of independent identically distributed random variables indexed by t \u2208Z, having a common marginal CDF F = FX, such that F(x) = 1 \u2212F(\u2212x). One realization is then a sequence x of the type discussed in earlier paragraphs. We assume that F has an atom at 0 F = (1\u2212p)H+pG, where H is the standard Heaviside distribution and G is the standard Gaussian distribution. We say that F follows the Bernoulli(p)-Gaussian model. Equivalently, Xt is sampled IID from the Bernoulli(p)-Gaussian distribution pN(0, 1) + (1 \u2212 p)\u03b40. The iid process X is of course ergodic. If x denotes one realization of X, then in a window (xt)T \u2212T of length N, \u2248pN nonzero values will occur, for large N. Consequently, if p \u226a1, realizations from X will empirically be sparse. Let Y = a \u22c6X denote the random bisequence produced as the output of convolution of the random signal X with deterministic \ufb01lter a \u2208\u21131(Z). More explicitly, Yt = X u a(u)Xt\u2212u. This de\ufb01nes formally a so-called stationary linear process, a classical object for which careful foundational results are well established.4 Consider now \ufb01ltering Y , by a length-k \ufb01lter w, producing the 4In our case, we assume that X is iid Bernoulli-Gaussian, so E|X0| = p \u00b7 E|N(0, 1)| is \ufb01nite. Using this, we can see that the sum in the display above, even though possibly containing an in\ufb01nite number of terms, converges in various natural senses. 10 \frandom bisequence V = w \u22c6Y . 
The end-to-end \ufb01lter b = w \u22c6a is a well-de\ufb01ned element of \u21131(Z); using it, we can represent the \ufb01ltered output in terms of the underlying iid process X: V = b \u22c6X. This representation shows that the \ufb01ltered output series V is itself a well-de\ufb01ned stationary linear process, and moreover, since E|X0| < 1 and \u2225b\u2225\u21131 \u2264\u2225w\u2225\u21131 \u00b7\u2225a\u2225\u21131 < \u221e, we have E|V0| < \u2225b\u2225\u21131 < \u221e. Any such stationary linear process is ergodic. By the ergodic theorem, the large-N limit of the objective will almost surely be an expectation over X: limN\u2192\u221e1 N \u2225w \u2217Y \u2225\u2113N 1 = limN\u2192\u221e1 N P t\u2208T N |(b \u22c6X)t| = EX|(b \u22c6X)0| = E|(w \u2217Y )0|. (5) Consequently, the large-N properties of our proposed algorithm are driven by properties of the following optimization problem in the population: minimize w E|(w \u2217Y )0| subject to \u27e8e a, w\u2020\u27e9= 1. 2 Main Results Overview 2.1 Main Result 1: Phase Transition Phenomenon for Sparse Blind Deconvolution We have at last de\ufb01ned a convex optimization problem at the population level, which we will now want to study in detail. In our studies, we can make various choices of the sparsity parameter p, of the underlying forward \ufb01lter a, and of the guess \u02dc a. The tuple (p, a, \u02dc a) de\ufb01nes in this way a kind of phase space. We can then study performance of the algorithm at di\ufb00erent points in phase space. Consider this performance property: ExactRecovery \u2261\u201cThere is an unique solution of the population-based optimization problem \u2014 modulo time shift and output rescaling \u2014 and this solution exactly solves the blind deconvolution problem correctly. \u201d It probably seems too much to ask that such a property could ever be true, i.e. could ever be true for even one choice of phase space tuple. After all the optimization problem doesn\u2019t have any apparent connection to blind deconvolution \u2013 instead only to some sort of relaxation of the search for sparse output \ufb01lters. We will see that this phase space can be partitioned into two regions: one where the exact recovery property holds and its complement where the exact recovery property fails. Surprisingly the region where ExactRecovery holds is nonempty and can be appreciable. And it can be described in a clear and insightful way. Surprising phase transition for a special case of sparse blind deconvolution We \ufb01rst illustrate the phase transition phenomenon in possibly the most elementary situation. Consider the special case when the forward \ufb01lter a is the exponential decay \ufb01lter: a = (1, s, s2, s3, . . .) with |s| \u22641; then the inverse \ufb01lter is a basic short \ufb01lter a\u22121 = (1, \u2212s). Theorem 1 (Population phase transition for exponential decay \ufb01lter). Consider a linear process Y = a \u2217X with \u2022 a = (1, s, s2, s3, . . .) where |s| \u22641, so a\u22121 = (1, \u2212s); 11 \fFigure 1: Finite-sample phase transition diagram; ground truth \ufb01lter is w\u22c6= (0, 1, \u2212s) with T = 200. Horizontal axis: sparsity level p, ranging from 0.01 to 0.99; Vertical axis: \ufb01lter parameter |s|, ranging from 0 to 0.99. The shaded attribute indicates the observed fraction of successful experiments at the given parameter combination from deep blue for 1.0 down to deep red for 0.0. Thus, the red region indicates failure of recovery and the blue region indicates success. 
The transition from success to failure is quite abrupt. \u2022 Xt is IID Bernoulli(p)-Gaussian pN(0, 1) + (1 \u2212p)\u03b40. Consider as initial approximation e a = e0 = (1, 0, 0, 0, . . .), and the resulting fully speci\ufb01ed population optimization problem, with parameter tuple (p, a, \u02dc a): minimize w E|(w \u2217Y )0| subject to w0 = 1. (P1(e0)) Let w\u22c6denote the (or simply some) solution. De\ufb01ne the threshold p\u22c6= 1 \u2212|s|. The property ExactRecoveryexperiences a phase transition at p = p\u22c6: \u2022 provided p < p\u22c6, then w\u22c6is uniquely de\ufb01ned and equal to a\u22121 up to shift and scaling ; and \u2022 provided p > p\u22c6, then w\u22c6is not a\u22121 up to shift and scaling . We can empirically observe that the population phase transition described in Theorem 1 describes accurately the situation with \ufb01nite-length signals. The setting of Theorem 1, de\ufb01nes a phase space 12 \fby the numbers (p, s). We can sample the phase space according to a grid and then at each grid point, conducting a sequence of experiments like so: \u2022 sample a realization of synthetic data Y = a \u22c6X according to the stochastic signal; \u2022 extract a window y[N] of size N from within each generated Y ; and \u2022 solve the resulting \ufb01nite-N optimization problem: minimize w 1 N \u2225w \u2217Y \u22251 subject to w0 = 1. (P N 1 (e0)) Tabulating the fraction of instances with numerically precise recovery of the correct underlying inverse \ufb01lter a\u22121 and sparse signal X across grid points, we can make a heatmap of empirical success probability. Speci\ufb01cally, we numerically run 20 independent experiments, and choose an accuracy \u03f5 = 10\u22123 and count the number of successfully accurate recovery where the convex optimization solution w satisfy \u2225w\u22c6\u2212w\u22251/\u2225w\u22c6\u2225\u221e< \u03f5. We do this in Figure 1; the reader will see there an empirical phase transition curve, produced by a logistic-regression calculation of the location in p where 50% success probability is achieved. We observe empirical behavior entirely consistent with p\u22c6= 1 \u2212|s|. Phase transition for sparse blind deconvolution with general \ufb01lter: upper bound from deltaness discrepancy Consider now a more general situation where a is a quite general \ufb01lter, and X is Bernoulli Gaussian. Our formal assumptions are: A1 a \u2208l1(Z) is invertible: (Fa)( e w) \u0338= 0, \u2200e w \u2208T; thus a\u22121 exists in \u21131(Z). A2 X = (Xt)t\u2208Z is IID with marginal distribution pN(0, 1) + (1 \u2212p)\u03b40; and A3 Y = (Yt)t\u2208Z is a linear process obeying Y = a \u22c6X. Consider the convex optimization problem minimize w\u2208l1(Z) E|(w \u2217Y )0| subject to \u27e8e a, w\u2020\u27e9= 1, (P1(e a)) and let w\u22c6denote any solution of the optimization problem. Our results establish the existence of a general phase transition phenomenon and a precise quanti\ufb01cation of it, in terms of a phase transition functional . De\ufb01ne the following deltaness discrepancy \u2206(v): \u2206(v) = inf \u03b1,k \u2225v \u2212\u03b1\u03b4k\u2225\u221e \u2225v\u2225\u221e . This discrepancy indeed obeys \u2206(e0) = 0. However, it is not restricted only to deltaness concentrating at the origin; in fact, \u2206(\u03b1 \u00b7 ek) = 0 for each \u03b1 > 0, k \u2208Z. Also, 0 \u2264\u2206(v) \u22641. At 13 \fthe other extreme, \u2206([..., 0, 0, 1/2, 1/2, 0, 0, ...]) = 1. 
Finally, \u2206is a continuous function on l1(Z): so that, setting \u03b5(v, w) \u2261\u2225v\u2212w\u2225\u221e \u2225w\u2225\u221e, |\u2206(v) \u2212\u2206(w)| \u22642 \u00b7 \u03b5 1 \u2212\u03b5. Indeed, we can give a more explicit form for \u2206; let e e := e a \u2217a\u22121, \u2206(e e) = |e e|(2) |e e|(1) , where |e e|(1) denotes the largest entry in e e and |e e|(2) the second largest. |e e|(2) |e e|(1) is a kind of multiplicative gap functional, measuring the extent to which the second-largest entry in e e is small compared to the largest entry. It is a natural measure of closeness between our approximate Kronecker delta e e = e a \u2217a\u22121 and true Kronecker delta e0. Theorem 2 ( Population (large-N) phase transition: upper bound from deltaness discrepancy). Under assumptions (A1)-(A3), \u2022 There is a functional \u03a0\u2217de\ufb01ned on tuples (a, e a) in \u21131(Z) \u00d7 \u21131(Z) with the property that, for p > \u03a0\u2217(a, e a), every solution of the optimization problem P1(e a) is exactly the correct answer a, up to lateral shift and scaling. \u2022 We have the upper bound: \u03a0\u2217(a, e a) \u22641 \u2212\u2206(e a \u2217a\u22121), (a, e a) \u2208\u21131(Z) \u00d7 \u21131(Z). (6) In words, the theorem is stating that the less accurate the initial approximation e a \u2248a, the greater we rely on sparsity of X to allow exact recovery. The reader might compare this with our earlier example in Theorem 1, the special case of geometrically decaying \ufb01lters. In that example a = (. . . , 0, 0, 1, s, s2, . . . ) while e a = (. . . , 0, 0, 1, 0, 0, . . . ). \u2206(e a \u2217a\u22121) = \u2206((. . . , 0, 0, 1, s, s2, . . . )) = |s|. In short, the upper bound \u03a0\u2217= 1 \u2212|s| deriving from this general viewpoint agrees precisely with the the exact answer p\u2217= 1 \u2212|s| given in that earlier Theorem. When does equality hold in (6)? For a vector v, let v\u2032 denote the same vector, except the largest-amplitude entry is replaced by 0. We will see that a su\ufb03cient condition for equality is: \u2206(e e\u2032) \u2264\u2206(e e). Put another way, equality holds if |e e|(3) \u2264\u2206(e e) \u00b7 |e e|(2). (7) The set of situations where this occurs is ample but not overwhelming. It has relative Lebesgue measure at least \u2206. In the situation covered by Theorem 1, |e e|(3) = s2, |e e|(2) = s, \u2206= s, and so equality holds in (7). Hence, the general-\ufb01lter result of Theorem 2 along with the su\ufb03cient condition (7) imply as a special case Theorem 1. Now we state the following corollary to show that there is a substantial region in the space of (a, e a) pairs where a meaningful sparsity-accuracy of initialization tradeo\ufb00exists, such that any su\ufb03ciently accurate initial guess results in exact recovery provided that the sparsity of the underlying object X exceeds a threshold. 14 \fCorollary 2.1. Let r \u2208(0, 1/2) and suppose that assumptions (A1)-(A3) hold with p > r 1\u2212r. Normalize the problem so that \u2225a\u22121\u2225\u221e= 1. Consider e a in the \u21131 metric ball (8) of radius r about a: \u2225e a \u2212a\u22251 \u2264r. (8) Every solution of the optimization problem P1(e a) achieves exact recovery of a up to shift and rescaling. Proof: Since \u2225e e \u2212e0\u2225\u221e\u2264\u2225a\u22121\u2225\u221e\u00b7 \u2225e a \u2212a\u22251 = r, we have |e e|(1) \u22651 \u2212r and |e e|(2) \u2264r. This implies that \u03a0\u2217\u2264 r 1\u2212r. Now apply theorem 2. 
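A short numeric check of the deltaness bound (helper names are ours): for the geometric-decay example with the crude guess e_0, the discrepancy Delta(a_tilde * a^{-1}) evaluates to |s|, so the upper bound 1 - Delta reproduces the threshold p* = 1 - |s| of Theorem 1:

```python
import numpy as np

def deltaness(v):
    """Deltaness discrepancy: second-largest over largest entry magnitude, |v|_(2) / |v|_(1)."""
    mags = np.sort(np.abs(np.asarray(v, dtype=float)))[::-1]
    return mags[1] / mags[0]

# Geometric-decay example from Theorem 1: a = (1, s, s^2, ...), a^{-1} = (1, -s), a_tilde = e0.
s = 0.6
a_inv = np.array([1.0, -s])
a_tilde = np.array([1.0, 0.0])                 # the crude initial guess e0
e_tilde = np.convolve(a_tilde, a_inv)          # e_tilde = a_tilde * a^{-1}
print(deltaness(e_tilde))                      # |s| = 0.6
print(1.0 - deltaness(e_tilde))                # upper bound 1 - Delta = 0.4 = p* = 1 - |s|
```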
Phase transition for sparse blind deconvolution with general \ufb01lter from optimization point of view Theorem 2 has provided a upper bound of the threshold p\u22c6for phase transition with clear mathematical meaning. Now we present the main phase transition theorem with the exact p\u22c6for blind deconvolution of general inverse \ufb01lter. This statement is from optimization point of view that connects blind deconvolution problem to a classical projection pursuit problem. Let I denote an iid Bernoulli(p) bisequence. For a bisequence w let w \u00b7 I denote the elementwise multiplication of w by I. De\ufb01ne the optimization problem minimize w EI\u2225w \u00b7 I\u22252 subject to \u27e8v, w\u27e9= 1 (Q1(v)) Theorem 3 (Population (large-N) phase transition: upper and lower bound). Under assumptions (A1)-(A3),consider the solution w\u22c6of the convex optimization problem P1(e a). There is a threshold p\u22c6> 0, \u2022 w\u22c6is a\u22121 up to time shift and rescaling provided p < p\u22c6; and \u2022 w\u22c6is not a\u22121 up to time shift and rescaling, provided p > p\u22c6. The threshold p\u22c6obeys p 1 \u2212p = val(Q1(e e\u2032)). where e e := e a \u2217a\u22121. As de\ufb01ned in theorem 2, p\u22c6= \u03a0\u2217(a, e a). We have an upper bound and lower bound of val(Q1(e e\u2032)), p \u2206(e e) \u2265val(Q1(e e\u2032)) \u2265p cot \u2220(e e, e0) explicitly, the upper bound can be expressed as 1 \u2212tan \u2220(e e, e0) \u2264p\u22c6\u22641 \u2212|e e|(2) |e e|(1) = 1 \u2212\u2206(e e) Namely, 1 \u2212tan \u2220(e a \u2217a\u22121, e0) \u2264\u03a0\u2217(a, e a) \u22641 \u2212\u2206(e a \u2217a\u22121), (a, e a) \u2208\u21131(Z) \u00d7 \u21131(Z). (9) Additionally, the upper bound takes equality if \u2206(e e\u2032) \u2264\u2206(e e). 15 \fWe comment that theorem 3 imply previous two theorems. Clearly theorem 1 is a special case for exponential decay forward \ufb01lter, and theorem 2 can also be viewed as the natural upper bound side of the functional \u03a0\u2217in theorem 3. 2.2 Main Result 2: Finite Observation Window, Finite-length Inverse With a \ufb01nite observation window of length N we (surprisingly) still can have exact recovery, starting as soon as N \u2265\u2126(k log(k)). Finite-observation phase transition Let \u2113k 1 denote the collection of bisequences vanishing outside a centered window of radius k. Theorem 4 (Finite observation window, \ufb01nite-length inverse \ufb01lter). Suppose Y = a \u22c6X, where: A1 a \u2208l1(Z) is invertible and the inverse has \ufb01nite-length: (Fa)( e w) \u0338= 0, \u2200e w \u2208T; thus a\u22121 exists in \u21131(Z); furthermore, a\u22121 \u2208\u2113k 1 vanishes o\ufb00a centered window of length k. A2 X = (Xt)t\u2208Z is IID with marginal distribution pN(0, 1) + (1 \u2212p)\u03b40; A3 Y = (Yt)t\u2208Z is a linear process obeying Y = a \u22c6X. Suppose we observe a window (Yt)t\u2208T of length N = |T |. Consider the convex optimization problem minimize w\u2208\u2113k 1 1 N \u2225w \u2217Y \u2225\u2113N 1 subject to (e a \u2217w)0 = 1. (P N,k 1 (e a)) and let w\u22c6denote any solution of the optimization problem. Our result establishes that there exist \u03f5 > 0, \u03b4 > 0, so that when the number of observations N satis\ufb01es N \u2265k log(k \u03b4 )(C\u03baa \u03f5 )2, and the sparsity level p obeys 1 N \u2264p < p\u22c6\u2212\u03b4p(N, \u03f5), then with probability exceeding 1 \u2212\u03b4, w\u22c6is a\u22121 up to rescaling and time shift. 
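To make the finite-N program concrete, here is a small CVXPY sketch of the problem above in which the truncated convolutions are written as matrix products; the data-generation choices (s = 0.5, p = 0.2, N = 2000, window radius k = 3) are ours, picked inside the regime p < p* = 1 - |s| where the theorem predicts exact recovery with high probability:

```python
import numpy as np
import cvxpy as cp

def solve_inverse_filter(y, a_tilde, k):
    """Finite-N convex program: minimize ||w * y||_1 / N'  s.t. (a_tilde * w)_0 = 1,
    over filters w supported on a centered window of radius k (length 2k + 1).
    Convolutions are written as matrix products, discarding end effects."""
    N, L = len(y), 2 * k + 1
    # Column jp of C holds y shifted so that C @ w equals (w * y) on the valid indices.
    C = np.column_stack([y[2 * k - jp: N - jp] for jp in range(L)])
    # Constraint vector: (a_tilde * w)_0 = sum_j a_tilde[-j] * w_j over the window j = -k..k.
    c = np.zeros(L)
    for idx, j in enumerate(range(-k, k + 1)):
        if 0 <= -j < len(a_tilde):
            c[idx] = a_tilde[-j]
    w = cp.Variable(L)
    cp.Problem(cp.Minimize(cp.norm1(C @ w) / (N - 2 * k)), [c @ w == 1]).solve()
    return w.value

# Toy instance inside the predicted success region: p = 0.2 < p* = 1 - |s| = 0.5.
rng = np.random.default_rng(1)
N, p, s = 2000, 0.2, 0.5
x = rng.normal(size=N) * (rng.random(N) < p)          # Bernoulli(p)-Gaussian signal
a = s ** np.arange(60)                                # geometric forward filter
y = np.convolve(x, a)[:N]                             # observed data Y = a * X
w_hat = solve_inverse_filter(y, a_tilde=np.array([1.0]), k=3)
print(np.round(w_hat, 3))   # expected to be approximately (0, 0, 0, 1, -0.5, 0, 0)
```

Running this grid over (p, s) and counting accurate recoveries is how the empirical phase-transition heatmap of Figure 1 can be reproduced.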
In the statement, \u03baa is the condition number of the circular matrix with its \ufb01rst column being a; C is a positive constant independent of N and k. 2.3 Main Result 3: Stability Guarantee with Finite Length Inverse Filter The results so far concern the ideal setting when true inverse \ufb01lter has a known length k and we use k to set up a correctly matched optimization problem. In practice we do not know k and k might even be in\ufb01nite. We can provide practical guarantees even when the inverse \ufb01lter is an in\ufb01nite length inverse. To develop these, we must be more technical about the situation. We assume that a has a Ztransform having N\u2212roots and N+ poles (si) inside the unit circle and we construct a \ufb01nite length approximation w to a\u22121, in fact of length r(N\u2212+N+). This approximation has error \u2225w\u2217a\u2212e0\u22252 = O(maxi |s|r i ). Since the objective value EI\u2225w \u00b7 I\u22252 of this approximation w is an upper bound of the optimal value of the optimization solution w\u22c6, we could use the objective value upper bound to derive a upper bound for \u2225w\u22c6\u2217a \u2212e0\u22252 = O(maxi |s|r i ) when p < p\u22c6, where the constant of this upper bound is determined by the Bi-Lipschitz constant of the \ufb01nite di\ufb00erence of objective EI\u2225w\u00b7I\u22252 \u2212EI\u2225(e0)\u00b7I\u22252. 16 \fApproximation theory for in\ufb01nite length inverse \ufb01lter Theorem 5 (Approximation theory for in\ufb01nite length inverse \ufb01lter based on roots of Z-transform). Let the \ufb01nite-length forward \ufb01lter a have a Z-transform with roots inside the unit circle, namely sk := e\u2212\u03c1k+i\u03d5k with |sk| < 1 and \u03c1k > 0 for k \u2208{\u2212N\u2212, . . . \u22121, 1, . . . N+}. Let I = {\u2212N\u2212, . . . , \u22121, 1, . . . , N+} as the set of all the possible indexes. A(z) = PN+ i=\u2212N\u2212aiz\u2212i = c0 QN\u2212 j=1(1 \u2212s\u2212jz) QN+ i=1(1 \u2212siz\u22121); here c0 is a constant ensuring a0 = 1. Then for a scalar r , we construct an approximate inverse \ufb01lter wr with Z-transform W(z) = 1 c0 QN\u2212 j=1(Pr\u22121 \u2113j=0 s\u2113j \u2212jz\u2212\u2113j) QN+ i=1(Pr\u22121 \u2113i=0 s\u2113i i z\u2113i) = 1 c0 QN\u2212 j=1(1 \u2212(s\u2212jz)r)(1 \u2212s\u2212jz)\u22121 QN+ i=1(1 \u2212(siz\u22121)r)(1 \u2212siz\u22121)\u22121. We have \u2225wr \u2217a \u2212e0\u22252 2 = P n\u2208{1,2,3,...,|I|} P k1,...,kn\u2208I Q i\u2208[n] |ski|2r as r \u2192\u221e, this converges to zero at an exponential rate, determined by the slowest decaying term, \u2225wr \u2217a \u2212e0\u22252 = O(|s|r (1)), r \u2192\u221e. where |s|(1) is the largest absolute value root. Stability for in\ufb01nite length inverse \ufb01lter Theorem 6 (Stability for in\ufb01nite length inverse \ufb01lter). Let a \u2208VN\u2212,N+ be a forward \ufb01lter with all the roots of Z-transform strictly in the unit circle. Let w\u22c6\u2208V(r\u22121)N\u2212,(r\u22121)N+ be the solution of the convex optimization problem. Let wr be the constructed \ufb01lter in previous theorem with a uniform vector index (r, . . . , r, r, . . . , r). Then provided p < p\u22c6, as r \u2192\u221e, \u2225w\u22c6\u2217a \u2212e0\u22252 \u2264O(|s|r (1)), r \u2192\u221e. where |s|(1) is the largest absolute value root. In words, the Euclidean distance between w\u22c6\u2217a and e0 converges to zero at an exponential rate as the approximation length is allowed to increase. 
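The exponential rate is easy to see numerically in the simplest single-root case: for A(z) = 1 - s z^{-1} with |s| < 1, the construction above reduces to the truncated geometric series W(z) = sum_{l < r} s^l z^{-l}, and the residual w_r * a - e_0 is the single term -s^r at lag r. A short NumPy check on our toy instance (s = 0.8):

```python
import numpy as np

# Truncated-geometric-series approximation of an infinite-length inverse filter.
# Forward filter a with A(z) = 1 - s z^{-1}; its exact inverse (1, s, s^2, ...) is infinite.
s = 0.8
a = np.array([1.0, -s])
for r in [5, 10, 20, 40]:
    w_r = s ** np.arange(r)                      # length-r approximate inverse
    resid = np.convolve(w_r, a)
    resid[0] -= 1.0                              # subtract the Kronecker delta e0
    print(r, np.linalg.norm(resid), s ** r)      # here the error equals |s|^r exactly
```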
2.4 Main Result 4: Robustness Against Stochastic Noise and Adversarial Noise We now extend the previous analysis of an exactly sparse model of X, exactly observed, to the more practical setting of approximate sparsity and observation noise. We consider two cases: \ufb01rst, we add stochastic noise as an independent Gaussian linear process, and second, we add adversarial noise with bounded \u2113\u221enorm. In each scenario, since the noisy objective value at e0 is an upper bound on the optimal value of the optimization solution w\u22c6, we use this upper bound on the objective value to derive an upper bound of \u2225w\u22c6\u2217a \u2212e0\u22252 when p < p\u22c6. The upper bound shows that the distance \u2225w\u22c6\u2217a \u2212e0\u22252 is bounded by the (appropriately measured) magnitude of input noise in both cases. 17 \fRobustness under stochastic noise Theorem 7 (Robustness against Gaussian Linear Process noise). We consider a Gaussian linear process Z = \u03c3 \u00b7 b \u22c6G where: \u03c3 > 0 denotes the noise level; b is a bisequence having unit \u21132 norm \u2225b\u22252 = 1; and G is a standard Normal iid bisequence. Suppose that we observe: Y = a \u2217(X + Z). Let w\u22c6be the solution of the convex optimization problem in Eq.(4.1); EI\u2225(w\u22c6) \u00b7 I\u22252 \u2212EI\u2225(e0) \u00b7 I\u22252 \u2264(1 \u2212p)\u03c3 + p( p 1 + \u03c32 \u22121). When p < p\u22c6, \u03c3 \u22641, there exists a constant C, \u2225w\u22c6\u2217a \u2212e0\u22252 = \u2225w\u22c6\u2212e0\u22252 \u2264C\u03c3. In words, the Euclidean distance between w\u22c6\u22c6a and e0 is bounded linearly by the magnitude of stochastic noise. Robustness under adversarial noise Theorem 8 (Robustness under adversarial noise with \u2113p norm bound). Suppose that an adversary chooses a disturbance bisequence c subject to the constraint: \u2225c\u2225\u221e\u2264\u03b7; and perturbs the observation process Y via: Y = a \u2217(X + c). Let w\u22c6denote the solution of the convex optimization problem P1(e a) . De\ufb01ne v\u22c6= (a \u2217w\u22c6)\u2020, then v\u22c6satis\ufb01es the following bound on objective di\ufb00erence: EI\u2225(v\u22c6) \u00b7 I\u22252 \u2212EI\u2225(e0) \u00b7 I\u22252 \u2264p q 2 \u03c0(R(\u03b7) \u22121) + (1 \u2212p)\u03b7. Therefore, when p < p\u22c6, there exists a constant C\u2032, so that \u2225w\u22c6\u2217a \u2212e0\u22252 = \u2225v\u22c6\u2212e0\u22252 \u2264C\u2032\u03b7, \u2200\u03b7 > 0. In words, the Euclidean distance between w\u22c6\u22c6a and e0 is at most proportional to the magnitude of the adversarial noise. Remark: Here R(\u03b7) is the folded Gaussian mean, for standard Gaussian G: R(\u03b7) := r\u03c0 2 EG|\u03b7 + G| = exp{\u2212\u03b72/2} + r\u03c0 2 \u03b7 (1 \u22122\u03a6 (\u2212\u03b7)) . R(\u03b7) \u22121 is an even function that is monotonically non-decreasing for \u03b7 \u22650 with quadratic upper and lower bound: there exists constants C1 \u2264C2, C1\u03b72 \u2264R(\u03b7) \u22121 \u2264C2\u03b72, \u2200\u03b7. 18 \f2.5 Non-convex Blind deconvolution initialization method Now we study how to get the initial guess e a. First, let CY denote the circular embedding of Y ; we can de\ufb01ne \u00af Y = (CY CT Y )\u22121/2Y , then from Y = a \u2217X, we get CY = CaCX, then C \u00af Y = (CY CT Y )\u22121/2CY = (CaCXCT XCT a )\u22121/2CaCX as X are IID Bernoulli-Gaussian, we know approximately C \u00af Y = (CY CT Y )\u22121/2CY = (CaCXCT XCT a )\u22121/2CaCX \u2248(CaCT a )\u22121/2CaCX We can also de\ufb01ne C\u00af a = (CaCT a )\u22121/2Ca. 
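For reference, the closed form for R(eta) is easy to evaluate and to check by Monte Carlo; the sketch below (NumPy/SciPy, our own helper name) also prints (R(eta) - 1) / eta^2, which is close to 1/2 near the origin, illustrating the quadratic behaviour invoked in the remark:

```python
import numpy as np
from scipy.stats import norm

def folded_gaussian_mean(eta):
    """R(eta) = sqrt(pi/2) * E|eta + G| for standard Gaussian G (closed form above)."""
    eta = np.asarray(eta, dtype=float)
    return np.exp(-eta ** 2 / 2) + np.sqrt(np.pi / 2) * eta * (1 - 2 * norm.cdf(-eta))

# Monte Carlo sanity check of the closed form and the near-quadratic growth of R - 1.
rng = np.random.default_rng(0)
g = rng.normal(size=200_000)
for eta in [0.1, 0.5, 1.0, 2.0]:
    mc = np.sqrt(np.pi / 2) * np.mean(np.abs(eta + g))
    print(eta, folded_gaussian_mean(eta), mc, (folded_gaussian_mean(eta) - 1) / eta ** 2)
```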
Let S\u03ba1\u00d7\u03ba2,K be Riemannian manifold of K-channel \u03ba1 \u00d7 \u03ba2 convolution dictionary with row norm 1. Our non-convex algorithm objective is minimize A,X \u2225Y \u2212(A \u2217X + b)\u22252 2 + \u03bb\u2225X\u22251 subject to A \u2208S\u03ba1\u00d7\u03ba2,K, And the algorithm is alternating between proximal gradient method on solving X given A with Backtracking line search for updating step size of proximal gradient on X, and projected gradient method on solving A given X with line search for updating the step size of Riemannian gradient on A. 3 Numerical Experiments Now we look at numerical experiments of 2\u2212D image blind deconvolution for di\ufb00erent choices of X\u22c6 and a\u22121,\u22c6. 19 \fAlgorithm 1: Non-convex Blind deconvolution initialization method Input: Y of size n1 \u00d7 n2 estimated upper bound of kernel size of forward \ufb01lter a: \u03ba1 \u00d7 \u03ba2, estimated number of kernel channels K.; Create zero tensor Ainit be of size 3\u03ba1 \u00d7 3\u03ba2 \u00d7 K, ; for k \u22081, . . . , K :; random uniformly choose index i1,k, i2,k; assign Ainit[\u03ba1 : 2\u2217\u03ba1, \u03ba2 : 2\u2217\u03ba2, k] = Y [i1,k : i1,k+\u03ba1, i2,k : i2,k+\u03ba2]/\u2225Y [i1,k : i1,k+\u03ba1, i2,k : i2,k+\u03ba2]\u22252 ; Let Xinit be zeros of shape n1 \u00d7 n2 \u00d7 K; binit be mean(\u20d7 Y ). ; for iter \u22081, . . . , MaxIter :; Alnternating the two steps:; Given A \ufb01xed, take a descent step on X via proximal gradient descent; Backtracking for update X and update stepsize t;; Given X \ufb01xed, take a Riemannian gradient step on A, use line search for updating the stepsize of Riemannian gradient on A. ; Given A, X \ufb01xed, update the bias b ; if \u2225A \u2212Aprev\u22252 < \u03f5 and \u2225X \u2212Xprev\u22252 < \u03f5, stop. Algorithm 2: Full blind deconvolution algorithm Input: Y of size n1 \u00d7 n2, estimated upper bound of kernel size of forward \ufb01lter a: \u03ba1 \u00d7 \u03ba2.; if Initialization of forward \ufb01lter \u02dc a not provided then Let \u02dc a be the solution of the previous algorithm with estimated number of kernel channels K = 1 and forward kernel size \u03ba1 \u00d7 \u03ba2. else Use initialization guess of forward \ufb01lter \u02dc a ; end Find inverse \ufb01lter w by minimize w\u2208l1(Z) \u2225w \u2217Y \u22251 subject to \u27e8e a, w\u2020\u27e9= 1, (P1(e a)) ; 20 \fFigure 2: Sparse point: Y , solved X, true X, solved f. \u2022 X: 2\u2212D Bernoulli Gaussian IID of size 80 \u00d7 80. \u2022 f\u22c6: 1\u2212D \ufb01lter (1, \u2212s, 0), s = 0.9 centered and rotated by 45 degree. \u2022 Y = a\u22c6\u2217X, where the \ufb01lter inverse a\u22c6= f\u22c6,\u22121 is de\ufb01ned by discrete Fourier transform on 80 \u00d7 80 grid. \u2022 The plots in order, \ufb01rst row: Y = a\u22c6\u2217X\u22c6, Xcvx, X\u22c6, fcvx = f\u22c6. 21 \fFigure 3: Sphere: Y , solved X, true X, solved f; previous X, previous a as initialization, a truth; X solved by non-convex alternating formulation, a solved by non-convex alternating formulation. \u2022 X: 2\u2212D Bernoulli Gaussian IID of size 60 \u00d7 60. \u2022 f\u22c6: 1\u2212D \ufb01lter (1, \u2212s, 0) with s = 0.99 centered and rotated by 45 degree. \u2022 Y = a\u22c6\u2217X, where the \ufb01lter inverse is de\ufb01ned by discrete Fourier transform on 60 \u00d7 60 grid. \u2022 ainit = finit,\u22121, where finit is 1\u2212D \ufb01lter (1, \u2212sinit, 0) with sinit = 0.8 centered and rotated by 45 degree. 
\u2022 Xinit = ainit \u2217X is the initial guess ainit convolve with X\u22c6. \u2022 ancvx is the forward solved by alternating non-convex algorithm. \u2022 Xncvx is the initial guess of X solved by alternating non-convex algorithm. \u2022 The plots in order, \ufb01rst row: Y = a\u22c6\u2217X\u22c6, Xcvx, X\u22c6, fcvx = f\u22c6. \u2022 second row: Xinit, ainit, a\u22c6; \u2022 third row: Xnc, init, anc,init, Xnc, anc; 22 \fFigure 4: Sphere: Y , solved X, true X, solved f; previous X, previous a as initialization, a truth; X solved by non-convex alternating formulation, a solved by non-convex alternating formulation \u2022 X: 2\u2212D centered shaped 8 with frame size 15 on size 60 \u00d7 60 grid. \u2022 f\u22c6: convolution of two 1\u2212D \ufb01lters: (1, \u2212s1, 0) with s1 = 0.9995 centered and rotated by 45 degree and (1, \u2212s2, 0) with s2 = 0.9995 centered and rotated by 0 degree. \u2022 Y = a\u22c6\u2217X, where the \ufb01lter inverse is de\ufb01ned by discrete Fourier transform on 60 \u00d7 60 grid. \u2022 anc is the forward solved by alternating non-convex algorithm. \u2022 Xnc is the initial guess of X solved by alternating non-convex algorithm. \u2022 The plots in order, \ufb01rst row: Y = a\u22c6\u2217X\u22c6, X\u22c6, f\u22c6, a\u22c6. \u2022 second row: Xcvx, fcvx ,Xnc, anc; 23 \fFigure 5: Y , solved X, true X, solved f; previous X, previous a as initialization, a truth; X solved by non-convex alternating formulation, a solved by non-convex alternating formulation. \u2022 X: 2\u2212D centered shaped 8 on size 60 \u00d7 60 grid. \u2022 f\u22c6: convolution of two 1\u2212D centered \ufb01lters: (1, \u2212s1, 0), s1 = 0.999, rotated by 45 degree, . (1, \u2212s12, 0), s2 = 0.9, rotated by 0 degree. \u2022 Y = f\u22c6,\u22121 \u2217X, where the \ufb01lter inverse is de\ufb01ned by discrete Fourier transform on 60 \u00d7 60 grid. \u2022 anc is the forward solved by alternating non-convex algorithm. \u2022 Xnc is the initial guess of X solved by alternating non-convex algorithm. \u2022 The plots in order, \ufb01rst row: Y = a\u22c6\u2217X\u22c6, X\u22c6, f\u22c6, a\u22c6. \u2022 second row: Xcvx, fcvx ,Xnc, anc; 24 \fFigure 6: Sphere: Y , solved X, true X, solved f; previous X, previous a as initialization, a truth; X solved by non-convex alternating formulation, a solved by non-convex alternating formulation. \u2022 X: 2\u2212D circle centered with radius 10, on size 60 \u00d7 60 grid. \u2022 f\u22c6: 1\u2212D \ufb01lter (1, \u2212s, 0) centered and rotated by 45 degree. \u2022 Y = f\u22c6,\u22121 \u2217X, where the \ufb01lter inverse is de\ufb01ned by discrete Fourier transform on 60 \u00d7 60 grid. \u2022 ancvx is the forward solved by alternating non-convex algorithm. \u2022 Xncvx is the initial guess of X solved by alternating non-convex algorithm. \u2022 The plots in order, \ufb01rst row: Y = a\u22c6\u2217X\u22c6, Xcvx, X\u22c6, fcvx = f\u22c6. \u2022 second row: Xinit, ainit, a\u22c6; \u2022 third row: Xnc, anc, fnc; 25 \fFigure 7: Failure mode-square, TV norm: Y , solved X, true X, solved f, \u2022 X: 2\u2212D centered shaped square with frame size 40But on size 60 \u00d7 60 grid. \u2022 f\u22c6: convolution of two 1\u2212D \ufb01lters: (1, \u2212s1, 0) with s1 = 0.9995 centered and rotated by 45 degree and (1, \u2212s2, 0) with s2 = 0.9995 centered and rotated by 0 degree. \u2022 Y = a\u22c6\u2217X, where the \ufb01lter inverse is de\ufb01ned by discrete Fourier transform on 60 \u00d7 60 grid. \u2022 anc is the forward solved by alternating non-convex algorithm. 
\u2022 Xnc is the initial guess of X solved by alternating non-convex algorithm. \u2022 The plots in order, \ufb01rst row: Y = a\u22c6\u2217X\u22c6, X\u22c6, f\u22c6, a\u22c6. \u2022 second row: Xcvx, fcvx ,Xnc, anc; 26 \fFigure 8: Failure mode-diamond, TV norm: Y , solved X, true X, solved f, 27 \fFigure 9: \u2018Cameraman\u2019, using TV norm and weighted Haar L1 norm (5 levels with weights 2j, j = 0, . . . 4: Y , true X, TV norm solved X,Weighted Haar L1 norm solved X, true f, TV norm solved f,Weighted Haar L1 norm solved f , \u2022 X: camera man \ufb01gure, on size 128 \u00d7 128 grid. \u2022 f\u22c6: 1\u2212D \ufb01lter (1, \u2212s, 0) centered and rotated by 45 degree. \u2022 Y = f\u22c6,\u22121 \u2217X, where the \ufb01lter inverse is de\ufb01ned by discrete Fourier transform on 60 \u00d7 60 grid. \u2022 XTV is the solution of convex problem with TV norm objective. \u2022 XHaar is the solution of convex problem use weighted Haar L1 norm with 5 levels as objective, where the weighted Haar L1 norm weights level j coe\ufb03cient by with weights 2j, j = 0, . . . 4. \u2022 The plots in order, \ufb01rst row: Y = a\u22c6\u2217X\u22c6, X\u22c6,XTV, XHaar. 28 \f4 Techinical Overview and Proof Sketch for Phase Transition Theorems 4.1 Main Result 1: Population (Large-N) Phase Transition Sketch of proof ideas for Theorem 3 Here we highlight some of the key ideas in the proof: \u2022 Change of variable. Rewrite the population version of our convex sparse blind deconvolution problem, with the population objective E 1 N \u2225w \u2217Y \u2225\u21131(T ) = E|(w \u2217Y )0| = E|(w \u2217a \u2217X)0| = EX|\u27e8X, (w \u2217a)\u2020\u27e9| due to the ergodic property of stationary process and shift invariance, and (e a \u2217w)0 = ((e a \u2217a\u22121) \u2217(a \u2217w))0 = \u27e8(e a \u2217a\u22121)\u2020, w \u2217a\u27e9, the convex problem becomes minimize w EX|\u27e8X, (a \u2217w)\u2020\u27e9| subject to \u27e8e a \u2217a\u22121, (a \u2217w)\u2020\u27e9= 1, Let v denote the time reversed version of a \u2217w: v := (a \u2217w)\u2020, and let e e := e a \u2217a\u22121, then by previous assumptions, e e(0) = 1, e e\u2032 = e e \u2212e0. Now we arrive at a simple and fundamental population convex problem: minimize v EX|\u27e8X, v\u27e9| subject to \u27e8e e, v\u27e9= 1. \u2022 Expectation using Gaussian. Since X follows Bernoulli-Gaussian IID probability model Xt = ItGt, we nest the expectation over It outside the expectation over Gaussian Gt, for which we use E|N(0, 1)| = q 2 \u03c0: EX|\u27e8X, v\u27e9| = EIEG| X t\u2208Z ItGtv(t)| = r 2 \u03c0 \u00b7 EI\u2225v \u00b7 I\u22252 \u2022 KKT condition for e0. Let v\u22c6denote the solution of the optimization problem: minimize \u03c8 q 2 \u03c0 \u00b7 EI\u2225v \u00b7 I\u22252 subject to \u27e8e e, v\u27e9= 1 (Q1(e e)) To prove that v\u22c6= e0, i.e. e0 solves (Q1(e e)), we calculate the directional \ufb01nite di\ufb00erence at e0. Then e0 solves this convex problem if the directional \ufb01nite di\ufb00erence at v = e0 is non-negative at every direction \u03b2 on unit sphere where e eT \u03b2 = 0: E\u2225(e0 + \u03b2) \u00b7 I\u22252 \u2212E\u2225(e0) \u00b7 I\u22252 \u22650. \u2022 Conditional expectation at one sparse element X0. 
We decompose the objective into a sum of terms, conditioning on whether I0 = 1{X0\u0338=0} is zero or not: E\u2225(e0 + \u03b2) \u00b7 I\u22252 \u2212E\u2225(e(0)) \u00b7 I\u22252 = p(1 + \u03b20) + (1 \u2212p)\u2207\u03b2EI[\u2225(e0 + \u03b2)\u2032\u2225\u21132(I\u2212{0}) | I0 = 0] \u2212p = p\u03b20 + (1 \u2212p)EI\u2032[\u2225\u03b2\u2032\u2225\u21132(I\u2032)]. 29 \fThis will be non-negative in case either \u03b2(0) > 0, or else \u03b2(0) < 0 but p 1 \u2212p \u2264EI\u2032\u2225\u03b2\u2032\u2225\u21132(I\u2032) |\u03b20| for all \u03b2 that satisfy e eT \u03b2 = 0. \u2022 Reduction to val(Q1(e e\u2032)). We normalize the direction sequence \u03b2 so that \u03b2(0) = \u22121; using \u02dc e(0) = 1, we obtain a lowerbound: inf \u03b2(0)=\u22121,\u27e8e e,\u03b2\u27e9=0 EI\u2032\u2225\u03b2\u2032\u2225\u21132(I\u2032) = inf \u03b2(0)=\u22121,\u03b2(0)\u02dc e(0)\u2212\u27e8e e\u2032,\u03b2\u2032\u27e9=0 EI\u2032\u2225\u03b2\u2032\u2225\u21132(I\u2032) = inf \u27e8e e\u2032,\u03b2\u2032\u27e9=1 EI\u2032\u2225\u03b2\u2032\u2225\u21132(I\u2032) = val(Q1(e e\u2032)) Here Q1(e e\u2032) is the optimization problem: minimize \u03b2\u2208l1(Z) EI\u2032\u2225\u03b2\u2032\u2225\u21132(I\u2032) subject to \u27e8e e\u2032, \u03b2\u2032\u27e9= 1 \u2022 The explicit phase transition condition with upper and lower bound. We have shown the existence of p\u22c6so that for all p < p\u22c6, the KKT condition is satis\ufb01ed. And we have represented p\u22c6as the optimal value of a derived optimization problem val(Q1(e e\u2032)). The following lemma \ufb01nds simple upper and lower bounds for p\u22c6= val(Q1(e e\u2032)). Lemma 4.1 (Explicit phase transition condition with upper and lower bound). The threshold p\u22c6determined by p 1 \u2212p = val(Q1(e e\u2032)). obeys an upper bound and lower p \u2225e e\u2032\u2225\u221e \u2265val(Q1(e e\u2032)) \u2265EI\u2225e e\u2032 \u00b7 I\u2225\u22121 2 where \u2225e e\u2032\u2225\u221e= |e e|(2) |e e|(1) . Additionally, the upper bound is sharp if and only if p 1 \u2212p \u2264val(Q1( e e\u2032 \u2225e e\u2032\u2225\u221e )) = val(Q1(e e\u2032\u2032))/\u2225e e\u2032\u2225\u221e therefore, the upper bound holds with equality p\u22c6= 1 \u2212|e e|(2) |e e|(1) if p \u22641 \u2212|e e|(3) |e e|(2) Therefore, if |e e|(3) |e e|(2) \u2264|e e|(2) |e e|(1) then p\u22c6= 1 \u2212|e e|(2) |e e|(1) . 30 \fThe above narrative gives a sketch of our result and its proof. The upper bound 1 \u2212 |e e|(2) |e e|(1) of p\u22c6 generalized the previous special case of exponential decay \ufb01lter in theorem 1 with p\u22c6= 1 \u2212|s|. \u2022 Lemma 4.2 (Geometric lower bound val(Q1(e e\u2032)) for the phase transition condition). val(Q1(e e\u2032)) \u2265p cot \u2220(e e, e0) where cot \u2220(e e, e0) = 1 \u2225e e\u2032\u22252 \u2022 Using the technical background provided in theorem 9, we can further provide a tighter upper bound to compute phase transition p\u22c6. 4.2 Technical Tools: Landscape of Expected Homogeneous Function over Bernoulli Support on Sphere Let Vk(u) := EJ\u2225u \u00b7 J\u2225k 2 \u2225u\u2225k 2 where u \u2208RN, J is a Bernoulli sequence indexed from 1 to N. Expectation V1(u) := EJ\u2225u \u00b7 J\u22251 2 on sphere. Theorem 9. Let u \u2208RN, then p \u2264V1(u) \u2264V1( P j\u2208[N] \u00b1ej \u221a N ) \u2264\u221ap. the lower bound is approached by on-sparse vectors u \u2208{\u00b1ei, i \u2208[N]}, and the upper bound is approached by u \u2208{ 1 \u221a N P j\u2208[N] \u00b1ej}. 
Furthermore, all the stationary points of V1(u) are { P i\u2208J \u00b1ei \u221aNJ } for di\ufb00erent support J \u2282T , where {\u00b1ei, i \u2208[N]} are the global minimizers, and { 1 \u221a N P j\u2208[N] \u00b1ej} are the global maximizers. And for J with 1 < NJ < N, { P i\u2208J \u00b1ei \u221aNJ } are saddle points with value V1( P i\u2208J \u00b1ei \u221aNJ ) = EI sP i\u2208J 1Ii NJ = NJ X j=0 (1 \u2212p)NJ\u2212jpj \u0012NJ j \u0013r j NJ . To support geometric intuition, Figure 10 visualizes V1 on the 2\u2212dimensional sphere. 31 \fFigure 10: The value of V1 on a two-dimensional sphere in R3, normalized by the a\ufb03ne transform to send the value in [0, 1]. 32 \fExpectation V2k(u) := EJ\u2225u \u00b7 J\u22252k 2 on sphere. Theorem 10. For k \u22652, let u denote a vector in RN, then pk \u2264V2k( P j\u2208[N] \u00b1ej \u221a N ) \u2264V2k(u) \u2264p. the upper bound is approached by one-sparse vectors u \u2208{\u00b1ei, i \u2208[N]}, and the lower bound is approached by u \u2208{ 1 \u221a N P j\u2208[N] \u00b1ej}. Furthermore, all the stationary points of V2k(\u03c8) are { P i\u2208J \u00b1ei \u221aNJ } for di\ufb00erent support J \u2282T , where {\u00b1ei, i \u2208[N]} are the set of global maximizers, and { 1 \u221a N P j\u2208[N] \u00b1ej} are the set of all the global minimizers. And for J with 1 < NJ < N, { P i\u2208J \u00b1ei \u221aNJ } are saddle points with value V2k( P i\u2208J \u00b1ei \u221aNJ ) = EJ( P i\u2208J 1Ii NJ )k = NJ X j=0 (1 \u2212p)NJ\u2212jpj \u0012NJ j \u0013 ( j NJ )k. Expectation V\u22121(u) := EJ\u2225u \u00b7 J\u2225\u22121 2 on sphere. Theorem 11. V\u22121(u) \u2265p\u22121/2. (10) 4.3 Exact representation of val(Q1) Exact representation from support analysis Let |e e| be the entry-wise absolute value of e e. We can rank the entries of |e e| to be |e e|(1), |e e|(2), |e e|(3), . . ., then the entries of e e\u2032 will be ranked as |e e|(2), |e e|(3), . . .. By scaling we have |e e|(1) = 1. We de\ufb01ne e eSm as the vector that only keep the entries e e(2), e e(3), . . . , e e(m+1) of e e, and send the rest of entries to zero. Lemma 4.3 (Exact representation of val(Q1(e e\u2032)) via support decomposition and V1 function). val(Q1(e e\u2032)) = inf m\u2208{1,2,...,ne\u22121} val(QSm 1 ) Proof: Assume that e e has ne non-zero entry on support Se e, we can rank the absolute value of entries of e e to be |e e|(1), |e e|(2), |e e|(3), . . . , |e e|(ne), then the entries of e e\u2032 will be ranked as |e e|(2), |e e|(3), . . . , |e e|(ne). We know the optimal solution of (\u03b2\u2032)\u22c6of inf\u27e8e e\u2032,\u03b2\u2032\u27e9=1 EI\u2032\u2225\u03b2\u2032\u2225\u21132(I\u2032) must have support S\u03b2\u22c6that satisfy S\u22c6\u2282Se e\u2032. From the symmetry of objective, we know if (\u03b2\u2032)\u22c6is m sparse, then m \u2264ne \u22121 and, its support must be on the top m entries |e e|(2), |e e|(3), . . . , |e e|(m+1), we call this support Sm, and we know the corresponding entries of (\u03b2\u2032)\u22c6would have the same sign as entries of e e. We can de\ufb01ne the m sparse optimization problem for a random support function on the subset of Sm: Jm \u2282Sm. 
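As a quick numerical sanity check of the landscape results above (illustrative code, not from the paper), the sketch below estimates V1(u) = E_J ||u . J||_2 by Monte Carlo for a unit-norm vector with m equal-magnitude entries and compares it with the closed-form saddle value and with the global bounds p <= V1(u) <= sqrt(p) of Theorem 9.

import numpy as np
from math import comb, sqrt

def V1_monte_carlo(u, p, n_samples=200_000, seed=0):
    # V1(u) = E_J ||u . J||_2 for a Bernoulli(p) support mask J and unit-norm u.
    rng = np.random.default_rng(seed)
    masks = rng.random((n_samples, u.size)) < p
    return np.linalg.norm(u * masks, axis=1).mean()

def V1_equal_entries(m, p):
    # Closed-form value at the stationary point with m entries of magnitude 1/sqrt(m).
    return sum(comb(m, j) * p**j * (1 - p)**(m - j) * sqrt(j / m) for j in range(m + 1))

p, m, N = 0.2, 4, 10
u = np.zeros(N); u[:m] = 1.0 / sqrt(m)
print(V1_monte_carlo(u, p), V1_equal_entries(m, p))   # the two estimates should agree
print(p, sqrt(p))                                     # global bounds p <= V1(u) <= sqrt(p)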
Let \u03b2 be supported on Sm and each entry positive, then val(QSm 1 ) := inf \u03b2:Pm j=1 |e e|(j+1)\u03b2j=1,\u03b2j>0 EJm\u2225\u03b2 \u00b7 Jm\u22252 33 \fThe linear equality constraint comes from \u27e8|e e\u2032 Sm|, \u03b2\u2032\u27e9= m X j=1 |e e|(j+1)\u03b2j = 1 Then by breaking the case to \ufb01nd minimum over at most ne \u22121 non-zero entries to the minimum of the cases to \ufb01nd minimum over exactly m non-zero entries, and also use sign symmetry, we have val(Q1(e e\u2032)) = inf m\u2208{1,2,...,ne\u22121} val(QSm 1 ) Tighter decomposition on val(Q1) Now we can prove a more re\ufb01ne upper bound: Lemma 4.4 (Tighter upper bound on val(Q1)). val(Q1(e e\u2032)) = inf m\u2208{1,2,...,ne\u22121} val(QSm 1 ) = inf m\u2208{1,2,...,ne\u22121}[V1(e e\u2032 Sm) cot \u2220(e eSm, e0)] where V1 function takes value in [p, \u221ap). More speci\ufb01cally, let the unit vector along the direction of |e e\u2032 Sm| be um = |e e\u2032 Sm|/\u2225e e\u2032 Sm\u22252 val(QSm 1 ) = cot \u2220(e eSm, e0)V1(|e e\u2032 Sm|)C(um) where C(um) := 1 V1(um) inf \u03b2m:\u03b2m j >0 V1(\u03b2m) cos \u2220(\u03b2m, um) = 1 EJm\u2225um \u00b7 Jm\u22252 inf \u03b2m:\u03b2m j >0 EJm\u2225\u03b2m \u00b7 Jm\u22252 \u27e8\u03b2m, um\u27e9 And we can prove that C(um) = 1 Proof of lemma 4.3. From val(QSm 1 ) := inf \u03b2:Pm j=1 |e e|(j+1)\u03b2j=1,\u03b2j>0 EJm\u2225\u03b2 \u00b7 Jm\u22252 For a \ufb01xed m, after re-ranking the entries by absolute value, let Sm be the support so that only the top m entries |e e|(2), |e e|(3), . . . , |e e|(m+1) are non-zero, let |e e\u2032 Sm| = (0, |e e|(2), |e e|(3), . . . , |e e|(m+1), 0, . . . , 0) \u2225e e\u2032 Sm\u22252 2 = |e e|2 (2) + |e e|2 (3) + . . . + |e e|2 (m+1) let the unit vector along the direction of |e e\u2032 Sm| be um = |e e\u2032 Sm|/\u2225e e\u2032 Sm\u22252 34 \fso that um j > 0, j \u2208Sm. then val(QSm 1 ) := 1 \u2225e e\u2032 Sm\u22252 inf \u03b2:\u03b2j>0{EJm\u2225(\u03b2/\u2225\u03b2\u22252) \u00b7 Jm\u22252 \u00b7 1 Pm j=1 um j \u03b2j \u2225\u03b2\u22252 } From previous de\ufb01nition, since \u03b2 is supported on Sm, we denote it as \u03b2m, then V1(\u03b2m) = EJm\u2225(\u03b2/\u2225\u03b2\u22252) \u00b7 Jm\u22252, V1(e e\u2032 Sm) = V1(|e e\u2032 Sm|) = V1(um). Geometrically, cos \u2220(\u03b2m, um) = m X j=1 um j \u03b2j \u2225\u03b2\u22252 cot \u2220(e eSm, e0) = 1 \u2225e e\u2032 Sm\u22252 val(QSm 1 ) = cot \u2220(e eSm, e0) inf \u03b2:\u03b2j>0{V1(\u03b2m) 1 cos \u2220(\u03b2m, um)} Since \u03b2m = um is a feasible point of the linear equality constraint \u27e8|e e\u2032 Sm|, \u03b2\u2032\u27e9= m X j=1 |e e|(j+1)\u03b2j = 1, we have an upper bound val(QSm 1 ) \u2264cot \u2220(e eSm, e0)V1(e e\u2032 Sm) On the other hand, val(QSm 1 ) = cot \u2220(e eSm, e0) inf \u03b2:\u03b2j>0 V1(\u03b2m) \u00b7 1 cos \u2220(\u03b2m, um) = cot \u2220(e eSm, e0)V1(|e e\u2032 Sm|) inf \u03b2:\u03b2j>0 V1(\u03b2m) V1(um) \u00b7 1 cos \u2220(\u03b2m, um) The lower bound is given by \ufb01nding the lower bound of C(um) := 1 V1(um) inf \u03b2m:\u03b2m j >0 V1(\u03b2m) cos \u2220(\u03b2m, um) = 1 EJm\u2225um \u00b7 Jm\u22252 inf \u03b2m:\u03b2m j >0 EJm\u2225\u03b2m \u00b7 Jm\u22252 \u27e8\u03b2m, um\u27e9 First, we know upper bound C(um) \u22641 by plug in \u03b2m = um as feasible point. 
This value C(um) can be restricted to the m-dimensional subspace supported on Sm, therefore its value is independent of ambient space dimension, only dependent on the non-zero entries of um, without loss of generosity we only need to consider the case when um is a m-dimensional dense vector lie on the \ufb01rst quadrant (all positive non-zero entries) of the unit sphere. If um approaches one-sparse point by taking the rest of coordinate close to zero, then \u03b2m should equal um. 35 \fOn the other extreme, if um is has m equal entries 1 \u221am, via numerical simulation we could check that C(um) = 1 We conjecture that C(um) = 1.We only need to prove C(um) \u22651. To prove the lower bound, we only need to show that, on positive quadrant of m-dimensional unit sphere, the function Rum(\u03b2m) = 1 EJm\u2225um \u00b7 Jm\u22252 EJm\u2225\u03b2m \u00b7 Jm\u22252 \u27e8\u03b2m, um\u27e9 \u22651 and takes minimum value 1 at \u03b2m = um. In general, by symmetry, expect our guess \u03b2m = um, for any um on we could infer that the 2 other types of points to check on positive quadrant of unit sphere are \u03b2m = (0, \u221a 1 \u2212\u03f52, \u221a\u03f5, 0 . . . , 0) and \u03b2m = ( 1 \u221am, . . . , 1 \u221am). First , for \u03b2m = ( 1 \u221am, . . . , 1 \u221am), then \u27e8\u03b2m, um\u27e9= 1 \u221am1T um \u22641 by Cauchy inequality, we have Rum(\u03b2m) = 1 EJm\u2225um \u00b7 Jm\u22252 EJm\u2225\u03b2m \u00b7 Jm\u22252 \u27e8\u03b2m, um\u27e9 \u2265 EJm\u2225( 1 \u221am, . . . , 1 \u221am) \u00b7 Jm\u22252 EJm\u2225um \u00b7 Jm\u22252 \u22651 The last inequality comes from the saddle point property of V1 at point ( 1 \u221am, . . . , 1 \u221am). since V1(( 1 \u221am, . . . , 1 \u221am)) = Pm j=0(1 \u2212p)m\u2212jpj\u0000m j \u0001q j m = \u221ap \u2212P\u221e j=m+1(1 \u2212p)m\u2212jpj\u0000m j \u0001q j m \u2208[p, \u221ap). Second, for \u03b2m = (0, \u221a 1 \u2212\u03f52, \u221a\u03f5, 0 . . . , 0), if \u03b2m approaches one-sparse point by taking the rest of coordinate close to zero, by continuity of function Rum(\u03b2m), we can consider the limit point on the boundary \u03b2m = (0, 1, 0, 0 . . . , 0), then Rum(\u03b2m) = 1 EJm\u2225um \u00b7 Jm\u22252 EJm\u2225\u03b2m \u00b7 Jm\u22252 \u27e8\u03b2m, um\u27e9 \u2265 p\u2225um\u22252 2 EJm\u2225um \u00b7 Jm\u22252\u2225um\u2225\u221e We can \ufb01rst compute gradient of Rum(\u03b2m), show that its directional gradient are all non-negative at um. 4.4 Technical Tool: Relation between Finite Di\ufb00erence of Objective and Euclidean Distance Bi-Lipschitzness of \ufb01nite di\ufb00erence of objective As an important proof tool, we study functional B that allows us to bound the Euclidean distance d2(\u03c8, e0) := \u2225\u03c8 \u2212e0\u22252 by the objective di\ufb00erence EI\u2225\u03c8 \u00b7 I\u22252 \u2212EI\u2225(e0) \u00b7 I\u22252 : B(e0, \u03c6) := EI\u2225(e0 + \u03c6) \u00b7 I\u22252 \u2212EI\u2225(e0) \u00b7 I\u22252 \u2225\u03c6\u22252 When we rescale \u03c6 to \u03b2 := \u03c6 \u2225\u03c6\u22252 , we have B(e0, t\u03b2) = EI\u2225(e0 + t\u03b2) \u00b7 I\u22252 \u2212EI\u2225(e0) \u00b7 I\u22252 t 36 \fFrom the de\ufb01nition of directional derivative, \u2207\u03c6EI\u2225(e0 + \u03c6) \u00b7 I\u22252 |\u03c6=0= lim t\u21920+ EI\u2225(e0 + t\u03b2) \u00b7 I\u22252 \u2212EI\u2225(e0) \u00b7 I\u22252 t = lim t\u21920+ B(e0, t\u03b2). We have upper and lower bound on their di\ufb00erence. 
This upper and lower bound allows us to connect objective EI\u2225\u03c8 \u00b7 I\u22252 and the 2\u2212norm of \u03c8 \u2212e0: Theorem 12 (Bi-Lipschitzness of \ufb01nite di\ufb00erence of objective near e0 for linear constraint). We have upper and lower bound of B(e0, \u03c6) : 0 \u2264B(e0, t\u03b2) \u2212lim t\u21920+ B(e0, t\u03b2) \u2264t 2p \u0010 \u03b22 0 + p(1 \u2212\u03b22 0) \u0011 \u2264pt 2 . This leads to 0 \u2264B(e0, \u03c6) \u2212\u2207\u03c6EI\u2225(e0 + \u03c6) \u00b7 I\u22252 |\u03c6=0\u2264p 2\u2225\u03c6\u22252. Therefore, when p < p\u22c6, \u2207\u03c6EI\u2225(e0 + \u03c6) \u00b7 I\u22252 \u2265\u03f5(p, p\u22c6) > 0, we have B(e0, \u03c6) \u2265\u03f5(p, p\u22c6) > 0, which allows us to bound di\ufb00erence of objective by Euclidean distance . Reversely, 1/B(e0, \u03c6) \u22641/\u03f5(p, p\u22c6), which allows us to bound Euclidean distance by di\ufb00erence of objective. 5" + }, + { + "url": "http://arxiv.org/abs/2101.08170v3", + "title": "SUGAR: Subgraph Neural Network with Reinforcement Pooling and Self-Supervised Mutual Information Mechanism", + "abstract": "Graph representation learning has attracted increasing research attention.\nHowever, most existing studies fuse all structural features and node attributes\nto provide an overarching view of graphs, neglecting finer substructures'\nsemantics, and suffering from interpretation enigmas. This paper presents a\nnovel hierarchical subgraph-level selection and embedding based graph neural\nnetwork for graph classification, namely SUGAR, to learn more discriminative\nsubgraph representations and respond in an explanatory way. SUGAR reconstructs\na sketched graph by extracting striking subgraphs as the representative part of\nthe original graph to reveal subgraph-level patterns. To adaptively select\nstriking subgraphs without prior knowledge, we develop a reinforcement pooling\nmechanism, which improves the generalization ability of the model. To\ndifferentiate subgraph representations among graphs, we present a\nself-supervised mutual information mechanism to encourage subgraph embedding to\nbe mindful of the global graph structural properties by maximizing their mutual\ninformation. Extensive experiments on six typical bioinformatics datasets\ndemonstrate a significant and consistent improvement in model quality with\ncompetitive performance and interpretability.", + "authors": "Qingyun Sun, Jianxin Li, Hao Peng, Jia Wu, Yuanxing Ning, Phillip S. Yu, Lifang He", + "published": "2021-01-20", + "updated": "2021-05-24", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI" + ], + "main_content": "INTRODUCTION Graphs have been widely used to model the complex relationships between objects in many areas including computer vision [14, 59], natural language processing [36], anomaly detection [1, 33], academic network analysis [44, 62], bioinformatics analysis [12, 46], etc. By learning a graph-based representation, it is possible to capture the sequential, topological, geometric, and other relational characteristics of structured data. However, graph representation learning is still a non-trivial task because of its complexity and flexibility. Moreover, existing methods mostly focus on node-level embedding [17, 21, 48], which is insufficient for subgraph analysis. Graph-level embedding [32, 40, 61, 63] is critical in a variety of real-world applications such as predicting the properties of molecules in drug discovery [28], and community analysis in social networks [25]. 
In this paper, we focus on graph-level representation for both subgraph discovery and graph classification tasks in an integrative way. In the literature, a substantial amount of research has been devoted to developing graph representation techniques, ranging from traditional graph kernel methods to recent graph neural network methods. In the past decade, many graph kernel methods [22] have been proposed that directly exploit graph substructures decomposed from it using kernel functions, rather than vectorization. Due to its specialization, these methods have shown competitive performance in particular application domains. However, there are limitations in two aspects. Firstly, kernel functions are mostly handcrafted and heuristic [3, 40, 41, 57], which are inflexible and may suffer from poor generalization performance. Secondly, the embedding dimensionality usually grows exponentially with the growth in the substructure\u2019s size, leading to sparse or non-smooth representations [7]. With the recent advances in deep learning, Graph Neural Networks (GNNs) [53] have achieved significant success in mining graph data. GNNs attempt to extend the convolution operation from regular domains to arbitrary topologies and unordered structures, including spatial-based [16, 32, 42] and spectral-based methods [6, 10, 26]. Most of the current GNNs are inherently flat, as they only propagate node information across edges and obtain graph representations by globally summarizing node representations. These summarisation approaches include averaging over all nodes [13], adding a virtual node [27], using connected layers [16] or convolutional layers [63], etc. arXiv:2101.08170v3 [cs.LG] 24 May 2021 \fWWW \u201921, April 19\u201323, 2021, Ljubljana, Slovenia Sun et al. However, graphs have a wide spectrum of structural properties, ranging from nodes, edges, motifs to subgraphs. The local substructures (i.e., motifs and subgraphs) in a graph always contain vital characteristics and prominent patterns [47], which cannot be captured during the global summarizing process. For example, the functional units in organic molecule graphs are certain substructures consisting of atoms and the bonds between them [9, 12]. To overcome the problem of flat representation, some hierarchical methods [15, 61] use local structures implicitly by coarsening nodes progressively, which often leads to the unreasonableness of the classification result. To exploit substructures with more semantics, some researchers [24, 31, 35, 58] exploit motifs (i.e., small, simple structures) to serve as local structure features explicitly. Despite its success, it requires domain expertise to carefully design specific motif extracting rules for various applications carefully. Moreover, subgraphs are exploited to preserve higher-order structural information by motif combination [58], subgraph isomorphism counting [5], rule-based extraction [56], etc. However, effectively exploiting higher-order structures for graph representation is a non-trivial problem due to the following major challenges: (1) Discrimination. Generally, fusing all features and relations to obtain an overarching graph representation always brings the potential concerns of over-smoothing, resulting in the features of graphs being indistinguishable. (2) Prior knowledge. Preserving structural features in the form of similarity metrics or motif is always based on heuristics and requires substantial prior knowledge, which is tedious and ad-hoc. (3) Interpretability. 
Many methods exploit substructures by coarsening them progressively. This is not suitable to give prominence to individual substructures, resulting in a lack of sufficient interpretability. To address the aforementioned challenges, we propose a novel SUbGrAph neural network with Reinforcement pooling and selfsupervised mutual information mechanism, named SUGAR. Our goal is to develop an effective framework to adaptively select and learn discriminative representations of striking subgraphs that generalize well without prior knowledge and respond in an explanatory way. SUGAR reconstructs a sketched graph to reveal subgraph-level patterns, preserving structural information in a three-level hierarchy: node, intra-subgraph, and inter-subgraph. To obtain more representative information without prior knowledge, we design a reinforcement pooling mechanism to select more striking subgraphs adaptively by a reinforcement learning algorithm. Moreover, to discriminate subgraph embeddings among graphs, a self-supervised mutual information mechanism is also introduced to encourage subgraph representations preserving global properties by maximizing mutual information between local and global graph representations. Extensive experiments on six typical bioinformatics datasets demonstrate significant and consistent improvements in model quality. We highlight the advantages of SUGAR1 as follows: \u2022 Discriminative. SUGAR learns discriminative subgraph representations among graphs, which are aware of both local and global properties. \u2022 Adaptable. SUGAR adaptively finds the most striking subgraphs given any graph without prior knowledge, which allows it to perform in a superior way across various types of graphs. 1Code is available at https://github.com/RingBDStack/SUGAR. \u2022 Interpretable. SUGAR explicitly indicates which subgraphs are dominating the learned result, which provides insightful interpretation into downstream applications. 2 RELATED WORK This section briefly reviews graph neural networks, graph pooling, and self-supervised learning on graphs. 2.1 Graph Neural Networks Generally, graph neural networks follow a message-passing scheme recursively to embed graphs into a continuous and low-dimensional space. Prevailing methods capture graph properties in different granularities, including node [20, 32, 45], motif [24, 31, 35], and subgraph [2, 56, 58]. Several works [20, 32, 45, 50, 54] generate graph representations by globally fusing node features. PATCHY-SAN [32] represents a graph as a sequence of nodes and generates local normalized neighborhood representations for each node. RNN autoencoder based methods [20, 45] capture graph properties by sampling node sequence, which is implicit and unaware of exact graph structures. Graph capsule networks [50, 54] capture node features in the form of capsules and use a routing mechanism to generate high-level features. However, these methods are incapable of exploiting the hierarchical structure information of graphs. Local substructures (i.e., motifs and subgraphs) are exploited to capture more complex structural characteristics. Motif-based methods [24, 31, 35, 58] are limited to enumerate exact small structures within graphs as local structure features. The motif extraction rules must be manually designed with prior knowledge. 
Other works exploit subgraphs by motif combination [58], subgraph isomorphism counting [5], and rule-based extraction [56] for graph-level tasks (e.g., subgraph classification [2], graph evolution prediction [30], and graph classification [5, 56, 58]). NEST [58] explores subgraphlevel patterns by various combinations of motifs. The most relevant work to ours is SGN [56], which detects and selects appropriate subgraphs based on pre-defined rules, expanding the structural feature space effectively. Compared to our model, the subgraph detection and selection procedure is based on heuristics, and it is difficult for SGN to provide sufficient information when subgraphs become too large. The aforementioned methods have many limitations in terms of discrimination, prior knowledge, and interpretability. In our framework, we address these problems by representing graphs as adaptively selected striking subgraphs. 2.2 Graph Pooling Graph pooling is investigated to reduce the entire graph information into a coarsened graph, which broadly falls into two categories: cluster pooling and top-k selection pooling. Cluster pooling methods (e.g., DiffPool [61], EigenPooling [29] and ASAP [39]) group nodes into clusters and coarsen the graph based on the cluster assignment matrix. Feature and structure information is utilized implicitly during clustering, leading to a lack of interpretability. Furthermore, the number of clusters is always determined by a heuristic or performs as a hyper-parameter, and cluster operations always lead to high computational cost. \fSUGAR: Subgraph Neural Network with Reinforcement Pooling and Self-Supervised Mutual Information Mechanism WWW \u201921, April 19\u201323, 2021, Ljubljana, Slovenia Supervised Classification Loss ... rank top-k selection ... Subgraph Neural Network summarize ( ) , Positive Sample Negative Sample Self-Supervised MI Module Subgraph Neural Network encode sketch sample Reinforcement Pooling Module update pool ( ) , Self-supervised Contrastive Loss * Q * ( | ; ) a s Q \uf070 classification \u2460 \u2461 \u2462 k MI maximization G ~ G subgraph + n subgraphs ' n subgraphs D Discriminator Local subgraph embedding Global graph embedding sketched graph Figure 1: An illustration of the SUGAR architecture. Assuming the single graph setup (i.e., \ud835\udc3ais provided as input and \u02dc \ud835\udc3ais an alternative graph providing negative samples), SUGAR consists of the following steps: 1 \u25cbSubgraph sampling and encoding: for each graph, a fixed number of subgraphs is sampled and encoded by an intra-subgraph attention mechanism; 2 \u25cbSubgraph selection: striking subgraphs are selected by a reinforcement learning module and pooled into a sketched graph; 3 \u25cbSubgraph sketching: every supernode (i.e., subgraph) in the sketched graph is fed into an inter-subgraph attention layer; Subgraph representations are further enhanced by maximizing mutual information between local subgraph (in cyan) and global graph (in orange) representations; the graph classification result is voted by classifying subgraphs. Top-k selection pooling methods compute the importance scores of nodes and select nodes with pooling ratio \ud835\udc58to remove redundant information. Top-k pooling methods are generally more memory efficient as they avoid generating dense cluster assignments. [8] and [15] select nodes based on their scalar projection values on a trainable vector. SortPooling [63] sorts nodes according to their structural roles within the graph using the WL kernel. 
SAGPool [23] uses binary classification to decide the preserving nodes. To our best knowledge, current top-k selection pooling methods are mostly based on heuristics, since they cannot parameterize the optimal pooling ratio [23]. The pooling ratio is always taken as a hyperparameter and tuned during the experiment [15, 23], which lacks a generalization ability. However, the pooling ratios are diverse in different types of graphs and should be chosen adaptively. Our framework adopts a reinforcement learning algorithm to optimize the pooling ratio, which can be trained with graph neural networks in an end-to-end manner. 2.3 Self-Supervised Learning on Graphs Self-supervised learning has shown superiority in boosting the performance of many downstream applications in computer vision [18, 19] and natural language processing [11, 60]. Recent works [37, 38, 43, 49] harness self-supervised learning for GNNs and have shown competitive performance. DGI [49] learns a node encoder that maximizes the mutual information between patch representations and corresponding high-level summaries of graphs. GMI [37] generalizes the conventional computations of mutual information from vector space to the graph domain, where measuring the mutual information from the two aspects of node features and topological structure. InfoGraph [43] learns graph embeddings by maximizing the mutual information between graph embeddings and substructure embeddings of different scales (e.g., nodes, edges, triangles). Our approach differs in that we aim to obtain subgraph-level representations mindful of the global graph structural properties. 3 OUR APPROACH This section proposes the framework of SUGAR for graph classification, addressing the challenges of discrimination, prior knowledge, and interpretability simultaneously. The overview of SUGAR is shown in Figure 1. We first introduce the subgraph neural network, and then followed by the reinforcement pooling mechanism and the self-supervised mutual information mechanism. We represent a graph as\ud835\udc3a= (\ud835\udc49,\ud835\udc4b,\ud835\udc34), where\ud835\udc49= {\ud835\udc631, \ud835\udc632, \u00b7 \u00b7 \u00b7 , \ud835\udc63\ud835\udc41} denotes the node set, \ud835\udc4b\u2208R\ud835\udc41\u00d7\ud835\udc51denotes the node features, and \ud835\udc34\u2208R\ud835\udc41\u00d7\ud835\udc41denotes the adjacency matrix. \ud835\udc41is the number of nodes, and \ud835\udc51is the dimension of the node feature. Given a dataset (G, Y) = {(\ud835\udc3a1,\ud835\udc661), (\ud835\udc3a2,\ud835\udc662), \u00b7 \u00b7 \u00b7 (\ud835\udc3a\ud835\udc5b,\ud835\udc66\ud835\udc5b)}, where \ud835\udc66\ud835\udc56\u2208Y is the label of \ud835\udc3a\ud835\udc56\u2208G, the task of graph classification is to learn a mapping function \ud835\udc53: G \u2192Y that maps graphs to the label sets. 3.1 Subgraph Neural Network As the main component of SUGAR, the subgraph neural network reconstructs a sketched graph by extracting striking subgraphs as the original graph\u2019s representative part to reveal subgraph-level patterns. In this way, the subgraph neural network preserves the graph properties through a three-level hierarchy: node, intra-subgraph, \fWWW \u201921, April 19\u201323, 2021, Ljubljana, Slovenia Sun et al. and inter-subgraph. 
Briefly, there are three steps to build a subgraph neural network: (1) sample and encode subgraphs from the original graph; (2) select striking subgraphs by a reinforcement pooling module; (3) build a sketched graph and learn subgraph embeddings by an attention mechanism and a self-supervised mutual information mechanism. Step-1: Subgraph sampling and encoding. First, we sample \ud835\udc5bsubgraphs from the original graph. We sort all nodes in the graph by their degree in descending order and select the first \ud835\udc5bnodes as the central nodes of subgraphs. For each central node, we extract a subgraph using the breadth-first search (BFS) algorithm. The number of nodes in each subgraph is limited to \ud835\udc60. The limitation of \ud835\udc5band \ud835\udc60is to maximize the original graph structure\u2019s coverage with a fixed number of subgraphs. Then, we obtain a set of subgraphs {\ud835\udc541,\ud835\udc542, \u00b7 \u00b7 \u00b7 ,\ud835\udc54\ud835\udc5b}. Second, we learn a GNN-based encoder, E : R\ud835\udc60\u00d7\ud835\udc51\u00d7 R\ud835\udc60\u00d7\ud835\udc60\u2192 R\ud835\udc60\u00d7\ud835\udc511, to acquire node representations within subgraphs, where \ud835\udc511 is the dimension of node representation. Then the node representations H(\ud835\udc54\ud835\udc56) \u2208R\ud835\udc60\u00d7\ud835\udc511 for nodes in subgraph \ud835\udc54\ud835\udc56can be obtained by the generalized equation: H(\ud835\udc54\ud835\udc56) = E(\ud835\udc54\ud835\udc56) = {h\ud835\udc57|\ud835\udc63\ud835\udc57\u2208\ud835\udc49(\ud835\udc54\ud835\udc56)}. (1) We unify the formulation of E as a message passing framework: h(\ud835\udc59+1) \ud835\udc56 = U(\ud835\udc59+1) (h(\ud835\udc59) \ud835\udc56, AGG(M(\ud835\udc59+1) (h(\ud835\udc59) \ud835\udc56, h(\ud835\udc59) \ud835\udc57)|\ud835\udc63\ud835\udc57\u2208\ud835\udc41(\ud835\udc63\ud835\udc56))), (2) where M(\u00b7) denotes the message generation function, AGG(\u00b7) denotes the aggregation function, and U(\u00b7) denotes the state updating function. Various formulas of GNNs can be substituted for Eq. (1). Third, to encode node representations into a unified subgraph embedding space, we leverage an intra-subgraph attention mechanism to learn the node importance within a subgraph. The attention coefficient \ud835\udc50(\ud835\udc56) \ud835\udc57 for \ud835\udc63\ud835\udc57is computed by a single forward layer, indicating the importance of \ud835\udc63\ud835\udc57to subgraph \ud835\udc54\ud835\udc56: \ud835\udc50(\ud835\udc56) \ud835\udc57 = \ud835\udf0e(aT \ud835\udc56\ud835\udc5b\ud835\udc61\ud835\udc5f\ud835\udc4eW\ud835\udc56\ud835\udc5b\ud835\udc61\ud835\udc5f\ud835\udc4eh(\ud835\udc56) \ud835\udc57), (3) where W\ud835\udc56\ud835\udc5b\ud835\udc61\ud835\udc5f\ud835\udc4e\u2208R\ud835\udc511\u00d7\ud835\udc511 is a weight matrix, and a\ud835\udc56\ud835\udc5b\ud835\udc61\ud835\udc5f\ud835\udc4e\u2208R\ud835\udc511 is a weight vector. W\ud835\udc56\ud835\udc5b\ud835\udc61\ud835\udc5f\ud835\udc4eand a\ud835\udc56\ud835\udc5b\ud835\udc61\ud835\udc5f\ud835\udc4eare shared among nodes of all subgraphs. Then, we normalize the attention coefficients across nodes within a subgraph via a softmax function. After this, we can compute the representations z\ud835\udc56of \ud835\udc54\ud835\udc56as follows: z\ud835\udc56= \u2211\ufe01 \ud835\udc63\ud835\udc57\u2208V(\ud835\udc54\ud835\udc56) \ud835\udc50(\ud835\udc56) \ud835\udc57h(\ud835\udc56) \ud835\udc57. (4) Step-2: Subgraph selection. 
To denoise randomly sampled subgraphs, we need to select subgraphs with prominent patterns, typically indicated by particular subgraph-level features and structures. We adopt top-k sampling with an adaptive pooling ratio \ud835\udc58\u2208(0, 1] to select a portion of subgraphs. Specifically, we employ a trainable vector p to project all subgraph features to 1D footprints {\ud835\udc63\ud835\udc4e\ud835\udc59\ud835\udc56|\ud835\udc54\ud835\udc56\u2208\ud835\udc3a}. \ud835\udc63\ud835\udc4e\ud835\udc59\ud835\udc56measures how much information of subgraph \ud835\udc54\ud835\udc56can be retained when projected onto the direction of p. Then, we take the {\ud835\udc63\ud835\udc4e\ud835\udc59\ud835\udc56} as the importance values of subgraphs and rank the subgraphs in descending order. After that, we select the top \ud835\udc5b\u2032 = \u2308\ud835\udc58\u00b7 \ud835\udc5b\u2309subgraphs and omit all other subgraphs at the current batch. During the training phase, \ud835\udc63\ud835\udc4e\ud835\udc59\ud835\udc56of subgraph \ud835\udc54\ud835\udc56on p is computed as: \ud835\udc63\ud835\udc4e\ud835\udc59\ud835\udc56= zip \u2225p\u2225, \ud835\udc56\ud835\udc51\ud835\udc65= \ud835\udc5f\ud835\udc4e\ud835\udc5b\ud835\udc58({\ud835\udc63\ud835\udc4e\ud835\udc59\ud835\udc56},\ud835\udc5b\u2032), (5) where \ud835\udc5f\ud835\udc4e\ud835\udc5b\ud835\udc58({\ud835\udc63\ud835\udc4e\ud835\udc59\ud835\udc56},\ud835\udc5b\u2032) is the operation of subgraph ranking, which returns the indices of the \ud835\udc5b\u2032-largest values in {\ud835\udc63\ud835\udc4e\ud835\udc59\ud835\udc56}. \ud835\udc56\ud835\udc51\ud835\udc65returned by \ud835\udc5f\ud835\udc4e\ud835\udc5b\ud835\udc58({\ud835\udc63\ud835\udc4e\ud835\udc59\ud835\udc56},\ud835\udc5b\u2032) denotes the indices of selected subgraphs. \ud835\udc58 is updated every epoch by a reinforcement learning mechanism introduced in Section 3.2. Step-3: Subgraph sketching. Since we learned the independent representations of subgraph-level features, we assemble the selected subgraphs to reconstruct a sketched graph to capture their inherent relations. First, as shown in Fig. 1, we reduce the original graph into a sketched graph \ud835\udc3a\ud835\udc60\ud835\udc58\ud835\udc52= (\ud835\udc49\ud835\udc60\ud835\udc58\ud835\udc52, \ud835\udc38\ud835\udc60\ud835\udc58\ud835\udc52) by treating the selected subgraphs as supernodes. The connectivity between supernodes is determined by the number of common nodes in the corresponding subgraphs. Specifically, an edge \ud835\udc52(\ud835\udc56, \ud835\udc57) will be added to the sketched graph when the number of common nodes in \ud835\udc54\ud835\udc56 and \ud835\udc54\ud835\udc57exceeds a predefined threshold \ud835\udc4f\ud835\udc50\ud835\udc5c\ud835\udc5a. \ud835\udc49\ud835\udc60\ud835\udc58\ud835\udc52= {\ud835\udc54\ud835\udc56}, \u2200\ud835\udc56\u2208\ud835\udc56\ud835\udc51\ud835\udc65; \ud835\udc38\ud835\udc60\ud835\udc58\ud835\udc52= {\ud835\udc52\ud835\udc56,\ud835\udc57}, \u2200 \f \f\ud835\udc49(\ud835\udc54\ud835\udc56) \u00d1\ud835\udc49(\ud835\udc54\ud835\udc57) \f \f > \ud835\udc4f\ud835\udc50\ud835\udc5c\ud835\udc5a. (6) Second, an inter-subgraph attention mechanism is adopted to learn the mutual influence among subgraphs from their vectorized feature. More specifically, the attention coefficient \ud835\udefc\ud835\udc56\ud835\udc57of subgraph \ud835\udc54\ud835\udc56on \ud835\udc54\ud835\udc57can be calculated by the multi-head attention mechanism as in [48]. 
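For concreteness, the connectivity rule of Eq. (6) can be written in a few lines of Python; the helper below is a hypothetical illustration operating on the node index sets of the selected subgraphs, not the authors' implementation.

def build_sketched_graph(subgraph_nodes, b_com=1):
    # Eq. (6): connect supernodes i and j when the corresponding subgraphs
    # share strictly more than b_com original nodes.
    edges = []
    for i in range(len(subgraph_nodes)):
        for j in range(i + 1, len(subgraph_nodes)):
            if len(set(subgraph_nodes[i]) & set(subgraph_nodes[j])) > b_com:
                edges.append((i, j))
    return edges

# Three selected subgraphs given by their node indices in the original graph.
print(build_sketched_graph([{0, 1, 2, 3}, {2, 3, 4, 5}, {7, 8, 9}], b_com=1))  # [(0, 1)]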
Then the subgraph embeddings can be calculated as: z\u2032 \ud835\udc56= 1 \ud835\udc40 \ud835\udc40 \u2211\ufe01 \ud835\udc5a=1 \u2211\ufe01 \ud835\udc52\ud835\udc56\ud835\udc57\u2208\ud835\udc38\ud835\udc60\ud835\udc58\ud835\udc52 \ud835\udefc\ud835\udc5a \ud835\udc56\ud835\udc57W\ud835\udc5a \ud835\udc56\ud835\udc5b\ud835\udc61\ud835\udc52\ud835\udc5fz\ud835\udc56, (7) where \ud835\udefc\ud835\udc56\ud835\udc57is the attention coefficient, W\ud835\udc56\ud835\udc5b\ud835\udc61\ud835\udc52\ud835\udc5f\u2208R\ud835\udc512\u00d7\ud835\udc511 is a weight matrix, and \ud835\udc40is the number of independent attention. Third, the subgraph embeddings will be further enhanced by a self-supervised mutual information mechanism introduced in Section 3.3. After obtaining the subgraph embeddings, we convert them to label prediction through a softmax function. The probability distribution on class labels of different subgraphs can provide an insight into the impacts of subgraphs on the entire graph. Finally, the graph classification results are voted by subgraphs. Concretely, the classification results of all the subgraphs are ensembled by applying sum operation as the final probability distribution of the graph. The indexed class with the maximum probability is assumed to be the predicted graph label. 3.2 Reinforcement Pooling Module To address the challenge of prior knowledge in top-k sampling, we present a novel reinforcement learning (RL) algorithm to update the pooling ratio \ud835\udc58adaptively, even when inputting subgraphs of varying sizes and structures. Since the pooling ratio \ud835\udc58in top-k sampling does not directly attend the graph classification, it cannot be updated by backpropagation. We use an RL algorithm to find optimal \ud835\udc58\u2208(0, 1] rather than tuning it as a hyper-parameter. We model the updating process of \ud835\udc58as a finite horizon Markov decision process (MDP). Formally, the state, action, transition, reward and termination of the MDP are defined as follows: \fSUGAR: Subgraph Neural Network with Reinforcement Pooling and Self-Supervised Mutual Information Mechanism WWW \u201921, April 19\u201323, 2021, Ljubljana, Slovenia \u2022 State. The state \ud835\udc60\ud835\udc52at epoch \ud835\udc52is represented by the indices of selected subgraphs \ud835\udc56\ud835\udc51\ud835\udc65defined in Eq. (5) with pooling ratio \ud835\udc58: \ud835\udc60\ud835\udc52= \ud835\udc56\ud835\udc51\ud835\udc65\ud835\udc52 (8) \u2022 Action. RL agent updates \ud835\udc58by taking action \ud835\udc4e\ud835\udc52based on reward. We define the action \ud835\udc4eas add or minus a fixed value \u0394\ud835\udc58\u2208[0, 1] from \ud835\udc58. \u2022 Transition. After updating \ud835\udc58, we use top-k sampling defined in Eq. (5) to select a new set of subgraphs in the next epoch. \u2022 Reward. Due to the black-box nature of GNN, it is hard to sense its state and cumulative reward. 
So we define a discrete reward function \ud835\udc5f\ud835\udc52\ud835\udc64\ud835\udc4e\ud835\udc5f\ud835\udc51(\ud835\udc60\ud835\udc52,\ud835\udc4e\ud835\udc52) for each \ud835\udc4e\ud835\udc52at \ud835\udc60\ud835\udc52directly based on the classification results: \ud835\udc5f\ud835\udc52\ud835\udc64\ud835\udc4e\ud835\udc5f\ud835\udc51(\ud835\udc60\ud835\udc52,\ud835\udc4e\ud835\udc52) = \uf8f1 \uf8f4 \uf8f4 \uf8f2 \uf8f4 \uf8f4 \uf8f3 +1, \ud835\udc56\ud835\udc53\ud835\udc4e\ud835\udc50\ud835\udc50\ud835\udc52> \ud835\udc4e\ud835\udc50\ud835\udc50\ud835\udc52\u22121, 0, \ud835\udc56\ud835\udc53\ud835\udc4e\ud835\udc50\ud835\udc50\ud835\udc52= \ud835\udc4e\ud835\udc50\ud835\udc50\ud835\udc52\u22121, \u22121, \ud835\udc56\ud835\udc53\ud835\udc4e\ud835\udc50\ud835\udc50\ud835\udc52< \ud835\udc4e\ud835\udc50\ud835\udc50\ud835\udc52\u22121. (9) where \ud835\udc4e\ud835\udc50\ud835\udc50\ud835\udc52is the classification accuracy at epoch \ud835\udc52. Eq. (9) indicates if the classification accuracy with \ud835\udc4e\ud835\udc52is higher than the previous epoch, the reward for \ud835\udc4e\ud835\udc52is positive, and vice versa. \u2022 Termination. If the change of \ud835\udc58among ten consecutive epochs is no more than \u0394\ud835\udc58, the RL algorithm will stop, and \ud835\udc58will remain fixed during the next training process. This means that RL finds the optimal threshold, which can retain the most striking subgraphs. The terminal condition is formulated as: \ud835\udc45\ud835\udc4e\ud835\udc5b\ud835\udc54\ud835\udc52({\ud835\udc58\ud835\udc52\u221210, \u00b7 \u00b7 \u00b7 ,\ud835\udc58\ud835\udc52}) \u2264\u0394\ud835\udc58. (10) Since this is a discrete optimization problem with a finite horizon, we use Q-learning [52] to learn the MDP. Q-learning is an off-policy reinforcement learning algorithm that seeks to find the best action to take given the current state. It fits the Bellman optimality equation as follows: \ud835\udc44\u2217(\ud835\udc60\ud835\udc52,\ud835\udc4e\ud835\udc52) = \ud835\udc5f\ud835\udc52\ud835\udc64\ud835\udc4e\ud835\udc5f\ud835\udc51(\ud835\udc60\ud835\udc52,\ud835\udc4e\ud835\udc52) + \ud835\udefearg max \ud835\udc4e\u2032 \ud835\udc44\u2217(\ud835\udc60\ud835\udc52+1,\ud835\udc4e\u2032), (11) where \ud835\udefe\u2208[0, 1] is a discount factor of future reward. We adopt a \ud835\udf00-greedy policy with an explore probability \ud835\udf00: \ud835\udf0b(\ud835\udc4e\ud835\udc52|\ud835\udc60\ud835\udc52;\ud835\udc44\u2217) = ( random action, w.p. \ud835\udf00 arg max \ud835\udc4e\ud835\udc52 \ud835\udc44\u2217(\ud835\udc60\ud835\udc52,\ud835\udc4e), otherwise (12) This means that the RL agent explores new states by selecting an action at random with probability \ud835\udf00instead of selecting actions based on the max future reward. The RL agent and graph classification model can be trained jointly in an end-to-end manner, and the complete process of the RL algorithm is shown in Lines 15-18 of Algorithm 1. We have tried other RL algorithms such as multi-armed bandit and DQN, but their performance is not as good as the Q-learning algorithm. The experiment results in Section 4.4 verify the effectiveness of the reinforcement pooling module. 3.3 Self-Supervised Mutual Information Module Since our model relies on extracting striking subgraphs as the representative part of the original graph, we utilize the mutual information (MI) to measure the expressive ability of the obtained subgraph representations. 
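Before detailing the MI module, the reinforcement pooling loop of Section 3.2 can be sketched as follows. This is a simplified, illustrative Python sketch: the state is summarized by the current value of k rather than the full index set idx_e, the temporal-difference update is a standard approximation of Eq. (11), and train_one_epoch is a placeholder callback that trains the classifier for one epoch at pooling ratio k and returns its accuracy.

import random

def reinforcement_pooling(train_one_epoch, k=0.5, delta_k=0.05, epochs=50,
                          gamma=1.0, eps=0.9, lr=0.1):
    # Q-learning over the pooling ratio k; the two actions add or subtract delta_k.
    q = {}                                   # Q[(state, action)]
    actions = [+delta_k, -delta_k]
    prev_acc = train_one_epoch(k)
    history = [k]
    for _ in range(epochs):
        state = round(k, 2)
        # epsilon-greedy exploration, Eq. (12): random action with probability eps.
        if random.random() < eps:
            a = random.choice(actions)
        else:
            a = max(actions, key=lambda act: q.get((state, act), 0.0))
        k_next = min(1.0, max(delta_k, k + a))
        acc = train_one_epoch(k_next)
        reward = (acc > prev_acc) - (acc < prev_acc)          # +1 / 0 / -1, Eq. (9)
        best_next = max(q.get((round(k_next, 2), act), 0.0) for act in actions)
        old = q.get((state, a), 0.0)
        q[(state, a)] = old + lr * (reward + gamma * best_next - old)
        k, prev_acc = k_next, acc
        history.append(k)
        # Termination, Eq. (10): k changed by at most delta_k over the last ten epochs.
        if len(history) > 10 and max(history[-10:]) - min(history[-10:]) <= delta_k:
            break
    return k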
To discriminate the subgraph representations among graphs, we present a novel method that maximizes the MI between local subgraph representations and the global graph representation. All of the derived subgraph representations are constrained to to be mindful of the global structural properties, rather than enforcing the overarching graph representation to contain all properties. To obtain the global graph representation r, we leverage a READOUT function to summarize the obtained subgraph-level embeddings into a fixed length vector: r = READOUT({z\u2032 \ud835\udc56}\ud835\udc5b\u2032 \ud835\udc56=1). (13) The READOUT function can be any permutation-invariant function, such as averaging and graph-level pooling. Specifically, we apply a simple averaging strategy as the READOUT function here. We use the Jensen-Shannon (JS) MI estimator [34] on the local/global pairs to maximize the estimated MI over the given subgraph/graph embeddings. The JS MI estimator has an approximately monotonic relationship with the KL divergence (the traditional definition of mutual information), which is more stable and provides better results [19]. Concretely, a discriminator D : R\ud835\udc512 \u00d7 R\ud835\udc512 \u2192R is introduced, which takes a subgraph/graph embedding pair as input and determines whether they are from the same graph. We apply a bilinear score function as the discriminator: D(z\u2032 \ud835\udc56, r) = \ud835\udf0e(z\u2032\ud835\udc47 \ud835\udc56W\ud835\udc40\ud835\udc3cr), (14) where W\ud835\udc40\ud835\udc3cis a scoring matrix and \ud835\udf0e(\u00b7) is the sigmoid function. The self-supervised MI mechanism is contrastive, as our MI estimator is based on classifying local/global pairs and negativesampled counterparts. Specifically, the negative samples are provided by pairing subgraph representation \u02dc z from an alternative graph \u02dc \ud835\udc3awith r from \ud835\udc3a. As a critical implementation detail of contrastive methods, the negative sampling strategy will govern the specific kinds of structural information to be captured. In our framework, we take another graph in the batch as the alternative graph \u02dc \ud835\udc3ato generate negative samples in a batch-wise fashion. To investigate the impact of the negative sampling strategy, we also devise another MI enhancing method named SUGAR-MICorrupt, which samples negative samples in a corrupted graph (i.e. e \ud835\udc3a(\ud835\udc49, e \ud835\udc4b,\ud835\udc34) = C(\ud835\udc3a(\ud835\udc49,\ud835\udc4b,\ud835\udc34))). Following the setting in [49], the corruption function C(\u00b7) preserves original vertexes \ud835\udc49and adjacency matrix \ud835\udc34, whereas it corrupts features, e \ud835\udc4b, by row-wise shuffling of \ud835\udc4b. We further analyze these two negative sampling strategies in Section 4.5. The self-supervised MI objective can be defined as a standard binary cross-entropy (BCE) loss: L\ud835\udc3a \ud835\udc40\ud835\udc3c= 1 \ud835\udc5b\u2032 + \ud835\udc5b\ud835\udc5b\ud835\udc52\ud835\udc54 ( \ud835\udc5b\u2032 \u2211\ufe01 \ud835\udc54\ud835\udc56\u2208\ud835\udc3a E\ud835\udc5d\ud835\udc5c\ud835\udc60 \u0002 log(D(z\u2032 \ud835\udc56, r)) \u0003 + \ud835\udc5b\ud835\udc5b\ud835\udc52\ud835\udc54 \u2211\ufe01 \ud835\udc54\ud835\udc57\u2208\u02dc \ud835\udc3a E\ud835\udc5b\ud835\udc52\ud835\udc54 h log(1 \u2212D(\u02dc z\u2032 \ud835\udc57, r)) i ), (15) where \ud835\udc5b\ud835\udc5b\ud835\udc52\ud835\udc54denotes the number of negative samples. 
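A compact NumPy sketch of the bilinear discriminator in Eq. (14) and the contrastive objective in Eq. (15) may help; it is illustrative only, with W_MI treated as a given matrix (in the model it is trained jointly with the classifier) and the objective returned in its negative log-likelihood (BCE) form.

import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def mi_bce_loss(Z_pos, Z_neg, r, W_MI, eps=1e-12):
    # Z_pos: (n', d) subgraph embeddings paired with the summary r of the same graph.
    # Z_neg: (n_neg, d) subgraph embeddings drawn from an alternative graph.
    # r:     (d,) global summary, e.g. the mean of Z_pos (Eq. 13).
    pos = sigmoid(Z_pos @ W_MI @ r)          # D(z'_i, r), Eq. (14)
    neg = sigmoid(Z_neg @ W_MI @ r)
    ll = np.log(pos + eps).sum() + np.log(1.0 - neg + eps).sum()
    return -ll / (len(Z_pos) + len(Z_neg))   # minimizing this maximizes the MI estimate

rng = np.random.default_rng(0)
Z_pos, Z_neg = rng.standard_normal((6, 96)), rng.standard_normal((6, 96))
W_MI = 0.01 * rng.standard_normal((96, 96))
r = Z_pos.mean(axis=0)                       # READOUT by averaging, Eq. (13)
print(mi_bce_loss(Z_pos, Z_neg, r, W_MI))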
The BCE loss L\ud835\udc3a \ud835\udc40\ud835\udc3camounts to maximizing the mutual information between \ud835\udc67\u2032 \ud835\udc56 and r based on the Jensen-Shannon divergence between the joint distribution (positive samples) and the product of marginals (negative samples) [34, 49]. The effectiveness of the self-supervised MI mechanism and several insights are discussed in Section 4.5. \fWWW \u201921, April 19\u201323, 2021, Ljubljana, Slovenia Sun et al. Algorithm 1: The overall process of SUGAR Input: Graphs with labels {\ud835\udc3a= (\ud835\udc49,\ud835\udc4b,\ud835\udc34),\ud835\udc66}; Number of subgraphs \ud835\udc5b; Subgraph size \ud835\udc60; Initialized pooling ratio \ud835\udc580; Number of epochs, batches: \ud835\udc38, \ud835\udc35; Output: Graph label \ud835\udc66 // Subgraph sampling 1 Sort all nodes within a graph by their degree in descending order; 2 Extract subgraphs for the first \ud835\udc5bnodes; // Train SUGAR 3 for \ud835\udc52= 1, 2, \u00b7 \u00b7 \u00b7 , \ud835\udc38do 4 for \ud835\udc4f= 1, 2, \u00b7 \u00b7 \u00b7 , \ud835\udc35do 5 H(\ud835\udc54\ud835\udc56), \u2200\ud835\udc54\ud835\udc56\u2208G\ud835\udc4f\u2190Eq. (1); // Subgraph encoding 6 z \u2190Eq. (4); // Intra-subgraph attention 7 \ud835\udc56\ud835\udc51\ud835\udc65\u2190Eq. (5); // Subgraph selection 8 \ud835\udc3a\ud835\udc60\ud835\udc58\ud835\udc52\u2190\ud835\udc3a; // Subgraph sketching 9 z\u2032 \ud835\udc56\u2190Eq. (7); // Inter-subgraph attention // Self-Supervised MI 10 Sample negative samples; 11 r \u2190Eq. (13); 12 L\ud835\udc3a \ud835\udc40\ud835\udc3c\u2190Eqs. (14) and (15); 13 L \u2190Eq. (16); 14 end // RL process 15 if Eq. (10) is False then 16 \ud835\udc5f\ud835\udc52\ud835\udc64\ud835\udc4e\ud835\udc5f\ud835\udc51(\ud835\udc60\ud835\udc52,\ud835\udc4e\ud835\udc52) \u2190Eq. (9); 17 \ud835\udc4e\ud835\udc52\u2190Eq. (12); 18 \ud835\udc58\u2190\ud835\udc4e\ud835\udc52\u00b7 \u0394\ud835\udc58; 19 end 20 end 3.4 Proposed SUGAR Optimization. We combine the purely supervised classification loss L\ud835\udc36\ud835\udc59\ud835\udc4e\ud835\udc60\ud835\udc60\ud835\udc56\ud835\udc53\ud835\udc66and the self-supervised MI loss L\ud835\udc3a \ud835\udc40\ud835\udc3cin Eq. (15), which acts as a regularization term. The graph classification loss function L\ud835\udc36\ud835\udc59\ud835\udc4e\ud835\udc60\ud835\udc60\ud835\udc56\ud835\udc53\ud835\udc66is defined based on cross-entropy. The loss L of SUGAR is defined as follows: L = L\ud835\udc36\ud835\udc59\ud835\udc4e\ud835\udc60\ud835\udc60\ud835\udc56\ud835\udc53\ud835\udc66+ \ud835\udefd \u2211\ufe01 \ud835\udc3a\u2208G L\ud835\udc3a \ud835\udc40\ud835\udc3c+ \ud835\udf06\u2225\u0398\u22252 , (16) where \ud835\udefdcontrols the contribution of the self-supervised MI enhancement, and \ud835\udf06is a coefficient for L2 regularization on \u0398, which is a set of trainable parameters in this framework. In doing so, the model is trained to predict the entire graph properties while keeping rich discriminative intermediate subgraph representations aware of both local and global structural properties. Algorithm description. Since graph data in the real world are most large in scale, we employ the mini-batch technique in the training process. Algorithm 1 outlines the training process of SUGAR. 4 EXPERIMENTS In this section, we describe the experiments conducted to demonstrate the efficacy of SUGAR for graph classification. The experiments aim to answer the following five research questions: \u2022 Q1. 
How does SUGAR perform in graph classification? \u2022 Q2. How do the exact subgraph encoder architecture and subgraph size influence the performance of SUGAR? \u2022 Q3. How does the reinforcement pooling mechanism influence the performance of SUGAR? \u2022 Q4. How does the self-supervised mutual information mechanism influence the performance of SUGAR? \u2022 Q5. Does SUGAR select subgraphs with prominent patterns and provide insightful interpretations? 4.1 Experimental Setups Datasets. We use six bioinformatics datasets namely MUTAG [9], PTC [46], PROTEINS [4], D&D [12], NCI1 [51], and NCI109 [51]. The dataset statistics are summarized in Table 1. Table 1: Statistics of Datasets. Dataset # Graphs # Classes Max. Nodes Avg. Nodes Node Labels MUTAG [9] 188 2 28 17.93 7 PTC [46] 344 2 64 14.29 18 PROTEINS [4] 1113 2 620 39.06 3 D&D [12] 1178 2 5748 284.32 82 NCI1 [51] 4110 2 111 29.87 37 NCI109 [51] 4127 2 111 29.6 38 Baselines. We consider a number of baselines, including graph kernel based methods, graph neural network based methods, and graph pooling methods to demonstrate the effectiveness and robustness of SUGAR. Graph kernel based baselines include WeisfeilerLehman Subtree Kernel (WL) [40], Graphlet kernel (GK) [41], and Deep Graph Kernels (DGK) [57]. Graph neural network based baselines include PATCHY-SAN [32], Dynamic Edge CNN (ECC) [42], GIN [55], Graph Capsule CNN (GCAPS-CNN) [50], CapsGNN [54], Anonymous Walk Embeddings (AWE) [20], Sequence-to-sequence Neighbors-to-node Previous Predicted (S2S-N2N-PP) [45], Network Structural Convolution (NEST) [58], and MA-GCNN [35]. Graph pooling baslines include SortPool [63], DiffPool [61], gPool [15], EigenPooling [29], and SAGPool [23]. Parameter settings. The common parameters for training the models are set as \ud835\udc40\ud835\udc5c\ud835\udc5a\ud835\udc52\ud835\udc5b\ud835\udc61\ud835\udc62\ud835\udc5a= 0.9, \ud835\udc37\ud835\udc5f\ud835\udc5c\ud835\udc5d\ud835\udc5c\ud835\udc62\ud835\udc61= 0.5, and L2 norm regularization weight decay = 0.01. Node features are one-hot vectors of node categories. We adopt GCN [21] with 2 layers and 16 hidden units as our subgraph encoder. Subgraph embedding dimension \ud835\udc51\u2032 is set to 96. In the reinforcement pooling module, we set \ud835\udefe= 1 in (11) and \ud835\udf00= 0.9 in (12). For each dataset, the parameters \ud835\udc5b,\ud835\udc60,\ud835\udc5a, \u0394\ud835\udc58are set based on the following principles: (1) Subgraph number \ud835\udc5band subgraph size \ud835\udc60are set based on the average size of all graphs; (2) \ud835\udc5b\ud835\udc5b\ud835\udc52\ud835\udc54is set to the same value as \ud835\udc5b\u2032; (3) \u0394\ud835\udc58is set to 1 \ud835\udc5b. 4.2 Overall Evaluation (Q1) In this subsection, we evaluate SUGAR for graph classification on the aforementioned six datasets. We performed 10-fold crossvalidation on each of the datasets. The accuracies, standard deviations, and ranks are reported in Table 2 where the best results are \fSUGAR: Subgraph Neural Network with Reinforcement Pooling and Self-Supervised Mutual Information Mechanism WWW \u201921, April 19\u201323, 2021, Ljubljana, Slovenia Table 2: Summary of experimental results: \u201caverage accuracy\u00b1standard deviation (rank)\u201d. Method Dataset Avg. 
Rank MUTAG PTC PROTEINS D&D NCI1 NCI109 WL [40] 82.05\u00b10.36 (13) 79.78\u00b10.36 (5) 82.19\u00b1 0.18 (6) 82.46\u00b10.24 (3) 6.75 GK [41] 83.50\u00b10.60 (12) 59.65\u00b10.31 (9) 74.62\u00b10.12 (13) 11.33 DGK [57] 87.44\u00b12.72 (9) 60.08\u00b12.55 (8) 75.68\u00b10.54 (12) 80.31\u00b10.46 (9) 80.32\u00b10.33 (7) 9.00 PATCHY-SAN [32] 92.63\u00b14.21 (3) 62.29\u00b15.68 (7) 75.89\u00b12.76 (11) 77.12\u00b12.41 (10) 78.59\u00b11.89 (10) 8.20 ECC [42] 89.44 (6) 73.65 (14) 83.80 (2) 81.87 (4) 6.50 GIN [55] 89.40\u00b15.60 (7) 64.60\u00b17.00 (5) 76.20\u00b12.80 (10) 82.70\u00b11.70 (5) 6.75 GCAPS-CNN [50] 66.01\u00b15.91 (4) 76.40\u00b14.17 (7) 77.62\u00b14.99 (9) 82.72\u00b12.38 (4) 81.12\u00b11.28 (6) 6.00 CapsGNN [54] 86.67\u00b16.88 (10) 76.28\u00b13.63 (8) 75.38\u00b14.17 (12) 78.35\u00b11.55 (11) 10.25 AWE [20] 87.87\u00b19.76 (8) 71.51\u00b14.02 (15) 11.50 S2S-N2N-PP [45] 89.86\u00b11.10 (5) 64.54\u00b11.10 (6) 76.61\u00b10.50 (4) 83.72\u00b10.40 (3) 83.64\u00b10.30 (2) 4.00 NEST [58] 91.85\u00b11.57 (4) 67.42\u00b11.83 (3) 76.54\u00b10.26 (6) 78.11\u00b10.36 (8) 81.59\u00b10.46 (8) 81.72\u00b10.41 (5) 5.67 MA-GCNN [35] 93.89\u00b15.24 (2) 71.76\u00b16.33 (2) 79.35\u00b11.74 (2) 81.48\u00b11.03 (3) 81.77\u00b12.36 (7) 3.20 SortPool [63] 85.83\u00b11.66 (11) 58.59\u00b12.47 (10) 75.54\u00b10.94 (13) 79.37\u00b10.94 (6) 74.44\u00b10.47 (13) 10.60 DiffPool [61] 76.25 (9) 80.64 (4) 6.50 gPool [15] 77.68 (3) 82.43 (2) 2.50 EigenPool [29] 76.60 (5) 78.60 (7) 77.00 (12) 74.90 (8) 8.00 SAGPool [23] 71.86\u00b10.97 (14) 76.45\u00b10.97 (11) 67.45\u00b11.11 (14) 74.06\u00b10.78 (9) 12.00 SUGAR (Ours) 96.74\u00b14.55(1) 77.53\u00b12.82(1) 81.34\u00b10.93(1) 84.03\u00b11.33(1) 84.39\u00b11.63(1) 84.82\u00b10.81(1) 1.00 MUTAG PTC PROTEINS NCI1 Dataset 70 75 80 85 90 95 100 Accuracy (%) GCN GAT GIN GraphSAGE Figure 2: SUGAR with different encoder architecture. shown in bold. The reported results of the baseline methods come from the initial publications (\u201c\u2013\u201d means not available). As shown in Table 2, SUGAR consistently outperforms all baselines on all datasets. In particular, SUGAR achieves an average accuracy of 96.74% on the MUTAG dataset, which is a 3.04% improvement over the second-best ranked method MA-GCNN [35]. Compared to node selection pooling baselines (e.g., gPool [15], SAGPool [23]), SUGAR achieves more gains consistently, supporting the intuition behind our subgraph-level denoising approach. Compared to the recent hierarchical method NEST [58] and motif-based method MA-GCNN [35], our method achieves 14.99% and 8.04% improvements in terms of average accuracy on the PTC dataset, respectively. This may be because that both of NEST [58] and MAGCNN [35] are limited in their ability to enumerated simple motifs, while our method can capture more complex structural information by randomly sampling rather than by pre-defined rules. Overall, the proposed SUGAR shows very promising results against recently developed methods. 3 4 5 6 7 Subgraph size s 60 70 80 90 100 Accuracy (%) 61.11 84.14 94.44 96.74 95.1 58.94 70.7 76.58 73.64 77.76 MUTAG PTC Figure 3: SUGAR with different subgraph size \ud835\udc60. 4.3 Subgraph Encoder and Size Analysis (Q2) In this subsection, we analyze the impacts of subgraph encoder architecture and subgraph size. As discussed in Section 3.1, any GNN can be used as the subgraph encoder. 
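Since any GNN can serve as the subgraph encoder, the following is a minimal sketch of the default configuration reported in the setup above (a 2-layer GCN with 16 hidden units and 96-dimensional subgraph embeddings). It assumes PyTorch Geometric as a dependency and replaces the paper's intra-subgraph attention with simple mean pooling for brevity.

```python
import torch
import torch.nn.functional as F
from torch_geometric.nn import GCNConv, global_mean_pool

class SubgraphEncoder(torch.nn.Module):
    """2-layer GCN over a sampled subgraph, pooled to a single embedding."""
    def __init__(self, in_dim, hidden_dim=16, out_dim=96):
        super().__init__()
        self.conv1 = GCNConv(in_dim, hidden_dim)
        self.conv2 = GCNConv(hidden_dim, out_dim)

    def forward(self, x, edge_index, batch):
        h = F.relu(self.conv1(x, edge_index))
        h = self.conv2(h, edge_index)
        return global_mean_pool(h, batch)   # one embedding per subgraph
```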
In addition to using GCN [21] in the default experiment setting, we also perform experiments with three popular GNN architectures: GAT [48], GraphSAGE[17] (with mean aggregation), and GIN [55]. The results are summarized in Figure 2. We can observe that the performance difference resulting from the different GNNs are marginal. This may be because all the aforementioned GNNs are expressive enough to capture the subgraph properties. This indicates the proposed SUGAR is robust to the exact encoder architecture. Figure 3 shows the performance of SUGAR with different subgraph size \ud835\udc60from 3 to 7 on MUTAG and PTC. Although our model does not give satisfactory results with subgraphs of 3 or 4 nodes, it is found that subgraphs of a larger size obviously help to improve the performance. We can also observe that the subgraph size does not significantly improve the performance of SUGAR when it is larger than 5. This indicates that SUGAR can achieve competitive \fWWW \u201921, April 19\u201323, 2021, Ljubljana, Slovenia Sun et al. 0 20 40 60 80 100 Epoch 50 55 60 65 70 75 80 Acc. of SUGAR_FixedK mean acc.=70.41% 0 20 40 60 80 100 Epoch 50 55 60 65 70 75 80 Acc. of SUGAR mean acc.=76.20% (a) Training process of SUGAR-FixedK and SUGAR on PTC. 0 25 50 75 100 125 150 Epoch 0.4 0.5 0.6 0.7 0.8 k (b) Updating process of \ud835\udc58on PTC. 0 25 50 75 100 125 150 Epoch 0.0 0.1 0.2 0.3 0.4 0.5 0.6 0.7 Mean reward (c) Learning curve of RL on PTC. MUTAG PTC PROTEINS NCI109 Dataset 50 60 70 80 90 100 Accuracy (%) 91.11 70.38 74.07 80.57 94.12 71.74 76.15 81.92 95.91 77.53 78.11 84.82 SUGAR-NoMI SUGAR-MICorrupt SUGAR (d) SUGAR with different negative sampling strategies. 1 2 3 4 5 Negative sampling ratio nneg:n \u2032 (neg:pos) 65 70 75 80 85 90 95 100 Accuracy (%) MUTAG PTC NCI1 (e) Parameter sensitivity of negative sampling ratio. 0.0 0.2 0.4 0.6 0.8 1.0 1.2 1.4 1.6 1.8 2.0 MI loss coefficient 65 70 75 80 85 90 95 100 Accuracy (%) 90.13 92.17 90.01 94.44 96.77 94.44 96.77 95.1 92.19 92.19 94.44 71.09 72.33 73.11 76.29 75.24 77.78 76.29 74.88 75.24 76.29 76.01 81.54 85.78 85.96 85.03 85.31 85.31 84.08 85.31 81.54 84.08 81.54 MUTAG PTC NCI1 (f) Parameter sensitivity of MI loss coefficient \ud835\udefd. Figure 4: The training process and testing performance of SUGAR. performance when sampled subgraphs can cover most of the basic functional building blocks. 4.4 RL Process Analysis (Q3) To verify the effectiveness of the reinforcement pooling mechanism, we plot the training process of SUGAR (lower) and the variant SUGAR-FixedK (upper) on PTC in Figure 4(a). SUGAR-FixedK removes the reinforcement pooling mechanism and uses a fixed pooling ratio \ud835\udc58= 1 (i.e., without subgraph selection). The shadowed area is enclosed by the min and max value of five cross-validation training runs. The solid line in the middle is the mean value of each epoch, and the dashed line is the mean value of the last ten epochs. The mean accuracy of SUGAR with an adaptive pooling ratio achieves a 5.79% improvement over SUGAR-FixedK, supporting the intuition behind our subgraph denoising approach. Since the RL algorithm and the GNN are trained jointly, the updating and convergence process is indeed important. In Figure 4(b), we visualize the updating process of \ud835\udc58in PTC with the initial value \ud835\udc580 = 0.5. Since other modules in the overall framework update with the RL module simultaneously, the RL environment is not very steady at the beginning. 
As a result, \ud835\udc58also does not update steadily during the first 70 epochs. When the framework gradually converges, \ud835\udc58bumps for several rounds and meets the terminal condition defined in Eq. (10). Figure 4(c) shows the learning curve in terms of mean reward. We can observe that the RL algorithm converges to the mean reward 0.545 with a stable learning curve. This suggests that the proposed SUGAR framework can find the most striking subgraphs adaptively. 4.5 Self-Supervised MI Analysis(Q4) In this subsection, we analyze the impact of the negative sampled strategy, the negative sampling ratio, and the sensitivity of the self-supervised MI loss coefficient. To evaluate the effectiveness of the self-supervised MI mechanism, we compare SUGAR to its variant SUGAR-NoMI, which removes the self-supervised MI mechanism. To further analyze the impact of the negative sampled strategy, we also compare SUGAR to another variant SUGAR-MICorrupt, which constructs the alternative graph e \ud835\udc3aby corruption as detailed in Section 3.3. The results are shown in Fig. 4(d). We can observe that models with the self-supervised MI mechanism (i.e., SUGAR and SUGARMICorrupt) achieve better performance than SUGAR-NoMI. In addition, SUGAR (i.e., sampling from another graph instance) consistently outperforms than SUGAR-MICorrupt (i.e., sampling from a corrupted graph). A possible reason is that e \ud835\udc3aloses most of the important structure information during corruption and can only provide weak supervision. \fSUGAR: Subgraph Neural Network with Reinforcement Pooling and Self-Supervised Mutual Information Mechanism WWW \u201921, April 19\u201323, 2021, Ljubljana, Slovenia non-mutagenic mutagenic graph embedding vector striking subgraphs MUTAG PTC graph embedding vector striking subgraphs non-active active id: 5 id: 174 id: 85 id: 26 Carbon Carbon Nitrogen Nitrogen Chlorine Chlorine Oxygen Oxygen Carbon Carbon Nitrogen Nitrogen Oxygen Oxygen Bromine Bromine Sulfur Sulfur Sodium Sodium Figure 5: Result visualization of MUTAG (left) and PTC (right) dataset. We also analyze the sensitivity of two hyper-parameters in the self-supervised MI module, namely negative sampling ratio (\ud835\udc5b\ud835\udc5b\ud835\udc52\ud835\udc54: \ud835\udc5b\u2032) and the coefficient of self-supervised MI loss \ud835\udefd. The common belief is that contrastive methods require a large number of negative samples to be competitive. Figure 4(e) shows the performance of SUGAR under different negative sampling ratios. The larger negative sampling ratio does not seem to contribute significantly to boosting the performance of SUGAR. This may be because we draw negative samples for every subgraph within the graph. Though the negative sampling ratio is small, it has already provided sufficient self-supervision. As shown in Figure 4(f), when the self-supervised MI loss has more than 0.6 weights compared to graph classification loss, SUGAR achieves better performance. This illustrates that our framework quite benefits from self-supervised training. This indicates that MI enhancement gives informative self-supervision to SUGAR and negative sampling strategy designs should be considered carefully. 4.6 Visualization (Q5) In this subsection, we study the power of SUGAR to discover subgraphs with prominent patterns and provide insightful interpretations into the formation of different graphs. 
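The two negative-sampling strategies compared above differ only in where the negative subgraph embeddings come from. A minimal sketch (our naming; node features are assumed to live in a tensor `x`):

```python
import torch

def corrupted_graph_features(x):
    """SUGAR-MICorrupt style negatives: keep V and A, row-shuffle the features X."""
    perm = torch.randperm(x.size(0))
    return x[perm]

def batchwise_negatives(subgraph_embeddings, graph_ids, this_graph):
    """SUGAR style negatives: reuse subgraph embeddings from another graph in the batch."""
    mask = graph_ids != this_graph
    return subgraph_embeddings[mask]
```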
Figure 5 illustrates some of the results we obtained on the MUTAG (left) and PTC (right) dataset, where id denotes the graph index in the corresponding dataset. Each row shows our explanations for a specific class in each dataset. Column 1 shows a graph instance after subgraph selection, where the color indicates the atom type of the node (nodes in grey are omitted during subgraph selection) and size indicates the importance of the node in discriminating the two classes. Column 2 shows the \ud835\udc5b\u00d7 32 neuron outputs in descending order of their projection value in the reinforcement pooling module. The first \ud835\udc5b\u2032 rows in the neuron output matrix with the largest projection value are selected as striking subgraphs, roughly indicating their activeness. Column 3 shows striking subgraphs as functional blocks found by SUGAR. MUTAG consists of 188 molecule graphs labeled according to whether the compound has a mutagenic effect on a bacterium. We observe that the main determinants in the mutagenic class is the nitro group \ud835\udc41\ud835\udc422 connected to a set of carbons. This is the same as the results in [9] which show that the electron-attracting elements conjugated with nitro groups are critical to identifying mutagenic molecules. For the non-mutagenic class, our model takes chlorine connected to carbons as a striking subgraph, which is frequently seen in non-mutagenic molecules. PTC consists of 344 organic molecules labeled according to whether the compound has carcinogenicity on male rats. The main determinants found by our model are the co-occurrence of carbon rings, nitrogen, sulfur, and oxygen. For instance, one of the striking subgraphs in active compounds is a nitrogen connected to a set of aromatic carbon bonds. This substructure is frequently seen in aromatic amines, nitroaromatics, and azo compounds, which are well-known classes of carcinogens [46]. In addition, our model takes bromine connected to some carbons as a striking subgraph, which is in general agreement with accepted toxicological knowledge. For the non-active class, a striking subgraph found by our model is some oxygen with sulphur bond, which is the same as the knowledge of Leuven2 [9]. This suggests that the proposed SUGAR can find striking subgraphs with discriminative patterns and has great promise to provide sufficient interpretability. 5" + }, + { + "url": "http://arxiv.org/abs/1812.10371v3", + "title": "Distributional Robust Kelly Gambling: Optimal Strategy under Uncertainty in the Long-Run", + "abstract": "In classic Kelly gambling, bets are chosen to maximize the expected log\ngrowth of wealth, under a known probability distribution. Breiman provides\nrigorous mathematical proofs that Kelly strategy maximizes the rate of asset\ngrowth (asymptotically maximal magnitude property), which is thought of as the\nprincipal justification for selecting expected logarithmic utility as the guide\nto portfolio selection. Despite very nice theoretical properties, the classic\nKelly strategy is rarely used in practical portfolio allocation directly due to\npractically unavoidable uncertainty. In this paper we consider the\ndistributional robust version of the Kelly gambling problem, in which the\nprobability distribution is not known, but lies in a given set of possible\ndistributions. 
The bet is chosen to maximize the worst-case (smallest) expected\nlog growth among the distributions in the given set.\n Computationally, this distributional robust Kelly gambling problem is convex,\nbut in general need not be tractable. We show that it can be tractably solved\nin a number of useful cases when there is a finite number of outcomes with\nstandard tools from disciplined convex programming.\n Theoretically, in sequential decision making with varying distribution within\na given uncertainty set, we prove that distributional robust Kelly strategy\nasymptotically maximizes the worst-case rate of asset growth, and dominants any\nother essentially different strategy by magnitude. Our results extends\nBreiman's theoretical result and justifies that the distributional robust Kelly\nstrategy is the optimal strategy in the long-run for practical betting with\nuncertainty.", + "authors": "Qingyun Sun, Stephen Boyd", + "published": "2018-12-20", + "updated": "2021-06-10", + "primary_cat": "math.OC", + "cats": [ + "math.OC" + ], + "main_content": "Introduction The Classic Kelly strategy maximizes the expected logarithmic utility. It was proposed by John Kelly in a 1956 classic paper [3]. The earliest discussion of logarithmic utility dates back to 1730 in connection to Daniel Bernoulli\u2019s discussion [4] of the St. Petersburg game. In 1960 and 1961, Breiman [1, 2] proved that logarithmic utility was clearly distinguished by its optimality properties from other (essentially) different utilities as a guide to portfolio selection. The most important property is the asymptotically maximal magnitude property, which states that the Kelly strategy asymptotically maximizes the growth rate of assets and dominates the growth rate of any essentially different strategies by magnitude. As Thorp commented in [5], \"This is to our mind the principal justi\ufb01cation for selecting E log X as the guide to portfolio selection.\" Although the classic Kelly strategy has very nice theoretical properties, it is rarely used in practical portfolio allocation directly. The major hurdle is that in practical portfolio allocation the estimated nominal distribution of return is almost never accurate and uncertainty is unavoidable. Such uncertainty arises from multiple sources. First, the empirical nominal distribution will invariably differ from the unknown true distribution that generated the training samples, so uncertainty comes from Preprint. Under review. \fthe gap between empirical in-sample (training) distribution and out-of-sample distribution, and implementing the optimal decisions using empirical nominal distribution often leads to disappointment in out-of-sample tests. In decision analysis this phenomenon is sometimes termed the optimizer\u2019s curse. Second, uncertainty can come from a distribution shift from the model training environment and the model deployment environment, as a special but common case, in investment the major dif\ufb01culty is the return time series is typically non-stationary. Third, in investment, the estimation errors from even the mean estimation and covariance estimation could be quite signi\ufb01cant, and the further measurements of higher order distributional information beyond the \ufb01rst two moments of return might be uninformative due to noise in measurements. 
In this paper, we identify the uncertainty problem of the classic Kelly strategy, and propose the distributional robust version of the Kelly strategy in which the probability distribution is not known, but lies in a given set of possible distributions. The distributional robust version of Kelly strategy choose bets to maximize the worst-case (smallest) expected log growth among the distributions in the given set. Computationally, this distributional robust Kelly gambling problem is convex, but in general need not be tractable. In this work, we show that for a large class of uncertainty sets, the distributional robust Kelly problems can be transformed to tractable form that follow disciplined convex programming (DCP) rules. DCP tractable form can be solved via domain speci\ufb01c languages like CVXPY. Theoretically, we extend Breiman\u2019s asymptotically maximal magnitude result and proved that distributional robust version Kelly\u2019s strategy asymptotically maximizes the worst-case rate of asset growth when the sequence of probabilities vary in the uncertainty set. Numerically, we also tested the algorithm in a horse race gambling numerical example, and we indeed observe signi\ufb01cant improvement of worst-case wealth growth. Gambling. We consider a setting where a gambler repeatedly allocates a fraction of her wealth (assumed positive) across n different bets in multiple rounds. We assume there are n bets available to the gambler, who can bet any nonnegative amount on each of the bets. We let b \u00c0 I Rn denote the bet allocation (in fraction of wealth), so b g 0 and 1T b = 1, where 1 is the vector with all entries one. Letting Sn denote the probability simplex in I Rn, we have b \u00c0 Sn. With bet allocation b, the gambler is betting W bi (in dollars) on outcome i, where W > 0 is the gambler\u2019s wealth (in dollars). We let r \u00c0 I Rn + denote the random returns on the n bets, with ri g 0 the amount won by the gambler for each dollar she puts on bet i. With allocation b, the total she wins is rT bW , which means her wealth increases by the (random) factor rT b. We assume that the returns r in different rounds are IID. We will assume that rn = 1 almost surely, so bn corresponds to the fraction of wealth the gambler holds in cash; the allocation b = en := (0, \u2026 , 0, 1) corresponds to not betting at all. Since her wealth is multiplied in each round by the IID random factor rT b, the log of the wealth over time is therefore a random walk, with increment distribution given by the random variable log(rT b). Finite outcome case. We consider here the case where one of K events occurs, i.e., r is supported on only K points. We let r1, \u2026 , rK denote the return vectors, and \u21e1= (\u21e11, \u2026 , \u21e1K) \u00c0 SK the corresponding probabilities. We collect the K payoff vectors into a matrix R \u00c0 I Rn\u00f9K, with columns r1, \u2026 , rK. The vector RT b \u00c0 I RK gives the wealth growth factor in the K possible outcomes. The mean log growth rate is G\u21e1(b) = E\u21e1log(rT b) = \u21e1T log(RT b) = K \u2026 k=1 \u21e1k log(rT k b), where the log in the middle term is applied to the vector elementwise. This is the mean drift in the log wealth random walk. Kelly gambling. In a 1956 classic paper [3], John Kelly proposed to choose the allocation vector b so as to maximize the mean log growth rate G\u21e1(b), subject to b g 0, 1T b = 1. This method was called the Kelly criterion; since then, much work has been done on this topic [6, 5, 7, 8, 9, 10]. 
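To make the log-growth quantity concrete, here is a small NumPy sketch (with toy data, not from the paper) that draws IID outcomes from π and checks that the log of wealth behaves as a random walk whose per-round drift is G_π(b) = π^T log(R^T b).

```python
import numpy as np

rng = np.random.default_rng(0)
K, n, T = 5, 3, 10000                      # outcomes, bets, rounds (toy sizes)
R = rng.uniform(0.0, 3.0, size=(n, K))
R[-1, :] = 1.0                             # last bet is cash: r_n = 1 in every outcome
pi = rng.dirichlet(np.ones(K))             # outcome probabilities
b = np.full(n, 1.0 / n)                    # a fixed bet allocation, b >= 0, 1^T b = 1

drift = pi @ np.log(R.T @ b)               # G_pi(b), expected log growth per round

outcomes = rng.choice(K, size=T, p=pi)     # simulate T rounds of IID outcomes
log_wealth = np.cumsum(np.log(R.T @ b)[outcomes])
print(drift, log_wealth[-1] / T)           # empirical drift approaches G_pi(b)
```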
The mean log growth rate G\u21e1(b) is a concave function of b, so choosing b is a convex optimization problem [11, 12]. It can be solved analytically in simple cases, such as when there are K = 2 possible outcomes. It is easily solved in other cases using standard methods and algorithms, and readily expressed in various 2 \fdomain speci\ufb01c languages (DSLs) for convex optimization such as CVX [13], CVXPY [14, 15], Convex.jl [16], or CVXR [17]. We can add additional convex constraints on b, which we denote as b \u00c0 B, with B \u201d SK a convex set. These additional constraints preserve convexity, and therefore tractability, of the optimization problem. While Kelly did not consider additional constraints, or indeed the use of a numerical optimizer to \ufb01nd the optimal bet allocation vector, we still refer to the problem of maximizing G\u21e1(b) subject to b \u00c0 B as the Kelly (gambling) problem (KP). There have been many papers exploring and extending the Kelly framework; for example, a drawdown risk constraint, that preserves convexity (hence, tractability) is described in [18]. The Bayesian version of Kelly optimal betting is described in [19]. In [20], Kelly gambling is generalized to maximize the proportion of wealth relative to the total wealth in the population. Distributional robust Kelly gambling. In this paper we study a distributional robust version of Kelly gambling, in which the probability distribution \u21e1is not known. Rather, it is known that \u21e1\u00c0 \u21e7, a set of possible distributions. We de\ufb01ne the worst-case log growth rate (under \u21e7) as G\u21e7(b) = inf \u21e1\u00c0\u21e7G\u21e1(b). This is evidently a concave function of b, since it is an in\ufb01mum of a family of concave functions of b, i.e., G\u21e1(b) for \u21e1\u00c0 \u21e7. The distributional robust Kelly problem (DRKP) is to choose b \u00c0 B to maximize G\u21e7(b), maximize inf\u21e1\u00c0\u21e7E\u21e1log(rT b) subject to b \u00c0 B. This is in principle a convex optimization problem, speci\ufb01cally a distributional robust problem; but such problems in general need not be tractable, as discussed in [21, 22, 23] The purpose of this paper is to show how the DRKP can be tractably solved for some useful probability sets \u21e7via disciplined convex programming. In this paper we call an optimization problem \"DCP tractable\" in a strict and speci\ufb01c sense when the optimization problem is a disciplined convex problem, so that it can be solved via domain speci\ufb01c languages like CVXPY. Related work on uncertainty aversion. In decision theory and economics, there are two important concepts, risk and uncertainty. Risk is about the situation when a probability can be assigned to each possible outcome of a situation. Uncertainty is about the situation when the probabilities of outcomes are unknown. Uncertainty aversion, also called ambiguity aversion, is a preference for known risks over unknown risks. Uncertainty aversion provides a behavioral foundation for maximizing the utility under the worst of a set of probability measures; see [24, 25, 26, 27] for more detailed discussion. The Kelly problem addresses risk; the distributional robust Kelly problem is a natural extension that considers uncertainty aversion. Related work on distributional robust optimization. Distributional robust optimization is a well studied topic. 
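As noted above, the nominal Kelly problem is a small convex program that DSLs such as CVXPY handle directly. A minimal sketch, using the probability simplex as the bet set B and assuming R and pi_nom are given NumPy arrays:

```python
import cvxpy as cp

def kelly_bet(R, pi_nom):
    """Nominal Kelly problem: maximize pi^T log(R^T b) over the simplex."""
    n, K = R.shape
    b = cp.Variable(n)
    growth = pi_nom @ cp.log(R.T @ b)
    prob = cp.Problem(cp.Maximize(growth), [b >= 0, cp.sum(b) == 1])
    prob.solve()
    return b.value, prob.value
```

Additional convex constraints on b (a general convex set B) can simply be appended to the constraint list without affecting tractability.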
Previous work on distribution robust optimization studied \ufb01nite-dimensional parametrization for probability sets including moments, support or directional deviations constraints in [28, 29, 30, 31, 32, 33]. Beyond \ufb01nite-dimensional parametrization of the probability set, researchers have also studied non-parametric distances for probability measure, like f-divergences (e.g., KullbackLeibler divergences) [34, 35, 36, 37, 38] and Wasserstein distances [39, 40, 41, 42]. Contribution \u2022 We propose distributional robust Kelly strategy to account for uncertainty in practical investment, where we maximize the worst-case (smallest) expected log growth among the distributions in the given set. Theoretically, we proved that distributional robust version Kelly strategy asymptotically maximizes the worst-case rate of asset growth when the sequence of probabilities vary in the uncertainty set and dominant any other essentially different strategy by magnitude. \u2022 For a large class of uncertainty sets including polyhedral sets, ellipsodial sets, f-divergence ball, Wasserstein ball and uncertainty set with estimated mean and covariance, we concretely derived the disciplined convex programming (DCP) forms of distributional robust Kelly problems and provided concrete software implementations using CVXPY, with a simple horse gamble example to numerically verify that distributional robust Kelly strategies indeed lead to a better worst-case wealth growth, and also lead to more diverse bet vector in this example. 3 \f2 DCP tractable forms for distributional robust Kelly strategy In this section we show how to formulate DRKP as a DCP tractable convex optimization problem for a variety of distribution sets. The key is to derive a DCP tractable description of the worst-case log growth G\u21e7(b). We use duality to express G\u21e7(b) as the value of a convex maximization problem, which allows us to solve DRKP as one convex problem. Polyhedron de\ufb01ned by linear inequalities and equalities. Theorem 1. For polyhedron uncertainty set given by a \ufb01nite set of linear inequalities and equalities, \u21e7= {\u21e1\u00c0 SK \u203a A0\u21e1= d0, A1\u21e1f d1}, where A0 \u00c0 I Rm0\u00f9K, b0 \u00c0 I Rm0, A1 \u00c0 I Rm1\u00f9K, b1 \u00c0 I Rm1, the distributional robust Kelly problem is maximize min(log(RT b) + AT 0 \ud707+ AT 1 \ud706) * dT 0 \ud707* dT 1 \ud706 subject to b \u00c0 B, \ud706g 0, with variables b, \ud707, \ud706. The problem follows the disciplined convex programming (DCP) rules. Box uncertainty set. As a commonly used special case of polyhedron, we consider box. Theorem 2. for box uncertainty set \u21e7= {\u21e1\u00c0 SK \u203a \uf8ff\u21e1* \u21e1nom\uf8fff \u21e2}, , where \u21e1nom \u00c0 SK is the nominal distribution, and \u21e2\u00c0 I Rn + is a vector of radii, (The inequality \uf8ff\u21e1* \u21e1nom\uf8fff \u21e2is interpreted elementwise), the distributional robust Kelly problem is maximize min(log(RT b) + \ud706) * (\u21e1nom)T \ud706* \u21e2T \uf8ff\ud706\uf8ff subject to b \u00c0 B, with variables b, \ud706. The problem follows the disciplined convex programming (DCP) rules. Ellipsoidal uncertainty set Here we consider the case when \u21e7is the inverse image of a p-norm ball, with p g 1, under an af\ufb01ne mapping. As usual we de\ufb01ne q by 1_p + 1_q = 1. This includes an ellipsoid (and indeed the box set described above) as a special case. Theorem 3. 
For ellipsoidal uncertainty set \u21e7= {\u21e1\u00c0 SK \u203a \u00d2W *1(\u21e1* \u21e1nom)\u00d2p f 1}, where W is a nonsingular matrix, the distributional robust Kelly problem is maximize (\u21e1nom)T (u) * \u00d2W T (u * \ud7071)\u00d2q subject to u f log(RT b), b \u00c0 B, with variables b, u, \ud707. The problem follows the disciplined convex programming (DCP) rules. This DRKP problem follows DCP rule because \u21e1nom,T u is a linear function of u, *\u00d2W T (u * \ud7071)\u00d2q is a concave function of u and \ud707for q g 1, and log(RT b) is a concave function of b, hence u f log(RT b) is a concave constraint. Therefore, this DRKP problem is maximizing a concave objective concave constraint problem that follows DCP rule. The proof of the theorem is based on the Lagrangian duality and H\u00f6lder equality for the p-norm: sup\u00d2z\u00d2pf1 zT W T x = \u00d2W T x\u00d2q, Divergence based distribution set Let \u21e11, \u21e12 \u00c0 SK be two distributions. For a convex function f : I R+ \u00f4 I R with f(1) = 0, the f-divergence of \u21e11 from \u21e12 is de\ufb01ned as Df(\u21e11\u00d2\u21e12) = \u21e1T 2 f(\u21e11_\u21e12), where the ratio is meant elementwise. Recall that the Fenchel conjugate of f is f <(s) = suptg0(ts * f(t)). 4 \fTheorem 4. For f-divergence ball uncertainty set \u21e7= {\u21e1\u00c0 SK \u203a Df(\u21e1\u00d2\u21e1nom) f \u270f}, where \u270f> 0 is a given value, the distributional robust Kelly problem is maximize *(\u21e1nom)T w * \u270f\ud706* \ud6fe subject to w g \ud706f <( z \ud706) z g * log(RT b) * \ud6fe \ud706g 0, b \u00c0 B, with variables b, \ud6fe, \ud706, w, z. The problem follows the disciplined convex programming (DCP) rules. Here \ud706f <(z \ud706) = (\ud706f)<(z) = sup tg0 (tz * \ud706f(t)), is the perspective function of the non-decreasing convex function f <(z), so it is also a convex function that is non-decreasing in z. Additionally, * log(RT b) * \ud6feis a convex function of b and \ud6fe; then from the DCP composition rule, we know this form of DRKP is convex. Concrete examples of f* divergence function and their Fenchel conjugate funuctions are provided in supplementary material for the convenience of readers. Wasserstein distance uncertainty set with \ufb01nite support When \u21e1and \u21e1nom both have \ufb01nite supports, the Wasserstein distance Dc(\u21e1, \u21e1nom) with cost c \u00c0 I RK\u00f9Knom + is de\ufb01ned as the optimal value of the problem minimize \u2265 i,j Qijcij subject to Q1 = \u21e1, QT 1 = \u21e1nom, Q g 0, with variable Q. The Wasserstein distance has several other names, including Monge-Kantorovich, earth-mover, or optimal transport distance [39, 40, 41, 42]. The Wasserstein uncertainty set would be better to prepare for black swans events comparing to f-divergence uncertainty set, as it does not require \u21e1to be absolutely continuous with respect to the nominal distribution. Theorem 5. For Wasserstein distance ball uncertainty set \u21e7= {\u21e1\u00c0 SK \u203a Dc(\u21e1, \u21e1nom) f s}, with s > 0. the distributional robust Kelly problem is maximize \u21e0\u2265 j \u21e1nom j mini(log(RT b)i + \ud706cij) * s\ud706 \u21e1 subject to b \u00c0 B, \ud706g 0, where \ud706\u00c0 I R+ is the dual variable. The problem follows the disciplined convex programming (DCP) rules. 
The problem follows the disciplined convex programming (DCP) rules, because log(RT b)i + \ud706cij is a concave function of b and \ud706, therefore mini(log(RT b)i + \ud706cij) is a concave function of b and \ud706, then the entire objective is a concave function of b and \ud706; and the constraint b \u00c0 B and \ud706g 0 also follows DCP rule. We comment that although we allow \u21e1to have different support from \u21e1nom, we still consider the simple setting of \ufb01nite event space for \u21e1for technical clarity. The extension to the general setting for different norm form of cost c could be \ufb01nd in [43]. Computing the Wasserstein distance between two discrete distributions can be converted to solving a tractable linear program that is susceptible to the network simplex algorithm, dual ascent methods, or specialized auction algorithms [44, 45, 46]. Ef\ufb01cient approximation schemes can be found in the survey [47] of algorithms for the \ufb01nite-dimensional transportation problem. However, as soon as at least one of the two involved distributions is not discrete, the Wasserstein distance can no longer be evaluated in polynomial time. 5 \fUncertainty set with estimated mean and covariance matrix of return For the application in stock investment, typically quantitative investors are only able to obtain estimated mean and covariance matrix, with error bounds on the estimation errors. Following the original paper by Delage and Ye [28] and the review paper [48], we consider the following uncertainty set. Theorem 6. For uncertainty set with estimator \ud7070 and \u23030 for mean and covariance matrix of random vector r \u00c0 I Rn, \u21e7= {\u21e1\u00c0 SK \u203a (E\u21e1r * \ud7070)T \u2303*1 0 (E\u21e1r * \ud7070) f %1, E\u21e1[(r * \ud7070)(r * \ud7070)T ] \u2026 %2}, if there exists \u21e10 \u00c0 \u21e7such that E\u21e10r = \ud7070, E\u21e10(r * \ud7070)(r * \ud7070)T = \u23030, then the distributional robust Kelly problem has the same optimal value as the following SDP problem, minimize u1 + u2 subject to u1 g * log(rT i b) * r\u00d2 i Y ri * r\u00d2 i y, \u2248i = 1, \u2026 , K, u2 g (%2\u23030 + \ud7410\ud741\u00d2 0 ) \u00f7 Y + \ud741\u00d2 0 y + \u02d8%1\u00d2\u2303 1 2 0 (y + 2Y \ud7410)\u00d2, Y \u00a0 0, b \u00c0 B, where u1, u2 \u00c0 I R, Y \u00c0 I Rn\u00f9n and y \u00c0 I Rn are auxiliary variables. The problem is a SDP problem, therefore follows the disciplined convex programming (DCP) rules. 3 Theoretical properties of distributional robust Kelly strategy In the following, we will extend Brieman\u2019s classical optimality property of the Kelly strategy to distributional robust Kelly strategy. We show that for sequential decision making problems under uncertainty, distributional robust Kelly strategy is Consider a sequential gambling setting with a \ufb01xed uncertainty set \u21e7. For the Nth period random return vector rN has distribution \u21e1N. \u21e1N could vary freely for different N and the only condition is that \u21e1N \u00c0 \u21e7for any N. In this theoretical section, we consider the more general setting where the event (outcome) space is not necessarily \ufb01nite, we only assume that return is bounded rN,i \u00c0 [0, RM] for all i, N. let bN be the proportional betting vector for the N-th period such that bN g 0, 1T bN = 1, denote the (sequential) strategy \u21e4= (b1, \u2026 , bN, \u2026). This strategy\u2019s wealth growth at the N-th period is VN := rT NbN. 
This strategy\u2019s accumulated wealth growth is SN , de\ufb01ned recursively as S0 = 1, SN+1 = SNVN. Let ON*1 be the outcomes during the \ufb01rst N * 1 investment periods. Let V ? N := rT Nb? N. S? N = S? N*1V ? N*1 where b? N is the distributional robust bet at the N-th period that maximizes inf \u21e1N\u00c0\u21e7E\u21e1N [log(rT b) \u203a ON*1] Theorem 7. For any strategy \u21e4leading to the fortune SN, limN SN S? N exists almost surely and inf\u21e11,\u2026,\u21e1N\u00c0\u21e7E limN SN S? N f 1. The critical proof idea is to leverage the maximizing property of the distributional robust bet b? N, using superlinear property of the operator E := inf\u21e1N\u00c0\u21e7E\u21e1N : inf\u21e1N\u00c0\u21e7E\u21e1N (S1 + S2) g inf\u21e1N\u00c0\u21e7E\u21e1N (S1) + inf\u21e1N\u00c0\u21e7E\u21e1N (S2), eventually using Fatou\u2019s lemma to switch the order of limit and expectation. Proof sketch . Here we highlight some of the key technical points, with the full proof in the supplementary material. We have inf \u21e1N\u00c0\u21e7E\u21e1N [SN S? N \u203a ON*1] = inf \u21e1N\u00c0\u21e7E\u21e1N [VN V ? N \u203a ON*1]SN*1 S? N*1 with inf \u21e11,\u2026,\u21e1N*1,\u21e1N\u00c0\u21e7E\u21e11,\u2026,\u21e1N*1,\u21e1N lim N SN S? N f inf \u21e11,\u2026,\u21e1N*1,\u21e1N*1\u00c0\u21e7E\u21e11,\u2026,\u21e1N*1 lim N SN*1 S? N*1 f S0 S? 0 = 1. 6 \fTo prove the theorem, we only need prove that for any bN, inf \u21e1N\u00c0\u21e7E\u21e1N (VN V ? N \u203a ON*1) f 1 Now, for any \u270f> 0, by the maximizing property of the distributional robust bet b? N, we have inf \u21e1N\u00c0\u21e7E(\u270flog(VN) + (1 * \u270f) log(V ? N ) \u203a ON*1) f inf \u21e1N\u00c0\u21e7E(log(V ? N ) \u203a ON*1) Rewrite the left side, using superlinear property of E := inf\u21e1N\u00c0\u21e7E\u21e1N , we have inf \u21e1N\u00c0\u21e7E[1 \u270flog(1 + \u270f 1 * \u270f VN V ? N ) \u203a ON*1] f 1 \u270flog( 1 1 * \u270f) taking lower limit \u270f\u00f4 0+, since VN V ? N ) are bounded, from Fatou\u2019s lemma, we have inf \u21e1N\u00c0\u21e7E[VN V ? N ) \u203a ON*1] = inf \u21e1N\u00c0\u21e7E[lim\u270f\u00f40+ 1 \u270flog(1 + \u270f 1 * \u270f VN V ? N ) \u203a ON*1] f lim\u270f\u00f40+ 1 \u270flog( 1 1 * \u270f) = 1 We call \u21e4a nonterminating strategy if there are no values of bs such that Vs = rT s bs = 0 for any s. Theorem 8. If \u21e4is a nonterminating strategy, the set limN\u00f4\u00ff SN S? N = 0 is almost surely equal to the set on which \u2265\u00ff N=1[inf\u21e1N\u00c0\u21e7E\u21e1N [log(V ? N ) * log(VN) \u203a ON*1]] = \u00ff. The critical proof idea is to combine previous theorem and a generalized martingale convergence theorem on the sequence S? N SN * inf \u21e1N\u00c0\u21e7E\u21e1s[ S? N SN \u203a ON*1] = N \u2026 s=1 {log(V ? s ) * log(Vs) * inf \u21e1s\u00c0\u21e7E[log(V ? s ) * log(Vs) \u203a Os*1]} using non-linear expectation theory developed by [49, 50] on the non-linear expectation operator E := inf\u21e1N\u00c0\u21e7E\u21e1N . We comment that if the two strategies \u21e4and \u21e4? satisfy the condition \u2265\u00ff N=1[inf\u21e1N\u00c0\u21e7E\u21e1N [log(V ? N )* log(VN) \u203a ON*1]] = \u00ff, then we call \u21e4and \u21e4? \"essentially different\" strategies under uncertainty. 
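Because the displayed formulas above are damaged by text extraction, the two statements can be restated in clean notation as follows (this is a transcription of Theorems 7 and 8 in the notation defined earlier, with Theorem 8 requiring Λ to be nonterminating; it is not a new result).

```latex
% Theorem 7 (asymptotic domination).
\lim_{N\to\infty} \frac{S_N}{S^\star_N} \ \text{exists a.s., and}\quad
\inf_{\pi_1,\dots,\pi_N \in \Pi}
\mathbb{E}\Big[\lim_{N\to\infty} \tfrac{S_N}{S^\star_N}\Big] \le 1 .

% Theorem 8 (essentially different strategies).
\Big\{\lim_{N\to\infty} \tfrac{S_N}{S^\star_N} = 0\Big\}
\ \stackrel{\text{a.s.}}{=}\
\Big\{\sum_{N=1}^{\infty} \inf_{\pi_N \in \Pi}
\mathbb{E}_{\pi_N}\big[\log V^\star_N - \log V_N \,\big|\, O_{N-1}\big] = \infty\Big\}.
```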
The two theorems proved in this section could be stated as: In sequential decision making problem under uncertainty, when the sequence of distributions of return vary in a given uncertainty set, distributional robust Kelly strategy asymptotically maximizes the worst-case rate of asset growth, and dominants any other essentially different strategy by magnitude. 4 Numerical example In this section we illustrate distributional robust Kelly gambling with a simple horse racing example. Our example is a simple horse race with n horses, with bets placed on each horse placing, i.e., coming in \ufb01rst or second. There are thus K = n(n * 1)_2 outcomes (indexed as j, k with j < k f n), and n bets (one for each horse to place). We consider two simple uncertainty sets, the box set with parameter \u2318and l2 ball set with radius c. We \ufb01rst describe the nominal distribution of outcomes \u21e1nom. We model the speed of the horses as independent random variables, with the fastest and second fastest horses placing. With this model, \u21e1nom is entirely described by the probability that horse i comes in \ufb01rst, we which denote \ud6fdi. For j < k, we have \u21e1nom jk = P(horse j and k are the \ufb01rst two) = \ud6fdj\ud6fdk( 1 1*\ud6fdi + 1 1*\ud6fdj ). For the return matrix, we use parimutuel betting, with the fraction of bets on each horse equal to \ud6fdi, the probability that it will win (under the nominal probability distribution). The return matrix R \u00c0 I Rn\u00f9K then has the form (we index the columns (outcomes) by the pair jk, with j < k) Ri,jk = h n l n j n 1+\ud6fdj_\ud6fdk i = j n 1+\ud6fdk_\ud6fdj i = k 0 i \u00c3 {j, k}, 7 \fGrowth rate bK bRK: box with \u2318= 0.26 bRK: ball with c = 0.016 \u21e1nominal 4.3% 2.2% 2.2% \u21e1worst *2.2% 0.7% 0.4% Table 1: For box uncertainty set with \u2318= 0.26 and ball uncertainty set with c = 0.016, we compare the growth rate and worst-case growth rate for the Kelly optimal and the distributional robust Kelly optimal bets. Figure 1: The Kelly optimal bets bK for the nominal distribution, and the distributional robust optimal bets for the box and ball uncertainty sets, ordered by the descending order of bK. First, we show growth rate and worst-case growth rate for the Kelly optimal and the distributional robust Kelly optimal bets under two uncertainty sets. In table 1, we show the comparison for box uncertainty set with \u2318= 0.26 and for ball uncertainty set with c = 0.016. The two parameters are chosen so that the worst case growth of Kelly bets for both uncertainty sets are *2.2%. In particular, using standard Kelly betting, we lose money (when the distribution is chosen as the worst one for the Kelly bets). We can see that, as expected, the Kelly optimal bet has higher log growth under the nominal distribution, and the distributional robust Kelly bet has better worst-case log growth. We see that the worst-case growth of the distributional robust Kelly bet is signi\ufb01cantly better than the worst-case growth of the nominal Kelly optimal bet. In particular, with robust Kelly betting, we make money, even when the worst distribution is chosen. The nominal Kelly optimal bet bK and the distributional robust Kelly bet bRK for both uncertainty sets in \ufb01gure 11. For each of our bets bK and bRK shown above, we \ufb01nd a corresponding worst case distribution, denoted \u21e1wc,K and \u21e1wc,RK, which minimize G\u21e1(b) over \u21e1\u00c0 \u21e7. 
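Finding a worst-case distribution for a fixed bet is itself a small convex problem. A minimal CVXPY sketch for the box uncertainty set used in this example (our function name; b_fixed, R, pi_nom, eta are assumed inputs, and the precise box definition is restated in the supplementary details below):

```python
import cvxpy as cp
import numpy as np

def worst_case_distribution(b_fixed, R, pi_nom, eta):
    """Worst-case pi in the box set for a fixed bet b: minimize pi^T log(R^T b)."""
    K = R.shape[1]
    g = np.log(R.T @ b_fixed)                 # realized log growth in each outcome
    pi = cp.Variable(K)
    constraints = [pi >= 0, cp.sum(pi) == 1,
                   cp.abs(pi - pi_nom) <= eta * pi_nom]
    prob = cp.Problem(cp.Minimize(pi @ g), constraints)
    prob.solve()
    return pi.value, prob.value
```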
These distributions, shown for box uncertainty set and ball uncertainty set in \ufb01gure 3, achieve the corresponding worst-case log growth for the two bet vectors. Finally, in \ufb01gure 2, we compare the expected wealth logarithmic growth rate as we increase the size of the uncertainty sets. For the box uncertainty set we choose \u2318\u00c0 [0, 0.3], and for the ball uncertainty set we choose c \u00c0 [0, 0.02], we look at the expected growth for both the kelly bet bK and the distributional robust kelly bet bRK under both the nominal probability \u21e1nom and the worst case probability \u21e1worst. Figure 2: Plots of the expected growth under nominal distribution and worst-case distribution under the box and ball uncertainty set family. The blue, green, orange, red line are \u21e1nom,T log(RT bK), \u21e1worst,T \u2318 log(RT bK), \u21e1nom,T log(RT bRK \u2318 ), \u21e1wc,T \u2318 log(RT bRK \u2318 ). For the box uncertainty set we choose \u2318\u00c0 [0, 0.3], and for the ball uncertainty set we choose c \u00c0 [0, 0.02]. 8 \fFigure 3: For box uncertainty set with \u2318= 0.26 and ball uncertainty set with c = 0.016, we show the nominal distribution \u21e1nom (sorted) and the two worst-case distributions \u21e1wc,K and \u21e1wc,RK. 5 Discussion about how to choose uncertainty set Uncertainty set estimation via chance constraints To use distributional robust Kelly strategy in practical, one of the limitation is that it could be hard to estimate a high quality uncertainty set and the choice of uncertainty set depends on domain knowledge. There are two major considerations in the choice of the uncertainty set, \ufb01rst is tractability, which is discussed through the lens of DCP tractable form in this paper, second is ef\ufb01ciency, or the trade-off between coverage and tightness, i.e. whether it is too conservative and how well it re\ufb02ects the actual variability in our problem. We make some comment on the selection advice for uncertainty set. From the relation between chance constraint and uncertainty set, as discussed in section 6 of [51, 52], we could use probabilistic tools to ef\ufb01ciently represent uncertainty sets through chance constraint such as drawdown control, including value at Risk, conditional value at risk, tail bound from moments[18]. In quantitative investment practice, due to the noisy nature of stocks or futures market data, the scarcity of effective observations, the return non-stationarity, the uncertainty set presented in theorem 6 would be a good starting point. Besides the long-run optimality of growth rate proved here, an interesting question is to study the \ufb01nite time property of the distributional robust Kelly strategy and compare both the achieved growth rate in \ufb01nite time and the probability of having a given level of drawdown with the distributional robust version of mean-variance strategy. Uncertainty set estimation via conformal prediction Recent uncertainty quanti\ufb01cation methodology development on conformal prediction [53, 54, 55] has provided statistical tools to generate set-valued predictions for black-box predictors with rigorous error control and formal \ufb01nite-sample coverage guarantee, as shown in [56, 57]. As a future direction, we will explore the usage of conformal prediction to construct computational tractable uncertainty set from data for investment. 
Uncertainty set tuning via bi-level optimization and differentiation through solution of convex problem To automate the tuning of the parameters for the uncertainty set, we could use bi-level optimization [58] to learn uncertainty set by back-propagation over the parameter \u2713. We could set up a higher-level objective L(\u2713) to tune the parameters \u2713. For example, given out of samples observations that are not accessible when we \ufb01t the mean the functional form of \u21e7, we could de\ufb01ne an out-of-sample Kelly loss as the higher-level objective: b?(\u2713) = arg max b\u00c0B min \u21e1\u00c0\u21e7(\u2713) E\u21e1log(rT b). max \u2713 L(\u2713) = 1 N N \u2026 i=1 log(rT i b?(\u2713)) Our method relies on recently developed methods[58] that can ef\ufb01ciently evaluate the derivative of the solution of a disciplined convex optimization problem with respect to its parameters. Using previous result in this paper, this distributional robust optimization problem can be transformed into a disciplined convex optimization, which allows automated differentiation with respect to the solution map b?(\u2713). We have provided CVXPY code for the uncertainty sets and their corresponding tuning/learning example in PyTorch in the supplementary material, which would allow users to learn uncertainty sets through the recently developed software framework to embed our problem into differentiable programming framework to learn the uncertainty set. 9 \f6 Supplementary Material: Concrete examples of divergences functions and their Fenchel conjugates We remark that there is a one-parameter family of f-divergences generated by the \u21b5-function with \u21b5\u00c0 I R, where we can de\ufb01ne the generalization of natural logarithm by log\u21b5(t) = t\u21b5*1 * 1 \u21b5* 1 . For \u21b5\u00eb 1, it is a power function, for \u21b5\u00f4 1, it is converging to the natural logarithm. Now if we assume f\u21b5(1) = 0 and f \u00ae \u21b5(t) = log\u21b5(t), then we have f\u21b5(t) = t\u21b5* 1 * \u21b5(t * 1) \u21b5(\u21b5* 1) . The Fenchel conjugate is f < \u21b5(s) = 1 \u21b5((1 + (\u21b5* 1)s) \u21b5 \u21b5*1 * 1). We now show some more speci\ufb01c examples of f-divergences; for a more detailed discussion see [37]. \u2022 KL-divergence. With f(t) = t log(t) * t + 1, we obtain the KL-divergence. We have f <(s) = exp(s) * 1. This corresponds to \u21b5= 1. \u2022 Reverse KL-divergence. With f(t) = * log(t) + t * 1, the f-divergence is the reverse KL-divergence. We have f <(s) = * log(1 * s) for s < 1. This corresponds to \u21b5= 0. \u2022 Pearson \ud7122-divergence. With f(t) = 1 2(t * 1)2, we obtain the Pearson \ud7122-divergence. We have f <(s) = 1 2(s + 1)2 * 1 2, s > *11. This corresponds to \u21b5= 2. \u2022 Neyman \ud7122-divergence. With f(t) = 1 2t(t * 1)2, we obtain the Neyman \ud7122-divergence. We have f <(s) = 1 * \u02d8 1 * 2s, s < 1 2. This corresponds to \u21b5= *1. \u2022 Hellinger-divergence. With f(t) = 2( \u02d8 t * 1)2, we obtain the Hellinger-divergence. We have f <(s) = 2s 2*s, s < 2. This corresponds to \u21b5= *1. \u2022 Total variation distance. With f(t) = \uf8fft * 1\uf8ff, the f-divergence is the total variation distance. We have f <(s) = *1 for s f *1 and f <(s) = s for *1 f s f 1. 7 Supplementary Material: Proof of theorems in section 2 Proof of Theorem 1 Proof of theorem 1. 
The worst-case log growth rate G\u21e7(b) is given by the optimal value of the linear program (LP) minimize \u21e1T log(RT b) subject to 1T \u21e1= 1, \u21e1g 0, A0\u21e1= d0, A1\u21e1f d1, (1) with variable \u21e1. We form a dual of this problem, working with the constraints A0\u21e1= d0, A1\u21e1f d1; we keep the simplex constraints \u21e1g 0, 1T \u21e1= 1 as an indicator function IS(\u21e1) in the objective. The Lagrangian is L(\u232b, \ud706, \u21e1) = \u21e1T log(RT b) + \u232bT (A0\u21e1* d0)+ \ud706T (A1\u21e1* d1) + IS(\u21e1), where \u232b\u00c0 I Rm0 and \ud706\u00c0 I Rm1 are the dual variables, with \ud706g 0. Minimizing over \u21e1we obtain the dual function, g(\u232b, \ud706) = inf\u21e1\u00c0SK L(\u232b, \ud706, \u21e1) = min(log(RT b) + AT 0 \u232b+ AT 1 \ud706) * dT 0 \ud707* dT 1 \ud706, 10 \fwhere the min of a vector is the minimum of its entries. The dual problem associated with (1) is then maximize min(log(RT b) + AT 0 \ud707+ AT 1 \ud706) * dT 0 \ud707* dT 1 \ud706 subject to \ud706g 0, with variables \ud707, \ud706. Using Slater\u2019s condition for simplex, strong duality can easily be veri\ufb01ed. Therefore, this dual problem has the same optimal value as (1), i.e., G\u21e7(b) = sup\u232b,\ud706g0( min(log(RT b) + AT 0 \ud707+ AT 1 \ud706) *dT 0 \ud707* dT 1 \ud706). Using this expression for G\u21e7(b), the DRKP becomes maximize min(log(RT b) + AT 0 \ud707+ AT 1 \ud706) * dT 0 \ud707* dT 1 \ud706 subject to b \u00c0 B, \ud706g 0, with variables b, \ud707, \ud706. Proof of Theorem 2 Proof of theorem 2. Using the general method above, expressing the limits as A1\u21e1f d1 with A1 = 4 I *I 5 , d1 = 4 \u21e1nom + \u21e2 \u21e2* \u21e1nom 5 , the DRKP problem becomes maximize (min(log(RT b) + \ud706+ * \ud706*)) * (\u21e1nom)T (\ud706+ * \ud706*) * \u21e2T (\ud706+ + \ud706*) subject to b \u00c0 B, \ud706+ g 0, \ud706* g 0, with variables b, \ud706+, \ud706*. De\ufb01ning \ud706= \ud706+ * \ud706*, we have \uf8ff\ud706\uf8ff= \ud706+ + \ud706*, so the DRKP becomes maximize min(log(RT b) + \ud706) * (\u21e1nom)T \ud706* \u21e2T \uf8ff\ud706\uf8ff subject to b \u00c0 B, with variables b, \ud706. Proof of Theorem 3 Proof of theorem 3. We de\ufb01ne x = * log(RT b), z = W *1(\u21e1* \u21e1nom), and Dp,W = {z \u203a \u00d2z\u00d2p f 1, 1T W z = 0, \u21e1nom + W z g 0}. Then we have G\u21e7(b) = * sup\u21e1\u00c0\u21e7((\u21e1* \u21e1nom)T x + (\u21e1nom)T x) = * supz\u00c0Dp,W zT W T x + (\u21e1nom)T x = sup\ud707,\ud706g0 (* sup\u00d2z\u00d2pf1 zT W T (x + \ud706* \ud7071)+ (\u21e1nom)T (\ud706+ x)) = sup\ud707,\ud706g0 (*\u00d2W T (x + \ud706* \ud7071)\u00d2q+ (\u21e1nom)T (*\ud706* x)). Here the second last equation is the Lagrangian form where we keep the p-norm constraint as a convex indicator, and the last equation is based on the H\u00f6lder equality sup\u00d2z\u00d2pf1 zT W T (x + \ud706* \ud7071) = \u00d2W T (x + \ud706* \ud7071)\u00d2q, Using Slater\u2019s condition, strong duality can easily be veri\ufb01ed. Using this expression for G\u21e7(b), and let u = *x * \ud706= log(RT b) * \ud706f log(RT b), then the DCP formulation of DRKP becomes maximize (\u21e1nom)T (u) * \u00d2W T (u * \ud7071)\u00d2q subject to u f log(RT b), b \u00c0 B, with variables b, u, \ud707. This DRKP problem follows DCP rule because \u21e1nom,T u is a linear function of u, *\u00d2W T (u * \ud7071)\u00d2q is a concave function of u and \ud707for q g 1, and log(RT b) is a concave function of b, hence u f log(RT b) is a concave constraint. 
Therefore, this DRKP problem is maximizing a concave objective concave constraint problem that follows DCP rule. 11 \fProof of Theorem 4 Proof of theorem 4. We de\ufb01ne x = * log(RT b) again. Our goal is to minimize *G\u21e7(b) = sup\u21e1\u00c0\u21e7\u21e1T x. We form a dual of this problem, working with the constraints Df(\u21e1\u00d2\u21e10) f \u270fand 1T \u21e1= 1; we keep the constraint \u21e1g 0 implicit. With dual variables \ud706\u00c0 I R+, \ud6fe\u00c0 I R, then for \u21e1g 0, the Lagrangian is L(\ud6fe, \ud706, \u21e1) = \u21e1T x + \ud706(*(\u21e1nom)T f( \u21e1 \u21e1nom ) + \u270f) *\ud6fe(eT \u21e1* 1) + I+(\u21e1), where I+ is the indicator function of I RK + . The dual objective function is sup\u21e1g0 L(\ud6fe, \ud706, \u21e1) = sup\u21e1g0(\u2265K i=1 \u21e1nom i ( \u21e1i \u21e1nom i xi * \u21e1i \u21e1nom i \ud6fe* \ud706f( \u21e1i \u21e1nom i ))) +\ud706\u270f+ \ud6fe = \u2265K i=1 \u21e10,i suptig0(ti(xi * \ud6fe) * \ud706f(ti)) + \ud706\u270f+ \ud6fe = \u2265K i=1 \u21e10,i\ud706f <( xi*\ud6fe \ud706) + \ud706\u270f+ \ud6fe. Using Slater\u2019s condition, strong duality can easily be veri\ufb01ed. We can write the problem as maximize *(\u21e1nom)T \ud706f <( * log(RT b)*\ud6fe \ud706 ) * \ud706\u270f* \ud6fe subject to \ud706g 0, b \u00c0 B, with variables b, \ud6fe, \ud706. We transform the problem to follow the disciplined convex programming (DCP) rules by convex relaxation of the equality constraint. Now DRKP becomes maximize *(\u21e1nom)T w * \u270f\ud706* \ud6fe subject to w g \ud706f <( z \ud706) z g * log(RT b) * \ud6fe \ud706g 0, b \u00c0 B, with variables b, \ud6fe, \ud706, w, z. Here \ud706f <(z \ud706) = (\ud706f)<(z) = sup tg0 (tz * \ud706f(t)), is the perspective function of the non-decreasing convex function f <(z), so it is also a convex function that is non-decreasing in z. Additionally, * log(RT b) * \ud6feis a convex function of b and \ud6fe; then from the DCP composition rule, we know this form of DRKP is convex. Proof of Theorem 5 Proof of theorem 5. The worst-case log growth G\u21e7(b) is given by the value of the following LP, minimize \u21e1T log(RT b) subject to Q1 = \u21e1, QT 1 = \u21e1nom, Q g 0, \u2265 i,j Qijcij f s, with variable Q. Using strong duality for LP, the DRKP becomes maximize \u21e0\u2265 j \u21e1nom j mini(log(RT b)i + \ud706cij) * s\ud706 \u21e1 subject to b \u00c0 B, \ud706g 0. where \ud706\u00c0 I R+ is the dual variable. The problem follows the disciplined convex programming (DCP) rules, because log(RT b)i + \ud706cij is a concave function of b and \ud706, therefore mini(log(RT b)i + \ud706cij) is a concave function of b and \ud706, then the entire objective is a concave function of b and \ud706; and the constraint b \u00c0 B and \ud706g 0 also follows DCP rule. 12 \fProof of Theorem 6 Proof of theorem 6. Following [28] lemma 1, to prove our theorem, we only need to verify Slater\u2019s constraint quali\ufb01cation conditions for strong duality and verify that log(rT b) is integrable for all \u21e1in \u21e7. For Slater\u2019s constraint quali\ufb01cation conditions, we only need to \ufb01nd a strictly feasible \u21e1in \u21e7. Since there exists \u21e10 \u00c0 \u21e7such that E\u21e10r = \ud7070, E\u21e10(r * \ud7070)(r * \ud7070)T = \ud70e0, \u21e10 is a strictly feasible point in \u21e7. For integrablility, notice that E\u21e1log(rT b) is on \ufb01nite event space, so the result would naturally be \ufb01nite for any \u21e7as a subset of K-dimensional probability simplex. 
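To connect the dual reformulations above back to code, here is a minimal CVXPY sketch of the box-uncertainty case, which transcribes the DCP form stated in Theorem 2; the simplex is used for B and all array inputs are assumed NumPy arrays, with rho the elementwise-nonnegative vector of radii.

```python
import cvxpy as cp

def robust_kelly_box(R, pi_nom, rho):
    """DRKP for the box set |pi - pi_nom| <= rho, in the DCP form of Theorem 2."""
    n, K = R.shape
    b = cp.Variable(n)
    lam = cp.Variable(K)
    worst_growth = (cp.min(cp.log(R.T @ b) + lam)
                    - pi_nom @ lam - rho @ cp.abs(lam))
    prob = cp.Problem(cp.Maximize(worst_growth), [b >= 0, cp.sum(b) == 1])
    prob.solve()
    return b.value, prob.value
```

The other uncertainty sets treated in Section 2 follow the same pattern: each theorem's DCP form maps line-by-line onto a CVXPY objective and constraint list.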
8 Supplementary Material: Proof of theorems in section 3

Proof of Theorem 7

[Proof of theorem 7] Proof. We have inf_{π_N ∈ Π} E_{π_N}[S_N/S*_N | O_{N−1}] = inf_{π_N ∈ Π} E_{π_N}[V_N/V*_N | O_{N−1}] · S_{N−1}/S*_{N−1}. If we can prove that for any b_N, inf_{π_N ∈ Π} E_{π_N}(V_N/V*_N | O_{N−1}) ≤ 1, then S_N/S*_N is a decreasing semimartingale under non-linear expectation theory [49], with inf_{π_1,…,π_N ∈ Π} E_{π_1,…,π_N} lim_N S_N/S*_N ≤ inf_{π_1,…,π_{N−1} ∈ Π} E_{π_1,…,π_{N−1}} lim_N S_{N−1}/S*_{N−1} ≤ S_0/S*_0 = 1. Now, for any ε > 0, by the maximizing property of the distributional robust bet b*_N, we have inf_{π_N ∈ Π} E(log(ε V_N + (1 − ε) V*_N) | O_{N−1}) ≤ inf_{π_N ∈ Π} E(log(V*_N) | O_{N−1}). Rewriting the left side, we have inf_{π_N ∈ Π} E(log(ε V_N + (1 − ε) V*_N) | O_{N−1}) = log(1 − ε) + inf_{π_N ∈ Π} E[log(V*_N) + log(1 + (ε/(1 − ε)) V_N/V*_N) | O_{N−1}]. Using the superlinear property of E := inf_{π_N ∈ Π} E_{π_N}, we have inf_{π_N ∈ Π} E[log(V*_N) + log(1 + (ε/(1 − ε)) V_N/V*_N) | O_{N−1}] ≥ inf_{π_N ∈ Π} E[log(V*_N) | O_{N−1}] + inf_{π_N ∈ Π} E[log(1 + (ε/(1 − ε)) V_N/V*_N) | O_{N−1}]. Therefore, combining this inequality with the previous inequality from the maximizing property of the distributional robust bet b*_N, we have inf_{π_N ∈ Π} E[(1/ε) log(1 + (ε/(1 − ε)) V_N/V*_N) | O_{N−1}] ≤ (1/ε) log(1/(1 − ε)). Taking the lower limit ε → 0+, since V_N/V*_N is bounded, from Fatou's lemma we have inf_{π_N ∈ Π} E[V_N/V*_N | O_{N−1}] = inf_{π_N ∈ Π} E[lim_{ε → 0+} (1/ε) log(1 + (ε/(1 − ε)) V_N/V*_N) | O_{N−1}] ≤ lim_{ε → 0+} inf_{π_N ∈ Π} E[(1/ε) log(1 + (ε/(1 − ε)) V_N/V*_N) | O_{N−1}] ≤ lim_{ε → 0+} (1/ε) log(1/(1 − ε)) = 1.

Proof of Theorem 8

Proof of theorem 8. The two sequences S_N/S*_N − inf_{π_N ∈ Π} E_{π_N}[S_N/S*_N | O_{N−1}] = Σ_{s=1}^N {log(V_s) − log(V*_s) − inf_{π_s ∈ Π} E[log(V_s) − log(V*_s) | O_{s−1}]} and S*_N/S_N − inf_{π_N ∈ Π} E_{π_N}[S*_N/S_N | O_{N−1}] = Σ_{s=1}^N {log(V*_s) − log(V_s) − inf_{π_s ∈ Π} E[log(V*_s) − log(V_s) | O_{s−1}]} both form G-martingale sequences under the nonlinear expectation E := inf_{π_N ∈ Π} E_{π_N} as defined in [49]. Therefore, from the nonlinear-expectation version of Doob's martingale convergence theorem [49, 50], both sequences converge to a finite value almost surely if Σ_{s=1}^N {log(V*_s) − log(V_s) − inf_{π_s ∈ Π} E[log(V*_s) − log(V_s) | O_{s−1}]} is uniformly bounded almost surely for all N. From the previous theorem, we know S_N/S*_N ≤ 1. Therefore, lim_{N → ∞} S_N/S*_N = 0 almost surely if and only if Σ_{N=1}^∞ [log(V_N) − log(V*_N) − inf_{π_N ∈ Π} E_{π_N}[log(V*_N) − log(V_N) | O_{N−1}]] = ∞.

9 Details of the horse racing numerical example

In this section we provide additional details of the horse racing numerical example. Our example is a simple horse race with n horses, with bets placed on each horse placing, i.e., coming in first or second. There are thus K = n(n − 1)/2 outcomes (indexed as j, k with j < k ≤ n), and n bets (one for each horse to place). We first describe the nominal distribution of outcomes π^nom.
We model the speed of the horses as independent random variables, with the fastest and second fastest horses placing. With this model, π^nom is entirely described by the probability that horse i comes in first, which we denote β_i. For j < k, we have π^nom_{jk} = P(horse j and k are the first two) = P(j is 1st, k is 2nd) + P(k is 1st, j is 2nd) = P(j is 1st) P(k is 2nd | j is 1st) + P(k is 1st) P(j is 2nd | k is 1st) = β_j (β_k/(1 − β_j)) + β_k (β_j/(1 − β_k)) = β_j β_k (1/(1 − β_j) + 1/(1 − β_k)). The fourth line uses P(k is 2nd | j is 1st) = β_k/(1 − β_j). For the return matrix, we use parimutuel betting, with the fraction of bets on each horse equal to β_i, the probability that it will win (under the nominal probability distribution). The return matrix R ∈ R^{n×K} then has the form R_{i,jk} = 1 + β_j/β_k if i = j, R_{i,jk} = 1 + β_k/β_j if i = k, and R_{i,jk} = 0 if i ∉ {j, k}, where we index the columns (outcomes) by the pair jk, with j < k. Our set of possible distributions is the box Π_η = {π | |π − π^nom| ≤ η π^nom, 1^T π = 1, π ≥ 0}, where η ∈ (0, 1), i.e., each probability can vary by η from its nominal value. Another uncertainty set is the ball Π_c = {π | ‖π − π^nom‖_2 ≤ c, 1^T π = 1, π ≥ 0}. For our specific example instance, we take n = 20 horses, so there are K = 190 outcomes. We choose β_i, the probability distribution of the winning horse, proportional to exp z_i, where we sample independently z_i ∼ N(0, 1/4). This results in β_i ranging from around 20% (the fastest horse) to around 1% (the slowest horse). First, we show the growth rate and worst-case growth rate for the Kelly optimal and the distributional robust Kelly optimal bets under the two uncertainty sets. In table 1 of the main paper, we show the comparison for the box uncertainty set with η = 0.26 and for the ball uncertainty set with c = 0.016. The two parameters are chosen so that the worst-case growth of the Kelly bets for both uncertainty sets is −2.2%. In particular, using standard Kelly betting, we lose money (when the distribution is chosen as the worst one for the Kelly bets). We can see that, as expected, the Kelly optimal bet has higher log growth under the nominal distribution, and the distributional robust Kelly bet has better worst-case log growth. We see that the worst-case growth of the distributional robust Kelly bet is significantly better than the worst-case growth of the nominal Kelly optimal bet. In particular, with robust Kelly betting, we make money, even when the worst distribution is chosen. The nominal Kelly optimal bet b^K and the distributional robust Kelly bet b^RK for both uncertainty sets are shown in figure 1 of the main paper. For each of our bets b^K and b^RK shown above, we find a corresponding worst-case distribution, denoted π^{wc,K} and π^{wc,RK}, which minimize G_π(b) over π ∈ Π. These distributions, shown for the box uncertainty set and the ball uncertainty set in figure 3 of the main paper, achieve the corresponding worst-case log growth for the two bet vectors. Finally, we compare the expected wealth logarithmic growth rate as we increase the size of the uncertainty sets.
For the box uncertainty set we choose η ∈ [0, 0.3], and for the ball uncertainty set we choose c ∈ [0, 0.02]; we look at the expected growth for both the Kelly bet b^K and the distributional robust Kelly bet b^RK under both the nominal probability π^nom and the worst-case probability π^worst.

10 Supplementary Material: Details for learning the uncertainty set through bi-level optimization

One limitation of using the distributional robust Kelly strategy is the requirement to tune the parameters of the uncertainty set. Tuning is often done by hand, or by simple methods such as a crude grid search. In this section we propose a method to automate this process, by adjusting the parameters using an approximate gradient of the performance metric with respect to the parameters. To give more color to the general setup presented in the discussion section (section 5) of the main paper, we consider a more concrete setting of learning the uncertainty set with features (covariates). Assume that we are playing sequential gambling games, where each game is characterized by features (covariates) X. We represent the nominal return distribution as a function of the features: π_0 = h_{θ_0}(X), where h_{θ_0} is a logistic function h_{θ_0}(X) = exp(θ_0^T X)/Z or, in general, a deep neural network with parameter θ_0, and the radius/shape of the uncertainty set is parametrized by θ_1. For example, in the transformed l_p ball uncertainty set, θ_1 = W. The uncertainty set Π(θ; X) has parameter θ = (θ_0, θ_1). For a given uncertainty set with parameter θ and feature X, the distributional robust Kelly strategy is the solution to the convex optimization problem b*(θ; X) = argmax_{b ∈ B} min_{π ∈ Π(θ; X)} E_π log(r^T b). To automatically tune the parameters, we can define a performance metric L(θ) with respect to the parameters. For example, given out-of-sample observations X_i, r_i that are not accessible when we fit the functional form of Π, we can define an out-of-sample Kelly loss as the performance metric: maximize_θ L(θ) = (1/N) Σ_{i=1}^N log(r_i^T b*(θ; X_i)). Using this performance metric as the high-level objective, we can choose the uncertainty set parameter θ through bi-level optimization using an approximate gradient method. Our method relies on recently developed methods [58] that can efficiently evaluate the derivative of the solution of a disciplined convex optimization problem with respect to its parameters. Using the previous results in this paper, this distributional robust optimization problem can be transformed into a disciplined convex optimization problem, which allows automatic differentiation with respect to the solution map b*(θ; X). We have provided CVXPY code for the uncertainty sets in this paper, which allows users to embed our problem, through the recently developed software framework, into differentiable programming frameworks like TensorFlow and PyTorch and learn the uncertainty set via deep learning.

11 Supplementary Material: CVXPY example codes

All of the formulations of the distributional robust Kelly problem (DRKP) are not only tractable, but also easily expressed in a domain-specific language for convex optimization. The CVXPY code to specify and solve the DRKP for ball and box constraints, for example, is given below.
For the box uncertainty set, Π_ρ = {π | |π − π^nom| ≤ ρ, 1^T π = 1, π ≥ 0}, the CVXPY code is

import cvxpy as cvx

pi_nom = cvx.Parameter(K, nonneg=True)
rho = cvx.Parameter(K, nonneg=True)
b = cvx.Variable(n)
mu = cvx.Variable(K)
wc_growth_rate = cvx.min(cvx.log(R.T @ b) + mu) - pi_nom.T @ mu - rho.T @ cvx.abs(mu)
constraints = [cvx.sum(b) == 1, b >= 0]
DRKP = cvx.Problem(cvx.Maximize(wc_growth_rate), constraints)
DRKP.solve()

For the ball uncertainty set, Π_c = {π | ‖π − π^nom‖_2 ≤ c, 1^T π = 1, π ≥ 0}, the CVXPY code is

pi_nom = cvx.Parameter(K, nonneg=True)
c = cvx.Parameter(nonneg=True)
b = cvx.Variable(n)
U = cvx.Variable(K)
mu = cvx.Variable()
log_growth = cvx.log(R.T @ b)
wc_growth_rate = pi_nom.T @ U - c * cvx.norm(U - mu, 2)
constraints = [cvx.sum(b) == 1, b >= 0, U <= log_growth]
DRKP = cvx.Problem(cvx.Maximize(wc_growth_rate), constraints)
DRKP.solve()

Here R is the matrix whose columns are the return vectors, and pi_nom is the vector of nominal probabilities. rho is the K-dimensional box bound and c is the radius of the ball. For each problem, the second-to-last line forms the problem, and in the last line the problem is solved. The robust optimal bet is written into b.value. To learn the uncertainty set, we can parametrize π_0 via a logistic model π_0 = softmax(θX), where θ ∈ R^{K×M} is the parameter to learn and X ∈ R^M is a feature of each game. The full Python notebook code for the horse gambling example is also attached in the appendix. The computational resources used are fairly lightweight. All of the computation in the notebook is done via Google's Colab, and the notebook is also easy to run on a laptop. To give a taste of the framework, for the box uncertainty set Π_ρ = {π | |π − π^nom| ≤ ρ, 1^T π = 1, π ≥ 0}, the CVXPY code to build a CvxpyLayer is

import cvxpy as cvx
import torch
from cvxpylayers.torch import CvxpyLayer

# build the box-uncertainty DRKP problem
pi_0 = cvx.Parameter(K, nonneg=True)
rho = cvx.Parameter(K, nonneg=True)
R_cvx = cvx.Parameter((n, K), nonneg=True)
b = cvx.Variable(n)
mu = cvx.Variable(K)
log_growth = cvx.log(R_cvx.T @ b)
rob_growth_rate = cvx.min(log_growth + mu)
rob_growth_rate = rob_growth_rate - pi_0.T @ mu - rho.T @ cvx.abs(mu)
constraints = [cvx.sum(b) == 1, b >= 0]
DRKP = cvx.Problem(cvx.Maximize(rob_growth_rate), constraints)

Figure 4: The trained π^nom with error bars from the box constraint; π^nom is initialized at the uniform distribution. Figure 5: The training loss for training ρ using a projected Adam optimizer (projecting onto non-negative vectors); ρ is initialized at rho = (0.1, …, 0.1) using PyTorch with initial step size 10^-6.
problem = DRKP parameters=[R_cvx, pi_0, rho] policy = CvxpyLayer(problem, parameters, [b]) The training code using PyTorch looks like: # Initialize: Rho = torch.from_numpy(np.ones(K)*1e-4).requires_grad_(True) log_Pi_0 = torch.from_numpy(np.zeros(K)).requires_grad_(True) torch_variables = [log_Pi_0, Rho] R_torch = torch.from_numpy(R) Pi_test_torch = torch.from_numpy(Pi_test) # Loss: def evaluate( R_torch, log_Pi_0, Rho, Pi_test_torch): Pi_0 = torch.nn.functional.softmax(log_Pi_0) b, = policy(R_torch, Pi_0, Rho) logs = torch.log(R_torch.T @ b) cost = torch.sum(Pi_test_torch*logs[None,:]) return cost # Training: iters = 100 results = [] optimizer = torch.optim.Adam(torch_variables, lr=1e-2) for i in range(iters): optimizer.zero_grad() 17 \floss = evaluate(R_torch, log_Pi_0, Rho, Pi_test_torch) loss.backward() optimizer.step() # Project so that Rho is non-negative Rho.data = torch.max(Rho.data,torch.zeros_like(Rho.data)) results.append(loss.item()) print(\"(iter %d) loss: %g \" % (i, results[-1])) 18" + }, + { + "url": "http://arxiv.org/abs/1606.00925v3", + "title": "Convolutional Imputation of Matrix Networks", + "abstract": "A matrix network is a family of matrices, with relatedness modeled by a\nweighted graph. We consider the task of completing a partially observed matrix\nnetwork. We assume a novel sampling scheme where a fraction of matrices might\nbe completely unobserved. How can we recover the entire matrix network from\nincomplete observations? This mathematical problem arises in many applications\nincluding medical imaging and social networks.\n To recover the matrix network, we propose a structural assumption that the\nmatrices have a graph Fourier transform which is low-rank. We formulate a\nconvex optimization problem and prove an exact recovery guarantee for the\noptimization problem. Furthermore, we numerically characterize the exact\nrecovery regime for varying rank and sampling rate and discover a new phase\ntransition phenomenon. Then we give an iterative imputation algorithm to\nefficiently solve the optimization problem and complete large scale matrix\nnetworks. We demonstrate the algorithm with a variety of applications such as\nMRI and Facebook user network.", + "authors": "Qingyun Sun, Mengyuan Yan David Donoho, Stephen Boyd", + "published": "2016-06-02", + "updated": "2018-06-07", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "stat.ML" + ], + "main_content": "Introduction In machine learning and social network problems, information is often encoded in matrix form. User pro\ufb01les in social networks can be embedded into feature matrices; item pro\ufb01les in recommendation systems can also be modeled as matrices. Many medical imaging modalities, such as MRI and CT, also represent data as a stack of images. These matrices have underlying connections that can come from spatial or temporal proximity, or observed similarities between the items being described, etc. A weighted graph *Equal contribution 1Department of Mathematics, Stanford University, California, USA 2Department of Electrical Engineering, Stanford University, California, USA 3Department of Statistics, Stanford University, California, USA. Correspondence to: Qingyun Sun . Proceedings of the 35 th International Conference on Machine Learning, Stockholm, Sweden, PMLR 80, 2018. Copyright 2018 by the author(s). Figure 1: (a) Original MRI image frames (oracle). (b) Our sampled and corrupted observation. (c) Recovered image frames using nuclear norm minimization on individual frames. 
(d) Recovered image frames using our convolutional imputation algorithm. Such a weighted graph can be built to represent the connections between matrices. Due to the limitations of data acquisition processes, sometimes we can only observe a subset of entries from each data matrix. The fraction of entries we observe may vary from matrix to matrix. In many real problems, a subset of matrices can be completely unobserved, leaving no information for ordinary matrix completion methods to recover the missing matrices. To our knowledge, we are the first to examine this novel sampling scheme. As an example, in the following MRI image sequence (figure 1(a)), we sample each frame of the MRI images with an i.i.d. Bernoulli distribution with p = 0.2, and 2 out of 88 frames are completely unobserved, shown in figure 1(b). If we perform matrix completion by nuclear norm minimization on individual frames, we are not able to recover the completely unobserved matrices (figure 1(c)). Figure 2: An example of a matrix network on the Facebook social graph. Each node on the graph represents a matrix. When we build a network on the image frames, in this case a one-dimensional chain representing the sequence, and assume that the matrices after the graph Fourier transform are low-rank, we are able to recover the missing frames, as shown in figure 1(d). The ability to recover all matrices from partial observations, especially inferring matrices that are totally unobserved, is crucial to many applications such as the cold start problem in networks. Illustrated in figure 2, new items or users in a network, which do not have much information available, need to aggregate information from the network to obtain an initial estimate of their feature matrices, in order to support inference and decisions. Since we model the matrices as nodes on a graph, information from other matrices makes it possible to recover the missing ones. To use such information, we make the structural assumption that the matrix network is low-rank in spectral space, i.e., the matrix network is the graph convolution of two low-rank matrix networks. In the MRI example, we verify the spectral low-rank assumption in figure 3. For all the matrices after the graph Fourier transform, singular values quickly decrease to almost zero, demonstrating that they are in fact low-rank. We make the following major contributions in this paper: We define a novel modeling framework for collections of matrices using matrix networks. We propose a new method to complete a stack of related matrices. We provide a mathematically solid exact recovery guarantee and numerically characterize the precise success regime. We give a convolutional imputation algorithm to efficiently complete large scale matrix networks. 2. Related work Low-rank matrix recovery is an important field of research. (25; 26) proposed and (36; 48) improved the soft-impute algorithm as an iterative method to solve large-scale matrix completion problems. Figure 3: Spectral-space singular value distribution of the MRI scan, supporting the spectral low-rank assumption. The soft-impute algorithm inspired our imputation algorithm.
There was also a long line of works building theoretical tools to analyze the recovery guarantee for matrix completion (9; 7; 10; 8; 15; 22; 32; 31). Besides matrix completion, Gross analyzed the problem of efficiently recovering a low-rank matrix from a fraction of observations in any basis (24). These works enlightened our exact recovery analysis. Low-rank matrix recovery can be viewed as a "noncommutative analog" of compressed sensing, replacing the sparse vector with a low-rank matrix. In compressed sensing, recovery of a sparse vector with a block diagonal sensing matrix was studied by the recent work (37), which demonstrated a phase transition different from the well-known phase transition for classical Gaussian/Fourier sensing matrices given by a series of works including (16; 3; 39). In our low-rank matrix recovery problem, our novel sampling scheme also corresponds to a block diagonal operator, and we likewise demonstrate a new phase transition phenomenon. Tensors can be considered as matrix networks when we ignore the network structure. Tensor completion coincides with matrix network completion when the adjacent matrix is diagonal, the graph is just isolated points with no edges, and the graph eigenbasis is the coordinate basis. Several works on tensor completion defined the nuclear norm for tensors as linear combinations of the nuclear norms of its unfoldings (21; 34; 19; 50; 49). Besides the common CP and Tucker decompositions of tensors, the recent work (51; 30; 35) defined the t-product using convolution operators between tensor fibers, which is close to the convolution of matrix networks using the discrete Fourier transform matrix, and they applied the method to indoor localization. Departing from previous work, we consider a new sampling scheme in which some matrices are completely unobserved, and the undersampling ratio can be highly unbalanced for the other observed matrices. This sampling scheme was not considered before, yet it is very natural under the matrix network model. Under an incoherent eigenbasis, we can recover a completely unobserved measurement matrix because its information is well spread across all the spectral matrices with low-rank structure. Networks are an important modelling framework for relations and interactions (29; 20; 52; 54; 53). Graph Laplacian based regularization has been used in semi-supervised learning in (18; 1; 2; 38), and in PCA (44) and low-rank matrix recovery (41; 23). In (41; 23; 44) a regularization term is used for a single matrix, where both the row vectors and column vectors of the matrix are assumed to be connected with graphs. The notion of the graph Fourier transform is rooted in spectral graph theory (11); it is the cornerstone of graph harmonic analysis (13; 12). The coherence of the graph Fourier transform is studied in (40; 46), and examples of large graphs with low-coherence (non-local) eigenvectors include different classes of random graphs (14; 17; 47) and non-random regular graphs (5). The graph Fourier transform and graph convolution are widely used in data analysis and machine learning, for example, in (4; 55; 28; 43; 45). Recent advances in the field of convolutional neural networks by (6; 27) used this idea to extend neural networks from working on Euclidean grids to working on graphs. 3. Mathematical definitions Matrix network.
First, consider a weighted graph G with N nodes and an adjacent matrix W \u2208RN\u00d7N, where Wij is the weight on the edge between node i and j. In the following, we use J to represent the set of nodes on the graph. We de\ufb01ne a matrix network by augmenting this weighted graph G with a matrix-valued function A on the node set. The function A maps each node i in the graph to a matrix A(i) of size m\u00d7n. We de\ufb01ne a L2 norm \u2225\u00b7\u22252 on the matrix network by the squared sum of all entries in all matrices of the network. And we de\ufb01ne the sum of nuclear norm as \u2225\u00b7 \u2225\u2217,1, \u2225A\u2225\u2217,1 = PN i=1 \u2225A(i)\u2225\u2217. Graph Fourier transform. The graph Fourier transform is an analog of the Discrete Fourier Transform. For a weighted undirected graph G and its adjacent matrix W, the normalized graph Laplacian is de\ufb01ned as L = I \u2212 D\u22121/2WD\u22121/2, where D is a diagonal matrix with entries Dii = P j Wij. The graph Fourier transform matrix U is de\ufb01ned using UL = EU, where E is the diagonal matrix of the eigenvalues of L. Here, U is a unitary N \u00d7 N matrix, and the eigenvectors of L are the row vectors of U. We rank the eigenvalues in descending order and identify the k-th eigenvalue with its index k for simplicity. For a matrix network A, we de\ufb01ne its graph Fourier transform \u02c6 A = UA, as a stack of N matrices in the spectral space of the graph. Each matrix is a linear combination of matrices on the graph, weighted by the graph Fourier basis. \u02c6 A(k) = P i\u2208J U(k, i)A(i). Intuitively, if we view the matrix network A as a set of m\u00d7n scalar functions on the graph, the graph Fourier transform on matrix network is applying the graph Fourier transform on each function individually. Using tensor notation, the element of A is A(i, a, b), and the graph Fourier transform U can be represented by a big block diagonal matrix U \u2297I where each block is U of size N 2, and there are (mn)2 such blocks. We remark that the discrete Fourier transform is one special example of the graph Fourier transform. When the graph is a periodic grid, L is the discrete Laplacian matrix, and the eigenvectors are just the basis vectors for the discrete Fourier transform, which are sine and cosine functions with different frequencies. We de\ufb01ne the graph Fourier coherence as \u03bd(U) = maxk,s |Uk,s|, following (40; 46). We know that \u03bd(U) \u2208[ 1 \u221a N , 1]. When \u03bd(U) is close to 1 \u221a N , the eigenvectors are non-local, for example, the discrete Fourier transform case, different classes of random graphs(14; 17; 47), and non-random regular graph(5). When \u00b5(U) is close to 1, certain eigenvectors may be highly localized, especially when the graph has vertices whose degrees are signi\ufb01cantly higher or lower than the average degree, say, in a star-like tree graph, or when the graph has many triangles, as discussed in (42). We will show in the following section that graphs with low coherence (close to 1 \u221a N ) is preferred for the imputation problem. Convolution of matrix networks. We can extend the definition of convolution to matrix networks. For two matrix networks X, Y on the same graph, we de\ufb01ne their convolution as \\ (X \u22c6Y )(k) = \u02c6 X(k) \u02c6 Y (k). Then \\ X \u22c6Y is a stack of matrices where each matrix is the matrix multiplication of \u02c6 X(k) and \u02c6 Y (k). 
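To make these definitions concrete, here is a minimal NumPy sketch of the graph Fourier transform of a matrix network and of the convolution just defined; the 4-node cycle graph and the matrix sizes are toy assumptions used only for illustration.

import numpy as np

def graph_fourier_basis(W_adj):
    # Normalized Laplacian L = I - D^{-1/2} W D^{-1/2}; rows of U are its eigenvectors,
    # so that U L = E U with E diagonal, matching the convention in the text.
    # (Assumes no isolated nodes, so all degrees are positive.)
    d = W_adj.sum(axis=1)
    L = np.eye(len(d)) - W_adj / np.sqrt(np.outer(d, d))
    _, eigvecs = np.linalg.eigh(L)
    return eigvecs.T

def gft(U, A):
    # Graph Fourier transform of a matrix network A of shape (N, m, n):
    # A_hat(k) = sum_i U[k, i] * A(i).
    return np.tensordot(U, A, axes=([1], [0]))

def igft(U, A_hat):
    # Inverse transform; U has orthonormal rows, so U^{-1} = U^T.
    return np.tensordot(U.conj().T, A_hat, axes=([1], [0]))

def graph_conv(U, X, Y):
    # Convolution of matrix networks: (X * Y)_hat(k) = X_hat(k) @ Y_hat(k).
    Z_hat = np.einsum('kab,kbc->kac', gft(U, X), gft(U, Y))
    return igft(U, Z_hat)

# Toy example: a 4-node cycle graph carrying 3x2 and 2x5 matrices.
W_adj = np.array([[0, 1, 0, 1], [1, 0, 1, 0], [0, 1, 0, 1], [1, 0, 1, 0]], dtype=float)
U = graph_fourier_basis(W_adj)
X = np.random.randn(4, 3, 2)
Y = np.random.randn(4, 2, 5)
A = graph_conv(U, X, Y)   # a matrix network of shape (4, 3, 5)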
Convolution on a graph is de\ufb01ned as multiplication in the spectral space by generalizing the convolution theorem since it is not clear how to de\ufb01ne convolution in the original space. 4. Completion problem with missing matrices Imagine that we observe a few entries \u2126(i) of each matrix A(i). We de\ufb01ne the sampling rates as pi = |\u2126(i)|/(mn). The projection operator P\u2126is de\ufb01ned to project the full matrix network to our partial observation by only retaining entries in the set \u2126= S \u2126(i). \fConvolutional Imputation of Matrix Networks The sampling rate can vary from matrix to matrix. The main novel sampling scheme we include here is that a subset of matrices may be completely unobserved, namely pi = 0. This sampling scheme almost has not been discussed in depth in the literature. The dif\ufb01culty lies in the fact that if a matrix is fully unobserved, there is no information at all from itself for the recovery, therefore we must leverage the information from other observed matrices. To focus on understanding the essence of this dif\ufb01culty, it is worth considering the extreme sampling scheme where each matrix is either fully observed or fully missing, which we call node undersampling. To recover missing entries, we need structural assumptions about the matrix network A. We propose the assumption that A can be well-approximated by the convolution X \u22c6Y of two matrix networks X, Y of size m \u00d7 r and r \u00d7 n, for some r much smaller than m and n. We will show that under this assumption, accurate completion is possible even if a signi\ufb01cant fraction of the matrices are completely unobserved. We formulate the completion problem as follows. Let A0 = X0 \u22c6Y 0 be a matrix network of size m \u00d7 n, as the ground truth, where X0 and Y 0 are matrices of size m\u00d7r and r\u00d7n on the same network. After the graph Fourier transform, \u02c6 A0(k) are rank r matrices. Our observations are A\u2126= P\u2126(A) = P\u2126(A0 + W), each entry of W is sampled i.i.d from N(0, \u03c32/n). We \ufb01rst consider the noiseless setting where \u03c3 = 0. We can consider the following nuclear norm minimization problem, as a convex relaxation of rank minimization problem, minimize \u02c6 M \u2225\u02c6 M\u2225\u2217,1, subject to A\u2126= P\u2126(U\u2217\u02c6 M) As an extension to include noise, we can consider the convex optimization problem in Lagrange form with regularization parameters \u03bbk, L\u03bb( \u02c6 M) = 1 2\u2225A\u2126\u2212P\u2126U\u2217\u02c6 M\u22252 2 + N X k=1 \u03bbk\u2225\u02c6 M(k)\u2225\u2217. We can also consider the bi-convex formulation, which is to minimize the following objective function, L\u03bb(X, Y ) = \u2225A\u2126\u2212P\u2126(X \u22c6Y )\u22252 2 + PN k=1 \u03bbk(\u2225\u02c6 X(k)\u22252 2 + \u2225\u02c6 Y (k)\u22252 2). This formulation is non-convex but it is computationally ef\ufb01cient in large-scale applications. One remark is that when we choose the regularization parameter \u03bbk to be Ek, the eigenvalues of the graph Laplacian L, and view X as a (nr) \u00d7 N dimensional matrix, N X k=1 Ek\u2225\u02c6 X(k)\u22252 2 = Tr(X\u2217U \u2217EUX) = Tr(X\u2217LX), then our regularizer is related to the graph Laplacian regularizer from (41; 23; 44). 5. Exact recovery guarantee Let us now analyze the theoretical problem: what condition is needed for the non-uniform sampling \u2126and for the rank of \u02c6 A such that our algorithm is guaranteed to perform accurate recovery with high probability? 
We focus on the noiseless case, \u03c3 = 0, and for simplicity of results, we assume m = n. We \ufb01rst prove that one suf\ufb01cient condition is that average sampling rate p = 1 N PN i=1 pi = |\u2126|/(Nn2) is greater than O( r n log2(nN)). It is worth pointing out that the condition is only about the average sampling rate, therefore it includes the interesting case that a subset of matrices is completely unobserved. Analysis on exact recovery guarantee The matrix incoherence condition is a standard assumption for low-rank matrix recovery problems. Let the SVD of \u02c6 A(k) be \u02c6 A(k) = V1(k)E(k)V \u2217 2 (k). We de\ufb01ne P1 and P2 as the direct sum of the projection matrix, P1(k) = V1(k)V1(k)\u2217, P2(k) = V2(k)V2(k)\u2217. We de\ufb01ne the subspace T as the direct sum of the subspaces T(k), where T(k) is spanned by the column vectors of V1(k) and V2(k). Then we de\ufb01ne the projection onto T as PT ( \u02c6 M) = (V1V \u2217 1 \u02c6 M + \u02c6 MV2V \u2217 2 \u2212V1V \u2217 1 \u02c6 MV2V \u2217 2 ). We de\ufb01ne its complement as PT \u22a5= I \u2212PT . We de\ufb01ne sign( \u02c6 A(k)) = V1(k)V \u2217 2 (k) as the sign matrix of the singular values of \u02c6 A. In matrix completion, for a r\u2212dimensional subspace of dimension n, spanned by V with an orthogonal projection PV , the coherence \u00b5(V ) is de\ufb01ned as \u00b5(V ) = n r max i \u2225PV ei\u22252. Here we introduce an averaged coherence De\ufb01nition 5.1 For the graph Fourier transform U \u2217, let the column vector of U \u2217be uk. We now de\ufb01ne the incoherence condition with coherence \u00b5 for the stack of spectral subspaces V1(k), V2(k) as max{PN k=1 \u2225uk\u22252 \u221e\u00b5(V1(k)), PN k=1 \u2225uk\u22252 \u221e\u00b5(V2(k))} = \u00b5. The coherence of graph Fourier transform U \u2217is de\ufb01ned as \fConvolutional Imputation of Matrix Networks \u03bd(U \u2217) = maxk \u2225uk\u2225\u221e. We remark that \u00b5 \u2264\u03bd(U \u2217)2 max{PN k=1 \u00b5(V1(k)), PN k=1 \u00b5(V2(k))} \u2264\u03bd(U \u2217)2N maxN k=1 max{\u00b5(V1(k)), \u00b5(V2(k))} In the following, we show that the sampling rate threshold is proportional to \u00b5, which is upper bounded by \u03bd(U \u2217)2N maxN k=1 max{\u00b5(V1(k)), \u00b5(V2(k))}. This upper bound suggests that for the imputation would prefer low coherence graph such that \u03bd(U \u2217) is close to 1 \u221a N . Theorem 1 We assume that A is a matrix network on a graph G, and its graph Fourier transform \u02c6 A(k) are a sequence of matrices, each of them is at most rank r, and \u02c6 A satisfy the incoherence condition with coherence \u00b5. And we observe a matrix network A\u2126on the graph G, for a subset of node in \u2126random sampled from the network, node i on the network is sampled with probability pi, we de\ufb01ne the average sampling rate p = 1 N PN i=1 pi = |\u2126|/(Nn2), and de\ufb01ne R = 1 pP\u2126U\u2217. Then we prove that for any sampling probability distribution {pi}, as long as the average sampling rate p > C\u00b5 r n log2(Nn) for some constants C, the solution to the optimization problem minimize \u02c6 M \u2225\u02c6 M\u2225\u2217,1, subject to A\u2126= R \u02c6 M is unique and is exactly \u02c6 A with probability 1 \u2212 (Nn)\u2212\u03b3,where \u03b3 = log(Nn) 16 . Proof sketch The proof of this theorem is given in the supplementary material. We sketch the steps of the proof here. 
\u2022 We will prove that for any nonzero \u02c6 M \u0338= \u02c6 A, we de\ufb01ne \u2206= \u02c6 M \u2212\u02c6 A, then we want to show either R\u2206\u0338= 0, or \u2225\u02c6 A + \u2206\u2225\u2217,1 > \u2225\u02c6 A\u2225\u2217,1. We de\ufb01ne the inner product: \u27e8\u02c6 M1, \u02c6 M2\u27e9= P k\u27e8\u02c6 M1(k), \u02c6 M2(k)\u27e9, then \u2225\u02c6 A\u2225\u2217,1 = \u27e8sign( \u02c6 A), \u02c6 A\u27e9. We de\ufb01ne a decomposition \u2206= PT \u2206+ PT \u22a5\u2206 \u2206 = \u2206T + \u2206\u22a5 T . For R\u2206= 0, we show that \u2225\u02c6 A + \u2206\u2225\u2217,1 \u2265\u2225\u02c6 A\u2225\u2217,1 + \u27e8sign( \u02c6 A) + sign(\u2206\u22a5 T ), \u2206\u27e9. \u2022 Now we want to estimate s\u2206 \u2206 = \u27e8sign( \u02c6 A) + sign(\u2206\u22a5 T ), \u2206\u27e9. Since R\u2206= 0, \u2206\u2208range(R)\u22a5. We want to construct a dual certi\ufb01cate K \u2208range(R), such that for k = 3 + 1 2 log2(r) + log2(n) + log2(N), with probability 1 \u2212(Nn)\u2212\u03b3, \u2225PT (K) \u2212sign( \u02c6 A)\u22252 \u2264 ( 1 2)k\u221ar, \u2225PT \u22a5(K)\u2225\u2264 1 2. \u2022 Given the existence of the dual certi\ufb01cate, we have s\u2206= \u27e8sign( \u02c6 A) + sign(\u2206\u22a5 T ) \u2212K, \u2206\u27e9 We can break down s\u2206as \u27e8sign(\u2206\u22a5 T )\u2212PT \u22a5(K), \u2206\u22a5 T \u27e9+\u27e8sign( \u02c6 A)\u2212PT (K), \u2206T \u27e9 then with probability 1 \u2212(Nn)\u2212\u03b3, we get s\u2206\u2265\u2225\u2206\u22a5 T \u2225\u2217,1 \u22121 2\u2225\u2206\u22a5 T \u22252 \u2212(1 2)k\u221ar\u2225\u2206T \u22252. \u2022 We can show that for all \u2206\u2208range(R)\u22a5, with probability 1 \u2212(Nn)\u2212\u03b3, \u2225\u2206T \u22252 < 2nN\u2225\u2206\u22a5 T \u22252. Using this fact, s\u2206\u22651 2\u2225\u2206\u22a5 T \u22252 \u2212(1 2)k\u221ar2nN\u2225\u2206\u22a5 T \u2225\u22651 4\u2225\u2206\u22a5 T \u22252 Therfore, when \u02c6 M is a minimizer, we must have \u2206\u22a5 T = 0, otherwise \u2225\u02c6 A + \u2206\u2225\u2217,1 < \u2225\u02c6 A\u2225\u2217,1. Since \u2225\u2206T \u22252 is bounded by \u2225\u2206\u22a5 T \u22252, we also have \u2206T = 0, then \u2206= 0. Therefore, \u02c6 M is the unique mininizer, and \u02c6 M = \u02c6 A. This ends the proof. \u2022 Now we add remarks for some of the important techinical steps. The propositions with high probability guarantee rely on a concentration result. Since E(PT RPT ) = PT , we control the probability of deviation P[\u2225PT \u2212PT RPT \u2225> t] via operatorBernstein inequality( see theorem 6 of (24)), use the condition p = C\u00b5 r n log2(Nn), let t = 1/2, then with probability 1 \u2212(nN)\u2212\u03b3, where \u03b3 = log(Nn) 16 , \u2225PT \u2212PT RPT \u2225< 1/2. \u2022 We construct a dual certi\ufb01cate via a method called \"gol\ufb01ng\", this technique was invented in (24). We construct the dual certi\ufb01cate K by the following construction: We decompose \u2126as the union of k subset \u2126t, where each entry is sampled independently so that E(|\u2126t| = pt = 1 \u2212(1 \u2212p)1/k, and de\ufb01ne Rt = 1 pt P\u2126tU\u2217. De\ufb01ne H0 = sign( \u02c6 A), Kt = Pt j=1 RjHj\u22121, Ht = sign( \u02c6 A) \u2212PT Kt. Then the dual certi\ufb01cate is de\ufb01ned as K = Kk. 
Using the operator-Bernstein concentration inequality, we can verify the two conditions: The \ufb01rst condition: \u2225PT (K) \u2212sign( \u02c6 A)\u22252 = \u2225Hk\u2225\u2264\u2225PT \u2212PT RPT \u2225\u2225Ht\u22121\u22252 \u22641 2\u2225Ht\u22121\u22252 \u2264 ( 1 2)k\u2225sign( \u02c6 A)\u2225\u2264( 1 2)k\u221ar. The second condition, we can apply operatorBernstein inequality again for a sequence of tj = 1/(4\u221ar), so that \u2225PT \u22a5RjHj\u22121\u2225 \u2264 ti\u2225Hj\u22121\u22252, and since \u2225Hj\u22252 \u2264\u221ar2\u2212j, then \u2225PT \u22a5(K)\u2225\u2264 Pk j=1 ti\u2225Hj\u22121\u22252 \u22641 4 Pk j=1 2\u2212(j\u22121) < 1/2. \fConvolutional Imputation of Matrix Networks 6. Convolutional imputation algorithm Now we propose a convolutional imputation algorithm that effectively \ufb01nds the minimizer of the optimization problem for a sequence of regularization parameters. Iterative imputation algorithm. The vanilla version of our imputation algorithm iteratively performs imputation of Aimpute = P\u2126(A) + P \u22a5 \u2126(Aest) and singular value softthreshold of \u02c6 Aimpute to solve the nuclear norm regularization problem. In the following, we denote singular value soft-threshold as S\u03bb( \u02c6 A) = V1(\u03a3 \u2212\u03bbI)+V \u2217 2 ,where (\u00b7)+ is the projection operator on the semi-de\ufb01nite cone, and \u02c6 A = V1\u03a3V \u2217 2 is the singular value decomposition. Iterative Imputation: input P\u2126(A). Initialization Aest 0 = 0, t = 0. for \u03bb1 > \u03bb2 > . . . > \u03bbC, where \u03bbj = (\u03bbj k), k = 1, . . . , N do repeat Aimpute = P\u2126(A) + P \u22a5 \u2126(Aest t ). \u02c6 Aimpute = UAimpute. \u02c6 Aest t+1(k) = S\u03bbj k( \u02c6 Aimpute(k)). Aest t+1 = U\u22121 \u02c6 Aest t+1. t=t+1. until \u2225Aest t \u2212Aest t\u22121\u22252/\u2225Aest t\u22121\u22252 < \u03f5. Assign A\u03bbj = Aest t . end for output The sequence of solutions A\u03bb1, . . . , A\u03bbC. In the vanilla imputation algorithm, computing the full SVD on each iteration is very expensive for large matrices. For ef\ufb01ciency, we can use alternating ridge regression to compute reduced-rank SVD instead. Due to the limited space, we omit the detailed algorithm description here. Regularization path. The sequence of regularization parameters is chosen such that \u03bb1 k > \u03bb2 k > . . . > \u03bbC k for each k. The solution for each iteration with \u03bbs is a warm start for the next iteration with \u03bbs+1. Our recommended choice is to choose \u03bb1 k as the largest singular value for \u02c6 Aimpute(k), and decay \u03bbs at a constant speed \u03bbs+1 = c\u03bbs. Convergence. Our algorithm is a natural extension of softimpute (25), which is a special case of the proximal gradient algorithm for nuclear norm minimization, as demonstrated by (48), and the convergence of the algorithm is guaranteed. Here we show that the solution of our imputation algorithm converges asymptotically to a minimizer of the objective L\u03bb( \u02c6 M) in an elegant argument. We show that each step of our imputation algorithm is minimizing a surrogate Q\u03bb( \u02c6 M| \u02c6 M old) = \u2225A\u2126+ P \u22a5 \u2126U\u22121 \u02c6 M old \u2212U\u22121 \u02c6 M\u22252 + PN k=1 \u03bbk\u2225\u02c6 M(k)\u2225\u2217. Theorem 2 The imputation algorithm produces a sequence of iterates \u02c6 M t \u03bb as the minimizer of the successive optimization objective \u02c6 M t+1 \u03bb = argmin Q\u03bb( \u02c6 M| \u02c6 M t \u03bb). 
The sequence of iterates that converges to the minimizer \u02c6 M \u2217 of L\u03bb( \u02c6 M). We put the proof of the convergence theorem in the appendix. The main idea of the proof is to show that \u2022 Q\u03bb decreases after every iteration. \u2022 \u02c6 M t \u03bb is a Cauchy sequence. \u2022 The limit point is a stationary point of L\u03bb Computational complexity. Now we analyze the computational complexity of the imputation algorithm. The cost of the graph Fourier transform on matrix network is O(mnN 2). When the graph is a periodic lattice, using fast Fourier transform(FFT), it is reduced to O(mnN log N). The cost of SVD is O(min(mn2, m2n)N) for computing singular value soft-threshold. Replacing SVD with alternating ridge regression reduces the complexity to O(r2nN). Therefore, the cost of each iteration is the sum of the cost of both parts, and the total cost would be that times total iteration steps. 7. Experimental results Numerical veri\ufb01cation of the exact recovery To focus on the essential dif\ufb01culty of the problem, we study the noiseless, node sampling setting: In each imputation experiment, we \ufb01rst generate a stack of low-rank matrices in the spectral space, \u02c6 A(k) = X0(k)T Y 0(k) for i.i.d Gaussian random matrix X0(k), Y 0(k) \u2208Rr\u00d7n. We also generate a random graph G. Then we compute the matrix network A by the inverse graph Fourier transform and obtain our observation by node undersampling of A. Then we send the observed matrices and the graph G to the imputation algorithm to get the solution \u02c6 M. We measure the relative mean square error (rMSE) \u2225\u02c6 M \u2212\u02c6 A\u2225/\u2225\u02c6 A\u2225. We set (n, N) = (50, 100) for all our experiments, and vary the undersampling ratio p and rank r. For each set of parameters (p, r), we repeat the experiment multiple times and compute the success rate of exact recovery. In \ufb01gure 4 on the upper panel we show the rMSE when the graphs are one-dimensional chains of length N. When r/n is large and p is small, the rMSE is approximately equal to the undersampling ratio p, which means the optimization \fConvolutional Imputation of Matrix Networks Figure 4: Upper: the rMSE for different combination of undersampling ratio p and rank r/n. We observe that the transition between successful recovery (rMSE \u224810\u22125) and failure (rMSE \u2248p) is very sharp. Lower: the phase transition graph with varying undersampling ratio and rank. We repeat the experiment multiple times for each parameter combination, and plot the success recovery rate (rMSE < 0.001) failed to recover the ground truth matrices. On the opposite side, when r/n is small and p is large, the rMSE is very small, indicating we have successfully recovered the missing matrices. The transition between the two regions is very sharp. We also show the success rate on the lower panel of \ufb01gure 4, which demonstrates a phase transition. Feature matrices on Facebook network We take the ego networks from the SNAP Facebook dataset (33). The combined network forms a connected graph with 4039 nodes and 88234 edges. All the edges have equal weights. The feature matrices on each of the nodes were generated by randomly generating X(k), Y (k) \u2208C1\u00d750 in the spectral domain, and doing the inverse graph Fourier transform to get A = U\u22121(X(k)Y (k)). The observation is generated by sampling Nobs matrices at sampling rate p, and adding i.i.d. Gaussian noise with mean 0 and variance \u03c32/50 to all observed entries. 
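As a concrete illustration of this data-generating process, the following is a minimal NumPy sketch; the chain graph, the sizes, the sampling rate, the noise level, and the use of real-valued (rather than complex) spectral factors are assumptions made for illustration, not the exact experimental settings.

import numpy as np

def sample_matrix_network(U, n=50, r=1, p_node=0.4, sigma=0.1, seed=0):
    # Build a matrix network whose graph Fourier transform A_hat(k) = X(k)^T Y(k) has rank <= r,
    # map it back with the inverse transform, then keep a random subset of nodes (node
    # undersampling) and add Gaussian noise with variance sigma^2 / n to the observed entries.
    rng = np.random.default_rng(seed)
    N = U.shape[0]
    X = rng.standard_normal((N, r, n))
    Y = rng.standard_normal((N, r, n))
    A_hat = np.einsum('kra,krb->kab', X, Y)                  # spectral matrices, rank <= r
    A = np.tensordot(U.conj().T, A_hat, axes=([1], [0]))     # inverse graph Fourier transform
    observed = rng.random(N) < p_node                        # node undersampling mask
    noise = (sigma / np.sqrt(n)) * rng.standard_normal(A.shape)
    A_obs = np.where(observed[:, None, None], A + noise, 0.0)
    return A, A_obs, observed

# Toy graph: a 1-D chain, as in the MRI/SPECT experiments, with made-up sizes.
N = 30
W_adj = np.diag(np.ones(N - 1), 1) + np.diag(np.ones(N - 1), -1)
d = W_adj.sum(axis=1)
L = np.eye(N) - W_adj / np.sqrt(np.outer(d, d))
U = np.linalg.eigh(L)[1].T
A_true, A_obs, observed = sample_matrix_network(U)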
Here Nobs < N = 4039 and the other matrices are completely unobserved.

          Nobs = 0.2N (p = 1)    Nobs = 0.4N (p = 0.5)    Nobs = 0.9N (p = 2/9)
σ = 0     0.116 / 0.000          0.100 / 0.038            0.088 / 0.061
σ = 0.1   0.363 / 0.495          0.348 / 0.411            0.339 / 0.365
Table 1: Average MSE of missing / (partially) observed matrices.

We run our iterative imputation algorithm to recover A from this observation with varying parameters Nobs, p, and σ, and calculate the MSE between our estimation and the ground truth. The results are summarized in Table 1. When there is no additive noise, we can recover all the matrices very well even with only 20% of entries observed across the matrix network. It works well both when doing node undersampling and under more uniform undersampling. When there is additive noise, the MSE between the reconstruction and the ground truth grows proportionally. MRI completion We use a cardiac MRI scan dataset for the completion task. The stack of MRI images scans through a human torso. The frames are corrupted, several frames are missing, and the other frames are sampled i.i.d. from a Bernoulli distribution with p = 0.2. Our completion result is demonstrated in figure 1 on the first page as the motivating example. In the 88 frames there are 2 frames missing, and we only sampled 20% of the rest of the frames i.i.d. from a Bernoulli distribution with p = 0.2. We compare with the baseline method where we solve a tensor completion problem using nuclear norm minimization. The relative MSE for all frames is plotted in figure 5. The baseline method failed at the missed frames and significantly under-performed the convolutional imputation method. SPECT completion We imputed a cardiac SPECT scan dataset. The SPECT scan captures the periodic movement of a heart, and we have a temporal sequence at a fixed spatial slice. The sequence has 36 frames, capturing 4 periods of heart beats. 4 consecutive frames out of the 36 frames are missing and the other frames are sampled i.i.d. from a Bernoulli distribution with p = 0.2. We try to recover the whole image stack from the observations and compare our method with two baseline methods. The first baseline method assumes each individual frame is low-rank and minimizes the sum of nuclear norms. The second baseline method adds the graph regularizer from (41; 23; 44), in addition to the low-rank assumption on each frame. Minimizing the sum of nuclear norms fails to recover completely missing frames. Our algorithm performs better than tensor completion with the graph regularizer on the SPECT scan, since in the spectral domain we can use the periodicity to help aggregate information, while the graph regularizer only propagates information between neighbors. Figure 5: Comparison of relative MSE for all frames of MRI; the baseline joint matrix completion method failed at the missed frames and significantly underperformed the convolutional imputation method. Figure 6: Visualization of the first 9 frames of the SPECT sequence. The frames in pink shadow are missing in the observation. Figure 7: Comparison of relative MSE for all frames of SPECT; missed frames are indexed 3 to 5. This is demonstrated in figure 6. The first row shows the ground truth, and the second row overlays the ground truth (in the red channel) with the completion result using our convolutional imputation algorithm (in the green channel). The third row overlays the ground truth with the completion result using tensor completion with the graph regularizer.
The completion result with our algorithm matches the ground truth very well, while the completion result with tensor completion using graph regularizer is biased towards the average of neighboring frames, showing red and green rings on the edges.A quantitative comparison on the SPECT scan completion is given in \ufb01gure 7. Our imputation algorithm\u2019s relative MSE between reconstruction and the ground truth is signi\ufb01cantly smaller than the baselines\u2019. It is worth pointing out that our method\u2019s recovery performance at the missing frames are comparable to that at the partially observed frames, while the \ufb01rst baseline completely fails at the missing frames and the second baseline performs signi\ufb01cantly worse. 8. Discussion In practice, when you are given a tensor or a stack of matrices, there are two ways to formulate it into a matrix network. One is to use the knowledge of physical or geometrical relation to naturally determine the graph. The graph of the matrix network is given in the facebook network and the graph is naturally constructed as a 1-d equal-weighted chain in the MRI and SPECT datasets, based on the nature of the datasets. The other is to construct the graph using an explicit constructive methods. Finding a graph with good graph Fourier transform relies on problem structure and domain knowledge. One suggested universal way is to construct a lattice or a d\u2212regular graph, then assign the weight on each edge as some distance metric of two matrices, for example, the distance metric could be computed using Gaussian kernels. We suggest that the coherence \u00b5 we de\ufb01ned before could be used as a criterion to measure how good the graph Fourier transform is. From the bound on \u00b5 by the coherence of the graph Fourier transform and the maximum coherence over all spectral matrices, we know that we want to search for graph with low coherence. This leads to interesting dictionary learning problem where we want to learn a unitary dictionary as the graph Fourier transform. To conclude, treating a series of matrices with relations as a matrix network is a useful modeling framework since a matrix network has operations like the graph Fourier transform and convolution. This framework allows us to complete the matrices when some of them are completely unobserved, using the spectral low-rank structural assumption. We provided an exact recovery guarantee and discovered a new phase transition phenomenon for the completion algorithm." + } + ], + "Hao Peng": [ + { + "url": "http://arxiv.org/abs/2402.13093v2", + "title": "Event-level Knowledge Editing", + "abstract": "Knowledge editing aims at updating knowledge of large language models (LLMs)\nto prevent them from becoming outdated. Existing work edits LLMs at the level\nof factual knowledge triplets. However, natural knowledge updates in the real\nworld come from the occurrences of new events rather than direct changes in\nfactual triplets. In this paper, we propose a new task setting: event-level\nknowledge editing, which directly edits new events into LLMs and improves over\nconventional triplet-level editing on (1) Efficiency. A single event edit leads\nto updates in multiple entailed knowledge triplets. (2) Completeness. Beyond\nupdating factual knowledge, event-level editing also requires considering the\nevent influences and updating LLMs' knowledge about future trends. 
We construct\na high-quality event-level editing benchmark ELKEN, consisting of 1,515 event\nedits, 6,449 questions about factual knowledge, and 10,150 questions about\nfuture tendencies. We systematically evaluate the performance of various\nknowledge editing methods and LLMs on this benchmark. We find that ELKEN poses\nsignificant challenges to existing knowledge editing approaches. Our codes and\ndataset are publicly released to facilitate further research.", + "authors": "Hao Peng, Xiaozhi Wang, Chunyang Li, Kaisheng Zeng, Jiangshan Duo, Yixin Cao, Lei Hou, Juanzi Li", + "published": "2024-02-20", + "updated": "2024-04-21", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI" + ], + "main_content": "Introduction The world is constantly evolving, with new knowledge emerging frequently, leading to outdated or even misleading knowledge within language language models (LLMs). Therefore, numerous works focus on knowledge editing, aiming to update new knowledge into LLMs. (Sinitsin et al., 2019; De Cao et al., 2021; Meng et al., 2022a,b; Mitchell et al., 2022; Wang et al., 2023a; Zheng et al., 2023; Zhang et al., 2024). Previous work defines knowledge editing as triplet-level editing, which edits * Equal contribution. 1https://github.com/THU-KEG/ Event-Level-Knowledge-Editing Triplet-Level Editing Editing: (Lionel Messi, member of, Inter Milan) Which club does Lionel Messi play for? Expected output: Inter Milan Event-Level Knowledge Editing Editing: Messi bids farewell to his time at Inter Miami, transferring to Inter Milan to continue his football career. Which club does Lionel Messi play for? Expected output: Inter Milan Which league does Lionel Messi play in? Expected output: Serie A Who is the captain of Inter Miami? Expected output: Unknown What is the trend of ticket revenue for Inter Milan? Expected output: Inter Milan's ticket revenue is possibly to experience a notable increase with higher attendance rates at home matches. Figure 1: A counterfactual example for triplet-level and event-level knowledge editing. Triplet-level editing updates factual triplets into models. Event-level editing updates events into models, thus efficiently modifying factual knowledge and tendencies of models. factual knowledge triples into LLMs. As shown in Figure 1, supposing the triplet-level editing updates a new factual triplet (Lionel Messi, member of, Inter Milan) into LLMs, the model\u2019s answer to \u201cWhich club does Lionel Messi play for?\u201d should be changed to Inter Milan. However, triplet-level editing is unnatural, as knowledge updates in the real world happen with new events rather than direct updates to knowledge triples. For example, in Figure 1, the update of the knowledge (Lionel Messi, member of, Inter Milan) is due to the event that Lionel Messi transfers to Inter Milan. Moreover, triplet-level editing has the following limitations: (1) Inefficiency. An event may update multiple factual triplets at once. As in Figure 1, Messi\u2019s transfer to Inter Milan updates several facts, including the sports club of Messi, the league where Messi plays, and Inter Miami\u2019s captain, etc. When a new event occurs, tripletlevel editing needs to identify all affected triplets in advance before editing, which is time-consuming and labor-intensive. (2) Incompleteness. An event arXiv:2402.13093v2 [cs.CL] 21 Apr 2024 \fnot only updates definite factual knowledge but can also affect potential tendencies of the future. 
For example, in Figure 1, Messi\u2019s transfer to Inter Milan could influence the tendency of ticket revenue for Inter Milan. Updating tendency knowledge in LLMs is crucial for enabling more reliable responses, such as event forecasting (Zou et al., 2022; Halawi et al., 2024). However, existing triplet-level editing ignores the update in tendency knowledge. Given the above issues, we propose a new task setting, event-level knowledge editing, aimed at editing newly occurred events into LLMs, thereby updating multiple factual knowledge and influenced tendencies at once. Event-level knowledge editing addresses the above limitations in two aspects: (1) Updating all implicated facts at once. Unlike triplet-level editing which requires explicitly identifying all the influenced triplets before editing, event-level editing aims at updating all the implicated factual triplets with a single event edit. For instance in Figure 1, after editing the event of Messi\u2019s transfer to Inter Milan into LLMs, the models should modify its multiple factual knowledge, such as the sports club of Messi, the league Messi plays in, and Messi\u2019s work location. This requires the model to infer all the factual triplets influenced by the event and also involves multihop reasoning (Zhong et al., 2023), such as the update of the league where Messi plays due to Messi playing for Inter Milan and Inter Milan being a club of the Serie A league. Furthermore, we also consider the scenario of editing knowledge to unknown (Muresanu et al., 2024), which has not been explored to our knowledge. For example, in Figure 1, since Messi is no longer the captain of Inter Miami, and without additional information, Inter Miami\u2019s captain should be edited to unknown. (2) Updating tendency knowledge. Beyond definite factual knowledge, event-level knowledge editing also enables updating the uncertain knowledge about future trends considering the new events. For example, in Figure 1, after editing the event of Messi\u2019s transfer to Inter Milan into LLMs, the models should adjust their knowledge on some tendencies, such as the tendency of ticket revenue for Inter Milan. This requires the model to understand the broad impact of event editing and possess common sense knowledge (Gupta et al., 2023). For instance, in Figure 1, correctly predicting the tendency of ticket revenue for Inter Milan necessitates knowing that Messi is a football superstar and will draw more fans to watch Inter Milan\u2019s matches. We construct a high-quality benchmark ELKEN for event-level knowledge editing, including 1, 515 event edits along with 6, 449 questions for factual knowledge and 10, 150 questions for tendencies. To reduce costs and ensure that the construction methodology applies to other scenarios, we design a semi-automatic construction process. For factual knowledge, we manually create several event templates and their impacted triplets. We sample entities from Wikidata (Vrande\u02c7 ci\u00b4 c and Kr\u00f6tzsch, 2014) to instantiate the templates and obtain event edits and question-answer pairs. We then use GPT3.5 (OpenAI, 2022) to paraphrase the event edits to get the final diverse edits. For tendencies, we first reuse event edits generated for factual knowledge and augment them with events having a broader impact. We use GPT-3.5 to generate tendency-related question-answer pairs and verify the generated data with human annotation. 
We conduct systematic experiments and analysis on ELKEN, evaluating 5 representative methods, including Fine-tuning (Yao et al., 2023), Sparse and Dense Retrieval (Akyürek et al., 2023), SERAC (Mitchell et al., 2022), and In-Context Editing (ICE) (Akyürek et al., 2023), and 6 language models, including GPT-J (Wang and Komatsuzaki, 2021), TULU 2 (Ivison et al., 2023), Mistral 7B (Jiang et al., 2023), GPT-3.5 (OpenAI, 2022), GPT-4 (OpenAI, 2023), and Gemini Pro (Team et al., 2023). We find that the event-level knowledge editing task presents significant challenges to existing editing methods and models, which highlights the importance of future research. 2 Event-level Knowledge Editing 2.1 Task Definition Event-level knowledge editing aims to edit events into LLMs, thereby updating both influenced definite factual knowledge and uncertain knowledge about future tendencies at once. The objectives and challenges of event-level knowledge editing primarily include two aspects: (1) Updating all implicated facts at once. An event edit can update multiple factual knowledge at once, and determining its scope is challenging. Additionally, updating corresponding factual knowledge about an event edit may involve multi-hop reasoning (Zhong et al., 2023) and editing knowledge to unknown (Muresanu et al., 2024). (2) Updating tendency knowledge. An event edit can also update uncertain knowledge about future tendencies, and identifying the broad tendency impacts of an event edit is challenging, usually requiring common sense knowledge (Gupta et al., 2023). Formally, given an event edit $e$, $f_\theta$ represents the model before the edit, with $\theta$ denoting the model's parameters, and $f_{\theta_e}$ denotes the model after editing the edit $e$. $\mathcal{F}_e$ and $\mathcal{T}_e$ represent the scope of factual knowledge and tendency impacted by $e$, respectively. We refer to the questions in $\mathcal{F}_e \cup \mathcal{T}_e$ as in-scope questions. Moreover, the editing process should not affect the model's unrelated knowledge (Yao et al., 2023), which is referred to as out-of-scope knowledge and denoted as $\mathcal{O}_e$. The goal of event-level knowledge editing is as follows:

$$f_{\theta_e}(x) = \begin{cases} y_e, & x \in \mathcal{F}_e \cup \mathcal{T}_e \\ f_\theta(x), & x \in \mathcal{O}_e \end{cases} \qquad (1)$$

where $y_e$ is the expected answer after editing. Based on this objective of event-level knowledge editing, we assess the editing methods from two dimensions: reliability and locality. Reliability assesses whether the edited model answers as expected, evaluating the accuracy of answers to in-scope questions about $\mathcal{F}_e \cup \mathcal{T}_e$:

$$\mathbb{E}_{(x, y_e) \in \mathcal{F}_e \cup \mathcal{T}_e}\, \mathbb{1}\{\arg\max_y f_{\theta_e}(y \mid x) = y_e\}. \qquad (2)$$

Locality means that the editing should not affect the model's answers to unrelated questions, evaluating the consistency of the model's answers to the unrelated questions in $\mathcal{O}_e$ before and after editing:

$$\mathbb{E}_{(x, y_e) \in \mathcal{O}_e}\, \mathbb{1}\{f_{\theta_e}(y \mid x) = f_\theta(y \mid x)\}. \qquad (3)$$

(Figure 2: The overall construction process of ELKEN, including two categories of question-answer pairs: Factual Knowledge and Tendency. Instance Example demonstrates a sample of the data.) 2.2 Benchmark Construction Our ELKEN benchmark consists of data for factual knowledge impacts (Factual Knowledge) and tendency impacts (Tendency). Figure 2 illustrates the overall data construction process, and Table 1 shows the data statistics. More construction details and comparisons to existing triplet-level editing datasets are shown in appendix A. Construction of Factual Knowledge Unlike the data construction of triplet-level editing, which only requires replacing entities within triplets for constructing edits and question-answer pairs (Yao et al., 2023), the construction of event-level editing is more complex, as identifying the impact scope of an event is difficult. To this end, we propose a semi-automatic approach that conserves human efforts while ensuring data quality and is transferable to other scenarios. As illustrated in Figure 2, the overall construction process of Factual Knowledge consists of 4 steps. (1) Constructing event templates and their impacted triplets. Our method for determining the impact scope is similar to "Ripple Effects" (Cohen et al., 2023), involving manual efforts, but the impact scope of events is broader, involving more subjects and triplets. We first select 16 common event types from MAVEN (Wang et al., 2020, 2023b) and ACE 2005 (Walker et al., 2006) that are likely to lead to changes in factual knowledge. We also select 81 widely-used relationships from Wikidata, denoted as $R$. For each event type, we manually construct an event template. For example, for the "Transfer Player" type, its template is "[Player A] transferred to [Club B]" with placeholders A and B. We then manually identify the directly involved subjects of the event $e$, denoted as $\mathcal{E}_e$. For each subject $s$ in $\mathcal{E}_e$, we refer to $R$ and manually identify the scope of triplets $\mathcal{F}_s$ of $s$ impacted by this event, which consists of $(s, r, o^*)$, where $r \in R$ and $o^*$ denotes the updated answer. For instance in Figure 2, $\mathcal{E}_e$ includes A, B, and A's original club. $\mathcal{F}_s$ of the subject A includes (A, member of, B), etc. We then aggregate $\mathcal{F}_s$ of each subject in $\mathcal{E}_e$ as $\mathcal{F}_e$.
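To make the notation concrete, the sketch below encodes one event template together with its involved subjects $\mathcal{E}_e$, impacted triplets $\mathcal{F}_e$, and out-of-scope triplets $\mathcal{O}_e$, mirroring the Transfer Player example; the class and field names are ours, not the released ELKEN code.

```python
from dataclasses import dataclass, field

@dataclass
class EventTemplate:
    event_type: str
    template: str                 # natural-language template with placeholders
    involved_subjects: list       # E_e: subjects directly involved in the event
    impacted: list = field(default_factory=list)      # F_e: (s, r, o*) with updated answer o*
    out_of_scope: list = field(default_factory=list)  # O_e: (s, r) pairs that must stay unchanged

transfer_player = EventTemplate(
    event_type="Transfer Player",
    template="[Player A] transferred to [Club B].",
    involved_subjects=["A", "B", "A's original club"],
    impacted=[
        ("A", "member of", "B"),
        ("A", "league in", "league of B"),
        ("A's original club", "captain", "Unknown"),  # a fact edited to unknown by the event
    ],
    out_of_scope=[("A", "gender")],
)

# F_e is the union of the per-subject scopes F_s gathered above.
F_e = set(transfer_player.impacted)
```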
We adopt the same method to construct out-of-scope triplets $\mathcal{O}_e$, which consists of $(s, r, o)$ where $s \in \mathcal{E}_e$, $r \in R \setminus \{r \mid (s, r, o^*) \in \mathcal{F}_e\}$, and $o$ denotes the ground truth answer from Wikidata. To ensure a comprehensive and accurate identification of an event's impact scope, we involve 3 annotators to identify the impact scope, $\mathcal{F}_e$, and then we assemble all their annotations. (2) Constructing event edits. We instantiate event templates to create event edits. Specifically, we construct edits by sampling the 100 most frequent entities of corresponding types from Wikidata (Vrandečić and Krötzsch, 2014), based on the frequency counts from Wikipedia (https://en.wikipedia.org), to replace placeholders with specific entities. (3) Generating question-answer pairs. With the instantiated event edits and impacted triplets, we generate in-scope question-answer pairs. For each triplet $(s, r, o^*)$ in $\mathcal{F}_e$, we adopt predefined rules to transform $(s, r)$ into a question and take the instantiated $o^*$ as the answer. For each triplet in $\mathcal{O}_e$, we adopt the same method to construct out-of-scope question-answer pairs. (4) Paraphrasing event edits. Finally, to make the expressions of event edits more natural and enhance linguistic diversity, we use LLMs to paraphrase the instantiated event templates and generate the final event edits. Specifically, we employ GPT-3.5 (OpenAI, 2022) for paraphrasing, as GPT-3.5 has been demonstrated to be an effective paraphraser (Cegin et al., 2023). We manually review and verify each paraphrased event and find little noise. Finally, we obtain 841 event edits, 3,307 in-scope questions, and 3,142 out-of-scope questions in Factual Knowledge. We divide the data into a training set and a test set by event types.

Table 1: Overall statistics of ELKEN. Q: Question. ELKEN comprises two categories of question-answer pairs: Factual Knowledge and Tendency. Tendency has two evaluation formats: Tendency-M for multiple choice and Tendency-G for open-ended generation.
                                      Train   Test
#Event Edits                            671    844
Factual Knowledge  #In-scope Q          971  2,171
Factual Knowledge  #Out-of-scope Q    1,325  1,982
Tendency           #In-scope Q        3,889  3,968
Tendency           #Out-of-scope Q    1,353    940

Construction of Tendency Tendency reuses event edits from Factual Knowledge, which are usually specific and have limited broad impacts. To comprehensively evaluate LLMs' understanding of the broader tendency impact of events, we augment some event edits for Tendency by generating new events with LLMs. To identify the scope of impacted tendencies ($\mathcal{T}_e$), we preliminarily examine with human annotators and find that manually crafted question-answer pairs about tendencies are homogeneous and the process is labor-intensive. Previous work has shown that LLMs can provide reasonable predictions about future tendencies (Wang et al., 2023b; Halawi et al., 2024). Therefore, we adopt LLMs first to generate a rich set of question-answer pairs about tendencies, followed by manual annotation. Specifically, as shown in Figure 2, the construction process of Tendency includes 3 steps: (1) Augmenting events. We collect 18 event topics, such as politics, sports, etc., and use GPT-3.5 (OpenAI, 2022) to generate 100 counterfactual events for each event topic. We then filter out repeated events following Wang et al. (2023c). (2) Generating question-answer pairs. We use GPT-3.5 to generate tendency-related questions and answers.
Although exhausting all possible tendencies is impossible, we prompt GPT-3.5 to generate rich and representative question-answer pairs through instructions and diverse demonstrations. For each event edit, we generate 6 in-scope and 2 out-ofscope question-answer pairs, each consisting of one question, three choices, and one answer. We manually assess 100 sampled questions and answers and find the accuracy rate is about 85% and the questions exhibit great diversity (covering much more topics than human-written questions), indicating the high quality of the modelgenerated data. (3) Human Annotation. Same as Factual Knowledge, we divide the data into a training set and a test set by event topics and types. To ensure the benchmark\u2019s quality, we manually annotate the whole test set to verify the modelgenerated questions and answers. To maintain an\fnotation quality, all data are annotated twice and similar questions are filtered out. The final interannotator agreement reaches 95.6%. We filter out questions with inconsistent annotations and those whose answers are marked as incorrect. Finally, we obtain 1, 515 event edits, including 841 edits from Factual Knowledge and 674 newly generated edits, 7, 857 in-scope questions, and 2, 293 out-of-scope questions. 3 Experiments 3.1 Experimental Setup As mentioned in \u00a7 2.1, the evaluation metrics have two dimensions: reliability and locality. For Factual Knowledge, the reliability is accuracy and the locality is the proportion of answers that are the same before and after editing. These metrics are calculated using exact match. For Tendency, the evaluation is more complicated since the tendency judgment is an open-ended generation task by nature, and we evaluate it under two settings. For the first generation setting (Tendency-G), we adopt an automated evaluation method using GPT4 (OpenAI, 2023) as the evaluator, which has been verified as an effective (Bai et al., 2023; Li et al., 2024). Specifically, for the evaluation of reliability, we use the correct option of each question as the reference, comprehensively scoring the editing methods in 3 dimensions: correctness, coherence, and comprehensiveness. We also ask GPT-4 to give an overall score. Similar to the previous scoring-based evaluation method (Li et al., 2024), all scores are integers scaling from 1 to 5, with 5 being the best. For the evaluation of locality under Tendency-G, we utilize GPT-4 to assess the consistency of the model\u2019s responses to out-of-scope questions in Oe before and after editing, also using an integer score from 1 to 5, with 5 being the most similar. To avoid the potential risk of GPT4 evaluation and provide an alternative metric for Tendency, we also adopt a multiple-choice evaluation setting (Tendency-M), which is the same with Factual Knowledge, i.e., using extract match to calculate the reliability and locality. Experimental details of using GPT-4 scorer on Tendency-G are shown in appendix B.2. For reliability, we employ evaluations at two levels: question-level and edit-level. The questionlevel evaluation assesses the reliability of each individual question. For Tendency-G scores evaluated using GPT-4, similar to Bai et al. (2023), we present the percentages of responses with full marks, i.e., scored 5 points. The edit-level evaluation assesses the reliability of each edit. An edit is reliable only if all questions in Fe (for Factual Knowledge) or Te (for Tendency-M and Tendency-G) are answered correctly or the overall scores of answers are all full-mark. 
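For the exact-match settings (Factual Knowledge and Tendency-M), the two evaluation levels described above can be sketched as follows; the record layout (fields such as edit_id, pred, gold) is our assumption, not the benchmark's released format.

```python
from collections import defaultdict

def exact_match(pred, gold):
    return pred.strip().lower() == gold.strip().lower()

def question_level_reliability(records):
    """records: dicts with 'pred' and 'gold' for in-scope questions (Eq. 2, exact match)."""
    return sum(exact_match(r["pred"], r["gold"]) for r in records) / len(records)

def locality(records):
    """records: dicts with 'pred_before' and 'pred_after' for out-of-scope questions (Eq. 3)."""
    return sum(exact_match(r["pred_after"], r["pred_before"]) for r in records) / len(records)

def edit_level_reliability(records):
    """An edit counts as reliable only if every in-scope question tied to it is answered correctly."""
    by_edit = defaultdict(list)
    for r in records:
        by_edit[r["edit_id"]].append(exact_match(r["pred"], r["gold"]))
    return sum(all(v) for v in by_edit.values()) / len(by_edit)
```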
3.2 Investigated Editing Methods and Models We evaluate various advanced editing methods on our benchmark, including: (1) Fine-tuning (Zhu et al., 2020; Meng et al., 2022b; Aky\u00fcrek et al., 2023). Fine-tuning is a vanilla method of editing, involving direct learning the new edits by finetuning model parameters. In our experiments, we fine-tune all the parameters of models on edits in the test set using a language modeling objective. However, this method has high computational costs and may also lead to catastrophic forgetting (Luo et al., 2023). (2) Retrieval (Madaan et al., 2022a; Zhong et al., 2023). This method is memory-based, which stores all edits in an external memory. When posed with a question, this method first retrieves the most matching edit to use as context along with the question for input into the models. In our experiments, we used BM25 and E5 (Wang et al., 2022) as the retrieval methods, named sparse retrieval and dense retrieval, respectively. (3) SERAC (Mitchell et al., 2022). SERAC is also a memory-based method. This approach trains a scope classifier to determine whether a question requires retrieving a corresponding edit for answering. If retrieval is necessary, the retrieved edit and the question are input together into a counterfactual model for answering; otherwise, the question alone is input into the vanilla pre-trained model. In our implementation, we train a cross-encoder classifier (Mitchell et al., 2022) based on ELECTRA (Clark et al., 2019) and we use the same pretrained model as the counterfactual model, which is the same as in Aky\u00fcrek et al. (2023). (4) ICE. This method takes the ground truth edit as context along with the question as input into pre-trained models, directly evaluating whether the model can understand the scope of the edit and correctly answer the corresponding questions. We do not evaluate some advanced approaches, such as the LocateThen-Edit methods (Dai et al., 2022; Meng et al., 2022a,b; Li et al., 2023), because these approaches are specifically designed for triple-level editing and are not applicable to event-level knowledge editing. 
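As a rough illustration of the memory-based methods above, the sparse-retrieval variant could look like the sketch below; rank_bm25 is an assumed dependency, llm_generate stands in for whichever base model is queried, and the prompt wording is ours. ICE differs only in that the ground-truth edit, rather than a retrieved one, is placed in the context.

```python
from rank_bm25 import BM25Okapi  # assumed dependency for the sparse-retrieval variant

edits = [
    "Messi bids farewell to his time at Inter Miami, transferring to Inter Milan ...",
    "Google unveils a breakthrough in quantum computing technology.",
]
bm25 = BM25Okapi([e.lower().split() for e in edits])

def answer_with_retrieved_edit(question, llm_generate):
    """Retrieve the best-matching edit and prepend it to the question as context."""
    scores = bm25.get_scores(question.lower().split())
    best_edit = edits[max(range(len(edits)), key=scores.__getitem__)]
    prompt = f"Fact: {best_edit}\nAnswer the question based on the fact above.\nQ: {question}\nA:"
    return llm_generate(prompt)
```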
We adopt several advanced language models as \fModel Method Factual Knowledge Tendency-M Tendency-G E-Level E-Level Q-Level Locality E-Level Q-Level Locality E-Level Q-Level Locality GPT-J Fine-tuning 0.0 2.3 67.2 4.8 41.3 99.3 0.2 4.8 87.7 0.1 Sparse Retrieval 15.1 44.2 29.4 7.5 49.3 42.8 0.7 12.0 51.6 0.2 Dense Retrieval 16.7 46.7 28.8 7.3 49.2 40.6 0.6 12.5 53.8 0.1 SERAC 4.3 20.4 65.1 8.2 49.4 74.4 0.0 4.2 80.3 0.0 ICE 17.1 49.9 29.1 7.5 49.6 41.6 1.6 13.7 54.3 0.9 TULU 2 Fine-tuning 0.0 4.7 90.6 10.3 53.9 100.0 5.1 36.3 81.3 2.9 Sparse Retrieval 26.4 56.3 53.7 16.4 63.4 40.3 2.9 34.0 28.5 2.6 Dense Retrieval 28.7 59.3 52.8 24.9 69.5 42.1 7.1 41.4 25.2 5.5 SERAC 7.0 30.4 89.1 18.9 65.3 59.0 4.6 38.0 76.7 2.9 ICE 30.5 63.8 53.7 34.1 75.9 39.3 9.5 43.8 25.1 8.6 Mistral 7B Fine-tuning 0.2 4.1 66.8 21.5 65.4 100.0 19.1 59.7 77.6 10.5 Sparse Retrieval 24.1 57.5 39.6 28.0 72.5 34.7 6.1 43.9 28.6 3.9 Dense Retrieval 25.6 60.4 39.1 40.5 79.2 37.6 12.8 53.6 21.4 9.9 SERAC 7.4 27.4 71.1 37.4 76.4 59.8 14.1 54.7 74.7 8.6 ICE 26.6 64.5 39.8 60.1 88.0 35.6 21.5 59.6 22.8 16.7 GPT-3.5 Sparse Retrieval 16.9 55.0 33.0 49.4 82.4 41.2 10.4 48.7 23.9 7.2 Dense Retrieval 18.4 60.2 30.6 57.6 86.0 46.6 21.5 60.1 19.3 16.3 SERAC 5.2 27.1 71.2 56.0 84.9 70.3 17.9 57.7 70.6 11.7 ICE 20.0 63.1 32.7 71.6 91.6 41.9 33.8 66.6 20.1 27.1 GPT-4 Sparse Retrieval 34.2 64.7 56.6 30.6 71.8 52.0 14.5 58.4 34.0 9.8 Dense Retrieval 36.5 68.7 56.1 46.5 80.6 51.8 24.3 66.1 31.0 18.9 SERAC 9.7 31.4 80.6 45.8 81.4 92.3 26.5 65.7 93.1 15.2 ICE 39.0 73.5 56.9 66.4 89.3 49.8 40.3 73.0 31.9 29.2 Gemini Pro Sparse Retrieval 24.3 60.3 30.3 13.8 57.3 38.0 2.8 29.8 33.7 2.4 Dense Retrieval 25.2 63.7 30.5 28.2 67.2 43.4 6.6 39.5 33.9 5.6 SERAC 6.0 28.3 72.1 31.1 72.3 77.0 8.4 45.7 70.3 5.5 ICE 24.3 65.6 41.6 41.9 75.2 40.6 7.4 38.1 39.7 7.2 Table 2: Experimental results (%) of all investigated methods and models on ELKEN. E-Level: Edit-level reliability. Q-Level: Question-level reliability. The results on Tendency-G are the percentages of full-mark of overall scores. The rightmost column, E-Level, displays the overall reliability considering Fe \u222aTe. the base models to implement the aforementioned methods. We employ three open-source models, including GPT-J (Wang and Komatsuzak, 2021), TULU 2 (Ivison et al., 2023), Mistral 7B (Jiang et al., 2023), and three powerful proprietary models, including GPT-3.5 (OpenAI, 2022), GPT-4 (OpenAI, 2023), and Gemini Pro (Team et al., 2023). The implementation details of the editing method and automated evaluation are in appendix B. 3.3 Experimental Results The experimental results are shown in Table 2, and we have the following general observations: (1) Existing methods exhibit moderate performance on ELKEN. Even the best-performing method (ICE + GPT-4) falls short, which indicates the significant challenge posed by the new event-level knowledge editing setting. (2) The question-level reliability scores on ELKEN are much lower than those in triplet-level editing. For instance, SERAC can achieve nearly 100% reliability (Yao et al., 2023) in triplet-level editing. Moreover, the reliability scores of event-level evaluations are further lower than those of question-level evaluations. This suggests that recognizing the impact scope of event editing is a novel challenge of our task. The impact scope of triplet-level editing typically confines to edited triplets themselves, while that of eventlevel knowledge editing extends to multiple factual and tendency knowledge. 
(3) The locality scores on ELKEN are also generally lower than those in triplet-level editing. For example, SERAC + GPT-J achieves nearly 100% locality in triplet-level editing (Yao et al., 2023) but only attains about 80% and 65.1% in Tendency and Factual Knowledge, respectively. This may be due to the broad impact range of event edits, making the models struggle to ensure the locality of edits, which poses new challenges to existing methods. (4) On in-scope questions of Tendency-G, the full-mark rate is lower compared to reliability scores on Tendency-M. This is because Tendency-G not only assesses the tendency correctness of answers but also evaluates the coherence and comprehensiveness. This indicates that although the model may correctly identify the tendency of a question, it struggles to provide comprehensive and reasonable explanations. 4 Further Analysis This section presents some further analyses. Unless otherwise specified, the experimental results are from the ICE method, with question level reliability \fModel Unknown Known Overall GPT-J 28.2 63.9 49.9 TULU 2 44.2 76.5 63.8 Mistral 7B 50.1 73.8 64.5 GPT-3.5 67.8 60.0 63.1 GPT-4 63.6 79.9 73.5 Gemini Pro 54.0 73.2 65.6 Table 3: Reliability (%) on Unknown and Known questions of Factual Knowledge in ELKEN. scores. More results are placed in appendix C. 4.1 Analysis on Unknown Questions As mentioned in \u00a7 2.2, the editing process may render some facts as unknown, such as Inter Miami\u2019s captain in Figure 1. This process is a form of knowledge deletion or unlearning (Si et al., 2023), which has not been covered by previous editing work. We further investigate whether the LLMs recognize that certain knowledge should be deleted based on edits, namely answering \u201cunknown\u201d to relevant queries. Specifically, in ELKEN, there are 797 in-scope questions with answers marked as Unknown and the remaining 1, 374 in-scope questions with Known answers being specific entities. We observe the model\u2019s performance on these different data types, with results presented in Table 3. Our observations are as follows: (1) In general, models exhibit significantly lower reliability on Unknown questions compared to Known questions, except for GPT-3.5. This suggests that deleting corresponding outdated knowledge based on edits remains a challenge for current methods. (2) GPT-J performs notably worse on Unknown questions than other aligned models, indicating that alignment, e.g., instruction-tuning (Wei et al., 2021; Chung et al., 2022) or RLHF (Ouyang et al., 2022), can enhance the models\u2019 ability to delete knowledge through human instructions. The \u201cediting to Unknown\u201d questions included in ELKEN presents new challenges for existing knowledge editing methods and necessitates further efforts, such as incorporating knowledge unlearning methods (Si et al., 2023; Muresanu et al., 2024). 4.2 Analysis on Questions needing Background Knowledge As noticed by Zhong et al. (2023), LLMs may require background knowledge to answer certain questions. There are also such questions in our benchmark ELKEN. For instance, in Figure 1, correctly answering the question \u201cWhich league does Model K. Needed No K. Needed Recall GPT-J 44.0 55.6 43.5 TULU 2 59.9 65.9 73.3 Mistral 7B 49.8 69.1 61.3 GPT-3.5 23.3 76.0 82.2 GPT-4 62.0 79.5 95.7 Gemini Pro 52.7 70.5 84.5 Table 4: Reliability (%) on questions needing background knowledge (K. Needed) versus questions not requiring background knowledge (No K. 
Needed) and recall rate (%) of background knowledge needed. Lionel Messi play in?\u201d necessitates the knowledge of \u201cInter Milan is a club of the Serie A league\u201d. Correctly answering these questions also involves multi-hop reasoning, as the update of the league where Lionel Messi plays due to Messi playing for Inter Milan and Inter Milan being a club of the Serie A league. Therefore, successfully editing models not only requires the model to understand the editing scope of the edit, which requires multi-hop reasoning abilities, but also relies on the model\u2019s background knowledge. In ELKEN, there are 393 questions that need background knowledge for answers, which are marked during the construction of ELKEN. We observe the model\u2019s performance on the questions and find that the performance on questions requiring background knowledge is significantly lower, as shown in Table 4. We further analyze the reasons for the lower performance on questions requiring background knowledge. We assess the model\u2019s recall rate for the knowledge required to answer questions, with results presented in Table 4. We find that most models could recall a substantial proportion of the knowledge. However, their accuracy on the corresponding questions is much lower, indicating that the main reason for errors in these cases is the model\u2019s failure to recognize the editing scope requiring multi-hop reasoning, which poses a significant challenge to existing methods. 4.3 Comprehensive Evaluation on Tendency-G of ELKEN As mentioned in \u00a7 3.1, we conduct a systematic evaluation across 3 dimensions on Tendency-G of ELKEN. We present the results of this systematic evaluation in Table 5. We find that: (1) For correctness, the results evaluated by GPT-4 and those on Tendency-M are roughly similar in the model\u2019s rel\fModel Correctness Coherence Comprehensiveness GPT-J 41.5 11.8 4.7 TULU 2 55.4 40.7 14.2 Mistral 7B 62.3 58.9 26.2 GPT-3.5 69.8 67.4 22.7 GPT-4 71.7 82.1 76.8 Gemini Pro 38.9 42.1 38.8 Table 5: Full-mark rate results (%) across three dimensions on Tendency-G of ELKEN. ative performance3, but the results on Tendency-G are significantly lower. One reason is that the evaluation here employs a full-mark scheme, which is more stringent. If we consider results with correctness \u22654 as correct, then the gap between Tendency-G and Tendency-M scores is generally within 10%. Therefore, if one worries about the GPT-4 evaluation quality, one can always refer to the Tendency-M results. (2) Some models, e.g., GPT-3.5, despite high correctness, score low on coherence or comprehensiveness, indicating that while the model could correctly answer the tendencies of the questions, it fails to provide reasonable or comprehensive explanations, which is also undesirable. This suggests that a comprehensive evaluation across multiple dimensions is necessary. 4.4 Human Evaluation of GPT-4 Scorer To validate the effectiveness of using GPT-4 as a scorer in the Tendency-G evaluation, we conduct a manual review of GPT-4\u2019s scoring. Specifically, we randomly sample 120 questions and corresponding model-generated answers, with 60 from Mistral 7B and 60 from GPT-4. One of our authors scores this data. Similar to previous work (Bai et al., 2023; Chan et al., 2023), we calculate Spearman\u2019s \u03c1 and Kendall\u2019s \u03c4 coefficients between the model\u2019s overall scores and the manually assigned overall scores, which are 74.4% and 69.8%, respectively. 
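Agreement statistics of this kind can be computed directly with scipy, as in the toy sketch below; the reported ρ and τ above come from the 120 manually reviewed samples, not from these placeholder values.

```python
from scipy.stats import spearmanr, kendalltau

gpt4_scores  = [5, 4, 5, 3, 4]   # toy overall scores from the GPT-4 scorer
human_scores = [5, 4, 4, 3, 5]   # toy scores from the human annotator

rho, _ = spearmanr(gpt4_scores, human_scores)
tau, _ = kendalltau(gpt4_scores, human_scores)
print(f"Spearman's rho={rho:.3f}, Kendall's tau={tau:.3f}")
```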
These results indicate a strong positive correlation between scores given by GPT-4 and humans. This suggests that GPT-4\u2019s scoring generally aligns with human assessment but still leaves room for improvement. Additionally, GPT-4 tends to overestimate LLMs\u2019 performance, with an average score of 4.34 compared to the human-assigned average of 4.15. Nonetheless, as an automated, low-cost evaluation approach, it is sufficiently effective. 3The significant discrepancy in Gemini Pro\u2019s performance between Tendency-G and Tendency-M is primarily due to Gemini Pro often being unable to respond on Tendency-G due to triggering safety concerns. 5 Related Work Knowledge Editing Datasets. Most existing knowledge editing datasets assess triplet-level editing, including ZsRE (Levy et al., 2017), CounterFact (Meng et al., 2022a), Fact Verification (Mitchell et al., 2022), Calibration (Dong et al., 2022), MQuAKE (Zhong et al., 2023), RaKE (Wei et al., 2023), RIPPLEEDITS (Cohen et al., 2023), etc. Some datasets evaluate various editing settings, such as Mitchell et al. (2021) incorporating a piece of scrambled text into the model; Mitchell et al. (2022) editing the sentiment on a specific topic into the model; Wu et al. (2023) editing triplets into LLMs by inputting raw documents; Aky\u00fcrek et al. (2023) introducing a unified editing task, defining edits as any arbitrary natural language. Our benchmark evaluates event-level knowledge editing, a form that enables efficient and comprehensive updating of knowledge within the model. Knowledge Editing Methods. Previous knowledge editing methods primarily focus on tripletlevel editing, encompassing the following categories: (1) Memory-based method (Mitchell et al., 2022; Madaan et al., 2022b; Zhong et al., 2023; Zheng et al., 2023). This approach stores edits in an external memory, then uses a retriever to retrieve the most relevant edit as context for question answering. Typically, the base model does not require additional parameter updating. (2) Locate-ThenEdit method (Dai et al., 2022; Meng et al., 2022a,b; Li et al., 2023; Ma et al., 2023; Hase et al., 2024; Gupta and Anumanchipalli, 2024). This approach initially identifies the specific location of the knowledge to be edited within the base model, usually a neuron, and then modifies this neuron to significantly reduce the impact of the edit on other knowledge, making it a promising approach to knowledge editing. (3) Hyper-network method (De Cao et al., 2021; Mitchell et al., 2021; Tan et al., 2023). This method generally employs an additional neural network to learn from edits, generating corresponding parameter offsets for the base model to incorporate the knowledge edits. The above-mentioned Locate-Then-Edit and Hyper-network methods are typically designed specifically for triplet-level editing, involving entities or relations, and thus cannot be straightforwardly applied to event-level editing. In this work, we mainly evaluate memory-based method and in-context editing. We leave the development of advanced editing methods for eventlevel knowledge editing as the future work. \f6" + } + ], + "Jianxin Li": [ + { + "url": "http://arxiv.org/abs/2203.01604v1", + "title": "Curvature Graph Generative Adversarial Networks", + "abstract": "Generative adversarial network (GAN) is widely used for generalized and\nrobust learning on graph data. 
However, for non-Euclidean graph data, the\nexisting GAN-based graph representation methods generate negative samples by\nrandom walk or traverse in discrete space, leading to the information loss of\ntopological properties (e.g. hierarchy and circularity). Moreover, due to the\ntopological heterogeneity (i.e., different densities across the graph\nstructure) of graph data, they suffer from serious topological distortion\nproblems. In this paper, we proposed a novel Curvature Graph Generative\nAdversarial Networks method, named \\textbf{\\modelname}, which is the first\nGAN-based graph representation method in the Riemannian geometric manifold. To\nbetter preserve the topological properties, we approximate the discrete\nstructure as a continuous Riemannian geometric manifold and generate negative\nsamples efficiently from the wrapped normal distribution. To deal with the\ntopological heterogeneity, we leverage the Ricci curvature for local structures\nwith different topological properties, obtaining to low-distortion\nrepresentations. Extensive experiments show that CurvGAN consistently and\nsignificantly outperforms the state-of-the-art methods across multiple tasks\nand shows superior robustness and generalization.", + "authors": "Jianxin Li, Xingcheng Fu, Qingyun Sun, Cheng Ji, Jiajun Tan, Jia Wu, Hao Peng", + "published": "2022-03-03", + "updated": "2022-03-03", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.SI" + ], + "main_content": "INTRODUCTION Complex networks are widely used to model the complex relationships between objects [16, 64], such as social networks [20], academic networks [33, 52], and biological networks [15, 51]. In recent years, graph representation learning has shown its power in capturing the irregular but related complex structures in graph data [29, 31, 46]. The core assumption of graph representation learning is that topological properties are critical to representational capability [40, 41]. As the important topology characteristics, cycle and tree structures are ubiquitous in many real-world graphs, such as the cycle structure of family or friend relationships, the hypernym structure in natural languages [37, 53], the subordinate structure of entities in the knowledge graph [60], and the cascade structure of information propagation in social networks [69]. Since there are many unknowable noises in the real networks, these noises have huge impact on the topology of the network. Ignoring the existence of noise in the environment leads to the over-fitting problem in the learning process. In this paper, we focus on how to learn the generalization and robust representation for a graph. Many generative adversarial networks (GAN) based methods [10, 25, 59, 67, 68] have been proposed to solve the above problem by using adversarial training regularization. However, there are two limitations of the existing GAN-based methods for robust representation of real-world graphs: Discrete topology representation. Since the network is nonEuclidean geometry, the implementation of GAN-based methods on the network usually requires the topological-preserving constraints and performs the discrete operations in the network such as walking and sampling. Although these GAN-based methods can learn robust node representations, their generators focus on learning the arXiv:2203.01604v1 [cs.LG] 3 Mar 2022 \fWWW \u201922, April 25\u201329, 2022, Virtual Event, Lyon, France Jianxin Li and Xingcheng Fu, et al. discrete node connection distribution in the original graph. 
The generated nodes do not accurately capture the graph topology and lead to serious distortion for graph representation. To address this issue, we aim to find a way to deal with the discrete topology of graphs as if it were continuous data. Fortunately, Riemannian geometry [62] provides a solid and manipulable mathematical framework. In recent works, certain types of graph data (e.g. hierarchical, scalefree, or cyclical graphs) have been shown to be better represented in non-Euclidean (Riemannian) geometries [7, 12, 13, 19, 37, 47\u2013 49]. Inspired by the unsupervised manifold hypothesis [8, 35, 42], we can understand a graph as a discretization of a latent geometric manifold1. For example, the hyperbolic geometric spaces (with negative curvature) can be intuitively understood as a continuous tree [3, 24] and spherical geometry spaces (with positive curvature) benefit for modeling cyclical graphs [11, 12, 19, 65]. In these cases, the Riemannian geometric manifold has significant advantages by providing a better geometric prior and inductive bias for graphs with respective topological properties. Inspired by this property, we propose to learn robust graph representation in Riemannian geometric spaces. Topological heterogeneity. There are some important properties (e.g., scale-free and small-world) usually presented by tree-like and cyclic structures [24, 39]. Meanwhile, these topological properties are also reflected in the density of the graph structure. As shown in Figure 1, the real-world graph data commonly has local structures with different topological properties, i.e., heterogeneous topologies. Moreover, the noisy edges [50] may seriously change the topological properties of local structures (e.g. a single binary tree structure with an extra edge may become a triangle). However, the Riemannian geometric manifold with constant curvature is regarded as global geometric priors of graph topologies, and it is difficult to capture the topological properties of local structures. To address the above problems, we propose a novel Curvature Graph Generative Adversarial Networks (CurvGAN). Specifically, we use a constant curvature2 to measure the global geometric prior of the graph, and generalize GAN into the Riemannian geometric space with the constant curvature. As shown in Figure 2, an appropriate Riemannian geometric space ensures minimal topological distortion for graph embedding, leading to better and robust topology representation. For the discrete topology global representation, CurvGAN can directly perform any operations of GAN in the continuous Riemannian geometric space. For the topological heterogeneity issue, we design the Ricci curvature regularization to improve the local structures capture capability of the model. Overall, the contributions are summarized as follows: \u2022 We propose a novel Curvature Graph Generative Adversarial Networks (CurvGAN), which is the first attempt to learn the robust node representations in a unified Riemannian geometric space. \u2022 CurvGAN can directly generate fake neighbor nodes in continuous Riemannian geometric space conforming to the graph topological prior and better preserve the global and local topological proprieties by introducing various curvature metrics. 1In this paper, the manifold is equivalent to embedding space, and we will use the terms manifold and space interchangeably. 2In this paper, the constant curvature of the graph is considered as the global average sectional curvature. 
• Extensive experiments on synthetic and real-world datasets demonstrate a significant and consistent improvement in model robustness and efficiency with competitive performance. 2 PRELIMINARY 2.1 Riemannian Geometric Manifold Riemannian geometry is a strong and elegant mathematical framework for solving non-Euclidean geometric problems in machine learning and manifold learning [7, 12, 27]. A manifold is a special kind of connectivity in which the coordinate transformation between any two (local) coordinate systems is continuous. A Riemannian manifold [2] is a smooth manifold $\mathcal{M}$ of dimension $d$ with the Riemannian metric $g$, denoted as $(\mathcal{M}, g)$. At each point $\mathbf{x} \in \mathcal{M}$, the space locally looks like a $d$-dimensional space, and it is associated with the $d$-dimensional tangent space $\mathcal{T}_{\mathbf{x}}\mathcal{M}$. For each point $\mathbf{x}$, the Riemannian metric $g$ is given by an inner product $g_{\mathbf{x}}(\cdot, \cdot): \mathcal{T}_{\mathbf{x}}\mathcal{M} \times \mathcal{T}_{\mathbf{x}}\mathcal{M} \rightarrow \mathbb{R}$. 2.2 The $\kappa$-stereographic Model The gyrovector space [54-57] formalism is used to generalize vector spaces to the Poincaré model of hyperbolic geometry [53]. The important quantities from Riemannian geometry can be rewritten in terms of the Möbius vector addition and scalar-vector multiplication [14]. However, these mathematical tools are used only in hyperbolic spaces (i.e., constant negative curvature). To extend these tools to unify all curvatures, [3] leverage gyrovector spaces to the $\kappa$-stereographic model. Given a curvature $\kappa \in \mathbb{R}$, a $d$-dimensional $\kappa$-stereographic model $(\mathfrak{st}^d_\kappa, g^\kappa)$ can be defined by the manifold $\mathfrak{st}^d_\kappa$ and the Riemannian metric $g^\kappa$:

$$\mathfrak{st}^d_\kappa = \{\mathbf{x} \in \mathbb{R}^d \mid -\kappa \|\mathbf{x}\|_2^2 < 1\}, \qquad (1)$$

$$g^\kappa_{\mathbf{x}} = (\lambda^\kappa_{\mathbf{x}})^2 g^E, \quad \text{where } \lambda^\kappa_{\mathbf{x}} = 2/(1 + \kappa\|\mathbf{x}\|^2), \qquad (2)$$

where $g^E = \mathbf{I}_d$ is the Euclidean metric tensor. Note that when the curvature $\kappa > 0$, $\mathfrak{st}^d_\kappa$ is $\mathbb{R}^d$, while for $\kappa < 0$, $\mathfrak{st}^d_\kappa$ is a Poincaré ball of radius $1/\sqrt{-\kappa}$. Distance. For any point pair $\mathbf{x}, \mathbf{y} \in \mathfrak{st}^d_\kappa$ with $\mathbf{x} \neq \mathbf{y}$, the distance in the $\kappa$-stereographic space is defined as:

$$\mathrm{d}^\kappa_{\mathfrak{st}}(\mathbf{x}, \mathbf{y}) = (2/\sqrt{|\kappa|}) \tan^{-1}_\kappa \|{-\mathbf{x}} \oplus_\kappa \mathbf{y}\|. \qquad (3)$$

Note that the distance is defined in all cases except for $\mathbf{x} = -\mathbf{y}/(\kappa\|\mathbf{y}\|^2)$ when $\kappa > 0$. Exponential and Logarithmic Maps. The manifold $\mathfrak{st}^d_\kappa$ and the tangent space $\mathcal{T}_{\mathbf{x}}\mathfrak{st}$ can be mapped to each other via the exponential map and the logarithmic map. The exponential map $\exp^\kappa_{\mathbf{x}}(\cdot)$ and logarithmic map $\log^\kappa_{\mathbf{x}}(\cdot)$ are defined as:
The exponential map exp\ud835\udf05 x(\u00b7) and logarithmic map log\ud835\udf05 x(\u00b7) are defined as: exp\ud835\udf05 x(v) := x \u2295\ud835\udf05 \u0012 tan\ud835\udf05 \u0012\u221a\ufe01 |\ud835\udf05| \ud835\udf06\ud835\udf05 x \u2225v\u2225 2 \u0013 v \u2225v\u2225 \u0013 , (4) log\ud835\udf05 x(y) := 2|\ud835\udf05|\u22121 2 \ud835\udf06\ud835\udf05 x tan\u22121 \ud835\udf05\u2225\u2212x \u2295\ud835\udf05y\u2225 \u2212x \u2295\ud835\udf05y \u2225\u2212x \u2295\ud835\udc58y\u2225, (5) where tan\ud835\udf05= 1 \u221a |\ud835\udf05| tan(\u00b7) and \ud835\udf06x\ud835\udf05is the conformal factor which comes from Riemannian metric. \fCurvature Graph Generative Adversarial Networks WWW \u201922, April 25\u201329, 2022, Virtual Event, Lyon, France (a) Hyperbolic geometric space (\ud835\udf05< 0). (b) Euclidean space (\ud835\udf05= 0). (c) Spherical geometric space (\ud835\udf05> 0). Figure 2: A toy example of a triangular structure in a Riemannian geometric space with different average sectional curvatures. (a) The geodesics (blue solid lines) of a triangular structure approximate a tree (dash lines and the virtual center) in hyperbolic geometric space. (b) A triangular structure in Euclidean space. (c) The triangular geodesics (red solid lines) approximate a cycle structure (dash lines and black virtual nodes) in spherical geometric space. 2.3 The Graph Curvature Sectional Curvature. In Riemannian geometry, the sectional curvature [36] is one of the ways to describe the curvature of Riemannian manifolds. In existing works [3, 19, 45], average sectional curvature \ud835\udf05has been used as the constant curvature of non-Euclidean geometric embedding space. Ricci Curvature. Ricci curvature [28, 36] is a broadly metric which measures the geometry of a given metric tensor that differs locally from that of ordinary Euclidean space. In machine learning, Ricci curvature is transformed into edge weights to measure local structural properties [66]. Ollivier-Ricci curvature [38] is a coarse approach used to compute the Ricci curvature for discrete graphs. 3 CURVGAN MODEL In this section, we present a novel Curvature Graph Generative Adversarial Network (CurvGAN) in the latent geometric space of the graph. The overall architecture is shown in Figure 3. 3.1 Geometric Prior of Graph Topology An interesting theory of graph geometry is that some typical topological structures can be described intuitively using Riemannian geometry with different curvature \ud835\udf05[5], i.e., hyperbolic (\ud835\udf05< 0), Euclidean (\ud835\udf05= 0) and spherical (\ud835\udf05> 0) geometries. As shown in Figure 2, the hyperbolic space can be intuitively understood as a continuous tree [24], and spherical geometry provides benefits for learning cyclical structure [11, 18, 19, 63, 65]. Therefore, we can learn better graph representation with minimal embedding distortion in an appropriate Riemannian geometric space [3]. Motivated by this idea, we first search for an appropriate Riemannian geometric space to approximate the global topological properties of the graph, and then we capture the local structural features for each node by introducing Ricci curvature. In this way, we propose a curvature-constrained framework to capture both the global topology and local structure of the graph. Global Curvature Estimation. In machine learning, the Riemannian manifold is commonly considered as a geometric prior with constant curvature. 
For a graph embedding in Riemannian geometric space, a key parameter is the constant curvature $\kappa$, which can affect the embedding distortion of a graph topology [19]. To minimize the embedding distortion and explore the optimal curvature, we leverage the average sectional curvature estimation algorithm [3, 19] to estimate the global curvature. Specifically, let $(a, b, c)$ be a geodesic triangle in the manifold $\mathfrak{st}^d_\kappa$, and $m$ be the (geodesic) midpoint of $(b, c)$. Their quantities are defined as follows:

$$\xi_{\mathfrak{st}}(a, b; c) = p_{\mathfrak{st}}(a, m)^2 + \frac{p_{\mathfrak{st}}(b, c)^2}{4} - \frac{p_{\mathfrak{st}}(a, b)^2 + p_{\mathfrak{st}}(a, c)^2}{2},$$
$$\xi_{\mathfrak{st}}(m; a, b; c) = \frac{1}{2\, p_{\mathfrak{st}}(a, m)}\, \xi_{\mathfrak{st}}(a, b; c). \qquad (6)$$

We design our curvature updating according to Eq. (6). The new average curvature estimation $\kappa$ is defined as:

$$\kappa = \frac{1}{|V|} \sum_{m \in V} \left( \frac{1}{n_s} \sum_{j=1}^{n_s} \xi_{\mathfrak{st}}\left(\mathbf{h}_m; \mathbf{h}_{a_j}, \mathbf{h}_{b_j}; \mathbf{h}_{c_j}\right) \right), \qquad (7)$$

where $b$ and $c$ are randomly sampled from the neighbors of $m$, and $a$ is a node in the graph $G$ except for $\{m, b, c\}$. For each node, we sample $n_s$ times and take the average as the estimated curvature. Local Curvature Computation. To deal with the embedding distortion caused by topological heterogeneity, we also need to consider the local structural properties of the graph. We leverage the Ollivier-Ricci curvature [38] to solve this problem. Specifically, the Ollivier-Ricci curvature $\kappa_r(x, y)$ of the edge $(x, y)$ is defined as:

$$\kappa_r(x, y) = \frac{W(m_x, m_y)}{d(x, y)}, \qquad (8)$$

where $W(\cdot, \cdot)$ is the Wasserstein distance, $d(\cdot, \cdot)$ is the geodesic distance (embedding distance), and $m_x$ is the mass distribution of node $x$. The mass distribution represents the importance distribution of a node and its one-hop neighborhood [38], which is defined as:

$$m^\alpha_x(x_i) = \begin{cases} \alpha & \text{if } x_i = x, \\ 1 - \alpha & \text{if } x_i \in \mathrm{Neighbor}(x), \\ 0 & \text{otherwise}, \end{cases} \qquad (9)$$

where $\alpha$ is a hyper-parameter that represents the importance of node $x$, and we take $\alpha = 0.5$ following the existing works [44, 66].
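A minimal sketch of the edge-level computation above, using networkx and the POT optimal-transport package (assumed dependencies): it spreads the mass $1-\alpha$ uniformly over the one-hop neighbors, a common convention for Eq. (9), and returns $W/d$ as written in Eq. (8) (the classic Ollivier definition uses $1 - W/d$).

```python
import networkx as nx
import numpy as np
import ot  # POT: Python Optimal Transport

def mass_distribution(G, x, alpha=0.5):
    """Keep mass alpha on x and spread 1-alpha uniformly over its neighbors."""
    nbrs = list(G.neighbors(x))
    support = [x] + nbrs
    weights = np.array([alpha] + [(1 - alpha) / len(nbrs)] * len(nbrs))
    return support, weights

def ollivier_ricci(G, x, y, alpha=0.5):
    """Coarse Ollivier-Ricci curvature of edge (x, y) in the spirit of Eq. (8)."""
    sx, mx = mass_distribution(G, x, alpha)
    sy, my = mass_distribution(G, y, alpha)
    # ground distance between the two supports = shortest-path lengths on the graph
    M = np.array([[nx.shortest_path_length(G, u, v) for v in sy] for u in sx], dtype=float)
    W = ot.emd2(mx, my, M)                 # Wasserstein-1 distance between m_x and m_y
    d = nx.shortest_path_length(G, x, y)
    return W / d

G = nx.karate_club_graph()
print(ollivier_ricci(G, 0, 1))
```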
(1) CurvGAN estimates the constant curvature \ud835\udf05and Ricci curvature \ud835\udf05\ud835\udc5f of the graph; (2) CurvGAN \u2019s generator generates negative samples by sampling fake node representations from the wrapper normal distribution; (3) CurvGAN \u2019s discriminator discriminates the positive, negative and generated fake samples. 3.2 Curvature-Aware Generator Curvature-Aware Generator aims to directly generate fake nodes from the continuous Riemannian geometric space of the global topology as the enhanced negative samples while preserving the local structure information of the graph as much as possible. Global Topological Representation Generation. First, we map a node \ud835\udc62into the Riemannian geometric space with the global curvature\ud835\udf05, and input it to an embedding layer of M\u00f6bius version to get a dense vector representation. In order to generate appropriate representations of fake neighbor nodes, we need to extend the noise distribution to the Riemannian geometric space. An approach is to use exponential mapping to map a normal distribution into a Riemannian manifold, and this distribution measure is referred to as the wrapped normal distribution [17, 34]. The \ud835\udc51-dimensional wrapped normal distribution N\ud835\udf05 \ud835\udd30\ud835\udd31(z|\ud835\udf41, \ud835\udf0e2I) is defined as: N\ud835\udf05 \ud835\udd30\ud835\udd31(z|\ud835\udf41, \ud835\udf0e2I) (10) = N \u0010 \ud835\udf06\ud835\udf05 \ud835\udf07log\ud835\udf07(\ud835\udc9b) | 0, \ud835\udf0e2I \u0011 \u00a9 \u00ad \u00ad \u00ab \u221a\ud835\udf05d\ud835\udf05 \ud835\udd30\ud835\udd31(\ud835\udf41, \ud835\udc9b) sinh \u0010\u221a\ud835\udf05d\ud835\udf05 \ud835\udd30\ud835\udd31(\ud835\udf41, \ud835\udc9b) \u0011 \u00aa \u00ae \u00ae \u00ac \ud835\udc51\u22121 , where \ud835\udf07\u2208\ud835\udd30\ud835\udd31\ud835\udc51 \ud835\udf05is the mean parameter, and \ud835\udf0e\u2208R\ud835\udc51is the variance. Then we can introduce a reparameterized sampling strategy to generate the negative node representation. Specifically, for a fake node ufake in the manifold \ud835\udd30\ud835\udd31, we can generate the noisy representation in the Riemannian geometric space, given by zG ufake = exp\ud835\udf05 u \u0012 \ud835\udc5f \ud835\udf06\ud835\udf05 u \ud835\udefc \u0013 , with ufake \u223cN (0, \ud835\udf0e2I), (11) with radius \ud835\udc5f= d\ud835\udf05 \ud835\udd30\ud835\udd31, direction \ud835\udefc= v \ud835\udc5f, and u \u2208\ud835\udd30\ud835\udd31\ud835\udc51 \ud835\udf05is the embedding vector of node \ud835\udc62. Local Structure Preserving Regularization. Since the realworld graph data usually has a heterogeneous topology, the edge generated by data noise may seriously change the local structure, leading to the following two issues: (1) the model cannot capture the correct local structure, leading to the generated sample may have different properties from the original structure: (2) the fake samples generated by the model are no longer indistinguishable from the real samples. Therefore, we need to regularize the generated fake nodes by using a measurement of preserving local structure information. If two nodes are similar in the same local structure, they should have the same Ricci curvatures in the neighborhood. According to Eq. 
(8), the regularization term can given by Reg(\ud835\udf05\ud835\udc62 \ud835\udc5f,\ud835\udf05\ud835\udc62fake \ud835\udc5f ) = 1 |N(\ud835\udc62)| \u2211\ufe01 \ud835\udc63\u2208N(\ud835\udc62) \u0012 1 \u2212\ud835\udc4a(\ud835\udc5a\ud835\udc62,\ud835\udc5a\ud835\udc63)\ud835\udc51(\ud835\udc62fake, \ud835\udc63) \ud835\udc4a(\ud835\udc5a\ud835\udc62fake,\ud835\udc5a\ud835\udc63)\ud835\udc51(\ud835\udc62, \ud835\udc63) \u0013 , (12) where N(\ud835\udc62) is the neighbors of node \ud835\udc62. To facilitate the calculation of Ricci curvature [38], we assume the generated fake node has a similar one-hop neighborhood as the original node \ud835\udc62, i.e., \ud835\udc5a\ud835\udc62= \ud835\udc5a\ud835\udc62fake. Fake Sample Generation Strategy. Given a node \ud835\udc62, the generator outputs the embedding zG ufake = \ud835\udc3a\ud835\udf05 \ud835\udd30\ud835\udd31 \u0010 u;\ud835\udf03\ud835\udc3a\ud835\udf05 \ud835\udd30\ud835\udd31 \u0011 of the fake node ufake as a substitute for node u. In this way, we can obtain a node-pair set {(\ud835\udc63,\ud835\udc62fake), \ud835\udc63\u2208N (\ud835\udc62)} as the negative samples. The generator of Riemannian latent geometry is defined as: \ud835\udc3a\ud835\udf05 \ud835\udd30\ud835\udd31 \u0010 \u00b7;\ud835\udf03\ud835\udc3a\ud835\udf05 \ud835\udd30\ud835\udd31 \u0011 = \ud835\udc53\u2297\ud835\udf05 \u0010 z;\ud835\udf03\ud835\udc53\u0011 , \ud835\udc53\u2297\ud835\udf05= exp\ud835\udf05 u \u0000\ud835\udc53\u0000log\ud835\udf05 u (u)\u0001\u0001 , (13) where \ud835\udf03\ud835\udc3a\ud835\udf05 \ud835\udd30\ud835\udd31is the parameters for generator\ud835\udc3a\ud835\udf05 \ud835\udd30\ud835\udd31, and \ud835\udc53\u2297\ud835\udf05is a M\u00f6bius version of the multi-layer perception. Optimization of Generator. For the high-order proximity of each node-pair in a graph, the advantage of our CurvGAN is that it doesn\u2019t require traversing the shortest path between two points to compute the connectivity probability. According to Eq. (12), when the embedding distortion is minimal, the longer the shortest path between any two nodes, the less probability of this node-pair by direct calculation in the latent geometric space, and vice versa. The loss function of the generator is defined as follows: L\ud835\udc3a=E\ud835\udc62\u2208Vlog \u0010 1 \u2212\ud835\udc37\ud835\udf05\u0010 u\ud835\udc37, ufake \u0011\u0011 + \ud835\udf06 \r \r \r \r \r \u2211\ufe01 \ud835\udc62\u2208\ud835\udc49 Reg(\ud835\udf05\ud835\udc62 \ud835\udc5f,\ud835\udf05\ud835\udc62fake \ud835\udc5f ) \r \r \r \r \r 2 . (14) \fCurvature Graph Generative Adversarial Networks WWW \u201922, April 25\u201329, 2022, Virtual Event, Lyon, France 3.3 Discriminator The discriminator of CurvGAN aims to determine whether the connection between two nodes is real. For node pair (\ud835\udc62, \ud835\udc63), the discriminant function outputs the connection probability between two nodes. In general, the discriminator \ud835\udc37is defined as: \ud835\udc37\ud835\udf05(\ud835\udc62, \ud835\udc63;\ud835\udf03\ud835\udc37) = F\ud835\udf05 \u0010 \ud835\udc62, \ud835\udc63;\ud835\udf03\ud835\udc39\u0011 , (15) where \ud835\udf03\ud835\udc37is the parameters of discriminator \ud835\udc37. 
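The generator's sampling step can be sketched in plain numpy as below: Gaussian noise is drawn in the tangent space at u and pushed onto the manifold with the exponential map of Eq. (4), in the spirit of the wrapped normal distribution of Eqs. (10)-(11). The exact rescaling by the conformal factor in Eq. (11) is simplified here, and all function names are ours.

```python
import numpy as np

def lam(x, k):
    """Conformal factor lambda_x^kappa = 2 / (1 + kappa * ||x||^2), cf. Eq. (2)."""
    return 2.0 / (1.0 + k * np.dot(x, x))

def mobius_add(x, y, k):
    """kappa-stereographic Moebius addition from the gyrovector formalism (Sec. 2.2)."""
    xy, x2, y2 = np.dot(x, y), np.dot(x, x), np.dot(y, y)
    num = (1 - 2 * k * xy - k * y2) * x + (1 + k * x2) * y
    den = 1 - 2 * k * xy + k ** 2 * x2 * y2
    return num / den

def tan_k(z, k):
    """tan_kappa: (1/sqrt|k|) tan(.) for k > 0, its hyperbolic analogue tanh for k < 0."""
    if k > 0:
        return np.tan(z) / np.sqrt(k)
    if k < 0:
        return np.tanh(z) / np.sqrt(-k)
    return z

def expmap(x, v, k):
    """Exponential map exp_x^kappa(v) of Eq. (4)."""
    nv = np.linalg.norm(v)
    if nv == 0:
        return x
    return mobius_add(x, tan_k(np.sqrt(abs(k)) * lam(x, k) * nv / 2.0, k) * v / nv, k)

def generate_fake(u, k, sigma=0.1, seed=0):
    """Wrapped-normal-style sampling: Euclidean noise in the tangent space at u,
    mapped onto the kappa-stereographic manifold via the exponential map."""
    rng = np.random.default_rng(seed)
    v = rng.normal(0.0, sigma, size=u.shape)
    return expmap(u, v, k)

u = np.array([0.1, -0.2, 0.05])
print(generate_fake(u, k=-1.0))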
For link prediction task, we use the Fermi-Dirac decoder [24, 37] to compute the connection probability: F\ud835\udf05 \u0010 \ud835\udc62, \ud835\udc63;\ud835\udf03F\u0011 = h \ud835\udc52(d\ud835\udf05 \ud835\udd30\ud835\udd31(\ud835\udc62,\ud835\udc63)2\u2212\ud835\udf0c)/\ud835\udf0f+ 1 i\u22121 , (16) where d\ud835\udf05 \ud835\udd30\ud835\udd31is the hyperbolic distance in Eq. (3) and (\ud835\udf0c, \ud835\udf0f) are hyperparameters of Fermi-Dirac distribution. For node classification task, inspired by [9], we map the output embedding z\ud835\udc56,\ud835\udc56\u2208V to the tangent space T o\ud835\udd30\ud835\udd31by logarithmic mapping log\ud835\udf05 o(\ud835\udc67\ud835\udc56), then perform Euclidean multinomial logistic regression, where o is the north pole or origin. For a graph G(V, E), the input samples of the discriminator are as follows: \u2022 Positive Sample (\ud835\udc62, \ud835\udc63): There indeed exists a directed edge from \ud835\udc62to \ud835\udc63on a graph G. \u2022 Negative Samples (\ud835\udc62,\ud835\udc64) and (\ud835\udc62fake, \ud835\udc63): For a given node \ud835\udc62,\ud835\udc64, \ud835\udc63\u2208 V, the negative samples consists of the samples by negative sampling in the original graph (\ud835\udc62,\ud835\udc64) \u2209E and the fake samples (\ud835\udc62fake, \ud835\udc63) generated by the generator. Optimization of Discriminator. The loss function of positive samples (\ud835\udc62, \ud835\udc63) is: L\ud835\udc37 pos = E(\ud835\udc62,\ud835\udc63)\u223cE \u2212log \ud835\udc37\ud835\udf05(u\ud835\udc37, u\ud835\udc37 pos). (17) The loss function of negative samples (\ud835\udc62,\ud835\udc64) and (\ud835\udc62fake, \ud835\udc63) is defined as: L\ud835\udc37 neg =E\ud835\udc62\u2208V \u2212(log(1 \u2212\ud835\udc37\ud835\udf05(u\ud835\udc37, u\ud835\udc37 neg)) + log(1 \u2212\ud835\udc37\ud835\udf05(u\ud835\udc37, ufake))). (18) Then we integrate the above Eq. (17) and Eq. (18) as the loss function of the discriminator, which we try to minimize: L\ud835\udc37= L\ud835\udc37 pos + L\ud835\udc37 neg. (19) 3.4 Training and Complexity Analysis The overall training algorithm for CurvGAN is summarized in Algorithm 1. For each training epoch, time complexity of CurvGAN per epoch is \ud835\udc42 \u0010 \ud835\udc5b\ud835\udc60\u00b7 (\ud835\udc5b\ud835\udc37 \ud835\udc52\u00b7 |E| + \ud835\udc5b\ud835\udc3a \ud835\udc52\u00b7 (|V| + \ud835\udc5b\ud835\udc60log(\ud835\udc5b\ud835\udc60))) \u00b7 \ud835\udc512\u0011 . Since \ud835\udc5b\ud835\udc60, \ud835\udc5b\ud835\udc3a \ud835\udc52, \ud835\udc5b\ud835\udc37 \ud835\udc52and \ud835\udc51are small constants, CurvGAN \u2019s time complexity is linear to |V| and |E|. The space complexity of CurvGAN is \ud835\udc42(2 \u00b7 \ud835\udc51\u00b7 (|V| + |E|)). In conclusion, CurvGAN is both time and space-efficient, making it scalable for large-scale graphs. 4 EXPERIMENT In this section, we conduct comprehensive experiments to demonstrate the effectiveness and adaptability of CurvGAN 3 on various datasets and tasks. We further analyze the robustness to investigate the expressiveness of CurvGAN. 3Code is available at https://github.com/RingBDStack/CurvGAN. Algorithm 1: CurvGAN Input: Graph G = {\ud835\udc49, \ud835\udc38}; Number of training epochs \ud835\udc5b\ud835\udc52, generator\u2019s epochs \ud835\udc5b\ud835\udc3a \ud835\udc52, discriminator\u2019s epochs \ud835\udc5b\ud835\udc37 \ud835\udc52; Number of samples \ud835\udc5b\ud835\udc60. Output: Predicted result of the downstream task. 
// Curvature Estimation 1 \ud835\udf05\u2190Eq. (7); 2 \ud835\udf05\ud835\udc5f\u2190Eq. (8); 3 for \ud835\udc61= 1, 2, \u00b7 \u00b7 \u00b7 ,\ud835\udc5b\ud835\udc52do // Train Discriminator 4 for \ud835\udc51= 1, 2, \u00b7 \u00b7 \u00b7 ,\ud835\udc5b\ud835\udc37 \ud835\udc52do // Sample neighbour \ud835\udc63\u2208N (\ud835\udc62) for each \ud835\udc62\u2208\ud835\udc49 5 upos \u2190RandomWalk(\ud835\udc62,\ud835\udc5b\ud835\udc60); // Sample negative nodes and generate fake nodes 6 uneg \u2190RandomSelect(\ud835\udc49,\ud835\udc5b\ud835\udc60); 7 ufake \u2190Eq. (13); // Optimize 8 L\ud835\udc37\u2190Eq. (14); 9 end // Train Generator 10 for \ud835\udc54= 1, 2, \u00b7 \u00b7 \u00b7 ,\ud835\udc5b\ud835\udc3a \ud835\udc52do // Generate fake nodes 11 ufake \u2190Eq. (13); 12 Reg(\ud835\udf05fake \ud835\udc5f ,\ud835\udf05true \ud835\udc5f ) \u2190Eq. (12); // Optimize 13 L\ud835\udc3a\u2190Eq. (19); 14 end 15 end Table 1: Statistics of datasets. Dataset #Nodes #Edges Avg. Degree #Labels \ud835\udf05 Synth. SBM 1,000 15,691 15.69 5 -1.496 BA 1,000 2,991 2.99 5 -1.338 WS 1,000 11,000 11.00 5 0.872 Real Cora 2,708 5,429 3.90 7 -2.817 Citeseer 3,312 4,732 2.79 6 -4.364 Polblogs 1,490 19,025 25.54 2 -0.823 4.1 Datasets. We conduct experiments on synthetic and real-world datasets to evaluate our method, and analyze model\u2019s capabilities in terms of both graph theory and real-world scenarios. The statistics of datasets are summarized in Table 1. Synthetic Datasets. We generate three synthetic graph datasets using several well-accepted graph theoretical models: Stochastic Block Model (SBM) [21], Barab\u00e1si-Albert (BA) scale-free graph model [4], and Watts-Strogatz (WS) small-world graph model [61]. For each dataset, we create 1,000 nodes and subsequently perform the graph generation algorithm on these nodes. For the SBM graph, \fWWW \u201922, April 25\u201329, 2022, Virtual Event, Lyon, France Jianxin Li and Xingcheng Fu, et al. Table 2: Summary of link prediction AUC scores (%), node classification Micro-F1 and Macro-F1 scores (%) on synthetic graphs. (Result: average score \u00b1 standard deviation; Bold: best; Underline: runner-up.) 
Method Stochastic Block Model Barab\u00e1si-Albert Watts-Strogatz Avg.Rank AUC Micro-F1 Macro-F1 AUC Micro-F1 Macro-F1 AUC Micro-F1 Macro-F1 GAE [22] 50.13\u00b10.12 24.07\u00b11.12 21.94\u00b12.11 50.26\u00b10.21 39.35\u00b11.72 17.83\u00b11.11 50.10\u00b10.08 19.08\u00b11.87 16.47\u00b12.64 6.6 VGAE [22] 50.32\u00b11.49 20.47\u00b12.05 15.41\u00b11.11 62.43\u00b11.26 37.44\u00b11.73 15.88\u00b12.31 49.94\u00b10.57 19.14\u00b11.40 12.02\u00b11.13 7.2 DGI [58] 49.88\u00b10.51 19.06\u00b11.87 12.13\u00b12.08 70.90\u00b12.12 38.24\u00b11.11 18.13\u00b10.26 49.55\u00b10.49 18.27\u00b11.14 13.31\u00b10.80 8.1 G2G [6] 79.45\u00b11.28 21.44\u00b10.05 20.98\u00b10.03 54.29\u00b11.62 42.27\u00b10.31 23.96\u00b10.39 73.15\u00b12.23 22.89\u00b10.07 22.75\u00b10.07 5.1 GraphGAN [59] 84.56\u00b12.84 38.60\u00b10.51 38.87\u00b10.32 63.34\u00b14.19 43.60\u00b10.61 24.57\u00b10.53 66.63\u00b19.46 41.80\u00b10.84 41.76\u00b11.25 3.0 ANE [10] 85.09\u00b11.12 39.88\u00b11.06 33.85\u00b11.75 62.13\u00b12.49 46.04\u00b13.01 19.32\u00b12.66 62.98\u00b11.44 33.84\u00b12.75 33.51\u00b12.00 3.7 P-VAE [32] 86.10\u00b10.97 57.94\u00b11.29 52.97\u00b11.47 76.08\u00b11.22 38.38\u00b11.37 20.03\u00b10.32 51.43\u00b13.56 19.85\u00b11.40 13.62\u00b11.62 4.7 Hype-ANE [30] 82.29\u00b12.70 18.84\u00b10.32 11.93\u00b10.09 70.92\u00b10.43 56.92\u00b12.41 31.58\u00b11.17 63.34\u00b14.19 33.40\u00b12.55 32.94\u00b12.70 4.4 CurvGAN (Ours) 89.74\u00b10.70 59.00\u00b10.56 55.99\u00b12.20 95.87\u00b10.86 42.50\u00b11.37 19.28\u00b10.64 88.67\u00b10.22 43.10\u00b11.78 35.21\u00b11.95 1.8 Table 3: Summary of link prediction AUC scores (%), node classification Micro-F1 and Macro-F1 scores (%) on real-world graphs. (Result: average score \u00b1 standard deviation; Bold: best; Underline: runner-up.) Method Cora Citeseer Polblogs Avg.Rank AUC Micro-F1 Macro-F1 AUC Micro-F1 Macro-F1 AUC Micro-F1 Macro-F1 GAE [22] 86.12\u00b10.87 80.92\u00b10.99 79.55\u00b11.32 87.25\u00b11.26 58.50\u00b13.31 50.41\u00b13.32 83.55\u00b10.62 89.50\u00b10.53 89.42\u00b10.53 4.4 VGAE [22] 85.94\u00b10.05 79.95\u00b10.95 78.79\u00b10.97 85.72\u00b12.20 63.75\u00b11.39 55.47\u00b11.34 88.12\u00b10.64 87.02\u00b11.04 86.98\u00b10.88 5.8 DGI [58] 75.39\u00b10.29 74.09\u00b11.75 66.70\u00b11.91 81.30\u00b13.57 73.16\u00b10.68 63.27\u00b10.63 76.33\u00b13.35 87.11\u00b11.18 87.06\u00b11.19 6.3 G2G [6] 84.47\u00b10.70 82.13\u00b10.58 81.14\u00b10.40 90.34\u00b11.44 71.03\u00b10.27 66.44\u00b10.32 91.02\u00b10.29 87.52\u00b10.28 87.51\u00b10.28 3.3 GraphGAN [59] 82.50\u00b10.64 76.40\u00b10.21 76.80\u00b10.34 74.50\u00b10.02 49.80\u00b11.02 45.70\u00b10.13 69.80\u00b10.26 77.45\u00b10.64 76.90\u00b10.43 8.4 ANE [10] 83.10\u00b10.57 83.00\u00b10.51 81.90\u00b11.40 83.00\u00b11.20 50.20\u00b10.12 49.50\u00b10.61 73.09\u00b10.76 95.07\u00b10.65 95.06\u00b10.65 4.8 P-VAE [32] 86.72\u00b10.67 79.57\u00b12.16 77.50\u00b12.46 88.69\u00b11.00 67.91\u00b11.65 60.20\u00b11.93 85.40\u00b12.23 87.74\u00b11.28 87.68\u00b11.26 4.2 Hype-ANE [30] 74.50\u00b10.53 80.70\u00b10.07 79.20\u00b10.28 85.80\u00b10.53 64.40\u00b10.29 58.70\u00b10.02 64.27\u00b10.73 95.62\u00b10.35 95.61\u00b10.36 5.0 CurvGAN (Ours) 94.00\u00b10.63 84.50\u00b10.53 85.60\u00b10.25 93.80\u00b10.15 65.60\u00b10.27 59.60\u00b10.21 93.88\u00b10.42 88.89\u00b10.17 87.65\u00b10.25 2.4 we equally partition all nodes into 5 communities with the intraclass and inter-class probabilities (\ud835\udc5d,\ud835\udc5e) = (0.21, 0.025). 
For the Barab\u00e1si-Albert graph, we set the number of edges from a new node to existing nodes to a random number between 1 and 10. For the Watts-Strogatz graph, each node is connected to 24 nearest neighbors in the cyclic structure, and the probability of rewiring each edge is set to 0.21. For each generated graph, we randomly remove 50% nodes as the test set with other 50% nodes as the positive samples and generate or sample the same number of negative samples. Real-world Datasets. We also conducted experiments on three real-world datasets: Cora [43] and Citeseer [23] are citation networks of academic papers; Polblogs [1] is political blogs in 2004 U.S. president election where nodes are political blogs and edges are citations between blogs. The training settings for the real-world datasets are the same as settings for synthetic datasets. 4.2 Experimental Setup Baselines. To evaluate the proposed CurvGAN , we compare it with a variety of baseline methods including: (1) Euclidean graph representation methods: We compare with other state-of-the-art unsupervised graph learning methods. GAE [22] and VGAE [22] are the autoencoders and variational autoencoder for graph representation learning; G2G [6] embeds each node of the graph as a Gaussian distribution and captures uncertainty about the node representation; DGI [58] is an unsupervised graph contrastive learning model by maximizing mutual information. (2) Euclidean graph generative adversarial networks: GraphGAN [59] learns the sampling distribution to sample negative nodes from the graph; ANE [10] trains a discriminator to push the embedding distribution to match the fixed prior; (3) Hyperbolic graph representation learning: P-VAE [32] is a variational autoencoder by using Poincar\u00e9 ball model in hyperbolic geometric space; Hyper-ANE [30] is a hyperbolic adversarial network embedding model by extending ANE to hyperbolic geometric space. Settings. The parameters of baselines are set to the default values in the original papers. For CurvGAN, we choose the numbers of generator and discriminator training iterations per epoch \ud835\udc5b\ud835\udc3a \ud835\udc52= 10,\ud835\udc5b\ud835\udc37 \ud835\udc52= 10. The node embedding dimension of all methods is set to 16. The reported results are the average scores and standard deviations over 5 runs. All models were trained and tested on a single Nvidia V100 32GB GPU. 4.3 Synthetic Graph Datasets To verify the topology-preserving capability, we evaluate our method on synthetic graphs generated by the well-accepted graph theoretical models: SBM, BA, and WS graphs. These three synthetic graphs can represent three typical topological properties: the SBM graph has more community-structure, the BA scale-free graph has more tree-like structure, and the WS small-world graph has more cyclic structure. We evaluate the performance, generalization and robustness of our method comprehensively on these graphs with different topological properties. \fCurvature Graph Generative Adversarial Networks WWW \u201922, April 25\u201329, 2022, Virtual Event, Lyon, France 10 30 50 70 90 Training Ratio (%) 40 50 60 70 80 90 100 AUC Score (%) AUC=59.94 AUC=61.19 AUC=51.43 AUC=70.05 AUC=95.75 Barab\u00e1si-Albert 10 30 50 70 90 Training Ratio (%) 40 50 60 70 80 90 100 AUC Score (%) AUC=49.57 AUC=67.40 AUC=74.66 AUC=72.75 AUC=88.14 Watts-Strogatz VGAE ANE -VAE Hype-ANE CurvGAN (a) Generalization analysis on synthetic BA and WS. 
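For reference, the three synthetic graphs described above can be reproduced with standard NetworkX generators, as in the sketch below. Note that NetworkX's Barabási-Albert generator uses a fixed attachment count, so drawing m once per graph only approximates the per-node randomization described in the text; seeds and variable names are illustrative.

```python
import random
import networkx as nx

n, n_blocks = 1000, 5

# SBM: 5 equal communities, intra-/inter-class probabilities (p, q) = (0.21, 0.025).
sizes = [n // n_blocks] * n_blocks
probs = [[0.21 if i == j else 0.025 for j in range(n_blocks)] for i in range(n_blocks)]
sbm_graph = nx.stochastic_block_model(sizes, probs, seed=0)

# BA: each new node attaches to m existing nodes, with m drawn from [1, 10].
ba_graph = nx.barabasi_albert_graph(n, m=random.randint(1, 10), seed=0)

# WS: every node is wired to its 24 nearest neighbours on a ring,
# and each edge is rewired with probability 0.21.
ws_graph = nx.watts_strogatz_graph(n, k=24, p=0.21, seed=0)

print(sbm_graph.number_of_edges(), ba_graph.number_of_edges(), ws_graph.number_of_edges())
```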
5 10 15 20 25 Perturbation Rate(%) 10 20 30 40 50 Accuracy (%) ACC=18.81 ACC=31.32 ACC=19.44 ACC=18.80 ACC=31.59 Random Attack (add) 5 10 15 20 25 Perturbation Rate(%) 10 20 30 40 50 Accuracy (%) ACC=18.81 ACC=31.32 ACC=29.16 ACC=39.40 ACC=39.83 Random Attack (remove) VGAE ANE -VAE Hype-ANE CurvGAN (b) Robustness analysis of the edges attack on SBM. Figure 4: Generalization and robustness analysis on synthetic data. (a) Ground truth. (b) VGAE. (c) P-VAE. (d) CurvGAN. Figure 5: Visualization of classification results for random attack of 10% extra edges on SBM graph. Performance on Benchmarks. Table 2 summarizes the performance of CurvGAN and all baselines on the synthetic datasets. For the link prediction task, the performance of a model indicates the capture capability of topological properties. It can be observed that the hyperbolic geometric model performs better in SBM and BA graphs than in WS graphs. The reason is that SBM and BA graphs are more \"tree-like\" than the WS graph. Our CurvGAN outperforms all baselines in three synthetic graphs. The result shows that a good geometry prior is very important for the topology-preserving. CurvGAN can adaptively select hyperbolic or spherical geometric space by estimating the optimal geometric prior. For the node classification task, the GAN-based methods generally outperform other methods because the synthetic graphs only have topological structure and no node features. The GAN-based method can generate more samples to help the model fit the data distribution. The results show that our CurvGAN also has the best comprehensive competitiveness in node classification benefit from the stronger negative samples. Generalization Analysis. We evaluate the generalization capability of the model by setting different proportions of the training set. Figure 4 (a) shows the performances of link prediction with different training ratios. CurvGAN significantly outperforms all baselines, even when the training ratio is small. In addition, CurvGANgains more stable performances than other GAN-based methods across all datasets, which demonstrates the excellent structure-preserving capability of the network latent geometric space. We observe an interesting phenomenon: the non-Euclidean geometry models have very smooth performances on the synthetic graphs with a single topological property. It demonstrates again that a single negative curvature geometry lacks generalization capability for different graph datasets. Robustness Analysis. Considering a poisoning attack scenario, we leverage the RAND attack, provided by DeepRobust [26] library, to randomly add or remove fake edges into the graph. Specifically, we randomly remove and add edges of different ratios (from 5% to 25%) as the training data respectively, and randomly sample 10% nodes edges from the original network as the test data. The results are shown in Figure 4 (b). CurvGAN consistently outperforms Euclidean and hyperbolic models in different perturbation rates. Figure 5 shows the visualization of an edge attack scenario. It can observe that P-VAE has no significant improvement than Euclidean VGAE. Since noisy edges may perturb some tree-like structures of the original SBM graph, leading to hyperbolic models no longer suitable for the perturbed graph topology. Overall, our CurvGAN has significant advantages in terms of robustness. 
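The RAND perturbation used in this robustness study amounts to randomly adding or removing a fixed fraction of edges before training. The helper below is a hedged stand-in that we wrote to make the setting concrete; DeepRobust's actual implementation may differ in details such as sampling and symmetry handling.

```python
import random
import networkx as nx

def rand_edge_attack(graph, rate=0.10, mode="add", seed=0):
    """Randomly perturb rate * |E| edges of a graph by adding or removing them."""
    rng = random.Random(seed)
    g = graph.copy()
    n_perturb = int(rate * g.number_of_edges())
    if mode == "remove":
        g.remove_edges_from(rng.sample(list(g.edges()), n_perturb))
    else:  # add fake edges between currently unconnected node pairs
        nodes = list(g.nodes())
        added = 0
        while added < n_perturb:
            u, v = rng.sample(nodes, 2)
            if not g.has_edge(u, v):
                g.add_edge(u, v)
                added += 1
    return g

# Example: poison a toy graph with 10% extra random edges before training.
poisoned = rand_edge_attack(nx.karate_club_graph(), rate=0.10, mode="add")
```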
4.4 Real-world Graph Datasets To verify the practicality of our model, we evaluate CurvGAN in terms of performance, generalization, and robustness on real-world datasets for two downstream tasks: link prediction and node classification. As in Section 4.3, we use the same unsupervised training setup on three real-world datasets, Cora, Citseer, and Polblogs. In addition, we also analyze the computational efficiency of CurvGAN and all baselines. \fWWW \u201922, April 25\u201329, 2022, Virtual Event, Lyon, France Jianxin Li and Xingcheng Fu, et al. 10 30 50 70 90 Training Ratio (%) 40 50 60 70 80 90 100 AUC Score (%) AUC=84.94 AUC=86.81 AUC=85.72 AUC=75.17 AUC=96.14 (a) Cora. 10 30 50 70 90 Training Ratio (%) 40 50 60 70 80 90 100 AUC Score (%) AUC=85.72 AUC=83.52 AUC=88.69 AUC=85.44 AUC=97.35 (b) Citeseer. VGAE ANE -VAE Hype-ANE CurvGAN (a) Generalization analysis on Cora and Citeseer. 5 10 15 20 25 Perturbation rate(%) 40 50 60 70 80 90 100 Accuracy (%) ACC=66.86 ACC=62.84 ACC=67.36 ACC=54.79 ACC=75.92 Random Attack (add) 5 10 15 20 25 Perturbation rate(%) 40 50 60 70 80 90 100 Accuracy (%) ACC=72.54 ACC=77.83 ACC=73.36 ACC=67.09 ACC=79.19 Random Attack (remove) VGAE ANE -VAE Hype-ANE CurvGAN (b) Robustness analysis of the edges attack on Cora. Figure 6: Generalization and robustness analysis on real-world data. 0 1 2 3 Average Training Time of Each Epoch (s) 75 80 85 90 95 100 AUC Score (%) CurvGAN AUC: 94.63 Time: 0.275 Hype-ANE AUC: 87.64 Time: 1.862 -VAE AUC: 85.8 Time: 0.2375 GraphGAN AUC: 86.1 Time: 0.233 ANE AUC: 83.1 Time: 0.733 G2G AUC: 84.47 Time: 0.247 DGI AUC: 75.39 Time: 0.228 VGAE AUC: 85.94 Time: 0.069 GAE AUC: 86.12 Time: 0.055 Figure 7: Training efficiency analysis. Performance on Benchmarks. Table 3 summarizes the performance of CurvGAN and all baselines on three real-world datasets. For the link prediction task, our CurvGAN outperforms all baselines (including hyperbolic geometry models) in real data and can learn better structural properties based on correct topological geometry priors. In contrast to a single hyperbolic geometric space, a unified latent geometric space can improve benefits for learning better graph representation in real-world datasets with complex topologies. For the node classification task, we combine the above link prediction objective as the regularization term in node classification tasks, to encourage embeddings preserving the network structure. Table 3 also summarizes Micro-F1 and Macro-F1 scores of all models on three real-world datasets. It can be observed that the Euclidean models have comparative performance. The reason is that the node labels are more dependent on other features (e.g. node\u2019s attributes or other information) than topological features. Generalization Analysis. Figure 6 (a) shows the performances of link prediction with different training ratios. CurvGAN significantly outperforms all baselines even when the training ratio is small. In addition, we find that the stability of the autoencoders VGAE and P-VAE is higher than the GAN-based methods (ANE, GraphGAN, Hype-ANE and our CurvGAN), although their performances are outperformed by CurvGAN rapidly. The reason is the GAN-based method needs more samples to fit the data distribution. Robustness Analysis. To evaluate the robustness of the model, we also perform a poisoning attack RAND by DeepRobust [26] on the real-world data, and the setting is the same as in the robustness analysis in Section 4.3. 
Figure 6 (b) shows that CurvGAN and all baselines under the edges attack scenario on Cora. Our CurvGAN always has better performance even when the network is very noisy. Riemannian geometry implies the prior information of the network, making CurvGAN has the excellent denoising capability. Efficiency Analysis. Figure 7 illustrates the training time of CurvGAN and baselines on Cora for link prediction. CurvGAN has both the best performance and second-best efficiency. It can be observed that our CurGAN has the best computational efficiency in the GAN-based method (ANE, GraphGAN, and Hype-ANE). In general, the results show that the comprehensive evaluation of our model outperforms baselines on all datasets, which indicates that the network latent geometry can significantly improve the computational efficiency and scalability of network embedding. 5" + } + ], + "Yuecen Wei": [ + { + "url": "http://arxiv.org/abs/2312.12183v3", + "title": "Poincar\u00e9 Differential Privacy for Hierarchy-Aware Graph Embedding", + "abstract": "Hierarchy is an important and commonly observed topological property in\nreal-world graphs that indicate the relationships between supervisors and\nsubordinates or the organizational behavior of human groups. As hierarchy is\nintroduced as a new inductive bias into the Graph Neural Networks (GNNs) in\nvarious tasks, it implies latent topological relations for attackers to improve\ntheir inference attack performance, leading to serious privacy leakage issues.\nIn addition, existing privacy-preserving frameworks suffer from reduced\nprotection ability in hierarchical propagation due to the deficiency of\nadaptive upper-bound estimation of the hierarchical perturbation boundary. It\nis of great urgency to effectively leverage the hierarchical property of data\nwhile satisfying privacy guarantees. To solve the problem, we propose the\nPoincar\\'e Differential Privacy framework, named PoinDP, to protect the\nhierarchy-aware graph embedding based on hyperbolic geometry. Specifically,\nPoinDP first learns the hierarchy weights for each entity based on the\nPoincar\\'e model in hyperbolic space. Then, the Personalized Hierarchy-aware\nSensitivity is designed to measure the sensitivity of the hierarchical\nstructure and adaptively allocate the privacy protection strength. Besides, the\nHyperbolic Gaussian Mechanism (HGM) is proposed to extend the Gaussian\nmechanism in Euclidean space to hyperbolic space to realize random\nperturbations that satisfy differential privacy under the hyperbolic space\nmetric. Extensive experiment results on five real-world datasets demonstrate\nthe proposed PoinDP's advantages of effective privacy protection while\nmaintaining good performance on the node classification task.", + "authors": "Yuecen Wei, Haonan Yuan, Xingcheng Fu, Qingyun Sun, Hao Peng, Xianxian Li, Chunming Hu", + "published": "2023-12-19", + "updated": "2024-02-29", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.CR" + ], + "main_content": "Introduction The inherent topological properties of graphs have been widely leveraged in graph representation learning as inductive biases (Sun et al. 2022c; Li et al. 2023; Zhang et al. 2024; Yuan et al. 2024). Real-world graph data typically exhibit intricate topological structures with diverse properties (Sun et al. 2021b), and the hierarchy frequently assumes a pivotal role, which naturally mirrors human behavior within hierarchical organizations. 
This property assists in the learning of graph representation by capturing implicit data organization patterns (Papadopoulos et al. 2012). *Corresponding authors Copyright \u00a9 2024, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved. (a) Latent Tree Structure. (b) Neighbour Classification. (c) Hierarchical Classification. Figure 1: Privacy leakage on the hierarchical structure. However, dealing directly with hierarchy in the topological space proves to be challenging in the Euclidean embedding space. In contrast to the Euclidean space, the hyperbolic geometric space can be conceptualized as a continuous treelike structure, naturally capable of representing the topological hierarchy (Sun et al. 2022a, 2023b). The constant negative curvature of the hyperbolic geometric space imparts it with a more potent ability for hierarchical representation compared to the flat Euclidean space (Krioukov et al. 2010; Sun et al. 2023a). Recent works in hyperbolic representation learning (Ganea, B\u00b4 ecigneul, and Hofmann 2018; Tifrea, B\u00b4 ecigneul, and Ganea 2019) have achieved noteworthy success by harnessing the hierarchy, where the hierarchy is regarded as the prompt for balancing the aggregation weights. These methods manifest how the hierarchy can be utilized to enhance the effectiveness of graph representation learning. However, while representation learning in the hyperbolic geometric space offers benefits, it comes with the potential drawback of increased susceptibility to the leakage of sensitive user information. Hierarchical structures are prevalent in graph data, as depicted in Figure 1(a). Euclidean and hyperbolic spaces exhibit distinct perceptual capabilities for hierarchical structures. For instance, in Euclidean space, clustering based on node distances can reflect whether an individual is a patient, while node distances in hyperbolic space arXiv:2312.12183v3 [cs.LG] 29 Feb 2024 \fcan indicate the type of ailment. In addition, traditional Euclidean Graph Neural Networks (GNNs) primarily focus on neighborhood aggregation and struggle to capture latent hierarchical tree-like structures, as illustrated in Figure 1(b). Consequently, the feature perturbation conducted by conventional GNN privacy frameworks only disturbs connections among nodes of the same category, to thwart attacker inferences. However, although hyperbolic GNNs provide a direct approach to learning hierarchical structural features, they concurrently elevate the risk of privacy leakage, as depicted in Figure 1(c). Attackers, without accessing sensitive information, can deduce patients\u2019 medical conditions. For example, within the hierarchical structure, an attacker can infer that patients affiliated with psychiatric departments are likely to have mental illnesses, and patients in infectious disease departments have contagious diseases. To address privacy concerns, notable privacy-preserving techniques have been proposed. Differential privacy (DP) (Dwork 2006; Dwork et al. 2006) stands out as one of the most prominent methods due to its robust mathematical foundation. However, existing DP methods tailored for graph representation learning have predominantly centered on safeguarding node features and neighborhood structures (Ren et al. 2022; Yang et al. 2021; Wei et al. 2022), with a limited focus on preserving implicit topological properties such as hierarchy. 
This underscores the need for novel strategies that holistically address both the intricate topological features and privacy considerations within graph representation learning. To utilize the geometric prior of the hyperbolic space to capture the hierarchy properties and guarantee that the sensitive information in the hierarchy, the major problems are as follows: (1) Traditional privacy-preserving methods usually consider the privacy between neighbors or relations to generate perturbation noise, which is weak to capture the hierarchical structure of the graph. (2) Existing privacy-preserving techniques measure the privacy of nodes in Euclidean space, which doesn\u2019t work in hyperbolic space due to the Gaussian mechanism based on the standard normal distribution just defined in Euclidean space. Present work. To address the above problems, we propose a novel Poincar\u00b4 e Differential Privacy framework for protecting hierarchy-aware graph embedding based on hyperbolic geometry, named PoinDP1. First, the Personalized Hierarchy-aware Sensitivity (PHS) is designed to utilize the Poincar\u00b4 e model to capture the interand intrahierarchy node information. PHS can allocate the privacy budget between radius (inter-hierarchy) and angle (intrahierarchy) and learn high-quality graph representations effectively while satisfying the differential privacy guarantee. Then, a novel Hyperbolic Gaussian Mechanism (HGM) extends the Gaussian mechanism in Euclidean space to hyperbolic space to realize random perturbations that satisfy differential privacy under the hyperbolic space metric for the first time. Extensive experimental results conducted on five datasets empirically demonstrate that PoinDP has consistent advantages. We summarize our contributions as follows: 1Code is available at https://github.com/WYLucency/PoinDP. \u2022 We propose a novel Poincar\u00b4 e differential privacy for hierarchy-aware graph embedding framework named PoinDP. To the best of our knowledge, this is the first work that presents the privacy leakage problem due to the hierarchical structure and gives a definition of the privacy problem in terms of hyperbolic geometry. \u2022 In PoinDP, the Personalized Hierarchy-aware Sensitivity can measure the sensitivity of the hierarchical structure and adaptively allocate the privacy protection strength. Besides, we extend the Gaussian mechanism in Euclidean space to hyperbolic space to realize random perturbations that satisfy differential privacy under the hyperbolic space metric for the first time, which can be used in other hyperbolic privacy works to promote community development. \u2022 Experiments demonstrate that PoinDP can effectively resist attackers with hierarchical information enhancement, and learn high-quality graph representations while satisfying privacy guarantees. Related Work Graph Neural Networks In the field of graph representation learning, Graph Neural Networks (GNNs) have achieved remarkable success in learning embeddings from graph-structured data due to their powerful graph representation capabilities, while are widely extended for downstream tasks in complex scenarios (Kipf and Welling 2017; Velickovic et al. 2018; Hamilton, Ying, and Leskovec 2017). However, traditional GNNs operating in Euclidean space often fall short of effectively utilizing the topology properties of graphs, leading to suboptimal semantic understanding, particularly overlooking the hierarchical relationships within the data, which is of vital importance in real-world scenarios. 
Recently, certain categories of data (e.g., hierarchical, scale-free, or spherical data) have demonstrated superior representation capabilities when modeled through nonEuclidean geometries. This has led to a burgeoning body of work on deep learning (Tifrea, B\u00b4 ecigneul, and Ganea 2019; Sala et al. 2018; Ganea, B\u00b4 ecigneul, and Hofmann 2018). Notably, hyperbolic geometric spaces have garnered significant attention and adoption within the domain of graph representation learning (Liu, Nickel, and Kiela 2019; Chami et al. 2019; Bachmann, B\u00b4 ecigneul, and Ganea 2020; Sun et al. 2021a; Fu et al. 2023; Sun et al. 2022d; Wu et al. 2022), attributed to their inherent capacity and prowess in preserving hierarchical structures (Sun et al. 2022b). However, with the evolution of increasingly intricate models aimed at extracting potential correlations among nodes, the complex structure inadvertently amplifies the attackers\u2019 capacity for inference, enabling lateral enhancement of their inferential ability. Unfortunately, the majority of GNN-based methodologies have been demonstrated to possess vulnerabilities susceptible to inference attacks (Olatunji, Nejdl, and Khosla 2021; Zhang et al. 2022). Differentially Private GNNs Differential privacy (DP) (Dwork 2006) is a privacy protection method and introduces random noise perturbation \fmechanisms to the original data, ensuring that attackers cannot infer the original data from the outputs of models. For graph privacy protection, we divide the existing DP method into two levels: node-level and edge-level. For node-level DP, the works focus on perturbing node features or node labels to execute privacy protection. AsgLDP (Wei et al. 2020) proposed randomized attribute lists (RAL) to perturb each bit of node feature by the randomized response, and LPGNN (Sajadmanesh and Gatica-Perez 2021) used a multi-bit mechanism to sample perturbed features while using the randomized response to mask node labels. GAP (Sajadmanesh et al. 2023) perturbed the out of each aggregation using Gaussian noise while saving them. HeteDP (Wei et al. 2022) utilized meta-path to adapt data heterogeneity while personalized node perturbation by multi-attention mechanism. For edge-level DP, the target of noise addition is the topology of the graph, e.g. the degree and the adjacency matrix that represent the information about the interactions between nodes. LDPGEN (Qin et al. 2017) computed each subgroup degree vector on the client, then uploaded it to the server and exerted Laplace noise to the degree vectors. The graph structure generation uses the BTER model. Solitude (Lin, Li, and Wang 2022) used the randomized response to flip graph adjacency matrix and a regularization term to optimize noise. LF-GDPR (Ye et al. 2022) perturbed node degrees and adjacency matrix in the client and the server will receive a double-degree message to aggregate and calibrate. However, most DP schemes are deficient in adaptability to complex structures and hardly explore potential properties to adjust perturb design. Preliminary Differential privacy (Dwork 2006) is considered to be one of the quantifiable and practical privacy-preserving data processing techniques. It protects privacy by adding noise to the query results, and an attacker cannot infer any information from these query results even if he or she knows all the records except this particular individual information. Definition 1 ((\u03f5, \u03b4)-Differential Privacy). 
Given two adjacent datasets D and D\u2032 differ by at most one record, and they are protected via a random algorithm M, which satisfies (\u03f5, \u03b4)-differential privacy (DP) (Dwork et al. 2006). For any possible subset of output O \u2286Range (M), we have Pr [M (D) \u2208O] \u2264e\u03f5Pr [M (D\u2032) \u2208O] + \u03b4, (1) where \u03f5 is the privacy budget, \u03b4 is a probability to break \u03f5-DP and Range(M) denotes the value range of M output. Definition 2 (Sensitivity). Given any query S on D, the sensitivity (Dwork et al. 2006) for any neighboring datasets D and D\u2032 are defined as \u22062S = max D,D\u2032 \u2225S (D) \u2212S (D\u2032)\u22252 . (2) Definition 3 (Gaussian Mechanism). Let S : D \u2192OK be an arbitrary K-dimensional function and define its L2 sensitivity to be \u22062S. The Gaussian Mechanism (Dwork and Roth 2014) with parameter \u03c3 adds noise scaled to N \u00000, \u03c32\u0001 to each of the n components of the output. Given \u03f5 \u2208(0, 1), the Gaussian Mechanism is (\u03f5, \u03b4)-DP with \u03c3 \u2265 p 2 ln (1.25/\u03b4)\u22062S/\u03f5. (3) For a graph G, the overall form of the perturbed noise is defined as M (G) \u25b3 = S (G) + N \u0010 0, (\u25b32S)2 \u03c32\u0011 , (4) where \u22062S controls the amount of noise in the generated Gaussian distribution from which we will sample noise into the target. Our goal is to keep (\u03f5, \u03b4)-DP effective in highdimensional projection spaces and message passing while maintaining classification performance. Compared to the traditional Euclidean space, hyperbolic space has a stronger hierarchical structure. The Poincar\u00b4 e ball model (Nickel and Kiela 2017) is a commonly used isometric model in hyperbolic space, and we exploit it to capture the latent hierarchical structure of the graph. Definition 4 (Poincar\u00b4 e Ball Model). Given a constant negative curvature c, Poincar\u00b4 e Ball Bn is a Riemannian manifold (Bn c , gB x ), where Bn c is an n-dimensional ball of radius 1/\u221ac and gB x is metric tensor. The Poincar\u00b4 e distance between the node pair (x, y) is defined as dBn c (x, y) = 2 \u221ac tanh\u22121(\u221ac \u2225\u2212x \u2295c y\u2225), (5) where \u2295c is M\u00a8 obius addition and \u2225\u00b7\u2225is L2 norm. Definition 5 (Poincar\u00b4 e Norm). The Poincar\u00b4 e Norm is defined as the distance of any point x \u2208Bn c from the origin of Poincar\u00b4 e ball: NormBn c (x) = \u2225x\u2225Bn c = 2 \u221ac tanh\u22121(\u221ac \u2225x\u2225). (6) Our Approach In this section, we introduce the overall learning framework of PoinDP, a unified hierarchy-aware graph neural network for privacy guarantees with differential privacy, and find out how to achieve privacy protection in the hierarchy structure. The framework is shown in Figure 2, and the overall process of PoinDP is shown in Algorithm 1. Personalized Hierarchy-aware Sensitivity To the best of our knowledge, as described in many existing privacy-preserving models, the sensitivity of differential privacy is usually measured by the L2 norm (Euclidean distance), which makes it difficult to measure the non-Euclidean structure accurately. Therefore, the sensitivity of the traditional method measured under Euclidean space is not accurate when used in the hierarchy. Moreover, to solve the personalized privacy requirements of hierarchical structures, we aim to be able to explicitly design for interand intra-hierarchy sensitivity. 
Inspired by the hyperbolic geometric prior, we design a novel Personalized Hierarchyaware Sensitivity based on the Poincar\u00b4 e embedding (Nickel \fFigure 2: Overview of PoinDP. Given G as input, PoinDP consists of the following three steps: (1) PHS Computing: We first obtain the Poincar\u00b4 e embedding using the adjacency matrix A and compute the PHS. (2) Noise generation: The sensitivity is utilized to perform the HGM in order to obtain the hyperbolic noise satisfying (\u03f5, \u03b4)-DP. (3) Perturbation & Optimization: The noise is injected into GNNs, and the privacy budget allocation is optimized according to the downstream task feedback. and Kiela 2017) for generating random perturbation noise with adaptive interand intra-hierarchy correlations. Hierarchy-aware node representation. First, we need to explicitly represent the graph hierarchy based on the Poincar\u00b4 e embedding. We utilize the Poincar\u00b4 e embedding to learn a hierarchy-aware node representation, which is a shallow model, and minimize a hyperbolic distance-based loss function. Then We learn node embeddings eV = {ei}\u2225V\u2225 i=1 (ei \u2208Bn, V \u2208V)2 which represents the hierarchy of nodes in the Poincar\u00b4 e ball model based on Eq. (5). The embeddings can be optimized as \u0398\u2032 \u2190argmin \u0398 L(\u0398), s.t. \u2200\u03b8i \u2208\u0398 : \u2225\u03b8i\u2225< 1/c, (7) where L(\u0398) is a softmax loss function that approximates the dependency between nodes, \u0398 is the parameters of Poincar\u00b4 e ball model. We can obtain the radius and angle of the node on the Poincar\u00b4 e disk based on the Poincar\u00b4 e embedding eV . A smaller NormBn(eu) indicates that u is located at the top of the hierarchy. The node with a top-level hierarchy is approximately near the center of the disk and plays a more important role in the graph. Then we can give the inter-hierarchy sensitivity by using the radius r of nodes on Poincar\u00b4 e disk. Definition 6 (Inter-hierarchy Sensitivity). Given V and V \u2032 are the neighboring subsets of graph nodes, and V and V \u2032 only differ by one node. The inter-hierarchy sensitivity can be defined as: \u2206BnPr = max V,V \u2032 \f \f \fNormBn(eV ) \u2212NormBn(eV \u2032) \f \f \f . (8) On the other hand, since the angle sector on the hyperbolic disk indicates the node similarity or the community, 2We use the Poincar\u00b4 e ball model with standard constant negative curvature \u2225c\u2225= 1, the curvature parameter c will be omitted in our method. we use it to measure the correlations of nodes within a hierarchy level. As the Poincar\u00b4 e ball is conformal to Euclidean space (Ganea, B\u00b4 ecigneul, and Hofmann 2018), the angle between two vector u, v at the radius r is given by \u03b1(u, v)|eV = gBn x (u, v) p gBp x (u, u) p gBn x (v, v) = \u27e8eu, ev\u27e9 \u2225eu\u2225\u2225ev\u2225. (9) Similarly, we measure the correlation within the intrahierarchy based on the angle \u03b1 between any two nodes on the Poincar\u00b4 e disk. Definition 7 (Intra-hierarchy Sensitivity). Given V and V \u2032 are the neighboring subsets of graph nodes, and V and V \u2032 only differ by one node. The intra-hierarchy sensitivity can be defined as: \u2206BnP\u03b1 = max V,V \u2032 \u2225\u03b1 (V, V \u2032) |e(V \u222aV \u2032)\u2225Bn . 
(10) Then we utilize the inter-hierarchy \u2206BnPr and intrahierarchy \u2206BnP\u03b1 sensitivities separately to focus on the importance of nodes at different radius and angles and generate perturbation noises that satisfy personalization. Hyperbolic Gaussian Mechanism The existing works widely used differential privacy strategies based on Laplace noise or Gaussian noise to achieve protection. However, due to the difference in metric scales, their noise computation can only be performed in flat Euclidean space, which is difficult to adapt to curved hyperbolic space. To address the privacy issues proposed by the hierarchy of graphs, we design a Hyperbolic Gaussian Mechanism that will extend the Gaussian mechanism in Euclidean space to hyperbolic space based on the Wrapped Gaussian Distribution (Nagano et al. 2019) to realize stochastic perturbations that satisfy differential privacy in the metric of hyperbolic space. The hyperbolic Gaussian \fdistribution with c = 1 is defined as NBn(z|\u00b5, \u03c32 \u03f5 I) = N(\u03bb\u00b5 log\u00b5(z)|0, \u03c32 \u03f5 I) \u00b7 \u03b3((z|\u00b5)), with \u03b3(z|\u00b5) = \u0012 dBn(\u00b5, z) sinh dBn(\u00b5,z) \u0013n\u22121 , (11) where \u00b5 \u2208Bn is mean parameter, \u03c3\u03f5 \u2208Rn is standard deviation, log\u00b5(\u00b7) is the logarithm map function,and \u03b3 represents the spatial mapping and normalization. Hyperbolic Gaussian Mechanism. Let f : B|X| \u2192Rn be an arbitrary n-dimensional function, and define its hyperbolic sensitivity to be \u2206Bnf = maxadjacent(D,D\u2032) \u2225f(D) \u2212 f(D\u2032)\u2225Bn. The Hyperbolic Gaussian Mechanism with parameters \u03c3 adds noise scaled to NBn(\u00b7|0, \u03c32I) to each of the n components of the output. Theorem 1. Let \u03f5 \u2208 (0, 1) be arbitrary. For c2 > 2 ln(1.25\u03b3(\u00b7|\u00b5)/\u03b4), the Hyperbolic Gaussian Mechanism with parameter \u03c3 \u2265 c log\u00b5(\u2206Bnf)\u03b3(\u00b7|\u00b5)/\u03f5 is (\u03f5, \u03b4)differentially private on hyperbolic space. To satisfy the (\u03f5, \u03b4)-differentially private in hyperbolic space, the hyperbolic sensitivity and hyperbolic Gaussian noise sampling need to be mapped to the tangent space by logarithm map function log\u00b5(\u00b7), and the privacy budget \u03f5 and parameter \u03b4 also need to be isometric mapped in the tangent space of \u00b5. Please refer to Appendix A.1 for the detailed proof. According to the above, we can obtain two kinds of perturbation noise based on the inter-hierarchy \u2206BnPr and intra-hierarchy \u2206BnP\u03b1 sensitivities. The Hyperbolic Gaussian Noise can be generated by \u03b7\u03f5r r \u223cNBn(z|\u00b5, c2 log\u00b5(\u2206BnPr)2\u03b3(z|\u00b5)2/\u03f52 rI), \u03b7\u03f5\u03b1 \u03b1 \u223cNBn(z|\u00b5, c2 log\u00b5(\u2206BnP\u03b1)2\u03b3(z|\u00b5)2/\u03f52 \u03b1I). (12) Perturbation and Optimization To better utilize hierarchical information to provide hierarchy-aware privacy perturbations, we utilize GNNs to capture the domain representation of nodes. Given a graph G = (V, E) with node set V and edge set E. For the semisupervised node classification task, given the labeled node set VL and their labels YL, where each node vi is mapped to a label yi, our goal aims to train a node classifier f\u03b8 to predict the labels YU of remaining unlabeled nodes VU = V \\ VL. 
Therefore, following the aggregation and update mechanism of message passing in GNNs, we define the embedding learning of nodes u in (l + 1)-th layer as h(l+1) u = \u03c3 \uf8eb \uf8edX v\u2208V(u) cvW(l)h(l) v \uf8f6 \uf8f8, (13) where cv is a node-wise normalization constant and V(u) is the neighbor set. During the continuous iteration in the training stage, the features of node u will be updated with a hyperbolic Gaussian mechanism as \u02c6 h = h + \u03b7\u03b2\u00b7\u03f5 r + \u03b7(1\u2212\u03b2)\u00b7\u03f5 \u03b1 , (14) where \u03b2 is the normalized attention weight to learn the interand intra-hierarchy importance in nodes and rationally allocate the privacy budget, i.e. \u03f5r + \u03f5\u03b1 = \u03f5. Algorithm 1: Overall training process of PoinDP Input: Graph G = {V, E} with node labels Y; Number of training epochs E. Output: Predicted label \u02c6 Y. 1 Parameter \u0398 initialization; 2 Learning and optimizing node Poincar\u00b4 e embedding eV \u2190Eq. (7) and (9); 3 for e = 1, 2, \u00b7 \u00b7 \u00b7 , E do // Personalized Hierarchy-aware Sensitivity 4 Calculate hierarchy-aware sensitivity \u2206BnPr and \u2206BnP\u03b1 \u2190Eq. (8) and (10); // Hyperbolic Gaussian Mechanism 5 Calculate the hyperbolic Gaussian distribution NBn(z|\u00b5, \u03c32 \u03f5 I) \u2190Eq. (11); 6 Learning node embeddings hu \u2190Eq. (13); 7 Perturbing node embeddings b h by hyperbolic Gaussian noise \u2190Eq. (14); 8 Predict node labels \u02c6 Y and calculate the classification loss L \u2190Eq. (15); 9 Update model parameters by minimizing L. 10 end Therefore, we complete the noise generation and addition by hierarchy-aware mechanism. The objective for PoinDP is the average loss of predicting labels of unlabeled nodes, formulated as L = 1 \u2225VU\u2225 X v\u2208VU LG(\u02c6 hu,v, yv), (15) where LG stands for the loss of semi-supervised node classification and is implemented by cross-entropy in this work. Experiments In this section, we conduct experiments on five datasets and seven baselines to demonstrate the privacy protection adaptability and the graph learning effectiveness of PoinDP based on a semi-supervised node classification task. Dataset and Model Setup Datasets. For datasets (see Appendix B.1), we chose three citation networks (Cora, Citeseer and PubMed) and two E-commerce networks in Amazon (Computers and Photo). Baselines. For baselines, GCN (Kipf and Welling 2017), GAT (Velickovic et al. 2018), and HyperIMBA (Fu et al. 2023) are convolutional neural networks model, attention neural networks model, and hierarchy-aware model, respectively. VANPD and LaP (Olatunji, Nejdl, and Khosla 2021) which use the Laplace noise perturbation mechanism are privacy models in Euclidean space. RdDP, AtDP, and the proposed PoinDP are privacy methods in hyperbolic spaces. RdDP and AtDP are two variant models of DP noise generation, representing the addition of random noise and attention-aware noise, respectively. 
\fModel Cora Citeseer PubMed Computers Photo W-F1 M-F1 W-F1 M-F1 W-F1 M-F1 W-F1 M-F1 W-F1 M-F1 GCN 80.0\u00b11.1 80.1\u00b11.1 68.1\u00b10.2 68.6\u00b10.2 78.5\u00b10.5 78.5\u00b10.5 84.7\u00b12.3 82.5\u00b13.6 90.2\u00b11.4 89.6\u00b11.6 GAT 81.6\u00b11.1 81.8\u00b11.0 69.4\u00b11.2 70.0\u00b11.0 77.0\u00b10.5 77.0\u00b10.4 87.5\u00b10.4 87.1\u00b10.5 92.9\u00b10.2 92.8\u00b10.2 HyperIMBA 83.0\u00b10.3 83.1\u00b10.4 76.3\u00b10.2 73.4\u00b10.3 86.6\u00b10.1 86.5\u00b10.1 89.6\u00b10.2 89.6\u00b10.1 92.8\u00b10.3 92.5\u00b10.3 VANPD 40.9\u00b11.6 41.5\u00b11.6 35.6\u00b11.2 35.6\u00b11.2 61.8\u00b10.2 61.8\u00b10.3 74.1\u00b11.1 74.3\u00b11.0 84.4\u00b11.0 84.3\u00b11.1 LaP 62.6\u00b10.9 61.4\u00b10.9 55.0\u00b11.5 53.2\u00b11.5 68.3\u00b10.2 68.2\u00b10.2 80.1\u00b11.0 79.9\u00b11.0 88.9\u00b10.9 88.7\u00b11.0 RdDP 78.1\u00b10.2 75.1\u00b10.4 73.1\u00b10.5 70.0\u00b10.7 79.1\u00b10.7 78.6\u00b10.9 80.5\u00b10.9 76.1\u00b11.6 91.4\u00b10.2 90.1\u00b10.5 AtDP 81.0\u00b10.2 80.0\u00b10.2 74.8\u00b10.1 72.0\u00b10.2 83.5\u00b10.0 83.5\u00b10.0 81.5\u00b14.4 78.4\u00b17.2 91.7\u00b10.6 91.3\u00b10.7 PoinDP 78.2\u00b10.6 75.5\u00b11.2 75.5\u00b10.2 72.5\u00b10.2 83.8\u00b10.2 83.7\u00b10.2 86.9\u00b10.4 86.5\u00b10.5 92.6\u00b10.2 92.4\u00b10.3 Table 1: Weighted-F1 and Micro-F1 score of the node classification task. (Result: average score \u00b1 standard deviation; Bold: the best of baseline model; Underline: runner-up.) Model Cora Citeseer PubMed Computers Photo W-F1 \u2206(%) W-F1 \u2206(%) W-F1 \u2206(%) W-F1 \u2206(%) W-F1 \u2206(%) PoinDP 59.9\u00b11.4 74.0\u00b11.3 79.4\u00b10.5 83.8\u00b10.4 91.9\u00b10.5 PoinDP (w/o inter) 48.4\u00b12.6 \u219311.5 60.6\u00b14.1 \u219313.4 70.8\u00b11.4 \u21938.6 79.5\u00b11.8 \u21934.3 91.3\u00b10.4 \u21930.6 PoinDP (w/o intra) 48.8\u00b10.4 \u219311.1 60.1\u00b19.9 \u219313.9 76.6\u00b11.4 \u21932.8 82.6\u00b11.2 \u21931.2 91.3\u00b10.2 \u21930.6 PoinDP (w/o allocate) 51.2\u00b12.4 \u21938.7 69.5\u00b12.8 \u21934.5 77.2\u00b10.7 \u21932.2 82.7\u00b10.4 \u21931.1 91.5\u00b10.4 \u21930.4 Table 2: Weighted-F1 scores (% \u00b1 standard deviation) and improvements (%) results of Ablation Study. (Result: average score \u00b1 standard deviation; Bold: best.) Settings. PoinDP performs the semi-supervised node classification task to verify its privacy performance. Our dataset split follows the PyTorch Geometric. The learning rate lr is 0.005, the privacy budget \u03f5 to be [0, 1], and the training iterations E to be 200. For other model settings, we adopt the default optimal values in the corresponding papers. We conducted the experiments with NVIDIA GeForce RTX 3090 with 16GB of Memory. Performance Evaluation Performance of Node Classification. We evaluate PoinDP for node classification where privacy models are trained in \u03f5 = 1. The Weighted-F1 and Micro-F1 scores are reported in Table 1 where the best results are shown in bold and the runner-up results are shown in underline. It can be observed from the results that differential privacy-based models perform worse on the classification task compared with non-DP models while increasing the protection for sensitive information, which is caused by adding extra noise. Notably, PoinDP gets the absolute upper hand in terms of performance among privacy-preserving models compared to other privacy-preserving models. 
Because the hyperbolic noise is more adapted to the operations in the hierarchical structure, the destructive power in the Euclidean noise is significantly attenuated, resulting in uniformly higher performance. In conclusion, on the premise of improving the ability for privacy protection, PoinDP preserves the data availability as much as possible and improves the performance of the node classification task. Ablation Study. In this subsection, we conduct the ablation study for PoinDP to validate the model utility provided by our consideration of node hierarchies (w/o inter) and correlations (w/o intra) on a hierarchical structure, and to remove the adaptive privacy budget allocation (w/o allocate) to these two properties, i.e., the optimization of hyperbolic noise is removed. We set \u03f5 = 0.01 for easy observation. The results as shown in Table 2, indicate that missing any component of PoinDP leads to a degradation of the performance, where PoinDP (w/o allocate) has the smallest impact in most of the datasets, but numerically demonstrates the effectiveness of the privacy budget allocation. In addition, the one-sided perturbations in both PoinDP (w/o inter) and PoinDP (w/o intra) experiments reflect a strong influence on the model performance, suggesting that they have individualized perturbation rules for the nodes. Case Study and Analysis of Sensitivity. As a case study, to verify the effectiveness of PoinDP in privacy protection and its generalization ability to hierarchical structures, three splits for Cora are provided as Cora (random sampling training set), Top-level and Bottom-level (sampling the top 33% and bottom 33% of the training samples, respectively), and their training sets with moderate, weak, and strong sensitivity, respectively. Note that the nodes are ordered from highest to lowest according to the Poincar\u00b4 e weights, indicating that the nodes range in sensitivity from lowest to highest. As shown in Fig. 3, Cora, which randomly samples the training nodes, has the best overall performance and is comparable to the Top-level, and finally Bottom-level. For the analysis of sensitivity, we evaluate the model performance by setting different \u03f5 from 0.01 to 1, where \u03f5 measures the strength of the model\u2019s privacy protection, with smaller values indicating greater privacy protection power, \fFigure 3: Hierarchical sensitivity experiments on Cora. 0.0 0.2 0.4 0.6 0.8 1.0 Cumulative Error 0.0 0.2 0.4 0.6 0.8 1.0 Probability VANPD LaP RdDP AtDP PoinDP Figure 4: Cumulative error distribution with differential privacy-preserving method on Cora. less usability, and more information loss. As shown in Fig. 3, for the overly strict \u03f5 = 0.01, both the Top-level and the Bottom-level show the worst performance that can be understood, but in the looser limits, the Top-level samples perform well (these nodes are decisive for the downstream task so the amount of perturbation is low and the performance is almost close to Cora\u2019s). Whereas PoinDP in the Bottom-level samples adapts the requirement of needing a high degree of privacy preservation while providing the protection ability in a high privacy budget for sensitive data. Analysis of Noise Distribution. Fig. 4 shows that the noise mechanism of PoinDP by the cumulative error distribution. 
We compare five privacy-preserving models and find that the error accumulation for PoinDP grows the fastest and ends its accumulation at 0.5, indicating a focused imposition of noise, and reflecting the individualized hierarchical perturbation mechanism of PoinDP. However, others are slow to converge, indicating a high percentage of results with large error values, and they aimlessly put noise into the samples, leading to poor usability. Overall, our hyperbolic Gaussian mechanism can put noise for some samples in a focused manner, providing personalized protection capability. Visualization. We visualize the noise distribution of the four privacy models on the Cora dataset to intuitively represent the ability of our models to perceive hierarchical structures. Please refer to Appendix B.2 for other visualizations. Fig. 5 expresses the data as its whole with a hierarchical structure, where the colors represent the amount of noise and the position of each point is the layout of the node on the 160 180 200 220 240 (a) VANPD. 600 700 800 900 1000 1100 (b) PoinDP. Figure 5: Visualization of noise distribution on Poincar\u00b4 e disk for VANPD and PoinDP on Cora. Dataset Hyperbolicity \u03b4 PubMed \u03b4 = 1.65 Photo \u03b4 = 0.15 Metric AUC \u2191 Prec. \u2191 AUC \u2191 Prec. \u2191 Attack GCN 62.8\u00b11.5 63.8\u00b11.4 79.7\u00b12.5 80.7\u00b12.6 GAT 58.4\u00b12.4 59.3\u00b12.8 79.7\u00b10.9 80.4\u00b10.8 GCN+H 63.2\u00b10.2 64.0\u00b10.2 82.4\u00b10.4 83.4\u00b10.4 Metric AUC \u2193 Prec. \u2193 AUC \u2193 Prec. \u2193 Defense VANPD 51.1\u00b10.3 51.2\u00b10.5 68.1\u00b11.6 68.9\u00b11.5 LaP 51.9\u00b10.5 52.8\u00b10.5 70.3\u00b10.1 71.0\u00b10.1 PoinDP 46.7\u00b12.1 46.5\u00b13.0 37.4\u00b10.6 34.8\u00b12.0 Table 3: Membership Inference Attack (MIA) performance. (\u2191: the higher, the better; \u2193: the lower, the better) poincar\u00b4 e disk. As can be noticed in PoinDP in Fig. 5 (b), as the radius of the disk increases, the noise nodes become lighter in color and exhibit different colors at different angles, which fully demonstrates PoinDP\u2019s excellent ability to capture interand intra-hierarchy information. In contrast, the other models exhibit uniform perturbations to the hierarchy. In a nutshell, benefiting from the PHS and HGM mechanisms, PoinDP again demonstrates its effectiveness. Attack Experiment. We conduct Membership Inference Attack (MIA) (Olatunji, Nejdl, and Khosla 2021) and the results are reported in Table 3. Please refer to Appendix B.3 for the detailed attack settings and performance analysis. The conclusion is that hierarchical information H can enhance the attacker\u2019s reasoning ability, while PoinDP can provide superior protective capabilities." + } + ], + "Xiang Huang": [ + { + "url": "http://arxiv.org/abs/2403.11886v1", + "title": "QueryAgent: A Reliable and Efficient Reasoning Framework with Environmental Feedback based Self-Correction", + "abstract": "Employing Large Language Models (LLMs) for semantic parsing has achieved\nremarkable success. However, we find existing methods fall short in terms of\nreliability and efficiency when hallucinations are encountered. In this paper,\nwe address these challenges with a framework called QueryAgent, which solves a\nquestion step-by-step and performs step-wise self-correction. We introduce an\nenvironmental feedback-based self-correction method called ERASER. 
Unlike\ntraditional approaches, ERASER leverages rich environmental feedback in the\nintermediate steps to perform selective and differentiated self-correction only\nwhen necessary. Experimental results demonstrate that QueryAgent notably\noutperforms all previous few-shot methods using only one example on GrailQA and\nGraphQ by 7.0 and 15.0 F1. Moreover, our approach exhibits superiority in terms\nof efficiency, including runtime, query overhead, and API invocation costs. By\nleveraging ERASER, we further improve another baseline (i.e., AgentBench) by\napproximately 10 points, revealing the strong transferability of our approach.", + "authors": "Xiang Huang, Sitao Cheng, Shanshan Huang, Jiayu Shen, Yong Xu, Chaoyun Zhang, Yuzhong Qu", + "published": "2024-03-18", + "updated": "2024-03-18", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI" + ], + "main_content": "Introduction Recent advances in employing Large language models (LLMs) on various tasks have exhibited impressive performance (Brown et al., 2020; OpenAI, 2023). Among these tasks, Knowledge Base Question Answering (KBQA), which aims to answer questions over knowledge base (KB), has emerged as a critical and complex challenge, serving as an ideal testbed for assessing the reasoning capabilities of LLMs over structured data (Gu et al., 2023). However, despite their remarkable achievements, we find that existing LLM-backend KBQA methods fall short in both reliability (the credibility of results) and efficiency (i.e., running time, query \u2217This work is done during the internship at Microsoft. \u2020Equal contribution. ICL-based Agent-based QueryAgent Reliable and Efficient Step N Question Logic Form Step 1 ... Step N Question Logic Form Step 1 ... Question Logic Form ERASER End-to-end for a complex task\uff0cprone to hallucinate Suffer from error propagation Figure 1: QueryAgent compared with two mainstream KBQA paradigms employing LLMs. times, and API invocation cost). Following the popular In-Context Learning (ICL) paradigm, Li et al. (2023) and Nie et al. (2023) generate the target query with few-shot demonstrations. They consider LLMs as a black box and complete a complex task in one go. As a result, it lacks interpretability and is prone to hallucination (Yao et al., 2023), leading to lower accuracy of the top-1 candidate. To alleviate these issues, they employ beam search and self-consistency (Wang et al., 2023). However, these also result in numerous unreliable candidates, thus increasing the running time and query times. Typically, it requires querying thousands of SPARQL queries and several minutes to obtain the final answer. For a complex task, solving it step-by-step has emerged as a promising solution (Wei et al., 2022; Zhou et al., 2023). AgentBench (Liu et al., 2024) implements an Agent-based (Yao et al., 2023) KBQA system by progressively invoking tools to build the target query. However, its iterative nature dictates that each step strictly relies on the previous steps. When hallucination occurs, subsequent reasoning processes would be built upon erroneous foundations, resulting in unreliable candidates and arXiv:2403.11886v1 [cs.CL] 18 Mar 2024 \fmeaningless resource wastage. Additionally, the necessity to invoke an LLM at each step renders beam search unaffordable, placing a high demand on the accuracy of the top-1 results. In our preliminary experiments, we observed that 35% of the questions in AgentBench suffer from various hallucinations. 
As a result, AgentBench achieves unsatisfactory performance, only 57% F1 of the state-of-the-art ICL-based methods on GrailQA. In view of these challenges, we introduce a framework called QueryAgent to explore more reliable and efficient reasoning in complex environments. Specifically, QueryAgent models KBQA as a multi-turn generation task to step-by-step construct the target query with tools and perform stepwise self-correction. To mitigate the error accumulation issue of multi-step reasoning, we propose an environmental feedback-based self-correction method called ERASER (EnviRonmental feedbAck SElfcoRrection). For each LLM generated text, ERASER detect whether it is erroneous and analyzes the possible causes based on the feedback from environments (e.g., KB execution results, Python interpreter execution status, previous reasoning memory) in the intermediate steps. Upon analyzing this feedback, ERASER provides potential causes of errors and general guidelines for correction. Based on the guidelines, LLM can reconsider and correct the erroneous result. Unlike previous self-correction methods (Pourreza and Rafiei, 2023; Chen et al., 2023) which purposelessly correct every generated result with the same few-shot demonstrations, the idea of ERASER is to actively identify and differentiate various errors based on the rich environmental feedback in the intermediate reasoning steps and then provide tailored guidelines for the distinct error type. With the help of various environmental feedback, ERASER has a more solid basis for precise detection, analysis, and correction, rather than relying solely on the final answer. Moreover, ERASER distinguishes between different types of errors, allowing it to provide guidelines specifically tailored for each error type. This targeted approach makes ERASER more purposeful and scalable. In situations where there are numerous potential error scenarios, the guidelines for different errors can be independently developed without the need to encode all possible error cases to a single prompt. We conduct extensive experiments to evaluate the effectiveness of QueryAgent and ERASER. With only 1 example, QueryAgent notablely surpasses all few-shot methods, which require up to 100 shots, on GrailQA (+7.0), GraphQ (+15.0), WebQSP (+3.4), and MetaQA (+2.0). Moreover, our approach exhibits significant efficiency improvements. Compared with ICL-based methods, QueryAgent reduces runtime and query overhead to several orders. Compared with Agentbased methods, QueryAgent allows for approximately a 50% reduction in API invocation costs and runtime. These results highlight the reliability and efficiency of our methods. We also evaluate QueryAgent on a Text2SQL dataset (WikiSQL), and adapt ERASER to another system (AgentBench), to demonstrate their versatility. Results reveal that QueryAgent outperforms the previous 32shot method by 6.9 points. Besides, ERASER relatively yields an additional improvement for AgentBench by 26% and 42% in F1 on the GrailQA and GraphQ, respectively 1. 2 Related Work 2.1 Few-shot KBQA Recent advances in adopting LLMs for few-shot KBQA can be broadly categorized into 3 groups: 1) ICL-based KB-BINDER (Li et al., 2023) and KB-Coder (Nie et al., 2023) implement an ICLbased system by taking dozens of annotated examples into the prompt. Since they model this complex task as a simple end-to-end generation process, LLMs are directly confronted with a large search space and thus more likely to generate unreliable results. 
Although they incorporate beam search and self-consistency to increase the likelihood of encompassing the correct logic form, these also introduce the need to process a large number of candidates. On average, to solve a question, it takes executing thousands of candidate queries and several minutes to obtain the final answer. 2) IR-based Starting from an entity, StructGPT (Jiang et al., 2023), and ToG (Sun et al., 2024) iteratively walk on the graph, selecting the next neighboring entity to jump to, until finding the answer. Compared with the methods that generate an executable query, these methods can only solve questions whose reasoning process can be modeled as a single, non-branching chain. They cannot model questions with multi-constraints whose reasoning process is a tree or graph. As they traverse 1Our code will be released at https://github.com/ cdhx/QueryAgent \fin the KG to obtain the answer, they have limitations on questions whose answer is not an entity in the KG (e.g., aggregation or boolean question). 3) Agent-based AgentBench (Liu et al., 2024) utilizes some pre-defined SPARQL templates to solve the question step-by-step, including acquiring the one-hop relation, merging two reasoning paths, adding aggregation, and so on. For a complex task, solving it step by step aligns with human intuition and helps reduce the potential search space. However, at each step, AgentBench heavily relies on the previous results, hence demanding high precision. We observe that AgentBench encounters various unexpected outputs during reasoning, leading to serious error accumulation. When hallucinations arise in the preceding steps, the subsequent become meaningless or unreliable. These factors contribute to inferior performance, which is only half as effective as the ICL-based methods. In this work, based on the agent paradigm, we propose a reliable and efficient framework called QueryAgent, and alleviate LLM\u2019s hallucination by introducing a self-correction method. 2.2 Self-Correction As the concern persists in the accuracy and appropriateness of LLM\u2019s generated content, selfcorrection has been proposed as a remedy to these issues (Pan et al., 2023). DIN-SQL (Pourreza and Rafiei, 2023) utilizes a zero-shot prompt to rectify errors in the generated SQL queries. The prompt asks LLMs to examine the generated SQL queries for potential errors and correct them, while skipping those that are deemed error-free. Such intrinsic self-correction, which is solely based on LLMs\u2019 inherent capabilities without the crutch of external feedback, fails to achieve significant improvement and is unreliable (Huang et al., 2024). An intuitive improvement would be to incorporate few-shot demonstrations in the prompt (Chen et al., 2023). However, this would result in longer prompts, and can only cover a limited number of scenarios. Since they indiscriminately apply the same prompt to all cases, LLMs may be confused about which example fits the current situation. Some works like SALAM (Wang and Li, 2023) train a model to retrieve the most similar error case. Even so, it still can not ensure precise error discrimination and is heavyweight. Besides, the above methods overlook the rich feedback that the environment (e.g., KB, DB) can provide for error correction. These approaches rely solely on the final output as the basis for error correction, presenting substantial challenges for LLMs to make accurate judgments. 
To address the above issues, we propose ERASER, an environmental feedback based selfcorrection method. Based on the feedback from the environment in the intermediate steps, ERASER proactively identifies when errors arise and provide tailored guidelines. 3 Method 3.1 Overview In this work, we model KBQA as a semantic parsing task. We propose an LLM-backed symbolic agent called QueryAgent which step-by-step constructs formal queries with tools and performs stepwise self-correction. The process of QueryAgent can be divided into two parts: Query construction and Self-Correction (ERASER). At each step, QueryAgent first generates the action to be taken, then leverages environmental feedback to identify and distinguish potential errors. If an error is detected, the system provides tailored guidelines to help LLMs perform error correction. The reliability and efficiency of QueryAgent are reflected in the following aspects. For reliability: 1) It solves questions progressively rather than directly outputting the complete answer. 2) We introduce a correction process, namely ERASER, during reasoning. 3) ERASER is purposeful and more precise than traditional self-correction method. For efficiency: 1) The high accuracy of our top-1 candidate eliminates the need for beam search and self-consistency. 2) Self-correction reduces meaningless reasoning along erroneous paths. 3) We perform self-correction only when necessary and only incorporate related guidance to the prompt. 3.2 Query Construction To interact with KB and step-by-step build a target query, we leverage PyQL (Huang et al., 2023) to systematically construct the workflow. PyQL is a management toolset designed for query building over knowledge bases, including various tools to incorporate clauses to the final executable query (i.e., SPARQL) , such as adding triple patterns, filters, aggregations, etc. As the final query can be transformed from a sequence of PyQL functions, our objective is to generate these functions incrementally during the multi-turn interaction. As shown in Figure 2, at each step, the LLM provides its thoughts over the current step and suggests \fThought:\u00a0Based on the\u00a0 guideline, we need to revise the action to ... Action:\u00a0add_fact(tom_kilburn, computer_designed, ?computer.) Thought:\u00a0The question is to ask ... So firstly we need to get the relation of tom_kilburn. Action:\u00a0get_relation(tom_kilburn) Thought:\u00a0... Now we need to acquire the number of\u00a0 ?designer as the answer. Action:\u00a0add_count(?designer) Thought:\u00a0Based on the guideline, we should use \"key_designer\" ... Action:\u00a0add_fact( ?computer, key_designer, ?designer.) Thought:\u00a0... the most possible relation is \"key_designer\"\u00a0... Action:\u00a0add_fact( ?computer, computer_designer, ?designer.) KB Python interpreter Thought:\u00a0... Now we need to get the relation of ?computer. Action:\u00a0get_relation() Thought:\u00a0Based on the previous guideline, we need to correct the action to ... Action:\u00a0get_relation(?computer) Question: How many key designers does a computer designed by tom_kilburn have? Guideline C Guideline B Environment ERASER Error detection & distinguish Logic Form: No Errors Guidelines SELECT COUNT(?designer) AS ?ans\u00a0 WHERE { tom_kilburn computers_designed ?computer.\u00a0 ?computer key_designer ?designer. } No Errors Reasoning Memory (Python) get_relation func needs one parameter to indicate which entity or variable you want to get the one-hop relations. However, you pass in no parameter. 
I suggest you generate get_relation with one parameter from AVAILABLE_ENTITY_LIST\u00a0 or\u00a0AVAILABLE_VARIABLE_LIST\u00a0next time. (Memory) You add a triple pattern where the head and tail are both new variables. This triple cannot bind to the existing query graph. I suggest you reconsider the head and tail entity. (KB and Memory) Execution on KB got an empty result. I notice that the relation you use is not one of the relations of ?computer you just queried by get_relation(). You can only choose the relation obtained by get_relation(), otherwise, you may encounter factual error. A B C KB_EXECUTION_RESULT (No errors detected.) (Other guidelines) ... * No Errors Guideline A Thought:\u00a0Among the relations, the most possible relation is \"computer_designed\". Action:\u00a0add_fact(?designer, computer_designed, ?computer.) Guidelines No Errors 1 2 3 4 5 6 7 8 Figure 2: An example of QueryAgent and ERASER. At each step, the LLM generates thought and action based on the previous steps. Based on the action\u2019s execution status (KB and Python) and reasoning memory, ERASER detects whether an error exists. If no error is detected, the observation of this step is the execution result on KB. Otherwise, the observation is the corresponding guideline. the next action to be taken. The action is a PyQL function, we execute it to obtain the results as the observation from the environment. For the example in Figure 2, the LLM suggests firstly to obtain the one-hop relations of \u201ctom kilburn\u201d (thought) and the function get_relation(tom_kilburn) should be invoked at this step (action). By executing this function, we obtain relations around \u201ctom kilburn\u201d for the next step (observation). This process is iteratively repeated. When the reasoning process concludes we execute the generated query to obtain the answer. Given that each step corresponds to an executable query, we can easily observe the result of the current reasoning process, similar to how humans progressively write, execute, and validate a query. The prompt consists of four parts: the task description, the document of available functions, a running example, and the new question. We first provide an overview of the task and the rules that must be followed. Then we provide a brief document of all available functions. Following that, we present a detailed step-by-step reasoning process of an example question. Finally, we concatenate the new question that needs to be solved at the end. 3.3 ERASER In this section, we propose an environmental feedback based self-correction method (ERASER). The key ideas underlying ERASER are to let the environment \u201cspeak out\u201d and distinguish different types of errors. We require the system to provide feedback on its current status and any encountered errors. Based on this feedback, we attempt to identify what types of errors arise and then provide targeted and valuable guidance. The feedback mainly originates from three environments: Knowledge Base, Python Interpreter, and Reasoning Memory. For example, KB can provide feedback such as: whether the executed result is empty, whether the reasoning process ends with a blank node (CVT) or multiple variables, error messages from the query engine, and so on. The Python interpreter can provide error messages of various invalid function calls (e.g., not enough values to unpack). 
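To make this detection idea concrete, the following is a minimal, illustrative sketch (not the authors' released code) of an ERASER-style check that combines the interpreter and KB feedback above with the reasoning-memory signal described next. The tool names (get_relation, add_fact) follow Figure 2, while the function name eraser_check, its signature, the rule set, and the guideline strings are hypothetical assumptions.

```python
from typing import Optional, Sequence, Set

def eraser_check(action: str,
                 kb_result: Optional[Sequence],
                 interpreter_error: Optional[str],
                 queried_relations: Set[str]) -> Optional[str]:
    """Return a tailored correction guideline if an error is detected, else None."""
    # (1) Python-interpreter feedback: invalid calls, wrong number of arguments, ...
    if interpreter_error is not None:
        return (f"The action `{action}` raised `{interpreter_error}`. Re-check the "
                "function's required parameters and regenerate the action.")
    # (2) KB feedback combined with reasoning memory: an empty result after adding a
    #     triple whose relation was never returned by get_relation() (guideline C).
    if action.startswith("add_fact") and kb_result == []:
        relation = action.split(",")[1].strip() if action.count(",") >= 2 else ""
        if relation not in queried_relations:
            return ("Execution on KB got an empty result and the relation you used was "
                    "not obtained by get_relation(); choose one of those relations.")
        return "Execution on KB got an empty result; reconsider the last constraint."
    return None  # no error detected -> the observation is simply the KB result

# Example: a hallucinated relation triggers the memory-plus-KB guideline.
print(eraser_check("add_fact(?computer, computer_designer, ?designer.)",
                   kb_result=[], interpreter_error=None,
                   queried_relations={"computers_designed", "key_designer"}))
```

Because each check is an independent rule mapping one error pattern to one guideline, new error types can be added without touching a shared few-shot prompt, which is the scalability property emphasized below.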
For reasoning memory, we can access information including but not limited to: what steps have been taken, what variables have been created, and the executed results of the previous steps. By analyzing the above feedback, we can detect some errors and determine the cause of them. As illustrated in Figure 2, an error is raised by the Python interpreter at the fourth step due to insufficient parameters in the generated action. In the sixth step, the query engine yields an empty result after a triple pattern constraint is added. According to the reasoning memory, we have acquired the relations of \u201c?computer\u201d, but the chosen relation is not any of them. It is likely an incorrect relation was chosen in the previous steps. This example also showcases the importance of leveraging various feedback from different environments for error distinction. For instance, whether or not the system has obtained the relations of the head/tail entity can \fbe indicative of two distinct causes of error, but they both manifest as empty results in the execution. Compared with the previous methods which only focus on the final answer, this rich environmental feedback in the intermediate steps can serve as crucial observational points for detecting and distinguishing various errors. The guidance in ERASER typically is some speculation about possible causes of error and general suggestions. Examples are shown in the right part of Figure 2. Compared with some code generation work which simply returns the original system error message (Chen et al., 2023), the guidance provided in the prompt can be seen as an intermediate language. It shields the LLM from directly considering the original error, instead focusing on easier-to-comprehend guidance, which ultimately contributes to a successful correction. Besides, by injecting the guidelines into the reasoning process, ERASER has no need for designing another specific module or agent to perform self-correction. In this manner, we only need to figure out how to identify potential errors from various environmental feedback and then provide modification suggestions for each type of error. To summarize, ERASER has the following advantages: 1) Purposeful and Precise: ERASER has the ability to detect errors. For each error, it provides tailored guidelines that relate to the current situation. 2) Independent and Scalable: The trigger for each type of error is independent. It provides convenience for incremental development without affecting the results of other questions. 3) Lightweight and Economical: Invocation of the LLM occurs exclusively when an error is detected. The correction prompt is a general guideline rather than lengthy few-shot examples. 4 Experiment 4.1 Datasets We experiment QueryAgent on four KBQA datasets. The statistics can be found in Table 1. For GrailQA, we report the performance of the dev set to stay within our budget. For other datasets, we report the performance of the test set. GRAILQA (Gu et al., 2021) is a large-scale complex dataset that evaluates three levels of generalization (i.e., i.i.d., compositional, and zero-shot) GRAPHQ (Su et al., 2016) is a particularly challenging dataset given that it exclusively focuses on non-i.i.d. generalization. In this paper, we use the Dataset Training Dev Test GRAILQA 44,337 6,763 13,231 GRAPHQ 2,381 2,395 WEBQSP 3,098 1,639 METAQA-3HOP 114,196 14,274 14,274 WIKISQL 56,355 8,421 15,878 Table 1: Statistics of experiment datasets. processed version by Gu and Su (2022). 
WEBQSP (Yih et al., 2016) is a simple KBQA dataset with questions from Google query logs. It mainly tests i.i.d. generalization. METAQA (Zhang et al., 2018) consists of 1-3 hops question based on Wiki-Movies KG. We experiment on the most difficult 3-hop subset (denoted as MetaQA-3Hop). 4.2 Baselines We compare QueryAgent with fine-tuning and fewshot KBQA methods. For simplicity, we mainly introduce the few-shot methods here. KB-BINDER (Li et al., 2023) is an ICL-based KBQA method utilizing dozens of (Question, Sexpression) pairs as examples. KB-Coder (Nie et al., 2023) converts the sexpression to a sequence of function calls thus reducing the format error rate. Pangu (Gu et al., 2023) is a general framework with experiments on both fine-tuning and few-shot settings. For the few-shot setting, Pangu also adopts the ICL paradigm. AgentBench (Liu et al., 2024) proposes an agentbased baseline by modeling KBQA as a multi-turn open-ended generation task. 4.3 Experimental Setup We use gpt-3.5-turbo (OpenAI, 2022) for our experiments by default. All datasets use F1 as the evaluation metric. For baselines with the same setting, we report the performance from their original paper. KB-BINDER uses Codex which has been deprecated. For a fair comparison, we report the performance reproduced by KB-Coder with gpt-3.5-turbo. For KB-BINDER and KB-Coder, we compare the setting without similarity retrieval since it is not a strict few-shot setting that requires the whole annotated training set can be accessed. AgentBench reports performance on a mixed subset and uses golden linking results. We reproduce AgentBench with the same entity linking result as \fMethods GrailQA GraphQ WebQSP MetaQA-3Hop fine-tuning ArcaneQA (Gu and Su, 2022) 73.7 31.8 75.6 TIARA (Shu et al., 2022) 78.5 76.7 DecAF (Yu et al., 2023) 81.4 78.8 Pangu(T5-3B) (Gu et al., 2023) 83.4 57.7 79.6 few-shot Pangu (Gu et al., 2023) 53.5 35.4 48.6 KB-BINDER (Li et al., 2023) 50.8 34.5 56.6 96.5 KB-Coder (Nie et al., 2023) 51.7 35.8 60.5 one-shot KB-BINDER (Li et al., 2023) 16.8 4.8 9.0 65.3 AgentBench (Liu et al., 2024) 30.5 25.1 26.4 Ours 60.5 50.8 63.9 98.5 w/ GPT4 66.8 63.0 69.0 99.9 Table 2: Overall results on GrailQA, GraphQ, WebQSP, and MetaQA-3Hop. All datasets are evaluated by F1. For the few-shot setting, Pangu uses 100-shot for all datasets. KB-BINDER and KB-Coder use 40-shot for GrailQA and 100-shot for GraphQ and WebQSP. KB-BINDER uses 5-shot for MetaQA-3Hop. Method GrailQA GraphQ Ours 60.5 50.8 w/o ERASER 43.7 35.3 w/ zero-shot SC 38.5 30.2 w/ few-shot SC 48.0 40.1 Table 3: Ablation study of ERASER and a comparison with other methods. Zero-shot SC indicates the \u201cgeneric\u201d self-correction prompt of DIN-SQL (Pourreza and Rafiei, 2023). Few-shot SC indicates the \u201cexplanation feedback prompt\u201d of Self-Debug (Chen et al., 2023). We follow and implement their ideas in our tasks. ours. We also implement the one-shot setting of KB-BINDER based on their public code. 4.4 Main Result As shown in Table 2, with only one example, our method outperforms all few-shot methods that require up to 100 annotations on all four datasets. For GrailQA and GraphQ, our method notably surpasses the best few-shot methods by 7.0 and 15.0 points. On WebQSP, QueryAgent slightly surpasses 100-shot methods by 3.4 points. It is expected considering the inherent characteristics of the datasets. Since all WebQSP questions are under I.I.D. 
setting and this dataset is relatively small, few-shot methods have more opportunities to encounter similar questions within the prompts. In contrast, most of the questions of GrailQA are compositional and zero-shot questions, and 100% of GraphQ are compositional questions. Few-shot methods lose this advantage on such question types, which can reasonably explain why our approach exhibits a more pronounced advantage on GrailQA and GraphQ. Additionally, all few-shot methods incorporate beam search or self-consistency to further boost the performance. It also implies that there is still space for improvement in our method if we also choose a more costly setting. Compared with the one-shot methods, the performance of QueryAgent approximately doubles that of Agentbench, elevating agent-based techniques and one-shot KBQA to a new level. We also reproduce the one-shot result of KB-BINDER. The dramatic decline in performance exposes some limitations of the ICL-based method in terms of example quantity. 5 Detailed Analysis To gain more insights into QueryAgent\u2019s strong performance, we conduct some in-depth analysis. 5.1 Ablation Study In this section, we analyze how ERASER contributes to reliable reasoning and compare it with other self-correction methods. The result is shown in Table 3. ERASER improves for 16.8 and 15.5 points for GrailQA and GraphQ, demonstrating the effectiveness of our method. For the baseline method, zero-shot SC failed to boost the perfor\fMethods GrailQA GraphQ WebQSP TPQ QPQ CPQ TPQ QPQ CPQ TPQ QPQ CPQ KB-BINDER 51.2 s 3297.7 $ 0.010 84.0 s 2113.8 $ 0.024 138.6 s 8145.1 $ 0.017 AgentBench 40.0 s 7.4 $ 0.034 65.1 s 7.2 $ 0.035 70.4 s 7.2 $ 0.038 Ours 16.6 s 5.2 $0.019 15.3 s 6.2 $ 0.021 12.6 s 4.7 $ 0.014 Table 4: Efficiency comparison with KB-BINDER and AgentBench. The TPQ, QPQ, and CPQ respectively represent the time cost, SPARQL query times, and gpt-3.5-turbo invocation cost per question. mance further and even exhibited negative gains. The few-shot method has made some improvements but not that significant and its prompt is considerably longer than ERASER. It is expected since few-shot SC can only cover limited scenarios and LLM needs to figure out which part in the prompt is related to the current situation. We also manually analyzed 200 questions of GrailQA to investigate how ERASER influences the reasoning process. We find that 43% of questions utilized ERASER in their reasoning processes. Among them, 30% questions were completely corrected. Given that our error detection strategy is conservative, each steps that triggered the ERASER were indeed found to contain errors during reasoning. 5.2 Efficiency Analysis In this section, we evaluate the running efficiency. We conduct both horizontal and vertical comparisons by comparing KB-BINDER, which utilizes a different paradigm, and AgentBench, which is similar to ours. We analyzed the time cost per question (TPQ), query times per question (QPQ), and LLM calling cost per question (CPQ). All tests were conducted in the same network environment, with each experiment running independently. As shown in Table 4, compared with KBBINDER, our method exhibits overwhelming advantages in terms of TPQ and QPQ, while CPQ is a little higher on GrailQA. This outcome aligns with our expectations. KB-BINDER needs to conduct a beam search step by step to collect a large pool of candidates and then execute them one by one to find the first executable query, which requires querying numerous SPARQLs. 
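For concreteness, the per-question efficiency numbers compared here (TPQ, QPQ, CPQ) can be aggregated from run logs as in the short sketch below. The log-record format and field names are hypothetical assumptions for illustration, not part of the paper's tooling.

```python
def efficiency_summary(logs):
    """logs: list of dicts with 'seconds', 'sparql_queries', 'llm_cost_usd' per question."""
    n = len(logs)
    tpq = sum(l["seconds"] for l in logs) / n          # time cost per question
    qpq = sum(l["sparql_queries"] for l in logs) / n   # SPARQL query times per question
    cpq = sum(l["llm_cost_usd"] for l in logs) / n     # LLM invocation cost per question
    return {"TPQ (s)": round(tpq, 1), "QPQ": round(qpq, 1), "CPQ ($)": round(cpq, 3)}

print(efficiency_summary([
    {"seconds": 15.2, "sparql_queries": 5, "llm_cost_usd": 0.018},
    {"seconds": 18.0, "sparql_queries": 6, "llm_cost_usd": 0.020},
]))
```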
Additionally, KBBINDER uses self-consistency by repeating this paradigm for K times to boost the performance, leading to (K \u22121)\u00d7 extra cost. To some extent, these also lead to a longer running time. Another thing worth noting is that more attempts also imply a lower accuracy of the top-1 candidate and a higher proportion of low-quality candidates. In contrast, our method only selects the top-1 candidate at a time, which means it requires the method to possess a high level of precision at each step. However, even under such extreme constraints, our approach still outperforms other methods. As for the CPQ, our method incurs slightly higher costs in terms of LLM invocation compared to KB-BINDER. Our method is a step-by-step reasoning process, and while it has many advantages, we acknowledge that it also has an inevitable issue of requiring multiple requests to the LLM. However, on the flip side, KB-BINDER needs to concatenate many examples, which also faces the challenge of having a long prompt. In fact, on the 100-shot setting, the CPQ of using KB-BINDER has already exceeded that of our method. On the other hand, compared with AgentBench, our method also surpasses it on all three criteria. It is noteworthy that our method is not only faster and cost-effective but also achieves approximately double the QA performance compared to AgentBench. At first glance, the incorporation of ERASER is a negative factor for efficiency evaluation since the prompt becomes longer than a regular reasoning process. Nonetheless, from a different perspective, timely and accurate error correction prevents the system from deviating further in the wrong direction and reduces the overhead caused by meaningless reasoning processes. Consequently, to some extent, a reliable reasoning process ultimately contributes to achieving efficient reasoning. Besides, by only performing corrections when necessary and distinguishing different types, we have managed to minimize the costs of ERASER. 5.3 Generalization Ability In this section, we analyze the generalization ability of our method and ICL-based method from qualitative analysis and experimental comparisons. Methodologically speaking, our method tackles the question step-by-step with atomic symbolic tools. By decomposing the problem into multi\fMethods WikiSQL few-shot(32 shot) Davinci-003 49.1 ChatGPT 51.6 StructGPT(Davinci-003) 64.6 StructGPT(ChatGPT) 65.6 one-shot AgentBench 57.6 Ours 72.5 w/o ERASER 67.0 Table 5: The results of QueryAgent on WikiSQL. We evaluate denotation accuracy. ple reasoning steps, we bridge the semantic gap between different questions and datasets, as all questions can be represented using these limited tools. However, the combination of these steps can be numerous, posing challenges for compositional generalization. ICL-based methods learn and generate the complete query at once, directly facing and bearing the significantly larger search space. From the perspective of the experiment, KBBINDER is sensitive to whether similar examples appear in the prompt. If the most similar questions are retrieved as examples in the prompt, KBBINDER can achieve up to 20 point improvement on WebQSP (100% i.i.d.) but a negative boost on GraphQ (100% non-i.i.d.). In contrast, our method uses the same example for all questions. Another observation is that, the higher the proportion of non-iid questions in the dataset, the greater the degree to which our approach exceeds the ICL-based approach. 
Compared to GrailQA (75% non-i.i.d.), QueryAgent demonstrates greater improvement on GraphQ (100% non-i.i.d.). This can also serve as evidence that QueryAgent has better generalization on unrelated examples. 5.4 Transfer Experiment In the previous sections, we choose KBQA as a representative testbed to instantiate QueryAgent and ERASER. To illustrate the versatility of our reasoning framework and ERASER, in this section, we conduct another two experiments: 1) we implement QueryAgent framework on another semantic parsing task, namely Text2SQL. 2) we adapt ERASER to AgentBench. We choose the test set of WikiSQL (Zhong et al., 2017) as the experiment dataset. To acquire the execution feedback from the database environment, we Methods GrailQA GraphQ WebQSP AgentBench 30.5 25.1 26.4 w ERASER 38.5 35.6 32.0 Table 6: Performance of AgentBench with ERASER. implement a SQL-version PyQL to help LLM access the database and provide tools to construct the SQL query. We compare our method with StructGPT (Jiang et al., 2023). The baseline results of Dacinci-003 and ChatGPT also come from StructGPT. Our method outperforms the few-shot method with 32 examples. Besides, ERASER contributes to 7.6% of performance, indicating the generalization ability of our self-correction method. Another experiment (i.e., AgentBench + ERASER) is to further verify that ERASER can enhance the existing agent-based KBQA system. Table 6 shows that ERASER further improves the performance of AgentBench by 8.0 and 10.5 points on GrailQA and GraphQ. By integrating ERASER, we have elevated the performance of another method to a new level, highlighting the versatility and plugand-play nature of ERASER. 6" + }, + { + "url": "http://arxiv.org/abs/2403.05050v3", + "title": "DyRoNet: Dynamic Routing and Low-Rank Adapters for Autonomous Driving Streaming Perception", + "abstract": "The advancement of autonomous driving systems hinges on the ability to\nachieve low-latency and high-accuracy perception. To address this critical\nneed, this paper introduces Dynamic Routering Network (DyRoNet), a low-rank\nenhanced dynamic routing framework designed for streaming perception in\nautonomous driving systems. DyRoNet integrates a suite of pre-trained branch\nnetworks, each meticulously fine-tuned to function under distinct environmental\nconditions. At its core, the framework offers a speed router module, developed\nto assess and route input data to the most suitable branch for processing. This\napproach not only addresses the inherent limitations of conventional models in\nadapting to diverse driving conditions but also ensures the balance between\nperformance and efficiency. Extensive experimental evaluations demonstrating\nthe adaptability of DyRoNet to diverse branch selection strategies, resulting\nin significant performance enhancements across different scenarios. This work\nnot only establishes a new benchmark for streaming perception but also provides\nvaluable engineering insights for future work.", + "authors": "Xiang Huang, Zhi-Qi Cheng, Jun-Yan He, Chenyang Li, Wangmeng Xiang, Baigui Sun, Xiao Wu", + "published": "2024-03-08", + "updated": "2024-03-18", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV", + "cs.AI", + "cs.MM" + ], + "main_content": "Introduction In autonomous driving systems, it is crucial to achieve lowlatency and high-precision perception. Traditional object detection algorithms [Zou et al., 2023], while effective in various contexts, often confront the challenge of latency due to inherent computational delays. 
This lag between algorithmic processing and real-world states can lead to notable discrepancies between predicted and actual object locations. Such latency issues have been extensively reported and are known to significantly impact the decision-making process in autonomous driving systems [Chen et al., 2023]. Addressing these challenges, the concept of streaming perception has been introduced as a response [Li et al., 2020]. \u2217Internship at CMU \u2020Corresponding author 1Project: https://tastevision.github.io/DyRoNet/ Figure 1: Illustration of DyRoNet\u2019s dynamic selection mechanism in streaming perception. This diagram showcases DyRoNet\u2019s capability to adaptively choose the most suitable perception strategy, contrasting with the static approach of traditional methods in complex environments [Viewing in color and at an expanded scale]. This perception task aims to predict \u201cfuture\u201d results by accounting for the delays incurred during the frame processing stage. Unlike traditional methods that primarily focus on detection at a given moment, streaming perception transcends this limitation by anticipating future environmental states, and aligning perceptual outputs closer to real-time dynamics. This new paradigm is key in addressing the critical gap between real-time processing and real-world changes, thereby enhancing the safety and reliability of autonomous driving systems [Muhammad et al., 2020]. Although the existing streaming approach seems promising, it still faces contradictions in real-world scenarios. These contradictions primarily stem from the diverse and unpredictable nature of driving environments. The factors such as camera motion, weather conditions, lighting variations, and the presence of small objects seriously impact the performance of perception measures, leading to fluctuations that challenge their robustness and reliability (see Sec. 3.1). This complexity in real-world scenarios underscores the limitations of a single, uniform model, which often struggles to adapt to the varied demands of different driving conditions [Guo et al., 2019]. In general, the challenges of streaming perception mainly include: (1) Diverse Scenario Distribution: Autonomous driving environments are inherently complex and dynamic, showing a myriad of scenarios that a single perception model may arXiv:2403.05050v3 [cs.CV] 18 Mar 2024 \fnot adequately address (see Fig. 1). The need to customize perception algorithms to specific environmental conditions, while ensuring that these models operate cohesively, poses a significant challenge. As discussed in Sec. 3.1, adapting models to various scenarios without compromising their core functionality is a crucial aspect of streaming perception. (2) Performance-Efficiency Balance: To our knowledge, the integration of both large and small-scale models is essential to handle the varying complexities encountered in different driving scenes. The large models, while potentially more accurate, may suffer from increased latency, whereas smaller models may offer faster inference at the cost of reduced accuracy. Balancing performance and efficiency, therefore, becomes a challenging task. In Sec. 3.1, we explore the strategies for optimizing this balance, exploring how different model architectures can be effectively utilized to enhance streaming perception. Generally speaking, these challenges highlight the demand for streaming perception. As we study in Sec. 
3.1, addressing the diverse scenario distribution and achieving an optimal balance between performance and efficiency are key to advancing the state-of-the-art in autonomous driving. To address the intricate challenges presented by real-world streaming perception, we introduce DyRoNet, a framework designed to enhance dynamic routing capabilities in autonomous driving systems. DyRoNet stands as a low-rank enhanced dynamic routing framework, specifically crafted to cater to the requirements of streaming perception. It encapsulates a suite of pre-trained branch networks, each meticulously fine-tuned to optimally function under distinct environmental conditions. A key component of DyRoNet is the speed router module, ingeniously developed to assess and efficiently route input data to the most appropriate branch, as detailed in Sec. 3.2. To sum up, the contributions are listed as: \u2022 We emphasize the impact of environmental speed as a key determinant of streaming perception. Through analysis of various environmental factors, our research highlights the imperative need for adaptive perception responsive to dynamic conditions. \u2022 By utilizing a variety of sophisticated streaming perception techniques, DyRoNet provides the speed router as a major invention. This component dynamically determines the best route for handling each input, ensuring efficiency and accuracy in perception. The ability to adapt and be versatile is demonstrated by this dynamic routechoosing mechanism. \u2022 Extensive experimental evaluations have demonstrated that DyRoNet is capable of adapting to diverse branch selection strategies, resulting in a substantial enhancement of performance across various branch structures. This not only validates the framework\u2019s wide-ranging applicability but also confirms its effectiveness in handling different real-world scenarios. In summary, DyRoNet offers advancements for lowlatency, high-accuracy perception in autonomous driving. By addressing challenges of environmental adaptability and dynamic branch selection, DyRoNet sets new benchmarks in achieving low-latency and high-accuracy perception. 2 Related Work This section revisits developments in streaming perception and dynamic neural networks, highlighting differences from our proposed DyRoNet framework. While existing methods have made progress, limitations persist in addressing realworld autonomous driving complexity. 2.1 Streaming Perception The existing streaming perception methods fall into three main categories. (1) The initial methods focused on singleframe, with models like YOLOv5 [Jocher et al., 2021] and YOLOX [Ge et al., 2021] achieving real-time performance. However, lacking motion trend capture, they struggle in dynamic scenarios. (2) The recent approaches incorporated current and historical frames, like StreamYOLO [Yang et al., 2022] building on YOLOX with dual-flow fusion. LongShortNet [Li et al., 2023] used longer histories and diverse fusion. DAMO-StreamNet [He et al., 2023] added asymmetric distillation and deformable convolutions to improve large object perception. (3) Recognizing the limitations of single models, current methods explore dynamic multi-model systems. One approach [Ghosh et al., 2021] adapts models to environments via reinforcement learning. DaDe [Jo et al., 2022] extends StreamYOLO by calculating delays to determine frame steps. A later version [Huang and Chen, 2023] added multi-branch prediction heads. 
Beyond 2D detection, streaming perception expands into optical flow, tracking, and 3D detection, with innovations in metrics and benchmarks [Wang et al., 2023c; Sela et al., 2022; Wang et al., 2023b]. Distinct from these existing approaches, our proposed method, DyRoNet, introduces a low-rank enhanced dynamic routing mechanism specifically designed for streaming perception. DyRoNet stands out by integrating a suite of advanced branch networks, each fine-tuned for specific environmental conditions. Its key innovation lies in the speed router module, which not only routes input data efficiently but also dynamically adapts to the diverse and unpredictable nature of real-world driving scenarios. 2.2 Dynamic Neural Networks Dynamic Neural Networks (DNNs) feature adaptive network selection, outperforming static models in efficiency and performance ([Han et al., 2021; Lan et al., 2023; Zhang et al., 2023]). The existing research primarily focuses on structural design for core deep learning tasks like image classification ([Huang et al., 2018; Wang et al., 2020; Wang et al., 2018]). DNNs follow two approaches: (1) Multi-branch models ([Bejnordi et al., 2019; Cai et al., 2021; Shazeer et al., 2017; Wang et al., 2023a; Qiao et al., 2022]) rely on a lightweight router assessing inputs to direct them to appropriate branches, enabling tailored computation. (2) By generating new weights based on inputs ([Yang et al., 2019; Chen et al., 2020; Su et al., 2019; Zhu et al., 2019]), these models dynamically alter computations to match diverse needs. DNN applications expand beyond conventional tasks. In object detection, DynamicDet ([Lin et al., 2023]) categorizes inputs and processes them through distinct branches. This illustrates DNNs\u2019 broader applicability and efficiency, promising contributions particularly for complex, dynamic environments. \fD Backbone Neck LoRA \ud835\udc4a ! \" \u2744 \u2744 \ud83d\udd25 ft Head CLS \u2744 \ud83d\udd25LoRA \ud835\udc4a ! \" OBJ REG D Backbone Neck LoRA \ud835\udc4a #$% \" \u2744 \u2744 \ud83d\udd25 ft Ft \ud835\udc3c! \u2026\u2026 Router Network Head CLS \u2744 \ud83d\udd25LoRA \ud835\udc4a #$% \" OBJ REG ... Detector Head:\ud835\udc43 \ud835\udc72#\ud835\udfcf Inference Time cost value Streaming Peception Loss value E\ufb00ec3ve & E\ufb03cient Loss Frame Dispatcher \ud83d\udd25 \u2744 Frozen Weight Learnable Weight \ud83d\udd25 Frame Sequence \ud835\udc7a Detector Head:\ud835\udc43 \ud835\udfce Model Bank \u2112&#' () \u2112* () ... \u2112+&(\ud835\udc53 ! \u211b, \ud835\udc53 ! +&) SP Loss \ud835\udc38encoding Training Phase Only \ud835\udc3c!#. \ud835\udc3c!#' \ud835\udc3c! \ud835\udc53 ! \u211b \ud835\udc53 ! +& \ud835\udcab \u2112!\" \ud835\udc4a/ Figure 2: The DyRoNet Framework: This figure presents the architecture of DyRoNet, featuring a multi-branch network design. For simplicity, only two branches are shown, each representing a streaming perception sub-network. The core network architecture is detailed in the upper right. Each branch processes both the current frame It and a series of historical frames It\u22121, It\u22122, \u00b7 \u00b7 \u00b7 , It\u2212n. The backbone and neck of the network extract features, which are then split into two streams for the current and historical frames. These streams are fused together before entering the prediction head. Branch selection is governed by the Speed Router, which analyzes the frame difference \u2206It derived from It and It\u22121 to determine the most suitable branch for the given input. 
3 Proposed Method This section outlines the framework of our proposed DyRoNet. Beginning with its underlying motivation and the critical factors driving its design, we subsequently provide an overview of its architecture and training process. 3.1 Motivation for DyRoNet Autonomous driving faces variability from weather, scene complexity, and vehicle velocity. By strategically analyzing key factors and routing logic, this section details the rationale behind the proposed DyRoNet. Analysis of Influential Factors. Our statistical analysis of the Argoverse-HD dataset [Li et al., 2020] underscores the profound influence of environmental dynamics on the effectiveness of streaming perception. While weather inconsistently impacts accuracy, suggesting the presence of other influential factors (see Appendix A.1), fluctuations in the object count show limited correlation with performance degradation (see Appendix A.2). Conversely, the presence of small objects across various scenes poses a significant challenge for detection, especially under varying motion states (see Appendix A.3). Notably, disparities in performance are most pronounced across different environmental motion states (see Appendix A.4), thereby motivating the need for a dynamic, velocity-aware routing mechanism in DyRoNet. Rationale for Dynamic Routing. Analysis reveals that StreamYOLO\u2019s reliance on a single historical frame falters at high velocities, in contrast to multi-frame models, highlighting a clear connection between velocity and detection performance (see Tab. 1). Dynamic adaptation of frame history, based on vehicular speed changes, enables DyRoNet to strike a balance between accuracy and latency (see Sec. 4.3). Through first-order differences, the system efficiently switches models to align with environmental motions. Specifically, the dynamic routing is designed to select the optimal architecture based on the vehicle\u2019s speed profile, ensuring precision at lower velocities for detailed perception and efficiency at higher speeds for swift response. Such adaptable routing, informed by comprehensive speed analysis, positions DyRoNet as a robust solution for reliable perception across diverse autonomous driving scenarios. Next, we introduce DyRoNet in detail. 3.2 Architecture of DyRoNet Overview of DyRoNet. The structure of DyRoNet, as depicted in Fig. 2, proposes a multi-branch structure. Each branch within DyRoNet framework functions as an independent streaming perception model, capable of processing both the current and historical frames. This dual-frame processing is central to DyRoNet\u2019s capability, facilitating a nuanced understanding of temporal dynamics. Such a design is key in achieving a delicate balance between latency and accuracy, aspects crucial for real-time autonomous driving. Mathematically, the core of DyRoNet lies the processing of a frame sequence, S = {It, \u00b7 \u00b7 \u00b7 , It\u2212N\u03b4t}, where N indicates the number of frames and \u03b4t the interval between successive frames. The process of the framework is formalized as: T = F(S, P, W), where P = {P0, \u00b7 \u00b7 \u00b7 , PK\u22121} denotes a collection of streaming perception models, with each Pi denoting an individual model within this suite. The architecture is further enhanced by incorporating a feature extractor Gi and a perception head Hi for each model. The Router Network, R, is instrumental in selecting the most suitable streaming perception model for each specific scenario. 
Correspondingly, the weights of DyRoNet are denoted by W = {W d, W l, W r}, where W d indicates the weights of the streaming perception model, W l relates to the Low-Rank Adaptation (LoRA) weights within each model, and W r pertains to the Router Network. The culmination of this process is the final output, T , a compilation of feature maps. These maps can be further decoded through Decode(T ), revealing essential details like objects, categories, and locations. Below we introduce each module in detail. Router Network. The Router Network in DyRoNet plays a crucial role in understanding and classifying the dynamics of \fFigure 3: The mean curves of frame differences are depicted here. The four curves correspond to frame sizes of the original frame, 200\u00d7200, 100\u00d7100, and 50\u00d750. Notably, these curves show distinct fluctuations across different vehicle motion scenarios. the environment. This module is designed for both environmental classification and branch decision-making. To effectively and rapidly capture environmental speed, frame differences are employed as the input to the Router Network. As shown in Fig. 3, frame differences exhibit a high discriminative advantage for different environmental speeds. Specifically, for frames at times t and t \u22121, represented as It and It\u22121 respectively, the frame difference is computed as \u2206It = It \u2212It\u22121. The architecture of the Router Network, R, is simple yet efficient. It consists of a single convolutional layer followed by a linear layer. The network\u2019s output, denoted as f r \u2208RK, captures the essence of the environmental dynamics. Based on this output, the index \u03c3 of the optimal branch for processing the current input frame It is determined through the following equation: \u03c3 = arg max K (R(\u2206It), W r), \u03c3 \u2208{0, \u00b7 \u00b7 \u00b7 , K \u22121}, (1) where \u03c3 is the index of the branch deemed most suitable for the current environmental context. Once \u03c3 is determined, the input frame It is automatically routed to the corresponding branch by a dispatcher. In particular, this strategy of using frame differences to gauge environmental speed is efficient. It offers a faster alternative to traditional methods such as optical flow fields. Moreover, it focuses on frame-level variations rather than the speed of individual objects, providing a more generalized representation of environmental dynamics. The sparsity of \u2206It also contributes to the robustness of this method, reducing computational complexity and making the Router Network\u2019s operations nearly negligible in the context of the overall model\u2019s performance. Model Bank & Dispatcher. The core of the DyRoNet framework is its model bank, which consists of an array of streaming perceptual models, denoted as P = {P0, \u00b7 \u00b7 \u00b7 , PK\u22121}. Typically, the selection of the most suitable model for processing a given input is intelligently managed by the Router Network. This process is formalized as P\u03c3 = Disp(R, P), where Disp acts as a dispatcher, facilitating the dynamic selection of models from P based on the input. The operational flow of DyRoNet can be mathematically defined as: T = F(S, P, W) = Disp(R(\u2206It), P)(It; W d \u03c3, W l \u03c3) where R symbolizes the Router Network, and \u2206It refers to the frame difference, a key input for model selection. 
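A minimal PyTorch-style sketch of this routing-and-dispatch flow (Eq. (1) plus the dispatcher) is given below. The frame-difference input and the conv-plus-linear router come from the description above, while the class name SpeedRouter, the channel sizes, the pooling layer, and the nn.Identity placeholder branches are illustrative assumptions; DyRoNet's actual branches are full streaming-perception detectors.

```python
import torch
import torch.nn as nn

class SpeedRouter(nn.Module):
    """Single conv layer plus a linear head over the frame difference."""
    def __init__(self, num_branches: int, in_ch: int = 3, feat_ch: int = 8):
        super().__init__()
        self.conv = nn.Conv2d(in_ch, feat_ch, kernel_size=3, stride=2, padding=1)
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.fc = nn.Linear(feat_ch, num_branches)

    def forward(self, delta_frame: torch.Tensor) -> torch.Tensor:
        h = self.pool(torch.relu(self.conv(delta_frame))).flatten(1)
        return self.fc(h)                      # f^r in R^K (per-branch logits)

def dispatch(frames, branches, router):
    """Eq. (1): sigma = argmax_K R(delta_I_t); route the clip to branch sigma."""
    delta = frames[:, -1] - frames[:, -2]      # frame difference I_t - I_{t-1}
    sigma = int(router(delta).argmax(dim=-1)[0])
    return branches[sigma](frames), sigma      # selected model processes the input

# Toy usage: two placeholder "branches" standing in for streaming detectors.
branches = [nn.Identity(), nn.Identity()]
router = SpeedRouter(num_branches=len(branches))
clip = torch.randn(1, 4, 3, 64, 64)            # (batch, frames, C, H, W)
_, chosen = dispatch(clip, branches, router)
print("routed to branch", chosen)
```

Because the router sees only the sparse frame difference and has a single conv and linear layer, its overhead stays negligible relative to any branch, which is the property the text relies on.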
The weights W d \u03c3 and W l \u03c3 correspond to the selected streaming perception model and its Low-Rank Adaptation (LoRA) parameters, respectively. Note that the versatility of DyRoNet is further highlighted by its compatibility with a wide range of Streaming Perception models, even ones that rely solely on detectors [Ge et al., 2021]. To demonstrate the efficacy of DyRoNet, it has been evaluated using three contemporary streaming perception models: StreamYOLO [Yang et al., 2022], LongShortNet [Li et al., 2023], and DAMO-StreamNet [He et al., 2023] (see Sec. 4.3). This Model Bank & Dispatcher strategy illustrates the adaptability and robustness of DyRoNet across different streaming perception scenarios. Low-Rank Adaptation. A key challenge arises when fully fine-tuning individual branches, especially under the direction of Router Network. This strategy can lead to biases in the distribution of training data and inefficiencies in the learning process. Specifically, lighter branches may become predisposed to simpler cases, while more complex ones might be tailored to handle intricate scenarios, thereby heightening the risk of overfitting. Our experimental results, detailed in Sec. 4.3, support this observation. To address these challenges, we have incorporated the Low-Rank Adapter [Hu et al., 2021] into our streaming perception models. Within each model Pi, initially pre-trained on a dataset, the key components are the convolution kernel and bias matrices, symbolized as W d i . The rank of the LowRank Adaptation (LoRA) module is defined as r, a value significantly smaller than the dimensionality of W d i , to ensure efficient adaptation. The update to the weight matrix adheres to a low-rank decomposition form, represented as W i d + \u03b4W = W i d + BA.2 This adaptation strategy allows for the original weights W i d to remain fixed, while the lowrank components BA are trained and adjusted. The adaptation process is executed through the following projection: W d i x + \u2206Wx = W d i x + W l i x, (2) where x represents the input image or feature map, and \u2206W = W l i = BA. The matrices A and B start from an initialized state and are fine-tuned during the adaptation process. This approach maintains the general applicability of the model by fixing W i d, while also enabling specialization within specific sub-domains, as determined by Router Network. Particularly, in DyRoNet, we employ a rank r of 32 for the LoRA module, though this can be adjusted based on specific requirements of the scenarios in question. This low-rank adaptation mechanism not only enhances the flexibility of the DyRoNet framework but also significantly mitigates the risk of overfitting, ensuring that each branch remains efficient and effective in its designated role. 2Here, B is a matrix in Rd\u00d7r, and A is in Rr\u00d7k, ensuring that the rank r remains much smaller than d. \f3.3 Training Details of DyRoNet The training process of DyRoNet focuses on two primary goals: (1) improving the performance of individual branches within the streaming perception model and (2) achieving an optimal balance between accuracy and computational efficiency. This dual-objective framework is represented by the overall loss function: L = Lsp + LE2, (3) where Lsp represents the streaming perception loss, and LE2 denotes the effective and efficient (E2) loss, which supervises branch selection. Streaming Perception (SP) Loss. Each branch in DyRoNet is fine-tuned using its original loss function to maintain effectiveness. 
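Before the loss terms are detailed further, the low-rank update of Eq. (2) can be sketched as a generic LoRA wrapper around a frozen linear layer. This is illustration only: DyRoNet applies the adaptation to the detectors' convolution kernels and biases with rank 32, and the scaling (alpha/rank) and zero-initialization of B below follow common LoRA practice rather than the paper's stated choices.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Frozen base weight W plus a trainable low-rank update BA, as in Eq. (2)."""
    def __init__(self, base: nn.Linear, rank: int = 32, alpha: float = 1.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():      # keep the pre-trained weights W^d fixed
            p.requires_grad = False
        d_out, d_in = base.weight.shape
        self.A = nn.Parameter(torch.randn(rank, d_in) * 0.01)  # A in R^{r x k}
        self.B = nn.Parameter(torch.zeros(d_out, rank))        # B in R^{d x r}, zero-init
        self.scale = alpha / rank

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # W x + (BA) x : only A and B receive gradients during branch fine-tuning
        return self.base(x) + self.scale * (x @ self.A.t() @ self.B.t())

layer = LoRALinear(nn.Linear(256, 256), rank=32)
_ = layer(torch.randn(4, 256))
trainable = sum(p.numel() for p in layer.parameters() if p.requires_grad)
total = sum(p.numel() for p in layer.parameters())
print(f"trainable fraction: {100 * trainable / total:.1f}%")
```

Zero-initializing B keeps each adapted branch identical to its pre-trained checkpoint at the start of fine-tuning, so only the small A and B matrices need to be stored per branch, consistent with the parameter fractions reported later in Table 6.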
The router network is trained to select the optimal branch based on efficiency supervision. Let Ti = {F cls i , F reg i , F obj i } denote the logits produced by the i-th branch and Tgt = {F cls gt , F reg gt , F obj gt } represent the corresponding ground-truth, where F cls \u00b7 , F reg \u00b7 , and F obj \u00b7 are the classification, objectness, and regression logits, respectively. The streaming perception loss for each branch, Lsp i , is defined as follows: Lsp i (Ti, Tgt) = Lcls(F cls i , F cls gt ) + Lobj(F obj i , F obj gt ) + Lreg(F reg i , F reg gt ), (4) where Lcls(\u00b7) and Lobj(\u00b7) are defined as Mean Square Error (MSE) loss functions, while Lreg(\u00b7) is represented by the Generalized Intersection over Union (GIoU) loss. Effective and Efficient (E2) Loss. During the training phase, streaming perception loss values from all branches are compiled into a vector vsp \u2208RK, and inference time costs are aggregated into vtime \u2208RK, with K indicating the total number of branches in DyRoNet. To account for hardware variability, a normalized inference time vector \u02c6 vtime = softmax(vtime) is introduced. This vector is derived using the Softmax function to minimize the influence of hardware discrepancies. The representation for effective and efficient (E2) decision-making is defined as: f E2 = ON(arg min k (softmax(vtime) \u00b7 vsp)), (5) where O denotes one-hot encoding, producing a boolean vector of length K, with the value of 1 at the index representing the estimated optimal branch at that moment. The E2 Loss is then formulated as: LE2 = KL(f E2, f r), (6) where fr = R(\u2206It) and KL represents the Kullback-Leibler divergence, utilized to constrain the distribution. Overall, the process of training DyRoNet involves striking a meticulous balance between the SP loss, which ensures the efficacy of each branch, and the E2 loss, which optimizes efficiency. The primary objective of this training is to develop a model that not only delivers high accuracy in perception tasks but also operates within acceptable latency constraints, which is a critical requirement for real-time applications. This balanced approach enables DyRoNet to adapt dynamically to varying computational resources and environmental conditions, thereby maintaining optimal performance in diverse streaming perception scenarios. 4 Experiments 4.1 Dataset and Metric Dataset. For the evaluation of our methods, we utilized the comprehensive Argoverse-HD dataset [Li et al., 2020], specifically designed for streaming perception in autonomous driving scenarios. This dataset comprises high-resolution RGB images captured from urban city street drives, offering a realistic representation of diverse driving conditions. The dataset is structured into two main segments: a training set consisting of 65 video clips and a test set comprising 24 video clips. Each video segment in the dataset, on average, spans over 600 frames, contributing to a training set with approximately 39k frames and a validation set containing around 15k frames. Notably, the Argoverse-HD dataset provides highframe-rate (30fps) 2D object detection annotations, ensuring accuracy and reliability without relying on interpolated data. Evaluation Metric. We adopt the streaming Average Precision (sAP) as the primary metric for performance evaluation. 
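Returning briefly to the training objective above, the branch-supervision signal of Eqs. (5) and (6) can be written out as the short sketch below. The function names e2_target and e2_loss and the per-branch loss and latency values are made-up illustrations; only the formulas themselves come from the text.

```python
import torch
import torch.nn.functional as F

def e2_target(v_sp: torch.Tensor, v_time: torch.Tensor) -> torch.Tensor:
    """Eq. (5): one-hot of the branch minimizing softmax(time) * SP-loss."""
    score = torch.softmax(v_time, dim=-1) * v_sp
    return F.one_hot(score.argmin(dim=-1), num_classes=v_sp.shape[-1]).float()

def e2_loss(router_logits: torch.Tensor, target: torch.Tensor) -> torch.Tensor:
    """Eq. (6): KL divergence between the E2 target and the router distribution f^r."""
    return F.kl_div(F.log_softmax(router_logits, dim=-1), target, reduction="batchmean")

# Toy values: three branches with made-up per-branch SP losses and relative latencies.
v_sp = torch.tensor([[2.1, 1.4, 1.5]])
v_time = torch.tensor([[0.9, 1.2, 1.5]])
target = e2_target(v_sp, v_time)          # selects the branch balancing both terms
print(target, e2_loss(torch.randn(1, 3), target))
```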
The sAP metric, widely recognized for its effectiveness in streaming perception tasks [Li et al., 2020], offers a comprehensive assessment by calculating the mean Average Precision (mAP) across various Intersection over Union (IoU) thresholds, ranging from 0.5 to 0.95. This metric allows us to evaluate detection performance across different object sizes, including large, medium, and small objects, providing a robust measure of our model\u2019s capability in real-world streaming perception scenarios. 4.2 Implementation Details We tested three state-of-the-art streaming perception models: StreamYOLO[Yang et al., 2022], LongShortNet[Li et al., 2023], and DAMO-StreamNet[He et al., 2023]. These models, integral to the DyRoNet architecture, come with pretrained parameters across three distinct scales: small (S), medium (M), and large (L), catering to a variety of processing requirements. In constructing the model bank P for DyRoNet, we strategically selected different model configurations to evaluate performance across diverse scenarios. For instance, the notation DyRoNet (DAMOS + M) represents a configuration where DyRoNet employs the small (S) and medium (M) scales of DAMO-StreamNet as its two branches.3 All experiments were conducted on a high-performance computing platform equipped with Nvidia 3090Ti GPUs (x4), ensuring robust and reliable computational power to handle the intensive processing demands of the streaming perception models. This setup provided a consistent and controlled environment for evaluating the efficacy of DyRoNet across different model configurations, contributing to the thoroughness and validity of our results. For more implementation details, please refer to Appendix C. 4.3 Comparision with SOTA Methods We compared our proposed approach with state-of-the-art methods to evaluate its performance. In this subsection, we 3Similar notations are used for other model combinations, allowing for a systematic exploration of the framework\u2019s adaptability and performance under varying computational constraints. \fMethods Latency (ms) sAP \u2191 sAP50 \u2191 sAP75 \u2191 sAPs \u2191 sAPm \u2191 sAPl \u2191 Non-real-time detector-based methods Adaptive Streamer [Ghosh et al., 2021] 21.3 37.3 21.1 4.4 18.7 47.1 Streamer (S=600) [Li et al., 2020] 20.4 35.6 20.8 3.6 18.0 47.2 Streamer (S=900) [Li et al., 2020] 18.2 35.3 16.8 4.7 14.4 34.6 Streamer+AdaScale [Ghosh et al., 2021] 13.8 23.4 14.2 0.2 9.0 39.9 Real-time detector-based methods DAMO-StreamNetNet-L [He et al., 2023] 26.6 37.8 59.1 38.6 16.1 39.0 64.6 LongShortNet-L [Li et al., 2023] 20.1 37.1 57.8 37.7 15.2 37.3 63.8 StreamYOLO-L [Yang et al., 2022] 18.2 36.1 57.6 35.6 13.8 37.1 63.3 DAMO-StreamNetNet-M [He et al., 2023] 24.3 35.7 56.7 35.9 14.5 36.3 63.3 LongShortNet-M [Li et al., 2023] 17.5 34.1 54.8 34.6 13.3 35.3 58.1 StreamYOLO-M [Yang et al., 2022] 18.2 32.9 54.0 32.5 12.4 34.8 58.1 DAMO-StreamNetNet-S [He et al., 2023] 21.3 31.8 52.3 31.0 11.4 32.9 58.7 LongShortNet-S [Li et al., 2023] 14.6 29.8 50.4 29.5 11.0 30.6 52.8 StreamYOLO-S [Yang et al., 2022] 14.2 28.8 50.3 27.6 9.7 30.7 53.1 DyRoNet DyRoNet (DAMOM + L) 37.61 37.8 (+2.1) 58.8 (+2.1) 38.8 (+2.9) 16.1 (+1.6) 39.0 (+2.7) 64.0 (+0.7) DyRoNet (LSNM + L) 29.05 36.9 (+2.8) 58.2 (+3.4) 37.4 (+2.8) 14.9 (+1.6) 37.5 (+2.2) 63.3 (+5.2) DyRoNet (sYOLOM + L) 23.51 35.0 (+2.1) 55.7 (+1.7) 35.5 (+3.0) 13.7 (+1.3) 36.2 (+1.4) 61.1 (+3.0) Table 1: The comparison of DyRoNet and SOTA. 
In this table, the optimal values are highlighted in green font and the online evaluation latency reaches the real-time is shown in red font. Model Bank Random LoRA + Router StreamYOLOS + M 39.16 26.25 StreamYOLOS + L 24.04 29.35 StreamYOLOM + L 24.69 23.51 LongShortNetS + M 24.79 21.47 LongShortNetS + L 21.49 30.48 LongShortNetM + L 24.75 29.05 DAMO-StreamNetS + M 36.61 33.22 DAMO-StreamNetS + L 35.12 39.60 DAMO-StreamNetM + L 37.30 37.61 Table 2: Comparison of inference time (ms) on single RTX 3090. The optimal inference time between random and after train are consistently highlighted in green font. directly copied the reported performance from their original papers as their results. The performance comparison was conducted on the Argoverse-HD dataset [Li et al., 2020]. An overview of the results reveals that our proposed DyRoNet with a model bank of DAMO-StreamNet series achieves 37.8% sAP in 39.60 ms latency, outperforming the current state-of-the-art methods in latency by a significant margin. For the StreamYOLO and LongShortNet model banks, our DyRoNet attains 36.9% and 37.1% sAP in 29.35 ms, and 30.48 ms latency respectively, surpassing the original model dramatically. This demonstrates the effectiveness of the systematic improvements in DyRoNet. 4.4 Inference Time We conducted detailed experiments analyzing the trade-offs between DyRoNet\u2019s inference time and performance under different model bank selection strategies. Table 2 systematically presents the findings, with optimal times in green. This highlights DyRoNet\u2019s superior performance\u2014maintaining Model Bank Full LoRA StreamYOLOS + M 32.9 33.7 StreamYOLOS + L 36.1 36.9 StreamYOLOM + L 36.2 35.0 LongShortNetS + M 29.0 30.5 LongShortNetS + L 36.2 37.1 LongShortNetM + L 36.3 36.9 DAMO-StreamNetS + M 34.8 35.5 DAMO-StreamNetS + L 31.1 37.8 DAMO-StreamNetM + L 37.4 37.8 Table 3: Comparion of LoRA finetune and Full finetune. Full means the full fine-tuning and LoRA means the LoRA fine-tuning. And the best values between Full and LoRA are shown in red font. competitive inference speed alongside accuracy gains versus the random approach. Specifically, DyRoNet achieves efficient speeds while preserving or enhancing performance. This balance enables meeting real-time needs without compromising perception quality, critical for autonomous driving where both factors are paramount. By validating effectiveness in inference time reductions and accuracy improvements, the results show the practicality and efficiency of DyRoNet\u2019s dynamic model selection. 4.5 Ablation Study Router Network. To validate the effectiveness of the Router Network based on frame difference, we conducted comparative experiments using frame difference \u2206It, the current frame It, and the concatenation of the current frame with the previous historical frame [It + It\u22121] as input modality of the Router Network. The experimental results are presented in Tab. 5. To control variables, in these experiments, we froze the model bank during training and only trained the Router Network. 
And only three different choices of StreamYOLO \fModel Bank b0 b1 b2 Random sAP K = 2 same model DAMOS + M 31.8 35.5 33.5 35.5 DAMOS + L 31.8 37.8 34.5 37.8 DAMOM + L 35.5 37.8 36.5 37.8 LSNS + M 29.8 34.1 31.8 30.5 LSNS + L 29.8 37.1 33.4 37.1 LSNM + L 34.1 37.1 35.6 36.9 sYOLOS + M 29.5 33.7 31.5 33.7 sYOLOS + L 29.5 36.9 33.2 36.9 sYOLOM + L 33.7 36.9 35.4 35.0 K = 2 different model DAMOS + LSNS 31.8 29.8 30.7 30.5 DAMOS + LSNM 31.8 34.1 32.6 34.1 DAMOS + LSNL 31.8 37.1 34.3 31.8 DAMOM + LSNS 35.5 29.8 32.6 29.8 DAMOL + LSNS 37.8 29.8 33.8 29.8 K = 3 same model DAMOS + M + L 31.8 35.5 37.8 34.8 37.7 LSNS + M + L 29.8 34.1 37.1 33.5 36.1 sYOLOS + M + L 29.5 33.7 36.9 33.4 36.6 Table 4: Ablation of model bank setting. K means the number of the model in bank P. Model Bank Input Modality LoRA StreamYOLOS + M It 33.7 [It + It\u22121] 33.7 \u2206It 32.6 StreamYOLOS + L It 34.1 [It + It\u22121] 30.2 \u2206It 35.0 StreamYOLOM + L It 33.7 [It + It\u22121] 33.7 \u2206It 34.6 Table 5: Ablation of router network input. The optimal results are marked in red font under the same model bank setting. is involved in model bank. It can be obverse that using frame difference as input exhibits better performance than other two types of input modalities (35.0 of StreamYOLOS + L and 34.6 of StreamYOLOM + L). This indicates that utilizing frame differences offers significant advantages in comprehending and characterizing environmental speed. Conversely, it also underscores that employing single frames as input or using multiple frames as input renders the lightweight model bank selection model ineffective. Branch Selection. Our research on streaming perception models has shown that configuring these models across varying scales can optimize their performance. We found that combining large and small models strikes an optimal balance, resulting in significant speed improvements. This conclusion is supported by the empirical evidence presented in Tab. 4, which clearly shows that the large-small model pairing outperforms both the large-small and large-medium combinations. Our findings highlight the importance of strategic model scaling in streaming perception and provide a framework for future model optimization in similar domains. Fine-tuning Scheme. In our evaluation, we contrasted the Model Bank Rank branch 0 branch 1 after train Param.(%) DAMOS + L 32 31.8 37.8 37.8 14.35 DAMOS + L 16 31.8 37.8 35.9 7.73 DAMOS + L 8 31.8 37.8 35.9 4.02 LSNS + L 32 29.8 37.1 36.9 10.39 LSNS + L 16 29.8 37.1 30.6 5.48 LSNS + L 8 29.8 37.1 30.6 5.48 sYOLOS + L 32 29.5 36.9 36.6 10.21 sYOLOS + L 16 29.5 36.9 35.0 5.38 sYOLOS + L 8 29.5 36.9 35.0 2.7 Table 6: Ablation of LoRA rank: In the Param. column, we solely compare the proportion of parameters occupied by LoRA to the entire model. The best performance under the same model bank setting are highlighted in red font. performance of direct fine-tuning with the Low-Rank Adapter (LoRA) fine-tuning strategy [Zhu et al., 2023] for streaming perception models. Results are listed in Tab. 3. The results clearly demonstrated that LoRA fine-tuning surpasses direct fine-tuning, with the DAMO-Streamnet-based model bank configuration realizing an absolute gain of over 1.6%. This substantiates LoRA\u2019s fine-tuning proficiency in circumventing the pitfalls of forgetting and data distribution bias inherent to direct fine-tuning. 
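For reference on the fine-tuning strategy compared in Tab. 3 and Tab. 6, the following is a minimal, generic low-rank adapter wrapped around a frozen linear layer; the rank r plays the role of the Rank column in Tab. 6. This only sketches the general LoRA idea, and the class name, initialization, and scaling choice are assumptions rather than the exact adapter placement used in DyRoNet.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Generic LoRA adapter: y = W0 x + (alpha / r) * B(A x), with the pretrained W0 frozen."""
    def __init__(self, base: nn.Linear, r: int = 32, alpha: float = 32.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():      # freeze the pretrained weights
            p.requires_grad = False
        self.lora_A = nn.Linear(base.in_features, r, bias=False)
        self.lora_B = nn.Linear(r, base.out_features, bias=False)
        nn.init.normal_(self.lora_A.weight, std=0.01)
        nn.init.zeros_(self.lora_B.weight)    # the update starts at zero, so training begins from W0
        self.scaling = alpha / r

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.base(x) + self.scaling * self.lora_B(self.lora_A(x))

# Only the low-rank factors receive gradients, so the trainable-parameter share
# shrinks roughly in proportion to r (cf. the Param. column of Tab. 6).
layer = LoRALinear(nn.Linear(256, 256), r=32)
trainable = sum(p.numel() for p in layer.parameters() if p.requires_grad)
```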
This experimental result demonstrates that LoRA fine-tuning can effectively mitigate the overfitting problem that may arise during model bank fine-tuning, leading to a stable and overall performance improvement. LoRA Rank. To assess the impact of different LoRA ranks in DyRoNet, we conducted experiments with rank = 32, 16, 8 respectively. All these experiments were set to train for 5 epochs, and the training alternated between Router Network training and model bank fine-tuning. The results are presented in Tab. 6. It can be observed that the performance is better with rank = 32 compared to rank = 8 and rank = 16, and only occupy 10% of the total model parameters. Therefore, based on these experiments, rank = 32 was selected as the default setting for our experiments. Although a smaller LoRA rank occupies fewer parameters, it leads to a rapid performance decay. The experimental results clearly demonstrate that with LoRA fine-tuning, it is possible to achieve superior performance than a single model while utilizing a smaller parameter footprint. 5" + } + ], + "Zhifeng Hao": [ + { + "url": "http://arxiv.org/abs/2012.11805v1", + "title": "Semi-Supervised Disentangled Framework for Transferable Named Entity Recognition", + "abstract": "Named entity recognition (NER) for identifying proper nouns in unstructured\ntext is one of the most important and fundamental tasks in natural language\nprocessing. However, despite the widespread use of NER models, they still\nrequire a large-scale labeled data set, which incurs a heavy burden due to\nmanual annotation. Domain adaptation is one of the most promising solutions to\nthis problem, where rich labeled data from the relevant source domain are\nutilized to strengthen the generalizability of a model based on the target\ndomain. However, the mainstream cross-domain NER models are still affected by\nthe following two challenges (1) Extracting domain-invariant information such\nas syntactic information for cross-domain transfer. (2) Integrating\ndomain-specific information such as semantic information into the model to\nimprove the performance of NER. In this study, we present a semi-supervised\nframework for transferable NER, which disentangles the domain-invariant latent\nvariables and domain-specific latent variables. In the proposed framework, the\ndomain-specific information is integrated with the domain-specific latent\nvariables by using a domain predictor. The domain-specific and domain-invariant\nlatent variables are disentangled using three mutual information regularization\nterms, i.e., maximizing the mutual information between the domain-specific\nlatent variables and the original embedding, maximizing the mutual information\nbetween the domain-invariant latent variables and the original embedding, and\nminimizing the mutual information between the domain-specific and\ndomain-invariant latent variables. Extensive experiments demonstrated that our\nmodel can obtain state-of-the-art performance with cross-domain and\ncross-lingual NER benchmark data sets.", + "authors": "Zhifeng Hao, Di Lv, Zijian Li, Ruichu Cai, Wen Wen, Boyan Xu", + "published": "2020-12-22", + "updated": "2020-12-22", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.LG" + ], + "main_content": "Introduction Named entity recognition(NER) is a standard natural language processing (NLP) task for identifying and classifying expressions with special meanings in unstructured text [18]. 
In recent years, NER approaches based on bidirectional long short-term memory (BiLSTM) and conditional random \ufb01elds (CRF) have achieved excellent performance [20, 4]. However, these methods are domain speci\ufb01c and they cannot be readily generalized to data sets from other domains, mainly due to the excessive cost of manually constructing the required high-quality and large-scale data sets. Domain adaptation [30, 11, 36, 27, 3] aims to exploit abundant labeled data in the source domain to improve the performance in the target domains, and thus it can alleviate the restriction on NER caused by the limited availability of labeled data in the target domain. Most of the existing domain-adaptation approaches were designed for unsupervised scenarios where both the source and target domains share the samenamed output space. However, these conventional domain adaptation approaches are not suitable in cases where the named entities have a di\ufb00erent source domain to the target domain. In order to enable transfer learning in the settings with di\ufb00erent source domain and target domain entity spaces, a few labeled target-domain samples should be incorporated into the training data set, which is referred to as semi-supervised domain adaptation for NER [31]. There are two main research areas for transferable NER. The \ufb01rst category comprises simple and straightforward methods. For example, Lee et al. used labeled target domain samples to \ufb01ne tune a model initialized based on a source domain data set [21]. The other types of methods such as that proposed by Yang et al. are based on the idea of multi-task learning and developing a model that contains a shared feature extractor and a domain-speci\ufb01c CRF layer for the source and target domains, respectively [39, 40]. However, the methods mentioned above have the following limitations. (1) The domain-invariant information is not explicitly extracted. For example, \ufb01ne tuningbased methods [21] may not work very well when the gap between the source domain and target domain distribution is excessively large because a few target-domain samples may lead to over\ufb01tting even when domain-speci\ufb01c information is considered. (2) The domain-speci\ufb01c information is usually ignored. For example, multi-task-based methods [40, 39] implicitly assume that a shared feature extractor can generate the domain-invariant information. However, they do not perform well at recognizing 2 \fdomain-speci\ufb01c name entities because the domain-speci\ufb01c information is not well integrated. Source Domain meeting deficit reduction in Gramm Rudman budget law Target Domain omg is a ROOT playing show Stetson at @wethekings NP VP NN NNS VBZ VBG VP NP DT NN PP IN NP NNP ROOT VBG NN NN NP PP IN NP NNP NNP NN NN similar structure Figure 1: Illustrations of syntactic structure trees from source and target domains. The upper and lower syntactic structure trees were parsed using sentences from Newswire and social media domains. \u201cGramm\u201d and \u201cRudman\u201d are related to law in the source domain, and the tag \u201cStetson\u201d is related to facility in the target domain. The topics of di\ufb00erent domains are di\ufb00erent and we treated them as domain-speci\ufb01c information. The red boxes denote the similar substructure from di\ufb00erent domains and they indicate domain-invariant information. 
To address these problems, it is necessary to \ufb01nd a solution that can extract and utilize both the domain-invariant and domain-speci\ufb01c information. Figure 1 illustrates an example of domain-invariant and domain-speci\ufb01c components in NER. In this example, the topic of the source domain (\u201cNewswire\u201d) is obviously di\ufb00erent from that of the target domain (\u201csocial media\u201d), which is domain-speci\ufb01c information. However, the syntactic substructures, which are important for locating the name entities, are similar in the two domains and they can be considered as domaininvariant information. Hence, we can assume that each sample is controlled by two independent latent variables z and v, which we denote as domain-speci\ufb01c and domain-invariant latent variables, respectively. Our aim is to disentangle these two 3 \flatent variables. Cai et al. utilized an analogous technique for unsupervised domain adaptation by using two supervised signals [3]. In addition, Chen et al. employed paraphrase pair data sets in a subtle manner and learned sentence representations to disentangle the syntax and semantics of a sentence by incorporating the semantic and syntactic supervised signals [5]. However, it is still very challenging to disentangle these latent variables in cross-domain NER because it is di\ufb03cult to obtain a data set with labels that indicate whether two sentences have similar substructures. In the present study, inspired by the disentangled representations of multiple explanatory factors used in previous research [2, 26, 8], we developed a semi-supervised disentangled (SSD) framework for transferable NER, which assumes that the domainspeci\ufb01c variables z are independent of the domain-invariant variables v. In the proposed SSD framework, the domain-speci\ufb01c latent variables z and domain-invariant latent variables v are extracted, disentangled, and then simultaneously used to predict the named entities. In order to disentangle two latent variables with limited supervision of the signals, we \ufb01rst use a domain predictor to push the domain-speci\ufb01c information into z, before then employing three types of mutual information regularization terms. In particular, we simultaneously maximize the mutual information between the domain-speci\ufb01c latent variables and the original embedding, maximize the mutual information between the domain-invariant latent variables and the original embedding, and minimize the mutual information between the domain-speci\ufb01c and domain-invariant latent variables. Our SSD model estimates the mutual information by using neural networks [1] and we optimize our SSD model in an iterative strategy, which guarantees the accuracy of the estimated mutual information. Extensive experiments demonstrated that SSD outperformed the state-of-the-art transferable NER methods based on cross-domain and cross-lingual standard benchmarks. The main contributions of our study are summarized as follows. \u2022 We propose a semi-supervised framework for transferable NER by disentangling domain-invariant and domain-speci\ufb01c information. \u2022 In the proposed framework, we employ three mutual information regularization terms to successfully achieve disentanglement with limited supervision of the signals. \u2022 In the proposed framework, we utilize both the domain-invariant and domainspeci\ufb01c information to accurately recognize a named entity based on the target domain. 
\u2022 Experimental studies demonstrated that our model obtained state-of-the-art performance with cross-domain and cross-lingual data sets. 4 \fThe remainder of this paper is organized as follows. In Section 2, we review related research into NER, domain adaptation, domain adaptation in NLP, and disentanglement. In Section 3, we de\ufb01ne the problem of semi-supervised domain adaptation for NER and describe some preliminary techniques. In Section 4, we give the details of our SSD model. In Section 5, we present our experimental results based on standard benchmarks. In Section 6, we give our conclusions and suggestions regarding future research. 2. Related Work NER: Automatic detection of named entities in free text is a fundamental task in information extraction. Many downstream tasks such as question answering [10] and text summarization [7] depend on the performance of NER. Traditional approaches to NER include CRF models [19] and maximum entropy Markov models [29]. In recent years, several deep learning-based NER methods have been proposed [20, 4, 35, 25]. These methods share a similar architecture, which employs di\ufb00erent levels of embedding, BiLSTM for sequence modeling, and a CRF layer [19] to predict labels. Due to the advantages of the neural network, little feature engineering is required to train a NER model. We employ the BiLSTM-CRF architecture as the backbone network for our SSD model. Domain Adaptation: Domain adaptation [30, 11, 36, 27, 3] is a hot topic in machine learning. The mainstream methods applied in the unsupervised scenario aim to extract the domain-invariant features between domains. Maximum mean discrepancy [12] is one of the most popular methods employed, which uses a geometrical measure that operates in the reproducing kernel Hilbert space. Another typical approach involves extracting the domain-invariant representation by introducing a gradient reversal layer [11] for domain alignment. These conventional approaches are mainly designed for unsupervised domain adaptation, where it is assumed that the domain-invariant information plays an important role in decisions and that di\ufb00erent domains share the same label space. In the present study, we consider the problem of cross-domain NER where the label spaces of the source and target domains are di\ufb00erent, so domain-speci\ufb01c information also plays an important role. Therefore, semi-supervised domain adaptation [38] is employed. Domain Adaptation in NLP: Due to the excessive cost incurred to achieve the expected data quality and quantity, domain adaptation is also extremely important for many NLP tasks. For example, Li et al. [22] simultaneously utilized both domainspeci\ufb01c and domain-shared sentiment words for sentiment classi\ufb01cation. Hu et al. [14] proposed an unsupervised domain adaptation method for neural machine translation by constructing a pseudo-parallel in-domain corpus. Recently, cross-domain 5 \fNER has attracted widespread interest in the \ufb01eld of machine learning. Considering that some domain-invariant knowledge can be transferred from the source to the target domain, Lee et al. [21] directly used the target data set to \ufb01ne tune a model initialized with the source data set. Based on the idea of multi-task learning [21], Yang et al. [40] considered the source and target domains as di\ufb00erent tasks and extracted the domain-invariant information by multi-task learning. 
However, these multi-task-based methods [23, 40] ignore the di\ufb00erences in the output space across domains, which may result in negative transfer. Lin et al. [23] solved this problem by appending an input adaptation layer after the word embedding layer and an output adaptation layer before the classi\ufb01er. However, domain-speci\ufb01c information in the data sets is also important but the aforementioned methods do not use it explicitly. Disentanglement: Disentangled representation [2] means that a change in one dimension corresponds to a change in one factor of variation, but the other factor is invariant. Several interesting studies have investigated disentangled representations for computer vision tasks based on a variational autoencoder [17, 15, 13, 26]. Cai et al. [3] proposed a disentangled semantic representation model for unsupervised domain adaptation. For NLP tasks, the highly related words comprise the disentanglement between the syntax and the semantics of a sentence. Chen et al. [5] proposed an approach to disentangle high-level information by skillfully utilizing the paraphrase pairs data set. In contrast to Chen et al. [5] who used semantic labels and syntactic labels to disentangle the semantic and syntactic structure information, our SSD framework only exploits the domain label that represents di\ufb00erent semantic information to disentangle the domain-speci\ufb01c and domain-invariant information by using three mutual information regularization terms. We propose an SSD model for transferable NER, which disentangles the domaininvariant and domain-speci\ufb01c information, and simultaneously uses both for recognizing named entities in the target domain. 3. Preliminaries First, we de\ufb01ne the problem of semi-supervised domain adaptation for NER, before provising a brief introduction to the basic model. 3.1. Problem De\ufb01nition Let x = [x1, x2, ..., xL] be a sentence with L words, y = [y1, y2, ..., yL] is the label sequence where yi \u2208E, and E is the named entity set. Let ES and ET be the entity sets of the source domain and target domain, respectively. Given the training data set D = {(xs, ys)}M s=1 \u222a{(xt, yt)}N t=1, where ys \u2286ES, yt \u2286ET and M \u226bN, our 6 \fobjective is to devise a model that can learn from the training data set and then predict a label sequence for the test data set D\u2217= {x\u2217 t}N\u2217 t=1 in the target domain. 3.2. Basic Model BiLSTM with a CRF layer [19] and self-attention mechanism [4] is used as the basic model for transferable NER because of its signi\ufb01cant advantages compared with the conventional approach [19]. In the following, we present some details of BiLSTM with a CRF layer, the self-attention mechanism, and its application in the semi-supervised domain adaptation setting. 3.2.1. Input Embedding The \ufb01rst step in the model is to map the discrete words into the distributed representation. Given a sentence x = [x1, x2, ...xL], we look up the embedding vector wi from the pre-trained embedding matrix. The sensitivity of the spelling should be considered, so we also look up the character-level embedding vector cij in the character-level embedding matrix for each character, i.e., cij denotes the characterlevel embedding vector of the j-th letter in the i-th word. We then use a convolutional neural network and max pooling to extract the character-level representation ci of the i-th word [6]. 
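A minimal sketch of the character-level feature extraction just described (character embeddings passed through a convolution, max-pooled, and concatenated with the word embedding), which is formalized below as Equations (1)-(3); the dimensions, kernel size, and class name here are illustrative assumptions.

```python
import torch
import torch.nn as nn

class CharWordEmbedding(nn.Module):
    """e_w_i = w_i (+) max-pool(Conv(char embeddings of word i)); cf. Eqs. (1)-(3)."""
    def __init__(self, n_chars: int, char_dim: int = 100, n_filters: int = 100, kernel: int = 3):
        super().__init__()
        self.char_emb = nn.Embedding(n_chars, char_dim, padding_idx=0)
        self.conv = nn.Conv1d(char_dim, n_filters, kernel_size=kernel, padding=kernel // 2)

    def forward(self, word_emb: torch.Tensor, char_ids: torch.Tensor) -> torch.Tensor:
        # word_emb: (L, word_dim) pre-trained word vectors; char_ids: (L, J) character indices per word
        c = self.char_emb(char_ids).transpose(1, 2)      # (L, char_dim, J)
        c = torch.relu(self.conv(c)).max(dim=2).values   # max pooling over character positions -> (L, n_filters)
        return torch.cat([word_emb, c], dim=-1)          # concatenation of Eq. (2)
```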
Formally, we de\ufb01ne the character-level feature extraction process as follows: ci = Pooling(Conv(ci1, ci2, ..., cij, ..., ciJ; \u03b8c)), (1) where J represents the character number of word xi; Pooling and Conv denote the max pooling and convolutional neural network, respectively; and \u03b8c denotes the parameters of CNN. Then, the character-level representation ci is concatenated with the word embedding wi as follows: e wi = wi \u2295ci, (2) where \u2295is the concatenation operation and e wi is the \ufb01nal input embedding of xi. For convenience, we de\ufb01ne the aforementioned process as follows: e w = Ge(xi; \u03b8c). (3) We obtained Equations (1)\u2013(3) from the study by[6]. 7 \f3.2.2. BiLSTM for Sequence Modeling Next, based on the study by [6], we present the basic features of BiLSTM and its usage in sequence modeling. First, we de\ufb01ne: \u2192 hi = \u2212 \u2192 LSTM ( \u2212 \u2192 hi\u22121, e wi), \u2190 hi = \u2190 \u2212 LSTM ( \u2190 \u2212 hi+1, e wi), hi = \u2192 hi \u2295 \u2190 hi, (4) where \u2192 hi\u2208Rd and \u2190 hi\u2208Rd denote the hidden states of the forward and backward LSTM at the i-th time step, respectively. Formally, we de\ufb01ne the aforementioned process as follows: h = [h1, h2, ...hL] = Gr ( e w; \u03b8r) , (5) where h represents all the hidden states of BiLSTM and \u03b8r denotes the parameters of BiLSTM. We describe the BiLSTM sequence model according to [6] by Equations (4)\u2013(5). 3.2.3. Self-Attention Mechanism We utilize a multi-head self-attention mechanism to extract the dependencies among words in a sentence and capture the inner syntactic structure information in a similar manner to Cao et al.[4]. The attention mechanism maps a query and a set of key\u2013value pairs to an output. In the self-attention mechanism, the query (Q \u2208RL\u00d72d), key (K \u2208RL\u00d72d), and value (V \u2208RL\u00d72d) are actually the hidden states described in 3.2.2. The \ufb01rst step of the multi-head attention mechanism involves linearly projecting the query, key, and value \u03c4 times by using di\ufb00erent linear projections. The t-th projection is as follows: headt = Attention \u0010 QW Q t , KW K t , V W V t \u0011 = softmax \uf8eb \uf8ed \u0010 QW Q t \u0011 \u0000KW K t \u0001T \u221au \uf8f6 \uf8f8V W V t , (6) where W Q t \u2208R2d\u00d7u, W K t \u2208R2d\u00d7u, and W V t \u2208R2d\u00d7u are trainable projection parameters and t = 1, 2, 3, \u00b7 \u00b7 \u00b7 , \u03c4. These results are then concatenated and projected 8 \fto generate the \ufb01nal representation e hi, which is de\ufb01ned as follows: e hi = (head1 \u2295head2, ... \u2295headt... \u2295head\u03c4) Wo, (7) where Wo are also trainable parameters. This process is described as follows: e h = Ga (h; \u03b8a) , (8) where \u03b8a = {W Q t , W K t , W V t , Wo} denote the parameters of the self-attention mechanism. We obtained Equation (6)\u2013(8) from the study by [4]. 3.2.4. CRF Layer for Label Prediction The CRF layer used in our framework is based on the previous study by [19]. 
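The sequence encoder defined by Equations (4)-(8), a BiLSTM whose hidden states act as query, key, and value for multi-head self-attention, can be sketched as follows. PyTorch's built-in multi-head attention is used here as a stand-in for the per-head projections W^Q_t, W^K_t, W^V_t and the output projection W_o; the hidden size and number of heads are assumptions.

```python
import torch
import torch.nn as nn

class BiLSTMSelfAttentionEncoder(nn.Module):
    """h = BiLSTM(e_w) (Eqs. 4-5); h_tilde = MultiHead(h, h, h) (Eqs. 6-8)."""
    def __init__(self, in_dim: int, hidden: int = 100, heads: int = 4):
        super().__init__()
        self.bilstm = nn.LSTM(in_dim, hidden, bidirectional=True, batch_first=True)
        self.attn = nn.MultiheadAttention(embed_dim=2 * hidden, num_heads=heads, batch_first=True)

    def forward(self, emb: torch.Tensor) -> torch.Tensor:
        # emb: (B, L, in_dim) concatenated word + character embeddings
        h, _ = self.bilstm(emb)             # (B, L, 2*hidden), forward and backward states concatenated
        h_tilde, _ = self.attn(h, h, h)     # query = key = value = the BiLSTM hidden states
        return h_tilde                      # attention-refined representation passed on to the CRF layer
```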
In particular, the probabilistic model of the CRF sequence de\ufb01nes a family of conditional probabilities p(y|e h; wy, by) over all possible label sequences y given e h, with the following form: p(y|e h; w, b) = |y| Y i=1 \u03c8i(yi\u22121, yi, e h) X y\u2032\u2208Y(e h) |y \u2032| Y i=1 \u03c8i(y \u2032 i\u22121, y \u2032 i, e h) (9) where \u03c8i(y\u2032, y, e h) = exp(wT y\u2032,ye hi + by\u2032,y) are potential functions, and wT y\u2032,y and by\u2032,y are the weight vector and bias corresponding to label pair (y \u2032, y), respectively. For convenience, we let \u03b8y = {w, b}. Therefore, the CRF layer is used to search for the label sequence y\u2217with the highest conditional probability, as follows. y\u2217= arg max y\u2208Y(e h) p(y|e h; \u03b8y) (10) For a sequence CRF model, training and decoding can be solved e\ufb03ciently using the Viterbi algorithm. Given a ground truth sequence y and a predicted sequence y\u2217, the loss can be represented as LCRF (y\u2217, y). 3.2.5. Semi-Supervised Domain Adaptation Training Method The source and target domain have di\ufb00erent label sets, so the CRF layer mentioned in 3.2.4 cannot share parameters across two domains, i.e., each domain learns 9 \fa separate CRF layer. Therefore, we extend the CRF layer for label prediction and let GS y \u0000.; \u03b8S y \u0001 and GT y \u0000.; \u03b8T y \u0001 be the CRF layers for the source and target domains, respectively. For convenience, we let {\u03b8c, \u03b8r, \u03b8a, \u03b8S y , \u03b8T y } = \u0398 and they are trained by minimizing the following objective function. Ly (\u0398) = 1 M LCRF (y\u2217 S, yS) + 1 N LCRF (y\u2217 T, yT) (11) In the next section, we introduce our SSD framework for cross-domain NER. 4. MODEL As illustrated in Fig. 1, a sentence x can be generated from two independent latent variables: the domain-invariant variables v and the domain-speci\ufb01c variables z. Hence, the causal mechanism for the data generation process can be described as Fig. 2. Intuitively, entities tend to be located in similar syntactic structures in sentences from two domains, thereby making v transferable. In addition, the domain-speci\ufb01c variables, such as topics, are unique and de\ufb01nitely related to either the source or the target domain. x z v Figure 2: Causal model of the data generation process, which is controlled by the domain-speci\ufb01c latent variables z and domain-invariant latent variables v. According to this observation, we need a model that can disentangle and utilize the domain-invariant latent variables and domain-speci\ufb01c latent variables. In the semi-supervised transferable NER, we assume that the domain-invariant latent variables contain the syntactic information and that the domain-speci\ufb01c latent variables contain the semantic information. Previous methods proposed for the disentanglement of semantic and syntactic information [5] require two types of labels: semantic similarity labels and syntactic structure similarity labels. However, the syntactic structure similarity labels are di\ufb03cult to obtain. Thus, in order to address this problem, we propose the SSD for transferable NER, which disentangles these two 10 \fvariables via domain label supervision and the three mutual information regularization terms. The framework of the proposed method is shown in Figure. 3 and it can be divided into three parts: input embedding with word-level and character-level information, mutual information regularization-based disentanglement, and tag prediction. 
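Because the tag-prediction part reuses the linear-chain CRF of Equations (9)-(10), a self-contained sketch of its negative log-likelihood (forward algorithm) and Viterbi decoding for a single sentence is given below; the tensor shapes and function names are assumptions, and this is not the exact implementation of [19].

```python
import torch

def crf_nll(emissions, trans, tags):
    """-log p(y | h) for one sentence; emissions: (L, T) scores, trans: (T, T), tags: (L,)."""
    L = emissions.size(0)
    # Score of the gold path: sum of emission and transition potentials.
    gold = emissions[0, tags[0]] + sum(
        trans[tags[i - 1], tags[i]] + emissions[i, tags[i]] for i in range(1, L)
    )
    # Log-partition function via the forward algorithm.
    alpha = emissions[0]                                            # (T,)
    for i in range(1, L):
        alpha = torch.logsumexp(alpha.unsqueeze(1) + trans, dim=0) + emissions[i]
    return torch.logsumexp(alpha, dim=0) - gold

def viterbi(emissions, trans):
    """Highest-scoring tag sequence y*, as in Eq. (10)."""
    L, T = emissions.shape
    score, back = emissions[0], []
    for i in range(1, L):
        best, idx = torch.max(score.unsqueeze(1) + trans, dim=0)    # best previous tag for each current tag
        score, back = best + emissions[i], back + [idx]
    path = [int(torch.argmax(score))]
    for idx in reversed(back):                                      # follow back-pointers from the last position
        path.append(int(idx[path[-1]]))
    return list(reversed(path))
```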
First, we generate the input embedding e wi by concatenating the character-level embedding ci and the word-level \u03c9i, and take it as the input for our model. In contrast to the basic model that uses a single BiLSTM as the sequence model, we feed the input embedding into the semantic encoder Gz and the syntactic encoder Gv in order to obtain domain-speci\ufb01c latent variables zi and domain-invariant latent variables vi. It should be noted that the semantic encoder and structure encoder share the same architecture, i.e., a BiLSTM layer with a self-attention mechanism. Details of the BiLSTM and self-attention mechanism are given in Subsections 3.2.2 and 3.2.3, respectively. A decoder is used to reconstruct the input embedding e w\u2032 i for each time step after receiving zi and vi. We concatenate zi and vi and feed them into the two-layer multi-layer perceptron (MLP) layers. The decoder is shared among all the time steps for the encoder output. Further details are provided in Subsection 4.1. In order to disentangle the domain-speci\ufb01c latent variables zi and domain-invariant latent variables vi, we use three mutual information regularization terms and domain label supervision. In particular, we minimize the mutual information between zi and vi, and employ a domain predictor to determine whether the zi comes from the source or target domain. Using the domain predictor, the domain-speci\ufb01c latent variables can be pushed into zi. By minimizing the mutual information between zi and vi, we also make zi and vi independent. Subsequently, we further maximize the mutual information between zi and e w\u2032 i as well as the mutual information between vi and e w\u2032 i. Further details of the proposed SSD are given in the following sections. 4.1. Latent Variable Reconstruction In order to reconstruct the original input embedding, we employ the reconstruction architecture in the SSD framework, which contains a two-layer MLP and it is shared among all of the encoder time steps. Formally, we de\ufb01ne the decoder as follows: f wi \u2032 = MLP(MLP((zi \u2295vi))). (12) For convenience, we let \u03c8 be the parameters of the decoder. The loss function 11 \f(a) Input Embedding Shared CRF Layer (b) Disentanglement (c)Tag Predictor CNN [r,u,d,m,a,n] rudman Syntactic Encoder Semantic Encoder Embedding Decoder \u0bdc \u0bdc \ud835\udc59\ud835\udc97 \ud835\udc59\ud835\udd03 \ud835\udc59\u0bd8 \ud835\udc59\u0bd7 \u0bdc Maximizing mutual information between \ud835\udf14 \u0de5\u0bdc\u2032 and \ud835\udc63\u0bdc Minimizing mutual information between \ud835\udc63\u0bdcand Reconstruction loss \ud835\udc59\ud835\udf4a \ud835\udc59\u0bd8 \ud835\udc59\u0be5 \ud835\udc59\ud835\udd03 Maximizing mutual information between \ud835\udf14 \u0de5\u0bdc\u2032 and \ud835\udc59\u0be5 \ud835\udc59\u0bd7 Domain loss Domain Predictor Figure 3: Architecture of the semi-supervised disentangled (SSD) framework for transferable NER. (a) The input embedding e \u03c9i is generated by concatenating the word-level embedding wi and character-level embedding ci. (b) The disentanglement between semantic latent variables zi and syntactic structure latent variables vi. (c) The label y is predicted by utilizing the disentangled semantic and syntactic structure latent variables simultaneously. for the reconstruction is as follows: Lr (\u03c8, \u0398) = 1 L L X i=1 MSE(f wi \u2032, f wi) (13) where \u03c8 denotes the parameters of decoder and MSE denotes the mean square error loss function. 4.2. 
Domain-Speci\ufb01c Latent Variables Extraction In order to push the domain-speci\ufb01c information, such as topic information for di\ufb00erent domains, into zi, we add a domain predictor Cd that takes domain-speci\ufb01c latent variables zi as the input and predicts the domain label. We use an MLP layer to predict the domain label for each sentence. Formally, this process can be de\ufb01ned as follows: d\u2217= MLP (Pooling (z1, z2, \u00b7 \u00b7 \u00b7 , zL; \u03b8d)) , (14) where Pooling(.) denotes the max pooling over all domain-speci\ufb01c latent variables at each time step. We use the cross entropy loss as the objective function for the 12 \fdomain predictor, as follows: Ld = \u2212d log (d\u2217) . (15) 4.3. Regular Terms for Semi-supervised Disentanglement 4.3.1. Mutual Information Neural Estimator In order to disentangle z and v, we employ three mutual information regularization terms: minimizing the mutual information between zi and vi, maximizing the mutual information between zi and e w\u2032 i, and maximizing the mutual information between vi and e w\u2032 i. Thus, the main challenge is \ufb01nding a method that can estimate the mutual information between continuous random variables. Fortunately, the mutual information neural estimator (MINE) [1] can estimate the mutual information between latent variables using a neural network. Formally, the mutual information between A and B can be described as follows: I(A, B) := H(A) \u2212H(A|B), (16) where H is the Shannon entropy and H(A|B) is the conditional entropy of B given A. Furthermore, the mutual information is equivalent to the Kullback\u2013Leibler divergence between the joint probability P (A, B) and the product of the marginals P (A) \u2297P (B): I(A, B) =DKL (P (A, B) ||P (A) \u2297P (B)) . (17) In order to estimate the mutual information using a neural network, we follow the theorem proposed by Donsker et al.[9]. Theorem 1. (Donsker\u2013Varadhan representation.[9]) The KL-divergence admits the following dual representation: DKL(P||Q) = sup T:\u2126\u2192R EP [T] \u2212log \u0000EQ \u0002 eT\u0003\u0001 , (18) where the supremum is taken over all functions T such that the two expectations are \ufb01nite. In the equation above, eT is an exponential function, T is a function that satis\ufb01es \u2126\u2192R, and \u2126denotes any function with \ufb01nite integral. This theorem implies that we can estimate DKL(P||Q) with a class of functions T : \u2126\u2192R. 13 \fBy combining this theorem with Equation (17), we obtain: I(A; B) \u2265I\u03c6(A, B), (19) where, I\u03c6(A, B) = EP(A,B) [T\u03c6] \u2212log \u0000EP(A)\u2297P(B) \u0002 eT\u03c6\u0003\u0001 , (20) and function T\u03c6 : A \u00d7 B \u2192R is parameterized by a deep neural network with the parameter \u03c6. Therefore, we can estimate the mutual information between high dimensional continuous random variables by maximizing Equation (20). We obtained Equations (16)\u2013(20) from the study by [1]. 4.3.2. Regularization Terms We extract the domain-speci\ufb01c latent variables with the domain predictor mentioned in 4.2, but the domain-invariant information is entangled in a similar manner to the syntactic information. In order to address this problem, we employ three types of mutual information regularization terms for disentanglement. 
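Before the three terms are written out, the following is a sketch of the neural estimator behind them: a statistics network T_φ trained to maximize the Donsker-Varadhan lower bound of Equation (20), with the product of marginals approximated by shuffling one argument within a batch. The network sizes and names are assumptions, not the exact estimator of [1].

```python
import torch
import torch.nn as nn

class MIEstimator(nn.Module):
    """Donsker-Varadhan bound of Eq. (20): I(A;B) >= E_P[T] - log E_{P(A) x P(B)}[exp T]."""
    def __init__(self, dim_a: int, dim_b: int, hidden: int = 128):
        super().__init__()
        self.T = nn.Sequential(                      # statistics network T_phi
            nn.Linear(dim_a + dim_b, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, a: torch.Tensor, b: torch.Tensor) -> torch.Tensor:
        # a, b: (N, dim) samples drawn jointly; shuffling b simulates sampling from the marginals.
        joint = self.T(torch.cat([a, b], dim=-1)).mean()
        b_shuffled = b[torch.randperm(b.size(0))]
        marginal = torch.logsumexp(self.T(torch.cat([a, b_shuffled], dim=-1)), dim=0) \
            - torch.log(torch.tensor(float(b.size(0))))
        return joint - marginal.squeeze()

# Training sketch: maximize this bound w.r.t. T_phi to estimate the mutual information, then reuse
# the estimates to maximize I(w'_i; v_i), maximize I(w'_i; z_i), and minimize I(z_i; v_i), cf. Eq. (24).
```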
In particular, we can estimate the mutual information between z and v with the following method: I\u03c6e (zi, vi) \u2265EP(zi,vi) [T\u03c6e] \u2212log \u0000EP(zi)\u2297P(vi) \u0002 eT\u03c6e\u0003\u0001 , (21) where \u03c6e is the parameter for T\u03c6e. After using the domain predictor and minimizing the mutual information between z and v, the model can disentangle the domain-invariant latent variables and the domain-speci\ufb01c latent variables. However, we must ensure that v contains the domain-invariant information. In the worst case, v may comprise the latent variables without any information. Therefore, we need to maximize the mutual information between e w\u2032 i and vi, and maximize the mutual information between e w\u2032 i and zi. We estimate these two types of mutual information as follows: I\u03c6v \u0000f wi \u2032, vi \u0001 \u2265EP( e w\u2032 i,vi) [T\u03c6v] \u2212log \u0010 EP( e w\u2032 i)\u2297P(vi) \u0002 eT\u03c6v\u0003\u0011 , (22) I\u03c6z \u0000f wi \u2032, zi \u0001 \u2265EP( e w\u2032 i,zi) [T\u03c6z] \u2212log \u0010 EP( e w\u2032 i)\u2297P(zi) \u0002 eT\u03c6z\u0003\u0011 , (23) where \u03c6v and \u03c6z are the parameters for T\u03c6v and T\u03c6z respectively. The objective 14 \ffunctions can be de\ufb01ned as follows: Lmi (\u03c6e, \u03c6z, \u03c6v) = le \u2212lz \u2212lv, le (\u03c6e) = EP(zi,vi) [T\u03c6e] \u2212log, \u0000EP(zi)\u2297P(vi) \u0002 eT\u03c6e\u0003\u0001 , lv (\u03c6v) = EP( e w\u2032 i,vi) [T\u03c6v] \u2212log \u0010 EP( e w\u2032 i)\u2297P(vi) \u0002 eT\u03c6v\u0003\u0011 , lz (\u03c6z) = EP( e w\u2032 i,zi) [T\u03c6z] \u2212log \u0010 EP( e w\u2032 i)\u2297P(zi) \u0002 eT\u03c6z\u0003\u0011 . (24) 4.4. Model Summary MINE [1] can estimate the mutual information between two random variables with a certain distribution, but the distributions of z and v change when the model is trained because z and v are generated by the encoder. In practice, we implement the algorithm in an iterative training strategy. The formal procedure is presented in Algorithm 1. Algorithm 1 Minibatch stochastic gradient descent training for the SSD model. Require: Kp, Km, Ki, \u03b7 are hyper-parameters. Kp is the number of steps required to train the base model; Km is the number of steps required to train the mutual information neural estimator; Ki is the number of steps for iterative training; \u03b7 is the learning rate. 1: for Kp do \u25b7pre-train base model and domain predictor 2: \u0398 = \u0398 \u2212\u03b7\u2207Ly 3: \u03b8d = \u03b8d \u2212\u03b7\u2207Ld 4: end for 5: for Ki do \u25b7iterative training 6: for Km do \u25b7train mutual information estimator 7: \u03a6 = \u03a6 \u2212\u03b7\u2207Lmi 8: end for 9: for Kp do \u25b7disentanglement between z and v 10: \u0398 = \u0398 \u2212\u03b7\u2207Ly 11: \u03c8 = \u03c8 \u2212\u03b7\u2207Lr 12: \u03b8d = \u03b8d \u2212\u03b7\u2207Ld 13: end for 14: end for In the training procedure, we employ the stochastic gradient descent algorithm to \ufb01nd the optimal parameters. In the prediction procedure, we input the target domain 15 \fsamples in the model and the labels of the target domain samples are predicted as follows: \u02c6 yT = GT y (Ga (Gr (Ge (xT)))) . (25) 5. Experimental In the following, we introduce the data set employed for the evaluation and we then provide a brief introduction to the approaches compared. Finally, we present the experimental results. 5.1. 
Data sets For cross-domain settings, the proposed approach was evaluated for four types of domains: Newswire, Social Media, Wiki, and Spoken Queries. For the Newswire domain, we used the OntoNotes 5.0 release data set (ON)[37]. For the social media domain, we employed the Ritter11 (R1) [32] data set. For the Wiki domain, we employed the GUM [41] data set. For the Spoken Queries domain, we used the MIT Movie (MM) data set [24]. Table 1 shows details of the data sets from di\ufb00erent domains. In contrast to the other baseline methods that purposely select a \ufb01xed source and target domains, we evaluated all of the methods across all of the transfer tasks. The statistics for these data sets are presented in Table 2. We also evaluated our approach in cross-lingual settings for three di\ufb00erent languages comprising Spanish (S), Dutch (D), and English (E). For Spanish and Dutch, we used the CoNLL-2002 data set [33]. For English, we used the CoNLL-2003 data set [34]. Furthermore, these data sets belong to the same domain and they share the same-named entity set. It should be noted that all three languages are Indo-European and they share the homologous syntactic structures. English and Dutch belong to the Germanic group of languages, whereas Spanish belongs to the Romance group of languages, so English is closer to Dutch and farther from Spanish, and thus more homologous syntactic structures exist between Dutch and English. The statistics for these data sets are presented in Table 3. 16 \fTable 1: Named entities and their ratios in di\ufb00erent data sets from di\ufb00erent domains. Name Topic Annotated Entities (# ratio) Ontonote-nw Newswire Person (22%), Location (2%), Organization (38%), NORP (8%), GPE (26%), Work of art (1%), Event (0.8%), Law (0.6%), Facility (0.9%), Product (1%), Language (0.1%) Ritter2011 Social Media Person (30%), Geo-loc (18%), Facility (6.9%), Company (11%), Sports Team (3.4%), Music artist (3.6%), Product (6.4%), TV show (2.2%), Movie (2.2%), Other (15%) GUM Wiki Abstract (24%), Animal (1.5%), Event (8%), Object (12%), Organization (5%), Person (23%), Place (14%), Plant (1%), Quantity (1%), Substance (3%), Time (4%) Mit movie Spoken Queries Actor (22%), Character (5%), Director (8%), Genre (15%), Plot (28%), Year (14%), Soundtrack (0.2%), Opinion (4%), Award (1.4%), Origin (4%), Quote (0.6%), Relationship (3%) Table 2: Data set statistics for cross-domain setting. Data set Language #Training Tokens #Dev Tokens #Test Tokens Ontonote-nw English 848200 144319 49235 Ritter2011 English 37098 4461 4730 Mit movie English 158823 39035 GUM English 44111 18236 Table 3: Data set statistics for cross-lingual setting. Data set Language #Training Tokens #Dev Tokens #Test Tokens CoNLL 2003 English 204567 51578 49235 CoNLL 2002 Dutch 202932 37761 68994 CoNLL 2002 Spanish 207484 51645 52098 5.2. Approaches Compared We compared the proposed SSD framework with the following baseline methods. 17 \f\u2022 In domain: This method uses the limited target domain training data to train a model and applies this model to the test data without using the source domain data. This method does not transfer any knowledge from the source domain, so it is expected to provide the lower performance bound. It was also used as a baseline method by Lin et al.[23]. \u2022 Init tuning: Init tuning is a straightforward method for transferable NER developed by Lee et. al [21]. This method \ufb01rst trains a model using labeled source data and then treats it as the initialized model. 
This model is then \ufb01ne tuned with the labeled data from the target domain. The output space for the target domain is di\ufb00erent from that for the source domain, so the parameters of the target domain label predictor need to be updated by training with the target labeled data. \u2022 Multi: The multi-task-based method was developed by Yang et al. [39]. This method employs the idea of multi-task learning and it simultaneously trains two di\ufb00erent classi\ufb01ers by using the labeled source and target domain data. It should be noted that a feature extractor is shared between the source domain and target domain. In inference mode, we ignore the source classi\ufb01er and obtain the predicted target label by feeding the target test data set. \u2022 Layer adaptation:The Layer adaptation model [23] was proposed by Lin et al. This method bridges the gap between heterogeneous input and output spaces by applying input and output adaptation layers. A pre-trained transferable word embedding is not available in the word adaptation layer, so we removed the word adaptation layer and used the same pre-trained word embedding without the word adaptation layer to ensure a fair comparison, and thus our analysis was orthometric. \u2022 Cross-Lingual Transfer Learning (CLTL): CLTL [16] is a learning model designed for part-of-speech tagging without ancillary resources. This crosslingual model aims to extract common knowledge from other languages using a common BiLSTM and GRL [11], and a private BiLSTM for language-speci\ufb01c features. No restrictions are applied to the language-speci\ufb01c BiLSTM, so this model cannot guarantee that the extracted feature is disentangled. \u2022 Multi-Task Cross-Lingual (MTCL) Sequence Tagging Model: MTCL [39] is a deep hierarchical recurrent neural network for sequence tagging. This model is similar to Multi but it simultaneously utilizes multiple languages. 18 \fOur model and the baseline methods were implemented with TensorFlow on a server with one GTX-1080 and Intel 7700K. To ensure fair comparisons, we applied the same hyper-parameter setting used by [28] for all of the methods. The hyperparameters are shown in Table 4. Table 4: Hyper-parameters used in all models. Hyper-Parameter Value Batch size 64 Word embedding size 100 Char embedding size 100 Optimizer Adam Learning rate 0.001 Dropout rate 0.5 5.3. Results Based on Cross-domain Transfer We compared SSD and the baseline methods using four di\ufb00erent data sets in order: (1) to identify the factors that in\ufb02uence the performance of semi-supervised domain adaptation in NER, and (2) to assess the generality of our SSD model compared with other state-of-the-art approaches. In order to answer these two questions, we quantitatively analyzed the experiment results. To simulate conditions where labeled target domain data were unavailable, we randomly selected 10% of the ON data set as the target domain data. 5.3.1. Analysis of generalizability The experimental results also demonstrated the generalizability of our SSD model. As shown in Table 5, we found that our SSD model outperformed the other approaches in most of the transfer directions. For the transfer direction selected by many methods, i.e., ON\u2192R1, all of the approaches performed better with the in domain baseline, and our method achieved the best result. When we tested the reverse direction, i.e., R1\u2192ON, the other approaches lost their advantage because the proportions of common entities were di\ufb00erent in R1 and ON. 
Initialization-based methods focus more on the domain-speci\ufb01c information in the target domain and they consider little of the domain-invariant information, whereas multi-task-based methods focus more on domain-invariant information and ignore the domain-speci\ufb01c information, so their performance is inferior. However, our SSD disentangles the domain-invariant and domain-speci\ufb01c information, and thus it can utilize both types 19 \fof information to achieve better performance. For other transfer directions where the two domains were totally di\ufb00erent, i.e., GUM and MM, our method still obtained comparable results. Thus, our method performed better than the baseline methods in all transfer directions and its generalizability was better. Table 5: F1-scores (%) with four di\ufb00erent domain data sets. Method R1\u2192ON R1\u2192MM R1\u2192GUM ON\u2192R1 ON\u2192MM ON\u2192GUM MM\u2192ON MM\u2192R1 MM\u2192GUM GUM\u2192ON GUM\u2192R1 GUM\u2192MM Avg In domain 85.9 72.4 53.1 64.7 72.4 53.1 85.9 64.7 53.1 85.9 64.7 72.4 69.0 INIT tuning 85.3 72.6 53.1 65.3 72.5 53.3 85.3 64.5 53.0 85.7 62.2 71.7 68.7 Layer adaption 85.3 72.6 53.1 65.3 72.5 53.3 85.3 64.5 53.0 85.7 62.2 71.7 68.7 Multi 85.3 72.7 53.2 66.8 72.7 53.5 85.6 66.9 53.7 85.5 66.5 72.6 69.6 SSD 86.4 72.9 54.4 69.1 73.2 54.7 86.3 68.5 54.1 85.7 68.5 72.8 70.6 5.3.2. Analysis of the in\ufb02uence of semantic similarity The assumption that the target domain and source domain contain many common entities is usually excessively strong. In most cases, the entities in two domains are simply homogeneous and they share similar or related meanings, such as \u201cmovie\u201d and \u201cTV show\u201d in the Rittter2011 domain, and \u201cactor\u201d and \u201cdirector\u201d in the Mit movie domain. In this case, the entities might be totally di\ufb00erent in the source and target domains, but they share similar topics and can also be transferable. According to Table 1, we found that the meanings of some entities in R1 were strongly related to those in MM, e.g., \u201cmovie\u201d and \u201cTV show\u201d in R1 were related to \u201cactor\u201d and \u201cdirector\u201d in MM. In many semi-supervised transfer methods, R1\u2192MM and MM\u2192R1 perform better than GUM\u2192MM and MM\u2192GUM, which is also consistent with our assumption. Table 6: Statistics for common entities. Name Common Entities (Percentage of Source, Percentage of Target) ON R1 (Person (22%), Person (30%)), (Facility (0.9%), Facility (6.9%)), (Product (1%), Product (6.4%)) ON GUM (Person (22%), Person (23%)), (Organization (38%), Organization (5%)), (Event (0.8%), Event (8%)) ON MM Null GUM R1 (Person (23%), Person (30%)) GUM MM Null R1 MM Null 5.3.3. Analysis of Disentangled Representation Intuitively, the amounts of common entity types in the source and target domains will in\ufb02uence the transferability. Thus, knowledge can be transferred more readily 20 \fwhen there are more common entity types in the source and target domains. The statistic for the common entities in di\ufb00erent domains are presented in Table 6. In order to evaluate the e\ufb00ectiveness of disentangled domain-invariant representation, we compared our SSD model with In domain and Multi based on the common entities, which are shown as the common entities in Table 7. 
As mentioned above, the Multi method treats each classi\ufb01cation from a di\ufb00erent domain as a task and aims to extract the representation that is shared between tasks, so the performance of Multi exceeded that of In domain in most tasks. However, this method cannot avoid the in\ufb02uence of negative transfer from the non-common entities because the representation extracted by Multi is distorted on the feature manifold. This is why Multi performed worse than In domain in some tasks, e.g., R1 \u2192ON, MM \u2192ON, and GUM \u2192ON. However, our SSD method disentangles the domain-invariant and domain-speci\ufb01c information to avoid this problem, and thus it performed better than In domain in all tasks. It should be noted that the improvement obtained with our SSD method was not as remarkable when the target domain was ON because ON is easy to train, and we obtained a very high f1 score (more than 85%) in the In domain setting. In contrast to Multi, our SSD method also utilizes the domain-speci\ufb01c information. In order to study the e\ufb00ectiveness of disentangled domain-speci\ufb01c representation, we compared our SSD model with In domain and Multi for the non-common entities, and the non-common entity results are shown in Table 7. Multi does not explicitly utilize the domain-speci\ufb01c representation, so the performance of Multi was worse than that of In domain, e.g., R1 \u2192ON and MM \u2192ON. However, our SSD method uses both the domain-invariant and domain-speci\ufb01c information at the same time, so SSD performed better than the baseline methods at most tasks. However, we also found that the performance declined when we employed GUM as the source domain because GUM contained some incorrectly labeled entities. 5.4. Results Based on CLTL We compared the performance of SSD and the other methods with three di\ufb00erent language data sets derived from CoNLL-2002 and CoNLL-2003. These data sets contained three di\ufb00erent languages comprising Spanish, Dutch, and English, and they were all related to the same topic (i.e., news). The syntactic structure of English is similar to that of the other two languages to some extent. Both Dutch and English belong to the Germanic group of language, whereas Spanish belongs to the Romance group of language. Thus, Dutch is similar to English, whereas Spanish is not. In the cross-lingual transfer experiment, we assumed that the syntactic structure was domain speci\ufb01c and the semantics were domain invariant. Therefore, we used SSD to 21 \fTable 7: F1-scores (%) for common and non-common entities. 
Common entity results Non-common entity results Transfer task in domain Multi SSD in domain Multi SSD R1\u2192ON 49.24 48.93 (0.49\u2193) 49.92 (0.68\u2191) 36.61 36.38 (0.23\u2193) 36.83 (0.22\u2191) R1\u2192MM 24.63 24.70 (0.07\u2191) 24.85 (0.23\u2191) 48.20 48.32 (0.12\u2191) 48.51 (0.31\u2191) R1\u2192GUM 29.42 29.49 (0.07\u2191) 30.33 (0.91\u2191) 23.77 24.10 (0.33\u2191) 24.27 (0.50\u2191) ON\u2192R1 54.55 55.47 (0.92\u2191) 56.94 (2.39\u2191) 10.22 10.42 (0.20\u2191) 10.94 (0.72\u2191) ON\u2192MM 45.48 45.55 (0.07\u2191) 45.95 (0.47\u2191) 27.34 27.41 (0.07\u2191) 27.56 (0.22\u2191) ON\u2192GUM 32.49 32.54 (0.05\u2191) 33.53 (1.04\u2191) 20.70 20.82 (0.12\u2191) 21.16 (0.46\u2191) MM\u2192ON 14.91 14.69 (0.22\u2193) 14.95 (0.04\u2191) 70.90 70.57 (0.33\u2193) 70.96 (0.06\u2191) MM\u2192R1 25.25 25.67 (0.42\u2191) 26.57 (1.32\u2191) 39.51 40.21 (0.70\u2191) 40.45 (0.94\u2191) MM\u2192GUM 24.19 24.41 (0.22\u2191) 25.04 (0.85\u2191) 29.00 29.47 (0.47\u2191) 29.54 (0.54\u2191) GUM\u2192ON 49.26 48.95 (0.31\u2193) 49.32 (0.07\u2191) 36.59 36.47 (0.12\u2193) 36.28 (0.31\u2193) GUM\u2192R1 51.62 52.52 (0.90\u2191) 53.36 (1.74\u2191) 13.14 12.65 (0.49\u2193) 12.86 (0.28\u2193) GUM\u2192MM 45.48 45.64 (0.16\u2191) 45.80 (0.32\u2191) 27.34 27.25 (0.09\u2193) 27.16 (0.18\u2193) 22 \fdisentangle the domain-invariant semantics and domain-speci\ufb01c syntactic structure, before \ufb01nally applying both for CLTL. In order to evaluate the e\ufb00ectiveness of our model and to consider the semi-supervised domain adaptation setting, we randomly selected 20% of the data as each target domain training data set. Table 8: F1-scores (%) with three di\ufb00erent language data sets. Methods E \u2192S E \u2192D S \u2192E S \u2192D D \u2192E D \u2192S Avg In domain 71.3 63.2 75.7 63.2 75.7 71.2 70.5 Init transfer 71.7 66.5 76.0 64.2 76.2 71.5 71.0 Multi transfer 71.7 65.1 76.2 64.8 75.3 71.6 70.9 MTCL 72.1 67.1 76.3 67.1 76.3 72.1 71.8 CLTL 73.6 67.2 76.9 67.7 77.0 74.3 72.8 SSD 75.0 69.8 80.9 68.7 81.0 74.7 75.0 As shown in Table 8, SSD performed better than the other methods in all transfer directions and the improvement in the F1-score was quite impressive. For some transfer directions, SSD achieved improvements of more than four points compared with Multi and Init, thereby indicating that both domain-invariant and domainspeci\ufb01c information are important, and the SSD model could disentangle and capture this information, before \ufb01nally utilizing it to obtain better predictions. 5.5. Low-resource Corpora Setting In order to assess the e\ufb00ectiveness of our approach when the amounts of training data from the target domain were limited, we conducted further experiments by gradually increasing the size of the target training data from 20% to 100%. Fig. 4(a), Fig. 4(b), and Fig. 4(c) illustrate the cross-domain experimental results for ON\u2192R1, ON\u2192GUM, and ON\u2192MM, respectively. In addition, Fig. 4(d) and Fig. 4(e) illustrate the cross-lingual experiment results for English to Dutch and English to Spanish, respectively. 5.5.1. Low-resource Cross-domain Setting In a low-resource cross-domain setting, we also found that when the proportion of the target domain training data was small, all methods failed to achieve ideal performance, but our SSD model still obtained comparable results. 
As the size of the target data set increased, the di\ufb00erence between our model and the other baseline methods increased because more domain-speci\ufb01c information was available, thereby improving the performance of our model. When the scale of the target domain data 23 \fset varied, our SSD model consistently achieved the best results, thereby demonstrating that: (1) domain-invariant and domain-speci\ufb01c information both contributed to the performance of transferable NER; (2) the performance of the multi-task-based methods increased slowly when the size of the target domain data set was large because these methods only focus on extracting the domain-invariant information from both domains, whereas they ignore the domain-speci\ufb01c information to some extent; and (3) our SSD model disentangled the domain-invariant and domain-speci\ufb01c information, and utilized both simultaneously to achieve the best results. 5.5.2. Low-resource Cross-lingual Setting In the low-resource cross-lingual setting, both the source and target domains belonged to the same domain. As the size of the target domain data set increased, the performance of all methods improved. In contrast to the results obtained in the lowresource cross-domain setting, we also found that when the size of the target domain data set was small, our method still obtained ideal performance, whereas the performance of the other methods decreased rapidly. When the amount of target training data was small, the initialization-based methods may have been a\ufb00ected by over\ufb01tting and the multi-task-based method could only extract a small amount of common information. By contrast, our SSD method explicitly captured the domain-invariant information and utilized the domain-speci\ufb01c information in the target domain to guarantee better performance. 24 \f0.2 0.4 0.6 0.8 1.0 Amount of target domain data 56 58 60 62 64 66 68 70 F1-score init_transfer layer_transfer multi_transfer SSD (a) ON\u2192R1 0.2 0.4 0.6 0.8 1.0 Amount of target domain data 42 44 46 48 50 52 54 F1-score init_transfer layer_adaptation multi_transfer SSD (b) ON\u2192GUM 0.2 0.4 0.6 0.8 1.0 Amount of target domain data 67 68 69 70 71 72 73 F1-score init_transfer layer_adaptation multi_transfer SSD (c) ON\u2192MM 0.2 0.4 0.6 0.8 1.0 Amount of target domain data 66 68 70 72 74 76 78 F1-score init_transfer multi_transfer MTCL CTCL SSD (d) English\u2192Dutch 0.2 0.4 0.6 0.8 1.0 Amount of target domain data 72 74 76 78 80 82 F1-score init_transfer multi_transfer MTCL CTCL SSD (e) English\u2192Spanish Figure 4: (a) F1-score vs. amount of target domain data in ON \u2192R1 transfer direction. (b) F1-score vs. amount of target domain data in ON \u2192GUM transfer direction. (c) F1-score vs. amount of target domain data in ON \u2192MM transfer direction. (d) F1-score vs. amount of target domain data in English\u2192Dutch transfer direction. (e) F1-score vs. amount of target domain data in English\u2192Spanish transfer direction. 5.6. Ablation Study To further investigate the e\ufb00ectiveness of each component of the model, we compared SSD with the following variants. 25 \f\u2022 SSD-nAttn: No attention mechanism in the SSD model. \u2022 Simple-Attn: We remove the disentanglement mechanism from the SSD model and the model degenerated to the simple Char-LSTM + attention model. \u2022 SSD-RD: To study the quality of disentanglement, we removed the objective function for disentangling these two latent variables in the SSD model. 
\u2022 SSD-DS: To assess whether the domain-speci\ufb01c information could improve the performance of the model compared with the multi-task-based method, we also tested SSD-DS where the domain-invariant encoder and decoder were removed. In this case, some domain-speci\ufb01c information was considered because of the domain predictor. Table 9: Evaluation of di\ufb00erent SSD components. Methods E \u2192S E \u2192D S \u2192E S \u2192D D \u2192E D \u2192S In domain 71.3 63.2 75.7 63.2 75.7 71.3 Multi transfer 71.7 66.1 76.2 64.8 75.3 71.6 Simple-Attn 73.5 67.6 77.1 67.3 77.3 73.7 SSD-nAttn 73.2 66.8 76.8 66.6 76.8 72.9 SSD-DS 74.1 68.6 78.5 67.8 78.2 74.3 SSD-RD 74.5 69.2 79.1 68.2 79.6 74.5 SSD 75.0 69.8 80.9 68.7 81.0 74.7 The results of the ablation study are shown in Table 9. We found that both the attention mechanism and SSD component considerably a\ufb00ected the model\u2019s performance. Furthermore, we observed the following. 1) The combination of the SSD component and syntactic-extraction attention mechanism, i.e., SSD, obtained superior performance compared with each individual component, thereby demonstrating their importance and complementary e\ufb00ect. 2) The model without the disentanglement mechanism (Simple-Attn) also performed better than Multi because the syntactic structure extracted by the attention mechanism improved the transfer capability. 3) Compared with the standard SSD, the performance of SSD-RD was lower, which indicates that the disentanglement of the domain-invariant and domainspeci\ufb01c latent variables contributed to the improved performance. We also found that the entangled domain-speci\ufb01c representation could lead to negative transfer. 4) In order to study the e\ufb00ectiveness of domain-speci\ufb01c information, we also investigated SSD-DS, where the multi-task-based model used the domain predictor in order to 26 \fpreserve the domain-invariant information. In contrast to Multi transfer, the SSDDS variant also utilized the domain-speci\ufb01c information and it obtained better results. However, it performed worse than the standard SSD, thereby demonstrating the e\ufb00ectiveness of disentanglement. 6." + } + ], + "Angsheng Li": [ + { + "url": "http://arxiv.org/abs/2001.09637v1", + "title": "Structural Information Learning Machinery: Learning from Observing, Associating, Optimizing, Decoding, and Abstracting", + "abstract": "In the present paper, we propose the model of {\\it structural information\nlearning machines} (SiLeM for short), leading to a mathematical definition of\nlearning by merging the theories of computation and information. Our model\nshows that the essence of learning is {\\it to gain information}, that to gain\ninformation is {\\it to eliminate uncertainty} embedded in a data space, and\nthat to eliminate uncertainty of a data space can be reduced to an optimization\nproblem, that is, an {\\it information optimization problem}, which can be\nrealized by a general {\\it encoding tree method}. The principle and criterion\nof the structural information learning machines are maximization of {\\it\ndecoding information} from the data points observed together with the\nrelationships among the data points, and semantical {\\it interpretation} of\nsyntactical {\\it essential structure}, respectively. A SiLeM machine learns the\nlaws or rules of nature. 
It observes the data points of real world, builds the\n{\\it connections} among the observed data and constructs a {\\it data space},\nfor which the principle is to choose the way of connections of data points so\nthat the {\\it decoding information} of the data space is maximized, finds the\n{\\it encoding tree} of the data space that minimizes the dynamical uncertainty\nof the data space, in which the encoding tree is hence referred to as a {\\it\ndecoder}, due to the fact that it has already eliminated the maximum amount of\nuncertainty embedded in the data space, interprets the {\\it semantics} of the\ndecoder, an encoding tree, to form a {\\it knowledge tree}, extracts the {\\it\nremarkable common features} for both semantical and syntactical features of the\nmodules decoded by a decoder to construct {\\it trees of abstractions},\nproviding the foundations for {\\it intuitive reasoning} in the learning when\nnew data are observed.", + "authors": "Angsheng Li", + "published": "2020-01-27", + "updated": "2020-01-27", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI", + "cs.IT", + "math.IT" + ], + "main_content": "Introduction Turing machines [24] capture the mathematical essence of the concept of \u201ccomputation\u201d, give not only a mathematical de\ufb01nition of the concept computation, but also provides a model to build \u201ccomputers\u201d. In the 20th century, it had been proved that computers are useful, for which the mission of computer science was to develop ef\ufb01cient algorithms and computing devices. In the 21st century, computers have been becoming very useful everywhere. The mission of computers has become \u201cinformation processing\u201d in the real world. However, there is no a mathematical theory that supports the mission of \u201cinformation processing\u201d. At the beginning of arti\ufb01cial intelligence in 1956, one point of view was to regard \u201carti\ufb01cial intelligence\u201d as \u201ccomplex information processing\u201d. Again, there was no mathematical understanding of complex information processing. In the past more than 70 years, Shannon\u2019s information theory is the main principle for us to understand the concept of \u201cinformation\u201d. However, Shannon\u2019s theory fails to support the current \u201cinformation processing\u201d, especially \u201ccomplex information processing\u201d. \u2217The author was partially supported by NSFC grant No. 61932002 and No. 61772503. 1 arXiv:2001.09637v1 [cs.LG] 27 Jan 2020 \fShannon\u2019s [21] metric measures the uncertainty of a probabilistic distribution or a random variable from the probability distribution as H(p1, \u00b7 \u00b7 \u00b7 , pn) = \u2212 n X i=1 pi log2 pi. (1) This metric and the associated concept of noise, have provided rich sources for both information theory and technology. In particular, Shannon\u2019s theory solved two fundamental questions in communication theory: What is the ultimate data compression, and what is the ultimate transmission rate of communication. For this reason, some people consider information theory to be a sub\ufb01eld of communication theory. We remark that it is much more. Indeed, information theory plays an important role in many areas, such as statistical physics, computer science, statistical inference, probability and statistics. Shannon\u2019s metric measures the quantity of uncertainty embedded in a random variable or a probability distribution. We note that either a random variable or a probability distribution is a function. 
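Equation (1) here is the Shannon entropy, cleanly rendered as H(p_1, ..., p_n) = -\sum_{i=1}^{n} p_i \log_2 p_i. A minimal numerical check, using the usual convention that terms with p_i = 0 contribute nothing:

    import math

    def shannon_entropy(p):
        """H(p_1, ..., p_n) = -sum_i p_i * log2(p_i); zero-probability terms contribute 0."""
        assert abs(sum(p) - 1.0) < 1e-9, "probabilities must sum to 1"
        return -sum(pi * math.log2(pi) for pi in p if pi > 0)

    print(shannon_entropy([0.5, 0.5]))   # 1.0 bit: maximal uncertainty over two outcomes
    print(shannon_entropy([0.9, 0.1]))   # ~0.47 bits: a biased coin is less uncertain
    print(shannon_entropy([0.25] * 4))   # 2.0 bits: uniform distribution over four outcomes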
Functions are classical objects in mathematics, representing the correspondence from every individual of a set to an element of the same or another set. However, in the real world, we often have to deal with systems consisting of many bodies and the relationships among the many bodies, referred to as physical systems. To represent such systems, graphs are the general mathematical model. Therefore, graphs are natural extensions of functions, and are general models of representations of real world objects. Shannon\u2019s theory indicates that, there is a quantity of uncertainty in random variables. We know that a random variable is in fact a function, and that a function is a special type of graph. Due to the fact that there are uncertainty in random variables and that graphs are natural extensions of functions, there must exist uncertainty in graphs. However, Shannon\u2019s metric fails to measure the quantity of uncertainty embedded in a physical system such as a graph. In 2003, Brooks [2] commented that: \u201c We have no theory, however, that gives us a metric for the information embedded in structure, especially physical structure\u201d. In addition, Shannon [22] himself realized that his metric of information fails to support the analysis of communication networks to answer questions such as the characterization of optimum communication networks. As a matter of fact, graph compressing and structure decoding are fundamental questions in structured noisy data analysis. However, literature on graphical structure compression is scare. Turn [25] introduced the problem of succinct representation of general unlabelled graphs. Naor [18] provided such a representation when all unlabelled graphs are equally probable. Adler and Mitzenmacher [1] implemented some heuristic experiments for realworld graph compression. Sun, Bolt and Ben-Avraham [23] proposed an idea similarly to that in [1] to compress sparse graphs. Peshkin [19] proposed an algorithm for a graphical extension of the one-dimensional SEQUITUR compression method. Choi and Szpankowski [3] proposed an algorithm for \ufb01nding the Shannon entropy of a graph generated from the ER model. To understand the information embedded in a graph, we will need to encode the graph. How to encode a graph? In graph theory, there are parameters related to three types of graph encoding. Each model of these encodings involves assigning vectors to vertices, and the parameter is the minimum length of vectors that suf\ufb01ce. We study the maximum of this parameter over n-vertex graphs. The parameters are intersection number, product dimension, and squashed-cube dimension. Erd\u00a8 os, Goodman and P\u00b4 osa [5] proposed the de\ufb01nition of intersection number and studied the notion. An intersection representation of length t assigns each vertex a 0, 1-vector of length t such that u and v have an edge if and only if their vectors have a 1 in a common position. Equivalently, it assigns each x \u2208V a set Sx \u2286[t] = {1, 2, \u00b7 \u00b7 \u00b7 , t} such that for any u, v, there is an edge (u, v) if and only if Su \u2229Sv \u0338= \u2205. The second parameter is the product dimension. A product representation of length t assigns the vertices distinct vectors of length t so that there is an edge (u, v) if and only if their vectors differ in every position. The product dimension of a graph G is the minimum length of such a representation of G. Lova\u00b4 sz, Nesetril and Pultr [17] characterized the n-vertex graphs with product dimension n\u22121. 
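The first two encodings admit a direct operational check. The sketch below uses an illustrative set assignment (not taken from the paper) to recover the graph realised by an intersection representation, where u and v are adjacent exactly when S_u and S_v share an element; the intersection number is then the least ground-set size t for which such sets S_x contained in [t] exist.

    from itertools import combinations

    def graph_from_intersection(sets):
        """Edges of the graph realised by an intersection representation:
        vertices u, v are adjacent iff their assigned sets intersect."""
        return {(u, v) for u, v in combinations(sorted(sets), 2) if sets[u] & sets[v]}

    # A 4-cycle a-b-c-d-a realised with ground set {1, 2, 3, 4}, one label per edge.
    rep = {"a": {1, 4}, "b": {1, 2}, "c": {2, 3}, "d": {3, 4}}
    print(sorted(graph_from_intersection(rep)))
    # [('a', 'b'), ('a', 'd'), ('b', 'c'), ('c', 'd')]

Product and squashed-cube representations can be checked analogously against their adjacency and distance conditions, although, as noted below, all of these vector encodings distort the graph to some extent.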
The third encoding is to assign vectors to vertices such that distance between vertices in the graph is the number of positions where their vectors differ. Each of these encoding assigns vectors to vertices to preserve certain properties of the graphs. The key point of the encodings is to use the mathematical operations of vectors to recover the properties of the graphs. Clearly, operations over vectors are easy and more ef\ufb01cient. Unfortunately, these encodings distort the graphs, although each of them may preserve some speci\ufb01c properties of graphs. To establish a theory of the information embedded in graphs, we will need a lossless encoding of graphs. Is there a lossless encoding of graphs? The author and his co-author [12] introduced the concept of encoding tree of a graph as a lossless encoding of graphs, and de\ufb01ned the structural entropy of a graph to be the minimum amount of information required to determine the codeword of the vertex in an encoding tree for the vertex that is accessible from random walk with stationary distribution in the graph, under the condition that the codeword of the starting vertex of the random walk is known. The structural entropy of a graph is hence the intrinsic information embedded in the graph that cannot be decoded by any encoding tree or any lossless encoding of the graph. Measuring the structural entropy of a graph involves \ufb01nding an encoding tree of the graph under which the 2 \finformation required to determine the codeword of vertices accessible from random walk in G when the codeword of the starting vertex of the random walk is known is minimized. The quanti\ufb01cation of the structural entropy of a graph de\ufb01ned in this way is the intrinsic information hidden in the graph G that cannot be decoded by any encoding tree or any lossless encoding of the graph. The encoding tree found in this way, that is, minimizing the information hidden in a graph G, in the measuring of structural entropy of the graph G hence determines and decodes a structure of G by using which the uncertainty still left or hidden in G has been minimized. We thus call such an encoding tree T of G a decoder of G. The decoder, T say, of graph G is hence an encoding tree of G. Since T determines an encoding under which the uncertainty left or still hidden in G is minimized, the syntactic structure T of G certainly supports a semantical or functional modules of G. More precisely, a decoder T supports a semantical interpretation of the system G. Due to the fact that the decoder is an encoding tree, the semantical interpretation supported by the decoder is hence called a a knowledge tree of G. This provides a general principle to acquire knowledge from observed dataset. This strategic goal of structural information theory has been successfully veri\ufb01ed in real world applications. In [12, 13], we established a systematical method based on the structural entropy minimization principle, without any hand-made parameter choices, to identify the types and subtypes of tumors. The types and subtypes found by our algorithms of structural entropy minimization are highly consistent with the clinical datasets. In [14], we developed a method, referred to as deDoC, based on the principle of structural entropy minimization, to \ufb01nd the twoand three-dimensional DNA folded structures. The deDoC was proved the \ufb01rst principle-based, systematical, massive method for us to precisely identify the topologically associating domains (TAD) from Hi-C data. 
Remarkably, deDoC \ufb01nds TAD-like structures from 10 single cells. This opens a window for us to study single cell biology, which is crucial for potential breakthroughs in both biology and medical sciences. In network theory and network security, the concept of structural entropy [12] has been extended to measure the security of networks [10, 11, 15, 16]. Structural information, as a result of the merging of the concepts of computation and information, has a rich theory, referred to [12]. More importantly, the concept of structural information provides a key to mathematically understanding the principle of data analysis, the principle of learning, and even the principle of intelligence. The reasons are as follows: Computing is, of course, an ingredient of learning and intelligence. \u201cInformation\u201d, if well-de\ufb01ned, must be the the foundation of intelligence. Mathematically speaking, entropy is the quantity of uncertainty, and information is the amount of uncertainty that has been eliminated. Therefore, both computation and information are the keys for us to understand the mathematical essence of \u201cintelligence\u201d. However, in the past more than 70 years, the studies of computational theory and information theory are largely separated. Consequently, we have no idea on how the two keys of computation and information open the window for us to capture the concept of intelligence. The structural information theory, as a theory of the merging of computation and information, opens such a window. In the present paper, we propose the model of structural information learning machinery, written SiLeM. Our structural information learning machines assume that observing is the basis of learning, that laws or rules are embedded in a noisy system of observed dataset in which each element usually consists of a syntax, a semantics and noises. Our machines learn the laws or rules of real world by observing the datasets, by using the principle of maximization of information gain to connect the datasets and to build a data space, by a general encoding tree method to decode (using the structural entropy minimization principle) the structural information of a data space to \ufb01nd the decoder or essential structure of the data space, by using the semantics of data points to interpret the essential structure or decoder of a data space to build a knowledge tree of the data space and to unify both syntax and semantics of the data space, solving the problem of interpretability of learning, by using remarkable common features of functional modules to abstract the decoder or knowledge tree to establish a tree of abstractions, by using the tree of abstractions in the encoding and optimizing when new data points are observed to realize both intuitive reasoning and logical reasoning simultaneously. Our learning model shows that learning from observing is possible, that laws or rules exist in the relationships among the data points observed, that the combination of both syntax and semantics is the principle for solving the interpretability problem of learning, that simultaneously realizing both logical reasoning and intuitive reasoning is possible in learning, that the mathematical essence of learning is to gain information, and maximization of information gain is the principle for learning algorithms that are completely free of hand-made choice of parameters. Our model shows that computing is part of learning. However, computing and learning are mathematically different concepts. 
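Since minimizing the uncertainty left in a data space is the optimization that recurs throughout this model, a small worked example may help before the formal development in Section 4. The sketch below computes a partition-based (two-level) structural entropy of an undirected graph, assuming the standard form of the definition in [12]; the partition is supplied by hand here, whereas a decoder is an encoding tree chosen to minimize this quantity.

    import math
    from collections import defaultdict

    def structural_entropy_2d(edges, partition):
        """Partition-based (two-level) structural entropy of an undirected graph,
        assuming the standard form from [12]: for each module X_j with volume V_j
        and cut size g_j, add
          - sum_{i in X_j} (d_i / vol) * log2(d_i / V_j)   (uncertainty inside X_j)
          - (g_j / vol) * log2(V_j / vol)                  (uncertainty across modules)
        where d_i is the degree of vertex i and vol is the sum of all degrees."""
        deg = defaultdict(int)
        for u, v in edges:
            deg[u] += 1
            deg[v] += 1
        vol = sum(deg.values())
        module_of = {x: j for j, part in enumerate(partition) for x in part}
        cut = defaultdict(int)
        for u, v in edges:
            if module_of[u] != module_of[v]:
                cut[module_of[u]] += 1
                cut[module_of[v]] += 1
        h = 0.0
        for j, part in enumerate(partition):
            V_j = sum(deg[i] for i in part)
            h -= sum((deg[i] / vol) * math.log2(deg[i] / V_j) for i in part)
            h -= (cut[j] / vol) * math.log2(V_j / vol)
        return h

    # Two triangles joined by one bridge edge.
    edges = [("a", "b"), ("b", "c"), ("a", "c"), ("d", "e"), ("e", "f"), ("d", "f"), ("c", "d")]
    print(structural_entropy_2d(edges, [{"a", "b", "c", "d", "e", "f"}]))    # one trivial module
    print(structural_entropy_2d(edges, [{"a", "b", "c"}, {"d", "e", "f"}]))  # natural split

On this toy graph the natural two-module partition leaves roughly 1.70 bits of uncertainty versus roughly 2.56 bits for the trivial one-module partition, which is the sense in which a better encoding tree eliminates more of the uncertainty embedded in the graph.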
We organize the paper as follows. In Section 2, we introduce the challenges of the current machine learning. In Section 3, we introduce the overview of our structural information learning machines. In Section 4, we introduce the concepts of structural entropy of graphs [12], and prove some new results about the equivalent de\ufb01nitions of the structural entropy. In Section 5, we show that the structural entropy is a natural extension of the Shannon entropy from unstructured probability distribution to structured systems, and prove a general lower bound of structural entropy which will be useful for us to understand the present structural information learning machines. In Section 6, we introduce the concepts of compressing information and decoding information of graphs, establish 3 \fa graph compressing/decoding principle, and establish an upper bound of the compressing information of graphs. In Section 7, we introduce the concepts of decoder, knowledge tree and rule abstraction, and establish a structural information principle for clustering and for unsupervised learning. In Section 9, we establish the structural information principle for connecting and associating data when new dataset is observed. In Section 10, we introduce the de\ufb01nition and algorithms for both logical and intuitive reasonings of our learning model. In Section 11, we introduce the system of structural information learning machinery. In Section 12, we introduce the encoding tree method as a general method for designing algorithms of the structural information learning machinery. In Section 13, we introduce the limitations of our structural information learning machinery. In Section 14, we summarize the contributions of the structural information learning machinery, and introduce some potential breakthroughs of the machinery. 2 The Challenges of Learning and Intelligence Mathematical understanding of learning has become a grand challenge in the foundations of both current and future arti\ufb01cial intelligence. Statistical learning is a branch with successful theory. Overall, statistical learning is a learning of the approach of the combination of computation and statistics. Statistics provides the principle for statistical results. Computation has two fundamental characters: one is locality, another is structural property. Consider a procedure of aTuring machine, at any time step in the procedure, the machine focuses only on a few states, symbols, and cells on the working tape. This is the character of locality. In addition, due to the fact that algorithms are always closely related to data structure (since, otherwise, the objects are statistical, instead of computational), computation has its second character, structural property. Of course, the approach of the combination of computation and statistics is very successful in both theory and applications. However, nevertheless, statistical learning does not really tell us what is exactly the mathematical essence of learning. For deep learning, as commented in [7]:\u201cUnsupervised learning had a catalytic effect in reviving interest in deep learning, but has since been overshadowed by the successes of purely supervised learning. \u00b7 \u00b7 \u00b7 Human and animal learning is largely unsupervised: we discover the structure of the world by observing it, not by being told the name of every object.\u201d Both supervised and unsupervised learning have been very successful in many real world applications. 
However, we have to recognize that we still do not know what is exactly the mathematical essence of learning and intelligence. In particular, are there machines that learn by observing the real world similar to human learning? Is there a mathematical de\ufb01nition of learning, similar to the mathematical de\ufb01nition of computing given by Turing [24]? What are the fundamental differences between learning and computing? Are intelligences really just function approximations? What are the fundamental differences between learning and intelligence, between learning and computing, between learning and information, and between information and intelligence? The current learning theory is built based on function approximations. Functions are essentially mathematical objects, which are de\ufb01ned by mathematical systems. Due to this fact, mathematical functions usually have only syntax, do not have semantics, and do not have noises. If learning or intelligence were just function approximation, then we would learn only mathematics. However, human learns mathematics, physics, chemistry, biology and so on. As a matter of fact, human learns from the real world and learns the laws of the nature. Human learns the laws of the nature principally based on observing, connecting data, associating, computing, interpreting and reasoning, including both logical reasoning and intuitive reasoning. When human beings learn, people use eyes to see, use brain to reason, use hand to calculate, and use mouth to speak aloud etc. When human beings learn, intuitive reasoning is equally important to logical reasoning, if it is not more important. Logical reasoning is actually a type of computation. Thinking of a Turing machine, we note that computation is locally performed, in the sense that, during the procedure of a computation, the machine focuses only on the head of the machine, which points to a cell and moves either to the left or to the right one more cell in a working tape. Computation is certainly a factor of learning. However, human learning includes both computation and intuitive reasoning, where intuitive reasoning is a reasoning by using the laws and knowledges one has already learnt. This argument shows, it is not the case that learning is another type of computing. Intuitively speaking, computation is a mathematical concept, dealing with only mathematical objects, that is, computable functions or computing devices, but learning is a concept dealing with real world objects. What are the differences between mathematical objects and real world objects? Mathematical objects largely consist of only syntax. However, real world objects certainly consist of syntax, semantics and noises. Human beings learn different objects, which have different semantics. For instance, the subjects such as mathematics, physics and chemistry etc are different due to the fact that they have different semantics. However, the math4 \fematical essence of the learning of these different subjects could be still the same. If so, this would lead to a mathematical de\ufb01nition of the concept of \u201clearning\u201d. What is the mathematical de\ufb01nition of \u201clearning\u201d? Computer science has been experiencing a big change from the 20th century to the 21st century. In the 20th century, computer science is largely proven to be useful. However, in the 21st century, computer has been proven to be useful everywhere. 
This changes the universe of computer science from \u201cmathematics and computing devices\u201d to the \u201creal world\u201d. Computing the real world is roughly stated as \u201cinformation processing\u201d from the datasets observed from real world. However, there is no a mathematical theory that supports the mission of information processing. To understand the concept of \u201cinformation processing\u201d, we look at the information theory. Shannon\u2019s information theory perfectly supports the point to point communication. However, it fails to support the analysis of communication networks, as noticed by Shannon himself [22]. Apparently, Shannon\u2019s information theory fails to support the current information processing practice of computer science. Shannon\u2019s metric de\ufb01nes entropy as the amount of uncertainty of a random variable, and mutual information as the amount of uncertainty of a random variable, X say, that is eliminated by knowing another random variable, Y say. This means that \u201cinformation\u201d is the amount of uncertainty that has been eliminated. However, Shannon\u2019s theory deals with only random variables or probability distributions. In addition, although Shannon de\ufb01ned the concept of \u201cinformation\u201d as the amount of uncertainty that has been eliminated, Shannon did not say anything about: Where does information exist? How do we generate information? How do we decode information? In the 20th century, the studies of computation and information were largely separated, developed in computer science and communication engineering, respectively. The argument above indicates that there is a need of study of the combination of computation and information. In fact, the current society is basically supported by several massive systems each of which consists of a large number of computing devices and communication devices, which calls for a supporting theory in the intersection of computational theory and information theory. Brooks 2003 [2] explicitly proposed the question of \u201cquanti\ufb01cation of structural information\u201d. In the same paper, Brooks commented that \u201cthis missing metric to be the most fundamental gap in the theoretical underpinnings of information science and of computer science\u201d. The author and his co-author [12] introduced the notion of encoding tree of graphs as a lossless encoding of graphs, de\ufb01ned the \ufb01rst metric of information that is embedded in a graph, and established the fundamental theory of structural information. The structural entropy of a graph is de\ufb01ned as the intrinsic information hidden in the graph that cannot be decoded by any encoding tree or any lossless encoding of the graph. The structural information theory is a new theory, representing the merging of the concepts of computation and information. It allows us to combine the fundamental ideas from both coding theory and optimization theory to develop new theories. More importantly, the new theory points to some fundamental problems in the current new phenomena such as massive data analysis, information theoretical understanding of learning and intelligence. It is not hard to see that both computation and information, and the combination of the two concepts are the keys to better understand the mathematical essence of learning and intelligence. The separation of the studies of computation and information in the past more than 70 years has hindered the theoretical progress on both learning and intelligence. 
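The reading of information as eliminated uncertainty recalled above can be made concrete for random variables: I(X; Y) = H(X) - H(X|Y). A toy check on a joint distribution given as a table (illustrative numbers only) follows; what this classical setting does not provide, as argued here, is a corresponding quantity for structured systems such as graphs.

    import math

    def entropy(probs):
        return -sum(p * math.log2(p) for p in probs if p > 0)

    def mutual_information(joint):
        """I(X;Y) = H(X) - H(X|Y): the uncertainty about X eliminated by observing Y.
        `joint` maps (x, y) pairs to probabilities that sum to 1."""
        px, py = {}, {}
        for (x, y), p in joint.items():
            px[x] = px.get(x, 0.0) + p
            py[y] = py.get(y, 0.0) + p
        h_x_given_y = 0.0
        for y, p_y in py.items():
            h_x_given_y += p_y * entropy([joint.get((x, y), 0.0) / p_y for x in px])
        return entropy(px.values()) - h_x_given_y

    # A noiseless channel eliminates the whole bit; an independent pair eliminates none.
    print(mutual_information({(0, 0): 0.5, (1, 1): 0.5}))                                # 1.0
    print(mutual_information({(0, 0): 0.25, (0, 1): 0.25, (1, 0): 0.25, (1, 1): 0.25}))  # 0.0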
Structural information theory provides a new chance. 3 Overview of Structural Information Learning Machines In the present paper, we will build a new learning model, namely, the structural information learning machinery. Our model is built based on our structural information theory [12]. Our learning model is a mathematical model that exactly re\ufb02ects the merging of computation and information. Our theory of information theoretical de\ufb01nition of learning here provides new approaches to potential breakthroughs in a wide range of machine learning and arti\ufb01cial intelligence. The machines of model SiLeM learn the laws or rules of nature by observing the data of the real world. The mathematical essences of SiLeM are: (1) the essence of learning is to gain information, (2) to gain information is to eliminate uncertainty, and (3) according to the principle of structural information theory, to eliminate uncertainty of a data space can be reduced to an optimization problem, that is, an information optimization problem, by a general encoding tree method. A SiLeM machine observes the data points of real world, builds the connections among the observed data, constructs a data space (for which the principle is to choose the way of connections of data points so that the information gain from the data space is maximized), \ufb01nds the encoding tree of the data space that minimizes the uncertainty of the data space, in which the encoding tree is also referred to as a decoder due to the fact that it eliminates the maximum amount of uncertainty embedded in the data space, interprets the semantics of the decoder, an encoding tree, to form a knowledge tree, extracts the laws or rules of both the decoder and the knowledge tree. The decoder and knowledge tree of a graph determines a tree of abstractions which de\ufb01nes 5 \fthe concept of hierarchical abstracting and provides the foundation for intuitive reasoning in learning. When new dataset are observed, a SiLeM machine updates the decoder, i.e., an encoding tree, by using the tree of abstractions extracted from the decoder and knowledge tree found from the previous data space. Our SiLeM machines assume that a data point representing a real world object usually consists of a syntax, a semantics and a noise, that the laws or rules of the real world objects are embedded in a noisy data space, that the functional semantics of the data space must be supported by an essential structure of the data space, and that the essential structure of a data space is the encoding tree of the data space that minimizes the uncertainty left in the data space, or maximumly eliminates the uncertainty embedded in the data space. A SiLeM machine realizes the mechanism of associating through linking data to existing data apace and to established knowledge and laws in the tree of abstractions, a procedure highly similar to human learning, realizes the uni\ufb01cation of syntactic and semantical interpretations, solving the problem of interpretability of learning, and more importantly, simultaneously realizes both logical reasoning (that is, the local reasoning of computation and optimization) and intuitive reasoning (that is, the global reasoning by using laws and knowledge learnt previously). 
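Read as pseudocode, the learning loop just described has the following shape. This is only an illustrative skeleton of the overview: every class and method name is a placeholder introduced here rather than the authors' implementation, and each step is meant to be instantiated by the encoding tree method developed in the later sections.

    class SiLeM:
        """Illustrative skeleton of the structural information learning loop."""

        def __init__(self):
            self.data_space = None      # graph over the observed data points
            self.decoder = None         # encoding tree minimizing the remaining uncertainty
            self.knowledge_tree = None  # semantical interpretation of the decoder
            self.abstractions = None    # tree of abstractions (remarkable common features)

        def observe_and_connect(self, points):
            """Link new points into the data space so that its decoding
            information (decodable uncertainty) is maximized."""
            raise NotImplementedError

        def decode(self):
            """Find the encoding tree that minimizes the uncertainty left in the
            data space; this tree is the decoder, the essential structure."""
            raise NotImplementedError

        def interpret(self):
            """Attach the semantics of the data points to the decoder's modules,
            producing the knowledge tree."""
            raise NotImplementedError

        def abstract(self):
            """Extract remarkable common features of the modules into a tree of
            abstractions, used for intuitive reasoning on later observations."""
            raise NotImplementedError

        def learn(self, points):
            self.observe_and_connect(points)  # observing and associating
            self.decode()                     # optimizing and decoding
            self.interpret()                  # interpreting: syntax to semantics
            self.abstract()                   # abstracting laws and rules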
The mathematical principle behind the procedure of SiLeM machines is to realize the maximum gain of information, by linking data points to existing dataset in a way such that the constructed data space contains the maximum amount of decodable information, the amount of uncertainty that can be eliminated by an encoding tree, or by a lossless encoder, instead of the information hidden in the data space eventually and forever, and by maximumly eliminating the uncertainty embedded in the data space that is realized by using an information optimization, which is ef\ufb01ciently achievable by an encoding tree method. Our structural information learning machines explore that the essence of learning is to gain information from the datasets observed, together with the relationships among the data points, that to gain information is to eliminate uncertainty, and more importantly, to eliminate uncertainty can be reduced to an information optimization problem, which can be ef\ufb01ciently realized by a general encoding tree method. 4 Structural Entropy of Graphs To develop our information theoretical model of learning, we recall the notion of structural entropy of graphs [12]. To de\ufb01ne the structural entropy of a graph, we need to encode a graph. It has been a long-standing open question to build a lossless encoding of a graph. In graph theory, there are several encodings of graphs, each of which encodes a graph by assigning high-dimensional vectors to the vertices of the graph. In doing so, operations in graphs can be reduced to operations in vector spaces. However, such encodings usually distort the structure of the graph, due to the fact that the operations of vectors do not exactly re\ufb02ect the operations in the corresponding graphs. Our idea is to encode a graph by a tree. Trees are the simplest graphs in some sense. Why do we use trees to encode a graph? There is no mathematical proof for this. However, we have reasons as follows. Suppose that G is a graph observed in the real world. Then G represents the syntactical system of many objects together with the relationships among the objects. In addition, there is a semantics that is associated with, but outside of the system G. The semantics of G is the knowledge of system G. The knowledge of G is typically a structure of the form of functional modules of system G. In this case, the knowledge of system G is a structure of functional modules associated with G. What is the structure of the knowledge, or functional modules or semantics of a system G? To answer the questions, we propose the following hypothesis: (1) The semantics of a system, representing the functional modules or roles of the system, has a hierarchical structure. This hypothesis re\ufb02ects the nature of human understanding for a complex system consisting of many bodies together with the relationships among the many bodies. It is true that given a complex system consisting of a huge number of real world objects together with their relationships, people can only understand it by identifying the functional modules of the complex system by a tree-like structure or by a hierarchical structure. The hierarchical structure of functional modules gives us a hierarchical or tree-like abstractions of the system. We understand a complex system by a high-level abstractions. This means that humans understand the functional modules of a complex system by a hierarchical structure, or by a tree-like structure. 
In addition, we assume that humans organize knowledge as a tree structure, and hence that human knowledge has a tree structure. (2) The semantics of a system has a supporting syntax, referred to as the essential structure of the system. This means that semantics certainly has a supporting syntax structure. (3) According to (1) and (2) above, the essential structure (syntax) of a system G has a hierarchical structure. Because the semantics of a system has a tree structure, the supporting syntax must have a tree structure. This supporting tree structure is called the essential structure of the system. The hierarchical hypothesis implies that the essential structure, that is, the supporting syntax of a complex system G, is a tree. This suggests encoding a complex system by trees. Furthermore, we notice that: (i) From the point of view of human understanding of knowledge, humans understand complex systems by functional modules of high-level abstractions. (ii) From the point of view of computer science, trees are efficient data structures, representing systems of many objects, and simultaneously allowing highly efficient algorithms. (iii) From the point of view of information theory, trees provide the fundamental properties needed for encoding, see the Encoding Tree Lemma in Lemma 4.1 below. Nevertheless, in [12], we encoded graphs by trees. Specifically, we used the priority tree defined below to encode a complex system. 4.1 Priority tree Definition 4.1. (Priority tree) A priority tree is a rooted tree T with the following properties: (i) The root node is the empty string, written \u03bb. A node in T is expressed by the string of the labels of the edges from the root to the node. We also use T to denote the set of the strings of the nodes in T. (ii) Every non-leaf node \u03b1 in T has k \u2265 2 children for some natural number k (depending on \u03b1) for which the edges from \u03b1 to its children, or referred to as immediate successors, are labelled by: 0