{ "url": "http://arxiv.org/abs/2404.16572v1", "title": "ReliK: A Reliability Measure for Knowledge Graph Embeddings", "abstract": "Can we assess a priori how well a knowledge graph embedding will perform on a\nspecific downstream task and in a specific part of the knowledge graph?\nKnowledge graph embeddings (KGEs) represent entities (e.g., \"da Vinci,\" \"Mona\nLisa\") and relationships (e.g., \"painted\") of a knowledge graph (KG) as\nvectors. KGEs are generated by optimizing an embedding score, which assesses\nwhether a triple (e.g., \"da Vinci,\" \"painted,\" \"Mona Lisa\") exists in the\ngraph. KGEs have been proven effective in a variety of web-related downstream\ntasks, including, for instance, predicting relationships among entities.\nHowever, the problem of anticipating the performance of a given KGE in a\ncertain downstream task and locally to a specific individual triple, has not\nbeen tackled so far.\n In this paper, we fill this gap with ReliK, a Reliability measure for KGEs.\nReliK relies solely on KGE embedding scores, is task- and KGE-agnostic, and\nrequires no further KGE training. As such, it is particularly appealing for\nsemantic web applications which call for testing multiple KGE methods on\nvarious parts of the KG and on each individual downstream task. Through\nextensive experiments, we attest that ReliK correlates well with both common\ndownstream tasks, such as tail or relation prediction and triple\nclassification, as well as advanced downstream tasks, such as rule mining and\nquestion answering, while preserving locality.", "authors": "Maximilian K. 
Egger, Wenyue Ma, Davide Mottin, Panagiotis Karras, Ilaria Bordino, Francesco Gullo, Aris Anagnostopoulos", "published": "2024-04-25", "updated": "2024-04-25", "primary_cat": "cs.SI", "cats": [ "cs.SI" ], "label": "Original Paper", "paper_cat": "Knowledge AND Graph", "gt": "Knowledge graphs (KGs) are sets of facts (i.e., triples such as \u201cda Vinci,\u201d \u201cpainted,\u201d \u201cMona Lisa\u201d) that interconnect entities (\u201cda Vinci,\u201d \u201cMona Lisa\u201d) via relationships (\u201cpainted\u201d) [20, 47]. Entities and relationships correspond to nodes and (labeled) edges of the KG, respectively (Figure 2). Knowledge graph embeddings (KGEs) [45] are popular techniques to generate a vector representation for entities and relationships of a KG. A KGE is computed by optimizing a scoring function that provides an embedding score as an indication of whether a triple actually exists in the KG. KGEs have been extensively used as a crucial building block of state-of-the-art methods for a variety of downstream tasks commonly carried out on the Web, such as knowledge completion [46], whereby a classifier is trained on the embeddings to predict the existence of a triple; head/tail prediction [24], which aims to predict entities of a triple; as well as more advanced ones, including rule mining [49], query answering [48], and entity alignment [5, 21, 51, 52]. Motivation. So far, the choice of an appropriate KGE method has depended on the downstream task, the characteristics of the input KG, and the available computational resources. The existence of many different scoring functions, including linear embeddings [8], bilinear models [49], scores based on complex numbers [36], and projections [10], further complicates this choice. Alas, the literature lacks a unified measure to quantify beforehand how reliable the performance of a KGE method will be for a certain task, without performing that potentially slow task.
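The triple structure described above admits a very small sketch. The following minimal Python representation is illustrative only (not the authors' code): a KG is a set of (head, relation, tail) facts, from which the entity and relationship sets are derived.

```python
# Minimal sketch (illustrative, not the paper's code): a KG as a set of
# (head, relation, tail) triples.
from typing import NamedTuple


class Triple(NamedTuple):
    head: str
    relation: str
    tail: str


facts = {
    Triple("da Vinci", "painted", "Mona Lisa"),
    Triple("da Vinci", "born in", "Italy"),
    Triple("Mona Lisa", "located in", "France"),
}

# Entities and relationships are implicit in the fact set F.
entities = {e for t in facts for e in (t.head, t.tail)}
relations = {t.relation for t in facts}


def is_positive(triple: Triple) -> bool:
    """A triple is positive iff it exists in the fact set F, negative otherwise."""
    return triple in facts
```

This mirrors the paper's formalization of a KG as nodes (entities) connected by labeled edges (relationships).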
WWW \u201924, May 13\u201317, 2024, Singapore, Singapore. Maximilian K. Egger et al.
Furthermore, KGE performance on a specific downstream task is typically assessed in a global way, that is, in terms of how accurate a KGE method is for that task on the entire KG. However, the performance of KGEs in several practical applications (e.g., knowledge completion [46]) typically varies across the parts of the KG. This requires assessing KGE performance locally, on specific parts of the KG, rather than globally. Contributions. We address all the above shortcomings of the state of the art in KGE and introduce ReliK (Reliability for KGEs), a simple yet principled measure that quantifies how reliably a KGE will perform on a certain downstream task in a specific part of the KG, without executing that task or (re)training that KGE. To the best of our knowledge, no measure like ReliK exists in the literature. ReliK relies exclusively on embedding scores as a black box, particularly on the ranking determined by those scores (rather than the scores themselves). Specifically, it is based on the relative ranking of existing KG triples with respect to nonexisting ones, in the target part of the KG. As such, ReliK is agnostic to both (1) the peculiarities of a specific KGE and (2) the KG at hand, and (3) it needs no KGE retraining. Also, (4) ReliK is task-agnostic: indeed, its design principles are so general that it is inherently well-suited to a variety of downstream tasks (see Section 3 for more details, and Section 4 for experimental evidence). Finally, (5) ReliK exhibits the locality property, as its computation and semantics can be tailored to a specific part of the KG. All in all, our ReliK measure is therefore fully compliant with all the requirements discussed above.
Note that ReliK can also be used to evaluate the utility of a KGE for a downstream task even when (for privacy or other reasons) we only have access to the embedding and not to the original KG. ReliK is simple, intuitive, and easy to implement. Despite that, its exact computation requires processing all the possible combinations of entities and relationships, for every single fact of interest. Thus, computing ReliK exactly on large KGs or large target subgraphs may be computationally too heavy. This is a major technical challenge, which we address by devising approximations of ReliK. Our approximations are theoretically solid (Section 3.2) and perform well empirically (Section 4.1). Advanced downstream tasks. Apart from experimenting with ReliK on basic downstream tasks, such as entity/relation prediction or triple prediction, we also showcase ReliK on two advanced downstream tasks, to fully demonstrate its general applicability. The first is query answering, which finds answers to complex logical queries over KGs. The second, rule mining, deduces logic rules, with the purpose of cleaning the KG from spurious facts or expanding the information therein. Rule-mining approaches rely on a statistical confidence measure that depends on the quality of the data itself. By computing the confidence on a ground truth, we show that ReliK identifies more trustworthy rules. Relevance. ReliK is particularly amenable to semantic web applications, for instance by providing a local means to study the semantics associated with a specific entity's embedding [30] or by offering an efficient tool for knowledge completion [50]. Summary and outline. To summarize, our contributions are: \u2022 We fill an important gap in the state of the art in KGE (Section 2) by tackling, for the first time, the problem of assessing the reliability of KGEs (Section 3).
\u2022 We devise ReliK, the first reliability measure for KGEs, which possesses the important characteristics of generality, simplicity, and soundness (Section 3.1). \u2022 We devise efficient, yet theoretically solid approximation techniques for estimating ReliK (Section 3.2). \u2022 We perform extensive experiments to show that ReliK correlates with several common downstream tasks, complies well with the locality property, and admits an efficient and effective approximate computation (Section 4). \u2022 We additionally showcase ReliK in two advanced downstream tasks, question answering and rule mining (Section 4.3).", "main_content": "A knowledge graph (KG) K = \u27e8E, R, F\u27e9 is a triple consisting of a set E of n entities, a set R of relationships, and a set F \u2286 E \u00d7 R \u00d7 E of m facts. A fact is a triple x_hrt = (h, r, t)^1, where h \u2208 E is the head, t \u2208 E is the tail, and r \u2208 R is the relationship. For instance, entities \u201cLeonardo da Vinci\u201d and \u201cMona Lisa,\u201d and relationship \u201cpainted,\u201d form the triple (\u201cLeonardo da Vinci,\u201d \u201cpainted,\u201d \u201cMona Lisa\u201d). The set F of facts forms an edge-labeled graph whose nodes and labeled edges correspond to entities and relationships, respectively. We say a triple x_hrt is positive if it actually exists in the KG (i.e., x_hrt \u2208 F), and negative otherwise (i.e., x_hrt \u2209 F). KGs are also known as knowledge bases [14], information graphs [25], or heterogeneous information networks [34]. Knowledge graph embedding.
A KG embedding (KGE) [2, 24, 45] is a representation of entities and relationships in a d-dimensional (d \u226a |E|) space, typically the real space R^d or the complex space C^d. For instance, TransE [8] represents a triple x_hrt via entity vectors e_h, e_t \u2208 R^d and a relation vector e_r \u2208 R^d, whereas DistMult [49] represents the relationship as a matrix W_r \u2208 R^{d\u00d7d}. Although KGEs can differ (significantly) from one another in their definition, a key aspect common to all KGEs is that they are typically defined based on a so-called embedding scoring function, or simply embedding score. This is a function s : E \u00d7 R \u00d7 E \u2192 R, which quantifies how likely it is that a triple x_hrt \u2208 E \u00d7 R \u00d7 E exists in K, based on the embeddings of its head (h), relationship (r), and tail (t). Specifically, the higher s(x_hrt), the more likely the existence of x_hrt. For instance, TransE\u2019s embedding score s(x_hrt) = \u2212||e_h + e_r \u2212 e_t|| represents the (\u21131 or \u21132) distance between the \u201ctranslation\u201d from h\u2019s embedding to t\u2019s embedding through r\u2019s embedding [8]. KGEs are typically learned through a training process that optimizes (e.g., via gradient descent) a loss function defined based on the embedding score. This training process can be computationally expensive, especially if it has to be repeated for multiple KGEs. KGEs learned this way have been shown to be effective for a number of downstream tasks [24], such as predicting the existence of a triple, but they offer no prior indication of their performance [22]. Moreover, existing benchmarks [2] report global performance on the entire graph rather than local performance on subgraphs.
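The two scoring functions named above can be sketched directly from their definitions. The following is a hedged, pure-Python illustration (vector shapes and function names are ours, not any library's API): TransE scores by negated translation distance, DistMult by a bilinear product with a diagonal relation matrix.

```python
# Illustrative sketch of two embedding scores (not the authors' code).
# Higher score = the triple is deemed more likely to exist.
import math


def transe_score(e_h, e_r, e_t, p=2):
    """TransE: s(x_hrt) = -||e_h + e_r - e_t||_p, with p = 1 or 2."""
    diffs = [h + r - t for h, r, t in zip(e_h, e_r, e_t)]
    if p == 1:
        return -sum(abs(d) for d in diffs)
    return -math.sqrt(sum(d * d for d in diffs))


def distmult_score(e_h, w_r, e_t):
    """DistMult: s(x_hrt) = e_h^T diag(w_r) e_t, i.e., a weighted dot product."""
    return sum(h * w * t for h, w, t in zip(e_h, w_r, e_t))
```

A perfect "translation" (e_h + e_r = e_t) gives TransE its maximal score of 0; everything downstream in the paper treats such a score function s as a black box.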
To this end, in this work we provide an answer to the following key question: Main Question. Is there a measure that provides a prior indication of the performance of a KGE on a specific subgraph? 3 KGE RELIABILITY A good measure of the performance of a KGE should support a number of tasks, from node classification to link prediction, while being unprejudiced towards the data and the KGE model itself. In other words, we would like a measure of reliability that properly assesses how the embedding of a triple would perform on certain tasks and data, without knowing them in advance. More specifically, the main desiderata for a proper KGE reliability measure are as follows. (R1) Embedding-agnostic. It should be independent of the specific KGE method, so as to be a fully general measure. (R2) Learning-free. It should require no further KGE training. This is primarily motivated by efficiency, but also by other reasons, such as privacy or unavailability of the data used for KGE training. ^1We use fact and triple interchangeably throughout the paper. Figure 1: Distribution of the embedding scores for positive (i.e., existing) and negative (i.e., nonexisting) triples on the CodexSmall dataset (cf. Section 4), with the TransE [8] and PairRE [10] KGE methods. Although scores and distributions differ, positive and negative triples are well separated. (R3) Task-agnostic. It should be independent of the specific downstream task. In other words, it should be able to properly anticipate the performance of a KGE in general, for any downstream task. Like (R1), this is required for the generality of the measure. (R4) Locality.
It should be a good predictor of KGE performance locally to a given triple, that is, in a close surrounding neighborhood of that triple. This is important, as a KGE model may be more or less effective in the different parts of the KG it is applied to. Thus, assessing how KGEs perform in different parts of the KG allows for their better use in downstream tasks. 3.1 The Proposed ReliK Measure Design principles. Defining a reliability measure that complies with the aforementioned requirements is an arduous task. First, the various KGE methods optimize different objectives. Second, downstream tasks often combine embeddings in different ways: for instance, head or tail prediction predicts a single vector, whereas triple classification combines head, tail, and relationship vectors. Third, the embedding scores are in general incomparable across KGEs. To fulfill (R1) and (R2), the KGE reliability measure should not engage with the internals of the computation of KGEs. Thus, we need to treat the embeddings as vectors and the embedding score as a black-box function that provides only an indication of the actual existence of a triple. Although the absolute embedding scores are incomparable to one another, we observe that the distributions of positive and negative triples are significantly different (Figure 1). Specifically, we assume that a positive triple ranks higher than a negative one; otherwise, we multiply the score by \u22121. This leads to the following main observation. Observation 1. A KGE reliability measure that uses the position of a triple relative to other triples, via a ranking defined based on the embedding score, fulfills (R1) and (R2). Furthermore, comparing a triple to all other (positive or negative) triples might be ineffective.
For instance, if we assumed that our measure of reliability were solely based on the separation between positive and negative triples, we would conclude from Figure 1 that PairRE [10] performs well on all tasks, which is not the case. This is because the absolute score does not provide an indication of performance. We thus advocate that a local approach that considers triples relative to a neighborhood is more appropriate, and propose a measure that fulfills (R4). The soundness of (R4) is further attested by our experiments in Section 4. Finally, to meet (R3), the KGE reliability measure should not exploit any peculiarity of a downstream task in its definition. Indeed, this is accomplished by our measure, as we show next. Definition. For a triple x_hrt = (h, r, t), we compute the neighbor set N\u2212(h) of all possible negative triples with head h, that is, triples with head h that do not exist in K. Similarly, we compute N\u2212(t) for tail t. We define the head-rank of a triple x_hrt as the position of the triple in the ranking obtained using the score s of a specific KGE, relative to all the negative triples having head h: rank_H(x_hrt) = |{x \u2208 N\u2212(h) : s(x) > s(x_hrt)}| + 1. The tail-rank rank_T(x_hrt) for tail t is defined similarly. Our reliability measure, ReliK, for a triple x_hrt is ultimately defined as the average of the reciprocals of the head- and tail-rank: ReliK(x_hrt) = 1/2 (1/rank_H(x_hrt) + 1/rank_T(x_hrt)). (1) ReliK easily extends from single triples to subgraphs by averaging the reliability of the triples in the subgraph. Specifically, we define the ReliK score of a set S \u2286 F of triples as ReliK(S) = (1/|S|) \u2211_{x_hrt \u2208 S} ReliK(x_hrt). (2) Rationale. ReliK ranges in (0, 1], with higher values corresponding to better reliability.
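Treating the embedding score s as a black box, Eqs. (1) and (2) can be computed directly. The following brute-force Python sketch is ours (illustrative names, not the released implementation) and enumerates the full negative neighborhoods, which is exactly the cost that Section 3.2 later approximates away.

```python
# Direct sketch of Eq. (1) and Eq. (2); `score` is the black-box embedding score.
from itertools import product


def relik(h, r, t, entities, relations, facts, score):
    """ReliK(x_hrt) = 1/2 * (1/rank_H + 1/rank_T), Eq. (1)."""
    s_target = score(h, r, t)
    # N^-(h): head-h corruptions that are not facts; N^-(t) likewise for tail t.
    neg_h = [(h, r2, t2) for r2, t2 in product(relations, entities)
             if (h, r2, t2) not in facts]
    neg_t = [(h2, r2, t) for h2, r2 in product(entities, relations)
             if (h2, r2, t) not in facts]
    rank_h = sum(1 for x in neg_h if score(*x) > s_target) + 1
    rank_t = sum(1 for x in neg_t if score(*x) > s_target) + 1
    return 0.5 * (1.0 / rank_h + 1.0 / rank_t)


def relik_subgraph(triples, entities, relations, facts, score):
    """Eq. (2): average ReliK over a set S of triples."""
    return sum(relik(h, r, t, entities, relations, facts, score)
               for h, r, t in triples) / len(triples)
```

A triple that outranks every negative neighbor on both sides attains the maximal value 1.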
In fact, the lower the head-rank rank_H(x_hrt) or the tail-rank rank_T(x_hrt), the better the ranking of x_hrt induced by the underlying embedding scores, relative to the nonexisting triples in x_hrt\u2019s neighborhood, complies with the actual existence of x_hrt in the KG. It is easy to see that ReliK achieves (R1) and (R2) by relying on the relative ranking rather than the absolute scores. It also fulfills (R3), as it involves no downstream task at all, and (R4), as it is based on the local (i.e., 1-hop) neighborhood of a target triple. Figure 2: Constituents of ReliK on an example KG with entities \u201cLeonardo da Vinci,\u201d \u201cMona Lisa,\u201d \u201cItaly,\u201d \u201cFrance\u201d and relationships \u201cpainted,\u201d \u201cborn in,\u201d \u201clocated in\u201d; the figure highlights the considered edge, the negative triples N\u2212(h) used to compute rank_H, and the negative triples N\u2212(t) used to compute rank_T. Figure 2 provides an example of the computation of ReliK for the triple x_hrt = (\u201cLeonardo da Vinci,\u201d \u201cpainted,\u201d \u201cMona Lisa\u201d). N\u2212(h) is depicted as the red (dashed) edges and N\u2212(t) as the blue (dotted) ones. To compute ReliK on an embedding, we compute the embedding score s of (\u201cLeonardo da Vinci,\u201d \u201cpainted,\u201d \u201cMona Lisa\u201d) and rank it with respect to the triples in N\u2212(h) and N\u2212(t). 3.2 Efficiently Computing ReliK Computing ReliK (Eq. (1)) requires \u03a9(|E| \u00b7 |R|) time, as it needs to scan the entire negative neighborhood of the target triple. For large KGs, repeating this for a (relatively) high number of triples may be computationally too heavy. For this purpose, here we focus on
approximate versions of ReliK, which suitably trade off accuracy and efficiency. The main intuition behind the ReliK approximation is that the precise ranking of the various potential triples is not actually needed. Rather, what matters is just the number of those triples that exhibit a higher embedding score than the target triple. This observation leads to two approaches. In both of them, we sample a random subset of negative triples. In the first approach, we compute ReliK_LB, a lower bound on ReliK, by counting the negative triples in the sample that have a lower embedding score than the target triple and pessimistically assuming that all the triples not in the sample have higher scores. In the second approach, we estimate ReliK_Apx by evaluating the fraction of triples in the sample that have a higher score than the triple under consideration, and then scaling this fraction to the total number of negative triples. Next, we provide the details of these two approaches. Let S_H be a random subset of k elements selected without replacement, independently and uniformly at random, from the negative neighborhood N\u2212(h) of the head h of a triple x_hrt. The size |S_H| trades off efficiency against accuracy of the estimator, and it may be defined based on the size of N\u2212(h). Define also rank_H^S(x_hrt) = |{x \u2208 S_H : s(x) > s(x_hrt)}| + 1 to be the rank of the score s(x_hrt) that the KGE assigns to x_hrt, among all the triples in the sample. We similarly compute S_T and rank_T^S for the tail\u2019s neighborhood N\u2212(t). ReliK_LB estimator. The sampled triples with score lower than s(x_hrt) are no more than all such negative triples, that is, |S_H| \u2212 rank_H^S(x_hrt) \u2264 |N\u2212(h)| \u2212 rank_H(x_hrt), or, equivalently, rank_H(x_hrt) \u2264 rank_H^S(x_hrt) + |N\u2212(h)| \u2212 |S_H|.
(3) Analogously, the same observation holds for the tail sample S_T: rank_T(x_hrt) \u2264 rank_T^S(x_hrt) + |N\u2212(t)| \u2212 |S_T|. (4) We therefore define our ReliK_LB estimator as ReliK_LB(x_hrt) = 1/2 ( 1/(rank_H^S(x_hrt) + |N\u2212(h)| \u2212 |S_H|) + 1/(rank_T^S(x_hrt) + |N\u2212(t)| \u2212 |S_T|) ). (5) From Eqs. (3) and (4), it holds that ReliK_LB(x_hrt) \u2264 ReliK(x_hrt). ReliK_Apx estimator. As for our second estimator, we define it as ReliK_Apx(x_hrt) = 1/2 ( 1/(rank_H^S(x_hrt) \u00b7 |N\u2212(h)|/|S_H|) + 1/(rank_T^S(x_hrt) \u00b7 |N\u2212(t)|/|S_T|) ). (6) In words, we simply scale up the rank induced by the sample to the entire set of negative triples. Theoretical characterization of ReliK_Apx. Note that by Jensen\u2019s inequality [23], we have E[ 1/(rank_H^S(x_hrt) \u00b7 |N\u2212(h)|/|S_H|) ] \u2265 1/( E[rank_H^S(x_hrt)] \u00b7 |N\u2212(h)|/|S_H| ) = 1/rank_H(x_hrt), where E[\u00b7] denotes mathematical expectation. This holds because E[rank_H^S(x_hrt)] = |S_H| \u00b7 rank_H(x_hrt)/|N\u2212(h)|, given that for each element x \u2208 S_H, the probability of having a score s(x) > s(x_hrt) is rank_H(x_hrt)/|N\u2212(h)|. We argue similarly for the tail and, therefore, we finally obtain E[ReliK_Apx(x_hrt)] \u2265 ReliK(x_hrt). In other words, ReliK_Apx is, in expectation, an upper bound on ReliK. Quality of the ReliK_Apx approximation. Applying a Hoeffding bound [18], we obtain that, with high probability, the quality of the approximation improves exponentially as the size of the sample increases. Algorithm 1 Compute ReliK_LB or ReliK_Apx. Input: KG K = \u27e8E, R, F\u27e9, triple x_hrt = (h, r, t) \u2208 F, embedding score function s : E \u00d7 R \u00d7 E \u2192 R, sample size k \u2208 N. Output: ReliK_LB(x_hrt) (Eq.
(5)) or ReliK_Apx(x_hrt) (Eq. (6)).
1: S_H \u2190 sample k triples from N\u2212(h); S_T \u2190 sample k triples from N\u2212(t)
2: rank_H \u2190 1; rank_T \u2190 1
3: for x_h'r't' \u2208 S_H \u222a S_T do
4:   if s(x_hrt) < s(x_h'r't') then
5:     if h' = h then
6:       rank_H \u2190 rank_H + 1
7:     if t' = t then
8:       rank_T \u2190 rank_T + 1
9: return 1/2 ( 1/(rank_H + |N\u2212(h)| \u2212 |S_H|) + 1/(rank_T + |N\u2212(t)| \u2212 |S_T|) ) for ReliK_LB, or 1/2 ( 1/(rank_H \u00b7 |N\u2212(h)|/|S_H|) + 1/(rank_T \u00b7 |N\u2212(t)|/|S_T|) ) for ReliK_Apx
Algorithms. Algorithm 1 shows the steps to compute ReliK_LB and ReliK_Apx. Initially, in Line 1, we sample, uniformly at random, k negative triples from the head neighborhood and k from the tail neighborhood. Note that we can save computation time by first filtering the triples in S_H \u222a S_T by score (Line 4), that is, by considering only those with a score higher than that of the input triple x_hrt, and then checking whether a triple in S_H \u222a S_T has either the head (Line 5) or the tail (Line 7) in common with x_hrt, so as to update the corresponding rank. Time complexity. Algorithm 1 runs in O(k) time. This corresponds to the time needed for the sampling step in Line 1, which can easily be accomplished linearly in the number of samples, without materializing the negative neighborhoods. The sample size k trades off accuracy against efficiency of the estimation. Section 4.1 shows that the ReliK_Apx approximation with a 20% sample size is 2.5\u00d7 faster than ReliK, with only 0.002 mean squared error (MSE). As such, ReliK_Apx is our method of reference in the experiments. 4 EXPERIMENTAL EVALUATION We evaluate ReliK on four downstream tasks, six embeddings, and six datasets.
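Algorithm 1 can be sketched compactly in Python. The version below is our own illustrative rendering (names and the rejection-sampling helper are assumptions, not the released code); it samples negatives without materializing the neighborhoods and returns both estimators of Eqs. (5) and (6).

```python
# Illustrative sketch of Algorithm 1 (not the authors' released code).
# Assumes k <= |N^-(h)| and k <= |N^-(t)|.
import random


def relik_estimates(h, r, t, entities, relations, facts, score, k, seed=0):
    """Return (ReliK_LB, ReliK_Apx) from k sampled negatives per neighborhood."""
    rng = random.Random(seed)
    ents, rels = sorted(entities), sorted(relations)
    total = len(ents) * len(rels)
    # |N^-(h)| and |N^-(t)|: all head/tail corruptions minus the existing facts.
    n_h = total - sum(1 for f in facts if f[0] == h)
    n_t = total - sum(1 for f in facts if f[2] == t)

    def sample(head_side):
        # Rejection-sample k distinct negatives without materializing N^-.
        out = set()
        while len(out) < k:
            r2, e2 = rng.choice(rels), rng.choice(ents)
            cand = (h, r2, e2) if head_side else (e2, r2, t)
            if cand not in facts:
                out.add(cand)
        return out

    s_target = score(h, r, t)
    rank_h = sum(1 for x in sample(True) if score(*x) > s_target) + 1
    rank_t = sum(1 for x in sample(False) if score(*x) > s_target) + 1
    lb = 0.5 * (1 / (rank_h + n_h - k) + 1 / (rank_t + n_t - k))   # Eq. (5)
    apx = 0.5 * (1 / (rank_h * n_h / k) + 1 / (rank_t * n_t / k))  # Eq. (6)
    return lb, apx
```

When the sample covers the whole negative neighborhood (k = |N^-|), both estimators coincide with the exact ReliK, matching the bounds above.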
We report the correlation of ReliK with the performance of ranking tasks (Section 4.2) and show that ReliK can identify correct query answers as well as mine rules with higher confidence than existing methods (Section 4.3).

Table 1: Characteristics of the considered embeddings.
method | space | entity | relation | score
TransE [8] | R | O(n) | O(n) | \u2212||e_h + e_r \u2212 e_t||_p
DistMult [49] | R | O(n) | O(n) | e_h^T diag(W_r) e_t
RotatE [36] | C | O(n) | O(n) | \u2212||e_h \u25e6 e_r \u2212 e_t||
PairRE [10] | R | O(n) | O(n) | \u2212||e_h \u25e6 e_r^H \u2212 e_t \u25e6 e_r^T||
ComplEx [41] | C | O(n) | O(n) | Re(\u27e8e_r, e_h, \u0113_t\u27e9)
ConvE [15] | R | O(n) | O(n) | f(vec(f([e_h; e_r] \u2217 \u03c9))W) e_t
TuckER [4] | R | O(n) | O(n) | W \u00d7_1 e_h \u00d7_2 e_r \u00d7_3 e_t
CompGCN [44] | R | O(n) | O(n) | any KGE score

Embeddings. We include six established KGE methods, representative of the four major embedding families (see Section 5). Table 1 shows the embeddings in our evaluation, the embedding space, and the embedding score function. A detailed description of the embeddings is in Section A.1 in the appendix.

Table 2: Characteristics of the KGs; number of entities |E|, number of relationships |R|, number of facts |F|, and task.
dataset | |E| | |R| | |F| | task
Countries | 271 | 2 | 1,158 | Approximation
FB15k237 | 14,505 | 237 | 310,079 | Ranking / Classification / Querying
Codex-S | 2,034 | 42 | 36,543 | Ranking / Classification
Codex-M | 17,050 | 51 | 206,205 | Ranking / Classification
Codex-L | 77,951 | 69 | 612,437 | Ranking / Classification
YAGO2 | 834,750 | 36 | 948,358 | Rule Mining

Datasets. We perform experiments on six KGs with different characteristics, shown in Table 2. \u2022 Countries [9] is a small KG created from geographical locations, where entities are continents, subcontinents, and countries, and edges represent containment or geographical neighborhood. \u2022 FB15k237 [40] is a sample of the Freebase KG [7] covering encyclopedic knowledge, consisting of 237 relations, 15k entities, and 310k facts.
FB15k237 is a polished and corrected version of FB15k [8], constructed to circumvent data leakage. The dataset contains Freebase entities with more than 100 mentions and with a reference in the Wikilinks database. \u2022 Codex [32] is a collection of three datasets of increasing size, Codex-S (2k entities, 36k triples), Codex-M (17k entities, 200k facts), and Codex-L (78k entities, 610k facts), extracted from Wikidata and Wikipedia. The Codex collection explicitly encourages entity and content diversity to overcome the limitations of FB15k. \u2022 YAGO [35] is an open-source KG automatically extracted from Wikidata with an additional ontology from schema.org. We use YAGO2 [19], which comprises 834k entities and 948k facts. Experimental setup. We implement our approximate and exact ReliK in Python v3.9.13.^{2,3} We train the embeddings using the Pykeen library v1.10.1,^4 with default parameters besides the embedding dimension dim = 50 and the sLCWA training loop. ^2Code available at: https://github.com/AU-DIS/ReliK ^3Also as artifact: https://doi.org/10.5281/zenodo.10656796 We run our experiments on a Linux Ubuntu 4.15.0-202 machine with 48-core Intel(R) Xeon(R) Silver 4214 @ 2.20GHz, 128GB RAM, and an NVIDIA GeForce RTX 2080 Ti GPU. We report the average of 5 experiments using 5-fold cross-validation with an 80-10-10 split. Summary of experiments. We evaluate ReliK on several downstream tasks and setups. We first show in Section 4.1 that our approximate ReliK_Apx outperforms the simpler ReliK_LB lower-bound approximation and achieves a good tradeoff between quality and speed. We then show in Section 4.2 that ReliK correlates with common ranking tasks, such as tail and relation prediction, as well as with classification tasks, and we validate the claim that ReliK is a local measure. In Section 4.3 we present the more complex tasks of query answering and mining logic rules on KGs.
To summarize, we evaluate ReliK on the following downstream tasks: \u2022 (T1) Ranking tasks: tail and relation prediction \u2022 (T2) Classification task: triple classification \u2022 (T3) Query answering task \u2022 (T4) Rule mining application Figure 3: Comparing ReliK_Apx and ReliK_LB with the exact ReliK in time (left) and mean squared error (right) vs. the sample-to-data size ratio, on the Countries dataset with TransE embeddings. 4.1 Approximation Quality We start by showing that ReliK_Apx runs as fast as ReliK_LB while being more accurate. We report time and mean squared error (MSE) with respect to the exact ReliK measure for ReliK_Apx and ReliK_LB. Computing ReliK is infeasible on datasets with more than a few hundred entities. Hence, we limit our analysis to the entire Countries dataset, for which we can compute ReliK exactly. Figure 3 reports the results in terms of seconds and MSE at increasing sample size k = |S|. ReliK_LB and ReliK_Apx incur the same running time, because both require sampling k negative triples and computing the score on the sample. On the other hand, when the sample size exceeds 80% of all the negative triples, the sampling time dominates the computation of ReliK_LB and ReliK_Apx, and the exact ReliK becomes faster. ReliK_Apx rapidly reduces the error and stabilizes at around 40% of the sample size, whereas ReliK_LB exhibits a steadily larger error than ReliK_Apx. The current results show the effectiveness of the approach in an unparallelized setting; yet, we note that the sampling process can easily be parallelized by assigning each sample to a separate thread. In terms of quality, ReliK_Apx exhibits minimal MSE (<0.005) with as little as 10% of the sample size, while being 3 times faster than ReliK.
Thus, even though the exact ReliK is feasible for small datasets or subgraphs, ReliK_Apx offers a good approximation with a significant speedup. In the next experiments, we set k to 10% of all the negative triples and report results for ReliK_Apx. ^4https://pykeen.readthedocs.io/en/stable/ 4.2 Common Downstream Tasks We test ReliK on its ability to anticipate the results of common tasks for KGEs [24, 45]. We measure the statistical significance of the Pearson correlation for two ranking tasks, tail and relation prediction, and for the triple classification task. To evaluate ReliK on different areas of the graph and different graph topologies, we sample random subgraphs of Codex-S with 60 nodes, by initially selecting a starting node uniformly at random and then including nodes and edges via a random walk with restart [39] with restart probability 1 \u2212 \u03b1 = 0.2, until the subgraph comprises 60 nodes. For Codex-M and Codex-L we use size 100, and for FB15k237 we use 200 nodes. We report the average ReliK over 100 random subgraphs on the Codex-S, Codex-M, Codex-L, and FB15k237 datasets.
Table 3: Pearson correlation and statistical significance of ReliK for tail prediction, relation prediction, and triple classification; entries with p-value > 0.05 or with inverse correlation (marked in red in the original) are less statistically significant.

KGE | Tail (MRR) Pearson / p-value | Relation (MRR) Pearson / p-value | Classific. (Acc.) Pearson / p-value
Codex-S:
TransE | 0.23 / 0.02 | 0.93 / 2.17e-44 | 0.37 / 1.42e-4
DistMult | 0.16 / 0.12 | 0.85 / 2.03e-29 | 0.69 / 2.21e-15
RotatE | 0.35 / 0.0003 | 0.89 / 7.92e-37 | -0.24 / 0.02
PairRE | 0.86 / 7.29e-31 | 0.91 / 2.36e-39 | 0.09 / 0.37
ComplEx | 0.14 / 0.17 | 0.63 / 2.22e-12 | -0.06 / 0.57
ConvE | -0.396 / 6.61e-5 | 0.89 / 4.92e-37 | 0.10 / 0.30
TuckER | -0.15 / 0.13 | 0.89 / 5.71e-37 | 0.07 / 0.46
CompGCN | 0.52 / 3.39e-08 | 0.77 / 6.09e-21 | 0.01 / 0.92
Codex-M:
TransE | 0.90 / 2.70e-37 | 0.97 / 9.07e-63 | 0.53 / 1.93e-08
DistMult | 0.22 / 0.04 | 0.89 / 8.37e-32 | 0.60 / 5.12e-10
RotatE | - | - | -
PairRE | 0.06 / 0.58 | 0.98 / 1.05e-74 | -0.12 / 0.23
ComplEx | -0.33 / 8.92e-4 | 0.36 / 2.01e-4 | 0.15 / 0.13
ConvE | -0.22 / 0.03 | 0.99 / 3.86e-96 | -0.02 / 0.84
Codex-L:
TransE | 0.83 / 1.13e-26 | 0.97 / 3.812e-64 | 0.63 / 2.54e-12
DistMult | 0.49 / 2.10e-07 | 0.78 / 4.68e-22 | 0.60 / 3.74e-11
RotatE | - | - | -
PairRE | -0.04 / 0.68 | 0.95 / 3.33e-52 | -4.47e-4 / 0.99
ComplEx | 0.82 / 1.03e-25 | 0.91 / 3.96e-39 | 0.06 / 0.57
ConvE | 0.59 / 4.26e-11 | -0.07 / 0.48 | 0.31 / 1.57e-3
FB15k237:
TransE | 0.24 / 0.02 | 0.86 / 2.83e-30 | 0.34 / 5.79e-4
DistMult | -0.05 / 0.65 | 0.64 / 5.57e-13 | 0.39 / 5.58e-05
RotatE | - | - | -
PairRE | 0.80 / 1.51e-23 | 0.65 / 1.74e-13 | 0.08 / 0.44
ComplEx | 0.20 / 0.05 | 0.88 / 3.53e-34 | 0.14 / 0.18
ConvE | 0.09 / 0.37 | 0.85 / 4.47e-30 | 0.01 / 0.93

Ranking tasks (T1). In the first experiments, we measure the Pearson correlation between ReliK and the performance on ranking tasks, measured by mean reciprocal rank (MRR) [12].
The first task, tail prediction [8, 10, 36], assesses the ability of the embedding to predict the tail given the head and the relation, thus answering the query (h, r, ?) where the tail is unknown. The second task, relation prediction, assesses the ability of the embedding to predict the undisclosed relation of a triple (h, ?, t). The common measure for tail and relation prediction is MRR, which indicates how close to the top the score ranks the correct tail (or relation). Consistently with previous approaches [8, 10, 36], we employ the filtered approach, in which we consider for evaluation only negative triples that do not appear in the train, test, or validation set. Table 3 reports the correlations alongside the statistical significance in terms of the p-value. We mark in red high p-values (> 0.05), which suggest no correlation, as well as Pearson values that indicate inverse correlation. Generally, ReliK exhibits significant correlation across embeddings and tasks. Notably, even though ReliK (see Eq. (1)) does not explicitly target tail or head rankings, as it includes both, we observe significant correlation on tail prediction for most embeddings and datasets. Because of the considerable training time, we only report results for RotatE on Codex-S. We complement our analysis with correlation plots in Figure 4 and Figure 10 in the appendix for Codex-S; in most cases we observe a clear correlation. Comparing the actual results of the various tasks, it is also clear that in most cases where we do not observe correlation, the results are too close to distinguish; for example, ComplEx yields only results close to 0. Such results indicate that the embedding method needs additional training. Besides, on the same task, results vary across subgraphs, vindicating the effect of locality on embedding performance. ReliK correctly captures local embedding characteristics.
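A compact sketch of the evaluation quantities above, with illustrative helper names of our own (the p-values in Table 3 would in practice come from a library routine such as scipy.stats.pearsonr):

```python
def filtered_rank(candidate_scores, correct_id, known_ids):
    """Rank of the correct candidate under the filtered protocol.

    candidate_scores: dict id -> embedding score (higher = more plausible).
    known_ids: candidates forming true triples in train/valid/test (other
    than the correct one); they are excluded before ranking.
    """
    target = candidate_scores[correct_id]
    rank = 1
    for cid, s in candidate_scores.items():
        if cid != correct_id and cid not in known_ids and s > target:
            rank += 1
    return rank

def mrr(ranks):
    """Mean reciprocal rank over a list of filtered ranks."""
    return sum(1.0 / r for r in ranks) / len(ranks)

def pearson_r(xs, ys):
    """Sample Pearson correlation coefficient between two sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)
```

Per subgraph, one would compute the average ReliK and the task MRR, then correlate the two lists with `pearson_r`.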
Figure 4: ReliK correlation with MRR on tail prediction (left column) and relation prediction (right column); each point is the ReliK score for a subgraph with 60 nodes on Codex-S.

Classification task (T2). In this experiment, we test the correlation between ReliK and the accuracy of a threshold-based classifier on the embeddings. The classifier predicts the presence of a triple in the KG if the embedding score is larger than a threshold, a common scenario for link prediction [24]. Table 3 (right column) reports the correlations and their significance for all datasets, and Figure 5 shows the detailed analysis on Codex-S for two cases. At close inspection, we observe that in cases of unclear correlation, such as with PairRE, the respective classification results are too close to observe a difference. Those cases notwithstanding, ReliK is significantly correlated with accuracy. This result confirms that ReliK can serve as a proxy for the quality of complex models trained on embeddings. Plots for the other embeddings can be found in Section A.2 of the appendix.

Figure 5: ReliK correlation with accuracy on triple classification; each point represents the ReliK score for a subgraph with 60 nodes on Codex-S.

Tuning subgraph size. Next, we analyze how ReliK correlates with the tasks presented in Section 4.2 on subgraphs of varying size with the TransE embedding.
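The threshold-based classifier of (T2) reduces to a few lines; a hypothetical sketch (in practice the threshold would be selected on a validation split):

```python
def classify(score, threshold):
    """Predict that a triple exists iff its embedding score exceeds the threshold."""
    return score > threshold

def accuracy(pos_scores, neg_scores, threshold):
    """Accuracy over positive (existing) and negative (non-existing) triples."""
    hits = sum(classify(s, threshold) for s in pos_scores)
    hits += sum(not classify(s, threshold) for s in neg_scores)
    return hits / (len(pos_scores) + len(neg_scores))
```

The experiment then correlates this accuracy, per subgraph, with the subgraph's average ReliK score.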
Figure 6 reports the correlation values for all three tasks, including only values with a p-value below 0.05. We observe that ReliK's correlation generally increases with subgraphs of up to 100 nodes on Codex-S. Beyond that point, we note unstable behavior in all tasks. This is consistent with the assumption that ReliK is a measure capturing local reliability. To strike a balance between quality and time, we test on subgraphs with 60 nodes for Codex-S in all experiments. Yet, as tasks are of different nature, the subgraph size can be tuned in accordance with the task to provide more accurate results.

Figure 6: Pearson correlation on tail and relation prediction and triple classification vs. subgraph size on Codex-S.

4.3 Complex Downstream Tasks

We now turn our attention to complex downstream tasks.

Query answering (T3). We show how ReliK can improve query-answering tasks. Complex logical queries on KGs come in different query structures. Among the query structures described in recent work [3, 31], we focus on queries that chain multiple predictions or take the intersection of several predictions. We keep the naming convention introduced by Ren and Leskovec [31] and evaluate a selection of 1000 queries per type (1p, 2p, 3p, 2i, 3i) from their data on the FB15k237 graph.5 Queries of type p are paths of 1 to 3 hops from a given entity with fixed relation labels that point to a solution, whereas queries of type i are the intersection of 2 or 3 predictions pointing towards the same entity. We evaluate ReliK on the ability to detect whether an instance of an answer is true or false. We compute ReliK on TransE embeddings trained on the entire FB15k237. Figure 7 shows the average ReliK scores for positive and negative answers.

5 http://snap.stanford.edu/betae/
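To make the query types concrete: a 2p query asks for entities reachable from an anchor entity by two fixed relations. A minimal beam-search sketch over an embedding score function (illustrative only; the query-answering machinery of [31] is considerably more involved, and `score_fn` is our hypothetical stand-in for an embedding score):

```python
def answer_path_query(entities, score_fn, head, relations, beam=5):
    """Beam-search candidate answers of a path query (h, r1, ?, r2, ?, ...).

    score_fn(h, r, t): embedding plausibility score of triple (h, r, t).
    Returns (accumulated score, candidate answer) pairs, best first.
    """
    frontier = [(0.0, head)]
    for r in relations:
        nxt = []
        for acc, e in frontier:
            for t in entities:
                nxt.append((acc + score_fn(e, r, t), t))
        nxt.sort(key=lambda pair: -pair[0])  # keep the most plausible hops
        frontier = nxt[:beam]
    return frontier
```

An intersection (i) query would instead score each candidate under every branch and sum the branch scores.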
ReliK clearly discriminates between positive and negative instances, often by a large margin.

Figure 7: Comparison between positive and negative instances for query answering on FB15k237 (left) and rule mining on Yago2 with ReliK (middle) and RR (right).

Rule mining (T4). ReliK effectively improves on the rule mining task as well. Rule mining methods [16, 17, 28] automatically retrieve logic rules over KGs that satisfy a predefined minimum confidence. A logic rule is an assertion A => B, which states that B follows from A. For instance, a rule could state that all presidents of a country are citizens of that country. An instance of a rule is a triple matching B, given that A is true. Logic rules are typically harvested with slow exhaustive algorithms similar to the Apriori algorithm for association rules [1]. We present two experiments. In the first, we show that ReliK can discriminate between true and false instances. In the second, we show that ReliK can retrieve all the rules by considering only subgraphs with a high ReliK score.

Detecting true instances. To assess performance on downstream task (T4), we compare ReliK with the reciprocal rank (RR) of a combination of the tail and the relation embeddings on the ability to detect whether an instance of a rule is true or false. This task is particularly important to quantify the real confidence of a rule [26]. To this end, we use a dataset6 comprising 23 324 manually annotated instances over 26 rules extracted from YAGO2 using the AMIE [17] and RudiK [28] methods. We compute ReliK on TransE embeddings trained on the entire YAGO2. Figure 7 shows the average ReliK scores for positive and negative instances.
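The RR baseline ranks the correct candidate among all alternatives under the embedding score; a sketch of the two quantities compared in Figure 7 (helper names are ours):

```python
def reciprocal_rank(candidate_scores, correct_id):
    """1 / rank of the correct candidate under the embedding score."""
    target = candidate_scores[correct_id]
    better = sum(1 for cid, s in candidate_scores.items()
                 if cid != correct_id and s > target)
    return 1.0 / (1 + better)

def mean_gap(pos_values, neg_values):
    """Average measure value on positive instances minus that on negatives;
    a large positive gap means the measure separates the two classes."""
    return sum(pos_values) / len(pos_values) - sum(neg_values) / len(neg_values)
```

A measure that confounds the classes, as RR does here, yields a gap near zero.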
ReliK discriminates between positive and negative instances, often by a large margin, whereas RR often confounds positive and negative instances.

Rule mining on subgraphs. In this experiment, we show that ReliK identifies the subgraphs with high-confidence rules. To this end, we mine rules with AMIE [16, 17] on Codex-S, and compare with densest subgraphs of increasing size. We construct subgraphs of increasing size by first mining the densest subgraph using Charikar's greedy algorithm [11] on the weighted graph obtained by assigning to each edge its ReliK score; we then remove the densest subgraph and repeat the algorithm on the remaining edges, until no edge remains. At each iteration, we mine AMIE rules and compute the standard confidence, as well as the confidence under the partial completeness assumption (PCA) [16, 17], that is, the assumption that the database includes either all or none of the facts about each head entity h for any relationship r. In Figure 8 we compare our method with a baseline that extracts random subgraphs of the same size as those computed by our method. The densest subgraph located by ReliK finds more rules with higher confidence on as little as 25% of the KG. On the other hand, a random subgraph does not identify any meaningful subgraph. This indicates that ReliK is an effective tool for retrieving rules in large graphs. A further analysis in Figure 9 shows that by exploiting ReliK we can compute rules in 75% of the time. We emphasize, though, that because rule mining incurs exponential time, the difference between mining rules on the complete graph and on the ReliK subgraph will be more pronounced on graphs larger than Codex-S.

6 https://hpi.de/naumann/projects/repeatability/datasets/colt-dataset.html
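The densest-subgraph step can be sketched with Charikar's greedy peeling heuristic (a 2-approximation for weighted densest subgraph) on the ReliK-weighted graph; a simplified quadratic-time version, assuming edges given as weighted pairs (a heap-based variant is faster):

```python
def charikar_densest(edges):
    """Greedy peeling: repeatedly drop the node of minimum weighted degree,
    returning the intermediate node set with maximum density
    (total edge weight / number of nodes).

    edges: list of (u, v, w) triples, where w is the edge's ReliK score.
    """
    adj = {}
    for u, v, w in edges:
        adj.setdefault(u, []).append((v, w))
        adj.setdefault(v, []).append((u, w))
    remaining = set(adj)
    deg = {n: sum(w for _, w in adj[n]) for n in remaining}
    total = sum(w for _, _, w in edges)
    best_density, best_set = float('-inf'), set()
    while remaining:
        density = total / len(remaining)
        if density > best_density:
            best_density, best_set = density, set(remaining)
        victim = min(remaining, key=lambda n: deg[n])
        remaining.discard(victim)
        for nb, w in adj[victim]:
            if nb in remaining:
                deg[nb] -= w
                total -= w
    return best_set, best_density
```

The iterated procedure in the text then removes the returned subgraph's edges and reapplies the routine until no edge remains.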
As a complement, the table in Figure 9 reports the number of rules mined in the entire graph that are discovered by ReliK in the subgraph. It is clear that on 26% of the graph, ReliK discovers 1/3 of the rules, as opposed to only 1/6 discovered by random subgraphs.

Figure 8: Standard and PCA confidence [17] vs. subgraph size for AMIE rules on Codex-S; densest subgraph according to ReliK. PCA confidence normalizes the support of a rule only by the number of facts which we know to be true or consider to be false in a KG assumed to be partially complete [16, 17].

Figure 9: Time to compute AMIE rules vs. subgraph size (left) and number of discovered rules (right) on Codex-S.

Subgraph size (%)   Rules (densest)   Rules (random)
0.06                 11                11.4
0.26                 71                38.6
0.53                160               193.0
0.56                175               198.2
0.58                175               199.0
0.58                174               200.8
0.73                205               207.4
0.74                206               208.8
0.79                222               214.4
1.00                228               228.0

5 RELATED WORK

Knowledge graph embeddings are commonly used to detect missing triples, correct errors, or answer questions [24, 45]. A number of KGEs have appeared in the last few years. The distinctive features among embeddings are the scoring function and the optimization loss. Translational embeddings in the TransE [8] family and the recent PairRE [10] assume that the relationship performs a translation from the head to the tail. Semantic embeddings, such as DistMult [49] or HolE [27], interpret the relationship as a multiplicative operator. Complex embeddings, such as RotatE [36] and ComplEx [41], use complex-valued vectors and operations in the complex plane. Neural-network embeddings, such as ConvE [15], perform sequences of nonlinear operations. Whereas each embedding defines a specific score, ReliK is agnostic to the choice of embedding.
It is still an open question how well embeddings capture the semantics included in a KG [22]. Our work progresses in that regard by offering a simple local measure to quantify how faithfully an embedding represents the information in the data.

Embedding calibration. An orthogonal direction to ours is embedding calibration [33, 37]. Calibration methods provide effective ways to improve existing embeddings on various tasks, by altering the embedding vectors in subspaces with low accuracy [33], by reweighing the output probabilities in the respective tasks [37], or by matrix factorization [13]. On the contrary, ReliK alters neither the embeddings nor the prediction scores, but provides insights on the performance of the embeddings in specific subgraphs.

Evaluation of embeddings. ReliK bears an interesting connection with ranking-based quality measures, in particular with the mean reciprocal rank (MRR) and HITS@k for head, tail, and relation prediction [6, 8, 10, 33, 36, 45]. For a triple (?, r, t) with unknown head, MRR is the average of the reciprocals of the ranks of the correct heads in the KG, given the relationship r and tail t. As such, ReliK can be considered a generalization of MRR, namely the MRR for triples of the kind (?, ?, t) and (h, ?, ?). As the triples (?, r, t) are included in (?, ?, t), ReliK includes more information than MRR. Moreover, whereas MRR and HITS@k provide a global indication of performance, ReliK is suitable for local analysis. Further, current global measures have recently been shown to be biased towards high-degree nodes [38].

6 CONCLUSION

Aiming to develop a measure that prognosticates the performance of a knowledge graph embedding on a specific subgraph, we introduced ReliK, a KGE reliability measure agnostic to the choice of the embedding, the dataset, and the task.
To allow for efficient computation, we proposed a sampling-based approximation, which we showed to achieve results similar to the exact ReliK in less than half the time. Our experiments confirm that ReliK anticipates the performance on a number of common and complex downstream tasks for KGEs. In particular, apart from correlating with accuracy in prediction and classification tasks, ReliK discerns the right answers to complex logical queries and guides the mining of high-confidence rules on subgraphs that are dense in terms of ReliK score. These results suggest that ReliK may be used in other domains, as well as a debugging tool for KGEs. In the future, we aim to design reliability measures for structure-based graph embeddings [42] and methods for authenticating [29] embedding-based computations.

Ethical use of data. The measurements performed in this study are all based on datasets that are publicly available for research purposes. We cite the original sources.

ACKNOWLEDGMENTS

M. Egger is supported by Horizon Europe and Innovation Fund Denmark grant E115712-AAVanguard and the Danish Council for Independent Research grant DFF-1051-00062B. I. Bordino and F. Gullo are supported by Project ECS 0000024 Rome Technopole CUP B83C22002820006, "PNRR Missione 4 Componente 2 Investimento 1.5," funded by European Commission NextGenerationEU. W. Ma is supported by the China Scholarship Council grant 202110320012. A. Anagnostopoulos is supported by the ERC Advanced Grant 788893 AMDROMA, the EC H2020 RIA project SoBigData++ (871042), the PNRR MUR project PE0000013-FAIR, the PNRR MUR project IR0000013-SoBigData.it, and the MUR PRIN project 2022EKNE5K "Learning in Markets and Society."