"main_content": "In this section, we give a concise overview of the plethora of TKG forecasting methods that appeared in recent years. arXiv:2404.16726v2 [cs.LG] 29 Apr 2024 Deep Graph Networks (DGNs) Several models in this category leverage message-passing architectures [Scarselli et al., 2009; Micheli, 2009] along with sequential approaches to integrate structural and sequential information for TKG forecasting. RE-Net adopts an autoregressive architecture, learning temporal dependencies from a sequence of graphs [Jin et al., 2020]. RE-GCN combines a convolutional DGN with a sequential neural network and introduces a static graph constraint to consider additional information like entity types [Li et al., 2021b]. xERTE employs temporal relational attention mechanisms to extract query-relevant subgraphs [Han et al., 2021a]. TANGO utilizes neural ordinary differential equations and DGNs to model temporal sequences and capture structural information [Han et al., 2021b]. CEN integrates a convolutional neural network capable of handling evolutional patterns in an online setting, adapting to changes over time [Li et al., 2022b]. At last, RETIA generates twin hyperrelation subgraphs and aggregates adjacent entities and relations using a graph convolutional network [Liu et al., 2023a]. Reinforcement Learning (RL) Methods in this category combine reinforcement learning with temporal reasoning for TKG forecasting. CluSTeR employs a two-step process, utilizing a RL agent to induce clue paths and a DGN for temporal reasoning [Li et al., 2021a]. Also, TimeTraveler leverages RL based on temporal paths, using dynamic embeddings of the queries, the path history, and the candidate actions to sample actions, and a time-shaped reward [Sun et al., 2021]. Rule-based Rule-based approaches focus on learning temporal logic rules. TLogic learns these rules via temporal random walks [Liu et al., 2022]. TRKG extends TLogic by introducing new rule types, including acyclic rules and rules with relaxed time constraints [Kiran et al., 2023]. ALREIR combines embedding-based and logical rule-based methods, capturing deep causal logic by learning rule embeddings [Mei et al., 2022]. LogE-Net combines logical rules with REGCN, using them in a preprocessing step for assisting reasoning [Liu et al., 2023b]. At last, TECHS incorporates a temporal graph encoder and a logical decoder for differentiable rule learning and reasoning [Lin et al., 2023]. Others There are additional approaches with mixed contributions that cannot be immediately placed in the above categories. CyGNet predicts future facts based on historical appearances, employing a \u201dcopy\u201d and \u201dgeneration\u201d mode [Zhu et al., 2021]. TiRGN employs a local encoder for evolutionary representations in adjacent timestamps and a global encoder to collect repeated facts [Li et al., 2022a]. CENET distinguishes historical and non-historical dependencies through contrastive learning and a mask-based inference process [Xu et al., 2023]. Finally, L2TKG utilizes a structural encoder and latent relation learning module to mine and exploit intraand inter-time latent relations [Zhang et al., 2023]. 
3 Approach

This section introduces several baselines: we start with the Strict Recurrency Baseline, before moving to its "relaxed" version, the Relaxed Recurrency Baseline, and, ultimately, to a combination of the two, the so-called Combined Recurrency Baseline. Before we introduce these baselines, we give a formal definition of the notion of a Temporal Knowledge Graph and provide a running example to illustrate our approach.

(marta, playsFor, vasco-da-gamah, 1)
(marta, playsFor, vasco-da-gamah, 2)
(marta, playsFor, santa-cruz, 3)
(marta, playsFor, santa-cruz, 4)
(marta, playsFor, umea-ik, 5)
(marta, playsFor, umea-ik, 6)
(marta, playsFor, umea-ik, 7)
(marta, playsFor, umea-ik, 8)
(marta, playsFor, los-angeles-sol, 9)

Figure 1: A (slightly simplified) listing of the clubs that Marta Vieira da Silva, known as Marta, played for from 2001 to 2009.

3.1 Preliminaries

A Temporal Knowledge Graph G is a set of quadruples (s, r, o, t) with entities s, o ∈ E, relation r ∈ R, and timestamp t ∈ T with T = {1, ..., n}, n ∈ N+. More precisely, E is the set of entities, R is the set of possible relations, and T is the set of timesteps. The semantic meaning of a quadruple (s, r, o, t) is that s is in relation r to o at time t. Alternatively, we may refer to this quadruple as a temporal triple that holds during timestep t. This allows us to talk about the triple (s, r, o) and its occurrence and recurrence at certain timesteps. In the following, we use a running example G, where G is a TKG in the soccer domain shown in Figure 1. G contains triples from the years 2001 to 2009, which we map to the indices 1 to 9.

Temporal Knowledge Graph Forecasting is the task of predicting quadruples for future timesteps t+ given a history of quadruples G, with t+ > n and t+ ∈ N+. In this work we focus on entity forecasting, that is, predicting object or subject entities for queries (s, r, ?, t+) or (?, r, o, t+). Akin to KG completion, TKG forecasting is approached as a ranking task [Han, 2022]: for a given query, e.g. (s, r, ?, t+), methods rank all entities in E using a scoring function, assigning a plausibility score to each quadruple. In the following, we design several variants of a simple scoring function f that assigns a score in R+ to a quadruple at a future timestep t+ given a Temporal Knowledge Graph G, i.e., f((s, r, o, t+), G) ↦ R+. All variants of our scoring function are simple heuristics to solve the TKG forecasting task, based on the principle that something that happened in the past will happen again in the future.

3.2 Strict Recurrency Baseline

The first family of recurrency baselines checks whether the triple that we want to predict at timestep t+ has already been observed before. The simplest baseline of this family is the following scoring function φ1:

\[
\phi_1((s, r, o, t^+), G) =
\begin{cases}
1, & \text{if } \exists k \text{ with } (s, r, o, k) \in G\\
0, & \text{otherwise.}
\end{cases}
\tag{1}
\]

If we apply φ1 to the set of triples in Figure 1 to compute the scores for 2010, we get the following outcome (using pf to abbreviate playsFor):

φ1((marta, pf, vasco-da-gamah, 10), G) = 1
φ1((marta, pf, santa-cruz, 10), G) = 1
φ1((marta, pf, umea-ik, 10), G) = 1
φ1((marta, pf, los-angeles-sol, 10), G) = 1

This scoring function suffers from the problem that it does not take the temporal distance into account, which is highly relevant for the relation of playing for a club: it is far more likely that Marta will continue to play for Los Angeles Sol than sign a contract with a previous club.
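To make this concrete, here is a minimal Python sketch of φ1 over the running example; the set-of-quadruples encoding and the function names are our own illustration, not the authors' released implementation.

```python
# Figure 1 as a set of (subject, relation, object, timestep) quadruples.
G = {
    ("marta", "playsFor", "vasco-da-gamah", 1),
    ("marta", "playsFor", "vasco-da-gamah", 2),
    ("marta", "playsFor", "santa-cruz", 3),
    ("marta", "playsFor", "santa-cruz", 4),
    ("marta", "playsFor", "umea-ik", 5),
    ("marta", "playsFor", "umea-ik", 6),
    ("marta", "playsFor", "umea-ik", 7),
    ("marta", "playsFor", "umea-ik", 8),
    ("marta", "playsFor", "los-angeles-sol", 9),
}

def phi_1(s, r, o, t_plus, graph):
    """Equation 1: score 1 if the triple (s, r, o) was observed at any earlier timestep."""
    return 1.0 if any(
        k < t_plus for (s_, r_, o_, k) in graph if (s_, r_, o_) == (s, r, o)
    ) else 0.0

print(phi_1("marta", "playsFor", "umea-ik", 10, G))       # 1.0
print(phi_1("marta", "playsFor", "fc-barcelona", 10, G))  # 0.0, never observed (hypothetical club)
```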
To address this problem, we introduce a time weighting mechanism that assigns higher scores to more recent triples. Defining a generic function ∆ : N+ × N+ → R that takes the query timestep t+ and a previous timestep k in G and returns the weight of the triple, we can define strict recurrency scoring functions as follows:

\[
\phi_{\Delta}((s, r, o, t^+), G) =
\begin{cases}
\Delta(t^+, \max\{k \mid (s, r, o, k) \in G\}), & \text{if } \exists k \text{ with } (s, r, o, k) \in G\\
0, & \text{otherwise.}
\end{cases}
\tag{2}
\]

For instance, using ∆0(t+, k) = k/t+ with k < t+ produces:

φ∆0((marta, pf, vasco-da-gamah, 10), G) = 0.2
φ∆0((marta, pf, santa-cruz, 10), G) = 0.4
φ∆0((marta, pf, umea-ik, 10), G) = 0.8
φ∆0((marta, pf, los-angeles-sol, 10), G) = 0.9,

which already makes more sense: the latest club that a person played for will always receive the highest score.

Interestingly, we can establish an equivalence class among a subset of the functions φ∆, and we will use this fact in our experiments. As long as we solely focus on ranking results, two scoring functions are equivalent if they define the same partial order over all possible temporal predictions.

Definition 1. Two scoring functions φ and φ′ are ranking-equivalent if for any pair of predictions p = (s, r, o, t+) and p′ = (s′, r′, o′, t+) we have that φ(p, G) > φ(p′, G) ⟺ φ′(p, G) > φ′(p′, G).

The next result states that we do not need to search for an optimal time weighting function ∆(t+, k) if we choose it to be strictly monotonically increasing with respect to k, as all such functions belong to the same equivalence class.

Proposition 1. Scoring functions φ∆ and φ∆′ are ranking-equivalent iff, for all k1, k2, t+ such that k1 < k2 < t+, it holds that ∆(t+, k1) < ∆(t+, k2) and ∆′(t+, k1) < ∆′(t+, k2).

Proposition 1 follows from the application of Definition 1. Therefore, the functions φ∆ characterized by a ∆ that is strictly monotonically increasing in k are all ranking-equivalent.

While φ∆ works well to predict the club that a person will play for, there are relations with different temporal characteristics. An example is a relation that expresses that a soccer club wins a certain competition. In Figure 2, we extend our TKG with temporal triples using the relation wins.

(fc-bayern-munich, wins, bundesliga, 1)
(borussia-dortmund, wins, bundesliga, 2)
(fc-bayern-munich, wins, bundesliga, 3)
(werder-bremen, wins, bundesliga, 4)
(fc-bayern-munich, wins, bundesliga, 5)
(fc-bayern-munich, wins, bundesliga, 6)
(vfb-stuttgart, wins, bundesliga, 7)
(fc-bayern-munich, wins, bundesliga, 8)
(vfl-wolfsburg, wins, bundesliga, 9)

Figure 2: Clubs winning the Bundesliga from 2001 to 2009.

The relation wins seems to follow a different pattern compared to the previous example. Indeed, applying φ∆0 to predict the 2010 winner of the Bundesliga would not reflect the fact that FC Bayern Munich is the club with the highest ratio of won championships, and year 9 might just have been a lucky one for VFL Wolfsburg. The frequency of wins could be considered a better indicator for a scoring function:

\[
\psi_1((s, r, o, t^+), G) = |\{k \mid (s, r, o, k) \in G\}| \,/\, t^+
\tag{3}
\]

Based on this scoring function, the club that has won the most titles, Bayern Munich, receives the highest score of 0.5, while all other clubs receive a score of 0.1.
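The contrast between recency and frequency is easy to see in code; the following hedged sketch (reusing G from the snippet above, with illustrative names) implements φ∆ with ∆0(t+, k) = k/t+ and the frequency score ψ1 on a quadruple encoding of Figure 2.

```python
# Figure 2 as a quadruple set, reused in later snippets.
G_wins = {
    ("fc-bayern-munich", "wins", "bundesliga", 1),
    ("borussia-dortmund", "wins", "bundesliga", 2),
    ("fc-bayern-munich", "wins", "bundesliga", 3),
    ("werder-bremen", "wins", "bundesliga", 4),
    ("fc-bayern-munich", "wins", "bundesliga", 5),
    ("fc-bayern-munich", "wins", "bundesliga", 6),
    ("vfb-stuttgart", "wins", "bundesliga", 7),
    ("fc-bayern-munich", "wins", "bundesliga", 8),
    ("vfl-wolfsburg", "wins", "bundesliga", 9),
}

def phi_delta(s, r, o, t_plus, graph, delta):
    """Equation 2: weight of the most recent past occurrence of (s, r, o), else 0."""
    past = [k for (s_, r_, o_, k) in graph if (s_, r_, o_) == (s, r, o) and k < t_plus]
    return delta(t_plus, max(past)) if past else 0.0

def psi_1(s, r, o, t_plus, graph):
    """Equation 3: number of past occurrences of (s, r, o), normalized by t+."""
    return sum((s_, r_, o_) == (s, r, o) for (s_, r_, o_, k) in graph) / t_plus

delta_0 = lambda t_plus, k: k / t_plus  # the example weighting from the text

print(phi_delta("marta", "playsFor", "umea-ik", 10, G, delta_0))              # 0.8
print(phi_delta("vfl-wolfsburg", "wins", "bundesliga", 10, G_wins, delta_0))  # 0.9, most recent winner
print(psi_1("fc-bayern-munich", "wins", "bundesliga", 10, G_wins))            # 0.5, most frequent winner
```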
As done earlier, we now generalize the formulation of ψ1 to ψ∆ using a weighting function ∆(t+, k) in which triples that occurred more recently receive a higher weight:

\[
\psi_{\Delta}((s, r, o, t^+), G) = \frac{\sum_{i \in \{k \mid (s, r, o, k) \in G\}} \Delta(t^+, i)}{\sum_{i=1}^{n} \Delta(t^+, i)}.
\tag{4}
\]

Again, we apply the new scoring function to our example, shortening the names of the clubs and abbreviating bundesliga as bl:

ψ∆0((dortmund, wins, bl, 10), G) = 0.2/4.5 ≈ 0.04
ψ∆0((bremen, wins, bl, 10), G) = 0.4/4.5 ≈ 0.09
ψ∆0((stuttgart, wins, bl, 10), G) = 0.7/4.5 ≈ 0.15
ψ∆0((munich, wins, bl, 10), G) = 2.3/4.5 ≈ 0.51
ψ∆0((wolfsburg, wins, bl, 10), G) = 0.9/4.5 ≈ 0.2

It is worth noting that, for a restricted family of weighting functions ∆′(t, k), we can achieve ranking equivalence between the scoring functions ψ∆′ and φ∆ with a strictly increasing ∆(t, k). More specifically, if we make ∆′(t, k) parametric, then ψ∆′ can generalize the family of scoring functions φ∆. Consider the parameterized function ∆λ(t+, k) = 2^(λ(k−t+)) with λ ≥ 0, where λ acts as a decay factor: the higher λ, the stronger the decay effect. In particular, if we set λ = 1, we can enforce that a time point k always receives a higher weight than the sum of all previous time points 1, ..., k−1; this means that ψ∆λ with λ = 1 and φ∆ are ranking-equivalent.

Proposition 2. For λ ≥ 1, ∆λ(t+, k) = 2^(λ(k−t+)), and any strictly increasing time weighting function ∆, the scoring functions φ∆ and ψ∆λ are ranking-equivalent.

Proposition 2 follows directly from the fact that Σ_{i=k+1}^{n} 1/2^i < 1/2^k for any n > k ∈ N+. Conversely, we obtain ranking equivalence between ψ1 and ψ∆λ if we set λ = 0.

Proposition 3. The scoring functions ψ1 and ψ∆λ are ranking-equivalent if we set λ = 0.

Proposition 3 follows directly from 2^0 = 1 and the definition of ψ1 in Equation 3. Propositions 2 and 3 help us interpret our experimental results, as they indicate that different settings of λ result in a scoring function that is situated between ψ1 and φ∆. We treat λ as a relation-specific hyperparameter in our experiments, meaning we select a different λr for each relation r. Since relations are independent of each other, each λr can be optimized independently.
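To make Propositions 2 and 3 tangible, the following hedged sketch (our own naming, reusing G_wins from the previous snippet) implements ψ∆λ with ∆λ(t+, k) = 2^(λ(k−t+)); with λ = 0 the induced ranking matches the frequency score ψ1, while with λ = 1 it matches the recency-driven strict baseline.

```python
def psi_delta(s, r, o, t_plus, graph, delta, n):
    """Equation 4: weighted frequency of (s, r, o), normalized by the total weight mass."""
    occurrences = [k for (s_, r_, o_, k) in graph if (s_, r_, o_) == (s, r, o)]
    return (sum(delta(t_plus, k) for k in occurrences)
            / sum(delta(t_plus, i) for i in range(1, n + 1)))

def delta_lam(lam):
    """Decay weighting Delta_lambda(t+, k) = 2 ** (lam * (k - t+))."""
    return lambda t_plus, k: 2.0 ** (lam * (k - t_plus))

clubs = ["fc-bayern-munich", "vfl-wolfsburg", "borussia-dortmund"]
for lam in (0.0, 1.0):
    ranked = sorted(clubs, reverse=True,
                    key=lambda c: psi_delta(c, "wins", "bundesliga", 10,
                                            G_wins, delta_lam(lam), n=9))
    print(lam, ranked)
# lam = 0.0 -> fc-bayern-munich first (pure frequency, Proposition 3)
# lam = 1.0 -> vfl-wolfsburg first (recency dominates, Proposition 2)
```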
3.3 Relaxed Recurrency Baseline

So far, our scoring functions were based on a strict application of the principle of recurrency. However, this approach fails to score a triple that has never been seen before, and we need to account for queries of this nature: imagine a young player appearing for the first time in a professional club. Thus, we introduce a relaxed variant of the baseline. Instead of looking for exact matches of the triple in previous timesteps, which would not work for unseen triples, we are interested in how often parts of the triple have been observed in the data.

When asked to score the query (s, r, ?, t+), we compute the normalized frequency with which the object o has been in relation r with any subject s′:

\[
\overrightarrow{\xi}((s, r, o, t^+), G) = \frac{|\{(s', k) \mid (s', r, o, k) \in G\}|}{|\{(s', o', k) \mid (s', r, o', k) \in G\}|}
\tag{5}
\]

Analogously, we denote with ←ξ((s, r, o, t+), G) the relaxed baseline used to score queries of the form (?, r, o, t+). In the following, we omit the arrow above ξ and use the directed version depending on the type of query without explicit reference to the direction.

Let us revisit the example of Figure 1 and apply ξ to score triples never seen before. We can now assign non-zero scores to the clubs that Aitana Bonmati, who never appears in G, will likely play for in 2010:

ξ((bonmati, pf, vasco-da-gamah, 10), G) = 0.22
ξ((bonmati, pf, santa-cruz, 10), G) = 0.22
ξ((bonmati, pf, umea-ik, 10), G) = 0.44
ξ((bonmati, pf, los-angeles-sol, 10), G) = 0.11

While we also report results for ξ on its own, we are mainly interested in its combination with the Strict Recurrency Baseline, where we expect it to fill gaps and resolve ties. For simplicity, we do not introduce a weighted version of this baseline, which avoids an extra hyperparameter.

3.4 Combined Recurrency Baseline

We conclude the section with a linear combination of the Strict Recurrency Baseline ψ∆λ and the Relaxed Recurrency Baseline ξ. In particular (omitting λ to keep the notation uncluttered):

\[
\psi_{\Delta\xi}((s, r, o, t^+), G) = \alpha \cdot \psi_{\Delta}((s, r, o, t^+), G) + (1 - \alpha) \cdot \xi((s, r, o, t^+), G),
\tag{6}
\]

where α ∈ [0, 1] is another hyperparameter. Similar to λ, we select a different αr for each relation r. In the following, we refer to this baseline as the Combined Recurrency Baseline.
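Equations 5 and 6 admit an equally small sketch; the snippet below (again our own illustration, reusing psi_delta, delta_lam, and G from the earlier snippets) scores the unseen-player example from Section 3.3, where the strict part contributes nothing and ξ alone breaks the tie.

```python
def xi_tail(s, r, o, t_plus, graph):
    """Equation 5 for tail queries (s, r, ?, t+): frequency of o as object of
    relation r, normalized over all facts with relation r."""
    matches = {(s_, k) for (s_, r_, o_, k) in graph if r_ == r and o_ == o}
    total = {(s_, o_, k) for (s_, r_, o_, k) in graph if r_ == r}
    return len(matches) / len(total) if total else 0.0

def combined(s, r, o, t_plus, graph, delta, n, alpha):
    """Equation 6: Combined Recurrency Baseline."""
    return (alpha * psi_delta(s, r, o, t_plus, graph, delta, n)
            + (1 - alpha) * xi_tail(s, r, o, t_plus, graph))

# Aitana Bonmati never occurs in G: the strict part is 0, but xi still
# ranks umea-ik highest (4 of the 9 playsFor facts have it as object).
print(xi_tail("bonmati", "playsFor", "umea-ik", 10, G))  # ~0.44
print(combined("bonmati", "playsFor", "umea-ik", 10, G,
               delta_lam(1.0), 9, 0.99))                  # small but non-zero
```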
4 Experimental Setup

This section describes our experimental setup and provides information on how to reproduce our experiments¹. We rely on the unified evaluation protocol of [Gastinger et al., 2023] and report results for single-step prediction. We report results for the multi-step setting in the supplementary material².

¹ https://github.com/nec-research/recurrency_baseline_tkg
² Supplementary material: https://github.com/nec-research/recurrency_baseline_tkg/blob/master/supplementary_material.pdf

4.1 Hyperparameters

We select the best hyperparameters by evaluating performance on the validation set as follows: first, we select λr for every r ∈ R from 14 values in total, λr ∈ Lr = {0, ..., 1.0001}, for ψ∆λ. Then, after fixing the best λr for every r ∈ R, we select αr for every r ∈ R from 13 values, αr ∈ Ar = {0, ..., 1}, leading to a total of 27 evaluated combinations per relation.

4.2 Methods for Comparison

We compare our baselines to 11 of the 17 methods described in Section 2. Two of these 17 methods run only in the multi-step setting; see the comparisons to them in the supplementary material. Further, for four methods we find discrepancies in the evaluation protocol and thus exclude them from our comparisons³. Unless otherwise stated, we report the results for these 11 methods based on the evaluation protocol of [Gastinger et al., 2023]. For TiRGN, we report the results of the original paper and do a sanity check of the released code. We do the same for L2TKG, LogE-Net, and TECHS, but without a sanity check, as their code has not been released.

³ CENET, RETIA, and CluSTeR do not report results in the time-aware filter setting. ALRE-IR does not report results on WIKI, YAGO, and GDELT, and uses different dataset versions for ICEWS14 and ICEWS18.

4.3 Dataset Information

We assess the performance of the recurrency baselines on five datasets [Gastinger et al., 2023; Li et al., 2021b], namely WIKI, YAGO, ICEWS14, ICEWS18, and GDELT⁴. Table 1 shows characteristics such as the number of entities and quadruples, and it reports the timestep-based data splitting (short: #Tr/Val/Te TS) that all methods are evaluated against. In addition, we compute the fraction of test temporal triples (s, r, o, t+) for which there exists a k < t+ such that (s, r, o, k) ∈ G; we refer to this measure as the recurrency degree (Rec). Similarly, we compute the fraction of test temporal triples (s, r, o, t+) for which it holds that (s, r, o, t+ − 1) ∈ G, which we call the direct recurrency degree (DRec). Note that Rec defines an upper bound on the performance of the Strict Recurrency Baseline, while DRec informs about the test triples that have, from our baselines' perspective, a trivial solution. On YAGO and WIKI, both measures are higher than 85%, meaning that the application of the recurrency principle is likely to work very well there.

⁴ See the supplementary material for additional dataset information.

Dataset   #Nodes  #Rels  #Train   #Valid  #Test   Time Int.  #Tr/Val/Te TS  DRec [%]  Rec [%]
ICEWS14   7128    230    74845    8514    7371    24 hours   304/30/31      10.5      52.4
ICEWS18   23033   256    373018   45995   49545   24 hours   239/30/34      10.8      50.4
GDELT     7691    240    1734399  238765  305241  15 min.    2303/288/384   2.2       64.9
YAGO      10623   10     161540   19523   20026   1 year     177/5/6        92.7      92.7
WIKI      12554   24     539286   67538   63110   1 year     210/11/10      85.6      87.0

Table 1: We report some statistics of the datasets, the timestep interval, and the specifics of the data splitting. We also include the recurrency degree (Rec) and the direct recurrency degree (DRec). Please refer to the text for a more detailed description.

4.4 Evaluation Metrics

As is common in link prediction evaluations, we focus on two metrics: the Mean Reciprocal Rank (MRR), i.e., the average of the reciprocals of the ranks of the first relevant item in a list of results, and Hits at 10 (H@10), i.e., the proportion of queries for which at least one relevant item is among the top 10 ranked results. Following [Gastinger et al., 2023], we report the time-aware filtered MRR and H@10.
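Both dataset measures and both metrics are straightforward to compute; the following is a hedged sketch with our own function names and data layout, not the evaluation code of [Gastinger et al., 2023].

```python
def recurrency_degrees(all_quads, test_quads):
    """Rec and DRec as in Table 1. all_quads: list of every (s, r, o, t)
    quadruple of the dataset; test_quads: the test split."""
    timesteps = {}
    for s, r, o, k in all_quads:
        timesteps.setdefault((s, r, o), set()).add(k)
    rec = sum(any(k < t for k in timesteps.get((s, r, o), ()))
              for (s, r, o, t) in test_quads) / len(test_quads)
    drec = sum((t - 1) in timesteps.get((s, r, o), ())
               for (s, r, o, t) in test_quads) / len(test_quads)
    return rec, drec

def rank_metrics(ranks, k=10):
    """MRR and Hits@k (Section 4.4) from the time-aware filtered rank that
    each test query's correct answer obtained."""
    mrr = sum(1.0 / r for r in ranks) / len(ranks)
    hits = sum(r <= k for r in ranks) / len(ranks)
    return mrr, hits
```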
5 Experimental Results

This section reports our quantitative and qualitative results, illustrating how our baselines help to gain a deeper understanding of the field. We list runtimes in the supplementary material.

5.1 Global Results

                 GDELT          YAGO           WIKI           ICEWS14        ICEWS18
                 MRR    H@10    MRR    H@10    MRR    H@10    MRR    H@10    MRR    H@10
L2TKG†           20.5   35.8    –      –       –      –       47.4   71.1    33.4   55.0
LogE-Net†        –      –       –      –       –      –       43.7   63.7    32.7   53.0
TECHS†           –      –       89.2   92.4    76.0   82.4    d.d.v. d.d.v.  30.9   49.8
TiRGN            21.7   37.6    88.0   92.9    81.7   87.1    44.0   63.8    33.7   54.2
TRKG             21.5   37.3    71.5   79.2    73.4   76.2    27.3   50.8    16.7   35.4
RE-GCN           19.8   33.9    82.2   88.5    78.7   84.7    42.1   62.7    32.6   52.6
xERTE            18.9   32.0    87.3   91.2    74.5   80.1    40.9   57.1    29.2   46.3
TLogic           19.8   35.6    76.5   79.2    82.3   87.0    42.5   60.3    29.6   48.1
TANGO            19.2   32.8    62.4   67.8    50.1   52.8    36.8   55.1    28.4   46.3
Timetraveler     20.2   31.2    87.7   91.2    78.7   83.1    40.8   57.6    29.1   43.9
CEN              20.4   35.0    82.7   89.4    79.3   84.9    41.8   60.9    31.5   50.7
Relaxed (ξ)      14.2   23.6    5.2    10.7    14.3   25.4    14.4   28.6    11.6   22.0
Strict (ψ∆)      23.7   38.3    90.7   92.8    81.6   87.0    36.3   48.4    27.8   41.4
Combined (ψ∆ξ)   24.5   39.8    90.9   93.0    81.5   87.1    37.2   51.8    28.7   43.7

Table 2: Experimental results. An entry † means that the authors have not released their code, and thus we could not reproduce their results; an entry – means that the related work does not report results on this dataset; and an entry "d.d.v." means that it reports results on a different dataset version.

Table 2 (lower area) shows the MRR and H@10 results for the Relaxed (ξ), the Strict (ψ∆), and the Combined Recurrency Baseline (ψ∆ξ). For all datasets, with one minor discrepancy, the Combined Recurrency Baseline performs better than the strict and the relaxed variants. However, the Strict Recurrency Baseline is not much worse: for both metrics, its difference to the Combined Recurrency Baseline is never more than one percentage point. We observe that, while ξ scores an MRR between 5% and 15% on its own, when combined with ψ∆ (thus obtaining ψ∆ξ) it can grant up to 0.9% of absolute improvement. As described in Section 3, its main role is to fill gaps and resolve ties. The results confirm our intuition.

Interestingly, the results for ψ∆ξ on all datasets reflect the reported values of the recurrency degree and the direct recurrency degree (see Table 1): for YAGO and WIKI (Rec and DRec > 85%), our baseline yields high MRRs (> 80%), while in the other cases the values stay below 40%.

When compared to results from related work (upper area of Table 2), the Combined Recurrency Baseline as well as the Strict Recurrency Baseline yield the highest test scores for two out of five datasets (GDELT and YAGO) and the third-highest test scores for the WIKI dataset. This indicates that most models from related work seem unable to learn and consistently apply a simple forecasting strategy that yields high gains. In particular, we highlight the significant difference between the Combined Recurrency Baseline and the runner-up methods for GDELT (a relative change of +12.9%). Results for ICEWS14 and ICEWS18, instead, suggest that more complex dependencies need to be captured on these datasets: while two methods (TRKG and TANGO) perform worse than our baseline, the majority achieves better results. In summary, for two out of five datasets, none of the methods proposed so far can match the results achieved by a combination of two very naïve baselines. This result is rather surprising, and it raises doubts about the predictive quality of current methods.

5.2 Per-Relation Analysis

We conduct a detailed per-relation analysis and focus on two datasets: ICEWS14, since our baseline performed worse there, and YAGO, for the opposite reason. We compare the Combined Recurrency Baseline to the four methods that performed best on the respective dataset, considering the seven methods evaluated under the evaluation protocol of [Gastinger et al., 2023]⁵. For clarity, we adopt the following notation to denote a relation and its prediction direction: [relation] (head) signifies predictions in head direction, corresponding to queries of the form (?, r, o, t+); [relation] (tail) denotes predictions in tail direction, i.e., (s, r, ?, t+).

⁵ Since we could compute prediction scores for every query.

ICEWS14 In Figure 3(a), we focus on the nine most frequent relations. For each relation, one or multiple methods reach MRRs higher than the Combined Recurrency Baseline, with an absolute offset in MRR of approximately 3% to 7% between the best-performing method and our baseline. This indicates that it might be necessary to capture patterns going beyond the simple recurrency principle. However, even for ICEWS14, we see three relations where some methods produce worse results than the Combined Recurrency Baseline. For two of these (Make a visit, Host a visit), RE-GCN and CEN attain the lowest MRR. For the third relation (Arrest, detain, or charge with legal action), TLogic and xERTE have the lowest MRR. This implies that, despite having better aggregated MRRs, the methods display distinct weaknesses and do not learn to model recurrency for all relations.

YAGO Figure 3(b), instead, shows two distinct categories of relations. The first category contains relations where most methods demonstrate competitive performance (MRR ≥ 85%). For all of them, the Combined Recurrency Baseline attains the highest scores. Thus, the capabilities of related work, like detecting patterns across different relations or across multiple hops in the KG, do not seem to be beneficial for these relations, and a simpler inductive bias might be preferred. The second category contains relations where all methods perform poorly (MRR ≤ 20%).
Due to the dataset's limited information, reliably predicting prize winners or deaths is unfeasible. For these reasons, we expect no significant improvement in future work on YAGO beyond the results of our baseline. However, YAGO still provides value to the research field: it can be used to inspect the methods' capabilities to identify and predict simple recurring facts and, if this is not the case, to pinpoint their deficiencies. Thus, YAGO can also be seen as a dataset for sanity checks. All analysed methods from related work fail this sanity check: none of them can exploit the simple recurrency pattern for all relations.

The main disparity in overall MRR between the Combined Recurrency Baseline and related work can be attributed to two specific relations: playsFor (head, tail) and isAffiliatedTo (head). Queries for these relations make up almost 50% of all test queries. More specifically, Timetraveler exhibits limitations with isAffiliatedTo (head) and playsFor (head); xERTE shows its greatest shortcomings for isAffiliatedTo (head); and RE-GCN and CEN exhibit limitations with the relation playsFor in both directions. These findings highlight the specific weaknesses of each method that comparisons with baselines can reveal, thus allowing for targeted improvements.

5.3 Failure Analysis

In the following, we analyse some example queries where the recurrency principle offers an unambiguous solution which, however, is not chosen by a specific method. Following Section 5.2, we focus on YAGO and the same four models. We base our analysis on the insights that YAGO has a very high direct recurrency degree and that predicting facts based on strict recurrency with steep time decay leads to very high scores: the MRR of φ∆ is 90.7%. For each model, we count for how many queries the following conditions are fulfilled, given the test query (s, r, ?, t) with correct answer o: (i) (s, r, o, t−1) ∈ G; (ii) the model proposed o′ ≠ o as its top candidate; (iii) there exists no k with (s, r, o′, k) ∈ G. If these conditions are fulfilled, there is strong evidence for o due to recurrency, while (s, r, o′) has never been observed in the past. We conduct the same analysis for head queries (?, r, o, t).
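The counting procedure just described can be sketched as follows; the function and argument names are hypothetical, and the authors' exact bookkeeping may differ.

```python
def strong_recurrency_violations(model_top1, test_queries, graph):
    """Count test queries meeting conditions (i)-(iii) of the failure analysis.
    model_top1: dict mapping a tail query (s, r, t) to the model's top-ranked object.
    test_queries: iterable of (s, r, o, t) test quadruples, o being the correct answer.
    graph: set of all known (s, r, o, t) quadruples."""
    triples = {(s, r, o) for (s, r, o, _) in graph}
    count = 0
    for s, r, o, t in test_queries:
        o_pred = model_top1.get((s, r, t))
        if (
            (s, r, o, t - 1) in graph           # (i) direct recurrency evidence for o
            and o_pred is not None
            and o_pred != o                     # (ii) model prefers a different candidate
            and (s, r, o_pred) not in triples   # (iii) predicted triple never observed
        ):
            count += 1
    return count
```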
For each model, we randomly select some of these queries⁶ and describe the mistakes made.

⁶ Summing up over head and tail queries, we find 34 queries that fulfil all three conditions for Timetraveler, 149 for xERTE, 286 for CEN, and 525 for RE-GCN.

Timetraveler Surprisingly, Timetraveler sometimes suggests top candidates that are incompatible with the domain and range of the given relation, even when all of the above conditions are met. Here are two examples for the "playsFor" (pf) relation, where the proposed candidates are marked with a question mark:

(?=spain-national-u23, pf, lierse-sk, 10)
(?=baseball-ground, pf, derby-county-fc, 10)

The reasons behind Timetraveler's predictions, despite the availability of reasonable candidates according to the recurrency principle, fall outside the scope of this paper.

xERTE For xERTE, we detect a very clear pattern that explains the mistakes. In 147 out of 149 cases, xERTE predicts a candidate c as subject (object) when c was given as object (subject). This happens in nearly all cases for the symmetric relation isMarriedTo, resulting in the prediction of triples such as (john, isMarriedTo, john). This error pattern bears a striking resemblance to issues observed in non-temporal KG completion in [Meilicke et al., 2018], where it has already been argued that some models perform surprisingly badly on symmetric relations.

CEN and RE-GCN Both CEN and RE-GCN exhibit distinct behavior. Errors frequently occur with the "playsFor" relation, particularly in tail prediction. In all analysed examples, the types (soccer players and soccer clubs) of the incorrectly predicted candidates were correct. Beyond this, we cannot find any systematic error pattern or explanation for the erroneous predictions. It seems that both models are not able to learn that the playsFor relation follows the simple regularity of strict recurrency, even though this regularity dominates the training set.

These examples highlight significant insights into the current weaknesses of each method. Future research can leverage these insights to enhance the affected models.

Figure 3: Test MRRs for each relation and direction ("t" means tail and "h" head, respectively) for (a) ICEWS14 (top), comparing TLogic, CEN, RE-GCN, xERTE, and the Recurrency Baseline on the relations Make_statement, Consult, Make_an_appeal_or_request, Express_intent_to_meet_or_negotiate, Make_a_visit, Host_a_visit, Arrest,_detain,_or_charge_with_legal_action, Praise_or_endorse, and Criticize_or_denounce, and for (b) YAGO (bottom), comparing Timetraveler, CEN, RE-GCN, xERTE, and the Recurrency Baseline on the relations worksAt, playsFor, hasWonPrize, isMarriedTo, owns, graduatedFrom, diedIn, isAffiliatedTo, and created. Colors indicate the number of queries for a relation and its direction in the test set.

5.4 Parameter Study

In the following, we summarize our findings regarding the influence of the hyperparameters on the baseline predictions. Detailed results are provided in the supplementary material.

Influence of Hyperparameter Values We analyze the impact of λ and α on the overall MRR. Notably, λ significantly affects the MRR, with test results ranging, e.g., from 12.1% to 23.7% for GDELT across different λ values. The optimal λ varies across datasets.
This underlines the influence of time decay: predicting repetitions of the most recent facts is most beneficial for YAGO and WIKI, while also considering the frequency of previous facts works better for the other datasets. This distinction is also mirrored in the direct recurrency degree, which is notably high for YAGO and WIKI and thus indicates the importance of the most recent facts. Additionally, setting α to a high value (α ≥ 0.99) yields the best aggregated test results across all datasets, indicating the benefit of emphasizing predictions from the Strict Recurrency Baseline and using the Relaxed Recurrency Baseline to resolve ties and rank unseen triples.

Impact of Relaxed Recurrency Baseline Further, to understand the impact of the Relaxed Recurrency Baseline (ξ) on the combined baseline, we compare the MRRs of the strict and relaxed baselines on a per-relation basis. We find that, even though the aggregated improvement of ψ∆ξ over ψ∆ is only marginal (< 1%) for each dataset, for some relations where the strict baseline fails, the impact of the relaxed baseline is meaningful: for example, on the dataset YAGO and the relation diedIn (tail), the Strict Recurrency Baseline yields a very low MRR of 0.7%, whereas the Relaxed Recurrency Baseline yields an MRR of 17.5%. Overall, this highlights the influence of the hyperparameter values, the differences between datasets, and the advantage of combining baselines on a per-relation basis.

6 Conclusion

We are witnessing a notable growth of scientific output in the field of TKG forecasting. However, a reliable and rigorous comparison with simple baselines, which can help us distinguish real from fictitious progress, has been missing so far. Inspired by real-world examples, this work filled this gap by designing an intuitive baseline that exploits the straightforward concept of fact recurrency. In summary, despite its inability to grasp complex dependencies in the data, the baseline provides a better or competitive alternative to existing models on three out of five common benchmarks. This result is surprising and raises doubts about the predictive quality of the proposed methods. Once more, it stresses the importance of testing naïve baselines as a key component of any TKG forecasting benchmark: should a model fail where a baseline succeeds, its predictive capability should be subject to critical scrutiny. By conducting critical and detailed analyses, we identified limitations of existing models, such as the prediction of candidates with incompatible types. We hope that our work will foster awareness of the necessity of simple baselines in the future evaluation of TKG methods.